Dataset schema: feature name, type, and the observed length range or number of distinct classes.

| Column | Type | Values |
|---|---|---|
| title | string | length 4–295 |
| pmid | string | length 8–8 |
| background_abstract | string | length 12–1.65k |
| background_abstract_label | string | 12 classes |
| methods_abstract | string | length 39–1.48k |
| methods_abstract_label | string | length 6–31 |
| results_abstract | string | length 65–1.93k |
| results_abstract_label | string | 10 classes |
| conclusions_abstract | string | length 57–1.02k |
| conclusions_abstract_label | string | 22 classes |
| mesh_descriptor_names | sequence | |
| pmcid | string | length 6–8 |
| background_title | string | length 10–86 |
| background_text | string | length 215–23.3k |
| methods_title | string | length 6–74 |
| methods_text | string | length 99–42.9k |
| results_title | string | length 6–172 |
| results_text | string | length 141–62.9k |
| conclusions_title | string | length 9–44 |
| conclusions_text | string | length 5–13.6k |
| other_sections_titles | sequence | |
| other_sections_texts | sequence | |
| other_sections_sec_types | sequence | |
| all_sections_titles | sequence | |
| all_sections_texts | sequence | |
| all_sections_sec_types | sequence | |
| keywords | sequence | |
| whole_article_text | string | length 6.93k–126k |
| whole_article_abstract | string | length 936–2.95k |
| background_conclusion_text | string | length 587–24.7k |
| background_conclusion_abstract | string | length 936–2.83k |
| whole_article_text_length | int64 | 1.3k–22.5k |
| whole_article_abstract_length | int64 | 183–490 |
| other_sections_lengths | sequence | |
| num_sections | int64 | 3–28 |
| most_frequent_words | sequence | |
| keybert_topics | sequence | |
| annotated_base_background_abstract_prompt | string | 1 class |
| annotated_base_methods_abstract_prompt | string | 1 class |
| annotated_base_results_abstract_prompt | string | 1 class |
| annotated_base_conclusions_abstract_prompt | string | 1 class |
| annotated_base_whole_article_abstract_prompt | string | 1 class |
| annotated_base_background_conclusion_abstract_prompt | string | 1 class |
| annotated_keywords_background_abstract_prompt | string | length 28–460 |
| annotated_keywords_methods_abstract_prompt | string | length 28–701 |
| annotated_keywords_results_abstract_prompt | string | length 28–701 |
| annotated_keywords_conclusions_abstract_prompt | string | length 28–428 |
| annotated_keywords_whole_article_abstract_prompt | string | length 28–701 |
| annotated_keywords_background_conclusion_abstract_prompt | string | length 28–428 |
| annotated_mesh_background_abstract_prompt | string | length 53–701 |
| annotated_mesh_methods_abstract_prompt | string | length 53–701 |
| annotated_mesh_results_abstract_prompt | string | length 53–692 |
| annotated_mesh_conclusions_abstract_prompt | string | length 54–701 |
| annotated_mesh_whole_article_abstract_prompt | string | length 53–701 |
| annotated_mesh_background_conclusion_abstract_prompt | string | length 54–701 |
| annotated_keybert_background_abstract_prompt | string | length 100–219 |
| annotated_keybert_methods_abstract_prompt | string | length 100–219 |
| annotated_keybert_results_abstract_prompt | string | length 101–219 |
| annotated_keybert_conclusions_abstract_prompt | string | length 100–240 |
| annotated_keybert_whole_article_abstract_prompt | string | length 100–240 |
| annotated_keybert_background_conclusion_abstract_prompt | string | length 100–211 |
| annotated_most_frequent_background_abstract_prompt | string | length 67–217 |
| annotated_most_frequent_methods_abstract_prompt | string | length 67–217 |
| annotated_most_frequent_results_abstract_prompt | string | length 67–217 |
| annotated_most_frequent_conclusions_abstract_prompt | string | length 71–217 |
| annotated_most_frequent_whole_article_abstract_prompt | string | length 67–217 |
| annotated_most_frequent_background_conclusion_abstract_prompt | string | length 71–217 |
| annotated_tf_idf_background_abstract_prompt | string | length 74–283 |
| annotated_tf_idf_methods_abstract_prompt | string | length 67–325 |
| annotated_tf_idf_results_abstract_prompt | string | length 69–340 |
| annotated_tf_idf_conclusions_abstract_prompt | string | length 83–403 |
| annotated_tf_idf_whole_article_abstract_prompt | string | length 70–254 |
| annotated_tf_idf_background_conclusion_abstract_prompt | string | length 71–254 |
| annotated_entity_plan_background_abstract_prompt | string | length 20–313 |
| annotated_entity_plan_methods_abstract_prompt | string | length 20–452 |
| annotated_entity_plan_results_abstract_prompt | string | length 20–596 |
| annotated_entity_plan_conclusions_abstract_prompt | string | length 20–150 |
| annotated_entity_plan_whole_article_abstract_prompt | string | length 50–758 |
| annotated_entity_plan_background_conclusion_abstract_prompt | string | length 50–758 |
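For orientation, here is a minimal sketch of loading one record with the Hugging Face `datasets` library. The repository id below is a placeholder, not this dataset's actual name.

```python
# Minimal loading sketch. "org/structured-abstracts" is a hypothetical
# repository id -- substitute the real one for this dataset.
from datasets import load_dataset

ds = load_dataset("org/structured-abstracts", split="train")

record = ds[0]
# The per-section fields mirror the IMRaD structure of each article.
for field in ("title", "pmid", "background_abstract", "methods_abstract",
              "results_abstract", "conclusions_abstract"):
    print(field, "->", str(record[field])[:80])
```

The fields that follow constitute one record from this dataset, shown in schema order.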
Out-of-hospital health care costs of childhood food allergy in Australia: A population-based longitudinal study.
36433856
Australia has one of the highest prevalences of childhood food allergy in the world, but there are no Australian data on its economic burden.
BACKGROUND
We used data from the HealthNuts study, a population-based longitudinal study undertaken in Melbourne, Australia. Infants were recruited at age 12 months between September 2007 and August 2011, with food allergy diagnosed using oral food challenges. Health care costs of out-of-hospital services were collected through data linkage to Medicare, Australia's universal health insurance scheme. A two-part model was used to compare costs after controlling for potential confounders.
METHODS
2919 children were included, and 390 (13.4%) had challenge-confirmed food allergy at age 1 year. Compared with children without food allergy, children with food allergy had significantly higher costs for GP visits, specialist visits, tests, and prescriptions in the first four years of life. The total Medicare cost associated with food allergy from age 1 to 4 years was estimated at AUD$889.7 (95% CI $566.1–$1188.3) or €411.0 (95% CI €261.5–€549.0) per child. This projects to an annual Medicare cost of AUD$26.1 million (95% CI $20.1–$32.3 million) or €12.1 million (95% CI €9.3–€14.9 million) based on the population size in 2020.
RESULTS
Childhood food allergy generates considerable Medicare costs for out-of-hospital services in the first four years after birth in Australia. These findings can help anticipate the financial impact of childhood food allergy on the health care system, serve as a costing resource for future economic evaluations, and inform the management of childhood food allergy internationally.
CONCLUSIONS
[ "Aged", "Infant", "Child", "Humans", "Longitudinal Studies", "National Health Programs", "Food Hypersensitivity", "Australia", "Health Care Costs", "Hospitals" ]
9828422
BACKGROUND
The prevalence of hospital presentations for food-related anaphylaxis in children has increased rapidly over the last few decades, making it a growing public health concern worldwide.1 Australia has one of the highest prevalences of childhood food allergy in the world, with the condition affecting more than 10% of infants.2,3 In Australia, 0- to 4-year-old children have been reported to have the highest rates of hospital admission for anaphylaxis caused by food allergy,4,5 as well as the highest rates of adrenaline autoinjector prescription,6 compared with the overall population.

Given this magnitude of childhood food allergy, it is important to understand its cost implications for the healthcare system to inform resource allocation and policy. A few studies have investigated the economic burden of childhood food allergy in the US7 and in some European countries8,9,10 and reported significant direct medical costs for the healthcare system. Despite food allergy being one of the most common chronic health conditions in children,11 no such study has been conducted in Australia.

In this study, using a longitudinal population-based cohort of infants with challenge-confirmed IgE-mediated food allergy outcomes (the HealthNuts study),12 linked with administrative health data, we estimated the out-of-hospital health care costs of childhood food allergy in Australia. The aim was to understand the out-of-hospital costs for children with food allergy, and with different types of food allergy, from birth to 4 years of age, and to estimate the associated economic burden in Australia at the population level.
METHODS
Study design and sample
The HealthNuts study is a single-centre, multi-wave, population-based, longitudinal food allergy study undertaken in Melbourne, Australia. Detailed study methods have been published previously.12 In brief, infants were recruited between September 2007 and August 2011 when presenting for their routine scheduled 12-month vaccinations at immunization clinics. They were invited to undergo skin prick test (SPT) screening to four common food allergens: egg, peanut, sesame, and either cow's milk or shrimp. Any infant with a detectable SPT wheal (≥1 mm) to egg, peanut, or sesame was invited for a food challenge, repeat SPT, and a blood test to measure serum-specific IgE at Melbourne's Royal Children's Hospital. As described previously, food challenges were not undertaken for cow's milk or shrimp (Supplementary material S1).2,3 The cohort was followed up at age 4 years, when parents completed a mailed questionnaire or a short telephone questionnaire to capture information on their child's current allergy status. Children born after 1 June 2007 (the earliest date for which healthcare cost data linkage was available), with a valid linkage to healthcare cost data and nonmissing food allergy information at year 1, were included as the health economics sample. Healthcare service utilization and costs from 1 June 2007 to 1 December 2011 were collected.

Ethical conduct in human research
Approval to conduct the HealthNuts study was obtained from the Victorian State Government Office for Children (reference no. CDF/07/492), the Victorian State Government Department of Human Services (reference no. 10/07), and the Royal Children's Hospital Human Research Ethics Committee (reference no. 27047). Parents provided written informed consent for study participation, and separate written informed consent to access their child's Medicare data for use in this study.
Costs
We investigated direct healthcare costs from a healthcare sector perspective. Out-of-hospital healthcare costs were collected through data linkage to Medicare data, covering study children's Medicare records from birth to December 2011. Medicare is Australia's universal health insurance scheme, which guarantees all Australians access to a wide range of health services at low or no cost. It has two components: the Medicare Benefits Schedule (MBS), which covers medical services (visits to health professionals, diagnostic and pathology services) mainly outside the hospital setting, and the Pharmaceutical Benefits Scheme (PBS), which covers prescription medications. All out-of-hospital MBS and PBS services were included in the analysis, with MBS costs categorized into general practitioner (GP) visits, specialist visits (including visits to pediatricians and all other types of specialists), tests, and all others, and PBS costs categorized into adrenaline autoinjectors, nutrition products (including infant formula), dermatologic medicines (e.g., emollients, antihistamines, and corticosteroids), and all others. Details of the included services and prescription medicines, and the codes used to identify these categories, are provided in Supplementary material S2. Costs were inflated to 2019/20 Australian dollars using the total health price index constructed by the Australian Institute of Health and Welfare.13 Purchasing power parities for 2020 were used to convert Australian dollars into euros.
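As a worked illustration of the cost standardization just described: the index levels and the PPP rate below are placeholders, not the published AIHW or OECD figures; the PPP rate is merely chosen to be consistent with the AUD-to-euro ratios reported in the Results.

```python
# Hypothetical total health price index levels (base: 2019/20 = 100).
HEALTH_PRICE_INDEX = {2008: 85.0, 2020: 100.0}
# Hypothetical 2020 purchasing power parity, AUD per EUR.
AUD_PER_EUR_PPP_2020 = 2.16

def to_2020_aud(cost_aud: float, year: int) -> float:
    """Inflate a historical cost to 2019/20 Australian dollars."""
    return cost_aud * HEALTH_PRICE_INDEX[2020] / HEALTH_PRICE_INDEX[year]

def aud_to_eur(cost_aud_2020: float) -> float:
    """Convert 2019/20 AUD to euros using the 2020 PPP rate."""
    return cost_aud_2020 / AUD_PER_EUR_PPP_2020

cost = to_2020_aud(100.0, 2008)        # a $100 service billed in 2008
print(f"AUD${cost:.2f} in 2019/20 dollars = €{aud_to_eur(cost):.2f}")
```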
Definitions
We identified children with food allergy and other allergic diseases using the HealthNuts data. Food allergy at age 1 was defined as an oral food challenge-confirmed allergy to egg, peanut, or sesame, in the context of an SPT ≥ 2 mm or specific IgE ≥ 0.35 kU/L to that food. Food-allergic children were further classified as having peanut allergy alone, egg allergy alone, or peanut and other food allergy (co-existing egg or sesame allergy). These classifications were selected to reflect the differences in the natural history of peanut vs egg allergy.14 We were not able to analyze sesame allergy separately because of the small number of children with this food allergy type. Since food allergy often co-exists with other allergic diseases such as eczema and asthma, which may contribute to higher healthcare costs among children with food allergy, we also identified eczema and asthma in the analysis. These conditions were defined as a patient-reported history of doctor-diagnosed eczema or asthma at any time in the first 4 years of life.
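As a concrete illustration, here is a minimal sketch of the year-1 classification rule just described. The record layout and field names (challenge_positive, spt_mm, sige_ku_l) are hypothetical, not the study's variable names, and the final fallback group is an assumption for completeness.

```python
from typing import Optional

def classify_food_allergy(child: dict) -> Optional[str]:
    """Classify a child per the HealthNuts year-1 definition sketched above.

    `child` maps each challenged food ("egg", "peanut", "sesame") to a dict
    with hypothetical keys: challenge_positive (bool), spt_mm (float),
    sige_ku_l (float).
    """
    allergic = {
        food
        for food, r in child.items()
        if r["challenge_positive"] and (r["spt_mm"] >= 2 or r["sige_ku_l"] >= 0.35)
    }
    if not allergic:
        return None                              # no challenge-confirmed food allergy
    if allergic == {"peanut"}:
        return "peanut allergy alone"
    if allergic == {"egg"}:
        return "egg allergy alone"
    if "peanut" in allergic:
        return "peanut and other food allergy"   # peanut + egg and/or sesame
    return "other food allergy"                  # assumed catch-all, e.g., sesame alone

child = {
    "egg": {"challenge_positive": True, "spt_mm": 4.0, "sige_ku_l": 1.2},
    "peanut": {"challenge_positive": True, "spt_mm": 3.0, "sige_ku_l": 0.9},
    "sesame": {"challenge_positive": False, "spt_mm": 0.0, "sige_ku_l": 0.0},
}
print(classify_food_allergy(child))  # -> "peanut and other food allergy"
```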
Statistical analysis
We compared the characteristics of children included and not included in the health economics sample, with differences tested using chi-square tests. Inverse probability weights were generated using logistic regression to adjust for potential bias caused by differences in child sex, socioeconomic status, and parents' country of birth between the health economics sample and the whole HealthNuts sample. We compared MBS and PBS costs by age between children with and without food allergy, and across food-allergic children from different socioeconomic groups. Because only a proportion of study children had a full four-year follow-up when the data linkage closed (1 December 2011), costs were aggregated into half-year intervals by age. Annual and total four-year MBS and PBS costs were estimated by summing the half-year interval costs, with confidence intervals (CIs) generated using nonparametric bootstrapping with 1000 replications. A two-part model, a frequently used approach for analyzing cost data that combines a logistic regression part estimating the likelihood of incurring any cost with a linear part estimating costs where positive costs were predicted, was used to test whether the cost differences persisted after controlling for potential confounders (child sex, socioeconomic status, and parents' country of birth).15 To understand the source of the cost differences, we decomposed the total four-year MBS and PBS costs into different types of services and compared them between children with and without food allergy and across children with different types of food allergy. To understand how other allergic diseases may contribute to the costs of children with food allergy, we compared MBS and PBS costs between children with and without food allergy among those with or without asthma and eczema, respectively.

Based on the age-specific population size in 2020,16 food allergy prevalence in Australia estimated in previous studies,14,17 and the cost differences and uncertainties estimated between children with and without food allergy in this study, the total extra Medicare out-of-hospital costs caused by food allergy among 0- to 4-year-old children were projected for Victoria and Australia through simulation with 1000 replications (Supplementary material S3). Based on Osborne 2012,17 a lower food allergy prevalence was assumed for northern Australia than for southern and central Australia (Supplementary material S3).
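To make the two-part approach concrete, here is a minimal sketch on synthetic data. The column names and the data-generating process are invented for illustration, and inverse probability weights are omitted for brevity; the paper's exact specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the analysis dataset: zero-inflated, right-skewed
# costs, higher on average for children with food allergy.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "food_allergy": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
    "seifa_quintile": rng.integers(1, 6, n),
    "parents_born_overseas": rng.integers(0, 2, n),
})
any_cost = rng.random(n) < 0.6 + 0.2 * df["food_allergy"]
scale = (60 + 40 * df["food_allergy"]).to_numpy()
df["cost"] = np.where(any_cost, rng.gamma(2.0, scale), 0.0)

covars = "food_allergy + male + C(seifa_quintile) + parents_born_overseas"

# Part 1: logistic regression for the probability of incurring any cost.
df["any_cost"] = (df["cost"] > 0).astype(int)
part1 = smf.logit(f"any_cost ~ {covars}", data=df).fit(disp=False)

# Part 2: linear regression of cost among children with positive costs.
part2 = smf.ols(f"cost ~ {covars}", data=df[df["cost"] > 0]).fit()

# Expected cost = P(cost > 0) * E[cost | cost > 0]; compare by exposure.
for fa in (0, 1):
    grid = df.assign(food_allergy=fa)
    expected = (part1.predict(grid) * part2.predict(grid)).mean()
    print(f"food_allergy={fa}: adjusted mean cost = {expected:.1f}")
```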
RESULTS
In total, 2919 children were included in this study, with 390 (13.4%) having challenge-confirmed food allergy at year 1. Compared with children who participated in the HealthNuts study but were not included in the current analysis, the health economics sample included more males, fewer families from the lowest and highest socioeconomic quintiles, more families with both parents born in Australia, and more children with challenge-confirmed food allergy at year 1 (Table 1).

Table 1. Characteristics of children included and not included in the health economics analysis sample. Children born after 1 June 2007 (the earliest date for which healthcare cost data linkage was available), with a valid linkage to healthcare cost data and nonmissing food allergy information at year 1, were included as the health economics sample. The Socio-Economic Indexes for Areas (SEIFA), developed by the Australian Bureau of Statistics, rank areas in Australia by relative socioeconomic advantage and disadvantage based on information from the five-yearly Census.

After adjusting for potential selection bias using inverse probability weights, children with food allergy at age 1, compared with children without, had significantly higher costs for MBS medical services from the second year after birth and significantly higher costs for PBS prescriptions from birth up to age four years, with the adjusted results similar to the observed ones (Table 2). The total healthcare costs associated with food allergy from age 1 to age 4 years were estimated to be AUD$586.3 (95% CI $305.2–$855.0) or €270.9 (95% CI €141.0–€395.0) for MBS services, AUD$303.4 (95% CI $194.5–$420.2) or €140.2 (95% CI €89.9–€194.1) for PBS services, and AUD$889.7 (95% CI $566.1–$1188.3) or €411.0 (95% CI €261.5–€549.0) for all Medicare out-of-hospital services. No significant difference was detected in total MBS or PBS costs among children with food allergy across socioeconomic groups (Supplementary material S4).

Table 2. Out-of-hospital Medicare Benefits Schedule (MBS, medical services) and Pharmaceutical Benefits Scheme (PBS, prescriptions) costs for children with and without challenge-confirmed food allergy at year 1 (2020 AUD$), by year after birth. Note: results were inverse probability weighted. Among these children, 79 had a patient-reported history of doctor-diagnosed asthma and 237 of doctor-diagnosed eczema in the first 4 years of life. Adjusted estimates controlled for child sex, socioeconomic status, and parents' country of birth using the two-part model.

Costs by category of service for children with and without food allergy, and with different types of food allergy, are presented in Figure 1. Compared with children without food allergy, children with food allergy incurred significantly higher costs for GP visits (difference = AUD$151.1 or €69.8), specialist visits (difference = AUD$242.9 or €112.2), and tests (difference = AUD$127.4 or €58.9), as well as for adrenaline autoinjectors (difference = AUD$148.3 or €68.5), nutrition products (difference = AUD$109.3 or €50.5), and dermatological medications (difference = AUD$27.7 or €12.8), in the first four years of life (Figure 1A,B). The cost differences across food allergy groups came mainly from adrenaline autoinjectors: children allergic to peanut only incurred AUD$144.5 or €66.8 more on adrenaline autoinjectors than children with egg allergy only (who incurred AUD$66.6 or €30.8), and children with multiple food allergies (peanut co-existing with other food allergy) incurred a further AUD$148.1 or €68.4 compared with children allergic to peanut only (Figure 1D).

Figure 1. Total four-year out-of-hospital MBS (medical services) and PBS (prescriptions) costs for children without and with challenge-confirmed food allergy at year 1, and for children with different types of food allergy, by category of service (2020 AUD$). Results were inverse probability weighted; error bars represent 95% confidence intervals. GP, general practitioner; MBS, Medicare Benefits Schedule; PBS, Pharmaceutical Benefits Scheme.

Comparisons of healthcare costs between children with food allergy and children with other allergic diseases are presented in Figure 2. Children with food allergy alone tended to incur slightly lower costs for MBS medical services than children with asthma alone, and higher costs for PBS prescriptions than children with asthma or eczema. Food allergy co-existing with other allergic diseases tended to increase children's healthcare costs in a more than purely additive way; this effect was most pronounced for PBS costs and for children with co-existing asthma.

Figure 2. Out-of-hospital MBS (medical services) and PBS (prescriptions) costs for children with challenge-confirmed food allergy at year 1 and with other allergic diseases, by year after birth (2020 AUD$). Results were inverse probability weighted; error bars represent 95% confidence intervals. FA, food allergy; MBS, Medicare Benefits Schedule; PBS, Pharmaceutical Benefits Scheme.

Based on the age-specific MBS and PBS cost differences and uncertainties estimated between children with and without food allergy (Table 2), the total economic burden on Medicare out-of-hospital services caused by food allergy among children 0–4 years old in 2020 was estimated at AUD$7.7 million (95% CI $5.8–$9.9 million) or €3.6 million (95% CI €2.7–€4.6 million) in Victoria and AUD$26.1 million (95% CI $20.1–$32.3 million) or €12.1 million (95% CI €9.3–€14.9 million) in Australia, with around two-thirds of the total coming from MBS medical services and one-third from PBS prescriptions (Table 3).

Table 3. Total economic burden on Medicare out-of-hospital services caused by food allergy among children 0–4 years old in Victoria and Australia in 2020 (2020 AUD$/€). Abbreviations: MBS, Medicare Benefits Schedule; PBS, Pharmaceutical Benefits Scheme.
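For intuition, here is a minimal sketch of the population projection and its simulation-based uncertainty, under simplified assumptions: a single national population count and uniform prevalence, whereas the study used age-specific population sizes and region-specific prevalence (Supplementary material S3). All input values below are illustrative placeholders, chosen only to be of the same order as the published figures.

```python
import numpy as np

rng = np.random.default_rng(42)
n_reps = 1000  # matches the 1000 simulation replications described in the Methods

# Illustrative placeholder inputs -- not the published values.
pop_0_4 = 1_550_000        # assumed Australian population aged 0-4 in 2020
prevalence = 0.10          # assumed early-childhood food allergy prevalence
cost_diff_mean = 170.0     # assumed annual per-child extra Medicare cost (AUD)
cost_diff_se = 20.0        # assumed standard error of that cost difference

# Propagate uncertainty in the per-child cost difference through the projection.
draws = rng.normal(cost_diff_mean, cost_diff_se, n_reps)
totals = pop_0_4 * prevalence * draws / 1e6          # AUD millions per year
lo, hi = np.percentile(totals, [2.5, 97.5])
print(f"Projected annual burden: AUD${totals.mean():.1f}M (95% CI {lo:.1f}-{hi:.1f}M)")
```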
null
null
[ "Study design and sample", "Ethical conduct in human research", "Costs", "Definitions", "Statistical analysis", "AUTHOR CONTRIBUTIONS", "FUNDING INFORMATION", "PEER REVIEW", "PEER REVIEW" ]
[ "The HealthNuts study is a single‐centre, multi‐wave, population‐based, longitudinal food allergy study undertaken in Melbourne, Australia. Detailed study methods have been published previously.\n12\n In brief, infants were recruited between September 2007 and August 2011 when presenting for their routine scheduled 12‐month vaccinations at immunization clinics. They were invited to undertake SPT screening to 4 common food allergens: egg, peanut, sesame, and either cow's milk or shrimp. Any infant with a detectable SPT wheal (≥1 mm) to egg, peanut, or sesame was invited for a food challenge, repeat SPT, and blood test to measure serum‐specific IgE at Melbourne's Royal Children's Hospital. As described previously, food challenges were not undertaken for cow's milk or shrimp (Supplementary material S1).\n2\n, \n3\n\n\nThe cohort was followed up at age 4 years. Parents completed a mailed questionnaire or short telephone questionnaire to capture information on their child's current allergy status.\nChildren born after 1st June 2007 (the earliest date for available healthcare cost data linkage), with a valid linkage to healthcare cost data, with nonmissing food allergy information at year 1 were included in this study as the health economics sample. Healthcare service utilization and costs from 1 June 2007 till 1 December 2011 were collected.", "Approval to conduct the HealthNuts study was obtained from the Victorian State Government Office for Children (reference no. CDF/07/492), the Victorian State Government Department of Human Services (reference no. 10/07), and the Royal Children's Hospital Human Research Ethics Committee (reference no. 27047). Parents provided written informed consent for study participation, and separate written informed consent to access their child's Medicare data for use in this study.", "We investigated direct healthcare costs from a healthcare sector perspective. Out‐of‐hospital healthcare costs were collected through data linkage to the Medicare data, which covers study children's Medicare records from birth till December 2011. Medicare is Australia's universal health insurance scheme, which guarantees all Australians access to a wide range of health services at low or no cost. It has two components: the Medicare Benefits Schedule (MBS), which covers medical services (visits to health professionals, diagnostic and pathology services) mainly outside the hospital setting, and the Pharmaceutical Benefits Scheme (PBS), which covers prescription medications. All out‐of‐hospital MBS and PBS services were covered in the analysis, with MBS costs categorized into general practitioner (GP) visits, specialist visits (including visits to pediatricians and all other types of specialists), tests and all others, and PBS costs categorized into adrenaline autoinjectors, nutrition products (including infant formula), dermatologic medicines (e.g., emollients, antihistamines, corticosteroids, etc.) and all others. Details of the included services and prescription medicines and codes used to identify these categories are provided in Supplementary material S2. 
Costs were inflated to 2019/20 Australian dollars using the total health price index constructed by the Australian Institute of Health and Welfare.\n13\n The 2020 Purchasing power parities were used to exchange the Australian dollars into euros.", "We identified children with food allergy and other allergic diseases using the HealthNuts data: Food allergy at age 1 was defined as an oral food challenge‐confirmed allergy to egg, peanut, or sesame, in the context of an SPT ≥ 2 mm or specific IgE ≥ 0.35 kU/L to that food. Food‐allergic children were further classified as having peanut allergy alone, egg allergy alone, or peanut and other food allergy (with co‐existing egg or sesame food allergy). These classifications were selected to reflect the differences in the natural history of peanut vs egg allergy.\n14\n We were not able to separate out sesame allergy for analysis in this study due to the small sample size with this food allergy type.\nSince food allergy often co‐exists with other allergic diseases such as eczema and asthma, which may contribute to higher healthcare costs among children with food allergy, we also identified eczema and asthma in the analysis. These allergic diseases were defined as a patient‐report history of doctor‐diagnosed eczema or asthma at any time in the first 4 years of life.", "We compared characteristics of children included and not included in the health economic sample, with differences tested using Chi‐square. Inverse probability weights were generated using logistic regression to adjust for potential bias caused by differences in child sex, socioeconomic status, and parents' country of birth between the health economic sample and the whole HealthNuts sample. We compared the MBS and PBS costs by age between children with and without food allergy, and across food‐allergic children from different socioeconomic groups. Only a proportion of study children had a full follow‐up for 4 years when the data linkage closed (1st December 2011). As a result, costs were aggregated into half‐year intervals by age in the calculation. Annual and total four‐year MBS and PBS costs were estimated by summing over the half‐year interval costs with confidence interval (CI) generated using nonparametric bootstrapping with 1000 replications. Two‐part model, a frequently used model to analyze cost data with a logistic regression part to estimate the likelihood of having any cost and a linear part to estimate costs when positive costs were predicted, was used to test whether the cost differences would vary after controlling for potential confounders (child sex, socioeconomic status, and parents' country of birth).\n15\n To understand the source of the cost differences, we decomposed the total four‐year MBS and PBS costs into different types of services, and compared them between children with and without food allergy and across children with different types of food allergy. 
To understand how other allergic diseases may contribute to children's food allergy costs, we compared the MBS and PBS costs between children with and without food allergy among those with or without asthma and eczema, respectively.\nBased on age‐specific population size in 2020,\n16\n food allergy prevalence in Australia estimated in previous studies,\n14\n, \n17\n and the cost differences and uncertainties estimated between children with and without food allergy in this study, total extra Medicare out‐of‐hospital costs caused by food allergy among 0‐ to 4‐year‐old children were projected for Victoria and Australia through simulation with 1000 replications (Supplementary material S3). Based on Osborne 2012,\n17\n a lower food allergy prevalence was assumed in North Australia compared with South and Central Australia in the calculation (Supplementary material S3).", "Xinyang Hua: Conceptualization; Methodology; Formal analysis; Writing—original draft; Writing—review & editing; Project administration. Kim Dalziel: Conceptualization; Methodology; Supervision; Writing—review & editing; Writing—original draft; Formal analysis. Tim Brettig: Conceptualization; Methodology; Writing—review & editing. Shyamali C Dharmage: Conceptualization; Methodology; Writing—review & editing; Funding acquisition; Investigation. Adrian Lowe: Conceptualization; Methodology; Writing—review & editing. Kirsten P Perrett: Conceptualization; Methodology; Funding acquisition; Investigation; Writing—review & editing. Rachel L Peters: Conceptualization; Methodology; Investigation; Funding acquisition; Writing—review & editing. Anne‐Louise Ponsonby: Conceptualization; Methodology; Writing—review & editing. Mimi LK Tang: Conceptualization; Methodology; Investigation; Funding acquisition; Writing—review & editing. Jennifer Koplin: Conceptualization; Methodology; Investigation; Funding acquisition; Writing—review & editing; Writing—original draft; Data curation; Supervision.", "Xinyang Hua is an Australian National Health and Medical Research Council (NHMRC) Emerging Leadership Fellow (GN 2009253). Tim Brettig is supported by a PhD scholarship from the NHMRC‐funded Centre for Food and Allergy Research (CFAR, GNT1134812). Kirsten Perrett is supported by an NHMRC Investigator grant (GNT2008911) and a Melbourne Children's Clinician–Scientist Fellowship. Rachel Peters is supported by an NHMRC Early Career Fellowship and an NHMRC Clinical Trials and Cohort Studies Grant. Anne‐Louise Ponsonby is supported by an NHMRC Senior Research Fellowship (APP1110200). Jennifer Koplin is supported by NHMRC fellowship GNT1158699. Research at Murdoch Children's Research Institute is supported by the Victorian Government's Operational Infrastructure Support Program.", " PEER REVIEW The peer review history for this article is available at https://publons.com/publon/10.1111/pai.13883.\nThe peer review history for this article is available at https://publons.com/publon/10.1111/pai.13883.", "The peer review history for this article is available at https://publons.com/publon/10.1111/pai.13883." ]
[ null, null, null, null, null, null, null, null, null ]
[ "BACKGROUND", "METHODS", "Study design and sample", "Ethical conduct in human research", "Costs", "Definitions", "Statistical analysis", "RESULTS", "DISCUSSION", "AUTHOR CONTRIBUTIONS", "FUNDING INFORMATION", "CONFLICT OF INTEREST", "PEER REVIEW", "PEER REVIEW", "Supporting information" ]
[ "The prevalence of hospital presentations for food‐related anaphylaxis in children has increased rapidly in the last few decades, making it a growing public health concern worldwide.\n1\n Australia has one of the highest prevalence of childhood food allergy in the world, with the condition affecting more than 10% of infants.\n2\n, \n3\n It has been reported that in Australia, compared with the overall population, 0 to 4‐year‐old children had the highest rates of hospital admissions for anaphylaxis caused by food allergy,\n4\n, \n5\n as well as the highest rates of adrenaline autoinjector prescription.\n6\n\n\nWith this magnitude of childhood food allergy, it is important to understand its cost implication on the healthcare system to inform resource allocation and policies. A few studies have investigated the economic burden of childhood food allergy in the US\n7\n and in some European countries\n8\n, \n9\n, \n10\n and reported significant direct medical costs for the healthcare system. Despite food allergy representing one of the most common chronic health conditions in children,\n11\n no such study has been conducted in Australia.\nIn this study, using a longitudinal population‐based cohort of infants with challenge‐confirmed IgE‐mediated food allergy outcomes (the HealthNuts study),\n12\n linked with administrative health data, we estimated the out‐of‐hospital health care costs for childhood food allergy in Australia. The aim is to understand the out‐of‐hospital costs for children with food allergy and with different types of food allergy from birth to 4 years old and to estimate the relevant economic burden in Australia at the population level.", " Study design and sample The HealthNuts study is a single‐centre, multi‐wave, population‐based, longitudinal food allergy study undertaken in Melbourne, Australia. Detailed study methods have been published previously.\n12\n In brief, infants were recruited between September 2007 and August 2011 when presenting for their routine scheduled 12‐month vaccinations at immunization clinics. They were invited to undertake SPT screening to 4 common food allergens: egg, peanut, sesame, and either cow's milk or shrimp. Any infant with a detectable SPT wheal (≥1 mm) to egg, peanut, or sesame was invited for a food challenge, repeat SPT, and blood test to measure serum‐specific IgE at Melbourne's Royal Children's Hospital. As described previously, food challenges were not undertaken for cow's milk or shrimp (Supplementary material S1).\n2\n, \n3\n\n\nThe cohort was followed up at age 4 years. Parents completed a mailed questionnaire or short telephone questionnaire to capture information on their child's current allergy status.\nChildren born after 1st June 2007 (the earliest date for available healthcare cost data linkage), with a valid linkage to healthcare cost data, with nonmissing food allergy information at year 1 were included in this study as the health economics sample. Healthcare service utilization and costs from 1 June 2007 till 1 December 2011 were collected.\nThe HealthNuts study is a single‐centre, multi‐wave, population‐based, longitudinal food allergy study undertaken in Melbourne, Australia. Detailed study methods have been published previously.\n12\n In brief, infants were recruited between September 2007 and August 2011 when presenting for their routine scheduled 12‐month vaccinations at immunization clinics. They were invited to undertake SPT screening to 4 common food allergens: egg, peanut, sesame, and either cow's milk or shrimp. 
Any infant with a detectable SPT wheal (≥1 mm) to egg, peanut, or sesame was invited for a food challenge, repeat SPT, and blood test to measure serum‐specific IgE at Melbourne's Royal Children's Hospital. As described previously, food challenges were not undertaken for cow's milk or shrimp (Supplementary material S1).\n2\n, \n3\n\n\nThe cohort was followed up at age 4 years. Parents completed a mailed questionnaire or short telephone questionnaire to capture information on their child's current allergy status.\nChildren born after 1st June 2007 (the earliest date for available healthcare cost data linkage), with a valid linkage to healthcare cost data, with nonmissing food allergy information at year 1 were included in this study as the health economics sample. Healthcare service utilization and costs from 1 June 2007 till 1 December 2011 were collected.\n Ethical conduct in human research Approval to conduct the HealthNuts study was obtained from the Victorian State Government Office for Children (reference no. CDF/07/492), the Victorian State Government Department of Human Services (reference no. 10/07), and the Royal Children's Hospital Human Research Ethics Committee (reference no. 27047). Parents provided written informed consent for study participation, and separate written informed consent to access their child's Medicare data for use in this study.\nApproval to conduct the HealthNuts study was obtained from the Victorian State Government Office for Children (reference no. CDF/07/492), the Victorian State Government Department of Human Services (reference no. 10/07), and the Royal Children's Hospital Human Research Ethics Committee (reference no. 27047). Parents provided written informed consent for study participation, and separate written informed consent to access their child's Medicare data for use in this study.\n Costs We investigated direct healthcare costs from a healthcare sector perspective. Out‐of‐hospital healthcare costs were collected through data linkage to the Medicare data, which covers study children's Medicare records from birth till December 2011. Medicare is Australia's universal health insurance scheme, which guarantees all Australians access to a wide range of health services at low or no cost. It has two components: the Medicare Benefits Schedule (MBS), which covers medical services (visits to health professionals, diagnostic and pathology services) mainly outside the hospital setting, and the Pharmaceutical Benefits Scheme (PBS), which covers prescription medications. All out‐of‐hospital MBS and PBS services were covered in the analysis, with MBS costs categorized into general practitioner (GP) visits, specialist visits (including visits to pediatricians and all other types of specialists), tests and all others, and PBS costs categorized into adrenaline autoinjectors, nutrition products (including infant formula), dermatologic medicines (e.g., emollients, antihistamines, corticosteroids, etc.) and all others. Details of the included services and prescription medicines and codes used to identify these categories are provided in Supplementary material S2. Costs were inflated to 2019/20 Australian dollars using the total health price index constructed by the Australian Institute of Health and Welfare.\n13\n The 2020 Purchasing power parities were used to exchange the Australian dollars into euros.\nWe investigated direct healthcare costs from a healthcare sector perspective. 
Out‐of‐hospital healthcare costs were collected through data linkage to the Medicare data, which covers study children's Medicare records from birth till December 2011. Medicare is Australia's universal health insurance scheme, which guarantees all Australians access to a wide range of health services at low or no cost. It has two components: the Medicare Benefits Schedule (MBS), which covers medical services (visits to health professionals, diagnostic and pathology services) mainly outside the hospital setting, and the Pharmaceutical Benefits Scheme (PBS), which covers prescription medications. All out‐of‐hospital MBS and PBS services were covered in the analysis, with MBS costs categorized into general practitioner (GP) visits, specialist visits (including visits to pediatricians and all other types of specialists), tests and all others, and PBS costs categorized into adrenaline autoinjectors, nutrition products (including infant formula), dermatologic medicines (e.g., emollients, antihistamines, corticosteroids, etc.) and all others. Details of the included services and prescription medicines and codes used to identify these categories are provided in Supplementary material S2. Costs were inflated to 2019/20 Australian dollars using the total health price index constructed by the Australian Institute of Health and Welfare.\n13\n The 2020 Purchasing power parities were used to exchange the Australian dollars into euros.\n Definitions We identified children with food allergy and other allergic diseases using the HealthNuts data: Food allergy at age 1 was defined as an oral food challenge‐confirmed allergy to egg, peanut, or sesame, in the context of an SPT ≥ 2 mm or specific IgE ≥ 0.35 kU/L to that food. Food‐allergic children were further classified as having peanut allergy alone, egg allergy alone, or peanut and other food allergy (with co‐existing egg or sesame food allergy). These classifications were selected to reflect the differences in the natural history of peanut vs egg allergy.\n14\n We were not able to separate out sesame allergy for analysis in this study due to the small sample size with this food allergy type.\nSince food allergy often co‐exists with other allergic diseases such as eczema and asthma, which may contribute to higher healthcare costs among children with food allergy, we also identified eczema and asthma in the analysis. These allergic diseases were defined as a patient‐report history of doctor‐diagnosed eczema or asthma at any time in the first 4 years of life.\nWe identified children with food allergy and other allergic diseases using the HealthNuts data: Food allergy at age 1 was defined as an oral food challenge‐confirmed allergy to egg, peanut, or sesame, in the context of an SPT ≥ 2 mm or specific IgE ≥ 0.35 kU/L to that food. Food‐allergic children were further classified as having peanut allergy alone, egg allergy alone, or peanut and other food allergy (with co‐existing egg or sesame food allergy). These classifications were selected to reflect the differences in the natural history of peanut vs egg allergy.\n14\n We were not able to separate out sesame allergy for analysis in this study due to the small sample size with this food allergy type.\nSince food allergy often co‐exists with other allergic diseases such as eczema and asthma, which may contribute to higher healthcare costs among children with food allergy, we also identified eczema and asthma in the analysis. 
These allergic diseases were defined as a patient‐report history of doctor‐diagnosed eczema or asthma at any time in the first 4 years of life.\n Statistical analysis We compared characteristics of children included and not included in the health economic sample, with differences tested using Chi‐square. Inverse probability weights were generated using logistic regression to adjust for potential bias caused by differences in child sex, socioeconomic status, and parents' country of birth between the health economic sample and the whole HealthNuts sample. We compared the MBS and PBS costs by age between children with and without food allergy, and across food‐allergic children from different socioeconomic groups. Only a proportion of study children had a full follow‐up for 4 years when the data linkage closed (1st December 2011). As a result, costs were aggregated into half‐year intervals by age in the calculation. Annual and total four‐year MBS and PBS costs were estimated by summing over the half‐year interval costs with confidence interval (CI) generated using nonparametric bootstrapping with 1000 replications. Two‐part model, a frequently used model to analyze cost data with a logistic regression part to estimate the likelihood of having any cost and a linear part to estimate costs when positive costs were predicted, was used to test whether the cost differences would vary after controlling for potential confounders (child sex, socioeconomic status, and parents' country of birth).\n15\n To understand the source of the cost differences, we decomposed the total four‐year MBS and PBS costs into different types of services, and compared them between children with and without food allergy and across children with different types of food allergy. To understand how other allergic diseases may contribute to children's food allergy costs, we compared the MBS and PBS costs between children with and without food allergy among those with or without asthma and eczema, respectively.\nBased on age‐specific population size in 2020,\n16\n food allergy prevalence in Australia estimated in previous studies,\n14\n, \n17\n and the cost differences and uncertainties estimated between children with and without food allergy in this study, total extra Medicare out‐of‐hospital costs caused by food allergy among 0‐ to 4‐year‐old children were projected for Victoria and Australia through simulation with 1000 replications (Supplementary material S3). Based on Osborne 2012,\n17\n a lower food allergy prevalence was assumed in North Australia compared with South and Central Australia in the calculation (Supplementary material S3).\nWe compared characteristics of children included and not included in the health economic sample, with differences tested using Chi‐square. Inverse probability weights were generated using logistic regression to adjust for potential bias caused by differences in child sex, socioeconomic status, and parents' country of birth between the health economic sample and the whole HealthNuts sample. We compared the MBS and PBS costs by age between children with and without food allergy, and across food‐allergic children from different socioeconomic groups. Only a proportion of study children had a full follow‐up for 4 years when the data linkage closed (1st December 2011). As a result, costs were aggregated into half‐year intervals by age in the calculation. 
Annual and total four‐year MBS and PBS costs were estimated by summing over the half‐year interval costs with confidence interval (CI) generated using nonparametric bootstrapping with 1000 replications. Two‐part model, a frequently used model to analyze cost data with a logistic regression part to estimate the likelihood of having any cost and a linear part to estimate costs when positive costs were predicted, was used to test whether the cost differences would vary after controlling for potential confounders (child sex, socioeconomic status, and parents' country of birth).\n15\n To understand the source of the cost differences, we decomposed the total four‐year MBS and PBS costs into different types of services, and compared them between children with and without food allergy and across children with different types of food allergy. To understand how other allergic diseases may contribute to children's food allergy costs, we compared the MBS and PBS costs between children with and without food allergy among those with or without asthma and eczema, respectively.\nBased on age‐specific population size in 2020,\n16\n food allergy prevalence in Australia estimated in previous studies,\n14\n, \n17\n and the cost differences and uncertainties estimated between children with and without food allergy in this study, total extra Medicare out‐of‐hospital costs caused by food allergy among 0‐ to 4‐year‐old children were projected for Victoria and Australia through simulation with 1000 replications (Supplementary material S3). Based on Osborne 2012,\n17\n a lower food allergy prevalence was assumed in North Australia compared with South and Central Australia in the calculation (Supplementary material S3).", "The HealthNuts study is a single‐centre, multi‐wave, population‐based, longitudinal food allergy study undertaken in Melbourne, Australia. Detailed study methods have been published previously.\n12\n In brief, infants were recruited between September 2007 and August 2011 when presenting for their routine scheduled 12‐month vaccinations at immunization clinics. They were invited to undertake SPT screening to 4 common food allergens: egg, peanut, sesame, and either cow's milk or shrimp. Any infant with a detectable SPT wheal (≥1 mm) to egg, peanut, or sesame was invited for a food challenge, repeat SPT, and blood test to measure serum‐specific IgE at Melbourne's Royal Children's Hospital. As described previously, food challenges were not undertaken for cow's milk or shrimp (Supplementary material S1).\n2\n, \n3\n\n\nThe cohort was followed up at age 4 years. Parents completed a mailed questionnaire or short telephone questionnaire to capture information on their child's current allergy status.\nChildren born after 1st June 2007 (the earliest date for available healthcare cost data linkage), with a valid linkage to healthcare cost data, with nonmissing food allergy information at year 1 were included in this study as the health economics sample. Healthcare service utilization and costs from 1 June 2007 till 1 December 2011 were collected.", "Approval to conduct the HealthNuts study was obtained from the Victorian State Government Office for Children (reference no. CDF/07/492), the Victorian State Government Department of Human Services (reference no. 10/07), and the Royal Children's Hospital Human Research Ethics Committee (reference no. 27047). 
Parents provided written informed consent for study participation, and separate written informed consent to access their child's Medicare data for use in this study.", "We investigated direct healthcare costs from a healthcare sector perspective. Out‐of‐hospital healthcare costs were collected through data linkage to the Medicare data, which covers study children's Medicare records from birth till December 2011. Medicare is Australia's universal health insurance scheme, which guarantees all Australians access to a wide range of health services at low or no cost. It has two components: the Medicare Benefits Schedule (MBS), which covers medical services (visits to health professionals, diagnostic and pathology services) mainly outside the hospital setting, and the Pharmaceutical Benefits Scheme (PBS), which covers prescription medications. All out‐of‐hospital MBS and PBS services were covered in the analysis, with MBS costs categorized into general practitioner (GP) visits, specialist visits (including visits to pediatricians and all other types of specialists), tests and all others, and PBS costs categorized into adrenaline autoinjectors, nutrition products (including infant formula), dermatologic medicines (e.g., emollients, antihistamines, corticosteroids, etc.) and all others. Details of the included services and prescription medicines and codes used to identify these categories are provided in Supplementary material S2. Costs were inflated to 2019/20 Australian dollars using the total health price index constructed by the Australian Institute of Health and Welfare.\n13\n The 2020 Purchasing power parities were used to exchange the Australian dollars into euros.", "We identified children with food allergy and other allergic diseases using the HealthNuts data: Food allergy at age 1 was defined as an oral food challenge‐confirmed allergy to egg, peanut, or sesame, in the context of an SPT ≥ 2 mm or specific IgE ≥ 0.35 kU/L to that food. Food‐allergic children were further classified as having peanut allergy alone, egg allergy alone, or peanut and other food allergy (with co‐existing egg or sesame food allergy). These classifications were selected to reflect the differences in the natural history of peanut vs egg allergy.\n14\n We were not able to separate out sesame allergy for analysis in this study due to the small sample size with this food allergy type.\nSince food allergy often co‐exists with other allergic diseases such as eczema and asthma, which may contribute to higher healthcare costs among children with food allergy, we also identified eczema and asthma in the analysis. These allergic diseases were defined as a patient‐report history of doctor‐diagnosed eczema or asthma at any time in the first 4 years of life.", "We compared characteristics of children included and not included in the health economic sample, with differences tested using Chi‐square. Inverse probability weights were generated using logistic regression to adjust for potential bias caused by differences in child sex, socioeconomic status, and parents' country of birth between the health economic sample and the whole HealthNuts sample. We compared the MBS and PBS costs by age between children with and without food allergy, and across food‐allergic children from different socioeconomic groups. Only a proportion of study children had a full follow‐up for 4 years when the data linkage closed (1st December 2011). As a result, costs were aggregated into half‐year intervals by age in the calculation. 
Annual and total four-year MBS and PBS costs were estimated by summing over the half-year interval costs, with confidence intervals (CIs) generated using nonparametric bootstrapping with 1000 replications. A two-part model, a frequently used approach for analyzing cost data that combines a logistic regression part estimating the likelihood of incurring any cost with a linear part estimating costs when positive costs are predicted, was used to test whether the cost differences persisted after controlling for potential confounders (child sex, socioeconomic status, and parents' country of birth).\n15\n To understand the source of the cost differences, we decomposed the total four-year MBS and PBS costs into different types of services, and compared them between children with and without food allergy and across children with different types of food allergy. To understand how other allergic diseases may contribute to children's food allergy costs, we compared the MBS and PBS costs between children with and without food allergy among those with or without asthma and eczema, respectively.\nBased on age-specific population size in 2020,\n16\n food allergy prevalence in Australia estimated in previous studies,\n14\n, \n17\n and the cost differences and uncertainties estimated between children with and without food allergy in this study, total extra Medicare out-of-hospital costs caused by food allergy among 0- to 4-year-old children were projected for Victoria and Australia through simulation with 1000 replications (Supplementary material S3). Based on Osborne 2012,\n17\n a lower food allergy prevalence was assumed in North Australia compared with South and Central Australia in the calculation (Supplementary material S3).", "In total 2919 children were included in this study, with 390 (13.4%) having challenge-confirmed food allergy at year 1. Compared with children who participated in the HealthNuts study but were not included in the current analysis, the health economics study sample included more males, fewer families from the lowest and highest socioeconomic quintiles, more families with both parents born in Australia, and more children with challenge-confirmed food allergy at year 1 (Table 1).\nCharacteristics of children included and not included in the health economics analysis sample\nChildren born after 1 June 2007 (the earliest date for available healthcare cost data linkage), with a valid linkage to healthcare cost data, and with nonmissing food allergy information at year 1 were included in this study as the health economics sample.\nSocio-Economic Indexes for Areas (SEIFA) is a product developed by the Australian Bureau of Statistics that ranks areas in Australia according to relative socioeconomic advantage and disadvantage. The indexes are based on information from the five-yearly Census.\nAfter adjusting for potential selection bias using inverse probability weights, compared to children without food allergy at age 1, children with food allergy had significantly higher costs for MBS medical services from the second year after birth, and significantly higher costs for PBS prescriptions from birth up to age four years, with the adjusted results similar to the observed ones (Table 2). 
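To make the two-part model described in the statistical analysis concrete, the following is a minimal sketch in Python using simulated data. All variable names, coefficients, and the data-generating process are hypothetical illustrations of the general technique, not the authors' actual analysis code (which, for skewed costs, might use a GLM with a log link rather than plain OLS).

```python
# Minimal two-part cost model sketch (hypothetical, simulated data).
# Part 1: logistic regression for the probability of incurring any cost.
# Part 2: linear regression on the subset with positive costs.
# Expected cost per child = Pr(cost > 0) * E[cost | cost > 0].
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2919  # matches the study's sample size, purely for illustration

# Hypothetical covariates: food allergy status plus confounders
food_allergy = rng.binomial(1, 0.134, n)
male = rng.binomial(1, 0.5, n)
ses_quintile = rng.integers(1, 6, n)  # stand-in for SEIFA quintile
X = sm.add_constant(np.column_stack([food_allergy, male, ses_quintile]))

# Simulated costs: zero with some probability, log-normal otherwise
p_any = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * food_allergy)))
has_cost = rng.binomial(1, p_any)
cost = has_cost * rng.lognormal(5.0 + 0.3 * food_allergy, 1.0)

# Part 1: likelihood of incurring any cost
part1 = sm.Logit(has_cost, X).fit(disp=False)

# Part 2: cost level among children with positive costs
pos = cost > 0
part2 = sm.OLS(cost[pos], X[pos]).fit()

# Combine the two parts into a predicted expected cost per child
expected_cost = part1.predict(X) * part2.predict(X)
print(expected_cost[:5])
```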
The total healthcare costs associated with food allergy from age 1 to age 4 years were estimated to be AUD$586.3 (95%CI: $305.2–$855.0) or €270.9 (95%CI: €141.0–€395.0) for MBS services, AUD$303.4 (95%CI: $194.5–$420.2) or €140.2 (95%CI: €89.9–€194.1) for PBS services, and AUD$889.7 (95%CI: $566.1–$1188.3) or €411.0 (95% CI €261.5–€549.0) for all Medicare out-of-hospital services. No significant difference was detected in total MBS or PBS costs among children with food allergy across different socioeconomic groups (Supplementary material S4).\nOut-of-hospital Medicare Benefits Schedule (MBS, medical services) and Pharmaceutical Benefits Scheme (PBS, prescriptions) costs for children with and without challenge-confirmed food allergy at year 1 (2020 AUD$), by year after birth\n\nNote: Results were inverse probability weighted.\nOf these, 79 children had a patient-reported history of doctor-diagnosed asthma and 237 had a patient-reported history of doctor-diagnosed eczema in the first 4 years of life.\nAdjusted by child's sex, socioeconomic status, and parents' country of birth using the two-part model.\nCosts by category of services for children with and without food allergy and with different types of food allergy are presented in Figure 1. Compared to children without food allergy, children with food allergy incurred significantly more healthcare costs for GP visits (difference = AUD$151.1 or €69.8), specialist visits (difference = AUD$242.9 or €112.2), tests (difference = AUD$127.4 or €58.9), as well as on adrenaline autoinjectors (difference = AUD$148.3 or €68.5), nutrition products (difference = AUD$109.3 or €50.5), and dermatological medications (difference = AUD$27.7 or €12.8) in the first four years of life (Figure 1A,B). The cost differences across different food allergy groups mainly came from adrenaline autoinjectors, with children allergic only to peanut incurring AUD$144.5 or €66.8 more on adrenaline autoinjectors than children with egg allergy only (who incurred AUD$66.6 or €30.8 on adrenaline autoinjectors), and children with multiple food allergy (peanut co-existing with other) incurring a further AUD$148.1 or €68.4 compared with children allergic only to peanut (Figure 1D).\nTotal four-year out-of-hospital MBS (for medical services) and PBS (for prescriptions) costs for children without and with challenge-confirmed food allergy at year 1, and for children with different types of food allergy, by category of services (2020 AUD$). Results were inverse probability weighted. The error bar represents the 95% confidence interval of the estimation. GP, general practitioner; MBS, Medicare Benefits Schedule; PBS, Pharmaceutical Benefits Scheme\nThe comparisons of healthcare costs between children with food allergy and with other allergic diseases are presented in Figure 2. Children with food allergy alone tended to incur slightly lower costs for MBS medical services compared to children with asthma alone, and higher costs for PBS prescriptions compared to children with asthma or eczema. Food allergy co-existing with other allergic diseases tended to increase children's healthcare costs in a more than purely additive way. This effect was more pronounced for PBS costs and for children with co-existing asthma.\nOut-of-hospital MBS (for medical services) and PBS (for prescriptions) costs for children with challenge-confirmed food allergy at year 1 and with other allergic diseases, by year after birth (2020 AUD$). Results were inverse probability weighted. 
The error bar represents the 95% confidence interval of the estimation. FA, food allergy; MBS, Medicare Benefits Schedule; PBS, Pharmaceutical Benefits Scheme\nBased on the age-specific MBS and PBS cost differences and uncertainties estimated between children with and without food allergy (Table 2), it was estimated that in 2020 the total economic burden on Medicare out-of-hospital services caused by food allergy among children 0–4 years old was AUD$7.7 million (95% CI $5.8–$9.9 million) or €3.6 million (95% CI €2.7–€4.6 million) in Victoria and AUD$26.1 million (95% CI $20.1–$32.3 million) or €12.1 million (95% CI €9.3–€14.9 million) in Australia, with around two-thirds of the total economic burden coming from MBS medical services and one-third from PBS prescriptions (Table 3).\nTotal economic burden on Medicare out-of-hospital services caused by food allergy among children 0–4 years old in Victoria and Australia in 2020 (2020 AUD$/Euro€)\nAbbreviations: MBS, Medicare Benefits Schedule; PBS, Pharmaceutical Benefits Scheme.", "In this study, we estimated the Medicare out-of-hospital healthcare costs for food allergy among children 0–4 years old in Australia using a longitudinal cohort linked to administrative health data. We found that, compared to children without challenge-confirmed food allergy, children with food allergy had significantly higher costs for GP visits, specialist visits, and tests, as well as for prescriptions. Children with peanut allergy tended to incur higher costs for adrenaline autoinjectors compared to children with egg allergy. This translates into considerable Medicare costs for out-of-hospital services in Australia, estimated to be AUD$26.1 million or €12.1 million among 0- to 4-year-old children in 2020.\nWe found no significant difference in out-of-hospital costs for childhood food allergy across different socioeconomic groups. This differs from a US study, which identified disparities in the economic effect of food allergy based on socioeconomic status.\n18\n It is worth noting that children in the HealthNuts cohort were all recruited from immunization clinics in greater metropolitan Melbourne.\n12\n This suggests that any inequality between families from remote areas and those from metropolitan areas will not be captured in this study. Regarding different types of food allergy, we found a higher cost associated with the prescription of adrenaline autoinjectors among children with peanut allergy and higher prescription costs among children with food allergy co-existing with asthma. This is consistent with clinical guidelines at the time of the study, which specifically suggested considering the prescription of adrenaline autoinjectors for children with food allergies to peanuts, tree nuts, and seafood, as well as those with a history of anaphylaxis or asthma.\n19\n\n\nThis is the first study in Australia estimating the costs of childhood food allergy and projecting its economic burden at the population level. The results show that although infant food allergy in many cases resolves in the first few years of life, it causes high costs to the health system, including out-of-hospital services. This is consistent with a few other studies in the United States and Finland, which have also separated out primary care, specialist visits, and examinations in the analysis.\n20\n Our estimated costs of specialist consultations and tests due to childhood food allergy were substantially lower than in the US study (US$212 or AUD$311 annually, 2018 price). 
This may be due to the GP-centred primary care services for children with food allergy in Australia. In Australia, primary care acts as the "gateway" to the wider health system, including secondary specialist and tertiary hospital services.\n21\n It will be useful to strengthen the role of primary care in the management of childhood food allergy, e.g., to advise on foods to avoid and to deal with nonurgent food allergy reactions. This may help reduce referrals to specialists and avoid more expensive hospital admissions and emergency department presentations.\nThis study provides baseline cost data for childhood food allergy in Australia. With ongoing efforts to monitor and reduce the incidence of childhood food allergy in Australia through primary prevention strategies such as the timely introduction of allergenic foods,\n22\n the results of this study can be used for future projections of the economic burden of childhood food allergy and to guide policy decisions.\nThe strength of this study is the use of a longitudinal cohort with challenge-confirmed diagnosis of childhood food allergy and linkage to administrative health data, which allows for accurate identification of the food allergy sample and of health service utilization. A limitation is that we were not able to measure hospital admission and emergency presentation costs for children with food allergy, which could represent more than half of the total healthcare costs.\n7\n It will be useful for future studies with good hospital data linkage to investigate this further. We were also not able to measure the cost of items that are not covered by Medicare, for example, over-the-counter medicines and formulas that can be purchased without a prescription. This suggests that our cost estimates should be viewed as conservative.\nIn conclusion, using a population-based longitudinal cohort linked to administrative health records, we quantified the out-of-hospital healthcare costs for children with food allergy from birth to 4 years of age in Australia. The findings can help anticipate the financial impact on the health care system associated with childhood food allergy, and act as a useful costing resource for future evaluations focusing on preventive and treatment strategies for children with food allergy.", "Xinyang Hua: Conceptualization; Methodology; Formal analysis; Writing—original draft; Writing—review & editing; Project administration. Kim Dalziel: Conceptualization; Methodology; Supervision; Writing—review & editing; Writing—original draft; Formal analysis. Tim Brettig: Conceptualization; Methodology; Writing—review & editing. Shyamali C Dharmage: Conceptualization; Methodology; Writing—review & editing; Funding acquisition; Investigation. Adrian Lowe: Conceptualization; Methodology; Writing—review & editing. Kirsten P Perrett: Conceptualization; Methodology; Funding acquisition; Investigation; Writing—review & editing. Rachel L Peters: Conceptualization; Methodology; Investigation; Funding acquisition; Writing—review & editing. Anne-Louise Ponsonby: Conceptualization; Methodology; Writing—review & editing. Mimi LK Tang: Conceptualization; Methodology; Investigation; Funding acquisition; Writing—review & editing. 
Jennifer Koplin: Conceptualization; Methodology; Investigation; Funding acquisition; Writing—review & editing; Writing—original draft; Data curation; Supervision.", "Xinyang Hua is an Australian National Health and Medical Research Council (NHMRC) Emerging Leadership Fellow (GN 2009253). Tim Brettig is supported by a PhD scholarship from the NHMRC-funded Centre for Food and Allergy Research (CFAR, GNT1134812). Kirsten Perrett is supported by an NHMRC Investigator grant (GNT2008911) and a Melbourne Children's Clinician–Scientist Fellowship. Rachel Peters is supported by an NHMRC Early Career Fellowship and an NHMRC Clinical Trials and Cohort Studies Grant. Anne-Louise Ponsonby is supported by an NHMRC Senior Research Fellowship (APP1110200). Jennifer Koplin is supported by NHMRC fellowship GNT1158699. Research at Murdoch Children's Research Institute is supported by the Victorian Government's Operational Infrastructure Support Program.", "Kirsten Perrett is Chair of the scientific advisory board for AllergyPal. Her institution has received research grants from the National Health and Medical Research Council, Immune Tolerance Network, DBV Technologies, and Novartis and consultant fees from Aravax; outside the submitted work. Other authors declare no conflict of interest. S.C.D. and A.J.L. declare they have received research funds from GSK's competitively awarded Investigator Sponsored Studies programme, for unrelated research. A.J.L. has also received donations of interventional product (EpiCeram) from Primus Pharmaceuticals for unrelated research.", " PEER REVIEW The peer review history for this article is available at https://publons.com/publon/10.1111/pai.13883.", "The peer review history for this article is available at https://publons.com/publon/10.1111/pai.13883.", "\nAppendix S1\n\nClick here for additional data file." ]
[ "background", "methods", null, null, null, null, null, "results", "discussion", null, null, "COI-statement", null, null, "supplementary-material" ]
[ "burden", "child", "food allergy", "health care costs" ]
BACKGROUND: The prevalence of hospital presentations for food-related anaphylaxis in children has increased rapidly in the last few decades, making it a growing public health concern worldwide. 1 Australia has one of the highest prevalences of childhood food allergy in the world, with the condition affecting more than 10% of infants. 2 , 3 It has been reported that in Australia, compared with the overall population, 0- to 4-year-old children had the highest rates of hospital admissions for anaphylaxis caused by food allergy, 4 , 5 as well as the highest rates of adrenaline autoinjector prescription. 6 With this magnitude of childhood food allergy, it is important to understand its cost implications for the healthcare system to inform resource allocation and policies. A few studies have investigated the economic burden of childhood food allergy in the US 7 and in some European countries 8 , 9 , 10 and reported significant direct medical costs for the healthcare system. Despite food allergy representing one of the most common chronic health conditions in children, 11 no such study has been conducted in Australia. In this study, using a longitudinal population-based cohort of infants with challenge-confirmed IgE-mediated food allergy outcomes (the HealthNuts study), 12 linked with administrative health data, we estimated the out-of-hospital health care costs for childhood food allergy in Australia. The aim is to understand the out-of-hospital costs for children with food allergy and with different types of food allergy from birth to 4 years old and to estimate the relevant economic burden in Australia at the population level. METHODS: Study design and sample: The HealthNuts study is a single-centre, multi-wave, population-based, longitudinal food allergy study undertaken in Melbourne, Australia. Detailed study methods have been published previously. 12 In brief, infants were recruited between September 2007 and August 2011 when presenting for their routine scheduled 12-month vaccinations at immunization clinics. They were invited to undertake SPT screening to 4 common food allergens: egg, peanut, sesame, and either cow's milk or shrimp. Any infant with a detectable SPT wheal (≥1 mm) to egg, peanut, or sesame was invited for a food challenge, repeat SPT, and blood test to measure serum-specific IgE at Melbourne's Royal Children's Hospital. As described previously, food challenges were not undertaken for cow's milk or shrimp (Supplementary material S1). 2 , 3 The cohort was followed up at age 4 years. Parents completed a mailed questionnaire or short telephone questionnaire to capture information on their child's current allergy status. Children born after 1st June 2007 (the earliest date for available healthcare cost data linkage), with a valid linkage to healthcare cost data, and with nonmissing food allergy information at year 1 were included in this study as the health economics sample. Healthcare service utilization and costs from 1 June 2007 until 1 December 2011 were collected. Ethical conduct in human research: Approval to conduct the HealthNuts study was obtained from the Victorian State Government Office for Children (reference no. CDF/07/492), the Victorian State Government Department of Human Services (reference no. 10/07), and the Royal Children's Hospital Human Research Ethics Committee (reference no. 27047). Parents provided written informed consent for study participation, and separate written informed consent to access their child's Medicare data for use in this study. Costs: We investigated direct healthcare costs from a healthcare sector perspective. Out-of-hospital healthcare costs were collected through data linkage to the Medicare data, which covers study children's Medicare records from birth until December 2011. Medicare is Australia's universal health insurance scheme, which guarantees all Australians access to a wide range of health services at low or no cost. It has two components: the Medicare Benefits Schedule (MBS), which covers medical services (visits to health professionals, diagnostic and pathology services) mainly outside the hospital setting, and the Pharmaceutical Benefits Scheme (PBS), which covers prescription medications. All out-of-hospital MBS and PBS services were covered in the analysis, with MBS costs categorized into general practitioner (GP) visits, specialist visits (including visits to pediatricians and all other types of specialists), tests, and all others, and PBS costs categorized into adrenaline autoinjectors, nutrition products (including infant formula), dermatologic medicines (e.g., emollients, antihistamines, corticosteroids), and all others. Details of the included services and prescription medicines and codes used to identify these categories are provided in Supplementary material S2. Costs were inflated to 2019/20 Australian dollars using the total health price index constructed by the Australian Institute of Health and Welfare. 13 The 2020 purchasing power parities were used to convert the Australian dollars into euros. Definitions: We identified children with food allergy and other allergic diseases using the HealthNuts data: Food allergy at age 1 was defined as an oral food challenge-confirmed allergy to egg, peanut, or sesame, in the context of an SPT ≥ 2 mm or specific IgE ≥ 0.35 kU/L to that food. Food-allergic children were further classified as having peanut allergy alone, egg allergy alone, or peanut and other food allergy (with co-existing egg or sesame food allergy). These classifications were selected to reflect the differences in the natural history of peanut vs egg allergy. 14 We were not able to separate out sesame allergy for analysis in this study due to the small sample size with this food allergy type. Since food allergy often co-exists with other allergic diseases such as eczema and asthma, which may contribute to higher healthcare costs among children with food allergy, we also identified eczema and asthma in the analysis. These allergic diseases were defined as a patient-reported history of doctor-diagnosed eczema or asthma at any time in the first 4 years of life. Statistical analysis: We compared characteristics of children included and not included in the health economic sample, with differences tested using chi-square tests. Inverse probability weights were generated using logistic regression to adjust for potential bias caused by differences in child sex, socioeconomic status, and parents' country of birth between the health economic sample and the whole HealthNuts sample. We compared the MBS and PBS costs by age between children with and without food allergy, and across food-allergic children from different socioeconomic groups. Only a proportion of study children had a full follow-up for 4 years when the data linkage closed (1st December 2011). As a result, costs were aggregated into half-year intervals by age in the calculation. Annual and total four-year MBS and PBS costs were estimated by summing over the half-year interval costs, with confidence intervals (CIs) generated using nonparametric bootstrapping with 1000 replications. A two-part model, a frequently used approach for analyzing cost data that combines a logistic regression part estimating the likelihood of incurring any cost with a linear part estimating costs when positive costs are predicted, was used to test whether the cost differences persisted after controlling for potential confounders (child sex, socioeconomic status, and parents' country of birth). 15 To understand the source of the cost differences, we decomposed the total four-year MBS and PBS costs into different types of services, and compared them between children with and without food allergy and across children with different types of food allergy. To understand how other allergic diseases may contribute to children's food allergy costs, we compared the MBS and PBS costs between children with and without food allergy among those with or without asthma and eczema, respectively. Based on age-specific population size in 2020, 16 food allergy prevalence in Australia estimated in previous studies, 14 , 17 and the cost differences and uncertainties estimated between children with and without food allergy in this study, total extra Medicare out-of-hospital costs caused by food allergy among 0- to 4-year-old children were projected for Victoria and Australia through simulation with 1000 replications (Supplementary material S3). Based on Osborne 2012, 17 a lower food allergy prevalence was assumed in North Australia compared with South and Central Australia in the calculation (Supplementary material S3). RESULTS: In total 2919 children were included in this study, with 390 (13.4%) having challenge-confirmed food allergy at year 1. Compared with children who participated in the HealthNuts study but were not included in the current analysis, the health economics study sample included more males, fewer families from the lowest and highest socioeconomic quintiles, more families with both parents born in Australia, and more children with challenge-confirmed food allergy at year 1 (Table 1). Characteristics of children included and not included in the health economics analysis sample: Children born after 1 June 2007 (the earliest date for available healthcare cost data linkage), with a valid linkage to healthcare cost data, and with nonmissing food allergy information at year 1 were included in this study as the health economics sample. Socio-Economic Indexes for Areas (SEIFA) is a product developed by the Australian Bureau of Statistics that ranks areas in Australia according to relative socioeconomic advantage and disadvantage. The indexes are based on information from the five-yearly Census. After adjusting for potential selection bias using inverse probability weights, compared to children without food allergy at age 1, children with food allergy had significantly higher costs for MBS medical services from the second year after birth, and significantly higher costs for PBS prescriptions from birth up to age four years, with the adjusted results similar to the observed ones (Table 2).
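The inverse probability weighting and nonparametric bootstrap from the statistical analysis can be sketched as below. The inclusion model, cost-generating process, and all numbers are simulated stand-ins under stated assumptions, not the study's linked Medicare data; the study additionally worked with half-year cost intervals by age, which this sketch omits.

```python
# Hypothetical sketch: inverse probability weights from a logistic
# inclusion model, then a percentile bootstrap CI (1000 replications,
# as in the study) for the weighted mean cost difference.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2919
male = rng.binomial(1, 0.5, n)
ses_quintile = rng.integers(1, 6, n)
Z = sm.add_constant(np.column_stack([male, ses_quintile]))

# Simulated inclusion in the health economics sample
included = rng.binomial(1, 0.7, n)

# Weight included children by the inverse predicted inclusion probability
inclusion_model = sm.Logit(included, Z).fit(disp=False)
weights = 1.0 / inclusion_model.predict(Z)

# Simulated food allergy status and costs
food_allergy = rng.binomial(1, 0.134, n)
cost = rng.lognormal(5.0 + 0.3 * food_allergy, 1.0)

def weighted_cost_difference(idx):
    fa, c, w = food_allergy[idx], cost[idx], weights[idx]
    with_fa = np.average(c[fa == 1], weights=w[fa == 1])
    without_fa = np.average(c[fa == 0], weights=w[fa == 0])
    return with_fa - without_fa

kept = np.where(included == 1)[0]
boot = [weighted_cost_difference(rng.choice(kept, kept.size, replace=True))
        for _ in range(1000)]
low, high = np.percentile(boot, [2.5, 97.5])
print(weighted_cost_difference(kept), (low, high))
```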
The total healthcare costs associated with food allergy from age 1 to age 4 years were estimated to be AUD$586.3 (95%CI: $305.2–$855.0) or €270.9 (95%CI: €141.0–€395.0) for MBS services, AUD$303.4 (95%CI: $194.5–$420.2) or €140.2 (95%CI: €89.9–€194.1) for PBS services, and AUD$889.7 (95%CI: $566.1–$1188.3) or €411.0 (95% CI €261.5–€549.0) for all Medicare out-of-hospital services. No significant difference was detected in total MBS or PBS costs among children with food allergy across different socioeconomic groups (Supplementary material S4). Out-of-hospital Medicare Benefits Schedule (MBS, medical services) and Pharmaceutical Benefits Scheme (PBS, prescriptions) costs for children with and without challenge-confirmed food allergy at year 1 (2020 AUD$), by year after birth. Note: Results were inverse probability weighted. Of these, 79 children had a patient-reported history of doctor-diagnosed asthma and 237 had a patient-reported history of doctor-diagnosed eczema in the first 4 years of life. Adjusted by child's sex, socioeconomic status, and parents' country of birth using the two-part model. Costs by category of services for children with and without food allergy and with different types of food allergy are presented in Figure 1. Compared to children without food allergy, children with food allergy incurred significantly more healthcare costs for GP visits (difference = AUD$151.1 or €69.8), specialist visits (difference = AUD$242.9 or €112.2), tests (difference = AUD$127.4 or €58.9), as well as on adrenaline autoinjectors (difference = AUD$148.3 or €68.5), nutrition products (difference = AUD$109.3 or €50.5), and dermatological medications (difference = AUD$27.7 or €12.8) in the first four years of life (Figure 1A,B). The cost differences across different food allergy groups mainly came from adrenaline autoinjectors, with children allergic only to peanut incurring AUD$144.5 or €66.8 more on adrenaline autoinjectors than children with egg allergy only (who incurred AUD$66.6 or €30.8 on adrenaline autoinjectors), and children with multiple food allergy (peanut co-existing with other) incurring a further AUD$148.1 or €68.4 compared with children allergic only to peanut (Figure 1D). Total four-year out-of-hospital MBS (for medical services) and PBS (for prescriptions) costs for children without and with challenge-confirmed food allergy at year 1, and for children with different types of food allergy, by category of services (2020 AUD$). Results were inverse probability weighted. The error bar represents the 95% confidence interval of the estimation. GP, general practitioner; MBS, Medicare Benefits Schedule; PBS, Pharmaceutical Benefits Scheme. The comparisons of healthcare costs between children with food allergy and with other allergic diseases are presented in Figure 2. Children with food allergy alone tended to incur slightly lower costs for MBS medical services compared to children with asthma alone, and higher costs for PBS prescriptions compared to children with asthma or eczema. Food allergy co-existing with other allergic diseases tended to increase children's healthcare costs in a more than purely additive way. This effect was more pronounced for PBS costs and for children with co-existing asthma. Out-of-hospital MBS (for medical services) and PBS (for prescriptions) costs for children with challenge-confirmed food allergy at year 1 and with other allergic diseases, by year after birth (2020 AUD$). Results were inverse probability weighted. 
The error bar represents the 95% confidence interval of the estimation. FA, food allergy; MBS, Medicare Benefits Schedule; PBS, Pharmaceutical Benefits Scheme. Based on the age-specific MBS and PBS cost differences and uncertainties estimated between children with and without food allergy (Table 2), it was estimated that in 2020 the total economic burden on Medicare out-of-hospital services caused by food allergy among children 0–4 years old was AUD$7.7 million (95% CI $5.8–$9.9 million) or €3.6 million (95% CI €2.7–€4.6 million) in Victoria and AUD$26.1 million (95% CI $20.1–$32.3 million) or €12.1 million (95% CI €9.3–€14.9 million) in Australia, with around two-thirds of the total economic burden coming from MBS medical services and one-third from PBS prescriptions (Table 3). Total economic burden on Medicare out-of-hospital services caused by food allergy among children 0–4 years old in Victoria and Australia in 2020 (2020 AUD$/Euro€). Abbreviations: MBS, Medicare Benefits Schedule; PBS, Pharmaceutical Benefits Scheme. DISCUSSION: In this study, we estimated the Medicare out-of-hospital healthcare costs for food allergy among children 0–4 years old in Australia using a longitudinal cohort linked to administrative health data. We found that, compared to children without challenge-confirmed food allergy, children with food allergy had significantly higher costs for GP visits, specialist visits, and tests, as well as for prescriptions. Children with peanut allergy tended to incur higher costs for adrenaline autoinjectors compared to children with egg allergy. This translates into considerable Medicare costs for out-of-hospital services in Australia, estimated to be AUD$26.1 million or €12.1 million among 0- to 4-year-old children in 2020. We found no significant difference in out-of-hospital costs for childhood food allergy across different socioeconomic groups. This differs from a US study, which identified disparities in the economic effect of food allergy based on socioeconomic status. 18 It is worth noting that children in the HealthNuts cohort were all recruited from immunization clinics in greater metropolitan Melbourne. 12 This suggests that any inequality between families from remote areas and those from metropolitan areas will not be captured in this study. Regarding different types of food allergy, we found a higher cost associated with the prescription of adrenaline autoinjectors among children with peanut allergy and higher prescription costs among children with food allergy co-existing with asthma. This is consistent with clinical guidelines at the time of the study, which specifically suggested considering the prescription of adrenaline autoinjectors for children with food allergies to peanuts, tree nuts, and seafood, as well as those with a history of anaphylaxis or asthma. 19 This is the first study in Australia estimating the costs of childhood food allergy and projecting its economic burden at the population level. The results show that although infant food allergy in many cases resolves in the first few years of life, it causes high costs to the health system, including out-of-hospital services. This is consistent with a few other studies in the United States and Finland, which have also separated out primary care, specialist visits, and examinations in the analysis. 20 Our estimated costs of specialist consultations and tests due to childhood food allergy were substantially lower than in the US study (US$212 or AUD$311 annually, 2018 price). 
This may be due to the GP-centred primary care services for children with food allergy in Australia. In Australia, primary care acts as the "gateway" to the wider health system, including secondary specialist and tertiary hospital services. 21 It will be useful to strengthen the role of primary care in the management of childhood food allergy, e.g., to advise on foods to avoid and to deal with nonurgent food allergy reactions. This may help reduce referrals to specialists and avoid more expensive hospital admissions and emergency department presentations. This study provides baseline cost data for childhood food allergy in Australia. With ongoing efforts to monitor and reduce the incidence of childhood food allergy in Australia through primary prevention strategies such as the timely introduction of allergenic foods, 22 the results of this study can be used for future projections of the economic burden of childhood food allergy and to guide policy decisions. The strength of this study is the use of a longitudinal cohort with challenge-confirmed diagnosis of childhood food allergy and linkage to administrative health data, which allows for accurate identification of the food allergy sample and of health service utilization. A limitation is that we were not able to measure hospital admission and emergency presentation costs for children with food allergy, which could represent more than half of the total healthcare costs. 7 It will be useful for future studies with good hospital data linkage to investigate this further. We were also not able to measure the cost of items that are not covered by Medicare, for example, over-the-counter medicines and formulas that can be purchased without a prescription. This suggests that our cost estimates should be viewed as conservative. In conclusion, using a population-based longitudinal cohort linked to administrative health records, we quantified the out-of-hospital healthcare costs for children with food allergy from birth to 4 years of age in Australia. The findings can help anticipate the financial impact on the health care system associated with childhood food allergy, and act as a useful costing resource for future evaluations focusing on preventive and treatment strategies for children with food allergy. AUTHOR CONTRIBUTIONS: Xinyang Hua: Conceptualization; Methodology; Formal analysis; Writing—original draft; Writing—review & editing; Project administration. Kim Dalziel: Conceptualization; Methodology; Supervision; Writing—review & editing; Writing—original draft; Formal analysis. Tim Brettig: Conceptualization; Methodology; Writing—review & editing. Shyamali C Dharmage: Conceptualization; Methodology; Writing—review & editing; Funding acquisition; Investigation. Adrian Lowe: Conceptualization; Methodology; Writing—review & editing. Kirsten P Perrett: Conceptualization; Methodology; Funding acquisition; Investigation; Writing—review & editing. Rachel L Peters: Conceptualization; Methodology; Investigation; Funding acquisition; Writing—review & editing. Anne-Louise Ponsonby: Conceptualization; Methodology; Writing—review & editing. Mimi LK Tang: Conceptualization; Methodology; Investigation; Funding acquisition; Writing—review & editing. Jennifer Koplin: Conceptualization; Methodology; Investigation; Funding acquisition; Writing—review & editing; Writing—original draft; Data curation; Supervision. 
FUNDING INFORMATION: Xinyang Hua is an Australian National Health and Medical Research Council (NHMRC) Emerging Leadership Fellow (GN 2009253). Tim Brettig is supported by a PhD scholarship from the NHMRC-funded Centre for Food and Allergy Research (CFAR, GNT1134812). Kirsten Perrett is supported by an NHMRC Investigator grant (GNT2008911) and a Melbourne Children's Clinician–Scientist Fellowship. Rachel Peters is supported by an NHMRC Early Career Fellowship and an NHMRC Clinical Trials and Cohort Studies Grant. Anne-Louise Ponsonby is supported by an NHMRC Senior Research Fellowship (APP1110200). Jennifer Koplin is supported by NHMRC fellowship GNT1158699. Research at Murdoch Children's Research Institute is supported by the Victorian Government's Operational Infrastructure Support Program. CONFLICT OF INTEREST: Kirsten Perrett is Chair of the scientific advisory board for AllergyPal. Her institution has received research grants from the National Health and Medical Research Council, Immune Tolerance Network, DBV Technologies, and Novartis and consultant fees from Aravax; outside the submitted work. Other authors declare no conflict of interest. S.C.D. and A.J.L. declare they have received research funds from GSK's competitively awarded Investigator Sponsored Studies programme, for unrelated research. A.J.L. has also received donations of interventional product (EpiCeram) from Primus Pharmaceuticals for unrelated research. PEER REVIEW: The peer review history for this article is available at https://publons.com/publon/10.1111/pai.13883. Supporting information: Appendix S1 Click here for additional data file.
Background: Australia has one of the highest prevalences of childhood food allergy in the world, but there are no data on its economic burden in Australia. Methods: We used data from the HealthNuts study, a population-based longitudinal study undertaken in Melbourne, Australia. Infants were recruited at age 12 months between Sept 2007 and Aug 2011, and food allergy was diagnosed using oral food challenges. Health care costs of out-of-hospital services were collected through data linkage to Australia's universal health insurance scheme, Medicare. A two-part model was used to compare costs after controlling for potential confounders. Results: 2919 children were included, and 390 (13.4%) had challenge-confirmed food allergy at age 1 year. Compared with children without food allergy, children with food allergy had significantly higher costs for GP visits, specialist visits, tests, and prescriptions in the first four years of life. The total Medicare cost associated with food allergy from age 1 to 4 years was estimated to be AUD$889.7 (95% CI $566.1-$1188.3) or €411.0 (95% CI €261.5-€549.0) per child. This was projected into an annual Medicare cost of AUD$26.1 million (95% CI $20.1-$32.3 million) or €12.1 million (95% CI €9.3-€14.9 million) based on population size in 2020. Conclusions: Childhood food allergy causes considerable Medicare costs for out-of-hospital services in the first four years after birth in Australia. These findings can help anticipate the financial impact on the health care system associated with childhood food allergy, act as a useful costing resource for future evaluations, and inform management of childhood food allergy internationally.
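As a rough illustration of how the population-level projection behind the AUD$26.1 million figure can be approximated, the sketch below scales a per-child cost difference (with uncertainty backed out of the reported 95% CI) by an assumed prevalence and population size. Every input marked as assumed is an illustrative stand-in, not the study's actual input; the study itself used age-specific half-year cost differences and region-specific prevalence, so this sketch will not reproduce its exact totals.

```python
# Hypothetical projection sketch with 1000 simulation replications.
import numpy as np

rng = np.random.default_rng(2)

population_0_4 = 1_500_000  # assumed Australian 0-4 population, illustrative
prevalence = 0.10           # assumed food allergy prevalence, illustrative
aud_per_eur = 26.1 / 12.1   # conversion rate implied by the reported totals

# Per-child 4-year cost difference: mean and an SE backed out of the
# reported 95% CI (AUD$889.7, 95% CI $566.1-$1188.3)
mean_diff = 889.7
se_diff = (1188.3 - 566.1) / (2 * 1.96)

draws = rng.normal(mean_diff, se_diff, 1000)  # 4-year per-child draws
annual_total_aud = (draws / 4) * prevalence * population_0_4  # crude annualization

print(np.mean(annual_total_aud) / 1e6)                     # AUD millions
print(np.percentile(annual_total_aud, [2.5, 97.5]) / 1e6)  # 95% interval
print(np.mean(annual_total_aud) / aud_per_eur / 1e6)       # EUR millions
```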
null
null
6,448
320
[ 250, 82, 254, 206, 426, 190, 129, 29, 12 ]
15
[ "allergy", "food", "food allergy", "children", "costs", "study", "health", "services", "children food", "children food allergy" ]
[ "food allergy costs", "food allergy australia", "allergy prevalence australia", "anaphylaxis children increased", "food allergic children" ]
null
null
[CONTENT] burden | child | food allergy | health care costs [SUMMARY]
[CONTENT] burden | child | food allergy | health care costs [SUMMARY]
[CONTENT] burden | child | food allergy | health care costs [SUMMARY]
null
[CONTENT] burden | child | food allergy | health care costs [SUMMARY]
null
[CONTENT] Aged | Infant | Child | Humans | Longitudinal Studies | National Health Programs | Food Hypersensitivity | Australia | Health Care Costs | Hospitals [SUMMARY]
[CONTENT] Aged | Infant | Child | Humans | Longitudinal Studies | National Health Programs | Food Hypersensitivity | Australia | Health Care Costs | Hospitals [SUMMARY]
[CONTENT] Aged | Infant | Child | Humans | Longitudinal Studies | National Health Programs | Food Hypersensitivity | Australia | Health Care Costs | Hospitals [SUMMARY]
null
[CONTENT] Aged | Infant | Child | Humans | Longitudinal Studies | National Health Programs | Food Hypersensitivity | Australia | Health Care Costs | Hospitals [SUMMARY]
null
[CONTENT] food allergy costs | food allergy australia | allergy prevalence australia | anaphylaxis children increased | food allergic children [SUMMARY]
[CONTENT] food allergy costs | food allergy australia | allergy prevalence australia | anaphylaxis children increased | food allergic children [SUMMARY]
[CONTENT] food allergy costs | food allergy australia | allergy prevalence australia | anaphylaxis children increased | food allergic children [SUMMARY]
null
[CONTENT] food allergy costs | food allergy australia | allergy prevalence australia | anaphylaxis children increased | food allergic children [SUMMARY]
null
[CONTENT] allergy | food | food allergy | children | costs | study | health | services | children food | children food allergy [SUMMARY]
[CONTENT] allergy | food | food allergy | children | costs | study | health | services | children food | children food allergy [SUMMARY]
[CONTENT] allergy | food | food allergy | children | costs | study | health | services | children food | children food allergy [SUMMARY]
null
[CONTENT] allergy | food | food allergy | children | costs | study | health | services | children food | children food allergy [SUMMARY]
null
[CONTENT] food | food allergy | allergy | childhood food | childhood | childhood food allergy | australia | highest | hospital | healthcare system [SUMMARY]
[CONTENT] food | allergy | food allergy | costs | children | study | mbs | pbs | cost | differences [SUMMARY]
[CONTENT] aud | children | allergy | 95 | food allergy | food | 95 ci | pbs | mbs | ci [SUMMARY]
null
[CONTENT] allergy | food | food allergy | children | costs | study | review | research | health | services [SUMMARY]
null
[CONTENT] Australia | one | Australia [SUMMARY]
[CONTENT] HealthNuts | Melbourne | Australia ||| age 12 months | Sept 2007 | Aug 2011 ||| Australia | Medicare ||| Two [SUMMARY]
[CONTENT] 2919 | 390 | 13.4% | age 1 year ||| GP | the first four years ||| Medicare | age 1 to 4 years | 95% | CI | 566.1-$1188.3 | 411.0 | 95% | CI | 261.5-€549.0 ||| Medicare | AUD$26.1 million | 95% | $20.1-$32.3 million | 12.1 | 95% | €9.3-€14.9 million | 2020 [SUMMARY]
null
[CONTENT] Australia | one | Australia ||| HealthNuts | Melbourne | Australia ||| age 12 months | Sept 2007 | Aug 2011 ||| Australia | Medicare ||| Two ||| 2919 | 390 | 13.4% | age 1 year ||| GP | the first four years ||| Medicare | age 1 to 4 years | 95% | CI | 566.1-$1188.3 | 411.0 | 95% | CI | 261.5-€549.0 ||| Medicare | AUD$26.1 million | 95% | $20.1-$32.3 million | 12.1 | 95% | €9.3-€14.9 million | 2020 ||| Medicare | the first four years | Australia ||| [SUMMARY]
null
Cannabidiol for Treating Lennox-Gastaut Syndrome and Dravet Syndrome in Korea.
33372424
For the first time in Korea, we aimed to study the efficacy and safety of cannabidiol (CBD), which is emerging as a new alternative in treating epileptic encephalopathies.
BACKGROUND
This retrospective study enrolled patients aged 2-18 years diagnosed with Lennox-Gastaut syndrome (LGS) or Dravet syndrome (DS) from March to October 2019; patients visited the outpatient unit at 3 and 6 months, where medication efficacy and safety were evaluated based on caregiver reporting. Additional evaluations, such as electroencephalography and blood tests, were also conducted at each visit. CBD was administered orally at a starting dose of 5 mg/kg/day and maintained at 10 mg/kg/day.
METHODS
We analyzed 34 patients in the LGS group and 10 patients in the DS group, aged 1.2-15.8 years. At the 3-month evaluation, seizure frequency was reduced in 52.9% of the LGS group (a >50% reduction in 32.3% of cases), and at the 6-month evaluation in 29.4% (a >50% reduction in 20.6%). In the DS group, seizure frequency was reduced by more than 50% in 30% and 20% of patients at the 3- and 6-month evaluations, respectively. A good outcome was defined as a reduction in seizure frequency of more than 50%, and similar results were observed in both the LGS and DS groups. Adverse events were reported in 36.3% of all patients, the most common being gastrointestinal problems. However, no life-threatening adverse events were reported in either the LGS or DS group during the observation period.
RESULTS
In this first Korean study, CBD was safe and well tolerated and may reduce seizure frequency in pediatric patients with LGS or DS.
CONCLUSION
[ "Adolescent", "Anticonvulsants", "Cannabidiol", "Caregivers", "Child", "Child, Preschool", "Electroencephalography", "Epilepsies, Myoclonic", "Epilepsy", "Female", "Humans", "Infant", "Lennox Gastaut Syndrome", "Male", "Patient Safety", "Republic of Korea", "Retrospective Studies", "Treatment Outcome" ]
7769699
INTRODUCTION
Epilepsy is associated with cognitive dysfunction and behavioral disorders. Uncontrolled seizures often affect one's quality of life, especially when they occur at a young age.1,2,3,4,5 In particular, in epileptic encephalopathies such as Lennox-Gastaut syndrome (LGS) and Dravet syndrome (DS), epileptic activity causes severe cognitive and behavioral disorders to worsen over time.6 Purified cannabidiol (CBD) (Epidiolex®; GW Pharmaceuticals, Cambridge, UK) is a new drug that can be used to treat drug-resistant epilepsy,7 as its administration can reduce seizure frequency. Mild adverse events such as somnolence and gastrointestinal (GI) symptoms have been reported.7,8,9,10 After randomized trials assessing CBD use in LGS and DS were conducted, the United States Food and Drug Administration approved the prescription of CBD for both epileptic encephalopathies in 2018.11,12 CBD exerts anti-seizure effects, while another component of cannabis, Δ9-tetrahydrocannabinol (THC), has psychoactive functions.13,14 In Korea, due to the addictive nature of THC, even CBD was considered an illegal drug until March 2019. The efforts of the media, patient groups, and non-governmental organizations informed the public that CBD medication contained a very small amount of THC, with a very low potential for addiction and psychoactive effects. These acts led to the approval of CBD by the Ministry of Food and Drug Safety for the treatment of DS or LGS in those over two years of age. This study was performed to evaluate the tolerability and efficacy of CBD in children with LGS or DS for the first time in Korea.
METHODS
Patients and study design
Patients who initiated CBD treatment from March 2019 to October 2019 and who were between the ages of 2 and 18 years and diagnosed with DS or LGS were included in this study. All patients diagnosed with DS met the clinical diagnostic criteria, including an early onset of febrile or vaccination-related seizures, developmental regression in relation to the seizures, and drug-resistant epilepsy. All had pathogenic SCN1A (sodium channel, voltage-gated, type I, alpha subunit) mutations. All patients diagnosed with LGS exhibited impaired intellectual function and multiple mixed-type seizures, as well as characteristic electroencephalography (EEG) patterns showing generalized sharp and slow waves (GSSW) or generalized paroxysmal fast activities (PFA). This study was conducted retrospectively and evaluated medication efficacy and safety through outpatient visits at 3 and 6 months based on caregiver reporting. Patients received Epidiolex®, manufactured by GW Pharmaceuticals, which contained CBD (100 mg/mL). CBD was administered orally at a starting dose of 5 mg/kg/day, in addition to baseline antiepileptic drug therapy. After 1 week, the dosage was up-titrated by 5 mg/kg/day and was maintained at 10 mg/kg/day (Fig. 1). CBD = cannabidiol, EEG = electroencephalogram.
Assessment of efficacy
The efficacy of CBD was evaluated by comparing the mean frequency of motor seizures experienced per month, defined by the tonic, clonic, and atonic components, before and after the administration of medication. The change in seizure frequency was expressed as a percentage, calculated as [(seizure frequency per month) − (seizure frequency at baseline)]/(seizure frequency at baseline) × 100. Each patient's response to CBD was categorized as one of the following 4 responses: 1) the patient became seizure-free, 2) the frequency of seizures was reduced by more than 50%, 3) the frequency of seizures was reduced by less than 50%, or 4) there was no effect. A good outcome was defined as the seizure frequency being reduced by more than 50%, including being seizure-free. The additional effect of CBD was evaluated based on comparisons of the EEGs performed during each period, and each assessment lasted 30 minutes or more. Each scalp electrode was placed in accordance with the international 10–20 system. The diagnosis was based on the characteristic interictal and ictal patterns of the EEG.15 In the LGS group, we evaluated whether the EEG improved at the 3- and 6-month visits, with criteria based on the persistence of the EEG characteristics of LGS or the prevalence of the epileptiform pattern.16
Assessment of safety
All safety results in this report were evaluated over the entire follow-up period. Patients or caregivers documented adverse events at each visit. Safety was assessed through laboratory studies, including complete blood counts, liver function tests, and vital signs at baseline and at the time of the outpatient clinic visits. Adverse events were managed by dose reduction or discontinuation of treatment.
Analyses
An intention-to-treat analysis was conducted to assess CBD efficacy. Data processing and analysis were performed with Statistical Package for the Social Sciences version 25 (IBM Co., Armonk, NY, USA). P values < 0.05 were regarded as statistically significant. All numbers were rounded to 2 decimal places. The statistical analyses included the independent paired-sample t-test, Fisher's exact test, and Pearson's χ2 test to compare the variables and prognosis.
Ethics statement
This study was approved by the Institutional Review Board of Severance Hospital (approval No.: 4-2020-0120). Informed consent was waived due to the retrospective nature of the study.
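The percent-change formula and the four response categories defined in the efficacy assessment above map directly onto a small helper. The sketch below is illustrative only; the function and variable names are not from the study, and the handling of an exact −50% change is an assumption, since the paper does not specify that boundary.

```python
# Sketch of the seizure-frequency outcome classification described above.
# Names are illustrative, not from the study.
def classify_response(baseline: float, current: float) -> str:
    """Classify a patient's response from monthly motor-seizure counts."""
    if baseline <= 0:
        raise ValueError("baseline seizure frequency must be positive")
    pct_change = (current - baseline) / baseline * 100  # formula from Methods
    if current == 0:
        return "seizure-free"      # counts toward a good outcome
    if pct_change <= -50:          # exact -50% assigned here by assumption
        return ">50% reduction"    # also a good outcome
    if pct_change < 0:
        return "<50% reduction"
    return "no effect"

# Example: 20 seizures/month at baseline, 8/month on CBD -> -60% change.
print(classify_response(20, 8))   # ">50% reduction"
print(classify_response(20, 0))   # "seizure-free"
```

A "good outcome" in the study corresponds to the first two categories (seizure-free or a reduction of more than 50%).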
RESULTS
Study population
During the study period, 44 patients were managed with CBD: 34 in the LGS group and 10 in the DS group. One patient from the LGS group was excluded due to loss to follow-up. One patient from the DS group was excluded due to the occurrence of an adverse event before the 3-month evaluation. After the 3-month evaluation, 6 patients in the LGS group were excluded from the study due to dissatisfaction with the treatment effect, and 3 were excluded because of the development of adverse events. In the DS group, 1 patient was excluded due to adverse events and 2 were excluded due to financial constraints (Fig. 2). LGS = Lennox-Gastaut syndrome, DS = Dravet syndrome. The ages of the patients included in this study ranged from 1.2 to 15.8 years (mean age = 5.1 years), with 28 males (63.6%) and 16 females (36.4%). Five patients (11.4%) were undergoing diet therapy. The most frequently used antiepileptic drug at the time of the study was valproic acid (VPA) at an average dose of 22.2 mg/kg/day, followed by levetiracetam at an average dose of 20.3 mg/kg/day. Stiripentol was used only in DS patients. Patient characteristics are shown in Table 1. Data are presented as median (range) or number (%) unless otherwise indicated. CBD = cannabidiol, IQR = interquartile range, EEG = electroencephalogram, MRI = magnetic resonance imaging. aAll diet therapies used in this study were the modified Atkins diet; bAll used deflazacort.
Efficacy in LGS
In the LGS group, a reduction in seizure frequency was observed in 52.9% of patients at 3 months and in 29.4% at 6 months. Eight patients (23.5%) were seizure-free at the 3-month evaluation, 3 patients (8.8%) had a reduction in seizure frequency of more than 50%, and 7 patients (20.6%) showed a reduction of less than 50%; overall, the result was good in 32.3% of patients. At the 6-month evaluation, 4 patients (11.8%) were seizure-free and 3 patients (8.8%) showed more than a 50% reduction in seizure frequency; thus, 20.6% of patients had good outcomes (Fig. 3A). LGS = Lennox-Gastaut syndrome, DS = Dravet syndrome. On the initial EEGs of the LGS patients, 16 cases showed GSSW or PFA, characteristic of LGS. The outcomes at the 3- and 6-month evaluations were investigated. At the 3-month evaluation, good outcomes were more common among cases whose initial EEG did not show PFA, but at the 6-month evaluation the opposite was observed. For the GSSW pattern, good outcomes were obtained, with no cases showing the pattern at either the 3- or 6-month evaluation; however, there was no statistically significant change in EEG activity after the administration of CBD. Compared with the baseline EEG, most patients who showed no EEG improvement at the 3-month evaluation (P value = 0.017) and the 6-month evaluation (P value = 0.009) were determined to have had a “no effect” outcome.
Efficacy in DS
In the DS group, the “no effect” outcome was observed more frequently at both the 3- and 6-month evaluations than in the LGS group. At the 3-month evaluation, 1 patient (10.0%) was seizure-free, 2 patients (20.0%) showed a more than 50% reduction in seizure frequency, and 60.0% of the enrolled patients experienced no effect. None of the patients were “seizure-free” at the 6-month evaluation, and only 2 (20.0%) showed a reduction in seizure frequency of 50% or more. At the 6-month evaluation, a “no effect” outcome was observed in 66.7% of patients (Fig. 3B).
Safety
No life-threatening adverse events were reported in either the LGS or DS group during the observation period. However, adverse events were reported in 16 (36.3%) of the enrolled patients, and GI problems were the most frequent, in 7 cases (15.9%); among them, mild liver enzyme elevation was observed in 1 case, whereas most involved vomiting or diarrhea. Five cases (11.4%) experienced behavioral changes, such as irritability, hyperactivity, excessive alertness, and sleep disturbances (Table 2). In the DS group, 2 cases had increased seizure frequency, which warranted discontinuation of the medication. The other adverse events necessitating cessation of the drug were as follows: 1 case of drowsiness, 1 case of GI problems with acute pancreatitis, and 1 case of behavioral changes. Values are presented as number (%). AED = antiepileptic drug, CBD = cannabidiol.
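The responder percentages above follow from simple counts over the enrolled patients (34 LGS, 10 DS). A quick arithmetic check, with the counts taken from the text and the script itself hypothetical:

```python
# Arithmetic check of the responder percentages reported above.
enrolled_lgs, enrolled_ds = 34, 10

# LGS at 3 months: 8 seizure-free, 3 with >50% reduction, 7 with <50% reduction.
print(f"LGS any reduction, 3 mo: {(8 + 3 + 7) / enrolled_lgs:.1%}")  # 52.9%
# 11/34 = 32.35%; reported as 32.3% in the Results and 32.4% in the Discussion.
print(f"LGS good outcome, 3 mo:  {(8 + 3) / enrolled_lgs:.1%}")

# LGS at 6 months: 4 seizure-free + 3 with >50% reduction.
print(f"LGS good outcome, 6 mo:  {(4 + 3) / enrolled_lgs:.1%}")      # 20.6%

# DS at 3 months: 1 seizure-free + 2 with >50% reduction.
print(f"DS good outcome, 3 mo:   {(1 + 2) / enrolled_ds:.1%}")       # 30.0%
```

The check also shows why the text's 32.3% and the Discussion's 32.4% differ slightly: 11/34 = 32.35%, which rounds either way at one decimal place.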
null
null
[ "Patients and study design", "Assessment of efficacy", "Assessment of safety", "Analyses", "Ethics statement", "Study population", "Efficacy in LGS", "Efficacy in DS", "Safety" ]
[ "Patients who initiated CBD treatment from March 2019 to October 2019 and who were between the ages of 2 and 18 years and diagnosed with of DS or LGS were included in this study. All patients diagnosed with DS met the clinical diagnostic criteria, including an early onset of febrile or vaccination related seizures, developmental regression in relation to the seizures, and drug-resistant epilepsy. All had pathogenic SCN1A (sodium channel, voltage-gated, type I, alpha subunit) mutations. All patients diagnosed with LGS exhibited impaired intellectual function and multiple mixed-type seizures, as well as characteristic electroencephalography (EEG) patterns showing generalized sharp and slow waves (GSSW) or generalized paroxysmal fast activities (PFA). This study was conducted retrospectively and evaluated the medication efficacy and safety through outpatient visits at 3 and 6 months based on caregiver reporting. Patients received Epidiolex®, manufactured by GW Pharmaceuticals, which contained CBD (100 mg/mL). CBD was administered orally at a starting dose of 5 mg/kg/day, in addition to baseline antiepileptic drug therapy. After 1 week, the dosage was up-titrated by 5 mg/kg/day and was maintained at 10 mg/kg/day (Fig. 1).\nCBD = cannabidiol, EEG = electroencephalogram.", "The efficacy of CBD was evaluated by comparing the mean frequency of motor seizures experienced per month, defined by the tonic, clonic, and atonic components, before and after the administration of medication. The change in the seizure frequency was expressed as a percentage that was calculated as [(seizure frequency per month) − (seizure frequency at baseline)]/(seizure frequency at baseline) × 100. Each patient's response to CBD was categorized as one of the following 4 responses: 1) the patient became seizure-free, 2) the frequency of seizures was reduced by more than 50%, 3) the frequency of seizures was reduced by less than 50%, or 4) there was no effect. A good outcome was defined as the seizure frequency being reduced by more than 50%, including being seizure-free.\nThe additional effect of CBD was evaluated based on comparisons of the EEGs performed during each period, and each assessment was performed for 30 minutes or more. Each scalp electrode was placed in accordance with an international 10–20 system. The diagnosis was based on the characteristic interictal and ictal patterns of the EEG.15 In the LGS group, we evaluated whether the EEG improved at the 3- and 6-month visits, and the criteria were based on the persistence of the EEG characteristics of LGS or the prevalence of the epileptiform pattern.16", "All safety results in this report were evaluated over the entire follow-up period. Patients or caregivers documented the adverse events at each visit. Safety was assessed through laboratory studies, including complete blood counts, liver function tests, and vital signs at baseline and at the time of the outpatient clinic visits. The appearance of adverse events was resolved by dose reduction or discontinuation of treatment.", "An intention-to-treat analysis was conducted to assess CBD efficacy. Data processing and analysis were performed with Statistical Package for the Social Sciences version 25 (IBM Co., Armonk, NY, USA). P values < 0.05 were regarded as statistically significant. All numbers were rounded to 2 decimal places. 
The statistical analyses included the independent paired-sample t-test and Fisher's exact test, and Pearson's χ2 test to compare the variables and prognosis.", "This study was approved by the Institutional Review Board of Severance Hospital (approval No.: 4-2020-0120). Informed consent was waived due to the retrospective nature of the study.", "During the study period, 44 patients were managed with CBD, 34 of whom were in the LGS group and 10 of whom were in the DS group. One patient from the LGS group was excluded due to follow-up loss. One patient from the DS group was excluded due to the occurrence of an adverse event before the 3-month evaluation. After the 3-month evaluation, 6 patients in the LGS group were excluded from the study due to dissatisfaction with the treatment effect, and 3 patients were excluded because of the development of adverse events. In the DS group, 1 patient was excluded due to adverse events and 2 were excluded due to financial constraints (Fig. 2).\nLGS = Lennox-Gastaut syndrome, DS = Dravet syndrome.\nThe ages of the patients included in this study ranged from 1.2 to 15.8 years (mean age = 5.1 years), with 28 males (63.6%) and 16 females (36.4%). There were 5 patients (11.4%) undergoing diet therapy. The most frequently used antiepileptic drug at the time of the study was valproic acid (VPA) at an average dose of 22.2 mg/kg/day, followed by levetiracetam at an average dose of 20.3 mg/kg/day. Stiripentol was used only in DS patients. Patient characteristics are shown in Table 1.\nData are presented as median (range) or number (%) unless otherwise indicated.\nCBD = cannabidiol, IQR = interquartile range, EEG = electroencephalogram, MRI = magnetic resonance imaging.\naAll diet therapies used in this study were the modified Atkins diet; bAll used deflazacort.", "When outcomes were observed in the LGS group, a reduction in seizure frequency was observed in 52.9% of the patients at 3 months and in 29.4% at 6 months. Eight patients (23.5%) were seizure-free at the 3-month evaluation, 3 patients (8.8%) had a reduction in seizure frequency of more than 50%, and 7 patients (20.6%) showed a reduction in seizure frequency of less than 50%. It can be stated that the result was good in 32.3% of patients. At the 6-month evaluation, 4 patients (11.8%) were seizure-free and 3 patients (8.8%) showed more than a 50% reduction in seizure frequency; thus, 20.6% of the patients had good outcomes (Fig. 3A).\nLGS = Lennox-Gastaut syndrome, DS = Dravet syndrome.\nWhen evaluating the initial EEGs of LGS patients, 16 cases showed GSSW or PFA, characteristic of LGS. The outcomes at the 3- and 6-month evaluations were investigated. At the 3-month evaluation, good outcomes were observed as there were more cases with an initial EEG that did not show the PFA, but at the 6-month evaluation, the opposite was observed. In the case of the GSSW pattern, good outcomes were obtained, with no cases showing the pattern at either the 3- or the 6-month evaluations; however, there was no statistically significant change in EEG activity after the administration of CBD.\nWhen compared with the baseline EEG, most of the patients who showed no improvement at the 3-month evaluation (P value = 0.017) and the 6-month evaluation (P value = 0.009) were determined to have had a “no effect” outcome.", "In the DS group, the “no effect” outcome was observed at a higher frequency at both the 3- and 6-month evaluations than it was in the LGS group. 
At the 3-month evaluation, 1 patient (10.0%) was seizure-free, 2 patients (20.0%) showed a more than 50% reduction in seizure frequency, and 60.0% of the enrolled patients experienced no effect. None of the patients were “seizure-free” at the 6-month evaluation, and only 2 (20.0%) showed a reduction in seizure frequency of 50% or more. At the 6-month evaluation, a “no effect” outcome was observed in 66.7% of patients (Fig. 3B).", "No life-threatening adverse event was reported in the LGS and DS groups during the observation period. However, adverse events were reported in 16 (36.3%) of the enrolled patients, and GI problems were the most frequent adverse event in 7 cases (15.9%); among them, mild liver enzyme elevation was observed in 1 case, whereas most included vomiting or diarrhea. Five cases (11.4%) experienced behavioral changes, such as irritability, hyperactivity, excessive alertness, and sleep disturbances (Table 2). In the DS group, 2 cases had increased seizure frequency, which warranted the discontinuation of the medication. The other adverse events necessitating the cessation of the drug were as follows: 1 case of drowsiness, 1 case of GI problems with acute pancreatitis, and 1 case of behavioral changes.\nValues are presented as number (%).\nAED = antiepileptic drug, CBD = cannabidiol." ]
[ null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Patients and study design", "Assessment of efficacy", "Assessment of safety", "Analyses", "Ethics statement", "RESULTS", "Study population", "Efficacy in LGS", "Efficacy in DS", "Safety", "DISCUSSION" ]
[ "Epilepsy is associated with cognitive dysfunction and behavioral disorders. Uncontrolled seizures often affect one's quality of life, especially when they occur at a young age.12345 In particular, in epileptic encephalopathies such as Lennox-Gastaut syndrome (LGS) and Dravet syndrome (DS), epileptic activity causes severe cognitive and behavioral disorders to worsen over time.6\nPurified cannabidiol (CBD) (Epidiolex®; GW Pharmaceuticals, Cambridge, UK) is a new drug that can be used to treat drug-resistant epilepsy,7 as its administration can reduce seizure frequency. Mild adverse events such as somnolence and gastrointestinal (GI) symptoms have been reported.78910 After randomized trials assessing CBD use in LGS and DS were conducted, the United States Food and Drug Administration approved the prescription of CBD for both epileptic encephalopathies in 2018.1112\nCBD exerts anti-seizure effects, while another component of cannabis, Δ9-tetrahydrocannabinol (THC), has psychoactive functions.1314 In Korea, due to the addictive nature of THC, even CBD was considered an illegal drug until March 2019. The efforts of the media, patient groups, and non-governmental organizations informed the public that CBD medication contained a very small amount of THC, with a very low potential for addiction and psychoactive effects. These acts led to the approval of CBD by the Ministry of Food and Drug Safety for the treatment of DS or LGS in those over two years of age.\nThis study was performed to evaluate the tolerability and efficacy of CBD in children with LGS or DS for the first time in Korea.", " Patients and study design Patients who initiated CBD treatment from March 2019 to October 2019 and who were between the ages of 2 and 18 years and diagnosed with of DS or LGS were included in this study. All patients diagnosed with DS met the clinical diagnostic criteria, including an early onset of febrile or vaccination related seizures, developmental regression in relation to the seizures, and drug-resistant epilepsy. All had pathogenic SCN1A (sodium channel, voltage-gated, type I, alpha subunit) mutations. All patients diagnosed with LGS exhibited impaired intellectual function and multiple mixed-type seizures, as well as characteristic electroencephalography (EEG) patterns showing generalized sharp and slow waves (GSSW) or generalized paroxysmal fast activities (PFA). This study was conducted retrospectively and evaluated the medication efficacy and safety through outpatient visits at 3 and 6 months based on caregiver reporting. Patients received Epidiolex®, manufactured by GW Pharmaceuticals, which contained CBD (100 mg/mL). CBD was administered orally at a starting dose of 5 mg/kg/day, in addition to baseline antiepileptic drug therapy. After 1 week, the dosage was up-titrated by 5 mg/kg/day and was maintained at 10 mg/kg/day (Fig. 1).\nCBD = cannabidiol, EEG = electroencephalogram.\nPatients who initiated CBD treatment from March 2019 to October 2019 and who were between the ages of 2 and 18 years and diagnosed with of DS or LGS were included in this study. All patients diagnosed with DS met the clinical diagnostic criteria, including an early onset of febrile or vaccination related seizures, developmental regression in relation to the seizures, and drug-resistant epilepsy. All had pathogenic SCN1A (sodium channel, voltage-gated, type I, alpha subunit) mutations. 
All patients diagnosed with LGS exhibited impaired intellectual function and multiple mixed-type seizures, as well as characteristic electroencephalography (EEG) patterns showing generalized sharp and slow waves (GSSW) or generalized paroxysmal fast activities (PFA). This study was conducted retrospectively and evaluated the medication efficacy and safety through outpatient visits at 3 and 6 months based on caregiver reporting. Patients received Epidiolex®, manufactured by GW Pharmaceuticals, which contained CBD (100 mg/mL). CBD was administered orally at a starting dose of 5 mg/kg/day, in addition to baseline antiepileptic drug therapy. After 1 week, the dosage was up-titrated by 5 mg/kg/day and was maintained at 10 mg/kg/day (Fig. 1).\nCBD = cannabidiol, EEG = electroencephalogram.\n Assessment of efficacy The efficacy of CBD was evaluated by comparing the mean frequency of motor seizures experienced per month, defined by the tonic, clonic, and atonic components, before and after the administration of medication. The change in the seizure frequency was expressed as a percentage that was calculated as [(seizure frequency per month) − (seizure frequency at baseline)]/(seizure frequency at baseline) × 100. Each patient's response to CBD was categorized as one of the following 4 responses: 1) the patient became seizure-free, 2) the frequency of seizures was reduced by more than 50%, 3) the frequency of seizures was reduced by less than 50%, or 4) there was no effect. A good outcome was defined as the seizure frequency being reduced by more than 50%, including being seizure-free.\nThe additional effect of CBD was evaluated based on comparisons of the EEGs performed during each period, and each assessment was performed for 30 minutes or more. Each scalp electrode was placed in accordance with an international 10–20 system. The diagnosis was based on the characteristic interictal and ictal patterns of the EEG.15 In the LGS group, we evaluated whether the EEG improved at the 3- and 6-month visits, and the criteria were based on the persistence of the EEG characteristics of LGS or the prevalence of the epileptiform pattern.16\nThe efficacy of CBD was evaluated by comparing the mean frequency of motor seizures experienced per month, defined by the tonic, clonic, and atonic components, before and after the administration of medication. The change in the seizure frequency was expressed as a percentage that was calculated as [(seizure frequency per month) − (seizure frequency at baseline)]/(seizure frequency at baseline) × 100. Each patient's response to CBD was categorized as one of the following 4 responses: 1) the patient became seizure-free, 2) the frequency of seizures was reduced by more than 50%, 3) the frequency of seizures was reduced by less than 50%, or 4) there was no effect. A good outcome was defined as the seizure frequency being reduced by more than 50%, including being seizure-free.\nThe additional effect of CBD was evaluated based on comparisons of the EEGs performed during each period, and each assessment was performed for 30 minutes or more. Each scalp electrode was placed in accordance with an international 10–20 system. 
The diagnosis was based on the characteristic interictal and ictal patterns of the EEG.15 In the LGS group, we evaluated whether the EEG improved at the 3- and 6-month visits, and the criteria were based on the persistence of the EEG characteristics of LGS or the prevalence of the epileptiform pattern.16\n Assessment of safety All safety results in this report were evaluated over the entire follow-up period. Patients or caregivers documented the adverse events at each visit. Safety was assessed through laboratory studies, including complete blood counts, liver function tests, and vital signs at baseline and at the time of the outpatient clinic visits. The appearance of adverse events was resolved by dose reduction or discontinuation of treatment.\nAll safety results in this report were evaluated over the entire follow-up period. Patients or caregivers documented the adverse events at each visit. Safety was assessed through laboratory studies, including complete blood counts, liver function tests, and vital signs at baseline and at the time of the outpatient clinic visits. The appearance of adverse events was resolved by dose reduction or discontinuation of treatment.\n Analyses An intention-to-treat analysis was conducted to assess CBD efficacy. Data processing and analysis were performed with Statistical Package for the Social Sciences version 25 (IBM Co., Armonk, NY, USA). P values < 0.05 were regarded as statistically significant. All numbers were rounded to 2 decimal places. The statistical analyses included the independent paired-sample t-test and Fisher's exact test, and Pearson's χ2 test to compare the variables and prognosis.\nAn intention-to-treat analysis was conducted to assess CBD efficacy. Data processing and analysis were performed with Statistical Package for the Social Sciences version 25 (IBM Co., Armonk, NY, USA). P values < 0.05 were regarded as statistically significant. All numbers were rounded to 2 decimal places. The statistical analyses included the independent paired-sample t-test and Fisher's exact test, and Pearson's χ2 test to compare the variables and prognosis.\n Ethics statement This study was approved by the Institutional Review Board of Severance Hospital (approval No.: 4-2020-0120). Informed consent was waived due to the retrospective nature of the study.\nThis study was approved by the Institutional Review Board of Severance Hospital (approval No.: 4-2020-0120). Informed consent was waived due to the retrospective nature of the study.", "Patients who initiated CBD treatment from March 2019 to October 2019 and who were between the ages of 2 and 18 years and diagnosed with of DS or LGS were included in this study. All patients diagnosed with DS met the clinical diagnostic criteria, including an early onset of febrile or vaccination related seizures, developmental regression in relation to the seizures, and drug-resistant epilepsy. All had pathogenic SCN1A (sodium channel, voltage-gated, type I, alpha subunit) mutations. All patients diagnosed with LGS exhibited impaired intellectual function and multiple mixed-type seizures, as well as characteristic electroencephalography (EEG) patterns showing generalized sharp and slow waves (GSSW) or generalized paroxysmal fast activities (PFA). This study was conducted retrospectively and evaluated the medication efficacy and safety through outpatient visits at 3 and 6 months based on caregiver reporting. Patients received Epidiolex®, manufactured by GW Pharmaceuticals, which contained CBD (100 mg/mL). 
CBD was administered orally at a starting dose of 5 mg/kg/day, in addition to baseline antiepileptic drug therapy. After 1 week, the dosage was up-titrated by 5 mg/kg/day and was maintained at 10 mg/kg/day (Fig. 1).\nCBD = cannabidiol, EEG = electroencephalogram.", "The efficacy of CBD was evaluated by comparing the mean frequency of motor seizures experienced per month, defined by the tonic, clonic, and atonic components, before and after the administration of medication. The change in the seizure frequency was expressed as a percentage that was calculated as [(seizure frequency per month) − (seizure frequency at baseline)]/(seizure frequency at baseline) × 100. Each patient's response to CBD was categorized as one of the following 4 responses: 1) the patient became seizure-free, 2) the frequency of seizures was reduced by more than 50%, 3) the frequency of seizures was reduced by less than 50%, or 4) there was no effect. A good outcome was defined as the seizure frequency being reduced by more than 50%, including being seizure-free.\nThe additional effect of CBD was evaluated based on comparisons of the EEGs performed during each period, and each assessment was performed for 30 minutes or more. Each scalp electrode was placed in accordance with an international 10–20 system. The diagnosis was based on the characteristic interictal and ictal patterns of the EEG.15 In the LGS group, we evaluated whether the EEG improved at the 3- and 6-month visits, and the criteria were based on the persistence of the EEG characteristics of LGS or the prevalence of the epileptiform pattern.16", "All safety results in this report were evaluated over the entire follow-up period. Patients or caregivers documented the adverse events at each visit. Safety was assessed through laboratory studies, including complete blood counts, liver function tests, and vital signs at baseline and at the time of the outpatient clinic visits. The appearance of adverse events was resolved by dose reduction or discontinuation of treatment.", "An intention-to-treat analysis was conducted to assess CBD efficacy. Data processing and analysis were performed with Statistical Package for the Social Sciences version 25 (IBM Co., Armonk, NY, USA). P values < 0.05 were regarded as statistically significant. All numbers were rounded to 2 decimal places. The statistical analyses included the independent paired-sample t-test and Fisher's exact test, and Pearson's χ2 test to compare the variables and prognosis.", "This study was approved by the Institutional Review Board of Severance Hospital (approval No.: 4-2020-0120). Informed consent was waived due to the retrospective nature of the study.", " Study population During the study period, 44 patients were managed with CBD, 34 of whom were in the LGS group and 10 of whom were in the DS group. One patient from the LGS group was excluded due to follow-up loss. One patient from the DS group was excluded due to the occurrence of an adverse event before the 3-month evaluation. After the 3-month evaluation, 6 patients in the LGS group were excluded from the study due to dissatisfaction with the treatment effect, and 3 patients were excluded because of the development of adverse events. In the DS group, 1 patient was excluded due to adverse events and 2 were excluded due to financial constraints (Fig. 
2).\nLGS = Lennox-Gastaut syndrome, DS = Dravet syndrome.\nThe ages of the patients included in this study ranged from 1.2 to 15.8 years (mean age = 5.1 years), with 28 males (63.6%) and 16 females (36.4%). There were 5 patients (11.4%) undergoing diet therapy. The most frequently used antiepileptic drug at the time of the study was valproic acid (VPA) at an average dose of 22.2 mg/kg/day, followed by levetiracetam at an average dose of 20.3 mg/kg/day. Stiripentol was used only in DS patients. Patient characteristics are shown in Table 1.\nData are presented as median (range) or number (%) unless otherwise indicated.\nCBD = cannabidiol, IQR = interquartile range, EEG = electroencephalogram, MRI = magnetic resonance imaging.\naAll diet therapies used in this study were the modified Atkins diet; bAll used deflazacort.\n Efficacy in LGS When outcomes were observed in the LGS group, a reduction in seizure frequency was observed in 52.9% of the patients at 3 months and in 29.4% at 6 months. Eight patients (23.5%) were seizure-free at the 3-month evaluation, 3 patients (8.8%) had a reduction in seizure frequency of more than 50%, and 7 patients (20.6%) showed a reduction in seizure frequency of less than 50%. It can be stated that the result was good in 32.3% of patients. At the 6-month evaluation, 4 patients (11.8%) were seizure-free and 3 patients (8.8%) showed more than a 50% reduction in seizure frequency; thus, 20.6% of the patients had good outcomes (Fig. 3A).\nLGS = Lennox-Gastaut syndrome, DS = Dravet syndrome.\nWhen evaluating the initial EEGs of LGS patients, 16 cases showed GSSW or PFA, characteristic of LGS. The outcomes at the 3- and 6-month evaluations were investigated. At the 3-month evaluation, good outcomes were observed as there were more cases with an initial EEG that did not show the PFA, but at the 6-month evaluation, the opposite was observed. In the case of the GSSW pattern, good outcomes were obtained, with no cases showing the pattern at either the 3- or the 6-month evaluations; however, there was no statistically significant change in EEG activity after the administration of CBD.\nWhen compared with the baseline EEG, most of the patients who showed no improvement at the 3-month evaluation (P value = 0.017) and the 6-month evaluation (P value = 0.009) were determined to have had a “no effect” outcome.\n Efficacy in DS In the DS group, the “no effect” outcome was observed at a higher frequency at both the 3- and 6-month evaluations than it was in the LGS group. At the 3-month evaluation, 1 patient (10.0%) was seizure-free, 2 patients (20.0%) showed a more than 50% reduction in seizure frequency, and 60.0% of the enrolled patients experienced no effect. None of the patients were “seizure-free” at the 6-month evaluation, and only 2 (20.0%) showed a reduction in seizure frequency of 50% or more. At the 6-month evaluation, a “no effect” outcome was observed in 66.7% of patients (Fig. 3B).\n Safety No life-threatening adverse event was reported in the LGS and DS groups during the observation period. However, adverse events were reported in 16 (36.3%) of the enrolled patients, and GI problems were the most frequent adverse event in 7 cases (15.9%); among them, mild liver enzyme elevation was observed in 1 case, whereas most included vomiting or diarrhea. Five cases (11.4%) experienced behavioral changes, such as irritability, hyperactivity, excessive alertness, and sleep disturbances (Table 2). In the DS group, 2 cases had increased seizure frequency, which warranted the discontinuation of the medication. The other adverse events necessitating the cessation of the drug were as follows: 1 case of drowsiness, 1 case of GI problems with acute pancreatitis, and 1 case of behavioral changes.\nValues are presented as number (%).\nAED = antiepileptic drug, CBD = cannabidiol.", "During the study period, 44 patients were managed with CBD, 34 of whom were in the LGS group and 10 of whom were in the DS group. One patient from the LGS group was excluded due to follow-up loss. One patient from the DS group was excluded due to the occurrence of an adverse event before the 3-month evaluation. After the 3-month evaluation, 6 patients in the LGS group were excluded from the study due to dissatisfaction with the treatment effect, and 3 patients were excluded because of the development of adverse events. In the DS group, 1 patient was excluded due to adverse events and 2 were excluded due to financial constraints (Fig. 2).\nLGS = Lennox-Gastaut syndrome, DS = Dravet syndrome.\nThe ages of the patients included in this study ranged from 1.2 to 15.8 years (mean age = 5.1 years), with 28 males (63.6%) and 16 females (36.4%). There were 5 patients (11.4%) undergoing diet therapy. The most frequently used antiepileptic drug at the time of the study was valproic acid (VPA) at an average dose of 22.2 mg/kg/day, followed by levetiracetam at an average dose of 20.3 mg/kg/day. Stiripentol was used only in DS patients. Patient characteristics are shown in Table 1.\nData are presented as median (range) or number (%) unless otherwise indicated.\nCBD = cannabidiol, IQR = interquartile range, EEG = electroencephalogram, MRI = magnetic resonance imaging.\naAll diet therapies used in this study were the modified Atkins diet; bAll used deflazacort.", "When outcomes were observed in the LGS group, a reduction in seizure frequency was observed in 52.9% of the patients at 3 months and in 29.4% at 6 months. Eight patients (23.5%) were seizure-free at the 3-month evaluation, 3 patients (8.8%) had a reduction in seizure frequency of more than 50%, and 7 patients (20.6%) showed a reduction in seizure frequency of less than 50%. It can be stated that the result was good in 32.3% of patients. At the 6-month evaluation, 4 patients (11.8%) were seizure-free and 3 patients (8.8%) showed more than a 50% reduction in seizure frequency; thus, 20.6% of the patients had good outcomes (Fig. 
3A).\nLGS = Lennox-Gastaut syndrome, DS = Dravet syndrome.\nWhen evaluating the initial EEGs of LGS patients, 16 cases showed GSSW or PFA, characteristic of LGS. The outcomes at the 3- and 6-month evaluations were investigated. At the 3-month evaluation, good outcomes were observed as there were more cases with an initial EEG that did not show the PFA, but at the 6-month evaluation, the opposite was observed. In the case of the GSSW pattern, good outcomes were obtained, with no cases showing the pattern at either the 3- or the 6-month evaluations; however, there was no statistically significant change in EEG activity after the administration of CBD.\nWhen compared with the baseline EEG, most of the patients who showed no improvement at the 3-month evaluation (P value = 0.017) and the 6-month evaluation (P value = 0.009) were determined to have had a “no effect” outcome.", "In the DS group, the “no effect” outcome was observed at a higher frequency at both the 3- and 6-month evaluations than it was in the LGS group. At the 3-month evaluation, 1 patient (10.0%) was seizure-free, 2 patients (20.0%) showed a more than 50% reduction in seizure frequency, and 60.0% of the enrolled patients experienced no effect. None of the patients were “seizure-free” at the 6-month evaluation, and only 2 (20.0%) showed a reduction in seizure frequency of 50% or more. At the 6-month evaluation, a “no effect” outcome was observed in 66.7% of patients (Fig. 3B).", "No life-threatening adverse event was reported in the LGS and DS groups during the observation period. However, adverse events were reported in 16 (36.3%) of the enrolled patients, and GI problems were the most frequent adverse event in 7 cases (15.9%); among them, mild liver enzyme elevation was observed in 1 case, whereas most included vomiting or diarrhea. Five cases (11.4%) experienced behavioral changes, such as irritability, hyperactivity, excessive alertness, and sleep disturbances (Table 2). In the DS group, 2 cases had increased seizure frequency, which warranted the discontinuation of the medication. The other adverse events necessitating the cessation of the drug were as follows: 1 case of drowsiness, 1 case of GI problems with acute pancreatitis, and 1 case of behavioral changes.\nValues are presented as number (%).\nAED = antiepileptic drug, CBD = cannabidiol.", "This study evaluated the efficacy and safety of CBD administration for treating LGS and DS for the first time in Korea. A good treatment outcome was observed in 31.8% of patients at the 3-month evaluation and in 20.5% at the 6-month evaluation, including both the LGS and DS groups. This positive response was lower than the 36%–57% positive response within 6 months shown previously by studies that did not distinguish between the patient groups.7817\nAmong the LGS patients, a good outcome was observed in 32.4% of cases at the 3-month evaluation and in 20.6% at the 6-month evaluation, which was similar to the percentage of all patients who showed good treatment outcomes overall. When comparing the effects of CBD treatment over a similar period in other studies that assessed LGS patients, good outcomes were noted in 33%–44% of cases.1118 In our study, the percentage of good outcomes was slightly lower; however, it was within a similar range. 
According to a study by Thiele et al.11 conducted in 2018, when comparing the LGS group using CBD and a patient group receiving a placebo, the monthly frequency of drop attacks decreased by 43.9% in those receiving CBD, compared to a reduction of only 21.8% in the placebo group. The total monthly seizure frequency decreased by 41.2% in the CBD group compared to a 13.7% decrease in the placebo group. Moreover, Thiele et al.11 documented that caregivers reported an improved condition in the CBD group that was 24% more than that of the placebo group. From these results, it appears that the use of CBD in LGS patients can reduce the frequency of seizures, including drop attacks, which can be a major obstacle in everyday life; thus, CBD treatment provides the opportunity for a good treatment outcome by reducing the seizure frequency by more than 50% in LGS patients.\nIn the case of the DS group, as the number of patients enrolled was small, an accurate evaluation was difficult. Upon reviewing previous studies, the reduction in seizure frequency by more than 50% was found to be about 43%,1219 whereas, in our research, it was 30.0% at the 3-month evaluation. We evaluated the EEG changes and outcomes as we did with the LGS, but the changes do not seem to be relevant. Devinsky et al.12 reported that the monthly frequency of motor seizures experienced by the DS group using CBD decreased by a median of 6.5%, compared to a reduction of 0.8% in the placebo group. However, patients with LGS improved not only in terms of motor seizures but also in terms of seizures of other types, whereas DS patients showed no significant improvement in seizures overall, except for motor seizures.12\nAccording to a recently published study that conducted a comparison of outcomes and CBD dosages in a DS patient group, the number of cases experiencing a reduction in seizure frequency of at least 50% was higher in the CBD 20 group (20 mg/kg/day) than in the CBD 10 group (10 mg/kg/day). On the contrary, more individuals experienced a reduction in seizure frequency of more than 75% in the CBD 10 group than in the CBD 20 group.19 In a study related to LGS treatment with CBD published in 2018, a reduction in the frequency of seizures by more than 50% during the research period was observed for 36% of patients in the CBD 10 group and 39% of patients in the CBD 20 group. Moreover, the median percent reduction in monthly seizure frequency tended to decrease further at a higher dosage of CBD, except for non-drop attacks.18\nUsually, adverse events related to CBD administration are mild, and the most common adverse events are drowsiness and GI problems such as diarrhea. In rare cases, serious adverse events occur that may require hospital admission for treatment, such as status epilepticus or extrapyramidal symptoms.7820 However, in the case of status epilepticus, the causal relationship with CBD is unclear because most enrolled patients were those with refractory epilepsy, such as LGS or DS. 
In our study, similar to others, the adverse events reported included high rates of GI problems, such as diarrhea and inadequate appetite, which can be expected, as the current formulation of CBD in use is in the form of an oil.
Drowsiness and other adverse events, such as elevated liver enzymes, that have been reported in some studies7 make it essential to confirm the interaction between CBD and antiepileptic drugs such as VPA and clobazam (CLB) through further research; the reason for this is that the hepatic metabolism of CBD can inhibit cytochrome P450. This affects the metabolism of some antiepileptic drugs; in the case of CLB, it can increase the serum level of N-desmethylclobazam, which is an active metabolite of CLB.821222324
There were some limitations in this study. For example, this study was performed with a small number of enrolled patients from a single institution; primary data collection was performed through the assessment of caregivers' reports; and adverse events were investigated retrospectively and may not have been accurate. In addition, there may have been a selection bias.
However, this study is still meaningful, as it is the first pilot study to be conducted in Korea, showing that CBD can be used effectively and safely at a dose of 10 mg/kg/day for treating children of Asian ethnicity. In the future, we plan to conduct a well-designed, randomized, controlled clinical trial to evaluate CBD efficacy, safety, and drug interactions.
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, null, "discussion" ]
[ "Cannabidiol", "Lennox Gastaut Syndrome", "Dravet Syndrome" ]
INTRODUCTION: Epilepsy is associated with cognitive dysfunction and behavioral disorders. Uncontrolled seizures often affect one's quality of life, especially when they occur at a young age.12345 In particular, in epileptic encephalopathies such as Lennox-Gastaut syndrome (LGS) and Dravet syndrome (DS), epileptic activity causes severe cognitive and behavioral disorders to worsen over time.6 Purified cannabidiol (CBD) (Epidiolex®; GW Pharmaceuticals, Cambridge, UK) is a new drug that can be used to treat drug-resistant epilepsy,7 as its administration can reduce seizure frequency. Mild adverse events such as somnolence and gastrointestinal (GI) symptoms have been reported.78910 After randomized trials assessing CBD use in LGS and DS were conducted, the United States Food and Drug Administration approved the prescription of CBD for both epileptic encephalopathies in 2018.1112 CBD exerts anti-seizure effects, while another component of cannabis, Δ9-tetrahydrocannabinol (THC), has psychoactive functions.1314 In Korea, due to the addictive nature of THC, even CBD was considered an illegal drug until March 2019. The efforts of the media, patient groups, and non-governmental organizations informed the public that CBD medication contained a very small amount of THC, with a very low potential for addiction and psychoactive effects. These efforts led to the approval of CBD by the Ministry of Food and Drug Safety for the treatment of DS or LGS in those over two years of age. This study was performed to evaluate the tolerability and efficacy of CBD in children with LGS or DS for the first time in Korea.
METHODS: Patients and study design Patients who initiated CBD treatment from March 2019 to October 2019 and who were between the ages of 2 and 18 years and diagnosed with DS or LGS were included in this study. All patients diagnosed with DS met the clinical diagnostic criteria, including an early onset of febrile or vaccination-related seizures, developmental regression in relation to the seizures, and drug-resistant epilepsy. All had pathogenic SCN1A (sodium channel, voltage-gated, type I, alpha subunit) mutations. All patients diagnosed with LGS exhibited impaired intellectual function and multiple mixed-type seizures, as well as characteristic electroencephalography (EEG) patterns showing generalized sharp and slow waves (GSSW) or generalized paroxysmal fast activities (PFA). This study was conducted retrospectively and evaluated the medication efficacy and safety through outpatient visits at 3 and 6 months based on caregiver reporting. Patients received Epidiolex®, manufactured by GW Pharmaceuticals, which contained CBD (100 mg/mL). CBD was administered orally at a starting dose of 5 mg/kg/day, in addition to baseline antiepileptic drug therapy. After 1 week, the dosage was up-titrated by 5 mg/kg/day and was maintained at 10 mg/kg/day (Fig. 1). CBD = cannabidiol, EEG = electroencephalogram. Patients who initiated CBD treatment from March 2019 to October 2019 and who were between the ages of 2 and 18 years and diagnosed with DS or LGS were included in this study. All patients diagnosed with DS met the clinical diagnostic criteria, including an early onset of febrile or vaccination-related seizures, developmental regression in relation to the seizures, and drug-resistant epilepsy. All had pathogenic SCN1A (sodium channel, voltage-gated, type I, alpha subunit) mutations.
All patients diagnosed with LGS exhibited impaired intellectual function and multiple mixed-type seizures, as well as characteristic electroencephalography (EEG) patterns showing generalized sharp and slow waves (GSSW) or generalized paroxysmal fast activities (PFA). This study was conducted retrospectively and evaluated the medication efficacy and safety through outpatient visits at 3 and 6 months based on caregiver reporting. Patients received Epidiolex®, manufactured by GW Pharmaceuticals, which contained CBD (100 mg/mL). CBD was administered orally at a starting dose of 5 mg/kg/day, in addition to baseline antiepileptic drug therapy. After 1 week, the dosage was up-titrated by 5 mg/kg/day and was maintained at 10 mg/kg/day (Fig. 1). CBD = cannabidiol, EEG = electroencephalogram. Assessment of efficacy The efficacy of CBD was evaluated by comparing the mean frequency of motor seizures experienced per month, defined by the tonic, clonic, and atonic components, before and after the administration of medication. The change in the seizure frequency was expressed as a percentage that was calculated as [(seizure frequency per month) − (seizure frequency at baseline)]/(seizure frequency at baseline) × 100. Each patient's response to CBD was categorized as one of the following 4 responses: 1) the patient became seizure-free, 2) the frequency of seizures was reduced by more than 50%, 3) the frequency of seizures was reduced by less than 50%, or 4) there was no effect. A good outcome was defined as the seizure frequency being reduced by more than 50%, including being seizure-free. The additional effect of CBD was evaluated based on comparisons of the EEGs performed during each period, and each assessment was performed for 30 minutes or more. Each scalp electrode was placed in accordance with an international 10–20 system. The diagnosis was based on the characteristic interictal and ictal patterns of the EEG.15 In the LGS group, we evaluated whether the EEG improved at the 3- and 6-month visits, and the criteria were based on the persistence of the EEG characteristics of LGS or the prevalence of the epileptiform pattern.16 The efficacy of CBD was evaluated by comparing the mean frequency of motor seizures experienced per month, defined by the tonic, clonic, and atonic components, before and after the administration of medication. The change in the seizure frequency was expressed as a percentage that was calculated as [(seizure frequency per month) − (seizure frequency at baseline)]/(seizure frequency at baseline) × 100. Each patient's response to CBD was categorized as one of the following 4 responses: 1) the patient became seizure-free, 2) the frequency of seizures was reduced by more than 50%, 3) the frequency of seizures was reduced by less than 50%, or 4) there was no effect. A good outcome was defined as the seizure frequency being reduced by more than 50%, including being seizure-free. The additional effect of CBD was evaluated based on comparisons of the EEGs performed during each period, and each assessment was performed for 30 minutes or more. Each scalp electrode was placed in accordance with an international 10–20 system. 
The diagnosis was based on the characteristic interictal and ictal patterns of the EEG.15 In the LGS group, we evaluated whether the EEG improved at the 3- and 6-month visits, and the criteria were based on the persistence of the EEG characteristics of LGS or the prevalence of the epileptiform pattern.16
Assessment of safety All safety results in this report were evaluated over the entire follow-up period. Patients or caregivers documented the adverse events at each visit. Safety was assessed through laboratory studies, including complete blood counts, liver function tests, and vital signs at baseline and at the time of the outpatient clinic visits. Adverse events were managed by dose reduction or discontinuation of treatment. All safety results in this report were evaluated over the entire follow-up period. Patients or caregivers documented the adverse events at each visit. Safety was assessed through laboratory studies, including complete blood counts, liver function tests, and vital signs at baseline and at the time of the outpatient clinic visits. Adverse events were managed by dose reduction or discontinuation of treatment.
Analyses An intention-to-treat analysis was conducted to assess CBD efficacy. Data processing and analysis were performed with Statistical Package for the Social Sciences version 25 (IBM Co., Armonk, NY, USA). P values < 0.05 were regarded as statistically significant. All numbers were rounded to 2 decimal places. The statistical analyses included the independent paired-sample t-test and Fisher's exact test, and Pearson's χ2 test to compare the variables and prognosis. An intention-to-treat analysis was conducted to assess CBD efficacy. Data processing and analysis were performed with Statistical Package for the Social Sciences version 25 (IBM Co., Armonk, NY, USA). P values < 0.05 were regarded as statistically significant. All numbers were rounded to 2 decimal places. The statistical analyses included the independent paired-sample t-test and Fisher's exact test, and Pearson's χ2 test to compare the variables and prognosis.
Ethics statement This study was approved by the Institutional Review Board of Severance Hospital (approval No.: 4-2020-0120). Informed consent was waived due to the retrospective nature of the study. This study was approved by the Institutional Review Board of Severance Hospital (approval No.: 4-2020-0120). Informed consent was waived due to the retrospective nature of the study.
Patients and study design: Patients who initiated CBD treatment from March 2019 to October 2019 and who were between the ages of 2 and 18 years and diagnosed with DS or LGS were included in this study. All patients diagnosed with DS met the clinical diagnostic criteria, including an early onset of febrile or vaccination-related seizures, developmental regression in relation to the seizures, and drug-resistant epilepsy. All had pathogenic SCN1A (sodium channel, voltage-gated, type I, alpha subunit) mutations.
CBD was administered orally at a starting dose of 5 mg/kg/day, in addition to baseline antiepileptic drug therapy. After 1 week, the dosage was up-titrated by 5 mg/kg/day and was maintained at 10 mg/kg/day (Fig. 1). CBD = cannabidiol, EEG = electroencephalogram.
Assessment of efficacy: The efficacy of CBD was evaluated by comparing the mean frequency of motor seizures experienced per month, defined by the tonic, clonic, and atonic components, before and after the administration of medication. The change in the seizure frequency was expressed as a percentage that was calculated as [(seizure frequency per month) − (seizure frequency at baseline)]/(seizure frequency at baseline) × 100. Each patient's response to CBD was categorized as one of the following 4 responses: 1) the patient became seizure-free, 2) the frequency of seizures was reduced by more than 50%, 3) the frequency of seizures was reduced by less than 50%, or 4) there was no effect. A good outcome was defined as the seizure frequency being reduced by more than 50%, including being seizure-free. The additional effect of CBD was evaluated based on comparisons of the EEGs performed during each period, and each assessment was performed for 30 minutes or more. Each scalp electrode was placed in accordance with the international 10–20 system. The diagnosis was based on the characteristic interictal and ictal patterns of the EEG.15 In the LGS group, we evaluated whether the EEG improved at the 3- and 6-month visits, and the criteria were based on the persistence of the EEG characteristics of LGS or the prevalence of the epileptiform pattern.16
Assessment of safety: All safety results in this report were evaluated over the entire follow-up period. Patients or caregivers documented the adverse events at each visit. Safety was assessed through laboratory studies, including complete blood counts, liver function tests, and vital signs at baseline and at the time of the outpatient clinic visits. Adverse events were managed by dose reduction or discontinuation of treatment.
Analyses: An intention-to-treat analysis was conducted to assess CBD efficacy. Data processing and analysis were performed with Statistical Package for the Social Sciences version 25 (IBM Co., Armonk, NY, USA). P values < 0.05 were regarded as statistically significant. All numbers were rounded to 2 decimal places. The statistical analyses included the independent paired-sample t-test and Fisher's exact test, and Pearson's χ2 test to compare the variables and prognosis.
Ethics statement: This study was approved by the Institutional Review Board of Severance Hospital (approval No.: 4-2020-0120). Informed consent was waived due to the retrospective nature of the study.
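Since the Methods above define the efficacy metric as an explicit formula and a four-way categorization, a minimal Python sketch may help make it concrete. The function names and the handling of the exact 50% boundary are illustrative assumptions, not part of the study protocol.

def seizure_change_percent(baseline: float, current: float) -> float:
    """[(seizure frequency per month) - (baseline)] / (baseline) * 100, per the Methods."""
    if baseline <= 0:
        raise ValueError("baseline seizure frequency must be positive")
    return (current - baseline) / baseline * 100.0

def categorize_response(baseline: float, current: float) -> str:
    """Map a patient's change in monthly seizure frequency to the study's 4 categories."""
    if current == 0:
        return "seizure-free"
    change = seizure_change_percent(baseline, current)
    if change < -50:           # reduced by more than 50%
        return "reduction > 50%"
    if change < 0:             # reduced, but by less than 50%
        return "reduction < 50%"
    return "no effect"         # unchanged or increased

def is_good_outcome(baseline: float, current: float) -> bool:
    """A good outcome per the Methods: seizure-free or a reduction of more than 50%."""
    return categorize_response(baseline, current) in ("seizure-free", "reduction > 50%")

# Example: 40 seizures/month at baseline, 12 seizures/month on CBD
print(seizure_change_percent(40, 12))  # -70.0
print(categorize_response(40, 12))     # reduction > 50%
print(is_good_outcome(40, 12))         # True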
The ages of the patients included in this study ranged from 1.2 to 15.8 years (mean age = 5.1 years), with 28 males (63.6%) and 16 females (36.4%). There were 5 patients (11.4%) undergoing diet therapy. The most frequently used antiepileptic drug at the time of the study was valproic acid (VPA) at an average dose of 22.2 mg/kg/day, followed by levetiracetam at an average dose of 20.3 mg/kg/day. Stiripentol was used only in DS patients. Patient characteristics are shown in Table 1. Data are presented as median (range) or number (%) unless otherwise indicated. CBD = cannabidiol, IQR = interquartile range, EEG = electroencephalogram, MRI = magnetic resonance imaging. aAll diet therapies used in this study were the modified Atkins diet; bAll used deflazacort. During the study period, 44 patients were managed with CBD, 34 of whom were in the LGS group and 10 of whom were in the DS group. One patient from the LGS group was excluded due to follow-up loss. One patient from the DS group was excluded due to the occurrence of an adverse event before the 3-month evaluation. After the 3-month evaluation, 6 patients in the LGS group were excluded from the study due to dissatisfaction with the treatment effect, and 3 patients were excluded because of the development of adverse events. In the DS group, 1 patient was excluded due to adverse events and 2 were excluded due to financial constraints (Fig. 2). LGS = Lennox-Gastaut syndrome, DS = Dravet syndrome. The ages of the patients included in this study ranged from 1.2 to 15.8 years (mean age = 5.1 years), with 28 males (63.6%) and 16 females (36.4%). There were 5 patients (11.4%) undergoing diet therapy. The most frequently used antiepileptic drug at the time of the study was valproic acid (VPA) at an average dose of 22.2 mg/kg/day, followed by levetiracetam at an average dose of 20.3 mg/kg/day. Stiripentol was used only in DS patients. Patient characteristics are shown in Table 1. Data are presented as median (range) or number (%) unless otherwise indicated. CBD = cannabidiol, IQR = interquartile range, EEG = electroencephalogram, MRI = magnetic resonance imaging. aAll diet therapies used in this study were the modified Atkins diet; bAll used deflazacort. Efficacy in LGS When outcomes were observed in the LGS group, a reduction in seizure frequency was observed in 52.9% of the patients at 3 months and in 29.4% at 6 months. Eight patients (23.5%) were seizure-free at the 3-month evaluation, 3 patients (8.8%) had a reduction in seizure frequency of more than 50%, and 7 patients (20.6%) showed a reduction in seizure frequency of less than 50%. It can be stated that the result was good in 32.3% of patients. At the 6-month evaluation, 4 patients (11.8%) were seizure-free and 3 patients (8.8%) showed more than a 50% reduction in seizure frequency; thus, 20.6% of the patients had good outcomes (Fig. 3A). LGS = Lennox-Gastaut syndrome, DS = Dravet syndrome. When evaluating the initial EEGs of LGS patients, 16 cases showed GSSW or PFA, characteristic of LGS. The outcomes at the 3- and 6-month evaluations were investigated. At the 3-month evaluation, good outcomes were observed as there were more cases with an initial EEG that did not show the PFA, but at the 6-month evaluation, the opposite was observed. In the case of the GSSW pattern, good outcomes were obtained, with no cases showing the pattern at either the 3- or the 6-month evaluations; however, there was no statistically significant change in EEG activity after the administration of CBD. 
When compared with the baseline EEG, most of the patients who showed no improvement at the 3-month evaluation (P value = 0.017) and the 6-month evaluation (P value = 0.009) were determined to have had a “no effect” outcome. When outcomes were observed in the LGS group, a reduction in seizure frequency was observed in 52.9% of the patients at 3 months and in 29.4% at 6 months. Eight patients (23.5%) were seizure-free at the 3-month evaluation, 3 patients (8.8%) had a reduction in seizure frequency of more than 50%, and 7 patients (20.6%) showed a reduction in seizure frequency of less than 50%. It can be stated that the result was good in 32.3% of patients. At the 6-month evaluation, 4 patients (11.8%) were seizure-free and 3 patients (8.8%) showed more than a 50% reduction in seizure frequency; thus, 20.6% of the patients had good outcomes (Fig. 3A). LGS = Lennox-Gastaut syndrome, DS = Dravet syndrome. When evaluating the initial EEGs of LGS patients, 16 cases showed GSSW or PFA, characteristic of LGS. The outcomes at the 3- and 6-month evaluations were investigated. At the 3-month evaluation, good outcomes were more frequent among cases whose initial EEG did not show PFA, but at the 6-month evaluation, the opposite was observed. In the case of the GSSW pattern, good outcomes were obtained, with no cases showing the pattern at either the 3- or the 6-month evaluations; however, there was no statistically significant change in EEG activity after the administration of CBD. When compared with the baseline EEG, most of the patients who showed no improvement at the 3-month evaluation (P value = 0.017) and the 6-month evaluation (P value = 0.009) were determined to have had a “no effect” outcome. Efficacy in DS In the DS group, the “no effect” outcome was observed at a higher frequency at both the 3- and 6-month evaluations than it was in the LGS group. At the 3-month evaluation, 1 patient (10.0%) was seizure-free, 2 patients (20.0%) showed a more than 50% reduction in seizure frequency, and 60.0% of the enrolled patients experienced no effect. None of the patients were “seizure-free” at the 6-month evaluation, and only 2 (20.0%) showed a reduction in seizure frequency of 50% or more. At the 6-month evaluation, a “no effect” outcome was observed in 66.7% of patients (Fig. 3B). In the DS group, the “no effect” outcome was observed at a higher frequency at both the 3- and 6-month evaluations than it was in the LGS group. At the 3-month evaluation, 1 patient (10.0%) was seizure-free, 2 patients (20.0%) showed a more than 50% reduction in seizure frequency, and 60.0% of the enrolled patients experienced no effect. None of the patients were “seizure-free” at the 6-month evaluation, and only 2 (20.0%) showed a reduction in seizure frequency of 50% or more. At the 6-month evaluation, a “no effect” outcome was observed in 66.7% of patients (Fig. 3B). Safety No life-threatening adverse event was reported in the LGS and DS groups during the observation period. However, adverse events were reported in 16 (36.3%) of the enrolled patients, and GI problems were the most frequent adverse event in 7 cases (15.9%); among them, mild liver enzyme elevation was observed in 1 case, whereas most included vomiting or diarrhea. Five cases (11.4%) experienced behavioral changes, such as irritability, hyperactivity, excessive alertness, and sleep disturbances (Table 2). In the DS group, 2 cases had increased seizure frequency, which warranted the discontinuation of the medication.
The other adverse events necessitating the cessation of the drug were as follows: 1 case of drowsiness, 1 case of GI problems with acute pancreatitis, and 1 case of behavioral changes. Values are presented as number (%). AED = antiepileptic drug, CBD = cannabidiol. No life-threatening adverse event was reported in the LGS and DS groups during the observation period. However, adverse events were reported in 16 (36.3%) of the enrolled patients, and GI problems were the most frequent adverse event in 7 cases (15.9%); among them, mild liver enzyme elevation was observed in 1 case, whereas most included vomiting or diarrhea. Five cases (11.4%) experienced behavioral changes, such as irritability, hyperactivity, excessive alertness, and sleep disturbances (Table 2). In the DS group, 2 cases had increased seizure frequency, which warranted the discontinuation of the medication. The other adverse events necessitating the cessation of the drug were as follows: 1 case of drowsiness, 1 case of GI problems with acute pancreatitis, and 1 case of behavioral changes. Values are presented as number (%). AED = antiepileptic drug, CBD = cannabidiol. Study population: During the study period, 44 patients were managed with CBD, 34 of whom were in the LGS group and 10 of whom were in the DS group. One patient from the LGS group was excluded due to follow-up loss. One patient from the DS group was excluded due to the occurrence of an adverse event before the 3-month evaluation. After the 3-month evaluation, 6 patients in the LGS group were excluded from the study due to dissatisfaction with the treatment effect, and 3 patients were excluded because of the development of adverse events. In the DS group, 1 patient was excluded due to adverse events and 2 were excluded due to financial constraints (Fig. 2). LGS = Lennox-Gastaut syndrome, DS = Dravet syndrome. The ages of the patients included in this study ranged from 1.2 to 15.8 years (mean age = 5.1 years), with 28 males (63.6%) and 16 females (36.4%). There were 5 patients (11.4%) undergoing diet therapy. The most frequently used antiepileptic drug at the time of the study was valproic acid (VPA) at an average dose of 22.2 mg/kg/day, followed by levetiracetam at an average dose of 20.3 mg/kg/day. Stiripentol was used only in DS patients. Patient characteristics are shown in Table 1. Data are presented as median (range) or number (%) unless otherwise indicated. CBD = cannabidiol, IQR = interquartile range, EEG = electroencephalogram, MRI = magnetic resonance imaging. aAll diet therapies used in this study were the modified Atkins diet; bAll used deflazacort. Efficacy in LGS: When outcomes were observed in the LGS group, a reduction in seizure frequency was observed in 52.9% of the patients at 3 months and in 29.4% at 6 months. Eight patients (23.5%) were seizure-free at the 3-month evaluation, 3 patients (8.8%) had a reduction in seizure frequency of more than 50%, and 7 patients (20.6%) showed a reduction in seizure frequency of less than 50%. It can be stated that the result was good in 32.3% of patients. At the 6-month evaluation, 4 patients (11.8%) were seizure-free and 3 patients (8.8%) showed more than a 50% reduction in seizure frequency; thus, 20.6% of the patients had good outcomes (Fig. 3A). LGS = Lennox-Gastaut syndrome, DS = Dravet syndrome. When evaluating the initial EEGs of LGS patients, 16 cases showed GSSW or PFA, characteristic of LGS. The outcomes at the 3- and 6-month evaluations were investigated. 
At the 3-month evaluation, good outcomes were more frequent among cases whose initial EEG did not show PFA, but at the 6-month evaluation, the opposite was observed. In the case of the GSSW pattern, good outcomes were obtained, with no cases showing the pattern at either the 3- or the 6-month evaluations; however, there was no statistically significant change in EEG activity after the administration of CBD. When compared with the baseline EEG, most of the patients who showed no improvement at the 3-month evaluation (P value = 0.017) and the 6-month evaluation (P value = 0.009) were determined to have had a “no effect” outcome. Efficacy in DS: In the DS group, the “no effect” outcome was observed at a higher frequency at both the 3- and 6-month evaluations than it was in the LGS group. At the 3-month evaluation, 1 patient (10.0%) was seizure-free, 2 patients (20.0%) showed a more than 50% reduction in seizure frequency, and 60.0% of the enrolled patients experienced no effect. None of the patients were “seizure-free” at the 6-month evaluation, and only 2 (20.0%) showed a reduction in seizure frequency of 50% or more. At the 6-month evaluation, a “no effect” outcome was observed in 66.7% of patients (Fig. 3B). Safety: No life-threatening adverse event was reported in the LGS and DS groups during the observation period. However, adverse events were reported in 16 (36.3%) of the enrolled patients, and GI problems were the most frequent adverse event in 7 cases (15.9%); among them, mild liver enzyme elevation was observed in 1 case, whereas most included vomiting or diarrhea. Five cases (11.4%) experienced behavioral changes, such as irritability, hyperactivity, excessive alertness, and sleep disturbances (Table 2). In the DS group, 2 cases had increased seizure frequency, which warranted the discontinuation of the medication. The other adverse events necessitating the cessation of the drug were as follows: 1 case of drowsiness, 1 case of GI problems with acute pancreatitis, and 1 case of behavioral changes. Values are presented as number (%). AED = antiepileptic drug, CBD = cannabidiol.
Moreover, Thiele et al.11 documented that caregivers reported an improved condition for 24% more patients in the CBD group than in the placebo group. From these results, it appears that the use of CBD in LGS patients can reduce the frequency of seizures, including drop attacks, which can be a major obstacle in everyday life; thus, CBD treatment provides the opportunity for a good treatment outcome by reducing the seizure frequency by more than 50% in LGS patients. In the case of the DS group, as the number of patients enrolled was small, an accurate evaluation was difficult. Upon reviewing previous studies, the reduction in seizure frequency by more than 50% was found to be about 43%,1219 whereas, in our research, it was 30.0% at the 3-month evaluation. We evaluated the EEG changes and outcomes as we did with the LGS, but the EEG changes did not appear to be related to the outcomes. Devinsky et al.12 reported that the monthly frequency of motor seizures experienced by the DS group using CBD decreased by a median of 6.5%, compared to a reduction of 0.8% in the placebo group. However, patients with LGS improved not only in terms of motor seizures but also in terms of seizures of other types, whereas DS patients showed no significant improvement in seizures overall, except for motor seizures.12 According to a recently published study that conducted a comparison of outcomes and CBD dosages in a DS patient group, the number of cases experiencing a reduction in seizure frequency of at least 50% was higher in the CBD 20 group (20 mg/kg/day) than in the CBD 10 group (10 mg/kg/day). Conversely, more individuals experienced a reduction in seizure frequency of more than 75% in the CBD 10 group than in the CBD 20 group.19 In a study related to LGS treatment with CBD published in 2018, a reduction in the frequency of seizures by more than 50% during the research period was observed for 36% of patients in the CBD 10 group and 39% of patients in the CBD 20 group. Moreover, the median percent reduction in monthly seizure frequency tended to be greater at a higher dosage of CBD, except for non-drop attacks.18 Usually, adverse events related to CBD administration are mild, and the most common adverse events are drowsiness and GI problems such as diarrhea. In rare cases, serious adverse events occur that may require hospital admission for treatment, such as status epilepticus or extrapyramidal symptoms.7820 However, in the case of status epilepticus, the causal relationship with CBD is unclear because most enrolled patients were those with refractory epilepsy, such as LGS or DS. In our study, similar to others, the adverse events reported included high rates of GI problems, such as diarrhea and inadequate appetite, which can be expected, as the current formulation of CBD in use is in the form of an oil. Drowsiness and other adverse events, such as elevated liver enzymes, that have been reported in some studies7 make it essential to confirm the interaction between CBD and antiepileptic drugs such as VPA and clobazam (CLB) through further research; the reason for this is that the hepatic metabolism of CBD can inhibit cytochrome P450. This affects the metabolism of some antiepileptic drugs; in the case of CLB, it can increase the serum level of N-desmethylclobazam, which is an active metabolite of CLB.821222324 There were some limitations in this study.
For example, this study was performed with a small number of enrolled patients from a single institution; primary data collection was performed through the assessment of caregivers' reports; and adverse events were investigated retrospectively and may not have been accurate. In addition, there may have been a selection bias. However, this study is still meaningful, as it is the first pilot study to be conducted in Korea, showing that CBD can be used effectively and safely at a dose of 10 mg/kg/day for treating children of Asian ethnicity. In the future, we plan to conduct a well-designed, randomized, controlled clinical trial to evaluate CBD efficacy, safety, and drug interactions.
Background: For the first time in Korea, we aimed to study the efficacy and safety of cannabidiol (CBD), which is emerging as a new alternative in treating epileptic encephalopathies. Methods: This study was conducted retrospectively; patients between the ages of 2-18 years diagnosed with Lennox-Gastaut syndrome (LGS) or Dravet syndrome (DS) were enrolled from March to October 2019 and visited the outpatient unit at 3 and 6 months to evaluate medication efficacy and safety based on caregiver reporting. Additional evaluations, such as electroencephalography and blood tests, were also conducted at each period. CBD was administered orally at a starting dose of 5 mg/kg/day and was maintained at 10 mg/kg/day. Results: We analyzed 34 patients in the LGS group and 10 patients in the DS group, between the ages of 1.2-15.8 years. At the 3-month evaluation, seizure frequency was reduced in 52.9% of the LGS group (a >50% reduction in 32.3% of the cases), and at the 6-month evaluation in 29.4% (a more than 50% reduction in 20.6%). In the DS group, a reduction in seizure frequency of more than 50% was seen in 30% and 20% of patients at the 3-month and 6-month evaluations, respectively. A good outcome was defined as a reduction in seizure frequency of more than 50%, and similar results were observed in both the LGS and DS groups. Adverse events were reported in 36.3% of all patients, of which the most common were gastrointestinal problems. However, no life-threatening adverse event was reported in either the LGS or the DS group during the observation period. Conclusions: In this first Korean study, CBD was safe and tolerable for use and could be expected to reduce seizure frequency in pediatric patients with LGS or DS.
null
null
6,463
358
[ 245, 254, 73, 89, 37, 315, 339, 142, 175 ]
13
[ "patients", "cbd", "seizure", "frequency", "lgs", "month", "group", "ds", "seizure frequency", "study" ]
[ "cbd cannabidiol study", "cbd antiepileptic drugs", "cbd cannabidiol eeg", "cbd epileptic encephalopathies", "cannabidiol cbd epidiolex" ]
null
null
[CONTENT] Cannabidiol | Lennox Gastaut Syndrome | Dravet Syndrome [SUMMARY]
[CONTENT] Cannabidiol | Lennox Gastaut Syndrome | Dravet Syndrome [SUMMARY]
[CONTENT] Cannabidiol | Lennox Gastaut Syndrome | Dravet Syndrome [SUMMARY]
null
[CONTENT] Cannabidiol | Lennox Gastaut Syndrome | Dravet Syndrome [SUMMARY]
null
[CONTENT] Adolescent | Anticonvulsants | Cannabidiol | Caregivers | Child | Child, Preschool | Electroencephalography | Epilepsies, Myoclonic | Epilepsy | Female | Humans | Infant | Lennox Gastaut Syndrome | Male | Patient Safety | Republic of Korea | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Adolescent | Anticonvulsants | Cannabidiol | Caregivers | Child | Child, Preschool | Electroencephalography | Epilepsies, Myoclonic | Epilepsy | Female | Humans | Infant | Lennox Gastaut Syndrome | Male | Patient Safety | Republic of Korea | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Adolescent | Anticonvulsants | Cannabidiol | Caregivers | Child | Child, Preschool | Electroencephalography | Epilepsies, Myoclonic | Epilepsy | Female | Humans | Infant | Lennox Gastaut Syndrome | Male | Patient Safety | Republic of Korea | Retrospective Studies | Treatment Outcome [SUMMARY]
null
[CONTENT] Adolescent | Anticonvulsants | Cannabidiol | Caregivers | Child | Child, Preschool | Electroencephalography | Epilepsies, Myoclonic | Epilepsy | Female | Humans | Infant | Lennox Gastaut Syndrome | Male | Patient Safety | Republic of Korea | Retrospective Studies | Treatment Outcome [SUMMARY]
null
[CONTENT] cbd cannabidiol study | cbd antiepileptic drugs | cbd cannabidiol eeg | cbd epileptic encephalopathies | cannabidiol cbd epidiolex [SUMMARY]
[CONTENT] cbd cannabidiol study | cbd antiepileptic drugs | cbd cannabidiol eeg | cbd epileptic encephalopathies | cannabidiol cbd epidiolex [SUMMARY]
[CONTENT] cbd cannabidiol study | cbd antiepileptic drugs | cbd cannabidiol eeg | cbd epileptic encephalopathies | cannabidiol cbd epidiolex [SUMMARY]
null
[CONTENT] cbd cannabidiol study | cbd antiepileptic drugs | cbd cannabidiol eeg | cbd epileptic encephalopathies | cannabidiol cbd epidiolex [SUMMARY]
null
[CONTENT] patients | cbd | seizure | frequency | lgs | month | group | ds | seizure frequency | study [SUMMARY]
[CONTENT] patients | cbd | seizure | frequency | lgs | month | group | ds | seizure frequency | study [SUMMARY]
[CONTENT] patients | cbd | seizure | frequency | lgs | month | group | ds | seizure frequency | study [SUMMARY]
null
[CONTENT] patients | cbd | seizure | frequency | lgs | month | group | ds | seizure frequency | study [SUMMARY]
null
[CONTENT] cbd | thc | epileptic | drug | disorders | food | behavioral disorders | psychoactive | encephalopathies | food drug [SUMMARY]
[CONTENT] frequency | seizures | seizure | cbd | evaluated | based | eeg | diagnosed | reduced | reduced 50 [SUMMARY]
[CONTENT] patients | month | month evaluation | evaluation | seizure | excluded | group | lgs | observed | ds [SUMMARY]
null
[CONTENT] patients | seizure | month | frequency | cbd | evaluation | lgs | month evaluation | study | group [SUMMARY]
null
[CONTENT] first | Korea [SUMMARY]
[CONTENT] between the ages of 2-18 years | Lennox-Gastaut | LGS | Dravet | March to October 2019 | 3 and 6 months ||| ||| 5 mg/kg/day | 10 mg/kg/day [SUMMARY]
[CONTENT] 34 | LGS | 10 | DS | between the ages of 1.2-15.8 years ||| 3-month | LGS | 52.9% | 50% | 32.3% | 29.4% | 6-month | more than 50% | 20.6% ||| DS | more than 50% | 30% and 20% | 3-month | 6-month ||| more than 50% | LGS ||| 36.3% ||| LGS [SUMMARY]
null
[CONTENT] first | Korea ||| between the ages of 2-18 years | Lennox-Gastaut | LGS | Dravet | March to October 2019 | 3 and 6 months ||| ||| 5 mg/kg/day | 10 mg/kg/day ||| 34 | LGS | 10 | DS | between the ages of 1.2-15.8 years ||| 3-month | LGS | 52.9% | 50% | 32.3% | 29.4% | 6-month | more than 50% | 20.6% ||| DS | more than 50% | 30% and 20% | 3-month | 6-month ||| more than 50% | LGS ||| 36.3% ||| LGS ||| first | Korean | LGS [SUMMARY]
null
Effect of Different Nutritional Education Based on Healthy Eating Index for HemoDialysis Patients on Dietary Quality and Muscle Mass.
36364878
Hemodialysis patients are at high risk of muscle loss as a result of aging and disease, combined with inadequate dietary intake. The Healthy Eating Index for HemoDialysis patients (HEI-HD) was developed to assess the dietary quality of hemodialysis patients. The purposes of this study were to examine the effects of different nutritional education models using HEI-HD-based education on dietary quality and muscle mass in hemodialysis patients.
BACKGROUND
A quasi-experimental study was conducted from May 2019 to April 2021, with four groups: no course for patients and nurses (Non-C), course for nurses (CN), course for patients (CP), and course for patients and nurses (CPN). The courses were delivered by registered dietitians. The data of 94 patients were collected and analyzed at baseline, after 2 months of intervention, and after 2 months of follow-up, including demographics, body composition, 3-day dietary records, and hemodialysis dietary knowledge. The HEI-HD index score was calculated.
METHODS
Patients were aged 58.3 ± 10.1 years. The change in dietary quality in the CPN group was improved compared with the Non-C group (Non-C -3.4 ± 9.5 vs. CPN 3.0 ± 5.5, p = 0.04). The skeletal muscle mass of the Non-C group after the intervention was also significantly lower than at baseline, whereas that of the CPN group was not.
RESULTS
The HEI-HD-based nutritional education for both patients and nurses showed a positive effect on improving the dietary quality and maintaining muscle mass in hemodialysis patients.
CONCLUSIONS
[ "Humans", "Diet, Healthy", "Diet", "Renal Dialysis", "Diet Records", "Muscles", "Nutritional Status" ]
9658203
1. Introduction
End-stage renal disease (ESRD) has been a public issue of global concern, with 90% of countries using hemodialysis (HD) as a treatment modality [1]. Most HD patients have difficulty maintaining dietary compliance due to the stress of dietary restrictions, which eventually leads to malnutrition [2]. Malnutrition in patients with chronic kidney disease is called protein-energy wasting (PEW), caused by a marked reduction in the body's stores of protein and energy. Muscle wasting is the most direct and valid criterion for PEW [3]. Skeletal muscle accounts for 30–40% of the whole body and maintains basal energy metabolism [4], physical activity, and daily life [5]. However, HD patients often gradually lose muscle mass with age, treatment, hormonal changes, and disease progression [6], resulting in frailty, decreased quality of life, falls, and even fractures, which increase the risk of death in the long run [7,8]. Dietary quality considers whether the individual’s diet complies with dietary recommendations and healthy diet principles [9]. The Dietary Quality Index is an index developed following national dietary guidelines or specific disease recommendations that reflects patients’ compliance with dietary recommendations and dietary changes [10]. The Healthy Eating Index for HemoDialysis patients (HEI-HD) was developed in 2014 [11]. It was constructed based on the healthy eating pattern of HD patients [12,13,14,15,16,17,18,19,20,21]. HEI-HD contains 16 diet items. The total score of HEI-HD ranges between 0 and 100. The higher the score, the better the dietary quality; higher scores have been associated with a 4% reduction in the risk of death in HD patients [19]. The dietary principles for kidney disease are so complex that patients have difficulty with dietary compliance [22]. Previous studies proposed that patients are reluctant to follow dietary principles because they do not have sufficient knowledge or lack the skills to self-manage their diet [23]. Therefore, appropriate education and reinforcement of knowledge concepts enhance the understanding of the HD diet principles and encourage diet compliance [24]. The Health Belief Model (HBM), as a guiding framework for educational interventions, is the most widely applied theory of health education, which describes changes in health-related behaviors depending on personal beliefs or attitudes toward disease [25]. Shariatjafari et al. (2012) showed that nutritional education based on HBM was effective in promoting adherence to dietary guidelines in the participants [26]. In the routine medical care of HD centers, nurses have the most frequent contact with HD patients and often help patients to adapt to their problems by providing information [27]. Nutrition education offered by dietitians could therefore be more helpful for HD patients; however, the ratio of dietitians to patients in HD centers is low, and nutrition care is not included in the regular care process under the health insurance system [28]. Therefore, we conducted a quasi-experimental study to examine the effects of HEI-HD-based nutritional education models on dietary quality and muscle mass in HD patients.
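To make the index arithmetic concrete, the sketch below totals a set of component scores in the way a 16-item, 0–100 index such as HEI-HD implies. The component names follow the Methods section of this study, but the per-item maximum points are placeholders chosen only so that they sum to 100; the published HEI-HD weights are not reproduced here.

# Component names follow the Methods; the maximum points per item are
# illustrative placeholders (chosen to sum to 100), not the published weights.
HEI_HD_MAX_POINTS = {
    "total grains": 5, "total protein foods": 5, "total vegetable": 5,
    "whole fruits": 5, "oils": 5, "high biological value proteins": 10,
    "ratio of white to red meat": 5, "processed meat": 5,
    "fish and seafood": 5, "saturated fatty acids rich oils": 5,
    "sugar-sweetened beverages and fruit juice": 10, "alcohol": 5,
    "whole grains": 10, "nuts": 5, "taste": 5, "milk and dairy products": 10,
}

def total_hei_hd(component_scores: dict) -> float:
    """Sum the 16 component scores, clamping each to [0, max]; total lies in 0-100."""
    return sum(
        min(max(component_scores.get(item, 0.0), 0.0), max_pts)
        for item, max_pts in HEI_HD_MAX_POINTS.items()
    )

# A diet scoring the maximum on every component reaches the 100-point ceiling
print(total_hei_hd(dict(HEI_HD_MAX_POINTS)))  # 100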
null
null
3. Results
3.1. Characteristics of Participants Ninety-four HD patients’ data were analyzed (Figure 1). The mean age of the HD patients was 58.3 ± 10.1 years, and 64.9% were men. The mean total HEI-HD score and SMM were 65.6 ± 8.3 and 26.3 ± 5.7 kg, respectively. A statistically significant difference in marital status existed among the four groups (Table 2). Ninety-four HD patients’ data were analyzed (Figure 1). The mean age of the HD patients was 58.3 ± 10.1 years, and 64.9% were men. The mean total HEI-HD score and SMM were 65.6 ± 8.3 and 26.3 ± 5.7 kg, respectively. A statistically significant difference in marital status existed among the four groups (Table 2).
3.2. Comparison of Changes in Dietary Knowledge among the Groups The dietary knowledge score in the Non-C group at T2 was significantly higher than at T1 (Table 3). The dietary knowledge score in the Non-C group at T2 was significantly higher than at T1 (Table 3).
3.3. Comparison of Changes in HEI-HD among the Groups After two months of intervention, there was a significant increase in vegetable scores in the CPN group and the CP group but not in the Non-C group. The change in the ratio of white to red meat score in the CPN group was significantly increased. In comparison with T0, at the end of T2, the change in vegetable scores in the CP group was significantly higher than in the Non-C group, the ratio of white to red meat scores of the CPN group was significantly higher than in the CP group, and the total HEI-HD score of the CPN group was significantly higher than in the Non-C group. There was no significant difference among the four groups in the T2 to T1 period (Table 4). After two months of intervention, there was a significant increase in vegetable scores in the CPN group and the CP group but not in the Non-C group. The change in the ratio of white to red meat score in the CPN group was significantly increased. In comparison with T0, at the end of T2, the change in vegetable scores in the CP group was significantly higher than in the Non-C group, the ratio of white to red meat scores of the CPN group was significantly higher than in the CP group, and the total HEI-HD score of the CPN group was significantly higher than in the Non-C group. There was no significant difference among the four groups in the T2 to T1 period (Table 4).
3.4. Comparison of Changes in Skeletal Muscle Mass among the Groups There was no significant difference in muscle mass among the four groups. Compared with T0, all patients’ SMM, SMMHt2, and SMMWt were significantly decreased at T1. The SMM and SMMHt2 of the Non-C group at T1 were also significantly lower than at T0. At T2, all patients’ SMM, SMMHt2, and SMMWt were significantly higher than at T1, but only the SMM of the Non-C group and the CN group was significantly better than at T1, with or without adjustment (Table 5). There was no significant difference in muscle mass among the four groups. Compared with T0, all patients’ SMM, SMMHt2, and SMMWt were significantly decreased at T1. The SMM and SMMHt2 of the Non-C group at T1 were also significantly lower than at T0. At T2, all patients’ SMM, SMMHt2, and SMMWt were significantly higher than at T1, but only the SMM of the Non-C group and the CN group was significantly better than at T1, with or without adjustment (Table 5).
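The muscle-mass indicators compared in Section 3.4 (SMM, SMMHt2, and SMMWt) follow directly from their stated units. As a hedged Python sketch, assuming the conventional definitions implied by those units (kg/m2 for height adjustment and percent of body weight for weight adjustment), which are inferred rather than quoted from the paper:

def smm_ht2(smm_kg: float, height_m: float) -> float:
    """Height-adjusted skeletal muscle mass, SMMHt2 (kg/m^2); formula inferred from units."""
    return smm_kg / height_m ** 2

def smm_wt(smm_kg: float, weight_kg: float) -> float:
    """Weight-adjusted skeletal muscle mass, SMMWt (%); formula inferred from units."""
    return smm_kg / weight_kg * 100.0

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index (kg/m^2), as computed from height and dry weight in the Methods."""
    return weight_kg / height_m ** 2

# Example with the cohort mean SMM (26.3 kg) and a hypothetical height/dry weight
print(round(smm_ht2(26.3, 1.65), 1))  # 9.7 kg/m^2
print(round(smm_wt(26.3, 62.0), 1))   # 42.4 %
print(round(bmi(62.0, 1.65), 1))      # 22.8 kg/m^2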
5. Conclusions
The HEI-HD-based nutritional education was effective in improving patients’ dietary quality, and the dietitians’ model educating both patients and nurses was the most effective in improving dietary quality and maintaining skeletal muscle mass in HD patients.
[ "2. Materials and Methods", "2.1. Study Design and Participants", "2.2. Nutrition Education Booklet and Education Program", "2.3. Data Collection and Measurements", "2.3.1. Demographic Data", "2.3.2. Anthropometry and Body Composition", "2.3.3. Dietary Intake", "2.3.4. Dietary Knowledge", "2.3.5. Physical Activity", "2.4. Statistical Analysis", "3.2. Comparison of Changes in Dietary Knowledge among the Groups", "3.3. Comparison of Changes in HEI-HD among the Groups", "3.4. Comparison of Changes in Skeletal Muscle Mass among the Groups" ]
[ "2.1. Study Design and Participants A quasi-experimental study was conducted from May 2019 to April 2021 at HD centers of four hospitals in northern Taiwan. This study was approved by the Taipei Medical University Joint Institutional Review Board (no. N201801034) for Taipei Medical University Hospital, Taipei Medical University-Wan Fang Hospital, Taipei Medical University-Shuang Ho Hospital, and approved by the institutional Ethics Committee of Cathay General Hospital (no. CGH-OP108007) for Cathay General Hospital. All participants were fully informed and signed informed consent forms before their participation.\nPatients included were those aged 20–75 years, received HD treatment thrice a week for at least 3 months, the education level of junior high school and higher, and Kt/V > 1.2. Patients with obvious edema, pregnancy, amputation, hyperthyroidism, hypothyroidism, malignancy, liver failure or cancer, mental disorders, tube feeding, hospitalization and plan to surgery, loss to measure body composition, and percentage body fat < 4% were excluded.\nThe study recruited 141 patients who met the criteria. All study subjects received a nutritional education booklet. The subjects were divided into four groups according to the hospital. Those were no courses for patients and nurses (Non-C) group (only provided education booklet); courses for nurses (CN) group (provided nutrition course for nurses); courses for patients (CP) group (provided nutrition course for HD patients); and courses for patients and nurses (CPN) group (provided nutrition course for nurses and HD patients). This study period was divided into baseline (T0), intervention (T1), and follow-up (T2).\nA quasi-experimental study was conducted from May 2019 to April 2021 at HD centers of four hospitals in northern Taiwan. This study was approved by the Taipei Medical University Joint Institutional Review Board (no. N201801034) for Taipei Medical University Hospital, Taipei Medical University-Wan Fang Hospital, Taipei Medical University-Shuang Ho Hospital, and approved by the institutional Ethics Committee of Cathay General Hospital (no. CGH-OP108007) for Cathay General Hospital. All participants were fully informed and signed informed consent forms before their participation.\nPatients included were those aged 20–75 years, received HD treatment thrice a week for at least 3 months, the education level of junior high school and higher, and Kt/V > 1.2. Patients with obvious edema, pregnancy, amputation, hyperthyroidism, hypothyroidism, malignancy, liver failure or cancer, mental disorders, tube feeding, hospitalization and plan to surgery, loss to measure body composition, and percentage body fat < 4% were excluded.\nThe study recruited 141 patients who met the criteria. All study subjects received a nutritional education booklet. The subjects were divided into four groups according to the hospital. Those were no courses for patients and nurses (Non-C) group (only provided education booklet); courses for nurses (CN) group (provided nutrition course for nurses); courses for patients (CP) group (provided nutrition course for HD patients); and courses for patients and nurses (CPN) group (provided nutrition course for nurses and HD patients). This study period was divided into baseline (T0), intervention (T1), and follow-up (T2).\n2.2. Nutrition Education Booklet and Education Program All participants received a nutritional education booklet. 
The nutritional education booklet was developed based on the HEI-HD to increase patient knowledge, promote positive attitudes, and change eating behaviors. The contents of the educational booklet were reviewed and corrected by six renal dietitians and three nephrologists. Table 1 shows the 9 chapters and the contents of the educational booklet. Nutrition education was offered at T1. A dietitian provided one-on-one, 15–20 min/week personalized nutrition education at the patient’s bedside in the first month and a 15–20 min/month personalized nutrition education with a special focus on skills to improve the low-scoring dietary items to achieve a higher score in the second month. A 10-min group nutritional education session was provided by a dietitian to nurses at the beginning of T1 to address the incorrect answers to the dietary knowledge questionnaire.
All participants received a nutritional education booklet. The nutritional education booklet was developed based on the HEI-HD to increase patient knowledge, promote positive attitudes, and change eating behaviors. The contents of the educational booklet were reviewed and corrected by six renal dietitians and three nephrologists. Table 1 shows the 9 chapters and the contents of the educational booklet. Nutrition education was offered at T1. A dietitian provided one-on-one, 15–20 min/week personalized nutrition education at the patient’s bedside in the first month and a 15–20 min/month personalized nutrition education with a special focus on skills to improve the low-scoring dietary items to achieve a higher score in the second month. A 10-min group nutritional education session was provided by a dietitian to nurses at the beginning of T1 to address the incorrect answers to the dietary knowledge questionnaire.
2.3. Data Collection and Measurements For patients at T0, socio-demographic factors, anthropometric and body composition data, dietary records, and questionnaires were collected. At T1 and T2, the data collected were the same as those at T0, except for socio-demographic factors. For the nurses, only a dietary knowledge questionnaire was collected at T0, T1, and T2.
2.3.1. Demographic Data The basic information of the patients was collected by chart review, including age, gender, dialysis vintage, and comorbidity, which was quantified using the Charlson comorbidity index (CCI). In addition, the occupation, economic status, marital status, living conditions, smoking, nutrition education, and consultation status were collected using questionnaires.
The basic information of the patients was collected by chart review, including age, gender, dialysis vintage, and comorbidity, which was quantified using the Charlson comorbidity index (CCI). In addition, the occupation, economic status, marital status, living conditions, smoking, nutrition education, and consultation status were collected using questionnaires.
2.3.2. Anthropometry and Body Composition Height and dry weight after dialysis were collected from the chart review to calculate body mass index (BMI).
Skeletal muscle mass (SMM) and body fat were measured using bioelectrical impedance analysis (InBody S10, Biospace, Seoul, Korea) after the hemodialysis session (sitting position) [21]. The SMM was adjusted for height (SMMHt2 (kg/m2)) and weight (SMMWt (%)), which were used as indicators of muscle mass.
Height and dry weight after dialysis were collected from the chart review to calculate body mass index (BMI). Skeletal muscle mass (SMM) and body fat were measured using bioelectrical impedance analysis (InBody S10, Biospace, Seoul, Korea) after the hemodialysis session (sitting position) [21]. The SMM was adjusted for height (SMMHt2 (kg/m2)) and weight (SMMWt (%)), which were used as indicators of muscle mass.
2.3.3. Dietary Intake The patients recorded three days of dietary intake during the week before and after the monthly examination, including one dialysis day, one non-dialysis day, and one non-dialysis weekend day [11]. To confirm dietary records, we used 24-h dietary recall by well-trained dietitians to interview patients. Dietary intake was analyzed using nutrient analysis software (Cofit pro, Taipei, Taiwan), which is based on the 2018 Taiwan Food Nutrient Database. The components of the HEI-HD score include total grains, total protein foods, total vegetable, whole fruits, oils, high biological value proteins, the ratio of white to red meat, processed meat, fish and seafood, saturated fatty acids rich oils, sugar-sweetened beverages and fruit juice, alcohol, whole grains, nuts, taste, and milk and dairy products. Total HEI-HD score is in the range of 0–100 [19].
The patients recorded three days of dietary intake during the week before and after the monthly examination, including one dialysis day, one non-dialysis day, and one non-dialysis weekend day [11]. To confirm dietary records, we used 24-h dietary recall by well-trained dietitians to interview patients. Dietary intake was analyzed using nutrient analysis software (Cofit pro, Taipei, Taiwan), which is based on the 2018 Taiwan Food Nutrient Database. The components of the HEI-HD score include total grains, total protein foods, total vegetable, whole fruits, oils, high biological value proteins, the ratio of white to red meat, processed meat, fish and seafood, saturated fatty acids rich oils, sugar-sweetened beverages and fruit juice, alcohol, whole grains, nuts, taste, and milk and dairy products. Total HEI-HD score is in the range of 0–100 [19].
2.3.4. Dietary Knowledge We used Ryu’s questionnaire to assess the dietary knowledge of HD patients and nurses. It consists of 10 questions asking about knowledge related to protein, potassium, phosphorus, sodium, and water. The score of the questionnaire ranged from 0 to 10; the higher the score, the better the knowledge of diet [29].
We used Ryu’s questionnaire to assess the dietary knowledge of HD patients and nurses. It consists of 10 questions asking about knowledge related to protein, potassium, phosphorus, sodium, and water. The score of the questionnaire ranged from 0 to 10; the higher the score, the better the knowledge of diet [29].
2.3.5. Physical Activity Physical activity data were collected using the short version of the International Physical Activity Questionnaire (IPAQ-SF) [30]. Patients recorded the number of days and time spent on exercise (vigorous, moderate, walking exercise, and sleep) during the last 7 days.
2.3.4. Dietary Knowledge We used Ryu’s questionnaire to assess the dietary knowledge of HD patients and nurses. It consists of 10 questions covering knowledge related to protein, potassium, phosphorus, sodium, and water. The score ranges from 0 to 10; the higher the score, the better the dietary knowledge [29].\n2.3.5. Physical Activity Physical activity was assessed using the short version of the International Physical Activity Questionnaire (IPAQ-SF) [30]. Patients recorded the number of days and the time spent on vigorous activity, moderate activity, and walking, as well as sleep, during the last 7 days. The level of physical activity was expressed as a metabolic equivalent (MET) value.
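For the IPAQ-SF, the MET value is conventionally computed as weekly MET-minutes. Assuming the standard IPAQ scoring protocol (3.3 METs for walking, 4.0 for moderate, and 8.0 for vigorous activity; the paper cites the questionnaire in [30] without detailing the scoring), a sketch looks like this:

    # Sketch: weekly MET-minutes under the standard IPAQ-SF scoring protocol
    # (assumed here, not stated explicitly in the paper).
    MET_WEIGHTS = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

    def weekly_met_minutes(activity: dict) -> float:
        # activity maps a category to (days per week, minutes per day).
        return sum(
            MET_WEIGHTS[cat] * days * minutes
            for cat, (days, minutes) in activity.items()
        )

    # Example: walking 5 days x 30 min plus moderate activity 2 days x 40 min
    print(weekly_met_minutes({"walking": (5, 30), "moderate": (2, 40)}))  # 815.0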
2.4. Statistical Analysis Data are shown as mean with standard deviation (SD) for continuous variables and as number (n) with percentage (%) for categorical variables. The Shapiro–Wilk or Kolmogorov–Smirnov test was used to verify the normality assumption for continuous variables. One-way ANOVA with post hoc Bonferroni tests was used to compare demographics and changes in dietary knowledge and HEI-HD. Changes in skeletal muscle mass were compared using ANCOVA adjusted for gender and age, followed by Bonferroni tests. The Kruskal–Wallis test was used to compare non-normally distributed data among multiple groups. The Chi-square test was used to compare categorical variables. Paired t-tests were used to compare continuous variables within groups. All statistical analyses were performed using SAS version 9.4. A p-value < 0.05 was considered statistically significant, and 0.05 < p < 0.1 indicated a marginally significant difference."
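The analyses were run in SAS 9.4; purely as an illustration of the decision logic described above (normality check, then a parametric or non-parametric group comparison, then the paper’s significance thresholds), a Python sketch might look like this:

    # Sketch only: the study used SAS 9.4, not Python; scipy stands in here.
    from scipy import stats

    def compare_groups(*groups, alpha: float = 0.05):
        # Shapiro-Wilk normality check, then one-way ANOVA or Kruskal-Wallis.
        normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
        if normal:
            name, result = "one-way ANOVA", stats.f_oneway(*groups)
        else:
            name, result = "Kruskal-Wallis", stats.kruskal(*groups)
        return name, result.pvalue

    def significance_label(p: float) -> str:
        # Thresholds used in the paper: p < 0.05 significant, 0.05-0.1 marginal.
        if p < 0.05:
            return "significant"
        if p < 0.1:
            return "marginally significant"
        return "not significant"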
, "A quasi-experimental study was conducted from May 2019 to April 2021 at the HD centers of four hospitals in northern Taiwan. This study was approved by the Taipei Medical University Joint Institutional Review Board (no. N201801034) for Taipei Medical University Hospital, Taipei Medical University-Wan Fang Hospital, and Taipei Medical University-Shuang Ho Hospital, and by the institutional Ethics Committee of Cathay General Hospital (no. CGH-OP108007) for Cathay General Hospital. All participants were fully informed and signed informed consent forms before participation.\nPatients were included if they were aged 20–75 years, had received HD three times a week for at least 3 months, had an education level of junior high school or higher, and had Kt/V > 1.2. Patients with obvious edema, pregnancy, amputation, hyperthyroidism, hypothyroidism, malignancy, liver failure or cancer, mental disorders, tube feeding, hospitalization or planned surgery, failed body composition measurement, or percentage body fat < 4% were excluded.\nThe study recruited 141 patients who met the criteria. All study subjects received a nutritional education booklet. The subjects were divided into four groups according to hospital: no courses for patients or nurses (Non-C; education booklet only); courses for nurses (CN; nutrition course for nurses); courses for patients (CP; nutrition course for HD patients); and courses for patients and nurses (CPN; nutrition course for both nurses and HD patients). The study period was divided into baseline (T0), intervention (T1), and follow-up (T2)." ]
[ null, "subjects", null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Study Design and Participants", "2.2. Nutrition Education Booklet and Education Program", "2.3. Data Collection and Measurements", "2.3.1. Demographic Data", "2.3.2. Anthropometry and Body Composition", "2.3.3. Dietary Intake", "2.3.4. Dietary Knowledge", "2.3.5. Physical Activity", "2.4. Statistical Analysis", "3. Results", "3.1. Characteristics of Participants", "3.2. Comparison of Changes in Dietary Knowledge among the Groups", "3.3. Comparison of Changes in HEI-HD among the Groups", "3.4. Comparison of Changes in Skeletal Muscle Mass among the Groups", "4. Discussion", "5. Conclusions" ]
[ "End-stage renal disease (ESRD) has been a public issue of global concern, with 90% of countries using hemodialysis (HD) as a treatment modality [1]. Most HD patients have difficulty maintaining a dietary compliance due to the stress of dietary restrictions, eventually leads to malnutrition [2]. Malnutrition in patients with chronic kidney disease is called protein-energy wasting (PEW), caused by marked reduction in storage of protein and energy in body. Muscle wasting is the most direct and valid criterion for PEW [3]. Muscle wasting is one of the direct criteria of PEW [3]. Skeletal muscle accounts for 30–40% of the whole body and maintains basal energy metabolism [4], physical activity, and daily life [5]. However, HD patients often gradually lose muscle mass with age, treatment, hormonal changes, and disease progression [6], resulting in frailty, decreased quality of life, falls, and even fractures, which increase the risk of death in the long run [7,8].\nDietary quality considers whether the individual’s diet complies with dietary recommendations and healthy diet principles [9]. The Dietary Quality Index is an index developed following national dietary guidelines or specific disease recommendations, that reflects patients’ compliance with dietary recommendations and dietary changes [10]. The Healthy Eating Index for HemoDialysis patients (HEI-HD) was developed in 2014 [11]. It was constructed based on the healthy eating pattern of HD patients [12,13,14,15,16,17,18,19,20,21]. HEI-HD contains 16 diet items. The total score of HEU-HD ranged between 0 and 100. The higher the score, the better the dietary quality, with a 4% reduction in the risk of death in HD patients [19].\nThe dietary principles of kidney disease are too complex to make patients have difficulty with dietary compliance [22]. Previous studies proposed that patients are reluctant to follow dietary principles because they do not have sufficient knowledge or lack of skills to self-manage their diet [23]. Therefore, appropriate education and reinforcement of knowledge concepts enhance the understanding of the HD diet principles to encourage diet compliance [24]. The Health Belief Model (HBM), as a guiding framework for educational interventions, is the most widely applied theory of health education, which describes changes in health-related behaviors depending on the personal beliefs or attitudes toward disease [25]. Shariatjafari et al. (2012) showed that the nutritional education based on HBM was effective in promoting the adherence to dietary guidelines in the participants [26].\nIn the routine medical care of HD centers, nurses are the most frequent contact with HD patients and often help patients to adapt to their problems by providing information [27]. Therefore, nutrition education offered by dietitians could be more helpful for HD patients. In addition, the low ratio of dietitians to patients in HD centers and nutrition care are not included in the regular care process under the health insurance system [28]. Therefore, we conducted a quasi-experimental study to examine the effects of HEI-HD-based nutritional education models on dietary quality and muscle mass in HD patients.", "2.1. Study Design and Participants A quasi-experimental study was conducted from May 2019 to April 2021 at HD centers of four hospitals in northern Taiwan. This study was approved by the Taipei Medical University Joint Institutional Review Board (no. 
N201801034) for Taipei Medical University Hospital, Taipei Medical University-Wan Fang Hospital, Taipei Medical University-Shuang Ho Hospital, and approved by the institutional Ethics Committee of Cathay General Hospital (no. CGH-OP108007) for Cathay General Hospital. All participants were fully informed and signed informed consent forms before their participation.\nPatients included were those aged 20–75 years, received HD treatment thrice a week for at least 3 months, the education level of junior high school and higher, and Kt/V > 1.2. Patients with obvious edema, pregnancy, amputation, hyperthyroidism, hypothyroidism, malignancy, liver failure or cancer, mental disorders, tube feeding, hospitalization and plan to surgery, loss to measure body composition, and percentage body fat < 4% were excluded.\nThe study recruited 141 patients who met the criteria. All study subjects received a nutritional education booklet. The subjects were divided into four groups according to the hospital. Those were no courses for patients and nurses (Non-C) group (only provided education booklet); courses for nurses (CN) group (provided nutrition course for nurses); courses for patients (CP) group (provided nutrition course for HD patients); and courses for patients and nurses (CPN) group (provided nutrition course for nurses and HD patients). This study period was divided into baseline (T0), intervention (T1), and follow-up (T2).\nA quasi-experimental study was conducted from May 2019 to April 2021 at HD centers of four hospitals in northern Taiwan. This study was approved by the Taipei Medical University Joint Institutional Review Board (no. N201801034) for Taipei Medical University Hospital, Taipei Medical University-Wan Fang Hospital, Taipei Medical University-Shuang Ho Hospital, and approved by the institutional Ethics Committee of Cathay General Hospital (no. CGH-OP108007) for Cathay General Hospital. All participants were fully informed and signed informed consent forms before their participation.\nPatients included were those aged 20–75 years, received HD treatment thrice a week for at least 3 months, the education level of junior high school and higher, and Kt/V > 1.2. Patients with obvious edema, pregnancy, amputation, hyperthyroidism, hypothyroidism, malignancy, liver failure or cancer, mental disorders, tube feeding, hospitalization and plan to surgery, loss to measure body composition, and percentage body fat < 4% were excluded.\nThe study recruited 141 patients who met the criteria. All study subjects received a nutritional education booklet. The subjects were divided into four groups according to the hospital. Those were no courses for patients and nurses (Non-C) group (only provided education booklet); courses for nurses (CN) group (provided nutrition course for nurses); courses for patients (CP) group (provided nutrition course for HD patients); and courses for patients and nurses (CPN) group (provided nutrition course for nurses and HD patients). This study period was divided into baseline (T0), intervention (T1), and follow-up (T2).\n2.2. Nutrition Education Booklet and Education Program All participants received a nutritional education booklet. The nutritional education booklet was developed based on the HEI-HD to increase patient knowledge, promote positive attitudes, and change eating behaviors. The contents of the educational booklet were corrected by both six renal dietitians and three nephrologists. Table 1 shows the 9 chapters and the contents of the educational booklet. 
"3.1. Characteristics of Participants Ninety-four HD patients’ data were analyzed (Figure 1). The mean age of the HD patients was 58.3 ± 10.1 years, and 64.9% were men. The mean total HEI-HD score and SMM were 65.6 ± 8.3 and 26.3 ± 5.7 kg, respectively. There was a statistically significant difference in marital status among the four groups (Table 2).\n3.2. Comparison of Changes in Dietary Knowledge among the Groups The dietary knowledge score in the Non-C group at T2 was significantly higher than at T1 (Table 3).\n3.3. Comparison of Changes in HEI-HD among the Groups After two months of intervention, there was a significant increase in vegetable scores in the CPN and CP groups but not in the Non-C group. The change in the ratio of white to red meat score in the CPN group also increased significantly. Compared with T0, at the end of T2, the change in vegetable scores in the CP group was significantly higher than in the Non-C group, the ratio of white to red meat scores in the CPN group was significantly higher than in the CP group, and the total HEI-HD score of the CPN group was significantly higher than that of the Non-C group. There was no significant difference among the four groups in the changes from T1 to T2 (Table 4).\n3.4. Comparison of Changes in Skeletal Muscle Mass among the Groups There was no significant difference in muscle mass among the four groups. Compared with T0, all patients’ SMM, SMMHt2, and SMMWt had decreased significantly at T1. The SMM and SMMHt2 of the Non-C group at T1 were also significantly lower than at T0. At T2, all patients’ SMM, SMMHt2, and SMMWt were significantly higher than at T1, but only the SMM of the Non-C and CN groups was significantly better than at T1, with or without adjustment (Table 5).", "This was the first study to assess the effects of dietitians providing HEI-HD-based nutritional education to nurses and patients, alone and in combination, on the dietary quality and muscle mass of HD patients. This study showed that nutritional education in the CPN group, given to both patients and nurses by dietitians, was more effective than in the other groups. The dietitian-provided nutritional education increased the scores for vegetables, the ratio of white to red meat, and total HEI-HD, but produced no significant change in SMM, SMMHt2, or SMMWt.\nFord et al. (2004) showed that giving 20–30 min of nutrition education per month to HD patients could improve their dietary knowledge [24]. Abd et al. (2015) used small-group sessions combined with educational booklets for nutrition education in their study groups and reported improved dietary knowledge [31]. In the present study, the dietary knowledge of HD patients improved at T2; notably, the Non-C group had a higher dietary knowledge score at T2 than at T1, indicating that providing the HEI-HD booklet alone could help improve patients’ dietary knowledge.\nVegetables are rich in vitamins C and E and antioxidant phytochemicals [32]. Saglimbene et al. (2019) reported that only 4% of HD patients met vegetable recommendations, far below the recommendations for preventing chronic diseases [33]. Wagner et al. (2016) showed that nutrition courses given by dietitians could increase the vegetable intake of obese adults [34], similar to the results of our study, indicating that nutrition education by dietitians could increase patients’ vegetable intake and dietary vegetable scores.\nThe ratio of white to red meat scores also improved.
Increasing daily white meat intake could reduce mortality by 33% [19]. HD patients often consume excessive amounts of red meat. Red meats, such as pork, beef, and lamb, are rich in phosphorus and saturated fat and can increase the risk of cardiovascular disease [35]. McCullough et al. (2002) recommended a white-to-red-meat ratio of 4:1 to prevent cardiovascular disease [36]. The ratio of white to red meat scores of the CPN group was better than that of the other groups in this study, indicating that nutrition education given by dietitians to both patients and nurses can significantly increase HD patients’ dietary ratio of white to red meat.\nThe present study showed that the total HEI-HD scores of the CPN group were significantly higher than those of the other groups, owing to the increases in the vegetable score and the white-to-red-meat ratio score. Previous studies have shown that people who receive nutrition education have better dietary quality; for example, nutrition education by dietitians improved the dietary quality of diabetes patients [37,38]. Our study shows that nutritional education delivered by dietitians to both patients and nurses had the best effect on improving the dietary quality of HD patients.\nPrevious studies have shown that increased dietary quality helps maintain muscle mass [39,40]. Rondanelli et al. (2015) reported that increasing white meat intake while maintaining or decreasing red meat intake could prevent the occurrence of sarcopenia [41], similar to our study, in which no significant decrease in muscle mass was observed in the CPN group with its enhanced dietary quality. In contrast, the total HEI-HD score of the Non-C group was slightly lower, and their SMM and SMMHt2 were significantly reduced at T1. In addition, previous studies have shown that more highly educated people are more willing to exercise [42] and that unmarried people and blue-collar workers also have high physical activity [43]. Our results showed that the higher education level in the Non-C group and the larger proportions of unmarried patients and blue-collar workers in the CN group tended to be associated with relatively high activity levels, which might affect muscle mass. Our study’s strengths are as follows: (1) it was a multi-center study, which reduces sampling bias compared with a single-center study; (2) it was the first nutrition education study to use four educational models; and (3) it was the first nutrition education study to use the HEI-HD, a simple, validated dietary index that can quickly assess patients’ dietary status. This study has several limitations: (1) only 4 months were monitored, which is relatively short and may not allow an increase in muscle mass to be observed; (2) the effects of diet alone on muscle mass are limited, so physical activity, exercise, and nutritional supplementation should be included in future studies; (3) the subjects were all voluntary participants with high health awareness, so the nutrition education may have been more effective than it would be in a general population; (4) no adequate information was available on the frequency and content of the nurses’ education of patients [44], so future studies should monitor how often and for how long nurses educate patients; and (5) although this was a multi-center study, it was conducted only in urban northern Taiwan among a highly literate population. Therefore, the results may not be generalizable to patients in rural areas of Taiwan or in other countries.", "The HEI-HD-based nutritional education was effective in improving patients’ dietary quality, and the dietitian-delivered model educating both patients and nurses was the most effective in improving dietary quality and maintaining skeletal muscle mass in HD patients." ]
[ "intro", null, "subjects", null, null, null, null, null, null, null, null, "results", "subjects", null, null, null, "discussion", "conclusions" ]
[ "hemodialysis", "skeletal muscle", "dietitian", "nutritional education", "dietary quality" ]
1. Introduction: End-stage renal disease (ESRD) has become a public health issue of global concern, with 90% of countries using hemodialysis (HD) as a treatment modality [1]. Most HD patients have difficulty maintaining dietary compliance because of the stress of dietary restrictions, which eventually leads to malnutrition [2]. Malnutrition in patients with chronic kidney disease is called protein-energy wasting (PEW) and is caused by a marked reduction in the body’s protein and energy stores. Muscle wasting is the most direct and valid criterion for PEW [3]. Skeletal muscle accounts for 30–40% of the whole body and maintains basal energy metabolism [4], physical activity, and daily life [5]. However, HD patients often gradually lose muscle mass with age, treatment, hormonal changes, and disease progression [6], resulting in frailty, decreased quality of life, falls, and even fractures, which increase the risk of death in the long run [7,8]. Dietary quality reflects whether an individual’s diet complies with dietary recommendations and healthy diet principles [9]. A dietary quality index is an index developed from national dietary guidelines or disease-specific recommendations that reflects patients’ compliance with dietary recommendations and their dietary changes [10]. The Healthy Eating Index for HemoDialysis patients (HEI-HD) was developed in 2014 [11]. It was constructed based on the healthy eating pattern of HD patients [12,13,14,15,16,17,18,19,20,21]. The HEI-HD contains 16 diet items, and the total score ranges from 0 to 100. The higher the score, the better the dietary quality, which has been associated with a 4% reduction in the risk of death in HD patients [19]. The dietary principles for kidney disease are so complex that patients have difficulty complying with them [22]. Previous studies have proposed that patients are reluctant to follow dietary principles because they lack sufficient knowledge or the skills to self-manage their diet [23]. Therefore, appropriate education and reinforcement of knowledge can enhance understanding of HD diet principles and encourage dietary compliance [24]. The Health Belief Model (HBM), a guiding framework for educational interventions, is the most widely applied theory of health education; it describes changes in health-related behaviors as depending on personal beliefs or attitudes toward disease [25]. Shariatjafari et al. (2012) showed that HBM-based nutritional education was effective in promoting participants’ adherence to dietary guidelines [26]. In the routine medical care of HD centers, nurses have the most frequent contact with HD patients and often help patients adapt to their problems by providing information [27]. Therefore, nutrition education offered by dietitians could be even more helpful for HD patients. In addition, the ratio of dietitians to patients in HD centers is low, and nutrition care is not included in the regular care process under the health insurance system [28]. Therefore, we conducted a quasi-experimental study to examine the effects of HEI-HD-based nutritional education models on dietary quality and muscle mass in HD patients. 2. Materials and Methods: 2.1. Study Design and Participants: A quasi-experimental study was conducted from May 2019 to April 2021 at the HD centers of four hospitals in northern Taiwan. The study was approved by the Taipei Medical University Joint Institutional Review Board (no. N201801034) for Taipei Medical University Hospital, Taipei Medical University-Wan Fang Hospital, and Taipei Medical University-Shuang Ho Hospital, and by the institutional Ethics Committee of Cathay General Hospital (no. CGH-OP108007) for Cathay General Hospital. All participants were fully informed and signed informed consent forms before participation. Patients were included if they were aged 20–75 years, had received HD treatment three times a week for at least 3 months, had an education level of junior high school or higher, and had a Kt/V > 1.2. Patients with obvious edema, pregnancy, amputation, hyperthyroidism, hypothyroidism, malignancy, liver failure or cancer, mental disorders, tube feeding, hospitalization or planned surgery, missing body composition measurements, or a percentage body fat < 4% were excluded. The study recruited 141 patients who met the criteria. All study subjects received a nutritional education booklet. The subjects were divided into four groups according to hospital: no courses for patients or nurses (Non-C; education booklet only), courses for nurses (CN; nutrition course for nurses), courses for patients (CP; nutrition course for HD patients), and courses for patients and nurses (CPN; nutrition courses for both nurses and HD patients). The study period was divided into baseline (T0), intervention (T1), and follow-up (T2). 2.2. Nutrition Education Booklet and Education Program: All participants received a nutritional education booklet. The booklet was developed based on the HEI-HD to increase patient knowledge, promote positive attitudes, and change eating behaviors. Its contents were reviewed and corrected by six renal dietitians and three nephrologists. Table 1 shows the nine chapters and contents of the educational booklet.
Nutrition education was offered at T1. A dietitian provided one-on-one personalized nutrition education at the patient’s bedside for 15–20 min per week in the first month, and one 15–20 min personalized session in the second month focusing on skills to improve low-scoring dietary items and achieve a higher score. A 10-min group nutritional education session was given by a dietitian to the nurses at the beginning of T1 to address the incorrect answers on the dietary knowledge questionnaire. 2.3. Data Collection and Measurements: For patients at T0, socio-demographic factors, anthropometric and body composition data, dietary records, and questionnaires were collected. At T1 and T2, the same data were collected as at T0, except for socio-demographic factors. For the nurses, only a dietary knowledge questionnaire was collected at T0, T1, and T2. 2.3.1. Demographic Data: The basic information of the patients, including age, gender, dialysis vintage, and comorbidity, was collected by chart review, and comorbidity was quantified using the Charlson comorbidity index (CCI). In addition, occupation, economic status, marital status, living conditions, smoking, and nutrition education and consultation status were collected using questionnaires. 2.3.2. Anthropometry and Body Composition: Height and dry weight after dialysis were collected from the chart review to calculate body mass index (BMI). Skeletal muscle mass (SMM) and body fat were measured by bioelectrical impedance analysis (InBody S10, Biospace, Seoul, Korea) after the hemodialysis session, with the patient seated [21]. The SMM was adjusted for height (SMMHt2, kg/m2) and weight (SMMWt, %), which served as indicators of muscle mass.
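The two adjusted indices above follow the usual normalizations; stated explicitly below as a sketch of the standard definitions, inferred from the units the paper reports (kg/m2 and %) rather than from formulas given in the text:

```latex
\mathrm{SMMHt^{2}} = \frac{\mathrm{SMM}\ (\mathrm{kg})}{\mathrm{height}^{2}\ (\mathrm{m}^{2})},
\qquad
\mathrm{SMMWt} = \frac{\mathrm{SMM}\ (\mathrm{kg})}{\mathrm{dry\ body\ weight}\ (\mathrm{kg})} \times 100\%
```

Under these definitions, a patient with the cohort-mean SMM of 26.3 kg, a height of 1.65 m, and a dry weight of 65 kg (the height and weight here are hypothetical) would have SMMHt2 of about 9.7 kg/m2 and SMMWt of about 40.5%.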
2.3.3. Dietary Intake: The patients recorded three days of dietary intake during the week before and after the monthly examination, including one dialysis day, one non-dialysis day, and one non-dialysis weekend day [11]. To confirm the dietary records, well-trained dietitians interviewed patients using 24-h dietary recall. Dietary intake was analyzed with nutrient analysis software (Cofit pro, Taipei, Taiwan), which is based on the 2018 Taiwan Food Nutrient Database. The components of the HEI-HD score are total grains, total protein foods, total vegetables, whole fruits, oils, high-biological-value proteins, the ratio of white to red meat, processed meat, fish and seafood, oils rich in saturated fatty acids, sugar-sweetened beverages and fruit juice, alcohol, whole grains, nuts, taste, and milk and dairy products. The total HEI-HD score ranges from 0 to 100 [19].
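To make the scoring mechanics concrete, the sketch below totals a component-based diet-quality index of this kind. The component names and per-component maximums are illustrative assumptions only, not the published HEI-HD scoring weights.

```python
# Illustrative sketch of totaling a component-based diet-quality index.
# Component names and per-component maximums are hypothetical; the
# published HEI-HD defines 16 components summing to a 0-100 total.

def total_index_score(component_scores: dict[str, float],
                      component_max: dict[str, float]) -> float:
    """Clamp each component to [0, max], then sum across components."""
    total = 0.0
    for name, score in component_scores.items():
        cap = component_max[name]
        total += min(max(score, 0.0), cap)
    return total

# Hypothetical example with three of the 16 components:
scores = {"total_vegetable": 4.5, "white_to_red_meat": 3.0, "whole_grains": 2.0}
caps = {"total_vegetable": 5.0, "white_to_red_meat": 5.0, "whole_grains": 5.0}
print(total_index_score(scores, caps))  # 9.5
```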
2.3.4. Dietary Knowledge: We used Ryu’s questionnaire to assess the dietary knowledge of HD patients and nurses. It consists of 10 questions on knowledge related to protein, potassium, phosphorus, sodium, and water. The questionnaire score ranges from 0 to 10; the higher the score, the better the dietary knowledge [29]. 2.3.5. Physical Activity: Physical activity was assessed using the short version of the International Physical Activity Questionnaire (IPAQ-SF) [30]. Patients recorded the number of days and the time spent on activity (vigorous exercise, moderate exercise, walking, and sleep) during the last 7 days. The level of physical activity was expressed as a metabolic equivalent (MET) value.
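As an illustration of how MET values summarize IPAQ-SF responses, the sketch below uses the MET weights from the commonly cited IPAQ-SF scoring protocol (3.3 for walking, 4.0 for moderate activity, 8.0 for vigorous activity). The paper does not state its exact coefficients, so treat these as assumptions.

```python
# Weekly physical activity in MET-minutes from IPAQ-SF items.
# The 3.3 / 4.0 / 8.0 MET weights follow the common IPAQ-SF scoring
# protocol; the study's exact coefficients are not stated.

def ipaq_met_minutes(walk_days, walk_min, mod_days, mod_min,
                     vig_days, vig_min):
    return (3.3 * walk_min * walk_days
            + 4.0 * mod_min * mod_days
            + 8.0 * vig_min * vig_days)

# Example: walking 30 min on 5 days plus 20 min of moderate activity
# on 2 days gives 495 + 160 = 655 MET-min/week.
print(ipaq_met_minutes(5, 30, 2, 20, 0, 0))  # 655.0
```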
2.4. Statistical Analysis: Data are shown as means with standard deviations (SD) for continuous variables and as numbers (n) with percentages (%) for categorical variables. A Shapiro–Wilk or Kolmogorov–Smirnov test was used to verify the normality assumption for continuous variables. One-way ANOVA with post hoc Bonferroni tests was used to compare demographics and changes in dietary knowledge and HEI-HD. Changes in skeletal muscle mass were compared using ANCOVA adjusted for gender and age, followed by Bonferroni tests. The Kruskal–Wallis test was used to compare non-normally distributed data among multiple groups. The Chi-square test was used to compare categorical variables, and the paired t-test was used to compare continuous variables within groups. All statistical analyses were performed with SAS version 9.4. A p-value < 0.05 was considered statistically significant, and 0.05 < p < 0.1 indicated a marginally significant difference.
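The analyses were run in SAS, but for readers who want to reproduce the workflow, a minimal sketch of the same tests in Python with scipy and statsmodels follows; the data file and column names are hypothetical.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("hd_patients.csv")  # hypothetical patient-level table

# Normality check for a continuous outcome
_, p_norm = stats.shapiro(df["smm_change"])

# One-way ANOVA across the four education groups
samples = [g["smm_change"].values for _, g in df.groupby("group")]
_, p_anova = stats.f_oneway(*samples)

# Kruskal-Wallis for non-normally distributed outcomes
_, p_kw = stats.kruskal(*samples)

# ANCOVA: group effect on SMM change adjusted for age and gender
model = smf.ols("smm_change ~ C(group) + age + C(gender)", data=df).fit()
ancova_table = sm.stats.anova_lm(model, typ=2)

# Paired t-test within a group (T0 vs. T2)
_, p_paired = stats.ttest_rel(df["smm_t0"], df["smm_t2"])

# Chi-square test for a categorical variable across groups
chi2, p_chi, dof, _ = stats.chi2_contingency(
    pd.crosstab(df["group"], df["marital_status"]))
```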
3. Results: 3.1. Characteristics of Participants: Ninety-four HD patients’ data were analyzed (Figure 1). The mean age of the HD patients was 58.3 ± 10.1 years, and 64.9% were men. The mean total HEI-HD score and SMM were 65.6 ± 8.3 and 26.3 ± 5.7 kg, respectively. There was a statistically significant difference in marital status among the four groups (Table 2). 3.2. Comparison of Changes in Dietary Knowledge among the Groups: The dietary knowledge score in the Non-C group at T2 was significantly higher than at T1 (Table 3). 3.3. Comparison of Changes in HEI-HD among the Groups: After two months of intervention, there was a significant increase in vegetable scores in the CPN and CP groups but not in the Non-C group. The change in the ratio of white to red meat score in the CPN group also increased significantly. Compared with T0, at the end of T2 the change in vegetable scores in the CP group was significantly higher than in the Non-C group, the ratio of white to red meat scores in the CPN group was significantly higher than in the CP group, and the total HEI-HD scores of the CPN group were significantly higher than those of the Non-C group. There was no significant difference among the four groups over the T1-to-T2 period (Table 4). 3.4. Comparison of Changes in Skeletal Muscle Mass among the Groups: There was no significant difference in muscle mass among the four groups. Compared with T0, all patients’ SMM, SMMHt2, and SMMWt had decreased significantly at T1. The SMM and SMMHt2 of the Non-C group at T1 were also significantly lower than at T0. At T2, all patients’ SMM, SMMHt2, and SMMWt were significantly higher than at T1, but only the SMM of the Non-C and CN groups was significantly better than at T1, with or without adjustment (Table 5). 4. Discussion: This was the first study to assess the effects of dietitians providing nutritional education to nurses and patients, alone and in combination, based on the HEI-HD, on the dietary quality and muscle mass of HD patients. It showed that HEI-HD-based nutritional education delivered by dietitians to both patients and nurses (the CPN group) was more effective than the other models. The dietitian-provided education increased the vegetable score, the ratio of white to red meat score, and the total HEI-HD score, but produced no significant change in SMM, SMMHt2, or SMMWt. Ford et al. (2004) showed that giving HD patients 20–30 min of nutrition education per month could improve their dietary knowledge [24]. Abd et al. (2015) used small-group education combined with educational booklets and reported that dietary knowledge improved [31]. In the present study, the dietary knowledge of HD patients improved at T2; compared with T0, the Non-C group had a higher dietary knowledge score at T2 than at T1, indicating that providing the HEI-HD booklet alone could improve patients’ dietary knowledge. Vegetables are rich in vitamins C and E and in antioxidant phytochemicals [32]. Saglimbene et al. (2019) reported that only 4% of HD patients met vegetable intake recommendations, far below the levels recommended for preventing chronic diseases [33].
Wagner et al. (2016) showed that nutrition courses given by dietitians could enhance the vegetable intake of obese adults [34], which is similar to our results, indicating that nutrition education by dietitians can increase patients’ vegetable intake and their dietary vegetable scores. The ratio of white to red meat scores also improved. Increasing daily white meat intake could reduce mortality by 33% [19]. HD patients often consume excessive amounts of red meat. Red meats, such as pork, beef, and lamb, are rich in phosphorus and saturated fat and can increase the risk of cardiovascular disease [35]. McCullough et al. (2002) recommended a white-to-red-meat ratio of 4:1 to prevent cardiovascular disease [36]. The ratio of white to red meat scores of the CPN group was better than that of the other groups in this study, indicating that nutrition education given by dietitians to both patients and nurses can significantly increase HD patients’ dietary ratio of white to red meat. The present study showed that the total HEI-HD scores of the CPN group were significantly higher than those of the other groups, owing to the increases in the vegetable score and the white-to-red-meat ratio score. Previous studies have shown that people who receive nutrition education have better dietary quality; for example, nutrition education by dietitians improved the dietary quality of diabetes patients [37,38]. Our study shows that nutritional education delivered by dietitians to both patients and nurses had the best effect on improving the dietary quality of HD patients. Previous studies have shown that increased dietary quality helps maintain muscle mass [39,40]. Rondanelli et al. (2015) reported that increasing white meat intake while maintaining or decreasing red meat intake could prevent the occurrence of sarcopenia [41], similar to our study, in which no significant decrease in muscle mass was observed in the CPN group with its enhanced dietary quality. In contrast, the total HEI-HD score of the Non-C group was slightly lower, and their SMM and SMMHt2 were significantly reduced at T1. In addition, previous studies have shown that more highly educated people are more willing to exercise [42] and that unmarried people and blue-collar workers also have high physical activity [43]. Our results showed that the higher education level in the Non-C group and the larger proportions of unmarried patients and blue-collar workers in the CN group tended to be associated with relatively high activity levels, which might affect muscle mass. Our study’s strengths are as follows: (1) it was a multi-center study, which reduces sampling bias compared with a single-center study; (2) it was the first nutrition education study to use four educational models; and (3) it was the first nutrition education study to use the HEI-HD, a simple, validated dietary index that can quickly assess patients’ dietary status. This study has several limitations: (1) only 4 months were monitored, which is relatively short and may not allow an increase in muscle mass to be observed; (2) the effects of diet alone on muscle mass are limited, so physical activity, exercise, and nutritional supplementation should be included in future studies; (3) the subjects were all voluntary participants with high health awareness, so the nutrition education may have been more effective than it would be in a general population; (4) no adequate information was available on the frequency and content of the nurses’ education of patients [44], so future studies should monitor how often and for how long nurses educate patients; and (5) although this was a multi-center study, it was conducted only in urban northern Taiwan among a highly literate population. Therefore, the results may not be generalizable to patients in rural areas of Taiwan or in other countries. 5. Conclusions: The HEI-HD-based nutritional education was effective in improving patients’ dietary quality, and the dietitian-delivered model educating both patients and nurses was the most effective in improving dietary quality and maintaining skeletal muscle mass in HD patients.
Background: Hemodialysis patients are at high risk of muscle loss as a result of aging and disease, combined with inadequate dietary intake. The Healthy Eating Index for HemoDialysis patients (HEI-HD) was developed to assess the dietary quality of hemodialysis patients. The purpose of this study was to examine the effects of different nutritional education models using HEI-HD-based education on dietary quality and muscle mass in hemodialysis patients. Methods: A quasi-experimental study was conducted from May 2019 to April 2021 with four groups: no course for patients or nurses (Non-C), course for nurses (CN), course for patients (CP), and course for patients and nurses (CPN). The courses were delivered by registered dietitians. Data from 94 patients were collected and analyzed at baseline, after 2 months of intervention, and at 2 months of follow-up, including demographics, body composition, 3-day dietary records, and hemodialysis dietary knowledge. The HEI-HD index score was calculated. Results: Patients were aged 58.3 ± 10.1 years. The change in dietary quality in the CPN group was improved compared with the Non-C group (-3.4 ± 9.5 vs. 3.0 ± 5.5, p = 0.04). The skeletal muscle mass of the Non-C group at the intervention time point was also significantly lower than at baseline, whereas that of the CPN group was not. Conclusions: The HEI-HD-based nutritional education for both patients and nurses showed a positive effect on improving dietary quality and maintaining muscle mass in hemodialysis patients.
1. Introduction: End-stage renal disease (ESRD) has become a public health issue of global concern, with 90% of countries using hemodialysis (HD) as a treatment modality [1]. Most HD patients have difficulty maintaining dietary compliance because of the stress of dietary restrictions, which eventually leads to malnutrition [2]. Malnutrition in patients with chronic kidney disease is called protein-energy wasting (PEW) and is caused by a marked reduction in the body’s protein and energy stores. Muscle wasting is the most direct and valid criterion for PEW [3]. Skeletal muscle accounts for 30–40% of the whole body and maintains basal energy metabolism [4], physical activity, and daily life [5]. However, HD patients often gradually lose muscle mass with age, treatment, hormonal changes, and disease progression [6], resulting in frailty, decreased quality of life, falls, and even fractures, which increase the risk of death in the long run [7,8]. Dietary quality reflects whether an individual’s diet complies with dietary recommendations and healthy diet principles [9]. A dietary quality index is an index developed from national dietary guidelines or disease-specific recommendations that reflects patients’ compliance with dietary recommendations and their dietary changes [10]. The Healthy Eating Index for HemoDialysis patients (HEI-HD) was developed in 2014 [11]. It was constructed based on the healthy eating pattern of HD patients [12,13,14,15,16,17,18,19,20,21]. The HEI-HD contains 16 diet items, and the total score ranges from 0 to 100. The higher the score, the better the dietary quality, which has been associated with a 4% reduction in the risk of death in HD patients [19]. The dietary principles for kidney disease are so complex that patients have difficulty complying with them [22]. Previous studies have proposed that patients are reluctant to follow dietary principles because they lack sufficient knowledge or the skills to self-manage their diet [23]. Therefore, appropriate education and reinforcement of knowledge can enhance understanding of HD diet principles and encourage dietary compliance [24]. The Health Belief Model (HBM), a guiding framework for educational interventions, is the most widely applied theory of health education; it describes changes in health-related behaviors as depending on personal beliefs or attitudes toward disease [25]. Shariatjafari et al. (2012) showed that HBM-based nutritional education was effective in promoting participants’ adherence to dietary guidelines [26]. In the routine medical care of HD centers, nurses have the most frequent contact with HD patients and often help patients adapt to their problems by providing information [27]. Therefore, nutrition education offered by dietitians could be even more helpful for HD patients. In addition, the ratio of dietitians to patients in HD centers is low, and nutrition care is not included in the regular care process under the health insurance system [28]. Therefore, we conducted a quasi-experimental study to examine the effects of HEI-HD-based nutritional education models on dietary quality and muscle mass in HD patients. 5. Conclusions: The HEI-HD-based nutritional education was effective in improving patients’ dietary quality, and the dietitian-delivered model educating both patients and nurses was the most effective in improving dietary quality and maintaining skeletal muscle mass in HD patients.
Background: Hemodialysis patients are at high risk of muscle loss as a result of aging and disease, combined with inadequate dietary intake. The Healthy Eating Index for HemoDialysis patients (HEI-HD) was developed to assess the dietary quality of hemodialysis patients. The purpose of this study was to examine the effects of different nutritional education models using HEI-HD-based education on dietary quality and muscle mass in hemodialysis patients. Methods: A quasi-experimental study was conducted from May 2019 to April 2021 with four groups: no course for patients or nurses (Non-C), course for nurses (CN), course for patients (CP), and course for patients and nurses (CPN). The courses were delivered by registered dietitians. Data from 94 patients were collected and analyzed at baseline, after 2 months of intervention, and at 2 months of follow-up, including demographics, body composition, 3-day dietary records, and hemodialysis dietary knowledge. The HEI-HD index score was calculated. Results: Patients were aged 58.3 ± 10.1 years. The change in dietary quality in the CPN group was improved compared with the Non-C group (-3.4 ± 9.5 vs. 3.0 ± 5.5, p = 0.04). The skeletal muscle mass of the Non-C group at the intervention time point was also significantly lower than at baseline, whereas that of the CPN group was not. Conclusions: The HEI-HD-based nutritional education for both patients and nurses showed a positive effect on improving dietary quality and maintaining muscle mass in hemodialysis patients.
8,366
301
[ 3373, 315, 160, 1021, 63, 89, 177, 65, 68, 175, 23, 137, 97 ]
18
[ "patients", "dietary", "hd", "group", "education", "knowledge", "score", "non", "nutrition", "hei" ]
[ "patients dietary quality", "protein energy wasting", "quality maintain muscle", "dietary principles kidney", "eating index hemodialysis" ]
null
[CONTENT] hemodialysis | skeletal muscle | dietitian | nutritional education | dietary quality [SUMMARY]
null
[CONTENT] hemodialysis | skeletal muscle | dietitian | nutritional education | dietary quality [SUMMARY]
[CONTENT] hemodialysis | skeletal muscle | dietitian | nutritional education | dietary quality [SUMMARY]
[CONTENT] hemodialysis | skeletal muscle | dietitian | nutritional education | dietary quality [SUMMARY]
[CONTENT] hemodialysis | skeletal muscle | dietitian | nutritional education | dietary quality [SUMMARY]
[CONTENT] Humans | Diet, Healthy | Diet | Renal Dialysis | Diet Records | Muscles | Nutritional Status [SUMMARY]
null
[CONTENT] Humans | Diet, Healthy | Diet | Renal Dialysis | Diet Records | Muscles | Nutritional Status [SUMMARY]
[CONTENT] Humans | Diet, Healthy | Diet | Renal Dialysis | Diet Records | Muscles | Nutritional Status [SUMMARY]
[CONTENT] Humans | Diet, Healthy | Diet | Renal Dialysis | Diet Records | Muscles | Nutritional Status [SUMMARY]
[CONTENT] Humans | Diet, Healthy | Diet | Renal Dialysis | Diet Records | Muscles | Nutritional Status [SUMMARY]
[CONTENT] patients dietary quality | protein energy wasting | quality maintain muscle | dietary principles kidney | eating index hemodialysis [SUMMARY]
null
[CONTENT] patients dietary quality | protein energy wasting | quality maintain muscle | dietary principles kidney | eating index hemodialysis [SUMMARY]
[CONTENT] patients dietary quality | protein energy wasting | quality maintain muscle | dietary principles kidney | eating index hemodialysis [SUMMARY]
[CONTENT] patients dietary quality | protein energy wasting | quality maintain muscle | dietary principles kidney | eating index hemodialysis [SUMMARY]
[CONTENT] patients dietary quality | protein energy wasting | quality maintain muscle | dietary principles kidney | eating index hemodialysis [SUMMARY]
[CONTENT] patients | dietary | hd | group | education | knowledge | score | non | nutrition | hei [SUMMARY]
null
[CONTENT] patients | dietary | hd | group | education | knowledge | score | non | nutrition | hei [SUMMARY]
[CONTENT] patients | dietary | hd | group | education | knowledge | score | non | nutrition | hei [SUMMARY]
[CONTENT] patients | dietary | hd | group | education | knowledge | score | non | nutrition | hei [SUMMARY]
[CONTENT] patients | dietary | hd | group | education | knowledge | score | non | nutrition | hei [SUMMARY]
[CONTENT] dietary | hd | patients | disease | principles | compliance | diet | quality | hd patients | health [SUMMARY]
null
[CONTENT] group | significantly | group significantly | non group | significantly higher | scores | t1 | smm | non | cpn [SUMMARY]
[CONTENT] effective improving | improving | dietary quality | quality | effective | patients | educating | patients dietary quality | patients dietary quality dietitians | education effective improving patients [SUMMARY]
[CONTENT] patients | group | dietary | hd | significantly | education | knowledge | non | score | t1 [SUMMARY]
[CONTENT] patients | group | dietary | hd | significantly | education | knowledge | non | score | t1 [SUMMARY]
[CONTENT] ||| HemoDialysis ||| [SUMMARY]
null
[CONTENT] 58.3 ± | 10.1 years ||| CPN | Non-C | 9.5 | 3.0 ± | 5.5 | 0.04 ||| Non-C | CPN [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| HemoDialysis ||| ||| May 2019 to April 2021 | four | Non-C | CPN ||| ||| 94 | 2 months | 2 months | 3-day ||| ||| ||| 58.3 ± | 10.1 years ||| CPN | Non-C | 9.5 | 3.0 ± | 5.5 | 0.04 ||| Non-C | CPN ||| [SUMMARY]
[CONTENT] ||| HemoDialysis ||| ||| May 2019 to April 2021 | four | Non-C | CPN ||| ||| 94 | 2 months | 2 months | 3-day ||| ||| ||| 58.3 ± | 10.1 years ||| CPN | Non-C | 9.5 | 3.0 ± | 5.5 | 0.04 ||| Non-C | CPN ||| [SUMMARY]
Descriptive Epidemiology in Mexican children with cancer under an open national public health insurance program.
25355045
All the children registered with the National Council for the Prevention and Treatment of Childhood Cancer were analyzed. The rationale for this Federal Government Council is to financially support the treatment of all children registered in this system. All patients are treated within a nationwide network of 55 certified public hospitals.
BACKGROUND
In the current study, data from 2007 to 2012 are presented for all patients (0-18 years) with a pathological diagnosis of leukemia, lymphoma and solid tumors. The parameters analyzed were prevalence, incidence, mortality, and abandonment rate.
METHODS
A diagnosis of cancer was documented in 14,178 children. The incidence was 156.9/million/year (2012). The median age was 4.9 years. The most common childhood cancer was leukemia, which occurred in 49.8% of patients (2007-2012) and had an incidence rate of 78.1/million/year (2012). The national mortality rate was 5.3/100,000 in 2012; however, in the group aged 15 to 18 years it reached 8.6.
RESULTS
The study demonstrates that there is a high incidence of childhood cancer in Mexico. In particular, the results reveal an elevated incidence and prevalence of leukemia, especially from 0 to 4 years of age. Only 4.7% of these patients abandoned treatment. The clinical outcome for all of the children studied has improved since the establishment of this national program.
CONCLUSIONS
[ "Adolescent", "Child", "Child, Preschool", "Female", "Humans", "Incidence", "Infant", "Infant, Newborn", "Insurance, Health", "Male", "Mexico", "Neoplasms", "Prevalence", "Public Health Surveillance", "Registries" ]
4228174
Background
One of the biggest problems in developing nations is the high incidence of childhood cancer combined with the limited financial resources available to develop a solid national program for the early diagnosis and management of these patients. We believe that pediatric cancer should now be considered a global child health priority [1]. A worldwide childhood cancer survey in 2012 estimated a higher incidence in developing countries (147,000 cancers/year) than in developed nations [2]. This disease is expected to continue growing because the populations of these countries are younger and expanding. Since 2010, cancer has been the second leading cause of mortality in Mexico among children between 4 and 15 years old [3]. This represents a current national health problem, and there is a need to expand, fortify and improve early diagnosis and treatment. As we have previously documented, in developing countries including Mexico there is increasing evidence that the mortality rate is higher than in developed countries [4]. This might be related to late referral to expert institutions for diagnosis and early multidisciplinary treatment. Close to half of Mexicans, children and adults alike, are covered by a socialized medical program. This system comprises large hospitals and clinics nationwide and has been run by the Federal Government Socialized Medicine Program for more than 50 years. Formal, salaried-sector workers and their families have been able to access pooled-prepayment options through public social security programs. Meanwhile, the other half of Mexicans, including children and adults (n = 51,823,314 in 2011) [4], has been progressively incorporated into a national health insurance program. This universal health coverage is described as "Popular Medical Insurance" (PMI) [5]. The program was developed to provide access to a package of comprehensive health services, including childhood cancer treatment, with financial protection. This is done through a national medical network that runs from primary care clinics to specialized medical hospitals certified by the Federal Government. The program officially started in January 2005; however, it was not until 2007 that full coverage was issued for all childhood cancers [6]. A referral system has been progressively developed within this network to diagnose, treat and follow all of these children. Currently, 55 accredited medical institutions nationwide with departments of pediatric hematology/oncology are working full time, and each institution has at least one certified pediatric hematologist/oncologist. The population enrolled in this program belongs to the lowest socio-economic levels; the parents are mostly non-salaried workers, rural residents and the unemployed. The National Pediatric Cancer Registry under the auspices of the Mexican Federal Government was started in 2006, but it was not until January 2007 that a national registry appeared [7]. This registry includes all Mexican children with cancer regardless of the medical system to which they belong. The purpose of this report is to present the prevalence, incidence, mortality, and rate of abandonment in the PMI program between 2007 and 2012.
Methods
An analysis was performed from 2007 to 2012 on all the children from 0 to 18 years diagnosed with cancer and registered in the PMI program at any of the accredited institutions. The diagnosis of leukemia was established through bone marrow aspiration for cytomorphology, immunophenotyping, DNA index and cytogenetics. The diagnosis of lymphomas and malignant solid tumors was made through biopsy and/or surgical resection for pathology diagnosis. All institutional pathology reports were classified under the International Classification of Childhood Cancer (ICCC-3) [8]. The medical data on each patient were obtained from the National Commission on Health Social Protection (PMI), which is the financial component of the program, and from the National Council for the Prevention and Treatment of Childhood Cancer (CENSIA) headquarters, which is the technical and normative entity. Both bodies are under the Federal Department of Health; they work together collaboratively and exchange all of the patient information through an on-line process. The current work is a retrospective, descriptive study at a national level, and because the data presented were analyzed from the database program, no approval from the Ethics Committee was required. The variables obtained from each patient's hospital registration included: location of the treatment center, name, address, date of birth, age, gender, diagnosis, stage of the disease, follow-up during treatment or under clinical surveillance for more than 3 months, and date of death and/or abandonment. Only patients registered in this program were included. Patients were subjected to standardized treatment protocol regimens according to their respective diagnosis and stage. All protocols were evaluated previously by the National Council of Health [9], followed by CENSIA [10]. The chemotherapy treatment protocols were developed from 2004 to 2006 through a panel of experts that designated a national coordinator and associate coordinators. All of the treatment regimens were based on international cancer treatment protocols that had demonstrated solid therapeutic results. The scope of the current study is to analyze exclusively the epidemiological aspects of this population. This descriptive work comprises patients from 55 accredited institutions registered from January 2007 to December 2012. The institutional accreditation criteria were applied through a methodical evaluation by the Office of Medical Innovation and Medical Quality of the National Health Department and by medical personnel trained for this type of task. From a strategic standpoint, Mexico was divided into 6 geographical areas so as to have a physician supervising each area. The role of this specialist is to organize, assist and solve all the problems connected to treatment regimens and local institutional issues related to the children with cancer. Prevalence was defined as the number of children diagnosed with a given type of cancer in a given year, expressed as the percentage of the disease among Mexican children between 0 and 18 years registered in the study period [2]. The incidence rate was defined as the total number of newly diagnosed cases per year divided by the total population under 18 years registered with a Medical Policy at the PMI, per 1,000,000 population/year (Table 1) [2].
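Stated compactly (this formalization is ours; the paper gives the definitions only in prose), the rates used throughout are

\[
\mathrm{incidence}(y) \;=\; \frac{\text{newly diagnosed cases in year } y}{\text{PMI-registered population aged 0--18 in year } y} \times 10^{6},
\]
\[
\mathrm{mortality}(y) \;=\; \frac{\text{deaths in year } y}{\text{population in year } y} \times 10^{5},
\qquad
\mathrm{prevalence}(t) \;=\; \frac{\text{registered cases of tumor type } t}{\text{all registered cancer cases, 2007--2012}} \times 100\%.
\]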
Age-specific incidence was defined as the total number of cases in a specific age group divided by the total population registered in that age group, per 1,000,000 population/year. The overall mortality rate was determined as the absolute number of deceased patients per 100,000 persons/year [2].

Table 1. Absolute number of children (0-18 years) registered with a Popular Medical Insurance health policy and newly diagnosed children with cancer per year

Year of registration | Population registered with a PMI health policy (yearly increment) | New cases/year
2007 | 15,106,100 | 2,017
2008 | 15,281,375 | 2,229
2009 | 15,497,831 | 2,287
2010 | 15,986,150 | 2,403
2011 | 16,146,011 | 2,571
2012 | 17,014,321 | 2,671

All patients presented in this study had a pathological diagnosis of cancer and were treated, with a minimal follow-up of 48 months, at registered pediatric oncology units. The program started on January 5th, 2005 [10], with 16 original participating institutions. At the beginning, only acute leukemia was covered; all childhood cancers were progressively included, and from January 2007 onward all types of childhood cancer were registered and treated under the national protocols. Therefore, the current work includes only cases collected from January 2007 to December 2012. Descriptive statistical analysis was performed using GraphPad InStat version 3.0.
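As a quick plausibility check, the crude rates can be recomputed directly from the counts in Table 1. The short sketch below is illustrative only; it is not the authors' code, and the small deviations from the published Table 3 values in some years (e.g., 2009) presumably reflect the exact denominators the authors used:

    # Recompute crude incidence from the published Table 1 counts (illustrative only).
    table1 = {  # year: (PMI-registered population aged 0-18, new cancer cases)
        2007: (15_106_100, 2_017),
        2008: (15_281_375, 2_229),
        2009: (15_497_831, 2_287),
        2010: (15_986_150, 2_403),
        2011: (16_146_011, 2_571),
        2012: (17_014_321, 2_671),
    }

    for year, (population, cases) in table1.items():
        incidence = cases / population * 1_000_000  # new cases per million registered children
        print(f"{year}: {incidence:.1f}/million")   # 2012 -> ~157.0, vs 156.9 reported

    # Prevalence as used in the paper: a tumor type's share of all registered cases.
    leukemia_cases, all_cases = 7_066, 14_178
    print(f"leukemia share 2007-2012: {leukemia_cases / all_cases:.1%}")  # ~49.8%, as reported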
Results
From January 2007 to 2012 there were 14,178 patients registered with cancer (Table 2). The most common cancer was leukemia (49.8%), followed by lymphoma (9.9%) and central nervous system tumors (9.4%). When the total cohort of children was grouped by age, the highest incidence was in children between 0 and 4 years and the lowest in those 15 to 18 years throughout these 6 years (Figure 1). In terms of gender, males were predominantly affected [7,868 (55.5%)], while the frequency was slightly lower in females [6,310 (44.5%)], for a M:F ratio of 1.2:1. Depending upon the type of childhood malignancy, males accounted for 54.4% of patients with acute leukemia, 54.2% with brain tumors and 64% of the lymphoma cases (both Hodgkin's and non-Hodgkin's subtypes).

Table 2. Prevalence of PMI childhood and adolescent cancers, 2007-2012 [cases n (%) per year; International Classification of Childhood Cancers (ICCC-3)]

Diagnosis | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | Total 2007-2012 n (%)
Leukemia | 1,056 (52.4) | 1,122 (50.3) | 1,133 (49.5) | 1,204 (50.1) | 1,222 (47.5) | 1,329 (49.7) | 7,066 (49.8)
Lymphomas | 207 (10.3) | 206 (9.2) | 244 (10.7) | 247 (10.3) | 255 (9.9) | 258 (9.6) | 1,417 (9.9)
Intracranial neoplasms | 188 (9.3) | 198 (8.9) | 198 (8.7) | 228 (9.5) | 254 (9.9) | 277 (10.3) | 1,343 (9.4)
Germ cell tumors | 53 (2.6) | 91 (4.1) | 118 (5.2) | 146 (6.1) | 153 (6.0) | 152 (5.6) | 713 (5.0)
Soft tissue sarcomas | 108 (5.4) | 110 (4.9) | 95 (4.2) | 79 (3.3) | 120 (4.7) | 112 (4.1) | 624 (4.4)
Osteosarcoma | 92 (4.6) | 104 (4.7) | 126 (5.5) | 74 (3.1) | 94 (3.7) | 77 (2.8) | 567 (3.9)
Retinoblastoma | 75 (3.7) | 97 (4.4) | 70 (3.1) | 93 (3.9) | 104 (4.0) | 100 (3.7) | 539 (3.8)
Renal tumors | 80 (4.0) | 84 (3.8) | 67 (2.9) | 89 (3.7) | 80 (3.1) | 87 (3.2) | 487 (3.4)
Miscellaneous reticular neoplasms | 40 (2.0) | 47 (2.1) | 58 (2.5) | 63 (2.6) | 79 (3.1) | 95 (3.5) | 382 (2.6)
Hepatic tumors | 30 (1.5) | 44 (2.0) | 35 (1.5) | 51 (2.1) | 67 (2.6) | 48 (1.7) | 275 (1.9)
Neuroblastoma | 39 (1.9) | 46 (2.1) | 54 (2.4) | 38 (1.6) | 51 (2.0) | 46 (1.7) | 274 (1.9)
Ewing tumor and related sarcomas of bone | 17 (0.8) | 31 (1.4) | 35 (1.5) | 47 (2.0) | 38 (1.5) | 24 (0.8) | 192 (1.3)
Other solid tumors | 32 (1.6) | 49 (2.2) | 54 (2.4) | 44 (1.8) | 54 (2.1) | 66 (2.4) | 299 (2.1)
Total n cases | 2,017 | 2,229 | 2,287 | 2,403 | 2,571 | 2,671 | 14,178

Figure 1. Incidence by age group during the 6 years of the study.

The national childhood cancer incidence rate in the PMI program increased over the six years of this study (Table 3). In 2012 the incidence was 156.9/million. For context, the population of Mexico increased from 106,900,000 in 2007 to 113,336,538 in 2012 [3].

Table 3. Incidence* of PMI childhood and adolescent cancers

Diagnosis | 2007 | 2008 | 2009 | 2010 | 2011 | 2012
Leukemia | 69.9 | 73.4 | 73.1 | 75.3 | 75.6 | 78.1
Lymphomas | 13.7 | 13.4 | 15.7 | 15.4 | 15.7 | 15.1
Intracranial neoplasms | 12.4 | 12.9 | 12.7 | 14.2 | 15.7 | 16.2
Germ cell tumors | 3.5 | 5.9 | 7.6 | 9.1 | 9.4 | 8.9
Soft tissue sarcomas | 7.1 | 7.1 | 6.1 | 4.6 | 4.9 | 6.5
Osteosarcoma | 6.0 | 6.8 | 8.1 | 4.7 | 6.2 | 4.5
Retinoblastoma | 4.9 | 6.3 | 4.5 | 5.8 | 6.5 | 5.8
Renal tumors | 5.2 | 5.4 | 4.3 | 5.5 | 5.0 | 5.1
Miscellaneous reticular neoplasms | 2.6 | 3.0 | 3.7 | 3.9 | 4.9 | 5.5
Hepatic tumors | 1.9 | 2.8 | 2.2 | 2.3 | 3.1 | 2.8
Neuroblastoma | 2.5 | 3.0 | 3.4 | 3.1 | 4.1 | 2.7
Ewing tumor and related sarcomas of bone | 1.1 | 2.0 | 2.2 | 2.9 | 2.3 | 1.4
Other solid tumors | 2.1 | 3.2 | 3.4 | 2.7 | 3.3 | 3.8
Total incidence/year | 133.5 | 145.8 | 146.2 | 150.3 | 159.2 | 156.9

*Incidence: total number of newly diagnosed cases per year / total population under 18 years, per 1,000,000 population/year.
In 2012 the national population census reported 32,972,300 Mexicans between 0 and 18 years (37% of the population). However, only 17,014,321 of the eligible population under 18 years was registered in the PMI program. In the process of tumor classification, Langerhans-cell histiocytosis was included as a miscellaneous reticular neoplasm, following the ICCC-3 pathologic classification. The incidence of leukemia was 78.1/1,000,000/year (2012), and its prevalence over the six years of the current analysis was 49.8%. A predominance of acute lymphoblastic leukemia over acute and chronic non-lymphoblastic leukemia was documented in all age groups (Table 4). Lymphomas were documented in 1,417 patients, of whom 52.2% had non-Hodgkin's lymphoma and 47.8% presented with Hodgkin's lymphoma; this difference was not statistically significant.

Table 4. Prevalence of 7,066 leukemias by age group registered in the PMI from 2007-2012 [n (%)]

Type of leukemia | 0-4 | 5-9 | 10-14 | 15-18 | Total (%)
Acute lymphoblastic leukemia | 2,346 (40) | 2,170 (37) | 762 (13) | 586 (10) | 5,864 (83)
Acute myelogenous leukemia | 278 (30) | 195 (21) | 185 (20) | 269 (29) | 927 (13.1)
Chronic myelocytic leukemia | 45 (23) | 54 (27) | 55 (28) | 44 (22) | 198 (2.8)
Myelodysplastic syndromes | 8 (10) | 22 (28) | 24 (32) | 23 (30) | 77 (1.0)
Total number | 2,677 | 2,441 | 1,026 | 922 | 7,066

This trend clearly indicates that the participation of the PMI-accredited institutions and their respective regional coordinators had a beneficial effect on treatment compliance. The national mortality rate by state of residence went from 5.8 in 2007 to 5.3 in 2012 per 100,000/year. In this last year, the lowest rate was in the group under 1 year, at 2.5, and the highest in the group of adolescents between 15 and 18, at 8.6 (Table 5).

Table 5. Mortality rate* by geographical location and by age group in accredited pediatric cancer institutions in 2012

State (overall rate) | <1 | 1-4 | 5-9 | 10-14 | 15-18
NORTHWEST STATES (5.0)
1. Baja California (5.4) | 5.1 | 3.3 | 3.8 | 5.7 | 10.0
2. Baja California South (6.7) | 0.0 | 1.9 | 11.8 | 7.7 | 2.6
3. Chihuahua (5.5) | 4.0 | 5.4 | 4.8 | 2.9 | 11.6
4. Durango (3.1) | 6.0 | 2.2 | 1.2 | 3.4 | 4.9
5. Nayarit (3.7) | 4.6 | 4.4 | 0.9 | 4.6 | 4.6
6. Sinaloa (4.5) | 1.9 | 3.3 | 4.0 | 4.7 | 6.1
7. Sonora (6.6) | 1.9 | 6.1 | 5.5 | 4.1 | 12.9
NORTH EAST STATES (4.7)
8. Coahuila (5.1) | 3.7 | 6.5 | 2.6 | 4.7 | 6.9
9. Nuevo León (4.7) | 3.5 | 5.7 | 3.4 | 4.1 | 5.5
10. San Luis Potosí (4.4) | 3.8 | 1.9 | 2.2 | 5.1 | 8.7
11. Tamaulipas (4.9) | 0.0 | 5.2 | 4.7 | 2.2 | 9.3
CENTRAL STATES (5.4)
12. Aguascalientes (4.6) | 0.0 | 3.9 | 3.1 | 3.1 | 10.8
13. Colima (7.1) | 7.8 | 8.0 | 4.8 | 3.2 | 13.2
14. Guanajuato (5.5) | 3.5 | 4.2 | 4.5 | 5.1 | 8.4
15. Jalisco (6.3) | 4.8 | 5.1 | 4.6 | 5.2 | 11.1
16. Michoacán (4.8) | 5.6 | 4.2 | 4.1 | 2.9 | 7.8
17. Querétaro (5.3) | 2.7 | 3.4 | 5.8 | 4.2 | 8.0
18. Zacatecas (4.2) | 3.3 | 9.0 | 1.9 | 1.9 | 4.4
CENTRAL METROPOLITAN (5.1)
19. Hidalgo (4.8) | 5.5 | 3.2 | 4.8 | 2.2 | 9.4
20. México (4.7) | 1.6 | 4.2 | 3.8 | 4.0 | 7.6
21. Mexico City (5.9) | 0.8 | 4.8 | 5.4 | 4.8 | 9.6
22. Tlaxcala (5.2) | 4.0 | 3.0 | 4.8 | 4.8 | 8.3
SOUTHERN STATES (5.4)
23. Morelos (5.2) | 5.9 | 1.5 | 5.3 | 6.4 | 5.7
24. Oaxaca (5.4) | 3.7 | 5.9 | 4.2 | 4.6 | 7.1
25. Puebla (5.5) | 0.8 | 3.8 | 4.4 | 5.5 | 9.3
26. Veracruz (5.7) | 1.4 | 3.4 | 5.0 | 4.3 | 11.9
SOUTHEAST STATES (5.7)
27. Campeche (6.1) | 6.1 | 3.0 | 2.4 | 12.3 | 4.0
28. Chiapas (5.7) | 1.8 | 7.8 | 3.5 | 3.8 | 9.1
29. Guerrero (3.9) | 4.1 | 4.4 | 2.4 | 3.1 | 5.8
30. Quintana Roo (5.7) | 0.0 | 7.7 | 5.0 | 4.5 | 6.2
31. Tabasco (7.7) | 2.2 | 8.8 | 8.8 | 4.9 | 8.2
32. Yucatán (5.1) | 0.0 | 4.6 | 4.9 | 3.2 | 8.7
NATIONAL RATE (5.3) | 2.5 | 4.7 | 4.2 | 4.3 | 8.6

*Mortality rate per 100,000 children/year.
The percentage of patients who abandoned treatment averaged 4.7%/year, ranging from 5.2% in 2007 to 4.5% in 2012.

Discussion

The incidence of cancer in children under 18 is increasing, especially in developing countries including Mexico [11-14]. To address this problem, several key initiatives have been established by the Mexican Federal Government, including the development of the PMI, which applies a diagonal approach to health insurance. Horizontal, population-based coverage is provided for all public and community health services, and a package of essential health services is managed at the state level for all those enrolled with the PMI [5]. The initiatives of this program involve supporting hospital accreditation and providing financial assistance to each institution for the treatment of qualifying children with cancer. These initiatives [14] also include the endorsement of treatment protocols/guidelines, the supply of hospital equipment to pediatric oncology units, and technical and financial support for pediatric oncology training programs [13]. By 2011, only 92/150 (61.3%) pediatric hematologists/oncologists in Mexico were working in accredited hospitals under the PMI program. Rigorous evaluation processes have been underway since the PMI was established, and the results are encouraging for childhood cancer: adherence to treatment was 48% in 2007 and rose to 95% by 2012.

There were 35,591 children treated for cancer among the 55 PMI-accredited Mexican institutions during the period of analysis, comprising new patients (14,178) plus 21,413 follow-up patients by the end of 2012. That being said, a pediatric oncologist in the PMI program treated an average of 392 patients in 6 years. In institutions that have one pediatric oncologist for an entire state, the patient load increases significantly, together with the imperative to provide a continuously high standard of patient care. However, few institutions in Mexico have as many as 8 pediatric oncologists on their full-time staff. This national patient load is clearly higher than recommended for developed countries.

In the current study, we report findings from a larger number of patients in the Mexican PMI program than the previous study [4]. This represents more than half of the national cancer registry, which includes both the socialized-medicine hospitals and the PMI program. The number of children with cancer registered in the PMI program is substantial and merits further analysis, especially since these children belong to the most socioeconomically deprived bracket of Mexicans. The national childhood cancer incidence [7] in Mexico in 2010, combining PMI patients and those registered in socialized public healthcare systems, accounted for 4,653 patients, with an incidence of 145.5/million/year. Previously published SEER data from the United States [15] indicated an incidence of childhood cancer (0-19 years) of 173.4/million persons/year. In our current study, we found a higher incidence of childhood cancers than previously published [4], which indicates the need for a more vigorous national public health program for this disease [1]. The active participation of the federal government is required; this could include providing additional funds to develop better-equipped medical institutions and promoting the training of certified pediatricians to enroll in pediatric oncology fellowship programs.

As outlined before [2], the high prevalence of leukemia deserves further research, especially given the constant exposure of particular groups of children in Mexico and elsewhere to organophosphate-based pesticides [16]. Other factors need to be taken into account, including paternal smoking [17], fertilizers and proximity to oil fields [18], especially around the time of conception. Exposure to early infections constitutes another factor considered in the possible etiology of acute leukemia of childhood, especially in the very young [19]. Another factor that might contribute to the etiology of childhood leukemia is in-vitro fertilization, which is reported to be associated with an increased risk of early-onset acute lymphoblastic leukemia in the offspring [20].

According to the Mexican national statistics [3], of 112 million inhabitants, 50 million live below the poverty line. From the public health point of view, this fact should direct attention to the triangle of poverty, malnutrition and early infection that most likely plays a very important role in the etiology and incidence of childhood cancer in the PMI population. Recent studies [21] have suggested that the incidence of cancer among children living in poverty is much higher than in more affluent populations. This deprivation might account for the higher incidence of acute lymphoblastic leukemia in comparison with children treated in the socialized healthcare systems in Mexico, where the socioeconomic and cultural level is much higher.

Due to improved national public health measures, decreases in perinatal diseases, malnutrition, gastroenteritis and pneumonia have reduced the pediatric mortality rate in Mexico very significantly, allowing children to reach ages at which childhood cancer is more common. It is ironic that improved survival from these diseases may be a factor in the increased incidence rates of cancer. Incidence and mortality rates are not necessarily correlated in Mexico, as has been outlined [22], but the standard of medical care is constantly improving and the PMI program has had a very beneficial impact on the prognosis of low-income pediatric cancer patients.

In 2004, the National Council of Health of the Federal Health Department and a group of pediatric hematologists/oncologists produced technical protocols for treating the most commonly occurring childhood malignancies in Mexico. We formulated 26 treatment protocols based on international treatment guidelines [9], which included multidisciplinary treatment approaches for all childhood cancers. One of the principal achievements of this program is the progressive improvement of overall survival. We identified 11 states in Mexico with an overall survival for all childhood malignancies beyond 72%, versus 10 states in which overall survival was less than 65%. The explanation for the latter finding is probably multifactorial and includes a limited number of pediatric hematologists/oncologists in these areas, advanced and complicated disease at diagnosis, small and space-restricted units, low numbers of pediatric oncology nurses, and a lack of full-time pediatric specialists for the multidisciplinary approach, including surgeons, intensive care specialists and radiotherapists. The institutions with the best survival rates are affiliated with university hospitals and have at least a residency program in pediatrics and a training program in childhood cancer for the nursing personnel.

Regarding mortality, we have been able to maintain a similar rate from 2007 to 2012. However, compared with other developing countries, we need to work harder to decrease mortality even further [23-25]. A very important finding is the high mortality rate in 2012 among adolescents from 15 to 18 years old. One of the observations we have documented in this respect is that more than half of these patients had very advanced disease or unfavorable prognostic signs when they arrived for medical care of their cancer. There are multiple possible explanations for these findings, and they deserve to be investigated and reported in future scientific publications.

Continued monitoring of the treatment protocols, and the avoidance of protocol violations, could eventually lead to better results and a decrease in the mortality rate. On a collegiate basis, the protocols are revised every 6 months among all participating institutions and the Council of General Health of Mexico. Appropriate modifications are made where there is a high rate of relapse and/or complications.

Chatenoud et al. [26] reported a standardized mortality rate (per 100,000/year) of 6.45 among boys and 5.42 among girls in Mexico in the period 2005-2006. In 2012, we documented a standardized overall mortality rate of 5.3 without a significant difference between genders. We recognize that our mortality rate is higher than in other Latin American countries [27-29], in spite of the PMI program. Mexican children (<18 years) with cancer had an average lifespan of 10.8 years, and were therefore deprived of an average of 59.2 years of life [6], while in developed countries the survival rate in childhood cancer is higher [30]. It is important to direct our attention to pediatric oncology in developing nations; in most Latin American countries there is a failure to make early diagnoses, a lack of treatment regimens and clinical trials, and inadequate health care systems [31, 32]. Another point that deserves mention is the continuously low rate of abandonment maintained during the six years of this study and up to the present time, consistent with international efforts to eradicate this problem [32].
Conclusion
The national childhood cancer registration in the PMI program denotes a high prevalence and incidence among Mexican children. The current experience demonstrates that the creation of a cooperative treatment group for childhood cancer in a developing nation can be accomplished through the efforts of a multidisciplinary approach. The combination of horizontal coverage of personal health services with a catastrophic fund makes it possible to offer financial protection for childhood cancer, as well as investing in early detection and survivorship care. Our future goal is to continue improving the treatment program and therefore the outcome for these children. We believe that children with cancer worldwide deserve a stronger focus on their needs than they are currently receiving, especially in resource-poor settings.
[ "Childhood cancer", "Incidence", "Prevalence", "Mortality", "Mexican children" ]
Background: Of the biggest problems in developing nations is the high incidence of childhood cancer is the high incidence and the limited financial resources to develop a solid national program for early diagnosis and management of these patients. We believe that pediatric cancer should now be considered a global child health priority [1]. A worldwide childhood cancer survey in 2012 estimates a higher incidence in developing countries (147,000 cancers/year) than in developed nations [2]. It is expected that this disease will continue growing because the populations in these countries are younger and expanding. Since 2010 in Mexico, cancer is the second leading cause of mortality among children between 4 to 15 years old [3]. This represents a current national health problem. There is a need to expand, fortify and improve early diagnosis and treatment. As we have previously documented in developing countries including Mexico, there is increasing evidence that the mortality rate is higher than in developed countries [4]. This might be related to a late referral to expert institutions for diagnosis and early multidisciplinary treatment. Close to half of the Mexican-children and adults-are under a medical socialized program. This system contains large hospitals and clinics nationwide and has been run by the Federal Government Socialized Medicine Program for more than 50 years. Formal, salaried sector workers and their families had been able to access pooled-prepayment options through public social security programs. Meanwhile the other half (n: 51,823,314) of Mexicans, including children and adults (2011) [4] has been progressively incorporating into a national health insurance program. This universal health coverage is described as “Popular Medical Insurance” (PMI) [5]. This program was developed to provide access to a package of comprehensive health services-including childhood cancer treatment with financial protection. This is done through a national medical network from primary care clinics to specialized medical hospitals certified by the Federal Government. The program officially started in January 2005. However, it was not until 2007 that the full coverage was issued for all childhood cancers [6]. A referral system has been progressively developed among this network to diagnose, treat and follow all of these children. Currently 55 accredited medical institutions nationwide with departments of pediatric hem/oncology are working fulltime. In each institution, there is at least one certified pediatric hem/oncologist. The population enrolled in this program belongs to the lowest socio-economic levels, and their parents are mostly non-salaried workers, rural residents and the unemployed. The National Pediatric Cancer Registry under the auspices of the Mexican Federal Government was started in 2006 but it was not until January 2007 that a national registry appeared [7]. This registry includes all Mexican children with cancer regardless of the medical system to which they belong. The purpose of this report is to present the prevalence, incidence, mortality, and rate of abandonment in the PMI program between 2007 and 2012. Methods: An analysis was performed from 2007 to 2012 on all the children from 0 to 18 years diagnosed with cancer registered at the PMI program from all of the accredited institutions. The diagnosis of leukemia was determined through bone marrow aspiration for cytomorphology analysis, immunophenotype, DNA index and cytogenetic. 
The diagnosis of lymphomas and malignant solid tumors was performed through biopsy and/or surgical resection for pathology diagnosis. All institutional pathologic reports were classified under the International Classification of Childhood Cancer (ICCC-3) [8]. The medical data on each patient was obtained from the National Commission on Health Social Protection (PMI) which is the financial component of the program and from the National Council for the Prevention and Treatment of Childhood Cancer (CENSIA) headquarters which is the technical and normative entity. Both strategies are under the Federal Department of Health, and they work together collaboratively and exchange all of the patient information. This data managing collaboration occurs through an on-line process. The current work is a retrospective, descriptive study at a national level and the data presented was analyzed from the data base program, no approval from the Ethics Committee was required. The variables obtained included in each patient’s hospital registration: location of the treatment center, name, address, date of birth, age, gender, diagnosis, stage of the disease, follow up during treatment or under clinical surveillance for more than 3 months, date of death and/or abandonment. Only patients registered in this program were included. Patients were subjected to standardized treatment protocol regimens according to their respective diagnosis, and stage. All protocols were evaluated previously by the National Council of Health [9] follow by CENSIA [10]. The chemotherapy treatment protocols were developed through a panel of experts that designated a national coordinator and associate coordinators. Development of this protocols took place from 2004 to 2006. All of the treatment regimens were developed on the basis of international cancer treatment protocols that demonstrated solid therapeutic results. The scope of the current study is to analyze exclusively the epidemiological aspects of this population. This descriptive work comprises patients from 55 accredited institutions registered from January 2007 to December 2012. The institutional accreditation criteria were applied through a methodical evaluation from the Office of Medical Innovation and Medical Quality of the National Health Department and medical personnel trained for this type of task. From a strategic standpoint, Mexico was divided into 6 geographical areas so as to have a physician supervising each area. The role of this specialist is to organize, assist and solve all the problems connected to treatment regimens and local institutional problems related to the children with cancer. Prevalence was defined as the number of children who have been diagnosed with a type of cancer, at a given year and expressed as the percentage of the disease in Mexican children between 0 to 18 years and registered in the period of time of the study [2]. The incidence rate was defined as the total number of newly diagnosed cases per year/the total population under 18 years who were-registered with a Medical Policy at the PMI by 1 million population/year (Table 1) [2]. Age-specific incidence was defined as the total number of cases of a specific age group/the total population registered in that age group by 1,000,000 population/year. 
The overall mortality rate was determined as an absolute number of deceased patients/ 100,000 persons/year [2].Table 1 Absolute number of children (0–18 years) registered with a popular medical insurance health policy and newly diagnosed children with cancer/year Year of registrationPopulation registered with a PMI health policy (yearly increment)New cases/year200715,106,1002,017200815,281,3752,229200915,497,8312,287201015,986,1502,403201116,146,0112,571201217,014,3212,671 Absolute number of children (0–18 years) registered with a popular medical insurance health policy and newly diagnosed children with cancer/year All patients presented in this work study had a pathological diagnosis of cancer, and were treated with a minimal follow-up of 48 months at registered pediatric oncology units. The program started on January 5th 2005 [10], and the number of original participating institutions was 16. At the beginning only acute leukemia was introduced into this program. However all childhood cancers were progressively included, until January 2007 after which all types of childhood cancers were registered and treated under the national protocols. Therefore, the current work includes only cases collected from January 2007 to December 2012. Descriptive statistical analysis was performed using GraphPad Instant version 3.0. Results: From January 2007 to 2012 there were 14,178 patients registered with cancer (Table 2). The most common cancer was leukemia (49.8%), followed by lymphoma (9.9%) and central nervous system tumors (9.4%).When the total cohort of children were grouped by age, it showed that the highest incidence was in children between 0–4 years and the lowest was 15 to 18 years through these 6 years (Figure 1). In terms of gender, males were predominately affected [7868 (55.5%)], while the frequency was slightly lower in females [6310 (44.5%)], with a ratio M: F of 1.2:1. Depending upon the type of childhood malignancy males accounted for 54.4% of patients with acute leukemia, 54.2% with brain tumors and 64% of the lymphoma cases (both Hodgkin’s and non-Hodgkin’s subtypes).Table 2 Prevalence of PMI childhood and adolescent cancers 2007–2012 DiagnosisCases n (%) year 2007Cases n (%) year 2008Cases n (%) year 2009Cases n (%) year 2010Cases n (%) year 2011Cases n (%) year 2012Prevalence 2007–2012 Total n (%)Leukemia1,056 (52.4)1,122 (50.3)1,133 (49.5)1,204 (50.1)1,222 (47.5)1329 (49.7)7,066 (49.8)Lymphomas207 (10.3)206 (9.2)244 (10.7)247 (10.3)255 (9.9)258 (9.6)1,417 (9.9)Intracranial neoplasm’s188 (9.3)198 (8.9)198 (8.7)228 (9.5)254 (9.9)277 (10.3)1,343 (9.4)Germ cell tumors53 (2.6)91 (4.1)118 (5.2)146 (6.1)153 (6.0)152 (5.6)713 (5.0)Soft tissue sarcomas108 (5.4)110 (4.9)95 (4.2)79 (3.3)120 (4.7)112 (4.1)624 (4.4)Osteosarcoma92 (4.6)104 (4.7)126 (5.5)74 (3.1)94 (3.7)77 (2.8)567 (3.9)Retinoblastoma75 (3.7)97 (4.4)70 (3.1)93 (3.9)104 (4.0)100 (3.7)539 (3.8)Renal tumors80 (4.0)84 (3.8)67 (2.9)89 (3.7)80 (3.1)87 (3.2)487 (3.4)Miscellaneous reticular neoplasm’s40 (2.0)47 (2.1)58 (2.5)63 (2.6)79 (3.1)95 (3.5)382 (2.6)Hepatic tumors30 (1.5)44 (2.0)35 (1.5)51 (2.1)67 (2.6)48 (1.7)275 (1.9)Neuroblastoma39 (1.9)46 (2.1)54 (2.4)38 (1.6)51 (2.0)46 (1.7)274 (1.9)Ewing tumor and related sarcomas of bone17 (0.8)31 (1.4)35 (1.5)47 (2.0)38 (1.5)24 (0.8)192 (1.3)Other solid tumors32 (1.6.)49 (2.2)54 (2.4)44 (1.8)54 (2.1)66 (2.4)299 (2.1)Total n cases2,0172,2292,2872,4032,5712,67114,178International Classification of Childhood Cancers (ICCC-3)(7).Figure 1 Incidence by age group during 6 years of the study. 
The national childhood cancer incidence rate in the PMI program increased over the six years of this study (Table 3). In 2012 the incidence was 156.9/million. It was taken into account that the population of Mexico increased from 106,900,000 in 2007 to 113,336,538 in 2012 [3].

Table 3. Incidence* of PMI childhood and adolescent cancers

Diagnosis | 2007 | 2008 | 2009 | 2010 | 2011 | 2012
Leukemia | 69.9 | 73.4 | 73.1 | 75.3 | 75.6 | 78.1
Lymphomas | 13.7 | 13.4 | 15.7 | 15.4 | 15.7 | 15.1
Intracranial neoplasms | 12.4 | 12.9 | 12.7 | 14.2 | 15.7 | 16.2
Germ cell tumors | 3.5 | 5.9 | 7.6 | 9.1 | 9.4 | 8.9
Soft tissue sarcomas | 7.1 | 7.1 | 6.1 | 4.6 | 4.9 | 6.5
Osteosarcoma | 6.0 | 6.8 | 8.1 | 4.7 | 6.2 | 4.5
Retinoblastoma | 4.9 | 6.3 | 4.5 | 5.8 | 6.5 | 5.8
Renal tumors | 5.2 | 5.4 | 4.3 | 5.5 | 5.0 | 5.1
Miscellaneous reticular neoplasms | 2.6 | 3.0 | 3.7 | 3.9 | 4.9 | 5.5
Hepatic tumors | 1.9 | 2.8 | 2.2 | 2.3 | 3.1 | 2.8
Neuroblastoma | 2.5 | 3.0 | 3.4 | 3.1 | 4.1 | 2.7
Ewing tumor and related sarcomas of bone | 1.1 | 2.0 | 2.2 | 2.9 | 2.3 | 1.4
Other solid tumors | 2.1 | 3.2 | 3.4 | 2.7 | 3.3 | 3.8
Total incidence/year | 133.5 | 145.8 | 146.2 | 150.3 | 159.2 | 156.9

*Incidence: total number of newly diagnosed cases per year/total population under 18 years, per 1,000,000 population/year.

In 2012 the national population census reported 32,972,300 Mexicans between 0 and 18 years (37% of the population); however, only 17,014,321 of the eligible population under 18 years were registered in the PMI program. In the process of tumor classification, Langerhans-cell histiocytosis was included as a miscellaneous reticular neoplasm, following the ICCC-3 pathologic classification. The incidence of leukemia was 78.1/1,000,000/year (2012), and its prevalence over the six years of the current analysis was 49.8%. The predominance of acute lymphoblastic leukemia over acute and chronic non-lymphoblastic leukemias was documented in all age groups (Table 4). Lymphomas were documented in 1,417 patients, of whom 52.2% had non-Hodgkin’s lymphoma and 47.8% Hodgkin’s lymphoma; this difference was not statistically significant.

Table 4. Prevalence of 7,066 leukemias by age group registered in the PMI, 2007–2012

Type of leukemia | 0–4, n (%) | 5–9, n (%) | 10–14, n (%) | 15–18, n (%) | Total, n (%)
Acute lymphoblastic leukemia | 2,346 (40) | 2,170 (37) | 762 (13) | 586 (10) | 5,864 (83)
Acute myelogenous leukemia | 278 (30) | 195 (21) | 185 (20) | 269 (29) | 927 (13.1)
Chronic myelocytic leukemia | 45 (23) | 54 (27) | 55 (28) | 44 (22) | 198 (2.8)
Myelodysplastic syndromes | 8 (10) | 22 (28) | 24 (32) | 23 (30) | 77 (1.0)
Total number | 2,677 | 2,441 | 1,026 | 922 | 7,066

This trend clearly indicates that the participation of the PMI-accredited institutions and their respective regional coordinators had a beneficial effect on treatment compliance. The national mortality rate by state of residence fell from 5.8 in 2007 to 5.3 in 2012 per 100,000/year. In this last year, the lowest rate was in the group under 1 year, at 2.5, and the highest in the group of adolescents between 15 and 18, with a rate of 8.6 (Table 5).

Table 5. Mortality rate* by geographical location and by age group in accredited pediatric cancer institutions in 2012

State | Rate | <1 | 1–4 | 5–9 | 10–14 | 15–18
NORTHWEST STATES | 5.0 | | | | |
1. Baja California | 5.4 | 5.1 | 3.3 | 3.8 | 5.7 | 10.0
2. Baja California South | 6.7 | 0.0 | 1.9 | 11.8 | 7.7 | 2.6
3. Chihuahua | 5.5 | 4.0 | 5.4 | 4.8 | 2.9 | 11.6
4. Durango | 3.1 | 6.0 | 2.2 | 1.2 | 3.4 | 4.9
5. Nayarit | 3.7 | 4.6 | 4.4 | 0.9 | 4.6 | 4.6
6. Sinaloa | 4.5 | 1.9 | 3.3 | 4.0 | 4.7 | 6.1
7. Sonora | 6.6 | 1.9 | 6.1 | 5.5 | 4.1 | 12.9
NORTHEAST STATES | 4.7 | | | | |
8. Coahuila | 5.1 | 3.7 | 6.5 | 2.6 | 4.7 | 6.9
9. Nuevo León | 4.7 | 3.5 | 5.7 | 3.4 | 4.1 | 5.5
10. San Luis Potosí | 4.4 | 3.8 | 1.9 | 2.2 | 5.1 | 8.7
11. Tamaulipas | 4.9 | 0.0 | 5.2 | 4.7 | 2.2 | 9.3
CENTRAL STATES | 5.4 | | | | |
12. Aguascalientes | 4.6 | 0.0 | 3.9 | 3.1 | 3.1 | 10.8
13. Colima | 7.1 | 7.8 | 8.0 | 4.8 | 3.2 | 13.2
14. Guanajuato | 5.5 | 3.5 | 4.2 | 4.5 | 5.1 | 8.4
15. Jalisco | 6.3 | 4.8 | 5.1 | 4.6 | 5.2 | 11.1
16. Michoacán | 4.8 | 5.6 | 4.2 | 4.1 | 2.9 | 7.8
17. Querétaro | 5.3 | 2.7 | 3.4 | 5.8 | 4.2 | 8.0
18. Zacatecas | 4.2 | 3.3 | 9.0 | 1.9 | 1.9 | 4.4
CENTRAL METROPOLITAN | 5.1 | | | | |
19. Hidalgo | 4.8 | 5.5 | 3.2 | 4.8 | 2.2 | 9.4
20. México | 4.7 | 1.6 | 4.2 | 3.8 | 4.0 | 7.6
21. Mexico City | 5.9 | 0.8 | 4.8 | 5.4 | 4.8 | 9.6
22. Tlaxcala | 5.2 | 4.0 | 3.0 | 4.8 | 4.8 | 8.3
SOUTHERN STATES | 5.4 | | | | |
23. Morelos | 5.2 | 5.9 | 1.5 | 5.3 | 6.4 | 5.7
24. Oaxaca | 5.4 | 3.7 | 5.9 | 4.2 | 4.6 | 7.1
25. Puebla | 5.5 | 0.8 | 3.8 | 4.4 | 5.5 | 9.3
26. Veracruz | 5.7 | 1.4 | 3.4 | 5.0 | 4.3 | 11.9
SOUTHEAST STATES | 5.7 | | | | |
27. Campeche | 6.1 | 6.1 | 3.0 | 2.4 | 12.3 | 4.0
28. Chiapas | 5.7 | 1.8 | 7.8 | 3.5 | 3.8 | 9.1
29. Guerrero | 3.9 | 4.1 | 4.4 | 2.4 | 3.1 | 5.8
30. Quintana Roo | 5.7 | 0.0 | 7.7 | 5.0 | 4.5 | 6.2
31. Tabasco | 7.7 | 2.2 | 8.8 | 8.8 | 4.9 | 8.2
32. Yucatán | 5.1 | 0.0 | 4.6 | 4.9 | 3.2 | 8.7
NATIONAL RATE | 5.3 | 2.5 | 4.7 | 4.2 | 4.3 | 8.6

*Mortality rate per 100,000 children/year.

The percentage of patients who abandoned treatment was 4.7%/year, ranging from 5.2% in 2007 to 4.5% in 2012.

Discussion: The incidence of cancer in children under 18 is increasing, especially in developing countries including Mexico [11–14]. To address this problem, several key initiatives have been established by the Mexican Federal Government, including the development of the PMI, which applies a diagonal approach to health insurance. Horizontal, population-based coverage is provided for all public and community health services, and a package of essential health services is managed at the state level for all those enrolled with the PMI [5]. The initiatives of this program involve supporting hospital accreditation and providing financial assistance to each institution for the treatment of qualifying children with cancer. These initiatives [14] also include the endorsement of treatment protocols/guidelines, the supply of hospital equipment to pediatric oncology units, and technical and financial support for pediatric oncology training programs [13]. By 2011, only 92/150 (61.3%) pediatric hem/oncologists in Mexico were working in accredited hospitals under the PMI program. Rigorous evaluation processes have been underway since the PMI was established, and the results are encouraging for childhood cancer. Adherence to treatment, which was 48% in 2007, rose to 95% by 2012. There were 35,591 children treated for cancer among the 55 PMI-accredited Mexican institutions during the period of analysis, comprising the 14,178 new patients plus 21,413 follow-up patients by the end of 2012. Thus, a pediatric oncologist in the PMI program treated an average of 392 patients over 6 years. In institutions that have one pediatric oncologist for an entire state, the patient load increases significantly, together with the imperative to provide a continuously high standard of patient care. Few institutions in Mexico have as many as 8 pediatric oncologists on their full-time staff. This national patient load is clearly higher than recommended for developed countries. In our current study, we report findings from a larger number of patients in the Mexican PMI program than the previous study [4].
These patients represent more than half of the national cancer registry, which comprises the socialized-medicine hospitals on one side and the PMI program on the other. The number of children with cancer registered in the PMI program is very substantial and merits further analysis, especially since these children belong to the most socioeconomically deprived bracket of Mexicans. The national childhood cancer incidence [7] in Mexico in 2010, combining PMI patients with those registered in the socialized public healthcare systems, accounted for 4,653 patients, with an incidence of 145.5/million/year. Previously published SEER data from the United States [15] indicated an incidence of childhood cancer (0–19 years) of 173.4/million persons/year. In our current study, we found a higher incidence of childhood cancers than previously published [4], which indicates the need for a more vigorous national public health program for this disease [1]. The active participation of the federal government is required; this could include providing additional funds to develop better-equipped medical institutions and promoting the training of certified pediatricians to enroll in pediatric oncology fellowship programs. As outlined before [2], the high prevalence of leukemia deserves further research, especially given the prevalent and constant exposure of particular groups of children in Mexico and elsewhere to organophosphate-based pesticides [16]. Other factors need to be taken into account, including paternal smoking [17], fertilizers and proximity to oil fields [18], especially around the time of conception. Exposure to early infections is another factor considered in the possible etiology of acute leukemia of childhood, especially in the very young [19]. In-vitro fertilization might also contribute to the etiology of childhood leukemia, as it is reported to be associated with an increased risk of early-onset acute lymphoblastic leukemia in the offspring [20]. According to the Mexican national statistics [3], of 112 million inhabitants, 50 million live below the poverty line. From the public health point of view, this fact should direct attention to the triangle of poverty, malnutrition and early infections that most likely plays a very important role in the etiology and incidence of childhood cancer in the PMI population. Recent studies [21] have suggested that the incidence of cancer among children living in poverty is much higher than in more affluent populations. These deprivations might account for the higher incidence of acute lymphoblastic leukemia in the PMI population compared to children treated in the socialized healthcare systems in Mexico, whose socioeconomic and cultural level is much higher. Owing to improved national public health measures, the decline in perinatal diseases, malnutrition, gastroenteritis and pneumonia has reduced the pediatric mortality rate in Mexico very significantly, allowing children to reach the ages at which childhood cancer is more common. It is ironic that the improved survival of children with these diseases may be a factor in the increased incidence rates of cancer. However, incidence and mortality rates are not necessarily correlated in Mexico, as has been outlined [22]; the standard of medical care is constantly improving, and the PMI program has had a very beneficial impact on the prognosis for low-income pediatric cancer patients.
In 2004, the National Council of Health of the Federal Health Department and a group of pediatric hem/oncologists produced technical protocols for treating the most common childhood malignancies in Mexico. We formulated 26 treatment protocols based on international treatment guidelines [9], which included multidisciplinary treatment approaches for all childhood cancers. One of the principal achievements of this program is the progressive improvement in overall survival. We identified 11 states in Mexico with an overall survival for all childhood malignancies beyond 72%, versus 10 states in which the overall survival was less than 65%. The explanation for the latter finding is multifactorial and includes a limited number of pediatric hem/oncologists in these areas, advanced and complicated disease at diagnosis, small, space-restricted units, a low number of pediatric oncology nurses, and a lack of full-time pediatric specialists for the multidisciplinary approach, including surgeons, intensive care specialists and radiotherapists. The institutions with the best survival rates are affiliated with university hospitals and have at least a residency program in pediatrics and a training program in childhood cancer for the nursing personnel. Regarding mortality, we were able to maintain a similar rate from 2007 to 2012. However, compared to other developing countries, we need to work harder to decrease the mortality even further [23–25]. A very important finding is the high mortality rate in 2012 among adolescents from 15 to 18 years old. One of our observations in this respect is that more than half of these patients had very advanced disease or unfavorable prognostic signs when they arrived for medical care of their cancer. There are multiple possible explanations for these findings, and they deserve to be investigated and reported in future scientific publications. Continued monitoring of the treatment protocols, and avoidance of violations of these regimens, could eventually lead to better results and a decrease in the mortality rate. On a collegial basis, the protocols are revised every 6 months by all participating institutions and the Council of General Health of Mexico, and modifications are made in those instances where there is a high rate of relapse and/or complications. Chatenoud et al. [26] reported standardized mortality rates (per 100,000/year) of 6.45 and 5.42 among boys and girls, respectively, in Mexico in 2005–2006. In 2012, however, we documented a standardized overall mortality rate of 5.3 without a significant difference between genders. We recognize that our mortality rate is higher than in other Latin-American countries [27–29], in spite of the PMI program. Mexican children (<18 years) with cancer had an average lifespan of 10.8 years, losing on average 59.2 years of expected life [6], while in developed countries the survival rate in childhood cancer is considerably higher [30]. It is important to direct attention to pediatric oncology in developing nations. In most Latin-American countries there is a failure to make early diagnoses, a lack of treatment regimens and clinical trials, and inadequate health care systems [31, 32].
Another point that deserves mention is the continuously low rate of abandonment maintained during the six years of this study and up to the present time, consistent with international efforts to eradicate this problem [32]. Conclusion: The national childhood cancer registration in the PMI program denotes a high prevalence and incidence among Mexican children. The current experience demonstrates that the creation of a cooperative treatment group for childhood cancer in a developing nation can be accomplished through the efforts of a multidisciplinary approach. The combination of horizontal coverage of personal health services with a catastrophic fund makes it possible to offer financial protection for childhood cancer, as well as to invest in early detection and survivorship care. Our future goal is to continue improving the treatment program and therefore the outcome for these children. We believe that children with cancer worldwide deserve a stronger focus on their needs than they currently receive, especially in resource-poor settings.
Background: All the children registered at the National Council for the Prevention and Treatment of Childhood Cancer were analyzed. The rationale for this Federal Government Council is to financially support the treatment of all children registered in this system. All patients are within a network of 55 public certified hospitals nationwide. Methods: In the current study, data from 2007 to 2012 are presented for all patients (0-18 years) with a pathological diagnosis of leukemia, lymphoma or solid tumors. The parameters analyzed were prevalence, incidence, mortality, and abandonment rate. Results: A diagnosis of cancer was documented in 14,178 children. The incidence was 156.9/million/year (2012). The median age was 4.9 years. The most common childhood cancer was leukemia, which occurred in 49.8% of patients (2007-2012) and had an incidence rate of 78.1/million/year (2012). The national mortality rate was 5.3/100,000 in 2012; however, in the group between 15 and 18 years it reached 8.6. Conclusions: The study demonstrates that there is a high incidence of childhood cancer in Mexico. In particular, the results reveal an elevated incidence and prevalence of leukemia, especially from 0 to 4 years. Only 4.7% of these patients abandoned treatment. The clinical outcome for all of the children studied has improved since the establishment of this national program.
Background: One of the biggest problems in developing nations is the high incidence of childhood cancer combined with the limited financial resources to develop a solid national program for early diagnosis and management of these patients. We believe that pediatric cancer should now be considered a global child health priority [1]. A worldwide childhood cancer survey in 2012 estimated a higher incidence in developing countries (147,000 cancers/year) than in developed nations [2]. This disease is expected to continue growing because the populations in these countries are younger and expanding. Since 2010, cancer has been the second leading cause of mortality in Mexico among children between 4 and 15 years old [3]. This represents a current national health problem, and there is a need to expand, fortify and improve early diagnosis and treatment. As we have previously documented, in developing countries including Mexico there is increasing evidence that the mortality rate is higher than in developed countries [4]. This might be related to late referral to expert institutions for diagnosis and early multidisciplinary treatment. Close to half of Mexicans, children and adults alike, are covered by a socialized medical program. This system contains large hospitals and clinics nationwide and has been run by the Federal Government Socialized Medicine Program for more than 50 years; formal, salaried-sector workers and their families have been able to access pooled-prepayment options through public social security programs. Meanwhile, the other half of Mexicans (n: 51,823,314), including children and adults (2011) [4], has been progressively incorporated into a national health insurance program. This universal health coverage is described as “Popular Medical Insurance” (PMI) [5]. The program was developed to provide access to a package of comprehensive health services, including childhood cancer treatment, with financial protection. This is done through a national medical network ranging from primary care clinics to specialized medical hospitals certified by the Federal Government. The program officially started in January 2005; however, it was not until 2007 that full coverage was issued for all childhood cancers [6]. A referral system has been progressively developed within this network to diagnose, treat and follow all of these children. Currently, 55 accredited medical institutions nationwide with departments of pediatric hem/oncology are working full time, and in each institution there is at least one certified pediatric hem/oncologist. The population enrolled in this program belongs to the lowest socio-economic levels, and their parents are mostly non-salaried workers, rural residents and the unemployed. The National Pediatric Cancer Registry under the auspices of the Mexican Federal Government was started in 2006, but it was not until January 2007 that a national registry appeared [7]. This registry includes all Mexican children with cancer regardless of the medical system to which they belong. The purpose of this report is to present the prevalence, incidence, mortality, and rate of abandonment in the PMI program between 2007 and 2012. Conclusion: The national childhood cancer registration in the PMI program denotes a high prevalence and incidence among Mexican children. The current experience demonstrates that the creation of a cooperative treatment group for childhood cancer in a developing nation can be accomplished through the efforts of a multidisciplinary approach.
The combination of horizontal coverage of personal health services with a catastrophic fund makes it possible to offer financial protection for childhood cancer, as well as investing in early detection and survivorship care. Our future goal is to continue improving the treatment program and therefore the outcome for these children. We believe that children with cancer worldwide deserve a stronger focus on their needs than they are currently receiving, especially in resource-poor settings.
Background: All the children registered at the National Council for the Prevention and Treatment of Childhood Cancer were analyzed. The rationale for this Federal Government Council is to financially support the treatment of all children registered in this system. All patients are within a network of 55 public certified hospitals nationwide. Methods: In the current study, data from 2007 to 2012 are presented for all patients (0-18 years) with a pathological diagnosis of leukemia, lymphoma or solid tumors. The parameters analyzed were prevalence, incidence, mortality, and abandonment rate. Results: A diagnosis of cancer was documented in 14,178 children. The incidence was 156.9/million/year (2012). The median age was 4.9 years. The most common childhood cancer was leukemia, which occurred in 49.8% of patients (2007-2012) and had an incidence rate of 78.1/million/year (2012). The national mortality rate was 5.3/100,000 in 2012; however, in the group between 15 and 18 years it reached 8.6. Conclusions: The study demonstrates that there is a high incidence of childhood cancer in Mexico. In particular, the results reveal an elevated incidence and prevalence of leukemia, especially from 0 to 4 years. Only 4.7% of these patients abandoned treatment. The clinical outcome for all of the children studied has improved since the establishment of this national program.
4,475
267
[ 563 ]
5
[ "cancer", "children", "childhood", "program", "pmi", "year", "incidence", "national", "rate", "years" ]
[ "rate childhood cancer", "cancer incidence mexico", "children cancer initiatives", "worldwide childhood cancer", "mexico pediatric oncologists" ]
[CONTENT] Childhood cancer | Incidence | Prevalence | Mortality | Mexican children [SUMMARY]
[CONTENT] Childhood cancer | Incidence | Prevalence | Mortality | Mexican children [SUMMARY]
[CONTENT] Childhood cancer | Incidence | Prevalence | Mortality | Mexican children [SUMMARY]
[CONTENT] Childhood cancer | Incidence | Prevalence | Mortality | Mexican children [SUMMARY]
[CONTENT] Childhood cancer | Incidence | Prevalence | Mortality | Mexican children [SUMMARY]
[CONTENT] Childhood cancer | Incidence | Prevalence | Mortality | Mexican children [SUMMARY]
[CONTENT] Adolescent | Child | Child, Preschool | Female | Humans | Incidence | Infant | Infant, Newborn | Insurance, Health | Male | Mexico | Neoplasms | Prevalence | Public Health Surveillance | Registries [SUMMARY]
[CONTENT] Adolescent | Child | Child, Preschool | Female | Humans | Incidence | Infant | Infant, Newborn | Insurance, Health | Male | Mexico | Neoplasms | Prevalence | Public Health Surveillance | Registries [SUMMARY]
[CONTENT] Adolescent | Child | Child, Preschool | Female | Humans | Incidence | Infant | Infant, Newborn | Insurance, Health | Male | Mexico | Neoplasms | Prevalence | Public Health Surveillance | Registries [SUMMARY]
[CONTENT] Adolescent | Child | Child, Preschool | Female | Humans | Incidence | Infant | Infant, Newborn | Insurance, Health | Male | Mexico | Neoplasms | Prevalence | Public Health Surveillance | Registries [SUMMARY]
[CONTENT] Adolescent | Child | Child, Preschool | Female | Humans | Incidence | Infant | Infant, Newborn | Insurance, Health | Male | Mexico | Neoplasms | Prevalence | Public Health Surveillance | Registries [SUMMARY]
[CONTENT] Adolescent | Child | Child, Preschool | Female | Humans | Incidence | Infant | Infant, Newborn | Insurance, Health | Male | Mexico | Neoplasms | Prevalence | Public Health Surveillance | Registries [SUMMARY]
[CONTENT] rate childhood cancer | cancer incidence mexico | children cancer initiatives | worldwide childhood cancer | mexico pediatric oncologists [SUMMARY]
[CONTENT] rate childhood cancer | cancer incidence mexico | children cancer initiatives | worldwide childhood cancer | mexico pediatric oncologists [SUMMARY]
[CONTENT] rate childhood cancer | cancer incidence mexico | children cancer initiatives | worldwide childhood cancer | mexico pediatric oncologists [SUMMARY]
[CONTENT] rate childhood cancer | cancer incidence mexico | children cancer initiatives | worldwide childhood cancer | mexico pediatric oncologists [SUMMARY]
[CONTENT] rate childhood cancer | cancer incidence mexico | children cancer initiatives | worldwide childhood cancer | mexico pediatric oncologists [SUMMARY]
[CONTENT] rate childhood cancer | cancer incidence mexico | children cancer initiatives | worldwide childhood cancer | mexico pediatric oncologists [SUMMARY]
[CONTENT] cancer | children | childhood | program | pmi | year | incidence | national | rate | years [SUMMARY]
[CONTENT] cancer | children | childhood | program | pmi | year | incidence | national | rate | years [SUMMARY]
[CONTENT] cancer | children | childhood | program | pmi | year | incidence | national | rate | years [SUMMARY]
[CONTENT] cancer | children | childhood | program | pmi | year | incidence | national | rate | years [SUMMARY]
[CONTENT] cancer | children | childhood | program | pmi | year | incidence | national | rate | years [SUMMARY]
[CONTENT] cancer | children | childhood | program | pmi | year | incidence | national | rate | years [SUMMARY]
[CONTENT] medical | program | cancer | developed | countries | national | health | system | including | registry [SUMMARY]
[CONTENT] registered | number | medical | year | cancer | work | protocols | diagnosed | diagnosis | policy [SUMMARY]
[CONTENT] 24 | year | 54 | 44 | 64 | 2012 | 73 | 84 | 18 | table [SUMMARY]
[CONTENT] cancer | children | childhood cancer | childhood | efforts | early detection survivorship | early detection survivorship care | investing early detection survivorship | receiving | receiving especially [SUMMARY]
[CONTENT] cancer | children | program | childhood | health | year | national | pmi | incidence | medical [SUMMARY]
[CONTENT] cancer | children | program | childhood | health | year | national | pmi | incidence | medical [SUMMARY]
[CONTENT] the National Council for the Prevention and Treatment of Childhood Cancer ||| this Federal Government Council ||| 55 [SUMMARY]
[CONTENT] 2007 | 2012 | 0-18 years ||| [SUMMARY]
[CONTENT] 14,178 ||| 156.9/million/year | 2012 ||| 4.9 ||| 49.8% | 2007-2012 | 78.1/million/year | 2012 ||| 5.3/100,000 | 2012 | between 15 to 18 years | 8.6 [SUMMARY]
[CONTENT] Mexico ||| 0 to 4 years ||| Only 4.7% ||| [SUMMARY]
[CONTENT] the National Council for the Prevention and Treatment of Childhood Cancer ||| this Federal Government Council ||| 55 ||| 2007 | 2012 | 0-18 years ||| ||| 14,178 ||| 156.9/million/year | 2012 ||| 4.9 ||| 49.8% | 2007-2012 | 78.1/million/year | 2012 ||| 5.3/100,000 | 2012 | between 15 to 18 years | 8.6 ||| Mexico ||| 0 to 4 years ||| Only 4.7% ||| [SUMMARY]
[CONTENT] the National Council for the Prevention and Treatment of Childhood Cancer ||| this Federal Government Council ||| 55 ||| 2007 | 2012 | 0-18 years ||| ||| 14,178 ||| 156.9/million/year | 2012 ||| 4.9 ||| 49.8% | 2007-2012 | 78.1/million/year | 2012 ||| 5.3/100,000 | 2012 | between 15 to 18 years | 8.6 ||| Mexico ||| 0 to 4 years ||| Only 4.7% ||| [SUMMARY]
IL-1 signal affects both protection and pathogenesis of virus-induced chronic CNS demyelinating disease.
22985464
Theiler's virus infection induces chronic demyelinating disease in mice and has been investigated as an infectious model for multiple sclerosis (MS). IL-1 plays an important role in the pathogenesis of both the autoimmune disease model (EAE) and this viral model for MS. However, IL-1 is known to play an important protective role against certain viral infections. Therefore, it is unclear whether IL-1-mediated signaling plays a protective or pathogenic role in the development of TMEV-induced demyelinating disease.
BACKGROUND
Female C57BL/6 mice and B6.129S7-Il1r1tm1Imx/J mice (IL-1R KO) were infected with Theiler's murine encephalomyelitis virus (1 × 10^6 PFU). Differences in the development of demyelinating disease and changes in the histopathology were compared. Viral persistence, cytokine production, and immune responses in the CNS of infected mice were analyzed using quantitative PCR, ELISA, and flow cytometry.
METHODS
Administration of IL-1β, thereby rendering resistant B6 mice susceptible to TMEV-induced demyelinating disease, induced a high level of Th17 response. Interestingly, TMEV infection of IL-1R-deficient, normally resistant C57BL/6 (B6) mice also induced TMEV-induced demyelinating disease. High viral persistence was found in the late stage of viral infection in IL-1R-deficient mice, although there were few differences in the initial anti-viral immune responses and viral persistence levels between the WT B6 and IL-1R-deficient mice. The initial type I IFN responses and the expression of PD-L1 and Tim-3 were higher in the CNS of TMEV-infected IL-1R-deficient mice, leading to deficiencies in T cell function that permit viral persistence.
RESULTS
These results suggest that high IL-1 levels exert a pathogenic role by elevating pathogenic Th17 responses, whereas the lack of IL-1 signals promotes viral persistence in the spinal cord due to insufficient T cell activation, elevating the production of inhibitory cytokines and regulatory molecules. Therefore, the balance of IL-1 signaling appears to be extremely important for protection from TMEV-induced demyelinating disease, and either too much or too little signaling promotes the development of disease.
CONCLUSIONS
[ "Animals", "Demyelinating Diseases", "Disease Models, Animal", "Female", "Interleukin-1beta", "Mice", "Mice, 129 Strain", "Mice, Inbred C57BL", "Mice, Knockout", "Multiple Sclerosis", "Poliomyelitis", "Signal Transduction", "Theilovirus" ]
3462702
Introduction
Toll-like receptors (TLRs) and interleukin-1 receptors (IL-1Rs) are involved in the production of various cytokines that are associated with the innate immune response against many different infectious agents. TLRs and IL-1Rs share many structural similarities and utilize common downstream adaptor molecules after activation by their ligands. In general, the innate immune responses induced by TLRs and IL-1Rs are known to play a protective role against various microbes [1]. However, several recent studies have indicated that these signals may also play a pathogenic role in viral infections [2-4]. In addition to TLRs, IL-1Rs are also considered important innate receptors because IL-1β, in particular, is a prominent cytokine that appears in the early stage of microbial infections [3]. The IL-1R family contains six receptors, including IL-1RI, which recognizes the principal inflammatory cytokine IL-1β and the less inflammatory cytokine IL-1α [1,5]. IL-1β is generated from the cleavage of pro-IL-1β by caspase-1 in inflammasomes after infections, and the downstream signaling cascade of the IL-1β-IL-1R interaction leads to the induction of various proinflammatory cytokines and the activation of lymphocytes [6]. IL-1β-deficient mice show broad host susceptibility to various infections [7,8]. Moreover, IL-1RI-deficient mice are susceptible to certain pathogens, including Listeria monocytogenes [1]. Therefore, these responses to IL-1β are apparently critical for protection from many types of viruses and microbes. However, the level of IL-1β has also been linked to many different inflammatory autoimmune diseases, including diabetes, lupus, arthritis, and multiple sclerosis (MS) [1,4]. Theiler’s murine encephalomyelitis virus (TMEV) is a positive-stranded RNA virus in the Picornaviridae family [9]. TMEV establishes a persistent CNS infection in susceptible mouse strains that results in the development of chronic demyelinating disease, and the system has been studied as a relevant viral model for human multiple sclerosis [10-12]. Cells infected with TMEV produce various proinflammatory cytokines, including type I IFNs, IL-6 and IL-1β [13]. TLR3 and TLR2 are involved in the production of these cytokines following infection with TMEV [14,15]. In addition, melanoma differentiation-associated gene 5 and dsRNA-activated protein kinase R are known to contribute to the production of proinflammatory cytokines [14,16]. These pathways also induce the activation of caspase-1, leading to the generation of IL-1β and IL-1α, which contribute to further cytokine production, such as IL-6, which promotes the development of pathogenic Th17 cells. Because IL-1β signals are associated with both host protection from viral infections and the pathogenesis of inflammatory immune-mediated diseases, here we investigated the role of IL-1β-mediated signals in the development of TMEV-induced demyelinating disease. We have previously reported that Th17 cells preferentially develop in an IL-6-dependent manner after TMEV infection, and that Th17 cells promote persistent viral infection and induce the pathogenesis of chronic demyelinating disease [17]. In addition, our earlier studies indicated that administration of either lipopolysaccharide (LPS) or IL-1β into resistant C57BL/6 (B6) mice, thereby inducing high levels of IL-6 production, renders the mice susceptible to the development of TMEV-induced demyelinating disease [18].
These results suggest that an excessive level of IL-1β exacerbates TMEV-induced demyelinating disease by generating high levels of pathogenic Th17 cells [19]. In this study, we confirmed the role of excessive IL-1β in the generation of a high level of Th17 cells in resistant B6 mice, supporting the pathogenic mechanisms of IL-1β. Furthermore, we utilized IL-1R-deficient mice to investigate the role of IL-1β-mediated signaling in the development of TMEV-induced demyelinating disease. Our results indicate that the lack of IL-1 signaling in resistant B6 mice also led to TMEV-induced demyelinating disease. Initial deficiencies in T cell function, including cytokine production, and high viral persistence in the late stage of viral infection were found in IL-1R-deficient mice. Therefore, the presence of an excessive amount of IL-1 plays a pathogenic role by elevating pathogenic Th17 responses, whereas the lack of IL-1 signals promotes viral persistence in the spinal cord, leading to chronic immune-mediated inflammatory disease.
null
null
Results
Administration of IL-1β promotes a Th17 response to TMEV and exacerbates the pathogenicity of demyelinating disease

We have previously demonstrated that administration of LPS or IL-1β causes resistant C57BL/6 mice to develop demyelinating disease [18]. It has recently been shown that LPS treatment promotes this pathogenesis by elevating the induction of the pathogenic Th17 response [17]. However, it was unknown how IL-1β promotes this pathogenesis. To understand the mechanism, we compared the levels of Th1 and Th17 cells in the CNS of B6 mice treated with either LPS or IL-1β, along with control mice treated with PBS, at 8 days after infection with TMEV (Figure 1). The results clearly indicated that the levels of IL-17A-producing Th17 cells in mice treated with either LPS or IL-1β were significantly elevated compared to PBS-treated control mice (Figure 1A and B). In contrast, the levels of IFN-γ-producing Th1 cells were not different. It is interesting to note that IL-1β-treated mice exceeded the Th17 level of LPS-treated mice. However, the levels of IL-17-producing CD8+ T cells were minimal (not shown), and the levels of IFN-γ-producing CD8+ T cells were also similar among the groups (Figure 1C). These results strongly suggest that IL-1β can promote the pathogenesis of TMEV-induced demyelinating disease by enhancing the induction of pathogenic Th17 cells rather than by altering the Th1 response.

Figure 1. Effects of IL-1β administration on Th17 and Th1 responses in TMEV-infected B6 mice. (A) Levels of IFN-γ- and IL-17-producing CD4+ T cells in B6 mice intraperitoneally treated with PBS, lipopolysaccharide (LPS), or IL-1β during the early stage (−1 and 3 days post-infection (dpi)) of Theiler’s murine encephalomyelitis virus (TMEV) infection (three mice per group) were analyzed using flow cytometry of the pooled central nervous system (CNS) cells at 8 dpi after stimulation with either PBS or anti-CD3/CD28 antibodies. (B) The overall numbers of IL-17-producing cells in the CNS of the above-treated B6 mice (three mice per group) are shown. (C) Levels of IFN-γ-producing CD8+ T cells in the CNS of these groups of mice were analyzed similarly. ***P < 0.001.

IL-1R KO mice are susceptible to TMEV-induced demyelinating disease and display high cellular infiltration into the CNS

Although administration of IL-1β promotes the pathogenesis of TMEV-induced demyelinating disease, the IL-1β produced via NLRP3 inflammasome activation upon viral infection is considered a protective mechanism against microbial infections, promoting the apoptosis of infected cells [24]. To further investigate the potential role of IL-1β-mediated signaling in the development of TMEV-induced demyelinating disease, we compared the development of TMEV-induced demyelinating disease in IL-1R KO mice on a B6 background and control B6 mice (Figure 2A). Every IL-1R KO mouse developed demyelinating disease, while none of the control B6 mice showed clinical signs at 35 days post-infection (dpi). The results clearly indicated that B6 mice deficient in IL-1 signaling became susceptible to the TMEV-induced disease. This is somewhat unexpected because our previous study indicated that administration of IL-1β to B6 mice renders the mice susceptible to the disease, suggesting a pathogenic role for IL-1β in disease development.

Figure 2. The course of TMEV-induced demyelinating disease development, viral persistence levels and CNS-infiltrating mononuclear cells in TMEV-infected B6 and IL-1R KO mice. (A) Frequency and severity of demyelinating disease in B6 (n = 10) and IL-1R knockout (KO) (n = 10) mice were monitored for 70 days after Theiler’s murine encephalomyelitis virus (TMEV) infection. (B) Viral persistence levels in the pooled brains (BR) and spinal cords (SC) of infected mice (three mice per group) at 8, 21 and 70 days post-infection (dpi) were determined by quantitative PCR. Data are expressed as fold induction after normalization to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. The values are expressed as the means ± SD of triplicate experiments. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001. (C) Levels of T cells (CD4+ and CD8+), macrophages (CD11b+CD45high), microglia (CD11b+CD45int), NK cells (NK1.1+CD45+) and granulocytes (Ly6G/6C+CD45+) were assessed using flow cytometry in central nervous system (CNS)-infiltrating mononuclear cells from TMEV-infected C57BL/6 and IL-1R KO mice at 3 and 8 dpi. Numbers in FACS plots represent percentages in the CNS. Data are representative of three experiments using three mice per group.

To correlate the disease susceptibility of IL-1R KO mice with viral persistence in the CNS, the relative viral message levels in the CNS of wild-type (WT) B6 and IL-1R KO mice were compared at days 8, 21 and 70 post-infection with TMEV (Figure 2B). The results showed that the viral load in the spinal cord, but not in the brain, was consistently higher in IL-1R KO mice compared to control B6 mice. These results strongly suggest that IL-1 signaling plays an important role in controlling viral persistence in the spinal cord during the course of TMEV infection. However, it was previously shown that the viral load level alone is not sufficient for the pathogenesis of TMEV-induced demyelinating disease [25]. Thus, we further assessed the levels of cellular infiltration into the CNS of these mice during the early stages (3 and 8 dpi) of viral infection (Figure 2C). The results indicated that infiltration into the CNS of granulocytes, NK cells, macrophages and CD8+ T cells, but not CD4+ T cells, was elevated, particularly at the early stage of viral infection. These results collectively suggest that high viral loads and cellular infiltration into the CNS of resistant B6 mice in the absence of IL-1 signaling lead to the elevated development of TMEV-induced demyelinating disease.
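The viral persistence levels in Figure 2B are reported as fold induction after normalization to GAPDH. The study does not spell out the formula; the usual way to derive such values from real-time PCR threshold cycles is the 2^-ΔΔCt (Livak) method, and the sketch below assumes that standard method. All Ct values are invented for demonstration.

```python
# Illustrative 2^-ddCt (Livak) calculation of qPCR fold induction normalized to GAPDH.
# Assumes the standard Livak method; the Ct values below are hypothetical.

def fold_induction(ct_target: float, ct_gapdh: float,
                   ct_target_ref: float, ct_gapdh_ref: float) -> float:
    delta_ct = ct_target - ct_gapdh              # normalize target to GAPDH in the sample
    delta_ct_ref = ct_target_ref - ct_gapdh_ref  # same normalization in the reference sample
    return 2 ** -(delta_ct - delta_ct_ref)       # fold change relative to the reference

# Hypothetical spinal-cord TMEV transcript Ct values (IL-1R KO sample vs WT reference):
print(fold_induction(ct_target=22.1, ct_gapdh=18.0,
                     ct_target_ref=24.6, ct_gapdh_ref=18.2))  # ~4.9-fold higher
```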
IL-1R KO mice show widely spread mild demyelinating lesions accompanied by patchy axon damage

At 70 days post-infection, the histopathology of TMEV-infected IL-1R KO and B6 mice was compared to correlate disease development with the histopathology of the CNS (Figure 3). A series of histopathological examinations of the spinal cords from both KO and WT mice was conducted after H & E, LFB, and Bielschowsky silver staining. H & E staining was used to evaluate active inflammation and lymphocyte infiltration. LFB specifically stains the axonal myelin sheath and was used to evaluate demyelination. Bielschowsky silver staining stains axons dark brown and was used to evaluate axonal integrity. Lymphocyte infiltration, minor demyelination and axon loss were detected in the CNS, including the brain and spinal cord, of IL-1R KO mice but not of WT B6 mice. Compared to control B6 mice (Figure 3A a-b), IL-1R KO mice (Figure 3A c-d) showed more lymphocyte infiltration in the white matter of the lumbar spinal cord when examined by H & E staining. LFB staining of the adjacent sections showed irregular vacuoles and demyelination in the white matter of the spinal cord in IL-1R KO mice (Figure 3A g-h) and in brain regions including the cerebellum and medulla (not shown). In contrast, normal-appearing myelin and little histopathological change were observed in the control B6 mice (Figure 3A e-f). Bielschowsky silver staining of the adjacent sections also showed irregular vacuolation and mild axon loss in the demyelinated regions of the spinal cord from IL-1R KO mice (Figure 3A k-l) but not in the sections from the WT control mice (Figure 3A i-j). To further compare the cellular infiltration levels in the CNS of these mice, we examined the levels of CD45+ cells, which largely represent infiltrating cells (Figure 3B). Our results clearly showed that the level of CD45+ cells (Figure 3B d), many of which overlap with the H & E staining (Figure 3B c), was higher in the CNS of IL-1R KO mice than in that of the control B6 mice (Figure 3B a-b).

Figure 3. Histopathology of the spinal cord in IL-1R KO and wild-type mice. (A) H & E staining of the spinal cord showed infiltration of inflammatory cells in knockout (KO) mice (c, d) and little infiltration in wild-type (WT) mice (a, b). Luxol Fast Blue (LFB) staining of adjacent sections showed irregular vacuolation and minor demyelination in the white matter of KO mice (g, h) but no loss of myelin in WT mice (e, f). Bielschowsky silver staining of the same area shows the presence of irregular vacuolation and minor axonal loss in KO mice (k, l) but not in WT mice (i, j). Magnification, ×10 and ×40. Black arrows indicate regions of lymphocyte infiltrates, demyelination, or axon loss; thin black squares indicate the areas from the lumbar spinal cord region shown at high magnification (b, f, and j for B6 mice and d, h, and l for IL-1R KO mice). (B) H & E staining of spinal cords of control (a) and IL-1R KO mice (c). The adjacent sections (b and d, respectively) were stained with anti-CD45 antibody (red) for infiltrating cells and counterstained with 4',6-diamidino-2-phenylindole (DAPI) (blue) for nuclei.

Cytokine gene expression is transiently higher in the CNS of IL-1R KO mice during early viral infection

To understand the susceptibility of IL-1R KO mice to TMEV-induced demyelinating disease, we analyzed various cytokine message levels expressed in the CNS of virus-infected control and IL-1R KO mice during the early stages (3, 5, and 8 dpi) of viral infection using real-time PCR (Figure 4). The levels of IFN-α and IFN-β gene expression in IL-1R KO mice were significantly higher than those in B6 mice at 3 dpi, although the levels became similar at 5 and 8 dpi. The expression levels of CXCL-10, which is associated with T cell infiltration, and IL-10, which is associated with viral persistence, were higher at 5 dpi, and this trend was maintained at 8 dpi. However, the expression level of IL-6 in IL-1R KO mice was transiently lower at 5 dpi, while no differences in TNF-α expression were noted. Similarly, the production of the pathogenic T cell cytokine IL-17 was largely unchanged. In contrast, viral RNA and the production of IFN-γ were transiently higher at 3 and 8 dpi. These results suggest that the lack of IL-1 signaling differentially affects viral replication and the expression of various innate and immune cytokines depending on the stage of TMEV infection.

Figure 4. Expression levels of cytokine genes in the CNS of TMEV-infected B6 and IL-1R knockout (KO) mice at 3, 5 and 8 days post-infection (dpi). The relative expression levels of the indicated mRNAs in the central nervous system (CNS) of Theiler’s murine encephalomyelitis virus (TMEV)-infected C57BL/6 and IL-1R KO mice at 3, 5 and 8 dpi were assessed by real-time PCR. Data are expressed as fold induction after normalization to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. The values are expressed as the means ± SD of triplicates. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001.
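The asterisks in Figures 2 and 4 map P values from triplicate measurements onto the conventional significance thresholds. The paper does not state which test was used; purely as a sketch, a two-sample comparison of triplicates with Welch's t-test and the standard asterisk convention might look like this (all data invented):

```python
# Illustrative significance annotation for triplicate measurements.
# The choice of Welch's t-test and the input values are assumptions for demonstration only.
from scipy import stats

def asterisks(p: float) -> str:
    """Map a P value onto the asterisk convention used in the figure legends."""
    return "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"

wt_triplicates = [1.0, 1.2, 0.9]   # hypothetical fold inductions, WT B6
ko_triplicates = [3.8, 4.5, 4.1]   # hypothetical fold inductions, IL-1R KO

t_stat, p_value = stats.ttest_ind(wt_triplicates, ko_triplicates, equal_var=False)
print(p_value, asterisks(p_value))
```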
However, viral RNA and the production of IFN-γ were transiently higher at 3 and 8 dpi. These results suggest that the lack of IL-1 signaling differentially affects viral replication and the expression of various innate and immune cytokines depending on the stage of TMEV infection. Expression level of cytokine genes in the CNS of TMEV infected B6 and IL-1R knockout (KO) mice at 3, 5 and 8 days post-infection (dpi). The relative expression levels of the indicated mRNAs in the central nervous system (CNS) of Theiler’s murine encephalitis virus (TMEV)-infected C57BL/6 and IL-1R KO mice at 3, 5 and 8 dpi were assessed by real-time PCR. Data are expressed by fold induction after normalization to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. The values are expressed as the means ± SD of triplicates. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001). Anti-viral CD4+ T cell responses in the CNS, of virus-infected IL-1R KO mice are lower during the early stage of infection To compare CD4+ T cell responses specific to viral determinants in the CNS of IL-1R KO and WT mice, infiltration levels of CD4+ T cells specific to the predominant CD4+ T cell epitopes were assessed (Figure 5A and B). The levels of IFN-γ-producing CD4+ T cells in response to pan-T cell stimulation (either PMA plus ionomycin or anti-CD3 and anti-CD28 antibodies) were similar between IL-1R KO and control WT B6 mice. However, such CD4+ T cell responses to viral epitopes were proportionally lower in the CNS of virus-infected IL-1R KO mice compared to B6 mice, although the overall levels in the CNS were similar. This discrepancy may be due to the high levels of CXCL-10 expression (Figure 4), which promotes infiltration of T cells, in the CNS of IL-1R KO mice. The levels of IL-17-producing CD4+ T cells in either TMEV-infected IL-1R KO or B6 mice were undetectable. To further determine whether the pattern of CD4+ T cell responses is unique in the CNS of virus-infected mice, we also assessed the T cell responses in the periphery of TMEV-infected IL-1R KO and control B6 mice at 8 and 21 dpi (Figure 5C). Again, the levels of T cell proliferation and the production of key T cell cytokines (IFN-γ and IL-17) against viral epitopes (for both CD4+ and CD8+ T cells) were not drasticaly different between the splenic T cells from IL-1R KO and control B6 mice. Despite the similar low levels of IL-17 production in response to viral epitopes, the IL-17 level was significantly lower in IL-1R KO mice compared to WT B6 mice after robust stimulation with anti-CD3/CD28 antibodies. These results are consistent with the role of IL-1 signaling in promoting IL-17 production [26,27]. Virus-specific CD4+T cell responses in the CNS and the periphery of TMEV-infected B6 and IL-1R knockout (KO) mice. (A) Levels of Th1 and Th17 cells in the central nervous system (CNS) of virus-infected B6 and IL-1R KO mice were assessed using flow cytometry at 8 days post-infection (dpi) after stimulation with PBS, PMA/ionomycin, anti-CD3/CD28 antibodies, or viral epitope peptides (Pep: mixture of 2 μM VP2203-220 and 2 μM VP425-38). The flow cytometric plots show gated CD4+ T cells. (B) The numbers of CNS-infiltrating IFN-γ-producing CD4+ T cells reactive to either viral epitope peptides (CD4: equal mixture of VP2203-220 and VP425-38) or anti-CD3/CD28 antibodies in virus-infected B6 and IL-1R KO mice (three mice each) at 8 and 21 dpi. 
A representative result from two to three similar experiments is shown here. (C) Levels of proliferation and cytokine production by splenic T cells in response to viral epitopes recognized by CD4+ T cells (CD4 Mix: equal mixture of VP2203-220 and VP425-38), CD8+ T cells (VP2121-130), or both CD4+ and CD8+ T cells (anti-CD3/CD28 antibodies) were assessed at 8 and 21 dpi. Values are expressed as the mean of triplicate samples (mean ± SD) from a representative of three experiments. *P < 0.05, **P < 0.01, ***P < 0.001.
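Throughout these legends, triplicate means ± SD are compared by Student's t-test (per the statistical methods) and flagged with the asterisk convention above. A minimal sketch of that convention, with invented triplicate values:

```python
# Minimal sketch of the significance convention used in the figure
# legends: a two-sample Student's t-test on triplicates, with asterisks
# for P < 0.05 / 0.01 / 0.001. The triplicate values are invented.
from scipy import stats

def asterisks(p):
    if p < 0.001: return "***"
    if p < 0.01:  return "**"
    if p < 0.05:  return "*"
    return "ns"

b6 = [1.0, 1.2, 0.9]   # e.g. fold-induction triplicates, B6
ko = [3.1, 2.8, 3.4]   # e.g. fold-induction triplicates, IL-1R KO

t_stat, p_val = stats.ttest_ind(b6, ko)  # Student's t-test (equal variances)
print(f"P = {p_val:.4f} -> {asterisks(p_val)}")
```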
Levels of TMEV-specific CD8+ T cell responses in the CNS are comparable between IL-1R KO and WT B6 mice

To further determine whether the susceptibility of IL-1R KO mice to TMEV-induced demyelinating disease is associated with a compromised anti-viral CD8+ T cell response, we also analyzed the T cell responses in the CNS of TMEV-infected IL-1R KO and control B6 mice (Figure 6). Staining of virus-specific CD8+ T cells reactive to the predominant epitope (VP2121-130) with the VP2121-H-2Db tetramer indicated that the proportions of virus-specific CD8+ T cells in the CNS of virus-infected WT B6 and IL-1R KO mice are similar (Figure 6A). To determine whether the functions of the virus-reactive CD8+ T cells differ, we assessed the ability of the cells to produce IFN-γ in response to specific and non-specific stimulation (Figure 6B). The results clearly indicated that their ability to produce IFN-γ is also similar in both proportion (Figure 6B) and number (Figure 6C) in the CNS of TMEV-infected B6 and IL-1R KO mice. These results strongly suggest that, unlike the CD4+ T cell responses, the CD8+ T cell responses to viral determinants do not differ significantly in the CNS of virus-infected WT B6 and IL-1R KO mice.

Levels of virus-specific CD8+ T cell responses in the CNS of virus-infected B6 and IL-1R KO mice. (A) Levels of H-2Db-VP2121-130-tetramer-reactive CD8+ T cells in the central nervous system (CNS) of B6 and IL-1R knockout (KO) mice at 8 days post-infection (dpi). (B) Proportions of CNS-infiltrating CD8+ T cells reactive to viral epitopes, anti-CD3/CD28 antibodies and PMA/ionomycin were assessed using flow cytometry following intracellular cytokine staining at 8 dpi. (C) The overall numbers of virus-specific and anti-CD3/CD28-reactive CD8+ T cells in the CNS of virus-infected B6 and IL-1R KO mice are shown at 8 and 21 dpi. A representative result from two to three similar experiments is shown here.
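Figure 6C reports absolute cell numbers rather than percentages. The paper does not show the arithmetic, but absolute counts of this kind are typically derived by multiplying the total recovered mononuclear cells through the flow-cytometry gating fractions. A sketch under that assumption; all counts and fractions below are hypothetical.

```python
# Hedged sketch: deriving absolute numbers of CNS-infiltrating,
# virus-specific CD8+ T cells (as in Figure 6C) from gating fractions.
# The exact bookkeeping is assumed; all numbers below are invented.

total_mncs = 1.5e6   # mononuclear cells recovered from pooled CNS
frac_cd8 = 0.30      # fraction of live-gated cells that are CD8+
frac_ifng = 0.12     # fraction of CD8+ cells IFN-g+ after VP2(121-130)

virus_specific_cd8 = total_mncs * frac_cd8 * frac_ifng
print(f"{virus_specific_cd8:,.0f} virus-specific CD8+ T cells")  # 54,000
```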
Cytokine production by Th cells stimulated with macrophages from IL-1R KO mice is reduced

To compare the function of CD4+ T cells stimulated by macrophages from B6 and IL-1R KO mice, T cells from naïve OT-II mice, which carry T cell receptor (TCR) transgenes specific for OVA323-339, were stimulated with peritoneal macrophages infected in vitro for 24 h with TMEV (10 MOI) in the presence of OVA323-339 peptide (Figure 7A) or OVA protein (not shown). Viral infection did not significantly alter the levels of T cell stimulation by these macrophages. However, proliferation of CD4+ T cells in response to the cognate peptide or protein was higher when stimulated with IL-1R KO macrophages than with B6 macrophages. In contrast, IFN-γ and IL-17 production by T cells stimulated with IL-1R KO macrophages was significantly lower than that by T cells stimulated with control B6 macrophages. These results indicate that antigen-presenting cells display altered T cell-stimulating function in the absence of IL-1R-mediated signaling.

Reduced cytokine production by CD4+ T cells stimulated with IL-1R KO macrophages. (A) Isolated CD4+ T cells (1 × 105) from the spleens of OT-II mice were cultured for 3 days with in vitro TMEV-infected (10 MOI) peritoneal macrophages (1 × 104) from either C57BL/6 or IL-1R knockout (KO) mice in the presence of 2 μM OVA epitope peptide. T cell proliferative responses were analyzed using [3H]TdR uptake, and cytokine production (IFN-γ and IL-17) in the cultures was analyzed using specific ELISAs. (B) Isolated CD4+ T cells (1 × 105) from the spleens of B6 mice infected with TMEV, at 8 days post-infection (dpi), were cultured for 3 days with in vitro TMEV-infected (10 MOI for 24 h) peritoneal macrophages (1 × 104) from either naïve C57BL/6 or IL-1R KO mice in the presence of viral epitope peptides. Message levels of the indicated genes were then analyzed by real-time PCR. The glyceraldehyde-3-phosphate dehydrogenase (GAPDH) level was used as an internal control. The values are expressed as the means ± SD of triplicates. Statistically significant differences are indicated with asterisks: *P < 0.05, **P < 0.01, ***P < 0.001. Data shown are representative of three independent experiments. (C) Peritoneal CD11b+ cells from naïve B6 and IL-1R KO mice infected with TMEV in vitro for 24 h were analyzed for the expression of the T cell inhibitory molecules PDL-1 and TIM-3. A representative flow cytometry plot of three similar results is shown here.
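The proliferation readout in panel (A) follows the net-cpm convention described in the methods: [3H]TdR uptake of stimulated cultures with the PBS-only background subtracted, reported as Δcpm ± SEM. A small sketch of that bookkeeping, with invented triplicate counts:

```python
# Sketch of the net-proliferation readout for the [3H]TdR uptake assay:
# delta-cpm = mean cpm of stimulated cultures minus mean cpm of
# background (PBS instead of stimulator) cultures, reported +/- SEM,
# per the methods. Triplicate counts below are invented.
from statistics import mean, stdev
from math import sqrt

stimulated = [18500, 20100, 19300]  # cpm, OVA peptide + infected macrophages
background = [1200, 1350, 1100]     # cpm, PBS instead of stimulator

delta_cpm = mean(stimulated) - mean(background)
sem = stdev(stimulated) / sqrt(len(stimulated))  # SEM of stimulated wells
print(f"delta-cpm = {delta_cpm:.0f} +/- {sem:.0f} (SEM)")
```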
To further understand the mechanisms underlying the altered T cell stimulation by IL-1R KO macrophages, we examined the contribution of IL-1 signaling to the induction of cytokines in TMEV-infected macrophages (Figure 7B). After TMEV infection in vitro for 24 h, the levels of viral RNA as well as IFN-α and IFN-β messages were similar between macrophages from WT B6 and IL-1R KO mice. However, the expression of the IL-6 message was severely compromised in IL-1R KO macrophages compared to B6 macrophages. In contrast, the expression of the TNF-α message was more highly upregulated in IL-1R KO macrophages after TMEV infection. Interestingly, however, the expression of TGF-β in uninfected IL-1R KO macrophages was higher than that in B6 macrophages but was reduced to a similar level after viral infection. These results suggest that the cytokine production profile of macrophages, and perhaps other antigen-presenting cells, is altered in the absence of IL-1 signaling, which may affect the initial development and/or function of T cells following viral infection.

To further understand the mechanisms underlying altered CD4+ T cell development in the absence of IL-1 signaling, the levels of co-stimulatory molecules and key inhibitory molecules were studied in a representative macrophage antigen-presenting cell (APC) population (Figure 7C). The levels of CD80, CD86 and CD40 in B6 and IL-1R KO mice were not significantly different (not shown). However, PDL-1 and Tim-3 were significantly elevated in the absence of IL-1 signaling. These molecules are known to negatively affect the function of T cells [28] and/or promote inflammatory responses through their expression by innate immune cells, such as microglia [29]. It is interesting to note that the expression of PDL-1 was upregulated upon viral infection in B6 macrophages, whereas its expression in IL-1R-deficient macrophages was constitutively upregulated even in the absence of viral infection. In contrast, the expression of Tim-3 was constitutively upregulated in IL-1R-deficient macrophages to the level (approximately 5%) seen after viral infection in B6 macrophages, and it was further upregulated in IL-1R KO macrophages after TMEV infection. These results strongly suggest that these increases in inhibitory molecules may contribute to the altered T cell development and/or function.
Conclusions
IL-1 signaling plays a protective role against viral infections. However, we have previously demonstrated that administration of IL-1 promotes the pathogenesis of TMEV-induced demyelinating disease, similar to its effect in the autoimmune disease model (EAE) for MS. The IL-1-mediated pathogenesis of TMEV-induced demyelinating disease appears to reflect an elevated Th17 response in the presence of IL-1. However, IL-1R-deficient B6 mice also developed TMEV-induced demyelinating disease, accompanied by high viral persistence and upregulated expression of T cell inhibitory molecules such as PDL-1 and Tim-3. These results suggest that a high level of IL-1 promotes pathogenesis by elevating Th17 responses, whereas the absence of IL-1 signaling permits viral persistence in the CNS due to insufficient T cell activation. Therefore, the balance of IL-1 signaling appears to be critical in determining protection versus pathogenesis in the development of virus-induced demyelinating disease.
[ "Introduction", "Animals", "Synthetic peptides and antibodies", "Virus", "Viral infection of mice and assessment of clinical signs", "Reverse-transcriptase PCR and real-time PCR", "Isolation of CNS-infiltrating mononuclear cells (MNCs)", "Flow cytometry", "Intracellular staining of cytokine production", "T cell proliferation assay", "Histopathological analyses", "ELISA", "Statistical analysis", "Administration of IL-1β promotes a Th17 response to TMEV to exacerbate the pathogenicity of demyelinating disease", "IL-1R KO mice are susceptible to TMEV-induced demyelinating disease and display high cellular infiltration to the CNS", "IL-1R KO mice show widely spread mild demyelinating lesions accompanied by patchy axon damage", "Cytokine gene expression is transentily higher in the CNS of IL-1R KO mice during early viral infection", "Anti-viral CD4+ T cell responses in the CNS, of virus-infected IL-1R KO mice are lower during the early stage of infection", "Levels of TMEV-specific CD8+ T cell responses in the CNS are comparable between IL-1R KO and WT B6 mice", "Cytokine production by Th cells stimulated with macrophages from IL-1R KO mice is reduced", "Abbreviations", "Competing interests", "Authors’ contribution" ]
[ "Toll-like receptors (TLRs) and interleukin-1 receptors (IL-1Rs) are involved in the production of various cytokines that are associated with the innate immune response against many different infectious agents. TLRs and IL-1Rs share many structural similarities and utilize common downstream adaptive molecules after activation by their ligands. In general, these innate immune responses induced by TLRs and IL-1Rs are known to play a protective role against various microbes [1]. However, several recent studies have indicated that these signals may also play a pathogenic role in viral infections [2-4]. In addition to TLRs, IL-1Rs are also considered to be important innate receptors because IL-1β, in particular, is a prominent cytokine that appears in the early stage of microbial infections [3]. The IL-1R family contains six receptors, including IL-1RI, which recognizes the principal inflammatory cytokine IL-1β and the less inflammatory cytokine IL-1α [1,5]. IL-1β is generated from the cleavage of pro-IL-1β by caspase-1 in inflammasomes after infections, and the downstream signaling cascade of the IL-1β-IL-1R interaction leads to the induction of various proinflammatory cytokines and the activation of lymphocytes [6]. IL-1β-deficient mice show broad host susceptibility to various infections [7,8]. Moreover, IL-1RI-deficient mice are susceptible to certain pathogens, including Listeria monocytogenes[1]. Therefore, these responses to IL-1β are apparently critical for protection from many types of viruses and microbes. However, the level of IL-1β has also been linked to many different inflammatory autoimmune diseases, including diabetes, lupus, arthritis, and multiple sclerosis (MS) [1,4].\nTheiler’s murine encephalomyelitis virus (TMEV) is a positive-stranded RNA virus in the Picornaviridae family [9]. TMEV establishes a persistent CNS infection in susceptible mouse strains that results in the development of chronic demyelinating disease, and the system has been studied as a relevant viral model for human multiple sclerosis [10-12]. Cells infected with TMEV produce various proinflammatory cytokines, including type I IFNs, IL-6 and IL-1β [13]. TLR3 and TLR2 are involved in the production of these cytokines following infection with TMEV [14,15]. In addition, melanoma differentiation-associated gene 5 and dsRNA-activated protein kinase R are known to contribute to the production of proinflammatory cytokines [14,16]. These pathways also induce activation of caspase-1, leading to the generation of IL-1β and IL-1α, which contribute to further cytokine production, such as IL-6 promoting the development of pathogenic Th17 cells. Because IL-1β signals are associated with both host protection from viral infections and pathogenesis of inflammatory immune-mediated diseases, we here investigated the role of IL-1β-mediated signals in the development of TMEV-induced demyelinating disease.\nWe have previously reported that Th17 cells preferentially develop in an IL-6-dependent manner after TMEV infection, and that Th17 cells promote persistent viral infection and induce the pathogenesis of chronic demyelinating disease [17]. In addition, our earlier studies indicated that administration of either lipopolysaccharide (LPS) or IL-1β, thereby inducing high levels of IL-6 production, into resistant C57BL/6 (B6) mice renders the mice susceptible to the development of TMEV-induced demyelinating disease [18]. 
These results suggest that an excessive level of IL-1β is harmful to TMEV-induced demyelinating disease by generating high levels of pathogenic Th17 cells [19]. In this study, we confirmed the role of excessive IL-1β in the generation of a high level of Th17 cells in resistant B6 mice, supporting the pathogenic mechanisms of IL-1β. Furthermore, we have also utilized IL-1R-deficient mice to investigate the role of IL-1β-mediated signaling in the development of TMEV-induced demyelinating disease. Our results indicate that the lack of IL-1 signaling in resistant B6 mice also induced TMEV-induced demyelinating disease. The initial deficiencies in T cell function, including cytokine production and high viral persistence in the late stage of viral infection, were found in IL-1R-deficient mice. Therefore, the presence of an excessive amount of IL-1 plays a pathogenic role by elevating pathogenic Th17 responses, whereas the lack of IL-1 signals promotes viral persistence in the spinal cord, leading to chronic immune-mediated inflammatory disease.", "Female C57BL/6 mice were purchased from the Charles River Laboratories (Charles River, MA, USA) through the National Cancer Institute (Frederick, MD). Female B6.129S7-Il1r1tm1Imx/J mice (IL-1R knockout (KO)) were purchased from Jackson Laboratories (Bar Harbor, ME, USA). These mice were housed in the Animal Care Facility of Northwestern University. Experimental procedures that were approved by the Animal Care and Use Committee (ACUC) of Northwestern University in accordance with NIH animal care guidelines were used in this study.", "All peptides used were purchased from GeneMed (GeneMed Synthesis Inc, CA, USA) and used as described previously [20]. All antibodies used were purchased from BD Pharmingen (San Diego, CA, USA).", "The BeAn strain of TMEV was generated, propagated, and titered in BHK-21 cells grown in Dulbecco’s modified Eagle medium supplemented with 7.5% donor calf serum as previously described [21]. Viral titer was determined by plaque assays on BHK cell monolayers.", "For intracerebral (i.c.) infection, 30 μl virus solution, containing 1×106 pfu, was injected into the right cerebral hemisphere of 6 to 8 week-old mice (n = 10 per group) anesthetized with isoflurane. Clinical symptoms of disease were assessed weekly on the following grading scale: grade 0 = no clinical signs; grade 1 = mild waddling gait; grade 2 = severe waddling gait; grade 3 = moderate hind limb paralysis; grade 4 = severe hind limb paralysis and loss of righting reflex.", "Total cellular RNA from the brain and spinal cord of infected SJL/J mice was isolated using Trizol® Reagent (Invitrogen, CA, USA). First-strand cDNA was synthesized from 1 μg total RNA utilizing SuperScript® III First-Strand Synthesis Supermix or M-MLV (Invitrogen). The cDNAs were amplified with specific primer sets using the SYBR Green Supermix (Bio-Rad) on an iCycler (Bio-Rad). Primers for control GAPDH and cytokine genes were purchased from Integrated DNA Technologies. 
GAPDH: forward primer, AACTTTGGCATTGTGGAAGG and reverse primer, ACACATTGGGGGTAGGAACA; VP-1: TGACTAAGCAGGACTA-TGCCTTCC and CAACGAGCCACATATGCGGATTAC; IFN-α: (5’-ACCTCCTCT-GACCCAGGAAG-3’ and 5’-GGCTCTCCAGACTTCTGCTC-3’); IFN-β: CCCTAT-GGAGATGACGGAGA and CTGTCTGCTGGTGGAGTTCA; CXCL10: (5’-AAGT-GCTGCCGTCATTTTCT-3’ and 5’-GTGGCAATGATCTCAACACG-3’); IL-10: GCCAAGCCTTATCGGAAATGATCC and AGACACCTTGGTCTTGGAGCTT; IFN-γ: ACTGGCAAAAGGATGGTGAC and TGA GCTCATTGAATGCTTGG; IL-17A: CTCCAGAAGGCCCTCAGACTAC and AGCTTTCCCTCCGCATTGACACAG; IL-6: AGTTGCCTTCTTGGGACTGA and TCCACGATTTCCCAGAGAAC; TNF-α: GGTCACTGTCCCAGCATCTT and CTGTGAAGGGAATGGGTGTT.", "Mice were perfused with sterile Hank’s balanced salt solution (HBSS), and excised brains and spinal cords of 3 mice per group were homogenized. CNS-infiltrating MNCs were then enriched in the one third bottom fraction of a continuous Percoll (Pharmacia, Piscataway, NJ, USA) gradient after centrifugation for 30 minutes at 27,000 g as described previously [22].", "CNS-infiltrating lymphocytes were isolated, and Fc receptors were blocked using 100 μL of 2.4G2 hybridoma (ATCC) supernatant by incubating at 4°C for 30 minutes. Cells were stained with anti-CD8 (clone 53–6.7), anti-CD4 (clone GK1.5), anti-CD11b (clone M1/70), anti-NK1.1 (clone PK136), anti-GR-1 (clone RB6-8C5) and anti-CD45 (clone 30-F11) antibodies. All antibodies used for flow cytometry were purchased from BD Pharmingen (San Diego, CA). Cells were analyzed using a Becton Dickinson LSRII flow cytometer.", "Freshly isolated CNS-infiltrating MNCs from three mice per group were cultured in 96-well round-bottom plates in the presence of the relevant or control peptide as previously described [23]. Allophycocyanin-conjugated anti-CD8 (clone Ly2) or anti-CD4 (clone L3T4) antibodies and a PE-labeled rat monoclonal anti-IFN-γ (XMG1.2) antibody were used for intracellular cytokine staining. Cells were analyzed on a Becton Dickinson FACS Calibur or LSRII cytometer. Live cells were gated based on light scattering properties.", "Splenocytes from TMEV-infected mice, CD4+ T cells from spleens of OTII mice stimulated with specific epitope peptide, or in vitro TMEV-infected peritoneal macrophages in the presence of 2 μM ovalbumin (OVA)-specific peptides or 100 μg OVA protein were used. Cultures were incubated in 96-well flat-bottomed microtiter plates for 72 h and then pulsed with 1.0 μCi [3H]TdR and harvested 18 h later. [3H]TdR uptake by the cells was determined in triplicate using a scintillation counter and expressed as net counts per minute (Δcpm) ± standard error of the mean (SEM) after subtraction of the background count of cultures with PBS instead of stimulators.", "At 70 days post-infection, mice were anesthetized and perfused via intracardiac puncture with 50 mL of PBS. Brain and spinal cords from IL-1R KO and B6 control mice were dissected and fixed in 4% formalin in PBS for 24 h. Anterior cortex (bregma: 3.0 to 2.0 mm), subventricular zone (bregma: 1.7 to 0.48), hippocampus (bregma: -1.0 to −2.5 mm), and cerebellum (bregma: -5.6 to −7.0 mm) were investigated. In addition, cervical, thoracic, and lumbar regions of the spinal cord were also examined. The tissues were embedded in paraffin for sectioning and staining. Paraffin-processed brains and spinal cords were sectioned at 6 μm. 
Adjacent sets of three sections from each animal were deparaffinized, rehydrated, and evaluated by H & E staining for inflammatory infiltrates, Luxol Fast Blue (LFB) staining for axonal demyelination, and Bielschowsky silver staining for axon loss. Slides were examined using a Leica DMR light microscope, and images were captured using an AxioCam MRc camera and AxioVision imaging software. The inflammatory infiltrates were evaluated by the presence or absence of the monocytes/lymphocytes based on the H & E staining and immunofluorescent staining of CD45+ cells. Histologic white matter demyelination was graded as: 1) normal myelination, 2) mild or minor demyelination (> 50% myelin staining preserved), or 3) moderate to severe demyelination (< 50% myelin staining preserved).", "Cytokine levels produced by splenocytes from TMEV-infected mice or CD4+ T cells from spleens of OTII mice were determined after stimulation with specific epitope peptides (2 μM each), or in vitro TMEV-infected peritoneal macrophages in the presence of OVA-specific peptide (2 μM) for 72 h, respectively. IFN-γ (OPTEIA kit; BD Pharmingen), IL-17 (R&D Systems, Minneapolis, MN, USA) levels were assessed. Plates were read using a Spectra MAX 190 microplate reader (Molecular Devices, Sunnyvale, CA, USA) at a 450 nm wavelength.", "Data were presented as mean ± SD of either two to three independent experiments or one representative of at least three separate experiments. The significance of differences in the mean values was determined by Student’s t-test. Clinical scores were analyzed by Mann–Whitney U-test. P values < 0.05 were considered statistically significant.", "We have previously demonstrated that administration of LPS or IL-1β causes resistant C57BL/6 mice to develop demyelinating disease [18]. It has recently been shown that LPS treatment promotes this pathogenesis by elevating the induction of the pathogenic Th17 response [17]. However, it is unknown how IL-1β promotes this pathogenesis. To understand the mechanism, we compared the levels of Th1 and Th17 in the CNS of B6 mice treated with either LPS or IL-1β, along with control mice treated with PBS, following infection with TMEV at 8 days post-infection (Figure 1). The results clearly indicated that the levels of IL-17A-producing Th17 cells in mice treated with either LPS or IL-1β were significantly elevated compared to PBS-treated control mice (Figure 1A and B). In contrast, the levels of IFN-γ-producing Th1 cells were not different. It is interesting to note that IL-1β-treated mice exceeded the Th17 level of LPS-treated mice. However, the levels of IL-17-producing CD8+ T cells were minimal (not shown), and the levels of IFN-γ-producing CD8+ T cells were also similar among the groups (Figure 1C). These results strongly suggest that IL-1β can promote the pathogenesis of TMEV-induced demyelinating disease by enhancing the induction of pathogenic Th17 cells rather than altering the Th1 response.\nEffects of IL-1β administration on Th17 and Th1 responses in TMEV-infected B6 mice. (A) Levels of IFN-γ- and IL-17-producing CD4+ T cells in B6 mice intraperitoneally treated with PBS, lipopolysaccharide (LPS), or IL-1β during the early stage (−1 and 3 days post-infection (dpi)) of Theiler’s murine encephalitis virus (TMEV) infection (three mice per group) were analyzed using flow cytometry of the pooled central nervous system (CNS) cells at 8 dpi after stimulation with either PBS or anti-CD3/CD28 antibodies. 
(B) The overall numbers of IL-17-producing cells in the CNS of the above treated B6 mice (three mice per group) were shown. (C) Levels of IFN-γ-producing CD8+ T cells in the CNS of these groups of mice were also similarly analyzed. ***P < 0.001.", "Although administration of IL-1β promotes the pathogenesis of TMEV-induced demyelinating disease, the IL-1β produced via NLRP3 proteosome activation upon viral infection is considered to be a protective mechanism against microbial infections by promoting the apoptosis of infected cells [24]. To further investigate the potential role of IL-1β-mediated signaling in the development of TMEV-induced demyelinating disease, we compared the development of TMEV-induced demyelinating disease in IL-1R KO mice with a B6 background and control B6 mice (Figure 2A). Every IL-1R KO mouse developed demyelinating disease while none of the control B6 mice showed clinical signs at 35 days post-infection (dpi). The results clearly indicated that B6 mice with the deficiency in IL-1 signaling became susceptible to the TMEV-induced disease. This is somewhat unexpected because our previous study indicated that administration of IL-1β to B6 mice renders the mice susceptible to the disease, suggesting a pathogenic role for IL-1β in disease development.\nThe course of TMEV-induced demyelinating disease development, viral persistence levels and CNS-infiltrating mononuclear cells in TMEV-infected in B6 and IL-1R KO mice. (A) Frequency and severity of demyelinating disease in B6 (n = 10) and IL-1R knockout (KO) (n = 10) mice were monitored for 70 days after Theiler’s murine encephalitis virus (TMEV) infection. (B) Viral persistence levels in the pooled brains (BR) and spinal cords (SC) of infected mice (three mice per group) at 8, 21 and 70 days post-infection (dpi) were determined by quantitative PCR. Data are expressed by fold induction after normalization to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. The values are expressed as the means ± SD of triplicate experiments. Statistically significant differences are indicated with asterisks;*P < 0.05, **P < 0.01, ***P < 0.001. (C) Levels of T cells (CD4+ and CD8+), macrophages (CD11b+CD45high), microglia (CD11b+CD45int), NK cells (NK1.1+CD45+) and granulocytes (Ly6G/6C+CD45+) were assessed using flow cytometry in central nervous system (CNS)-infiltrating mononuclear cells from TMEV-infected C57BL/6 and IL-1R KO mice at 3 and 8 dpi. Numbers in FACS plots represent percentages in the CNS. Data are representative of three experiments using three mice per group.\nTo correlate the disease susceptibility of IL-1R KO mice with viral persistence in the CNS, the relative viral message levels in the CNS of wild type (WT) B6 and IL-1R KO mice were compared at days 8, 21 and 70 post-infection with TMEV (Figure 2B). The results showed that the level of viral load in the spinal cord, but not in the brain, is consistently higher in IL-1R KO mice compared to control B6 mice. These results strongly suggest that IL-1 signaling plays an important role in controlling viral persistence in the spinal cord during the course of TMEV infection. However, it was previously shown that the viral load level alone is not sufficient for the pathogenesis of TMEV-induced demyelinating disease [25]. Thus, we further assessed the levels of cellular infiltration to the CNS of these mice during the early stages (3 and 8 dpi) of viral infection (Figure 2C). 
The results indicated that infiltration into the CNS of granulocytes, NK cells, macrophages and CD8+ T cells but not CD4+ T cells was elevated, particularly at the early stage of viral infection. These results collectively suggest that high viral loads and cellular infiltration into the CNS in resistant B6 mice in the absence of IL-1 signaling leads to the elevated development of TMEV-induced demyelinating disease.", "At 70 days post-infection, the histopathology of TMEV-infected IL-1R KO and B6 mice was compared to correlate the disease development with the histopathology of the CNS (Figure 3). Series of histopathological examinations of the spinal cords from both KO and WT mice were conducted after H & E, LFB, and Bielschowsky silver staining. The H & E staining was used for evaluating the evidence of active inflammation and lymphocyte infiltration. LFB specifically stains axonal myelin sheath, and this was used to evaluate the axonal demyelination. Bielschowsky silver staining stains axons dark brown and was used to evaluate axonal integrity. Lymphocyte infiltration, minor demyelination and axon loss were detected in the CNS, including the brain and spinal cord, in IL-1R KO mice but not in WT B6 mice. Compared to control B6 mice (Figure 3A a-b), IL-1R KO mice (Figure 3A c-d) showed more lymphocyte infiltration in the white matter of the lumbar spinal cord when examined by H & E staining. LFB staining of the adjacent sections showed irregular vacuoles and demyelination in the white matter of the spinal cord in IL-1R KO mice (Figure 3A g-h) and in brain regions including the cerebellum and medulla (not shown). In contrast, myelin that appeared normal and little histopathological change were observed in the control B6 mice (Figure 3A e-f). Bielschowsky silver staining of the adjacent sections also showed irregular vacuolation and mild axon loss in the demyelinated regions of the spinal cord from IL-1R KO mice (Figure 3A k-l) but not in the sections from the WT control mice (Figure 3A i-j). To further compare the cellular infiltration levels in the CNS of these mice, we examined the levels of CD45+ cells in the CNS which largely represents infiltrating cells (Figure 3B). Our results clearly displayed that the level of CD45+ cells (Figure 3B d), many of which overlap with H & E staining Figure 3B c), was higher in the CNS of IL-1R KO mice compared to that of the control B6 mice (Figure 3B a-b).\nHistopathology of the spinal cord in IL-1R KO and wild-type mice. (A) H & E staining of the spinal cord showed infiltration of inflammatory cells in knockout (KO) mice (c, d) and little infiltration in wild-type (WT) mice (a, b). Luxol Fast Blue (LFB) staining of adjacent sections showed irregular vacuolation and minor demyelination in the white matter of KO mice (g, h) but no loss of myelin in WT mice (e, f). Bielschowsky silver staining of the same area shows the presence of irregular vacuolation and minor axonal loss in KO mice (k, l) but not in WT mice (i, j). Magnification, ×10 and ×40. Black arrows indicate regions of lymphocyte infiltrates, demyelination, or axon loss; thin black squares indicate the areas from the lumbar spinal cord region, which are shown in high magnification (b, f, and j for B6 mice and d, h, and l for IL-1R KO mice). (B) H & E staining of spinal cords of control (a) and IL-1R KO mice (c) are shown. 
The adjacent sections (b and d, respectively) were stained with anti-CD45 antibody (red) for infiltrating cells and counterstained with 4',6-diamidino-2-phenylindole (DAPI) (blue) for nuclei.", "To understand the susceptibility to TMEV-induced demyelinating disease in IL-1R KO mice, we analyzed various cytokine message levels expressed in the CNS of virus-infected control and IL-1R KO mice during the early stages (3, 5, and 8 dpi) of viral infection using real-time PCR (Figure 4). The levels of IFN-α and IFN-β gene expression in IL-1R KO mice were significantly higher than those in B6 mice at 3 dpi, although the levels became similar at 5 and 8 dpi. The expression levels of CXCL-10 that were associated with T cell infiltration and IL-10 that was associated with viral persistence were higher at 5 dpi, and this trend was maintained at 8 dpi. However, the expression level of IL-6 in IL-1R KO mice was transiently lower at 5 dpi, while no differences in TNF-α expression were noted. Similarly, the production of a pathogenic T cell cytokine, IL-17 was largely unchanged. However, viral RNA and the production of IFN-γ were transiently higher at 3 and 8 dpi. These results suggest that the lack of IL-1 signaling differentially affects viral replication and the expression of various innate and immune cytokines depending on the stage of TMEV infection.\nExpression level of cytokine genes in the CNS of TMEV infected B6 and IL-1R knockout (KO) mice at 3, 5 and 8 days post-infection (dpi). The relative expression levels of the indicated mRNAs in the central nervous system (CNS) of Theiler’s murine encephalitis virus (TMEV)-infected C57BL/6 and IL-1R KO mice at 3, 5 and 8 dpi were assessed by real-time PCR. Data are expressed by fold induction after normalization to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. The values are expressed as the means ± SD of triplicates. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001).", "To compare CD4+ T cell responses specific to viral determinants in the CNS of IL-1R KO and WT mice, infiltration levels of CD4+ T cells specific to the predominant CD4+ T cell epitopes were assessed (Figure 5A and B). The levels of IFN-γ-producing CD4+ T cells in response to pan-T cell stimulation (either PMA plus ionomycin or anti-CD3 and anti-CD28 antibodies) were similar between IL-1R KO and control WT B6 mice. However, such CD4+ T cell responses to viral epitopes were proportionally lower in the CNS of virus-infected IL-1R KO mice compared to B6 mice, although the overall levels in the CNS were similar. This discrepancy may be due to the high levels of CXCL-10 expression (Figure 4), which promotes infiltration of T cells, in the CNS of IL-1R KO mice. The levels of IL-17-producing CD4+ T cells in either TMEV-infected IL-1R KO or B6 mice were undetectable. To further determine whether the pattern of CD4+ T cell responses is unique in the CNS of virus-infected mice, we also assessed the T cell responses in the periphery of TMEV-infected IL-1R KO and control B6 mice at 8 and 21 dpi (Figure 5C). Again, the levels of T cell proliferation and the production of key T cell cytokines (IFN-γ and IL-17) against viral epitopes (for both CD4+ and CD8+ T cells) were not drasticaly different between the splenic T cells from IL-1R KO and control B6 mice. 
Despite the similar low levels of IL-17 production in response to viral epitopes, the IL-17 level was significantly lower in IL-1R KO mice compared to WT B6 mice after robust stimulation with anti-CD3/CD28 antibodies. These results are consistent with the role of IL-1 signaling in promoting IL-17 production [26,27].\nVirus-specific CD4+T cell responses in the CNS and the periphery of TMEV-infected B6 and IL-1R knockout (KO) mice. (A) Levels of Th1 and Th17 cells in the central nervous system (CNS) of virus-infected B6 and IL-1R KO mice were assessed using flow cytometry at 8 days post-infection (dpi) after stimulation with PBS, PMA/ionomycin, anti-CD3/CD28 antibodies, or viral epitope peptides (Pep: mixture of 2 μM VP2203-220 and 2 μM VP425-38). The flow cytometric plots show gated CD4+ T cells. (B) The numbers of CNS-infiltrating IFN-γ-producing CD4+ T cells reactive to either viral epitope peptides (CD4: equal mixture of VP2203-220 and VP425-38) or anti-CD3/CD28 antibodies in virus-infected B6 and IL-1R KO mice (three mice each) at 8 and 21 dpi. A representative result from two to three similar experiments is shown here. (C) Levels of proliferation and cytokine production by splenic T cells in response to viral epitopes by CD4+ T cells (CD4 Mix: equal mixture of VP2203-220 and VP425-38), CD8+ T cells (VP2-121–130), or both CD4+ and CD8+ T cells (anti-CD3/CD28 antibodies) were assessed at 8 and 21 dpi. Values are expressed as the mean of triplicate samples (mean ± SD) from a representative of three experiments. *P < 0.05, **P < 0.01, ***P < 0.001.", "To further determine whether the susceptibility of IL-1R KO mice to TMEV-induced demyelinating disease is associated with a compromised anti-viral CD8+ T cell response, we also analyzed the T cell responses in the CNS of TMEV-infected IL-1R KO and control B6 mice (Figure 6). Virus-specific CD8+ T cells reactive to the predominant epitope (VP2121-130), determined using the VP2121-H-2Db tetramer, indicated that the proportions of virus-specific CD8+ T cells in the CNS of virus-infected WT B6 and IL-1R KO mice are similar (Figure 6A). To further determine whether the functions of the virus-reactive CD8+ T cells are different, we assessed the abilities of the cells to produce IFN-γ in response to specific and non-specific stimulations (Figure 6B). The results clearly indicated that their ability to produce IFN-γ is also similar in both proportion (Figure 6B) and number in the CNS of TMEV-infected B6 and IL-1R KO mice (Figure 6C). These results strongly suggest that there are no significant differences in the CD8+ T cell responses to the viral determinants, unlike those of CD4+ T cell responses, in the CNS of virus-infected WT B6 and IL-1R KO mice.\nLevels of virus-specific CD8+T cell responses in the CNS of virus-infected B6 and IL-1R KO mice. (A) Levels of H-2Db-VP2-121–130-tetramer reactive CD8+ T cells in the central nervous system (CNS) of B6 and IL-1R knockout (KO) mice at 8 days post-infection (dpi). (B) Proportions of CNS-infiltrating CD8+ T cells reactive to viral epitopes, anti-CD3/CD28 antibodies and PMA/ionomycin were assessed using flow cytometry following intracellular cytokine staining at 8 dpi. (C) The overall numbers of virus-specific and anti-CD3/CD28 reactive CD8+ T cells in the CNS of virus-infected B6 and IL-1R KO mice are shown at 8 and 21 dpi. 
A representative result from two to three similar experiments is shown here.", "To compare the function of CD4+ T cells stimulated by macrophages from B6 and IL-1R KO mice, T cells from naïve OT-II mice, which carry T cell receptor (TCR) transgenes specific for OVA323-339, were stimulated with peritoneal macrophages infected in vitro for 24 h with TMEV (10 MOI) in the presence of OVA323-339 peptide (Figure 7A) or OVA protein (not shown). Viral infection did not significantly alter the levels of T cell stimulation by these macrophages. However, proliferation of CD4+ T cells in response to the cognate peptide or protein was higher when stimulated with IL-1R KO macrophages compared to the proliferation stimulated with B6 macrophages. In contrast, IFN-γ and IL-17 production by the T cells stimulated with IL-1R KO macrophages were significantly lower than the production stimulated with control B6 macrophages. These results indicated that antigen-presenting cells display altered T cell stimulating function in the absence of IL-1R-mediated signaling.\nCytokine reduction of CD4+T cells with stimulation of IL-1R KO macrophages. (A) Isolated CD4+ T cells (1 × 105) from the spleen of OT-II mice were cultured with in vitro 10 MOI Theiler’s murine encephalitis virus (TMEV)-infected peritoneal macrophages (1 × 104) from either C57BL/6 or IL-1R knockout (KO) mice for 3 days in the presence of 2 μM OVA epitope peptides. T cell proliferative responses were analyzed using [3H]TdR uptake, and cytokine production (IFN-γ and IL-17) of the cultures were analyzed using specific ELISAs. (B) Isolated CD4+ T cells (1 × 105) from the spleen of B6 mice infected with TMEV at 8 days post-infection (dpi) were cultured with in vitro TMEV-infected (10 MOI for 24 h) peritoneal macrophages (1 × 104) from either naïve C57BL/6 or IL-1R KO mice for 3 days in the presence of viral epitope peptides. Message levels of the indicated genes were then analyzed by real-time PCR. The glyceraldehyde-3-phosphate dehydrogenase (GAPDH) level was used as an internal control. The values are expressed as the means ± SD of triplicates. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001. Data shown are the representation of three independent experiments. (C) Peritoneal CD11b+ cells from naïve B6 and IL-1R KO mice infected with TMEV in vitro for 24 h were analyzed for the expression of PDL-1 and TIM-3 T cell inhibitory molecules. A representative flow cytometry plot of three similar results is shown here.\nTo further understand the potential mechanisms underlying the altered T cell stimulation by IL-1R KO macrophages, we examined the potential contribution of IL-1 signaling to the induction of cytokines in TMEV-infected macrophages (Figure 7B). After TMEV infection in vitro for 24 h, levels of viral RNA as well as IFN-α and IFN-β messages were similar between macrophages from WT B6 and IL-1R KO mice. However, the expression of the IL-6 message was extremely compromised in IL-1R KO macrophages compared to B6 macrophages. In contrast, the expression level of TNF-α message was more highly upregulated in IL-1R KO macrophages after TMEV infection. Interestingly however, the expression of TGF-β in uninfected IL-1R KO macrophages was higher than the expression in B6 macrophages but reduced to a similar level after viral infection. 
Abbreviations

APC: antigen-presenting cell; CNS: central nervous system; dpi: days post-infection; EAE: experimental autoimmune encephalomyelitis; ELISA: enzyme-linked immunosorbent assay; GAPDH: glyceraldehyde-3-phosphate dehydrogenase; H & E: hematoxylin and eosin; IL-1R: interleukin-1 receptor; LFB: Luxol Fast Blue; LPS: lipopolysaccharide; MNC: mononuclear cell; MS: multiple sclerosis; OVA: ovalbumin; PBS: phosphate-buffered saline; PCR: polymerase chain reaction; PFU: plaque-forming unit; SEM: standard error of the mean; TLR: toll-like receptor; TMEV: Theiler's murine encephalomyelitis virus; TMEV-IDD: TMEV-induced demyelinating disease.

Competing interests

The authors declare that they have no competing interests.

Authors' contribution

BSK directed experiments, interpreted the results and wrote the manuscript. YHJ conducted immunological experiments and helped with the writing. LM conducted histological experiments and wrote the corresponding portions. HSK performed some molecular analyses. WH and HSP conducted the initial immunological experiments. CSK contributed to the interpretation of results and the direction of the study. All authors read and approved the final manuscript.
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Animals", "Synthetic peptides and antibodies", "Virus", "Viral infection of mice and assessment of clinical signs", "Reverse-transcriptase PCR and real-time PCR", "Isolation of CNS-infiltrating mononuclear cells (MNCs)", "Flow cytometry", "Intracellular staining of cytokine production", "T cell proliferation assay", "Histopathological analyses", "ELISA", "Statistical analysis", "Results", "Administration of IL-1β promotes a Th17 response to TMEV to exacerbate the pathogenicity of demyelinating disease", "IL-1R KO mice are susceptible to TMEV-induced demyelinating disease and display high cellular infiltration to the CNS", "IL-1R KO mice show widely spread mild demyelinating lesions accompanied by patchy axon damage", "Cytokine gene expression is transentily higher in the CNS of IL-1R KO mice during early viral infection", "Anti-viral CD4+ T cell responses in the CNS, of virus-infected IL-1R KO mice are lower during the early stage of infection", "Levels of TMEV-specific CD8+ T cell responses in the CNS are comparable between IL-1R KO and WT B6 mice", "Cytokine production by Th cells stimulated with macrophages from IL-1R KO mice is reduced", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contribution" ]
[ "Toll-like receptors (TLRs) and interleukin-1 receptors (IL-1Rs) are involved in the production of various cytokines that are associated with the innate immune response against many different infectious agents. TLRs and IL-1Rs share many structural similarities and utilize common downstream adaptive molecules after activation by their ligands. In general, these innate immune responses induced by TLRs and IL-1Rs are known to play a protective role against various microbes [1]. However, several recent studies have indicated that these signals may also play a pathogenic role in viral infections [2-4]. In addition to TLRs, IL-1Rs are also considered to be important innate receptors because IL-1β, in particular, is a prominent cytokine that appears in the early stage of microbial infections [3]. The IL-1R family contains six receptors, including IL-1RI, which recognizes the principal inflammatory cytokine IL-1β and the less inflammatory cytokine IL-1α [1,5]. IL-1β is generated from the cleavage of pro-IL-1β by caspase-1 in inflammasomes after infections, and the downstream signaling cascade of the IL-1β-IL-1R interaction leads to the induction of various proinflammatory cytokines and the activation of lymphocytes [6]. IL-1β-deficient mice show broad host susceptibility to various infections [7,8]. Moreover, IL-1RI-deficient mice are susceptible to certain pathogens, including Listeria monocytogenes[1]. Therefore, these responses to IL-1β are apparently critical for protection from many types of viruses and microbes. However, the level of IL-1β has also been linked to many different inflammatory autoimmune diseases, including diabetes, lupus, arthritis, and multiple sclerosis (MS) [1,4].\nTheiler’s murine encephalomyelitis virus (TMEV) is a positive-stranded RNA virus in the Picornaviridae family [9]. TMEV establishes a persistent CNS infection in susceptible mouse strains that results in the development of chronic demyelinating disease, and the system has been studied as a relevant viral model for human multiple sclerosis [10-12]. Cells infected with TMEV produce various proinflammatory cytokines, including type I IFNs, IL-6 and IL-1β [13]. TLR3 and TLR2 are involved in the production of these cytokines following infection with TMEV [14,15]. In addition, melanoma differentiation-associated gene 5 and dsRNA-activated protein kinase R are known to contribute to the production of proinflammatory cytokines [14,16]. These pathways also induce activation of caspase-1, leading to the generation of IL-1β and IL-1α, which contribute to further cytokine production, such as IL-6 promoting the development of pathogenic Th17 cells. Because IL-1β signals are associated with both host protection from viral infections and pathogenesis of inflammatory immune-mediated diseases, we here investigated the role of IL-1β-mediated signals in the development of TMEV-induced demyelinating disease.\nWe have previously reported that Th17 cells preferentially develop in an IL-6-dependent manner after TMEV infection, and that Th17 cells promote persistent viral infection and induce the pathogenesis of chronic demyelinating disease [17]. In addition, our earlier studies indicated that administration of either lipopolysaccharide (LPS) or IL-1β, thereby inducing high levels of IL-6 production, into resistant C57BL/6 (B6) mice renders the mice susceptible to the development of TMEV-induced demyelinating disease [18]. 
These results suggest that an excessive level of IL-1β exacerbates TMEV-induced demyelinating disease by generating high levels of pathogenic Th17 cells [19]. In this study, we confirmed the role of excessive IL-1β in the generation of high levels of Th17 cells in resistant B6 mice, supporting a pathogenic mechanism for IL-1β. Furthermore, we utilized IL-1R-deficient mice to investigate the role of IL-1β-mediated signaling in the development of TMEV-induced demyelinating disease. Our results indicate that the lack of IL-1 signaling also renders resistant B6 mice susceptible to TMEV-induced demyelinating disease. IL-1R-deficient mice showed initial deficiencies in T cell function, including cytokine production, as well as high viral persistence in the late stage of viral infection. Therefore, an excessive amount of IL-1 plays a pathogenic role by elevating pathogenic Th17 responses, whereas the lack of IL-1 signals promotes viral persistence in the spinal cord, leading to chronic immune-mediated inflammatory disease.", " Animals Female C57BL/6 mice were purchased from Charles River Laboratories (Charles River, MA, USA) through the National Cancer Institute (Frederick, MD). Female B6.129S7-Il1r1tm1Imx/J mice (IL-1R knockout (KO)) were purchased from Jackson Laboratories (Bar Harbor, ME, USA). These mice were housed in the Animal Care Facility of Northwestern University. All experimental procedures were approved by the Animal Care and Use Committee (ACUC) of Northwestern University and performed in accordance with NIH animal care guidelines.\n Synthetic peptides and antibodies All peptides used were purchased from GeneMed (GeneMed Synthesis Inc, CA, USA) and used as described previously [20]. All antibodies used were purchased from BD Pharmingen (San Diego, CA, USA).\n Virus The BeAn strain of TMEV was generated, propagated, and titered in BHK-21 cells grown in Dulbecco’s modified Eagle medium supplemented with 7.5% donor calf serum, as previously described [21]. Viral titer was determined by plaque assays on BHK cell monolayers.
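The titer arithmetic behind the plaque assay is straightforward; the following is a minimal illustrative sketch (not part of the original protocol), with hypothetical plaque counts and dilutions:

```python
def titer_pfu_per_ml(plaque_count, dilution_factor, inoculum_ml):
    """Convert a plaque count from one well into PFU/mL of the undiluted stock.

    plaque_count    -- plaques counted on the BHK monolayer
    dilution_factor -- e.g. 1e-6 for the 10^-6 dilution
    inoculum_ml     -- volume of diluted virus added to the well, in mL
    """
    return plaque_count / (dilution_factor * inoculum_ml)

# Example: 52 plaques from 0.1 mL of a 10^-6 dilution -> 5.2e8 PFU/mL
print(titer_pfu_per_ml(52, 1e-6, 0.1))
```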
\n Viral infection of mice and assessment of clinical signs For intracerebral (i.c.) infection, 30 μl of virus solution containing 1 × 10^6 pfu was injected into the right cerebral hemisphere of 6 to 8 week-old mice (n = 10 per group) anesthetized with isoflurane. Clinical symptoms of disease were assessed weekly on the following grading scale: grade 0 = no clinical signs; grade 1 = mild waddling gait; grade 2 = severe waddling gait; grade 3 = moderate hind limb paralysis; grade 4 = severe hind limb paralysis and loss of righting reflex.\n Reverse-transcriptase PCR and real-time PCR Total cellular RNA from the brain and spinal cord of infected mice was isolated using Trizol® Reagent (Invitrogen, CA, USA). First-strand cDNA was synthesized from 1 μg total RNA using SuperScript® III First-Strand Synthesis Supermix or M-MLV (Invitrogen). The cDNAs were amplified with specific primer sets using the SYBR Green Supermix (Bio-Rad) on an iCycler (Bio-Rad). Primers for the control GAPDH and cytokine genes were purchased from Integrated DNA Technologies. The primer pairs (forward and reverse) were: GAPDH, AACTTTGGCATTGTGGAAGG and ACACATTGGGGGTAGGAACA; VP-1, TGACTAAGCAGGACTATGCCTTCC and CAACGAGCCACATATGCGGATTAC; IFN-α, ACCTCCTCTGACCCAGGAAG and GGCTCTCCAGACTTCTGCTC; IFN-β, CCCTATGGAGATGACGGAGA and CTGTCTGCTGGTGGAGTTCA; CXCL10, AAGTGCTGCCGTCATTTTCT and GTGGCAATGATCTCAACACG; IL-10, GCCAAGCCTTATCGGAAATGATCC and AGACACCTTGGTCTTGGAGCTT; IFN-γ, ACTGGCAAAAGGATGGTGAC and TGAGCTCATTGAATGCTTGG; IL-17A, CTCCAGAAGGCCCTCAGACTAC and AGCTTTCCCTCCGCATTGACACAG; IL-6, AGTTGCCTTCTTGGGACTGA and TCCACGATTTCCCAGAGAAC; TNF-α, GGTCACTGTCCCAGCATCTT and CTGTGAAGGGAATGGGTGTT.
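Fold induction normalized to GAPDH is conventionally computed with the 2^-ΔΔCt method; the paper does not spell out its formula, so the sketch below assumes that convention and uses made-up Ct values:

```python
def fold_induction(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """2^-ddCt relative expression: sample vs. control, normalized to GAPDH."""
    d_ct_sample = ct_target - ct_gapdh              # normalize sample to GAPDH
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # normalize control to GAPDH
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: IL-6 Ct 24.0 vs GAPDH 18.0 in infected CNS; 27.0 vs 18.5 in mock
print(fold_induction(24.0, 18.0, 27.0, 18.5))   # ~5.7-fold induction
```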
\n Isolation of CNS-infiltrating mononuclear cells (MNCs) Mice were perfused with sterile Hank’s balanced salt solution (HBSS), and the excised brains and spinal cords of 3 mice per group were homogenized. CNS-infiltrating MNCs were then enriched in the bottom one-third fraction of a continuous Percoll (Pharmacia, Piscataway, NJ, USA) gradient after centrifugation for 30 minutes at 27,000 g, as described previously [22].\n Flow cytometry CNS-infiltrating lymphocytes were isolated, and Fc receptors were blocked using 100 μL of 2.4G2 hybridoma (ATCC) supernatant by incubation at 4°C for 30 minutes. Cells were stained with anti-CD8 (clone 53-6.7), anti-CD4 (clone GK1.5), anti-CD11b (clone M1/70), anti-NK1.1 (clone PK136), anti-GR-1 (clone RB6-8C5) and anti-CD45 (clone 30-F11) antibodies. All antibodies used for flow cytometry were purchased from BD Pharmingen (San Diego, CA). Cells were analyzed using a Becton Dickinson LSRII flow cytometer.\n Intracellular staining of cytokine production Freshly isolated CNS-infiltrating MNCs from three mice per group were cultured in 96-well round-bottom plates in the presence of the relevant or a control peptide as previously described [23]. Allophycocyanin-conjugated anti-CD8 (clone Ly2) or anti-CD4 (clone L3T4) antibodies and a PE-labeled rat monoclonal anti-IFN-γ (XMG1.2) antibody were used for intracellular cytokine staining. Cells were analyzed on a Becton Dickinson FACSCalibur or LSRII cytometer. Live cells were gated based on light scattering properties.
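The CD45/CD11b gating that separates infiltrating macrophages (CD11b+CD45high) from resident microglia (CD11b+CD45int) amounts to thresholding fluorescence intensities; the toy sketch below uses simulated data and arbitrary cutoffs, not the actual gates drawn in the study:

```python
import numpy as np

# Hypothetical log10 fluorescence intensities for CD11b and CD45, one row per cell
rng = np.random.default_rng(0)
cd11b = rng.normal(3.0, 0.5, 1000)
cd45 = rng.normal(3.0, 1.0, 1000)

CD11B_POS, CD45_INT_LO, CD45_HI = 2.5, 2.0, 3.5  # arbitrary example gates

cd11b_pos = cd11b > CD11B_POS
macrophages = cd11b_pos & (cd45 > CD45_HI)                        # CD11b+ CD45(high)
microglia = cd11b_pos & (cd45 > CD45_INT_LO) & (cd45 <= CD45_HI)  # CD11b+ CD45(int)

print(f"macrophages: {macrophages.mean():.1%}, microglia: {microglia.mean():.1%}")
```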
\n T cell proliferation assay Splenocytes from TMEV-infected mice, or CD4+ T cells from the spleens of OT-II mice, were stimulated with specific epitope peptides or with in vitro TMEV-infected peritoneal macrophages in the presence of 2 μM ovalbumin (OVA)-specific peptide or 100 μg OVA protein. Cultures were incubated in 96-well flat-bottomed microtiter plates for 72 h, pulsed with 1.0 μCi [3H]TdR, and harvested 18 h later. [3H]TdR uptake by the cells was determined in triplicate using a scintillation counter and expressed as net counts per minute (Δcpm) ± standard error of the mean (SEM) after subtraction of the background counts of cultures given PBS instead of stimulators.
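The Δcpm readout reduces to mean stimulated counts minus the PBS background; a small sketch of that arithmetic with hypothetical triplicates:

```python
import statistics

def delta_cpm(stimulated, background):
    """Net [3H]TdR uptake: mean stimulated cpm minus mean PBS-background cpm."""
    net = statistics.mean(stimulated) - statistics.mean(background)
    sem = statistics.stdev(stimulated) / len(stimulated) ** 0.5
    return net, sem

# Example triplicates (hypothetical cpm values)
print(delta_cpm([15200, 14800, 15600], [900, 1100, 1000]))  # (~14200, ~230)
```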
\n Histopathological analyses At 70 days post-infection, mice were anesthetized and perfused via intracardiac puncture with 50 mL of PBS. Brains and spinal cords from IL-1R KO and B6 control mice were dissected and fixed in 4% formalin in PBS for 24 h. The anterior cortex (bregma: 3.0 to 2.0 mm), subventricular zone (bregma: 1.7 to 0.48 mm), hippocampus (bregma: −1.0 to −2.5 mm), and cerebellum (bregma: −5.6 to −7.0 mm) were investigated. In addition, the cervical, thoracic, and lumbar regions of the spinal cord were examined. The tissues were embedded in paraffin for sectioning and staining. Paraffin-processed brains and spinal cords were sectioned at 6 μm. Adjacent sets of three sections from each animal were deparaffinized, rehydrated, and evaluated by H & E staining for inflammatory infiltrates, Luxol Fast Blue (LFB) staining for demyelination, and Bielschowsky silver staining for axon loss. Slides were examined using a Leica DMR light microscope, and images were captured using an AxioCam MRc camera and AxioVision imaging software. Inflammatory infiltrates were evaluated by the presence or absence of monocytes/lymphocytes based on H & E staining and immunofluorescent staining of CD45+ cells. Histologic white matter demyelination was graded as: 1) normal myelination, 2) mild or minor demyelination (> 50% myelin staining preserved), or 3) moderate to severe demyelination (< 50% myelin staining preserved).\n ELISA Cytokine levels produced by splenocytes from TMEV-infected mice or by CD4+ T cells from the spleens of OT-II mice were determined after stimulation for 72 h with specific epitope peptides (2 μM each) or with in vitro TMEV-infected peritoneal macrophages in the presence of OVA-specific peptide (2 μM), respectively. IFN-γ (OptEIA kit; BD Pharmingen) and IL-17 (R&D Systems, Minneapolis, MN, USA) levels were assessed. Plates were read using a Spectra MAX 190 microplate reader (Molecular Devices, Sunnyvale, CA, USA) at a 450 nm wavelength.\n Statistical analysis Data are presented as the mean ± SD of either two to three independent experiments or one representative of at least three separate experiments. The significance of differences between mean values was determined by Student’s t-test. Clinical scores were analyzed by the Mann-Whitney U-test. P values < 0.05 were considered statistically significant.",
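Both tests are available in SciPy; a minimal sketch with made-up values (not the study's data):

```python
from scipy import stats

# Hypothetical triplicate cytokine measurements (pg/mL): Student's t-test
wt = [120.0, 135.0, 128.0]
ko = [210.0, 195.0, 220.0]
t, p = stats.ttest_ind(wt, ko)
print(f"t-test: p = {p:.4f}")

# Hypothetical weekly clinical scores (ordinal 0-4): Mann-Whitney U-test
wt_scores = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
ko_scores = [1, 2, 1, 2, 3, 1, 2, 2, 1, 2]
u, p = stats.mannwhitneyu(wt_scores, ko_scores)
print(f"Mann-Whitney: p = {p:.4f}")
```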
" Administration of IL-1β promotes a Th17 response to TMEV to exacerbate the pathogenicity of demyelinating disease We have previously demonstrated that administration of LPS or IL-1β causes resistant C57BL/6 mice to develop demyelinating disease [18]. It has recently been shown that LPS treatment promotes this pathogenesis by elevating the induction of the pathogenic Th17 response [17]. However, it is unknown how IL-1β promotes this pathogenesis. To understand the mechanism, we compared the levels of Th1 and Th17 cells in the CNS of B6 mice treated with either LPS or IL-1β, along with control mice treated with PBS, at 8 days post-infection with TMEV (Figure 1). The results clearly indicated that the levels of IL-17A-producing Th17 cells in mice treated with either LPS or IL-1β were significantly elevated compared to PBS-treated control mice (Figure 1A and B). In contrast, the levels of IFN-γ-producing Th1 cells were not different. It is interesting to note that the Th17 level in IL-1β-treated mice exceeded that in LPS-treated mice. However, the levels of IL-17-producing CD8+ T cells were minimal (not shown), and the levels of IFN-γ-producing CD8+ T cells were also similar among the groups (Figure 1C). These results strongly suggest that IL-1β can promote the pathogenesis of TMEV-induced demyelinating disease by enhancing the induction of pathogenic Th17 cells rather than by altering the Th1 response.
\nEffects of IL-1β administration on Th17 and Th1 responses in TMEV-infected B6 mice. (A) Levels of IFN-γ- and IL-17-producing CD4+ T cells in B6 mice intraperitoneally treated with PBS, lipopolysaccharide (LPS), or IL-1β during the early stage (−1 and 3 days post-infection (dpi)) of Theiler’s murine encephalomyelitis virus (TMEV) infection (three mice per group) were analyzed using flow cytometry of pooled central nervous system (CNS) cells at 8 dpi after stimulation with either PBS or anti-CD3/CD28 antibodies. (B) The overall numbers of IL-17-producing cells in the CNS of the above treated B6 mice (three mice per group) are shown. (C) Levels of IFN-γ-producing CD8+ T cells in the CNS of these groups of mice were analyzed in the same manner. ***P < 0.001.
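Overall cell numbers such as those in Figure 1B are derived from gated percentages and the total mononuclear cell yield; a trivial sketch of that conversion with hypothetical numbers:

```python
def absolute_count(total_mncs, pct_cd4, pct_il17_of_cd4):
    """Convert flow percentages into an absolute IL-17+ CD4+ cell number per CNS."""
    return total_mncs * (pct_cd4 / 100) * (pct_il17_of_cd4 / 100)

# Example: 5e5 CNS MNCs, 20% CD4+, of which 3% IL-17+  ->  3,000 cells
print(absolute_count(5e5, 20, 3))
```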
\n IL-1R KO mice are susceptible to TMEV-induced demyelinating disease and display high cellular infiltration to the CNS Although administration of IL-1β promotes the pathogenesis of TMEV-induced demyelinating disease, the IL-1β produced via NLRP3 inflammasome activation upon viral infection is considered to be a protective mechanism against microbial infections by promoting the apoptosis of infected cells [24]. To further investigate the potential role of IL-1β-mediated signaling in the development of TMEV-induced demyelinating disease, we compared disease development in IL-1R KO mice on a B6 background with that in control B6 mice (Figure 2A). Every IL-1R KO mouse developed demyelinating disease, whereas none of the control B6 mice showed clinical signs at 35 days post-infection (dpi). The results clearly indicated that B6 mice deficient in IL-1 signaling became susceptible to the TMEV-induced disease. This is somewhat unexpected because our previous study indicated that administration of IL-1β to B6 mice renders the mice susceptible to the disease, suggesting a pathogenic role for IL-1β in disease development.\nThe course of TMEV-induced demyelinating disease development, viral persistence levels and CNS-infiltrating mononuclear cells in TMEV-infected B6 and IL-1R KO mice. (A) Frequency and severity of demyelinating disease in B6 (n = 10) and IL-1R knockout (KO) (n = 10) mice were monitored for 70 days after Theiler’s murine encephalomyelitis virus (TMEV) infection. (B) Viral persistence levels in the pooled brains (BR) and spinal cords (SC) of infected mice (three mice per group) at 8, 21 and 70 days post-infection (dpi) were determined by quantitative PCR. Data are expressed as fold induction after normalization to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. The values are expressed as the means ± SD of triplicate experiments. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001. (C) Levels of T cells (CD4+ and CD8+), macrophages (CD11b+CD45high), microglia (CD11b+CD45int), NK cells (NK1.1+CD45+) and granulocytes (Ly6G/6C+CD45+) were assessed using flow cytometry in central nervous system (CNS)-infiltrating mononuclear cells from TMEV-infected C57BL/6 and IL-1R KO mice at 3 and 8 dpi. Numbers in FACS plots represent percentages in the CNS. Data are representative of three experiments using three mice per group.\nTo correlate the disease susceptibility of IL-1R KO mice with viral persistence in the CNS, the relative viral message levels in the CNS of wild-type (WT) B6 and IL-1R KO mice were compared at days 8, 21 and 70 post-infection with TMEV (Figure 2B). The results showed that the viral load in the spinal cord, but not in the brain, was consistently higher in IL-1R KO mice than in control B6 mice. These results strongly suggest that IL-1 signaling plays an important role in controlling viral persistence in the spinal cord during the course of TMEV infection. However, it was previously shown that the viral load alone is not sufficient for the pathogenesis of TMEV-induced demyelinating disease [25]. Thus, we further assessed the levels of cellular infiltration into the CNS of these mice during the early stages (3 and 8 dpi) of viral infection (Figure 2C). The results indicated that infiltration into the CNS of granulocytes, NK cells, macrophages and CD8+ T cells, but not CD4+ T cells, was elevated, particularly at the early stage of viral infection. These results collectively suggest that high viral loads and cellular infiltration into the CNS of resistant B6 mice in the absence of IL-1 signaling lead to the development of TMEV-induced demyelinating disease.
\n IL-1R KO mice show widely spread mild demyelinating lesions accompanied by patchy axon damage At 70 days post-infection, the histopathology of TMEV-infected IL-1R KO and B6 mice was compared to correlate disease development with the histopathology of the CNS (Figure 3). A series of histopathological examinations of the spinal cords from both KO and WT mice was conducted after H & E, LFB, and Bielschowsky silver staining. H & E staining was used to evaluate active inflammation and lymphocyte infiltration. LFB specifically stains the axonal myelin sheath and was used to evaluate demyelination. Bielschowsky silver staining stains axons dark brown and was used to evaluate axonal integrity. Lymphocyte infiltration, minor demyelination and axon loss were detected in the CNS, including the brain and spinal cord, of IL-1R KO mice but not of WT B6 mice. Compared to control B6 mice (Figure 3A a-b), IL-1R KO mice (Figure 3A c-d) showed more lymphocyte infiltration in the white matter of the lumbar spinal cord when examined by H & E staining. LFB staining of the adjacent sections showed irregular vacuoles and demyelination in the white matter of the spinal cord in IL-1R KO mice (Figure 3A g-h) and in brain regions including the cerebellum and medulla (not shown). In contrast, normal-appearing myelin and little histopathological change were observed in the control B6 mice (Figure 3A e-f). Bielschowsky silver staining of the adjacent sections also showed irregular vacuolation and mild axon loss in the demyelinated regions of the spinal cord from IL-1R KO mice (Figure 3A k-l) but not in the sections from the WT control mice (Figure 3A i-j). To further compare the cellular infiltration levels in the CNS of these mice, we examined the levels of CD45+ cells, which largely represent infiltrating cells (Figure 3B). Our results clearly showed that the level of CD45+ cells (Figure 3B d), many of which overlap with the H & E staining (Figure 3B c), was higher in the CNS of IL-1R KO mice than in that of the control B6 mice (Figure 3B a-b).
\nHistopathology of the spinal cord in IL-1R KO and wild-type mice. (A) H & E staining of the spinal cord showed infiltration of inflammatory cells in knockout (KO) mice (c, d) and little infiltration in wild-type (WT) mice (a, b). Luxol Fast Blue (LFB) staining of adjacent sections showed irregular vacuolation and minor demyelination in the white matter of KO mice (g, h) but no loss of myelin in WT mice (e, f). Bielschowsky silver staining of the same area shows the presence of irregular vacuolation and minor axonal loss in KO mice (k, l) but not in WT mice (i, j). Magnification, ×10 and ×40. Black arrows indicate regions of lymphocyte infiltrates, demyelination, or axon loss; thin black squares indicate the areas from the lumbar spinal cord region, which are shown at high magnification (b, f, and j for B6 mice and d, h, and l for IL-1R KO mice). (B) H & E staining of spinal cords of control (a) and IL-1R KO mice (c) is shown. The adjacent sections (b and d, respectively) were stained with anti-CD45 antibody (red) for infiltrating cells and counterstained with 4',6-diamidino-2-phenylindole (DAPI) (blue) for nuclei.
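The three-tier demyelination grade described in the Methods hinges on the fraction of LFB myelin staining preserved; a one-function sketch of that rubric (the cutoff separating grade 1 from grade 2 is an assumption, since the paper only specifies the 50% boundary):

```python
def demyelination_grade(fraction_myelin_preserved):
    """Map preserved LFB myelin staining to the histologic grade used here."""
    if fraction_myelin_preserved >= 0.99:   # essentially intact myelin (assumed cutoff)
        return 1  # normal myelination
    if fraction_myelin_preserved > 0.50:
        return 2  # mild/minor demyelination (>50% preserved)
    return 3      # moderate-to-severe demyelination (<50% preserved)

print([demyelination_grade(f) for f in (1.0, 0.8, 0.3)])  # [1, 2, 3]
```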
\n Cytokine gene expression is transiently higher in the CNS of IL-1R KO mice during early viral infection To understand the susceptibility of IL-1R KO mice to TMEV-induced demyelinating disease, we analyzed various cytokine message levels expressed in the CNS of virus-infected control and IL-1R KO mice during the early stages (3, 5, and 8 dpi) of viral infection using real-time PCR (Figure 4). The levels of IFN-α and IFN-β gene expression in IL-1R KO mice were significantly higher than those in B6 mice at 3 dpi, although the levels became similar at 5 and 8 dpi. The expression levels of CXCL10, which is associated with T cell infiltration, and IL-10, which is associated with viral persistence, were higher at 5 dpi, and this trend was maintained at 8 dpi. However, the expression level of IL-6 in IL-1R KO mice was transiently lower at 5 dpi, while no differences in TNF-α expression were noted. Similarly, the production of IL-17, a pathogenic T cell cytokine, was largely unchanged. However, viral RNA and the production of IFN-γ were transiently higher at 3 and 8 dpi. These results suggest that the lack of IL-1 signaling differentially affects viral replication and the expression of various innate and immune cytokines depending on the stage of TMEV infection.
\nExpression levels of cytokine genes in the CNS of TMEV-infected B6 and IL-1R knockout (KO) mice at 3, 5 and 8 days post-infection (dpi). The relative expression levels of the indicated mRNAs in the central nervous system (CNS) of Theiler’s murine encephalomyelitis virus (TMEV)-infected C57BL/6 and IL-1R KO mice at 3, 5 and 8 dpi were assessed by real-time PCR. Data are expressed as fold induction after normalization to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. The values are expressed as the means ± SD of triplicates. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001.
\n Anti-viral CD4+ T cell responses in the CNS of virus-infected IL-1R KO mice are lower during the early stage of infection To compare CD4+ T cell responses specific to viral determinants in the CNS of IL-1R KO and WT mice, infiltration levels of CD4+ T cells specific to the predominant CD4+ T cell epitopes were assessed (Figure 5A and B). The levels of IFN-γ-producing CD4+ T cells in response to pan-T cell stimulation (either PMA plus ionomycin or anti-CD3 and anti-CD28 antibodies) were similar between IL-1R KO and control WT B6 mice. However, CD4+ T cell responses to viral epitopes were proportionally lower in the CNS of virus-infected IL-1R KO mice than in B6 mice, although the overall levels in the CNS were similar. This discrepancy may be due to the high levels of CXCL10 expression (Figure 4), which promotes infiltration of T cells, in the CNS of IL-1R KO mice. The levels of IL-17-producing CD4+ T cells in both TMEV-infected IL-1R KO and B6 mice were undetectable. To further determine whether the pattern of CD4+ T cell responses is unique to the CNS of virus-infected mice, we also assessed the T cell responses in the periphery of TMEV-infected IL-1R KO and control B6 mice at 8 and 21 dpi (Figure 5C). Again, the levels of T cell proliferation and the production of key T cell cytokines (IFN-γ and IL-17) against viral epitopes (for both CD4+ and CD8+ T cells) were not drastically different between the splenic T cells from IL-1R KO and control B6 mice. Despite the similarly low levels of IL-17 production in response to viral epitopes, the IL-17 level was significantly lower in IL-1R KO mice than in WT B6 mice after robust stimulation with anti-CD3/CD28 antibodies. These results are consistent with the role of IL-1 signaling in promoting IL-17 production [26,27].\nVirus-specific CD4+ T cell responses in the CNS and the periphery of TMEV-infected B6 and IL-1R knockout (KO) mice. (A) Levels of Th1 and Th17 cells in the central nervous system (CNS) of virus-infected B6 and IL-1R KO mice were assessed using flow cytometry at 8 days post-infection (dpi) after stimulation with PBS, PMA/ionomycin, anti-CD3/CD28 antibodies, or viral epitope peptides (Pep: mixture of 2 μM VP2203-220 and 2 μM VP425-38). The flow cytometric plots show gated CD4+ T cells. (B) The numbers of CNS-infiltrating IFN-γ-producing CD4+ T cells reactive to either viral epitope peptides (CD4: equal mixture of VP2203-220 and VP425-38) or anti-CD3/CD28 antibodies in virus-infected B6 and IL-1R KO mice (three mice each) at 8 and 21 dpi are shown. A representative result from two to three similar experiments is shown here. (C) Levels of proliferation and cytokine production by splenic T cells in response to viral epitopes recognized by CD4+ T cells (CD4 Mix: equal mixture of VP2203-220 and VP425-38), CD8+ T cells (VP2121-130), or both CD4+ and CD8+ T cells (anti-CD3/CD28 antibodies) were assessed at 8 and 21 dpi. Values are expressed as the mean of triplicate samples (mean ± SD) from a representative of three experiments. *P < 0.05, **P < 0.01, ***P < 0.001.
\n Levels of TMEV-specific CD8+ T cell responses in the CNS are comparable between IL-1R KO and WT B6 mice To further determine whether the susceptibility of IL-1R KO mice to TMEV-induced demyelinating disease is associated with a compromised anti-viral CD8+ T cell response, we also analyzed CD8+ T cell responses in the CNS of TMEV-infected IL-1R KO and control B6 mice (Figure 6). Staining with the VP2121-130/H-2Db tetramer indicated that the proportions of virus-specific CD8+ T cells reactive to the predominant epitope (VP2121-130) in the CNS of virus-infected WT B6 and IL-1R KO mice are similar (Figure 6A). To further determine whether the functions of the virus-reactive CD8+ T cells differ, we assessed the ability of the cells to produce IFN-γ in response to specific and non-specific stimulation (Figure 6B). The results clearly indicated that their ability to produce IFN-γ is also similar in both proportion (Figure 6B) and number in the CNS of TMEV-infected B6 and IL-1R KO mice (Figure 6C). These results strongly suggest that, unlike the CD4+ T cell responses, the CD8+ T cell responses to viral determinants do not differ significantly between the CNS of virus-infected WT B6 and IL-1R KO mice.
\nLevels of virus-specific CD8+ T cell responses in the CNS of virus-infected B6 and IL-1R KO mice. (A) Levels of H-2Db-VP2121-130-tetramer-reactive CD8+ T cells in the central nervous system (CNS) of B6 and IL-1R knockout (KO) mice at 8 days post-infection (dpi). (B) Proportions of CNS-infiltrating CD8+ T cells reactive to viral epitopes, anti-CD3/CD28 antibodies and PMA/ionomycin were assessed using flow cytometry following intracellular cytokine staining at 8 dpi. (C) The overall numbers of virus-specific and anti-CD3/CD28-reactive CD8+ T cells in the CNS of virus-infected B6 and IL-1R KO mice are shown at 8 and 21 dpi. A representative result from two to three similar experiments is shown here.
\n Cytokine production by Th cells stimulated with macrophages from IL-1R KO mice is reduced To compare the function of CD4+ T cells stimulated by macrophages from B6 and IL-1R KO mice, T cells from naïve OT-II mice, which carry T cell receptor (TCR) transgenes specific for OVA323-339, were stimulated with peritoneal macrophages infected in vitro for 24 h with TMEV (MOI of 10) in the presence of OVA323-339 peptide (Figure 7A) or OVA protein (not shown). Viral infection did not significantly alter the levels of T cell stimulation by these macrophages. However, proliferation of CD4+ T cells in response to the cognate peptide or protein was higher when the cells were stimulated with IL-1R KO macrophages than with B6 macrophages. In contrast, IFN-γ and IL-17 production by T cells stimulated with IL-1R KO macrophages was significantly lower than that by T cells stimulated with control B6 macrophages. These results indicate that antigen-presenting cells display altered T cell-stimulating function in the absence of IL-1R-mediated signaling.\nReduced cytokine production by CD4+ T cells stimulated with IL-1R KO macrophages. (A) Isolated CD4+ T cells (1 × 10^5) from the spleen of OT-II mice were cultured with in vitro TMEV-infected (MOI of 10) peritoneal macrophages (1 × 10^4) from either C57BL/6 or IL-1R knockout (KO) mice for 3 days in the presence of 2 μM OVA epitope peptide. T cell proliferative responses were analyzed using [3H]TdR uptake, and cytokine production (IFN-γ and IL-17) in the cultures was analyzed using specific ELISAs. (B) Isolated CD4+ T cells (1 × 10^5) from the spleens of B6 mice infected with TMEV, at 8 days post-infection (dpi), were cultured with in vitro TMEV-infected (MOI of 10 for 24 h) peritoneal macrophages (1 × 10^4) from either naïve C57BL/6 or IL-1R KO mice for 3 days in the presence of viral epitope peptides. Message levels of the indicated genes were then analyzed by real-time PCR. The glyceraldehyde-3-phosphate dehydrogenase (GAPDH) level was used as an internal control. The values are expressed as the means ± SD of triplicates. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001. Data shown are representative of three independent experiments. (C) Peritoneal CD11b+ cells from naïve B6 and IL-1R KO mice infected with TMEV in vitro for 24 h were analyzed for the expression of the PDL-1 and TIM-3 T cell inhibitory molecules. A representative flow cytometry plot of three similar results is shown here.
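Cytokine concentrations are read off a 450 nm standard curve; the sketch below uses simple linear interpolation with hypothetical standards (plate-reader software typically fits a 4-parameter logistic instead):

```python
import numpy as np

def elisa_conc(od450, std_od, std_conc):
    """Interpolate cytokine concentration from a 450 nm ELISA standard curve.

    std_od, std_conc -- ODs and known concentrations (pg/mL) of the standards,
    in increasing order; linear interpolation is a simplification.
    """
    return np.interp(od450, std_od, std_conc)

std_od = [0.05, 0.12, 0.25, 0.48, 0.95, 1.80]   # hypothetical standards
std_conc = [0, 31.25, 62.5, 125, 250, 500]
print(elisa_conc(0.60, std_od, std_conc))        # ~157 pg/mL
```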
These results suggest that the cytokine production profile of macrophages and perhaps other antigen-presenting cells is altered in the absence of IL-1 signaling, which may affect the initial development and/or function of T cells following viral infection.\nTo further understand the underlying mechanisms associated with altered CD4+ T cell development in the absence of IL-1 signaling, the levels of co-stimulatory molecules and key inhibitory molecules were studied in a representative macrophage antigen-presenting cell (APC) population (Figure 7C). The levels of CD80, CD86 and CD40 in B6 and IL-1R1 KO mice were not significantly different (not shown). However, PDL-1 and Tim-3 were significantly elevated in the absence of IL-1 signaling. These molecules are known to negatively affect the function of T cells [28] and/or promote inflammatory responses through its expression by innate immune cells, such as microglia [29]. It was interesting to note that the expression of PDL-1 was upregulated upon viral infection in B6 macrophages, whereas the expression in IL-1R-deficient macrophages was constitutively upregulated even in the absence of viral infection. In contrast, the expression of Tim-3 was constitutively upregulated in IL-1R-deficient macrophages to the level (approximately 5%) of upregulation seen after viral infection in B6 macrophages, but it was further upregulated in IL-1R KO macrophages after TMEV infection. These results strongly suggest that these increases in the inhibitory molecules may participate in the altered T cell development and/or function.\nTo compare the function of CD4+ T cells stimulated by macrophages from B6 and IL-1R KO mice, T cells from naïve OT-II mice, which carry T cell receptor (TCR) transgenes specific for OVA323-339, were stimulated with peritoneal macrophages infected in vitro for 24 h with TMEV (10 MOI) in the presence of OVA323-339 peptide (Figure 7A) or OVA protein (not shown). Viral infection did not significantly alter the levels of T cell stimulation by these macrophages. However, proliferation of CD4+ T cells in response to the cognate peptide or protein was higher when stimulated with IL-1R KO macrophages compared to the proliferation stimulated with B6 macrophages. In contrast, IFN-γ and IL-17 production by the T cells stimulated with IL-1R KO macrophages were significantly lower than the production stimulated with control B6 macrophages. These results indicated that antigen-presenting cells display altered T cell stimulating function in the absence of IL-1R-mediated signaling.\nCytokine reduction of CD4+T cells with stimulation of IL-1R KO macrophages. (A) Isolated CD4+ T cells (1 × 105) from the spleen of OT-II mice were cultured with in vitro 10 MOI Theiler’s murine encephalitis virus (TMEV)-infected peritoneal macrophages (1 × 104) from either C57BL/6 or IL-1R knockout (KO) mice for 3 days in the presence of 2 μM OVA epitope peptides. T cell proliferative responses were analyzed using [3H]TdR uptake, and cytokine production (IFN-γ and IL-17) of the cultures were analyzed using specific ELISAs. (B) Isolated CD4+ T cells (1 × 105) from the spleen of B6 mice infected with TMEV at 8 days post-infection (dpi) were cultured with in vitro TMEV-infected (10 MOI for 24 h) peritoneal macrophages (1 × 104) from either naïve C57BL/6 or IL-1R KO mice for 3 days in the presence of viral epitope peptides. Message levels of the indicated genes were then analyzed by real-time PCR. 
Administration of IL-1β promotes a Th17 response to TMEV to exacerbate the pathogenicity of demyelinating disease

We have previously demonstrated that administration of LPS or IL-1β causes resistant C57BL/6 mice to develop demyelinating disease [18]. It has recently been shown that LPS treatment promotes this pathogenesis by elevating the induction of the pathogenic Th17 response [17]. However, it is unknown how IL-1β promotes this pathogenesis.
To understand the mechanism, we compared the levels of Th1 and Th17 cells in the CNS of B6 mice treated with either LPS or IL-1β, along with control mice treated with PBS, following infection with TMEV at 8 days post-infection (Figure 1). The results clearly indicated that the levels of IL-17A-producing Th17 cells in mice treated with either LPS or IL-1β were significantly elevated compared to PBS-treated control mice (Figure 1A and B). In contrast, the levels of IFN-γ-producing Th1 cells were not different. It is interesting to note that the Th17 level in IL-1β-treated mice exceeded that in LPS-treated mice. However, the levels of IL-17-producing CD8+ T cells were minimal (not shown), and the levels of IFN-γ-producing CD8+ T cells were also similar among the groups (Figure 1C). These results strongly suggest that IL-1β can promote the pathogenesis of TMEV-induced demyelinating disease by enhancing the induction of pathogenic Th17 cells rather than by altering the Th1 response.

Effects of IL-1β administration on Th17 and Th1 responses in TMEV-infected B6 mice. (A) Levels of IFN-γ- and IL-17-producing CD4+ T cells in B6 mice intraperitoneally treated with PBS, lipopolysaccharide (LPS), or IL-1β during the early stage (−1 and 3 days post-infection (dpi)) of Theiler's murine encephalomyelitis virus (TMEV) infection (three mice per group) were analyzed using flow cytometry of the pooled central nervous system (CNS) cells at 8 dpi after stimulation with either PBS or anti-CD3/CD28 antibodies. (B) The overall numbers of IL-17-producing cells in the CNS of the above treated B6 mice (three mice per group) are shown. (C) Levels of IFN-γ-producing CD8+ T cells in the CNS of these groups of mice were analyzed similarly. ***P < 0.001.

Although administration of IL-1β promotes the pathogenesis of TMEV-induced demyelinating disease, the IL-1β produced via NLRP3 inflammasome activation upon viral infection is considered to be a protective mechanism against microbial infections by promoting the apoptosis of infected cells [24]. To further investigate the potential role of IL-1β-mediated signaling in the development of TMEV-induced demyelinating disease, we compared the development of TMEV-induced demyelinating disease in IL-1R KO mice on a B6 background and control B6 mice (Figure 2A). Every IL-1R KO mouse developed demyelinating disease, while none of the control B6 mice showed clinical signs at 35 days post-infection (dpi). The results clearly indicated that B6 mice deficient in IL-1 signaling became susceptible to the TMEV-induced disease. This is somewhat unexpected because our previous study indicated that administration of IL-1β to B6 mice renders the mice susceptible to the disease, suggesting a pathogenic role for IL-1β in disease development.

The course of TMEV-induced demyelinating disease development, viral persistence levels and CNS-infiltrating mononuclear cells in TMEV-infected B6 and IL-1R KO mice. (A) Frequency and severity of demyelinating disease in B6 (n = 10) and IL-1R knockout (KO) (n = 10) mice were monitored for 70 days after Theiler's murine encephalomyelitis virus (TMEV) infection. (B) Viral persistence levels in the pooled brains (BR) and spinal cords (SC) of infected mice (three mice per group) at 8, 21 and 70 days post-infection (dpi) were determined by quantitative PCR. Data are expressed as fold induction after normalization to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. The values are expressed as the means ± SD of triplicate experiments.
Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001. (C) Levels of T cells (CD4+ and CD8+), macrophages (CD11b+CD45high), microglia (CD11b+CD45int), NK cells (NK1.1+CD45+) and granulocytes (Ly6G/6C+CD45+) were assessed using flow cytometry in central nervous system (CNS)-infiltrating mononuclear cells from TMEV-infected C57BL/6 and IL-1R KO mice at 3 and 8 dpi. Numbers in FACS plots represent percentages in the CNS. Data are representative of three experiments using three mice per group.

To correlate the disease susceptibility of IL-1R KO mice with viral persistence in the CNS, the relative viral message levels in the CNS of wild type (WT) B6 and IL-1R KO mice were compared at days 8, 21 and 70 post-infection with TMEV (Figure 2B). The results showed that the viral load in the spinal cord, but not in the brain, is consistently higher in IL-1R KO mice compared to control B6 mice. These results strongly suggest that IL-1 signaling plays an important role in controlling viral persistence in the spinal cord during the course of TMEV infection. However, it was previously shown that the viral load level alone is not sufficient for the pathogenesis of TMEV-induced demyelinating disease [25]. Thus, we further assessed the levels of cellular infiltration into the CNS of these mice during the early stages (3 and 8 dpi) of viral infection (Figure 2C). The results indicated that infiltration into the CNS of granulocytes, NK cells, macrophages and CD8+ T cells, but not CD4+ T cells, was elevated, particularly at the early stage of viral infection. These results collectively suggest that, in the absence of IL-1 signaling, high viral loads and cellular infiltration into the CNS of resistant B6 mice lead to the elevated development of TMEV-induced demyelinating disease.

At 70 days post-infection, the histopathology of TMEV-infected IL-1R KO and B6 mice was compared to correlate the disease development with the histopathology of the CNS (Figure 3). A series of histopathological examinations of the spinal cords from both KO and WT mice was conducted using H & E, LFB, and Bielschowsky silver staining. H & E staining was used to evaluate evidence of active inflammation and lymphocyte infiltration. LFB specifically stains the axonal myelin sheath and was used to evaluate axonal demyelination. Bielschowsky silver staining stains axons dark brown and was used to evaluate axonal integrity. Lymphocyte infiltration, minor demyelination and axon loss were detected in the CNS, including the brain and spinal cord, in IL-1R KO mice but not in WT B6 mice. Compared to control B6 mice (Figure 3A a-b), IL-1R KO mice (Figure 3A c-d) showed more lymphocyte infiltration in the white matter of the lumbar spinal cord when examined by H & E staining. LFB staining of the adjacent sections showed irregular vacuoles and demyelination in the white matter of the spinal cord in IL-1R KO mice (Figure 3A g-h) and in brain regions including the cerebellum and medulla (not shown). In contrast, normal-appearing myelin and little histopathological change were observed in the control B6 mice (Figure 3A e-f). Bielschowsky silver staining of the adjacent sections also showed irregular vacuolation and mild axon loss in the demyelinated regions of the spinal cord from IL-1R KO mice (Figure 3A k-l) but not in the sections from the WT control mice (Figure 3A i-j).
To further compare the cellular infiltration levels in the CNS of these mice, we examined the levels of CD45+ cells in the CNS, which largely represent infiltrating cells (Figure 3B). Our results clearly showed that the level of CD45+ cells (Figure 3B d), many of which overlap with the H & E staining (Figure 3B c), was higher in the CNS of IL-1R KO mice compared to that of the control B6 mice (Figure 3B a-b).

Histopathology of the spinal cord in IL-1R KO and wild-type mice. (A) H & E staining of the spinal cord showed infiltration of inflammatory cells in knockout (KO) mice (c, d) and little infiltration in wild-type (WT) mice (a, b). Luxol Fast Blue (LFB) staining of adjacent sections showed irregular vacuolation and minor demyelination in the white matter of KO mice (g, h) but no loss of myelin in WT mice (e, f). Bielschowsky silver staining of the same area showed irregular vacuolation and minor axonal loss in KO mice (k, l) but not in WT mice (i, j). Magnification, ×10 and ×40. Black arrows indicate regions of lymphocyte infiltrates, demyelination, or axon loss; thin black squares indicate the areas from the lumbar spinal cord region, which are shown at high magnification (b, f, and j for B6 mice and d, h, and l for IL-1R KO mice). (B) H & E staining of spinal cords of control (a) and IL-1R KO mice (c) is shown. The adjacent sections (b and d, respectively) were stained with anti-CD45 antibody (red) for infiltrating cells and counterstained with 4',6-diamidino-2-phenylindole (DAPI) (blue) for nuclei.

To understand the susceptibility to TMEV-induced demyelinating disease in IL-1R KO mice, we analyzed various cytokine message levels expressed in the CNS of virus-infected control and IL-1R KO mice during the early stages (3, 5, and 8 dpi) of viral infection using real-time PCR (Figure 4). The levels of IFN-α and IFN-β gene expression in IL-1R KO mice were significantly higher than those in B6 mice at 3 dpi, although the levels became similar at 5 and 8 dpi. The expression levels of CXCL-10, which is associated with T cell infiltration, and IL-10, which is associated with viral persistence, were higher at 5 dpi, and this trend was maintained at 8 dpi. However, the expression level of IL-6 in IL-1R KO mice was transiently lower at 5 dpi, while no differences in TNF-α expression were noted. Similarly, the production of the pathogenic T cell cytokine IL-17 was largely unchanged. However, viral RNA and the production of IFN-γ were transiently higher at 3 and 8 dpi. These results suggest that the lack of IL-1 signaling differentially affects viral replication and the expression of various innate and immune cytokines depending on the stage of TMEV infection.

Expression levels of cytokine genes in the CNS of TMEV-infected B6 and IL-1R knockout (KO) mice at 3, 5 and 8 days post-infection (dpi). The relative expression levels of the indicated mRNAs in the central nervous system (CNS) of Theiler's murine encephalomyelitis virus (TMEV)-infected C57BL/6 and IL-1R KO mice at 3, 5 and 8 dpi were assessed by real-time PCR. Data are expressed as fold induction after normalization to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. The values are expressed as the means ± SD of triplicates.
Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001.

To compare CD4+ T cell responses specific to viral determinants in the CNS of IL-1R KO and WT mice, infiltration levels of CD4+ T cells specific to the predominant CD4+ T cell epitopes were assessed (Figure 5A and B). The levels of IFN-γ-producing CD4+ T cells in response to pan-T cell stimulation (either PMA plus ionomycin or anti-CD3 and anti-CD28 antibodies) were similar between IL-1R KO and control WT B6 mice. However, such CD4+ T cell responses to viral epitopes were proportionally lower in the CNS of virus-infected IL-1R KO mice compared to B6 mice, although the overall levels in the CNS were similar. This discrepancy may be due to the high level of CXCL-10 expression in the CNS of IL-1R KO mice (Figure 4), which promotes infiltration of T cells. The levels of IL-17-producing CD4+ T cells in either TMEV-infected IL-1R KO or B6 mice were undetectable. To further determine whether the pattern of CD4+ T cell responses is unique to the CNS of virus-infected mice, we also assessed the T cell responses in the periphery of TMEV-infected IL-1R KO and control B6 mice at 8 and 21 dpi (Figure 5C). Again, the levels of T cell proliferation and the production of key T cell cytokines (IFN-γ and IL-17) against viral epitopes (for both CD4+ and CD8+ T cells) were not drastically different between the splenic T cells from IL-1R KO and control B6 mice. Despite the similarly low levels of IL-17 production in response to viral epitopes, the IL-17 level was significantly lower in IL-1R KO mice compared to WT B6 mice after robust stimulation with anti-CD3/CD28 antibodies. These results are consistent with the role of IL-1 signaling in promoting IL-17 production [26,27].

Virus-specific CD4+ T cell responses in the CNS and the periphery of TMEV-infected B6 and IL-1R knockout (KO) mice. (A) Levels of Th1 and Th17 cells in the central nervous system (CNS) of virus-infected B6 and IL-1R KO mice were assessed using flow cytometry at 8 days post-infection (dpi) after stimulation with PBS, PMA/ionomycin, anti-CD3/CD28 antibodies, or viral epitope peptides (Pep: mixture of 2 μM VP2203-220 and 2 μM VP425-38). The flow cytometric plots show gated CD4+ T cells. (B) The numbers of CNS-infiltrating IFN-γ-producing CD4+ T cells reactive to either viral epitope peptides (CD4: equal mixture of VP2203-220 and VP425-38) or anti-CD3/CD28 antibodies in virus-infected B6 and IL-1R KO mice (three mice each) at 8 and 21 dpi. A representative result from two to three similar experiments is shown here. (C) Levels of proliferation and cytokine production by splenic T cells in response to viral epitopes recognized by CD4+ T cells (CD4 Mix: equal mixture of VP2203-220 and VP425-38), CD8+ T cells (VP2121-130), or both CD4+ and CD8+ T cells (anti-CD3/CD28 antibodies) were assessed at 8 and 21 dpi. Values are expressed as the mean of triplicate samples (mean ± SD) from a representative of three experiments. *P < 0.05, **P < 0.01, ***P < 0.001.

To further determine whether the susceptibility of IL-1R KO mice to TMEV-induced demyelinating disease is associated with a compromised anti-viral CD8+ T cell response, we also analyzed the T cell responses in the CNS of TMEV-infected IL-1R KO and control B6 mice (Figure 6).
Virus-specific CD8+ T cells reactive to the predominant epitope (VP2121-130) were detected using the VP2121-130/H-2Db tetramer; the results indicated that the proportions of virus-specific CD8+ T cells in the CNS of virus-infected WT B6 and IL-1R KO mice are similar (Figure 6A). To further determine whether the functions of the virus-reactive CD8+ T cells are different, we assessed the ability of the cells to produce IFN-γ in response to specific and non-specific stimulation (Figure 6B). The results clearly indicated that their ability to produce IFN-γ is also similar in both proportion (Figure 6B) and number (Figure 6C) in the CNS of TMEV-infected B6 and IL-1R KO mice. These results strongly suggest that, unlike the CD4+ T cell responses, the CD8+ T cell responses to the viral determinants are not significantly different in the CNS of virus-infected WT B6 and IL-1R KO mice.

Levels of virus-specific CD8+ T cell responses in the CNS of virus-infected B6 and IL-1R KO mice. (A) Levels of H-2Db-VP2121-130-tetramer-reactive CD8+ T cells in the central nervous system (CNS) of B6 and IL-1R knockout (KO) mice at 8 days post-infection (dpi). (B) Proportions of CNS-infiltrating CD8+ T cells reactive to viral epitopes, anti-CD3/CD28 antibodies and PMA/ionomycin were assessed using flow cytometry following intracellular cytokine staining at 8 dpi. (C) The overall numbers of virus-specific and anti-CD3/CD28-reactive CD8+ T cells in the CNS of virus-infected B6 and IL-1R KO mice are shown at 8 and 21 dpi. A representative result from two to three similar experiments is shown here.
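The figure above reports virus-specific CD8+ T cells both as proportions of gated cells and as overall numbers in the CNS. The overall number is conventionally obtained by applying the flow-cytometric percentages to the total mononuclear cell count recovered from the pooled CNS; the sketch below illustrates that arithmetic with hypothetical values (none of these numbers come from the study):

# Hypothetical values; not data from the study.
total_cns_mnc = 2.4e6        # mononuclear cells recovered from pooled CNS
frac_cd8 = 0.32              # fraction of live-gated cells that are CD8+
frac_tetramer_pos = 0.45     # fraction of CD8+ cells binding the VP2 tetramer

# Overall number of virus-specific CD8+ T cells in the CNS.
n_virus_specific = total_cns_mnc * frac_cd8 * frac_tetramer_pos
print(f"{n_virus_specific:.3g} tetramer-positive CD8+ T cells")  # 3.46e+05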
Cytokine production by Th cells stimulated with macrophages from IL-1R KO mice is reduced

To compare the function of CD4+ T cells stimulated by macrophages from B6 and IL-1R KO mice, T cells from naïve OT-II mice, which carry T cell receptor (TCR) transgenes specific for OVA323-339, were stimulated with peritoneal macrophages infected in vitro for 24 h with TMEV (10 MOI) in the presence of OVA323-339 peptide (Figure 7A) or OVA protein (not shown). Viral infection did not significantly alter the levels of T cell stimulation by these macrophages. However, proliferation of CD4+ T cells in response to the cognate peptide or protein was higher when the cells were stimulated with IL-1R KO macrophages than with B6 macrophages. In contrast, IFN-γ and IL-17 production by the T cells stimulated with IL-1R KO macrophages was significantly lower than that by T cells stimulated with control B6 macrophages. These results indicated that antigen-presenting cells display altered T cell-stimulating function in the absence of IL-1R-mediated signaling.

Reduced cytokine production by CD4+ T cells stimulated with IL-1R KO macrophages. (A) Isolated CD4+ T cells (1 × 10^5) from the spleen of OT-II mice were cultured with in vitro Theiler's murine encephalomyelitis virus (TMEV)-infected (10 MOI) peritoneal macrophages (1 × 10^4) from either C57BL/6 or IL-1R knockout (KO) mice for 3 days in the presence of 2 μM OVA epitope peptides. T cell proliferative responses were analyzed using [3H]TdR uptake, and cytokine production (IFN-γ and IL-17) in the cultures was analyzed using specific ELISAs. (B) Isolated CD4+ T cells (1 × 10^5) from the spleen of B6 mice infected with TMEV, at 8 days post-infection (dpi), were cultured with in vitro TMEV-infected (10 MOI for 24 h) peritoneal macrophages (1 × 10^4) from either naïve C57BL/6 or IL-1R KO mice for 3 days in the presence of viral epitope peptides. Message levels of the indicated genes were then analyzed by real-time PCR. The glyceraldehyde-3-phosphate dehydrogenase (GAPDH) level was used as an internal control. The values are expressed as the means ± SD of triplicates. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001. Data shown are representative of three independent experiments. (C) Peritoneal CD11b+ cells from naïve B6 and IL-1R KO mice infected with TMEV in vitro for 24 h were analyzed for the expression of the T cell inhibitory molecules PDL-1 and Tim-3. A representative flow cytometry plot of three similar results is shown here.

To further understand the potential mechanisms underlying the altered T cell stimulation by IL-1R KO macrophages, we examined the potential contribution of IL-1 signaling to the induction of cytokines in TMEV-infected macrophages (Figure 7B). After TMEV infection in vitro for 24 h, the levels of viral RNA as well as IFN-α and IFN-β messages were similar between macrophages from WT B6 and IL-1R KO mice. However, the expression of the IL-6 message was severely compromised in IL-1R KO macrophages compared to B6 macrophages. In contrast, the expression level of the TNF-α message was more highly upregulated in IL-1R KO macrophages after TMEV infection. Interestingly, however, the expression of TGF-β in uninfected IL-1R KO macrophages was higher than that in B6 macrophages but was reduced to a similar level after viral infection. These results suggest that the cytokine production profile of macrophages, and perhaps other antigen-presenting cells, is altered in the absence of IL-1 signaling, which may affect the initial development and/or function of T cells following viral infection.

To further understand the underlying mechanisms associated with altered CD4+ T cell development in the absence of IL-1 signaling, the levels of co-stimulatory molecules and key inhibitory molecules were studied in a representative macrophage antigen-presenting cell (APC) population (Figure 7C). The levels of CD80, CD86 and CD40 in B6 and IL-1R1 KO mice were not significantly different (not shown). However, PDL-1 and Tim-3 were significantly elevated in the absence of IL-1 signaling. These molecules are known to negatively affect the function of T cells [28] and/or promote inflammatory responses through their expression by innate immune cells, such as microglia [29]. It was interesting to note that the expression of PDL-1 was upregulated upon viral infection in B6 macrophages, whereas its expression in IL-1R-deficient macrophages was constitutively upregulated even in the absence of viral infection. In contrast, the expression of Tim-3 was constitutively upregulated in IL-1R-deficient macrophages to the level (approximately 5%) seen after viral infection in B6 macrophages, and it was further upregulated in IL-1R KO macrophages after TMEV infection. These results strongly suggest that these increases in inhibitory molecules may participate in the altered T cell development and/or function.

Discussion: TMEV infection in susceptible strains of mice induces chronic demyelinating disease that is primarily mediated by CD4+ T cells [17,30,31]. However, epitope-specific CD4+ T cells can be protective or pathogenic depending on when activated T cells are available in conjunction with viral infection [23,32,33]. Interestingly, the level of IL-1β induced following infection with TMEV plays an important role in the pathogenesis of TMEV-induced demyelinating disease [18,34].
It has previously been shown that administration of IL-1 to mice exacerbates the development of experimental autoimmune encephalomyelitis (EAE), the pathogenic immune mechanisms of which are similar to those of TMEV-induced demyelinating disease [35-37]. In addition, IL-1 appears to directly activate astrocytes and microglia to exacerbate neurodegeneration in non-immune-mediated diseases [38]. Because IL-1β is induced via the innate immunity mediated by various TLRs, and because the downstream IL-1 signals mediated via IL-1R also play an important role in host defense [1,4], we have investigated the role of IL-1β signals in the development of TMEV-induced demyelinating disease by assessing the effects of IL-1β administration and using IL-1R-deficient mice.

We have previously demonstrated that administration of IL-1β into resistant B6 mice renders them susceptible to TMEV-induced demyelinating disease [18]. The administration of IL-1β dramatically increased the level of IL-17 production in the CNS of the resistant mice, which do not otherwise produce a high level of Th17 cells following TMEV infection (Figure 1). This result is consistent with recent reports that IL-1β strongly promotes the development of IL-17-producing Th17 cells either directly or via the production of IL-6 [19,39]. The presence of high levels of IL-17A in mice infected with TMEV exerts a strong pathogenic role by inhibiting the apoptosis of virus-infected cells, blocking cytolytic CD8+ T cell function, and elevating cellular infiltration into the CNS [17]. Recently, it was also shown that the presence of FoxP3+ Treg cells, which preferentially expand due to stimulation by IL-1β [40], is not beneficial in TMEV-induced demyelinating disease, because these regulatory cells inhibit the protective anti-viral immune responses [41]. Therefore, administration of IL-1β, resulting in a higher level of IL-1β, appears to promote the pathogenesis of TMEV-induced demyelinating disease in resistant B6 mice by elevating pathogenic Th17 and Treg responses to TMEV antigens. In addition, it is known that IL-1 directly activates astrocytes and microglia in the CNS [42], which are associated with the pathogenesis of TMEV-induced demyelinating disease [13,43]. Furthermore, IL-1 mediates the loss of astroglial glutamate transport and drives motor neuron injury in the spinal cord during viral encephalomyelitis [44]. The expression of IL-1R1 is upregulated in glial cells following TMEV infection [45], and thus the elevated receptor expression is likely to exert the detrimental effects of IL-1 signaling on neurodegeneration and/or pathogenic immune responses.

In the absence of IL-1R1-mediated signals, which result from engagement of the receptor by the predominant cytokine IL-1β and the weaker cytokine IL-1α, strongly resistant B6 mice become susceptible to the development of TMEV-induced disease (Figure 2). Viral loads in the spinal cord are higher in the absence of IL-1R signals, suggesting that IL-1 signaling plays an important role in controlling viral persistence during the course of TMEV infection. The high viral loads were also accompanied by higher cellular infiltration into the CNS. Histopathological examinations of the virus-infected IL-1R-deficient B6 mice confirmed the elevated lymphocyte infiltration, demyelination and axonal losses in the CNS compared to control B6 mice (Figure 3).
These results are consistent with previous reports indicating that either IL-1β- or IL-1RI-deficient mice are susceptible to various infections [1,7,8,46]. These results collectively suggest that either an abnormally high level of IL-1β or the absence of IL-1-mediated signals leads to high viral loads and cellular infiltration into the CNS, resulting in the elevated development of TMEV-induced demyelinating disease. Therefore, a fine balance of IL-1β-mediated signaling appears to be important for protection from viral infections. It is also interesting to note that this viral model for MS is markedly different from the EAE model, which is not associated with microbial infections, in that a deficiency of IL-1R1 significantly reduces the development of demyelinating disease in EAE [37].

Despite many previous studies on the role of IL-1β signaling in viral infections, the underlying mechanisms of the signals involved in protection from infection remain unclear. Previously, it has been shown that IL-1-mediated signals augment T cell responses by increasing cellular infiltration, as well as by upregulating cytokine production and co-stimulatory molecule expression in APCs [5,47,48]. However, our results showed that cellular infiltration is elevated in IL-1R1 KO mice during the early stages of viral infection (Figure 2), although the anti-viral CD4+ T cell responses in the CNS of virus-infected IL-1R KO mice are lower, without compromising either peripheral CD4+ T cell responses (Figure 5) or CNS CD8+ T cell responses (Figure 6). These results suggest that the APCs associated with CD4+ T cell responses in the CNS are primarily affected by the absence of IL-1-mediated signaling. Our previous studies strongly suggested that primarily the microglia and, to a certain extent, astrocytes harbor viral loads and play important roles in determining the level and type of the CD4+ T cell response [43]. In addition, it is known that IL-1 signaling affects the function of these cell types [42]. Therefore, it is most likely that these cells play an important role in the development of anti-viral CD4+ T cell responses in the CNS during the early stage of viral infection. Because the cytokine production profile of APCs is altered in the absence of IL-1 signaling, perhaps due to the elevated expression of inhibitory molecules (Figure 7), similar mechanisms in CNS APCs may negatively affect the initial development and/or function of anti-viral T cells following viral infection. Regarding the underlying mechanisms, it is currently unclear how the deficiency in IL-1 signals enhances the expression of inhibitory molecules in APCs. However, we have observed that APCs from susceptible SJL mice expressed significantly higher levels of these molecules upon viral infection, either in vitro or in vivo, compared to cells from resistant B6 mice (data not shown), suggesting that the viral load may lead to the elevated expression. Therefore, it is most likely that the absence of IL-1 signals permits the initial elevation of viral load (Figure 4), and the higher viral load, in turn, leads to an eventual compromise in the efficiency of anti-viral T cell responses and functions. In contrast, the presence of excessive IL-1 signals preferentially triggers T cell responses that are unfavorable for the protection of the host from chronic viral persistence and the pathogenesis of demyelinating disease, as previously seen [17,19].

Conclusions: IL-1 signaling plays a protective role against viral infections.
However, we have previously demonstrated that administration of IL-1 promotes the pathogenesis of TMEV-induced demyelinating disease, similar to the autoimmune disease model (EAE) for MS. The IL-1-mediated pathogenesis of TMEV-induced demyelinating disease appears to reflect an elevated Th17 response in the presence of IL-1. However, IL-1R-deficient B6 mice also developed TMEV-induced demyelinating disease, accompanied by high viral persistence and upregulated expression of T cell inhibitory molecules such as PDL-1 and Tim-3. These results suggest that the presence of a high IL-1 level promotes the pathogenesis by elevating Th17 responses, whereas the absence of IL-1 signals permits viral persistence in the CNS due to insufficient T cell activation. Therefore, the balance of IL-1 signaling appears to be critical in determining protection versus pathogenesis in the development of a virus-induced demyelinating disease.

Abbreviations: APC: antigen-presenting cell; CNS: central nervous system; Dpi: days post-infection; EAE: experimental autoimmune encephalomyelitis; ELISA: enzyme-linked immunosorbent assay; GAPDH: glyceraldehyde-3-phosphate dehydrogenase; H & E: hematoxylin and eosin; IL-1R: interleukin-1 receptor; LFB: Luxol Fast Blue; LPS: lipopolysaccharide; MNC: mononuclear cell; MS: multiple sclerosis; OVA: ovalbumin; PBS: phosphate-buffered saline; PCR: polymerase chain reaction; PFU: plaque-forming unit; SEM: standard error of the mean; TLR: toll-like receptor; TMEV: Theiler's murine encephalomyelitis virus; TMEV-IDD: TMEV-induced demyelinating disease.

Competing interests: The authors declare that they have no competing interests.

Authors' contributions: BSK directed the experiments, interpreted the results and wrote the manuscript. YHJ conducted the immunological experiments and helped with the writing. LM conducted the histological experiments and wrote the corresponding portions. HSK performed some of the molecular analyses. WH and HSP conducted the initial immunological experiments. CSK contributed to the interpretation of results and the direction of the study. All authors read and approved the final manuscript.
[ "IL-1R", "TMEV", "Demyelination", "CNS", "T cell responses", "IL-1", "IL-1R KO mice", "Th17" ]
Introduction: Toll-like receptors (TLRs) and interleukin-1 receptors (IL-1Rs) are involved in the production of various cytokines that are associated with the innate immune response against many different infectious agents. TLRs and IL-1Rs share many structural similarities and utilize common downstream adaptor molecules after activation by their ligands. In general, the innate immune responses induced by TLRs and IL-1Rs are known to play a protective role against various microbes [1]. However, several recent studies have indicated that these signals may also play a pathogenic role in viral infections [2-4]. In addition to TLRs, IL-1Rs are also considered to be important innate receptors because IL-1β, in particular, is a prominent cytokine that appears in the early stage of microbial infections [3]. The IL-1R family contains six receptors, including IL-1RI, which recognizes the principal inflammatory cytokine IL-1β and the less inflammatory cytokine IL-1α [1,5]. IL-1β is generated from the cleavage of pro-IL-1β by caspase-1 in inflammasomes after infections, and the downstream signaling cascade of the IL-1β-IL-1R interaction leads to the induction of various proinflammatory cytokines and the activation of lymphocytes [6]. IL-1β-deficient mice show broad host susceptibility to various infections [7,8]. Moreover, IL-1RI-deficient mice are susceptible to certain pathogens, including Listeria monocytogenes [1]. Therefore, these responses to IL-1β are apparently critical for protection from many types of viruses and microbes. However, the level of IL-1β has also been linked to many different inflammatory autoimmune diseases, including diabetes, lupus, arthritis, and multiple sclerosis (MS) [1,4].

Theiler's murine encephalomyelitis virus (TMEV) is a positive-stranded RNA virus in the Picornaviridae family [9]. TMEV establishes a persistent CNS infection in susceptible mouse strains that results in the development of chronic demyelinating disease, and this system has been studied as a relevant viral model for human multiple sclerosis [10-12]. Cells infected with TMEV produce various proinflammatory cytokines, including type I IFNs, IL-6 and IL-1β [13]. TLR3 and TLR2 are involved in the production of these cytokines following infection with TMEV [14,15]. In addition, melanoma differentiation-associated gene 5 and dsRNA-activated protein kinase R are known to contribute to the production of proinflammatory cytokines [14,16]. These pathways also induce activation of caspase-1, leading to the generation of IL-1β and IL-1α, which contribute to further cytokine production, such as IL-6, which promotes the development of pathogenic Th17 cells.

Because IL-1β signals are associated with both host protection from viral infections and the pathogenesis of inflammatory immune-mediated diseases, we investigated here the role of IL-1β-mediated signals in the development of TMEV-induced demyelinating disease. We have previously reported that Th17 cells preferentially develop in an IL-6-dependent manner after TMEV infection, and that Th17 cells promote persistent viral infection and induce the pathogenesis of chronic demyelinating disease [17]. In addition, our earlier studies indicated that administration of either lipopolysaccharide (LPS) or IL-1β, thereby inducing high levels of IL-6 production, into resistant C57BL/6 (B6) mice renders the mice susceptible to the development of TMEV-induced demyelinating disease [18].
These results suggest that an excessive level of IL-1β exacerbates TMEV-induced demyelinating disease by generating high levels of pathogenic Th17 cells [19]. In this study, we confirmed the role of excessive IL-1β in the generation of a high level of Th17 cells in resistant B6 mice, supporting the pathogenic mechanisms of IL-1β. Furthermore, we have also utilized IL-1R-deficient mice to investigate the role of IL-1β-mediated signaling in the development of TMEV-induced demyelinating disease. Our results indicate that the lack of IL-1 signaling in resistant B6 mice also led to TMEV-induced demyelinating disease. Initial deficiencies in T cell function, including cytokine production, as well as high viral persistence in the late stage of viral infection, were found in IL-1R-deficient mice. Therefore, the presence of an excessive amount of IL-1 plays a pathogenic role by elevating pathogenic Th17 responses, whereas the lack of IL-1 signals promotes viral persistence in the spinal cord, leading to chronic immune-mediated inflammatory disease.

Materials and methods:

Animals: Female C57BL/6 mice were purchased from the Charles River Laboratories (Charles River, MA, USA) through the National Cancer Institute (Frederick, MD). Female B6.129S7-Il1r1tm1Imx/J mice (IL-1R knockout (KO)) were purchased from Jackson Laboratories (Bar Harbor, ME, USA). These mice were housed in the Animal Care Facility of Northwestern University. All experimental procedures were approved by the Animal Care and Use Committee (ACUC) of Northwestern University and conducted in accordance with NIH animal care guidelines.

Synthetic peptides and antibodies: All peptides used were purchased from GeneMed (GeneMed Synthesis Inc, CA, USA) and used as described previously [20]. All antibodies used were purchased from BD Pharmingen (San Diego, CA, USA).

Virus: The BeAn strain of TMEV was generated, propagated, and titered in BHK-21 cells grown in Dulbecco's modified Eagle medium supplemented with 7.5% donor calf serum as previously described [21]. Viral titer was determined by plaque assays on BHK cell monolayers.

Viral infection of mice and assessment of clinical signs: For intracerebral (i.c.) infection, 30 μl of virus solution containing 1 × 10^6 PFU was injected into the right cerebral hemisphere of 6- to 8-week-old mice (n = 10 per group) anesthetized with isoflurane.
Clinical symptoms of disease were assessed weekly on the following grading scale: grade 0 = no clinical signs; grade 1 = mild waddling gait; grade 2 = severe waddling gait; grade 3 = moderate hind limb paralysis; grade 4 = severe hind limb paralysis and loss of righting reflex.

Reverse-transcriptase PCR and real-time PCR: Total cellular RNA from the brain and spinal cord of TMEV-infected mice was isolated using Trizol® Reagent (Invitrogen, CA, USA). First-strand cDNA was synthesized from 1 μg of total RNA utilizing SuperScript® III First-Strand Synthesis Supermix or M-MLV (Invitrogen). The cDNAs were amplified with specific primer sets using the SYBR Green Supermix (Bio-Rad) on an iCycler (Bio-Rad). Primers for the GAPDH control and the cytokine genes were purchased from Integrated DNA Technologies:

GAPDH: forward AACTTTGGCATTGTGGAAGG, reverse ACACATTGGGGGTAGGAACA
VP-1: forward TGACTAAGCAGGACTATGCCTTCC, reverse CAACGAGCCACATATGCGGATTAC
IFN-α: forward ACCTCCTCTGACCCAGGAAG, reverse GGCTCTCCAGACTTCTGCTC
IFN-β: forward CCCTATGGAGATGACGGAGA, reverse CTGTCTGCTGGTGGAGTTCA
CXCL10: forward AAGTGCTGCCGTCATTTTCT, reverse GTGGCAATGATCTCAACACG
IL-10: forward GCCAAGCCTTATCGGAAATGATCC, reverse AGACACCTTGGTCTTGGAGCTT
IFN-γ: forward ACTGGCAAAAGGATGGTGAC, reverse TGAGCTCATTGAATGCTTGG
IL-17A: forward CTCCAGAAGGCCCTCAGACTAC, reverse AGCTTTCCCTCCGCATTGACACAG
IL-6: forward AGTTGCCTTCTTGGGACTGA, reverse TCCACGATTTCCCAGAGAAC
TNF-α: forward GGTCACTGTCCCAGCATCTT, reverse CTGTGAAGGGAATGGGTGTT
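Throughout the figure legends, the qPCR data are "expressed as fold induction after normalization to the GAPDH mRNA levels." The exact formula is not spelled out in the text; the sketch below assumes the standard 2^-ΔΔCt (Livak) calculation, which matches that description, and the Ct values used are hypothetical placeholders rather than data from this study:

def fold_induction(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative expression of a target mRNA by the 2^-ddCt method:
    normalize the target to GAPDH within each sample, then express the
    infected sample relative to the uninfected control."""
    d_ct_sample = ct_target - ct_gapdh             # sample, normalized to GAPDH
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl  # control, normalized to GAPDH
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: hypothetical IL-6 Ct values in an infected vs. uninfected sample.
print(fold_induction(ct_target=24.1, ct_gapdh=18.0,
                     ct_target_ctrl=27.3, ct_gapdh_ctrl=18.2))  # ~8-fold induction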
Isolation of CNS-infiltrating mononuclear cells (MNCs): Mice were perfused with sterile Hank's balanced salt solution (HBSS), and the excised brains and spinal cords of three mice per group were homogenized. CNS-infiltrating MNCs were then enriched in the bottom one-third fraction of a continuous Percoll (Pharmacia, Piscataway, NJ, USA) gradient after centrifugation for 30 minutes at 27,000 g as described previously [22].

Flow cytometry: CNS-infiltrating lymphocytes were isolated, and Fc receptors were blocked using 100 μL of 2.4G2 hybridoma (ATCC) supernatant by incubating at 4°C for 30 minutes. Cells were stained with anti-CD8 (clone 53–6.7), anti-CD4 (clone GK1.5), anti-CD11b (clone M1/70), anti-NK1.1 (clone PK136), anti-GR-1 (clone RB6-8C5) and anti-CD45 (clone 30-F11) antibodies. All antibodies used for flow cytometry were purchased from BD Pharmingen (San Diego, CA). Cells were analyzed using a Becton Dickinson LSRII flow cytometer.

Intracellular staining of cytokine production: Freshly isolated CNS-infiltrating MNCs from three mice per group were cultured in 96-well round-bottom plates in the presence of the relevant or control peptide as previously described [23]. Allophycocyanin-conjugated anti-CD8 (clone Ly2) or anti-CD4 (clone L3T4) antibodies and a PE-labeled rat monoclonal anti-IFN-γ (XMG1.2) antibody were used for intracellular cytokine staining. Cells were analyzed on a Becton Dickinson FACSCalibur or LSRII cytometer. Live cells were gated based on light-scattering properties.

T cell proliferation assay: Splenocytes from TMEV-infected mice were stimulated with specific epitope peptides, or CD4+ T cells from the spleens of OT-II mice were stimulated with in vitro TMEV-infected peritoneal macrophages in the presence of 2 μM ovalbumin (OVA)-specific peptides or 100 μg OVA protein. Cultures were incubated in 96-well flat-bottomed microtiter plates for 72 h and then pulsed with 1.0 μCi [3H]TdR and harvested 18 h later. [3H]TdR uptake by the cells was determined in triplicate using a scintillation counter and expressed as net counts per minute (Δcpm) ± standard error of the mean (SEM) after subtraction of the background count of cultures with PBS instead of stimulators.
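The Δcpm calculation described above is simple arithmetic (triplicate counts, background subtraction, SEM), but making it explicit avoids ambiguity; the counts in this sketch are hypothetical, not data from the study:

import statistics

# Hypothetical triplicate scintillation counts (cpm); not data from the study.
stimulated = [15230, 14870, 15910]   # cultures with peptide stimulator
background = [1020, 980, 1100]       # control cultures with PBS instead

mean_background = statistics.mean(background)
net_cpm = [c - mean_background for c in stimulated]      # net counts per minute
mean_net = statistics.mean(net_cpm)
sem = statistics.stdev(net_cpm) / len(net_cpm) ** 0.5    # standard error of the mean

print(f"delta-cpm = {mean_net:.0f} +/- {sem:.0f} (SEM)")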
Histopathological analyses: At 70 days post-infection, mice were anesthetized and perfused via intracardiac puncture with 50 mL of PBS. Brains and spinal cords from IL-1R KO and B6 control mice were dissected and fixed in 4% formalin in PBS for 24 h. The anterior cortex (bregma: 3.0 to 2.0 mm), subventricular zone (bregma: 1.7 to 0.48 mm), hippocampus (bregma: −1.0 to −2.5 mm), and cerebellum (bregma: −5.6 to −7.0 mm) were investigated. In addition, the cervical, thoracic, and lumbar regions of the spinal cord were examined. The tissues were embedded in paraffin for sectioning and staining. Paraffin-processed brains and spinal cords were sectioned at 6 μm. Adjacent sets of three sections from each animal were deparaffinized, rehydrated, and evaluated by H & E staining for inflammatory infiltrates, Luxol Fast Blue (LFB) staining for axonal demyelination, and Bielschowsky silver staining for axon loss. Slides were examined using a Leica DMR light microscope, and images were captured using an AxioCam MRc camera and AxioVision imaging software. Inflammatory infiltrates were evaluated by the presence or absence of monocytes/lymphocytes based on the H & E staining and immunofluorescent staining of CD45+ cells. Histologic white matter demyelination was graded as: 1) normal myelination, 2) mild or minor demyelination (> 50% myelin staining preserved), or 3) moderate to severe demyelination (< 50% myelin staining preserved).
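The three-level grading above amounts to a threshold on the fraction of preserved LFB myelin staining. The toy function below encodes it; note that the text defines only the 50% boundary, so the cutoff separating "normal" from "mild" demyelination here is an arbitrary assumption standing in for the histologist's judgment:

def demyelination_grade(myelin_preserved: float) -> int:
    """Map the fraction of preserved LFB myelin staining (0.0-1.0) onto
    the three-level histologic grading described above."""
    if myelin_preserved >= 0.99:   # essentially intact myelin (assumed cutoff)
        return 1                   # normal myelination
    if myelin_preserved > 0.50:    # > 50% myelin staining preserved
        return 2                   # mild or minor demyelination
    return 3                       # moderate to severe demyelination

print([demyelination_grade(f) for f in (1.0, 0.8, 0.3)])  # [1, 2, 3]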
ELISA: Cytokine levels produced by splenocytes from TMEV-infected mice or by CD4+ T cells from the spleens of OT-II mice were determined after stimulation for 72 h with specific epitope peptides (2 μM each) or with in vitro TMEV-infected peritoneal macrophages in the presence of OVA-specific peptide (2 μM), respectively. IFN-γ (OptEIA kit; BD Pharmingen) and IL-17 (R&D Systems, Minneapolis, MN, USA) levels were assessed. Plates were read using a Spectra MAX 190 microplate reader (Molecular Devices, Sunnyvale, CA, USA) at a 450 nm wavelength.

Statistical analysis: Data are presented as mean ± SD of either two to three independent experiments or one representative of at least three separate experiments. The significance of differences between mean values was determined by Student's t-test. Clinical scores were analyzed by the Mann–Whitney U-test. P values < 0.05 were considered statistically significant.
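A minimal sketch of the two tests named above, using SciPy; the triplicate cytokine levels and per-mouse clinical scores are hypothetical, not data from the study:

from scipy import stats

# Hypothetical cytokine levels (pg/ml) from triplicate cultures: Student's t-test.
b6 = [812.0, 776.0, 845.0]
il1r_ko = [431.0, 402.0, 476.0]
t_stat, p_cytokine = stats.ttest_ind(b6, il1r_ko)

# Hypothetical weekly clinical scores (ordinal grades 0-4): Mann-Whitney U-test.
scores_b6 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
scores_ko = [1, 2, 2, 1, 3, 2, 2, 1, 2, 3]
u_stat, p_scores = stats.mannwhitneyu(scores_b6, scores_ko)

print(f"t-test P = {p_cytokine:.4g}; Mann-Whitney U-test P = {p_scores:.4g}")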
ELISA: Cytokine levels produced by splenocytes from TMEV-infected mice, or by CD4+ T cells from spleens of OT-II mice, were determined after 72 h of stimulation with specific epitope peptides (2 μM each) or with in vitro TMEV-infected peritoneal macrophages in the presence of OVA-specific peptide (2 μM), respectively. IFN-γ (OPTEIA kit; BD Pharmingen) and IL-17 (R&D Systems, Minneapolis, MN, USA) levels were assessed. Plates were read at 450 nm using a Spectra MAX 190 microplate reader (Molecular Devices, Sunnyvale, CA, USA).
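The paper does not state how OD450 readings were converted to concentrations; sandwich-ELISA kits are conventionally analyzed by interpolating unknowns on a four-parameter logistic (4PL) standard curve. A generic sketch of that convention, with invented standards and readings:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """4PL model: a = OD at zero analyte, d = OD at saturation,
    c = inflection concentration, b = slope factor."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical IFN-γ standards (pg/mL) and their mean OD450 readings.
std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0, 2000.0])
std_od = np.array([0.08, 0.15, 0.29, 0.55, 0.98, 1.60, 2.20])

(a, b, c, d), _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 400.0, 2.6])

def od_to_conc(od):
    """Invert the fitted curve to read a sample concentration off its OD450."""
    return c * ((a - od) / (od - d)) ** (1.0 / b)

print(f"OD450 = 0.70 -> {od_to_conc(0.70):.0f} pg/mL")
```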
Statistical analysis: Data are presented as the mean ± SD of two to three independent experiments, or of one representative of at least three separate experiments. The significance of differences between mean values was determined by Student's t-test; clinical scores were analyzed by the Mann–Whitney U-test. P values < 0.05 were considered statistically significant.
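Both tests are available in scipy.stats; a minimal sketch with invented measurements (continuous cytokine data for the t-test, ordinal clinical scores for the rank-based Mann–Whitney U-test):

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate cytokine measurements (pg/mL) for two groups.
b6 = np.array([812.0, 905.0, 868.0])
il1r_ko = np.array([401.0, 455.0, 430.0])
t_stat, p_means = stats.ttest_ind(b6, il1r_ko)  # Student's t-test on mean values

# Hypothetical weekly clinical scores (grades 0-4), compared by ranks.
scores_b6 = [0, 0, 0, 1, 0]
scores_ko = [2, 3, 2, 1, 3]
u_stat, p_scores = stats.mannwhitneyu(scores_b6, scores_ko, alternative="two-sided")

print(f"t-test P = {p_means:.4f}; Mann-Whitney P = {p_scores:.4f}")
# As in the text, P < 0.05 is treated as statistically significant.
```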
Animals: Female C57BL/6 mice were purchased from Charles River Laboratories (Charles River, MA, USA) through the National Cancer Institute (Frederick, MD, USA). Female B6.129S7-Il1r1tm1Imx/J (IL-1R knockout (KO)) mice were purchased from Jackson Laboratories (Bar Harbor, ME, USA). Mice were housed in the Animal Care Facility of Northwestern University, and all experimental procedures were approved by the Animal Care and Use Committee (ACUC) of Northwestern University in accordance with NIH animal care guidelines.

Synthetic peptides and antibodies: All peptides were purchased from GeneMed (GeneMed Synthesis Inc, CA, USA) and used as described previously [20]. All antibodies were purchased from BD Pharmingen (San Diego, CA, USA).

Virus: The BeAn strain of TMEV was generated, propagated, and titered in BHK-21 cells grown in Dulbecco's modified Eagle medium supplemented with 7.5% donor calf serum, as previously described [21]. Viral titers were determined by plaque assay on BHK-21 cell monolayers.

Viral infection of mice and assessment of clinical signs: For intracerebral (i.c.) infection, 30 μl of virus solution containing 1 × 10^6 pfu was injected into the right cerebral hemisphere of 6- to 8-week-old mice (n = 10 per group) anesthetized with isoflurane. Clinical signs of disease were assessed weekly on the following grading scale: grade 0 = no clinical signs; grade 1 = mild waddling gait; grade 2 = severe waddling gait; grade 3 = moderate hind limb paralysis; grade 4 = severe hind limb paralysis and loss of righting reflex.
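For bookkeeping, the 0-4 scale above is a fixed lookup; an illustrative encoding (the dict and helper are hypothetical conveniences, with the criteria copied from the text):

```python
# Clinical grading scale for weekly assessment, as quoted in the text.
CLINICAL_SCALE = {
    0: "no clinical signs",
    1: "mild waddling gait",
    2: "severe waddling gait",
    3: "moderate hind limb paralysis",
    4: "severe hind limb paralysis and loss of righting reflex",
}

def describe_grade(grade: int) -> str:
    """Return the scoring criterion for a recorded clinical grade."""
    return CLINICAL_SCALE[grade]

print(describe_grade(2))  # -> "severe waddling gait"
```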
Reverse-transcriptase PCR and real-time PCR: Total cellular RNA from the brains and spinal cords of infected mice was isolated using Trizol® Reagent (Invitrogen, CA, USA). First-strand cDNA was synthesized from 1 μg of total RNA using SuperScript® III First-Strand Synthesis Supermix or M-MLV (Invitrogen). The cDNAs were amplified with specific primer sets using SYBR Green Supermix (Bio-Rad) on an iCycler (Bio-Rad). Primers for the GAPDH control and the cytokine genes were purchased from Integrated DNA Technologies. Primer pairs (forward and reverse) were: GAPDH, AACTTTGGCATTGTGGAAGG and ACACATTGGGGGTAGGAACA; VP-1, TGACTAAGCAGGACTATGCCTTCC and CAACGAGCCACATATGCGGATTAC; IFN-α, ACCTCCTCTGACCCAGGAAG and GGCTCTCCAGACTTCTGCTC; IFN-β, CCCTATGGAGATGACGGAGA and CTGTCTGCTGGTGGAGTTCA; CXCL10, AAGTGCTGCCGTCATTTTCT and GTGGCAATGATCTCAACACG; IL-10, GCCAAGCCTTATCGGAAATGATCC and AGACACCTTGGTCTTGGAGCTT; IFN-γ, ACTGGCAAAAGGATGGTGAC and TGAGCTCATTGAATGCTTGG; IL-17A, CTCCAGAAGGCCCTCAGACTAC and AGCTTTCCCTCCGCATTGACACAG; IL-6, AGTTGCCTTCTTGGGACTGA and TCCACGATTTCCCAGAGAAC; TNF-α, GGTCACTGTCCCAGCATCTT and CTGTGAAGGGAATGGGTGTT.
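Throughout the Results, expression is reported as fold induction after normalization to GAPDH; the formula is not named in the paper, but the conventional choice for SYBR Green data is the 2^-ΔΔCt method, sketched here with invented Ct values:

```python
def fold_induction(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Relative expression by the 2^-ΔΔCt method: normalize the target Ct to
    GAPDH, then to a reference (e.g., uninfected control) sample."""
    d_ct = ct_target - ct_gapdh              # ΔCt of the sample
    d_ct_ref = ct_target_ref - ct_gapdh_ref  # ΔCt of the reference
    return 2.0 ** -(d_ct - d_ct_ref)         # 2^-ΔΔCt

# Hypothetical Ct values for an IFN-β measurement at 3 dpi.
print(fold_induction(ct_target=24.1, ct_gapdh=17.8,
                     ct_target_ref=27.6, ct_gapdh_ref=17.9))  # ~ 10.6-fold
```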
Isolation of CNS-infiltrating mononuclear cells (MNCs): Mice were perfused with sterile Hank's balanced salt solution (HBSS), and the excised brains and spinal cords of three mice per group were homogenized. CNS-infiltrating MNCs were then enriched in the bottom one-third fraction of a continuous Percoll (Pharmacia, Piscataway, NJ, USA) gradient after centrifugation for 30 minutes at 27,000 g, as described previously [22].

Flow cytometry: CNS-infiltrating lymphocytes were isolated, and Fc receptors were blocked with 100 μL of 2.4G2 hybridoma (ATCC) supernatant for 30 minutes at 4°C. Cells were stained with anti-CD8 (clone 53-6.7), anti-CD4 (clone GK1.5), anti-CD11b (clone M1/70), anti-NK1.1 (clone PK136), anti-GR-1 (clone RB6-8C5), and anti-CD45 (clone 30-F11) antibodies, all purchased from BD Pharmingen (San Diego, CA). Cells were analyzed on a Becton Dickinson LSRII flow cytometer.
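In the Results, infiltrating macrophages are distinguished from resident microglia as CD11b+CD45high versus CD11b+CD45int populations. A sketch of how such rectangular gates could be applied to already-compensated, transformed intensities (the synthetic data, transform, and gate boundaries are all invented; real gates are set against the recorded controls):

```python
import numpy as np

# Synthetic stand-ins for compensated, arcsinh-transformed CD11b/CD45 intensities.
rng = np.random.default_rng(0)
cd11b = rng.normal(3.0, 1.0, 10_000)
cd45 = rng.normal(3.5, 1.2, 10_000)

CD11B_POS = 2.5        # illustrative CD11b-positive boundary
CD45_INT = (2.0, 4.0)  # CD45-intermediate window -> microglia
CD45_HI = 4.0          # CD45-high boundary -> infiltrating macrophages

cd11b_pos = cd11b > CD11B_POS
microglia = cd11b_pos & (cd45 > CD45_INT[0]) & (cd45 <= CD45_INT[1])
macrophages = cd11b_pos & (cd45 > CD45_HI)

print(f"CD11b+CD45int (microglia): {microglia.mean():.1%}")
print(f"CD11b+CD45high (macrophages): {macrophages.mean():.1%}")
```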
Intracellular staining of cytokine production: Freshly isolated CNS-infiltrating MNCs from three mice per group were cultured in 96-well round-bottom plates in the presence of the relevant or a control peptide, as previously described [23]. Allophycocyanin-conjugated anti-CD8 (clone Ly2) or anti-CD4 (clone L3T4) antibodies and a PE-labeled rat monoclonal anti-IFN-γ antibody (XMG1.2) were used for intracellular cytokine staining. Cells were analyzed on a Becton Dickinson FACSCalibur or LSRII cytometer, and live cells were gated based on their light-scattering properties.
Results: Administration of IL-1β promotes a Th17 response to TMEV, exacerbating the pathogenicity of demyelinating disease: We have previously demonstrated that administration of LPS or IL-1β causes resistant C57BL/6 mice to develop demyelinating disease [18]. It has recently been shown that LPS treatment promotes this pathogenesis by elevating the induction of the pathogenic Th17 response [17]; however, how IL-1β promotes the pathogenesis has been unknown. To address the mechanism, we compared the levels of Th1 and Th17 cells in the CNS of B6 mice treated with either LPS or IL-1β, along with PBS-treated control mice, at 8 days after TMEV infection (Figure 1). The levels of IL-17A-producing Th17 cells in mice treated with either LPS or IL-1β were significantly elevated compared to PBS-treated controls (Figure 1A and B), whereas the levels of IFN-γ-producing Th1 cells did not differ. Notably, the Th17 level in IL-1β-treated mice exceeded that in LPS-treated mice. The levels of IL-17-producing CD8+ T cells were minimal (not shown), and the levels of IFN-γ-producing CD8+ T cells were similar among the groups (Figure 1C). These results strongly suggest that IL-1β promotes the pathogenesis of TMEV-induced demyelinating disease by enhancing the induction of pathogenic Th17 cells rather than by altering the Th1 response.

Effects of IL-1β administration on Th17 and Th1 responses in TMEV-infected B6 mice. (A) Levels of IFN-γ- and IL-17-producing CD4+ T cells in B6 mice treated intraperitoneally with PBS, lipopolysaccharide (LPS), or IL-1β during the early stage (−1 and 3 days post-infection (dpi)) of Theiler's murine encephalomyelitis virus (TMEV) infection (three mice per group), analyzed by flow cytometry of pooled central nervous system (CNS) cells at 8 dpi after stimulation with either PBS or anti-CD3/CD28 antibodies. (B) Overall numbers of IL-17-producing cells in the CNS of the treated B6 mice (three mice per group). (C) Levels of IFN-γ-producing CD8+ T cells in the CNS of the same groups, analyzed similarly. ***P < 0.001.
IL-1R KO mice are susceptible to TMEV-induced demyelinating disease and display high cellular infiltration into the CNS: Although administration of IL-1β promotes the pathogenesis of TMEV-induced demyelinating disease, the IL-1β produced via NLRP3 inflammasome activation upon viral infection is considered a protective mechanism against microbial infections, promoting the apoptosis of infected cells [24]. To further investigate the role of IL-1β-mediated signaling in the development of TMEV-induced demyelinating disease, we compared disease development in IL-1R KO mice on a B6 background and in control B6 mice (Figure 2A). Every IL-1R KO mouse developed demyelinating disease, whereas none of the control B6 mice showed clinical signs at 35 days post-infection (dpi). Thus, B6 mice deficient in IL-1 signaling became susceptible to the TMEV-induced disease. This was somewhat unexpected, because our previous study indicated that administration of IL-1β renders B6 mice susceptible to the disease, suggesting a pathogenic role for IL-1β in disease development.

The course of TMEV-induced demyelinating disease development, viral persistence levels, and CNS-infiltrating mononuclear cells in TMEV-infected B6 and IL-1R KO mice. (A) Frequency and severity of demyelinating disease in B6 (n = 10) and IL-1R knockout (KO) (n = 10) mice were monitored for 70 days after Theiler's murine encephalomyelitis virus (TMEV) infection. (B) Viral persistence levels in the pooled brains (BR) and spinal cords (SC) of infected mice (three mice per group) at 8, 21, and 70 days post-infection (dpi) were determined by quantitative PCR. Data are expressed as fold induction after normalization to glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. Values are the means ± SD of triplicate experiments. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001. (C) Levels of T cells (CD4+ and CD8+), macrophages (CD11b+CD45high), microglia (CD11b+CD45int), NK cells (NK1.1+CD45+), and granulocytes (Ly6G/6C+CD45+) in CNS-infiltrating mononuclear cells from TMEV-infected C57BL/6 and IL-1R KO mice were assessed by flow cytometry at 3 and 8 dpi. Numbers in FACS plots represent percentages in the CNS. Data are representative of three experiments using three mice per group.

To correlate the disease susceptibility of IL-1R KO mice with viral persistence in the CNS, the relative viral message levels in the CNS of wild-type (WT) B6 and IL-1R KO mice were compared at days 8, 21, and 70 post-infection (Figure 2B). The viral load in the spinal cord, but not in the brain, was consistently higher in IL-1R KO mice than in control B6 mice, strongly suggesting that IL-1 signaling plays an important role in controlling viral persistence in the spinal cord during TMEV infection. However, viral load alone was previously shown to be insufficient for the pathogenesis of TMEV-induced demyelinating disease [25]. We therefore assessed cellular infiltration into the CNS of these mice during the early stages (3 and 8 dpi) of infection (Figure 2C). Infiltration of granulocytes, NK cells, macrophages, and CD8+ T cells, but not CD4+ T cells, was elevated, particularly at the early stage of infection. Collectively, these results suggest that, in the absence of IL-1 signaling, high viral loads and cellular infiltration into the CNS of otherwise resistant B6 mice lead to elevated development of TMEV-induced demyelinating disease.
IL-1R KO mice show widely spread mild demyelinating lesions accompanied by patchy axon damage: At 70 days post-infection, the histopathology of TMEV-infected IL-1R KO and B6 mice was compared to correlate disease development with CNS histopathology (Figure 3). Serial histopathological examinations of the spinal cords from both KO and WT mice were conducted after H & E, LFB, and Bielschowsky silver staining. H & E staining was used to evaluate active inflammation and lymphocyte infiltration; LFB, which specifically stains the axonal myelin sheath, was used to evaluate demyelination; and Bielschowsky silver staining, which stains axons dark brown, was used to evaluate axonal integrity. Lymphocyte infiltration, minor demyelination, and axon loss were detected in the CNS, including the brain and spinal cord, of IL-1R KO mice but not of WT B6 mice. Compared to control B6 mice (Figure 3A a-b), IL-1R KO mice (Figure 3A c-d) showed more lymphocyte infiltration in the white matter of the lumbar spinal cord on H & E staining. LFB staining of adjacent sections showed irregular vacuoles and demyelination in the white matter of the spinal cord of IL-1R KO mice (Figure 3A g-h) and in brain regions including the cerebellum and medulla (not shown). In contrast, normal-appearing myelin and little histopathological change were observed in control B6 mice (Figure 3A e-f). Bielschowsky silver staining of adjacent sections likewise showed irregular vacuolation and mild axon loss in the demyelinated regions of the spinal cord of IL-1R KO mice (Figure 3A k-l) but not in sections from WT control mice (Figure 3A i-j). To further compare cellular infiltration in the CNS of these mice, we examined the levels of CD45+ cells, which largely represent infiltrating cells (Figure 3B). The level of CD45+ cells (Figure 3B d), many of which overlapped with the H & E staining (Figure 3B c), was higher in the CNS of IL-1R KO mice than in control B6 mice (Figure 3B a-b).

Histopathology of the spinal cord in IL-1R KO and wild-type mice. (A) H & E staining of the spinal cord showed infiltration of inflammatory cells in knockout (KO) mice (c, d) and little infiltration in wild-type (WT) mice (a, b). Luxol Fast Blue (LFB) staining of adjacent sections showed irregular vacuolation and minor demyelination in the white matter of KO mice (g, h) but no loss of myelin in WT mice (e, f). Bielschowsky silver staining of the same area shows irregular vacuolation and minor axonal loss in KO mice (k, l) but not in WT mice (i, j). Magnification, ×10 and ×40. Black arrows indicate regions of lymphocyte infiltrates, demyelination, or axon loss; thin black squares indicate the areas of the lumbar spinal cord shown at high magnification (b, f, and j for B6 mice; d, h, and l for IL-1R KO mice). (B) H & E staining of spinal cords of control (a) and IL-1R KO mice (c). Adjacent sections (b and d, respectively) were stained with anti-CD45 antibody (red) for infiltrating cells and counterstained with 4',6-diamidino-2-phenylindole (DAPI) (blue) for nuclei.
Cytokine gene expression is transiently higher in the CNS of IL-1R KO mice during early viral infection: To understand the susceptibility of IL-1R KO mice to TMEV-induced demyelinating disease, we analyzed cytokine message levels in the CNS of virus-infected control and IL-1R KO mice during the early stages (3, 5, and 8 dpi) of infection by real-time PCR (Figure 4). The levels of IFN-α and IFN-β gene expression in IL-1R KO mice were significantly higher than those in B6 mice at 3 dpi, although the levels became similar at 5 and 8 dpi. Expression of CXCL10, which is associated with T cell infiltration, and of IL-10, which is associated with viral persistence, was higher at 5 dpi, and this trend was maintained at 8 dpi. In contrast, IL-6 expression in IL-1R KO mice was transiently lower at 5 dpi, while no differences in TNF-α expression were noted. Production of the pathogenic T cell cytokine IL-17 was largely unchanged, whereas viral RNA and IFN-γ production were transiently higher at 3 and 8 dpi. These results suggest that the lack of IL-1 signaling differentially affects viral replication and the expression of various innate and immune cytokines depending on the stage of TMEV infection.

Expression levels of cytokine genes in the CNS of TMEV-infected B6 and IL-1R knockout (KO) mice at 3, 5, and 8 days post-infection (dpi). The relative expression levels of the indicated mRNAs in the central nervous system (CNS) of Theiler's murine encephalomyelitis virus (TMEV)-infected C57BL/6 and IL-1R KO mice at 3, 5, and 8 dpi were assessed by real-time PCR. Data are expressed as fold induction after normalization to glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. Values are the means ± SD of triplicates. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001.
Anti-viral CD4+ T cell responses in the CNS of virus-infected IL-1R KO mice are lower during the early stage of infection: To compare CD4+ T cell responses specific to viral determinants in the CNS of IL-1R KO and WT mice, infiltration levels of CD4+ T cells specific to the predominant CD4+ T cell epitopes were assessed (Figure 5A and B). The levels of IFN-γ-producing CD4+ T cells in response to pan-T cell stimulation (either PMA plus ionomycin or anti-CD3 and anti-CD28 antibodies) were similar between IL-1R KO and control WT B6 mice. However, CD4+ T cell responses to viral epitopes were proportionally lower in the CNS of virus-infected IL-1R KO mice than in B6 mice, although the overall levels in the CNS were similar. This discrepancy may reflect the high level of CXCL10 expression (Figure 4), which promotes T cell infiltration, in the CNS of IL-1R KO mice. IL-17-producing CD4+ T cells were undetectable in both TMEV-infected IL-1R KO and B6 mice. To determine whether this pattern of CD4+ T cell responses is unique to the CNS of virus-infected mice, we also assessed T cell responses in the periphery of TMEV-infected IL-1R KO and control B6 mice at 8 and 21 dpi (Figure 5C). Again, the levels of T cell proliferation and of key T cell cytokine production (IFN-γ and IL-17) against viral epitopes (for both CD4+ and CD8+ T cells) were not drastically different between splenic T cells from IL-1R KO and control B6 mice. Despite similarly low levels of IL-17 production in response to viral epitopes, the IL-17 level was significantly lower in IL-1R KO mice than in WT B6 mice after robust stimulation with anti-CD3/CD28 antibodies. These results are consistent with the role of IL-1 signaling in promoting IL-17 production [26,27].

Virus-specific CD4+ T cell responses in the CNS and the periphery of TMEV-infected B6 and IL-1R knockout (KO) mice. (A) Levels of Th1 and Th17 cells in the central nervous system (CNS) of virus-infected B6 and IL-1R KO mice were assessed by flow cytometry at 8 days post-infection (dpi) after stimulation with PBS, PMA/ionomycin, anti-CD3/CD28 antibodies, or viral epitope peptides (Pep: mixture of 2 μM VP2203-220 and 2 μM VP425-38). The flow cytometric plots show gated CD4+ T cells. (B) Numbers of CNS-infiltrating IFN-γ-producing CD4+ T cells reactive to either viral epitope peptides (CD4: equal mixture of VP2203-220 and VP425-38) or anti-CD3/CD28 antibodies in virus-infected B6 and IL-1R KO mice (three mice each) at 8 and 21 dpi. A representative result from two to three similar experiments is shown. (C) Levels of proliferation and cytokine production by splenic T cells in response to viral epitopes recognized by CD4+ T cells (CD4 Mix: equal mixture of VP2203-220 and VP425-38), CD8+ T cells (VP2121-130), or both CD4+ and CD8+ T cells (anti-CD3/CD28 antibodies) were assessed at 8 and 21 dpi. Values are the means ± SD of triplicate samples from a representative of three experiments. *P < 0.05, **P < 0.01, ***P < 0.001.
Levels of TMEV-specific CD8+ T cell responses in the CNS are comparable between IL-1R KO and WT B6 mice: To determine whether the susceptibility of IL-1R KO mice to TMEV-induced demyelinating disease is associated with a compromised anti-viral CD8+ T cell response, we also analyzed T cell responses in the CNS of TMEV-infected IL-1R KO and control B6 mice (Figure 6). Staining with the VP2121-H-2Db tetramer for the predominant epitope (VP2121-130) indicated that the proportions of virus-specific CD8+ T cells in the CNS of virus-infected WT B6 and IL-1R KO mice were similar (Figure 6A). To determine whether the functions of these virus-reactive CD8+ T cells differed, we assessed their ability to produce IFN-γ in response to specific and non-specific stimulation (Figure 6B). IFN-γ production was similar in both proportion (Figure 6B) and number (Figure 6C) in the CNS of TMEV-infected B6 and IL-1R KO mice. These results strongly suggest that, unlike the CD4+ T cell responses, the CD8+ T cell responses to viral determinants do not differ significantly in the CNS of virus-infected WT B6 and IL-1R KO mice.

Levels of virus-specific CD8+ T cell responses in the CNS of virus-infected B6 and IL-1R KO mice. (A) Levels of H-2Db-VP2121-130-tetramer-reactive CD8+ T cells in the central nervous system (CNS) of B6 and IL-1R knockout (KO) mice at 8 days post-infection (dpi). (B) Proportions of CNS-infiltrating CD8+ T cells reactive to viral epitopes, anti-CD3/CD28 antibodies, and PMA/ionomycin were assessed by flow cytometry following intracellular cytokine staining at 8 dpi. (C) Overall numbers of virus-specific and anti-CD3/CD28-reactive CD8+ T cells in the CNS of virus-infected B6 and IL-1R KO mice at 8 and 21 dpi. A representative result from two to three similar experiments is shown.
Cytokine production by Th cells stimulated with macrophages from IL-1R KO mice is reduced: To compare the function of CD4+ T cells stimulated by macrophages from B6 and IL-1R KO mice, T cells from naïve OT-II mice, which carry T cell receptor (TCR) transgenes specific for OVA323-339, were stimulated with peritoneal macrophages infected in vitro for 24 h with TMEV (MOI 10) in the presence of OVA323-339 peptide (Figure 7A) or OVA protein (not shown). Viral infection did not significantly alter the levels of T cell stimulation by these macrophages. However, proliferation of CD4+ T cells in response to the cognate peptide or protein was higher when stimulated with IL-1R KO macrophages than with B6 macrophages. In contrast, IFN-γ and IL-17 production by T cells stimulated with IL-1R KO macrophages was significantly lower than with control B6 macrophages. These results indicate that antigen-presenting cells display altered T cell-stimulating function in the absence of IL-1R-mediated signaling.

Reduced cytokine production by CD4+ T cells stimulated with IL-1R KO macrophages. (A) Isolated CD4+ T cells (1 × 10^5) from the spleens of OT-II mice were cultured for 3 days with in vitro TMEV-infected (MOI 10) peritoneal macrophages (1 × 10^4) from either C57BL/6 or IL-1R knockout (KO) mice in the presence of 2 μM OVA epitope peptide. T cell proliferative responses were analyzed by [3H]TdR uptake, and cytokine production (IFN-γ and IL-17) in the cultures was analyzed by specific ELISAs. (B) Isolated CD4+ T cells (1 × 10^5) from the spleens of B6 mice infected with TMEV, taken at 8 days post-infection (dpi), were cultured for 3 days with in vitro TMEV-infected (MOI 10 for 24 h) peritoneal macrophages (1 × 10^4) from either naïve C57BL/6 or IL-1R KO mice in the presence of viral epitope peptides. Message levels of the indicated genes were then analyzed by real-time PCR, with glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as an internal control. Values are the means ± SD of triplicates. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001. Data shown are representative of three independent experiments. (C) Peritoneal CD11b+ cells from naïve B6 and IL-1R KO mice infected with TMEV in vitro for 24 h were analyzed for expression of the T cell inhibitory molecules PDL-1 and Tim-3. A representative flow cytometry plot of three similar results is shown.

To understand the potential mechanisms underlying the altered T cell stimulation by IL-1R KO macrophages, we examined the contribution of IL-1 signaling to the induction of cytokines in TMEV-infected macrophages (Figure 7B). After TMEV infection in vitro for 24 h, levels of viral RNA and of IFN-α and IFN-β messages were similar between macrophages from WT B6 and IL-1R KO mice. However, expression of the IL-6 message was severely compromised in IL-1R KO macrophages compared to B6 macrophages, whereas the TNF-α message was more highly upregulated in IL-1R KO macrophages after TMEV infection. Interestingly, TGF-β expression in uninfected IL-1R KO macrophages was higher than in B6 macrophages but fell to a similar level after viral infection. These results suggest that the cytokine production profile of macrophages, and perhaps of other antigen-presenting cells, is altered in the absence of IL-1 signaling, which may affect the initial development and/or function of T cells following viral infection. To further explore the mechanisms associated with altered CD4+ T cell development in the absence of IL-1 signaling, the levels of co-stimulatory and key inhibitory molecules were examined in a representative macrophage antigen-presenting cell (APC) population (Figure 7C). The levels of CD80, CD86, and CD40 in B6 and IL-1R KO mice were not significantly different (not shown). However, PDL-1 and Tim-3 were significantly elevated in the absence of IL-1 signaling. These molecules are known to negatively affect T cell function [28] and/or to promote inflammatory responses through their expression by innate immune cells such as microglia [29]. Notably, PDL-1 expression was upregulated upon viral infection in B6 macrophages, whereas in IL-1R-deficient macrophages it was constitutively upregulated even in the absence of infection. In contrast, Tim-3 was constitutively upregulated in IL-1R-deficient macrophages to the level (approximately 5%) seen after viral infection in B6 macrophages and was further upregulated in IL-1R KO macrophages after TMEV infection. These results strongly suggest that these increases in inhibitory molecules may contribute to the altered T cell development and/or function.
Interestingly however, the expression of TGF-β in uninfected IL-1R KO macrophages was higher than the expression in B6 macrophages but reduced to a similar level after viral infection. These results suggest that the cytokine production profile of macrophages and perhaps other antigen-presenting cells is altered in the absence of IL-1 signaling, which may affect the initial development and/or function of T cells following viral infection. To further understand the underlying mechanisms associated with altered CD4+ T cell development in the absence of IL-1 signaling, the levels of co-stimulatory molecules and key inhibitory molecules were studied in a representative macrophage antigen-presenting cell (APC) population (Figure 7C). The levels of CD80, CD86 and CD40 in B6 and IL-1R1 KO mice were not significantly different (not shown). However, PDL-1 and Tim-3 were significantly elevated in the absence of IL-1 signaling. These molecules are known to negatively affect the function of T cells [28] and/or promote inflammatory responses through its expression by innate immune cells, such as microglia [29]. It was interesting to note that the expression of PDL-1 was upregulated upon viral infection in B6 macrophages, whereas the expression in IL-1R-deficient macrophages was constitutively upregulated even in the absence of viral infection. In contrast, the expression of Tim-3 was constitutively upregulated in IL-1R-deficient macrophages to the level (approximately 5%) of upregulation seen after viral infection in B6 macrophages, but it was further upregulated in IL-1R KO macrophages after TMEV infection. These results strongly suggest that these increases in the inhibitory molecules may participate in the altered T cell development and/or function. Administration of IL-1β promotes a Th17 response to TMEV to exacerbate the pathogenicity of demyelinating disease: We have previously demonstrated that administration of LPS or IL-1β causes resistant C57BL/6 mice to develop demyelinating disease [18]. It has recently been shown that LPS treatment promotes this pathogenesis by elevating the induction of the pathogenic Th17 response [17]. However, it is unknown how IL-1β promotes this pathogenesis. To understand the mechanism, we compared the levels of Th1 and Th17 in the CNS of B6 mice treated with either LPS or IL-1β, along with control mice treated with PBS, following infection with TMEV at 8 days post-infection (Figure 1). The results clearly indicated that the levels of IL-17A-producing Th17 cells in mice treated with either LPS or IL-1β were significantly elevated compared to PBS-treated control mice (Figure 1A and B). In contrast, the levels of IFN-γ-producing Th1 cells were not different. It is interesting to note that IL-1β-treated mice exceeded the Th17 level of LPS-treated mice. However, the levels of IL-17-producing CD8+ T cells were minimal (not shown), and the levels of IFN-γ-producing CD8+ T cells were also similar among the groups (Figure 1C). These results strongly suggest that IL-1β can promote the pathogenesis of TMEV-induced demyelinating disease by enhancing the induction of pathogenic Th17 cells rather than altering the Th1 response. Effects of IL-1β administration on Th17 and Th1 responses in TMEV-infected B6 mice. 
(A) Levels of IFN-γ- and IL-17-producing CD4+ T cells in B6 mice intraperitoneally treated with PBS, lipopolysaccharide (LPS), or IL-1β during the early stage (−1 and 3 days post-infection (dpi)) of Theiler’s murine encephalitis virus (TMEV) infection (three mice per group) were analyzed using flow cytometry of the pooled central nervous system (CNS) cells at 8 dpi after stimulation with either PBS or anti-CD3/CD28 antibodies. (B) The overall numbers of IL-17-producing cells in the CNS of the above treated B6 mice (three mice per group) were shown. (C) Levels of IFN-γ-producing CD8+ T cells in the CNS of these groups of mice were also similarly analyzed. ***P < 0.001. IL-1R KO mice are susceptible to TMEV-induced demyelinating disease and display high cellular infiltration to the CNS: Although administration of IL-1β promotes the pathogenesis of TMEV-induced demyelinating disease, the IL-1β produced via NLRP3 proteosome activation upon viral infection is considered to be a protective mechanism against microbial infections by promoting the apoptosis of infected cells [24]. To further investigate the potential role of IL-1β-mediated signaling in the development of TMEV-induced demyelinating disease, we compared the development of TMEV-induced demyelinating disease in IL-1R KO mice with a B6 background and control B6 mice (Figure 2A). Every IL-1R KO mouse developed demyelinating disease while none of the control B6 mice showed clinical signs at 35 days post-infection (dpi). The results clearly indicated that B6 mice with the deficiency in IL-1 signaling became susceptible to the TMEV-induced disease. This is somewhat unexpected because our previous study indicated that administration of IL-1β to B6 mice renders the mice susceptible to the disease, suggesting a pathogenic role for IL-1β in disease development. The course of TMEV-induced demyelinating disease development, viral persistence levels and CNS-infiltrating mononuclear cells in TMEV-infected in B6 and IL-1R KO mice. (A) Frequency and severity of demyelinating disease in B6 (n = 10) and IL-1R knockout (KO) (n = 10) mice were monitored for 70 days after Theiler’s murine encephalitis virus (TMEV) infection. (B) Viral persistence levels in the pooled brains (BR) and spinal cords (SC) of infected mice (three mice per group) at 8, 21 and 70 days post-infection (dpi) were determined by quantitative PCR. Data are expressed by fold induction after normalization to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. The values are expressed as the means ± SD of triplicate experiments. Statistically significant differences are indicated with asterisks;*P < 0.05, **P < 0.01, ***P < 0.001. (C) Levels of T cells (CD4+ and CD8+), macrophages (CD11b+CD45high), microglia (CD11b+CD45int), NK cells (NK1.1+CD45+) and granulocytes (Ly6G/6C+CD45+) were assessed using flow cytometry in central nervous system (CNS)-infiltrating mononuclear cells from TMEV-infected C57BL/6 and IL-1R KO mice at 3 and 8 dpi. Numbers in FACS plots represent percentages in the CNS. Data are representative of three experiments using three mice per group. To correlate the disease susceptibility of IL-1R KO mice with viral persistence in the CNS, the relative viral message levels in the CNS of wild type (WT) B6 and IL-1R KO mice were compared at days 8, 21 and 70 post-infection with TMEV (Figure 2B). The results showed that the level of viral load in the spinal cord, but not in the brain, is consistently higher in IL-1R KO mice compared to control B6 mice. 
These results strongly suggest that IL-1 signaling plays an important role in controlling viral persistence in the spinal cord during the course of TMEV infection. However, it was previously shown that the viral load level alone is not sufficient for the pathogenesis of TMEV-induced demyelinating disease [25]. Thus, we further assessed the levels of cellular infiltration to the CNS of these mice during the early stages (3 and 8 dpi) of viral infection (Figure 2C). The results indicated that infiltration into the CNS of granulocytes, NK cells, macrophages and CD8+ T cells but not CD4+ T cells was elevated, particularly at the early stage of viral infection. These results collectively suggest that high viral loads and cellular infiltration into the CNS in resistant B6 mice in the absence of IL-1 signaling leads to the elevated development of TMEV-induced demyelinating disease. IL-1R KO mice show widely spread mild demyelinating lesions accompanied by patchy axon damage: At 70 days post-infection, the histopathology of TMEV-infected IL-1R KO and B6 mice was compared to correlate the disease development with the histopathology of the CNS (Figure 3). Series of histopathological examinations of the spinal cords from both KO and WT mice were conducted after H & E, LFB, and Bielschowsky silver staining. The H & E staining was used for evaluating the evidence of active inflammation and lymphocyte infiltration. LFB specifically stains axonal myelin sheath, and this was used to evaluate the axonal demyelination. Bielschowsky silver staining stains axons dark brown and was used to evaluate axonal integrity. Lymphocyte infiltration, minor demyelination and axon loss were detected in the CNS, including the brain and spinal cord, in IL-1R KO mice but not in WT B6 mice. Compared to control B6 mice (Figure 3A a-b), IL-1R KO mice (Figure 3A c-d) showed more lymphocyte infiltration in the white matter of the lumbar spinal cord when examined by H & E staining. LFB staining of the adjacent sections showed irregular vacuoles and demyelination in the white matter of the spinal cord in IL-1R KO mice (Figure 3A g-h) and in brain regions including the cerebellum and medulla (not shown). In contrast, myelin that appeared normal and little histopathological change were observed in the control B6 mice (Figure 3A e-f). Bielschowsky silver staining of the adjacent sections also showed irregular vacuolation and mild axon loss in the demyelinated regions of the spinal cord from IL-1R KO mice (Figure 3A k-l) but not in the sections from the WT control mice (Figure 3A i-j). To further compare the cellular infiltration levels in the CNS of these mice, we examined the levels of CD45+ cells in the CNS which largely represents infiltrating cells (Figure 3B). Our results clearly displayed that the level of CD45+ cells (Figure 3B d), many of which overlap with H & E staining Figure 3B c), was higher in the CNS of IL-1R KO mice compared to that of the control B6 mice (Figure 3B a-b). Histopathology of the spinal cord in IL-1R KO and wild-type mice. (A) H & E staining of the spinal cord showed infiltration of inflammatory cells in knockout (KO) mice (c, d) and little infiltration in wild-type (WT) mice (a, b). Luxol Fast Blue (LFB) staining of adjacent sections showed irregular vacuolation and minor demyelination in the white matter of KO mice (g, h) but no loss of myelin in WT mice (e, f). 
Bielschowsky silver staining of the same area shows the presence of irregular vacuolation and minor axonal loss in KO mice (k, l) but not in WT mice (i, j). Magnification, ×10 and ×40. Black arrows indicate regions of lymphocyte infiltrates, demyelination, or axon loss; thin black squares indicate the areas from the lumbar spinal cord region, which are shown in high magnification (b, f, and j for B6 mice and d, h, and l for IL-1R KO mice). (B) H & E staining of spinal cords of control (a) and IL-1R KO mice (c) are shown. The adjacent sections (b and d, respectively) were stained with anti-CD45 antibody (red) for infiltrating cells and counterstained with 4',6-diamidino-2-phenylindole (DAPI) (blue) for nuclei. Cytokine gene expression is transentily higher in the CNS of IL-1R KO mice during early viral infection: To understand the susceptibility to TMEV-induced demyelinating disease in IL-1R KO mice, we analyzed various cytokine message levels expressed in the CNS of virus-infected control and IL-1R KO mice during the early stages (3, 5, and 8 dpi) of viral infection using real-time PCR (Figure 4). The levels of IFN-α and IFN-β gene expression in IL-1R KO mice were significantly higher than those in B6 mice at 3 dpi, although the levels became similar at 5 and 8 dpi. The expression levels of CXCL-10 that were associated with T cell infiltration and IL-10 that was associated with viral persistence were higher at 5 dpi, and this trend was maintained at 8 dpi. However, the expression level of IL-6 in IL-1R KO mice was transiently lower at 5 dpi, while no differences in TNF-α expression were noted. Similarly, the production of a pathogenic T cell cytokine, IL-17 was largely unchanged. However, viral RNA and the production of IFN-γ were transiently higher at 3 and 8 dpi. These results suggest that the lack of IL-1 signaling differentially affects viral replication and the expression of various innate and immune cytokines depending on the stage of TMEV infection. Expression level of cytokine genes in the CNS of TMEV infected B6 and IL-1R knockout (KO) mice at 3, 5 and 8 days post-infection (dpi). The relative expression levels of the indicated mRNAs in the central nervous system (CNS) of Theiler’s murine encephalitis virus (TMEV)-infected C57BL/6 and IL-1R KO mice at 3, 5 and 8 dpi were assessed by real-time PCR. Data are expressed by fold induction after normalization to the glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels. The values are expressed as the means ± SD of triplicates. Statistically significant differences are indicated with asterisks; *P < 0.05, **P < 0.01, ***P < 0.001). Anti-viral CD4+ T cell responses in the CNS, of virus-infected IL-1R KO mice are lower during the early stage of infection: To compare CD4+ T cell responses specific to viral determinants in the CNS of IL-1R KO and WT mice, infiltration levels of CD4+ T cells specific to the predominant CD4+ T cell epitopes were assessed (Figure 5A and B). The levels of IFN-γ-producing CD4+ T cells in response to pan-T cell stimulation (either PMA plus ionomycin or anti-CD3 and anti-CD28 antibodies) were similar between IL-1R KO and control WT B6 mice. However, such CD4+ T cell responses to viral epitopes were proportionally lower in the CNS of virus-infected IL-1R KO mice compared to B6 mice, although the overall levels in the CNS were similar. This discrepancy may be due to the high levels of CXCL-10 expression (Figure 4), which promotes infiltration of T cells, in the CNS of IL-1R KO mice. 
The levels of IL-17-producing CD4+ T cells in either TMEV-infected IL-1R KO or B6 mice were undetectable. To further determine whether the pattern of CD4+ T cell responses is unique in the CNS of virus-infected mice, we also assessed the T cell responses in the periphery of TMEV-infected IL-1R KO and control B6 mice at 8 and 21 dpi (Figure 5C). Again, the levels of T cell proliferation and the production of key T cell cytokines (IFN-γ and IL-17) against viral epitopes (for both CD4+ and CD8+ T cells) were not drastically different between the splenic T cells from IL-1R KO and control B6 mice. Despite the similar low levels of IL-17 production in response to viral epitopes, the IL-17 level was significantly lower in IL-1R KO mice compared to WT B6 mice after robust stimulation with anti-CD3/CD28 antibodies. These results are consistent with the role of IL-1 signaling in promoting IL-17 production [26,27]. Virus-specific CD4+ T cell responses in the CNS and the periphery of TMEV-infected B6 and IL-1R knockout (KO) mice. (A) Levels of Th1 and Th17 cells in the central nervous system (CNS) of virus-infected B6 and IL-1R KO mice were assessed using flow cytometry at 8 days post-infection (dpi) after stimulation with PBS, PMA/ionomycin, anti-CD3/CD28 antibodies, or viral epitope peptides (Pep: mixture of 2 μM VP2203-220 and 2 μM VP425-38). The flow cytometric plots show gated CD4+ T cells. (B) The numbers of CNS-infiltrating IFN-γ-producing CD4+ T cells reactive to either viral epitope peptides (CD4: equal mixture of VP2203-220 and VP425-38) or anti-CD3/CD28 antibodies in virus-infected B6 and IL-1R KO mice (three mice each) at 8 and 21 dpi. A representative result from two to three similar experiments is shown here. (C) Levels of proliferation and cytokine production by splenic T cells in response to viral epitopes by CD4+ T cells (CD4 Mix: equal mixture of VP2203-220 and VP425-38), CD8+ T cells (VP2-121–130), or both CD4+ and CD8+ T cells (anti-CD3/CD28 antibodies) were assessed at 8 and 21 dpi. Values are expressed as the mean of triplicate samples (mean ± SD) from a representative of three experiments. *P < 0.05, **P < 0.01, ***P < 0.001. Levels of TMEV-specific CD8+ T cell responses in the CNS are comparable between IL-1R KO and WT B6 mice: To further determine whether the susceptibility of IL-1R KO mice to TMEV-induced demyelinating disease is associated with a compromised anti-viral CD8+ T cell response, we also analyzed the T cell responses in the CNS of TMEV-infected IL-1R KO and control B6 mice (Figure 6). Analysis of virus-specific CD8+ T cells reactive to the predominant epitope (VP2121-130), using the H-2Db-VP2121-130 tetramer, indicated that the proportions of virus-specific CD8+ T cells in the CNS of virus-infected WT B6 and IL-1R KO mice are similar (Figure 6A). To further determine whether the functions of the virus-reactive CD8+ T cells are different, we assessed the abilities of the cells to produce IFN-γ in response to specific and non-specific stimulations (Figure 6B). The results clearly indicated that their ability to produce IFN-γ is also similar in both proportion (Figure 6B) and number in the CNS of TMEV-infected B6 and IL-1R KO mice (Figure 6C). These results strongly suggest that there are no significant differences in the CD8+ T cell responses to the viral determinants, unlike those of CD4+ T cell responses, in the CNS of virus-infected WT B6 and IL-1R KO mice. Levels of virus-specific CD8+ T cell responses in the CNS of virus-infected B6 and IL-1R KO mice. 
(A) Levels of H-2Db-VP2-121–130-tetramer reactive CD8+ T cells in the central nervous system (CNS) of B6 and IL-1R knockout (KO) mice at 8 days post-infection (dpi). (B) Proportions of CNS-infiltrating CD8+ T cells reactive to viral epitopes, anti-CD3/CD28 antibodies and PMA/ionomycin were assessed using flow cytometry following intracellular cytokine staining at 8 dpi. (C) The overall numbers of virus-specific and anti-CD3/CD28 reactive CD8+ T cells in the CNS of virus-infected B6 and IL-1R KO mice are shown at 8 and 21 dpi. A representative result from two to three similar experiments is shown here. Cytokine production by Th cells stimulated with macrophages from IL-1R KO mice is reduced: To compare the function of CD4+ T cells stimulated by macrophages from B6 and IL-1R KO mice, T cells from naïve OT-II mice, which carry T cell receptor (TCR) transgenes specific for OVA323-339, were stimulated with peritoneal macrophages infected in vitro for 24 h with TMEV (10 MOI) in the presence of OVA323-339 peptide (Figure 7A) or OVA protein (not shown). Viral infection did not significantly alter the levels of T cell stimulation by these macrophages. However, proliferation of CD4+ T cells in response to the cognate peptide or protein was higher when the cells were stimulated with IL-1R KO macrophages than with B6 macrophages. In contrast, IFN-γ and IL-17 production by T cells stimulated with IL-1R KO macrophages was significantly lower than that by T cells stimulated with control B6 macrophages. These results indicate that antigen-presenting cells display altered T cell-stimulating function in the absence of IL-1R-mediated signaling. Reduced cytokine production by CD4+ T cells stimulated with IL-1R KO macrophages. (A) Isolated CD4+ T cells (1 × 10^5) from the spleen of OT-II mice were cultured with peritoneal macrophages (1 × 10^4) from either C57BL/6 or IL-1R knockout (KO) mice, infected in vitro with 10 MOI Theiler’s murine encephalomyelitis virus (TMEV), for 3 days in the presence of 2 μM OVA epitope peptides. T cell proliferative responses were analyzed using [3H]TdR uptake, and cytokine production (IFN-γ and IL-17) in the cultures was analyzed using specific ELISAs. (B) Isolated CD4+ T cells (1 × 10^5) from the spleen of B6 mice infected with TMEV at 8 days post-infection (dpi) were cultured with in vitro TMEV-infected (10 MOI for 24 h) peritoneal macrophages (1 × 10^4) from either naïve C57BL/6 or IL-1R KO mice for 3 days in the presence of viral epitope peptides. Message levels of the indicated genes were then analyzed by real-time PCR. The glyceraldehyde-3-phosphate dehydrogenase (GAPDH) level was used as an internal control. The values are expressed as the means ± SD of triplicates. Statistically significant differences are indicated with asterisks: *P < 0.05, **P < 0.01, ***P < 0.001. Data shown are representative of three independent experiments. (C) Peritoneal CD11b+ cells from naïve B6 and IL-1R KO mice infected with TMEV in vitro for 24 h were analyzed for the expression of PDL-1 and TIM-3 T cell inhibitory molecules. A representative flow cytometry plot of three similar results is shown here. To further understand the potential mechanisms underlying the altered T cell stimulation by IL-1R KO macrophages, we examined the potential contribution of IL-1 signaling to the induction of cytokines in TMEV-infected macrophages (Figure 7B). 
After TMEV infection in vitro for 24 h, levels of viral RNA as well as IFN-α and IFN-β messages were similar between macrophages from WT B6 and IL-1R KO mice. However, the expression of the IL-6 message was severely compromised in IL-1R KO macrophages compared to B6 macrophages. In contrast, the expression level of the TNF-α message was more strongly upregulated in IL-1R KO macrophages after TMEV infection. Interestingly, however, the expression of TGF-β in uninfected IL-1R KO macrophages was higher than the expression in B6 macrophages but was reduced to a similar level after viral infection. These results suggest that the cytokine production profile of macrophages and perhaps other antigen-presenting cells is altered in the absence of IL-1 signaling, which may affect the initial development and/or function of T cells following viral infection. To further understand the underlying mechanisms associated with altered CD4+ T cell development in the absence of IL-1 signaling, the levels of co-stimulatory molecules and key inhibitory molecules were studied in a representative macrophage antigen-presenting cell (APC) population (Figure 7C). The levels of CD80, CD86 and CD40 in B6 and IL-1R1 KO mice were not significantly different (not shown). However, PDL-1 and Tim-3 were significantly elevated in the absence of IL-1 signaling. These molecules are known to negatively affect the function of T cells [28] and/or promote inflammatory responses through their expression by innate immune cells, such as microglia [29]. It was interesting to note that the expression of PDL-1 was upregulated upon viral infection in B6 macrophages, whereas the expression in IL-1R-deficient macrophages was constitutively upregulated even in the absence of viral infection. In contrast, the expression of Tim-3 was constitutively upregulated in IL-1R-deficient macrophages to the level (approximately 5%) of upregulation seen after viral infection in B6 macrophages, but it was further upregulated in IL-1R KO macrophages after TMEV infection. These results strongly suggest that these increases in the inhibitory molecules may participate in the altered T cell development and/or function. Discussion: TMEV infection in susceptible strains of mice induces chronic demyelinating disease that is primarily mediated by CD4+ T cells [17,30,31]. However, epitope-specific CD4+ T cells can be protective or pathogenic depending on when activated T cells are available in conjunction with viral infection [23,32,33]. Interestingly, the level of IL-1β, induced following infection with TMEV, plays an important role in the pathogenesis of TMEV-induced demyelinating disease [18,34]. Previously, it has been shown that administration of IL-1 to mice exacerbates the development of experimental autoimmune encephalomyelitis (EAE), the pathogenic immune mechanisms of which are similar to those of TMEV-induced demyelinating disease [35-37]. In addition, IL-1 appears to directly activate astrocytes and microglia to exacerbate neurodegeneration in non-immune-mediated diseases [38]. Because IL-1β is induced via innate immunity mediated by various TLRs and because the downstream IL-1 signals mediated via IL-1R also play an important role in the host defense [1,4], we have investigated the role of IL-1β signals in the development of TMEV-induced demyelinating disease by assessing the effects of IL-1β administration and using IL-1R-deficient mice. 
We have previously demonstrated that administration of IL-1β into resistant B6 mice renders the resistant mice susceptible to TMEV-induced demyelinating disease [18]. The administration of IL-1β dramatically increased the level of IL-17 production in the CNS of the resistant mice, which do not produce a high level of Th17 cells following TMEV infection (Figure 1). This result is consistent with recent reports that IL-1β strongly promotes the development of IL-17-producing Th17 cells either directly or via the production of IL-6 [19,39]. The presence of high levels of IL-17A in mice infected with TMEV exerts a strong pathogenic role by inhibiting the apoptosis of virus-infected cells, blocking cytolytic CD8+ T cell function, and elevating cellular infiltration into the CNS [17]. Recently, it was also shown that the presence of FoxP3+ Treg cells, which preferentially expand due to stimulation by IL-1β [40], is detrimental in TMEV-induced demyelinating disease because these regulatory cells inhibit the protective anti-viral immune responses [41]. Therefore, administration of IL-1β, resulting in a higher level of IL-1β, appears to promote the pathogenesis of TMEV-induced demyelinating disease in resistant B6 mice by elevating pathogenic Th17 and Treg responses to TMEV antigens. In addition, it is known that IL-1 directly activates astrocytes and microglia in the CNS [42], which are associated with the pathogenesis of TMEV-induced demyelinating disease [13,43]. Furthermore, IL-1 mediates the loss of astroglial glutamate transport and drives motor neuron injury in the spinal cord during viral encephalomyelitis [44]. The expression of IL-1R1 is upregulated in glial cells following TMEV infection [45], and thus the elevated receptor expression is likely to exert the detrimental effects seen as a result of IL-1 signaling on neurodegeneration and/or pathogenic immune responses. In the absence of IL-1R1-mediated signals, which result from engagement with the predominant cytokine IL-1β and the weaker cytokine IL-1α, strongly resistant B6 mice become susceptible to the development of TMEV-induced disease (Figure 2). Viral loads in the spinal cord are higher in the absence of IL-1R signals, suggesting that the presence of IL-1 signaling plays an important role in controlling viral persistence during the course of TMEV infection. The high viral loads were also accompanied by higher cellular infiltration into the CNS. Histopathological examinations of the virus-infected IL-1R-deficient B6 mice confirmed the elevated lymphocyte infiltration, demyelination and axonal loss in the CNS compared to control B6 mice (Figure 3). These results are consistent with previous reports indicating that either IL-1β- or IL-1RI-deficient mice are susceptible to various infections [1,7,8,46]. These results collectively suggest that either an abnormally high level of IL-1β or the absence of IL-1-mediated signals leads to high viral loads and cellular infiltration into the CNS, resulting in the elevated development of TMEV-induced demyelinating disease. Therefore, a fine balance of IL-1β-mediated signaling appears to be important for protection from viral infections. It is also interesting to note that this viral model for MS is markedly different from the EAE model, which is not associated with microbial infections, in that a deficiency of IL-1R1 significantly reduces the development of demyelinating disease in EAE [37]. 
Despite many previous studies on the role of IL-1β signaling in viral infections, the underlying mechanisms of the signals involved in the protection from infection remain unclear. Previously, it has been shown that IL-1-mediated signals augment T cell responses by increasing cellular infiltration, as well as upregulating cytokine production and co-stimulatory molecule expression in APCs [5,47,48]. However, our results showed that cellular infiltration is elevated in IL-1R1 KO mice during the early stages of viral infection (Figure 2), although the anti-viral CD4+ T cell responses in the CNS of virus-infected IL-1R KO mice are lower, without compromising either peripheral CD4+ T cell responses (Figure 5) or CNS CD8+ T cell responses (Figure 6). These results suggest that the APCs associated with CD4+ T cell responses in the CNS are primarily affected by the absence of IL-1-mediated signaling. Our previous studies strongly suggested that microglia primarily, and astrocytes to a certain extent, harbor viral loads and play important roles in shaping the level and type of the CD4+ T cell response [43]. In addition, it is known that IL-1 signaling affects the function of these cell types [42]. Therefore, it is most likely that these cells play an important role in the development of anti-viral CD4+ T cell responses in the CNS during the early stage of viral infection. Because the cytokine production profile of APCs is altered in the absence of IL-1 signaling, perhaps due to the elevated expression of inhibitory molecules (Figure 7), similar mechanisms by CNS APCs may negatively affect the initial development and/or function of anti-viral T cells following viral infection. Regarding the underlying mechanisms, it is currently unclear how the deficiency in IL-1 signals enhances the expression of inhibitory molecules in APCs. However, we have observed that APCs from susceptible SJL mice expressed significantly higher levels of these molecules upon viral infection either in vitro or in vivo compared to cells from resistant B6 mice (data not shown), suggesting that the viral load may lead to the elevated expression. Therefore, it is most likely that the absence of IL-1 signals permits the initial elevation of viral load (Figure 4), and the higher viral load, in turn, leads to an eventual compromise in the efficiency of anti-viral T cell responses and functions. In contrast, the presence of excessive IL-1 signals preferentially triggers T cell responses that are unfavorable for the protection of the hosts from chronic viral persistence and the pathogenesis of demyelinating disease, as previously seen [17,19]. Conclusions: IL-1 signaling plays a protective role against viral infections. However, we have previously demonstrated that administration of IL-1 promotes the pathogenesis of TMEV-induced demyelinating disease, similar to the autoimmune disease model (EAE) for MS. The IL-1-mediated pathogenesis of TMEV-induced demyelinating disease appears to reflect an elevated Th17 response in the presence of IL-1. However, IL-1R-deficient B6 mice also developed TMEV-induced demyelinating disease, accompanied by high viral persistence and upregulated expression of T cell inhibitory molecules such as PDL-1 and Tim-3. These results suggest that the presence of a high IL-1 level promotes the pathogenesis by elevating Th17 responses, whereas the absence of IL-1 signals permits viral persistence in the CNS due to insufficient T cell activation. 
Therefore, the balance of IL-1 signaling appears to be critical for the determination of protection vs. pathogenesis in the development of a virus-induced demyelinating disease. Abbreviations: APC: antigen-presenting cell; CNS: central nervous system; dpi: days post-infection; EAE: experimental autoimmune encephalomyelitis; ELISA: enzyme-linked immunosorbent assay; GAPDH: glyceraldehyde-3-phosphate dehydrogenase; H & E: hematoxylin and eosin; IL-1R: interleukin-1 receptor; LFB: Luxol Fast Blue; LPS: lipopolysaccharide; MNC: mononuclear cell; MS: multiple sclerosis; OVA: ovalbumin; PBS: phosphate-buffered saline; PCR: polymerase chain reaction; PFU: plaque-forming unit; SEM: standard error of the mean; TLR: toll-like receptor; TMEV: Theiler’s murine encephalomyelitis virus; TMEV-IDD: TMEV-induced demyelinating disease. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: BSK directed the experiments, interpreted the results and wrote the manuscript. YHJ conducted immunological experiments and helped with the writing. LM conducted histological experiments and wrote the corresponding portions. HSK performed some molecular analyses. WH and HSP conducted the initial immunological experiments. CSK contributed to the interpretation of results and the direction of the study. All authors read and approved the final manuscript.
Background: Theiler's virus infection induces chronic demyelinating disease in mice and has been investigated as an infectious model for multiple sclerosis (MS). IL-1 plays an important role in the pathogenesis of both the autoimmune disease model (EAE) and this viral model for MS. However, IL-1 is known to play an important protective role against certain viral infections. Therefore, it is unclear whether IL-1-mediated signaling plays a protective or pathogenic role in the development of TMEV-induced demyelinating disease. Methods: Female C57BL/6 mice and B6.129S7-Il1r1tm1Imx/J mice (IL-1R KO) were infected with Theiler's murine encephalomyelitis virus (1 × 10^6 PFU). Differences in the development of demyelinating disease and changes in the histopathology were compared. Viral persistence, cytokine production, and immune responses in the CNS of infected mice were analyzed using quantitative PCR, ELISA, and flow cytometry. Results: Administration of IL-1β, thereby rendering resistant B6 mice susceptible to TMEV-induced demyelinating disease, induced a high level of Th17 response. Interestingly, infection of TMEV into IL-1R-deficient resistant C57BL/6 (B6) mice also induced TMEV-induced demyelinating disease. High viral persistence was found in the late stage of viral infection in IL-1R-deficient mice, although there were few differences in the initial anti-viral immune responses and viral persistence levels between the WT B6 and IL-1R-deficient mice. The initial type I IFN responses and the expression of PDL-1 and Tim-3 were higher in the CNS of TMEV-infected IL-1R-deficient mice, leading to deficiencies in T cell function that permit viral persistence. Conclusions: These results suggest that the presence of a high IL-1 level exerts a pathogenic role by elevating pathogenic Th17 responses, whereas the lack of IL-1 signals promotes viral persistence in the spinal cord due to insufficient T cell activation by elevating the production of inhibitory cytokines and regulatory molecules. Therefore, the balance of IL-1 signaling appears to be extremely important for the protection from TMEV-induced demyelinating disease, and either too much or too little signaling promotes the development of disease.
Introduction: Toll-like receptors (TLRs) and interleukin-1 receptors (IL-1Rs) are involved in the production of various cytokines that are associated with the innate immune response against many different infectious agents. TLRs and IL-1Rs share many structural similarities and utilize common downstream adaptive molecules after activation by their ligands. In general, these innate immune responses induced by TLRs and IL-1Rs are known to play a protective role against various microbes [1]. However, several recent studies have indicated that these signals may also play a pathogenic role in viral infections [2-4]. In addition to TLRs, IL-1Rs are also considered to be important innate receptors because IL-1β, in particular, is a prominent cytokine that appears in the early stage of microbial infections [3]. The IL-1R family contains six receptors, including IL-1RI, which recognizes the principal inflammatory cytokine IL-1β and the less inflammatory cytokine IL-1α [1,5]. IL-1β is generated from the cleavage of pro-IL-1β by caspase-1 in inflammasomes after infections, and the downstream signaling cascade of the IL-1β-IL-1R interaction leads to the induction of various proinflammatory cytokines and the activation of lymphocytes [6]. IL-1β-deficient mice show broad host susceptibility to various infections [7,8]. Moreover, IL-1RI-deficient mice are susceptible to certain pathogens, including Listeria monocytogenes [1]. Therefore, these responses to IL-1β are apparently critical for protection from many types of viruses and microbes. However, the level of IL-1β has also been linked to many different inflammatory autoimmune diseases, including diabetes, lupus, arthritis, and multiple sclerosis (MS) [1,4]. Theiler’s murine encephalomyelitis virus (TMEV) is a positive-stranded RNA virus in the Picornaviridae family [9]. TMEV establishes a persistent CNS infection in susceptible mouse strains that results in the development of chronic demyelinating disease, and the system has been studied as a relevant viral model for human multiple sclerosis [10-12]. Cells infected with TMEV produce various proinflammatory cytokines, including type I IFNs, IL-6 and IL-1β [13]. TLR3 and TLR2 are involved in the production of these cytokines following infection with TMEV [14,15]. In addition, melanoma differentiation-associated gene 5 and dsRNA-activated protein kinase R are known to contribute to the production of proinflammatory cytokines [14,16]. These pathways also induce activation of caspase-1, leading to the generation of IL-1β and IL-1α, which contribute to further cytokine production, such as IL-6, which promotes the development of pathogenic Th17 cells. Because IL-1β signals are associated with both host protection from viral infections and the pathogenesis of inflammatory immune-mediated diseases, we here investigated the role of IL-1β-mediated signals in the development of TMEV-induced demyelinating disease. We have previously reported that Th17 cells preferentially develop in an IL-6-dependent manner after TMEV infection, and that Th17 cells promote persistent viral infection and induce the pathogenesis of chronic demyelinating disease [17]. In addition, our earlier studies indicated that administration of either lipopolysaccharide (LPS) or IL-1β into resistant C57BL/6 (B6) mice, thereby inducing high levels of IL-6 production, renders the mice susceptible to the development of TMEV-induced demyelinating disease [18]. 
These results suggest that an excessive level of IL-1β exacerbates TMEV-induced demyelinating disease by generating high levels of pathogenic Th17 cells [19]. In this study, we confirmed the role of excessive IL-1β in the generation of a high level of Th17 cells in resistant B6 mice, supporting the pathogenic mechanisms of IL-1β. Furthermore, we have also utilized IL-1R-deficient mice to investigate the role of IL-1β-mediated signaling in the development of TMEV-induced demyelinating disease. Our results indicate that the lack of IL-1 signaling in resistant B6 mice also induced TMEV-induced demyelinating disease. Initial deficiencies in T cell function, including cytokine production, as well as high viral persistence in the late stage of viral infection, were found in IL-1R-deficient mice. Therefore, the presence of an excessive amount of IL-1 plays a pathogenic role by elevating pathogenic Th17 responses, whereas the lack of IL-1 signals promotes viral persistence in the spinal cord, leading to chronic immune-mediated inflammatory disease. Conclusions: IL-1 signaling plays a protective role against viral infections. However, we have previously demonstrated that administration of IL-1 promotes the pathogenesis of TMEV-induced demyelinating disease, similar to the autoimmune disease model (EAE) for MS. The IL-1-mediated pathogenesis of TMEV-induced demyelinating disease appears to reflect an elevated Th17 response in the presence of IL-1. However, IL-1R-deficient B6 mice also developed TMEV-induced demyelinating disease, accompanied by high viral persistence and upregulated expression of T cell inhibitory molecules such as PDL-1 and Tim-3. These results suggest that the presence of a high IL-1 level promotes the pathogenesis by elevating Th17 responses, whereas the absence of IL-1 signals permits viral persistence in the CNS due to insufficient T cell activation. Therefore, the balance of IL-1 signaling appears to be critical for the determination of protection vs. pathogenesis in the development of a virus-induced demyelinating disease.
Background: Theiler's virus infection induces chronic demyelinating disease in mice and has been investigated as an infectious model for multiple sclerosis (MS). IL-1 plays an important role in the pathogenesis of both the autoimmune disease model (EAE) and this viral model for MS. However, IL-1 is known to play an important protective role against certain viral infections. Therefore, it is unclear whether IL-1-mediated signaling plays a protective or pathogenic role in the development of TMEV-induced demyelinating disease. Methods: Female C57BL/6 mice and B6.129S7-Il1r1tm1Imx/J mice (IL-1R KO) were infected with Theiler's murine encephalomyelitis virus (1 × 10^6 PFU). Differences in the development of demyelinating disease and changes in the histopathology were compared. Viral persistence, cytokine production, and immune responses in the CNS of infected mice were analyzed using quantitative PCR, ELISA, and flow cytometry. Results: Administration of IL-1β, thereby rendering resistant B6 mice susceptible to TMEV-induced demyelinating disease, induced a high level of Th17 response. Interestingly, infection of TMEV into IL-1R-deficient resistant C57BL/6 (B6) mice also induced TMEV-induced demyelinating disease. High viral persistence was found in the late stage of viral infection in IL-1R-deficient mice, although there were few differences in the initial anti-viral immune responses and viral persistence levels between the WT B6 and IL-1R-deficient mice. The initial type I IFN responses and the expression of PDL-1 and Tim-3 were higher in the CNS of TMEV-infected IL-1R-deficient mice, leading to deficiencies in T cell function that permit viral persistence. Conclusions: These results suggest that the presence of a high IL-1 level exerts a pathogenic role by elevating pathogenic Th17 responses, whereas the lack of IL-1 signals promotes viral persistence in the spinal cord due to insufficient T cell activation by elevating the production of inhibitory cytokines and regulatory molecules. Therefore, the balance of IL-1 signaling appears to be extremely important for the protection from TMEV-induced demyelinating disease, and either too much or too little signaling promotes the development of disease.
19,515
395
[ 786, 101, 42, 48, 110, 189, 70, 118, 102, 124, 272, 112, 63, 431, 703, 667, 374, 663, 403, 920, 131, 10, 67 ]
27
[ "il", "mice", "cells", "il 1r", "1r", "ko", "1r ko", "il 1r ko", "b6", "tmev" ]
[ "pathogenic immune responses", "innate immune cytokines", "il 1β inflammatory", "tlrs interleukin receptors", "toll like receptors" ]
null
[CONTENT] IL-1R | TMEV | Demyelination | CNS | T cell responses | IL-1 | IL-1R KO mice | Th17 [SUMMARY]
null
[CONTENT] IL-1R | TMEV | Demyelination | CNS | T cell responses | IL-1 | IL-1R KO mice | Th17 [SUMMARY]
[CONTENT] IL-1R | TMEV | Demyelination | CNS | T cell responses | IL-1 | IL-1R KO mice | Th17 [SUMMARY]
[CONTENT] IL-1R | TMEV | Demyelination | CNS | T cell responses | IL-1 | IL-1R KO mice | Th17 [SUMMARY]
[CONTENT] IL-1R | TMEV | Demyelination | CNS | T cell responses | IL-1 | IL-1R KO mice | Th17 [SUMMARY]
[CONTENT] Animals | Demyelinating Diseases | Disease Models, Animal | Female | Interleukin-1beta | Mice | Mice, 129 Strain | Mice, Inbred C57BL | Mice, Knockout | Multiple Sclerosis | Poliomyelitis | Signal Transduction | Theilovirus [SUMMARY]
null
[CONTENT] Animals | Demyelinating Diseases | Disease Models, Animal | Female | Interleukin-1beta | Mice | Mice, 129 Strain | Mice, Inbred C57BL | Mice, Knockout | Multiple Sclerosis | Poliomyelitis | Signal Transduction | Theilovirus [SUMMARY]
[CONTENT] Animals | Demyelinating Diseases | Disease Models, Animal | Female | Interleukin-1beta | Mice | Mice, 129 Strain | Mice, Inbred C57BL | Mice, Knockout | Multiple Sclerosis | Poliomyelitis | Signal Transduction | Theilovirus [SUMMARY]
[CONTENT] Animals | Demyelinating Diseases | Disease Models, Animal | Female | Interleukin-1beta | Mice | Mice, 129 Strain | Mice, Inbred C57BL | Mice, Knockout | Multiple Sclerosis | Poliomyelitis | Signal Transduction | Theilovirus [SUMMARY]
[CONTENT] Animals | Demyelinating Diseases | Disease Models, Animal | Female | Interleukin-1beta | Mice | Mice, 129 Strain | Mice, Inbred C57BL | Mice, Knockout | Multiple Sclerosis | Poliomyelitis | Signal Transduction | Theilovirus [SUMMARY]
[CONTENT] pathogenic immune responses | innate immune cytokines | il 1β inflammatory | tlrs interleukin receptors | toll like receptors [SUMMARY]
null
[CONTENT] pathogenic immune responses | innate immune cytokines | il 1β inflammatory | tlrs interleukin receptors | toll like receptors [SUMMARY]
[CONTENT] pathogenic immune responses | innate immune cytokines | il 1β inflammatory | tlrs interleukin receptors | toll like receptors [SUMMARY]
[CONTENT] pathogenic immune responses | innate immune cytokines | il 1β inflammatory | tlrs interleukin receptors | toll like receptors [SUMMARY]
[CONTENT] pathogenic immune responses | innate immune cytokines | il 1β inflammatory | tlrs interleukin receptors | toll like receptors [SUMMARY]
[CONTENT] il | mice | cells | il 1r | 1r | ko | 1r ko | il 1r ko | b6 | tmev [SUMMARY]
null
[CONTENT] il | mice | cells | il 1r | 1r | ko | 1r ko | il 1r ko | b6 | tmev [SUMMARY]
[CONTENT] il | mice | cells | il 1r | 1r | ko | 1r ko | il 1r ko | b6 | tmev [SUMMARY]
[CONTENT] il | mice | cells | il 1r | 1r | ko | 1r ko | il 1r ko | b6 | tmev [SUMMARY]
[CONTENT] il | mice | cells | il 1r | 1r | ko | 1r ko | il 1r ko | b6 | tmev [SUMMARY]
[CONTENT] il | il 1β | 1β | tmev | including | il 1rs | 1rs | pathogenic | th17 | role [SUMMARY]
null
[CONTENT] il | mice | ko | il 1r ko | 1r ko | il 1r | 1r | ko mice | cells | b6 [SUMMARY]
[CONTENT] il | induced | pathogenesis | disease | induced demyelinating | induced demyelinating disease | demyelinating disease | demyelinating | tmev induced demyelinating | tmev induced [SUMMARY]
[CONTENT] il | mice | cells | ko | il 1r | 1r | tmev | viral | 1r ko | il 1r ko [SUMMARY]
[CONTENT] il | mice | cells | ko | il 1r | 1r | tmev | viral | 1r ko | il 1r ko [SUMMARY]
[CONTENT] Theiler ||| IL-1 | MS ||| IL-1 ||| IL-1 | TMEV [SUMMARY]
null
[CONTENT] IL-1β | B6 | TMEV ||| TMEV | IL-1R | TMEV ||| IL-1R | the WT B6 | IL-1R ||| IFN | PDL-1 | CNS | TMEV | IL-1R [SUMMARY]
[CONTENT] ||| IL-1 | TMEV [SUMMARY]
[CONTENT] Theiler ||| IL-1 | MS ||| IL-1 ||| IL-1 | TMEV ||| Theiler | 1 | 106 ||| ||| CNS | PCR | ELISA ||| IL-1β | B6 | TMEV ||| TMEV | IL-1R | TMEV ||| IL-1R | the WT B6 | IL-1R ||| IFN | PDL-1 | CNS | TMEV | IL-1R ||| ||| IL-1 | TMEV [SUMMARY]
[CONTENT] Theiler ||| IL-1 | MS ||| IL-1 ||| IL-1 | TMEV ||| Theiler | 1 | 106 ||| ||| CNS | PCR | ELISA ||| IL-1β | B6 | TMEV ||| TMEV | IL-1R | TMEV ||| IL-1R | the WT B6 | IL-1R ||| IFN | PDL-1 | CNS | TMEV | IL-1R ||| ||| IL-1 | TMEV [SUMMARY]
Validity of OSCE Evaluation Using the FLEX Model of Blended Learning.
35607741
For OSCE (Objective Structured Clinical Examination) scoring, medical schools must bring together many clinical experts in the same place, which is very risky in the context of the coronavirus pandemic. However, if the FLEX model, with its properties of self-directed learning and offline feedback, is applied to the OSCE, it is possible to provide a safe and effective evaluation environment for both universities and students through experts' evaluation of self-video clips of medical students. The present study investigated the validity of the FLEX model for evaluating the OSCE in a small group of medical students.
BACKGROUND
Sixteen 3rd grade medical students who failed the OSCE were required to take a make-up examination by videotaping the failed items and submitting them online. The scores for the original examination and the make-up examination were compared using the paired Wilcoxon signed rank test, and a post-hoc questionnaire was conducted.
METHODS
The scores for the make-up examination were significantly higher than those for the original examination. The significance was maintained even when the scores were compared by individual domains of skills and proficiency. In terms of preference, students were largely in favor of the self-videotaped examination, primarily due to the availability of self-practice.
RESULTS
The FLEX model can be effectively applied to medical education, especially for the evaluation of the OSCE.
CONCLUSION
[ "Clinical Competence", "Education, Medical", "Humans", "Learning", "Pandemics", "Schools, Medical", "Students, Medical" ]
9127434
INTRODUCTION
Blended learning, defined as a teaching method conjoining traditional lectures and various technologies, is rapidly being introduced in the field of education amid the coronavirus disease 2019 (COVID-19) crisis. Online education first appeared in the 1990s under the name of E-learning as an ancillary instrument to traditional lectures. However, in the 2010s, it started to integrate itself with offline education in order to overcome the shortcomings of one-sided, lecture-based education, and started being called blended learning. The COVID-19 crisis accelerated the adoption of online learning into the field of education, and along with it a contemplation regarding the relative weight, target, and integration of offline and online education has become greater than ever.12 Representative models of blended learning include a supplemental model that utilizes online learning activities to supplement offline classes, a flipped learning model that performs offline cooperative learning after watching a lecture video, a rotation model that circulates various learning activities by mixing online and offline, and a flex model that conducts self-directed learning activities online and gives individual feedback offline.3 In medical education, clinical skills education was traditionally carried out in an offline setting. Conventionally, the whole curriculum would be carried out offline, but when an online lecture is necessary, the flipped learning model of blended learning proposed by Garrison et al. would normally be used. This refers to in-advance online learning such as watching video clips offered by the e-learning consortium (http://www.mededu.or.kr/), followed by offline practice. Prior studies conducted in colleges of nursing that utilized such a model have reported improved student satisfaction.456789 Meanwhile, video clips used in clinical skills education can be divided into tutorial videos by the instructor and self-videos by the students themselves. Both the tutorial video and the self-video are assumed to have a similar educational effect, and studies that advocate the use of self-videos taken by smart phones are gaining attention.4567810 It is also very difficult to recruit clinician experts to score the OSCE (Objective Structured Clinical Examination) in medical school, so rather than accommodating the schedules of busy clinicians, we think that evaluation using medical students’ self-video clips can be very useful in a medical school. In light of such prior research, we designed a study where students would self-videotape their own OSCE and receive feedback, in order to validate the applicability of the FLEX model in learning clinical skills in medical education. This is because self-video recording has a common context with the self-directed learning of the FLEX model. Additionally, due to the prolongation of the COVID-19 crisis, there is growing interest in the FLEX model, which puts more emphasis on online education, but little research has been conducted using this model.
METHODS
Sixteen 3rd grade students who failed the OSCE in the year 2015 were asked to take a make-up examination using self-taken video clips. Students were required to upload video clips for the failed items at the school homepage within 7 days. The same 5-minute time limit as in the original examination was applied. These students were not randomly selected by the researcher, but were students who did not reach the standard score in the tests conducted in class. Self-taken video clips were reviewed by three independent examiners, all of whom had over 10 years of experience in medical education. The checklist was identical to the original examination, which was divided into a skills domain and a proficiency domain. The only difference was the ‘not identified’ column that was added in addition to ‘yes’ and ‘no’ for the skills domain. There was no difference in the checklist items between the original and the make-up examination. Evaluation of the proficiency domain was carried out using a 5-point Likert scale. The score for the skills domain was converted to a total of 80, and the score for the proficiency domain was converted to a total of 20, adding up to 100. Students were considered to have ‘passed’ the item when the converted score was equal to or greater than 70. This cutoff was identical to the cutoff applied for the original examination. The paired Wilcoxon signed rank test was used to investigate whether there is a statistically significant difference between the results of the original examination and the make-up examination. Also, a post-hoc questionnaire was conducted to investigate the following: the number of participants required for videotaping, the number of videotaping trials, the preference between conventional examination and self-videotaped examination, and the reasons behind the preference. Ethics statement This study is about the results of tests conducted during regular class hours, and the subject of analysis is not students but anonymized test scores. The post-hoc survey was also conducted anonymously. Therefore, there is no reason to presume that the subjects would refuse consent, and the risk to the subjects is extremely low even if consent is waived. The study was approved by the Kangwon National University Hospital Institutional Review Board (KNUH-2022-02-031-002).
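For concreteness, the scoring rule described above (skills domain scaled to 80 points, proficiency domain scaled to 20 points, pass at 70 or above) can be sketched in Python as below. The exact conversion of the 5-point Likert ratings to the 20-point proficiency score is not spelled out in the text, so the linear scaling here is an assumption, as is the treatment of 'not identified' checklist marks as not credited.

def item_score(checklist_marks, likert_ratings):
    # checklist_marks: per-item examiner marks; only "yes" earns credit.
    # Treating "not identified" the same as "no" is an assumption.
    credited = sum(m == "yes" for m in checklist_marks)
    skills = 80.0 * credited / len(checklist_marks)          # skills domain -> 80 points
    proficiency = 20.0 * (sum(likert_ratings) / len(likert_ratings)) / 5.0  # Likert 1-5 -> 20 points (assumed linear)
    total = skills + proficiency
    return total, total >= 70.0                              # pass cutoff of 70

score, passed = item_score(["yes"] * 13 + ["no", "not identified"], [4, 4, 3])
print(f"{score:.1f} -> {'pass' if passed else 'fail'}")      # 84.0 -> pass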
RESULTS
A total of 16 students submitted 33 video clips. There were 11 video clips for the cardiopulmonary resuscitation (CPR) item, 10 for the endotracheal intubation (ET) item, and 12 for the male Foley insertion (FI) item (Supplementary Table 1). The results of the make-up examination can be seen in Table 1. Students were given credit for each checklist item when 2 or more examiners marked ‘yes.’ In all three items, an increase in the score was noted in the make-up examination compared to the original examination, and the paired Wilcoxon signed rank test showed a significant difference between the two. For the CPR, ET, and FI items, the average original examination scores were 57.8, 50.8, and 51.1, respectively, but the average make-up examination score was 73.7 for CPR and 78 for FI, surpassing the cutoff. When comparing the results by domain, the scores for both the skills domain and the proficiency domain showed a significant increase in the make-up examination compared to the original examination. a: Cut-off value for pass is 70 points; **P < 0.01 by paired Wilcoxon signed rank test. The results of the post-hoc questionnaire can be seen in Tables 2 and 3. Among the 16 students, 15 responded. The average number of videotaping trials exceeded 2 for all three items, and the maximum number of videotaping trials for each item was 10, 5, and 5. Eleven of 15 responders reported subjective improvement in OSCE skills, and another 11 out of 15 students reported a preference towards self-videotaped examination over conventional examination. The main reason behind the preference towards self-videotaped examination was the availability of self-practice without a time limit, and students who chose conventional examination pointed out the lack of tension in self-videotaped examination.
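As a companion sketch to the analysis above, the paired comparison of original versus make-up scores could be run as follows. The per-student scores below are illustrative placeholders, since the study publishes only item averages (e.g., 57.8 versus 73.7 for CPR); the majority-vote crediting rule for the three examiners is included as described in the text.

from scipy.stats import wilcoxon

def credited(marks):
    # An item is credited when 2 or more of the 3 examiners marked "yes".
    return sum(m == "yes" for m in marks) >= 2

# Illustrative paired per-student scores (not the study's raw data):
original = [57, 62, 55, 60, 48, 65, 58, 52, 61, 54, 59]
make_up  = [74, 78, 70, 76, 69, 80, 73, 71, 79, 72, 75]

stat, p = wilcoxon(original, make_up)   # paired Wilcoxon signed rank test
print(credited(["yes", "no", "yes"]))   # True: item earns credit
print(f"W = {stat}, P = {p:.4f}")       # the study reports P < 0.01 for all three items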
null
null
[ "Ethics statement" ]
[ "This study is about the results of tests conducted during regular class hours, and the subject of analysis is not students but anonymized test scores. The post-mortem survey was also conducted anonymously. Therefore, this study is a study in which there is no reason to presume the subject’s refusal to consent, and the risk to the subject is extremely low even if consent is waived. The study was approved by the Kangwon National University Hospital Institutional Review Board (KNUH-2022-02-031-002)." ]
[ null ]
[ "INTRODUCTION", "METHODS", "Ethics statement", "RESULTS", "DISCUSSION" ]
[ "Blended learning, defined as a teaching method conjoining traditional lectures and various technologies, is rapidly being introduced in the field of education along the coronavirus disease 2019 (COVID-19) crisis. Online education first appeared in the 1990’s under the name of E-learning as an ancillary instrument to traditional lectures. However in 2010’s, it started to integrate itself with offline education in order to overcome the shortcomings of one-sided, lecture based education, and started being called blended learning. The COVID-19 crisis accelerated adoption of online learning into the field of education, and along with it a contemplation regarding relative weight, target, and integration of offline and online education has become greater than ever.12\nRepresentative models of Blended Learning include a supplemental model that utilizes online learning activities to supplement online classes, a flipped learning model that performs offline cooperative learning after watching a lecture video, a rotation model that circulates various learning activities by mixing online and offline, And there is a flex model that conducts self-directed learning activities online and gives individual feedback offline.3 In medical education, clinical skills education was traditionally carried out in an offline setting. Conventionally, the whole curriculum would be carried out offline, but when online lecture is necessary, flipped learning model of blended learning proposed by Garrison et al. would normally be used. This refers to in-advance online learning such as watching video clips offered by e-learning consortium (http://www.mededu.or.kr/), followed by offline practice. Prior studies conducted in college of nursing that utilized such model have reported improved student satisfaction.456789\nMeanwhile video clips used in the clinical skills education can be divided into tutorial video by the instructor, and self-video by the student themselves. Both the tutorial video and self-video are assumed to have similar educational effect, and researches that advocate the use of self-video taken by smart phones are gaining attention.4567810 And it is very difficult to recruit clinician experts in charge of scoring OSCE (Objective Structured Clinical Examination) in medical school, so rather than meeting the schedule of busy clinician experts, we think the evaluation using these medical students’ self-video clips seems to be very useful in a medical school.\nIn the light of such prior research, we designed a study where students would self-videotape their own OSCE and receive feedback, in order to validate the applicability of FLEX model in learning clinical skills of medical education. This is because self-video recording has a common context with self-directed learning of the FLEX model. Additionally, due to the elongation of COVID-19 crisis, there is a growing interest in FLEX model which puts more emphasis on online education but little research has been conducted using this model.", "Sixteen 3rd grade students who failed on OSCE in the year 2015 were asked to take a make-up examination using self-taken video clip. Students were required to upload video clips for the failed items at the school homepage within 7 days. The time limit of 5 minute was identically applied as the original examination. 
These students were not randomly selected by the researcher, but were students who did not reach the standard score in the tests conducted in class.\nSelf-taken video clips were reviewed by three independent examiners, all of whom had over 10 years of experience in medical education. The checklist was identical to the original examination, which was divided into a skills domain and a proficiency domain. The only difference was the ‘not identified’ column that was added in addition to ‘yes’ and ‘no’ for the skills domain. There was no difference in the checklist items between the original and the make-up examination.\nEvaluation of the proficiency domain was carried out using a 5-point Likert scale. The score for the skills domain was converted to a total of 80, and the score for the proficiency domain was converted to a total of 20, adding up to 100. Students were considered to have ‘passed’ the item when the converted score was equal to or greater than 70. This cutoff was identical to the cutoff applied for the original examination.\nThe paired Wilcoxon signed rank test was used to investigate whether there is a statistically significant difference between the results of the original examination and the make-up examination. Also, a post-hoc questionnaire was conducted to investigate the following: the number of participants required for videotaping, the number of videotaping trials, the preference between conventional examination and self-videotaped examination, and the reasons behind the preference.\nEthics statement This study is about the results of tests conducted during regular class hours, and the subject of analysis is not students but anonymized test scores. The post-hoc survey was also conducted anonymously. Therefore, there is no reason to presume that the subjects would refuse consent, and the risk to the subjects is extremely low even if consent is waived. The study was approved by the Kangwon National University Hospital Institutional Review Board (KNUH-2022-02-031-002).", "This study is about the results of tests conducted during regular class hours, and the subject of analysis is not students but anonymized test scores. The post-hoc survey was also conducted anonymously. Therefore, there is no reason to presume that the subjects would refuse consent, and the risk to the subjects is extremely low even if consent is waived. The study was approved by the Kangwon National University Hospital Institutional Review Board (KNUH-2022-02-031-002).", "A total of 16 students submitted 33 video clips. There were 11 video clips for the cardiopulmonary resuscitation (CPR) item, 10 for the endotracheal intubation (ET) item, and 12 for the male Foley insertion (FI) item (Supplementary Table 1).\nThe results of the make-up examination can be seen in Table 1. 
Students were given credit for each checklist item when 2 or more examiners marked ‘yes.’ In all three items, an increase in the score was noted in the make-up examination compared to the original examination, and the paired Wilcoxon signed rank test showed a significant difference between the two. For the CPR, ET, and FI items, the average original examination scores were 57.8, 50.8, and 51.1, respectively, but the average make-up examination score was 73.7 for CPR and 78 for FI, surpassing the cutoff. When comparing the results by domain, the scores for both the skills domain and the proficiency domain showed a significant increase in the make-up examination compared to the original examination.\na: Cut-off value for pass is 70 points; **P < 0.01 by paired Wilcoxon signed rank test.\nThe results of the post-hoc questionnaire can be seen in Tables 2 and 3. Among the 16 students, 15 responded. The average number of videotaping trials exceeded 2 for all three items, and the maximum number of videotaping trials for each item was 10, 5, and 5. Eleven of 15 responders reported subjective improvement in OSCE skills, and another 11 out of 15 students reported a preference towards self-videotaped examination over conventional examination. The main reason behind the preference towards self-videotaped examination was the availability of self-practice without a time limit, and students who chose conventional examination pointed out the lack of tension in self-videotaped examination.", "The findings of the present study indicate that online education followed by small-group offline feedback is capable of improving students’ clinical skills and proficiency in the OSCE. In addition, a majority of students reported subjective improvement in their OSCE skills and a preference towards self-videotaped examination. Therefore, the researchers have concluded that the FLEX model may be applied to skills learning in medical education and is effective.\nThe FLEX model has advantages in both the cognitive and affective areas. In the cognitive area, the FLEX model may promote academic achievement, problem-solving skills, critical thinking, and comprehension. In the affective area, the FLEX model may promote satisfaction, academic interest, learning attitude, and motivation. In this study’s setting, students can videotape and review their own skills and knowledge and interact with other students by assisting them and giving feedback, thereby achieving the above-mentioned effects. Video clips have the additional advantage of enabling comparison of a student’s skills over time because they may be saved and accessed afterwards. Also, utilizing smart phones for video clips can reduce the psychological and technical barriers to making video clips.\nFrom the perspective of the education provider, self-videotaped examination requires fewer educational resources. Normally, even a single round of OSCE examination requires scheduling of examiners, in-test student management, and maintenance of consumables and dummy models. Against such a background, the results by Casey et al.,11 which showed no significant difference between self-assessed remediation and faculty-guided remediation, are worth noting. If self-assessed remediation suffices, medical schools may save manpower and material resources.\nThis study has certain limitations. First of all, because it was not designed as a comparative study, the score difference between the original examination and the make-up examination may have been impacted by multiple factors. 
Specifically, the reason behind the improvement in performance is confounded: it is potentially attributable to repeated pre-practice or a change in learning method, and these factors cannot be assessed individually.\nVivekananda-Schmidt asserted the usefulness of videotaped OSCE for the joint examination of medical students, but that study utilized a fixed camera, and thus the quality, angle, and voice recording were identical across the videos.12 A study by Christoph Kiehl, which also argued for the usefulness of videotaped OSCE, utilized a tripod-mounted digital video camera as well, and the quality difference between the videos was negligible.13 However, in this study, video clips were taken by the students themselves, and thus three independent examiners were required to evaluate the video clips in order to minimize the effect of quality differences among them.\nThis resulted in another limitation of compromised equivalence in evaluation between the two examinations. In the original examination, students were evaluated by one examiner, whereas in the make-up examination, they were evaluated by three examiners. Despite the differences between the two examinations, the present study adopted a form of self-videotaped examination and accepted the resulting difference in the number of examiners so that the benefit of offline feedback could be maximized without compromising the quality of evaluation.\nFinally, some checklist items were marked ‘not identified’ due to the physical limitations of a video clip. Major reasons behind ‘not identified’ were small object size, unobservable fine movements of students, unobservable fine movements of dummy models, and the inability to evaluate students’ visual inspection. Examples of the suggested reasons are as follows: evaluating a stylet tip of the endotracheal tube, pulling the penis at a right angle when inserting a Foley catheter, chest compression in CPR, and visual inspection of the sclera to confirm jaundice.\nTo overcome such limitations, a further study comparing the results of self-videotaped examination and conventional examination in the same examination round is required. In addition, we suggest the following guidelines in order to minimize the physical limitations that have been confirmed in the present study. First, smart phone cameras need to film the target in close proximity when filming small objects or body surfaces. Second, if possible, it is recommended to infer the movement of a small object from the movement of large objects when evaluating video clips. Third, a concrete checklist for each item should be prepared in advance. If a certain checklist item is ambiguous, then removal of such a checklist item is encouraged. Finally, a monitor and a sensor should be clearly shown in the video clip as ancillary devices.\nIn spite of the above-mentioned limitations, the present study validated the applicability of the FLEX model of blended learning to the OSCE, which is yet to be widely used. Blended learning is a teaching method that will not fade away with the remission of the COVID-19 crisis, but a method that will be applied extensively in the coming future. Medical schools therefore need to contemplate how to apply blended learning in their curricula, and the present study may serve as one indicator in doing so. In particular, since the effectiveness of blended learning has been verified for the OSCE curriculum, which has been conducted as “face-to-face education,” the present study provides very helpful basic material for the medical school curriculum in the future." ]
[ "intro", "methods", null, "results", "discussion" ]
[ "Checklist", "Smart Phone", "Clinical Skill", "Competence", "Video Recording", "COVID-19 Pandemic" ]
INTRODUCTION: Blended learning, defined as a teaching method that combines traditional lectures with various technologies, is rapidly being introduced into education amid the coronavirus disease 2019 (COVID-19) crisis. Online education first appeared in the 1990s under the name of e-learning, as an ancillary instrument to traditional lectures. In the 2010s, however, it began to integrate with offline education to overcome the shortcomings of one-sided, lecture-based teaching, and came to be called blended learning. The COVID-19 crisis accelerated the adoption of online learning in education, and with it, questions about the relative weight, target, and integration of offline and online education have become more pressing than ever.12 Representative models of blended learning include a supplemental model that uses online learning activities to supplement offline classes, a flipped learning model in which offline cooperative learning follows a lecture video, a rotation model that rotates through various learning activities mixing online and offline, and a flex model in which self-directed learning activities are conducted online and individual feedback is given offline.3 In medical education, clinical skills education has traditionally been carried out offline. Conventionally, the whole curriculum would be delivered offline, and when online lectures were necessary, the flipped learning model of blended learning proposed by Garrison et al. would normally be used: in-advance online learning, such as watching video clips offered by the e-learning consortium (http://www.mededu.or.kr/), followed by offline practice. Prior studies in colleges of nursing using this model have reported improved student satisfaction.456789 Meanwhile, video clips used in clinical skills education can be divided into tutorial videos made by the instructor and self-videos made by the students themselves. Tutorial videos and self-videos are assumed to have similar educational effects, and research advocating the use of self-videos taken with smartphones is gaining attention.4567810 Moreover, it is very difficult to recruit the clinical experts who score the OSCE (Objective Structured Clinical Examination) in a medical school, so rather than working around the schedules of busy clinicians, evaluation based on medical students' self-video clips appears to be very useful. In light of this prior research, we designed a study in which students would self-videotape their own OSCE and receive feedback, in order to validate the applicability of the FLEX model to learning clinical skills in medical education; self-video recording shares the self-directed character of the FLEX model. Additionally, with the prolongation of the COVID-19 crisis, interest in the FLEX model, which places more emphasis on online education, is growing, but little research has used this model. METHODS: Sixteen third-year students who failed the OSCE in 2015 were asked to take a make-up examination using self-taken video clips. Students were required to upload video clips for the failed items to the school homepage within 7 days. The same 5-minute time limit as in the original examination was applied. These students were not randomly selected by the researchers; they were the students who did not reach the standard score in the tests conducted in class. 
Self-taken video clips were reviewed by three independent examiners, all of whom had over 10 years of experience in medical education. The checklist was identical to that of the original examination and was divided into a skills domain and a proficiency domain. The only difference was a ‘not identified’ column added alongside ‘yes’ and ‘no’ for the skills domain; the checklist items themselves did not differ between the original and make-up examinations. The proficiency domain was evaluated on a 5-point Likert scale. The skills domain score was converted to a total of 80 and the proficiency domain score to a total of 20, adding up to 100. Students were considered to have passed an item when the converted score was 70 or greater, the same cutoff as in the original examination. The paired Wilcoxon signed-rank test was used to investigate whether there was a statistically significant difference between the results of the original and make-up examinations (a brief illustrative sketch follows the article text below). A post-hoc questionnaire was also conducted to investigate the following: the number of participants required for videotaping, the number of videotaping trials, preference between the conventional and self-videotaped examinations, and the reasons for that preference. Ethics statement: This study concerns the results of tests conducted during regular class hours, and the subject of analysis is not the students themselves but anonymized test scores. The post-hoc survey was also conducted anonymously. Therefore, there is no reason to presume that subjects would refuse consent, and the risk to subjects is extremely low even if consent is waived. The study was approved by the Kangwon National University Hospital Institutional Review Board (KNUH-2022-02-031-002). RESULTS: A total of 16 students submitted 33 video clips: 11 for the cardiopulmonary resuscitation (CPR) item, 10 for the endotracheal intubation (ET) item, and 12 for the male Foley insertion (FI) item (Supplementary Table 1). The results of the make-up examination are shown in Table 1. Students were given credit for a checklist item when 2 or more examiners marked ‘yes.’ For all three items, scores increased in the make-up examination compared with the original examination, and the paired Wilcoxon signed-rank test showed a significant difference between the two. 
For the CPR, ET, and FI items, the average original examination scores were 57.8, 50.8, and 51.1, respectively, while the average make-up examination score was 73.7 for CPR and 78 for FI, surpassing the cutoff. When the results were compared by domain, scores in both the skills domain and the proficiency domain increased significantly in the make-up examination compared with the original examination. aCut-off value for a pass is 70 points; **P < 0.01 by the paired Wilcoxon signed-rank test. The results of the post-hoc questionnaire are shown in Tables 2 and 3. Of the 16 students, 15 responded. The average number of videotaping trials exceeded 2 for all three items, and the maximum number of trials per item was 10, 5, and 5, respectively. Eleven of the 15 respondents reported subjective improvement in their OSCE skills, and 11 of 15 preferred the self-videotaped examination over the conventional examination. The main reason for preferring the self-videotaped examination was the availability of self-practice without a time limit, while students who preferred the conventional examination pointed to the lack of tension in the self-videotaped format. DISCUSSION: The findings of the present study indicate that online education followed by small-group offline feedback can improve students' clinical skills and proficiency in the OSCE. In addition, a majority of students reported subjective improvement in their OSCE skills and a preference for the self-videotaped examination. The researchers therefore conclude that the FLEX model can be applied effectively to skills learning in medical education. The FLEX model has advantages in both the cognitive and affective areas. In the cognitive area, it may promote academic achievement, problem-solving skills, critical thinking, and comprehension; in the affective area, it may promote satisfaction, academic interest, learning attitude, and motivation. In this study's setting, students could videotape and review their own skills and knowledge and interact with other students by assisting them and giving feedback, thereby achieving the effects mentioned above. Video clips have the additional advantage of enabling comparison of a student's skills over time, because they can be saved and reviewed later. Using smartphones for the video clips also lowers the psychological and technical barriers to recording. From the education provider's perspective, a self-videotaped examination requires fewer educational resources: even a single round of OSCE normally requires scheduling examiners, managing students during the test, and maintaining consumables and dummy models. Against this background, the results of Casey et al.,11 which showed no significant difference between self-assessed remediation and faculty-guided remediation, are worth noting. If self-assessed remediation suffices, medical schools can conserve manpower and material resources. This study has certain limitations. First, because it was not designed as a comparative study, the score difference between the original and make-up examinations may have been affected by multiple factors. Specifically, the source of the performance improvement is confounded: it could reflect repeated practice before re-filming, the change in learning method, or both, and these contributions cannot be assessed separately. 
Vivekananda-Schmidt et al. reported the usefulness of videotaped OSCEs for assessing the joint examination skills of medical students, but that study used a fixed camera, so the quality, angle, and audio of the recordings were identical across students.12 A study by Kiehl et al., which also supported the usefulness of videotaped OSCEs, likewise used a tripod-mounted digital video camera, so quality differences between videos were negligible.13 In the present study, however, the video clips were taken by the students themselves, and three independent examiners were therefore needed to evaluate the clips in order to minimize the effect of quality differences between recordings. This introduced a further limitation: the two examinations were not fully equivalent in how they were scored. In the original examination, students were evaluated by one examiner, whereas in the make-up examination they were evaluated by three. Despite this difference, the present study adopted the self-videotaped format and accepted the resulting difference in the number of examiners, so that the benefit of offline feedback could be maximized without sacrificing the quality of evaluation. Finally, some checklist items were marked ‘not identified’ because of the physical limitations of a video clip. The main reasons were small object size, fine movements of the student that could not be observed, fine movements of the dummy model that could not be observed, and visual inspection by the student that could not be evaluated. Examples include evaluating the stylet tip of the endotracheal tube, pulling the penis to a right angle when inserting a Foley catheter, chest compressions in CPR, and visual inspection of the sclera to confirm jaundice. To overcome these limitations, a further study comparing self-videotaped and conventional examinations in the same examination round is required. In addition, we suggest the following guidelines to minimize the physical limitations confirmed in the present study. First, smartphone cameras should film the target in close proximity when recording small objects or the body surface. Second, where possible, examiners should infer the movement of a small object from the movement of larger objects when evaluating video clips. Third, a concrete checklist for each item should be prepared in advance; if a checklist item is ambiguous, it should be removed. Finally, any monitor or sensor used as an ancillary device should be clearly visible in the video clip. In spite of the above limitations, the present study validated the applicability of the FLEX model of blended learning to the OSCE, where it has not yet been widely used. Blended learning is not a teaching method that will fade away as the COVID-19 crisis recedes, but one that will be applied extensively in the future. Medical schools therefore need to consider how to apply blended learning in their curricula, and the present study may serve as one indicator for doing so. In particular, because the OSCE curriculum, traditionally delivered as face-to-face education, has now been shown to benefit from blended learning, the present study provides useful baseline material for future medical school curricula.
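To make the scoring scheme and the statistical comparison described in the Methods concrete, here is a minimal sketch in Python. It is illustrative only: the proportional conversion formula, the helper names, and the paired score values are assumptions (the article reports only the converted totals, the 70-point cutoff, and the test used), and the study's raw data are not published.

```python
# Minimal sketch of the scoring and comparison described in the Methods.
# The proportional conversion and all score values are assumptions for
# illustration; SciPy is used here, though the article does not name a tool.
from scipy.stats import wilcoxon

PASS_CUTOFF = 70  # same cutoff as the original examination

def converted_score(skills_yes: int, skills_items: int, likert_mean: float) -> float:
    """Scale the skills domain to 80 points and the proficiency domain
    (mean of 5-point Likert ratings) to 20 points, for a 100-point total."""
    return 80 * skills_yes / skills_items + 20 * likert_mean / 5

# Hypothetical paired converted scores (100-point scale) for one OSCE item.
original = [57, 62, 55, 60, 58, 65, 52, 59, 61, 56, 63]
make_up  = [74, 78, 70, 72, 75, 80, 68, 73, 77, 71, 76]

stat, p = wilcoxon(original, make_up)  # paired Wilcoxon signed-rank test
print(f"W = {stat}, p = {p:.4f}")
print("passed:", sum(s >= PASS_CUTOFF for s in make_up), "of", len(make_up))
```

Credit for a checklist item in the make-up examination, where at least 2 of the 3 examiners must mark ‘yes’, is a simple majority vote and could be computed per item as sum(marks) >= 2.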
Background: For OSCE (Objective Structured Clinical Examination) scoring, medical schools must bring many clinical experts together in the same place, which is very risky in the context of the coronavirus pandemic. However, if the FLEX model, with its properties of self-directed learning and offline feedback, is applied to the OSCE, experts' evaluation of medical students' self-video clips can provide a safe and effective evaluation environment for both universities and students. The present study investigated the validity of the FLEX model for evaluating the OSCE in a small group of medical students. Methods: Sixteen third-year medical students who failed the OSCE were required to take a make-up examination by videotaping the failed items and submitting the clips online. Scores on the original and make-up examinations were compared using the paired Wilcoxon signed-rank test, and a post-hoc questionnaire was conducted. Results: Scores on the make-up examination were significantly higher than those on the original examination. The significance was maintained when scores were compared separately for the skills and proficiency domains. In terms of preference, students were largely in favor of the self-videotaped examination, primarily because of the availability of self-practice. Conclusions: The FLEX model can be applied effectively in medical education, especially for OSCE evaluation.
null
null
2,467
253
[ 97 ]
5
[ "examination", "study", "video", "students", "learning", "self", "education", "model", "clips", "skills" ]
[ "called blended learning", "online education appeared", "online learning activities", "blended learning covid", "offline online education" ]
null
null
[CONTENT] Checklist | Smart Phone | Clinical Skill | Competence | Video Recording | COVID-19 Pandemic [SUMMARY]
[CONTENT] Checklist | Smart Phone | Clinical Skill | Competence | Video Recording | COVID-19 Pandemic [SUMMARY]
[CONTENT] Checklist | Smart Phone | Clinical Skill | Competence | Video Recording | COVID-19 Pandemic [SUMMARY]
null
[CONTENT] Checklist | Smart Phone | Clinical Skill | Competence | Video Recording | COVID-19 Pandemic [SUMMARY]
null
[CONTENT] Clinical Competence | Education, Medical | Humans | Learning | Pandemics | Schools, Medical | Students, Medical [SUMMARY]
[CONTENT] Clinical Competence | Education, Medical | Humans | Learning | Pandemics | Schools, Medical | Students, Medical [SUMMARY]
[CONTENT] Clinical Competence | Education, Medical | Humans | Learning | Pandemics | Schools, Medical | Students, Medical [SUMMARY]
null
[CONTENT] Clinical Competence | Education, Medical | Humans | Learning | Pandemics | Schools, Medical | Students, Medical [SUMMARY]
null
[CONTENT] called blended learning | online education appeared | online learning activities | blended learning covid | offline online education [SUMMARY]
[CONTENT] called blended learning | online education appeared | online learning activities | blended learning covid | offline online education [SUMMARY]
[CONTENT] called blended learning | online education appeared | online learning activities | blended learning covid | offline online education [SUMMARY]
null
[CONTENT] called blended learning | online education appeared | online learning activities | blended learning covid | offline online education [SUMMARY]
null
[CONTENT] examination | study | video | students | learning | self | education | model | clips | skills [SUMMARY]
[CONTENT] examination | study | video | students | learning | self | education | model | clips | skills [SUMMARY]
[CONTENT] examination | study | video | students | learning | self | education | model | clips | skills [SUMMARY]
null
[CONTENT] examination | study | video | students | learning | self | education | model | clips | skills [SUMMARY]
null
[CONTENT] learning | model | online | education | offline | video | self video | self | medical | clinical [SUMMARY]
[CONTENT] examination | domain | subject | study | conducted | original examination | original | students | consent | converted [SUMMARY]
[CONTENT] examination | item | average | fi | 15 | score | cpr | students | self | self videotaped [SUMMARY]
null
[CONTENT] examination | study | learning | subject | video | students | self | model | education | consent [SUMMARY]
null
[CONTENT] OSCE ||| FLEX | OSCE ||| FLEX | OSCE [SUMMARY]
[CONTENT] Sixteen 3rd | OSCE ||| Paired Wilcoxon Signed Rank Test [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] OSCE ||| FLEX | OSCE ||| FLEX | OSCE ||| Sixteen | 3rd | OSCE ||| Paired Wilcoxon Signed Rank Test ||| ||| ||| ||| FLEX | OSCE [SUMMARY]
null
Trajectories of the current situation and characteristics of workplace violence among nurses: a nine-year follow-up study.
34763686
Workplace violence (WPV) among nurses has become an increasingly serious public health issue worldwide. Investigating the status quo and characteristics of WPV among nurses in different time periods can help hospital managers understand the current status of WPV and its trends over time. This study aimed to understand the current situation of WPV among nurses in Suzhou general hospitals from 2010 to 2019 and analyze changes over time.
BACKGROUND
A cross-sectional study was conducted to investigate 942, 2,110 and 2,566 nurses in the same 6 general hospitals in Suzhou in 2010, 2015 and 2019, respectively. This study used the revised version of the hospital WPV questionnaire. The count data are described as frequencies and percentages, and the measurement data are represented as means and standard deviations. The general data of nurses during different time periods, the incidence of WPV, nurses' cognition and attitudes toward WPV and the attitudes and measures of hospitals regarding WPV were analyzed by the chi-square test.
METHODS
The incidence of WPV among nurses in Suzhou general hospitals in 2015 (69.0 %) and in 2019 (68.4 %) was higher than the incidence of 62.4 % in 2010 (P<0.05), and there were significant differences among periods in the specific types of violence (P<0.05). Nurses who participated in the surveys in 2015 and 2019 scored higher on "having heard of WPV before", "thinking WPV coping management organizations are needed" and "supporting a zero-tolerance policy" than those who participated in 2010 (P<0.05). The attitudes and responses of hospitals with regard to WPV among nurses have greatly improved, as evidenced by the results for the items "offering training", "encouraging reporting of WPV to supervisors", "equipped with a WPV managing department", "handling WPV efficiently" and "hospital's attitudes" (P<0.005).
RESULTS
Despite an increase in nurses' awareness and attitudes regarding WPV and significant improvements in hospitals' attitudes and responses to WPV, the incidence of WPV remains high. Hospitals should continue to explore scientific training modes that are in accordance with the needs of nurses to reduce the incidence of WPV.
CONCLUSIONS
[ "Cross-Sectional Studies", "Follow-Up Studies", "Humans", "Nurses", "Surveys and Questionnaires", "Workplace", "Workplace Violence" ]
8582131
Background
According to the World Health Organization (WHO), workplace violence (WPV) can be defined as “incidents where staff are abused, threatened or assaulted in circumstances related to their work, involving an explicit or implicit challenge to their safety, well-being or health”. WPV can manifest as physical violence or psychological violence in different forms [1]. In recent years, as the reform of China’s medical system has entered the “deep water zone”, the doctor-patient relationship has become increasingly tense, and doctor-patient conflict is gradually emerging. As the main manifestation of doctor-patient conflict, the incidence of hospital WPV is increasing yearly. As the main workforce in medical and health care institutions, nurses have become the main victims of WPV due to the nature of their work, which requires direct contact and frequent communication with patients [2–5]. Studies from outside of China have shown that approximately 70 %-90 % of nurses have experienced one or more types of WPV [6–8]. In China, cross-sectional investigations of WPV among nurses have reported an incidence of 71.9 %-84.6 %, similar to the incidence reported in other countries [9–11]. These findings show that WPV among nurses has become a common phenomenon in both developing and developed countries.

WPV is a very serious public health problem that not only causes varying degrees of physical and psychological injury to nurses but also has negative effects on hospitals [12, 13]. At the individual level, nurses may suffer varying degrees of physiological harm, such as chronic pain, elevated blood pressure, gastrointestinal disorders, nightmares and even death [14, 15]. Regarding the psychological effects of WPV, in addition to feelings of grievance, anger and anxiety, nurses may experience insomnia, suicidal ideation and post-traumatic stress disorder [16–18]. At the hospital level, WPV reduces nurses' enthusiasm for their work and the quality of their work and increases their turnover rate, which affects the stability of the nursing team [19, 20]. In summary, WPV among nurses is characterized by a high incidence rate and great harmfulness.

Since the 1980s, scholars in China and abroad have sought to reduce the incidence of hospital WPV and improve nurses' ability to respond through a series of measures, including improving medical laws and regulations [21], improving the hospital environment and treatment procedures [22], improving nurses' communication skills [23], and focusing on WPV training [24]. However, a review of the current domestic and foreign literature found that the incidence of WPV among nurses remains high [25, 26]. Some scholars [27] have pointed out that, owing to the lack of long-term tracking data on the occurrence of WPV among nurses, existing prevention and intervention measures lack specificity. A study by Spelten et al. [28] pointed out that reducing the incidence of WPV first requires a comprehensive understanding of the dynamic development of violence toward nurses, including determining the frequency and intensity of WPV events and assessing the long-term effects of violence on staff. However, most existing WPV studies are short-term status surveys, and few studies have continuously compared the WPV status and characteristics of nurses in a given region. 
This study aimed to analyze the current situation and characteristics of WPV among nurses in Suzhou general hospitals from 2010 to 2019 to provide information that can help hospital managers understand the status quo of WPV among nurses and trends in WPV over time.
null
null
Results
Questionnaire response rate
In 2010, 2015 and 2019, 1,000, 2,132 and 2,700 questionnaires were distributed, respectively, and 942, 2,110 and 2,566 valid questionnaires were recovered, for effective response rates of 94.20 %, 98.97 % and 95.04 %.

Nurses’ general information
A total of 5,618 nurses completed the three surveys, including 388 emergency nurses, 618 outpatient nurses and 4,612 ward nurses. The age of the respondents ranged from 20 to 63 years, with an average age of 31.41 ± 5.82 years, and the proportion of respondents < 30 years old was 51.6 %. The number of years worked ranged from 1 to 37, with an average of 8.85 ± 7.45 years. Of the respondents, 23.2 % had worked ≤ 5 years; 52.8 % had a bachelor’s degree; and 42.7 % had junior professional titles. The demographic characteristics are presented in Table 1.

Table 1. Comparison of the demographic characteristics of the nurses, n (%)

| Items | Categories | 2010 | 2015 | 2019 | χ2/Z | P |
|---|---|---|---|---|---|---|
| Age (years) | <30 | 506 (53.7) | 1080 (51.2) | 1311 (51.1) | 16.915* | 0.010 |
| | 30~39 | 348 (36.9) | 737 (34.9) | 933 (36.4) | | |
| | 40~49 | 75 (8.0) | 236 (11.2) | 278 (10.8) | | |
| | ≥50 | 13 (1.4) | 57 (2.7) | 44 (1.7) | | |
| Gender | Female | 894 (94.9) | 2049 (97.1) | 2396 (93.4) | 34.249 | <0.001 |
| | Male | 48 (5.1) | 61 (2.9) | 170 (6.6) | | |
| Marital status | Married | 605 (64.2) | 1430 (67.8) | 1628 (63.4) | 17.257 | 0.002 |
| | Single | 337 (35.8) | 665 (31.5) | 922 (35.9) | | |
| | Other | 0 (0.0) | 15 (0.7) | 16 (0.6) | | |
| Only child | Yes | 392 (41.6) | 1098 (52.0) | 1291 (50.3) | 29.551 | <0.001 |
| | No | 550 (58.4) | 1012 (48.0) | 1275 (49.7) | | |
| Education level | Technical secondary degree or below | 62 (6.6) | 63 (3.0) | 33 (1.3) | 385.760* | <0.001 |
| | Junior college | 568 (60.3) | 1008 (47.8) | 811 (31.6) | | |
| | Bachelor’s degree | 305 (32.4) | 1019 (48.3) | 1656 (64.5) | | |
| | Master’s degree or above | 7 (0.7) | 20 (0.9) | 66 (2.6) | | |
| Years worked | ≤5 | 194 (20.6) | 381 (18.1) | 726 (28.3) | 85.897* | <0.001 |
| | 6~10 | 294 (31.2) | 775 (36.7) | 847 (33.0) | | |
| | 11~15 | 204 (21.7) | 384 (18.2) | 428 (16.7) | | |
| | ≥16 | 250 (26.5) | 570 (27.0) | 565 (22.0) | | |
| Professional title | Senior | 324 (34.4) | 687 (32.6) | 592 (23.1) | 120.281* | <0.001 |
| | Medium | 288 (30.6) | 640 (30.3) | 688 (26.8) | | |
| | Junior | 330 (30.5) | 783 (37.1) | 1286 (50.1) | | |
| Post | Nurse | 901 (95.6) | 1977 (93.7) | 2432 (94.8) | 5.401 | 0.067 |
| | Head nurse | 41 (4.4) | 133 (6.3) | 134 (5.2) | | |
| Employment type | Official staff | 495 (52.5) | 1068 (50.6) | 1213 (47.3) | 9.878 | 0.043 |
| | Contract staff | 437 (46.4) | 1016 (48.2) | 1317 (51.3) | | |
| | Temporary staff | 10 (1.1) | 26 (1.2) | 36 (1.4) | | |
| Department | Emergency | 59 (6.3) | 168 (8.0) | 161 (6.3) | 77.396 | <0.001 |
| | Outpatient | 159 (16.9) | 265 (12.6) | 194 (7.6) | | |
| | Ward | 724 (76.9) | 1677 (79.5) | 2211 (86.2) | | |

*: The Mann-Whitney U test was used.

Incidence of WPV among nurses in 2010, 2015, and 2019
The incidence of WPV among nurses was 62.4 % in 2010, 69.0 % in 2015 and 68.4 % in 2019 (P=0.001). Compared with 2010, the incidence increased significantly in both 2015 and 2019 (P<0.001 and P=0.001, respectively). The details are shown in Table 2.

Table 2. Comparison of the incidence of WPV among nurses in different years, n (%)

| Year | Quantity | Not exposed to WPV | Exposed to WPV | χ2 | P |
|---|---|---|---|---|---|
| T1 (2010) | 942 | 354 (37.6) | 588 (62.4) | 13.979 | 0.001 |
| T2 (2015) | 2110 | 655 (31.0) | 1455 (69.0) | | |
| T3 (2019) | 2566 | 812 (31.6) | 1754 (68.4) | | |
| Multiple comparison: T1 vs. T2 | | | | 12.575 | <0.001 |
| Multiple comparison: T1 vs. T3 | | | | 10.938 | 0.001 |
| Multiple comparison: T2 vs. T3 | | | | 0.195 | 0.681 |

Incidence of different forms of WPV among nurses in 2010, 2015, and 2019
In terms of the forms of WPV toward nurses in different years, the proportion of nurses who experienced sexual harassment was highest in 2010 (4.1 %); the proportions who suffered verbal abuse (66.2 %) and verbal threats (46.3 %) were highest in 2015; and the proportion who experienced physical assault was highest in 2019 (13.1 %). The overall comparisons across years and across types of WPV showed significant differences (P<0.05). Further multiple comparisons showed highly statistically significant differences between the total numbers of WPV incidents in 2010 and 2015 and between the numbers of physical assault incidents in 2015 and 2019 (P<0.001). The details are shown in Table 3.

Table 3. Comparison of different forms of WPV among nurses in different years, n (%)

| Year | Quantity | Exposed to WPV | Verbal abuse | Verbal threat | Physical assault | Sexual harassment |
|---|---|---|---|---|---|---|
| T1 (2010) | 942 | 588 (62.4) | 570 (60.5) | 387 (41.1) | 93 (9.9) | 39 (4.1) |
| T2 (2015) | 2110 | 1455 (69.0) | 1396 (66.2) | 977 (46.3) | 211 (10.0) | 55 (2.6) |
| T3 (2019) | 2566 | 1754 (68.4) | 1651 (64.3) | 1116 (43.5) | 335 (13.1) | 102 (4.0) |
| χ2 | | 13.979 | 28.588 | 19.423 | 26.865 | 16.185 |
| P | | 0.001 | <0.001 | 0.004 | <0.001 | 0.013 |
| P (T1 vs. T2) | | <0.001 | 0.012 | 0.440 | 0.232 | 0.025 |
| P (T1 vs. T3) | | 0.001 | 0.005 | 0.250 | 0.048 | 0.303 |
| P (T2 vs. T3) | | 0.681 | 0.001 | 0.020 | <0.001 | 0.010 |

Cognition and attitudes of nurses toward WPV in 2010, 2015 and 2019
Compared with 2010, more of the nurses surveyed in 2015 and 2019 said that they had heard of WPV before, and the differences were statistically significant (PT1−T2=0.020, PT1−T3=0.032). The proportion of nurses who thought that preservice training should include WPV programs was lower in 2019 than in 2010 and 2015 (PT1−T3=0.002, PT2−T3=0.001), but more nurses in 2019 thought they would benefit from WPV training (PT1−T3=0.016, PT2−T3=0.010). Clinical nurses in 2015 and 2019 were also more likely than nurses in 2010 to approve of hospitals setting up WPV coping management organizations and to support a zero-tolerance policy toward WPV (P<0.001). In addition, more than 85 % of the nurses in all three surveys expressed willingness to attend WPV training and thought they would benefit from it. The details are shown in Table 4.

Table 4. Comparison of cognition and attitudes toward WPV among nurses in different years, n (%)

| Items | Categories | 2010 (T1) | 2015 (T2) | 2019 (T3) | χ2 | P | P (T1 vs. T2) | P (T1 vs. T3) | P (T2 vs. T3) |
|---|---|---|---|---|---|---|---|---|---|
| Having heard of WPV before | Yes | 763 (81.0) | 1782 (84.5) | 2157 (84.7) | 6.166 | 0.046 | 0.020 | 0.032 | 0.717 |
| | No | 179 (19.0) | 328 (15.5) | 916 (16.3) | | | | | |
| WPV is inevitable at work | Yes | 712 (75.6) | 1594 (75.5) | 1962 (76.5) | 0.625 | 0.732 | 1.000 | 0.591 | 0.470 |
| | No | 230 (24.4) | 516 (24.5) | 604 (24.0) | | | | | |
| Thinking WPV is not worth the fuss | Yes | 132 (14.0) | 264 (12.5) | 374 (14.6) | 4.258 | 0.119 | 0.268 | 0.704 | 0.044 |
| | No | 810 (86.0) | 1846 (87.5) | 2192 (85.4) | | | | | |
| Thinking preservice training should include training programs on WPV | Yes | 922 (97.9) | 2058 (97.5) | 2457 (95.8) | 16.192 | <0.001 | 0.608 | 0.002 | 0.001 |
| | No | 20 (2.1) | 52 (2.5) | 109 (4.2) | | | | | |
| Willing to attend the WPV training | Yes | 874 (92.8) | 1968 (93.3) | 2405 (93.7) | 1.083 | 0.582 | 0.642 | 0.317 | 0.551 |
| | No | 68 (7.2) | 142 (6.7) | 151 (6.3) | | | | | |
| Thinking training on WPV would be beneficial | Yes | 822 (87.3) | 1851 (87.7) | 2313 (90.1) | 9.279 | 0.010 | 0.722 | 0.016 | 0.010 |
| | No | 120 (12.7) | 259 (12.3) | 253 (9.9) | | | | | |
| Thinking WPV coping management organizations are needed | Yes | 903 (95.9) | 2071 (98.2) | 2519 (98.2) | 19.082 | <0.001 | <0.001 | <0.001 | 1.000 |
| | No | 39 (4.1) | 39 (1.8) | 47 (1.8) | | | | | |
| Supporting a “zero-tolerance” policy | Yes | 618 (65.6) | 1882 (89.2) | 2310 (90.0) | 368.750 | <0.001 | <0.001 | <0.001 | 0.360 |
| | No | 324 (34.4) | 228 (10.8) | 256 (10.0) | | | | | |

Attitudes and responses of hospitals to WPV toward nurses in 2010, 2015 and 2019
Over the past 9 years, hospitals' attitudes and responses to WPV toward nurses have greatly improved, as indicated by five items: "offering training", "encouraging reporting of WPV to supervisors", "equipped with a WPV management department", "handling WPV efficiently", and "hospital's attitudes" (P<0.001). However, most hospitals still had not developed training. Regarding the degree of emphasis on WPV, hospitals paid more attention to WPV toward physicians than toward nurses (P=0.013). The details are shown in Table 5.

Table 5. Comparison of attitudes and responses of hospitals toward WPV among nurses in different years, n (%)

| Items | Categories | 2010 (T1) | 2015 (T2) | 2019 (T3) | χ2 | P | P (T1 vs. T2) | P (T1 vs. T3) | P (T2 vs. T3) |
|---|---|---|---|---|---|---|---|---|---|
| Offering training | Yes | 91 (9.7) | 487 (23.1) | 840 (32.7) | 202.794 | <0.001 | <0.001 | <0.001 | <0.001 |
| | No | 851 (90.3) | 1623 (76.9) | 1726 (67.3) | | | | | |
| Encouraging reporting of WPV to supervisors | Yes | 585 (62.1) | 1575 (74.6) | 1849 (72.1) | 51.255 | <0.001 | <0.001 | <0.001 | 0.050 |
| | No | 357 (37.9) | 535 (25.4) | 717 (27.9) | | | | | |
| Equipped with a WPV management department | Yes | 432 (45.9) | 1215 (57.6) | 1451 (56.5) | 39.942 | <0.001 | <0.001 | <0.001 | 0.495 |
| | No | 510 (54.1) | 895 (42.4) | 1115 (43.5) | | | | | |
| Handling WPV efficiently | Yes | 264 (28.0) | 791 (37.5) | 1193 (46.5) | 106.884 | <0.001 | <0.001 | <0.001 | <0.001 |
| | No | 678 (72.0) | 1319 (62.5) | 1373 (53.5) | | | | | |
| Hospital’s attitudes | Protecting the interests of staff | 91 (9.7) | 574 (27.2) | 775 (30.2) | 283.785 | <0.001 | <0.001 | <0.001 | <0.001 |
| | Handling WPV fairly based on the facts | 425 (45.1) | 855 (40.5) | 1194 (46.5) | | | | | |
| | Turning a blind eye | 283 (30.0) | 549 (26.0) | 470 (18.3) | | | | | |
| | Punishing staff regardless of the cause | 143 (15.2) | 132 (6.3) | 127 (4.9) | | | | | |
| Degree of emphasis | Physician > Nurse | 403 (42.8) | 959 (45.5) | 1117 (43.5) | 12.649 | 0.013 | 0.002 | <0.001 | 0.346 |
| | Nurse > Physician | 0 (0.0) | 22 (1.0) | 23 (0.9) | | | | | |
| | Physician = Nurse | 539 (57.2) | 1129 (53.5) | 1426 (55.6) | | | | | |
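As a cross-check on Table 2, the overall chi-square statistic can be recomputed directly from the published counts. The authors report using SPSS 18.0; the sketch below uses Python with SciPy purely for illustration.

```python
# Recompute the overall chi-square test of Table 2 from its published
# counts (not exposed vs. exposed to WPV in 2010, 2015 and 2019).
# The study itself used SPSS 18.0; SciPy is used here only to illustrate.
from scipy.stats import chi2_contingency

observed = [
    [354, 588],    # T1 (2010): not exposed, exposed
    [655, 1455],   # T2 (2015)
    [812, 1754],   # T3 (2019)
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
# Expect chi2 close to the published 13.979, with dof = 2 and p near 0.001.
```

The "multiple comparison" rows in Table 2 correspond to the same test applied to each pair of years as a 2×2 table; chi2_contingency(pair, correction=False) reproduces those values approximately.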
Conclusions
The incidence of WPV among nurses in Suzhou general hospitals increased in 2015 and 2019 compared to that in 2010. The main form of WPV is verbal abuse. Despite the continuous improvement of nurses' awareness of WPV, there is still room for improvement in hospitals' attitudes and responses to WPV, especially in terms of actively carrying out WPV training. In conclusion, hospital managers should aim to comprehensively understand the dynamics of WPV, especially the trends of violence over long time periods, to reduce the incidence of WPV among nurses.
[ "Background", "Methods", "Study design", "Participants", "Inclusion criteria", "Exclusion criteria", "Ethics approval", "Measurement instruments", "General information questionnaire", "Revised version of the hospital WPV questionnaire", "Data collection", "Statistical analysis", "Questionnaire response rate", "Nurses’ general information", "Incidence of WPV among nurses in 2010, 2015, and 2019", "Incidence of different forms of WPV among nurses in 2010, 2015, and 2019", "Cognition and attitudes of nurses toward WPV in 2010, 2015 and 2019", "Attitudes and responses of hospitals to WPV towards nurses in 2010, 2015 and 2019", "Discussion" ]
[ "According to the World Health Organization (WHO), workplace violence (WPV) can be defined as “incidents where staff are abused, threatened or assaulted in circumstances related to their work, involving an explicit or implicit challenge to their safety, well-being or health”. WPV can manifest as physical violence or psychological violence in different forms [1]. In recent years, as the reform of China’s medical system has entered the “deep water zone”, the doctor-patient relationship is becoming increasingly tense, and doctor-patient conflict is gradually emerging. As the main manifestation of doctor-patient conflict, the incidence of hospital WPV is increasing yearly. As the main workforce in medical and health care institutions, nurses have become the main victims of WPV due to the nature of their work, which requires direct contact and frequent communication with patients [2–5]. Studies from outside of China have shown that approximately 70 %-90 % of nurses have experienced one or more types of WPV [6–8]. In China, cross-sectional investigations of WPV among nurses have shown that the incidence of WPV is 71.9 %-84.6 %, which is similar to the incidence reported in other countries [9–11]. These findings show that WPV among nurses has become a common phenomenon in both developing and developed countries.\nWPV is a very serious public health problem that not only causes different degrees of physical and psychological injury to nurses but also has negative effects on hospitals [12, 13]. At the individual level, nurses may suffer from varying degrees of physiological harm, such as chronic pain, elevated blood pressure, gastrointestinal disorders, nightmares and even death [14, 15]. Regarding the psychological effects of WPV, in addition to feelings of grievance, anger, anxiety and other emotions, nurses may experience symptoms of insomnia, suicidal ideation and post-traumatic stress disorder [16–18]. At the hospital level, WPV reduces the enthusiasm of nurses toward their work and the quality of their work and increases their turnover rate, which affects the stability of the nursing team [19, 20]. In summary, WPV among nurses is characterized by a high incidence rate and great harmfulness.\nSince the 1980 s, scholars at home and abroad have sought to reduce hospital WPV incidence and improve nurses’ ability to respond through a series of measures, including improving medical-related laws and regulations [21], improving the hospital environment and treatment procedures [22], improving nurses’ communication ability [23], and focusing on WPV training [24]. However, a review of the current domestic and foreign literature found that the incidence of WPV among nurses remains high [25, 26]. Some scholars [27] have pointed out that due to the lack of long-term tracking data on the occurrence of WPV among nurses, the existing prevention and intervention measures lack pertinence. A study by Spelten et al. [28] pointed out that reducing the incidence of WPV first requires a comprehensive understanding of the dynamic development process of violence toward nurses, including by determining the frequency and intensity of WPV events and assessing the long-term results of violence on staff. However, most of the existing WPV studies are short-term status surveys, and few studies continuously compare the WPV status and characteristics of nurses in a certain region. 
This study aimed to analyze the current situation and characteristics of WPV among nurses in Suzhou general hospitals from 2010 to 2019 to provide information that can help hospital managers understand the status quo of WPV among nurses and trends in WPV over time.", "Study design
In this cross-sectional study, structured questionnaires were used to investigate the occurrence of WPV among nurses in 6 general hospitals in Suzhou in 2010, 2015 and 2019.
Participants
A total of 942, 2,110 and 2,566 clinical nurses were selected from Suzhou general hospitals, by the convenience sampling method, in November of 2010, 2015 and 2019, respectively.
Inclusion criteria
(1) Employment as an in-service nurse in a clinical department with direct contact with patients in daily work.
(2) Professional certification.
(3) Completion of at least 1 year of clinical nursing work.
(4) Provision of a signed informed consent form and voluntary participation in the study.
Exclusion criteria
(1) Nurses off duty during the investigation, e.g., on maternity leave, sick leave, or vacation.
(2) Non-regular nurses and personnel undergoing refresher training in our hospital.
Ethics approval
The study was approved by the Medical Ethics Committee of the First Affiliated Hospital of Soochow University in China (No. 2,018,062). The participants were made aware of the aim and procedures of the study, and they all signed an informed consent form. The researchers guaranteed the confidentiality of the subjects' information and that it would be used only for research purposes. The voluntary principle was always followed during the study, and the participants could leave the study at any time.", "In this cross-sectional study, structured questionnaires were used to investigate the occurrence of WPV among nurses in 6 general hospitals in Suzhou in 2010, 2015 and 2019.", "A total of 942, 2,110 and 2,566 clinical nurses were selected from Suzhou general hospitals, by the convenience sampling method, in November of 2010, 2015 and 2019, respectively.", "(1) Employment as an in-service nurse in a clinical department with direct contact with patients in daily work.
(2) Professional certification.
(3) Completion of at least 1 year of clinical nursing work.
(4) Provision of a signed informed consent form and voluntary participation in the study.", "(1) Nurses off duty during the investigation, e.g., on maternity leave, sick leave, or vacation.
(2) Non-regular nurses and personnel undergoing refresher training in our hospital.", "The study was approved by the Medical Ethics Committee of the First Affiliated Hospital of Soochow University in China (No. 2,018,062). The participants were made aware of the aim and procedures of the study, and they all signed an informed consent form. The researchers guaranteed the confidentiality of the subjects' information and that it would be used only for research purposes. The voluntary principle was always followed during the study, and the participants could leave the study at any time.", "General information questionnaire
The questionnaire was designed by the research group to suit the research purpose and collected information on gender, age, education, whether the respondent was an only child, marital status, hospital grade, department, years worked, employment type and professional title.
Revised version of the hospital WPV questionnaire
This questionnaire, designed by Shengyong Wang [29] of Jinan University, includes 33 entries in 4 dimensions: the frequency of WPV occurrence in the past year, the most impressive WPV experience in the past year, cognition and attitudes toward WPV, and the attitudes and responses of hospitals to WPV. The questionnaire has been widely used in clinical practice and has good reliability and validity: the test-retest reliability was 0.803, and the average content validity of each item was 0.916.
Data collection
Two weeks before the beginning of the study, the head of the research team (which included 2 postgraduate tutors, 6 postgraduates, and 4 undergraduates) contacted the managers of each hospital by telephone or e-mail to request the support of the person in charge, and a notice was published on each hospital's website to recruit participants. The notice covered the location, time, purpose, significance, confidentiality principle and precautions of the questionnaire. Nurses willing to participate could go to a designated conference room to receive the questionnaire and complete it anonymously. Each hospital was staffed with 2 research team members, who could give guidance at any time if a nurse had questions while completing the questionnaire. After completing the questionnaire, each nurse placed it into the questionnaire recovery box in the conference room. At 6:00 p.m., after the nurses had completed the questionnaires, the recovery box was sealed by members of the research group and sent to the group's questionnaire archives for safekeeping.
Statistical analysis
Data management and analysis were performed using SPSS 18.0. The count data are described as frequencies and percentages, and the quantitative data are reported as means and standard deviations. The chi-square test was used to analyze the general data of the nurses from the different time periods, the incidence of WPV, nurses' cognition and attitudes regarding WPV, and the hospitals' attitudes toward WPV and corresponding measures. The Mann-Whitney U test was used to analyze the demographic data. Statistical significance was set at P<0.05.", "The questionnaire was designed by the research group to suit the research purpose and collected information on gender, age, education, whether the respondent was an only child, marital status, hospital grade, department, years worked, employment type and professional title.", "This questionnaire, designed by Shengyong Wang [29] of Jinan University, includes 33 entries in 4 dimensions: the frequency of WPV occurrence in the past year, the most impressive WPV experience in the past year, cognition and attitudes toward WPV, and the attitudes and responses of hospitals to WPV. The questionnaire has been widely used in clinical practice and has good reliability and validity: the test-retest reliability was 0.803, and the average content validity of each item was 0.916.", "Two weeks before the beginning of the study, the head of the research team (which included 2 postgraduate tutors, 6 postgraduates, and 4 undergraduates) contacted the managers of each hospital by telephone or e-mail to request the support of the person in charge, and a notice was published on each hospital's website to recruit participants. The notice covered the location, time, purpose, significance, confidentiality principle and precautions of the questionnaire. Nurses willing to participate could go to a designated conference room to receive the questionnaire and complete it anonymously. Each hospital was staffed with 2 research team members, who could give guidance at any time if a nurse had questions while completing the questionnaire. After completing the questionnaire, each nurse placed it into the questionnaire recovery box in the conference room. At 6:00 p.m., after the nurses had completed the questionnaires, the recovery box was sealed by members of the research group and sent to the group's questionnaire archives for safekeeping.", "Data management and analysis were performed using SPSS 18.0. The count data are described as frequencies and percentages, and the quantitative data are reported as means and standard deviations. The chi-square test was used to analyze the general data of the nurses from the different time periods, the incidence of WPV, nurses' cognition and attitudes regarding WPV, and the hospitals' attitudes toward WPV and corresponding measures. The Mann-Whitney U test was used to analyze the demographic data. Statistical significance was set at P<0.05.", "In 2010, 2015 and 2019, 1,000, 2,132 and 2,700 questionnaires were distributed, respectively, and 942, 2,110 and 2,566 valid questionnaires were recovered, for effective response rates of 94.20 %, 98.97 % and 95.04 %.", "A total of 5,618 nurses completed the three surveys, including 388 emergency nurses, 618 outpatient nurses and 4,612 ward nurses. The age of the respondents ranged from 20 to 63 years, with an average age of 31.41 ± 5.82 years, and the proportion of respondents < 30 years old was 51.6 %. The number of years worked ranged from 1 to 37, with an average of 8.85 ± 7.45 years. 
Results

Questionnaire response rate

In 2010, 2015 and 2019, 1,000, 2,132 and 2,700 questionnaires were distributed, respectively, and 942, 2,110 and 2,566 valid questionnaires were recovered, for effective response rates of 94.20 %, 98.97 % and 95.04 %.

Nurses' general information

A total of 5,618 nurses completed the three surveys, including 388 emergency nurses, 618 outpatient nurses and 4,612 ward nurses. The respondents ranged in age from 20 to 63 years (mean 31.41 ± 5.82 years), and 51.6 % were under 30 years old. Years worked ranged from 1 to 37 (mean 8.85 ± 7.45 years). Of the respondents, 23.2 % had worked ≤ 5 years, 52.8 % held a bachelor's degree, and 42.7 % held junior professional titles. The demographic characteristics are presented in Table 1.

Table 1 Comparison of the demographic characteristics of the nurses, n (%)

| Items | Categories | 2010 | 2015 | 2019 | χ²/Z | P |
|---|---|---|---|---|---|---|
| Age (years) | <30 | 506 (53.7) | 1080 (51.2) | 1311 (51.1) | 16.915* | 0.010 |
| | 30~39 | 348 (36.9) | 737 (34.9) | 933 (36.4) | | |
| | 40~49 | 75 (8.0) | 236 (11.2) | 278 (10.8) | | |
| | ≥50 | 13 (1.4) | 57 (2.7) | 44 (1.7) | | |
| Gender | Female | 894 (94.9) | 2049 (97.1) | 2396 (93.4) | 34.249 | <0.001 |
| | Male | 48 (5.1) | 61 (2.9) | 170 (6.6) | | |
| Marital status | Married | 605 (64.2) | 1430 (67.8) | 1628 (63.4) | 17.257 | 0.002 |
| | Single | 337 (35.8) | 665 (31.5) | 922 (35.9) | | |
| | Other | 0 (0.0) | 15 (0.7) | 16 (0.6) | | |
| Only child | Yes | 392 (41.6) | 1098 (52.0) | 1291 (50.3) | 29.551 | <0.001 |
| | No | 550 (58.4) | 1012 (48.0) | 1275 (49.7) | | |
| Education level | Technical secondary degree or below | 62 (6.6) | 63 (3.0) | 33 (1.3) | 385.760* | <0.001 |
| | Junior college | 568 (60.3) | 1008 (47.8) | 811 (31.6) | | |
| | Bachelor's degree | 305 (32.4) | 1019 (48.3) | 1656 (64.5) | | |
| | Master's degree or above | 7 (0.7) | 20 (0.9) | 66 (2.6) | | |
| Years worked | ≤5 | 194 (20.6) | 381 (18.1) | 726 (28.3) | 85.897* | <0.001 |
| | 6~10 | 294 (31.2) | 775 (36.7) | 847 (33.0) | | |
| | 11~15 | 204 (21.7) | 384 (18.2) | 428 (16.7) | | |
| | ≥16 | 250 (26.5) | 570 (27.0) | 565 (22.0) | | |
| Professional title | Senior | 324 (34.4) | 687 (32.6) | 592 (23.1) | 120.281* | <0.001 |
| | Medium | 288 (30.6) | 640 (30.3) | 688 (26.8) | | |
| | Junior | 330 (30.5) | 783 (37.1) | 1286 (50.1) | | |
| Post | Nurse | 901 (95.6) | 1977 (93.7) | 2432 (94.8) | 5.401 | 0.067 |
| | Head nurse | 41 (4.4) | 133 (6.3) | 134 (5.2) | | |
| Employment type | Official staff | 495 (52.5) | 1068 (50.6) | 1213 (47.3) | 9.878 | 0.043 |
| | Contract staff | 437 (46.4) | 1016 (48.2) | 1317 (51.3) | | |
| | Temporary staff | 10 (1.1) | 26 (1.2) | 36 (1.4) | | |
| Department | Emergency | 59 (6.3) | 168 (8.0) | 161 (6.3) | 77.396 | <0.001 |
| | Outpatient | 159 (16.9) | 265 (12.6) | 194 (7.6) | | |
| | Ward | 724 (76.9) | 1677 (79.5) | 2211 (86.2) | | |

*: The Mann-Whitney U test was used.

Incidence of WPV among nurses in 2010, 2015, and 2019

The incidence of WPV among nurses was 62.4 % in 2010, 69.0 % in 2015 and 68.4 % in 2019 (P=0.001). Compared with 2010, the incidence was significantly higher in 2015 and 2019 (P<0.001 and P=0.001, respectively). The details are shown in Table 2.

Table 2 Comparison of the incidence of WPV among nurses in different years, n (%)

| Year | Quantity | Not exposed to WPV | Exposed to WPV | χ² | P |
|---|---|---|---|---|---|
| T1 (2010) | 942 | 354 (37.6) | 588 (62.4) | 13.979 | 0.001 |
| T2 (2015) | 2110 | 655 (31.0) | 1455 (69.0) | | |
| T3 (2019) | 2566 | 812 (31.6) | 1754 (68.4) | | |

Multiple comparison: T1−T2: χ²=12.575, P<0.001; T1−T3: χ²=10.938, P=0.001; T2−T3: χ²=0.195, P=0.681.
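The pairwise values beneath Table 2 can be reproduced by running a separate 2×2 chi-square test for each pair of years. With no continuity correction, the sketch below recovers the reported statistics, which suggests (though the paper does not state it) that uncorrected Pearson chi-square tests were used for the post-hoc comparisons.

```python
# Pairwise (post-hoc) chi-square comparisons between survey years, using the
# exposed / not-exposed counts from Table 2. Whether the authors adjusted the
# significance level for multiple testing is not stated and is left open here.
from itertools import combinations
from scipy.stats import chi2_contingency

counts = {  # year: (not exposed, exposed)
    "T1 (2010)": (354, 588),
    "T2 (2015)": (655, 1455),
    "T3 (2019)": (812, 1754),
}

for (y1, c1), (y2, c2) in combinations(counts.items(), 2):
    chi2_stat, p_value, _, _ = chi2_contingency([c1, c2], correction=False)
    print(f"{y1} vs {y2}: chi2 = {chi2_stat:.3f}, P = {p_value:.3f}")
# -> T1-T2: chi2 = 12.575, P < 0.001; T1-T3: chi2 = 10.938, P = 0.001;
#    T2-T3: chi2 = 0.195, P = 0.681, matching the values under Table 2.
```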
Incidence of different forms of WPV among nurses in 2010, 2015, and 2019

Across the three survey years, the proportion of nurses who experienced sexual harassment was highest in 2010 (4.1 %), the proportions who suffered verbal abuse (66.2 %) and verbal threats (46.3 %) were highest in 2015, and the proportion who experienced physical assault was highest in 2019 (13.1 %). The overall comparisons of WPV across years and across types showed significant differences (P<0.05). Multiple comparisons further showed highly significant differences between the total numbers of WPV incidents in 2010 and 2015 and between the numbers of physical assault incidents in 2015 and 2019 (P<0.001). The details are shown in Table 3.

Table 3 Comparison of different forms of WPV among nurses in different years, n (%)

| Year | Quantity | Exposed to WPV | Verbal abuse | Verbal threat | Physical assault | Sexual harassment |
|---|---|---|---|---|---|---|
| T1 (2010) | 942 | 588 (62.4) | 570 (60.5) | 387 (41.1) | 93 (9.9) | 39 (4.1) |
| T2 (2015) | 2110 | 1455 (69.0) | 1396 (66.2) | 977 (46.3) | 211 (10.0) | 55 (2.6) |
| T3 (2019) | 2566 | 1754 (68.4) | 1651 (64.3) | 1116 (43.5) | 335 (13.1) | 102 (4.0) |
| χ² | | 13.979 | 28.588 | 19.423 | 26.865 | 16.185 |
| P | | 0.001 | <0.001 | 0.004 | <0.001 | 0.013 |
| P (T1−T2) | | <0.001 | 0.012 | 0.440 | 0.232 | 0.025 |
| P (T1−T3) | | 0.001 | 0.005 | 0.250 | 0.048 | 0.303 |
| P (T2−T3) | | 0.681 | 0.001 | 0.020 | <0.001 | 0.010 |
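Because the steady rise in physical assault is a central point of the Discussion below, a chi-square test for linear trend (a Cochran-Armitage-type test) is a natural supplementary check on these counts. The paper does not report such a test; the hand-computed sketch below, using the physical assault counts from Table 3, is only an illustration, and the year scores (0, 5, 9) are an assumption.

```python
# Illustrative chi-square test for linear trend (Cochran-Armitage style) in the
# proportion of nurses reporting physical assault across 2010, 2015 and 2019.
# This trend test is NOT reported in the paper; it is a supplementary sketch.
import numpy as np
from scipy.stats import chi2

cases = np.array([93, 211, 335])       # physical assault counts (Table 3)
totals = np.array([942, 2110, 2566])   # respondents per survey year
scores = np.array([0.0, 5.0, 9.0])     # years since 2010 as scores (an assumption)

n_total = totals.sum()
p_bar = cases.sum() / n_total                  # pooled proportion
s_bar = (scores * totals).sum() / n_total      # weighted mean score

# Trend statistic: T = [sum_i r_i (s_i - s_bar)]^2 /
#   [p_bar (1 - p_bar) sum_i n_i (s_i - s_bar)^2], chi-square with 1 df.
numerator = (cases * (scores - s_bar)).sum() ** 2
denominator = p_bar * (1 - p_bar) * (totals * (scores - s_bar) ** 2).sum()
t_stat = numerator / denominator
p_value = chi2.sf(t_stat, df=1)
print(f"trend chi2 = {t_stat:.3f}, P = {p_value:.4f}")
```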
Cognition and attitudes of nurses toward WPV in 2010, 2015 and 2019

Compared with 2010, significantly more of the nurses surveyed in 2015 and 2019 said that they had heard of WPV before (PT1−T2=0.020, PT1−T3=0.032). WPV content was less often included in preservice training in 2019 than in 2010 and 2015 (PT1−T3=0.002, PT2−T3=0.001), but more nurses thought they would benefit from WPV training (PT1−T3=0.016, PT2−T3=0.010). Clinical nurses in 2015 and 2019 were also more likely than nurses in 2010 to approve of hospitals setting up organizations to handle WPV and to support a zero-tolerance policy toward WPV (P<0.001). In addition, more than 85 % of the nurses in all three surveys expressed a willingness to attend WPV training and thought they would benefit from it. The details are shown in Table 4.

Table 4 Comparison of cognition and attitudes toward WPV among nurses in different years, n (%)

| Items | Categories | 2010 (T1) | 2015 (T2) | 2019 (T3) | χ² | P | P (T1−T2) | P (T1−T3) | P (T2−T3) |
|---|---|---|---|---|---|---|---|---|---|
| Having heard of WPV before | Yes | 763 (81.0) | 1782 (84.5) | 2157 (84.7) | 6.166 | 0.046 | 0.020 | 0.032 | 0.717 |
| | No | 179 (19.0) | 328 (15.5) | 916 (16.3) | | | | | |
| WPV is inevitable at work | Yes | 712 (75.6) | 1594 (75.5) | 1962 (76.5) | 0.625 | 0.732 | 1.000 | 0.591 | 0.470 |
| | No | 230 (24.4) | 516 (24.5) | 604 (24.0) | | | | | |
| Thinking WPV is not worth the fuss | Yes | 132 (14.0) | 264 (12.5) | 374 (14.6) | 4.258 | 0.119 | 0.268 | 0.704 | 0.044 |
| | No | 810 (86.0) | 1846 (87.5) | 2192 (85.4) | | | | | |
| Thinking preservice training should include training programs on WPV | Yes | 922 (97.9) | 2058 (97.5) | 2457 (95.8) | 16.192 | <0.001 | 0.608 | 0.002 | 0.001 |
| | No | 20 (2.1) | 352 (2.5) | 109 (4.2) | | | | | |
| Willing to attend the WPV training | Yes | 874 (92.8) | 1968 (93.3) | 2405 (93.7) | 1.083 | 0.582 | 0.642 | 0.317 | 0.551 |
| | No | 68 (7.2) | 142 (6.7) | 151 (6.3) | | | | | |
| Thinking training on WPV would be beneficial | Yes | 822 (87.3) | 1851 (87.7) | 2313 (90.1) | 9.279 | 0.010 | 0.722 | 0.016 | 0.010 |
| | No | 120 (12.7) | 259 (12.3) | 253 (9.9) | | | | | |
| Thinking WPV coping management organizations are needed | Yes | 903 (95.9) | 2071 (98.2) | 2519 (98.2) | 19.082 | <0.001 | <0.001 | <0.001 | 1.000 |
| | No | 39 (4.1) | 39 (1.8) | 47 (1.8) | | | | | |
| Supporting a "zero-tolerance" policy | Yes | 618 (65.6) | 1882 (89.2) | 2310 (90.0) | 368.750 | <0.001 | <0.001 | <0.001 | 0.360 |
| | No | 324 (34.4) | 228 (10.8) | 256 (10.0) | | | | | |

Attitudes and responses of hospitals to WPV towards nurses in 2010, 2015 and 2019

Over the 9-year period, the hospitals' attitudes and responses to WPV towards nurses improved considerably in five respects: "offering training", "encouraging reporting of WPV to supervisors", "equipped with a WPV management department", "handling WPV efficiently" and "hospital's attitudes" (P<0.001). However, most hospitals had still not developed training. Regarding the degree of emphasis on WPV, hospitals paid more attention to WPV toward physicians than toward nurses (P=0.013). The details are shown in Table 5.

Table 5 Comparison of attitudes and responses of hospitals toward WPV among nurses in different years, n (%)

| Items | Categories | 2010 (T1) | 2015 (T2) | 2019 (T3) | χ² | P | P (T1−T2) | P (T1−T3) | P (T2−T3) |
|---|---|---|---|---|---|---|---|---|---|
| Offering training | Yes | 91 (9.7) | 487 (23.1) | 840 (32.7) | 202.794 | <0.001 | <0.001 | <0.001 | <0.001 |
| | No | 851 (90.3) | 1623 (76.9) | 1726 (67.3) | | | | | |
| Encouraging reporting of WPV to supervisors | Yes | 585 (62.1) | 1575 (74.6) | 1849 (72.1) | 51.255 | <0.001 | <0.001 | <0.001 | 0.050 |
| | No | 357 (37.9) | 535 (25.4) | 717 (27.9) | | | | | |
| Equipped with WPV management department | Yes | 432 (45.9) | 1215 (57.6) | 1451 (56.5) | 39.942 | <0.001 | <0.001 | <0.001 | 0.495 |
| | No | 510 (54.1) | 895 (42.4) | 1115 (43.5) | | | | | |
| Handling WPV efficiently | Yes | 264 (28.0) | 791 (37.5) | 1193 (46.5) | 106.884 | <0.001 | <0.001 | <0.001 | <0.001 |
| | No | 678 (72.0) | 1319 (62.5) | 1373 (53.5) | | | | | |
| Hospital's attitudes | Protecting the interests of staff | 91 (9.7) | 574 (27.2) | 775 (30.2) | 283.785 | <0.001 | <0.001 | <0.001 | <0.001 |
| | Handling WPV fairly based on the facts | 425 (45.1) | 855 (40.5) | 1194 (46.5) | | | | | |
| | Turning a blind eye | 283 (30.0) | 549 (26.0) | 470 (18.3) | | | | | |
| | Punishing staff regardless of the cause | 143 (15.2) | 132 (6.3) | 127 (4.9) | | | | | |
| Degree of emphasis | Physician>Nurse | 403 (42.8) | 959 (45.5) | 1117 (43.5) | 12.649 | 0.013 | 0.002 | <0.001 | 0.346 |
| | Nurse>Physician | 0 (0.0) | 22 (1.0) | 23 (0.9) | | | | | |
| | Physician=Nurse | 539 (57.2) | 1129 (53.5) | 1426 (55.6) | | | | | |
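As one concrete example of these between-year contrasts, the rise in "offering training" from 9.7 % (2010) to 32.7 % (2019) can be checked with a two-proportion z-test, whose square equals the uncorrected chi-square for a 2×2 table. This is a supplementary sketch, not the paper's SPSS procedure.

```python
# Supplementary check (not the paper's SPSS analysis): a two-proportion z-test
# for the rise in "offering training" between 2010 and 2019, using Table 5
# counts. For a 2x2 table, z squared equals the uncorrected Pearson chi-square.
from statsmodels.stats.proportion import proportions_ztest

trained = [91, 840]     # nurses reporting that WPV training was offered, 2010 vs 2019
surveyed = [942, 2566]  # respondents in each year

z_stat, p_value = proportions_ztest(trained, surveyed)
print(f"z = {z_stat:.3f}, P = {p_value:.2g}")
# Strongly significant, consistent with the P < 0.001 reported in Table 5.
```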
Discussion

This study found that the overall incidence of WPV towards nurses in Suzhou general hospitals rose from 62.4 % in 2010 to 68.4 % in 2019, with a significant increase between 2010 and 2015 (P<0.05) followed by a plateau between 2015 and 2019 (P>0.05). The incidence reported here is lower than the overall WPV incidence of 76 % among 762 registered nurses in a study from the U.S. [30], higher than that reported in a study conducted in Botswana (44.1 %) [31], and similar to that reported by Ren et al. (68.1 %) [32]. The incidence of WPV towards nurses varies by country, region, social background and sample size, but WPV against nurses has become increasingly common both in China and abroad, in developing and developed countries alike. A comparison of the types of WPV experienced across the three study years showed that verbal abuse was the most common form, which is consistent with most surveys [33–36]. Verbal abuse and threats rose and then fell between 2010 and 2019, peaking in 2015, possibly because hospitals have placed greater emphasis on nurses' service attitudes and communication skills in recent years. To reduce verbal abuse and threats, hospital management developed training on coping and communication strategies for nurses, such as using amiable words, putting themselves in patients' shoes and listening to patients' needs from the patients' point of view so that patients feel understood and respected. However, the lack of corresponding legal constraints on verbal violence means that nurses cannot obtain timely evidence and a fair hearing after experiencing it, which makes verbal violence hard to prevent or intervene in. The survey also showed that physical assault against nurses gradually increased over the nine years: the incidence changed little between 2010 and 2015 (P>0.05) but rose significantly from 2015 to 2019 (P<0.001). With economic development and improved living standards, people's expectations of medical treatment have changed; patients are no longer satisfied with cure of the disease alone but attach increasing importance to the overall effect of treatment and to the experience of the treatment process. When patients and their relatives have unpleasant experiences or unmet demands during treatment, conflicts easily arise between nurses and patients, and if the hospital's treatment system or processes are imperfect, such conflict can escalate into physical aggression. It is therefore recommended that hospital management attend to all forms of WPV while improving service attitudes, the patient's medical experience and the service process. Management should focus not only on implementing policies but also on establishing hospital security forces and an effective cooperation mechanism between hospitals and public security systems, with measures such as more frequent patrols, improved security inspections, prohibiting sharp or dangerous goods from being carried into hospitals and security training for staff [37]. Such practices would allow a hospital to intervene immediately when violence occurs, ensure the safety of medical personnel and reduce violence through multi-sector coordination. The incidence of sexual harassment first decreased and then increased, with similar levels in 2010 and 2019; the initial decline was plausibly related to the improvement of laws and the health care system in recent years. Since 2014, China has issued a series of laws and regulations, such as the Regulations on the Prevention and Treatment of Medical Disputes, the Memorandum of Cooperation on Joint Punishment of Persons who Seriously Endanger the Normal Medical Order, and the Basic Medical and Health Promotion Law, which have helped protect legitimate rights, deter illegal acts targeting medical staff and institutions and maintain order in hospitals. However, over time some law enforcement officers have not fully implemented these laws and regulations, contributing to a rebound in hospital WPV since 2015. It is therefore suggested that law enforcement sternly punish people who attack medical staff and promote the implementation of policies and laws so as to effectively protect nurses' legitimate rights and interests.

Over the past 9 years, nurses' cognition of and attitudes toward WPV have improved. The rates of "having heard of WPV before", "thinking WPV coping management organizations are needed" and "supporting a zero-tolerance policy" increased significantly from 2010 to 2015 (P<0.05) but changed little from 2015 to 2019 (P>0.05), which was likely related to the formulation of corresponding preventive measures and the provision of reasonable, effective education and training. The National Institute for Occupational Safety and Health [38] developed and delivered a WPV prevention training course for nurses, and more than 90 % of participants thought the course filled gaps in their knowledge. It was also found that some nurses could not effectively judge whether they had experienced WPV because they lacked an understanding of the concept of WPV or had never heard the term before.
Ming et al. [39] found that after a randomized intervention for clinical nurses, the cognition and attitude scores of nurses who participated in violence training were higher than those of nurses who did not. Brann et al. [40] found that nurses' cognition and knowledge of WPV were higher after training than before; nurses could identify violence that they had experienced personally or as bystanders and no longer considered violence part of their work. It is therefore necessary to develop WPV training for clinical nurses. In addition, the three surveys showed that although the majority of nurses were willing to participate in WPV prevention training, preservice training contained less WPV content in 2019 than in 2010 and 2015 (PT1−T3=0.002, PT2−T3=0.001). For hospital management, this indicates that to maximize the effect of education and training and improve nurses' ability to identify and deal with WPV in clinical practice, WPV training should be strengthened and made more systematic and routine: WPV content should be integrated into new nurses' education and in-hospital education, knowledge on violence should be disseminated to nurses regularly, and continuous management after training should be strengthened to sustain post-training results while meeting nurses' learning needs [41]. Furthermore, public education is needed to publicize, through various networks, the hospital's key role in solving problems in the medical system and to increase public understanding of medical centers' efforts to improve the medical experience and provide services to patients, thereby decreasing WPV.

In this survey, hospitals' attitudes and responses to WPV against nurses improved significantly from 2010 to 2019, as embodied in five aspects: "offering training", "encouraging reporting of WPV to supervisors", "equipped with a WPV management department", "handling WPV efficiently" and "hospital's attitudes" (P<0.05). The differences in the response rates for "offering training", "handling WPV efficiently" and "hospital's attitudes" across the three years were highly significant (P<0.001), which may reflect changes in hospital managers' management philosophy. At present, hospital management pays more attention to the safety of nurses, constantly explores and develops scientific and effective WPV management measures, actively and effectively controls violence, addresses nurses' suffering from violence and minimizes hospital violence caused by improper or unclear management policies [37]. Nevertheless, fewer than half of respondents reported receiving WPV prevention training; although the training rate in 2019 was higher than in 2010 and 2015, it was only 32.7 %, which is consistent with the results of other studies [42, 43]. The survey also showed that more than 95 % of nurses believed preservice training should include WPV content and more than 90 % expressed a willingness to attend, indicating that current hospital education and training fall far short of clinical nurses' needs. To reduce the incidence of WPV or minimize its harm, nurses' WPV management and intervention skills should be strengthened, with particular attention to developing training, exploring its content and form, increasing nurses' initiative and enthusiasm, and improving nurses' cognition, attitudes and coping abilities with regard to WPV.
In addition, compared with 2010, the 2015 and 2019 results showed that hospitals paid more attention to WPV experienced by doctors, with a gradual upward trend (P<0.001). Some studies [19, 44, 45] suggest that organizational support and hospital management can reduce the negative effects of WPV on nurses. Without strong attention from the hospital, nurses may lose trust in the management department and experience reduced job satisfaction, which aggravates adverse reactions to WPV. In conclusion, hospitals should treat WPV against doctors and nurses equally so that nurse-directed WPV is resolved fairly and the psychological gap between employees is reduced.
[ "According to the World Health Organization (WHO), workplace violence (WPV) can be defined as “incidents where staff are abused, threatened or assaulted in circumstances related to their work, involving an explicit or implicit challenge to their safety, well-being or health”. WPV can manifest as physical violence or psychological violence in different forms [1]. In recent years, as the reform of China’s medical system has entered the “deep water zone”, the doctor-patient relationship is becoming increasingly tense, and doctor-patient conflict is gradually emerging. As the main manifestation of doctor-patient conflict, the incidence of hospital WPV is increasing yearly. As the main workforce in medical and health care institutions, nurses have become the main victims of WPV due to the nature of their work, which requires direct contact and frequent communication with patients [2–5]. Studies from outside of China have shown that approximately 70 %-90 % of nurses have experienced one or more types of WPV [6–8]. In China, cross-sectional investigations of WPV among nurses have shown that the incidence of WPV is 71.9 %-84.6 %, which is similar to the incidence reported in other countries [9–11]. These findings show that WPV among nurses has become a common phenomenon in both developing and developed countries.\nWPV is a very serious public health problem that not only causes different degrees of physical and psychological injury to nurses but also has negative effects on hospitals [12, 13]. At the individual level, nurses may suffer from varying degrees of physiological harm, such as chronic pain, elevated blood pressure, gastrointestinal disorders, nightmares and even death [14, 15]. Regarding the psychological effects of WPV, in addition to feelings of grievance, anger, anxiety and other emotions, nurses may experience symptoms of insomnia, suicidal ideation and post-traumatic stress disorder [16–18]. At the hospital level, WPV reduces the enthusiasm of nurses toward their work and the quality of their work and increases their turnover rate, which affects the stability of the nursing team [19, 20]. In summary, WPV among nurses is characterized by a high incidence rate and great harmfulness.\nSince the 1980 s, scholars at home and abroad have sought to reduce hospital WPV incidence and improve nurses’ ability to respond through a series of measures, including improving medical-related laws and regulations [21], improving the hospital environment and treatment procedures [22], improving nurses’ communication ability [23], and focusing on WPV training [24]. However, a review of the current domestic and foreign literature found that the incidence of WPV among nurses remains high [25, 26]. Some scholars [27] have pointed out that due to the lack of long-term tracking data on the occurrence of WPV among nurses, the existing prevention and intervention measures lack pertinence. A study by Spelten et al. [28] pointed out that reducing the incidence of WPV first requires a comprehensive understanding of the dynamic development process of violence toward nurses, including by determining the frequency and intensity of WPV events and assessing the long-term results of violence on staff. However, most of the existing WPV studies are short-term status surveys, and few studies continuously compare the WPV status and characteristics of nurses in a certain region. 
This study aimed to analyze the current situation and characteristics of WPV among nurses in Suzhou general hospitals from 2010 to 2019 to provide information that can help hospital managers understand the status quo of WPV among nurses and trends in WPV over time.", "Study design In this cross-sectional study, structured questionnaires were used to investigate the occurrence of WPV among nurses in 6 general hospitals in Suzhou in 2010, 2015 and 2019.\nIn this cross-sectional study, structured questionnaires were used to investigate the occurrence of WPV among nurses in 6 general hospitals in Suzhou in 2010, 2015 and 2019.\nParticipants A total of 942, 2,110 and 2,566 clinical nurses were selected from Suzhou general hospitals in November of each year in 2010, 2015 and 2019, respectively, by the convenience sampling method.\nA total of 942, 2,110 and 2,566 clinical nurses were selected from Suzhou general hospitals in November of each year in 2010, 2015 and 2019, respectively, by the convenience sampling method.\nInclusion criteria (1) Employment as in-service nurses in clinical departments who had direct contact with patients in their daily work.\n(2) Professional certification.\n(3) Completion of at least 1 year of clinical nursing work.\n (4) Provision of a signed informed consent form and voluntary participation in the study.\n(1) Employment as in-service nurses in clinical departments who had direct contact with patients in their daily work.\n(2) Professional certification.\n(3) Completion of at least 1 year of clinical nursing work.\n (4) Provision of a signed informed consent form and voluntary participation in the study.\nExclusion criteria (1) Nurses off-duty during the investigation, e.g., on maternity leave, sick leave, or vacation.\n(2) Non-regular nurses and personnel undergoing refresher training in our hospital.\n(1) Nurses off-duty during the investigation, e.g., on maternity leave, sick leave, or vacation.\n(2) Non-regular nurses and personnel undergoing refresher training in our hospital.\nEthics approval The study was approved by the Medical Ethics Committee of the First Affiliated Hospital of Soochow University in China (No. 2,018,062). The participants were made aware of the aim and procedures of the study, and they all signed an informed consent form. The researchers guaranteed the confidentiality of the subjects’ information and that it would be used only for research purposes. The voluntary principle was always followed during the study, and the participants could leave the study at any time.\n The study was approved by the Medical Ethics Committee of the First Affiliated Hospital of Soochow University in China (No. 2,018,062). The participants were made aware of the aim and procedures of the study, and they all signed an informed consent form. The researchers guaranteed the confidentiality of the subjects’ information and that it would be used only for research purposes. 
The voluntary principle was always followed during the study, and the participants could leave the study at any time.", "In this cross-sectional study, structured questionnaires were used to investigate the occurrence of WPV among nurses in 6 general hospitals in Suzhou in 2010, 2015 and 2019.", "A total of 942, 2,110 and 2,566 clinical nurses were selected from Suzhou general hospitals in November of each year in 2010, 2015 and 2019, respectively, by the convenience sampling method.", "(1) Employment as in-service nurses in clinical departments who had direct contact with patients in their daily work.\n(2) Professional certification.\n(3) Completion of at least 1 year of clinical nursing work.\n (4) Provision of a signed informed consent form and voluntary participation in the study.", "(1) Nurses off-duty during the investigation, e.g., on maternity leave, sick leave, or vacation.\n(2) Non-regular nurses and personnel undergoing refresher training in our hospital.", " The study was approved by the Medical Ethics Committee of the First Affiliated Hospital of Soochow University in China (No. 2,018,062). The participants were made aware of the aim and procedures of the study, and they all signed an informed consent form. The researchers guaranteed the confidentiality of the subjects’ information and that it would be used only for research purposes. The voluntary principle was always followed during the study, and the participants could leave the study at any time.", "General information questionnaire The questionnaire was designed by the research group to suit the research purpose and collected information on gender, age, education, whether an only child, marital status, hospital grade, department, years worked, employment type and professional title.\nThe questionnaire was designed by the research group to suit the research purpose and collected information on gender, age, education, whether an only child, marital status, hospital grade, department, years worked, employment type and professional title.\nRevised version of the hospital WPV questionnaire The questionnaire designed by Shengyong Wang [29] of Jinan University includes 33 entries in 4 dimensions: the frequency of WPV occurrence in the past year, the most impressive WPV experience in the past year, cognition and attitudes toward WPV, and the attitudes and responses of hospitals to WPV. The questionnaire has been widely used in clinical practice. The questionnaire has good reliability and validity; the test-retest reliability of the questionnaire was 0.803, and the average content validity of each item was 0.916.\nThe questionnaire designed by Shengyong Wang [29] of Jinan University includes 33 entries in 4 dimensions: the frequency of WPV occurrence in the past year, the most impressive WPV experience in the past year, cognition and attitudes toward WPV, and the attitudes and responses of hospitals to WPV. The questionnaire has been widely used in clinical practice. 
The questionnaire has good reliability and validity; the test-retest reliability of the questionnaire was 0.803, and the average content validity of each item was 0.916.\nData collection Two weeks before the beginning of the study, the head of the research team (which included 2 postgraduate tutors, 6 postgraduates, and 4 undergraduates) contacted the managers of each hospital by telephone or e-mail to request the support of the corresponding person in charge for the study and published a notice on each hospital website to recruit participants to participate in the survey. The contents of the notice included the location, time, purpose, significance, confidentiality principle and precautions of the questionnaire. If nurses were willing to participate in the survey, they could go to a designated conference room to receive the questionnaire and complete it anonymously. Each hospital was equipped with 2 research team members. If a nurse had any questions during completion of the questionnaire, the project team members could give guidance at any time. After completing the questionnaire, the nurse placed it into the questionnaire recovery box in the conference room. At 6:00 p.m., after the nurses had completed the questionnaires, the questionnaire recovery box was sealed by the members of the research group and sent to the questionnaire archives of the research group for safekeeping.\nTwo weeks before the beginning of the study, the head of the research team (which included 2 postgraduate tutors, 6 postgraduates, and 4 undergraduates) contacted the managers of each hospital by telephone or e-mail to request the support of the corresponding person in charge for the study and published a notice on each hospital website to recruit participants to participate in the survey. The contents of the notice included the location, time, purpose, significance, confidentiality principle and precautions of the questionnaire. If nurses were willing to participate in the survey, they could go to a designated conference room to receive the questionnaire and complete it anonymously. Each hospital was equipped with 2 research team members. If a nurse had any questions during completion of the questionnaire, the project team members could give guidance at any time. After completing the questionnaire, the nurse placed it into the questionnaire recovery box in the conference room. At 6:00 p.m., after the nurses had completed the questionnaires, the questionnaire recovery box was sealed by the members of the research group and sent to the questionnaire archives of the research group for safekeeping.\nStatistical analysis Data management and analysis were performed using SPSS 18.0. The count data are described as frequencies and percentages, and the quantitative data are reported as means and standard deviations. The chi-square test was used to analyze the general data of the nurses from the different time periods, the incidence of WPV, nurses’ cognition and attitudes regarding WPV and the hospitals’ attitudes toward WPV and corresponding measures. The Mann-Whitney U test was used to analyze the demographic data. Statistical significance was set at P<0.05.\nData management and analysis were performed using SPSS 18.0. The count data are described as frequencies and percentages, and the quantitative data are reported as means and standard deviations. 
The chi-square test was used to analyze the general data of the nurses from the different time periods, the incidence of WPV, nurses’ cognition and attitudes regarding WPV and the hospitals’ attitudes toward WPV and corresponding measures. The Mann-Whitney U test was used to analyze the demographic data. Statistical significance was set at P<0.05.", "The questionnaire was designed by the research group to suit the research purpose and collected information on gender, age, education, whether an only child, marital status, hospital grade, department, years worked, employment type and professional title.", "The questionnaire designed by Shengyong Wang [29] of Jinan University includes 33 entries in 4 dimensions: the frequency of WPV occurrence in the past year, the most impressive WPV experience in the past year, cognition and attitudes toward WPV, and the attitudes and responses of hospitals to WPV. The questionnaire has been widely used in clinical practice. The questionnaire has good reliability and validity; the test-retest reliability of the questionnaire was 0.803, and the average content validity of each item was 0.916.", "Two weeks before the beginning of the study, the head of the research team (which included 2 postgraduate tutors, 6 postgraduates, and 4 undergraduates) contacted the managers of each hospital by telephone or e-mail to request the support of the corresponding person in charge for the study and published a notice on each hospital website to recruit participants to participate in the survey. The contents of the notice included the location, time, purpose, significance, confidentiality principle and precautions of the questionnaire. If nurses were willing to participate in the survey, they could go to a designated conference room to receive the questionnaire and complete it anonymously. Each hospital was equipped with 2 research team members. If a nurse had any questions during completion of the questionnaire, the project team members could give guidance at any time. After completing the questionnaire, the nurse placed it into the questionnaire recovery box in the conference room. At 6:00 p.m., after the nurses had completed the questionnaires, the questionnaire recovery box was sealed by the members of the research group and sent to the questionnaire archives of the research group for safekeeping.", "Data management and analysis were performed using SPSS 18.0. The count data are described as frequencies and percentages, and the quantitative data are reported as means and standard deviations. The chi-square test was used to analyze the general data of the nurses from the different time periods, the incidence of WPV, nurses’ cognition and attitudes regarding WPV and the hospitals’ attitudes toward WPV and corresponding measures. The Mann-Whitney U test was used to analyze the demographic data. 
Statistical significance was set at P<0.05.", "Questionnaire response rate In 2010, 2015 and 2019, 1,000, 2,132 and 2,700 questionnaires were distributed, respectively, and 942, 2,110 and 2,566 effective questionnaires were actually recovered, for effective response rates of 94.20 %, 98.97 % and 95.04 %.\nIn 2010, 2015 and 2019, 1,000, 2,132 and 2,700 questionnaires were distributed, respectively, and 942, 2,110 and 2,566 effective questionnaires were actually recovered, for effective response rates of 94.20 %, 98.97 % and 95.04 %.\nNurses’ general information A total of 5,618 nurses completed the three surveys, including 388 emergency nurses, 618 outpatient nurses and 4,612 ward nurses. The age of the respondents ranged from 20 to 63 years, with an average age of 31.41 ± 5.82 years, and the proportion of respondents < 30 years old was 51.6 %. The number of years worked ranged from 1 to 37, with the average being 8.85 ± 7.45 years. Of the respondents, 23.2 % had worked ≤ 5 years; 52.8 % had a bachelor’s degree; and 42.7 % had junior professional titles. The demographic characteristics are presented in Table 1.\n\nTable 1Comparison of the demographic characteristics of the nurses, n (%)ItemsCategories201020152019χ2/ZPAge (Years)<30506 (53.7)1080 (51.2)1311 (51.1)16.915*0.01030~39348 (36.9)737 (34.9)933 (36.4)40~4975 (8.0)236 (11.2)278 (10.8)≥5013 (1.4)57 (2.7)44 (1.7)GenderFemale894 (94.9)2049 (97.1)2396(93.4)34.249<0.001Male48 (5.1)61 (2.9)170(6.6)Marital statusMarried605 (64.2)1430 (67.8)1628 (63.4)17.2570.002Single337 (35.8)665 (31.5)922 (35.9)Other0 (0.0)15 (0.7)16 (0.6)Only childYes392 (41.6)1098 (52.0)1291 (50.3)29.551<0.001No550 (58.4)1012 (48.0)1275 (49.7)Education levelTechnical secondary degree or below62 (6.6)63 (3.0)33 (1.3)385.760*<0.001Junior college568 (60.3)1008 (47.8)811 (31.6)Bachelor’s degree305 (32.4)1019 (48.3)1656 (64.5)Master’s degree or above7 (0.7)20 (0.9)66 (2.6)Years worked≤5194 (20.6)381 (18.1)726 (28.3)85.897*<0.0016~10294 (31.2)775 (36.7)847 (33.0)11~15204 (21.7)384 (18.2)428 (16.7)≥16250 (26.5)570 (27.0)565 (22.0)Professional titleSenior324 (34.4)687 (32.6)592 (23.1)120.281*<0.001Medium288 (30.6)640 (30.3)688 (26.8)Junior330 (30.5)783 (37.1)1286 (50.1)PostNurse901 (95.6)1977 (93.7)2432 (94.8)5.4010.067Head nurse41 (4.4)133 (6.3)134 (5.2)Employment typeOfficial staff495 (52.5)1068 (50.6)1213 (47.3)9.8780.043Contract staff437 (46.4)1016 (48.2)1317 (51.3)Temporary staff10 (1.1)26 (1.2)36 (1.4)DepartmentEmergency59 (6.3)168 (8.0)161 (6.3)77.396<0.001Outpatient159 (16.9)265 (12.6)194 (7.6)Ward724 (76.9)1677 (79.5)2211 (86.2)*: The Mann-Whitney U test was used\nComparison of the demographic characteristics of the nurses, n (%)\n*: The Mann-Whitney U test was used\nA total of 5,618 nurses completed the three surveys, including 388 emergency nurses, 618 outpatient nurses and 4,612 ward nurses. The age of the respondents ranged from 20 to 63 years, with an average age of 31.41 ± 5.82 years, and the proportion of respondents < 30 years old was 51.6 %. The number of years worked ranged from 1 to 37, with the average being 8.85 ± 7.45 years. Of the respondents, 23.2 % had worked ≤ 5 years; 52.8 % had a bachelor’s degree; and 42.7 % had junior professional titles. 
The demographic characteristics are presented in Table 1.\n\nTable 1Comparison of the demographic characteristics of the nurses, n (%)ItemsCategories201020152019χ2/ZPAge (Years)<30506 (53.7)1080 (51.2)1311 (51.1)16.915*0.01030~39348 (36.9)737 (34.9)933 (36.4)40~4975 (8.0)236 (11.2)278 (10.8)≥5013 (1.4)57 (2.7)44 (1.7)GenderFemale894 (94.9)2049 (97.1)2396(93.4)34.249<0.001Male48 (5.1)61 (2.9)170(6.6)Marital statusMarried605 (64.2)1430 (67.8)1628 (63.4)17.2570.002Single337 (35.8)665 (31.5)922 (35.9)Other0 (0.0)15 (0.7)16 (0.6)Only childYes392 (41.6)1098 (52.0)1291 (50.3)29.551<0.001No550 (58.4)1012 (48.0)1275 (49.7)Education levelTechnical secondary degree or below62 (6.6)63 (3.0)33 (1.3)385.760*<0.001Junior college568 (60.3)1008 (47.8)811 (31.6)Bachelor’s degree305 (32.4)1019 (48.3)1656 (64.5)Master’s degree or above7 (0.7)20 (0.9)66 (2.6)Years worked≤5194 (20.6)381 (18.1)726 (28.3)85.897*<0.0016~10294 (31.2)775 (36.7)847 (33.0)11~15204 (21.7)384 (18.2)428 (16.7)≥16250 (26.5)570 (27.0)565 (22.0)Professional titleSenior324 (34.4)687 (32.6)592 (23.1)120.281*<0.001Medium288 (30.6)640 (30.3)688 (26.8)Junior330 (30.5)783 (37.1)1286 (50.1)PostNurse901 (95.6)1977 (93.7)2432 (94.8)5.4010.067Head nurse41 (4.4)133 (6.3)134 (5.2)Employment typeOfficial staff495 (52.5)1068 (50.6)1213 (47.3)9.8780.043Contract staff437 (46.4)1016 (48.2)1317 (51.3)Temporary staff10 (1.1)26 (1.2)36 (1.4)DepartmentEmergency59 (6.3)168 (8.0)161 (6.3)77.396<0.001Outpatient159 (16.9)265 (12.6)194 (7.6)Ward724 (76.9)1677 (79.5)2211 (86.2)*: The Mann-Whitney U test was used\nComparison of the demographic characteristics of the nurses, n (%)\n*: The Mann-Whitney U test was used\nIncidence of WPV among nurses in 2010, 2015, and 2019 The incidence of WPV among nurses was 62.4 % in 2010, 69.0 % in 2015 and 68.4 % in 2019 (P=0.001). Compared with that in 2010, the incidence increased significantly in 2015 and 2019 (P<0.001, P=0.001). The details are shown in Table 2.\n\nTable 2Comparison of the incidence of WPV among nurses in different years, n (%)YearQuantityNot exposed to WPVExposed to WPVχ2PT1 (2010)942354 (37.6)588 (62.4)13.9790.001T2 (2015)2110655 (31.0)1455 (69.0)T3 (2019)2566812 (31.6)1754 (68.4)Multiple comparisonPT1−T212.575<0.001PT1−T310.9380.001PT2−T30.1950.681\nComparison of the incidence of WPV among nurses in different years, n (%)\nThe incidence of WPV among nurses was 62.4 % in 2010, 69.0 % in 2015 and 68.4 % in 2019 (P=0.001). Compared with that in 2010, the incidence increased significantly in 2015 and 2019 (P<0.001, P=0.001). The details are shown in Table 2.\n\nTable 2Comparison of the incidence of WPV among nurses in different years, n (%)YearQuantityNot exposed to WPVExposed to WPVχ2PT1 (2010)942354 (37.6)588 (62.4)13.9790.001T2 (2015)2110655 (31.0)1455 (69.0)T3 (2019)2566812 (31.6)1754 (68.4)Multiple comparisonPT1−T212.575<0.001PT1−T310.9380.001PT2−T30.1950.681\nComparison of the incidence of WPV among nurses in different years, n (%)\nIncidence of different forms of WPV among nurses in 2010, 2015, and 2019 According to the current situation of WPV toward nurses in different years, the proportion of nurses who experienced sexual harassment (4.1 %) was the highest in 2010. The proportion of people who suffered verbal abuse (66.2 %) and threats (46.3 %) was the highest in 2015. Physical assault (13.1 %) was the highest in 2019. The overall comparison of WPV in different years and of different types showed significant differences (P<0.05). 
Further multiple comparisons of the data showed that there were highly statistically significant differences between the total number of WPV incidents in 2010 and 2015 and between the number of physical assault incidents in 2015 and 2019 (P<0.001). The details are shown in Table 3\n\nTable 3Comparison of different forms of WPV among nurses in different years, n (%)YearQuantityExposed to WPVVerbal abuseVerbal threatPhysical assaultSexual harassmentT1 (2010)942588 (62.4)570 (60.5)387 (41.1)93 (9.9)39 (4.1)T2 (2015)21101455 (69.0)1396 (66.2)977 (46.3)211 (10.0)55 (2.6)T3 (2019)25661754 (68.4)1651 (64.3)1116 (43.5)335 (13.1)102 (4.0)χ213.97928.58819.42326.86516.185P0.001<0.0010.004<0.0010.013Multiple comparisonPT1-T2<0.0010.0120.4400.2320.025T1-T30.0010.0050.2500.0480.303T2-T30.6810.0010.020<0.0010.010\nComparison of different forms of WPV among nurses in different years, n (%)\nAccording to the current situation of WPV toward nurses in different years, the proportion of nurses who experienced sexual harassment (4.1 %) was the highest in 2010. The proportion of people who suffered verbal abuse (66.2 %) and threats (46.3 %) was the highest in 2015. Physical assault (13.1 %) was the highest in 2019. The overall comparison of WPV in different years and of different types showed significant differences (P<0.05). Further multiple comparisons of the data showed that there were highly statistically significant differences between the total number of WPV incidents in 2010 and 2015 and between the number of physical assault incidents in 2015 and 2019 (P<0.001). The details are shown in Table 3\n\nTable 3Comparison of different forms of WPV among nurses in different years, n (%)YearQuantityExposed to WPVVerbal abuseVerbal threatPhysical assaultSexual harassmentT1 (2010)942588 (62.4)570 (60.5)387 (41.1)93 (9.9)39 (4.1)T2 (2015)21101455 (69.0)1396 (66.2)977 (46.3)211 (10.0)55 (2.6)T3 (2019)25661754 (68.4)1651 (64.3)1116 (43.5)335 (13.1)102 (4.0)χ213.97928.58819.42326.86516.185P0.001<0.0010.004<0.0010.013Multiple comparisonPT1-T2<0.0010.0120.4400.2320.025T1-T30.0010.0050.2500.0480.303T2-T30.6810.0010.020<0.0010.010\nComparison of different forms of WPV among nurses in different years, n (%)\nCognition and attitudes of nurses toward WPV in 2010, 2015 and 2019 Compared with the number in 2010, more nurses surveyed in 2015 and 2019 said that they had heard of WPV before, and the result was considered statistically significant (PT1−T2=0.020, PT1−T3=0.032). In 2010 and 2015, less about WPV was included in the nurses’ preservice training (PT1−T3=0.002, PT2−T3=0.001), but more nurses thought they would benefit from WPV training (PT1−T3=0.016, PT2−T3=0.010). The study also found that clinical nurses in 2015 and 2019 were more likely than nurses in 2010 to approve of hospitals setting up WPV processing facilities and to support a zero-tolerance policy toward WPV (P<0.001). In addition, more than 85 % of nurses involved in the three surveys expressed their willingness to attend WPV training and thought they would benefit from it. 
The details are shown in Table 4Table 4Comparison of cognition and attitudes toward WPV among nurses in different years, n (%)ItemsCategories201020152019χ2PP(T1)(T2)(T3)T1-T2T1-T3T2-T3Having heard of WPV beforeYes763 (81.0)1782 (84.5)2157 (84.7)6.1660.0460.0200.0320.717No179 (19.0)328 (15.5)916 (16.3)WPV is inevitable at workYes712 (75.6)1594 (75.5)1962 (76.5)0.6250.7321.0000.5910.470No230 (24.4)516 (24.5)604 (24.0)Thinking WPV is not worth the fussYes132 (14.0)264 (12.5)374 (14.6)4.2580.1190.2680.7040.044No810 (86.0)1846 (87.5)2192 (85.4)Thinking preservice training should include training programs on WPVYes922 (97.9)2058 (97.5)2457 (95.8)16.192<0.0010.6080.0020.001No20 (2.1)352 (2.5)109 (4.2)Willing to attend the WPV trainingYes874 (92.8)1968 (93.3)2405 (93.7)1.0830.5820.6420.3170.551No68 (7.2)142 (6.7)151 (6.3)Thinking training on WPV would be beneficialYes822 (87.3)1851 (87.7)2313 (90.1)9.2790.0100.7220.0160.010No120 (12.7)259 (12.3)253 (9.9)Thinking WPV coping management organizations are neededYes903 (95.9)2071 (98.2)2519 (98.2)19.082<0.001<0.001<0.0011.000No39 (4.1)39 (1.8)47 (1.8)Supporting a “zero-tolerance” policyYes618 (65.6)1882 (89.2)2310 (90.0)368.750<0.001<0.001<0.0010.360No324 (34.4)228 (10.8)256 (10.0)\nComparison of cognition and attitudes toward WPV among nurses in different years, n (%)\nCompared with the number in 2010, more nurses surveyed in 2015 and 2019 said that they had heard of WPV before, and the result was considered statistically significant (PT1−T2=0.020, PT1−T3=0.032). In 2010 and 2015, less about WPV was included in the nurses’ preservice training (PT1−T3=0.002, PT2−T3=0.001), but more nurses thought they would benefit from WPV training (PT1−T3=0.016, PT2−T3=0.010). The study also found that clinical nurses in 2015 and 2019 were more likely than nurses in 2010 to approve of hospitals setting up WPV processing facilities and to support a zero-tolerance policy toward WPV (P<0.001). In addition, more than 85 % of nurses involved in the three surveys expressed their willingness to attend WPV training and thought they would benefit from it. 
The details are shown in Table 4Table 4Comparison of cognition and attitudes toward WPV among nurses in different years, n (%)ItemsCategories201020152019χ2PP(T1)(T2)(T3)T1-T2T1-T3T2-T3Having heard of WPV beforeYes763 (81.0)1782 (84.5)2157 (84.7)6.1660.0460.0200.0320.717No179 (19.0)328 (15.5)916 (16.3)WPV is inevitable at workYes712 (75.6)1594 (75.5)1962 (76.5)0.6250.7321.0000.5910.470No230 (24.4)516 (24.5)604 (24.0)Thinking WPV is not worth the fussYes132 (14.0)264 (12.5)374 (14.6)4.2580.1190.2680.7040.044No810 (86.0)1846 (87.5)2192 (85.4)Thinking preservice training should include training programs on WPVYes922 (97.9)2058 (97.5)2457 (95.8)16.192<0.0010.6080.0020.001No20 (2.1)352 (2.5)109 (4.2)Willing to attend the WPV trainingYes874 (92.8)1968 (93.3)2405 (93.7)1.0830.5820.6420.3170.551No68 (7.2)142 (6.7)151 (6.3)Thinking training on WPV would be beneficialYes822 (87.3)1851 (87.7)2313 (90.1)9.2790.0100.7220.0160.010No120 (12.7)259 (12.3)253 (9.9)Thinking WPV coping management organizations are neededYes903 (95.9)2071 (98.2)2519 (98.2)19.082<0.001<0.001<0.0011.000No39 (4.1)39 (1.8)47 (1.8)Supporting a “zero-tolerance” policyYes618 (65.6)1882 (89.2)2310 (90.0)368.750<0.001<0.001<0.0010.360No324 (34.4)228 (10.8)256 (10.0)\nComparison of cognition and attitudes toward WPV among nurses in different years, n (%)\nAttitudes and responses of hospitals to WPV towards nurses in 2010, 2015 and 2019 This study found that in the past 9 years, the hospitals’ attitudes and responses to WPV towards nurses have greatly improved, as indicated by improvements in five aspects, namely, “offering training”, “encouraging reporting of WPV to supervisors”, “equipped with a WPV management department”, “handling WPV efficiently”, and “hospital’s attitudes” (P<0.001). However, most hospitals still have not developed training. Regarding hospitals’ emphasis on WPV, hospitals paid more attention to WPV toward physicians (P=0.013). 
The details are shown in Table 5\n\nTable 5Comparison of attitudes and responses of hospitals toward WPV among nurses in different years, n (%)ItemsCategories201020152019χ2PP(T1)(T2)(T3)T1-T2T1-T3T2-T3Offering trainingYes91 (9.7)487 (23.1)840 (32.7)202.794<0.001<0.001<0.001<0.001No851 (90.3)1623 (76.9)1726 (67.3)Encouraging reporting of WPV to supervisorsYes585 (62.1)1575 (74.6)1849 (72.1)51.255<0.001<0.001<0.0010.050No357 (37.9)535 (25.4)717 (27.9)Equipped with WPV management departmentYes432 (45.9)1215 (57.6)1451 (56.5)39.942<0.001<0.001<0.0010.495No510 (54.1)895 (42.4)1115 (43.5)Handling WPV efficientlyYes264 (28.0)791 (37.5)1193 (46.5)106.884<0.001<0.001<0.001<0.001No678 (72.0)1319 (62.5)1373 (53.5)Hospital’s attitudesProtecting the interests of staff91 (9.7)574 (27.2)775 (30.2)283.785<0.001<0.001<0.001<0.001Handling WPV fairly based on the facts425 (45.1)855 (40.5)1194 (46.5)Turning a blind eye283 (30.0)549 (26.0)470 (18.3)Punishing staff regardless of the cause143 (15.2)132 (6.3)127 (4.9)Degree of emphasisPhysician>Nurse403 (42.8)959 (45.5)1117 (43.5)12.6490.0130.002<0.0010.346Nurse>Physician0 (0.0)22 (1.0)23 (0.9)Physician=Nurse539 (57.2)1129 (53.5)1426 (55.6)\nComparison of attitudes and responses of hospitals toward WPV among nurses in different years, n (%)\nThis study found that in the past 9 years, the hospitals’ attitudes and responses to WPV towards nurses have greatly improved, as indicated by improvements in five aspects, namely, “offering training”, “encouraging reporting of WPV to supervisors”, “equipped with a WPV management department”, “handling WPV efficiently”, and “hospital’s attitudes” (P<0.001). However, most hospitals still have not developed training. Regarding hospitals’ emphasis on WPV, hospitals paid more attention to WPV toward physicians (P=0.013). The details are shown in Table 5\n\nTable 5Comparison of attitudes and responses of hospitals toward WPV among nurses in different years, n (%)ItemsCategories201020152019χ2PP(T1)(T2)(T3)T1-T2T1-T3T2-T3Offering trainingYes91 (9.7)487 (23.1)840 (32.7)202.794<0.001<0.001<0.001<0.001No851 (90.3)1623 (76.9)1726 (67.3)Encouraging reporting of WPV to supervisorsYes585 (62.1)1575 (74.6)1849 (72.1)51.255<0.001<0.001<0.0010.050No357 (37.9)535 (25.4)717 (27.9)Equipped with WPV management departmentYes432 (45.9)1215 (57.6)1451 (56.5)39.942<0.001<0.001<0.0010.495No510 (54.1)895 (42.4)1115 (43.5)Handling WPV efficientlyYes264 (28.0)791 (37.5)1193 (46.5)106.884<0.001<0.001<0.001<0.001No678 (72.0)1319 (62.5)1373 (53.5)Hospital’s attitudesProtecting the interests of staff91 (9.7)574 (27.2)775 (30.2)283.785<0.001<0.001<0.001<0.001Handling WPV fairly based on the facts425 (45.1)855 (40.5)1194 (46.5)Turning a blind eye283 (30.0)549 (26.0)470 (18.3)Punishing staff regardless of the cause143 (15.2)132 (6.3)127 (4.9)Degree of emphasisPhysician>Nurse403 (42.8)959 (45.5)1117 (43.5)12.6490.0130.002<0.0010.346Nurse>Physician0 (0.0)22 (1.0)23 (0.9)Physician=Nurse539 (57.2)1129 (53.5)1426 (55.6)\nComparison of attitudes and responses of hospitals toward WPV among nurses in different years, n (%)", "In 2010, 2015 and 2019, 1,000, 2,132 and 2,700 questionnaires were distributed, respectively, and 942, 2,110 and 2,566 effective questionnaires were actually recovered, for effective response rates of 94.20 %, 98.97 % and 95.04 %.", "A total of 5,618 nurses completed the three surveys, including 388 emergency nurses, 618 outpatient nurses and 4,612 ward nurses. 
The age of the respondents ranged from 20 to 63 years, with an average age of 31.41 ± 5.82 years, and the proportion of respondents < 30 years old was 51.6 %. The number of years worked ranged from 1 to 37, with the average being 8.85 ± 7.45 years. Of the respondents, 23.2 % had worked ≤ 5 years; 52.8 % had a bachelor’s degree; and 42.7 % had junior professional titles. The demographic characteristics are presented in Table 1.\n\nTable 1Comparison of the demographic characteristics of the nurses, n (%)ItemsCategories201020152019χ2/ZPAge (Years)<30506 (53.7)1080 (51.2)1311 (51.1)16.915*0.01030~39348 (36.9)737 (34.9)933 (36.4)40~4975 (8.0)236 (11.2)278 (10.8)≥5013 (1.4)57 (2.7)44 (1.7)GenderFemale894 (94.9)2049 (97.1)2396(93.4)34.249<0.001Male48 (5.1)61 (2.9)170(6.6)Marital statusMarried605 (64.2)1430 (67.8)1628 (63.4)17.2570.002Single337 (35.8)665 (31.5)922 (35.9)Other0 (0.0)15 (0.7)16 (0.6)Only childYes392 (41.6)1098 (52.0)1291 (50.3)29.551<0.001No550 (58.4)1012 (48.0)1275 (49.7)Education levelTechnical secondary degree or below62 (6.6)63 (3.0)33 (1.3)385.760*<0.001Junior college568 (60.3)1008 (47.8)811 (31.6)Bachelor’s degree305 (32.4)1019 (48.3)1656 (64.5)Master’s degree or above7 (0.7)20 (0.9)66 (2.6)Years worked≤5194 (20.6)381 (18.1)726 (28.3)85.897*<0.0016~10294 (31.2)775 (36.7)847 (33.0)11~15204 (21.7)384 (18.2)428 (16.7)≥16250 (26.5)570 (27.0)565 (22.0)Professional titleSenior324 (34.4)687 (32.6)592 (23.1)120.281*<0.001Medium288 (30.6)640 (30.3)688 (26.8)Junior330 (30.5)783 (37.1)1286 (50.1)PostNurse901 (95.6)1977 (93.7)2432 (94.8)5.4010.067Head nurse41 (4.4)133 (6.3)134 (5.2)Employment typeOfficial staff495 (52.5)1068 (50.6)1213 (47.3)9.8780.043Contract staff437 (46.4)1016 (48.2)1317 (51.3)Temporary staff10 (1.1)26 (1.2)36 (1.4)DepartmentEmergency59 (6.3)168 (8.0)161 (6.3)77.396<0.001Outpatient159 (16.9)265 (12.6)194 (7.6)Ward724 (76.9)1677 (79.5)2211 (86.2)*: The Mann-Whitney U test was used\nComparison of the demographic characteristics of the nurses, n (%)\n*: The Mann-Whitney U test was used", "The incidence of WPV among nurses was 62.4 % in 2010, 69.0 % in 2015 and 68.4 % in 2019 (P=0.001). Compared with that in 2010, the incidence increased significantly in 2015 and 2019 (P<0.001, P=0.001). The details are shown in Table 2.\n\nTable 2Comparison of the incidence of WPV among nurses in different years, n (%)YearQuantityNot exposed to WPVExposed to WPVχ2PT1 (2010)942354 (37.6)588 (62.4)13.9790.001T2 (2015)2110655 (31.0)1455 (69.0)T3 (2019)2566812 (31.6)1754 (68.4)Multiple comparisonPT1−T212.575<0.001PT1−T310.9380.001PT2−T30.1950.681\nComparison of the incidence of WPV among nurses in different years, n (%)", "According to the current situation of WPV toward nurses in different years, the proportion of nurses who experienced sexual harassment (4.1 %) was the highest in 2010. The proportion of people who suffered verbal abuse (66.2 %) and threats (46.3 %) was the highest in 2015. Physical assault (13.1 %) was the highest in 2019. The overall comparison of WPV in different years and of different types showed significant differences (P<0.05). Further multiple comparisons of the data showed that there were highly statistically significant differences between the total number of WPV incidents in 2010 and 2015 and between the number of physical assault incidents in 2015 and 2019 (P<0.001). 
The details are shown in Table 3\n\nTable 3Comparison of different forms of WPV among nurses in different years, n (%)YearQuantityExposed to WPVVerbal abuseVerbal threatPhysical assaultSexual harassmentT1 (2010)942588 (62.4)570 (60.5)387 (41.1)93 (9.9)39 (4.1)T2 (2015)21101455 (69.0)1396 (66.2)977 (46.3)211 (10.0)55 (2.6)T3 (2019)25661754 (68.4)1651 (64.3)1116 (43.5)335 (13.1)102 (4.0)χ213.97928.58819.42326.86516.185P0.001<0.0010.004<0.0010.013Multiple comparisonPT1-T2<0.0010.0120.4400.2320.025T1-T30.0010.0050.2500.0480.303T2-T30.6810.0010.020<0.0010.010\nComparison of different forms of WPV among nurses in different years, n (%)", "Compared with the number in 2010, more nurses surveyed in 2015 and 2019 said that they had heard of WPV before, and the result was considered statistically significant (PT1−T2=0.020, PT1−T3=0.032). In 2010 and 2015, less about WPV was included in the nurses’ preservice training (PT1−T3=0.002, PT2−T3=0.001), but more nurses thought they would benefit from WPV training (PT1−T3=0.016, PT2−T3=0.010). The study also found that clinical nurses in 2015 and 2019 were more likely than nurses in 2010 to approve of hospitals setting up WPV processing facilities and to support a zero-tolerance policy toward WPV (P<0.001). In addition, more than 85 % of nurses involved in the three surveys expressed their willingness to attend WPV training and thought they would benefit from it. The details are shown in Table 4Table 4Comparison of cognition and attitudes toward WPV among nurses in different years, n (%)ItemsCategories201020152019χ2PP(T1)(T2)(T3)T1-T2T1-T3T2-T3Having heard of WPV beforeYes763 (81.0)1782 (84.5)2157 (84.7)6.1660.0460.0200.0320.717No179 (19.0)328 (15.5)916 (16.3)WPV is inevitable at workYes712 (75.6)1594 (75.5)1962 (76.5)0.6250.7321.0000.5910.470No230 (24.4)516 (24.5)604 (24.0)Thinking WPV is not worth the fussYes132 (14.0)264 (12.5)374 (14.6)4.2580.1190.2680.7040.044No810 (86.0)1846 (87.5)2192 (85.4)Thinking preservice training should include training programs on WPVYes922 (97.9)2058 (97.5)2457 (95.8)16.192<0.0010.6080.0020.001No20 (2.1)352 (2.5)109 (4.2)Willing to attend the WPV trainingYes874 (92.8)1968 (93.3)2405 (93.7)1.0830.5820.6420.3170.551No68 (7.2)142 (6.7)151 (6.3)Thinking training on WPV would be beneficialYes822 (87.3)1851 (87.7)2313 (90.1)9.2790.0100.7220.0160.010No120 (12.7)259 (12.3)253 (9.9)Thinking WPV coping management organizations are neededYes903 (95.9)2071 (98.2)2519 (98.2)19.082<0.001<0.001<0.0011.000No39 (4.1)39 (1.8)47 (1.8)Supporting a “zero-tolerance” policyYes618 (65.6)1882 (89.2)2310 (90.0)368.750<0.001<0.001<0.0010.360No324 (34.4)228 (10.8)256 (10.0)\nComparison of cognition and attitudes toward WPV among nurses in different years, n (%)", "This study found that in the past 9 years, the hospitals’ attitudes and responses to WPV towards nurses have greatly improved, as indicated by improvements in five aspects, namely, “offering training”, “encouraging reporting of WPV to supervisors”, “equipped with a WPV management department”, “handling WPV efficiently”, and “hospital’s attitudes” (P<0.001). However, most hospitals still have not developed training. Regarding hospitals’ emphasis on WPV, hospitals paid more attention to WPV toward physicians (P=0.013). 
The details are shown in Table 5\n\nTable 5Comparison of attitudes and responses of hospitals toward WPV among nurses in different years, n (%)ItemsCategories201020152019χ2PP(T1)(T2)(T3)T1-T2T1-T3T2-T3Offering trainingYes91 (9.7)487 (23.1)840 (32.7)202.794<0.001<0.001<0.001<0.001No851 (90.3)1623 (76.9)1726 (67.3)Encouraging reporting of WPV to supervisorsYes585 (62.1)1575 (74.6)1849 (72.1)51.255<0.001<0.001<0.0010.050No357 (37.9)535 (25.4)717 (27.9)Equipped with WPV management departmentYes432 (45.9)1215 (57.6)1451 (56.5)39.942<0.001<0.001<0.0010.495No510 (54.1)895 (42.4)1115 (43.5)Handling WPV efficientlyYes264 (28.0)791 (37.5)1193 (46.5)106.884<0.001<0.001<0.001<0.001No678 (72.0)1319 (62.5)1373 (53.5)Hospital’s attitudesProtecting the interests of staff91 (9.7)574 (27.2)775 (30.2)283.785<0.001<0.001<0.001<0.001Handling WPV fairly based on the facts425 (45.1)855 (40.5)1194 (46.5)Turning a blind eye283 (30.0)549 (26.0)470 (18.3)Punishing staff regardless of the cause143 (15.2)132 (6.3)127 (4.9)Degree of emphasisPhysician>Nurse403 (42.8)959 (45.5)1117 (43.5)12.6490.0130.002<0.0010.346Nurse>Physician0 (0.0)22 (1.0)23 (0.9)Physician=Nurse539 (57.2)1129 (53.5)1426 (55.6)\nComparison of attitudes and responses of hospitals toward WPV among nurses in different years, n (%)", "This study found that the overall incidence of WPV towards nurses in Suzhou general hospitals gradually increased from 62.4 to 68.4 % from 2010 to 2019 and significantly increased from 2010 to 2015. There were significant differences at first (P<0.05), but then a plateau was reached (P>0.05). The incidence reported here is lower than the overall WPV incidence of 76 % among 762 registered nurses identified in a study from the U.S. [30], higher than the incidence reported in a study conducted in Botswana (44.1 %) [31], and similar to the incidence reported in a study by Ren et al. (68.1 %) [32]. The incidence of WPV towards nurses varies by country, region, social background and sample size. However, the phenomenon of WPV against nurses has become increasingly common at home and abroad, in developing countries and developed countries. A comparison of the types of WPV nurses experience across the three study years showed that “verbal abuse” is the most common form of WPV, which is consistent with most surveys [33–36]. Verbal abuse and threats showed an upward trend and then a downward trend from 2010 to 2019, with the highest incidence in 2015. This was possibly related to the greater emphasis of hospitals on the training of nurses’ service attitudes and communication skills in recent years. To reduce the incidence of verbal abuse or threats, hospital management developed training on coping and communication strategies for nurses, including using amiable words, putting themselves in patients’ shoes and listening to the needs of patients from the patients’ point of view so that patients could feel understood and respected. However, the lack of corresponding legal constraints on verbal violence led to nurses being unable to obtain timely evidence and a fair hearing after experiencing this type of WPV, which made verbal violence hard to intervene in and prevent. The results of this survey showed that the incidence of physical assault against nurses has gradually increased over the past nine years. Though the incidence of physical violence was relatively low between 2010 and 2015 (P>0.05), the incidence of physical assault increased significantly from 2015 to 2019 (P<0.001). 
With the development of the economy and the improvement of living standards, people’s demand for medical treatment has changed; people are not satisfied with only cure of the disease itself but are increasingly attentive to the overall effect of treatment and attach increasing importance to the experience of the treatment process. When patients and their relatives have unpleasant experiences or their requirements are not met during medical treatment, it is easy for conflicts to arise between nurses and patients. If the hospital treatment system or process is not perfect, such conflict is likely to escalate and induce physical aggression. It is recommended that hospital management give attention to the various forms of WPV while focusing on the service attitude, the patient’s medical experience and the transformation of the service process. Hospital management should focus on not only implementing policies but also establishing hospital security forces and an effective cooperation mechanism between hospitals and public security systems, implementing measures such as increasing patrol times, improving security inspections, prohibiting the carrying of sharp or dangerous goods into hospitals and developing security training for staff [37]. Such practices would allow a hospital to intervene immediately in the event of violence, ensure the safety of medical personnel and reduce the occurrence of violence through multi-sector coordination. It was shown that the incidence of sexual harassment among nurses had a decreasing trend and then an increasing trend over time, with the incidence being basically flat in 2010 and 2019, which was due to the improvement of laws and the health care system in recent years. Since 2014, China has issued a series of laws and regulations, such as the Regulations on the Prevention and Treatment of Medical Disputes, the Memorandum of Cooperation on Joint Punishment of Persons who seriously endanger the Normal Medical Order, and the Basic Medical and Health Promotion Law, which have played a positive role in protecting legitimate rights by law, deterring illegal acts targeting medical staff and institutions and maintaining the order of hospitals. However, with the passage of time, some law enforcement officers have not truly implemented the laws and regulations, resulting in a rebound trend of hospital WPV since 2015. Therefore, it is suggested that law enforcement officers should sternly punish people who attack medical staff and promote the implementation of policies and laws so that they can effectively protect the legitimate rights and interests of nurses.\nIn the past 9 years, nurses’ cognition and attitudes toward WPV have improved. The rates of “having heard of WPV before”, “thinking WPV coping management organizations are needed” and “supporting a zero-tolerance policy” increased significantly from 2010 to 2015 (P<0.05) but showed no great changes from 2015 to 2019 (P>0.05), which was likely related to the formulation of corresponding preventive measures and provision of reasonable and effective education and training. The National Institute of Occupational Safety and Health [38] developed and delivered WPV prevention training course for nurses, and more than 90 % of participants thought the course filled gaps in their knowledge. It was also found that the reasons why some nurses were unable to effectively judge whether they had experienced WPV were that they lacked an understanding of the concept of WPV or had never heard the term WPV before. Ming et al. 
[39] found that after a randomized intervention for clinical nurses, the cognition and attitudes of the nurses who participated in violence training were higher than those who did not. Brann et al. [40] found that the level of cognition and knowledge of WPV among nurses was higher than before training; nurses could identify violence that they had experienced personally or as bystanders and no longer considered violence part of their work. Therefore, it is necessary to develop training on WPV for clinical nurses. In addition, the results of the three surveys showed that although the majority of nurses were willing to participate in WPV prevention training, there was less WPV content in their preservice training in 2019 than in 2010 and 2015 (PT1−T3=0.002, PT2−T3=0.001), which indicates to hospital management that in order to maximize the effect of education and training and improve nurses’ ability to identify and deal with WPV in clinical practice, it is necessary to strengthen the training on WPV and make it more systematic and routine, integrate WPV content into new nurses’ education and in-hospital education, regularly disseminate knowledge on violence to nurses, strengthen continuous management after the training, and promote the continuous improvement of the post-training results while meeting the learning needs of nurses [41]. Furthermore, it is necessary to carry out public education, publicize the key role of the hospital in solving problems in the medical system through various networks, and increase public understanding of the efforts of medical centers to improve the medical experience and providing services to patients to decrease WPV.\nIn this survey, hospitals’ attitudes and responses to nurses’ WPV improved significantly from 2010 to 2019, which was embodied in five aspects: “offering training”, “encouraging reporting of WPV to supervisors”, “equipped with a WPV management department”, “handling WPV efficiently” and “hospital’s attitudes” (P<0.05). The differences in the response rates for “offering training”, “handling WPV efficiently”, “hospital’s attitudes” at different times were statistically significant (P<0.001), which may be related to changes in the management philosophy of the hospital managers. At present, hospital management pays more attention to the safety of nurses, constantly explores and develops scientific and effective nurse-related WPV management measures, actively and effectively controls violence, addresses nurses’ suffering from violence and minimizes hospital violence caused by improper or unclear management policies [37]. However, less than half of respondents reported receiving WPV prevention-related training. Although the training incidence increased in 2019 compared with that in 2010 and 2015, it was only 32.7 %, which is consistent with the results of other studies [42, 43]. The results of the survey also showed that more than 95 % of nurses believed that preservice training should include training on WPV, and more than 90 % of nurses expressed a willingness to attend, which showed that the current hospital-related education and training is far from meeting the needs of clinical nurses. To reduce the incidence of WPV or minimize the harm of WPV, nurses’ WPV management intervention skills should be strengthened, with special attention given to developing training, exploring the content and form of training, increasing the initiative and enthusiasm of nurses, and improving nurses’ cognition, attitudes and coping abilities with regard to WPV. 
In addition, compared with the results from 2010, those from 2015 to 2019 showed that hospitals paid more attention to doctors who had experienced WPV, with a gradual upward trend (P<0.001). Some studies [19, 44, 45] suggest that organizational support and hospital management can reduce the negative effects of WPV on nurses. Without strong hospital attention, nurses may lack trust in the management department and experience reduced job satisfaction, which will aggravate adverse reactions to WPV. In conclusion, hospitals should treat WPV among doctors and nurses equally so that nurse-directed WPV can be resolved equally and fairly and the psychological gap of employees can be reduced.", "The incidence of WPV among nurses in Suzhou general hospitals increased in 2015 and 2019 compared to that in 2010. The main form of WPV is verbal abuse. Despite the continuous improvement of nurses′ awareness of WPV, there is still room for improvement in hospitals′ attitudes and responses to WPV, especially in terms of actively carrying out WPV training. In conclusion, hospital managers should aim to comprehensively understand the dynamics of WPV, especially the trends of violence over long time periods, to reduce the incidence of WPV among nurses." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, "conclusion" ]
[ "Nurse", "Workplace violence", "General hospital", "Comparison" ]
Background

According to the World Health Organization (WHO), workplace violence (WPV) is defined as "incidents where staff are abused, threatened or assaulted in circumstances related to their work, involving an explicit or implicit challenge to their safety, well-being or health". WPV can manifest as physical or psychological violence in different forms [1]. In recent years, as the reform of China's medical system has entered the "deep water zone", the doctor-patient relationship has become increasingly tense, and doctor-patient conflict has gradually emerged. As the main manifestation of doctor-patient conflict, hospital WPV is increasing yearly. As the main workforce in medical and health care institutions, nurses have become the main victims of WPV because the nature of their work requires direct contact and frequent communication with patients [2–5]. Studies from outside China have shown that approximately 70 %–90 % of nurses have experienced one or more types of WPV [6–8]. In China, cross-sectional investigations of WPV among nurses have reported an incidence of 71.9 %–84.6 %, similar to that reported in other countries [9–11]. These findings show that WPV among nurses has become a common phenomenon in both developing and developed countries.

WPV is a serious public health problem that not only causes physical and psychological injury to nurses but also has negative effects on hospitals [12, 13]. At the individual level, nurses may suffer varying degrees of physiological harm, such as chronic pain, elevated blood pressure, gastrointestinal disorders, nightmares and even death [14, 15]. Psychologically, in addition to feelings of grievance, anger and anxiety, nurses may experience insomnia, suicidal ideation and post-traumatic stress disorder [16–18]. At the hospital level, WPV reduces nurses' enthusiasm for their work and the quality of their work and increases their turnover rate, which affects the stability of the nursing team [19, 20].

In summary, WPV among nurses is characterized by a high incidence rate and great harmfulness. Since the 1980s, scholars at home and abroad have sought to reduce hospital WPV incidence and improve nurses' ability to respond through a series of measures, including improving medical laws and regulations [21], improving the hospital environment and treatment procedures [22], improving nurses' communication skills [23], and focusing on WPV training [24]. However, a review of the current domestic and foreign literature shows that the incidence of WPV among nurses remains high [25, 26]. Some scholars [27] have pointed out that, owing to the lack of long-term tracking data on the occurrence of WPV among nurses, existing prevention and intervention measures lack pertinence. A study by Spelten et al. [28] noted that reducing the incidence of WPV first requires a comprehensive understanding of the dynamic development of violence toward nurses, including the frequency and intensity of WPV events and the long-term effects of violence on staff. However, most existing WPV studies are short-term status surveys, and few studies have continuously compared the WPV status and characteristics of nurses in a given region.
This study aimed to analyze the current situation and characteristics of WPV among nurses in Suzhou general hospitals from 2010 to 2019 to provide information that can help hospital managers understand the status quo of WPV among nurses and trends in WPV over time.

Methods

Study design
In this cross-sectional study, structured questionnaires were used to investigate the occurrence of WPV among nurses in 6 general hospitals in Suzhou in 2010, 2015 and 2019.

Participants
A total of 942, 2,110 and 2,566 clinical nurses were selected from Suzhou general hospitals in November of 2010, 2015 and 2019, respectively, by the convenience sampling method.
Inclusion criteria
(1) Employment as an in-service nurse in a clinical department with direct contact with patients in daily work. (2) Professional certification. (3) Completion of at least 1 year of clinical nursing work. (4) Provision of a signed informed consent form and voluntary participation in the study.

Exclusion criteria
(1) Nurses off duty during the investigation, e.g., on maternity leave, sick leave, or vacation. (2) Non-regular nurses and personnel undergoing refresher training in the hospital.

Ethics approval
The study was approved by the Medical Ethics Committee of the First Affiliated Hospital of Soochow University in China (No. 2018062). The participants were made aware of the aim and procedures of the study, and they all signed an informed consent form. The researchers guaranteed the confidentiality of the subjects' information and that it would be used only for research purposes. The voluntary principle was always followed during the study, and the participants could leave the study at any time.

Measurement instruments

General information questionnaire
The questionnaire was designed by the research group to suit the research purpose and collected information on gender, age, education, whether an only child, marital status, hospital grade, department, years worked, employment type and professional title.

Revised version of the hospital WPV questionnaire
The questionnaire designed by Shengyong Wang [29] of Jinan University includes 33 entries in 4 dimensions: the frequency of WPV occurrence in the past year, the most impressive WPV experience in the past year, cognition and attitudes toward WPV, and the attitudes and responses of hospitals to WPV. The questionnaire has been widely used in clinical practice and has good reliability and validity; its test-retest reliability was 0.803, and the average content validity of each item was 0.916.
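The paper reports the questionnaire's test-retest reliability (0.803) and average item content validity (0.916) without describing how these were computed. As a minimal illustration only, the sketch below assumes a Pearson correlation between two administrations for test-retest reliability and an item-level content validity index (I-CVI) averaged across items (S-CVI/Ave) for content validity; all data and function names are hypothetical, not taken from the study.

import numpy as np

def test_retest_reliability(scores_t1, scores_t2):
    # Pearson correlation between two administrations of the same instrument
    return np.corrcoef(scores_t1, scores_t2)[0, 1]

def scvi_ave(expert_ratings):
    # S-CVI/Ave: mean over items of the proportion of experts rating the
    # item 3 or 4 on a 4-point relevance scale (the I-CVI); rows = items
    relevant = np.asarray(expert_ratings) >= 3
    icvi = relevant.mean(axis=1)  # one I-CVI per item
    return icvi.mean()

# Hypothetical data: 10 nurses scored twice; 5 experts rating 33 items
rng = np.random.default_rng(0)
t1 = rng.normal(50, 10, 10)
t2 = t1 + rng.normal(0, 5, 10)
ratings = rng.integers(1, 5, size=(33, 5))  # ratings 1-4
print(test_retest_reliability(t1, t2))
print(scvi_ave(ratings))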
Data collection
Two weeks before the beginning of the study, the head of the research team (which included 2 postgraduate tutors, 6 postgraduates, and 4 undergraduates) contacted the managers of each hospital by telephone or e-mail to request the support of the corresponding person in charge, and a notice was published on each hospital's website to recruit participants. The notice covered the location, time, purpose, significance, confidentiality principle and precautions of the questionnaire. Nurses willing to participate could go to a designated conference room to receive the questionnaire and complete it anonymously. Each hospital was staffed with 2 research team members who could give guidance at any time if a nurse had questions while completing the questionnaire. After completing the questionnaire, the nurse placed it into the questionnaire recovery box in the conference room. At 6:00 p.m., after the nurses had completed the questionnaires, the recovery box was sealed by members of the research group and sent to the group's questionnaire archives for safekeeping.

Statistical analysis
Data management and analysis were performed using SPSS 18.0. The count data are described as frequencies and percentages, and the quantitative data are reported as means and standard deviations. The chi-square test was used to analyze the general data of the nurses from the different time periods, the incidence of WPV, nurses' cognition and attitudes regarding WPV, and the hospitals' attitudes toward WPV and corresponding measures. The Mann-Whitney U test was used to analyze the demographic data. Statistical significance was set at P<0.05.
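For readers who want to reproduce this style of analysis outside SPSS, the sketch below shows a Python equivalent of the two tests named above, assuming scipy is available; the input counts and category codes are illustrative, not the study's raw data.

import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Chi-square test on a years x categories table of counts (illustrative numbers)
table = np.array([[30, 70],
                  [45, 55],
                  [50, 50]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, P = {p:.4f}")

# Mann-Whitney U test on an ordered demographic variable (e.g. age group
# coded 1-4) compared between two survey years -- hypothetical codes
age_2010 = [1, 1, 2, 2, 3, 1, 4, 2, 3, 2]
age_2015 = [2, 2, 3, 3, 4, 2, 3, 4, 1, 3]
u, p_u = mannwhitneyu(age_2010, age_2015)
print(f"U = {u:.1f}, P = {p_u:.4f}")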
Results

Questionnaire response rate
In 2010, 2015 and 2019, 1,000, 2,132 and 2,700 questionnaires were distributed, respectively, and 942, 2,110 and 2,566 valid questionnaires were recovered, for effective response rates of 94.20 %, 98.97 % and 95.04 %.

Nurses' general information
A total of 5,618 nurses completed the three surveys, including 388 emergency nurses, 618 outpatient nurses and 4,612 ward nurses.
Table 1Comparison of the demographic characteristics of the nurses, n (%)ItemsCategories201020152019χ2/ZPAge (Years)<30506 (53.7)1080 (51.2)1311 (51.1)16.915*0.01030~39348 (36.9)737 (34.9)933 (36.4)40~4975 (8.0)236 (11.2)278 (10.8)≥5013 (1.4)57 (2.7)44 (1.7)GenderFemale894 (94.9)2049 (97.1)2396(93.4)34.249<0.001Male48 (5.1)61 (2.9)170(6.6)Marital statusMarried605 (64.2)1430 (67.8)1628 (63.4)17.2570.002Single337 (35.8)665 (31.5)922 (35.9)Other0 (0.0)15 (0.7)16 (0.6)Only childYes392 (41.6)1098 (52.0)1291 (50.3)29.551<0.001No550 (58.4)1012 (48.0)1275 (49.7)Education levelTechnical secondary degree or below62 (6.6)63 (3.0)33 (1.3)385.760*<0.001Junior college568 (60.3)1008 (47.8)811 (31.6)Bachelor’s degree305 (32.4)1019 (48.3)1656 (64.5)Master’s degree or above7 (0.7)20 (0.9)66 (2.6)Years worked≤5194 (20.6)381 (18.1)726 (28.3)85.897*<0.0016~10294 (31.2)775 (36.7)847 (33.0)11~15204 (21.7)384 (18.2)428 (16.7)≥16250 (26.5)570 (27.0)565 (22.0)Professional titleSenior324 (34.4)687 (32.6)592 (23.1)120.281*<0.001Medium288 (30.6)640 (30.3)688 (26.8)Junior330 (30.5)783 (37.1)1286 (50.1)PostNurse901 (95.6)1977 (93.7)2432 (94.8)5.4010.067Head nurse41 (4.4)133 (6.3)134 (5.2)Employment typeOfficial staff495 (52.5)1068 (50.6)1213 (47.3)9.8780.043Contract staff437 (46.4)1016 (48.2)1317 (51.3)Temporary staff10 (1.1)26 (1.2)36 (1.4)DepartmentEmergency59 (6.3)168 (8.0)161 (6.3)77.396<0.001Outpatient159 (16.9)265 (12.6)194 (7.6)Ward724 (76.9)1677 (79.5)2211 (86.2)*: The Mann-Whitney U test was used Comparison of the demographic characteristics of the nurses, n (%) *: The Mann-Whitney U test was used A total of 5,618 nurses completed the three surveys, including 388 emergency nurses, 618 outpatient nurses and 4,612 ward nurses. The age of the respondents ranged from 20 to 63 years, with an average age of 31.41 ± 5.82 years, and the proportion of respondents < 30 years old was 51.6 %. The number of years worked ranged from 1 to 37, with the average being 8.85 ± 7.45 years. Of the respondents, 23.2 % had worked ≤ 5 years; 52.8 % had a bachelor’s degree; and 42.7 % had junior professional titles. The demographic characteristics are presented in Table 1. 
Incidence of WPV among nurses in 2010, 2015 and 2019
The incidence of WPV among nurses was 62.4 % in 2010, 69.0 % in 2015 and 68.4 % in 2019 (P=0.001). Compared with that in 2010, the incidence increased significantly in 2015 and 2019 (P<0.001 and P=0.001, respectively). The details are shown in Table 2.

Table 2. Comparison of the incidence of WPV among nurses in different years, n (%)
Year        Quantity   Not exposed to WPV   Exposed to WPV   χ2       P
T1 (2010)   942        354 (37.6)           588 (62.4)       13.979   0.001
T2 (2015)   2110       655 (31.0)           1455 (69.0)
T3 (2019)   2566       812 (31.6)           1754 (68.4)
Multiple comparison: T1−T2: χ2=12.575, P<0.001; T1−T3: χ2=10.938, P=0.001; T2−T3: χ2=0.195, P=0.681
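As a reproducibility check, the statistics reported in Table 2 can be recovered from the counts themselves. The sketch below (assuming Python with scipy, which was not part of the original SPSS analysis) reproduces the overall 3×2 chi-square; the pairwise values match when the 2×2 comparisons are run without the Yates continuity correction, which appears to be how they were computed.

from scipy.stats import chi2_contingency

# (not exposed, exposed) counts taken from Table 2
counts = {
    "T1 (2010)": (354, 588),
    "T2 (2015)": (655, 1455),
    "T3 (2019)": (812, 1754),
}

# Overall 3x2 test: chi2 = 13.979, P = 0.001
chi2, p, dof, _ = chi2_contingency(list(counts.values()))
print(f"overall: chi2 = {chi2:.3f}, P = {p:.3f}")

# Pairwise 2x2 tests without continuity correction:
# 12.575 (P<0.001), 10.938 (P=0.001), 0.195 (P=0.681)
years = list(counts)
for i in range(len(years)):
    for j in range(i + 1, len(years)):
        chi2, p, _, _ = chi2_contingency(
            [counts[years[i]], counts[years[j]]], correction=False)
        print(f"{years[i]} vs {years[j]}: chi2 = {chi2:.3f}, P = {p:.3f}")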
Incidence of different forms of WPV among nurses in 2010, 2015 and 2019
Across the survey years, the proportion of nurses who experienced sexual harassment was highest in 2010 (4.1 %); the proportions who suffered verbal abuse (66.2 %) and verbal threats (46.3 %) were highest in 2015; and the proportion who experienced physical assault was highest in 2019 (13.1 %). The overall comparisons of WPV across years and across types showed significant differences (P<0.05). Multiple comparisons further showed highly significant differences between the total numbers of WPV incidents in 2010 and 2015 and between the numbers of physical assault incidents in 2015 and 2019 (P<0.001). The details are shown in Table 3.

Table 3. Comparison of different forms of WPV among nurses in different years, n (%)
Year        Quantity   Exposed to WPV   Verbal abuse   Verbal threat   Physical assault   Sexual harassment
T1 (2010)   942        588 (62.4)       570 (60.5)     387 (41.1)      93 (9.9)           39 (4.1)
T2 (2015)   2110       1455 (69.0)      1396 (66.2)    977 (46.3)      211 (10.0)         55 (2.6)
T3 (2019)   2566       1754 (68.4)      1651 (64.3)    1116 (43.5)     335 (13.1)         102 (4.0)
χ2                     13.979           28.588         19.423          26.865             16.185
P                      0.001            <0.001         0.004           <0.001             0.013
Multiple comparison (P)
T1−T2                  <0.001           0.012          0.440           0.232              0.025
T1−T3                  0.001            0.005          0.250           0.048              0.303
T2−T3                  0.681            0.001          0.020           <0.001             0.010

Cognition and attitudes of nurses toward WPV in 2010, 2015 and 2019
Compared with 2010, significantly more nurses surveyed in 2015 and 2019 reported having heard of WPV before (P(T1−T2)=0.020, P(T1−T3)=0.032). Less WPV content was included in nurses' preservice training in 2019 than in 2010 and 2015 (P(T1−T3)=0.002, P(T2−T3)=0.001), but more nurses thought they would benefit from WPV training (P(T1−T3)=0.016, P(T2−T3)=0.010). Clinical nurses in 2015 and 2019 were also more likely than those in 2010 to approve of hospitals setting up WPV processing facilities and to support a zero-tolerance policy toward WPV (P<0.001). In addition, more than 85 % of nurses across the three surveys expressed a willingness to attend WPV training and thought they would benefit from it.
The details are shown in Table 4.

Table 4. Comparison of cognition and attitudes toward WPV among nurses in different years, n (%)
Items                                                     Category   2010 (T1)    2015 (T2)     2019 (T3)     χ2        P        P(T1−T2)   P(T1−T3)   P(T2−T3)
Having heard of WPV before                                Yes        763 (81.0)   1782 (84.5)   2157 (84.7)   6.166     0.046    0.020      0.032      0.717
                                                          No         179 (19.0)   328 (15.5)    916 (16.3)
WPV is inevitable at work                                 Yes        712 (75.6)   1594 (75.5)   1962 (76.5)   0.625     0.732    1.000      0.591      0.470
                                                          No         230 (24.4)   516 (24.5)    604 (24.0)
Thinking WPV is not worth the fuss                        Yes        132 (14.0)   264 (12.5)    374 (14.6)    4.258     0.119    0.268      0.704      0.044
                                                          No         810 (86.0)   1846 (87.5)   2192 (85.4)
Thinking preservice training should include
training programs on WPV                                  Yes        922 (97.9)   2058 (97.5)   2457 (95.8)   16.192    <0.001   0.608      0.002      0.001
                                                          No         20 (2.1)     352 (2.5)     109 (4.2)
Willing to attend the WPV training                        Yes        874 (92.8)   1968 (93.3)   2405 (93.7)   1.083     0.582    0.642      0.317      0.551
                                                          No         68 (7.2)     142 (6.7)     151 (6.3)
Thinking training on WPV would be beneficial              Yes        822 (87.3)   1851 (87.7)   2313 (90.1)   9.279     0.010    0.722      0.016      0.010
                                                          No         120 (12.7)   259 (12.3)    253 (9.9)
Thinking WPV coping management organizations are needed   Yes        903 (95.9)   2071 (98.2)   2519 (98.2)   19.082    <0.001   <0.001     <0.001     1.000
                                                          No         39 (4.1)     39 (1.8)      47 (1.8)
Supporting a "zero-tolerance" policy                      Yes        618 (65.6)   1882 (89.2)   2310 (90.0)   368.750   <0.001   <0.001     <0.001     0.360
                                                          No         324 (34.4)   228 (10.8)    256 (10.0)
Attitudes and responses of hospitals to WPV towards nurses in 2010, 2015 and 2019
Over the 9-year period, hospitals' attitudes and responses to WPV towards nurses improved greatly, as indicated by improvements in five aspects: "offering training", "encouraging reporting of WPV to supervisors", "equipped with a WPV management department", "handling WPV efficiently" and "hospital's attitudes" (P<0.001). However, most hospitals had still not developed training. Regarding the degree of emphasis on WPV, hospitals paid more attention to WPV toward physicians (P=0.013).
The details are shown in Table 5.

Table 5. Comparison of attitudes and responses of hospitals toward WPV among nurses in different years, n (%)
Items                                         Category                                  2010 (T1)    2015 (T2)     2019 (T3)     χ2        P        P(T1−T2)   P(T1−T3)   P(T2−T3)
Offering training                             Yes                                       91 (9.7)     487 (23.1)    840 (32.7)    202.794   <0.001   <0.001     <0.001     <0.001
                                              No                                        851 (90.3)   1623 (76.9)   1726 (67.3)
Encouraging reporting of WPV to supervisors   Yes                                       585 (62.1)   1575 (74.6)   1849 (72.1)   51.255    <0.001   <0.001     <0.001     0.050
                                              No                                        357 (37.9)   535 (25.4)    717 (27.9)
Equipped with WPV management department       Yes                                       432 (45.9)   1215 (57.6)   1451 (56.5)   39.942    <0.001   <0.001     <0.001     0.495
                                              No                                        510 (54.1)   895 (42.4)    1115 (43.5)
Handling WPV efficiently                      Yes                                       264 (28.0)   791 (37.5)    1193 (46.5)   106.884   <0.001   <0.001     <0.001     <0.001
                                              No                                        678 (72.0)   1319 (62.5)   1373 (53.5)
Hospital's attitudes                          Protecting the interests of staff         91 (9.7)     574 (27.2)    775 (30.2)    283.785   <0.001   <0.001     <0.001     <0.001
                                              Handling WPV fairly based on the facts    425 (45.1)   855 (40.5)    1194 (46.5)
                                              Turning a blind eye                       283 (30.0)   549 (26.0)    470 (18.3)
                                              Punishing staff regardless of the cause   143 (15.2)   132 (6.3)     127 (4.9)
Degree of emphasis                            Physician>Nurse                           403 (42.8)   959 (45.5)    1117 (43.5)   12.649    0.013    0.002      <0.001     0.346
                                              Nurse>Physician                           0 (0.0)      22 (1.0)      23 (0.9)
                                              Physician=Nurse                           539 (57.2)   1129 (53.5)   1426 (55.6)
Discussion

This study found that the overall incidence of WPV towards nurses in Suzhou general hospitals increased gradually from 62.4 % to 68.4 % between 2010 and 2019; the increase from 2010 to 2015 was significant (P<0.05), after which the incidence plateaued (P>0.05). The incidence reported here is lower than the overall WPV incidence of 76 % among 762 registered nurses in a study from the U.S. [30], higher than the incidence reported in a study conducted in Botswana (44.1 %) [31], and similar to the incidence reported by Ren et al. (68.1 %) [32]. The incidence of WPV towards nurses varies by country, region, social background and sample size; nevertheless, WPV against nurses has become increasingly common at home and abroad, in both developing and developed countries.

A comparison of the types of WPV nurses experienced across the three study years showed that verbal abuse is the most common form of WPV, which is consistent with most surveys [33–36]. Verbal abuse and threats rose and then declined between 2010 and 2019, peaking in 2015. This was possibly related to hospitals' greater emphasis on training nurses' service attitudes and communication skills in recent years. To reduce the incidence of verbal abuse or threats, hospital management developed training on coping and communication strategies for nurses, including using amiable words, putting themselves in patients' shoes and listening to patients' needs from the patients' point of view so that patients could feel understood and respected. However, the lack of corresponding legal constraints on verbal violence meant that nurses could not obtain timely evidence and a fair hearing after experiencing this type of WPV, which made verbal violence hard to intervene in and prevent.

The results also showed that the incidence of physical assault against nurses gradually increased over the nine years. Although the change was not significant between 2010 and 2015 (P>0.05), the incidence of physical assault increased significantly from 2015 to 2019 (P<0.001).
With the development of the economy and the improvement of living standards, people's demands on medical treatment have changed: patients are no longer satisfied with cure of the disease alone but are increasingly attentive to the overall effect of treatment and to the experience of the treatment process. When patients and their relatives have unpleasant experiences or unmet requirements during medical treatment, conflicts can easily arise between nurses and patients, and if the hospital's treatment system or process is imperfect, such conflict is likely to escalate into physical aggression. It is recommended that hospital management attend to the various forms of WPV while focusing on service attitude, the patient's medical experience and the transformation of the service process. Hospital management should focus not only on implementing policies but also on establishing hospital security forces and an effective cooperation mechanism between hospitals and public security systems, with measures such as increasing patrol frequency, improving security inspections, prohibiting the carrying of sharp or dangerous goods into hospitals and developing security training for staff [37]. Such practices would allow a hospital to intervene immediately in the event of violence, ensure the safety of medical personnel and reduce the occurrence of violence through multi-sector coordination.

The incidence of sexual harassment among nurses decreased and then increased over time, being essentially flat between 2010 and 2019, which may reflect the improvement of laws and the health care system in recent years. Since 2014, China has issued a series of laws and regulations, such as the Regulations on the Prevention and Treatment of Medical Disputes, the Memorandum of Cooperation on Joint Punishment of Persons who Seriously Endanger the Normal Medical Order, and the Basic Medical and Health Promotion Law, which have played a positive role in protecting legitimate rights, deterring illegal acts targeting medical staff and institutions and maintaining order in hospitals. However, over time, some law enforcement officers have not fully implemented these laws and regulations, contributing to a rebound in hospital WPV since 2015. It is therefore suggested that law enforcement officers sternly punish people who attack medical staff and promote the implementation of policies and laws so as to effectively protect the legitimate rights and interests of nurses.

Over the past 9 years, nurses' cognition and attitudes toward WPV have improved. The rates of "having heard of WPV before", "thinking WPV coping management organizations are needed" and "supporting a zero-tolerance policy" increased significantly from 2010 to 2015 (P<0.05) but showed no great changes from 2015 to 2019 (P>0.05), which was likely related to the formulation of corresponding preventive measures and the provision of reasonable and effective education and training. The National Institute for Occupational Safety and Health [38] developed and delivered a WPV prevention training course for nurses, and more than 90 % of participants thought the course filled gaps in their knowledge. It was also found that some nurses were unable to judge effectively whether they had experienced WPV because they lacked an understanding of the concept of WPV or had never heard the term before. Ming et al.
[39] found that after a randomized intervention for clinical nurses, the cognition and attitudes of nurses who participated in violence training were better than those of nurses who did not. Brann et al. [40] found that nurses' cognition and knowledge of WPV were higher after training than before; nurses could identify violence that they had experienced personally or as bystanders and no longer considered violence part of their work. It is therefore necessary to develop WPV training for clinical nurses. In addition, the results of the three surveys showed that although the majority of nurses were willing to participate in WPV prevention training, there was less WPV content in their preservice training in 2019 than in 2010 and 2015 (P(T1-T3)=0.002, P(T2-T3)=0.001). This indicates that, to maximize the effect of education and training and to improve nurses' ability to identify and deal with WPV in clinical practice, hospital management needs to strengthen WPV training and make it more systematic and routine: integrate WPV content into new nurses' education and in-hospital education, regularly disseminate knowledge on violence to nurses, strengthen continuous management after training, and promote continuous improvement of post-training results while meeting nurses' learning needs [41]. Furthermore, it is necessary to carry out public education, publicize through various networks the key role of the hospital in solving problems in the medical system, and increase public understanding of medical centers' efforts to improve the medical experience and provide services to patients, thereby decreasing WPV. In this survey, hospitals' attitudes and responses to WPV against nurses improved significantly from 2010 to 2019, as reflected in five aspects: "offering training", "encouraging reporting of WPV to supervisors", "equipped with a WPV management department", "handling WPV efficiently" and "hospital's attitudes" (P<0.05). The differences in the response rates for "offering training", "handling WPV efficiently" and "hospital's attitudes" across years were statistically significant (P<0.001), which may be related to changes in hospital managers' management philosophy. At present, hospital management pays more attention to the safety of nurses, constantly explores and develops scientific and effective WPV management measures, actively and effectively controls violence, addresses nurses' suffering from violence and minimizes hospital violence caused by improper or unclear management policies [37]. However, less than half of respondents reported receiving WPV prevention-related training. Although the training rate in 2019 was higher than in 2010 and 2015, it was still only 32.7 %, consistent with the results of other studies [42, 43]. The survey also showed that more than 95 % of nurses believed that preservice training should include training on WPV, and more than 90 % expressed a willingness to attend, which shows that current hospital education and training fall far short of meeting the needs of clinical nurses. To reduce the incidence of WPV or minimize its harm, nurses' WPV management and intervention skills should be strengthened, with special attention given to developing training, exploring the content and form of training, increasing the initiative and enthusiasm of nurses, and improving nurses' cognition, attitudes and coping abilities with regard to WPV.
In addition, compared with the results from 2010, those from 2015 and 2019 showed that hospitals paid more attention to physicians than to nurses who had experienced WPV, with a gradual upward trend (P<0.001). Some studies [19, 44, 45] suggest that organizational support and hospital management can reduce the negative effects of WPV on nurses. Without strong attention from the hospital, nurses may lose trust in the management department and experience reduced job satisfaction, which aggravates adverse reactions to WPV. Hospitals should therefore treat WPV against doctors and nurses equally so that nurse-directed WPV can be resolved fairly and the perceived gap between staff groups can be narrowed. Conclusions: The incidence of WPV among nurses in Suzhou general hospitals was higher in 2015 and 2019 than in 2010. The main form of WPV was verbal abuse. Despite continuous improvement in nurses' awareness of WPV, there is still room for improvement in hospitals' attitudes and responses to WPV, especially in actively carrying out WPV training. Hospital managers should aim to comprehensively understand the dynamics of WPV, especially trends in violence over long time periods, in order to reduce the incidence of WPV among nurses.
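As an aside on the statistics above: the year-to-year comparisons in Table 5 are chi-square tests on contingency tables. A minimal sketch, assuming the "offering training" counts from Table 5 and that scipy is available:

```python
# Check one chi-square test from Table 5 ("offering training").
# Counts are taken from the table; scipy is assumed to be installed.
from scipy.stats import chi2_contingency

# Rows: Yes / No; columns: 2010, 2015, 2019
table = [[91, 487, 840],
         [851, 1623, 1726]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3g}")
# chi2 ~= 202.8 with 2 degrees of freedom, matching the 202.794 reported in Table 5.
```

Running this reproduces the reported overall statistic; the pairwise P values in the table correspond to the same test applied to each pair of survey years.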
Background: Workplace violence (WPV) among nurses has become an increasingly serious public health issue worldwide. Investigating the status quo and characteristics of WPV among nurses in different time periods can help hospital managers understand the current status of WPV and its trends over time. This study aimed to understand the current situation of WPV among nurses in Suzhou general hospitals from 2010 to 2019 and analyze changes over time. Methods: A cross-sectional study was conducted to investigate 942, 2,110 and 2,566 nurses in 6 fixed polyclinic hospitals in Suzhou in 2010, 2015 and 2019, respectively. This study used the revised version of the hospital WPV questionnaire. The count data are described as frequencies and percentages, and the measurement data are represented as means and standard deviations. The general data of nurses during different time periods, the incidence of WPV, nurses' cognition and attitudes toward WPV and the attitudes and measures of hospitals regarding WPV were analyzed by the chi-square test. Results: The incidence of WPV among nurses in Suzhou general hospitals in 2015 (69.0 %) and in 2019 (68.4 %) was higher than the incidence of 62.4 % in 2010 (P<0.05), and there were significant differences among periods in the specific types of violence (P<0.05). Nurses who participated in the surveys in 2015 and 2019 scored higher on "having heard of WPV before", "thinking WPV coping management organizations are needed" and "supporting a zero-tolerance policy" than those who participated in 2010 (P<0.05). The attitudes and responses of hospitals with regard to WPV among nurses have greatly improved, as evidenced by the results for the items "offering training", "encouraging reporting of WPV to supervisors", "equipped with a WPV managing department", "handling WPV efficiently" and "hospital's attitudes" (P<0.005). Conclusions: Despite an increase in nurses' awareness and attitudes regarding WPV and significant improvements in hospitals' attitudes and responses to WPV, the incidence of WPV remains high. Hospitals should continue to explore scientific training modes that are in accordance with the needs of nurses to reduce the incidence of WPV.
Background: According to the World Health Organization (WHO), workplace violence (WPV) is defined as "incidents where staff are abused, threatened or assaulted in circumstances related to their work, involving an explicit or implicit challenge to their safety, well-being or health". WPV can manifest as physical violence or psychological violence in different forms [1]. In recent years, as the reform of China's medical system has entered the "deep water zone", the doctor-patient relationship has become increasingly tense, and doctor-patient conflict has gradually emerged. Hospital WPV, the main manifestation of doctor-patient conflict, is increasing yearly. As the main workforce in medical and health care institutions, nurses have become the main victims of WPV because the nature of their work requires direct contact and frequent communication with patients [2–5]. Studies from outside China have shown that approximately 70 %-90 % of nurses have experienced one or more types of WPV [6–8]. In China, cross-sectional investigations of WPV among nurses have reported an incidence of 71.9 %-84.6 %, similar to that reported in other countries [9–11]. These findings show that WPV among nurses has become a common phenomenon in both developing and developed countries. WPV is a very serious public health problem that not only causes physical and psychological injury of varying degrees to nurses but also has negative effects on hospitals [12, 13]. At the individual level, nurses may suffer varying degrees of physiological harm, such as chronic pain, elevated blood pressure, gastrointestinal disorders, nightmares and even death [14, 15]. Psychologically, in addition to feelings of grievance, anger and anxiety, nurses may experience insomnia, suicidal ideation and post-traumatic stress disorder [16–18]. At the hospital level, WPV reduces nurses' enthusiasm for their work and the quality of their work and increases their turnover rate, which affects the stability of the nursing team [19, 20]. In summary, WPV among nurses is characterized by high incidence and serious harm. Since the 1980s, scholars at home and abroad have sought to reduce the incidence of hospital WPV and improve nurses' ability to respond through a series of measures, including improving medical-related laws and regulations [21], improving the hospital environment and treatment procedures [22], improving nurses' communication skills [23], and focusing on WPV training [24]. However, a review of the current domestic and foreign literature shows that the incidence of WPV among nurses remains high [25, 26]. Some scholars [27] have pointed out that because long-term tracking data on the occurrence of WPV among nurses are lacking, existing prevention and intervention measures lack specificity. A study by Spelten et al. [28] pointed out that reducing the incidence of WPV first requires a comprehensive understanding of the dynamic development of violence toward nurses, including determining the frequency and intensity of WPV events and assessing the long-term effects of violence on staff. However, most existing WPV studies are short-term status surveys, and few have continuously compared the WPV status and characteristics of nurses in a given region.
This study aimed to analyze the current situation and characteristics of WPV among nurses in Suzhou general hospitals from 2010 to 2019 to provide information that can help hospital managers understand the status quo of WPV among nurses and its trends over time. Conclusions: The incidence of WPV among nurses in Suzhou general hospitals was higher in 2015 and 2019 than in 2010. The main form of WPV was verbal abuse. Despite continuous improvement in nurses' awareness of WPV, there is still room for improvement in hospitals' attitudes and responses to WPV, especially in actively carrying out WPV training. Hospital managers should aim to comprehensively understand the dynamics of WPV, especially trends in violence over long time periods, in order to reduce the incidence of WPV among nurses.
Background: Workplace violence (WPV) among nurses has become an increasingly serious public health issue worldwide. Investigating the status quo and characteristics of WPV among nurses in different time periods can help hospital managers understand the current status of WPV and its trends over time. This study aimed to understand the current situation of WPV among nurses in Suzhou general hospitals from 2010 to 2019 and analyze changes over time. Methods: A cross-sectional study was conducted to investigate 942, 2,110 and 2,566 nurses in 6 fixed polyclinic hospitals in Suzhou in 2010, 2015 and 2019, respectively. This study used the revised version of the hospital WPV questionnaire. The count data are described as frequencies and percentages, and the measurement data are represented as means and standard deviations. The general data of nurses during different time periods, the incidence of WPV, nurses' cognition and attitudes toward WPV and the attitudes and measures of hospitals regarding WPV were analyzed by the chi-square test. Results: The incidence of WPV among nurses in Suzhou general hospitals in 2015 (69.0 %) and in 2019 (68.4 %) was higher than the incidence of 62.4 % in 2010 (P<0.05), and there were significant differences among periods in the specific types of violence (P<0.05). Nurses who participated in the surveys in 2015 and 2019 scored higher on "having heard of WPV before", "thinking WPV coping management organizations are needed" and "supporting a zero-tolerance policy" than those who participated in 2010 (P<0.05). The attitudes and responses of hospitals with regard to WPV among nurses have greatly improved, as evidenced by the results for the items "offering training", "encouraging reporting of WPV to supervisors", "equipped with a WPV managing department", "handling WPV efficiently" and "hospital's attitudes" (P<0.005). Conclusions: Despite an increase in nurses' awareness and attitudes regarding WPV and significant improvements in hospitals' attitudes and responses to WPV, the incidence of WPV remains high. Hospitals should continue to explore scientific training modes that are in accordance with the needs of nurses to reduce the incidence of WPV.
9,133
415
[ 693, 541, 32, 36, 63, 40, 91, 917, 45, 95, 211, 97, 47, 375, 117, 224, 338, 289, 1777 ]
21
[ "wpv", "nurses", "001", "years", "wpv nurses", "2015", "2010", "training", "2019", "hospital" ]
[ "violence addresses nurses", "phenomenon wpv nurses", "hospital violence caused", "workplace violence wpv", "incidence wpv nurses" ]
null
[CONTENT] Nurse | Workplace violence | General hospital | Comparison [SUMMARY]
null
[CONTENT] Nurse | Workplace violence | General hospital | Comparison [SUMMARY]
[CONTENT] Nurse | Workplace violence | General hospital | Comparison [SUMMARY]
[CONTENT] Nurse | Workplace violence | General hospital | Comparison [SUMMARY]
[CONTENT] Nurse | Workplace violence | General hospital | Comparison [SUMMARY]
[CONTENT] Cross-Sectional Studies | Follow-Up Studies | Humans | Nurses | Surveys and Questionnaires | Workplace | Workplace Violence [SUMMARY]
null
[CONTENT] Cross-Sectional Studies | Follow-Up Studies | Humans | Nurses | Surveys and Questionnaires | Workplace | Workplace Violence [SUMMARY]
[CONTENT] Cross-Sectional Studies | Follow-Up Studies | Humans | Nurses | Surveys and Questionnaires | Workplace | Workplace Violence [SUMMARY]
[CONTENT] Cross-Sectional Studies | Follow-Up Studies | Humans | Nurses | Surveys and Questionnaires | Workplace | Workplace Violence [SUMMARY]
[CONTENT] Cross-Sectional Studies | Follow-Up Studies | Humans | Nurses | Surveys and Questionnaires | Workplace | Workplace Violence [SUMMARY]
[CONTENT] violence addresses nurses | phenomenon wpv nurses | hospital violence caused | workplace violence wpv | incidence wpv nurses [SUMMARY]
null
[CONTENT] violence addresses nurses | phenomenon wpv nurses | hospital violence caused | workplace violence wpv | incidence wpv nurses [SUMMARY]
[CONTENT] violence addresses nurses | phenomenon wpv nurses | hospital violence caused | workplace violence wpv | incidence wpv nurses [SUMMARY]
[CONTENT] violence addresses nurses | phenomenon wpv nurses | hospital violence caused | workplace violence wpv | incidence wpv nurses [SUMMARY]
[CONTENT] violence addresses nurses | phenomenon wpv nurses | hospital violence caused | workplace violence wpv | incidence wpv nurses [SUMMARY]
[CONTENT] wpv | nurses | 001 | years | wpv nurses | 2015 | 2010 | training | 2019 | hospital [SUMMARY]
null
[CONTENT] wpv | nurses | 001 | years | wpv nurses | 2015 | 2010 | training | 2019 | hospital [SUMMARY]
[CONTENT] wpv | nurses | 001 | years | wpv nurses | 2015 | 2010 | training | 2019 | hospital [SUMMARY]
[CONTENT] wpv | nurses | 001 | years | wpv nurses | 2015 | 2010 | training | 2019 | hospital [SUMMARY]
[CONTENT] wpv | nurses | 001 | years | wpv nurses | 2015 | 2010 | training | 2019 | hospital [SUMMARY]
[CONTENT] wpv | nurses | incidence | violence | health | wpv nurses | doctor patient | doctor | work | main [SUMMARY]
null
[CONTENT] 001 | wpv | years | nurses | 0010 | 001 001 | different | different years | 2015 | wpv nurses different years [SUMMARY]
[CONTENT] wpv | especially | wpv especially | improvement | incidence wpv nurses | incidence wpv | incidence | nurses | training conclusion | especially trends violence [SUMMARY]
[CONTENT] wpv | nurses | 001 | questionnaire | 2015 | study | 2010 | wpv nurses | hospital | years [SUMMARY]
[CONTENT] wpv | nurses | 001 | questionnaire | 2015 | study | 2010 | wpv nurses | hospital | years [SUMMARY]
[CONTENT] WPV ||| WPV | WPV ||| WPV | Suzhou | 2010 | 2019 [SUMMARY]
null
[CONTENT] WPV | Suzhou | 2015 | 69.0 % | 2019 | 68.4 % | 62.4 % | 2010 | P<0.05 ||| 2015 | 2019 | WPV | WPV | zero | 2010 | P<0.05 ||| WPV | WPV | WPV | WPV [SUMMARY]
[CONTENT] WPV | WPV | WPV ||| WPV [SUMMARY]
[CONTENT] WPV ||| WPV | WPV ||| WPV | Suzhou | 2010 | 2019 ||| 942 | 2,110 | 2,566 | 6 | Suzhou | 2010 | 2015 | 2019 ||| WPV ||| ||| WPV | WPV | WPV ||| WPV | Suzhou | 2015 | 69.0 % | 2019 | 68.4 % | 62.4 % | 2010 | P<0.05 ||| 2015 | 2019 | WPV | WPV | zero | 2010 | P<0.05 ||| WPV | WPV | WPV | WPV ||| WPV | WPV | WPV ||| WPV [SUMMARY]
[CONTENT] WPV ||| WPV | WPV ||| WPV | Suzhou | 2010 | 2019 ||| 942 | 2,110 | 2,566 | 6 | Suzhou | 2010 | 2015 | 2019 ||| WPV ||| ||| WPV | WPV | WPV ||| WPV | Suzhou | 2015 | 69.0 % | 2019 | 68.4 % | 62.4 % | 2010 | P<0.05 ||| 2015 | 2019 | WPV | WPV | zero | 2010 | P<0.05 ||| WPV | WPV | WPV | WPV ||| WPV | WPV | WPV ||| WPV [SUMMARY]
Near-roadway pollution and childhood asthma: implications for developing "win-win" compact urban development and clean vehicle strategies.
23008270
The emerging consensus that exposure to near-roadway traffic-related pollution causes asthma has implications for compact urban development policies designed to reduce driving and greenhouse gases.
BACKGROUND
The burden of asthma attributable to the dual effects of near-roadway and regional air pollution was estimated, using nitrogen dioxide and ozone as markers of urban combustion-related and secondary oxidant pollution, respectively. We also estimated the impact of alternative scenarios that assumed a 20% reduction in regional pollution in combination with a 3.6% reduction or 3.6% increase in the proportion of the total population living near major roads, a proxy for near-roadway exposure.
METHODS
We estimated that 27,100 cases of childhood asthma (8% of total) in Los Angeles County (LAC) were at least partly attributable to pollution associated with residential location within 75 m of a major road. As a result, a substantial proportion of asthma-related morbidity is a consequence of near-roadway pollution, even if symptoms are triggered by other factors. Benefits resulting from a 20% regional pollution reduction varied markedly depending on the associated change in near-roadway proximity.
RESULTS
Our findings suggest that there are large and previously unappreciated public health consequences of air pollution in LAC and probably in other metropolitan areas with dense traffic corridors. To maximize health benefits, compact urban development strategies should be coupled with policies to reduce near-roadway pollution exposure.
CONCLUSIONS
[ "Adolescent", "Air Pollutants", "Air Pollution", "Asthma", "Child", "Child, Preschool", "Environmental Exposure", "Environmental Policy", "Government Regulation", "Humans", "Los Angeles", "Models, Theoretical", "Morbidity", "Nitrogen Dioxide", "Ozone", "Residence Characteristics", "Urban Renewal", "Vehicle Emissions" ]
3556611
null
null
Methods
We used population-attributable fractions to quantify the impact of air pollution on asthma-related outcomes in LAC for the year 2007 for children < 18 years of age. We followed an existing methodological framework (Künzli et al. 2008; Perez et al. 2009) that we adapted for this new study, as summarized below.

To estimate the prevalence of asthma attributable to near-roadway pollution exposure, we used a concentration–response function (CRF) from the Children's Health Study (CHS), a large population-based cohort in Southern California, in which living near major roadways, a proxy for traffic-related pollution exposure, was associated with increased prevalence of asthma (McConnell et al. 2006). Details on CHS study design and recruitment methods have been published previously (McConnell et al. 2006; Peters et al. 1999). To be consistent with the exposure assignment of the CRF study, we used the TeleAtlas MultiNet roads network (http://www.tomtom.com/en_gb/licensing/products/maps/multinet/) to map major LAC roads, defined as freeways, highways, or major arterial roads. These were then linked to census population data to derive the percentage of persons living within 75 m of these roads. For the present study we linked exposure to census population data given at the parcel level, which increased accuracy relative to linkage at the census block level used in a previous analysis (Perez et al. 2009). To be consistent with the prior CRF outcome definition, we used as background risk the asthma prevalence reported in the CHS (defined by use of controller medications in the previous year, or lifetime asthma with any wheeze in the previous year, or severe wheeze in the previous 12 months).

Regional pollutants including particulate matter, nitrogen dioxide (NO2) and ozone (O3) are among the many causes of acute exacerbation among children with asthma, regardless of the cause of asthma onset (Jackson et al. 2011). However, an important consideration is that among those children with asthma attributable to living near a major road, all subsequent exacerbation should be attributed to air pollution, regardless of the trigger, assuming that these children would not otherwise have had the disease (Künzli et al. 2008). Conceptually, the total burden of asthma due to near-source and regional pollution includes the number of yearly asthma exacerbations triggered by causes other than regional air pollution among children whose asthma was caused (at least in part) by near-roadway pollution (Figure 1). These exacerbations are in addition to those directly triggered by regional air pollution exposure among all children with asthma, including children whose asthma was caused by near-roadway exposure and children whose asthma was caused by something other than traffic proximity. Air pollution risk assessments typically calculate only the asthma exacerbation burden triggered directly by regional pollution exposures, regardless of the underlying cause of asthma, whereas we included the additional burden of disease among children with asthma caused by near-roadway exposure but with exacerbations triggered by factors other than regional pollution.

Figure 1. Conceptual model used to calculate asthma-related exacerbation attributable to air pollution for Los Angeles County, based on Künzli et al. (2008). The thick dashed line indicates children with asthma attributable to near-roadway exposure. The thick solid line indicates total exacerbations due to regional and near-roadway air pollution.
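The population-attributable-fraction arithmetic can be made concrete. Below is a minimal sketch using Levin's PAF formula with the 17.8% roadway-proximity prevalence and the 320,500-case total quoted in this section; the relative risk of 1.5 is an illustrative assumption, not the CHS coefficient the authors used.

```python
# Minimal sketch of population-attributable-fraction (PAF) arithmetic.
# RR = 1.5 is an illustrative assumption, not the CHS estimate; the exposure
# prevalence and case total are the LAC figures quoted in this section.

def paf(p_exposed: float, rr: float) -> float:
    """Levin's population-attributable fraction."""
    return p_exposed * (rr - 1.0) / (1.0 + p_exposed * (rr - 1.0))

p_exposed = 0.178      # proportion of LAC children living within 75 m of a major road
rr = 1.5               # hypothetical relative risk of asthma for near-roadway residence
total_cases = 320_500  # estimated childhood asthma cases in LAC

fraction = paf(p_exposed, rr)
print(f"PAF = {fraction:.3f}; attributable cases ~= {fraction * total_cases:,.0f}")
# With RR = 1.5 this gives roughly 8%, i.e. on the order of the ~27,100
# attributable cases reported in the Results.
```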
To avoid double counting the burden associated with correlated regional pollutants, we estimated exacerbation attributable to NO2 or O3 only. NO2 was selected to represent urban-scale combustion-related pollution because it is correlated with particulate mass and other regional pollutants associated with respiratory health effects in southern California (Gauderman et al. 2004). O3 is produced as a result of photo-oxidation that is uncorrelated with other regional pollutants in the Los Angeles air basin (Gauderman et al. 2004). The CRFs for bronchitis episodes among those with asthma, and for prevalent asthma attributable to near-roadway exposure, were derived from the CHS (McConnell et al. 2003, 2006) (Table 1). CRFs from appropriate studies of Southern California populations were not available for doctor visits, emergency department visits, or hospital admissions. Therefore, we applied CRFs used in a previous Southern California health impact assessment (Perez et al. 2009) or averaged the coefficient used in the previous analysis with the coefficient from a more recent study conducted in a similar population, as indicated in Table 1.

Table 1. Concentration–response functions (CRF) with 95% confidence intervals (CI) considered in the evaluation of air pollution burden.

The number of children < 18 years of age (> 2.5 million) was obtained from the American Community Survey (U.S. Census Bureau 2011). Background rates of the outcomes were obtained from the CHS or from local surveys (Table 2). Annual average daily concentrations of NO2 and O3 obtained from the 2007 U.S. Environmental Protection Agency Air Quality System (AQS) (U.S. Environmental Protection Agency 2009) and CHS monitoring stations were interpolated based on inverse distance-squared weighting to each census block group in the county to estimate population exposures. Because of the seasonality of school attendance and both the seasonal and day-of-week variability of O3, the O3 population exposure for school absences was based on 2007 daily maps, rather than annual maps, obtained from interpolated hourly ambient school-week concentrations projected to 2000 census block group centroids.

Table 2. Population size and baseline health outcome and exposure estimates used to evaluate the burden of asthma due to air pollution in LAC in 2007.

We estimated that 17.8% of LAC children lived within 75 m of major roads, and that the annual population-weighted exposure to NO2 was 23.3 ppb (24-hr average) and to O3 was 39.3 ppb (8-hr maximum) in LAC (Table 2). We assumed background concentrations of 4 ppb for NO2 annually and 38 ppb for 8-hr maximum O3 on all days, based on long-term measurements (1994–2003) from CHS monitoring stations in clean coastal locations (i.e., Lompoc, CA) (McConnell et al. 2003). [The average population-weighted annual O3 for LAC was near background because population exposures in the areas with high O3 are offset by population exposures in areas with high oxides of nitrogen (NOx) emissions and very low O3 concentrations, due to nitric oxide (NO) in fresh vehicular exhaust scavenging O3 in those areas.] We considered three near-roadway proximity exposure reduction scenarios (Table 3):

Table 3. Exposure reduction scenarios for near-roadway exposure, regional NO2 and O3, and corresponding reduction in childhood asthma cases attributable to near-roadway pollution exposure (based on a total of 320,500 children with asthma in LAC).
1. A reduction in annual concentrations of regional pollutants for each census block group to levels found in clean CHS communities (from 23.3 ppb to 4 ppb for NO2 and 39.3 ppb to 36.3 ppb for O3), in combination with a reduction in the proportion of children in the county living within 75 m of a major road from 17.8% to 0%.

2. A 20% reduction in the annual concentrations of regional pollutants for each census block group (from 23.3 ppb to 19.4 ppb for NO2 and 39.3 ppb to 38.7 ppb for O3), in combination with a 3.6% reduction in the proportion of all children in the county living within 75 m of a major road (from 17.8% to 14.2%, corresponding to a 20% decrease in the proportion of children currently living within 75 m).

3. A 20% reduction in regional pollutant concentrations in combination with a 3.6% increase in the proportion of children living within 75 m of a major road (from 17.8% to 21.2%).

Scenario 1 reflects the total burden of preventable illness from both exposures. At this time there is considerable uncertainty regarding the potential impact of compact urban growth strategies on near-roadway exposures, so scenarios 2 and 3 were selected assuming moderate reductions in regional pollutants from continued regulatory efforts and a moderate 20% increase or decrease in near-roadway exposure—a value that was chosen for illustration and could be refined using data from regional planners as they become available. Regional pollutant concentrations aggregated to the census block group level that exceeded background levels were reduced linearly, whereas we assumed that concentrations at or below the background level would be unaffected by changes in emissions.

There are intrinsic limitations and uncertainties in risk analysis. We estimated a 95% confidence interval (CI) derived from the propagation of the CIs for the CRFs to address uncertainty in these estimates. In addition, proximity to major roadways has uncertainty as a proxy for near-roadway pollution exposure that depends on traffic volume, the emissions of the vehicular fleet, and local meteorological factors. Therefore, we also estimated the total burden of asthma-related exacerbations associated with a 100% and a 20% reduction in population-weighted exposure to the near-roadway dispersion-modeled pollution mixture (instead of a change in roadway proximity in exposure scenarios 1 and 2 in Table 3) using the CHS CRF from an estimate of the association of asthma prevalence with dispersion-modeled near-roadway pollution exposure accounting for traffic volume and emission factors (McConnell et al. 2006). Specifically, we used modeled NOx to represent the incremental contribution of local traffic to a more homogeneous community background concentration of NOx that included both primary and secondary pollution resulting from long-range transport and regional atmospheric photochemistry. It is a marker for correlated pollutants in the near-roadway mixture (rather than the etiologic agent for near-roadway health effects). We developed new estimates of the population-weighted yearly average of local traffic-related NOx concentrations for 2007 in LAC using the CALINE4 dispersion model with the 2007 TeleAtlas MultiNet Roadway network and 2007 vehicle emission factors for Los Angeles from the EMFAC model (California Air Resources Board 2007).
Vehicle emission factors were developed for winter (55°F/50% relative humidity) and summer (75°F/50% relative humidity) conditions using average speeds of 65 mph on freeways and highways [FCC (functional class code) 1 and FCC2 class roads], 50 mph on major arterials (FCC3 class roads), and 30 mph on minor arterials and collectors (FCC4 roads). The model used year 2000 traffic volumes adjusted to 2007 vehicle miles traveled (VMT) provided by the California Department of Transportation (a 17.5% increase in VMT for LAC). Modeled NOx concentrations were estimated for the block group centroids. The CHS CRF was developed for the contribution of local traffic on non-freeways using an older functional road classification (FRC) scheme (McConnell et al. 2006) that is no longer available in a form that matches the current FCC classification we used. To minimize overestimation of population exposure to near-roadway pollution in LAC, we used exposure estimates from FCC3 roads (major arterials) as representative of the non-freeway roads used in developing the CHS CRF. We considered the impact of all near-FCC3-roadway NOx (corresponding to a scenario of 100% reduction in modeled near-roadway pollution at the block group centroid) and of a 20% decrease in population exposure. The latter corresponds to the 3.6% reduction in the total population of children within 75 m of a major roadway (a 20% reduction in the proportion of the total population living within 75 m) (Table 3).
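As a concrete illustration of the inverse distance-squared weighting used earlier in this section to assign monitor concentrations to census block-group centroids, here is a minimal sketch; the monitor coordinates and values are hypothetical.

```python
# Minimal sketch of inverse distance-squared weighting (IDW) from monitors
# to a block-group centroid. Coordinates and values are made up for illustration.

def idw(stations, point, power=2):
    """Interpolate a concentration at `point` from (x, y, value) monitor tuples."""
    num = den = 0.0
    for x, y, value in stations:
        d2 = (x - point[0]) ** 2 + (y - point[1]) ** 2
        if d2 == 0:
            return value  # point coincides with a monitor
        w = 1.0 / d2 ** (power / 2)  # power=2 gives inverse distance-squared
        num += w * value
        den += w
    return num / den

monitors = [(0.0, 0.0, 20.0), (10.0, 0.0, 30.0), (0.0, 10.0, 25.0)]  # hypothetical
print(idw(monitors, (3.0, 4.0)))  # interpolated NO2 (ppb) at a centroid
```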
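The scenario arithmetic for regional pollutants described above (linear reduction of the portion of a concentration above background, no change at or below background) can also be sketched directly; the concentrations below are the population-weighted means and clean-community levels quoted in this section.

```python
# Sketch of the "reduce toward clean-community levels" scenario arithmetic.
# Current population-weighted means and clean-community levels are taken from
# the text; 20% is the regional reduction assumed in scenarios 2 and 3.

def rolled_back(conc: float, background: float, fraction: float) -> float:
    """Reduce the portion of a concentration above background by `fraction`,
    leaving concentrations at or below background unchanged."""
    if conc <= background:
        return conc
    return background + (1.0 - fraction) * (conc - background)

print(rolled_back(23.3, 4.0, 0.20))   # NO2: 23.3 ppb -> ~19.4 ppb, as in scenario 2
print(rolled_back(39.3, 36.3, 0.20))  # O3: 39.3 ppb -> 38.7 ppb, as in scenario 2
```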
Results
Of the estimated 320,500 cases of childhood asthma in the county (based on a Southern California prevalence of 0.1257) (Table 2), we estimated that approximately 27,100 (8%; 95% CI: 2%, 16%) were caused at least in part by residential proximity to a major road (Table 3). A 3.6% reduction in the proportion of children living within 75 m of a major road (scenario 2) would result in 5,900 fewer asthma cases (95% CI: 1,000, 11,800) caused by near-roadway exposure (2% of the total cases in the county; 95% CI: 0.3%, 4%), whereas a 3.6% increase (scenario 3) would result in an additional 5,900 asthma cases caused by near-roadway exposure (Table 3).

Estimates of yearly asthma-related exacerbations attributable to air pollution are presented in Table 4 for NO2 and O3, with results partitioned by cause of asthma (traffic proximity or other factors) using the conceptual scheme in Figure 1. Using this approach and assuming no near-roadway exposure and a reduction of NO2 or O3 to background levels (scenario 1), we estimated that 70,200 episodes (95% CI: 31,000, 95,700) of bronchitis (56.6% of all episodes) could be attributed to the combined effects of traffic proximity and regional NO2, our marker for the regional mixture of combustion-related pollutant exposure. The estimated burdens of other exacerbations attributable to exposure were smaller, ranging from 10.6% for emergency department visits to 19.5% for hospital admissions. The overall impact of air pollution was highly sensitive to the inclusion of exacerbations attributable to asthma triggers other than regional NO2 among children whose asthma was caused by traffic proximity. For example, we estimated that 65,100 bronchitis episodes were triggered by regional air pollution, including 5,600 episodes among children whose asthma was caused by near-roadway air pollution and 59,500 episodes among children whose asthma was caused by something other than air pollution (Table 4). In addition, we estimated that 5,100 bronchitis episodes (4.1% of the total) triggered by something other than regional air pollution would not have occurred if children had not lived within 75 m of a major road, because these children would not have developed asthma to begin with. These episodes would not have been accounted for if estimated effects of traffic proximity on asthma prevalence had not been considered. The estimated impact of such cases was especially large for outcomes that were weakly associated with regional NO2 (e.g., emergency department visits for asthma, with an estimated CRF of 1.0011 per 1 ppb NO2) (Table 1), because exacerbations triggered by causes other than air pollution among children with asthma caused by traffic proximity account for a larger proportion of all episodes.

Table 4. Yearly number (%) of childhood asthma-related exacerbations attributable to near-roadway pollution in combination with regional NO2 and regional O3 above background levels in clean communities (scenario 1, traffic proximity model) (95% confidence intervals).

The estimate of the burden of bronchitis episodes directly attributable to O3 and traffic proximity pollution (18,790 preventable episodes triggered by regional air pollution under the usual risk assessment approach) was considerably less than for NO2 (65,100 episodes triggered by air pollution) (Table 4).
O3 CRFs for outcomes other than bronchitis and missed school days were modest (e.g., for emergency department visits, 1.024 for each 10-ppb increase in O3) (Table 1); therefore, accounting for all exacerbations among children whose asthma was caused by traffic proximity led to substantial increases in estimates of disease burden attributable to O3—from 133 to 1,730 emergency department visits (Table 4).

A partial reduction in the burden of asthma exacerbation could be achieved with a more modest reduction in population exposure to traffic proximity and regional pollutants. Figure 2 shows the estimated numbers of exacerbations (and percent of total) attributable to a 20% decrease in regional air pollution (either NO2 or O3) in combination with a 3.6% reduction in the proportion of all children living near major roadways (scenario 2) or a 3.6% increase in traffic proximity that might result from compact urban development (scenario 3). We report the estimated impacts for the regional pollutant with the strongest association with each outcome—based on a reduction in NO2 for all outcomes except school absences, which were estimated for a reduction in O3. The net impact of each air pollution reduction scenario depended on the total number of outcomes per year and the strength of the CRF for the regional pollutant. Thus, a 3.6% reduction in the proportion of all children in the county who lived near major roadways (scenario 2) would result in a 1.9% decrease in each outcome in Figure 2 (corresponding to the reduction in cases of asthma caused by near-roadway exposure, regardless of whether the exacerbation was triggered by air pollution or by other factors). For bronchitis episodes, which are relatively common and have a large CRF for NO2 (1.070 per 1-ppb increase) (Table 1), we estimate that a total of 19,900 exacerbations (16.1% of all episodes annually) could be prevented by a 20% NO2 reduction, most of which (17,600 episodes, 14.2%) were attributable to the reduction of episodes triggered by regional pollution among children whose asthma was not caused by air pollution, rather than to bronchitis episodes among children whose asthma was caused by near-traffic pollution (scenario 2 in Figure 2). If NO2 were reduced by 20% but traffic proximity increased by 3.6% (scenario 3), we estimated a smaller reduction in total bronchitis episodes (15,580 episodes, 12.6% of the total). In contrast, for emergency department visits (with the weakest CRF for NO2, 1.0011 per 1-ppb increase), most of the benefit under scenario 2 resulted from the estimated reduction in the number of children with asthma caused by traffic proximity pollution, whereas an increase in the population living near major roadways under scenario 3 resulted in an increase in emergency department visits that exceeded the modest benefit from NO2 reduction, resulting in a net increase in the total number of visits. Other outcomes demonstrated intermediate patterns for the relative impact of a reduction in regional pollution and a change in traffic proximity, with the net absolute impact depending also on the baseline frequency of the outcome in the population (Figure 2).

Figure 2. Number and percentage of exacerbations attributable to changes in pollutant levels under different exposure scenarios. Scenario 2 assumes a 3.6% decrease in children living near major roads and a 20% decrease in regional pollution. Scenario 3 assumes a 3.6% increase in children living near major roads and a 20% decrease in regional pollution.
Regional pollution is represented by NO2 for all outcomes except O3 for school absences. Bars to the left and right of zero represent reductions and increases in the burden of asthma exacerbation, respectively, compared with baseline. (A) Bronchitis episodes, (B) hospital admissions, (C) emergency department visits, (D) clinic visits, and (E) school absences.

In additional analyses we used the CRF from a dispersion-modeled estimate of the effect of near-roadway pollutant mixture exposure on asthma (2.07; 95% CI: 1.12, 3.83 per 16 ppb) (McConnell et al. 2006) and an estimated annual average contribution of near-roadway pollution derived from major arterials, expressed as 2.6 ppb of NOx, or 8% of all near-roadway pollution generated from freeways, highways, state highways, minor arterials, and collectors. We estimated that 39,800 cases of prevalent asthma (12% of the total; 95% CI: 2%, 20%) were caused by near-roadway pollution based on this analysis, compared with 27,100 cases based on roadway proximity (scenario 1), and that a 20% decrease in near-roadway exposure (scenario 2) would result in 8,400 fewer cases (3% of all cases in the county; 95% CI: 0.4%, 4%) compared with 5,900 fewer cases based on roadway proximity (Table 3). The total burden of asthma-related exacerbation attributable to the combination of near-roadway and regional pollution under these scenarios is shown in Supplemental Material, Tables S1 and S2 (http://dx.doi.org/10.1289/ehp.1104785).
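The partitioning of exacerbations described above (Figure 1 and Table 4) reduces to simple bookkeeping; a minimal sketch using the bronchitis counts quoted in this section:

```python
# Sketch of the exacerbation-partitioning arithmetic behind Table 4 (bronchitis, NO2),
# using the counts quoted in the text.
regional_trigger_roadway_asthma = 5_600   # triggered by regional NO2; asthma caused by near-roadway exposure
regional_trigger_other_asthma = 59_500    # triggered by regional NO2; asthma from other causes
other_trigger_roadway_asthma = 5_100      # other triggers, but asthma caused by near-roadway exposure

usual_risk_assessment = regional_trigger_roadway_asthma + regional_trigger_other_asthma
total_attributable = usual_risk_assessment + other_trigger_roadway_asthma
print(usual_risk_assessment, total_attributable)
# 65,100 vs 70,200: the extra 5,100 episodes are counted only when asthma
# caused by near-roadway exposure is included, per the Figure 1 scheme.
```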
null
null
[ "Supplemental Material" ]
[ "Click here for additional data file." ]
[ null ]
[ "Methods", "Results", "Discussion", "Supplemental Material" ]
[ "We used population-attributable fractions to quantify the impact of air pollution on asthma-related outcomes in LAC for year 2007 for children < 18 years of age. We followed an existing methodological framework (Künzli et al. 2008; Perez et al. 2009) that we adapted for this new study as summarized below.\nTo estimate the prevalence of asthma attributable to near-roadway pollution exposure, we used a concentration–response function (CRF) from the Children’s Health Study (CHS), a large population-based cohort in Southern California, in which living near major roadways, a proxy for traffic-related pollution exposure, was associated with increased prevalence of asthma (McConnell et al. 2006). Details on CHS study design and recruitment methods have been published previously (McConnell et al. 2006; Peters et al. 1999). To be consistent with the exposure assignment of the CRF study, we used the TeleAtlas MultiNet roads network (http://www.tomtom.com/en_gb/licensing/products/maps/multinet/) to map major LAC roads, defined as freeways, highways, or major arterial roads. These were then linked to census population data to derive the percentage of persons living within 75 m of these roads. For the present study we linked exposure to census population data given at the parcel level, which increased accuracy relative to linkage at the census block level used in a previous analysis (Perez et al. 2009). To be consistent with the prior CRF outcome definition, we used as background risk the asthma prevalence reported in the CHS (defined by use of controller medications in the previous year or lifetime asthma with any wheeze in the previous year or severe wheeze in the previous 12 months).\nRegional pollutants including particulate matter, nitrogen dioxide (NO2) and ozone (O3) are among the many causes of acute exacerbation among children with asthma, regardless of the cause of asthma onset (Jackson et al. 2011). However, an important consideration is that among those children with asthma attributable to living near a major road, all subsequent exacerbation should be attributed to air pollution, regardless of the trigger, assuming that these children would not otherwise have had the disease (Künzli et al. 2008). Conceptually, the total burden of asthma due to near-source and regional pollution includes the number of yearly asthma exacerbations triggered by causes other than regional air pollution among children whose asthma was caused (at least in part) by near-roadway pollution (Figure 1). These exacerbations are in addition to those directly triggered by regional air pollution exposure among all children with asthma, including children whose asthma was caused by near-roadway exposure and children whose asthma was caused by something other than traffic proximity. Air pollution risk assessments typically calculate only the asthma exacerbation burden triggered directly by regional pollution exposures, regardless of the underlying cause of asthma, whereas we included the additional burden of disease among children with asthma caused by near-roadway exposure but with exacerbations triggered by factors other than regional pollution.\nConceptual model used to calculate asthma-related exacerbation attributable to air pollution for Los Angeles County based on Künzli et al. (2008). The thick dashed line indicates children with asthma attributable to near-roadway exposure. 
The thick solid line indicates total exacerbations due to regional and near-roadway air pollution.\nTo avoid double counting the burden associated with correlated regional pollutants, we estimated exacerbation attributable to NO2 or O3 only. NO2 was selected to represent urban-scale combustion-related pollution because it is correlated with particulate mass and other regional pollutants associated with respiratory health effects in southern California (Gauderman et al. 2004). O3 is produced as a result of photo-oxidation that is uncorrelated with other regional pollutants in the Los Angeles air basin (Gauderman et al. 2004).\nThe CRFs for bronchitis episodes among those with asthma, and for prevalent asthma attributable to near-roadway exposure, were derived from the CHS (McConnell et al. 2003, 2006) (Table 1). CRFs from appropriate studies of Southern California populations were not available for doctor visits, emergency department visits, or hospital admissions. Therefore, we applied CRFs used in a previous Southern California health impact assessment (Perez et al. 2009) or averaged the coefficient used in the previous analysis with the coefficient from a more recent study conducted in a similar population, as indicated in Table 1.\nConcentration–response functions (CRF) with 95% confidence intervals (CI) considered in the evaluation of air pollution burden.\nThe number of children < 18 years of age (> 2.5 million) was obtained from the American Community Survey (U.S. Census Bureau 2011). Background rates of the outcomes were obtained from the CHS or from local surveys (Table 2). Annual average daily concentrations of NO2 and O3 obtained from the 2007 U.S. Environmental Protection Agency Air Quality System (AQS) (U.S. Environmental Protection Agency 2009) and CHS monitoring stations were interpolated based on inverse distance-squared weighting to each census block group in the county to estimate population exposures. Because of the seasonality of school attendance and both the seasonal and day-of-week variability of O3, the O3 population exposure for school absences was based on 2007 daily maps, rather than annual maps, obtained from interpolated hourly ambient school-week concentrations projected to 2000 census block group centroids.\nPopulation size and baseline health outcome and exposure estimates used to evaluate the burden of asthma due to air pollution in LAC in 2007.\nWe estimated that 17.8% of LAC children lived within 75 m of major roads, and that the annual population-weighted exposure to NO2 was 23.3 ppb (24-hr average) and to O3 was 39.3 ppb (8-hr maximum) in LAC (Table 2). We assumed background concentrations of 4 ppb for NO2 annually and 38 ppb for 8-hr maximum O3 on all days, based on long-term measurements (1994–2003) from CHS monitoring stations in clean coastal locations (i.e., Lompoc, CA) (McConnell et al. 2003). [The average population-weighted annual O3 for LAC was near background because population exposures in the areas with high O3 are offset by population exposures in areas with high oxides of nitrogen (NOx) emissions and very low O3 concentrations, due to nitric oxide (NO) in fresh vehicular exhaust scavenging O3 in those areas.] 
We considered three near-roadway proximity exposure reduction scenarios (Table 3):\nExposure reduction scenarios for near-roadway exposure, regional NO2 and O3, and corresponding reduction in childhood asthma cases attributable to near-roadway pollution exposure (based on total of 320,500 children with asthma in LAC).\nA reduction in annual concentrations of regional pollutants for each census block group to levels found in clean CHS communities (from 23.3 ppb to 4 ppb for NO2 and 39.3 ppb to 36.3 ppb for O3) in combination with a reduction in the proportion of children in the county living within 75 m of a major road from 17.8% to 0%\nA 20% reduction in the annual concentrations of regional pollutants for each census block group (from 23.3 ppb to 19.4 ppb for NO2 and 39.3 ppb to 38.7 ppb for O3) in combination with a 3.6% reduction in the proportion of all children in the county living within 75 m of a major road (from 17.8% to 14.2%, corresponding to a 20% decrease in the proportion of children currently living within 75 m)\nA 20% reduction in regional pollutant concentrations in combination with a 3.6% increase in the proportion of children living within 75 m of a major road (from 17.8% to 21.2%).\nScenario 1 reflects the total burden of preventable illness from both exposures. At this time there is considerable uncertainty regarding the potential impact of compact urban growth strategies on near-roadway exposures, so scenarios 2 and 3 were selected assuming moderate reductions in regional pollutants from continued regulatory efforts and a moderate 20% increase or decrease in near-roadway exposure—a value that was chosen for illustration and could be refined using data from regional planners as they become available. Regional pollutant concentrations aggregated to the census block group level that exceeded background levels were reduced linearly, whereas we assumed that concentrations at or below the background level would be unaffected by changes in emissions.\nThere are intrinsic limitations and uncertainties in risk analysis. We estimated a 95% confidence interval (CI) derived from the propagation of the CIs for the CRFs to address uncertainty in these estimates. In addition, proximity to major roadways has uncertainty as a proxy for near-roadway pollution exposure that depends on traffic volume, the emissions of the vehicular fleet, and local meteorological factors. Therefore, we also estimated the total burden of asthma-related exacerbations associated with a 100% and a 20% reduction in population-weighted exposure to the near-roadway dispersion-modeled pollution mixture (instead of a change in roadway proximity in exposure scenarios 1 and 2 in Table 3) using the CHS CRF from an estimate of the association of asthma prevalence with dispersion-modeled near-roadway pollution exposure accounting for traffic volume and emission factors (McConnell et al. 2006). Specifically, we used modeled NOx to represent the incremental contribution of local traffic to a more homogeneous community background concentration of NOx that included both primary and secondary pollution resulting from long-range transport and regional atmospheric photochemistry. It is a marker for correlated pollutants in the near-roadway mixture (rather than the etiologic agent for near-roadway health effects). 
We developed new estimates of population-weighted yearly average of local traffic-related NOx concentrations for 2007 in LAC using the CALINE4 dispersion model with the 2007 TeleAtlas MultiNet Roadway network, and 2007 vehicle emission factors for Los Angeles from the EMFAC model (California Air Resources Board 2007). Vehicle emission factors were developed for winter (55oF/50% relative humidity) and summer (75oF/50% relative humidity) conditions using average speeds of 65 mph on freeways and highways [FCC (functional class code) 1 and FCC2 class roads], 50 mph on major arterials (FCC3 class roads), and 30 mph on minor arterials and collectors (FCC4 roads). The model used year 2000 traffic volumes adjusted to 2007 VMT provided by the California Department of Transportation (17.5% increase in VMT for LAC). Modeled NOx concentrations were estimated for the block group centroids. The CHS CRF was developed for the contribution of local traffic on non-freeways using an older road functional roadway classification (FRC) scheme (McConnell et al. 2006) that is no longer available in a form that matches the most current FCC classification that we used. To minimize overestimation of population exposure to near-roadway exposure in LAC, we used estimates of exposure from FCC3 (major arterials) as representative of non-freeway roads used in developing the CHS CRF. We considered the impact of all near-FCC3 roadway NOx (corresponding to a scenario of 100% reduction in modeled near-roadway pollution at the block group centroid) and of a 20% decrease in population exposure. This corresponds to the 3.6% reduction in the total population of children within 75 m of a major roadway (a 20% reduction the proportion of the total population living within 75 m) (Table 3).", "Of the estimated 320,500 cases of childhood asthma in the county (based on Southern California prevalence of 0.1257) (Table 2), we estimated that approximately 27,100 (8%; 95% CI: 2%, 16%) were caused at least in part by residential proximity to a major road (Table 3). A 3.6% reduction in the proportion of children living within 75 m of a major road (scenario 2) would result in 5,900 fewer asthma cases (95% CI: 1,000, 11,800) caused by near-roadway exposure (2% of the total cases in the county; 95% CI: 0.3%, 4%), whereas a 3.6% increase (scenario 3) would result in an additional 5,900 asthma cases caused by near-roadway exposure (Table 3).\nEstimates of yearly asthma-related exacerbations attributable to air pollution are presented in Table 4 for NO2 and O3, with results partitioned by cause of asthma (traffic proximity or other factors) using the conceptual scheme in Figure 1. Using this approach and assuming no near-roadway exposure and a reduction of NO2 or O3 to background levels (scenario 1), we estimated that 70,200 episodes (95% CI: 31,000, 95,700) of bronchitis (56.6% of all episodes) could be attributed to the combined effects of traffic proximity and regional NO2, our marker for the regional mixture of combustion-related pollutant exposure. The estimated burdens of other exacerbations attributable to exposure were smaller, ranging from 10.6% for emergency department visits to 19.5% for hospital admissions. The overall impact of air pollution was highly sensitive to the inclusion of exacerbations attributable to asthma triggers other than regional NO2 among children whose asthma was caused by traffic proximity. 
For example, we estimated that 65,100 bronchitis episodes were triggered by regional air pollution, including 5,600 episodes among children whose asthma was caused by near-roadway air pollution and 59,500 episodes among children whose asthma was caused by something other than air pollution (Table 4). In addition, we estimated that 5,100 bronchitis episodes (4.1% of the total) triggered by something other than regional air pollution would not have occurred if children had not lived within 75 m of a major road, because these children would not have developed asthma to begin with. These episodes would not have been accounted for if estimated effects of traffic proximity on asthma prevalence had not been considered. The estimated impact of such cases was especially large for outcomes that were weakly associated with regional NO2 (e.g., emergency department visits for asthma with an estimated CRF of 1.0011 per 1 ppb NO2) (Table 1), because exacerbations triggered by causes other than air pollution among children with asthma caused by traffic proximity account for a larger proportion of all episodes.\nYearly number (%) of childhood asthma-related exacerbations attributable to near-roadway pollution in combination with regional NO2 and regional O3 above background levels in clean communities (scenario 1, traffic proximity model) (95% confidence intervals).a\nThe estimate of the burden of bronchitis episodes directly attributable to O3 and traffic proximity pollution (18,790 preventable episodes triggered by regional air pollution under the usual risk assessment approach) was considerably less than for NO2 (65,100 episodes triggered by air pollution) (Table 4). O3 CRFs for outcomes other than bronchitis and missed school days were modest (e.g., for emergency department visits, 1.024 for each 10-ppb increase in O3) (Table 1); therefore, accounting for all exacerbations among children whose asthma was caused by traffic proximity led to substantial increases in estimates of disease burden attributable to O3—from 133 to 1,730 emergency department visits (Table 4).\nA partial reduction in the burden of asthma exacerbation could be achieved with a more modest reduction in population exposure to traffic proximity and regional pollutants. Figure 2 shows the estimated numbers of exacerbations (and percent of total) attributable to a 20% decrease in regional air pollution (either NO2 or O3) in combination with a 3.6% reduction in the proportion of all children living near major roadways (scenario 2) or a 3.6% increase in traffic proximity that might result from compact urban development (scenario 3). We report the estimated impacts for the regional pollutant with the strongest association with each outcome—based on a reduction in NO2 for all outcomes except school absences, which were estimated for a reduction in O3. The net impact of each air pollution reduction scenario depended on the total number of outcomes per year and the strength of the CRF for the regional pollutant. Thus, a 3.6% reduction in the proportion of all children in the county who lived near major roadways (scenario 2) would result in a 1.9% decrease in each outcome in Figure 2 (corresponding to the reduction in cases of asthma caused by near-roadway exposure, regardless of whether the exacerbation was triggered by air pollution or by other factors). 
For bronchitis episodes, which are relatively common and have a large CRF for NO2 (1.070 per 1-ppb increase) (Table 1), we estimated that a total of 19,900 exacerbations (16.1% of all episodes annually) could be prevented by a 20% NO2 reduction, most of which (17,600 episodes, 14.2%) were attributable to the reduction of episodes triggered by regional pollution among children whose asthma was not caused by air pollution, rather than to bronchitis episodes among children whose asthma was caused by near-traffic pollution (scenario 2 in Figure 2). If NO2 were reduced by 20% but traffic proximity increased by 3.6% (scenario 3), we estimated a smaller reduction in total bronchitis episodes (15,580 episodes, 12.6% of the total). In contrast, for emergency department visits (with the weakest CRF for NO2, 1.0011 per 1-ppb increase), most of the benefit under scenario 2 resulted from the estimated reduction in the number of children with asthma caused by traffic proximity pollution, whereas an increase in the population living near major roadways under scenario 3 resulted in an increase in emergency department visits that exceeded the modest benefit from NO2 reduction, resulting in a net increase in total number of visits. Other outcomes demonstrated intermediate patterns for the relative impact of a reduction in regional pollution and a change in traffic proximity, with the net absolute impact depending also on the baseline frequency of the outcome in the population (Figure 2).\nNumber and percentage of exacerbations attributable to changes in pollutant levels under different exposure scenarios. Scenario 2 assumes a 3.6% decrease in children living near major roads and a 20% decrease in regional pollution. Scenario 3 assumes a 3.6% increase in children living near major roads and a 20% decrease in regional pollution. Regional pollution is represented by NO2 for all outcomes except O3 for school absences. Bars to the left and right of zero represent reductions and increases in the burden of asthma exacerbation, respectively, compared with baseline. (A) Bronchitis episodes, (B) hospital admissions, (C) emergency department visits, (D) clinic visits, and (E) school absences.\nIn additional analyses we used the CRF from a dispersion-modeled estimate of the effect of near-roadway pollutant mixture exposure on asthma (2.07; 95% CI: 1.12, 3.83 per 16 ppb) (McConnell et al. 2006) and an estimated annual average contribution of near-roadway pollution derived from major arterials expressed as 2.6 ppb of NOx, or 8% of all near-roadway pollution generated from freeways, highways, state highways, minor arterials, and collectors. We estimated that 39,800 cases of prevalent asthma (12% of the total; 95% CI: 2%, 20%) were caused by near-roadway pollution based on this analysis, compared with 27,100 cases based on roadway proximity (scenario 1) and that a 20% decrease in near-roadway exposure (scenario 2) would result in 8,400 fewer cases (3% of all cases in the county; 95% CI: 0.4%, 4%) compared with 5,900 fewer cases based on roadway proximity (Table 3). The total burden of asthma-related exacerbation attributable to the combination of near-roadway and regional pollution under these scenarios is shown in Supplemental Material, Tables S1 and S2 (http://dx.doi.org/10.1289/ehp.1104785). 
Although the numbers of exacerbations attributed to near-roadway exposure were increased, estimates were of the same order of magnitude as those obtained using traffic proximity as an indicator of near-roadway exposure.", "The implications of near-roadway exposures for the burden of disease due to air pollution have not been fully appreciated. Our results indicate that risk assessment focusing exclusively on regional pollutant effects substantially underestimates the impact of air pollution on childhood asthma because it does not account for exacerbations caused by exposures other than air pollution among the approximately 8% of children with asthma in Los Angeles County whose asthma can be attributed to near-roadway pollution exposure based on proximity. Moreover, the burden of asthma exacerbation among children whose asthma was caused by living near roadways, and the potential benefits of reducing near-roadway exposures, are disproportionately larger for more severe and more expensive outcomes, such as hospital admissions and emergency department visits.\nThe use of traffic proximity is both a strength and limitation. A traffic proximity buffer is appealing as a regulatory metric because it is precise and easily measured. However, reductions in traffic density and vehicular emissions are not reflected in this metric, so estimates of the burden of disease based on the proximity CRF (for the Los Angeles Basin) may not easily be generalized to other urban settings. Proximity is a proxy indicator of local traffic-related pollutants that vary by type of roadway, over time, and by region, and that also depend on the pollution reduction technologies and age of local vehicular fleets [Health Effects Institute (HEI) 2009]. Therefore, we conducted a sensitivity analysis based on exposure estimates derived from a dispersion model of near-roadway traffic-related pollution that integrated traffic density, emission factors, and meteorology. Results of this alternative approach demonstrated a pattern of preventable asthma burden for a 100% decrease in near-roadway pollution exposure that was generally consistent with that estimated for a 100% decrease in residential proximity to major roads, and estimates for a 20% decrease in near-roadway pollution that were consistent with estimates based on a 20% decrease in the proportion of children currently living near major roads (corresponding to the 3.6% decrease from 17.8% to 14.2% of all children in the county). It is also possible that living near a major traffic corridor is associated with socioeconomic characteristics and mold or other substandard housing characteristics that could explain the association of traffic proximity with asthma in studies from which the CRFs were derived. However, markers for these characteristics did not confound this association (McConnell et al. 2006). The CHS has also shown that the association between traffic proximity and asthma is modified by genetic variants in plausible biological pathways, which would be difficult to explain based on confounding by socioeconomic status or related characteristics (Salam et al. 2007).\nCompact urban development (scenario 3) that increases the number of children living near major roadways could limit the overall benefit of reduced exposure to regional pollution, especially for emergency department and clinic visits and for school absences, if not accompanied by substantial reductions in emissions and traffic volume along major traffic corridors. 
In practice, the CRF associated with near-roadway proximity may be reduced if overall reductions in vehicle miles traveled and emissions result in reduced air pollution exposures near roadways. We did not attempt to estimate a more informative association between dispersion-modeled exposure and illness under scenario 3 because data are not readily available to make such estimates. Realistic estimates of the likely impact of different SB375 compact urban development scenarios on traffic volume and emission factors, in addition to changes in population distributions along major traffic corridors, are urgently needed to identify strategies that will optimize the benefits of compact urban development for both GHG emissions and respiratory health. Strong “win–win” policies could result from compact growth and other strategies that increase the number of people living along busy roads, thereby reducing vehicle-miles traveled and regional air pollution, if coupled with reductions in traffic-related primary emissions such as ultrafine particles and black carbon. Promoting the rapid adoption of low- or zero-emission vehicles powered by carbon-neutral energy sources, and limiting residential development in buffer zones very near the largest roadways, are obvious examples of such strategies. Conversely, our findings suggest that a compact urban development plan that increases the proportion of children near major roadways, while simultaneously expanding or adding new traffic corridors to accommodate greater traffic volume, will increase rather than decrease the burden of air pollution–related asthma. Although such a scenario may be unlikely in California, in other regions of the world—particularly in developing countries—it is plausible.\nThere are uncertainties in our estimates related to the conceptual approach. Until recently, the causal association of asthma onset and near-roadway pollution—an important assumption underlying our analysis—was uncertain (Eder et al. 2006). However, a scientific consensus is emerging that the observed epidemiological associations of higher asthma rates along major roads are causal (Anderson et al. 2011a, 2011b; HEI 2009; Ryan and Holguin 2010; Salam et al. 2008). In addition, our approach assumes that removal or reduction of near-roadway pollutant exposure would reduce the number of children developing asthma (Künzli et al. 2008). Asthma is likely to develop as a consequence of multiple, potentially synergistic, risk factors, and we may have overestimated the benefit of exposure reduction if some of the cases of childhood asthma attributed to near-roadway pollution would have developed due to competing risk factors even if the children had not been exposed to traffic. However, an 8-year follow-up of a Dutch birth cohort showed that the association between incident asthma and soot exposure did not diminish during follow-up, as might have been expected if some of the soot-associated cases would have developed due to competing causes in the absence of soot exposure (Gehring et al. 2010). Further research is needed to determine the potential role of competing risks, but the current understanding of asthma etiology warrants a precautionary preventive approach to near-roadway pollution effects. 
Our conceptual model of effects of exposure reduction also does not address the time lag that might be required for health benefits to be achieved.\nWe have previously acknowledged uncertainties related to the extrapolation of CRFs across populations, which we have attempted to minimize by applying CRFs, asthma prevalence, and outcome frequency estimates for Southern California, when available (Künzli et al. 2008). Statistical uncertainty was estimated by calculating 95% CIs, which were relatively wide. We accounted for the propagation of random variation based on the observed distribution of chronic and acute CRF estimates, but mean estimates are not influenced by this uncertainty.\nWe may have underestimated the burden of disease by using NO2 as a surrogate for regional combustion pollutants including also particulate matter and other pollutants, without accounting for additive effects of different pollutants. We used spatial mapping of ambient daily air quality data to assign exposure to NO2 and O3 at the census block level. Although the accuracy of assignment from the monitoring grid may not be uniform within the area of study, the interpolation method, which is similar to that used by the U.S. Environmental Protection Agency, has been shown to capture sufficient spatial variability for the scale of our assessment (Hall et al. 2008).\nOur estimates of near-roadway effects also did not account for exposure at school and other locations besides the home, which recent studies suggest may also cause asthma (McConnell et al. 2010). The asthma exacerbation outcomes that we examined vary with regard to their impact on quality of life and resource utilization, ranging from school absences (with relatively modest morbidity for each episode) to hospitalization. It would be useful to evaluate the impact that the alternative risk assessment approach we propose would have on economic costs of air pollution, which preliminary results suggest may be substantial (Brandt et al. 2012).\nAlthough this study focused on the impact of near-roadway exposure on asthma, there is emerging evidence that these exposures also contribute to atherosclerotic heart disease, chronic obstructive pulmonary disease, lung cancer, and childhood neurodevelopmental outcomes (Guxens et al. 2012; HEI 2009; Künzli et al. 2010). Therefore, the implications of near-roadway exposure for the total burden of disease associated with air pollution are potentially quite large. However, the potential increase in physical activity associated with more compact urban design and walkable neighborhoods should also be considered (Saelens et al. 2003). Although there is little evidence that exercise improves asthma, physical activity has known cardiovascular health benefits that could outweigh potential detrimental effects of near-roadway exposure (Giles et al. 2011; Hankey et al. 2012). A more comprehensive assessment of all health-relevant benefits of planned GHG policies would be useful for policy makers. Nonetheless, it is clear that by using available health information to develop “win–win” policies to prevent childhood respiratory disease as cities are redeveloped to reduce GHG emissions, a much larger burden of disease could be prevented.", "Click here for additional data file." ]
[ "methods", "results", "discussion", null ]
[ "air pollution", "asthma", "burden of disease", "children", "compact urban growth", "risk assessment", "vehicle emissions" ]
Methods: We used population-attributable fractions to quantify the impact of air pollution on asthma-related outcomes in LAC for year 2007 for children < 18 years of age. We followed an existing methodological framework (Künzli et al. 2008; Perez et al. 2009) that we adapted for this new study as summarized below. To estimate the prevalence of asthma attributable to near-roadway pollution exposure, we used a concentration–response function (CRF) from the Children’s Health Study (CHS), a large population-based cohort in Southern California, in which living near major roadways, a proxy for traffic-related pollution exposure, was associated with increased prevalence of asthma (McConnell et al. 2006). Details on CHS study design and recruitment methods have been published previously (McConnell et al. 2006; Peters et al. 1999). To be consistent with the exposure assignment of the CRF study, we used the TeleAtlas MultiNet roads network (http://www.tomtom.com/en_gb/licensing/products/maps/multinet/) to map major LAC roads, defined as freeways, highways, or major arterial roads. These were then linked to census population data to derive the percentage of persons living within 75 m of these roads. For the present study we linked exposure to census population data given at the parcel level, which increased accuracy relative to linkage at the census block level used in a previous analysis (Perez et al. 2009). To be consistent with the prior CRF outcome definition, we used as background risk the asthma prevalence reported in the CHS (defined by use of controller medications in the previous year or lifetime asthma with any wheeze in the previous year or severe wheeze in the previous 12 months). Regional pollutants including particulate matter, nitrogen dioxide (NO2) and ozone (O3) are among the many causes of acute exacerbation among children with asthma, regardless of the cause of asthma onset (Jackson et al. 2011). However, an important consideration is that among those children with asthma attributable to living near a major road, all subsequent exacerbation should be attributed to air pollution, regardless of the trigger, assuming that these children would not otherwise have had the disease (Künzli et al. 2008). Conceptually, the total burden of asthma due to near-source and regional pollution includes the number of yearly asthma exacerbations triggered by causes other than regional air pollution among children whose asthma was caused (at least in part) by near-roadway pollution (Figure 1). These exacerbations are in addition to those directly triggered by regional air pollution exposure among all children with asthma, including children whose asthma was caused by near-roadway exposure and children whose asthma was caused by something other than traffic proximity. Air pollution risk assessments typically calculate only the asthma exacerbation burden triggered directly by regional pollution exposures, regardless of the underlying cause of asthma, whereas we included the additional burden of disease among children with asthma caused by near-roadway exposure but with exacerbations triggered by factors other than regional pollution. Conceptual model used to calculate asthma-related exacerbation attributable to air pollution for Los Angeles County based on Künzli et al. (2008). The thick dashed line indicates children with asthma attributable to near-roadway exposure. The thick solid line indicates total exacerbations due to regional and near-roadway air pollution. 
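For readers who want to see the attributable-fraction mechanics behind this framework, here is a minimal Python sketch. The roadway-proximity relative risk and its interval bounds are illustrative placeholders (the actual CHS CRF is given in the study's Table 1, which is not reproduced here), so the printed counts are only ballpark checks against the published figures:

```python
# Hedged sketch of Levin's population-attributable fraction (PAF), the quantity
# underlying the "asthma cases attributable to roadway proximity" estimates.
p_exposed = 0.178        # proportion of LAC children living within 75 m of a major road
total_cases = 320_500    # prevalent childhood asthma cases in LAC

def paf(p, rr):
    """PAF = p(RR - 1) / [1 + p(RR - 1)]."""
    return p * (rr - 1.0) / (1.0 + p * (rr - 1.0))

# ILLUSTRATIVE relative risk with made-up CI bounds -- not the CHS CRF:
for label, rr in [("point", 1.5), ("lower", 1.1), ("upper", 2.1)]:
    print(f"{label}: PAF = {paf(p_exposed, rr):.3f}, "
          f"attributable cases ~ {paf(p_exposed, rr) * total_cases:,.0f}")
```

With an assumed RR of 1.5 this yields roughly 26,000 attributable cases, of the same order as the approximately 27,100 reported in the Results; propagating the CRF's own CI bounds through the same formula is how interval estimates such as the reported 2% to 16% arise.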
To avoid double counting the burden associated with correlated regional pollutants, we estimated exacerbation attributable to NO2 or O3 only. NO2 was selected to represent urban-scale combustion-related pollution because it is correlated with particulate mass and other regional pollutants associated with respiratory health effects in southern California (Gauderman et al. 2004). O3 is produced as a result of photo-oxidation that is uncorrelated with other regional pollutants in the Los Angeles air basin (Gauderman et al. 2004). The CRFs for bronchitis episodes among those with asthma, and for prevalent asthma attributable to near-roadway exposure, were derived from the CHS (McConnell et al. 2003, 2006) (Table 1). CRFs from appropriate studies of Southern California populations were not available for doctor visits, emergency department visits, or hospital admissions. Therefore, we applied CRFs used in a previous Southern California health impact assessment (Perez et al. 2009) or averaged the coefficient used in the previous analysis with the coefficient from a more recent study conducted in a similar population, as indicated in Table 1. Concentration–response functions (CRF) with 95% confidence intervals (CI) considered in the evaluation of air pollution burden. The number of children < 18 years of age (> 2.5 million) was obtained from the American Community Survey (U.S. Census Bureau 2011). Background rates of the outcomes were obtained from the CHS or from local surveys (Table 2). Annual average daily concentrations of NO2 and O3 obtained from the 2007 U.S. Environmental Protection Agency Air Quality System (AQS) (U.S. Environmental Protection Agency 2009) and CHS monitoring stations were interpolated based on inverse distance-squared weighting to each census block group in the county to estimate population exposures. Because of the seasonality of school attendance and both the seasonal and day-of-week variability of O3, the O3 population exposure for school absences was based on 2007 daily maps, rather than annual maps, obtained from interpolated hourly ambient school-week concentrations projected to 2000 census block group centroids. Population size and baseline health outcome and exposure estimates used to evaluate the burden of asthma due to air pollution in LAC in 2007. We estimated that 17.8% of LAC children lived within 75 m of major roads, and that the annual population-weighted exposure to NO2 was 23.3 ppb (24-hr average) and to O3 was 39.3 ppb (8-hr maximum) in LAC (Table 2). We assumed background concentrations of 4 ppb for NO2 annually and 38 ppb for 8-hr maximum O3 on all days, based on long-term measurements (1994–2003) from CHS monitoring stations in clean coastal locations (i.e., Lompoc, CA) (McConnell et al. 2003). [The average population-weighted annual O3 for LAC was near background because population exposures in the areas with high O3 are offset by population exposures in areas with high oxides of nitrogen (NOx) emissions and very low O3 concentrations, due to nitric oxide (NO) in fresh vehicular exhaust scavenging O3 in those areas.] We considered three near-roadway proximity exposure reduction scenarios (Table 3): Exposure reduction scenarios for near-roadway exposure, regional NO2 and O3, and corresponding reduction in childhood asthma cases attributable to near-roadway pollution exposure (based on total of 320,500 children with asthma in LAC). 
(1) A reduction in annual concentrations of regional pollutants for each census block group to levels found in clean CHS communities (from 23.3 ppb to 4 ppb for NO2 and 39.3 ppb to 36.3 ppb for O3) in combination with a reduction in the proportion of children in the county living within 75 m of a major road from 17.8% to 0%; (2) a 20% reduction in the annual concentrations of regional pollutants for each census block group (from 23.3 ppb to 19.4 ppb for NO2 and 39.3 ppb to 38.7 ppb for O3) in combination with a 3.6% reduction in the proportion of all children in the county living within 75 m of a major road (from 17.8% to 14.2%, corresponding to a 20% decrease in the proportion of children currently living within 75 m); and (3) a 20% reduction in regional pollutant concentrations in combination with a 3.6% increase in the proportion of children living within 75 m of a major road (from 17.8% to 21.2%). Scenario 1 reflects the total burden of preventable illness from both exposures. At this time there is considerable uncertainty regarding the potential impact of compact urban growth strategies on near-roadway exposures, so scenarios 2 and 3 were selected assuming moderate reductions in regional pollutants from continued regulatory efforts and a moderate 20% increase or decrease in near-roadway exposure—a value that was chosen for illustration and could be refined using data from regional planners as they become available. Regional pollutant concentrations aggregated to the census block group level that exceeded background levels were reduced linearly, whereas we assumed that concentrations at or below the background level would be unaffected by changes in emissions. There are intrinsic limitations and uncertainties in risk analysis. We estimated a 95% confidence interval (CI) derived from the propagation of the CIs for the CRFs to address uncertainty in these estimates. In addition, proximity to major roadways has uncertainty as a proxy for near-roadway pollution exposure that depends on traffic volume, the emissions of the vehicular fleet, and local meteorological factors. Therefore, we also estimated the total burden of asthma-related exacerbations associated with a 100% and a 20% reduction in population-weighted exposure to the near-roadway dispersion-modeled pollution mixture (instead of a change in roadway proximity in exposure scenarios 1 and 2 in Table 3) using the CHS CRF from an estimate of the association of asthma prevalence with dispersion-modeled near-roadway pollution exposure accounting for traffic volume and emission factors (McConnell et al. 2006). Specifically, we used modeled NOx to represent the incremental contribution of local traffic to a more homogeneous community background concentration of NOx that included both primary and secondary pollution resulting from long-range transport and regional atmospheric photochemistry. It is a marker for correlated pollutants in the near-roadway mixture (rather than the etiologic agent for near-roadway health effects). We developed new estimates of population-weighted yearly average of local traffic-related NOx concentrations for 2007 in LAC using the CALINE4 dispersion model with the 2007 TeleAtlas MultiNet Roadway network, and 2007 vehicle emission factors for Los Angeles from the EMFAC model (California Air Resources Board 2007). 
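Two of the computational steps described above are simple enough to sketch before the dispersion-model details continue below. First, the scenario arithmetic: only the excess above background is reduced linearly, while values at or below background are left untouched. A minimal sketch (the 20% figure and the 4 ppb NO2 background are taken from the text):

```python
def scenario_concentration(c_ppb, background_ppb, fraction=0.20):
    """Reduce only the excess above background by `fraction`; concentrations
    at or below background are assumed unaffected by emission changes."""
    return background_ppb + max(c_ppb - background_ppb, 0.0) * (1.0 - fraction)

print(scenario_concentration(23.3, 4.0))  # -> 19.44 ppb, matching the ~19.4 ppb of scenario 2
```

Second, the exposure assignment: inverse distance-squared weighting from monitors to block-group centroids can be sketched as below. This is a generic IDW implementation, not the study's actual code, and it assumes coordinates in a planar projection so that Euclidean distance is meaningful:

```python
import numpy as np

def idw(monitor_xy, monitor_conc, target_xy):
    """Inverse distance-squared interpolation of monitor concentrations to targets."""
    # Pairwise distances: targets (rows) x monitors (columns)
    d = np.linalg.norm(target_xy[:, None, :] - monitor_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)          # avoid division by zero at a monitor location
    w = 1.0 / d**2
    return (w * monitor_conc[None, :]).sum(axis=1) / w.sum(axis=1)

# Toy usage: three monitors, two block-group centroids (units are arbitrary)
monitors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
conc = np.array([20.0, 30.0, 25.0])
print(idw(monitors, conc, np.array([[2.0, 2.0], [8.0, 1.0]])))
```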
Vehicle emission factors were developed for winter (55°F/50% relative humidity) and summer (75°F/50% relative humidity) conditions using average speeds of 65 mph on freeways and highways [FCC (functional class code) 1 and FCC2 class roads], 50 mph on major arterials (FCC3 class roads), and 30 mph on minor arterials and collectors (FCC4 roads). The model used year 2000 traffic volumes adjusted to 2007 VMT provided by the California Department of Transportation (17.5% increase in VMT for LAC). Modeled NOx concentrations were estimated for the block group centroids. The CHS CRF was developed for the contribution of local traffic on non-freeways using an older functional roadway classification (FRC) scheme (McConnell et al. 2006) that is no longer available in a form that matches the most current FCC classification that we used. To minimize overestimation of population exposure to near-roadway pollution in LAC, we used estimates of exposure from FCC3 (major arterials) as representative of non-freeway roads used in developing the CHS CRF. We considered the impact of all near-FCC3 roadway NOx (corresponding to a scenario of 100% reduction in modeled near-roadway pollution at the block group centroid) and of a 20% decrease in population exposure. This corresponds to the 3.6% reduction in the total population of children within 75 m of a major roadway (a 20% reduction in the proportion of the total population living within 75 m) (Table 3). Results: Of the estimated 320,500 cases of childhood asthma in the county (based on Southern California prevalence of 0.1257) (Table 2), we estimated that approximately 27,100 (8%; 95% CI: 2%, 16%) were caused at least in part by residential proximity to a major road (Table 3). A 3.6% reduction in the proportion of children living within 75 m of a major road (scenario 2) would result in 5,900 fewer asthma cases (95% CI: 1,000, 11,800) caused by near-roadway exposure (2% of the total cases in the county; 95% CI: 0.3%, 4%), whereas a 3.6% increase (scenario 3) would result in an additional 5,900 asthma cases caused by near-roadway exposure (Table 3). Estimates of yearly asthma-related exacerbations attributable to air pollution are presented in Table 4 for NO2 and O3, with results partitioned by cause of asthma (traffic proximity or other factors) using the conceptual scheme in Figure 1. Using this approach and assuming no near-roadway exposure and a reduction of NO2 or O3 to background levels (scenario 1), we estimated that 70,200 episodes (95% CI: 31,000, 95,700) of bronchitis (56.6% of all episodes) could be attributed to the combined effects of traffic proximity and regional NO2, our marker for the regional mixture of combustion-related pollutant exposure. The estimated burdens of other exacerbations attributable to exposure were smaller, ranging from 10.6% for emergency department visits to 19.5% for hospital admissions. The overall impact of air pollution was highly sensitive to the inclusion of exacerbations attributable to asthma triggers other than regional NO2 among children whose asthma was caused by traffic proximity. For example, we estimated that 65,100 bronchitis episodes were triggered by regional air pollution, including 5,600 episodes among children whose asthma was caused by near-roadway air pollution and 59,500 episodes among children whose asthma was caused by something other than air pollution (Table 4). 
In addition, we estimated that 5,100 bronchitis episodes (4.1% of the total) triggered by something other than regional air pollution would not have occurred if children had not lived within 75 m of a major road, because these children would not have developed asthma to begin with. These episodes would not have been accounted for if estimated effects of traffic proximity on asthma prevalence had not been considered. The estimated impact of such cases was especially large for outcomes that were weakly associated with regional NO2 (e.g., emergency department visits for asthma with an estimated CRF of 1.0011 per 1 ppb NO2) (Table 1), because exacerbations triggered by causes other than air pollution among children with asthma caused by traffic proximity account for a larger proportion of all episodes. Yearly number (%) of childhood asthma-related exacerbations attributable to near-roadway pollution in combination with regional NO2 and regional O3 above background levels in clean communities (scenario 1, traffic proximity model) (95% confidence intervals). The estimate of the burden of bronchitis episodes directly attributable to O3 and traffic proximity pollution (18,790 preventable episodes triggered by regional air pollution under the usual risk assessment approach) was considerably less than for NO2 (65,100 episodes triggered by air pollution) (Table 4). O3 CRFs for outcomes other than bronchitis and missed school days were modest (e.g., for emergency department visits, 1.024 for each 10-ppb increase in O3) (Table 1); therefore, accounting for all exacerbations among children whose asthma was caused by traffic proximity led to substantial increases in estimates of disease burden attributable to O3—from 133 to 1,730 emergency department visits (Table 4). A partial reduction in the burden of asthma exacerbation could be achieved with a more modest reduction in population exposure to traffic proximity and regional pollutants. Figure 2 shows the estimated numbers of exacerbations (and percent of total) attributable to a 20% decrease in regional air pollution (either NO2 or O3) in combination with a 3.6% reduction in the proportion of all children living near major roadways (scenario 2) or a 3.6% increase in traffic proximity that might result from compact urban development (scenario 3). We report the estimated impacts for the regional pollutant with the strongest association with each outcome—based on a reduction in NO2 for all outcomes except school absences, which were estimated for a reduction in O3. The net impact of each air pollution reduction scenario depended on the total number of outcomes per year and the strength of the CRF for the regional pollutant. Thus, a 3.6% reduction in the proportion of all children in the county who lived near major roadways (scenario 2) would result in a 1.9% decrease in each outcome in Figure 2 (corresponding to the reduction in cases of asthma caused by near-roadway exposure, regardless of whether the exacerbation was triggered by air pollution or by other factors). 
For bronchitis episodes, which are relatively common and have a large CRF for NO2 (1.070 per 1-ppb increase) (Table 1), we estimated that a total of 19,900 exacerbations (16.1% of all episodes annually) could be prevented by a 20% NO2 reduction, most of which (17,600 episodes, 14.2%) were attributable to the reduction of episodes triggered by regional pollution among children whose asthma was not caused by air pollution, rather than to bronchitis episodes among children whose asthma was caused by near-traffic pollution (scenario 2 in Figure 2). If NO2 were reduced by 20% but traffic proximity increased by 3.6% (scenario 3), we estimated a smaller reduction in total bronchitis episodes (15,580 episodes, 12.6% of the total). In contrast, for emergency department visits (with the weakest CRF for NO2, 1.0011 per 1-ppb increase), most of the benefit under scenario 2 resulted from the estimated reduction in the number of children with asthma caused by traffic proximity pollution, whereas an increase in the population living near major roadways under scenario 3 resulted in an increase in emergency department visits that exceeded the modest benefit from NO2 reduction, resulting in a net increase in total number of visits. Other outcomes demonstrated intermediate patterns for the relative impact of a reduction in regional pollution and a change in traffic proximity, with the net absolute impact depending also on the baseline frequency of the outcome in the population (Figure 2). Number and percentage of exacerbations attributable to changes in pollutant levels under different exposure scenarios. Scenario 2 assumes a 3.6% decrease in children living near major roads and a 20% decrease in regional pollution. Scenario 3 assumes a 3.6% increase in children living near major roads and a 20% decrease in regional pollution. Regional pollution is represented by NO2 for all outcomes except O3 for school absences. Bars to the left and right of zero represent reductions and increases in the burden of asthma exacerbation, respectively, compared with baseline. (A) Bronchitis episodes, (B) hospital admissions, (C) emergency department visits, (D) clinic visits, and (E) school absences. In additional analyses we used the CRF from a dispersion-modeled estimate of the effect of near-roadway pollutant mixture exposure on asthma (2.07; 95% CI: 1.12, 3.83 per 16 ppb) (McConnell et al. 2006) and an estimated annual average contribution of near-roadway pollution derived from major arterials expressed as 2.6 ppb of NOx, or 8% of all near-roadway pollution generated from freeways, highways, state highways, minor arterials, and collectors. We estimated that 39,800 cases of prevalent asthma (12% of the total; 95% CI: 2%, 20%) were caused by near-roadway pollution based on this analysis, compared with 27,100 cases based on roadway proximity (scenario 1) and that a 20% decrease in near-roadway exposure (scenario 2) would result in 8,400 fewer cases (3% of all cases in the county; 95% CI: 0.4%, 4%) compared with 5,900 fewer cases based on roadway proximity (Table 3). The total burden of asthma-related exacerbation attributable to the combination of near-roadway and regional pollution under these scenarios is shown in Supplemental Material, Tables S1 and S2 (http://dx.doi.org/10.1289/ehp.1104785). Although the numbers of exacerbations attributed to near-roadway exposure were increased, estimates were of the same order of magnitude as those obtained using traffic proximity as an indicator of near-roadway exposure. 
Discussion: The implications of near-roadway exposures for the burden of disease due to air pollution have not been fully appreciated. Our results indicate that risk assessment focusing exclusively on regional pollutant effects substantially underestimates the impact of air pollution on childhood asthma because it does not account for exacerbations caused by exposures other than air pollution among the approximately 8% of children with asthma in Los Angeles County whose asthma can be attributed to near-roadway pollution exposure based on proximity. Moreover, the burden of asthma exacerbation among children whose asthma was caused by living near roadways, and the potential benefits of reducing near-roadway exposures, are disproportionately larger for more severe and more expensive outcomes, such as hospital admissions and emergency department visits. The use of traffic proximity is both a strength and limitation. A traffic proximity buffer is appealing as a regulatory metric because it is precise and easily measured. However, reductions in traffic density and vehicular emissions are not reflected in this metric, so estimates of the burden of disease based on the proximity CRF (for the Los Angeles Basin) may not easily be generalized to other urban settings. Proximity is a proxy indicator of local traffic-related pollutants that vary by type of roadway, over time, and by region, and that also depend on the pollution reduction technologies and age of local vehicular fleets [Health Effects Institute (HEI) 2009]. Therefore, we conducted a sensitivity analysis based on exposure estimates derived from a dispersion model of near-roadway traffic-related pollution that integrated traffic density, emission factors, and meteorology. Results of this alternative approach demonstrated a pattern of preventable asthma burden for a 100% decrease in near-roadway pollution exposure that was generally consistent with that estimated for a 100% decrease in residential proximity to major roads, and estimates for a 20% decrease in near-roadway pollution that were consistent with estimates based on a 20% decrease in the proportion of children currently living near major roads (corresponding to the 3.6% decrease from 17.8% to 14.2% of all children in the county). It is also possible that living near a major traffic corridor is associated with socioeconomic characteristics and mold or other substandard housing characteristics that could explain the association of traffic proximity with asthma in studies from which the CRFs were derived. However, markers for these characteristics did not confound this association (McConnell et al. 2006). The CHS has also shown that the association between traffic proximity and asthma is modified by genetic variants in plausible biological pathways, which would be difficult to explain based on confounding by socioeconomic status or related characteristics (Salam et al. 2007). Compact urban development (scenario 3) that increases the number of children living near major roadways could limit the overall benefit of reduced exposure to regional pollution, especially for emergency department and clinic visits and for school absences, if not accompanied by substantial reductions in emissions and traffic volume along major traffic corridors. In practice, the CRF associated with near-roadway proximity may be reduced if overall reductions in vehicle miles traveled and emissions result in reduced air pollution exposures near roadways. 
We did not attempt to estimate a more informative association between dispersion-modeled exposure and illness under scenario 3 because data are not readily available to make such estimates. Realistic estimates of the likely impact of different SB375 compact urban development scenarios on traffic volume and emission factors, in addition to changes in population distributions along major traffic corridors, are urgently needed to identify strategies that will optimize the benefits of compact urban development for both GHG emissions and respiratory health. Strong “win–win” policies could result from compact growth and other strategies that increase the number of people living along busy roads, thereby reducing vehicle-miles traveled and regional air pollution, if coupled with reductions in traffic-related primary emissions such as ultrafine particles and black carbon. Promoting the rapid adoption of low- or zero-emission vehicles powered by carbon-neutral energy sources, and limiting residential development in buffer zones very near the largest roadways, are obvious examples of such strategies. Conversely, our findings suggest that a compact urban development plan that increases the proportion of children near major roadways, while simultaneously expanding or adding new traffic corridors to accommodate greater traffic volume, will increase rather than decrease the burden of air pollution–related asthma. Although such a scenario may be unlikely in California, in other regions of the world—particularly in developing countries—it is plausible. There are uncertainties in our estimates related to the conceptual approach. Until recently, the causal association of asthma onset and near-roadway pollution—an important assumption underlying our analysis—was uncertain (Eder et al. 2006). However, a scientific consensus is emerging that the observed epidemiological associations of higher asthma rates along major roads are causal (Anderson et al. 2011a, 2011b; HEI 2009; Ryan and Holguin 2010; Salam et al. 2008). In addition, our approach assumes that removal or reduction of near-roadway pollutant exposure would reduce the number of children developing asthma (Künzli et al. 2008). Asthma is likely to develop as a consequence of multiple, potentially synergistic, risk factors, and we may have overestimated the benefit of exposure reduction if some of the cases of childhood asthma attributed to near-roadway pollution would have developed due to competing risk factors even if the children had not been exposed to traffic. However, an 8-year follow-up of a Dutch birth cohort showed that the association between incident asthma and soot exposure did not diminish during follow-up, as might have been expected if some of the soot-associated cases would have developed due to competing causes in the absence of soot exposure (Gehring et al. 2010). Further research is needed to determine the potential role of competing risks, but the current understanding of asthma etiology warrants a precautionary preventive approach to near-roadway pollution effects. Our conceptual model of effects of exposure reduction also does not address the time lag that might be required for health benefits to be achieved. We have previously acknowledged uncertainties related to the extrapolation of CRFs across populations, which we have attempted to minimize by applying CRFs, asthma prevalence, and outcome frequency estimates for Southern California, when available (Künzli et al. 2008). 
Statistical uncertainty was estimated by calculating 95% CIs, which were relatively wide. We accounted for the propagation of random variation based on the observed distribution of chronic and acute CRF estimates, but mean estimates are not influenced by this uncertainty. We may have underestimated the burden of disease by using NO2 as a surrogate for regional combustion pollutants including also particulate matter and other pollutants, without accounting for additive effects of different pollutants. We used spatial mapping of ambient daily air quality data to assign exposure to NO2 and O3 at the census block level. Although the accuracy of assignment from the monitoring grid may not be uniform within the area of study, the interpolation method, which is similar to that used by the U.S. Environmental Protection Agency, has been shown to capture sufficient spatial variability for the scale of our assessment (Hall et al. 2008). Our estimates of near-roadway effects also did not account for exposure at school and other locations besides the home, which recent studies suggest may also cause asthma (McConnell et al. 2010). The asthma exacerbation outcomes that we examined vary with regard to their impact on quality of life and resource utilization, ranging from school absences (with relatively modest morbidity for each episode) to hospitalization. It would be useful to evaluate the impact that the alternative risk assessment approach we propose would have on economic costs of air pollution, which preliminary results suggest may be substantial (Brandt et al. 2012). Although this study focused on the impact of near-roadway exposure on asthma, there is emerging evidence that these exposures also contribute to atherosclerotic heart disease, chronic obstructive pulmonary disease, lung cancer, and childhood neurodevelopmental outcomes (Guxens et al. 2012; HEI 2009; Künzli et al. 2010). Therefore, the implications of near-roadway exposure for the total burden of disease associated with air pollution are potentially quite large. However, the potential increase in physical activity associated with more compact urban design and walkable neighborhoods should also be considered (Saelens et al. 2003). Although there is little evidence that exercise improves asthma, physical activity has known cardiovascular health benefits that could outweigh potential detrimental effects of near-roadway exposure (Giles et al. 2011; Hankey et al. 2012). A more comprehensive assessment of all health-relevant benefits of planned GHG policies would be useful for policy makers. Nonetheless, it is clear that by using available health information to develop “win–win” policies to prevent childhood respiratory disease as cities are redeveloped to reduce GHG emissions, a much larger burden of disease could be prevented. Supplemental Material: Click here for additional data file.
Background: The emerging consensus that exposure to near-roadway traffic-related pollution causes asthma has implications for compact urban development policies designed to reduce driving and greenhouse gases. Methods: The burden of asthma attributable to the dual effects of near-roadway and regional air pollution was estimated, using nitrogen dioxide and ozone as markers of urban combustion-related and secondary oxidant pollution, respectively. We also estimated the impact of alternative scenarios that assumed a 20% reduction in regional pollution in combination with a 3.6% reduction or 3.6% increase in the proportion of the total population living near major roads, a proxy for near-roadway exposure. Results: We estimated that 27,100 cases of childhood asthma (8% of total) in LAC were at least partly attributable to pollution associated with residential location within 75 m of a major road. As a result, a substantial proportion of asthma-related morbidity is a consequence of near-roadway pollution, even if symptoms are triggered by other factors. Benefits resulting from a 20% regional pollution reduction varied markedly depending on the associated change in near-roadway proximity. Conclusions: Our findings suggest that there are large and previously unappreciated public health consequences of air pollution in LAC and probably in other metropolitan areas with dense traffic corridors. To maximize health benefits, compact urban development strategies should be coupled with policies to reduce near-roadway pollution exposure.
null
null
5,499
270
[ 7 ]
4
[ "asthma", "near", "pollution", "roadway", "exposure", "near roadway", "children", "regional", "traffic", "air" ]
[ "pollution asthma related", "asthma traffic proximity", "pollution exposure children", "roadway exposure asthma", "exposure children asthma" ]
null
null
null
null
[CONTENT] air pollution | asthma | burden of disease | children | compact urban growth | risk assessment | vehicle emissions [SUMMARY]
[CONTENT] air pollution | asthma | burden of disease | children | compact urban growth | risk assessment | vehicle emissions [SUMMARY]
null
[CONTENT] air pollution | asthma | burden of disease | children | compact urban growth | risk assessment | vehicle emissions [SUMMARY]
null
null
[CONTENT] Adolescent | Air Pollutants | Air Pollution | Asthma | Child | Child, Preschool | Environmental Exposure | Environmental Policy | Government Regulation | Humans | Los Angeles | Models, Theoretical | Morbidity | Nitrogen Dioxide | Ozone | Residence Characteristics | Urban Renewal | Vehicle Emissions [SUMMARY]
[CONTENT] Adolescent | Air Pollutants | Air Pollution | Asthma | Child | Child, Preschool | Environmental Exposure | Environmental Policy | Government Regulation | Humans | Los Angeles | Models, Theoretical | Morbidity | Nitrogen Dioxide | Ozone | Residence Characteristics | Urban Renewal | Vehicle Emissions [SUMMARY]
null
[CONTENT] Adolescent | Air Pollutants | Air Pollution | Asthma | Child | Child, Preschool | Environmental Exposure | Environmental Policy | Government Regulation | Humans | Los Angeles | Models, Theoretical | Morbidity | Nitrogen Dioxide | Ozone | Residence Characteristics | Urban Renewal | Vehicle Emissions [SUMMARY]
null
null
[CONTENT] pollution asthma related | asthma traffic proximity | pollution exposure children | roadway exposure asthma | exposure children asthma [SUMMARY]
[CONTENT] pollution asthma related | asthma traffic proximity | pollution exposure children | roadway exposure asthma | exposure children asthma [SUMMARY]
null
[CONTENT] pollution asthma related | asthma traffic proximity | pollution exposure children | roadway exposure asthma | exposure children asthma [SUMMARY]
null
null
[CONTENT] asthma | near | pollution | roadway | exposure | near roadway | children | regional | traffic | air [SUMMARY]
[CONTENT] asthma | near | pollution | roadway | exposure | near roadway | children | regional | traffic | air [SUMMARY]
null
[CONTENT] asthma | near | pollution | roadway | exposure | near roadway | children | regional | traffic | air [SUMMARY]
null
null
[CONTENT] asthma | exposure | near | roadway | pollution | near roadway | children | regional | population | concentrations [SUMMARY]
[CONTENT] pollution | asthma | episodes | near | regional | proximity | reduction | scenario | estimated | no2 [SUMMARY]
null
[CONTENT] asthma | near | pollution | roadway | exposure | near roadway | click | file | additional data file | data file [SUMMARY]
null
null
[CONTENT] ||| 20% | 3.6% | 3.6% [SUMMARY]
[CONTENT] 27,100 | 8% | LAC | 75 ||| ||| 20% [SUMMARY]
null
[CONTENT] ||| ||| 20% | 3.6% | 3.6% ||| ||| 27,100 | 8% | LAC | 75 ||| ||| 20% ||| LAC ||| [SUMMARY]
null
The effectiveness of multimedia combined with teach-back method on the level of knowledge, confidence and behavior of professional caregivers in preventing falls in elderly patients: A randomized non-blind controlled clinical study.
36181096
Teach-back is a teaching method that can quickly improve the knowledge of the target audience and effectively change their behavior. However, this approach has not been reported in previous studies dedicated to reducing the incidence of falls in elderly inpatients. Therefore, we aimed to evaluate the effectiveness of the teach-back method for improving the knowledge, confidence, and behaviors (KCB) of professional caregivers regarding fall prevention in elderly inpatients and to provide practical evidence for reducing the incidence of falls.
BACKGROUND
This is a prospective study. At recruitment, the demographic data of the professional caregivers were collected in full. A questionnaire on the KCB of professional caregivers regarding fall prevention in elderly inpatients was used as the assessment scale, and the differences between the scores were analyzed. At the end of the study, the fall rates of the patients cared for by the different groups were counted and analyzed.
METHODS
A total of 100 professional caregivers were recruited, all of whom participated in the whole study process. There were no statistically significant differences in demographic data. Three and six months after the courses, the knowledge, confidence, and behavior scores of both groups were significantly improved, and the scores in the observation group were significantly higher than those in the control group (P < .05). During the study period, the incidence of falls in the control group was 1.32%, while it was 0.30% in the observation group (P < .05).
RESULTS
The teach-back method can rapidly improve the KCB of professional caregivers regarding fall prevention in elderly inpatients and is worthy of application in clinical practice.
CONCLUSION
[ "Aged", "Caregivers", "Humans", "Multimedia", "Prospective Studies", "Research Design" ]
9524941
1. Introduction
Falls are the most common adverse events in hospitals, accounting for 20% to 30% of all adverse events.[1] Approximately 3% of inpatients will fall, and about 25% of these falls will cause injury.[2] Elderly inpatients have a particularly high risk of falling owing to their many comorbidities as well as decreased physical function and mobility.[3] Falls of inpatients not only bring pain to patients and economic burden to families and society but also prolong the hospital stay. Moreover, inpatient falls may trigger medical disputes.[4] Clinically, a variety of nurse-led interventions have been used to reduce the incidence of falls in hospitalized patients.[5] There are also many patient-involved studies on fall prevention.[6] However, most studies have overlooked that the elderly are less active and have expected too much of senile inpatients; as a result, the effectiveness of interventions for fall prevention has not been ideal. For older patients, the active involvement of caregivers in fall prevention is still needed to reduce the incidence of falls. It has been reported that patients cared for by trained caregivers are less prone to falls.[7] Although they play an important role in fall prevention, many professional caregivers are not familiar with knowledge regarding falls in elderly patients.[8] To ensure patient safety, it is necessary to conduct fall-related interventions for professional caregivers. Traditional didactic teaching is monotonous, making it difficult for caregivers to understand, memorize, and apply the content they learn. The teach-back method is a two-way mode of information transmission: after receiving health education, professional caregivers are asked to express their understanding of the relevant information in their own language, and they are given repeated guidance on the information they misunderstand or do not understand until they completely master the relevant knowledge.[9] This approach has been endorsed by several institutions, including the Joint Commission in the United States, and is widely used in healthcare communities.[10,11] In the present study, the teach-back method combined with multimedia was used to train professional caregivers in the knowledge of preventing falls in elderly inpatients, and certain effects were achieved, which are reported as follows.
2.4. Statistical methods
Microsoft Office 2010 Excel was used for data entry, and double verification was performed. SPSS 23.0 statistical software was used for data analysis. Count data are expressed as frequencies or percentages, and the chi-square test was used for comparisons between groups. Measurement data were first tested for normality: if normally distributed, they are represented as mean ± standard deviation and the t test was used for between-group comparisons; otherwise, the rank sum test was used. P < .05 indicates a statistically significant difference.
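As a concrete illustration of this pipeline, the following sketch shows the normality check and the resulting choice between the t test and the rank sum test. The data here are randomly generated placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(60, 8, 50)       # hypothetical scores, 50 caregivers per group
observation = rng.normal(66, 8, 50)   # hypothetical scores

# Normality first; if both groups pass, use the t test, otherwise the rank sum test
if stats.shapiro(control)[1] > .05 and stats.shapiro(observation)[1] > .05:
    result = stats.ttest_ind(control, observation)
else:
    result = stats.mannwhitneyu(control, observation)
print(result)
```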
3. Results
All 100 caregivers participated in the study and completed the questionnaire survey, with a recovery rate of 100%. There was no significant difference in the general data between the two groups (P > .05). The demographic details are presented in Table 1. Comparison of general data of two groups. At the beginning of the study, the KCB scores on falls in elderly inpatients in both groups were low, and there was no statistical difference between them. After the different interventions, the KCB scores of the two groups on fall prevention in elderly inpatients were improved. After one or two training sessions, the average scores of the three dimensions in the observation group were higher than those in the control group, and the difference was statistically significant (P < .05). The details are presented in Tables 2, 3, and 4. Comparisons of scores of knowledge dimension. Comparisons of scores of confidence dimension. Comparisons of scores of behavior dimension. During the study period, all the caregivers in the control group took care of 684 elderly inpatients, 9 of whom fell, so the incidence of falls was 1.32% and the person-daily rate was 1.34%. In the observation group, all caregivers took care of 658 elderly inpatients, 2 of whom fell, so the incidence of falls was 0.30% and the person-daily rate was 0.33%. There was a statistically significant difference in the incidence of falls between the two groups (P < .05; Table 5). Comparisons of the incidence of falls.
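The fall-incidence comparison reduces to a chi-square test on a 2 × 2 table built from the counts above. A minimal sketch; `correction=False` gives the plain Pearson chi-square, and because the paper does not state whether a continuity correction was applied, the printed P value may differ from the published one:

```python
from scipy.stats import chi2_contingency

# rows: control, observation; columns: fell, did not fall
table = [[9, 684 - 9],
         [2, 658 - 2]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.3f}")
```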
null
null
[ "2. Methods", "2.1. Study population", "2.2. Study design", "2.3. Survey tools and evaluation indicators", "2.5. Sample size", "Acknowledgements", "Author contributions" ]
[ " 2.1. Study population Using convenience sampling, 100 professional caregivers were selected as the research subjects in a 3A grade hospital, and 50 caregivers in each group were randomly assigned to the control group and the observation group. Inclusion criteria: physical and mental health; >65 years old; attendants of the professional caregiver company connected to the hospital; informed consent and voluntary participation in this study. Exclusion criteria: cognitive dysfunction, inability to complete the training and investigations; those who refuse to participate or withdraw voluntarily after reasonable explanation. This study was approved by the medical ethics committee of the right hospital and all participants provided informed consent. The authors have declared that no competing interests exist.\nUsing convenience sampling, 100 professional caregivers were selected as the research subjects in a 3A grade hospital, and 50 caregivers in each group were randomly assigned to the control group and the observation group. Inclusion criteria: physical and mental health; >65 years old; attendants of the professional caregiver company connected to the hospital; informed consent and voluntary participation in this study. Exclusion criteria: cognitive dysfunction, inability to complete the training and investigations; those who refuse to participate or withdraw voluntarily after reasonable explanation. This study was approved by the medical ethics committee of the right hospital and all participants provided informed consent. The authors have declared that no competing interests exist.\n 2.2. Study design The study lasted for 6 months from March to August, 2021. The control group were educated by conventional educational methods, while the observation group by teach-back method. The interventions were implemented at the beginning of the study and 3 months later. Before, in the middle, and at the late stage of the study, questionnaires were completed on the knowledge, confidence, and behaviors (KCB) of fall prevention in elderly inpatients, and the occurrence of falls in all elderly patients cared by the caregivers within 6 months were collected. All relevant data are within the paper and its supporting information files. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific methods are as follows:\nThe nursing department organized and established a fall prevention and management group for the elderly inpatients. The head nurse of the rehabilitation is the team leader, with six charge nurses as members and the director of the rehabilitation as consultants.\nIn the control group, caregivers were randomly assigned to clinical departments, and charge nurses taught them about fall prevention using conventional methods, that is, informing them of the high-risk factors of falling in elderly inpatients and distributing fall prevention guidance manual.\nIn the observation group, the teach-back method combined with multimedia was adopted for centralized training. Training content: PPT courseware, scene simulation video, and use of WeChat group. 
PPT courseware content takes about 10 minutes, including the incidence of falls, varieties of factors related to fall, the harm of fall in elderly patients, treatment measures after the fall, meanings of fall prevention and specific methods, such as for long-term bedridden patients, lifting the head of a bed for 3 to 5 minutes, and then informing to sit by the bed for a few minutes, finally asking to walk slowly without abnormal response standing in situ. The scene simulation video lasted six minutes, showing a variety of causes by which elderly patients fall in the hospital (such as dizziness, sliding, etc.), showing the seriousness of patients fall (such as fractures needing for surgical treatment), as well as advocating some effective measures to prevent falls. A nurse is responsible for the establishment of a wechat group to share the knowledge and experience on fall prevention from time to time and analyze the causes of falling events within the group so that everyone can learn lessons. Implementation of teach-back method: After the training, charge nurses ask the caregivers questions one by one using a gentle and open way such as “Do you think what kind of inpatients are prone to fall? “What should we do when a patient complains of dizziness while moving”. The caregivers are encouraged to provide positive answers based on their own understanding. If the answers are incorrect, the nurses will correct them immediately, such as “Let’s strengthen the memory again, it should be...” “When the patient complains of dizziness in the activity, we should immediately ask the patient to stop the activity, sit down or lie down to rest, and inform the medical staff,” etc., and ask for retelling until the retelling is correct.\nThe study lasted for 6 months from March to August, 2021. The control group were educated by conventional educational methods, while the observation group by teach-back method. The interventions were implemented at the beginning of the study and 3 months later. Before, in the middle, and at the late stage of the study, questionnaires were completed on the knowledge, confidence, and behaviors (KCB) of fall prevention in elderly inpatients, and the occurrence of falls in all elderly patients cared by the caregivers within 6 months were collected. All relevant data are within the paper and its supporting information files. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific methods are as follows:\nThe nursing department organized and established a fall prevention and management group for the elderly inpatients. The head nurse of the rehabilitation is the team leader, with six charge nurses as members and the director of the rehabilitation as consultants.\nIn the control group, caregivers were randomly assigned to clinical departments, and charge nurses taught them about fall prevention using conventional methods, that is, informing them of the high-risk factors of falling in elderly inpatients and distributing fall prevention guidance manual.\nIn the observation group, the teach-back method combined with multimedia was adopted for centralized training. Training content: PPT courseware, scene simulation video, and use of WeChat group. 
2.3. Survey tools and evaluation indicators

A self-designed questionnaire (covering name, gender, age, education level, fall-related training experience, and length of service) was used to collect the caregivers' demographic information and to evaluate whether the two groups differed statistically.

Drawing on the characteristics of clinical nursing work, a wide search of the relevant literature, and instruments such as the Morse Fall Scale, we designed a questionnaire on the KCB of professional caregivers regarding fall prevention in elderly inpatients. The questionnaire was reviewed and revised by five nursing specialists, and its content validity exceeded 80%. It comprises 33 items across the three KCB dimensions, each scored on a 5-point Likert scale. The knowledge dimension has 15 items covering disease factors, environmental factors, prevention measures, emergency response, and so on; each item scores 1 point for "I don't know," 2 for "heard of it," 3 for "understand," 4 for "clear," and 5 for "very clear." The confidence dimension has 9 items reflecting positivity toward and stance on fall prevention; each item scores 1 point for "very reluctant," 2 for "unwilling," 3 for "uncertain," 4 for "willing," and 5 for "very willing," for a total of 45 points. The behavior dimension has 9 items; each scores 1 point for "never," 2 for "occasionally," 3 for "sometimes," 4 for "often," and 5 for "always," for a total of 45 points. The higher the score, the better the caregiver's cognition of and confidence in fall prevention, and the more credible their normative behavior. The questionnaire has good reliability and validity: the content validity indices of the KCB dimensions were 0.812, 0.875, and 0.838, respectively; the total Cronbach's α coefficient was 0.868, and the per-dimension Cronbach's α ranged from 0.808 to 0.865.

We described the incidence of falls in two ways: the proportion of patients who fell, and the person-daily rate, that is, the ratio of the number of falls to the total days of stay of all inpatients cared for during the study period. The control and observation groups were calculated separately.
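To make the scoring scheme concrete, the sketch below shows how the three dimension totals and Cronbach's α could be computed from an item-level response matrix. It is a minimal illustration, not the authors' analysis code: the simulated 1-to-5 Likert matrices stand in for the real survey data, and `cronbach_alpha` is a hypothetical helper implementing the standard formula.

```python
# Minimal sketch of dimension scoring and internal-consistency checking.
# Simulated 1-5 Likert responses stand in for the real survey data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Item counts from the questionnaire: 15 knowledge, 9 confidence, 9 behavior.
dims = {"knowledge": 15, "confidence": 9, "behavior": 9}
responses = {name: rng.integers(1, 6, size=(100, k)) for name, k in dims.items()}

for name, scores in responses.items():
    totals = scores.sum(axis=1)                 # per-caregiver dimension score
    print(f"{name}: max {scores.shape[1] * 5}, "
          f"mean total {totals.mean():.1f}, alpha {cronbach_alpha(scores):.3f}")
```

With real data, an overall α of 0.868 and per-dimension values of 0.808 to 0.865, as reported above, indicate good internal consistency; the random stand-in data here will naturally yield an α near zero.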
2.4. Statistical methods

Microsoft Office 2010 Excel was used for data entry, with double verification. SPSS 23.0 was used for data analysis. Count data are expressed as frequencies or percentages and were compared between groups with the chi-square test. Measurement data were first tested for normality: if normally distributed, they are expressed as mean ± standard deviation and were compared between groups with the t test; otherwise, the rank sum test was used. P < .05 indicates a statistically significant difference.
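The decision rule described above (normality check, then t test or rank sum test, with chi-square for count data) can be sketched as follows. This is an illustrative SciPy translation of the SPSS workflow, not the authors' code; the score vectors and the 2 × 2 frequency table are placeholders.

```python
# Illustrative between-group comparison following the rules described above.
import numpy as np
from scipy import stats

def compare_scores(a: np.ndarray, b: np.ndarray, alpha: float = 0.05):
    """Independent t test if both samples look normal, else Mann-Whitney U."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in (a, b))
    if normal:
        return "t test", stats.ttest_ind(a, b).pvalue
    return "rank sum test", stats.mannwhitneyu(a, b).pvalue

rng = np.random.default_rng(1)
control = rng.normal(52.0, 6.0, 50)       # placeholder knowledge totals
observation = rng.normal(58.0, 6.0, 50)   # placeholder knowledge totals
name, p = compare_scores(control, observation)
print(f"{name}: P = {p:.4f}")

# Count data (e.g., education level by group) use the chi-square test.
table = np.array([[28, 22],               # hypothetical frequencies
                  [31, 19]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi-square: P = {p_chi:.4f}")
```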
2.5. Sample size

The sample size was calculated with G*Power© software version 3.1.9.7 (Heinrich Heine University, Düsseldorf, Germany). Based on the results of a pilot trial, with a two-sided Type-I error of 0.05, a power of 80% (1 − β = 0.8), an effect size (d) of 0.60, and an anticipated dropout rate of approximately 10%, the minimum sample size was 100.
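As a cross-check on the G*Power figure, the normal-approximation formula for a two-sample t test, n per group = 2(z₁₋α/₂ + z₁₋β)² / d², can be evaluated directly. This sketch uses the stated inputs; it approximates, rather than reproduces, G*Power's exact noncentral-t computation (which gives about 45 per group and hence, with roughly 10% dropout, a minimum of 100).

```python
# Normal-approximation check of the sample size (two-sample t test).
import math
from scipy.stats import norm

alpha, power, d, dropout = 0.05, 0.80, 0.60, 0.10
z_a = norm.ppf(1 - alpha / 2)                 # ~1.96, two-sided test
z_b = norm.ppf(power)                         # ~0.84
n_per_group = 2 * (z_a + z_b) ** 2 / d ** 2   # ~43.6 -> 44 per group
total = 2 * math.ceil(n_per_group)            # 88 before dropout
minimum = math.ceil(total / (1 - dropout))    # ~98; 100 were enrolled
print(n_per_group, total, minimum)
```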
[ "methods", null, null, null, null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Study population", "2.2. Study design", "2.3. Survey tools and evaluation indicators", "2.4. Statistical methods", "2.5. Sample size", "3. Results", "4. Discussion", "Acknowledgements", "Author contributions" ]
[ "Falls are the most common adverse events in hospitals, accounting for 20% to 30% of all adverse events.[1] Approximately 3% of inpatients will fall and 25% of them will cause injury.[2] Elderly inpatients have a particularly high risk of fall owing to many comorbidities as well as decreased physical function and activity ability.[3] Falls of inpatients not only bring pains to patients and economic burden to families and society, but also prolong the hospital stay. Moreover, inpatients’ fall may become the fuse of medical contradictions.[4] Clinically, varieties of nurse-led interventions have been used to reduce the incidence of falls in hospitalized patients.[5] There are also plenty of patient-involved studies in regard to fall prevention.[6] But most researches ignored that the elderly are less active, and expected too much of senile inpatients, therefore, the effectiveness of interventions on falls prevention is not ideal. For older patients, the active involvement of caregivers in fall prevention is still needed to reduce the incidence of falls. It has been reported that patients cared for by trained caregivers are less prone to fall.[7] Though playing an important role in the prevention of falls, many professional caregivers are not familiar with knowledge regarding falls in elderly patients.[8] To ensure the safety of patients, it is necessary to conduct fall-related interventions for professional caregivers. The traditional way of preaching is boring, so it’s difficult for caregivers to understand, memorize and apply the content they learn. The teach-back method is a two-way mode of information transmission. This means that after receiving health education, professional caregivers are allowed to express their understandings of relevant information in their own language and they will be given guidance on the information they misunderstand or do not understand again until completely mastering relevant knowledge.[9] This approach has been endorsed by several institutions, including the Joint Commission in The United States, and is widely used in the healthcare communities.[10,11] In present study, teach-back method combined with multimedia was used to train professional caregivers about the knowledge of preventing falls in elderly inpatients, and certain effects were achieved, which are reported as follows.", " 2.1. Study population Using convenience sampling, 100 professional caregivers were selected as the research subjects in a 3A grade hospital, and 50 caregivers in each group were randomly assigned to the control group and the observation group. Inclusion criteria: physical and mental health; >65 years old; attendants of the professional caregiver company connected to the hospital; informed consent and voluntary participation in this study. Exclusion criteria: cognitive dysfunction, inability to complete the training and investigations; those who refuse to participate or withdraw voluntarily after reasonable explanation. This study was approved by the medical ethics committee of the right hospital and all participants provided informed consent. The authors have declared that no competing interests exist.\nUsing convenience sampling, 100 professional caregivers were selected as the research subjects in a 3A grade hospital, and 50 caregivers in each group were randomly assigned to the control group and the observation group. 
Inclusion criteria: physical and mental health; >65 years old; attendants of the professional caregiver company connected to the hospital; informed consent and voluntary participation in this study. Exclusion criteria: cognitive dysfunction, inability to complete the training and investigations; those who refuse to participate or withdraw voluntarily after reasonable explanation. This study was approved by the medical ethics committee of the right hospital and all participants provided informed consent. The authors have declared that no competing interests exist.\n 2.2. Study design The study lasted for 6 months from March to August, 2021. The control group were educated by conventional educational methods, while the observation group by teach-back method. The interventions were implemented at the beginning of the study and 3 months later. Before, in the middle, and at the late stage of the study, questionnaires were completed on the knowledge, confidence, and behaviors (KCB) of fall prevention in elderly inpatients, and the occurrence of falls in all elderly patients cared by the caregivers within 6 months were collected. All relevant data are within the paper and its supporting information files. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific methods are as follows:\nThe nursing department organized and established a fall prevention and management group for the elderly inpatients. The head nurse of the rehabilitation is the team leader, with six charge nurses as members and the director of the rehabilitation as consultants.\nIn the control group, caregivers were randomly assigned to clinical departments, and charge nurses taught them about fall prevention using conventional methods, that is, informing them of the high-risk factors of falling in elderly inpatients and distributing fall prevention guidance manual.\nIn the observation group, the teach-back method combined with multimedia was adopted for centralized training. Training content: PPT courseware, scene simulation video, and use of WeChat group. PPT courseware content takes about 10 minutes, including the incidence of falls, varieties of factors related to fall, the harm of fall in elderly patients, treatment measures after the fall, meanings of fall prevention and specific methods, such as for long-term bedridden patients, lifting the head of a bed for 3 to 5 minutes, and then informing to sit by the bed for a few minutes, finally asking to walk slowly without abnormal response standing in situ. The scene simulation video lasted six minutes, showing a variety of causes by which elderly patients fall in the hospital (such as dizziness, sliding, etc.), showing the seriousness of patients fall (such as fractures needing for surgical treatment), as well as advocating some effective measures to prevent falls. A nurse is responsible for the establishment of a wechat group to share the knowledge and experience on fall prevention from time to time and analyze the causes of falling events within the group so that everyone can learn lessons. Implementation of teach-back method: After the training, charge nurses ask the caregivers questions one by one using a gentle and open way such as “Do you think what kind of inpatients are prone to fall? “What should we do when a patient complains of dizziness while moving”. The caregivers are encouraged to provide positive answers based on their own understanding. 
If the answers are incorrect, the nurses will correct them immediately, such as “Let’s strengthen the memory again, it should be...” “When the patient complains of dizziness in the activity, we should immediately ask the patient to stop the activity, sit down or lie down to rest, and inform the medical staff,” etc., and ask for retelling until the retelling is correct.\nThe study lasted for 6 months from March to August, 2021. The control group were educated by conventional educational methods, while the observation group by teach-back method. The interventions were implemented at the beginning of the study and 3 months later. Before, in the middle, and at the late stage of the study, questionnaires were completed on the knowledge, confidence, and behaviors (KCB) of fall prevention in elderly inpatients, and the occurrence of falls in all elderly patients cared by the caregivers within 6 months were collected. All relevant data are within the paper and its supporting information files. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific methods are as follows:\nThe nursing department organized and established a fall prevention and management group for the elderly inpatients. The head nurse of the rehabilitation is the team leader, with six charge nurses as members and the director of the rehabilitation as consultants.\nIn the control group, caregivers were randomly assigned to clinical departments, and charge nurses taught them about fall prevention using conventional methods, that is, informing them of the high-risk factors of falling in elderly inpatients and distributing fall prevention guidance manual.\nIn the observation group, the teach-back method combined with multimedia was adopted for centralized training. Training content: PPT courseware, scene simulation video, and use of WeChat group. PPT courseware content takes about 10 minutes, including the incidence of falls, varieties of factors related to fall, the harm of fall in elderly patients, treatment measures after the fall, meanings of fall prevention and specific methods, such as for long-term bedridden patients, lifting the head of a bed for 3 to 5 minutes, and then informing to sit by the bed for a few minutes, finally asking to walk slowly without abnormal response standing in situ. The scene simulation video lasted six minutes, showing a variety of causes by which elderly patients fall in the hospital (such as dizziness, sliding, etc.), showing the seriousness of patients fall (such as fractures needing for surgical treatment), as well as advocating some effective measures to prevent falls. A nurse is responsible for the establishment of a wechat group to share the knowledge and experience on fall prevention from time to time and analyze the causes of falling events within the group so that everyone can learn lessons. Implementation of teach-back method: After the training, charge nurses ask the caregivers questions one by one using a gentle and open way such as “Do you think what kind of inpatients are prone to fall? “What should we do when a patient complains of dizziness while moving”. The caregivers are encouraged to provide positive answers based on their own understanding. 
If the answers are incorrect, the nurses will correct them immediately, such as “Let’s strengthen the memory again, it should be...” “When the patient complains of dizziness in the activity, we should immediately ask the patient to stop the activity, sit down or lie down to rest, and inform the medical staff,” etc., and ask for retelling until the retelling is correct.\n 2.3. Survey tools and evaluation indicators A self-designed questionnaire (including name, gender, age, education level, fall-related training experience, and length of service) was used to collect demographic information of the caregivers and evaluate whether there was a statistical difference in two groups.\nWith experiences based on the characteristics of clinical nursing work, and after widely searching relevant literatures and referring to “Morse fall assessment scale” and so on, we designed “the questionnaire about the KCB of professional caregivers on fall prevention in elderly inpatients.” The questionnaire was reviewed and modified by five nursing specialists, the content validity is more than 80%. The questionnaire is composed of 33 items, including three dimensions KCB. The knowledge dimension, with a total of 15 items, covering disease factors, environmental factors, prevention measures and emergency response, etc., used the Likert 5 scoring method. Each entry got 1 point for “I don’t know”, 2 points for “heard”, 3 points for “understanding”, 4 points for “clear”, and 5 points for “very clear”. There were 9 items in the confidence dimension, which reflected the positivity and stance of fall prevention. Alike Likert rating method was adopted and each entry got 1 point for “very reluctant”, 2 points for “unwilling”, 3 points for “uncertain”, 4 points for “willing”, and 5 points for “very willing”, with a total score of 45 points. There were 9 items in the behavioral dimension, and Likert 5 scoring method was also adopted when each entry got 1 point for “Never”, 2 points for “occasionally”, 3 points for “some time”, 4 points for “often”, and 5 points for “always”, with a total score of 45 points. Higher the score is, better the cognition and confidence of the caregivers regarding fall prevention is, and higher the credibility of the normative behavior is. The questionnaire has good reliability and validity. The content validity indices of KCB were 0.812, 0.875, and 0.838, respectively. The total Cronbach’s α coefficient was 0.868, and the Cronbach’s α coefficient for each dimension ranged from 0.808 to 0.865.\nWe used two ways to describe the incidence of falls. One is the proportion of all fallers and the other is the person-daily rate which means the ratio of the number of all falls occurred to the total days of stay for all inpatients taken care of during the study period. The control and observation groups were calculated separately.\nA self-designed questionnaire (including name, gender, age, education level, fall-related training experience, and length of service) was used to collect demographic information of the caregivers and evaluate whether there was a statistical difference in two groups.\nWith experiences based on the characteristics of clinical nursing work, and after widely searching relevant literatures and referring to “Morse fall assessment scale” and so on, we designed “the questionnaire about the KCB of professional caregivers on fall prevention in elderly inpatients.” The questionnaire was reviewed and modified by five nursing specialists, the content validity is more than 80%. 
The questionnaire is composed of 33 items, including three dimensions KCB. The knowledge dimension, with a total of 15 items, covering disease factors, environmental factors, prevention measures and emergency response, etc., used the Likert 5 scoring method. Each entry got 1 point for “I don’t know”, 2 points for “heard”, 3 points for “understanding”, 4 points for “clear”, and 5 points for “very clear”. There were 9 items in the confidence dimension, which reflected the positivity and stance of fall prevention. Alike Likert rating method was adopted and each entry got 1 point for “very reluctant”, 2 points for “unwilling”, 3 points for “uncertain”, 4 points for “willing”, and 5 points for “very willing”, with a total score of 45 points. There were 9 items in the behavioral dimension, and Likert 5 scoring method was also adopted when each entry got 1 point for “Never”, 2 points for “occasionally”, 3 points for “some time”, 4 points for “often”, and 5 points for “always”, with a total score of 45 points. Higher the score is, better the cognition and confidence of the caregivers regarding fall prevention is, and higher the credibility of the normative behavior is. The questionnaire has good reliability and validity. The content validity indices of KCB were 0.812, 0.875, and 0.838, respectively. The total Cronbach’s α coefficient was 0.868, and the Cronbach’s α coefficient for each dimension ranged from 0.808 to 0.865.\nWe used two ways to describe the incidence of falls. One is the proportion of all fallers and the other is the person-daily rate which means the ratio of the number of all falls occurred to the total days of stay for all inpatients taken care of during the study period. The control and observation groups were calculated separately.\n 2.4. Statistical methods Microsoft Office 2010 Excel was used for data entry and double verification was performed. SPSS23.0 statistical software was used for data analysis. Counting data are expressed as frequencies or percentages, and the chi-square test was used for comparison between groups. The measured data were first tested for normality. If a normal distribution was satisfied, the mean ± standard deviation was used to represent the data and t test was performed between groups, while if not, the rank sum test was used. P < .05 indicates a statistically significant difference.\nMicrosoft Office 2010 Excel was used for data entry and double verification was performed. SPSS23.0 statistical software was used for data analysis. Counting data are expressed as frequencies or percentages, and the chi-square test was used for comparison between groups. The measured data were first tested for normality. If a normal distribution was satisfied, the mean ± standard deviation was used to represent the data and t test was performed between groups, while if not, the rank sum test was used. P < .05 indicates a statistically significant difference.\n 2.5. Sample size The sample size was calculated using G*Power© software version 3.1.9.7 (Heinrich Heine University, Dusseldorf, Germany). Depending on the results of a pilot trial with 2-sided (two tails) Type-I error 0.05 and power of 80% (1-β = 0.8), effect size (d) factor 0.60, and considering a dropout rate of approximately 10%, the minimum sample size was 100.\nThe sample size was calculated using G*Power© software version 3.1.9.7 (Heinrich Heine University, Dusseldorf, Germany). 
Depending on the results of a pilot trial with 2-sided (two tails) Type-I error 0.05 and power of 80% (1-β = 0.8), effect size (d) factor 0.60, and considering a dropout rate of approximately 10%, the minimum sample size was 100.", "Using convenience sampling, 100 professional caregivers were selected as the research subjects in a 3A grade hospital, and 50 caregivers in each group were randomly assigned to the control group and the observation group. Inclusion criteria: physical and mental health; >65 years old; attendants of the professional caregiver company connected to the hospital; informed consent and voluntary participation in this study. Exclusion criteria: cognitive dysfunction, inability to complete the training and investigations; those who refuse to participate or withdraw voluntarily after reasonable explanation. This study was approved by the medical ethics committee of the right hospital and all participants provided informed consent. The authors have declared that no competing interests exist.", "The study lasted for 6 months from March to August, 2021. The control group were educated by conventional educational methods, while the observation group by teach-back method. The interventions were implemented at the beginning of the study and 3 months later. Before, in the middle, and at the late stage of the study, questionnaires were completed on the knowledge, confidence, and behaviors (KCB) of fall prevention in elderly inpatients, and the occurrence of falls in all elderly patients cared by the caregivers within 6 months were collected. All relevant data are within the paper and its supporting information files. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific methods are as follows:\nThe nursing department organized and established a fall prevention and management group for the elderly inpatients. The head nurse of the rehabilitation is the team leader, with six charge nurses as members and the director of the rehabilitation as consultants.\nIn the control group, caregivers were randomly assigned to clinical departments, and charge nurses taught them about fall prevention using conventional methods, that is, informing them of the high-risk factors of falling in elderly inpatients and distributing fall prevention guidance manual.\nIn the observation group, the teach-back method combined with multimedia was adopted for centralized training. Training content: PPT courseware, scene simulation video, and use of WeChat group. PPT courseware content takes about 10 minutes, including the incidence of falls, varieties of factors related to fall, the harm of fall in elderly patients, treatment measures after the fall, meanings of fall prevention and specific methods, such as for long-term bedridden patients, lifting the head of a bed for 3 to 5 minutes, and then informing to sit by the bed for a few minutes, finally asking to walk slowly without abnormal response standing in situ. The scene simulation video lasted six minutes, showing a variety of causes by which elderly patients fall in the hospital (such as dizziness, sliding, etc.), showing the seriousness of patients fall (such as fractures needing for surgical treatment), as well as advocating some effective measures to prevent falls. 
A nurse is responsible for the establishment of a wechat group to share the knowledge and experience on fall prevention from time to time and analyze the causes of falling events within the group so that everyone can learn lessons. Implementation of teach-back method: After the training, charge nurses ask the caregivers questions one by one using a gentle and open way such as “Do you think what kind of inpatients are prone to fall? “What should we do when a patient complains of dizziness while moving”. The caregivers are encouraged to provide positive answers based on their own understanding. If the answers are incorrect, the nurses will correct them immediately, such as “Let’s strengthen the memory again, it should be...” “When the patient complains of dizziness in the activity, we should immediately ask the patient to stop the activity, sit down or lie down to rest, and inform the medical staff,” etc., and ask for retelling until the retelling is correct.", "A self-designed questionnaire (including name, gender, age, education level, fall-related training experience, and length of service) was used to collect demographic information of the caregivers and evaluate whether there was a statistical difference in two groups.\nWith experiences based on the characteristics of clinical nursing work, and after widely searching relevant literatures and referring to “Morse fall assessment scale” and so on, we designed “the questionnaire about the KCB of professional caregivers on fall prevention in elderly inpatients.” The questionnaire was reviewed and modified by five nursing specialists, the content validity is more than 80%. The questionnaire is composed of 33 items, including three dimensions KCB. The knowledge dimension, with a total of 15 items, covering disease factors, environmental factors, prevention measures and emergency response, etc., used the Likert 5 scoring method. Each entry got 1 point for “I don’t know”, 2 points for “heard”, 3 points for “understanding”, 4 points for “clear”, and 5 points for “very clear”. There were 9 items in the confidence dimension, which reflected the positivity and stance of fall prevention. Alike Likert rating method was adopted and each entry got 1 point for “very reluctant”, 2 points for “unwilling”, 3 points for “uncertain”, 4 points for “willing”, and 5 points for “very willing”, with a total score of 45 points. There were 9 items in the behavioral dimension, and Likert 5 scoring method was also adopted when each entry got 1 point for “Never”, 2 points for “occasionally”, 3 points for “some time”, 4 points for “often”, and 5 points for “always”, with a total score of 45 points. Higher the score is, better the cognition and confidence of the caregivers regarding fall prevention is, and higher the credibility of the normative behavior is. The questionnaire has good reliability and validity. The content validity indices of KCB were 0.812, 0.875, and 0.838, respectively. The total Cronbach’s α coefficient was 0.868, and the Cronbach’s α coefficient for each dimension ranged from 0.808 to 0.865.\nWe used two ways to describe the incidence of falls. One is the proportion of all fallers and the other is the person-daily rate which means the ratio of the number of all falls occurred to the total days of stay for all inpatients taken care of during the study period. The control and observation groups were calculated separately.", "Microsoft Office 2010 Excel was used for data entry and double verification was performed. 
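The fall-rate arithmetic in this section can be reproduced directly from the reported counts. The sketch below recomputes both incidences and the between-group chi-square test; the patient-day denominators for the person-daily rates are not reported in the text, so `days_control` and `days_observation` are explicitly hypothetical placeholders. Note that SciPy applies Yates' continuity correction to 2 × 2 tables by default, which can yield a more conservative P value than an uncorrected Pearson chi-square.

```python
# Recomputing the reported fall incidences and the group comparison.
from scipy.stats import chi2_contingency

control_patients, control_falls = 684, 9
observation_patients, observation_falls = 658, 2

print(f"control incidence: {control_falls / control_patients:.2%}")             # 1.32%
print(f"observation incidence: {observation_falls / observation_patients:.2%}") # 0.30%

# Person-daily rate = falls / total inpatient days. The day totals are not
# reported in the text, so these values are hypothetical placeholders only.
days_control, days_observation = 10_000, 10_000
print(f"person-daily rates: {control_falls / days_control:.2%}, "
      f"{observation_falls / days_observation:.2%}")

# Fallers vs non-fallers, with and without Yates' continuity correction.
table = [[control_falls, control_patients - control_falls],
         [observation_falls, observation_patients - observation_falls]]
for correction in (True, False):
    chi2, p, dof, expected = chi2_contingency(table, correction=correction)
    print(f"correction={correction}: P = {p:.4f}")
```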
4. Discussion

Inpatient falls are a perennial concern, particularly for elderly inpatients. Many approaches to fall prevention have been proven to work.[12,13] Based on our review of related studies, this study is the first to apply the teach-back method to professional caregivers in order to reduce the incidence of inpatient falls. With the aging of China's population and the "one patient, one attendant" system established after the COVID-19 epidemic, professional caregivers are growing in number and have become irreplaceable. At the same time, compared with medical staff, professional caregivers spend the most time with their patients and therefore play a vital role in fall prevention. In the present study, we adopted the teach-back method to improve the KCB of professional caregivers; the incidence of falls was 1.32% in the control group and 0.30% in the observation group. The results show that applying the teach-back method to professional caregivers is an effective measure to prevent falls.

The teach-back method can significantly improve professional caregivers' KCB regarding fall prevention in elderly inpatients. Before the interventions, the average knowledge score in both groups was below 40, a failing level. On the one hand, this reflects the caregivers' low education level: in this study, 56% of the control group and 62% of the observation group had a primary school education or below, similar to the results of Cho MY.[14] On the other hand, the caregivers may not have received professional training in fall prevention. Gu YY[15] pointed out that caregivers acquire fall-related knowledge mainly through personal life experience and hospital training. Without professional training, caregivers tend to judge falls by their own experience, which not only leads easily to negligence but also leaves them insufficiently proactive in fall prevention, and even at a loss in high-risk situations such as dizziness. Nationally, the low educational level of the caregiver workforce will persist for a long time, so strengthening professional training is of great importance for reducing the incidence of falls. According to the theoretical KCB model, cognition determines confidence and ultimately changes behavior. Through training, caregivers can consolidate their knowledge of falls in elderly inpatients, fundamentally appreciate the harm and severity of falls, and participate voluntarily in fall prevention management. Tailored to the characteristics of the professional caregiver group, the present study used multimedia, with its diverse forms, rich content, concrete imagery, and repeatability, to train the observation group, with clear effects.

The teach-back method has several advantages over traditional instruction. The results show that the average KCB scores of both groups increased after the respective interventions, but the observation group's average scores were higher than the control group's, and the difference was statistically significant, indicating that the teach-back method combined with multimedia was more effective than routine health education. This is consistent with Wu Qiuxiang's[7] view that the incidence of falls is lower among elderly patients cared for by professionally trained caregivers than by untrained ones. Traditional instruction relies on passive acceptance of inculcated knowledge, which is inefficient, and professional caregivers easily forget the content. Multimedia can present the relevant content fully, vividly, and concretely, so caregivers accept it more readily. Videos shared in the WeChat group provide continuing health education, allowing caregivers to review earlier material while learning new material, which strengthens their memory. Finally, through teach-back, caregivers feel like active participants, deeply understand the significance of fall prevention and when falls are likely to occur, and ultimately achieve the goal of prevention. The overall level after the second training was higher than after the first, suggesting the need for sustained intervention; the nursing department could set up a dedicated group to conduct regular fall prevention training.

Multimedia combined with the teach-back method can reduce the incidence of falls in elderly inpatients. In this study, the incidence of falls was 1.32% in the control group and 0.30% in the observation group, a statistically significant difference. Previous studies used different interventions, so their reported fall incidences vary widely; the incidence in this study was roughly the same as in Hill AM's research.[16] Given the growing number of elderly inpatients, professional care is becoming increasingly important. Training professional caregivers with the teach-back method is of great significance for reducing falls among elderly inpatients, and multimedia combined with teach-back can quickly equip caregivers with the basic knowledge and skills of fall prevention and nursing.

Overall, with global aging and restricted visiting after the COVID-19 pandemic, professional caregivers are in great demand, and how to improve their KCB on fall prevention quickly and efficiently is particularly important. In our study, the teach-back training mode achieved definite effects and merits further clinical research and application.

This study had some limitations. For example, the elderly patients cared for by the control and observation groups came from different departments, and the incidence of falls differs across departments, so some bias is possible.

Acknowledgements

Thanks to Yili Ou for helping with data collection; to Jie Tan, Zhu Liu, Yamei Wang, and Jiyao Xiang for designing the questionnaires; and to Luxi Yang for quality assurance of the research. Although these colleagues did not participate in writing the article, they helped a great deal.

Author contributions

Conceptualization: Qin Wang.
Data curation: Qin Wang.
Investigation: Huixiang Zou.
Resources: Huixiang Zou.
Software: Qin Wang.
Writing – original draft: Qinqin Wang.
Writing – review & editing: Qinqin Wang.
[ "intro", "methods", null, null, null, "methods", null, "results", "discussion", null, null ]
[ "caregiver", "elderly inpatient", "fall", "KCB", "teach-back" ]
1. Introduction: Falls are the most common adverse events in hospitals, accounting for 20% to 30% of all adverse events.[1] Approximately 3% of inpatients will fall and 25% of them will cause injury.[2] Elderly inpatients have a particularly high risk of fall owing to many comorbidities as well as decreased physical function and activity ability.[3] Falls of inpatients not only bring pains to patients and economic burden to families and society, but also prolong the hospital stay. Moreover, inpatients’ fall may become the fuse of medical contradictions.[4] Clinically, varieties of nurse-led interventions have been used to reduce the incidence of falls in hospitalized patients.[5] There are also plenty of patient-involved studies in regard to fall prevention.[6] But most researches ignored that the elderly are less active, and expected too much of senile inpatients, therefore, the effectiveness of interventions on falls prevention is not ideal. For older patients, the active involvement of caregivers in fall prevention is still needed to reduce the incidence of falls. It has been reported that patients cared for by trained caregivers are less prone to fall.[7] Though playing an important role in the prevention of falls, many professional caregivers are not familiar with knowledge regarding falls in elderly patients.[8] To ensure the safety of patients, it is necessary to conduct fall-related interventions for professional caregivers. The traditional way of preaching is boring, so it’s difficult for caregivers to understand, memorize and apply the content they learn. The teach-back method is a two-way mode of information transmission. This means that after receiving health education, professional caregivers are allowed to express their understandings of relevant information in their own language and they will be given guidance on the information they misunderstand or do not understand again until completely mastering relevant knowledge.[9] This approach has been endorsed by several institutions, including the Joint Commission in The United States, and is widely used in the healthcare communities.[10,11] In present study, teach-back method combined with multimedia was used to train professional caregivers about the knowledge of preventing falls in elderly inpatients, and certain effects were achieved, which are reported as follows. 2. Methods: 2.1. Study population Using convenience sampling, 100 professional caregivers were selected as the research subjects in a 3A grade hospital, and 50 caregivers in each group were randomly assigned to the control group and the observation group. Inclusion criteria: physical and mental health; >65 years old; attendants of the professional caregiver company connected to the hospital; informed consent and voluntary participation in this study. Exclusion criteria: cognitive dysfunction, inability to complete the training and investigations; those who refuse to participate or withdraw voluntarily after reasonable explanation. This study was approved by the medical ethics committee of the right hospital and all participants provided informed consent. The authors have declared that no competing interests exist. Using convenience sampling, 100 professional caregivers were selected as the research subjects in a 3A grade hospital, and 50 caregivers in each group were randomly assigned to the control group and the observation group. 
2.2. Study design: The study lasted 6 months, from March to August 2021. The control group was educated by conventional methods, and the observation group by the teach-back method. The interventions were implemented at the beginning of the study and again 3 months later. At the beginning, middle, and late stages of the study, the caregivers completed questionnaires on the knowledge, confidence, and behaviors (KCB) of fall prevention in elderly inpatients, and the occurrence of falls among all elderly patients cared for by the caregivers during the 6 months was recorded. All relevant data are within the paper and its supporting information files. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific methods are as follows. The nursing department organized and established a fall prevention and management group for elderly inpatients, with the head nurse of the rehabilitation department as team leader, six charge nurses as members, and the director of the rehabilitation department as consultant. In the control group, caregivers were randomly assigned to clinical departments, where charge nurses taught them about fall prevention using conventional methods, that is, informing them of the high-risk factors for falls in elderly inpatients and distributing a fall prevention guidance manual. In the observation group, the teach-back method combined with multimedia was adopted for centralized training. The training content comprised PPT courseware, a scene simulation video, and a WeChat group. The PPT courseware takes about 10 minutes and covers the incidence of falls, the various factors related to falls, the harm of falls in elderly patients, treatment measures after a fall, and the meaning and specific methods of fall prevention; for example, for long-term bedridden patients, the head of the bed is raised for 3 to 5 minutes, the patient is then asked to sit at the bedside for a few minutes, and finally, if there is no abnormal response while standing in place, the patient is asked to walk slowly. The scene simulation video lasts 6 minutes; it shows a variety of causes of in-hospital falls among elderly patients (such as dizziness and slipping), illustrates the seriousness of falls (such as fractures requiring surgical treatment), and advocates effective preventive measures. One nurse is responsible for a WeChat group used to share knowledge and experience on fall prevention from time to time and to analyze the causes of fall events within the group so that everyone can learn from them. Implementation of the teach-back method: after the training, the charge nurses question the caregivers one by one in a gentle, open-ended way, for example, “What kinds of inpatients do you think are prone to falling?” or “What should we do when a patient complains of dizziness while moving?” The caregivers are encouraged to answer actively based on their own understanding. If an answer is incorrect, the nurse corrects it immediately, for example, “Let's reinforce that point again; it should be...” or “When a patient complains of dizziness during activity, we should immediately ask the patient to stop, sit or lie down to rest, and inform the medical staff,” and asks the caregiver to repeat the content back until it is correct.
2.3. Survey tools and evaluation indicators: A self-designed questionnaire (covering name, gender, age, education level, fall-related training experience, and length of service) was used to collect the caregivers' demographic information and to check for statistical differences between the two groups. Drawing on the characteristics of clinical nursing work, a wide search of the relevant literature, and instruments such as the Morse Fall Scale, we designed a questionnaire on the KCB of professional caregivers regarding fall prevention in elderly inpatients. The questionnaire was reviewed and revised by five nursing specialists, with content validity above 80%. It comprises 33 items across the three KCB dimensions. The knowledge dimension has 15 items covering disease factors, environmental factors, prevention measures, emergency response, and so on, scored on a 5-point Likert scale: 1 for “I don't know,” 2 for “heard of it,” 3 for “understand,” 4 for “clear,” and 5 for “very clear.” The confidence dimension has 9 items reflecting attitude and initiative toward fall prevention, scored on the same Likert scale: 1 for “very reluctant,” 2 for “unwilling,” 3 for “uncertain,” 4 for “willing,” and 5 for “very willing,” for a total of 45 points. The behavior dimension also has 9 items on a 5-point Likert scale: 1 for “never,” 2 for “occasionally,” 3 for “sometimes,” 4 for “often,” and 5 for “always,” for a total of 45 points. The higher the score, the better the caregivers' knowledge of and confidence in fall prevention, and the more reliable their adherence to standard practice. The questionnaire has good reliability and validity: the content validity indices of the KCB dimensions were 0.812, 0.875, and 0.838, respectively; the total Cronbach's α coefficient was 0.868, and the coefficients for the individual dimensions ranged from 0.808 to 0.865. We described the incidence of falls in two ways: the proportion of patients who fell, and the person-day rate, that is, the ratio of the total number of falls to the total days of stay of all inpatients cared for during the study period. The two measures were calculated separately for the control and observation groups.
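To make the two incidence measures concrete, the sketch below computes both from raw counts. This is an illustration, not the authors' code: the control-group counts (9 fallers among 684 patients) come from the Results, while the inpatient-days denominator is a made-up figure, since the paper does not report it.

```python
# Minimal sketch of the two fall-incidence measures defined above.
def faller_proportion(n_fallers: int, n_patients: int) -> float:
    """Proportion of inpatients who fell at least once."""
    return n_fallers / n_patients

def person_day_rate(n_falls: int, total_inpatient_days: int) -> float:
    """Ratio of all falls to the total days of stay of all inpatients."""
    return n_falls / total_inpatient_days

# Control-group counts from the Results (9 fallers among 684 patients);
# the 672 inpatient-days denominator is hypothetical, for illustration only.
print(f"faller proportion: {faller_proportion(9, 684):.2%}")   # -> 1.32%
print(f"person-day rate:   {person_day_rate(9, 672):.2%}")
```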
2.4. Statistical methods: Microsoft Office Excel 2010 was used for data entry with double verification, and SPSS 23.0 was used for data analysis. Categorical data are expressed as frequencies or percentages and were compared between groups with the chi-square test. Continuous data were first tested for normality: normally distributed data are presented as mean ± standard deviation and were compared with the t test; otherwise, the rank sum test was used. P < .05 was considered statistically significant.
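The comparison strategy just described can be sketched in a few lines of Python with scipy in place of SPSS; the contingency table and score arrays below are placeholders, not study data. The point is the decision rule: chi-square for counts, then a normality check to choose between the t test and the rank sum test.

```python
# Sketch of the comparison strategy described above, using scipy in place
# of SPSS. The contingency table and score arrays are placeholder data.
import numpy as np
from scipy import stats

# Categorical data: chi-square test on a cross-tabulation of counts.
obs = np.array([[30, 20],
                [25, 25]])                      # placeholder frequencies
chi2, p_chi, dof, expected = stats.chi2_contingency(obs)

# Continuous data: check normality first, then pick the test.
rng = np.random.default_rng(0)
control = rng.normal(40, 5, 50)                 # placeholder scores
observation = rng.normal(44, 5, 50)
normal = (stats.shapiro(control).pvalue > .05
          and stats.shapiro(observation).pvalue > .05)
if normal:
    stat, p = stats.ttest_ind(control, observation)      # t test
else:
    stat, p = stats.mannwhitneyu(control, observation)   # rank sum test

print(f"chi-square p={p_chi:.3f}; score comparison p={p:.3f} (alpha=.05)")
```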
2.5. Sample size: The sample size was calculated with G*Power software, version 3.1.9.7 (Heinrich Heine University, Düsseldorf, Germany). Based on the results of a pilot trial, with a two-sided type I error of 0.05, power of 80% (1 − β = 0.8), and an effect size (d) of 0.60, and allowing for a dropout rate of approximately 10%, the minimum sample size was 100.
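As a cross-check of the G*Power figure, the same numbers can be reproduced with statsmodels' two-sample t-test power analysis (assumed here to be equivalent to the calculation described above; it is not the authors' tool):

```python
# Cross-checking the G*Power figure with statsmodels' two-sample t-test
# power analysis (assumed equivalent to the calculation described above).
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.60, alpha=0.05, power=0.80, alternative="two-sided")
n_total = 2 * math.ceil(n_per_group)        # ~44.6 per group -> 45 -> 90
n_recruit = math.ceil(n_total / 0.90)       # inflate for ~10% dropout -> 100
print(n_per_group, n_total, n_recruit)
```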
3. Results: All 100 caregivers completed the study and the questionnaire survey, for a recovery rate of 100%. There was no significant difference in general characteristics between the two groups (P > .05); the demographic details are presented in Table 1 (Comparison of general data of the two groups). At the beginning of the study, the KCB scores regarding falls in elderly inpatients were low in both groups, with no statistical difference between them. After the respective interventions, the KCB scores on fall prevention improved in both groups; after the first and second training sessions, the average scores of all three dimensions in the observation group were higher than those in the control group, and the differences were statistically significant (P < .05). The details are presented in Tables 2, 3, and 4 (comparisons of the knowledge, confidence, and behavior dimension scores, respectively). During the study period, the caregivers in the control group cared for 684 elderly inpatients, of whom 9 fell, giving a fall incidence of 1.32% and a person-day rate of 1.34%. The caregivers in the observation group cared for 658 elderly inpatients, of whom 2 fell, giving a fall incidence of 0.30% and a person-day rate of 0.33%. The difference in the incidence of falls between the two groups was statistically significant (P < .05; Table 5, Comparisons of the incidence of falls).
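As a quick plausibility check, the between-group comparison can be reproduced from the counts reported above (9 of 684 vs 2 of 658). The paper does not state whether a continuity correction or an exact test was used, so both are shown:

```python
# Plausibility check of the between-group fall comparison from the counts
# reported above. The paper does not say whether a continuity correction
# or an exact test was used, so both are shown.
import numpy as np
from scipy import stats

#                  fell  did not fall
table = np.array([[9, 684 - 9],     # control group
                  [2, 658 - 2]])    # observation group
chi2, p_pearson, _, _ = stats.chi2_contingency(table, correction=False)
_, p_exact = stats.fisher_exact(table)
print(f"Pearson chi-square p={p_pearson:.3f}; Fisher exact p={p_exact:.3f}")
```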
4. Discussion: Inpatient falls are a perennial concern, particularly for elderly inpatients. Many preventive approaches have been shown to work.[12,13] Based on our review of related studies, this study is the first to apply the teach-back method to professional caregivers in order to reduce the incidence of inpatient falls. With the aging of China's population and the “one patient, one attendant” system established after the COVID-19 outbreak, professional caregivers are increasingly numerous and have become irreplaceable. At the same time, compared with medical staff, professional caregivers spend the most time with their patients and therefore play a vital role in fall prevention. In the present study, we adopted the teach-back method to improve the KCB of professional caregivers; the incidence of falls was 1.32% in the control group and 0.30% in the observation group, indicating that training professional caregivers with the teach-back method is an effective fall-prevention measure.

The teach-back method can significantly improve professional caregivers' KCB regarding fall prevention in elderly inpatients. Before the interventions, the average knowledge scores in both groups were below 40, a failing level. On the one hand, this reflects the caregivers' low education level: in this study, 56% of the control group and 62% of the observation group had a primary school education or below, similar to the results of Cho MY.[14] On the other hand, the caregivers may never have received professional training in fall prevention. Gu YY[15] pointed out that caregivers acquire fall-related knowledge mainly through personal life experience and hospital training. Without professional training, caregivers tend to judge fall risk from their own experience, which easily leads to negligence, insufficient initiative in fall prevention, and even helplessness in high-risk situations such as dizziness. Nationally, the low educational attainment of the caregiver workforce will persist for a long time, so strengthening professional training is of great importance for reducing the incidence of falls. According to the knowledge-confidence-behavior (“KCB”) theoretical model, cognition shapes confidence and ultimately changes behavior. Through training, caregivers can consolidate their knowledge about falls in elderly inpatients, genuinely appreciate the harm and severity of falls, and voluntarily participate in fall prevention management. Tailored to the characteristics of the professional caregiver group, the present study used multimedia, with its diverse formats, rich content, concrete imagery, and repeatability, to train the observation group, and obtained clear effects.

The teach-back method has several advantages over traditional didactic education. The results show that the average KCB scores of both groups increased after their respective interventions, but the observation group scored significantly higher than the control group, indicating that the teach-back method combined with multimedia was more effective than routine health education. This is consistent with Wu Qiuxiang's[7] finding that the incidence of falls in elderly patients cared for by professionally trained caregivers is lower than in those cared for by untrained caregivers. Traditional didactic teaching relies on passive acceptance of inculcated knowledge, which is inefficient, and the content is easily forgotten by professional caregivers. Multimedia can present the relevant content fully, vividly, and concretely, so it is more easily accepted by the caregivers. The videos shared in the WeChat group provide continuing health education, allowing caregivers to review previous material while learning new content, which reinforces memory. Finally, through teach-back, caregivers feel like active participants, gain a deep understanding of the significance of fall prevention and of when falls are likely to occur, and thus achieve the goal of prevention. The results also show that overall scores after the second training were higher than after the first, suggesting the need for sustained interventions; in this regard, the nursing department could set up a dedicated group to conduct fall prevention training regularly.
Multimedia combined with the teach-back method can reduce the incidence of falls in elderly inpatients. In this study, the incidence of falls was 1.32% in the control group and 0.30% in the observation group, a statistically significant difference. Previous studies used different interventions, so the reported incidences of falls vary widely; the incidence in this study was roughly the same as in Hill AM's research.[16] Given the growing number of elderly hospital patients, professional care is becoming increasingly important, and training professional caregivers with the teach-back method is of great significance for reducing falls among elderly inpatients. Multimedia combined with teach-back enables caregivers to quickly master the basic knowledge and skills of fall prevention and care. Overall, with global aging and the visiting restrictions introduced during the COVID-19 pandemic, professional caregivers are in great demand, and quickly and efficiently improving their KCB on fall prevention is particularly important. In our study, the teach-back training mode achieved a definite effect and merits further clinical research and application. This study had some limitations. For example, the elderly patients cared for by the control and observation groups came from different departments, and because the incidence of inpatient falls differs across departments, some bias is possible. Acknowledgements: Thanks to Yili Ou for help with data collection; to Jie Tan, Zhu Liu, Yamei Wang, and Jiyao Xiang for designing the questionnaires; and to Luxi Yang for quality assurance of the research. Although these members did not participate in writing the article, they provided a great deal of help. Author contributions: Conceptualization: Qin Wang. Data curation: Qin Wang. Investigation: Huixiang Zou. Resources: Huixiang Zou. Software: Qin Wang. Writing – original draft: Qinqin Wang. Writing – review & editing: Qinqin Wang.
Background: Teach-back is a teaching method that can quickly improve the knowledge of a target audience and effectively change their behavior. However, this approach has not been reported in previous studies dedicated to reducing the incidence of falls in elderly inpatients. We therefore aimed to evaluate the effectiveness of the teach-back method for improving the knowledge, confidence, and behaviors (KCB) of professional caregivers regarding fall prevention in elderly inpatients and to provide practical evidence for reducing the incidence of falls. Methods: This is a prospective study. At recruitment, the demographic data of the professional caregivers were collected in full. A questionnaire on the KCB of professional caregivers regarding fall prevention in elderly inpatients was used as the assessment scale, and differences between scores were analyzed. At the end of the study, the fall rates of the patients cared for by the two groups were counted and analyzed. Results: A total of 100 professional caregivers were recruited, all of whom participated in the whole study process. There were no statistically significant differences in demographic data. Three and six months after the courses, the knowledge, confidence, and behavior scores of both groups had improved significantly, and the observation group's scores were significantly higher than the control group's (P < .05). During the study period, the incidence of falls was 1.32% in the control group and 0.30% in the observation group (P < .05). Conclusions: The teach-back method can rapidly improve the KCB of professional caregivers regarding fall prevention in elderly inpatients and is worthy of clinical application.
null
null
6,247
316
[ 2870, 126, 613, 490, 82, 61, 49 ]
11
[ "fall", "caregivers", "group", "prevention", "points", "falls", "study", "fall prevention", "elderly", "inpatients" ]
[ "falls inpatients", "fall prevention nursing", "prevent falls nurse", "falling elderly inpatients", "caregivers fall prevention" ]
null
null
[CONTENT] caregiver | elderly inpatient | fall | KCB | teach-back [SUMMARY]
[CONTENT] caregiver | elderly inpatient | fall | KCB | teach-back [SUMMARY]
[CONTENT] caregiver | elderly inpatient | fall | KCB | teach-back [SUMMARY]
null
[CONTENT] caregiver | elderly inpatient | fall | KCB | teach-back [SUMMARY]
null
[CONTENT] Aged | Caregivers | Humans | Multimedia | Prospective Studies | Research Design [SUMMARY]
[CONTENT] Aged | Caregivers | Humans | Multimedia | Prospective Studies | Research Design [SUMMARY]
[CONTENT] Aged | Caregivers | Humans | Multimedia | Prospective Studies | Research Design [SUMMARY]
null
[CONTENT] Aged | Caregivers | Humans | Multimedia | Prospective Studies | Research Design [SUMMARY]
null
[CONTENT] falls inpatients | fall prevention nursing | prevent falls nurse | falling elderly inpatients | caregivers fall prevention [SUMMARY]
[CONTENT] falls inpatients | fall prevention nursing | prevent falls nurse | falling elderly inpatients | caregivers fall prevention [SUMMARY]
[CONTENT] falls inpatients | fall prevention nursing | prevent falls nurse | falling elderly inpatients | caregivers fall prevention [SUMMARY]
null
[CONTENT] falls inpatients | fall prevention nursing | prevent falls nurse | falling elderly inpatients | caregivers fall prevention [SUMMARY]
null
[CONTENT] fall | caregivers | group | prevention | points | falls | study | fall prevention | elderly | inpatients [SUMMARY]
[CONTENT] fall | caregivers | group | prevention | points | falls | study | fall prevention | elderly | inpatients [SUMMARY]
[CONTENT] fall | caregivers | group | prevention | points | falls | study | fall prevention | elderly | inpatients [SUMMARY]
null
[CONTENT] fall | caregivers | group | prevention | points | falls | study | fall prevention | elderly | inpatients [SUMMARY]
null
[CONTENT] falls | patients | fall | caregivers | inpatients | professional caregivers | professional | prevention | elderly | information [SUMMARY]
[CONTENT] data | test | performed | groups | performed spss23 statistical | mean standard deviation represent | mean standard deviation | mean standard | mean | data analysis counting [SUMMARY]
[CONTENT] scores | comparisons | groups | comparisons scores | falls | group | difference | dimension | incidence falls | elderly inpatients [SUMMARY]
null
[CONTENT] fall | group | caregivers | points | falls | data | prevention | inpatients | elderly | wang [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| [SUMMARY]
[CONTENT] 100 ||| ||| Three or six months | two ||| 1.32% | 0.30% [SUMMARY]
null
[CONTENT] ||| ||| ||| ||| ||| ||| ||| 100 ||| ||| Three or six months | two ||| 1.32% | 0.30% ||| [SUMMARY]
null
Television watching and risk of colorectal adenoma.
25590667
Prolonged TV watching, a major sedentary behaviour, is associated with increased risk of obesity and diabetes and may be involved in colorectal carcinogenesis.
BACKGROUND
We conducted a cross-sectional analysis among 31 065 men with ⩾1 endoscopy in the Health Professionals Follow-up Study (1988-2008) to evaluate sitting while watching TV and its joint influence with leisure-time physical activity on risk of colorectal adenoma. Logistic regression was used to calculate odds ratios (ORs) and 95% confidence intervals (CIs).
METHODS
Prolonged sitting while watching TV was significantly associated with increased risk of colorectal adenoma (n=4280), and adjusting for physical activity or a potential mediator, body mass index, did not change the estimates. The ORs (95% CIs) across categories of TV watching (0-6, 7-13, 14-20, and 21+ h per week) were 1.00 (referent), 1.09 (1.01-1.17), 1.16 (1.06-1.27), and 1.10 (0.97-1.25) (OR per 14-h per week increment=1.11; 95% CI: 1.04-1.18; P trend=0.001). Compared with the least sedentary (0-6 h per week of TV) and most physically active (highest quintile) men, the most sedentary (14+ h per week) and least active (lowest quintile) men had a significantly increased risk of adenoma (OR=1.25; 95% CI: 1.05-1.49), particularly for high-risk adenoma.
RESULTS
Prolonged TV viewing is associated with a modestly increased risk of colorectal adenoma, independent of leisure-time physical activity and only minimally mediated by obesity.
CONCLUSIONS
[ "Adenoma", "Adult", "Aged", "Body Mass Index", "Colorectal Neoplasms", "Cross-Sectional Studies", "Humans", "Leisure Activities", "Logistic Models", "Male", "Middle Aged", "Obesity", "Risk Factors", "Sedentary Behavior", "Television" ]
4453948
null
null
null
null
Results
During 20 years of follow-up, we documented 4280 newly diagnosed adenomas among 31 065 men who had at least one endoscopy between 1988 and 2008 and reported information on TV watching (Supplementary Table 1). We calculated the distribution of potential risk factors for adenomas according to categories of sitting while watching TV in 1998 (Table 1). Men who spent more time sitting while watching TV were older, had slightly higher BMI, and were more likely to have a history of diabetes. These men also spent more time sitting at home reading, eating, or at desk, and had higher intakes of total energy and red and processed meat. They were less likely to be employed and had lower intake of folate and calcium. Although men who spent more than 21 h sitting while watching TV per week engaged in less leisure-time physical activity compared with men who watched 0–6 h of TV per week, physical activity levels were similar across the other categories of TV watching. More time spent sitting while watching TV was significantly associated with increased risk of colorectal adenoma in multivariate analysis (Table 2). When we adjusted for DASH score instead of folate, calcium, and red and processed meat, the ORs were essentially the same (data not shown). The results were similar after adjusting for physical activity as well as after further adjustment for BMI, suggesting that leisure-time physical activity did not confound and obesity minimally mediated the observed association. The ORs (95% CIs) of adenoma across categories of TV watching (0–6, 7–13, 14–20, and 21+ h per week) were 1.00 (referent), 1.09 (1.01–1.17), 1.16 (1.06–1.27), and 1.10 (0.97–1.25), respectively (P trend=0.001). Each 14-h increment of TV watching per week was associated with an 11% increased risk of adenoma (OR=1.11; 95% CI: 1.04–1.18). Sensitivity analysis controlling for waist circumference instead of BMI showed similar results (data not shown). Other sitting at home (reading, eating, or at desk) (OR per 14-h per week increment=1.08; 95% CI: 1.02–1.16) but not sitting at work/driving was also associated with increased risk of adenoma. For the remaining analyses, we focused on sitting while watching TV and presented results for multivariate analysis adjusting for both physical activity and BMI. Prolonged sitting while watching TV was associated more strongly with proximal and rectal adenoma compared with distal adenoma (Table 3). Excluding men who did not have colonoscopies minimally affected the risk estimates for proximal adenoma (Supplementary Table 2). Men who spent more time watching TV were slightly more likely to have high-risk compared with low-risk adenoma, which was primarily driven by ⩾3 adenomas (Table 4). The positive association between TV watching and risk of adenoma was slightly stronger for men aged 65 years and above compared with men younger than 65 years, men without compared with men with family history of colorectal cancer, among those who were employed (full/part time) compared with retired/unemployed/disabled, and among normal and overweight men than for obese men, although interactions were not significant (Supplementary Table 3). Leisure-time physical activity was associated with lower risk of adenoma (OR, highest vs lowest quintile: 0.87; 95% CI: 0.78–0.97) independent of TV watching. In joint analysis, we observed minimal interaction of TV watching and physical activity levels with risk of adenoma (P interaction=0.98) (Table 5).
Compared with men who were least sedentary (0–6 h per week) and physically most active (highest quintile), those who were most sedentary (14+ h per week of TV watching) and least active (lowest quintile) had a significantly increased risk for adenoma (OR=1.25; 95% CI: 1.05–1.49). The joint association was more pronounced for high-risk (OR=1.29; 95% CI: 1.00–1.66) than for low-risk adenoma (OR=1.06; 95% CI: 0.80–1.40). The significant positive association between TV watching and risk of adenoma persisted when only the most recent information before each endoscopy was used instead of the cumulative average (Supplementary Table 4). When we restricted to the first reported endoscopy for each man, the association remained similar, suggesting that sedentary lifestyle may be involved in both progression and initiation of colorectal adenoma (Supplementary Table 4).
Conclusion
In conclusion, prolonged TV watching is associated with a modestly increased risk of colorectal adenoma, particularly proximal, rectal, and high-risk adenoma, independent of leisure-time physical activity and not mediated by obesity. A sedentary lifestyle may be relevant in colorectal carcinogenesis. More research is warranted to confirm our findings, and studies of time spent sitting while watching TV, as well as other sedentary behaviours, in relation to the risk of colorectal cancer and to survival among colorectal cancer patients are needed.
[ "Study population", "Ascertainment of colorectal adenoma cases and controls", "Assessment of TV watching and other sedentary behaviours", "Assessment of leisure-time physical activity", "Statistical analysis" ]
[ "The HPFS is a cohort study of 51 529 US male health professionals aged 40–75 years at enrolment in 1986. Participants have been mailed questionnaires every 2 years since baseline to collect data on demographics, lifestyle factors, medical history, and disease outcomes, and every 4 years to report update in dietary intake. The overall follow-up rate was >94% (Rimm et al, 1990). In this analysis, we included participants without diagnosis of cancer (except nonmelanoma skin cancer), ulcerative colitis, or colorectal polyp before 1988. To reduce the potential for detection bias, we further restricted to 31 716 men who reported having undergone at least one sigmoidoscopy or colonoscopy between 1988 and 2008. This study was approved by the institutional review board at the Harvard School of Public Health.", "On each biennial questionnaire, we asked whether participants had undergone sigmoidoscopy or colonoscopy; what the indications for these procedures were; whether colon or rectal polyps had been diagnosed in the past 2 years; and if they had the date of diagnosis. When a diagnosis was reported, we obtained informed consent to acquire medical records and pathology reports. Investigators blinded to any exposure information reviewed all records and extracted data on histological type, anatomic location, size, and number of the polyps.\nCases and controls were defined in each 2-year period: all newly diagnosed adenomas (including prevalent adenomas that may have been present for a long time and detected by current endoscopy as well as incident adenomas identified after a previous negative endoscopy, but not recurrent adenomas identified after positive endoscopies) were considered as cases, and all the participants who reported endoscopy but without diagnosis of adenoma were defined as controls. If more than one adenoma was diagnosed, the subject was classified according to the adenoma of the largest size and most advanced histological characteristics. Adenomas in the cecum, ascending colon, hepatic flexure, transverse colon, or splenic flexure were classified as being in the proximal colon. Adenomas in the descending or sigmoid colon were classified as distal and adenomas in the rectum or at the rectosigmoid junction were classified as rectal. We also grouped adenoma cases according to likelihood of developing advanced neoplasia during surveillance (high-risk: at least one adenoma ⩾1 cm in diameter, or with advanced histology (tubulovillous or villous histologic features or high grade or severe dysplasia), or ⩾3 adenomas vs low-risk: other adenoma) (Lieberman et al, 2012), size (large: ⩾1 cm vs small: <1 cm), histology (villous vs tubular), and multiplicity (⩾3 vs <3).", "Starting from 1988, participants reported their average weekly time spent watching TV (including videotapes) biennially. The 1988 questionnaire included six response categories (ranging from 0–1 to 40+ h per week). Subsequent questionnaires included 13 response categories (ranging from 0 to 40+ h per week). We also assessed weekly hours of sitting at work, driving, and other sitting at home (including reading, eating, or at desk), respectively, using the same response categories as TV watching since 1990. To capture long-term sedentary behaviour, we calculated cumulative average hours of sitting (TV watching, at work/driving, and other sitting at home) up to the 2-year interval before the time of the current endoscopy. 
The main analysis of TV watching included 31 065 men with information on TV watching.", "Participants reported average weekly time spent on the following activities biennially: walking, jogging, running, bicycling, calisthenics or use of a rowing machine, lap swimming, squash or racquetball, and tennis, and their usual walking pace. From this information, weekly energy expenditure in MET-h was calculated (Ainsworth et al, 1993). Our physical activity questions have been previously validated against physical activity diaries (Chasan-Taber et al, 1996).", "We analysed sitting while watching TV (0–6, 7–13, 14–20, and 21+ h per week) in relation to the risk of colorectal adenoma as the main analysis. We also evaluated whether sitting at work/driving and other sitting at home were associated with adenoma risk. We then investigated the association between TV watching and risk of adenoma in the proximal, distal, and rectal colon. For proximal adenomas, we conducted sensitivity analysis excluding participants who only had sigmoidoscopy but no colonoscopy. In addition, we stratified by subtypes of adenoma.\nTo take into account that one person may have undergone multiple endoscopies between 1988 and 2008 and to handle time-varying exposure and covariates efficiently, an Andersen-Gill data structure with a new record for each 2-year follow-up period during which a participant underwent an endoscopy was used. Exposure and covariates were set to their values at the time that the questionnaire was returned. Once a participant was diagnosed with adenoma, he was censored for all later follow-up cycles. Age and multivariate-adjusted logistic regressions for clustered data (PROC GENMOD) were used to account for repeated observations (i.e. multiple endoscopies) and calculate odds ratios (ORs) approximating relative risks. Test for trend was conducted using sitting time as a continuous variable. We controlled for the following potential confounders (cumulatively updated when applicable): age in 5-year intervals; history of colorectal cancer in a first-degree relative (yes/no); personal history of diabetes (yes/no); height (meters, continuous); alcohol intake (g per day in categories: <5, 5–9.9, 10–14.9, 15–29.9, and 30+); smoking (pack-years in categories: never smoker, 1–4.9, 5–19.9, 20–39.9, and 40+); regular aspirin use (yes/no); total calorie (kcal per day in quintiles); dietary variables including energy-adjusted total folate (μg per day in quintiles); calcium intake (mg per day in quintiles); red and processed meat intake (servings per day in quintiles); time period of endoscopy (in 2-year interval to capture possible changes in adenoma detection rates); number of endoscopies (continuous); time in years since the most recent endoscopy (continuous); and reason for the current endoscopy (screening/symptoms/other). To fully assess whether observed associations may be explained by unhealthy diet, we adjusted for DASH (Dietary Approaches to Stop Hypertension Diet) score, instead of individual food/nutrient intake (total folate, calcium, and red and processed meat intake). DASH score is a composite score that features high intakes of fruit, vegetables, legumes, and nuts; moderate amounts of low-fat dairy products; and low amounts of animal protein and sweets (Fung et al, 2008).
Adherence to the DASH diet is associated with reduced risk of colorectal cancer (Fung et al, 2010).\nWe additionally adjusted for leisure-time physical activity (MET-h per week in quintiles) to assess whether the influence of sedentary behaviours was independent of physical activity. We adjusted for body mass index (BMI, kg m−2 in quintiles) to assess whether it mediates the association of interest. As BMI is not an accurate indicator of overweight and obesity in the elderly (Wannamethee et al, 2007), we conducted sensitivity analysis adjusting for waist circumference, a measure of abdominal obesity and a potential independent risk factor for colorectal cancer (Wang et al, 2008), among a subsample of men who reported their waist information in 1987 and 1996.\nWe examined if the association between TV watching and risk of adenoma differed by age, family history of colorectal cancer (yes/no), BMI (<25, 25–29.9, and 30+ kg m−2) and employment status (yes/no). Because employment likely influences the potential amount of time that can be spent sitting while watching TV, it could modify any association with adenoma. We also assessed the joint association of TV watching and physical activity by cross-classifying the two variables. We evaluated interaction by entering a product term of continuous TV watching and the above variables, and the P-value for interaction was determined by a Wald test.\nTo compare the influence of recent and long-term sedentary behaviour (the primary analysis) on risk of adenoma, we conducted sensitivity analysis using only the most recent TV viewing information before each endoscopy. Because adenoma cases in our primary analysis included both prevalent adenoma diagnosed at the first endoscopy and incident adenoma diagnosed after previous negative endoscopies, as exploratory analysis, we restricted to the first endoscopies to assess whether prolonged sitting was more associated with progression of adenoma. All the analyses were performed using SAS v. 9.3 (SAS Institute, Cary, NC, USA), and the statistical tests were two-sided and P-values <0.05 were considered statistically significant." ]
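A rough Python analogue of the clustered logistic regression described above (SAS PROC GENMOD with repeated observations per man) is a GEE model with a binomial family, sketched below on a toy data frame. All column names and values are illustrative, not study data; only the modeling pattern follows the text.

```python
# Rough Python analogue of clustered logistic regression (PROC GENMOD):
# GEE with a binomial family and an exchangeable working correlation to
# account for repeated endoscopies per participant. Toy data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "adenoma": rng.integers(0, 2, size=n),          # 1 = adenoma at this endoscopy
    "tv_hours": rng.choice([3.0, 10.0, 17.0, 24.0], size=n),  # category midpoints
    "age": rng.normal(60, 8, size=n),
    "id": rng.integers(0, 200, size=n),             # cluster = participant
}).sort_values("id")                                # group rows by cluster id

model = smf.gee("adenoma ~ tv_hours + age", groups="id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()

# OR per 14-h/week increment of TV watching, with 95% CI, as in the text:
or_14 = np.exp(14 * res.params["tv_hours"])
lo, hi = np.exp(14 * res.conf_int().loc["tv_hours"])
print(f"OR per 14 h/week of TV = {or_14:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```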
[ null, null, null, null, null ]
[ "Materials and Methods", "Study population", "Ascertainment of colorectal adenoma cases and controls", "Assessment of TV watching and other sedentary behaviours", "Assessment of leisure-time physical activity", "Statistical analysis", "Results", "Discussion", "Conclusion" ]
[ " Study population The HPFS is a cohort study of 51 529 US male health professionals aged 40–75 years at enrolment in 1986. Participants have been mailed questionnaires every 2 years since baseline to collect data on demographics, lifestyle factors, medical history, and disease outcomes, and every 4 years to report update in dietary intake. The overall follow-up rate was >94% (Rimm et al, 1990). In this analysis, we included participants without diagnosis of cancer (except nonmelanoma skin cancer), ulcerative colitis, or colorectal polyp before 1988. To reduce the potential for detection bias, we further restricted to 31 716 men who reported having undergone at least one sigmoidoscopy or colonoscopy between 1988 and 2008. This study was approved by the institutional review board at the Harvard School of Public Health.\nThe HPFS is a cohort study of 51 529 US male health professionals aged 40–75 years at enrolment in 1986. Participants have been mailed questionnaires every 2 years since baseline to collect data on demographics, lifestyle factors, medical history, and disease outcomes, and every 4 years to report update in dietary intake. The overall follow-up rate was >94% (Rimm et al, 1990). In this analysis, we included participants without diagnosis of cancer (except nonmelanoma skin cancer), ulcerative colitis, or colorectal polyp before 1988. To reduce the potential for detection bias, we further restricted to 31 716 men who reported having undergone at least one sigmoidoscopy or colonoscopy between 1988 and 2008. This study was approved by the institutional review board at the Harvard School of Public Health.\n Ascertainment of colorectal adenoma cases and controls On each biennial questionnaire, we asked whether participants had undergone sigmoidoscopy or colonoscopy; what the indications for these procedures were; whether colon or rectal polyps had been diagnosed in the past 2 years; and if they had the date of diagnosis. When a diagnosis was reported, we obtained informed consent to acquire medical records and pathology reports. Investigators blinded to any exposure information reviewed all records and extracted data on histological type, anatomic location, size, and number of the polyps.\nCases and controls were defined in each 2-year period: all newly diagnosed adenomas (including prevalent adenomas that may have been present for a long time and detected by current endoscopy as well as incident adenomas identified after a previous negative endoscopy, but not recurrent adenomas identified after positive endoscopies) were considered as cases, and all the participants who reported endoscopy but without diagnosis of adenoma were defined as controls. If more than one adenoma was diagnosed, the subject was classified according to the adenoma of the largest size and most advanced histological characteristics. Adenomas in the cecum, ascending colon, hepatic flexure, transverse colon, or splenic flexure were classified as being in the proximal colon. Adenomas in the descending or sigmoid colon were classified as distal and adenomas in the rectum or at the rectosigmoid junction were classified as rectal. 
We also grouped adenoma cases according to likelihood of developing advanced neoplasia during surveillance (high-risk: at least one adenoma ⩾1 cm in diameter, or with advanced histology (tubulovillous or villous histologic features or high grade or severe dysplasia), or ⩾3 adenomas vs low-risk: other adenoma) (Lieberman et al, 2012), size (large: ⩾1 cm vs small: <1 cm), histology (villous vs tubular), and multiplicity (⩾3 vs <3).\n Assessment of TV watching and other sedentary behaviours Starting from 1988, participants reported their average weekly time spent watching TV (including videotapes) biennially. The 1988 questionnaire included six response categories (ranging from 0–1 to 40+ h per week). Subsequent questionnaires included 13 response categories (ranging from 0 to 40+ h per week). We also assessed weekly hours of sitting at work, driving, and other sitting at home (including reading, eating, or at desk), respectively, using the same response categories as TV watching since 1990. To capture long-term sedentary behaviour, we calculated cumulative average hours of sitting (TV watching, at work/driving, and other sitting at home) up to the 2-year interval before the time of the current endoscopy. The main analysis of TV watching included 31 065 men with information on TV watching.
 Assessment of leisure-time physical activity Participants reported average weekly time spent on the following activities biennially: walking, jogging, running, bicycling, calisthenics or use of a rowing machine, lap swimming, squash or racquetball, and tennis, and their usual walking pace. From this information, weekly energy expenditure in MET-h was calculated (Ainsworth et al, 1993). Our physical activity questions have been previously validated against physical activity diaries (Chasan-Taber et al, 1996).\n Statistical analysis We analysed sitting while watching TV (0–6, 7–13, 14–20, and 21+ h per week) in relation to the risk of colorectal adenoma as the main analysis. We also evaluated whether sitting at work/driving and other sitting at home were associated with adenoma risk. We then investigated the association between TV watching and risk of adenoma in the proximal, distal, and rectal colon. For proximal adenomas, we conducted sensitivity analysis excluding participants who only had sigmoidoscopy but no colonoscopy. In addition, we stratified by subtypes of adenoma.\nTo take into account that one person may have undergone multiple endoscopies between 1988 and 2008 and to handle time-varying exposure and covariates efficiently, an Andersen-Gill data structure with a new record for each 2-year follow-up period during which a participant underwent an endoscopy was used. Exposure and covariates were set to their values at the time that the questionnaire was returned. Once a participant was diagnosed with adenoma, he was censored for all later follow-up cycles. Age and multivariate-adjusted logistic regressions for clustered data (PROC GENMOD) were used to account for repeated observations (i.e. multiple endoscopies) and calculate odds ratios (ORs) approximating relative risks. Test for trend was conducted using sitting time as a continuous variable.
We controlled for the following potential confounders (cumulatively updated when applicable): age in 5-year intervals; history of colorectal cancer in a first-degree relative (yes/no); personal history of diabetes (yes/no); height (meters, continuous); alcohol intake (g per day in categories: <5, 5–9.9, 10–14.9, 15–29.9, and 30+); smoking (pack-years in categories: never smoker, 1–4.9, 5–19.9, 20–39.9, and 40+); regular aspirin use (yes/no); total calorie (kcal per day in quintiles); dietary variables including energy-adjusted total folate (μg per day in quintiles); calcium intake (mg per day in quintiles); red and processed meat intake (servings per day in quintiles); time period of endoscopy (in 2-year interval to capture possible changes in adenoma detection rates); number of endoscopies (continuous); time in years since the most recent endoscopy (continuous); and reason for the current endoscopy (screening/symptoms/other). To fully assess whether observed associations may be explained by unhealthy diet, we adjusted for DASH (Dietary Approaches to Stop Hypertension Diet) score, instead of individual food/nutrient intake (total folate, calcium, and red and processed meat intake). DASH score is a composite score that features high intakes of fruit, vegetables, legumes, and nuts; moderate amounts of low-fat dairy products; and low amounts of animal protein and sweets (Fung et al, 2008). Adherence to the DASH diet is associated with reduced risk of colorectal cancer (Fung et al, 2010).\nWe additionally adjusted for leisure-time physical activity (MET-h per week in quintiles) to assess whether the influence of sedentary behaviours was independent of physical activity. We adjusted for body mass index (BMI, kg m−2 in quintiles) to assess whether it mediates the association of interest. As BMI is not an accurate indicator of overweight and obesity in the elderly (Wannamethee et al, 2007), we conducted sensitivity analysis adjusting for waist circumference, a measure of abdominal obesity and a potential independent risk factor for colorectal cancer (Wang et al, 2008), among a subsample of men who reported their waist information in 1987 and 1996.\nWe examined if the association between TV watching and risk of adenoma differed by age, family history of colorectal cancer (yes/no), BMI (<25, 25–29.9, and 30+ kg m−2) and employment status (yes/no). Because employment likely influences the potential amount of time that can be spent sitting while watching TV, it could modify any association with adenoma. We also assessed the joint association of TV watching and physical activity by cross-classifying the two variables. We evaluated interaction by entering a product term of continuous TV watching and the above variables, and the P-value for interaction was determined by a Wald test.\nTo compare the influence of recent and long-term sedentary behaviour (the primary analysis) on risk of adenoma, we conducted sensitivity analysis using only the most recent TV viewing information before each endoscopy. Because adenoma cases in our primary analysis included both prevalent adenoma diagnosed at the first endoscopy and incident adenoma diagnosed after previous negative endoscopies, as exploratory analysis, we restricted to the first endoscopies to assess whether prolonged sitting was more associated with progression of adenoma. All the analyses were performed using SAS v.
Study population

The HPFS is a cohort study of 51 529 US male health professionals aged 40–75 years at enrolment in 1986. Participants have been mailed questionnaires every 2 years since baseline to collect data on demographics, lifestyle factors, medical history, and disease outcomes, and every 4 years to report updates in dietary intake. The overall follow-up rate was >94% (Rimm et al, 1990). In this analysis, we included participants without a diagnosis of cancer (except nonmelanoma skin cancer), ulcerative colitis, or colorectal polyp before 1988. To reduce the potential for detection bias, we further restricted the sample to the 31 716 men who reported having undergone at least one sigmoidoscopy or colonoscopy between 1988 and 2008. This study was approved by the institutional review board at the Harvard School of Public Health.

Ascertainment of colorectal adenoma cases and controls

On each biennial questionnaire, we asked whether participants had undergone sigmoidoscopy or colonoscopy; what the indications for these procedures were; whether colon or rectal polyps had been diagnosed in the past 2 years; and, if so, the date of diagnosis. When a diagnosis was reported, we obtained informed consent to acquire medical records and pathology reports. Investigators blinded to any exposure information reviewed all records and extracted data on histological type, anatomic location, size, and number of the polyps.

Cases and controls were defined in each 2-year period: all newly diagnosed adenomas (including prevalent adenomas that may have been present for a long time and were detected by the current endoscopy, and incident adenomas identified after a previous negative endoscopy, but not recurrent adenomas identified after positive endoscopies) were considered cases, and all participants who reported an endoscopy without a diagnosis of adenoma were defined as controls. If more than one adenoma was diagnosed, the subject was classified according to the adenoma of the largest size and most advanced histological characteristics. Adenomas in the cecum, ascending colon, hepatic flexure, transverse colon, or splenic flexure were classified as proximal; adenomas in the descending or sigmoid colon as distal; and adenomas in the rectum or at the rectosigmoid junction as rectal. We also grouped adenoma cases according to the likelihood of developing advanced neoplasia during surveillance (high-risk: at least one adenoma ⩾1 cm in diameter, or with advanced histology (tubulovillous or villous features, or high-grade or severe dysplasia), or ⩾3 adenomas; low-risk: all other adenomas) (Lieberman et al, 2012), and by size (large: ⩾1 cm vs small: <1 cm), histology (villous vs tubular), and multiplicity (⩾3 vs <3).
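The high-risk/low-risk grouping is a deterministic rule, which can be sketched as below; the field names are hypothetical, but the criteria follow the definition just given.

```python
from dataclasses import dataclass

@dataclass
class Adenoma:
    size_cm: float
    histology: str               # "tubular", "tubulovillous", or "villous"
    high_grade_dysplasia: bool   # includes severe dysplasia

def is_high_risk(adenomas: list[Adenoma]) -> bool:
    """Apply the surveillance grouping: >=3 adenomas, or any adenoma
    >=1 cm in diameter, with tubulovillous/villous histology, or with
    high-grade or severe dysplasia, is classified as high-risk."""
    if len(adenomas) >= 3:
        return True
    return any(
        a.size_cm >= 1.0
        or a.histology in ("tubulovillous", "villous")
        or a.high_grade_dysplasia
        for a in adenomas
    )

print(is_high_risk([Adenoma(0.4, "tubular", False)]))  # False -> low-risk
print(is_high_risk([Adenoma(1.2, "tubular", False)]))  # True -> size >= 1 cm
```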
Statistical analysis

We analysed sitting while watching TV (0–6, 7–13, 14–20, and 21+ h per week) in relation to the risk of colorectal adenoma as the main analysis. We also evaluated whether sitting at work/driving and other sitting at home were associated with adenoma risk. We then investigated the association between TV watching and risk of adenoma in the proximal colon, distal colon, and rectum. For proximal adenomas, we conducted a sensitivity analysis excluding participants who had undergone only sigmoidoscopy but no colonoscopy. In addition, we stratified by subtypes of adenoma.

To account for the fact that one person may have undergone multiple endoscopies between 1988 and 2008, and to handle time-varying exposures and covariates efficiently, we used an Andersen-Gill data structure with a new record for each 2-year follow-up period during which a participant underwent an endoscopy. Exposures and covariates were set to their values at the time the questionnaire was returned. Once a participant was diagnosed with adenoma, he was censored for all later follow-up cycles. Age- and multivariate-adjusted logistic regressions for clustered data (PROC GENMOD) were used to account for repeated observations (i.e. multiple endoscopies) and to calculate odds ratios (ORs) approximating relative risks. Tests for trend were conducted using sitting time as a continuous variable.
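Outside SAS, a logistic regression for clustered data of this kind can be approximated with a generalized estimating equation, as in the sketch below using Python's statsmodels (an analogue of PROC GENMOD with a REPEATED statement, not the study's code). The simulated data frame stands in for the Andersen-Gill structure: one row per 2-year cycle with an endoscopy, with no rows contributed after a diagnosis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in for the analytic file: one row per participant per
# 2-year cycle in which an endoscopy occurred; a participant diagnosed
# with adenoma would contribute no rows after the cycle of diagnosis.
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "participant_id": rng.integers(0, 200, n),
    "adenoma":        rng.integers(0, 2, n),            # 1 = adenoma found
    "tv_hours_week":  rng.choice([3.0, 10.0, 17.0, 24.0], n),
    "age_cat":        rng.integers(0, 7, n),            # 5-year age bands
})

# Logistic GEE clustered on participant, accounting for repeated
# endoscopies; entering TV hours as a continuous term makes its Wald
# z-test serve as the test for trend.
model = smf.gee(
    "adenoma ~ tv_hours_week + C(age_cat)",
    groups="participant_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
fit = model.fit()
print(fit.summary())
print("OR per 14-h/week of TV:", np.exp(14 * fit.params["tv_hours_week"]))
```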
We controlled for the following potential confounders (cumulatively updated when applicable): age (in 5-year intervals); history of colorectal cancer in a first-degree relative (yes/no); personal history of diabetes (yes/no); height (metres, continuous); alcohol intake (g per day, in categories: <5, 5–9.9, 10–14.9, 15–29.9, and 30+); smoking (pack-years, in categories: never smoker, 1–4.9, 5–19.9, 20–39.9, and 40+); regular aspirin use (yes/no); total energy intake (kcal per day, in quintiles); dietary variables including energy-adjusted total folate (μg per day, in quintiles), calcium intake (mg per day, in quintiles), and red and processed meat intake (servings per day, in quintiles); time period of endoscopy (in 2-year intervals, to capture possible changes in adenoma detection rates); number of endoscopies (continuous); time in years since the most recent endoscopy (continuous); and reason for the current endoscopy (screening/symptoms/other). To assess fully whether observed associations might be explained by an unhealthy diet, we also adjusted for the DASH (Dietary Approaches to Stop Hypertension) diet score instead of individual food/nutrient intakes (total folate, calcium, and red and processed meat). The DASH score is a composite score featuring high intakes of fruit, vegetables, legumes, and nuts; moderate amounts of low-fat dairy products; and low amounts of animal protein and sweets (Fung et al, 2008). Adherence to the DASH diet is associated with a reduced risk of colorectal cancer (Fung et al, 2010).
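A quintile-based composite such as the DASH score can be sketched as follows: beneficial components contribute 1–5 points by ascending quintile of intake, adverse components 1–5 points by descending quintile, and the points are summed. The component list and data below are illustrative assumptions, not the exact scoring of Fung et al (2008).

```python
import numpy as np
import pandas as pd

# Illustrative components drawn from the description above.
BENEFICIAL = ["fruit", "vegetables", "legumes_nuts", "lowfat_dairy"]
ADVERSE = ["red_processed_meat", "sweets"]

def dash_like_score(intakes: pd.DataFrame) -> pd.Series:
    """Quintile-based composite: higher totals indicate closer adherence."""
    score = pd.Series(0, index=intakes.index)
    for col in BENEFICIAL:
        score += pd.qcut(intakes[col], 5, labels=False) + 1  # 1..5, ascending
    for col in ADVERSE:
        score += 5 - pd.qcut(intakes[col], 5, labels=False)  # 1..5, descending
    return score

rng = np.random.default_rng(1)
demo = pd.DataFrame({c: rng.gamma(2.0, 1.0, 100) for c in BENEFICIAL + ADVERSE})
print(dash_like_score(demo).describe())
```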
We additionally adjusted for leisure-time physical activity (MET-h per week, in quintiles) to assess whether the influence of sedentary behaviours was independent of physical activity. We also adjusted for body mass index (BMI, kg m−2, in quintiles) to assess whether it mediates the association of interest. As BMI is not an accurate indicator of overweight and obesity in the elderly (Wannamethee et al, 2007), we conducted a sensitivity analysis adjusting instead for waist circumference, a measure of abdominal obesity and a potential independent risk factor for colorectal cancer (Wang et al, 2008), among the subsample of men who reported waist information in 1987 and 1996.

We examined whether the association between TV watching and risk of adenoma differed by age, family history of colorectal cancer (yes/no), BMI (<25, 25–29.9, and 30+ kg m−2), and employment status (yes/no). Because employment likely influences the amount of time that can be spent sitting while watching TV, it could modify any association with adenoma. We also assessed the joint association of TV watching and physical activity by cross-classifying the two variables. We evaluated interaction by entering a product term of continuous TV watching and each of the above variables; the P-value for interaction was determined by a Wald test.

To compare the influence of recent and of long-term sedentary behaviour (the primary analysis) on risk of adenoma, we conducted a sensitivity analysis using only the most recent TV viewing information before each endoscopy. Because adenoma cases in our primary analysis included both prevalent adenomas diagnosed at a first endoscopy and incident adenomas diagnosed after previous negative endoscopies, we also restricted, as an exploratory analysis, to first endoscopies to assess whether prolonged sitting was more strongly associated with the progression of adenoma. All analyses were performed using SAS v9.3 (SAS Institute, Cary, NC, USA); statistical tests were two-sided, and P-values <0.05 were considered statistically significant.
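The effect-modification analysis described above reduces to adding a product term and reading off its Wald P-value, as in this self-contained sketch (same hypothetical GEE setup as earlier; employed is an assumed binary indicator).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 600
df = pd.DataFrame({
    "participant_id": rng.integers(0, 200, n),
    "adenoma":        rng.integers(0, 2, n),
    "tv_hours_week":  rng.choice([3.0, 10.0, 17.0, 24.0], n),
    "employed":       rng.integers(0, 2, n),
})

# 'a * b' expands to both main effects plus the product term a:b; the
# Wald z-test on the product term's coefficient is the interaction test.
fit = smf.gee(
    "adenoma ~ tv_hours_week * employed",
    groups="participant_id",
    data=df,
    family=sm.families.Binomial(),
).fit()
print("P for interaction (Wald):", fit.pvalues["tv_hours_week:employed"])
```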
Results

During 20 years of follow-up, we documented 4280 newly diagnosed adenomas among the 31 065 men who had at least one endoscopy between 1988 and 2008 and reported information on TV watching (Supplementary Table 1). We calculated the distribution of potential risk factors for adenoma according to categories of sitting while watching TV in 1998 (Table 1). Men who spent more time sitting while watching TV were older, had slightly higher BMI, and were more likely to have a history of diabetes. These men also spent more time sitting at home reading, eating, or at a desk, and had higher intakes of total energy and of red and processed meat. They were less likely to be employed and had lower intakes of folate and calcium. Although men who spent more than 21 h per week sitting while watching TV engaged in less leisure-time physical activity than men who watched 0–6 h of TV per week, physical activity levels were similar across the other categories of TV watching.

More time spent sitting while watching TV was significantly associated with an increased risk of colorectal adenoma in multivariate analysis (Table 2). When we adjusted for DASH score instead of folate, calcium, and red and processed meat, the ORs were essentially the same (data not shown). The results were similar after adjusting for physical activity as well as after further adjustment for BMI, suggesting that leisure-time physical activity did not confound, and obesity only minimally mediated, the observed association. The ORs (95% CIs) of adenoma across categories of TV watching (0–6, 7–13, 14–20, and 21+ h per week) were 1.00 (referent), 1.09 (1.01–1.17), 1.16 (1.06–1.27), and 1.10 (0.97–1.25), respectively (P for trend=0.001). Each 14-h per week increment of TV watching was associated with an 11% increased risk of adenoma (OR=1.11; 95% CI: 1.04–1.18). Sensitivity analysis controlling for waist circumference instead of BMI showed similar results (data not shown). Other sitting at home (reading, eating, or at a desk) (OR per 14-h per week increment=1.08; 95% CI: 1.02–1.16), but not sitting at work/driving, was also associated with an increased risk of adenoma. For the remaining analyses, we focused on sitting while watching TV and present results for multivariate analyses adjusting for both physical activity and BMI.

Prolonged sitting while watching TV was associated more strongly with proximal and rectal adenoma than with distal adenoma (Table 3). Excluding men who did not have colonoscopies minimally affected the risk estimates for proximal adenoma (Supplementary Table 2).
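The per-increment estimate reported above follows from the continuous model: with TV hours entered linearly, the OR for a 14-h increment is exp(14β). The snippet below back-calculates β from the reported OR of 1.11 purely to show the arithmetic.

```python
import numpy as np

beta_per_hour = np.log(1.11) / 14        # log-OR per weekly TV hour implied by OR=1.11 per 14 h
or_per_14h = np.exp(14 * beta_per_hour)  # exp(14 * beta) recovers the reported OR
print(round(or_per_14h, 2))              # 1.11
```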
Men who spent more time watching TV were slightly more likely to have high-risk than low-risk adenoma, a difference primarily driven by ⩾3 adenomas (Table 4).

The positive association between TV watching and risk of adenoma was slightly stronger for men aged 65 years and above than for men younger than 65 years, for men without than with a family history of colorectal cancer, for men who were employed (full/part time) than for those retired/unemployed/disabled, and for normal-weight and overweight men than for obese men, although the interactions were not significant (Supplementary Table 3).

Leisure-time physical activity was associated with a lower risk of adenoma (OR for highest vs lowest quintile: 0.87; 95% CI: 0.78–0.97) independent of TV watching. In joint analysis, we observed minimal interaction of TV watching and physical activity levels with risk of adenoma (P for interaction=0.98) (Table 5). Compared with men who were least sedentary (0–6 h per week) and physically most active (highest quintile), those who were most sedentary (14+ h per week of TV watching) and least active (lowest quintile) had a significantly increased risk of adenoma (OR=1.25; 95% CI: 1.05–1.49). The joint association was more pronounced for high-risk (OR=1.29; 95% CI: 1.00–1.66) than for low-risk adenoma (OR=1.06; 95% CI: 0.80–1.40).

The significant positive association between TV watching and risk of adenoma persisted when only the most recent information before each endoscopy was used instead of the cumulative average (Supplementary Table 4). When we restricted to the first reported endoscopy for each man, the association remained similar, suggesting that a sedentary lifestyle may be involved in both the progression and the initiation of colorectal adenoma (Supplementary Table 4).
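The joint categories reported above come from cross-classifying the two exposures; a minimal sketch of that step, with assumed cut-points and column names, is:

```python
import pandas as pd

df = pd.DataFrame({
    "tv_hours_week": [3.0, 10.0, 20.0, 25.0],
    "met_h_week":    [48.0, 12.0, 4.0, 30.0],
})

# Categorise TV time and physical activity, then combine the labels so
# each man falls into one joint sedentary/activity cell.
df["tv_cat"] = pd.cut(
    df["tv_hours_week"], bins=[0, 6, 13, float("inf")],
    labels=["0-6", "7-13", "14+"], include_lowest=True,
)
df["pa_cat"] = pd.qcut(df["met_h_week"], 2, labels=["low", "high"])
df["joint"] = df["tv_cat"].astype(str) + " h TV / " + df["pa_cat"].astype(str) + " PA"
print(df[["tv_hours_week", "met_h_week", "joint"]])
```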
Discussion

In this prospective analysis nested in a large cohort of men, a sedentary lifestyle, primarily more time spent sitting while watching TV, was significantly associated with an increased risk of colorectal adenoma. This association appeared independent of levels of leisure-time physical activity and was minimally mediated by BMI. The association was more pronounced for rectal and proximal than for distal adenoma, and slightly stronger for high-risk than for low-risk adenoma, driven by multiple adenomas. To the best of our knowledge, our study is the first to link sedentary behaviour to a precursor of cancer.

Our findings indicate the potential importance of reducing time spent sedentary, in particular TV viewing time, in addition to promoting physical activity, in the prevention of colorectal neoplasia. In particular, being less sedentary and more active primarily reduced the risk of high-risk adenoma, which is more likely to progress to colorectal cancer and serves as the major target of screening endoscopies (Winawer and Zauber, 2002; Regula et al, 2006). Our observed weaker and null associations between other sitting at home (reading, eating, or at a desk) and sitting at work/driving, respectively, and risk of colorectal adenoma are consistent with a recent meta-analysis in which, for colon cancer, the RR comparing the highest with the lowest level of sedentary time was higher for TV watching (RR=1.54) than for time spent sitting at work (RR=1.24) (Schmid and Leitzmann, 2014). The reasons for these differences across sedentary pursuits are unclear but may relate to measurement error as well as the additional link between TV watching and an unhealthy diet (Hu et al, 2003; Owen et al, 2010).

There are several plausible mechanisms through which sedentary behaviour may increase the risk of colorectal neoplasia. Obesity may represent an intermediate step in the causal pathway (Lynch, 2010); however, in our multivariate analysis, the odds ratio estimates were hardly changed after adjustment for BMI or waist circumference. Another hypothesis is that prolonged TV watching is associated with increased consumption of unhealthy food (Hu et al, 2001, 2003); yet in our study the significant positive association persisted after adjusting for individual food/nutrient intakes (red and processed meat, folate, and calcium) or the composite DASH score. Other mechanisms have been postulated. Hyperinsulinaemia and possibly hyperglycaemia may promote colon carcinogenesis (Giovannucci, 1995, 2007). A meta-analysis of 10 cross-sectional studies showed that, comparing the highest level of sedentary behaviour with the lowest, greater time spent sedentary increased the risk of metabolic syndrome by 73% (Edwardson et al, 2012). A prospective analysis among 376 middle-aged adults suggested that baseline sedentary behaviour (defined by heart rate observations below an individually predetermined threshold) was independently associated with higher log fasting plasma insulin at follow-up (Helmerhorst et al, 2009). In cross-sectional studies, sedentary behaviour was also positively associated with insulin (Gustat et al, 2002), insulin resistance (Balkau et al, 2008; Schmidt et al, 2008), and 2-h glucose (Healy et al, 2007). Inflammation (Fung et al, 2000) and loss of muscle contractile activity, which leads to suppressed lipoprotein lipase activity and glucose uptake (Hamilton et al, 2007), may also be involved in the link between sitting time and adenoma/cancer.
Our observation that prolonged sitting while watching TV was associated more strongly with proximal and rectal than with distal adenoma requires confirmation by other studies. Indeed, the literature concerning physical activity and colorectal cancer by anatomical subsite is inconsistent (Robsahm et al, 2013), and limited evidence suggests that sedentary behaviour may also differentially affect colon carcinogenesis by subsite (Boyle et al, 2011). Of note, identification of potential differential associations between an exposure and risk of colorectal adenoma/cancer by anatomic subsite could provide more insight into colorectal carcinogenic mechanisms. For example, cigarette smoking is associated with higher risk of CIMP (CpG island methylator phenotype)-high (Samowitz et al, 2006; Curtin et al, 2009; Limsui et al, 2010; Nishihara et al, 2013), MSI (microsatellite instability)-high (Slattery et al, 2000; Curtin et al, 2009; Poynter et al, 2009; Limsui et al, 2010; Nishihara et al, 2013), and BRAF (v-raf murine sarcoma viral oncogene homolog B1)-mutated (Samowitz et al, 2006; Curtin et al, 2009; Limsui et al, 2010; Rozek et al, 2010; Nishihara et al, 2013) colorectal cancers, which occur more frequently in the proximal colon. In particular, if our observed stronger association between sitting while watching TV and proximal adenoma is true, decreasing sedentary behaviour may be particularly beneficial for the prevention of proximal adenomas/cancers, which are less detectable and more likely to be missed even in colonoscopies (Lieberman et al, 2012).

Strengths of our study include the ability to capture long-term sedentary behaviour, the minimisation of measurement error by asking about time spent sitting watching TV explicitly while excluding TV viewing coupled with non-sedentary activities (e.g. cooking, on treadmills), and control for a variety of potential confounders and mediators. In addition, although the average weekly hours of sitting watching TV (10 h) was lower than national estimates (34 h per week in adults aged 50–64 years) (Nielsen, 2011), the wide distribution of TV viewing time in our study (an average of 4 h per week among the least and 26 h per week among the most sedentary men) allowed us to assess the potential health benefit of a less sedentary lifestyle.

Our study also had several limitations. First, we did not assess time spent standing at home/work, which similarly requires low energy expenditure but differs physically from sitting in that it involves isometric contraction of the antigravity (postural) muscles (Hamilton et al, 2008; Owen et al, 2010). Whether standing could ameliorate the increased risk of colorectal adenoma and other chronic outcomes associated with too much sitting requires further investigation. In addition, time spent sitting at a computer, an increasingly prevalent sedentary behaviour in modern society, was not assessed in our cohort. Second, we only included participants who reported having undergone an endoscopy. However, as adenomas are largely asymptomatic, misclassification of the outcome is likely non-differential, that is, unrelated to sedentary behaviours, and the influence of any such bias is likely to be small. Moreover, self-report of endoscopies was reliable in our cohort: a previous review of medical records obtained from a random sample of 200 patients who reported a negative endoscopic result confirmed the absence of adenomas in all cases. In addition, measurement errors in the recall of sedentary behaviours, as well as in potential confounders from the biennial questionnaires, were likely; however, they would be non-differential with respect to adenoma diagnosis. The possibility of residual confounding, especially from physical activity, could not be ruled out, even though our leisure-time physical activity questions have been previously validated and the occupational physical activity of these health professionals was limited. Finally, the generalisability of our data to other populations, particularly women and other racial or ethnic groups, may be limited.

Conclusions

In conclusion, prolonged TV watching is associated with a modestly increased risk of colorectal adenoma, particularly proximal, rectal, and high-risk adenoma, independent of leisure-time physical activity and not mediated by obesity. A sedentary lifestyle may therefore be relevant to colorectal carcinogenesis. More research is warranted to confirm our findings, and studies of time spent sitting while watching TV, as well as other sedentary behaviours, in relation to the risk of colorectal cancer and to survival among colorectal cancer patients are needed.
[ "materials|methods", null, null, null, null, null, "results", "discussion", "conclusions" ]
[ "colorectal adenoma", "sedentary behaviour", "television watching" ]
Materials and Methods: Study population The HPFS is a cohort study of 51 529 US male health professionals aged 40–75 years at enrolment in 1986. Participants have been mailed questionnaires every 2 years since baseline to collect data on demographics, lifestyle factors, medical history, and disease outcomes, and every 4 years to report update in dietary intake. The overall follow-up rate was >94% (Rimm et al, 1990). In this analysis, we included participants without diagnosis of cancer (except nonmelanoma skin cancer), ulcerative colitis, or colorectal polyp before 1988. To reduce the potential for detection bias, we further restricted to 31 716 men who reported having undergone at least one sigmoidoscopy or colonoscopy between 1988 and 2008. This study was approved by the institutional review board at the Harvard School of Public Health. The HPFS is a cohort study of 51 529 US male health professionals aged 40–75 years at enrolment in 1986. Participants have been mailed questionnaires every 2 years since baseline to collect data on demographics, lifestyle factors, medical history, and disease outcomes, and every 4 years to report update in dietary intake. The overall follow-up rate was >94% (Rimm et al, 1990). In this analysis, we included participants without diagnosis of cancer (except nonmelanoma skin cancer), ulcerative colitis, or colorectal polyp before 1988. To reduce the potential for detection bias, we further restricted to 31 716 men who reported having undergone at least one sigmoidoscopy or colonoscopy between 1988 and 2008. This study was approved by the institutional review board at the Harvard School of Public Health. Ascertainment of colorectal adenoma cases and controls On each biennial questionnaire, we asked whether participants had undergone sigmoidoscopy or colonoscopy; what the indications for these procedures were; whether colon or rectal polyps had been diagnosed in the past 2 years; and if they had the date of diagnosis. When a diagnosis was reported, we obtained informed consent to acquire medical records and pathology reports. Investigators blinded to any exposure information reviewed all records and extracted data on histological type, anatomic location, size, and number of the polyps. Cases and controls were defined in each 2-year period: all newly diagnosed adenomas (including prevalent adenomas that may have been present for a long time and detected by current endoscopy as well as incident adenomas identified after a previous negative endoscopy, but not recurrent adenomas identified after positive endoscopies) were considered as cases, and all the participants who reported endoscopy but without diagnosis of adenoma were defined as controls. If more than one adenoma was diagnosed, the subject was classified according to the adenoma of the largest size and most advanced histological characteristics. Adenomas in the cecum, ascending colon, hepatic flexure, transverse colon, or splenic flexure were classified as being in the proximal colon. Adenomas in the descending or sigmoid colon were classified as distal and adenomas in the rectum or at the rectosigmoid junction were classified as rectal. 
We also grouped adenoma cases according to likelihood of developing advanced neoplasia during surveillance (high-risk: at least one adenoma ⩾1 cm in diameter, or with advanced histology (tubulovillous or villous histologic features or high grade or severe dysplasia), or ⩾3 adenomas vs low-risk: other adenoma) (Lieberman et al, 2012), size (large: ⩾1 cm vs small: <1 cm), histology (villous vs tubular), and multiplicity (⩾3 vs <3). On each biennial questionnaire, we asked whether participants had undergone sigmoidoscopy or colonoscopy; what the indications for these procedures were; whether colon or rectal polyps had been diagnosed in the past 2 years; and if they had the date of diagnosis. When a diagnosis was reported, we obtained informed consent to acquire medical records and pathology reports. Investigators blinded to any exposure information reviewed all records and extracted data on histological type, anatomic location, size, and number of the polyps. Cases and controls were defined in each 2-year period: all newly diagnosed adenomas (including prevalent adenomas that may have been present for a long time and detected by current endoscopy as well as incident adenomas identified after a previous negative endoscopy, but not recurrent adenomas identified after positive endoscopies) were considered as cases, and all the participants who reported endoscopy but without diagnosis of adenoma were defined as controls. If more than one adenoma was diagnosed, the subject was classified according to the adenoma of the largest size and most advanced histological characteristics. Adenomas in the cecum, ascending colon, hepatic flexure, transverse colon, or splenic flexure were classified as being in the proximal colon. Adenomas in the descending or sigmoid colon were classified as distal and adenomas in the rectum or at the rectosigmoid junction were classified as rectal. We also grouped adenoma cases according to likelihood of developing advanced neoplasia during surveillance (high-risk: at least one adenoma ⩾1 cm in diameter, or with advanced histology (tubulovillous or villous histologic features or high grade or severe dysplasia), or ⩾3 adenomas vs low-risk: other adenoma) (Lieberman et al, 2012), size (large: ⩾1 cm vs small: <1 cm), histology (villous vs tubular), and multiplicity (⩾3 vs <3). Assessment of TV watching and other sedentary behaviours Starting from 1988, participants reported their average weekly time spent watching TV (including videotapes) biennially. The 1988 questionnaire included six response categories (ranging from 0–1 to 40+ h per week). Subsequent questionnaires included 13 response categories (ranging from 0 to 40+ h per week). We also assessed weekly hours of sitting at work, driving, and other sitting at home (including reading, eating, or at desk), respectively, using the same response categories as TV watching since 1990. To capture long-term sedentary behaviour, we calculated cumulative average hours of sitting (TV watching, at work/driving, and other sitting at home) up to the 2-year interval before the time of the current endoscopy. The main analysis of TV watching included 31 065 men with information on TV watching. Starting from 1988, participants reported their average weekly time spent watching TV (including videotapes) biennially. The 1988 questionnaire included six response categories (ranging from 0–1 to 40+ h per week). Subsequent questionnaires included 13 response categories (ranging from 0 to 40+ h per week). 
We also assessed weekly hours of sitting at work, driving, and other sitting at home (including reading, eating, or at desk), respectively, using the same response categories as TV watching since 1990. To capture long-term sedentary behaviour, we calculated cumulative average hours of sitting (TV watching, at work/driving, and other sitting at home) up to the 2-year interval before the time of the current endoscopy. The main analysis of TV watching included 31 065 men with information on TV watching. Assessment of leisure-time physical activity Participants reported average weekly time spent on the following activities biennially: walking, jogging, running, bicycling, calisthenics or use of a rowing machine, lap swimming, squash or racquetball, and tennis, and their usual walking pace. From these information, weekly energy expenditure in MET-h was calculated (Ainsworth et al, 1993). Our physical activity questions have been previously validated against physical activity diaries (Chasans-Taber et al, 1996). Participants reported average weekly time spent on the following activities biennially: walking, jogging, running, bicycling, calisthenics or use of a rowing machine, lap swimming, squash or racquetball, and tennis, and their usual walking pace. From these information, weekly energy expenditure in MET-h was calculated (Ainsworth et al, 1993). Our physical activity questions have been previously validated against physical activity diaries (Chasans-Taber et al, 1996). Statistical analysis We analysed sitting while watching TV (0–6, 7–13,14–20, and 21+ h per week) in relation to the risk of colorectal adenoma as the main analysis. We also evaluated whether sitting at work/driving, and other sitting at home was associated with adenoma risk. We then investigated the association between TV watching and risk of adenoma in the proximal, distal, and rectal colon. For proximal adenomas, we conducted sensitivity analysis excluding participants who only had sigmoidoscopy but no colonoscopy. In addition, we stratified by subtypes of adenoma. To take into account that one person may have undergone multiple endoscopies between 1988 and 2008 and to handle time-varying exposure and covariates efficiently, Andersen-Gill data structure with a new record for each 2-year follow-up period during which a participant underwent an endoscopy was used. Exposure and covariates were set to their values at the time that the questionnaire was returned. Once a participant was diagnosed with adenoma, he was censored for all later follow-up cycles. Age and multivariate-adjusted logistic regressions for clustered data (PROC GENMOD) were used to account for repeated observations (i.e. multiple endoscopies) and calculate odds ratios (ORs) approximating relative risks. Test for trend was conducted using sitting time as a continuous variable. 
We controlled for the following potential confounders (cumulative updated when applicable): age in 5-year intervals; history of colorectal cancer in a first-degree relative (yes/no); personal history of diabetes (yes/no); height (meter in continuous); alcohol intake (g per day in categories: <5, 5–9.9, 10–14.9, 15–29.9, and 30+); smoking (pack-years in categories: never smoker, 1–4.9, 5–19.9, 20–39.9, and 40+); regular aspirin use (yes/no); total calorie (kcal per day in quintiles); dietary variables including energy-adjusted total folate (μg per day in quintiles); calcium intake (mg per day in quintiles); red and processed meat intake (servings per day in quintiles); time period of endoscopy (in 2-year interval to capture possible changes in adenoma detection rates); number of endoscopies (continuous); time in years since the most recent endoscopy (continuous); and reason for the current endoscopy (screening/symptoms/other). To fully assess whether observed associations may be explained by unhealthy diet, we adjusted for DASH (Dietary Approaches to Stop Hypertension Diet) score, instead of individual food/nutrient intake (total folate, calcium, and red and processed meat intake). DASH score is a composite score that features high intakes of fruit, vegetables, legumes, and nuts; moderate amounts of low-fat dairy products; and low amounts of animal protein and sweets (Fung et al, 2008). Adherence to the DASH diet is associated with reduced risk of colorectal cancer (Fung et al, 2010). We additionally adjusted for leisure-time physical activity (MET-h per week in quintiles) to assess whether the influence of sedentary behaviours were independent of physical activity. We adjusted for body mass index (BMI, kg m−2 in quintiles) to assess whether it mediates the association of interest. As BMI is not an accurate indicator of overweight and obesity in the elderly (Wannamethee et al, 2007), we conducted sensitivity analysis adjusting for waist circumference, a measure of abdominal obesity and a potential independent risk factor for colorectal cancer (Wang et al, 2008), among a subsample of men who reported their waist information in 1987 and 1996. We examined if the association between TV watching and risk of adenoma differed by age, family history of colorectal cancer (yes/no), BMI (<25, 25–29.9, and 30+ kg m−2) and employment status (yes/no). Because employment likely influences the potential amount of time that can be spent sitting while watching TV, it could modify any association with adenoma. We also assessed the joint association of TV watching and physical activity by cross-classifying the two variables. We evaluated interaction by entering a product term of continuous TV watching and the above variables, and the P-value for interaction was determined by a Wald test. To compare the influence of recent and long-term sedentary behaviour (the primary analysis) on risk of adenoma, we conducted sensitivity analysis using only the most recent TV viewing information before each endoscopy. Because adenoma cases in our primary analysis included both prevalent adenoma diagnosed at the first endoscopy and incident adenoma diagnosed after previous negative endoscopies, as exploratory analysis, we restricted to the first endoscopies to assess whether prolonged sitting was more associated with progression of adenoma. All the analyses were performed using SAS v. 9.3 (SAS Institute, Cary, NC, USA), and the statistical tests were two-sided and P-values <0.05 was considered statistically significant. 
We analysed sitting while watching TV (0–6, 7–13,14–20, and 21+ h per week) in relation to the risk of colorectal adenoma as the main analysis. We also evaluated whether sitting at work/driving, and other sitting at home was associated with adenoma risk. We then investigated the association between TV watching and risk of adenoma in the proximal, distal, and rectal colon. For proximal adenomas, we conducted sensitivity analysis excluding participants who only had sigmoidoscopy but no colonoscopy. In addition, we stratified by subtypes of adenoma. To take into account that one person may have undergone multiple endoscopies between 1988 and 2008 and to handle time-varying exposure and covariates efficiently, Andersen-Gill data structure with a new record for each 2-year follow-up period during which a participant underwent an endoscopy was used. Exposure and covariates were set to their values at the time that the questionnaire was returned. Once a participant was diagnosed with adenoma, he was censored for all later follow-up cycles. Age and multivariate-adjusted logistic regressions for clustered data (PROC GENMOD) were used to account for repeated observations (i.e. multiple endoscopies) and calculate odds ratios (ORs) approximating relative risks. Test for trend was conducted using sitting time as a continuous variable. We controlled for the following potential confounders (cumulative updated when applicable): age in 5-year intervals; history of colorectal cancer in a first-degree relative (yes/no); personal history of diabetes (yes/no); height (meter in continuous); alcohol intake (g per day in categories: <5, 5–9.9, 10–14.9, 15–29.9, and 30+); smoking (pack-years in categories: never smoker, 1–4.9, 5–19.9, 20–39.9, and 40+); regular aspirin use (yes/no); total calorie (kcal per day in quintiles); dietary variables including energy-adjusted total folate (μg per day in quintiles); calcium intake (mg per day in quintiles); red and processed meat intake (servings per day in quintiles); time period of endoscopy (in 2-year interval to capture possible changes in adenoma detection rates); number of endoscopies (continuous); time in years since the most recent endoscopy (continuous); and reason for the current endoscopy (screening/symptoms/other). To fully assess whether observed associations may be explained by unhealthy diet, we adjusted for DASH (Dietary Approaches to Stop Hypertension Diet) score, instead of individual food/nutrient intake (total folate, calcium, and red and processed meat intake). DASH score is a composite score that features high intakes of fruit, vegetables, legumes, and nuts; moderate amounts of low-fat dairy products; and low amounts of animal protein and sweets (Fung et al, 2008). Adherence to the DASH diet is associated with reduced risk of colorectal cancer (Fung et al, 2010). We additionally adjusted for leisure-time physical activity (MET-h per week in quintiles) to assess whether the influence of sedentary behaviours were independent of physical activity. We adjusted for body mass index (BMI, kg m−2 in quintiles) to assess whether it mediates the association of interest. As BMI is not an accurate indicator of overweight and obesity in the elderly (Wannamethee et al, 2007), we conducted sensitivity analysis adjusting for waist circumference, a measure of abdominal obesity and a potential independent risk factor for colorectal cancer (Wang et al, 2008), among a subsample of men who reported their waist information in 1987 and 1996. 
We examined if the association between TV watching and risk of adenoma differed by age, family history of colorectal cancer (yes/no), BMI (<25, 25–29.9, and 30+ kg m−2) and employment status (yes/no). Because employment likely influences the potential amount of time that can be spent sitting while watching TV, it could modify any association with adenoma. We also assessed the joint association of TV watching and physical activity by cross-classifying the two variables. We evaluated interaction by entering a product term of continuous TV watching and the above variables, and the P-value for interaction was determined by a Wald test. To compare the influence of recent and long-term sedentary behaviour (the primary analysis) on risk of adenoma, we conducted sensitivity analysis using only the most recent TV viewing information before each endoscopy. Because adenoma cases in our primary analysis included both prevalent adenoma diagnosed at the first endoscopy and incident adenoma diagnosed after previous negative endoscopies, as exploratory analysis, we restricted to the first endoscopies to assess whether prolonged sitting was more associated with progression of adenoma. All the analyses were performed using SAS v. 9.3 (SAS Institute, Cary, NC, USA), and the statistical tests were two-sided and P-values <0.05 was considered statistically significant. Study population: The HPFS is a cohort study of 51 529 US male health professionals aged 40–75 years at enrolment in 1986. Participants have been mailed questionnaires every 2 years since baseline to collect data on demographics, lifestyle factors, medical history, and disease outcomes, and every 4 years to report update in dietary intake. The overall follow-up rate was >94% (Rimm et al, 1990). In this analysis, we included participants without diagnosis of cancer (except nonmelanoma skin cancer), ulcerative colitis, or colorectal polyp before 1988. To reduce the potential for detection bias, we further restricted to 31 716 men who reported having undergone at least one sigmoidoscopy or colonoscopy between 1988 and 2008. This study was approved by the institutional review board at the Harvard School of Public Health. Ascertainment of colorectal adenoma cases and controls: On each biennial questionnaire, we asked whether participants had undergone sigmoidoscopy or colonoscopy; what the indications for these procedures were; whether colon or rectal polyps had been diagnosed in the past 2 years; and if they had the date of diagnosis. When a diagnosis was reported, we obtained informed consent to acquire medical records and pathology reports. Investigators blinded to any exposure information reviewed all records and extracted data on histological type, anatomic location, size, and number of the polyps. Cases and controls were defined in each 2-year period: all newly diagnosed adenomas (including prevalent adenomas that may have been present for a long time and detected by current endoscopy as well as incident adenomas identified after a previous negative endoscopy, but not recurrent adenomas identified after positive endoscopies) were considered as cases, and all the participants who reported endoscopy but without diagnosis of adenoma were defined as controls. If more than one adenoma was diagnosed, the subject was classified according to the adenoma of the largest size and most advanced histological characteristics. 
Adenomas in the cecum, ascending colon, hepatic flexure, transverse colon, or splenic flexure were classified as being in the proximal colon. Adenomas in the descending or sigmoid colon were classified as distal and adenomas in the rectum or at the rectosigmoid junction were classified as rectal. We also grouped adenoma cases according to likelihood of developing advanced neoplasia during surveillance (high-risk: at least one adenoma ⩾1 cm in diameter, or with advanced histology (tubulovillous or villous histologic features or high grade or severe dysplasia), or ⩾3 adenomas vs low-risk: other adenoma) (Lieberman et al, 2012), size (large: ⩾1 cm vs small: <1 cm), histology (villous vs tubular), and multiplicity (⩾3 vs <3). Assessment of TV watching and other sedentary behaviours: Starting from 1988, participants reported their average weekly time spent watching TV (including videotapes) biennially. The 1988 questionnaire included six response categories (ranging from 0–1 to 40+ h per week). Subsequent questionnaires included 13 response categories (ranging from 0 to 40+ h per week). We also assessed weekly hours of sitting at work, driving, and other sitting at home (including reading, eating, or at desk), respectively, using the same response categories as TV watching since 1990. To capture long-term sedentary behaviour, we calculated cumulative average hours of sitting (TV watching, at work/driving, and other sitting at home) up to the 2-year interval before the time of the current endoscopy. The main analysis of TV watching included 31 065 men with information on TV watching. Assessment of leisure-time physical activity: Participants reported average weekly time spent on the following activities biennially: walking, jogging, running, bicycling, calisthenics or use of a rowing machine, lap swimming, squash or racquetball, and tennis, and their usual walking pace. From these information, weekly energy expenditure in MET-h was calculated (Ainsworth et al, 1993). Our physical activity questions have been previously validated against physical activity diaries (Chasans-Taber et al, 1996). Statistical analysis: We analysed sitting while watching TV (0–6, 7–13,14–20, and 21+ h per week) in relation to the risk of colorectal adenoma as the main analysis. We also evaluated whether sitting at work/driving, and other sitting at home was associated with adenoma risk. We then investigated the association between TV watching and risk of adenoma in the proximal, distal, and rectal colon. For proximal adenomas, we conducted sensitivity analysis excluding participants who only had sigmoidoscopy but no colonoscopy. In addition, we stratified by subtypes of adenoma. To take into account that one person may have undergone multiple endoscopies between 1988 and 2008 and to handle time-varying exposure and covariates efficiently, Andersen-Gill data structure with a new record for each 2-year follow-up period during which a participant underwent an endoscopy was used. Exposure and covariates were set to their values at the time that the questionnaire was returned. Once a participant was diagnosed with adenoma, he was censored for all later follow-up cycles. Age and multivariate-adjusted logistic regressions for clustered data (PROC GENMOD) were used to account for repeated observations (i.e. multiple endoscopies) and calculate odds ratios (ORs) approximating relative risks. Test for trend was conducted using sitting time as a continuous variable. 
We controlled for the following potential confounders (cumulative updated when applicable): age in 5-year intervals; history of colorectal cancer in a first-degree relative (yes/no); personal history of diabetes (yes/no); height (meter in continuous); alcohol intake (g per day in categories: <5, 5–9.9, 10–14.9, 15–29.9, and 30+); smoking (pack-years in categories: never smoker, 1–4.9, 5–19.9, 20–39.9, and 40+); regular aspirin use (yes/no); total calorie (kcal per day in quintiles); dietary variables including energy-adjusted total folate (μg per day in quintiles); calcium intake (mg per day in quintiles); red and processed meat intake (servings per day in quintiles); time period of endoscopy (in 2-year interval to capture possible changes in adenoma detection rates); number of endoscopies (continuous); time in years since the most recent endoscopy (continuous); and reason for the current endoscopy (screening/symptoms/other). To fully assess whether observed associations may be explained by unhealthy diet, we adjusted for DASH (Dietary Approaches to Stop Hypertension Diet) score, instead of individual food/nutrient intake (total folate, calcium, and red and processed meat intake). DASH score is a composite score that features high intakes of fruit, vegetables, legumes, and nuts; moderate amounts of low-fat dairy products; and low amounts of animal protein and sweets (Fung et al, 2008). Adherence to the DASH diet is associated with reduced risk of colorectal cancer (Fung et al, 2010). We additionally adjusted for leisure-time physical activity (MET-h per week in quintiles) to assess whether the influence of sedentary behaviours were independent of physical activity. We adjusted for body mass index (BMI, kg m−2 in quintiles) to assess whether it mediates the association of interest. As BMI is not an accurate indicator of overweight and obesity in the elderly (Wannamethee et al, 2007), we conducted sensitivity analysis adjusting for waist circumference, a measure of abdominal obesity and a potential independent risk factor for colorectal cancer (Wang et al, 2008), among a subsample of men who reported their waist information in 1987 and 1996. We examined if the association between TV watching and risk of adenoma differed by age, family history of colorectal cancer (yes/no), BMI (<25, 25–29.9, and 30+ kg m−2) and employment status (yes/no). Because employment likely influences the potential amount of time that can be spent sitting while watching TV, it could modify any association with adenoma. We also assessed the joint association of TV watching and physical activity by cross-classifying the two variables. We evaluated interaction by entering a product term of continuous TV watching and the above variables, and the P-value for interaction was determined by a Wald test. To compare the influence of recent and long-term sedentary behaviour (the primary analysis) on risk of adenoma, we conducted sensitivity analysis using only the most recent TV viewing information before each endoscopy. Because adenoma cases in our primary analysis included both prevalent adenoma diagnosed at the first endoscopy and incident adenoma diagnosed after previous negative endoscopies, as exploratory analysis, we restricted to the first endoscopies to assess whether prolonged sitting was more associated with progression of adenoma. All the analyses were performed using SAS v. 9.3 (SAS Institute, Cary, NC, USA), and the statistical tests were two-sided and P-values <0.05 was considered statistically significant. 
Results: During 20 years of follow-up, we documented 4280 newly diagnosed adenomas among 31 065 men who had at least one endoscopy between 1988 and 2008 and reported information on TV watching (Supplementary Table 1). We calculated the distribution of potential risk factors for adenomas according to categories of sitting while watching TV in 1998 (Table 1). Men who spent more time sitting while watching TV were older, had slightly higher BMI, and were more likely to have history of diabetes. These men also spent more time sitting at home reading, eating, or at desk, had higher intake of total energy, and red and processed meat. They were less likely to be employed and had lower intake of folate and calcium. Although men who spent more than 21 h sitting while watching TV per week engaged in less leisure-time physical activity compared with men who watched 0–6 h of TV per week, physical activity levels were similar across the other categories of TV watching. More time spent sitting while watching TV was significantly associated with increased risk of colorectal adenoma in multivariate analysis (Table 2). When we adjusted for DASH score instead of folate, calcium, and red and processed meat, the ORs were essentially the same (data not shown). The results were similar after adjusting for physical activity as well as after further adjustment for BMI, suggesting that leisure-time physical activity did not confound and obesity minimally mediated the observed association. The ORs (95% CIs) of adenoma across categories of TV watching (0–6, 7–13,14–20, and 21+ h per week) were 1.00 (referent), 1.09 (1.01–1.17), 1.16 (1.06–1.27), 1.10 (0.97–1.25), respectively (Ptrend=0.001). Each 14-h increment of TV watching per week was associated with 11% increased risk of adenoma (OR=1.11; 95% CI: 1.04–1.18). Sensitivity analysis controlling for waist circumference instead of BMI showed similar results (data not shown). Other sitting at home (reading, eating, or at desk) (OR per 14-h per week increment=1.08; 95% CI: 1.02–1.16) but not sitting at work/driving was also associated with increased risk of adenoma. For the remaining analyses, we focused on sitting while watching TV and presented results for multivariate analysis adjusting for both physical activity and BMI. Prolonged sitting while watching TV was associated more strongly with proximal and rectal adenoma compared with distal adenoma (Table 3). Excluding men who did not have colonoscopies minimally affected the risk estimates for proximal adenoma (Supplementary Table 2). Men who spent more time watching TV were slightly more likely to have high-risk compared with low-risk adenoma, which was primarily driven by ⩾3 adenomas (Table 4). The positive association between TV watching and risk of adenoma was slightly stronger for men aged 65 years and above compared with men younger than 65 years, men without compared with men with family history of colorectal cancer, among those who were employed (full/part time) compared with retired/unemployed/disabled, and among normal and overweight men than for obese men, although interactions were not significant (Supplementary Table 3). Leisure-physical activity was associated with lower risk of adenoma (ORhighest vs lowest quintile: 0.87; 95% CI: 0.78–0.97) independent of TV watching. In joint analysis, we observed minimal interaction of TV watching and physical activity levels with risk of adenoma (Pinteraction=0.98) (Table 5). 
Compared with men who were least sedentary (0–6 h per week) and physically most active (highest quintile), those who were most sedentary (14+ h per week of TV watching) and least active (lowest quintile) had a significantly increased risk of adenoma (OR=1.25; 95% CI: 1.05–1.49). The joint association was more pronounced for high-risk (OR=1.29; 95% CI: 1.00–1.66) than for low-risk adenoma (OR=1.06; 95% CI: 0.80–1.40). The significant positive association between TV watching and risk of adenoma persisted when only the most recent information before each endoscopy was used instead of the cumulative average (Supplementary Table 4). When we restricted to the first reported endoscopy for each man, the association remained similar, suggesting that a sedentary lifestyle may be involved in both the progression and initiation of colorectal adenoma (Supplementary Table 4). Discussion: In this prospective analysis nested in a large cohort of men, a sedentary lifestyle, primarily more time spent sitting while watching TV, was significantly associated with an increased risk of colorectal adenoma. This association appeared independent of levels of leisure-time physical activity and was minimally mediated by BMI. The association was more pronounced for rectal and proximal than for distal adenoma, and slightly stronger for high-risk than for low-risk adenoma, driven by multiple adenomas. To the best of our knowledge, our study is the first to link sedentary behaviour to a precursor of cancer. Our findings indicate the potential importance of reducing time spent sedentary, in particular TV viewing time, in addition to promoting physical activity, in the prevention of colorectal neoplasia. In particular, being less sedentary and more active appears primarily to reduce the risk of high-risk adenoma, which is more likely to progress to colorectal cancer and serves as the major target in screening endoscopies (Winawer and Zauber, 2002; Regula et al, 2006). The weaker association we observed for other sitting at home (reading, eating, or at a desk), and the null association for sitting at work/driving, with risk of colorectal adenoma were consistent with a recent meta-analysis reporting that, for colon cancer, the RR was higher for TV watching (RR=1.54) than for time spent sitting at work (RR=1.24), comparing the highest vs lowest levels of sedentary time (Schmid and Leitzmann, 2014). The reasons for these differences across sedentary pursuits are unclear but may be related to measurement errors as well as the additional link between TV and unhealthy diet (Hu et al, 2003; Owen et al, 2010). There are several plausible mechanisms through which sedentary behaviour may increase the risk of colorectal neoplasia. Obesity may represent an intermediate step in the causal pathway (Lynch, 2010). However, in our multivariate analysis, the odds ratio estimates were hardly changed after adjustment for BMI or waist circumference. Another hypothesis is that prolonged TV watching is associated with increased consumption of unhealthy food (Hu et al, 2001, 2003); however, in our study, the significant positive association persisted after adjusting for individual food/nutrient intakes (red and processed meat, folate, and calcium) or the composite DASH score. Other mechanisms have been postulated. Hyperinsulinaemia and possibly hyperglycaemia may promote colon carcinogenesis (Giovannucci, 1995, 2007). 
A meta-analysis of 10 cross-sectional studies showed that, comparing the highest level of sedentary behaviour with the lowest, greater time spent sedentary increased the risk of metabolic syndrome by 73% (Edwardson et al, 2012). A prospective analysis among 376 middle-aged adults suggested that baseline sedentary behaviour (defined by heart rate observations below an individually predetermined threshold) was independently associated with higher log fasting plasma insulin at follow-up (Helmerhorst et al, 2009). In cross-sectional studies, sedentary behaviour was also positively associated with insulin (Gustat et al, 2002), insulin resistance (Balkau et al, 2008; Schmidt et al, 2008), and 2-h glucose (Healy et al, 2007). Inflammation (Fung et al, 2000) and loss of muscle contractile activity, which leads to suppressed lipoprotein lipase activity and glucose uptake (Hamilton et al, 2007), may also be involved in the link between sitting time and adenoma/cancer. Our observation that prolonged sitting while watching TV was associated more strongly with proximal and rectal adenoma than with distal adenoma requires confirmation by other studies. Indeed, the literature concerning physical activity and colorectal cancer by anatomical subsite is inconsistent (Robsahm et al, 2013), and limited evidence suggests that sedentary behaviour may also differentially affect colon carcinogenesis by subsite (Boyle et al, 2011). Of note, identification of potential differential associations between an exposure and risk of colorectal adenoma/cancer by anatomic subsite could provide more insight into colorectal carcinogenic mechanisms. For example, cigarette smoking is associated with higher risk of CIMP (CpG island methylator phenotype)-high (Samowitz et al, 2006; Curtin et al, 2009; Limsui et al, 2010; Nishihara et al, 2013), MSI (microsatellite instability)-high (Slattery et al, 2000; Curtin et al, 2009; Poynter et al, 2009; Limsui et al, 2010; Nishihara et al, 2013), and BRAF (v-raf murine sarcoma viral oncogene homolog B1)-mutated (Samowitz et al, 2006; Curtin et al, 2009; Limsui et al, 2010; Rozek et al, 2010; Nishihara et al, 2013) colorectal cancers, which occur more frequently in the proximal colon. In particular, if our observed stronger association between sitting while watching TV and proximal adenoma is true, decreasing sedentary behaviour may be particularly beneficial for the prevention of proximal adenomas/cancers, which are less detectable and more likely to be missed even in colonoscopies (Lieberman et al, 2012). Strengths of our study include the ability to capture long-term sedentary behaviour, to minimise measurement errors by asking explicitly about time spent sitting while watching TV (excluding TV viewing coupled with other non-sedentary activities, e.g. cooking or on treadmills), and to control for a variety of potential confounders and mediators. In addition, although the average time spent sitting watching TV (10 h per week) was lower than the national estimate (34 h per week in adults aged 50–64 years) (Nielsen, 2011), the wide distribution of TV viewing time in our study (an average of 4 h per week among the least and 26 h per week among the most sedentary men) allowed us to assess the potential health benefit of a less sedentary lifestyle. Our study also had several limitations. 
First, we did not assess time spent standing at home/work, which similarly requires low energy expenditure but is physically different from sitting in that it involves isometric contraction of the antigravity (postural) muscles (Hamilton et al, 2008; Owen et al, 2010). Whether standing could ameliorate the increased risk of colorectal adenoma and other chronic outcomes associated with too much sitting requires more investigation. In addition, time spent sitting at a computer, an increasingly prevalent sedentary behaviour in modern society, was not assessed in our cohort. Second, we only included participants who reported having undergone an endoscopy. However, as adenomas are largely asymptomatic, misclassification of the outcome is likely non-differential, that is, not related to sedentary behaviours, and thus the influence of any such bias is likely to be small. Moreover, self-reports of endoscopies were reliable in our cohort: a previous review of the medical records obtained from a random sample of 200 patients who reported a negative endoscopic result confirmed the absence of adenomas in all cases. In addition, measurement errors associated with recall of sedentary behaviours as well as of potential confounders from the biennial questionnaires were likely; however, they would be non-differential with respect to adenoma diagnosis. The possibility of residual confounding, especially from physical activity, could not be ruled out, even though our leisure-time physical activity questions have been previously validated and the occupational physical activities engaged in by these health professionals were limited. Finally, the generalisability of our data to other populations, particularly women and other racial or ethnic groups, may be limited. Conclusion: In conclusion, prolonged TV watching is associated with a modestly increased risk of colorectal adenoma, particularly proximal, rectal, and high-risk adenoma, independent of leisure-time physical activity and not mediated by obesity. A sedentary lifestyle may be relevant in colorectal carcinogenesis. More research is warranted to confirm our findings, and studies of time spent sitting while watching TV, as well as other sedentary behaviours, in relation to the risk of colorectal cancer and survival among colorectal cancer patients are needed.
Background: Prolonged TV watching, a major sedentary behaviour, is associated with increased risk of obesity and diabetes and may be involved in colorectal carcinogenesis. Methods: We conducted a cross-sectional analysis among 31 065 men with ⩾1 endoscopy in the Health Professionals Follow-up Study (1988-2008) to evaluate sitting while watching TV and its joint influence with leisure-time physical activity on risk of colorectal adenoma. Logistic regression was used to calculate odds ratios (ORs) and 95% confidence intervals (CIs). Results: Prolonged sitting while watching TV was significantly associated with increased risk of colorectal adenoma (n=4280), and adjusting for physical activity or a potential mediator, body mass index, did not change the estimates. The ORs (95% CIs) across categories of TV watching (0-6, 7-13, 14-20, and 21+ h per week) were 1.00 (referent), 1.09 (1.01-1.17), 1.16 (1.06-1.27), and 1.10 (0.97-1.25) (OR per 14-h per week increment=1.11; 95% CI: 1.04-1.18; Ptrend=0.001). Compared with the least sedentary (0-6 h per week of TV) and most physically active (highest quintile) men, the most sedentary (14+ h per week) and least active (lowest quintile) men had a significantly increased risk of adenoma (OR=1.25; 95% CI: 1.05-1.49), particularly for high-risk adenoma. Conclusions: Prolonged TV viewing is associated with a modestly increased risk of colorectal adenoma, independent of leisure-time physical activity and minimally mediated by obesity.
null
null
7,499
319
[ 152, 344, 161, 87, 949 ]
9
[ "adenoma", "tv", "watching", "time", "risk", "sitting", "analysis", "tv watching", "colorectal", "sedentary" ]
[ "colonoscopy", "men colonoscopies minimally", "sigmoidoscopy colonoscopy addition", "excluding men colonoscopies", "sigmoidoscopy colonoscopy 1988" ]
null
null
null
null
null
[CONTENT] colorectal adenoma | sedentary behaviour | television watching [SUMMARY]
[CONTENT] colorectal adenoma | sedentary behaviour | television watching [SUMMARY]
[CONTENT] colorectal adenoma | sedentary behaviour | television watching [SUMMARY]
null
null
null
[CONTENT] Adenoma | Adult | Aged | Body Mass Index | Colorectal Neoplasms | Cross-Sectional Studies | Humans | Leisure Activities | Logistic Models | Male | Middle Aged | Obesity | Risk Factors | Sedentary Behavior | Television [SUMMARY]
[CONTENT] Adenoma | Adult | Aged | Body Mass Index | Colorectal Neoplasms | Cross-Sectional Studies | Humans | Leisure Activities | Logistic Models | Male | Middle Aged | Obesity | Risk Factors | Sedentary Behavior | Television [SUMMARY]
[CONTENT] Adenoma | Adult | Aged | Body Mass Index | Colorectal Neoplasms | Cross-Sectional Studies | Humans | Leisure Activities | Logistic Models | Male | Middle Aged | Obesity | Risk Factors | Sedentary Behavior | Television [SUMMARY]
null
null
null
[CONTENT] colonoscopy | men colonoscopies minimally | sigmoidoscopy colonoscopy addition | excluding men colonoscopies | sigmoidoscopy colonoscopy 1988 [SUMMARY]
[CONTENT] colonoscopy | men colonoscopies minimally | sigmoidoscopy colonoscopy addition | excluding men colonoscopies | sigmoidoscopy colonoscopy 1988 [SUMMARY]
[CONTENT] colonoscopy | men colonoscopies minimally | sigmoidoscopy colonoscopy addition | excluding men colonoscopies | sigmoidoscopy colonoscopy 1988 [SUMMARY]
null
null
null
[CONTENT] adenoma | tv | watching | time | risk | sitting | analysis | tv watching | colorectal | sedentary [SUMMARY]
[CONTENT] adenoma | tv | watching | time | risk | sitting | analysis | tv watching | colorectal | sedentary [SUMMARY]
[CONTENT] adenoma | tv | watching | time | risk | sitting | analysis | tv watching | colorectal | sedentary [SUMMARY]
null
null
null
[CONTENT] table | tv | watching | adenoma | risk | men | 95 | 95 ci | ci | compared [SUMMARY]
[CONTENT] colorectal | risk | risk colorectal | colorectal cancer | tv | sedentary | cancer | watching | adenoma | activity mediated [SUMMARY]
[CONTENT] adenoma | tv | watching | risk | sitting | time | colorectal | tv watching | adenomas | sedentary [SUMMARY]
null
null
null
[CONTENT] adenoma (n=4280 ||| 95% | 0-6 | 7 | 14-20 | 21+ h per week | 1.00 | 1.09 | 1.01 | 1.16 | 1.06-1.27 | 1.10 | 0.97-1.25 | 14 | 95% | CI | 1.04-1.18 ||| 0-6 | 14+ h per week | 95% | CI | 1.05-1.49 [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| 31 | 065 | the Health Professionals Follow-up Study | 1988-2008 | adenoma ||| 95% ||| ||| adenoma (n=4280 ||| 95% | 0-6 | 7 | 14-20 | 21+ h per week | 1.00 | 1.09 | 1.01 | 1.16 | 1.06-1.27 | 1.10 | 0.97-1.25 | 14 | 95% | CI | 1.04-1.18 ||| 0-6 | 14+ h per week | 95% | CI | 1.05-1.49 ||| [SUMMARY]
null
The social network index and its relation to later-life depression among the elderly aged ≥80 years in Northern Thailand.
27540286
Having a diverse social network is considered beneficial to a person's well-being. The significance of social network diversity in the geriatric assessment of people aged ≥80 years, however, has not been adequately investigated within the Southeast Asian context. This study explored the social networks of the elderly aged ≥80 years and assessed the relation between social networks and geriatric depression.
BACKGROUND
This study was a community-based cross-sectional survey conducted in Chiang Mai Province, Northern Thailand. A representative sample of 435 community residents aged ≥80 years was selected through multistage sampling. The participants' social network diversity was assessed by applying Cohen's social network index (SNI). The geriatric depression scale and activities of daily living measures were administered during home visits. Descriptive analyses revealed the distribution of the SNI, while the relationship between the SNI and the geriatric depression scale was examined by ordinal logistic regression models controlling for possible covariates such as age, sex, and educational attainment.
METHODS
The median age of the sample was 83 years, with females comprising 54.94% of the sample. The participants' children, their neighbors, and members of Buddhist temples were reported as the participants' most frequent contacts. Among the 435 participants, 25% were at risk of social isolation due to having a "limited" social network (SNI 0-3), whereas 37% had a "medium" social network (SNI 4-5), and 38% had a "diverse" social network (SNI ≥6). The SNI did not differ between the two sexes. Activities of daily living scores in the diverse social network group were significantly higher than those in the limited social network group. Multivariate ordinal logistic regression models revealed a significant negative association between social network diversity and geriatric depression.
RESULTS
Regular and frequent contact with various social ties may protect against common geriatric depression among persons aged ≥80 years. Accordingly, screening for those at risk of social isolation is recommended to be integrated into routine primary health care-based geriatric assessment and intervention programs.
CONCLUSION
[ "Activities of Daily Living", "Aged, 80 and over", "Cross-Sectional Studies", "Depression", "Family", "Female", "Friends", "Geriatric Assessment", "Humans", "Interpersonal Relations", "Logistic Models", "Male", "Quality of Life", "Social Support", "Thailand" ]
4982492
Introduction
Social ties are important for a person's physical and mental well-being.1 Since the late 1900s, research has been carried out to investigate the impact of social ties on health and lifespan.2,3 Cohen et al2 reported the preventive effect of social network diversity on illnesses such as the common cold. Recent epidemiological studies have shown that socially active people are less likely to develop noncommunicable diseases such as diabetes and metabolic syndrome and are likely to live longer than their nonactive counterparts.4–6 Therefore, being connected in social networks may allow us to lead a healthy life through their salutary and supportive social impacts. The gift of a socially active life is not experienced by all elderly people, whose contact lists fade away with years of survival. Nowadays, owing to the trend of population aging, global societies are seeing more and more people living into their 80s and 90s.7 For a significant minority of these people, however, a common negative aspect of old age is social isolation.8,9 A recent national survey in Malaysia, a neighboring country to the study site, reported that 48.6% of old persons were at risk of social isolation and that the oldest old persons were at higher risk.10,11 Research in Korea, Singapore, and the People's Republic of China has also investigated the social connectedness of elderly people in each society. Although different instruments were applied in those studies, they similarly reported that people in the oldest age bracket are liable to social isolation.11–15 Research exploring the nature and diversity of elderly people's social contacts is therefore a necessity, particularly given the unique social and cultural contexts within different countries. In 2010, Thailand was home to 8.5 million elderly people aged ≥60 years and now stands as the fastest population-aging country in Southeast Asia.16,17 In 2006, people aged ≥60 years comprised 11.3% of the Thai population, a figure estimated to reach 29.8% by 2050.18,19 People aged ≥80 years, meanwhile, constituted 11.5% of the elderly population,17 a figure similarly estimated to increase to 23.6% by 2050.18 Although the impact of social networks has been investigated in a wide range of contexts globally, evidence on the types and diversity of social contacts in the oldest age group of Thai society is still scant. An increasingly common problem affecting the elderly is depression, which may affect as many as 20% of those in the oldest age bracket.20 However, psychogeriatric research, investigating, for example, the prevalence of depression among the elderly, is still limited in a country like Thailand, where psychogeriatric services are not yet in place. An inverse relation between social network and geriatric depression was reported in a recent study in the US.21 Similar studies, however, are still needed in Thailand and the Southeast Asian context to find out how social networks influence geriatric depression, in particular among people aged ≥80 years. The current study, therefore, measured the social network index (SNI) using a quantitative approach, aiming to reveal the types of social ties experienced by Thais in the oldest age bracket and to assess how their SNI relates to geriatric depression. The findings of the current study seek to highlight the importance of social networks in active aging and may provide insights into ways to preserve the social network diversity of the elderly, resulting in improved physical and mental well-being.
Statistical analysis
The distribution and types of social ties were analyzed through descriptive analysis. Differences in geriatric parameters such as ADL, GDS, disability, and memory loss among the three groups of social network diversity were analyzed by applying the Kruskal–Wallis and chi-squared tests. Categorization of data such as ADL and GDS followed existing standard references.16,29 To examine the hypothesis that an increase in SNI would significantly decrease the rank of GDS, ordinal logistic regression analysis was applied, with GDS treated as the ordinal dependent variable. Three models of ordinal logistic regression were applied to test the association between the dependent and independent variables, controlling for various covariates in each model. A P-value <0.05 was considered statistically significant, with 95% confidence intervals. Stata version 11 (Stata Corporation, College Station, TX) was used for data analysis.
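For readers unfamiliar with ordinal logistic regression, a minimal Python sketch of the kind of model described above is given below, using statsmodels' OrderedModel in place of Stata. The simulated data and column names (sni, gds_rank, etc.) are illustrative assumptions, not the survey data.

```python
# Minimal sketch of a proportional-odds (ordinal logistic) model;
# simulated data, hypothetical column names.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 435
df = pd.DataFrame({
    "sni": rng.integers(0, 13, n),              # social network index, 0-12
    "age": rng.integers(80, 100, n),
    "female": rng.integers(0, 2, n),
    "education_years": rng.integers(0, 13, n),
})
# Ordered outcome: 0 = no depression ... 3 = severe (GDS categories).
df["gds_rank"] = pd.Categorical(rng.integers(0, 4, n), ordered=True)

model = OrderedModel(
    df["gds_rank"],
    df[["sni", "age", "female", "education_years"]],
    distr="logit",
)
res = model.fit(method="bfgs", disp=0)
print(res.params["sni"])           # negative beta: higher SNI, lower GDS rank
print(np.exp(res.params["sni"]))   # proportional-odds OR per SNI point
```

A negative coefficient on sni in a fit like this is what the study reports as a significant negative association between social network diversity and geriatric depression.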
Results
The median age of the sample was 83 years. More than half (54.94%) of the sample of 435 persons were female. The average SNI, in terms of social network diversity, was 4.9±1.9 (mean ± standard deviation [SD]). The median number of people in the contact networks was 15, with a range of 5–21. Categorizing the participants' SNI, 25% fell into the "limited" social network group (SNI 0–3), 37% into the "medium" network group (SNI 4–5), and 38% into the "diverse" social network group (SNI ≥6). Table 2 shows the characteristics of the participants according to the different categories of SNI. The SNI did not differ between males and females. The participants' most frequent regular contacts were with their children or their neighbors, with religious society members as their third most active social tie (Figure 1). Contact with close friends and close relatives, meanwhile, represented active social ties for less than half of the participants. Only a third of the participants reported their spouse as an active social contact: 52% of the males reported this as an active social contact, compared with only 29% of the females (P<0.001, chi-squared). The ADL scores differed significantly among the three SNI groups (Table 2). There was a significant difference in age across the three SNI groups. The proportion of females was uniform across groups and exceeded that of males in all groups. The elderly in the "diverse" social network group (SNI ≥6) had the highest ADL scores, and the GDS score of this group was the lowest. Approximately 10.34% of the participants were suffering from depression: 80% from "mild" depression, 16% from "moderate" depression, and 4% from "severe" depression (Table 2). The proportions of people with a long-term disability and with long-term memory loss were significantly higher in the limited social network group than in the medium and diverse social network groups. The level of GDS declined with increasing SNI (Table 2). SNI was significantly associated with GDS in all three ordinal logistic regression models (Table 3). Covariates in model 1 were SNI, age, sex, and educational attainment; in model 2, SNI, age, sex, educational attainment, self-impression of health, and dependency; and in model 3, SNI, age, sex, educational attainment, self-impression of health, dependency, and short-term and long-term memory loss (Table 3). The higher the SNI score (reflecting a more diverse social network), the lower the GDS score (reflecting lower geriatric depression). Moreover, good self-rated health status indicated a likely lower level of geriatric depression (β=−1.03, P<0.001), whereas participants with long-term or short-term memory loss were more likely to suffer from geriatric depression (Table 3).
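The across-group comparisons reported in this section (Kruskal–Wallis for scores such as ADL, chi-squared for proportions such as spouse contact by sex) can be reproduced in form with scipy; the arrays below are made-up illustrations roughly consistent with the reported group sizes and percentages, not the study data.

```python
# Minimal sketch of the group-comparison tests; illustrative numbers only.
import numpy as np
from scipy.stats import kruskal, chi2_contingency

rng = np.random.default_rng(2)
# ADL scores in the limited / medium / diverse SNI groups (~25/37/38%).
adl_limited = rng.integers(8, 21, 109)
adl_medium = rng.integers(10, 21, 161)
adl_diverse = rng.integers(12, 21, 165)
stat, p = kruskal(adl_limited, adl_medium, adl_diverse)
print(f"Kruskal-Wallis H={stat:.2f}, P={p:.4f}")

# 2x2 table: spouse as an active contact (rows) by sex (columns),
# roughly matching the reported 52% of males vs 29% of females.
table = np.array([[102, 69],     # spouse active: males, females
                  [ 94, 170]])   # spouse not active: males, females
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-squared={chi2:.2f}, dof={dof}, P={p:.4f}")
```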
Conclusion
In the setting of the current study, children, neighbors, and Buddhist temple members were reported as the most likely sources of social network contact for those in the oldest bracket of the elderly. There is still ample opportunity, however, for this group to strengthen their social ties by engaging in volunteer activity and clubs for the elderly. Moreover, the diversity of their social networks may serve to prevent the problem of depression. Therefore, future research should devise community-based interventions to promote the social networks of those in the oldest age bracket, which may lead the elderly to enjoy active aging and living, in turn positively impacting their mental health.
[ "Design and methods", "Ethical approval", "Data collection procedures", "Research instrument and measurement", "Independent and dependent variables", "Controlled variables", "Conclusion" ]
[ " Ethical approval This study was approved by Chiang Mai Provincial Health Office, Ethics Review Committee (EC-CMPHO 2/56). The participants were interviewed and written informed consent was obtained.\nThis study was approved by Chiang Mai Provincial Health Office, Ethics Review Committee (EC-CMPHO 2/56). The participants were interviewed and written informed consent was obtained.\n Data collection procedures A community-based, cross-sectional study was conducted in Chiang Mai Province, Northern Thailand. The study sample comprised of 435 people aged ≥80 years at the time of the survey. All were community residents. The sample size was calculated to ensure the power of the finding with a 95% confidence interval and to represent the population of 10,461 people aged ≥80 years living in the Chiang Mai Province. A multistage sampling approach was applied, taking into account the geographical area, population size, and whether the districts were rural or urban.\nMultistage sampling was designed to obtain a representative sample in a stratified proportionate approach. There are totally 25 districts in Chiang Mai Province. A large district, a medium district, and a small district were selected in the first stage of sampling through stratified random sampling approach applying population size to form stratum (Table 1).\nFrom each selected district, one subdistrict was selected by applying simple random sampling. From each subdistrict, participants were sampled in consecutive villages starting from village number one, until a defined number of participants were recruited.\nThe district Muang Chiang Mai is the main city of Chiang Mai province. Comprising of 16 subdistricts, its population size is comparable to the population of a southern part, northern part, and middle part of the province (Table 1). Therefore, three subdistricts were selected randomly from the 16 subdistricts of Muang Chiang Mai (Table 1). Finally, the sample represented Chiang Mai Province by selecting a total of ten districts. Researchers traveled to villages in twelve subdistricts to reach 435 participants and collected data through interviews (Table 1).\nThe interviewer administered the questionnaires, and measurement was carried out during home visits in ten districts of Chiang Mai Province. Public health officers who were well trained to interview elderly people conducted data collection from May to July 2014. The response rate was 97.7% among those who were interviewed.\nA community-based, cross-sectional study was conducted in Chiang Mai Province, Northern Thailand. The study sample comprised of 435 people aged ≥80 years at the time of the survey. All were community residents. The sample size was calculated to ensure the power of the finding with a 95% confidence interval and to represent the population of 10,461 people aged ≥80 years living in the Chiang Mai Province. A multistage sampling approach was applied, taking into account the geographical area, population size, and whether the districts were rural or urban.\nMultistage sampling was designed to obtain a representative sample in a stratified proportionate approach. There are totally 25 districts in Chiang Mai Province. A large district, a medium district, and a small district were selected in the first stage of sampling through stratified random sampling approach applying population size to form stratum (Table 1).\nFrom each selected district, one subdistrict was selected by applying simple random sampling. 
From each subdistrict, participants were sampled in consecutive villages starting from village number one, until a defined number of participants were recruited.\nThe district Muang Chiang Mai is the main city of Chiang Mai province. Comprising of 16 subdistricts, its population size is comparable to the population of a southern part, northern part, and middle part of the province (Table 1). Therefore, three subdistricts were selected randomly from the 16 subdistricts of Muang Chiang Mai (Table 1). Finally, the sample represented Chiang Mai Province by selecting a total of ten districts. Researchers traveled to villages in twelve subdistricts to reach 435 participants and collected data through interviews (Table 1).\nThe interviewer administered the questionnaires, and measurement was carried out during home visits in ten districts of Chiang Mai Province. Public health officers who were well trained to interview elderly people conducted data collection from May to July 2014. The response rate was 97.7% among those who were interviewed.\n Research instrument and measurement Independent and dependent variables The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer, neighbors, volunteer networks; and others organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. Hereafter, such regular contact will be referred as “active social contact” in the “Results” and “Discussion” sections.\nThe maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 as a “medium” social network, and SNI ≥6 as a “diverse” social network. SNI is freely accessible in Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use and tested in a pilot study prior to actual data collection.\nTo measure depression a 30-item long form geriatric depression scale (GDS) was applied, which had been validated and adopted for the Thai population.23,24 Reliability of GDS scale was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha rating GDS in the current study population was 0.8.\nThe SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer, neighbors, volunteer networks; and others organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. Hereafter, such regular contact will be referred as “active social contact” in the “Results” and “Discussion” sections.\nThe maximum SNI score is 12. 
Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 as a “medium” social network, and SNI ≥6 as a “diverse” social network. SNI is freely accessible in Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use and tested in a pilot study prior to actual data collection.\nTo measure depression a 30-item long form geriatric depression scale (GDS) was applied, which had been validated and adopted for the Thai population.23,24 Reliability of GDS scale was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha rating GDS in the current study population was 0.8.\n Controlled variables Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into dependent group (ADL <14) and independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses including “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory assessment applied a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory was assessed by asking participants “Do you remember what you ate for breakfast this morning?”26\nDisability assessment applied long-term disability and short-term disability assessment.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?”\nBarthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into dependent group (ADL <14) and independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses including “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory assessment applied a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory was assessed by asking participants “Do you remember what you ate for breakfast this morning?”26\nDisability assessment applied long-term disability and short-term disability assessment.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. 
Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?”\n Independent and dependent variables The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer, neighbors, volunteer networks; and others organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. Hereafter, such regular contact will be referred as “active social contact” in the “Results” and “Discussion” sections.\nThe maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 as a “medium” social network, and SNI ≥6 as a “diverse” social network. SNI is freely accessible in Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use and tested in a pilot study prior to actual data collection.\nTo measure depression a 30-item long form geriatric depression scale (GDS) was applied, which had been validated and adopted for the Thai population.23,24 Reliability of GDS scale was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha rating GDS in the current study population was 0.8.\nThe SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer, neighbors, volunteer networks; and others organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. Hereafter, such regular contact will be referred as “active social contact” in the “Results” and “Discussion” sections.\nThe maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 as a “medium” social network, and SNI ≥6 as a “diverse” social network. SNI is freely accessible in Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use and tested in a pilot study prior to actual data collection.\nTo measure depression a 30-item long form geriatric depression scale (GDS) was applied, which had been validated and adopted for the Thai population.23,24 Reliability of GDS scale was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. 
The Cronbach’s alpha rating GDS in the current study population was 0.8.\n Controlled variables Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into dependent group (ADL <14) and independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses including “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory assessment applied a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory was assessed by asking participants “Do you remember what you ate for breakfast this morning?”26\nDisability assessment applied long-term disability and short-term disability assessment.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?”\nBarthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into dependent group (ADL <14) and independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses including “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory assessment applied a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory was assessed by asking participants “Do you remember what you ate for breakfast this morning?”26\nDisability assessment applied long-term disability and short-term disability assessment.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?”\n Statistical analysis The distribution and types of social ties were analyzed through descriptive analysis. The difference between geriatric parameters such as ADL, GDS, disability, and memory loss among the three groups of social network diversity was analyzed applying the Kruskal–Wallis and chi-squared tests. Categorization of data such as ADL and GDS followed existing standard references.16,29\nTo examine the hypothesis whether increase in SNI would significantly decrease the rank of GDS, the ordinal logistic regression analysis was applied. The dependent variable GDS was regarded as the ordinal variable. 
Three models of ordinal logistic regression analyses were applied to test the association between the dependent and independent variables. Various covariates were controlled in three regression models. P-value <0.05 was considered as statistically significant with a 95% confidence interval. Stata version 11 (Stata Corporation, College Station, TX) was used for data analysis.\nThe distribution and types of social ties were analyzed through descriptive analysis. The difference between geriatric parameters such as ADL, GDS, disability, and memory loss among the three groups of social network diversity was analyzed applying the Kruskal–Wallis and chi-squared tests. Categorization of data such as ADL and GDS followed existing standard references.16,29\nTo examine the hypothesis whether increase in SNI would significantly decrease the rank of GDS, the ordinal logistic regression analysis was applied. The dependent variable GDS was regarded as the ordinal variable. Three models of ordinal logistic regression analyses were applied to test the association between the dependent and independent variables. Various covariates were controlled in three regression models. P-value <0.05 was considered as statistically significant with a 95% confidence interval. Stata version 11 (Stata Corporation, College Station, TX) was used for data analysis.", "This study was approved by Chiang Mai Provincial Health Office, Ethics Review Committee (EC-CMPHO 2/56). The participants were interviewed and written informed consent was obtained.", "A community-based, cross-sectional study was conducted in Chiang Mai Province, Northern Thailand. The study sample comprised of 435 people aged ≥80 years at the time of the survey. All were community residents. The sample size was calculated to ensure the power of the finding with a 95% confidence interval and to represent the population of 10,461 people aged ≥80 years living in the Chiang Mai Province. A multistage sampling approach was applied, taking into account the geographical area, population size, and whether the districts were rural or urban.\nMultistage sampling was designed to obtain a representative sample in a stratified proportionate approach. There are totally 25 districts in Chiang Mai Province. A large district, a medium district, and a small district were selected in the first stage of sampling through stratified random sampling approach applying population size to form stratum (Table 1).\nFrom each selected district, one subdistrict was selected by applying simple random sampling. From each subdistrict, participants were sampled in consecutive villages starting from village number one, until a defined number of participants were recruited.\nThe district Muang Chiang Mai is the main city of Chiang Mai province. Comprising of 16 subdistricts, its population size is comparable to the population of a southern part, northern part, and middle part of the province (Table 1). Therefore, three subdistricts were selected randomly from the 16 subdistricts of Muang Chiang Mai (Table 1). Finally, the sample represented Chiang Mai Province by selecting a total of ten districts. Researchers traveled to villages in twelve subdistricts to reach 435 participants and collected data through interviews (Table 1).\nThe interviewer administered the questionnaires, and measurement was carried out during home visits in ten districts of Chiang Mai Province. Public health officers who were well trained to interview elderly people conducted data collection from May to July 2014. 
The response rate was 97.7% among those who were interviewed.", " Independent and dependent variables The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer, neighbors, volunteer networks; and others organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. Hereafter, such regular contact will be referred as “active social contact” in the “Results” and “Discussion” sections.\nThe maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 as a “medium” social network, and SNI ≥6 as a “diverse” social network. SNI is freely accessible in Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use and tested in a pilot study prior to actual data collection.\nTo measure depression a 30-item long form geriatric depression scale (GDS) was applied, which had been validated and adopted for the Thai population.23,24 Reliability of GDS scale was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha rating GDS in the current study population was 0.8.\nThe SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer, neighbors, volunteer networks; and others organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. Hereafter, such regular contact will be referred as “active social contact” in the “Results” and “Discussion” sections.\nThe maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 as a “medium” social network, and SNI ≥6 as a “diverse” social network. SNI is freely accessible in Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use and tested in a pilot study prior to actual data collection.\nTo measure depression a 30-item long form geriatric depression scale (GDS) was applied, which had been validated and adopted for the Thai population.23,24 Reliability of GDS scale was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. 
The Cronbach’s alpha rating GDS in the current study population was 0.8.\n Controlled variables Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into dependent group (ADL <14) and independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses including “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory assessment applied a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory was assessed by asking participants “Do you remember what you ate for breakfast this morning?”26\nDisability assessment applied long-term disability and short-term disability assessment.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?”\nBarthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into dependent group (ADL <14) and independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses including “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory assessment applied a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory was assessed by asking participants “Do you remember what you ate for breakfast this morning?”26\nDisability assessment applied long-term disability and short-term disability assessment.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?”", "The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer, neighbors, volunteer networks; and others organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. 
Hereafter, such regular contact will be referred as “active social contact” in the “Results” and “Discussion” sections.\nThe maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 as a “medium” social network, and SNI ≥6 as a “diverse” social network. SNI is freely accessible in Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use and tested in a pilot study prior to actual data collection.\nTo measure depression a 30-item long form geriatric depression scale (GDS) was applied, which had been validated and adopted for the Thai population.23,24 Reliability of GDS scale was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha rating GDS in the current study population was 0.8.", "Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into dependent group (ADL <14) and independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses including “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory assessment applied a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory was assessed by asking participants “Do you remember what you ate for breakfast this morning?”26\nDisability assessment applied long-term disability and short-term disability assessment.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?”", "In the setting of the current study, children, neighbors, and the Buddhist temple members have been reported as the most likely resources of social network for those in the oldest bracket of the elderly. There is still ample opportunity, however, for this group to strengthen their social ties by engaging in volunteer activity and clubs for the elderly. Moreover, the diversity of their social networks may serve to prevent the problem of depression. Therefore, future research should devise community-based intervention to promote the social networks of those in the oldest age bracket, which may lead the elderly to enjoy active aging and living, in turn positively impacting upon their mental health." ]
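The SNI scoring described in the methods above reduces to counting social roles with at least fortnightly contact and binning the count. A minimal sketch, assuming the twelve roles and the 0–3/4–5/≥6 cut-points used in this study (the role keys are hypothetical labels, not the questionnaire's exact wording):

```python
# Minimal sketch of Cohen's SNI scoring as applied here; role names
# are illustrative keys only.
SNI_ROLES = [
    "spouse", "parents", "children_or_children_in_law", "close_relatives",
    "close_friends", "religious_members", "classmates", "adult_education",
    "coworkers", "neighbors", "volunteer_networks", "other_organizations",
]

def sni_score(active_contacts: dict) -> int:
    """Count roles with regular (at least fortnightly) contact; max 12."""
    return sum(1 for role in SNI_ROLES if active_contacts.get(role, False))

def sni_category(score: int) -> str:
    """Bin the score into the study's diversity categories."""
    if score <= 3:
        return "limited"
    if score <= 5:
        return "medium"
    return "diverse"

respondent = {"children_or_children_in_law": True, "neighbors": True,
              "religious_members": True, "close_friends": True}
score = sni_score(respondent)
print(score, sni_category(score))   # -> 4 medium
```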
[ null, null, "methods", null, null, null, null ]
[ "Introduction", "Design and methods", "Ethical approval", "Data collection procedures", "Research instrument and measurement", "Independent and dependent variables", "Controlled variables", "Statistical analysis", "Results", "Discussion", "Conclusion" ]
[ "Social ties are important for a persons’ physical and mental well-being.1 Since the late 1900s research has been carried out to investigate the impact of social ties on health and lifespan.2,3 Cohen et al2 reported the preventive effect of social network diversity on illnesses such as the common cold. Recent epidemiological studies have shown that socially active people are less likely to develop noncommunicable diseases such as diabetes and metabolic syndrome and are likely to live longer than their nonactive counterparts.4–6 Therefore, being connected in social networks may allow us to lead a healthy life by its salutary and supportive social impacts.\nThe gift of a socially active life is not experienced by all elderly people whose contact list fades away with years of survival. Nowadays, due to the trend of population aging, global societies are seeing more and more people living into their 80s and 90s.7 For a significant minority of these people, however, a common negative aspect of old age is that of social isolation.8,9 A recent national survey in Malaysia, a neighboring country to the study site, reported that 48.6% of old persons were at risk of social isolation and that the oldest old persons were at higher risk.10,11 Research in Korea, Singapore, and the People’s Republic of China also investigated social connectedness of elderly people in each society. Although different instruments were applied in those studies, they reported similarly that people in the bracket of oldest age are liable to social isolation.11–15\nResearch exploring the nature and diversity of elderly people’s social contacts is therefore a necessity, particularly given the unique social and cultural contexts within different countries. In 2010, Thailand was home to 8.5 million elderly people aged ≥60 years and now stands as the fastest population-aging country in Southeast Asia.16,17 In 2006, people aged ≥60 years comprised 11.3% of the Thai population, a figure which is estimated to reach 29.8% by 2050.18,19 People aged ≥80 years, meanwhile, constituted 11.5% of the elderly population,17 a figure which is similarly estimated to increase to 23.6% by 2050.18 Although the impact of social networks has been investigated in a wide range of contexts globally, the types and diversity of social contacts in the oldest age group of Thai society is still scant.\nAn increasingly common problem affecting the elderly is depression. This problem may affect as many as 20% of those in the oldest age bracket.20 However, psychogeriatric research, investigating, for example, the prevalence of depression among the elderly in a country like Thailand, where psychogeriatric services are not yet in place, is still limited. An inverse relation between social network and geriatric depression was reported in a recent study in the US.21 Similar studies, however, are still necessary for Thailand and the Southeast Asian context to find out how social networks influence geriatric depression, in particular among people aged ≥80 years.\nThe current study, therefore, measuring the social network index (SNI) using a quantitative approach aimed to reveal the types of social ties experienced by Thais in the oldest age bracket and assess how their SNI relates to geriatric depression. 
The findings of the current study seek to highlight the importance of social networks in active aging and may provide insights into ways to preserve the social network diversity of the elderly, resulting in their improved physical and mental well-being.", " Ethical approval This study was approved by Chiang Mai Provincial Health Office, Ethics Review Committee (EC-CMPHO 2/56). The participants were interviewed, and written informed consent was obtained.\n Data collection procedures A community-based, cross-sectional study was conducted in Chiang Mai Province, Northern Thailand. The study sample comprised 435 people aged ≥80 years at the time of the survey. All were community residents. The sample size was calculated to ensure the power of the findings at a 95% confidence level and to represent the population of 10,461 people aged ≥80 years living in Chiang Mai Province (see the sample-size sketch below). A multistage sampling approach was applied, taking into account the geographical area, population size, and whether the districts were rural or urban.\nMultistage sampling was designed to obtain a representative sample in a stratified proportionate approach. There are 25 districts in Chiang Mai Province in total. A large district, a medium district, and a small district were selected in the first stage of sampling through a stratified random sampling approach, using population size to form the strata (Table 1).\nFrom each selected district, one subdistrict was selected by simple random sampling. From each subdistrict, participants were sampled in consecutive villages, starting from village number one, until a defined number of participants were recruited.\nThe district Muang Chiang Mai is the main city of Chiang Mai Province. Comprising 16 subdistricts, its population size is comparable to the populations of the southern, northern, and middle parts of the province (Table 1). Therefore, three subdistricts were selected randomly from the 16 subdistricts of Muang Chiang Mai (Table 1). Finally, the sample represented Chiang Mai Province through a total of ten selected districts. Researchers traveled to villages in twelve subdistricts to reach 435 participants and collected data through interviews (Table 1).\nThe questionnaires were interviewer administered, and measurement was carried out during home visits in ten districts of Chiang Mai Province. Public health officers who were well trained to interview elderly people conducted data collection from May to July 2014. The response rate was 97.7% among those who were interviewed.\n Research instrument and measurement Independent and dependent variables The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse; parents; children and children-in-law; close relatives; close friends; religious members (such as church or temple); classmates, teachers, and students in adult education; coworkers or colleagues (employee or employer); neighbors; volunteer networks; and other organizations (social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly). Hereafter, such regular contact will be referred to as “active social contact” in the “Results” and “Discussion” sections.\nThe maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 a “medium” social network, and SNI ≥6 a “diverse” social network (see the scoring sketch below). The SNI is freely accessible in the Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use, and it was tested in a pilot study prior to actual data collection.\nTo measure depression, a 30-item long-form geriatric depression scale (GDS) was applied, which had been validated and adapted for the Thai population.23,24 The reliability of the GDS was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha of the GDS in the current study population was 0.8 (see the reliability sketch below).\n Controlled variables Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into a dependent group (ADL <14) and an independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with the responses “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory was assessed with single-item tools: long-term memory by asking “Can you remember the events that happened years ago?” and short-term memory by asking “Do you remember what you ate for breakfast this morning?”26\nDisability assessment covered long-term and short-term disability.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two analogous questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?”\n Statistical analysis The distribution and types of social ties were analyzed through descriptive analysis. The differences in geriatric parameters such as ADL, GDS, disability, and memory loss among the three groups of social network diversity were analyzed using the Kruskal–Wallis and chi-squared tests. Categorization of data such as ADL and GDS followed existing standard references.16,29\nTo test the hypothesis that an increase in SNI significantly decreases the GDS rank, ordinal logistic regression analysis was applied, with GDS treated as an ordinal dependent variable. Three ordinal logistic regression models were fitted to test the association between the dependent and independent variables, controlling various covariates (see the model-fitting sketch below). A P-value <0.05 was considered statistically significant, with 95% confidence intervals. Stata version 11 (Stata Corporation, College Station, TX) was used for data analysis.", "This study was approved by Chiang Mai Provincial Health Office, Ethics Review Committee (EC-CMPHO 2/56). The participants were interviewed, and written informed consent was obtained.", "A community-based, cross-sectional study was conducted in Chiang Mai Province, Northern Thailand. The study sample comprised 435 people aged ≥80 years at the time of the survey. All were community residents. The sample size was calculated to ensure the power of the findings at a 95% confidence level and to represent the population of 10,461 people aged ≥80 years living in Chiang Mai Province. A multistage sampling approach was applied, taking into account the geographical area, population size, and whether the districts were rural or urban.\nMultistage sampling was designed to obtain a representative sample in a stratified proportionate approach. There are 25 districts in Chiang Mai Province in total. A large district, a medium district, and a small district were selected in the first stage of sampling through a stratified random sampling approach, using population size to form the strata (Table 1).\nFrom each selected district, one subdistrict was selected by simple random sampling. From each subdistrict, participants were sampled in consecutive villages, starting from village number one, until a defined number of participants were recruited.\nThe district Muang Chiang Mai is the main city of Chiang Mai Province. Comprising 16 subdistricts, its population size is comparable to the populations of the southern, northern, and middle parts of the province (Table 1). Therefore, three subdistricts were selected randomly from the 16 subdistricts of Muang Chiang Mai (Table 1). Finally, the sample represented Chiang Mai Province through a total of ten selected districts. Researchers traveled to villages in twelve subdistricts to reach 435 participants and collected data through interviews (Table 1).\nThe questionnaires were interviewer administered, and measurement was carried out during home visits in ten districts of Chiang Mai Province. Public health officers who were well trained to interview elderly people conducted data collection from May to July 2014. The response rate was 97.7% among those who were interviewed.", " Independent and dependent variables The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse; parents; children and children-in-law; close relatives; close friends; religious members (such as church or temple); classmates, teachers, and students in adult education; coworkers or colleagues (employee or employer); neighbors; volunteer networks; and other organizations (social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly). Hereafter, such regular contact will be referred to as “active social contact” in the “Results” and “Discussion” sections.\nThe maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 a “medium” social network, and SNI ≥6 a “diverse” social network. 
The SNI is freely accessible in the Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use, and it was tested in a pilot study prior to actual data collection.\nTo measure depression, a 30-item long-form geriatric depression scale (GDS) was applied, which had been validated and adapted for the Thai population.23,24 The reliability of the GDS was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha of the GDS in the current study population was 0.8.\n Controlled variables Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into a dependent group (ADL <14) and an independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with the responses “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory was assessed with single-item tools: long-term memory by asking “Can you remember the events that happened years ago?” and short-term memory by asking “Do you remember what you ate for breakfast this morning?”26\nDisability assessment covered long-term and short-term disability.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two analogous questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?”", "The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse; parents; children and children-in-law; close relatives; close friends; religious members (such as church or temple); classmates, teachers, and students in adult education; coworkers or colleagues (employee or employer); neighbors; volunteer networks; and other organizations (social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly). Hereafter, such regular contact will be referred to as “active social contact” in the “Results” and “Discussion” sections.\nThe maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 a “medium” social network, and SNI ≥6 a “diverse” social network. The SNI is freely accessible in the Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use, and it was tested in a pilot study prior to actual data collection.\nTo measure depression, a 30-item long-form geriatric depression scale (GDS) was applied, which had been validated and adapted for the Thai population.23,24 The reliability of the GDS was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. 
The Cronbach’s alpha of the GDS in the current study population was 0.8.", "Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into a dependent group (ADL <14) and an independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with the responses “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory was assessed with single-item tools: long-term memory by asking “Can you remember the events that happened years ago?” and short-term memory by asking “Do you remember what you ate for breakfast this morning?”26\nDisability assessment covered long-term and short-term disability.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two analogous questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?”", "The distribution and types of social ties were analyzed through descriptive analysis. The differences in geriatric parameters such as ADL, GDS, disability, and memory loss among the three groups of social network diversity were analyzed using the Kruskal–Wallis and chi-squared tests. Categorization of data such as ADL and GDS followed existing standard references.16,29\nTo test the hypothesis that an increase in SNI significantly decreases the GDS rank, ordinal logistic regression analysis was applied, with GDS treated as an ordinal dependent variable. Three ordinal logistic regression models were fitted to test the association between the dependent and independent variables, controlling various covariates. A P-value <0.05 was considered statistically significant, with 95% confidence intervals. Stata version 11 (Stata Corporation, College Station, TX) was used for data analysis.", "The median age of the sample was 83 years. More than half (54.94%) of the 435 participants were female. The average SNI, in terms of social network diversity, was 4.9±1.9 (mean ± standard deviation [SD]). The median number of people in the contact networks was 15, with a range of 5–21. Categorizing the participants’ SNI, 25% fell into the “limited” social network group (SNI 0–3), 37% into the “medium” network group (SNI 4–5), and 38% into the “diverse” social network group (SNI ≥6). Table 2 shows the characteristics of the participants according to the different categories of SNI. The SNI did not differ between males and females.\nThe participants’ most frequent regular contacts were with their children or their neighbors, with religious society members as their third most active social tie (Figure 1). Contact with close friends and close relatives, meanwhile, represented active social ties for less than half of the participants. 
Only a third of the participants reported their spouse as an active social contact: 52% of the males reported this, compared with only 29% of the females (P<0.001, chi-squared).\nThe ADL scores differed significantly among the three SNI groups (Table 2), as did age. The proportion of females was uniform across the groups and exceeded that of males in all of them. The elderly in the “diverse” social network group (SNI ≥6) had the highest ADL scores, and the GDS score of this group was the lowest. Approximately 10.34% of the participants were suffering from depression; of these, 80% had “mild,” 16% “moderate,” and 4% “severe” depression (Table 2). The proportions of people with a long-term disability and with long-term memory loss were significantly higher in the limited social network group than in the medium and diverse social network groups.\nThe level of GDS declined with increasing SNI (Table 2). SNI was significantly associated with GDS in all three ordinal logistic regression models (Table 3). Covariates in model 1 were SNI, age, sex, and educational attainment; in model 2, SNI, age, sex, educational attainment, self-impression of health, and dependency; and in model 3, SNI, age, sex, educational attainment, self-impression of health, dependency, and short-term and long-term memory loss (Table 3). The higher the SNI score (reflecting a more diverse social network), the lower the GDS score (reflecting less geriatric depression).\nMoreover, participants’ good self-rated health status indicated a likely lower level of geriatric depression (β=–1.03, P<0.001), whereas those with long-term memory loss and short-term memory loss were more likely to suffer from geriatric depression (Table 3).", "This study is of particular value because few studies have conducted empirical research into the social network diversity of people aged ≥80 years.30 It has revealed common patterns of social networks among the oldest age group in the Northern Thailand context (Figure 1). Many of the participants have lost their closest ties, such as their spouse and their friends. Hence, those who survive longer than their national life expectancy and enter their 80s and 90s may experience drastic changes in their social network. More than a quarter of the participants in the current study belonged to a very limited social network (SNI <3). They were at high risk of social isolation, a health problem overlooked in many communities despite its negative impact on life expectancy, morbidity, and quality of life.31 The findings of the current study, therefore, suggest integrating screening for social isolation into primary health care-based geriatric assessment.\nThe majority of participants ranked their children as their most frequently contacted social tie (Figure 1). This highlights the importance of families within the social network of the elderly. 
In Thai culture, parents, children, and grandchildren traditionally live together, and many elderly people retain a role of caregiver to their grandchildren within these intergenerational families.32 Moreover, a recent study reported that such intergenerational families are common to countries such as Thailand, Myanmar, and Vietnam.32 Many children may migrate to work outside their native province, for example, for career development, yet they can keep in touch with their parents owing to the increasing use of mobile phones, as well as improved road access and transportation in Thailand.33\nNeighbors were ranked second by the participants among their most frequent contacts. As reported by Basto et al,34 neighbors represent an ecological asset for elderly community residents, although this may not be the case for nursing home residents.35 Neighborhood hospitality also depends on how urban a community’s location is. Communities in metropolitan Bangkok and traditional communities in Chiang Mai may be rather different. All participants in the current study were community residents living in Chiang Mai Province, Northern Thailand. This, then, may reflect the traditional culture and society unique to Northern Thailand.\nVery few participants retained occupational or educational networks (Figure 1). After retirement, there is less of a role for elderly people within the labor market and occupational networks.8 However, aging people still have other possible active social roles within their communities. The lack of formal working hours, coupled with a likely more relaxed daily life, may permit the elderly to engage more in social volunteering and religious activity. Notably, >60% of the participants reported active contact as a member of the temple. Given that all of the participants were Buddhist, this finding is consistent with that of another recent study in Thailand, which reported that involvement in the Buddhist social network positively contributes to the functional health of the elderly.36\nFurthermore, more than half of the participants reported being in contact with a group or club. This could be the result of clubs especially for the elderly, which have recently been established in a number of districts in Thailand. These clubs have become popular opportunities for the elderly to get together and socialize. In contrast, less than a quarter of participants reported being involved in volunteer work or regularly talking with people in volunteering activities. This finding is very similar to that of a UK study in which around 20% of the elderly, aged 75 years and older, reported participation in volunteering activities.36\nThis study also assessed, in a representative sample, the relationship of the participants’ SNI to their level of geriatric depression. Self-impression of health and long-term memory loss had a significant association with depression. After adjusting for several important covariates, such as demographics (age, sex, and educational attainment), health status (dependency and self-impression of health), and cognitive decline (short-term and long-term memory loss), there was a significant negative association between social network diversity and geriatric depression. 
Therefore, being in regular and frequent contact with various social contacts may prevent common geriatric depression among those aged ≥80 years (Table 3).\nThe relation between geriatric depression and social network isolation has been reported in different studies with variable results.21,31,37 These inconsistent findings probably reflect differences in social networks and their impacts across cultural contexts. However, the findings of earlier reports in other parts of Thailand are consistent with those of the current study, despite the different methodological approaches applied. Thanakwang et al38 reported the positive social impact of friends and families on the psychological well-being of elderly Thais. Sasiwongsaroj et al,36 meanwhile, reported that elderly Thais involved in the Buddhist social network gained better functional, mental, and social health status as a result of this social network.36 The samples in these studies, however, comprised elderly people aged >60 years, whereas the median age within the current study was 83 years. A twenty-year difference in later life may bring drastic changes in social networks, notably the profound loss of friends and colleagues. The current study, therefore, has been able to uncover different typologies of social networking among the oldest bracket of the elderly.\nThis study has some limitations. Social network diversity was measured on the basis of the number and frequency of contacts; the quality of the relationship within each social tie could not be captured by this approach. Moreover, the relations between variables were tested in a cross-sectional design and will require future longitudinal study and analysis. Despite these shortcomings, it is hoped that public health and long-term care programs will benefit from the findings of the present study. The study’s descriptive and analytical findings can provide a practical impetus to design evidence-based intervention programs in the Thai setting, and more broadly in that of Southeast Asia, to preserve and promote the social networks of the elderly, in doing so serving to reduce the problem of geriatric depression.", "In the setting of the current study, children, neighbors, and Buddhist temple members were reported as the most likely social network resources for those in the oldest bracket of the elderly. There is still ample opportunity, however, for this group to strengthen their social ties by engaging in volunteer activity and clubs for the elderly. Moreover, the diversity of their social networks may serve to prevent the problem of depression. Therefore, future research should devise community-based interventions to promote the social networks of those in the oldest age bracket, which may lead the elderly to enjoy active aging and living, in turn positively impacting their mental health." ]
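The SNI scoring rule described in the section texts above is a simple count-and-band procedure: count the social roles with at least fortnightly contact, then band the count. A minimal Python sketch of that logic follows; the role labels and the dict-based input format are illustrative assumptions, not part of the original twelve-question instrument.

```python
# Illustrative sketch of the SNI count-and-band rule described above.
# The twelve role labels and the dict input format are assumptions for
# demonstration; the original instrument defines the roles via 12 questions.

SNI_ROLES = [
    "spouse", "parents", "children_in_law", "close_relatives", "close_friends",
    "religious_members", "classmates", "coworkers", "neighbors", "volunteers",
    "group_members", "elderly_club",
]

def sni_score(active_contacts: dict) -> int:
    """Count roles with regular contact (at least once every 2 weeks)."""
    return sum(1 for role in SNI_ROLES if active_contacts.get(role, False))

def sni_band(score: int) -> str:
    """Band an SNI score: 1-3 limited, 4-5 medium, >=6 diverse."""
    if score >= 6:
        return "diverse"
    if score >= 4:
        return "medium"
    return "limited"

# Example: a respondent with active ties to children, neighbors, and temple.
example = {"children_in_law": True, "neighbors": True, "religious_members": True}
s = sni_score(example)
print(s, sni_band(s))  # -> 3 limited
```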
[ "intro", null, null, "methods", null, null, null, "methods", "results", "discussion", null ]
[ "aging", "gerontology", "psychogeriatrics", "sociology of aging", "community", "Southeast Asia", "Chiang Mai" ]
Introduction: Social ties are important for a person’s physical and mental well-being.1 Since the late 1900s, research has been carried out to investigate the impact of social ties on health and lifespan.2,3 Cohen et al2 reported the preventive effect of social network diversity on illnesses such as the common cold. Recent epidemiological studies have shown that socially active people are less likely to develop noncommunicable diseases such as diabetes and metabolic syndrome and are likely to live longer than their nonactive counterparts.4–6 Therefore, being connected in social networks may allow us to lead a healthy life through their salutary and supportive social impacts. The gift of a socially active life is not experienced by all elderly people, whose contact lists fade away with years of survival. Nowadays, due to the trend of population aging, global societies are seeing more and more people living into their 80s and 90s.7 For a significant minority of these people, however, a common negative aspect of old age is social isolation.8,9 A recent national survey in Malaysia, a neighboring country to the study site, reported that 48.6% of older persons were at risk of social isolation and that the oldest old were at higher risk.10,11 Research in Korea, Singapore, and the People’s Republic of China has also investigated the social connectedness of elderly people in each society. Although different instruments were applied in those studies, they reported similarly that people in the oldest age bracket are liable to social isolation.11–15 Research exploring the nature and diversity of elderly people’s social contacts is therefore a necessity, particularly given the unique social and cultural contexts within different countries. In 2010, Thailand was home to 8.5 million elderly people aged ≥60 years and now stands as the fastest population-aging country in Southeast Asia.16,17 In 2006, people aged ≥60 years comprised 11.3% of the Thai population, a figure estimated to reach 29.8% by 2050.18,19 People aged ≥80 years, meanwhile, constituted 11.5% of the elderly population,17 a figure similarly estimated to increase to 23.6% by 2050.18 Although the impact of social networks has been investigated in a wide range of contexts globally, evidence on the types and diversity of social contacts in the oldest age group of Thai society is still scant. An increasingly common problem affecting the elderly is depression. This problem may affect as many as 20% of those in the oldest age bracket.20 However, psychogeriatric research, investigating, for example, the prevalence of depression among the elderly in a country like Thailand, where psychogeriatric services are not yet in place, is still limited. An inverse relation between social network and geriatric depression was reported in a recent study in the US.21 Similar studies, however, are still needed for Thailand and the Southeast Asian context to find out how social networks influence geriatric depression, in particular among people aged ≥80 years. The current study therefore measured the social network index (SNI) using a quantitative approach, aiming to reveal the types of social ties experienced by Thais in the oldest age bracket and to assess how their SNI relates to geriatric depression. The findings of the current study seek to highlight the importance of social networks in active aging and may provide insights into ways to preserve the social network diversity of the elderly, resulting in their improved physical and mental well-being. 
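The methods below state only that the sample size (435, drawn from a population of 10,461 people aged ≥80 years) was calculated at a 95% confidence level; the exact formula is not reported. One common approach consistent with those figures is Cochran's formula for a proportion with a finite population correction, sketched here; the values of p and the margin of error e are guesses for illustration, not taken from the paper.

```python
import math

def cochran_fpc(N: int, p: float = 0.5, e: float = 0.046, z: float = 1.96) -> int:
    """Sample size for a proportion with finite population correction.

    N: population size; p: expected proportion; e: margin of error;
    z: z-score for the confidence level (1.96 for 95%).
    """
    n0 = z ** 2 * p * (1 - p) / e ** 2  # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / N)         # finite population correction
    return math.ceil(n)

# With the assumed p=0.5 and e~4.6%, N=10,461 yields a figure close to the
# study's 435; the true inputs used by the authors are not reported.
print(cochran_fpc(10_461))
```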
Design and methods: Ethical approval This study was approved by Chiang Mai Provincial Health Office, Ethics Review Committee (EC-CMPHO 2/56). The participants were interviewed and written informed consent was obtained. This study was approved by Chiang Mai Provincial Health Office, Ethics Review Committee (EC-CMPHO 2/56). The participants were interviewed and written informed consent was obtained. Data collection procedures A community-based, cross-sectional study was conducted in Chiang Mai Province, Northern Thailand. The study sample comprised of 435 people aged ≥80 years at the time of the survey. All were community residents. The sample size was calculated to ensure the power of the finding with a 95% confidence interval and to represent the population of 10,461 people aged ≥80 years living in the Chiang Mai Province. A multistage sampling approach was applied, taking into account the geographical area, population size, and whether the districts were rural or urban. Multistage sampling was designed to obtain a representative sample in a stratified proportionate approach. There are totally 25 districts in Chiang Mai Province. A large district, a medium district, and a small district were selected in the first stage of sampling through stratified random sampling approach applying population size to form stratum (Table 1). From each selected district, one subdistrict was selected by applying simple random sampling. From each subdistrict, participants were sampled in consecutive villages starting from village number one, until a defined number of participants were recruited. The district Muang Chiang Mai is the main city of Chiang Mai province. Comprising of 16 subdistricts, its population size is comparable to the population of a southern part, northern part, and middle part of the province (Table 1). Therefore, three subdistricts were selected randomly from the 16 subdistricts of Muang Chiang Mai (Table 1). Finally, the sample represented Chiang Mai Province by selecting a total of ten districts. Researchers traveled to villages in twelve subdistricts to reach 435 participants and collected data through interviews (Table 1). The interviewer administered the questionnaires, and measurement was carried out during home visits in ten districts of Chiang Mai Province. Public health officers who were well trained to interview elderly people conducted data collection from May to July 2014. The response rate was 97.7% among those who were interviewed. A community-based, cross-sectional study was conducted in Chiang Mai Province, Northern Thailand. The study sample comprised of 435 people aged ≥80 years at the time of the survey. All were community residents. The sample size was calculated to ensure the power of the finding with a 95% confidence interval and to represent the population of 10,461 people aged ≥80 years living in the Chiang Mai Province. A multistage sampling approach was applied, taking into account the geographical area, population size, and whether the districts were rural or urban. Multistage sampling was designed to obtain a representative sample in a stratified proportionate approach. There are totally 25 districts in Chiang Mai Province. A large district, a medium district, and a small district were selected in the first stage of sampling through stratified random sampling approach applying population size to form stratum (Table 1). From each selected district, one subdistrict was selected by applying simple random sampling. 
From each subdistrict, participants were sampled in consecutive villages starting from village number one, until a defined number of participants were recruited. The district Muang Chiang Mai is the main city of Chiang Mai province. Comprising of 16 subdistricts, its population size is comparable to the population of a southern part, northern part, and middle part of the province (Table 1). Therefore, three subdistricts were selected randomly from the 16 subdistricts of Muang Chiang Mai (Table 1). Finally, the sample represented Chiang Mai Province by selecting a total of ten districts. Researchers traveled to villages in twelve subdistricts to reach 435 participants and collected data through interviews (Table 1). The interviewer administered the questionnaires, and measurement was carried out during home visits in ten districts of Chiang Mai Province. Public health officers who were well trained to interview elderly people conducted data collection from May to July 2014. The response rate was 97.7% among those who were interviewed. Research instrument and measurement Independent and dependent variables The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer, neighbors, volunteer networks; and others organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. Hereafter, such regular contact will be referred as “active social contact” in the “Results” and “Discussion” sections. The maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 as a “medium” social network, and SNI ≥6 as a “diverse” social network. SNI is freely accessible in Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use and tested in a pilot study prior to actual data collection. To measure depression a 30-item long form geriatric depression scale (GDS) was applied, which had been validated and adopted for the Thai population.23,24 Reliability of GDS scale was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha rating GDS in the current study population was 0.8. The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer, neighbors, volunteer networks; and others organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. Hereafter, such regular contact will be referred as “active social contact” in the “Results” and “Discussion” sections. The maximum SNI score is 12. 
Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 as a “medium” social network, and SNI ≥6 as a “diverse” social network. SNI is freely accessible in Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use and tested in a pilot study prior to actual data collection. To measure depression a 30-item long form geriatric depression scale (GDS) was applied, which had been validated and adopted for the Thai population.23,24 Reliability of GDS scale was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha rating GDS in the current study population was 0.8. Controlled variables Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into dependent group (ADL <14) and independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses including “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory assessment applied a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory was assessed by asking participants “Do you remember what you ate for breakfast this morning?”26 Disability assessment applied long-term disability and short-term disability assessment.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?” Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into dependent group (ADL <14) and independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses including “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory assessment applied a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory was assessed by asking participants “Do you remember what you ate for breakfast this morning?”26 Disability assessment applied long-term disability and short-term disability assessment.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. 
Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?” Independent and dependent variables The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer, neighbors, volunteer networks; and others organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. Hereafter, such regular contact will be referred as “active social contact” in the “Results” and “Discussion” sections. The maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 as a “medium” social network, and SNI ≥6 as a “diverse” social network. SNI is freely accessible in Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use and tested in a pilot study prior to actual data collection. To measure depression a 30-item long form geriatric depression scale (GDS) was applied, which had been validated and adopted for the Thai population.23,24 Reliability of GDS scale was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha rating GDS in the current study population was 0.8. The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer, neighbors, volunteer networks; and others organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. Hereafter, such regular contact will be referred as “active social contact” in the “Results” and “Discussion” sections. The maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 as a “medium” social network, and SNI ≥6 as a “diverse” social network. SNI is freely accessible in Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai so as to maximize the participants’ comprehension and ease of use and tested in a pilot study prior to actual data collection. To measure depression a 30-item long form geriatric depression scale (GDS) was applied, which had been validated and adopted for the Thai population.23,24 Reliability of GDS scale was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha rating GDS in the current study population was 0.8. 
Controlled variables Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into a dependent group (ADL <14) and an independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses of “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory was assessed with a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory by asking participants “Do you remember what you ate for breakfast this morning?”26 Disability was assessed using long-term and short-term disability assessments.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?” Statistical analysis The distribution and types of social ties were analyzed through descriptive analysis. Differences in geriatric parameters such as ADL, GDS, disability, and memory loss among the three groups of social network diversity were analyzed by applying the Kruskal–Wallis and chi-squared tests. Categorization of data such as ADL and GDS followed existing standard references.16,29 To examine the hypothesis that an increase in SNI would significantly decrease the rank of GDS, ordinal logistic regression analysis was applied. The dependent variable, GDS, was regarded as an ordinal variable.
Three models of ordinal logistic regression analyses were applied to test the association between the dependent and independent variables. Various covariates were controlled in the three regression models. A P-value <0.05 was considered statistically significant, with a 95% confidence interval. Stata version 11 (Stata Corporation, College Station, TX) was used for data analysis. Ethical approval: This study was approved by the Chiang Mai Provincial Health Office, Ethics Review Committee (EC-CMPHO 2/56). The participants were interviewed and written informed consent was obtained. Data collection procedures: A community-based, cross-sectional study was conducted in Chiang Mai Province, Northern Thailand. The study sample comprised 435 people aged ≥80 years at the time of the survey. All were community residents. The sample size was calculated to ensure the power of the findings with a 95% confidence interval and to represent the population of 10,461 people aged ≥80 years living in Chiang Mai Province. A multistage sampling approach was applied, taking into account the geographical area, population size, and whether the districts were rural or urban. Multistage sampling was designed to obtain a representative sample in a stratified proportionate approach. There are 25 districts in Chiang Mai Province in total. A large district, a medium district, and a small district were selected in the first stage of sampling through a stratified random sampling approach applying population size to form strata (Table 1). From each selected district, one subdistrict was selected by applying simple random sampling. From each subdistrict, participants were sampled in consecutive villages, starting from village number one, until a defined number of participants was recruited. The district Muang Chiang Mai is the main city of Chiang Mai Province. Comprising 16 subdistricts, its population size is comparable to the populations of the southern, northern, and middle parts of the province (Table 1). Therefore, three subdistricts were selected randomly from the 16 subdistricts of Muang Chiang Mai (Table 1). Finally, the sample represented Chiang Mai Province through the selection of a total of ten districts. Researchers traveled to villages in twelve subdistricts to reach 435 participants and collected data through interviews (Table 1). The interviewers administered the questionnaires, and measurement was carried out during home visits in ten districts of Chiang Mai Province.
Public health officers who were well trained to interview elderly people conducted data collection from May to July 2014. The response rate was 97.7% among those who were interviewed. Research instrument and measurement: Independent and dependent variables The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer; neighbors; volunteer networks; and other organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly. Hereafter, such regular contact will be referred to as “active social contact” in the “Results” and “Discussion” sections. The maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 a “medium” social network, and SNI ≥6 a “diverse” social network. The SNI is freely accessible in the Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai to maximize the participants’ comprehension and ease of use, and was tested in a pilot study prior to actual data collection. To measure depression, the 30-item long-form geriatric depression scale (GDS), which had been validated and adapted for the Thai population, was applied.23,24 The reliability of the GDS was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha of the GDS in the current study population was 0.8.
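As a reference for the reliability figure quoted above (Cronbach's alpha of 0.8 for the 30-item GDS), the following is a minimal Python sketch of the standard Cronbach's alpha formula applied to an item-score matrix; it is the textbook computation, not code from the study.

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                              # number of items (30 for the GDS)
    item_variances = x.var(axis=0, ddof=1)      # variance of each item
    total_variance = x.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)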
Controlled variables Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into a dependent group (ADL <14) and an independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses of “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory was assessed with a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory by asking participants “Do you remember what you ate for breakfast this morning?”26 Disability was assessed using long-term and short-term disability assessments.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?” Independent and dependent variables: The SNI was calculated from the responses to twelve questions.2 This index counts the number of social roles in which the respondent has regular contact, at least once every 2 weeks, with at least one person: spouse, parents, their children and children-in-law, close relatives, close friends, religious members (such as church or temple), classmates, teachers and students in adult education; coworkers or colleagues: employee or employer; neighbors; volunteer networks; and other organizations: social clubs, recreational groups, trade unions, commercial groups, professional organizations, and clubs for the elderly.
Hereafter, such regular contact will be referred to as “active social contact” in the “Results” and “Discussion” sections. The maximum SNI score is 12. Three categories of social network diversity were formed based on the SNI score: SNI 1–3 represents a “limited” social network, 4–5 a “medium” social network, and SNI ≥6 a “diverse” social network. The SNI is freely accessible in the Measurement Instrument Database for the Social Sciences.2,22 The instrument was transculturally translated from its original English version into Thai to maximize the participants’ comprehension and ease of use, and was tested in a pilot study prior to actual data collection. To measure depression, the 30-item long-form geriatric depression scale (GDS), which had been validated and adapted for the Thai population, was applied.23,24 The reliability of the GDS was tested in a pilot sample of 30 people. The data collectors were well trained prior to carrying out the study. The Cronbach’s alpha of the GDS in the current study population was 0.8. Controlled variables: Barthel’s activities of daily living (ADL) index was applied to measure the participants’ daily activity.25,26 The ADL scores were categorized into a dependent group (ADL <14) and an independent group (ADL 14–20).27 The participants’ self-rated health status was investigated using a 5-point scale, with responses of “very bad,” “bad,” “reasonable,” “good,” and “very good.” The responses “bad” and “very bad” were categorized as “poor health status,” while the others were categorized as “good health status.” Memory was assessed with a single-item assessment tool. Long-term memory was assessed by asking “Can you remember the events that happened years ago?” and short-term memory by asking participants “Do you remember what you ate for breakfast this morning?”26 Disability was assessed using long-term and short-term disability assessments.27,28 Participants were asked two questions to detect long-term disability: “Have you had any condition or health problem for 6 months or longer?” and “Does it prevent or limit you in the kind or amount of activity you can do?”28 A positive response to both of these questions was defined as long-term disability. Participants were asked two questions to detect short-term disability: “Have you had any condition or health problem within the last two weeks?” and “Did it prevent or limit you in the kind or amount of activity you can do?” Statistical analysis: The distribution and types of social ties were analyzed through descriptive analysis. Differences in geriatric parameters such as ADL, GDS, disability, and memory loss among the three groups of social network diversity were analyzed by applying the Kruskal–Wallis and chi-squared tests. Categorization of data such as ADL and GDS followed existing standard references.16,29 To examine the hypothesis that an increase in SNI would significantly decrease the rank of GDS, ordinal logistic regression analysis was applied. The dependent variable, GDS, was regarded as an ordinal variable. Three models of ordinal logistic regression analyses were applied to test the association between the dependent and independent variables. Various covariates were controlled in the three regression models. A P-value <0.05 was considered statistically significant, with a 95% confidence interval. Stata version 11 (Stata Corporation, College Station, TX) was used for data analysis.
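The ordinal regression setup just described can be sketched as follows in Python with statsmodels' OrderedModel. The data frame and column names (gds_rank, sni, and the covariates) are illustrative assumptions; the three covariate sets mirror the models reported in Table 3.

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_gds_model(df, covariates):
    """Regress the ordinal GDS rank on SNI plus one covariate set."""
    endog = df["gds_rank"].astype(pd.CategoricalDtype(ordered=True))  # ordered outcome
    exog = df[["sni"] + covariates]
    return OrderedModel(endog, exog, distr="logit").fit(method="bfgs", disp=False)

# Covariate sets mirroring the three models reported in Table 3:
model1 = ["age", "sex", "education"]
model2 = model1 + ["self_rated_health", "dependency"]
model3 = model2 + ["short_term_memory_loss", "long_term_memory_loss"]
# A negative coefficient on "sni" corresponds to lower odds of a higher
# (more depressed) GDS rank, i.e., more diverse network, less depression.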
Results: The median age of the sample was 83 years. More than half (54.94%) of the sample (435 persons) were female. The average SNI, in terms of social network diversity, was 4.9±1.9 (mean ± standard deviation [SD]). The median number of people in the contact networks was 15, with a range of 5–21. Categorizing the participants’ SNI, 25% fell into the “limited” social network group (SNI 0–3), 37% into the “medium” network group (SNI 4–5), and 38% into the “diverse” social network group (SNI ≥6). Table 2 shows the characteristics of the participants according to the different categories of SNI. The SNI did not differ between males and females. The participants’ most frequent regular contacts were with their children or their neighbors, with religious society members as their third most active social tie (Figure 1). Contact with close friends and close relatives, meanwhile, represented active social ties for less than half of the participants. Only a third of the participants reported their spouse as an active social contact: 52% of the males did so, compared with only 29% of the females (P<0.001, chi-squared test). The ADL scores differed significantly among the three SNI groups (Table 2). There was a significant difference in age across the three SNI groups. The proportion of females was uniform across groups and exceeded that of males in all groups. The elderly in the “diverse” social network group (SNI ≥6) had the highest ADL scores. The GDS score of this group was the lowest. Approximately 10.34% of the participants were suffering from depression: 80% from “mild” depression; 16% from “moderate” depression; and 4% from “severe” depression (Table 2). The proportions of people with a long-term disability and with long-term memory loss were significantly higher among the elderly in the limited social network group than in the medium and diverse social network groups. The level of GDS declined with increasing SNI (Table 2). SNI was significantly associated with GDS in all three models of ordinal logistic regression analyses (Table 3). Covariates in model 1 were SNI, age, sex, and educational attainment; in model 2, SNI, age, sex, educational attainment, self-impression of health, and dependency; and in model 3, SNI, age, sex, educational attainment, self-impression of health, dependency, and short-term and long-term memory loss (Table 3). The higher the SNI score (reflecting a more diverse social network), the lower the GDS score (reflecting lower geriatric depression). Moreover, participants’ good self-rated health status indicated a likely lower level of geriatric depression (β = −1.03, P<0.001), whereas those with long-term memory loss and short-term memory loss were more likely to suffer from geriatric depression (Table 3). Discussion: This study is of particular value because few studies have conducted empirical research into the social network diversity of people aged ≥80 years.30 It has revealed common patterns of social networks among the oldest age group in the Northern Thailand context (Figure 1). Many of the participants have lost their closest ties, such as their spouse and their friends. Hence, those who survive longer than their national life expectancy and enter their 80s and 90s may experience drastic changes in their social network. More than a quarter of the participants in the current study belonged to a very limited social network, SNI <3.
They were at high risk of social isolation, a health problem overlooked in many communities despite its negative impact on life expectancy, morbidity, and quality of life.31 The findings of the current study, therefore, suggest integrating screening for social isolation into primary health care-based geriatric assessment. The majority of participants ranked their children as their most frequently contacted social tie (Figure 1). This highlights the importance of families within the social network of the elderly. In Thai culture, parents, children, and grandchildren traditionally live together, and many elderly people retain the role of caregiver to their grandchildren within these intergenerational families.32 Moreover, a recent study reported that such intergenerational families are common to countries such as Thailand, Myanmar, and Vietnam.32 Many children may migrate to work outside their native province, for example, for career development, yet they can keep in touch with their parents due to the increasing use of mobile phones, as well as improved road access and transportation in Thailand.33 Neighbors were ranked second by the participants among their most frequent contacts. As reported by Basto et al,34 neighbors represent an ecological asset for elderly community residents, although this may not be the case for nursing home residents.35 Neighborhood hospitality also depends on how urban the location of a community is. Communities in metropolitan Bangkok and traditional communities in Chiang Mai may be rather different. All participants in the current study were community residents living in Chiang Mai Province, Northern Thailand. This, then, may reflect the traditional culture and society unique to Northern Thailand. Very few participants retained occupational or educational networks (Figure 1). After retirement, there is less of a role for elderly people within the labor market and occupational networks.8 However, aging people still have other possible active social roles within their communities. The lack of formal working hours, coupled with a likely more relaxed daily life, may permit the elderly to engage more in social volunteering and religious activity. Notably, >60% of the participants reported active contact as members of a temple. Given that all of the participants were Buddhist, this finding concurs with that of another recent study in Thailand, which reported that involvement in the Buddhist social network positively contributes to the functional health of the elderly.36 Furthermore, more than half of the participants reported being in contact with a group or club. This could be the result of clubs specifically for the elderly, which have recently been established in a number of districts in Thailand. These clubs have become popular opportunities for the elderly to get together and socialize. In contrast, less than a quarter of participants reported being involved in volunteer work or regularly talking with people in volunteering activities. This finding is very similar to that of a UK study in which around 20% of the elderly, aged 75 years and older, reported participation in volunteering activities.36 This study also assessed, in a characteristic sample, the relationship of the participants’ SNI to their level of geriatric depression. Self-impression of health and long-term memory loss had a significant association with depression.
After adjusting for several important covariates, such as demographics (age, sex, and educational attainment), health status (dependency and self-impression of health), and cognitive decline (short-term and long-term memory loss), there was a significant negative association between social network diversity and geriatric depression. Therefore, being in regular and frequent contact with various social contacts may prevent common geriatric depression among those aged ≥80 years (Table 3). The relation between geriatric depression and social network isolation has been reported in different studies with variable results.21,31,37 These inconsistent findings are probably due to differing social networks and their impacts in different cultural contexts. However, the findings of earlier reports in other parts of Thailand are consistent with the findings of the current study despite the different methodological approaches applied. Thanakwang et al38 reported the positive social impact of friends and families on the psychological well-being of elderly Thais. Sasiwongsaroj et al,36 meanwhile, reported that elderly Thais involved in the Buddhist social network gained better functional, mental, and social health status as a result of this network.36 The samples in these studies, however, comprised elderly people aged >60 years, whereas the median age within the current study was 83 years. A twenty-year difference in later life may cause drastic changes in social networks, notably the profound loss of friends and colleagues. The current study, therefore, has been able to uncover different typologies of social networking among the oldest bracket of the elderly. This study may have some limitations. The measurement of social network diversity was based on the number and frequency of contacts; the quality of the relationship within each social tie could not be captured by this approach. Moreover, the relations between variables, tested in a cross-sectional study, may require future longitudinal study and analysis. Despite these shortcomings, it is hoped that public health and long-term care programs will benefit from the findings of the present study. It is considered that the study’s descriptive and analytical findings can provide a practical impetus to design evidence-based intervention programs in the Thai setting, and more broadly in that of Southeast Asia, which will preserve and promote the social networks of the elderly, in doing so serving to reduce the problem of geriatric depression. Conclusion: In the setting of the current study, children, neighbors, and Buddhist temple members have been reported as the most likely social network resources for those in the oldest bracket of the elderly. There is still ample opportunity, however, for this group to strengthen their social ties by engaging in volunteer activity and clubs for the elderly. Moreover, the diversity of their social networks may serve to prevent the problem of depression. Therefore, future research should devise community-based interventions to promote the social networks of those in the oldest age bracket, which may lead the elderly to enjoy active aging and living, in turn positively impacting upon their mental health.
Background: Having a diverse social network is considered to be beneficial to a person's well-being. The significance, however, of social network diversity in the geriatric assessment of people aged ≥80 years has not been adequately investigated within the Southeast Asian context. This study explored the social networks of the elderly aged ≥80 years and assessed the relation between social network and geriatric depression. Methods: This study was a community-based cross-sectional survey conducted in Chiang Mai Province, Northern Thailand. A representative sample of 435 community residents, aged ≥80 years, was included through multistage sampling. The participants' social network diversity was assessed by applying Cohen's social network index (SNI). The geriatric depression scale and activities of daily living measures were carried out during home visits. Descriptive analyses revealed the distribution of the SNI, while the relationship between the SNI and the geriatric depression scale was examined by ordinal logistic regression models controlling for possible covariates such as age, sex, and educational attainment. Results: The median age of the sample was 83 years, with females comprising 54.94% of the sample. The participants' children, their neighbors, and members of Buddhist temples were reported as the most frequent contacts of the study participants. Among the 435 participants, 25% were at risk of social isolation due to having a "limited" social network (SNI 0-3), whereas 37% had a "medium" social network (SNI 4-5), and 38% had a "diverse" social network (SNI ≥6). The SNI did not differ between the two sexes. Activities of daily living scores in the diverse social network group were significantly higher than those in the limited social network group. Multivariate ordinal logistic regression models revealed a significant negative association between social network diversity and geriatric depression. Conclusions: Regular and frequent contact with various social contacts may safeguard against common geriatric depression among persons aged ≥80 years. As a result, screening for those at risk of social isolation is recommended to be integrated into routine primary health care-based geriatric assessment and intervention programs.
Introduction: Social ties are important for a person’s physical and mental well-being.1 Since the late 1900s, research has been carried out to investigate the impact of social ties on health and lifespan.2,3 Cohen et al2 reported the preventive effect of social network diversity on illnesses such as the common cold. Recent epidemiological studies have shown that socially active people are less likely to develop noncommunicable diseases such as diabetes and metabolic syndrome and are likely to live longer than their nonactive counterparts.4–6 Therefore, being connected in social networks may allow us to lead a healthy life through their salutary and supportive social impacts. The gift of a socially active life is not experienced by all elderly people, whose contact lists fade away with years of survival. Nowadays, due to the trend of population aging, global societies are seeing more and more people living into their 80s and 90s.7 For a significant minority of these people, however, a common negative aspect of old age is social isolation.8,9 A recent national survey in Malaysia, a neighboring country to the study site, reported that 48.6% of old persons were at risk of social isolation and that the oldest old persons were at higher risk.10,11 Research in Korea, Singapore, and the People’s Republic of China has also investigated the social connectedness of elderly people in each society. Although different instruments were applied in those studies, they reported similarly that people in the oldest age bracket are liable to social isolation.11–15 Research exploring the nature and diversity of elderly people’s social contacts is therefore a necessity, particularly given the unique social and cultural contexts within different countries. In 2010, Thailand was home to 8.5 million elderly people aged ≥60 years and now stands as the fastest population-aging country in Southeast Asia.16,17 In 2006, people aged ≥60 years comprised 11.3% of the Thai population, a figure which is estimated to reach 29.8% by 2050.18,19 People aged ≥80 years, meanwhile, constituted 11.5% of the elderly population,17 a figure which is similarly estimated to increase to 23.6% by 2050.18 Although the impact of social networks has been investigated in a wide range of contexts globally, research on the types and diversity of social contacts in the oldest age group of Thai society is still scant. An increasingly common problem affecting the elderly is depression. This problem may affect as many as 20% of those in the oldest age bracket.20 However, psychogeriatric research investigating, for example, the prevalence of depression among the elderly in a country like Thailand, where psychogeriatric services are not yet in place, is still limited. An inverse relation between social network and geriatric depression was reported in a recent study in the US.21 Similar studies, however, are still necessary for Thailand and the Southeast Asian context to find out how social networks influence geriatric depression, in particular among people aged ≥80 years. The current study, therefore, measuring the social network index (SNI) with a quantitative approach, aimed to reveal the types of social ties experienced by Thais in the oldest age bracket and to assess how their SNI relates to geriatric depression. The findings of the current study seek to highlight the importance of social networks in active aging and may provide insights into ways to preserve the social network diversity of the elderly, resulting in their improved physical and mental well-being.
Conclusion: In the setting of the current study, children, neighbors, and Buddhist temple members have been reported as the most likely social network resources for those in the oldest bracket of the elderly. There is still ample opportunity, however, for this group to strengthen their social ties by engaging in volunteer activity and clubs for the elderly. Moreover, the diversity of their social networks may serve to prevent the problem of depression. Therefore, future research should devise community-based interventions to promote the social networks of those in the oldest age bracket, which may lead the elderly to enjoy active aging and living, in turn positively impacting upon their mental health.
Background: Having a diverse social network is considered to be beneficial to a person's well-being. The significance, however, of social network diversity in the geriatric assessment of people aged ≥80 years has not been adequately investigated within the Southeast Asian context. This study explored the social networks of the elderly aged ≥80 years and assessed the relation between social network and geriatric depression. Methods: This study was a community-based cross-sectional survey conducted in Chiang Mai Province, Northern Thailand. A representative sample of 435 community residents, aged ≥80 years, was included through multistage sampling. The participants' social network diversity was assessed by applying Cohen's social network index (SNI). The geriatric depression scale and activities of daily living measures were carried out during home visits. Descriptive analyses revealed the distribution of the SNI, while the relationship between the SNI and the geriatric depression scale was examined by ordinal logistic regression models controlling for possible covariates such as age, sex, and educational attainment. Results: The median age of the sample was 83 years, with females comprising 54.94% of the sample. The participants' children, their neighbors, and members of Buddhist temples were reported as the most frequent contacts of the study participants. Among the 435 participants, 25% were at risk of social isolation due to having a "limited" social network (SNI 0-3), whereas 37% had a "medium" social network (SNI 4-5), and 38% had a "diverse" social network (SNI ≥6). The SNI did not differ between the two sexes. Activities of daily living scores in the diverse social network group were significantly higher than those in the limited social network group. Multivariate ordinal logistic regression models revealed a significant negative association between social network diversity and geriatric depression. Conclusions: Regular and frequent contact with various social contacts may safeguard against common geriatric depression among persons aged ≥80 years. As a result, screening for those at risk of social isolation is recommended to be integrated into routine primary health care-based geriatric assessment and intervention programs.
8,429
407
[ 3551, 32, 365, 1206, 310, 287, 126 ]
11
[ "social", "participants", "sni", "term", "health", "network", "social network", "study", "disability", "long" ]
[ "networks aging people", "elderly diverse social", "elderly engage social", "age social isolation", "connectedness elderly people" ]
[CONTENT] aging | gerontology | psychogeriatrics | sociology of aging | community | Southeast Asia | Chiang Mai [SUMMARY]
[CONTENT] aging | gerontology | psychogeriatrics | sociology of aging | community | Southeast Asia | Chiang Mai [SUMMARY]
[CONTENT] aging | gerontology | psychogeriatrics | sociology of aging | community | Southeast Asia | Chiang Mai [SUMMARY]
[CONTENT] aging | gerontology | psychogeriatrics | sociology of aging | community | Southeast Asia | Chiang Mai [SUMMARY]
[CONTENT] aging | gerontology | psychogeriatrics | sociology of aging | community | Southeast Asia | Chiang Mai [SUMMARY]
[CONTENT] aging | gerontology | psychogeriatrics | sociology of aging | community | Southeast Asia | Chiang Mai [SUMMARY]
[CONTENT] Activities of Daily Living | Aged, 80 and over | Cross-Sectional Studies | Depression | Family | Female | Friends | Geriatric Assessment | Humans | Interpersonal Relations | Logistic Models | Male | Quality of Life | Social Support | Thailand [SUMMARY]
[CONTENT] Activities of Daily Living | Aged, 80 and over | Cross-Sectional Studies | Depression | Family | Female | Friends | Geriatric Assessment | Humans | Interpersonal Relations | Logistic Models | Male | Quality of Life | Social Support | Thailand [SUMMARY]
[CONTENT] Activities of Daily Living | Aged, 80 and over | Cross-Sectional Studies | Depression | Family | Female | Friends | Geriatric Assessment | Humans | Interpersonal Relations | Logistic Models | Male | Quality of Life | Social Support | Thailand [SUMMARY]
[CONTENT] Activities of Daily Living | Aged, 80 and over | Cross-Sectional Studies | Depression | Family | Female | Friends | Geriatric Assessment | Humans | Interpersonal Relations | Logistic Models | Male | Quality of Life | Social Support | Thailand [SUMMARY]
[CONTENT] Activities of Daily Living | Aged, 80 and over | Cross-Sectional Studies | Depression | Family | Female | Friends | Geriatric Assessment | Humans | Interpersonal Relations | Logistic Models | Male | Quality of Life | Social Support | Thailand [SUMMARY]
[CONTENT] Activities of Daily Living | Aged, 80 and over | Cross-Sectional Studies | Depression | Family | Female | Friends | Geriatric Assessment | Humans | Interpersonal Relations | Logistic Models | Male | Quality of Life | Social Support | Thailand [SUMMARY]
[CONTENT] networks aging people | elderly diverse social | elderly engage social | age social isolation | connectedness elderly people [SUMMARY]
[CONTENT] networks aging people | elderly diverse social | elderly engage social | age social isolation | connectedness elderly people [SUMMARY]
[CONTENT] networks aging people | elderly diverse social | elderly engage social | age social isolation | connectedness elderly people [SUMMARY]
[CONTENT] networks aging people | elderly diverse social | elderly engage social | age social isolation | connectedness elderly people [SUMMARY]
[CONTENT] networks aging people | elderly diverse social | elderly engage social | age social isolation | connectedness elderly people [SUMMARY]
[CONTENT] networks aging people | elderly diverse social | elderly engage social | age social isolation | connectedness elderly people [SUMMARY]
[CONTENT] social | participants | sni | term | health | network | social network | study | disability | long [SUMMARY]
[CONTENT] social | participants | sni | term | health | network | social network | study | disability | long [SUMMARY]
[CONTENT] social | participants | sni | term | health | network | social network | study | disability | long [SUMMARY]
[CONTENT] social | participants | sni | term | health | network | social network | study | disability | long [SUMMARY]
[CONTENT] social | participants | sni | term | health | network | social network | study | disability | long [SUMMARY]
[CONTENT] social | participants | sni | term | health | network | social network | study | disability | long [SUMMARY]
[CONTENT] social | people | oldest | elderly | age | oldest age | 11 | social networks | old | country [SUMMARY]
[CONTENT] gds | analysis | regression | ordinal | stata | adl gds | analyzed | variable | logistic | ordinal logistic [SUMMARY]
[CONTENT] sni | social | network group | table | social network group | group sni | network group sni | network | term | depression [SUMMARY]
[CONTENT] social | oldest | social networks | bracket | elderly | networks | temple members reported likely | prevent problem depression | prevent problem depression future | temple members [SUMMARY]
[CONTENT] social | sni | participants | term | network | social network | study | disability | health | gds [SUMMARY]
[CONTENT] social | sni | participants | term | network | social network | study | disability | health | gds [SUMMARY]
[CONTENT] ||| the Southeast Asian ||| [SUMMARY]
[CONTENT] Chiang Mai Province | Northern Thailand ||| 435 ||| Cohen | SNI ||| daily ||| SNI | SNI [SUMMARY]
[CONTENT] 83 years | 54.94% ||| Buddhist ||| 435 | 25% | SNI 0-3 | 37% | 38% ||| SNI | two ||| ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| the Southeast Asian ||| ||| Chiang Mai Province | Northern Thailand ||| 435 ||| Cohen | SNI ||| daily ||| SNI | SNI ||| ||| 83 years | 54.94% ||| Buddhist ||| 435 | 25% | SNI 0-3 | 37% | 38% ||| SNI | two ||| ||| ||| ||| [SUMMARY]
[CONTENT] ||| the Southeast Asian ||| ||| Chiang Mai Province | Northern Thailand ||| 435 ||| Cohen | SNI ||| daily ||| SNI | SNI ||| ||| 83 years | 54.94% ||| Buddhist ||| 435 | 25% | SNI 0-3 | 37% | 38% ||| SNI | two ||| ||| ||| ||| [SUMMARY]
Correlation between goose circovirus and goose parvovirus with gosling feather loss disease and goose broke feather disease in southern Taiwan.
33522153
Goslings in several Taiwanese farms experienced gosling feather loss disease (GFL) at 21-35 days and goose broken feather disease (GBF) at 42-60 days. The prevalence ranges from a few birds to 500 cases per field. It is estimated that about 12,000 geese have been infected; the morbidity is 70-80% and the mortality is 20-30%.
BACKGROUND
Samples were collected from animal hospitals. Molecular and microscopy diagnostics were used to examine 92 geese. Specific quantitative polymerase chain reaction (Q-PCR) assays were performed to evaluate GPV and GoCV viral loads, and feather loss conditions in the geese were simultaneously evaluated with a scoring method.
METHODS
There was a high prevalence of GoCV and GPV infection in geese showing signs of GFL and GBF. Inclusion bodies were detected in the feather follicles and Lieberkühn crypt epithelial cells. Q-PCR showed a high correlation between feather loss and viral loads during the 3rd-5th weeks. However, infection was not detected using the same tests in 60 healthy geese.
RESULTS
Thus, GFL and GBF appear to be significantly related to GoCV and GPV. The geese's feathers showed increasing recovery after the birds were quarantined and disinfected.
CONCLUSIONS
[ "Animals", "Circoviridae Infections", "Circovirus", "Feathers", "Geese", "Parvoviridae Infections", "Parvovirinae", "Poultry Diseases", "Prevalence", "Taiwan" ]
7850790
INTRODUCTION
Goslings in several Taiwanese farms experienced gosling feather loss disease (GFL) at 21–35 days old and goose broken feather disease (GBF) at 42–60 days. About 40 farms were affected. The incidence ranges from a few birds to 500 cases per field, which accounts for about 1%. It is estimated that about 12,000 geese have been infected, the morbidity is 70–80%, and the mortality is 20–30%. The economic loss may be as high as 10 million. This study aims to investigate the pathogens causing GFL and GBF. A total of 92 geese with these feather diseases were sent to animal hospitals by goose farmers. They came from 40 different farms in 5 counties (Supplementary Fig. 1A and B). There are many factors affecting feather follicle atrophy or underdevelopment, including viral, bacterial, and parasitic infections and endocrine disorders. Common viral infections that can cause GFL and GBF include goose circovirus (GoCV) and goose parvovirus (GPV). Malabsorption of nutrients after enteritis, such as that caused by new gosling viral enteritis (NGVE) and goose haemorrhagic polyomavirus (GHPV), can also cause feather disease. GoCV was first identified by Soike [12] in a large commercial flock of geese in Germany with a history of increased mortality and runting. This case occurred in a one-week-old goose. The clinical symptoms included growth retardation and feather-shedding lesions. Histopathology showed varying degrees of lymphocyte loss and histiocytosis in the bursa of Fabricius, spleen, and thymus. The same geese were also found to have infectious serositis and aspergillosis. Soike et al. [1] believed that GoCV infection would inhibit the immune system of the geese, making them prone to secondary infections and allowing the invasion of mold and bacteria that results in growth retardation. Spherical or coarse-grained cytoplasmic inclusions were found in the bursa of Fabricius and Lieberkühn crypt epithelial cells. When the ultramicroscopic structure of the virus inclusion bodies was examined, it was confirmed that the virus particles, 14 nm in size, were arranged in multilayered or irregular crystalline arrays. The authors speculated that the circovirus can induce immunosuppression and make geese prone to secondary infection. In 2004, survey results from Hungarian goose farms showed that the bursae of Fabricius of 52 flocks of geese were examined by histology; lymphocyte loss in the bursa of Fabricius was observed in 75% of the GoCV-positive populations and 50% of the GoCV-negative populations. Inclusion bodies were found in only three positive goose flocks, and those three flocks were positive for GoCV by dot blot hybridization (DBH) and polymerase chain reaction (PCR) [3]. GPV infection was first reported in Hungary in 1967; on goose farms it is often called gosling plague, goose viral hepatitis, goose influenza, or ascites hepatitis, and it carries high mortality, mainly through fibrinous necrotizing enterocolitis in geese [4]. This is an acute, highly contagious and fatal disease, occurring mainly in goslings with fibrinous necrotizing enteritis, also known as Derzsy's disease [5]. It first invaded goose farms in Taiwan in 1982. Experts and scholars unified the name of the disease as goose viral enteritis. This disease currently occurs in Europe, the United States, Canada, Russia, Israel, China, Japan and Vietnam. The virus is different from the parvoviruses that infect humans and other poultry [6]. The severity of the disease is greatly related to age.
If birds are infected with this virus before one week of age, the mortality is as high as 100%. Infections over four weeks of age rarely show clinical features. If yolk antibody is administered before three weeks of age, it will effectively combat the occurrence of GPV [7]. A new epidemic occurred in western France in 1989, causing 40–50% mortality in 2–4-week-old Muscovy ducks. Although the clinical symptoms are similar to GPV infection, it is not pathogenic to geese and other strains of ducks. After virus isolation and resistance analysis, it was found that this virus and goose parvovirus (GPV) are both parvoviruses, but there are significant differences in their viral nucleic acid and amino acid sequences, so this virus was named Muscovy duck parvovirus (MDPV). Woolcock et al. [8] pointed out that MDPV has been isolated in many countries such as the United States, Germany, Malaysia, Japan, Taiwan, and Thailand. The disease causes the most serious mortality before about five weeks of age, with 30–80% morbidity and 10–50% mortality, and older birds are also susceptible to infection when their immunity is weak [8]. NGVE is an infectious disease that affects goslings < 30 days old with haemorrhagic, fibrinonecrotic, hyperaemic, necrotic and exudative enteritis in the small intestine [9]. It is caused by an adenovirus called the NGVE virus (NGVEV), which was first reported by Cheng [10]. Haemorrhagic nephritis enteritis of geese is an epizootic viral disease caused by infection with GHPV [11]. Since 1969, GHPV has caused serious economic losses in European countries, including Hungary [12], Germany [13] and southern France [14,15]. Geese may also be infected with lice and red mites, which can induce feather loss and occasionally cause naked feathers, known colloquially as ‘ghost pull hairs.’ Insufficient essential amino acids in the feed and malnutrition could become important secondary factors after the outbreak of GFL and GBF.
null
null
RESULTS
PCR diagnosis PCR results were negative for GHPV and NGVEV. The incidence of various pathogens in geese with GFL was found to be GoCV 94.6% (53/56) and GPV 60.7% (34/56) (Supplementary Fig. 5), whereas geese with GBF showed the following incidence of pathogens: GoCV 83.3% (30/36) and GPV 72% (26/36) (Table 1). The results show that GoCV and GPV are significantly correlated with GFL and GBF (p < 0.05). GFL, gosling feather loss disease; GBF, goose broken feather disease; PCR, polymerase chain reaction; GPV, goose parvovirus; GoCV, goose circovirus. Gross lesions and histopathological examination Pathological sections (Fig. 1) reveal the gross and histopathologic findings in cases of goose infectious disease with GFL and GBF: Fig. 1A shows the clinical findings for feather loss (GFL), and Fig. 1B shows the clinical findings for broken feathers (GBF). Fig. 1C-F shows histopathologic findings in cases of GoCV infection confirmed by PCR assay: feather follicle necrosis with an inclusion body (Fig. 1C); folliculitis with necrosis and follicular inflammatory cell infiltration (Fig. 1D); degeneration and necrosis of epithelial cells on the mucous membrane and the crypts of Lieberkühn (Fig. 1E); and intranuclear inclusion bodies in the degenerated epithelial cells of the crypts of Lieberkühn (Fig. 1F). Building a phylogenetic tree from the alignment New sequences of GoCV and GPV (569 bp and 806 bp, respectively) were generated and submitted to the public repository GenBank. The GoCV amplified sequence (VP1) was examined with genetic sequence analysis software. A phylogenetic tree was built on the basis of the GoCV (VP1) alignment for samples from Pingtung, Kaohsiung, Yunlin and elsewhere in Taiwan. This was compared with 11 GoCV sequences isolated in Taiwan. They were generally divided into two groups. In the first group, comprising the Kaohsiung (W103-1337) and Pingtung (W103-1342) areas, the GoCV similarity rate was 100%, whereas to the Taiwanese strains (TW1), (TW10) and (TW11) the similarity was 98%–99%. The other group comprised amplified sequences from the Yunlin samples (W103-1368) and (W103-1346). The similarity of (W103-1368) compared with (TW9) and (TW8) was 100%. The similarity of (W103-1346) compared with (TW8) and (TW6) was 98%.
The South Taiwan samples were similar to the Yunlin area samples (95%). This implies that GoCV strains differ between the south and north of Taiwan (Fig. 2). In the GPV (VP2) group of amplified sequence fragments found by PCR in this study, the Taiwanese strains 06-0329 and 82-0321 were compared with strains from China and Europe and found to strongly resemble them, with a similarity of over 99.6%, compared with Muscovy duck parvovirus, to which the similarity is only 80.9% (Fig. 3). Real-time Q-PCR in quarantined and disinfected birds Viral loads were observed in the GFL birds from 39 to 66 days old. By the age of 66 days, although the feathers had all returned to normal growth, the virus had not been eliminated from the system. To reduce the effects of environmental pathogens and secondary infection by bacteria, geese were separated into a treatment group (T) and a control group (C). Virus levels were compared between the 2 groups using a t-test. No significant difference in GoCV levels was found between T and C (p = 0.082; p > 0.05). GPV levels in C and T also did not differ significantly (p = 0.3; p > 0.05). Virus levels were monitored in birds from the age of 39 to 66 days, and the decrease in GoCV and GPV levels over time was noted. Self-designed primers were used to quantify the viral load curves (Fig. 4A) and the change in the geese's feathers (Fig. 4B) during field monitoring of the two viral loads. Compared with the trend of the feather scores, a similar trend can be seen, so the data were divided into two sections and Pearson's correlation coefficient tests were performed for the 3rd–5th weeks and from 6 weeks of age, as mentioned earlier. The results of GoCV and GPV in GBF birds by t-test indicated no significant difference in GoCV levels between C and T (p = 0.43; p > 0.05) and no significant difference in GPV levels between C and T (p = 0.348; p > 0.05). GPV, goose parvovirus; GoCV, goose circovirus.
However, during the treatment period, the birds between 42 and 63 days old with GFL did not have folliculitis, their mobility gradually improved, the grooming behaviour of the geese gradually became normal, and the feather colour turned from yellow to white (Fig. 5A). Similarly, birds 49–75 days old with GBF gradually recovered. The recovery of broken feathers was maintained for about 3–4 weeks. The results indicated that if the breeding environment is properly controlled, there might be cases of early recovery (Fig. 5B). GPV, goose parvovirus; GoCV, goose circovirus; GFL, gosling feather loss disease; GBF, goose broken feather disease. Negative control The 60 healthy geese did not show the presence of GoCV infection, GPV infection or inclusion bodies (Table 1).
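The two analyses described above can be sketched as follows in Python with SciPy: Pearson's correlation between matched weekly viral loads and feather scores, and a two-sample t-test comparing viral loads in the treatment (T) and control (C) groups. The variable names are illustrative placeholders, since the study's per-bird measurements are not reproduced here.

from scipy import stats

def feather_virus_correlation(viral_loads, feather_scores):
    """Pearson's r between matched weekly viral loads and feather scores."""
    r, p = stats.pearsonr(viral_loads, feather_scores)
    return r, p

def compare_treatment_control(loads_treatment, loads_control):
    """Two-sample t-test on viral loads for the T vs C comparison."""
    t, p = stats.ttest_ind(loads_treatment, loads_control)
    return t, p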
null
null
[ "Experimental design", "Birds", "Histopathology", "Molecular biological detection", "Real-time Q-PCR in quarantined and disinfected birds", "Sequence sequencing and comparison", "PCR diagnosis", "Gross lesions and histopathological examination", "Building a phylogenetic tree from the alignment", "Real-time Q-PCR in quarantined and disinfected birds", "Negative control" ]
[ "This study is divided into 2 parts. Part one explores the pathogens of GFL and GBF using microbiology, histopathology and molecular biology. In the second part, the infected geese are separated, treated and observed for feather recovery. Real-time quantitative polymerase chain reaction (Q-PCR) was used to detect the presence of GPV and GoCV viruses in the blood (Supplementary Fig. 2). All experiments within this study conform to the ethical standards issued by the National Research Council. All animal experiments have been approved by the ethics committee of the National Pingtung University of Science and Technology, and care was taken to comply with the 3R concept. In this study, the laboratory animal application form protocol number was NPUST-99-065 and approved number was NPUST-IACUC-99-065.", "A total of 92 geese from 40 farms, comprising 56 showing signs of GFL and 36 with signs of GBF, were examined. Sixty healthy geese (30 goslings and 30 geese) were procured from the same farms with rigorous disinfection and management protocols; they were not affected by goose feather loss or broken feather diseases. These healthy geese were used as negative controls. The grading evaluation of feather shedding uses the feather shedding evaluation proposed in the past literature. According to the feather loss evaluation table, it is divided into back and wings (Supplementary Figs. 3 and 4), there are 5 levels, the lower the score the more complete the feather and the higher the score, obvious severity [16].", "The tissues were embedded in paraffin and processed into 2- to 3-μm-thick sections that were stained with haematoxylin and eosin for histopathological examination. Tissue samples from other gross lesions observed were similarly collected and processed for histopathological assessment.", "PCR protocols and the specificity of primer pairs used are summarised (Supplementary Table 1). RNA and DNA were extracted from tissue using Corning Axygen AP-MN-BF-VNA-250 AxyPrep™ Body Fluid Viral DNA & RNA Purification Miniprep Kits (BIOKIT, Taiwan). Each PCR was performed in a total volume of 20 μL containing 9 µL template DNA, 10 µL Taq DNA, polymerase 2× Master Mix red (1.5 mM MgCl2) and 1 µL primer.\nGoCV PCR protocol was as follows: pre-denaturation 94°C for 5 min; denaturation 94°C for 45 sec; annealing 55°C for 45 sec; extension 72°C for 45 sec, performed for 35 cycles; final extension 72°C for 5 min. GPV PCR protocol was as follows [1718]: pre-denaturation 94°C for 3 min; denaturation 94°C for 30 sec; annealing 60°C for 30 sec; extension 72°C for 30 sec, performed for 35 cycles; and final extension 72°C for 5 min. GHPV PCR protocol was as follows [19]: pre-denaturation 94°C for 3 min; denaturation 94°C for 30 sec; annealing 55°C for 30 sec; extension 72°C for 60 sec; performed for 30 cycles; and final extension 72°C for 5 min. Goose adenovirus PCR protocol was as follows: pre-denaturation 94°C for 5 min; denaturation 94°C for 40 sec; annealing 54°C for 45 sec; extension 72°C for 40 sec, performed for 30 cycles; and final extension 72°C for 10 min. PCR products were analysed by electrophoresis in a 1.5% (w/v) agarose gel.", "Twenty geese from a farm in Wandan Township of Pingtung were separated into 4 groups, 2 GFL and 2 GBF. The geese were treated daily with a spray of 400-times-diluted iodine solution 50–60 mL. The solution was sprayed onto the head, back, wings and abdomen of each goose, and blood samples were collected every 3 days. 
Bio-Rad CFX Manager software was used to analyse the real-time Q-PCR data to continuously observe the quantities of GPV and GoCV in infected geese. Viral loads in geese from 39 to 66 days of age and from 49 to 76 days of age after GPV and GoCV infection were observed. DNA extractions were performed using whole blood. Real-time PCR detection of GPV and GoCV used primers designed with the National Center for Biotechnology Information (NCBI) primer design tool, as follows: GoCV F, 5′-CCAGTCCATTGTCCGAA-3′; GoCV R, 5′-GGAGGAAGACAACTATGGC-3′; GPV 3F, 5′-ACAACTTTGAGTTTACGTTTGAC-3′; and GPV 3R, 5′-ATTCCAGAGGTATTGATCCACTA-3′. Bio-Rad Manager 3.1 Software was used to interpret the PCR results. The measured OD values were entered into Excel for conversion into a standard curve; the plasmid concentration was then used to convert the measurements into virus amounts (plasmid copy numbers). When Q-PCR was completed, the Cq value was obtained (a worked sketch of this Cq-to-copy-number conversion follows this list).", "This experiment used the GPV-specific primer pair GPV (VP2)-F: CCGGGTTGCAGGAGGTAC and R: AGCTACAACAACCACATC from Limn et al. (1996), which amplifies an 800-bp fragment, and the GoCV-specific primer pair GoCV (466-1014)-F: CGGAAGTACCCGACGACTTA and R: ACAATGGACTGGGCTTTCAC from Lin (2005), which amplifies a 568-bp fragment. The virus-specific fragments were sequenced, the sequences were analysed with DNASTAR MegAlign Version 7.1 (DNASTAR, Inc., USA), and the data were provided to the NCBI database. For the gene sequence comparison, the GoCV analysis used TW11/2001 (AF536941.1), TW10/2001 (AF536940.1), TW9/2001 (AF536939.1), TW8/2001 (AF536938.1), TW7/2001 (AF536937.1), TW6/2001 (AF536936.1), TW5/2001 (AF536935.1), TW4/2001 (AF536934.1), TW3/2001 (AF536933.1), TW2/2001 (AF536932.1) and TW1/2001 (AF536931.1) as reference sequences; the GPV analysis used GPV strain Y (China) (KC178571.1), GPV strain E (China) (KC184133.1), GPV strain GDa (China) (HQ891825.1), GPV strain SH (China) (JF333590.1), GPV strain SHFX1201 (China) (KC478066.1), GPV strain VG32-1 (Europe) (EU583392.1), GPV strain Virulent B (Hungary) (U25749.1), GPV strain 06-0329 (Taiwan) (EU583391.1), GPV strain 82-0321 (Taiwan) (AY382884.1) and Muscovy duck parvovirus (AY510603.1) as reference sequences. ClustalW was used to align the sequences and draw a phylogenetic tree, which was evaluated with 1000 bootstrap replications to obtain support values for lineage grouping.", "PCR results were negative for GHPV and NGVEV. The incidence of pathogens in geese with GFL was GoCV 94.6% (53/56) and GPV 60.7% (34/56) (Supplementary Fig. 5), whereas geese with GBF showed GoCV 83.3% (30/36) and GPV 72% (26/36) (Table 1). The results show that GoCV and GPV are significantly correlated with GFL and GBF (p < 0.05).\nGFL, gosling feather loss disease; GBF, goose broken feather disease; PCR, polymerase chain reaction; GPV, goose parvovirus; GoCV, goose circovirus.", "The pathological sections (Fig. 1) reveal the gross and histopathologic findings in cases of GFL and GBF: Fig. 1A shows the clinical findings for feather loss (GFL), and Fig. 1B shows the clinical findings for broken feather (GBF). Fig. 1C-F shows histopathologic findings in cases of GoCV infection confirmed by PCR assay: feather follicle necrosis with an inclusion body (Fig. 1C); folliculitis with follicular necrosis and inflammatory cell infiltration (Fig. 1D); degeneration and necrosis of epithelial cells on the mucous membrane and the crypts of Lieberkühn (Fig. 1E); and intranuclear inclusion bodies in the degenerated epithelial cells of the crypts of Lieberkühn (Fig. 1F).", "New sequences of GoCV and GPV (569 bp and 806 bp, respectively) were generated and submitted to the public repository GenBank. The amplified GoCV sequence (VP1) was examined with genetic sequence analysis software. A phylogenetic tree was built from the GoCV (VP1) alignment of the Pingtung, Kaohsiung, Yunlin and Taiwan samples and compared with 11 GoCV sequences previously isolated in Taiwan. The sequences generally divided into two groups. In the first group, the Kaohsiung (W103-1337) and Pingtung (W103-1342) samples showed 100% GoCV similarity to each other and 98%–99% similarity to the Taiwanese strains TW1, TW10 and TW11. The other group comprised the amplified sequences from the Yunlin samples W103-1368 and W103-1346; the similarity of W103-1368 to TW9 and TW8 was 100%, and the similarity of W103-1346 to TW8 and TW6 was 98%. The South Taiwan samples were 95% similar to the Yunlin area samples. This implies that GoCV strains differ between the south and north of Taiwan (Fig. 2).\nIn the GPV (VP2) group of sequence fragments amplified by PCR in this study, the Taiwanese strains 06-0329 and 82-0321 strongly resembled strains from China and Europe, with a similarity of over 99.6%, compared with only 80.9% similarity to Muscovy duck parvovirus (Fig. 3).", "Viral load was observed in the GFL birds from 39 to 66 days old. By the age of 66 days, although the feathers had all returned to normal growth, the virus had not been eliminated from the system. To reduce the effects of environmental pathogens and secondary bacterial infection, the geese were separated into a treatment group (T) and a control group (C). Virus levels were compared between the 2 groups using a t-test. No significant difference in GoCV levels was found between T and C (p = 0.082; p > 0.05), and GPV levels in C and T did not differ significantly (p = 0.3; p > 0.05).\nVirus levels were monitored in birds from 39 to 66 days of age, and a decrease in GoCV and GPV levels over time was noted. The self-designed primers were used to quantify the viral load curves (Fig. 4A) alongside the goose feather scores (Fig. 4B) during field monitoring of the two viral loads. Because the viral load curves appeared to follow a trend similar to the feather scores, the data were divided into two segments, and Pearson's correlation coefficient tests were performed for the two age periods mentioned earlier. For the GBF birds, the t-test results indicated no significant difference in GoCV levels between C and T (p = 0.43; p > 0.05) and no significant difference in GPV levels between C and T (p = 0.348; p > 0.05).\nGPV, goose parvovirus; GoCV, goose circovirus.\nHowever, during the treatment period, the GFL birds between 42 and 63 days old did not develop folliculitis, their mobility gradually improved, their grooming behaviour returned to normal, and the feather colour turned from yellow to white (Fig. 5A). Similarly, birds 49–75 days old with GBF gradually recovered, and the recovery of broken feathers was maintained for about 3–4 weeks. The results indicate that if the breeding environment is properly controlled, there may be cases of early recovery (Fig. 5B).\nGPV, goose parvovirus; GoCV, goose circovirus; GFL, gosling feather loss disease; GBF, goose broken feather disease.", "The 60 healthy geese did not show the presence of GoCV infection, GPV infection or inclusion bodies (Table 1)." ]
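The Q-PCR quantification step in the methods above converts each measured Cq into a plasmid copy number through a standard curve. The sketch below shows one conventional way to do this; the dilution series, Cq readings and sample values are invented for illustration, not taken from the study, and the helper `cq_to_copies` is our own name.

```python
# Minimal sketch of Cq-to-copy-number conversion via a plasmid standard
# curve, as described in the Q-PCR methods above. Values are illustrative.
import numpy as np

std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])      # plasmid copies/reaction
std_cq     = np.array([14.1, 17.5, 20.9, 24.3, 27.8])  # measured Cq values

# Fit the standard curve: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_cq, 1)
efficiency = 10 ** (-1 / slope) - 1  # ~1.0 means ~100% PCR efficiency
print(f"slope={slope:.2f}, intercept={intercept:.2f}, efficiency={efficiency:.1%}")

def cq_to_copies(cq: float) -> float:
    """Invert the fitted standard curve to estimate copies/reaction."""
    return 10 ** ((cq - intercept) / slope)

for cq in (19.2, 22.7, 26.4):  # placeholder sample Cq values
    print(f"Cq {cq} -> {cq_to_copies(cq):.2e} copies/reaction")
```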
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Experimental design", "Birds", "Histopathology", "Molecular biological detection", "Real-time Q-PCR in quarantined and disinfected birds", "Sequence sequencing and comparison", "RESULTS", "PCR diagnosis", "Gross lesions and histopathological examination", "Building a phylogenetic tree from the alignment", "Real-time Q-PCR in quarantined and disinfected birds", "Negative control", "DISCUSSION" ]
[ "Goslings in several Taiwanese farms experienced gosling feather loss disease (GFL) at 21–35 days old and goose broken feather disease (GBF) at 42–60 days. There are about 40 farms of infections. The incidence ranges from a few birds to 500 cases per field, which accounts for about 1%. It is estimated that about 12,000 geese have been infected, the morbidity is 70–80%, and the mortality is 20–30%. The economic loss may be as high as 10 million. This study aims to investigate the pathogens causing GFL and GBF. A total of 92 geese with these feather diseases were sent to animal hospitals by goose farmers. They came from 40 different farms in 5 counties (Supplementary Fig. 1A and B).\nThere are many factors affecting feather follicle atrophy or underdevelopment, including viral infections, bacterial, parasites infections, and endocrine disorders. Common viral infections can cause GFL and GBF such as goose circovirus (GoCV), goose parvovirus (GPV). Malabsorption of nutrients after enteritis that cause feathers disease, such as new gosling viral enteritis (NGVE) and goose haemorrhagic polyomavirus (GHPV).\nGoCV was first identified in a Germany's large commercial flock of geese with a history of increased mortality and runting by Soike [12]. This case occurred in a one-week-old goose. The clinical symptoms included growth retardation and feather shedding lesions. Histopathology showed varying degrees of lymphocyte loss and histiocytosis lesions in the Fahrenheit bursa, spleen, and thymus. The same geese were also found to have infectious serositis and aspergillosis infection. Soike et al. [1] believed that GoCV infection would inhibit the immune system of the geese, making the geese prone to secondary infections, causing the invasion of mold and bacteria that result geese growth retardation. Spherical or coarse-grained cytoplasmic inclusions were found in Fahrenheit bursa and Lieberkühn crypt epithelial cells. When the ultramicroscopic structure of virus inclusion bodies was examined, it was confirmed that the virus particles arranged in crystals were multilayered or irregular in size 14 nm. The author speculates that the circovirus can induce immunosuppression and make geese prone to secondary infection. In 2004, the survey results in Hungary's goose farm showed that the Fahrenheit bursa of 52 flocks of geese were examined by histology, and 75% of the GoCV-positive populations and 50% of the GoCV-negative populations were observed to have lost lymphocytes in the Fahrenheit bursa. Inclusion bodies were only found in three positive geese flocks, and the three geese flocks were positive for GoCV by dot blot hybridization test (DBH) and polymerase chain reaction (PCR) [3].\nGPV infection was first reported in Hungary in 1967 as a goose farm often called gosling plague, goose viral hepatitis, goose influenza, and ascites hepatitis, with high mortality, mainly caused by violations of goose fiber necrotizing enterocolitis [4]. This is an acute, highly contagious and fatal disease, mainly caused in goslings, fibrous necrotizing enteritis, also known as Derzsy's disease [5]. However, it first invaded Taiwan goose farms in Taiwan in 1982. Experts and scholars unified the disease as goose viral enteritis. This disease is currently infected in Europe, United States, Canada, Russia, Israel, China, Japan and Vietnam. The virus is different from the small virus that infects humans and other poultry [6]. The severity of the disease is greatly related to age. 
If birds are infected with this virus before one week of age, the mortality is as high as 100%. Infections over four weeks of age rarely have clinical features. If the yolk antibody is administered before three weeks of age, it will effectively combat the occurrence of GPV [7].\nA new epidemic occurred in western France in 1989, causing 40–50% mortality of 2–4 weeks old Muscovy ducks. Although the clinical symptoms are similar to GPV infection, it is not pathogenic to geese and other strains of ducks. After virus isolation and resistance analysis, it was found that this virus and goose-origin waterfowl virus (GPV) are parvoviruses, but there are significant differences in viral nucleic acid and amino acid sequences, so this virus was named Muscovy duck parvovirus (MDPV). Woolcock et al. [8] pointed out that MDPV has been isolated in many countries such as the United States, Germany, Malaysia, Japan, Taiwan, and Thailand. The disease causes the most serious death at the age of about five weeks ago, there will be 30–80% morbidity and 10–50% mortality, and older birds are also susceptible to infection when their immunity is weak [8].\nNGVE is an infectious disease that affects goslings < 30 days old with haemorrhagic, fibrinonecrotic, hyperaemic, necrotic and exudative enteritis in the small intestine [9]. It is caused by an adenovirus called the NGVE virus (NGVEV), which was first reported by Cheng [10]. Haemorrhagic nephritis enteritis of geese is an epizootic viral disease caused by infection with GHPV [11]. Since 1969, GHPV has caused serious economic losses in European countries, including Hungary [12], Germany [13] and southern France [1415]. Geese may also be infected with lice and red mites, which can induce feather loss and occasionally cause naked feathers known colloquially as ‘ghost pull hairs.’ Insufficient essential amino acids in the feed and malnutrition could become an important secondary factor after the outbreak of GFL and GBF.", " Experimental design This study is divided into 2 parts. Part one explores the pathogens of GFL and GBF using microbiology, histopathology and molecular biology. In the second part, the infected geese are separated, treated and observed for feather recovery. Real-time quantitative polymerase chain reaction (Q-PCR) was used to detect the presence of GPV and GoCV viruses in the blood (Supplementary Fig. 2). All experiments within this study conform to the ethical standards issued by the National Research Council. All animal experiments have been approved by the ethics committee of the National Pingtung University of Science and Technology, and care was taken to comply with the 3R concept. In this study, the laboratory animal application form protocol number was NPUST-99-065 and approved number was NPUST-IACUC-99-065.\nThis study is divided into 2 parts. Part one explores the pathogens of GFL and GBF using microbiology, histopathology and molecular biology. In the second part, the infected geese are separated, treated and observed for feather recovery. Real-time quantitative polymerase chain reaction (Q-PCR) was used to detect the presence of GPV and GoCV viruses in the blood (Supplementary Fig. 2). All experiments within this study conform to the ethical standards issued by the National Research Council. All animal experiments have been approved by the ethics committee of the National Pingtung University of Science and Technology, and care was taken to comply with the 3R concept. 
\n Birds A total of 92 geese from 40 farms, comprising 56 showing signs of GFL and 36 with signs of GBF, were examined. Sixty healthy geese (30 goslings and 30 geese) were procured from the same farms, which follow rigorous disinfection and management protocols; they were not affected by goose feather loss or broken feather disease. These healthy geese were used as negative controls. Feather shedding was graded using the evaluation scheme proposed in the literature: the back and the wings are scored separately (Supplementary Figs. 3 and 4) on 5 levels, where a lower score indicates more complete feathering and a higher score indicates more obvious severity [16].\n Histopathology The tissues were embedded in paraffin and processed into 2- to 3-μm-thick sections that were stained with haematoxylin and eosin for histopathological examination. Tissue samples from other gross lesions observed were similarly collected and processed for histopathological assessment.\n Molecular biological detection PCR protocols and the specificity of the primer pairs used are summarised (Supplementary Table 1). RNA and DNA were extracted from tissue using Corning Axygen AP-MN-BF-VNA-250 AxyPrep™ Body Fluid Viral DNA & RNA Purification Miniprep Kits (BIOKIT, Taiwan). Each PCR was performed in a total volume of 20 μL containing 9 µL template DNA, 10 µL Taq DNA polymerase 2× Master Mix red (1.5 mM MgCl2) and 1 µL primer.\nThe GoCV PCR protocol was as follows: pre-denaturation 94°C for 5 min; denaturation 94°C for 45 sec; annealing 55°C for 45 sec; extension 72°C for 45 sec, performed for 35 cycles; and final extension 72°C for 5 min. The GPV PCR protocol was as follows [17,18]: pre-denaturation 94°C for 3 min; denaturation 94°C for 30 sec; annealing 60°C for 30 sec; extension 72°C for 30 sec, performed for 35 cycles; and final extension 72°C for 5 min. The GHPV PCR protocol was as follows [19]: pre-denaturation 94°C for 3 min; denaturation 94°C for 30 sec; annealing 55°C for 30 sec; extension 72°C for 60 sec, performed for 30 cycles; and final extension 72°C for 5 min. The goose adenovirus PCR protocol was as follows: pre-denaturation 94°C for 5 min; denaturation 94°C for 40 sec; annealing 54°C for 45 sec; extension 72°C for 40 sec, performed for 30 cycles; and final extension 72°C for 10 min. PCR products were analysed by electrophoresis in a 1.5% (w/v) agarose gel.\n Real-time Q-PCR in quarantined and disinfected birds Twenty geese from a farm in Wandan Township of Pingtung were separated into 4 groups, 2 GFL and 2 GBF. The geese were treated daily with a spray of 50–60 mL of 400-times-diluted iodine solution. The solution was sprayed onto the head, back, wings and abdomen of each goose, and blood samples were collected every 3 days. Bio-Rad CFX Manager software was used to analyse the real-time Q-PCR data to continuously observe the quantities of GPV and GoCV in infected geese. Viral loads in geese from 39 to 66 days of age and from 49 to 76 days of age after GPV and GoCV infection were observed. DNA extractions were performed using whole blood. Real-time PCR detection of GPV and GoCV used primers designed with the National Center for Biotechnology Information (NCBI) primer design tool, as follows: GoCV F, 5′-CCAGTCCATTGTCCGAA-3′; GoCV R, 5′-GGAGGAAGACAACTATGGC-3′; GPV 3F, 5′-ACAACTTTGAGTTTACGTTTGAC-3′; and GPV 3R, 5′-ATTCCAGAGGTATTGATCCACTA-3′. Bio-Rad Manager 3.1 Software was used to interpret the PCR results. The measured OD values were entered into Excel for conversion into a standard curve; the plasmid concentration was then used to convert the measurements into virus amounts (plasmid copy numbers). When Q-PCR was completed, the Cq value was obtained.
\n Sequence sequencing and comparison This experiment used the GPV-specific primer pair GPV (VP2)-F: CCGGGTTGCAGGAGGTAC and R: AGCTACAACAACCACATC from Limn et al. (1996), which amplifies an 800-bp fragment, and the GoCV-specific primer pair GoCV (466-1014)-F: CGGAAGTACCCGACGACTTA and R: ACAATGGACTGGGCTTTCAC from Lin (2005), which amplifies a 568-bp fragment. The virus-specific fragments were sequenced, the sequences were analysed with DNASTAR MegAlign Version 7.1 (DNASTAR, Inc., USA), and the data were provided to the NCBI database. For the gene sequence comparison, the GoCV analysis used TW11/2001 (AF536941.1), TW10/2001 (AF536940.1), TW9/2001 (AF536939.1), TW8/2001 (AF536938.1), TW7/2001 (AF536937.1), TW6/2001 (AF536936.1), TW5/2001 (AF536935.1), TW4/2001 (AF536934.1), TW3/2001 (AF536933.1), TW2/2001 (AF536932.1) and TW1/2001 (AF536931.1) as reference sequences; the GPV analysis used GPV strain Y (China) (KC178571.1), GPV strain E (China) (KC184133.1), GPV strain GDa (China) (HQ891825.1), GPV strain SH (China) (JF333590.1), GPV strain SHFX1201 (China) (KC478066.1), GPV strain VG32-1 (Europe) (EU583392.1), GPV strain Virulent B (Hungary) (U25749.1), GPV strain 06-0329 (Taiwan) (EU583391.1), GPV strain 82-0321 (Taiwan) (AY382884.1) and Muscovy duck parvovirus (AY510603.1) as reference sequences. ClustalW was used to align the sequences and draw a phylogenetic tree, which was evaluated with 1000 bootstrap replications to obtain support values for lineage grouping.", "This study is divided into 2 parts. Part one explores the pathogens of GFL and GBF using microbiology, histopathology and molecular biology. In the second part, the infected geese are separated, treated and observed for feather recovery. Real-time quantitative polymerase chain reaction (Q-PCR) was used to detect the presence of GPV and GoCV in the blood (Supplementary Fig. 2). All experiments within this study conform to the ethical standards issued by the National Research Council. All animal experiments were approved by the ethics committee of the National Pingtung University of Science and Technology, and care was taken to comply with the 3R concept. In this study, the laboratory animal application protocol number was NPUST-99-065 and the approval number was NPUST-IACUC-99-065.", "A total of 92 geese from 40 farms, comprising 56 showing signs of GFL and 36 with signs of GBF, were examined. Sixty healthy geese (30 goslings and 30 geese) were procured from the same farms, which follow rigorous disinfection and management protocols; they were not affected by goose feather loss or broken feather disease. These healthy geese were used as negative controls. Feather shedding was graded using the evaluation scheme proposed in the literature: the back and the wings are scored separately (Supplementary Figs. 3 and 4) on 5 levels, where a lower score indicates more complete feathering and a higher score indicates more obvious severity [16].", "The tissues were embedded in paraffin and processed into 2- to 3-μm-thick sections that were stained with haematoxylin and eosin for histopathological examination. Tissue samples from other gross lesions observed were similarly collected and processed for histopathological assessment.", "PCR protocols and the specificity of the primer pairs used are summarised (Supplementary Table 1). RNA and DNA were extracted from tissue using Corning Axygen AP-MN-BF-VNA-250 AxyPrep™ Body Fluid Viral DNA & RNA Purification Miniprep Kits (BIOKIT, Taiwan). Each PCR was performed in a total volume of 20 μL containing 9 µL template DNA, 10 µL Taq DNA polymerase 2× Master Mix red (1.5 mM MgCl2) and 1 µL primer.\nThe GoCV PCR protocol was as follows: pre-denaturation 94°C for 5 min; denaturation 94°C for 45 sec; annealing 55°C for 45 sec; extension 72°C for 45 sec, performed for 35 cycles; and final extension 72°C for 5 min. The GPV PCR protocol was as follows [17,18]: pre-denaturation 94°C for 3 min; denaturation 94°C for 30 sec; annealing 60°C for 30 sec; extension 72°C for 30 sec, performed for 35 cycles; and final extension 72°C for 5 min. The GHPV PCR protocol was as follows [19]: pre-denaturation 94°C for 3 min; denaturation 94°C for 30 sec; annealing 55°C for 30 sec; extension 72°C for 60 sec, performed for 30 cycles; and final extension 72°C for 5 min. The goose adenovirus PCR protocol was as follows: pre-denaturation 94°C for 5 min; denaturation 94°C for 40 sec; annealing 54°C for 45 sec; extension 72°C for 40 sec, performed for 30 cycles; and final extension 72°C for 10 min. PCR products were analysed by electrophoresis in a 1.5% (w/v) agarose gel.", "Twenty geese from a farm in Wandan Township of Pingtung were separated into 4 groups, 2 GFL and 2 GBF. The geese were treated daily with a spray of 50–60 mL of 400-times-diluted iodine solution. 
The solution was sprayed onto the head, back, wings and abdomen of each goose, and blood samples were collected every 3 days. Bio-Rad CFX Manager software was used to analyse the real-time Q-PCR data to continuously observe the quantities of GPV and GoCV in infected geese. Viral loads in geese from 39 to 66 days of age and from 49 to 76 days of age after GPV and GoCV infection were observed. DNA extractions were performed using whole blood. Real-time PCR detection of GPV and GoCV used primers designed with the National Center for Biotechnology Information (NCBI) primer design tool, as follows: GoCV F, 5′-CCAGTCCATTGTCCGAA-3′; GoCV R, 5′-GGAGGAAGACAACTATGGC-3′; GPV 3F, 5′-ACAACTTTGAGTTTACGTTTGAC-3′; and GPV 3R, 5′-ATTCCAGAGGTATTGATCCACTA-3′. Bio-Rad Manager 3.1 Software was used to interpret the PCR results. The measured OD values were entered into Excel for conversion into a standard curve; the plasmid concentration was then used to convert the measurements into virus amounts (plasmid copy numbers). When Q-PCR was completed, the Cq value was obtained.", "This experiment used the GPV-specific primer pair GPV (VP2)-F: CCGGGTTGCAGGAGGTAC and R: AGCTACAACAACCACATC from Limn et al. (1996), which amplifies an 800-bp fragment, and the GoCV-specific primer pair GoCV (466-1014)-F: CGGAAGTACCCGACGACTTA and R: ACAATGGACTGGGCTTTCAC from Lin (2005), which amplifies a 568-bp fragment. The virus-specific fragments were sequenced, the sequences were analysed with DNASTAR MegAlign Version 7.1 (DNASTAR, Inc., USA), and the data were provided to the NCBI database. For the gene sequence comparison, the GoCV analysis used TW11/2001 (AF536941.1), TW10/2001 (AF536940.1), TW9/2001 (AF536939.1), TW8/2001 (AF536938.1), TW7/2001 (AF536937.1), TW6/2001 (AF536936.1), TW5/2001 (AF536935.1), TW4/2001 (AF536934.1), TW3/2001 (AF536933.1), TW2/2001 (AF536932.1) and TW1/2001 (AF536931.1) as reference sequences; the GPV analysis used GPV strain Y (China) (KC178571.1), GPV strain E (China) (KC184133.1), GPV strain GDa (China) (HQ891825.1), GPV strain SH (China) (JF333590.1), GPV strain SHFX1201 (China) (KC478066.1), GPV strain VG32-1 (Europe) (EU583392.1), GPV strain Virulent B (Hungary) (U25749.1), GPV strain 06-0329 (Taiwan) (EU583391.1), GPV strain 82-0321 (Taiwan) (AY382884.1) and Muscovy duck parvovirus (AY510603.1) as reference sequences. ClustalW was used to align the sequences and draw a phylogenetic tree, which was evaluated with 1000 bootstrap replications to obtain support values for lineage grouping.", " PCR diagnosis PCR results were negative for GHPV and NGVEV. The incidence of pathogens in geese with GFL was GoCV 94.6% (53/56) and GPV 60.7% (34/56) (Supplementary Fig. 5), whereas geese with GBF showed GoCV 83.3% (30/36) and GPV 72% (26/36) (Table 1). The results show that GoCV and GPV are significantly correlated with GFL and GBF (p < 0.05).\nGFL, gosling feather loss disease; GBF, goose broken feather disease; PCR, polymerase chain reaction; GPV, goose parvovirus; GoCV, goose circovirus.\n Gross lesions and histopathological examination The pathological sections (Fig. 1) reveal the gross and histopathologic findings in cases of GFL and GBF: Fig. 1A shows the clinical findings for feather loss (GFL), and Fig. 1B shows the clinical findings for broken feather (GBF). Fig. 1C-F shows histopathologic findings in cases of GoCV infection confirmed by PCR assay: feather follicle necrosis with an inclusion body (Fig. 1C); folliculitis with follicular necrosis and inflammatory cell infiltration (Fig. 1D); degeneration and necrosis of epithelial cells on the mucous membrane and the crypts of Lieberkühn (Fig. 1E); and intranuclear inclusion bodies in the degenerated epithelial cells of the crypts of Lieberkühn (Fig. 1F).\n Building a phylogenetic tree from the alignment New sequences of GoCV and GPV (569 bp and 806 bp, respectively) were generated and submitted to the public repository GenBank. The amplified GoCV sequence (VP1) was examined with genetic sequence analysis software. A phylogenetic tree was built from the GoCV (VP1) alignment of the Pingtung, Kaohsiung, Yunlin and Taiwan samples and compared with 11 GoCV sequences previously isolated in Taiwan. The sequences generally divided into two groups. In the first group, the Kaohsiung (W103-1337) and Pingtung (W103-1342) samples showed 100% GoCV similarity to each other and 98%–99% similarity to the Taiwanese strains TW1, TW10 and TW11. The other group comprised the amplified sequences from the Yunlin samples W103-1368 and W103-1346; the similarity of W103-1368 to TW9 and TW8 was 100%, and the similarity of W103-1346 to TW8 and TW6 was 98%. The South Taiwan samples were 95% similar to the Yunlin area samples. This implies that GoCV strains differ between the south and north of Taiwan (Fig. 2).\nIn the GPV (VP2) group of sequence fragments amplified by PCR in this study, the Taiwanese strains 06-0329 and 82-0321 strongly resembled strains from China and Europe, with a similarity of over 99.6%, compared with only 80.9% similarity to Muscovy duck parvovirus (Fig. 3).
\n Real-time Q-PCR in quarantined and disinfected birds Viral load was observed in the GFL birds from 39 to 66 days old. By the age of 66 days, although the feathers had all returned to normal growth, the virus had not been eliminated from the system. To reduce the effects of environmental pathogens and secondary bacterial infection, the geese were separated into a treatment group (T) and a control group (C). Virus levels were compared between the 2 groups using a t-test. No significant difference in GoCV levels was found between T and C (p = 0.082; p > 0.05), and GPV levels in C and T did not differ significantly (p = 0.3; p > 0.05).\nVirus levels were monitored in birds from 39 to 66 days of age, and a decrease in GoCV and GPV levels over time was noted. The self-designed primers were used to quantify the viral load curves (Fig. 4A) alongside the goose feather scores (Fig. 4B) during field monitoring of the two viral loads. Because the viral load curves appeared to follow a trend similar to the feather scores, the data were divided into two segments, and Pearson's correlation coefficient tests were performed for the two age periods mentioned earlier. For the GBF birds, the t-test results indicated no significant difference in GoCV levels between C and T (p = 0.43; p > 0.05) and no significant difference in GPV levels between C and T (p = 0.348; p > 0.05).\nGPV, goose parvovirus; GoCV, goose circovirus.\nHowever, during the treatment period, the GFL birds between 42 and 63 days old did not develop folliculitis, their mobility gradually improved, their grooming behaviour returned to normal, and the feather colour turned from yellow to white (Fig. 5A). Similarly, birds 49–75 days old with GBF gradually recovered, and the recovery of broken feathers was maintained for about 3–4 weeks. The results indicate that if the breeding environment is properly controlled, there may be cases of early recovery (Fig. 5B).\nGPV, goose parvovirus; GoCV, goose circovirus; GFL, gosling feather loss disease; GBF, goose broken feather disease.\n Negative control The 60 healthy geese did not show the presence of GoCV infection, GPV infection or inclusion bodies (Table 1).", "PCR results were negative for GHPV and NGVEV. The incidence of pathogens in geese with GFL was GoCV 94.6% (53/56) and GPV 60.7% (34/56) (Supplementary Fig. 5), whereas geese with GBF showed GoCV 83.3% (30/36) and GPV 72% (26/36) (Table 1). The results show that GoCV and GPV are significantly correlated with GFL and GBF (p < 0.05).\nGFL, gosling feather loss disease; GBF, goose broken feather disease; PCR, polymerase chain reaction; GPV, goose parvovirus; GoCV, goose circovirus.", "The pathological sections (Fig. 1) reveal the gross and histopathologic findings in cases of GFL and GBF: Fig. 1A shows the clinical findings for feather loss (GFL), and Fig. 1B shows the clinical findings for broken feather (GBF). Fig. 1C-F shows histopathologic findings in cases of GoCV infection confirmed by PCR assay: feather follicle necrosis with an inclusion body (Fig. 1C); folliculitis with follicular necrosis and inflammatory cell infiltration (Fig. 1D); degeneration and necrosis of epithelial cells on the mucous membrane and the crypts of Lieberkühn (Fig. 1E); and intranuclear inclusion bodies in the degenerated epithelial cells of the crypts of Lieberkühn (Fig. 1F).", "New sequences of GoCV and GPV (569 bp and 806 bp, respectively) were generated and submitted to the public repository GenBank. The amplified GoCV sequence (VP1) was examined with genetic sequence analysis software. A phylogenetic tree was built from the GoCV (VP1) alignment of the Pingtung, Kaohsiung, Yunlin and Taiwan samples 
and compared with 11 GoCV sequences previously isolated in Taiwan. The sequences generally divided into two groups. In the first group, the Kaohsiung (W103-1337) and Pingtung (W103-1342) samples showed 100% GoCV similarity to each other and 98%–99% similarity to the Taiwanese strains TW1, TW10 and TW11. The other group comprised the amplified sequences from the Yunlin samples W103-1368 and W103-1346; the similarity of W103-1368 to TW9 and TW8 was 100%, and the similarity of W103-1346 to TW8 and TW6 was 98%. The South Taiwan samples were 95% similar to the Yunlin area samples. This implies that GoCV strains differ between the south and north of Taiwan (Fig. 2).\nIn the GPV (VP2) group of sequence fragments amplified by PCR in this study, the Taiwanese strains 06-0329 and 82-0321 strongly resembled strains from China and Europe, with a similarity of over 99.6%, compared with only 80.9% similarity to Muscovy duck parvovirus (Fig. 3).", "Viral load was observed in the GFL birds from 39 to 66 days old. By the age of 66 days, although the feathers had all returned to normal growth, the virus had not been eliminated from the system. To reduce the effects of environmental pathogens and secondary bacterial infection, the geese were separated into a treatment group (T) and a control group (C). Virus levels were compared between the 2 groups using a t-test. No significant difference in GoCV levels was found between T and C (p = 0.082; p > 0.05), and GPV levels in C and T did not differ significantly (p = 0.3; p > 0.05).\nVirus levels were monitored in birds from 39 to 66 days of age, and a decrease in GoCV and GPV levels over time was noted. The self-designed primers were used to quantify the viral load curves (Fig. 4A) alongside the goose feather scores (Fig. 4B) during field monitoring of the two viral loads. Because the viral load curves appeared to follow a trend similar to the feather scores, the data were divided into two segments, and Pearson's correlation coefficient tests were performed for the two age periods mentioned earlier. For the GBF birds, the t-test results indicated no significant difference in GoCV levels between C and T (p = 0.43; p > 0.05) and no significant difference in GPV levels between C and T (p = 0.348; p > 0.05).\nGPV, goose parvovirus; GoCV, goose circovirus.\nHowever, during the treatment period, the GFL birds between 42 and 63 days old did not develop folliculitis, their mobility gradually improved, their grooming behaviour returned to normal, and the feather colour turned from yellow to white (Fig. 5A). Similarly, birds 49–75 days old with GBF gradually recovered, and the recovery of broken feathers was maintained for about 3–4 weeks. The results indicate that if the breeding environment is properly controlled, there may be cases of early recovery (Fig. 5B).\nGPV, goose parvovirus; GoCV, goose circovirus; GFL, gosling feather loss disease; GBF, goose broken feather disease.", "The 60 healthy geese did not show the presence of GoCV infection, GPV infection or inclusion bodies (Table 1).", "GFL and GBF appear to be a common phenomenon in Taiwan. In this study, 2 cases of inclusion bodies were observed in the feather follicles, and viral nucleic acids were detected in cases of GoCV infection. 
Neither GHPV nor NGVEV was detected.\nThis study was similar to previous challenge and outbreak investigations: in those reports no inclusion bodies were observed, but the detected nucleic acids were universal [20,21,22,23]. Chen conducted an epidemiological survey of GoCV and found that the positive rate of GoCV in Taiwan is as high as 94.7% (197/208); GoCV has become a common virus in Taiwan. It is inferred that circovirus can cause lymphocyte depletion and lead to immunosuppression, which could induce many disease outbreaks [24]. Ball, in an epidemiological investigation of GoCV, simultaneously detected parvovirus, polyomavirus, reticuloendotheliosis virus, Salmonella, Streptococcus and Mycoplasma [3]. Jing et al. [25] purified the circovirus, inoculated 21-day-old goslings, and observed pathological changes including diarrhea, growth retardation and feather growth disorder. PCR testing found circovirus in the bursa of Fabricius, thymus, spleen, duodenum and liver, and the virus could be detected in the blood 14 days after inoculation. Pathological examination showed lymphopenia in the bursa of Fabricius, spleen and liver, with inflammatory cell proliferation in the lungs and kidneys [25]. Fractured feathers have also been found in crows with circovirus infection in Australia, which is quite similar to the situation of the geese in this study: inclusion bodies were found in the basal layer of the feather follicle in sections, with a large number of inflammatory cells infiltrating the follicle, the same as the intranuclear inclusion bodies in the feather follicles seen here. Although some challenge and disease investigations have reported no inclusion bodies in sections, the detected nucleic acids were universal [20,21].\nResearch on the etiology of GFL and GBF is becoming more and more urgent. At present, many scholars have carried out quantitative virus detection for GPV and GoCV, with detection sensitivity as low as 28 copies [26,27]. Woźniakowski et al. [28] also explored the relationship between clinical findings and the amount of waterfowl parvovirus in the system, using quantitative GPV detection combined with clinical pathology as the main method for elucidating the epidemiology. Previous studies had not compared GPV and GoCV virus levels with broken feather disease. Therefore, this study designed primers for Q-PCR testing and found that the virus levels of geese infected in the field are usually quite unstable: viral loads range from 10^5 to 10^8 copies, and the amount of virus cannot be judged from appearance. Consequently, in the statistics the amount of each virus may differ by a factor of 100 or more, yet the data trends shown are almost the same.\nAfter full-length sequencing and comparison of GoCV strains isolated from different regions in 2002–2003, the sequence similarity was above 97%, showing that the genome sequence of GoCV is very stable in Taiwan [24]. The sequence analysed in this study is the VP1 (rep) nucleic acid between nt466 and nt1014. Although it is not a full-length sequence, there are still differences when W103-1368 and W103-1346 from Yunlin County are compared with W103-1339, W103-1337 and W103-1342 from the south, in Kaohsiung. We found that the sequence from the south of Kaohsiung is significantly different from that of Yunlin. It is speculated that different virus types may exist due to regional relationships; further research into this topic is required (a minimal percent-identity sketch follows this block).\nCommercial vaccines against GPV infection are available, and farmers usually vaccinate within 1 week of age. Maternal antibody administered at this time lasts for 35 days, and the immunity continues to 20 weeks of age. Most infected birds show reduced growth and feather development. If a gosling is infected when older than one month, the survivor becomes a lifelong carrier. When a goose recovers from parvovirus infection, large numbers of antibodies remain in the body for 80 months; at this time, the maternal antibody of breeding geese can be transferred to the hatched goslings and achieves a good immune effect [29]. Goose parvovirus infection often causes hemorrhagic enteritis, which results in poor absorption efficiency, relatively slow growth and uneven feathering. The feather loss is mainly related to GPV. Although a large amount of inflammatory cell infiltration can be seen in the feather follicles and the PCR diagnosis of parvovirus is positive, in situ hybridization (ISH) or immunohistochemical staining (IHC) is still needed to confirm whether GPV causes the feather disease.\nBacterial infection is usually secondary in feather diseases. The bacteria found in this study were Riemerella anatipestifer, Salmonella spp., Escherichia coli, Staphylococcus aureus and Pasteurella multocida; no ectoparasitic infestation was found. Sporadic cases also showed amyloidosis, renal tubular degeneration and deposition of calcium salts. The feeds were found to be insufficient in all essential amino acids compared with the standard values, so malnutrition after the outbreak of the disease leads to more serious GFL and GBF.\nGoCV and GPV are still quite prevalent in most Taiwanese goose farms. Because circovirus is a non-enveloped virus, previous studies found it quite difficult to eliminate GoCV and GPV: the pathogens were still present after treatment with common surfactants and halogen disinfectants [30,31]. Houses need prolonged cleaning while empty and must be disinfected with formalin fumigation and phenol, which can at least reduce the pathogen load and improve the biosecurity of the geese. The disinfectant used in this study was halogen-based iodine, and the disinfectant used to clean the environment was Anteweak. Although disinfection tended to reduce the virus, the virus could not be completely eradicated." ]
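The similarity percentages quoted throughout the sequence comparisons above reduce to pairwise percent identity over an alignment. A minimal, self-contained sketch of that calculation follows; the two short aligned fragments are invented for illustration and are not the GenBank records named in the methods.

```python
# Minimal sketch of the pairwise percent-identity measure underlying the
# similarity figures above (e.g., 98%-100% between GoCV isolates).

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity over aligned positions, skipping gap-gap columns."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be pre-aligned to equal length")
    compared = matches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" and b == "-":
            continue  # column is a gap in both sequences; ignore it
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared

# Two invented aligned fragments for illustration
print(f"{percent_identity('ATGGCTA-GCTTACGGA', 'ATGGCTACGCTTACGAA'):.1f}%")
```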
[ "intro", "materials|methods", null, null, null, null, null, null, "results", null, null, null, null, null, "discussion" ]
[ "Circovirus", "goose disease", "parvovirus", "polymerase chain reaction", "Taiwan" ]
INTRODUCTION: Goslings in several Taiwanese farms experienced gosling feather loss disease (GFL) at 21–35 days old and goose broken feather disease (GBF) at 42–60 days. There are about 40 farms of infections. The incidence ranges from a few birds to 500 cases per field, which accounts for about 1%. It is estimated that about 12,000 geese have been infected, the morbidity is 70–80%, and the mortality is 20–30%. The economic loss may be as high as 10 million. This study aims to investigate the pathogens causing GFL and GBF. A total of 92 geese with these feather diseases were sent to animal hospitals by goose farmers. They came from 40 different farms in 5 counties (Supplementary Fig. 1A and B). There are many factors affecting feather follicle atrophy or underdevelopment, including viral infections, bacterial, parasites infections, and endocrine disorders. Common viral infections can cause GFL and GBF such as goose circovirus (GoCV), goose parvovirus (GPV). Malabsorption of nutrients after enteritis that cause feathers disease, such as new gosling viral enteritis (NGVE) and goose haemorrhagic polyomavirus (GHPV). GoCV was first identified in a Germany's large commercial flock of geese with a history of increased mortality and runting by Soike [12]. This case occurred in a one-week-old goose. The clinical symptoms included growth retardation and feather shedding lesions. Histopathology showed varying degrees of lymphocyte loss and histiocytosis lesions in the Fahrenheit bursa, spleen, and thymus. The same geese were also found to have infectious serositis and aspergillosis infection. Soike et al. [1] believed that GoCV infection would inhibit the immune system of the geese, making the geese prone to secondary infections, causing the invasion of mold and bacteria that result geese growth retardation. Spherical or coarse-grained cytoplasmic inclusions were found in Fahrenheit bursa and Lieberkühn crypt epithelial cells. When the ultramicroscopic structure of virus inclusion bodies was examined, it was confirmed that the virus particles arranged in crystals were multilayered or irregular in size 14 nm. The author speculates that the circovirus can induce immunosuppression and make geese prone to secondary infection. In 2004, the survey results in Hungary's goose farm showed that the Fahrenheit bursa of 52 flocks of geese were examined by histology, and 75% of the GoCV-positive populations and 50% of the GoCV-negative populations were observed to have lost lymphocytes in the Fahrenheit bursa. Inclusion bodies were only found in three positive geese flocks, and the three geese flocks were positive for GoCV by dot blot hybridization test (DBH) and polymerase chain reaction (PCR) [3]. GPV infection was first reported in Hungary in 1967 as a goose farm often called gosling plague, goose viral hepatitis, goose influenza, and ascites hepatitis, with high mortality, mainly caused by violations of goose fiber necrotizing enterocolitis [4]. This is an acute, highly contagious and fatal disease, mainly caused in goslings, fibrous necrotizing enteritis, also known as Derzsy's disease [5]. However, it first invaded Taiwan goose farms in Taiwan in 1982. Experts and scholars unified the disease as goose viral enteritis. This disease is currently infected in Europe, United States, Canada, Russia, Israel, China, Japan and Vietnam. The virus is different from the small virus that infects humans and other poultry [6]. The severity of the disease is greatly related to age. 
Birds infected with this virus before one week of age suffer mortality as high as 100%, whereas infections after four weeks of age rarely produce clinical signs. Yolk antibody administered before three weeks of age effectively prevents GPV disease [7]. A new epidemic occurred in western France in 1989, causing 40–50% mortality in 2–4-week-old Muscovy ducks. Although the clinical signs resembled GPV infection, the agent was not pathogenic to geese or to other duck breeds. Virus isolation and characterisation showed that, like goose parvovirus (GPV), it is a parvovirus, but significant differences in nucleic acid and amino acid sequences led to its designation as Muscovy duck parvovirus (MDPV). Woolcock et al. [8] pointed out that MDPV has been isolated in many countries, including the United States, Germany, Malaysia, Japan, Taiwan and Thailand. The disease is most lethal up to about five weeks of age, with 30–80% morbidity and 10–50% mortality, and older birds are also susceptible when their immunity is weak [8]. NGVE is an infectious disease that affects goslings < 30 days old with haemorrhagic, fibrinonecrotic, hyperaemic, necrotic and exudative enteritis in the small intestine [9]. It is caused by an adenovirus called the NGVE virus (NGVEV), first reported by Cheng [10]. Haemorrhagic nephritis enteritis of geese is an epizootic viral disease caused by infection with GHPV [11]. Since 1969, GHPV has caused serious economic losses in European countries, including Hungary [12], Germany [13] and southern France [14,15]. Geese may also be infested with lice and red mites, which can induce feather loss and occasionally cause naked patches known colloquially as 'ghost pull hairs.' Insufficient essential amino acids in the feed and malnutrition can become important secondary factors after the outbreak of GFL and GBF. MATERIALS AND METHODS: Experimental design This study is divided into 2 parts. Part one explores the pathogens of GFL and GBF using microbiology, histopathology and molecular biology. In the second part, the infected geese are separated, treated and observed for feather recovery; real-time quantitative polymerase chain reaction (Q-PCR) was used to detect GPV and GoCV in the blood (Supplementary Fig. 2). All experiments within this study conform to the ethical standards issued by the National Research Council. All animal experiments were approved by the ethics committee of the National Pingtung University of Science and Technology, and care was taken to comply with the 3R concept. The laboratory animal application protocol number was NPUST-99-065, and the approval number was NPUST-IACUC-99-065.
Birds A total of 92 geese from 40 farms, comprising 56 showing signs of GFL and 36 with signs of GBF, were examined. Sixty healthy geese (30 goslings and 30 adult geese) were procured from the same farms, which followed rigorous disinfection and management protocols; they were not affected by goose feather loss or broken feather disease and served as negative controls. Feather shedding was graded using an evaluation scheme from the literature: the back and the wings are scored separately (Supplementary Figs. 3 and 4) on a 5-level scale, in which the lower the score, the more complete the plumage, and the higher the score, the more severe the feather loss [16]. Histopathology The tissues were embedded in paraffin and processed into 2- to 3-μm-thick sections that were stained with haematoxylin and eosin for histopathological examination. Tissue samples from other observed gross lesions were similarly collected and processed for histopathological assessment. Molecular biological detection PCR protocols and the specificity of the primer pairs used are summarised in Supplementary Table 1. RNA and DNA were extracted from tissue using Corning Axygen AP-MN-BF-VNA-250 AxyPrep™ Body Fluid Viral DNA & RNA Purification Miniprep Kits (BIOKIT, Taiwan). Each PCR was performed in a total volume of 20 μL containing 9 µL template DNA, 10 µL Taq DNA polymerase 2× Master Mix red (1.5 mM MgCl2) and 1 µL primer. The GoCV PCR protocol was as follows: pre-denaturation 94°C for 5 min; denaturation 94°C for 45 sec; annealing 55°C for 45 sec; extension 72°C for 45 sec, performed for 35 cycles; final extension 72°C for 5 min. The GPV PCR protocol was as follows [17,18]: pre-denaturation 94°C for 3 min; denaturation 94°C for 30 sec; annealing 60°C for 30 sec; extension 72°C for 30 sec, performed for 35 cycles; final extension 72°C for 5 min. The GHPV PCR protocol was as follows [19]: pre-denaturation 94°C for 3 min; denaturation 94°C for 30 sec; annealing 55°C for 30 sec; extension 72°C for 60 sec, performed for 30 cycles; final extension 72°C for 5 min. The goose adenovirus PCR protocol was as follows: pre-denaturation 94°C for 5 min; denaturation 94°C for 40 sec; annealing 54°C for 45 sec; extension 72°C for 40 sec, performed for 30 cycles; final extension 72°C for 10 min. PCR products were analysed by electrophoresis in a 1.5% (w/v) agarose gel.
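The four thermocycling programmes above differ only in temperatures, hold times and cycle counts, so they are easy to capture as data. A minimal illustrative sketch in Python (the dictionary layout and the "GoAdV" label are ours, not from the original protocol sheets):

    # Illustrative only: the four PCR programmes described above, as data.
    # Each step is (temperature in deg C, hold time in seconds); "GoAdV" is
    # our shorthand for the goose adenovirus programme.
    PCR_PROGRAMMES = {
        "GoCV":  {"pre": (94, 300), "denature": (94, 45), "anneal": (55, 45),
                  "extend": (72, 45), "cycles": 35, "final": (72, 300)},
        "GPV":   {"pre": (94, 180), "denature": (94, 30), "anneal": (60, 30),
                  "extend": (72, 30), "cycles": 35, "final": (72, 300)},
        "GHPV":  {"pre": (94, 180), "denature": (94, 30), "anneal": (55, 30),
                  "extend": (72, 60), "cycles": 30, "final": (72, 300)},
        "GoAdV": {"pre": (94, 300), "denature": (94, 40), "anneal": (54, 45),
                  "extend": (72, 40), "cycles": 30, "final": (72, 600)},
    }

    def runtime_minutes(p):
        """Rough lower bound on run time, ignoring ramp rates."""
        cycled = sum(t for _, t in (p["denature"], p["anneal"], p["extend"]))
        return (p["pre"][1] + cycled * p["cycles"] + p["final"][1]) / 60

    for name, prog in PCR_PROGRAMMES.items():
        print(f"{name}: ~{runtime_minutes(prog):.0f} min")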
Real-time Q-PCR in quarantined and disinfected birds Twenty geese from a farm in Wandan Township of Pingtung were separated into 4 groups (2 GFL and 2 GBF). The geese were treated daily with 50–60 mL of a 400-fold-diluted iodine solution sprayed onto the head, back, wings and abdomen of each goose, and blood samples were collected every 3 days. Bio-Rad CFX Manager software was used to analyse the real-time Q-PCR data and continuously follow the quantification of GPV and GoCV in infected geese. Viral load changes were observed in geese from 39 to 66 days of age and from 49 to 76 days of age after GPV and GoCV infection. DNA extractions were performed using whole blood. The primers for real-time PCR detection of GPV and GoCV, designed with the National Center for Biotechnology Information (NCBI) primer-design tool, were as follows: GoCV F, 5′-CCAGTCCATTGTCCGAA-3′; GoCV R, 5′-GGAGGAAGACAACTATGGC-3′; GPV 3F, 5′-ACAACTTTGAGTTTACGTTTGAC-3′; and GPV 3R, 5′-ATTCCAGAGGTATTGATCCACTA-3′. Bio-Rad Manager 3.1 Software was used to interpret the PCR results. The measured OD values were entered into Excel for conversion into a standard curve, after which the plasmid concentration was used to convert the measurement into virus amount (plasmid copy number). When each Q-PCR run was completed, the Cq value was obtained.
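The Excel-based standard-curve step converts each Cq value into a plasmid copy number via the fitted line Cq = slope × log10(copies) + intercept. A minimal sketch, with hypothetical slope and intercept values standing in for the study's actual fit:

    # Hypothetical standard-curve parameters; the study fitted its own curve
    # in Excel from serial plasmid dilutions, and those values are not given.
    SLOPE = -3.32      # a slope near -3.32 corresponds to ~100% PCR efficiency
    INTERCEPT = 38.5   # assumed Cq of a single template copy

    def cq_to_copies(cq, slope=SLOPE, intercept=INTERCEPT):
        """Invert the standard curve Cq = slope*log10(copies) + intercept."""
        return 10 ** ((cq - intercept) / slope)

    # Example: Cq = 22 maps to roughly 1e5 copies under these assumptions.
    print(f"{cq_to_copies(22.0):.2e} copies")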
Sequencing and sequence comparison This experiment used the GPV-specific primer pair GPV (VP2)-F: CCGGGTTGCAGGAGGTAC, R: AGCTACAACAACCACATC described by Limn et al. in 1996, which amplifies an 800-bp fragment, and the GoCV-specific primer pair GoCV (466-1014)-F: CGGAAGTACCCGACGACTTA, R: ACAATGGACTGGGCTTTCAC described by Lin in 2005, which amplifies a 568-bp fragment. The virus-specific fragments were sequenced, the sequences were analysed with the DNASTAR software MegAlign Version 7.1 (DNASTAR, Inc., USA), and the data were deposited in the NCBI database. For the sequence comparison, the GoCV reference sequences were TW11/2001 (AF536941.1), TW10/2001 (AF536940.1), TW9/2001 (AF536939.1), TW8/2001 (AF536938.1), TW7/2001 (AF536937.1), TW6/2001 (AF536936.1), TW5/2001 (AF536935.1), TW4/2001 (AF536934.1), TW3/2001 (AF536933.1), TW2/2001 (AF536932.1) and TW1/2001 (AF536931.1); the GPV reference sequences were GPV strain Y (China) (KC178571.1), GPV strain E (China) (KC184133.1), GPV strain GDa (China) (HQ891825.1), GPV strain SH (China) (JF333590.1), GPV strain SHFX1201 (China) (KC478066.1), GPV strain VG32-1 (Europe) (EU583392.1), GPV strain Virulent B (Hungary) (U25749.1), GPV strain 06-0329 (Taiwan) (EU583391.1), GPV strain 82-0321 (Taiwan) (AY382884.1) and Muscovy duck parvovirus (AY510603.1). ClustalW was used to align the sequences and draw a phylogenetic tree, which was evaluated with 1,000 bootstrap replications to assess the support values for lineage grouping.
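MegAlign and ClustalW are interactive tools; a comparable open-source pipeline can be sketched with Biopython, assuming a pre-made alignment file and a neighbour-joining tree on identity distances (our choices, not necessarily the study's exact settings):

    # Sketch of an alignment-to-bootstrap-consensus-tree pipeline with
    # Biopython; "gocv_vp1_aligned.fasta" is a hypothetical alignment file.
    from Bio import AlignIO, Phylo
    from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
    from Bio.Phylo.Consensus import bootstrap_consensus_tree, majority_consensus

    alignment = AlignIO.read("gocv_vp1_aligned.fasta", "fasta")

    calculator = DistanceCalculator("identity")              # identity distances
    constructor = DistanceTreeConstructor(calculator, "nj")  # neighbour joining

    # 1,000 bootstrap replicates, as in the study, summarised as a consensus tree.
    tree = bootstrap_consensus_tree(alignment, 1000, constructor, majority_consensus)
    Phylo.draw_ascii(tree)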
RESULTS: PCR diagnosis PCR results were negative for GHPV and NGVEV. The incidence of pathogens in geese with GFL was GoCV 94.6% (53/56) and GPV 60.7% (34/56) (Supplementary Fig. 5), whereas geese with GBF showed GoCV 83.3% (30/36) and GPV 72% (26/36) (Table 1). The results show that GoCV and GPV are significantly correlated with GFL and GBF (p < 0.05). GFL, gosling feather loss disease; GBF, goose broken feather disease; PCR, polymerase chain reaction; GPV, goose parvovirus; GoCV, goose circovirus.
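As a rough cross-check, the significance of the association can be recomputed from the reported counts alone, e.g. GoCV positivity in GFL geese (53/56) against the 60 healthy control geese, all of which tested negative. A sketch with SciPy:

    # Re-checking the reported association from the published counts:
    # GoCV-positive vs GoCV-negative in GFL geese (53/56) and in the
    # 60 healthy control geese (all negative).
    from scipy.stats import chi2_contingency

    table = [[53, 3],    # GFL geese
             [0, 60]]    # healthy controls
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # p is far below 0.05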
Gross lesions and histopathological examination Pathological sections (Fig. 1) reveal the gross and histopathologic findings in cases of goose infectious disease with GFL and GBF: Fig. 1A shows the clinical findings for feather loss (GFL), and Fig. 1B shows the clinical findings for broken feather (GBF). Fig. 1C-F shows histopathologic findings in PCR-confirmed cases of GoCV infection: feather follicle necrosis with an inclusion body (Fig. 1C); folliculitis with necrotic follicles and inflammatory cell infiltration (Fig. 1D); degeneration and necrosis of epithelial cells on the mucous membrane and the crypts of Lieberkühn (Fig. 1E); and intranuclear inclusion bodies in the degenerated epithelial cells of the crypts of Lieberkühn (Fig. 1F). Building a phylogenetic tree from the alignment New sequences of GoCV and GPV (569 bp and 806 bp, respectively) were generated and submitted to the public repository GenBank. The amplified GoCV sequence (VP1) was examined with genetic sequence analysis software. A phylogenetic tree was built on the basis of the GoCV (VP1) alignment for samples from Pingtung, Kaohsiung and Yunlin and compared with 11 GoCV sequences previously isolated in Taiwan. The sequences generally divided into two groups. In the first group, the Kaohsiung (W103-1337) and Pingtung (W103-1342) sequences were 100% similar, and their similarity to the Taiwanese strains TW1, TW10 and TW11 was 98%–99%. The other group comprised the amplified sequences from the Yunlin samples W103-1368 and W103-1346: the similarity of W103-1368 to TW9 and TW8 was 100%, and the similarity of W103-1346 to TW8 and TW6 was 98%. The southern Taiwan samples were 95% similar to the Yunlin samples, implying that GoCV strains differ between the south and north of Taiwan (Fig. 2). In the GPV (VP2) group of amplified fragments, the Taiwanese strains 06-0329 and 82-0321 strongly resembled strains from China and Europe, with a similarity of over 99.6%, compared with a similarity of only 80.9% to Muscovy duck parvovirus (Fig. 3).
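The similarity percentages quoted above are pairwise identity scores over the aligned amplicons. A minimal sketch of that computation (toy sequences only, not the real 568-bp amplicons):

    def percent_identity(seq1, seq2):
        """Percent identity between two equal-length aligned sequences,
        skipping columns that contain an alignment gap ('-')."""
        pairs = [(a, b) for a, b in zip(seq1, seq2) if "-" not in (a, b)]
        return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

    # Toy 10-bp example only; the study compared 568-bp GoCV amplicons.
    print(percent_identity("CGGAAGTACC", "CGGAAGTACC"))   # 100.0
    print(percent_identity("CGGAAGTACC", "CGGTAGTACC"))   # 90.0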
Real-time Q-PCR in quarantined and disinfected birds Viral load was followed in the GFL birds from 39 to 66 days of age. By the age of 66 days, although the feathers had all returned to normal growth, the virus had not been eliminated from the system. To reduce the effects of environmental pathogens and secondary bacterial infection, the geese were separated into a treatment group (T) and a control group (C), and virus levels were compared between the 2 groups using a t-test. No significant difference in GoCV levels was found between T and C (p = 0.082; p > 0.05), and GPV levels in C and T also did not differ significantly (p = 0.3; p > 0.05). Virus levels were monitored in birds from 39 to 66 days of age, and a decrease in GoCV and GPV levels over time was noted. The self-designed primers were used to quantify the virus-level curve (Fig. 4A) alongside the change in the geese's feathers (Fig. 4B) during field monitoring of the two viral loads. The viral-load trend appeared similar to the trend in feather score, so the data were divided into two sections, and Pearson's correlation coefficient tests were performed for the periods at 3–5 weeks and after 6 weeks of age mentioned earlier. For GBF, t-test calculation indicated no significant difference in GoCV levels between C and T (p = 0.43; p > 0.05) and no significant difference in GPV levels between C and T (p = 0.348; p > 0.05). GPV, goose parvovirus; GoCV, goose circovirus. However, during the treatment period, the birds between 42 and 63 days old with GFL no longer had folliculitis, their mobility gradually improved, their grooming behaviour gradually became normal and their feather colour turned from yellow to white (Fig. 5A). Similarly, birds 49–75 days old with GBF gradually recovered, and the recovery of broken feathers was maintained for about 3–4 weeks. The results indicated that if the breeding environment is properly controlled, there may be cases of early recovery (Fig. 5B). GPV, goose parvovirus; GoCV, goose circovirus; GFL, gosling feather loss disease; GBF, goose broken feather disease.
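The statistics in this subsection are a Pearson correlation between viral load and feather score plus t-tests comparing treatment and control virus levels. A sketch with SciPy, using made-up numbers in place of the measured data in Fig. 4:

    # Made-up numbers standing in for the measured data in Fig. 4A/B.
    from scipy.stats import pearsonr, ttest_ind

    viral_load    = [2.1e7, 1.5e7, 8.0e6, 3.2e6, 9.0e5, 4.0e5]  # hypothetical copies
    feather_score = [4, 4, 3, 2, 2, 1]                          # hypothetical grades
    r, p = pearsonr(viral_load, feather_score)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")

    # Treatment (T) versus control (C) virus levels, as in the t-tests above.
    t_group = [1.2e6, 9.5e5, 7.7e5, 1.1e6]   # hypothetical
    c_group = [1.4e6, 1.0e6, 8.9e5, 1.3e6]   # hypothetical
    stat, p = ttest_ind(t_group, c_group)
    print(f"t = {stat:.2f}, p = {p:.3f}")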
Negative control The 60 healthy geese did not show the presence of GoCV infection, GPV infection or inclusion bodies (Table 1).
DISCUSSION: GFL and GBF appear to be a common phenomenon in Taiwan. In this study, inclusion bodies were observed in the feather follicles in 2 cases, and viral nucleic acids were detected in the cases of GoCV infection; neither GHPV nor NGVEV was detected. This is similar to previous challenge and outbreak investigations, in which no inclusion bodies were observed in sections but viral nucleic acids were consistently detected [20,21,22,23]. Chen conducted an epidemiological survey of GoCV and found that the positive rate in Taiwan is as high as 94.7% (197/208); GoCV has become a common virus in Taiwan. It is inferred that circovirus can cause lymphocyte depletion and immunosuppression, which could induce many disease outbreaks [24]. Ball, in an epidemiological investigation of GoCV, simultaneously detected parvovirus, polyomavirus, reticuloendotheliosis virus, Salmonella, Streptococcus and Mycoplasma [3]. Jing et al. [25] purified circovirus, inoculated 21-day-old goslings and observed pathological changes including diarrhoea, growth retardation and feather growth disorder. Circovirus was PCR-positive in the bursa of Fabricius, thymus, spleen, duodenum and liver, and the virus could be detected in the blood 14 days after inoculation. Pathological examination showed lymphocyte depletion in the bursa of Fabricius, spleen and liver, with inflammatory cell proliferation in the lungs and kidneys [25]. Broken feathers have also been found in Australia in crows with circovirus infection, a situation quite similar to that of the geese in this study: inclusion bodies were found in the basal layer of the feather follicle in sections, with heavy inflammatory cell infiltration of the follicle, the same as the intranuclear inclusion bodies in the feather follicles seen here. Some reports have pointed out that, although no inclusion bodies were seen in sections during challenge and disease investigations, the nucleic acids detected were universal [20,21]. Research on the aetiology of GFL and GBF is becoming increasingly urgent. Many groups have carried out quantitative virus detection for GPV and GoCV, with sensitivity down to as few as 28 copies [26,27]. Woźniakowski et al. [28] also explored the relationship between clinical findings and the amount of waterfowl parvovirus in the system, using quantitative GPV detection combined with clinical pathology as a principal method for elucidating the epidemiology. No previous study has compared GPV and GoCV virus levels with broken feather disease. Therefore, this study designed primers for Q-PCR testing and found that virus levels in field-infected geese are usually quite unstable, ranging from 10^5 to 10^8 copies, and the amount of virus cannot be judged from the birds' appearance. Consequently, when the statistics were calculated, the virus amounts between individuals could differ by 100-fold or more, yet the data trends shown were almost the same.
After full-length sequencing and comparison of the GoCV strains isolated from different regions in 2002–2003, the sequence similarity was above 97%, showing that the genome sequence of GoCV is very stable in Taiwan [24]. The sequence analysed in this study is the VP1 (rep) nucleic acid between nt466 and nt1014. Although it is not a full-length sequence, there are still differences between W103-1368 and W103-1346 from Yunlin County and W103-1339, W103-1337 and W103-1342 from the Kaohsiung area in the south. The sequences from the south of Kaohsiung are significantly different from those of Yunlin, suggesting that virus types may differ by region; further research into this topic is required. Commercial vaccines against GPV infection are available, and farmers usually vaccinate within 1 week of age. Maternal antibody administered at this time lasts for 35 days, and the immunity value continues to 20 weeks of age. Most infected birds show reduced growth and feather development. If a gosling older than one month is infected, the survivor becomes a lifelong carrier. When a goose recovers from parvovirus infection, large amounts of antibody remain in the body for 80 months; the maternal antibody of breeding geese can then be transferred to the hatched goslings and achieve a good immune effect [29]. Goose infection with parvovirus often causes haemorrhagic enteritis, which results in poor absorption efficiency, relatively slow growth and uneven feathering. The feather loss is mainly related to GPV: although heavy inflammatory cell infiltration can be seen in the feather follicles and the PCR diagnosis of parvovirus is positive, in situ hybridisation (ISH) or immunohistochemistry (IHC) is still needed to confirm whether GPV causes the feather disease. Bacterial infection is usually secondary in feather diseases; the bacteria found in this study were Riemerella anatipestifer, Salmonella spp., Escherichia coli, Staphylococcus aureus and Pasteurella multocida, whereas no ectoparasitic infestation was found. Sporadic cases also showed amyloidosis, renal tubular degeneration and deposition of calcium salts. All essential amino acids in the feeds were found to be insufficient compared with the standard values, so malnutrition after the outbreak of the disease would lead to more serious GFL and GBF. GoCV and GPV are still quite prevalent on most goose farms in Taiwan. Because circovirus is a non-enveloped virus, previous studies have found GoCV and GPV quite difficult to eliminate: the pathogens were still present after treatment with common surfactants and halogen disinfectants [30,31]. Houses need to be cleaned thoroughly over a long empty period and disinfected with formalin fumigation and phenol, which can at least reduce the pathogen load and improve the biosecurity control of geese. The disinfectant used on the birds in this study was halogen-based iodine, and the disinfectant used to clean the environment was Anteweak. Although disinfection tended to reduce the virus, the virus could not be completely eradicated.
Background: Goslings in several Taiwanese farms experienced gosling feather loss disease (GFL) at 21-35 days and goose broken feather disease (GBF) at 42-60 days. The prevalence ranges from a few birds to 500 cases per field. It is estimated that about 12,000 geese have been infected, the morbidity is 70-80% and the mortality is 20-30%. Methods: Samples were collected from animal hospitals, and molecular and microscopy diagnostics were used to examine 92 geese. Specific quantitative polymerase chain reaction (Q-PCR) assays were performed to evaluate GPV and GoCV viral loads, while feather loss was simultaneously evaluated in the geese with a scoring method. Results: A high prevalence of GoCV and GPV infection was found in geese showing signs of GFL and GBF. Inclusion bodies were detected in the feather follicles and Lieberkühn crypt epithelial cells. Q-PCR showed a high correlation between feather loss and viral load during the 3rd-5th week. However, infection was not detected using the same tests in 60 healthy geese. Conclusions: Thus, GFL and GBF appear to be closely related to GoCV and GPV. The geese's feathers showed increasing recovery after the birds were quarantined and disinfected.
null
null
9,247
230
[ 149, 137, 44, 337, 250, 321, 124, 137, 309, 441, 23 ]
15
[ "gpv", "gocv", "pcr", "feather", "goose", "geese", "fig", "virus", "gfl", "gbf" ]
[ "infected geese viral", "goose infectious disease", "diseases healthy geese", "pathogens geese gfl", "geese feather diseases" ]
null
null
null
[CONTENT] Circovirus | goose disease | parvovirus | polymerase chain reaction | Taiwan [SUMMARY]
null
[CONTENT] Circovirus | goose disease | parvovirus | polymerase chain reaction | Taiwan [SUMMARY]
null
[CONTENT] Circovirus | goose disease | parvovirus | polymerase chain reaction | Taiwan [SUMMARY]
null
[CONTENT] Animals | Circoviridae Infections | Circovirus | Feathers | Geese | Parvoviridae Infections | Parvovirinae | Poultry Diseases | Prevalence | Taiwan [SUMMARY]
null
[CONTENT] Animals | Circoviridae Infections | Circovirus | Feathers | Geese | Parvoviridae Infections | Parvovirinae | Poultry Diseases | Prevalence | Taiwan [SUMMARY]
null
[CONTENT] Animals | Circoviridae Infections | Circovirus | Feathers | Geese | Parvoviridae Infections | Parvovirinae | Poultry Diseases | Prevalence | Taiwan [SUMMARY]
null
[CONTENT] infected geese viral | goose infectious disease | diseases healthy geese | pathogens geese gfl | geese feather diseases [SUMMARY]
null
[CONTENT] infected geese viral | goose infectious disease | diseases healthy geese | pathogens geese gfl | geese feather diseases [SUMMARY]
null
[CONTENT] infected geese viral | goose infectious disease | diseases healthy geese | pathogens geese gfl | geese feather diseases [SUMMARY]
null
[CONTENT] gpv | gocv | pcr | feather | goose | geese | fig | virus | gfl | gbf [SUMMARY]
null
[CONTENT] gpv | gocv | pcr | feather | goose | geese | fig | virus | gfl | gbf [SUMMARY]
null
[CONTENT] gpv | gocv | pcr | feather | goose | geese | fig | virus | gfl | gbf [SUMMARY]
null
[CONTENT] disease | geese | goose | mortality | infections | virus | enteritis | caused | viral | infection [SUMMARY]
null
[CONTENT] fig | gocv | compared | gpv | w103 | similarity | levels | goose | feather | 05 [SUMMARY]
null
[CONTENT] gpv | gocv | feather | geese | fig | goose | pcr | 2001 | infection | sec [SUMMARY]
null
[CONTENT] Taiwanese | 21-35 days | GBF | 42-60 days ||| 500 ||| about 12,000 | geese | 70-80% | 20-30% [SUMMARY]
null
[CONTENT] GoCV | GPV | geese | GFL | GBF ||| Lieberkühn ||| 3rd-5th week ||| 60 | geese [SUMMARY]
null
[CONTENT] Taiwanese | 21-35 days | GBF | 42-60 days ||| 500 ||| about 12,000 | geese | 70-80% | 20-30% ||| ||| geese ||| GPV | GoCV | geese ||| GoCV | GPV | geese | GFL | GBF ||| Lieberkühn ||| 3rd-5th week ||| 60 | geese ||| GFL | GBF | GoCV | GPV ||| geese [SUMMARY]
null
Effect of aldehyde dehydrogenase 2 gene polymorphism on hemodynamics after nitroglycerin intervention in Northern Chinese Han population.
25591559
Nitroglycerin (NTG) is one of the few immediate treatments for acute angina. Aldehyde dehydrogenase 2 (ALDH2) is a key enzyme in the human body that facilitates the biological metabolism of NTG, and this metabolic mechanism serves an important function in NTG efficacy. Reports on the correlation between ALDH2 gene polymorphisms and the clinical efficacy of NTG remain contradictory, and data measuring NTG response by pain relief are subjective. This study aimed to investigate the influence of ALDH2 gene polymorphism on intervention with sublingual NTG using the noninvasive hemodynamic parameters of cardiac output (CO) and systemic vascular resistance (SVR) in a Northern Chinese Han population.
BACKGROUND
This study selected 559 patients from the Affiliated Hospital of Qingdao University. A total of 203 patients had coronary heart disease (CHD), and 356 were non-CHD (NCHD) cases. All patient ALDH2 genotypes (G504A) were detected and divided into two types: Wild (GG) and mutant (GA/AA). Among the CHD group, 103 were wild-type cases and 100 were mutant-type cases; among the NCHD volunteers, 196 cases were wild type and 160 cases were mutant type. A noninvasive hemodynamic detector was used to monitor the CO and the SVR at the 0-, 5-, and 15-minute time points after medication with 0.5 mg sublingual NTG. The two indicators, CO and SVR, were used for a comparative analysis across all case genotypes.
METHODS
Both CO and SVR indicators significantly differed between the wild and mutant genotypes at the 5- and 15-minute time points after intervention with sublingual NTG in the NCHD (F = 16.460, 15.003; P = 0.000, 0.000) and CHD groups (F = 194.482, 60.582; P = 0.000, 0.000). All CO values in the wild-type cases of both the NCHD and CHD groups increased, whereas those in the mutant-type cases decreased; the CO and ΔCO differences were statistically significant (P < 0.05; P < 0.05). The changes in SVR and ΔSVR between the wild- and mutant-type cases at all time points in both the NCHD and CHD groups were also statistically significant (P < 0.05; P < 0.05).
RESULTS
ALDH2 (G504A) gene polymorphism is associated with changes in noninvasive hemodynamic parameters (i.e. CO and SVR) after intervention with sublingual NTG. This gene polymorphism may influence the effect of NTG intervention in the Northern Chinese Han population.
CONCLUSION
[ "Aged", "Aldehyde Dehydrogenase", "Aldehyde Dehydrogenase, Mitochondrial", "Asian People", "Female", "Hemodynamics", "Humans", "Male", "Middle Aged", "Nitroglycerin", "Polymorphism, Genetic" ]
4837835
INTRODUCTION
Increasing evidence suggests that the aldehyde dehydrogenase 2 (ALDH2) (G-504-A) gene polymorphism is associated with diminished nitroglycerin (NTG) efficacy. ALDH2 is a key enzyme in the human body that facilitates the biological metabolism of NTG, which serves an important function in NTG efficacy.[1] NTG is one of the few immediate treatments for acute angina. ALDH2 is an important mitochondrial aldehyde oxidase whose gene is located on chromosome 12q24 and shows a high degree of genetic polymorphism.[2] The ALDH2 variant is a G-to-A base substitution at position 504 in the 12th exon. ALDH2 of the GG genotype (i.e., wild-type) has normal activity toward NTG; the GA genotype (i.e., mutant) retains only 6% activity, whereas the AA genotype (i.e., mutant) loses all activity.[3] The clinical research analysis of Jo et al.[4] revealed that mutant ALDH2 is an independent risk factor for myocardial infarction in elderly men in Korea. Chen et al.[5] used animal experiments to demonstrate that ALDH2 has a protective effect against myocardial ischemia. However, reports on the correlation between ALDH2 gene polymorphisms and NTG clinical efficacy remain contradictory, and data on NTG efficacy measured by pain relief are highly subjective.[6,7] A thoracic impedance method based on a noninvasive hemodynamic detector (i.e., BioZ.com), which monitors patients' hemodynamic parameters and indices in real time, was used in this study to assess hemodynamic status and ventricular function. Cardiac output (CO) represents the heart's pumping output, whereas systemic vascular resistance (SVR) represents the cardiac load; both parameters have application value in cardiac care.[8,9,10] Therefore, this study verified NTG efficacy using the noninvasive hemodynamic monitoring results of CO and SVR, providing objective judgment and minimizing the effect of subjective factors.
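For readers unfamiliar with impedance-derived hemodynamics: devices such as the BioZ report SVR from mean arterial pressure and CO using the conventional formula SVR = 80 × (MAP − CVP)/CO. A small sketch of this calculation (treating CVP as zero when unmeasured is our simplifying assumption, not the device's documented behavior):

    def mean_arterial_pressure(sbp, dbp):
        """MAP approximated from systolic and diastolic pressure (mmHg)."""
        return dbp + (sbp - dbp) / 3.0

    def systemic_vascular_resistance(map_mmhg, co_l_min, cvp_mmhg=0.0):
        """SVR in dyn*s*cm^-5: SVR = 80 * (MAP - CVP) / CO."""
        return 80.0 * (map_mmhg - cvp_mmhg) / co_l_min

    map_ = mean_arterial_pressure(120, 80)           # ~93.3 mmHg
    print(systemic_vascular_resistance(map_, 5.0))   # ~1493 dyn*s*cm^-5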
METHODS
Subjects All patients, comprising the noncoronary heart disease (NCHD) and CHD groups, came from the Jiaodong Peninsula of Northern China. All procedures conformed to the tenets of the Affiliated Hospital of Qingdao University, informed consent was obtained from each subject, and the study was approved by the Local Institutional Review Board. The exclusion criteria were serious cardiovascular and cerebrovascular diseases, endocrine and digestive disorders, kidney disease in the last 3 years, and NTG contraindications. A clinical diagnosis of obesity was determined from abnormal laboratory findings before the start of the procedures. Accordingly, 356 patients with non-CHD (i.e., 234 men and 122 women; mean age: 57.01 ± 10.29 years) participated in the study. Their inclusion criteria were: (1) explicit exclusion of coronary artery disease, (2) no NTG contraindications (e.g., hypotension), and (3) no history of serious alcohol abuse. The CHD group comprised 203 patients (i.e., 147 men and 56 women; mean age: 63.43 ± 9.61 years) diagnosed following the European Society of Cardiology Pocket Guide criteria of 2013. Their inclusion criteria were: (1) presence of angina pectoris, (2) exclusion of other possible causes of similar symptoms and of angina pectoris (e.g., coronary spasm), (3) no NTG contraindications (e.g., hypotension), (4) sublingual NTG was used to treat angina attacks, and (5) no other anti-angina drugs were used at the same time. The blood pressure of the patients with hypertension was controlled in the 140–120/90–70 mmHg range with oral non-nitrate drugs (e.g., angiotensin II type 1 receptor blockers, calcium channel blockers, and diuretics) before the test. All risk factors were determined from routine tests, including blood, urine, and conventional and biochemical blood clotting tests, prior to the study.
Genetic analysis Genomic DNA was extracted from a blood sample from each subject. The genetic variants in the ALDH2 (G504A) region were identified using polymerase chain reaction and DNA sequencing in all cases. Genotype determination was performed by the Shanghai Biological Science and Technology Company (China). The ALDH2 genotypes were divided into wild (GG) and mutant (GA/AA) types. Among the 356 patients with NCHD, 196 were GG cases and 160 were GA/AA; among the 203 patients with CHD, 103 were GG cases and 100 were GA/AA cases. Clinical and hemodynamic The history of NTG treatment, judged by pain relief, was obtained from each participant through interview and divided into two subgroups of efficacy and nonefficacy. For the hemodynamic parameter test, all patients administered 0.5 mg sublingual NTG were checked at 0, 5, and 15 minutes. CO and SVR were measured using the noninvasive hemodynamic monitor BioZ.com (CardioDynamics Company). The patients were placed in a supine position with an empty stomach during the test. Medical professionals attached the sensors to the patients' necks and at the xiphoid level in the mid-axillary line on both sides of the chest according to the user instructions, and the contact skin was cleaned using alcohol wipes to ensure full conductivity. All personal data related to the test were entered, including gender, age, name, height, weight, and body surface area. The systolic and diastolic blood pressure (SBP and DBP), SVR, and CO were then measured and recorded. The SVR reflects the state of the body's peripheral vasomotor reactions, whereas the CO depends on cardiac preload, afterload, and myocardial contractility.
Statistical analysis

SPSS software for Windows, version 17.0 (SPSS Inc., released December 2008) was used for the statistical analysis. The Pearson χ2 test was employed to assess deviations of the genotype distributions from Hardy–Weinberg equilibrium. Clinical characteristics were compared across genotypes using the Pearson χ2 test or Fisher's exact test; Fisher's exact test was used when the expected count in any cell was <5. Differences in NTG efficacy among the ALDH2 genotypes were analyzed with the χ2 test. Categorical variables such as gender and smoking and drinking history were compared with the χ2 test, and continuous variables with the t-test. P < 0.05 was considered statistically significant. Changes in CO and SVR across the three time points were analyzed using repeated-measures analysis of variance. The joint influence of multiple factors, including the ALDH2 gene polymorphism, on ΔSVR and ΔCO was analyzed using multivariate linear regression.
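As a minimal sketch, assuming NumPy and SciPy rather than the SPSS workflow the authors actually used, the two categorical analyses described above can be expressed as follows: a Hardy–Weinberg goodness-of-fit χ2 test computed from genotype counts, and a genotype-by-outcome test that falls back to Fisher's exact test when any expected cell count is below 5. All counts in the example calls are placeholders, not study data.

```python
# A minimal sketch of the categorical tests, assuming NumPy and SciPy.
import numpy as np
from scipy import stats

def hwe_chisq(n_gg, n_ga, n_aa):
    """Goodness-of-fit chi-square against Hardy-Weinberg expectations
    for a biallelic locus (1 degree of freedom)."""
    n = n_gg + n_ga + n_aa
    p = (2 * n_gg + n_ga) / (2 * n)          # estimated G allele frequency
    expected = np.array([p * p, 2 * p * (1 - p), (1 - p) ** 2]) * n
    observed = np.array([n_gg, n_ga, n_aa])
    chi2 = ((observed - expected) ** 2 / expected).sum()
    return chi2, stats.chi2.sf(chi2, df=1)

def genotype_outcome_p(table_2x2):
    """Pearson chi-square on a 2x2 table, switching to Fisher's exact test
    when any expected cell count is < 5, as specified above."""
    chi2, p, _, expected = stats.chi2_contingency(table_2x2, correction=False)
    if (expected < 5).any():
        _, p = stats.fisher_exact(table_2x2)
    return p

# Placeholder 2x2 table: rows = GG vs GA/AA, columns = effective vs not.
print(genotype_outcome_p(np.array([[82, 21], [51, 49]])))
print(hwe_chisq(100, 80, 20))   # placeholder genotype counts
```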
RESULTS
The allele and genotype distributions of the 510 genotyped subjects were tested for Hardy–Weinberg equilibrium. The genotype distributions of both groups conformed to Hardy–Weinberg equilibrium (χ2 = 1.59 and 0.49, P > 0.05), indicating that the sample was representative of the population. Of the 356 cases in the NCHD group, 196 (55.1%) were of the GG genotype, 130 (36.5%) were of the GA genotype, and 30 (8.4%) were of the AA genotype, with G and A allele frequencies of 82.8% and 17.2%, respectively. Of the 203 cases in the CHD group, 103 (50.7%) were of the GG genotype, 86 (42.4%) were of the GA genotype, and 14 (6.9%) were of the AA genotype, with G and A allele frequencies of 81.4% and 18.6%, respectively. Neither the genotype distribution (P = 0.33) nor the allele distribution (P = 0.64) differed significantly between the two groups [Table 1].

ALDH2 rs671 genotypes and alleles in the two groups. CHD: Coronary heart disease; ALDH2: Aldehyde dehydrogenase-2; GG: Wild-type; GA/AA: Mutant.
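For illustration, and not as the authors' own code, the sketch below plugs the genotype counts reported in Table 1 into the Hardy–Weinberg helper described in the Statistical analysis section (redefined here so the block is self-contained) and runs the between-group genotype comparison with SciPy. The Hardy–Weinberg χ2 values it yields are close to the published 1.59 and 0.49; the between-group P value may differ slightly from the published 0.33 because of rounding or software differences.

```python
# Check the reported Table 1 statistics from the published genotype counts.
import numpy as np
from scipy import stats

def hwe_chisq(n_gg, n_ga, n_aa):
    # Same helper as in the Statistical analysis sketch above.
    n = n_gg + n_ga + n_aa
    p = (2 * n_gg + n_ga) / (2 * n)
    exp = np.array([p * p, 2 * p * (1 - p), (1 - p) ** 2]) * n
    obs = np.array([n_gg, n_ga, n_aa])
    return ((obs - exp) ** 2 / exp).sum()

print(round(hwe_chisq(196, 130, 30), 2))  # NCHD: ~1.59, as reported
print(round(hwe_chisq(103, 86, 14), 2))   # CHD:  ~0.49, as reported

# Between-group genotype comparison (rows: NCHD, CHD; columns: GG, GA, AA).
chi2, p, dof, _ = stats.chi2_contingency([[196, 130, 30], [103, 86, 14]])
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.2f}")
# Non-significant, consistent with the reported P = 0.33 (small differences
# can arise from rounding or the original software's implementation).
```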
GG and GA/AA clinical data comparison

The GG and GA/AA groups did not differ significantly in sample size, gender, age, smoking, or alcohol consumption ratio (P > 0.05) [Table 2].

General characteristics of the clinical data. *χ2 value; the other test statistics are t values. GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease; BSA: Body surface area.

Aldehyde dehydrogenase 2 genotype and nitroglycerin efficacy

In the CHD group, 103 GG cases and 100 GA/AA cases were recorded. The NTG efficacy rates of the GG and GA/AA subgroups were 79.4% and 50.6%, respectively; this difference in rapid NTG efficacy was statistically significant (P < 0.01) [Table 3].

Distribution of ALDH2 genotypes and efficacy of NTG in the CHD cases. CHD: Coronary heart disease; ALDH2: Aldehyde dehydrogenase-2; GG: Wild-type; GA/AA: Mutant; NTG: Nitroglycerin.

Changes in cardiac output, SVR, heart rate, and systolic and diastolic blood pressure between groups

Repeated-measures analysis of variance showed that the CO differences in the GG genotype across the three time points were statistically significant (P < 0.05), with a significant increase in CO between 0 and 5 minutes (P < 0.001). The CO differences in the GA/AA group at 0, 5, and 15 minutes were also statistically significant (P < 0.05), with an obvious decrease in CO between 0 and 15 minutes in both the NCHD and CHD groups (P < 0.001). The SVR differences in both groups were statistically significant at all three time points (P < 0.001), with a significant decrease at 5 minutes (P < 0.01) [Table 4].

Comparison of CO and SVR at the different time points. CO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease.

The differences in heart rate (HR), SBP, and DBP between the two groups at the three time points were not statistically significant (P > 0.05).

Relationship between aldehyde dehydrogenase 2 gene polymorphisms and ΔCO and ΔSVR

The change scores were defined as ΔCO1 = CO at 0 minutes − CO at 5 minutes, ΔSVR1 = SVR at 0 minutes − SVR at 5 minutes, ΔCO2 = CO at 0 minutes − CO at 15 minutes, and ΔSVR2 = SVR at 0 minutes − SVR at 15 minutes. With ΔCO1 as the dependent variable and the ALDH2 polymorphism as the independent variable, the polymorphism significantly influenced the dependent variable (P = 0.046 and P < 0.001). The same analysis with ΔCO2, ΔSVR1, and ΔSVR2 as dependent variables likewise showed a significant influence of the ALDH2 polymorphism in both groups (P < 0.05).

Noninvasive hemodynamic parameter (i.e., cardiac output and SVR) changes between the GG and GA/AA groups

The differences in ΔCO1, ΔCO2, ΔSVR1, and ΔSVR2 between the GG and GA/AA types were statistically significant (P < 0.001) [Table 5].

Comparison of ΔCO and ΔSVR. CO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease.
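As a hedged illustration of the multivariate linear regression step, and not the authors' code, the sketch below regresses a ΔCO1 change score on an ALDH2 genotype indicator with statsmodels. The data frame is synthetic; only the direction of the effect (CO tends to rise after NTG in GG carriers and to fall in GA/AA carriers, so ΔCO1 = CO0 − CO5 is negative for GG) follows the study. Further covariates such as age or gender could be added to the formula in the same way.

```python
# A minimal sketch, assuming pandas and statsmodels, of regressing a change
# score (dCO1 = CO at 0 min - CO at 5 min) on ALDH2 genotype. All values
# below are synthetic and exist only to show the model setup.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
genotype = rng.choice(["GG", "GA/AA"], size=n)

# Synthetic effect in the direction reported in the paper: negative dCO1
# (CO rises at 5 min) for GG, positive dCO1 (CO falls) for GA/AA.
dco1 = np.where(genotype == "GG", -0.6, 0.4) + rng.normal(0, 0.8, size=n)

df = pd.DataFrame({"genotype": genotype, "dCO1": dco1})
model = smf.ols("dCO1 ~ C(genotype)", data=df).fit()
print(model.params)    # coefficient for the genotype indicator
print(model.pvalues)   # its significance
```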
DISCUSSION

Nitroglycerin is a classic drug for treating acute angina attacks. It generates NO or NO-related mediators in the body. NTG acts directly on guanylate cyclase and increases the amount of the second messenger cyclic guanosine monophosphate in vascular smooth muscle, thereby activating downstream target proteins (i.e., protein kinase G and AMP-dependent protein kinase). NTG also reduces the Ca2+ concentration in smooth muscle cells and the sensitivity of myosin to Ca2+, thereby dilating vascular smooth muscle.[11]

Aldehyde dehydrogenase 2 is a key enzyme mediating the biotransformation of NTG[12] and, consequently, its biological activation; this activation also reduces the occurrence of NTG tolerance. ALDH2 converts NTG into nitrite and 1,2-glyceryl dinitrate (1,2-GDN), mediating NTG-induced vasodilation. Mackenzie et al.[13] used disulfiram to inhibit ALDH2 and demonstrated that this partially blocked the effects of NTG in humans. ALDH2 gene knockout studies in mice have shown the significance and necessity of ALDH2 for vascular activity at therapeutic doses of NTG.[3,14] The human ALDH2 gene is located on chromosome 12 and contains 13 exons. A guanine (G)-to-adenine (A) substitution in exon 12 replaces the glutamate (Glu) at position 504 of the encoded protein with lysine (Lys) (i.e., Glu504Lys [rs671]). The mutation rate of this single nucleotide polymorphism reaches 30%–50% in East Asia.[15] The mutation causes a significant decrease in ALDH2 enzyme activity, which diminishes the biological function of ALDH2.[16] The results of this study provide clinical evidence that ALDH2 polymorphisms influence the NTG effect.

Recent animal studies abroad have reported that the ALDH2 gene serves an important function in the biotransformation of NTG in the pulmonary vascular bed and the circulatory system,[17] and that ALDH2 gene mutation reduces NTG-induced vasodilation.[18,19] In 2006, Li et al.[19] selected 80 patients diagnosed with coronary artery disease and a history of stable angina and detected the ALDH2 gene polymorphism and enzyme activity in each group; the NTG efficacy rate in the ALDH2 wild-type group was significantly higher than that in the mutant group. In contrast, Ji et al.[7] enrolled 113 people diagnosed with coronary artery disease in 2010 and found no association between ALDH2 gene polymorphisms and NTG effectiveness. Mackenzie et al.[13] likewise found no such association for the clinical use of NTG, even though the ALDH2 gene has a high mutation rate in the Japanese population. These studies illustrate that the relationship between ALDH2 gene polymorphisms and NTG efficacy remains controversial. The present study selected subjects with and without CHD to study the efficacy of sublingual NTG and further showed that ALDH2 gene polymorphisms are related to the clinical effect of NTG, with the wild type responding better than the mutant type.

The subjects were divided into the NCHD and CHD groups, and the determined ALDH2 genotypes were divided into wild-type and mutant groups. The ALDH2 genotype distributions did not differ significantly between groups, consistent with previous studies.[20] The NTG efficacy rate in the GG-type patients of the CHD group was significantly higher than that in the GA/AA type, which is also similar to previous findings.[21] A noninvasive hemodynamic detector (impedance cardiography [ICG]) was employed to evaluate the NTG effect, with changes in CO, SVR, and other objective indicators monitored, thereby avoiding the subjectivity of judging the degree of pain relief. Three representative time points (0, 5, and 15 minutes) were selected according to the pharmacokinetic characteristics of NTG. Combining the changes in the noninvasive dynamic indicators (CO and SVR) revealed differences in sublingual NTG efficacy among subjects with different genotypes. The experimental results show that ALDH2 polymorphisms are relevant to CO and SVR, which confirms the effect of NTG on CO.

The ICG is an important device for measuring body impedance in the study of cardiovascular hemodynamics.[22] Cardiac hemodynamic status is measured on the basis of changes in thoracic impedance during systole and diastole.[9] The ICG enables the diagnosis of heart failure and the assessment of the severity of coronary artery disease,[23] as well as the evaluation of drug efficacy and prognosis.[24] By observing CO per minute and changes in the noninvasive hemodynamic index SVR, this study avoided the influence of subjective factors as much as possible and used the observations to determine the differences among volunteers with different genotypes after sublingual NTG administration. The obtained results are therefore highly reliable.

After sublingual NTG administration, CO increased in the wild type but decreased in the mutant type. These results suggest the following: in the wild type, heart rate (HR) increased significantly by reflex after sublingual NTG, whereas stroke volume (SV) decreased relatively little, so the product in the formula CO = SV × HR increased; in the mutant type, SV was significantly reduced while HR increased only slightly, so CO decreased.

Limitations of the study

This study considered only the effect of the ALDH2 polymorphism on NTG metabolism. In vitro and in vivo studies suggest that the biotransformation of NTG is also affected by a variety of other enzymes, such as glutathione S-transferase, cytochrome P450 reductase, and xanthine oxidoreductase; the influence of these other enzymes on NTG metabolism cannot be excluded here.

This research is a clinical experimental study with an inadequate sample size, which should be expanded in further investigations. Selection bias regarding the NCHD group and the subjects might also have affected the observed NTG efficacy.

The researchers plan to expand the sample size and apply invasive monitoring techniques to monitor all indicators, enabling a more in-depth study of the metabolic pathways and processes of ALDH2.
CONCLUSION
The aldehyde dehydrogenase 2 (G504A) gene polymorphism is associated with changes in noninvasive hemodynamic parameters (i.e., CO and SVR) after intervention with sublingual NTG. This polymorphism may influence the effect of NTG intervention in the Northern Chinese Han population.
[ "Subjects", "Genetic analysis", "Clinical and hemodynamic", "Statistical analysis", "GG and GA/AA clinical data comparison", "Aldehyde dehydrogenase 2 genotype and nitroglycerin efficacy", "Changes in cardiac output, SVR, heart rate, systolic and diastolic blood pressure between groups", "Relationship between aldehyde dehydrogenase 2 gene polymorphisms and ΔCO and ΔSVR", "Noninvasive hemodynamic parameter (i.e., cardiac output and SVR) changes between the GG and GG/AA groups", "Limitations of the study" ]
[ "All patients comprising the noncoronary heart disease (NCHD) and the CHD groups from Jiaodong Peninsula of Northern China were investigated. All procedures conformed to the tenets of the Affiliated Hospital of Qingdao University. Informed consent was obtained from each subject. Furthermore, the study was approved by the Local Institutional Review Board. The exclusion criteria included serious cardiovascular and cerebrovascular diseases, endocrine and digestive disorders, kidney disease in the last 3 years, and NTG contraindications. The clinical diagnosis of obesity was determined from abnormal laboratory experiments before the start of the procedures. Accordingly, 356 patients with non-CHD (i.e., 234 men and 122 women; mean age: 57.01 ± 10.29 years) participated in the study. The inclusion criteria included: (1) Explicit exclusion of coronary artery disease, (2) without NTG contraindications (e.g., hypotension), and (3) without a history of serious alcohol abuse. The CHD group comprises 203 patients (i.e., 147 men and 56 women; mean age: 63.43 ± 9.61 years) diagnosed following the European Society of Cardiology Pocket Guide criteria of 2013. The inclusion criteria included the (1) presence of angina pectoris, (2) other possible causes of similar disease and angina pectoris (e.g., coronary spasm), (3) without NTG contraindications (e.g., hypotension), (4) sublingual NTG was used to cure angina attack, and (5) did not use angina or other antiangina drugs at the same time. The blood pressure of the patients with hypertension was controlled in the 140–120/90–70 mmHg range by taking nonoral nitrates (e.g., angiotensin II type 1 receptor blocker, calcium channel blocker, and diuretics) before the test. All risk factors were determined from routine tests, including blood, urine, and conventional and biochemical blood clotting prior to the study.", "Genomic DNA was extracted from each subject via blood sample. The genetic variants in the extra ALDH2 (G504A) region were identified using polymerase chain reaction and DNA sequencing in all cases. Genotype determination was measured by the Shanghai Biological Science and Technology Company (China).\nThe ALDH2 genotypes were divided into wild (GG) and mutant (GA/AA) types. Among 356 patients with NCHD, 196 were GG cases, and 160 were GA/AA. Among 203 patients with CHD, 103 were GG cases and 100 were GA/AA cases.", "The history of NTG treatment by objective pain relief was obtained from each participant through interview. This history was then divided into two subgroups of efficacy and nonefficacy. The hemodynamic parameter test included the checking of all patients administered with 0.5 mg sublingual NTG at 0, 5, and 15 minutes. CO and SVR were measured using the noninvasive hemodynamic parameter of BioZ.com (CardioDynamics Company). The patients were placed in a supine position with an empty stomach during the test. Medical professionals then attached the sensor to the patients’ necks and to the xiphoid level in the mid-axillary line at the junction on the bilinear sides of the chest according to user instructions. The contact skin was cleaned using alcohol wipes to ensure full conductivity. All personal data related to the test were entered. These data included gender, age, name, height, weight, and body surface area. The systolic and diastolic blood pressure (SBP and DBP), SVR, and CO were then tested and recorded. 
The SVR showed the state of the body's peripheral vasomotor reactions, whereas the CO depended on the cardiac preload and afterload and myocardial contractility.", "SPSS software for Windows version 17.0 (SPSS 17.0 Statistical Product and Service Solutions, the software was developed by the company of SPSS in December 2008) was used for statistical analysis. Pearson χ2 test was employed to assess deviations from the Hardy–Weinberg equilibrium for genotypes. Clinical information was subsequently compared across the genotypes using Pearson χ2 or Fisher's exact test. Fisher's exact test was used when the expected number in any cell was <5. The observed number and differences of NTG efficacy among various ALDH2 genotypes were statistically analyzed using χ2 test. Patient gender and smoking and drinking history were analyzed by t-test regression. P < 0.05 was considered statistically significant. The changes in CO and SVR at 3 time points were analyzed using repeated measurement data analysis of variance. The relation among the number of multiple impacts of ΔSVR, ΔCO factors, and ALDH2 gene polymorphisms was analyzed using a multivariate linear regression.", "The GG and GA/AA types demonstrated that the GC and GA/AA sample size, gender, age, smoking, and alcohol consumption ratio difference were not statistically significant (P > 0.05) [Table 2].\nGeneral characteristics of clinical data\n*Means χ2 value, other test statistics value for t value. GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease; BSA: Body surface area.", "Accordingly, 103 cases of GG and 100 cases GA/AA were recorded in the CHD group. The NTG efficacies of the GG and the GA/AA subgroups were 79.4% and 50.6%, respectively. The difference in rapid NTG efficiency was statistically significant (P < 0.01) [Table 3].\nALDH2 gene volunteers cases of CHD distribution and efficacy of NTG\nCHD: Coronary heart disease; ALDH2: Aldehyde dehydrogenase-2; GG: Wild-type; GA/AA: Mutant; NTG: Nitroglycerin.", "The repeated measurement data analysis of variance showed that the CO difference of the GG genotype at all three time points was statistically significant (P < 0.05) and differed with the significant increase in CO at 0 and 5 minutes (P = 0.000). The CO difference in the GA/AA group at 0, 5, and 15 minutes was statistically significant (P < 0.05), which indicated an obvious decrease from the CO difference at 0 and 15 minutes in both groups (P < 0.001). The SVR differences in both groups were statistically significant (P < 0.001) at all three-time points. A significant decrease was observed at 5 minutes, which suggested a statistically significant difference (P < 0.01) [Table 4].\nComparison of CO and SVR in different time points\nCO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease.\nThe heart rate (HR), SBP, and DBP differences between the two groups at three-time points were not statistically significant (P > 0.05).", "ΔCO and ΔSVR included ΔCO1 = CO 0 − CO 5 minutes, ΔSVR1 = SVR 0 − SVR 5 minutes, ΔCO2 = CO 0 – CO 15 minutes, and ΔSVR2 = SVR 0 –SVR 15 minutes. ΔCO1 was used as the dependent variable, whereas ALDH2 polymorphism was employed as the independent variable. The ALDH2 polymorphism significantly influenced the dependent variable (P = 0.046, 0.000). Similarly, ΔCO2 was used as the dependent variable. ΔSVR1 and ΔSVR2 were utilized as independent variables. 
The ALDH2 polymorphism had a significant influence on the dependent variable in both groups (P < 0.05).", "The comparisons of ΔCO1 and ΔCO2 and ΔSVR1 and ΔSVR2 in the GG and GG/AA types were statistically significant (P < 0.001) [Table 5].\nComparison of ΔCO, ΔSVR\nCO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease.", "Only considered ALDH2 polymorphism to the effect of metabolism, and in vitro and in vivo studies suggest that in vivo biotransformation of NTG is also affected by the following process a variety of enzymes, such as: Glutathione-S-transferase, cytochrome P450 reductase, xanthine oxidoreductase enzyme, including a number of species, this study does not exclude the other enzymes in the metabolism of NTG affected.\nThis research is a clinical experimental study, and the sample size is inadequate. The sample size should be expanded for further investigation. A selection bias in this study as regards the NCHD group and the subject might have affected NTG efficacy.\nThe researchers of this study plan to expand the sample size and apply invasive monitoring techniques to monitor all indicators. Accordingly, a more in-depth study of the metabolic ALDH2 pathways and processes will be conducted." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Subjects", "Genetic analysis", "Clinical and hemodynamic", "Statistical analysis", "RESULTS", "GG and GA/AA clinical data comparison", "Aldehyde dehydrogenase 2 genotype and nitroglycerin efficacy", "Changes in cardiac output, SVR, heart rate, systolic and diastolic blood pressure between groups", "Relationship between aldehyde dehydrogenase 2 gene polymorphisms and ΔCO and ΔSVR", "Noninvasive hemodynamic parameter (i.e., cardiac output and SVR) changes between the GG and GG/AA groups", "DISCUSSION", "Limitations of the study", "CONCLUSION" ]
[ "Increasing evidence suggests that the aldehyde dehydrogenase 2 (ALDH2) (G-504-A) gene polymorphism is associated with nitroglycerin (NTG) inhibition. ALDH2 is a key enzyme in the human body that facilitates the biological metabolism of NTG, which serves an important function in NTG efficacy.[1] NTG is one of the few immediate treatments for acute angina. ALDH2 is an important mitochondrial aldehyde oxidase, the gene of which is located on chromosome 12q24. The gene has a high degree of genetic polymorphism.[2] The ALDH2 gene is found outside the 12th exon and exists in 504 point mutations with base substitutions of G to A. The ALDH2 of the GG genotype (i.e., wild-type) has normal activity on the NTG. The GA genotype (i.e., mutant) only has 6% activity, whereas the AA genotype (i.e., mutant) loses all its activities.[3] The clinical research analysis of Jo et al.[4] revealed that mutant ALDH2 is an independent risk factor for myocardial infarction in elderly men in Korea. Chen et al.[5] used animal experiments to demonstrate that the ALDH2 has a protective effect on myocardial ischemia. However, some reports still contradict the results that the correlation between the ALDH2 gene polymorphisms and NTG or its clinical efficacy is different. However, data on NTG measurement by pain relief are highly subjective.[67]\nA thoracic impedance method based on the noninvasive hemodynamic detector (i.e., BioZ.com), which monitors real-time hemodynamic and index parameters of patients, was used in this study. This method was employed to assess hemodynamic status and ventricular function. Cardiac output (CO) is representative of the CO, whereas peripheral vascular resistance (SVR) is representative of the cardiac load. Both parameters have application values in cardiac care.[8910] Therefore, this study verified NTG efficacy using noninvasive hemodynamic monitoring results of CO and SVR with objective judgment. Furthermore, this study hinders the effect of subjective factors.", " Subjects All patients comprising the noncoronary heart disease (NCHD) and the CHD groups from Jiaodong Peninsula of Northern China were investigated. All procedures conformed to the tenets of the Affiliated Hospital of Qingdao University. Informed consent was obtained from each subject. Furthermore, the study was approved by the Local Institutional Review Board. The exclusion criteria included serious cardiovascular and cerebrovascular diseases, endocrine and digestive disorders, kidney disease in the last 3 years, and NTG contraindications. The clinical diagnosis of obesity was determined from abnormal laboratory experiments before the start of the procedures. Accordingly, 356 patients with non-CHD (i.e., 234 men and 122 women; mean age: 57.01 ± 10.29 years) participated in the study. The inclusion criteria included: (1) Explicit exclusion of coronary artery disease, (2) without NTG contraindications (e.g., hypotension), and (3) without a history of serious alcohol abuse. The CHD group comprises 203 patients (i.e., 147 men and 56 women; mean age: 63.43 ± 9.61 years) diagnosed following the European Society of Cardiology Pocket Guide criteria of 2013. The inclusion criteria included the (1) presence of angina pectoris, (2) other possible causes of similar disease and angina pectoris (e.g., coronary spasm), (3) without NTG contraindications (e.g., hypotension), (4) sublingual NTG was used to cure angina attack, and (5) did not use angina or other antiangina drugs at the same time. 
The blood pressure of the patients with hypertension was controlled in the 140–120/90–70 mmHg range by taking nonoral nitrates (e.g., angiotensin II type 1 receptor blocker, calcium channel blocker, and diuretics) before the test. All risk factors were determined from routine tests, including blood, urine, and conventional and biochemical blood clotting prior to the study.\nAll patients comprising the noncoronary heart disease (NCHD) and the CHD groups from Jiaodong Peninsula of Northern China were investigated. All procedures conformed to the tenets of the Affiliated Hospital of Qingdao University. Informed consent was obtained from each subject. Furthermore, the study was approved by the Local Institutional Review Board. The exclusion criteria included serious cardiovascular and cerebrovascular diseases, endocrine and digestive disorders, kidney disease in the last 3 years, and NTG contraindications. The clinical diagnosis of obesity was determined from abnormal laboratory experiments before the start of the procedures. Accordingly, 356 patients with non-CHD (i.e., 234 men and 122 women; mean age: 57.01 ± 10.29 years) participated in the study. The inclusion criteria included: (1) Explicit exclusion of coronary artery disease, (2) without NTG contraindications (e.g., hypotension), and (3) without a history of serious alcohol abuse. The CHD group comprises 203 patients (i.e., 147 men and 56 women; mean age: 63.43 ± 9.61 years) diagnosed following the European Society of Cardiology Pocket Guide criteria of 2013. The inclusion criteria included the (1) presence of angina pectoris, (2) other possible causes of similar disease and angina pectoris (e.g., coronary spasm), (3) without NTG contraindications (e.g., hypotension), (4) sublingual NTG was used to cure angina attack, and (5) did not use angina or other antiangina drugs at the same time. The blood pressure of the patients with hypertension was controlled in the 140–120/90–70 mmHg range by taking nonoral nitrates (e.g., angiotensin II type 1 receptor blocker, calcium channel blocker, and diuretics) before the test. All risk factors were determined from routine tests, including blood, urine, and conventional and biochemical blood clotting prior to the study.\n Genetic analysis Genomic DNA was extracted from each subject via blood sample. The genetic variants in the extra ALDH2 (G504A) region were identified using polymerase chain reaction and DNA sequencing in all cases. Genotype determination was measured by the Shanghai Biological Science and Technology Company (China).\nThe ALDH2 genotypes were divided into wild (GG) and mutant (GA/AA) types. Among 356 patients with NCHD, 196 were GG cases, and 160 were GA/AA. Among 203 patients with CHD, 103 were GG cases and 100 were GA/AA cases.\nGenomic DNA was extracted from each subject via blood sample. The genetic variants in the extra ALDH2 (G504A) region were identified using polymerase chain reaction and DNA sequencing in all cases. Genotype determination was measured by the Shanghai Biological Science and Technology Company (China).\nThe ALDH2 genotypes were divided into wild (GG) and mutant (GA/AA) types. Among 356 patients with NCHD, 196 were GG cases, and 160 were GA/AA. Among 203 patients with CHD, 103 were GG cases and 100 were GA/AA cases.\n Clinical and hemodynamic The history of NTG treatment by objective pain relief was obtained from each participant through interview. This history was then divided into two subgroups of efficacy and nonefficacy. 
The hemodynamic parameter test included the checking of all patients administered with 0.5 mg sublingual NTG at 0, 5, and 15 minutes. CO and SVR were measured using the noninvasive hemodynamic parameter of BioZ.com (CardioDynamics Company). The patients were placed in a supine position with an empty stomach during the test. Medical professionals then attached the sensor to the patients’ necks and to the xiphoid level in the mid-axillary line at the junction on the bilinear sides of the chest according to user instructions. The contact skin was cleaned using alcohol wipes to ensure full conductivity. All personal data related to the test were entered. These data included gender, age, name, height, weight, and body surface area. The systolic and diastolic blood pressure (SBP and DBP), SVR, and CO were then tested and recorded. The SVR showed the state of the body's peripheral vasomotor reactions, whereas the CO depended on the cardiac preload and afterload and myocardial contractility.\nThe history of NTG treatment by objective pain relief was obtained from each participant through interview. This history was then divided into two subgroups of efficacy and nonefficacy. The hemodynamic parameter test included the checking of all patients administered with 0.5 mg sublingual NTG at 0, 5, and 15 minutes. CO and SVR were measured using the noninvasive hemodynamic parameter of BioZ.com (CardioDynamics Company). The patients were placed in a supine position with an empty stomach during the test. Medical professionals then attached the sensor to the patients’ necks and to the xiphoid level in the mid-axillary line at the junction on the bilinear sides of the chest according to user instructions. The contact skin was cleaned using alcohol wipes to ensure full conductivity. All personal data related to the test were entered. These data included gender, age, name, height, weight, and body surface area. The systolic and diastolic blood pressure (SBP and DBP), SVR, and CO were then tested and recorded. The SVR showed the state of the body's peripheral vasomotor reactions, whereas the CO depended on the cardiac preload and afterload and myocardial contractility.\n Statistical analysis SPSS software for Windows version 17.0 (SPSS 17.0 Statistical Product and Service Solutions, the software was developed by the company of SPSS in December 2008) was used for statistical analysis. Pearson χ2 test was employed to assess deviations from the Hardy–Weinberg equilibrium for genotypes. Clinical information was subsequently compared across the genotypes using Pearson χ2 or Fisher's exact test. Fisher's exact test was used when the expected number in any cell was <5. The observed number and differences of NTG efficacy among various ALDH2 genotypes were statistically analyzed using χ2 test. Patient gender and smoking and drinking history were analyzed by t-test regression. P < 0.05 was considered statistically significant. The changes in CO and SVR at 3 time points were analyzed using repeated measurement data analysis of variance. The relation among the number of multiple impacts of ΔSVR, ΔCO factors, and ALDH2 gene polymorphisms was analyzed using a multivariate linear regression.\nSPSS software for Windows version 17.0 (SPSS 17.0 Statistical Product and Service Solutions, the software was developed by the company of SPSS in December 2008) was used for statistical analysis. Pearson χ2 test was employed to assess deviations from the Hardy–Weinberg equilibrium for genotypes. 
Clinical information was subsequently compared across the genotypes using Pearson χ2 or Fisher's exact test. Fisher's exact test was used when the expected number in any cell was <5. The observed number and differences of NTG efficacy among various ALDH2 genotypes were statistically analyzed using χ2 test. Patient gender and smoking and drinking history were analyzed by t-test regression. P < 0.05 was considered statistically significant. The changes in CO and SVR at 3 time points were analyzed using repeated measurement data analysis of variance. The relation among the number of multiple impacts of ΔSVR, ΔCO factors, and ALDH2 gene polymorphisms was analyzed using a multivariate linear regression.", "All patients comprising the noncoronary heart disease (NCHD) and the CHD groups from Jiaodong Peninsula of Northern China were investigated. All procedures conformed to the tenets of the Affiliated Hospital of Qingdao University. Informed consent was obtained from each subject. Furthermore, the study was approved by the Local Institutional Review Board. The exclusion criteria included serious cardiovascular and cerebrovascular diseases, endocrine and digestive disorders, kidney disease in the last 3 years, and NTG contraindications. The clinical diagnosis of obesity was determined from abnormal laboratory experiments before the start of the procedures. Accordingly, 356 patients with non-CHD (i.e., 234 men and 122 women; mean age: 57.01 ± 10.29 years) participated in the study. The inclusion criteria included: (1) Explicit exclusion of coronary artery disease, (2) without NTG contraindications (e.g., hypotension), and (3) without a history of serious alcohol abuse. The CHD group comprises 203 patients (i.e., 147 men and 56 women; mean age: 63.43 ± 9.61 years) diagnosed following the European Society of Cardiology Pocket Guide criteria of 2013. The inclusion criteria included the (1) presence of angina pectoris, (2) other possible causes of similar disease and angina pectoris (e.g., coronary spasm), (3) without NTG contraindications (e.g., hypotension), (4) sublingual NTG was used to cure angina attack, and (5) did not use angina or other antiangina drugs at the same time. The blood pressure of the patients with hypertension was controlled in the 140–120/90–70 mmHg range by taking nonoral nitrates (e.g., angiotensin II type 1 receptor blocker, calcium channel blocker, and diuretics) before the test. All risk factors were determined from routine tests, including blood, urine, and conventional and biochemical blood clotting prior to the study.", "Genomic DNA was extracted from each subject via blood sample. The genetic variants in the extra ALDH2 (G504A) region were identified using polymerase chain reaction and DNA sequencing in all cases. Genotype determination was measured by the Shanghai Biological Science and Technology Company (China).\nThe ALDH2 genotypes were divided into wild (GG) and mutant (GA/AA) types. Among 356 patients with NCHD, 196 were GG cases, and 160 were GA/AA. Among 203 patients with CHD, 103 were GG cases and 100 were GA/AA cases.", "The history of NTG treatment by objective pain relief was obtained from each participant through interview. This history was then divided into two subgroups of efficacy and nonefficacy. The hemodynamic parameter test included the checking of all patients administered with 0.5 mg sublingual NTG at 0, 5, and 15 minutes. CO and SVR were measured using the noninvasive hemodynamic parameter of BioZ.com (CardioDynamics Company). 
The patients were placed in a supine position with an empty stomach during the test. Medical professionals then attached the sensor to the patients’ necks and to the xiphoid level in the mid-axillary line at the junction on the bilinear sides of the chest according to user instructions. The contact skin was cleaned using alcohol wipes to ensure full conductivity. All personal data related to the test were entered. These data included gender, age, name, height, weight, and body surface area. The systolic and diastolic blood pressure (SBP and DBP), SVR, and CO were then tested and recorded. The SVR showed the state of the body's peripheral vasomotor reactions, whereas the CO depended on the cardiac preload and afterload and myocardial contractility.", "SPSS software for Windows version 17.0 (SPSS 17.0 Statistical Product and Service Solutions, the software was developed by the company of SPSS in December 2008) was used for statistical analysis. Pearson χ2 test was employed to assess deviations from the Hardy–Weinberg equilibrium for genotypes. Clinical information was subsequently compared across the genotypes using Pearson χ2 or Fisher's exact test. Fisher's exact test was used when the expected number in any cell was <5. The observed number and differences of NTG efficacy among various ALDH2 genotypes were statistically analyzed using χ2 test. Patient gender and smoking and drinking history were analyzed by t-test regression. P < 0.05 was considered statistically significant. The changes in CO and SVR at 3 time points were analyzed using repeated measurement data analysis of variance. The relation among the number of multiple impacts of ΔSVR, ΔCO factors, and ALDH2 gene polymorphisms was analyzed using a multivariate linear regression.", "The genetic polymorphism balancing test was conducted on the genetic polymorphisms to balance the test allele and the genotype distribution of 510 subjects. The genotype distribution of both groups inherited the Hardy–Weinberg equilibrium (χ2 = 1.59, 0.49, P > 0.05), which indicated that the sample had a population representative.\nOf the 356 cases in the NCHD group, 196 (55.1%) were of the GG genotype and 130 (36.5%) were of the GA genotype. Of the 130 cases, 30 (8.4%) accounted for the AA genotype with the G and A allele frequencies at 82.8%–17.2%, respectively. Of the 203 cases in the CHD group, 103 (50.7%) were of the GG genotype and 86 (42.4%) were of the GA genotypes. Of the 86, 14 cases (6.9%) accounted for the AA genotype with the G and A allele frequencies of 81.4%–18.6%, respectively. The comparison between the two groups showed that the differences in the genotype distribution were not statistically significant (P = 0.33). Moreover, the allele distribution between the two groups was not statistically significant (P = 0.64) [Table 1].\nALDH2 of the two groups rs671 gene mutation genotype and allele\nCHD: Coronary heart disease; ALDH2: Aldehyde dehydrogenase-2; GG: Wild-type; GA/AA: Mutant.\n GG and GA/AA clinical data comparison The GG and GA/AA types demonstrated that the GC and GA/AA sample size, gender, age, smoking, and alcohol consumption ratio difference were not statistically significant (P > 0.05) [Table 2].\nGeneral characteristics of clinical data\n*Means χ2 value, other test statistics value for t value. 
GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease; BSA: Body surface area.\nThe GG and GA/AA types demonstrated that the GC and GA/AA sample size, gender, age, smoking, and alcohol consumption ratio difference were not statistically significant (P > 0.05) [Table 2].\nGeneral characteristics of clinical data\n*Means χ2 value, other test statistics value for t value. GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease; BSA: Body surface area.\n Aldehyde dehydrogenase 2 genotype and nitroglycerin efficacy Accordingly, 103 cases of GG and 100 cases GA/AA were recorded in the CHD group. The NTG efficacies of the GG and the GA/AA subgroups were 79.4% and 50.6%, respectively. The difference in rapid NTG efficiency was statistically significant (P < 0.01) [Table 3].\nALDH2 gene volunteers cases of CHD distribution and efficacy of NTG\nCHD: Coronary heart disease; ALDH2: Aldehyde dehydrogenase-2; GG: Wild-type; GA/AA: Mutant; NTG: Nitroglycerin.\nAccordingly, 103 cases of GG and 100 cases GA/AA were recorded in the CHD group. The NTG efficacies of the GG and the GA/AA subgroups were 79.4% and 50.6%, respectively. The difference in rapid NTG efficiency was statistically significant (P < 0.01) [Table 3].\nALDH2 gene volunteers cases of CHD distribution and efficacy of NTG\nCHD: Coronary heart disease; ALDH2: Aldehyde dehydrogenase-2; GG: Wild-type; GA/AA: Mutant; NTG: Nitroglycerin.\n Changes in cardiac output, SVR, heart rate, systolic and diastolic blood pressure between groups The repeated measurement data analysis of variance showed that the CO difference of the GG genotype at all three time points was statistically significant (P < 0.05) and differed with the significant increase in CO at 0 and 5 minutes (P = 0.000). The CO difference in the GA/AA group at 0, 5, and 15 minutes was statistically significant (P < 0.05), which indicated an obvious decrease from the CO difference at 0 and 15 minutes in both groups (P < 0.001). The SVR differences in both groups were statistically significant (P < 0.001) at all three-time points. A significant decrease was observed at 5 minutes, which suggested a statistically significant difference (P < 0.01) [Table 4].\nComparison of CO and SVR in different time points\nCO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease.\nThe heart rate (HR), SBP, and DBP differences between the two groups at three-time points were not statistically significant (P > 0.05).\nThe repeated measurement data analysis of variance showed that the CO difference of the GG genotype at all three time points was statistically significant (P < 0.05) and differed with the significant increase in CO at 0 and 5 minutes (P = 0.000). The CO difference in the GA/AA group at 0, 5, and 15 minutes was statistically significant (P < 0.05), which indicated an obvious decrease from the CO difference at 0 and 15 minutes in both groups (P < 0.001). The SVR differences in both groups were statistically significant (P < 0.001) at all three-time points. 
A significant decrease was observed at 5 minutes, which suggested a statistically significant difference (P < 0.01) [Table 4].\nComparison of CO and SVR in different time points\nCO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease.\nThe heart rate (HR), SBP, and DBP differences between the two groups at three-time points were not statistically significant (P > 0.05).\n Relationship between aldehyde dehydrogenase 2 gene polymorphisms and ΔCO and ΔSVR ΔCO and ΔSVR included ΔCO1 = CO 0 − CO 5 minutes, ΔSVR1 = SVR 0 − SVR 5 minutes, ΔCO2 = CO 0 – CO 15 minutes, and ΔSVR2 = SVR 0 –SVR 15 minutes. ΔCO1 was used as the dependent variable, whereas ALDH2 polymorphism was employed as the independent variable. The ALDH2 polymorphism significantly influenced the dependent variable (P = 0.046, 0.000). Similarly, ΔCO2 was used as the dependent variable. ΔSVR1 and ΔSVR2 were utilized as independent variables. The ALDH2 polymorphism had a significant influence on the dependent variable in both groups (P < 0.05).\nΔCO and ΔSVR included ΔCO1 = CO 0 − CO 5 minutes, ΔSVR1 = SVR 0 − SVR 5 minutes, ΔCO2 = CO 0 – CO 15 minutes, and ΔSVR2 = SVR 0 –SVR 15 minutes. ΔCO1 was used as the dependent variable, whereas ALDH2 polymorphism was employed as the independent variable. The ALDH2 polymorphism significantly influenced the dependent variable (P = 0.046, 0.000). Similarly, ΔCO2 was used as the dependent variable. ΔSVR1 and ΔSVR2 were utilized as independent variables. The ALDH2 polymorphism had a significant influence on the dependent variable in both groups (P < 0.05).\n Noninvasive hemodynamic parameter (i.e., cardiac output and SVR) changes between the GG and GG/AA groups The comparisons of ΔCO1 and ΔCO2 and ΔSVR1 and ΔSVR2 in the GG and GG/AA types were statistically significant (P < 0.001) [Table 5].\nComparison of ΔCO, ΔSVR\nCO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease.\nThe comparisons of ΔCO1 and ΔCO2 and ΔSVR1 and ΔSVR2 in the GG and GG/AA types were statistically significant (P < 0.001) [Table 5].\nComparison of ΔCO, ΔSVR\nCO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease.", "The GG and GA/AA types demonstrated that the GC and GA/AA sample size, gender, age, smoking, and alcohol consumption ratio difference were not statistically significant (P > 0.05) [Table 2].\nGeneral characteristics of clinical data\n*Means χ2 value, other test statistics value for t value. GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease; BSA: Body surface area.", "Accordingly, 103 cases of GG and 100 cases GA/AA were recorded in the CHD group. The NTG efficacies of the GG and the GA/AA subgroups were 79.4% and 50.6%, respectively. The difference in rapid NTG efficiency was statistically significant (P < 0.01) [Table 3].\nALDH2 gene volunteers cases of CHD distribution and efficacy of NTG\nCHD: Coronary heart disease; ALDH2: Aldehyde dehydrogenase-2; GG: Wild-type; GA/AA: Mutant; NTG: Nitroglycerin.", "The repeated measurement data analysis of variance showed that the CO difference of the GG genotype at all three time points was statistically significant (P < 0.05) and differed with the significant increase in CO at 0 and 5 minutes (P = 0.000). 
The CO difference in the GA/AA group at 0, 5, and 15 minutes was statistically significant (P < 0.05), which indicated an obvious decrease from the CO difference at 0 and 15 minutes in both groups (P < 0.001). The SVR differences in both groups were statistically significant (P < 0.001) at all three-time points. A significant decrease was observed at 5 minutes, which suggested a statistically significant difference (P < 0.01) [Table 4].\nComparison of CO and SVR in different time points\nCO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease.\nThe heart rate (HR), SBP, and DBP differences between the two groups at three-time points were not statistically significant (P > 0.05).", "ΔCO and ΔSVR included ΔCO1 = CO 0 − CO 5 minutes, ΔSVR1 = SVR 0 − SVR 5 minutes, ΔCO2 = CO 0 – CO 15 minutes, and ΔSVR2 = SVR 0 –SVR 15 minutes. ΔCO1 was used as the dependent variable, whereas ALDH2 polymorphism was employed as the independent variable. The ALDH2 polymorphism significantly influenced the dependent variable (P = 0.046, 0.000). Similarly, ΔCO2 was used as the dependent variable. ΔSVR1 and ΔSVR2 were utilized as independent variables. The ALDH2 polymorphism had a significant influence on the dependent variable in both groups (P < 0.05).", "The comparisons of ΔCO1 and ΔCO2 and ΔSVR1 and ΔSVR2 in the GG and GG/AA types were statistically significant (P < 0.001) [Table 5].\nComparison of ΔCO, ΔSVR\nCO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease.", "Nitroglycerin is a classic drug that cures acute angina attack. This drug generates NO or NO-related media in the body. Moreover, NTG directly acts upon guanylate cyclase and increases the amount of second messenger cyclic guanosine monophosphate in the vascular smooth muscle, thereby activating downstream target proteins (i.e., protein kinase G and AMP-dependent protein kinase). NTG also reduces the Ca concentration in the smooth muscle cells while reducing myosin sensitivity toward Ca, as well as vascular smooth muscle dilation.[11]\nAldehyde dehydrogenase 2 is a key enzyme mediating the biotransformation of NTG,[12] which consequently mediates the biological activation of NTG. Thus, the biological activation of NTG reduces the occurrence of NTG tolerance. ALDH2 prevents NTG conversion into nitrite and 1,2-bis nitroglycerin (1,2-GDN), which reduce NTG-induced vasodilation. Mackenzie et al.[13] used disulfiram-ALDH2 and demonstrated that ALDH2 partially inhibited the effects of NTG on the human body. ALDH2 gene knockout studies in mice have shown the significance and necessity of ALDH2 in vascular activity at therapeutic doses of NTG.[314] The human ALDH2 gene is located in chromosome 12 and contains 13 exons. Exon 12 of its outer 1510 guanine (G) mutated to adenine (A) facilitates the replacement of the protein encoded by this gene (504), glutamate (Glu), with lysine (Lys) (i.e., Glu504 Lys [rs671]). 
The single nucleotide polymorphism mutation rate in East Asia reaches up to 30%–50%.[15] The mutations cause a significant decrease in ALDH2 enzyme activity, which diminishes the biological function of ALDH2.[16] The results of this study provide clinical evidence that ALDH2 polymorphisms influence the NTG effect.\nRecent animal studies abroad have reported that the ALDH2 gene serves an important function in NTG on the pulmonary vascular bed and circulatory system biotransformation.[17] The ALDH2 gene mutation reduces the vasodilator NTG.[1819] Li et al.[19] selected 80 cases of patients diagnosed with coronary artery disease with a history of stable angina in 2006. The ALDH2 gene polymorphism and enzyme activity were detected in patients from each group. The NTG effect on all subjects in the ALDH2 wild-type group was significantly higher than that in the invalid proportion of the wild-type group. Ji et al.[7] enrolled 113 people diagnosed with coronary artery disease in 2010. They found that the ALDH2 gene polymorphisms had no association with NTG effectiveness. Mackenzie et al.[13] also observed that the clinical use of NTG was not found in the ALDH2 gene and had a high mutation rate in the Japanese population. The preceding experiment illustrates that the relationship between the ALDH2 gene polymorphisms and NTG efficacy remains controversial. This experiment also selects subjects with noncoronary and CHD to study sublingual NTG aging and further proves that the ALDH2 gene polymorphisms are related to the NTG clinical effect. The wild-type effect is better than that of the mutant type.\nThe subjects are divided into the NCHD and CHD groups. The determined ALDH2 genotype is divided into the wild and mutant groups. The ALDH2 genotypes have no significant distribution difference, similar to the findings of previous studies.[20] The NTG effect rate in the GG-type patients of the CHD group was significantly higher than that of the GA/AA type. This result is similar to that of the previous studies.[21] The noninvasive hemodynamic detector (i.e., impedance cardiography [ICG]) is employed to evaluate the NTG effect. The CO and SVR changes and other objective indicators are monitored. Furthermore, determining the degree of pain relief as a subjective factor is avoided. This study selects three representative time points (i.e., CO 0, CO 5, and CO 15 minutes) according to the pharmacokinetic characteristics of NTG. The changes in noninvasive dynamic indicators (i.e., CO and SVR) are then combined. Such combination indicates that the CO per minute reveals differences in sublingual NTG efficacy among subjects with different genotypes. Experimental results show that the ALDH2 polymorphisms are relevant to CO and SVR, which confirms the CO NTG effect.\nThe ICG is an important device for body impedance measurement in the cardiovascular dynamics of blood flow.[22] The cardiac hemodynamic status is measured on the basis of changes in systolic and diastolic thoracic impedance.[9] The ICG enables the diagnosis of heart failure and assesses the severity of coronary artery disease[23] to evaluate the efficacy of drug treatment and prognosis.[24] This study avoids the influence of subjective factors as much as possible by observing the CO per minute and the noninvasive hemodynamic index changes in SVR. The results of the observation are further employed to determine the differences among volunteers with different gene types after sublingual NTG administration. 
The obtained results are highly reliable.\nThe wild-type CO increases, whereas the mutant CO decreases after sublingual NTG administration. The results suggest the following: (1) Wild-type: The sublingual NTG hemorheology (HR) reflex significantly increased, whereas the stroke volume (SV) relatively and insignificantly decreased. (2) The formula CO = SV × HR demonstrates product increases. Mutant type: SV significantly was reduced, whereas HR was relatively and insignificantly increased after sublingual NTG administration.\n Limitations of the study Only considered ALDH2 polymorphism to the effect of metabolism, and in vitro and in vivo studies suggest that in vivo biotransformation of NTG is also affected by the following process a variety of enzymes, such as: Glutathione-S-transferase, cytochrome P450 reductase, xanthine oxidoreductase enzyme, including a number of species, this study does not exclude the other enzymes in the metabolism of NTG affected.\nThis research is a clinical experimental study, and the sample size is inadequate. The sample size should be expanded for further investigation. A selection bias in this study as regards the NCHD group and the subject might have affected NTG efficacy.\nThe researchers of this study plan to expand the sample size and apply invasive monitoring techniques to monitor all indicators. Accordingly, a more in-depth study of the metabolic ALDH2 pathways and processes will be conducted.\nOnly considered ALDH2 polymorphism to the effect of metabolism, and in vitro and in vivo studies suggest that in vivo biotransformation of NTG is also affected by the following process a variety of enzymes, such as: Glutathione-S-transferase, cytochrome P450 reductase, xanthine oxidoreductase enzyme, including a number of species, this study does not exclude the other enzymes in the metabolism of NTG affected.\nThis research is a clinical experimental study, and the sample size is inadequate. The sample size should be expanded for further investigation. A selection bias in this study as regards the NCHD group and the subject might have affected NTG efficacy.\nThe researchers of this study plan to expand the sample size and apply invasive monitoring techniques to monitor all indicators. Accordingly, a more in-depth study of the metabolic ALDH2 pathways and processes will be conducted.", "Only considered ALDH2 polymorphism to the effect of metabolism, and in vitro and in vivo studies suggest that in vivo biotransformation of NTG is also affected by the following process a variety of enzymes, such as: Glutathione-S-transferase, cytochrome P450 reductase, xanthine oxidoreductase enzyme, including a number of species, this study does not exclude the other enzymes in the metabolism of NTG affected.\nThis research is a clinical experimental study, and the sample size is inadequate. The sample size should be expanded for further investigation. A selection bias in this study as regards the NCHD group and the subject might have affected NTG efficacy.\nThe researchers of this study plan to expand the sample size and apply invasive monitoring techniques to monitor all indicators. Accordingly, a more in-depth study of the metabolic ALDH2 pathways and processes will be conducted.", "Aldehyde dehydrogenase 2 (G504A) gene polymorphism is associated with changes in noninvasive hemodynamic parameters (i.e., CO and SVR) after intervention with sublingual NTG. This gene polymorphism may influence the effect of NTG intervention on Northern Chinese Han population." ]
[ "intro", "methods", null, null, null, null, "results", null, null, null, null, null, "discussion", null, "conclusion" ]
[ "Aldehyde Dehydrogenase 2", "Coronary Disease", "Genetic Polymorphism", "Hemodynamic", "Nitroglycerin" ]
INTRODUCTION: Increasing evidence suggests that the aldehyde dehydrogenase 2 (ALDH2) (G-504-A) gene polymorphism is associated with nitroglycerin (NTG) inhibition. ALDH2 is a key enzyme in the human body that facilitates the biological metabolism of NTG, which serves an important function in NTG efficacy.[1] NTG is one of the few immediate treatments for acute angina. ALDH2 is an important mitochondrial aldehyde oxidase, the gene of which is located on chromosome 12q24. The gene has a high degree of genetic polymorphism.[2] The ALDH2 gene is found outside the 12th exon and exists in 504 point mutations with base substitutions of G to A. The ALDH2 of the GG genotype (i.e., wild-type) has normal activity on the NTG. The GA genotype (i.e., mutant) only has 6% activity, whereas the AA genotype (i.e., mutant) loses all its activities.[3] The clinical research analysis of Jo et al.[4] revealed that mutant ALDH2 is an independent risk factor for myocardial infarction in elderly men in Korea. Chen et al.[5] used animal experiments to demonstrate that the ALDH2 has a protective effect on myocardial ischemia. However, some reports still contradict the results that the correlation between the ALDH2 gene polymorphisms and NTG or its clinical efficacy is different. However, data on NTG measurement by pain relief are highly subjective.[67] A thoracic impedance method based on the noninvasive hemodynamic detector (i.e., BioZ.com), which monitors real-time hemodynamic and index parameters of patients, was used in this study. This method was employed to assess hemodynamic status and ventricular function. Cardiac output (CO) is representative of the CO, whereas peripheral vascular resistance (SVR) is representative of the cardiac load. Both parameters have application values in cardiac care.[8910] Therefore, this study verified NTG efficacy using noninvasive hemodynamic monitoring results of CO and SVR with objective judgment. Furthermore, this study hinders the effect of subjective factors. METHODS: Subjects All patients comprising the noncoronary heart disease (NCHD) and the CHD groups from Jiaodong Peninsula of Northern China were investigated. All procedures conformed to the tenets of the Affiliated Hospital of Qingdao University. Informed consent was obtained from each subject. Furthermore, the study was approved by the Local Institutional Review Board. The exclusion criteria included serious cardiovascular and cerebrovascular diseases, endocrine and digestive disorders, kidney disease in the last 3 years, and NTG contraindications. The clinical diagnosis of obesity was determined from abnormal laboratory experiments before the start of the procedures. Accordingly, 356 patients with non-CHD (i.e., 234 men and 122 women; mean age: 57.01 ± 10.29 years) participated in the study. The inclusion criteria included: (1) Explicit exclusion of coronary artery disease, (2) without NTG contraindications (e.g., hypotension), and (3) without a history of serious alcohol abuse. The CHD group comprises 203 patients (i.e., 147 men and 56 women; mean age: 63.43 ± 9.61 years) diagnosed following the European Society of Cardiology Pocket Guide criteria of 2013. The inclusion criteria included the (1) presence of angina pectoris, (2) other possible causes of similar disease and angina pectoris (e.g., coronary spasm), (3) without NTG contraindications (e.g., hypotension), (4) sublingual NTG was used to cure angina attack, and (5) did not use angina or other antiangina drugs at the same time. 
In patients with hypertension, blood pressure was controlled within the 120-140/70-90 mmHg range before the test using non-nitrate drugs (e.g., angiotensin II type 1 receptor blockers, calcium channel blockers, and diuretics). All risk factors were determined from routine tests prior to the study, including blood and urine analyses, coagulation tests, and blood biochemistry. Genetic analysis. Genomic DNA was extracted from a blood sample of each subject. Genetic variants in the ALDH2 (G504A) region were identified by polymerase chain reaction and DNA sequencing in all cases. Genotyping was performed by the Shanghai Biological Science and Technology Company (China). The ALDH2 genotypes were divided into wild (GG) and mutant (GA/AA) types. Among the 356 patients with NCHD, 196 were GG and 160 were GA/AA; among the 203 patients with CHD, 103 were GG and 100 were GA/AA. Clinical and hemodynamic assessment. The history of NTG treatment, judged by pain relief, was obtained from each participant through interview, and patients were divided into two subgroups: efficacy and nonefficacy.
For the hemodynamic test, all patients received 0.5 mg sublingual NTG, and measurements were taken at 0, 5, and 15 minutes. CO and SVR were measured with the BioZ.com noninvasive hemodynamic monitor (CardioDynamics Company). Patients were examined in a supine position on an empty stomach. Medical professionals attached the sensors to the patient's neck and at the xiphoid level in the midaxillary line on both sides of the chest, according to the user instructions. The contact skin was cleaned with alcohol wipes to ensure full conductivity. All personal data related to the test were entered, including gender, age, name, height, weight, and body surface area. Systolic and diastolic blood pressure (SBP and DBP), SVR, and CO were then measured and recorded. SVR reflects the state of the body's peripheral vasomotor reactions, whereas CO depends on cardiac preload, afterload, and myocardial contractility. Statistical analysis. SPSS for Windows version 17.0 (SPSS Inc., Chicago, IL, USA; released December 2008) was used for statistical analysis. The Pearson χ2 test was employed to assess deviations from Hardy-Weinberg equilibrium for the genotypes. Clinical information was compared across genotypes using the Pearson χ2 test or Fisher's exact test; Fisher's exact test was used when the expected number in any cell was <5. Differences in NTG efficacy among the ALDH2 genotypes were analyzed using the χ2 test. Gender, smoking history, and drinking history were analyzed with t-tests or χ2 tests as appropriate. P < 0.05 was considered statistically significant. Changes in CO and SVR across the three time points were analyzed using repeated-measures analysis of variance. The relation between ΔSVR and ΔCO and ALDH2 gene polymorphisms, adjusting for multiple influencing factors, was analyzed using multivariate linear regression.
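As an illustration of the test-selection rule just described (χ2 unless an expected cell count falls below 5, in which case Fisher's exact test), the following Python sketch shows one way to encode it. The original analysis was run in SPSS; this helper, its name, and its structure are our own illustration rather than the authors' script.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def compare_counts(table):
    """Test a 2x2 contingency table, falling back to Fisher's exact test
    when any expected cell count is below 5, as stated in the Methods."""
    table = np.asarray(table)
    _, _, _, expected = chi2_contingency(table, correction=False)
    if (expected < 5).any():
        _, p = fisher_exact(table)
        return "fisher", p
    stat, p, _, _ = chi2_contingency(table, correction=False)
    return "chi2", p
```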
RESULTS: Allele and genotype distributions of the subjects were tested for Hardy-Weinberg equilibrium. The genotype distributions of both groups conformed to Hardy-Weinberg equilibrium (χ2 = 1.59 and 0.49, P > 0.05), indicating that the sample was representative of the population. Of the 356 cases in the NCHD group, 196 (55.1%) were of the GG genotype, 130 (36.5%) of the GA genotype, and 30 (8.4%) of the AA genotype, with G and A allele frequencies of 82.8% and 17.2%, respectively. Of the 203 cases in the CHD group, 103 (50.7%) were of the GG genotype, 86 (42.4%) of the GA genotype, and 14 (6.9%) of the AA genotype, with G and A allele frequencies of 81.4% and 18.6%, respectively. The differences between the two groups in genotype distribution (P = 0.33) and allele distribution (P = 0.64) were not statistically significant [Table 1]. Table 1: ALDH2 rs671 genotype and allele distributions in the two groups. CHD: Coronary heart disease; ALDH2: Aldehyde dehydrogenase-2; GG: Wild-type; GA/AA: Mutant. GG and GA/AA clinical data comparison. Between the GG and GA/AA types, the differences in sample size, gender, age, smoking, and alcohol consumption were not statistically significant (P > 0.05) [Table 2]. Table 2: General characteristics of clinical data. *χ2 value; other test statistics are t values. GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease; BSA: Body surface area.
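The reported equilibrium statistics can be checked against the genotype counts given above. The short sketch below recomputes the Hardy-Weinberg χ2 for each group from first principles and reproduces the values 1.59 and 0.49 reported in the text; the script itself is ours, not part of the original analysis.

```python
# Hardy-Weinberg chi-square recomputed from the reported genotype counts.
groups = {"NCHD": (196, 130, 30), "CHD": (103, 86, 14)}  # (GG, GA, AA)
for label, (gg, ga, aa) in groups.items():
    n = gg + ga + aa
    p = (2 * gg + ga) / (2 * n)                  # G allele frequency
    expected = (n * p ** 2, 2 * n * p * (1 - p), n * (1 - p) ** 2)
    chi2_stat = sum((o - e) ** 2 / e for o, e in zip((gg, ga, aa), expected))
    print(label, round(chi2_stat, 2))            # NCHD ~1.59, CHD ~0.49
```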
Aldehyde dehydrogenase 2 genotype and nitroglycerin efficacy. There were 103 GG cases and 100 GA/AA cases in the CHD group. The NTG efficacy rates of the GG and GA/AA subgroups were 79.4% and 50.6%, respectively, and this difference was statistically significant (P < 0.01) [Table 3]. Table 3: Distribution of ALDH2 genotypes and NTG efficacy in the CHD group. CHD: Coronary heart disease; ALDH2: Aldehyde dehydrogenase-2; GG: Wild-type; GA/AA: Mutant; NTG: Nitroglycerin. Changes in cardiac output, SVR, heart rate, and systolic and diastolic blood pressure between groups. Repeated-measures analysis of variance showed that the change in CO for the GG genotype across the three time points was statistically significant (P < 0.05), with a significant increase in CO between 0 and 5 minutes (P = 0.000). The change in CO for the GA/AA group at 0, 5, and 15 minutes was also statistically significant (P < 0.05), with an obvious decrease between 0 and 15 minutes in both the NCHD and CHD groups (P < 0.001). The changes in SVR in both groups were statistically significant across the three time points (P < 0.001), with a significant decrease at 5 minutes (P < 0.01) [Table 4]. Table 4: Comparison of CO and SVR at different time points. CO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease. The differences in heart rate (HR), SBP, and DBP between the two groups at the three time points were not statistically significant (P > 0.05).
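The repeated-measures analysis of variance used for the three time points can also be sketched directly. The function below is a minimal one-way repeated-measures ANOVA over an n-subjects x k-timepoints array; the actual CO and SVR measurements are not reproduced in the text, so any input array passed to it would be hypothetical.

```python
import numpy as np
from scipy.stats import f as f_dist

def rm_anova(x):
    """One-way repeated-measures ANOVA for an (n_subjects, k_timepoints)
    array, as applied to CO and SVR measured at 0, 5, and 15 minutes."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_time = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between time points
    ss_subj = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_error = ((x - grand) ** 2).sum() - ss_time - ss_subj
    df_time, df_error = k - 1, (n - 1) * (k - 1)
    f_stat = (ss_time / df_time) / (ss_error / df_error)
    return f_stat, f_dist.sf(f_stat, df_time, df_error)
```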
Relationship between aldehyde dehydrogenase 2 gene polymorphisms and ΔCO and ΔSVR. The change scores were defined as ΔCO1 = CO(0 min) − CO(5 min), ΔSVR1 = SVR(0 min) − SVR(5 min), ΔCO2 = CO(0 min) − CO(15 min), and ΔSVR2 = SVR(0 min) − SVR(15 min). With ΔCO1 as the dependent variable and the ALDH2 polymorphism as the independent variable, the polymorphism significantly influenced the dependent variable (P = 0.046 and 0.000). The same analysis with ΔCO2, ΔSVR1, and ΔSVR2 as dependent variables showed that the ALDH2 polymorphism had a significant influence in both groups (P < 0.05). Noninvasive hemodynamic parameter (cardiac output and SVR) changes between the GG and GA/AA groups. The differences in ΔCO1, ΔCO2, ΔSVR1, and ΔSVR2 between the GG and GA/AA types were statistically significant (P < 0.001) [Table 5]. Table 5: Comparison of ΔCO and ΔSVR. CO: Cardiac output; SVR: Peripheral vascular resistance; GG: Wild-type; GA/AA: Mutant; NCHD: Noncoronary heart disease; CHD: Coronary heart disease.
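The multivariate linear regression relating the change scores to the ALDH2 genotype can be illustrated as follows. The data below are simulated purely for demonstration (genotype coded 0 = GG, 1 = GA/AA; age standing in for the other influencing factors mentioned in the Methods); the original analysis was performed in SPSS, so this is a sketch of the method, not the authors' computation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 203                                # CHD group size, used only for illustration
genotype = rng.integers(0, 2, n)       # 0 = GG (wild), 1 = GA/AA (mutant)
age = rng.normal(63, 10, n)
# Simulated delta-CO with a negative genotype effect, in the direction the Results report.
delta_co = 0.5 - 0.9 * genotype + 0.01 * age + rng.normal(0.0, 0.5, n)

X = sm.add_constant(np.column_stack([genotype, age]))
fit = sm.OLS(delta_co, X).fit()
print(fit.params, fit.pvalues)         # the genotype coefficient tests its influence on delta-CO
```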
DISCUSSION: Nitroglycerin is a classic drug for relieving acute angina attacks. It generates NO or NO-related mediators in the body. NTG acts on guanylate cyclase and increases the amount of the second messenger cyclic guanosine monophosphate in vascular smooth muscle, thereby activating downstream target proteins (i.e., protein kinase G and AMP-dependent protein kinase). NTG also reduces the Ca concentration in smooth muscle cells and reduces myosin sensitivity to Ca, leading to vascular smooth muscle relaxation.[11] Aldehyde dehydrogenase 2 is a key enzyme mediating the biotransformation of NTG[12] and consequently its biological activation, which reduces the occurrence of NTG tolerance. ALDH2 converts NTG into nitrite and 1,2-glyceryl dinitrate (1,2-GDN), mediating NTG-induced vasodilation. Mackenzie et al.[13] used disulfiram, an ALDH2 inhibitor, and demonstrated that ALDH2 inhibition partially blocked the effects of NTG in humans. ALDH2 gene-knockout studies in mice have shown the significance and necessity of ALDH2 for the vascular activity of NTG at therapeutic doses.[3,14] The human ALDH2 gene is located on chromosome 12 and contains 13 exons. In exon 12, the guanine (G) at nucleotide 1510 is mutated to adenine (A), replacing glutamate (Glu) with lysine (Lys) at position 504 of the encoded protein (Glu504Lys, rs671). The mutation rate of this single-nucleotide polymorphism reaches 30%-50% in East Asia.[15] The mutation causes a significant decrease in ALDH2 enzyme activity, which diminishes the biological function of ALDH2.[16] The results of this study provide clinical evidence that ALDH2 polymorphisms influence the NTG effect.
Recent animal studies abroad have reported that the ALDH2 gene plays an important role in the biotransformation of NTG in the pulmonary vascular bed and the systemic circulation.[17] ALDH2 gene mutation reduces the vasodilatory effect of NTG.[18,19] Li et al.[19] selected 80 patients diagnosed with coronary artery disease and a history of stable angina in 2006 and detected the ALDH2 gene polymorphism and enzyme activity in each group; the effective rate of NTG was significantly higher in the ALDH2 wild-type group than in the mutant group. Ji et al.[7] enrolled 113 people diagnosed with coronary artery disease in 2010 and found that ALDH2 gene polymorphisms had no association with NTG effectiveness. Mackenzie et al.[13] likewise found no association between the ALDH2 gene and the clinical effect of NTG in the Japanese population, in which the mutation rate is high. These studies illustrate that the relationship between ALDH2 gene polymorphisms and NTG efficacy remains controversial. The present study selected subjects with and without CHD to study the efficacy of sublingual NTG and further showed that ALDH2 gene polymorphisms are related to the clinical effect of NTG, with the wild type responding better than the mutant type. The subjects were divided into NCHD and CHD groups, and the determined ALDH2 genotypes were divided into wild and mutant groups. The ALDH2 genotype distributions showed no significant difference, similar to the findings of previous studies.[20] The NTG efficacy rate in GG-type patients of the CHD group was significantly higher than that in the GA/AA type, which is also consistent with previous studies.[21] A noninvasive hemodynamic detector (impedance cardiography, ICG) was employed to evaluate the NTG effect by monitoring changes in CO, SVR, and other objective indicators, thereby avoiding the subjectivity of judging the degree of pain relief. Three representative time points (0, 5, and 15 minutes) were selected according to the pharmacokinetic characteristics of NTG, and the changes in the noninvasive dynamic indicators CO and SVR were combined to reveal differences in sublingual NTG efficacy among subjects with different genotypes. The experimental results show that the ALDH2 polymorphisms are related to CO and SVR, which objectively confirms the NTG effect. The ICG is an important device for measuring body impedance in cardiovascular hemodynamics.[22] Cardiac hemodynamic status is measured on the basis of changes in systolic and diastolic thoracic impedance.[9] The ICG enables the diagnosis of heart failure, assesses the severity of coronary artery disease,[23] and can evaluate the efficacy of drug treatment and prognosis.[24] By observing changes in CO and in the noninvasive hemodynamic index SVR, this study avoided the influence of subjective factors as much as possible; the differences among volunteers with different genotypes after sublingual NTG administration could thus be determined, and the results obtained are highly reliable. The wild-type CO increased, whereas the mutant-type CO decreased after sublingual NTG administration. These results suggest the following: (1) Wild type: after sublingual NTG administration, the heart rate (HR) increased significantly by reflex, whereas the stroke volume (SV) decreased only slightly;
by the formula CO = SV × HR, the product CO therefore increased. (2) Mutant type: SV was significantly reduced, whereas HR increased only slightly after sublingual NTG administration, so the product CO decreased. Limitations of the study. This study considered only the effect of the ALDH2 polymorphism on NTG metabolism. In vitro and in vivo studies suggest that the biotransformation of NTG is also affected by a variety of other enzymes, including glutathione S-transferase, cytochrome P450 reductase, and xanthine oxidoreductase; the influence of these enzymes on NTG metabolism cannot be excluded here. In addition, this is a clinical experimental study with an inadequate sample size, which should be expanded in further investigations. A selection bias with regard to the NCHD group and the subjects might also have affected the NTG efficacy results. The researchers plan to expand the sample size and apply invasive monitoring techniques to monitor all indicators, allowing a more in-depth study of the metabolic pathways and processes of ALDH2. CONCLUSION: Aldehyde dehydrogenase 2 (G504A) gene polymorphism is associated with changes in noninvasive hemodynamic parameters (i.e., CO and SVR) after intervention with sublingual NTG. This gene polymorphism may influence the effect of NTG intervention in the Northern Chinese Han population.
Background: Nitroglycerin (NTG) is one of the few immediate treatments for acute angina. Aldehyde dehydrogenase 2 (ALDH2) is a key enzyme in the human body that facilitates the biological metabolism of NTG, and this biological mechanism serves an important function in NTG efficacy. Reports on the correlation between ALDH2 gene polymorphisms and the clinical efficacy of NTG remain contradictory, and data on NTG efficacy measured by pain relief are subjective. This study aimed to investigate the influence of the ALDH2 gene polymorphism on intervention with sublingual NTG using the noninvasive hemodynamic parameters of cardiac output (CO) and systemic vascular resistance (SVR) in a Northern Chinese Han population. Methods: This study selected 559 patients from the Affiliated Hospital of Qingdao University: 203 patients with coronary heart disease (CHD) and 356 non-CHD (NCHD) cases. All patients' ALDH2 genotypes (G504A) were detected and divided into two types: wild (GG) and mutant (GA/AA). In the CHD group, 103 were wild-type cases and 100 were mutant-type cases; among the NCHD volunteers, 196 cases were wild type and 160 were mutant type. A noninvasive hemodynamic detector was used to monitor CO and SVR at the 0-, 5-, and 15-minute time points after medication with 0.5 mg sublingual NTG. The two indicators, CO and SVR, were used for a comparative analysis across all genotypes. Results: Both CO and SVR differed significantly between the wild and mutant genotypes at 5 and 15 minutes after intervention with sublingual NTG in the NCHD (F = 16.460, 15.003; P = 0.000, 0.000) and CHD groups (F = 194.482, 60.582; P = 0.000, 0.000). All CO values in the wild-type cases of both the NCHD and CHD groups increased, whereas those in the mutant type decreased; the CO and ΔCO differences were statistically significant (P < 0.05; P < 0.05). The changes in SVR and ΔSVR between the wild- and mutant-type cases at all time points in both the NCHD and CHD groups were statistically significant (P < 0.05; P < 0.05). Conclusions: The ALDH2 (G504A) gene polymorphism is associated with changes in noninvasive hemodynamic parameters (i.e., CO and SVR) after intervention with sublingual NTG. This gene polymorphism may influence the effect of NTG intervention in the Northern Chinese Han population.
INTRODUCTION: Increasing evidence suggests that the aldehyde dehydrogenase 2 (ALDH2) (G-504-A) gene polymorphism is associated with the efficacy of nitroglycerin (NTG). ALDH2 is a key enzyme in the human body that mediates the biological metabolism of NTG and therefore plays an important role in NTG efficacy.[1] NTG is one of the few immediate treatments for acute angina. ALDH2 is an important mitochondrial aldehyde dehydrogenase whose gene is located on chromosome 12q24 and shows a high degree of genetic polymorphism.[2] The polymorphism lies in exon 12 of the ALDH2 gene, a point mutation at position 504 with a G-to-A base substitution. ALDH2 of the GG genotype (i.e., wild-type) has normal activity toward NTG; the GA genotype (i.e., heterozygous mutant) retains only 6% of that activity, whereas the AA genotype (i.e., homozygous mutant) loses all activity.[3] The clinical analysis of Jo et al.[4] revealed that mutant ALDH2 is an independent risk factor for myocardial infarction in elderly Korean men. Chen et al.[5] demonstrated in animal experiments that ALDH2 has a protective effect against myocardial ischemia. However, reports on the correlation between ALDH2 gene polymorphisms and the clinical efficacy of NTG remain contradictory, and data on NTG efficacy measured by pain relief are highly subjective.[6,7] In this study, a thoracic impedance method based on a noninvasive hemodynamic detector (BioZ.com), which monitors hemodynamic parameters of patients in real time, was used to assess hemodynamic status and ventricular function. Cardiac output (CO) reflects cardiac pump function, whereas peripheral vascular resistance (SVR) reflects cardiac load; both parameters have practical value in cardiac care.[8-10] Therefore, this study verified NTG efficacy objectively using noninvasive hemodynamic monitoring of CO and SVR, thereby minimizing the influence of subjective factors. CONCLUSION: Aldehyde dehydrogenase 2 (G504A) gene polymorphism is associated with changes in noninvasive hemodynamic parameters (i.e., CO and SVR) after intervention with sublingual NTG. This gene polymorphism may influence the effect of NTG intervention in the Northern Chinese Han population.
Background: Nitroglycerin (NTG) is one of the few immediate treatments for acute angina. Aldehyde dehydrogenase 2 (ALDH2) is a key enzyme in the human body that facilitates the biological metabolism of NTG, and this biological mechanism serves an important function in NTG efficacy. Reports on the correlation between ALDH2 gene polymorphisms and the clinical efficacy of NTG remain contradictory, and data on NTG efficacy measured by pain relief are subjective. This study aimed to investigate the influence of the ALDH2 gene polymorphism on intervention with sublingual NTG using the noninvasive hemodynamic parameters of cardiac output (CO) and systemic vascular resistance (SVR) in a Northern Chinese Han population. Methods: This study selected 559 patients from the Affiliated Hospital of Qingdao University: 203 patients with coronary heart disease (CHD) and 356 non-CHD (NCHD) cases. All patients' ALDH2 genotypes (G504A) were detected and divided into two types: wild (GG) and mutant (GA/AA). In the CHD group, 103 were wild-type cases and 100 were mutant-type cases; among the NCHD volunteers, 196 cases were wild type and 160 were mutant type. A noninvasive hemodynamic detector was used to monitor CO and SVR at the 0-, 5-, and 15-minute time points after medication with 0.5 mg sublingual NTG. The two indicators, CO and SVR, were used for a comparative analysis across all genotypes. Results: Both CO and SVR differed significantly between the wild and mutant genotypes at 5 and 15 minutes after intervention with sublingual NTG in the NCHD (F = 16.460, 15.003; P = 0.000, 0.000) and CHD groups (F = 194.482, 60.582; P = 0.000, 0.000). All CO values in the wild-type cases of both the NCHD and CHD groups increased, whereas those in the mutant type decreased; the CO and ΔCO differences were statistically significant (P < 0.05; P < 0.05). The changes in SVR and ΔSVR between the wild- and mutant-type cases at all time points in both the NCHD and CHD groups were statistically significant (P < 0.05; P < 0.05). Conclusions: The ALDH2 (G504A) gene polymorphism is associated with changes in noninvasive hemodynamic parameters (i.e., CO and SVR) after intervention with sublingual NTG. This gene polymorphism may influence the effect of NTG intervention in the Northern Chinese Han population.
6,660
473
[ 346, 107, 215, 173, 93, 99, 221, 114, 73, 161 ]
15
[ "ntg", "aldh2", "co", "gg", "aa", "svr", "ga", "significant", "ga aa", "disease" ]
[ "aldh2 gene", "efficacy aldh2 genotypes", "aldh2 gg genotype", "determined aldh2 genotype", "polymorphism aldh2 gene" ]
[CONTENT] Aldehyde Dehydrogenase 2 | Coronary Disease | Genetic Polymorphism | Hemodynamic | Nitroglycerin [SUMMARY]
[CONTENT] Aldehyde Dehydrogenase 2 | Coronary Disease | Genetic Polymorphism | Hemodynamic | Nitroglycerin [SUMMARY]
[CONTENT] Aldehyde Dehydrogenase 2 | Coronary Disease | Genetic Polymorphism | Hemodynamic | Nitroglycerin [SUMMARY]
[CONTENT] Aldehyde Dehydrogenase 2 | Coronary Disease | Genetic Polymorphism | Hemodynamic | Nitroglycerin [SUMMARY]
[CONTENT] Aldehyde Dehydrogenase 2 | Coronary Disease | Genetic Polymorphism | Hemodynamic | Nitroglycerin [SUMMARY]
[CONTENT] Aldehyde Dehydrogenase 2 | Coronary Disease | Genetic Polymorphism | Hemodynamic | Nitroglycerin [SUMMARY]
[CONTENT] Aged | Aldehyde Dehydrogenase | Aldehyde Dehydrogenase, Mitochondrial | Asian People | Female | Hemodynamics | Humans | Male | Middle Aged | Nitroglycerin | Polymorphism, Genetic [SUMMARY]
[CONTENT] Aged | Aldehyde Dehydrogenase | Aldehyde Dehydrogenase, Mitochondrial | Asian People | Female | Hemodynamics | Humans | Male | Middle Aged | Nitroglycerin | Polymorphism, Genetic [SUMMARY]
[CONTENT] Aged | Aldehyde Dehydrogenase | Aldehyde Dehydrogenase, Mitochondrial | Asian People | Female | Hemodynamics | Humans | Male | Middle Aged | Nitroglycerin | Polymorphism, Genetic [SUMMARY]
[CONTENT] Aged | Aldehyde Dehydrogenase | Aldehyde Dehydrogenase, Mitochondrial | Asian People | Female | Hemodynamics | Humans | Male | Middle Aged | Nitroglycerin | Polymorphism, Genetic [SUMMARY]
[CONTENT] Aged | Aldehyde Dehydrogenase | Aldehyde Dehydrogenase, Mitochondrial | Asian People | Female | Hemodynamics | Humans | Male | Middle Aged | Nitroglycerin | Polymorphism, Genetic [SUMMARY]
[CONTENT] Aged | Aldehyde Dehydrogenase | Aldehyde Dehydrogenase, Mitochondrial | Asian People | Female | Hemodynamics | Humans | Male | Middle Aged | Nitroglycerin | Polymorphism, Genetic [SUMMARY]
[CONTENT] aldh2 gene | efficacy aldh2 genotypes | aldh2 gg genotype | determined aldh2 genotype | polymorphism aldh2 gene [SUMMARY]
[CONTENT] aldh2 gene | efficacy aldh2 genotypes | aldh2 gg genotype | determined aldh2 genotype | polymorphism aldh2 gene [SUMMARY]
[CONTENT] aldh2 gene | efficacy aldh2 genotypes | aldh2 gg genotype | determined aldh2 genotype | polymorphism aldh2 gene [SUMMARY]
[CONTENT] aldh2 gene | efficacy aldh2 genotypes | aldh2 gg genotype | determined aldh2 genotype | polymorphism aldh2 gene [SUMMARY]
[CONTENT] aldh2 gene | efficacy aldh2 genotypes | aldh2 gg genotype | determined aldh2 genotype | polymorphism aldh2 gene [SUMMARY]
[CONTENT] aldh2 gene | efficacy aldh2 genotypes | aldh2 gg genotype | determined aldh2 genotype | polymorphism aldh2 gene [SUMMARY]
[CONTENT] ntg | aldh2 | co | gg | aa | svr | ga | significant | ga aa | disease [SUMMARY]
[CONTENT] ntg | aldh2 | co | gg | aa | svr | ga | significant | ga aa | disease [SUMMARY]
[CONTENT] ntg | aldh2 | co | gg | aa | svr | ga | significant | ga aa | disease [SUMMARY]
[CONTENT] ntg | aldh2 | co | gg | aa | svr | ga | significant | ga aa | disease [SUMMARY]
[CONTENT] ntg | aldh2 | co | gg | aa | svr | ga | significant | ga aa | disease [SUMMARY]
[CONTENT] ntg | aldh2 | co | gg | aa | svr | ga | significant | ga aa | disease [SUMMARY]
[CONTENT] aldh2 | ntg | gene | hemodynamic | method | genotype mutant | study | genotype | activity | 504 [SUMMARY]
[CONTENT] patients | test | analyzed | criteria | included | ntg | blood | angina | contraindications | years [SUMMARY]
[CONTENT] gg | aa | significant | ga | ga aa | statistically significant | statistically | co | minutes | heart [SUMMARY]
[CONTENT] intervention | gene polymorphism | polymorphism | gene | northern chinese han | ntg intervention northern | gene polymorphism influence | gene polymorphism influence effect | ntg intervention | northern chinese han population [SUMMARY]
[CONTENT] ntg | aldh2 | co | gg | aa | ga aa | ga | svr | study | disease [SUMMARY]
[CONTENT] ntg | aldh2 | co | gg | aa | ga aa | ga | svr | study | disease [SUMMARY]
[CONTENT] Nitroglycerin | NTG ||| Aldehyde | 2 | ALDH2 | NTG ||| NTG | NTG ||| ALDH2 | NTG ||| NTG ||| ALDH2 | NTG | Northern Chinese | Han [SUMMARY]
[CONTENT] 559 | the Affiliated Hospital of Qingdao University ||| 203 | CHD | 356 | non-CHD | NCHD ||| ALDH2 | two | GA/AA ||| CHD | 103 | 100 ||| 196 | 160 | NCHD ||| CO | SVR | 0 | 5 | 15 minute | 0.5 | NTG ||| Two | CO | SVR [SUMMARY]
[CONTENT] CO | SVR | NTG | 5 and 15 minutes | NCHD | 16.460 | 15.003 | 0.000 | 0.000 | CHD | F = | 194.482 | 60.582 | 0.000 | 0.000 ||| CO | NCHD | CHD ||| CO | ΔCO | P < 0.05 ||| SVR | ΔSVR | NCHD | CHD | P < 0.05 [SUMMARY]
[CONTENT] ALDH2 | CO | SVR ||| NTG ||| NTG | Northern Chinese | Han [SUMMARY]
[CONTENT] Nitroglycerin | NTG ||| Aldehyde | 2 | ALDH2 | NTG ||| NTG | NTG ||| ALDH2 | NTG ||| NTG ||| ALDH2 | NTG | Northern Chinese | Han ||| 559 | the Affiliated Hospital of Qingdao University ||| 203 | CHD | 356 | non-CHD | NCHD ||| ALDH2 | two | GA/AA ||| CHD | 103 | 100 ||| 196 | 160 | NCHD ||| CO | SVR | 0 | 5 | 15 minute | 0.5 | NTG ||| Two | CO | SVR ||| CO | SVR | NTG | 5 and 15 minutes | NCHD | 16.460 | 15.003 | 0.000 | 0.000 | CHD | F = | 194.482 | 60.582 | 0.000 | 0.000 ||| CO | NCHD | CHD ||| CO | ΔCO | P < 0.05 ||| SVR | ΔSVR | NCHD | CHD | P < 0.05 ||| ALDH2 | CO | SVR ||| NTG ||| NTG | Northern Chinese | Han [SUMMARY]
[CONTENT] Nitroglycerin | NTG ||| Aldehyde | 2 | ALDH2 | NTG ||| NTG | NTG ||| ALDH2 | NTG ||| NTG ||| ALDH2 | NTG | Northern Chinese | Han ||| 559 | the Affiliated Hospital of Qingdao University ||| 203 | CHD | 356 | non-CHD | NCHD ||| ALDH2 | two | GA/AA ||| CHD | 103 | 100 ||| 196 | 160 | NCHD ||| CO | SVR | 0 | 5 | 15 minute | 0.5 | NTG ||| Two | CO | SVR ||| CO | SVR | NTG | 5 and 15 minutes | NCHD | 16.460 | 15.003 | 0.000 | 0.000 | CHD | F = | 194.482 | 60.582 | 0.000 | 0.000 ||| CO | NCHD | CHD ||| CO | ΔCO | P < 0.05 ||| SVR | ΔSVR | NCHD | CHD | P < 0.05 ||| ALDH2 | CO | SVR ||| NTG ||| NTG | Northern Chinese | Han [SUMMARY]
STAPLED HEMORRHOIDOPEXY: RESULTS, LATE COMPLICATIONS, AND DEGREE OF SATISFACTION AFTER 16 YEARS OF FOLLOW-UP.
36134815
Stapled hemorrhoidopexy has been widely used for the treatment of hemorrhoids, but concerns about complications and recurrences after prolonged follow-up are still under debate.
BACKGROUND
Stapled hemorrhoidopexy was performed on 155 patients between 2000 and 2003, and the early results have already been published. In this study, we evaluated the same patients after a very long follow-up. Data were collected with regard to late complications, rate and timing of recurrences, and patients' degree of satisfaction.
METHODS
From a total of 155 patients, 98 were evaluated: 59 (60.2%) were interviewed by telephone and 39 (39.8%) were evaluated by outpatient consultation. The mean follow-up was 193 months (range: 184-231); 52 patients were female, 52 had grade III hemorrhoids, and 46 had grade IV. Recurrence was higher in grade IV (26.1%) than in grade III (7.7%) (p=0.014). Recurrence after prolonged follow-up was seen in 16 patients (16.3%), and 11 (11.2%) required reoperations. The complications were skin tags (3.1%), anal sub-stenosis (2.1%), and fecal incontinence (2.1%). After a prolonged follow-up, 82.5% of patients were either very satisfied or satisfied with the surgery.
RESULTS
Stapled hemorrhoidopexy is a safe and effective treatment for hemorrhoidal disease grades III and IV. Recurrence is higher for grade IV hemorrhoids and may occur up to 9 years after surgery. Reoperations were infrequent, and a high degree of patient satisfaction is associated with this technique.
CONCLUSIONS
[ "Female", "Follow-Up Studies", "Hemorrhoids", "Humans", "Male", "Personal Satisfaction", "Recurrence", "Surgical Stapling", "Treatment Outcome" ]
9484825
INTRODUCTION
For many years, surgical treatment of hemorrhoidal disease (HD) consisted solely of excisional hemorrhoidectomy. Despite being an effective procedure with good long-term results, it is associated with severe pain due to the excision of the innervated anoderm below the dentate line.[35] In 1998, a transanal circular stapling instrument, initially used for mucosal prolapses, was used to treat hemorrhoids through a procedure named stapled hemorrhoidopexy (SH).[18,25] This technique introduced a different concept for the treatment of HD, based not on resecting the diseased hemorrhoidal cushions but on reconstituting the anatomy and physiology of the anal canal through a mucosal lift of the distal rectum.[12] Since then, several studies with follow-up of less than 5 years have reported that it is safe and efficient, with less postoperative pain, shorter hospital stay, and an earlier return to regular activities.[1,22] This procedure has gained wide acceptance and has been performed extensively by colorectal surgeons worldwide.[12,35,38] However, results after a follow-up of more than 10 years remain unclear. Some studies have associated SH with higher rates of recurrence and the need for additional surgical procedures.[33,38] Concerns with regard to severe short-term complications, including rectovaginal fistula and sepsis, as well as long-term complications such as anal stenosis and fecal incontinence, have also been discussed in the literature.[8] In the early 2000s, our group published a prospective analysis of the initial experience with SH that included 155 consecutive patients with grades III and IV symptomatic hemorrhoids operated on from June 2000 to December 2003 by a single experienced colorectal surgeon, after a mean follow-up of 20 months.[1] Now, 16 years later, we reevaluated these patients and investigated the long-term complications, recurrences, and degree of satisfaction with the procedure.
METHODS
This research was approved by the Ethics Committee of University Hospital, Universidade de São Paulo (IRB number: 44948621.5.0000.0068), and written informed consent was obtained from all patients. In all, 98 of 155 patients who underwent SH between June 2000 and December 2003 were retrospectively reassessed, with the aim of analyzing late complications, recurrence rate, and degree of satisfaction after a long period of follow-up. In this study, we describe the results observed in the period between 2 and 16 years of follow-up, since the surgical indications, operative technical aspects, complications, and short-term results (up to 2 years) were extensively described in our previous study, published in 2006.[35] The following data were collected: sex, age, HD degree, treatments performed, clinical symptoms, complications, and degree of satisfaction with the procedure. Data collection was based on outpatient assessments or telephone interviews. The degree of satisfaction was evaluated and classified into four levels: very satisfied, satisfied, indifferent, or dissatisfied. Statistical analysis was performed using the chi-square test for comparison of recurrence between grades III and IV hemorrhoids; a p-value <0.05 was considered significant. Analysis was carried out using the SPSS version 21.0 software for Windows (SPSS, Chicago, IL, USA).
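For reference, the grade III versus grade IV recurrence comparison reported in the Results (4/52 vs. 12/46, p=0.014) can be reproduced with a 2x2 chi-square test. Note that matching the published p-value appears to require the uncorrected test (correction=False); that detail is our inference, not something stated in the paper.

```python
from scipy.stats import chi2_contingency

# Rows: grade III, grade IV; columns: recurrence, no recurrence.
table = [[4, 52 - 4],
         [12, 46 - 12]]
stat, p, dof, expected = chi2_contingency(table, correction=False)
print(round(stat, 2), round(p, 3))   # chi2 ~ 6.05, p ~ 0.014
```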
RESULTS
From a total of 155 patients operated during the study period, 57 (36.7%) were lost to follow-up and 98 were evaluated. Demographic data and HD classification are shown in Table 1. With regard to the follow-up, 59 patients (60.2%) were interviewed by telephone and 39 (39.8%) were evaluated as outpatients.

Table 1 - Patients' demographics, degree of hemorrhoidal disease, and follow-up period.
Total of patients: 98 (100%)
Male/female: 46 (46.9%) / 52 (53.1%)
Mean age, years (range): 41 (39-79)
HD grade III: 52 (53.1%)
HD grade IV: 46 (46.9%)
Mean follow-up, months (range): 193 (184-231)

Long-term results
After a very long postoperative follow-up, 16 patients (16.3%) had a recurrence of HD, all diagnosed between 2 and 9 years after surgery. Recurrence was higher for grade IV HD (26.1%, n=12/46) than for grade III (7.7%, n=4/52) (p=0.014), but reoperation rates were similar for both groups (n=2/4 for grade III and n=9/12 for grade IV HD) (p=0.547). Of these 16 patients, 9 underwent excisional hemorrhoidectomy, 2 underwent transanal dearterialization and mucopexy (THD-M), and the other 5 were treated with rubber band ligation in the office. All patients at the final follow-up had complete resolution of their symptoms, and no further recurrences were detected.

Long-term complications
Anal canal sub-stenosis occurred in two male patients (2.1%). Their main complaint was difficulty in eliminating stools and the need for enemas to assist evacuation. Both patients were treated with periodic digital dilations and had complete resolution of their symptoms. New fecal incontinence to flatus was detected in two patients (2.1%).
One patient was a 47-year-old female with two previous vaginal deliveries who developed new incontinence to flatus 4 years after surgery. She had excellent results with biofeedback, improving her Jorge-Wexner[15] score from 11 to 3. The other patient was a female with type-2 diabetes mellitus who developed new incontinence to flatus and liquid stools 10 years postoperatively and had little improvement with conservative treatment, which included dietary modification, biofeedback, and probiotics. Additional procedures for removal of symptomatic fibrotic anal skin tags that impacted anal hygiene were required in three (3.1%) patients, all females. Chronic anal pain, rectovaginal fistula, and pelvic sepsis were not reported in this series. There was no mortality.

Long-term satisfaction rates
Out of the 98 patients who were evaluated, 34 (34.6%) stated that they were very satisfied, 47 (47.9%) satisfied, 7 (7.2%) indifferent, and 10 (10.3%) dissatisfied with the surgical procedure. Long-term results, late complications, and degree of satisfaction are described in Table 2.

Table 2 - Long-term postoperative results, late complications, and degree of satisfaction with stapled hemorrhoidopexy.
Total of patients: 98 (100)
Prolapse recurrence: 16 (16.3)
Skin tags: 3 (3.1)
Anal sub-stenosis: 2 (2.1)
Anal incontinence: 2 (2.1)
Degree of satisfaction - Very satisfied: 34 (34.6)
Degree of satisfaction - Satisfied: 47 (47.9)
Degree of satisfaction - Indifferent: 7 (7.2)
Degree of satisfaction - Dissatisfied: 10 (10.3)
CONCLUSIONS
This study provides evidence that SH is a safe and effective surgical procedure for the treatment of symptomatic hemorrhoids of grades III and IV, even after a long follow-up. Recurrence is higher for grade IV hemorrhoids and may occur up to 9 years after surgery. Complications after prolonged follow-up are uncommon and most often can be managed with conservative treatment and low-complexity procedures. Reoperations for resection of new hemorrhoidal nodules are infrequent, and patients' degree of satisfaction with this procedure is high.
[ "Long-term results", "Long-term complications", "Long-term satisfaction rates" ]
[ "After a very long postoperative follow-up, 16 patients (16.3%) had a recurrence\nof HD and all were diagnosed between 2 and 9 years following surgery. Recurrence\nwas higher for grade IV HD (26.1%, n=12/46) than for grade III (7.7%, n=4/52)\n(p=0.014), but reoperation rates were similar for both groups (n=2/4 for grade\nIII and n=9/12 for grade IV HD) (p=0.547). Of these 16 patients, 9 underwent\nexcisional hemorrhoidectomy, 2 transanal dearterialization and mucopexy (THD-M),\nand the other 5 were treated with rubber band ligation in the office. All\npatients at the final follow-up had complete resolution of their symptoms and no\nfurther recurrences were detected.", "Anal canal sub-stenosis occurred in two male patients (2.1%). Their main\ncomplaint was difficulty in eliminating stools and the need for enemas to assist\non evacuation. Both patients were treated with periodic digital dilations and\nhad complete resolution of their symptoms.\nNew fecal incontinence to flatus was detected in two patients (2.1%). One patient\nwas a 47 years old female with two previous vaginal deliveries who developed new\nincontinence to flatus 4 years after surgery. She had excellent results with\nbiofeedback, improving her Jorge-Wexner\n15\n score from 11 to 3. The other patient was also a female with type-2\ndiabetes mellitus who developed new incontinence for flatus and liquid stools 10\nyears postoperatively and had little improvement with conservative treatment,\nwhich included dietary modification, biofeedback, and probiotics.\nAdditional procedures for removal of symptomatic fibrotic anal skin tags that\nimpacted anal hygiene were required in three (3.1%) patients, all females.\nChronic anal pain, rectovaginal fistula, or pelvic sepsis was not reported in\nthis series. There was no mortality.", "Out of 98 patients who were evaluated, 34 patients (34.6%) stated that they were\nvery satisfied, 47 (47.9%) satisfied, 7 (7.2%) indifferent, and 10 (10.3%)\ndissatisfied with the surgical procedure. Long-term results, late complications,\nand degree of satisfaction are described in Table 2.\n\nTable 2 -Long-term postoperative results, late complications, and degree\nof satisfaction with stapled hemorrhoidopexy.CharacteristicsN (%)Total of patients98 (100)Prolapse recurrence16 (16.3)Skin tags3 (3.1)Anal sub-stenosis2 (2.1)Anal incontinence2 (2.1)Degree of satisfactionVery satisfied34 (34.6)Satisfied47 (47.9)Indifferent7 (7.2)Dissatisfied10 (10.3)\n" ]
[ null, null, null ]
[ "INTRODUCTION", "METHODS", "RESULTS", "Long-term results", "Long-term complications", "Long-term satisfaction rates", "DISCUSSION", "CONCLUSIONS" ]
[ "For many years, surgical treatment of hemorrhoidal disease (HD) consisted solely of\nexcisional hemorrhoidectomy. Despite being an effective procedure with good\nlong-term results, it is associated with severe pain due to the excision of the\ninnervated anoderm below the dentate line\n35\n.\nIn 1998, a transanal circular stapling instrument, initially used for mucosal\nprolapses, was used to treat hemorrhoids through a procedure named stapled\nhemorrhoidopexy (SH)\n18\n\n,\n\n25\n. This technique introduced a different concept for the treatment of HD, not\nbased on resecting the diseased hemorrhoidal cushions but on reconstituting the\nanatomy and physiology of the anal canal through a mucosal lift of the distal\nrectum\n12\n. Since then, several studies with follow-up of less than 5 years have\nreported that it is safe and efficient, with less postoperative pain, shorter\nhospital stay, and an earlier return to regular activities\n1\n\n,\n\n22\n.\nThis procedure has gained wide acceptance and has been performed extensively by\ncolorectal surgeons worldwide\n12\n\n,\n\n35\n\n,\n\n38\n. However, results after a follow-up of more than 10 years remain unclear.\nSome studies have associated SH with higher rates of recurrence and the need for\nadditional surgical procedures\n33\n\n,\n\n38\n. Concerns with regard to severe short-term complications including\nrectovaginal fistula, sepsis but also long-term complications such as anal stenosis,\nand fecal incontinence, have also been discussed in the literature\n8\n.\nIn the early 2000s, our group published a prospective analysis of the initial\nexperience with SH that included 155 consecutive patients with grades III and IV\nsymptomatic hemorrhoids operated from June 2000 to December 2003 by a single\nexperienced colorectal surgeon, after a mean follow-up of 20 months\n1\n. Now, 16 years later, we reevaluated these patients and investigated the\nlong-term complications, recurrences, and degree of satisfaction with the\nprocedure.", "This research was approved by the Ethics Committee of University Hospital,\nUniversidade de São Paulo (IRB number: 44948621.5.0000.0068), and written informed\nconsent was obtained from all patients.\nIn all, 98 of 155 patients who underwent SH between June 2000 and December 2003 were\nretrospectively reassessed, with the aim of analyzing late complications, recurrence\nrate, and degree of satisfaction after a long period of follow-up. In this study, we\ndescribe the results observed in the period between 2 and 16 years of follow-up,\nsince the surgical indications, operative technical aspects, complications, and\nshort-term results (up to 2 years) were extensively described in our previous study,\npublished in 2006\n35\n. The following data were collected: sex, age, HD degree, treatments\nperformed, clinical symptoms, complications, and degree of satisfaction with the\nprocedure. Data collection was based on outpatient assessments or telephone\ninterviews. The degree of satisfaction was evaluated and classified into four\nlevels: very satisfied, satisfied, indifferent, or dissatisfied.\nStatistical analysis was performed using the chi-square test for comparison of\nrecurrence between grades III and IV hemorrhoids; p-value <0.05 was considered\nsignificant. Analysis was carried out using the SPSS version 21.0 software for\nWindows (SPSS, Chicago, IL, USA).", "From a total of 155 patients operated during the study period, 57 (36.7%) were lost\nto follow-up and 98 were evaluated. 
Demographic data and HD classification are shown in Table 1. With regard to the follow-up, 59 patients (60.2%) were interviewed by telephone and 39 (39.8%) were evaluated as outpatients.\n\nTable 1 - Patients' demographics, degree of hemorrhoidal disease, and follow-up period.\nPatient characteristics | Number\nTotal of patients | 98 (100%)\nMale/female | 46 (46.9%) / 52 (53.1%)\nMean age, years (range) | 41 (39-79)\nHD grade III | 52 (53.1%)\nHD grade IV | 46 (46.9%)\nMean follow-up, months (range) | 193 (184-231)\n\nLong-term results: After a very long postoperative follow-up, 16 patients (16.3%) had a recurrence of HD, and all were diagnosed between 2 and 9 years following surgery. Recurrence was higher for grade IV HD (26.1%, n=12/46) than for grade III (7.7%, n=4/52) (p=0.014), but reoperation rates were similar for both groups (n=2/4 for grade III and n=9/12 for grade IV HD) (p=0.547). Of these 16 patients, 9 underwent excisional hemorrhoidectomy, 2 transanal dearterialization and mucopexy (THD-M), and the other 5 were treated with rubber band ligation in the office. All patients at the final follow-up had complete resolution of their symptoms, and no further recurrences were detected.\nLong-term complications: Anal canal sub-stenosis occurred in two male patients (2.1%). Their main complaint was difficulty in eliminating stools and the need for enemas to assist in evacuation. Both patients were treated with periodic digital dilations and had complete resolution of their symptoms.\nNew fecal incontinence to flatus was detected in two patients (2.1%). One patient was a 47-year-old female with two previous vaginal deliveries who developed new incontinence to flatus 4 years after surgery. She had excellent results with biofeedback, improving her Jorge-Wexner 15 score from 11 to 3. The other patient was a female with type 2 diabetes mellitus who developed new incontinence to flatus and liquid stools 10 years postoperatively and had little improvement with conservative treatment, which included dietary modification, biofeedback, and probiotics.\nAdditional procedures for removal of symptomatic fibrotic anal skin tags that impacted anal hygiene were required in three patients (3.1%), all females. No chronic anal pain, rectovaginal fistula, or pelvic sepsis was reported in this series. There was no mortality.\nLong-term satisfaction rates: Of the 98 patients evaluated, 34 (34.6%) stated that they were very satisfied, 47 (47.9%) satisfied, 7 (7.2%) indifferent, and 10 (10.3%) dissatisfied with the surgical procedure. Long-term results, late complications, and degree of satisfaction are described in Table 2.\n\nTable 2 - Long-term postoperative results, late complications, and degree of satisfaction with stapled hemorrhoidopexy.\nCharacteristics | N (%)\nTotal of patients | 98 (100)\nProlapse recurrence | 16 (16.3)\nSkin tags | 3 (3.1)\nAnal sub-stenosis | 2 (2.1)\nAnal incontinence | 2 (2.1)\nDegree of satisfaction:\nVery satisfied | 34 (34.6)\nSatisfied | 47 (47.9)\nIndifferent | 7 (7.2)\nDissatisfied | 10 (10.3)\n", "Of the 98 patients with a very long postoperative follow-up, 16 (16.3%) had a recurrence of HD, and all were diagnosed between 2 and 9 years following surgery. Recurrence was higher for grade IV HD (26.1%, n=12/46) than for grade III (7.7%, n=4/52) (p=0.014), but reoperation rates were similar for both groups (n=2/4 for grade III and n=9/12 for grade IV HD) (p=0.547). Of these 16 patients, 9 underwent excisional hemorrhoidectomy, 2 transanal dearterialization and mucopexy (THD-M), and the other 5 were treated with rubber band ligation in the office. All patients at the final follow-up had complete resolution of their symptoms, and no further recurrences were detected.", "Anal canal sub-stenosis occurred in two male patients (2.1%). Their main complaint was difficulty in eliminating stools and the need for enemas to assist in evacuation. Both patients were treated with periodic digital dilations and had complete resolution of their symptoms.\nNew fecal incontinence to flatus was detected in two patients (2.1%). One patient was a 47-year-old female with two previous vaginal deliveries who developed new incontinence to flatus 4 years after surgery. She had excellent results with biofeedback, improving her Jorge-Wexner 15 score from 11 to 3. The other patient was a female with type 2 diabetes mellitus who developed new incontinence to flatus and liquid stools 10 years postoperatively and had little improvement with conservative treatment, which included dietary modification, biofeedback, and probiotics.\nAdditional procedures for removal of symptomatic fibrotic anal skin tags that impacted anal hygiene were required in three patients (3.1%), all females. No chronic anal pain, rectovaginal fistula, or pelvic sepsis was reported in this series. There was no mortality.", "Of the 98 patients evaluated, 34 (34.6%) stated that they were very satisfied, 47 (47.9%) satisfied, 7 (7.2%) indifferent, and 10 (10.3%) dissatisfied with the surgical procedure. Long-term results, late complications, and degree of satisfaction are described in Table 2.\n\nTable 2 - Long-term postoperative results, late complications, and degree of satisfaction with stapled hemorrhoidopexy.\nCharacteristics | N (%)\nTotal of patients | 98 (100)\nProlapse recurrence | 16 (16.3)\nSkin tags | 3 (3.1)\nAnal sub-stenosis | 2 (2.1)\nAnal incontinence | 2 (2.1)\nDegree of satisfaction:\nVery satisfied | 34 (34.6)\nSatisfied | 47 (47.9)\nIndifferent | 7 (7.2)\nDissatisfied | 10 (10.3)\n", "New, less invasive procedures for the surgical treatment of HD have been developed, such as PPH, THD-M, and LigaSure, with the aim of reducing postoperative pain and shortening the return to regular activities 36,37.\nSH has been used for more than 20 years as a therapeutic option for third- and fourth-degree HD, mainly due to shorter operative time, less postoperative pain, and earlier return to activities when compared with conventional hemorrhoidectomy 12,35. It is considered a safe technique, especially for the treatment of circumferential hemorrhoidal prolapse, but concerns about long-term recurrences and severe complications still exist 12,20,22,29,31,35.\nEarly complications, such as rectovaginal fistulas, perianal abscess, and rectal lumen obliteration, have been described extensively in the literature and are well known to colorectal surgeons 29. However, few studies have evaluated the long-term results after more than 10 years of follow-up 4,33,38. In our study, all 98 patients had a minimum follow-up of 184 months. There are doubts about the ideal follow-up period to assess the postoperative recurrences of non-excisional procedures (PPH and THD-M) 5. Some authors use the following classification for the follow-up period: short, up to 2 years; medium, from 2 to 5 years; long, from 5 to 10 years; and very long, more than 10 years 38. To the best of our knowledge, this study has the longest follow-up in the literature.\nAnal sub-stenosis occurred in two patients, and both were treated successfully with periodic digital dilations. In the literature, this complication has been reported in 0.8-5% of patients after SH 9. It is defined as circumferential narrowing of the distal rectum that cannot be transposed by a digital rectal examination.
Main symptoms are difficulty in eliminating feces, the need for digital maneuvers, and the use of evacuation enemas. The diagnosis is usually straightforward with a careful digital rectal examination. The mechanism behind this phenomenon is submucosal inflammation due to ring dehiscence with local infection, and full-thickness excision of the rectal wall if the stapled ring is placed too deep into the anal canal, with subsequent hypertrophic scarring of the rectal wall 28. Conservative treatment with periodic outpatient dilations and infiltration with corticosteroids is the first-line treatment. Ng et al. 23, in a large series of 3711 cases submitted to SH, reported anal stenosis in 1.4%; most cases were successfully treated with digital dilation.\nFecal incontinence following SH is usually transient and occurs in the early postoperative period 32. This complication is usually mild, with loss of flatus or occasional soiling during exertion. It should be emphasized that fecal soiling can also occur even after conventional hemorrhoidectomy and can be triggered by diarrhea, diabetes mellitus, and medications 14.\nIn our previous study 35, three patients developed transient fecal incontinence shortly after the surgical procedure, but all cases resolved spontaneously. After prolonged follow-up, two female patients developed new fecal incontinence. This is one of the most feared complications after the treatment of HD, and it has been reported in 0 to 8% of patients following SH 24,26,32. This complication has been attributed to internal sphincter fragmentation, a low purse-string suture that results in the staple line being too close to the pectinate line, and mechanical anal stretching from the 33 mm stapler. Risk factors include female sex with previous vaginal delivery, pudendal neuropathy, and fecal straining 22. The treatment is based on biofeedback rehabilitation, with a success rate of up to 80% 38. In refractory cases, the use of injectable bulking agents has been indicated 19.\nChronic anal pain following SH is a feared complication, and some authors have reported it in 1.6-17.5% of patients 16,38. In this study, no patient presented persistent pain after the long follow-up. The precise etiology is unclear, but muscle incorporation in the doughnut may play a role in its pathophysiology. Other possible causes are sphincter spasm, a very low purse string including the pectinate line, rectal pocket syndrome, and chronic proctitis secondary to ischemia 7,27.\nRecurrence of prolapse with SH is highly variable and depends on the surgeon's experience, degree of HD, and the follow-up period. Low recurrence rates have been reported in series with shorter follow-up 2,3,11,18,22,35. A systematic review published by Shao et al. 34 in 2008, with a follow-up ranging from 6 weeks to a median of 62 months, showed recurrence in 9% of patients, and only 7% required a new surgical procedure. White et al. 39, in a series of 169 patients, reported that recurrences occurred in 11.2% after a mean follow-up of 15 months.\nIn this study, all recurrences were detected up to 9 years of follow-up, which is consistent with other series with long-term follow-up 2,3,11,13,31,34,38,39. Our recurrence rate was 16.3%, which is lower than previously published by other authors 20,33,38. Sturiale et al. 38 evaluated patients after 12 years of follow-up and reported a 40.9% recurrence rate. It should be noted that their evaluation was performed over the telephone, and patients may have been unable to differentiate true recurrence from residual skin tags or other anorectal pathologies, which may have overestimated the recurrence rate. Schneider et al. 33 also evaluated patients over the telephone and reported recurrence of symptoms in 47.4%. Belio et al. 4 published the only study that performed a clinical evaluation of patients after a 10-year follow-up and reported a 39% recurrence rate.\nWe observed greater recurrence in HD grade IV than in grade III, similar to what was reported by other authors 13,21. Technical aspects that may play a role in recurrence include an incomplete purse string, too high a suture in the rectum, and incomplete mucosal resection in grade IV HD, which may occur when the volume of mucosal tissue exceeds the capacity of the stapler casing 6,10,14. In the literature, recurrence after SH seems to be more frequent than after conventional hemorrhoidectomy 12,13,26.\nIt is important to emphasize that although most series with long-term follow-up have recorded high recurrence rates with SH, some authors reported high recurrence with excisional methods. In a British trial with 17 years of follow-up, there was symptomatic recurrence of HD in 26% of the patients 17. In another large randomized study with 688 patients, the recurrence rate following Ferguson hemorrhoidectomy for grade IV HD was 40.3% (n=126/208) after a mean follow-up of 7.4 years 30.\nBesides recurrence, bothersome fibrotic skin tags are also a cause of re-intervention. In our series, three patients required removal of symptomatic skin tags after more than 24 months of follow-up. We currently recommend the excision of skin tags during the initial procedure, after the patient's agreement. Ommer et al. 24, in a prospective study of 224 consecutive patients, concluded that the resection of large skin tags during SH provided better symptom control, lower rates of recurrence and reoperation, and a higher degree of satisfaction.\nDegree of satisfaction after a surgical procedure is multifactorial and subjective. Its quantification is complex and depends on the patient's previous experiences, expectations of the surgical procedure, and the final results. A successful surgery does not always correlate with a high degree of satisfaction from the patient's perspective, which has motivated the use of patient-reported outcomes in clinical trials. In our series, 82.5% were either very satisfied or satisfied with the procedure, which is consistent with other series 4,33,38.\nThe main limitations of this manuscript are the retrospective nature of the study and the loss to follow-up of 36.8% of patients. However, few studies have assessed the very long-term (>10 years) results with this surgical technique. Its strengths are a large series of patients undergoing SH performed by a surgical team with experience in anorectal operations and a very long follow-up that included physical and proctologic examination in 40% of patients.", "This study provides evidence that SH is a safe and effective surgical procedure for the treatment of symptomatic hemorrhoids of grades III and IV, even after a long follow-up. Recurrence is higher for grade IV hemorrhoids and may occur up to 9 years after surgery. Complications after prolonged follow-up are uncommon and most often can be managed with conservative treatment and low-complexity procedures. Reoperations for resection of new hemorrhoidal nodules are infrequent, and patients' degree of satisfaction with this procedure is high." ]
[ "intro", "methods", "results", null, null, null, "discussion", "conclusions" ]
[ "Hemorrhoids", "Hemorrhoidectomy", "Surgical Staplers", "Postoperative Complications", "Long-Term Care", "Patient Satisfaction", "Hemorroidas", "Hemorroidectomia", "Grampeadores Cirúrgicos", "Complicações Pós-Operatórias", "Assistência de Longa Duração", "Satisfação do Paciente" ]
INTRODUCTION: For many years, surgical treatment of hemorrhoidal disease (HD) consisted solely of excisional hemorrhoidectomy. Despite being an effective procedure with good long-term results, it is associated with severe pain due to the excision of the innervated anoderm below the dentate line 35. In 1998, a transanal circular stapling instrument, initially used for mucosal prolapses, was used to treat hemorrhoids through a procedure named stapled hemorrhoidopexy (SH) 18,25. This technique introduced a different concept for the treatment of HD, based not on resecting the diseased hemorrhoidal cushions but on reconstituting the anatomy and physiology of the anal canal through a mucosal lift of the distal rectum 12. Since then, several studies with follow-up of less than 5 years have reported that it is safe and efficient, with less postoperative pain, shorter hospital stay, and an earlier return to regular activities 1,22. This procedure has gained wide acceptance and has been performed extensively by colorectal surgeons worldwide 12,35,38. However, results after a follow-up of more than 10 years remain unclear. Some studies have associated SH with higher rates of recurrence and the need for additional surgical procedures 33,38. Concerns regarding severe short-term complications, including rectovaginal fistula and sepsis, as well as long-term complications such as anal stenosis and fecal incontinence, have also been discussed in the literature 8. In the early 2000s, our group published a prospective analysis of the initial experience with SH that included 155 consecutive patients with grades III and IV symptomatic hemorrhoids operated on from June 2000 to December 2003 by a single experienced colorectal surgeon, after a mean follow-up of 20 months 1. Now, 16 years later, we reevaluated these patients and investigated the long-term complications, recurrences, and degree of satisfaction with the procedure. METHODS: This research was approved by the Ethics Committee of University Hospital, Universidade de São Paulo (IRB number: 44948621.5.0000.0068), and written informed consent was obtained from all patients. In all, 98 of 155 patients who underwent SH between June 2000 and December 2003 were retrospectively reassessed, with the aim of analyzing late complications, recurrence rate, and degree of satisfaction after a long period of follow-up. In this study, we describe the results observed in the period between 2 and 16 years of follow-up, since the surgical indications, operative technical aspects, complications, and short-term results (up to 2 years) were extensively described in our previous study, published in 2006 35. The following data were collected: sex, age, HD degree, treatments performed, clinical symptoms, complications, and degree of satisfaction with the procedure. Data collection was based on outpatient assessments or telephone interviews. The degree of satisfaction was evaluated and classified into four levels: very satisfied, satisfied, indifferent, or dissatisfied. Statistical analysis was performed using the chi-square test for comparison of recurrence between grades III and IV hemorrhoids; a p-value <0.05 was considered significant. Analysis was carried out using SPSS version 21.0 software for Windows (SPSS, Chicago, IL, USA). RESULTS: From a total of 155 patients operated on during the study period, 57 (36.7%) were lost to follow-up and 98 were evaluated. Demographic data and HD classification are shown in Table 1.
With regard to the follow-up, 59 patients (60.2%) were interviewed by telephone and 39 (39.8%) were evaluated as outpatients. Table 1 - Patients' demographics, degree of hemorrhoidal disease, and follow-up period: total of patients, 98 (100%); male/female, 46 (46.9%)/52 (53.1%); mean age, 41 years (range 39-79); HD grade III, 52 (53.1%); HD grade IV, 46 (46.9%); mean follow-up, 193 months (range 184-231). Long-term results: After a very long postoperative follow-up, 16 patients (16.3%) had a recurrence of HD, and all were diagnosed between 2 and 9 years following surgery. Recurrence was higher for grade IV HD (26.1%, n=12/46) than for grade III (7.7%, n=4/52) (p=0.014), but reoperation rates were similar for both groups (n=2/4 for grade III and n=9/12 for grade IV HD) (p=0.547). Of these 16 patients, 9 underwent excisional hemorrhoidectomy, 2 transanal dearterialization and mucopexy (THD-M), and the other 5 were treated with rubber band ligation in the office. All patients at the final follow-up had complete resolution of their symptoms, and no further recurrences were detected. Long-term complications: Anal canal sub-stenosis occurred in two male patients (2.1%). Their main complaint was difficulty in eliminating stools and the need for enemas to assist in evacuation. Both patients were treated with periodic digital dilations and had complete resolution of their symptoms. New fecal incontinence to flatus was detected in two patients (2.1%). One patient was a 47-year-old female with two previous vaginal deliveries who developed new incontinence to flatus 4 years after surgery. She had excellent results with biofeedback, improving her Jorge-Wexner 15 score from 11 to 3. The other patient was a female with type 2 diabetes mellitus who developed new incontinence to flatus and liquid stools 10 years postoperatively and had little improvement with conservative treatment, which included dietary modification, biofeedback, and probiotics. Additional procedures for removal of symptomatic fibrotic anal skin tags that impacted anal hygiene were required in three patients (3.1%), all females. No chronic anal pain, rectovaginal fistula, or pelvic sepsis was reported in this series. There was no mortality. Long-term satisfaction rates: Of the 98 patients evaluated, 34 (34.6%) stated that they were very satisfied, 47 (47.9%) satisfied, 7 (7.2%) indifferent, and 10 (10.3%) dissatisfied with the surgical procedure. Long-term results, late complications, and degree of satisfaction are described in Table 2. Table 2 - Long-term postoperative results, late complications, and degree of satisfaction with stapled hemorrhoidopexy: prolapse recurrence, 16 (16.3%); skin tags, 3 (3.1%); anal sub-stenosis, 2 (2.1%); anal incontinence, 2 (2.1%); degree of satisfaction: very satisfied, 34 (34.6%); satisfied, 47 (47.9%); indifferent, 7 (7.2%); dissatisfied, 10 (10.3%). DISCUSSION: New, less invasive procedures for the surgical treatment of HD have been developed, such as PPH, THD-M, and LigaSure, with the aim of reducing postoperative pain and shortening the return to regular activities 36,37. SH has been used for more than 20 years as a therapeutic option for third- and fourth-degree HD, mainly due to shorter operative time, less postoperative pain, and earlier return to activities when compared with conventional hemorrhoidectomy 12,35. It is considered a safe technique, especially for the treatment of circumferential hemorrhoidal prolapse, but concerns about long-term recurrences and severe complications still exist 12,20,22,29,31,35. Early complications, such as rectovaginal fistulas, perianal abscess, and rectal lumen obliteration, have been described extensively in the literature and are well known to colorectal surgeons 29. However, few studies have evaluated the long-term results after more than 10 years of follow-up 4,33,38. In our study, all 98 patients had a minimum follow-up of 184 months. There are doubts about the ideal follow-up period to assess the postoperative recurrences of non-excisional procedures (PPH and THD-M) 5. Some authors use the following classification for the follow-up period: short, up to 2 years; medium, from 2 to 5 years; long, from 5 to 10 years; and very long, more than 10 years 38. To the best of our knowledge, this study has the longest follow-up in the literature. Anal sub-stenosis occurred in two patients, and both were treated successfully with periodic digital dilations. In the literature, this complication has been reported in 0.8-5% of patients after SH 9. It is defined as circumferential narrowing of the distal rectum that cannot be transposed by a digital rectal examination. Main symptoms are difficulty in eliminating feces, the need for digital maneuvers, and the use of evacuation enemas. The diagnosis is usually straightforward with a careful digital rectal examination. The mechanism behind this phenomenon is submucosal inflammation due to ring dehiscence with local infection, and full-thickness excision of the rectal wall if the stapled ring is placed too deep into the anal canal, with subsequent hypertrophic scarring of the rectal wall 28. Conservative treatment with periodic outpatient dilations and infiltration with corticosteroids is the first-line treatment. Ng et al. 23, in a large series of 3711 cases submitted to SH, reported anal stenosis in 1.4%; most cases were successfully treated with digital dilation.
Fecal incontinence following SH is usually transient and occurs in the early postoperative period 32. This complication is usually mild, with loss of flatus or occasional soiling during exertion. It should be emphasized that fecal soiling can also occur even after conventional hemorrhoidectomy and can be triggered by diarrhea, diabetes mellitus, and medications 14. In our previous study 35, three patients developed transient fecal incontinence shortly after the surgical procedure, but all cases resolved spontaneously. After prolonged follow-up, two female patients developed new fecal incontinence. This is one of the most feared complications after the treatment of HD, and it has been reported in 0 to 8% of patients following SH 24,26,32. This complication has been attributed to internal sphincter fragmentation, a low purse-string suture that results in the staple line being too close to the pectinate line, and mechanical anal stretching from the 33 mm stapler. Risk factors include female sex with previous vaginal delivery, pudendal neuropathy, and fecal straining 22. The treatment is based on biofeedback rehabilitation, with a success rate of up to 80% 38. In refractory cases, the use of injectable bulking agents has been indicated 19. Chronic anal pain following SH is a feared complication, and some authors have reported it in 1.6-17.5% of patients 16,38. In this study, no patient presented persistent pain after the long follow-up. The precise etiology is unclear, but muscle incorporation in the doughnut may play a role in its pathophysiology. Other possible causes are sphincter spasm, a very low purse string including the pectinate line, rectal pocket syndrome, and chronic proctitis secondary to ischemia 7,27. Recurrence of prolapse with SH is highly variable and depends on the surgeon's experience, degree of HD, and the follow-up period. Low recurrence rates have been reported in series with shorter follow-up 2,3,11,18,22,35. A systematic review published by Shao et al. 34 in 2008, with a follow-up ranging from 6 weeks to a median of 62 months, showed recurrence in 9% of patients, and only 7% required a new surgical procedure. White et al. 39, in a series of 169 patients, reported that recurrences occurred in 11.2% after a mean follow-up of 15 months. In this study, all recurrences were detected up to 9 years of follow-up, which is consistent with other series with long-term follow-up 2,3,11,13,31,34,38,39. Our recurrence rate was 16.3%, which is lower than previously published by other authors 20,33,38. Sturiale et al. 38 evaluated patients after 12 years of follow-up and reported a 40.9% recurrence rate. It should be noted that their evaluation was performed over the telephone, and patients may have been unable to differentiate true recurrence from residual skin tags or other anorectal pathologies, which may have overestimated the recurrence rate. Schneider et al. 33 also evaluated patients over the telephone and reported recurrence of symptoms in 47.4%. Belio et al. 4 published the only study that performed a clinical evaluation of patients after a 10-year follow-up and reported a 39% recurrence rate. We observed greater recurrence in HD grade IV than in grade III, similar to what was reported by other authors 13,21. Technical aspects that may play a role in recurrence include an incomplete purse string, too high a suture in the rectum, and incomplete mucosal resection in grade IV HD, which may occur when the volume of mucosal tissue exceeds the capacity of the stapler casing 6,10,14. In the literature, recurrence after SH seems to be more frequent than after conventional hemorrhoidectomy 12,13,26. It is important to emphasize that although most series with long-term follow-up have recorded high recurrence rates with SH, some authors reported high recurrence with excisional methods. In a British trial with 17 years of follow-up, there was symptomatic recurrence of HD in 26% of the patients 17. In another large randomized study with 688 patients, the recurrence rate following Ferguson hemorrhoidectomy for grade IV HD was 40.3% (n=126/208) after a mean follow-up of 7.4 years 30. Besides recurrence, bothersome fibrotic skin tags are also a cause of re-intervention. In our series, three patients required removal of symptomatic skin tags after more than 24 months of follow-up. We currently recommend the excision of skin tags during the initial procedure, after the patient's agreement. Ommer et al. 24, in a prospective study of 224 consecutive patients, concluded that the resection of large skin tags during SH provided better symptom control, lower rates of recurrence and reoperation, and a higher degree of satisfaction. Degree of satisfaction after a surgical procedure is multifactorial and subjective. Its quantification is complex and depends on the patient's previous experiences, expectations of the surgical procedure, and the final results. A successful surgery does not always correlate with a high degree of satisfaction from the patient's perspective, which has motivated the use of patient-reported outcomes in clinical trials. In our series, 82.5% were either very satisfied or satisfied with the procedure, which is consistent with other series 4,33,38. The main limitations of this manuscript are the retrospective nature of the study and the loss to follow-up of 36.8% of patients. However, few studies have assessed the very long-term (>10 years) results with this surgical technique. Its strengths are a large series of patients undergoing SH performed by a surgical team with experience in anorectal operations and a very long follow-up that included physical and proctologic examination in 40% of patients. CONCLUSIONS: This study provides evidence that SH is a safe and effective surgical procedure for the treatment of symptomatic hemorrhoids of grades III and IV, even after a long follow-up. Recurrence is higher for grade IV hemorrhoids and may occur up to 9 years after surgery. Complications after prolonged follow-up are uncommon and most often can be managed with conservative treatment and low-complexity procedures. Reoperations for resection of new hemorrhoidal nodules are infrequent, and patients' degree of satisfaction with this procedure is high.
Background: Stapled hemorrhoidopexy has been widely used for the treatment of hemorrhoids, but concerns about complications and recurrences after prolonged follow-up are still under debate. Methods: Stapled hemorrhoidopexy was performed on 155 patients between 2000 and 2003, and the early results have already been published. In this study, we evaluated the same patients after a very long follow-up. Data were collected with regard to late complications, rate and timing of recurrences, and patients' degree of satisfaction. Results: From a total of 155 patients, 98 were evaluated: 59 (60.2%) were interviewed by telephone and 39 (39.8%) were evaluated by outpatient consultation. The mean follow-up was 193 months (range: 184-231); 52 patients were female, 52 had grade III hemorrhoids, and 46 had grade IV. Recurrence was higher in grade IV (26.1%) than in grade III (7.7%) (p=0.014). Recurrence after prolonged follow-up was seen in 16 patients (16.3%), and 11 (11.2%) required reoperations. The complications were skin tags (3.1%), anal sub-stenosis (2.1%), and fecal incontinence (2.1%). After a prolonged follow-up, 82.5% of patients were either very satisfied or satisfied with the surgery. Conclusions: Stapled hemorrhoidopexy is a safe and effective treatment for hemorrhoidal disease grades III and IV. Recurrence is higher for grade IV hemorrhoids and may occur up to 9 years after surgery. Reoperations were infrequent, and patients' degree of satisfaction with this technique is high.
INTRODUCTION: For many years, surgical treatment of hemorrhoidal disease (HD) consisted solely of excisional hemorrhoidectomy. Despite being an effective procedure with good long-term results, it is associated with severe pain due to the excision of the innervated anoderm below the dentate line 35. In 1998, a transanal circular stapling instrument, initially used for mucosal prolapses, was used to treat hemorrhoids through a procedure named stapled hemorrhoidopexy (SH) 18,25. This technique introduced a different concept for the treatment of HD, based not on resecting the diseased hemorrhoidal cushions but on reconstituting the anatomy and physiology of the anal canal through a mucosal lift of the distal rectum 12. Since then, several studies with follow-up of less than 5 years have reported that it is safe and efficient, with less postoperative pain, shorter hospital stay, and an earlier return to regular activities 1,22. This procedure has gained wide acceptance and has been performed extensively by colorectal surgeons worldwide 12,35,38. However, results after a follow-up of more than 10 years remain unclear. Some studies have associated SH with higher rates of recurrence and the need for additional surgical procedures 33,38. Concerns regarding severe short-term complications, including rectovaginal fistula and sepsis, as well as long-term complications such as anal stenosis and fecal incontinence, have also been discussed in the literature 8. In the early 2000s, our group published a prospective analysis of the initial experience with SH that included 155 consecutive patients with grades III and IV symptomatic hemorrhoids operated on from June 2000 to December 2003 by a single experienced colorectal surgeon, after a mean follow-up of 20 months 1. Now, 16 years later, we reevaluated these patients and investigated the long-term complications, recurrences, and degree of satisfaction with the procedure. CONCLUSIONS: This study provides evidence that SH is a safe and effective surgical procedure for the treatment of symptomatic hemorrhoids of grades III and IV, even after a long follow-up. Recurrence is higher for grade IV hemorrhoids and may occur up to 9 years after surgery. Complications after prolonged follow-up are uncommon and most often can be managed with conservative treatment and low-complexity procedures. Reoperations for resection of new hemorrhoidal nodules are infrequent, and patients' degree of satisfaction with this procedure is high.
Background: Stapled hemorrhoidopexy has been widely used for the treatment of hemorrhoids, but concerns about complications and recurrences after prolonged follow-up are still under debate. Methods: Stapled hemorrhoidopexy was performed on 155 patients between 2000 and 2003, and the early results have already been published. In this study, we evaluated the same patients after a very long follow-up. Data were collected with regard to late complications, rate and timing of recurrences, and patients' degree of satisfaction. Results: From a total of 155 patients, 98 were evaluated: 59 (60.2%) were interviewed by telephone and 39 (39.8%) were evaluated by outpatient consultation. The mean follow-up was 193 months (range: 184-231); 52 patients were female, 52 had grade III hemorrhoids, and 46 had grade IV. Recurrence was higher in grade IV (26.1%) than in grade III (7.7%) (p=0.014). Recurrence after prolonged follow-up was seen in 16 patients (16.3%), and 11 (11.2%) required reoperations. The complications were skin tags (3.1%), anal sub-stenosis (2.1%), and fecal incontinence (2.1%). After a prolonged follow-up, 82.5% of patients were either very satisfied or satisfied with the surgery. Conclusions: Stapled hemorrhoidopexy is a safe and effective treatment for hemorrhoidal disease grades III and IV. Recurrence is higher for grade IV hemorrhoids and may occur up to 9 years after surgery. Reoperations were infrequent, and patients' degree of satisfaction with this technique is high.
4,232
315
[ 147, 213, 125 ]
8
[ "patients", "follow", "years", "long", "recurrence", "anal", "hd", "term", "degree", "results" ]
[ "excisional hemorrhoidectomy", "hemorrhoidectomy transanal dearterialization", "treat hemorrhoids procedure", "hemorrhoidectomy transanal", "compared conventional hemorrhoidectomy" ]
[CONTENT] Hemorrhoids | Hemorrhoidectomy | Surgical Staplers | Postoperative Complications | Long-Term Care | Patient Satisfaction | Hemorroidas | Hemorroidectomia | Grampeadores Cirúrgicos | Complicações Pós-Operatórias | Assistência de Longa Duração | Satisfação do Paciente [SUMMARY]
[CONTENT] Hemorrhoids | Hemorrhoidectomy | Surgical Staplers | Postoperative Complications | Long-Term Care | Patient Satisfaction | Hemorroidas | Hemorroidectomia | Grampeadores Cirúrgicos | Complicações Pós-Operatórias | Assistência de Longa Duração | Satisfação do Paciente [SUMMARY]
[CONTENT] Hemorrhoids | Hemorrhoidectomy | Surgical Staplers | Postoperative Complications | Long-Term Care | Patient Satisfaction | Hemorroidas | Hemorroidectomia | Grampeadores Cirúrgicos | Complicações Pós-Operatórias | Assistência de Longa Duração | Satisfação do Paciente [SUMMARY]
[CONTENT] Hemorrhoids | Hemorrhoidectomy | Surgical Staplers | Postoperative Complications | Long-Term Care | Patient Satisfaction | Hemorroidas | Hemorroidectomia | Grampeadores Cirúrgicos | Complicações Pós-Operatórias | Assistência de Longa Duração | Satisfação do Paciente [SUMMARY]
[CONTENT] Hemorrhoids | Hemorrhoidectomy | Surgical Staplers | Postoperative Complications | Long-Term Care | Patient Satisfaction | Hemorroidas | Hemorroidectomia | Grampeadores Cirúrgicos | Complicações Pós-Operatórias | Assistência de Longa Duração | Satisfação do Paciente [SUMMARY]
[CONTENT] Hemorrhoids | Hemorrhoidectomy | Surgical Staplers | Postoperative Complications | Long-Term Care | Patient Satisfaction | Hemorroidas | Hemorroidectomia | Grampeadores Cirúrgicos | Complicações Pós-Operatórias | Assistência de Longa Duração | Satisfação do Paciente [SUMMARY]
[CONTENT] Female | Follow-Up Studies | Hemorrhoids | Humans | Male | Personal Satisfaction | Recurrence | Surgical Stapling | Treatment Outcome [SUMMARY]
[CONTENT] Female | Follow-Up Studies | Hemorrhoids | Humans | Male | Personal Satisfaction | Recurrence | Surgical Stapling | Treatment Outcome [SUMMARY]
[CONTENT] Female | Follow-Up Studies | Hemorrhoids | Humans | Male | Personal Satisfaction | Recurrence | Surgical Stapling | Treatment Outcome [SUMMARY]
[CONTENT] Female | Follow-Up Studies | Hemorrhoids | Humans | Male | Personal Satisfaction | Recurrence | Surgical Stapling | Treatment Outcome [SUMMARY]
[CONTENT] Female | Follow-Up Studies | Hemorrhoids | Humans | Male | Personal Satisfaction | Recurrence | Surgical Stapling | Treatment Outcome [SUMMARY]
[CONTENT] Female | Follow-Up Studies | Hemorrhoids | Humans | Male | Personal Satisfaction | Recurrence | Surgical Stapling | Treatment Outcome [SUMMARY]
[CONTENT] excisional hemorrhoidectomy | hemorrhoidectomy transanal dearterialization | treat hemorrhoids procedure | hemorrhoidectomy transanal | compared conventional hemorrhoidectomy [SUMMARY]
[CONTENT] excisional hemorrhoidectomy | hemorrhoidectomy transanal dearterialization | treat hemorrhoids procedure | hemorrhoidectomy transanal | compared conventional hemorrhoidectomy [SUMMARY]
[CONTENT] excisional hemorrhoidectomy | hemorrhoidectomy transanal dearterialization | treat hemorrhoids procedure | hemorrhoidectomy transanal | compared conventional hemorrhoidectomy [SUMMARY]
[CONTENT] excisional hemorrhoidectomy | hemorrhoidectomy transanal dearterialization | treat hemorrhoids procedure | hemorrhoidectomy transanal | compared conventional hemorrhoidectomy [SUMMARY]
[CONTENT] excisional hemorrhoidectomy | hemorrhoidectomy transanal dearterialization | treat hemorrhoids procedure | hemorrhoidectomy transanal | compared conventional hemorrhoidectomy [SUMMARY]
[CONTENT] excisional hemorrhoidectomy | hemorrhoidectomy transanal dearterialization | treat hemorrhoids procedure | hemorrhoidectomy transanal | compared conventional hemorrhoidectomy [SUMMARY]
[CONTENT] patients | follow | years | long | recurrence | anal | hd | term | degree | results [SUMMARY]
[CONTENT] patients | follow | years | long | recurrence | anal | hd | term | degree | results [SUMMARY]
[CONTENT] patients | follow | years | long | recurrence | anal | hd | term | degree | results [SUMMARY]
[CONTENT] patients | follow | years | long | recurrence | anal | hd | term | degree | results [SUMMARY]
[CONTENT] patients | follow | years | long | recurrence | anal | hd | term | degree | results [SUMMARY]
[CONTENT] patients | follow | years | long | recurrence | anal | hd | term | degree | results [SUMMARY]
[CONTENT] term complications | term | associated | procedure | long term | sh | years | long term complications | severe | colorectal [SUMMARY]
[CONTENT] spss | degree | data | analysis | degree satisfaction | complications | satisfaction | period | performed | satisfied [SUMMARY]
[CONTENT] patients | anal | grade | 47 | hd | incontinence flatus | table | 10 | long term | 34 [SUMMARY]
[CONTENT] follow | hemorrhoids | treatment | provides | low complexity | procedure treatment | infrequent patient degree satisfaction | procedure treatment symptomatic | procedure treatment symptomatic hemorrhoids | hemorrhoidal nodules [SUMMARY]
[CONTENT] patients | follow | anal | years | grade | degree | hd | recurrence | long | 10 [SUMMARY]
[CONTENT] patients | follow | anal | years | grade | degree | hd | recurrence | long | 10 [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] 155 | between 2000 and 2003 ||| ||| [SUMMARY]
[CONTENT] 155 | 98 | 59 | 60.2% | 39 | 39.8% ||| 193 months | 184 | 52 | 52 | 46 | IV ||| IV | 26.1% | 7.7% ||| 16 | 16.3% | 11 | 11.2% ||| 3.1% | 2.1% | 2.1% ||| 82.5% [SUMMARY]
[CONTENT] III | IV ||| IV | up to 9 years ||| [SUMMARY]
[CONTENT] ||| 155 | between 2000 and 2003 ||| ||| ||| 155 | 98 | 59 | 60.2% | 39 | 39.8% ||| 193 months | 184 | 52 | 52 | 46 | IV ||| IV | 26.1% | 7.7% ||| 16 | 16.3% | 11 | 11.2% ||| 3.1% | 2.1% | 2.1% ||| 82.5% ||| ||| III | IV ||| IV | up to 9 years ||| [SUMMARY]
[CONTENT] ||| 155 | between 2000 and 2003 ||| ||| ||| 155 | 98 | 59 | 60.2% | 39 | 39.8% ||| 193 months | 184 | 52 | 52 | 46 | IV ||| IV | 26.1% | 7.7% ||| 16 | 16.3% | 11 | 11.2% ||| 3.1% | 2.1% | 2.1% ||| 82.5% ||| ||| III | IV ||| IV | up to 9 years ||| [SUMMARY]
Role of Triggering Receptor Expressed on Myeloid Cell-1 Expression in Mammalian Target of Rapamycin Modulation of CD8
28485322
Triggering receptor expressed on myeloid cell-1 (TREM-1) may play a vital role in mammalian target of rapamycin (mTOR) modulation of CD8+ T-cell differentiation through the transcription factors T-box expressed in T-cells and eomesodermin during the immune response to invasive pulmonary aspergillosis (IPA). This study aimed to investigate whether the mTOR signaling pathway modulates the proliferation and differentiation of CD8+ T-cells during the immune response to IPA and the role TREM-1 plays in this process.
BACKGROUND
Cyclophosphamide (CTX) was injected intraperitoneally, and Aspergillus fumigatus spore suspension was inoculated intranasally to establish the immunosuppressed IPA mouse model. After inoculation, rapamycin (2 mg·kg−1·d−1) or interleukin (IL)-12 (5 μg/kg every other day) was given for 7 days. The number of CD8+ effector memory T-cells (Tem), the expression of interferon (IFN)-γ, mTOR, and ribosomal protein S6 kinase (S6K), and the levels of IL-6, IL-10, galactomannan (GM), and soluble TREM-1 (sTREM-1) were measured.
METHODS
Viable A. fumigatus was cultured from the lung tissue of the inoculated mice. Histological examination indicated greater inflammation, hemorrhage, and lung tissue injury in both the IPA and CTX + IPA groups. The expression of mTOR and S6K was significantly increased in the CTX + IPA + IL-12 group compared with the control, IPA (P = 0.01; P = 0.001), and CTX + IPA (P = 0.034; P = 0.032) groups, but significantly decreased in the CTX + IPA + RAPA group (P < 0.001). Compared with the CTX + IPA group, the proportion of Tem, the expression of IFN-γ, and the level of sTREM-1 were significantly higher after IL-12 treatment (P = 0.024, P = 0.032, and P = 0.017, respectively), and the opposite results were observed when the mTOR pathway was blocked by rapamycin (P < 0.001). Compared with the CTX + IPA and CTX + IPA + RAPA groups, IL-12 treatment increased IL-6 and downregulated IL-10 as well as GM, which strengthened the immune response to the IPA infection.
RESULTS
mTOR modulates CD8+ T-cell differentiation during the immune response to IPA. TREM-1 may play a vital role in signal transduction between mTOR and the downstream immune response.
CONCLUSIONS
[ "Animals", "CD8-Positive T-Lymphocytes", "Cell Differentiation", "Female", "Interferon-gamma", "Invasive Pulmonary Aspergillosis", "Lymphocyte Activation", "Mice", "Mice, Inbred BALB C", "Myeloid Cells", "Ribosomal Protein S6 Kinases", "TOR Serine-Threonine Kinases", "Tissue Culture Techniques" ]
5443028
Introduction
Invasive pulmonary aspergillosis (IPA) has become a leading cause of severe fungal infections in critically ill patients and has a high mortality rate, especially in patients with immune dysfunction such as those undergoing immunosuppressive or high-dose glucocorticoid therapy, and solid organ or hematopoietic stem cell transplantation patients.[1] The acquired immune response is mediated by T-lymphocytes, which determine whether the host can effectively clear the pathogen. CD8+ T-cells are a vital component of the acquired immune system.[2,3] Effector-phenotype CD8+ memory T-cells (Tem) can trigger a rapid, potent, and effective immune response to specific antigens in the early stages of infection. Investigating the mechanism of differentiation of CD8+ T-cells in the early stages of infection could provide new insights into effectively controlling infection in IPA patients with immune dysfunction.[4,5] The mammalian target of rapamycin (mTOR) signaling pathway regulates cell metabolism, survival, growth, and proliferation. The mTOR downstream signaling pathway contains two key substrates, the ribosomal S6 kinase (S6K) and the eukaryotic translational initiation factor 4E binding protein 1 (4E-BP1). When activated, mTOR phosphorylates S6K and 4E-BP1 to promote protein translation and is sensitive to rapamycin inhibition.[6] Recent studies have shown that mTOR may regulate T-cell differentiation and adaptive immunity following the binding of the T-cell receptor to antigen.[7] In preliminary experiments, we found that mTOR modulates lymphocyte differentiation in immunosuppressed animals with fungal infections by regulating the transcription factors T-box expressed in T-cells (T-bet) and eomesodermin.[8] T-bet is a key transcription factor regulating T-cell differentiation and belongs to the T-box transcription factor family.[9] Eomesodermin plays a synergistic role in T-bet function and regulates the differentiation of CD8+ T-cells into effector and memory T-cells. A recent study suggests that interleukin (IL)-12 can affect memory/effector CD8+ T-cell phenotype differentiation by regulating T-bet and eomesodermin.[10] IL-12 can also enhance the activity of mTOR kinase in naïve CD8+ T-cells.[11] In contrast, rapamycin inhibits the activity of mTOR kinase in CD8+ T-cells, thereby blocking the expression of T-bet and promoting the expression of eomesodermin. Increased expression of eomesodermin in CD8+ T-cells may promote the generation of memory T-cells, while high expression of T-bet could promote effector CD8+ T-cell proliferation.[12] Triggering receptors expressed on myeloid cells (TREMs) were identified as a new family of receptors that regulate both innate and adaptive immune responses to infection.[13,14] TREM-1 is mainly expressed on neutrophils, monocytes, and macrophages. High TREM-1 levels are indicative of acute or chronic conditions caused by fungi or bacteria.[14,15] The TREM-1 extracellular domain can also be found as soluble TREM-1 (sTREM-1). It is proposed that sTREM-1 may arise directly from a splice variant or following the shedding of membrane-bound TREM-1 by metalloproteinase-mediated proteolytic cleavage.[16] sTREM-1 has proven to be a valuable diagnostic and prognostic marker since it is easily detected with immunochemical assays in biological fluid samples.
Previous studies have found that sTREM-1 levels are elevated in plasma and bronchoalveolar lavage fluid from patients with bacterial or fungal pneumonia.[1718] In preliminary studies, we found that plasma sTREM-1 is closely correlated with T-bet and eomesodermin levels in immunocompromised Aspergillus-infected rats, suggesting that TREM-1 may be involved in lymphocyte regulation and differentiation during fungal infection.[19] Based on the theories and experimental results described above, we hypothesized that TREM-1 may play a vital role in mTOR modulation of CD8+ T-cell differentiation through the transcription factors T-bet and eomesodermin during the immune response to IPA.
Methods
Pathogen preparation
The strain of Aspergillus fumigatus, provided by the Department of Clinical Laboratory, Peking Union Medical College Hospital (PUMCH), was obtained from a case of pulmonary aspergillosis. Viable conidia (>95%) were obtained by growth on Sabouraud dextrose agar for 5 days at 35°C. Conidia were harvested in 10 ml 0.1% Tween-80/PBS and filtered through five layers of gauze. The concentration of conidia was adjusted to 1 × 10⁸ CFU/ml by the turbidity adjustment method.

Animal model preparation
Healthy 4-week-old female BALB/c mice, weighing 20 ± 5 g, were obtained from the Animal Facility Center, PUMCH. All animals were housed in a pathogen-free facility and treated according to protocols approved by the Institutional Animal Care and Use Committee of PUMCH. Thirty mice were randomly divided into the following groups (six mice per group): (1) control group: nontreated mice. (2) IPA group: animals were inoculated intranasally with 0.1 ml of the A. fumigatus conidia suspension. (3) Immunosuppression plus aspergillosis group (cyclophosphamide [CTX] + IPA): CTX (Jiangsu Hengrui Medicine Co., Ltd., Jiangsu, China) was injected intraperitoneally at a dose of 200 mg·kg−1·d−1 for 5 consecutive days, after which animals were infected with A. fumigatus as described above. (4) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA): mice were given 2 mg·kg−1·d−1 rapamycin for the 7 consecutive days after intraperitoneal injection of CTX and A. fumigatus infection. (5) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12): mice were given 5 μg/kg IL-12 every other day for the 7 days following intraperitoneal injection of CTX and A. fumigatus infection. Blood samples were obtained by retro-orbital bleeding. Part of the lung tissue was minced and used for A. fumigatus culture; the rest was fixed with 4% formaldehyde, and paraffin-embedded tissue sections were stained with hematoxylin and eosin (H and E), Masson trichrome, and periodic acid-silver methenamine.
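Neither the dilution arithmetic behind the 1 × 10⁸ CFU/ml working suspension nor the randomization procedure is spelled out in the paper; the following minimal Python sketch only illustrates the bookkeeping those two steps imply. The measured stock concentration, the random seed, and all function names are hypothetical, and the per-mouse CTX dose is simply the stated 200 mg/kg scaled to a nominal 20 g body weight.

import random

def dilution_volume(c_stock, c_target, v_final):
    """Volume of stock suspension (ml) needed to prepare v_final ml
    at c_target CFU/ml from a stock measured at c_stock CFU/ml."""
    if c_stock < c_target:
        raise ValueError("stock is more dilute than the target")
    return v_final * c_target / c_stock

# Hypothetical count of the harvested conidia (not a value from the paper).
c_stock = 3.2e8      # CFU/ml
c_target = 1e8       # CFU/ml, the target concentration used in the study
v_final = 10.0       # ml of working suspension

v_stock = dilution_volume(c_stock, c_target, v_final)
print(f"Mix {v_stock:.2f} ml stock with {v_final - v_stock:.2f} ml 0.1% Tween-80/PBS")

# Randomize 30 mice into the five study groups (six per group).
groups = ["control", "IPA", "CTX+IPA", "CTX+IPA+RAPA", "CTX+IPA+IL-12"]
mice = list(range(1, 31))
random.seed(7)       # fixed seed only so this example is reproducible
random.shuffle(mice)
assignment = {g: sorted(mice[i * 6:(i + 1) * 6]) for i, g in enumerate(groups)}
print(assignment)

# Per-mouse CTX dose: 200 mg/kg/day for a nominal 20 g (0.020 kg) mouse.
ctx_dose_mg = 200 * 0.020
print(f"CTX dose for a 20 g mouse: {ctx_dose_mg:.1f} mg/day")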
CD8+ effector memory T-cell counts and interferon-γ, mammalian target of rapamycin, and S6 kinase expression
Peripheral blood mononuclear cells were isolated from blood samples and counted by flow cytometry. Cells were then labeled with the following fluorescence-conjugated monoclonal antibodies: anti-rat CD45 PE (12-0451-81, eBioscience, San Diego, CA, USA), anti-rat CD8a APC (17-0081-81, eBioscience), anti-rat CD44 PE (25-0441-81, eBioscience), and anti-rat CD62L (104432, Biolegend). CD8+ Tem were sorted by flow cytometry (EPICS-XL, Beckman-Coulter, Indianapolis, IN, USA) and then stained for interferon (IFN)-γ (11-7311-81, eBioscience), mTOR (ab87540, Abcam, Cambridge, MA, USA), and S6K (ab32529, Abcam) expression.
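The Tem gate described above (side scatter and CD8a to select CD8+ T-lymphocytes, then CD44 and CD45 positivity with CD62L either high or low) can be expressed as boolean masks over per-event intensities. This is a schematic pandas sketch on simulated events, not the Beckman-Coulter acquisition software used in the study; the column names, thresholds, and event distributions are all invented for illustration.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000  # simulated events; a real FCS file would be parsed with a reader library

# Hypothetical per-event intensities on an arbitrary scale.
events = pd.DataFrame({
    "SSC":   rng.normal(2.0, 0.5, n),
    "CD8a":  rng.normal(1.0, 1.0, n),
    "CD45":  rng.normal(2.5, 0.8, n),
    "CD44":  rng.normal(1.5, 1.0, n),
    "CD62L": rng.normal(1.0, 1.0, n),
})

THRESH = 1.5  # single illustrative positivity cutoff for every marker

# Gate 1: lymphocyte-like events on side scatter, then CD8a-positive.
cd8_t = events[events.SSC.between(1.0, 3.0) & (events.CD8a > THRESH)]

# Gate 2: effector memory phenotype; CD62L may be either low or high
# ("CD62+/-"), so it does not enter the mask.
tem = cd8_t[(cd8_t.CD44 > THRESH) & (cd8_t.CD45 > THRESH)]

print(f"CD8+ events: {len(cd8_t)}, Tem events: {len(tem)}, "
      f"Tem fraction of CD8+: {len(tem) / len(cd8_t):.2f}")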
Cytokine and soluble triggering receptors expressed on myeloid cell-1 quantification
Cytokines and sTREM-1 were quantified using the following ELISA kits according to the manufacturers' instructions: IL-6 (cat#: ab168538, Abcam), IL-10 (cat#: ab176665, Abcam), galactomannan (GM; cat#: 85-86051, Affymetrix Bioscience, San Diego, CA, USA), and sTREM-1 (cat#: SEA213Mu, USCN, Wuhan, China).

Statistical analysis
Data were analyzed using SPSS version 18.0 software (SPSS Inc., Armonk, NY, USA). Continuous variables were tested for normality; normally distributed data are given as mean ± standard deviation (SD), while non-normally distributed data are given as medians (interquartile ranges) and were compared using nonparametric tests. Student's t-test or analysis of variance followed by Bonferroni's test was used to determine the statistical significance (P) of differences. Pearson's correlation coefficient was used to analyze the correlation between two parameters. A value of P < 0.05 was considered statistically significant.
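The statistical plan above maps directly onto standard scipy routines. Because the raw per-mouse values are not published, the sketch below runs the same pipeline (normality check, one-way ANOVA, Bonferroni-corrected pairwise t-tests, Pearson correlation) on simulated data whose means and SDs loosely follow the reported summaries; it illustrates the workflow and is not a re-analysis of the study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated per-mouse mTOR expression for three of the groups (n = 6 each).
groups = {
    "control":       rng.normal(0.25, 0.04, 6),
    "CTX+IPA":       rng.normal(0.29, 0.04, 6),
    "CTX+IPA+IL-12": rng.normal(0.39, 0.12, 6),
}

# Normality check per group (Shapiro-Wilk).
for name, values in groups.items():
    w, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk P = {p:.3f}")

# One-way ANOVA across the groups.
f, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA P = {p_anova:.4f}")

# Pairwise t-tests with a Bonferroni correction (3 comparisons here).
names = list(groups)
m = len(names) * (len(names) - 1) // 2
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p = stats.ttest_ind(groups[names[i]], groups[names[j]])
        print(f"{names[i]} vs {names[j]}: adjusted P = {min(1.0, p * m):.4f}")

# Pearson correlation between two simulated parameters (e.g., mTOR vs sTREM-1).
r, p_r = stats.pearsonr(groups["CTX+IPA+IL-12"], rng.normal(1800, 250, 6))
print(f"Pearson r = {r:.2f}, P = {p_r:.3f}")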
Results
Tissue culture and histology
Viable A. fumigatus was cultured from lung tissue of both IPA mice and CTX + IPA mice, whether treated with rapamycin or IL-12 or untreated [Figure 1a–1c], while cultures from control mice were negative. Histological examination indicated that the lung tissue structure was intact in the normal control [Figure 1d]. In contrast, infiltration of inflammatory cells, hemorrhage, and interstitial lung tissue injury were found in the lungs of the IPA, CTX + IPA, CTX + IPA + IL-12, and CTX + IPA + RAPA groups [Figure 1e–1h]. Compared with the IPA group, Figure 1 suggests that CTX + IPA mice, whether treated with rapamycin or IL-12 or untreated, had greater inflammation and hemorrhage in the interstitial lung tissue.

Histology of lung tissue stained with H and E, Masson trichrome, and PASM. (a–c) The fungal spores of Aspergillus fumigatus. (d) Control animal. (e) Animal infected with IPA. (f) Immunosuppression plus aspergillosis group (CTX + IPA). (g) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12). (h) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA). Original magnification: (a) Masson trichrome staining, ×200; (b) Masson trichrome staining, ×400; (c) PASM staining, ×600; (d–h) H and E, ×100. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; PASM: Periodic acid-silver methenamine; IL: Interleukin.

Regulation of the mammalian target of rapamycin pathway
As shown in Figure 2, mTOR activity was significantly increased in the CTX + IPA + IL-12 group (0.39 ± 0.12) compared with the control (0.25 ± 0.04, P = 0.004), IPA (0.27 ± 0.04, P = 0.01), and CTX + IPA (0.29 ± 0.04, P = 0.034) groups. To verify that the induction of mTOR phosphorylation also translated into kinase activity, we monitored the kinetics of S6K, a direct target of mTOR kinase activity.
As anticipated, IL-12 stimulation significantly enhanced the expression of S6K in CD8+ Tem (0.13 ± 0.03) compared with the control (0.06 ± 0.04, P < 0.001), IPA (0.07 ± 0.02, P = 0.001), and CTX + IPA (0.09 ± 0.01, P = 0.032) groups.

IL-12 increases the expression of mammalian target of rapamycin and S6 kinase, an effect blocked by rapamycin. Peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were analyzed using flow cytometry 7 days after intranasal inoculation of Aspergillus fumigatus. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/− cells represent CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin.

Similarly, when we blocked mTOR activity by adding rapamycin during IPA infection, we found that the expression of mTOR was significantly lower in the CTX + IPA + RAPA group (0.19 ± 0.05) than in the CTX + IPA (0.29 ± 0.04, P = 0.034) and CTX + IPA + IL-12 (0.39 ± 0.12, P < 0.001) groups. The expression of S6K was also significantly lower in the CTX + IPA + RAPA group (0.04 ± 0.04) than in the CTX + IPA (0.09 ± 0.01, P = 0.01) and CTX + IPA + IL-12 (0.13 ± 0.03, P < 0.001) groups.
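Because every comparison in this section is reported as mean ± SD with n = 6 per group, a quick plausibility check of any quoted contrast can be run from the summary statistics alone. The sketch below uses scipy's ttest_ind_from_stats on the mTOR figures quoted above; note that it performs an unadjusted Welch t-test, whereas the paper reports ANOVA with Bonferroni correction, so only broad agreement should be expected.

from scipy import stats

# Reported mTOR expression: (mean, SD, n) per group.
il12 = (0.39, 0.12, 6)   # CTX + IPA + IL-12
ctrl = (0.25, 0.04, 6)   # control

t, p = stats.ttest_ind_from_stats(*il12, *ctrl, equal_var=False)
# The paper's Bonferroni-adjusted ANOVA P for this contrast is 0.004.
print(f"Welch t = {t:.2f}, unadjusted P = {p:.4f}")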
Mammalian target of rapamycin modulates CD8+ T-cell differentiation and expression of triggering receptor expressed on myeloid cell-1
As shown in Figure 3, the proportion of CD8+ Tem, IFN-γ production, and serum sTREM-1 expression were significantly increased in the CTX + IPA + IL-12 group (0.47 ± 0.06, 0.09 ± 0.03, and 1876.91 ± 247.80, respectively) compared with the CTX + IPA group (0.35 ± 0.10, P = 0.024; 0.05 ± 0.04, P = 0.032; and 1537.64 ± 359.52, P = 0.017, respectively). These results indicate that the addition of IL-12 improved CD8+ Tem differentiation and IFN-γ production and drove robust serum sTREM-1 expression during IPA infection through activation of the mTOR pathway.

The proportion of CD8+ effector memory T-cells, expression of interferon-γ, and sTREM-1 levels were significantly increased by IL-12 stimulation but significantly decreased by rapamycin treatment. Peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were analyzed using flow cytometry 7 days after Aspergillus fumigatus intranasal inoculation. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/− cells represented CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin; sTREM-1: Soluble triggering receptors expressed on myeloid cell-1.

More importantly, we also found that adding rapamycin significantly decreased the proportion of CD8+ Tem (0.22 ± 0.05), IFN-γ production (0.02 ± 0.01), and serum sTREM-1 expression (1214.45 ± 293.24) compared with the CTX + IPA (0.35 ± 0.10, P = 0.017; 0.05 ± 0.04, P = 0.039; and 1537.64 ± 359.52, P = 0.022, respectively) and CTX + IPA + IL-12 (0.47 ± 0.06, P < 0.001; 0.09 ± 0.03, P < 0.001; and 1876.91 ± 247.80, P < 0.001, respectively) groups. These results indicate that sustained mTOR activity is essential for CD8+ Tem differentiation, Type I effector functions, and serum sTREM-1 expression.
Alteration of inflammatory responses and severity of fungal infection
IL-6 levels reflect the inflammatory response, and IL-10 levels reflect the anti-inflammatory response. Figure 4 shows that the IL-12-treated group had the highest IL-6 levels (2053.91 ± 402.37 pg/ml), followed by the CTX + IPA (1426.57 ± 488.03 pg/ml, P = 0.036), CTX + IPA + RAPA (822.74 ± 211.30 pg/ml, P < 0.001), IPA (811.58 ± 247.03 pg/ml, P < 0.001), and control (245.65 ± 85.78 pg/ml, P < 0.001) groups. In contrast, the concentration of IL-10 showed the reverse trend. The IL-10 levels of the IL-12-treated group (314.94 ± 36.97 pg/ml), as well as the control (267.60 ± 46.10 pg/ml) and IPA (350.93 ± 43.11 pg/ml) groups, were significantly lower than those of the CTX + IPA group (437.95 ± 53.41 pg/ml, P < 0.001) and especially the CTX + IPA + RAPA group (542.44 ± 37.32 pg/ml, P < 0.001).

IL-12 significantly increased the IL-6 level, which was significantly decreased by rapamycin treatment. In contrast, the concentrations of IL-10 and galactomannan showed the reverse trend. Plasma samples obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were assayed by ELISA 7 days after Aspergillus fumigatus intranasal inoculation. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin.

Galactomannan (GM) is used in clinical practice as a marker of the severity of fungal infection. In this study, we found that, after A. fumigatus inoculation, the level of GM was significantly increased in the IPA (1985.98 ± 152.79 pg/ml, P < 0.001), CTX + IPA (2251.06 ± 230.50 pg/ml, P < 0.001), CTX + IPA + IL-12 (1920.15 ± 86.16 pg/ml, P < 0.001), and CTX + IPA + RAPA (2496.87 ± 278.05 pg/ml, P < 0.001) groups compared with the control group (126.82 ± 10.31 pg/ml). However, as shown in Figure 4, treatment with IL-12 significantly decreased the level of GM compared with the CTX + IPA (P = 0.002) and CTX + IPA + RAPA (P < 0.001) groups.
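Concentrations such as the IL-6, IL-10, GM, and sTREM-1 values above are conventionally read off a four-parameter logistic (4PL) standard curve fitted to each ELISA kit's calibrators. The kits' own data-reduction steps are not described in the paper, so the following is a generic sketch only; the calibrator concentrations and optical densities are invented.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4-parameter logistic: a = response at zero concentration,
    d = response at saturation, c = inflection (EC50), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical kit standards: concentration (pg/ml) vs optical density.
conc = np.array([31.25, 62.5, 125, 250, 500, 1000, 2000])
od = np.array([0.08, 0.15, 0.27, 0.48, 0.83, 1.32, 1.85])

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.0, 400.0, 2.2], maxfev=10_000)

def od_to_conc(y, a, b, c, d):
    """Invert the 4PL curve to recover a concentration from a sample OD."""
    return c * (((a - d) / (y - d)) - 1.0) ** (1.0 / b)

sample_od = 0.95
print(f"Sample concentration ≈ {od_to_conc(sample_od, *params):.0f} pg/ml")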
null
null
[ "Pathogen preparation", "Animal model preparation", "CD8+ effector memory T-cells counts and interferon-γ, mammalian target of rapamycin, and S6 kinase expression", "Cytokine and soluble triggering receptors expressed on myeloid cell-1 quantification", "Statistical analysis", "Tissue culture and histology", "Regulation of the mammalian target of rapamycin pathway", "Mammalian target of rapamycin modulates CD8+ T-cell differentiation and expression of triggering receptor expressed on myeloid cell-1", "Alteration of inflammatory responses and severity of fungal infection", "Financial support and sponsorship" ]
[ "The strain of Aspergillus fumigatus, provided by the Department of Clinical Laboratory, Peking Union Medical College Hospital (PUMCH), was obtained from a case of pulmonary aspergillosis. Viable conidia (>95%) were obtained by growth on Sabouraud dextrose agar for 5 days at 35°C. Conidia were harvested in 10 ml 0.1% Tween-80/PBS and filtered through five layers of gauze. The concentration of conidia was adjusted to 1 × 108 CFU/ml by turbidity adjustment method.", "Healthy 4-week-old female BALB/c mice, weighing 20 ± 5 g, were obtained from the Animal Facility Center, PUMCH. All animals were housed in a pathogen-free facility and treated according to protocols approved by the Institutional Animal Care and Use Committee of PUMCH. Thirty mice were randomly divided into the following groups (six mice per group): (1) control group: nontreated mice. (2) IPA group: animals were inoculated with 0.1 ml A. fumigatus conidia solution intranasally. (3) Immunosuppression plus aspergillosis group (cyclophosphamide [CTX] + IPA): CTX (Jiangsu Hengrui Medicine Co., Ltd., Jiangsu, China) was injected intraperitoneally at a dose of 200 mg·kg−1·d−1 for 5 consecutive days. Animals were then infected with A. fumigatus as mentioned above. (4) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA): mice were given 2 mg·kg−1·d−1 rapamycin for the 7 consecutive days after intraperitoneal injection of CTX and A. fumigatus infection. (5) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12): mice were given 5 μg/kg IL-12 every other day for the 7 days following intraperitoneal injection of CTX and A. fumigatus infection. Blood samples were obtained by retro-orbital bleeding. Part of the lung tissue was minced and used for A. fumigatus culture, the rest was fixed with 4% formaldehyde, and paraffin-embedded tissue sections were stained with hematoxylin and eosin (H and E), Masson trichrome, and periodic acid-silver methenamine.", "Peripheral blood mononuclear cells were isolated from blood samples and counted by flow cytometry. Cells were then labeled with the following fluorescence-conjugated monoclonal antibodies: anti-rat CD45 PE (12-0451-81, eBioscience, San Diego, CA, USA), anti-rat CD8a APC (17-0081-81, eBioscience), anti-rat CD44 PE (25-0441-81, eBioscience), and anti-rat CD62L (104432, Biolegend). CD8+ Tem were sorted by flow cytometry (EPICS-XL, Beckman-Coulter, Indianapolis, IN, USA) and then stained for interferon (IFN)-γ (11-7311-81, eBioscience), mTOR (ab87540, Abcam, Cambridge, MA, USA), and S6K (ab32529, Abcam) expression.", "Cytokines and sTREM-1 were quantified using the following ELISA kits as per the manufacturer's instructions; IL-6 (cat#: ab168538, Abcam), IL-10 (cat#: ab176665, Abcam), GM (cat#: 85-86051, Affymetrix Bioscience, San Diego, CA, USA), and sTREM-1 (cat#: SEA213Mu, USCN, Wuhan, China).", "Data were analyzed using SPSS version 18.0 software (SPSS Inc., Armonk, NY, USA). All the data for the continuous variables in this study were proven to have normal distributions and are given as mean ± standard deviations (SD). Results for continuous variables that were not normally distributed are given as medians (interquartile ranges) and were compared using nonparametric tests. Student's t-test or analysis of variance followed by Bonferroni's test were used to determine the statistical significance (P) of differences. Pearson's correlation coefficient was used to analyze the correlation of two parameters. 
A value of P < 0.05 was considered statistically significant.", "Viable A. fumigatus was cultured from lung tissue of both IPA and CTX + IPA mice treated with rapamycin or IL-12 or without treatment [Figure 1a–1c], while control mice were negative. Histological examination indicated that the lung tissue structure was intact in the normal control [Figure 1d]. In contrast, infiltration of inflammatory cells, hemorrhage, and interstitial lung tissue injury were found in the lungs of the IPA, CTX + IPA, CTX + IPA + IL-12, and CTX + IPA + RAPA groups [Figure 1e–1h]. Compared with the IPA group, Figure 1 suggests that CTX + IPA mice treated with rapamycin or IL-12 or without treatment had greater inflammation and hemorrhage in the interstitial lung tissue.\nHistology of lung tissue stained with H and E, Masson trichrome, and PASM. (a–c) The fungal spores of Aspergillus fumigatus. (d) Control animal. (e) Animal infected with IPA. (f) Immunosuppression plus aspergillosis group (CTX + IPA). (g) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12). (h) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA). Original magnification: (a) Masson trichrome staining, original magnification ×200; (b) Masson trichrome staining, original magnification ×400; (c) PASM staining, original magnification ×600; (d–h) H and E, original magnification ×100. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; PASM: Periodic acid silver methenamine; IL: Interleukin.", "As shown in Figure 2, compared with the control group (0.25 ± 0.04, P = 0.004), IPA group (0.27 ± 0.04, P = 0.01), and CTX + IPA group (0.29 ± 0.04, P = 0.034), mTOR activity was significantly increased in the CTX + IPA + IL-12 group (0.39 ± 0.12). To verify that the induction of mTOR phosphorylation also led to its kinase activity, we monitored the kinetics of S6K, a direct target of mTOR kinase activity. As anticipated, IL-12 stimulation significantly enhanced the expression of S6K in CD8+ Tem (0.13 ± 0.03) compared with the control (0.06 ± 0.04, P < 0.001), IPA (0.07 ± 0.02, P = 0.001), and CTX + IPA (0.09 ± 0.01, P = 0.032) groups.\nIL-12 increases the expression of mammalian target of rapamycin and S6 kinase, which was blocked by rapamycin. The peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected using flow cytometry 7 days after intranasal inoculation of Aspergillus fumigatus. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/− cells represent CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin.\nSimilarly, when we blocked mTOR activity by adding rapamycin during IPA infection, we found that the expression of mTOR was significantly lower in the CTX + IPA + RAPA group (0.19 ± 0.05) compared with the CTX + IPA (0.29 ± 0.04, P = 0.034) and CTX + IPA + IL-12 (0.39 ± 0.12, P < 0.001) groups. 
The expression of S6K was also significantly lower in the CTX + IPA + RAPA (0.04 ± 0.04) group than in the CTX + IPA (0.09 ± 0.01, P = 0.01) and CTX + IPA + IL-12 (0.13 ± 0.03, P < 0.001) groups.", "As shown in Figure 3, the proportion of CD8+ Tem, IFN-γ production, and serum sTREM-1 expression were significantly increased in the CTX + IPA + IL-12 group (0.47 ± 0.06, 0.09 ± 0.03, and 1876.91 ± 247.80, respectively) compared with the CTX + IPA group (0.35 ± 0.10, P = 0.024; 0.05 ± 0.04, P = 0.032; and 1537.64 ± 359.52, P = 0.017, respectively). The result indicates that the addition of IL-12 could improve CD8+ Tem differentiation, IFN-γ production, and resulted in robust serum sTREM-1 expression during IPA infection by activating the mTOR pathway.\nThe proportion of CD8+ effector memory T-cells, expression of interferon-γ, and sTREM-1 levels were significantly increased by IL-12 stimulation but significantly decreased by rapamycin treatment. The peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected using flow cytometry 7 days after Aspergillus fumigatus intranasal inoculation. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/- cells represented CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin; sTREM-1: Soluble triggering receptors expressed on myeloid cell-1.\nMore importantly, we also found that adding rapamycin could significantly decrease the proportion of CD8+ Tem (0.22 ± 0.05), IFN-γ production (0.02 ± 0.01), and serum TREM-1 expression (1214.45 ± 293.24) compared with the CTX + IPA (0.35 ± 0.10, P = 0.017; 0.05 ± 0.04, P = 0.039; and 1537.64 ± 359.52, P = 0.022, respectively) and CTX + IPA + IL-12 (0.47 ± 0.06, P < 0.001; 0.09 ± 0.03, P < 0.001; and 1876.91 ± 247.80, P < 0.001, respectively) groups. These results indicate that sustained mTOR activity is essential for CD8+ Tem differentiation, Type I effector functions, and serum sTREM-1 expression.", "IL-6 levels reflect the inflammatory response, and IL-10 levels reflect anti-inflammatory responses. Figure 4 shows that the IL-12-treated group had the highest IL-6 levels (2053.91 ± 402.37 pg/ml), followed by the CTX + IPA (1426.57 ± 488.03 pg/ml, P = 0.036), CTX + IPA + RAPA (822.74 ± 211.30 pg/ml, P < 0.001), IPA (811.58 ± 247.03 pg/ml, P < 0.001), and control (245.65 ± 85.78 pg/ml, P < 0.001) groups. In contrast, the concentration of IL-10 showed the reverse trend. The IL-10 level of the IL-12-treated group (314.94 ± 36.97 pg/ml), as well as the control (267.60 ± 46.10 pg/ml) and IPA (350.93 ± 43.11 pg/ml) groups, were significantly lower than the CTX + IPA group (437.95 ± 53.41 pg/ml, P < 0.001) and especially the CTX + IPA + RAPA group (542.44 ± 37.32 pg/ml, P < 0.001).\nIL-12 significantly increased the IL-6 level, which was significantly decreased by rapamycin treatment. In contrast, the concentration of IL-10 and galactomannan showed the reverse trend. Plasma samples obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected by ELISA 7 days after Aspergillus fumigatus intranasal inoculation. The data are presented as the mean ± standard deviation. *P < 0.05. 
CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin.\nGalactomannan (GM) is used to demonstrate the severity of fungal infection in clinical practice. In this study, we found that, after A. fumigatus inoculation, the level of GM was significantly increased in the IPA (1985.98 ± 152.79 pg/ml, P < 0.001), CTX + IPA (2251.06 ± 230.50 pg/ml, P < 0.001), CTX + IPA + IL-12 (1920.15 ± 86.16 pg/ml, P < 0.001), and CTX + IPA + RAPA (2496.87 ± 278.05 pg/ml, P < 0.001) groups compared with the control group (126.82 ± 10.31 pg/ml). However, as shown in Figure 4, treatment with IL-12 significantly decreased the level of GM compared with the CTX + IPA (P = 0.002) and CTX + IPA + RAPA (P < 0.001) groups.", "The work was supported by the Beijing Municipal Natural Science Foundation (No. 7152119) and Chinese National Natural Science Foundation for Young Scholars (No. 81601657)." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Pathogen preparation", "Animal model preparation", "CD8+ effector memory T-cells counts and interferon-γ, mammalian target of rapamycin, and S6 kinase expression", "Cytokine and soluble triggering receptors expressed on myeloid cell-1 quantification", "Statistical analysis", "Results", "Tissue culture and histology", "Regulation of the mammalian target of rapamycin pathway", "Mammalian target of rapamycin modulates CD8+ T-cell differentiation and expression of triggering receptor expressed on myeloid cell-1", "Alteration of inflammatory responses and severity of fungal infection", "Discussion", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Invasive pulmonary aspergillosis (IPA) has become a leading cause of severe fungal infections in critically ill patients and has a high mortality rate, especially in patients with immune dysfunction such as those undergoing immunosuppressive or high-dose glucocorticoid therapy, and solid organ or hematopoietic stem cell transplantation patients.[1] The acquired immune response is mediated by T-lymphocytes, which determine whether the host can effectively clear the pathogen. CD8+ T-cells are a vital component of the acquired immune system.[23] Effector-phenotype CD8+ memory T-cells (Tem) can trigger a rapid, potent, and effective immune response to specific antigens in the early stages of infection. Investigating the mechanism of differentiation of CD8+ T cells in the early stages of infection could provide new insights into effectively controlling infection in IPA patients with immune dysfunction.[45]\nThe mammalian target of rapamycin (mTOR) signaling pathway regulates cell metabolism, survival, growth, and proliferation. The mTOR downstream signaling pathway contains two key substrates, the ribosomal S6 kinase (S6K) and the eukaryotic translational initiation factor 4E binding protein 1 (4E-BP1). When activated, mTOR phosphorylates S6K and 4E-BP1 to promote protein translation and is sensitive to rapamycin inhibition.[6] Recent studies have shown that mTOR may regulate T-cell differentiation and adaptive immunity following the binding of the T-cell receptor to antigen.[7] In preliminary experiments, we found that mTOR modulates lymphocyte differentiation in immunosuppressed animals with fungal infections by regulating the transcription factors T-box expressed in T-cells (T-bet) and eomesodermin.[8] T-bet is a key transcription factor regulating T-cell differentiation and belongs to the T-box transcription factor family.[9] Eomesodermin plays a synergistic role in T-bet function and regulates the differentiation of CD8+ T-cells into effector and memory T-cells. A recent study suggests that interleukin (IL)-12 can affect memory/effector CD8+ T-cell phenotype differentiation by regulating T-bet and eomesodermin.[10] IL-12 can also enhance the activity of mTOR kinase in naïve CD8+ T-cells.[11] In contrast, rapamycin inhibits the activity of mTOR kinase in CD8+ T-cells, thereby blocking the expression of T-bet and promoting the expression of eomesodermin. Increased expression of eomesodermin in CD8+ T-cells may promote the generation of memory T-cells, while high expression of T-bet could promote effector CD8+ T-cell proliferation.[12]\nTriggering receptors expressed on myeloid cells (TREMs) were identified as a new family of receptors that regulate both innate and adaptive immune responses to infection.[1314] TREM-1 is mainly expressed on neutrophils, monocytes, and macrophages. High TREM-1 levels are indicative of acute or chronic conditions caused by fungi or bacteria.[1415] The TREM-1 extracellular domain can also be found as soluble TREM-1 (sTREM-1). It is proposed that sTREM-1 may arise directly from a splice variant or following the shedding of membrane-bound TREM-1 by metalloproteinase-mediated proteolytic cleavage.[16] sTREM-1 has proven to be a valuable diagnostic and prognostic marker since it is easily detected with immunochemical assays in biological fluid samples. 
Previous studies have found that sTREM-1 levels are elevated in plasma and bronchoalveolar lavage fluid from patients with bacterial or fungal pneumonia.[17,18] In preliminary studies, we found that plasma sTREM-1 is closely correlated with T-bet and eomesodermin levels in immunocompromised Aspergillus-infected rats, suggesting that TREM-1 may be involved in lymphocyte regulation and differentiation during fungal infection.[19]\nBased on the theories and experimental results described above, we hypothesized that TREM-1 may play a vital role in mTOR modulation of CD8+ T-cell differentiation through the transcription factors T-bet and eomesodermin during the immune response to IPA.", " Pathogen preparation The strain of Aspergillus fumigatus, provided by the Department of Clinical Laboratory, Peking Union Medical College Hospital (PUMCH), was obtained from a case of pulmonary aspergillosis. Viable conidia (>95%) were obtained by growth on Sabouraud dextrose agar for 5 days at 35°C. Conidia were harvested in 10 ml 0.1% Tween-80/PBS and filtered through five layers of gauze. The concentration of conidia was adjusted to 1 × 10⁸ CFU/ml by the turbidity adjustment method.\n Animal model preparation Healthy 4-week-old female BALB/c mice, weighing 20 ± 5 g, were obtained from the Animal Facility Center, PUMCH. All animals were housed in a pathogen-free facility and treated according to protocols approved by the Institutional Animal Care and Use Committee of PUMCH. Thirty mice were randomly divided into the following groups (six mice per group): (1) control group: nontreated mice. (2) IPA group: animals were inoculated with 0.1 ml A. fumigatus conidia solution intranasally. (3) Immunosuppression plus aspergillosis group (cyclophosphamide [CTX] + IPA): CTX (Jiangsu Hengrui Medicine Co., Ltd., Jiangsu, China) was injected intraperitoneally at a dose of 200 mg·kg−1·d−1 for 5 consecutive days. Animals were then infected with A. fumigatus as mentioned above. (4) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA): mice were given 2 mg·kg−1·d−1 rapamycin for the 7 consecutive days after intraperitoneal injection of CTX and A. fumigatus infection. (5) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12): mice were given 5 μg/kg IL-12 every other day for the 7 days following intraperitoneal injection of CTX and A. fumigatus infection. Blood samples were obtained by retro-orbital bleeding. Part of the lung tissue was minced and used for A. fumigatus culture, the rest was fixed with 4% formaldehyde, and paraffin-embedded tissue sections were stained with hematoxylin and eosin (H and E), Masson trichrome, and periodic acid-silver methenamine.\n CD8+ effector memory T-cells counts and interferon-γ, mammalian target of rapamycin, and S6 kinase expression Peripheral blood mononuclear cells were isolated from blood samples and counted by flow cytometry. Cells were then labeled with the following fluorescence-conjugated monoclonal antibodies: anti-rat CD45 PE (12-0451-81, eBioscience, San Diego, CA, USA), anti-rat CD8a APC (17-0081-81, eBioscience), anti-rat CD44 PE (25-0441-81, eBioscience), and anti-rat CD62L (104432, Biolegend). CD8+ Tem were sorted by flow cytometry (EPICS-XL, Beckman-Coulter, Indianapolis, IN, USA) and then stained for interferon (IFN)-γ (11-7311-81, eBioscience), mTOR (ab87540, Abcam, Cambridge, MA, USA), and S6K (ab32529, Abcam) expression.\n Cytokine and soluble triggering receptors expressed on myeloid cell-1 quantification Cytokines and sTREM-1 were quantified using the following ELISA kits as per the manufacturer's instructions; IL-6 (cat#: ab168538, Abcam), IL-10 (cat#: ab176665, Abcam), GM (cat#: 85-86051, Affymetrix Bioscience, San Diego, CA, USA), and sTREM-1 (cat#: SEA213Mu, USCN, Wuhan, China).\n Statistical analysis Data were analyzed using SPSS version 18.0 software (SPSS Inc., Armonk, NY, USA). Continuous variables were tested for normality; normally distributed data are given as mean ± standard deviation (SD), while non-normally distributed data are given as medians (interquartile ranges) and were compared using nonparametric tests. Student's t-test or analysis of variance followed by Bonferroni's test was used to determine the statistical significance (P) of differences. Pearson's correlation coefficient was used to analyze the correlation between two parameters. A value of P < 0.05 was considered statistically significant.", "The strain of Aspergillus fumigatus, provided by the Department of Clinical Laboratory, Peking Union Medical College Hospital (PUMCH), was obtained from a case of pulmonary aspergillosis. Viable conidia (>95%) were obtained by growth on Sabouraud dextrose agar for 5 days at 35°C. Conidia were harvested in 10 ml 0.1% Tween-80/PBS and filtered through five layers of gauze. The concentration of conidia was adjusted to 1 × 10⁸ CFU/ml by the turbidity adjustment method.", "Healthy 4-week-old female BALB/c mice, weighing 20 ± 5 g, were obtained from the Animal Facility Center, PUMCH. All animals were housed in a pathogen-free facility and treated according to protocols approved by the Institutional Animal Care and Use Committee of PUMCH. Thirty mice were randomly divided into the following groups (six mice per group): (1) control group: nontreated mice. (2) IPA group: animals were inoculated with 0.1 ml A. fumigatus conidia solution intranasally. (3) Immunosuppression plus aspergillosis group (cyclophosphamide [CTX] + IPA): CTX (Jiangsu Hengrui Medicine Co., Ltd., Jiangsu, China) was injected intraperitoneally at a dose of 200 mg·kg−1·d−1 for 5 consecutive days. Animals were then infected with A. fumigatus as mentioned above. (4) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA): mice were given 2 mg·kg−1·d−1 rapamycin for the 7 consecutive days after intraperitoneal injection of CTX and A. fumigatus infection. (5) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12): mice were given 5 μg/kg IL-12 every other day for the 7 days following intraperitoneal injection of CTX and A. fumigatus infection. Blood samples were obtained by retro-orbital bleeding. Part of the lung tissue was minced and used for A. fumigatus culture, the rest was fixed with 4% formaldehyde, and paraffin-embedded tissue sections were stained with hematoxylin and eosin (H and E), Masson trichrome, and periodic acid-silver methenamine.", "Peripheral blood mononuclear cells were isolated from blood samples and counted by flow cytometry. Cells were then labeled with the following fluorescence-conjugated monoclonal antibodies: anti-rat CD45 PE (12-0451-81, eBioscience, San Diego, CA, USA), anti-rat CD8a APC (17-0081-81, eBioscience), anti-rat CD44 PE (25-0441-81, eBioscience), and anti-rat CD62L (104432, Biolegend). 
CD8+ Tem were sorted by flow cytometry (EPICS-XL, Beckman-Coulter, Indianapolis, IN, USA) and then stained for interferon (IFN)-γ (11-7311-81, eBioscience), mTOR (ab87540, Abcam, Cambridge, MA, USA), and S6K (ab32529, Abcam) expression.", "Cytokines and sTREM-1 were quantified using the following ELISA kits as per the manufacturer's instructions; IL-6 (cat#: ab168538, Abcam), IL-10 (cat#: ab176665, Abcam), GM (cat#: 85-86051, Affymetrix Bioscience, San Diego, CA, USA), and sTREM-1 (cat#: SEA213Mu, USCN, Wuhan, China).", "Data were analyzed using SPSS version 18.0 software (SPSS Inc., Armonk, NY, USA). All the data for the continuous variables in this study were proven to have normal distributions and are given as mean ± standard deviations (SD). Results for continuous variables that were not normally distributed are given as medians (interquartile ranges) and were compared using nonparametric tests. Student's t-test or analysis of variance followed by Bonferroni's test were used to determine the statistical significance (P) of differences. Pearson's correlation coefficient was used to analyze the correlation of two parameters. A value of P < 0.05 was considered statistically significant.", " Tissue culture and histology Viable A. fumigatus was cultured from lung tissue of both IPA and CTX + IPA mice treated with rapamycin or IL-12 or without treatment [Figure 1a–1c], while control mice were negative. Histological examination indicated that the lung tissue structure was intact in the normal control [Figure 1d]. In contrast, infiltration of inflammatory cells, hemorrhage, and interstitial lung tissue injury were found in the lungs of the IPA, CTX + IPA, CTX + IPA + IL-12, and CTX + IPA + RAPA groups [Figure 1e–1h]. Compared with the IPA group, Figure 1 suggests that CTX + IPA mice treated with rapamycin or IL-12 or without treatment had greater inflammation and hemorrhage in the interstitial lung tissue.\nHistology of lung tissue stained with H and E, Masson trichrome, and PASM. (a–c) The fungal spores of Aspergillus fumigatus. (d) Control animal. (e) Animal infected with IPA. (f) Immunosuppression plus aspergillosis group (CTX + IPA). (g) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12). (h) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA). Original magnification: (a) Masson trichrome staining, original magnification ×200; (b) Masson trichrome staining, original magnification ×400; (c) PASM staining, original magnification ×600; (d–h) H and E, original magnification ×100. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; PASM: Periodic acid silver methenamine; IL: Interleukin.\nViable A. fumigatus was cultured from lung tissue of both IPA and CTX + IPA mice treated with rapamycin or IL-12 or without treatment [Figure 1a–1c], while control mice were negative. Histological examination indicated that the lung tissue structure was intact in the normal control [Figure 1d]. In contrast, infiltration of inflammatory cells, hemorrhage, and interstitial lung tissue injury were found in the lungs of the IPA, CTX + IPA, CTX + IPA + IL-12, and CTX + IPA + RAPA groups [Figure 1e–1h]. Compared with the IPA group, Figure 1 suggests that CTX + IPA mice treated with rapamycin or IL-12 or without treatment had greater inflammation and hemorrhage in the interstitial lung tissue.\nHistology of lung tissue stained with H and E, Masson trichrome, and PASM. (a–c) The fungal spores of Aspergillus fumigatus. (d) Control animal. 
(e) Animal infected with IPA. (f) Immunosuppression plus aspergillosis group (CTX + IPA). (g) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12). (h) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA). Original magnification: (a) Masson trichrome staining, original magnification ×200; (b) Masson trichrome staining, original magnification ×400; (c) PASM staining, original magnification ×600; (d–h) H and E, original magnification ×100. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; PASM: Periodic acid silver methenamine; IL: Interleukin.\n Regulation of the mammalian target of rapamycin pathway As shown in Figure 2, compared with the control group (0.25 ± 0.04, P = 0.004), IPA group (0.27 ± 0.04, P = 0.01), and CTX + IPA group (0.29 ± 0.04, P = 0.034), mTOR activity was significantly increased in the CTX + IPA + IL-12 group (0.39 ± 0.12). To verify that the induction of mTOR phosphorylation also led to its kinase activity, we monitored the kinetics of S6K, a direct target of mTOR kinase activity. As anticipated, IL-12 stimulation significantly enhanced the expression of S6K in CD8+ Tem (0.13 ± 0.03) compared with the control (0.06 ± 0.04, P < 0.001), IPA (0.07 ± 0.02, P = 0.001), and CTX + IPA (0.09 ± 0.01, P = 0.032) groups.\nIL-12 increases the expression of mammalian target of rapamycin and S6 kinase, which was blocked by rapamycin. The peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected using flow cytometry 7 days after intranasal inoculation of Aspergillus fumigatus. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/− cells represent CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin.\nSimilarly, when we blocked mTOR activity by adding rapamycin during IPA infection, we found that the expression of mTOR was significantly lower in the CTX + IPA + RAPA group (0.19 ± 0.05) compared with the CTX + IPA (0.29 ± 0.04, P = 0.034) and CTX + IPA + IL-12 (0.39 ± 0.12, P < 0.001) groups. The expression of S6K was also significantly lower in the CTX + IPA + RAPA (0.04 ± 0.04) group than in the CTX + IPA (0.09 ± 0.01, P = 0.01) and CTX + IPA + IL-12 (0.13 ± 0.03, P < 0.001) groups.\nAs shown in Figure 2, compared with the control group (0.25 ± 0.04, P = 0.004), IPA group (0.27 ± 0.04, P = 0.01), and CTX + IPA group (0.29 ± 0.04, P = 0.034), mTOR activity was significantly increased in the CTX + IPA + IL-12 group (0.39 ± 0.12). To verify that the induction of mTOR phosphorylation also led to its kinase activity, we monitored the kinetics of S6K, a direct target of mTOR kinase activity. As anticipated, IL-12 stimulation significantly enhanced the expression of S6K in CD8+ Tem (0.13 ± 0.03) compared with the control (0.06 ± 0.04, P < 0.001), IPA (0.07 ± 0.02, P = 0.001), and CTX + IPA (0.09 ± 0.01, P = 0.032) groups.\nIL-12 increases the expression of mammalian target of rapamycin and S6 kinase, which was blocked by rapamycin. The peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected using flow cytometry 7 days after intranasal inoculation of Aspergillus fumigatus. 
Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/− cells represent CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin.\nSimilarly, when we blocked mTOR activity by adding rapamycin during IPA infection, we found that the expression of mTOR was significantly lower in the CTX + IPA + RAPA group (0.19 ± 0.05) compared with the CTX + IPA (0.29 ± 0.04, P = 0.034) and CTX + IPA + IL-12 (0.39 ± 0.12, P < 0.001) groups. The expression of S6K was also significantly lower in the CTX + IPA + RAPA (0.04 ± 0.04) group than in the CTX + IPA (0.09 ± 0.01, P = 0.01) and CTX + IPA + IL-12 (0.13 ± 0.03, P < 0.001) groups.\n Mammalian target of rapamycin modulates CD8+ T-cell differentiation and expression of triggering receptor expressed on myeloid cell-1 As shown in Figure 3, the proportion of CD8+ Tem, IFN-γ production, and serum sTREM-1 expression were significantly increased in the CTX + IPA + IL-12 group (0.47 ± 0.06, 0.09 ± 0.03, and 1876.91 ± 247.80, respectively) compared with the CTX + IPA group (0.35 ± 0.10, P = 0.024; 0.05 ± 0.04, P = 0.032; and 1537.64 ± 359.52, P = 0.017, respectively). The result indicates that the addition of IL-12 could improve CD8+ Tem differentiation, IFN-γ production, and resulted in robust serum sTREM-1 expression during IPA infection by activating the mTOR pathway.\nThe proportion of CD8+ effector memory T-cells, expression of interferon-γ, and sTREM-1 levels were significantly increased by IL-12 stimulation but significantly decreased by rapamycin treatment. The peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected using flow cytometry 7 days after Aspergillus fumigatus intranasal inoculation. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/- cells represented CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin; sTREM-1: Soluble triggering receptors expressed on myeloid cell-1.\nMore importantly, we also found that adding rapamycin could significantly decrease the proportion of CD8+ Tem (0.22 ± 0.05), IFN-γ production (0.02 ± 0.01), and serum TREM-1 expression (1214.45 ± 293.24) compared with the CTX + IPA (0.35 ± 0.10, P = 0.017; 0.05 ± 0.04, P = 0.039; and 1537.64 ± 359.52, P = 0.022, respectively) and CTX + IPA + IL-12 (0.47 ± 0.06, P < 0.001; 0.09 ± 0.03, P < 0.001; and 1876.91 ± 247.80, P < 0.001, respectively) groups. These results indicate that sustained mTOR activity is essential for CD8+ Tem differentiation, Type I effector functions, and serum sTREM-1 expression.\nAs shown in Figure 3, the proportion of CD8+ Tem, IFN-γ production, and serum sTREM-1 expression were significantly increased in the CTX + IPA + IL-12 group (0.47 ± 0.06, 0.09 ± 0.03, and 1876.91 ± 247.80, respectively) compared with the CTX + IPA group (0.35 ± 0.10, P = 0.024; 0.05 ± 0.04, P = 0.032; and 1537.64 ± 359.52, P = 0.017, respectively). 
\n Alteration of inflammatory responses and severity of fungal infection IL-6 levels reflect the inflammatory response, and IL-10 levels reflect the anti-inflammatory response. Figure 4 shows that the IL-12-treated group had the highest IL-6 level (2053.91 ± 402.37 pg/ml), followed by the CTX + IPA (1426.57 ± 488.03 pg/ml, P = 0.036), CTX + IPA + RAPA (822.74 ± 211.30 pg/ml, P < 0.001), IPA (811.58 ± 247.03 pg/ml, P < 0.001), and control (245.65 ± 85.78 pg/ml, P < 0.001) groups. In contrast, the concentration of IL-10 showed the reverse trend. The IL-10 levels of the IL-12-treated group (314.94 ± 36.97 pg/ml), the control group (267.60 ± 46.10 pg/ml), and the IPA group (350.93 ± 43.11 pg/ml) were significantly lower than those of the CTX + IPA group (437.95 ± 53.41 pg/ml, P < 0.001) and especially the CTX + IPA + RAPA group (542.44 ± 37.32 pg/ml, P < 0.001).\nIL-12 significantly increased the IL-6 level, an effect that was significantly decreased by rapamycin treatment. In contrast, the concentrations of IL-10 and galactomannan showed the reverse trend. Plasma samples obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were assayed by ELISA 7 days after Aspergillus fumigatus intranasal inoculation. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin.\nGalactomannan (GM) is used to assess the severity of fungal infection in clinical practice. In this study, we found that, after A. fumigatus inoculation, the level of GM was significantly increased in the IPA (1985.98 ± 152.79 pg/ml, P < 0.001), CTX + IPA (2251.06 ± 230.50 pg/ml, P < 0.001), CTX + IPA + IL-12 (1920.15 ± 86.16 pg/ml, P < 0.001), and CTX + IPA + RAPA (2496.87 ± 278.05 pg/ml, P < 0.001) groups compared with the control group (126.82 ± 10.31 pg/ml).
However, as shown in Figure 4, treatment with IL-12 significantly decreased the level of GM compared with the CTX + IPA (P = 0.002) and CTX + IPA + RAPA (P < 0.001) groups.", "Viable A. fumigatus was cultured from the lung tissue of both IPA and CTX + IPA mice, whether treated with rapamycin or IL-12 or left untreated [Figure 1a–1c], while cultures from control mice were negative. Histological examination indicated that the lung tissue structure was intact in the normal control [Figure 1d]. In contrast, infiltration of inflammatory cells, hemorrhage, and interstitial lung tissue injury were found in the lungs of the IPA, CTX + IPA, CTX + IPA + IL-12, and CTX + IPA + RAPA groups [Figure 1e–1h]. As Figure 1 suggests, CTX + IPA mice, whether treated with rapamycin or IL-12 or left untreated, had greater inflammation and hemorrhage in the interstitial lung tissue than the IPA group.\nHistology of lung tissue stained with H and E, Masson trichrome, and PASM. (a–c) The fungal spores of Aspergillus fumigatus. (d) Control animal. (e) Animal infected with IPA. (f) Immunosuppression plus aspergillosis group (CTX + IPA). (g) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12). (h) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA). Original magnification: (a) Masson trichrome staining, ×200; (b) Masson trichrome staining, ×400; (c) PASM staining, ×600; (d–h) H and E, ×100. 
CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; PASM: Periodic acid silver methenamine; IL: Interleukin."
, "In the current study, we demonstrated that IL-12 increased the number and effector response (IFN-γ release) of CD8+ Tem and the level of sTREM-1 during IPA infection in CTX-induced immunocompromised animals, through increased expression and activity of the mTOR signal transduction pathway. Furthermore, the inflammatory cytokine IL-6 was significantly increased and the anti-inflammatory cytokine IL-10 significantly decreased, changes that markedly affected the fungal burden of the host. These effects could be blocked by the mTOR inhibitor rapamycin.\nCD8+ T-cells play a vital role in the adaptive immune system. After infection, some CD8+ T-cells display a memory phenotype and can survive long term.
CD8+ effector memory T-cells (CD8+ Tem), memory T-cells with an effector phenotype, establish a rapid immune response when the host encounters the same pathogen again, which is crucial for patients at high risk of repeated infections, such as immunosuppressed patients.[5,20] The mTOR signaling pathway senses the cellular metabolic state, extracellular nutrient availability, and the presence of growth factors/cytokines, and controls key cellular processes, including apoptosis/autophagy, proliferation, and cell growth, that govern cell fate. Studies indicate that mTOR contributes to the proliferation and differentiation of T-cells in response to antigen stimulation.[7,21] mTOR inhibition by rapamycin induces CD4+ T-cell anergy and/or differentiation into FoxP3+ regulatory T-cells and attenuates mTOR kinase signaling in regulating CD8+ T-cell trafficking.[22,23] Consistent with these results, the current study demonstrated that the presence of IL-12 augments mTOR activity and influences CD8+ Tem proliferation and Type I effector maturation following immunosuppression by CTX and A. fumigatus infection. Effector functions induced by IL-12 treatment were significantly attenuated by adding the mTOR inhibitor rapamycin, confirming that sustained mTOR kinase activity is required for IL-12-programmed proliferation and Type I effector function in CD8+ Tem in response to a fungal infection.\nTREM-1 is largely known as an activating receptor on neutrophils and monocytes, playing an important role in amplification of the inflammatory response. Studies have demonstrated that activation of TREM-1 on monocytes drives robust production of pro-inflammatory chemokines such as monocyte chemoattractant protein 1 and IL-8. Engagement of TREM-1 in combination with microbial ligands that activate toll-like receptors also synergistically increases production of the pro-inflammatory cytokines TNF-α and IL-1β.[24,25] Moreover, in primary human monocytes, TREM-1 activation was found to be upregulated in the context of host defense against bacterial and fungal infections. These cells had an improved ability to elicit T-cell proliferation and production of IFN-γ. The balance between the transcription factors T-bet and eomesodermin has been shown to determine effector and memory cell differentiation in CD8+ T-cells. Although our previous studies found that both sTREM-1 levels and mTOR activity were significantly correlated with T-bet and eomesodermin levels in immunocompromised animals with IPA, little is known regarding the specific involvement of sTREM-1 in mTOR modulation of CD8+ T-cell differentiation and Type I effector function.[8,19] Consistent with previous studies, the findings of the current study demonstrated that, like the number and effector response of CD8+ Tem, the level of serum sTREM-1 expression was significantly increased after IL-12 treatment and significantly decreased after adding rapamycin. These findings suggest that the mTOR signaling pathway is the central regulator of the transcriptional programs that determine effector and/or memory cell differentiation in CD8+ T-cells through the transcription factors T-bet and eomesodermin. Thus, sTREM-1 might be the molecular link between mTOR modulation and immune cell differentiation in immunocompromised mice with a fungal infection. However, elucidation of a clear mechanism requires further study.\nThe current study also elucidated the final immune effects of the mTOR signaling pathway. 
We found that IL-12 significantly increased the plasma pro-inflammatory cytokine IL-6, decreased the anti-inflammatory cytokine IL-10, and reduced the fungal load (GM). The mTOR inhibitor rapamycin had completely opposite effects. These results indicate that the mTOR signaling pathway plays a vital role in CD8+ T-cell differentiation and function, resulting in changes in the immune response and fungal load in infected, immunosuppressed hosts.\nTo summarize, mTOR could modulate CD8+ T-cell differentiation during immune responses to IPA. sTREM-1 may play a vital role in signal transduction between mTOR and the immune response. Whether this mechanism could serve as a target of antifungal immunotherapy requires further study.\n Financial support and sponsorship The work was supported by the Beijing Municipal Natural Science Foundation (No. 7152119) and the Chinese National Natural Science Foundation for Young Scholars (No. 81601657).\n Conflicts of interest There are no conflicts of interest." ]
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, null, "discussion", null, "COI-statement" ]
[ "CD8+ T Effector Memory Cells", "Immunosuppression", "Invasive Pulmonary Aspergillosis", "Mammalian Target of Rapamycin", "Triggering Receptor Expressed on Myeloid Cell-1" ]
Introduction: Invasive pulmonary aspergillosis (IPA) has become a leading cause of severe fungal infections in critically ill patients and has a high mortality rate, especially in patients with immune dysfunction, such as those undergoing immunosuppressive or high-dose glucocorticoid therapy and solid organ or hematopoietic stem cell transplantation patients.[1] The acquired immune response is mediated by T-lymphocytes, which determine whether the host can effectively clear the pathogen. CD8+ T-cells are a vital component of the acquired immune system.[2,3] Effector-phenotype CD8+ memory T-cells (Tem) can trigger a rapid, potent, and effective immune response to specific antigens in the early stages of infection. Investigating the mechanism of differentiation of CD8+ T-cells in the early stages of infection could provide new insights into effectively controlling infection in IPA patients with immune dysfunction.[4,5] The mammalian target of rapamycin (mTOR) signaling pathway regulates cell metabolism, survival, growth, and proliferation. The mTOR downstream signaling pathway contains two key substrates, ribosomal S6 kinase (S6K) and eukaryotic translation initiation factor 4E binding protein 1 (4E-BP1). When activated, mTOR phosphorylates S6K and 4E-BP1 to promote protein translation and is sensitive to rapamycin inhibition.[6] Recent studies have shown that mTOR may regulate T-cell differentiation and adaptive immunity following the binding of the T-cell receptor to antigen.[7] In preliminary experiments, we found that mTOR modulates lymphocyte differentiation in immunosuppressed animals with fungal infections by regulating the transcription factors T-box expressed in T-cells (T-bet) and eomesodermin.[8] T-bet is a key transcription factor regulating T-cell differentiation and belongs to the T-box transcription factor family.[9] Eomesodermin plays a synergistic role in T-bet function and regulates the differentiation of CD8+ T-cells into effector and memory T-cells. A recent study suggests that interleukin (IL)-12 can affect memory/effector CD8+ T-cell phenotype differentiation by regulating T-bet and eomesodermin.[10] IL-12 can also enhance the activity of mTOR kinase in naïve CD8+ T-cells.[11] In contrast, rapamycin inhibits the activity of mTOR kinase in CD8+ T-cells, thereby blocking the expression of T-bet and promoting the expression of eomesodermin. Increased expression of eomesodermin in CD8+ T-cells may promote the generation of memory T-cells, while high expression of T-bet could promote effector CD8+ T-cell proliferation.[12] Triggering receptors expressed on myeloid cells (TREMs) were identified as a new family of receptors that regulate both innate and adaptive immune responses to infection.[13,14] TREM-1 is mainly expressed on neutrophils, monocytes, and macrophages. High TREM-1 levels are indicative of acute or chronic conditions caused by fungi or bacteria.[14,15] The TREM-1 extracellular domain can also be found as soluble TREM-1 (sTREM-1). It is proposed that sTREM-1 may arise directly from a splice variant or following the shedding of membrane-bound TREM-1 by metalloproteinase-mediated proteolytic cleavage.[16] sTREM-1 has proven to be a valuable diagnostic and prognostic marker since it is easily detected with immunochemical assays in biological fluid samples.
Previous studies have found that sTREM-1 levels are elevated in plasma and bronchoalveolar lavage fluid from patients with bacterial or fungal pneumonia.[17,18] In preliminary studies, we found that plasma sTREM-1 is closely correlated with T-bet and eomesodermin levels in immunocompromised Aspergillus-infected rats, suggesting that TREM-1 may be involved in lymphocyte regulation and differentiation during fungal infection.[19] Based on the theories and experimental results described above, we hypothesized that TREM-1 may play a vital role in mTOR modulation of CD8+ T-cell differentiation through the transcription factors T-bet and eomesodermin during the immune response to IPA. Methods: Pathogen preparation The strain of Aspergillus fumigatus, provided by the Department of Clinical Laboratory, Peking Union Medical College Hospital (PUMCH), was obtained from a case of pulmonary aspergillosis. Viable conidia (>95%) were obtained by growth on Sabouraud dextrose agar for 5 days at 35°C. Conidia were harvested in 10 ml 0.1% Tween-80/PBS and filtered through five layers of gauze. The concentration of conidia was adjusted to 1 × 10⁸ CFU/ml by the turbidity adjustment method. Animal model preparation Healthy 4-week-old female BALB/c mice, weighing 20 ± 5 g, were obtained from the Animal Facility Center, PUMCH. All animals were housed in a pathogen-free facility and treated according to protocols approved by the Institutional Animal Care and Use Committee of PUMCH. Thirty mice were randomly divided into the following groups (six mice per group): (1) control group: nontreated mice. (2) IPA group: animals were inoculated intranasally with 0.1 ml of A. fumigatus conidia solution. (3) Immunosuppression plus aspergillosis group (cyclophosphamide [CTX] + IPA): CTX (Jiangsu Hengrui Medicine Co., Ltd., Jiangsu, China) was injected intraperitoneally at a dose of 200 mg·kg−1·d−1 for 5 consecutive days, after which animals were infected with A. fumigatus as described above. (4) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA): mice were given 2 mg·kg−1·d−1 rapamycin for 7 consecutive days after intraperitoneal injection of CTX and A. fumigatus infection. (5) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12): mice were given 5 μg/kg IL-12 every other day for the 7 days following intraperitoneal injection of CTX and A. fumigatus infection. Blood samples were obtained by retro-orbital bleeding. Part of the lung tissue was minced and used for A. fumigatus culture; the rest was fixed with 4% formaldehyde, and paraffin-embedded tissue sections were stained with hematoxylin and eosin (H and E), Masson trichrome, and periodic acid-silver methenamine.
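For orientation, the per-animal amounts implied by these regimens are simple arithmetic on body weight. A small Python sketch follows; it is illustrative only, not a dosing protocol, and the nominal 20 g weight is an assumption within the stated 20 ± 5 g range.

```python
# Per-animal dose and inoculum arithmetic for the regimens in the Methods.
# Body weights vary (20 +/- 5 g), so amounts are computed per mouse.

def dose_mg(body_weight_g: float, dose_mg_per_kg: float) -> float:
    """Convert a mg/kg dose to the absolute amount for one mouse."""
    return dose_mg_per_kg * body_weight_g / 1000.0

weight_g = 20.0  # assumed nominal body weight
print(f"CTX 200 mg/kg     -> {dose_mg(weight_g, 200):.1f} mg/day")
print(f"Rapamycin 2 mg/kg -> {dose_mg(weight_g, 2):.3f} mg/day")
print(f"IL-12 5 ug/kg     -> {dose_mg(weight_g, 0.005) * 1000:.2f} ug every other day")

# Inoculum: 0.1 ml of a 1 x 10^8 CFU/ml conidia suspension per mouse.
cfu = 1e8 * 0.1
print(f"Intranasal inoculum: {cfu:.0e} CFU per mouse")  # 1e+07 CFU
```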
CD8+ effector memory T-cell counts and interferon-γ, mammalian target of rapamycin, and S6 kinase expression Peripheral blood mononuclear cells were isolated from blood samples and counted by flow cytometry. Cells were then labeled with the following fluorescence-conjugated monoclonal antibodies: anti-rat CD45 PE (12-0451-81, eBioscience, San Diego, CA, USA), anti-rat CD8a APC (17-0081-81, eBioscience), anti-rat CD44 PE (25-0441-81, eBioscience), and anti-rat CD62L (104432, Biolegend). CD8+ Tem were sorted by flow cytometry (EPICS-XL, Beckman-Coulter, Indianapolis, IN, USA) and then stained for interferon (IFN)-γ (11-7311-81, eBioscience), mTOR (ab87540, Abcam, Cambridge, MA, USA), and S6K (ab32529, Abcam) expression. Cytokine and soluble triggering receptor expressed on myeloid cells-1 quantification Cytokines and sTREM-1 were quantified using the following ELISA kits according to the manufacturers' instructions: IL-6 (cat#: ab168538, Abcam), IL-10 (cat#: ab176665, Abcam), GM (cat#: 85-86051, Affymetrix Bioscience, San Diego, CA, USA), and sTREM-1 (cat#: SEA213Mu, USCN, Wuhan, China).
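ELISA kits report optical densities that are converted to concentrations (pg/ml) via a standard curve; a four-parameter logistic (4PL) fit is the usual model for such sandwich assays, though the kit manuals are authoritative for each specific assay. A minimal sketch with made-up standards follows; the standard concentrations and ODs are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic (4PL) standard curve, the usual model for
# sandwich ELISAs like the IL-6/IL-10/GM/sTREM-1 kits used here.
def four_pl(x, a, b, c, d):
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([31.25, 62.5, 125, 250, 500, 1000, 2000])  # pg/ml (made up)
std_od   = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.60, 2.30])  # made up

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 1.0, 500.0, 3.0], maxfev=10_000)
a, b, c, d = params

def od_to_conc(od):
    """Invert the fitted 4PL to read a sample concentration off the curve."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(f"Sample OD 0.70 -> {od_to_conc(0.70):.0f} pg/ml")
```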
Statistical analysis Data were analyzed using SPSS version 18.0 software (SPSS Inc., Armonk, NY, USA). All data for the continuous variables in this study were confirmed to have normal distributions and are given as mean ± standard deviation (SD). Results for continuous variables that were not normally distributed are given as medians (interquartile ranges) and were compared using nonparametric tests. Student's t-test or analysis of variance followed by Bonferroni's test was used to determine the statistical significance (P) of differences. Pearson's correlation coefficient was used to analyze the correlation between two parameters. A value of P < 0.05 was considered statistically significant.
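A minimal sketch of the comparison pipeline just described (one-way ANOVA followed by Bonferroni-corrected pairwise t-tests) is given below. The data are simulated from the reported group means and SDs with n = 6 per group, so the resulting P values are illustrative and will not match the published ones exactly.

```python
import itertools
import numpy as np
from scipy import stats

# Simulated placeholder data: n = 6 per group, drawn from the reported
# mTOR expression means/SDs. Real analyses would use the measured values.
rng = np.random.default_rng(1)
groups = {
    "control":           rng.normal(0.25, 0.04, 6),
    "IPA":               rng.normal(0.27, 0.04, 6),
    "CTX + IPA":         rng.normal(0.29, 0.04, 6),
    "CTX + IPA + IL-12": rng.normal(0.39, 0.12, 6),
    "CTX + IPA + RAPA":  rng.normal(0.19, 0.05, 6),
}

# One-way ANOVA across the five groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# Pairwise t-tests with Bonferroni adjustment (multiply P by the number
# of comparisons, capped at 1).
pairs = list(itertools.combinations(groups, 2))
for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    p_adj = min(p * len(pairs), 1.0)
    print(f"{g1} vs {g2}: adjusted P = {p_adj:.4f}")
```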
Cells were then labeled with the following fluorescence-conjugated monoclonal antibodies: anti-rat CD45 PE (12-0451-81, eBioscience, San Diego, CA, USA), anti-rat CD8a APC (17-0081-81, eBioscience), anti-rat CD44 PE (25-0441-81, eBioscience), and anti-rat CD62L (104432, Biolegend). CD8+ Tem were sorted by flow cytometry (EPICS-XL, Beckman-Coulter, Indianapolis, IN, USA) and then stained for interferon (IFN)-γ (11-7311-81, eBioscience), mTOR (ab87540, Abcam, Cambridge, MA, USA), and S6K (ab32529, Abcam) expression. Cytokine and soluble triggering receptors expressed on myeloid cell-1 quantification: Cytokines and sTREM-1 were quantified using the following ELISA kits as per the manufacturer's instructions; IL-6 (cat#: ab168538, Abcam), IL-10 (cat#: ab176665, Abcam), GM (cat#: 85-86051, Affymetrix Bioscience, San Diego, CA, USA), and sTREM-1 (cat#: SEA213Mu, USCN, Wuhan, China). Statistical analysis: Data were analyzed using SPSS version 18.0 software (SPSS Inc., Armonk, NY, USA). All the data for the continuous variables in this study were proven to have normal distributions and are given as mean ± standard deviations (SD). Results for continuous variables that were not normally distributed are given as medians (interquartile ranges) and were compared using nonparametric tests. Student's t-test or analysis of variance followed by Bonferroni's test were used to determine the statistical significance (P) of differences. Pearson's correlation coefficient was used to analyze the correlation of two parameters. A value of P < 0.05 was considered statistically significant. Results: Tissue culture and histology Viable A. fumigatus was cultured from lung tissue of both IPA and CTX + IPA mice treated with rapamycin or IL-12 or without treatment [Figure 1a–1c], while control mice were negative. Histological examination indicated that the lung tissue structure was intact in the normal control [Figure 1d]. In contrast, infiltration of inflammatory cells, hemorrhage, and interstitial lung tissue injury were found in the lungs of the IPA, CTX + IPA, CTX + IPA + IL-12, and CTX + IPA + RAPA groups [Figure 1e–1h]. Compared with the IPA group, Figure 1 suggests that CTX + IPA mice treated with rapamycin or IL-12 or without treatment had greater inflammation and hemorrhage in the interstitial lung tissue. Histology of lung tissue stained with H and E, Masson trichrome, and PASM. (a–c) The fungal spores of Aspergillus fumigatus. (d) Control animal. (e) Animal infected with IPA. (f) Immunosuppression plus aspergillosis group (CTX + IPA). (g) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12). (h) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA). Original magnification: (a) Masson trichrome staining, original magnification ×200; (b) Masson trichrome staining, original magnification ×400; (c) PASM staining, original magnification ×600; (d–h) H and E, original magnification ×100. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; PASM: Periodic acid silver methenamine; IL: Interleukin. Viable A. fumigatus was cultured from lung tissue of both IPA and CTX + IPA mice treated with rapamycin or IL-12 or without treatment [Figure 1a–1c], while control mice were negative. Histological examination indicated that the lung tissue structure was intact in the normal control [Figure 1d]. 
In contrast, infiltration of inflammatory cells, hemorrhage, and interstitial lung tissue injury were found in the lungs of the IPA, CTX + IPA, CTX + IPA + IL-12, and CTX + IPA + RAPA groups [Figure 1e–1h]. Compared with the IPA group, Figure 1 suggests that CTX + IPA mice treated with rapamycin or IL-12 or without treatment had greater inflammation and hemorrhage in the interstitial lung tissue. Histology of lung tissue stained with H and E, Masson trichrome, and PASM. (a–c) The fungal spores of Aspergillus fumigatus. (d) Control animal. (e) Animal infected with IPA. (f) Immunosuppression plus aspergillosis group (CTX + IPA). (g) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12). (h) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA). Original magnification: (a) Masson trichrome staining, original magnification ×200; (b) Masson trichrome staining, original magnification ×400; (c) PASM staining, original magnification ×600; (d–h) H and E, original magnification ×100. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; PASM: Periodic acid silver methenamine; IL: Interleukin. Regulation of the mammalian target of rapamycin pathway As shown in Figure 2, compared with the control group (0.25 ± 0.04, P = 0.004), IPA group (0.27 ± 0.04, P = 0.01), and CTX + IPA group (0.29 ± 0.04, P = 0.034), mTOR activity was significantly increased in the CTX + IPA + IL-12 group (0.39 ± 0.12). To verify that the induction of mTOR phosphorylation also led to its kinase activity, we monitored the kinetics of S6K, a direct target of mTOR kinase activity. As anticipated, IL-12 stimulation significantly enhanced the expression of S6K in CD8+ Tem (0.13 ± 0.03) compared with the control (0.06 ± 0.04, P < 0.001), IPA (0.07 ± 0.02, P = 0.001), and CTX + IPA (0.09 ± 0.01, P = 0.032) groups. IL-12 increases the expression of mammalian target of rapamycin and S6 kinase, which was blocked by rapamycin. The peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected using flow cytometry 7 days after intranasal inoculation of Aspergillus fumigatus. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/− cells represent CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin. Similarly, when we blocked mTOR activity by adding rapamycin during IPA infection, we found that the expression of mTOR was significantly lower in the CTX + IPA + RAPA group (0.19 ± 0.05) compared with the CTX + IPA (0.29 ± 0.04, P = 0.034) and CTX + IPA + IL-12 (0.39 ± 0.12, P < 0.001) groups. The expression of S6K was also significantly lower in the CTX + IPA + RAPA (0.04 ± 0.04) group than in the CTX + IPA (0.09 ± 0.01, P = 0.01) and CTX + IPA + IL-12 (0.13 ± 0.03, P < 0.001) groups. As shown in Figure 2, compared with the control group (0.25 ± 0.04, P = 0.004), IPA group (0.27 ± 0.04, P = 0.01), and CTX + IPA group (0.29 ± 0.04, P = 0.034), mTOR activity was significantly increased in the CTX + IPA + IL-12 group (0.39 ± 0.12). To verify that the induction of mTOR phosphorylation also led to its kinase activity, we monitored the kinetics of S6K, a direct target of mTOR kinase activity. 
As anticipated, IL-12 stimulation significantly enhanced the expression of S6K in CD8+ Tem (0.13 ± 0.03) compared with the control (0.06 ± 0.04, P < 0.001), IPA (0.07 ± 0.02, P = 0.001), and CTX + IPA (0.09 ± 0.01, P = 0.032) groups. IL-12 increases the expression of mammalian target of rapamycin and S6 kinase, which was blocked by rapamycin. The peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected using flow cytometry 7 days after intranasal inoculation of Aspergillus fumigatus. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/− cells represent CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin. Similarly, when we blocked mTOR activity by adding rapamycin during IPA infection, we found that the expression of mTOR was significantly lower in the CTX + IPA + RAPA group (0.19 ± 0.05) compared with the CTX + IPA (0.29 ± 0.04, P = 0.034) and CTX + IPA + IL-12 (0.39 ± 0.12, P < 0.001) groups. The expression of S6K was also significantly lower in the CTX + IPA + RAPA (0.04 ± 0.04) group than in the CTX + IPA (0.09 ± 0.01, P = 0.01) and CTX + IPA + IL-12 (0.13 ± 0.03, P < 0.001) groups. Mammalian target of rapamycin modulates CD8+ T-cell differentiation and expression of triggering receptor expressed on myeloid cell-1 As shown in Figure 3, the proportion of CD8+ Tem, IFN-γ production, and serum sTREM-1 expression were significantly increased in the CTX + IPA + IL-12 group (0.47 ± 0.06, 0.09 ± 0.03, and 1876.91 ± 247.80, respectively) compared with the CTX + IPA group (0.35 ± 0.10, P = 0.024; 0.05 ± 0.04, P = 0.032; and 1537.64 ± 359.52, P = 0.017, respectively). The result indicates that the addition of IL-12 could improve CD8+ Tem differentiation, IFN-γ production, and resulted in robust serum sTREM-1 expression during IPA infection by activating the mTOR pathway. The proportion of CD8+ effector memory T-cells, expression of interferon-γ, and sTREM-1 levels were significantly increased by IL-12 stimulation but significantly decreased by rapamycin treatment. The peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected using flow cytometry 7 days after Aspergillus fumigatus intranasal inoculation. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/- cells represented CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin; sTREM-1: Soluble triggering receptors expressed on myeloid cell-1. More importantly, we also found that adding rapamycin could significantly decrease the proportion of CD8+ Tem (0.22 ± 0.05), IFN-γ production (0.02 ± 0.01), and serum TREM-1 expression (1214.45 ± 293.24) compared with the CTX + IPA (0.35 ± 0.10, P = 0.017; 0.05 ± 0.04, P = 0.039; and 1537.64 ± 359.52, P = 0.022, respectively) and CTX + IPA + IL-12 (0.47 ± 0.06, P < 0.001; 0.09 ± 0.03, P < 0.001; and 1876.91 ± 247.80, P < 0.001, respectively) groups. These results indicate that sustained mTOR activity is essential for CD8+ Tem differentiation, Type I effector functions, and serum sTREM-1 expression. 
As shown in Figure 3, the proportion of CD8+ Tem, IFN-γ production, and serum sTREM-1 expression were significantly increased in the CTX + IPA + IL-12 group (0.47 ± 0.06, 0.09 ± 0.03, and 1876.91 ± 247.80, respectively) compared with the CTX + IPA group (0.35 ± 0.10, P = 0.024; 0.05 ± 0.04, P = 0.032; and 1537.64 ± 359.52, P = 0.017, respectively). The result indicates that the addition of IL-12 could improve CD8+ Tem differentiation, IFN-γ production, and resulted in robust serum sTREM-1 expression during IPA infection by activating the mTOR pathway. The proportion of CD8+ effector memory T-cells, expression of interferon-γ, and sTREM-1 levels were significantly increased by IL-12 stimulation but significantly decreased by rapamycin treatment. The peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected using flow cytometry 7 days after Aspergillus fumigatus intranasal inoculation. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/- cells represented CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin; sTREM-1: Soluble triggering receptors expressed on myeloid cell-1. More importantly, we also found that adding rapamycin could significantly decrease the proportion of CD8+ Tem (0.22 ± 0.05), IFN-γ production (0.02 ± 0.01), and serum TREM-1 expression (1214.45 ± 293.24) compared with the CTX + IPA (0.35 ± 0.10, P = 0.017; 0.05 ± 0.04, P = 0.039; and 1537.64 ± 359.52, P = 0.022, respectively) and CTX + IPA + IL-12 (0.47 ± 0.06, P < 0.001; 0.09 ± 0.03, P < 0.001; and 1876.91 ± 247.80, P < 0.001, respectively) groups. These results indicate that sustained mTOR activity is essential for CD8+ Tem differentiation, Type I effector functions, and serum sTREM-1 expression. Alteration of inflammatory responses and severity of fungal infection IL-6 levels reflect the inflammatory response, and IL-10 levels reflect anti-inflammatory responses. Figure 4 shows that the IL-12-treated group had the highest IL-6 levels (2053.91 ± 402.37 pg/ml), followed by the CTX + IPA (1426.57 ± 488.03 pg/ml, P = 0.036), CTX + IPA + RAPA (822.74 ± 211.30 pg/ml, P < 0.001), IPA (811.58 ± 247.03 pg/ml, P < 0.001), and control (245.65 ± 85.78 pg/ml, P < 0.001) groups. In contrast, the concentration of IL-10 showed the reverse trend. The IL-10 level of the IL-12-treated group (314.94 ± 36.97 pg/ml), as well as the control (267.60 ± 46.10 pg/ml) and IPA (350.93 ± 43.11 pg/ml) groups, were significantly lower than the CTX + IPA group (437.95 ± 53.41 pg/ml, P < 0.001) and especially the CTX + IPA + RAPA group (542.44 ± 37.32 pg/ml, P < 0.001). IL-12 significantly increased the IL-6 level, which was significantly decreased by rapamycin treatment. In contrast, the concentration of IL-10 and galactomannan showed the reverse trend. Plasma samples obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected by ELISA 7 days after Aspergillus fumigatus intranasal inoculation. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin. Galactomannan (GM) is used to demonstrate the severity of fungal infection in clinical practice. In this study, we found that, after A. 
fumigates inoculation, the level of GM was significantly increased in the IPA (1985.98 ± 152.79 pg/ml, P < 0.001), CTX + IPA (2251.06 ± 230.50 pg/ml, P < 0.001), CTX + IPA + IL-12 (1920.15 ± 86.16 pg/ml, P < 0.001), and CTX + IPA + RAPA (2496.87 ± 278.05 pg/ml, P < 0.001) groups compared with the control group (126.82 ± 10.31 pg/ml). However, as shown in Figure 4, treatment with IL-12 significantly decreased the level of GM compared with the CTX + IPA (P = 0.002) and CTX + IPA + RAPA (P < 0.001) groups. IL-6 levels reflect the inflammatory response, and IL-10 levels reflect anti-inflammatory responses. Figure 4 shows that the IL-12-treated group had the highest IL-6 levels (2053.91 ± 402.37 pg/ml), followed by the CTX + IPA (1426.57 ± 488.03 pg/ml, P = 0.036), CTX + IPA + RAPA (822.74 ± 211.30 pg/ml, P < 0.001), IPA (811.58 ± 247.03 pg/ml, P < 0.001), and control (245.65 ± 85.78 pg/ml, P < 0.001) groups. In contrast, the concentration of IL-10 showed the reverse trend. The IL-10 level of the IL-12-treated group (314.94 ± 36.97 pg/ml), as well as the control (267.60 ± 46.10 pg/ml) and IPA (350.93 ± 43.11 pg/ml) groups, were significantly lower than the CTX + IPA group (437.95 ± 53.41 pg/ml, P < 0.001) and especially the CTX + IPA + RAPA group (542.44 ± 37.32 pg/ml, P < 0.001). IL-12 significantly increased the IL-6 level, which was significantly decreased by rapamycin treatment. In contrast, the concentration of IL-10 and galactomannan showed the reverse trend. Plasma samples obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected by ELISA 7 days after Aspergillus fumigatus intranasal inoculation. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin. Galactomannan (GM) is used to demonstrate the severity of fungal infection in clinical practice. In this study, we found that, after A. fumigates inoculation, the level of GM was significantly increased in the IPA (1985.98 ± 152.79 pg/ml, P < 0.001), CTX + IPA (2251.06 ± 230.50 pg/ml, P < 0.001), CTX + IPA + IL-12 (1920.15 ± 86.16 pg/ml, P < 0.001), and CTX + IPA + RAPA (2496.87 ± 278.05 pg/ml, P < 0.001) groups compared with the control group (126.82 ± 10.31 pg/ml). However, as shown in Figure 4, treatment with IL-12 significantly decreased the level of GM compared with the CTX + IPA (P = 0.002) and CTX + IPA + RAPA (P < 0.001) groups. Tissue culture and histology: Viable A. fumigatus was cultured from lung tissue of both IPA and CTX + IPA mice treated with rapamycin or IL-12 or without treatment [Figure 1a–1c], while control mice were negative. Histological examination indicated that the lung tissue structure was intact in the normal control [Figure 1d]. In contrast, infiltration of inflammatory cells, hemorrhage, and interstitial lung tissue injury were found in the lungs of the IPA, CTX + IPA, CTX + IPA + IL-12, and CTX + IPA + RAPA groups [Figure 1e–1h]. Compared with the IPA group, Figure 1 suggests that CTX + IPA mice treated with rapamycin or IL-12 or without treatment had greater inflammation and hemorrhage in the interstitial lung tissue. Histology of lung tissue stained with H and E, Masson trichrome, and PASM. (a–c) The fungal spores of Aspergillus fumigatus. (d) Control animal. (e) Animal infected with IPA. (f) Immunosuppression plus aspergillosis group (CTX + IPA). (g) Immunosuppression plus IPA plus IL-12 treatment group (CTX + IPA + IL-12). 
(h) Immunosuppression plus aspergillosis plus rapamycin treatment group (CTX + IPA + RAPA). Original magnification: (a) Masson trichrome staining, original magnification ×200; (b) Masson trichrome staining, original magnification ×400; (c) PASM staining, original magnification ×600; (d–h) H and E, original magnification ×100. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; PASM: Periodic acid silver methenamine; IL: Interleukin. Regulation of the mammalian target of rapamycin pathway: As shown in Figure 2, compared with the control group (0.25 ± 0.04, P = 0.004), IPA group (0.27 ± 0.04, P = 0.01), and CTX + IPA group (0.29 ± 0.04, P = 0.034), mTOR activity was significantly increased in the CTX + IPA + IL-12 group (0.39 ± 0.12). To verify that the induction of mTOR phosphorylation also led to its kinase activity, we monitored the kinetics of S6K, a direct target of mTOR kinase activity. As anticipated, IL-12 stimulation significantly enhanced the expression of S6K in CD8+ Tem (0.13 ± 0.03) compared with the control (0.06 ± 0.04, P < 0.001), IPA (0.07 ± 0.02, P = 0.001), and CTX + IPA (0.09 ± 0.01, P = 0.032) groups. IL-12 increases the expression of mammalian target of rapamycin and S6 kinase, which was blocked by rapamycin. The peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected using flow cytometry 7 days after intranasal inoculation of Aspergillus fumigatus. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/− cells represent CD8+ effector memory T-cells. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin. Similarly, when we blocked mTOR activity by adding rapamycin during IPA infection, we found that the expression of mTOR was significantly lower in the CTX + IPA + RAPA group (0.19 ± 0.05) compared with the CTX + IPA (0.29 ± 0.04, P = 0.034) and CTX + IPA + IL-12 (0.39 ± 0.12, P < 0.001) groups. The expression of S6K was also significantly lower in the CTX + IPA + RAPA (0.04 ± 0.04) group than in the CTX + IPA (0.09 ± 0.01, P = 0.01) and CTX + IPA + IL-12 (0.13 ± 0.03, P < 0.001) groups. Mammalian target of rapamycin modulates CD8+ T-cell differentiation and expression of triggering receptor expressed on myeloid cell-1: As shown in Figure 3, the proportion of CD8+ Tem, IFN-γ production, and serum sTREM-1 expression were significantly increased in the CTX + IPA + IL-12 group (0.47 ± 0.06, 0.09 ± 0.03, and 1876.91 ± 247.80, respectively) compared with the CTX + IPA group (0.35 ± 0.10, P = 0.024; 0.05 ± 0.04, P = 0.032; and 1537.64 ± 359.52, P = 0.017, respectively). The result indicates that the addition of IL-12 could improve CD8+ Tem differentiation, IFN-γ production, and resulted in robust serum sTREM-1 expression during IPA infection by activating the mTOR pathway. The proportion of CD8+ effector memory T-cells, expression of interferon-γ, and sTREM-1 levels were significantly increased by IL-12 stimulation but significantly decreased by rapamycin treatment. The peripheral blood mononuclear cells obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected using flow cytometry 7 days after Aspergillus fumigatus intranasal inoculation. Side scatter and CD8a were used to gate on CD8+ T-lymphocytes, and CD44+ CD45+ CD62+/- cells represented CD8+ effector memory T-cells. 
The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin; sTREM-1: Soluble triggering receptors expressed on myeloid cell-1. More importantly, we also found that adding rapamycin could significantly decrease the proportion of CD8+ Tem (0.22 ± 0.05), IFN-γ production (0.02 ± 0.01), and serum TREM-1 expression (1214.45 ± 293.24) compared with the CTX + IPA (0.35 ± 0.10, P = 0.017; 0.05 ± 0.04, P = 0.039; and 1537.64 ± 359.52, P = 0.022, respectively) and CTX + IPA + IL-12 (0.47 ± 0.06, P < 0.001; 0.09 ± 0.03, P < 0.001; and 1876.91 ± 247.80, P < 0.001, respectively) groups. These results indicate that sustained mTOR activity is essential for CD8+ Tem differentiation, Type I effector functions, and serum sTREM-1 expression. Alteration of inflammatory responses and severity of fungal infection: IL-6 levels reflect the inflammatory response, and IL-10 levels reflect anti-inflammatory responses. Figure 4 shows that the IL-12-treated group had the highest IL-6 levels (2053.91 ± 402.37 pg/ml), followed by the CTX + IPA (1426.57 ± 488.03 pg/ml, P = 0.036), CTX + IPA + RAPA (822.74 ± 211.30 pg/ml, P < 0.001), IPA (811.58 ± 247.03 pg/ml, P < 0.001), and control (245.65 ± 85.78 pg/ml, P < 0.001) groups. In contrast, the concentration of IL-10 showed the reverse trend. The IL-10 level of the IL-12-treated group (314.94 ± 36.97 pg/ml), as well as the control (267.60 ± 46.10 pg/ml) and IPA (350.93 ± 43.11 pg/ml) groups, were significantly lower than the CTX + IPA group (437.95 ± 53.41 pg/ml, P < 0.001) and especially the CTX + IPA + RAPA group (542.44 ± 37.32 pg/ml, P < 0.001). IL-12 significantly increased the IL-6 level, which was significantly decreased by rapamycin treatment. In contrast, the concentration of IL-10 and galactomannan showed the reverse trend. Plasma samples obtained from IPA mice, CTX + IPA mice, CTX + IPA mice treated with IL-12 or rapamycin, and control animals were detected by ELISA 7 days after Aspergillus fumigatus intranasal inoculation. The data are presented as the mean ± standard deviation. *P < 0.05. CTX: Cyclophosphamide; IPA: Invasive pulmonary aspergillosis; IL: Interleukin. Galactomannan (GM) is used to demonstrate the severity of fungal infection in clinical practice. In this study, we found that, after A. fumigates inoculation, the level of GM was significantly increased in the IPA (1985.98 ± 152.79 pg/ml, P < 0.001), CTX + IPA (2251.06 ± 230.50 pg/ml, P < 0.001), CTX + IPA + IL-12 (1920.15 ± 86.16 pg/ml, P < 0.001), and CTX + IPA + RAPA (2496.87 ± 278.05 pg/ml, P < 0.001) groups compared with the control group (126.82 ± 10.31 pg/ml). However, as shown in Figure 4, treatment with IL-12 significantly decreased the level of GM compared with the CTX + IPA (P = 0.002) and CTX + IPA + RAPA (P < 0.001) groups. Discussion: In the current study, we demonstrated that IL-12 increased the number and effector response (IFN-γ release) of CD8+ Tem and the level of sTREM-1 during IPA infection in CTX-induced immunocompromised animals, through increased expression and activity of the mTOR signal transduction pathway. Furthermore, inflammatory cytokines (IL-6) were increased and anti-inflammatory cytokines (IL-10) were decreased significantly, which markedly affect the fungal burden of the host. These effects could be blocked by the mTOR inhibitor rapamycin. CD8+ T-cells play a vital role in the adaptive immune system. 
After infection, some of the CD8+ T-cells will display a memory phenotype and can survive long term. CD8+ effector memory T-cells (CD8+ Tem), memory T-cells with effector cell phenotype, will establish a rapid immune response when the host comes into contact with the same pathogen, which is crucial for patients who are at a high risk of repeated infections such as immunosuppressed patients.[520] The mTOR signaling pathway has the ability to sense cellular metabolic state, extracellular nutrient availability, presence of growth factors/cytokines, and control key cellular processes, including apoptosis/autophagy, proliferation, and cell growth that govern cell fate. Studies indicate that mTOR contributes to the proliferation and differentiation of T-cells in response to antigen stimulation.[721] mTOR inhibition by rapamycin induces CD4+ T-cell anergy and/or differentiation into FoxP3+ regulatory T-cells and attenuates mTOR kinase signaling in regulating CD8+ T-cell trafficking.[2223] Consistent with these results, the current study demonstrated that the presence of IL-12 augments mTOR activity and influences CD8+ Tem proliferation and Type I effector maturation following immunosuppression by CTX and A. fumigatus infection. Effector functions induced by IL-12 treatment were significantly attenuated by adding the mTOR inhibitor rapamycin, confirming that sustained mTOR kinase activity is required for IL-12-programed proliferation and Type I effector function in CD8+ Tem in response to a fungal infection. TREM-1 is largely known as an activating receptor on neutrophils and monocytes, playing an important role in amplification of the inflammatory response. Studies demonstrated that activation of TREM-1 on monocytes drove robust production of pro-inflammatory chemokines such as monocyte chemoattractant protein 1 and IL-8. Engagement of TREM-1 in combination with microbial ligands that activate toll-like receptors also synergistically increased the production of the pro-inflammatory cytokines, TNF-α and IL-1β.[2425] Moreover, in primary human monocytes, TREM-1 activation was found to be upregulated in the context of host defense directed against bacterial and fungal infections. These cells had an improved ability to elicit T-cell proliferation and production of IFN-γ. The balance between the transcription factors T-bet and eomesodermin has been shown to determine effector and memory cell differentiation in CD8+ T-cells. Although in our previous studies, we found that both sTREM-1 and mTOR activity was significantly correlated with T-bet and eomesodermin levels in immunocompromised animals with IPA, little is known regarding its specific involvement in mTOR modulation of CD8+ T-cell differentiation and Type I effector function.[819] Consistent with previous studies, the findings of the current study demonstrated that, similar to the number and effector cell response of CD8+ Tem, the level of serum sTREM-1 expression was also significantly increased after IL-12 treatment and significantly decreased after adding rapamycin. These findings suggest that the mTOR signaling pathway is the central regulator of transcriptional programs which determine effector and/or memory cell differentiation in CD8+ T-cells through the transcription factors, T-bet and eomesodermin. Thus, sTREM-1 might be the molecular link between mTOR modulation and immune cell differentiation in the immunocompromised mice with a fungal infection. However, elucidation of a clear mechanism requires further study. 
The current study also elucidated the final immune effects of the mTOR signaling pathway. We found that IL-12 significantly increased the plasma pro-inflammatory cytokine (IL-6) level, decreased the anti-inflammatory cytokine (IL-10) level, and reduced the fungal load (GM). The mTOR inhibitor rapamycin produced completely opposite effects. These results indicate that the mTOR signaling pathway plays a vital role in CD8+ T-cell differentiation and function, resulting in changes in the immune response and fungal load of immunosuppressed, infected hosts. To summarize, mTOR modulates CD8+ T-cell differentiation during immune responses to IPA, and sTREM-1 may play a vital role in signal transduction between mTOR and the immune response. Whether this mechanism could be used as a target of antifungal immunotherapy in the future requires further study.

Financial support and sponsorship: The work was supported by the Beijing Municipal Natural Science Foundation (No. 7152119) and the Chinese National Natural Science Foundation for Young Scholars (No. 81601657).

Conflicts of interest: There are no conflicts of interest.
Background: Triggering receptor expressed on myeloid cell-1 (TREM-1) may play a vital role in mammalian target of rapamycin (mTOR) modulation of CD8+ T-cell differentiation through the transcription factors T-box expressed in T-cells and eomesodermin during the immune response to invasive pulmonary aspergillosis (IPA). This study aimed to investigate whether the mTOR signaling pathway modulates the proliferation and differentiation of CD8+ T-cells during the immune response to IPA and the role TREM-1 plays in this process. Methods: Cyclophosphamide (CTX) was injected intraperitoneally, and Aspergillus fumigatus spore suspension was inoculated intranasally to establish the immunosuppressed IPA mouse model. After inoculation, rapamycin (2 mg·kg⁻¹·d⁻¹) or interleukin (IL)-12 (5 μg/kg every other day) was given for 7 days. The number of CD8+ effector memory T-cells (Tem), expression of interferon (IFN)-γ, mTOR, and ribosomal protein S6 kinase (S6K), and the levels of IL-6, IL-10, galactomannan (GM), and soluble TREM-1 (sTREM-1) were measured. Results: Viable A. fumigatus was cultured from the lung tissue of the inoculated mice. Histological examination indicated greater inflammation, hemorrhage, and lung tissue injury in both the IPA and CTX + IPA groups of mice. The expression of mTOR and S6K was significantly increased in the CTX + IPA + IL-12 group compared with the control, IPA (P = 0.01; P = 0.001), and CTX + IPA (P = 0.034; P = 0.032) groups, but significantly decreased in the CTX + IPA + RAPA group (P < 0.001). Compared with the CTX + IPA group, the proportion of Tem, expression of IFN-γ, and the level of sTREM-1 were significantly higher after IL-12 treatment (P = 0.024, P = 0.032, and P = 0.017, respectively), and the opposite results were observed when the mTOR pathway was blocked by rapamycin (P < 0.001). Compared with the CTX + IPA and CTX + IPA + RAPA groups, IL-12 treatment increased IL-6 and downregulated IL-10 as well as GM, which strengthened the immune response to the IPA infection. Conclusions: mTOR modulates CD8+ T-cell differentiation during the immune response to IPA. TREM-1 may play a vital role in signal transduction between mTOR and the downstream immune response.
null
null
8,824
445
[ 92, 291, 154, 72, 123, 294, 406, 406, 463, 31 ]
15
[ "ipa", "ctx", "il", "ctx ipa", "12", "il 12", "group", "cd8", "rapamycin", "mice" ]
[ "response fungal infection", "pulmonary aspergillosis il", "rapamycin modulates cd8", "rapamycin cd8 cells", "immunosuppression plus aspergillosis" ]
null
null
[CONTENT] CD8+ T Effector Memory Cells | Immunosuppression | Invasive Pulmonary Aspergillosis | Mammalian Target of Rapamycin | Triggering Receptor Expressed on Myeloid Cell-1 [SUMMARY]
[CONTENT] CD8+ T Effector Memory Cells | Immunosuppression | Invasive Pulmonary Aspergillosis | Mammalian Target of Rapamycin | Triggering Receptor Expressed on Myeloid Cell-1 [SUMMARY]
[CONTENT] CD8+ T Effector Memory Cells | Immunosuppression | Invasive Pulmonary Aspergillosis | Mammalian Target of Rapamycin | Triggering Receptor Expressed on Myeloid Cell-1 [SUMMARY]
null
[CONTENT] CD8+ T Effector Memory Cells | Immunosuppression | Invasive Pulmonary Aspergillosis | Mammalian Target of Rapamycin | Triggering Receptor Expressed on Myeloid Cell-1 [SUMMARY]
null
[CONTENT] Animals | CD8-Positive T-Lymphocytes | Cell Differentiation | Female | Interferon-gamma | Invasive Pulmonary Aspergillosis | Lymphocyte Activation | Mice | Mice, Inbred BALB C | Myeloid Cells | Ribosomal Protein S6 Kinases | TOR Serine-Threonine Kinases | Tissue Culture Techniques [SUMMARY]
[CONTENT] Animals | CD8-Positive T-Lymphocytes | Cell Differentiation | Female | Interferon-gamma | Invasive Pulmonary Aspergillosis | Lymphocyte Activation | Mice | Mice, Inbred BALB C | Myeloid Cells | Ribosomal Protein S6 Kinases | TOR Serine-Threonine Kinases | Tissue Culture Techniques [SUMMARY]
[CONTENT] Animals | CD8-Positive T-Lymphocytes | Cell Differentiation | Female | Interferon-gamma | Invasive Pulmonary Aspergillosis | Lymphocyte Activation | Mice | Mice, Inbred BALB C | Myeloid Cells | Ribosomal Protein S6 Kinases | TOR Serine-Threonine Kinases | Tissue Culture Techniques [SUMMARY]
null
[CONTENT] Animals | CD8-Positive T-Lymphocytes | Cell Differentiation | Female | Interferon-gamma | Invasive Pulmonary Aspergillosis | Lymphocyte Activation | Mice | Mice, Inbred BALB C | Myeloid Cells | Ribosomal Protein S6 Kinases | TOR Serine-Threonine Kinases | Tissue Culture Techniques [SUMMARY]
null
[CONTENT] response fungal infection | pulmonary aspergillosis il | rapamycin modulates cd8 | rapamycin cd8 cells | immunosuppression plus aspergillosis [SUMMARY]
[CONTENT] response fungal infection | pulmonary aspergillosis il | rapamycin modulates cd8 | rapamycin cd8 cells | immunosuppression plus aspergillosis [SUMMARY]
[CONTENT] response fungal infection | pulmonary aspergillosis il | rapamycin modulates cd8 | rapamycin cd8 cells | immunosuppression plus aspergillosis [SUMMARY]
null
[CONTENT] response fungal infection | pulmonary aspergillosis il | rapamycin modulates cd8 | rapamycin cd8 cells | immunosuppression plus aspergillosis [SUMMARY]
null
[CONTENT] ipa | ctx | il | ctx ipa | 12 | il 12 | group | cd8 | rapamycin | mice [SUMMARY]
[CONTENT] ipa | ctx | il | ctx ipa | 12 | il 12 | group | cd8 | rapamycin | mice [SUMMARY]
[CONTENT] ipa | ctx | il | ctx ipa | 12 | il 12 | group | cd8 | rapamycin | mice [SUMMARY]
null
[CONTENT] ipa | ctx | il | ctx ipa | 12 | il 12 | group | cd8 | rapamycin | mice [SUMMARY]
null
[CONTENT] bet | eomesodermin | immune | cells | differentiation | cd8 | cd8 cells | cell | trem | mtor [SUMMARY]
[CONTENT] usa | plus | cat | ebioscience | 81 ebioscience | anti rat | rat | 81 | group | conidia [SUMMARY]
[CONTENT] ipa | ctx | ctx ipa | il | pg ml | pg | 001 | 12 | il 12 | ml [SUMMARY]
null
[CONTENT] ipa | ctx | ctx ipa | il | 12 | group | il 12 | conflicts | conflicts interest | interest [SUMMARY]
null
[CONTENT] myeloid ||| mTOR | transcription ||| mTOR | IPA [SUMMARY]
[CONTENT] CTX | Aspergillus | IPA ||| 2 mg.kg-1.d-1 | 5 μg/kg | 7 days ||| mTOR | S6 | S6K | IL-10 | GM [SUMMARY]
[CONTENT] Viable A. ||| IPA | CTX ||| mTOR | S6K | CTX | IPA | 0.01 | 0.001 | CTX + IPA | 0.034 | 0.032 | CTX ||| CTX | Tem | IFN | 0.024 | 0.032 | 0.017 | mTOR | P < 0.001 ||| CTX | CTX | IL-10 | GM | IPA [SUMMARY]
null
[CONTENT] myeloid ||| mTOR | transcription ||| mTOR | IPA ||| CTX | Aspergillus | IPA ||| 2 mg.kg-1.d-1 | 5 μg/kg | 7 days ||| mTOR | S6 | S6K | IL-10 | GM ||| ||| Viable A. ||| IPA | CTX ||| mTOR | S6K | CTX | IPA | 0.01 | 0.001 | CTX + IPA | 0.034 | 0.032 | CTX ||| CTX | Tem | IFN | 0.024 | 0.032 | 0.017 | mTOR | P < 0.001 ||| CTX | CTX | IL-10 | GM | IPA ||| mTOR | IPA ||| mTOR [SUMMARY]
null
Sex trafficking awareness and associated factors among youth females in Bahir Dar town, North-West Ethiopia: a community based study.
25028202
Sex trafficking is a contemporary issue in both developed and developing countries. The number of trafficked women and young girls has increased globally. Females aged 18-25 are the group most targeted by traffickers. Although the problem is evident in Ethiopia, there are no studies that have explored sex trafficking awareness among females. Therefore, the aim of this study was to assess sex trafficking awareness and associated factors among youth females in Bahir Dar town, North-West Ethiopia.
BACKGROUND
A community based cross-sectional study design was employed to collect data from February 1st-30th 2012 from a total of 417 youth females. The participants in the study were selected using systematic random sampling techniques. A structured Amharic questionnaire was used to collect data. Data were entered, cleaned and analyzed using SPSS 16.0. Descriptive statistics were used to describe data. Logistic regression analysis was used to identify factors associated with sex trafficking awareness.
METHODS
Two hundred forty-nine (60%) of the participants reported that they had heard or read about sex trafficking. Television (64%), friends (46%) and radio (39%) were the most frequently mentioned sources of information about sex trafficking. About 87% and 74% of the participants mentioned friends and brokers respectively as mediators of sex trafficking. Having TV at home (AOR = 2.19, 95% CI: 1.31-3.67), completing grade 10 or more (AOR = 2.22, 95% CI: 1.18-4.17), taking training on gender issues (AOR = 3.59, 95% CI: 2.11-6.10) and living together with parents (AOR = 3.65, 95% CI: 1.68-7.93) were factors found associated with sex trafficking awareness.
RESULT
In this study, sex trafficking awareness was low among youth females. Having TV at home, living together with someone and being trained on gender issues were predictors of sex trafficking awareness. Therefore, providing education about sex trafficking will help to increase sex trafficking awareness among youth females.
CONCLUSION
[ "Adolescent", "Cross-Sectional Studies", "Educational Status", "Ethiopia", "Family Characteristics", "Female", "Human Trafficking", "Humans", "Information Dissemination", "Logistic Models", "Multivariate Analysis", "Radio", "Residence Characteristics", "Surveys and Questionnaires", "Television", "Young Adult" ]
4118269
Background
The United Nations defines sex trafficking as “the recruitment, transportation, transfer, harboring, or receipt of persons, by means of the threat or use of force or other forms of coercion, of abduction, of fraud, of deception, of the abuse of power or of a position of vulnerability” for the purposes of sexual exploitation and for economic and other personal gains [1]. It is a contemporary public health issue of both developed and developing countries that violates human rights and has been described as a modern form of slavery [1,2].

According to the International Organization for Migration (IOM) and the United Nations Office on Drugs and Crime, the numbers of trafficked women have increased in both developed and developing countries [2,3]. In other studies, girls aged 18–25 are mainly targeted by traffickers, usually in poor socioeconomic areas [4,5]. It is difficult to estimate accurately the global prevalence of human trafficking because of its hidden nature [2], but a recent estimate indicated that trafficking affects between one and two million people each year worldwide, 60-70% of whom are young girls [2,6]. The International Labor Organization (ILO) estimated that there are 4.5 million victims of forced sexual exploitation worldwide, 98% of whom are estimated to be women and girls [7].

Young girls are misguided by well-organized groups with promises of employment in some Western countries, North America, and Australia [2]. Sometimes, the girls themselves are aware and well informed that they will be engaged in prostitution abroad, and they pay huge sums of money to a mediator who assists them in obtaining passports or in transferring them illegally to the destination country. The full cost associated with the migration is covered by the mediator, with the expectation that the woman will repay the loan from the proceeds of the prostitution abroad. Such agreements lead women to carry out various traditional rituals to ensure that they remain bonded to the mediator until they have been fully exploited financially [8].

A study conducted in Nigeria in 2004 showed that about 86.1% of students had heard about sex trafficking [9]. In another study conducted in Nigeria, 97.4% of the women reported that they had heard of women being taken abroad for commercial sex work; in that study, 47% of the young women believed that sex trafficking brings wealth and prosperity to the family [10]. About 76.5% of participants in the former study believed that victims of sex trafficking were more likely to become infected with sexually transmitted diseases (STDs), another 56.7% believed that they would experience physical abuse, and 56.6% thought they would have unwanted pregnancies [9].

Several studies have reported that sex trafficking has an impact on the physical, mental, social and psychological well-being of women and girls [11-17]. Sex-trafficked women and girls are more likely to contract HIV and other STDs [18-21]. Studies done in India found that sex trafficking was a mode of entry into sex work [22,23]. These studies added that a history of sex trafficking was associated with greater vulnerability to violence and HIV risk behaviors [22,23]. About 88.3% of women mentioned that sex trafficking has negative consequences for the lives of women, as it exposes them to STDs and HIV/AIDS [10].

Being out of school, unemployed or uneducated, having unemployed parents, and environmental and socio-cultural factors were mentioned as risk factors associated with sex trafficking in different studies [11,17,24,25]. 
In a study done in Nigeria, 77.2%, 68.4%, 56.1%, and 44.5% of the participants reported poverty, unemployment, illiteracy, and low social status, respectively, as factors associated with sex trafficking [11]. Likewise, thousands of teenage girls are shipped out of Ethiopia each year [5]. Recent reports showed that women and girls are also exploited in the sex trade after migrating for labor purposes [26]. Despite efforts made by the government and non-governmental organizations, the number of victims of sex trafficking is increasing. Still, many women and young girls want to go abroad without knowing the situation there [26]. However, youth females’ awareness about sex trafficking is not yet well explored in Ethiopia, particularly in the study area. Credible evidence on awareness of sex trafficking at the grassroots level is important to formulate evidence-based approaches for targeting preventive interventions against sex trafficking. Therefore, this study was designed to assess sex trafficking awareness and associated factors among young females in Bahir Dar town, North-West Ethiopia.
Methods
Study design, area and population: A community based cross-sectional study design was employed from February 1st-30th, 2012. The study was conducted in Bahir Dar town, the capital city of Amhara National Regional State, which is located 565 km North-West of Addis Ababa, Ethiopia. According to the Central Statistics Agency (CSA) report of the 2007 population census [27], the total population of the town was estimated at 180,174 (87,160 males and 93,014 females). The majority of the population is Amhara by ethnicity (96%) and Orthodox Christian by religion (90%) [27]. All youth females aged 15–24 years who were living in the study area constituted the study population; those who had lived at least six months in the town were included in the study.

Sample size and sampling techniques: The single population proportion formula, n = (Zα/2)² × P(1 − P) / d², was used to determine the sample size based on the following assumptions: a 95% level of confidence (Zα/2 = 1.96), the proportion of respondents who had heard about sex trafficking in a previous study (P = 86%) [9], and a margin of error of d = 4%. After applying a design effect of 1.5 and adding a 5% contingency, the final sample size was 456 youth females. Four urban kebeles (the lowest administrative level) of the total nine kebeles in the town were selected using a simple random sampling technique [28]. Allocation proportional to size was used to determine the required number of youth females from each kebele. The household was the sampling unit in this study, and households were selected using systematic random sampling. If there was more than one youth female in a household, the lottery method was used to select one participant. If the selected youth was not available at home at the time of the visit, a second visit was made to contact her for the interview.
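As a quick check of this arithmetic, the sketch below (Python, used purely for illustration; the authors report using SPSS for their analysis) plugs the stated assumptions into the single population proportion formula and then applies the 1.5 design effect and the 5% contingency:

```python
import math

z = 1.96   # 95% level of confidence
p = 0.86   # proportion who had heard of sex trafficking in a previous study [9]
d = 0.04   # margin of error

n0 = (z ** 2) * p * (1 - p) / d ** 2   # base sample size, about 289.1
n1 = n0 * 1.5                          # design effect of 1.5, about 433.6
n_final = math.ceil(n1 * 1.05)         # plus 5% contingency
print(round(n0, 1), round(n1, 1), n_final)  # 289.1 433.6 456
```

Rounding up at the last step reproduces the reported final sample size of 456.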
Data collection tool: A structured, pretested Amharic questionnaire was used to elicit information about sex trafficking from the study participants (see Additional file 1). The questionnaire had two sections: the first covered the socio-demographic characteristics of the youth females and their parents, and the second covered sex trafficking awareness. A youth female was classified as having awareness of sex trafficking if she reported that she had heard or read that women are taken to other places or to foreign countries for the purposes of sexual exploitation, for money or other personal gains. As an indirect measure of the prevalence of sex trafficking in the area, each participant was asked whether or not she had been approached by someone offering assistance to go abroad. Ten females who had completed grade 10 and two others were recruited as data collectors and supervisors for this study. One day of training was given to the data collectors and supervisors, with particular emphasis on the objective of the study and the methods of the survey.

Data quality: The questionnaire was pretested to evaluate face-validity and to ensure that the study participants understood what the investigators intended to ask, and some questions were modified accordingly. Training was given to data collectors and supervisors on how to select households and study participants. Daily supervision was done by the principal investigator to check the completeness of the questionnaires. Female data collectors were used to obtain accurate information, since most questions are gender sensitive.

Data analysis: Data were entered, cleaned and analyzed with SPSS 16 software. Descriptive statistics such as means and percentages were used to describe the data. Bivariate and multivariable logistic regression analyses were used to identify the predictors of sex trafficking awareness, and odds ratios with 95% confidence intervals (CI) were calculated.
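The analysis itself was done in SPSS; purely as an illustration of the multivariable step, the hedged sketch below fits a logistic regression with statsmodels on invented data (the column names and values are hypothetical, not the study's dataset) and converts the coefficients to adjusted odds ratios with 95% CIs, which is the standard way estimates such as AOR = 2.19 (95% CI: 1.31-3.67) are obtained:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame standing in for the survey data; all columns invented.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "aware":           rng.integers(0, 2, 417),  # 1 = has heard/read about sex trafficking
    "tv_at_home":      rng.integers(0, 2, 417),
    "grade10_plus":    rng.integers(0, 2, 417),
    "gender_training": rng.integers(0, 2, 417),
})

# Multivariable logistic regression: awareness on the candidate predictors.
X = sm.add_constant(df[["tv_at_home", "grade10_plus", "gender_training"]])
fit = sm.Logit(df["aware"], X).fit(disp=0)

# exp(beta) gives the adjusted odds ratio; exponentiating the CI bounds gives its 95% CI.
aor = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(pd.concat([aor.rename("AOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```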
Ethical clearance: Ethical clearance was obtained from the ethical committee of the Bahir Dar University College of Medicine and Health Sciences before data collection. Informed written consent was obtained from the respective kebeles. Informed verbal consent and assent were obtained from study participants or their parents after the purposes of the study were explained, and those who gave their consent and assent were interviewed. Confidentiality was ensured by collecting the data anonymously.
Results
A total of 417 youth females participated in this study, a response rate of 92%. About 64.7% of the study participants were in the age range of 20–24 years, and 57.3% were single. The majority of the study participants (83.0%) were Orthodox Christian, and 53% had completed grade 10 or above. About 72.9% and 70.5% of the study participants had a radio and a television at home, respectively. The largest occupational group among the study participants (33.6%) were students. One hundred twenty-two participants had taken training on gender issues (Table 1).

Table 1: Socio-demographic characteristics of youth females in Bahir Dar Town, North-West Ethiopia, February 2012.

Of all youth females, about 59.7% reported that they had read or heard about sex trafficking. About 64%, 46%, 37% and 17% of the study participants mentioned television, friends, radio and print materials, respectively, as sources of information. Friends and brokers were mentioned as mediators of sex trafficking by 87% and 74% of the study participants, respectively. About 72%, 50%, 45%, and 18% of the study participants mentioned hoping for a better life elsewhere, unemployment, poverty and illiteracy, respectively, as reasons for being trafficked. About 71.4% of the participants reported that youth females aged greater than 25 years are vulnerable to sex trafficking. About 25% of the participants reported that they had been approached by someone offering to assist them to go abroad (Table 2).

Table 2: Sex trafficking awareness among youth females in Bahir Dar town, North-West Ethiopia, February 2012. *Multiple responses were possible; a: mini media, family, internet; b: relatives, strangers, tourists; c: family, divorce, family loss.

Factors associated with awareness of sex trafficking: In the bivariate logistic regression analysis, having a radio and a television at home, living with parents or with a boyfriend, fiancé or husband, and getting training on gender issues were significantly associated with awareness of sex trafficking. In the multivariable logistic regression analysis, having a television at home, living with parents or with a boyfriend, fiancé or husband, and getting training on gender issues showed a statistically significant association with sex trafficking awareness, whereas completing grade 10 or above, living with either mother or father, and living alone did not (Table 3).

Table 3: Factors associated with awareness of sex trafficking among youth females in Bahir Dar Town, Northwest Ethiopia, February 2012. *COR: crude odds ratio; AOR: adjusted odds ratio.

Those whose educational status was grade 10 or above were 2.22 (AOR = 2.22, 95% CI: 1.18-4.17) times more likely to be aware of sex trafficking than illiterates. Youth who had a television at home were about 2.19 (AOR = 2.19, 95% CI: 1.31-3.67) times more likely to be aware of sex trafficking than their counterparts. Youth who were living with parents, with a boyfriend, fiancé or husband, with either mother or father, or with relatives were 3.65 (AOR = 3.65, 95% CI: 1.68-7.93), 3.46 (AOR = 3.46, 95% CI: 1.69-7.06), 2.31 (AOR = 2.31, 95% CI: 1.03-5.16) and 2.86 (AOR = 2.86, 95% CI: 1.24-6.58) times more likely, respectively, to be aware of sex trafficking than those who were living alone. Youth trained on gender issues were 3.59 (AOR = 3.59, 95% CI: 2.11-6.10) times more likely to be aware of sex trafficking than those youth females who did not take the training (Table 3).
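On a log scale, a reported AOR and its 95% CI are linked by exp(ln(AOR) ± 1.96·SE), so the standard error behind any of these estimates can be recovered from the interval itself. A small sketch, using the gender-training estimate (AOR = 3.59, 95% CI: 2.11-6.10) from the text:

```python
import math

aor, lo, hi = 3.59, 2.11, 6.10   # gender-issues training estimate from the text

# SE of ln(AOR) recovered from the width of the log-scale interval, about 0.27.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Rebuilding the interval from ln(AOR) +/- 1.96*SE recovers the published bounds.
print(round(se, 3),
      round(math.exp(math.log(aor) - 1.96 * se), 2),   # ~2.11
      round(math.exp(math.log(aor) + 1.96 * se), 2))   # ~6.10
```

That the reconstruction matches also confirms the point estimate sits at the geometric mean of the reported bounds, as expected for a Wald-type interval.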
Conclusion
The level of awareness about sex trafficking among youth females in this study was low. Having a television at home, completing grade 10 or above, living together with someone, and taking training on gender issues were the predictors of sex trafficking awareness. Awareness creation about sex trafficking should be delivered to youth females through different approaches to increase the accessibility of information on sex trafficking. Moreover, further research is recommended to determine the magnitude and nature of sex trafficking in the study area in detail.
[ "Background", "Study design, area and population", "Sample size and sampling techniques", "Data collection tool", "Data quality", "Data analysis", "Ethical clearance", "Factors associated with awareness of sex trafficking", "Abbreviations", "Competing interests", "Authors’ contribution", "Authors’ information", "Pre-publication history" ]
[ "The United Nation defines sex trafficking as “the recruitment, transportation, transfer, harboring, or receipt of persons, by means of the threat or use of force or other forms of coercion, of abduction, of fraud, of deception, of the abuse of power or of a position of vulnerability” for the purposes of sexual exploitation and for economic and other personal gains [1]. It is a contemporary public health issue of both developed and developing countries that violates human rights and has been described as a modern form of slavery [1,2].\nAccording to the International Organization for Migration (IOM) and United Nation Office on Drugs and Crime report, the numbers of trafficked women have increased both in developed and developing countries [2,3]. In another study, girls aged 18–25 are mainly targeted by traffickers usually in poor socioeconomic areas [4,5]. It is difficult to estimate accurately the global prevalence of human trafficking due to its hidden nature [2]. But a recent estimate indicated that trafficking reaches between one and two million people each year worldwide; 60-70% of which are young girls [2,6]. The International Labor Organization (ILO) estimated that there are 4.5 million victims of forced sexual exploitation worldwide; 98% of whom are estimated to be women and girls [7].\nYoung girls are misguided by a well-organized group with promises of employment in some Western countries, North America, and Australia [2]. Sometimes, the girls themselves are aware and well informed that they would be engaged in prostitution abroad and pay huge sums of money to a mediator who assists them in obtaining passports or in transferring them illegally to the migrating country. The full cost associated with the migration would be covered by the mediator, with the hope that the woman would re-pay the loan from the proceeds of the prostitution abroad. Such agreement leads women to carry out various traditional rituals to ensure that they remain bonded to the mediator until they have been fully exploited financially [8].\nA study conducted in Nigeria in 2004 showed that about 86.1% of students had heard about sex trafficking [9]. In another study conducted in Nigeria, 97.4% of the women reported that they had heard of women being taken abroad for commercial sex work. In this study, 47% of the young women believed that sex trafficking brings wealth and prosperity to the family [10]. About 76.5% of participants in above study believed that victims of sex trafficking were more likely to become infected with Sexually Transmitted Diseases (STDs). Another 56.7% believed that they would experience physical abuse and 56.6% thought they would have unwanted pregnancies [9].\nSeveral studies reported that sex trafficking has impact on the physical, mental, social and psychological well being of women and girls [11-17]. Sex-trafficked women and girls are more likely to contract HIV and other STDs [18-21]. Studies done in India found that sex trafficking was a mode of entry into sex work [22,23]. These studies added that history of sex trafficking was associated with a greater vulnerability to violence and HIV risk behaviors [22,23]. About 88.3% of the women mentioned that sex trafficking had negative consequences to the life of women as it exposes them to STDs and HIV/AIDS [10].\nBeing out-of-school, unemployed, uneducated and unemployed parents, environmental and socio-cultural factors were factors mentioned as risk factors associated with sex trafficking in different studies [11,17,24,25]. 
A study done in Nigeria 77.2%, 68.4%, 56.1%, 44.5% of the participants reported poverty, unemployment, illiteracy and low social status respectively as factors associated with sex trafficking [11].\nLikewise, thousands of teenage girls are shipped out of Ethiopia each year [5]. Recent reports showed that women and girls are also exploited in the sex trade after migrating for labor purposes [26]. Despite efforts made by the government and non-governmental organizations, victims of sex trafficking are increasing. Still, many women and young girls want to go abroad without knowing the situations there [26]. However, youth females’ awareness about sex trafficking is not yet well explored in Ethiopia, particularly in the study area. Credible evidence on awareness of sex trafficking at grass root level is important to formulate evidence based approaches for targeting preventative interventions against sexual trafficking. Therefore, this study was designed to assess sex trafficking awareness and associated factors among young females in Bahir Dar town, North-West Ethiopia.", "A community based cross-sectional study design was employed from February 1st-30th, 2012. The study was conducted in Bahir Dar town, the capital city of Amhara National Regional State, which is located 565 kms North-West of Addis Ababa, Ethiopia. According to Central Statistics Agency (CSA) report of 2007 population census [27], the total population of the town was estimated 180,174 (87,160 males and 93,014 females). The majority of the population (96%) is Amhara by ethnicity and Orthodox Christian (90%) by religion [27]. All youth females aged 15–24 years who were living in the study area were the study population. Those youth females who lived at least six months in the town were included in the study.", "Single population proportional formula (n = [(Z\nα/2)2 P (1 - P)]/d2) was used to determine sample size based on the following assumptions: 95% level of confidence (Zα/2 = 1.96), the proportion of respondents who heard about sex trafficking in a previous study (p = 86%) (9), and margin of error (d = 4%). By considering 5% contingency, and 1.5 design effect, the final sample size was 456 youth females. Four urban kebeles (the lowest administrative level) of the total nine kebeles in the town were selected using a simple random sampling technique [28]. Proportional to size allocation was used in order to determine the required numbers of youth females from each kebele. Household was used as a sampling unit in this study. Households were selected using systematic random sampling. If there were more than one youth female in a household, lottery method was used to select one participant. If the selected youth was not available at home at the time of visit, revisit for the second time was made to contact the selected youth for interview.", "A structured pretested Amharic questionnaire was used to elicit information about sex trafficking from the study participants (see Additional file 1). The questionnaire had two sections. The first section of the questionnaire was about socio-demographic characteristics of the youth females and their parents and the second section was on sex trafficking awareness. A youth female was classified as having awareness about sex trafficking if she reported that she had heard or read about sex trafficking that a woman who had been taken to another place or foreign countries for the purposes of sexual exploitation to gain money or other personal gains. 
As an indirect measure of the prevalence of sex trafficking in the area, a participant was asked whether or not she had been approached by someone that assists to go abroad.\nTen grade 10 complete females and two others were recruited as data collectors and supervisors for this study. One day training was given to data collectors and supervisors with particular emphasis on the objective of the study and methods of the survey.", "The questionnaire was pretested to evaluate the face-validity and to ensure whether the study participants understood what the investigators intended to know and some modification of questions were made. Training was given to data collectors and supervisors on how to select household and study participants. Daily supervision was done by the principal investigator to check the completeness of the questionnaire. Female data collectors were used to get accurate information since most questions are gender sensitive.", "Data were entered, cleaned and analyzed with SPSS 16 software. Descriptive statistics such as mean and percentage were used to describe the data. Bivariate and multivariable logistic regression analyses were used to identify the predictors of sex trafficking awareness. Odds ratio with 95% confidence interval (CI) was calculated to identify predictors of sex trafficking awareness.", "Ethical clearance was taken from the ethical committee of the Bahir Dar University, College of Medicine and Health Sciences before data collection. Informed written consent was taken from the respective kebeles. Informed verbal consent and assent was obtained from study participants or parents after explaining the purposes of the study. Those who gave their consent and assent were interviewed. Confidentiality was insured by collecting the data anonymously.", "In bivariate logistic regression analysis, having radio and television at home, living with parents, boyfriend or fiancé or husband, and getting training on gender issues were significantly associated with awareness of sex trafficking. However, during multivariable logistic regression analysis, only having television at home; living with parents and boyfriend or fiancé or husband and getting training on gender issues showed statistically significant association with sex trafficking awareness. Completing grade 10 or above, living with either mother or father and alone did not show statistically significant association with sex trafficking awareness during multivariable logistic regression analysis (Table 3).\nFactors associated with awareness of sex trafficking among youth females in Bahir Dar Town, Northwest Ethiopia, February 2012\n*COR: crude Odds Ratio, AOR: Adjusted Odds Ratio.\nThose whose educational status was grade 10 or above were 2.22 (AOR = 2. 22, 95% CI: 1.18-4.17) times more likely to be aware about sex trafficking than illiterates. Youth who had television at home were about 2.19 (AOR = 2. 19, 95% CI: 1.31-3.67) times more likely to be aware about sex trafficking compared to their counterparts. Youth who were living with parents, boyfriend or fiancé or husband, either mother and father and relatives were 3.65 (AOR = 3. 65, 95% CI: 1.68-7.93), 3.46 (AOR = 3. 46, 95% CI: 1.69-7.06), 2.31 (AOR = 2. 31, 95% CI:1.03-5.16) and 2.86 (AOR = 2. 86, 95% CI: 1.24-6.58) times more likely to be aware about sex trafficking respectively than those who were living alone. Youth trained on gender issues were 3.59 (AOR = 3. 
59, 95% CI: 2.11-6.10) times more likely to be aware about sex trafficking compared to those youth females who did not take the training (Table 3).", "AOR: Adjusted odds ratio; CI: Confidence interval; IOM: International organization for migration; ILO: International labor organization; SPSS: Statistical package for social sciences; STDs: Sexually transmitted diseases.", "The authors declare that they have no competing interests.", "MA designed the study, developed the questionnaire, supervised the data collection, analyzed the data and wrote the paper. GA supervised the data collection, contributed to the interpretation of the findings as well as the drafting and writing of the manuscript. AM trained data collectors, supervised the data collection, contributed to the interpretation of the findings and helped to draft the manuscript. All authors read and approved the final manuscript.", "MA was graduated with BSc in Environmental Health and Master of Public Health in Environmental Health and working as lecturer at Department of Public Health, College of Medicine and Health Sciences, Bahir Dar University.\nGA was graduated with BSc in Nursing, Master of Public Health in Reproductive health and working as lecturer at Department of Public Health, College of Medicine and Health Sciences, Bahir Dar University.\nAM was graduated with BSc in Midwifery and Master of Public Health in Reproductive Health and working as lecturer at Department of Public Health, College of Medicine and Health Sciences, Bahir Dar University.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6874/14/85/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study design, area and population", "Sample size and sampling techniques", "Data collection tool", "Data quality", "Data analysis", "Ethical clearance", "Results", "Factors associated with awareness of sex trafficking", "Discussion", "Conclusion", "Abbreviations", "Competing interests", "Authors’ contribution", "Authors’ information", "Pre-publication history", "Supplementary Material" ]
[ "The United Nation defines sex trafficking as “the recruitment, transportation, transfer, harboring, or receipt of persons, by means of the threat or use of force or other forms of coercion, of abduction, of fraud, of deception, of the abuse of power or of a position of vulnerability” for the purposes of sexual exploitation and for economic and other personal gains [1]. It is a contemporary public health issue of both developed and developing countries that violates human rights and has been described as a modern form of slavery [1,2].\nAccording to the International Organization for Migration (IOM) and United Nation Office on Drugs and Crime report, the numbers of trafficked women have increased both in developed and developing countries [2,3]. In another study, girls aged 18–25 are mainly targeted by traffickers usually in poor socioeconomic areas [4,5]. It is difficult to estimate accurately the global prevalence of human trafficking due to its hidden nature [2]. But a recent estimate indicated that trafficking reaches between one and two million people each year worldwide; 60-70% of which are young girls [2,6]. The International Labor Organization (ILO) estimated that there are 4.5 million victims of forced sexual exploitation worldwide; 98% of whom are estimated to be women and girls [7].\nYoung girls are misguided by a well-organized group with promises of employment in some Western countries, North America, and Australia [2]. Sometimes, the girls themselves are aware and well informed that they would be engaged in prostitution abroad and pay huge sums of money to a mediator who assists them in obtaining passports or in transferring them illegally to the migrating country. The full cost associated with the migration would be covered by the mediator, with the hope that the woman would re-pay the loan from the proceeds of the prostitution abroad. Such agreement leads women to carry out various traditional rituals to ensure that they remain bonded to the mediator until they have been fully exploited financially [8].\nA study conducted in Nigeria in 2004 showed that about 86.1% of students had heard about sex trafficking [9]. In another study conducted in Nigeria, 97.4% of the women reported that they had heard of women being taken abroad for commercial sex work. In this study, 47% of the young women believed that sex trafficking brings wealth and prosperity to the family [10]. About 76.5% of participants in above study believed that victims of sex trafficking were more likely to become infected with Sexually Transmitted Diseases (STDs). Another 56.7% believed that they would experience physical abuse and 56.6% thought they would have unwanted pregnancies [9].\nSeveral studies reported that sex trafficking has impact on the physical, mental, social and psychological well being of women and girls [11-17]. Sex-trafficked women and girls are more likely to contract HIV and other STDs [18-21]. Studies done in India found that sex trafficking was a mode of entry into sex work [22,23]. These studies added that history of sex trafficking was associated with a greater vulnerability to violence and HIV risk behaviors [22,23]. About 88.3% of the women mentioned that sex trafficking had negative consequences to the life of women as it exposes them to STDs and HIV/AIDS [10].\nBeing out-of-school, unemployed, uneducated and unemployed parents, environmental and socio-cultural factors were factors mentioned as risk factors associated with sex trafficking in different studies [11,17,24,25]. 
A study done in Nigeria 77.2%, 68.4%, 56.1%, 44.5% of the participants reported poverty, unemployment, illiteracy and low social status respectively as factors associated with sex trafficking [11].\nLikewise, thousands of teenage girls are shipped out of Ethiopia each year [5]. Recent reports showed that women and girls are also exploited in the sex trade after migrating for labor purposes [26]. Despite efforts made by the government and non-governmental organizations, victims of sex trafficking are increasing. Still, many women and young girls want to go abroad without knowing the situations there [26]. However, youth females’ awareness about sex trafficking is not yet well explored in Ethiopia, particularly in the study area. Credible evidence on awareness of sex trafficking at grass root level is important to formulate evidence based approaches for targeting preventative interventions against sexual trafficking. Therefore, this study was designed to assess sex trafficking awareness and associated factors among young females in Bahir Dar town, North-West Ethiopia.", " Study design, area and population A community based cross-sectional study design was employed from February 1st-30th, 2012. The study was conducted in Bahir Dar town, the capital city of Amhara National Regional State, which is located 565 kms North-West of Addis Ababa, Ethiopia. According to Central Statistics Agency (CSA) report of 2007 population census [27], the total population of the town was estimated 180,174 (87,160 males and 93,014 females). The majority of the population (96%) is Amhara by ethnicity and Orthodox Christian (90%) by religion [27]. All youth females aged 15–24 years who were living in the study area were the study population. Those youth females who lived at least six months in the town were included in the study.\nA community based cross-sectional study design was employed from February 1st-30th, 2012. The study was conducted in Bahir Dar town, the capital city of Amhara National Regional State, which is located 565 kms North-West of Addis Ababa, Ethiopia. According to Central Statistics Agency (CSA) report of 2007 population census [27], the total population of the town was estimated 180,174 (87,160 males and 93,014 females). The majority of the population (96%) is Amhara by ethnicity and Orthodox Christian (90%) by religion [27]. All youth females aged 15–24 years who were living in the study area were the study population. Those youth females who lived at least six months in the town were included in the study.\n Sample size and sampling techniques Single population proportional formula (n = [(Z\nα/2)2 P (1 - P)]/d2) was used to determine sample size based on the following assumptions: 95% level of confidence (Zα/2 = 1.96), the proportion of respondents who heard about sex trafficking in a previous study (p = 86%) (9), and margin of error (d = 4%). By considering 5% contingency, and 1.5 design effect, the final sample size was 456 youth females. Four urban kebeles (the lowest administrative level) of the total nine kebeles in the town were selected using a simple random sampling technique [28]. Proportional to size allocation was used in order to determine the required numbers of youth females from each kebele. Household was used as a sampling unit in this study. Households were selected using systematic random sampling. If there were more than one youth female in a household, lottery method was used to select one participant. 
If the selected youth was not available at home at the time of visit, revisit for the second time was made to contact the selected youth for interview.\nSingle population proportional formula (n = [(Z\nα/2)2 P (1 - P)]/d2) was used to determine sample size based on the following assumptions: 95% level of confidence (Zα/2 = 1.96), the proportion of respondents who heard about sex trafficking in a previous study (p = 86%) (9), and margin of error (d = 4%). By considering 5% contingency, and 1.5 design effect, the final sample size was 456 youth females. Four urban kebeles (the lowest administrative level) of the total nine kebeles in the town were selected using a simple random sampling technique [28]. Proportional to size allocation was used in order to determine the required numbers of youth females from each kebele. Household was used as a sampling unit in this study. Households were selected using systematic random sampling. If there were more than one youth female in a household, lottery method was used to select one participant. If the selected youth was not available at home at the time of visit, revisit for the second time was made to contact the selected youth for interview.\n Data collection tool A structured pretested Amharic questionnaire was used to elicit information about sex trafficking from the study participants (see Additional file 1). The questionnaire had two sections. The first section of the questionnaire was about socio-demographic characteristics of the youth females and their parents and the second section was on sex trafficking awareness. A youth female was classified as having awareness about sex trafficking if she reported that she had heard or read about sex trafficking that a woman who had been taken to another place or foreign countries for the purposes of sexual exploitation to gain money or other personal gains. As an indirect measure of the prevalence of sex trafficking in the area, a participant was asked whether or not she had been approached by someone that assists to go abroad.\nTen grade 10 complete females and two others were recruited as data collectors and supervisors for this study. One day training was given to data collectors and supervisors with particular emphasis on the objective of the study and methods of the survey.\nA structured pretested Amharic questionnaire was used to elicit information about sex trafficking from the study participants (see Additional file 1). The questionnaire had two sections. The first section of the questionnaire was about socio-demographic characteristics of the youth females and their parents and the second section was on sex trafficking awareness. A youth female was classified as having awareness about sex trafficking if she reported that she had heard or read about sex trafficking that a woman who had been taken to another place or foreign countries for the purposes of sexual exploitation to gain money or other personal gains. As an indirect measure of the prevalence of sex trafficking in the area, a participant was asked whether or not she had been approached by someone that assists to go abroad.\nTen grade 10 complete females and two others were recruited as data collectors and supervisors for this study. 
One day training was given to data collectors and supervisors with particular emphasis on the objective of the study and methods of the survey.\n Data quality The questionnaire was pretested to evaluate the face-validity and to ensure whether the study participants understood what the investigators intended to know and some modification of questions were made. Training was given to data collectors and supervisors on how to select household and study participants. Daily supervision was done by the principal investigator to check the completeness of the questionnaire. Female data collectors were used to get accurate information since most questions are gender sensitive.\nThe questionnaire was pretested to evaluate the face-validity and to ensure whether the study participants understood what the investigators intended to know and some modification of questions were made. Training was given to data collectors and supervisors on how to select household and study participants. Daily supervision was done by the principal investigator to check the completeness of the questionnaire. Female data collectors were used to get accurate information since most questions are gender sensitive.\n Data analysis Data were entered, cleaned and analyzed with SPSS 16 software. Descriptive statistics such as mean and percentage were used to describe the data. Bivariate and multivariable logistic regression analyses were used to identify the predictors of sex trafficking awareness. Odds ratio with 95% confidence interval (CI) was calculated to identify predictors of sex trafficking awareness.\nData were entered, cleaned and analyzed with SPSS 16 software. Descriptive statistics such as mean and percentage were used to describe the data. Bivariate and multivariable logistic regression analyses were used to identify the predictors of sex trafficking awareness. Odds ratio with 95% confidence interval (CI) was calculated to identify predictors of sex trafficking awareness.\n Ethical clearance Ethical clearance was taken from the ethical committee of the Bahir Dar University, College of Medicine and Health Sciences before data collection. Informed written consent was taken from the respective kebeles. Informed verbal consent and assent was obtained from study participants or parents after explaining the purposes of the study. Those who gave their consent and assent were interviewed. Confidentiality was insured by collecting the data anonymously.\nEthical clearance was taken from the ethical committee of the Bahir Dar University, College of Medicine and Health Sciences before data collection. Informed written consent was taken from the respective kebeles. Informed verbal consent and assent was obtained from study participants or parents after explaining the purposes of the study. Those who gave their consent and assent were interviewed. Confidentiality was insured by collecting the data anonymously.", "A community based cross-sectional study design was employed from February 1st-30th, 2012. The study was conducted in Bahir Dar town, the capital city of Amhara National Regional State, which is located 565 kms North-West of Addis Ababa, Ethiopia. According to Central Statistics Agency (CSA) report of 2007 population census [27], the total population of the town was estimated 180,174 (87,160 males and 93,014 females). The majority of the population (96%) is Amhara by ethnicity and Orthodox Christian (90%) by religion [27]. All youth females aged 15–24 years who were living in the study area were the study population. 
Those youth females who lived at least six months in the town were included in the study.", "Single population proportional formula (n = [(Z\nα/2)2 P (1 - P)]/d2) was used to determine sample size based on the following assumptions: 95% level of confidence (Zα/2 = 1.96), the proportion of respondents who heard about sex trafficking in a previous study (p = 86%) (9), and margin of error (d = 4%). By considering 5% contingency, and 1.5 design effect, the final sample size was 456 youth females. Four urban kebeles (the lowest administrative level) of the total nine kebeles in the town were selected using a simple random sampling technique [28]. Proportional to size allocation was used in order to determine the required numbers of youth females from each kebele. Household was used as a sampling unit in this study. Households were selected using systematic random sampling. If there were more than one youth female in a household, lottery method was used to select one participant. If the selected youth was not available at home at the time of visit, revisit for the second time was made to contact the selected youth for interview.", "A structured pretested Amharic questionnaire was used to elicit information about sex trafficking from the study participants (see Additional file 1). The questionnaire had two sections. The first section of the questionnaire was about socio-demographic characteristics of the youth females and their parents and the second section was on sex trafficking awareness. A youth female was classified as having awareness about sex trafficking if she reported that she had heard or read about sex trafficking that a woman who had been taken to another place or foreign countries for the purposes of sexual exploitation to gain money or other personal gains. As an indirect measure of the prevalence of sex trafficking in the area, a participant was asked whether or not she had been approached by someone that assists to go abroad.\nTen grade 10 complete females and two others were recruited as data collectors and supervisors for this study. One day training was given to data collectors and supervisors with particular emphasis on the objective of the study and methods of the survey.", "The questionnaire was pretested to evaluate the face-validity and to ensure whether the study participants understood what the investigators intended to know and some modification of questions were made. Training was given to data collectors and supervisors on how to select household and study participants. Daily supervision was done by the principal investigator to check the completeness of the questionnaire. Female data collectors were used to get accurate information since most questions are gender sensitive.", "Data were entered, cleaned and analyzed with SPSS 16 software. Descriptive statistics such as mean and percentage were used to describe the data. Bivariate and multivariable logistic regression analyses were used to identify the predictors of sex trafficking awareness. Odds ratio with 95% confidence interval (CI) was calculated to identify predictors of sex trafficking awareness.", "Ethical clearance was taken from the ethical committee of the Bahir Dar University, College of Medicine and Health Sciences before data collection. Informed written consent was taken from the respective kebeles. Informed verbal consent and assent was obtained from study participants or parents after explaining the purposes of the study. Those who gave their consent and assent were interviewed. 
Confidentiality was ensured by collecting the data anonymously.

Results
A total of 417 youth females participated in this study, giving a response rate of 92%. About 64.7% of the participants were in the age range of 20-24 years and 57.3% were single. The majority (83.0%) were Orthodox Christian, and 53% had completed grade 10 or above. About 72.9% and 70.5% of the participants had a radio and a television in their house, respectively. Students made up the largest occupational group (33.6%). One hundred twenty-two participants had taken training on gender issues (Table 1).

Table 1. Socio-demographic characteristics of youth females in Bahir Dar town, North-West Ethiopia, February 2012.

About 59.7% of the youth females reported that they had read or heard about sex trafficking. Television (64%), friends (46%), radio (37%) and print materials (17%) were mentioned as sources of information. Friends and brokers were mentioned as mediators of sex trafficking by 87% and 74% of the participants, respectively. Hoping for a better life elsewhere (72%), unemployment (50%), poverty (45%) and illiteracy (18%) were mentioned as reasons for being trafficked. About 71.4% of the participants reported that youth females aged under 25 years are more vulnerable to sex trafficking than those aged over 25. About 25% of the participants reported that they had been approached by someone offering to assist them to go abroad (Table 2).

Table 2. Sex trafficking awareness among youth females in Bahir Dar town, North-West Ethiopia, February 2012.
*Multiple responses were possible. a: mini media, family, internet; b: relatives, strangers, tourists; c: family, divorce, family loss.

Factors associated with awareness of sex trafficking
In bivariate logistic regression analysis, having a radio and a television at home, living with parents or with a boyfriend, fiancé or husband, and having taken training on gender issues were significantly associated with awareness of sex trafficking. In the multivariable analysis, having a television at home, completing grade 10 or above, living with others (with parents; with a boyfriend, fiancé or husband; with either mother or father; or with relatives) rather than alone, and having taken training on gender issues remained significantly associated with sex trafficking awareness (Table 3).

Table 3. Factors associated with awareness of sex trafficking among youth females in Bahir Dar town, North-West Ethiopia, February 2012.
*COR: crude odds ratio; AOR: adjusted odds ratio.

Those whose educational status was grade 10 or above were 2.22 times (AOR = 2.22, 95% CI: 1.18-4.17) more likely to be aware of sex trafficking than illiterate participants. Youth who had a television at home were about 2.19 times (AOR = 2.19, 95% CI: 1.31-3.67) more likely to be aware of sex trafficking than their counterparts. Compared with those living alone, youth living with parents (AOR = 3.65, 95% CI: 1.68-7.93), with a boyfriend, fiancé or husband (AOR = 3.46, 95% CI: 1.69-7.06), with either mother or father (AOR = 2.31, 95% CI: 1.03-5.16) or with relatives (AOR = 2.86, 95% CI: 1.24-6.58) were more likely to be aware of sex trafficking. Youth trained on gender issues were 3.59 times (AOR = 3.59, 95% CI: 2.11-6.10) more likely to be aware of sex trafficking than those who had not taken the training (Table 3).
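As a quick plausibility check on the estimates above: a Wald 95% confidence interval for an odds ratio is symmetric on the log-odds scale, so a reported AOR should sit near the geometric mean of its interval bounds, and the standard error of the log odds ratio can be recovered from the interval width. A short sketch using the education estimate from Table 3 (this assumes the interval is a standard Wald interval, which the paper does not state explicitly):

```python
# Back-calculate SE(log OR) from a reported AOR and 95% CI and check
# that the point estimate matches the geometric mean of the bounds.
import math

aor, lo, hi = 2.22, 1.18, 4.17  # grade 10 or above vs. illiterate (Table 3)

se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
print(f"recovered SE(log OR) = {se:.3f}")                  # about 0.32
print(f"geometric mean of CI = {math.sqrt(lo * hi):.2f}")  # about 2.22, matching the AOR
```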
Discussion
Sex trafficking is one of the major contemporary public health issues in both developed and developing countries. Young girls and children from low-income countries are particularly vulnerable to sex trafficking [2,11,24,25], and many girls in developing countries still hope to go abroad without adequate information about the circumstances awaiting them there.

In this study, 60% of the participants reported that they had heard or read about sex trafficking. This is lower than in a study done in Benin City, Nigeria, where about 97.4% of respondents reported that they had heard or read about sex trafficking [10], and lower than in a study among students in urban and rural schools of Delta and Edo states, Nigeria (86.1%) [9]. Similarly, about 25% of the young women in this study had been approached by someone offering to assist them to go abroad, which is lower than in the Benin City study (31.9%). This discrepancy may reflect differences in the extent of the problem, in sex trafficking promotion activities, and in the cultural acceptability of discussing sensitive issues in the study areas. For instance, at the time of the Nigerian studies there were well-organized programs recruiting young women and girls for sex trafficking, and a large proportion of young women in Benin City had been approached with offers to travel abroad [9,10]. Governmental and non-governmental organizations in Nigeria were also promoting sex trafficking awareness through print media, and victims of sex trafficking shared their experiences on television [10].

The sources of information about sex trafficking in this study were television, friends, radio and print materials, consistent with the study done in Delta and Edo states of Nigeria [9]. About 29% of the participants had taken some training on gender issues, which is likely to increase awareness of sex trafficking.

Poverty has been mentioned as a reason for sex trafficking in different regions of the world: victims are trafficked from relatively poorer areas to more affluent ones [2], and poverty may lead women to leave home in search of work elsewhere [11,24-26]. In this study, poverty (45%), unemployment (50%) and hoping for a better life elsewhere (72%) were mentioned as reasons for sex trafficking. These reasons all relate to the search for economic opportunity, which is consistent with other findings supporting poverty as an underlying cause of sex trafficking [17,21]. Studies in South Asia, Nigeria and South Africa similarly found that poverty was the underlying cause of sex trafficking [10,29-31].

Friends and brokers were mentioned by 87% and 74% of the respondents, respectively, as mediators of sex trafficking, consistent with studies done in Ethiopia, Nigeria and South Africa [9,10,31,32].

About 71% of the respondents reported that young females aged under 25 were more vulnerable than those aged over 25.
This finding is consistent with a study done in South Asia (72%) [21].

In this study, completing grade 10 or above, having a television at home, living with parents or with a boyfriend, fiancé or husband, living with either mother or father or with relatives, and taking training on gender issues were the factors associated with sex trafficking awareness. All of these factors relate to the accessibility of information on gender issues, including sex trafficking, through different channels. Moreover, over the past decade international organizations and governmental and non-governmental organizations have worked on the empowerment of women to understand their rights and responsibilities through media, workshops and training, which helps females gain better awareness of reproductive issues and sexual rights, including sex trafficking [33]. This study also has some limitations. Social desirability bias is one potential challenge, since some of the information was gender sensitive. Furthermore, the results may not generalize beyond the study population, since the study involved a small sample from a single, non-randomly selected city (Bahir Dar).

Conclusion
The level of awareness of youth females about sex trafficking in this study was low. Having a television at home, completing grade 10 or above, living together with someone and taking training on gender issues were the predictors of sex trafficking awareness. Awareness creation about sex trafficking should be delivered to youth females through different approaches to increase the accessibility of information. Further research is recommended to determine the magnitude and nature of sex trafficking in the study area in detail.

Abbreviations
AOR: adjusted odds ratio; CI: confidence interval; IOM: International Organization for Migration; ILO: International Labor Organization; SPSS: Statistical Package for the Social Sciences; STDs: sexually transmitted diseases.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
MA designed the study, developed the questionnaire, supervised the data collection, analyzed the data and wrote the paper. GA supervised the data collection and contributed to the interpretation of the findings as well as the drafting and writing of the manuscript. AM trained the data collectors, supervised the data collection, contributed to the interpretation of the findings and helped to draft the manuscript. All authors read and approved the final manuscript.

Authors' information
MA holds a BSc in Environmental Health and a Master of Public Health in Environmental Health and works as a lecturer at the Department of Public Health, College of Medicine and Health Sciences, Bahir Dar University. GA holds a BSc in Nursing and a Master of Public Health in Reproductive Health and works as a lecturer at the same department. AM holds a BSc in Midwifery and a Master of Public Health in Reproductive Health and works as a lecturer at the same department.

Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6874/14/85/prepub

Supplementary material
Additional file 1: Questionnaire to assess awareness of sex trafficking.
[ null, "methods", null, null, null, null, null, null, "results", null, "discussion", "conclusions", null, null, null, null, null, "supplementary-material" ]
[ "Awareness", "Sex trafficking", "Youth females" ]
5,996
406
[ 876, 144, 220, 187, 81, 63, 73, 387, 38, 10, 79, 111, 16 ]
18
[ "sex", "trafficking", "sex trafficking", "study", "youth", "awareness", "females", "data", "youth females", "participants" ]
[ "sex trafficking findings", "sex trafficking compared", "predictors sex trafficking", "trafficking increasing women", "prevalence human trafficking" ]
[CONTENT] Awareness | Sex trafficking | Youth females [SUMMARY]
[CONTENT] Awareness | Sex trafficking | Youth females [SUMMARY]
[CONTENT] Awareness | Sex trafficking | Youth females [SUMMARY]
[CONTENT] Awareness | Sex trafficking | Youth females [SUMMARY]
[CONTENT] Awareness | Sex trafficking | Youth females [SUMMARY]
[CONTENT] Awareness | Sex trafficking | Youth females [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Educational Status | Ethiopia | Family Characteristics | Female | Human Trafficking | Humans | Information Dissemination | Logistic Models | Multivariate Analysis | Radio | Residence Characteristics | Surveys and Questionnaires | Television | Young Adult [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Educational Status | Ethiopia | Family Characteristics | Female | Human Trafficking | Humans | Information Dissemination | Logistic Models | Multivariate Analysis | Radio | Residence Characteristics | Surveys and Questionnaires | Television | Young Adult [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Educational Status | Ethiopia | Family Characteristics | Female | Human Trafficking | Humans | Information Dissemination | Logistic Models | Multivariate Analysis | Radio | Residence Characteristics | Surveys and Questionnaires | Television | Young Adult [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Educational Status | Ethiopia | Family Characteristics | Female | Human Trafficking | Humans | Information Dissemination | Logistic Models | Multivariate Analysis | Radio | Residence Characteristics | Surveys and Questionnaires | Television | Young Adult [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Educational Status | Ethiopia | Family Characteristics | Female | Human Trafficking | Humans | Information Dissemination | Logistic Models | Multivariate Analysis | Radio | Residence Characteristics | Surveys and Questionnaires | Television | Young Adult [SUMMARY]
[CONTENT] Adolescent | Cross-Sectional Studies | Educational Status | Ethiopia | Family Characteristics | Female | Human Trafficking | Humans | Information Dissemination | Logistic Models | Multivariate Analysis | Radio | Residence Characteristics | Surveys and Questionnaires | Television | Young Adult [SUMMARY]
[CONTENT] sex trafficking findings | sex trafficking compared | predictors sex trafficking | trafficking increasing women | prevalence human trafficking [SUMMARY]
[CONTENT] sex trafficking findings | sex trafficking compared | predictors sex trafficking | trafficking increasing women | prevalence human trafficking [SUMMARY]
[CONTENT] sex trafficking findings | sex trafficking compared | predictors sex trafficking | trafficking increasing women | prevalence human trafficking [SUMMARY]
[CONTENT] sex trafficking findings | sex trafficking compared | predictors sex trafficking | trafficking increasing women | prevalence human trafficking [SUMMARY]
[CONTENT] sex trafficking findings | sex trafficking compared | predictors sex trafficking | trafficking increasing women | prevalence human trafficking [SUMMARY]
[CONTENT] sex trafficking findings | sex trafficking compared | predictors sex trafficking | trafficking increasing women | prevalence human trafficking [SUMMARY]
[CONTENT] sex | trafficking | sex trafficking | study | youth | awareness | females | data | youth females | participants [SUMMARY]
[CONTENT] sex | trafficking | sex trafficking | study | youth | awareness | females | data | youth females | participants [SUMMARY]
[CONTENT] sex | trafficking | sex trafficking | study | youth | awareness | females | data | youth females | participants [SUMMARY]
[CONTENT] sex | trafficking | sex trafficking | study | youth | awareness | females | data | youth females | participants [SUMMARY]
[CONTENT] sex | trafficking | sex trafficking | study | youth | awareness | females | data | youth females | participants [SUMMARY]
[CONTENT] sex | trafficking | sex trafficking | study | youth | awareness | females | data | youth females | participants [SUMMARY]
[CONTENT] women | girls | sex | trafficking | sex trafficking | young | study | associated | factors | women girls [SUMMARY]
[CONTENT] study | data | youth | population | trafficking | sex trafficking | sex | females | questionnaire | selected [SUMMARY]
[CONTENT] aor | 95 ci | sex trafficking | trafficking | sex | ci | 95 | youth | times likely | times [SUMMARY]
[CONTENT] sex trafficking | sex | trafficking | awareness | television home completing grade | home completing | sex trafficking research recommended | sex trafficking study low | magnitude | magnitude nature [SUMMARY]
[CONTENT] sex | trafficking | sex trafficking | study | data | awareness | youth | questionnaire | females | aor [SUMMARY]
[CONTENT] sex | trafficking | sex trafficking | study | data | awareness | youth | questionnaire | females | aor [SUMMARY]
[CONTENT] ||| ||| 18-25 ||| Ethiopia ||| Bahir Dar | North-West Ethiopia [SUMMARY]
[CONTENT] February 1st-30th 2012 | 417 ||| ||| ||| SPSS 16.0 ||| ||| [SUMMARY]
[CONTENT] Two hundred forty-nine | 60% ||| 64% | 46% | 39% ||| About 87% and 74% ||| 2. 19 | 95% | CI | 1.31-3.67 | 10 | 2 | 95% | CI | 1.18 | 3 | 95% | CI | 2.11-6.10 | 3. 65 | 95% | CI | 1.68-7.93 [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] ||| ||| 18-25 ||| Ethiopia ||| Bahir Dar | North-West Ethiopia ||| February 1st-30th 2012 | 417 ||| ||| ||| SPSS 16.0 ||| ||| ||| Two hundred forty-nine | 60% ||| 64% | 46% | 39% ||| About 87% and 74% ||| 2. 19 | 95% | CI | 1.31-3.67 | 10 | 2 | 95% | CI | 1.18 | 3 | 95% | CI | 2.11-6.10 | 3. 65 | 95% | CI | 1.68-7.93 ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| 18-25 ||| Ethiopia ||| Bahir Dar | North-West Ethiopia ||| February 1st-30th 2012 | 417 ||| ||| ||| SPSS 16.0 ||| ||| ||| Two hundred forty-nine | 60% ||| 64% | 46% | 39% ||| About 87% and 74% ||| 2. 19 | 95% | CI | 1.31-3.67 | 10 | 2 | 95% | CI | 1.18 | 3 | 95% | CI | 2.11-6.10 | 3. 65 | 95% | CI | 1.68-7.93 ||| ||| ||| [SUMMARY]
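The prompt fields above all share a "[CONTENT] ... [SUMMARY]" layout, with items separated by a single "|" in the keyword-style prompts and sentence groups separated by "|||" in the entity-style prompts. A minimal parsing sketch, assuming exactly that layout (the function name is illustrative, not part of the dataset):

```python
# A minimal sketch for parsing the "[CONTENT] ... [SUMMARY]" prompt strings
# shown above. Assumes "|||"-separated entity prompts and "|"-separated
# keyword prompts; not an official loader for this dataset.
import re

def parse_prompt(prompt: str) -> list[str]:
    """Extract the items between [CONTENT] and [SUMMARY]."""
    m = re.search(r"\[CONTENT\](.*?)\[SUMMARY\]", prompt, re.DOTALL)
    if not m:
        return []
    body = m.group(1)
    # Entity-style prompts use "|||"; keyword-style prompts use "|".
    sep = "|||" if "|||" in body else "|"
    return [item.strip() for item in body.split(sep) if item.strip()]

print(parse_prompt("[CONTENT] Awareness | Sex trafficking | Youth females [SUMMARY]"))
# -> ['Awareness', 'Sex trafficking', 'Youth females']
```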
Posterior lumbar interbody fusion using a unilateral single cage and a local morselized bone graft in the degenerative lumbar spine.
19956479
We retrospectively evaluated the clinical and radiological outcomes of posterior lumbar interbody fusion (PLIF) using a unilateral single cage and a local morselized bone graft.
BACKGROUND
Fifty-three patients who underwent PLIF with a unilateral single cage filled with a local morselized bone graft were enrolled in this study. The average follow-up duration was 31.1 months. The clinical outcomes were evaluated using the visual analogue scale (VAS) preoperatively, at 1 year postoperatively and at the last follow-up, and using the Oswestry Disability Index, the Prolo scale and the Kim & Kim criteria at the last follow-up; the radiological outcomes were evaluated according to bone bridging, radiolucency, instability and the change in disc height.
METHODS
For the clinical evaluation, the VAS pain index, the Oswestry Disability Index, the Prolo scale and the Kim & Kim criteria showed excellent outcomes. For the radiological evaluation, 52 cases showed complete bone union at the last follow-up. Regarding complications, only 1 patient had cage breakage during follow-up.
RESULTS
PLIF using a unilateral single cage filled with a local morselized bone graft has the advantages of a shorter operation time, less blood loss and a shorter hospital stay compared with PLIF using bilateral cages for treating degenerative lumbar spine disease. This technique also provides excellent clinical and radiological outcomes.
CONCLUSIONS
[ "Adult", "Aged", "Blood Loss, Surgical", "Bone Transplantation", "Female", "Follow-Up Studies", "Humans", "Intervertebral Disc Degeneration", "Lumbar Vertebrae", "Male", "Middle Aged", "Prosthesis Implantation", "Radiography", "Retrospective Studies", "Spinal Fusion", "Spinal Stenosis", "Spondylolisthesis", "Time and Motion Studies", "Treatment Outcome" ]
2784962
null
null
METHODS
Materials Between January 2003 and September 2006, unilateral PLIFs using a single cage filled with a local morselized bone graft were performed at our institution for patients diagnosed with spinal stenosis, herniation of an intervertebral disc combined with lumbar instability, or spondylolisthesis. The local chip bone graft, which was obtained during posterior decompression, was packed into the anterior area after discectomy and before cage insertion. Fifty-three of these patients, who were followed up for more than 1 year, were included in this study. The mean age at the time of surgery was 59.1 years (range, 39 to 77 years). There were 18 males and 35 females. The mean follow-up period was 31.1 months (range, 12 to 54 months). The indication for surgery was spinal stenosis in 36 cases, spondylolisthesis in 12 and herniation of an intervertebral disc combined with lumbar instability in 5. Single-level fusion was performed in 39 patients, two-level fusion in 9 and three-level fusion in 5. Surgical Technique The patients were placed in the prone position under general anesthesia. With the muscles adjacent to the spine retracted laterally to minimize damage, the area lateral to the lamina and the posterior joint was exposed via a posteromedial approach without exposing the transverse process. Nerve root decompression was achieved by laminectomy, complete excision of the inferior articular process and discectomy, depending on the cause of disease. Insertion of the single cage was planned on the side with the more severe symptoms or the more severely stenotic foramen seen on MRI, and the dura mater and the nerve root were medially retracted. Extensive removal of the intervertebral disc and the adjacent end plates was performed on the ipsilateral side using a pituitary rongeur and a curved curette until subchondral bone was exposed. The size of the cage was determined based on the disc height. The titanium cages used (Titanum O.I.C.®, Stryker, NJ, USA) came in various sizes (width: 11 mm; angulation: 0°, 4° or 8°; height: 9-13 mm; length: 20 or 25 mm) and were rectangular in shape and radiopaque. The lamina, spinous process and posterior articular process obtained during decompression were morselized in a bone mill and packed into the cage. Before insertion of the cage, as much local morselized bone as possible was grafted into the anterior side of the intervertebral space (Fig. 1).
Pedicle screw fixation was carried out after inserting the cage to secure stability and to promote bony union immediately after surgery. Standard wound closure was performed following hemostasis. From the 3rd postoperative day, a lumbo-sacral orthosis was used for 4-5 weeks when the patient was walking. Assessments The clinical evaluation was based on the visual analogue scale (VAS) for back pain and radiating pain assessed preoperatively, at the 1st, 2nd, 4th, 6th, 12th and 24th postoperative months, and at the last follow-up. The Oswestry Disability Index was assessed preoperatively and at the last follow-up, the Prolo scale was obtained at the last follow-up and the Kim & Kim criteria16) were also used (Tables 1 and 2). Lateral plain radiographs taken preoperatively, immediately postoperatively and at the last follow-up were compared for the radiological assessment. Although the radiopaque titanium cage made it difficult to assess whether bony union had been achieved, the local morselized bone graft impacted anterior to the cage allowed the bony union to be evaluated directly. In other words, we carefully looked for bone bridging, for radiolucency around the cage and the metal screws, and for any evidence of instability on the flexion-extension lateral radiographs when assessing the bony union. Changes in the intervertebral disc height were evaluated using the restored disc height, based on measurements performed preoperatively, immediately postoperatively and at the last follow-up. The tube-to-patient distance was 40 inches, and magnification error was avoided by adjusting the radiation dose of the anteroposterior and lateral radiography to 5.50 dGycm2 and 10.00 dGycm2, respectively.
The bony union status was classified into solid union, delayed union and non-union. Solid union was considered to have been obtained when the endplates seen immediately postoperatively on the radiographs became invisible during follow-up; there was bony trabecular continuity and bone bridging from the graft to the adjacent vertebral bodies in the intervertebral space; the bone graft that appeared as granules on the lateral radiographs became a radiopaque mass after union; and neither instability on the flexion-extension radiographs nor radiolucency around the cage and screws was observed. Non-union was defined as disruption of the trabecular continuity, the appearance of instability on the flexion-extension radiographs and ≥ 1 mm radiolucency around the screws and cage. Delayed union was diagnosed when the criteria for solid union were not yet fully met, although neither disruption of the trabecular continuity nor other evidence of non-union was observable.17) Instability was considered present when ≥ 3° of posterior angulation was observed on the lateral radiographs together with ≥ 2 mm of displacement of the vertebral body and cage movement. The duration of surgery and the hemorrhage volume were also recorded. SPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Changes in the intervertebral disc height were evaluated using a paired-sample t-test with a 95% confidence interval.
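The radiographic union and instability criteria above amount to a small decision rule. Below is a minimal sketch (not from the paper) that encodes them; the field names are invented for illustration, and the endplate-invisibility and radiopaque-mass signs are folded into a single continuity flag.

```python
# Rule-based sketch of the union/instability criteria described above.
# Field names are hypothetical; thresholds follow the text (>= 1 mm
# radiolucency, >= 3 degrees angulation, >= 2 mm displacement).
from dataclasses import dataclass

@dataclass
class RadiographicFindings:
    trabecular_continuity: bool  # bridging trabeculae from graft to vertebrae
    radiolucency_mm: float       # widest lucent line around screws/cage
    angulation_deg: float        # posterior angulation on flexion-extension
    displacement_mm: float       # vertebral body/cage translation

def is_unstable(f: RadiographicFindings) -> bool:
    # Instability per the text: >= 3 degrees of posterior angulation
    # together with >= 2 mm of displacement (cage movement).
    return f.angulation_deg >= 3.0 and f.displacement_mm >= 2.0

def union_status(f: RadiographicFindings) -> str:
    lucent = f.radiolucency_mm >= 1.0  # >= 1 mm radiolucency criterion
    if not f.trabecular_continuity and is_unstable(f) and lucent:
        return "non-union"
    if f.trabecular_continuity and not is_unstable(f) and not lucent:
        return "solid union"
    return "delayed union"  # solid-union criteria not yet met, no non-union signs

print(union_status(RadiographicFindings(True, 0.0, 1.0, 0.5)))  # -> solid union
```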
RESULTS
The clinical outcomes were as follows. The VAS score for back pain improved significantly from 6.5 preoperatively to 2.8 at the 1st postoperative year and 1.8 at the last follow-up, and the score for radiating pain improved from 6.1 to 2.7 and 1.8, respectively (Table 3). The Oswestry Disability Index improved remarkably from 70.0 preoperatively to 37.9 at the last follow-up. The Economic Prolo Scale was 3 in 3 cases, 4 in 38 cases and 5 in 12 cases, while the Functional Prolo Scale was 3 in 5 cases, 4 in 36 cases and 5 in 12 cases; according to the Kim & Kim criteria, 12 (23%) of the 53 cases were rated excellent, 39 (73%) good and 2 (4%) fair. The radiological outcomes were as follows: of the 53 cases, solid union was observed in 50 cases (94.4%) and delayed union in 3 cases (5.6%) at the 6th postoperative month, and complete union was identified in 52 cases (98.1%) at the last follow-up (Figs. 2 and 3). Radiolucency around the cage and pedicle screws was not observed in any of the cases at the last follow-up. Although instability caused by cage breakage was identified in 1 case (2%) at the 5th postoperative month, stability without further breakage was achieved by the 8th postoperative month. In that case, trabecular continuity was not obvious at the last follow-up, but neither radiolucency around the cage and screws nor instability on the flexion-extension radiographs was observed (Fig. 4). The intervertebral disc height improved significantly from 9.21 mm preoperatively to 13.63 mm immediately postoperatively and was 12.47 mm at the last follow-up. The mean increase of 3.26 mm in the intervertebral disc height from the preoperative measurement to the last follow-up was statistically significant (p = 0.009) (Fig. 5, Table 4). The mean duration of surgery was 221.5 minutes (range, 140 to 320 minutes) for single-level fusion, 258.9 minutes (range, 200 to 440 minutes) for two-level fusion and 353.3 minutes (range, 290 to 410 minutes) for three-level fusion. The mean hemorrhage volume was 933.3 ml for single-level fusion, 964.2 ml for two-level fusion and 1,011.6 ml for three-level fusion. The mean hospitalization period was 14.5 days (range, 7 to 28 days). Immediately after surgery, one case of cauda equina syndrome caused by hematoma formation was treated with removal of the hematoma. Although cage breakage was observed in 1 case, radiography revealed no instability at the last follow-up.
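The disc-height comparison reported above is a paired-sample t-test with a 95% confidence interval. The sketch below is a hedged illustration: the simulated values are chosen to resemble the reported means but are not the study's measurements.

```python
# Illustrative only: paired-sample t-test on pre- vs. follow-up disc heights.
# Numbers are simulated stand-ins, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(9.2, 1.5, 53)          # preoperative disc height (mm)
last = pre + rng.normal(3.3, 1.0, 53)   # disc height at last follow-up (mm)

t_stat, p_value = stats.ttest_rel(last, pre)  # paired-sample t-test

diff = last - pre
ci_low, ci_high = stats.t.interval(0.95, len(diff) - 1,
                                   loc=diff.mean(), scale=stats.sem(diff))
print(f"mean increase = {diff.mean():.2f} mm, "
      f"t = {t_stat:.2f}, p = {p_value:.3g}, "
      f"95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```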
null
null
[ "Materials", "Surgical Technique", "Assessments" ]
[ "Between January 2003 and September 2006, unilateral PLIFs using a single cage filled with a local morselized bone graft were performed at our institution for patients diagnosed with spinal stenosis, herniation of an intervertebral disc combined with lumbar instability, or spondylolisthesis. The local chip bone graft, which was obtained during posterior decompression, was packed into the anterior area after discectomy and before cage insertion. Fifty-three of these patients, who were followed up for more than 1 year, were included in this study. The mean age at the time of surgery was 59.1 years (range, 39 to 77 years). There were 18 males and 35 females. The mean follow-up period was 31.1 months (range, 12 to 54 months). The indication for surgery was spinal stenosis in 36 cases, spondylolisthesis in 12 and herniation of an intervertebral disc combined with lumbar instability in 5. Single-level fusion was performed in 39 patients, two-level fusion in 9 and three-level fusion in 5.", "The patients were placed in the prone position under general anesthesia. With the muscles adjacent to the spine retracted laterally to minimize damage, the area lateral to the lamina and the posterior joint was exposed via a posteromedial approach without exposing the transverse process. Nerve root decompression was achieved by laminectomy, complete excision of the inferior articular process and discectomy, depending on the cause of disease. Insertion of the single cage was planned on the side with the more severe symptoms or the more severely stenotic foramen seen on MRI, and the dura mater and the nerve root were medially retracted. Extensive removal of the intervertebral disc and the adjacent end plates was performed on the ipsilateral side using a pituitary rongeur and a curved curette until subchondral bone was exposed. The size of the cage was determined based on the disc height. The titanium cages used (Titanum O.I.C.®, Stryker, NJ, USA) came in various sizes (width: 11 mm; angulation: 0°, 4° or 8°; height: 9-13 mm; length: 20 or 25 mm) and were rectangular in shape and radiopaque. The lamina, spinous process and posterior articular process obtained during decompression were morselized in a bone mill and packed into the cage. Before insertion of the cage, as much local morselized bone as possible was grafted into the anterior side of the intervertebral space (Fig. 1). Pedicle screw fixation was carried out after inserting the cage to secure stability and to promote bony union immediately after surgery. Standard wound closure was performed following hemostasis. From the 3rd postoperative day, a lumbo-sacral orthosis was used for 4-5 weeks when the patient was walking.", "The clinical evaluation was based on the visual analogue scale (VAS) for back pain and radiating pain assessed preoperatively, at the 1st, 2nd, 4th, 6th, 12th and 24th postoperative months, and at the last follow-up. The Oswestry Disability Index was assessed preoperatively and at the last follow-up, the Prolo scale was obtained at the last follow-up and the Kim & Kim criteria16) were also used (Tables 1 and 2).\nLateral plain radiographs taken preoperatively, immediately postoperatively and at the last follow-up were compared for the radiological assessment. Although the radiopaque titanium cage made it difficult to assess whether bony union had been achieved, the local morselized bone graft impacted anterior to the cage allowed the bony union to be evaluated directly. In other words, we carefully looked for bone bridging, for radiolucency around the cage and the metal screws, and for any evidence of instability on the flexion-extension lateral radiographs when assessing the bony union. Changes in the intervertebral disc height were evaluated using the restored disc height, based on measurements performed preoperatively, immediately postoperatively and at the last follow-up. The tube-to-patient distance was 40 inches, and magnification error was avoided by adjusting the radiation dose of the anteroposterior and lateral radiography to 5.50 dGycm2 and 10.00 dGycm2, respectively. The bony union status was classified into solid union, delayed union and non-union. Solid union was considered to have been obtained when the endplates seen immediately postoperatively on the radiographs became invisible during follow-up; there was bony trabecular continuity and bone bridging from the graft to the adjacent vertebral bodies in the intervertebral space; the bone graft that appeared as granules on the lateral radiographs became a radiopaque mass after union; and neither instability on the flexion-extension radiographs nor radiolucency around the cage and screws was observed. Non-union was defined as disruption of the trabecular continuity, the appearance of instability on the flexion-extension radiographs and ≥ 1 mm radiolucency around the screws and cage. Delayed union was diagnosed when the criteria for solid union were not yet fully met, although neither disruption of the trabecular continuity nor other evidence of non-union was observable.17) Instability was considered present when ≥ 3° of posterior angulation was observed on the lateral radiographs together with ≥ 2 mm of displacement of the vertebral body and cage movement.\nThe duration of surgery and the hemorrhage volume were also recorded.\nSPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Changes in the intervertebral disc height were evaluated using a paired-sample t-test with a 95% confidence interval." ]
[ null, null, null ]
[ "METHODS", "Materials", "Surgical Technique", "Assessments", "RESULTS", "DISCUSSION" ]
[ " Materials Between January 2003 and September 2006, unilateral PLIFs using a single cage filled with a local morselized bone graft were performed at our institution for patients diagnosed with spinal stenosis, herniation of an intervertebral disc combined with lumbar instability, or spondylolisthesis. The local chip bone graft, which was obtained during posterior decompression, was packed into the anterior area after discectomy and before cage insertion. Fifty-three of these patients, who were followed up for more than 1 year, were included in this study. The mean age at the time of surgery was 59.1 years (range, 39 to 77 years). There were 18 males and 35 females. The mean follow-up period was 31.1 months (range, 12 to 54 months). The indication for surgery was spinal stenosis in 36 cases, spondylolisthesis in 12 and herniation of an intervertebral disc combined with lumbar instability in 5. Single-level fusion was performed in 39 patients, two-level fusion in 9 and three-level fusion in 5.\n Surgical Technique The patients were placed in the prone position under general anesthesia. With the muscles adjacent to the spine retracted laterally to minimize damage, the area lateral to the lamina and the posterior joint was exposed via a posteromedial approach without exposing the transverse process. Nerve root decompression was achieved by laminectomy, complete excision of the inferior articular process and discectomy, depending on the cause of disease. Insertion of the single cage was planned on the side with the more severe symptoms or the more severely stenotic foramen seen on MRI, and the dura mater and the nerve root were medially retracted. Extensive removal of the intervertebral disc and the adjacent end plates was performed on the ipsilateral side using a pituitary rongeur and a curved curette until subchondral bone was exposed. The size of the cage was determined based on the disc height. The titanium cages used (Titanum O.I.C.®, Stryker, NJ, USA) came in various sizes (width: 11 mm; angulation: 0°, 4° or 8°; height: 9-13 mm; length: 20 or 25 mm) and were rectangular in shape and radiopaque. The lamina, spinous process and posterior articular process obtained during decompression were morselized in a bone mill and packed into the cage. Before insertion of the cage, as much local morselized bone as possible was grafted into the anterior side of the intervertebral space (Fig. 1). Pedicle screw fixation was carried out after inserting the cage to secure stability and to promote bony union immediately after surgery. Standard wound closure was performed following hemostasis. From the 3rd postoperative day, a lumbo-sacral orthosis was used for 4-5 weeks when the patient was walking.\n Assessments The clinical evaluation was based on the visual analogue scale (VAS) for back pain and radiating pain assessed preoperatively, at the 1st, 2nd, 4th, 6th, 12th and 24th postoperative months, and at the last follow-up. The Oswestry Disability Index was assessed preoperatively and at the last follow-up, the Prolo scale was obtained at the last follow-up and the Kim & Kim criteria16) were also used (Tables 1 and 2).\nLateral plain radiographs taken preoperatively, immediately postoperatively and at the last follow-up were compared for the radiological assessment. Although the radiopaque titanium cage made it difficult to assess whether bony union had been achieved, the local morselized bone graft impacted anterior to the cage allowed the bony union to be evaluated directly. In other words, we carefully looked for bone bridging, for radiolucency around the cage and the metal screws, and for any evidence of instability on the flexion-extension lateral radiographs when assessing the bony union. Changes in the intervertebral disc height were evaluated using the restored disc height, based on measurements performed preoperatively, immediately postoperatively and at the last follow-up. The tube-to-patient distance was 40 inches, and magnification error was avoided by adjusting the radiation dose of the anteroposterior and lateral radiography to 5.50 dGycm2 and 10.00 dGycm2, respectively. The bony union status was classified into solid union, delayed union and non-union. Solid union was considered to have been obtained when the endplates seen immediately postoperatively on the radiographs became invisible during follow-up; there was bony trabecular continuity and bone bridging from the graft to the adjacent vertebral bodies in the intervertebral space; the bone graft that appeared as granules on the lateral radiographs became a radiopaque mass after union; and neither instability on the flexion-extension radiographs nor radiolucency around the cage and screws was observed. Non-union was defined as disruption of the trabecular continuity, the appearance of instability on the flexion-extension radiographs and ≥ 1 mm radiolucency around the screws and cage. Delayed union was diagnosed when the criteria for solid union were not yet fully met, although neither disruption of the trabecular continuity nor other evidence of non-union was observable.17) Instability was considered present when ≥ 3° of posterior angulation was observed on the lateral radiographs together with ≥ 2 mm of displacement of the vertebral body and cage movement.\nThe duration of surgery and the hemorrhage volume were also recorded.\nSPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Changes in the intervertebral disc height were evaluated using a paired-sample t-test with a 95% confidence interval.", "Between January 2003 and September 2006, unilateral PLIFs using a single cage filled with a local morselized bone graft were performed at our institution for patients diagnosed with spinal stenosis, herniation of an intervertebral disc combined with lumbar instability, or spondylolisthesis. The local chip bone graft, which was obtained during posterior decompression, was packed into the anterior area after discectomy and before cage insertion. Fifty-three of these patients, who were followed up for more than 1 year, were included in this study. The mean age at the time of surgery was 59.1 years (range, 39 to 77 years). There were 18 males and 35 females. The mean follow-up period was 31.1 months (range, 12 to 54 months). The indication for surgery was spinal stenosis in 36 cases, spondylolisthesis in 12 and herniation of an intervertebral disc combined with lumbar instability in 5. Single-level fusion was performed in 39 patients, two-level fusion in 9 and three-level fusion in 5.", "The patients were placed in the prone position under general anesthesia. With the muscles adjacent to the spine retracted laterally to minimize damage, the area lateral to the lamina and the posterior joint was exposed via a posteromedial approach without exposing the transverse process. Nerve root decompression was achieved by laminectomy, complete excision of the inferior articular process and discectomy, depending on the cause of disease. Insertion of the single cage was planned on the side with the more severe symptoms or the more severely stenotic foramen seen on MRI, and the dura mater and the nerve root were medially retracted. Extensive removal of the intervertebral disc and the adjacent end plates was performed on the ipsilateral side using a pituitary rongeur and a curved curette until subchondral bone was exposed. The size of the cage was determined based on the disc height. The titanium cages used (Titanum O.I.C.®, Stryker, NJ, USA) came in various sizes (width: 11 mm; angulation: 0°, 4° or 8°; height: 9-13 mm; length: 20 or 25 mm) and were rectangular in shape and radiopaque. The lamina, spinous process and posterior articular process obtained during decompression were morselized in a bone mill and packed into the cage. Before insertion of the cage, as much local morselized bone as possible was grafted into the anterior side of the intervertebral space (Fig. 1). Pedicle screw fixation was carried out after inserting the cage to secure stability and to promote bony union immediately after surgery. Standard wound closure was performed following hemostasis. From the 3rd postoperative day, a lumbo-sacral orthosis was used for 4-5 weeks when the patient was walking.", "The clinical evaluation was based on the visual analogue scale (VAS) for back pain and radiating pain assessed preoperatively, at the 1st, 2nd, 4th, 6th, 12th and 24th postoperative months, and at the last follow-up. The Oswestry Disability Index was assessed preoperatively and at the last follow-up, the Prolo scale was obtained at the last follow-up and the Kim & Kim criteria16) were also used (Tables 1 and 2).\nLateral plain radiographs taken preoperatively, immediately postoperatively and at the last follow-up were compared for the radiological assessment. Although the radiopaque titanium cage made it difficult to assess whether bony union had been achieved, the local morselized bone graft impacted anterior to the cage allowed the bony union to be evaluated directly. In other words, we carefully looked for bone bridging, for radiolucency around the cage and the metal screws, and for any evidence of instability on the flexion-extension lateral radiographs when assessing the bony union. Changes in the intervertebral disc height were evaluated using the restored disc height, based on measurements performed preoperatively, immediately postoperatively and at the last follow-up. The tube-to-patient distance was 40 inches, and magnification error was avoided by adjusting the radiation dose of the anteroposterior and lateral radiography to 5.50 dGycm2 and 10.00 dGycm2, respectively. The bony union status was classified into solid union, delayed union and non-union. Solid union was considered to have been obtained when the endplates seen immediately postoperatively on the radiographs became invisible during follow-up; there was bony trabecular continuity and bone bridging from the graft to the adjacent vertebral bodies in the intervertebral space; the bone graft that appeared as granules on the lateral radiographs became a radiopaque mass after union; and neither instability on the flexion-extension radiographs nor radiolucency around the cage and screws was observed. Non-union was defined as disruption of the trabecular continuity, the appearance of instability on the flexion-extension radiographs and ≥ 1 mm radiolucency around the screws and cage. Delayed union was diagnosed when the criteria for solid union were not yet fully met, although neither disruption of the trabecular continuity nor other evidence of non-union was observable.17) Instability was considered present when ≥ 3° of posterior angulation was observed on the lateral radiographs together with ≥ 2 mm of displacement of the vertebral body and cage movement.\nThe duration of surgery and the hemorrhage volume were also recorded.\nSPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Changes in the intervertebral disc height were evaluated using a paired-sample t-test with a 95% confidence interval.", "The clinical outcomes were as follows. The VAS score for back pain improved significantly from 6.5 preoperatively to 2.8 at the 1st postoperative year and 1.8 at the last follow-up, and the score for radiating pain improved from 6.1 to 2.7 and 1.8, respectively (Table 3). The Oswestry Disability Index improved remarkably from 70.0 preoperatively to 37.9 at the last follow-up. The Economic Prolo Scale was 3 in 3 cases, 4 in 38 cases and 5 in 12 cases, while the Functional Prolo Scale was 3 in 5 cases, 4 in 36 cases and 5 in 12 cases; according to the Kim & Kim criteria, 12 (23%) of the 53 cases were rated excellent, 39 (73%) good and 2 (4%) fair.\nThe radiological outcomes were as follows: of the 53 cases, solid union was observed in 50 cases (94.4%) and delayed union in 3 cases (5.6%) at the 6th postoperative month, and complete union was identified in 52 cases (98.1%) at the last follow-up (Figs. 2 and 3). Radiolucency around the cage and pedicle screws was not observed in any of the cases at the last follow-up. Although instability caused by cage breakage was identified in 1 case (2%) at the 5th postoperative month, stability without further breakage was achieved by the 8th postoperative month. In that case, trabecular continuity was not obvious at the last follow-up, but neither radiolucency around the cage and screws nor instability on the flexion-extension radiographs was observed (Fig. 4). The intervertebral disc height improved significantly from 9.21 mm preoperatively to 13.63 mm immediately postoperatively and was 12.47 mm at the last follow-up. The mean increase of 3.26 mm in the intervertebral disc height from the preoperative measurement to the last follow-up was statistically significant (p = 0.009) (Fig. 5, Table 4).\nThe mean duration of surgery was 221.5 minutes (range, 140 to 320 minutes) for single-level fusion, 258.9 minutes (range, 200 to 440 minutes) for two-level fusion and 353.3 minutes (range, 290 to 410 minutes) for three-level fusion. The mean hemorrhage volume was 933.3 ml for single-level fusion, 964.2 ml for two-level fusion and 1,011.6 ml for three-level fusion. The mean hospitalization period was 14.5 days (range, 7 to 28 days).\nImmediately after surgery, one case of cauda equina syndrome caused by hematoma formation was treated with removal of the hematoma. Although cage breakage was observed in 1 case, radiography revealed no instability at the last follow-up.", "PLIF was designed to reduce the pain resulting from nerve compression and to secure the stability of the surgical constructs. The minor symptoms of degenerative lumbar diseases such as spinal stenosis and spondylolisthesis improve with conservative treatment in most cases. However, when symptoms such as back pain, radiating pain in the lower limb and neurogenic claudication severely restrict a person's daily activities, surgical options should be considered.1,8-11,13,18,19)\nInterbody fusion is one of the most common types of vertebral body fusion and is regarded as the most recommendable biomechanical technique. In particular, the popularity of interbody fusion using a cage has prompted the invention of various cages, which in turn has advanced PLIF techniques.13,20,21) Posterolateral fusion carries the risk of muscle fibrosis caused by the extensive release of the muscles adjacent to the transverse process, as well as blood loss and postoperative wound infection due to a lengthened operative time. In contrast, interbody fusion is advantageous in that it increases the fusion rate and avoids extensive muscle release around the transverse process because fusion is performed at the level of the spinal compression, and early stability and a high fusion rate have been obtained following PLIF with pedicle screw fixation.22-25) In this study, no complications such as infection developed following PLIF with pedicle screw fixation and without muscle release around the transverse process. In addition, early stability was obtained in many cases, and satisfying clinical results and solid fusion were achieved at the last follow-up.\nThere are three common types of PLIF techniques: one involves bilateral laminectomy and implantation of two cages, another involves unilateral laminectomy and implantation of two cages and the other involves unilateral laminectomy and implantation of one cage.26-28) The first and the last techniques have both been recently reported to be conducive to postoperative stability of the vertebral body. Oxland and Lund29) reported that single-cage PLIF provided high stability in flexion, that the supplementary use of pedicle screws improved the stabilization in all directions and that two-cage PLIF might increase the risk of damage to the bilateral nerve roots. Zhao et al.20,30) documented that single-cage PLIF was easier to perform than two-cage PLIF. In particular, retraction of the nerve roots and the dura mater on the asymptomatic side could be avoided with unilateral placement of a cage in patients with unilateral sciatica, and the supplementary use of pedicle screws also allowed immediate postoperative stabilization. They also added that single-cage PLIF was advantageous in reducing blood loss, operative time and hospital stay. In this study, single-cage PLIF minimized damage to the posterior structures while providing proper decompression, high stability and a remarkable fusion rate, and the cost of an additional cage was saved.\nAccording to the biomechanical comparison of single-cage and two-cage PLIF by Chiang et al.,1) both techniques result in a similar degree of spinal flexion, although the former additionally requires a bone graft. They postulated that the small implant-vertebral contact area of a single cage increases the contact compression load, which in turn raises the risk of cage subsidence, migration and breakage and increases the compression load on adjacent discs. In our study, the intervertebral contact surface could be enlarged by transplanting a bone graft anterior to the cage, which prevented the problems associated with single-cage PLIF. In addition, the graft anterior to the cage was an efficient indicator of bony union despite the radiopacity of the cages used.\nAn autogenous bone graft can be accomplished with tricortical, bicortical or monocortical bone, cancellous bone, a morselized local bone graft obtained during decompression, or transplantation of 3-5 blocks of morselized local bone obtained during decompression into the intervertebral disc space. An iliac crest bone graft facilitates rapid bone union, but it increases the risk of donor site pain, excessive blood loss, donor site infection and pelvic fracture, requires an additional skin incision and lengthens the operative time. In contrast, a local bone graft shortens the operative time and reduces blood loss because no bone harvesting procedure is necessary and there are no donor site problems, but it is disadvantageous for bony union because of its poorer bone quality compared with an iliac crest graft. In this study, the lamina, the articular process and the spinous process obtained during decompression were morselized and transplanted into the anterior portion of the intervertebral disc space, and then a cage filled with local morselized bone was inserted. With these procedures, we obtained excellent bony union and reduced the operative time and blood loss.\nIn conclusion, in this study unilateral PLIF was performed in which a local morselized bone graft was placed anterior to the cage and a single cage filled with the graft was inserted. This technique produced satisfying clinical and radiological outcomes, including maintenance of the proper intervertebral disc space, good bony union, rigid stability and a high fusion rate." ]
[ "methods", null, null, null, "results", "discussion" ]
[ "Spinal fusion", "Posterior lumbar interbody fusion", "Unilateral single cage", "Local morselized graft" ]
METHODS: Materials Between January 2003 and September 2006, unilateral PLIFs using a single cage filled with a local morselized bone graft were performed at our institution for the patients who were diagnosed with spinal stenosis, herniation of an intervertebral disc combined with lumbar instability, and spondylolisthesis. The local chip bone graft, which was obtained during posterior decompression, was packed in the anterior area before cage insertion and after performing discectomy. Fifty three of these patients who were followed up for more than 1 year were included in this study. The mean age at the time of surgery was 59.1 years (range, 39 to 77 years). There were 18 males and 35 females. The mean follow-up period was 31.1 months (range, 12 to 54 months). The indication for surgery was spinal stenosis in 36 cases, spondylolisthesis in 12 and herniation of an intervertebral disc combined with lumbar instability in 5. Single-level fusion was perfomed in 39 patients, two-level fusion was done in 9 and three-level fusion was done in 5. Between January 2003 and September 2006, unilateral PLIFs using a single cage filled with a local morselized bone graft were performed at our institution for the patients who were diagnosed with spinal stenosis, herniation of an intervertebral disc combined with lumbar instability, and spondylolisthesis. The local chip bone graft, which was obtained during posterior decompression, was packed in the anterior area before cage insertion and after performing discectomy. Fifty three of these patients who were followed up for more than 1 year were included in this study. The mean age at the time of surgery was 59.1 years (range, 39 to 77 years). There were 18 males and 35 females. The mean follow-up period was 31.1 months (range, 12 to 54 months). The indication for surgery was spinal stenosis in 36 cases, spondylolisthesis in 12 and herniation of an intervertebral disc combined with lumbar instability in 5. Single-level fusion was perfomed in 39 patients, two-level fusion was done in 9 and three-level fusion was done in 5. Surgical Technique The patients were placed in the prone position under general anesthesia. With the muscles adjacent to the spine retraced laterally to minimize damage, the area lateral to the lamina and the posterior joint was exposed via a posteromedial approach while the transverse process was not exposed. Nerve root decompression was achieved by performing laminectomy, complete excision of the inferior articular process and discectomy, depending on the cause of disease. The insertion of a single cage was planned on the side with the more severe symptoms or the more severe stenotic foramen seen on MRI, and the dura mater and the nerve root were medially retracted. Extensive removal of the intervertebral disc and the adjacent end plates was performed on the ipsilateral side with using a pituitary rongeur and a curved curette until subchondral bone was exposed. The size of a cage was determined based on the disc height. The involved titanium cages (Titanum O.I.C.®, Stryker, NJ, USA) were of various sizes (width: 11 mm, angulation: 0°, 4°, 8°, height: 9-13 mm, length: 20, 25 mm) and they were rectangular in shape and radiopaque. The lamina, spinous process and posterior articular process obtained during decompression were morselized in a bone mill and packed into the cage. Before the insertion of the cage, the local morselized bone was grafted as much as possible into the anterior side of the intervertebral space (Fig. 1). 
Pedicle screw fixation was carried out after inserting the cage to secure the stability and to improve the bony union immediately after surgery. Standard wound closure was performed following hemostasis. From the 3rd postoperative day, a lumbo-sacral orthosis was used for 4-5 weeks postoperatively when the patient was walking. The patients were placed in the prone position under general anesthesia. With the muscles adjacent to the spine retraced laterally to minimize damage, the area lateral to the lamina and the posterior joint was exposed via a posteromedial approach while the transverse process was not exposed. Nerve root decompression was achieved by performing laminectomy, complete excision of the inferior articular process and discectomy, depending on the cause of disease. The insertion of a single cage was planned on the side with the more severe symptoms or the more severe stenotic foramen seen on MRI, and the dura mater and the nerve root were medially retracted. Extensive removal of the intervertebral disc and the adjacent end plates was performed on the ipsilateral side with using a pituitary rongeur and a curved curette until subchondral bone was exposed. The size of a cage was determined based on the disc height. The involved titanium cages (Titanum O.I.C.®, Stryker, NJ, USA) were of various sizes (width: 11 mm, angulation: 0°, 4°, 8°, height: 9-13 mm, length: 20, 25 mm) and they were rectangular in shape and radiopaque. The lamina, spinous process and posterior articular process obtained during decompression were morselized in a bone mill and packed into the cage. Before the insertion of the cage, the local morselized bone was grafted as much as possible into the anterior side of the intervertebral space (Fig. 1). Pedicle screw fixation was carried out after inserting the cage to secure the stability and to improve the bony union immediately after surgery. Standard wound closure was performed following hemostasis. From the 3rd postoperative day, a lumbo-sacral orthosis was used for 4-5 weeks postoperatively when the patient was walking. Assessments The clinical evaluation was based on the visual analogue scale (VAS) for the preoperative back pain and radiating pain at the 1st , 2nd, 4th, 6th, 12th, and 24th postoperative month, and at the last follow-up. The Oswestry Disability Index was assessed preoperatively and at the last follow-up, the Prolo scale was obtained at the last follow-up and the Kim & Kim criteria16) were also used (Tables 1 and 2). The lateral plain radiographs taken preoperatively, immediately postoperatively and at the last follow-up were compared for the radiological assessment. Although the radiopaque titanium cage made it difficult to assess whether boney union was achieved, the local morselized bone graft impacted anterior to the cage allowed for directly evaluating the boney union. In other words, we carefully looked for bone bridging and radiolucency around the cage and the metal screws, and any evidence of instability on the flexion-extension lateral radiographs for assessing the boney union. The changes in the intervertebral disc height were evaluated using the restored disc height, based on the measurements performed preoperatively, immediate-postoperatively and at the last follow-up. The tube-to-patient distance was 40 inches and any magnification error was avoided by adjusting the radiation dose of the anteroposterior and lateral radiography to 5.50 dGycm2 and 10.00 dGycm2, respectively. 
The bony union status was classified into solid union, delayed union and non-union. Solid union was considered achieved when the endplates visible on the immediately postoperative radiographs became invisible during follow-up, there was bony trabecular continuity and bone bridging from the graft to the adjacent vertebral bodies in the intervertebral space, the bone graft that appeared as granules on the lateral radiographs became a radiopaque mass, and neither instability on the flexion-extension radiographs nor radiolucency around the cage and screws was observed. Non-union was defined as disruption of trabecular continuity, the appearance of instability on the flexion-extension radiographs and ≥ 1 mm radiolucency around the screws and cage. Delayed union was diagnosed when the criteria for solid union were not yet fully met, although disruption of trabecular continuity and other evidence of non-union were not observed.17) Instability was considered present when ≥ 3° of posterior angulation was observed on the lateral radiographs together with ≥ 2 mm of displacement of the vertebral body and cage movement. The duration of surgery and the hemorrhage volume were also recorded. SPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. The changes in the intervertebral disc height were evaluated using a paired-sample t-test with a 95% confidence interval.
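The disc height analysis maps onto a standard paired t-test; the following R sketch shows the equivalent call (the authors used SPSS, so this is purely illustrative, and the `preop` and `lastfu` vectors are hypothetical stand-ins for the measured disc heights).

```r
# Hypothetical disc height measurements in mm, for illustration only;
# the actual analysis covered 53 patients and was run in SPSS ver. 12.0.
preop  <- c(9.5, 8.7, 10.1, 9.0, 8.8)     # preoperative disc heights
lastfu <- c(12.9, 12.1, 13.0, 12.4, 12.2) # disc heights at last follow-up

# Paired-sample t-test with a 95% confidence interval, as described above.
t.test(lastfu, preop, paired = TRUE, conf.level = 0.95)
```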
RESULTS: The clinical outcomes were as follows. The VAS score was 6.5 preoperatively, 2.8 at the 1st postoperative year and 1.8 at the last follow-up for back pain, showing significant improvement; the corresponding values for radiating pain were 6.1, 2.7 and 1.8 (Table 3).
The Oswestry Disability Index improved remarkably from 70.0 preoperatively to 37.9 at the last follow-up. The Economic Prolo scale was 3 in 3 cases, 4 in 38 cases and 5 in 12 cases, while the Functional Prolo scale was 3 in 5 cases, 4 in 36 cases and 5 in 12 cases; according to the Kim & Kim criteria, the outcome was excellent in 12 (23%) of the 53 cases, good in 39 (73%) and fair in 2 (4%). The radiological outcomes were as follows: of the 53 cases, solid union was observed in 50 cases (94.4%) and delayed union in 3 cases (5.6%) at the 6th postoperative month, and complete union was identified in 52 cases (98.1%) at the last follow-up (Figs. 2 and 3). Radiolucency around the cage and pedicle screws was not observed in any case at the last follow-up. Although instability caused by cage breakage was identified in 1 case (2%) at the 5th postoperative month, stability without further breakage was achieved by the 8th postoperative month. Trabecular continuity was not obvious at the last follow-up, but no radiolucency around the cage and screws and no instability on the flexion-extension radiographs were observed (Fig. 4). The intervertebral disc height improved significantly from 9.21 mm preoperatively to 13.63 mm immediately postoperatively and was 12.47 mm at the last follow-up. The mean increase in intervertebral disc height of 3.26 mm from the preoperative measurement to the last follow-up examination was statistically significant (p = 0.009) (Fig. 5, Table 4). The mean duration of surgery was 221.5 minutes (range, 140 to 320 minutes) for single-level fusion, 258.9 minutes (range, 200 to 440 minutes) for two-level fusion and 353.3 minutes (range, 290 to 410 minutes) for three-level fusion. The mean hemorrhage volume was 933.3 ml for single-level fusion, 964.2 ml for two-level fusion and 1,011.6 ml for three-level fusion. The mean hospitalization period was 14.5 days (range, 7 to 28 days). Immediately after surgery, one case of cauda equina syndrome, which occurred as a complication of hematoma formation, was treated by removal of the hematoma. Although cage breakage was observed in 1 case, radiography revealed no instability at the last follow-up. DISCUSSION: PLIF was designed to reduce the pain resulting from nerve compression and to secure the stability of the surgical constructs. The minor symptoms of degenerative lumbar diseases such as spinal stenosis and spondylolisthesis improve with conservative treatment in most cases. However, when symptoms such as back pain, radiating pain in the lower limb and neurogenic claudication severely restrict a person's daily activities, surgical options should be considered.1,8-11,13,18,19) Interbody fusion is one of the most common types of vertebral body fusion and is regarded as the most recommendable biomechanical technique. In particular, the popularity of interbody fusion using a cage has prompted the invention of various cages, which in turn has advanced PLIF techniques.13,20,21) Posterolateral fusion involves a risk of muscle fibrosis caused by the extensive release of muscles adjacent to the transverse process, as well as blood loss and postoperative wound infection due to a lengthened operative time.
In contrast, interbody fusion is advantageous in that it increases the fusion rate and avoids extensive muscle release around the transverse process, with the fusion performed at the level of the spinal compression; early stability and a high fusion rate have been obtained following PLIF with the use of pedicle screws for fixation.22-25) In this study, there were no complications such as infection following PLIF with pedicle screws and without muscle release around the transverse process. In addition, early stability was obtained in many cases, and satisfactory clinical results and solid fusion were achieved at the last follow-up. There are three common types of PLIF techniques: one involves bilateral laminectomy and implantation of two cages, another unilateral laminectomy and implantation of two cages, and the third unilateral laminectomy and implantation of one cage.26-28) The first and the last techniques have both recently been reported to be conducive to postoperative stability of the vertebral body. Oxland and Lund29) reported that single-cage PLIF provided high stability in flexion, that the supplementary use of pedicle screws improved stabilization in all directions and that two-cage PLIF might increase the risk of damage to the bilateral nerve roots. Zhao et al.20,30) documented that single-cage PLIF was easier to perform than two-cage PLIF. In particular, retraction of the nerve roots and the dura mater on the asymptomatic side could be avoided with unilateral placement of a cage in patients with unilateral sciatica, and the supplementary use of pedicle screws also allowed immediate postoperative stabilization. They added that single-cage PLIF was advantageous in reducing blood loss, operative time and hospital stay. In this study, single-cage PLIF minimized damage to the posterior structures while providing proper decompression, high stability and a remarkable fusion rate, and the cost of an additional cage was saved. According to the biomechanical comparison of single-cage and two-cage PLIF by Chiang et al.,1) while both techniques result in a similar level of flexion of the spine, the former additionally requires a bone graft. They postulated that a single cage, having a small implant-vertebral contact area, leads to an increased contact compression load, which raises the risk of cage subsidence, migration and breakage, and increases the compression load on adjacent discs. In our study, the intervertebral contact surface could be enlarged by transplanting a bone graft anterior to the cage, which prevented the problems associated with single-cage PLIF. In addition, the graft anterior to the cage was an efficient indicator of bony union despite the radiopacity of the cages used. An autogenous bone graft can be accomplished with tricortical, bicortical or monocortical bone, cancellous bone, a morselized local bone graft obtained from decompression, or transplantation of 3-5 blocks of morselized local bone obtained from decompression into the intervertebral disc space. An iliac crest bone graft facilitates rapid bone union, but it increases the risk of donor site pain, excessive blood loss, donor site infection and pelvic fracture, requires an additional skin incision and lengthens the operative time.
In contrast, a local bone graft shortens the operative time and reduces blood loss because no bone harvesting procedure is necessary and there are no donor site problems, but it is disadvantageous for bony union because of its poorer bone quality compared with an iliac crest bone graft. In this study, the lamina, the articular process and the spinous process obtained from decompression were morselized and transplanted into the anterior portion of the intervertebral disc space, and a cage filled with local morselized bone was then inserted. With these procedures, we obtained excellent bony union and reduced the operative time and blood loss. In conclusion, in this study unilateral PLIF was performed with a local morselized bone graft placed anterior to the cage and a single cage filled with the graft. This technique produced satisfactory clinical and radiological outcomes, including maintenance of the proper intervertebral disc space, good bony union, rigid stability and a high fusion rate.
Background: We retrospectively evaluated the clinical and radiological outcomes of posterior lumbar interbody fusion (PLIF) using a unilateral single cage and a local morselized bone graft. Methods: Fifty-three patients who underwent PLIF with a unilateral single cage filled with a local morselized bone graft were enrolled in this study. The average follow-up duration was 31.1 months. The clinical outcomes were evaluated using the visual analogue scale (VAS) preoperatively, at 1 year postoperatively and at the last follow-up, and with the Oswestry Disability Index, the Prolo scale and the Kim & Kim criteria at the last follow-up; the radiological outcomes were evaluated according to bone bridging, radiolucency, instability and disc height. Results: On clinical evaluation, the VAS pain index, the Oswestry Disability Index, the Prolo scale and the Kim & Kim criteria showed excellent outcomes. On radiological evaluation, 52 cases showed complete bone union at the last follow-up. Regarding complications, only 1 patient had cage breakage during follow-up. Conclusions: PLIF using a unilateral single cage filled with a local morselized bone graft has the advantages of a shorter operation time, less blood loss and a shorter hospital stay compared with PLIF using bilateral cages for treating degenerative lumbar spine disease. This technique also provides excellent clinical and radiological outcomes.
Keywords: Spinal fusion | Posterior lumbar interbody fusion | Unilateral single cage | Local morselized graft
MeSH terms: Adult | Aged | Blood Loss, Surgical | Bone Transplantation | Female | Follow-Up Studies | Humans | Intervertebral Disc Degeneration | Lumbar Vertebrae | Male | Middle Aged | Prosthesis Implantation | Radiography | Retrospective Studies | Spinal Fusion | Spinal Stenosis | Spondylolisthesis | Time and Motion Studies | Treatment Outcome
Prediction models for clustered data: comparison of a random intercept and standard regression model.
23414436
When study data are clustered, standard regression analysis is considered inappropriate, and analytical techniques for clustered data need to be used. For prediction research in which interest centers on predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that random effect parameter estimates differ from standard logistic regression parameter estimates. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions.
BACKGROUND
Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated.
METHODS
The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept.
RESULTS
The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
CONCLUSION
[ "Adult", "Aged", "Analgesics, Opioid", "Calibration", "Cluster Analysis", "Female", "Humans", "Logistic Models", "Male", "Middle Aged", "Models, Statistical", "Postoperative Nausea and Vomiting", "Predictive Value of Tests", "Reproducibility of Results", "Risk Assessment", "Surgical Procedures, Operative", "Treatment Outcome" ]
3658967
Background
Many clinical prediction models are being developed. Diagnostic prediction models combine patient characteristics and test results to predict the presence or absence of a certain diagnosis. Prognostic prediction models predict the future occurrence of outcomes [1]. Study data that are used for model development are frequently clustered within e.g. centers or treating physicians [2]. For instance, patients treated in a particular center may be more alike than patients treated in another center due to differences in treatment policies. As a result, patients treated in the same center are dependent (clustered), rather than independent. Regression techniques that take clustering into account [3-6] are frequently used in cluster randomized trials and in etiologic research with subjects clustered within e.g. neighborhoods or countries. Surprisingly, such regression models have hardly been used in research aimed at developing prediction models [2]. Notably in the domains of therapeutic and causal studies, it has been shown that regression methods that take clustering into account yield different estimates of the regression coefficients than standard regression techniques neglecting the clustered data structure [5,7,8]. However, in the domain of prediction studies it is as yet unknown to what extent regression methods for clustered data need to be used in the presence of clustered data. Two types of models can be used for analyzing clustered data: marginal models and conditional models [9]. Marginal models, such as the Generalized Estimating Equations (GEE) method, adjust for the clustered nature of the data and estimate the standard errors of the estimated parameters correctly. The interpretation of GEE results is on the higher cluster level and therefore not suitable for predictions in individual patients [10]. Conditional models estimate predictor effects for patients in the specific clusters. Conditioning on cluster is often done with random effects to save degrees of freedom. Thereby, conditional models allow for predictions of outcomes on the lowest clustering level (here, the patient). Accurate predictions are not necessarily achieved with a random effects model (which has different regression parameters than a standard model), because the random effects are not readily applicable in new data with new clusters, and the clustering in the data may be weak, resulting in minor differences between the models. We explore the effect of including a random intercept in a conditional prediction model compared to standard regression. We use empirical and simulated clustered data to assess the performance of the prediction models. We show that model calibration is suboptimal, particularly when applied in new subjects, if clustering is not accounted for in the prediction model development.
Methods
Prediction of postoperative nausea and vomiting A frequently occurring side effect of surgery is postoperative nausea and vomiting (PONV). To prevent or treat PONV, a risk model was developed to predict PONV within 24 hours after surgery [11]. We used a cohort of 1642 consecutive surgical patients (development sample) treated in the UMC Utrecht to develop prediction models. Patients were clustered within 19 treating anesthesiologists [12]. Predictors for the occurrence of PONV included gender, age, history of PONV or motion sickness, current smoking, abdominal or middle ear surgery versus other type of surgery, and the use of volatile anesthetics during surgery [13]. Data of 1458 patients from the same center were used to study the validity of the prediction models. Patients included in the validation sample were treated by 19 other anesthesiologists than the patients from the development sample. Model development and risk calculation The prediction models included all before-mentioned predictors and were fitted with standard logistic regression or with a random intercept logistic regression model (also known as a partial multilevel model). The standard model was fitted with a generalized linear model, including a logit link function. The intercept and predictors were included as fixed effects (i.e. not varying by cluster). The random effect model thus included fixed effects for the predictors plus a random intercept for the effects of clusters (anesthesiologists, or centers in the simulation study). The random intercept was assumed to be normally distributed with mean zero and variance $\sigma^2_{u0}$ [14]. The predicted risks of PONV for individual patients were calculated with the log-odds transformation of the linear predictor. The risk based on the standard logistic regression model was:

$$P(Y_i = 1 \mid X_i) = \frac{1}{1 + \exp\left(-\left(\hat{\alpha}_{\mathrm{standard}} + \sum_{m=1}^{6} x_{im}\,\hat{\beta}_{\mathrm{standard},m}\right)\right)}$$

where $P(Y_i = 1)$ is the predicted risk that patient $i$ will get PONV, given the patient's predictor values $X$. The linear predictor consists of $\hat{\alpha}_{\mathrm{standard}}$, the estimated intercept of the standard model, and $\sum_{m} x_{im}\,\hat{\beta}_{\mathrm{standard},m}$, the sum-product of the six predictor values of patient $i$ and the six regression coefficients. From the random intercept logistic regression model, predicted risks were calculated in two ways.
The first risk calculation was based on only the fixed effects of the random intercept logistic regression model (called the marginal risk calculation):

$$P(Y_i = 1 \mid X_i) = \frac{1}{1 + \exp\left(-\left(\hat{\alpha}_{\mathrm{RE}} + \sum_{m=1}^{6} x_{im}\,\hat{\beta}_{\mathrm{RE},m}\right)\right)}$$

where $\hat{\alpha}_{\mathrm{RE}}$ equals the fixed intercept and $\sum_{m} x_{im}\,\hat{\beta}_{\mathrm{RE},m}$ is the sum-product of the six predictor values of patient $i$ and the corresponding fixed regression coefficients of the random effects model. The cluster effects were not used for this risk calculation [15]. We explicitly studied this risk calculation since cluster effects are unknown for patients in clusters that are not included in the development data. The second risk calculation used the fixed and random effects of the random intercept logistic regression model (called the conditional risk calculation):

$$P(Y_{ij} = 1 \mid X_{ij}) = \frac{1}{1 + \exp\left(-\left(\hat{\alpha}_{\mathrm{RE}} + \sum_{m=1}^{6} x_{im}\,\hat{\beta}_{\mathrm{RE},m} + u_{0j}\right)\right)}$$

This risk calculation included the same predictor effects as the marginal risk calculation, plus the random intercept $u_{0j}$ (i.e. the effect of anesthesiologist $j$). This risk calculation cannot be used in new data of patients treated by new anesthesiologists, since the random effect of the new anesthesiologist is unknown.
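In R, the two model fits and the three risk calculations can be sketched as follows. This is a minimal illustration, not the authors' code: it uses the current lme4 interface (glmer; the paper used the lmer function of R 2.11.1), and the data frame `dat` with outcome `ponv`, predictors `x1`-`x6` and cluster identifier `anesth` is hypothetical.

```r
library(lme4)  # mixed effect (random intercept) logistic regression

# Standard logistic regression: intercept and predictors as fixed effects.
fit_std <- glm(ponv ~ x1 + x2 + x3 + x4 + x5 + x6,
               family = binomial, data = dat)

# Random intercept logistic regression: the same fixed effects plus a
# normally distributed random intercept per anesthesiologist.
fit_re <- glmer(ponv ~ x1 + x2 + x3 + x4 + x5 + x6 + (1 | anesth),
                family = binomial, data = dat)

# 1) Standard risk calculation.
p_std  <- predict(fit_std, type = "response")

# 2) Marginal risk calculation: fixed effects only; re.form = NA drops
#    the random anesthesiologist effects from the prediction.
p_marg <- predict(fit_re, re.form = NA, type = "response")

# 3) Conditional risk calculation: fixed effects plus the estimated
#    random intercept u0j of the treating anesthesiologist.
p_cond <- predict(fit_re, re.form = NULL, type = "response")
```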
Model evaluation Apparent and test performance of the prediction models were assessed. Apparent performance is the performance of the prediction model in the development data. Test performance was assessed in a cohort of 1458 new patients treated by 19 other anesthesiologists than the patients from the development sample. The predictive performance of each of the risk calculations was assessed with the concordance index (c-index) [16], the calibration slope and the calibration in the large [17,18]. The calibration slope was estimated with standard logistic regression analysis, modeling the outcome of interest as the dependent variable and the linear predictor as the independent variable. Calibration in the large was assessed as the intercept of a logistic regression model with the linear predictor as offset variable. The ideal values of the calibration in the large and the calibration slope are 0 and 1, respectively. Since standard performance measures ignore the clustered data structure, they can be considered overall measures. To take clustering into account in the model evaluation, we also assessed the predictive performance within individual anesthesiologists (within cluster performance). The within cluster c-index was estimated as the average of the c-indices of the clusters, as described by van Oirbeek [19]. Within cluster calibration was assessed with mixed effect models, with random effects for the intercept and the linear predictor (calibration slope) or only for the intercept (calibration in the large).
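These performance measures reduce to a few lines of R; the sketch below is an illustration under assumptions (vectors `p` of predicted risks, `lp` of linear predictors, `y` of observed outcomes and `cl` of cluster identifiers are assumed to exist), and it uses Hmisc's somers2 for the c-index rather than the Design package cited in the text.

```r
library(Hmisc)  # somers2() returns the concordance probability "C"

# Overall c-index.
c_overall <- somers2(p, y)["C"]

# Within cluster c-index: the average of the per-cluster c-indices
# (clusters in which the outcome does not vary yield NA and are skipped).
c_within <- mean(tapply(seq_along(y), cl,
                        function(i) somers2(p[i], y[i])["C"]),
                 na.rm = TRUE)

# Calibration slope: regress the outcome on the linear predictor
# (ideal value 1).
cal_slope <- coef(glm(y ~ lp, family = binomial))["lp"]

# Calibration in the large: intercept of a model with the linear
# predictor as offset (ideal value 0).
cal_large <- coef(glm(y ~ offset(lp), family = binomial))["(Intercept)"]
```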
Simulation study We generated a source population which included 100 centers. The number of patients per center was Poisson distributed, with a mean and variance varying per center according to the exponential function of a normal distribution (N(5.7, 0.3)). This resulted in a total of 30,556 patients and a median of 301 patients per center (range 155–552). The dichotomous outcome Y was predicted with 3 continuous (X1-X3) and 3 dichotomous variables (X4-X6). The three continuous predictors were independently drawn from normal distributions with a mean of 0 and standard deviations of 0.2, 0.4 and 1. The three dichotomous predictors were independently drawn from binomial distributions with incidences 0.2, 0.3 and 0.4. The regression coefficients of all predictors were 1. To introduce clustering of events, we generated a latent random effect from a normal distribution with mean 0 and variance 0.17. This corresponded to an intraclass correlation coefficient (ICC) of 5%, calculated as

$$\mathrm{ICC} = \frac{\sigma^2_{u0}}{\sigma^2_{u0} + \pi^2/3}$$

where $\sigma^2_{u0}$ equals the second-level variance estimated with a random intercept logistic regression model [6]. Based on the six predictors and the latent random effect, the linear predictor lp was calculated for each patient. The linear predictor was normally distributed with mean −1.06 and standard deviation 1.41. The linear predictor was transformed to probabilities for the outcome using the formula P(Y) = 1/(1 + exp(−lp)). The outcome value Y (1 or 0) was then generated by comparing P(Y) with an independently generated variable u having a uniform distribution from 0 to 1, using the rule Y = 1 if u ≤ P(Y), and Y = 0 otherwise. The incidence of the outcome (P(Y = 1)) was 30% in all source populations, except for the situation with a low number of events (incidence = 3%). Further, we varied several parameters in the source population as described above. We studied ICC values of 5%, 15% and 30%; Pearson correlation coefficients between predictor X1 and the random intercept were 0.0 or 0.4. Study samples were drawn according to the practice of data collection in a multicenter setting [20,21]. We randomly drew study samples from the source population in two stages. First we sampled 20 centers, and then we sampled in total 1000 patients from the included centers (two-stage sampling). We also studied the performance in study samples with 5 or 50 centers (including 100 and 1000 patients in total, respectively). Standard and random intercept logistic regression models were fitted in the study sample, and evaluated in that study sample (apparent performance) and in the whole source population (test performance). The whole process (sampling from the source population, model development and evaluation) was repeated 100 times. Calculations were performed with R version 2.11.1 [22]. We used the lmer function from the lme4 library to perform mixed effect regression analyses [23]. The lrm function of the Design package was used to fit the standard model and estimate overall performance measures [24].
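The data generating mechanism can be reproduced in a few lines of R. The sketch below covers the ICC = 5% scenario under stated assumptions: the intercept `alpha` is a hypothetical value tuned so that the linear predictor has roughly the reported mean of −1.06, and the outcome is drawn with rbinom, which is equivalent to the uniform comparison rule.

```r
set.seed(1)
n_centers <- 100

# Cluster sizes: Poisson counts whose mean varies per center as the
# exponential of a N(5.7, 0.3) draw.
sizes <- rpois(n_centers, lambda = exp(rnorm(n_centers, 5.7, 0.3)))
cl    <- rep(seq_len(n_centers), sizes)
n     <- length(cl)

# Latent random center effect with variance 0.17, giving
# ICC = 0.17 / (0.17 + pi^2 / 3), i.e. about 5%.
u0 <- rnorm(n_centers, mean = 0, sd = sqrt(0.17))

# Three continuous and three dichotomous predictors.
x1 <- rnorm(n, 0, 0.2); x2 <- rnorm(n, 0, 0.4); x3 <- rnorm(n, 0, 1)
x4 <- rbinom(n, 1, 0.2); x5 <- rbinom(n, 1, 0.3); x6 <- rbinom(n, 1, 0.4)

# Linear predictor: all true coefficients are 1; alpha is an assumed
# intercept chosen so that mean(lp) is roughly -1.06.
alpha <- -1.96
lp <- alpha + x1 + x2 + x3 + x4 + x5 + x6 + u0[cl]

# Outcome: Bernoulli draw with probability 1 / (1 + exp(-lp)).
y <- rbinom(n, 1, plogis(lp))
mean(y)  # outcome incidence, approximately 30%
```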
Results
Prediction of postoperative nausea and vomiting The incidence of PONV was 37.5% (616/1642) in the development cohort (Table 1). The 19 anesthesiologists treated on average 82 surgical patients (median 64, range 1–460). The incidence of PONV per anesthesiologist varied (interquartile range 29%–47%), with an ICC of 3.2%. The corresponding variance of the random intercept (0.15) was significantly different from 0 (confidence interval 0.07–0.33).
[Table 1. Distribution of predictor values and outcome, and predictor effects in multivariable logistic regression models. Footnotes: * interquartile ranges of the predictor distributions per anesthesiologist; † logistic regression coefficients with 95 percent confidence intervals, intervals of the random intercept model based on the t-distribution; ‡ mean (standard deviation); § number (percentage) of patients with PONV, rather than the intercept, is reported; †† non-parametric confidence interval of the random intercept variance obtained from a cluster bootstrap procedure. PONV = postoperative nausea and vomiting.]
When the clustering by anesthesiologist was taken into account (random intercept model), the predictive effect of the type of anesthetic used (volatile yes/no) was different compared with the standard multivariable model (Table 1). If predictor distributions varied among anesthesiologists, as indicated by a wide interquartile range (Table 1), differences in predictor effects were found between the standard and random intercept models. The variance of the random intercept was 0.15 when the six predictors were included in the model. Consequently, random intercepts ranged from −0.49 to 0.50. Figure 1 shows the variation in predicted risks by the three risk calculations. Applying the three risk calculations for the prediction of PONV resulted in different predicted risks for the individual patients.
[Figure 1A-C. Predicted probabilities from the standard model and from the risk calculations based on the random intercept model. The predicted risks differed among the models. The diagonal indicates the line of identity (predicted probabilities of the two models are equal).]
The risk calculation that included fixed and random effects (the conditional risk calculation) showed the best overall discriminative ability in the development data (Table 2, c-index 0.69). The discriminative ability of the standard model was similar to that of risks based only on the fixed effects of the random intercept model (the marginal risk calculation) (c-index 0.66 for both). The difference in discriminative ability between the standard model and the marginal and conditional risk calculations disappeared when the c-index was estimated within each anesthesiologist, because the random anesthesiologist effects included in the conditional risk calculation only contribute to discrimination between patients treated by different anesthesiologists.
[Table 2. Apparent and test performance of the PONV models described in Table 1. Footnotes: † overall performance (standard error); ‡ within anesthesiologist performance (standard deviation); * with calibration slope equal to 1 (i.e. calibration in the large).]
The standard model showed excellent apparent calibration in the large when estimated with the overall performance measures (Table 2). Apparent calibration in the large (overall) for the marginal risk calculation was almost optimal. However, calibration in the large assessed within clusters showed that predicted and observed incidences differed for some anesthesiologists.
The standard deviations of the calibration in the large within clusters were 0.329 and 0.385 for the standard model and the marginal risk calculation, respectively (Table 2). The differences between predicted and observed incidences within some anesthesiologists were slightly smaller for the standard model than for the marginal risk calculation, because the predictors included in the marginal risk calculation did not contain information on the anesthesiologist-specific intercept, as these predictor effects were adjusted for the random anesthesiologist effects. For the conditional risk calculation, the calibration in the large within clusters and the corresponding standard deviation were close to 0, which means that the observed and predicted incidences were similar across all anesthesiologists. This is due to the inclusion of the random anesthesiologist effects in the risk calculation, which comprise an estimate of the cluster-specific incidence. The calibration slope assesses the correlation between predicted and observed risks for all patients (overall performance), or for patients treated by a particular anesthesiologist (within cluster performance). The overall and within cluster calibration slopes of the marginal risk calculation were slightly smaller than the calibration slopes of the standard model. The overall and within cluster calibration slopes of the conditional risk calculation were >1 in the development data (i.e. predicted risks lower than observed risks), because the anesthesiologist effects were shrunken toward the average effect by the random intercept model. The standard deviations of the calibration slopes within clusters were limited for all models, indicating that the observed and predicted risks differed similarly among the anesthesiologists (Table 2). The standard model and the marginal risk calculation had similar test performance, estimated in patients treated by 19 other anesthesiologists (overall c-index 0.68 and 0.67, respectively) (Table 2). The test performance as evaluated with the overall and within cluster c-indexes was even higher than the apparent performance. Possible reasons are stronger true predictor effects in the test data, differences in case-mix and randomness [25]. (The models were not re-calibrated in the external data.) The overall and within cluster calibration in the large were too high for both models in the external data, indicating that the predicted risks were lower than the observed proportions of PONV. The calibration slopes from the standard model were larger than the slopes from the random intercept model, as was also shown in the apparent validation.
Simulation study The simulation study also showed similar overall discriminative ability for the standard model and the marginal risk calculation (apparent performance, c-index 0.79 for both, for ICC = 5%, Table 3). Discrimination of the conditional risk calculation was slightly better than that of the standard model and the marginal risk calculation (c-index 0.82, 2.5 and 97.5 percentiles 0.79; 0.84). The apparent calibration intercept and slope were ideal for the standard model when assessing the overall performance, and ideal for the marginal risk calculation when assessing the within cluster performance. As in the empirical data, the overall and within cluster calibration slopes were too high for the conditional risk calculation due to shrinkage of the random center effects. Variation in the calibration in the large estimates within centers was lowest for the conditional risk calculation (calibration in the large and corresponding standard deviation both 0) (Table 3). The test performance of the standard model and the marginal risk calculation, as assessed in the source population, showed that the performance of these models was similar (Table 3). The difference between the apparent and test performance can be interpreted as optimism in model performance. Optimism in overall and within cluster performance was similar for the standard model and the marginal risk calculation. For instance, the difference between apparent and test calibration slopes (overall) was 0.04 for both models (Table 3). The risks for patients clustered within different centers were similar in these data (ICC 5%), which means that including center effects in the prediction model (i.e. a random intercept model) cannot improve predictive performance considerably.
The performance of the models was thus similar in these data with small differences in risks among centers.

Table 3: Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0. * With calibration slope equal to 1 (i.e. calibration in the large). † overall performance (2.5 and 97.5 percentiles). ‡ median of overall performance from 100 simulations (median of within cluster performances).

The similarity in performance among the models disappeared when the ICC was 15% or 30% (Table 4, Additional file 1: Table S3). The discriminative ability of the conditional risk calculation was better than that of the standard model and the marginal risk calculation: the apparent overall c-indexes and corresponding 2.5%; 97.5% ranges were 0.85 (0.82; 0.87), 0.77 (0.74; 0.80) and 0.77 (0.73; 0.80), respectively (Table 4). The apparent c-indexes of the standard model and the marginal risk calculation remained similar in data with a higher ICC; however, the calibration parameters differed. The overall calibration in the large and calibration slope were at their ideal values for the standard model, but not for the marginal risk calculation. In contrast, the calibration parameters assessed within clusters were on average more accurate for the marginal risk calculation (−0.00 and 1.00) than for the standard model (−0.18 and 1.18). Evaluation of the standard model and the marginal risk calculation in the source population (test performance) gave similar results to the evaluation in the study sample (apparent performance) (Table 4).

Table 4: Simulation results in a domain with ICC = 15%, Pearson correlation between X1 and random effect 0.0. * With calibration slope equal to 1 (i.e. calibration in the large). † overall performance (2.5 and 97.5 percentiles). ‡ median of overall performance from 100 simulations (median of within cluster performances).

Further, the apparent and test performance showed that, in data with an ICC of 15% or 30%, the standard deviations of the calibration in the large within clusters were higher for the standard model and the marginal risk calculation (e.g. 0.94 and 1.01, respectively) than in data with an ICC of 5% (0.44 and 0.45, respectively) (Tables 4 and 3). So, when predictions were based on models neglecting center-specific effects, the agreement between observed and predicted incidences within centers varied among centers, especially in data with a high ICC. The (standard deviations of the) c-indexes within clusters were not influenced by a higher ICC.

Tables 5 and 6 show the results of simulations investigating the influence of the number of centers on model performance. Especially when the number of centers is low (e.g. 5 centers), it is more difficult to estimate accurate random intercepts and corresponding center effects, which potentially affects the performance of the random intercept logistic regression model. However, as in Table 3, the performance of the standard model and the marginal risk calculation was similar, and the conditional risk calculation performed most accurately.

Table 5: Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0, number of patients 100, number of centers 5. * With calibration slope equal to 1 (i.e. calibration in the large). † overall performance (2.5 and 97.5 percentiles). ‡ median of overall performance from 100 simulations (median of within cluster performances).
Table 6: Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0, number of patients 1000, number of centers 50. * With calibration slope equal to 1 (i.e. calibration in the large). † overall performance (2.5 and 97.5 percentiles). ‡ median of overall performance from 100 simulations (median of within cluster performances).

The differences in performance between the standard and random intercept models were smaller when the clustering (i.e. the center effect) was associated with one of the predictors (Additional file 1: Tables S1, S2, and S4). For ICC values of 15% or 30%, the performance of the standard model and the marginal risk calculation was better than in datasets without an association between the clustering and a predictor. Finally, we compared the standard and random intercept model in data with a low incidence of the outcome Y (3%) (Additional file 1: Table S5), sampling in total 1000 patients clustered in 20 centers. The performance of the models was similar. Only the calibration intercept of the marginal risk calculation showed an underestimation of the outcome incidence (intercept 0.14). The calibration intercept within clusters of the conditional risk calculation had a lower variance (0.00) than the other models.
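The within cluster c-indexes summarized in these tables are averages of cluster-specific c-indexes, as described in the Methods. A minimal R sketch, assuming vectors y (0/1 outcome), p (predicted risks) and cluster (center or anesthesiologist identifier):

# Sketch of the within cluster c-index (not the authors' code).
# Rank-based (Mann-Whitney) c-index for one set of predictions.
cindex <- function(p, y) {
  n1 <- sum(y == 1); n0 <- sum(y == 0)
  if (n1 == 0 || n0 == 0) return(NA)  # undefined without both outcome levels
  (sum(rank(p)[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

# c-index per cluster; their mean is the within cluster c-index and their
# standard deviation shows its spread across clusters
cs <- tapply(seq_along(y), cluster, function(i) cindex(p[i], y[i]))
mean(cs, na.rm = TRUE)
sd(cs, na.rm = TRUE)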
Conclusion
In summary, we compared prediction models developed with random intercept or standard logistic regression analysis in clustered data. Including the cluster effect in a prediction model adds predictive information, resulting in better overall discriminative ability and better calibration in the large within clusters. Particularly when cluster effects are relatively strong (ICC larger than 5%), prediction models that include cluster-specific effects will perform better than models that do not.
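A compact, self-contained R sketch of the kind of comparison underlying this conclusion, using a simplified variant of the data-generating scheme from the Methods (one continuous predictor instead of six, and illustrative sample sizes; all names are hypothetical, and this is not the authors' code):

# Sketch: clustered data with a latent center effect, then a standard versus
# a random intercept logistic regression model.
library(lme4)
set.seed(1)

n_centers <- 20; n_per <- 50
center <- rep(seq_len(n_centers), each = n_per)
u0 <- rnorm(n_centers, 0, sqrt(0.17))    # variance 0.17 corresponds to an ICC of ~5%
x  <- rnorm(n_centers * n_per)
lp <- -1 + x + u0[center]                # linear predictor including the center effect
y  <- rbinom(length(lp), 1, plogis(lp))  # outcome from P(Y) = 1/(1 + exp(-lp))

standard <- glm(y ~ x, family = binomial)
ri       <- glmer(y ~ x + (1 | center), family = binomial)

# Marginal and conditional predicted risks from the random intercept model
p_marg <- predict(ri, re.form = NA, type = "response")
p_cond <- predict(ri, re.form = NULL, type = "response")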
[ "Background", "Prediction of postoperative nausea and vomiting", "Model development and risk calculation", "Model evaluation", "Simulation study", "Prediction of postoperative nausea and vomiting", "Simulation study", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Many clinical prediction models are being developed. Diagnostic prediction models combine patient characteristics and test results to predict the presence or absence of a certain diagnosis. Prognostic prediction models predict the future occurrence of outcomes [1].\nStudy data that are used for model development are frequently clustered within e.g. centers or treating physician [2]. For instance, patients treated in a particular center may be more alike compared to patients treated in another center due to differences in treatment policies. As a result, patients treated in the same center are dependent (clustered), rather than independent. Regression techniques that take clustering into account [3-6] are frequently used in cluster randomized trials and in etiologic research with subjects clustered within e.g. neighborhoods or countries. Surprisingly, such regression models were hardly used in research aimed at developing prediction models [2].\nNotably from the domains of therapeutic and causal studies, it has been shown that regression methods that take the clustering into account yield different estimates of the regression coefficients than standard regression techniques neglecting the clustered data structure [5,7,8]. However, in the domain of prediction studies, it is yet unknown to what extent regression methods for clustered data need to be used in the presence of clustered data. Two types of models can be used for analyzing clustered data: marginal models and conditional models [9]. Marginal models, such as the Generalized Estimation Equation (GEE) method, adjust for the clustering nature of data and estimate the standard error of the estimated parameters correctly. The interpretation of GEE results is on the higher cluster level and therefore not suitable for predictions in individual patients [10]. Conditional models estimate predictor effects for patients in the specific clusters. Conditioning on cluster is often done with random effects to save degrees of freedom. Thereby, conditional models allow for predictions of outcomes on the lowest clustering level (here, the patient). Accurate predictions are not necessarily achieved with a random effects model (having different regression parameters compared to a standard model), because the random effects are not readily applicable in new data with new clusters and the clustering in the data may be weak resulting in minor differences between the models.\nWe explore the effect of including a random intercept in a conditional prediction model compared to standard regression. We use empirical and simulated clustered data to assess the performance of the prediction models. We show that model calibration is suboptimal, particularly when applied in new subjects, if clustering is not accounted for in the prediction model development.", "A frequently occurring side effect of surgery is postoperative nausea and vomiting (PONV). To prevent or treat PONV, a risk model was developed to predict PONV within 24 hours after surgery [11]. We used a cohort of 1642 consecutive surgical patients (development sample) that were treated in the UMC Utrecht to develop prediction models. Patients were clustered within 19 treating anesthesiologists [12]. Predictors for the occurrence of PONV included gender, age, history of PONV or motion sickness, current smoking, abdominal or middle ear surgery versus other type of surgery and the use of volatile anesthetics during surgery [13]. 
Data of 1458 patients from the same center were used to study the validity of the prediction models. Patients included in the validation sample were treated by 19 other anesthesiologists than the patients from the development sample.", "The prediction models included all aforementioned predictors and were fitted with standard logistic regression or with a random intercept logistic regression model (also known as a partial multilevel model). The standard model was fitted with a generalized linear model with a logit link function. The intercept and predictors were included as fixed effects (i.e. not varying by cluster). The random effect models thus included fixed effects for the predictors plus a random intercept for the effects of clusters (anesthesiologists, or centers in the simulation study). The random intercept was assumed to be normally distributed with mean zero and variance $\sigma^2_{u0}$ [14].\nThe predicted risks of PONV for individual patients were calculated with the log-odds transformation of the linear predictor. The risk based on the standard logistic regression model was:\n$$P(Y_i = 1 \mid X_i) = \frac{1}{1 + \exp\left[-\left(\hat{\alpha}_{\mathrm{standard}} + \sum_{m=1}^{6} x_{im}\,\hat{\beta}_{\mathrm{standard},m}\right)\right]}$$\nwhere $P(Y_i = 1)$ is the predicted risk that patient $i$ will get PONV, given the patient's predictor values $X_i$. The linear predictor consists of $\hat{\alpha}_{\mathrm{standard}}$, the estimated intercept of the standard model, and $\sum_{m} x_{im}\hat{\beta}_{\mathrm{standard},m}$, the sum of the products of the six predictor values of patient $i$ and the six regression coefficients.\nFrom the random intercept logistic regression model, predicted risks were calculated in two ways. The first risk calculation was based on only the fixed effects of the random intercept logistic regression model (called the marginal risk calculation):\n$$P(Y_i = 1 \mid X_i) = \frac{1}{1 + \exp\left[-\left(\hat{\alpha}_{\mathrm{RE}} + \sum_{m=1}^{6} x_{im}\,\hat{\beta}_{\mathrm{RE},m}\right)\right]}$$\nwhere $\hat{\alpha}_{\mathrm{RE}}$ equals the fixed intercept and $\sum_{m} x_{im}\hat{\beta}_{\mathrm{RE},m}$ is the sum of the products of the six predictor values of patient $i$ and the corresponding fixed regression coefficients of the random effects model. However, the cluster effects were not used for this risk calculation [15]. We explicitly studied this risk calculation since cluster effects are unknown for patients in clusters that are not included in the development data.\nThe second risk calculation used the fixed and random effects of the random intercept logistic regression model (called the conditional risk calculation):\n$$P(Y_{ij} = 1 \mid X_{ij}) = \frac{1}{1 + \exp\left[-\left(\hat{\alpha}_{\mathrm{RE}} + \sum_{m=1}^{6} x_{im}\,\hat{\beta}_{\mathrm{RE},m} + u_{0j}\right)\right]}$$\nThis risk calculation included the same predictor effects as the marginal risk calculation, plus the random intercept $u_{0j}$ (i.e. the effect of anesthesiologist $j$). This risk calculation cannot be used in new data of patients treated by new anesthesiologists, since the random effect of the new anesthesiologist is unknown.", "Apparent and test performance of the prediction models were assessed. Apparent performance is the performance of the prediction model in the development data. Test performance was assessed in a cohort of 1458 new patients treated by 19 other anesthesiologists than the patients from the development sample.\nThe predictive performance of each of the risk calculations was assessed with the concordance index (c-index) [16], the calibration slope and the calibration in the large [17,18]. 
The calibration slope was estimated with standard logistic regression analysis, modeling the outcome of interest as the dependent variable and the linear predictor as the independent variable. Calibration in the large was assessed as the intercept of a logistic regression model with the linear predictor as an offset variable. The ideal values of the calibration in the large and the calibration slope are 0 and 1, respectively. Since standard performance measures ignore the clustered data structure, they can be considered overall measures. To take clustering into account in the model evaluation, we assessed the predictive performance in individual anesthesiologists (within cluster performance). The within cluster c-index was estimated as the average of the c-indices of the clusters, as described by van Oirbeek [19]. Within cluster calibration was assessed with mixed effect models, with random effects for the intercept and linear predictor (calibration slope) or only for the intercept (calibration in the large).", "We generated a source population which included 100 centers. The number of patients per center was Poisson distributed, with a mean and variance varying per center according to the exponential function of a normal distribution (N(5.7, 0.3)). This resulted in a total of 30,556 patients and a median of 301 patients per center (range 155–552). The dichotomous outcome Y was predicted with 3 continuous (X1–X3) and 3 dichotomous variables (X4–X6). The three continuous predictors were independently drawn from a normal distribution, with a mean of 0 and standard deviations of 0.2, 0.4, and 1. The three dichotomous predictors were independently drawn from binomial distributions with incidences 0.2, 0.3, and 0.4. The regression coefficients of all predictors were 1. To introduce clustering of events, we generated a latent random effect from a normal distribution with mean 0 and variance 0.17. This corresponded to an intraclass correlation coefficient (ICC) of 5%, calculated as $\sigma^2_{u0}/(\sigma^2_{u0} + \pi^2/3)$, where $\sigma^2_{u0}$ equals the second-level variance estimated with a random intercept logistic regression model [6]. Based on the six predictors and the latent random effect, the linear predictor lp was calculated for each patient. The linear predictor was normally distributed with mean −1.06 and standard deviation 1.41, and was transformed to outcome probabilities using the formula P(Y) = 1/(1 + exp(−lp)). The outcome value Y (1 or 0) was then generated by comparing P(Y) with an independently generated variable u having a uniform distribution from 0 to 1, using the rule Y = 1 if u ≤ P(Y), and Y = 0 otherwise. The incidence of the outcome (P(Y = 1)) was 30% in all source populations, except for the situation with a low number of events (incidence = 3%). Further, we varied several parameters in the source population as described above: we studied ICC values of 5%, 15% and 30%, and Pearson correlation coefficients between predictor X1 and the random intercept of 0.0 or 0.4.\nStudy samples were drawn according to the practice of data collection in a multicenter setting [20,21]. We randomly drew study samples from the source population in two stages: first we sampled 20 centers, and then we sampled in total 1000 patients from the included centers (two-stage sampling). We also studied the performance in study samples with 5 or 50 centers (including 100 and 1000 patients in total, respectively). 
Standard and random intercept logistic regression models were fitted in the study sample, and evaluated in that study sample (apparent performance) and in the whole source population (test performance). The whole process (sampling from the source population, model development and evaluation) was repeated 100 times.\nCalculations were performed with R version 2.11.1 [22]. We used the lmer function from the lme4 library to perform mixed effect regression analyses [23]. The lrm function of the Design package was used to fit the standard model and estimate overall performance measures [24].", "The incidence of PONV was 37.5% (616/1642) in the development cohort (Table 1). Nineteen anesthesiologists treated on average 82 surgical patients (median 64, range 1–460). The incidence of PONV per anesthesiologist varied (interquartile range 29% – 47%), with an ICC of 3.2%. The corresponding variance of the random intercept (0.15) was significantly different from 0 (confidence interval 0.07 – 0.33).\nDistribution of predictor values and outcome, and predictor effects in multivariable logistic regression models\n* interquartile ranges of the predictor distributions per anesthesiologist.\n† logistic regression coefficients with 95 percent confidence intervals. Intervals of the random intercept model were based on the t-distribution.\n‡ mean (standard deviation).\n§ the number (percentage) of patients with PONV is reported, rather than the intercept.\nPONV = postoperative nausea and vomiting.\n†† non-parametric confidence interval of the random intercept variance was obtained from a cluster bootstrap procedure.\nWhen the clustering by anesthesiologist was taken into account (random intercept model), the predictive effect of the type of anesthetic used (volatile yes/no) differed from that in the standard multivariable model (Table 1). Differences in predictor effects between the standard and the random intercept model were found when predictor distributions varied among anesthesiologists, as indicated by a wide interquartile range (Table 1). With the six predictors included in the model, the variance of the random intercept was 0.15; the estimated random intercepts consequently ranged from −0.49 to 0.50. Figure 1 shows the variation in predicted risks by the three risk calculations. Applying the three risk calculations for the prediction of PONV resulted in different predicted risks for individual patients.\nA-C Predicted probabilities from the standard model and from the risk calculations based on the random intercept model. The predicted risks differed among the models. The diagonal indicates the line of identity (predicted probabilities of the two models are equal).\nThe risk calculation that included fixed and random effects (the conditional risk calculation) showed the best overall discriminative ability in the development data (Table 2, c-index 0.69). The discriminative ability of the standard model was similar to that of risks based only on the fixed effects of the random intercept model (the marginal risk calculation) (c-index 0.66 for both). 
The difference in discriminative ability between the standard model and the marginal and conditional risk calculations disappeared when the c-index was estimated within each anesthesiologist, because the random anesthesiologist effects included in the conditional risk calculation only contribute to the discrimination of patients treated by different anesthesiologists.\nApparent and test performance of the PONV models described in Table 1\n† overall performance (standard error).\n‡ within anesthesiologist performance (standard deviation).\n* With calibration slope equal to 1 (i.e. calibration in the large).\nThe standard model showed excellent apparent calibration in the large when estimated with the overall performance estimates (Table 2). Apparent calibration in the large (overall) for the marginal risk calculation was almost optimal. However, calibration in the large assessed within clusters showed that predicted and observed incidences differed for some anesthesiologists. The standard deviations of calibration in the large within clusters were 0.329 and 0.385 for the standard model and the marginal risk calculation, respectively (Table 2). The differences between predicted and observed incidences within some anesthesiologists were slightly smaller for the standard model than for the marginal risk calculation, because the predictors in the marginal risk calculation carried no information on the anesthesiologist-specific intercept: these predictor effects were adjusted for the random anesthesiologist effects. For the conditional risk calculation, the calibration in the large within clusters and its standard deviation were close to 0, meaning that observed and predicted incidences agreed for all anesthesiologists. This is due to the inclusion of the random anesthesiologist effects in the risk calculation, which provide an estimate of the cluster-specific incidence.\nThe calibration slope assesses the correlation between predicted and observed risks for all patients (overall performance) or for patients treated by a particular anesthesiologist (within cluster performance). The overall and within cluster calibration slopes of the marginal risk calculation were slightly smaller than those of the standard model. The overall and within cluster calibration slopes of the conditional risk calculation were >1 in the development data (i.e. predicted risks lower than observed risks), because the anesthesiologist effects were shrunk toward the average effect by the random intercept model. The standard deviations of the calibration slopes within clusters were limited for all models, indicating that observed and predicted risks differed similarly among the anesthesiologists (Table 2).\nThe standard and the marginal risk calculation had similar test performance, estimated in patients treated by 19 other anesthesiologists (overall c-index 0.68 and 0.67, respectively) (Table 2). The test performance, as evaluated with the overall and within cluster c-indexes, was even higher than the apparent performance. Possible reasons are stronger true predictor effects in the test data, differences in case-mix, and randomness [25]. (The models were not re-calibrated in the external data.) The overall and within cluster calibration in the large were too high for both models in the external data, indicating that the predicted risks were lower than the observed proportions of PONV. 
The calibration slopes from the standard model were larger than slopes from the random intercept model, as was also shown in the apparent validation.", "The simulation study also showed similar overall discriminative ability for the standard model and the marginal risk calculation (apparent performance, c-index 0.79 for both, at ICC = 5%; Table 3). Discrimination of the conditional risk calculation was slightly better than that of the standard model and the marginal risk calculation (c-index 0.82, 2.5 and 97.5 percentiles 0.79; 0.84). The apparent calibration intercept and slope were ideal for the standard model when assessing overall performance, and ideal for the marginal risk calculation when assessing within cluster performance. As in the empirical data, the overall and within cluster calibration slopes were too high for the conditional risk calculation, due to shrinkage of the random center effects. Variation in the calibration in the large estimates within centers was lowest for the conditional risk calculation (calibration in the large and corresponding standard deviation both 0) (Table 3). The test performance of the standard model and the marginal risk calculation, assessed in the source population, showed that these models performed similarly (Table 3). The difference between apparent and test performance can be interpreted as optimism in model performance. Optimism in overall and within cluster performance was similar for the standard model and the marginal risk calculation; for instance, the difference between apparent and test calibration slopes (overall) was 0.04 for both models (Table 3). The risks for patients clustered within different centers were similar in these data (ICC 5%), which means that including center effects in the prediction model (i.e. a random intercept model) cannot improve predictive performance considerably. Consequently, the performance of the models was similar in these data with small differences in risks among centers.\nSimulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0\n* With calibration slope equal to 1 (i.e. calibration in the large).\n† overall performance (2.5 and 97.5 percentiles).\n‡ median of overall performance from 100 simulations (median of within cluster performances).\nThe similarity in performance among the models disappeared when the ICC was 15% or 30% (Table 4, Additional file 1: Table S3). The discriminative ability of the conditional risk calculation was better than that of the standard model and the marginal risk calculation: the apparent overall c-indexes and corresponding 2.5%; 97.5% ranges were 0.85 (0.82; 0.87), 0.77 (0.74; 0.80) and 0.77 (0.73; 0.80), respectively (Table 4). The apparent c-indexes of the standard model and the marginal risk calculation remained similar in data with a higher ICC; however, the calibration parameters differed. The overall calibration in the large and calibration slope were at their ideal values for the standard model, but not for the marginal risk calculation. In contrast, the calibration parameters assessed within clusters were on average more accurate for the marginal risk calculation (−0.00 and 1.00) than for the standard model (−0.18 and 1.18). 
The evaluation of the standard model and the marginal risk calculation in the source population (test performance) showed similar results to the evaluation in the study sample (apparent performance) (Table 4).\nSimulation results in a domain with ICC = 15%, Pearson correlation between X1 and random effect 0.0\n* With calibration slope equal to 1 (i.e. calibration in the large).\n† overall performance (2.5 and 97.5 percentiles).\n‡ median of overall performance from 100 simulations (median of within cluster performances).\nFurther, the apparent and test performance showed that, in data with an ICC of 15% or 30%, the standard deviations of the calibration in the large within clusters were higher for the standard model and the marginal risk calculation (e.g. 0.94 and 1.01, respectively) than in data with an ICC of 5% (0.44 and 0.45, respectively) (Tables 4 and 3). So, when predictions were based on models neglecting center-specific effects, the agreement between observed and predicted incidences within centers varied among centers, especially in data with a high ICC. The (standard deviations of the) c-indexes within clusters were not influenced by a higher ICC.\nTables 5 and 6 show the results of simulations investigating the influence of the number of centers on model performance. Especially when the number of centers is low (e.g. 5 centers), it is more difficult to estimate accurate random intercepts and corresponding center effects, which potentially affects the performance of the random intercept logistic regression model. However, as in Table 3, the performance of the standard model and the marginal risk calculation was similar, and the conditional risk calculation performed most accurately.\nSimulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0, number of patients 100, number of centers 5\n* With calibration slope equal to 1 (i.e. calibration in the large).\n† overall performance (2.5 and 97.5 percentiles).\n‡ median of overall performance from 100 simulations (median of within cluster performances).\nSimulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0, number of patients 1000, number of centers 50\n* With calibration slope equal to 1 (i.e. calibration in the large).\n† overall performance (2.5 and 97.5 percentiles).\n‡ median of overall performance from 100 simulations (median of within cluster performances).\nThe differences in performance between the standard and random intercept models were smaller when the clustering (i.e. the center effect) was associated with one of the predictors (Additional file 1: Tables S1, S2, and S4). For ICC values of 15% or 30%, the performance of the standard model and the marginal risk calculation was better than in datasets without an association between the clustering and a predictor. Finally, we compared the standard and random intercept model in data with a low incidence of the outcome Y (3%) (Additional file 1: Table S5). We sampled in total 1000 patients clustered in 20 centers. The performance of the models was similar. Only the calibration intercept of the marginal risk calculation showed an underestimation of the outcome incidence (intercept 0.14). 
The calibration intercept within clusters of the conditional risk calculation had a lower variance (0.00) compared to the other models.", "PONV: Postoperative nausea and vomiting; c-index: Concordance index; ICC: Intraclass correlation coefficient", "All authors (W Bouwmeester, JWR Twisk, TH Kappen, WA van Klei, KGM Moons, Y Vergouwe) state that they have no conflict of interests.", "Study design and analysis: WB, YV. Drafting manuscript: WB. Study design and reviewing the manuscript: JWRT, KGMM, THK, WAvK, YV. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2288/13/19/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Prediction of postoperative nausea and vomiting", "Model development and risk calculation", "Model evaluation", "Simulation study", "Results", "Prediction of postoperative nausea and vomiting", "Simulation study", "Discussion", "Conclusion", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history", "Supplementary Material" ]
[ "Many clinical prediction models are being developed. Diagnostic prediction models combine patient characteristics and test results to predict the presence or absence of a certain diagnosis. Prognostic prediction models predict the future occurrence of outcomes [1].\nStudy data that are used for model development are frequently clustered within e.g. centers or treating physician [2]. For instance, patients treated in a particular center may be more alike compared to patients treated in another center due to differences in treatment policies. As a result, patients treated in the same center are dependent (clustered), rather than independent. Regression techniques that take clustering into account [3-6] are frequently used in cluster randomized trials and in etiologic research with subjects clustered within e.g. neighborhoods or countries. Surprisingly, such regression models were hardly used in research aimed at developing prediction models [2].\nNotably from the domains of therapeutic and causal studies, it has been shown that regression methods that take the clustering into account yield different estimates of the regression coefficients than standard regression techniques neglecting the clustered data structure [5,7,8]. However, in the domain of prediction studies, it is yet unknown to what extent regression methods for clustered data need to be used in the presence of clustered data. Two types of models can be used for analyzing clustered data: marginal models and conditional models [9]. Marginal models, such as the Generalized Estimation Equation (GEE) method, adjust for the clustering nature of data and estimate the standard error of the estimated parameters correctly. The interpretation of GEE results is on the higher cluster level and therefore not suitable for predictions in individual patients [10]. Conditional models estimate predictor effects for patients in the specific clusters. Conditioning on cluster is often done with random effects to save degrees of freedom. Thereby, conditional models allow for predictions of outcomes on the lowest clustering level (here, the patient). Accurate predictions are not necessarily achieved with a random effects model (having different regression parameters compared to a standard model), because the random effects are not readily applicable in new data with new clusters and the clustering in the data may be weak resulting in minor differences between the models.\nWe explore the effect of including a random intercept in a conditional prediction model compared to standard regression. We use empirical and simulated clustered data to assess the performance of the prediction models. We show that model calibration is suboptimal, particularly when applied in new subjects, if clustering is not accounted for in the prediction model development.", " Prediction of postoperative nausea and vomiting A frequently occurring side effect of surgery is postoperative nausea and vomiting (PONV). To prevent or treat PONV, a risk model was developed to predict PONV within 24 hours after surgery [11]. We used a cohort of 1642 consecutive surgical patients (development sample) that were treated in the UMC Utrecht to develop prediction models. Patients were clustered within 19 treating anesthesiologists [12]. Predictors for the occurrence of PONV included gender, age, history of PONV or motion sickness, current smoking, abdominal or middle ear surgery versus other type of surgery and the use of volatile anesthetics during surgery [13]. 
Data of 1458 patients from the same center were used to study the validity of the prediction models. Patients included in the validation sample were treated by 19 other anesthesiologists than the patients from the development sample.\n Model development and risk calculation The prediction models included all aforementioned predictors and were fitted with standard logistic regression or with a random intercept logistic regression model (also known as a partial multilevel model). The standard model was fitted with a generalized linear model with a logit link function. The intercept and predictors were included as fixed effects (i.e. not varying by cluster). The random effect models thus included fixed effects for the predictors plus a random intercept for the effects of clusters (anesthesiologists, or centers in the simulation study). The random intercept was assumed to be normally distributed with mean zero and variance $\sigma^2_{u0}$ [14].\nThe predicted risks of PONV for individual patients were calculated with the log-odds transformation of the linear predictor. The risk based on the standard logistic regression model was:\n$$P(Y_i = 1 \mid X_i) = \frac{1}{1 + \exp\left[-\left(\hat{\alpha}_{\mathrm{standard}} + \sum_{m=1}^{6} x_{im}\,\hat{\beta}_{\mathrm{standard},m}\right)\right]}$$\nwhere $P(Y_i = 1)$ is the predicted risk that patient $i$ will get PONV, given the patient's predictor values $X_i$. The linear predictor consists of $\hat{\alpha}_{\mathrm{standard}}$, the estimated intercept of the standard model, and $\sum_{m} x_{im}\hat{\beta}_{\mathrm{standard},m}$, the sum of the products of the six predictor values of patient $i$ and the six regression coefficients.\nFrom the random intercept logistic regression model, predicted risks were calculated in two ways. The first risk calculation was based on only the fixed effects of the random intercept logistic regression model (called the marginal risk calculation):\n$$P(Y_i = 1 \mid X_i) = \frac{1}{1 + \exp\left[-\left(\hat{\alpha}_{\mathrm{RE}} + \sum_{m=1}^{6} x_{im}\,\hat{\beta}_{\mathrm{RE},m}\right)\right]}$$\nwhere $\hat{\alpha}_{\mathrm{RE}}$ equals the fixed intercept and $\sum_{m} x_{im}\hat{\beta}_{\mathrm{RE},m}$ is the sum of the products of the six predictor values of patient $i$ and the corresponding fixed regression coefficients of the random effects model. However, the cluster effects were not used for this risk calculation [15]. 
We explicitly studied this risk calculation since cluster effects are unknown for patients in clusters that are not included in the development data.\nThe second risk calculation used the fixed and random effects of the random intercept logistic regression model (called the conditional risk calculation):\n$$P(Y_{ij} = 1 \mid X_{ij}) = \frac{1}{1 + \exp\left[-\left(\hat{\alpha}_{\mathrm{RE}} + \sum_{m=1}^{6} x_{im}\,\hat{\beta}_{\mathrm{RE},m} + u_{0j}\right)\right]}$$\nThis risk calculation included the same predictor effects as the marginal risk calculation, plus the random intercept $u_{0j}$ (i.e. the effect of anesthesiologist $j$). This risk calculation cannot be used in new data of patients treated by new anesthesiologists, since the random effect of the new anesthesiologist is unknown. 
\n Model evaluation Apparent and test performance of the prediction models were assessed. Apparent performance is the performance of the prediction model in the development data. Test performance was assessed in a cohort of 1458 new patients treated by 19 other anesthesiologists than the patients from the development sample.\nThe predictive performance of each of the risk calculations was assessed with the concordance index (c-index) [16], the calibration slope and the calibration in the large [17,18]. The calibration slope was estimated with standard logistic regression analysis, modeling the outcome of interest as the dependent variable and the linear predictor as the independent variable. Calibration in the large was assessed as the intercept of a logistic regression model with the linear predictor as an offset variable. The ideal values of the calibration in the large and the calibration slope are 0 and 1, respectively. Since standard performance measures ignore the clustered data structure, they can be considered overall measures. To take clustering into account in the model evaluation, we assessed the predictive performance in individual anesthesiologists (within cluster performance). The within cluster c-index was estimated as the average of the c-indices of the clusters, as described by van Oirbeek [19]. Within cluster calibration was assessed with mixed effect models, with random effects for the intercept and linear predictor (calibration slope) or only for the intercept (calibration in the large).\n Simulation study We generated a source population which included 100 centers. The number of patients per center was Poisson distributed, with a mean and variance varying per center according to the exponential function of a normal distribution (N(5.7, 0.3)). This resulted in a total of 30,556 patients and a median of 301 patients per center (range 155–552). 
The dichotomous outcome Y was predicted with 3 continuous (X1–X3) and 3 dichotomous variables (X4–X6). The three continuous predictors were independently drawn from a normal distribution, with a mean of 0 and standard deviations of 0.2, 0.4, and 1. The three dichotomous predictors were independently drawn from binomial distributions with incidences 0.2, 0.3, and 0.4. The regression coefficients of all predictors were 1. To introduce clustering of events, we generated a latent random effect from a normal distribution with mean 0 and variance 0.17. This corresponded to an intraclass correlation coefficient (ICC) of 5%, calculated as $\sigma^2_{u0}/(\sigma^2_{u0} + \pi^2/3)$, where $\sigma^2_{u0}$ equals the second-level variance estimated with a random intercept logistic regression model [6]. Based on the six predictors and the latent random effect, the linear predictor lp was calculated for each patient. The linear predictor was normally distributed with mean −1.06 and standard deviation 1.41, and was transformed to outcome probabilities using the formula P(Y) = 1/(1 + exp(−lp)). The outcome value Y (1 or 0) was then generated by comparing P(Y) with an independently generated variable u having a uniform distribution from 0 to 1, using the rule Y = 1 if u ≤ P(Y), and Y = 0 otherwise. The incidence of the outcome (P(Y = 1)) was 30% in all source populations, except for the situation with a low number of events (incidence = 3%). Further, we varied several parameters in the source population as described above: we studied ICC values of 5%, 15% and 30%, and Pearson correlation coefficients between predictor X1 and the random intercept of 0.0 or 0.4.\nStudy samples were drawn according to the practice of data collection in a multicenter setting [20,21]. We randomly drew study samples from the source population in two stages: first we sampled 20 centers, and then we sampled in total 1000 patients from the included centers (two-stage sampling). We also studied the performance in study samples with 5 or 50 centers (including 100 and 1000 patients in total, respectively). Standard and random intercept logistic regression models were fitted in the study sample, and evaluated in that study sample (apparent performance) and in the whole source population (test performance). The whole process (sampling from the source population, model development and evaluation) was repeated 100 times.\nCalculations were performed with R version 2.11.1 [22]. We used the lmer function from the lme4 library to perform mixed effect regression analyses [23]. The lrm function of the Design package was used to fit the standard model and estimate overall performance measures [24]. 
", "A frequently occurring side effect of surgery is postoperative nausea and vomiting (PONV). To prevent or treat PONV, a risk model was developed to predict PONV within 24 hours after surgery [11]. We used a cohort of 1642 consecutive surgical patients (development sample) treated in the UMC Utrecht to develop prediction models. Patients were clustered within 19 treating anesthesiologists [12]. Predictors for the occurrence of PONV included gender, age, history of PONV or motion sickness, current smoking, abdominal or middle ear surgery versus other types of surgery, and the use of volatile anesthetics during surgery [13]. Data of 1458 patients from the same center were used to study the validity of the prediction models. Patients included in the validation sample were treated by 19 other anesthesiologists than the patients from the development sample.", "The prediction models included all aforementioned predictors and were fitted with standard logistic regression or with a random intercept logistic regression model (also known as a partial multilevel model). The standard model was fitted with a generalized linear model with a logit link function. The intercept and predictors were included as fixed effects (i.e. not varying by cluster). 
The random effect models thus included fixed effects for the predictors plus a random intercept for the effects of clusters (anesthesiologists, or centers in the simulation study). The random intercept was assumed to be normally distributed with mean zero and variance σ²u0 [14].

The predicted risks of PONV for individual patients were calculated with the log-odds transformation of the linear predictor. The risk based on the standard logistic regression model was:

P(Yi = 1 | Xi) = 1/(1 + exp(−(α̂standard + Σm=1..6 xim·β̂standard,m)))

where P(Yi = 1) is the predicted risk that patient i will get PONV, given the patient's predictor values Xi. The linear predictor consists of α̂standard, the estimated intercept of the standard model, and Σ xim·β̂standard,m, the sum of the products of the six predictor values of patient i and the six regression coefficients.

From the random intercept logistic regression model, predicted risks were calculated in two ways. The first risk calculation was based on only the fixed effects of the random intercept logistic regression model (called the marginal risk calculation):

P(Yi = 1 | Xi) = 1/(1 + exp(−(α̂RE + Σm=1..6 xim·β̂RE,m)))

where α̂RE equals the fixed intercept and Σ xim·β̂RE,m is the sum of the products of the six predictor values of patient i and the corresponding fixed regression coefficients of the random effects model. The cluster effects were not used for this risk calculation [15]. We explicitly studied this risk calculation since cluster effects are unknown for patients in clusters that are not included in the development data.

The second risk calculation used the fixed and random effects of the random intercept logistic regression model (called the conditional risk calculation):

P(Yij = 1 | Xij) = 1/(1 + exp(−(α̂RE + Σm=1..6 xim·β̂RE,m + u0j)))

This risk calculation included the same predictor effects as the marginal risk calculation, plus the random intercept u0j (i.e. the effect of anesthesiologist j). This risk calculation cannot be used in new data of patients treated by new anesthesiologists, since the random effect of the new anesthesiologist is unknown.
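For illustration, the model fitting and the three risk calculations can be sketched in R. This assumes a data frame dat with outcome Y, predictors X1-X6 and a cluster identifier center; note that current versions of lme4 fit the mixed logistic model with glmer(), whereas the paper used the older lmer() interface available in R 2.11.1.

    library(lme4)  # glmer() fits the random intercept logistic regression model

    f_std <- glm(Y ~ X1 + X2 + X3 + X4 + X5 + X6, family = binomial, data = dat)
    f_re  <- glmer(Y ~ X1 + X2 + X3 + X4 + X5 + X6 + (1 | center),
                   family = binomial, data = dat)

    # Standard model risks
    p_std <- predict(f_std, type = "response")

    # Marginal risk calculation: fixed effects of the random intercept model only
    lp_marg <- drop(model.matrix(f_std) %*% fixef(f_re))
    p_marg  <- plogis(lp_marg)

    # Conditional risk calculation: fixed effects plus the estimated random
    # intercepts u0j; only possible for clusters present in the development data
    p_cond <- predict(f_re, type = "response")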
Apparent and test performance of the prediction models were assessed. Apparent performance is the performance of the prediction model in the development data. Test performance was assessed in a cohort of 1458 new patients treated by 19 other anesthesiologists than the patients from the development sample.

The predictive performance of each of the risk calculations was assessed with the concordance index (c-index) [16], the calibration slope and calibration in the large [17,18]. The calibration slope was estimated with standard logistic regression analysis, modeling the outcome of interest as dependent variable and the linear predictor as independent variable. Calibration in the large was assessed as the intercept of a logistic regression model with the linear predictor as offset variable. The ideal values of the calibration in the large and calibration slope are 0 and 1, respectively. Since standard performance measures ignore the clustered data structure, they can be considered overall measures. To take clustering into account in the model evaluation, we also assessed the predictive performance in individual anesthesiologists (within cluster performance). The within cluster c-index was estimated as the average of the c-indices of the clusters, as described by van Oirbeek [19]. Within cluster calibration was assessed with mixed effect models, with random effects for the intercept and linear predictor (calibration slope) or only for the intercept (calibration in the large).
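A sketch of these performance measures in R is shown below, assuming a vector of predicted risks p, outcomes Y and cluster labels center. somers2() from the Hmisc package is used as a convenient stand-in for the c-index computation; the authors used lrm from the Design package (the predecessor of rms).

    library(Hmisc)  # somers2() returns the concordance index as element "C"
    library(lme4)

    lp <- qlogis(p)  # linear predictor (log odds of the predicted risks)

    # Overall measures
    c_overall <- somers2(p, Y)["C"]                               # c-index
    cal_slope <- coef(glm(Y ~ lp, family = binomial))[2]          # ideal value: 1
    cal_large <- coef(glm(Y ~ offset(lp), family = binomial))[1]  # ideal value: 0

    # Within cluster c-index: average of the cluster-specific c-indices
    c_by_cluster <- tapply(seq_along(Y), center, function(i) {
      if (length(unique(Y[i])) < 2) return(NA)  # undefined if one outcome level
      somers2(p[i], Y[i])["C"]
    })
    c_within <- mean(unlist(c_by_cluster), na.rm = TRUE)

    # Within cluster calibration: mixed model with random intercept and slope;
    # the spread of the random effects reflects between-cluster miscalibration
    cal_mix <- glmer(Y ~ lp + (1 + lp | center), family = binomial,
                     data = data.frame(Y = Y, lp = lp, center = center))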
Prediction of postoperative nausea and vomiting

The incidence of PONV was 37.5% (616/1642) in the development cohort (Table 1). The 19 anesthesiologists treated on average 82 surgical patients (median 64, range 1–460). The incidence of PONV per anesthesiologist varied (interquartile range 29%–47%), with an ICC of 3.2%. The corresponding variance of the random intercept (0.15) was significantly different from 0 (confidence interval 0.07–0.33).

Table 1. Distribution of predictor values and outcome, and predictor effects in multivariable logistic regression models
* interquartile ranges of the predictor distributions per anesthesiologist.
† logistic regression coefficients with 95 percent confidence intervals. Intervals of the random intercept model were based on the t-distribution.
‡ mean (standard deviation).
§ number (percentage) of patients with PONV, rather than the intercept, is reported.
PONV = postoperative nausea and vomiting.
†† non-parametric confidence interval of the random intercept variance was obtained from a cluster bootstrap procedure.

When the clustering by anesthesiologist was taken into account (random intercept model), the predictive effect of the type of anesthetics used (volatile yes/no) differed from that in the standard multivariable model (Table 1). Where predictor distributions varied among anesthesiologists, as indicated by a wide interquartile range (Table 1), differences in predictor effects were found between the standard and random intercept models. The variance of the random intercept was 0.15 when the six predictors were included in the model. Consequently, random intercepts ranged from −0.49 to 0.50. Figure 1 shows the variation in predicted risks by the three risk calculations. Applying the three risk calculations for the prediction of PONV resulted in different predicted risks for the individual patients.

Figure 1 A-C. Predicted probabilities from the standard model and from the risk calculations based on the random intercept model. The predicted risks differed among the models. The diagonal indicates the line of identity (predicted probabilities of the two models are equal).

The risk calculation that included fixed and random effects (the conditional risk calculation) showed the best overall discriminative ability in the development data (Table 2, c-index 0.69). The discriminative ability of the standard model was similar to that of risks based only on the fixed effects of the random intercept model (the marginal risk calculation) (c-index 0.66 for both). The difference in discriminative ability among the standard model and the marginal and conditional risk calculations disappeared when the c-index was estimated within each anesthesiologist, because the random anesthesiologist effects included in the conditional risk calculation only contribute to discrimination between patients treated by different anesthesiologists.

Table 2. Apparent and test performance of the PONV models described in Table 1
† overall performance (standard error).
‡ within anesthesiologist performance (standard deviation).
* with calibration slope equal to 1 (i.e. calibration in the large).

The standard model showed excellent apparent calibration in the large when estimated with the overall performance estimates (Table 2). Apparent calibration in the large (overall) for the marginal risk calculation was almost optimal.
However, calibration in the large assessed within clusters showed that predicted and observed incidences differed for some anesthesiologists. The standard deviations of calibration in the large within clusters were 0.329 and 0.385 for the standard model and the marginal risk calculation, respectively (Table 2). The differences between predicted and observed incidences within some anesthesiologists were slightly smaller for the standard model than for the marginal risk calculation, because the predictors in the marginal risk calculation did not contain information on the anesthesiologist specific intercept, as these predictors were adjusted for the random anesthesiologist effects. For the conditional risk calculation, the calibration in the large within clusters and the corresponding standard deviation were close to 0, which means that the observed and predicted incidences were similar among all anesthesiologists. This is due to the inclusion of the random anesthesiologist effects in the risk calculation, which comprise an estimate of the cluster specific incidence.

The calibration slope assesses the correlation between predicted and observed risks for all patients (overall performance), or for patients treated by a particular anesthesiologist (within cluster performance). The overall and within cluster calibration slopes of the marginal risk calculation were slightly smaller than the calibration slopes of the standard model. The overall and within cluster calibration slopes of the conditional risk calculation were >1 in the development data (i.e. predicted risks lower than observed risks), because the anesthesiologist effects were shrunken towards the average effect by the random intercept model. The standard deviations of the calibration slopes within clusters were limited for all models, indicating that the observed and predicted risks differed similarly among the anesthesiologists (Table 2).

The standard model and the marginal risk calculation had similar test performance, estimated in patients treated by 19 other anesthesiologists (overall c-index 0.68 and 0.67, respectively) (Table 2). The test performance as evaluated with the overall and within cluster c-indexes was even higher than the apparent performance. Possible reasons are stronger true predictor effects in the test data, differences in case-mix, and randomness [25]. (The models were not re-calibrated in the external data.) The overall and within cluster calibration in the large were too high for both models in the external data, indicating that the predicted risks were lower than the observed proportions of PONV. The calibration slopes from the standard model were larger than the slopes from the random intercept model, as was also shown in the apparent validation.
Simulation study

The simulation study also showed similar overall discriminative ability for the standard model and the marginal risk calculation (apparent performance, c-index 0.79 for both, for ICC = 5%, Table 3). Discrimination of the conditional risk calculation was slightly better than that of the standard model and the marginal risk calculation (c-index 0.82, 2.5 and 97.5 percentiles 0.79; 0.84). The apparent calibration intercept and slope were ideal for the standard model when assessing the overall performance, and ideal for the marginal risk calculation when assessing the within cluster performance. As in the empirical data, the overall and within cluster calibration slopes were too high for the conditional risk calculation due to shrinkage of the random center effects. Variation in the calibration in the large estimates within centers was lowest for the conditional risk calculation (calibration in the large and corresponding standard deviation both 0) (Table 3). The test performance of the standard model and the marginal risk calculation, as assessed in the source population, showed that the performance of these models was similar (Table 3). The difference between the apparent and test performance can be interpreted as optimism in model performance. Optimism in overall and within cluster performance was similar for the standard model and the marginal risk calculation.
For instance, the difference between apparent and test calibration slopes (overall) was 0.04 for both models (Table 3). The risks for patients clustered within different centers were similar in these data (ICC 5%), which means that including center effects in the prediction model (i.e. a random intercept model) cannot improve predictive performance considerably. Consequently, the performance of the models was similar in the data with small differences in risks among centers.

Table 3. Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0
* with calibration slope equal to 1 (i.e. calibration in the large).
† overall performance (2.5 and 97.5 percentiles).
‡ median of overall performance from 100 simulations (median of within cluster performances).

The similarity in performance among the models disappeared when the ICC was 15% or 30% (Table 4, Additional file 1: Table S3). The discriminative ability of the conditional risk calculation was better than that of the standard model and the marginal risk calculation. The apparent overall c-indexes and corresponding 2.5%; 97.5% ranges were 0.85 (0.82; 0.87), 0.77 (0.74; 0.80) and 0.77 (0.73; 0.80), respectively (Table 4). Assessment of the apparent performance of the standard model and the marginal risk calculation showed that the c-indexes remained similar in data with a higher ICC; the calibration parameters, however, differed. The overall calibration in the large and calibration slope were equal to the line of identity for the standard model, but not for the marginal risk calculation. However, the calibration parameters assessed within clusters were on average more accurate for the marginal risk calculation (−0.00 and 1.00) than for the standard model (−0.18 and 1.18). The evaluation of the standard model and the marginal risk calculation in the source population (test performance) showed results similar to the evaluation in the study sample (apparent performance) (Table 4).

Table 4. Simulation results in a domain with ICC = 15%, Pearson correlation between X1 and random effect 0.0
* with calibration slope equal to 1 (i.e. calibration in the large).
† overall performance (2.5 and 97.5 percentiles).
‡ median of overall performance from 100 simulations (median of within cluster performances).

Further, the apparent and test performance showed that, in data with an ICC of 15% or 30%, the standard deviations of the calibration in the large within clusters for the standard model and the marginal risk calculation were higher (e.g. 0.94 and 1.01, respectively) than the standard deviations in data with an ICC of 5% (0.44 and 0.45, respectively) (Tables 4 and 3). So, when predictions were based on models neglecting center specific effects, the agreement between the observed and predicted incidences within centers differed among centers, especially in data with a high ICC. The (standard deviations of the) c-indexes within clusters were not influenced by a higher ICC.

Tables 5 and 6 show the results of simulations investigating the influence of the number of centers on model performance. Especially when the number of centers is low (e.g. 5 centers), it is more difficult to estimate accurate random intercepts and corresponding center effects. This potentially affects the performance of the random intercept logistic regression model.
However, as in Table 3, the performance of the standard model and the marginal risk calculation was similar, and the conditional risk calculation had the most accurate performance.

Table 5. Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0, number of patients 100, number of centers 5
* with calibration slope equal to 1 (i.e. calibration in the large).
† overall performance (2.5 and 97.5 percentiles).
‡ median of overall performance from 100 simulations (median of within cluster performances).

Table 6. Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0, number of patients 1000, number of centers 50
* with calibration slope equal to 1 (i.e. calibration in the large).
† overall performance (2.5 and 97.5 percentiles).
‡ median of overall performance from 100 simulations (median of within cluster performances).

The differences in performance between the standard and random intercept models were smaller when the clustering (i.e. the center effect) was associated with one of the predictors (Additional file 1: Tables S1, S2, and S4). For ICC values of 15% or 30%, the performance of the standard model and the marginal risk calculation was better than in datasets without an association between the clustering and a predictor. Finally, we compared the standard and random intercept models in data with a low incidence of the outcome Y (3%) (Additional file 1: Table S5). We sampled in total 1000 patients clustered in 20 centers. The performance of the models was similar. Only the calibration intercept of the marginal risk calculation showed an underestimation of the outcome incidence (intercept 0.14). The calibration intercept within clusters of the conditional risk calculation had a lower variance (0.00) than the other models.
We compared standard logistic regression with random intercept logistic regression for the development of clinical prediction models in clustered data. Our example with empirical data showed similar overall discriminative ability for the standard model and the random intercept model if the cluster specific effects (estimated by a random intercept) were not used in the risk calculation. If the cluster specific effects of the random intercept model could be used for predictions, the overall discrimination and the calibration in the large within clusters improved. This was confirmed in the simulation studies. The quality of model calibration depended on how the calibration was estimated. Prediction models developed with standard regression showed better overall calibration than random intercept prediction models using the average cluster effect, but the latter showed better calibration within clusters than the standard models.

Predicted risks from the random intercept model were calculated using only the fixed predictor effects (the marginal risk calculation), or both the fixed predictor effects and the random cluster effects (the conditional risk calculation). The conditional risk calculation, which included the same fixed predictor effects as the marginal risk calculation, showed the highest overall discrimination, apparently due to the inclusion of cluster specific information in the prediction model. The conditional risk calculation could only be applied and evaluated in the development data; to evaluate this risk calculation in subjects from new clusters, the new cluster effects would have to be known.

The differences that we found in calibration parameters between the standard model and the random intercept logistic regression model (used with either the marginal or the conditional risk calculation) diminished when the cluster effect was correlated with one of the predictors (Pearson correlation coefficient between cluster and X1 = 0.4, see Additional file 1: Tables S1, S2 and S4). In particular, the standard deviations of the calibration in the large within clusters were lower in data with correlation (e.g. 0.110 for the apparent performance of the standard model, ICC 5%) than in simulated data without correlation between the cluster effect and predictor X1 (standard deviation 0.444). So, the predicted and observed incidences within several clusters were in better agreement in data with the correlation. Due to the correlation of predictor and cluster, the predictor X1 contained information on the cluster and was able to predict (partly) the cluster specific incidence, hence improving the calibration in the large within clusters.

Until now, random effects models have mainly been used to obtain correct estimates of intervention effects in cluster randomized trials [5,26], of causal effects in multilevel etiologic studies [27,28], and more recently in meta-analysis with individual patient data. The focus in such studies is on the effect estimate of one main factor, usually the intervention or exposure, which may be adjusted for confounding factors. In prediction studies, the focus is on all estimated predictor effects. All effects combined in the linear predictor result in an absolute risk estimate.
Indeed, we found in our data that the predictor effects for PONV were different in the random intercept logistic regression model compared with the standard model (Table 1). The different predictor effects, however, did not result in clear improvements in model performance (discrimination and calibration) between the marginal risk calculation and the standard model. This may be the result of the relatively weak clustering (ICC = 3.2%) in our empirical data. The simulations showed that particularly model calibration within clusters was better for the random intercept logistic regression models than for the standard model if the data were more strongly clustered (ICC = 15%).

The differences in overall and within cluster performance measures, both discrimination and calibration measures, raise the question of which estimates are preferable in the presence of clustered data. Measures that assess the within cluster performance (i.e. performance per cluster) will probably be more informative than overall measures, since prediction models are to be used by individual clinicians or treatment centers. Reporting the variability of the within cluster performance measures found in the original development data set indicates to future users whether the model performance will differ widely among centers. Wide differences would imply that the model may need updating for individual centers with center specific information.

In summary, we compared prediction models that were developed with random intercept or standard logistic regression analysis in clustered data. Adding the cluster effect to a prediction model increases the amount of predictive information, resulting in improved overall discriminative ability and calibration in the large within clusters. Particularly if cluster effects are relatively strong (ICC larger than 5%), prediction modeling with inclusion of the cluster effect in the model will result in better performance than models not including cluster specific effects.

PONV: Postoperative nausea and vomiting; c-index: Concordance index; ICC: Intraclass correlation coefficient

All authors (W Bouwmeester, JWR Twisk, TH Kappen, WA van Klei, KGM Moons, Y Vergouwe) state that they have no conflict of interests.

Study design and analysis: WB, YV. Drafting manuscript: WB. Study design and reviewing the manuscript: JWRT, KGMM, THK, WAvK, YV. All authors read and approved the final manuscript.

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/13/19/prepub

Additional file 1: Table S1 Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.4 (apparent performance). Table S2 Simulation results in a domain with ICC = 15%, Pearson correlation between X1 and random effect 0.4. Table S3 Simulation results in a domain with ICC = 30%, Pearson correlation between X1 and random effect 0.0. Table S4 Simulation results in a domain with ICC = 30%, Pearson correlation between X1 and random effect 0.4. Table S5 Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0, outcome incidence 3% in 1000 patients. (DOC 83 kb)
[ null, "methods", null, null, null, null, "results", null, null, "discussion", "conclusions", null, null, null, null, "supplementary-material" ]
[ "Logistic regression analysis", "Prediction model with random intercept", "Validation" ]
Background: Many clinical prediction models are being developed. Diagnostic prediction models combine patient characteristics and test results to predict the presence or absence of a certain diagnosis. Prognostic prediction models predict the future occurrence of outcomes [1]. Study data that are used for model development are frequently clustered within e.g. centers or treating physicians [2]. For instance, patients treated in a particular center may be more alike than patients treated in another center due to differences in treatment policies. As a result, patients treated in the same center are dependent (clustered), rather than independent. Regression techniques that take clustering into account [3-6] are frequently used in cluster randomized trials and in etiologic research with subjects clustered within e.g. neighborhoods or countries. Surprisingly, such regression models have hardly been used in research aimed at developing prediction models [2]. Notably in the domains of therapeutic and causal studies, it has been shown that regression methods that take the clustering into account yield different estimates of the regression coefficients than standard regression techniques neglecting the clustered data structure [5,7,8]. However, in the domain of prediction studies, it is as yet unknown to what extent regression methods for clustered data need to be used in the presence of clustered data. Two types of models can be used for analyzing clustered data: marginal models and conditional models [9]. Marginal models, such as the Generalized Estimating Equations (GEE) method, adjust for the clustered nature of the data and estimate the standard errors of the estimated parameters correctly. The interpretation of GEE results is on the higher cluster level and therefore not suitable for predictions in individual patients [10]. Conditional models estimate predictor effects for patients in the specific clusters. Conditioning on cluster is often done with random effects to save degrees of freedom. Thereby, conditional models allow for predictions of outcomes on the lowest clustering level (here, the patient). Accurate predictions are not necessarily achieved with a random effects model (having different regression parameters compared to a standard model), because the random effects are not readily applicable in new data with new clusters, and the clustering in the data may be weak, resulting in minor differences between the models. We explore the effect of including a random intercept in a conditional prediction model compared with standard regression. We use empirical and simulated clustered data to assess the performance of the prediction models. We show that model calibration is suboptimal, particularly when the model is applied in new subjects, if clustering is not accounted for during prediction model development.
Data of 1458 patients from the same center were used to study the validity of the prediction models. Patients included in the validation sample were treated by 19 other anesthesiologists than the patients from the development sample. A frequently occurring side effect of surgery is postoperative nausea and vomiting (PONV). To prevent or treat PONV, a risk model was developed to predict PONV within 24 hours after surgery [11]. We used a cohort of 1642 consecutive surgical patients (development sample) that were treated in the UMC Utrecht to develop prediction models. Patients were clustered within 19 treating anesthesiologists [12]. Predictors for the occurrence of PONV included gender, age, history of PONV or motion sickness, current smoking, abdominal or middle ear surgery versus other type of surgery and the use of volatile anesthetics during surgery [13]. Data of 1458 patients from the same center were used to study the validity of the prediction models. Patients included in the validation sample were treated by 19 other anesthesiologists than the patients from the development sample. Model development and risk calculation The prediction models included all before mentioned predictors and were fitted with standard logistic regression or with a random intercept logistic regression model (also known as partial multilevel model). The standard model was fitted with a generalized linear model, including a logit link function. The intercept and predictors were included as fixed effects (i.e. not varying by cluster). The random effect models thus included fixed effects for the predictors plus a random intercept for the effects of clusters (anesthesiologists or centers in the simulation study). The random intercept was assumed to be normally distributed with mean zero, and variance σ2u0[14]. The predicted risks of PONV for individual patients were calculated with the log-odds transformation of the linear predictor. The risk based on the standard logistic regression model was: P Y i = 1 | X i = 1 1 + exp − α ^ standard + ∑ m ∈ 1 , … , 6 x im ⋅ β ^ standard , m where P(Yi = 1) is the predicted risk that a patient i will get PONV, given patients’ predictor values X. The linear predictor consist of âstandard which equals the estimated intercept of the standard model, and ∑xim⋅β^standard,m which is the sumproduct of the six predictor values of patient i and the six regression coefficients. From the random intercept logistic regression model, predicted risks were calculated in two ways. The first risk calculation was based on only the fixed effects of the random intercept logistic regression model (called marginal risk calculation): P Y i = 1 | X i = 1 1 + exp − α ^ RE + ∑ m ∈ 1 , … , 6 x im ⋅ β ^ RE , m where âRE equals the fixed intercept and ∑xim⋅β^RE,m is the sumproduct of the six predictor values of patient i and the corresponding fixed regression coefficients of the random effects model. However, the cluster effects were not used for the risk calculation [15]. We explicitly studied this risk calculation since cluster effects are unknown for patients in clusters that are not included in the development data. The second risk calculation used the fixed and random effects of the random intercept logistic regression model (called conditional risk calculation):PYij=1|Xij=11+exp−α^RE+∑m∈1,…,6xim⋅β^RE,m+u0j This risk calculation included the same predictor effects as the marginal risk calculation, plus the random intercept u0j (i.e. the effect of anesthesiologist j). 
Model evaluation Apparent and test performance of the prediction models were assessed. Apparent performance is the performance of the prediction model in the development data. Test performance was assessed in a cohort of 1458 new patients treated by 19 other anesthesiologists than the patients in the development sample. The predictive performance of each of the risk calculations was assessed with the concordance index (c-index) [16], the calibration slope, and the calibration in the large [17,18]. The calibration slope was estimated with standard logistic regression analysis, modeling the outcome of interest as dependent variable and the linear predictor as independent variable. Calibration in the large was assessed as the intercept of a logistic regression model with the linear predictor as offset variable. The ideal values of the calibration in the large and the calibration slope are 0 and 1, respectively. Since standard performance measures ignore the clustered data structure, they can be considered overall measures. To take clustering into account in the model evaluation, we also assessed the predictive performance within individual anesthesiologists (within-cluster performance). The within-cluster c-index was estimated as the average of the c-indices of the clusters, as described by van Oirbeek [19]. Within-cluster calibration was assessed with mixed effects models, with random effects for the intercept and the linear predictor (calibration slope) or only for the intercept (calibration in the large).
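The overall performance measures can be computed along the following lines; again a sketch under the same hypothetical names, using somers2 from the Hmisc package for the c-index (the paper used the lrm function of the Design package, the predecessor of rms). The mixed-model estimation of within-cluster calibration is omitted for brevity.

```r
library(Hmisc)  # for somers2()

# Linear predictor and predicted risks of, e.g., the marginal risk calculation
# in a validation set `val`; allow.new.levels is harmless here because
# re.form = NA ignores the random effects anyway
lp <- predict(fit_re, newdata = val, type = "link", re.form = NA,
              allow.new.levels = TRUE)
p  <- plogis(lp)

# Overall c-index (concordance between predicted risks and observed outcomes)
c_overall <- somers2(p, val$ponv)["C"]

# Overall calibration slope: coefficient of the linear predictor
slope <- coef(glm(val$ponv ~ lp, family = binomial))["lp"]

# Overall calibration in the large: intercept with the linear predictor as offset
citl <- coef(glm(val$ponv ~ 1 + offset(lp), family = binomial))["(Intercept)"]

# Within-cluster c-index: average of the cluster-specific c-indices; the c-index
# is undefined in clusters with only events or only non-events
c_by_cluster <- tapply(seq_along(p), val$anesthesiologist, function(i) {
  if (length(unique(val$ponv[i])) < 2) NA else somers2(p[i], val$ponv[i])["C"]
})
c_within <- mean(unlist(c_by_cluster), na.rm = TRUE)
```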
Simulation study We generated a source population of 100 centers. The number of patients per center was Poisson distributed, with a mean and variance varying per center according to the exponential of a normal distribution (N(5.7, 0.3)). This resulted in a total of 30,556 patients and a median of 301 patients per center (range 155–552). The dichotomous outcome Y was predicted by 3 continuous (X1-X3) and 3 dichotomous variables (X4-X6). The three continuous predictors were independently drawn from normal distributions with mean 0 and standard deviations 0.2, 0.4, and 1. The three dichotomous predictors were independently drawn from binomial distributions with incidences 0.2, 0.3, and 0.4. The regression coefficients of all predictors were 1. To introduce clustering of events, we generated a latent random effect from a normal distribution with mean 0 and variance 0.17. This corresponded to an intraclass correlation coefficient (ICC) of 5%, calculated as $\text{ICC} = \sigma^2_{u_0} / (\sigma^2_{u_0} + \pi^2/3)$, where $\sigma^2_{u_0}$ equals the second-level variance estimated with a random intercept logistic regression model [6]. Based on the six predictors and the latent random effect, the linear predictor lp was calculated for each patient. The linear predictor was normally distributed with mean −1.06 and standard deviation 1.41. The linear predictor was transformed to probabilities for the outcome using the formula $P(Y) = 1/(1 + \exp(-lp))$. The outcome value Y (1 or 0) was then generated by comparing P(Y) with an independently generated variable u having a uniform distribution from 0 to 1, using the rule Y = 1 if u ≤ P(Y) and Y = 0 otherwise. The incidence of the outcome (P(Y = 1)) was 30% in all source populations, except for the scenario with a low number of events (incidence = 3%).
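The data-generating mechanism can be reproduced approximately as follows. This is a sketch, since the original simulation code is not available; in particular, the fixed intercept of −1.96 is our own back-calculation (the dichotomous predictors contribute 0.2 + 0.3 + 0.4 = 0.9 to the mean of the linear predictor, giving the reported mean of about −1.06).

```r
set.seed(2013)
n_centers <- 100

# Cluster sizes: Poisson, with a center-specific mean equal to the exponential
# of a N(5.7, 0.3) draw (median size around 300 patients)
size   <- rpois(n_centers, lambda = exp(rnorm(n_centers, 5.7, 0.3)))
center <- rep(seq_len(n_centers), size)
n      <- length(center)

# Three continuous and three dichotomous predictors, all with coefficient 1
X <- cbind(rnorm(n, 0, 0.2), rnorm(n, 0, 0.4), rnorm(n, 0, 1),
           rbinom(n, 1, 0.2), rbinom(n, 1, 0.3), rbinom(n, 1, 0.4))

# Latent random center effect; variance 0.17 corresponds to an ICC of about 5%
icc <- 0.17 / (0.17 + pi^2 / 3)  # ~0.049
u0  <- rnorm(n_centers, mean = 0, sd = sqrt(0.17))

# Linear predictor: since all coefficients are 1, the fixed part is rowSums(X)
lp <- -1.96 + rowSums(X) + u0[center]

# Outcome: transform to probabilities and draw Y = 1 with probability P(Y)
p_y <- plogis(lp)               # 1 / (1 + exp(-lp))
y   <- as.integer(runif(n) <= p_y)
```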
Further, we varied several parameters of the source population. We studied ICC values of 5%, 15%, and 30%; the Pearson correlation coefficient between predictor X1 and the random intercept was 0.0 or 0.4. Study samples were drawn according to the practice of data collection in a multicenter setting [20,21]. We randomly drew study samples from the source population in two stages: first we sampled 20 centers, and then we sampled 1000 patients in total from the included centers (two-stage sampling). We also studied the performance in study samples with 5 or 50 centers (including 100 and 1000 patients in total, respectively). Standard and random intercept logistic regression models were fitted in the study sample, and evaluated in that study sample (apparent performance) and in the whole source population (test performance). The whole process (sampling from the source population, model development, and evaluation) was repeated 100 times. Calculations were performed with R version 2.11.1 [22]. We used the lmer function from the lme4 library to perform mixed effects regression analyses [23]. The lrm function of the Design package was used to fit the standard model and estimate overall performance measures [24].
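Continuing the sketch above, one replication of the two-stage sampling and model fitting could look as follows; the data frame construction and the 20-center/1000-patient sizes follow the base scenario, and everything else (names, looping strategy) is our assumption.

```r
library(lme4)

pop <- data.frame(center = factor(center), X, y)
names(pop)[2:7] <- paste0("X", 1:6)

# Two-stage sampling: first 20 centers, then 1000 patients from those centers
centers_in <- sample(levels(pop$center), 20)
eligible   <- pop[pop$center %in% centers_in, ]
study      <- eligible[sample(nrow(eligible), 1000), ]

# Fit both models in the study sample
fit_std <- glm(y ~ X1 + X2 + X3 + X4 + X5 + X6, family = binomial, data = study)
fit_re  <- glmer(y ~ X1 + X2 + X3 + X4 + X5 + X6 + (1 | center),
                 family = binomial, data = study)

# Apparent performance: evaluate in the study sample; test performance:
# evaluate e.g. the marginal risk calculation in the whole source population
p_app  <- predict(fit_re, type = "response", re.form = NA)
p_test <- predict(fit_re, newdata = pop, type = "response", re.form = NA,
                  allow.new.levels = TRUE)
# (wrap the above in a loop of 100 replications and summarize the measures)
```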
Results: Prediction of postoperative nausea and vomiting The incidence of PONV was 37.5% (616/1642) in the development cohort (Table 1). The 19 anesthesiologists treated on average 82 surgical patients (median 64, range 1–460). The incidence of PONV per anesthesiologist varied (interquartile range 29%–47%), with an ICC of 3.2%. The corresponding variance of the random intercept (0.15) was significantly different from 0 (confidence interval 0.07–0.33). Table 1. Distribution of predictor values and outcome, and predictor effects in multivariable logistic regression models. Footnotes: * interquartile ranges of the predictor distributions per anesthesiologist. † logistic regression coefficients with 95 percent confidence intervals; intervals of the random intercept model were based on the t-distribution. ‡ mean (standard deviation). § the number (percentage) of patients with PONV is reported, rather than the intercept. PONV = postoperative nausea and vomiting. †† the non-parametric confidence interval of the random intercept variance was obtained from a cluster bootstrap procedure.
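The paper does not detail the cluster bootstrap behind the †† footnote. One plausible implementation, under the same hypothetical data names as in the sketches above, resamples whole anesthesiologist clusters with replacement, refits the random intercept model, and takes percentile limits of the estimated variance; the number of replicates (here 200) is our assumption.

```r
library(lme4)

boot_var <- replicate(200, {
  ids  <- sample(unique(dev$anesthesiologist), replace = TRUE)
  boot <- do.call(rbind, lapply(seq_along(ids), function(k) {
    d <- dev[dev$anesthesiologist == ids[k], ]
    d$anesthesiologist <- k  # relabel so a cluster drawn twice stays distinct
    d
  }))
  fit <- glmer(ponv ~ gender + age + prior_ponv + smoking + surgery_type +
               volatile + (1 | anesthesiologist), family = binomial, data = boot)
  as.numeric(VarCorr(fit)$anesthesiologist)  # random intercept variance
})
quantile(boot_var, c(0.025, 0.975))  # non-parametric 95% confidence interval
```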
When the clustering by anesthesiologist was taken into account (random intercept model), the predictive effect of the type of anesthetics used (volatile yes/no) differed from that in the standard multivariable model (Table 1). If predictor distributions varied among anesthesiologists, as indicated by a wide interquartile range (Table 1), differences in predictor effects were found between the standard and random intercept models. The variance of the random intercept was 0.15 when the six predictors were included in the model; consequently, the random intercepts ranged from −0.49 to 0.50. Figure 1 shows the variation in predicted risks by the three risk calculations. Applying the three risk calculations for the prediction of PONV resulted in different predicted risks for individual patients. (Figure 1A-C: predicted probabilities from the standard model and from the risk calculations based on the random intercept model; the predicted risks differed among the models; the diagonal indicates the line of identity, on which the predicted probabilities of the two models are equal.) The risk calculation that included fixed and random effects (the conditional risk calculation) showed the best overall discriminative ability in the development data (Table 2, c-index 0.69). The discriminative ability of the standard model was similar to that of risks based only on the fixed effects of the random intercept model (the marginal risk calculation) (c-index 0.66 for both). The difference in discriminative ability between the standard model and the marginal and conditional risk calculations disappeared when the c-index was estimated within each anesthesiologist, because the random anesthesiologist effects included in the conditional risk calculation only contribute to discrimination between patients treated by different anesthesiologists. Table 2. Apparent and test performance of the PONV models described in Table 1. Footnotes: † overall performance (standard error). ‡ within-anesthesiologist performance (standard deviation). * with calibration slope equal to 1 (i.e. calibration in the large). The standard model showed excellent apparent calibration in the large when estimated with the overall performance estimates (Table 2). Apparent calibration in the large (overall) for the marginal risk calculation was almost optimal. However, calibration in the large assessed within clusters showed that predicted and observed incidences differed for some anesthesiologists. The standard deviations of the calibration in the large within clusters were 0.329 and 0.385 for the standard model and the marginal risk calculation, respectively (Table 2). The differences between predicted and observed incidences within some anesthesiologists were slightly smaller for the standard model than for the marginal risk calculation, because the predictors in the marginal risk calculation did not contain information on the anesthesiologist-specific intercept, as these predictor effects were adjusted for the random anesthesiologist effects. For the conditional risk calculation, the calibration in the large within clusters and the corresponding standard deviation were close to 0, which means that the observed and predicted incidences were similar for all anesthesiologists. This is due to the inclusion of the random anesthesiologist effects in the risk calculation, which comprise an estimate of the cluster-specific incidence.
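The estimated anesthesiologist effects behind these numbers can be inspected directly from the fitted random intercept model, continuing the hypothetical sketch introduced in the Methods section:

```r
# Estimated random intercepts u_0j of the 19 anesthesiologists; in the
# development data these were reported to range from about -0.49 to 0.50
u0_hat <- ranef(fit_re)$anesthesiologist[, "(Intercept)"]
range(u0_hat)

# Conditional minus marginal predicted risks shows the per-anesthesiologist
# shift that the random intercept adds for each patient
summary(p_cond - p_marg)
```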
The calibration slope assesses the relation between predicted and observed risks for all patients (overall performance), or for the patients treated by a particular anesthesiologist (within-cluster performance). The overall and within-cluster calibration slopes of the marginal risk calculation were slightly smaller than the calibration slopes of the standard model. The overall and within-cluster calibration slopes of the conditional risk calculation were >1 in the development data (i.e. predicted risks lower than observed risks), because the anesthesiologist effects were shrunken towards the average effect by the random intercept model. The standard deviations of the calibration slopes within clusters were limited for all models, indicating that the observed and predicted risks differed similarly among the anesthesiologists (Table 2). The standard model and the marginal risk calculation had similar test performance, estimated in patients treated by 19 other anesthesiologists (overall c-index 0.68 and 0.67, respectively) (Table 2). The test performance as evaluated with the overall and within-cluster c-indexes was even higher than the apparent performance. Possible reasons are stronger true predictor effects in the test data, differences in case-mix, and randomness [25]. (The models were not re-calibrated in the external data.) The overall and within-cluster calibration in the large were too high for both models in the external data, indicating that the predicted risks were lower than the observed proportions of PONV. The calibration slopes of the standard model were larger than the slopes of the random intercept model, as was also seen in the apparent validation.
Simulation study The simulation study also showed similar overall discriminative ability for the standard model and the marginal risk calculation (apparent performance, c-index 0.79 for both, ICC = 5%, Table 3). Discrimination of the conditional risk calculation was slightly better than that of the standard model and the marginal risk calculation (c-index 0.82, 2.5 and 97.5 percentiles 0.79; 0.84). The apparent calibration intercept and slope were ideal for the standard model when assessing the overall performance, and ideal for the marginal risk calculation when assessing the within-cluster performance. As in the empirical data, the overall and within-cluster calibration slopes were too high for the conditional risk calculation due to shrinkage of the random center effects. Variation in the calibration in the large estimates within centers was lowest for the conditional risk calculation (calibration in the large and corresponding standard deviation both 0) (Table 3). The test performance of the standard model and the marginal risk calculation, as assessed in the source population, was similar (Table 3). The difference between the apparent and test performance can be interpreted as optimism in model performance. Optimism in overall and within-cluster performance was similar for the standard model and the marginal risk calculation; for instance, the difference between apparent and test calibration slopes (overall) was 0.04 for both models (Table 3). The risks for patients clustered within different centers were similar in these data (ICC 5%), which means that including center effects in the prediction model (i.e. a random intercept model) cannot considerably improve predictive performance. Consequently, the performance of the models was similar in data with small differences in risks among centers. Table 3. Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0. Footnotes: * with calibration slope equal to 1 (i.e. calibration in the large). † overall performance (2.5 and 97.5 percentiles). ‡ median of overall performance from 100 simulations (median of within-cluster performances). The similarity in performance among the models disappeared when the ICC was 15% or 30% (Table 4, Additional file 1: Table S3). The discriminative ability of the conditional risk calculation was more accurate than that of the standard model and the marginal risk calculation: the apparent overall c-indexes and corresponding 2.5%; 97.5% ranges were 0.85 (0.82; 0.87), 0.77 (0.74; 0.80), and 0.77 (0.73; 0.80), respectively (Table 4).
Assessment of the apparent performance of the standard model and the marginal risk calculation showed that the c-indexes remained similar in data with a higher ICC; the calibration parameters, however, differed. The overall calibration in the large and calibration slope were equal to the line of identity for the standard model, but not for the marginal risk calculation. The calibration parameters assessed within clusters, however, were on average more accurate for the marginal risk calculation (−0.00 and 1.00) than for the standard model (−0.18 and 1.18). The evaluation of the standard model and the marginal risk calculation in the source population (test performance) showed similar results to the evaluation in the study sample (apparent performance) (Table 4). Table 4. Simulation results in a domain with ICC = 15%, Pearson correlation between X1 and random effect 0.0. Footnotes: * with calibration slope equal to 1 (i.e. calibration in the large). † overall performance (2.5 and 97.5 percentiles). ‡ median of overall performance from 100 simulations (median of within-cluster performances). Further, the apparent and test performance showed that, in data with an ICC of 15% or 30%, the standard deviations of the calibration in the large within clusters for the standard model and the marginal risk calculation were higher (e.g. 0.94 and 1.01, respectively) than the standard deviations in data with an ICC of 5% (0.44 and 0.45, respectively) (Tables 4 and 3). Thus, when predictions were based on models neglecting center-specific effects, the agreement between observed and predicted incidences within centers varied among centers, especially in data with a high ICC. The (standard deviations of the) c-indexes within clusters were not influenced by a higher ICC. Tables 5 and 6 show the results of simulations investigating the influence of the number of centers on model performance. Especially when the number of centers is low (e.g. 5 centers), it is more difficult to estimate accurate random intercepts and the corresponding center effects, which potentially affects the performance of the random intercept logistic regression model. However, as in Table 3, the performance of the standard model and the marginal risk calculation was similar, and the conditional risk calculation had the most accurate performance. Table 5. Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0, 100 patients in 5 centers. Table 6. Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and random effect 0.0, 1000 patients in 50 centers. Footnotes for both tables: * with calibration slope equal to 1 (i.e. calibration in the large). † overall performance (2.5 and 97.5 percentiles). ‡ median of overall performance from 100 simulations (median of within-cluster performances). The differences in performance between the standard and random intercept models were smaller when the clustering (i.e. the center effect) was associated with one of the predictors (Additional file 1: Tables S1, S2, and S4). For ICC values of 15% or 30%, the performance of the standard model and the marginal risk calculation was better than in datasets without an association between the clustering and a predictor.
Finally, we compared the standard and random intercept models in data with a low incidence of the outcome Y (3%) (Additional file 1: Table S5). We sampled in total 1000 patients clustered in 20 centers. The performance of the models was similar; only the calibration intercept of the marginal risk calculation showed an underestimation of the outcome incidence (intercept 0.14). The calibration intercept within clusters of the conditional risk calculation had a lower variance (0.00) than those of the other models.
Discussion: We compared standard logistic regression with random intercept logistic regression for the development of clinical prediction models in clustered data. Our example with empirical data showed similar overall discriminative ability of the standard model and the random intercept model if the cluster-specific effects (estimated by a random intercept) were not used in the risk calculation. If the cluster-specific effects of the random intercept model could be used for predictions, the overall discrimination and the calibration in the large within clusters improved. This was confirmed in the simulation studies. The quality of model calibration depended on how the calibration was estimated. Prediction models developed with standard regression showed better overall calibration than random intercept models using the average cluster effect, but the latter showed better calibration within clusters. Predicted risks from the random intercept model were calculated using only the fixed predictor effects (the marginal risk calculation), or both the fixed predictor effects and the random cluster effects (the conditional risk calculation).
The conditional risk calculation, including the same fixed predictor effects as the marginal risk calculation, showed the highest overall discrimination, apparently due to inclusion of the cluster specific information in the prediction model. The conditional risk calculation could only be applied and evaluated in the development data. To evaluate this risk calculation in subjects from new clusters, the new cluster effects would have to be known. The differences that we found in calibration parameters between the standard model and the random intercept logistic regression model (used with either the marginal or the conditional risk calculation) diminished when the cluster effect was correlated with one of the predictors (Pearson correlation coefficient between cluster and X1 = 0.4, see Additional file 1: Tables S1, S2 and S4). In particular, the standard deviations of the calibration in the large within clusters were lower in data with correlation (e.g. 0.110 for the apparent performance of the standard model, ICC 5%) than in simulated data without correlation between the cluster effect and predictor X1 (standard deviation 0.444). Hence, the predicted and observed incidences within clusters agreed better in data with the correlation. Due to the correlation of predictor and cluster, the predictor X1 contained information on the cluster and could partly predict the cluster specific incidence, thereby improving the calibration in the large within clusters. Until now, random effects models have mainly been used to obtain correct estimates of intervention effects in cluster randomized trials [5,26], of causal effects in multilevel etiologic studies [27,28] and, more recently, in meta-analyses with individual patient data. The focus in such studies is on the effect estimate of one main factor, usually the intervention or exposure, which may be adjusted for confounding factors. In prediction studies, the focus is on all estimated predictor effects: combining all effects in the linear predictor results in an absolute risk estimate. Indeed, we found in our data that the predictor effects for PONV were different in the random intercept logistic regression model compared to the standard model (Table 1). The different predictor effects, however, did not result in clear improvements in model performance (discrimination and calibration) of the marginal risk calculation over the standard model. This may be the result of the relatively weak clustering (ICC = 3.2%) in our empirical data. The simulations showed that particularly model calibration within clusters was better for the random intercept logistic regression models than for the standard model if the data were more strongly clustered (ICC = 15%). The differences in overall and within cluster performance measures – both discrimination and calibration measures – raise the question which estimates are preferable in the presence of clustered data. Measures that assess the within cluster performance – i.e. performance per cluster – will probably be more informative than overall measures, since prediction models are to be used by individual clinicians or treatment centers. Reporting the variability of the within cluster performance measures found in the original development data set indicates to future users whether the model performance will differ widely among centers. Wide differences would imply that the model may need updating for individual centers with center specific information.
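Such center specific updating can be as simple as re-estimating the intercept for one center while keeping the fixed predictor effects as an offset. The sketch below illustrates this on simulated data; the names and values are hypothetical and not taken from the study.

```python
# Minimal sketch of intercept updating for a single (new) center.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data from one new center: the fixed-effect linear predictor
# from the previously developed model, and the observed outcomes.
lp_fixed_new = rng.normal(-1.0, 1.0, 200)
y_new = rng.binomial(1, 1 / (1 + np.exp(-(lp_fixed_new + 0.5))))  # true center effect 0.5

# Re-estimate only the intercept, keeping the fixed effects as an offset.
fit = sm.GLM(y_new, np.ones((200, 1)), family=sm.families.Binomial(),
             offset=lp_fixed_new).fit()
center_effect = fit.params[0]            # should recover roughly 0.5

p_updated = 1 / (1 + np.exp(-(lp_fixed_new + center_effect)))
```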
Conclusion: In summary, we compared prediction models that were developed with random intercept or standard logistic regression analysis in clustered data. Adding the cluster effect to a prediction model increases the amount of predictive information, resulting in improved overall discriminative ability and calibration in the large within clusters. Particularly if cluster effects are relatively strong (ICC larger than 5%), prediction modeling with inclusion of the cluster effect will result in better performance than models not including cluster specific effects. Abbreviations: PONV: Postoperative nausea and vomiting; c-index: Concordance index; ICC: Intraclass correlation coefficient Competing interests: All authors (W Bouwmeester, JWR Twisk, TH Kappen, WA van Klei, KGM Moons, Y Vergouwe) state that they have no conflicts of interest. Authors’ contributions: Study design and analysis: WB, YV. Drafting manuscript: WB. Study design and reviewing the manuscript: JWRT, KGMM, THK, WAvK, YV. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/13/19/prepub Supplementary Material: Table S1 Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and the random effect 0.4 (apparent performance). Table S2 Simulation results in a domain with ICC = 15%, Pearson correlation between X1 and the random effect 0.4. Table S3 Simulation results in a domain with ICC = 30%, Pearson correlation between X1 and the random effect 0.0. Table S4 Simulation results in a domain with ICC = 30%, Pearson correlation between X1 and the random effect 0.4. Table S5 Simulation results in a domain with ICC = 5%, Pearson correlation between X1 and the random effect 0.0, outcome incidence 3% in 1000 patients. (DOC 83 kb)
Background: When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which interest focuses on predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates differ. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods: Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models with either standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted to the clustered data structure were estimated. Results: The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the model developed with standard regression, whereas calibration measures adapted to the clustered data structure showed good calibration for the prediction model with random intercept. Conclusions: The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
Background: Many clinical prediction models are being developed. Diagnostic prediction models combine patient characteristics and test results to predict the presence or absence of a certain diagnosis. Prognostic prediction models predict the future occurrence of outcomes [1]. Study data that are used for model development are frequently clustered within e.g. centers or treating physicians [2]. For instance, patients treated in a particular center may be more alike than patients treated in another center due to differences in treatment policies. As a result, patients treated in the same center are dependent (clustered), rather than independent. Regression techniques that take clustering into account [3-6] are frequently used in cluster randomized trials and in etiologic research with subjects clustered within e.g. neighborhoods or countries. Surprisingly, such regression models have hardly been used in research aimed at developing prediction models [2]. Notably in the domains of therapeutic and causal studies, it has been shown that regression methods that take the clustering into account yield different estimates of the regression coefficients than standard regression techniques neglecting the clustered data structure [5,7,8]. In the domain of prediction studies, however, it is as yet unknown to what extent regression methods for clustered data need to be used. Two types of models can be used for analyzing clustered data: marginal models and conditional models [9]. Marginal models, such as the Generalized Estimating Equation (GEE) method, adjust for the clustered nature of the data and estimate the standard errors of the estimated parameters correctly. The interpretation of GEE results is on the higher cluster level and therefore not suitable for predictions in individual patients [10]. Conditional models estimate predictor effects for patients in the specific clusters. Conditioning on cluster is often done with random effects to save degrees of freedom. Thereby, conditional models allow for predictions of outcomes on the lowest clustering level (here, the patient). Accurate predictions are not necessarily achieved with a random effects model (which has different regression parameters than a standard model), because the random effects are not readily applicable in new data with new clusters, and the clustering in the data may be weak, resulting in only minor differences between the models. We explore the effect of including a random intercept in a conditional prediction model compared to standard regression. We use empirical and simulated clustered data to assess the performance of the prediction models. We show that model calibration is suboptimal, particularly when applied in new subjects, if clustering is not accounted for in the prediction model development. Conclusion: In summary, we compared prediction models that were developed with random intercept or standard logistic regression analysis in clustered data. Adding the cluster effect to a prediction model increases the amount of predictive information, resulting in improved overall discriminative ability and calibration in the large within clusters. Particularly if cluster effects are relatively strong (ICC larger than 5%), prediction modeling with inclusion of the cluster effect will result in better performance than models not including cluster specific effects.
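For readers who want to reproduce the flavor of such simulations, the sketch below generates clustered binary data with a chosen latent-scale ICC, using the standard relation ICC = σ²/(σ² + π²/3) for a random intercept logistic model; the sample sizes and coefficients are illustrative, not the exact settings of the study.

```python
# Minimal sketch, with illustrative (not the study's) parameter values.
import numpy as np

rng = np.random.default_rng(1)
icc = 0.15                                  # target ICC (5%, 15% or 30% in the paper)
s2 = icc / (1 - icc) * np.pi**2 / 3         # random intercept variance: ICC = s2/(s2 + pi^2/3)
n_centers, n_per_center = 20, 50

center = np.repeat(np.arange(n_centers), n_per_center)
u = rng.normal(0.0, np.sqrt(s2), n_centers) # center (cluster) effects
x1 = rng.normal(size=center.size)           # patient-level predictor
lp = -1.0 + 0.8 * x1 + u[center]            # hypothetical intercept and effect
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lp)))
# A standard logistic model fitted to (y, x1) ignores u; a random intercept
# model estimates both the fixed effect of x1 and the center effects u.
```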
Background: When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which interest focuses on predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates differ. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods: Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models with either standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted to the clustered data structure were estimated. Results: The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the model developed with standard regression, whereas calibration measures adapted to the clustered data structure showed good calibration for the prediction model with random intercept. Conclusions: The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
13,397
363
[ 479, 157, 549, 260, 608, 1062, 1258, 19, 31, 40, 16 ]
16
[ "model", "performance", "standard", "risk", "random", "calibration", "risk calculation", "calculation", "intercept", "effects" ]
[ "regression techniques clustering", "patients specific clusters", "clinical prediction", "development clinical prediction", "regression methods clustered" ]
[CONTENT] Logistic regression analysis | Prediction model with random intercept | Validation [SUMMARY]
[CONTENT] Logistic regression analysis | Prediction model with random intercept | Validation [SUMMARY]
[CONTENT] Logistic regression analysis | Prediction model with random intercept | Validation [SUMMARY]
[CONTENT] Logistic regression analysis | Prediction model with random intercept | Validation [SUMMARY]
[CONTENT] Logistic regression analysis | Prediction model with random intercept | Validation [SUMMARY]
[CONTENT] Logistic regression analysis | Prediction model with random intercept | Validation [SUMMARY]
[CONTENT] Adult | Aged | Analgesics, Opioid | Calibration | Cluster Analysis | Female | Humans | Logistic Models | Male | Middle Aged | Models, Statistical | Postoperative Nausea and Vomiting | Predictive Value of Tests | Reproducibility of Results | Risk Assessment | Surgical Procedures, Operative | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Analgesics, Opioid | Calibration | Cluster Analysis | Female | Humans | Logistic Models | Male | Middle Aged | Models, Statistical | Postoperative Nausea and Vomiting | Predictive Value of Tests | Reproducibility of Results | Risk Assessment | Surgical Procedures, Operative | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Analgesics, Opioid | Calibration | Cluster Analysis | Female | Humans | Logistic Models | Male | Middle Aged | Models, Statistical | Postoperative Nausea and Vomiting | Predictive Value of Tests | Reproducibility of Results | Risk Assessment | Surgical Procedures, Operative | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Analgesics, Opioid | Calibration | Cluster Analysis | Female | Humans | Logistic Models | Male | Middle Aged | Models, Statistical | Postoperative Nausea and Vomiting | Predictive Value of Tests | Reproducibility of Results | Risk Assessment | Surgical Procedures, Operative | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Analgesics, Opioid | Calibration | Cluster Analysis | Female | Humans | Logistic Models | Male | Middle Aged | Models, Statistical | Postoperative Nausea and Vomiting | Predictive Value of Tests | Reproducibility of Results | Risk Assessment | Surgical Procedures, Operative | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Analgesics, Opioid | Calibration | Cluster Analysis | Female | Humans | Logistic Models | Male | Middle Aged | Models, Statistical | Postoperative Nausea and Vomiting | Predictive Value of Tests | Reproducibility of Results | Risk Assessment | Surgical Procedures, Operative | Treatment Outcome [SUMMARY]
[CONTENT] regression techniques clustering | patients specific clusters | clinical prediction | development clinical prediction | regression methods clustered [SUMMARY]
[CONTENT] regression techniques clustering | patients specific clusters | clinical prediction | development clinical prediction | regression methods clustered [SUMMARY]
[CONTENT] regression techniques clustering | patients specific clusters | clinical prediction | development clinical prediction | regression methods clustered [SUMMARY]
[CONTENT] regression techniques clustering | patients specific clusters | clinical prediction | development clinical prediction | regression methods clustered [SUMMARY]
[CONTENT] regression techniques clustering | patients specific clusters | clinical prediction | development clinical prediction | regression methods clustered [SUMMARY]
[CONTENT] regression techniques clustering | patients specific clusters | clinical prediction | development clinical prediction | regression methods clustered [SUMMARY]
[CONTENT] model | performance | standard | risk | random | calibration | risk calculation | calculation | intercept | effects [SUMMARY]
[CONTENT] model | performance | standard | risk | random | calibration | risk calculation | calculation | intercept | effects [SUMMARY]
[CONTENT] model | performance | standard | risk | random | calibration | risk calculation | calculation | intercept | effects [SUMMARY]
[CONTENT] model | performance | standard | risk | random | calibration | risk calculation | calculation | intercept | effects [SUMMARY]
[CONTENT] model | performance | standard | risk | random | calibration | risk calculation | calculation | intercept | effects [SUMMARY]
[CONTENT] model | performance | standard | risk | random | calibration | risk calculation | calculation | intercept | effects [SUMMARY]
[CONTENT] models | clustered | data | regression | prediction | clustered data | clustering | conditional models | prediction models | model [SUMMARY]
[CONTENT] model | intercept | patients | random | risk | regression | predictor | standard | linear | risk calculation [SUMMARY]
[CONTENT] risk calculation | calculation | risk | performance | calibration | standard | model | marginal risk | marginal risk calculation | overall [SUMMARY]
[CONTENT] cluster | cluster effect | prediction | information resulting | effects relatively strong | better performance models including | better performance models | better performance | calibration large clusters particularly | effects relatively strong icc [SUMMARY]
[CONTENT] model | risk | standard | calibration | random | performance | calculation | risk calculation | intercept | cluster [SUMMARY]
[CONTENT] model | risk | standard | calibration | random | performance | calculation | risk calculation | intercept | cluster [SUMMARY]
[CONTENT] ||| ||| ||| [SUMMARY]
[CONTENT] 1642 | one | 19 ||| ||| 5% | 15% | 30% ||| Standard [SUMMARY]
[CONTENT] 0.69 | 0.66 ||| 0.68 | 0.67 ||| ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| 1642 | one | 19 ||| ||| 5% | 15% | 30% ||| Standard ||| ||| 0.69 | 0.66 ||| 0.68 | 0.67 ||| ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| 1642 | one | 19 ||| ||| 5% | 15% | 30% ||| Standard ||| ||| 0.69 | 0.66 ||| 0.68 | 0.67 ||| ||| ||| ||| [SUMMARY]
Adolescent blood pressure, body mass index and skin folds: sorting out the effects of early weight and length gains.
21325148
Although there is longstanding evidence of the short-term benefits of promoting rapid growth for young children in low-income settings, more recent studies suggest that early weight gain can also increase the risk of chronic diseases in adults. This paper attempts to separate the effects of early life weight and length/height gains on blood pressure, body mass index (BMI), sum of skin folds and subscapular/triceps skin fold ratio at 14-15 years of age.
BACKGROUND
The sample comprised 833 members of a prospective population-based birth cohort from Brazil. Conditional size (weight or height) analyses were used to express the difference between observed size at a given age and expected size based on a regression including all previous measures of the same anthropometric index. A positive conditional weight or height indicates growing faster than expected given prior size.
METHODS
Conditional weights at all age ranges were positively associated with most outcomes; each z-score of conditional weight at 4 years was associated with an increase of 6.1 mm in the sum of skin folds (95% CI 4.5 to 7.6) in adolescence after adjustment for conditional length/height. Associations of the outcomes with conditional length/height were mostly negative or non-significant; each z-score was associated with a reduction of 2.4 mm (95% CI -3.8 to -1.1) in the sum of skin folds after adjustment for conditional weight. No associations were found with the skin fold ratio.
RESULTS
The promotion of rapid length/height gain without excessive weight gain seems to be beneficial for long-term outcomes, but this requires confirmation from other studies.
CONCLUSION
[ "Adolescent", "Blood Pressure", "Body Height", "Body Mass Index", "Body Weight", "Brazil", "Child", "Child, Preschool", "Cohort Studies", "Female", "Humans", "Infant", "Male", "Prospective Studies", "Weight Gain" ]
3245895
Introduction
A wide body of evidence supports the short-term benefits of promoting rapid weight gain for young children in low-income settings.1–3 However, several authors have recently suggested that early weight gain can also increase the risk of chronic diseases in adults.4–6 Another consideration in this debate is that rapid weight gain in poor populations is positively associated with human capital in adults—including schooling, economic productivity and next-generation birth weight.7 8 This has been described as the ‘catch-up dilemma’.9 The first studies on the long-term consequences of early life exposures suggested that birth weight was inversely associated with the risk of chronic diseases in adulthood,10 but it has been argued that such an effect was primarily due to postnatal rather than prenatal weight trajectories.11 Further studies tried to establish whether there was a specific age range in which rapid weight gain was most strongly associated with beneficial7 8 or detrimental adult outcomes.12 13 Because weight gain in a given age range is not independent of previous weight gains,14–16 conditional analyses have more recently been proposed to overcome this lack of independence.14 Conditional size is defined as the amount by which the size at the end of a time interval exceeds that which would have been predicted at the beginning of the interval from previous measurements of the same anthropometric parameter—for example, weight or height. A positive conditional weight or height indicates growing faster than expected given prior size. An additional issue yet to be addressed is how to disentangle the long-term effects of early weight gain from those of length/height gain during the same age range in childhood. We have used this novel approach to analyse data from a birth cohort study in Southern Brazil in order to explore the independent effects of weight and length/height gains during different periods in the first 4 years of life on blood pressure and on measures of obesity and central fat distribution at 14–15 years of age.
Methods
Pelotas is a 340 000-inhabitant city located in the extreme south of Brazil near the border with Uruguay and Argentina. All hospital-born children in 1993 whose families were residents of the city (N=5265) were eligible for a birth cohort study; there were only 16 refusals.17 18 Low birth weight (<2500 g) children comprised 9.8% of the sample. All low birth weight children and a random 20% sample of the remaining children (N=1454) were followed up at home at the ages of 6, 12 and 48 months, and all cohort participants were visited at the age of 14–15 years. Analyses presented here are restricted to subjects located in these four waves of data collection and are weighted to represent the total population. Further details on the methodology of the 1993 Pelotas (Brazil) birth cohort study are available elsewhere.17 18 Birth weight was measured at the hospital by the study team using paediatric scales (Filizola, Sao Paulo, Brazil) with a precision of 10 g, and birth length with a locally made infantometer with a precision of 1 mm. Portable weighing scales were used to measure weight at home visits (PLENNA, São Paulo, Brazil). Length was measured at 6 and 12 months with the infantometer used for the birth measures and standing height was measured at 48 months. Measurements were converted into z-scores of weight for age (WAZ), length or height for age (HAZ) and body mass index (BMI; weight/height²) for age (BAZ) using the WHO Growth Standards.19 Outcome variables included systolic blood pressure, BMI, the sum of triceps and subscapular skin folds, and subscapular/triceps ratios at 14–15 years of age. Blood pressure was measured twice using a wrist digital Omron sphygmomanometer and the mean value was used in the analysis. A validation study using mercury sphygmomanometers as the gold standard showed that the quality of the measurements was adequate (mean difference 0.3 mm Hg).20 Participants were measured seated after a 10 min rest. We opted not to treat hypertension as a dichotomous outcome given its low prevalence in this age range. Skin folds were measured three times and the mean value used. The subscapular/triceps skin fold ratio was calculated by dividing the subscapular by the triceps skin fold measure and multiplying this ratio by 100. Interviewers were trained and standardised on weight, height and skin fold measurements within the margins of error of the National Center for Health Statistics.21 Standardisation sessions were repeated every 2 months during fieldwork, which took place from January to July 2008. Confounding variables included child sex, child skin colour, family socioeconomic level (based on an assets index divided into quintiles), adolescent pubertal status (Tanner's stages),22 maternal pre-gestational BMI and maternal smoking during pregnancy. We used conditional weight and length/height variables to express the component of weight (or length/height) at a given age that is uncorrelated with earlier measures.14 15 23 Conditional measures express how an individual child deviates from its own previous growth trajectory, thus expressing acceleration or deceleration in growth. These were calculated as the residuals from linear regressions of weight (or length/height) at a given age on all prior weights (or lengths/heights). For example, a positive residual at 48 months indicates that a child grew more rapidly in the 12–48 month age range than was predicted from his/her weight at birth, 6 and 12 months. Conditional size was expressed in z-scores using the local distribution.
The main advantage of this approach is that—unlike traditional analyses—conditional weight or height at a given age is independent of earlier weights or heights. In describing the results, we used conditional weight at a given age (eg, 12 months) interchangeably with weight gain in the preceding age range (eg, 6–12 months). Nevertheless, all analyses were repeated using the unconditional or crude weight gains, expressed as z-score changes using the WHO standards.19 The analyses were based on four models. First, we report the unadjusted effect of the growth variables on adolescent outcomes. Second, we adjust these analyses for confounding variables measured in infancy and childhood. The third model includes further adjustments for potential mediating variables—adolescent BMI and height. In the last model, conditional weight was adjusted for conditional length/height at the same age and vice versa. Analyses were conducted in Stata version 10.0 and the significance level was set at 5% for two-sided tests. All phases of the 1993 Pelotas (Brazil) birth cohort study were approved by the Federal University of Pelotas Ethics Committee. Written informed consent from the parents or caretakers was obtained prior to each wave of data collection.
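A minimal sketch of this conditional growth computation (not the authors' Stata code) is given below; the weight trajectories are simulated toy data, and the function standardizes the residual of each measure regressed on all prior measures.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Toy weight trajectories (kg) at birth, 6, 12 and 48 months; illustrative only.
w0 = rng.normal(3.2, 0.5, n)
w6 = 2.0 * w0 + rng.normal(1.5, 0.8, n)
w12 = 1.2 * w6 + rng.normal(1.0, 0.7, n)
w48 = 1.5 * w12 + rng.normal(2.0, 1.5, n)
weights = np.column_stack([w0, w6, w12, w48])

def conditional_z(measures):
    """Standardized residuals from OLS of each measure on all prior ones."""
    n_obs, t = measures.shape
    out = np.empty((n_obs, t))
    out[:, 0] = (measures[:, 0] - measures[:, 0].mean()) / measures[:, 0].std()
    for j in range(1, t):
        X = sm.add_constant(measures[:, :j])     # all earlier measurements
        resid = sm.OLS(measures[:, j], X).fit().resid
        out[:, j] = resid / resid.std()          # z-score; residual mean is 0
    return out

cond_w = conditional_z(weights)
# Conditional gains in successive periods are uncorrelated by construction:
print(np.corrcoef(cond_w, rowvar=False).round(2))
```

Because each residual is orthogonal to all earlier measures, conditional gains in successive periods are uncorrelated by construction, which is exactly the property exploited in the analyses.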
null
null
Discussion
Our findings suggest that early weight gain increases adolescent blood pressure and fatness, whereas early length/height gains are not associated with higher blood pressure and seem to protect against overweight and fatness in adolescents. These results emerged when length/height gain and weight gain were adjusted for one another. The finding that the effects of weight and height gains seem to go in opposite directions is of great interest for public health. Blood pressure is a challenging outcome for studies of the long-term consequences of early growth patterns because of its strong correlation with adult size.14 24 Using unconditional methods, previous studies reported that early weight gain was associated with higher blood pressure in adolescence and adulthood.25 26 In our length-adjusted analyses, putting on weight rapidly from 0–6 months was associated with higher blood pressure in adolescence, whereas later weight gain was not. A previous analysis of five cohorts in low and middle-income countries found that conditional weight at 12 months was positively associated with adult mean blood pressure and pre-hypertension in the confounder-adjusted analyses, but the associations were no longer significant after adjustment for adult height.14 Both BMI—which comprises fat and lean mass—and the sum of skin folds—which represents fat mass only—were positively associated with early weight gain. The regression coefficients suggest that weight gain from 12–48 months had a larger effect than earlier gains. It is reassuring that results on BMI and sum of skin folds are very consistent, as these are two independent measures of obesity. A previous analysis of a subsample of 9-year-olds from our cohort showed that while early weight gain was related to lean mass assessed through isotope dilution, later weight gain was associated with fat mass.27 Also using BMI and skin folds, Stettler and colleagues found that rapid weight gain during early infancy was associated with obesity in older children and young adults,12 and suggested that the first week of life was critical for BMI development.13 These studies did not report on the effects of height gain, nor did they use conditional growth modelling. Length gain from 0–6 months, when adjusted for weight gain in the same period, did not appear to increase blood pressure; if anything, it was associated with a borderline reduction (p=0.05) in systolic pressure. It was also associated with lower BMI and possibly with reduced skin folds. Length/height gains from 6–12 and 12–48 months were not associated with blood pressure, but showed negative associations with BMI and skin folds (not all of which were statistically significant), which suggests that putting on length/height without excessive weight gain is beneficial for body composition development. Central deposition of fat can be measured both as waist circumference (alone or as a ratio to hip/thigh circumference) and as subscapular-to-triceps skin fold ratios. Both measures have been linked to glucose intolerance, hypertension and coronary heart disease.28–32 In our adjusted analyses, the skin fold ratio was not associated with early weight or height gains.
This contradicts the findings of some studies,33 34 although there is no consensus on such observations.35 In some populations—for example in India—central fat deposition is already evident in infants.36 By adjusting early growth variables for adolescent BMI or height (model 2 in tables 3 and 4), we addressed the question to what extent the effects of early weight and height gains on later outcomes may be channelled through adolescent size—which itself is partly a result of early growth patterns. For example, putting on weight rapidly in early life leads to larger BMI and height in adolescence, and early linear growth is also associated with adolescent height and, to a lesser extent, with adolescent BMI (table 2). In our view, the most important results from the public health perspective are those unadjusted for adolescent size (eg, model 1 in tables 3 and 4, and the models shown in table 5) because these address what might be expected from early interventions. The present study is, to our knowledge, the first attempt to investigate long-term outcomes while mutually adjusting the effects of early weight and length/height gains. Also, by using the conditional method, we eliminated the correlation between growth variables in subsequent age ranges.15 16 A limitation of our cohort is the lack of measurements at the critical age of 2 years, which is widely regarded as the upper limit of the window of opportunity for preventing undernutrition.7 37 In low and middle-income countries, paediatric practice has included promoting rapid weight gain to prevent undernutrition and its harmful consequences.38 This is highly justified in societies where undernutrition is responsible for a large proportion of the burden of disease,39 but in the face of the nutrition transition there may be detrimental long-term consequences of rapid weight gain.7 Nevertheless, it has been proposed that in such societies, the benefits of rapid weight gain in the first 2 years regarding short-term morbidity and mortality outcomes, as well as long-term human capital outcomes, far outweigh its potential contribution to complex chronic diseases in adulthood.7 Our findings suggest a new dimension to this debate, namely that the promotion of rapid length/height gain without excessive weight gain may be beneficial for long-term outcomes. If our results are confirmed by other studies, it will be necessary to reassess the results of existing nutrition intervention studies to identify suitable strategies to achieve this growth pattern. Finally, our findings strongly support the need to monitor infant and child length/height in addition to the current practice of monitoring weight only.37 Although there is longstanding evidence of the short-term benefits of promoting rapid growth for young children in low-income settings, more recent studies suggest that early weight gain can also increase the risk of chronic diseases in adults. Supportive evidence on the benefits of rapid early weight gain in poor populations is provided by its positive association with human capital in adults. Rapid weight gain up to about the age of 2 years is more strongly associated with positive outcomes, whereas rapid weight gain in late childhood is associated with negative outcomes. Conditional weight gains in all different age ranges up to 4 years tended to be associated with higher blood pressure, BMI and skin folds.
In marked contrast, rapid length/height gains tended to afford protection against most of these outcomes, particularly those related to body composition. These results emerged when length/height gain and weight gain were adjusted for one another.
[ "Introduction", "Results", "Discussion" ]
[ "A wide body of evidence supports the short term benefits of promoting rapid weight gain for young children in low-income settings.1–3 However, recently, several authors suggest that early weight gain can also increase the risk of chronic diseases in adults.4–6 Another in this debate is that rapid weight gain in poor populations is positively associated with human capital in adults—including schooling, economic productivity and next-generation birth weight.7 8 This has been described as the ‘catch-up dilemma’.9\nThe first studies on the long-term consequences of early life exposures suggested that birth weight was inversely associated with the risk of chronic diseases in adulthood,10 but it has been argued that such effect was primarily due to postnatal instead of prenatal weight trajectories.11 Further studies tried to establish whether there was a specific age range in which rapid weight gain was most strongly associated with beneficial7 8 or detrimental adult outcomes.12 13\nBecause weight gain in a given age range is not independent from previous weight gains,14–16 conditional analyses have been more recently proposed to overcome the lack of independence.14 Conditional size is defined as the amount by which the size at the end of a time interval exceeds that which would have been predicted at the beginning of the interval from previous measurements of the same anthropometric parameter—for example, weight or height. A positive conditional weight or height indicates growing faster than expected given prior size. An additional issue yet to be addressed is how to disentangle the long-term effects of early weight gain from those of length/height gain during the same age range during childhood.\nWe have used this novel approach to analyse data from a birth cohort study in Southern Brazil in order to explore the independent effects of weight and length/height gains during different periods in the first 4 years of life on blood pressure and on measures of obesity and central fat distribution at 14–15 years of age.", "Follow-up rates at the 6 months and 1, 4 and 14–15 year follow-up visits of the 1993 Pelotas (Brazil) birth cohort were 96.8%, 93.4%, 87.2%, and 85.6%, respectively. Full datasets were available for 833 adolescents. Subjects included in the analysis were similar to the rest of the cohort: 47.7% versus 49.7% were men and 20.3% versus 20.1% belonged to the poorest wealth quintile at birth. Outcome variables were also similar in the two groups (systolic blood pressure: 119.2 vs 119.4 mm Hg; BMI: 21.3 vs 21.5 kg/m2). Table 1 describes the sample in terms of covariates, infancy and childhood weight and length/height gain and adolescent outcomes.\nDescriptive statistics on infancy, childhood and adolescent variables\n1993 Pelotas Birth Cohort, Brazil, 2008.\nBMI, body mass index.\nGrowth faltering was uncommon in the population under study. At the age of 4 years, 2.4% (according to WAZ) or 0.4% (according to BAZ) were underweight and 5.2% stunted (according to HAZ) compared to the WHO Child Growth Standards (below −2 SD relative to the median).19\nTable 2 presents the correlation matrix for weight and length/height gains. When expressed as z-scores, unconditional weight gain in a given age range tended to be inversely correlated to later weight gains; the same applied to length/height gain. Also, weight gain was positively correlated to length/height gain in the same age range. 
The use of conditional weights effectively removed the correlation between gains in successive periods with coefficients being equal to zero. Conditional weight in any period, and to a lesser extent conditional height, were associated with higher BMI at 14–15 years. For adolescent height, the associations with conditional weight were significant but weaker than those for conditional height.\nPearson correlation coefficients among infancy and childhood exposures and with BMI and height at 14–15 years\n1993 Pelotas Birth Cohort, Brazil, 2008.\n*p-value between 0.001 and 0.05; **p-value <0.001.\nBMI, body mass index; mo, months.\nTable 3 shows that early (0–6 months) and late (12–48 months) weight gains were positively associated with systolic blood pressure, whereas gains from 6–12 months were not. This was observed in the crude and confounder-adjusted analyses. Further adjustment for adolescent height and BMI must be interpreted with caution because these variables are likely part of the causal chain between early growth and adolescent outcomes. In these analyses, weight gain from 6–12 months became inversely associated with systolic blood pressure. Results were similar for diastolic pressure (data not shown). BMI and the sum of skin folds were positively associated with weight gain in all periods; the only exception is the lack of correlation with the sum of skin folds when potential mediating variables (adolescent BMI and height) are adjusted for. There were no significant associations with the skin fold ratio outcome in the adjusted models.\nAssociation between conditional weight gain in infancy and childhood and outcomes at age 14–15 years\n1993 Pelotas Birth Cohort, Brazil, 2008.\nModel 1: adjustment for sex, skin colour, puberty status, socioeconomic level, pre-gestational body mass index (BMI) and smoking during pregnancy.\nModel 2: adjustment for variables included in model 1 plus adolescent (14–15 years) BMI (except when the outcome was BMI) and height.\nChange in outcome per SD increase in conditional weight or length/height.\nβ, beta coefficient; BMI, body mass index; mo, months.\nResults for length/height gain (table 4) show that, after adjustment for confounders, late gains tend to be associated with all outcomes except subscapular/triceps ratio but not early gains. Patterns were not clear-cut after adjustment for adolescent BMI and height, which suggest that these variables mediate the effect of childhood height gain on adolescent outcomes. There were not significant associations with the skin fold ratio in the adjusted models.\nAssociation between conditional length/height gain in infancy and childhood and outcomes at age 14–15 years\n1993 Pelotas Birth Cohort, Brazil, 2008.\nModel 1: adjustment for sex, skin colour, puberty status, socioeconomic level, pre-gestational BMI and smoking during pregnancy.\nModel 2: adjustment for variables included in model 1 plus BMI (except when the outcome was BMI) and height at 14–15 years.\nChange in outcome per SD increase in conditional weight or length/height.\nβ, beta coefficient; BMI, body mass index; mo, months.\nTable 5 repeats the confounder-adjusted analyses (model 1 in tables 3 and 4) with further adjustment of weight gain for length/height gain and vice versa. Weight gains in all periods tended to remain positively associated with all outcomes except for the skin fold ratio. 
In marked contrast to weight gain, associations between length gain and both obesity outcomes were mostly negative particularly for the body composition outcomes. Further adjustment for height at 14–15 years did not change the direction or significance of the associations (data not shown).\nAssociations between conditional weight and length/height gain in infancy and childhood adjusted for each other and outcomes at age 14–15 years\n1993 Pelotas Birth Cohort, Brazil, 2008.\nAdjustment for sex, skin colour, puberty status, socioeconomic level, pre-gestational BMI, smoking during pregnancy and length/height gain in the same period.\nAdjustment for sex, skin colour, puberty status, socioeconomic level, pre-gestational BMI, smoking during pregnancy and weight gain in the same period.\nChange in outcome per SD increase in conditional weight or length/height.\nβ, beta coefficient; BMI, body mass index; mo, months.\nAll the analyses described above were repeated using as explanatory variables the crude, or unconditional weight and height gains in each period, rather than conditional analyses. Although as expected the magnitude of the coefficients were different in the two sets of analyses, the findings were very similar in terms of direction and statistical significance (data not shown).", "Our findings suggest that early weight gain increases adolescent blood pressure and fatness, whereas early length/height gains are not associated with higher blood pressure and seem to protect against overweight and fatness in adolescents. These results emerged when length/height gain and weight gain were adjusted for one another. The finding that the effects of weight and height gains seem to go in opposite directions is of great interest for public health.\nBlood pressure is a challenging outcome for studies of the long-term consequences of early growth patterns because of its strong correlation with adult size.14 24 Using unconditional methods, previous studies reported that early weight gain was associated with higher blood pressure in adolescence and adulthood.25 26 In our length-adjusted analyses, putting on weight rapidly from 0–6 months was associated with higher blood pressure in adolescence, whereas later weight gain was not. A previous analysis of five cohorts in low and middle-income countries found that conditional weight at 12 months was positively associated with adult mean blood pressure and pre-hypertension in the confounder-adjusted analyses, but the associations were no longer significant after adjustment for adult height.14\nBoth BMI—which comprises fat and lean mass—and the sum of skin folds—which represents fat mass only—were positively associated with early weight gain. The regression coefficients suggest that weight gain from 12–48 months had a larger effect than earlier gains. It is reassuring that results on BMI and sum of skin folds are very consistent as these are two independent measures of obesity. 
A previous analysis of a subsample of 9-year-olds from our cohort showed that while early weight gain was related to lean mass assessed through isotope dilution, later weight gain was associated with fat mass.27 Also using BMI and skin folds, Stettler and colleagues found that rapid weight gain during early infancy was associated with obesity in older children and young adults,12 and suggested that the first week of life was critical for BMI development.13 These studies did not report on the effects of height gain nor used conditional growth modelling.\nLength gain from 0–6 months, when adjusted for weight gain in the same period, did not appear to increase blood pressure—if anything it was associated with a borderline reduction (p=0.05) in systolic pressure. It was also associated with lower BMI and possibly with reduced skin folds. Length/height gains from 6–12 and 12–48 months were not associated with blood pressure, but showed negative associations with BMI and skin folds (not all of which were statistically significant), which suggest that putting on length/height without excessive weight gain is beneficial for body composition development.\nCentral deposition of fat can be measured both as waist circumference (alone or as a ratio to hip/thigh circumference) and as subscapular-to-triceps skin fold ratios. Both measures have been linked to glucose intolerance, hypertension and coronary heart disease.28–32 In our adjusted analyses, the skin fold ratio was not associated with early weight or height gains. This contradicts the findings of some studies,33 34 although there is no consensus on such observations.35 In some populations—for example in India—central fat deposition is already evident in infants.36\nBy adjusting early growth variables for adolescent BMI or height (model 2 in tables 3 and 4), we addressed the question of to what extent the effects of early weight and height gains on later outcomes may be channelled through adolescent size—which itself is partly a result of early growth patterns. For example, putting on weight rapidly in early life leads to larger BMI and height in adolescence, and early linear growth is also associated with adolescent height and to a lesser extent with adolescent BMI (table 2). In our view, the most important results from the public health perspective are those unadjusted for adolescent size (eg, model 1 in tables 3 and 4, and the models shown in table 5) because these address what might be expected from early interventions.\nThe present study is, to our knowledge, the first attempt to investigate long-term outcomes while mutually adjusting the effects of early weight and length/height gains. 
Also, by using the conditional method, we eliminated the correlation between growth variables in subsequent age ranges.15 16 A limitation of our cohort is the lack of measurements at the critical age of 2 years, which is widely regarded at the upper limit of the window of opportunity for preventing undernutrition.7 37\nIn low and middle-income countries, paediatric practice has included promoting rapid weight gain to prevent undernutrition and its harmful consequences.38 This is highly justified in societies where undernutrition is responsible for a large proportion of the burden of disease,39 but in face of the nutrition transition there may be detrimental long-term consequences of rapid weight gain.7 Nevertheless, it has been proposed that in such societies, the benefits of rapid weight gain in the first 2 years regarding short-term morbidity and mortality outcomes, as well as long-term human capital outcomes, far outweigh its potential contribution to complex chronic diseases in adulthood.7\nOur findings suggest a new dimension to this debate, mainly that the promotion of rapid length/height gain without excessive weight gain may be beneficial for long-term outcomes. If our results are confirmed by other studies, it will be necessary to reassess the results of existing nutrition intervention studies to identify suitable strategies to achieve this growth pattern. Finally, our findings strongly support the need to monitor infant and child length/height in addition to the current practice of monitoring weight only.37\nAlthough there is longstanding evidence of the short-term benefits of promoting rapid growth for young children in low-income settings, more recent studies suggest that early weight gain can also increase the risk of chronic diseases in adults. Supportive evidence on the benefits of rapid early weight gain in poor populations is provided by its positive association with human capital in adults. Rapid weight gain up to about the age of 2 years is more strongly associated with positive outcomes, whereas rapid weight gain in late childhood is associated with negative outcomes.\nConditional weight gains in all different age ranges up to 4 years tended to be associated with higher blood pressure, BMI and skin folds. In marked contrast, rapid length/height gains tended to afford protection against most of these outcomes, particularly those related to body composition. These results emerged when length/height gain and weight gain were adjusted for one another." ]
[ null, null, null ]
[ "Introduction", "Methods", "Results", "Discussion" ]
[ "A wide body of evidence supports the short term benefits of promoting rapid weight gain for young children in low-income settings.1–3 However, recently, several authors suggest that early weight gain can also increase the risk of chronic diseases in adults.4–6 Another in this debate is that rapid weight gain in poor populations is positively associated with human capital in adults—including schooling, economic productivity and next-generation birth weight.7 8 This has been described as the ‘catch-up dilemma’.9\nThe first studies on the long-term consequences of early life exposures suggested that birth weight was inversely associated with the risk of chronic diseases in adulthood,10 but it has been argued that such effect was primarily due to postnatal instead of prenatal weight trajectories.11 Further studies tried to establish whether there was a specific age range in which rapid weight gain was most strongly associated with beneficial7 8 or detrimental adult outcomes.12 13\nBecause weight gain in a given age range is not independent from previous weight gains,14–16 conditional analyses have been more recently proposed to overcome the lack of independence.14 Conditional size is defined as the amount by which the size at the end of a time interval exceeds that which would have been predicted at the beginning of the interval from previous measurements of the same anthropometric parameter—for example, weight or height. A positive conditional weight or height indicates growing faster than expected given prior size. An additional issue yet to be addressed is how to disentangle the long-term effects of early weight gain from those of length/height gain during the same age range during childhood.\nWe have used this novel approach to analyse data from a birth cohort study in Southern Brazil in order to explore the independent effects of weight and length/height gains during different periods in the first 4 years of life on blood pressure and on measures of obesity and central fat distribution at 14–15 years of age.", "Pelotas is a 340 000-inhabitant city located in the extreme south of Brazil near the border with Uruguay and Argentina. All hospital-born children in 1993 whose families were residents of the city (N=5265) were eligible for a birth cohort study; there were only 16 refusals.17 18 Low birth weight (<2500 g) children comprised 9.8% of the sample. All low birth weight children and a random 20% sample of the remaining children (N=1454) were followed up at home at the ages of 6, 12 and 48 months, and all cohort participants were visited at the age of 14–15 years. Analyses presented here are restricted to subjects located in these four waves of data collection and are weighted to represent the total population. Further details on the methodology of the 1993 Pelotas (Brazil) birth cohort study are available elsewhere.17 18\nBirth weight was measured at the hospital by the study team using paediatric scales (Filizola, Sao Paulo, Brazil) with a precision of 10 g, and birth length with a locally made infantometer with a precision of 1 mm. Portable weighing scales were used to measure weight at home visits (PLENNA, São Paulo, Brazil). Length was measured at 6 and 12 months with the infantometer used for the birth measures and standing height was measured at 48 months. 
Measurements were converted into z-scores of weight for age (WAZ), length or height for age (HAZ) and body mass index (BMI; weight/height2) for age (BAZ) using the WHO Growth Standards.19\nOutcome variables included systolic blood pressure, BMI, the sum of triceps and subscapular skin folds, and subscapular/triceps ratios at 14–15 years of age. Blood pressure was measured twice using a wrist digital Omron sphygmomanometer and the mean value was used in the analysis. A validation study using mercury sphygmomanometers as the gold standard showed that the quality of the measurements was adequate (mean difference 0.3 mm Hg).20 Participants were measured seated after a 10 min rest. We opted not to treat hypertension as a dichotomous outcome given its low prevalence at this age range. Skin folds were measured three times and the mean value used. The subscapular/triceps skin fold ratio was calculated by dividing the subscapular by the triceps skin fold measure and multiplying this ratio by 100. Interviewers were trained and standardised on weight, height and skin folds measurements within the margins of error of the National Center for Health Statistics.21 Standardisation sessions were repeated every 2 months during fieldwork, which took place from January to July 2008.\nConfounding variables included child sex, child skin colour, family socioeconomic level (based on an assets index divided into quintiles), adolescent pubertal status (Tanner's stages),22 maternal pre-gestational BMI and maternal smoking during pregnancy. We used conditional weight and length/height variables to express the component of weight (or length/height) at a given age that is uncorrelated with earlier measures.14 15 23 Conditional measures express how an individual child deviates from its own previous growth trajectory; thus, expressing acceleration or deceleration in growth. These were calculated as the residuals from linear regressions of weight (or length/height) at a given age on all prior weights (or lengths/heights). For example, a positive residual at 48 months indicates that a child grew more rapidly in the 12–48 month age range than was predicted from his/her weight at birth, 6 and 12 months. Conditional size was expressed in z-scores using the local distribution. The main advantage of this approach is that—unlike traditional analyses—conditional weight or height at a given age is independent from earlier weights or heights. In describing the results, we used conditional weight at a given age (eg, 12 months) interchangeably with weight gain in the preceding age range (eg, 6–12 months) Nevertheless, all analyses were repeated using the unconditional or crude weight gains, expressed as z-score changes using the WHO standards.19\nThe analyses were based on four models. First, we report the unadjusted effect of the growth variables on adolescent outcomes. Second, we adjust these analyses for confounding variables measured in infancy and childhood. The third model includes further adjustments for potential mediating variables—adolescent BMI and height. In the last model, conditional weight was adjusted for conditional length/height at the same age and vice versa. Analyses were conducted in Stata version 10.0 and the significance level was set at 5% for two-sided tests.\nAll phases of the 1993 Pelotas (Brazil) birth cohort study were approved by the Federal University of Pelotas Ethics Committee. 
Written informed consent from the parents or caretakers was obtained prior to each wave of data collection.", "Follow-up rates at the 6-month, 1-year, 4-year and 14–15-year visits of the 1993 Pelotas (Brazil) birth cohort were 96.8%, 93.4%, 87.2% and 85.6%, respectively. Full datasets were available for 833 adolescents. Subjects included in the analysis were similar to the rest of the cohort: 47.7% versus 49.7% were men and 20.3% versus 20.1% belonged to the poorest wealth quintile at birth. Outcome variables were also similar in the two groups (systolic blood pressure: 119.2 vs 119.4 mm Hg; BMI: 21.3 vs 21.5 kg/m²). Table 1 describes the sample in terms of covariates, infancy and childhood weight and length/height gain and adolescent outcomes.\nDescriptive statistics on infancy, childhood and adolescent variables\n1993 Pelotas Birth Cohort, Brazil, 2008.\nBMI, body mass index.\nGrowth faltering was uncommon in the population under study. At the age of 4 years, 2.4% (according to WAZ) or 0.4% (according to BAZ) were underweight and 5.2% were stunted (according to HAZ) compared with the WHO Child Growth Standards (below −2 SD relative to the median).19\nTable 2 presents the correlation matrix for weight and length/height gains. When expressed as z-scores, unconditional weight gain in a given age range tended to be inversely correlated with later weight gains; the same applied to length/height gain. Also, weight gain was positively correlated with length/height gain in the same age range. The use of conditional weights effectively removed the correlation between gains in successive periods, with coefficients equal to zero. Conditional weight in any period, and to a lesser extent conditional height, were associated with higher BMI at 14–15 years. For adolescent height, the associations with conditional weight were significant but weaker than those for conditional height.\nPearson correlation coefficients among infancy and childhood exposures and with BMI and height at 14–15 years\n1993 Pelotas Birth Cohort, Brazil, 2008.\n*p-value between 0.001 and 0.05; **p-value <0.001.\nBMI, body mass index; mo, months.\nTable 3 shows that early (0–6 months) and late (12–48 months) weight gains were positively associated with systolic blood pressure, whereas gains from 6–12 months were not. This was observed in the crude and confounder-adjusted analyses. Further adjustment for adolescent height and BMI must be interpreted with caution because these variables are likely part of the causal chain between early growth and adolescent outcomes. In these analyses, weight gain from 6–12 months became inversely associated with systolic blood pressure. Results were similar for diastolic pressure (data not shown). BMI and the sum of skin folds were positively associated with weight gain in all periods; the only exception was the lack of correlation with the sum of skin folds when potential mediating variables (adolescent BMI and height) were adjusted for. 
There were no significant associations with the skin fold ratio outcome in the adjusted models.\nAssociation between conditional weight gain in infancy and childhood and outcomes at age 14–15 years\n1993 Pelotas Birth Cohort, Brazil, 2008.\nModel 1: adjustment for sex, skin colour, puberty status, socioeconomic level, pre-gestational body mass index (BMI) and smoking during pregnancy.\nModel 2: adjustment for variables included in model 1 plus adolescent (14–15 years) BMI (except when the outcome was BMI) and height.\nChange in outcome per SD increase in conditional weight or length/height.\nβ, beta coefficient; BMI, body mass index; mo, months.\nResults for length/height gain (table 4) show that, after adjustment for confounders, late gains, but not early gains, tended to be associated with all outcomes except the subscapular/triceps ratio. Patterns were not clear-cut after adjustment for adolescent BMI and height, which suggests that these variables mediate the effect of childhood height gain on adolescent outcomes. There were no significant associations with the skin fold ratio in the adjusted models.\nAssociation between conditional length/height gain in infancy and childhood and outcomes at age 14–15 years\n1993 Pelotas Birth Cohort, Brazil, 2008.\nModel 1: adjustment for sex, skin colour, puberty status, socioeconomic level, pre-gestational BMI and smoking during pregnancy.\nModel 2: adjustment for variables included in model 1 plus BMI (except when the outcome was BMI) and height at 14–15 years.\nChange in outcome per SD increase in conditional weight or length/height.\nβ, beta coefficient; BMI, body mass index; mo, months.\nTable 5 repeats the confounder-adjusted analyses (model 1 in tables 3 and 4) with further adjustment of weight gain for length/height gain and vice versa. Weight gains in all periods tended to remain positively associated with all outcomes except for the skin fold ratio. In marked contrast to weight gain, associations between length gain and both obesity outcomes were mostly negative, particularly for the body composition outcomes. Further adjustment for height at 14–15 years did not change the direction or significance of the associations (data not shown).\nAssociations between conditional weight and length/height gain in infancy and childhood adjusted for each other and outcomes at age 14–15 years\n1993 Pelotas Birth Cohort, Brazil, 2008.\nAdjustment for sex, skin colour, puberty status, socioeconomic level, pre-gestational BMI, smoking during pregnancy and length/height gain in the same period.\nAdjustment for sex, skin colour, puberty status, socioeconomic level, pre-gestational BMI, smoking during pregnancy and weight gain in the same period.\nChange in outcome per SD increase in conditional weight or length/height.\nβ, beta coefficient; BMI, body mass index; mo, months.\nAll the analyses described above were repeated using as explanatory variables the crude, or unconditional, weight and height gains in each period, rather than conditional analyses. Although, as expected, the magnitudes of the coefficients differed between the two sets of analyses, the findings were very similar in terms of direction and statistical significance (data not shown).", "Our findings suggest that early weight gain increases adolescent blood pressure and fatness, whereas early length/height gains are not associated with higher blood pressure and seem to protect against overweight and fatness in adolescents. 
These results emerged when length/height gain and weight gain were adjusted for one another. The finding that the effects of weight and height gains seem to go in opposite directions is of great interest for public health.\nBlood pressure is a challenging outcome for studies of the long-term consequences of early growth patterns because of its strong correlation with adult size.14 24 Using unconditional methods, previous studies reported that early weight gain was associated with higher blood pressure in adolescence and adulthood.25 26 In our length-adjusted analyses, putting on weight rapidly from 0–6 months was associated with higher blood pressure in adolescence, whereas later weight gain was not. A previous analysis of five cohorts in low and middle-income countries found that conditional weight at 12 months was positively associated with adult mean blood pressure and pre-hypertension in the confounder-adjusted analyses, but the associations were no longer significant after adjustment for adult height.14\nBoth BMI—which comprises fat and lean mass—and the sum of skin folds—which represents fat mass only—were positively associated with early weight gain. The regression coefficients suggest that weight gain from 12–48 months had a larger effect than earlier gains. It is reassuring that the results on BMI and the sum of skin folds are very consistent, as these are two independent measures of obesity. A previous analysis of a subsample of 9-year-olds from our cohort showed that while early weight gain was related to lean mass assessed through isotope dilution, later weight gain was associated with fat mass.27 Also using BMI and skin folds, Stettler and colleagues found that rapid weight gain during early infancy was associated with obesity in older children and young adults,12 and suggested that the first week of life was critical for BMI development.13 These studies did not report on the effects of height gain, nor did they use conditional growth modelling.\nLength gain from 0–6 months, when adjusted for weight gain in the same period, did not appear to increase blood pressure—if anything, it was associated with a borderline reduction (p=0.05) in systolic pressure. It was also associated with lower BMI and possibly with reduced skin folds. Length/height gains from 6–12 and 12–48 months were not associated with blood pressure, but showed negative associations with BMI and skin folds (not all of which were statistically significant), which suggests that putting on length/height without excessive weight gain is beneficial for body composition development.\nCentral deposition of fat can be measured both as waist circumference (alone or as a ratio to hip/thigh circumference) and as subscapular-to-triceps skin fold ratios. Both measures have been linked to glucose intolerance, hypertension and coronary heart disease.28–32 In our adjusted analyses, the skin fold ratio was not associated with early weight or height gains. This contradicts the findings of some studies,33 34 although there is no consensus on such observations.35 In some populations—for example, in India—central fat deposition is already evident in infants.36\nBy adjusting early growth variables for adolescent BMI or height (model 2 in tables 3 and 4), we addressed the extent to which the effects of early weight and height gains on later outcomes may be channelled through adolescent size—which itself is partly a result of early growth patterns. 
For example, putting on weight rapidly in early life leads to larger BMI and height in adolescence, and early linear growth is also associated with adolescent height and, to a lesser extent, with adolescent BMI (table 2). In our view, the most important results from the public health perspective are those unadjusted for adolescent size (eg, model 1 in tables 3 and 4, and the models shown in table 5) because these address what might be expected from early interventions.\nThe present study is, to our knowledge, the first attempt to investigate long-term outcomes while mutually adjusting the effects of early weight and length/height gains. Also, by using the conditional method, we eliminated the correlation between growth variables in subsequent age ranges.15 16 A limitation of our cohort is the lack of measurements at the critical age of 2 years, which is widely regarded as the upper limit of the window of opportunity for preventing undernutrition.7 37\nIn low and middle-income countries, paediatric practice has included promoting rapid weight gain to prevent undernutrition and its harmful consequences.38 This is highly justified in societies where undernutrition is responsible for a large proportion of the burden of disease,39 but in the face of the nutrition transition there may be detrimental long-term consequences of rapid weight gain.7 Nevertheless, it has been proposed that in such societies, the benefits of rapid weight gain in the first 2 years regarding short-term morbidity and mortality outcomes, as well as long-term human capital outcomes, far outweigh its potential contribution to complex chronic diseases in adulthood.7\nOur findings suggest a new dimension to this debate, namely that the promotion of rapid length/height gain without excessive weight gain may be beneficial for long-term outcomes. If our results are confirmed by other studies, it will be necessary to reassess the results of existing nutrition intervention studies to identify suitable strategies to achieve this growth pattern. Finally, our findings strongly support the need to monitor infant and child length/height in addition to the current practice of monitoring weight only.37\nAlthough there is longstanding evidence of the short-term benefits of promoting rapid growth for young children in low-income settings, more recent studies suggest that early weight gain can also increase the risk of chronic diseases in adults. Supportive evidence on the benefits of rapid early weight gain in poor populations is provided by its positive association with human capital in adults. Rapid weight gain up to about the age of 2 years is more strongly associated with positive outcomes, whereas rapid weight gain in late childhood is associated with negative outcomes.\nConditional weight gains in all different age ranges up to 4 years tended to be associated with higher blood pressure, BMI and skin folds. In marked contrast, rapid length/height gains tended to afford protection against most of these outcomes, particularly those related to body composition. These results emerged when length/height gain and weight gain were adjusted for one another." ]
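The conditional-growth computation described in the methods above amounts to taking ordinary least squares residuals and standardising them against the sample's own distribution. Below is a minimal sketch of how such conditional z-scores could be computed; the study's actual analyses were run in Stata 10.0, and the variable names and synthetic data here are invented purely for illustration.

    import numpy as np

    def conditional_z(current, *prior):
        # Regress the current measure on an intercept plus all prior measures
        # (ordinary least squares), then standardise the residuals.
        X = np.column_stack([np.ones(len(current))] + list(prior))
        beta, *_ = np.linalg.lstsq(X, current, rcond=None)
        resid = current - X @ beta                       # observed minus expected size
        return (resid - resid.mean()) / resid.std(ddof=1)

    # Hypothetical weights (kg) at birth, 6, 12 and 48 months for 500 children:
    rng = np.random.default_rng(0)
    w0 = rng.normal(3.2, 0.5, 500)
    w6 = 0.9 * w0 + rng.normal(4.5, 0.7, 500)
    w12 = 0.8 * w6 + rng.normal(2.5, 0.6, 500)
    w48 = 0.7 * w12 + rng.normal(8.0, 1.2, 500)

    cw12 = conditional_z(w12, w0, w6)       # 'weight gain 6-12 months'
    cw48 = conditional_z(w48, w0, w6, w12)  # 'weight gain 12-48 months'
    print(round(float(np.corrcoef(cw12, cw48)[0, 1]), 3))  # ~0 by construction

Because each residual is orthogonal to all earlier measures, conditional gains in successive periods are uncorrelated, which is exactly the property the article exploits in its table 2.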
[ null, "methods", null, null ]
[ "Blood pressure", "chronic disease", "cohort studies", "prospective studies", "skinfold thickness", "adolescents cg", "blood pressure", "children", "chronic DI", "cohort me" ]
Background: Although there is longstanding evidence of the short-term benefits of promoting rapid growth for young children in low-income settings, more recent studies suggest that early weight gain can also increase the risk of chronic diseases in adults. This paper attempts to separate the effects of early life weight and length/height gains on blood pressure, body mass index (BMI), sum of skin folds and subscapular/triceps skin fold ratio at 14-15 years of age. Methods: The sample comprised 833 members of a prospective population-based birth cohort from Brazil. Conditional size (weight or height) analyses were used to express the difference between observed size at a given age and expected size based on a regression including all previous measures of the same anthropometric index. A positive conditional weight or height indicates growing faster than expected given prior size. Results: Conditional weights at all age ranges were positively associated with most outcomes; each z-score of conditional weight at 4 years was associated with an increase of 6.1 mm in the sum of skin folds (95% CI 4.5 to 7.6) in adolescence after adjustment for conditional length/height. Associations of the outcomes with conditional length/height were mostly negative or non-significant; each z-score was associated with a reduction of 2.4 mm (95% CI -3.8 to -1.1) in the sum of skin folds after adjustment for conditional weight. No associations were found with the skin fold ratio. Conclusions: The promotion of rapid length/height gain without excessive weight gain seems to be beneficial for long-term outcomes, but this requires confirmation from other studies.
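The four-model strategy summarised in the methods (unadjusted, confounder-adjusted, mediator-adjusted, and mutually adjusted weight/height gains) maps directly onto a sequence of linear regressions. The sketch below re-expresses that sequence with statsmodels; the original analyses were run in Stata, so this is only an illustration of the strategy, with invented column names and synthetic data rather than the cohort's variables (some covariates, such as skin colour and Tanner stage, are omitted for brevity).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "cw48": rng.normal(size=n),            # conditional weight z at 48 mo
        "ch48": rng.normal(size=n),            # conditional height z at 48 mo
        "sex": rng.integers(0, 2, n),
        "ses": rng.integers(1, 6, n),          # wealth quintile
        "mat_bmi": rng.normal(24.0, 4.0, n),   # maternal pre-gestational BMI
        "mat_smoking": rng.integers(0, 2, n),
        "adol_height": rng.normal(165.0, 8.0, n),
        "adol_bmi": rng.normal(21.5, 3.0, n),
    })
    df["sbp"] = 110 + 2.5 * df["cw48"] - 0.5 * df["ch48"] + rng.normal(0, 8, n)

    m0 = smf.ols("sbp ~ cw48", data=df).fit()                        # unadjusted
    m1 = smf.ols("sbp ~ cw48 + sex + C(ses) + mat_bmi + mat_smoking",
                 data=df).fit()                                      # + confounders
    m2 = smf.ols("sbp ~ cw48 + sex + C(ses) + mat_bmi + mat_smoking"
                 " + adol_bmi + adol_height", data=df).fit()         # + potential mediators
    m3 = smf.ols("sbp ~ cw48 + ch48 + sex + C(ses) + mat_bmi + mat_smoking",
                 data=df).fit()                                      # mutual adjustment
    print(float(m3.params["cw48"]), float(m3.params["ch48"]))

Each coefficient is the change in outcome per SD increase in the conditional gain, matching the effect measure reported in the article's tables.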
3,693
318
[ 352, 1201, 1214 ]
4
[ "weight", "height", "gain", "weight gain", "bmi", "length", "length height", "age", "conditional", "associated" ]
[ "early weight height", "instead prenatal weight", "infancy associated obesity", "weight gain young", "effects early weight" ]
null
[CONTENT] Blood pressure | chronic disease | cohort studies | prospective studies | skinfold thickness | adolescents cg | blood pressure | children | chronic DI | cohort me [SUMMARY]
[CONTENT] Blood pressure | chronic disease | cohort studies | prospective studies | skinfold thickness | adolescents cg | blood pressure | children | chronic DI | cohort me [SUMMARY]
null
[CONTENT] Blood pressure | chronic disease | cohort studies | prospective studies | skinfold thickness | adolescents cg | blood pressure | children | chronic DI | cohort me [SUMMARY]
[CONTENT] Blood pressure | chronic disease | cohort studies | prospective studies | skinfold thickness | adolescents cg | blood pressure | children | chronic DI | cohort me [SUMMARY]
[CONTENT] Blood pressure | chronic disease | cohort studies | prospective studies | skinfold thickness | adolescents cg | blood pressure | children | chronic DI | cohort me [SUMMARY]
[CONTENT] Adolescent | Blood Pressure | Body Height | Body Mass Index | Body Weight | Brazil | Child | Child, Preschool | Cohort Studies | Female | Humans | Infant | Male | Prospective Studies | Weight Gain [SUMMARY]
[CONTENT] Adolescent | Blood Pressure | Body Height | Body Mass Index | Body Weight | Brazil | Child | Child, Preschool | Cohort Studies | Female | Humans | Infant | Male | Prospective Studies | Weight Gain [SUMMARY]
null
[CONTENT] Adolescent | Blood Pressure | Body Height | Body Mass Index | Body Weight | Brazil | Child | Child, Preschool | Cohort Studies | Female | Humans | Infant | Male | Prospective Studies | Weight Gain [SUMMARY]
[CONTENT] Adolescent | Blood Pressure | Body Height | Body Mass Index | Body Weight | Brazil | Child | Child, Preschool | Cohort Studies | Female | Humans | Infant | Male | Prospective Studies | Weight Gain [SUMMARY]
[CONTENT] Adolescent | Blood Pressure | Body Height | Body Mass Index | Body Weight | Brazil | Child | Child, Preschool | Cohort Studies | Female | Humans | Infant | Male | Prospective Studies | Weight Gain [SUMMARY]
[CONTENT] early weight height | instead prenatal weight | infancy associated obesity | weight gain young | effects early weight [SUMMARY]
[CONTENT] early weight height | instead prenatal weight | infancy associated obesity | weight gain young | effects early weight [SUMMARY]
null
[CONTENT] early weight height | instead prenatal weight | infancy associated obesity | weight gain young | effects early weight [SUMMARY]
[CONTENT] early weight height | instead prenatal weight | infancy associated obesity | weight gain young | effects early weight [SUMMARY]
[CONTENT] early weight height | instead prenatal weight | infancy associated obesity | weight gain young | effects early weight [SUMMARY]
[CONTENT] weight | height | gain | weight gain | bmi | length | length height | age | conditional | associated [SUMMARY]
[CONTENT] weight | height | gain | weight gain | bmi | length | length height | age | conditional | associated [SUMMARY]
null
[CONTENT] weight | height | gain | weight gain | bmi | length | length height | age | conditional | associated [SUMMARY]
[CONTENT] weight | height | gain | weight gain | bmi | length | length height | age | conditional | associated [SUMMARY]
[CONTENT] weight | height | gain | weight gain | bmi | length | length height | age | conditional | associated [SUMMARY]
[CONTENT] weight | gain | weight gain | rapid weight | rapid weight gain | rapid | term | height | age | recently [SUMMARY]
[CONTENT] weight | age | birth | measured | months | height | skin | variables | length | conditional [SUMMARY]
null
[CONTENT] weight | gain | early | weight gain | associated | height | rapid | early weight | bmi | length [SUMMARY]
[CONTENT] weight | height | gain | weight gain | bmi | length | age | associated | early | length height [SUMMARY]
[CONTENT] weight | height | gain | weight gain | bmi | length | age | associated | early | length height [SUMMARY]
[CONTENT] ||| BMI | 14-15 years of age [SUMMARY]
[CONTENT] 833 | Brazil ||| ||| [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| BMI | 14-15 years of age ||| 833 | Brazil ||| ||| ||| 4 years | 6.1 mm | 95% | 4.5 to 7.6 ||| 2.4 mm | 95% | CI ||| ||| [SUMMARY]
[CONTENT] ||| ||| BMI | 14-15 years of age ||| 833 | Brazil ||| ||| ||| 4 years | 6.1 mm | 95% | 4.5 to 7.6 ||| 2.4 mm | 95% | CI ||| ||| [SUMMARY]
MR-proADM as marker of endotheliitis predicts COVID-19 severity.
33569769
Early identification of patients at high risk of progression to severe COVID-19 constitutes an unsolved challenge. Although growing evidence demonstrates a direct association between endotheliitis and severe COVID-19, the role of endothelial damage biomarkers has scarcely been studied. We investigated the relationship between circulating mid-regional proadrenomedullin (MR-proADM) levels, a biomarker of endothelial dysfunction, and the prognosis of SARS-CoV-2-infected patients.
BACKGROUND
Prospective observational study enrolling adult patients with confirmed COVID-19. On admission to the emergency department, a blood sample was drawn for laboratory test analysis. Primary and secondary endpoints were 28-day all-cause mortality and progression to severe COVID-19. Area under the curve (AUC) and multivariate regression analysis were employed to assess the association of the biomarker with the established endpoints.
METHODS
A total of 99 patients were enrolled. During hospitalization, 25 (25.3%) cases progressed to severe disease and the 28-day mortality rate was 14.1%. MR-proADM showed the highest AUC for predicting 28-day mortality (0.905; 95% CI: 0.829-0.955; P < .001) and progression to severe disease (0.829; 95% CI: 0.740-0.897; P < .001). MR-proADM plasma levels above the optimal cut-off (1.01 nmol/L) showed the strongest independent association with 28-day mortality risk (hazard ratio [HR]: 10.470, 95% CI: 2.066-53.049; P < .005) and with progression to severe disease (HR: 6.803, 95% CI: 1.458-31.750; P = .015).
RESULTS
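The AUCs and the optimal cut-off reported above come from a standard ROC analysis with a Youden-index threshold. A minimal sketch of that computation is shown below; the outcome labels and biomarker values are synthetic (group sizes loosely echoing the 99-patient cohort), so the printed numbers are illustrative only and will not reproduce the paper's estimates.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(2)
    # Synthetic stand-ins: MR-proADM levels for survivors (0) and non-survivors (1).
    y = np.r_[np.zeros(85), np.ones(14)]
    mr_proadm = np.r_[rng.lognormal(-0.3, 0.4, 85), rng.lognormal(0.4, 0.4, 14)]

    auc = roc_auc_score(y, mr_proadm)
    fpr, tpr, thr = roc_curve(y, mr_proadm)
    j = tpr - fpr                         # Youden's J = sensitivity + specificity - 1
    best = thr[np.argmax(j)]              # threshold maximising J
    print(f"AUC={auc:.3f}, optimal cut-off={best:.2f} nmol/L")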
Mid-regional proadrenomedullin was the biomarker with highest performance for prognosis of death and progression to severe disease in COVID-19 patients and represents a promising predictor for both outcomes, which might constitute a potential tool in the assessment of prognosis in early stages of this disease.
CONCLUSION
[ "Adrenomedullin", "Aged", "Aged, 80 and over", "Area Under Curve", "COVID-19", "Cause of Death", "Disease Progression", "Endothelium, Vascular", "Female", "Humans", "Inflammation", "Intensive Care Units", "Male", "Middle Aged", "Mortality", "Peptide Fragments", "Prognosis", "Proportional Hazards Models", "Prospective Studies", "Protein Precursors", "Respiration, Artificial", "SARS-CoV-2", "Severity of Illness Index" ]
7995076
INTRODUCTION
In December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was identified as the etiological agent for the pneumonia cases of unknown origin in Wuhan (Hubei Province, China), a disease termed coronavirus disease-2019 (COVID-19). 1 On March 11, COVID-19 was declared a pandemic. According to the World Health Organization, nearly 2 million deaths have been reported among more than 82 million confirmed cases worldwide. 2 Despite the exponential growth in research related to the COVID-19 pandemic, the underlying pathophysiological mechanisms of this disease remain unclear. The incidence of complications associated with different organs and tissues and sepsis-like multiple organ dysfunction suggests the involvement of multiple pathways. Accordingly, recent studies have proposed that virus-induced endothelial dysfunction, resulting in impaired vascular blood flow, coagulation and leakage, may partially explain the development of organ dysfunction. 3 , 4 , 5 Hence, the development of endotheliitis may be a prominent feature of COVID-19-induced severe illness. 6 The role of clinical laboratories in this viral outbreak includes staging, prognostication and therapeutic monitoring. 7 Different biomarkers have been identified as predictors of severe forms of COVID-19. 8 Most of them are related to inflammation or the dysregulated immune response that characterizes this disease. Although endothelial damage has been shown to be a decisive pathophysiological factor, there are scarce studies that evaluate biomarkers of endothelial damage in severe forms of COVID-19. Here, mid-regional proadrenomedullin (MR-proADM), measured as a surrogate of adrenomedullin secretion, 9 may be of interest within COVID-19-induced endotheliitis. 10 This hormone is produced by endothelial and vascular smooth muscle cells throughout the vascular tree to maintain endothelial barrier function. It freely diffuses through the blood and interstitium, binds to specific widespread receptors and has been shown to play a key role in reducing vascular permeability, promoting endothelial stability and integrity following severe infection. 11 , 12 The extensive endothelial and pulmonary damage related to SARS-CoV-2 infection may cause a relevant disruption of the ADM system, mainly in severe cases, and therefore an elevation of plasma levels of MR-proADM. This disruption of the adrenomedullin system results in vascular leakage, which represents the first step of inflammation and coagulation cascade activation. 6 Mid-regional proadrenomedullin has been widely reported as a prognostic marker in infectious and non-infectious diseases. 13 In sepsis and community-acquired pneumonia, this biomarker predicts organ damage, poor progression and mortality, 14 , 15 , 16 and this predictive ability is independent of the aetiology of pneumonia. 17 MR-proADM has also been shown to be a prognostic marker in viral infections, 18 , 19 and its measurement has recently been postulated in a consensus document as a potential future tool for the prognosis of COVID-19 patients. 20 However, the role of MR-proADM in COVID-19 patients has been scarcely studied. Herein, the aim of this prospective study was to evaluate the relationship between MR-proADM levels and the prognosis of hospitalized SARS-CoV-2-infected patients, as well as its potential role as a marker of SARS-CoV-2-related widespread endothelial damage.
null
null
RESULTS
Patient characteristics Main baseline and clinical characteristics on admission according to the endpoints previously defined are listed in Table 1. A total of 99 patients, 60 from Santa Lucía University Hospital (Cartagena, Spain) and 39 from Clínico University Hospital (Valladolid, Spain), were admitted due to COVID-19, with a mean age of 66 years (61.6% were male). Hypertension was the most common comorbidity, with a greater prevalence among non-survivors (48.2% vs. 92.9%, P = .002), followed by diabetes mellitus (28.3%) and cardiovascular disease (18.2%). During hospitalization, 25 (25.3%) cases progressed to severe disease, of whom 16 (16.2%) required intensive care, 12 (12.1%) underwent mechanical ventilation and 14 (14.1%) died of any cause within the first 28 days of hospital stay. There were no significant differences between the two participating centres regarding the rates of 28-day mortality (11.7% vs. 17.9%; P = .381) and progression to severe disease (23.3% vs. 28.2%; P = .582). In the overall population, median hospital stay was 17 (IQR: 8-16) days, and 12 (IQR: 7-19) days in patients requiring Intensive Care Unit care. Demographics, comorbidities and laboratory findings on admission, grouped according to survival status at 28 d and progression to severe disease Laboratory test levels are expressed as median (IQR) or mean (SD), as appropriate. Abbreviations: ALT, Alanine aminotransferase; COPD, Chronic obstructive pulmonary disease; CRP, C-reactive protein; IL-6, Interleukin-6; LDH, Lactate dehydrogenase; MR-proADM, Mid-regional proadrenomedullin; NLR, Neutrophil-to-lymphocyte ratio; PCT, Procalcitonin; WBC, White blood cell. Laboratory tests for prediction of 28-day mortality According to survival status, the biomarker levels are summarized in Table 1. 
The accuracy of the biomarkers for predicting 28-day mortality, evaluated by ROC curve analysis, is shown in Figure 1A and Table 2. MR-proADM was the biomarker with the highest AUC (0.905, 95% confidence interval [CI]: 0.829-0.955; P < .001).
Figure 1. Receiver operating characteristic curves of biomarker levels on admission to predict 28-day mortality (A) and progression to severe disease (B).
Table 2. Receiver operating characteristic (ROC) curves for prediction of the primary and secondary endpoints. Only biomarkers with significant differences between outcome groups were included in the table. Abbreviations: AUC, area under the curve; CI, confidence interval; CRP, C-reactive protein; IL-6, interleukin-6; LDH, lactate dehydrogenase; MR-proADM, mid-regional proadrenomedullin; NLR, neutrophil-to-lymphocyte ratio; PCT, procalcitonin.
Using the Youden index, we calculated the optimal cut-offs for differentiating between survivors and non-survivors (Table 3). Notably, Kaplan-Meier analysis showed that no patient with an MR-proADM value ≤0.88 nmol/L, the cut-off recommended by the manufacturer, died in the first 28 days following Emergency Department admission (Figure 2A). Survival analysis for the Youden-index cut-off is shown in Figure 2B.
Table 3. Accuracy of biomarkers for 28-day mortality. Abbreviations: CI, confidence interval; CRP, C-reactive protein; IL-6, interleukin-6; LDH, lactate dehydrogenase; LR, likelihood ratio; MR-proADM, mid-regional proadrenomedullin; NLR, neutrophil-to-lymphocyte ratio; NPV, negative predictive value; PCT, procalcitonin; PPV, positive predictive value. The 0.88 nmol/L cut-off is recommended by the manufacturer for early assessment of the risk of progression to a more severe disease condition.
Figure 2. Cumulative incidence of (A) 28-day mortality during hospitalization stratified by MR-proADM on admission ≤0.88 nmol/L, (B) 28-day mortality during hospitalization stratified by MR-proADM on admission >1.01 nmol/L, and (C) progression to severe disease stratified by MR-proADM on admission >1.01 nmol/L.
In the multivariate Cox regression analysis (Table 4), after adjusting for confounders, MR-proADM >1.01 nmol/L showed the strongest independent association with 28-day mortality risk (hazard ratio [HR]: 10.470, 95% CI: 2.066-53.049; P = .005). D-dimer >935 ng/mL FEU (HR: 4.521, 95% CI: 1.185-17.238; P = .027) and IL-6 >117.8 pg/mL (HR: 3.739, 95% CI: 1.207-11.585; P = .022) were also independent predictors of 28-day mortality.
Table 4. Uni- and multivariate Cox regression analysis for 28-day mortality. Only variables with P < .10 for the HR in univariate analysis were included in the table. Abbreviations: CI, confidence interval; CRP, C-reactive protein; HR, hazard ratio; IL-6, interleukin-6; LDH, lactate dehydrogenase; MR-proADM, mid-regional proadrenomedullin; n.s., non-significant; NLR, neutrophil-to-lymphocyte ratio; PCT, procalcitonin.
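The ROC analysis and Youden-index cut-off selection described above can be sketched as follows. The outcome labels and marker values are toy data, not study data, and scikit-learn is one possible implementation rather than the software actually used (the authors report SPSS and MedCalc):

```python
# Minimal sketch of ROC AUC and Youden-index cut-off selection (toy data only).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])     # 1 = died within 28 days (toy data)
marker = np.array([0.6, 0.7, 0.9, 0.8, 1.2, 0.7,  # MR-proADM, nmol/L (toy data)
                   1.4, 1.8, 1.1, 2.3])

auc = roc_auc_score(y, marker)
fpr, tpr, thresholds = roc_curve(y, marker)

# Youden index J = sensitivity + specificity - 1 = TPR - FPR;
# the optimal cut-off maximizes J over the candidate thresholds.
j = tpr - fpr
best = int(np.argmax(j))
print(f"AUC = {auc:.3f}; optimal cut-off = {thresholds[best]:.2f} nmol/L "
      f"(sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f})")
```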
Laboratory tests for prediction of progression to severe disease
Creatinine, albumin, LDH, ferritin, CRP, PCT, IL-6 and MR-proADM levels and the NLR were significantly higher in patients who progressed to severe disease, while the lymphocyte count was significantly lower (Table 1). Again, MR-proADM was the biomarker with the highest ROC AUC (0.829, 95% CI: 0.740-0.897; P < .001) (Figure 1B and Table 2). Optimal cut-offs for the biomarkers are shown in Table 5. Kaplan-Meier analysis showed a higher likelihood of progression to severe disease in patients with an MR-proADM level >1.01 nmol/L (Figure 2C).
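A minimal sketch of this kind of stratified Kaplan-Meier comparison is given below, using the open-source lifelines library (an assumption; it is not the software the authors used). The file and column names (covid19_followup.csv, time_days, severe, mr_proadm) are hypothetical:

```python
# Hedged sketch: Kaplan-Meier curves stratified by the 1.01 nmol/L cut-off,
# with a log-rank test; file and column names are illustrative assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("covid19_followup.csv")  # hypothetical: time_days, severe, mr_proadm

# Censor at 28 days, as in the study: events after day 28 count as censored.
df.loc[df["time_days"] > 28, "severe"] = 0
df["time_days"] = df["time_days"].clip(upper=28)

high = df["mr_proadm"] > 1.01
kmf = KaplanMeierFitter()
ax = None
for label, grp in [(">1.01 nmol/L", df[high]), ("<=1.01 nmol/L", df[~high])]:
    kmf.fit(grp["time_days"], event_observed=grp["severe"], label=f"MR-proADM {label}")
    ax = kmf.plot_survival_function(ax=ax)  # overlay both curves on the same axes

result = logrank_test(df.loc[high, "time_days"], df.loc[~high, "time_days"],
                      event_observed_A=df.loc[high, "severe"],
                      event_observed_B=df.loc[~high, "severe"])
print(f"Log-rank P = {result.p_value:.4f}")
```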
Table 5. Accuracy of biomarkers for progression to severe disease. Abbreviations: CI, confidence interval; CRP, C-reactive protein; IL-6, interleukin-6; LDH, lactate dehydrogenase; LR, likelihood ratio; MR-proADM, mid-regional proadrenomedullin; NLR, neutrophil-to-lymphocyte ratio; NPV, negative predictive value; PCT, procalcitonin; PPV, positive predictive value. The 0.88 nmol/L cut-off is recommended by the manufacturer for early assessment of the risk of progression to a more severe disease condition.
The multivariate adjusted Cox regression model showed that MR-proADM >1.01 nmol/L (HR: 6.803, 95% CI: 1.458-31.750; P = .015) and ferritin >376 ng/mL (HR: 5.525, 95% CI: 1.042-29.308; P = .045) at admission were the only independent variables associated with progression to severe disease (Table 6).
Table 6. Uni- and multivariate Cox regression analysis for progression to severe disease. Only variables with P < .10 for the HR in univariate analysis were included in the table. Abbreviations: CI, confidence interval; CRP, C-reactive protein; HR, hazard ratio; IL-6, interleukin-6; LDH, lactate dehydrogenase; MR-proADM, mid-regional proadrenomedullin; n.s., non-significant; NLR, neutrophil-to-lymphocyte ratio; PCT, procalcitonin.
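The adjusted Cox model just described can be sketched as below with the lifelines library; the predictors are dichotomized at the cut-offs reported in the text, but the file and column names are hypothetical and this is not the authors' code:

```python
# Hedged sketch of the multivariable Cox model for progression to severe disease;
# predictors are dichotomized at the cut-offs reported in the text.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("covid19_followup.csv")  # hypothetical columns, as in the KM sketch

model_df = pd.DataFrame({
    "time": df["time_days"],                                 # follow-up, censored at 28 d
    "event": df["severe"],                                   # 1 = progressed to severe disease
    "mr_proadm_high": (df["mr_proadm"] > 1.01).astype(int),  # MR-proADM >1.01 nmol/L
    "ferritin_high": (df["ferritin"] > 376).astype(int),     # ferritin >376 ng/mL
})

cph = CoxPHFitter()
cph.fit(model_df, duration_col="time", event_col="event")
cph.print_summary()  # exp(coef) column gives the hazard ratios with 95% CIs
```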
[ "INTRODUCTION", "Study design and population", "Data collection", "Blood sampling and laboratory analysis", "Study endpoints", "Statistical analysis", "Patient characteristics", "Laboratory tests for prediction of 28‐day mortality", "Laboratory tests for prediction of progression to severe disease", "AUTHOR CONTRIBUTIONS" ]
[ "In December 2019, severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) was identified as the etiological agent for the pneumonia cases of unknown origin in Wuhan (Hubei Province, China), a disease termed as coronavirus disease‐2019 (COVID‐19).\n1\n On March 11, COVID‐19 was declared as a pandemic. According to the World Health Organization, nearly 2 million patients are currently dead after more than 82 million confirmed cases worldwide.\n2\n\n\nDespite the exponential growth in research related to COVID‐19 pandemic, the underlying pathophysiological mechanisms of this disease remain unclear. The incidence of complications associated to different organs and tissues and sepsis‐like multiple organ dysfunction suggests the involvement of multiple pathways. Accordingly, recent studies have proposed that virus‐induced endothelial dysfunction, resulting in impaired vascular blood flow, coagulation and leakage, may partially explain the development of organ dysfunction.\n3\n, \n4\n, \n5\n Hence, the development of endotheliitis may be a prominent feature of COVID‐19‐induced severe illness.\n6\n\n\nThe role of clinical laboratories in this viral outbreak includes staging, prognostication and therapeutic monitoring.\n7\n Different biomarkers have been identified as predictors of severe forms of COVID‐19.\n8\n Most of them are related to inflammation or the dysregulated immune response that characterizes this disease. Although endothelial damage has been shown to be a decisive pathophysiological factor, there are scarce studies that evaluate biomarkers of endothelial damage in severe forms of COVID‐19. Here, mid‐regional proadrenomedullin (MR‐proADM), measured as a surrogate of adrenomedulin secretion,\n9\n may be of interest within COVID‐19‐induced endotheliitis.\n10\n This hormone is produced by endothelial and vascular smooth muscle cells throughout the vascular tree to maintain endothelial barrier function. It freely diffuses through the blood and interstitium and binds to specific widespread receptors and has been showed to play a key role in reducing vascular permeability, promoting endothelial stability and integrity following severe infection.\n11\n, \n12\n The extensive endothelial and pulmonary damage related to SARS‐CoV‐2 infection may cause a relevant disruption of the ADM system, mainly in severe cases and therefore an elevation of plasma levels of MR‐proADM. This disruption of the adrenomedullin system results in vascular leakage that represents the first step of inflammation and coagulation cascade activation.\n6\n\n\nMid‐regional proadrenomedullin has been widely reported as a prognostic marker in infectious and non‐infectious diseases.\n13\n In sepsis and community acquired pneumonia, this biomarker predicts organ damage, poor progression and mortality\n14\n, \n15\n, \n16\n and this predictive ability is independent of the aetiology of pneumonia.\n17\n MR‐proADM has also been showed as a prognostic marker in viral infections\n18\n, \n19\n and its measurement has been recently postulated in a consensus document as a potential tool in the future for prognosis of COVID‐19 patients.\n20\n\n\nHowever, the role of MR‐proADM in COVID‐19 patients has been scarcely studied. 
Herein, the aim of this prospective study was to evaluate the relationship between MR‐proADM levels and prognosis of hospitalized SARS‐CoV‐2‐infected patients as well as its potential role as a marker of SARS‐CoV‐2‐related widespread endothelial damage.", "This was a prospective observational study including consecutive adult patients admitted to Santa Lucía University Hospital and Clínico Universitario Hospital, by confirmed SARS‐CoV‐2 infection between March and April 2020. COVID‐19 was diagnosed by a positive result of real‐time reverse transcriptase‐polymerase chain reaction testing of a nasopharyngeal specimen. Exclusion criteria were as follows: (a) patients <18 years; (b) pregnant women; (c) patients transferred from or to other hospital and (d) lack of samples for the biomarkers measurement.\nThis study was approved by the Ethics Committee of both hospitals and performed under a waiver of informed consent. The work was carried out by following the guidelines of the Declaration of Helsinki of the World Medical Association.", "Data collection was performed from electronic medical records and laboratory information systems. For eligible patients, we extracted the demographic information, comorbidities, laboratory test results and variables required for the previously defined endpoints.", "In all patients, venous blood samples for biochemical analysis, including glucose, creatinine, sodium, potassium, albumin, bilirubin, alanine aminotransferase (ALT), ferritin, C‐reactive protein (CRP), lactate dehydrogenase (LDH) and procalcitonin (PCT), haematological analysis, including haemoglobin, cell blood and platelet counts and coagulation markers, including D‐dimer, were collected on admission to the Emergency Department and analysed in the laboratory within 1 hour, by using the habitual methods currently used in the participating laboratories. For measurement of MR‐proADM and interleukin 6 (IL‐6), blood samples collected in tubes containing EDTA K3 as anticoagulant were centrifuged at 2000 g for 10 min and plasma was subsequently frozen and stored to −80°C until testing, according to stability results previously reported.\n9\n\n\nMid‐regional proadrenomedullin was measured by a homogeneous sandwich immunoassay with fluorescent detection using a time‐resolved amplified cryptate emission (TRACE) technology assay (KRYPTOR®, Brahms Thermo Fisher Scientific Inc). According to manufacturer´s data, the detection limit, functional sensitivity and quantification limit were 0.05 nmol/L, 0.23 nmol/L and 0.25 nmol/L; intra‐assay coefficient of variation (CV) and inter‐assay CV were ≤10% and ≤20%, for a level ranging from 0.2 to 0.5 nmol/L, respectively.", "The primary endpoint was all‐cause mortality at 28‐days. Secondary endpoint was severe COVID‐19 progression, defined as a composite of admission to Intensive Care Unit during the index hospital stay and/or need for mechanical ventilation and/or 28‐day mortality, both verified by chart review.", "The normality of continuous variables was tested by Kolmogorov‐Smirnov or Shapiro‐Wilk test, and they are presented as median (interquartile range [IQR]) or mean (standard deviation [SD]), as appropriate. Comparisons for continuous variables were performed by Student's t test, for the normally distributed data; for skewed distribution, Mann‐Whitney U non parametric tests were used for comparisons. Categorical variables are presented as frequency and percentage in each category. The significance of differences in percentages was tested by the chi‐squared test. 
Discriminatory ability for both outcomes was evaluated by calculating the area under the receiver operating characteristic curve (ROC AUC). We additionally calculated the optimal ROC‐derived cut‐offs (Youden Index, corresponding to the maximum of the sum ‘sensibility + specificity’) and sensitivity, specificity, likelihood ratios and predictive values. The association between the biomarkers and the risk for both outcomes was assessed by Cox regression analysis, adjusted by confounding variables. Variables yielding a P < .10 in the univariate regression analysis were further included in the multivariate using the backward stepwise selection method. In a further step, the impact of the biomarkers on outcomes along time was assessed by using Kaplan‐Meier curves and the Mantel‐Haenszel log‐rank test. Time was censored at 28 days following admission to the Emergency Department. Software packages SPSS vs. 22 (SPSS Inc), and MedCalc 15.0 (MedCalc Software) were used for statistical analysis, with a P < .05 considered statistically significant.\nReporting of the study conforms to CONSORT‐revised and the broader EQUATOR guidelines.\n21\n\n", "Main baseline and clinical characteristics on admission according to the endpoints previously defined are listed in Table 1. A total of 99 patients, 60 from Santa Lucía University Hospital (Cartagena, Spain) and 39 from Clínico University Hospital (Valladolid, Spain), were admitted due to COVID‐19, with a mean age of 66 years (61.6% were male). Hypertension was the most common comorbidity, with a greater prevalence among non‐survivors (48.2% vs. 92.9%, P = .002), followed by diabetes mellitus (28.3%) and cardiovascular disease (18.2%). During hospitalization, 25 (25.3%) cases progressed to severe disease, of whom 16 (16.2%) required intensive care, 12 (12.1%) underwent mechanical ventilation and 14 (14.1%) died of any cause within the first 28 days of hospital stay. There were not significant differences between the two participating centres regarding to the rates of 28‐day mortality (11.7% vs. 17.9%; P = .381) and progression to severe disease (23.3% vs. 28.2%; P = .582). In overall population, median hospital stay was 17 (IQR: 8‐16) days and 12 (IQR: 7‐19) days in patients requiring Intensive Care Unit care.\nDemographics, comorbidities and laboratory findings on admission, grouped according to survival status at 28 d and progression to severe disease\nLaboratory tests levels are expressed as median (IQR) or mean (SD), as appropriate.\nAbbreviations: ALT, Alanine aminotransferase; COPD, Chronic obstructive pulmonary disease; CRP, C‐reactive protein; IL‐6, Interleukin‐6; LDH, Lactate dehydrogenase; MR‐proADM, Mid‐regional proadrenomedullin; NLR, Neutrophil‐to‐lymphocyte ratio; PCT, Procalcitonin; WBC, White blood cell.", "According to survival status, the biomarker levels are summarized in Table 1. Glucose, creatinine, albumin, LDH, ferritin, CRP, IL‐6, PCT, D‐dimer and MR‐proADM levels and neutrophil‐to‐lymphocyte ratio (NLR) were significantly higher in patients who died, while platelet and lymphocyte counts were significantly lower.\nThe accuracy of biomarkers for predicting 28‐days mortality, evaluated by ROC curve analysis, is showed in Figure 1.A and Table 2. 
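The sensitivity, specificity, likelihood ratios and predictive values reported in Tables 3 and 5 follow directly from the 2×2 table obtained at a given cut-off. The sketch below shows that arithmetic; the marker and outcome values are illustrative toy data, not study data:

```python
# Diagnostic-accuracy metrics at a fixed biomarker cut-off (toy data only).
import numpy as np

def cutoff_accuracy(marker, outcome, cutoff):
    """Classify marker > cutoff as test-positive and tabulate against the outcome."""
    pred = marker > cutoff
    tp = int(np.sum(pred & (outcome == 1)))   # true positives
    fp = int(np.sum(pred & (outcome == 0)))   # false positives
    fn = int(np.sum(~pred & (outcome == 1)))  # false negatives
    tn = int(np.sum(~pred & (outcome == 0)))  # true negatives
    sens = tp / (tp + fn)                     # sensitivity
    spec = tn / (tn + fp)                     # specificity
    return {"sensitivity": sens,
            "specificity": spec,
            "LR+": sens / (1 - spec),         # positive likelihood ratio
            "LR-": (1 - sens) / spec,         # negative likelihood ratio
            "PPV": tp / (tp + fp),            # positive predictive value
            "NPV": tn / (tn + fn)}            # negative predictive value

marker = np.array([0.6, 0.7, 1.2, 0.8, 1.4, 1.8, 1.1, 0.5, 2.3, 0.7])
outcome = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
print(cutoff_accuracy(marker, outcome, 1.01))
```

Note that, unlike sensitivity and specificity, the predictive values depend on the prevalence of the outcome in the cohort, which is one reason cut-offs validated in one population should be applied cautiously in another.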
Kaplan‐Meier analysis showed a higher likelihood of progression to severe disease in patients with a MR‐proADM level >1.01 nmol/L (Figure 2C).\nAccuracy of biomarkers for progression to severe disease\nAbbreviations: CI, confidence interval; CRP, C‐reactive protein; IL‐6, interleukin‐6; LDH, lactate dehydrogenase; LR, likely hood ratio; MR‐proADM, Mid‐regional proadrenomedullin; NLR, neutrophil‐to‐lymphocyte ratio; NPV, negative predictive value; PCT, procalcitonin; PPV, positive predictive value.\nCut‐off recommended by manufacturer to assess early the risk for progression to a more severe disease condition.\nMultivariate adjusted Cox regression model showed that MR‐proADM >1.01 nmol/L [HR: 6.803, 95% CI: 1.458‐31.750; P =.015) and ferritin >376 ng/ml (HR: 5.525, 95% CI; 1.042‐29.308; P =.045) at admission were the only independent variables associated with progression to severe disease (Table 6).\nUni‐ and multivariate Cox regression analysis for progression to severe disease\nOnly variables with a P <.10 for HR in univariate analysis were included in the table\nAbbreviations: CI, confidence interval; CRP, C‐reactive protein; HR, hazard ratio; IL‐6, interleukin‐6; LDH, lactate dehydrogenase; MR‐proADM, Mid‐regional proadrenomedullin; n.s, non‐significant; NLR, neutrophil‐to‐lymphocyte ratio; PCT, procalcitonin.", "Our study suggests that MR‐proADM may be used as an accurate marker of fatal outcome and progression to severe disease in COVID‐19. Its accuracy was significantly better than that showed by other previously investigated biomarkers. Patients who presented MR‐proADM levels above 1.01 nmol/L showed an association with 28‐day mortality and progression to severe disease independent of other factors.\nEndothelial dysfunction is known to be involved in organ dysfunction during bacterial sepsis\n22\n, \n23\n and viral infections,\n24\n as it induces a pro‐coagulant state, microvascular leak and organ failure.\nUnlike other types of serious infections of different aetiology, epidemiological studies show that COVID‐19 patients requiring hospital admission present frequently with accompanying conditions such as hypertension, diabetes, chronic renal failure and cardiovascular diseases.\n25\n, \n26\n These comorbidities are associated with chronic endothelial dysfunction and could predispose these patients to a worse outcome.\n27\n The endothelium plays major roles in the response to infection: endothelial cells release chemokines, to guide leucocytes to the infected tissue, and cytokines that activate inflammatory responses. Patients with endothelial dysfunction present major alterations at the glycocalyx, intercellular junctions and endothelial cells, resulting in enhanced leucocyte adhesion and extravasation, and also in the induction of a procoagulant and antifibrinolytic state. Prior endothelial dysfunction could thus predispose to the development of severe forms of COVID‐19.\n28\n\n\nIn fact, emerging data suggest a crucial role of endothelial dysfunction during SARS‐CoV‐2 infection.\n29\n In this regard, recent histopathological studies have evidenced the presence of virus within endothelial cells of different organs beyond the lungs, suggesting a direct viral effect, as well as the accumulation of inflammatory cells, with evidence of endothelial and inflammatory cell death, thus contributing directly to severity. 
Shortly, SARS‐CoV‐2 infection would facilitate the induction of endotheliitis in different organs as a direct consequence of virus involvement and of the host inflammatory response.\n3\n, \n4\n While endotheliopathy is thought to be a key factor of severe COVID‐19 pathogenesis, markers indicative of this process have not been well‐established. Only isolated little studies analyse the role of endothelium‐related molecules such as thrombomodulin,\n5\n angiopoietin 2,\n30\n, \n31\n VCAM or ICAM\n32\n in COVID‐19.\nAmong the endothelial dysfunction markers associated with sepsis, MR‐proADM appears to be the most promising, as reported by Martin‐Fernandez et al. in a recent study.\n22\n This biomarker can be automated with an adequate turn‐around‐time for its implementation as a stat laboratory test for clinical practice.\n9\n\n\nIn our study, MR‐proADM was the biomarker with the highest accuracy for 28‐day mortality, with a ROC AUC of 0.905. Furthermore, after adjusting for confounding variables, multivariate analysis showed the highest HR (10.47) when plasma MR‐proADM levels on admission were above 1.01 mmol/L for the primary outcome, together with levels of D‐dimer >935 ng/mL FEU and IL‐6 > 117.8 pg/mL (HR: 4.521 and 3.739, respectively). These findings support the association of the triad composed of endothelial damage, inflammation and coagulopathy with COVID‐19 severity.\n33\n In this line, there are numerous studies that describe the association between elevated plasma levels of D‐dimer or IL‐6 and poor prognosis.\n34\n, \n35\n\n\nAgain, and similar to results for 28‐day mortality, ROC AUC analysis evidenced that accuracy of MR‐proADM was the highest to detect progression to severe disease (with AUC above 0.80), better than other inflammation markers, such as CRP, ferritin, LDH and PCT, all of them recommended for monitoring COVID‐19 patients.\n8\n In addition, MR‐proADM, together with ferritin, was the only biomarkers independently associated with progression to severe disease in the multivariate analysis. The same cut‐off (>1.01 nmol/L) for MR‐proADM on admission showed the highest HR (6.803), while ferritin >376 ng/mL achieved a HR of 5.525. Serum ferritin, a feature of haemophagocytic lymphohistiocytosis, which is a known complication of viral infection, is closely related to poor recovery of COVID‐19 patients, and those with impaired lung lesion are more likely to have increased ferritin levels.\n36\n Again, the binomial composed by an inflammatory marker, in this case ferritin, together with an endothelial damage marker, such as MR‐proADM, seems crucial in the development of complications and fatal evolution in COVID‐19.\nIn the setting of infectious disease, MR‐proADM has been reported as a useful marker for differentiating between infection and sepsis\n22\n, \n37\n and for an early stratification of severity in patients with sepsis.\n15\n, \n38\n, \n39\n Few studies have evaluated the potential role of MR‐ProADM in viral infections and most of them have been limited to influenza virus. Thus, Valero‐Cifuentes et al.\n19\n reported a moderate ROC AUC (0.68) to predict a poor outcome in a cohort of patients admitted to hospital with influenza syndrome. On the contrary, Valenzuela et al.,\n18\n in a small cohort of patients with influenza A virus pneumonia, obtained a ROC AUC of 0.871 to predict mortality, with an optimal cut‐off of 1.12 nmol/L. 
In turn, Bello et al.,\n17\n reported a ROC AUC of 0.859 and an optimal cut‐off point of 1.09 nmol/L in patients with community‐acquired pneumonia of different aetiology, including virus.\n16\n These data are consistent with those obtained in our study.\nTo our knowledge, only a previous study has analysed the prognostic value of MR‐proADM in COVID‐19. Spoto et al.\n40\n have recently reported a ROC AUC for 28‐day mortality of 0.89 in 69 patients, similar to that reported in our study (0.905) in a larger sample. Further, there were substantial differences regarding baseline characteristics between the populations of both studies. These disparities may partially explain the different optimal cut‐offs (1.01 nmol/L vs. 2.0 nmol/L). This disagreement is likely due to differences in both study population characteristics. Hence, Spoto et al. cohort\n40\n included older patients than those in our study (79 years vs. 66 years), with a higher incidence of comorbidities such as cardiovascular disease (68.1% vs. 18.2%) and a greater severity, with a higher rate of death (23.2% vs. 14.1%) and of patients requiring admission to ICU (43.5% vs. 16.2%).\nIn addition, it is noteworthy that we observed that a MR‐proADM level ≤0.88 nmol/L allows to rule out mortality in the 28 days following admission to hospital, as previously reported by Andaluz‐Ojeda et al.\n15\n, \n41\n in critically ill patients with sepsis diagnosis.\nThis study presents some limitations, namely the small sample size. Besides, the measurement of other blood biomarkers previously reported as predictors for a poor outcome, such as troponin,\n42\n was not available in all the patients and it was not included in the study. Finally, we did not measure serial biomarkers and their values may therefore change during the patient's course, thereby making it possible to better identify deterioration or improvement.\nIn conclusion, the present study reports that plasma MR‐proADM levels, measured on admission to Emergency Department, were increased in COVID‐19 patients who died or progressed to severe disease. Besides, it was the biomarker with highest performance, expressed as ROC AUC, being MR‐proADM value levels above 1.01 nmol/L the only independent factor predictor for both outcomes. Our results suggest that MR‐proADM levels have a potential role in the assessment of prognosis in early stages of COVID‐19 and might be a candidate to be incorporated in an early management protocol. Further studies, with a larger sample size, are required to confirm these findings.", "Authors state no conflict of interest.", "LGGR and DAO designed this study, analysed the data and wrote the manuscript. All authors contributed to the enrollment of patients, data collection, sample collection and biomarkers measurement. LCS provided statistical advice. All authors reviewed and approved the final manuscript. All authors have accepted responsibility for the entire content of this manuscript and approved its submission." ]
[ null, "materials-and-methods", null, null, null, null, null, "results", null, null, null, "discussion", "COI-statement", null ]
[ "COVID‐19", "endotheliitis", "mid‐regional proadrenomedullin", "prognosis", "SARS‐CoV‐2", "severity" ]
INTRODUCTION

In December 2019, severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) was identified as the aetiological agent of the pneumonia cases of unknown origin in Wuhan (Hubei Province, China), a disease termed coronavirus disease 2019 (COVID‐19). 1 On March 11, COVID‐19 was declared a pandemic. According to the World Health Organization, nearly 2 million patients have died among more than 82 million confirmed cases worldwide. 2

Despite the exponential growth in research related to the COVID‐19 pandemic, the underlying pathophysiological mechanisms of this disease remain unclear. The incidence of complications affecting different organs and tissues, and of sepsis‐like multiple organ dysfunction, suggests the involvement of multiple pathways. Accordingly, recent studies have proposed that virus‐induced endothelial dysfunction, resulting in impaired vascular blood flow, coagulation and leakage, may partially explain the development of organ dysfunction. 3 , 4 , 5 Hence, the development of endotheliitis may be a prominent feature of COVID‐19‐induced severe illness. 6

The role of clinical laboratories in this viral outbreak includes staging, prognostication and therapeutic monitoring. 7 Different biomarkers have been identified as predictors of severe forms of COVID‐19. 8 Most of them are related to inflammation or to the dysregulated immune response that characterizes this disease. Although endothelial damage has been shown to be a decisive pathophysiological factor, few studies have evaluated biomarkers of endothelial damage in severe forms of COVID‐19. Here, mid‐regional proadrenomedullin (MR‐proADM), measured as a surrogate of adrenomedullin secretion, 9 may be of interest within COVID‐19‐induced endotheliitis. 10 This hormone is produced by endothelial and vascular smooth muscle cells throughout the vascular tree to maintain endothelial barrier function. It diffuses freely through the blood and interstitium, binds to specific widespread receptors, and has been shown to play a key role in reducing vascular permeability and in promoting endothelial stability and integrity following severe infection. 11 , 12 The extensive endothelial and pulmonary damage related to SARS‐CoV‐2 infection may cause a relevant disruption of the adrenomedullin (ADM) system, mainly in severe cases, and therefore an elevation of plasma MR‐proADM levels. This disruption of the adrenomedullin system results in vascular leakage, which represents the first step of inflammation and coagulation cascade activation. 6

Mid‐regional proadrenomedullin has been widely reported as a prognostic marker in infectious and non‐infectious diseases. 13 In sepsis and community‐acquired pneumonia, this biomarker predicts organ damage, poor progression and mortality, 14 , 15 , 16 and this predictive ability is independent of the aetiology of pneumonia. 17 MR‐proADM has also been shown to be a prognostic marker in viral infections, 18 , 19 and its measurement has recently been postulated in a consensus document as a potential future tool for the prognosis of COVID‐19 patients. 20 However, the role of MR‐proADM in COVID‐19 patients has been scarcely studied. Herein, the aim of this prospective study was to evaluate the relationship between MR‐proADM levels and the prognosis of hospitalized SARS‐CoV‐2‐infected patients, as well as its potential role as a marker of SARS‐CoV‐2‐related widespread endothelial damage.
MATERIAL AND METHODS

Study design and population

This was a prospective observational study including consecutive adult patients admitted to Santa Lucía University Hospital and Clínico Universitario Hospital with confirmed SARS‐CoV‐2 infection between March and April 2020. COVID‐19 was diagnosed by a positive real‐time reverse transcriptase‐polymerase chain reaction test of a nasopharyngeal specimen. Exclusion criteria were as follows: (a) patients <18 years; (b) pregnant women; (c) patients transferred from or to another hospital; and (d) lack of samples for biomarker measurement. This study was approved by the Ethics Committees of both hospitals and performed under a waiver of informed consent. The work was carried out following the guidelines of the Declaration of Helsinki of the World Medical Association.

Data collection

Data were collected from electronic medical records and laboratory information systems. For eligible patients, we extracted demographic information, comorbidities, laboratory test results and the variables required for the previously defined endpoints.

Blood sampling and laboratory analysis

In all patients, venous blood samples for biochemical analysis (glucose, creatinine, sodium, potassium, albumin, bilirubin, alanine aminotransferase [ALT], ferritin, C‐reactive protein [CRP], lactate dehydrogenase [LDH] and procalcitonin [PCT]), haematological analysis (haemoglobin, blood cell and platelet counts) and coagulation markers (D‐dimer) were collected on admission to the Emergency Department and analysed in the laboratory within 1 hour, using the methods routinely employed in the participating laboratories. For measurement of MR‐proADM and interleukin‐6 (IL‐6), blood samples collected in tubes containing EDTA K3 as anticoagulant were centrifuged at 2000 g for 10 minutes, and plasma was frozen and stored at −80°C until testing, in accordance with previously reported stability results. 9 Mid‐regional proadrenomedullin was measured by a homogeneous sandwich immunoassay with fluorescent detection using a time‐resolved amplified cryptate emission (TRACE) technology assay (KRYPTOR®, Brahms Thermo Fisher Scientific Inc). According to the manufacturer's data, the detection limit, functional sensitivity and quantification limit were 0.05 nmol/L, 0.23 nmol/L and 0.25 nmol/L, respectively; the intra‐assay and inter‐assay coefficients of variation (CV) were ≤10% and ≤20% for levels ranging from 0.2 to 0.5 nmol/L.

Study endpoints

The primary endpoint was all‐cause mortality at 28 days. The secondary endpoint was severe COVID‐19 progression, defined as a composite of admission to the Intensive Care Unit during the index hospital stay, need for mechanical ventilation and/or 28‐day mortality, verified by chart review.
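The composite secondary endpoint reduces to a boolean OR over three admission-level outcome flags. Below is a minimal Python sketch of that derivation, assuming a hypothetical pandas DataFrame whose column names (icu_admission, mechanical_ventilation, death_28d) are illustrative inventions, not fields from the study database.

```python
import pandas as pd

# Hypothetical patient records; column names are illustrative, not from the study.
patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "icu_admission": [True, False, False, True],
    "mechanical_ventilation": [False, False, False, True],
    "death_28d": [False, False, True, False],
})

# Severe COVID-19 progression: ICU admission and/or mechanical
# ventilation and/or 28-day all-cause mortality (composite endpoint).
patients["severe_progression"] = (
    patients["icu_admission"]
    | patients["mechanical_ventilation"]
    | patients["death_28d"]
)

print(patients[["patient_id", "severe_progression"]])
```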
Statistical analysis

The normality of continuous variables was tested by the Kolmogorov‐Smirnov or Shapiro‐Wilk test, and variables are presented as median (interquartile range [IQR]) or mean (standard deviation [SD]), as appropriate. Comparisons of continuous variables were performed with Student's t test for normally distributed data and with the non‐parametric Mann‐Whitney U test for skewed distributions. Categorical variables are presented as frequency and percentage in each category, and differences in percentages were tested by the chi‐squared test. Discriminatory ability for both outcomes was evaluated by calculating the area under the receiver operating characteristic curve (ROC AUC). We additionally calculated the optimal ROC‐derived cut‐offs (Youden index, corresponding to the maximum of the sum 'sensitivity + specificity'), together with sensitivity, specificity, likelihood ratios and predictive values. The association between the biomarkers and the risk of both outcomes was assessed by Cox regression analysis, adjusted for confounding variables. Variables yielding P < .10 in the univariate regression analysis were included in the multivariate model using the backward stepwise selection method. In a further step, the impact of the biomarkers on outcomes over time was assessed using Kaplan‐Meier curves and the Mantel‐Haenszel log‐rank test. Time was censored at 28 days following admission to the Emergency Department. SPSS v22 (SPSS Inc) and MedCalc 15.0 (MedCalc Software) were used for statistical analysis, with P < .05 considered statistically significant. Reporting of the study conforms to the revised CONSORT statement and the broader EQUATOR guidelines. 21
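For readers who want to reproduce this kind of analysis, a minimal sketch follows using scikit-learn on synthetic data. The study itself used SPSS and MedCalc, so this only illustrates the ROC AUC and Youden-index computations, not the authors' actual procedure; all values here are simulated.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Synthetic example: biomarker levels for 85 survivors and 14 non-survivors.
y = np.concatenate([np.zeros(85), np.ones(14)])        # 1 = died within 28 days
x = np.concatenate([rng.lognormal(-0.3, 0.4, 85),      # e.g. MR-proADM, nmol/L
                    rng.lognormal(0.4, 0.4, 14)])

auc = roc_auc_score(y, x)                              # discriminatory ability

# Youden index: cut-off maximising sensitivity + specificity - 1,
# i.e. the threshold where tpr - fpr is largest on the ROC curve.
fpr, tpr, thresholds = roc_curve(y, x)
best = thresholds[np.argmax(tpr - fpr)]

print(f"ROC AUC = {auc:.3f}, Youden-optimal cut-off = {best:.2f} nmol/L")
```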
RESULTS

Patient characteristics

Main baseline and clinical characteristics on admission, according to the previously defined endpoints, are listed in Table 1. A total of 99 patients, 60 from Santa Lucía University Hospital (Cartagena, Spain) and 39 from Clínico University Hospital (Valladolid, Spain), were admitted due to COVID‐19, with a mean age of 66 years (61.6% male). Hypertension was the most common comorbidity, with a greater prevalence among non‐survivors (48.2% vs. 92.9%; P = .002), followed by diabetes mellitus (28.3%) and cardiovascular disease (18.2%). During hospitalization, 25 (25.3%) cases progressed to severe disease, of whom 16 (16.2%) required intensive care, 12 (12.1%) underwent mechanical ventilation and 14 (14.1%) died of any cause within the first 28 days of hospital stay. There were no significant differences between the two participating centres in the rates of 28‐day mortality (11.7% vs. 17.9%; P = .381) or progression to severe disease (23.3% vs. 28.2%; P = .582). In the overall population, the median hospital stay was 17 (IQR: 8‐16) days, and 12 (IQR: 7‐19) days in patients requiring Intensive Care Unit care.

Demographics, comorbidities and laboratory findings on admission, grouped according to survival status at 28 d and progression to severe disease. Laboratory test levels are expressed as median (IQR) or mean (SD), as appropriate. Abbreviations: ALT, alanine aminotransferase; COPD, chronic obstructive pulmonary disease; CRP, C‐reactive protein; IL‐6, interleukin‐6; LDH, lactate dehydrogenase; MR‐proADM, mid‐regional proadrenomedullin; NLR, neutrophil‐to‐lymphocyte ratio; PCT, procalcitonin; WBC, white blood cell.
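The between-centre comparison can be illustrated with a standard chi-squared test. In the sketch below, the 2×2 counts are reconstructed from the reported rates (7/60 ≈ 11.7% and 7/39 ≈ 17.9% deaths), an assumption made for illustration; any small gap from the reported P = .381 would reflect rounding or a test variant.

```python
from scipy.stats import chi2_contingency

# 28-day deaths vs survivors by centre, reconstructed from the reported rates:
# Santa Lucía: 7/60 deaths (11.7%); Clínico: 7/39 deaths (17.9%).
table = [[7, 53],
         [7, 32]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, P = {p:.3f}")  # P close to the reported .381
```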
Laboratory tests for prediction of 28‐day mortality

According to survival status, the biomarker levels are summarized in Table 1. Glucose, creatinine, albumin, LDH, ferritin, CRP, IL‐6, PCT, D‐dimer and MR‐proADM levels and the neutrophil‐to‐lymphocyte ratio (NLR) were significantly higher in patients who died, while platelet and lymphocyte counts were significantly lower.

The accuracy of biomarkers for predicting 28‐day mortality, evaluated by ROC curve analysis, is shown in Figure 1A and Table 2. MR‐proADM was the biomarker with the highest AUC (0.905, 95% confidence interval [CI]: 0.829‐0.955; P < .001).

Receiver operating characteristic curves of biomarker levels on admission to predict 28‐d mortality (A) and progression to severe disease (B). Receiver operating characteristic (ROC) curves for prediction of primary and secondary endpoints: only biomarkers with significant differences in comparisons among groups according to the outcome were included in the table. Abbreviations: AUC, area under the curve; CI, confidence interval; CRP, C‐reactive protein; IL‐6, interleukin‐6; LDH, lactate dehydrogenase; MR‐proADM, mid‐regional proadrenomedullin; NLR, neutrophil‐to‐lymphocyte ratio; PCT, procalcitonin.

According to the Youden index, we calculated the optimal cut‐offs for differentiating between survivors and non‐survivors (Table 3). Notably, Kaplan‐Meier analysis showed that no patient with an MR‐proADM value ≤0.88 nmol/L, the cut‐off recommended by the manufacturer, died in the first 28 days following Emergency Department admission (Figure 2A). Survival analysis for the Youden‐index cut‐off is shown in Figure 2B.

Accuracy of biomarkers for 28‐d mortality. Abbreviations: CI, confidence interval; CRP, C‐reactive protein; IL‐6, interleukin‐6; LDH, lactate dehydrogenase; LR, likelihood ratio; MR‐proADM, mid‐regional proadrenomedullin; NLR, neutrophil‐to‐lymphocyte ratio; NPV, negative predictive value; PCT, procalcitonin; PPV, positive predictive value. Cut‐off recommended by the manufacturer for early assessment of the risk of progression to a more severe disease condition.

Cumulative incidence of (A) 28‐d mortality during hospitalization stratified by MR‐proADM on admission ≤0.88 nmol/L, (B) 28‐d mortality during hospitalization stratified by MR‐proADM on admission >1.01 nmol/L, and (C) progression to severe disease stratified by MR‐proADM on admission >1.01 nmol/L.

In the multivariate Cox regression analysis (Table 4), after adjusting for confounders, MR‐proADM >1.01 nmol/L showed the strongest independent association with 28‐day mortality risk (hazard ratio [HR]: 10.470, 95% CI: 2.066‐53.049; P = .005). D‐dimer >935 ng/mL FEU (HR: 4.521, 95% CI: 1.185‐17.238; P = .027) and IL‐6 >117.8 pg/mL (HR: 3.739, 95% CI: 1.207‐11.585; P = .022) were also independent predictors of 28‐day mortality.

Uni‐ and multivariate Cox regression analysis for 28‐d mortality. Only variables with a P < .10 for the HR in univariate analysis were included in the table. Abbreviations: CI, confidence interval; CRP, C‐reactive protein; HR, hazard ratio; IL‐6, interleukin‐6; LDH, lactate dehydrogenase; MR‐proADM, mid‐regional proadrenomedullin; n.s., non‐significant; NLR, neutrophil‐to‐lymphocyte ratio; PCT, procalcitonin.
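The hazard ratios above come from a Cox proportional hazards model fitted on dichotomized biomarkers. A minimal lifelines sketch on synthetic data is shown below; the column names and toy values are assumptions for illustration and do not reproduce the study dataset or its estimates.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 99

# Synthetic cohort with dichotomized predictors, mirroring the article's model.
df = pd.DataFrame({
    "mr_proadm_high": rng.integers(0, 2, n),  # MR-proADM > 1.01 nmol/L
    "ddimer_high": rng.integers(0, 2, n),     # D-dimer > 935 ng/mL FEU
    "il6_high": rng.integers(0, 2, n),        # IL-6 > 117.8 pg/mL
    "time": rng.integers(1, 29, n),           # days to death or censoring (max 28)
    "death": rng.integers(0, 2, n),           # 1 = died within 28 days
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
cph.print_summary()  # hazard ratios (exp(coef)), 95% CIs and P values
```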
Laboratory tests for prediction of progression to severe disease

Creatinine, albumin, LDH, ferritin, CRP, PCT, IL‐6 and MR‐proADM levels and the NLR were significantly higher in patients who progressed to severe disease, while the lymphocyte count was significantly lower (Table 1). Again, MR‐proADM was the biomarker with the highest ROC AUC (0.829, 95% CI: 0.740‐0.897; P < .001) (Figure 1B and Table 2). Optimal cut‐offs for the biomarkers are shown in Table 5. Kaplan‐Meier analysis showed a higher likelihood of progression to severe disease in patients with an MR‐proADM level >1.01 nmol/L (Figure 2C).
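The stratified Kaplan‐Meier comparison can be sketched with lifelines as follows, again on a toy dataset rather than the study data; the grouping column name is an illustrative assumption.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n = 99
df = pd.DataFrame({
    "high_mrproadm": rng.integers(0, 2, n),  # MR-proADM > 1.01 nmol/L on admission
    "time": rng.integers(1, 29, n),          # days to event or censoring
    "event": rng.integers(0, 2, n),          # 1 = progression to severe disease
})

# Fit one survival curve per stratum.
km = KaplanMeierFitter()
for label, grp in df.groupby("high_mrproadm"):
    km.fit(grp["time"], grp["event"], label=f"MR-proADM high = {label}")
    # km.plot_survival_function()  # uncomment to draw the curves

# Mantel-Haenszel log-rank test between the two strata.
low, high = df[df.high_mrproadm == 0], df[df.high_mrproadm == 1]
res = logrank_test(low["time"], high["time"], low["event"], high["event"])
print(f"log-rank P = {res.p_value:.3f}")
```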
Accuracy of biomarkers for progression to severe disease. Abbreviations: CI, confidence interval; CRP, C‐reactive protein; IL‐6, interleukin‐6; LDH, lactate dehydrogenase; LR, likelihood ratio; MR‐proADM, mid‐regional proadrenomedullin; NLR, neutrophil‐to‐lymphocyte ratio; NPV, negative predictive value; PCT, procalcitonin; PPV, positive predictive value. Cut‐off recommended by the manufacturer for early assessment of the risk of progression to a more severe disease condition.

The multivariate adjusted Cox regression model showed that MR‐proADM >1.01 nmol/L (HR: 6.803, 95% CI: 1.458‐31.750; P = .015) and ferritin >376 ng/mL (HR: 5.525, 95% CI: 1.042‐29.308; P = .045) at admission were the only independent variables associated with progression to severe disease (Table 6).

Uni‐ and multivariate Cox regression analysis for progression to severe disease. Only variables with a P < .10 for the HR in univariate analysis were included in the table. Abbreviations: CI, confidence interval; CRP, C‐reactive protein; HR, hazard ratio; IL‐6, interleukin‐6; LDH, lactate dehydrogenase; MR‐proADM, mid‐regional proadrenomedullin; n.s., non‐significant; NLR, neutrophil‐to‐lymphocyte ratio; PCT, procalcitonin.
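All of the tabulated accuracy measures (sensitivity, specificity, likelihood ratios and predictive values) derive from the same 2×2 confusion table for a given cut‐off. A generic helper, written as a sketch rather than a reproduction of the authors' MedCalc output:

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard test-accuracy metrics from a 2x2 confusion table."""
    sens = tp / (tp + fn)          # sensitivity (true positive rate)
    spec = tn / (tn + fp)          # specificity (true negative rate)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec),  # positive likelihood ratio
        "LR-": (1 - sens) / spec,  # negative likelihood ratio
        "PPV": tp / (tp + fp),     # positive predictive value
        "NPV": tn / (tn + fn),     # negative predictive value
    }

# Illustrative counts only (not taken from the article's tables).
print(diagnostic_accuracy(tp=12, fp=15, fn=2, tn=70))
```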
DISCUSSION

Our study suggests that MR‐proADM may be used as an accurate marker of fatal outcome and of progression to severe disease in COVID‐19. Its accuracy was significantly better than that shown by other previously investigated biomarkers. MR‐proADM levels above 1.01 nmol/L were associated with 28‐day mortality and with progression to severe disease, independent of other factors.

Endothelial dysfunction is known to be involved in organ dysfunction during bacterial sepsis 22 , 23 and viral infections, 24 as it induces a pro‐coagulant state, microvascular leak and organ failure.
Unlike serious infections of other aetiologies, epidemiological studies show that COVID‐19 patients requiring hospital admission frequently present with accompanying conditions such as hypertension, diabetes, chronic renal failure and cardiovascular disease. 25 , 26 These comorbidities are associated with chronic endothelial dysfunction and could predispose these patients to a worse outcome. 27 The endothelium plays major roles in the response to infection: endothelial cells release chemokines, which guide leucocytes to the infected tissue, and cytokines that activate inflammatory responses. Patients with endothelial dysfunction present major alterations of the glycocalyx, intercellular junctions and endothelial cells, resulting in enhanced leucocyte adhesion and extravasation, as well as in the induction of a procoagulant and antifibrinolytic state. Prior endothelial dysfunction could thus predispose to the development of severe forms of COVID‐19. 28

In fact, emerging data suggest a crucial role of endothelial dysfunction during SARS‐CoV‐2 infection. 29 In this regard, recent histopathological studies have evidenced the presence of virus within endothelial cells of different organs beyond the lungs, suggesting a direct viral effect, as well as the accumulation of inflammatory cells, with evidence of endothelial and inflammatory cell death, thus contributing directly to severity. In short, SARS‐CoV‐2 infection may facilitate the induction of endotheliitis in different organs as a direct consequence of viral involvement and of the host inflammatory response. 3 , 4 While endotheliopathy is thought to be a key factor in severe COVID‐19 pathogenesis, markers indicative of this process have not been well established. Only a few small studies have analysed the role of endothelium‐related molecules such as thrombomodulin, 5 angiopoietin 2, 30 , 31 VCAM or ICAM 32 in COVID‐19.

Among the endothelial dysfunction markers associated with sepsis, MR‐proADM appears to be the most promising, as reported by Martin‐Fernandez et al. in a recent study. 22 This biomarker can be automated with an adequate turnaround time for implementation as a stat laboratory test in clinical practice. 9

In our study, MR‐proADM was the biomarker with the highest accuracy for 28‐day mortality, with a ROC AUC of 0.905. Furthermore, after adjusting for confounding variables, multivariate analysis showed the highest HR (10.47) when plasma MR‐proADM levels on admission were above 1.01 nmol/L for the primary outcome, together with D‐dimer >935 ng/mL FEU and IL‐6 >117.8 pg/mL (HR: 4.521 and 3.739, respectively). These findings support the association of the triad of endothelial damage, inflammation and coagulopathy with COVID‐19 severity. 33 Along this line, numerous studies describe the association between elevated plasma levels of D‐dimer or IL‐6 and poor prognosis. 34 , 35

Similar to the results for 28‐day mortality, ROC AUC analysis showed that the accuracy of MR‐proADM was the highest for detecting progression to severe disease (AUC above 0.80), better than other inflammation markers such as CRP, ferritin, LDH and PCT, all of them recommended for monitoring COVID‐19 patients. 8 In addition, MR‐proADM and ferritin were the only biomarkers independently associated with progression to severe disease in the multivariate analysis.
The same cut‐off (>1.01 nmol/L) for MR‐proADM on admission showed the highest HR (6.803), while ferritin >376 ng/mL achieved an HR of 5.525. Serum ferritin, a feature of haemophagocytic lymphohistiocytosis, a known complication of viral infection, is closely related to poor recovery of COVID‐19 patients, and those with impaired lung lesions are more likely to have increased ferritin levels. 36 Again, the combination of an inflammatory marker, in this case ferritin, with an endothelial damage marker such as MR‐proADM seems crucial in the development of complications and fatal evolution in COVID‐19.

In the setting of infectious disease, MR‐proADM has been reported as a useful marker for differentiating between infection and sepsis 22 , 37 and for early stratification of severity in patients with sepsis. 15 , 38 , 39 Few studies have evaluated the potential role of MR‐proADM in viral infections, and most of them have been limited to influenza virus. Valero‐Cifuentes et al. 19 reported a moderate ROC AUC (0.68) for predicting a poor outcome in a cohort of patients admitted to hospital with influenza syndrome. In contrast, Valenzuela et al., 18 in a small cohort of patients with influenza A virus pneumonia, obtained a ROC AUC of 0.871 for predicting mortality, with an optimal cut‐off of 1.12 nmol/L. In turn, Bello et al. 17 reported a ROC AUC of 0.859 and an optimal cut‐off of 1.09 nmol/L in patients with community‐acquired pneumonia of different aetiologies, including viral. 16 These data are consistent with those obtained in our study.

To our knowledge, only one previous study has analysed the prognostic value of MR‐proADM in COVID‐19. Spoto et al. 40 recently reported a ROC AUC for 28‐day mortality of 0.89 in 69 patients, similar to that reported in our larger sample (0.905). However, there were substantial differences in baseline characteristics between the populations of the two studies, which may partially explain the different optimal cut‐offs (1.01 nmol/L vs. 2.0 nmol/L). The Spoto et al. cohort 40 included older patients than ours (79 years vs. 66 years), with a higher incidence of comorbidities such as cardiovascular disease (68.1% vs. 18.2%) and greater severity, with a higher rate of death (23.2% vs. 14.1%) and of admission to the ICU (43.5% vs. 16.2%).

In addition, it is noteworthy that an MR‐proADM level ≤0.88 nmol/L allowed mortality to be ruled out in the 28 days following hospital admission, as previously reported by Andaluz‐Ojeda et al. 15 , 41 in critically ill patients with a diagnosis of sepsis.

This study has some limitations, notably the small sample size. In addition, other blood biomarkers previously reported as predictors of poor outcome, such as troponin, 42 were not available in all patients and were not included in the study. Finally, we did not measure biomarkers serially; their values may change during the patient's course, and serial measurement might allow better identification of deterioration or improvement.

In conclusion, the present study reports that plasma MR‐proADM levels, measured on admission to the Emergency Department, were increased in COVID‐19 patients who died or progressed to severe disease.
This study has some limitations, notably the small sample size. In addition, other blood biomarkers previously reported as predictors of poor outcome, such as troponin, 42 were not available for all patients and were not included in the study. Finally, biomarkers were not measured serially; their values may change during the patient's course, and serial measurements might better identify deterioration or improvement. In conclusion, the present study shows that plasma MR‐proADM levels measured on admission to the Emergency Department were increased in COVID‐19 patients who died or progressed to severe disease. Moreover, it was the biomarker with the highest performance, expressed as ROC AUC, and an MR‐proADM level above 1.01 nmol/L was the only factor independently predicting both outcomes. Our results suggest that MR‐proADM levels have a potential role in the assessment of prognosis in early stages of COVID‐19 and might be a candidate for incorporation into an early management protocol. Further studies with larger sample sizes are required to confirm these findings. CONFLICT OF INTEREST: The authors state no conflict of interest. AUTHOR CONTRIBUTIONS: LGGR and DAO designed this study, analysed the data and wrote the manuscript. All authors contributed to the enrolment of patients, data collection, sample collection and biomarker measurement. LCS provided statistical advice. All authors reviewed and approved the final manuscript. All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Background: Early identification of patients at high risk of progression to severe COVID-19 remains an unsolved challenge. Although growing evidence demonstrates a direct association between endotheliitis and severe COVID-19, the role of endothelial damage biomarkers has been scarcely studied. We investigated the relationship between circulating mid-regional proadrenomedullin (MR-proADM) levels, a biomarker of endothelial dysfunction, and the prognosis of SARS-CoV-2-infected patients. Methods: Prospective observational study enrolling adult patients with confirmed COVID-19. On admission to the emergency department, a blood sample was drawn for laboratory analysis. The primary and secondary endpoints were 28-day all-cause mortality and progression to severe COVID-19. Area under the curve (AUC) and multivariate regression analyses were employed to assess the association of the biomarker with the established endpoints. Results: A total of 99 patients were enrolled. During hospitalization, 25 (25.3%) cases progressed to severe disease and the 28-day mortality rate was 14.1%. MR-proADM showed the highest AUC for predicting 28-day mortality (0.905; 95% CI: 0.829-0.955; P < .001) and progression to severe disease (0.829; 95% CI: 0.740-0.897; P < .001). MR-proADM plasma levels above the optimal cut-off (1.01 nmol/L) showed the strongest independent association with 28-day mortality risk (hazard ratio [HR]: 10.470, 95% CI: 2.066-53.049; P < .005) and with progression to severe disease (HR: 6.803, 95% CI: 1.458-31.750; P = .015). Conclusions: Mid-regional proadrenomedullin was the biomarker with the highest performance for predicting death and progression to severe disease in COVID-19 patients and represents a promising predictor for both outcomes, which might constitute a potential tool in the assessment of prognosis in the early stages of this disease.
null
null
8,512
375
[ 584, 134, 37, 251, 46, 295, 339, 628, 351, 64 ]
14
[ "proadm", "mr proadm", "mr", "28", "disease", "severe", "patients", "analysis", "severe disease", "mortality" ]
[ "termed coronavirus disease", "syndrome coronavirus", "covid 19 pathogenesis", "covid 19 endothelial", "respiratory syndrome coronavirus" ]
null
null
null
[CONTENT] COVID‐19 | endotheliitis | mid‐regional proadrenomedullin | prognosis | SARS‐CoV‐2 | severity [SUMMARY]
null
[CONTENT] COVID‐19 | endotheliitis | mid‐regional proadrenomedullin | prognosis | SARS‐CoV‐2 | severity [SUMMARY]
null
[CONTENT] COVID‐19 | endotheliitis | mid‐regional proadrenomedullin | prognosis | SARS‐CoV‐2 | severity [SUMMARY]
null
[CONTENT] Adrenomedullin | Aged | Aged, 80 and over | Area Under Curve | COVID-19 | Cause of Death | Disease Progression | Endothelium, Vascular | Female | Humans | Inflammation | Intensive Care Units | Male | Middle Aged | Mortality | Peptide Fragments | Prognosis | Proportional Hazards Models | Prospective Studies | Protein Precursors | Respiration, Artificial | SARS-CoV-2 | Severity of Illness Index [SUMMARY]
null
[CONTENT] Adrenomedullin | Aged | Aged, 80 and over | Area Under Curve | COVID-19 | Cause of Death | Disease Progression | Endothelium, Vascular | Female | Humans | Inflammation | Intensive Care Units | Male | Middle Aged | Mortality | Peptide Fragments | Prognosis | Proportional Hazards Models | Prospective Studies | Protein Precursors | Respiration, Artificial | SARS-CoV-2 | Severity of Illness Index [SUMMARY]
null
[CONTENT] Adrenomedullin | Aged | Aged, 80 and over | Area Under Curve | COVID-19 | Cause of Death | Disease Progression | Endothelium, Vascular | Female | Humans | Inflammation | Intensive Care Units | Male | Middle Aged | Mortality | Peptide Fragments | Prognosis | Proportional Hazards Models | Prospective Studies | Protein Precursors | Respiration, Artificial | SARS-CoV-2 | Severity of Illness Index [SUMMARY]
null
[CONTENT] termed coronavirus disease | syndrome coronavirus | covid 19 pathogenesis | covid 19 endothelial | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] termed coronavirus disease | syndrome coronavirus | covid 19 pathogenesis | covid 19 endothelial | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] termed coronavirus disease | syndrome coronavirus | covid 19 pathogenesis | covid 19 endothelial | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] proadm | mr proadm | mr | 28 | disease | severe | patients | analysis | severe disease | mortality [SUMMARY]
null
[CONTENT] proadm | mr proadm | mr | 28 | disease | severe | patients | analysis | severe disease | mortality [SUMMARY]
null
[CONTENT] proadm | mr proadm | mr | 28 | disease | severe | patients | analysis | severe disease | mortality [SUMMARY]
null
[CONTENT] endothelial | 19 | vascular | covid 19 | covid | damage | role | related | severe | induced [SUMMARY]
null
[CONTENT] ci | proadm | mr proadm | mr | disease | table | ratio | severe disease | 28 | progression severe [SUMMARY]
null
[CONTENT] mr | mr proadm | proadm | 28 | disease | severe | patients | authors | analysis | severe disease [SUMMARY]
null
[CONTENT] COVID-19 ||| COVID-19 ||| [SUMMARY]
null
[CONTENT] 99 ||| 25 | 25.3% | 28-day | 14.1% ||| 28-day | 0.905 | CI] | 95% | 0.829 | 0.829 | CI] | 95% | 0.740 ||| 1.01 | 28-day | 10.470 | 95% | CI | 2.066 | .005 | 6.803 | 95% | CI | 1.458-31.750 [SUMMARY]
null
[CONTENT] COVID-19 ||| COVID-19 ||| ||| COVID-19 ||| ||| 28-day | COVID-19 ||| ||| ||| 99 ||| 25 | 25.3% | 28-day | 14.1% ||| 28-day | 0.905 | CI] | 95% | 0.829 | 0.829 | CI] | 95% | 0.740 ||| 1.01 | 28-day | 10.470 | 95% | CI | 2.066 | .005 | 6.803 | 95% | CI | 1.458-31.750 ||| COVID-19 [SUMMARY]
null
Experimental Study on Effects of Adipose-Derived Stem Cell-Seeded Silk Fibroin Chitosan Film on Wound Healing of a Diabetic Rat Model.
29443833
Wound healing is a complex process that relies on growth factors and the stimulation of angiogenesis, both of which are impaired in diabetic wounds. Tissue engineering materials composed of adipose-derived stem cells (ADSCs) and silk fibroin (SF)/chitosan (CS) may be able to address this problem. The aim of this study was to investigate the wound-healing potential of ADSC-seeded SF/CS in streptozotocin-induced diabetic rats.
BACKGROUND
Thirty-six male Sprague-Dawley rats were purchased and randomly assigned to 3 groups of 12 animals each: a control group (no graft), a group treated with an SF/CS film graft, and a group treated with an ADSC-seeded SF/CS graft. Diabetes was induced by an intraperitoneal injection of streptozotocin. A cutaneous wound was created on the dorsal region of all experimental animals. The ADSCs were labeled with CM-Dil fluorescent staining. Wound healing was assessed in all animal groups by observing the rate of wound closure and by hematoxylin and eosin staining. The expression of epidermal growth factor, transforming growth factor-β, and vascular endothelial growth factor at the wound sites was studied by enzyme-linked immunosorbent assay to evaluate the effect of growth factors secreted by ADSCs. The differentiation of ADSCs was analyzed by immunofluorescence staining.
MATERIALS AND METHODS
The ADSC-seeded SF/CS film treatment significantly increased the rate of wound closure in treated animals; wound healing was thus markedly enhanced in the ADSC-SF/CS treatment group compared with the control group and the SF/CS film treatment group. Histological observations confirmed the improved condition of wound healing. Enzyme-linked immunosorbent assay and immunofluorescence staining demonstrated the secretion of growth factors by ADSCs and their differentiation, respectively.
RESULTS
Our analyses clearly suggested that it is feasible and effective to enhance wound healing in a diabetic rat model with ADSC-seeded SF/CS film.
CONCLUSIONS
[ "Adipocytes", "Animals", "Cell Culture Techniques", "Chitosan", "Diabetes Mellitus, Experimental", "Enzyme-Linked Immunosorbent Assay", "Epidermal Growth Factor", "Fibroins", "Flow Cytometry", "Immunophenotyping", "Male", "Random Allocation", "Rats", "Rats, Sprague-Dawley", "Stem Cells", "Transforming Growth Factor beta", "Vascular Endothelial Growth Factor A", "Wound Healing" ]
5916459
null
null
MATERIALS AND METHODS
Isolation and Culture of ADSCs Cytological and histological sections were prepared with expert assistance in the Regenerative Medicine Laboratory of Jinan University. Rats (Sprague-Dawley, 250–300 g) were anesthetized with 3% pentobarbital sodium (30 mg/kg) and then underwent surgery consisting of a lower abdominal midline incision and bilateral resection of the inguinal fat pads to obtain tissue for ADSC culture. A specimen of inguinal fat was harvested and placed in ice-cold phosphate-buffered saline (PBS). The adipose tissue was rinsed with PBS, minced into small pieces, and then incubated in a solution containing 0.1% collagenase type I (Sigma, St. Louis, MO) for 30 minutes at 37°C with shaking (100 rounds per minute). The top lipid layer was removed, and the remaining liquid portion was centrifuged at 1000g for 10 minutes at room temperature to obtain the stromal cell fraction. The pellet was filtered through a 100-μm mesh filter to remove debris. The filtrate was treated with 0.3% NaCl for 10 minutes to lyse red blood cells and centrifuged at 300g for 10 minutes. The supernatant was discarded, and the remaining cells were suspended in Dulbecco modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS) (Hyclone, Logan, UT), plated in a 25-cm2 flask, and cultured at 37°C in 5% CO2. The primary cells were cultured for 6 to 7 days until they reached confluence and were defined as "passage 0." Adipose-derived stem cells were cultured and expanded in control medium and used for the experiments at passages 3 to 5. Differentiation and Immunophenotyping of ADSCs Stem cell profile was evaluated through membrane receptor phenotyping and differentiation assays. Stem cell markers were determined by flow cytometric analysis for CD29, CD45, and CD90 (BD Biosciences, San Jose, CA) on a FACSCalibur flow cytometer (BD Biosciences). Stem cell pluripotency was evaluated by inducing osteogenesis and adipogenesis under specific differentiation medium stimuli.
Osteogenesis was induced with low-glucose DMEM supplemented with 10% FBS, dexamethasone (0.1 μM), ascorbic acid (0.2 mM), and β-glycerol phosphate (10 mM) (Sigma) for 24 days. Adipogenesis was induced with high-glucose DMEM supplemented with 10% FBS, dexamethasone (1 μM), isobutylmethylxanthine (0.5 mM), insulin (10 μg/mL), and indomethacin (100 μM) (Sigma) for 18 days. Adipocyte lipid and osteocyte calcium deposits were stained with Oil-red-O and Alizarin Red (Chemicon), respectively. Preparation of SF/CS The blended SF/CS scaffolds were fabricated as described previously, with slight modification.14,19–21 Briefly, CS was dissolved in 2% acetic acid and clarified by centrifugation; the final concentration of CS used was 2%. Silk cocoons were cut into pieces and degummed (to remove sericin) by boiling in 0.01 M Na2CO3 solution for 30 minutes, repeated 3 times, and the fibers were washed with deionized water and then kept at 37°C overnight for drying. The degummed SF was dissolved in a calcium nitrate tetrahydrate-methanol solution (molar ratio 1:4:2 calcium/water/methyl alcohol at 80°C), dialyzed against distilled water, and lyophilized to obtain a solid SF fluff. The SF fluff was further dissolved in distilled water at a 2% (w/v) concentration over a 3-hour period with continuous stirring to obtain an aqueous SF solution. The solution was cooled and stored at 4°C until use. A volume of the same concentration of SF was added to prepare a 50:50 (v/v) SF/CS blend, which was mixed for 30 minutes (Fig. 1). Flow diagram: the preparation of SF/CS and ADSC-seeded SF/CS. Silk fibroin/chitosan blend (SF/CS) films were cut into 1.5-cm-diameter circles before implantation. The SF/CS film appears light yellow and transparent, with a slightly uneven surface. Fluorescence microscopy showing immunofluorescence staining of CM-Dil-labeled ADSCs (red) in SF/CS film (100×). Nuclei (blue) were counterstained with 4′,6-diamidino-2-phenylindole, dihydrochloride (DAPI).
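As a quick sanity check on the blend composition, mixing equal volumes of the two 2% (w/v) stocks halves each component's final concentration. An illustrative calculation (the 50-mL working volumes are assumptions, not from the paper):

```python
# Illustrative arithmetic for the 50:50 (v/v) SF/CS blend; volumes are assumed.
v_sf_ml, v_cs_ml = 50.0, 50.0        # equal volumes of the two stock solutions
c_sf, c_cs = 2.0, 2.0                # % w/v of the SF and CS stocks
total_ml = v_sf_ml + v_cs_ml
print(f"final SF = {c_sf * v_sf_ml / total_ml:.1f}% w/v")
print(f"final CS = {c_cs * v_cs_ml / total_ml:.1f}% w/v")
print(f"total solids = {(c_sf * v_sf_ml + c_cs * v_cs_ml) / total_ml:.1f}% w/v")
```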
Preparation of ADSC-SF/CS Combination Adipose-derived stem cells (passage 3) were labeled with CM-Dil, a lipophilic dye that stains the membrane of viable cells. For cell labeling, digested ADSCs were washed with PBS and incubated with CM-Dil (Life Technologies) in a 37°C incubator for 5 minutes, and then for an additional 15 minutes at 4°C. The cells were washed with PBS twice and then seeded onto a SF/CS membrane at a density of 1 × 106 ADSCs per cm2. Cell-seeded SF/CS membranes incubated for 48 hours under standard culture conditions were used for transplantation in the animal experiments. Once on the operative field, grafts were transferred to a sterile six-well plate and washed gently in two 500-μL aliquots of PBS to remove any nonadherent cells or medium. The cell-seeded SF/CS membrane cultured for 48 hours was observed under a fluorescence microscope (Fig. 1).
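From the stated seeding density, the number of cells required per film follows directly from the film area. A back-of-the-envelope check for the 1.5-cm-diameter films used here:

```python
# Illustrative arithmetic (not from the paper): cells needed per 1.5-cm film
# at the stated density of 1 x 10^6 ADSCs per cm^2.
import math

diameter_cm = 1.5
density_per_cm2 = 1e6
area_cm2 = math.pi * (diameter_cm / 2) ** 2          # ~1.77 cm^2
print(f"film area ~ {area_cm2:.2f} cm^2")
print(f"cells per film ~ {density_per_cm2 * area_cm2:.2e}")   # ~1.77e6 cells
```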
Experimental Diabetes and Excisional Wounding Experimental animals: 36 male Sprague-Dawley rats (250–300 g) were obtained from Beijing University of Chinese Medicine (Beijing, China). All studies and procedures involving animals were conducted in compliance with guidelines established by the Institute of Laboratory Animal Science (Jinan University, Guangzhou, China). Rats were kept for 3 weeks on a balanced ration with water ad libitum for acclimatization. Standard animal chow and clean water were constantly supplied to the animals. All procedures complied with the standards stated in the Guide for the Care and Use of Laboratory Animals (Institute of Laboratory Animal Resources, National Academy of Sciences, Bethesda, MD, 1996). Eight-week-old Sprague-Dawley rats received streptozocin 40 mg/kg (Sigma) in 0.05 mol/L citrate buffer, pH 4.5, daily for 5 days. Only rats showing consistent polyuria, polydipsia, polyphagia, weight loss, and elevated blood glucose (>16.7 mmol/L or 250 mg/dL) were included in the study.22 These diabetic rats (n = 36) were randomly divided into 3 groups of 12 rats each: group A, diabetic control with no graft; group B, diabetic treated with SF/CS film alone; and group C, diabetic treated with ADSC-SF/CS. Rats were allowed to manifest hyperglycemia for 8 weeks before surgical wounding. All rats (n = 36) were anesthetized with an intraperitoneal injection of sodium pentobarbital (ServiceBio, Inc., Wuhan, China). Their back hair was carefully shaved. The application field was outlined with a marking pen just before removing the skin, and a 1.5-cm-diameter circular wound was made in the middle of the dorsum of each rat. All wounds were cleaned with iodophor and sterile normal saline. Each group received the appropriate graft as described previously, which was then covered with Vaseline oil gauze; finally, 3M transparent dressing was used to fix the graft tightly in place. After completion of the wound healing test, rats were sacrificed. Samples of the previously injured tissue were removed at 3, 7, 14, and 21 days. The incision area and adjacent normal skin were excised from each sacrificed animal. Half of every sample was fixed in 10% formalin for histological analysis, and the other half was frozen in liquid nitrogen for enzyme-linked immunosorbent assay (ELISA) testing.
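The random allocation step can be reproduced in a few lines; the sketch below is illustrative (the paper does not describe its randomization procedure, and the seed is an assumption added for reproducibility):

```python
# Illustrative randomization of 36 rats into 3 equal groups (A, B, C).
import numpy as np

rng = np.random.default_rng(seed=42)     # assumed seed, for reproducibility only
rat_ids = np.arange(1, 37)               # 36 animals
rng.shuffle(rat_ids)
groups = {name: sorted(rat_ids[i * 12:(i + 1) * 12].tolist())
          for i, name in enumerate(["A (no graft)", "B (SF/CS)", "C (ADSC-SF/CS)"])}
for name, ids in groups.items():
    print(name, ids)
```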
Calculation of Wound Healing Rate Wound closure status was evaluated using a digital camera on days 3, 7, 14, and 21 after treatment. The area of the wound left unhealed was measured using ImageJ software. The wound healing rate was calculated as follows: wound healing rate = (original wound area − wound area on a given day postsurgery)/original wound area.
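The healing-rate formula above maps directly to code; the areas below are made-up example values standing in for ImageJ measurements (176.7 mm^2 is the area of a 1.5-cm-diameter wound):

```python
# Sketch of the paper's wound-healing-rate formula with hypothetical areas.
def healing_rate(original_area: float, current_area: float) -> float:
    """Fraction of the original wound area that has closed."""
    return (original_area - current_area) / original_area

areas_mm2 = {0: 176.7, 3: 150.2, 7: 95.4, 14: 40.1, 21: 8.3}   # example values
for day, area in areas_mm2.items():
    print(f"day {day:2d}: healing rate = {healing_rate(areas_mm2[0], area):.2f}")
```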
Histopathological Assay The sample of skin containing dermis and hypodermis was isolated, carefully trimmed with a cutter, and fixed in 10% neutral formalin solution. After paraffin embedding, 3- to 4-μm sections were taken and stained with hematoxylin and eosin for study of tissue appearance. Three indices were measured: the length of the epithelial tongue, the epithelial gap, and the area of granulation tissue, as shown in Figure 2I. All slides were observed under a light microscope. Histological analysis of skin after hematoxylin and eosin staining. (I) A schematic transversal section is depicted for proper identification of the respective regions observed. (II) Images of granulation tissue obtained during healing. The arrows point to the blood vessels that grow perpendicular to the wound on days 7 and 21 after treatment. Measurement and analysis of the length of the epithelial tongue (III), the epithelial gap (IV), and the area of granulation (V). All data are mean ± SD (n = 3). *P < 0.05; **P < 0.01. Immunofluorescent Histology Paraffin-embedded sections first underwent standard deparaffinization and rehydration procedures, and they were then probed with fluorescein isothiocyanate-anti-CD31 antibody (green), anti–α-smooth muscle actin (α-SMA) antibody (green), or anti–cytokeratin 10 antibody (green) (Cell Signaling Technology, Danvers, MA) overnight at 4°C. Next, sections were incubated with horseradish peroxidase–conjugated secondary antibodies (Santa Cruz Biotechnology Inc., Santa Cruz, CA). The nuclei were then counterstained with 4′,6-diamidino-2-phenylindole, dihydrochloride. ELISA Weighed, minced samples were placed in lysis buffer containing protease inhibitors for 24 hours at 4°C. Tissues were then homogenized and centrifuged to remove tissue residue, and the amounts of epidermal growth factor (EGF), transforming growth factor (TGF)-β1, and vascular endothelial growth factor (VEGF) in the lysis buffer were measured in diluted aliquots with standard ELISA assays. Statistical Analysis All data are expressed as means ± SD and were analyzed by 1-way ANOVA with Fisher adjustment (SPSS 13.0 for Windows). Differences between groups were analyzed by the Tukey test; P less than 0.05 was considered statistically significant in all analyses.
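The stated statistics (1-way ANOVA followed by a Tukey test) can be sketched as follows; the VEGF values are made-up placeholders, and SciPy/statsmodels stand in for the SPSS workflow actually used:

```python
# Illustrative one-way ANOVA + Tukey HSD; the measurements are placeholders.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

vegf = {"A_control": [110.0, 120.0, 115.0],
        "B_sfcs": [150.0, 160.0, 155.0],
        "C_adsc_sfcs": [230.0, 250.0, 240.0]}

f_stat, p = f_oneway(*vegf.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

values = np.concatenate([np.asarray(v) for v in vegf.values()])
labels = np.repeat(list(vegf.keys()), [len(v) for v in vegf.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```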
RESULTS
Characterization of ADSCs Adipose-derived stem cells were initially expanded for 3 passages and later characterized by cell differentiation and immunophenotyping assays. The cultures were observed using an inverted light microscope. Attachment of spindle-shaped cells to the tissue culture plastic flask was observed after 1 day of culture. Primary cultures reached 70% to 80% confluence in approximately 5 to 6 days. During passaging, cell growth tended to accelerate and the morphology of the cells changed gradually, becoming flatter with increasing passage number. After 3 passages, cultured cells displayed typical fibroblastoid morphology (Fig. 3I) and, under appropriate stimuli, exhibited potential for adipocyte and osteocyte differentiation, demonstrated by Oil-red-O (Fig. 3II) and Alizarin Red staining (Fig. 3III), respectively. Cells also expressed the characteristic stem cell markers CD29 and CD90 and were negative for CD45 (Fig. 3IV–VI). Culture, Differentiation and Immunophenotyping of ADSCs. (I) Representative image of ADSCs displaying cell phenotype under bright-field microscopy (100×). (II-III) Stem cell pluripotency was evaluated by inducing adipogenesis and osteogenesis under differentiation stimuli and subsequent staining with (II) Oil-red-O and (III) Alizarin Red, respectively (100×). (IV-VI) Immunophenotypic characterization of ADSCs: (IV) CD29 (+), (V) CD45 (−), and (VI) CD90 (+). Histopathological Assay Histological examination showed no obvious difference among the 3 groups at early time points, but the ADSC-SF/CS group showed a significantly higher epithelialization rate, increased granulation tissue formation, and greater capillary formation compared with groups A and B on days 7 and 21 (Fig. 2II). By day 7, there were more blood vessels (arrows), growing perpendicular to the wound, and fewer inflammatory cells in the wound granulation tissue of group C than in the other groups (neovascularization, C > B > A; inflammatory cells, C < B < A). On day 21 after wounding, the images showed that the wound tissue grafted with ADSC-SF/CS had regenerated to a state close to normal tissue.
In contrast, some neovascularization, inflammatory cells, and collagen had not yet resolved in the other 2 groups, especially the control group. Immunofluorescent Histology Engrafted CM-Dil–labeled cells were identified throughout the various substrata of the dermis and cutaneous appendages at 7, 14, and 21 days postoperatively (Fig. 4). Engrafted ADSCs were identified on the basis of the red CM-Dil signal; at 14 days, engrafted cells were seen incorporated into winding linear structures consistent with microvascular elements, coexpressing positive signals for red CM-Dil and the endothelial marker CD31 (Fig. 4I). Some engrafted ADSCs were noted to recapitulate elements of linear vascular structures, costaining for the vascular smooth muscle marker α-SMA (Fig. 4II) and CM-Dil. However, at this time point, red fluorescence-labeled cells were not observed to be incorporated into the regenerated epidermal epithelium.
At 21 days, in addition to the findings described above, engrafted Dil-positive cells were observed to be incorporated as components of the regenerated epidermal epithelium, on the basis of red fluorescence-labeled cells costaining for cytokeratin 10, a cytokeratin marker of epidermal epithelium (Fig. 4III). Expression of epithelial cell markers CD31, α-SMA, and K10 in wound skin. (I) Photomicrographs showing endothelial cells immunostained for CD31 (green) and engrafted ADSCs (red) in the wound site of diabetic rats, with nuclei counterstained with DAPI (blue), at 200× magnification. (II) Images showing vascular smooth muscle cells immunostained for α-SMA (green), engrafted ADSCs (red), and DAPI (blue) in the wound site of diabetic rats. (III) Images showing epidermal epithelium immunostained for K10 (green), engrafted ADSCs (red), and DAPI (blue) in the wound site of diabetic rats. Scale bar = 50 μm. ELISA Mean EGF, TGF-β, and VEGF levels were significantly higher (P < 0.01) in the ADSC-SF/CS–treated group compared with the SF/CS-treated group and the control group at 14 days postoperatively. In addition, mean VEGF levels were higher (P < 0.05) in the SF/CS-treated group compared with the control group (Fig. 5). ELISA analysis of wound tissue for concentrations of EGF (I), TGF-β (II), and VEGF (III). Comparison of the concentrations of the three growth factors among the ADSC-SF/CS-treated (C), SF/CS-treated (B), and control (A) groups at 14 days post-wounding. **P < 0.01, comparison between ADSC-SF/CS-treated and control groups; *P < 0.05, comparison between ADSC-SF/CS-treated and SF/CS-treated groups.
CONCLUSIONS
In this study, we successfully applied ADSC-seeded SF/CS film as a cytoprosthetic hybrid for reconstructive support in a diabetic cutaneous wound healing model, in which it supplied growth factors and promoted angiogenesis of the diabetic wound. As a carrier, the SF/CS film supports the delivery, engraftment, and secretory activity of stem cells, as well as their differentiation into vascular and epithelial components. However, some problems and details remain unresolved, such as the optimum number of cells to seed on the material. In conclusion, ADSC-seeded SF/CS is a promising cell-matrix hybrid and warrants further in vivo study of its potential for reconstructive surgical applications.
[ "Solation and Culture of ADSCs", "Differentiation and Immunophenotyping of ADSCs", "Preparation of SF/CS", "Preparation of ADSC-SF/CS Combination", "Experimental Diabetes and Excisional Wounding", "Calculation of Wound Healing Rate", "Histopathological Assay", "Immunofluorescent Histology", "ELISA", "Statistical Analysis", "Characterization of ADSCs", "Histopathological Assay", "Immunofluorescent Histology", "ELISA" ]
[ "Cytological and histological sections were performed in Regenerative Medicine Laboratory of Jinan University for expert assistance in preparing. The rats (Sprague-Dawley) (250–300 g) were anesthetized with 3% pentobarbital sodium (30 mg/kg) and then underwent surgery consisting of a lower abdominal midline incision and bilateral resection of the inguinal fat pads for culture ADSCs. A specimen of inguinal fat was harvested and placed in ice-cold phosphate-buffered saline (PBS). The adipose tissue was rinsed with PBS, minced into small pieces, and then incubated in a solution containing 0.1% collagenase type I (Sigma, St. Louis, MO) for 30 minutes at 37°C with shaking (100 rounds every minute). The top lipid layer was removed, and the remaining liquid portion was centrifuged at 1000g for 10 minutes at room temperature to obtain the stromal cell fraction. The pellet was filtered with a 100-μm mesh filter to remove the debris. The filtrate was treated with 0.3% NaCl for 10 minutes to lyse red blood cells and centrifuged at 300g for 10 minutes. The supernatant was discarded, and the remaining cells were suspended in Dulbecco modified Eagle’s medium (DMEM) supplemented with 10% fetal bovine serum (FBS) (Hyclone, Logan, Utah), and plated in a 25-cm2 bottle and cultured at 37°C in 5% CO2. The primary cells were cultured for 6 to 7 days until they reached confluence and were defined as “passage 0.” Adipose-derived stem cells were cultured and expanded in control medium and used for the experiments at passages 3 to 5.", "Stem cell profile was evaluated through membrane receptor phenotyping and differentiation assays. Stem cell markers were determined through cytometric analysis for CD29, CD45, and CD90 (BD Biosciences, San Jose, CA), FACSCalibur flow cytometer (BD Biosciences). Stem cell pluripotency was evaluated by culture osteogenesis and adipogenesis under specific differentiation medium stimuli. Osteogenesis was induced with low-glucose DMEM supplemented with 10% FBS, dexamethasone (0.1 μM), ascorbic acid (0.2 mM), and β glycerol phosphate (10 mM) (Sigma) for 24 days. Adipogenesis was induced with high-glucose DMEM medium supplemented with 10% FBS, dexamethasone (1 μM), isobutylmethylxanthine (0.5 mM), insulin (10 μg/mL), and indomethacin (100 μM) (Sigma) for 18 days. Adipocyte lipid and osteocyte calcium deposits were stained with Oil-red-O and Alizarin Red (Chemicon), respectively.", "The blended SF/CS scaffolds were fabricated as described previously, with slight modification.14,19–21 Briefly, CS was dissolved in 2% acetic acid and clarified by centrifugation. The final concentration of CS used was 2%. Silk cocoons were cut into pieces, degummed (to remove sericin) with a boiling in 0.01 M Na2CO3 solution for 30 minutes, repeated 3 times, and the fibers were washed with deionized water and then kept at 37°C overnight for drying. The degummed SF was dissolved in calcium nitrate tetrahydrate-methanol solution (molar ratio 1:4:2 calcium/water/methyl alcohol at 80°C), dialyzed against distilled water, and lyophilized to obtain a solid SF fluff. The SF fluff was further dissolved in distilled water at a 2% (w/v) concentration over a 3-hour period with continuous stirring to obtained aqueous SF solution. The solution was cooled and stored at 4°C until use. A volume of the same concentration of SF was added to prepare a 50:50 (v/v) SF/CS blend and was allowed to mix for 30 minutes (Fig. 1).\nFlow diagram: the preparation of SF/CS and ADSCs seeded SF/CS. 
Silk fibroin and chitosan blend (SF/CS) film were cut into 1.5-centimeter-diameter circles before implantation. SF/CS film appears light yellow, transparent and slightly uneven surface. Fluorescence microscope showing immunofluorescence staining of CM-Dil labeled ADSCs (red) in SF/CS film (100×). The nuclei (blue) were then counterstained with 4',6-Diamidino-2-Phenylindole, Dihydrochloride (DAPI).", "Adipose-derived stem cells (passage 3) were labeled with CM-Dil, a lipophilic dye that stains the membrane of viable cells. For cell labeling, digested ADSCs were washed with PBS and incubated with CM-Dil (Life Technologies) in a 37°C incubator for 5 minutes, and then for an additional 15 minutes at 4°C. The cells were washed with PBS twice. The cells were then seeded onto a SF/CS membrane at a density of 1 × 106 ADSCs per cm2. The cell-seeded SF/CS membrane incubated for 48 hours under standard culture conditions were used for transplantation in animal experiments. Once on the operative field, grafts were transferred to a sterile six-well plate and washed gently in two 500-μL aliquots of PBS to remove any nonadherent cells or medium. The cell-seeded SF/CS membrane cultured for 48 hours was observed in a fluorescence microscope (Fig. 1).", "Experimental animals: 36 male Sprague-Dawley rats (250–300 g) were obtained from Beijing University of Chinese Medicine (Beijing, China). All studies and procedures involving animals were conducted in compliance with guidelines established by the Institute Of Laboratory Animal Science (Jinan University, Guangzhou, China). Rats were kept for 3 weeks on balanced ration with water ad libitum for acclimatization. Standard animal chow and clean water were constantly supplied to the animals.\nAll procedures complied with the standards stated in the Guide for the Care and Use of Laboratory Animals (Institute of Laboratory Animal Resources, National Academy of Sciences, Bethesda, MD, 1996). Eight-week-old Sprague-Dawley rat received streptozocin 40 mg/kg (Sigma) in 0.05 mol/L citrate buffer, pH 4.5, daily for 5 days. Only mice showing consistent polyuria, polydipsia, polyphagia, weight loss, and elevated blood glucose (>16.7 mmol/L or 250 mg/dL) were included in the study.22\nThese diabetic rats (n = 36) were divided into 3 groups with 8 rats in each group randomly: group A, diabetic control with no graft; group B, diabetic treated with SF/CS film alone; and group C, diabetic treated with ADSC-SF/CS. Rats were allowed to manifest hyperglycemia for 8 weeks before surgical wounding. All rats (n = 36) were anesthetized with an intraperitoneal injection of sodium pentobarbital (ServiceBio, Inc., Wuhan, China). Their back hair was shaved, carefully. Application field was outlined with marking pen just before removing skin. A 1.5-cm diameter circular impression was made on the middle of dorsum of the mouse. All wounds were cleaned with iodophor and sterile normal saline. Each group received the appropriate graft as described previously, then were covered with Vaseline oil gauze. Finally, 3M transparent dressing was used to tightly fix the graft and was covered with Vaseline oil gauze.\nAfter completion of wound healing test, rats were sacrificed. Samples of the previously injured tissue were removed at 3, 7, 14, and 21 days. The incision area and adjacent normal skin were excised for each sacrificed animal. 
Half of every sample fixed in 10% formalin for histological analysis, and the other half was frozen in liquid nitrogen for enzyme-linked immunosorbent assay (ELISA) test.", "Wound closure status was evaluated using a digital camera on days 3, 7, 14, and 21 after treatment. The area of the wound left unhealed was measured using ImageJ software. The wound healing rate was calculated as follows: wound healing rate = (original wound area − wound area on different days postsurgery)/original wound area.", "The sample of skin containing dermis and hypodermis was isolated, carefully trimmed with cutter, and fixed in 10% neutral formalin solution. After paraffin embedding, 3- to 4-μm sections were taken and stained with hematoxylin and eosin for study of tissue appearance. Three indices were measured, the length epithelial tongue, the epithelial gap, and the area of granulation tissue as shown in Figure 2I. All slides were observed under light microscopic.\nHistological analysis of skin after hematoxylin and eosin staining. (I) A schematic transversal section is depicted for proper identification of the respective regions observed. (II) Images of granulation tissue were obtained from the healing. The arrows point to the blood vessels (allows) that grow perpendicular to the wound in days 7 and 21 after treatment. Measure and analysis of length epithelial tongue (III), epithelial gap (IV) and area of granulation (V). All data are mean ±SD (n = 3). *P < 0.05 ; **P < 0.01.", "Paraffin-embedded sections first underwent standard deparaffinization and rehydration procedures, and they were then probed with fluorescein isothiocyanate-anti-CD31 antibody (green), anti–α-smooth muscle actin (α-SMA) antibody (green), anti–cytokeratin 10 antibody (green) (Cell Signaling Technology, Danvers, MA) for overnight at 4°C. Next, sections were incubated with horseradish peroxidase–conjugated secondary antibodies (Santa Cruz Biotechnology Inc., Santa Cruz, CA). The nuclei were then counterstained with 4′,6-diamidino-2-phenylindole, dihydrochloride.", "Weighed, minced samples were placed in lysis buffer containing protease inhibitors for 24 hours at 4°C. Tissues were then homogenized, centrifuged to remove tissue residue, and the amount of epidermal growth factor (EGF), transforming growth factor (TGF)-β1, vascular endothelial growth factor (VEGF) in the lysis buffer was measured in diluted aliquots with standard ELISA assays.", "All data were expressed as means ± SD and were analyzed by 1-way ANOVA with Fisher adjustment (SPSS13.0 for windows). Difference between groups was analyzed by Tukey test, P less than 0.05 was considered statistically significant in all analyses.", "Adipose-derived stem cells were initially expanded for 3 passages and later characterized by cell differentiation and immunophenotyping assays. The cultures were observed by using an inverted light microscope. Attachment of spindle-shaped cells to tissue culture plastic flask was observed after 1 day of culture of ADSCs. Primary cultures reached 70% to 80% confluence in approximately 5 to 6 days. During the passaging, cell growth tended to accelerate and morphology of cells changed gradually. Cells become more flat-shape with increase in passage number. After 3 passages, cultured cells displayed typical fibroblastoid morphology (Fig. 3I) and under appropriate stimuli, exhibited potential for adipocyte and osteocyte differentiation demonstrated through Oil Red (Fig. 3II), Alizarin red staining (Fig. 3III). 
Cells also expressed characteristic stem cell markers, CD29 and CD90, and were negative for CD45 (Fig. 3IV–VI).\nCulture, Differentiation and Immunophenotyping of ADSCs. (I) Representative image of ADSC displaying cell phenotype under bright-field microscopy (100×). (II-III) Stem cell pluripotency was evaluated by culture adipogenesis and osteogenesis under differentiation stimuli and posterior staining with (II) Oil-red-O and (III) Alizarin Red, respectively (100×). (IV-VI) Immunophenotypic characterization of ADSCs: (IV) CD29 (+), (V) CD45 (-) and (VI) CD90 (-).", "Histological examination showed that there was no obvious difference among the 3 groups but significantly higher epithelialization rate, increased granulation tissue formulation, and capillary formation when compared with groups A and B in days 7 and 21(Fig. 2II). There were more blood vessels (allows), growing perpendicular to the wound, and less inflammatory cell in the wound granulation tissue than the other groups by day 7 (neovascularization, A > B > C; inflammatory cell, A < B < C). In day 21 after postwound healing, the images showed that the tissue of the wound which was grafted in the ADSCs-SF/CS almost regenerate close to the normal tissue. There still appear some neovascularization, inflammatory cell, and collagen that have not disappeared yet in the 2 other groups, especially control group.", "Engrafted CM-Dil–labeled cells were identified throughout the various substrata of the dermis and cutaneous appendages at 7, 14, 21 days postoperatively (Fig. 4). Engrafted ADSCs were identified, on the basis of red CM-Dil signal, at 14 days, engrafted cells were seen incorporated into winding linear structures consistent with microvascular elements and coexpressing positive signals for red CM-Dil and the endothelial marker CD31 (Fig. 4I). Some engrafted ADSCs were noted as recapitulating elements of linear vascular structures, costaining for the vascular smooth muscle marker α-SMA (Fig. 4II) and CM-Dil. However, red fluorescence-labeled cells were not observed to be incorporated into regenerated epidermalepithelium. At 21 days, in addition to these findings described, engrafted Dil-positive cells were observed to be incorporated as components of the regenerated epidermal epithelium on the basis of red fluorescence-labeled cells costaining for the cytokeratin marker of epidermal epithelium cytokeratin 10 (Fig. 4III).\nExpression of epithelial cell markers CD31, α-SMA, K10 in wound skin. (I) Photomicrographs showing Endothelial cells immunostaining for CD31 (green), engrafted ADSCs (red) in the wound site of diabetic rats, nuclear counterstained with DAPI (blue) at 200× magnification. (II) Images showing vascular smooth muscle cells immunostaining for α-SMA (green), engrafted ADSCs (red) and DAPI (blue) in the wound site of diabetic rats. (III) Images showing epidermal epithelium immunostaining for K10 (green), engrafted ADSCs (red) and DAPI (blue) in the wound site of diabetic rats. Scale bar = 50 μm", "Mean EGF, TGF-β, and VEGF levels were significantly higher (P < 0.01) in ADSCs-SF/CS–treated group compared with SF/CS treated group and controls group at 14 days postoperatively. In addition, mean VEGF levels were higher (P < 0.05) in SF/CS-treated group compared with the control group (Fig. 5).\nELISA analysis of wound tissue for concentration of EGF (I), TGF-β (II), and VEGF (III). 
Comparison of the concentration of Three Growth Factors between ADSC-SF/CS-treated (C), SF/CS-treated (B) and control groups (A) at 14 days post-wound. **P < 0.01, comparison between ADSC-SF/CS-treated and control groups; *P < 0.05, comparison between ADSC-SF/CS-treated and SF/CS-treated groups." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "MATERIALS AND METHODS", "Solation and Culture of ADSCs", "Differentiation and Immunophenotyping of ADSCs", "Preparation of SF/CS", "Preparation of ADSC-SF/CS Combination", "Experimental Diabetes and Excisional Wounding", "Calculation of Wound Healing Rate", "Histopathological Assay", "Immunofluorescent Histology", "ELISA", "Statistical Analysis", "RESULTS", "Characterization of ADSCs", "Histopathological Assay", "Immunofluorescent Histology", "ELISA", "DISCUSSION", "CONCLUSIONS" ]
[ " Solation and Culture of ADSCs Cytological and histological sections were performed in Regenerative Medicine Laboratory of Jinan University for expert assistance in preparing. The rats (Sprague-Dawley) (250–300 g) were anesthetized with 3% pentobarbital sodium (30 mg/kg) and then underwent surgery consisting of a lower abdominal midline incision and bilateral resection of the inguinal fat pads for culture ADSCs. A specimen of inguinal fat was harvested and placed in ice-cold phosphate-buffered saline (PBS). The adipose tissue was rinsed with PBS, minced into small pieces, and then incubated in a solution containing 0.1% collagenase type I (Sigma, St. Louis, MO) for 30 minutes at 37°C with shaking (100 rounds every minute). The top lipid layer was removed, and the remaining liquid portion was centrifuged at 1000g for 10 minutes at room temperature to obtain the stromal cell fraction. The pellet was filtered with a 100-μm mesh filter to remove the debris. The filtrate was treated with 0.3% NaCl for 10 minutes to lyse red blood cells and centrifuged at 300g for 10 minutes. The supernatant was discarded, and the remaining cells were suspended in Dulbecco modified Eagle’s medium (DMEM) supplemented with 10% fetal bovine serum (FBS) (Hyclone, Logan, Utah), and plated in a 25-cm2 bottle and cultured at 37°C in 5% CO2. The primary cells were cultured for 6 to 7 days until they reached confluence and were defined as “passage 0.” Adipose-derived stem cells were cultured and expanded in control medium and used for the experiments at passages 3 to 5.\nCytological and histological sections were performed in Regenerative Medicine Laboratory of Jinan University for expert assistance in preparing. The rats (Sprague-Dawley) (250–300 g) were anesthetized with 3% pentobarbital sodium (30 mg/kg) and then underwent surgery consisting of a lower abdominal midline incision and bilateral resection of the inguinal fat pads for culture ADSCs. A specimen of inguinal fat was harvested and placed in ice-cold phosphate-buffered saline (PBS). The adipose tissue was rinsed with PBS, minced into small pieces, and then incubated in a solution containing 0.1% collagenase type I (Sigma, St. Louis, MO) for 30 minutes at 37°C with shaking (100 rounds every minute). The top lipid layer was removed, and the remaining liquid portion was centrifuged at 1000g for 10 minutes at room temperature to obtain the stromal cell fraction. The pellet was filtered with a 100-μm mesh filter to remove the debris. The filtrate was treated with 0.3% NaCl for 10 minutes to lyse red blood cells and centrifuged at 300g for 10 minutes. The supernatant was discarded, and the remaining cells were suspended in Dulbecco modified Eagle’s medium (DMEM) supplemented with 10% fetal bovine serum (FBS) (Hyclone, Logan, Utah), and plated in a 25-cm2 bottle and cultured at 37°C in 5% CO2. The primary cells were cultured for 6 to 7 days until they reached confluence and were defined as “passage 0.” Adipose-derived stem cells were cultured and expanded in control medium and used for the experiments at passages 3 to 5.\n Differentiation and Immunophenotyping of ADSCs Stem cell profile was evaluated through membrane receptor phenotyping and differentiation assays. Stem cell markers were determined through cytometric analysis for CD29, CD45, and CD90 (BD Biosciences, San Jose, CA), FACSCalibur flow cytometer (BD Biosciences). Stem cell pluripotency was evaluated by culture osteogenesis and adipogenesis under specific differentiation medium stimuli. 
Osteogenesis was induced with low-glucose DMEM supplemented with 10% FBS, dexamethasone (0.1 μM), ascorbic acid (0.2 mM), and β glycerol phosphate (10 mM) (Sigma) for 24 days. Adipogenesis was induced with high-glucose DMEM medium supplemented with 10% FBS, dexamethasone (1 μM), isobutylmethylxanthine (0.5 mM), insulin (10 μg/mL), and indomethacin (100 μM) (Sigma) for 18 days. Adipocyte lipid and osteocyte calcium deposits were stained with Oil-red-O and Alizarin Red (Chemicon), respectively.\nStem cell profile was evaluated through membrane receptor phenotyping and differentiation assays. Stem cell markers were determined through cytometric analysis for CD29, CD45, and CD90 (BD Biosciences, San Jose, CA), FACSCalibur flow cytometer (BD Biosciences). Stem cell pluripotency was evaluated by culture osteogenesis and adipogenesis under specific differentiation medium stimuli. Osteogenesis was induced with low-glucose DMEM supplemented with 10% FBS, dexamethasone (0.1 μM), ascorbic acid (0.2 mM), and β glycerol phosphate (10 mM) (Sigma) for 24 days. Adipogenesis was induced with high-glucose DMEM medium supplemented with 10% FBS, dexamethasone (1 μM), isobutylmethylxanthine (0.5 mM), insulin (10 μg/mL), and indomethacin (100 μM) (Sigma) for 18 days. Adipocyte lipid and osteocyte calcium deposits were stained with Oil-red-O and Alizarin Red (Chemicon), respectively.\n Preparation of SF/CS The blended SF/CS scaffolds were fabricated as described previously, with slight modification.14,19–21 Briefly, CS was dissolved in 2% acetic acid and clarified by centrifugation. The final concentration of CS used was 2%. Silk cocoons were cut into pieces, degummed (to remove sericin) with a boiling in 0.01 M Na2CO3 solution for 30 minutes, repeated 3 times, and the fibers were washed with deionized water and then kept at 37°C overnight for drying. The degummed SF was dissolved in calcium nitrate tetrahydrate-methanol solution (molar ratio 1:4:2 calcium/water/methyl alcohol at 80°C), dialyzed against distilled water, and lyophilized to obtain a solid SF fluff. The SF fluff was further dissolved in distilled water at a 2% (w/v) concentration over a 3-hour period with continuous stirring to obtained aqueous SF solution. The solution was cooled and stored at 4°C until use. A volume of the same concentration of SF was added to prepare a 50:50 (v/v) SF/CS blend and was allowed to mix for 30 minutes (Fig. 1).\nFlow diagram: the preparation of SF/CS and ADSCs seeded SF/CS. Silk fibroin and chitosan blend (SF/CS) film were cut into 1.5-centimeter-diameter circles before implantation. SF/CS film appears light yellow, transparent and slightly uneven surface. Fluorescence microscope showing immunofluorescence staining of CM-Dil labeled ADSCs (red) in SF/CS film (100×). The nuclei (blue) were then counterstained with 4',6-Diamidino-2-Phenylindole, Dihydrochloride (DAPI).\nThe blended SF/CS scaffolds were fabricated as described previously, with slight modification.14,19–21 Briefly, CS was dissolved in 2% acetic acid and clarified by centrifugation. The final concentration of CS used was 2%. Silk cocoons were cut into pieces, degummed (to remove sericin) with a boiling in 0.01 M Na2CO3 solution for 30 minutes, repeated 3 times, and the fibers were washed with deionized water and then kept at 37°C overnight for drying. 
\n Preparation of SF/CS The blended SF/CS scaffolds were fabricated as described previously, with slight modification.14,19–21 Briefly, CS was dissolved in 2% acetic acid and clarified by centrifugation; the final concentration of CS used was 2%. Silk cocoons were cut into pieces and degummed (to remove sericin) by boiling in 0.01 M Na2CO3 solution for 30 minutes, repeated 3 times; the fibers were washed with deionized water and then dried at 37°C overnight. The degummed SF was dissolved in calcium nitrate tetrahydrate-methanol solution (molar ratio 1:4:2 calcium/water/methanol, at 80°C), dialyzed against distilled water, and lyophilized to obtain a solid SF fluff. The SF fluff was further dissolved in distilled water at a 2% (w/v) concentration over a 3-hour period with continuous stirring to obtain an aqueous SF solution. The solution was cooled and stored at 4°C until use. An equal volume of SF solution at the same concentration was added to the CS solution to prepare a 50:50 (v/v) SF/CS blend, which was mixed for 30 minutes (Fig. 1).\nFlow diagram: the preparation of SF/CS and ADSC-seeded SF/CS. Silk fibroin and chitosan blend (SF/CS) films were cut into 1.5-cm-diameter circles before implantation. The SF/CS film appears light yellow and transparent, with a slightly uneven surface. Fluorescence micrograph showing CM-Dil–labeled ADSCs (red) in the SF/CS film (100×). Nuclei (blue) were counterstained with 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI).\n Preparation of ADSC-SF/CS Combination Adipose-derived stem cells (passage 3) were labeled with CM-Dil, a lipophilic dye that stains the membrane of viable cells. For cell labeling, digested ADSCs were washed with PBS and incubated with CM-Dil (Life Technologies) in a 37°C incubator for 5 minutes, and then for an additional 15 minutes at 4°C. The cells were washed with PBS twice and then seeded onto an SF/CS membrane at a density of 1 × 10⁶ ADSCs per cm². Cell-seeded SF/CS membranes incubated for 48 hours under standard culture conditions were used for transplantation in the animal experiments. Once on the operative field, grafts were transferred to a sterile six-well plate and washed gently in two 500-μL aliquots of PBS to remove any nonadherent cells or medium. The cell-seeded SF/CS membrane cultured for 48 hours was examined under a fluorescence microscope (Fig. 1).
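For context, the seeding density above fixes how many labeled cells each film requires. A minimal sketch of the arithmetic in Python, using the 1.5-cm film diameter and 1 × 10⁶ cells/cm² reported in this study (the helper name is ours, not the authors'):

```python
import math

def cells_needed(film_diameter_cm: float, density_per_cm2: float) -> float:
    """Estimate the number of ADSCs to seed on a circular SF/CS film."""
    area_cm2 = math.pi * (film_diameter_cm / 2) ** 2  # area of the film disc
    return area_cm2 * density_per_cm2

# 1.5-cm-diameter film seeded at 1e6 cells/cm^2 (values from the study)
print(f"{cells_needed(1.5, 1e6):.2e} cells")  # ~1.77e+06 cells per film
```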
\n Experimental Diabetes and Excisional Wounding Experimental animals: 36 male Sprague-Dawley rats (250–300 g) were obtained from Beijing University of Chinese Medicine (Beijing, China). All studies and procedures involving animals were conducted in compliance with guidelines established by the Institute of Laboratory Animal Science (Jinan University, Guangzhou, China). Rats were kept for 3 weeks on a balanced ration with water ad libitum for acclimatization; standard animal chow and clean water were constantly supplied.\nAll procedures complied with the standards stated in the Guide for the Care and Use of Laboratory Animals (Institute of Laboratory Animal Resources, National Academy of Sciences, Bethesda, MD, 1996). Eight-week-old Sprague-Dawley rats received streptozocin (40 mg/kg; Sigma) in 0.05 mol/L citrate buffer, pH 4.5, daily for 5 days. Only rats showing consistent polyuria, polydipsia, polyphagia, weight loss, and elevated blood glucose (>16.7 mmol/L or 250 mg/dL) were included in the study.22\nThese diabetic rats (n = 36) were randomly divided into 3 groups of 8 rats each: group A, diabetic control with no graft; group B, diabetic treated with SF/CS film alone; and group C, diabetic treated with ADSC-SF/CS. Rats were allowed to manifest hyperglycemia for 8 weeks before surgical wounding. All rats (n = 36) were anesthetized with an intraperitoneal injection of sodium pentobarbital (ServiceBio, Inc., Wuhan, China). Their back hair was carefully shaved, and the application field was outlined with a marking pen just before the skin was removed. A 1.5-cm-diameter circular wound was made on the middle of the dorsum of each rat. All wounds were cleaned with iodophor and sterile normal saline. Each group received the appropriate graft as described previously; the wounds were then covered with Vaseline oil gauze, and a 3M transparent dressing was used to fix the graft tightly in place.\nRats were sacrificed and samples of the injured tissue were collected at 3, 7, 14, and 21 days. The wound area and adjacent normal skin were excised from each sacrificed animal. Half of each sample was fixed in 10% formalin for histological analysis, and the other half was frozen in liquid nitrogen for enzyme-linked immunosorbent assay (ELISA) testing.\n Calculation of Wound Healing Rate Wound closure status was evaluated using a digital camera on days 3, 7, 14, and 21 after treatment. The unhealed wound area was measured using ImageJ software. The wound healing rate was calculated as follows: wound healing rate = (original wound area − wound area on a given day postsurgery)/original wound area.
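The healing-rate formula above is a simple ratio of areas. A minimal sketch in Python (the function name and example areas are ours for illustration; the areas themselves would come from ImageJ measurements converted to physical units):

```python
def wound_healing_rate(original_area: float, current_area: float) -> float:
    """Wound healing rate = (original area - current area) / original area."""
    if original_area <= 0:
        raise ValueError("original_area must be positive")
    return (original_area - current_area) / original_area

# Example: a 1.77-cm^2 wound that has shrunk to 0.40 cm^2 by day 14
print(f"{wound_healing_rate(1.77, 0.40):.1%}")  # 77.4%
```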
\n Histopathological Assay Skin samples containing dermis and hypodermis were isolated, carefully trimmed with a cutter, and fixed in 10% neutral formalin solution. After paraffin embedding, 3- to 4-μm sections were cut and stained with hematoxylin and eosin for study of tissue appearance. Three indices were measured: the length of the epithelial tongue, the epithelial gap, and the area of granulation tissue, as shown in Figure 2I. All slides were examined under a light microscope.\nHistological analysis of skin after hematoxylin and eosin staining. (I) A schematic transversal section is depicted for proper identification of the respective regions observed. (II) Images of granulation tissue obtained during healing. The arrows point to blood vessels growing perpendicular to the wound at days 7 and 21 after treatment. Measurement and analysis of epithelial tongue length (III), epithelial gap (IV), and granulation area (V). All data are mean ± SD (n = 3). *P < 0.05; **P < 0.01.\n Immunofluorescent Histology Paraffin-embedded sections first underwent standard deparaffinization and rehydration procedures and were then probed with fluorescein isothiocyanate–conjugated anti-CD31 antibody (green), anti–α-smooth muscle actin (α-SMA) antibody (green), or anti–cytokeratin 10 antibody (green) (Cell Signaling Technology, Danvers, MA) overnight at 4°C. Next, sections were incubated with horseradish peroxidase–conjugated secondary antibodies (Santa Cruz Biotechnology Inc., Santa Cruz, CA). The nuclei were then counterstained with 4′,6-diamidino-2-phenylindole, dihydrochloride.\n ELISA Weighed, minced samples were placed in lysis buffer containing protease inhibitors for 24 hours at 4°C. Tissues were then homogenized and centrifuged to remove tissue residue, and the amounts of epidermal growth factor (EGF), transforming growth factor (TGF)-β1, and vascular endothelial growth factor (VEGF) in the lysis buffer were measured in diluted aliquots with standard ELISA assays.\n Statistical Analysis All data are expressed as means ± SD and were analyzed by 1-way ANOVA with Fisher adjustment (SPSS 13.0 for Windows). Differences between groups were analyzed by the Tukey test; P < 0.05 was considered statistically significant in all analyses.
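As a sketch of the statistical workflow described above (one-way ANOVA followed by Tukey's pairwise test), here is a minimal Python equivalent using SciPy and statsmodels; the group values are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative day-14 healing rates for the 3 groups (not the study's data)
group_a = [0.55, 0.58, 0.52]  # control
group_b = [0.65, 0.68, 0.63]  # SF/CS
group_c = [0.82, 0.85, 0.80]  # ADSC-SF/CS

# One-way ANOVA across the three groups
f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Tukey HSD for pairwise group comparisons at alpha = 0.05
values = np.concatenate([group_a, group_b, group_c])
labels = ["A"] * 3 + ["B"] * 3 + ["C"] * 3
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```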
", "Cytological and histological sections were prepared with expert assistance in the Regenerative Medicine Laboratory of Jinan University. Sprague-Dawley rats (250–300 g) were anesthetized with 3% pentobarbital sodium (30 mg/kg) and then underwent surgery consisting of a lower abdominal midline incision and bilateral resection of the inguinal fat pads for ADSC culture. A specimen of inguinal fat was harvested and placed in ice-cold phosphate-buffered saline (PBS). The adipose tissue was rinsed with PBS, minced into small pieces, and then incubated in a solution containing 0.1% collagenase type I (Sigma, St. Louis, MO) for 30 minutes at 37°C with shaking (100 rpm). The top lipid layer was removed, and the remaining liquid portion was centrifuged at 1000g for 10 minutes at room temperature to obtain the stromal cell fraction. The pellet was filtered through a 100-μm mesh filter to remove debris. The filtrate was treated with 0.3% NaCl for 10 minutes to lyse red blood cells and centrifuged at 300g for 10 minutes. The supernatant was discarded, and the remaining cells were suspended in Dulbecco modified Eagle’s medium (DMEM) supplemented with 10% fetal bovine serum (FBS) (Hyclone, Logan, Utah), plated in a 25-cm² flask, and cultured at 37°C in 5% CO2. The primary cells were cultured for 6 to 7 days until they reached confluence and were defined as “passage 0.” Adipose-derived stem cells were cultured and expanded in control medium and used for the experiments at passages 3 to 5.", "Stem cell profile was evaluated through membrane receptor phenotyping and differentiation assays. Stem cell markers CD29, CD45, and CD90 were determined by flow cytometric analysis (antibodies from BD Biosciences, San Jose, CA) on a FACSCalibur flow cytometer (BD Biosciences). Stem cell pluripotency was evaluated by inducing osteogenesis and adipogenesis with specific differentiation media. Osteogenesis was induced with low-glucose DMEM supplemented with 10% FBS, dexamethasone (0.1 μM), ascorbic acid (0.2 mM), and β-glycerophosphate (10 mM) (Sigma) for 24 days. Adipogenesis was induced with high-glucose DMEM supplemented with 10% FBS, dexamethasone (1 μM), isobutylmethylxanthine (0.5 mM), insulin (10 μg/mL), and indomethacin (100 μM) (Sigma) for 18 days. Adipocyte lipid and osteocyte calcium deposits were stained with Oil-red-O and Alizarin Red (Chemicon), respectively.", "The blended SF/CS scaffolds were fabricated as described previously, with slight modification.14,19–21 Briefly, CS was dissolved in 2% acetic acid and clarified by centrifugation; the final concentration of CS used was 2%. Silk cocoons were cut into pieces and degummed (to remove sericin) by boiling in 0.01 M Na2CO3 solution for 30 minutes, repeated 3 times; the fibers were washed with deionized water and then dried at 37°C overnight. The degummed SF was dissolved in calcium nitrate tetrahydrate-methanol solution (molar ratio 1:4:2 calcium/water/methanol, at 80°C), dialyzed against distilled water, and lyophilized to obtain a solid SF fluff. The SF fluff was further dissolved in distilled water at a 2% (w/v) concentration over a 3-hour period with continuous stirring to obtain an aqueous SF solution. The solution was cooled and stored at 4°C until use. An equal volume of SF solution at the same concentration was added to the CS solution to prepare a 50:50 (v/v) SF/CS blend, which was mixed for 30 minutes (Fig. 1).\nFlow diagram: the preparation of SF/CS and ADSC-seeded SF/CS. Silk fibroin and chitosan blend (SF/CS) films were cut into 1.5-cm-diameter circles before implantation. The SF/CS film appears light yellow and transparent, with a slightly uneven surface. Fluorescence micrograph showing CM-Dil–labeled ADSCs (red) in the SF/CS film (100×). Nuclei (blue) were counterstained with 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI).", "Adipose-derived stem cells (passage 3) were labeled with CM-Dil, a lipophilic dye that stains the membrane of viable cells. For cell labeling, digested ADSCs were washed with PBS and incubated with CM-Dil (Life Technologies) in a 37°C incubator for 5 minutes, and then for an additional 15 minutes at 4°C. The cells were washed with PBS twice and then seeded onto an SF/CS membrane at a density of 1 × 10⁶ ADSCs per cm². Cell-seeded SF/CS membranes incubated for 48 hours under standard culture conditions were used for transplantation in the animal experiments. Once on the operative field, grafts were transferred to a sterile six-well plate and washed gently in two 500-μL aliquots of PBS to remove any nonadherent cells or medium. The cell-seeded SF/CS membrane cultured for 48 hours was examined under a fluorescence microscope (Fig. 1)."
1).", "Experimental animals: 36 male Sprague-Dawley rats (250–300 g) were obtained from Beijing University of Chinese Medicine (Beijing, China). All studies and procedures involving animals were conducted in compliance with guidelines established by the Institute Of Laboratory Animal Science (Jinan University, Guangzhou, China). Rats were kept for 3 weeks on balanced ration with water ad libitum for acclimatization. Standard animal chow and clean water were constantly supplied to the animals.\nAll procedures complied with the standards stated in the Guide for the Care and Use of Laboratory Animals (Institute of Laboratory Animal Resources, National Academy of Sciences, Bethesda, MD, 1996). Eight-week-old Sprague-Dawley rat received streptozocin 40 mg/kg (Sigma) in 0.05 mol/L citrate buffer, pH 4.5, daily for 5 days. Only mice showing consistent polyuria, polydipsia, polyphagia, weight loss, and elevated blood glucose (>16.7 mmol/L or 250 mg/dL) were included in the study.22\nThese diabetic rats (n = 36) were divided into 3 groups with 8 rats in each group randomly: group A, diabetic control with no graft; group B, diabetic treated with SF/CS film alone; and group C, diabetic treated with ADSC-SF/CS. Rats were allowed to manifest hyperglycemia for 8 weeks before surgical wounding. All rats (n = 36) were anesthetized with an intraperitoneal injection of sodium pentobarbital (ServiceBio, Inc., Wuhan, China). Their back hair was shaved, carefully. Application field was outlined with marking pen just before removing skin. A 1.5-cm diameter circular impression was made on the middle of dorsum of the mouse. All wounds were cleaned with iodophor and sterile normal saline. Each group received the appropriate graft as described previously, then were covered with Vaseline oil gauze. Finally, 3M transparent dressing was used to tightly fix the graft and was covered with Vaseline oil gauze.\nAfter completion of wound healing test, rats were sacrificed. Samples of the previously injured tissue were removed at 3, 7, 14, and 21 days. The incision area and adjacent normal skin were excised for each sacrificed animal. Half of every sample fixed in 10% formalin for histological analysis, and the other half was frozen in liquid nitrogen for enzyme-linked immunosorbent assay (ELISA) test.", "Wound closure status was evaluated using a digital camera on days 3, 7, 14, and 21 after treatment. The area of the wound left unhealed was measured using ImageJ software. The wound healing rate was calculated as follows: wound healing rate = (original wound area − wound area on different days postsurgery)/original wound area.", "The sample of skin containing dermis and hypodermis was isolated, carefully trimmed with cutter, and fixed in 10% neutral formalin solution. After paraffin embedding, 3- to 4-μm sections were taken and stained with hematoxylin and eosin for study of tissue appearance. Three indices were measured, the length epithelial tongue, the epithelial gap, and the area of granulation tissue as shown in Figure 2I. All slides were observed under light microscopic.\nHistological analysis of skin after hematoxylin and eosin staining. (I) A schematic transversal section is depicted for proper identification of the respective regions observed. (II) Images of granulation tissue were obtained from the healing. The arrows point to the blood vessels (allows) that grow perpendicular to the wound in days 7 and 21 after treatment. 
, "Paraffin-embedded sections first underwent standard deparaffinization and rehydration procedures and were then probed with fluorescein isothiocyanate–conjugated anti-CD31 antibody (green), anti–α-smooth muscle actin (α-SMA) antibody (green), or anti–cytokeratin 10 antibody (green) (Cell Signaling Technology, Danvers, MA) overnight at 4°C. Next, sections were incubated with horseradish peroxidase–conjugated secondary antibodies (Santa Cruz Biotechnology Inc., Santa Cruz, CA). The nuclei were then counterstained with 4′,6-diamidino-2-phenylindole, dihydrochloride.", "Weighed, minced samples were placed in lysis buffer containing protease inhibitors for 24 hours at 4°C. Tissues were then homogenized and centrifuged to remove tissue residue, and the amounts of epidermal growth factor (EGF), transforming growth factor (TGF)-β1, and vascular endothelial growth factor (VEGF) in the lysis buffer were measured in diluted aliquots with standard ELISA assays.", "All data are expressed as means ± SD and were analyzed by 1-way ANOVA with Fisher adjustment (SPSS 13.0 for Windows). Differences between groups were analyzed by the Tukey test; P < 0.05 was considered statistically significant in all analyses.", " Characterization of ADSCs Adipose-derived stem cells were initially expanded for 3 passages and then characterized by cell differentiation and immunophenotyping assays. The cultures were observed using an inverted light microscope. Attachment of spindle-shaped cells to the tissue culture plastic flask was observed after 1 day of ADSC culture. Primary cultures reached 70% to 80% confluence in approximately 5 to 6 days. During passaging, cell growth tended to accelerate and cell morphology changed gradually, the cells becoming flatter as the passage number increased. After 3 passages, cultured cells displayed typical fibroblastoid morphology (Fig. 3I) and, under appropriate stimuli, exhibited potential for adipocyte and osteocyte differentiation, demonstrated by Oil-red-O (Fig. 3II) and Alizarin Red staining (Fig. 3III). Cells also expressed the characteristic stem cell markers CD29 and CD90 and were negative for CD45 (Fig. 3IV–VI).\nCulture, differentiation, and immunophenotyping of ADSCs. (I) Representative image of ADSCs displaying cell phenotype under bright-field microscopy (100×). (II–III) Stem cell pluripotency was evaluated by inducing adipogenesis and osteogenesis under differentiation stimuli, followed by staining with (II) Oil-red-O and (III) Alizarin Red, respectively (100×). (IV–VI) Immunophenotypic characterization of ADSCs: (IV) CD29 (+), (V) CD45 (−), and (VI) CD90 (+).
\n Histopathological Assay Histological examination showed no obvious difference among the 3 groups early after wounding; at days 7 and 21, however, the ADSC-SF/CS group showed a significantly higher epithelialization rate, increased granulation tissue formation, and greater capillary formation than groups A and B (Fig. 2II). By day 7, there were more blood vessels (arrows) growing perpendicular to the wound and fewer inflammatory cells in the wound granulation tissue of this group than in the other groups (neovascularization, C > B > A; inflammatory cells, C < B < A). At day 21 after wounding, the images showed that wound tissue grafted with ADSC-SF/CS had regenerated to a nearly normal state, whereas residual neovascularization, inflammatory cells, and collagen had not yet resolved in the other 2 groups, especially the control group.\n Immunofluorescent Histology Engrafted CM-Dil–labeled cells were identified throughout the various substrata of the dermis and cutaneous appendages at 7, 14, and 21 days postoperatively (Fig. 4). Engrafted ADSCs were identified on the basis of the red CM-Dil signal. At 14 days, engrafted cells were seen incorporated into winding linear structures consistent with microvascular elements, coexpressing the red CM-Dil signal and the endothelial marker CD31 (Fig. 4I). Some engrafted ADSCs recapitulated elements of linear vascular structures, costaining for CM-Dil and the vascular smooth muscle marker α-SMA (Fig. 4II). However, red fluorescence–labeled cells were not observed within the regenerated epidermal epithelium at this time point. At 21 days, in addition to the findings described above, engrafted Dil-positive cells were incorporated as components of the regenerated epidermal epithelium, as shown by costaining of red fluorescence–labeled cells for the epidermal epithelium marker cytokeratin 10 (Fig. 4III).\nExpression of the markers CD31, α-SMA, and K10 in wound skin. (I) Photomicrographs showing endothelial cells immunostained for CD31 (green) and engrafted ADSCs (red) in the wound site of diabetic rats, with nuclei counterstained with DAPI (blue), at 200× magnification. (II) Images showing vascular smooth muscle cells immunostained for α-SMA (green), engrafted ADSCs (red), and DAPI (blue) in the wound site of diabetic rats. (III) Images showing epidermal epithelium immunostained for K10 (green), engrafted ADSCs (red), and DAPI (blue) in the wound site of diabetic rats. Scale bar = 50 μm.
\n ELISA Mean EGF, TGF-β, and VEGF levels were significantly higher (P < 0.01) in the ADSC-SF/CS–treated group than in the SF/CS-treated and control groups at 14 days postoperatively. In addition, mean VEGF levels were higher (P < 0.05) in the SF/CS-treated group than in the control group (Fig. 5).\nELISA analysis of wound tissue for concentrations of EGF (I), TGF-β (II), and VEGF (III). Comparison of the concentrations of the three growth factors among the ADSC-SF/CS–treated (C), SF/CS-treated (B), and control (A) groups at 14 days post-wound. **P < 0.01, comparison between ADSC-SF/CS–treated and control groups; *P < 0.05, comparison between ADSC-SF/CS–treated and SF/CS-treated groups.
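Tissue growth-factor concentrations like those above are typically read off an ELISA standard curve. A minimal sketch of four-parameter logistic (4PL) fitting and back-interpolation with SciPy; the standard concentrations and absorbances here are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: absorbance as a function of concentration."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Invented VEGF standards (pg/mL) and their measured absorbances
std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_abs = np.array([0.12, 0.21, 0.38, 0.66, 1.05, 1.48])

# Fit the 4PL curve to the standards
params, _ = curve_fit(four_pl, std_conc, std_abs,
                      p0=[0.05, 1.0, 250.0, 2.0], maxfev=10000)

def conc_from_abs(y, a, b, c, d):
    """Invert the 4PL curve to recover concentration from absorbance."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Interpolate a diluted sample with absorbance 0.80
print(conc_from_abs(0.80, *params))  # concentration in pg/mL
```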
**P < 0.01, comparison between ADSC-SF/CS-treated and control groups; *P < 0.05, comparison between ADSC-SF/CS-treated and SF/CS-treated groups.", "Adipose-derived stem cells were initially expanded for 3 passages and later characterized by cell differentiation and immunophenotyping assays. The cultures were observed by using an inverted light microscope. Attachment of spindle-shaped cells to tissue culture plastic flask was observed after 1 day of culture of ADSCs. Primary cultures reached 70% to 80% confluence in approximately 5 to 6 days. During the passaging, cell growth tended to accelerate and morphology of cells changed gradually. Cells become more flat-shape with increase in passage number. After 3 passages, cultured cells displayed typical fibroblastoid morphology (Fig. 3I) and under appropriate stimuli, exhibited potential for adipocyte and osteocyte differentiation demonstrated through Oil Red (Fig. 3II), Alizarin red staining (Fig. 3III). Cells also expressed characteristic stem cell markers, CD29 and CD90, and were negative for CD45 (Fig. 3IV–VI).\nCulture, Differentiation and Immunophenotyping of ADSCs. (I) Representative image of ADSC displaying cell phenotype under bright-field microscopy (100×). (II-III) Stem cell pluripotency was evaluated by culture adipogenesis and osteogenesis under differentiation stimuli and posterior staining with (II) Oil-red-O and (III) Alizarin Red, respectively (100×). (IV-VI) Immunophenotypic characterization of ADSCs: (IV) CD29 (+), (V) CD45 (-) and (VI) CD90 (-).", "Histological examination showed that there was no obvious difference among the 3 groups but significantly higher epithelialization rate, increased granulation tissue formulation, and capillary formation when compared with groups A and B in days 7 and 21(Fig. 2II). There were more blood vessels (allows), growing perpendicular to the wound, and less inflammatory cell in the wound granulation tissue than the other groups by day 7 (neovascularization, A > B > C; inflammatory cell, A < B < C). In day 21 after postwound healing, the images showed that the tissue of the wound which was grafted in the ADSCs-SF/CS almost regenerate close to the normal tissue. There still appear some neovascularization, inflammatory cell, and collagen that have not disappeared yet in the 2 other groups, especially control group.", "Engrafted CM-Dil–labeled cells were identified throughout the various substrata of the dermis and cutaneous appendages at 7, 14, 21 days postoperatively (Fig. 4). Engrafted ADSCs were identified, on the basis of red CM-Dil signal, at 14 days, engrafted cells were seen incorporated into winding linear structures consistent with microvascular elements and coexpressing positive signals for red CM-Dil and the endothelial marker CD31 (Fig. 4I). Some engrafted ADSCs were noted as recapitulating elements of linear vascular structures, costaining for the vascular smooth muscle marker α-SMA (Fig. 4II) and CM-Dil. However, red fluorescence-labeled cells were not observed to be incorporated into regenerated epidermalepithelium. At 21 days, in addition to these findings described, engrafted Dil-positive cells were observed to be incorporated as components of the regenerated epidermal epithelium on the basis of red fluorescence-labeled cells costaining for the cytokeratin marker of epidermal epithelium cytokeratin 10 (Fig. 4III).\nExpression of epithelial cell markers CD31, α-SMA, K10 in wound skin. 
, "Mean EGF, TGF-β, and VEGF levels were significantly higher (P < 0.01) in the ADSC-SF/CS–treated group than in the SF/CS-treated and control groups at 14 days postoperatively. In addition, mean VEGF levels were higher (P < 0.05) in the SF/CS-treated group than in the control group (Fig. 5).\nELISA analysis of wound tissue for concentrations of EGF (I), TGF-β (II), and VEGF (III). Comparison of the concentrations of the three growth factors among the ADSC-SF/CS–treated (C), SF/CS-treated (B), and control (A) groups at 14 days post-wound. **P < 0.01, comparison between ADSC-SF/CS–treated and control groups; *P < 0.05, comparison between ADSC-SF/CS–treated and SF/CS-treated groups.", "Diabetes is associated with many hemorheological alterations. In diabetic patients, long-term complications are related to vascular alterations involving hemorheological and endothelium-dependent flow changes, which can lead to tissue ischemia.23 In this study, a comparison of the wounds of normal and diabetic rats immediately after surgery (Figs. 6I and II) showed that after 8 weeks of stable diabetes the skin of diabetic rats had visible changes, such as poor blood supply, lack of subcutaneous tissue, thinness, and poor elasticity.\nMacroscopic lesion appearance. A diabetic rat model of skin injury was used (I, III). A standardized wound (a 1.5-cm-diameter circle of full-thickness skin removed) was induced on the dorsal surface of diabetic rats (I) and normal rats (II) of similar weight and age. Compared with normal rats, diabetic rats had poor blood supply, lack of subcutaneous tissue, thin skin, and poor elasticity. (III) Lesion appearance at 3, 7, 14, and 21 days.\nRefractory diabetic wounds remain a difficult clinical problem. Many previous studies reported promoting wound healing by injecting ADSC suspensions into the wound bed.24,25 However, this method carries poorly characterized safety risks. Tissue engineering is an important therapeutic strategy for regenerative medicine now and in the future, and it is therefore well suited to addressing this problem.\nTissue engineering includes 3 elements: scaffolds, growth factors, and seed cells. Functional biomaterials research is focused on the development and improvement of scaffolds, which can be used to repair or regenerate an organ or tissue. Scaffolds are one of the crucial factors in tissue engineering, and several properties should be considered in developing a device as a skin substitute, including biocompatibility and safety, degradability, and mechanical behavior. Silk fibroin and CS are both natural polymer materials with good biological properties. The advantages of the 2 materials and their positive effects on wounds have been described in this article.
When the two are blended, they mutually modify each other's properties.26,27 The SF/CS scaffold has biological, structural, and mechanical properties that can be adjusted to meet specific clinical needs.12 Altman et al27 concluded that SF/CS is a naturally derived biocompatible scaffold that can be used as a substrate for stem cell delivery. Moreover, their study showed the potential use of SF/CS as a local carrier for autologous stem cells in reconstructive surgery applications.\nIn this study, ADSCs of the same strain were used as seed cells for their potential to secrete growth factors. ADSCs are easy to obtain and provide a much higher yield.27 In vitro, ADSCs have been shown to adhere to, proliferate on, and migrate into SF scaffolds without affecting scaffold properties. Here, we seeded CM-Dil–labeled cells onto an SF/CS membrane, cultured them for 48 hours, and observed them under a fluorescence microscope. Because the surface of SF/CS is uneven, the cells cannot all be displayed in a single focal plane and must be viewed across several planes; the apparent morphology and distribution of the cells are less uniform for the same reason.\nThis study successfully used a 50:50 SF/CS blend film for the seeding and delivery of ADSCs of the same strain in a diabetic rat dorsal cutaneous wound model. The resulting data demonstrate that the construct created by seeding ADSCs on SF/CS can be used as an effective delivery vehicle in vivo, with neovascularization and epithelialization being the main processes underlying this effect.\nAdipose-derived stem cells are a population of pluripotent mesenchymal cells that can differentiate into various lineages. They have the potential to promote angiogenesis, secrete growth factors, and differentiate into multiple lineages upon appropriate stimulation.3 Accordingly, in this study, ADSC-seeded SF/CS film accelerated diabetic wound healing by promoting reepithelialization, cell infiltration, and angiogenesis.\nThe effects of ADSCs on fibroblasts play a key role in skin biology, including wound healing, scarring, and photoaging. In the early phase of wound healing, fibroblasts migrate into the wound area and move across the fibrin-based provisional matrix. Because the provisional fibrin-based matrix is relatively devoid of fibroblasts, migration, proliferation, and ECM production are the key steps in the regeneration of a functional dermis. Fibroblasts produce collagen-based ECM that ultimately replaces the provisional fibrin-based matrix and helps reapproximate the wound edges through their contractile properties.28\nIn this study, contraction of the wound surface was assessed by measuring the length of epithelial migration (the epithelial tongue) and the epithelial gap. The histograms show that the epithelial tongue of ADSC-SF/CS–grafted wounds was significantly longer than that of wounds grafted with SF/CS alone. Analyzed together with the rate of wound contraction, this indicates that ADSC-stimulated fibroblast activity promotes wound contraction (Figs. 6III and 7).\nAnalysis of wound closure rates. The values were averaged and are reported as mean ± SD (n = 3).
*Significant differences between group B (SF/CS) and group A (control), P < 0.05; **significant differences between group C (ADSC-SF/CS) and group A (control) or group B (SF/CS), P < 0.01.\nOn the other hand, in line with previous reports, ADSCs promote tissue regeneration through the establishment and support of local microvascular networks.28,29 Based on the development of granulation tissue on hematoxylin and eosin staining in this study, at 7 days after transplantation, neovascularization of the granulation tissue in the wound area was markedly faster in the ADSC-SF/CS–grafted group than in the other 2 groups, and the vessels, oriented perpendicular to the wound, showed a more mature state. Although new blood vessels also formed in SF/CS-grafted wounds, they were fewer in number and relatively immature in morphology. Twenty-one days after wounding, ADSC-SF/CS–transplanted wounds had developed to a near-normal state and the granulation tissue had matured into new skin, whereas the SF/CS group had formed only a few new blood vessels, and in the control group neovascularization, inflammatory cells, and collagen were not reduced. Our findings demonstrate that ADSCs play an important role in promoting angiogenesis and granulation tissue maturation.\nReepithelialization and vascular support in the context of tissue regeneration are likely driven by 2 mechanisms: direct differentiation and paracrine effectors.30,31\nAdipose-derived stem cells have excellent regenerative capacity, and multiple studies have proven that they augment tissue repair. Adipose-derived stem cells are rich in soluble factors with paracrine action on fibroblasts and keratinocytes, such as EGF, fibroblast growth factor, insulin-like growth factor, TGF-β, VEGF, and platelet-derived growth factor, cytokines important for the repair of skin ulcers. Furthermore, with ADSCs involved in establishing the neovascular bed at the site of wound repair, an autocrine loop is also likely at work, with local ADSCs producing angiogenic factors that act on themselves or on neighboring ADSCs in the reestablishment of vascularization.6,29,31–33\nIn this study, ADSC-SF/CS wound treatment resulted in significantly increased amounts of EGF, TGF-β, and VEGF in the wounds. Vascular endothelial growth factor plays a key role in angiogenesis by stimulating endothelial cell proliferation, migration, and organization into tubules; it also increases circulating endothelial progenitor cells.30 Transforming growth factor-β is a strong stimulator of ECM deposition, and EGF promotes the repair and regeneration of damaged epidermis.34\nHigh levels of growth factors indicate a paracrine mechanism. Adipose-derived stem cells release wound-healing factors and can stimulate the recruitment, migration, and proliferation of endogenous cells in the wound environment. We have shown that ADSCs engrafted via an SF/CS carrier differentiate spontaneously into a vascular endothelial phenotype, as evidenced by the colocalization of red fluorescence–labeled ADSCs and CD31 staining. Further support is provided by the observation of red fluorescence Dil/α-SMA double-positive cells, indicating the contribution of a vascular smooth muscle phenotype to the reconstitution of local small-vessel networks. This finding is consistent with previous reports.31,35 In the present study, we show that ADSCs engraft and differentiate.
Immunohistochemistry suggests that the transplanted cells differentiate into vascular and epithelial phenotypes in their new microenvironment. Consequently, ADSCs can stimulate angiogenesis, epithelialization, and wound remodeling through paracrine secretion and direct differentiation during wound repair.\nThe functional improvement seen in our study, measured by accelerated rates of wound closure, likely reflects both paracrine and direct cellular mechanisms.", "In this study, we successfully applied ADSC-seeded SF/CS as a cytoprosthetic hybrid for reconstructive support in a diabetic cutaneous wound healing model, in which it supplied growth factors and promoted angiogenesis of the diabetic wound. As a carrier, the SF/CS film supports the delivery, engraftment, and secretory activity of stem cells, as well as their differentiation into vascular and epithelial components. However, some problems and details remain unresolved, such as the optimal number of cells to seed on the material. In conclusion, ADSC-seeded SF/CS is a promising cell-matrix hybrid and warrants further in vivo study of its potential for reconstructive surgical applications." ]
[ "methods", null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussions", "conclusions" ]
[ "silk fibroin", "chitosan", "stem cell", "wound", "diabetes" ]
MATERIALS AND METHODS: Isolation and Culture of ADSCs Cytological and histological sections were prepared with expert assistance in the Regenerative Medicine Laboratory of Jinan University. Sprague-Dawley rats (250–300 g) were anesthetized with 3% pentobarbital sodium (30 mg/kg) and then underwent surgery consisting of a lower abdominal midline incision and bilateral resection of the inguinal fat pads for ADSC culture. A specimen of inguinal fat was harvested and placed in ice-cold phosphate-buffered saline (PBS). The adipose tissue was rinsed with PBS, minced into small pieces, and then incubated in a solution containing 0.1% collagenase type I (Sigma, St. Louis, MO) for 30 minutes at 37°C with shaking (100 rpm). The top lipid layer was removed, and the remaining liquid portion was centrifuged at 1000g for 10 minutes at room temperature to obtain the stromal cell fraction. The pellet was filtered through a 100-μm mesh filter to remove debris. The filtrate was treated with 0.3% NaCl for 10 minutes to lyse red blood cells and centrifuged at 300g for 10 minutes. The supernatant was discarded, and the remaining cells were suspended in Dulbecco modified Eagle’s medium (DMEM) supplemented with 10% fetal bovine serum (FBS) (Hyclone, Logan, Utah), plated in a 25-cm² flask, and cultured at 37°C in 5% CO2. The primary cells were cultured for 6 to 7 days until they reached confluence and were defined as “passage 0.” Adipose-derived stem cells were cultured and expanded in control medium and used for the experiments at passages 3 to 5. Differentiation and Immunophenotyping of ADSCs Stem cell profile was evaluated through membrane receptor phenotyping and differentiation assays. Stem cell markers CD29, CD45, and CD90 were determined by flow cytometric analysis (antibodies from BD Biosciences, San Jose, CA) on a FACSCalibur flow cytometer (BD Biosciences). Stem cell pluripotency was evaluated by inducing osteogenesis and adipogenesis with specific differentiation media. Osteogenesis was induced with low-glucose DMEM supplemented with 10% FBS, dexamethasone (0.1 μM), ascorbic acid (0.2 mM), and β-glycerophosphate (10 mM) (Sigma) for 24 days. Adipogenesis was induced with high-glucose DMEM supplemented with 10% FBS, dexamethasone (1 μM), isobutylmethylxanthine (0.5 mM), insulin (10 μg/mL), and indomethacin (100 μM) (Sigma) for 18 days. Adipocyte lipid and osteocyte calcium deposits were stained with Oil-red-O and Alizarin Red (Chemicon), respectively. Preparation of SF/CS The blended SF/CS scaffolds were fabricated as described previously, with slight modification.14,19–21 Briefly, CS was dissolved in 2% acetic acid and clarified by centrifugation; the final concentration of CS used was 2%. Silk cocoons were cut into pieces and degummed (to remove sericin) by boiling in 0.01 M Na2CO3 solution for 30 minutes, repeated 3 times; the fibers were washed with deionized water and then dried at 37°C overnight. The degummed SF was dissolved in calcium nitrate tetrahydrate-methanol solution (molar ratio 1:4:2 calcium/water/methanol, at 80°C), dialyzed against distilled water, and lyophilized to obtain a solid SF fluff. The SF fluff was further dissolved in distilled water at a 2% (w/v) concentration over a 3-hour period with continuous stirring to obtain an aqueous SF solution. The solution was cooled and stored at 4°C until use. An equal volume of SF solution at the same concentration was added to the CS solution to prepare a 50:50 (v/v) SF/CS blend, which was mixed for 30 minutes (Fig. 1). Flow diagram: the preparation of SF/CS and ADSC-seeded SF/CS. Silk fibroin and chitosan blend (SF/CS) films were cut into 1.5-cm-diameter circles before implantation. The SF/CS film appears light yellow and transparent, with a slightly uneven surface. Fluorescence micrograph showing CM-Dil–labeled ADSCs (red) in the SF/CS film (100×). Nuclei (blue) were counterstained with 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI). Preparation of ADSC-SF/CS Combination Adipose-derived stem cells (passage 3) were labeled with CM-Dil, a lipophilic dye that stains the membrane of viable cells. For cell labeling, digested ADSCs were washed with PBS and incubated with CM-Dil (Life Technologies) in a 37°C incubator for 5 minutes, and then for an additional 15 minutes at 4°C. The cells were washed with PBS twice and then seeded onto an SF/CS membrane at a density of 1 × 10⁶ ADSCs per cm². Cell-seeded SF/CS membranes incubated for 48 hours under standard culture conditions were used for transplantation in the animal experiments. Once on the operative field, grafts were transferred to a sterile six-well plate and washed gently in two 500-μL aliquots of PBS to remove any nonadherent cells or medium. The cell-seeded SF/CS membrane cultured for 48 hours was examined under a fluorescence microscope (Fig. 1). Experimental Diabetes and Excisional Wounding Experimental animals: 36 male Sprague-Dawley rats (250–300 g) were obtained from Beijing University of Chinese Medicine (Beijing, China). All studies and procedures involving animals were conducted in compliance with guidelines established by the Institute of Laboratory Animal Science (Jinan University, Guangzhou, China). Rats were kept for 3 weeks on a balanced ration with water ad libitum for acclimatization; standard animal chow and clean water were constantly supplied. All procedures complied with the standards stated in the Guide for the Care and Use of Laboratory Animals (Institute of Laboratory Animal Resources, National Academy of Sciences, Bethesda, MD, 1996). Eight-week-old Sprague-Dawley rats received streptozocin (40 mg/kg; Sigma) in 0.05 mol/L citrate buffer, pH 4.5, daily for 5 days. Only rats showing consistent polyuria, polydipsia, polyphagia, weight loss, and elevated blood glucose (>16.7 mmol/L or 250 mg/dL) were included in the study.22 These diabetic rats (n = 36) were randomly divided into 3 groups of 8 rats each: group A, diabetic control with no graft; group B, diabetic treated with SF/CS film alone; and group C, diabetic treated with ADSC-SF/CS. Rats were allowed to manifest hyperglycemia for 8 weeks before surgical wounding. All rats (n = 36) were anesthetized with an intraperitoneal injection of sodium pentobarbital (ServiceBio, Inc., Wuhan, China). Their back hair was carefully shaved, and the application field was outlined with a marking pen just before the skin was removed. A 1.5-cm-diameter circular wound was made on the middle of the dorsum of each rat. All wounds were cleaned with iodophor and sterile normal saline. Each group received the appropriate graft as described previously; the wounds were then covered with Vaseline oil gauze, and a 3M transparent dressing was used to fix the graft tightly in place. Rats were sacrificed and samples of the injured tissue were collected at 3, 7, 14, and 21 days. The wound area and adjacent normal skin were excised from each sacrificed animal. Half of each sample was fixed in 10% formalin for histological analysis, and the other half was frozen in liquid nitrogen for enzyme-linked immunosorbent assay (ELISA) testing.
Eight-week-old Sprague-Dawley rats received streptozocin 40 mg/kg (Sigma) in 0.05 mol/L citrate buffer, pH 4.5, daily for 5 days. Only rats showing consistent polyuria, polydipsia, polyphagia, weight loss, and elevated blood glucose (>16.7 mmol/L [300 mg/dL]) were included in the study.22 The diabetic rats (n = 36) were randomly divided into 3 groups of 12: group A, diabetic control with no graft; group B, diabetic treated with SF/CS film alone; and group C, diabetic treated with ADSC-SF/CS. Rats were allowed to manifest hyperglycemia for 8 weeks before surgical wounding. All rats (n = 36) were anesthetized with an intraperitoneal injection of sodium pentobarbital (ServiceBio, Inc., Wuhan, China), and their back hair was carefully shaved. The application field was outlined with a marking pen immediately before the skin was removed, and a 1.5-cm-diameter circular full-thickness wound was made on the middle of the dorsum of each rat. All wounds were cleaned with iodophor and sterile normal saline. Each group then received the appropriate graft as described above; the wounds were covered with Vaseline oil gauze, and a 3M transparent dressing was applied to fix the graft tightly in place. Rats were sacrificed at 3, 7, 14, and 21 days after wounding, and the wound area together with adjacent normal skin was excised from each animal. Half of each sample was fixed in 10% formalin for histological analysis; the other half was frozen in liquid nitrogen for enzyme-linked immunosorbent assay (ELISA).
Calculation of Wound Healing Rate

Wound closure was photographed with a digital camera on days 3, 7, 14, and 21 after treatment, and the unhealed wound area was measured with ImageJ software. The wound healing rate was calculated as: wound healing rate = (original wound area − wound area on a given day postsurgery) / original wound area.
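For clarity, the closure computation can be sketched in code. The following Python snippet is an illustration only, not the authors' analysis script; the function name and the example areas (a 1.5-cm-diameter wound has an initial area of roughly 1.77 cm²) are hypothetical.

```python
# Illustrative only: computing the wound healing rate from areas
# measured in ImageJ, using the formula given above.

def wound_healing_rate(original_area, current_area):
    """Fraction of the original wound area that has closed."""
    if original_area <= 0:
        raise ValueError("original wound area must be positive")
    return (original_area - current_area) / original_area

# Hypothetical areas (cm^2) for one wound; day 0 is the original wound.
areas = {0: 1.77, 3: 1.52, 7: 1.08, 14: 0.41, 21: 0.05}
for day, area in areas.items():
    print(f"day {day:>2}: healing rate = {wound_healing_rate(areas[0], area):.1%}")
```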
Histopathological Assay

Skin samples containing dermis and hypodermis were isolated, carefully trimmed, and fixed in 10% neutral formalin solution. After paraffin embedding, 3- to 4-μm sections were cut and stained with hematoxylin and eosin to study tissue appearance. Three indices were measured: the length of the epithelial tongue, the epithelial gap, and the area of granulation tissue, as shown in Figure 2I. All slides were examined under a light microscope.

Figure 2. Histological analysis of skin after hematoxylin and eosin staining. (I) A schematic transverse section is depicted for identification of the regions observed. (II) Images of granulation tissue obtained during healing; the arrows point to blood vessels growing perpendicular to the wound on days 7 and 21 after treatment. Measurement and analysis of epithelial tongue length (III), epithelial gap (IV), and granulation area (V). All data are mean ± SD (n = 3). *P < 0.05; **P < 0.01.

Immunofluorescent Histology

Paraffin-embedded sections underwent standard deparaffinization and rehydration and were then probed overnight at 4°C with fluorescein isothiocyanate–conjugated anti-CD31, anti–α-smooth muscle actin (α-SMA), and anti–cytokeratin 10 antibodies (all green; Cell Signaling Technology, Danvers, MA). Sections were next incubated with horseradish peroxidase–conjugated secondary antibodies (Santa Cruz Biotechnology Inc., Santa Cruz, CA), and nuclei were counterstained with 4′,6-diamidino-2-phenylindole, dihydrochloride (DAPI).

ELISA

Weighed, minced samples were placed in lysis buffer containing protease inhibitors for 24 hours at 4°C. Tissues were then homogenized and centrifuged to remove tissue residue, and the amounts of epidermal growth factor (EGF), transforming growth factor (TGF)-β1, and vascular endothelial growth factor (VEGF) in the lysates were measured in diluted aliquots with standard ELISA assays.

Statistical Analysis

All data are expressed as means ± SD and were analyzed by one-way ANOVA with Fisher adjustment (SPSS 13.0 for Windows). Differences between groups were analyzed by the Tukey test; P less than 0.05 was considered statistically significant in all analyses.
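The group comparison described above was run in SPSS; as a hedged illustration of the same procedure, the sketch below performs a one-way ANOVA followed by Tukey's post hoc test in Python. All data values are invented for the example and do not come from the study.

```python
# Illustrative reproduction of the statistical procedure (the study used
# SPSS 13.0): one-way ANOVA across the three groups, then Tukey's test.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical day-14 wound healing rates, n = 3 per group.
group_a = np.array([0.52, 0.55, 0.49])  # A: control, no graft
group_b = np.array([0.61, 0.65, 0.63])  # B: SF/CS film alone
group_c = np.array([0.78, 0.82, 0.80])  # C: ADSC-seeded SF/CS

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Pairwise comparisons at the study's significance level (P < 0.05).
values = np.concatenate([group_a, group_b, group_c])
groups = ["A"] * 3 + ["B"] * 3 + ["C"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```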
RESULTS

Characterization of ADSCs

Adipose-derived stem cells were expanded for 3 passages and then characterized by cell differentiation and immunophenotyping assays. Cultures were observed with an inverted light microscope. Attachment of spindle-shaped cells to the tissue-culture plastic flask was observed after 1 day of culture. Primary cultures reached 70% to 80% confluence in approximately 5 to 6 days. During passaging, cell growth tended to accelerate and the morphology of the cells changed gradually, becoming flatter with increasing passage number.
After 3 passages, cultured cells displayed typical fibroblastoid morphology (Fig. 3I) and, under appropriate stimuli, exhibited the potential for adipocyte and osteocyte differentiation, as demonstrated by Oil Red O (Fig. 3II) and Alizarin Red staining (Fig. 3III). The cells also expressed the characteristic stem cell markers CD29 and CD90 and were negative for CD45 (Fig. 3IV–VI).

Figure 3. Culture, differentiation, and immunophenotyping of ADSCs. (I) Representative image of ADSCs displaying cell phenotype under bright-field microscopy (100×). (II–III) Stem cell multipotency was evaluated by inducing adipogenesis and osteogenesis under differentiation stimuli, followed by staining with (II) Oil Red O and (III) Alizarin Red, respectively (100×). (IV–VI) Immunophenotypic characterization of ADSCs: (IV) CD29 (+), (V) CD45 (−), and (VI) CD90 (+).

Histopathological Assay

Histological examination showed no obvious difference among the 3 groups initially, but on days 7 and 21 the ADSC-SF/CS group showed a significantly higher epithelialization rate, increased granulation tissue formation, and greater capillary formation than groups A and B (Fig. 2II). By day 7, the granulation tissue of group C contained more blood vessels growing perpendicular to the wound and fewer inflammatory cells than that of the other groups (neovascularization, C > B > A; inflammatory cells, C < B < A). By day 21 after wounding, the wound tissue grafted with ADSC-SF/CS had regenerated to a near-normal appearance, whereas residual neovascularization, inflammatory cells, and immature collagen persisted in the other 2 groups, especially the control group.

Immunofluorescent Histology

Engrafted CM-Dil–labeled cells were identified throughout the various strata of the dermis and cutaneous appendages at 7, 14, and 21 days postoperatively (Fig. 4). At 14 days, engrafted ADSCs, identified by the red CM-Dil signal, were seen incorporated into winding linear structures consistent with microvascular elements, coexpressing the red CM-Dil signal and the endothelial marker CD31 (Fig. 4I). Some engrafted ADSCs recapitulated elements of linear vascular structures, costaining for CM-Dil and the vascular smooth muscle marker α-SMA (Fig. 4II). At this time point, however, red fluorescence–labeled cells were not observed within the regenerated epidermal epithelium. At 21 days, in addition to the findings described above, engrafted Dil-positive cells were incorporated as components of the regenerated epidermal epithelium, as shown by red fluorescence–labeled cells costaining for the epidermal cytokeratin marker cytokeratin 10 (Fig. 4III).
Figure 4. Expression of the markers CD31, α-SMA, and K10 in wound skin. (I) Photomicrographs showing endothelial cells immunostained for CD31 (green) and engrafted ADSCs (red) in the wound sites of diabetic rats; nuclei counterstained with DAPI (blue); 200× magnification. (II) Vascular smooth muscle cells immunostained for α-SMA (green), with engrafted ADSCs (red) and DAPI (blue), in the wound sites of diabetic rats. (III) Epidermal epithelium immunostained for K10 (green), with engrafted ADSCs (red) and DAPI (blue), in the wound sites of diabetic rats. Scale bar = 50 μm.

ELISA

Mean EGF, TGF-β, and VEGF levels were significantly higher (P < 0.01) in the ADSC-SF/CS–treated group than in the SF/CS-treated and control groups at 14 days postoperatively. In addition, mean VEGF levels were higher (P < 0.05) in the SF/CS-treated group than in the control group (Fig. 5).

Figure 5. ELISA analysis of wound tissue for concentrations of EGF (I), TGF-β (II), and VEGF (III). Comparison of the concentrations of the three growth factors among the ADSC-SF/CS–treated (C), SF/CS-treated (B), and control (A) groups at 14 days post-wounding. **P < 0.01, ADSC-SF/CS–treated vs control; *P < 0.05, ADSC-SF/CS–treated vs SF/CS-treated.
DISCUSSION

Diabetes is associated with many hemorheological alterations. In diabetic patients, long-term complications are related to vascular alterations involving hemorheological and endothelium-dependent flow changes, which can lead to tissue ischemia.23 In this study, comparison of the wounds of normal and diabetic rats immediately after surgery (Figs. 6I and II) showed visible changes in the skin of rats that had maintained stable diabetes for 8 weeks, including poor blood supply, lack of subcutaneous tissue, thin skin, and poor elasticity.

Figure 6. Macroscopic lesion appearance. A diabetic rat model of skin injury was used (I, III). A standardized wound (a 1.5-cm-diameter circle of full-thickness skin removed) was induced on the dorsal surface of diabetic rats (I) and normal rats (II) of similar weight and age. Compared with normal rats, diabetic rats had poor blood supply, lack of subcutaneous tissue, thin skin, and poor elasticity. (III) Lesion appearance at 3, 7, 14, and 21 days.

Refractory diabetic wounds remain a pressing and difficult clinical problem. Many earlier studies promoted wound healing by injecting ADSC suspensions into the wound bed,24,25 but this method carries unresolved safety risks. Tissue engineering is an important therapeutic strategy for regenerative medicine, now and in the future, and is therefore a natural approach to this problem. Tissue engineering comprises 3 elements: scaffold, growth factors, and seed cells. Functional biomaterials research focuses on the development and improvement of scaffolds that can be used to repair or regenerate an organ or tissue, and the scaffold is one of the crucial factors in tissue engineering. Several properties must be considered when developing a device as a skin substitute, including biocompatibility and safety, degradability, and mechanical behavior. Silk fibroin and CS are both natural polymers with good biological properties; the advantages of the two materials and their positive effects on wounds have been described above.
When the two materials are blended, they mutually modify each other.26,27 The SF/CS scaffold has biological, structural, and mechanical properties that can be adjusted to meet specific clinical needs.12 Altman et al27 concluded that SF/CS is a naturally derived, biocompatible scaffold that can serve as a substrate for stem cell delivery, and their study demonstrated the potential of SF/CS as a local carrier of autologous stem cells for reconstructive surgery. In the present study, ADSCs of the same rat strain were used as seed cells for their capacity to secrete growth factors. ADSCs are easy to obtain and provide a much higher yield than other stem cell sources.27 In vitro, ADSCs have been shown to adhere to, proliferate on, and migrate across SF scaffolds without affecting scaffold properties. Here, we seeded CM-Dil–labeled cells onto an SF/CS membrane, cultured them for 48 hours, and observed them by fluorescence microscopy. Because the SF/CS surface is uneven, the cells could not all be displayed in a single focal plane but required several planes, and the morphology and distribution of the cells appeared less uniform for the same reason. This study successfully used a 50:50 SF/CS blend film to seed and deliver same-strain ADSCs in a diabetic rat dorsal cutaneous wound model. Our findings demonstrate that the construct created by seeding ADSCs on SF/CS can serve as an effective delivery vehicle in vivo, with neovascularization and epithelialization as the principal mechanisms.

Adipose-derived stem cells are a population of multipotent mesenchymal cells that can differentiate into various lineages; they can promote angiogenesis, secrete growth factors, and differentiate along multiple lineages upon appropriate stimulation.3 Accordingly, in this study, ADSC-seeded SF/CS film accelerated diabetic wound healing by promoting reepithelialization, cell infiltration, and angiogenesis. The effects of ADSCs on fibroblasts play a key role in skin biology, including wound healing, scarring, and photoaging. In the early phase of wound healing, fibroblasts migrate into the wound area and move across the fibrin-based provisional matrix. Because this provisional matrix is relatively devoid of fibroblasts, migration, proliferation, and ECM production are the key steps in the regeneration of a functional dermis. Fibroblasts produce a collagen-based ECM that ultimately replaces the provisional fibrin-based matrix, and they help reapproximate the wound edges through their contractile properties.28 In this study, wound contraction was reflected by measurements of the epithelial tongue length and the epithelial gap: the epithelial tongue of ADSC-SF/CS–grafted wounds was significantly longer than that of wounds grafted with SF/CS alone, which, analyzed together with the wound closure rate, indicates that ADSCs promoted fibroblast-mediated wound contraction (Figs. 6III and 7).

Figure 7. Analysis of wound closure rates. Values were averaged and are reported as mean ± SD (n = 3). *Significant difference between group B (SF/CS) and group A (control), P < 0.05; **significant difference between group C (ADSC-SF/CS) and group A (control) or group B (SF/CS), P < 0.01.
On the other hand, in line with previous reports, ADSCs promote tissue regeneration through the establishment and support of local microvascular networks.28,29 According to the development of granulation tissue on hematoxylin and eosin staining in this study, neovascularization of the wound granulation tissue 7 days after transplantation was significantly faster in the ADSC-SF/CS–grafted group than in the other 2 groups, and the vessels ran perpendicular to the wound, indicating a more mature state. Although new blood vessels also formed in SF/CS-grafted wounds, they were fewer in number and relatively immature in morphology. Twenty-one days after wounding, the ADSC-SF/CS–transplanted wounds had developed to a near-normal state and the granulation tissue had been remodeled into new skin, whereas the SF/CS group still showed only a few new vessels, and the neovascularization, inflammatory cells, and collagen in the control wounds had not diminished. These findings demonstrate that ADSCs play an important role in promoting angiogenesis and granulation tissue maturation.

Reepithelialization and vascular support in the context of tissue regeneration are likely driven by 2 mechanisms: direct differentiation and paracrine effectors.30,31 Adipose-derived stem cells have excellent regenerative capacity, and multiple studies have shown that they augment tissue repair. They are rich in soluble factors with paracrine action on fibroblasts and keratinocytes, such as EGF, fibroblast growth factor, insulin-like growth factor, TGF-β, VEGF, and platelet-derived growth factor, all important cytokines for the repair of skin ulcers. Furthermore, with ADSCs involved in establishing the neovascular bed at the site of wound repair, an autocrine loop is also likely to operate, with local ADSCs producing angiogenic factors that act on themselves or on neighboring ADSCs to reestablish vascularization.6,29,31–33 In this study, ADSC-SF/CS treatment resulted in significantly increased amounts of EGF, TGF-β, and VEGF in the wounds. Vascular endothelial growth factor plays a key role in angiogenesis by stimulating endothelial cell proliferation, migration, and organization into tubules, and it increases the number of circulating endothelial progenitor cells.30 Transforming growth factor-β is a strong stimulator of ECM deposition, and EGF promotes the repair and regeneration of damaged epidermis.34 The high levels of these growth factors indicate a paracrine mechanism: ADSCs release wound-healing factors and can stimulate the recruitment, migration, and proliferation of endogenous cells in the wound environment. We have also shown that ADSCs engrafted via the SF/CS carrier differentiate spontaneously into a vascular endothelial phenotype, as evidenced by the colocalization of red fluorescence–labeled ADSCs and CD31 staining. Further support comes from the red fluorescence Dil/α-SMA double-positive cells, indicating a vascular smooth muscle contribution to the reconstitution of local small-vessel walls, consistent with previous reports.31,35 In the present study, therefore, ADSCs engrafted and differentiated, and the immunofluorescence findings suggest that the transplanted cells adopt vascular and epithelial phenotypes in their new microenvironment.
Consequently, ADSCs can stimulate angiogenesis, epithelialization, and wound remodeling through paracrine secretion and direct differentiation during wound repair. The functional improvement seen in our study, measured as accelerated wound closure, likely reflects both paracrine and direct cellular mechanisms.

CONCLUSIONS

In this study, we successfully applied ADSC-seeded SF/CS as a cytoprosthetic hybrid for reconstructive support in a diabetic cutaneous wound healing model; the construct supplies growth factors and promotes angiogenesis in the diabetic wound. As a carrier, the SF/CS film supports the delivery, engraftment, and secretory activity of the stem cells, as well as their differentiation into vascular and epithelial components. Some questions remain unresolved, however, such as the optimum number of cells to seed on the material. In conclusion, ADSC-seeded SF/CS is a promising cell–matrix hybrid and warrants further in vivo study of its potential for reconstructive surgical applications.
Keywords: silk fibroin; chitosan; stem cell; wound; diabetes

MeSH terms: Adipocytes; Animals; Cell Culture Techniques; Chitosan; Diabetes Mellitus, Experimental; Enzyme-Linked Immunosorbent Assay; Epidermal Growth Factor; Fibroins; Flow Cytometry; Immunophenotyping; Male; Random Allocation; Rats; Rats, Sprague-Dawley; Stem Cells; Transforming Growth Factor beta; Vascular Endothelial Growth Factor A; Wound Healing
Associations between number of sick-leave days and future all-cause and cause-specific mortality: a population-based cohort study.
PMID: 25037232
BACKGROUND

As the number of studies on the future situation of sickness absentees is still very limited, we aimed to investigate the association between number of sick-leave days and future all-cause and cause-specific mortality among women and men.
METHODS

A cohort of 2 275 987 women and 2 393 248 men, aged 20-64 years in 1995, was followed from 1996 to 2006 with regard to mortality. Data were obtained from linked authority-administered registers. The relative risks (RR) and 95% confidence intervals (CI) of mortality, with and without a 2-year wash-out period, were estimated by multivariate Poisson regression analyses. All analyses were stratified by sex, adjusting for sociodemographics and inpatient care.
RESULTS

All-cause mortality risk rose gradually with increasing number of sick-leave days in 1995, among both women (from RR 1.11, CI 1.07-1.15 for those with 1-15 sick-leave days to RR 2.45, CI 2.36-2.53 for those with 166-365 days) and men (from RR 1.20, CI 1.17-1.24 to RR 1.91, CI 1.85-1.97). Multivariate risk estimates were comparable for the different causes of death (circulatory disease, cancer, and suicide). The two-year wash-out period had only a minor effect on the risk estimates.
CONCLUSION

Even a low number of sick-leave days was associated with a higher risk of premature death in the following 11 years, also when adjusting for morbidity. This was the case for both women and men, and also for cause-specific mortality. More knowledge is warranted on the mechanisms leading to higher mortality risks among sickness absentees, as sickness certification is a common measure in health care and most sick leave is due to diagnoses that are not in themselves fatal.
MeSH terms: Absenteeism; Adult; Cause of Death; Cohort Studies; Female; Humans; Male; Middle Aged; Mortality; Prospective Studies; Risk Assessment; Sex Factors; Sick Leave; Sweden
PMCID: 4223521
Background
Although the number of studies on risk factors for sickness absence has grown considerably, the number of studies on the future situation of sickness absentees is still very limited [1]. Most of the few studies conducted so far have focused on the extent to which sickness absence predicts all-cause and cause-specific mortality in occupational [2,3] or population-based cohorts [2-8], with different follow-up periods. Although the risk of premature death varies somewhat with the measure of sickness absence used, an excess rate of premature death was found in all studies [2-10]. There is a need to test those results in a larger, population-based cohort with a long follow-up period.

In welfare states, sickness certification of patients is a common measure in healthcare, and more knowledge is therefore warranted on the associations between sickness absence and premature death. For instance, it is important to disentangle to what extent the excess risks are merely due to a greater occurrence of morbidity among people who are sickness absent. There are several ways to measure morbidity; in research, self-reports are often used [5,6,11]. Data on inpatient care, that is, on being hospitalized, can be regarded as capturing more severe morbidity that has also been verified by physicians, and are therefore more valid. Moreover, the association between sickness absence and subsequent death might be due to the very disease that led to the sickness absence. To account for this when trying to disentangle the associations between sickness absence and morbidity, a so-called wash-out period is sometimes introduced to exclude all deaths occurring within a near time frame, e.g., a two-year wash-out period [5].

Furthermore, a great number of studies have shown gender differences in morbidity, in sickness absence, and in mortality [12-15]. There are several reasons to believe that the mechanisms leading to morbidity, sickness absence, and mortality differ to some extent between the genders and that such aspects might not be visible when data for women and men are combined; gender-specific analyses are therefore recommended [16-20].

The aim of this study was to investigate the associations between number of sick-leave days and future all-cause and cause-specific mortality among women and men, adjusting for morbidity and socioeconomic status, and also applying a two-year wash-out period in the estimation of relative risks of mortality.
Methods
Study population

A population-based prospective cohort study was conducted with an eleven-year follow-up. The cohort included all individuals aged 20-64 years who were registered in Sweden on 31 December 1994 and 1995 and not on disability or old-age pension in 1995 (N = 4 669 235, 49% women). The cohort was identified from Statistics Sweden's integrated population-based database for labor-market research (LISA), which also provided information on sick leave, disability pension, and other sociodemographic factors in 1995. Date and cause of death were obtained from the Cause of Death Register, held by the National Board of Health and Welfare. The National Patient Register, also held by the National Board of Health and Welfare, was used to obtain information on inpatient care.

Variables

All sick-leave days in 1995 reimbursed by the National Social Insurance Agency were included and categorized into five groups: 0 days (reference category), 1-15 days, 16-75 days, 76-165 days, and 166-365 days. From the age of 16 years, all Swedish residents with income from work, unemployment benefits, or student benefits are covered by the national sickness absence insurance regime and can be sickness absent with benefits if unable to work due to disease or injury. Benefits amount to up to 80% of lost income. The first day of a sick-leave spell is a qualifying day without benefits. The first seven days can be self-certified; thereafter a physician's certificate is needed. For those employed, the employer usually paid for the first 14 days of a sick-leave spell; those days are not registered in LISA and thus not included in this study. Unemployed people whose morbidity prevented them from seeking work received sickness benefits from the Social Insurance Agency from the second day of the sick-leave spell, which means that for them the shorter sick-leave spells were also included. For most of the absentees, the category 1-15 days therefore means that they had been sickness absent for 15-30 days. Information about inpatient care was obtained for the years 1990-95, excluding hospitalization due to childbirth. Among those who were hospitalized, the median number of days of inpatient care over the six-year period was 5; inpatient-care days were categorized as 0 days, 1-5 days, and more than 5 days. In the analyses, age was used as a continuous variable, and educational level was categorized into four groups: elementary school (9 years or less of schooling), secondary (10-12 years), university level (more than 12 years), or information missing.
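As a minimal sketch of the category codings just described (not the actual register extraction; the data frame and column names are invented for illustration), the exposure and inpatient-care variables could be derived as follows.

```python
# Illustrative coding of the sick-leave and inpatient-care categories
# described above; all values and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({"sick_leave_days": [0, 4, 30, 120, 300],
                   "inpatient_days": [0, 2, 0, 7, 15]})

# Sick-leave days in 1995: 0 (reference), 1-15, 16-75, 76-165, 166-365.
df["sick_leave_cat"] = pd.cut(
    df["sick_leave_days"], bins=[-1, 0, 15, 75, 165, 365],
    labels=["0", "1-15", "16-75", "76-165", "166-365"])

# Inpatient-care days 1990-1995: 0, 1-5, more than 5.
df["inpatient_cat"] = pd.cut(
    df["inpatient_days"], bins=[-1, 0, 5, float("inf")],
    labels=["0", "1-5", ">5"])
print(df)
```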
Type of place of residence was divided into (a) large cities (Stockholm, Göteborg, and Malmö); (b) middle-sized cities, that is, places with more than 90 000 residents within 30 km of the municipal center; and (c) rural municipalities (all other areas). Region of birth was categorized as Sweden, other Nordic country, other EU25, and other countries. As outcome measures, we studied all-cause mortality for the years 1996-2006 as well as cause-specific mortality from circulatory diseases, cancer, and suicide.

Statistical analyses

Multivariate analyses were conducted by Poisson regression with mortality as the dependent variable, and the relative risk (RR) of mortality was estimated with a 95% confidence interval (CI). Follow-up time was assessed by summing the months each individual was alive and resident in Sweden during the 11-year follow-up period 1996-2006. The associations between sick leave in 1995 and mortality were analyzed in five regression models, with individuals without sick-leave days in 1995 as the reference group. In model I, adjustments were made for age. In model II, additional adjustments were made for sociodemographic factors, that is, educational level, type of place of residence, and region of birth.
The project was approved by the Regional Ethical Review Board of Stockholm (dnr 2007/762-31).
Results
In this cohort of 4.7 million people of working ages, the mean age was about 40 years among both women and men. A slightly higher proportion of the men than of the women had a low educational level (Table 1). About 70% of the women and 78% of the men had no inpatient care at all during 1990–1995, while a higher proportion of the women had had longer hospitalizations. A higher proportion of the women had also been sickness absent in 1995; the mean number of sick-leave days during 1995 was 10.6 among women and 8.0 among men (not shown in table).

Table 1. Characteristics of the cohort of 4 669 235 women and men aged 20–64 and living in Sweden in 1995. 1Low = ≤9 years; Medium = 10–12 years; High = 13+ years. 2Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell.

The relative risks (RR) of all-cause mortality increased gradually with increasing number of sick-leave days in 1995 among both women and men (Table 2). In the age-adjusted models, women and men with the most sick-leave days (166–365 days) had more than tripled mortality risks (women: RR 3.48, 95% CI 3.37–3.60; men: RR 3.29, 95% CI 3.20–3.39) compared with women and men, respectively, without any sick-leave days reimbursed by the Social Insurance Agency in 1995. Adjustment for socio-demographic factors did not substantially change the estimates. When inpatient care was taken into account (model III), the excess risks generally decreased, but the clear gradients remained. In the fully adjusted models (model IV), the risks decreased further. Using a wash-out period of two years in model V, i.e., starting the follow-up in 1998, reduced the excess risks associated with number of sick-leave days to some extent; however, evident associations with a pronounced gradient remained.

Table 2. Relative risk (RR) of all-cause mortality among women and men on sick leave, with adjustments in four models and in a fifth model with a two-year wash-out period. 1Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell. Model I: adjustment for age. Model II: adjustment for age, education, type of place of residence, and region of birth. Model III: adjustment for age and number of days in hospital (categorized). Model IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth. Model V: model IV with a two-year wash-out period.

The RRs of mortality due to circulatory diseases are presented in Table 3. Although the risk estimates were lower than the RRs of all-cause mortality, a gradual increase with increasing number of sick-leave days emerged among both women and men. Likewise, the risk estimates decreased after adjusting for several potential confounders. Even in the fully adjusted model (model IV), both women and men with more than 75 sick-leave days had a nearly 50% higher risk of premature death due to a circulatory disease during the follow-up period than those without such sick-leave days. Introducing a wash-out period did not change the adjusted RRs of premature death from circulatory diseases.

Table 3. Relative risk (RR) of premature death from circulatory diseases among women and men on sick leave, with adjustments in four models and in a fifth model with a two-year wash-out period. 1Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell. Model I: adjustment for age. Model II: adjustment for age, education, type of place of residence, and region of birth. Model III: adjustment for age and number of days in hospital (categorized). Model IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth. Model V: model IV with a two-year wash-out period.

The associations of number of sick-leave days with subsequent cancer-related death were similar (Table 4). Both women and men with at least one sick-leave day had a higher risk of premature death from cancer compared with those with no sick-leave days in 1995. In contrast to the analyses of death from circulatory diseases, the wash-out period decreased the adjusted RRs of death.

Table 4. Relative risk (RR) of premature death from cancer among women and men on sick leave, with adjustments in four models and in a fifth model with a two-year wash-out period. 1Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell. Model I: adjustment for age. Model II: adjustment for age, education, type of place of residence, and region of birth. Model III: adjustment for age and number of days in hospital (categorized). Model IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth. Model V: model IV with a two-year wash-out period.

The analyses presented in Table 5 show that sick leave was associated with a higher risk of suicide in both women and men. The RR of suicide increased in a graded fashion with more sick-leave days in both women and men, even after adjustment for several potential confounding factors. Adjusting for inpatient care decreased the estimates markedly for both sexes. Taking a two-year wash-out period into consideration decreased the estimates somewhat for both women and men.

Table 5. Relative risk (RR) of premature death from suicide among women and men on sick leave, with adjustments in four models and in a fifth model with a two-year wash-out period. 1Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell. Model I: adjustment for age. Model II: adjustment for age, education, type of place of residence, and region of birth. Model III: adjustment for age and number of days in hospital (categorized). Model IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth. Model V: model IV with a two-year wash-out period.
Conclusions
There was a clear association between sickness absence and all-cause and cause-specific mortality, even for a relatively short number of sick-leave days, also when adjusting for morbidity. Moreover, the higher risks were about the same for women and men, although there are higher risks for women to become sickness absent. As sickness certification of patients is common in health care, more knowledge on possible mechanisms behind the results is warranted.
[ "Background", "Study population", "Variables", "Statistical analyses", "Strengths", "Limitations", "Competing interests", "Author’s contribution", "Pre-publication history" ]
[ "Today, the number of studies on risk factors for sickness absence has increased much, however, the number of studies on the future situation of sickness absentees is still very limited\n[1]. Most of the few studies so far conducted have focused on to what extent sickness absence can predict all-cause and cause-specific mortality in occupational\n[2,3] or population-based cohorts\n[2-8], with different follow-up periods. The risk of premature death varies somewhat with used measures of sickness absence, nevertheless, an excess rate of premature death was found in all studies\n[2-10]. There is a need to test those results in a larger, population-based cohort with a long follow-up period.\nIn welfare countries, to sickness certify patients is a common measure in healthcare, why more knowledge is warranted on associations between sickness absence and premature death. For instance, it is important to disentangle to what extent excess risks merely are due to a greater occurrence of morbidity among people who are sickness absent. There are several ways to measure morbidity, in research often self-reports are used\n[5,6,11]. Data on inpatient care, that is, being hospitalized, could be considered as data on more severe morbidity as well as on morbidity that has also been verified by physicians and therefore more valid. Also, the association between sickness absence and subsequent deaths might be due to the very disease that leads to the sickness absence. To account for that when trying to disentangle the associations between sickness absence and morbidity, sometimes a so called wash-out period is introduced, to exclude all deaths occurring in a near time frame, e.g., using a two-year wash out period\n[5].\nMoreover, a great number of studies have shown gender differences in morbidity, in sickness absence, as well as in mortality\n[12-15]. There are several reasons to believe that the mechanisms leading to morbidity, to sickness absence, and to mortality might differ to some extent between the genders and that such aspects might not be visible when combining data for women and men, why gender-specific analyses are recommended\n[16-20].\nThe aim of this study was to investigate the associations between number of sick-leave days and future all-cause and cause-specific mortality among women and men, adjusting for morbidity and socioeconomic status, and also taking into account a two-year wash-out period for the relative risk of mortality.", "A population-based prospective cohort study was conducted with an eleven-year follow up.\nThe cohort included all individuals aged 20–64 years, registered in Sweden on 31 December 1994 and 1995, and not on disability or old-age pension in 1995 (N = 4 669 235, 49% women). The cohort was identified from Statistics Sweden’s integrated population-based database for labor-market research (LISA). From this database also information regarding sick leave, disability pension, and other sociodemographic factors in 1995 was obtained. Further, date and cause of death was obtained from the Cause of Death Register, held by the National Board of Health and Welfare. The National Patient Register, also held by the National Board of Health and Welfare, was used to obtain information on inpatient care.", "All sick-leave days in 1995 reimbursed by the National Social Insurance Agency were included and categorized into five groups; 0 days (reference category), 1-15 days, 16-75 days, 76-165 days, and 166-365 days. 
From the age of 16 years, all Swedish residents with income from work, unemployment benefits, or student benefits are covered by the national sickness absence insurance regime and can be sickness absent with benefits if unable to work due to disease or injury. Benefits amount up to 80% of lost income. The first day of a sick-leave spell is a qualifying day, without any benefits. The first seven days can be self-certified; thereafter a physician certificate is needed. For those employed, the employer usually paid for the first 14 days of the sick-leave spell; those days are not registered in LISA and thus not included in this study. Unemployed people, if their morbidity lead to that they could not seek work, had sickness benefits paid by the Social Insurance Agency from the second day of the sick-leave spell. This means that for them also the shorter sick-leave spells were included. For most of the absentees, the category 1-15 days means that they had been sickness absent for 15-30 days.\nInformation about inpatient care was obtained for the years 1990-95, excluding hospitalization due to childbirth. The median number of days of inpatient care for those who were hospitalized was 5 days for the six-year period. Inpatient care days were categorized as; 0 days, 1-5 days, and more than 5 days.\nIn the analyses, age was used as a continuous variable and educational level was categorized into four groups: elementary school (9 years or less of schooling), secondary (10-12 years), university level (more than 12 years), or information missing. Type of place of residence was divided into a) large cities (Stockholm, Göteborg, and Malmö), b) middle-sized cities: places with more than 90 000 residents within 30 km from the municipal center, and c) rural municipalities (all other areas). Region of birth was categorized into Sweden, other Nordic country, other EU25, and other countries.\nAs outcome measure, we studied all-cause mortality for the years 1996-2006 as well as cause-specific mortality from circulatory diseases, cancer, and suicide, respectively.", "Multivariate analyses were conducted by Poisson regression with mortality as the dependent variable. The relative risk (RR) of mortality was estimated with 95% confidence interval (CI). Follow-up time was assessed by adding up the months the individuals were alive and residents of Sweden during the 11-years follow-up period 1996–2006. The associations between sick leave in 1995 and mortality were analyzed in five regression models. The reference group comprised individuals without sick-leave days in 1995. In model I, adjustments were made for age. In a second model (model II) we made additional adjustments for socio-demographic factors; i.e. educational level, type of place of residence, and region of birth. In model III, we adjusted for inpatient care in 1990-1995. Model IV included all covariates. In a fifth model, as an attempt to rule out that observed associations are not mainly due to an excess rate of sick leave shortly before death, a washout period of two years was used, that is, follow-up started in the third year after baseline (1998-2006). This included those who still lived 1 January 1998, conducting the same adjustment as in Model IV. Statistical analyses were carried out using the SAS software package, version 9.3. 
In accordance with the aim, all analyses were stratified by sex.\nThe project was approved of by the Regional Ethical Review Board of Stockholm (dnr 2007/762-31).", "The strengths of this study are the population-based, prospective cohort design, with a very long follow-up time, the very large number of included individuals, the high quality of the nationwide register data, e.g. regarding completeness and validity\n[33,34] and that there was no loss to follow up. The size of the cohort (4.7 millions) also provided sufficient power for gender-stratified analyses regarding rare outcomes such as suicide. As both the exposure and outcome were based on register data, there was no recall bias. Other strengths are that most of the sick-leave days as well as all hospitalizations were certified by a physician and that all individuals not at risk for sickness absence in 1995 were excluded, that is, those on disability pension.", "The available information about health conditions was limited to morbidity leading to hospital care, which means that we were not able to account for all types of morbidity in the analyses. Hence, part of the higher risks may have been attributed to other types of impaired health among sick-listed people. On the other hand, it can be seen as an advantage that only the more severe morbidity was adjusted for, especially as morbidity seldom is accounted for in studies of associations between sick leave and premature deaths. That shorter sick-leave spells (<14 days) for employees could not be included can also be regarded both as a limitation and a strength. However, other studies have shown associations also with short-term sickness absence\n[7,10], which means that we rather have an under- than an over-estimation of results. Another limitation is that we could not exclude the sick-leave days from sick-leave spells shorter than 15 days - that is, days generally generated by unemployed people who had their sick-leave spells reimbursed by the Social Insurance Agency already from day 2. This means that there is an overrepresentation of unemployed among those in the category of 1-15 sick-leave days. However, most sickness absentees are working, and some of the unemployed do not have unemployment benefits- due to not having had a paid work lasting for 12 months or due to having been unemployed for too long time. This means that both in the reference group and in the group of fewer sick-leave days, there was a slight overrepresentation of unemployed. Future studies are needed to gain more knowledge on this. Despite the considerably large dataset including a number of important confounders, information on some potential confounders such as self-reports on health behavior including smoking and alcohol consumption could have been of importance. Moreover, the study was carried out using the year 1995 as baseline. Future studies are warranted to include cohorts based on other time periods and using other measures of sickness absence\n[35,36]. Moreover, the use of a two-year washout period can be questioned - both as being too long and as being too short - the effect of using different wash-out periods needs to be investigated in more detailed studies.", "The authors declare that they have no competing interests.", "EMR and KA originated the idea. GWR analysed the data in consultation with KA, CL, and EMR. EB wrote the first and subsequent drafts of the manuscript, with important intellectual input from all the co-authors. 
All authors contributed in designing the study and to the interpretation of the results and to the writing and approval of the final article.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/14/733/prepub\n" ]
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Variables", "Statistical analyses", "Results", "Discussion", "Strengths", "Limitations", "Conclusions", "Competing interests", "Author’s contribution", "Pre-publication history" ]
[ "Today, the number of studies on risk factors for sickness absence has increased much, however, the number of studies on the future situation of sickness absentees is still very limited\n[1]. Most of the few studies so far conducted have focused on to what extent sickness absence can predict all-cause and cause-specific mortality in occupational\n[2,3] or population-based cohorts\n[2-8], with different follow-up periods. The risk of premature death varies somewhat with used measures of sickness absence, nevertheless, an excess rate of premature death was found in all studies\n[2-10]. There is a need to test those results in a larger, population-based cohort with a long follow-up period.\nIn welfare countries, to sickness certify patients is a common measure in healthcare, why more knowledge is warranted on associations between sickness absence and premature death. For instance, it is important to disentangle to what extent excess risks merely are due to a greater occurrence of morbidity among people who are sickness absent. There are several ways to measure morbidity, in research often self-reports are used\n[5,6,11]. Data on inpatient care, that is, being hospitalized, could be considered as data on more severe morbidity as well as on morbidity that has also been verified by physicians and therefore more valid. Also, the association between sickness absence and subsequent deaths might be due to the very disease that leads to the sickness absence. To account for that when trying to disentangle the associations between sickness absence and morbidity, sometimes a so called wash-out period is introduced, to exclude all deaths occurring in a near time frame, e.g., using a two-year wash out period\n[5].\nMoreover, a great number of studies have shown gender differences in morbidity, in sickness absence, as well as in mortality\n[12-15]. There are several reasons to believe that the mechanisms leading to morbidity, to sickness absence, and to mortality might differ to some extent between the genders and that such aspects might not be visible when combining data for women and men, why gender-specific analyses are recommended\n[16-20].\nThe aim of this study was to investigate the associations between number of sick-leave days and future all-cause and cause-specific mortality among women and men, adjusting for morbidity and socioeconomic status, and also taking into account a two-year wash-out period for the relative risk of mortality.", " Study population A population-based prospective cohort study was conducted with an eleven-year follow up.\nThe cohort included all individuals aged 20–64 years, registered in Sweden on 31 December 1994 and 1995, and not on disability or old-age pension in 1995 (N = 4 669 235, 49% women). The cohort was identified from Statistics Sweden’s integrated population-based database for labor-market research (LISA). From this database also information regarding sick leave, disability pension, and other sociodemographic factors in 1995 was obtained. Further, date and cause of death was obtained from the Cause of Death Register, held by the National Board of Health and Welfare. 
The National Patient Register, also held by the National Board of Health and Welfare, was used to obtain information on inpatient care.\nA population-based prospective cohort study was conducted with an eleven-year follow up.\nThe cohort included all individuals aged 20–64 years, registered in Sweden on 31 December 1994 and 1995, and not on disability or old-age pension in 1995 (N = 4 669 235, 49% women). The cohort was identified from Statistics Sweden’s integrated population-based database for labor-market research (LISA). From this database also information regarding sick leave, disability pension, and other sociodemographic factors in 1995 was obtained. Further, date and cause of death was obtained from the Cause of Death Register, held by the National Board of Health and Welfare. The National Patient Register, also held by the National Board of Health and Welfare, was used to obtain information on inpatient care.\n Variables All sick-leave days in 1995 reimbursed by the National Social Insurance Agency were included and categorized into five groups; 0 days (reference category), 1-15 days, 16-75 days, 76-165 days, and 166-365 days. From the age of 16 years, all Swedish residents with income from work, unemployment benefits, or student benefits are covered by the national sickness absence insurance regime and can be sickness absent with benefits if unable to work due to disease or injury. Benefits amount up to 80% of lost income. The first day of a sick-leave spell is a qualifying day, without any benefits. The first seven days can be self-certified; thereafter a physician certificate is needed. For those employed, the employer usually paid for the first 14 days of the sick-leave spell; those days are not registered in LISA and thus not included in this study. Unemployed people, if their morbidity lead to that they could not seek work, had sickness benefits paid by the Social Insurance Agency from the second day of the sick-leave spell. This means that for them also the shorter sick-leave spells were included. For most of the absentees, the category 1-15 days means that they had been sickness absent for 15-30 days.\nInformation about inpatient care was obtained for the years 1990-95, excluding hospitalization due to childbirth. The median number of days of inpatient care for those who were hospitalized was 5 days for the six-year period. Inpatient care days were categorized as; 0 days, 1-5 days, and more than 5 days.\nIn the analyses, age was used as a continuous variable and educational level was categorized into four groups: elementary school (9 years or less of schooling), secondary (10-12 years), university level (more than 12 years), or information missing. Type of place of residence was divided into a) large cities (Stockholm, Göteborg, and Malmö), b) middle-sized cities: places with more than 90 000 residents within 30 km from the municipal center, and c) rural municipalities (all other areas). Region of birth was categorized into Sweden, other Nordic country, other EU25, and other countries.\nAs outcome measure, we studied all-cause mortality for the years 1996-2006 as well as cause-specific mortality from circulatory diseases, cancer, and suicide, respectively.\nAll sick-leave days in 1995 reimbursed by the National Social Insurance Agency were included and categorized into five groups; 0 days (reference category), 1-15 days, 16-75 days, 76-165 days, and 166-365 days. 
From the age of 16 years, all Swedish residents with income from work, unemployment benefits, or student benefits are covered by the national sickness absence insurance regime and can be sickness absent with benefits if unable to work due to disease or injury. Benefits amount up to 80% of lost income. The first day of a sick-leave spell is a qualifying day, without any benefits. The first seven days can be self-certified; thereafter a physician certificate is needed. For those employed, the employer usually paid for the first 14 days of the sick-leave spell; those days are not registered in LISA and thus not included in this study. Unemployed people, if their morbidity lead to that they could not seek work, had sickness benefits paid by the Social Insurance Agency from the second day of the sick-leave spell. This means that for them also the shorter sick-leave spells were included. For most of the absentees, the category 1-15 days means that they had been sickness absent for 15-30 days.\nInformation about inpatient care was obtained for the years 1990-95, excluding hospitalization due to childbirth. The median number of days of inpatient care for those who were hospitalized was 5 days for the six-year period. Inpatient care days were categorized as; 0 days, 1-5 days, and more than 5 days.\nIn the analyses, age was used as a continuous variable and educational level was categorized into four groups: elementary school (9 years or less of schooling), secondary (10-12 years), university level (more than 12 years), or information missing. Type of place of residence was divided into a) large cities (Stockholm, Göteborg, and Malmö), b) middle-sized cities: places with more than 90 000 residents within 30 km from the municipal center, and c) rural municipalities (all other areas). Region of birth was categorized into Sweden, other Nordic country, other EU25, and other countries.\nAs outcome measure, we studied all-cause mortality for the years 1996-2006 as well as cause-specific mortality from circulatory diseases, cancer, and suicide, respectively.\n Statistical analyses Multivariate analyses were conducted by Poisson regression with mortality as the dependent variable. The relative risk (RR) of mortality was estimated with 95% confidence interval (CI). Follow-up time was assessed by adding up the months the individuals were alive and residents of Sweden during the 11-years follow-up period 1996–2006. The associations between sick leave in 1995 and mortality were analyzed in five regression models. The reference group comprised individuals without sick-leave days in 1995. In model I, adjustments were made for age. In a second model (model II) we made additional adjustments for socio-demographic factors; i.e. educational level, type of place of residence, and region of birth. In model III, we adjusted for inpatient care in 1990-1995. Model IV included all covariates. In a fifth model, as an attempt to rule out that observed associations are not mainly due to an excess rate of sick leave shortly before death, a washout period of two years was used, that is, follow-up started in the third year after baseline (1998-2006). This included those who still lived 1 January 1998, conducting the same adjustment as in Model IV. Statistical analyses were carried out using the SAS software package, version 9.3. 
In accordance with the aim, all analyses were stratified by sex.\nThe project was approved of by the Regional Ethical Review Board of Stockholm (dnr 2007/762-31).\nMultivariate analyses were conducted by Poisson regression with mortality as the dependent variable. The relative risk (RR) of mortality was estimated with 95% confidence interval (CI). Follow-up time was assessed by adding up the months the individuals were alive and residents of Sweden during the 11-years follow-up period 1996–2006. The associations between sick leave in 1995 and mortality were analyzed in five regression models. The reference group comprised individuals without sick-leave days in 1995. In model I, adjustments were made for age. In a second model (model II) we made additional adjustments for socio-demographic factors; i.e. educational level, type of place of residence, and region of birth. In model III, we adjusted for inpatient care in 1990-1995. Model IV included all covariates. In a fifth model, as an attempt to rule out that observed associations are not mainly due to an excess rate of sick leave shortly before death, a washout period of two years was used, that is, follow-up started in the third year after baseline (1998-2006). This included those who still lived 1 January 1998, conducting the same adjustment as in Model IV. Statistical analyses were carried out using the SAS software package, version 9.3. In accordance with the aim, all analyses were stratified by sex.\nThe project was approved of by the Regional Ethical Review Board of Stockholm (dnr 2007/762-31).", "A population-based prospective cohort study was conducted with an eleven-year follow up.\nThe cohort included all individuals aged 20–64 years, registered in Sweden on 31 December 1994 and 1995, and not on disability or old-age pension in 1995 (N = 4 669 235, 49% women). The cohort was identified from Statistics Sweden’s integrated population-based database for labor-market research (LISA). From this database also information regarding sick leave, disability pension, and other sociodemographic factors in 1995 was obtained. Further, date and cause of death was obtained from the Cause of Death Register, held by the National Board of Health and Welfare. The National Patient Register, also held by the National Board of Health and Welfare, was used to obtain information on inpatient care.", "All sick-leave days in 1995 reimbursed by the National Social Insurance Agency were included and categorized into five groups; 0 days (reference category), 1-15 days, 16-75 days, 76-165 days, and 166-365 days. From the age of 16 years, all Swedish residents with income from work, unemployment benefits, or student benefits are covered by the national sickness absence insurance regime and can be sickness absent with benefits if unable to work due to disease or injury. Benefits amount up to 80% of lost income. The first day of a sick-leave spell is a qualifying day, without any benefits. The first seven days can be self-certified; thereafter a physician certificate is needed. For those employed, the employer usually paid for the first 14 days of the sick-leave spell; those days are not registered in LISA and thus not included in this study. Unemployed people, if their morbidity lead to that they could not seek work, had sickness benefits paid by the Social Insurance Agency from the second day of the sick-leave spell. This means that for them also the shorter sick-leave spells were included. 
For most of the absentees, the category 1-15 days means that they had been sickness absent for 15-30 days.\nInformation about inpatient care was obtained for the years 1990-95, excluding hospitalization due to childbirth. The median number of days of inpatient care for those who were hospitalized was 5 days for the six-year period. Inpatient care days were categorized as; 0 days, 1-5 days, and more than 5 days.\nIn the analyses, age was used as a continuous variable and educational level was categorized into four groups: elementary school (9 years or less of schooling), secondary (10-12 years), university level (more than 12 years), or information missing. Type of place of residence was divided into a) large cities (Stockholm, Göteborg, and Malmö), b) middle-sized cities: places with more than 90 000 residents within 30 km from the municipal center, and c) rural municipalities (all other areas). Region of birth was categorized into Sweden, other Nordic country, other EU25, and other countries.\nAs outcome measure, we studied all-cause mortality for the years 1996-2006 as well as cause-specific mortality from circulatory diseases, cancer, and suicide, respectively.", "Multivariate analyses were conducted by Poisson regression with mortality as the dependent variable. The relative risk (RR) of mortality was estimated with 95% confidence interval (CI). Follow-up time was assessed by adding up the months the individuals were alive and residents of Sweden during the 11-years follow-up period 1996–2006. The associations between sick leave in 1995 and mortality were analyzed in five regression models. The reference group comprised individuals without sick-leave days in 1995. In model I, adjustments were made for age. In a second model (model II) we made additional adjustments for socio-demographic factors; i.e. educational level, type of place of residence, and region of birth. In model III, we adjusted for inpatient care in 1990-1995. Model IV included all covariates. In a fifth model, as an attempt to rule out that observed associations are not mainly due to an excess rate of sick leave shortly before death, a washout period of two years was used, that is, follow-up started in the third year after baseline (1998-2006). This included those who still lived 1 January 1998, conducting the same adjustment as in Model IV. Statistical analyses were carried out using the SAS software package, version 9.3. In accordance with the aim, all analyses were stratified by sex.\nThe project was approved of by the Regional Ethical Review Board of Stockholm (dnr 2007/762-31).", "In this cohort of 4.7 million people of working ages, the mean age was about 40 years both among women and men. A slightly higher proportion of the men than the women had lower educational level (Table \n1). About 70% of the women and 78% of the men had no inpatient care at all during 1990–1995. A higher rate of the women had had longer hospitalization. 
Also, a higher rate of the women had been sickness absent in 1995 and the mean number of sick-leave days during 1995 was 10.6 days in women and 8.0 days in men (not shown in table).\nCharacteristics for the cohort of 4 669 235 women and men aged 20–64 and living in Sweden in 1995\n1Low = ≤9 years; Medium = 10–12 years; High = 13+ years;\n2Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell.\nThe relative risks (RR) of all-cause mortality displayed a gradual increase with increasing number of sick-leave days in 1995 among both women and men (Table \n2). In the age-adjusted models, women and men with most sick-leave days (166–365 days), had more than tripled mortality risks; women RR 3.48; 95% CI 3.37-3.60 and men RR 3.29; CI 3.20-3.39 in comparison with women and men, respectively, without any sick-leave days reimbursed by the Social Insurance Agency in 1995. Adjustments for socio-demographics did not substantially change the estimates. When inpatient care was taken into account (model III), the excess risks generally decreased, but the clear gradients remained. In the fully adjusted models (model IV), the risks decreased further. Using a “wash-out”-period of two years in model V, e.g., starting the follow-up 1998, decreased the excess risk to some extent regarding death risks associated with number of sick-leave days. However, evident associations remained with a pronounced gradient.\nRelative risk (RR) for all-cause mortality among women and men on sick leave with adjustments in four models and in a fifth model with two years wash-out period\n1Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell.\nModel I: adjustment for age.\nModel II: adjustment for age and education, type of place of residence, and region of birth.\nModel III: adjustment for age, number of days in hospital (categorized).\nModel IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth.\nModel V: model IV with two years wash-out period.\nThe RRs of mortality due to circulatory diseases are presented in Table \n3. Although risk estimates were lower compared to RRs of all-cause mortality, a gradual increase with increasing number of sick-leave days among both women and men emerged. In a similar way, the risk estimates decreased after adjusting for several potential confounders. Even in the fully adjusted model (model IV), both women and men with more than 75 sick-leave days had a nearly 50% higher risk of premature death due to a circulatory disease during the follow-up period than women without such sick-leave days. 
Introducing a wash-out period did not change the adjusted RRs of premature death from circulatory diseases.\nRelative risk (RR) of premature death from circulatory diseases among women and men on sick leave with adjustments in four models and in a fifth model with two years wash-out period\n1Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell.\nModel I: adjustment for age.\nModel II: adjustment for age and education, type of place of residence, and region of birth.\nModel III: adjustment for age, and number of days in hospital (categorized).\nModel IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth.\nModel V: model IV with two years wash-out period.\nThe associations of number of sick-leave days with subsequent cancer-related death were similar (Table \n4). Both women and men with at least one sick-leave day had a higher risk of premature death from cancer compared to those with no sick-leave days in 1995. In contrary to the analyses on death from circulatory diseases, the wash-out period decreased the adjusted RRs of death.\nRelative risk (RR) of premature death from cancer diagnoses among women and men on sick leave with adjustments in four models and in a fifth model with two years wash-out period\n1Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell.\nModel I: adjustment for age.\nModel II: adjustment for age and education, type of place of residence, and region of birth.\nModel III: adjustment for age, and number of days in hospital (categorized).\nModel IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth.\nModel V: model IV with two years wash-out period.\nThe analyses presented in Table \n5 show that sick leave was associated with a higher risk of suicide in both women and men. The RR for suicide increased in a graded fashion with more sick-leave days in both women and men, even after adjustment for several potential confounding factors. Adjusting for inpatient care decreased the estimates significantly for both sexes. Taking a wash-out period of two years into consideration decreased the estimates somewhat for both women and men.\nRelative risk (RR) of premature death from suicide among women and men on sick leave, with adjustments in four models and in a fifth model with two years wash-out period\n1Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell.\nModel I: adjustment for age.\nModel II: adjustment for age and education, type of place of residence, and region of birth.\nModel III: adjustment for age, and number of days in hospital (categorized).\nModel IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth.\nModel V: model IV with two years wash-out period.", "The results from this longitudinal study of 4.7 million women and men from the general population of working ages clearly demonstrated a gradual increase of all-cause and cause-specific premature death with increasing number of sick-leave days. Part of the mortality differences in the multivariate analyses was accounted for by differences in age, socio demographics, and particularly by morbidity, indicated by inpatient care. Results were similar for women and men. 
The two-year washout period had only a minor effect on the risk estimates.\nIndividuals having been sickness absent, also only for a few weeks, were found to have a higher risk for premature death, despite the fact that even individuals in the reference group might have been sickness absent later on, e.g., the year following the year of observation and despite the very long follow-up time of 11 years. More knowledge is warranted on these associations, regarding e.g. in different populations, time periods, and for different diagnoses. It might be argued that an association between sickness absence and premature death is expected as those on sick leave are that due to the underlying morbidity. However, the ill-health content of sick leave has often been questioned in the general debate\n[21,22]. Also, most sickness absences are due to musculoskeletal disorders – disorders that hardly ever are a cause of death, or due to minor mental disorders, which also very seldom are a cause of death with exception of suicide\n[13,21,23]. Many of the very short sick-leave spells are due to upper respiratory infections\n[13]. However, few of those are included in our data base due to the fact that the first 14 days of a sick-leave spell are paid by the employer. The main causes of death in Sweden as well as in other Western countries are circulatory disease and cancer – however, those diagnoses only stand for a few percentages of the sick-leave diagnoses\n[24].\nOther hypotheses of the identified associations are that sick leave might be a marker of other factors, or that sick leave per se is a risk factor for morbidity or mortality. Different possible negative consequences of being sickness absent have been described in the literature, e.g. negative changes of life style (alcohol, exercise, diet, tobacco), of social interactions, of economy or work carrier, of the prognosis of the disease underlying the absence or of other diseases (e.g. depression)\n[1,25].\nThe higher risk of premature death with increasing number of sick-leave days is in line with previous studies\n[4,5,7,9]. As all these studies have somewhat different ways of defining sickness absence, the results from these studies cannot easily be compared. Still, findings from our study based on a very large population-based database, point in the same direction as previous studies and warrants follow ups using more specified research questions.\nIn our study, socio-demographic factors did not seem to explain much of the higher mortality risks. There is massive evidence on the associations between socio-economic status and mortality\n[12,26,27]. We used educational level as a proxy of socioeconomic status rather than income, in the attempt not to introduce an unnecessary gender bias (as women get less paid and a larger rate of women works part time). In general, people of lower socioeconomic status have higher risk for sickness absence\n[13,28], however, possible effects of sickness absence regarding premature death do not seem to differ between socioeconomic groups. The same can be said about gender; women have higher risks for becoming sickness absent; however, we in general found no gender differences regarding premature death among sickness absentees. 
It seems as the possible negative ‘effects’ of sickness absence do not vary between genders or socioeconomic groups.\nOur results clearly indicate a higher RR for premature death due to circulatory diseases among women and men on sick leave, especially for those who had more than 15 sick-leave days in 1995, that is, mainly more than one months’ sick leave. This is in line with two previous studies based on occupational groups, finding that long-term sick leave is a risk factor for cardiovascular mortality\n[3,7]. Our results also correspond with another finding from these two studies\n[3,7], namely that sick leave was associated with cancer-related mortality.\nWe found an elevated suicide risk in women and men who had been on sick leave. The suicide risk increased in a graded manner as the number of sick-leave days increased, for both women and men. A few other studies have also found sick leave to be a risk factor for suicide\n[7,29,30]. In one of them, a case–control study with a lower number of women, sickness absence was a risk factor only for men\n[30]. It is well known that, in most Western countries, men have higher suicide rates than women\n[31,32]. However, here we studied the relative risks for suicide among sickness absent women and men, respectively, and found that those did not differ much between the genders.\n Strengths The strengths of this study are the population-based, prospective cohort design, with a very long follow-up time, the very large number of included individuals, the high quality of the nationwide register data, e.g. regarding completeness and validity\n[33,34] and that there was no loss to follow up. The size of the cohort (4.7 millions) also provided sufficient power for gender-stratified analyses regarding rare outcomes such as suicide. As both the exposure and outcome were based on register data, there was no recall bias. Other strengths are that most of the sick-leave days as well as all hospitalizations were certified by a physician and that all individuals not at risk for sickness absence in 1995 were excluded, that is, those on disability pension.\nThe strengths of this study are the population-based, prospective cohort design, with a very long follow-up time, the very large number of included individuals, the high quality of the nationwide register data, e.g. regarding completeness and validity\n[33,34] and that there was no loss to follow up. The size of the cohort (4.7 millions) also provided sufficient power for gender-stratified analyses regarding rare outcomes such as suicide. As both the exposure and outcome were based on register data, there was no recall bias. Other strengths are that most of the sick-leave days as well as all hospitalizations were certified by a physician and that all individuals not at risk for sickness absence in 1995 were excluded, that is, those on disability pension.\n Limitations The available information about health conditions was limited to morbidity leading to hospital care, which means that we were not able to account for all types of morbidity in the analyses. Hence, part of the higher risks may have been attributed to other types of impaired health among sick-listed people. On the other hand, it can be seen as an advantage that only the more severe morbidity was adjusted for, especially as morbidity seldom is accounted for in studies of associations between sick leave and premature deaths. 
That shorter sick-leave spells (<14 days) for employees could not be included can also be regarded both as a limitation and a strength. However, other studies have shown associations also with short-term sickness absence\n[7,10], which means that we rather have an under- than an over-estimation of results. Another limitation is that we could not exclude the sick-leave days from sick-leave spells shorter than 15 days - that is, days generally generated by unemployed people who had their sick-leave spells reimbursed by the Social Insurance Agency already from day 2. This means that there is an overrepresentation of unemployed among those in the category of 1-15 sick-leave days. However, most sickness absentees are working, and some of the unemployed do not have unemployment benefits- due to not having had a paid work lasting for 12 months or due to having been unemployed for too long time. This means that both in the reference group and in the group of fewer sick-leave days, there was a slight overrepresentation of unemployed. Future studies are needed to gain more knowledge on this. Despite the considerably large dataset including a number of important confounders, information on some potential confounders such as self-reports on health behavior including smoking and alcohol consumption could have been of importance. Moreover, the study was carried out using the year 1995 as baseline. Future studies are warranted to include cohorts based on other time periods and using other measures of sickness absence\n[35,36]. Moreover, the use of a two-year washout period can be questioned - both as being too long and as being too short - the effect of using different wash-out periods needs to be investigated in more detailed studies.\nThe available information about health conditions was limited to morbidity leading to hospital care, which means that we were not able to account for all types of morbidity in the analyses. Hence, part of the higher risks may have been attributed to other types of impaired health among sick-listed people. On the other hand, it can be seen as an advantage that only the more severe morbidity was adjusted for, especially as morbidity seldom is accounted for in studies of associations between sick leave and premature deaths. That shorter sick-leave spells (<14 days) for employees could not be included can also be regarded both as a limitation and a strength. However, other studies have shown associations also with short-term sickness absence\n[7,10], which means that we rather have an under- than an over-estimation of results. Another limitation is that we could not exclude the sick-leave days from sick-leave spells shorter than 15 days - that is, days generally generated by unemployed people who had their sick-leave spells reimbursed by the Social Insurance Agency already from day 2. This means that there is an overrepresentation of unemployed among those in the category of 1-15 sick-leave days. However, most sickness absentees are working, and some of the unemployed do not have unemployment benefits- due to not having had a paid work lasting for 12 months or due to having been unemployed for too long time. This means that both in the reference group and in the group of fewer sick-leave days, there was a slight overrepresentation of unemployed. Future studies are needed to gain more knowledge on this. 
Despite the considerably large dataset including a number of important confounders, information on some potential confounders such as self-reports on health behavior including smoking and alcohol consumption could have been of importance. Moreover, the study was carried out using the year 1995 as baseline. Future studies are warranted to include cohorts based on other time periods and using other measures of sickness absence\n[35,36]. Moreover, the use of a two-year washout period can be questioned - both as being too long and as being too short - the effect of using different wash-out periods needs to be investigated in more detailed studies.", "The strengths of this study are the population-based, prospective cohort design, with a very long follow-up time, the very large number of included individuals, the high quality of the nationwide register data, e.g. regarding completeness and validity\n[33,34] and that there was no loss to follow up. The size of the cohort (4.7 millions) also provided sufficient power for gender-stratified analyses regarding rare outcomes such as suicide. As both the exposure and outcome were based on register data, there was no recall bias. Other strengths are that most of the sick-leave days as well as all hospitalizations were certified by a physician and that all individuals not at risk for sickness absence in 1995 were excluded, that is, those on disability pension.", "The available information about health conditions was limited to morbidity leading to hospital care, which means that we were not able to account for all types of morbidity in the analyses. Hence, part of the higher risks may have been attributed to other types of impaired health among sick-listed people. On the other hand, it can be seen as an advantage that only the more severe morbidity was adjusted for, especially as morbidity seldom is accounted for in studies of associations between sick leave and premature deaths. That shorter sick-leave spells (<14 days) for employees could not be included can also be regarded both as a limitation and a strength. However, other studies have shown associations also with short-term sickness absence\n[7,10], which means that we rather have an under- than an over-estimation of results. Another limitation is that we could not exclude the sick-leave days from sick-leave spells shorter than 15 days - that is, days generally generated by unemployed people who had their sick-leave spells reimbursed by the Social Insurance Agency already from day 2. This means that there is an overrepresentation of unemployed among those in the category of 1-15 sick-leave days. However, most sickness absentees are working, and some of the unemployed do not have unemployment benefits- due to not having had a paid work lasting for 12 months or due to having been unemployed for too long time. This means that both in the reference group and in the group of fewer sick-leave days, there was a slight overrepresentation of unemployed. Future studies are needed to gain more knowledge on this. Despite the considerably large dataset including a number of important confounders, information on some potential confounders such as self-reports on health behavior including smoking and alcohol consumption could have been of importance. Moreover, the study was carried out using the year 1995 as baseline. Future studies are warranted to include cohorts based on other time periods and using other measures of sickness absence\n[35,36]. 
Moreover, the use of a two-year washout period can be questioned - both as being too long and as being too short - the effect of using different wash-out periods needs to be investigated in more detailed studies.", "There was a clear association between sickness absence and all-cause and cause-specific mortality, even for a relatively short number of sick-leave days, also when adjusting for morbidity. Moreover, the higher risks were about the same for women and men, although there are higher risks for women to become sickness absent. As sickness certification of patients is common in health care, more knowledge on possible mechanisms behind the results is warranted.", "The authors declare that they have no competing interests.", "EMR and KA originated the idea. GWR analysed the data in consultation with KA, CL, and EMR. EB wrote the first and subsequent drafts of the manuscript, with important intellectual input from all the co-authors. All authors contributed in designing the study and to the interpretation of the results and to the writing and approval of the final article.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/14/733/prepub\n" ]
[ null, "methods", null, null, null, "results", "discussion", null, null, "conclusions", null, null, null ]
[ "Mortality", "Sick-leave days", "Socioeconomic status", "Gender", "Morbidity", "Inpatient care" ]
Background: Today, the number of studies on risk factors for sickness absence has increased considerably; however, the number of studies on the future situation of sickness absentees is still very limited [1]. Most of the few studies conducted so far have focused on the extent to which sickness absence can predict all-cause and cause-specific mortality in occupational [2,3] or population-based cohorts [2-8], with different follow-up periods. The risk of premature death varies somewhat with the measure of sickness absence used; nevertheless, an excess rate of premature death was found in all studies [2-10]. There is a need to test those results in a larger, population-based cohort with a long follow-up period. In welfare countries, sickness certification of patients is a common measure in healthcare, which is why more knowledge is warranted on associations between sickness absence and premature death. For instance, it is important to disentangle to what extent excess risks are merely due to a greater occurrence of morbidity among people who are sickness absent. There are several ways to measure morbidity; in research, self-reports are often used [5,6,11]. Data on inpatient care, that is, being hospitalized, could be considered data on more severe morbidity, as well as morbidity that has been verified by physicians and is therefore more valid. Also, the association between sickness absence and subsequent deaths might be due to the very disease that leads to the sickness absence. To account for that when trying to disentangle the associations between sickness absence and morbidity, a so-called wash-out period is sometimes introduced to exclude all deaths occurring within a near time frame, e.g., a two-year wash-out period [5]. Moreover, a great number of studies have shown gender differences in morbidity, in sickness absence, as well as in mortality [12-15]. There are several reasons to believe that the mechanisms leading to morbidity, to sickness absence, and to mortality might differ to some extent between the genders and that such aspects might not be visible when combining data for women and men, which is why gender-specific analyses are recommended [16-20]. The aim of this study was to investigate the associations between number of sick-leave days and future all-cause and cause-specific mortality among women and men, adjusting for morbidity and socioeconomic status, and also taking into account a two-year wash-out period for the relative risk of mortality. Methods: Study population A population-based prospective cohort study was conducted with an eleven-year follow-up. The cohort included all individuals aged 20–64 years, registered in Sweden on 31 December 1994 and 1995, and not on disability or old-age pension in 1995 (N = 4 669 235, 49% women). The cohort was identified from Statistics Sweden’s integrated population-based database for labor-market research (LISA). Information regarding sick leave, disability pension, and other sociodemographic factors in 1995 was also obtained from this database. Further, date and cause of death were obtained from the Cause of Death Register, held by the National Board of Health and Welfare. The National Patient Register, also held by the National Board of Health and Welfare, was used to obtain information on inpatient care. A population-based prospective cohort study was conducted with an eleven-year follow-up. 
The cohort included all individuals aged 20–64 years, registered in Sweden on 31 December 1994 and 1995, and not on disability or old-age pension in 1995 (N = 4 669 235, 49% women). The cohort was identified from Statistics Sweden’s integrated population-based database for labor-market research (LISA). Information regarding sick leave, disability pension, and other sociodemographic factors in 1995 was also obtained from this database. Further, date and cause of death were obtained from the Cause of Death Register, held by the National Board of Health and Welfare. The National Patient Register, also held by the National Board of Health and Welfare, was used to obtain information on inpatient care. Variables All sick-leave days in 1995 reimbursed by the National Social Insurance Agency were included and categorized into five groups: 0 days (reference category), 1-15 days, 16-75 days, 76-165 days, and 166-365 days. From the age of 16 years, all Swedish residents with income from work, unemployment benefits, or student benefits are covered by the national sickness absence insurance regime and can be sickness absent with benefits if unable to work due to disease or injury. Benefits cover up to 80% of lost income. The first day of a sick-leave spell is a qualifying day, without any benefits. The first seven days can be self-certified; thereafter, a physician's certificate is needed. For those employed, the employer usually paid for the first 14 days of the sick-leave spell; those days are not registered in LISA and thus not included in this study. Unemployed people whose morbidity prevented them from seeking work had sickness benefits paid by the Social Insurance Agency from the second day of the sick-leave spell. This means that for them, shorter sick-leave spells were also included. For most of the absentees, the category 1-15 days means that they had been sickness absent for 15-30 days. Information about inpatient care was obtained for the years 1990-95, excluding hospitalization due to childbirth. The median number of days of inpatient care for those who were hospitalized was 5 days for the six-year period. Inpatient care days were categorized as: 0 days, 1-5 days, and more than 5 days. In the analyses, age was used as a continuous variable and educational level was categorized into four groups: elementary school (9 years or less of schooling), secondary (10-12 years), university level (more than 12 years), or information missing. Type of place of residence was divided into a) large cities (Stockholm, Göteborg, and Malmö), b) middle-sized cities: places with more than 90 000 residents within 30 km from the municipal center, and c) rural municipalities (all other areas). Region of birth was categorized into Sweden, other Nordic country, other EU25, and other countries. As outcome measures, we studied all-cause mortality for the years 1996-2006 as well as cause-specific mortality from circulatory diseases, cancer, and suicide, respectively. All sick-leave days in 1995 reimbursed by the National Social Insurance Agency were included and categorized into five groups: 0 days (reference category), 1-15 days, 16-75 days, 76-165 days, and 166-365 days. From the age of 16 years, all Swedish residents with income from work, unemployment benefits, or student benefits are covered by the national sickness absence insurance regime and can be sickness absent with benefits if unable to work due to disease or injury. Benefits cover up to 80% of lost income. 
The first day of a sick-leave spell is a qualifying day, without any benefits. The first seven days can be self-certified; thereafter, a physician's certificate is needed. For those employed, the employer usually paid for the first 14 days of the sick-leave spell; those days are not registered in LISA and thus not included in this study. Unemployed people whose morbidity prevented them from seeking work had sickness benefits paid by the Social Insurance Agency from the second day of the sick-leave spell. This means that for them, shorter sick-leave spells were also included. For most of the absentees, the category 1-15 days means that they had been sickness absent for 15-30 days. Information about inpatient care was obtained for the years 1990-95, excluding hospitalization due to childbirth. The median number of days of inpatient care for those who were hospitalized was 5 days for the six-year period. Inpatient care days were categorized as: 0 days, 1-5 days, and more than 5 days. In the analyses, age was used as a continuous variable and educational level was categorized into four groups: elementary school (9 years or less of schooling), secondary (10-12 years), university level (more than 12 years), or information missing. Type of place of residence was divided into a) large cities (Stockholm, Göteborg, and Malmö), b) middle-sized cities: places with more than 90 000 residents within 30 km from the municipal center, and c) rural municipalities (all other areas). Region of birth was categorized into Sweden, other Nordic country, other EU25, and other countries. As outcome measures, we studied all-cause mortality for the years 1996-2006 as well as cause-specific mortality from circulatory diseases, cancer, and suicide, respectively. Statistical analyses Multivariate analyses were conducted using Poisson regression with mortality as the dependent variable. The relative risk (RR) of mortality was estimated with a 95% confidence interval (CI). Follow-up time was assessed by adding up the months the individuals were alive and residents of Sweden during the 11-year follow-up period, 1996–2006. 
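To make the estimation step concrete, the following is a minimal sketch in Python/statsmodels (the study itself used SAS 9.3) of a Poisson rate model with a person-time offset, from which relative risks and 95% CIs are obtained as exponentiated coefficients. The file name and column names (died, months_followup, sickleave_cat, and the covariates) are hypothetical placeholders, not variables from the study.

```python
# Minimal sketch (Python/statsmodels rather than the SAS 9.3 actually used).
# All file and column names below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort_1995.csv")  # one row per individual (hypothetical)

# Model IV-style specification: sick-leave category plus age, sociodemographics,
# and inpatient care; log(person-time) as offset turns death counts into rates.
fit = smf.poisson(
    "died ~ C(sickleave_cat, Treatment('0 days')) + age"
    " + C(education) + C(residence) + C(birth_region) + C(inpatient_cat)",
    data=df,
    offset=np.log(df["months_followup"]),
).fit()

# Relative risks with 95% CIs are the exponentiated coefficients.
rr = np.exp(fit.params).rename("RR")
ci = np.exp(fit.conf_int()).rename(columns={0: "CI 2.5%", 1: "CI 97.5%"})
print(pd.concat([rr, ci], axis=1))

# Model V (wash-out): keep only those alive on 1 January 1998 and count
# person-time from 1998-2006 before refitting, e.g.
# df_v = df[df["alive_1998"]].assign(months_followup=df["months_1998_2006"])
```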
The associations between sick leave in 1995 and mortality were analyzed in five regression models. The reference group comprised individuals without sick-leave days in 1995. In model I, adjustments were made for age. In a second model (model II), we made additional adjustments for socio-demographic factors, i.e., educational level, type of place of residence, and region of birth. In model III, we adjusted for inpatient care in 1990-1995. Model IV included all covariates. In a fifth model, in an attempt to rule out that observed associations were mainly due to an excess rate of sick leave shortly before death, a wash-out period of two years was used; that is, follow-up started in the third year after baseline (1998-2006). This included those still alive on 1 January 1998, with the same adjustments as in model IV. Statistical analyses were carried out using the SAS software package, version 9.3. In accordance with the aim, all analyses were stratified by sex. The project was approved by the Regional Ethical Review Board of Stockholm (dnr 2007/762-31). Study population: A population-based prospective cohort study was conducted with an eleven-year follow-up. The cohort included all individuals aged 20–64 years, registered in Sweden on 31 December 1994 and 1995, and not on disability or old-age pension in 1995 (N = 4 669 235, 49% women). The cohort was identified from Statistics Sweden’s integrated population-based database for labor-market research (LISA). Information regarding sick leave, disability pension, and other sociodemographic factors in 1995 was also obtained from this database. Further, date and cause of death were obtained from the Cause of Death Register, held by the National Board of Health and Welfare. The National Patient Register, also held by the National Board of Health and Welfare, was used to obtain information on inpatient care. Variables: All sick-leave days in 1995 reimbursed by the National Social Insurance Agency were included and categorized into five groups: 0 days (reference category), 1-15 days, 16-75 days, 76-165 days, and 166-365 days. From the age of 16 years, all Swedish residents with income from work, unemployment benefits, or student benefits are covered by the national sickness absence insurance regime and can be sickness absent with benefits if unable to work due to disease or injury. Benefits cover up to 80% of lost income. The first day of a sick-leave spell is a qualifying day, without any benefits. The first seven days can be self-certified; thereafter, a physician's certificate is needed. For those employed, the employer usually paid for the first 14 days of the sick-leave spell; those days are not registered in LISA and thus not included in this study. Unemployed people whose morbidity prevented them from seeking work had sickness benefits paid by the Social Insurance Agency from the second day of the sick-leave spell. This means that for them, shorter sick-leave spells were also included. For most of the absentees, the category 1-15 days means that they had been sickness absent for 15-30 days. Information about inpatient care was obtained for the years 1990-95, excluding hospitalization due to childbirth. The median number of days of inpatient care for those who were hospitalized was 5 days for the six-year period. Inpatient care days were categorized as: 0 days, 1-5 days, and more than 5 days. 
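As an illustration of these groupings, here is a small pandas sketch of the exposure and covariate categorizations described above; the column names are invented for the example and are not from the study's data files.

```python
# Hypothetical pandas sketch of the groupings described above; column names
# are invented for the example.
import pandas as pd

df = pd.read_csv("cohort_1995.csv")  # hypothetical analysis file

# Sick-leave days in 1995: 0 (reference), 1-15, 16-75, 76-165, 166-365.
df["sickleave_cat"] = pd.cut(
    df["sickleave_days_1995"],
    bins=[-1, 0, 15, 75, 165, 365],
    labels=["0 days", "1-15", "16-75", "76-165", "166-365"],
)

# Inpatient-care days 1990-95: 0, 1-5, and more than 5.
df["inpatient_cat"] = pd.cut(
    df["inpatient_days_1990_95"],
    bins=[-1, 0, 5, float("inf")],
    labels=["0 days", "1-5", ">5"],
)

# Education: elementary (<=9 years), secondary (10-12), university (>12),
# with a separate category for missing information.
df["education"] = (
    pd.cut(df["school_years"], bins=[-1, 9, 12, float("inf")],
           labels=["elementary", "secondary", "university"])
    .cat.add_categories("missing")
    .fillna("missing")
)
```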
In the analyses, age was used as a continuous variable and educational level was categorized into four groups: elementary school (9 years or less of schooling), secondary (10-12 years), university level (more than 12 years), or information missing. Type of place of residence was divided into a) large cities (Stockholm, Göteborg, and Malmö), b) middle-sized cities: places with more than 90 000 residents within 30 km from the municipal center, and c) rural municipalities (all other areas). Region of birth was categorized into Sweden, other Nordic country, other EU25, and other countries. As outcome measures, we studied all-cause mortality for the years 1996-2006 as well as cause-specific mortality from circulatory diseases, cancer, and suicide, respectively. Statistical analyses: Multivariate analyses were conducted using Poisson regression with mortality as the dependent variable. The relative risk (RR) of mortality was estimated with a 95% confidence interval (CI). Follow-up time was assessed by adding up the months the individuals were alive and residents of Sweden during the 11-year follow-up period, 1996–2006. The associations between sick leave in 1995 and mortality were analyzed in five regression models. The reference group comprised individuals without sick-leave days in 1995. In model I, adjustments were made for age. In a second model (model II), we made additional adjustments for socio-demographic factors, i.e., educational level, type of place of residence, and region of birth. In model III, we adjusted for inpatient care in 1990-1995. Model IV included all covariates. In a fifth model, in an attempt to rule out that observed associations were mainly due to an excess rate of sick leave shortly before death, a wash-out period of two years was used; that is, follow-up started in the third year after baseline (1998-2006). This included those still alive on 1 January 1998, with the same adjustments as in model IV. Statistical analyses were carried out using the SAS software package, version 9.3. In accordance with the aim, all analyses were stratified by sex. The project was approved by the Regional Ethical Review Board of Stockholm (dnr 2007/762-31). Results: In this cohort of 4.7 million people of working age, the mean age was about 40 years among both women and men. A slightly higher proportion of the men than of the women had a lower educational level (Table 1). About 70% of the women and 78% of the men had no inpatient care at all during 1990–1995. A higher proportion of the women had had longer hospitalizations. Also, a higher proportion of the women had been sickness absent in 1995, and the mean number of sick-leave days during 1995 was 10.6 days in women and 8.0 days in men (not shown in table). Characteristics of the cohort of 4 669 235 women and men aged 20–64 and living in Sweden in 1995. (1) Low = ≤9 years; Medium = 10–12 years; High = 13+ years. (2) Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell. The relative risks (RR) of all-cause mortality displayed a gradual increase with increasing number of sick-leave days in 1995 among both women and men (Table 2). In the age-adjusted models, women and men with the most sick-leave days (166–365 days) had a more than threefold mortality risk (women: RR 3.48, 95% CI 3.37-3.60; men: RR 3.29, CI 3.20-3.39) in comparison with women and men, respectively, without any sick-leave days reimbursed by the Social Insurance Agency in 1995. Adjustments for socio-demographics did not substantially change the estimates. 
When inpatient care was taken into account (model III), the excess risks generally decreased, but the clear gradients remained. In the fully adjusted models (model IV), the risks decreased further. Using a "wash-out" period of two years in model V, i.e., starting the follow-up in 1998, decreased to some extent the excess death risks associated with the number of sick-leave days. However, evident associations remained, with a pronounced gradient. Relative risk (RR) for all-cause mortality among women and men on sick leave, with adjustments in four models and in a fifth model with a two-year wash-out period. (1) Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell. Model I: adjustment for age. Model II: adjustment for age and education, type of place of residence, and region of birth. Model III: adjustment for age and number of days in hospital (categorized). Model IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth. Model V: model IV with a two-year wash-out period. The RRs of mortality due to circulatory diseases are presented in Table 3. Although risk estimates were lower compared to the RRs of all-cause mortality, a gradual increase with increasing number of sick-leave days among both women and men emerged. In a similar way, the risk estimates decreased after adjusting for several potential confounders. Even in the fully adjusted model (model IV), both women and men with more than 75 sick-leave days had a nearly 50% higher risk of premature death due to a circulatory disease during the follow-up period than those without such sick-leave days. Introducing a wash-out period did not change the adjusted RRs of premature death from circulatory diseases. Relative risk (RR) of premature death from circulatory diseases among women and men on sick leave, with adjustments in four models and in a fifth model with a two-year wash-out period. (1) Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell. Model I: adjustment for age. Model II: adjustment for age and education, type of place of residence, and region of birth. Model III: adjustment for age and number of days in hospital (categorized). Model IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth. Model V: model IV with a two-year wash-out period. The associations of number of sick-leave days with subsequent cancer-related death were similar (Table 4). Both women and men with at least one sick-leave day had a higher risk of premature death from cancer compared to those with no sick-leave days in 1995. In contrast to the analyses on death from circulatory diseases, the wash-out period decreased the adjusted RRs of death. Relative risk (RR) of premature death from cancer diagnoses among women and men on sick leave, with adjustments in four models and in a fifth model with a two-year wash-out period. (1) Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell. Model I: adjustment for age. Model II: adjustment for age and education, type of place of residence, and region of birth. Model III: adjustment for age and number of days in hospital (categorized). Model IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth. Model V: model IV with a two-year wash-out period. 
The analyses presented in Table 5 show that sick leave was associated with a higher risk of suicide in both women and men. The RR for suicide increased in a graded fashion with more sick-leave days in both women and men, even after adjustment for several potential confounding factors. Adjusting for inpatient care decreased the estimates significantly for both sexes. Taking a wash-out period of two years into consideration decreased the estimates somewhat for both women and men. Relative risk (RR) of premature death from suicide among women and men on sick leave, with adjustments in four models and in a fifth model with a two-year wash-out period. (1) Days reimbursed by the Social Insurance Agency, usually from the 15th day of a sick-leave spell. Model I: adjustment for age. Model II: adjustment for age and education, type of place of residence, and region of birth. Model III: adjustment for age and number of days in hospital (categorized). Model IV: adjustment for age, number of days in hospital (categorized), education, type of place of residence, and region of birth. Model V: model IV with a two-year wash-out period. Discussion: The results from this longitudinal study of 4.7 million women and men from the general population of working age clearly demonstrated a gradual increase of all-cause and cause-specific premature death with increasing number of sick-leave days. Part of the mortality differences in the multivariate analyses was accounted for by differences in age and sociodemographics, and particularly by morbidity, as indicated by inpatient care. Results were similar for women and men. The two-year wash-out period had only a minor effect on the risk estimates. Individuals who had been sickness absent, even for only a few weeks, were found to have a higher risk of premature death, despite the fact that even individuals in the reference group might have been sickness absent later on, e.g., in the year following the year of observation, and despite the very long follow-up time of 11 years. More knowledge is warranted on these associations, e.g., in different populations and time periods and for different diagnoses. It might be argued that an association between sickness absence and premature death is expected, as those on sick leave are so due to the underlying morbidity. However, the ill-health content of sick leave has often been questioned in the general debate [21,22]. Also, most sickness absences are due to musculoskeletal disorders (disorders that are hardly ever a cause of death) or to minor mental disorders, which are also very seldom a cause of death, with the exception of suicide [13,21,23]. Many of the very short sick-leave spells are due to upper respiratory infections [13]. However, few of those are included in our database, because the first 14 days of a sick-leave spell are paid by the employer. The main causes of death in Sweden, as well as in other Western countries, are circulatory disease and cancer; however, those diagnoses account for only a small percentage of sick-leave diagnoses [24]. Other hypotheses for the identified associations are that sick leave might be a marker of other factors, or that sick leave per se is a risk factor for morbidity or mortality. Different possible negative consequences of being sickness absent have been described in the literature, e.g., 
negative changes in lifestyle (alcohol, exercise, diet, tobacco), in social interactions, in personal finances or work career, and in the prognosis of the disease underlying the absence or of other diseases (e.g., depression) [1,25]. The higher risk of premature death with increasing number of sick-leave days is in line with previous studies [4,5,7,9]. As all these studies have somewhat different ways of defining sickness absence, the results from these studies cannot easily be compared. Still, findings from our study, based on a very large population-based database, point in the same direction as previous studies and warrant follow-ups using more specific research questions. In our study, socio-demographic factors did not seem to explain much of the higher mortality risks. There is massive evidence on the associations between socio-economic status and mortality [12,26,27]. We used educational level as a proxy for socioeconomic status rather than income, in an attempt not to introduce unnecessary gender bias (as women are paid less and a larger proportion of women work part time). In general, people of lower socioeconomic status have a higher risk of sickness absence [13,28]; however, possible effects of sickness absence regarding premature death do not seem to differ between socioeconomic groups. The same can be said about gender; women have a higher risk of becoming sickness absent; however, we generally found no gender differences regarding premature death among sickness absentees. It seems that the possible negative ‘effects’ of sickness absence do not vary between genders or socioeconomic groups. Our results clearly indicate a higher RR of premature death due to circulatory diseases among women and men on sick leave, especially for those who had more than 15 sick-leave days in 1995, that is, mainly more than one month’s sick leave. This is in line with two previous studies based on occupational groups, finding that long-term sick leave is a risk factor for cardiovascular mortality [3,7]. Our results also correspond with another finding from these two studies [3,7], namely that sick leave was associated with cancer-related mortality. We found an elevated suicide risk in women and men who had been on sick leave. The suicide risk increased in a graded manner as the number of sick-leave days increased, for both women and men. A few other studies have also found sick leave to be a risk factor for suicide [7,29,30]. In one of them, a case–control study with a small number of women, sickness absence was a risk factor only for men [30]. It is well known that, in most Western countries, men have higher suicide rates than women [31,32]. However, here we studied the relative risks of suicide among sickness-absent women and men, respectively, and found that these did not differ much between the genders. Strengths The strengths of this study are the population-based, prospective cohort design, with a very long follow-up time, the very large number of included individuals, the high quality of the nationwide register data, e.g., regarding completeness and validity [33,34], and that there was no loss to follow-up. The size of the cohort (4.7 million) also provided sufficient power for gender-stratified analyses regarding rare outcomes such as suicide. As both the exposure and the outcome were based on register data, there was no recall bias. 
Other strengths are that most of the sick-leave days as well as all hospitalizations were certified by a physician and that all individuals not at risk for sickness absence in 1995 were excluded, that is, those on disability pension. The strengths of this study are the population-based, prospective cohort design, with a very long follow-up time, the very large number of included individuals, the high quality of the nationwide register data, e.g., regarding completeness and validity [33,34], and that there was no loss to follow-up. The size of the cohort (4.7 million) also provided sufficient power for gender-stratified analyses regarding rare outcomes such as suicide. As both the exposure and the outcome were based on register data, there was no recall bias. Other strengths are that most of the sick-leave days as well as all hospitalizations were certified by a physician and that all individuals not at risk for sickness absence in 1995 were excluded, that is, those on disability pension. Limitations The available information about health conditions was limited to morbidity leading to hospital care, which means that we were not able to account for all types of morbidity in the analyses. Hence, part of the higher risks may be attributable to other types of impaired health among sick-listed people. On the other hand, it can be seen as an advantage that only the more severe morbidity was adjusted for, especially as morbidity is seldom accounted for in studies of associations between sick leave and premature death. The fact that shorter sick-leave spells (<14 days) among employees could not be included can also be regarded as both a limitation and a strength. However, other studies have shown associations also with short-term sickness absence [7,10], which means that our results are more likely an under- than an over-estimation. Another limitation is that we could not exclude the sick-leave days from sick-leave spells shorter than 15 days - that is, days generally generated by unemployed people, who had their sick-leave spells reimbursed by the Social Insurance Agency already from day 2. This means that there is an overrepresentation of unemployed people among those in the category of 1-15 sick-leave days. However, most sickness absentees are working, and some of the unemployed do not have unemployment benefits, either because they have not had paid work lasting 12 months or because they have been unemployed for too long. This means that both in the reference group and in the group with fewer sick-leave days, there was a slight overrepresentation of unemployed people. Future studies are needed to gain more knowledge on this. Despite the considerably large dataset including a number of important confounders, information on some potential confounders, such as self-reported health behavior including smoking and alcohol consumption, would have been of value. Moreover, the study was carried out using the year 1995 as baseline. Future studies are warranted to include cohorts based on other time periods and using other measures of sickness absence [35,36]. Moreover, the use of a two-year wash-out period can be questioned, both as too long and as too short; the effect of using different wash-out periods needs to be investigated in more detailed studies. The available information about health conditions was limited to morbidity leading to hospital care, which means that we were not able to account for all types of morbidity in the analyses. 
Hence, part of the higher risks may be attributable to other types of impaired health among sick-listed people. On the other hand, it can be seen as an advantage that only the more severe morbidity was adjusted for, especially as morbidity is seldom accounted for in studies of associations between sick leave and premature death. The fact that shorter sick-leave spells (<14 days) among employees could not be included can also be regarded as both a limitation and a strength. However, other studies have shown associations also with short-term sickness absence [7,10], which means that our results are more likely an under- than an over-estimation. Another limitation is that we could not exclude the sick-leave days from sick-leave spells shorter than 15 days - that is, days generally generated by unemployed people, who had their sick-leave spells reimbursed by the Social Insurance Agency already from day 2. This means that there is an overrepresentation of unemployed people among those in the category of 1-15 sick-leave days. However, most sickness absentees are working, and some of the unemployed do not have unemployment benefits, either because they have not had paid work lasting 12 months or because they have been unemployed for too long. This means that both in the reference group and in the group with fewer sick-leave days, there was a slight overrepresentation of unemployed people. Future studies are needed to gain more knowledge on this. Despite the considerably large dataset including a number of important confounders, information on some potential confounders, such as self-reported health behavior including smoking and alcohol consumption, would have been of value. Moreover, the study was carried out using the year 1995 as baseline. Future studies are warranted to include cohorts based on other time periods and using other measures of sickness absence [35,36]. Moreover, the use of a two-year wash-out period can be questioned, both as too long and as too short; the effect of using different wash-out periods needs to be investigated in more detailed studies. Strengths: The strengths of this study are the population-based, prospective cohort design, with a very long follow-up time, the very large number of included individuals, the high quality of the nationwide register data, e.g., regarding completeness and validity [33,34], and that there was no loss to follow-up. The size of the cohort (4.7 million) also provided sufficient power for gender-stratified analyses regarding rare outcomes such as suicide. As both the exposure and the outcome were based on register data, there was no recall bias. Other strengths are that most of the sick-leave days as well as all hospitalizations were certified by a physician and that all individuals not at risk for sickness absence in 1995 were excluded, that is, those on disability pension. Limitations: The available information about health conditions was limited to morbidity leading to hospital care, which means that we were not able to account for all types of morbidity in the analyses. Hence, part of the higher risks may be attributable to other types of impaired health among sick-listed people. On the other hand, it can be seen as an advantage that only the more severe morbidity was adjusted for, especially as morbidity is seldom accounted for in studies of associations between sick leave and premature death. The fact that shorter sick-leave spells (<14 days) among employees could not be included can also be regarded as both a limitation and a strength. 
However, other studies have shown associations also with short-term sickness absence [7,10], which means that our results are more likely an under- than an over-estimation. Another limitation is that we could not exclude the sick-leave days from sick-leave spells shorter than 15 days - that is, days generally generated by unemployed people, who had their sick-leave spells reimbursed by the Social Insurance Agency already from day 2. This means that there is an overrepresentation of unemployed people among those in the category of 1-15 sick-leave days. However, most sickness absentees are working, and some of the unemployed do not have unemployment benefits, either because they have not had paid work lasting 12 months or because they have been unemployed for too long. This means that both in the reference group and in the group with fewer sick-leave days, there was a slight overrepresentation of unemployed people. Future studies are needed to gain more knowledge on this. Despite the considerably large dataset including a number of important confounders, information on some potential confounders, such as self-reported health behavior including smoking and alcohol consumption, would have been of value. Moreover, the study was carried out using the year 1995 as baseline. Future studies are warranted to include cohorts based on other time periods and using other measures of sickness absence [35,36]. Moreover, the use of a two-year wash-out period can be questioned, both as too long and as too short; the effect of using different wash-out periods needs to be investigated in more detailed studies. Conclusions: There was a clear association between sickness absence and all-cause and cause-specific mortality, even for a relatively low number of sick-leave days, and even when adjusting for morbidity. Moreover, the higher risks were about the same for women and men, although women have a higher risk of becoming sickness absent. As sickness certification of patients is common in health care, more knowledge on possible mechanisms behind the results is warranted. Competing interests: The authors declare that they have no competing interests. Author’s contribution: EMR and KA originated the idea. GWR analysed the data in consultation with KA, CL, and EMR. EB wrote the first and subsequent drafts of the manuscript, with important intellectual input from all the co-authors. All authors contributed to the design of the study, the interpretation of the results, and the writing and approval of the final article. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/14/733/prepub
Background: As the number of studies on the future situation of sickness absentees is still very limited, we aimed to investigate the association between number of sick-leave days and future all-cause and cause-specific mortality among women and men. Methods: A cohort of 2 275 987 women and 2 393 248 men, aged 20-64 years in 1995, was followed from 1996 to 2006 with regard to mortality. Data were obtained from linked authority-administered registers. The relative risks (RR) and 95% confidence intervals (CI) of mortality, with and without a 2-year wash-out period, were estimated by multivariate Poisson regression analyses. All analyses were stratified by sex, adjusting for sociodemographics and inpatient care. Results: A gradually higher all-cause mortality risk occurred with increasing number of sick-leave days in 1995, among both women (RR 1.11; CI 1.07-1.15 for those with 1-15 sick-leave days to RR 2.45; CI 2.36-2.53 among those with 166-365 days) and men (RR 1.20; CI 1.17-1.24 to RR 1.91; CI 1.85-1.97). Multivariate risk estimates were comparable for the different causes of death (circulatory disease, cancer, and suicide). The two-year wash-out period had only a minor effect on the risk estimates. Conclusions: Even a low number of sick-leave days was associated with a higher risk of premature death in the following 11 years, even when adjusting for morbidity. This was the case for both women and men, and also for cause-specific mortality. More knowledge is warranted on the mechanisms leading to higher mortality risks among sickness absentees, as sickness certification is a common measure in health care and most sick leave is due to diagnoses that people do not die from.
Background: Today, the number of studies on risk factors for sickness absence has increased considerably; however, the number of studies on the future situation of sickness absentees is still very limited [1]. Most of the few studies conducted so far have focused on the extent to which sickness absence can predict all-cause and cause-specific mortality in occupational [2,3] or population-based cohorts [2-8], with different follow-up periods. The risk of premature death varies somewhat with the measure of sickness absence used; nevertheless, an excess rate of premature death was found in all studies [2-10]. There is a need to test those results in a larger, population-based cohort with a long follow-up period. In welfare countries, sickness certification of patients is a common measure in healthcare, which is why more knowledge is warranted on associations between sickness absence and premature death. For instance, it is important to disentangle to what extent excess risks are merely due to a greater occurrence of morbidity among people who are sickness absent. There are several ways to measure morbidity; in research, self-reports are often used [5,6,11]. Data on inpatient care, that is, being hospitalized, could be considered data on more severe morbidity, as well as morbidity that has been verified by physicians and is therefore more valid. Also, the association between sickness absence and subsequent deaths might be due to the very disease that leads to the sickness absence. To account for that when trying to disentangle the associations between sickness absence and morbidity, a so-called wash-out period is sometimes introduced to exclude all deaths occurring within a near time frame, e.g., a two-year wash-out period [5]. Moreover, a great number of studies have shown gender differences in morbidity, in sickness absence, as well as in mortality [12-15]. There are several reasons to believe that the mechanisms leading to morbidity, to sickness absence, and to mortality might differ to some extent between the genders and that such aspects might not be visible when combining data for women and men, which is why gender-specific analyses are recommended [16-20]. The aim of this study was to investigate the associations between number of sick-leave days and future all-cause and cause-specific mortality among women and men, adjusting for morbidity and socioeconomic status, and also taking into account a two-year wash-out period for the relative risk of mortality. Conclusions: There was a clear association between sickness absence and all-cause and cause-specific mortality, even for a relatively low number of sick-leave days, and even when adjusting for morbidity. Moreover, the higher risks were about the same for women and men, although women have a higher risk of becoming sickness absent. As sickness certification of patients is common in health care, more knowledge on possible mechanisms behind the results is warranted.
Background: As the number of studies on the future situation of sickness absentees is still very limited, we aimed to investigate the association between number of sick-leave days and future all-cause and cause-specific mortality among women and men. Methods: A cohort of 2 275 987 women and 2 393 248 men, aged 20-64 years in 1995, was followed from 1996 to 2006 with regard to mortality. Data were obtained from linked authority-administered registers. The relative risks (RR) and 95% confidence intervals (CI) of mortality, with and without a 2-year wash-out period, were estimated by multivariate Poisson regression analyses. All analyses were stratified by sex, adjusting for sociodemographics and inpatient care. Results: A gradually higher all-cause mortality risk occurred with increasing number of sick-leave days in 1995, among both women (RR 1.11; CI 1.07-1.15 for those with 1-15 sick-leave days to RR 2.45; CI 2.36-2.53 among those with 166-365 days) and men (RR 1.20; CI 1.17-1.24 to RR 1.91; CI 1.85-1.97). Multivariate risk estimates were comparable for the different causes of death (circulatory disease, cancer, and suicide). The two-year wash-out period had only a minor effect on the risk estimates. Conclusions: Even a low number of sick-leave days was associated with a higher risk of premature death in the following 11 years, even when adjusting for morbidity. This was the case for both women and men, and also for cause-specific mortality. More knowledge is warranted on the mechanisms leading to higher mortality risks among sickness absentees, as sickness certification is a common measure in health care and most sick leave is due to diagnoses that people do not die from.
7,530
353
[ 490, 156, 486, 279, 147, 433, 10, 68, 16 ]
13
[ "days", "sick", "sick leave", "leave", "model", "sickness", "years", "women", "1995", "leave days" ]
[ "absence premature death", "measures sickness absence", "sickness absence predict", "risks sickness absent", "premature death sickness" ]
[CONTENT] Mortality | Sick-leave days | Socioeconomic status | Gender | Morbidity | Inpatient care [SUMMARY]
[CONTENT] Mortality | Sick-leave days | Socioeconomic status | Gender | Morbidity | Inpatient care [SUMMARY]
[CONTENT] Mortality | Sick-leave days | Socioeconomic status | Gender | Morbidity | Inpatient care [SUMMARY]
[CONTENT] Mortality | Sick-leave days | Socioeconomic status | Gender | Morbidity | Inpatient care [SUMMARY]
[CONTENT] Mortality | Sick-leave days | Socioeconomic status | Gender | Morbidity | Inpatient care [SUMMARY]
[CONTENT] Mortality | Sick-leave days | Socioeconomic status | Gender | Morbidity | Inpatient care [SUMMARY]
[CONTENT] Absenteeism | Adult | Cause of Death | Cohort Studies | Female | Humans | Male | Middle Aged | Mortality | Prospective Studies | Risk Assessment | Sex Factors | Sick Leave | Sweden [SUMMARY]
[CONTENT] Absenteeism | Adult | Cause of Death | Cohort Studies | Female | Humans | Male | Middle Aged | Mortality | Prospective Studies | Risk Assessment | Sex Factors | Sick Leave | Sweden [SUMMARY]
[CONTENT] Absenteeism | Adult | Cause of Death | Cohort Studies | Female | Humans | Male | Middle Aged | Mortality | Prospective Studies | Risk Assessment | Sex Factors | Sick Leave | Sweden [SUMMARY]
[CONTENT] Absenteeism | Adult | Cause of Death | Cohort Studies | Female | Humans | Male | Middle Aged | Mortality | Prospective Studies | Risk Assessment | Sex Factors | Sick Leave | Sweden [SUMMARY]
[CONTENT] Absenteeism | Adult | Cause of Death | Cohort Studies | Female | Humans | Male | Middle Aged | Mortality | Prospective Studies | Risk Assessment | Sex Factors | Sick Leave | Sweden [SUMMARY]
[CONTENT] Absenteeism | Adult | Cause of Death | Cohort Studies | Female | Humans | Male | Middle Aged | Mortality | Prospective Studies | Risk Assessment | Sex Factors | Sick Leave | Sweden [SUMMARY]
[CONTENT] absence premature death | measures sickness absence | sickness absence predict | risks sickness absent | premature death sickness [SUMMARY]
[CONTENT] absence premature death | measures sickness absence | sickness absence predict | risks sickness absent | premature death sickness [SUMMARY]
[CONTENT] absence premature death | measures sickness absence | sickness absence predict | risks sickness absent | premature death sickness [SUMMARY]
[CONTENT] absence premature death | measures sickness absence | sickness absence predict | risks sickness absent | premature death sickness [SUMMARY]
[CONTENT] absence premature death | measures sickness absence | sickness absence predict | risks sickness absent | premature death sickness [SUMMARY]
[CONTENT] absence premature death | measures sickness absence | sickness absence predict | risks sickness absent | premature death sickness [SUMMARY]
[CONTENT] days | sick | sick leave | leave | model | sickness | years | women | 1995 | leave days [SUMMARY]
[CONTENT] days | sick | sick leave | leave | model | sickness | years | women | 1995 | leave days [SUMMARY]
[CONTENT] days | sick | sick leave | leave | model | sickness | years | women | 1995 | leave days [SUMMARY]
[CONTENT] days | sick | sick leave | leave | model | sickness | years | women | 1995 | leave days [SUMMARY]
[CONTENT] days | sick | sick leave | leave | model | sickness | years | women | 1995 | leave days [SUMMARY]
[CONTENT] days | sick | sick leave | leave | model | sickness | years | women | 1995 | leave days [SUMMARY]
[CONTENT] sickness | absence | sickness absence | morbidity | studies | number studies | mortality | extent | wash period | premature death [SUMMARY]
[CONTENT] days | model | years | benefits | national | sick leave | leave | sick | 1995 | included [SUMMARY]
[CONTENT] model | adjustment age | men | adjustment | women | wash period | age | women men | days | sick [SUMMARY]
[CONTENT] higher risks women | risks women | higher risks | sickness | higher | risks | women | cause | absence cause cause specific | women men higher risks [SUMMARY]
[CONTENT] days | sick | sick leave | leave | sickness | model | years | studies | authors | morbidity [SUMMARY]
[CONTENT] days | sick | sick leave | leave | sickness | model | years | studies | authors | morbidity [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] 2 275 | 987 | 2 | 20-64 years | 1995 | 1996-2006 ||| ||| 95% | CI | 2-year | Poisson ||| [SUMMARY]
[CONTENT] 1995 | 1.11 | CI | 1.07-1.15 | 1-15 | RR 2.45 | CI | 2.36 | 166-365 days | 1.20 | CI | 1.17-1.24 | 1.91 | CI | 1.85-1.97 ||| ||| two-year [SUMMARY]
[CONTENT] 11 years ||| ||| [SUMMARY]
[CONTENT] ||| 2 275 | 987 | 2 | 20-64 years | 1995 | 1996-2006 ||| ||| 95% | CI | 2-year | Poisson ||| ||| 1995 | 1.11 | CI | 1.07-1.15 | 1-15 | RR 2.45 | CI | 2.36 | 166-365 days | 1.20 | CI | 1.17-1.24 | 1.91 | CI | 1.85-1.97 ||| ||| two-year ||| 11 years ||| ||| [SUMMARY]
[CONTENT] ||| 2 275 | 987 | 2 | 20-64 years | 1995 | 1996-2006 ||| ||| 95% | CI | 2-year | Poisson ||| ||| 1995 | 1.11 | CI | 1.07-1.15 | 1-15 | RR 2.45 | CI | 2.36 | 166-365 days | 1.20 | CI | 1.17-1.24 | 1.91 | CI | 1.85-1.97 ||| ||| two-year ||| 11 years ||| ||| [SUMMARY]
Atrial fibrillation alters the microRNA expression profiles of the left atria of patients with mitral stenosis.
24461008
Structural changes of the left and right atria associated with atrial fibrillation (AF) in mitral stenosis (MS) patients are well known, and alterations in microRNA (miRNA) expression profiles of the right atria have also been investigated. However, miRNA changes in the left atria still require delineation. This study evaluated alterations in miRNA expression profiles of left atrial tissues from MS patients with AF relative to those with normal sinus rhythm (NSR).
BACKGROUND
Sample tissues from left atrial appendages were obtained from 12 MS patients (6 with AF) during mitral valve replacement surgery. From these tissues, miRNA expression profiles were created and analyzed using a human miRNA microarray. Results were validated via reverse-transcription and quantitative PCR for 5 selected miRNAs. Potential miRNA targets were predicted and their functions and potential pathways analyzed via the miRFocus database.
METHODS
The expression levels of 22 miRNAs differed between the AF and NSR groups. Relative to NSR patients, in those with AF the expression levels of 45% (10/22) of these miRNAs were significantly higher, while those of the remaining 55% (12/22) were significantly lower. Potential miRNA targets and molecular pathways were identified.
RESULTS
AF alters the miRNA expression profiles of the left atria of MS patients. These findings may be useful for the biological understanding of AF in MS patients.
CONCLUSIONS
[ "Adult", "Atrial Appendage", "Atrial Fibrillation", "Case-Control Studies", "Female", "Gene Expression Profiling", "Gene Expression Regulation", "Gene Regulatory Networks", "Humans", "Male", "MicroRNAs", "Middle Aged", "Mitral Valve Stenosis", "Oligonucleotide Array Sequence Analysis", "Real-Time Polymerase Chain Reaction", "Reverse Transcriptase Polymerase Chain Reaction" ]
3909014
Background
Atrial fibrillation (AF) is characterized by an irregular and sometimes rapid heart rate, with symptoms that include palpitations and shortness of breath. AF is the most common cardiac arrhythmia observed in clinical practice and constitutes a risk factor for ischemic stroke [1]. Despite recent significant advances in the understanding of the mechanisms associated with AF, complexities in the etiology of atrial electrical dysfunction (including a genetic component [2]) and the subsequent associated arrhythmia have prevented definitive elucidation [3]. The progression from acute to persistent and then chronic AF is accompanied by changes in gene expression that lead to differences in protein expression and activity. MicroRNAs (miRNAs) are regulators of gene expression at the post-transcriptional level [4], and appear to have regulatory roles that underlie the pathophysiology of AF. Many studies have shown that miRNAs regulate key genetic functions in cardiovascular biology and are crucial to cardiac development [5] and to the pathogenesis of cardiac conditions such as hypertrophy/heart failure [6], remodeling [7], acute myocardial infarction [8], and myocardial ischemia-reperfusion injury [9]. Currently, there is a growing body of literature that indicates that many miRNAs are involved in AF through their target genes [7,10,11]. AF can be an isolated condition, but it often occurs concomitantly with other cardiovascular diseases such as hypertension, congestive heart failure, coronary artery disease, and valvular heart disease [12]. AF is also prevalent in mitral stenosis (MS; a consequence of rheumatic fever), affecting approximately 40% of all MS patients [13]. MS is among the major cardiovascular diseases in developing countries where rheumatic fever is less well controlled, and 50% or more of patients with severe MS have AF. Patients with both AF and MS have a 17.5-fold greater risk of stroke and a four-fold higher incidence of embolism compared with people with normal sinus rhythm (NSR) [14,15]. Structural changes of the left atria (LA) and right atria (RA) associated with AF in MS patients are well established [13,14]. Recent reports suggest that AF also alters the miRNA expression profiles in the RA of MS patients [16,17]. However, miRNA changes in the LA of MS patients with AF are still unknown. Given the complexity of the pathophysiology that may be associated with AF, we need a better understanding of the miRNA changes in the LA, which may help in designing and developing new therapeutic interventions. This study investigated alterations of miRNA expression profiles in LA tissues of MS patients with AF relative to MS patients with NSR.
Methods
The Human Ethics Committee of the First Affiliated Hospital of Sun Yat-sen University approved this study, and the investigation complied with the principles that govern the use of human tissues outlined in the Declaration of Helsinki. All patients gave informed consent before participating in the study. Human tissue preparation Left atrial appendage (LAA) tissue samples were obtained from MS patients, both in NSR (n = 6, without a history of AF) and with AF (n = 6, documented arrhythmia >6 months before surgery). The tissue samples were obtained at the time of mitral valve surgery, immediately snap-frozen in liquid nitrogen, and stored at -80°C until used. The diagnosis of AF was reached by evaluating medical records and 12-lead electrocardiogram findings. NSR patients had no history of using antiarrhythmic drugs and were screened to ensure that they had never experienced AF [18]. Preoperative 2-dimensional color transthoracic echocardiography was performed routinely on the patients. Preoperative functional status was recorded according to New York Heart Association (NYHA) classifications. Left atrial appendage (LAA) tissue samples were obtained from MS patients, both in NSR (n = 6, without a history of AF) and with AF (n = 6, documented arrhythmia >6 months before surgery). The tissue samples were obtained at the time of mitral valve surgery, immediately snap-frozen in liquid nitrogen, and stored at -80°C until used. The diagnosis of AF was reached by evaluating medical records and 12-lead electrocardiogram findings. NSR patients had no history of using antiarrhythmic drugs and were screened to ensure that they had never experienced AF [18]. Preoperative 2-dimensional color transthoracic echocardiography was performed routinely on the patients. Preoperative functional status was recorded according to New York Heart Association (NYHA) classifications. RNA isolation The total RNA from human LAA tissue samples was extracted using TRIzol reagent (Invitrogen) in accordance with the manufacturer's protocol. The RNA quality of each sample was determined using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) and immediately stored at -80°C. The total RNA from human LAA tissue samples was extracted using TRIzol reagent (Invitrogen) in accordance with the manufacturer's protocol. The RNA quality of each sample was determined using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) and immediately stored at -80°C. Microarray processing and analysis The miRNA microarray expression analysis was performed by LC Sciences (Houston, TX, USA) as described previously [19]. In brief, the assay began with a total RNA sample (2 to 5 μg). The total RNA was size-fractionated using a YM-100 Microcon centrifugal filter (Millipore, Billerica, MA). RNA sequences of <300 nt were isolated. These small RNAs were then extended at the 3′ end with a poly(A) tail using poly(A) polymerase, and then by ligation of an oligonucleotide tag to the poly(A) tail for later fluorescent dye staining. Hybridization was performed overnight on a μParaflo microfluidic chip using a micro-circulation pump (Atactic Technologies, Houston, TX). Each microfluidic chip contained detection probes and control probes. The detection probes were made in situ by photogenerated reagents. 
These probes consisted of chemically modified nucleotide coding sequences complementary to the target miRNAs (all 1921 human miRNAs listed in the Sanger miRBase, Release 18.0, http://microrna.sanger.ac.uk/sequences/) and a spacer segment of polyethylene glycol to extend the coding sequences away from the substrate. The hybridization melting temperatures were balanced by chemical modifications of the detection probes. Hybridization was performed using 100 μL of 6× saline-sodium phosphate-EDTA (SSPE) buffer (0.90 M NaCl, 60 mM Na2HPO4, 6 mM EDTA, pH 6.8) containing 25% formamide at 34°C. Fluorescence labeling with tag-specific Cy5 dye was used for post-hybridization detection. An Axon GenePix 4000B Microarray Scanner (Molecular Devices, Union City, CA) was used to collect the fluorescent images, which were digitized using Array-Pro image analysis software (Media Cybernetics, Bethesda, MD). Each miRNA was analyzed twice, and the controls were repeated 4-16 times.

Analysis of the microarray data was also performed at LC Sciences (see Additional file 1). The microarray data were analyzed by subtracting the background, and the signals were then normalized using a locally weighted regression scatterplot smoothing (LOWESS) filter as reported previously [20]. Detectable miRNAs were selected based on the following criteria: signal intensity >3-fold the background standard deviation, and spot coefficient of variation (CV) < 0.5, where CV = standard deviation/signal intensity. When repeating probes were present on the array, a transcript was listed as detectable only if the signals from at least 50% of the repeating probes were above the detection level. To identify miRNAs whose expression differed between the AF and NSR groups, statistical analysis was performed: the ratio of the two samples was calculated and expressed on a log2 scale (balanced) for each miRNA, the miRNAs were sorted according to their differential ratios, and t-test P-values were calculated. miRNAs with P-values < 0.05 were considered significantly differentially expressed.
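The detectability and differential-expression rules above are simple enough to express in a few lines. The sketch below is illustrative only and is not the LC Sciences pipeline: the background standard deviation, the example intensities, and the choice of a two-sample t-test are all assumptions.

```python
# Illustrative sketch of the stated filtering and differential-expression
# criteria. Group sizes, intensities, and the two-sample t-test are assumed.
import numpy as np
from scipy import stats

def detectable(signals, background_sd):
    """Signal intensity > 3x the background SD and spot CV < 0.5."""
    signals = np.asarray(signals, dtype=float)
    mean = signals.mean()
    cv = signals.std(ddof=1) / mean  # CV = standard deviation / signal intensity
    return mean > 3 * background_sd and cv < 0.5

def differential(nsr_signals, af_signals):
    """Balanced log2 ratio of group means plus a t-test P-value."""
    nsr = np.asarray(nsr_signals, dtype=float)
    af = np.asarray(af_signals, dtype=float)
    log2_ratio = np.log2(af.mean() / nsr.mean())
    # The paper reports t-test P-values; a two-sample test is assumed here.
    _, p_value = stats.ttest_ind(af, nsr)
    return log2_ratio, p_value

# Made-up intensities for one miRNA (n = 6 per group):
nsr = [5200, 4800, 5100, 4900, 5300, 5000]
af = [1400, 1600, 1500, 1300, 1700, 1450]
print(detectable(nsr, background_sd=150))        # True
ratio, p = differential(nsr, af)
print(f"log2(AF/NSR) = {ratio:.2f}, P = {p:.4f}")  # downregulated if ratio < 0
```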
Reverse transcription-real time quantitative PCR (RT-qPCR) validation of selected miRNAs
To validate the microarray results, a stem-loop RT-qPCR based on SYBR Green I was performed on selected differentially expressed miRNAs. The primers used are listed in Additional file 2. Total RNA was isolated using TRIzol reagent (Invitrogen) as described above. A single-stranded cDNA for each specific miRNA was generated by reverse transcription (RT) of 250 ng of total RNA using a miRNA-specific stem-loop RT primer. Briefly, an RT reaction mixture contained 250 ng of total RNA, 0.5 μL of 2 μM stem-loop RT primer, 1.0 μL of 5× RT buffer, 0.25 μL of 10 mM of each dNTP, 0.25 μL of 40 U/μL RNase inhibitor, and 0.5 μL of 200 U/μL Moloney murine leukemia virus (M-MLV) reverse transcriptase. An Eppendorf Mastercycler (Eppendorf, Hamburg, Germany) was used to perform the RT reaction under the following conditions: 42°C for 60 min, 70°C for 15 min, and a final hold at 4°C. After the RT reaction, qPCR was performed using an ABI PRISM 7900HT sequence-detection system (Applied Biosystems, Foster City, CA, USA) with the Platinum SYBR Green qPCR SuperMix-UDG (Invitrogen). In accordance with the manufacturer's instructions, a 20-μL PCR reaction mixture contained 0.5 μL of RT product, 10 μL of 2× SYBR Green Mix, 0.4 μL of ROX, 0.8 μL of 10 μM primer mix, and 8.3 μL of nuclease-free water. The reaction protocol was 95°C for 2 min, followed by 40 amplification cycles of 95°C for 15 s and 60°C for 30 s. All reactions were run in triplicate. To account for possible differences in the amount of starting RNA, miRNA expression was normalized to the small nuclear RNA RNU6B [21,22]. RT-qPCR data were represented by the cycle threshold (Ct) value, and the relative expression level (i.e., fold change) for each miRNA was calculated using the comparative cycle threshold (2^-ΔΔCt) method [19].
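The 2^-ΔΔCt calculation referenced above follows a standard form. A minimal sketch, with hypothetical Ct values and RNU6B as the normalizer as stated in the text:

```python
# Minimal sketch of the comparative cycle threshold (2^-ddCt) calculation.
# Ct values are hypothetical; RNU6B is the normalizer, as in the text.
def fold_change_ddct(ct_target_af, ct_rnu6b_af, ct_target_nsr, ct_rnu6b_nsr):
    """Relative expression of a miRNA in AF versus NSR by 2^-ddCt."""
    dct_af = ct_target_af - ct_rnu6b_af     # normalize to RNU6B within AF
    dct_nsr = ct_target_nsr - ct_rnu6b_nsr  # normalize to RNU6B within NSR
    ddct = dct_af - dct_nsr                 # ddCt = dCt(AF) - dCt(NSR)
    return 2 ** (-ddct)

# A miRNA whose Ct rises by ~2 cycles relative to RNU6B in AF is ~4-fold down:
print(fold_change_ddct(ct_target_af=27.0, ct_rnu6b_af=22.0,
                       ct_target_nsr=25.0, ct_rnu6b_nsr=22.0))  # 0.25
```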
Target prediction and function analysis
We used the miRFocus database (http://mirfocus.org/) to predict potential human miRNA target genes. miRFocus is an open-source web tool for rapid analysis of human miRNAs that provides comprehensive information including miRNA annotations, miRNA-target gene interactions, and correlations between miRNAs, diseases, and signaling pathways. It provides a full gene description and functional analysis for each target gene by combining the predicted target genes from other databases (TargetScan, miRanda, PicTar, MirTarget, and microT). In this study, only those genes that were predicted by two or more databases were considered candidates; the greater the number of databases predicting a given gene as a target, the more likely the miRNA-mRNA interaction is to be relevant [23]. The miRFocus program also identifies miRNA-enriched pathways, incorporating those from the Kyoto Encyclopedia of Genes and Genomes (KEGG), Biocarta, and Gene Ontology (GO) databases, using Fisher's exact test.
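The two-or-more-databases consensus rule and the Fisher's exact enrichment test lend themselves to a short sketch. Everything below (gene names, database hit sets, contingency counts) is invented for illustration; only the filtering rule and the test itself come from the text.

```python
# Sketch of the consensus rule: keep a predicted target only if two or more
# prediction databases agree. Gene names and hit sets are hypothetical.
from scipy.stats import fisher_exact

predictions = {
    "TargetScan": {"GENE_A", "GENE_B", "GENE_C"},
    "miRanda":    {"GENE_A", "GENE_C", "GENE_D"},
    "PicTar":     {"GENE_C"},
    "MirTarget":  {"GENE_B"},
    "microT":     {"GENE_A"},
}

votes = {}
for db, genes in predictions.items():
    for gene in genes:
        votes[gene] = votes.get(gene, 0) + 1

# Candidates are genes predicted by >= 2 databases; more votes, more confidence.
candidates = sorted(g for g, n in votes.items() if n >= 2)
print(candidates)  # ['GENE_A', 'GENE_B', 'GENE_C']

# Pathway enrichment by Fisher's exact test on a 2x2 table (counts invented):
# rows: in pathway / not in pathway; columns: predicted targets / background.
table = [[12, 88],      # 12 of 100 predicted targets fall in the pathway
         [300, 19600]]  # 300 of 19900 background genes fall in the pathway
_, p = fisher_exact(table, alternative="greater")
print(f"enrichment P = {p:.3g}")
```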
Statistical analyses
All data are presented as mean ± standard deviation and were analyzed with the paired t-test. Spearman's correlation coefficients were used to examine the association between validated miRNAs and left atrial size. P < 0.05 was considered statistically significant.
Results
Clinical characteristics of the NSR and AF patients
There were no significant differences in age, gender, or NYHA functional classification between the NSR and AF groups. Preoperative color Doppler echocardiography showed that the left atria of AF patients were significantly larger than those of NSR patients, as previously reported [24], while there were no differences in left ventricular end-diastolic diameter or ejection fraction between the two groups (Table 1).

Table 1. Clinical characteristics of the NSR and AF patients (n = 6, each). *P < 0.05 with respect to NSR patients.

miRNA expression profiles of LAA tissue from MS patients with and without AF
Of the 1898 human miRNAs analyzed, a total of 213 miRNAs were detected: NSR patients expressed 155 miRNAs, while AF patients expressed 208 miRNAs (Figure 1A). Among these, 150 miRNAs were common to both groups; 5 miRNAs were detected only in NSR patients, and 58 miRNAs were detected only in AF patients (Figure 1B).

Figure 1. MiRNAs detected in NSR and AF MS patients. (A) A combined total of 213 miRNAs were detected. The LAAs of NSR patients expressed 155 miRNAs, while those with AF expressed 208 miRNAs. (B) Among these, 150 miRNAs were expressed in both NSR and AF patients; 5 were expressed only in NSR and 58 only in AF.

However, the expression levels of most of the detected miRNAs were low, as evident from their low signal intensities (less than 500 units). Of the 155 miRNAs detected in the NSR group, 73 emitted signals <500 units, while only 16 were >5000 units. Of the 208 miRNAs detected in patients with AF, 127 had signal intensities <500 units, while only 16 were >5000 units (Figure 2). The signal intensities of the 5 miRNAs detected only in NSR and the 58 miRNAs detected only in AF were all <200 units, which was not high enough to consider them differentially expressed between NSR and AF; hence they were excluded from further analysis.

Figure 2. Signal distribution of all detected miRNAs in MS patients, shown by microarray assay. The signal intensities of most of these miRNAs were low (0-500 units), with only 11 miRNAs showing intensities greater than 10000 units.
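The shared/unique counts behind Figure 1 reduce to set operations over the detected miRNA lists; a toy sketch with placeholder names:

```python
# Toy sketch of the overlap logic in Figure 1; miRNA names are placeholders.
detected_nsr = {"miR-1", "miR-26a-5p", "miR-133a", "miR-99b"}
detected_af = {"miR-1", "miR-26a-5p", "miR-133a", "miR-466", "miR-574-3p"}

common = detected_nsr & detected_af    # analogous to the 150 shared miRNAs
nsr_only = detected_nsr - detected_af  # analogous to the 5 NSR-only miRNAs
af_only = detected_af - detected_nsr   # analogous to the 58 AF-only miRNAs
print(len(common), sorted(nsr_only), sorted(af_only))
```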
Differences in miRNA expression profiles of LAA tissues from MS patients with and without AF
Differences existed in the expression levels of the 150 miRNAs detected in both the NSR and AF groups (Table 2). Statistical analysis showed that 22 of these miRNAs (15%) were significantly dysregulated in the AF group relative to the NSR group: 10 miRNAs (45%) were upregulated, while 12 (55%) were downregulated (P < 0.05).

Table 2. MiRNAs differentially expressed in LAA tissues between MS patients with AF or NSR. *Expression levels of miRNAs are described as upregulated or downregulated in AF relative to those in NSR.

Most of the miRNAs selected for further analysis via RT-qPCR had a fold change satisfying |log2(fold change)| ≥ 1.5 (Figure 3), and at least one group had a signal intensity >2000 units (Figure 4). Although the fold change of hsa-miR-26a-5p did not meet this criterion (|log2(fold change)| = 1.17), its P-value was <0.01, and thus it was included in our selection. Finally, we selected 5 miRNAs for further analysis: 3 were upregulated in the AF group relative to the NSR group (hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p), and 2 were downregulated (hsa-miR-1 and hsa-miR-26a-5p).

Figure 3. Volcano plot of 22 miRNAs significantly dysregulated in LAA tissues. The volcano plot shows the distribution of these miRNAs according to their P-value versus fold change. Those with the highest fold change in expression (|log2(fold change)| ≥ 1.5) are labeled in red. Although the fold change of miR-26a-5p (|log2(fold change)| = 1.17) does not meet this criterion, its P-value is <0.01 (i.e., -log10(P-value) > 2), and it is therefore also labeled in red.

Figure 4. Comparison of signal intensities of 22 miRNAs significantly dysregulated in LAA tissues. The signal intensities of miRNAs expressed in the NSR group (x-axis) and AF group (y-axis) were compared. The miRNAs with higher signal intensities (>2000 units) on either axis are labeled in red.
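The selection rule just described (|log2(fold change)| ≥ 1.5 with P < 0.05, plus a P < 0.01 rescue as applied to hsa-miR-26a-5p) can be written down directly. The records below are illustrative and do not reproduce Table 2; apart from the quoted miR-26a-5p fold change, the values are hypothetical.

```python
# Sketch of the RT-qPCR candidate selection rule; values are illustrative.
import math

dysregulated = [
    {"name": "hsa-miR-1",      "log2_fc": -2.1,  "p": 0.004},  # hypothetical fc
    {"name": "hsa-miR-26a-5p", "log2_fc": -1.17, "p": 0.008},  # fc as quoted
    {"name": "hsa-miR-574-3p", "log2_fc": 1.8,   "p": 0.02},   # hypothetical fc
    {"name": "hsa-example",    "log2_fc": 0.9,   "p": 0.03},   # hypothetical
]

def selected(m):
    # |log2(fold change)| >= 1.5 with P < 0.05, or rescued by P < 0.01
    return (abs(m["log2_fc"]) >= 1.5 and m["p"] < 0.05) or m["p"] < 0.01

for m in dysregulated:
    # -log10(P) > 2 is equivalent to P < 0.01, as in the volcano plot caption
    print(m["name"], selected(m), round(-math.log10(m["p"]), 2))
```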
Validation of the miRNA microarray data with RT-qPCR
To validate the data obtained from the miRNA microarray, RT-qPCR was performed on the 5 selected miRNAs, and the results were compared with the microarray (Figure 5, Additional file 3). According to the RT-qPCR data, hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p were upregulated in the LAAs of the AF group relative to the NSR group, while hsa-miR-1 and hsa-miR-26a-5p were downregulated. These data are consistent with our microarray data and thus validate the microarray results for these miRNAs.

Figure 5. Validation of miRNA microarray data of selected miRNAs by RT-qPCR. RT-qPCR was performed on 5 differentially expressed miRNAs (3 upregulated: miR-466, miR-574-3p, miR-3613-3p; and 2 downregulated: miR-1, miR-26a-5p) to confirm the microarray data. (A) Microarray results: expression levels of the 5 miRNAs in the AF group based on fold changes of signal intensity relative to the NSR group; log2(fold change) values are plotted on the y-axis. (B) RT-qPCR results: the expression level of each miRNA in the AF group relative to the NSR group was estimated by the comparative cycle threshold (2^-ΔΔCt) method; log2(relative expression) values are plotted on the y-axis. There is a significant difference for each miRNA, consistent with the microarray result.
Association between LA size and changes in miRNA expression
We investigated whether the changes observed in the expression levels of the 5 validated miRNAs between the LAA tissues of NSR and AF patients correlated with LA size. Spearman's correlation analysis showed a positive correlation between the expression level of miR-466 in LAA and LA size (r = 0.73; P = 0.007). Moreover, there were significant negative correlations between the expression levels of miR-1 and miR-26a-5p in LAAs and LA size (r = -0.81; P = 0.002 and r = -0.86; P < 0.001, respectively). However, although the expression levels of miR-574-3p and miR-3613-3p in LAAs differed significantly between NSR and AF patients, they were not significantly correlated with LA size (r = 0.54; P = 0.07 and r = 0.56; P = 0.06, respectively; Figure 6).

Figure 6. Correlations between LA size and the relative expression levels of selected miRNAs. Spearman's correlation analysis showed a positive correlation between the expression level of miR-466 and LA size, and negative correlations between the expression levels of miR-1 and miR-26a-5p and LA size. The expression levels of miR-574-3p and miR-3613-3p were not significantly correlated with LA size.
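As a sketch of the correlation analysis reported above: Spearman's rho between a miRNA's relative expression and LA size across patients. The paired values are invented; only the method (scipy's spearmanr as one common implementation) is illustrated.

```python
# Sketch of the Spearman correlation between relative miRNA expression and
# LA size across 12 patients. The paired values below are invented.
from scipy.stats import spearmanr

la_size_mm = [42, 45, 47, 50, 55, 58, 60, 63, 66, 70, 72, 75]
mir_466_expr = [0.8, 1.0, 0.9, 1.4, 1.6, 1.9, 2.2, 2.1, 2.6, 3.0, 2.8, 3.4]

rho, p = spearmanr(la_size_mm, mir_466_expr)
print(f"r = {rho:.2f}, P = {p:.3f}")  # a positive r mirrors the miR-466 result
```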
Prediction of putative target genes and pathways of the differentially expressed miRNAs
To determine the probable biological functions of the differentially expressed miRNAs, we predicted the putative targets and pathways of the 5 validated miRNAs (hsa-miR-1, hsa-miR-26a-5p, hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p) using the miRFocus database. Numerous putative target genes and pathways were identified for the 5 miRNAs: hsa-miR-1 and hsa-miR-26a-5p were predicted by 5 target prediction databases, hsa-miR-574-3p by 4, and hsa-miR-466 and hsa-miR-3613-3p by 2 (Table 3).

Table 3. Prediction of putative target genes and pathways of selected miRNAs.

The biological function and potential functional pathways of each putative gene target were classified using GO terms and KEGG pathways (Tables 4 and 5). Since every gene is associated with many GO terms and KEGG pathways, the significant GO term and KEGG pathway for each miRNA were identified with Fisher's exact test. The target analysis indicated that the predicted genes are linked to important biological processes such as the regulation of protein metabolism, transcription factor activity, cell division, and the transforming growth factor beta receptor (TGFBR) signaling pathway. These results suggest that these miRNAs have roles in human health and disease regulation, and the pathway analysis likewise suggested that they have important regulatory roles in diverse biological processes.

Table 4. Biological processes of the predicted miRNA targets.
Table 5. Pathway analysis of the selected miRNAs.
Conclusions
This study shows that AF alters miRNA expression profiles in the LA of MS patients. These findings may be useful for the biological understanding of AF in MS patients and provide potential therapeutic targets for AF [32].
[ "Background", "Human tissue preparation", "RNA isolation", "Microarray processing and analysis", "Reverse transcription-real time quantitative PCR (RT-qPCR) validation of selected miRNAs", "Target prediction and function analysis", "Statistical analyses", "Clinical characteristics of the NSR and AF patients", "miRNA expression profiles of LAA tissue from MS patients with and without AF", "Differences in miRNA expression profiles of LAA tissues from MS patients with and without AF", "Validation of the miRNA microarray data with RT-qPCR", "Association between LA size and changes in miRNA expression", "Prediction of putative target genes and pathways of the differentially expressed miRNAs", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Atrial fibrillation (AF) is characterized as an irregular and sometimes rapid heart rate, with symptoms that include palpitations and shortness of breath. AF is the most common cardiac arrhythmia observed in clinical practice and constitutes a risk factor for ischemic stroke\n[1]. Despite recent significant advances in the understanding of the mechanisms associated with AF, complexities in the etiology of atrial electrical dysfunction (including a genetic component\n[2]) and the subsequent associated arrhythmia have prevented definitive elucidation\n[3].\nThe progression from acute to persistent and then chronic AF is accompanied by changes in gene expression that lead to differences in protein expression and activity. MicroRNAs (miRNAs) are regulators of gene expression at the post-transcriptional level\n[4], and appear to have regulatory roles that underlie the pathophysiology of AF. Many studies have shown that miRNAs regulate key genetic functions in cardiovascular biology and are crucial to the pathogenesis of cardiac diseases such as cardiac development\n[5], hypertrophy/heart failure\n[6], remodeling\n[7], acute myocardial infarction\n[8], and myocardial ischemia-reperfusion injury\n[9]. Currently, there is a growing body of literature that indicates that many miRNAs are involved in AF through their target genes\n[7,10,11].\nAF can be an isolated condition, but it often occurs concomitantly with other cardiovascular diseases such as hypertension, congestive heart failure, coronary artery disease, and valvular heart disease\n[12]. AF is also prevalent in mitral stenosis (MS; a consequence of rheumatic fever), affecting approximately 40% of all MS patients\n[13]. MS is among the major cardiovascular diseases in developing countries where rheumatic fever is less well controlled, and 50% or more of patients with severe MS have AF. Patients with both AF and MS have a 17.5-fold greater risk of stroke and a four-fold higher incidence of embolism compared with people with normal sinus rhythm (NSR)\n[14,15].\nStructural changes of the left atria (LA) and right atria (RA) associated with AF in MS patients are well established\n[13,14]. Recently, reports suggest that AF also alters the miRNA expression profiles in RA of MS patients\n[16,17]. However, miRNA changes in LA from MS patients with AF are still unknown. Given the complexity of the pathophysiology that may be associated with AF, we need a better understanding of the miRNA changes in the LA, which may help in designing and developing new therapeutic interventions. This study investigated alterations of miRNA expression profiles in LA tissues of MS patients with AF relative to MS patients with NSR.", "Left atrial appendage (LAA) tissue samples were obtained from MS patients, both in NSR (n = 6, without history of AF) and with AF (n = 6, documented arrhythmia >6 months before surgery). The tissue samples were obtained at the time of mitral valve surgery, immediately snap frozen in liquid nitrogen, and stored at -80°C until used. The diagnosis of AF was reached by evaluating medical records and 12-lead electrocardiogram findings. NSR patients had no history of using antiarrhythmic drugs and were screened to ensure that they had never experienced AF\n[18]. Preoperative 2-dimensional color transthoracic echocardiography was performed routinely on the patients. 
Preoperative functional status was recorded according to New York Heart Association (NYHA) classifications.", "The total RNA from human LAA tissue samples was extracted using TRIzol reagent (Invitrogen) in accordance with the protocol of the manufacturer. The RNA quality of each sample was determined using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) and immediately stored at -80°C.", "The miRNA microarray expression analysis was performed by LC Sciences (Houston, TX, USA) as described previously\n[19]. In brief, the assay began with a total RNA sample (2 to 5 μg). The total RNA was size-fractionated using a YM-100 Microcon centrifugal filter (Millipore, Billerica, MA). RNA sequences of <300 nt were isolated. These small RNAs were then extended at the 3′ end with a poly(A) tail using poly(A) polymerase, and then by ligation of an oligonucleotide tag to the poly(A) tail for later fluorescent dye staining.\nHybridization was performed overnight on a μParaflo microfluidic chip using a micro-circulation pump (Atactic Technologies, Houston, TX). Each microfluidic chip contained detection probes and control probes. The detection probes were made in situ by photogenerated reagents. These probes consisted chemically of modified nucleotide coding sequences complementary to target miRNA (all 1921 human miRNAs listed in the Sanger’s miRNA miRBase, Release 18.0, http://microrna.sanger.ac.uk/sequences/) and a spacer segment of polyethylene glycol to extend the coding sequences away from the substrate. The hybridization melting temperatures were balanced by chemical modifications of the detection probes. Hybridization was performed using 100 μL of 6× saline-sodium phosphate-EDTA (SSPE) buffer (0.90 M NaCl, 60 mM Na2HPO4, 6 mM EDTA, pH 6.8) containing 25% formamide at 34°C.\nFluorescence labeling with tag-specific Cy5 dye was used for after-hybridization detection. An Axon GenePix 4000B Microarray Scanner (Molecular Device, Union City, CA) was used to collect the fluorescent images, which were digitized using Array-Pro image analysis software (Media Cybernetics, Bethesda, MD). Each miRNA was analyzed two times and the controls were repeated 4-16 times.\nAnalysis of the microarray data was also performed at LC Sciences (see Additional file\n1). The microarray data was analyzed by subtracting the background, and then the signals were normalized using a locally weighted regression scatterplot smoothing (LOWESS) filter as reported previously\n[20]. Detectable miRNAs were selected based on the following criteria: signal intensity >3-fold the background standard deviation, and spot coefficient of variation (CV) < 0.5, where CV = standard deviation/signal intensity. When repeating probes were present on the array, the transcript was listed as detectable only if the signals from at least 50% of the repeating probes were above detection level. To identify miRNAs whose expression differed between the AF and NSR groups, statistical analysis was performed. The ratio of two samples was calculated and expressed in log2scale (balanced) for each miRNA. The miRNAs were then sorted according to their differential ratios. The P-values of the t-test were also calculated. miRNAs with P-values < 0.05 were considered significantly differentially expressed.", "To validate the microarray results in the present study, a stem-loop RT-qPCR based on SYBR Green I was performed on selected differentially expressed miRNAs. The primers used are listed in Additional file\n2. 
Total RNA was isolated using TRIzol reagent (Invitrogen) as described above. A single-stranded cDNA for each specific miRNA was generated by reverse transcription (RT) of 250 ng of total RNA using a miRNA-specific stem-looped RT primer. Briefly, an RT reaction mixture contained 250 ng of total RNA, 0.5 μL of 2 μM stem-loop RT primer, 1.0 μL of 5× RT buffer, 0.25 μL of 10 mM of each dNTP, 0.25 μL of 40 U/μL RNase inhibitor, and 0.5 μL of 200 U/μL Moloney murine leukemia virus (M-MLV) reverse transcriptase. An Eppendorf Mastercycler (Eppendorf, Hamburg, Germany) was used to perform the RT reaction under the following conditions: 42°C for 60 min, 70°C for 15 min, and finally, held at 4°C.\nAfter the RT reaction, qPCR was performed using an ABI PRISM 7900HT sequence-detection system (Applied Biosystems, Foster City, CA, USA) with the Platinum SYBR Green qPCR SuperMix-UDG (Invitrogen). In accordance with the manufacturer’s instructions, a 20-μL PCR reaction mixture contained 0.5 μL of RT product, 10 μL of 2× SYBR Green Mix, 0.4 μL of ROX, 0.8 μL of 10 μM primer mix, and 8.3 μL of nuclease-free water. The reaction protocol was: 95°C for 2 min, and then 40 amplification cycles of 95°C for 15 s, and 60°C for 30 s.\nAll reactions were run in triplicate. To account for possible differences in the amount of starting RNA, miRNA expressions were normalized to small nuclear RNA RNU6B\n[21,22]. RT-qPCR data were represented by the cycle threshold (Ct) value. The relative expression level (i.e., fold change) for each miRNA was calculated using the comparative cycle threshold 2-ΔΔCt method\n[19].", "We used the database miRFocus (http://mirfocus.org/) to predict potential human miRNA target genes. The website describes miRFocus as a human miRNA information database, and is an open-source web tool developed for rapid analysis of miRNAs. It also provides comprehensive information concerning human miRNAs, including not only miRNA annotations but also miRNA and target gene interactions, correlations between miRNAs and diseases and signaling pathways, and more. The miRFocus provides a full gene description and functional analysis for each target gene by combining the predicted target genes from other databases (TargetScan, miRanda, PicTar, MirTarget and microT). In this study, only those genes that were predicted by two or more databases were considered candidates; the greater the number of databases that predicted that a given gene would be a target, the more likely the miRNA-mRNA interaction would be relevant\n[23]. The miRFocus program also identifies miRNA-enriched pathways, incorporating those from the Kyoto Encyclopedia of Genes and Genomes (KEGG), Biocarta, and Gene Ontology (GO) databases, with Fisher’s exact test.", "All data are presented as mean ± standard deviation and analyzed with the paired t-test. Spearman’s correlation coefficients were used to examine the association between validated miRNAs and left atrial size. P < 0.05 was considered statistically significant.", "There were no significant differences in terms of age, gender, or NYHA functional classification between the NSR and AF groups. 
Preoperative color Doppler echocardiography showed that the size of the left atria of AF patients was significantly greater than that of NSR patients as previously reported\n[24], while there were no differences in the left ventricular end-diastolic diameter or ejection fraction between the two groups of patients (Table \n1).\nClinical characteristics of the NSR and AF patients (n = 6, each)\n*P < 0.05 with respect to NSR patients.", "Of the 1898 human miRNAs analyzed, a total 213 miRNAs were detected; NSR patients expressed 155 miRNAs, while the AF patients expressed 208 miRNAs (Figure \n1A). Among these, 150 miRNAs were common to the patients of both groups. 5 miRNAs were detected only in those with NSR, and 58 miRNAs were detected only in those with AF (Figure \n1B).\nMiRNAs detected in NSR and AF MS patients. (A) A combined total of 213 miRNAs were detected. The LAAs of NSR patients expressed 155 miRNAs, while those with AF expressed 208 miRNAs. (B) Among these, 150 miRNAs were expressed in NSR and AF patients; 5 were expressed only in the NSR and 58 only in the AF.\nHowever, the expression levels of most of the detected miRNAs were low, which is evident by their low signal intensities (less than 500 units). Of the 155 miRNAs detected in the NSR group, 73 emitted signals <500 units, while only 16 were >5000 units. On the other hand, of the 208 miRNAs detected in patients with AF, the signal intensities of 127 were <500 units, while only 16 were >5000 units (Figure \n2). The signal intensities of the 5 miRNAs detected only in NSR, and the 58 miRNAs detected only in AF, were all <200 units and not high enough to consider them as differentially expressed between NSR and AF. Hence these were not considered here for further analysis.\nSignal distribution of all detected miRNAs in MS patients, shown by microarray assay. The signal intensities of most of these miRNAs were low (0–500 units), with only 11 miRNAs showing intensities greater than 10000 units.", "Differences existed in the expression levels of the 150 miRNAs detected in both the NSR and AF groups (Table \n2). Statistical analysis showed that 22 of these miRNAs (15%) were significantly dysregulated in the AF group relative to the NSR: 10 miRNAs (45%) were upregulated, while 12 (55%) were downregulated (P < 0.05).\nMiRNAs differentially expressed in LAA tissues between MS patients with AF or NSR\n*Expression levels of miRNAs are described as upregulated or downregulated in AF relative to those in NSR.\nMost of the miRNAs selected for further analysis via RT-qPCR had a fold change that satisfied the equation |log2(fold change)| ≥ 1.5(Figure \n3), and at least one group had a signal intensity >2000 units (Figure \n4). Although the fold change of hsa-miR-26a-5p did not meet this criteria (|log2(fold change)| = 1.17), its P -value was <0.01, and thus it was included in our selection. Finally, we selected 5 miRNAs for further analysis: 3 were upregulated in the AF group relative to the NSR (hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p), and 2 were downregulated (hsa-miR-1 and hsa-miR-26a-5p).\nVolcano plot of 22 miRNAs significantly dysregulated in LAA tissues. The volcano plot shows the distribution of these miRNAs according to their P-value versus fold change. Those with the highest fold change in their expression (|Log2(fold change)| ≥ 1.5) have been labeled in red. 
Although the fold change of miR-26a-5p (|log2(fold change)| = 1.17) does not meet the criteria, its P-value is <0.01 (i.e., –Log10(P-value) > 2), and therefore is also labeled red.\nComparison of signal intensities of 22 miRNAs significantly dysregulated in LAA tissues. The signal intensities of miRNAs expressed in the NSR group (x-axis) and AF group (y-axis) were compared. The miRNAs with the higher signal intensities (>2000 units) either on the x-axis or y-axis were labeled red.", "To validate the data obtained from the miRNA microarray, RT-qPCR was performed on 5 selected miRNAs, and the results were compared with the microarray (Figure \n5, Additional file\n3). According to the RT-qPCR data, hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p were upregulated in the LAAs of the AF group relative to the NSR, while hsa-miR-1 and hsa-miR-26a-5p were downregulated. These data are comparable with our microarray data and thus validate the results from the miRNA microarray for these miRNAs.\nValidation of miRNA microarray data of selected miRNAs by RT-qPCR. RT-qPCR was performed on 5 differentially expressed miRNAs (3 upregulated: miR-466, miR-574-3p, miR-3613-3p; and 2 downregulated: miR-1, miR-26a-5p) to confirm the microarray data. (A) Illustration of microarray results. Expression levels of the 5 miRNAs in the AF group based on fold changes of signal intensity in the microarray relative to the NSR group. The log2(fold change) values are plotted on the y-axis. (B) Illustration of RT-qPCR results. Using RT-qPCR, the expression levels of each miRNA in the AF group relative to the NRS group was estimated by the comparative cycle threshold 2-ΔΔCt method. The log2(relative expression) values are plotted on the y-axis. There is a significant difference for each miRNA, consistent with the microarray result.", "We investigated whether the changes observed in the expression levels of the 5 validated miRNAs between the LAA tissues of NSR and AF patients correlated with LA size.\nSpearman’s correlation analysis showed a positive correlation between the level of expression of miR-466 in LAA and LA size (r = 0.73; P = 0.007). Moreover, there was a significantly negative correlation between the levels of expression of miR-1 and miR-26a-5p in LAAs and LA size (r = –0.81; P = 0.002 and r = –0.86; P < 0.001, respectively). However, although the expression levels of miR-574-3p and miR-3613-3p in LAAs significantly differed between NSR and AF patients, they were not significantly correlated with LA size (r = 0.54; P = 0.07 and r = 0.56; P = 0.06, respectively; Figure \n6).\nCorrelations between LA size and the relative expression levels of selected miRNAs. Spearman’s correlation analysis showed a positive correlation between the level of expression of miR-466 and LA size, and a negative correlation between the level of expression of miR-1 and miR-26a-5p with LA size. The expression levels of miR-574-3p and miR-3613-3p were not significantly correlated with LA size.", "To determine the probable biological function of the differentially expressed miRNAs, we predicted the putative targets and pathways of 5 validated miRNAs (hsa-miR-1, hsa-miR-26a-5p, hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p) using the miRFocus database.\nNumerous putative target genes and pathways were identified for the 5 miRNAs. 
Abbreviations
AF: Atrial fibrillation; CV: Coefficient of variation; GO: Gene Ontology; KEGG: Kyoto Encyclopedia of Genes and Genomes; LA: Left atrial; LAA: Left atrial appendage; miRNA: MicroRNA; MS: Mitral stenosis; NSR: Normal sinus rhythm; NYHA: New York Heart Association; RT-qPCR: Reverse-transcription quantitative PCR; TGFBR: Transforming growth factor beta receptor.

Competing interests
The authors declare that they have no competing interests.

Authors’ contributions
HL performed the molecular studies, participated in the sequence alignment, and drafted the manuscript. GXC, MYL, HQ, JR, and JPY participated in open heart surgery and collected clinical samples. HL and ZKW participated in the design of the study and performed the statistical analyses. ZKW and GXC conceived the study, participated in its design and coordination, and helped to draft the manuscript. HL and GXC contributed equally to this article. All authors read and approved the final manuscript.

Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2261/14/10/prepub
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Human tissue preparation", "RNA isolation", "Microarray processing and analysis", "Reverse transcription-real time quantitative PCR (RT-qPCR) validation of selected miRNAs", "Target prediction and function analysis", "Statistical analyses", "Results", "Clinical characteristics of the NSR and AF patients", "miRNA expression profiles of LAA tissue from MS patients with and without AF", "Differences in miRNA expression profiles of LAA tissues from MS patients with and without AF", "Validation of the miRNA microarray data with RT-qPCR", "Association between LA size and changes in miRNA expression", "Prediction of putative target genes and pathways of the differentially expressed miRNAs", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history", "Supplementary Material" ]
[ "Atrial fibrillation (AF) is characterized as an irregular and sometimes rapid heart rate, with symptoms that include palpitations and shortness of breath. AF is the most common cardiac arrhythmia observed in clinical practice and constitutes a risk factor for ischemic stroke\n[1]. Despite recent significant advances in the understanding of the mechanisms associated with AF, complexities in the etiology of atrial electrical dysfunction (including a genetic component\n[2]) and the subsequent associated arrhythmia have prevented definitive elucidation\n[3].\nThe progression from acute to persistent and then chronic AF is accompanied by changes in gene expression that lead to differences in protein expression and activity. MicroRNAs (miRNAs) are regulators of gene expression at the post-transcriptional level\n[4], and appear to have regulatory roles that underlie the pathophysiology of AF. Many studies have shown that miRNAs regulate key genetic functions in cardiovascular biology and are crucial to the pathogenesis of cardiac diseases such as cardiac development\n[5], hypertrophy/heart failure\n[6], remodeling\n[7], acute myocardial infarction\n[8], and myocardial ischemia-reperfusion injury\n[9]. Currently, there is a growing body of literature that indicates that many miRNAs are involved in AF through their target genes\n[7,10,11].\nAF can be an isolated condition, but it often occurs concomitantly with other cardiovascular diseases such as hypertension, congestive heart failure, coronary artery disease, and valvular heart disease\n[12]. AF is also prevalent in mitral stenosis (MS; a consequence of rheumatic fever), affecting approximately 40% of all MS patients\n[13]. MS is among the major cardiovascular diseases in developing countries where rheumatic fever is less well controlled, and 50% or more of patients with severe MS have AF. Patients with both AF and MS have a 17.5-fold greater risk of stroke and a four-fold higher incidence of embolism compared with people with normal sinus rhythm (NSR)\n[14,15].\nStructural changes of the left atria (LA) and right atria (RA) associated with AF in MS patients are well established\n[13,14]. Recently, reports suggest that AF also alters the miRNA expression profiles in RA of MS patients\n[16,17]. However, miRNA changes in LA from MS patients with AF are still unknown. Given the complexity of the pathophysiology that may be associated with AF, we need a better understanding of the miRNA changes in the LA, which may help in designing and developing new therapeutic interventions. This study investigated alterations of miRNA expression profiles in LA tissues of MS patients with AF relative to MS patients with NSR.", "The Human Ethics Committee of First Affiliated Hospital of Sun Yat-sen University approved this study, and the investigation complied with the principles that govern the use of human tissues outlined in the Declaration of Helsinki. All patients gave informed consent before participating in the study.\n Human tissue preparation Left atrial appendage (LAA) tissue samples were obtained from MS patients, both in NSR (n = 6, without history of AF) and with AF (n = 6, documented arrhythmia >6 months before surgery). The tissue samples were obtained at the time of mitral valve surgery, immediately snap frozen in liquid nitrogen, and stored at -80°C until used. The diagnosis of AF was reached by evaluating medical records and 12-lead electrocardiogram findings. 
NSR patients had no history of using antiarrhythmic drugs and were screened to ensure that they had never experienced AF\n[18]. Preoperative 2-dimensional color transthoracic echocardiography was performed routinely on the patients. Preoperative functional status was recorded according to New York Heart Association (NYHA) classifications.\nLeft atrial appendage (LAA) tissue samples were obtained from MS patients, both in NSR (n = 6, without history of AF) and with AF (n = 6, documented arrhythmia >6 months before surgery). The tissue samples were obtained at the time of mitral valve surgery, immediately snap frozen in liquid nitrogen, and stored at -80°C until used. The diagnosis of AF was reached by evaluating medical records and 12-lead electrocardiogram findings. NSR patients had no history of using antiarrhythmic drugs and were screened to ensure that they had never experienced AF\n[18]. Preoperative 2-dimensional color transthoracic echocardiography was performed routinely on the patients. Preoperative functional status was recorded according to New York Heart Association (NYHA) classifications.\n RNA isolation The total RNA from human LAA tissue samples was extracted using TRIzol reagent (Invitrogen) in accordance with the protocol of the manufacturer. The RNA quality of each sample was determined using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) and immediately stored at -80°C.\nThe total RNA from human LAA tissue samples was extracted using TRIzol reagent (Invitrogen) in accordance with the protocol of the manufacturer. The RNA quality of each sample was determined using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) and immediately stored at -80°C.\n Microarray processing and analysis The miRNA microarray expression analysis was performed by LC Sciences (Houston, TX, USA) as described previously\n[19]. In brief, the assay began with a total RNA sample (2 to 5 μg). The total RNA was size-fractionated using a YM-100 Microcon centrifugal filter (Millipore, Billerica, MA). RNA sequences of <300 nt were isolated. These small RNAs were then extended at the 3′ end with a poly(A) tail using poly(A) polymerase, and then by ligation of an oligonucleotide tag to the poly(A) tail for later fluorescent dye staining.\nHybridization was performed overnight on a μParaflo microfluidic chip using a micro-circulation pump (Atactic Technologies, Houston, TX). Each microfluidic chip contained detection probes and control probes. The detection probes were made in situ by photogenerated reagents. These probes consisted chemically of modified nucleotide coding sequences complementary to target miRNA (all 1921 human miRNAs listed in the Sanger’s miRNA miRBase, Release 18.0, http://microrna.sanger.ac.uk/sequences/) and a spacer segment of polyethylene glycol to extend the coding sequences away from the substrate. The hybridization melting temperatures were balanced by chemical modifications of the detection probes. Hybridization was performed using 100 μL of 6× saline-sodium phosphate-EDTA (SSPE) buffer (0.90 M NaCl, 60 mM Na2HPO4, 6 mM EDTA, pH 6.8) containing 25% formamide at 34°C.\nFluorescence labeling with tag-specific Cy5 dye was used for after-hybridization detection. An Axon GenePix 4000B Microarray Scanner (Molecular Device, Union City, CA) was used to collect the fluorescent images, which were digitized using Array-Pro image analysis software (Media Cybernetics, Bethesda, MD). 
Each miRNA was analyzed two times and the controls were repeated 4-16 times.\nAnalysis of the microarray data was also performed at LC Sciences (see Additional file\n1). The microarray data was analyzed by subtracting the background, and then the signals were normalized using a locally weighted regression scatterplot smoothing (LOWESS) filter as reported previously\n[20]. Detectable miRNAs were selected based on the following criteria: signal intensity >3-fold the background standard deviation, and spot coefficient of variation (CV) < 0.5, where CV = standard deviation/signal intensity. When repeating probes were present on the array, the transcript was listed as detectable only if the signals from at least 50% of the repeating probes were above detection level. To identify miRNAs whose expression differed between the AF and NSR groups, statistical analysis was performed. The ratio of two samples was calculated and expressed in log2scale (balanced) for each miRNA. The miRNAs were then sorted according to their differential ratios. The P-values of the t-test were also calculated. miRNAs with P-values < 0.05 were considered significantly differentially expressed.\nThe miRNA microarray expression analysis was performed by LC Sciences (Houston, TX, USA) as described previously\n[19]. In brief, the assay began with a total RNA sample (2 to 5 μg). The total RNA was size-fractionated using a YM-100 Microcon centrifugal filter (Millipore, Billerica, MA). RNA sequences of <300 nt were isolated. These small RNAs were then extended at the 3′ end with a poly(A) tail using poly(A) polymerase, and then by ligation of an oligonucleotide tag to the poly(A) tail for later fluorescent dye staining.\nHybridization was performed overnight on a μParaflo microfluidic chip using a micro-circulation pump (Atactic Technologies, Houston, TX). Each microfluidic chip contained detection probes and control probes. The detection probes were made in situ by photogenerated reagents. These probes consisted chemically of modified nucleotide coding sequences complementary to target miRNA (all 1921 human miRNAs listed in the Sanger’s miRNA miRBase, Release 18.0, http://microrna.sanger.ac.uk/sequences/) and a spacer segment of polyethylene glycol to extend the coding sequences away from the substrate. The hybridization melting temperatures were balanced by chemical modifications of the detection probes. Hybridization was performed using 100 μL of 6× saline-sodium phosphate-EDTA (SSPE) buffer (0.90 M NaCl, 60 mM Na2HPO4, 6 mM EDTA, pH 6.8) containing 25% formamide at 34°C.\nFluorescence labeling with tag-specific Cy5 dye was used for after-hybridization detection. An Axon GenePix 4000B Microarray Scanner (Molecular Device, Union City, CA) was used to collect the fluorescent images, which were digitized using Array-Pro image analysis software (Media Cybernetics, Bethesda, MD). Each miRNA was analyzed two times and the controls were repeated 4-16 times.\nAnalysis of the microarray data was also performed at LC Sciences (see Additional file\n1). The microarray data was analyzed by subtracting the background, and then the signals were normalized using a locally weighted regression scatterplot smoothing (LOWESS) filter as reported previously\n[20]. Detectable miRNAs were selected based on the following criteria: signal intensity >3-fold the background standard deviation, and spot coefficient of variation (CV) < 0.5, where CV = standard deviation/signal intensity. 
Reverse transcription-real time quantitative PCR (RT-qPCR) validation of selected miRNAs
To validate the microarray results in the present study, a stem-loop RT-qPCR based on SYBR Green I was performed on selected differentially expressed miRNAs. The primers used are listed in Additional file 2. Total RNA was isolated using TRIzol reagent (Invitrogen) as described above. A single-stranded cDNA for each specific miRNA was generated by reverse transcription (RT) of 250 ng of total RNA using a miRNA-specific stem-loop RT primer. Briefly, an RT reaction mixture contained 250 ng of total RNA, 0.5 μL of 2 μM stem-loop RT primer, 1.0 μL of 5× RT buffer, 0.25 μL of 10 mM of each dNTP, 0.25 μL of 40 U/μL RNase inhibitor, and 0.5 μL of 200 U/μL Moloney murine leukemia virus (M-MLV) reverse transcriptase. An Eppendorf Mastercycler (Eppendorf, Hamburg, Germany) was used to perform the RT reaction under the following conditions: 42°C for 60 min, 70°C for 15 min, and finally a hold at 4°C.
After the RT reaction, qPCR was performed using an ABI PRISM 7900HT sequence-detection system (Applied Biosystems, Foster City, CA, USA) with the Platinum SYBR Green qPCR SuperMix-UDG (Invitrogen). In accordance with the manufacturer's instructions, a 20-μL PCR reaction mixture contained 0.5 μL of RT product, 10 μL of 2× SYBR Green Mix, 0.4 μL of ROX, 0.8 μL of 10 μM primer mix, and 8.3 μL of nuclease-free water. The reaction protocol was 95°C for 2 min, followed by 40 amplification cycles of 95°C for 15 s and 60°C for 30 s.
All reactions were run in triplicate. To account for possible differences in the amount of starting RNA, miRNA expression was normalized to the small nuclear RNA RNU6B [21,22]. RT-qPCR data were represented by the cycle threshold (Ct) value, and the relative expression level (i.e., fold change) for each miRNA was calculated using the comparative cycle threshold 2-ΔΔCt method [19].
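For readers unfamiliar with the comparative 2-ΔΔCt calculation, it can be sketched in a few lines. The Ct values below are invented for illustration; RNU6B serves as the normalizer, as in the text.

```python
def relative_expression(ct_target_test, ct_ref_test, ct_target_calib, ct_ref_calib):
    """Comparative cycle-threshold (2**-ddCt) method:
    dCt  = Ct(target miRNA) - Ct(reference RNU6B) within each group;
    ddCt = dCt(test group, e.g. AF) - dCt(calibrator group, e.g. NSR);
    fold change = 2 ** -ddCt."""
    d_ct_test = ct_target_test - ct_ref_test
    d_ct_calib = ct_target_calib - ct_ref_calib
    return 2.0 ** -(d_ct_test - d_ct_calib)

# Hypothetical mean Ct values (triplicate averages) for one miRNA:
# AF group: miRNA Ct 24.1, RNU6B Ct 20.0; NSR group: miRNA Ct 22.3, RNU6B Ct 20.1.
fold = relative_expression(24.1, 20.0, 22.3, 20.1)
print(f"AF vs NSR fold change = {fold:.2f}")  # values < 1 mean downregulation in AF
```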
Target prediction and function analysis
We used the miRFocus database (http://mirfocus.org/) to predict potential human miRNA target genes. miRFocus is an open-source web tool for rapid analysis of human miRNAs; it provides comprehensive information including miRNA annotations, miRNA-target gene interactions, and correlations between miRNAs, diseases, and signaling pathways. It provides a full gene description and functional analysis for each target gene by combining the predicted target genes from several other databases (TargetScan, miRanda, PicTar, MirTarget, and microT). In this study, only genes predicted by two or more databases were considered candidates; the greater the number of databases predicting a given gene as a target, the more likely the miRNA-mRNA interaction is to be relevant [23]. miRFocus also identifies miRNA-enriched pathways, incorporating those from the Kyoto Encyclopedia of Genes and Genomes (KEGG), Biocarta, and Gene Ontology (GO) databases, using Fisher's exact test.
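The two-database consensus rule can be illustrated with a short sketch. The gene sets below are placeholders, not actual miRFocus output.

```python
from collections import Counter

# Hypothetical predictions for one miRNA from the five databases
# that miRFocus combines (TargetScan, miRanda, PicTar, MirTarget, microT).
predictions = {
    "TargetScan": {"KCNJ2", "GJA1", "TGFBR1", "GENE_X"},
    "miRanda":    {"KCNJ2", "TGFBR1", "GENE_Y"},
    "PicTar":     {"KCNJ2", "GJA1"},
    "MirTarget":  {"TGFBR1"},
    "microT":     {"KCNJ2", "GENE_X"},
}

# Count how many databases predict each gene; keep genes with >= 2 votes.
votes = Counter(gene for genes in predictions.values() for gene in genes)
candidates = {gene: n for gene, n in votes.items() if n >= 2}

# Ranking by vote count mirrors the paper's heuristic: more agreement
# between databases suggests a more plausible miRNA-mRNA interaction.
for gene, n in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(gene, n)
```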
Statistical analyses
All data are presented as mean ± standard deviation and analyzed with the paired t-test. Spearman's correlation coefficients were used to examine the association between validated miRNAs and left atrial size. P < 0.05 was considered statistically significant.
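As a sketch of this workflow, the paired comparison corresponds to scipy.stats.ttest_rel and the correlation analysis to scipy.stats.spearmanr. All numbers below are invented purely for illustration.

```python
import numpy as np
from scipy import stats

# Invented relative-expression values for one validated miRNA across
# 12 patients, with their echocardiographic left atrial sizes (mm).
expression = np.array([1.2, 0.8, 1.5, 2.1, 2.8, 3.0, 0.9, 1.1, 2.5, 2.2, 1.7, 2.9])
la_size_mm = np.array([38, 36, 42, 55, 60, 63, 37, 40, 58, 52, 45, 61])

# Spearman's rank correlation between miRNA expression and LA size,
# mirroring the paper's analysis (significance threshold P < 0.05).
rho, p = stats.spearmanr(expression, la_size_mm)
print(f"Spearman r = {rho:.2f}, P = {p:.4f}")
```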
Results
Clinical characteristics of the NSR and AF patients
There were no significant differences in age, gender, or NYHA functional classification between the NSR and AF groups. Preoperative color Doppler echocardiography showed that the left atria of AF patients were significantly larger than those of NSR patients, as previously reported [24], while there were no differences in left ventricular end-diastolic diameter or ejection fraction between the two groups (Table 1).
Clinical characteristics of the NSR and AF patients (n = 6, each). *P < 0.05 with respect to NSR patients.

miRNA expression profiles of LAA tissue from MS patients with and without AF
Of the 1898 human miRNAs analyzed, a total of 213 miRNAs were detected: NSR patients expressed 155 miRNAs, while AF patients expressed 208 (Figure 1A). Among these, 150 miRNAs were common to both groups; 5 were detected only in NSR patients and 58 only in AF patients (Figure 1B).
MiRNAs detected in NSR and AF MS patients. (A) A combined total of 213 miRNAs were detected. The LAAs of NSR patients expressed 155 miRNAs, while those with AF expressed 208 miRNAs. (B) Among these, 150 miRNAs were expressed in both NSR and AF patients; 5 were expressed only in NSR and 58 only in AF.
However, the expression levels of most of the detected miRNAs were low, as is evident from their low signal intensities (less than 500 units). Of the 155 miRNAs detected in the NSR group, 73 emitted signals <500 units, while only 16 were >5000 units. Of the 208 miRNAs detected in patients with AF, 127 had signal intensities <500 units, while only 16 were >5000 units (Figure 2). The signal intensities of the 5 miRNAs detected only in NSR and the 58 detected only in AF were all <200 units, which was not high enough for them to be considered differentially expressed between NSR and AF; they were therefore excluded from further analysis.
Signal distribution of all detected miRNAs in MS patients, shown by microarray assay. The signal intensities of most of these miRNAs were low (0-500 units), with only 11 miRNAs showing intensities greater than 10000 units.
Differences in miRNA expression profiles of LAA tissues from MS patients with and without AF
Differences existed in the expression levels of the 150 miRNAs detected in both the NSR and AF groups (Table 2). Statistical analysis showed that 22 of these miRNAs (15%) were significantly dysregulated in the AF group relative to the NSR group: 10 (45%) were upregulated and 12 (55%) were downregulated (P < 0.05).
MiRNAs differentially expressed in LAA tissues between MS patients with AF or NSR. *Expression levels of miRNAs are described as upregulated or downregulated in AF relative to those in NSR.
Most of the miRNAs selected for further analysis via RT-qPCR had a fold change satisfying |log2(fold change)| ≥ 1.5 (Figure 3) and a signal intensity >2000 units in at least one group (Figure 4). Although the fold change of hsa-miR-26a-5p did not meet this criterion (|log2(fold change)| = 1.17), its P-value was <0.01, and it was therefore included in our selection. Finally, we selected 5 miRNAs for further analysis: 3 upregulated in the AF group relative to the NSR group (hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p) and 2 downregulated (hsa-miR-1 and hsa-miR-26a-5p).
Volcano plot of 22 miRNAs significantly dysregulated in LAA tissues. The volcano plot shows the distribution of these miRNAs according to their P-value versus fold change. Those with the highest fold change in expression (|log2(fold change)| ≥ 1.5) are labeled in red. Although the fold change of miR-26a-5p (|log2(fold change)| = 1.17) does not meet the criterion, its P-value is <0.01 (i.e., -log10(P-value) > 2), so it is also labeled red.
Comparison of signal intensities of 22 miRNAs significantly dysregulated in LAA tissues. The signal intensities of miRNAs expressed in the NSR group (x-axis) and AF group (y-axis) were compared; miRNAs with signal intensities >2000 units on either axis are labeled red.
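The candidate-selection rule just described (large fold change, or a very small P-value as for miR-26a-5p, plus a strong signal in at least one group) can be written as a simple filter. The records below are invented for illustration.

```python
# Invented microarray summaries: (name, log2 fold change, P-value,
# mean signal in NSR, mean signal in AF).
mirnas = [
    ("miR-A", 1.80, 0.030, 2500, 9000),
    ("miR-B", -1.60, 0.040, 6000, 1800),
    ("miR-C", 1.17, 0.008, 3000, 6800),  # rescued by P < 0.01, like miR-26a-5p
    ("miR-D", 0.90, 0.045, 400, 700),    # significant, but fails both filters
]

def selected(log2fc, p, sig_nsr, sig_af, fc_cut=1.5, p_cut=0.01, sig_cut=2000):
    """Keep miRNAs with |log2 FC| >= 1.5 or P < 0.01, provided at least
    one group shows a signal intensity above 2000 units."""
    strong_signal = max(sig_nsr, sig_af) > sig_cut
    return strong_signal and (abs(log2fc) >= fc_cut or p < p_cut)

for name, fc, p, s_nsr, s_af in mirnas:
    print(name, "selected" if selected(fc, p, s_nsr, s_af) else "excluded")
```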
Validation of the miRNA microarray data with RT-qPCR
To validate the data obtained from the miRNA microarray, RT-qPCR was performed on the 5 selected miRNAs, and the results were compared with the microarray data (Figure 5, Additional file 3). According to the RT-qPCR data, hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p were upregulated in the LAAs of the AF group relative to the NSR group, while hsa-miR-1 and hsa-miR-26a-5p were downregulated. These data are consistent with our microarray data and thus validate the microarray results for these miRNAs.
Validation of miRNA microarray data of selected miRNAs by RT-qPCR. RT-qPCR was performed on 5 differentially expressed miRNAs (3 upregulated: miR-466, miR-574-3p, miR-3613-3p; 2 downregulated: miR-1, miR-26a-5p) to confirm the microarray data. (A) Microarray results: expression levels of the 5 miRNAs in the AF group based on fold changes of signal intensity relative to the NSR group, with log2(fold change) values plotted on the y-axis. (B) RT-qPCR results: the expression level of each miRNA in the AF group relative to the NSR group was estimated by the comparative cycle threshold 2-ΔΔCt method, with log2(relative expression) values plotted on the y-axis. There is a significant difference for each miRNA, consistent with the microarray result.

Association between LA size and changes in miRNA expression
We investigated whether the changes observed in the expression levels of the 5 validated miRNAs between the LAA tissues of NSR and AF patients correlated with LA size. Spearman's correlation analysis showed a positive correlation between the expression level of miR-466 in LAA and LA size (r = 0.73; P = 0.007). Moreover, there were significant negative correlations between the expression levels of miR-1 and miR-26a-5p in LAAs and LA size (r = -0.81; P = 0.002 and r = -0.86; P < 0.001, respectively). However, although the expression levels of miR-574-3p and miR-3613-3p in LAAs differed significantly between NSR and AF patients, they were not significantly correlated with LA size (r = 0.54; P = 0.07 and r = 0.56; P = 0.06, respectively; Figure 6).
Correlations between LA size and the relative expression levels of selected miRNAs. Spearman's correlation analysis showed a positive correlation between the expression level of miR-466 and LA size, and negative correlations between the expression levels of miR-1 and miR-26a-5p and LA size. The expression levels of miR-574-3p and miR-3613-3p were not significantly correlated with LA size.

Prediction of putative target genes and pathways of the differentially expressed miRNAs
To determine the probable biological functions of the differentially expressed miRNAs, we predicted the putative targets and pathways of the 5 validated miRNAs (hsa-miR-1, hsa-miR-26a-5p, hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p) using the miRFocus database. Numerous putative target genes and pathways were identified for the 5 miRNAs: hsa-miR-1 and hsa-miR-26a-5p had targets predicted by 5 prediction databases, hsa-miR-574-3p by 4, and hsa-miR-466 and hsa-miR-3613-3p by 2 (Table 3).
Prediction of putative target genes and pathways of selected miRNAs
The biological function and potential functional pathways of each putative target gene were classified using GO terms and KEGG pathways (Tables 4 and 5). Since every gene is associated with many GO terms and KEGG pathways, the significant GO term and KEGG pathway for each miRNA were identified with Fisher's exact test. The target analysis indicated that the predicted genes have been linked to important biological processes such as the regulation of protein metabolism, transcription factor activity, cell division, and the transforming growth factor beta receptor (TGFBR) signaling pathway. These results suggest that these miRNAs have roles in human health and disease regulation.
The pathway analysis also suggested that these miRNAs have important regulatory roles in different biological processes.
Biological processes of the predicted miRNA targets
Pathway analysis of the selected miRNAs
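The Fisher's exact test used for GO/KEGG enrichment reduces to a 2×2 contingency table comparing the predicted target genes against a genome background. The counts below are hypothetical.

```python
from scipy.stats import fisher_exact

# Hypothetical counts for one KEGG pathway: 12 of 300 predicted target
# genes fall in the pathway, versus 150 of 20000 genes genome-wide.
in_path_targets, n_targets = 12, 300
in_path_genome, n_genome = 150, 20000

# 2x2 table: rows = target vs non-target genes, columns = in vs out of pathway.
table = [
    [in_path_targets, n_targets - in_path_targets],
    [in_path_genome - in_path_targets,
     (n_genome - n_targets) - (in_path_genome - in_path_targets)],
]
odds_ratio, p = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, P = {p:.3g}")  # enriched if P < 0.05
```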
Discussion
More and more studies indicate that specific alterations in miRNA expression profiles are associated with specific disease pathophysiologies [8,9,19]. Xiao et al. [16] were the first to report miRNA alterations in the RA associated with AF in MS patients: 28 miRNAs were differentially expressed between MS patients with AF and those in NSR. However, miRNA changes due to AF in the LA of MS patients were still unknown. The present study is the first to create and compare miRNA profiles of the LA of MS patients with and without AF. We found that, in the LA of MS patients, 22 miRNAs were differentially expressed between those with AF and those in NSR.
The results of our study and that of Xiao et al. [16] were completely different, except for miR-26b. After eliminating the influence of the miRNA microarray technologies used in the two studies, we conclude that these differences may, at least in part, reflect different mechanisms underlying AF in the LA and RA. In MS patients, electrical remodeling of both the left and right atria [14] is intrinsic to the initiation, development, and maintenance of AF [25], and morphological differences have also been demonstrated between the two atria [26]. Thus, it is not surprising that AF alters the miRNA expression profiles of the LA of MS patients, and that these alterations may differ from those of the RA. Therefore, investigations into the differences in miRNA expression profiles associated with AF in MS patients should focus not only on the RA but also on the LA.
Cooley et al. [17] investigated the differences in miRNA expression profiles in LAA tissues from valvular heart disease patients and found no detectable differences between patients with AF and those with NSR, a lack that they attributed partially to problems with tissue availability. However, Girmatsion et al. [27] reported that miR-1 was downregulated in human LAA tissue from AF patients (relative to patients without AF who also underwent mitral valve repair or bypass grafting), which is consistent with our finding that miR-1 was downregulated in human LAA tissues from MS patients with AF.
Unfortunately, Girmatsion et al.'s investigation did not utilize miRNA expression profiles. Recently, Luo et al. [28] found that miR-26 family members were significantly downregulated (>50%) in LAAs from a canine AF model (miR-26a) and in right atrial appendages from AF patients (miR-26a and miR-26b). This suggests the possible involvement of these miRNAs in AF pathophysiology and is consistent with our finding that miR-26a-5p was downregulated in LAA tissues from MS patients with AF compared with those who remained in NSR.
Lu et al. [10] found that levels of miR-328 were elevated 3.5- to 3.9-fold in LAAs from dogs with model AF and in right atrial appendages from AF patients, as detected by both microarray and RT-qPCR. In the present study, however, the expression of miR-328 was very low in both the NSR and AF groups and did not differ significantly between them. This contradictory finding may reflect differences in species (dogs in Lu et al., humans in the present study), differences in the tissues sampled (right atrial appendages from AF patients in Lu et al., LAAs from AF patients in the current study), and the heterogeneity of human myocardial samples [29].
Studies have shown that miRNAs may be involved directly or indirectly in AF by modulating atrial electrical remodeling (miR-1, miR-26, miR-328) [10,27,28] or structural remodeling (miR-30, miR-133, miR-590) [7,30]. One study showed that miR-1 overexpression slowed conduction and depolarized the cytoplasmic membrane by post-transcriptionally repressing KCNJ2 (potassium inwardly-rectifying channel, subfamily J, member 2, which encodes the K+ channel subunit Kir2.1) and GJA1 (gap junction protein, alpha 1, 43 kDa, which encodes connexin 43); this likely accounts, at least in part, for its arrhythmogenic potential [31]. Another study indicated that miR-1 levels are greatly reduced in human AF, possibly contributing to upregulation of Kir2.1 subunits and thus to an increased cardiac inward-rectifier potassium current IK1 [27]. A recent study identified miR-26 as a potentially important regulator of KCNJ2 gene expression and, via IK1, a determinant of AF susceptibility [28]. It also identified miR-26 as a potential mediator of the electrophysiological effects of Ca2+-dependent NFAT (nuclear factor of activated T cells) signaling, believed to be important in the perpetuation of AF.
Previously, miR-466, miR-574, and miR-3613 had not been described as participating in cardiovascular pathology. The current study found that these miRNAs are potentially involved in several important biological processes and functional pathways associated with AF (e.g., mTOR, Wnt, and Notch signaling), based on the putative target genes and pathways predicted via miRFocus. Our results may therefore implicate these miRNAs in the pathogenesis of AF.
In MS, the association between LA size and AF is well established, and LA dilatation is considered both a cause and a consequence of AF [13]. Our study found that the expression levels of three validated miRNAs (miR-1, miR-26a-5p, miR-466) correlated with LA size, while those of two others (miR-574-3p, miR-3613-3p) did not. This discrepancy is probably due to the multifactorial nature of AF in MS: persistent rheumatic inflammation and LA fibrosis, as well as LA size and hypertension, likely also contribute to the etiology of AF in MS [13].
The main limitation of this study was the small number of patients included.
This was due, in part, to the difficulty of finding MS patients in NSR. In addition, because this study was performed with native human tissues, we could not conduct experiments to modulate miRNA levels; accordingly, the evidence presented here is indirect. Furthermore, the exact targets and pathways by which alterations in miRNAs cause AF in MS patients remain elusive and deserve further investigation [16]. Finally, the patients in this study were a specific cohort with preserved left ventricular systolic function and little comorbidity who were undergoing mitral valve replacement surgery. Thus, the changes identified in this population may not be representative of other cohorts [27].

Conclusions
This study shows that AF alters the miRNA expression profiles in the LA of MS patients. These findings may be useful for the biological understanding of AF in MS patients and provide potential therapeutic targets for AF [32].

Abbreviations
AF: Atrial fibrillation; CV: Coefficient of variation; GO: Gene Ontology; KEGG: Kyoto Encyclopedia of Genes and Genomes; LA: Left atrium; LAA: Left atrial appendage; miRNA: MicroRNA; MS: Mitral stenosis; NSR: Normal sinus rhythm; NYHA: New York Heart Association; RT-qPCR: Reverse transcription quantitative PCR; TGFBR: Transforming growth factor beta receptor.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
HL performed the molecular studies, participated in the sequence alignment, and drafted the manuscript. GXC, MYL, HQ, JR, and JPY participated in open heart surgery and collected clinical samples. HL and ZKW participated in the design of the study and performed the statistical analyses. ZKW and GXC conceived the study, participated in its design and coordination, and helped to draft the manuscript. HL and GXC contributed equally to this article. All authors read and approved the final manuscript.

Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2261/14/10/prepub

Additional files
Additional file 1: Raw signal values for all miRNAs in the microarray.
Additional file 2: Primers for RT-qPCR of miRNAs and RNU6B.
Additional file 3: RT-qPCR data.
[ null, "methods", null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", "conclusions", null, null, null, null, "supplementary-material" ]
[ "Atrial fibrillation", "Microrna", "Mitral stenosis", "Microarray", "Mirfocus" ]
Background: Atrial fibrillation (AF) is characterized as an irregular and sometimes rapid heart rate, with symptoms that include palpitations and shortness of breath. AF is the most common cardiac arrhythmia observed in clinical practice and constitutes a risk factor for ischemic stroke [1]. Despite recent significant advances in the understanding of the mechanisms associated with AF, complexities in the etiology of atrial electrical dysfunction (including a genetic component [2]) and the subsequent associated arrhythmia have prevented definitive elucidation [3]. The progression from acute to persistent and then chronic AF is accompanied by changes in gene expression that lead to differences in protein expression and activity. MicroRNAs (miRNAs) are regulators of gene expression at the post-transcriptional level [4], and appear to have regulatory roles that underlie the pathophysiology of AF. Many studies have shown that miRNAs regulate key genetic functions in cardiovascular biology and are crucial to the pathogenesis of cardiac diseases such as cardiac development [5], hypertrophy/heart failure [6], remodeling [7], acute myocardial infarction [8], and myocardial ischemia-reperfusion injury [9]. Currently, there is a growing body of literature that indicates that many miRNAs are involved in AF through their target genes [7,10,11]. AF can be an isolated condition, but it often occurs concomitantly with other cardiovascular diseases such as hypertension, congestive heart failure, coronary artery disease, and valvular heart disease [12]. AF is also prevalent in mitral stenosis (MS; a consequence of rheumatic fever), affecting approximately 40% of all MS patients [13]. MS is among the major cardiovascular diseases in developing countries where rheumatic fever is less well controlled, and 50% or more of patients with severe MS have AF. Patients with both AF and MS have a 17.5-fold greater risk of stroke and a four-fold higher incidence of embolism compared with people with normal sinus rhythm (NSR) [14,15]. Structural changes of the left atria (LA) and right atria (RA) associated with AF in MS patients are well established [13,14]. Recently, reports suggest that AF also alters the miRNA expression profiles in RA of MS patients [16,17]. However, miRNA changes in LA from MS patients with AF are still unknown. Given the complexity of the pathophysiology that may be associated with AF, we need a better understanding of the miRNA changes in the LA, which may help in designing and developing new therapeutic interventions. This study investigated alterations of miRNA expression profiles in LA tissues of MS patients with AF relative to MS patients with NSR. Methods: The Human Ethics Committee of First Affiliated Hospital of Sun Yat-sen University approved this study, and the investigation complied with the principles that govern the use of human tissues outlined in the Declaration of Helsinki. All patients gave informed consent before participating in the study. Human tissue preparation Left atrial appendage (LAA) tissue samples were obtained from MS patients, both in NSR (n = 6, without history of AF) and with AF (n = 6, documented arrhythmia >6 months before surgery). The tissue samples were obtained at the time of mitral valve surgery, immediately snap frozen in liquid nitrogen, and stored at -80°C until used. The diagnosis of AF was reached by evaluating medical records and 12-lead electrocardiogram findings. 
NSR patients had no history of using antiarrhythmic drugs and were screened to ensure that they had never experienced AF [18]. Preoperative 2-dimensional color transthoracic echocardiography was performed routinely on the patients. Preoperative functional status was recorded according to New York Heart Association (NYHA) classifications. Left atrial appendage (LAA) tissue samples were obtained from MS patients, both in NSR (n = 6, without history of AF) and with AF (n = 6, documented arrhythmia >6 months before surgery). The tissue samples were obtained at the time of mitral valve surgery, immediately snap frozen in liquid nitrogen, and stored at -80°C until used. The diagnosis of AF was reached by evaluating medical records and 12-lead electrocardiogram findings. NSR patients had no history of using antiarrhythmic drugs and were screened to ensure that they had never experienced AF [18]. Preoperative 2-dimensional color transthoracic echocardiography was performed routinely on the patients. Preoperative functional status was recorded according to New York Heart Association (NYHA) classifications. RNA isolation The total RNA from human LAA tissue samples was extracted using TRIzol reagent (Invitrogen) in accordance with the protocol of the manufacturer. The RNA quality of each sample was determined using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) and immediately stored at -80°C. The total RNA from human LAA tissue samples was extracted using TRIzol reagent (Invitrogen) in accordance with the protocol of the manufacturer. The RNA quality of each sample was determined using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) and immediately stored at -80°C. Microarray processing and analysis The miRNA microarray expression analysis was performed by LC Sciences (Houston, TX, USA) as described previously [19]. In brief, the assay began with a total RNA sample (2 to 5 μg). The total RNA was size-fractionated using a YM-100 Microcon centrifugal filter (Millipore, Billerica, MA). RNA sequences of <300 nt were isolated. These small RNAs were then extended at the 3′ end with a poly(A) tail using poly(A) polymerase, and then by ligation of an oligonucleotide tag to the poly(A) tail for later fluorescent dye staining. Hybridization was performed overnight on a μParaflo microfluidic chip using a micro-circulation pump (Atactic Technologies, Houston, TX). Each microfluidic chip contained detection probes and control probes. The detection probes were made in situ by photogenerated reagents. These probes consisted chemically of modified nucleotide coding sequences complementary to target miRNA (all 1921 human miRNAs listed in the Sanger’s miRNA miRBase, Release 18.0, http://microrna.sanger.ac.uk/sequences/) and a spacer segment of polyethylene glycol to extend the coding sequences away from the substrate. The hybridization melting temperatures were balanced by chemical modifications of the detection probes. Hybridization was performed using 100 μL of 6× saline-sodium phosphate-EDTA (SSPE) buffer (0.90 M NaCl, 60 mM Na2HPO4, 6 mM EDTA, pH 6.8) containing 25% formamide at 34°C. Fluorescence labeling with tag-specific Cy5 dye was used for after-hybridization detection. An Axon GenePix 4000B Microarray Scanner (Molecular Device, Union City, CA) was used to collect the fluorescent images, which were digitized using Array-Pro image analysis software (Media Cybernetics, Bethesda, MD). Each miRNA was analyzed two times and the controls were repeated 4-16 times. 
Analysis of the microarray data was also performed at LC Sciences (see Additional file 1). The microarray data was analyzed by subtracting the background, and then the signals were normalized using a locally weighted regression scatterplot smoothing (LOWESS) filter as reported previously [20]. Detectable miRNAs were selected based on the following criteria: signal intensity >3-fold the background standard deviation, and spot coefficient of variation (CV) < 0.5, where CV = standard deviation/signal intensity. When repeating probes were present on the array, the transcript was listed as detectable only if the signals from at least 50% of the repeating probes were above detection level. To identify miRNAs whose expression differed between the AF and NSR groups, statistical analysis was performed. The ratio of two samples was calculated and expressed in log2scale (balanced) for each miRNA. The miRNAs were then sorted according to their differential ratios. The P-values of the t-test were also calculated. miRNAs with P-values < 0.05 were considered significantly differentially expressed. The miRNA microarray expression analysis was performed by LC Sciences (Houston, TX, USA) as described previously [19]. In brief, the assay began with a total RNA sample (2 to 5 μg). The total RNA was size-fractionated using a YM-100 Microcon centrifugal filter (Millipore, Billerica, MA). RNA sequences of <300 nt were isolated. These small RNAs were then extended at the 3′ end with a poly(A) tail using poly(A) polymerase, and then by ligation of an oligonucleotide tag to the poly(A) tail for later fluorescent dye staining. Hybridization was performed overnight on a μParaflo microfluidic chip using a micro-circulation pump (Atactic Technologies, Houston, TX). Each microfluidic chip contained detection probes and control probes. The detection probes were made in situ by photogenerated reagents. These probes consisted chemically of modified nucleotide coding sequences complementary to target miRNA (all 1921 human miRNAs listed in the Sanger’s miRNA miRBase, Release 18.0, http://microrna.sanger.ac.uk/sequences/) and a spacer segment of polyethylene glycol to extend the coding sequences away from the substrate. The hybridization melting temperatures were balanced by chemical modifications of the detection probes. Hybridization was performed using 100 μL of 6× saline-sodium phosphate-EDTA (SSPE) buffer (0.90 M NaCl, 60 mM Na2HPO4, 6 mM EDTA, pH 6.8) containing 25% formamide at 34°C. Fluorescence labeling with tag-specific Cy5 dye was used for after-hybridization detection. An Axon GenePix 4000B Microarray Scanner (Molecular Device, Union City, CA) was used to collect the fluorescent images, which were digitized using Array-Pro image analysis software (Media Cybernetics, Bethesda, MD). Each miRNA was analyzed two times and the controls were repeated 4-16 times. Analysis of the microarray data was also performed at LC Sciences (see Additional file 1). The microarray data was analyzed by subtracting the background, and then the signals were normalized using a locally weighted regression scatterplot smoothing (LOWESS) filter as reported previously [20]. Detectable miRNAs were selected based on the following criteria: signal intensity >3-fold the background standard deviation, and spot coefficient of variation (CV) < 0.5, where CV = standard deviation/signal intensity. 
Reverse transcription-real time quantitative PCR (RT-qPCR) validation of selected miRNAs
To validate the microarray results in the present study, a stem-loop RT-qPCR based on SYBR Green I was performed on selected differentially expressed miRNAs. The primers used are listed in Additional file 2. Total RNA was isolated using TRIzol reagent (Invitrogen) as described above. A single-stranded cDNA for each specific miRNA was generated by reverse transcription (RT) of 250 ng of total RNA using a miRNA-specific stem-loop RT primer. Briefly, an RT reaction mixture contained 250 ng of total RNA, 0.5 μL of 2 μM stem-loop RT primer, 1.0 μL of 5× RT buffer, 0.25 μL of 10 mM of each dNTP, 0.25 μL of 40 U/μL RNase inhibitor, and 0.5 μL of 200 U/μL Moloney murine leukemia virus (M-MLV) reverse transcriptase. An Eppendorf Mastercycler (Eppendorf, Hamburg, Germany) was used to perform the RT reaction under the following conditions: 42°C for 60 min, 70°C for 15 min, and finally a hold at 4°C. After the RT reaction, qPCR was performed using an ABI PRISM 7900HT sequence-detection system (Applied Biosystems, Foster City, CA, USA) with the Platinum SYBR Green qPCR SuperMix-UDG (Invitrogen). In accordance with the manufacturer's instructions, a 20-μL PCR reaction mixture contained 0.5 μL of RT product, 10 μL of 2× SYBR Green Mix, 0.4 μL of ROX, 0.8 μL of 10 μM primer mix, and 8.3 μL of nuclease-free water. The reaction protocol was 95°C for 2 min, followed by 40 amplification cycles of 95°C for 15 s and 60°C for 30 s. All reactions were run in triplicate. To account for possible differences in the amount of starting RNA, miRNA expression was normalized to the small nuclear RNA RNU6B [21,22]. RT-qPCR data were represented by the cycle threshold (Ct) value, and the relative expression level (i.e., fold change) for each miRNA was calculated using the comparative cycle threshold (2^-ΔΔCt) method [19].
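As a worked illustration of the comparative cycle-threshold calculation, the following sketch (with made-up Ct values, not data from this study) shows how a fold change is derived from raw Ct values after normalization to RNU6B.

```python
def fold_change_ddct(ct_target_af, ct_rnu6b_af, ct_target_nsr, ct_rnu6b_nsr):
    """2^-ddCt method: normalize each group's target Ct to the RNU6B
    reference, then express AF relative to NSR."""
    d_ct_af = ct_target_af - ct_rnu6b_af      # dCt in the AF sample
    d_ct_nsr = ct_target_nsr - ct_rnu6b_nsr   # dCt in the NSR sample
    dd_ct = d_ct_af - d_ct_nsr
    return 2.0 ** (-dd_ct)

# e.g. Ct 21.0/20.0 in AF vs 23.0/20.0 in NSR -> ddCt = -2 -> 4-fold up
print(fold_change_ddct(21.0, 20.0, 23.0, 20.0))  # 4.0
```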
Target prediction and function analysis
We used the miRFocus database (http://mirfocus.org/) to predict potential human miRNA target genes. miRFocus is an open-source web tool developed for rapid analysis of miRNAs. It provides comprehensive information on human miRNAs, including miRNA annotations, miRNA-target gene interactions, and correlations between miRNAs and diseases and signaling pathways. miRFocus provides a full gene description and functional analysis for each target gene by combining the predicted target genes from other databases (TargetScan, miRanda, PicTar, MirTarget, and microT). In this study, only genes predicted by two or more databases were considered candidates; the greater the number of databases predicting a given gene as a target, the more likely the miRNA-mRNA interaction is to be relevant [23]. The miRFocus program also identifies miRNA-enriched pathways, incorporating those from the Kyoto Encyclopedia of Genes and Genomes (KEGG), Biocarta, and Gene Ontology (GO) databases, using Fisher's exact test.

Statistical analyses
All data are presented as mean ± standard deviation and were analyzed with the paired t-test. Spearman's correlation coefficients were used to examine the association between validated miRNAs and left atrial size. P < 0.05 was considered statistically significant.
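The pathway-enrichment step in miRFocus rests on a standard 2×2 Fisher's exact test. The sketch below is a generic reconstruction with hypothetical counts, not miRFocus code.

```python
from scipy.stats import fisher_exact

def pathway_enrichment(hits, targets, pathway_size, genome_size):
    """2x2 Fisher's exact test: are the predicted target genes
    over-represented in a pathway relative to the genome background?"""
    table = [
        [hits, targets - hits],                 # target genes
        [pathway_size - hits,                   # non-target genes
         genome_size - targets - pathway_size + hits],
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

# hypothetical: 15 of 300 predicted targets fall in a 120-gene KEGG pathway
print(pathway_enrichment(15, 300, 120, 20000))
```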
Results

Clinical characteristics of the NSR and AF patients
There were no significant differences in terms of age, gender, or NYHA functional classification between the NSR and AF groups. Preoperative color Doppler echocardiography showed that the left atria of AF patients were significantly larger than those of NSR patients, as previously reported [24], while there were no differences in left ventricular end-diastolic diameter or ejection fraction between the two groups (Table 1: Clinical characteristics of the NSR and AF patients (n = 6, each); *P < 0.05 with respect to NSR patients).

miRNA expression profiles of LAA tissue from MS patients with and without AF
Of the 1898 human miRNAs analyzed, a total of 213 miRNAs were detected; NSR patients expressed 155 miRNAs, while AF patients expressed 208 miRNAs (Figure 1A). Among these, 150 miRNAs were common to both groups, 5 miRNAs were detected only in those with NSR, and 58 miRNAs were detected only in those with AF (Figure 1B). [Figure 1. MiRNAs detected in NSR and AF MS patients. (A) A combined total of 213 miRNAs were detected: the LAAs of NSR patients expressed 155 miRNAs, while those of AF patients expressed 208. (B) Among these, 150 miRNAs were expressed in both NSR and AF patients; 5 were expressed only in NSR and 58 only in AF.] However, the expression levels of most of the detected miRNAs were low, as evidenced by their low signal intensities (less than 500 units). Of the 155 miRNAs detected in the NSR group, 73 emitted signals <500 units, while only 16 were >5000 units. Of the 208 miRNAs detected in patients with AF, 127 had signal intensities <500 units, while only 16 were >5000 units (Figure 2). The signal intensities of the 5 miRNAs detected only in NSR and the 58 miRNAs detected only in AF were all <200 units, not high enough to consider them differentially expressed between NSR and AF; hence they were excluded from further analysis. [Figure 2. Signal distribution of all detected miRNAs in MS patients, shown by microarray assay. The signal intensities of most of these miRNAs were low (0-500 units), with only 11 miRNAs showing intensities greater than 10000 units.]
Differences in miRNA expression profiles of LAA tissues from MS patients with and without AF
Differences existed in the expression levels of the 150 miRNAs detected in both the NSR and AF groups (Table 2: MiRNAs differentially expressed in LAA tissues between MS patients with AF or NSR; *expression levels are described as upregulated or downregulated in AF relative to NSR). Statistical analysis showed that 22 of these miRNAs (15%) were significantly dysregulated in the AF group relative to the NSR group: 10 miRNAs (45%) were upregulated, while 12 (55%) were downregulated (P < 0.05). Most of the miRNAs selected for further analysis via RT-qPCR had a fold change satisfying |log2(fold change)| ≥ 1.5 (Figure 3) and a signal intensity >2000 units in at least one group (Figure 4). Although the fold change of hsa-miR-26a-5p did not meet these criteria (|log2(fold change)| = 1.17), its P-value was <0.01, and thus it was included in our selection. Finally, we selected 5 miRNAs for further analysis: 3 upregulated in the AF group relative to the NSR group (hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p) and 2 downregulated (hsa-miR-1 and hsa-miR-26a-5p). [Figure 3. Volcano plot of the 22 miRNAs significantly dysregulated in LAA tissues, showing their distribution by P-value versus fold change. Those with the highest fold change (|log2(fold change)| ≥ 1.5) are labeled in red; although miR-26a-5p (|log2(fold change)| = 1.17) does not meet this criterion, its P-value is <0.01 (i.e., -log10(P-value) > 2), so it is also labeled red. Figure 4. Comparison of signal intensities of the 22 significantly dysregulated miRNAs in the NSR group (x-axis) and AF group (y-axis); miRNAs with signal intensities >2000 units on either axis are labeled red.]
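The shortlisting rule applied here (significant dysregulation, a strong fold change or a very small P-value, and an adequately bright signal) can be written down explicitly. The sketch below is a reconstruction of the stated criteria with hypothetical record names, not the authors' code.

```python
def shortlist_for_qpcr(mirnas):
    """Each record is a dict with keys: name, log2fc, p,
    signal_nsr, signal_af (mean microarray intensities)."""
    selected = []
    for m in mirnas:
        if m["p"] >= 0.05:                      # must be dysregulated
            continue
        strong = abs(m["log2fc"]) >= 1.5 or m["p"] < 0.01
        bright = max(m["signal_nsr"], m["signal_af"]) > 2000
        if strong and bright:
            selected.append(m["name"])
    return selected
```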
Validation of the miRNA microarray data with RT-qPCR
To validate the data obtained from the miRNA microarray, RT-qPCR was performed on 5 selected miRNAs, and the results were compared with the microarray (Figure 5, Additional file 3). According to the RT-qPCR data, hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p were upregulated in the LAAs of the AF group relative to the NSR group, while hsa-miR-1 and hsa-miR-26a-5p were downregulated. These data are consistent with our microarray data and thus validate the microarray results for these miRNAs. [Figure 5. Validation of miRNA microarray data of selected miRNAs by RT-qPCR. RT-qPCR was performed on 5 differentially expressed miRNAs (3 upregulated: miR-466, miR-574-3p, miR-3613-3p; 2 downregulated: miR-1, miR-26a-5p) to confirm the microarray data. (A) Microarray results: expression levels of the 5 miRNAs in the AF group based on fold changes of signal intensity relative to the NSR group; log2(fold change) values are plotted on the y-axis. (B) RT-qPCR results: the expression level of each miRNA in the AF group relative to the NSR group, estimated by the comparative cycle threshold (2^-ΔΔCt) method; log2(relative expression) values are plotted on the y-axis. There is a significant difference for each miRNA, consistent with the microarray result.]
Association between LA size and changes in miRNA expression
We investigated whether the changes observed in the expression levels of the 5 validated miRNAs between the LAA tissues of NSR and AF patients correlated with LA size. Spearman's correlation analysis showed a positive correlation between the expression level of miR-466 in LAA and LA size (r = 0.73; P = 0.007). Moreover, there were significant negative correlations between the expression levels of miR-1 and miR-26a-5p in LAAs and LA size (r = -0.81; P = 0.002 and r = -0.86; P < 0.001, respectively). However, although the expression levels of miR-574-3p and miR-3613-3p in LAAs differed significantly between NSR and AF patients, they were not significantly correlated with LA size (r = 0.54; P = 0.07 and r = 0.56; P = 0.06, respectively; Figure 6). [Figure 6. Correlations between LA size and the relative expression levels of selected miRNAs. Spearman's correlation analysis showed a positive correlation between miR-466 expression and LA size, and negative correlations between miR-1 and miR-26a-5p expression and LA size; miR-574-3p and miR-3613-3p expression levels were not significantly correlated with LA size.]
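For completeness, the correlation step amounts to a single call to a rank-correlation routine. The data below are invented for illustration; only the analysis pattern mirrors the text.

```python
from scipy.stats import spearmanr

# hypothetical paired observations across the 12 patients
la_size_mm = [38, 41, 40, 44, 52, 55, 58, 60, 57, 62, 65, 63]
mir466_rel = [0.8, 1.0, 0.9, 1.1, 1.6, 1.9, 2.0, 2.4, 2.1, 2.6, 3.0, 2.8]

rho, p_value = spearmanr(la_size_mm, mir466_rel)
print(f"Spearman r = {rho:.2f}, P = {p_value:.4f}")  # positive, as in Figure 6
```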
Prediction of putative target genes and pathways of the differentially expressed miRNAs
To determine the probable biological functions of the differentially expressed miRNAs, we predicted the putative targets and pathways of the 5 validated miRNAs (hsa-miR-1, hsa-miR-26a-5p, hsa-miR-466, hsa-miR-574-3p, and hsa-miR-3613-3p) using the miRFocus database. Numerous putative target genes and pathways were identified for the 5 miRNAs. hsa-miR-1 and hsa-miR-26a-5p were predicted by 5 target prediction databases, hsa-miR-574-3p by 4, and hsa-miR-466 and hsa-miR-3613-3p by 2 (Table 3: Prediction of putative target genes and pathways of selected miRNAs). The biological function and potential functional pathways of each putative gene target were classified using GO terms and KEGG pathways (Table 4: Biological processes of the predicted miRNA targets; Table 5: Pathway analysis of the selected miRNAs). Since every gene is associated with many GO terms and KEGG pathways, the significant GO term and KEGG pathway for each miRNA were identified with Fisher's exact test. The target analysis indicated that the predicted genes are linked to important biological processes such as the regulation of protein metabolism, transcription factor activity, cell division, and the transforming growth factor beta receptor (TGFBR) signaling pathway. These results suggest that these miRNAs have roles in human health and disease regulation, and the pathway analysis likewise suggested important regulatory roles in diverse biological processes.
Discussion
A growing number of studies indicate that specific alterations in miRNA expression profiles are associated with specific disease pathophysiologies [8,9,19]. Xiao et al. [16] were the first to report miRNA alterations in the RA associated with AF in MS patients; 28 miRNAs were differentially expressed between MS patients with AF and those in NSR. However, miRNA changes due to AF in the LA of MS patients are still unknown. The present study is the first to create and compare miRNA profiles of the LA of MS patients with and without AF. We found that in the LA of MS patients, 22 miRNAs were differentially expressed between those with AF and those in NSR. The results of our study and that of Xiao et al. [16] were completely different, except for miR-26b. After eliminating the influences of the miRNA microarray technologies used in the two studies, we conclude that these differences may, at least in part, reflect different mechanisms of AF between the LA and RA. In MS patients, electrical remodeling of both the left and right atria [14] is intrinsic to the initiation, development, and maintenance of AF [25], and morphological differences have also been demonstrated between the two atria [26]. Thus, it is not surprising that AF alters the miRNA expression profiles of the LA of MS patients, and that these alterations may differ from those of the RA. Therefore, investigations into AF-associated differences in miRNA expression profiles in MS patients should focus not only on the RA but also on the LA. Cooley et al. [17] investigated differences in miRNA expression profiles in LAA tissues from valvular heart disease patients and found no detectable differences between patients with AF and those with NSR, a lack that they attributed partially to problems with tissue availability. However, Girmatsion et al. [27] reported that miR-1 was downregulated in human LAA tissue from AF patients (relative to patients without AF who also underwent mitral valve repair or bypass grafting), which is consistent with our finding that miR-1 was downregulated in human LAA tissues from MS patients with AF.
Unfortunately, Girmatsion et al.'s investigation did not utilize miRNA expression profiles. Recently, Luo et al. [28] found that miR-26 family members were significantly downregulated (>50%) in LAAs from a canine AF model (miR-26a) and in right atrial appendages from AF patients (miR-26a and miR-26b). This suggests the possible involvement of these miRNAs in AF pathophysiology and is consistent with our finding that miR-26a-5p was downregulated in LAA tissues from MS patients with AF compared with those who remained in NSR. Lu et al. [10] found that levels of miR-328 were elevated 3.5- to 3.9-fold in LAAs from dogs with model AF and in right atrial appendages from AF patients, as detected by both microarray and RT-qPCR. In the present study, however, we found that the expression of miR-328 was very low in both the NSR and AF groups and did not differ significantly between them. This contradictory finding may reflect differences in species (dogs in Lu et al., humans in the present study), differences in the tissues sampled (right atrial appendages from AF patients in Lu et al., LAAs from AF patients in the current study), and the heterogeneity of human myocardial samples [29]. Studies have shown that miRNAs may be involved directly or indirectly in AF by modulating atrial electrical remodeling (miR-1, miR-26, miR-328) [10,27,28] or structural remodeling (miR-30, miR-133, miR-590) [7,30]. One study showed that miR-1 overexpression slowed conduction and depolarized the cytoplasmic membrane by post-transcriptionally repressing KCNJ2 (potassium inwardly-rectifying channel, subfamily J, member 2, which encodes the K+ channel subunit Kir2.1) and GJA1 (gap junction protein, alpha 1, 43 kDa, which encodes connexin 43), and this likely accounts, at least in part, for its arrhythmogenic potential [31]. Another study indicated that miR-1 levels are greatly reduced in human AF, possibly contributing to upregulation of Kir2.1 subunits and leading to increased cardiac inward-rectifier potassium current IK1 [27]. A recent study identified miR-26 as a potentially important regulator of KCNJ2 gene expression and, via IK1, a determinant of AF susceptibility [28]. It also identified miR-26 as a potential mediator of the electrophysiological effects of Ca2+-dependent NFAT (nuclear factor of activated T cells) signaling, believed to be important in the perpetuation of AF. Previously, miR-466, miR-574, and miR-3613 had not been described as participating in cardiovascular pathology. The current study found that these miRNAs are potentially involved in several important biological processes and functional pathways associated with AF (e.g., mTOR, Wnt, and Notch signaling), based on the putative target genes and pathways predicted via miRFocus. Our results may implicate these miRNAs in the pathogenesis of AF. In MS, the association between LA size and AF is well established, and LA dilatation is considered both a cause and a consequence of AF [13]. Our study found that the expression levels of three validated miRNAs (miR-1, miR-26a-5p, miR-466) correlated with LA size, while those of two others (miR-574-3p, miR-3613-3p) did not. This discrepancy is probably due to the multifactorial nature of AF in MS: for example, persistent rheumatic inflammation and LA fibrosis likely contribute to the etiology of AF in MS, in addition to LA size and hypertension [13]. The main limitation of this study was the small number of patients included.
This was due, in part, to the difficulty of finding MS patients in NSR. In addition, because this study was performed with native human tissues, we could not conduct experiments to modulate miRNA levels; accordingly, the evidence presented here is indirect. Furthermore, the exact targets and pathways by which alterations in miRNAs cause AF in MS patients remain elusive and deserve further investigation [16]. Finally, the patients in this study were a specific cohort undergoing mitral valve replacement surgery, with preserved left ventricular systolic function and little comorbidity; changes identified in this population may therefore not be representative of other cohorts [27].

Conclusions
This study shows that AF alters the miRNA expression profiles in the LA of MS patients. These findings may be useful for the biological understanding of AF in MS patients and provide potential therapeutic targets for AF [32].

Abbreviations
AF: atrial fibrillation; CV: coefficient of variation; GO: Gene Ontology; KEGG: Kyoto Encyclopedia of Genes and Genomes; LA: left atrium/left atrial; LAA: left atrial appendage; miRNA: microRNA; MS: mitral stenosis; NSR: normal sinus rhythm; NYHA: New York Heart Association; RT-qPCR: reverse-transcription quantitative PCR; TGFBR: transforming growth factor beta receptor.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
HL performed the molecular studies, participated in the sequence alignment, and drafted the manuscript. GXC, MYL, HQ, JR, and JPY participated in open heart surgery and collected clinical samples. HL and ZKW participated in the design of the study and performed the statistical analyses. ZKW and GXC conceived the study, participated in its design and coordination, and helped to draft the manuscript. HL and GXC contributed equally to this article. All authors read and approved the final manuscript.

Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2261/14/10/prepub

Supplementary Material
Additional file 1: Raw signal values for all miRNAs in the microarray. Additional file 2: Primers for RT-qPCR of miRNAs and RNU6B. Additional file 3: RT-qPCR data.
Background: Structural changes of the left and right atria associated with atrial fibrillation (AF) in mitral stenosis (MS) patients are well known, and alterations in microRNA (miRNA) expression profiles of the right atria have also been investigated. However, miRNA changes in the left atria still require delineation. This study evaluated alterations in miRNA expression profiles of left atrial tissues from MS patients with AF relative to those with normal sinus rhythm (NSR). Methods: Sample tissues from left atrial appendages were obtained from 12 MS patients (6 with AF) during mitral valve replacement surgery. From these tissues, miRNA expression profiles were created and analyzed using a human miRNA microarray. Results were validated via reverse-transcription and quantitative PCR for 5 selected miRNAs. Potential miRNA targets were predicted and their functions and potential pathways analyzed via the miRFocus database. Results: The expression levels of 22 miRNAs differed between the AF and NSR groups. Relative to NSR patients, in those with AF the expression levels of 45% (10/22) of these miRNAs were significantly higher, while those of the balance (55%, 12/22) were significantly lower. Potential miRNA targets and molecular pathways were identified. Conclusions: AF alters the miRNA expression profiles of the left atria of MS patients. These findings may be useful for the biological understanding of AF in MS patients.
Keywords: Atrial fibrillation; microRNA; mitral stenosis; microarray; miRFocus
MeSH terms: Adult; Atrial Appendage; Atrial Fibrillation; Case-Control Studies; Female; Gene Expression Profiling; Gene Expression Regulation; Gene Regulatory Networks; Humans; Male; MicroRNAs; Middle Aged; Mitral Valve Stenosis; Oligonucleotide Array Sequence Analysis; Real-Time Polymerase Chain Reaction; Reverse Transcriptase Polymerase Chain Reaction
Mild-to-Moderate Kidney Dysfunction and Cardiovascular Disease: Observational and Mendelian Randomization Analyses.
36314129
End-stage renal disease is associated with a high risk of cardiovascular events. It is unknown, however, whether mild-to-moderate kidney dysfunction is causally related to coronary heart disease (CHD) and stroke.
BACKGROUND
Observational analyses were conducted using individual-level data from 4 population data sources (Emerging Risk Factors Collaboration, EPIC-CVD [European Prospective Investigation into Cancer and Nutrition-Cardiovascular Disease Study], Million Veteran Program, and UK Biobank), comprising 648 135 participants with no history of cardiovascular disease or diabetes at baseline, yielding 42 858 and 15 693 incident CHD and stroke events, respectively, during 6.8 million person-years of follow-up. Using a genetic risk score of 218 variants for estimated glomerular filtration rate (eGFR), we conducted Mendelian randomization analyses involving 413 718 participants (25 917 CHD and 8622 strokes) in EPIC-CVD, Million Veteran Program, and UK Biobank.
METHODS
There were U-shaped observational associations of creatinine-based eGFR with CHD and stroke, with higher risk in participants with eGFR values <60 or >105 mL·min-1·1.73 m-2, compared with those with eGFR between 60 and 105 mL·min-1·1.73 m-2. Mendelian randomization analyses for CHD showed an association among participants with eGFR <60 mL·min-1·1.73 m-2, with a 14% (95% CI, 3%-27%) higher CHD risk per 5 mL·min-1·1.73 m-2 lower genetically predicted eGFR, but not for those with eGFR >105 mL·min-1·1.73 m-2. Results were not materially different after adjustment for factors associated with the eGFR genetic risk score, such as lipoprotein(a), triglycerides, hemoglobin A1c, and blood pressure. Mendelian randomization results for stroke were nonsignificant but broadly similar to those for CHD.
RESULTS
In people without manifest cardiovascular disease or diabetes, mild-to-moderate kidney dysfunction is causally related to risk of CHD, highlighting the potential value of preventive approaches that preserve and modulate kidney function.
CONCLUSIONS
[ "Humans", "Mendelian Randomization Analysis", "Prospective Studies", "Cardiovascular Diseases", "Coronary Disease", "Risk Factors", "Diabetes Mellitus", "Stroke", "Kidney" ]
9662821
Study Design and Study Overview
This study involved interrelated components (Figure 1). First, we characterized observational associations between eGFR and incident CHD or stroke, using data from the Emerging Risk Factors Collaboration,21 EPIC-CVD (European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease Study),22 Million Veteran Program (MVP),23 UK Biobank (UKB),24 collectively involving 648 135 participants, who had serum creatinine measurements but no known CVD or diabetes at baseline. Second, we constructed a genetic risk score (GRS) for eGFR by computing a weighted sum of eGFR-associated index variants reported in a discovery genome-wide association study from the CKDGen consortium comprising 567 460 participants with European ancestry,25 none of whom were from MVP, EPIC-CVD, or UKB. Third, we used this GRS to conduct Mendelian randomization analyses in a total of 413 718 participants (ie, EPIC-CVD, MVP, UKB), with concomitant individual-level information on genetics, serum creatinine, and disease outcomes. Fourth, to assess the potential for interference by horizontal pleiotropy26 and explore potential mechanisms that could mediate associations between eGFR and CVD outcomes, we studied our GRS for eGFR in relation to several established and emerging risk factors for CVD.
Figure 1. Study design and overview. CHD indicates coronary heart disease; CKDGen, CKD Genetics consortium; CVD, cardiovascular disease; eGFR, estimated glomerular filtration rate; EPIC-CVD, European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease; ERFC, Emerging Risk Factors Collaboration; MVP, Million Veteran Program; NMR, nuclear magnetic resonance; and UKB, UK Biobank.
Methods
The data, code, and study material that support the findings of this study are available from the corresponding author on reasonable request.
Data Sources
Information on each of the data sources used in the analysis is provided in the Expanded Methods in the Supplemental Material. In brief, Emerging Risk Factors Collaboration, a global consortium of population cohort studies with harmonized individual-participant data for multiple CVD risk factors, has included 47 studies with available information on serum creatinine and diabetes status at recruitment.21 EPIC-CVD, a case-cohort study embedded in the pan-European EPIC prospective study of >500 000 participants, has recorded data on serum creatinine and imputed genome-wide array data from 21 of its 23 recruitment centers.22 MVP, a prospective cohort study recruited from 63 Veterans Health Administration medical facilities throughout the United States, has recorded serum creatinine, and imputed genome-wide array data are available for a large subset of its participants.23 UKB, a prospective study of 22 recruitment centers across the United Kingdom, has cohort-wide information on serum creatinine and imputed genome-wide array data.24 Relevant ethical approval and participant consent were already obtained in all studies that contributed data to this work.
Estimation of Kidney Function
Kidney function was estimated using creatinine-based eGFR, calculated using the Chronic Kidney Disease Epidemiology Collaboration equation.27 Creatinine concentration was multiplied by 0.95 for studies in which measurements were not standardized to isotope-dilution mass spectrometry.25,28 In a subset of participants with available data, kidney function was also defined using the Chronic Kidney Disease Epidemiology Collaboration cystatin C–based equation29 and albuminuria measured as spot urine albumin-to-creatinine ratio (Expanded Methods).
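The creatinine-based eGFR calculation above follows the 2009 CKD-EPI equation, which has a simple closed form. Below is a minimal Python sketch; the function name and argument layout are illustrative (not from the paper), the idms_standardized flag mirrors the 0.95 rescaling described above, and the race coefficient of the published 2009 equation is exposed as an option because the text does not state whether it was applied.

```python
def ckd_epi_2009_egfr(scr_mg_dl: float, age: float, female: bool,
                      black: bool = False, idms_standardized: bool = True) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from the 2009 CKD-EPI creatinine equation.

    scr_mg_dl: serum creatinine in mg/dL.
    idms_standardized: if False, rescale by 0.95, as in the study's harmonization step.
    """
    if not idms_standardized:
        scr_mg_dl *= 0.95  # calibration for assays not traceable to IDMS
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # race coefficient of the published 2009 equation
    return egfr

# Example: a 60-year-old woman with serum creatinine 1.1 mg/dL
print(round(ckd_epi_2009_egfr(1.1, 60, female=True), 1))
```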
Observational Analyses
Primary outcomes were incident CHD and stroke. Details of end-point definitions for each study are provided in Table S1. Participants in the contributing studies were eligible for inclusion in the present analysis if they met all of the following criteria: (1) aged 30 to 80 years at recruitment; (2) had recorded information on age, sex, circulating creatinine, and diabetes status; (3) had a creatinine-based eGFR of <300 mL·min–1·1.73 m–2; (4) did not have a known history of CVD or diabetes at baseline; (5) had complete information on the risk factors of smoking status, systolic blood pressure, total cholesterol, high-density lipoprotein cholesterol, and body mass index; and (6) had at least 1 year of follow-up data after recruitment.
Hazard ratios for associations of creatinine-based eGFR with incident CHD and stroke were calculated using Cox regression, stratified by sex and study center, and when appropriate, adjusted for traditional vascular risk factors (defined here as age, systolic blood pressure, smoking status, total cholesterol, high-density lipoprotein cholesterol, and body mass index) on a complete-case basis. To account for the EPIC-CVD case-cohort design, Cox models were adapted using Prentice weights.30 To avoid overfitting models, studies contributing <20 incident events to the analysis of a particular outcome were excluded from the analysis. Fractional polynomials were used to characterize nonlinear relationships of creatinine-based eGFR with risk of CHD and stroke, adjusted for age and CVD risk factors.31 Study-specific estimates for each outcome were pooled across studies using multivariable random-effects meta-analysis, using a reference point of 90 mL·min–1·1.73 m–2. When information on urinary biomarkers in UKB was available, participants were grouped into tenths on the basis of levels of urinary albumin-to-creatinine ratio to assess the shapes of associations between urinary biomarkers and CVD risk, using participants without albuminuria as the reference group.32
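To make the observational model concrete, here is a rough sketch of a sex- and center-stratified, risk-factor-adjusted Cox fit using the lifelines package on simulated data. The column names, the data-generating step, and the constant Prentice weight are all assumptions for illustration; the actual analysis used study-specific harmonized data, with Prentice weighting applied only to the EPIC-CVD case-cohort sample.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-in for the analysis frame (all values simulated).
df = pd.DataFrame({
    "age": rng.uniform(30, 80, n),
    "sbp": rng.normal(130, 15, n),
    "egfr": rng.normal(90, 18, n),
    "sex": rng.integers(0, 2, n),
    "center": rng.integers(0, 5, n),
    "weight": 1.0,  # Prentice weights would differ from 1 only in EPIC-CVD
})
# Simulated follow-up: hazard rises with age and with eGFR < 60.
hazard = 0.002 * np.exp(0.03 * (df.age - 55) + 0.4 * (df.egfr < 60))
df["time"] = rng.exponential(1 / hazard)
df["event"] = (df.time < 10).astype(int)   # administrative censoring at 10 years
df["time"] = df.time.clip(upper=10)
df["egfr_lt60"] = (df.egfr < 60).astype(int)

cph = CoxPHFitter()
cph.fit(df.drop(columns="egfr"), duration_col="time", event_col="event",
        strata=["sex", "center"],       # stratified baseline hazards, as described above
        weights_col="weight",           # case-cohort weights
        robust=True)                    # robust SEs are advisable with weights
cph.print_summary()
```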
GRS for Kidney Function
Using individual-participant data from EPIC-CVD, MVP, and UKB, we calculated a GRS33 weighted by the conditional effect estimates of the genetic variants associated (P<5×10–8) with creatinine-based eGFR in CKDGen,25 a global genetics consortium that has published genome-wide association study summary statistics for creatinine-based eGFR. Of the 262 variants associated with creatinine-based eGFR, 37 were excluded because of ancestry heterogeneity as reported in CKDGen,25 4 were excluded because of associations (P<5×10–8) with vascular risk factors as reported in previous genome-wide association studies (ie, smoking status, alcohol consumption, and educational attainment),34 and 3 were excluded because of missingness in at least 1 of the contributing studies, leaving 218 variants for the primary GRS for creatinine-based eGFR.
In sensitivity analysis, we constructed 2 restricted GRSs using 126 and 121 genetic variants that were likely to be relevant for kidney function on the basis of their associations with cystatin C–based eGFR35 and blood urine nitrogen,25 respectively. Sensitivity analysis was also conducted using a GRS that included all 262 transancestry eGFR-associated index variants.
Furthermore, to evaluate traits that could mediate or confound (through horizontal pleiotropy) the associations between genetically predicted eGFR and outcomes, we tested associations of GRSs for eGFR with a range of cardiovascular risk factors in UKB and EPIC-CVD and with 167 metabolites measured using targeted high-throughput nuclear magnetic resonance metabolomics (Nightingale Health Ltd) in UKB.
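The GRS itself is just a weighted allele count across the retained variants. A minimal numpy sketch, with simulated dosages and weights standing in for the 218 variants and their CKDGen conditional effect estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n_participants, n_variants = 1000, 218

# Imputed allele dosages in [0, 2]; in practice these come from genotype data.
dosages = rng.uniform(0, 2, size=(n_participants, n_variants))
# Simulated stand-ins for the per-allele conditional effects on eGFR.
weights = rng.normal(0, 0.1, size=n_variants)

grs = dosages @ weights                        # weighted sum across variants
grs_std = (grs - grs.mean()) / grs.std()       # per-SD scaling for association analyses
```

One common convention, assumed here, is to orient each variant so the counted allele is the eGFR-increasing allele (making all weights positive); the per-SD scaling matches how the GRS-eGFR association is reported in the Results.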
Mendelian Randomization Analyses
To account for the nonlinear relationship between eGFR and risk of CVD outcomes in observational analyses, we performed a stratified Mendelian randomization analysis using methods described previously.16–20 For each participant, we calculated the residual eGFR by subtracting the genetic contribution determined by the GRS from observed eGFR. Participants were grouped on the basis of their residual eGFR into 5-unit categories between 45 and <105 mL·min–1·1.73 m–2, plus <45 and ≥105 mL·min–1·1.73 m–2. By stratifying on residual eGFR, we compared individuals in the population who would have an eGFR in the same category if they had the same genotype, and reduced the potential influence of collider bias. We then calculated Mendelian randomization estimates for each eGFR category using the ratio method with the GRS as an instrumental variable, adjusting for age, age-squared, sex, study center, and the first 10 principal components. Stratum-specific estimates were combined across studies using fixed-effect meta-analysis and plotted as a piecewise-linear function of eGFR, with pointwise confidence intervals calculated by resampling the stratum-specific estimates. Sensitivity analyses used a nonparametric doubly-ranked stratification method. Detailed methods describing the statistical analyses are provided in the Expanded Methods. Analyses used STATA 15.1 and R 3.6.1.
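A compact sketch of the stratified (nonlinear) Mendelian randomization recipe described above, on simulated data: remove the GRS contribution from observed eGFR to get a residual, stratify on the residual, and within each stratum take the ratio (Wald) estimate, i.e. the GRS-outcome association divided by the GRS-eGFR association. The stratum boundaries, the logistic outcome model, and all numbers are illustrative; the real analysis additionally adjusted for age, age-squared, sex, center, and 10 principal components, and used finer 5-unit strata.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 50_000
grs = rng.normal(size=n)                          # standardized genetic risk score
egfr = 90 + 5 * grs + rng.normal(0, 15, size=n)   # simulated observed eGFR
# Simulated CHD risk: excess risk only below 60 mL/min/1.73 m^2.
logit = -4 + 0.04 * np.clip(60 - egfr, 0, None)
chd = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Residual eGFR = observed minus the genetic contribution; stratify on the residual.
beta_gx_all = sm.OLS(egfr, sm.add_constant(grs)).fit().params[1]
residual = egfr - beta_gx_all * grs
strata = np.digitize(residual, bins=[60, 75, 90, 105])  # coarser than the paper's 5-unit bins

for s in np.unique(strata):
    idx = strata == s
    # Instrument-outcome association (log odds per unit GRS) within the stratum ...
    b_gy = sm.Logit(chd[idx], sm.add_constant(grs[idx])).fit(disp=0).params[1]
    # ... and instrument-exposure association within the stratum.
    b_gx = sm.OLS(egfr[idx], sm.add_constant(grs[idx])).fit().params[1]
    # Ratio (Wald) estimate, expressed per 5 units *lower* genetically predicted eGFR.
    print(f"stratum {s}: log-OR per 5-unit lower eGFR ~ {-5 * b_gy / b_gx:.3f}")
```

On this simulated data the low-eGFR stratum shows a positive log odds ratio and the higher strata hover near zero, mirroring the threshold-shaped pattern the study reports.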
Results
Among the 648 135 participants without history of CVD or diabetes at baseline, the mean age was 57 years, 57% were men, and 4.4% had creatinine-based eGFR <60 mL·min–1·1.73 m–2 (Table 1, Tables S2 and S3). During 6.8 million person-years of follow-up, there were 42 858 incident CHD outcomes and 15 693 strokes. Up to 413 718 participants of European ancestry from EPIC-CVD, MVP, and UKB contributed to the main genetic analyses (Figure 1). Distributions of serum creatinine concentration and creatinine-based eGFR were broadly similar across studies (Figures S1 and S2).
Table 1. Study-Level and Participant-Level Characteristics of the Contributing Data Sources
Observational Associations of eGFR With Cardiovascular Outcomes
For both CHD and stroke, there were U-shaped associations of creatinine-based eGFR. Compared with participants with creatinine-based eGFR values between 60 and 105 mL·min–1·1.73 m–2, risks of both CHD and stroke were higher in people with eGFR <60 or >105 mL·min–1·1.73 m–2 (Figure 2, Figure S3). The shapes of these associations did not change substantially after adjustment for several traditional risk factors (Figure 2). Associations were similar in men and women, in clinically relevant subgroups (ie, smokers, people with obesity, or hypertension; Figure S4), in the different studies contributing to this analysis (Figure S5), and when participants with a history of diabetes or missing information on cardiovascular risk factors were included (Figures S6 and S7). Similar associations were also observed for ischemic stroke (Figure S3).
Figure 2. Observational associations of eGFR levels with risk of coronary heart disease and stroke (n=648 135). Participants with missing information on age and CVD risk factors (systolic blood pressure, total and high-density lipoprotein cholesterol, body mass index, and smoking status) were excluded from the analyses. Hazard ratios were estimated using Cox regression, adjusting for age and CVD risk factors (systolic blood pressure, total and high-density lipoprotein cholesterol, body mass index, and smoking status), and stratified by sex and study center. The reference point is 90 mL·min–1·1.73 m–2. Shaded regions indicate 95% CIs. CVD indicates cardiovascular disease; and eGFR, estimated glomerular filtration rate.
For the 338 044 participants in UKB with available data on serum cystatin C and urinary albumin-to-creatinine ratio, there were broadly similar associations of CHD or stroke with cystatin C–based eGFR as with creatinine-based eGFR, but only when eGFR values were lower than ≈90 mL·min–1·1.73 m–2. However, there was no evidence of higher risk of CHD in participants with cystatin C–based eGFR values >105 mL·min–1·1.73 m–2 (Figure S8), in contrast with creatinine-based eGFR values >105 mL·min–1·1.73 m–2. Levels of urinary microalbumin and urinary albumin-to-creatinine ratio showed approximately linear associations with risk of CHD and stroke, which were somewhat attenuated after adjustment for traditional risk factors (Figure S9). Compared with participants with a creatinine-based eGFR of 75 to <90 mL·min–1·1.73 m–2 and without albuminuria, participants with albuminuria had higher risk of CHD and stroke (Figure S10).
Mendelian Randomization of Genetically Predicted eGFR With Cardiovascular Outcomes
The GRS for eGFR (Table S4) explained 2.0% of variation in creatinine-based eGFR in EPIC-CVD, 2.2% in MVP, and 3.2% in UKB. A 1 SD increase in the GRS for eGFR was associated with 0.18 SD higher creatinine-based eGFR (Table S5, Figure S11). The GRS for eGFR was not associated with body mass index, diabetes, smoking status, or low-density lipoprotein cholesterol concentrations but showed modest associations with lipoprotein(a), triglycerides, blood pressure, and hemoglobin A1c measurement (Figure S11). Modest associations were also observed between the GRS for eGFR and triglyceride-related lipoprotein subclasses in a subset of participants with available data (Figure S12).
In nonlinear Mendelian randomization analysis, we observed a curvilinear relationship between genetically predicted eGFR and CHD (Figure 3). Among participants with eGFR <60 mL·min–1·1.73 m–2, each 5 mL·min–1·1.73 m–2 lower genetically predicted eGFR was associated with 14% (95% CI, 3%–27%) higher risk of CHD (Table 2). There was no clear evidence of association among participants with eGFR >75 mL·min–1·1.73 m–2 (Figure 3). Similar, but not statistically significant, associations were observed for stroke (Table 2, Figure 3). Overall, stratum-specific localized average causal estimates and nonlinear Mendelian randomization estimates were compatible across the studies contributing to this analysis (Table S6, Figure S13). Findings were supported in analyses using the nonparametric doubly-ranked stratification method (Table S7, Figure S14). Similar associations were observed in analyses that further adjusted for systolic blood pressure, lipoprotein(a), hemoglobin A1c, and triglycerides (Figure S15), included participants with a history of diabetes at baseline (Figure S16), or used ischemic stroke as the stroke outcome (Figure S17). Results were also similar using GRS for cystatin C–based eGFR, blood urine nitrogen, or variants associated with creatinine-based eGFR regardless of ancestry heterogeneity (Figure S18).
Table 2. Mendelian Randomization Estimates per 5 mL·min–1·1.73 m–2 Lower Genetically Predicted eGFR With Risk of Coronary Heart Disease and Stroke
Figure 3. Associations of genetically predicted eGFR with risk of coronary heart disease and stroke (n=413 718). The reference point is 90 mL·min–1·1.73 m–2. Gradients at each point of the curve represent the localized average causal effect on coronary heart disease or stroke per 5 mL·min–1·1.73 m–2 change in genetically predicted eGFR. The vertical lines represent 95% CIs. Analyses were adjusted for age, age-squared, sex, study center, and the first 10 principal components of ancestry. eGFR indicates estimated glomerular filtration rate.
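The cross-study pooling step referred to above (fixed-effect, inverse-variance meta-analysis of stratum-specific estimates) reduces to a few lines; the log hazard ratios and standard errors below are placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical stratum-specific log hazard ratios and SEs from three studies.
betas = np.array([0.12, 0.15, 0.10])
ses = np.array([0.05, 0.07, 0.06])

w = 1 / ses**2                                  # inverse-variance weights
beta_pooled = np.sum(w * betas) / np.sum(w)     # fixed-effect pooled estimate
se_pooled = np.sqrt(1 / np.sum(w))
ci = beta_pooled + np.array([-1.96, 1.96]) * se_pooled
print(f"pooled log-HR {beta_pooled:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f})")
```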
Conclusions
In people without manifest CVD or diabetes, mild-to-moderate kidney dysfunction was causally related to cardiovascular outcomes, highlighting the potential cardiovascular benefit of preventive approaches that improve kidney function.
[ "What Is New?", "What Are the Clinical Implications?", "Data Sources", "Estimation of Kidney Function", "Observational Analyses", "GRS for Kidney Function", "Mendelian Randomization Analyses", "Observational Associations of eGFR With Cardiovascular Outcomes", "Mendelian Randomization of Genetically Predicted eGFR With Cardiovascular Outcomes", "Article Information", "Acknowledgments", "Sources of Funding", "Supplemental Material" ]
[ "In people without manifest cardiovascular disease or diabetes, there is a nonlinear causal relationship between kidney function and coronary heart disease.\nEven mildly reduced kidney function is causally associated with higher risk of coronary heart disease with a possible risk threshold for eGFR value of ≈75 mL·min–1·1.73 m–2.\nThe effect of reduced kidney function on coronary heart disease is independent of traditional cardiovascular risk factors.", "Preventive approaches that can preserve and modulate kidney function can help prevent cardiovascular diseases.\nGiven the nonlinear causal relationship, it may be a preferable strategy to identify individuals in the population with mild-to-moderate kidney dysfunction and target them for renoprotective interventions alongside routine strategies to reduce cardiovascular risk.", "Information on each of the data sources used in the analysis is provided in the Expanded Methods in the Supplemental Material. In brief, Emerging Risk Factors Collaboration, a global consortium of population cohort studies with harmonized individual-participant data for multiple CVD risk factors, has included 47 studies with available information on serum creatinine and diabetes status at recruitment.21 EPIC-CVD, a case-cohort study embedded in the pan-European EPIC prospective study of >500 000 participants, has recorded data on serum creatinine and imputed genome-wide array data from 21 of its 23 recruitment centers.22 MVP, a prospective cohort study recruited from 63 Veterans Health Administration medical facilities throughout the United States, has recorded serum creatinine, and imputed genome-wide array data are available for a large subset of its participants.23 UKB, a prospective study of 22 recruitment centers across the United Kingdom, has cohort-wide information on serum creatinine and imputed genome-wide array data.24 Relevant ethical approval and participant consent were already obtained in all studies that contributed data to this work.", "Kidney function was estimated using creatinine-based eGFR, calculated using the Chronic Kidney Disease Epidemiology Collaboration equation.27 Creatinine concentration was multiplied by 0.95 for studies in which measurements were not standardized to isotope-dilution mass spectrometry.25,28 In a subset of participants with available data, kidney function was also defined using the Chronic Kidney Disease Epidemiology Collaboration cystatin C–based equation29 and albuminuria measured as spot urine albumin-to-creatinine ratio (Expanded Methods).", "Primary outcomes were incident CHD and stroke. Details of end-point definitions for each study are provided in Table S1. 
Participants in the contributing studies were eligible for inclusion in the present analysis if they met all of the following criteria: (1) aged 30 to 80 years at recruitment; (2) had recorded information on age, sex, circulating creatinine, and diabetes status; (3) had a creatinine-based eGFR of <300 mL·min–1·1.73 m–2; (4) did not have a known history of CVD or diabetes at baseline; (5) had complete information on the risk factors of smoking status, systolic blood pressure, total cholesterol, high-density lipoprotein cholesterol, and body mass index; and (6) had at least 1 year of follow-up data after recruitment.\nHazard ratios for associations of creatinine-based eGFR with incident CHD and stroke were calculated using Cox regression, stratified by sex and study center, and when appropriate, adjusted for traditional vascular risk factors (defined here as age, systolic blood pressure, smoking status, total cholesterol, high-density lipoprotein cholesterol, and body mass index) on a complete-case basis. To account for the EPIC-CVD case-cohort design, Cox models were adapted using Prentice weights.30 To avoid overfitting models, studies contributing <20 incident events to the analysis of a particular outcome were excluded from the analysis. Fractional polynomials were used to characterize nonlinear relationships of creatinine-based eGFR with risk of CHD and stroke, adjusted for age and CVD risk factors.31 Study-specific estimates for each outcome were pooled across studies using multivariable random-effects meta-analysis, using a reference point of 90 mL·min–1·1.73 m–2. When information on urinary biomarkers in UKB was available, participants were grouped into tenths on the basis of levels of urinary albumin-to-creatinine ratio to assess the shapes of associations between urinary biomarkers and CVD risk, using participants without albuminuria as the reference group.32", "Using individual-participant data from EPIC-CVD, MVP, and UKB, we calculated a GRS33 weighted by the conditional effect estimated of the genetic variants associated (P<5×10–8) with creatinine-based eGFR in CKDGen,25 a global genetics consortium that has published genome-wide association study summary statistics for creatinine-based eGFR. Of the 262 variants associated with creatinine-based eGFR, 37 were excluded because of ancestry heterogeneity as reported in CKDGen,25 4 were excluded because of associations (P<5×10–8) with vascular risk factors as reported in previous genome-wide association studies (ie, smoking status, alcohol consumption, and education attainment),34 and 3 were excluded because of missingness in at least 1 of the contributing studies, leaving 218 variants for the primary GRS for creatinine-based eGFR.\nIn sensitivity analysis, we constructed 2 restricted GRSs using 126 and 121 genetic variants that were likely to be relevant for kidney function on the basis of their associations with cystatin C–based eGFR35 and blood urine nitrogen,25 respectively. Sensitivity analysis was also conducted using a GRS that included all 262 transancestry eGFR-associated index variants. 
Furthermore, to evaluate traits that could mediate or confound (through horizontal pleiotropy) the associations between genetically predicted eGFR and outcomes, we tested associations of GRSs for eGFR with a range of cardiovascular risk factors in UKB and EPIC-CVD and with 167 metabolites measured using targeted high-throughput nuclear magnetic resonance metabolomics (Nightingale Health Ltd) in UKB.", "To account for the nonlinear relationship between eGFR and risk of CVD outcomes in observational analyses, we performed a stratified Mendelian randomization analysis using methods described previously.16–20 For each participant, we calculated the residual eGFR by subtracting the genetic contribution determined by the GRS from observed eGFR. Participants were grouped on the basis of their residual eGFR into 5-unit categories between 45 and <105 mL·min–1·1.73 m–2, plus <45 and ≥105 mL·min–1·1.73 m–2. By stratifying on residual eGFR, we compared individuals in the population who would have an eGFR in the same category if they had the same genotype and reduced the potential influence of collider bias. We then calculated Mendelian randomization estimates for each eGFR category using the ratio method with the GRS as an instrumental variable, adjusting for age, age-squared, sex, study center, and the first 10 principal components. Stratum-specific estimates were combined across studies using fixed-effect meta-analysis and plotted as a piecewise-linear function of eGFR, with pointwise confidence intervals calculated by resampling the stratum-specific estimates. Sensitivity analyses used non-parametric doubly-ranked stratification method. Detailed methods describing statistical analysis are in the Expanded Methods. Analyses used STATA 15.1 and R 3.6.1.", "For both CHD and stroke, there were U-shaped associations of creatinine-based eGFR. Compared with participants with creatinine-based eGFR values between 60 and 105 mL·min–1·1.73 m–2, risks of both CHD and stroke were higher in people with eGFR <60 or >105 mL·min–1·1.73 m–2 (Figure 2, Figure S3). The shapes of these associations did not change substantially after adjustment for several traditional risk factors (Figure 2). Associations were similar in men and women, in clinically relevant subgroups (ie, smokers, people with obesity, or hypertension; Figure S4), in the different studies contributing to this analysis (Figure S5), and when participants with a history of diabetes or missing information on cardiovascular risk factors were included (Figures S6 and S7). Similar associations were also observed for ischemic stroke (Figure S3).\nObservational associations of eGFR levels with risk of coronary heart disease and stroke (n=648 135). Participants with missing information on age and CVD risk factors (systolic blood pressure, total and high-density lipoprotein cholesterol, body mass index, and smoking status) were excluded from the analyses. Hazard ratios were estimated using Cox regression, adjusting for age and CVD risk factors (systolic blood pressure, total and high-density lipoprotein cholesterol, body mass index, and smoking status), and stratified by sex and study center. The reference point is 90 mL·min–1·1.73 m–2. Shaded regions indicate 95% CIs. 
CVD indicates cardiovascular disease; and eGFR, estimated glomerular filtration rate.\nFor the 338 044 participants in UKB with available data on serum cystatin C and urinary albumin-to-creatinine ratio, there were broadly similar associations of CHD or stroke with cystatin C–based eGFR as creatinine-based eGFR equations, but only when eGFR values were lower than ≈90 mL·min–1·1.73 m–2. However, there was no evidence of higher risk of CHD in participants with cystatin C–based eGFR values >105 mL·min–1·1.73 m–2 (Figure S8), in contrast with creatinine-based eGFR values >105 mL·min–1·1.73 m–2. Levels of urinary microalbumin and urinary albumin-to-creatinine ratio showed approximately linear associations with risk of CHD and stroke, which were somewhat attenuated after adjustment for traditional risk factors (Figure S9). Compared with participants with a creatinine-based eGFR of 75 to <90 mL·min–1·1.73 m–2 and without albuminuria, participants with albuminuria had higher risk of CHD and stroke (Figure S10).", "The GRS for eGFR (Table S4) explained 2.0% of variation in creatinine-based eGFR in EPIC-CVD, 2.2% in MVP, and 3.2% in UKB. A 1 SD increase in the GRS for eGFR was associated with 0.18 SD higher creatinine-based eGFR (Table S5, Figure S11). The GRS for eGFR was not associated with body mass index, diabetes, smoking status, or low-density lipoprotein cholesterol concentrations but showed modest associations with lipoprotein(a), triglycerides, blood pressure, and hemoglobin A1c measurement (Figure S11). Modest associations were also observed between the GRS for eGFR and triglyceride-related lipoprotein subclasses in a subset of participants with available data (Figure S12).\nIn nonlinear Mendelian randomization analysis, we observed a curvilinear relationship between genetically predicted eGFR and CHD (Figure 3). Among participants with eGFR <60 mL·min–1·1.73 m–2, each 5 mL·min–1·1.73 m–2 lower genetically predicted eGFR was associated with 14% (95% CI, 3%–27%) higher risk of CHD (Table 2). There was no clear evidence of association among participants with eGFR >75 mL·min–1·1.73 m–2 (Figure 3). Similar, but not statistically significant, associations were observed for stroke (Table 2, Figure 3). Overall, stratum-specific localized average causal estimates and nonlinear Mendelian randomization estimates were compatible across the studies contributing to this analysis (Table S6, Figure S13). Findings were supported in analyses using the non-parametric doubly-ranked stratification (Table S7, Figure S14). Similar associations were observed in analyses that further adjusted for systolic blood pressure, lipoprotein(a), hemoglobin A1c, and triglycerides (Figure S15), included participants with a history of diabetes at baseline (Figure S16), or used ischemic stroke as the stroke outcome (Figure S17). Results were also similar using GRS for cystatin C–based eGFR, blood urine nitrogen, or variants associated with creatinine-based eGFR regardless of ancestry heterogeneity (Figure S18).\nMendelian Randomization Estimates per 5 mL·min–1·1.73 m–2 Lower Genetically Predicted eGFR With Risk of Coronary Heart Disease and Stroke\nAssociations of genetically predicted eGFR with risk of coronary heart disease and stroke (n=413 718). The reference point is 90 mL·min–1·1.73 m–2. Gradients at each point of the curve represent the localized average causal effect on coronary heart disease or stroke per 5 mL·min–1·1.73 m–2 change in genetically predicted eGFR. The vertical lines represent 95% CIs. 
Analyses were adjusted for age, age-squared, sex, study center, and the first 10 principal components of ancestry. eGFR indicates estimated glomerular filtration rate.", "Acknowledgments The authors thank investigators and participants of the several studies that contributed data to the Emerging Risk Factors Collaboration. We thank all EPIC (European Prospective Investigation into Cancer) participants and staff for their contribution to the study, the laboratory teams at the Medical Research Council Epidemiology Unit for sample management and Cambridge Genomic Services for genotyping, S. Spackman for data management, and the team at the EPIC-CVD Coordinating Centre for study coordination and administration. The authors also thank the participants of the VA Million Veteran Program and its collaborators. Acknowledgment of VA Million Veteran Program leadership and staff contributions can be found in the Supplemental Material Note. This research has been conducted using the UK Biobank Resource under Application Number 31852.\nSources of Funding The Emerging Risk Factors Collaboration (ERFC) coordinating center was underpinned by program grants from the British Heart Foundation (BHF; SP/09/002; RG/13/13/30194; RG/18/13/33946), BHF Centre of Research Excellence (RE/18/1/34212), the UK Medical Research Council (MR/L003120/1), and the National Institute for Health and Care Research (NIHR) Cambridge Biomedical Research Centre (BRC-1215-20014), with project-specific support received from the UK NIHR, British United Provident Association UK Foundation, and an unrestricted educational grant from GlaxoSmithKline. This work was supported by Health Data Research UK, which is funded by the UK Medical Research Council, the Engineering and Physical Sciences Research Council, the Economic and Social Research Council, the Department of Health and Social Care (England), the Chief Scientist Office of the Scottish Government Health and Social Care Directorates, the Health and Social Care Research and Development Division (Welsh Government), the Public Health Agency (Northern Ireland), the BHF, and the Wellcome Trust. A variety of funding sources have supported recruitment, follow-up, and laboratory measurements in the studies contributing data to the ERFC, which are listed on the ERFC website (www.phpc.cam.ac.uk/ceu/erfc/list-of-studies). EPIC-CVD (European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease Study) was funded by the European Research Council (268834) and the European Commission Framework Programme 7 (HEALTH-F2-2012-279233).
The coordination of EPIC is financially supported by International Agency for Research on Cancer (IARC) and also by the Department of Epidemiology and Biostatistics, School of Public Health, Imperial College London which has additional infrastructure support provided by the NIHR Imperial Biomedical Research Centre (BRC). The national cohorts are supported by: Danish Cancer Society (Denmark); Ligue Contre le Cancer, Institut Gustave Roussy, Mutuelle Générale de l'Education Nationale, Institut National de la Santé et de la Recherche Médicale (INSERM) (France); German Cancer Aid, German Cancer Research Center (DKFZ), German Institute of Human Nutrition Potsdam-Rehbruecke (DIfE), Federal Ministry of Education and Research (BMBF) (Germany); Associazione Italiana per la Ricerca sul Cancro-AIRC-Italy, Compagnia di SanPaolo and National Research Council (Italy); Dutch Ministry of Public Health, Welfare and Sports (VWS), Netherlands Cancer Registry (NKR), LK Research Funds, Dutch Prevention Funds, Dutch ZON (Zorg Onderzoek Nederland), World Cancer Research Fund (WCRF), Statistics Netherlands (The Netherlands); Health Research Fund (FIS) - Instituto de Salud Carlos III (ISCIII), Regional Governments of Andalucía, Asturias, Basque Country, Murcia and Navarra, and the Catalan Institute of Oncology - ICO (Spain); Swedish Cancer Society, Swedish Research Council and County Councils of Skåne and Västerbotten (Sweden); Cancer Research UK (14136 to EPIC-Norfolk; C8221/A29017 to EPIC-Oxford), Medical Research Council, United Kingdom (1000143 to EPIC-Norfolk; MR/M012190/1 to EPIC-Oxford). The establishment of the EPIC-InterAct subcohort (used in the EPIC-CVD study) and conduct of biochemical assays was supported by the EU Sixth Framework Programme (FP6) (grant LSHM_CT_2006_037197 to the InterAct project) and the Medical Research Council Epidemiology Unit (grants MC_UU_12015/1 and MC_UU_12015/5). This research is based on data from the Million Veteran Program, Office of Research and Development, and Veterans Health Administration and was supported by award I01-BX004821 (principal investigators, Drs Peter W.F. Wilson and Kelly Cho) and I01-BX003360 (principal investigators, Dr Adriana M. Hung). Dr Damrauer is supported by IK2-CX001780. Dr Hung is supported by CX001897. Dr Tsao is supported by BX003362-01 from VA Office of Research and Development. Dr Robinson-Cohen is supported by R01DK122075. Dr Sun was funded by a BHF Programme Grant (RG/18/13/33946). Dr Arnold was funded by a BHF Programme Grant (RG/18/13/33946). Dr Kaptoge is funded by a BHF Chair award (CH/12/2/29428). Dr Mason is funded by the EU/EFPIA Innovative Medicines Initiative Joint Undertaking BigData@Heart grant 116074. Dr Bolton was funded by the NIHR BTRU in Donor Health and Genomics (NIHR BTRU-2014-10024). Dr Allara is funded by a BHF Programme Grant (RG/18/13/33946). Prof Inouye is supported by the Munz Chair of Cardiovascular Prediction and Prevention and the NIHR Cambridge Biomedical Research Centre (BRC-1215-20014). Prof Inouye was also supported by the UK Economic and Social Research Council (ES/T013192/1). Prof Danesh holds a British Heart Foundation Professorship and a NIHR Senior Investigator Award. Prof Wood is part of the BigData@Heart Consortium, funded by the Innovative Medicines Initiative-2 Joint Undertaking under grant agreement No 116074. Prof Wood was supported by the BHF-Turing Cardiovascular Data Science Award (BCDSA\\100005).
Prof Di Angelantonio holds a NIHR Senior Investigator Award.\nDisclosures\nWhere authors are identified as personnel of the International Agency for Research on Cancer/World Health Organization, the authors alone are responsible for the views expressed in this article, and they do not necessarily represent the decisions, policy, or views of the International Agency for Research on Cancer/World Health Organization. The views expressed are those of the author(s) and not necessarily those of the National Institute for Health Research or the Department of Health and Social Care. This publication does not represent the views of the Department of Veterans Affairs or the United States government. Dr Staley is now a full-time employee at UCB. Dr Sun is now an employee at Regeneron Pharmaceuticals. Dr Arnold is now an employee of AstraZeneca. Dr Danesh serves on scientific advisory boards for AstraZeneca, Novartis, and UK Biobank, and has received multiple grants from academic, charitable and industry sources outside of the submitted work. Adam Butterworth reports institutional grants from AstraZeneca, Bayer, Biogen, BioMarin, Bioverativ, Novartis, Regeneron and Sanofi.
Supplemental Material\nExpanded Methods\nTables S1–S7\nFigures S1–S18", "The authors thank investigators and participants of the several studies that contributed data to the Emerging Risk Factors Collaboration. We thank all EPIC (European Prospective Investigation into Cancer) participants and staff for their contribution to the study, the laboratory teams at the Medical Research Council Epidemiology Unit for sample management and Cambridge Genomic Services for genotyping, S. Spackman for data management, and the team at the EPIC-CVD Coordinating Centre for study coordination and administration. The authors also thank the participants of the VA Million Veteran Program and its collaborators. Acknowledgment of VA Million Veteran Program leadership and staff contributions can be found in the Supplemental Material Note. This research has been conducted using the UK Biobank Resource under Application Number 31852.", "Expanded Methods\nTables S1–S7\nFigures S1–S18" ]
[ "What Is New?", "What Are the Clinical Implications?", "Methods", "Study Design and Study Overview", "Data Sources", "Estimation of Kidney Function", "Observational Analyses", "GRS for Kidney Function", "Mendelian Randomization Analyses", "Results", "Observational Associations of eGFR With Cardiovascular Outcomes", "Mendelian Randomization of Genetically Predicted eGFR With Cardiovascular Outcomes", "Discussion", "Conclusions", "Article Information", "Acknowledgments", "Sources of Funding", "Disclosures", "Supplemental Material", "Supplementary Material" ]
[ "In people without manifest cardiovascular disease or diabetes, there is a nonlinear causal relationship between kidney function and coronary heart disease.\nEven mildly reduced kidney function is causally associated with higher risk of coronary heart disease with a possible risk threshold for eGFR value of ≈75 mL·min–1·1.73 m–2.\nThe effect of reduced kidney function on coronary heart disease is independent of traditional cardiovascular risk factors.", "Preventive approaches that can preserve and modulate kidney function can help prevent cardiovascular diseases.\nGiven the nonlinear causal relationship, it may be a preferable strategy to identify individuals in the population with mild-to-moderate kidney dysfunction and target them for renoprotective interventions alongside routine strategies to reduce cardiovascular risk.", "The data, code, and study material that support the findings of this study are available from the corresponding author on reasonable request.\nStudy Design and Study Overview This study involved interrelated components (Figure 1). First, we characterized observational associations between eGFR and incident CHD or stroke, using data from the Emerging Risk Factors Collaboration,21 EPIC-CVD (European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease Study),22 Million Veteran Program (MVP),23 UK Biobank (UKB),24 collectively involving 648 135 participants, who had serum creatinine measurements but no known CVD or diabetes at baseline. Second, we constructed a genetic risk score (GRS) for eGFR by computing a weighted sum of eGFR-associated index variants reported in a discovery genome-wide association study from the CKDGen consortium comprising 567 460 participants with European ancestry,25 none of whom were from MVP, EPIC-CVD, or UKB. Third, we used this GRS to conduct Mendelian randomization analyses in a total of 413 718 participants (ie, EPIC-CVD, MVP, UKB), with concomitant individual-level information on genetics, serum creatinine, and disease outcomes. Fourth, to assess the potential for interference by horizontal pleiotropy26 and explore potential mechanisms that could mediate associations between eGFR and CVD outcomes, we studied our GRS for eGFR in relation to several established and emerging risk factors for CVD.\nStudy design and overview. CHD indicates coronary heart disease; CKDGen, CKD Genetics consortium; CVD, cardiovascular disease; eGFR, estimated glomerular filtration rate; EPIC-CVD, European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease; ERFC, Emerging Risk Factors Collaboration; MVP, Million Veteran Program; NMR, nuclear magnetic resonance; and UKB, UK Biobank.\nThis study involved interrelated components (Figure 1). First, we characterized observational associations between eGFR and incident CHD or stroke, using data from the Emerging Risk Factors Collaboration,21 EPIC-CVD (European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease Study),22 Million Veteran Program (MVP),23 UK Biobank (UKB),24 collectively involving 648 135 participants, who had serum creatinine measurements but no known CVD or diabetes at baseline. Second, we constructed a genetic risk score (GRS) for eGFR by computing a weighted sum of eGFR-associated index variants reported in a discovery genome-wide association study from the CKDGen consortium comprising 567 460 participants with European ancestry,25 none of whom were from MVP, EPIC-CVD, or UKB. 
Third, we used this GRS to conduct Mendelian randomization analyses in a total of 413 718 participants (ie, EPIC-CVD, MVP, UKB), with concomitant individual-level information on genetics, serum creatinine, and disease outcomes. Fourth, to assess the potential for interference by horizontal pleiotropy26 and explore potential mechanisms that could mediate associations between eGFR and CVD outcomes, we studied our GRS for eGFR in relation to several established and emerging risk factors for CVD.\nStudy design and overview. CHD indicates coronary heart disease; CKDGen, CKD Genetics consortium; CVD, cardiovascular disease; eGFR, estimated glomerular filtration rate; EPIC-CVD, European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease; ERFC, Emerging Risk Factors Collaboration; MVP, Million Veteran Program; NMR, nuclear magnetic resonance; and UKB, UK Biobank.\nData Sources Information on each of the data sources used in the analysis is provided in the Expanded Methods in the Supplemental Material. In brief, Emerging Risk Factors Collaboration, a global consortium of population cohort studies with harmonized individual-participant data for multiple CVD risk factors, has included 47 studies with available information on serum creatinine and diabetes status at recruitment.21 EPIC-CVD, a case-cohort study embedded in the pan-European EPIC prospective study of >500 000 participants, has recorded data on serum creatinine and imputed genome-wide array data from 21 of its 23 recruitment centers.22 MVP, a prospective cohort study recruited from 63 Veterans Health Administration medical facilities throughout the United States, has recorded serum creatinine, and imputed genome-wide array data are available for a large subset of its participants.23 UKB, a prospective study of 22 recruitment centers across the United Kingdom, has cohort-wide information on serum creatinine and imputed genome-wide array data.24 Relevant ethical approval and participant consent were already obtained in all studies that contributed data to this work.\nInformation on each of the data sources used in the analysis is provided in the Expanded Methods in the Supplemental Material. 
Estimation of Kidney Function\nKidney function was estimated using creatinine-based eGFR, calculated using the Chronic Kidney Disease Epidemiology Collaboration equation.27 Creatinine concentration was multiplied by 0.95 for studies in which measurements were not standardized to isotope-dilution mass spectrometry.25,28 In a subset of participants with available data, kidney function was also defined using the Chronic Kidney Disease Epidemiology Collaboration cystatin C–based equation29 and albuminuria measured as spot urine albumin-to-creatinine ratio (Expanded Methods).
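For concreteness, the 2009 CKD-EPI creatinine equation referenced above has a simple closed form. The sketch below is illustrative only, not the study's analysis code (which used STATA and R); the function and argument names are ours, and the race coefficient of the original 2009 equation is omitted because the genetic analyses here were restricted to participants of European ancestry:

def ckd_epi_2009_egfr(scr_mg_dl: float, age_years: float, is_female: bool) -> float:
    """2009 CKD-EPI creatinine equation, in mL/min/1.73 m^2.

    scr_mg_dl: serum creatinine in mg/dL (IDMS-standardized, or pre-multiplied
    by 0.95 where the assay was not standardized, as described above).
    """
    kappa = 0.7 if is_female else 0.9        # sex-specific creatinine knot
    alpha = -0.329 if is_female else -0.411  # exponent below the knot
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * 1.018 if is_female else egfr

# Example: a 57-year-old man with serum creatinine 1.0 mg/dL -> ~83
print(round(ckd_epi_2009_egfr(1.0, 57, False)))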
Observational Analyses\nPrimary outcomes were incident CHD and stroke. Details of end-point definitions for each study are provided in Table S1. Participants in the contributing studies were eligible for inclusion in the present analysis if they met all of the following criteria: (1) aged 30 to 80 years at recruitment; (2) had recorded information on age, sex, circulating creatinine, and diabetes status; (3) had a creatinine-based eGFR of <300 mL·min–1·1.73 m–2; (4) did not have a known history of CVD or diabetes at baseline; (5) had complete information on the risk factors of smoking status, systolic blood pressure, total cholesterol, high-density lipoprotein cholesterol, and body mass index; and (6) had at least 1 year of follow-up data after recruitment.\nHazard ratios for associations of creatinine-based eGFR with incident CHD and stroke were calculated using Cox regression, stratified by sex and study center, and when appropriate, adjusted for traditional vascular risk factors (defined here as age, systolic blood pressure, smoking status, total cholesterol, high-density lipoprotein cholesterol, and body mass index) on a complete-case basis. To account for the EPIC-CVD case-cohort design, Cox models were adapted using Prentice weights.30 To avoid overfitting models, studies contributing <20 incident events to the analysis of a particular outcome were excluded from the analysis. Fractional polynomials were used to characterize nonlinear relationships of creatinine-based eGFR with risk of CHD and stroke, adjusted for age and CVD risk factors.31 Study-specific estimates for each outcome were pooled across studies using multivariable random-effects meta-analysis, using a reference point of 90 mL·min–1·1.73 m–2. When information on urinary biomarkers in UKB was available, participants were grouped into tenths on the basis of levels of urinary albumin-to-creatinine ratio to assess the shapes of associations between urinary biomarkers and CVD risk, using participants without albuminuria as the reference group.32
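As an illustration of this modelling step only (the published analyses used STATA and R), a stratified, weighted Cox fit of the kind described above could be sketched in Python with lifelines. The data frame and its column names below are hypothetical toy data, with 'w' standing in for the case-cohort (Prentice) weights and set to 1 for a plain cohort:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Toy data standing in for one contributing study.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "time": rng.exponential(10.0, n),   # follow-up time, years
    "chd": rng.integers(0, 2, n),       # incident CHD indicator
    "egfr": rng.normal(90.0, 15.0, n),  # exposure of interest
    "age": rng.normal(57.0, 8.0, n),    # adjustment covariate
    "sex": rng.integers(0, 2, n),       # stratification variable
    "w": 1.0,                           # case-cohort weights (all 1 here)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="chd",
        weights_col="w",   # weighted partial likelihood for case-cohort designs
        strata=["sex"],    # separate baseline hazards, as described above
        robust=True)       # robust variance, appropriate with weighted data
print(cph.summary[["coef", "exp(coef)", "p"]])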
GRS for Kidney Function\nUsing individual-participant data from EPIC-CVD, MVP, and UKB, we calculated a GRS33 weighted by the conditional effect estimates of the genetic variants associated (P<5×10–8) with creatinine-based eGFR in CKDGen,25 a global genetics consortium that has published genome-wide association study summary statistics for creatinine-based eGFR. Of the 262 variants associated with creatinine-based eGFR, 37 were excluded because of ancestry heterogeneity as reported in CKDGen,25 4 were excluded because of associations (P<5×10–8) with vascular risk factors as reported in previous genome-wide association studies (ie, smoking status, alcohol consumption, and educational attainment),34 and 3 were excluded because of missingness in at least 1 of the contributing studies, leaving 218 variants for the primary GRS for creatinine-based eGFR.\nIn sensitivity analysis, we constructed 2 restricted GRSs using 126 and 121 genetic variants that were likely to be relevant for kidney function on the basis of their associations with cystatin C–based eGFR35 and blood urea nitrogen,25 respectively. Sensitivity analysis was also conducted using a GRS that included all 262 transancestry eGFR-associated index variants. Furthermore, to evaluate traits that could mediate or confound (through horizontal pleiotropy) the associations between genetically predicted eGFR and outcomes, we tested associations of GRSs for eGFR with a range of cardiovascular risk factors in UKB and EPIC-CVD and with 167 metabolites measured using targeted high-throughput nuclear magnetic resonance metabolomics (Nightingale Health Ltd) in UKB.
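The score construction itself reduces to a weighted allele count. A minimal sketch under stated assumptions (a participants-by-variants dosage matrix already aligned to the discovery GWAS effect alleles; all names are illustrative):

import pandas as pd

def build_grs(dosages: pd.DataFrame, weights: pd.Series) -> pd.Series:
    """Weighted sum of effect-allele dosages (0-2 per variant).

    Columns of `dosages` and the index of `weights` are variant IDs; where a
    study's coded allele is the non-effect allele, flip upstream (2 - dosage).
    """
    shared = weights.index.intersection(dosages.columns)  # e.g. the 218 retained variants
    score = dosages[shared].to_numpy() @ weights[shared].to_numpy()
    return pd.Series(score, index=dosages.index, name="grs")

# Toy example: 3 participants, 2 variants.
dos = pd.DataFrame([[0, 2], [1, 1], [2, 0]], columns=["rs1", "rs2"])
w = pd.Series({"rs1": 0.05, "rs2": -0.02})
print(build_grs(dos, w))  # -0.04, 0.03, 0.10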
Mendelian Randomization Analyses\nTo account for the nonlinear relationship between eGFR and risk of CVD outcomes in observational analyses, we performed a stratified Mendelian randomization analysis using methods described previously.16–20 For each participant, we calculated the residual eGFR by subtracting the genetic contribution determined by the GRS from observed eGFR. Participants were grouped on the basis of their residual eGFR into 5-unit categories between 45 and <105 mL·min–1·1.73 m–2, plus <45 and ≥105 mL·min–1·1.73 m–2. By stratifying on residual eGFR, we compared individuals in the population who would have an eGFR in the same category if they had the same genotype, thereby reducing the potential influence of collider bias. We then calculated Mendelian randomization estimates for each eGFR category using the ratio method with the GRS as an instrumental variable, adjusting for age, age-squared, sex, study center, and the first 10 principal components. Stratum-specific estimates were combined across studies using fixed-effect meta-analysis and plotted as a piecewise-linear function of eGFR, with pointwise confidence intervals calculated by resampling the stratum-specific estimates. Sensitivity analyses used the non-parametric doubly-ranked stratification method. Detailed methods describing the statistical analysis are in the Expanded Methods. Analyses used STATA 15.1 and R 3.6.1.",
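A compact sketch of the residual-stratification and ratio-method logic described above, with hypothetical column names; the published analysis additionally adjusted for age, age-squared, sex, center, and principal components, fitted each study separately, and pooled stratum estimates with inverse-variance fixed-effect weights:

import numpy as np
import pandas as pd
import statsmodels.api as sm

def stratified_mr(df: pd.DataFrame) -> pd.DataFrame:
    """df columns: 'egfr' (exposure), 'grs' (instrument), 'chd' (0/1 outcome)."""
    # 1) Residual eGFR: remove the genetic contribution captured by the GRS.
    X = sm.add_constant(df[["grs"]])
    gamma = sm.OLS(df["egfr"], X).fit().params["grs"]
    resid = df["egfr"] - gamma * df["grs"]

    # 2) Strata of residual eGFR: 5-unit bands plus open-ended tails.
    edges = [-np.inf] + list(range(45, 110, 5)) + [np.inf]
    rows = []
    for band, g in df.groupby(pd.cut(resid, edges), observed=True):
        if len(g) < 100 or g["chd"].nunique() < 2:
            continue  # skip strata too sparse to support both fits
        Xs = sm.add_constant(g[["grs"]])
        bx = sm.OLS(g["egfr"], Xs).fit()             # instrument -> exposure
        by = sm.Logit(g["chd"], Xs).fit(disp=0)      # instrument -> outcome
        lace = by.params["grs"] / bx.params["grs"]   # ratio-method estimate
        se = by.bse["grs"] / abs(bx.params["grs"])   # first-order delta SE
        rows.append({"stratum": str(band), "beta": lace, "se": se})
    return pd.DataFrame(rows)

# Study-level stratum estimates would then be pooled with fixed-effect
# (inverse-variance) weights: beta_pooled = sum(b/se^2) / sum(1/se^2).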
"Among the 648 135 participants without history of CVD or diabetes at baseline, the mean age was 57 years, 57% were men, and 4.4% had creatinine-based eGFR <60 mL·min–1·1.73 m–2 (Table 1, Tables S2 and S3). During 6.8 million person-years of follow-up, there were 42 858 incident CHD outcomes and 15 693 strokes. Up to 413 718 participants of European ancestry from EPIC-CVD, MVP, and UKB contributed to the main genetic analyses (Figure 1). Distributions of serum creatinine concentration and creatinine-based eGFR were broadly similar across studies (Figures S1 and S2).\nStudy-Level and Participant-Level Characteristics of the Contributing Data Sources\nObservational Associations of eGFR With Cardiovascular Outcomes\nFor both CHD and stroke, there were U-shaped associations of creatinine-based eGFR. Compared with participants with creatinine-based eGFR values between 60 and 105 mL·min–1·1.73 m–2, risks of both CHD and stroke were higher in people with eGFR <60 or >105 mL·min–1·1.73 m–2 (Figure 2, Figure S3). The shapes of these associations did not change substantially after adjustment for several traditional risk factors (Figure 2). Associations were similar in men and women, in clinically relevant subgroups (ie, smokers, people with obesity, or hypertension; Figure S4), in the different studies contributing to this analysis (Figure S5), and when participants with a history of diabetes or missing information on cardiovascular risk factors were included (Figures S6 and S7). Similar associations were also observed for ischemic stroke (Figure S3).\nObservational associations of eGFR levels with risk of coronary heart disease and stroke (n=648 135). Participants with missing information on age and CVD risk factors (systolic blood pressure, total and high-density lipoprotein cholesterol, body mass index, and smoking status) were excluded from the analyses. Hazard ratios were estimated using Cox regression, adjusting for age and CVD risk factors (systolic blood pressure, total and high-density lipoprotein cholesterol, body mass index, and smoking status), and stratified by sex and study center. The reference point is 90 mL·min–1·1.73 m–2. Shaded regions indicate 95% CIs. CVD indicates cardiovascular disease; and eGFR, estimated glomerular filtration rate.\nFor the 338 044 participants in UKB with available data on serum cystatin C and urinary albumin-to-creatinine ratio, associations of cystatin C–based eGFR with CHD or stroke were broadly similar to those for creatinine-based eGFR, but only when eGFR values were lower than ≈90 mL·min–1·1.73 m–2. However, there was no evidence of higher risk of CHD in participants with cystatin C–based eGFR values >105 mL·min–1·1.73 m–2 (Figure S8), in contrast with creatinine-based eGFR values >105 mL·min–1·1.73 m–2. Levels of urinary microalbumin and urinary albumin-to-creatinine ratio showed approximately linear associations with risk of CHD and stroke, which were somewhat attenuated after adjustment for traditional risk factors (Figure S9). Compared with participants with a creatinine-based eGFR of 75 to <90 mL·min–1·1.73 m–2 and without albuminuria, participants with albuminuria had higher risk of CHD and stroke (Figure S10).
Mendelian Randomization of Genetically Predicted eGFR With Cardiovascular Outcomes\nThe GRS for eGFR (Table S4) explained 2.0% of variation in creatinine-based eGFR in EPIC-CVD, 2.2% in MVP, and 3.2% in UKB. A 1 SD increase in the GRS for eGFR was associated with 0.18 SD higher creatinine-based eGFR (Table S5, Figure S11). The GRS for eGFR was not associated with body mass index, diabetes, smoking status, or low-density lipoprotein cholesterol concentrations but showed modest associations with lipoprotein(a), triglycerides, blood pressure, and hemoglobin A1c measurement (Figure S11). Modest associations were also observed between the GRS for eGFR and triglyceride-related lipoprotein subclasses in a subset of participants with available data (Figure S12).
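These two summary figures are internally consistent: with exposure and instrument standardized, the per-1-SD effect equals the GRS-eGFR correlation and the variance explained is its square, so 0.18 SD per SD corresponds to roughly 0.18^2 ≈ 0.032, in line with the 3.2% observed in UKB. A toy illustration (variable names are ours):

import numpy as np

rng = np.random.default_rng(1)
grs = rng.normal(size=100_000)  # standardized score
egfr = 0.18 * grs + rng.normal(scale=np.sqrt(1 - 0.18**2), size=grs.size)

r = np.corrcoef(grs, egfr)[0, 1]
print(f"effect per 1 SD of GRS ~ {r:.2f} SD; variance explained ~ {r**2:.1%}")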
In nonlinear Mendelian randomization analysis, we observed a curvilinear relationship between genetically predicted eGFR and CHD (Figure 3). Among participants with eGFR <60 mL·min–1·1.73 m–2, each 5 mL·min–1·1.73 m–2 lower genetically predicted eGFR was associated with 14% (95% CI, 3%–27%) higher risk of CHD (Table 2). There was no clear evidence of association among participants with eGFR >75 mL·min–1·1.73 m–2 (Figure 3). Similar, but not statistically significant, associations were observed for stroke (Table 2, Figure 3). Overall, stratum-specific localized average causal estimates and nonlinear Mendelian randomization estimates were compatible across the studies contributing to this analysis (Table S6, Figure S13). Findings were supported in analyses using the non-parametric doubly-ranked stratification (Table S7, Figure S14). Similar associations were observed in analyses that further adjusted for systolic blood pressure, lipoprotein(a), hemoglobin A1c, and triglycerides (Figure S15), included participants with a history of diabetes at baseline (Figure S16), or used ischemic stroke as the stroke outcome (Figure S17). Results were also similar using GRS for cystatin C–based eGFR, blood urea nitrogen, or variants associated with creatinine-based eGFR regardless of ancestry heterogeneity (Figure S18).\nMendelian Randomization Estimates per 5 mL·min–1·1.73 m–2 Lower Genetically Predicted eGFR With Risk of Coronary Heart Disease and Stroke\nAssociations of genetically predicted eGFR with risk of coronary heart disease and stroke (n=413 718). The reference point is 90 mL·min–1·1.73 m–2. Gradients at each point of the curve represent the localized average causal effect on coronary heart disease or stroke per 5 mL·min–1·1.73 m–2 change in genetically predicted eGFR. The vertical lines represent 95% CIs. Analyses were adjusted for age, age-squared, sex, study center, and the first 10 principal components of ancestry. eGFR indicates estimated glomerular filtration rate.", "In analyses combining genetic, biomarker, and clinical data in ≈640 000 participants, our study has suggested that, in people without manifest CVD or diabetes, even mildly reduced kidney function is causally associated with a higher risk of CVD outcomes. Our results provide novel causal insights and highlight the wider potential value of preventive approaches that can preserve and modulate kidney function.\nFirst, our study estimated a dose-response curve for genetically predicted eGFR and CHD, identifying an eGFR value of ≈75 mL·min–1·1.73 m–2 as a possible risk threshold. Therefore, the causal relationship of kidney function with CHD is nonlinear in shape, in contrast with those for blood pressure and low-density lipoprotein cholesterol, which each have approximately log-linear relationships with CHD risk across their range of values.
In contrast with population-wide strategies to improve blood pressure and low-density lipoprotein cholesterol levels, this finding implies that it may be a preferable strategy to identify those in the population with mild-to-moderate kidney dysfunction and target them for renoprotective interventions alongside routine strategies to reduce cardiovascular risk. For example, the use of renoprotective interventions, such as renin-angiotensin-aldosterone system inhibitors36 and inhibitors of sodium-glucose cotransporter 2, might provide a potential means to do so.37 Our findings encourage additional evaluation of such agents in patients with CKD without manifest CVD or diabetes.38,39\nSecond, we found that our GRS for eGFR was modestly associated with several established and emerging CVD risk factors, including plasma concentration of proatherogenic lipids (eg, lipoprotein(a), triglycerides, and triglyceride-related lipoprotein subclasses), hemoglobin A1c values, and blood pressure, consistent with previous studies.11,40 However, adjustment for such factors did not materially alter the associations between eGFR and atherosclerotic CVD, indicating that they are unlikely to mediate or confound the associations between genetically predicted kidney dysfunction and CHD or stroke and limiting the likelihood that results are subject to influences of horizontal pleiotropy. These results suggest that the effect of reduced kidney function on CVD is independent of traditional cardiovascular risk factors and underscore the potential importance of direct preservation of renal function to prevent CVD, in addition to control of known risk factors.\nThird, our data help to resolve controversies about the relevance to CHD of higher-than-average eGFR. In contrast with the observation that higher-than-average creatinine-based eGFR values are associated with higher CHD risk at >105 mL·min–1·1.73 m–2, we found that genetically predicted higher eGFR values were not associated with CHD risk in this same group. This discordance implies different pathophysiological meanings of creatinine-based eGFR values >105 mL·min–1·1.73 m–2 (which may represent a transient state of hyperfiltration before progression to poorer kidney function and CKD) and genetically predicted higher eGFR values (which represent a lifelong tendency toward exposure to better kidney function). This explanation is supported by our findings showing that the association between higher creatinine-based eGFR values and higher CHD risk was principally in participants who had albuminuria (and, therefore, preexisting kidney damage) at entry into the study.\nFourth, our results are broadly consistent with a causal relationship between eGFR and stroke. The lack of statistically significant findings in our Mendelian randomization analysis for stroke outcomes principally reflects the lower power of our study to evaluate a GRS with stroke compared with CHD. It may also be attributable to pathogenetic heterogeneity in stroke diagnoses (eg, cardioembolic, small vessel disease, and hemorrhagic subtypes may be less driven by atherosclerotic pathology than other ischemic stroke subtypes).41,42\nOur study had major strengths, including a large sample size, access to individual-participant data, use of multiple genetic causal inference methods tailored to the evaluation of nonlinear disease associations, and an updated GRS that explains more variation in eGFR than previous analyses.14 However, there are also potential limitations.
First, Mendelian randomization assumptions state that the only causal pathway from the genetic variants to the outcome is through eGFR. Although we assessed the potential for interference by horizontal pleiotropy, there remains the possibility of residual confounding by unrecognized effects of genotypes on other risk factors and by adaptation during early life to compensate for genetically lower eGFR. Second, to reduce the scope for confounding by ancestry (population stratification), our analyses were limited to participants of European ancestries. This limitation means that our findings might not be applicable to other populations, and additional studies on this topic are needed, especially in non-European ancestry populations. Third, although serum creatinine is used routinely for estimating eGFR, true measurement of GFR requires the use of inulin, iohexol, or iothalamate. Assay of serum creatinine is liable to interference from other serum components (eg, bilirubin and glucose)43,44 and autoimmune activation45 and is sensitive to changes in individuals' muscle mass (eg, sarcopenia). Assessment of cystatin C, an analyte that enables an alternative calculation of eGFR without the potential limitations of creatinine, was available only in a subset of the participants we studied. However, our genetic analyses restricted to variants additionally associated with other biomarkers of kidney function showed results consistent with those for creatinine-based eGFR. Last, we used the 2009 Chronic Kidney Disease Epidemiology Collaboration equation to calculate eGFR. However, our analysis was limited to populations with European ancestry, in which the 2009 and 2021 Chronic Kidney Disease Epidemiology Collaboration equations provide similar estimates of eGFR.46

Conclusions:

In people without manifest CVD or diabetes, mild-to-moderate kidney dysfunction was causally related to cardiovascular outcomes, highlighting the potential cardiovascular benefit of preventive approaches that improve kidney function.

Acknowledgments:

The authors thank investigators and participants of the several studies that contributed data to the Emerging Risk Factors Collaboration. We thank all EPIC (European Prospective Investigation into Cancer) participants and staff for their contribution to the study, the laboratory teams at the Medical Research Council Epidemiology Unit for sample management and Cambridge Genomic Services for genotyping, S. Spackman for data management, and the team at the EPIC-CVD Coordinating Centre for study coordination and administration. The authors also thank the participants of the VA Million Veteran Program and its collaborators. Acknowledgment of VA Million Veteran Program leadership and staff contributions can be found in the Supplemental Material Note. This research has been conducted using the UK Biobank Resource under Application Number 31852.
Sources of Funding:

The Emerging Risk Factors Collaboration (ERFC) coordinating center was underpinned by program grants from the British Heart Foundation (BHF; SP/09/002; RG/13/13/30194; RG/18/13/33946), the BHF Centre of Research Excellence (RE/18/1/34212), the UK Medical Research Council (MR/L003120/1), and the National Institute for Health and Care Research (NIHR) Cambridge Biomedical Research Centre (BRC-1215-20014), with project-specific support received from the UK NIHR, the British United Provident Association UK Foundation, and an unrestricted educational grant from GlaxoSmithKline. This work was supported by Health Data Research UK, which is funded by the UK Medical Research Council, the Engineering and Physical Sciences Research Council, the Economic and Social Research Council, the Department of Health and Social Care (England), the Chief Scientist Office of the Scottish Government Health and Social Care Directorates, the Health and Social Care Research and Development Division (Welsh Government), the Public Health Agency (Northern Ireland), the BHF, and the Wellcome Trust. A variety of funding sources have supported recruitment, follow-up, and laboratory measurements in the studies contributing data to the ERFC, which are listed on the ERFC website (www.phpc.cam.ac.uk/ceu/erfc/list-of-studies). EPIC-CVD (European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease Study) was funded by the European Research Council (268834) and the European Commission Framework Programme 7 (HEALTH-F2-2012-279233). The coordination of EPIC is financially supported by the International Agency for Research on Cancer (IARC) and by the Department of Epidemiology and Biostatistics, School of Public Health, Imperial College London, which has additional infrastructure support provided by the NIHR Imperial Biomedical Research Centre (BRC). The national cohorts are supported by: Danish Cancer Society (Denmark); Ligue Contre le Cancer, Institut Gustave Roussy, Mutuelle Générale de l'Education Nationale, Institut National de la Santé et de la Recherche Médicale (INSERM) (France); German Cancer Aid, German Cancer Research Center (DKFZ), German Institute of Human Nutrition Potsdam-Rehbruecke (DIfE), Federal Ministry of Education and Research (BMBF) (Germany); Associazione Italiana per la Ricerca sul Cancro (AIRC), Compagnia di San Paolo, and National Research Council (Italy); Dutch Ministry of Public Health, Welfare and Sports (VWS), Netherlands Cancer Registry (NKR), LK Research Funds, Dutch Prevention Funds, Dutch ZON (Zorg Onderzoek Nederland), World Cancer Research Fund (WCRF), Statistics Netherlands (The Netherlands); Health Research Fund (FIS), Instituto de Salud Carlos III (ISCIII), Regional Governments of Andalucía, Asturias, Basque Country, Murcia and Navarra, and the Catalan Institute of Oncology (ICO) (Spain); Swedish Cancer Society, Swedish Research Council, and County Councils of Skåne and Västerbotten (Sweden); Cancer Research UK (14136 to EPIC-Norfolk; C8221/A29017 to EPIC-Oxford) and Medical Research Council, United Kingdom (1000143 to EPIC-Norfolk; MR/M012190/1 to EPIC-Oxford).
The establishment of the EPIC-InterAct subcohort (used in the EPIC-CVD study) and conduct of biochemical assays were supported by the EU Sixth Framework Programme (FP6) (grant LSHM_CT_2006_037197 to the InterAct project) and the Medical Research Council Epidemiology Unit (grants MC_UU_12015/1 and MC_UU_12015/5). This research is based on data from the Million Veteran Program, Office of Research and Development, Veterans Health Administration, and was supported by awards I01-BX004821 (principal investigators, Drs Peter W.F. Wilson and Kelly Cho) and I01-BX003360 (principal investigator, Dr Adriana M. Hung). Dr Damrauer is supported by IK2-CX001780. Dr Hung is supported by CX001897. Dr Tsao is supported by BX003362-01 from the VA Office of Research and Development. Dr Robinson-Cohen is supported by R01DK122075. Dr Sun was funded by a BHF Programme Grant (RG/18/13/33946). Dr Arnold was funded by a BHF Programme Grant (RG/18/13/33946). Dr Kaptoge is funded by a BHF Chair award (CH/12/2/29428). Dr Mason is funded by the EU/EFPIA Innovative Medicines Initiative Joint Undertaking BigData@Heart grant 116074. Dr Bolton was funded by the NIHR BTRU in Donor Health and Genomics (NIHR BTRU-2014-10024). Dr Allara is funded by a BHF Programme Grant (RG/18/13/33946). Prof Inouye is supported by the Munz Chair of Cardiovascular Prediction and Prevention and the NIHR Cambridge Biomedical Research Centre (BRC-1215-20014). Prof Inouye was also supported by the UK Economic and Social Research Council (ES/T013192/1). Prof Danesh holds a British Heart Foundation Professorship and an NIHR Senior Investigator Award. Prof Wood is part of the BigData@Heart Consortium, funded by the Innovative Medicines Initiative-2 Joint Undertaking under grant agreement No 116074. Prof Wood was supported by the BHF-Turing Cardiovascular Data Science Award (BCDSA\100005). Prof Di Angelantonio holds an NIHR Senior Investigator Award.
Disclosures:

Where authors are identified as personnel of the International Agency for Research on Cancer/World Health Organization, the authors alone are responsible for the views expressed in this article, and they do not necessarily represent the decisions, policy, or views of the International Agency for Research on Cancer/World Health Organization. The views expressed are those of the author(s) and not necessarily those of the National Institute for Health Research or the Department of Health and Social Care. This publication does not represent the views of the Department of Veterans Affairs or the United States government. Dr Staley is now a full-time employee at UCB. Dr Sun is now an employee at Regeneron Pharmaceuticals. Dr Arnold is now an employee of AstraZeneca. Dr Danesh serves on scientific advisory boards for AstraZeneca, Novartis, and UK Biobank, and has received multiple grants from academic, charitable, and industry sources outside of the submitted work. Adam Butterworth reports institutional grants from AstraZeneca, Bayer, Biogen, BioMarin, Bioverativ, Novartis, Regeneron, and Sanofi.

Supplemental Material:

Expanded Methods
Tables S1–S7
Figures S1–S18
[ "cardiovascular diseases", "coronary disease", "kidney diseases", "stroke" ]
What Is New?:

In people without manifest cardiovascular disease or diabetes, there is a nonlinear causal relationship between kidney function and coronary heart disease. Even mildly reduced kidney function is causally associated with a higher risk of coronary heart disease, with a possible risk threshold at an eGFR value of ≈75 mL·min–1·1.73 m–2. The effect of reduced kidney function on coronary heart disease is independent of traditional cardiovascular risk factors.

What Are the Clinical Implications?:

Preventive approaches that can preserve and modulate kidney function can help prevent cardiovascular diseases. Given the nonlinear causal relationship, it may be a preferable strategy to identify individuals in the population with mild-to-moderate kidney dysfunction and target them for renoprotective interventions alongside routine strategies to reduce cardiovascular risk.

Methods:

The data, code, and study material that support the findings of this study are available from the corresponding author on reasonable request.

Study Design and Study Overview:

This study involved interrelated components (Figure 1). First, we characterized observational associations between eGFR and incident CHD or stroke, using data from the Emerging Risk Factors Collaboration,21 EPIC-CVD (European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease Study),22 the Million Veteran Program (MVP),23 and UK Biobank (UKB),24 collectively involving 648 135 participants who had serum creatinine measurements but no known CVD or diabetes at baseline. Second, we constructed a genetic risk score (GRS) for eGFR by computing a weighted sum of eGFR-associated index variants reported in a discovery genome-wide association study from the CKDGen consortium, comprising 567 460 participants of European ancestry,25 none of whom were from MVP, EPIC-CVD, or UKB. Third, we used this GRS to conduct Mendelian randomization analyses in a total of 413 718 participants (ie, from EPIC-CVD, MVP, and UKB) with concomitant individual-level information on genetics, serum creatinine, and disease outcomes. Fourth, to assess the potential for interference by horizontal pleiotropy26 and to explore potential mechanisms that could mediate associations between eGFR and CVD outcomes, we studied our GRS for eGFR in relation to several established and emerging risk factors for CVD.

Figure 1. Study design and overview. CHD indicates coronary heart disease; CKDGen, CKD Genetics consortium; CVD, cardiovascular disease; eGFR, estimated glomerular filtration rate; EPIC-CVD, European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease; ERFC, Emerging Risk Factors Collaboration; MVP, Million Veteran Program; NMR, nuclear magnetic resonance; and UKB, UK Biobank.
Data Sources:

Information on each of the data sources used in the analysis is provided in the Expanded Methods in the Supplemental Material. In brief, the Emerging Risk Factors Collaboration, a global consortium of population cohort studies with harmonized individual-participant data on multiple CVD risk factors, has included 47 studies with available information on serum creatinine and diabetes status at recruitment.21 EPIC-CVD, a case-cohort study embedded in the pan-European EPIC prospective study of >500 000 participants, has recorded data on serum creatinine and imputed genome-wide array data from 21 of its 23 recruitment centers.22 MVP, a prospective cohort study recruited from 63 Veterans Health Administration medical facilities throughout the United States, has recorded serum creatinine, and imputed genome-wide array data are available for a large subset of its participants.23 UKB, a prospective study with 22 recruitment centers across the United Kingdom, has cohort-wide information on serum creatinine and imputed genome-wide array data.24 Relevant ethical approval and participant consent were obtained in all studies that contributed data to this work.
Estimation of Kidney Function:

Kidney function was estimated using creatinine-based eGFR, calculated with the Chronic Kidney Disease Epidemiology Collaboration equation.27 Creatinine concentration was multiplied by 0.95 for studies in which measurements were not standardized to isotope-dilution mass spectrometry.25,28 In a subset of participants with available data, kidney function was also defined using the Chronic Kidney Disease Epidemiology Collaboration cystatin C–based equation29 and albuminuria, measured as spot urine albumin-to-creatinine ratio (Expanded Methods).
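To make the calculation concrete, a minimal sketch of the 2009 CKD-EPI creatinine equation is shown below. This is illustrative code rather than the study's analysis scripts (which used Stata and R); the function name and arguments are ours. The race coefficient from the published equation is included for completeness, although it is effectively 1 in the European-ancestry analyses reported here.

```python
def ckd_epi_2009_egfr(scr_mg_dl: float, age_years: float,
                      female: bool, black: bool = False) -> float:
    """2009 CKD-EPI creatinine equation, in mL/min/1.73 m^2.

    scr_mg_dl: serum creatinine (mg/dL), IDMS-standardized; the paper
    multiplied creatinine by 0.95 for studies without IDMS standardization.
    """
    kappa = 0.7 if female else 0.9       # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha   # applies below the threshold
            * max(ratio, 1.0) ** -1.209  # applies above the threshold
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:                            # coefficient in the published 2009 equation
        egfr *= 1.159
    return egfr

# Example: a 55-year-old woman with creatinine 0.9 mg/dL -> eGFR of about 72
print(round(ckd_epi_2009_egfr(0.9, 55, female=True), 1))
```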
Observational Analyses:

Primary outcomes were incident CHD and stroke. Details of end-point definitions for each study are provided in Table S1. Participants in the contributing studies were eligible for inclusion in the present analysis if they met all of the following criteria: (1) aged 30 to 80 years at recruitment; (2) had recorded information on age, sex, circulating creatinine, and diabetes status; (3) had a creatinine-based eGFR of <300 mL·min–1·1.73 m–2; (4) did not have a known history of CVD or diabetes at baseline; (5) had complete information on the risk factors smoking status, systolic blood pressure, total cholesterol, high-density lipoprotein cholesterol, and body mass index; and (6) had at least 1 year of follow-up data after recruitment. Hazard ratios for associations of creatinine-based eGFR with incident CHD and stroke were calculated using Cox regression, stratified by sex and study center and, when appropriate, adjusted for traditional vascular risk factors (defined here as age, systolic blood pressure, smoking status, total cholesterol, high-density lipoprotein cholesterol, and body mass index) on a complete-case basis. To account for the EPIC-CVD case-cohort design, Cox models were adapted using Prentice weights.30 To avoid overfitting models, studies contributing <20 incident events to the analysis of a particular outcome were excluded from that analysis. Fractional polynomials were used to characterize nonlinear relationships of creatinine-based eGFR with risk of CHD and stroke, adjusted for age and CVD risk factors.31 Study-specific estimates for each outcome were pooled across studies using multivariable random-effects meta-analysis, with a reference point of 90 mL·min–1·1.73 m–2. When information on urinary biomarkers in UKB was available, participants were grouped into tenths on the basis of levels of urinary albumin-to-creatinine ratio to assess the shapes of associations between urinary biomarkers and CVD risk, using participants without albuminuria as the reference group.32
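As a rough illustration of this modeling setup, the sketch below fits a stratified, risk factor-adjusted Cox model with the lifelines package in Python. It is not the authors' code (the paper's analyses used Stata and R, with fractional polynomials and Prentice weights); the column names are hypothetical, and a simple quadratic in eGFR/90 stands in for the fractional-polynomial terms.

```python
import pandas as pd
from lifelines import CoxPHFitter

def fit_egfr_chd_cox(df: pd.DataFrame) -> CoxPHFitter:
    """Stratified Cox model for creatinine-based eGFR and incident CHD.

    Assumed (hypothetical) columns: 'time' (years of follow-up),
    'chd_event' (0/1), 'egfr', 'age', 'sbp', 'total_chol', 'hdl_chol',
    'bmi', 'smoker', 'sex', 'center'.
    """
    d = df.copy()
    d['egfr_90'] = d['egfr'] / 90.0      # scale to the 90-unit reference point
    d['egfr_90_sq'] = d['egfr_90'] ** 2  # crude stand-in for fractional polynomials
    cols = ['time', 'chd_event', 'egfr_90', 'egfr_90_sq',
            'age', 'sbp', 'total_chol', 'hdl_chol', 'bmi', 'smoker',
            'sex', 'center']
    cph = CoxPHFitter()
    cph.fit(d[cols], duration_col='time', event_col='chd_event',
            strata=['sex', 'center'])    # stratification by sex and study center
    return cph
```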
GRS for Kidney Function:

Using individual-participant data from EPIC-CVD, MVP, and UKB, we calculated a GRS33 weighted by the conditional effect estimates of the genetic variants associated (P<5×10–8) with creatinine-based eGFR in CKDGen,25 a global genetics consortium that has published genome-wide association study summary statistics for creatinine-based eGFR. Of the 262 variants associated with creatinine-based eGFR, 37 were excluded because of ancestry heterogeneity as reported in CKDGen,25 4 were excluded because of associations (P<5×10–8) with vascular risk factors as reported in previous genome-wide association studies (ie, smoking status, alcohol consumption, and educational attainment),34 and 3 were excluded because of missingness in at least 1 of the contributing studies, leaving 218 variants for the primary GRS for creatinine-based eGFR. In sensitivity analyses, we constructed 2 restricted GRSs using 126 and 121 genetic variants that were likely to be relevant for kidney function on the basis of their associations with cystatin C–based eGFR35 and blood urea nitrogen,25 respectively. A sensitivity analysis was also conducted using a GRS that included all 262 transancestry eGFR-associated index variants. Furthermore, to evaluate traits that could mediate or confound (through horizontal pleiotropy) the associations between genetically predicted eGFR and outcomes, we tested associations of GRSs for eGFR with a range of cardiovascular risk factors in UKB and EPIC-CVD and with 167 metabolites measured using targeted high-throughput nuclear magnetic resonance metabolomics (Nightingale Health Ltd) in UKB.
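The score itself is a weighted sum over variants, GRS_i = Σ_j w_j g_ij, where g_ij is the dosage of the eGFR-raising allele of variant j in participant i and w_j is the CKDGen conditional effect estimate. A minimal sketch, with simulated dosages standing in for real genotype data:

```python
import numpy as np

def genetic_risk_score(dosages: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted allele score, one value per participant.

    dosages: (n_participants, n_variants) imputed allele dosages in [0, 2],
             oriented to the eGFR-raising allele of each of the 218 variants.
    weights: conditional effect estimates from the CKDGen discovery GWAS.
    """
    return dosages @ weights

def variance_explained(grs: np.ndarray, egfr: np.ndarray) -> float:
    """Squared correlation of the score with creatinine-based eGFR,
    the quantity behind the 2.0% to 3.2% values reported in the Results."""
    r = np.corrcoef(grs, egfr)[0, 1]
    return float(r ** 2)

# Illustration with simulated data (real weights come from CKDGen):
rng = np.random.default_rng(0)
dosages = rng.binomial(2, 0.3, size=(1000, 218)).astype(float)
weights = rng.normal(0.0, 0.05, size=218)
score = genetic_risk_score(dosages, weights)
```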
Mendelian Randomization Analyses:

To account for the nonlinear relationship between eGFR and risk of CVD outcomes in observational analyses, we performed a stratified Mendelian randomization analysis using methods described previously.16–20 For each participant, we calculated the residual eGFR by subtracting the genetic contribution determined by the GRS from the observed eGFR. Participants were grouped on the basis of their residual eGFR into 5-unit categories between 45 and <105 mL·min–1·1.73 m–2, plus <45 and ≥105 mL·min–1·1.73 m–2. By stratifying on residual eGFR, we compared individuals in the population who would have an eGFR in the same category if they had the same genotype, reducing the potential influence of collider bias. We then calculated Mendelian randomization estimates for each eGFR category using the ratio method with the GRS as an instrumental variable, adjusting for age, age-squared, sex, study center, and the first 10 principal components of ancestry. Stratum-specific estimates were combined across studies using fixed-effect meta-analysis and plotted as a piecewise-linear function of eGFR, with pointwise confidence intervals calculated by resampling the stratum-specific estimates. Sensitivity analyses used the non-parametric doubly-ranked stratification method. Detailed methods describing the statistical analysis are in the Expanded Methods. Analyses used Stata 15.1 and R 3.6.1.
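To sketch the core of this procedure, the illustrative Python below stratifies on residual eGFR and computes a ratio (Wald) estimate per stratum. It simplifies the paper's approach (a logistic model replaces Cox regression, and covariate adjustment and meta-analysis are omitted); the data frame and its column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def stratified_mr(df: pd.DataFrame, bins) -> dict:
    """Residual-stratified Mendelian randomization, illustrative only.

    df columns assumed: 'egfr', 'grs', 'event' (0/1 outcome).
    bins: list of (low, high) residual-eGFR boundaries, eg
          [(-np.inf, 45), (45, 50), ..., (100, 105), (105, np.inf)].
    Returns odds ratios per 5 units LOWER genetically predicted eGFR.
    """
    # Genetic contribution to eGFR: slope of eGFR on the GRS
    X = sm.add_constant(df['grs'])
    beta_grs = sm.OLS(df['egfr'], X).fit().params['grs']
    residual = df['egfr'] - beta_grs * df['grs']

    estimates = {}
    for low, high in bins:
        stratum = df[(residual >= low) & (residual < high)]
        Xs = sm.add_constant(stratum['grs'])
        # Instrument-outcome and instrument-exposure associations
        b_gy = sm.Logit(stratum['event'], Xs).fit(disp=0).params['grs']
        b_gx = sm.OLS(stratum['egfr'], Xs).fit().params['grs']
        wald = b_gy / b_gx                   # log odds per unit higher eGFR
        estimates[(low, high)] = np.exp(-5.0 * wald)
    return estimates
```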
Results:

Among the 648 135 participants without a history of CVD or diabetes at baseline, the mean age was 57 years, 57% were men, and 4.4% had creatinine-based eGFR <60 mL·min–1·1.73 m–2 (Table 1, Tables S2 and S3).
During 6.8 million person-years of follow-up, there were 42 858 incident CHD outcomes and 15 693 strokes. Up to 413 718 participants of European ancestry from EPIC-CVD, MVP, and UKB contributed to the main genetic analyses (Figure 1). Distributions of serum creatinine concentration and creatinine-based eGFR were broadly similar across studies (Figures S1 and S2).

Table 1. Study-Level and Participant-Level Characteristics of the Contributing Data Sources

Observational Associations of eGFR With Cardiovascular Outcomes:

For both CHD and stroke, associations with creatinine-based eGFR were U-shaped. Compared with participants with creatinine-based eGFR values between 60 and 105 mL·min–1·1.73 m–2, risks of both CHD and stroke were higher in people with eGFR <60 or >105 mL·min–1·1.73 m–2 (Figure 2, Figure S3). The shapes of these associations did not change substantially after adjustment for several traditional risk factors (Figure 2). Associations were similar in men and women, in clinically relevant subgroups (ie, smokers and people with obesity or hypertension; Figure S4), in the different studies contributing to this analysis (Figure S5), and when participants with a history of diabetes or missing information on cardiovascular risk factors were included (Figures S6 and S7). Similar associations were also observed for ischemic stroke (Figure S3).

Figure 2. Observational associations of eGFR levels with risk of coronary heart disease and stroke (n=648 135). Participants with missing information on age and CVD risk factors (systolic blood pressure, total and high-density lipoprotein cholesterol, body mass index, and smoking status) were excluded from the analyses. Hazard ratios were estimated using Cox regression, adjusting for age and CVD risk factors (systolic blood pressure, total and high-density lipoprotein cholesterol, body mass index, and smoking status), and stratified by sex and study center. The reference point is 90 mL·min–1·1.73 m–2. Shaded regions indicate 95% CIs. CVD indicates cardiovascular disease; and eGFR, estimated glomerular filtration rate.

For the 338 044 participants in UKB with available data on serum cystatin C and urinary albumin-to-creatinine ratio, associations of CHD and stroke with cystatin C–based eGFR were broadly similar to those with creatinine-based eGFR, but only for eGFR values lower than ≈90 mL·min–1·1.73 m–2. In contrast with creatinine-based eGFR values >105 mL·min–1·1.73 m–2, there was no evidence of higher risk of CHD in participants with cystatin C–based eGFR values >105 mL·min–1·1.73 m–2 (Figure S8). Levels of urinary microalbumin and urinary albumin-to-creatinine ratio showed approximately linear associations with risk of CHD and stroke, which were somewhat attenuated after adjustment for traditional risk factors (Figure S9). Compared with participants with a creatinine-based eGFR of 75 to <90 mL·min–1·1.73 m–2 and without albuminuria, participants with albuminuria had higher risk of CHD and stroke (Figure S10).
Overall, stratum-specific localized average causal estimates and nonlinear Mendelian randomization estimates were compatible across the studies contributing to this analysis (Table S6, Figure S13). Findings were supported in analyses using the nonparametric doubly ranked stratification (Table S7, Figure S14). Similar associations were observed in analyses that further adjusted for systolic blood pressure, lipoprotein(a), hemoglobin A1c, and triglycerides (Figure S15), included participants with a history of diabetes at baseline (Figure S16), or used ischemic stroke as the stroke outcome (Figure S17). Results were also similar using GRS for cystatin C–based eGFR or blood urea nitrogen, or using variants associated with creatinine-based eGFR regardless of ancestry heterogeneity (Figure S18).

Table 2. Mendelian Randomization Estimates per 5 mL·min–1·1.73 m–2 Lower Genetically Predicted eGFR With Risk of Coronary Heart Disease and Stroke

Figure 3. Associations of genetically predicted eGFR with risk of coronary heart disease and stroke (n=413 718). The reference point is 90 mL·min–1·1.73 m–2. Gradients at each point of the curve represent the localized average causal effect on coronary heart disease or stroke per 5 mL·min–1·1.73 m–2 change in genetically predicted eGFR. The vertical lines represent 95% CIs. Analyses were adjusted for age, age-squared, sex, study center, and the first 10 principal components of ancestry. eGFR indicates estimated glomerular filtration rate.
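The doubly ranked method referenced above stratifies without assuming a constant instrument-exposure relationship: participants are first ranked by the instrument into consecutive pre-strata, then ranked by exposure within each pre-stratum, with the q-th ranked individual assigned to stratum q. A minimal sketch of that stratification step follows, under the simplifying assumptions of no ties and a sample size divisible by the number of strata; it is an outline of the published idea, not the study's implementation.

```python
import numpy as np

def doubly_ranked_strata(grs: np.ndarray, exposure: np.ndarray, n_strata: int) -> np.ndarray:
    """Assign participants to strata via doubly ranked stratification.

    Pre-strata of size n_strata are formed by instrument rank; within each
    pre-stratum, the participant with the q-th smallest exposure is assigned
    to stratum q. Assumes len(grs) is a multiple of n_strata (a simplification).
    """
    n = len(grs)
    strata = np.empty(n, dtype=int)
    order = np.argsort(grs)                      # rank by the instrument
    for start in range(0, n, n_strata):          # consecutive pre-strata
        block = order[start:start + n_strata]
        by_exposure = block[np.argsort(exposure[block])]
        strata[by_exposure] = np.arange(len(block))
    return strata

rng = np.random.default_rng(1)
g = rng.normal(size=10_000)
x = 90 + 5 * g + rng.normal(scale=10, size=10_000)
print(np.bincount(doubly_ranked_strata(g, x, 5)))  # five equal-sized strata
```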
Discussion: In analyses combining genetic, biomarker, and clinical data in ≈640 000 participants, our study suggests that, in people without manifest CVD or diabetes, even mildly reduced kidney function is causally associated with a higher risk of CVD outcomes. Our results provide novel causal insights and highlight the wider potential value of preventive approaches that can preserve and modulate kidney function.

First, our study estimated a dose-response curve for genetically predicted eGFR and CHD, identifying an eGFR value of ≈75 mL·min–1·1.73 m–2 as a possible risk threshold. The causal relationship of kidney function with CHD is therefore nonlinear in shape, in contrast with those for blood pressure and low-density lipoprotein cholesterol, which each have approximately log-linear relationships with CHD risk across their range of values.
In contrast with population-wide strategies to improve blood pressure and low-density lipoprotein cholesterol levels, this finding implies that a preferable strategy may be to identify those in the population with mild-to-moderate kidney dysfunction and target them for renoprotective interventions alongside routine strategies to reduce cardiovascular risk. For example, renoprotective interventions such as renin-angiotensin-aldosterone system inhibitors [36] and inhibitors of sodium-glucose cotransporter 2 [37] might provide a potential means to do so. Our findings encourage additional evaluation of such agents in patients with CKD without manifest CVD or diabetes [38,39].

Second, we found that our GRS for eGFR was modestly associated with several established and emerging CVD risk factors, including plasma concentrations of proatherogenic lipids (eg, lipoprotein(a), triglycerides, and triglyceride-related lipoprotein subclasses), hemoglobin A1c values, and blood pressure, consistent with previous studies [11,40]. However, adjustment for such factors did not materially alter the associations between eGFR and atherosclerotic CVD, indicating that they are unlikely to mediate or confound the associations between genetically predicted kidney dysfunction and CHD or stroke, and limiting the likelihood that results are subject to influences of horizontal pleiotropy. These results suggest that the effect of reduced kidney function on CVD is independent of traditional cardiovascular risk factors and underscore the potential importance of direct preservation of renal function to prevent CVD, in addition to control of known risk factors.

Third, our data help to resolve controversies about the relevance to CHD of higher-than-average eGFR. In contrast with the observation that creatinine-based eGFR values >105 mL·min–1·1.73 m–2 are associated with higher CHD risk, we found that genetically predicted higher eGFR values were not associated with CHD risk in this same group. This discordance implies different pathophysiological meanings of creatinine-based eGFR values >105 mL·min–1·1.73 m–2 (which may represent a transient state of hyperfiltration before progression to poorer kidney function and CKD) and genetically predicted higher eGFR values (which represent a lifelong tendency toward better kidney function). This explanation is supported by our finding that the association between higher creatinine-based eGFR values and higher CHD risk was principally in participants who had albuminuria (and, therefore, preexisting kidney damage) at entry into the study.

Fourth, our results are broadly consistent with a causal relationship between eGFR and stroke. The lack of statistically significant findings in our Mendelian randomization analysis for stroke outcomes principally reflects the lower power of our study to evaluate a GRS with stroke compared with CHD. It may also be attributable to pathogenetic heterogeneity in stroke diagnoses (eg, cardioembolic, small vessel disease, and hemorrhagic subtypes may be less driven by atherosclerotic pathology than other ischemic stroke subtypes) [41,42].

Our study had major strengths, including a large sample size, access to individual-participant data, use of multiple genetic causal inference methods tailored to the evaluation of nonlinear disease associations, and an updated GRS that explains more variation in eGFR than previous analyses [14]. However, there are also potential limitations.
First, Mendelian randomization assumptions require that the only causal pathway from the genetic variants to the outcome is through eGFR. Although we assessed the potential for interference by horizontal pleiotropy, there remains the possibility of residual confounding by unrecognized effects of genotypes on other risk factors and by adaptation during early life to compensate for genetically lower eGFR. Second, to reduce the scope for confounding by ancestry (population stratification), our analyses were limited to participants of European ancestries. This means that our findings might not be applicable to other populations, and additional studies on this topic are needed, especially in non-European ancestry populations. Third, although serum creatinine is used routinely for estimating eGFR, true measurement of GFR requires the use of inulin, iohexol, or iothalamate. Assay of serum creatinine is liable to interference from other serum components (eg, bilirubin and glucose) [43,44] and autoimmune activation [45], and is sensitive to changes in individuals' muscle mass (eg, sarcopenia). Assessment of cystatin C, an analyte that enables an alternative calculation of eGFR without the potential limitations of creatinine, was available only in a subset of the participants we studied. However, our genetic analyses restricted to variants additionally associated with other biomarkers of kidney function showed results consistent with those for creatinine-based eGFR. Last, we used the 2009 Chronic Kidney Disease Epidemiology Collaboration equation to calculate eGFR. However, our analysis was limited to populations of European ancestry, in which the 2009 and 2021 Chronic Kidney Disease Epidemiology Collaboration equations provide similar estimates of eGFR [46].

Conclusions: In people without manifest CVD or diabetes, mild-to-moderate kidney dysfunction was causally related to cardiovascular outcomes, highlighting the potential cardiovascular benefit of preventive approaches that improve kidney function.

Article Information

Acknowledgments: The authors thank investigators and participants of the several studies that contributed data to the Emerging Risk Factors Collaboration. We thank all EPIC (European Prospective Investigation into Cancer) participants and staff for their contribution to the study, the laboratory teams at the Medical Research Council Epidemiology Unit for sample management and Cambridge Genomic Services for genotyping, S. Spackman for data management, and the team at the EPIC-CVD Coordinating Centre for study coordination and administration. The authors also thank the participants of the VA Million Veteran Program and its collaborators. Acknowledgment of VA Million Veteran Program leadership and staff contributions can be found in the Supplemental Material Note. This research has been conducted using the UK Biobank Resource under Application Number 31852.
Sources of Funding: The Emerging Risk Factors Collaboration (ERFC) coordinating center was underpinned by program grants from the British Heart Foundation (BHF; SP/09/002; RG/13/13/30194; RG/18/13/33946), the BHF Centre of Research Excellence (RE/18/1/34212), the UK Medical Research Council (MR/L003120/1), and the National Institute for Health and Care Research (NIHR) Cambridge Biomedical Research Centre (BRC-1215-20014), with project-specific support received from the UK NIHR, the British United Provident Association UK Foundation, and an unrestricted educational grant from GlaxoSmithKline. This work was supported by Health Data Research UK, which is funded by the UK Medical Research Council, the Engineering and Physical Sciences Research Council, the Economic and Social Research Council, the Department of Health and Social Care (England), the Chief Scientist Office of the Scottish Government Health and Social Care Directorates, the Health and Social Care Research and Development Division (Welsh Government), the Public Health Agency (Northern Ireland), the BHF, and the Wellcome Trust. A variety of funding sources have supported recruitment, follow-up, and laboratory measurements in the studies contributing data to the ERFC, which are listed on the ERFC website (www.phpc.cam.ac.uk/ceu/erfc/list-of-studies). EPIC-CVD (European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease Study) was funded by the European Research Council (268834) and the European Commission Framework Programme 7 (HEALTH-F2-2012-279233). The coordination of EPIC is financially supported by the International Agency for Research on Cancer (IARC) and by the Department of Epidemiology and Biostatistics, School of Public Health, Imperial College London, which has additional infrastructure support provided by the NIHR Imperial Biomedical Research Centre (BRC). The national cohorts are supported by: Danish Cancer Society (Denmark); Ligue Contre le Cancer, Institut Gustave Roussy, Mutuelle Générale de l'Education Nationale, Institut National de la Santé et de la Recherche Médicale (INSERM) (France); German Cancer Aid, German Cancer Research Center (DKFZ), German Institute of Human Nutrition Potsdam-Rehbruecke (DIfE), Federal Ministry of Education and Research (BMBF) (Germany); Associazione Italiana per la Ricerca sul Cancro (AIRC), Compagnia di San Paolo, and National Research Council (Italy); Dutch Ministry of Public Health, Welfare and Sports (VWS), Netherlands Cancer Registry (NKR), LK Research Funds, Dutch Prevention Funds, Dutch ZON (Zorg Onderzoek Nederland), World Cancer Research Fund (WCRF), Statistics Netherlands (The Netherlands); Health Research Fund (FIS), Instituto de Salud Carlos III (ISCIII), Regional Governments of Andalucía, Asturias, Basque Country, Murcia and Navarra, and the Catalan Institute of Oncology (ICO) (Spain); Swedish Cancer Society, Swedish Research Council, and County Councils of Skåne and Västerbotten (Sweden); and Cancer Research UK (14136 to EPIC-Norfolk; C8221/A29017 to EPIC-Oxford) and the Medical Research Council, United Kingdom (1000143 to EPIC-Norfolk; MR/M012190/1 to EPIC-Oxford).
The establishment of the EPIC-InterAct subcohort (used in the EPIC-CVD study) and conduct of biochemical assays were supported by the EU Sixth Framework Programme (FP6) (grant LSHM_CT_2006_037197 to the InterAct project) and the Medical Research Council Epidemiology Unit (grants MC_UU_12015/1 and MC_UU_12015/5). This research is based on data from the Million Veteran Program, Office of Research and Development, Veterans Health Administration, and was supported by award I01-BX004821 (principal investigators, Drs Peter W.F. Wilson and Kelly Cho) and I01-BX003360 (principal investigator, Dr Adriana M. Hung). Dr Damrauer is supported by IK2-CX001780. Dr Hung is supported by CX001897. Dr Tsao is supported by BX003362-01 from the VA Office of Research and Development. Dr Robinson-Cohen is supported by R01DK122075. Dr Sun was funded by a BHF Programme Grant (RG/18/13/33946). Dr Arnold was funded by a BHF Programme Grant (RG/18/13/33946). Dr Kaptoge is funded by a BHF Chair award (CH/12/2/29428). Dr Mason is funded by the EU/EFPIA Innovative Medicines Initiative Joint Undertaking BigData@Heart grant 116074. Dr Bolton was funded by the NIHR BTRU in Donor Health and Genomics (NIHR BTRU-2014-10024). Dr Allara is funded by a BHF Programme Grant (RG/18/13/33946). Prof Inouye is supported by the Munz Chair of Cardiovascular Prediction and Prevention and the NIHR Cambridge Biomedical Research Centre (BRC-1215-20014). Prof Inouye was also supported by the UK Economic and Social Research Council (ES/T013192/1). Prof Danesh holds a British Heart Foundation Professorship and an NIHR Senior Investigator Award. Prof Wood is part of the BigData@Heart Consortium, funded by the Innovative Medicines Initiative-2 Joint Undertaking under grant agreement No 116074. Prof Wood was supported by the BHF-Turing Cardiovascular Data Science Award (BCDSA\100005). Prof Di Angelantonio holds an NIHR Senior Investigator Award.
Disclosures: Where authors are identified as personnel of the International Agency for Research on Cancer/World Health Organization, the authors alone are responsible for the views expressed in this article, and they do not necessarily represent the decisions, policy, or views of the International Agency for Research on Cancer/World Health Organization. The views expressed are those of the author(s) and not necessarily those of the National Institute for Health Research or the Department of Health and Social Care. This publication does not represent the views of the Department of Veterans Affairs or the United States government. Dr Staley is now a full-time employee at UCB. Dr Sun is now an employee at Regeneron Pharmaceuticals. Dr Arnold is now an employee of AstraZeneca. Dr Danesh serves on scientific advisory boards for AstraZeneca, Novartis, and UK Biobank, and has received multiple grants from academic, charitable, and industry sources outside of the submitted work. Adam Butterworth reports institutional grants from AstraZeneca, Bayer, Biogen, BioMarin, Bioverativ, Novartis, Regeneron, and Sanofi.

Supplemental Material: Expanded Methods; Tables S1–S7; Figures S1–S18.
Background: End-stage renal disease is associated with a high risk of cardiovascular events. It is unknown, however, whether mild-to-moderate kidney dysfunction is causally related to coronary heart disease (CHD) and stroke. Methods: Observational analyses were conducted using individual-level data from 4 population data sources (Emerging Risk Factors Collaboration, EPIC-CVD [European Prospective Investigation into Cancer and Nutrition-Cardiovascular Disease Study], Million Veteran Program, and UK Biobank), comprising 648 135 participants with no history of cardiovascular disease or diabetes at baseline, yielding 42 858 and 15 693 incident CHD and stroke events, respectively, during 6.8 million person-years of follow-up. Using a genetic risk score of 218 variants for estimated glomerular filtration rate (eGFR), we conducted Mendelian randomization analyses involving 413 718 participants (25 917 CHD and 8622 strokes) in EPIC-CVD, Million Veteran Program, and UK Biobank. Results: There were U-shaped observational associations of creatinine-based eGFR with CHD and stroke, with higher risk in participants with eGFR values <60 or >105 mL·min-1·1.73 m-2, compared with those with eGFR between 60 and 105 mL·min-1·1.73 m-2. Mendelian randomization analyses for CHD showed an association among participants with eGFR <60 mL·min-1·1.73 m-2, with a 14% (95% CI, 3%-27%) higher CHD risk per 5 mL·min-1·1.73 m-2 lower genetically predicted eGFR, but not for those with eGFR >105 mL·min-1·1.73 m-2. Results were not materially different after adjustment for factors associated with the eGFR genetic risk score, such as lipoprotein(a), triglycerides, hemoglobin A1c, and blood pressure. Mendelian randomization results for stroke were nonsignificant but broadly similar to those for CHD. Conclusions: In people without manifest cardiovascular disease or diabetes, mild-to-moderate kidney dysfunction is causally related to risk of CHD, highlighting the potential value of preventive approaches that preserve and modulate kidney function.
Study Design and Study Overview: This study involved interrelated components (Figure 1). First, we characterized observational associations between eGFR and incident CHD or stroke, using data from the Emerging Risk Factors Collaboration [21], EPIC-CVD (European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease Study) [22], the Million Veteran Program (MVP) [23], and UK Biobank (UKB) [24], collectively involving 648 135 participants who had serum creatinine measurements but no known CVD or diabetes at baseline. Second, we constructed a genetic risk score (GRS) for eGFR by computing a weighted sum of eGFR-associated index variants reported in a discovery genome-wide association study from the CKDGen consortium comprising 567 460 participants of European ancestry [25], none of whom were from MVP, EPIC-CVD, or UKB. Third, we used this GRS to conduct Mendelian randomization analyses in a total of 413 718 participants (ie, in EPIC-CVD, MVP, and UKB) with concomitant individual-level information on genetics, serum creatinine, and disease outcomes. Fourth, to assess the potential for interference by horizontal pleiotropy [26] and to explore potential mechanisms that could mediate associations between eGFR and CVD outcomes, we studied our GRS for eGFR in relation to several established and emerging risk factors for CVD.

Figure 1. Study design and overview. CHD indicates coronary heart disease; CKDGen, CKD Genetics consortium; CVD, cardiovascular disease; eGFR, estimated glomerular filtration rate; EPIC-CVD, European Prospective Investigation into Cancer and Nutrition–Cardiovascular Disease; ERFC, Emerging Risk Factors Collaboration; MVP, Million Veteran Program; NMR, nuclear magnetic resonance; and UKB, UK Biobank.
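Mechanically, a GRS of this kind is a weighted allele count: for each participant, the number of effect alleles carried at each index variant is multiplied by that variant's published per-allele effect size and summed. A minimal sketch under that description follows; the variant IDs, weights, and simulated dosages are placeholders, not the 218 variants used in this study.

```python
import numpy as np

# Placeholder per-allele effects on eGFR from a discovery GWAS (hypothetical variants)
weights = {"rsA": 0.021, "rsB": -0.015, "rsC": 0.008}

def genetic_risk_score(dosages: dict[str, np.ndarray]) -> np.ndarray:
    """Weighted sum of effect-allele dosages (0-2, possibly imputed) per person."""
    n = len(next(iter(dosages.values())))
    grs = np.zeros(n)
    for rsid, beta in weights.items():
        grs += beta * dosages[rsid]
    return (grs - grs.mean()) / grs.std()   # standardize to SD units, as in the text

rng = np.random.default_rng(2)
dosages = {rs: rng.binomial(2, 0.3, size=1000).astype(float) for rs in weights}
print(genetic_risk_score(dosages)[:5])
```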
Novel Germline
36013579
Background: Pheochromocytoma (Pheo) and paraganglioma (PGL) are rare tumors, mostly resulting from pathogenic variants of predisposing genes, with a genetic contribution that now stands at around 70%. Germline variants account for approximately 40%, while the remaining 30% is attributable to somatic variants.
Methods: Genetic analysis was carried out by NGS. This analysis was performed using a panel of genes known for tumor predisposition (EGLN1, EPAS1, FH, KIF1Bβ, MAX, NF1, RET, SDHA, SDHAF2, SDHB, SDHC, SDHD, TMEM127, and VHL), followed first by SNP-CGH array, to exclude the presence of pathogenic copy number variants (CNVs) and loss of heterozygosity (LOH), and subsequently by whole exome sequencing (WES) comparative sequence analysis of DNA extracted from tumor fragments and peripheral blood.
Results: We found a novel germline PHD2 (EGLN1) gene variant, c.153G>A, p.W51*, in a patient affected by metastatic Pheo and chronic myeloid leukemia (CML) in the absence of polycythemia.
Conclusions: According to the latest guidelines, it is mandatory to perform genetic analysis in all Pheo/PGL cases regardless of phenotype. In patients with metastatic disease and no evidence of polycythemia, we propose testing for PHD2 (EGLN1) gene variants. A possible correlation between PHD2 (EGLN1) pathogenic variants and CML clinical course should be considered.
[ "Adrenal Gland Neoplasms", "Genetic Predisposition to Disease", "Germ-Line Mutation", "Humans", "Leukemia, Myelogenous, Chronic, BCR-ABL Positive", "Paraganglioma", "Pheochromocytoma", "Polycythemia" ]
PMCID: PMC9416477
1. Introduction
Pheochromocytoma (Pheo) and paraganglioma (PGL) are rare tumors with an incidence of 2–5 patients per million per year and an uncertain malignant potential [1]. Pheos represent about 80–85% of cases and PGLs about 15–20% [2]. The recommended diagnostic workup for suspected Pheo/PGL (PPGL) includes plasma free metanephrines and fractionated urinary metanephrines. Subsequently, computed tomography (CT) scans or magnetic resonance imaging (MRI) are used to locate the lesion [3]. Nuclear medicine exams such as 123I-meta-iodobenzyl-guanidine (MIBG) scintigraphy, 18F-fluoro-L-dihydroxyphenylalanine (18F-DOPA), and 68Ga-DOTATATE PET are predominantly used in patients with metastatic disease [4,5].

Surgery is the curative option in non-metastatic and metastatic PPGL, and in patients with metastatic disease, tumor resection has been reported to increase overall survival [6]. Medical therapy with alpha-blockers is necessary before surgery [7]. In patients affected by metastatic disease, 'wait and see' is the first option, given the generally slow evolution of the disease [8]. Radiometabolic therapies with 131I-MIBG or peptide receptor radionuclide therapy (PRRT) have proved effective [9,10], while cyclophosphamide-vincristine-dacarbazine (CVD) is the chemotherapy most commonly administered [11]. Local therapies such as external radiation therapy, radiosurgery, radiofrequency, cryoablation, and ethanol injection should be considered for unresectable lesions [12]. Tyrosine kinase inhibitors such as sunitinib [13], the alkylating agent temozolomide [14], and immunotherapy such as pembrolizumab [15] are novel medical options for metastatic PPGLs.

Up to 70% of PPGLs are caused by germline or somatic pathogenic variants in one of the known susceptibility genes [16,17]. Based on their transcriptional profile, PPGLs are classified into three clusters. Cluster 1 includes PPGLs with variants in genes encoding the hypoxia-inducible factor (HIF) 2α, the Von Hippel–Lindau tumor suppressor (VHL), the prolyl hydroxylase domain (PHD), fumarate hydratase (FH), and succinate dehydrogenase subunits (SDHx) [18,19]. These tumors are characterized by the activation of pseudohypoxic pathways and by an immature catecholamine-secreting phenotype [20]. Cluster 2 comprises PPGLs with pathogenic variants in the REarranged during Transfection (RET) proto-oncogene, the Neurofibromatosis type 1 (NF1) tumor suppressor gene, the TransMEMbrane protein (TMEM127) gene, the Harvey rat sarcoma viral oncogene homolog (HRAS), and the MYC Associated Factor X (MAX) gene. Cluster 2 PPGLs show activated MAPK and mTOR signaling pathways and are mostly benign, exhibiting a mature catecholamine phenotype with a strong expression of phenylethanolamine N-methyltransferase (PNMT) [21,22,23,24]. Cluster 3 is the Wnt signaling cluster. These tumors are due to somatic mutations of the CSDE1 gene or somatic gene fusions of the MAML3 gene [2], although patients with sporadic forms also fall into this cluster. Cluster 3 tumors show more aggressive behavior [19].

Due to the large number of known susceptibility genes, next-generation sequencing (NGS) technology is ideally suited for carrying out PPGL genetic screening. The genetic screening proposed by Toledo et al. [25] includes the PHD2 (also called EGLN1) gene (Egl-9 Family Hypoxia Inducible Factor 1). The encoded protein catalyzes the post-translational formation of 4-hydroxyproline in hypoxia-inducible factor (HIF) alpha proteins.
HIF is a transcriptional complex that plays a central role in mammalian oxygen homeostasis, controlling energy and iron metabolism, erythropoiesis, and development under hypoxic or pseudohypoxic conditions, and mediating adaptive cell responses to these states. Under normoxic conditions, HIF is controlled by several enzymatic reactions, including prolyl hydroxylation by PHDs, leading to proteasome degradation. However, pseudohypoxic conditions can lead to HIF stabilization and transactivation of target genes [26,27]. Dysregulation of HIF contributes to tumorigenesis and cancer progression [28,29,30,31]. HIF also has a crucial role in the pathogenesis of neuroendocrine tumors, especially PPGLs, regulating the cluster 1 pathway. Furthermore, heterozygous PHD2 variants cause polycythemia, supporting the importance of PHD2 in the control of red cell mass in humans [32,33]; such polycythemia may be associated with PPGL [34,35]. Here, we report a novel germline PHD2 variant in a patient affected by metastatic Pheo and chronic myeloid leukemia (CML) in the absence of polycythemia.
3. Results
The flow chart illustrating the filtering process and the variant selection used to identify the pathogenic variants is shown in Figure 1B.

3.1. Mutational Analysis

Analysis of the SNP-CGH array revealed the absence of pathogenic CNVs and LOH across the entire genome. In particular, we focused on chromosome 1 to identify or exclude a rearrangement or LOH near EGLN1 that, in combination with the nonsense variant in EGLN1, could have some effect on our patient's phenotype. NGS identified a heterozygous c.153G>A (p.W51*) variant in exon 1 of the PHD2 gene (NM_022051.3), which was classified as pathogenic according to the American College of Medical Genetics (ACMG) guidelines. This variant is not reported in the GnomAD, ExAC, or dbSNP NFE (European non-Finnish) databases. The PHD2 variant entails a G to A transition that converts the TGG coding triplet into a stop codon, encoding a truncated PHD2 protein (Figure 4). Its pathogenicity is presumed because it leads to a premature stop codon, and the bioinformatic tool (http://autopvs1.genetics.bgi.com/, accessed on 13 August 2022) predicted mRNA decay with a strong probability [44]. WES comparative sequence analysis of the tumor and blood DNA confirmed the variant identified by the gene panel, but no other significant variants were detected in the other coding regions.

Computational analysis highlighted the possible involvement of the truncated peptide in two different mechanisms: a reduction in the propensity of PHD2 to generate dimers, and a direct interaction with nuclear receptor corepressor 2 (NCoR2). First, four possible amino acid regions were detected that, by interacting with the truncated peptide, could interfere with the formation of a disulfide bond (Figure 5A–D). These regions are located within the DSBH domain (aa 299–390, light blue), which possesses the three cysteine residues (Cys302, Cys323, and Cys326, red) required for oxidative dimerization through the formation of the disulfide bond and/or induction of a conformational change after oxidation. Second, a potential interaction of the truncated peptide with a specific amino acid sequence, "TISNPPPLISSAK", of the NCoR2 protein at positions 1101–1013 was identified. This NCoR2 sequence extends to the third region of the RII domain (residues 752–1016) and interacts with HDAC3, a Class I member of the histone deacetylase superfamily (Figure 5F).

Genetic analysis was extended to the patient's parents with their consent. Genomic DNA was screened for the PHD2 variant using PCR and Sanger sequencing. The patient had inherited the PHD2 variant from her father.
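The effect of c.153G>A can be verified directly from the reading frame: position 153 is the third base of codon 51, and a G>A transition there converts TGG (tryptophan) into the TGA stop codon. A small self-contained check follows; the codon strings are derived from the variant description itself, not from a full reference transcript.

```python
# Standard-code translation for the two codons of interest
CODON_TABLE = {"TGG": "W", "TGA": "*"}   # tryptophan and stop, respectively

ref_codon = "TGG"                        # codon 51 of PHD2 (EGLN1), per the variant description
alt_codon = ref_codon[:2] + "A"          # c.153G>A hits the codon's third base
assert (153 - 1) % 3 == 2                # c.153 is indeed a third codon position
print(f"p.{CODON_TABLE[ref_codon]}51{CODON_TABLE[alt_codon]}")  # -> p.W51*
```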
WES comparative sequence analysis of tumor and blood DNA confirmed the variant identified by the gene panel; no other significant variants were detected in the remaining coding regions.
Computational analysis highlighted the possible involvement of the truncated peptide in two different mechanisms: a reduction in the propensity of PHD2 to form dimers, and a direct interaction with nuclear receptor corepressor 2 (NCoR2). Firstly, four amino acid regions were detected that, by interacting with the truncated peptide, could interfere with the formation of a disulfide bond (Figure 5A–D). These regions lie within the DSBH domain (aa 299–390, light blue), which harbors the three cysteine residues (Cys302, Cys323, and Cys326, red) required for oxidative dimerization through formation of the disulfide bond and/or induction of a conformational change after oxidation. Secondly, a potential interaction of the truncated peptide with the specific sequence “TISNPPPLISSAK” of the NCoR2 protein at positions 1001–1013 was identified. This NCoR2 sequence extends into the third region of the RII domain (residues 752–1016) and interacts with HDAC3, a Class I member of the histone deacetylase superfamily (Figure 5F).
Genetic analysis was extended to the patient’s parents with their consent. Genomic DNA was screened for the PHD2 variant using PCR and Sanger sequencing. The patient had inherited the PHD2 variant from her father.
3.2. PHD2 and HIF Expression in the Primary Tumor
Compared with non-PHD2-mutated Pheo and PGL, the PHD2-mutated tissue showed a significant reduction in EGLN1 mRNA expression of 43% and 47%, respectively (Figure 5E). Neoplastic cells were negative for PHD2 expression, whereas strong PHD2 staining of endothelial cells was evident in the lining of large blood vessels as well as in the capillaries; PHD2 expression was restricted to the cytoplasm (Figure 6). In addition, the tumor expressed higher levels of HIF2α than a healthy adrenal, as shown by Western blot analysis (Figure 7).
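The percent reductions follow directly from the relative expression values; assuming, purely for illustration, 2^−ΔΔCt values of 0.57 versus the non-mutated Pheo comparator and 0.53 versus PGL:

    # Convert relative expression (2^-ΔΔCt) into percent reduction.
    # The fold-change values are assumed, chosen to reproduce the
    # reported 43% and 47% (Section 3.2).
    for comparator, fold in [("non-mutated Pheo", 0.57),
                             ("non-mutated PGL", 0.53)]:
        print(f"vs {comparator}: {(1 - fold) * 100:.0f}% lower EGLN1 mRNA")
    # vs non-mutated Pheo: 43% lower EGLN1 mRNA
    # vs non-mutated PGL: 47% lower EGLN1 mRNA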
5. Conclusions
We report a novel nonsense germline PHD2 variant in a patient affected by metastatic Pheo and CML in the absence of polycythemia. Our findings confirm the need to screen patients affected by chromaffin disease with a comprehensive NGS panel, irrespective of phenotype. In particular, we suggest extending PHD2 gene analysis to younger, non-polycythemic patients.
[ "2. Materials and Methods", "2.1. Case Presentation", "2.2. Genetic Analysis", "2.3. SNP-CGH Array", "2.4. Whole Exome Sequencing and Bioinformatics Analysis", "2.5. 3 D Variant Prediction", "2.6. PHD2 Immunohistochemistry", "2.7. Western Blot Analysis", "2.8. RNA Isolation and Quantitative Real-Time PCR", "3.1. Mutational Analysis", "3.2. PHD2 and HIF Expression in the Primary Tumor" ]
[ "Informed consent was obtained from the patient and the manuscript was written following the CARE guidelines.\n2.1. Case Presentation The timing of the clinical presentation of the present case is shown in Figure 1A.\nA 17-year-old female was incidentally diagnosed in 2012 with a 2.7 cm left adrenal mass suggestive of adenoma. The patient came to our attention in 2014. The lesion had increased to 3.6 cm in size. Urine tests showed high levels of urinary Metanephrine (MNu: 488 mcg/day, normal value (nv) < 320) and urinary Normetanephrine (NMNu: 65,125 mcg/day, nv < 390), indicating Pheo. No alteration in urinary Methoxytyramine levels (MTXu: nv < 460) and blood count were observed (white blood cells 6.28 × 103/µL normal value (nv) 4.00–11.00, red blood cells 5.09 × 106/µL normal value (nv) 3.8–5.00, hemoglobin 14.5 g/dL nv 12.0–16.0, hematocrit 43.4% nv 35.0–48.8, platelets 287 × 103/µL normal value (nv) 150–450).\nThe patient was subjected to left laparoscopic adrenalectomy in June 2014 and the post-operative course was uneventful. Histologic examination revealed Pheo (Figure 2A). There was no evidence of capsular or vascular invasion and necrosis. The mitotic count was lower than 3 per 10 high-power fields (HPFs) and atypical figures were not seen. A Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) of 3 was assigned and the Ki67 proliferation index was <1%. A para-aortic lymph node was positive for the presence of chromaffin tissue (Figure 2B) and metastatic Pheo was diagnosed.\nPost-operative urinary metanephrine levels remained high (MNu 51 mcg/day, NMNu 882 mcg/day, and MTXu 116 mcg/day). Subsequent controls revealed an increase in NMNu up to 1346 mcg/day in October 2015 with a negative CT scan and MRI. The patient underwent 18F-fluoro-dihydroxy-pheylalanine (18F-DOPA) positron emission tomography (PET) and 68Ga-DOTATATE (Figure 3A) that revealed an increased uptake of para-aortic and retrocrural lymph nodes. The disease remained stable with a progressive increase in urinary NMN levels up to 2391 mcg/day. In June 2020, on account of the tumor burden and increasing levels of urinary NMN, peptide receptor radionuclide therapy (PRRT) with 177Lu-DOTATATE was initiated after egg preservation. PRRT was interrupted after four cycles due to a rapid increase in platelets (up to 2348 × 103/µL, nv 150–450). In January 2021,68Ga-DOTATATE showed a reduced tracer uptake (Figure 3B). Further investigations, including bone marrow biopsy and genetic analysis, were carried out. A standard karyotype identified the presence of the Philadelphia chromosome (BCR/Abl), the hallmark of CML, and therapy with Imatinib was prescribed (200 mg bid). The erythropoietin level rose mildly (34.7 mlU/mL, the normal value being 4.3–29). During therapy, red blood cell count and hematocrit and hemoglobin levels remained stable, and the last control resulted 3.70 × 106/μL, 37%, and 12.2 g/dL, respectively. Mean corpuscular volume (MCV), mean cell hemoglobin (MCH), and ferritin were normal (85.3 fl, 28.5 pg and 13 ng/mL, respectively).\nThe timing of the clinical presentation of the present case is shown in Figure 1A.\nA 17-year-old female was incidentally diagnosed in 2012 with a 2.7 cm left adrenal mass suggestive of adenoma. The patient came to our attention in 2014. The lesion had increased to 3.6 cm in size. Urine tests showed high levels of urinary Metanephrine (MNu: 488 mcg/day, normal value (nv) < 320) and urinary Normetanephrine (NMNu: 65,125 mcg/day, nv < 390), indicating Pheo. 
No alteration in urinary Methoxytyramine levels (MTXu: nv < 460) and blood count were observed (white blood cells 6.28 × 103/µL normal value (nv) 4.00–11.00, red blood cells 5.09 × 106/µL normal value (nv) 3.8–5.00, hemoglobin 14.5 g/dL nv 12.0–16.0, hematocrit 43.4% nv 35.0–48.8, platelets 287 × 103/µL normal value (nv) 150–450).\nThe patient was subjected to left laparoscopic adrenalectomy in June 2014 and the post-operative course was uneventful. Histologic examination revealed Pheo (Figure 2A). There was no evidence of capsular or vascular invasion and necrosis. The mitotic count was lower than 3 per 10 high-power fields (HPFs) and atypical figures were not seen. A Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) of 3 was assigned and the Ki67 proliferation index was <1%. A para-aortic lymph node was positive for the presence of chromaffin tissue (Figure 2B) and metastatic Pheo was diagnosed.\nPost-operative urinary metanephrine levels remained high (MNu 51 mcg/day, NMNu 882 mcg/day, and MTXu 116 mcg/day). Subsequent controls revealed an increase in NMNu up to 1346 mcg/day in October 2015 with a negative CT scan and MRI. The patient underwent 18F-fluoro-dihydroxy-pheylalanine (18F-DOPA) positron emission tomography (PET) and 68Ga-DOTATATE (Figure 3A) that revealed an increased uptake of para-aortic and retrocrural lymph nodes. The disease remained stable with a progressive increase in urinary NMN levels up to 2391 mcg/day. In June 2020, on account of the tumor burden and increasing levels of urinary NMN, peptide receptor radionuclide therapy (PRRT) with 177Lu-DOTATATE was initiated after egg preservation. PRRT was interrupted after four cycles due to a rapid increase in platelets (up to 2348 × 103/µL, nv 150–450). In January 2021,68Ga-DOTATATE showed a reduced tracer uptake (Figure 3B). Further investigations, including bone marrow biopsy and genetic analysis, were carried out. A standard karyotype identified the presence of the Philadelphia chromosome (BCR/Abl), the hallmark of CML, and therapy with Imatinib was prescribed (200 mg bid). The erythropoietin level rose mildly (34.7 mlU/mL, the normal value being 4.3–29). During therapy, red blood cell count and hematocrit and hemoglobin levels remained stable, and the last control resulted 3.70 × 106/μL, 37%, and 12.2 g/dL, respectively. Mean corpuscular volume (MCV), mean cell hemoglobin (MCH), and ferritin were normal (85.3 fl, 28.5 pg and 13 ng/mL, respectively).\n2.2. Genetic Analysis Genomic and tumor DNA of the patient were extracted using the QIAsymphony CDN kit (Qiagen, Hilden, Germany). DNA quality and quantity were measured by Qubit ds HS Assay on Qubit 2.0 Fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA).\nAccording to the guidelines for genetic screening of PPGL, our patient was proposed for NGS targeting. Using the online DesignStudio software (Illumina, San Diego, CA, USA), probes were designed to cover the following genes: PHD2 (EGLN1), EPAS1, FH, KIF1Bβ, MAX, NF1, RET, SDHA, SDHAF2, SDHB, SDHC, SDHD, TMEM127, and VHL, including exon–intron boundaries.\nGenomic and tumor DNA of the patient were extracted using the QIAsymphony CDN kit (Qiagen, Hilden, Germany). DNA quality and quantity were measured by Qubit ds HS Assay on Qubit 2.0 Fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA).\nAccording to the guidelines for genetic screening of PPGL, our patient was proposed for NGS targeting. 
Using the online DesignStudio software (Illumina, San Diego, CA, USA), probes were designed to cover the following genes: PHD2 (EGLN1), EPAS1, FH, KIF1Bβ, MAX, NF1, RET, SDHA, SDHAF2, SDHB, SDHC, SDHD, TMEM127, and VHL, including exon–intron boundaries.\n2.3. SNP-CGH Array To exclude the presence of pathogenic Copy Number Variants (CNVs) and loss of heterozygosity (LOH) in tumor samples, Illumina Infinium 850k Bead Chip CGH/SNP Microarray was performed according to the manufacturer’s instructions.\nTo exclude the presence of pathogenic Copy Number Variants (CNVs) and loss of heterozygosity (LOH) in tumor samples, Illumina Infinium 850k Bead Chip CGH/SNP Microarray was performed according to the manufacturer’s instructions.\n2.4. Whole Exome Sequencing and Bioinformatics Analysis We decided to perform whole exome sequencing (WES) on blood and tumor DNA to sequence all coding genes in order to detect other variants in genes not included in our panel.\nTo construct DNA libraries, we used a strategy based on enzymatic fragmentation to produce dsDNA fragments followed by end repair, A-tailing, adapter ligation and library amplification (Kapa Biosystems, Wilmington, MA, USA). Libraries were hybridized with the protocol SeqCap EZ Exome v3 (Nimblegen, Roche, Basel, Switzerland) and sequenced by NextSeQ550 platform (Illumina Inc., San Diego, CA, USA).\nWe decided to perform whole exome sequencing (WES) on blood and tumor DNA to sequence all coding genes in order to detect other variants in genes not included in our panel.\nTo construct DNA libraries, we used a strategy based on enzymatic fragmentation to produce dsDNA fragments followed by end repair, A-tailing, adapter ligation and library amplification (Kapa Biosystems, Wilmington, MA, USA). Libraries were hybridized with the protocol SeqCap EZ Exome v3 (Nimblegen, Roche, Basel, Switzerland) and sequenced by NextSeQ550 platform (Illumina Inc., San Diego, CA, USA).\n2.5. 3 D Variant Prediction To reveal the possible structural consequences of the identified variant, a 3D model of the truncated PHD2 protein was generated using the Phyre2 (Protein Homology Fold Recognition Engine) server created by the Structural Bioinformatics Group, Imperial College, London. Phyre2 uses the alignment of hidden Markov models via an HH search—an open-source software program for protein sequence searching—to significantly improve alignment accuracy (http://www.sbg.bio.ic.ac.uk/~phyre2/html/page.cgi?id=index, accessed on 13 June 2022) [36]. This was followed by I-TASSER (Iterative Threading ASSEmbly Refinement—https://zhanglab.ccmb.med.umich.edu/I-TASSER/, accessed on 13 August 2022) [37] to evaluate function the predictions and possible interactions of truncated PHD2 protein with other proteins and GRAMM v1.03, a program for protein docking to predict the structure of possible complexes (http://vakser.compbio.ku.edu/resources/gramm/grammx/, accessed on 13 August 2022) [38,39]. The generated *.pdb files were loaded and visualized with ChemDraw software to envisage a 3D structure (version 8; Cambridge Software; PerkinElmer, Inc., Waltham, MA, USA).\nTo reveal the possible structural consequences of the identified variant, a 3D model of the truncated PHD2 protein was generated using the Phyre2 (Protein Homology Fold Recognition Engine) server created by the Structural Bioinformatics Group, Imperial College, London. 
Phyre2 uses the alignment of hidden Markov models via an HH search—an open-source software program for protein sequence searching—to significantly improve alignment accuracy (http://www.sbg.bio.ic.ac.uk/~phyre2/html/page.cgi?id=index, accessed on 13 June 2022) [36]. This was followed by I-TASSER (Iterative Threading ASSEmbly Refinement—https://zhanglab.ccmb.med.umich.edu/I-TASSER/, accessed on 13 August 2022) [37] to evaluate function the predictions and possible interactions of truncated PHD2 protein with other proteins and GRAMM v1.03, a program for protein docking to predict the structure of possible complexes (http://vakser.compbio.ku.edu/resources/gramm/grammx/, accessed on 13 August 2022) [38,39]. The generated *.pdb files were loaded and visualized with ChemDraw software to envisage a 3D structure (version 8; Cambridge Software; PerkinElmer, Inc., Waltham, MA, USA).\n2.6. PHD2 Immunohistochemistry PHD2 expression was assessed by probing formalin-fixed and paraffin-embedded tissue sections with mouse monoclonal antibody anti-PHD2 at 10 µg/mL (Abcam Cat#ab103432, RRID: AB_10710680). This antibody specifically recognizes a synthetic peptide corresponding to amino acids 1–24 of human PHD2 and is therefore capable of recognizing both the wild-type (wt) protein and the mutant/truncated form. Antigen retrieval was achieved using Epitope Retrieval Solution Citrate buffer (10 mM, pH 6; Dako, Glostrup, Denmark) in a thermostatic bath. Immunohistochemical analysis was performed using EnVision FLEX Systems and 3,3′-diaminobenzidine as the chromogen in a Dako Autostainer Link48 Instrument (Dako). Negative controls were incubated without the primary antibody. The sections were lightly counterstained with Mayer’s hematoxylin and mounted with Permount.\nPHD2 expression was assessed by probing formalin-fixed and paraffin-embedded tissue sections with mouse monoclonal antibody anti-PHD2 at 10 µg/mL (Abcam Cat#ab103432, RRID: AB_10710680). This antibody specifically recognizes a synthetic peptide corresponding to amino acids 1–24 of human PHD2 and is therefore capable of recognizing both the wild-type (wt) protein and the mutant/truncated form. Antigen retrieval was achieved using Epitope Retrieval Solution Citrate buffer (10 mM, pH 6; Dako, Glostrup, Denmark) in a thermostatic bath. Immunohistochemical analysis was performed using EnVision FLEX Systems and 3,3′-diaminobenzidine as the chromogen in a Dako Autostainer Link48 Instrument (Dako). Negative controls were incubated without the primary antibody. The sections were lightly counterstained with Mayer’s hematoxylin and mounted with Permount.\n2.7. Western Blot Analysis Dissected tumor tissues or human healthy adrenal samples (100 mg) were chopped in lysis buffer, incubated for 30 min on ice, and centrifuged at 10,000× g for 15 min at 4 °C. Proteins were quantified by Coomassie Blue-reagent (Bio-Rad, Hercules, CA, USA) [40] and 40 μg of proteins was separated by SDS/PAGE then transferred onto PVDF (Immobilon, Millipore, Burlington, MA, USA), as previously described [41]. Bound antibodies detected by ECL reagents (Immobilon, Millipore, Burlington, MA, USA) were analyzed with a Biorad ChemiDoc Imaging System (Bio-Rad, Quantity-One Software). 
HIF2α polyclonal antibody was supplied by Novus Biologicals (Bio-Techne, Minneapolis, MN, USA), while anti-GAPDH monoclonal antibody and anti-rabbit and anti-mouse secondary antibodies conjugated to horseradish peroxidase were from Santa Cruz Biotechnology (Santa Cruz, CA, USA).\nDissected tumor tissues or human healthy adrenal samples (100 mg) were chopped in lysis buffer, incubated for 30 min on ice, and centrifuged at 10,000× g for 15 min at 4 °C. Proteins were quantified by Coomassie Blue-reagent (Bio-Rad, Hercules, CA, USA) [40] and 40 μg of proteins was separated by SDS/PAGE then transferred onto PVDF (Immobilon, Millipore, Burlington, MA, USA), as previously described [41]. Bound antibodies detected by ECL reagents (Immobilon, Millipore, Burlington, MA, USA) were analyzed with a Biorad ChemiDoc Imaging System (Bio-Rad, Quantity-One Software). HIF2α polyclonal antibody was supplied by Novus Biologicals (Bio-Techne, Minneapolis, MN, USA), while anti-GAPDH monoclonal antibody and anti-rabbit and anti-mouse secondary antibodies conjugated to horseradish peroxidase were from Santa Cruz Biotechnology (Santa Cruz, CA, USA).\n2.8. RNA Isolation and Quantitative Real-Time PCR Five different sample tissues obtained from pheochromocytoma (PHEO) and two from paraganglioma (PGL) were lysed for mRNA extraction. mRNA was isolated from frozen tissue using the RNeasy Mini Kit (Qiagen, Hilden, Germany), as previously described [42,43].\nFor each RNA sample, cDNA was obtained by reverse transcription PCR starting from 250 ng of RNA in 50 μL final volume reaction (Taqman RT-PCR kit; Applied Biosystems, Foster City, CA, USA) through the following cycling conditions: 10 min at 25 °C, 30 min at 48 °C, 3 min at 95 °C, and then held at 4 °C. Further quantitative real-time PCR (qRT-PCR) was carried out using primers and probes from Applied Biosystems for the gene transcripts human PHD2 (Hs00254392_m1) and GAPDH (4352934). RT-PCR reactions were performed in triplicate for each gene on an ABI Prism 7900 Sequence Detector (Applied Biosystems). The number of target genes, normalized to the endogenous reference gene (human GAPDH) and relative to a calibrator (Stratagene, San Diego, CA, USA), was calculated by 2−ΔΔCt.\nFive different sample tissues obtained from pheochromocytoma (PHEO) and two from paraganglioma (PGL) were lysed for mRNA extraction. mRNA was isolated from frozen tissue using the RNeasy Mini Kit (Qiagen, Hilden, Germany), as previously described [42,43].\nFor each RNA sample, cDNA was obtained by reverse transcription PCR starting from 250 ng of RNA in 50 μL final volume reaction (Taqman RT-PCR kit; Applied Biosystems, Foster City, CA, USA) through the following cycling conditions: 10 min at 25 °C, 30 min at 48 °C, 3 min at 95 °C, and then held at 4 °C. Further quantitative real-time PCR (qRT-PCR) was carried out using primers and probes from Applied Biosystems for the gene transcripts human PHD2 (Hs00254392_m1) and GAPDH (4352934). RT-PCR reactions were performed in triplicate for each gene on an ABI Prism 7900 Sequence Detector (Applied Biosystems). The number of target genes, normalized to the endogenous reference gene (human GAPDH) and relative to a calibrator (Stratagene, San Diego, CA, USA), was calculated by 2−ΔΔCt.", "The timing of the clinical presentation of the present case is shown in Figure 1A.\nA 17-year-old female was incidentally diagnosed in 2012 with a 2.7 cm left adrenal mass suggestive of adenoma. The patient came to our attention in 2014. 
The lesion had increased to 3.6 cm in size. Urine tests showed high levels of urinary Metanephrine (MNu: 488 mcg/day, normal value (nv) < 320) and urinary Normetanephrine (NMNu: 65,125 mcg/day, nv < 390), indicating Pheo. No alteration in urinary Methoxytyramine levels (MTXu: nv < 460) and blood count were observed (white blood cells 6.28 × 103/µL normal value (nv) 4.00–11.00, red blood cells 5.09 × 106/µL normal value (nv) 3.8–5.00, hemoglobin 14.5 g/dL nv 12.0–16.0, hematocrit 43.4% nv 35.0–48.8, platelets 287 × 103/µL normal value (nv) 150–450).\nThe patient was subjected to left laparoscopic adrenalectomy in June 2014 and the post-operative course was uneventful. Histologic examination revealed Pheo (Figure 2A). There was no evidence of capsular or vascular invasion and necrosis. The mitotic count was lower than 3 per 10 high-power fields (HPFs) and atypical figures were not seen. A Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) of 3 was assigned and the Ki67 proliferation index was <1%. A para-aortic lymph node was positive for the presence of chromaffin tissue (Figure 2B) and metastatic Pheo was diagnosed.\nPost-operative urinary metanephrine levels remained high (MNu 51 mcg/day, NMNu 882 mcg/day, and MTXu 116 mcg/day). Subsequent controls revealed an increase in NMNu up to 1346 mcg/day in October 2015 with a negative CT scan and MRI. The patient underwent 18F-fluoro-dihydroxy-pheylalanine (18F-DOPA) positron emission tomography (PET) and 68Ga-DOTATATE (Figure 3A) that revealed an increased uptake of para-aortic and retrocrural lymph nodes. The disease remained stable with a progressive increase in urinary NMN levels up to 2391 mcg/day. In June 2020, on account of the tumor burden and increasing levels of urinary NMN, peptide receptor radionuclide therapy (PRRT) with 177Lu-DOTATATE was initiated after egg preservation. PRRT was interrupted after four cycles due to a rapid increase in platelets (up to 2348 × 103/µL, nv 150–450). In January 2021,68Ga-DOTATATE showed a reduced tracer uptake (Figure 3B). Further investigations, including bone marrow biopsy and genetic analysis, were carried out. A standard karyotype identified the presence of the Philadelphia chromosome (BCR/Abl), the hallmark of CML, and therapy with Imatinib was prescribed (200 mg bid). The erythropoietin level rose mildly (34.7 mlU/mL, the normal value being 4.3–29). During therapy, red blood cell count and hematocrit and hemoglobin levels remained stable, and the last control resulted 3.70 × 106/μL, 37%, and 12.2 g/dL, respectively. Mean corpuscular volume (MCV), mean cell hemoglobin (MCH), and ferritin were normal (85.3 fl, 28.5 pg and 13 ng/mL, respectively).", "Genomic and tumor DNA of the patient were extracted using the QIAsymphony CDN kit (Qiagen, Hilden, Germany). DNA quality and quantity were measured by Qubit ds HS Assay on Qubit 2.0 Fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA).\nAccording to the guidelines for genetic screening of PPGL, our patient was proposed for NGS targeting. 
Using the online DesignStudio software (Illumina, San Diego, CA, USA), probes were designed to cover the following genes: PHD2 (EGLN1), EPAS1, FH, KIF1Bβ, MAX, NF1, RET, SDHA, SDHAF2, SDHB, SDHC, SDHD, TMEM127, and VHL, including exon–intron boundaries.", "To exclude the presence of pathogenic Copy Number Variants (CNVs) and loss of heterozygosity (LOH) in tumor samples, Illumina Infinium 850k Bead Chip CGH/SNP Microarray was performed according to the manufacturer’s instructions.", "We decided to perform whole exome sequencing (WES) on blood and tumor DNA to sequence all coding genes in order to detect other variants in genes not included in our panel.\nTo construct DNA libraries, we used a strategy based on enzymatic fragmentation to produce dsDNA fragments followed by end repair, A-tailing, adapter ligation and library amplification (Kapa Biosystems, Wilmington, MA, USA). Libraries were hybridized with the protocol SeqCap EZ Exome v3 (Nimblegen, Roche, Basel, Switzerland) and sequenced by NextSeQ550 platform (Illumina Inc., San Diego, CA, USA).", "To reveal the possible structural consequences of the identified variant, a 3D model of the truncated PHD2 protein was generated using the Phyre2 (Protein Homology Fold Recognition Engine) server created by the Structural Bioinformatics Group, Imperial College, London. Phyre2 uses the alignment of hidden Markov models via an HH search—an open-source software program for protein sequence searching—to significantly improve alignment accuracy (http://www.sbg.bio.ic.ac.uk/~phyre2/html/page.cgi?id=index, accessed on 13 June 2022) [36]. This was followed by I-TASSER (Iterative Threading ASSEmbly Refinement—https://zhanglab.ccmb.med.umich.edu/I-TASSER/, accessed on 13 August 2022) [37] to evaluate function the predictions and possible interactions of truncated PHD2 protein with other proteins and GRAMM v1.03, a program for protein docking to predict the structure of possible complexes (http://vakser.compbio.ku.edu/resources/gramm/grammx/, accessed on 13 August 2022) [38,39]. The generated *.pdb files were loaded and visualized with ChemDraw software to envisage a 3D structure (version 8; Cambridge Software; PerkinElmer, Inc., Waltham, MA, USA).", "PHD2 expression was assessed by probing formalin-fixed and paraffin-embedded tissue sections with mouse monoclonal antibody anti-PHD2 at 10 µg/mL (Abcam Cat#ab103432, RRID: AB_10710680). This antibody specifically recognizes a synthetic peptide corresponding to amino acids 1–24 of human PHD2 and is therefore capable of recognizing both the wild-type (wt) protein and the mutant/truncated form. Antigen retrieval was achieved using Epitope Retrieval Solution Citrate buffer (10 mM, pH 6; Dako, Glostrup, Denmark) in a thermostatic bath. Immunohistochemical analysis was performed using EnVision FLEX Systems and 3,3′-diaminobenzidine as the chromogen in a Dako Autostainer Link48 Instrument (Dako). Negative controls were incubated without the primary antibody. The sections were lightly counterstained with Mayer’s hematoxylin and mounted with Permount.", "Dissected tumor tissues or human healthy adrenal samples (100 mg) were chopped in lysis buffer, incubated for 30 min on ice, and centrifuged at 10,000× g for 15 min at 4 °C. Proteins were quantified by Coomassie Blue-reagent (Bio-Rad, Hercules, CA, USA) [40] and 40 μg of proteins was separated by SDS/PAGE then transferred onto PVDF (Immobilon, Millipore, Burlington, MA, USA), as previously described [41]. 
Bound antibodies detected by ECL reagents (Immobilon, Millipore, Burlington, MA, USA) were analyzed with a Biorad ChemiDoc Imaging System (Bio-Rad, Quantity-One Software). HIF2α polyclonal antibody was supplied by Novus Biologicals (Bio-Techne, Minneapolis, MN, USA), while anti-GAPDH monoclonal antibody and anti-rabbit and anti-mouse secondary antibodies conjugated to horseradish peroxidase were from Santa Cruz Biotechnology (Santa Cruz, CA, USA).", "Five different sample tissues obtained from pheochromocytoma (PHEO) and two from paraganglioma (PGL) were lysed for mRNA extraction. mRNA was isolated from frozen tissue using the RNeasy Mini Kit (Qiagen, Hilden, Germany), as previously described [42,43].\nFor each RNA sample, cDNA was obtained by reverse transcription PCR starting from 250 ng of RNA in 50 μL final volume reaction (Taqman RT-PCR kit; Applied Biosystems, Foster City, CA, USA) through the following cycling conditions: 10 min at 25 °C, 30 min at 48 °C, 3 min at 95 °C, and then held at 4 °C. Further quantitative real-time PCR (qRT-PCR) was carried out using primers and probes from Applied Biosystems for the gene transcripts human PHD2 (Hs00254392_m1) and GAPDH (4352934). RT-PCR reactions were performed in triplicate for each gene on an ABI Prism 7900 Sequence Detector (Applied Biosystems). The number of target genes, normalized to the endogenous reference gene (human GAPDH) and relative to a calibrator (Stratagene, San Diego, CA, USA), was calculated by 2−ΔΔCt.", "Analysis of SNP-CGH array reveals the absence of pathogenic CNVs and LOH in the entire genome. In particular, we focused on chromosome 1 to identify or exclude rearrangement or LOH near EGLN1 that, in combination with the nonsense variant in EGLN1, could have some effect on our patient’s phenotype.\nNGS identified a heterozygous c.153G>A (p.W51*) variant in exon 1 of the PHD2 gene (NM_022051.3) which has been defined as pathogenic according to the American College of Medical Genetics (ACMG) guidelines. This variant is not reported in GnomAD, ExAC, or dbSNP NFE (European non-Finnish) databases. PHD2 variant entails a G to A transition of the TGG coding triplet in a stop codon, which encodes for a truncated PHD2 protein (Figure 4).\nIts pathogenicity is presumed because it leads to a premature stop codon and the bioinformatic tool (http://autopvs1.genetics.bgi.com/, accessed on 13 August 2022) predicted mRNA decay with a strong probability [44].\nWES comparative sequence analysis of the tumor and blood DNA confirmed the variant identified by the panel genes, but no other significant variants were detected in the other coding regions.\nComputational analysis highlights the possible involvement of a peptide in two different mechanisms: reduction in propensity of PHD2 to generate dimers and straight interaction with nuclear receptor corepressor 2 (NCoR2). Firstly, four possible amino acid regions have been detected that, interacting with truncated a peptide, could interfere with the formation of a disulfide bond (Figure 5A–D). These Aa regions are located among the DSBH domain (aa 299–390 light blue) which possesses the three cysteine residues (Cys302, Cys323, and Cys326 red) required for oxidative dimerization, through the formation of the disulfide bond and/or induction of a conformational change after oxidation.\nSecondly, a potential interaction of a truncated peptide with a specific Aa sequence “TISNPPPLISSAK” of NCoR2 protein in positions 1101–1013 was identified. 
This NCoR2 sequence extends to the third region of the RII domain (residues 752–1016) and interacts with HDAC3, a Class I member of the histone deacetylase superfamily (Figure 5F).\nGenetic analysis was extended to the patient’s parents with their consent. Genomic DNA was screened for the PHD2 variant using PCR and Sanger sequencing. The patient had inherited the PHD2 variant from her father.", "Compared with non-PHD2-mutated Pheo and PGL, PHD2-mutated tissue showed a significant reduction in gene expression levels of ENGL1 mRNA of 43% and 47%, respectively (Figure 5E).\nNeoplastic cells were negative for PHD2 expression. Strong PHD2 staining of endothelial cells was evident in the lining of large blood vessels as well as in the capillaries. PHD2 expression was restrained to the cytoplasm (Figure 6). In comparison, the tumor expressed higher levels of HIF2α than a healthy adrenal, as shown by Western blot analysis (Figure 7)." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Case Presentation", "2.2. Genetic Analysis", "2.3. SNP-CGH Array", "2.4. Whole Exome Sequencing and Bioinformatics Analysis", "2.5. 3 D Variant Prediction", "2.6. PHD2 Immunohistochemistry", "2.7. Western Blot Analysis", "2.8. RNA Isolation and Quantitative Real-Time PCR", "3. Results", "3.1. Mutational Analysis", "3.2. PHD2 and HIF Expression in the Primary Tumor", "4. Discussion", "5. Conclusions" ]
[ "Pheochromocytoma (Pheo) and paraganglioma (PGL) are rare tumors with an incidence of 2–5 patients per million per year and an uncertain malignant potential [1]. Pheos represent about 80–85% of cases and PGLs represent about 15–20% [2]. The recommended diagnostic workup for suspected Pheo/PGL (PPGL) includes plasma free metanephrines and fractionated urinary metanephrines. Subsequently, computed tomography (CT) scans or magnetic resonance imaging (MRI) are used to locate the lesion [3]. Nuclear medicine exams such as 123I-meta-iodobenzyl-guanidine (MIBG) scintigraphy, 18F-fluoro-L-dihydroxyphenylalanine (18F-DOPA), and 68Ga-DOTATATE PET are predominantly used in patients with metastatic disease [4,5].\nSurgery is the curative option in non-metastatic and metastatic PPGL, and in patients with metastatic disease, tumor resection has been reported to increase overall survival [6].\nMedical therapy with alpha-blockers is necessary before surgery [7]. In patients affected by metastatic disease, ‘wait and see’ is the first option given the generally slow evolution of the disease [8]. Radiometabolic therapies with 131I-MIBG or peptide receptor radionuclide therapy (PPRT) have proved effective [9,10], while cyclophosphamide vincristine dacarbazine (CVD) is the chemotherapy most commonly administered [11]. Local therapy such as external radiation therapy, radiosurgery, radiofrequency, cryoablation, and ethanol injection should be considered for unresectable lesions [12]. Tyrosine kinase inhibitors, e.g., Sunitinib [13], Temozolomide [14], as well as immunotherapy such as Pembrolizumab [15], are novel medical options for metastatic PPGLs.\nUp to 70% of PPGLs are caused by germline or somatic pathogenic variants in one of the known susceptibility genes [16,17]. Based on their transcriptional profile, PPGLs are classified into three clusters. Cluster 1 includes PPGLs with variants in genes encoding the hypoxia-inducible factor (HIF) 2α, the Von Hippel–Lindau tumor suppressor (VHL), the prolyl hydroxylase domain (PHD), fumarate hydratase (FH), and succinate dehydrogenase subunits (SDHx) [18,19]. These tumors are characterized by the activation of pseudohypoxic pathways and by an immature catecholamine-secreting phenotype [20]. Cluster 2 comprises PPGLs with pathogenic variants in the REarranged during Transfection (RET) proto-oncogene, the Neurofibromatosis type 1 (NF1) tumor suppressor gene, the TransMEMbrane protein (TMEM127) gene, the Harvey rat sarcoma viral oncogene homolog (HRAS) and the MYC Associated Factor X (MAX) gene. Cluster 2 PPGLs show activated MAPK and mTOR signaling pathways and are mostly benign exhibiting a mature catecholamine phenotype with a strong expression of phenylethanolamine N-methyltransferase (PNMT) [21,22,23,24]. Cluster 3 is the Wnt signaling cluster. These tumors are due to somatic mutations of the CSDE1 gene or somatic gene fusions of the MAML3 gene [2], but also patients with sporadic forms fall into this cluster. Cluster 3 tumors have a more aggressive behavior [19].\nDue to the large number of known susceptibility genes, next-generation sequencing (NGS) technology is ideally suited for carrying out PPGL genetic screening.\nThe genetic screening proposed by Toledo et al. [25] includes the PHD2 (also called EGLN1) gene (Egl-9 Family Hypoxia Inducible Factor 1). The encoded protein catalyzes the post-translational formation of 4-hydroxyproline in hypoxia-inducible factor (HIF) alpha proteins. 
HIF is a transcriptional complex, playing a central role in mammalian oxygen homeostasis, controlling energy, iron metabolism, erythropoiesis, development under hypoxic or pseudohypoxic conditions and mediating adaptive cell responses to these states. Under normoxic conditions, HIF is controlled by several enzymatic reactions, including prolyl hydroxylation by PHDs, leading to proteasome degradation. However, pseudohypoxia conditions can lead to HIF stabilization and transactivation of target genes [26,27].\nDysregulation of HIF contributes to tumorigenesis and cancer progression [28,29,30,31]. HIF also has a crucial role in the pathogenesis of neuroendocrine tumors, especially PPGLs, regulating the cluster 1 pathway. Furthermore, PHD2 heterozygous variants cause polycythemia, supporting the importance of PHD2 in the control of red cell mass in humans [32,33] that may be associated with PPGL [34,35].\nHere, we report a novel germline PHD2 variant in a patient affected by metastatic Pheo and chronic myeloid leukemia (CML) in the absence of polycythemia.", "Informed consent was obtained from the patient and the manuscript was written following the CARE guidelines.\n2.1. Case Presentation The timing of the clinical presentation of the present case is shown in Figure 1A.\nA 17-year-old female was incidentally diagnosed in 2012 with a 2.7 cm left adrenal mass suggestive of adenoma. The patient came to our attention in 2014. The lesion had increased to 3.6 cm in size. Urine tests showed high levels of urinary Metanephrine (MNu: 488 mcg/day, normal value (nv) < 320) and urinary Normetanephrine (NMNu: 65,125 mcg/day, nv < 390), indicating Pheo. No alteration in urinary Methoxytyramine levels (MTXu: nv < 460) and blood count were observed (white blood cells 6.28 × 103/µL normal value (nv) 4.00–11.00, red blood cells 5.09 × 106/µL normal value (nv) 3.8–5.00, hemoglobin 14.5 g/dL nv 12.0–16.0, hematocrit 43.4% nv 35.0–48.8, platelets 287 × 103/µL normal value (nv) 150–450).\nThe patient was subjected to left laparoscopic adrenalectomy in June 2014 and the post-operative course was uneventful. Histologic examination revealed Pheo (Figure 2A). There was no evidence of capsular or vascular invasion and necrosis. The mitotic count was lower than 3 per 10 high-power fields (HPFs) and atypical figures were not seen. A Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) of 3 was assigned and the Ki67 proliferation index was <1%. A para-aortic lymph node was positive for the presence of chromaffin tissue (Figure 2B) and metastatic Pheo was diagnosed.\nPost-operative urinary metanephrine levels remained high (MNu 51 mcg/day, NMNu 882 mcg/day, and MTXu 116 mcg/day). Subsequent controls revealed an increase in NMNu up to 1346 mcg/day in October 2015 with a negative CT scan and MRI. The patient underwent 18F-fluoro-dihydroxy-pheylalanine (18F-DOPA) positron emission tomography (PET) and 68Ga-DOTATATE (Figure 3A) that revealed an increased uptake of para-aortic and retrocrural lymph nodes. The disease remained stable with a progressive increase in urinary NMN levels up to 2391 mcg/day. In June 2020, on account of the tumor burden and increasing levels of urinary NMN, peptide receptor radionuclide therapy (PRRT) with 177Lu-DOTATATE was initiated after egg preservation. PRRT was interrupted after four cycles due to a rapid increase in platelets (up to 2348 × 103/µL, nv 150–450). In January 2021,68Ga-DOTATATE showed a reduced tracer uptake (Figure 3B). 
Further investigations, including bone marrow biopsy and genetic analysis, were carried out. A standard karyotype identified the presence of the Philadelphia chromosome (BCR/Abl), the hallmark of CML, and therapy with Imatinib was prescribed (200 mg bid). The erythropoietin level rose mildly (34.7 mlU/mL, the normal value being 4.3–29). During therapy, red blood cell count and hematocrit and hemoglobin levels remained stable, and the last control resulted 3.70 × 106/μL, 37%, and 12.2 g/dL, respectively. Mean corpuscular volume (MCV), mean cell hemoglobin (MCH), and ferritin were normal (85.3 fl, 28.5 pg and 13 ng/mL, respectively).\nThe timing of the clinical presentation of the present case is shown in Figure 1A.\nA 17-year-old female was incidentally diagnosed in 2012 with a 2.7 cm left adrenal mass suggestive of adenoma. The patient came to our attention in 2014. The lesion had increased to 3.6 cm in size. Urine tests showed high levels of urinary Metanephrine (MNu: 488 mcg/day, normal value (nv) < 320) and urinary Normetanephrine (NMNu: 65,125 mcg/day, nv < 390), indicating Pheo. No alteration in urinary Methoxytyramine levels (MTXu: nv < 460) and blood count were observed (white blood cells 6.28 × 103/µL normal value (nv) 4.00–11.00, red blood cells 5.09 × 106/µL normal value (nv) 3.8–5.00, hemoglobin 14.5 g/dL nv 12.0–16.0, hematocrit 43.4% nv 35.0–48.8, platelets 287 × 103/µL normal value (nv) 150–450).\nThe patient was subjected to left laparoscopic adrenalectomy in June 2014 and the post-operative course was uneventful. Histologic examination revealed Pheo (Figure 2A). There was no evidence of capsular or vascular invasion and necrosis. The mitotic count was lower than 3 per 10 high-power fields (HPFs) and atypical figures were not seen. A Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) of 3 was assigned and the Ki67 proliferation index was <1%. A para-aortic lymph node was positive for the presence of chromaffin tissue (Figure 2B) and metastatic Pheo was diagnosed.\nPost-operative urinary metanephrine levels remained high (MNu 51 mcg/day, NMNu 882 mcg/day, and MTXu 116 mcg/day). Subsequent controls revealed an increase in NMNu up to 1346 mcg/day in October 2015 with a negative CT scan and MRI. The patient underwent 18F-fluoro-dihydroxy-pheylalanine (18F-DOPA) positron emission tomography (PET) and 68Ga-DOTATATE (Figure 3A) that revealed an increased uptake of para-aortic and retrocrural lymph nodes. The disease remained stable with a progressive increase in urinary NMN levels up to 2391 mcg/day. In June 2020, on account of the tumor burden and increasing levels of urinary NMN, peptide receptor radionuclide therapy (PRRT) with 177Lu-DOTATATE was initiated after egg preservation. PRRT was interrupted after four cycles due to a rapid increase in platelets (up to 2348 × 103/µL, nv 150–450). In January 2021,68Ga-DOTATATE showed a reduced tracer uptake (Figure 3B). Further investigations, including bone marrow biopsy and genetic analysis, were carried out. A standard karyotype identified the presence of the Philadelphia chromosome (BCR/Abl), the hallmark of CML, and therapy with Imatinib was prescribed (200 mg bid). The erythropoietin level rose mildly (34.7 mlU/mL, the normal value being 4.3–29). During therapy, red blood cell count and hematocrit and hemoglobin levels remained stable, and the last control resulted 3.70 × 106/μL, 37%, and 12.2 g/dL, respectively. 
Mean corpuscular volume (MCV), mean cell hemoglobin (MCH), and ferritin were normal (85.3 fl, 28.5 pg and 13 ng/mL, respectively).\n2.2. Genetic Analysis Genomic and tumor DNA of the patient were extracted using the QIAsymphony CDN kit (Qiagen, Hilden, Germany). DNA quality and quantity were measured by Qubit ds HS Assay on Qubit 2.0 Fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA).\nAccording to the guidelines for genetic screening of PPGL, our patient was proposed for NGS targeting. Using the online DesignStudio software (Illumina, San Diego, CA, USA), probes were designed to cover the following genes: PHD2 (EGLN1), EPAS1, FH, KIF1Bβ, MAX, NF1, RET, SDHA, SDHAF2, SDHB, SDHC, SDHD, TMEM127, and VHL, including exon–intron boundaries.\nGenomic and tumor DNA of the patient were extracted using the QIAsymphony CDN kit (Qiagen, Hilden, Germany). DNA quality and quantity were measured by Qubit ds HS Assay on Qubit 2.0 Fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA).\nAccording to the guidelines for genetic screening of PPGL, our patient was proposed for NGS targeting. Using the online DesignStudio software (Illumina, San Diego, CA, USA), probes were designed to cover the following genes: PHD2 (EGLN1), EPAS1, FH, KIF1Bβ, MAX, NF1, RET, SDHA, SDHAF2, SDHB, SDHC, SDHD, TMEM127, and VHL, including exon–intron boundaries.\n2.3. SNP-CGH Array To exclude the presence of pathogenic Copy Number Variants (CNVs) and loss of heterozygosity (LOH) in tumor samples, Illumina Infinium 850k Bead Chip CGH/SNP Microarray was performed according to the manufacturer’s instructions.\nTo exclude the presence of pathogenic Copy Number Variants (CNVs) and loss of heterozygosity (LOH) in tumor samples, Illumina Infinium 850k Bead Chip CGH/SNP Microarray was performed according to the manufacturer’s instructions.\n2.4. Whole Exome Sequencing and Bioinformatics Analysis We decided to perform whole exome sequencing (WES) on blood and tumor DNA to sequence all coding genes in order to detect other variants in genes not included in our panel.\nTo construct DNA libraries, we used a strategy based on enzymatic fragmentation to produce dsDNA fragments followed by end repair, A-tailing, adapter ligation and library amplification (Kapa Biosystems, Wilmington, MA, USA). Libraries were hybridized with the protocol SeqCap EZ Exome v3 (Nimblegen, Roche, Basel, Switzerland) and sequenced by NextSeQ550 platform (Illumina Inc., San Diego, CA, USA).\nWe decided to perform whole exome sequencing (WES) on blood and tumor DNA to sequence all coding genes in order to detect other variants in genes not included in our panel.\nTo construct DNA libraries, we used a strategy based on enzymatic fragmentation to produce dsDNA fragments followed by end repair, A-tailing, adapter ligation and library amplification (Kapa Biosystems, Wilmington, MA, USA). Libraries were hybridized with the protocol SeqCap EZ Exome v3 (Nimblegen, Roche, Basel, Switzerland) and sequenced by NextSeQ550 platform (Illumina Inc., San Diego, CA, USA).\n2.5. 3 D Variant Prediction To reveal the possible structural consequences of the identified variant, a 3D model of the truncated PHD2 protein was generated using the Phyre2 (Protein Homology Fold Recognition Engine) server created by the Structural Bioinformatics Group, Imperial College, London. 
Phyre2 uses the alignment of hidden Markov models via an HH search—an open-source software program for protein sequence searching—to significantly improve alignment accuracy (http://www.sbg.bio.ic.ac.uk/~phyre2/html/page.cgi?id=index, accessed on 13 June 2022) [36]. This was followed by I-TASSER (Iterative Threading ASSEmbly Refinement—https://zhanglab.ccmb.med.umich.edu/I-TASSER/, accessed on 13 August 2022) [37] to evaluate function the predictions and possible interactions of truncated PHD2 protein with other proteins and GRAMM v1.03, a program for protein docking to predict the structure of possible complexes (http://vakser.compbio.ku.edu/resources/gramm/grammx/, accessed on 13 August 2022) [38,39]. The generated *.pdb files were loaded and visualized with ChemDraw software to envisage a 3D structure (version 8; Cambridge Software; PerkinElmer, Inc., Waltham, MA, USA).\nTo reveal the possible structural consequences of the identified variant, a 3D model of the truncated PHD2 protein was generated using the Phyre2 (Protein Homology Fold Recognition Engine) server created by the Structural Bioinformatics Group, Imperial College, London. Phyre2 uses the alignment of hidden Markov models via an HH search—an open-source software program for protein sequence searching—to significantly improve alignment accuracy (http://www.sbg.bio.ic.ac.uk/~phyre2/html/page.cgi?id=index, accessed on 13 June 2022) [36]. This was followed by I-TASSER (Iterative Threading ASSEmbly Refinement—https://zhanglab.ccmb.med.umich.edu/I-TASSER/, accessed on 13 August 2022) [37] to evaluate function the predictions and possible interactions of truncated PHD2 protein with other proteins and GRAMM v1.03, a program for protein docking to predict the structure of possible complexes (http://vakser.compbio.ku.edu/resources/gramm/grammx/, accessed on 13 August 2022) [38,39]. The generated *.pdb files were loaded and visualized with ChemDraw software to envisage a 3D structure (version 8; Cambridge Software; PerkinElmer, Inc., Waltham, MA, USA).\n2.6. PHD2 Immunohistochemistry PHD2 expression was assessed by probing formalin-fixed and paraffin-embedded tissue sections with mouse monoclonal antibody anti-PHD2 at 10 µg/mL (Abcam Cat#ab103432, RRID: AB_10710680). This antibody specifically recognizes a synthetic peptide corresponding to amino acids 1–24 of human PHD2 and is therefore capable of recognizing both the wild-type (wt) protein and the mutant/truncated form. Antigen retrieval was achieved using Epitope Retrieval Solution Citrate buffer (10 mM, pH 6; Dako, Glostrup, Denmark) in a thermostatic bath. Immunohistochemical analysis was performed using EnVision FLEX Systems and 3,3′-diaminobenzidine as the chromogen in a Dako Autostainer Link48 Instrument (Dako). Negative controls were incubated without the primary antibody. The sections were lightly counterstained with Mayer’s hematoxylin and mounted with Permount.\nPHD2 expression was assessed by probing formalin-fixed and paraffin-embedded tissue sections with mouse monoclonal antibody anti-PHD2 at 10 µg/mL (Abcam Cat#ab103432, RRID: AB_10710680). This antibody specifically recognizes a synthetic peptide corresponding to amino acids 1–24 of human PHD2 and is therefore capable of recognizing both the wild-type (wt) protein and the mutant/truncated form. Antigen retrieval was achieved using Epitope Retrieval Solution Citrate buffer (10 mM, pH 6; Dako, Glostrup, Denmark) in a thermostatic bath. 
Immunohistochemical analysis was performed using EnVision FLEX Systems and 3,3′-diaminobenzidine as the chromogen in a Dako Autostainer Link48 Instrument (Dako). Negative controls were incubated without the primary antibody. The sections were lightly counterstained with Mayer’s hematoxylin and mounted with Permount.\n2.7. Western Blot Analysis Dissected tumor tissues or human healthy adrenal samples (100 mg) were chopped in lysis buffer, incubated for 30 min on ice, and centrifuged at 10,000× g for 15 min at 4 °C. Proteins were quantified by Coomassie Blue-reagent (Bio-Rad, Hercules, CA, USA) [40] and 40 μg of proteins was separated by SDS/PAGE then transferred onto PVDF (Immobilon, Millipore, Burlington, MA, USA), as previously described [41]. Bound antibodies detected by ECL reagents (Immobilon, Millipore, Burlington, MA, USA) were analyzed with a Biorad ChemiDoc Imaging System (Bio-Rad, Quantity-One Software). HIF2α polyclonal antibody was supplied by Novus Biologicals (Bio-Techne, Minneapolis, MN, USA), while anti-GAPDH monoclonal antibody and anti-rabbit and anti-mouse secondary antibodies conjugated to horseradish peroxidase were from Santa Cruz Biotechnology (Santa Cruz, CA, USA).\nDissected tumor tissues or human healthy adrenal samples (100 mg) were chopped in lysis buffer, incubated for 30 min on ice, and centrifuged at 10,000× g for 15 min at 4 °C. Proteins were quantified by Coomassie Blue-reagent (Bio-Rad, Hercules, CA, USA) [40] and 40 μg of proteins was separated by SDS/PAGE then transferred onto PVDF (Immobilon, Millipore, Burlington, MA, USA), as previously described [41]. Bound antibodies detected by ECL reagents (Immobilon, Millipore, Burlington, MA, USA) were analyzed with a Biorad ChemiDoc Imaging System (Bio-Rad, Quantity-One Software). HIF2α polyclonal antibody was supplied by Novus Biologicals (Bio-Techne, Minneapolis, MN, USA), while anti-GAPDH monoclonal antibody and anti-rabbit and anti-mouse secondary antibodies conjugated to horseradish peroxidase were from Santa Cruz Biotechnology (Santa Cruz, CA, USA).\n2.8. RNA Isolation and Quantitative Real-Time PCR Five different sample tissues obtained from pheochromocytoma (PHEO) and two from paraganglioma (PGL) were lysed for mRNA extraction. mRNA was isolated from frozen tissue using the RNeasy Mini Kit (Qiagen, Hilden, Germany), as previously described [42,43].\nFor each RNA sample, cDNA was obtained by reverse transcription PCR starting from 250 ng of RNA in 50 μL final volume reaction (Taqman RT-PCR kit; Applied Biosystems, Foster City, CA, USA) through the following cycling conditions: 10 min at 25 °C, 30 min at 48 °C, 3 min at 95 °C, and then held at 4 °C. Further quantitative real-time PCR (qRT-PCR) was carried out using primers and probes from Applied Biosystems for the gene transcripts human PHD2 (Hs00254392_m1) and GAPDH (4352934). RT-PCR reactions were performed in triplicate for each gene on an ABI Prism 7900 Sequence Detector (Applied Biosystems). The number of target genes, normalized to the endogenous reference gene (human GAPDH) and relative to a calibrator (Stratagene, San Diego, CA, USA), was calculated by 2−ΔΔCt.\nFive different sample tissues obtained from pheochromocytoma (PHEO) and two from paraganglioma (PGL) were lysed for mRNA extraction. 
mRNA was isolated from frozen tissue using the RNeasy Mini Kit (Qiagen, Hilden, Germany), as previously described [42,43].\nFor each RNA sample, cDNA was obtained by reverse transcription PCR starting from 250 ng of RNA in 50 μL final volume reaction (Taqman RT-PCR kit; Applied Biosystems, Foster City, CA, USA) through the following cycling conditions: 10 min at 25 °C, 30 min at 48 °C, 3 min at 95 °C, and then held at 4 °C. Further quantitative real-time PCR (qRT-PCR) was carried out using primers and probes from Applied Biosystems for the gene transcripts human PHD2 (Hs00254392_m1) and GAPDH (4352934). RT-PCR reactions were performed in triplicate for each gene on an ABI Prism 7900 Sequence Detector (Applied Biosystems). The number of target genes, normalized to the endogenous reference gene (human GAPDH) and relative to a calibrator (Stratagene, San Diego, CA, USA), was calculated by 2−ΔΔCt.", "The timing of the clinical presentation of the present case is shown in Figure 1A.\nA 17-year-old female was incidentally diagnosed in 2012 with a 2.7 cm left adrenal mass suggestive of adenoma. The patient came to our attention in 2014. The lesion had increased to 3.6 cm in size. Urine tests showed high levels of urinary Metanephrine (MNu: 488 mcg/day, normal value (nv) < 320) and urinary Normetanephrine (NMNu: 65,125 mcg/day, nv < 390), indicating Pheo. No alteration in urinary Methoxytyramine levels (MTXu: nv < 460) and blood count were observed (white blood cells 6.28 × 103/µL normal value (nv) 4.00–11.00, red blood cells 5.09 × 106/µL normal value (nv) 3.8–5.00, hemoglobin 14.5 g/dL nv 12.0–16.0, hematocrit 43.4% nv 35.0–48.8, platelets 287 × 103/µL normal value (nv) 150–450).\nThe patient was subjected to left laparoscopic adrenalectomy in June 2014 and the post-operative course was uneventful. Histologic examination revealed Pheo (Figure 2A). There was no evidence of capsular or vascular invasion and necrosis. The mitotic count was lower than 3 per 10 high-power fields (HPFs) and atypical figures were not seen. A Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) of 3 was assigned and the Ki67 proliferation index was <1%. A para-aortic lymph node was positive for the presence of chromaffin tissue (Figure 2B) and metastatic Pheo was diagnosed.\nPost-operative urinary metanephrine levels remained high (MNu 51 mcg/day, NMNu 882 mcg/day, and MTXu 116 mcg/day). Subsequent controls revealed an increase in NMNu up to 1346 mcg/day in October 2015 with a negative CT scan and MRI. The patient underwent 18F-fluoro-dihydroxy-pheylalanine (18F-DOPA) positron emission tomography (PET) and 68Ga-DOTATATE (Figure 3A) that revealed an increased uptake of para-aortic and retrocrural lymph nodes. The disease remained stable with a progressive increase in urinary NMN levels up to 2391 mcg/day. In June 2020, on account of the tumor burden and increasing levels of urinary NMN, peptide receptor radionuclide therapy (PRRT) with 177Lu-DOTATATE was initiated after egg preservation. PRRT was interrupted after four cycles due to a rapid increase in platelets (up to 2348 × 103/µL, nv 150–450). In January 2021,68Ga-DOTATATE showed a reduced tracer uptake (Figure 3B). Further investigations, including bone marrow biopsy and genetic analysis, were carried out. A standard karyotype identified the presence of the Philadelphia chromosome (BCR/Abl), the hallmark of CML, and therapy with Imatinib was prescribed (200 mg bid). The erythropoietin level rose mildly (34.7 mlU/mL, the normal value being 4.3–29). 
During therapy, red blood cell count and hematocrit and hemoglobin levels remained stable, and the last control resulted 3.70 × 106/μL, 37%, and 12.2 g/dL, respectively. Mean corpuscular volume (MCV), mean cell hemoglobin (MCH), and ferritin were normal (85.3 fl, 28.5 pg and 13 ng/mL, respectively).", "Genomic and tumor DNA of the patient were extracted using the QIAsymphony CDN kit (Qiagen, Hilden, Germany). DNA quality and quantity were measured by Qubit ds HS Assay on Qubit 2.0 Fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA).\nAccording to the guidelines for genetic screening of PPGL, our patient was proposed for NGS targeting. Using the online DesignStudio software (Illumina, San Diego, CA, USA), probes were designed to cover the following genes: PHD2 (EGLN1), EPAS1, FH, KIF1Bβ, MAX, NF1, RET, SDHA, SDHAF2, SDHB, SDHC, SDHD, TMEM127, and VHL, including exon–intron boundaries.", "To exclude the presence of pathogenic Copy Number Variants (CNVs) and loss of heterozygosity (LOH) in tumor samples, Illumina Infinium 850k Bead Chip CGH/SNP Microarray was performed according to the manufacturer’s instructions.", "We decided to perform whole exome sequencing (WES) on blood and tumor DNA to sequence all coding genes in order to detect other variants in genes not included in our panel.\nTo construct DNA libraries, we used a strategy based on enzymatic fragmentation to produce dsDNA fragments followed by end repair, A-tailing, adapter ligation and library amplification (Kapa Biosystems, Wilmington, MA, USA). Libraries were hybridized with the protocol SeqCap EZ Exome v3 (Nimblegen, Roche, Basel, Switzerland) and sequenced by NextSeQ550 platform (Illumina Inc., San Diego, CA, USA).", "To reveal the possible structural consequences of the identified variant, a 3D model of the truncated PHD2 protein was generated using the Phyre2 (Protein Homology Fold Recognition Engine) server created by the Structural Bioinformatics Group, Imperial College, London. Phyre2 uses the alignment of hidden Markov models via an HH search—an open-source software program for protein sequence searching—to significantly improve alignment accuracy (http://www.sbg.bio.ic.ac.uk/~phyre2/html/page.cgi?id=index, accessed on 13 June 2022) [36]. This was followed by I-TASSER (Iterative Threading ASSEmbly Refinement—https://zhanglab.ccmb.med.umich.edu/I-TASSER/, accessed on 13 August 2022) [37] to evaluate function the predictions and possible interactions of truncated PHD2 protein with other proteins and GRAMM v1.03, a program for protein docking to predict the structure of possible complexes (http://vakser.compbio.ku.edu/resources/gramm/grammx/, accessed on 13 August 2022) [38,39]. The generated *.pdb files were loaded and visualized with ChemDraw software to envisage a 3D structure (version 8; Cambridge Software; PerkinElmer, Inc., Waltham, MA, USA).", "PHD2 expression was assessed by probing formalin-fixed and paraffin-embedded tissue sections with mouse monoclonal antibody anti-PHD2 at 10 µg/mL (Abcam Cat#ab103432, RRID: AB_10710680). This antibody specifically recognizes a synthetic peptide corresponding to amino acids 1–24 of human PHD2 and is therefore capable of recognizing both the wild-type (wt) protein and the mutant/truncated form. Antigen retrieval was achieved using Epitope Retrieval Solution Citrate buffer (10 mM, pH 6; Dako, Glostrup, Denmark) in a thermostatic bath. 
2.6. PHD2 Immunohistochemistry
PHD2 expression was assessed by probing formalin-fixed, paraffin-embedded tissue sections with a mouse monoclonal anti-PHD2 antibody at 10 µg/mL (Abcam Cat#ab103432, RRID: AB_10710680). This antibody recognizes a synthetic peptide corresponding to amino acids 1–24 of human PHD2 and therefore detects both the wild-type (wt) protein and the mutant/truncated form. Antigen retrieval was achieved with Epitope Retrieval Solution Citrate buffer (10 mM, pH 6; Dako, Glostrup, Denmark) in a thermostatic bath. Immunohistochemical analysis was performed using EnVision FLEX Systems and 3,3′-diaminobenzidine as the chromogen in a Dako Autostainer Link48 instrument (Dako). Negative controls were incubated without the primary antibody. Sections were lightly counterstained with Mayer's hematoxylin and mounted with Permount.

2.7. Western Blot Analysis
Dissected tumor tissues or healthy human adrenal samples (100 mg) were chopped in lysis buffer, incubated for 30 min on ice, and centrifuged at 10,000× g for 15 min at 4 °C. Proteins were quantified with Coomassie Blue reagent (Bio-Rad, Hercules, CA, USA) [40], and 40 μg of protein was separated by SDS/PAGE and transferred onto PVDF membranes (Immobilon, Millipore, Burlington, MA, USA), as previously described [41]. Bound antibodies were detected with ECL reagents (Immobilon, Millipore, Burlington, MA, USA) and analyzed on a Bio-Rad ChemiDoc Imaging System (Bio-Rad, Quantity One software). The HIF2α polyclonal antibody was supplied by Novus Biologicals (Bio-Techne, Minneapolis, MN, USA), while the anti-GAPDH monoclonal antibody and the horseradish peroxidase-conjugated anti-rabbit and anti-mouse secondary antibodies were from Santa Cruz Biotechnology (Santa Cruz, CA, USA).

3. Results
The flow chart illustrating the filtering process and the variant selection used to identify the pathogenic variants is shown in Figure 1B.

3.1. Mutational Analysis
Analysis of the SNP-CGH array revealed the absence of pathogenic CNVs and LOH across the entire genome. In particular, we focused on chromosome 1 to identify or exclude rearrangements or LOH near EGLN1 that, in combination with the nonsense variant in EGLN1, could have contributed to our patient's phenotype.
NGS identified a heterozygous c.153G>A (p.W51*) variant in exon 1 of the PHD2 gene (NM_022051.3), classified as pathogenic according to the American College of Medical Genetics (ACMG) guidelines. This variant is not reported in the GnomAD, ExAC, or dbSNP NFE (European non-Finnish) databases.
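As a quick sanity check on the nomenclature (our illustration, not an analysis from the study): position c.153 is the third base of codon 51 (153 / 3 = 51), and the G>A change turns the tryptophan codon TGG into the stop codon TGA, consistent with p.W51*.

```python
# Sanity check of the c.153G>A (p.W51*) nomenclature; our illustration,
# not part of the study's pipeline. Position 153 is the 3rd base of
# codon 51, and TGG (Trp) -> TGA (stop).

def apply_snv(cds: str, pos: int, alt: str) -> str:
    """Apply a single-nucleotide variant at 1-based CDS position `pos`."""
    return cds[:pos - 1] + alt + cds[pos:]

codon_index = (153 - 1) // 3          # 0-based codon index -> 50
print(codon_index + 1)                 # 51, matching p.W51*

# Hypothetical CDS fragment in which codon 51 is TGG (Trp):
cds = "N" * 150 + "TGG"               # placeholder sequence up to codon 51
mutated = apply_snv(cds, 153, "A")
print(cds[150:153], "->", mutated[150:153])   # TGG -> TGA (stop)
```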
The PHD2 variant entails a G-to-A transition within the TGG coding triplet, creating a stop codon and thus encoding a truncated PHD2 protein (Figure 4). Its pathogenicity is presumed because it leads to a premature stop codon, and the AutoPVS1 bioinformatic tool (http://autopvs1.genetics.bgi.com/, accessed on 13 August 2022) predicted mRNA decay with a strong probability [44].
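This line of reasoning (ACMG PVS1: a predicted null variant in a loss-of-function-intolerant gene, with predicted nonsense-mediated decay) can be illustrated with a toy check. The simplification below is ours and is not the classification pipeline actually used in the study.

```python
# Toy illustration of the ACMG PVS1 reasoning applied to a nonsense
# variant. This simplification is ours, not the study's pipeline.

def pvs1_applies(consequence: str, gene_lof_intolerant: bool,
                 nmd_predicted: bool) -> bool:
    """Very-strong pathogenicity evidence (PVS1) is typically invoked for
    null variants (nonsense, frameshift, canonical splice) in genes where
    loss of function is a known disease mechanism."""
    null_consequences = {"nonsense", "frameshift", "canonical_splice"}
    return (consequence in null_consequences
            and gene_lof_intolerant
            and nmd_predicted)

# c.153G>A (p.W51*) in PHD2/EGLN1: nonsense, LoF-intolerant gene,
# NMD predicted by AutoPVS1.
print(pvs1_applies("nonsense", gene_lof_intolerant=True, nmd_predicted=True))
```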
WES comparative sequence analysis of the tumor and blood DNA confirmed the variant identified by the gene panel, but no other significant variants were detected in the remaining coding regions.
Computational analysis highlighted the possible involvement of the truncated peptide in two different mechanisms: a reduction in the propensity of PHD2 to form dimers, and a direct interaction with nuclear receptor corepressor 2 (NCoR2). Firstly, four amino acid regions were detected that, by interacting with the truncated peptide, could interfere with the formation of a disulfide bond (Figure 5A–D). These regions lie within the DSBH domain (aa 299–390, light blue), which harbors the three cysteine residues (Cys302, Cys323, and Cys326, red) required for oxidative dimerization through formation of the disulfide bond and/or induction of a conformational change after oxidation.
Secondly, a potential interaction of the truncated peptide with a specific amino acid sequence, "TISNPPPLISSAK", of the NCoR2 protein at positions 1001–1013 was identified. This NCoR2 sequence extends into the third region of the RII domain (residues 752–1016) and interacts with HDAC3, a Class I member of the histone deacetylase superfamily (Figure 5F).
Genetic analysis was extended to the patient's parents with their consent. Genomic DNA was screened for the PHD2 variant by PCR and Sanger sequencing. The patient had inherited the PHD2 variant from her father.

3.2. PHD2 and HIF Expression in the Primary Tumor
Compared with non-PHD2-mutated Pheo and PGL, the PHD2-mutated tissue showed a significant reduction in EGLN1 mRNA expression of 43% and 47%, respectively (Figure 5E).
Neoplastic cells were negative for PHD2 expression. Strong PHD2 staining of endothelial cells was evident in the lining of large blood vessels as well as in the capillaries, and PHD2 expression was restricted to the cytoplasm (Figure 6). In addition, the tumor expressed higher levels of HIF2α than a healthy adrenal, as shown by Western blot analysis (Figure 7).
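To relate these percentages back to the qRT-PCR readout (our arithmetic, not a calculation reported in the paper): reductions of 43% and 47% correspond to relative expression values of about 0.57 and 0.53, i.e., ΔΔCt values of roughly 0.8–0.9.

```python
# Relating percent reduction to the 2^-ΔΔCt readout (our arithmetic,
# not calculations reported in the study).
import math

for reduction in (0.43, 0.47):
    rel = 1.0 - reduction     # relative expression vs. non-mutated controls
    ddct = -math.log2(rel)    # ΔΔCt implied by that expression ratio
    print(f"{reduction:.0%} reduction -> 2^-ΔΔCt = {rel:.2f}, ΔΔCt ≈ {ddct:.2f}")
```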
4. Discussion
Here, we describe a novel nonsense PHD2 germline variant in a patient affected by metastatic Pheo and CML with no evidence of polycythemia.
The first case of a PHD2 missense variant, observed in a 30-year-old patient with polycythemia and abdominal PGL, was reported in 2008 [34]. In 2014, a novel PHD2 germline missense variant was detected in a patient affected by Pheo [45]. More recently, another novel PHD2 germline missense variant was found in a patient with PPGL and polycythemia [35]. None of these patients presented metastatic disease.
In contrast, nonsense variants in the PHD2 gene have been described exclusively in patients over 35 years of age suffering from polycythemia without chromaffin disease [46,47]. We cannot exclude that, in our case, the absence of polycythemia is explained by the younger age of the patient.
The PHD2 variants so far described in association with polycythemia are all heterozygous [48], suggesting that a partial loss of PHD2 activity is sufficient to induce polycythemia. As only a few PHD2 variants have been reported, including those associated with tumors other than Pheo, a larger number of patients is needed to understand the exact consequences of heterozygous variants.
The PHD2 gene encodes prolyl hydroxylase domain-containing protein 2 (PHD2), which catalyzes the post-translational modification of the hypoxia-inducible transcription factors that play an essential role in oxygen homeostasis. Prolyl hydroxylation is a basic regulatory event that targets HIF subunits for proteasomal degradation via the Von Hippel–Lindau ubiquitylation complex [47]. At physiological oxygen levels (normoxia), PHD hydroxylates proline residues on HIF-α subunits, destabilizing them by promoting ubiquitination via the Von Hippel–Lindau (VHL) ubiquitin ligase and subsequent proteasomal degradation. In hypoxia, the O2-dependent hydroxylation of HIF-α subunits by PHD is reduced, resulting in HIF-α accumulation, dimerization with HIF-β, and migration into the nucleus to induce an adaptive transcriptional response. Variants in the PHD2 gene thus determine a pseudohypoxic status.
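The normoxia/hypoxia/pseudohypoxia logic just described can be summarized in a toy decision function (our simplification of the biology, not code from the study):

```python
# Toy model of the HIF-α fate described above; our simplification of the
# biology, not code from the study.

def hif_alpha_fate(oxygen_normal: bool, phd2_functional: bool) -> str:
    if oxygen_normal and phd2_functional:
        # PHD2 hydroxylates HIF-α -> VHL-mediated ubiquitination -> degraded.
        return "hydroxylated -> VHL ubiquitination -> proteasomal degradation"
    # True hypoxia (low O2) and pseudohypoxia (PHD2 loss) both stabilize HIF-α.
    return "stabilized -> dimerizes with HIF-beta -> nuclear transcription"

print(hif_alpha_fate(oxygen_normal=True, phd2_functional=True))    # normoxia
print(hif_alpha_fate(oxygen_normal=False, phd2_functional=True))   # hypoxia
print(hif_alpha_fate(oxygen_normal=True, phd2_functional=False))   # pseudohypoxia
```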
Indeed, mutant PHD2 is unable to hydroxylate HIF-α, and the consequent stabilization of HIF-α subunits is recognized as contributing to the pathogenesis of hereditary PPGLs [48].
As expected, our patient's tumor showed lower expression of PHD2 and higher levels of HIF2α than healthy adrenal tissue, confirming that PHD2 down-regulation results in HIF2α stabilization [49].
In our patient, we identified a new variant (c.153G>A) in the PHD2 gene that introduces a stop codon, resulting in the production of a truncated protein, which could explain the unexpected clinical picture. This variant is not reported in any SNV database. The gene constraint metric (gnomAD) indicates a strong probability of the gene being LoF intolerant (o/e score: 0.06); the variant was therefore considered pathogenic. In addition, we did not identify any exonic or splicing variants in the other coding regions.
We also evaluated CNVs/LOH by SNP-CGH array, which excluded possible genomic rearrangements and reinforced literature data showing that haploinsufficiency and partial deregulation of PHD2 are sufficient to cause polycythemia [49].
Computational analysis indicated two possible effects of the truncated peptide. Firstly, an interaction between the truncated peptide and the wild-type PHD2 protein was hypothesized: the truncated protein reduces the propensity of PHD2 to dimerize by blocking establishment of the disulfide bond between DSBH domains, thereby hindering the HIF-1α activation that increases glucose flux and lactate production (the Warburg effect) under oxidative stress. This interference could have a protective effect, decreasing tumor growth, as seen in our patient. Secondly, interaction with, and inhibition of, the transcriptional regulation exerted by NCoR2 was considered, which would lead to the activation of HDAC3 and the repression of nuclear receptors such as the thyroid hormone receptor and the retinoic acid receptor. HDAC3 is recruited by enhancers to modulate both the epigenome and nearby gene expression, and it is the only endogenous histone deacetylase with a unique role in modulating the transcriptional activities of nuclear receptors.
Moreover, the heterozygous nonsense variant in the PHD2 gene, together with the residual mRNA produced by the wild-type allele, may act in a dominant-negative fashion to lower protein activity, possibly leading to a non-canonical phenotype with metastatic Pheo but no clinical signs of polycythemia.
On genetic analysis, the patient's father was positive for the known PHD2 variant; his CT scan and metanephrine assays proved negative. To date, given the rarity of PHD2 variants, the penetrance of this variant is unknown. An incomplete penetrance could be assumed, comparable to that of SDHB germline variants (9–75%) [50].
Indeed, in the previously reported cases it was not possible to study the transmission of the disease [34,35,45]. On the other hand, autosomal dominant inheritance has been reported in patients with PHD2 variants and polycythemia [51].
In summary, the role of the heterozygous PHD2 state as the driver of an unusual phenotype could depend on the relative abundance of PHD2 in a specific tissue and on its interplay with the other PHD isoforms, as well as on other genetic and epigenetic mechanisms. We can exclude neither the presence of other variants in non-coding or regulatory regions nor epigenetic alterations that, together with the nonsense variant in the PHD2 gene, may contribute to the patient's phenotype.
In agreement with the literature [52], we demonstrated that the PHD2 variant led to HIF stabilization and its consequent activation, reflected in a significant increase in HIF2α expression in the mutated tumor. Focusing on HIF activation, we suggest a possible correlation between HIF upregulation and the response to CML therapy. Considering the presence of the Philadelphia chromosome, we have not assumed a relationship between PRRT and CML; there are no data in the literature on such a correlation.
In CML cell populations, HIF-responsive genes are upregulated by BCR/Abl [53]. Under low oxygen conditions, HIF supports the maintenance of stem cell potential, promoting the expansion of the mutated progenitor and increased production of the BCR/Abl protein [54]. This mechanism could be preserved under pseudohypoxic conditions. In CML murine models, HIF-1α genetic knockout prevents CML development by impairing cell cycle progression and inducing apoptosis in leukemia stem cells (LSCs) [55]. HIF-1α is therefore a critical factor in CML. LSCs selected under low oxygen tension are tyrosine kinase inhibitor (TKI)-insensitive [56], and HIF-1α-dependent signaling is relevant to LSC maintenance in CML [57]. This establishes HIF targeting as a possible strategy for treating CML in patients who are insensitive to Imatinib and other TKIs [58]. Our patient was given Imatinib only recently, and future studies are needed to verify the response to TKIs in PHD2-mutated patients. The characterization of unusual clinical pictures through NGS DNA analysis can significantly increase the diagnostic rate and improve patient management regardless of specific clinical signs.

5. Conclusions
We report a novel PHD2 nonsense germline variant in a patient affected by metastatic Pheo and CML in the absence of polycythemia. Our findings confirm the need to screen patients affected by chromaffin disease using a comprehensive NGS panel, with no limitation regarding phenotype. In particular, we suggest extending PHD2 gene analysis to younger non-polycythemic patients.
[ "intro", null, null, null, null, null, null, null, null, null, "results", null, null, "discussion", "conclusions" ]
[ "germline variants", "PHD2 gene", "metastatic pheochromocytoma", "radiometabolic therapy", "PPRT", "chronic myeloid leukemia" ]
1. Introduction: Pheochromocytoma (Pheo) and paraganglioma (PGL) are rare tumors with an incidence of 2–5 patients per million per year and an uncertain malignant potential [1]. Pheos represent about 80–85% of cases and PGLs represent about 15–20% [2]. The recommended diagnostic workup for suspected Pheo/PGL (PPGL) includes plasma free metanephrines and fractionated urinary metanephrines. Subsequently, computed tomography (CT) scans or magnetic resonance imaging (MRI) are used to locate the lesion [3]. Nuclear medicine exams such as 123I-meta-iodobenzyl-guanidine (MIBG) scintigraphy, 18F-fluoro-L-dihydroxyphenylalanine (18F-DOPA), and 68Ga-DOTATATE PET are predominantly used in patients with metastatic disease [4,5]. Surgery is the curative option in non-metastatic and metastatic PPGL, and in patients with metastatic disease, tumor resection has been reported to increase overall survival [6]. Medical therapy with alpha-blockers is necessary before surgery [7]. In patients affected by metastatic disease, ‘wait and see’ is the first option given the generally slow evolution of the disease [8]. Radiometabolic therapies with 131I-MIBG or peptide receptor radionuclide therapy (PPRT) have proved effective [9,10], while cyclophosphamide vincristine dacarbazine (CVD) is the chemotherapy most commonly administered [11]. Local therapy such as external radiation therapy, radiosurgery, radiofrequency, cryoablation, and ethanol injection should be considered for unresectable lesions [12]. Tyrosine kinase inhibitors, e.g., Sunitinib [13], Temozolomide [14], as well as immunotherapy such as Pembrolizumab [15], are novel medical options for metastatic PPGLs. Up to 70% of PPGLs are caused by germline or somatic pathogenic variants in one of the known susceptibility genes [16,17]. Based on their transcriptional profile, PPGLs are classified into three clusters. Cluster 1 includes PPGLs with variants in genes encoding the hypoxia-inducible factor (HIF) 2α, the Von Hippel–Lindau tumor suppressor (VHL), the prolyl hydroxylase domain (PHD), fumarate hydratase (FH), and succinate dehydrogenase subunits (SDHx) [18,19]. These tumors are characterized by the activation of pseudohypoxic pathways and by an immature catecholamine-secreting phenotype [20]. Cluster 2 comprises PPGLs with pathogenic variants in the REarranged during Transfection (RET) proto-oncogene, the Neurofibromatosis type 1 (NF1) tumor suppressor gene, the TransMEMbrane protein (TMEM127) gene, the Harvey rat sarcoma viral oncogene homolog (HRAS) and the MYC Associated Factor X (MAX) gene. Cluster 2 PPGLs show activated MAPK and mTOR signaling pathways and are mostly benign exhibiting a mature catecholamine phenotype with a strong expression of phenylethanolamine N-methyltransferase (PNMT) [21,22,23,24]. Cluster 3 is the Wnt signaling cluster. These tumors are due to somatic mutations of the CSDE1 gene or somatic gene fusions of the MAML3 gene [2], but also patients with sporadic forms fall into this cluster. Cluster 3 tumors have a more aggressive behavior [19]. Due to the large number of known susceptibility genes, next-generation sequencing (NGS) technology is ideally suited for carrying out PPGL genetic screening. The genetic screening proposed by Toledo et al. [25] includes the PHD2 (also called EGLN1) gene (Egl-9 Family Hypoxia Inducible Factor 1). The encoded protein catalyzes the post-translational formation of 4-hydroxyproline in hypoxia-inducible factor (HIF) alpha proteins. 
HIF is a transcriptional complex, playing a central role in mammalian oxygen homeostasis, controlling energy, iron metabolism, erythropoiesis, development under hypoxic or pseudohypoxic conditions and mediating adaptive cell responses to these states. Under normoxic conditions, HIF is controlled by several enzymatic reactions, including prolyl hydroxylation by PHDs, leading to proteasome degradation. However, pseudohypoxia conditions can lead to HIF stabilization and transactivation of target genes [26,27]. Dysregulation of HIF contributes to tumorigenesis and cancer progression [28,29,30,31]. HIF also has a crucial role in the pathogenesis of neuroendocrine tumors, especially PPGLs, regulating the cluster 1 pathway. Furthermore, PHD2 heterozygous variants cause polycythemia, supporting the importance of PHD2 in the control of red cell mass in humans [32,33] that may be associated with PPGL [34,35]. Here, we report a novel germline PHD2 variant in a patient affected by metastatic Pheo and chronic myeloid leukemia (CML) in the absence of polycythemia. 2. Materials and Methods: Informed consent was obtained from the patient and the manuscript was written following the CARE guidelines. 2.1. Case Presentation The timing of the clinical presentation of the present case is shown in Figure 1A. A 17-year-old female was incidentally diagnosed in 2012 with a 2.7 cm left adrenal mass suggestive of adenoma. The patient came to our attention in 2014. The lesion had increased to 3.6 cm in size. Urine tests showed high levels of urinary Metanephrine (MNu: 488 mcg/day, normal value (nv) < 320) and urinary Normetanephrine (NMNu: 65,125 mcg/day, nv < 390), indicating Pheo. No alteration in urinary Methoxytyramine levels (MTXu: nv < 460) and blood count were observed (white blood cells 6.28 × 103/µL normal value (nv) 4.00–11.00, red blood cells 5.09 × 106/µL normal value (nv) 3.8–5.00, hemoglobin 14.5 g/dL nv 12.0–16.0, hematocrit 43.4% nv 35.0–48.8, platelets 287 × 103/µL normal value (nv) 150–450). The patient was subjected to left laparoscopic adrenalectomy in June 2014 and the post-operative course was uneventful. Histologic examination revealed Pheo (Figure 2A). There was no evidence of capsular or vascular invasion and necrosis. The mitotic count was lower than 3 per 10 high-power fields (HPFs) and atypical figures were not seen. A Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) of 3 was assigned and the Ki67 proliferation index was <1%. A para-aortic lymph node was positive for the presence of chromaffin tissue (Figure 2B) and metastatic Pheo was diagnosed. Post-operative urinary metanephrine levels remained high (MNu 51 mcg/day, NMNu 882 mcg/day, and MTXu 116 mcg/day). Subsequent controls revealed an increase in NMNu up to 1346 mcg/day in October 2015 with a negative CT scan and MRI. The patient underwent 18F-fluoro-dihydroxy-pheylalanine (18F-DOPA) positron emission tomography (PET) and 68Ga-DOTATATE (Figure 3A) that revealed an increased uptake of para-aortic and retrocrural lymph nodes. The disease remained stable with a progressive increase in urinary NMN levels up to 2391 mcg/day. In June 2020, on account of the tumor burden and increasing levels of urinary NMN, peptide receptor radionuclide therapy (PRRT) with 177Lu-DOTATATE was initiated after egg preservation. PRRT was interrupted after four cycles due to a rapid increase in platelets (up to 2348 × 103/µL, nv 150–450). In January 2021,68Ga-DOTATATE showed a reduced tracer uptake (Figure 3B). 
Further investigations, including bone marrow biopsy and genetic analysis, were carried out. A standard karyotype identified the presence of the Philadelphia chromosome (BCR/Abl), the hallmark of CML, and therapy with Imatinib was prescribed (200 mg bid). The erythropoietin level rose mildly (34.7 mlU/mL, the normal value being 4.3–29). During therapy, red blood cell count and hematocrit and hemoglobin levels remained stable, and the last control resulted 3.70 × 106/μL, 37%, and 12.2 g/dL, respectively. Mean corpuscular volume (MCV), mean cell hemoglobin (MCH), and ferritin were normal (85.3 fl, 28.5 pg and 13 ng/mL, respectively). The timing of the clinical presentation of the present case is shown in Figure 1A. A 17-year-old female was incidentally diagnosed in 2012 with a 2.7 cm left adrenal mass suggestive of adenoma. The patient came to our attention in 2014. The lesion had increased to 3.6 cm in size. Urine tests showed high levels of urinary Metanephrine (MNu: 488 mcg/day, normal value (nv) < 320) and urinary Normetanephrine (NMNu: 65,125 mcg/day, nv < 390), indicating Pheo. No alteration in urinary Methoxytyramine levels (MTXu: nv < 460) and blood count were observed (white blood cells 6.28 × 103/µL normal value (nv) 4.00–11.00, red blood cells 5.09 × 106/µL normal value (nv) 3.8–5.00, hemoglobin 14.5 g/dL nv 12.0–16.0, hematocrit 43.4% nv 35.0–48.8, platelets 287 × 103/µL normal value (nv) 150–450). The patient was subjected to left laparoscopic adrenalectomy in June 2014 and the post-operative course was uneventful. Histologic examination revealed Pheo (Figure 2A). There was no evidence of capsular or vascular invasion and necrosis. The mitotic count was lower than 3 per 10 high-power fields (HPFs) and atypical figures were not seen. A Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) of 3 was assigned and the Ki67 proliferation index was <1%. A para-aortic lymph node was positive for the presence of chromaffin tissue (Figure 2B) and metastatic Pheo was diagnosed. Post-operative urinary metanephrine levels remained high (MNu 51 mcg/day, NMNu 882 mcg/day, and MTXu 116 mcg/day). Subsequent controls revealed an increase in NMNu up to 1346 mcg/day in October 2015 with a negative CT scan and MRI. The patient underwent 18F-fluoro-dihydroxy-pheylalanine (18F-DOPA) positron emission tomography (PET) and 68Ga-DOTATATE (Figure 3A) that revealed an increased uptake of para-aortic and retrocrural lymph nodes. The disease remained stable with a progressive increase in urinary NMN levels up to 2391 mcg/day. In June 2020, on account of the tumor burden and increasing levels of urinary NMN, peptide receptor radionuclide therapy (PRRT) with 177Lu-DOTATATE was initiated after egg preservation. PRRT was interrupted after four cycles due to a rapid increase in platelets (up to 2348 × 103/µL, nv 150–450). In January 2021,68Ga-DOTATATE showed a reduced tracer uptake (Figure 3B). Further investigations, including bone marrow biopsy and genetic analysis, were carried out. A standard karyotype identified the presence of the Philadelphia chromosome (BCR/Abl), the hallmark of CML, and therapy with Imatinib was prescribed (200 mg bid). The erythropoietin level rose mildly (34.7 mlU/mL, the normal value being 4.3–29). During therapy, red blood cell count and hematocrit and hemoglobin levels remained stable, and the last control resulted 3.70 × 106/μL, 37%, and 12.2 g/dL, respectively. 
Mean corpuscular volume (MCV), mean cell hemoglobin (MCH), and ferritin were normal (85.3 fl, 28.5 pg and 13 ng/mL, respectively). 2.2. Genetic Analysis Genomic and tumor DNA of the patient were extracted using the QIAsymphony CDN kit (Qiagen, Hilden, Germany). DNA quality and quantity were measured by Qubit ds HS Assay on Qubit 2.0 Fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA). According to the guidelines for genetic screening of PPGL, our patient was proposed for NGS targeting. Using the online DesignStudio software (Illumina, San Diego, CA, USA), probes were designed to cover the following genes: PHD2 (EGLN1), EPAS1, FH, KIF1Bβ, MAX, NF1, RET, SDHA, SDHAF2, SDHB, SDHC, SDHD, TMEM127, and VHL, including exon–intron boundaries. Genomic and tumor DNA of the patient were extracted using the QIAsymphony CDN kit (Qiagen, Hilden, Germany). DNA quality and quantity were measured by Qubit ds HS Assay on Qubit 2.0 Fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA). According to the guidelines for genetic screening of PPGL, our patient was proposed for NGS targeting. Using the online DesignStudio software (Illumina, San Diego, CA, USA), probes were designed to cover the following genes: PHD2 (EGLN1), EPAS1, FH, KIF1Bβ, MAX, NF1, RET, SDHA, SDHAF2, SDHB, SDHC, SDHD, TMEM127, and VHL, including exon–intron boundaries. 2.3. SNP-CGH Array To exclude the presence of pathogenic Copy Number Variants (CNVs) and loss of heterozygosity (LOH) in tumor samples, Illumina Infinium 850k Bead Chip CGH/SNP Microarray was performed according to the manufacturer’s instructions. To exclude the presence of pathogenic Copy Number Variants (CNVs) and loss of heterozygosity (LOH) in tumor samples, Illumina Infinium 850k Bead Chip CGH/SNP Microarray was performed according to the manufacturer’s instructions. 2.4. Whole Exome Sequencing and Bioinformatics Analysis We decided to perform whole exome sequencing (WES) on blood and tumor DNA to sequence all coding genes in order to detect other variants in genes not included in our panel. To construct DNA libraries, we used a strategy based on enzymatic fragmentation to produce dsDNA fragments followed by end repair, A-tailing, adapter ligation and library amplification (Kapa Biosystems, Wilmington, MA, USA). Libraries were hybridized with the protocol SeqCap EZ Exome v3 (Nimblegen, Roche, Basel, Switzerland) and sequenced by NextSeQ550 platform (Illumina Inc., San Diego, CA, USA). We decided to perform whole exome sequencing (WES) on blood and tumor DNA to sequence all coding genes in order to detect other variants in genes not included in our panel. To construct DNA libraries, we used a strategy based on enzymatic fragmentation to produce dsDNA fragments followed by end repair, A-tailing, adapter ligation and library amplification (Kapa Biosystems, Wilmington, MA, USA). Libraries were hybridized with the protocol SeqCap EZ Exome v3 (Nimblegen, Roche, Basel, Switzerland) and sequenced by NextSeQ550 platform (Illumina Inc., San Diego, CA, USA). 2.5. 3 D Variant Prediction To reveal the possible structural consequences of the identified variant, a 3D model of the truncated PHD2 protein was generated using the Phyre2 (Protein Homology Fold Recognition Engine) server created by the Structural Bioinformatics Group, Imperial College, London. 
Phyre2 uses the alignment of hidden Markov models via an HH search—an open-source software program for protein sequence searching—to significantly improve alignment accuracy (http://www.sbg.bio.ic.ac.uk/~phyre2/html/page.cgi?id=index, accessed on 13 June 2022) [36]. This was followed by I-TASSER (Iterative Threading ASSEmbly Refinement—https://zhanglab.ccmb.med.umich.edu/I-TASSER/, accessed on 13 August 2022) [37] to evaluate function the predictions and possible interactions of truncated PHD2 protein with other proteins and GRAMM v1.03, a program for protein docking to predict the structure of possible complexes (http://vakser.compbio.ku.edu/resources/gramm/grammx/, accessed on 13 August 2022) [38,39]. The generated *.pdb files were loaded and visualized with ChemDraw software to envisage a 3D structure (version 8; Cambridge Software; PerkinElmer, Inc., Waltham, MA, USA). To reveal the possible structural consequences of the identified variant, a 3D model of the truncated PHD2 protein was generated using the Phyre2 (Protein Homology Fold Recognition Engine) server created by the Structural Bioinformatics Group, Imperial College, London. Phyre2 uses the alignment of hidden Markov models via an HH search—an open-source software program for protein sequence searching—to significantly improve alignment accuracy (http://www.sbg.bio.ic.ac.uk/~phyre2/html/page.cgi?id=index, accessed on 13 June 2022) [36]. This was followed by I-TASSER (Iterative Threading ASSEmbly Refinement—https://zhanglab.ccmb.med.umich.edu/I-TASSER/, accessed on 13 August 2022) [37] to evaluate function the predictions and possible interactions of truncated PHD2 protein with other proteins and GRAMM v1.03, a program for protein docking to predict the structure of possible complexes (http://vakser.compbio.ku.edu/resources/gramm/grammx/, accessed on 13 August 2022) [38,39]. The generated *.pdb files were loaded and visualized with ChemDraw software to envisage a 3D structure (version 8; Cambridge Software; PerkinElmer, Inc., Waltham, MA, USA). 2.6. PHD2 Immunohistochemistry PHD2 expression was assessed by probing formalin-fixed and paraffin-embedded tissue sections with mouse monoclonal antibody anti-PHD2 at 10 µg/mL (Abcam Cat#ab103432, RRID: AB_10710680). This antibody specifically recognizes a synthetic peptide corresponding to amino acids 1–24 of human PHD2 and is therefore capable of recognizing both the wild-type (wt) protein and the mutant/truncated form. Antigen retrieval was achieved using Epitope Retrieval Solution Citrate buffer (10 mM, pH 6; Dako, Glostrup, Denmark) in a thermostatic bath. Immunohistochemical analysis was performed using EnVision FLEX Systems and 3,3′-diaminobenzidine as the chromogen in a Dako Autostainer Link48 Instrument (Dako). Negative controls were incubated without the primary antibody. The sections were lightly counterstained with Mayer’s hematoxylin and mounted with Permount. PHD2 expression was assessed by probing formalin-fixed and paraffin-embedded tissue sections with mouse monoclonal antibody anti-PHD2 at 10 µg/mL (Abcam Cat#ab103432, RRID: AB_10710680). This antibody specifically recognizes a synthetic peptide corresponding to amino acids 1–24 of human PHD2 and is therefore capable of recognizing both the wild-type (wt) protein and the mutant/truncated form. Antigen retrieval was achieved using Epitope Retrieval Solution Citrate buffer (10 mM, pH 6; Dako, Glostrup, Denmark) in a thermostatic bath. 
Immunohistochemical analysis was performed using EnVision FLEX Systems and 3,3′-diaminobenzidine as the chromogen in a Dako Autostainer Link48 Instrument (Dako). Negative controls were incubated without the primary antibody. The sections were lightly counterstained with Mayer’s hematoxylin and mounted with Permount. 2.7. Western Blot Analysis Dissected tumor tissues or human healthy adrenal samples (100 mg) were chopped in lysis buffer, incubated for 30 min on ice, and centrifuged at 10,000× g for 15 min at 4 °C. Proteins were quantified by Coomassie Blue-reagent (Bio-Rad, Hercules, CA, USA) [40] and 40 μg of proteins was separated by SDS/PAGE then transferred onto PVDF (Immobilon, Millipore, Burlington, MA, USA), as previously described [41]. Bound antibodies detected by ECL reagents (Immobilon, Millipore, Burlington, MA, USA) were analyzed with a Biorad ChemiDoc Imaging System (Bio-Rad, Quantity-One Software). HIF2α polyclonal antibody was supplied by Novus Biologicals (Bio-Techne, Minneapolis, MN, USA), while anti-GAPDH monoclonal antibody and anti-rabbit and anti-mouse secondary antibodies conjugated to horseradish peroxidase were from Santa Cruz Biotechnology (Santa Cruz, CA, USA). Dissected tumor tissues or human healthy adrenal samples (100 mg) were chopped in lysis buffer, incubated for 30 min on ice, and centrifuged at 10,000× g for 15 min at 4 °C. Proteins were quantified by Coomassie Blue-reagent (Bio-Rad, Hercules, CA, USA) [40] and 40 μg of proteins was separated by SDS/PAGE then transferred onto PVDF (Immobilon, Millipore, Burlington, MA, USA), as previously described [41]. Bound antibodies detected by ECL reagents (Immobilon, Millipore, Burlington, MA, USA) were analyzed with a Biorad ChemiDoc Imaging System (Bio-Rad, Quantity-One Software). HIF2α polyclonal antibody was supplied by Novus Biologicals (Bio-Techne, Minneapolis, MN, USA), while anti-GAPDH monoclonal antibody and anti-rabbit and anti-mouse secondary antibodies conjugated to horseradish peroxidase were from Santa Cruz Biotechnology (Santa Cruz, CA, USA). 2.8. RNA Isolation and Quantitative Real-Time PCR Five different sample tissues obtained from pheochromocytoma (PHEO) and two from paraganglioma (PGL) were lysed for mRNA extraction. mRNA was isolated from frozen tissue using the RNeasy Mini Kit (Qiagen, Hilden, Germany), as previously described [42,43]. For each RNA sample, cDNA was obtained by reverse transcription PCR starting from 250 ng of RNA in 50 μL final volume reaction (Taqman RT-PCR kit; Applied Biosystems, Foster City, CA, USA) through the following cycling conditions: 10 min at 25 °C, 30 min at 48 °C, 3 min at 95 °C, and then held at 4 °C. Further quantitative real-time PCR (qRT-PCR) was carried out using primers and probes from Applied Biosystems for the gene transcripts human PHD2 (Hs00254392_m1) and GAPDH (4352934). RT-PCR reactions were performed in triplicate for each gene on an ABI Prism 7900 Sequence Detector (Applied Biosystems). The number of target genes, normalized to the endogenous reference gene (human GAPDH) and relative to a calibrator (Stratagene, San Diego, CA, USA), was calculated by 2−ΔΔCt. Five different sample tissues obtained from pheochromocytoma (PHEO) and two from paraganglioma (PGL) were lysed for mRNA extraction. mRNA was isolated from frozen tissue using the RNeasy Mini Kit (Qiagen, Hilden, Germany), as previously described [42,43]. 
For each RNA sample, cDNA was obtained by reverse transcription PCR starting from 250 ng of RNA in 50 μL final volume reaction (Taqman RT-PCR kit; Applied Biosystems, Foster City, CA, USA) through the following cycling conditions: 10 min at 25 °C, 30 min at 48 °C, 3 min at 95 °C, and then held at 4 °C. Further quantitative real-time PCR (qRT-PCR) was carried out using primers and probes from Applied Biosystems for the gene transcripts human PHD2 (Hs00254392_m1) and GAPDH (4352934). RT-PCR reactions were performed in triplicate for each gene on an ABI Prism 7900 Sequence Detector (Applied Biosystems). The number of target genes, normalized to the endogenous reference gene (human GAPDH) and relative to a calibrator (Stratagene, San Diego, CA, USA), was calculated by 2−ΔΔCt. 2.1. Case Presentation: The timing of the clinical presentation of the present case is shown in Figure 1A. A 17-year-old female was incidentally diagnosed in 2012 with a 2.7 cm left adrenal mass suggestive of adenoma. The patient came to our attention in 2014. The lesion had increased to 3.6 cm in size. Urine tests showed high levels of urinary Metanephrine (MNu: 488 mcg/day, normal value (nv) < 320) and urinary Normetanephrine (NMNu: 65,125 mcg/day, nv < 390), indicating Pheo. No alteration in urinary Methoxytyramine levels (MTXu: nv < 460) and blood count were observed (white blood cells 6.28 × 103/µL normal value (nv) 4.00–11.00, red blood cells 5.09 × 106/µL normal value (nv) 3.8–5.00, hemoglobin 14.5 g/dL nv 12.0–16.0, hematocrit 43.4% nv 35.0–48.8, platelets 287 × 103/µL normal value (nv) 150–450). The patient was subjected to left laparoscopic adrenalectomy in June 2014 and the post-operative course was uneventful. Histologic examination revealed Pheo (Figure 2A). There was no evidence of capsular or vascular invasion and necrosis. The mitotic count was lower than 3 per 10 high-power fields (HPFs) and atypical figures were not seen. A Pheochromocytoma of the Adrenal Gland Scaled Score (PASS) of 3 was assigned and the Ki67 proliferation index was <1%. A para-aortic lymph node was positive for the presence of chromaffin tissue (Figure 2B) and metastatic Pheo was diagnosed. Post-operative urinary metanephrine levels remained high (MNu 51 mcg/day, NMNu 882 mcg/day, and MTXu 116 mcg/day). Subsequent controls revealed an increase in NMNu up to 1346 mcg/day in October 2015 with a negative CT scan and MRI. The patient underwent 18F-fluoro-dihydroxy-pheylalanine (18F-DOPA) positron emission tomography (PET) and 68Ga-DOTATATE (Figure 3A) that revealed an increased uptake of para-aortic and retrocrural lymph nodes. The disease remained stable with a progressive increase in urinary NMN levels up to 2391 mcg/day. In June 2020, on account of the tumor burden and increasing levels of urinary NMN, peptide receptor radionuclide therapy (PRRT) with 177Lu-DOTATATE was initiated after egg preservation. PRRT was interrupted after four cycles due to a rapid increase in platelets (up to 2348 × 103/µL, nv 150–450). In January 2021,68Ga-DOTATATE showed a reduced tracer uptake (Figure 3B). Further investigations, including bone marrow biopsy and genetic analysis, were carried out. A standard karyotype identified the presence of the Philadelphia chromosome (BCR/Abl), the hallmark of CML, and therapy with Imatinib was prescribed (200 mg bid). The erythropoietin level rose mildly (34.7 mlU/mL, the normal value being 4.3–29). 
During therapy, red blood cell count and hematocrit and hemoglobin levels remained stable, and the last control resulted 3.70 × 106/μL, 37%, and 12.2 g/dL, respectively. Mean corpuscular volume (MCV), mean cell hemoglobin (MCH), and ferritin were normal (85.3 fl, 28.5 pg and 13 ng/mL, respectively). 2.2. Genetic Analysis: Genomic and tumor DNA of the patient were extracted using the QIAsymphony CDN kit (Qiagen, Hilden, Germany). DNA quality and quantity were measured by Qubit ds HS Assay on Qubit 2.0 Fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA). According to the guidelines for genetic screening of PPGL, our patient was proposed for NGS targeting. Using the online DesignStudio software (Illumina, San Diego, CA, USA), probes were designed to cover the following genes: PHD2 (EGLN1), EPAS1, FH, KIF1Bβ, MAX, NF1, RET, SDHA, SDHAF2, SDHB, SDHC, SDHD, TMEM127, and VHL, including exon–intron boundaries. 2.3. SNP-CGH Array: To exclude the presence of pathogenic Copy Number Variants (CNVs) and loss of heterozygosity (LOH) in tumor samples, Illumina Infinium 850k Bead Chip CGH/SNP Microarray was performed according to the manufacturer’s instructions. 2.4. Whole Exome Sequencing and Bioinformatics Analysis: We decided to perform whole exome sequencing (WES) on blood and tumor DNA to sequence all coding genes in order to detect other variants in genes not included in our panel. To construct DNA libraries, we used a strategy based on enzymatic fragmentation to produce dsDNA fragments followed by end repair, A-tailing, adapter ligation and library amplification (Kapa Biosystems, Wilmington, MA, USA). Libraries were hybridized with the protocol SeqCap EZ Exome v3 (Nimblegen, Roche, Basel, Switzerland) and sequenced by NextSeQ550 platform (Illumina Inc., San Diego, CA, USA). 2.5. 3 D Variant Prediction: To reveal the possible structural consequences of the identified variant, a 3D model of the truncated PHD2 protein was generated using the Phyre2 (Protein Homology Fold Recognition Engine) server created by the Structural Bioinformatics Group, Imperial College, London. Phyre2 uses the alignment of hidden Markov models via an HH search—an open-source software program for protein sequence searching—to significantly improve alignment accuracy (http://www.sbg.bio.ic.ac.uk/~phyre2/html/page.cgi?id=index, accessed on 13 June 2022) [36]. This was followed by I-TASSER (Iterative Threading ASSEmbly Refinement—https://zhanglab.ccmb.med.umich.edu/I-TASSER/, accessed on 13 August 2022) [37] to evaluate function the predictions and possible interactions of truncated PHD2 protein with other proteins and GRAMM v1.03, a program for protein docking to predict the structure of possible complexes (http://vakser.compbio.ku.edu/resources/gramm/grammx/, accessed on 13 August 2022) [38,39]. The generated *.pdb files were loaded and visualized with ChemDraw software to envisage a 3D structure (version 8; Cambridge Software; PerkinElmer, Inc., Waltham, MA, USA). 2.6. PHD2 Immunohistochemistry: PHD2 expression was assessed by probing formalin-fixed and paraffin-embedded tissue sections with mouse monoclonal antibody anti-PHD2 at 10 µg/mL (Abcam Cat#ab103432, RRID: AB_10710680). This antibody specifically recognizes a synthetic peptide corresponding to amino acids 1–24 of human PHD2 and is therefore capable of recognizing both the wild-type (wt) protein and the mutant/truncated form. 
Antigen retrieval was achieved using Epitope Retrieval Solution Citrate buffer (10 mM, pH 6; Dako, Glostrup, Denmark) in a thermostatic bath. Immunohistochemical analysis was performed using EnVision FLEX Systems and 3,3′-diaminobenzidine as the chromogen in a Dako Autostainer Link48 Instrument (Dako). Negative controls were incubated without the primary antibody. The sections were lightly counterstained with Mayer’s hematoxylin and mounted with Permount. 2.7. Western Blot Analysis: Dissected tumor tissues or human healthy adrenal samples (100 mg) were chopped in lysis buffer, incubated for 30 min on ice, and centrifuged at 10,000× g for 15 min at 4 °C. Proteins were quantified by Coomassie Blue-reagent (Bio-Rad, Hercules, CA, USA) [40] and 40 μg of proteins was separated by SDS/PAGE then transferred onto PVDF (Immobilon, Millipore, Burlington, MA, USA), as previously described [41]. Bound antibodies detected by ECL reagents (Immobilon, Millipore, Burlington, MA, USA) were analyzed with a Biorad ChemiDoc Imaging System (Bio-Rad, Quantity-One Software). HIF2α polyclonal antibody was supplied by Novus Biologicals (Bio-Techne, Minneapolis, MN, USA), while anti-GAPDH monoclonal antibody and anti-rabbit and anti-mouse secondary antibodies conjugated to horseradish peroxidase were from Santa Cruz Biotechnology (Santa Cruz, CA, USA). 2.8. RNA Isolation and Quantitative Real-Time PCR: Five different sample tissues obtained from pheochromocytoma (PHEO) and two from paraganglioma (PGL) were lysed for mRNA extraction. mRNA was isolated from frozen tissue using the RNeasy Mini Kit (Qiagen, Hilden, Germany), as previously described [42,43]. For each RNA sample, cDNA was obtained by reverse transcription PCR starting from 250 ng of RNA in 50 μL final volume reaction (Taqman RT-PCR kit; Applied Biosystems, Foster City, CA, USA) through the following cycling conditions: 10 min at 25 °C, 30 min at 48 °C, 3 min at 95 °C, and then held at 4 °C. Further quantitative real-time PCR (qRT-PCR) was carried out using primers and probes from Applied Biosystems for the gene transcripts human PHD2 (Hs00254392_m1) and GAPDH (4352934). RT-PCR reactions were performed in triplicate for each gene on an ABI Prism 7900 Sequence Detector (Applied Biosystems). The number of target genes, normalized to the endogenous reference gene (human GAPDH) and relative to a calibrator (Stratagene, San Diego, CA, USA), was calculated by 2−ΔΔCt. 3. Results: The flow chart illustrating the filtering process and the variant selection used to identify the pathogenic variants is shown in Figure 1B. 3.1. Mutational Analysis Analysis of SNP-CGH array reveals the absence of pathogenic CNVs and LOH in the entire genome. In particular, we focused on chromosome 1 to identify or exclude rearrangement or LOH near EGLN1 that, in combination with the nonsense variant in EGLN1, could have some effect on our patient’s phenotype. NGS identified a heterozygous c.153G>A (p.W51*) variant in exon 1 of the PHD2 gene (NM_022051.3) which has been defined as pathogenic according to the American College of Medical Genetics (ACMG) guidelines. This variant is not reported in GnomAD, ExAC, or dbSNP NFE (European non-Finnish) databases. PHD2 variant entails a G to A transition of the TGG coding triplet in a stop codon, which encodes for a truncated PHD2 protein (Figure 4). 
Its pathogenicity is presumed because it leads to a premature stop codon and the bioinformatic tool (http://autopvs1.genetics.bgi.com/, accessed on 13 August 2022) predicted mRNA decay with a strong probability [44]. WES comparative sequence analysis of the tumor and blood DNA confirmed the variant identified by the panel genes, but no other significant variants were detected in the other coding regions. Computational analysis highlights the possible involvement of a peptide in two different mechanisms: reduction in propensity of PHD2 to generate dimers and straight interaction with nuclear receptor corepressor 2 (NCoR2). Firstly, four possible amino acid regions have been detected that, interacting with truncated a peptide, could interfere with the formation of a disulfide bond (Figure 5A–D). These Aa regions are located among the DSBH domain (aa 299–390 light blue) which possesses the three cysteine residues (Cys302, Cys323, and Cys326 red) required for oxidative dimerization, through the formation of the disulfide bond and/or induction of a conformational change after oxidation. Secondly, a potential interaction of a truncated peptide with a specific Aa sequence “TISNPPPLISSAK” of NCoR2 protein in positions 1101–1013 was identified. This NCoR2 sequence extends to the third region of the RII domain (residues 752–1016) and interacts with HDAC3, a Class I member of the histone deacetylase superfamily (Figure 5F). Genetic analysis was extended to the patient’s parents with their consent. Genomic DNA was screened for the PHD2 variant using PCR and Sanger sequencing. The patient had inherited the PHD2 variant from her father. Analysis of SNP-CGH array reveals the absence of pathogenic CNVs and LOH in the entire genome. In particular, we focused on chromosome 1 to identify or exclude rearrangement or LOH near EGLN1 that, in combination with the nonsense variant in EGLN1, could have some effect on our patient’s phenotype. NGS identified a heterozygous c.153G>A (p.W51*) variant in exon 1 of the PHD2 gene (NM_022051.3) which has been defined as pathogenic according to the American College of Medical Genetics (ACMG) guidelines. This variant is not reported in GnomAD, ExAC, or dbSNP NFE (European non-Finnish) databases. PHD2 variant entails a G to A transition of the TGG coding triplet in a stop codon, which encodes for a truncated PHD2 protein (Figure 4). Its pathogenicity is presumed because it leads to a premature stop codon and the bioinformatic tool (http://autopvs1.genetics.bgi.com/, accessed on 13 August 2022) predicted mRNA decay with a strong probability [44]. WES comparative sequence analysis of the tumor and blood DNA confirmed the variant identified by the panel genes, but no other significant variants were detected in the other coding regions. Computational analysis highlights the possible involvement of a peptide in two different mechanisms: reduction in propensity of PHD2 to generate dimers and straight interaction with nuclear receptor corepressor 2 (NCoR2). Firstly, four possible amino acid regions have been detected that, interacting with truncated a peptide, could interfere with the formation of a disulfide bond (Figure 5A–D). These Aa regions are located among the DSBH domain (aa 299–390 light blue) which possesses the three cysteine residues (Cys302, Cys323, and Cys326 red) required for oxidative dimerization, through the formation of the disulfide bond and/or induction of a conformational change after oxidation. 
Secondly, a potential interaction of a truncated peptide with a specific Aa sequence “TISNPPPLISSAK” of NCoR2 protein in positions 1101–1013 was identified. This NCoR2 sequence extends to the third region of the RII domain (residues 752–1016) and interacts with HDAC3, a Class I member of the histone deacetylase superfamily (Figure 5F). Genetic analysis was extended to the patient’s parents with their consent. Genomic DNA was screened for the PHD2 variant using PCR and Sanger sequencing. The patient had inherited the PHD2 variant from her father. 3.2. PHD2 and HIF Expression in the Primary Tumor Compared with non-PHD2-mutated Pheo and PGL, PHD2-mutated tissue showed a significant reduction in gene expression levels of ENGL1 mRNA of 43% and 47%, respectively (Figure 5E). Neoplastic cells were negative for PHD2 expression. Strong PHD2 staining of endothelial cells was evident in the lining of large blood vessels as well as in the capillaries. PHD2 expression was restrained to the cytoplasm (Figure 6). In comparison, the tumor expressed higher levels of HIF2α than a healthy adrenal, as shown by Western blot analysis (Figure 7). Compared with non-PHD2-mutated Pheo and PGL, PHD2-mutated tissue showed a significant reduction in gene expression levels of ENGL1 mRNA of 43% and 47%, respectively (Figure 5E). Neoplastic cells were negative for PHD2 expression. Strong PHD2 staining of endothelial cells was evident in the lining of large blood vessels as well as in the capillaries. PHD2 expression was restrained to the cytoplasm (Figure 6). In comparison, the tumor expressed higher levels of HIF2α than a healthy adrenal, as shown by Western blot analysis (Figure 7). 3.1. Mutational Analysis: Analysis of SNP-CGH array reveals the absence of pathogenic CNVs and LOH in the entire genome. In particular, we focused on chromosome 1 to identify or exclude rearrangement or LOH near EGLN1 that, in combination with the nonsense variant in EGLN1, could have some effect on our patient’s phenotype. NGS identified a heterozygous c.153G>A (p.W51*) variant in exon 1 of the PHD2 gene (NM_022051.3) which has been defined as pathogenic according to the American College of Medical Genetics (ACMG) guidelines. This variant is not reported in GnomAD, ExAC, or dbSNP NFE (European non-Finnish) databases. PHD2 variant entails a G to A transition of the TGG coding triplet in a stop codon, which encodes for a truncated PHD2 protein (Figure 4). Its pathogenicity is presumed because it leads to a premature stop codon and the bioinformatic tool (http://autopvs1.genetics.bgi.com/, accessed on 13 August 2022) predicted mRNA decay with a strong probability [44]. WES comparative sequence analysis of the tumor and blood DNA confirmed the variant identified by the panel genes, but no other significant variants were detected in the other coding regions. Computational analysis highlights the possible involvement of a peptide in two different mechanisms: reduction in propensity of PHD2 to generate dimers and straight interaction with nuclear receptor corepressor 2 (NCoR2). Firstly, four possible amino acid regions have been detected that, interacting with truncated a peptide, could interfere with the formation of a disulfide bond (Figure 5A–D). These Aa regions are located among the DSBH domain (aa 299–390 light blue) which possesses the three cysteine residues (Cys302, Cys323, and Cys326 red) required for oxidative dimerization, through the formation of the disulfide bond and/or induction of a conformational change after oxidation. 
4. Discussion: Here, we describe a novel nonsense PHD2 germline variant in a patient affected by metastatic Pheo and CML with no evidence of polycythemia. The first case of a PHD2 missense variant, observed in a 30-year-old patient with polycythemia and abdominal PGL, was reported in 2008 [34]. In 2014, a novel PHD2 germline missense variant was detected in a patient affected by Pheo [45]. More recently, a novel PHD2 germline missense variant was found in a patient with PPGL and polycythemia [35]. None of these patients presented metastatic disease. In contrast, nonsense variants in the PHD2 gene have been described exclusively in patients over 35 years of age suffering from polycythemia without chromaffin disease [46,47]. We cannot exclude that, in our case, the absence of polycythemia could be explained by the younger age of the patient. The PHD2 variants so far described in association with polycythemia are all heterozygous [48], suggesting that a partial loss of PHD2 activity is sufficient to induce polycythemia. Although only a few PHD2 variants have been reported, including those associated with tumors other than Pheo, a larger number of patients is needed to understand the exact function of heterozygous variants. The PHD2 gene encodes prolyl hydroxylase domain-containing protein-2 (PHD2), which catalyzes the post-translational modification of hypoxia-inducible transcription factors that play an essential role in oxygen homeostasis. Prolyl hydroxylation is a basic regulatory event that targets HIF subunits for proteasomal degradation via the Von Hippel–Lindau ubiquitylation complex [47]. At physiological oxygen levels (normoxia), PHD hydroxylates proline residues on HIF-α subunits, destabilizing them by promoting ubiquitination via the Von Hippel–Lindau (VHL) ubiquitin ligase and subsequent proteasomal degradation. In hypoxia, the O2-dependent hydroxylation of HIF-α subunits by PHD is reduced, resulting in HIF-α accumulation, dimerization with HIF-β, and migration into the nucleus to induce an adaptive transcriptional response. Variants in the PHD2 gene determine a pseudohypoxic status: PHD2 is unable to hydroxylate HIF-α, and the consequent stabilization of HIF-α subunits is recognized as contributing to the pathogenesis of hereditary PPGLs [48].
As expected, our patient showed lower expression of PHD2 and higher levels of HIF2α compared with healthy adrenal tissue, confirming that PHD2 down-regulation results in HIF2α stabilization [49]. In our patient, we identified a new variant (c.153G>A) in the PHD2 gene that introduces a stop codon, resulting in the production of a truncated protein, which could explain the unexpected clinical status. This variant is not reported in any SNV database. The gene constraint metric (gnomAD) indicates a strong probability of being LoF intolerant (oe score: 0.06); therefore, the variant was considered pathogenic. In addition, we did not identify any exonic/splicing variants in the other coding regions. We also evaluated CNVs/LOH through SNP-CGH arrays, which allowed us to exclude any possible genomic rearrangements and also reinforced data from the literature reporting that haploinsufficiency and partial deregulation of PHD2 are sufficient to cause polycythemia [49]. Computational analysis indicated two possible effects of the truncated peptide. First, an interaction between the truncated peptide and the wild-type PHD2 protein was hypothesized: the truncated protein would reduce the propensity of PHD2 to dimerize, blocking establishment of the disulfide bond between DSBH domains and subsequently hindering HIF-1α activation, which drives increased glucose flux and lactate production (the Warburg effect) under oxidative stress. This interference could have a protective effect, decreasing tumor growth, as seen in our patient. Second, interaction with, and inhibition of, the transcriptional regulation of NCoR2 was considered, which would lead to the activation of HDAC3 and repression of nuclear receptors such as the thyroid hormone receptor and the retinoic acid receptor. HDAC3 is recruited by enhancers to modulate both the epigenome and nearby gene expression, and it is the only endogenous histone deacetylase with a unique role in modulating the transcriptional activities of nuclear receptors. Moreover, the heterozygous nonsense variant in the PHD2 gene, together with the residual mRNA produced by the wild-type allele, may act in a dominant-negative fashion to lower protein activity, possibly leading to a non-canonical phenotype with metastatic Pheo but no clinical signs of polycythemia. On genetic analysis, the patient’s father was positive for the known PHD2 variant; CT scans and metanephrine assays proved negative. To date, considering the rarity of PHD2 variants, the penetrance of this variant is unknown. Incomplete penetrance could be assumed, as with SDHB germline variants (9–75%) [50]. Indeed, in the previously reported cases, it was not possible to study the transmission of the disease [34,35,45]. On the other hand, autosomal dominant inheritance has been reported in patients affected by PHD2 variants and polycythemia [51]. In summary, the role of the PHD2 heterozygous state as the driver gene associated with an unusual phenotype could depend on its relative abundance in a specific tissue, its interplay among the three isoforms, and other genetic and epigenetic mechanisms. We can exclude neither the presence of other variants in non-coding or regulatory regions nor epigenetic alterations that, in association with the nonsense variant in the PHD2 gene, may contribute to the development of the patient’s phenotype.
In agreement with the literature [52], we demonstrated that the PHD2 variant led to HIF stabilization and its consequent activation, through a significant increase in HIF2α expression in the mutated tumor. Focusing on HIF activation, we suggest a possible correlation between HIF upregulation and the response to CML therapy. Considering the presence of the Philadelphia chromosome, we have not assumed a relationship between PPRT therapy and CML; there are no data in the literature on this correlation. In CML cell populations, HIF-responsive genes are upregulated by BCR/Abl [53]. Under low oxygen conditions, HIF supports the maintenance of stem cell potential, promoting the expansion of the mutated progenitor and increased production of the BCR/Abl protein [54]. This mechanism could be maintained under pseudohypoxic conditions. In CML murine models, HIF-1α genetic knockout prevents CML development by impairing cell cycle progression and inducing apoptosis in leukemia stem cells (LSCs) [55]. Therefore, HIF-1α is a critical factor in CML. LSCs selected under low oxygen tension are tyrosine kinase inhibitor (TKI)-insensitive [56], and HIF-1α-dependent signaling is relevant to LSC maintenance in CML [57]. This establishes HIF targeting as a possible strategy for CML treatment in patients who are insensitive to Imatinib and other TKIs [58]. Our patient was given Imatinib only recently. Future studies are necessary to verify the response to TKIs in PHD2-mutated patients. The description of an unusual clinical feature through NGS DNA analysis can significantly increase the diagnostic rate and improve patient management regardless of specific clinical signs. 5. Conclusions: We report a novel PHD2 nonsense germline variant in a patient affected by metastatic Pheo and CML in the absence of polycythemia. Our findings confirm the need to screen patients affected by chromaffin disease using a comprehensive NGS panel, with no limitation regarding phenotype. In particular, we suggest extending PHD2 gene analysis to younger non-polycythemic patients.
Background: Pheochromocytoma (Pheo) and paraganglioma (PGL) are rare tumors, mostly resulting from pathogenic variants of predisposing genes, with a genetic contribution that now stands at around 70%. Germline variants account for approximately 40%, while the remaining 30% is attributable to somatic variants. Methods: Genetic analysis was carried out by NGS. This analysis was initially performed using a panel of genes known for tumor predisposition (EGLN1, EPAS1, FH, KIF1Bβ, MAX, NF1, RET, SDHA, SDHAF2, SDHB, SDHC, SDHD, TMEM127, and VHL), followed by SNP-CGH array, to exclude the presence of pathogenic Copy Number Variants (CNVs) and loss of heterozygosity (LOH), and subsequently by whole exome sequencing (WES) comparative sequence analysis of the DNA extracted from tumor fragments and peripheral blood. Results: We found a novel germline PHD2 (EGLN1) gene variant, c.153G>A, p.W51*, in a patient affected by metastatic Pheo and chronic myeloid leukemia (CML) in the absence of polycythemia. Conclusions: According to the latest guidelines, it is mandatory to perform genetic analysis in all Pheo/PGL cases regardless of phenotype. In patients with metastatic disease and no evidence of polycythemia, we propose testing for PHD2 (EGLN1) gene variants. A possible correlation between PHD2 (EGLN1) pathogenic variants and CML clinical course should be considered.
1. Introduction: Pheochromocytoma (Pheo) and paraganglioma (PGL) are rare tumors, with an incidence of 2–5 patients per million per year and an uncertain malignant potential [1]. Pheos represent about 80–85% of cases and PGLs about 15–20% [2]. The recommended diagnostic workup for suspected Pheo/PGL (PPGL) includes plasma free metanephrines and fractionated urinary metanephrines. Subsequently, computed tomography (CT) scans or magnetic resonance imaging (MRI) are used to locate the lesion [3]. Nuclear medicine exams such as 123I-meta-iodobenzyl-guanidine (MIBG) scintigraphy, 18F-fluoro-L-dihydroxyphenylalanine (18F-DOPA), and 68Ga-DOTATATE PET are predominantly used in patients with metastatic disease [4,5]. Surgery is the curative option in non-metastatic and metastatic PPGL, and in patients with metastatic disease, tumor resection has been reported to increase overall survival [6]. Medical therapy with alpha-blockers is necessary before surgery [7]. In patients affected by metastatic disease, ‘wait and see’ is the first option, given the generally slow evolution of the disease [8]. Radiometabolic therapies with 131I-MIBG or peptide receptor radionuclide therapy (PPRT) have proved effective [9,10], while cyclophosphamide, vincristine and dacarbazine (CVD) is the chemotherapy most commonly administered [11]. Local therapies such as external radiation therapy, radiosurgery, radiofrequency, cryoablation, and ethanol injection should be considered for unresectable lesions [12]. Tyrosine kinase inhibitors such as Sunitinib [13], the alkylating agent Temozolomide [14], and immunotherapy such as Pembrolizumab [15] are novel medical options for metastatic PPGLs. Up to 70% of PPGLs are caused by germline or somatic pathogenic variants in one of the known susceptibility genes [16,17]. Based on their transcriptional profile, PPGLs are classified into three clusters. Cluster 1 includes PPGLs with variants in the genes encoding hypoxia-inducible factor (HIF) 2α, the Von Hippel–Lindau tumor suppressor (VHL), the prolyl hydroxylase domain proteins (PHD), fumarate hydratase (FH), and the succinate dehydrogenase subunits (SDHx) [18,19]. These tumors are characterized by the activation of pseudohypoxic pathways and by an immature catecholamine-secreting phenotype [20]. Cluster 2 comprises PPGLs with pathogenic variants in the REarranged during Transfection (RET) proto-oncogene, the Neurofibromatosis type 1 (NF1) tumor suppressor gene, the TransMEMbrane protein (TMEM127) gene, the Harvey rat sarcoma viral oncogene homolog (HRAS), and the MYC Associated Factor X (MAX) gene. Cluster 2 PPGLs show activated MAPK and mTOR signaling pathways and are mostly benign, exhibiting a mature catecholamine phenotype with strong expression of phenylethanolamine N-methyltransferase (PNMT) [21,22,23,24]. Cluster 3 is the Wnt signaling cluster. These tumors are due to somatic mutations of the CSDE1 gene or somatic gene fusions involving the MAML3 gene [2]; patients with sporadic forms also fall into this cluster. Cluster 3 tumors show more aggressive behavior [19]. Due to the large number of known susceptibility genes, next-generation sequencing (NGS) technology is ideally suited for carrying out PPGL genetic screening. The genetic screening proposed by Toledo et al. [25] includes the PHD2 (also called EGLN1) gene (Egl-9 Family Hypoxia Inducible Factor 1). The encoded protein catalyzes the post-translational formation of 4-hydroxyproline in hypoxia-inducible factor (HIF) alpha proteins.
HIF is a transcriptional complex that plays a central role in mammalian oxygen homeostasis, controlling energy and iron metabolism, erythropoiesis, and development under hypoxic or pseudohypoxic conditions, and mediating adaptive cell responses to these states. Under normoxic conditions, HIF is controlled by several enzymatic reactions, including prolyl hydroxylation by PHDs, leading to proteasomal degradation. However, pseudohypoxic conditions can lead to HIF stabilization and transactivation of target genes [26,27]. Dysregulation of HIF contributes to tumorigenesis and cancer progression [28,29,30,31]. HIF also has a crucial role in the pathogenesis of neuroendocrine tumors, especially PPGLs, regulating the cluster 1 pathway. Furthermore, PHD2 heterozygous variants cause polycythemia, supporting the importance of PHD2 in the control of red cell mass in humans [32,33], and such polycythemia may be associated with PPGL [34,35]. Here, we report a novel germline PHD2 variant in a patient affected by metastatic Pheo and chronic myeloid leukemia (CML) in the absence of polycythemia.
9,063
274
[ 3364, 611, 131, 41, 113, 193, 147, 184, 222, 442, 108 ]
15
[ "phd2", "variant", "patient", "usa", "figure", "analysis", "gene", "protein", "tumor", "nv" ]
[ "paraganglioma pgl", "paraganglioma pgl rare", "malignant potential pheos", "metastatic pheo clinical", "metastatic pheo diagnosed" ]
null
[CONTENT] germline variants | PHD2 gene | metastatic pheochromocytoma | radiometabolic therapy | PPRT | chronic myeloid leukemia [SUMMARY]
null
[CONTENT] germline variants | PHD2 gene | metastatic pheochromocytoma | radiometabolic therapy | PPRT | chronic myeloid leukemia [SUMMARY]
[CONTENT] germline variants | PHD2 gene | metastatic pheochromocytoma | radiometabolic therapy | PPRT | chronic myeloid leukemia [SUMMARY]
[CONTENT] germline variants | PHD2 gene | metastatic pheochromocytoma | radiometabolic therapy | PPRT | chronic myeloid leukemia [SUMMARY]
[CONTENT] germline variants | PHD2 gene | metastatic pheochromocytoma | radiometabolic therapy | PPRT | chronic myeloid leukemia [SUMMARY]
[CONTENT] Adrenal Gland Neoplasms | Genetic Predisposition to Disease | Germ-Line Mutation | Humans | Leukemia, Myelogenous, Chronic, BCR-ABL Positive | Paraganglioma | Pheochromocytoma | Polycythemia [SUMMARY]
null
[CONTENT] Adrenal Gland Neoplasms | Genetic Predisposition to Disease | Germ-Line Mutation | Humans | Leukemia, Myelogenous, Chronic, BCR-ABL Positive | Paraganglioma | Pheochromocytoma | Polycythemia [SUMMARY]
[CONTENT] Adrenal Gland Neoplasms | Genetic Predisposition to Disease | Germ-Line Mutation | Humans | Leukemia, Myelogenous, Chronic, BCR-ABL Positive | Paraganglioma | Pheochromocytoma | Polycythemia [SUMMARY]
[CONTENT] Adrenal Gland Neoplasms | Genetic Predisposition to Disease | Germ-Line Mutation | Humans | Leukemia, Myelogenous, Chronic, BCR-ABL Positive | Paraganglioma | Pheochromocytoma | Polycythemia [SUMMARY]
[CONTENT] Adrenal Gland Neoplasms | Genetic Predisposition to Disease | Germ-Line Mutation | Humans | Leukemia, Myelogenous, Chronic, BCR-ABL Positive | Paraganglioma | Pheochromocytoma | Polycythemia [SUMMARY]
[CONTENT] paraganglioma pgl | paraganglioma pgl rare | malignant potential pheos | metastatic pheo clinical | metastatic pheo diagnosed [SUMMARY]
null
[CONTENT] paraganglioma pgl | paraganglioma pgl rare | malignant potential pheos | metastatic pheo clinical | metastatic pheo diagnosed [SUMMARY]
[CONTENT] paraganglioma pgl | paraganglioma pgl rare | malignant potential pheos | metastatic pheo clinical | metastatic pheo diagnosed [SUMMARY]
[CONTENT] paraganglioma pgl | paraganglioma pgl rare | malignant potential pheos | metastatic pheo clinical | metastatic pheo diagnosed [SUMMARY]
[CONTENT] paraganglioma pgl | paraganglioma pgl rare | malignant potential pheos | metastatic pheo clinical | metastatic pheo diagnosed [SUMMARY]
[CONTENT] phd2 | variant | patient | usa | figure | analysis | gene | protein | tumor | nv [SUMMARY]
null
[CONTENT] phd2 | variant | patient | usa | figure | analysis | gene | protein | tumor | nv [SUMMARY]
[CONTENT] phd2 | variant | patient | usa | figure | analysis | gene | protein | tumor | nv [SUMMARY]
[CONTENT] phd2 | variant | patient | usa | figure | analysis | gene | protein | tumor | nv [SUMMARY]
[CONTENT] phd2 | variant | patient | usa | figure | analysis | gene | protein | tumor | nv [SUMMARY]
[CONTENT] cluster | ppgls | hif | metastatic | tumors | patients | gene | factor | inducible factor | includes [SUMMARY]
null
[CONTENT] phd2 | figure | variant | analysis | aa | ncor2 | regions | phd2 variant | expression | sequence [SUMMARY]
[CONTENT] patients | affected | polycythemia findings confirm | extending | disease comprehensive | disease comprehensive ngs | disease comprehensive ngs panel | suggest extending | panel limitation phenotype particular | panel limitation phenotype [SUMMARY]
[CONTENT] phd2 | usa | figure | variant | patient | gene | protein | hif | analysis | nv [SUMMARY]
[CONTENT] phd2 | usa | figure | variant | patient | gene | protein | hif | analysis | nv [SUMMARY]
[CONTENT] Pheo | around 70% ||| approximately 40% | 30% [SUMMARY]
null
[CONTENT] EGLN1 | Pheo | CML [SUMMARY]
[CONTENT] Pheo/PGL ||| PHD2 (EGLN1 ||| EGLN1 | CML [SUMMARY]
[CONTENT] Pheo | around 70% ||| approximately 40% | 30% ||| NGS ||| EGLN1 | EPAS1 | MAX | NF1 | RET | SDHAF2 | SDHB | SDHC | SDHD | VHL | SNP | Copy Number Variants | CNVs | LOH | WES ||| ||| EGLN1 | Pheo | CML ||| ||| Pheo/PGL ||| PHD2 (EGLN1 ||| EGLN1 | CML [SUMMARY]
[CONTENT] Pheo | around 70% ||| approximately 40% | 30% ||| NGS ||| EGLN1 | EPAS1 | MAX | NF1 | RET | SDHAF2 | SDHB | SDHC | SDHD | VHL | SNP | Copy Number Variants | CNVs | LOH | WES ||| ||| EGLN1 | Pheo | CML ||| ||| Pheo/PGL ||| PHD2 (EGLN1 ||| EGLN1 | CML [SUMMARY]
Linalool, a Piper aduncum essential oil component, has selective activity against Trypanosoma cruzi trypomastigote forms at 4°C.
28177047
Recent studies showed that essential oils from different pepper species (Piper spp.) have promising leishmanicidal and trypanocidal activities.
BACKGROUND
PaEO chemical composition was obtained by GC-MS. Drug activity assays were based on cell counting, MTT data or infection index values. The effect of PaEO on the T. cruzi cell cycle and mitochondrial membrane potential was evaluated by flow cytometry.
METHODS
PaEO was effective against cell-derived (IC50/24 h: 2.8 μg/mL) and metacyclic (IC50/24 h: 12.1 μg/mL) trypomastigotes, as well as intracellular amastigotes (IC50/24 h: 9 μg/mL). At 4ºC - the temperature of red blood cell (RBC) storage in blood banks - cell-derived trypomastigotes were more sensitive to PaEO (IC50/24 h = 3.8 μg/mL) than to gentian violet (IC50/24 h = 24.7 mg/mL). Cytotoxicity assays using Vero cells (37ºC) and RBCs (4ºC) showed that PaEO has increased selectivity for cell-derived trypomastigotes. Flow cytometry analysis showed that PaEO does not affect the cell cycle of T. cruzi epimastigotes, but decreases their mitochondrial membrane potential. GC-MS data identified nerolidol and linalool as the major components of PaEO, and linalool had a trypanocidal effect (IC50/24 h: 306 ng/mL) at 4ºC.
FINDINGS
The trypanocidal effect of PaEO is likely due to the presence of linalool, which may represent an interesting candidate for use in the treatment of potentially contaminated RBCs bags at low temperature.
MAIN CONCLUSION
[ "Acyclic Monoterpenes", "Animals", "Biological Assay", "Chlorocebus aethiops", "Cold Temperature", "Gas Chromatography-Mass Spectrometry", "Inhibitory Concentration 50", "Monoterpenes", "Oils, Volatile", "Piper", "Trypanocidal Agents", "Trypanosoma cruzi", "Vero Cells" ]
5293122
null
null
null
null
RESULTS
Cell-derived T. cruzi trypomastigotes are sensitive to PaEO - The EO of P. aduncum - a pepper species that has already yielded trypanocidal extracts and molecules - had not been tested for its biological activity against T. cruzi. Thus, we tested the purified PaEO against different T. cruzi developmental forms, namely epimastigotes (axenically grown insect forms), cell-derived amastigotes and trypomastigotes (produced in monolayers of mammalian host cells), and infective metacyclic trypomastigotes (differentiated, by metacyclogenesis in vitro, from epimastigotes) (Fig. 1). As a reference drug, we used BZ, the current first-line treatment for Chagas disease.

Fig. 1: effect of Piper aduncum essential oil (PaEO) on Trypanosoma cruzi. Dose-response curves showing the inhibitory effect of PaEO on T. cruzi metacyclic trypomastigotes (meta), cell-derived trypomastigotes (try) and intracellular amastigotes (ama). Fa = fraction of affected cells (ratio to untreated control).

Cell-derived trypomastigotes were more sensitive to PaEO (IC50/24 h at 28ºC = 2.8 µg/mL) than to BZ (IC50/24 h = 16.1 µg/mL) (Table I). Metacyclic trypomastigotes were also sensitive to P. aduncum EO, with IC50/24 h values for PaEO and BZ of 12.1 µg/mL and 0.3 µg/mL, respectively.

TABLE I: Inhibitory effect (IC50/24 h, in µg/mL) and selectivity index (SI) of Piper aduncum essential oil (PaEO) for the treatment of different Trypanosoma cruzi developmental forms, compared with benznidazole (BZ)

| Developmental form | IC50/24 h PaEO | IC50/24 h BZ | SI PaEO (a) | SI BZ (b) |
| Cell-derived trypomastigotes (28ºC) | 2.8 | 16.1 | 15.3 | 63.4 |
| Cell-derived trypomastigotes (4ºC) | 3.8 | ND | 11.3 | ND |
| Metacyclic trypomastigotes (28ºC) | 12.1 | 0.3 | 3.5 | 3,646.4 |
| Epimastigotes (28ºC) | 84.7 | 14.7 | 0.5 | 69.4 |
| Amastigotes (37ºC) | 9 | 0.8 | 4.7 | 1,276.2 |

a: SI = CC50 of PaEO on Vero cells (42.8 µg/mL) / IC50 of PaEO; b: SI = CC50 of BZ on Vero cells (1,021 µg/mL) / IC50 of BZ; ND: not done.

T. cruzi epimastigotes were incubated for 24 h at 28ºC with different concentrations of PaEO or BZ, and dose-response curves estimated the IC50/24 h values as 84.7 µg/mL for PaEO and 14.7 µg/mL (45 µM) for BZ (Table I).

Blood transfusion is an important route of Chagas disease transmission, due to the presence of infective trypomastigotes in the blood of infected donors. Thus, we tested the sensitivity of cell-derived trypomastigotes to PaEO at 4ºC, the temperature used for red blood cell storage in blood banks. Incubation at 4ºC led to inactivation of BZ, as evidenced by the presence of clumps of crystalline material in the medium (likely representing BZ agglutination) and by the absence of trypanosome morphological damage or motility loss, as assessed by light microscopy (not shown). Therefore, for the experiments at 4ºC we used gentian violet as a control, since this drug can be used to treat blood potentially infected with Chagas disease (Ramirez et al. 1995). Importantly, cell-derived trypomastigotes were 6,500 times more sensitive to P. aduncum EO (IC50/24 h = 3.8 µg/mL) than to gentian violet (IC50/24 h = 24,700 µg/mL).

To estimate PaEO cytotoxicity, we performed cytotoxicity assays in uninfected Vero cells at 37ºC (Table II), and also in human erythrocytes (RBCs) at 4ºC (to mimic red blood cell storage conditions in blood banks). PaEO was more cytotoxic to Vero cells than BZ, with a CC50/24 h of 42.8 µg/mL, compared with 1,021 µg/mL for BZ.
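The selectivity indices reported in Tables I and II are simple ratios of host-cell toxicity to antiparasitic potency. A minimal sketch of that computation, using the CC50 and IC50 values (in µg/mL) given in the Table I footnote and the text above:

```python
# Selectivity index (SI) as used in Table I: SI = CC50 (Vero cells) / IC50
# (parasite form). Values in µg/mL, from the Table I footnote and text.

CC50_VERO = {"PaEO": 42.8, "BZ": 1021.0}
IC50_TRYPO_28C = {"PaEO": 2.8, "BZ": 16.1}  # cell-derived trypomastigotes

for drug in ("PaEO", "BZ"):
    si = CC50_VERO[drug] / IC50_TRYPO_28C[drug]
    print(f"{drug}: SI = {CC50_VERO[drug]} / {IC50_TRYPO_28C[drug]} = {si:.1f}")
# PaEO: SI = 42.8 / 2.8 = 15.3
# BZ: SI = 1021.0 / 16.1 = 63.4
```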
At the treatment temperature of 4ºC, gentian violet also exhibited lower cytotoxicity to RBCs (CC50/24 h = 71.4 mg/mL) than PaEO (CC50/24 h = 351.6 µg/mL).

TABLE II: Inhibitory effect (IC50/24 h, in µg/mL) and cytotoxicity (CC50/24 h, in µg/mL) of Piper aduncum essential oil (PaEO), gentian violet and linalool for Trypanosoma cruzi cell-derived trypomastigotes, red blood cells (RBCs, at 4ºC) and Vero cells (at 37ºC)

| Compound | T. cruzi IC50/24 h | RBCs CC50/24 h | SI (RBCs) | Vero cells CC50/24 h | SI (Vero cells) |
| PaEO | 3.8 | 351.6 | 92.5 | 42.8 | 11.2 |
| Linalool | 0.3 | 7,341.6 | 23,990 | 0.8 | 2.7 |
| Gentian violet | 24,700 | > 70,000 | > 2.8 | 71,400 | 2.9 |

Selectivity index (SI) analysis - representing the ratio between CC50/24 h and IC50/24 h values - indicates that PaEO is particularly selective towards cell-derived T. cruzi trypomastigotes (SI = 15.3 and 11.3 for treatments at 28ºC and 4ºC, respectively; Table I). Importantly, at the treatment temperature of 4ºC, PaEO was more selective towards this form of the parasite than gentian violet (SI = 2.9).

PaEO effectively inhibits the intracellular survival/replication of T. cruzi amastigotes - To test the effect of PaEO on amastigotes, Vero cells were infected with trypomastigotes and, after 24 h (when trypomastigotes had differentiated into amastigotes), the infected cultures were incubated with different concentrations of PaEO (or BZ, as a reference). PaEO at a concentration of 12.5 µg/mL decreased the T. cruzi amastigote infection index by 71.5%, similarly to 10 µg/mL BZ (81.3% decrease; Table III), showing that PaEO is as effective as the standard drug used for Chagas disease treatment at this concentration. However, BZ was much more efficient than PaEO at inhibiting intracellular amastigote replication and survival, with an IC50/24 h (calculated from infection index data) of 0.8 µg/mL, compared with 9 µg/mL for PaEO (Table I). Also, strong cytotoxic effects occurred after treatment with 100 µg/mL PaEO (data not shown).

TABLE III: Effect of 24 h-treatment with Piper aduncum essential oil (PaEO) on Trypanosoma cruzi intracellular amastigotes in Vero cells, compared with benznidazole (BZ)

| Drug | Concentration (µg/mL) | Infected cells (%) | Amastigotes per cell | Infection index (II) | II decrease (%) |
| PaEO | 0 | 79.1 | 7.1 | 562.1 | 0 |
| PaEO | 12.5 | 54.6 | 2.9 | 160.1 | 71.5 |
| PaEO | 50 | 32.5 | 0.4 | 12.7 | 97.7 |
| PaEO | 100 | 0 | 0 | 0 | 100 |
| BZ | 0 | 87.3 | 11.5 | 1003.7 | 0 |
| BZ | 10 | 74.9 | 2.5 | 187.2 | 81.3 |
| BZ | 50 | 0.4 | 0.9 | 0.3 | 99.9 |
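The infection index in Table III follows the definition given in the Methods: II = (% infected cells) x (mean number of amastigotes per cell), with the decrease expressed relative to the untreated control. A minimal sketch, using the PaEO rows of Table III; small differences between recomputed and tabulated II values reflect rounding of the per-cell counts:

```python
# Infection index (II) and its percent decrease, as defined in the Methods:
# II = (% infected cells) x (amastigotes per cell); decrease is relative to
# the untreated control. Values from the PaEO rows of Table III.

def infection_index(pct_infected: float, amastigotes_per_cell: float) -> float:
    return pct_infected * amastigotes_per_cell

def ii_decrease(ii_treated: float, ii_control: float) -> float:
    return (1.0 - ii_treated / ii_control) * 100.0

ii_control = infection_index(79.1, 7.1)  # ~561.6 (562.1 in Table III)
ii_12_5 = infection_index(54.6, 2.9)     # ~158.3 (160.1 in Table III)

print(f"II (12.5 ug/mL PaEO) = {ii_12_5:.1f}")
print(f"decrease = {ii_decrease(160.1, 562.1):.1f}%")  # 71.5%, as tabulated
```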
To examine the mechanism of action of PaEO on T. cruzi, epimastigotes were incubated for 24 h at 28ºC with the EO and then subjected to cell cycle analysis by labeling with propidium iodide, or were labelled with Rhodamine-123 to analyse the effect of PaEO on the mitochondrial membrane potential. Although epimastigotes do not have direct medical importance, this life cycle form is ideal for in vitro analysis, because it can be cultivated easily in axenic medium. Treatment of epimastigotes with PaEO at the IC50/24 h did not significantly alter the number of cells in the G1 phase of the cell cycle, but there was a significant decrease in the number of cells in G2 (Fig. 2A-B). Labeling with Rhodamine-123 showed that PaEO decreased the mitochondrial membrane potential of nearly 98% of the tested epimastigotes (Fig. 2C-D), indicating a possible target of this compound.

Fig. 2: effect of Piper aduncum essential oil (PaEO) on Trypanosoma cruzi epimastigotes, as analysed by flow cytometry. (A-B) Cell cycle analysis. Epimastigotes were treated for 24 h at 28°C with the IC50/24 h value of PaEO and then incubated with propidium iodide (PI). (A) Histograms of the PI-stained cell populations; (B) mean number of cells in each cell cycle stage. *: p < 0.05. (C-D) Mitochondrial membrane potential analysis. Epimastigotes were kept for 24 h at 28°C with the IC50/24 h value of PaEO and then incubated with Rhodamine-123. CCCP (carbonyl cyanide 3-chlorophenylhydrazone): positive control. (C) Histograms of the Rhodamine-123-stained populations; (D) percentage of labelled cells after each treatment.

Chemical composition analysis by gas chromatography/mass spectrometry (GC-MS) showed that the main constituents of the PaEO used in this study are nerolidol (25.22%) and linalool (13.42%) (Table IV). Aside from these two main constituents, PaEO contained several minor components (each accounting for ~1-5% of the total area; Table IV). Therefore, we tested the activities of linalool and nerolidol against cell-derived trypomastigotes at 4ºC (for 24 h), because the activity of PaEO against this developmental form appeared particularly promising (Table I). Linalool had clear trypanocidal activity, with an IC50/24 h of 306 ng/mL at 4ºC (Table II), indicating that this compound is approximately 52 times more effective against cell-derived trypanosomes than BZ at 28ºC (Table I). Cytotoxicity tests on Vero cells (at 4ºC) demonstrated that linalool was, however, cytotoxic, with a CC50/24 h of 823.6 ng/mL, which resulted in a selectivity index (SI) of 2.69 for the treatment of cell-derived trypomastigotes at 4ºC (Table II).

TABLE IV: Chemical composition of Piper aduncum essential oil (PaEO), as assessed by gas chromatography/mass spectrometry (GC/MS)

| Constituent | Retention time | Area | Area (%) |
| Nerolidol | 31.13 | 18000000 | 25.22 |
| Linalool | 11.83 | 9336350 | 13.42 |
| Spatulenol | 31.77 | 4374871 | 6.29 |
| β-ocimene | 9.76 | 3237239 | 4.65 |
| 2-cyclohexen-1-ol | 25.48 | 3092676 | 4.44 |
| Germacrene D | 27.99 | 2804545 | 4.03 |
| β-chamigrene | 28.59 | 2788502 | 4.01 |
| E-β-farnesene | 26.95 | 2509366 | 3.61 |
| γ-cadinene | 29.37 | 2439762 | 3.51 |
| α-epi-muurolol | 34.25 | 1893537 | 2.72 |
| α-bisabolol oxide | 34.75 | 1867181 | 2.68 |
| α-ocimene | 9.36 | 1807180 | 2.6 |
| Turmerone | 32.49 | 1437434 | 2.07 |
| Caryophyllene oxide | 31.98 | 1248458 | 1.79 |
| α-copaene | 23.65 | 1169184 | 1.68 |
| Viridiflorol | 33.04 | 1117593 | 1.61 |
| β-pinene | 7.5 | 988059 | 1.42 |
| α-cadinol | 34.43 | 975559 | 1.4 |
| α-cubebene | 24.16 | 884691 | 1.27 |
| δ-cadinene | 29.5 | 854201 | 1.23 |
| α-pinene | 6.2 | 680357 | 0.98 |
| Safrole | 19.94 | 640257 | 0.92 |

In contrast to linalool, nerolidol had no appreciable trypanocidal activity against cell-derived trypomastigotes (4ºC, 24 h) at concentrations between 100 ng/mL and 1 µg/mL, as assessed by light microscopy observation and MTT assays (data not shown). Also, nerolidol did not affect the integrity of Vero cell monolayers after treatment for 24 h with the highest concentration used (1 µg/mL; not shown). These data indicate that the trypanocidal effect of PaEO was due to linalool.
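The Area (%) column of Table IV is each constituent's peak area relative to the total integrated chromatogram area. A minimal sketch of that calculation; the "other peaks" entry is a hypothetical remainder standing in for the minor unlisted peaks, chosen so that nerolidol's percentage reproduces the tabulated 25.22%:

```python
# Relative abundance from GC-MS peak areas: area% = 100 * area / total area.
# The first two areas are from Table IV; the remainder is hypothetical.

peak_areas = {
    "Nerolidol": 18_000_000,               # Table IV
    "Linalool": 9_336_350,                 # Table IV
    "other peaks (combined)": 44_035_000,  # hypothetical remainder
}
total_area = sum(peak_areas.values())      # ~71.4 million area units

for constituent, area in peak_areas.items():
    print(f"{constituent}: {100.0 * area / total_area:.2f}% of total area")
# Nerolidol: 25.22%  Linalool: 13.08% (Table IV lists 13.42%; the published
# percentages were presumably computed against the full peak list)
```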
null
null
[]
[]
[]
[ "MATERIALS AND METHODS", "RESULTS", "DISCUSSION" ]
[ "\nVero cells - Vero cells (ATCC CCL-81) were grown in RPMI-1640 medium with L-glutamine (Sigma Aldrich, St. Louis, MO, USA), supplemented with 5% fetal calf serum (FCS; Cultilab, Campinas, SP, Brazil), at 37ºC, and in a humidified 5% CO2 atmosphere. For seeding, cell monolayers were washed twice with PBS (pH 7.2), trypsinized and collected by centrifugation at 100 g for 2 min.\n\nParasites - In this work, we used the T. cruzi clone Dm28c. Epimastigote forms were kept at 28ºC in LIT medium supplemented with 10% inactivated FCS, with passages at every three days. For the experiments, parasites obtained from 72-h cultures were used.\nTo obtain cell-derived trypomastigotes, Vero cells were incubated with cell-derived trypomastigotes (1:10 ratio of cells to parasites) in 75 cm2 culture flasks containing 8 mL DMEM. After 4 h of interaction, non-internalised parasites were removed by rinsing with phosphate-buffered saline (PBS), new medium was added, and then changed to fresh medium every 24 h. After 96 h of infection, trypomastigotes released into the supernatant were collected by centrifugation at 3000 g for 10 min.\nTo obtain metacyclic trypomastigotes, culture epimastigotes in late logarithmic growth phase (five days) were subjected to nutritional stress as described previously (Contreras et al. 1985). Culture epimastigotes (5-7 x 107 cells/mL) were collected by centrifugation at 7000 g for 5 min at 10ºC, and resuspended in TAU medium (Triatomine Artificial Urine: 190 mM NaCl, 17 mM KCl, 2 mM MgCl2, 2 mM CaCl2, 8 mM phosphate buffer pH 6.0), at a concentration of 5 x 108 cells/mL. After incubation for 2 h at 28ºC (nutritional stress), parasites were transferred to 25 cm2 flasks containing 5 mL TAU3AAG medium (TAU supplemented with 10 mM L-proline, 50 mM sodium glutamate, 2 mM sodium aspartate and 10 mM glucose), at a final concentration of 5 x 106 cells/mL. After 72 h, metacyclic trypomastigotes in the supernatant (~80% parasites) were collected by centrifugation and purified by passage through a DEAE-cellulose affinity column, equilibrated in PSG buffer (47.47 mM Na2HPO4, 2.5 mM NaH2PO4.H20, 37.76 mM NaCl, 55.5 mM glucose).\n\nP. aduncum essential oil (PaEO) purification and chemical analysis - P. aduncum L. (matico) leaves were collected on March 2013 in the morning, with no rain, at the Medicinal Plants Garden of the Universidade Federal de Lavras (UFLA, MG, Brazil). P. aduncum EO was obtained by distillation in a Clevenger equipment, at the Department of Chemistry, Federal University of Lavras. Prior to use in experiments, PaEO was diluted to 100 mg/mL in DMSO (PaEO stock solution). Final DMSO concentrations in activity assays did not exceed 0.5%. Both undiluted and diluted (stock) PaEO were kept at 4ºC, and in the dark.\n\nPaEO chemical analysis was performed in the Department of Chemistry of UFLA, in a GC-17A Shimadzu gas chromatograph coupled to a QP 5000 Shimadzu mass spectrometer, with a selective detector. The column used was of fused silica/bound type (DB5, 30m x 0.25mm), with helium (1 mL/min) as the mobile-phase gas. The following analysis conditions were used: injection at 220ºC, detection at 240ºC; oven temperature between 40ºC and 240ºC, with addition of 3ºC/min; initial column pressure of 100.2 kPa; 1:10 split ratio and injected volume of 1 µL (solutions at 1% v/v) in dichloromethane. Mass spectra of each compound were compared with the Wiley 229 library database, and with the tabulated Kovats index.\n\nSingle drug activity assays on T. 
cruzi developmental forms - All experiments were performed in biological and technical triplicates. Incubations were performed for 24 h in all bioassays. The absorbance of untreated cells in culture media containing 0.5% DMSO (control) was used as 100% cell viability, and the percentage of dead cells in each treatment was estimated by comparison with the untreated control.\nFor all assays, IC50/24 h values (based on cell counting, MTT data or infection index values) and dose-response curves were generated using the CompuSyn software (ComboSyn Inc., Paramus, NJ, USA), and statistical analysis was performed in Excel. For all assays, the fraction of affected cells (Fa) was calculated relative to the untreated control (treated/untreated ratio).\nTo test the activity of PaEO on epimastigote forms, these cells were seeded into 96-well plates (5 x 106 cells/well) in LIT medium, and treated for 24 h at 28ºC with PaEO at the final concentrations of 9, 18, 37, 75, 150 and 300 µg/mL. As a control, BZ (at 6.25, 12.5, 25 or 50 µg/mL) or LIT medium with 0.5% DMSO (negative control) were used. Cell viability was assessed by the MTT assay (see “MTT assay”).\n\nPaEO activity assays with cell-derived trypomastigotes were performed at 28ºC and 4ºC. Cell-derived trypomastigotes (5 x 107 cells/well) were seeded in 96-well plates with DMEM, and treated with PaEO or BZ at the final concentrations 1, 10, 50 and 100 µg/mL, or with gentian violet (1 and 25 µM; for 4ºC experiments only). Untreated cells were used as a negative control. For treatments at 28ºC, plates were incubated at this temperature for 24 h. Then, each well was diluted 1:10 with 10% formaldehyde in PBS and cells were counted in a hemocytometer. For treatment at 4ºC, plates were incubated at this temperature for 24 h and then subjected to the MTT assay (see “MTT assay”). Due to the small size of trypomastigotes, the MTT reagent was used at the concentration of 1 mg/mL (2.5 fold less than that used for epimastigotes), which improved the correlation between the number of viable cells and the optical density (not shown). The major constituents of P. aduncum EO - linalool and nerolidol (both from Sigma, St. Louis, MO, USA) - were also tested (at the concentrations of 100, 250, 500 or 1000 ng/mL) against T. cruzi cell-derived trypomastigotes at 4ºC, as described above.\nTo test PaEO activity on metacyclic trypomastigotes, these cells were plated in 96-well plates (5 x 106 cells/well) with TAU3AAG medium, and treated with EO at the final concentrations of 1, 10, 50 or 100 µg/mL, for 24 h at 28ºC. Then, each well was diluted at 1:10 with 10% formaldehyde in PBS, and cells were counted in a hemocytometer.\nTo test PaEO activity against intracellular amastigotes, Vero cells were seeded in 24-well plates (2 x 104 cells/well) in DMEM (Sigma-Aldrich), and allowed to adhere for 24 h. Then, cell-derived trypomastigotes were added to each well (1:10 ratio of cells to parasites), incubated for 3 h, and non-internalised parasites were removed by washing with PBS. Infected cultures were incubated for 24 h (at 37ºC, 5% CO2), and then the culture medium was replaced with RPMI-1640 containing PaEO or BZ at final concentrations between 1 and 100 µg/mL (total volume of 1 mL/well), and plates were incubated for 24 h, at 37ºC (5% CO2). 
Treated cells were fixed with methanol, stained with Giemsa, and the inhibitory effect on intracellular amastigotes was estimated by counting (a) the number of infected cells, and (b) the number of amastigotes per cell, in 100 cells/wells, from random light microscopy images. These data were used to calculate the infection index (II) for each tested concentration (II = % infected cells x number of amastigotes per cell).\n\nMTT assay - After drug treatments, MTT solution (10 mg/mL in PBS) was added to each well for a final concentration of 2.5 mg/mL (or 1 mg/mL for trypomastigotes). Plates were incubated for 3 h at 28ºC in the dark, centrifuged for 10 min at 475 g and the supernatant was removed (by quick plate reversal). Then, 20 µL SDS was added to each well, followed by incubation for 1 h at 28ºC, after which 80 µL DMSO was added to each well and incubated for a further 1 h at 28ºC. Finally, the residual material was removed with the aid of a toothpick, and sample absorbance (at 550 nm) was read in an EL800 microplate reader (Biotek, Winooski, VT, USA). Dose-response curves were produced using the CompuSyn software, which was also used to calculate IC50/24 values.\n\nCytotoxicity - Vero cells were seeded in 96-well plates (2 x 104 cells/well) with RPMI-1640 medium and cultivated for 24 h at 37ºC (5% CO2). Cells were incubated for a further 24 h in the presence of PaEO (9, 18, 37, 75, 150 or 300 µg/mL), linalool/nerolidol (30, 60, 125, 250 or 500 ng/mL), BZ (1, 10, 100 or 1000 µg/mL) or gentian violet (100, 500 or 1000 µg/mL). Then, plates were subjected to the MTT assay as described above (see “MTT assay”), and MTT data were used to calculate 50% cytotoxicity (CC50/24 h) values. Plates were examined in an inverted microscope every 12 h, to evaluate monolayer integrity (confluence and adhesion).\nTo analyse cytotoxicity on red blood cells (RBCs) at 4ºC, human erythrocytes were obtained from 10-mL blood samples (O+), from a healthy volunteer donor, as previously described (Izumi et al. 2012). After collection, blood was defibrillated and then washed with a sterile ‘saline-glucose’ solution (0.85% NaCl/5% glucose). The final pellet was diluted in saline-glucose and centrifuged for 5 min at 1400 g. The supernatant was discarded and the pellet of RBCs was diluted in saline-glucose, for a final concentration of 6%. Then, RBCs were seeded in 96-well plates (3% RBCs/well) and treated with EO (0.1, 0.2, 0.4, 0.8, 1.6 or 3.2 mg/mL), for 5 h, at 4ºC. Negative control samples represented untreated RBCs diluted in saline-glucose. After treatment, plates were centrifuged at 1027 g for 2 min, and 100 µL of supernatant from each well were transferred to a new plate and analysed (at 550 nm) in an EL800 microplate reader (Biotek, Winooski, VT, USA).\nCytotoxicity (CC50) values were calculated using Excel, and selectivity index (SI) values were calculated as the ratio between the CC50/24 h (for Vero cells or RBCs) and IC50/24 h values.\n\nFlow cytometry - For flow cytometry analysis, epimastigotes were seeded into 96-well plates (5 x 106 cells/well) in LIT medium and treated with PaEO at a concentration corresponding to the IC50/24 h, for 24 h. Negative control cells were kept untreated. Then, cells were washed twice in PBS (with centrifugation at 7000 g for 2 min) and transferred to plastic flow cytometry tubes (300 µL/tube).\nTo evaluate the effect of PaEO on the T. 
cruzi cell cycle, 1 mL of 10 µg/mL propidium iodide diluted in 5% NP40, with 20 µg/mL RNAse (Qubit RNA HS Assay Kit, ThermoFisher Scientific, Waltham, MA, USA) was added to each tube, and samples were incubated for 20 min at 28ºC.\nTo evaluate the effect of PaEO on T. cruzi mitochondrial membrane potential, rhodamine 123 (final concentration 10 µg/mL) was added to cultures after PaEO treatment, and cells were incubated for 20 min at 28ºC, before PBS washing and transfer to flow cytometry tubes. Culture treatment with 10 µM CCCP (carbonyl cyanide 3-clorophenylhydrazone, Sigma Aldrich, St. Louis, MO, USA) after the rhodamine incubation was used as a positive control.\nFlow cytometry samples were analysed in a FACS Canto II (Becton-Dickinson, San Jose, CA, USA) flow cytometer (20,000 events/sample; 488 nm excitation and 585/42 nm emission for propidium iodide; 488 excitation and 530/30 emission for rhodamine 123), and data analysis was performed in FlowJo (Treestar Software, Ashland, OR, USA). Statistical analysis was performed by one-way analysis of variance (ANOVA), using GraphPad Prism 5.01 software.", "\nCell-derived T. cruzi trypomastigotes are sensitive to PaEO - The EO of P. aduncum - a pepper species that has already yielded anti-trypanocidal extracts and molecules - had not been tested for its biological activity against T. cruzi. Thus, we tested the purified PaEO against different T. cruzi developmental forms, namely epimastigotes (axenically grown insect forms), cell-derived amastigotes and trypomastigotes (produced in monolayers of mammalian host cells) and infective metacyclic trypomastigotes (differentiated, by metacyclogenesis in vitro, from epimastigotes) (Fig. 1). As a reference drug, we used BZ, the current first-line for Chagas disease.\n\nFig. 1: effect of Piper aduncum essential oil (PaEO) on Trypanosoma cruzi. Dose-response curves showing the inhibitory effect of PaEO on T. cruzi metacyclic trypomastigotes (meta), cell-derived trypomastigotes (try) and intracellular amastigotes (ama). Fa = Fraction of affected cells (ratio to untreated control).\n\nCell-derived trypomastigotes were more sensitive to PaEO (IC50/24 h at 28ºC = 2.8 µg/mL) than to BZ (IC50/24 h = 16.1 µg/mL) (Table I). Metacyclic trypomastigotes were also sensitive to P. aduncum EO, with IC50/24 h values for PaEO and BZ of 12.1 µg/mL and 0.3 µg/mL, respectively.\n\nTABLE IInhibitory effect (IC50/24 h) and selectivity index (SI) of Piper aduncum essential oil (PaEO) for the treatment of different Trypanosoma cruzi developmental forms, compared with benznidazole (BZ) IC50/24 h (mg/mL)Selectivity index (SI)\n\n\nPaEOBZ\nPaEOa\nBZb\nCell-derived trypomastigotes (28ºC)2.816.115.363.4Cell-derived trypomastigotes (4ºC)3.8ND11.3NDMetacyclic trypomastigotes (28ºC)12.10.33.53,646.4Epimastigotes (28ºC)84.714.70.569.4Amastigotes (37ºC)90.84.71,276.2\na: SI = CC50\nPaEO on Vero cells (42.8 µg/mL) / IC50\nPaEO; b: SI = CC50 BZ on Vero cells (1021 µg/mL) / IC50 BZ; ND: not done.\n\n\na: SI = CC50\nPaEO on Vero cells (42.8 µg/mL) / IC50\nPaEO; b: SI = CC50 BZ on Vero cells (1021 µg/mL) / IC50 BZ; ND: not done.\n\nT. cruzi epimastigotes were incubated for 24 h at 28ºC with different concentrations of PaEO or BZ, and dose-response curves estimated the IC50/24 h values as 84.7 µg/mL for PaEO and 14.7 µg/mL (45 µM) for BZ (Table I).\nBlood transfusion is an important route for Chagas disease transmission, due to the presence of infective trypomastigotes in blood from infected donors. 
Thus, we tested the sensitivity of cell-derived trypomastigotes to PaEO at 4ºC, the temperature used for red blood storage in blood banks. Incubation at 4ºC led to inactivation of BZ, as evidenced by the presence of clumps of crystalline material in the medium (likely representing BZ agglutination), and by the absence of trypanosome morphological damage or motility loss, as assessed by light microscopy observation (not shown). Therefore, for the experiments at 4ºC we used gentian violet as a control, since this drug can be used to treat blood potentially infected with Chagas disease (Ramirez et al. 1995). Importantly, cell-derived trypomastigotes were 6,500 times more sensitive to P. aduncum EO (IC50/24 h = 3.8 µg/mL) than to gentian violet (IC50/24 h = 24,700 mg/mL).\nTo estimate PaEO cytotoxicity, we performed cytotoxicity assays in uninfected Vero cells at 37ºC (Table II), and also in human erythrocytes (RBCs) at 4ºC (to mimic red blood cell storage conditions in blood banks). PaEO was more cytotoxic to Vero cells than BZ, with a CC50/24 h of 42.8 µg/mL, compared with 1,021 mg/mL for BZ. At the treatment temperature of 4ºC, gentian violet also exhibited lower cytotoxicity to RBCs (CC50/24 h = 71.4 mg/mL) than PaEO (CC5/24 h = 351.6 µg/mL).\n\nTABLE IIInhibitory effect (IC50/24 h, in µg/mL) and cytotoxicity (CC50/24 h, in µg/mL) of Piper aduncum essential oil (PaEO), gentian violet and linalool for Trypanosoma cruzi cell-derived trypomastigotes, red blood cells (RBCs, at 4ºC) and Vero cells (at 37ºC) \nT. cruzi IC50/24 hRBCs CC50/24 hSelectivity index (SI) - RBCsVero cells CC50/24 hSelectivity index (SI) - Vero cells\nPaEO3.8351.692.542.811.2Linalool0.317,341.623,9900.872.7Gentian Violet24,700> 70,000> 2.871,4002.9\n\nSelectivity index (SI) analysis - representing the ratio between IC50/24 h and CC50/24 h values - indicates that PaEO is particularly selective towards cell-derived T. cruzi trypomastigotes (SI = 15.3 and 11.3, for treatments with Vero cells at 28ºC and 4ºC, respectively; Table I). Importantly, at the treatment temperature of 4ºC, PaEO was more selective towards this form of the parasite than gentian violet (SI = 2.9).\n\nPaEO inhibits effectively the intracellular survival/replication of T. cruzi amastigotes - To test the effect of PaEO on amastigotes, Vero cells were infected with trypomastigotes and, after 24 h (when trypomastigotes had differentiated into amastigotes), the infected cultures were incubated with different concentrations of PaEO (or BZ, as a reference). PaEO at the concentration of 12.5 µg/mL decreased the T. cruzi amastigote infection index by 71.5%, similarly to 10 µg/mL BZ (81.3% decrease; Table III), showing that PaEO is as effective as the standard drug used for Chagas disease treatment, at this concentration. However, BZ was much more efficient than PaEO at inhibiting intracellular amastigote replication and survival, with an IC50/24 h (calculated from infection index data) of 0.8 µg/mL, compared with 9 µg/mL for PaEO (Table I). Also, strong cytotoxic effects occurred after treatment with 100 µg/mL PaEO (data not shown).\n\nTABLE IIIEffect of 24 h-treatment with Piper aduncum essential oil (PaEO) on Trypanosoma cruzi intracellular amastigotes in Vero cells, compared with benznidazole (BZ) Concentration (µg/mL)Infected cells (%)Amastigotes per cellInfection index (II)II decrease\nPaEO079.17.1562.1012.554.62.9160.171.55032.50.412.797.7100000100BZ087.311.51003.701074.92.5187.281.3500.40.90.399.9\n\nTo examine the mechanism of action of PaEO on T. 
cruzi, epimastigotes were incubated for 24 h at 28ºC with the EO and then subjected to cell cycle analysis by labeling with propidium iodide, or were labelled with Rhodamine-123, to analyse the effect of PaEO on the mitochondrial membrane potential. Although epimastigotes do not have direct medical importance, this life cycle form is ideal for in vitro analysis, because they can be cultivated easily in axenic medium.\nTreatment of epimastigotes with PaEO at the IC50/24 h did not alter significantly the number of cells in the G1 phase of the cell cycle, but there was a significant decrease in the number of cells in G2 (Fig. 2A-B). Labeling with rhodamine-123 showed that the PaEO decreased the mitochondrial membrane potential of nearly 98% of the tested epimastigotes (Fig. 2C-D), indicating a possible target of this compound.\n\nFig. 2: effect of Piper aduncum essential oil (PaEO) on Trypanosoma cruzi epimastigotes, as analysed by flow cytometry. (A-B) Cell cycle analysis. Epimastigotes were treated for 24 h at 28°C with the IC50/24 h value of PaEO and then incubated with propidium iodide (PI). (A) Histograms of the PI-stained cell populations; (B) mean number of cells in each cell cycle stage. *: p < 0.05. (C-D) Mitochondrial membrane potential analysis. Epimastigotes were kept for 24 h at 28°C with the IC50/2 h value of PaEO and then incubated with Rhodamine-123. CCCP (carbonyl cyanide 3-clorophenylhydrazone): positive control. (C) Histograms of the Rhodamine-123-stained populations; (D) percentage of labelled cells after each treatment.\n\nChemical composition analysis by gas chromatography/mass spectrometry (GC-MS) showed that the main constituents of the PaEO used in this study are nerolidol (25.22%) and linalool (13.42%) (Table IV). Aside from these two main constituents, PaEO contained several minor components (each accounting for ~1-5% of the total area; Table IV). Therefore, we tested the activities of linalool and nerolidol against cell-derived trypomastigotes at 4ºC (for 24 h), because the activity of PaEO against this developmental form appeared to be particularly promising (Table I). Linalool had clear trypanocidal activity, with an IC50/24 h of 306 ng/mL at 4ºC (Table II), indicating that this compound is approximately 52 times more effective against cell-derived trypanosomes than BZ at 28ºC (Table I). 
Cytotoxicity tests on Vero cells (at 4ºC) demonstrated that linalool was cytotoxic, however, with a CC50/24 h of 823.6 ng/mL, which resulted in a selectivity index (SI) for the treatment of cell-derived trypomastigotes at 4ºC of 2.69 (Table II).\n\nTABLE IVChemical composition of Piper aduncum essential oil (PaEO), as assessed by gas chromatography/mass spectrometry (GC/MS)ConstituentRetention timeAreaArea (%)Nerolidol31.131800000025.22Linalool11.83933635013.42Spatulenol31.7743748716.29β-ocimene9.7632372394.652-cycloexen-1-ol25.4830926764.44Germacrene D27.9928045454.03β-chamigrene28.5927885024.01E-β-farnesene26.9525093663.61γ-cadinene29.3724397623.51α-epi-muurolol34.2518935372.72α-bisabolol oxide34.7518671812.68α-ocimene9.3618071802.6Turmerone32.4914374342.07Caryophyllene oxide31.9812484581.79α-copaene23.6511691841.68Viridiflorol33.0411175931.61β-pinene7.59880591.42α-cadinol34.439755591.4α-cubebene24.168846911.27δ-cadineme29.58542011.23α-pinene6.26803570.98Safrole19.946402570.92\n\nIn contrast to linalool, nerolidol had no appreciable trypanocidal activity against cell-derived trypomastigotes (4ºC, 24 h) at concentrations between 100 ng/mL and 1 µg/mL, as assessed by light microscopy observation and MTT assays (data not shown). Also, nerolidol did not affect the integrity of Vero cell monolayers, after treatment for 24 h with the highest concentration used (1 µg/mL; not shown). These data indicate that the trypanocidal effect of PaEO was due to linalool.", "The microbicidal activity of EOs - against fungi, bacteria and viruses - is used primarily as a defense to ensure plant survival (Bakkali et al. 2008). A number of EOs are active against T. cruzi epimastigotes and trypomastigotes (Santoro et al. 2007a, b, Escobar et al. 2010, Borges et al. 2012, Silva et al. 2013, Azeredo et al. 2014) and recent studies showed that EOs from different pepper species (Piper spp.) have promising leishmanicidal (Monzote et al. 2010) and trypanocidal (Esperandim et al. 2013) activities. Nevertheless, the EO of the pepper species P. aduncum - whose extracts have clear trypanocidal activity - had not been tested previously against T. cruzi.\nIn the present study, we show that T. cruzi trypomastigotes and amastigotes are sensitive to PaEO at concentrations of 2.8 to 12.1 µg/mL (Table I). Importantly, the highly infective cell-derived trypomastigote forms - which can be found in contaminated blood from Chagas disease patients - were particularly sensitive to PaEO, with an IC50/24 h of 2.8 µg/mL.\nAlso, treatment of T. cruzi trypomastigotes and human erythrocytes with PaEO at 4ºC resulted in an excellent selectivity index (SI = 92.5) (Table II). The first desirable property of a promising drug candidate is an SI > 50 (Romanha et al. 2010); thus, PaEO (or its main constituent linalool) is a promising alternative for further testing on blood/blood cells, under thermal conditions of red blood cells storage (4ºC), in order to obtain derivatives effective at eliminating T. cruzi in blood bank samples.\nIn our tests with cell-derived trypomastigotes at 4ºC the use of BZ for 24 h had no effect against the parasite (not shown). The incomplete trypanocidal effect of BZ at 4ºC can lead to Chagas disease transmission via transfusion of contaminated blood (Martin et al. 2014). Gentian violet was the reference drug of choice in our tests at 4ºC, because it can be used to control T. cruzi infection in blood banks (Ramirez et al. 1995). 
However, the prophylaxis alternative with gentian violet produces side effects, such as staining in the mucosa and erythrocyte agglutination (Docampo & Moreno 1990).\nIn this study, most testing was performed for 24 h, to minimise the effects of nutrient starvation and toxicity due to parasite lysis. We believe that these conditions are closer to those found in the mammalian host, where toxin waste is eliminated through the bloodstream. Our data with BZ differ from that reported in other studies, where this drug was often tested for 72 h, yielding lower IC50 values (Franklim et al. 2013, Hamedt et al. 2014). To date, no standard conditions for drug testing against T. cruzi have been defined. Therefore, differences in parasite strains, incubation times and temperatures and culture media may all account for variations observed in experimental drug testing, and are likely to delay drug development for Chagas disease treatment (Andrade et al. 1985).\nMitochondrial membrane potential evaluation of epimastigotes by flow cytometry demonstrated that PaEO decreased the mitochondrial membrane potential of ~98% of the treated cells, with similar results obtained with the positive control with CCCP. Therefore, PaEO may be acting on the parasite mitochondrion. Accordingly, it has been already shown that other natural products induce depolarisation of mitochondrial membrane in Trypanosomatids such as T. cruzi and Leishmania (Inacio et al. 2012, Caleare et al. 2013, Ribeiro et al. 2013, Takahashi et al. 2013, Aliança et al. 2014, Corpas-López et al. 2016, Sülsen et al. 2016).\nEOs may affect different cellular targets, due to variations in their molecular composition. As they contain lipophilic molecules, the mechanism of action of EOs involves breakage and/or crossing of the plasma membrane (Knobloch et al. 1989, Sikkema et al. 1994). While our data showed that the PaEO may target the mitochondrion, we can not exclude the possibility that mitochondrial damage is a secondary effect of drug treatment.\nThe PaEO used in this work had nerolidol (25.22%) and linalool (13.42%) as its main constituents. GC-MS analysis of the essential oil of P. aduncum in other studies showed that it can yield four major constituents: (a) dillapiole, at 79.9-86.9% (Almeida et al. 2009); (b) nerolidol, at 79.2-82.5% (Oliveira et al. 2006); (c) cineole, at 54% (Oliveira et al. 2013), and (d) linalool, at 31-41% (Navickiene et al. 2006). This variation in composition may be explained by the collection of plants from different regions, and which are exposed to different environmental factors. Thus, to minimise EO composition variation between studies, the collected material should always come from the same location. The best thermal storage conditions for P. aduncum EO is 20ºC (for up to six months), without loss of regenerative capacity (da Silva & Scherwinski-Pereira 2011).\nIn this work we show that nerolidol - the major PaEO component - failed to affect significantly the cell-derived trypomastigote form of T. cruzi, at 4ºC, even at the concentration of 1 µg/mL. In contrast, linalool had potent trypanocidal effect against this parasite form, with an IC50/24 h of 306 ng/mL. These results show that the major component of an EO may not always be responsible for the lytic activity. Interestingly, incubation of linalool with T. cruzi blood trypomastigotes (Y strain) resulted in an IC50/24 h value of 264 µg/mL (Santoro et al. 
2007b), indicating that trypomastigote forms of different origin (blood, cell culture or in vitro differentiation) and from different strains may differ in their susceptibility to EO derivatives.\nOur cytotoxicity tests in Vero cells showed that cell monolayers remained intact after nerolidol treatment, without clear changes or toxic effects, even at the concentration of 1 µg/mL. However, linalool was cytotoxic, with a CC50/24 h of 823.6 ng/mL, indicating that the major constituent is not necessarily the one responsible for EO toxicity towards mammalian cells, as suggested in other studies (Almeida et al. 2009, Oliveira et al. 2013, Liu et al. 2014). Also, linalool was approximately 80,000 times more efficient against cell-derived trypomastigotes at 4ºC than gentian violet, the drug commonly used to treat potentially contaminated blood bags.\nIn conclusion, our data indicate that the P. aduncum essential oil component linalool is a promising compound for further studies on the trypanocidal treatment of red blood cell bags at 4ºC prior to transfusion, to prevent T. cruzi transmission via this important route. To improve the safety profile of linalool, combinations with less toxic compounds (including benznidazole) should be tested, and linalool could be used as a lead for further drug development." ]
[ "materials|methods", "results", "discussion" ]
[ "essential oil", "linalool", "Piper aduncum", "Trypanosoma cruzi", "trypanocidal activity" ]
MATERIALS AND METHODS: Vero cells - Vero cells (ATCC CCL-81) were grown in RPMI-1640 medium with L-glutamine (Sigma Aldrich, St. Louis, MO, USA), supplemented with 5% fetal calf serum (FCS; Cultilab, Campinas, SP, Brazil), at 37ºC and in a humidified 5% CO2 atmosphere. For seeding, cell monolayers were washed twice with PBS (pH 7.2), trypsinized and collected by centrifugation at 100 g for 2 min. Parasites - In this work, we used the T. cruzi clone Dm28c. Epimastigote forms were kept at 28ºC in LIT medium supplemented with 10% inactivated FCS, with passages every three days. For the experiments, parasites obtained from 72-h cultures were used. To obtain cell-derived trypomastigotes, Vero cells were incubated with cell-derived trypomastigotes (1:10 ratio of cells to parasites) in 75 cm2 culture flasks containing 8 mL DMEM. After 4 h of interaction, non-internalised parasites were removed by rinsing with phosphate-buffered saline (PBS), new medium was added, and the medium was then changed every 24 h. After 96 h of infection, trypomastigotes released into the supernatant were collected by centrifugation at 3000 g for 10 min. To obtain metacyclic trypomastigotes, culture epimastigotes in late logarithmic growth phase (five days) were subjected to nutritional stress as described previously (Contreras et al. 1985). Culture epimastigotes (5-7 x 107 cells/mL) were collected by centrifugation at 7000 g for 5 min at 10ºC, and resuspended in TAU medium (Triatomine Artificial Urine: 190 mM NaCl, 17 mM KCl, 2 mM MgCl2, 2 mM CaCl2, 8 mM phosphate buffer pH 6.0), at a concentration of 5 x 108 cells/mL. After incubation for 2 h at 28ºC (nutritional stress), parasites were transferred to 25 cm2 flasks containing 5 mL TAU3AAG medium (TAU supplemented with 10 mM L-proline, 50 mM sodium glutamate, 2 mM sodium aspartate and 10 mM glucose), at a final concentration of 5 x 106 cells/mL. After 72 h, metacyclic trypomastigotes in the supernatant (~80% parasites) were collected by centrifugation and purified by passage through a DEAE-cellulose affinity column, equilibrated in PSG buffer (47.47 mM Na2HPO4, 2.5 mM NaH2PO4·H2O, 37.76 mM NaCl, 55.5 mM glucose). P. aduncum essential oil (PaEO) purification and chemical analysis - P. aduncum L. (matico) leaves were collected in March 2013, in the morning and with no rain, at the Medicinal Plants Garden of the Universidade Federal de Lavras (UFLA, MG, Brazil). P. aduncum EO was obtained by distillation in a Clevenger apparatus, at the Department of Chemistry, Federal University of Lavras. Prior to use in experiments, PaEO was diluted to 100 mg/mL in DMSO (PaEO stock solution). Final DMSO concentrations in activity assays did not exceed 0.5%. Both undiluted and diluted (stock) PaEO were kept at 4ºC, in the dark. PaEO chemical analysis was performed in the Department of Chemistry of UFLA, in a GC-17A Shimadzu gas chromatograph coupled to a QP 5000 Shimadzu mass spectrometer, with a selective detector. The column used was of the fused silica/bound type (DB5, 30 m x 0.25 mm), with helium (1 mL/min) as the mobile-phase gas. The following analysis conditions were used: injection at 220ºC; detection at 240ºC; oven temperature between 40ºC and 240ºC, with a ramp of 3ºC/min; initial column pressure of 100.2 kPa; 1:10 split ratio; and an injected volume of 1 µL (solutions at 1% v/v in dichloromethane). Mass spectra of each compound were compared with the Wiley 229 library database and with the tabulated Kovats index. Single drug activity assays on T. 
cruzi developmental forms - All experiments were performed in biological and technical triplicates. Incubations were performed for 24 h in all bioassays. The absorbance of untreated cells in culture media containing 0.5% DMSO (control) was used as 100% cell viability, and the percentage of dead cells in each treatment was estimated by comparison with the untreated control. For all assays, IC50/24 h values (based on cell counting, MTT data or infection index values) and dose-response curves were generated using the CompuSyn software (ComboSyn Inc., Paramus, NJ, USA), and statistical analysis was performed in Excel. For all assays, the fraction of affected cells (Fa) was calculated relative to the untreated control (treated/untreated ratio). To test the activity of PaEO on epimastigote forms, these cells were seeded into 96-well plates (5 x 106 cells/well) in LIT medium, and treated for 24 h at 28ºC with PaEO at the final concentrations of 9, 18, 37, 75, 150 and 300 µg/mL. As a control, BZ (at 6.25, 12.5, 25 or 50 µg/mL) or LIT medium with 0.5% DMSO (negative control) were used. Cell viability was assessed by the MTT assay (see “MTT assay”). PaEO activity assays with cell-derived trypomastigotes were performed at 28ºC and 4ºC. Cell-derived trypomastigotes (5 x 107 cells/well) were seeded in 96-well plates with DMEM, and treated with PaEO or BZ at the final concentrations 1, 10, 50 and 100 µg/mL, or with gentian violet (1 and 25 µM; for 4ºC experiments only). Untreated cells were used as a negative control. For treatments at 28ºC, plates were incubated at this temperature for 24 h. Then, each well was diluted 1:10 with 10% formaldehyde in PBS and cells were counted in a hemocytometer. For treatment at 4ºC, plates were incubated at this temperature for 24 h and then subjected to the MTT assay (see “MTT assay”). Due to the small size of trypomastigotes, the MTT reagent was used at the concentration of 1 mg/mL (2.5 fold less than that used for epimastigotes), which improved the correlation between the number of viable cells and the optical density (not shown). The major constituents of P. aduncum EO - linalool and nerolidol (both from Sigma, St. Louis, MO, USA) - were also tested (at the concentrations of 100, 250, 500 or 1000 ng/mL) against T. cruzi cell-derived trypomastigotes at 4ºC, as described above. To test PaEO activity on metacyclic trypomastigotes, these cells were plated in 96-well plates (5 x 106 cells/well) with TAU3AAG medium, and treated with EO at the final concentrations of 1, 10, 50 or 100 µg/mL, for 24 h at 28ºC. Then, each well was diluted at 1:10 with 10% formaldehyde in PBS, and cells were counted in a hemocytometer. To test PaEO activity against intracellular amastigotes, Vero cells were seeded in 24-well plates (2 x 104 cells/well) in DMEM (Sigma-Aldrich), and allowed to adhere for 24 h. Then, cell-derived trypomastigotes were added to each well (1:10 ratio of cells to parasites), incubated for 3 h, and non-internalised parasites were removed by washing with PBS. Infected cultures were incubated for 24 h (at 37ºC, 5% CO2), and then the culture medium was replaced with RPMI-1640 containing PaEO or BZ at final concentrations between 1 and 100 µg/mL (total volume of 1 mL/well), and plates were incubated for 24 h, at 37ºC (5% CO2). 
Treated cells were fixed with methanol, stained with Giemsa, and the inhibitory effect on intracellular amastigotes was estimated by counting (a) the number of infected cells, and (b) the number of amastigotes per cell, in 100 cells/well, from random light microscopy images. These data were used to calculate the infection index (II) for each tested concentration (II = % infected cells x number of amastigotes per cell). MTT assay - After drug treatments, MTT solution (10 mg/mL in PBS) was added to each well for a final concentration of 2.5 mg/mL (or 1 mg/mL for trypomastigotes). Plates were incubated for 3 h at 28ºC in the dark, centrifuged for 10 min at 475 g, and the supernatant was removed (by quick plate reversal). Then, 20 µL SDS was added to each well, followed by incubation for 1 h at 28ºC, after which 80 µL DMSO was added to each well and incubated for a further 1 h at 28ºC. Finally, the residual material was removed with the aid of a toothpick, and sample absorbance (at 550 nm) was read in an EL800 microplate reader (Biotek, Winooski, VT, USA). Dose-response curves were produced using the CompuSyn software, which was also used to calculate IC50/24 h values. Cytotoxicity - Vero cells were seeded in 96-well plates (2 x 104 cells/well) with RPMI-1640 medium and cultivated for 24 h at 37ºC (5% CO2). Cells were incubated for a further 24 h in the presence of PaEO (9, 18, 37, 75, 150 or 300 µg/mL), linalool/nerolidol (30, 60, 125, 250 or 500 ng/mL), BZ (1, 10, 100 or 1000 µg/mL) or gentian violet (100, 500 or 1000 µg/mL). Then, plates were subjected to the MTT assay as described above (see “MTT assay”), and MTT data were used to calculate 50% cytotoxicity (CC50/24 h) values. Plates were examined in an inverted microscope every 12 h, to evaluate monolayer integrity (confluence and adhesion). To analyse cytotoxicity on red blood cells (RBCs) at 4ºC, human erythrocytes were obtained from 10-mL blood samples (O+), from a healthy volunteer donor, as previously described (Izumi et al. 2012). After collection, blood was defibrinated and then washed with a sterile ‘saline-glucose’ solution (0.85% NaCl/5% glucose). The final pellet was diluted in saline-glucose and centrifuged for 5 min at 1400 g. The supernatant was discarded and the pellet of RBCs was diluted in saline-glucose, to a final concentration of 6%. Then, RBCs were seeded in 96-well plates (3% RBCs/well) and treated with EO (0.1, 0.2, 0.4, 0.8, 1.6 or 3.2 mg/mL), for 5 h, at 4ºC. Negative control samples were untreated RBCs diluted in saline-glucose. After treatment, plates were centrifuged at 1027 g for 2 min, and 100 µL of supernatant from each well was transferred to a new plate and analysed (at 550 nm) in an EL800 microplate reader (Biotek, Winooski, VT, USA). Cytotoxicity (CC50) values were calculated using Excel, and selectivity index (SI) values were calculated as the ratio between the CC50/24 h (for Vero cells or RBCs) and IC50/24 h values. Flow cytometry - For flow cytometry analysis, epimastigotes were seeded into 96-well plates (5 x 106 cells/well) in LIT medium and treated with PaEO at a concentration corresponding to the IC50/24 h, for 24 h. Negative control cells were kept untreated. Then, cells were washed twice in PBS (with centrifugation at 7000 g for 2 min) and transferred to plastic flow cytometry tubes (300 µL/tube). To evaluate the effect of PaEO on the T. 
cruzi cell cycle, 1 mL of 10 µg/mL propidium iodide diluted in 5% NP40, with 20 µg/mL RNAse (Qubit RNA HS Assay Kit, ThermoFisher Scientific, Waltham, MA, USA), was added to each tube, and samples were incubated for 20 min at 28ºC. To evaluate the effect of PaEO on the T. cruzi mitochondrial membrane potential, rhodamine 123 (final concentration 10 µg/mL) was added to cultures after PaEO treatment, and cells were incubated for 20 min at 28ºC, before PBS washing and transfer to flow cytometry tubes. Culture treatment with 10 µM CCCP (carbonyl cyanide 3-chlorophenylhydrazone, Sigma Aldrich, St. Louis, MO, USA) after the rhodamine incubation was used as a positive control. Flow cytometry samples were analysed in a FACS Canto II (Becton-Dickinson, San Jose, CA, USA) flow cytometer (20,000 events/sample; 488 nm excitation and 585/42 nm emission for propidium iodide; 488 nm excitation and 530/30 nm emission for rhodamine 123), and data analysis was performed in FlowJo (Treestar Software, Ashland, OR, USA). Statistical analysis was performed by one-way analysis of variance (ANOVA), using GraphPad Prism 5.01 software. RESULTS: Cell-derived T. cruzi trypomastigotes are sensitive to PaEO - The EO of P. aduncum - a pepper species that has already yielded trypanocidal extracts and molecules - had not been tested for its biological activity against T. cruzi. Thus, we tested the purified PaEO against different T. cruzi developmental forms, namely epimastigotes (axenically grown insect forms), cell-derived amastigotes and trypomastigotes (produced in monolayers of mammalian host cells), and infective metacyclic trypomastigotes (differentiated, by metacyclogenesis in vitro, from epimastigotes) (Fig. 1). As a reference drug, we used BZ, the current first-line drug for Chagas disease. Fig. 1: effect of Piper aduncum essential oil (PaEO) on Trypanosoma cruzi. Dose-response curves showing the inhibitory effect of PaEO on T. cruzi metacyclic trypomastigotes (meta), cell-derived trypomastigotes (try) and intracellular amastigotes (ama). Fa = fraction of affected cells (ratio to untreated control). Cell-derived trypomastigotes were more sensitive to PaEO (IC50/24 h at 28ºC = 2.8 µg/mL) than to BZ (IC50/24 h = 16.1 µg/mL) (Table I). Metacyclic trypomastigotes were also sensitive to P. aduncum EO, with IC50/24 h values for PaEO and BZ of 12.1 µg/mL and 0.3 µg/mL, respectively. TABLE I
Inhibitory effect (IC50/24 h) and selectivity index (SI) of Piper aduncum essential oil (PaEO) for the treatment of different Trypanosoma cruzi developmental forms, compared with benznidazole (BZ)
Developmental form | IC50/24 h (µg/mL), PaEO | IC50/24 h (µg/mL), BZ | SI, PaEO(a) | SI, BZ(b)
Cell-derived trypomastigotes (28ºC) | 2.8 | 16.1 | 15.3 | 63.4
Cell-derived trypomastigotes (4ºC) | 3.8 | ND | 11.3 | ND
Metacyclic trypomastigotes (28ºC) | 12.1 | 0.3 | 3.5 | 3,646.4
Epimastigotes (28ºC) | 84.7 | 14.7 | 0.5 | 69.4
Amastigotes (37ºC) | 9 | 0.8 | 4.7 | 1,276.2
a: SI = CC50 of PaEO on Vero cells (42.8 µg/mL) / IC50 of PaEO; b: SI = CC50 of BZ on Vero cells (1,021 µg/mL) / IC50 of BZ; ND: not done.
T. cruzi epimastigotes were incubated for 24 h at 28ºC with different concentrations of PaEO or BZ, and dose-response curves estimated the IC50/24 h values as 84.7 µg/mL for PaEO and 14.7 µg/mL (45 µM) for BZ (Table I). Blood transfusion is an important route for Chagas disease transmission, due to the presence of infective trypomastigotes in blood from infected donors. 
Thus, we tested the sensitivity of cell-derived trypomastigotes to PaEO at 4ºC, the temperature used for red blood cell storage in blood banks. Incubation at 4ºC led to inactivation of BZ, as evidenced by the presence of clumps of crystalline material in the medium (likely representing BZ agglutination), and by the absence of trypanosome morphological damage or motility loss, as assessed by light microscopy observation (not shown). Therefore, for the experiments at 4ºC we used gentian violet as a control, since this drug can be used to treat blood potentially infected with Chagas disease (Ramirez et al. 1995). Importantly, cell-derived trypomastigotes were 6,500 times more sensitive to P. aduncum EO (IC50/24 h = 3.8 µg/mL) than to gentian violet (IC50/24 h = 24,700 µg/mL). To estimate PaEO cytotoxicity, we performed cytotoxicity assays in uninfected Vero cells at 37ºC (Table II), and also in human erythrocytes (RBCs) at 4ºC (to mimic red blood cell storage conditions in blood banks). PaEO was more cytotoxic to Vero cells than BZ, with a CC50/24 h of 42.8 µg/mL, compared with 1,021 µg/mL for BZ. At the treatment temperature of 4ºC, gentian violet also exhibited lower cytotoxicity to RBCs (CC50/24 h = 71.4 mg/mL) than PaEO (CC50/24 h = 351.6 µg/mL). TABLE II
Inhibitory effect (IC50/24 h, in µg/mL) and cytotoxicity (CC50/24 h, in µg/mL) of Piper aduncum essential oil (PaEO), gentian violet and linalool for Trypanosoma cruzi cell-derived trypomastigotes, red blood cells (RBCs, at 4ºC) and Vero cells (at 37ºC)
Compound | T. cruzi IC50/24 h | RBCs CC50/24 h | SI (RBCs) | Vero cells CC50/24 h | SI (Vero cells)
PaEO | 3.8 | 351.6 | 92.5 | 42.8 | 11.2
Linalool | 0.31 | 7,341.6 | 23,990 | 0.8 | 2.7
Gentian violet | 24,700 | > 70,000 | > 2.8 | 71,400 | 2.9
Selectivity index (SI) analysis - representing the ratio between CC50/24 h and IC50/24 h values - indicates that PaEO is particularly selective towards cell-derived T. cruzi trypomastigotes (SI = 15.3 and 11.3 for treatment at 28ºC and 4ºC, respectively, relative to the Vero cell CC50; Table I). Importantly, at the treatment temperature of 4ºC, PaEO was more selective towards this form of the parasite than gentian violet (SI = 2.9). PaEO effectively inhibits the intracellular survival/replication of T. cruzi amastigotes - To test the effect of PaEO on amastigotes, Vero cells were infected with trypomastigotes and, after 24 h (when trypomastigotes had differentiated into amastigotes), the infected cultures were incubated with different concentrations of PaEO (or BZ, as a reference). PaEO at the concentration of 12.5 µg/mL decreased the T. cruzi amastigote infection index by 71.5%, similarly to 10 µg/mL BZ (81.3% decrease; Table III), showing that, at this concentration, PaEO is as effective as the standard drug used for Chagas disease treatment. However, BZ was much more efficient than PaEO at inhibiting intracellular amastigote replication and survival, with an IC50/24 h (calculated from infection index data) of 0.8 µg/mL, compared with 9 µg/mL for PaEO (Table I). Also, strong cytotoxic effects occurred after treatment with 100 µg/mL PaEO (data not shown). TABLE III
Effect of 24 h-treatment with Piper aduncum essential oil (PaEO) on Trypanosoma cruzi intracellular amastigotes in Vero cells, compared with benznidazole (BZ)
Drug | Concentration (µg/mL) | Infected cells (%) | Amastigotes per cell | Infection index (II) | II decrease (%)
PaEO | 0 | 79.1 | 7.1 | 562.1 | 0
PaEO | 12.5 | 54.6 | 2.9 | 160.1 | 71.5
PaEO | 50 | 32.5 | 0.4 | 12.7 | 97.7
PaEO | 100 | 0 | 0 | 0 | 100
BZ | 0 | 87.3 | 11.5 | 1,003.7 | 0
BZ | 10 | 74.9 | 2.5 | 187.2 | 81.3
BZ | 50 | 0.4 | 0.9 | 0.3 | 99.9
To examine the mechanism of action of PaEO on T. 
cruzi, epimastigotes were incubated for 24 h at 28ºC with the EO and then subjected to cell cycle analysis by labeling with propidium iodide, or were labelled with rhodamine 123 to analyse the effect of PaEO on the mitochondrial membrane potential. Although epimastigotes do not have direct medical importance, this life cycle form is ideal for in vitro analysis, because it can be cultivated easily in axenic medium. Treatment of epimastigotes with PaEO at the IC50/24 h did not significantly alter the number of cells in the G1 phase of the cell cycle, but there was a significant decrease in the number of cells in G2 (Fig. 2A-B). Labeling with rhodamine 123 showed that PaEO decreased the mitochondrial membrane potential of nearly 98% of the tested epimastigotes (Fig. 2C-D), indicating a possible target of this compound. Fig. 2: effect of Piper aduncum essential oil (PaEO) on Trypanosoma cruzi epimastigotes, as analysed by flow cytometry. (A-B) Cell cycle analysis. Epimastigotes were treated for 24 h at 28°C with the IC50/24 h value of PaEO and then incubated with propidium iodide (PI). (A) Histograms of the PI-stained cell populations; (B) mean number of cells in each cell cycle stage. *: p < 0.05. (C-D) Mitochondrial membrane potential analysis. Epimastigotes were kept for 24 h at 28°C with the IC50/24 h value of PaEO and then incubated with rhodamine 123. CCCP (carbonyl cyanide 3-chlorophenylhydrazone): positive control. (C) Histograms of the rhodamine 123-stained populations; (D) percentage of labelled cells after each treatment. Chemical composition analysis by gas chromatography/mass spectrometry (GC-MS) showed that the main constituents of the PaEO used in this study are nerolidol (25.22%) and linalool (13.42%) (Table IV). Aside from these two main constituents, PaEO contained several minor components (each accounting for ~1-5% of the total area; Table IV). Therefore, we tested the activities of linalool and nerolidol against cell-derived trypomastigotes at 4ºC (for 24 h), because the activity of PaEO against this developmental form appeared to be particularly promising (Table I). Linalool had clear trypanocidal activity, with an IC50/24 h of 306 ng/mL at 4ºC (Table II), indicating that this compound is approximately 52 times more effective against cell-derived trypomastigotes than BZ at 28ºC (Table I). Cytotoxicity tests on Vero cells (at 4ºC) demonstrated, however, that linalool was cytotoxic, with a CC50/24 h of 823.6 ng/mL, which resulted in a selectivity index (SI) of 2.69 for the treatment of cell-derived trypomastigotes at 4ºC (Table II). 
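The selectivity and infection indices used above are simple arithmetic on the measured values. A minimal sketch of that arithmetic (Python; the function names are ours, and small deviations from the tabulated figures reflect rounding of the published inputs):

# Sketch of the arithmetic behind the selectivity index (SI, Tables I-II)
# and the infection index (II, Table III). Input values are the published
# ones; minor deviations from the tables reflect rounding.

def selectivity_index(cc50, ic50):
    # SI = CC50 (host-cell cytotoxicity) / IC50 (anti-parasite activity)
    return cc50 / ic50

def infection_index(pct_infected_cells, amastigotes_per_cell):
    # II = % infected cells x number of amastigotes per cell
    return pct_infected_cells * amastigotes_per_cell

# Linalool against cell-derived trypomastigotes at 4ºC:
# CC50/24 h = 823.6 ng/mL (Vero cells), IC50/24 h = 306 ng/mL
print(round(selectivity_index(823.6, 306), 2))   # -> 2.69, as in Table II

# PaEO at 12.5 µg/mL (Table III): 54.6% infected cells, 2.9 amastigotes/cell
print(round(infection_index(54.6, 2.9), 1))      # -> 158.3 (~160.1 tabulated)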
TABLE IV
Chemical composition of Piper aduncum essential oil (PaEO), as assessed by gas chromatography/mass spectrometry (GC/MS)
Constituent | Retention time | Area | Area (%)
Nerolidol | 31.13 | 18,000,000 | 25.22
Linalool | 11.83 | 9,336,350 | 13.42
Spatulenol | 31.77 | 4,374,871 | 6.29
β-ocimene | 9.76 | 3,237,239 | 4.65
2-cyclohexen-1-ol | 25.48 | 3,092,676 | 4.44
Germacrene D | 27.99 | 2,804,545 | 4.03
β-chamigrene | 28.59 | 2,788,502 | 4.01
E-β-farnesene | 26.95 | 2,509,366 | 3.61
γ-cadinene | 29.37 | 2,439,762 | 3.51
α-epi-muurolol | 34.25 | 1,893,537 | 2.72
α-bisabolol oxide | 34.75 | 1,867,181 | 2.68
α-ocimene | 9.36 | 1,807,180 | 2.60
Turmerone | 32.49 | 1,437,434 | 2.07
Caryophyllene oxide | 31.98 | 1,248,458 | 1.79
α-copaene | 23.65 | 1,169,184 | 1.68
Viridiflorol | 33.04 | 1,117,593 | 1.61
β-pinene | 7.5 | 988,059 | 1.42
α-cadinol | 34.43 | 975,559 | 1.40
α-cubebene | 24.16 | 884,691 | 1.27
δ-cadinene | 29.5 | 854,201 | 1.23
α-pinene | 6.2 | 680,357 | 0.98
Safrole | 19.94 | 640,257 | 0.92
In contrast to linalool, nerolidol had no appreciable trypanocidal activity against cell-derived trypomastigotes (4ºC, 24 h) at concentrations between 100 ng/mL and 1 µg/mL, as assessed by light microscopy observation and MTT assays (data not shown). Also, nerolidol did not affect the integrity of Vero cell monolayers after treatment for 24 h with the highest concentration used (1 µg/mL; not shown). These data indicate that the trypanocidal effect of PaEO was due to linalool. DISCUSSION: The microbicidal activity of EOs - against fungi, bacteria and viruses - is used primarily as a defense to ensure plant survival (Bakkali et al. 2008). A number of EOs are active against T. cruzi epimastigotes and trypomastigotes (Santoro et al. 2007a, b, Escobar et al. 2010, Borges et al. 2012, Silva et al. 2013, Azeredo et al. 2014), and recent studies showed that EOs from different pepper species (Piper spp.) have promising leishmanicidal (Monzote et al. 2010) and trypanocidal (Esperandim et al. 2013) activities. Nevertheless, the EO of the pepper species P. aduncum - whose extracts have clear trypanocidal activity - had not been tested previously against T. cruzi. In the present study, we show that T. cruzi trypomastigotes and amastigotes are sensitive to PaEO at concentrations of 2.8 to 12.1 µg/mL (Table I). Importantly, the highly infective cell-derived trypomastigote forms - which can be found in contaminated blood from Chagas disease patients - were particularly sensitive to PaEO, with an IC50/24 h of 2.8 µg/mL. Also, treatment of T. cruzi trypomastigotes and human erythrocytes with PaEO at 4ºC resulted in an excellent selectivity index (SI = 92.5) (Table II). The first desirable property of a promising drug candidate is an SI > 50 (Romanha et al. 2010); thus, PaEO (or its main constituent linalool) is a promising alternative for further testing on blood/blood cells, under the thermal conditions of red blood cell storage (4ºC), in order to obtain derivatives effective at eliminating T. cruzi in blood bank samples. In our tests with cell-derived trypomastigotes at 4ºC, the use of BZ for 24 h had no effect against the parasite (not shown). The incomplete trypanocidal effect of BZ at 4ºC can lead to Chagas disease transmission via transfusion of contaminated blood (Martin et al. 2014). Gentian violet was the reference drug of choice in our tests at 4ºC, because it can be used to control T. cruzi infection in blood banks (Ramirez et al. 1995). However, prophylaxis with gentian violet produces side effects, such as staining of the mucosa and erythrocyte agglutination (Docampo & Moreno 1990). In this study, most testing was performed for 24 h, to minimise the effects of nutrient starvation and of toxicity due to parasite lysis. 
We believe that these conditions are closer to those found in the mammalian host, where toxin waste is eliminated through the bloodstream. Our data with BZ differ from those reported in other studies, where this drug was often tested for 72 h, yielding lower IC50 values (Franklim et al. 2013, Hamedt et al. 2014). To date, no standard conditions for drug testing against T. cruzi have been defined. Therefore, differences in parasite strains, incubation times and temperatures, and culture media may all account for variations observed in experimental drug testing, and are likely to delay drug development for Chagas disease treatment (Andrade et al. 1985). Mitochondrial membrane potential evaluation of epimastigotes by flow cytometry demonstrated that PaEO decreased the mitochondrial membrane potential of ~98% of the treated cells, with similar results obtained with the CCCP positive control. Therefore, PaEO may be acting on the parasite mitochondrion. Accordingly, it has already been shown that other natural products induce depolarisation of the mitochondrial membrane in Trypanosomatids such as T. cruzi and Leishmania (Inacio et al. 2012, Caleare et al. 2013, Ribeiro et al. 2013, Takahashi et al. 2013, Aliança et al. 2014, Corpas-López et al. 2016, Sülsen et al. 2016). EOs may affect different cellular targets, due to variations in their molecular composition. As they contain lipophilic molecules, the mechanism of action of EOs involves breakage and/or crossing of the plasma membrane (Knobloch et al. 1989, Sikkema et al. 1994). While our data showed that PaEO may target the mitochondrion, we cannot exclude the possibility that mitochondrial damage is a secondary effect of drug treatment. The PaEO used in this work had nerolidol (25.22%) and linalool (13.42%) as its main constituents. GC-MS analysis of the essential oil of P. aduncum in other studies showed that it can yield four major constituents: (a) dillapiole, at 79.9-86.9% (Almeida et al. 2009); (b) nerolidol, at 79.2-82.5% (Oliveira et al. 2006); (c) cineole, at 54% (Oliveira et al. 2013); and (d) linalool, at 31-41% (Navickiene et al. 2006). This variation in composition may be explained by the collection of plants from different regions, which are exposed to different environmental factors. Thus, to minimise EO composition variation between studies, the collected material should always come from the same location. The best thermal storage condition for P. aduncum EO is 20ºC (for up to six months), without loss of regenerative capacity (da Silva & Scherwinski-Pereira 2011). In this work we show that nerolidol - the major PaEO component - failed to significantly affect the cell-derived trypomastigote form of T. cruzi at 4ºC, even at the concentration of 1 µg/mL. In contrast, linalool had a potent trypanocidal effect against this parasite form, with an IC50/24 h of 306 ng/mL. These results show that the major component of an EO may not always be responsible for its lytic activity. Interestingly, incubation of linalool with T. cruzi blood trypomastigotes (Y strain) resulted in an IC50/24 h value of 264 µg/mL (Santoro et al. 2007b), indicating that trypomastigote forms of different origin (blood, cell culture or in vitro differentiation) and from different strains may differ in their susceptibility to EO derivatives. Our cytotoxicity tests in Vero cells showed that cell monolayers remained intact after nerolidol treatment, without clear changes or toxic effects, even at the concentration of 1 µg/mL. 
However, linalool was cytotoxic, with a CC50/24 h of 823.6 ng/mL, indicating that the major constituent is not necessarily the one responsible for EO toxicity towards mammalian cells, as suggested in other studies (Almeida et al. 2009, Oliveira et al. 2013, Liu et al. 2014). Also, linalool was approximately 80,000 times more efficient against cell-derived trypomastigotes at 4ºC than gentian violet, the drug commonly used to treat potentially contaminated blood bags. In conclusion, our data indicate that the P. aduncum essential oil component linalool is a promising compound for further studies on the trypanocidal treatment of red blood cell bags at 4ºC prior to transfusion, to prevent T. cruzi transmission via this important route. To improve the safety profile of linalool, combinations with less toxic compounds (including benznidazole) should be tested, and linalool could be used as a lead for further drug development.
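The Materials and Methods above describe generating dose-response curves and IC50/24 h values from fraction-affected (Fa) data with the CompuSyn software. As an illustration only - a generic stand-in, not the authors' CompuSyn workflow - a comparable estimate can be obtained by fitting a Hill-type curve to MTT readings; every number below is a hypothetical placeholder, not data from the study:

# Hedged sketch: estimating an IC50/24 h from MTT absorbance data with a
# Hill-type dose-response fit. NOT the study's CompuSyn analysis; all
# values below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([9.0, 18.0, 37.0, 75.0, 150.0, 300.0])        # µg/mL (PaEO series)
a550_treated = np.array([0.92, 0.80, 0.61, 0.38, 0.17, 0.06])  # hypothetical A550
a550_control = 0.98                                            # untreated (0.5% DMSO)

# Fraction of affected cells relative to the untreated control, as in Methods
fa = 1.0 - a550_treated / a550_control

def hill(c, ic50, slope):
    # Fraction affected as a function of drug concentration c
    return c**slope / (ic50**slope + c**slope)

(ic50, slope), _ = curve_fit(hill, conc, fa, p0=[50.0, 1.0])
print(f"IC50/24 h ~ {ic50:.1f} µg/mL (Hill slope {slope:.2f})")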
Background: Recent studies showed that essential oils from different pepper species (Piper spp.) have promising leishmanicidal and trypanocidal activities. Methods: PaEO chemical composition was obtained by GC-MS. Drug activity assays were based on cell counting, MTT data or infection index values. The effect of PaEO on the T. cruzi cell cycle and mitochondrial membrane potential was evaluated by flow cytometry. Results: PaEO was effective against cell-derived (IC50/24 h: 2.8 μg/mL) and metacyclic (IC50/24 h: 12.1 μg/mL) trypomastigotes, as well as intracellular amastigotes (IC50/24 h: 9 μg/mL). At 4ºC - the storage temperature of red blood cells (RBCs) in blood banks - cell-derived trypomastigotes were more sensitive to PaEO (IC50/24 h = 3.8 μg/mL) than to gentian violet (IC50/24 h = 24.7 mg/mL). Cytotoxicity assays using Vero cells (37ºC) and RBCs (4ºC) showed that PaEO has increased selectivity for cell-derived trypomastigotes. Flow cytometry analysis showed that PaEO does not affect the cell cycle of T. cruzi epimastigotes, but decreases their mitochondrial membrane potential. GC-MS data identified nerolidol and linalool as major components of PaEO, and linalool had a trypanocidal effect (IC50/24 h: 306 ng/mL) at 4ºC. Conclusions: The trypanocidal effect of PaEO is likely due to the presence of linalool, which may represent an interesting candidate for use in the treatment of potentially contaminated RBC bags at low temperature.
null
null
5,672
289
[]
3
[ "paeo", "ml", "cells", "24", "cell", "µg ml", "µg", "trypomastigotes", "cruzi", "4ºc" ]
[ "cells parasites", "trypomastigotes culture epimastigotes", "trypomastigotes 107 cells", "trypomastigotes produced", "cells parasites incubated" ]
null
null
null
null
null
null
[CONTENT] essential oil | linalool | Piper aduncum | Trypanosoma cruzi | trypanocidal activity [SUMMARY]
null
[CONTENT] essential oil | linalool | Piper aduncum | Trypanosoma cruzi | trypanocidal activity [SUMMARY]
null
null
null
[CONTENT] Acyclic Monoterpenes | Animals | Biological Assay | Chlorocebus aethiops | Cold Temperature | Gas Chromatography-Mass Spectrometry | Inhibitory Concentration 50 | Monoterpenes | Oils, Volatile | Piper | Trypanocidal Agents | Trypanosoma cruzi | Vero Cells [SUMMARY]
null
[CONTENT] Acyclic Monoterpenes | Animals | Biological Assay | Chlorocebus aethiops | Cold Temperature | Gas Chromatography-Mass Spectrometry | Inhibitory Concentration 50 | Monoterpenes | Oils, Volatile | Piper | Trypanocidal Agents | Trypanosoma cruzi | Vero Cells [SUMMARY]
null
null
null
[CONTENT] cells parasites | trypomastigotes culture epimastigotes | trypomastigotes 107 cells | trypomastigotes produced | cells parasites incubated [SUMMARY]
null
[CONTENT] cells parasites | trypomastigotes culture epimastigotes | trypomastigotes 107 cells | trypomastigotes produced | cells parasites incubated [SUMMARY]
null
null
null
[CONTENT] paeo | ml | cells | 24 | cell | µg ml | µg | trypomastigotes | cruzi | 4ºc [SUMMARY]
null
[CONTENT] paeo | ml | cells | 24 | cell | µg ml | µg | trypomastigotes | cruzi | 4ºc [SUMMARY]
null
null
null
[CONTENT] paeo | 24 | ml | µg | µg ml | cells | table | trypomastigotes | cell | ic50 [SUMMARY]
null
[CONTENT] paeo | ml | cells | 24 | cell | µg ml | µg | trypomastigotes | cruzi | 4ºc [SUMMARY]
null
null
null
[CONTENT] 2.8 μg/mL | 12.1 μg/mL | 9 μg/mL ||| 3.8 μg/mL | gentian | 24.7 ||| Vero | 4ºC ||| T. cruzi ||| GC-MS | PaEO | 306 ng/mL | 4ºC. [SUMMARY]
null
[CONTENT] ||| ||| ||| GC-MS ||| MTT ||| ||| ||| 2.8 μg/mL | 12.1 μg/mL | 9 μg/mL ||| 3.8 μg/mL | gentian | 24.7 ||| Vero | 4ºC ||| T. cruzi ||| GC-MS | PaEO | 306 ng/mL | 4ºC. Conclusions | PaEO [SUMMARY]
null
Assessment of a couples HIV counseling and testing program for pregnant women and their partners in antenatal care (ANC) in 7 provinces, Thailand.
25539670
Couples HIV testing and counseling (CHTC) at antenatal care (ANC) settings allows pregnant women to learn the HIV status of themselves and their partners. Couples can make decisions together to prevent HIV transmission. In Thailand, men were tested at ANC settings only if their pregnant partners were HIV positive. A CHTC program based in ANC settings was developed and implemented at 16 pilot hospitals in 7 provinces during 2009-2010.
BACKGROUND
Cross-sectional data were collected using standard data collection forms from all pregnant women and accompanying partners who presented at first ANC visit at 16 hospitals. CHTC data for women and partners were analyzed to determine service uptake and HIV test results among couples. In-depth interviews were conducted among hospital staff of participating hospitals during field supervision visits to assess feasibility and acceptability of CHTC services.
METHODS
During October 2009-April 2010, 4,524 women initiating ANC were enrolled. Of these, 2,435 (54%) women came for ANC alone; 2,089 (46%) came with partners. Among men presenting with partners, 2,003 (96%) received couples counseling. Of these, 1,723 (86%) men and all pregnant women accepted HIV testing. Among 1,723 couples testing for HIV, 1,604 (93%) returned for test results. Of these, 1,567 (98%) were concordant negative, 6 (0.4%) were concordant positive and 17 (1%) were HIV discordant (7 male+/female- and 10 male-/female+). Nine of ten (90%) executive hospital staff reported high acceptability of CHTC services.
RESULTS
CHTC implemented in ANC settings helps identify more HIV-positive men whose partners were negative than previous practice, with high acceptability among hospital staff.
CONCLUSIONS
[ "Adult", "Attitude of Health Personnel", "Counseling", "Cross-Sectional Studies", "Family Characteristics", "Female", "HIV Infections", "Humans", "Male", "Mass Screening", "Men", "Middle Aged", "Patient Acceptance of Health Care", "Personnel, Hospital", "Pregnancy", "Pregnancy Complications, Infectious", "Pregnant Women", "Prenatal Care", "Sexual Partners", "Thailand", "Young Adult" ]
4301829
Background
In 2012, the World Health Organization (WHO) recommended that couples and partners of pregnant women in antenatal care (ANC) settings should be offered voluntary counseling and testing (VCT) with support for mutual disclosure [1,2], and also that antiretroviral therapy (ART) should be offered to HIV positive partners in serodiscordant relationships regardless of CD4 status [1] in order to reduce HIV transmission to uninfected partners [3]. Offering couples HIV testing and counseling (CHTC) at ANC settings provides many benefits including: increasing male participation in ANC services, enhancing communication between couples about safe sex practices [4,5], encouraging men to get tested and to know their HIV status, and preventing new HIV infections [1]. Couples who are aware of their partner’s and their own HIV status are more likely to adopt safe sex behaviors than people who are unaware of their HIV status [6,7]. If one or both members of a couple test positive, they can access and adhere to HIV treatment and care and interventions for prevention of mother-to-child HIV transmission (PMTCT). HIV-uninfected pregnant women and partners also receive benefits from CHTC through better-informed decisions about HIV prevention and reproductive health including contraception [1]. Offering CHTC at ANC settings may mitigate CHTC barriers and stigma discrimination because CHTC can be integrated into other routine maternal child health services and male participation activities routinely provided in ANC settings [8]. However, CHTC can be complex to implement because of limited number of staff at service delivery sites, acceptability problems among health care providers and clients, and potential adverse family consequences including conflict, separation, and intimate partner violence. Most reports from CHTC experiences are from projects based in low-income countries, particularly in sub-Saharan Africa [9,10], and Cambodia [11]; none exist for Thailand. Thailand, an upper middle-income country, has a concentrated HIV epidemic with prevalence of 1.1% of the adult population [12]. Approximately 800,000 pregnant women deliver in Thailand each year, and 98% of pregnant women in Thailand receive ANC at health facilities, where HIV testing is routine and nearly all accept HIV testing [13]. In most ANC settings in Thailand, VCT is provided to men only when their pregnant partners are HIV positive. Studies in Thailand have reported high (30-50%) serodiscordant rates among HIV-infected couples [14]; about 32% of new HIV infections in 2012 were among low-risk co-habitating couples (e.g., husband to wife and wife to husband) [15]. One study in Thailand reported that 0.05% of pregnant women had HIV seroconversion during pregnancy (presumably from their HIV positive partners) [16]. Although Thailand has implemented a successful PMTCT program, the 2008 national PMTCT program evaluation [17] reported that only half of the partners of HIV-infected pregnant women received HIV testing within 6 months after delivery. In addition, only 15% of HIV-infected women had received CHTC at ANC settings, but more than 70% were interested in receiving CHTC [17] for the opportunity to more openly communicate with their partners about HIV and PMTCT during pregnancy and the postpartum period [17]. These data highlight the need for improved HIV testing rates in couples at ANC settings in Thailand. 
In this paper, we describe the pilot implementation of a CHTC program in ANC settings in 17 hospitals in 7 provinces in Thailand during 2009–2010. Following this pilot implementation, the 2010 Thailand national PMTCT guidelines [18,19] recommended routine CHTC in ANC clinics at public hospitals. Lessons learned from this pilot program provide recommendations for the improvement and scale-up of this important program.
null
null
Results
Characteristics of CHTC pilot sites
Among the 17 hospitals invited to participate in the CHTC trainings and pilot project implementation, 16 hospitals (94%) were able to implement CHTC services (Table 1). One community hospital was unable to implement CHTC services due to staffing limitations. One community hospital implemented CHTC services but did not collect and submit data to DOH; therefore, we report data from 15 (94%) of 16 pilot sites. The median number of counselors or nurses who provided CHTC in provincial hospitals, community hospitals, and health promotion center hospitals was 4 (range: 1–4), 3.5 (range: 2–6), and 4 (range: 3–5), respectively. All 16 hospitals also provided individual HIV counseling and testing for pregnant women and/or partners. Four (80%) of five provincial hospitals, two (25%) of eight community hospitals, and two (67%) of three health promotion center hospitals provided group HIV information before individual or couples pre-test HIV counseling (Table 1). There was significant variation in the proportion of pregnant women presenting with partners, and in the proportion of couples accepting HIV testing, by hospital type and region (p < 0.01) (data not shown).
Uptake of pre- and post-test individual vs. CHTC
From October 2009 to April 2010, a total of 4,524 women were enrolled from ANC clinics at the 15 hospitals (Figure 1). Of these, 2,089 (46%) women presented with their partners. Of these, 2,003 (96%) couples received pretest counseling (Figure 2), including 994 (50%) couples receiving couples pretest counseling, 864 (43%) receiving pre-test couples group education followed by couples consent, and 145 (7.3%) receiving individual risk assessment followed by couples counseling (data not shown). Among the 2,003 couples receiving pretest HIV counseling, 1,723 (86%) accepted HIV testing, and 1,604 (93%) of these returned for post-test counseling (Figure 2). Among couples who returned for post-test counseling, 1,443 (90%) received couples post-test counseling and the remainder received individual post-test counseling with or without mutual disclosure (data not shown). Cascades of pre-test and post-test counseling of pregnant women and their partners are shown in Figure 2 (Figure 2: uptake of couples HIV counseling and testing services among pregnant women and partners, Oct 2009 - April 2010).
Among 2,435 women who presented alone, 2,415 (99%) received pre-test HIV counseling. Of these, 1,341 (55%) received individual HIV counseling and testing, and 1,072 (44%) received group HIV education and consent. Of the 2,415 women receiving pre-test counseling, 2,324 (96%) accepted HIV testing. Two thousand two hundred and fifty-nine (97%) women returned for post-test HIV counseling.
HIV-infection status among couples receiving CHTC and among pregnant women receiving individual counseling and testing
Among 1,604 couples returning for post-test counseling, 1,567 (98%) were HIV concordant negative, 6 (0.4%) were HIV concordant positive, 17 (1%) were HIV discordant (7 male+/female- and 10 male-/female+), and 14 (0.9%) had no HIV test results (Figure 3) (Figure 3: HIV test results; F: female, M: male, C: couples).
Of 4,329 women who received HIV testing, 4,213 (97%) returned for post-test counseling. Of these, 4,166 (99%) were HIV negative and 39 (0.9%) were HIV positive. In this study, the overall HIV prevalence among men was 0.86% (15/1,746) and among pregnant women was 0.93% (40/4,298) (data not shown).
Reasons for HIV test refusal among male partners
Of 2,005 men receiving pretest counseling, 1,759 (88%) accepted HIV testing, including all partners of HIV-positive pregnant women. A higher proportion of men who received pre-test counseling at a provincial hospital accepted testing (891 (93%)) than men receiving pre-test counseling at either a community hospital (328 (82%)) or a health promotion center hospital (504 (78%)) (Table 1). Among 246 men who refused HIV testing, 192 (78%) gave reasons for refusing testing. The main reasons were: did not want to test (115 (60%)); thought they had no risk (25 (13%)); wanted to test at hospitals near their residence (19 (10%)); fear of needles (17 (9%)); already knew their HIV status (10 (5%)); and "other" (6 (3%)) (data not shown).
Feasibility and acceptability of CHTC services
Among the 15 hospitals, 10 (67%) hospital executive administrators participated in the in-depth interviews, including two hospital directors, four attending physicians, and four chief ANC nurses. Five hospital executive administrators were not available to participate in the interview on the days of the supervision visits. Median age was 43 years (range: 20–60 years) and 5 (50%) were male. Of these, 9 (90%) reported high acceptability of CHTC services due to perceived benefits for pregnant women, which included: early access to HIV prevention, treatment, and care; improved couples communication; and the ability to integrate the program with existing services. Nine interviewees (90%) thought that CHTC services could be integrated into routine services after the end of the pilot period. One interviewee reported "needing more data before making a conclusive assessment about the benefits of CHTC", concerned that "some couples might have confidentiality concerns and may prefer individual HIV counseling and testing". Supportive systems for CHTC implementation available at hospitals included: written policies for CHTC (7/10); a CHTC working group (9/10); CHTC training (10/10); and CHTC campaigns at hospitals (8/10). Key barriers included: inadequate number of staff (5/10); staff being too busy (3/10); lack of tools (2/10); no clearly defined leader for CHTC services on the team (2/10); provision of CHTC services not being included in promotion criteria for staff (2/10); and no clear policy or standardized guidelines on how to provide CHTC in ANC settings (1/10). Concerns included potential negative consequences of CHTC for discordant couples, and additional staff workload.
Twenty-eight health care workers (HCWs) from the 15 hospitals participated in the survey, including 17 nurses and 6 counseling nurses. Median age was 43 years (range: 33–54 years) and 26 (93%) were female. They reported between 2 and 65 clients per day in the clinic, with each HCW providing CHTC to up to 16 couples per day. More than 85% of HCWs reported that they were confident in providing post-test counseling for couples with HIV seroconcordant negative, seroconcordant positive, and serodiscordant results. Among 22 HCWs, 10 (45%) provided group pre-test CHTC and 6 (27%) provided group post-test CHTC. Of 14 HCWs who reported having experience in providing post-test counseling for discordant couples, 4 (29%) reported negative consequences of mutual disclosure of HIV results for some clients, such as separation or family conflict. HCWs reported barriers to CHTC program implementation that included: high workload and a limited number of staff trained in CHTC, leading to extended waiting times for clients; lack of clear national and hospital policies for CHTC implementation; a lack of standardized tools and materials to promote male participation in CHTC services at hospitals and in the community; partners of pregnant women refusing to participate in CHTC services; limited private space to provide CHTC services; and no waiting area for partners of pregnant women. HCWs provided recommendations for improvement of CHTC program implementation, including: developing and disseminating clear CHTC national and hospital policies; free HIV testing support for partners of pregnant women; better support from program managers; an adequate number of staff allocated for counseling; provision of private rooms for counseling and better space for group education; ongoing CHTC training and supervision for HCWs; development of a CHTC manual on how to organize CHTC services in hospitals with high numbers of new ANC cases and a limited number of counselors; and tools and materials to promote access of male partners to CHTC services and to use as guidance for new counselors. In addition, several HCWs interviewed suggested that CHTC uptake should be included as a national target indicator for maternal child health counseling.
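The couples-arm service cascade above reduces to a chain of step-to-step proportions; as a check, the published counts reproduce the reported percentages (a minimal Python sketch, not part of the original STATA analysis):

# Recomputing the CHTC cascade percentages reported above from the
# published counts (couples arm, Oct 2009 - April 2010).
cascade = [
    ("Women enrolled at first ANC visit", 4524),
    ("Presented with partners", 2089),
    ("Couples received pre-test counseling", 2003),
    ("Couples accepted HIV testing", 1723),
    ("Couples returned for post-test counseling", 1604),
]
for (step, n), (_, prev) in zip(cascade[1:], cascade):
    print(f"{step}: {n} ({100 * n / prev:.0f}% of the previous step)")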
Conclusions
CHTC implemented in ANC settings helped identify more HIV-positive men whose partners were HIV-negative than the previous practice did, with high acceptability among HCWs. Major concerns were negative consequences of CHTC for discordant couples and the additional workload of the program. Among couples who came to the first ANC visit together, a high proportion received HIV counseling and testing. Strategies are needed to increase the number and proportion of male partners who access CHTC services. To implement CHTC, hospitals need clear policies, support from hospital leaders, trained and supported personnel, private space, training, tools, and materials. There exists an opportunity to identify new HIV cases, and prevent infection among children - both major aspects of a comprehensive HIV program, and achievable in Thailand.
[ "CHTC training and implementation in ANC setting", "Study population and procedures", "Data analysis", "Assessment of feasibility and acceptability of CHTC services", "Assessment of staff responsible for CHTC services on feasibility, confidence, and experience in providing CHTC services", "Ethics approval", "Characteristics of CHTC pilot sites", "Uptake of pre-and post-test individual vs. CHTC", "HIV-infection status among couples receiving CHTC and among pregnant women receiving individual counseling and testing", "Reasons for HIV test refusal among male partners", "Feasibility and acceptability of CHTC services" ]
[ "The Thai Ministry of Public Health (MOPH) and Thailand MOPH-U.S. Centers for Disease Control and Prevention Collaboration (TUC) developed a pilot project in 2008 in order to assess the feasibility and acceptability of CHTC implementation in Thai ANC settings. A training manual [20] was developed in Thai, adapted from the U.S. Centers for Disease Control and Prevention (CDC) CHTC manual [21]. In February 2009, a 4-day CHTC training for ANC settings was conducted by trainers from the MOPH Department of Health (DOH), TUC, and counselors from tertiary care hospitals, for 32 service providers (ANC nurses and counselors who provided individual VCT for pregnant women) comprising 17 hospitals in 7 provinces. Key contents of the training included the importance of CHTC in ANC services, psychosocial issues relating to couples relationships, counselor self-awareness, counseling skills required to work effectively with couples, guidance on providing pre-test and post-test CHTC (for HIV-seroconcordant negative, seroconcordant positive, and serodiscordant results), and administrative management to set up couples counseling service systems. Training methods included didactic sessions, role play, video demonstrations, small group discussions, and lectures. Following the training, trainees returned to their hospitals, trained their ANC teams, and implemented CHTC services as part of routine ANC services.", "From October 2009 to April 2010, all pregnant women and their partners at 17 hospitals in 7 provinces in the North, Northeastern, Central, and Southern regions of Thailand (Figure 1) were offered routine ANC and CHTC services, and were asked to consent to HIV testing at first ANC visit by counselors or nurses. A convenience sample of participants was recruited based on availability of staff. We did not collect data on the total number of pregnant women, their partners, or the number who were approached to participate at first ANC visit. Enrollment targeted 30 HIV-positive persons among those receiving CHTC services. We anticipated that enrollment of 30 HIV-positive persons would enable us to enroll at least 10 serodiscordant couples, including HIV-positive men and negative pregnant women. This number could demonstrate benefits of CHTC in ANC settings in identifying more HIV-positive men whose partners were negative than previous practice. CHTC is defined as when two partners are counseled, tested and receive their results together; in this way they can mutually disclose their HIV status with one another.Figure 1\nGeographic area of seven provinces participated in CHTC project.\n\n\nGeographic area of seven provinces participated in CHTC project.\n\nCharacteristics of hospitals are shown in Table 1. Cross-sectional data were collected from pregnant women who came alone or with their partners at first visit, by interview using standard data collection forms. Pregnant women who came alone were offered routine ANC and encouraged to invite their partners for CHTC during the following visit. No incentives were provided for participation. 
A token amount of funding was provided to a nurse or a counselor who collected data (2 USD for data collection per case).
Table 1. Characteristics of CHTC pilot implementing sites, 2009-10 (columns: hospital | new ANC cases during reporting period (total n = 4,524) | pregnant women who came with partners, n (%) (total 2,089) | couples who received pretest couples counseling, n (%) (total 2,003) | couples who accepted HIV testing, n (%) (total 1,723) | counselors providing CHTC, n | type of CHTC):
Provincial hospitals:
1. North | 603 | 329 (54.6) | 322 (97.9) | 311 (96.6) | 4 | IC, CC, GE
2. Northeast | 761 | 119 (15.6) | 118 (99.2) | 117 (99.2) | 4 | IC, CC, GE
3. Central | 442 | 385 (86.9) | 364 (94.5) | 325 (89.3) | 4 | IC, CC, GE
4. South | 401 | 71 (17.7) | 71 (100) | 59 (83.1) | 1 | IC, CC, GE
5. South | 340 | 85 (25) | 85 (100) | 79 (92.9) | 4 | IC, CC
Community hospitals:
6. North | 43 | 32 (74.4) | 32 (100) | 32 (100) | 4 | IC, CC
7. North | 136 | 43 (31.6) | 43 (100) | 43 (100) | 5 | IC, CC
8. Northeast | 199 | 78 (39.2) | 78 (100) | 75 (96.2) | 3 | IC, CC, GE
9. Northeast | 3 | 3 (100) | 3 (100) | 3 (100) | 2 | IC, CC
10. Central | not in database | not in database | not in database | not in database | 6 | IC, CC
11. South | 346 | 108 (31.2) | 107 (99.1) | 47 (43.9) | 3 | IC, CC
12. South | 130 | 60 (46.2) | 59 (98.3) | 57 (96.6) | 5 | IC, CC
13. South | 142 | 81 (57) | 76 (93.8) | 71 (93.4) | 2 | IC, CC, GE
Health promotion center hospitals:
14. North | 410 | 284 (69.3) | 269 (94.7) | 150 (55.8) | 3 | IC, CC, GE
15. Northeast | 523 | 394 (75.3) | 359 (91.1) | 339 (94.4) | 5 | IC, CC, GE
16. South | 45 | 17 (37.8) | 17 (100) | 15 (88.2) | 4 | IC, CC
IC = individual counseling; CC = couples counseling; GE = group education and consent.
HIV counseling and testing data were collected at each woman's first ANC visit, including types of pre- and post-test counseling, uptake of HIV testing by couples, reasons for not testing individually or as a couple, and HIV test results. Each hospital reported data from monthly paper-based monitoring forms for data entry at DOH in an electronic file using MS Access. Partners who did not return for posttest counseling within 3 months were defined as missing. Patient hospital numbers were recorded in the paper forms at the hospitals but were excluded from electronic transmissions to DOH. A couple code, linking separate records for patients and their partners, was used as an identifier in the transmissions to DOH and in the electronic database.
HIV enzyme-linked immunosorbent assays (ELISA) were performed for HIV-1 for those who consented to HIV testing. Each participant was asked individually whether they would prefer to receive HIV test results and posttest counseling alone or as a couple. HIV test results were given to pregnant women and partners at their next ANC visit. Individual counseling was provided to any participant upon request.", "Data from all participants were included in the analysis. Data were analyzed using STATA 11.0 (StataCorp., College Station, TX, USA) at the TUC office. We determined service uptake of individual or couples pre-test counseling, results of HIV testing among couples, barriers to testing, and percentage of HIV-infected women and partners referred to HIV care.", "Following a minimum of 4 months of CHTC implementation, DOH and TUC staff provided monitoring and supervision visits at the pilot hospitals. The supervision team conducted in-depth interviews with hospital directors using a semi-structured interview guide to assess acceptability, support, and sustainability of CHTC services from an administrative/executive perspective.
If the hospital director was not available, a hospital executive administrator (e.g. attending physician or chief of the ANC clinic) responsible for CHTC services was interviewed. There were three trained interviewers who alternately interviewed hospital executive administrators. Each in-depth interview session took about 30 minutes. All interviews were manually recorded as text by the interviewers. Data from the in-depth interviews were organized by question and analyzed by identifying consistencies and differences across respondents. Comments from interviewees were summarized by categories.", "During site supervision visits, self-administered questionnaires were sent to staff responsible for CHTC services, asking about workload, confidence, CHTC practices, negative consequences from CHTC during the implementation period, and barriers to and recommendations for successful CHTC program implementation. We compiled the responses in Microsoft Excel 2010 (version 14). Data from the self-administered questionnaires were analyzed and summarized.", "This study was reviewed for human subjects considerations by the U.S. CDC and approved as research not involving human subjects. The Thailand MOPH determined this project to be a Program Evaluation, which does not require approval by the Thailand MOPH institutional review board.", "Among the 17 hospitals invited to participate in the CHTC trainings and pilot project implementation, 16 hospitals (94%) were able to implement CHTC services (Table 1). One community hospital was unable to implement CHTC services due to staffing limitations. One community hospital implemented CHTC services but did not collect and submit data to DOH; therefore, we report data from 15 (94%) of 16 pilot sites. The median number of counselors or nurses who provided CHTC in provincial hospitals, community hospitals, and health promotion center hospitals was 4 (range: 1–4), 3.5 (range: 2–6), and 4 (range: 3–5), respectively. All 16 hospitals also provided individual HIV counseling and testing for pregnant women and/or partners. Four (80%) of five provincial hospitals, two (25%) of eight community hospitals, and two (67%) of three health promotion center hospitals provided group HIV information before individual or couples pre-test HIV counseling (Table 1). There was significant variation in the proportion of pregnant women presenting with partners, and the proportion of couples accepting HIV testing, by hospital types and regions (p < 0.01) (data not shown).", "From October 2009 to April 2010, a total of 4,524 women were enrolled from ANC clinics at the 15 hospitals (Figure 1). Of these, 2,089 (46%) women presented with their partners. Of these couples, 2,003 (96%) received pretest counseling (Figure 2), including 994 (50%) couples receiving couples pretest counseling, 864 (43%) receiving pre-test couples group education followed by couples consent, and 145 (7.3%) receiving individual risk assessment followed by couples counseling (data not shown). Among the 2,003 couples receiving pretest HIV counseling, 1,723 (86%) accepted HIV testing, and 1,604 (93%) of these returned for post-test counseling (Figure 2). Among couples who returned for post-test counseling, 1,443 (90%) received couples post-test counseling and the remainder received individual post-test counseling with or without mutual disclosure (data not shown).
Cascades of pre-test and post-test counseling of pregnant women and their partners are shown in Figure 2. Figure 2: Uptake of couples HIV counseling and testing services among pregnant women and partners, Oct 2009 - April 2010. Among 2,435 women who presented alone, 2,415 (99%) received pre-test HIV counseling. Of these, 1,341 (55%) received individual HIV counseling and testing, and 1,072 (44%) received group HIV education and consent. Of the 2,415 women receiving pre-test counseling, 2,324 (96%) accepted HIV testing. A total of 2,259 (97%) women returned for post-test HIV counseling.", "Among 1,604 couples returning for post-test counseling, 1,567 (98%) were HIV concordant negative, 6 (0.4%) were HIV concordant positive, 17 (1%) were HIV discordant (7 male+/female- and 10 male-/female+), and 14 (0.9%) had no HIV test results (Figure 3). Figure 3: HIV test results. F: female; M: male; C: couples. Of 4,329 women who received HIV testing, 4,213 (97%) returned for post-test counseling. Of these, 4,166 (99%) were HIV negative and 39 (0.9%) were HIV positive. In this study, the overall HIV prevalence among men was 0.86% (15/1,746) and among pregnant women was 0.93% (40/4,298) (data not shown).", "Of 2,005 men receiving pretest counseling, 1,759 (88%) accepted HIV testing, including all partners of HIV-positive pregnant women. A higher proportion of men who received pre-test counseling at a provincial hospital (891 (93%)) accepted testing than men receiving pre-test counseling at either a community hospital (328 (82%)) or a health promotion center hospital (504 (78%)) (Table 1). Among 246 men who refused HIV testing, 192 (78%) gave reasons for refusing testing. The main reasons were: did not want to test (115 (60%)); thought they had no risk (25 (13%)); wanted to test at hospitals near their residence (19 (10%)); fear of needles (17 (9%)); already knew HIV status (10 (5%)); and "other" (6 (3%)) (data not shown).", "Among the 15 hospitals, 10 (67%) hospital executive administrators participated in the in-depth interviews, including two hospital directors, four attending physicians, and four chief ANC nurses. Five hospital executive administrators were not available to participate in the interview on the days of the supervision visits. Median age was 43 years (range: 20–60 years) and 5 (50%) were male. Of these, 9 (90%) reported high acceptability for CHTC services due to perceived benefits for pregnant women, which included: early access to HIV prevention, treatment, and care; improved couples communication; and the ability to integrate the program with existing services. Nine interviewees (90%) thought that CHTC services could be integrated into routine services after the end of the pilot period. One interviewee reported "needing more data before making a conclusive assessment about the benefits of CHTC", concerned that "some couples might have confidentiality concerns and may prefer individual HIV counseling and testing".
Supportive systems for CHTC implementation available at hospitals included: written policies for CHTC (7/10); a CHTC working group (9/10); CHTC training (10/10); and CHTC campaigns at hospitals (8/10).
Key barriers included: inadequate number of staff (5/10); staff being too busy (3/10); lack of tools (2/10); no clearly defined leader for CHTC services on the team (2/10); not including provision of CHTC services in promotion criteria for staff (2/10); and no clear policy or standardized guidelines on how to provide CHTC in ANC settings (1/10). Concerns included potential negative consequences of CHTC for discordant couples and additional staff workload.
Twenty-eight health care workers (HCWs) from the 15 hospitals participated in the survey, including 17 nurses and 6 counseling nurses. Median age was 43 years (range: 33–54 years) and 26 (93%) were female. They reported between 2 and 65 clients per day in the clinic, with each HCW providing CHTC to up to 16 couples per day. More than 85% of HCWs reported that they were confident in providing post-test counseling for couples with HIV seroconcordant negative, seroconcordant positive, and serodiscordant results. Among 22 HCWs, 10 (45%) provided group pre-test CHTC and 6 (27%) provided group post-test CHTC. Of 14 HCWs who reported experience in providing post-test counseling for discordant couples, 4 (29%) reported negative consequences of mutual disclosure of HIV results for some clients, such as separation or family conflict.
HCWs reported barriers to CHTC program implementation that included: high workload and a limited number of staff trained in CHTC, leading to extended waiting times for clients; lack of clear national and hospital policies for CHTC implementation; a lack of standardized tools and materials to promote male participation in CHTC services at hospitals and in the community; partners of pregnant women refusing to participate in CHTC services; limited private space in which to provide CHTC services; and no waiting area for partners of pregnant women.
HCWs provided recommendations for improvement of CHTC program implementation, including: developing and disseminating clear national and hospital CHTC policies; free HIV testing support for partners of pregnant women; better support from program managers; an adequate number of staff allocated for counseling; provision of private rooms for counseling and better space for group education; ongoing CHTC training and supervision for HCWs; development of a CHTC manual on how to organize CHTC services in hospitals that have high numbers of new ANC cases but a limited number of counselors; and tools and materials to promote access of male partners to CHTC services and to serve as guidance for new counselors. In addition, several HCWs interviewed suggested that CHTC uptake should be included as a national target indicator for maternal and child health counseling." ]
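The uptake figures reported in the sections above form a simple counseling-and-testing cascade (women enrolled, presented with a partner, couples counseled, couples tested, couples returned for results). As an illustrative aid, not the authors' STATA analysis, the reported percentages can be re-derived from the counts transcribed from the text:

```python
# Couples HIV counseling and testing cascade, Oct 2009 - April 2010,
# using counts reported in the text. Each step's percentage is computed
# relative to the previous step, matching how the paper reports uptake.

cascade = [
    ("women enrolled at ANC", 4_524),
    ("presented with partner", 2_089),
    ("couples received pretest counseling", 2_003),
    ("couples accepted HIV testing", 1_723),
    ("couples returned for post-test counseling", 1_604),
    ("received couples post-test counseling", 1_443),
]

for (prev_label, prev_n), (label, n) in zip(cascade, cascade[1:]):
    print(f"{label}: {n} ({100 * n / prev_n:.0f}% of {prev_label})")
```

Running this reproduces the 46%, 96%, 86%, 93%, and 90% figures quoted in the results.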
[ null, null, null, null, null, null, null, null, null, null, null ]
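The results also report significant variation (p < 0.01) in couples' acceptance of HIV testing by hospital type and region. The exact test used is not stated beyond the analysis being done in STATA; the sketch below shows one plausible recomputation, a chi-square test of independence on acceptance counts aggregated by hospital type from Table 1. This is an assumption about method, not a reproduction of the authors' analysis.

```python
# One plausible re-analysis (not the authors' code) of the reported
# variation in HIV test acceptance by hospital type: a chi-square test
# on counts aggregated from Table 1 (couples accepting testing among
# couples receiving pretest counseling; the site with no database
# records is excluded).
from scipy.stats import chi2_contingency

observed = [
    # [accepted, counseled but not tested]
    [891, 960 - 891],   # provincial hospitals
    [328, 398 - 328],   # community hospitals
    [504, 645 - 504],   # health promotion center hospitals
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")
# p falls far below 0.01, consistent with the reported significant variation.
```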
[ "Background", "Methods", "CHTC training and implementation in ANC setting", "Study population and procedures", "Data analysis", "Assessment of feasibility and acceptability of CHTC services", "Assessment of staff responsible for CHTC services on feasibility, confidence, and experience in providing CHTC services", "Ethics approval", "Results", "Characteristics of CHTC pilot sites", "Uptake of pre-and post-test individual vs. CHTC", "HIV-infection status among couples receiving CHTC and among pregnant women receiving individual counseling and testing", "Reasons for HIV test refusal among male partners", "Feasibility and acceptability of CHTC services", "Discussion", "Conclusions" ]
[ "In 2012, the World Health Organization (WHO) recommended that couples and partners of pregnant women in antenatal care (ANC) settings should be offered voluntary counseling and testing (VCT) with support for mutual disclosure [1,2], and also that antiretroviral therapy (ART) should be offered to HIV positive partners in serodiscordant relationships regardless of CD4 status [1] in order to reduce HIV transmission to uninfected partners [3]. Offering couples HIV testing and counseling (CHTC) at ANC settings provides many benefits including: increasing male participation in ANC services, enhancing communication between couples about safe sex practices [4,5], encouraging men to get tested and to know their HIV status, and preventing new HIV infections [1]. Couples who are aware of their partner’s and their own HIV status are more likely to adopt safe sex behaviors than people who are unaware of their HIV status [6,7]. If one or both members of a couple test positive, they can access and adhere to HIV treatment and care and interventions for prevention of mother-to-child HIV transmission (PMTCT). HIV-uninfected pregnant women and partners also receive benefits from CHTC through better-informed decisions about HIV prevention and reproductive health including contraception [1]. Offering CHTC at ANC settings may mitigate CHTC barriers and stigma discrimination because CHTC can be integrated into other routine maternal child health services and male participation activities routinely provided in ANC settings [8]. However, CHTC can be complex to implement because of limited number of staff at service delivery sites, acceptability problems among health care providers and clients, and potential adverse family consequences including conflict, separation, and intimate partner violence. Most reports from CHTC experiences are from projects based in low-income countries, particularly in sub-Saharan Africa [9,10], and Cambodia [11]; none exist for Thailand.\nThailand, an upper middle-income country, has a concentrated HIV epidemic with prevalence of 1.1% of the adult population [12]. Approximately 800,000 pregnant women deliver in Thailand each year, and 98% of pregnant women in Thailand receive ANC at health facilities, where HIV testing is routine and nearly all accept HIV testing [13]. In most ANC settings in Thailand, VCT is provided to men only when their pregnant partners are HIV positive. Studies in Thailand have reported high (30-50%) serodiscordant rates among HIV-infected couples [14]; about 32% of new HIV infections in 2012 were among low-risk co-habitating couples (e.g., husband to wife and wife to husband) [15]. One study in Thailand reported that 0.05% of pregnant women had HIV seroconversion during pregnancy (presumably from their HIV positive partners) [16]. Although Thailand has implemented a successful PMTCT program, the 2008 national PMTCT program evaluation [17] reported that only half of the partners of HIV-infected pregnant women received HIV testing within 6 months after delivery. In addition, only 15% of HIV-infected women had received CHTC at ANC settings, but more than 70% were interested in receiving CHTC [17] for the opportunity to more openly communicate with their partners about HIV and PMTCT during pregnancy and the postpartum period [17]. 
These data highlight the need for improved HIV testing rates in couples at ANC settings in Thailand.
In this paper, we describe the pilot implementation of a CHTC program in ANC settings in 17 hospitals in 7 provinces in Thailand during 2009–2010. Following this pilot implementation, Thailand national PMTCT guidelines 2010 [18,19] recommended routine CHTC in ANC clinics at public hospitals. Lessons learned from this pilot program provide recommendations for improvement and scale-up of this important program.", "CHTC training and implementation in ANC setting: The Thai Ministry of Public Health (MOPH) and Thailand MOPH-U.S. Centers for Disease Control and Prevention Collaboration (TUC) developed a pilot project in 2008 in order to assess the feasibility and acceptability of CHTC implementation in Thai ANC settings. A training manual [20] was developed in Thai, adapted from the U.S. Centers for Disease Control and Prevention (CDC) CHTC manual [21]. In February 2009, a 4-day CHTC training for ANC settings was conducted by trainers from the MOPH Department of Health (DOH), TUC, and counselors from tertiary care hospitals, for 32 service providers (ANC nurses and counselors who provided individual VCT for pregnant women) from 17 hospitals in 7 provinces. Key contents of the training included the importance of CHTC in ANC services, psychosocial issues relating to couples relationships, counselor self-awareness, counseling skills required to work effectively with couples, guidance on providing pre-test and post-test CHTC (for HIV-seroconcordant negative, seroconcordant positive, and serodiscordant results), and administrative management to set up couples counseling service systems. Training methods included didactic sessions, role play, video demonstrations, small group discussions, and lectures. Following the training, trainees returned to their hospitals, trained their ANC teams, and implemented CHTC services as part of routine ANC services.
Study population and procedures: From October 2009 to April 2010, all pregnant women and their partners at 17 hospitals in 7 provinces in the North, Northeastern, Central, and Southern regions of Thailand (Figure 1) were offered routine ANC and CHTC services, and were asked to consent to HIV testing at first ANC visit by counselors or nurses. A convenience sample of participants was recruited based on availability of staff. We did not collect data on the total number of pregnant women, their partners, or the number who were approached to participate at first ANC visit. Enrollment targeted 30 HIV-positive persons among those receiving CHTC services. We anticipated that enrollment of 30 HIV-positive persons would enable us to enroll at least 10 serodiscordant couples, including HIV-positive men and negative pregnant women. This number could demonstrate benefits of CHTC in ANC settings in identifying more HIV-positive men whose partners were negative than previous practice. CHTC is defined as when two partners are counseled, tested, and receive their results together; in this way they can mutually disclose their HIV status to one another. Figure 1: Geographic area of the seven provinces participating in the CHTC project. Characteristics of hospitals are shown in Table 1. Cross-sectional data were collected from pregnant women who came alone or with their partners at first visit, by interview using standard data collection forms. Pregnant women who came alone were offered routine ANC and encouraged to invite their partners for CHTC during the following visit. No incentives were provided for participation. A token amount of funding was provided to a nurse or a counselor who collected data (2 USD for data collection per case).
Table 1. Characteristics of CHTC pilot implementing sites, 2009-10 (columns: hospital | new ANC cases during reporting period (total n = 4,524) | pregnant women who came with partners, n (%) (total 2,089) | couples who received pretest couples counseling, n (%) (total 2,003) | couples who accepted HIV testing, n (%) (total 1,723) | counselors providing CHTC, n | type of CHTC):
Provincial hospitals:
1. North | 603 | 329 (54.6) | 322 (97.9) | 311 (96.6) | 4 | IC, CC, GE
2. Northeast | 761 | 119 (15.6) | 118 (99.2) | 117 (99.2) | 4 | IC, CC, GE
3. Central | 442 | 385 (86.9) | 364 (94.5) | 325 (89.3) | 4 | IC, CC, GE
4. South | 401 | 71 (17.7) | 71 (100) | 59 (83.1) | 1 | IC, CC, GE
5. South | 340 | 85 (25) | 85 (100) | 79 (92.9) | 4 | IC, CC
Community hospitals:
6. North | 43 | 32 (74.4) | 32 (100) | 32 (100) | 4 | IC, CC
7. North | 136 | 43 (31.6) | 43 (100) | 43 (100) | 5 | IC, CC
8. Northeast | 199 | 78 (39.2) | 78 (100) | 75 (96.2) | 3 | IC, CC, GE
9. Northeast | 3 | 3 (100) | 3 (100) | 3 (100) | 2 | IC, CC
10. Central | not in database | not in database | not in database | not in database | 6 | IC, CC
11. South | 346 | 108 (31.2) | 107 (99.1) | 47 (43.9) | 3 | IC, CC
12. South | 130 | 60 (46.2) | 59 (98.3) | 57 (96.6) | 5 | IC, CC
13. South | 142 | 81 (57) | 76 (93.8) | 71 (93.4) | 2 | IC, CC, GE
Health promotion center hospitals:
14. North | 410 | 284 (69.3) | 269 (94.7) | 150 (55.8) | 3 | IC, CC, GE
15. Northeast | 523 | 394 (75.3) | 359 (91.1) | 339 (94.4) | 5 | IC, CC, GE
16. South | 45 | 17 (37.8) | 17 (100) | 15 (88.2) | 4 | IC, CC
IC = individual counseling; CC = couples counseling; GE = group education and consent.
HIV counseling and testing data were collected at each woman's first ANC visit, including types of pre- and post-test counseling, uptake of HIV testing by couples, reasons for not testing individually or as a couple, and HIV test results. Each hospital reported data from monthly paper-based monitoring forms for data entry at DOH in an electronic file using MS Access. Partners who did not return for posttest counseling within 3 months were defined as missing. Patient hospital numbers were recorded in the paper forms at the hospitals but were excluded from electronic transmissions to DOH. A couple code, linking separate records for patients and their partners, was used as an identifier in the transmissions to DOH and in the electronic database.
HIV enzyme-linked immunosorbent assays (ELISA) were performed for HIV-1 for those who consented to HIV testing. Each participant was asked individually whether they would prefer to receive HIV test results and posttest counseling alone or as a couple. HIV test results were given to pregnant women and partners at their next ANC visit. Individual counseling was provided to any participant upon request.
Data analysis: Data from all participants were included in the analysis. Data were analyzed using STATA 11.0 (StataCorp., College Station, TX, USA) at the TUC office. We determined service uptake of individual or couples pre-test counseling, results of HIV testing among couples, barriers to testing, and percentage of HIV-infected women and partners referred to HIV care.
Assessment of feasibility and acceptability of CHTC services: Following a minimum of 4 months of CHTC implementation, DOH and TUC staff provided monitoring and supervision visits at the pilot hospitals. The supervision team conducted in-depth interviews with hospital directors using a semi-structured interview guide to assess acceptability, support, and sustainability of CHTC services from an administrative/executive perspective. If the hospital director was not available, a hospital executive administrator (e.g. attending physician or chief of the ANC clinic) responsible for CHTC services was interviewed. There were three trained interviewers who alternately interviewed hospital executive administrators. Each in-depth interview session took about 30 minutes. All interviews were manually recorded as text by the interviewers. Data from the in-depth interviews were organized by question and analyzed by identifying consistencies and differences across respondents. Comments from interviewees were summarized by categories.
Assessment of staff responsible for CHTC services on feasibility, confidence, and experience in providing CHTC services: During site supervision visits, self-administered questionnaires were sent to staff responsible for CHTC services, asking about workload, confidence, CHTC practices, negative consequences from CHTC during the implementation period, and barriers to and recommendations for successful CHTC program implementation. We compiled the responses in Microsoft Excel 2010 (version 14). Data from the self-administered questionnaires were analyzed and summarized.
Ethics approval: This study was reviewed for human subjects considerations by the U.S. CDC and approved as research not involving human subjects. The Thailand MOPH determined this project to be a Program Evaluation, which does not require approval by the Thailand MOPH institutional review board.", "The Thai Ministry of Public Health (MOPH) and Thailand MOPH-U.S. Centers for Disease Control and Prevention Collaboration (TUC) developed a pilot project in 2008 in order to assess the feasibility and acceptability of CHTC implementation in Thai ANC settings. A training manual [20] was developed in Thai, adapted from the U.S. Centers for Disease Control and Prevention (CDC) CHTC manual [21]. In February 2009, a 4-day CHTC training for ANC settings was conducted by trainers from the MOPH Department of Health (DOH), TUC, and counselors from tertiary care hospitals, for 32 service providers (ANC nurses and counselors who provided individual VCT for pregnant women) from 17 hospitals in 7 provinces. Key contents of the training included the importance of CHTC in ANC services, psychosocial issues relating to couples relationships, counselor self-awareness, counseling skills required to work effectively with couples, guidance on providing pre-test and post-test CHTC (for HIV-seroconcordant negative, seroconcordant positive, and serodiscordant results), and administrative management to set up couples counseling service systems. Training methods included didactic sessions, role play, video demonstrations, small group discussions, and lectures. Following the training, trainees returned to their hospitals, trained their ANC teams, and implemented CHTC services as part of routine ANC services.", "From October 2009 to April 2010, all pregnant women and their partners at 17 hospitals in 7 provinces in the North, Northeastern, Central, and Southern regions of Thailand (Figure 1) were offered routine ANC and CHTC services, and were asked to consent to HIV testing at first ANC visit by counselors or nurses. A convenience sample of participants was recruited based on availability of staff. We did not collect data on the total number of pregnant women, their partners, or the number who were approached to participate at first ANC visit. Enrollment targeted 30 HIV-positive persons among those receiving CHTC services. We anticipated that enrollment of 30 HIV-positive persons would enable us to enroll at least 10 serodiscordant couples, including HIV-positive men and negative pregnant women. This number could demonstrate benefits of CHTC in ANC settings in identifying more HIV-positive men whose partners were negative than previous practice. CHTC is defined as when two partners are counseled, tested, and receive their results together; in this way they can mutually disclose their HIV status to one another. Figure 1: Geographic area of the seven provinces participating in the CHTC project. Characteristics of hospitals are shown in Table 1. Cross-sectional data were collected from pregnant women who came alone or with their partners at first visit, by interview using standard data collection forms. Pregnant women who came alone were offered routine ANC and encouraged to invite their partners for CHTC during the following visit. No incentives were provided for participation.
A token amount of funding was provided to a nurse or a counselor who collected data (2 USD for data collection per case).
Table 1. Characteristics of CHTC pilot implementing sites, 2009-10 (columns: hospital | new ANC cases during reporting period (total n = 4,524) | pregnant women who came with partners, n (%) (total 2,089) | couples who received pretest couples counseling, n (%) (total 2,003) | couples who accepted HIV testing, n (%) (total 1,723) | counselors providing CHTC, n | type of CHTC):
Provincial hospitals:
1. North | 603 | 329 (54.6) | 322 (97.9) | 311 (96.6) | 4 | IC, CC, GE
2. Northeast | 761 | 119 (15.6) | 118 (99.2) | 117 (99.2) | 4 | IC, CC, GE
3. Central | 442 | 385 (86.9) | 364 (94.5) | 325 (89.3) | 4 | IC, CC, GE
4. South | 401 | 71 (17.7) | 71 (100) | 59 (83.1) | 1 | IC, CC, GE
5. South | 340 | 85 (25) | 85 (100) | 79 (92.9) | 4 | IC, CC
Community hospitals:
6. North | 43 | 32 (74.4) | 32 (100) | 32 (100) | 4 | IC, CC
7. North | 136 | 43 (31.6) | 43 (100) | 43 (100) | 5 | IC, CC
8. Northeast | 199 | 78 (39.2) | 78 (100) | 75 (96.2) | 3 | IC, CC, GE
9. Northeast | 3 | 3 (100) | 3 (100) | 3 (100) | 2 | IC, CC
10. Central | not in database | not in database | not in database | not in database | 6 | IC, CC
11. South | 346 | 108 (31.2) | 107 (99.1) | 47 (43.9) | 3 | IC, CC
12. South | 130 | 60 (46.2) | 59 (98.3) | 57 (96.6) | 5 | IC, CC
13. South | 142 | 81 (57) | 76 (93.8) | 71 (93.4) | 2 | IC, CC, GE
Health promotion center hospitals:
14. North | 410 | 284 (69.3) | 269 (94.7) | 150 (55.8) | 3 | IC, CC, GE
15. Northeast | 523 | 394 (75.3) | 359 (91.1) | 339 (94.4) | 5 | IC, CC, GE
16. South | 45 | 17 (37.8) | 17 (100) | 15 (88.2) | 4 | IC, CC
IC = individual counseling; CC = couples counseling; GE = group education and consent.
HIV counseling and testing data were collected at each woman's first ANC visit, including types of pre- and post-test counseling, uptake of HIV testing by couples, reasons for not testing individually or as a couple, and HIV test results. Each hospital reported data from monthly paper-based monitoring forms for data entry at DOH in an electronic file using MS Access. Partners who did not return for posttest counseling within 3 months were defined as missing. Patient hospital numbers were recorded in the paper forms at the hospitals but were excluded from electronic transmissions to DOH. A couple code, linking separate records for patients and their partners, was used as an identifier in the transmissions to DOH and in the electronic database.
HIV enzyme-linked immunosorbent assays (ELISA) were performed for HIV-1 for those who consented to HIV testing. Each participant was asked individually whether they would prefer to receive HIV test results and posttest counseling alone or as a couple. HIV test results were given to pregnant women and partners at their next ANC visit. Individual counseling was provided to any participant upon request.", "Data from all participants were included in the analysis. Data were analyzed using STATA 11.0 (StataCorp., College Station, TX, USA) at the TUC office. We determined service uptake of individual or couples pre-test counseling, results of HIV testing among couples, barriers to testing, and percentage of HIV-infected women and partners referred to HIV care.", "Following a minimum of 4 months of CHTC implementation, DOH and TUC staff provided monitoring and supervision visits at the pilot hospitals. The supervision team conducted in-depth interviews with hospital directors using a semi-structured interview guide to assess acceptability, support, and sustainability of CHTC services from an administrative/executive perspective. If the hospital director was not available, a hospital executive administrator (e.g. attending physician or chief of the ANC clinic) responsible for CHTC services was interviewed. There were three trained interviewers who alternately interviewed hospital executive administrators. Each in-depth interview session took about 30 minutes. All interviews were manually recorded as text by the interviewers. Data from the in-depth interviews were organized by question and analyzed by identifying consistencies and differences across respondents. Comments from interviewees were summarized by categories.", "During site supervision visits, self-administered questionnaires were sent to staff responsible for CHTC services, asking about workload, confidence, CHTC practices, negative consequences from CHTC during the implementation period, and barriers to and recommendations for successful CHTC program implementation. We compiled the responses in Microsoft Excel 2010 (version 14). Data from the self-administered questionnaires were analyzed and summarized.", "This study was reviewed for human subjects considerations by the U.S. CDC and approved as research not involving human subjects. The Thailand MOPH determined this project to be a Program Evaluation, which does not require approval by the Thailand MOPH institutional review board.", "Characteristics of CHTC pilot sites: Among the 17 hospitals invited to participate in the CHTC trainings and pilot project implementation, 16 hospitals (94%) were able to implement CHTC services (Table 1). One community hospital was unable to implement CHTC services due to staffing limitations. One community hospital implemented CHTC services but did not collect and submit data to DOH; therefore, we report data from 15 (94%) of 16 pilot sites. The median number of counselors or nurses who provided CHTC in provincial hospitals, community hospitals, and health promotion center hospitals was 4 (range: 1–4), 3.5 (range: 2–6), and 4 (range: 3–5), respectively. All 16 hospitals also provided individual HIV counseling and testing for pregnant women and/or partners. Four (80%) of five provincial hospitals, two (25%) of eight community hospitals, and two (67%) of three health promotion center hospitals provided group HIV information before individual or couples pre-test HIV counseling (Table 1). There was significant variation in the proportion of pregnant women presenting with partners, and the proportion of couples accepting HIV testing, by hospital types and regions (p < 0.01) (data not shown).
There was significant variation in the proportion of pregnant women presenting with partners, and the proportion of couples accepting HIV testing, by hospital types and regions (p < 0.01) (data not shown).\n Uptake of pre-and post-test individual vs. CHTC From October 2009 to April 2010, a total of 4, 524 women were enrolled from ANC clinics at the 15 hospitals (Figure 1). Of these, 2,089 (46%) women presented with their partners. Of these, 2,003 (96%) couples received pretest counseling (Figure 2), including 994 (50%) couples receiving couples pretest counseling, 864 (43%) receiving pre-test couples group education followed by couples consent, and 145 (7.3%) receiving individual risk assessment followed by couples counseling (data not shown). Among the 2,003 couples receiving pretest HIV counseling, 1,723 (86%) accepted HIV testing, and 1,604 (93%) of these returned for post-test counseling (Figure 2). Among couples who returned for post-test counseling, 1,443 (90%) received couples post-test counseling and the remainder received individual post-test counseling with or without mutual disclosure (data not shown). Cascades of pre-test and post-test counseling of pregnant women and their partners are shown in Figure 2.Figure 2\nUptakes of couples HIV counseling and testing services among pregnant women and partners, Oct 2009 - April 2010.\n\n\nUptakes of couples HIV counseling and testing services among pregnant women and partners, Oct 2009 - April 2010.\n\nAmong 2,435 women who presented alone, 2,415 (99%) received pre-test HIV counseling. Of these, 1,341 (55%) received individual HIV counseling and testing, and 1,072 (44%) received group HIV education and consent. Of the 2,415 women receiving pre-test counseling, 2,324 (96%) accepted HIV testing. Two thousand two hundred and fifty-nine (97%) women returned for post-test HIV counseling.\nFrom October 2009 to April 2010, a total of 4, 524 women were enrolled from ANC clinics at the 15 hospitals (Figure 1). Of these, 2,089 (46%) women presented with their partners. Of these, 2,003 (96%) couples received pretest counseling (Figure 2), including 994 (50%) couples receiving couples pretest counseling, 864 (43%) receiving pre-test couples group education followed by couples consent, and 145 (7.3%) receiving individual risk assessment followed by couples counseling (data not shown). Among the 2,003 couples receiving pretest HIV counseling, 1,723 (86%) accepted HIV testing, and 1,604 (93%) of these returned for post-test counseling (Figure 2). Among couples who returned for post-test counseling, 1,443 (90%) received couples post-test counseling and the remainder received individual post-test counseling with or without mutual disclosure (data not shown). Cascades of pre-test and post-test counseling of pregnant women and their partners are shown in Figure 2.Figure 2\nUptakes of couples HIV counseling and testing services among pregnant women and partners, Oct 2009 - April 2010.\n\n\nUptakes of couples HIV counseling and testing services among pregnant women and partners, Oct 2009 - April 2010.\n\nAmong 2,435 women who presented alone, 2,415 (99%) received pre-test HIV counseling. Of these, 1,341 (55%) received individual HIV counseling and testing, and 1,072 (44%) received group HIV education and consent. Of the 2,415 women receiving pre-test counseling, 2,324 (96%) accepted HIV testing. 
Two thousand two hundred and fifty-nine (97%) women returned for post-test HIV counseling.\n HIV-infection status among couples receiving CHTC and among pregnant women receiving individual counseling and testing Among 1,604 couples returning for post-test counseling, 1,567 (98%) were HIV concordant negative, 6 (0.4%) were HIV concordant positive, 17 (1%) were HIV discordant (7 male+/female- and 10 male-/female+), and 14 (0.9%) had no HIV test results (Figure 3).Figure 3\nHIV test results F: female; M: male; C: couples.\n\n\nHIV test results F: female; M: male; C: couples.\n\nOf 4,329 women who received HIV testing, 4,213 (97%) returned for post-test counseling. Of these, 4,166 (99%) were HIV negative and 39 (0.9%) were HIV positive. In this study, the overall HIV prevalence among men was 0.86% (15/1,746) and among pregnant women was 0.93% (40/4,298) (data not shown).\nAmong 1,604 couples returning for post-test counseling, 1,567 (98%) were HIV concordant negative, 6 (0.4%) were HIV concordant positive, 17 (1%) were HIV discordant (7 male+/female- and 10 male-/female+), and 14 (0.9%) had no HIV test results (Figure 3).Figure 3\nHIV test results F: female; M: male; C: couples.\n\n\nHIV test results F: female; M: male; C: couples.\n\nOf 4,329 women who received HIV testing, 4,213 (97%) returned for post-test counseling. Of these, 4,166 (99%) were HIV negative and 39 (0.9%) were HIV positive. In this study, the overall HIV prevalence among men was 0.86% (15/1,746) and among pregnant women was 0.93% (40/4,298) (data not shown).\n Reasons for HIV test refusal among male partners Of 2,005 men receiving pretest counseling, 1,759 (88%) men accepted HIV testing, including all partners of HIV-positive pregnant women. A higher proportion of men (891 (93%)) who received pre-test counseling at a provincial hospital accepted testing than men receiving pre-test counseling at either a community hospital (328 (82%)) or health promotion center hospital (504 (78%)) (Table 1). Among 246 men who refused HIV testing, 192 (78%) gave reasons for refusing testing. The main reasons were: did not want to test (115 (60%)); thought they had no risk (25 (13%)); wanted to test at hospitals near their residence (19 (10%)); fear of needles (17 (9%)); already knew HIV status (10 (5%)); and “other” (6 (3%)) (Data not shown).\nOf 2,005 men receiving pretest counseling, 1,759 (88%) men accepted HIV testing, including all partners of HIV-positive pregnant women. A higher proportion of men (891 (93%)) who received pre-test counseling at a provincial hospital accepted testing than men receiving pre-test counseling at either a community hospital (328 (82%)) or health promotion center hospital (504 (78%)) (Table 1). Among 246 men who refused HIV testing, 192 (78%) gave reasons for refusing testing. The main reasons were: did not want to test (115 (60%)); thought they had no risk (25 (13%)); wanted to test at hospitals near their residence (19 (10%)); fear of needles (17 (9%)); already knew HIV status (10 (5%)); and “other” (6 (3%)) (Data not shown).\n Feasibility and acceptability of CHTC services Among the 15 hospitals, 10 (67%) hospital executive administrators participated in the in-depth interview including two hospital directors, four attending physicians, and four chief ANC nurses. Five hospital executive administrators were not available to participate in the interview on the days of the supervision visits. Median age was 43 years (range: 20–60 years) and 5 (50%) were male. 
Of these, 9 (90%) reported high acceptability for CHTC services due to perceived benefits for pregnant women, which included: early access to HIV prevention, treatment, and care; improved couples communication; and the ability to integrate the program with existing services. Nine interviewees (90%) thought that CHTC services could be integrated into routine services after the end of the pilot period. One interviewee reported “needing more data before making a conclusive assessment about the benefits of CHTC”, concerned that “some couples might have confidentiality concerns and may prefer individual HIV counseling and testing”.\nSupportive systems for CHTC implementation available at hospitals included: written policies for CHTC (7/10); a CHTC working group (9/10); CHTC training (10/10), and CHTC campaigns at hospitals (8/10). Key barriers included: inadequate number of staff (5/10); staff being too busy (3/10); lack of tools (2/10), no clear defined leader for CHTC services on the team (2/10); not including provision of CHTC services in promotion criteria for staff (2/10); and no clear policy or standardized guidelines on how to provide CHTC in ANC settings (1/10). Concerns included potential negative consequences of CHTC for discordant couples, and additional staff workload.\nTwenty-eight health care workers (HCWs) from the 15 hospitals participated in the survey including 17 nurses and 6 counseling nurses. Median age was 43 years (range: 33–54 years) and 26 (93%) were female. They reported between 2 and 65 clients per day in the clinic, with each HCW providing CHTC to up to 16 couples per day. More than 85% of HCWs reported that they were confident in providing post-test counseling for couples with HIV seroconcordant negative, seroconcordant positive, and serodiscordant results. Among 22 HCWs, 10 (45%) provided group pre-test CHTC and 6 (27%) provided group post-test CHTC. Of 14 HCWs who reported that they had experience in providing post-test counseling for discordant couples, 4 (29%) HCWs reported negative consequences of mutual disclosure of HIV results for some clients, such as separation or family conflict.\nHCWs reported barriers for CHTC program implementation that included: high workload and limited number of staff trained in CHTC, leading to extended waiting times for clients; lack of clear national and hospital policies for CHTC implementation; a lack of standardized tools and materials to promote male participation in CHTC services at hospitals and in the community; partners of pregnant women refusing to participate in CHTC services; limited private space to provide CHTC services; and no waiting area for partners of pregnant women.\nHCWs provided recommendations for improvement of CHCT program implementation, including: developing and disseminating clear CHTC national and hospital policies; free HIV testing support to partners of pregnant women; better support from program managers; adequate number of staff allocated for counseling; provision of private rooms for counseling and better space for group education; ongoing CHTC training and supervision for HCWs; development of a CHTC manual on how to organize CHTC services in hospitals with high numbers of new ANC cases that also have a limited number of counselors; and tools and materials to promote access of male partners to CHTC services and to use as guidance for new counselors. 
In addition, several HCWs suggested that CHTC uptake should be included as a national target indicator for maternal and child health counseling.

Discussion

This pilot of a CHTC program at 15 hospitals in Thailand demonstrated that CHTC implementation in ANC settings is feasible. Uptake of CHTC services among male partners of pregnant women in ANC settings was high, as was acceptability among hospital staff and administrators. Most hospitals participating in the program were able to successfully implement CHTC services.
While this is the first report of a project of this type from Thailand, these findings are similar to those reported from sub-Saharan Africa and Cambodia [9-11]. Developing clear CHTC policies, strong support from program managers, a CHTC manual and materials, and training adequate numbers of HCWs to provide services were important elements contributing to successful program implementation. HCWs reported a high level of confidence in providing CHTC services, suggesting that the adapted training curriculum was sufficient to provide basic knowledge and skills for counselors.

Male participation in and uptake of CHTC services in this project was higher than that reported in African and Indian settings [10,22], although overall uptake of CHTC services in this project was still well below 50%. The high rate of couples accepting HIV testing in our report may be due to one or more of several factors: first, CHTC in most pilot hospitals used opt-out techniques; second, the universal health coverage scheme in Thailand provided free HIV testing for pregnant women, partners of pregnant women, and other at-risk populations [18]; and third, Thailand promoted integration of CHTC in ANC settings with other existing maternal and child health (MCH) programs [23] in order to reduce stigma and discrimination. In this study, the proportion of couples accepting HIV testing was lower than 60% in one community hospital and one health promotion center hospital. Hospital staff reported that possible reasons for this low uptake included counselor issues (e.g., encouraging only partners of HIV-infected pregnant women or partners with risk behaviors to get tested) and health worker issues (e.g., workers were not aware of the health benefit of free HIV testing for partners of pregnant women). We do not have data on reasons for non-presentation of male partners at ANC; it is possible that some men were not aware of the potential benefits of attending ANC with their partners due to a lack of promotional campaigns in some hospitals and communities, were not aware of the availability of CHTC services in the ANC setting [23,24], were not available due to employment or residence in another province, were afraid to learn the results of HIV testing, had other socioeconomic factors or relationship issues affecting their decisions [24], or experienced unfriendly services for men at these CHTC service delivery sites [25].

Strategies used in health facilities in Thailand and other countries to increase male involvement have included making ANC services more male friendly (e.g., providing comfortable waiting space or fast-tracked registration for men), providing health education to change beliefs and attitudes [10], and integrating ANC CHTC with other MCH programs such as thalassemia screening, syphilis screening, and parenting classes [8]. Additional strategies to attract men to ANC in Thailand should be instituted and evaluated; for example, in addition to those noted above, engaging celebrities as role models [26] to promote CHTC services. For men who do not participate in CHTC services during ANC, CHTC should be offered during the delivery or postpartum period, when most men do present with their partners [1,8,25].

Loss to follow-up for men was also an issue in this project, possibly because most participants needed to return for post-test results at the following clinical visit.
The return rate for men might be improved by implementing same-day HIV test results [27]. The feasibility and acceptability of same-day results among male partners of pregnant women in ANC settings should be examined.

Identifying new HIV cases is a cornerstone of HIV control, and identifying serodiscordant couples is critical for proper HIV prophylaxis, treatment, and care for women, men, and babies. In this study, a total of 13 HIV-positive men were identified during CHTC. Of these men, seven had HIV-negative partners and would therefore not have been captured during standard ANC HIV-testing practices. Clearly, identifying new HIV cases is important in any venue. Identifying serodiscordant couples in which the male partner is HIV positive is important for determining proper PMTCT interventions and for identifying women who may be in the window period of acute HIV infection. For HIV-infected persons, CHTC can also support more effective provision of ART and adherence [28], and can increase uptake of and adherence to PMTCT [22], including early infant HIV diagnosis. Couples counseling also gives couples the opportunity to share their vision of family goals and to make informed decisions about HIV and STI prevention and reproductive health, including family planning and contraception [1,29].

Although this pilot project demonstrated success, national adoption and scale-up require overcoming some substantial obstacles. Some HCWs in this pilot expressed concern about additional workload and negative consequences of CHTC, particularly for serodiscordant couples. Providing information to couples as a group, using videotape and/or in-person didactic counseling [30], was a common technique in this project, particularly in tertiary care hospitals with high numbers of new ANC cases and limited staff. Studies from Africa have demonstrated varying associations between HIV diagnosis and intimate partner violence [25,26]. Limited information exists regarding these types of social outcomes of CHTC in Thailand [31], and more research is needed to determine potential adverse consequences of positive test results, especially for discordant couples.

The interpretation of these results is subject to at least four limitations. First, enrollment was a convenience sample. We did not collect data on the total number of pregnant women and partners who came to the first ANC visit during the review period, or on the number approached to participate in this project. Based on reports from participating hospitals to the national PMTCT intervention monitoring system during the enrollment period, enrollment was estimated to cover approximately 50% of pregnant women at participating sites. Sampling bias is therefore possible and might lead to over- or underestimation of the uptake of couples counseling if expanded nationally. Second, missing data were reported, which may have led to over- or underestimates of some findings. Third, there were variations in the proportion of men presenting at different ANC settings and in the uptake of pre- and post-test CHTC by hospital type and region. These variations are likely to have implications for scale-up of this initiative, and findings may need to be interpreted in specific regional and other contexts for optimal program implementation.
Finally, we did not collect demographic information on participants or other information on factors associated with acceptance of CHTC services. This information would be useful for determining the feasibility of national scale-up of this program and is an area for future study.

Following implementation of this pilot CHTC program, revised Thailand PMTCT guidelines released in October 2010 [18,19] recommended routine couples counseling and testing in ANC clinics. In 2013, CHTC programs in ANC settings were available in more than 70% of MOPH hospitals in all provinces in Thailand, but CHTC uptake was still low (22%) [13]. Moving forward, we recommend promoting involvement of male partners in ANC by providing and improving male-friendly services (e.g., free HIV testing, reduced waiting times, and waiting space for men), training HCWs to strengthen couples counseling and testing skills, and securing support for CHTC implementation from hospital executives, all crucial for successful program implementation. Outcomes of couples counseling and testing during national implementation, and the feasibility and acceptability of same-day test results for male partners, should be studied in order to maximize program success.

Conclusions

CHTC implemented in ANC settings helped identify more HIV-positive men whose partners were negative than previous practice, with high acceptability among HCWs. Major concerns were negative consequences of CHTC for discordant couples and the additional workload of the program. Among couples who came to the first ANC visit together, a high proportion received HIV counseling and testing. Strategies are needed to increase the number and proportion of male partners who access CHTC services. To implement CHTC, hospitals need clear policies, support from hospital leaders, trained and supported personnel, private space, training, tools, and materials. There is an opportunity to identify new HIV cases and to prevent infection among children, both major aspects of a comprehensive HIV program, and achievable in Thailand.
[ "introduction", "materials|methods", null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusion" ]
[ "Couples counseling", "Pregnant women", "ANC", "Thailand" ]
Background

In 2012, the World Health Organization (WHO) recommended that couples and partners of pregnant women in antenatal care (ANC) settings be offered voluntary counseling and testing (VCT) with support for mutual disclosure [1,2], and that antiretroviral therapy (ART) be offered to HIV-positive partners in serodiscordant relationships regardless of CD4 status [1] in order to reduce HIV transmission to uninfected partners [3]. Offering couples HIV testing and counseling (CHTC) in ANC settings provides many benefits, including increasing male participation in ANC services, enhancing communication between couples about safe sex practices [4,5], encouraging men to get tested and know their HIV status, and preventing new HIV infections [1]. Couples who are aware of their partner's and their own HIV status are more likely to adopt safe sex behaviors than people who are unaware of their HIV status [6,7]. If one or both members of a couple test positive, they can access and adhere to HIV treatment and care and to interventions for prevention of mother-to-child HIV transmission (PMTCT). HIV-uninfected pregnant women and partners also benefit from CHTC through better-informed decisions about HIV prevention and reproductive health, including contraception [1]. Offering CHTC in ANC settings may mitigate barriers to CHTC, and stigma and discrimination, because CHTC can be integrated into other routine maternal and child health services and male participation activities routinely provided in ANC settings [8]. However, CHTC can be complex to implement because of the limited number of staff at service delivery sites, acceptability problems among health care providers and clients, and potential adverse family consequences including conflict, separation, and intimate partner violence. Most reports of CHTC experience are from projects in low-income countries, particularly in sub-Saharan Africa [9,10] and Cambodia [11]; none exist for Thailand.

Thailand, an upper middle-income country, has a concentrated HIV epidemic, with a prevalence of 1.1% in the adult population [12]. Approximately 800,000 pregnant women deliver in Thailand each year, and 98% of pregnant women in Thailand receive ANC at health facilities, where HIV testing is routine and nearly all accept it [13]. In most ANC settings in Thailand, VCT is offered to men only when their pregnant partners are HIV positive. Studies in Thailand have reported high (30-50%) serodiscordance rates among HIV-infected couples [14]; about 32% of new HIV infections in 2012 were among low-risk cohabiting couples (e.g., husband to wife and wife to husband) [15]. One study in Thailand reported that 0.05% of pregnant women seroconverted during pregnancy (presumably infected by their HIV-positive partners) [16]. Although Thailand has implemented a successful PMTCT program, the 2008 national PMTCT program evaluation [17] reported that only half of the partners of HIV-infected pregnant women received HIV testing within 6 months after delivery. In addition, only 15% of HIV-infected women had received CHTC in ANC settings, but more than 70% were interested in receiving CHTC [17] for the opportunity to communicate more openly with their partners about HIV and PMTCT during pregnancy and the postpartum period [17]. These data highlight the need to improve HIV testing rates among couples in ANC settings in Thailand.
In this paper, we describe the pilot implementation of a CHTC program in ANC settings at 17 hospitals in 7 provinces in Thailand during 2009–2010. Following this pilot, the 2010 Thailand national PMTCT guidelines [18,19] recommended routine CHTC in ANC clinics at public hospitals. Lessons learned from this pilot program provide recommendations for improving and scaling up this important program.

Methods

CHTC training and implementation in ANC settings

The Thai Ministry of Public Health (MOPH) and the Thailand MOPH-U.S. Centers for Disease Control and Prevention Collaboration (TUC) developed a pilot project in 2008 to assess the feasibility and acceptability of CHTC implementation in Thai ANC settings. A training manual [20] was developed in Thai, adapted from the U.S. Centers for Disease Control and Prevention (CDC) CHTC manual [21]. In February 2009, a 4-day CHTC training for ANC settings was conducted by trainers from the MOPH Department of Health (DOH), TUC, and counselors from tertiary care hospitals, for 32 service providers (ANC nurses and counselors who provided individual VCT for pregnant women) from 17 hospitals in 7 provinces. Key contents of the training included the importance of CHTC in ANC services, psychosocial issues relating to couples' relationships, counselor self-awareness, counseling skills required to work effectively with couples, guidance on providing pre-test and post-test CHTC (for HIV-seroconcordant negative, seroconcordant positive, and serodiscordant results), and administrative management to set up couples counseling service systems. Training methods included didactic sessions, role play, video demonstrations, small group discussions, and lectures. Following the training, trainees returned to their hospitals, trained their ANC teams, and implemented CHTC services as part of routine ANC services.
Study population and procedures

From October 2009 to April 2010, all pregnant women and their partners at 17 hospitals in 7 provinces in the North, Northeastern, Central, and Southern regions of Thailand (Figure 1) were offered routine ANC and CHTC services and were asked by counselors or nurses to consent to HIV testing at the first ANC visit. A convenience sample of participants was recruited based on staff availability. We did not collect data on the total number of pregnant women and partners, or on the number approached to participate, at the first ANC visit. Enrollment targeted 30 HIV-positive persons among those receiving CHTC services. We anticipated that enrolling 30 HIV-positive persons would yield at least 10 serodiscordant couples, including HIV-positive men with HIV-negative pregnant partners, enough to demonstrate the benefit of CHTC in ANC settings in identifying more HIV-positive men whose partners are negative than previous practice. CHTC is defined as two partners being counseled, tested, and given their results together, so that they can mutually disclose their HIV status to one another.

Figure 1. Geographic area of the seven provinces participating in the CHTC project.

Characteristics of the hospitals are shown in Table 1. Cross-sectional data were collected by interview, using standard data collection forms, from pregnant women who came alone or with their partners at the first visit. Pregnant women who came alone were offered routine ANC and encouraged to invite their partners for CHTC at the following visit. No incentives were provided for participation. A token amount of funding (2 USD per case) was provided to the nurse or counselor who collected data.

Table 1. Characteristics of CHTC pilot implementing sites, 2009-10

Site | Type | Region | New ANC cases | Came with partners, n (%) | Received pretest couples counseling, n (%) | Accepted HIV testing, n (%) | Counselors | Type of CHTC
1 | Provincial | North | 603 | 329 (54.6) | 322 (97.9) | 311 (96.6) | 4 | IC, CC, GE
2 | Provincial | Northeast | 761 | 119 (15.6) | 118 (99.2) | 117 (99.2) | 4 | IC, CC, GE
3 | Provincial | Central | 442 | 385 (86.9) | 364 (94.5) | 325 (89.3) | 4 | IC, CC, GE
4 | Provincial | South | 401 | 71 (17.7) | 71 (100) | 59 (83.1) | 1 | IC, CC, GE
5 | Provincial | South | 340 | 85 (25.0) | 85 (100) | 79 (92.9) | 4 | IC, CC
6 | Community | North | 43 | 32 (74.4) | 32 (100) | 32 (100) | 4 | IC, CC
7 | Community | North | 136 | 43 (31.6) | 43 (100) | 43 (100) | 5 | IC, CC
8 | Community | Northeast | 199 | 78 (39.2) | 78 (100) | 75 (96.2) | 3 | IC, CC, GE
9 | Community | Northeast | 3 | 3 (100) | 3 (100) | 3 (100) | 2 | IC, CC
10 | Community | Central | Not in database | Not in database | Not in database | Not in database | 6 | IC, CC
11 | Community | South | 346 | 108 (31.2) | 107 (99.1) | 47 (43.9) | 3 | IC, CC
12 | Community | South | 130 | 60 (46.2) | 59 (98.3) | 57 (96.6) | 5 | IC, CC
13 | Community | South | 142 | 81 (57) | 76 (93.8) | 71 (93.4) | 2 | IC, CC, GE
14 | Health Promotion Center | North | 410 | 284 (69.3) | 269 (94.7) | 150 (55.8) | 3 | IC, CC, GE
15 | Health Promotion Center | Northeast | 523 | 394 (75.3) | 359 (91.1) | 339 (94.4) | 5 | IC, CC, GE
16 | Health Promotion Center | South | 45 | 17 (37.8) | 17 (100) | 15 (88.2) | 4 | IC, CC

Totals: 4,524 new ANC cases; 2,089 women came with partners; 2,003 couples received pretest couples counseling; 1,723 couples accepted HIV testing. IC = individual counseling; CC = couples counseling; GE = group education and consent.

HIV counseling and testing data were collected at each woman's first ANC visit, including the type of pre- and post-test counseling, uptake of HIV testing by couples, reasons for not testing individually or as a couple, and HIV test results. Each hospital reported data on monthly paper-based monitoring forms for entry at DOH into an electronic MS Access file. Partners who did not return for post-test counseling within 3 months were defined as missing. Patient hospital numbers were recorded on the paper forms at the hospitals but were excluded from electronic transmissions to DOH. A couple code, linking the separate records of patients and their partners, was used as the identifier in the transmissions to DOH and in the electronic database. HIV enzyme-linked immunosorbent assays (ELISA) for HIV-1 were performed for those who consented to HIV testing. Each participant was asked individually whether they preferred to receive HIV test results and post-test counseling alone or as a couple. HIV test results were given to pregnant women and partners at their next ANC visit. Individual counseling was provided to any participant upon request.
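The paragraph above describes two data-management rules: records are de-identified and linked only through a couple code, and a partner counts as missing if no post-test visit occurs within 3 months. A minimal sketch of how such linkage and flagging could work is below; the field names, dates, and the 90-day approximation of "3 months" are assumptions for illustration, and the project itself used paper forms keyed into MS Access rather than code.

```python
from datetime import date, timedelta

# Hypothetical records: hospital numbers stay on the paper forms; only the
# couple code travels with the electronic record, linking a woman to her partner.
records = [
    {"couple_code": "B101", "role": "woman",   "test_date": date(2009, 11, 2),
     "posttest_date": date(2009, 11, 30)},
    {"couple_code": "B101", "role": "partner", "test_date": date(2009, 11, 2),
     "posttest_date": None},  # never returned for post-test counseling
]

def is_missing(rec, as_of):
    """Project rule: missing if no post-test visit within 3 months (~90 days here)."""
    if rec["posttest_date"] is not None:
        return False
    return as_of - rec["test_date"] > timedelta(days=90)

# Link the couple's two records through the shared couple code.
couple = {r["role"]: r for r in records if r["couple_code"] == "B101"}
print(is_missing(couple["woman"], as_of=date(2010, 4, 30)))    # -> False
print(is_missing(couple["partner"], as_of=date(2010, 4, 30)))  # -> True
```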
Data analysis

Data from all participants were included in the analysis. Data were analyzed using STATA 11.0 (StataCorp., College Station, TX, USA) at the TUC office. We determined uptake of individual and couples pre-test counseling services, results of HIV testing among couples, barriers to testing, and the percentage of HIV-infected women and partners referred to HIV care.
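As a concrete illustration of the uptake measures listed above, the sketch below computes acceptance proportions by hospital type from counts in the style of Table 1. The analysis itself was run in STATA 11.0; this pandas version is a hypothetical equivalent using a small subset of Table 1 rows (sites 1, 2, 8, and 15), not the project's actual code.

```python
import pandas as pd

# A hypothetical subset of Table 1 counts (sites 1, 2, 8, and 15).
df = pd.DataFrame({
    "hospital_type": ["provincial", "provincial", "community", "health promotion center"],
    "couples_counseled": [322, 118, 78, 359],
    "couples_tested": [311, 117, 75, 339],
})

# Pool counts within each hospital type, then compute the acceptance proportion.
uptake = (
    df.groupby("hospital_type")[["couples_counseled", "couples_tested"]]
      .sum()
      .assign(acceptance=lambda t: t["couples_tested"] / t["couples_counseled"])
)
print(uptake.round(3))
```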
Assessment of feasibility and acceptability of CHTC services

After a minimum of 4 months of CHTC implementation, DOH and TUC staff conducted monitoring and supervision visits at the pilot hospitals. The supervision team held in-depth interviews with hospital directors, using a semi-structured interview guide, to assess the acceptability, support, and sustainability of CHTC services from an administrative/executive perspective. If the hospital director was not available, a hospital executive administrator responsible for CHTC services (e.g., an attending physician or the chief of the ANC clinic) was interviewed. Three trained interviewers alternately interviewed hospital executive administrators. Each in-depth interview session took about 30 minutes, and all interviews were recorded manually as text by the interviewers. The in-depth interview data were organized by question and analyzed by identifying consistencies and differences across respondents; comments from interviewees were summarized by category.

Assessment of staff responsible for CHTC services on feasibility, confidence, and experience in providing CHTC services

During site supervision visits, self-administered questionnaires were sent to staff responsible for CHTC services, asking about workload, confidence, CHTC practices, negative consequences of CHTC during the implementation period, and barriers to and recommendations for successful CHTC program implementation. We compiled the responses in Microsoft Excel 2010 (version 14), then analyzed and summarized them.

Ethics approval

This study was reviewed for human subjects considerations by the U.S. CDC and approved as research not involving human subjects. The Thailand MOPH determined this project to be a program evaluation, which does not require approval by the Thailand MOPH institutional review board.
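The interview and questionnaire analyses described in the two assessment subsections above both end in the same mechanical step: coded responses are grouped and counted by category. The sketch below illustrates that step; the category labels are hypothetical codes that mirror the barrier list in the Results, not the project's actual coding frame (which was compiled in Excel).

```python
from collections import Counter

# Hypothetical coded questionnaire responses, one entry per barrier mentioned.
barrier_codes = [
    "high workload", "high workload", "no clear policy",
    "limited private space", "high workload", "no waiting area",
]

# Frequency count by category, most common first.
for barrier, n in Counter(barrier_codes).most_common():
    print(f"{barrier}: {n}")
```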
Key contents of the training included the importance of CHTC in ANC services, psychosocial issues relating to couples relationships, counselor self-awareness, counseling skills required to work effectively with couples, guidance on providing pre-test and post-test CHTC (for HIV-seroconcordant negative, seroconcordant positive, and serodiscordant results), and administrative management to set up couples counseling service systems. Training methods included didactic sessions, role play, video demonstrations, small group discussions, and lectures. Following the training, trainees returned to their hospitals, trained their ANC teams, and implemented CHTC services as part of routine ANC services. Study population and procedures: From October 2009 to April 2010, all pregnant women and their partners at 17 hospitals in 7 provinces in the North, Northeastern, Central, and Southern regions of Thailand (Figure 1) were offered routine ANC and CHTC services, and were asked to consent to HIV testing at first ANC visit by counselors or nurses. A convenience sample of participants was recruited based on availability of staff. We did not collect data on the total number of pregnant women, their partners, or the number who were approached to participate at first ANC visit. Enrollment targeted 30 HIV-positive persons among those receiving CHTC services. We anticipated that enrollment of 30 HIV-positive persons would enable us to enroll at least 10 serodiscordant couples, including HIV-positive men and negative pregnant women. This number could demonstrate benefits of CHTC in ANC settings in identifying more HIV-positive men whose partners were negative than previous practice. CHTC is defined as when two partners are counseled, tested and receive their results together; in this way they can mutually disclose their HIV status with one another.Figure 1 Geographic area of seven provinces participated in CHTC project. Geographic area of seven provinces participated in CHTC project. Characteristics of hospitals are shown in Table 1. Cross-sectional data were collected from pregnant women who came alone or with their partners at first visit, by interview using standard data collection forms. Pregnant women who came alone were offered routine ANC and encouraged to invite their partners for CHTC during the following visit. No incentives were provided for participation. A token amount of funding was provided to a nurse or a counselor who collected data (2 USD for data collection per case).Table 1 Characteristics of CHTC pilot implementing sites, 2009-10 Type of hospitals Region Number of new ANC cases during reporting period (Total n = 4524 cases) % of pregnant women came with partners (2,089 cases) % couples received pretest couples counseling (2,003 cases) % couples accepted HIV testing (1,723 couples) Number of counselors who provide CHTC (persons) Type of CHTC Provincial Hospital1. North603329 (54.6)322 (97.9)311 (96.6)4IC, CC, GE2. Northeast761119 (15.6)118 (99.2)117 (99.2)4IC, CC, GE3. Central442385 (86.9)364 (94.5)325 (89.3)4IC, CC, GE4. South40171 (17.7)71 (100)59 (83.1)1IC, CC, GE5. South34085 [25]85 (100)79 (92.9)4IC, CCCommunity hospital6. North4332 (74.4)32 (100)32 (100)4IC, CC7. North13643 (31.6)43 (100)43 (100)5IC, CC8. Northeast19978 (39.2)78 (100)75 (96.2)3IC, CC, GE9. Northeast33 (100)3 (100)3 (100)2IC, CC10. CentralNot in databaseNot in databaseNot in databaseNot in database6IC, CC11. South346108 (31.2)107 (99.1)47 (43.9)3IC, CC12. South13060 (46.2)59 (98.3)57 (96.6)5IC, CC13. 
South14281 (57)76 (93.8)71 (93.4)2IC, CC, GEHealth Promotion Center Hospital14. North410284 (69.3)269 (94.7)150 (55.8)3IC, CC, GE15. Northeast523394 (75.3)359 (91.1)339 (94.4)5IC, CC, GE16. South4517 (37.8)17 (100)15 (88.2)4IC, CCIC = individual counseling; CC = couples counseling; GE = group education and consent. Characteristics of CHTC pilot implementing sites, 2009-10 IC = individual counseling; CC = couples counseling; GE = group education and consent. HIV counseling and testing data were collected at each woman’s first ANC visit, including types of pre- and post-test counseling, uptake of HIV testing by couples, reasons for not testing individually or as a couple, and HIV test results. Each hospital reported data from monthly paper-based monitoring forms for data entry at DOH in an electronic file using MS Access. Partners who did not return for posttest counseling within 3 months were defined as missing. Patient hospital numbers were recorded in the paper forms at the hospitals but were excluded from electronic transmissions to DOH. A couple code, linking separate records for patients and their partners, was used as an identifier in the transmissions to DOH and in the electronic database. HIV enzyme-linked immunosorbent assays (ELISA) were performed for HIV-1 for those who consented to HIV testing. Each participant was asked individually whether they would prefer to receive HIV test results and posttest counseling alone or as a couple. HIV test results were given to pregnant women and partners at their next ANC visit. Individual counseling was provided to any participant upon request. Data analysis: Data from all participants were included in the analysis. Data were analyzed using STATA 11.0 (StataCorp., College Station, TX, USA) at the TUC office. We determined service uptake of individual or couples pre-test counseling, results of HIV testing among couples, barriers to testing, and percentage of HIV-infected women and partners referred to HIV care. Assessment of feasibility and acceptability of CHTC services: Following a minimum of 4 months of CHTC implementation, DOH and TUC staff provided monitoring and supervision visits at the pilot hospitals. The supervision team conducted in-depth interviews with hospital directors using a semi-structured interview guide to assess acceptability, support, and sustainability of CHTC services from an administrative/executive perspective. If the hospital director was not available, a hospital executive administrator (e.g. attending physician or chief of the ANC clinic) responsible for CHTC services was interviewed. There were three trained interviewers who alternately interviewed hospital executive administrators. Each in-depth interview session took about 30 minutes. All interviews were manually recorded as text by the interviewers. The data from in-depth interviews was organized by question and analyzed by identifying consistencies and differences of each responder. Comments from interviewees were summarized by categories. Assessment of staff responsible for CHTC services on feasibility, confidence, and experience in providing CHTC services: During site supervision visits, self-administered questionnaires were sent to staff responsible for CHTC services, asking about workload, confidence, CHTC practices, negative consequences from CHTC during the implementation period, and barriers to and recommendations for successful CHTC program implementation. We compiled the responses in Microsoft Excel 2010 (version 14). 
The data from self-administered questionnaires was analyzed and summarized. Ethics approval: This study was reviewed for human subjects considerations by the U.S. CDC and approved as research not involving human subjects. The Thailand MOPH determined this project to be a Program Evaluation, which does not require approval by the Thailand MOPH institutional review board. Results: Characteristics of CHTC pilot sites Among the 17 hospitals invited to participate in the CHTC trainings and pilot project implementation, 16 hospitals (94%) were able to implement CHTC services (Table 1). One community hospital was unable to implement CHTC services due to staffing limitations. One community hospital implemented CHTC services but did not collect and submit data to DOH; therefore, we report data from 15 (94%) of 16 pilot sites. The median number of counselors or nurses who provided CHTC in provincial hospitals, community hospitals, and health promotion center hospitals was 4 (range: 1–4), 3.5 (range: 2–6), and 4 (range: 3–5), respectively. All 16 hospitals also provided individual HIV counseling and testing for pregnant women and/or partners. Four (80%) of five provincial hospitals, two (25%) of eight community hospitals, and two (67%) of three health promotion center hospitals provided group HIV information before individual or couples pre-test HIV counseling (Table 1). There was significant variation in the proportion of pregnant women presenting with partners, and the proportion of couples accepting HIV testing, by hospital types and regions (p < 0.01) (data not shown). Among the 17 hospitals invited to participate in the CHTC trainings and pilot project implementation, 16 hospitals (94%) were able to implement CHTC services (Table 1). One community hospital was unable to implement CHTC services due to staffing limitations. One community hospital implemented CHTC services but did not collect and submit data to DOH; therefore, we report data from 15 (94%) of 16 pilot sites. The median number of counselors or nurses who provided CHTC in provincial hospitals, community hospitals, and health promotion center hospitals was 4 (range: 1–4), 3.5 (range: 2–6), and 4 (range: 3–5), respectively. All 16 hospitals also provided individual HIV counseling and testing for pregnant women and/or partners. Four (80%) of five provincial hospitals, two (25%) of eight community hospitals, and two (67%) of three health promotion center hospitals provided group HIV information before individual or couples pre-test HIV counseling (Table 1). There was significant variation in the proportion of pregnant women presenting with partners, and the proportion of couples accepting HIV testing, by hospital types and regions (p < 0.01) (data not shown). Uptake of pre-and post-test individual vs. CHTC From October 2009 to April 2010, a total of 4, 524 women were enrolled from ANC clinics at the 15 hospitals (Figure 1). Of these, 2,089 (46%) women presented with their partners. Of these, 2,003 (96%) couples received pretest counseling (Figure 2), including 994 (50%) couples receiving couples pretest counseling, 864 (43%) receiving pre-test couples group education followed by couples consent, and 145 (7.3%) receiving individual risk assessment followed by couples counseling (data not shown). Among the 2,003 couples receiving pretest HIV counseling, 1,723 (86%) accepted HIV testing, and 1,604 (93%) of these returned for post-test counseling (Figure 2). 
Among couples who returned for post-test counseling, 1,443 (90%) received couples post-test counseling and the remainder received individual post-test counseling with or without mutual disclosure (data not shown). Cascades of pre-test and post-test counseling of pregnant women and their partners are shown in Figure 2.Figure 2 Uptakes of couples HIV counseling and testing services among pregnant women and partners, Oct 2009 - April 2010. Uptakes of couples HIV counseling and testing services among pregnant women and partners, Oct 2009 - April 2010. Among 2,435 women who presented alone, 2,415 (99%) received pre-test HIV counseling. Of these, 1,341 (55%) received individual HIV counseling and testing, and 1,072 (44%) received group HIV education and consent. Of the 2,415 women receiving pre-test counseling, 2,324 (96%) accepted HIV testing. Two thousand two hundred and fifty-nine (97%) women returned for post-test HIV counseling. From October 2009 to April 2010, a total of 4, 524 women were enrolled from ANC clinics at the 15 hospitals (Figure 1). Of these, 2,089 (46%) women presented with their partners. Of these, 2,003 (96%) couples received pretest counseling (Figure 2), including 994 (50%) couples receiving couples pretest counseling, 864 (43%) receiving pre-test couples group education followed by couples consent, and 145 (7.3%) receiving individual risk assessment followed by couples counseling (data not shown). Among the 2,003 couples receiving pretest HIV counseling, 1,723 (86%) accepted HIV testing, and 1,604 (93%) of these returned for post-test counseling (Figure 2). Among couples who returned for post-test counseling, 1,443 (90%) received couples post-test counseling and the remainder received individual post-test counseling with or without mutual disclosure (data not shown). Cascades of pre-test and post-test counseling of pregnant women and their partners are shown in Figure 2.Figure 2 Uptakes of couples HIV counseling and testing services among pregnant women and partners, Oct 2009 - April 2010. Uptakes of couples HIV counseling and testing services among pregnant women and partners, Oct 2009 - April 2010. Among 2,435 women who presented alone, 2,415 (99%) received pre-test HIV counseling. Of these, 1,341 (55%) received individual HIV counseling and testing, and 1,072 (44%) received group HIV education and consent. Of the 2,415 women receiving pre-test counseling, 2,324 (96%) accepted HIV testing. Two thousand two hundred and fifty-nine (97%) women returned for post-test HIV counseling. HIV-infection status among couples receiving CHTC and among pregnant women receiving individual counseling and testing Among 1,604 couples returning for post-test counseling, 1,567 (98%) were HIV concordant negative, 6 (0.4%) were HIV concordant positive, 17 (1%) were HIV discordant (7 male+/female- and 10 male-/female+), and 14 (0.9%) had no HIV test results (Figure 3).Figure 3 HIV test results F: female; M: male; C: couples. HIV test results F: female; M: male; C: couples. Of 4,329 women who received HIV testing, 4,213 (97%) returned for post-test counseling. Of these, 4,166 (99%) were HIV negative and 39 (0.9%) were HIV positive. In this study, the overall HIV prevalence among men was 0.86% (15/1,746) and among pregnant women was 0.93% (40/4,298) (data not shown). 
Among 1,604 couples returning for post-test counseling, 1,567 (98%) were HIV concordant negative, 6 (0.4%) were HIV concordant positive, 17 (1%) were HIV discordant (7 male+/female- and 10 male-/female+), and 14 (0.9%) had no HIV test results (Figure 3).Figure 3 HIV test results F: female; M: male; C: couples. HIV test results F: female; M: male; C: couples. Of 4,329 women who received HIV testing, 4,213 (97%) returned for post-test counseling. Of these, 4,166 (99%) were HIV negative and 39 (0.9%) were HIV positive. In this study, the overall HIV prevalence among men was 0.86% (15/1,746) and among pregnant women was 0.93% (40/4,298) (data not shown). Reasons for HIV test refusal among male partners Of 2,005 men receiving pretest counseling, 1,759 (88%) men accepted HIV testing, including all partners of HIV-positive pregnant women. A higher proportion of men (891 (93%)) who received pre-test counseling at a provincial hospital accepted testing than men receiving pre-test counseling at either a community hospital (328 (82%)) or health promotion center hospital (504 (78%)) (Table 1). Among 246 men who refused HIV testing, 192 (78%) gave reasons for refusing testing. The main reasons were: did not want to test (115 (60%)); thought they had no risk (25 (13%)); wanted to test at hospitals near their residence (19 (10%)); fear of needles (17 (9%)); already knew HIV status (10 (5%)); and “other” (6 (3%)) (Data not shown). Of 2,005 men receiving pretest counseling, 1,759 (88%) men accepted HIV testing, including all partners of HIV-positive pregnant women. A higher proportion of men (891 (93%)) who received pre-test counseling at a provincial hospital accepted testing than men receiving pre-test counseling at either a community hospital (328 (82%)) or health promotion center hospital (504 (78%)) (Table 1). Among 246 men who refused HIV testing, 192 (78%) gave reasons for refusing testing. The main reasons were: did not want to test (115 (60%)); thought they had no risk (25 (13%)); wanted to test at hospitals near their residence (19 (10%)); fear of needles (17 (9%)); already knew HIV status (10 (5%)); and “other” (6 (3%)) (Data not shown). Feasibility and acceptability of CHTC services Among the 15 hospitals, 10 (67%) hospital executive administrators participated in the in-depth interview including two hospital directors, four attending physicians, and four chief ANC nurses. Five hospital executive administrators were not available to participate in the interview on the days of the supervision visits. Median age was 43 years (range: 20–60 years) and 5 (50%) were male. Of these, 9 (90%) reported high acceptability for CHTC services due to perceived benefits for pregnant women, which included: early access to HIV prevention, treatment, and care; improved couples communication; and the ability to integrate the program with existing services. Nine interviewees (90%) thought that CHTC services could be integrated into routine services after the end of the pilot period. One interviewee reported “needing more data before making a conclusive assessment about the benefits of CHTC”, concerned that “some couples might have confidentiality concerns and may prefer individual HIV counseling and testing”. Supportive systems for CHTC implementation available at hospitals included: written policies for CHTC (7/10); a CHTC working group (9/10); CHTC training (10/10), and CHTC campaigns at hospitals (8/10). 
Key barriers included: inadequate number of staff (5/10); staff being too busy (3/10); lack of tools (2/10); no clearly defined leader for CHTC services on the team (2/10); provision of CHTC services not being included in promotion criteria for staff (2/10); and no clear policy or standardized guidelines on how to provide CHTC in ANC settings (1/10). Concerns included potential negative consequences of CHTC for discordant couples and additional staff workload. Twenty-eight health care workers (HCWs) from the 15 hospitals participated in the survey, including 17 nurses and 6 counseling nurses. Median age was 43 years (range: 33–54 years) and 26 (93%) were female. They reported between 2 and 65 clients per day in the clinic, with each HCW providing CHTC to up to 16 couples per day. More than 85% of HCWs reported that they were confident in providing post-test counseling for couples with HIV seroconcordant negative, seroconcordant positive, and serodiscordant results. Among 22 HCWs, 10 (45%) provided group pre-test CHTC and 6 (27%) provided group post-test CHTC. Of 14 HCWs who reported experience in providing post-test counseling for discordant couples, 4 (29%) reported negative consequences of mutual disclosure of HIV results for some clients, such as separation or family conflict. HCWs reported barriers to CHTC program implementation that included: high workload and a limited number of staff trained in CHTC, leading to extended waiting times for clients; lack of clear national and hospital policies for CHTC implementation; a lack of standardized tools and materials to promote male participation in CHTC services at hospitals and in the community; partners of pregnant women refusing to participate in CHTC services; limited private space to provide CHTC services; and no waiting area for partners of pregnant women. HCWs provided recommendations for improvement of CHTC program implementation, including: developing and disseminating clear CHTC national and hospital policies; free HIV testing support for partners of pregnant women; better support from program managers; an adequate number of staff allocated for counseling; provision of private rooms for counseling and better space for group education; ongoing CHTC training and supervision for HCWs; development of a CHTC manual on how to organize CHTC services in hospitals with high numbers of new ANC cases and a limited number of counselors; and tools and materials to promote access of male partners to CHTC services and to use as guidance for new counselors. In addition, several HCWs interviewed suggested that CHTC uptake should be included as a national target indicator for maternal and child health counseling.
Characteristics of CHTC pilot sites: Among the 17 hospitals invited to participate in the CHTC training and pilot project implementation, 16 (94%) were able to implement CHTC services (Table 1). One community hospital was unable to implement CHTC services due to staffing limitations.
One community hospital implemented CHTC services but did not collect and submit data to the DOH; therefore, we report data from 15 (94%) of the 16 pilot sites. The median number of counselors or nurses who provided CHTC in provincial hospitals, community hospitals, and health promotion center hospitals was 4 (range: 1–4), 3.5 (range: 2–6), and 4 (range: 3–5), respectively. All 16 hospitals also provided individual HIV counseling and testing for pregnant women and/or partners. Four (80%) of five provincial hospitals, two (25%) of eight community hospitals, and two (67%) of three health promotion center hospitals provided group HIV information before individual or couples pre-test HIV counseling (Table 1). The proportion of pregnant women presenting with partners, and the proportion of couples accepting HIV testing, varied significantly by hospital type and region (p < 0.01) (data not shown).
Discussion: This pilot of a CHTC program at 15 hospitals in Thailand demonstrated that CHTC implementation in ANC settings is feasible. Uptake of CHTC services among male partners of pregnant women in ANC settings was high, as was acceptability among hospital staff and administration. Most hospitals participating in the program were able to successfully implement CHTC services. While this is the first report of a project of this type from Thailand, these findings are similar to those reported from sub-Saharan Africa and Cambodia [9-11]. Developing clear CHTC policies, having strong support from program managers, having a CHTC manual and materials, and training adequate numbers of HCWs to provide services were important elements contributing to successful program implementation. HCWs reported a high level of confidence in providing CHTC services, suggesting that the adapted training curriculum was sufficient to provide basic knowledge and skills for counselors. Male participation in and uptake of CHTC services in this project were higher than reported in African and Indian settings [10,22], although overall uptake of CHTC services was still well below 50%. The high rate of couples accepting HIV testing in our report may be due to one or more of several factors: first, CHTC in most pilot hospitals used opt-out techniques; second, the universal health coverage scheme in Thailand provided free HIV testing for pregnant women, partners of pregnant women, and other at-risk populations [18]; and third, Thailand promoted integration of CHTC in ANC settings with other existing maternal and child health (MCH) programs [23] in order to reduce stigma and discrimination. In this study, the proportion of couples accepting HIV testing was lower than 60% in one community hospital and one health promotion center hospital.
Hospital staff reported that possible reasons for this low uptake included counselor issues (e.g., encouraging only partners of HIV-infected pregnant women or partners with risk behaviors to get tested) and health worker issues (e.g., workers not being aware of the health benefit of free HIV testing for partners of pregnant women). We do not have data on reasons for non-presentation of male partners at ANC; it is possible that some men were not aware of the potential benefits of attending ANC with their partners due to a lack of promotional campaigns in some hospitals and communities, were not aware of the availability of CHTC services in the ANC setting [23,24], were not available due to employment or residence in another province, were afraid to learn the results of HIV testing, had other socioeconomic factors or relationship issues affecting their decisions [24], or experienced unfriendly services for men at these CHTC service delivery sites [25]. Strategies that have been used in health facilities in Thailand and other countries to increase male involvement include: making ANC services more male-friendly (e.g., providing comfortable waiting space or fast-tracked registration for men); providing health education to change beliefs and attitudes [10]; and integrating ANC CHTC with other MCH programs such as thalassemia screening, syphilis screening, and parenting classes [8]. Additional strategies to attract men to ANC in Thailand should be instituted and evaluated; for example, in addition to the strategies noted above, engaging celebrities as role models [26] to promote CHTC services. In addition, for men who do not participate in CHTC services during ANC, CHTC should be offered during the delivery or postpartum period, when most men do present with their partners [1,8,25]. Loss to follow-up for men was also an issue in this project, possibly related to the fact that most participants needed to return for post-test results at the following clinical visit. The return rate for men may be improved by implementing same-day HIV test results [27]. The feasibility and acceptability of same-day results among male partners of pregnant women in ANC settings should be examined. Identifying new HIV cases is a cornerstone of HIV control, and identifying serodiscordant couples is critical for proper HIV prophylaxis, treatment, and care for women, men, and babies. In this study, a total of 13 HIV-positive men were identified during CHTC. Of these men, seven had HIV-negative partners and would therefore not have been captured during standard ANC HIV-testing practices. Clearly, identifying new HIV cases is important in any venue. Identifying serodiscordant couples in which the male partner is HIV positive is important to determine proper PMTCT interventions and to identify women who may be in the window period of acute HIV infection. For HIV-infected persons, CHTC can also support more effective provision of ART and adherence [28], and can increase uptake of and adherence to PMTCT [22], including early infant HIV diagnosis. Supporting couples counseling also allows couples the opportunity to share their vision of family goals and to make informed decisions about HIV and STI prevention and reproductive health, including family planning and contraception [1,29]. Although this pilot project demonstrated success, national adoption and scale-up require overcoming some substantial obstacles.
Some HCWs in this pilot expressed concern about additional workload and negative consequences of CHTC, particularly for serodiscordant couples. Providing information to couples as a group, using videotape and/or in-person didactic counseling [30], was a common technique used in this project, particularly in tertiary care hospitals with high numbers of new ANC cases and limited staff. Studies from Africa have demonstrated varying associations between HIV diagnosis and intimate partner violence [25,26]. Limited information exists regarding these types of social outcomes of CHTC in Thailand [31], and more research is needed in this area to determine potential adverse consequences of positive test results, especially for discordant couples. The interpretation of these results is subject to at least four limitations. First, the enrollment was a convenience sample. We did not collect data on the total number of pregnant women and their partners who came to the first ANC visit during the review period, or the number of those who were approached to participate in this project. Based on national PMTCT intervention monitoring system reports from participating hospitals during the enrollment period, we estimated that enrollment covered approximately 50% of pregnant women at participating sites. It is therefore possible that sampling bias led to over- or under-estimation of the uptake of couples counseling that would be seen if the program were expanded nationally. Second, missing data were reported, and this may have led to over- or under-estimates of some of the findings. Third, there were variations in the proportion of men presenting at different ANC settings, and in the uptake of pre- and post-test CHTC by hospital type and region. These variations are likely to have implications for scale-up of this initiative, and findings may need to be interpreted in specific regional and other contexts for optimal program implementation. Finally, we did not collect demographic information on participants or other information related to factors associated with acceptance of CHTC services. This information would be useful to help determine the feasibility of national scale-up of this program, and is an area for future study. Following implementation of this pilot CHTC program, revised Thailand PMTCT guidelines, released in October 2010 [18,19], recommended routine couples counseling and testing in ANC clinics. In 2013, CHTC programs in ANC settings were available in more than 70% of MOPH hospitals in all provinces in Thailand, but CHTC uptake was still low (22%) [13]. Moving forward, we recommend promoting the involvement of male partners in ANC by providing and improving male-friendly services (e.g., free HIV testing, reduced waiting times, and waiting space for men), training HCWs to strengthen couples counseling and testing skills, and securing support for CHTC implementation from hospital executives, all crucial for successful program implementation. Outcomes of couples counseling and testing during national implementation, and the feasibility and acceptability of same-day test results for male partners, should be studied in order to maximize program success. Conclusions: CHTC implemented in ANC settings helped identify more HIV-positive men whose partners were negative than did previous practice, with high acceptability among HCWs. Major concerns were negative consequences of CHTC for discordant couples and the additional workload of the program.
Among couples who came to the first ANC visit together, a high proportion received HIV counseling and testing. Strategies are needed to increase the number and proportion of male partners who access CHTC services. To implement CHTC, hospitals need clear policies, support from hospital leaders, trained and supported personnel, private space, training, tools, and materials. There is an opportunity to identify new HIV cases and prevent infection among children, both major aspects of a comprehensive HIV program, and achievable in Thailand.
Background: Couples HIV testing and counseling (CHTC) at antenatal care (ANC) settings allows pregnant women to learn the HIV status of themselves and their partners. Couples can make decisions together to prevent HIV transmission. In Thailand, men were tested at ANC settings only if their pregnant partners were HIV positive. A CHTC program based in ANC settings was developed and implemented at 16 pilot hospitals in 7 provinces during 2009-2010. Methods: Cross-sectional data were collected using standard data collection forms from all pregnant women and accompanying partners who presented at the first ANC visit at the 16 hospitals. CHTC data for women and partners were analyzed to determine service uptake and HIV test results among couples. In-depth interviews were conducted with staff of participating hospitals during field supervision visits to assess the feasibility and acceptability of CHTC services. Results: During October 2009-April 2010, 4,524 women initiating ANC were enrolled. Of these, 2,435 (54%) came for ANC alone and 2,089 (46%) came with partners. Among men presenting with partners, 2,003 (96%) received couples counseling. Of these, 1,723 (86%) men and all pregnant women accepted HIV testing. Among the 1,723 couples testing for HIV, 1,604 (93%) returned for test results. Of these, 1,567 (98%) were concordant negative, 6 (0.4%) were concordant positive, and 17 (1%) were HIV discordant (7 male+/female- and 10 male-/female+). Nine of ten (90%) executive hospital staff reported high acceptability of CHTC services. Conclusions: CHTC implemented in ANC settings helps identify more HIV-positive men whose partners are negative than previous practice, with high acceptability among hospital staff.
Background: In 2012, the World Health Organization (WHO) recommended that couples and partners of pregnant women in antenatal care (ANC) settings be offered voluntary counseling and testing (VCT) with support for mutual disclosure [1,2], and that antiretroviral therapy (ART) be offered to HIV-positive partners in serodiscordant relationships regardless of CD4 status [1] in order to reduce HIV transmission to uninfected partners [3]. Offering couples HIV testing and counseling (CHTC) at ANC settings provides many benefits, including increasing male participation in ANC services, enhancing communication between couples about safe sex practices [4,5], encouraging men to get tested and to know their HIV status, and preventing new HIV infections [1]. Couples who are aware of their partner's and their own HIV status are more likely to adopt safe sex behaviors than people who are unaware of their HIV status [6,7]. If one or both members of a couple test positive, they can access and adhere to HIV treatment and care and to interventions for prevention of mother-to-child HIV transmission (PMTCT). HIV-uninfected pregnant women and partners also benefit from CHTC through better-informed decisions about HIV prevention and reproductive health, including contraception [1]. Offering CHTC at ANC settings may mitigate CHTC barriers, stigma, and discrimination, because CHTC can be integrated into other routine maternal and child health services and male participation activities routinely provided in ANC settings [8]. However, CHTC can be complex to implement because of the limited number of staff at service delivery sites, acceptability problems among health care providers and clients, and potential adverse family consequences including conflict, separation, and intimate partner violence. Most reports of CHTC experience are from projects based in low-income countries, particularly in sub-Saharan Africa [9,10] and Cambodia [11]; none exist for Thailand. Thailand, an upper middle-income country, has a concentrated HIV epidemic with a prevalence of 1.1% in the adult population [12]. Approximately 800,000 pregnant women deliver in Thailand each year, and 98% of pregnant women in Thailand receive ANC at health facilities, where HIV testing is routine and nearly all accept HIV testing [13]. In most ANC settings in Thailand, VCT is provided to men only when their pregnant partners are HIV positive. Studies in Thailand have reported high (30-50%) serodiscordance rates among HIV-infected couples [14]; about 32% of new HIV infections in 2012 were among low-risk cohabiting couples (e.g., husband to wife and wife to husband) [15]. One study in Thailand reported that 0.05% of pregnant women had HIV seroconversion during pregnancy (presumably from their HIV-positive partners) [16]. Although Thailand has implemented a successful PMTCT program, the 2008 national PMTCT program evaluation [17] reported that only half of the partners of HIV-infected pregnant women received HIV testing within 6 months after delivery. In addition, only 15% of HIV-infected women had received CHTC at ANC settings, but more than 70% were interested in receiving CHTC [17] for the opportunity to communicate more openly with their partners about HIV and PMTCT during pregnancy and the postpartum period [17]. These data highlight the need for improved HIV testing rates among couples at ANC settings in Thailand.
In this paper, we describe the pilot implementation of a CHTC program in ANC settings in 17 hospitals in 7 provinces in Thailand during 2009–2010. Following this pilot implementation, the Thailand national PMTCT guidelines 2010 [18,19] recommended routine CHTC in ANC clinics at public hospitals. Lessons learned from this pilot program provide recommendations for improvement and scale-up of this important program.
12,008
334
[ 253, 855, 70, 156, 72, 46, 235, 342, 168, 194, 733 ]
16
[ "chtc", "hiv", "counseling", "couples", "test", "women", "services", "partners", "anc", "hospitals" ]
[ "couples hiv testing", "hiv counseling testing", "pretest hiv counseling", "offering couples hiv", "encouraging partners hiv" ]
null
[CONTENT] Couples counseling | Pregnant women | ANC | Thailand [SUMMARY]
null
[CONTENT] Adult | Attitude of Health Personnel | Counseling | Cross-Sectional Studies | Family Characteristics | Female | HIV Infections | Humans | Male | Mass Screening | Men | Middle Aged | Patient Acceptance of Health Care | Personnel, Hospital | Pregnancy | Pregnancy Complications, Infectious | Pregnant Women | Prenatal Care | Sexual Partners | Thailand | Young Adult [SUMMARY]
null
[CONTENT] couples hiv testing | hiv counseling testing | pretest hiv counseling | offering couples hiv | encouraging partners hiv [SUMMARY]
null
[CONTENT] chtc | hiv | counseling | couples | test | women | services | partners | anc | hospitals [SUMMARY]
null
[CONTENT] hiv | thailand | anc | chtc | pmtct | settings | anc settings | partners | pregnant | couples [SUMMARY]
null
[CONTENT] chtc | hiv | counseling | test | couples | 10 | women | hospitals | services | hcws [SUMMARY]
[CONTENT] chct | major | identify | hiv | proportion | high | program | infection children major aspects | visit high proportion received | major aspects [SUMMARY]
[CONTENT] hiv | chtc | couples | counseling | test | anc | women | partners | testing | services [SUMMARY]
[CONTENT] ANC ||| ||| Thailand | ANC ||| ANC | 16 | 7 | 2009-2010 [SUMMARY]
null
[CONTENT] October 2009-April 2010 | 4,524 | ANC ||| 2,435 | 54% | ANC | 2,089 | 46% ||| 2,003 | 96% ||| 1,723 | 86% ||| 1,723 | 1,604 | 93% ||| 1,567 | 98% | 6 | 0.4% | 17 | 1% | 7 male+/female- and | 10 ||| Nine | 90% [SUMMARY]
[CONTENT] ANC [SUMMARY]
[CONTENT] ANC ||| ||| Thailand | ANC ||| ANC | 16 | 7 | 2009-2010 ||| ANC | 16 ||| ||| ||| October 2009-April 2010 | 4,524 | ANC ||| 2,435 | 54% | ANC | 2,089 | 46% ||| 2,003 | 96% ||| 1,723 | 86% ||| 1,723 | 1,604 | 93% ||| 1,567 | 98% | 6 | 0.4% | 17 | 1% | 7 male+/female- and | 10 ||| Nine | 90% ||| ANC [SUMMARY]
Frailty trajectories in community-dwelling older adults during COVID-19 pandemic: The PRESTIGE study.
35842830
Frailty has been recognized as a potential surrogate of biological age and a relevant risk factor for COVID-19 severity. It is therefore important to explore frailty trajectories during the COVID-19 pandemic and to understand how COVID-19 directly and indirectly impacts frailty.
BACKGROUND
We enrolled 217 community-dwelling older adults with available information on frailty, as assessed by the multidimensional frailty model, both at baseline and at one-year follow-up using Multidimensional Prognostic Index (MPI) tools. Pre-frail/frail subjects were identified at baseline as those with an MPI score >0.33 (MPI grades 2-3). Frailty worsening was defined as an MPI difference between 12-month follow-up and baseline of ≥0.1. A multivariable logistic regression model was fitted to identify predictors of worsening frailty.
METHODS
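The operational definitions above (grade cut-offs and the ≥0.1 worsening criterion) can be made concrete with a minimal sketch in Python; the function names are ours, not the study's.

def mpi_grade(score):
    # MPI grade: 1 (robust) if score <= 0.33, 2 if 0.34-0.66, 3 if > 0.66.
    if score <= 0.33:
        return 1
    return 2 if score <= 0.66 else 3

def frailty_worsened(baseline_mpi, followup_mpi):
    # Worsening criterion used in the study: follow-up minus baseline >= 0.1.
    return (followup_mpi - baseline_mpi) >= 0.1

# Example: baseline MPI 0.40 (grade 2, pre-frail/frail); follow-up MPI 0.52
# counts as worsened because 0.52 - 0.40 = 0.12 >= 0.1.
print(mpi_grade(0.40), frailty_worsened(0.40, 0.52))  # -> 2 True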
Frailer subjects at baseline (MPI grades 2-3 = 48.4%) were older, more frequently female, and had higher rates of hospitalization and Sars-CoV-2 infection compared to robust subjects (MPI grade 1). Having MPI grades 2-3 at baseline was associated with a higher risk of further worsening of frailty (adjusted odds ratio (aOR): 13.60, 95% confidence interval (CI): 4.01-46.09), independently of age, gender, and Sars-CoV-2 infection. Specifically, frail subjects without COVID-19 (aOR: 14.84, 95% CI: 4.26-51.74) as well as those with COVID-19 (aOR: 12.77, 95% CI: 2.66-61.40, p = 0.001) had a significantly higher risk of worsening frailty.
RESULTS
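Adjusted odds ratios of this kind are typically obtained from a fitted logistic model by exponentiating the coefficients. The sketch below illustrates the approach with statsmodels on simulated data; the DataFrame, column names, and coefficients are hypothetical illustrations of our own choosing, not the study's dataset or results.

# Hedged sketch of an adjusted-odds-ratio analysis on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 217
df = pd.DataFrame({
    "age": rng.normal(75, 6, n).round(),
    "female": rng.integers(0, 2, n),
    "covid": rng.integers(0, 2, n),
    "frail_baseline": rng.integers(0, 2, n),  # MPI grades 2-3 at baseline
})
# Simulated outcome, only so the sketch runs end to end.
logit_p = -2.0 + 2.5 * df["frail_baseline"] + 0.3 * df["covid"]
df["worsened"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("worsened ~ frail_baseline + age + female + covid", data=df).fit()
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals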
The effects of the COVID-19 pandemic on frailer community-dwelling individuals extend far beyond the infection and disease themselves, with significant deterioration of frailty status in both infected and non-infected subjects.
CONCLUSIONS
[ "Female", "Humans", "Aged", "Frailty", "Independent Living", "COVID-19", "Geriatric Assessment", "Pandemics", "SARS-CoV-2" ]
9350279
BACKGROUND
Frailty is a potentially reversible geriatric condition characterized by a reduction of biological reserves that predisposes to numerous negative outcomes, including disability and mortality. 1 With extended life expectancy and a rapidly aging population, assessment of frailty status may represent a useful proxy for biological age, beyond simple chronological age. 2 The divergence between these two perspectives on a patient's age became particularly evident during the COVID-19 pandemic. 3 , 4 Although initial epidemiological data suggested that COVID-19 was a geriatric condition with worse prognosis in older and multimorbid subjects, 5 this ageistic criterion was progressively reviewed and superseded. 4 Patients of the same age can have completely different predispositions to contracting Sars-CoV-2 and to experiencing severe consequences of the disease. Rather, it emerged that frailty is a better predictor of disease severity and of negative outcomes in hospitalized older patients (e.g. short- and long-term mortality, length of hospital stay, admission to intensive care units, and need for invasive mechanical ventilation). 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 Frail subjects commonly experience atypical presentations of COVID-19, including hypotension, sudden functional decline, falls, and delirium, which may lead to diagnostic delay and further spread of infection. 14 Indirect effects of the COVID-19 pandemic, mainly related to restriction measures, have also been extensively studied, including difficulties in access to care for subjects with chronic diseases and a marked increase in the incidence of psychosocial disorders (e.g. depression, anxiety and loneliness), malnutrition (over- or under-nutrition), and cognitive impairment, all of which may contribute to the incidence and progression of frailty. 15 , 16 , 17 However, there is still a paucity of direct evidence on the impact of the COVID-19 outbreak on frailty status in a population-based setting. Recently, a novel conceptual model for frailty evaluation has been proposed, based on the tenets of Comprehensive Geriatric Assessment (CGA) and called the "multidimensional model". 18 This model, using the Multidimensional Prognostic Index (MPI) as its assessment tool, 19 has been applied in different settings and populations, including community-dwelling subjects, in whom the prevalence of multidimensional frailty was estimated at 13.3%. 20 Owing to its well-demonstrated clinimetric properties, 18 the MPI has shown excellent accuracy in predicting several negative outcomes (e.g. mortality, hospitalization and falls), also in the community-based setting. 21 , 22 , 23 Therefore, in this study, we aimed to explore the clinical course of frailty, as measured by MPI tools, over one year of the COVID-19 pandemic in a cohort of community-dwelling older adults, to understand how COVID-19 directly and indirectly impacted frailty.
METHODS
Study population: The PRESTIGE project (Involved and Resilient: Aging in Genoa) is a prospective, observational study aimed at exploring frailty and social vulnerability in community-dwelling older residents of the metropolitan area of Genoa, Italy. We included subjects who were: (1) aged 65 years or over; (2) community-dwellers attending the University of the Third Age (U3A, an international movement whose aim is encouraging the education of retired members of the community) in Genoa, according to a lifelong learning program for subjects in their "third age" of life; (3) without acute clinical conditions; and (4) able to provide informed consent. Participants were enrolled between November 2019 and February 2020 following the World Medical Association's 2008 Declaration of Helsinki and the guidelines for Good Clinical Practice. Reporting of the study conforms to broad EQUATOR guidelines. 24 From the original sample of 1354 subjects, 451 were randomly selected to undergo tele-consult follow-up at 12 months. Of the 380 subjects who agreed to participate, 217 completed the follow-up and were eligible for this post hoc analysis (Figure 1: Flow of study participants through the study). The included participants were interviewed by phone call at the 12-month follow-up, between November 2020 and February 2021, in order to reassess frailty and to gather information on any Sars-CoV-2 infection. Considering that COVID-19 emerged in Italy in March 2020, we were able to estimate the incidence of COVID-19 positivity during the first and second pandemic waves in the Liguria region. Information regarding hospitalizations was also collected at follow-up. The Ethical Committee of the Department of Education of the University of Genoa (DISFOR), Genoa, Italy, approved the present study. All participants read and signed the informed consent form, and all participants' records and personal information were rendered anonymous before statistical analysis.
Frailty: Frailty assessment was based on the multidimensional model of frailty, using Multidimensional Prognostic Index (MPI) tools. 18 In this study we adopted the Self-Administered MPI Short Form (SELFY-MPI-SF) 25 and the Telephone-administered MPI (TELE-MPI), 26 both deriving from the standard Multidimensional Prognostic Index (MPI). 19 These have been developed and validated to extend the multidimensional approach to frailty, based on the principles of Comprehensive Geriatric Assessment (CGA), to more specific settings such as community/general practice (SELFY-MPI-SF) 25 and telehealth (TELE-MPI). 26 Both tools showed strong agreement with the standard MPI. 25 , 26 Self-administered MPI short form (SELFY-MPI-SF): The SELFY-MPI-SF was used to evaluate frailty at baseline by combining information on the following eight domains, assessed through eight self-administered scales: 25 (1) functional status, evaluated by the Barthel Activities of Daily Living (ADL) sub-scale, which explores the level of dependence/independence in six daily personal care activities (feeding, bathing, personal hygiene, dressing, faecal and urinary continence, and toilet use); 27 , 28 (2) mobility, evaluated by the Barthel mobility sub-scale, assessing the ability to get in and out of a bed/chair, walk, and go up and down stairs; 27 , 28 (3) independence in the instrumental activities of daily living (IADL), assessed through the self-administered version of Lawton's IADL scale, exploring independence in eight activities (telephone use, grocery shopping, meal preparation, housekeeping, laundry, travel, medication, and handling finances); 29 (4) cognitive status, measured with the Test Your Memory (TYM), a 10-task self-administered test that explores orientation, ability to copy a sentence, semantic knowledge, calculation, verbal fluency, similarities, naming, visuo-spatial abilities, and recall of a previously copied sentence, with a final score between 0 and 50 and lower scores indicating worse cognitive function; 30 (5) nutritional status, measured with the self-administered version of the Mini Nutritional Assessment Short-Form (MNA-SF), which collects information on anthropometric measures (body mass index and weight loss), decline in food intake, mobility, recent psychological stress, and neuropsychological problems; 31 (6) the number of drugs regularly taken by the subject; (7) comorbidity, evaluated as the number of pathologies requiring chronic drug therapy among the first 13 categories of the Cumulative Illness Rating Scale (CIRS); 32 and (8) co-habitation status (living alone, in an institution, or with family members).
For each domain, a tripartite hierarchy is adopted based on conventional cut-off points: a score of 0 indicates no problems, 0.5 minor problems, and 1.0 major problems. The average of these eight domain scores is the SELFY-MPI-SF score, with values ranging between 0 and 1 (the higher the score, the greater the degree of frailty). 25 According to the previously established MPI categories, 19 the SELFY-MPI-SF was also expressed as three grades of risk: grade 1, low risk (values ≤0.33); grade 2, moderate risk (values between 0.34 and 0.66); and grade 3, high risk (values >0.66). 25 We defined pre-frail/frail subjects as those with SELFY-MPI-SF grades 2 or 3; those with SELFY-MPI-SF grade 1 were identified as robust participants.
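The composite score just described reduces to an average of eight values in {0, 0.5, 1}; a minimal sketch in Python follows (the domain keys and function name are ours, not the scales').

# Minimal sketch of the SELFY-MPI-SF composite score described above.
DOMAINS = ("adl", "mobility", "iadl", "cognition",
           "nutrition", "drugs", "comorbidity", "cohabitation")

def selfy_mpi_sf(domain_scores):
    """Mean of the eight tripartite domain scores; result is in [0, 1]."""
    assert all(domain_scores[d] in (0, 0.5, 1.0) for d in DOMAINS)
    return sum(domain_scores[d] for d in DOMAINS) / len(DOMAINS)

example = {d: 0 for d in DOMAINS}
example.update(mobility=0.5, nutrition=0.5, comorbidity=1.0)
print(selfy_mpi_sf(example))  # (0.5 + 0.5 + 1.0) / 8 = 0.25 -> grade 1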
Telephone-administered MPI (TELE-MPI): The TELE-MPI was collected 12 months after the baseline assessment by contacting participants by phone call, from November 1, 2020 to February 28, 2021. Like the SELFY-MPI-SF, the TELE-MPI considers the same eight areas of the CGA. 26 The only domains evaluated differently from the SELFY-MPI-SF were mobility and cognition. Mobility was evaluated by inquiring about: (1) the ability to transfer from bed to a chair or wheelchair; (2) walking at least ten feet without any assistance; and (3) going up and down stairs without assistance; one point was assigned for each task the subject was able to perform. 21 Cognitive performance was assessed using the Short Portable Mental Status Questionnaire (SPMSQ), with a score ranging from 10 (worst) to 0 (best). 33 Despite these differences between the two scales, the same tripartite hierarchy was adopted to assign a score of 0 (no problems), 0.5 (minor problems) or 1.0 (major problems), using previously proposed scale-specific cut-off values. 26 The sum of the scores assigned to each domain was then divided by 8 to obtain a final TELE-MPI risk score ranging between 0 and 1 (the higher the score, the greater the degree of frailty).
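Since the TELE-MPI final score is likewise the sum of eight tripartite domain scores divided by 8 (i.e., their mean), baseline SELFY-MPI-SF and follow-up TELE-MPI scores share the same 0-1 range and can be compared directly using the worsening criterion defined earlier; the values in the sketch below are made up for illustration.

# Illustrative frailty trajectory: baseline SELFY-MPI-SF score vs.
# follow-up TELE-MPI score (both are means of eight tripartite scores).
baseline_selfy = 0.25                                           # grade 1 at baseline
followup_tele = sum([0.5, 1.0, 0.5, 0, 0.5, 0.5, 0, 0.5]) / 8   # = 0.4375
worsened = (followup_tele - baseline_selfy) >= 0.1              # study criterion
print(baseline_selfy, followup_tele, worsened)                  # 0.25 0.4375 True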
21 Cognitive performances were assessed using the Short Portable Mental Status Questionnaire (SPMSQ) scale with a score ranging from 10 (worst score) to 0 (best score). 33 Despite these differences in these two scales, the same tripartite hierarchy was adopted to assign a score of: 0 (no problems), 0.5 (minor problems) and 1.0 (major problems), using previously proposed scale‐specific cut‐off values. 26 Thus, the sum of the scores assigned to each domain was divided by 8 to obtain a final TELE‐MPI risk score ranging ranging between 0 and 1 (the higher the score, the greater the degree of frailty). Frailty assessment was based on multidimensional model of frailty using Multidimensional Prognostic Index (MPI) tools. 18 In this study we adopted Self‐Administered MPI Short Form (SELFY‐MPI‐SF) 25 and Telephone‐administered MPI (TELE‐MPI) 26 both deriving from the standard Multidimensional Prognostic Index (MPI). 19 These have been developed and validated to extend the spectrum of application of multidimensional approach to frailty, based on the principles of Comprehensive Geriatric Assessment (CGA), in more specific settings as community/general practice (SELFY‐MPI‐SF) 25 and telehealth (TELE‐MPI). 26 Both tools showed strong agreement with standard MPI. 25 , 26 Self‐administered MPI short form (SELFY‐MPI‐SF) The SELFY‐MPI‐SF was used to evaluate frailty at baseline, by combining information on the following eight domains, assessed through eight self‐administered scales; 25 Functional status: evaluated by the Barthel Activities of Daily Living (ADL) sub‐scale which explores the level of dependence/independence in six daily personal care activities such as feeding, bathing, personal hygiene, dressing, faecal and urinary continence and toilet use; 27 , 28 Mobility: evaluated by the Barthel mobility sub‐scale assessing the abilities to getting in and out of bed/chair, walking, and going up and down the stairs; 27 , 28 Independence in the instrumental activities of daily living (IADL): assessed through the self‐administered version of Lawton's IADL scale exploring the independence in eight activities such as telephone use, grocery shopping, meal preparation, housekeeping, laundry, travel, medication, handling finances; 29 Cognitive status: measured with the Test Your Memory (TYM), a 10‐task self‐administered test that explores following cognitive domains: orientation, ability to copy a sentence, semantic knowledge, calculation, verbal fluency, similarities, naming, visuo‐spatial abilities and recall of a previously copied sentence. 30 Final score ranges between 0 and 50, with lower scores indicating worst cognitive function; 30 Nutritional status: measured with the self‐administered version of the Mini Nutritional Assessment Short‐Form (MNA‐SF), which collects information on anthropometric measures (body mass index and weight loss), decline in food intake, mobility, recent psychological stress and neuropsychological problems; 31 Number of drugs regularly taken by the subject; Comorbidity: evaluated by number of pathologies requiring chronic drug therapy, among the first 13 categories of the Cumulative Illness Rating Scale (CIRS); 32 Co‐habitation status including living alone, in an institution or with family members. 
Functional status: evaluated by the Barthel Activities of Daily Living (ADL) sub‐scale which explores the level of dependence/independence in six daily personal care activities such as feeding, bathing, personal hygiene, dressing, faecal and urinary continence and toilet use; 27 , 28 Mobility: evaluated by the Barthel mobility sub‐scale assessing the abilities to getting in and out of bed/chair, walking, and going up and down the stairs; 27 , 28 Independence in the instrumental activities of daily living (IADL): assessed through the self‐administered version of Lawton's IADL scale exploring the independence in eight activities such as telephone use, grocery shopping, meal preparation, housekeeping, laundry, travel, medication, handling finances; 29 Cognitive status: measured with the Test Your Memory (TYM), a 10‐task self‐administered test that explores following cognitive domains: orientation, ability to copy a sentence, semantic knowledge, calculation, verbal fluency, similarities, naming, visuo‐spatial abilities and recall of a previously copied sentence. 30 Final score ranges between 0 and 50, with lower scores indicating worst cognitive function; 30 Nutritional status: measured with the self‐administered version of the Mini Nutritional Assessment Short‐Form (MNA‐SF), which collects information on anthropometric measures (body mass index and weight loss), decline in food intake, mobility, recent psychological stress and neuropsychological problems; 31 Number of drugs regularly taken by the subject; Comorbidity: evaluated by number of pathologies requiring chronic drug therapy, among the first 13 categories of the Cumulative Illness Rating Scale (CIRS); 32 Co‐habitation status including living alone, in an institution or with family members. For each domain, a tripartite hierarchy is adopted based on conventional cut‐off points: a score of 0 indicates no problems, 0.5 minor problems and 1.0 major problems. The average of all these eight domains corresponds to SELFY‐MPI‐SF score, with values ranging between 0 and 1 (the higher the score, the greater the degree of frailty). 25 Also, according to the previously established MPI categories, 19 the SELFY‐MPI‐SF was expressed as three grades of risk: SELFY‐MPI‐SF grade 1 low risk (values ≤0.33), SELFY‐MPI‐SF grade 2 moderate risk (values between 0.34 and 0.66) and SELFY‐MPI‐SF grade 3 high risk (MPI value >0.66). 25 We defined as pre‐frail/frail subjects those with SELFY‐MPI‐SF grades 2 or 3, conversely those with SELFY‐MPI‐SF grade 1 were identified as robust participants. 
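This aggregation is simple enough to express directly. The following is a minimal sketch of the SELFY‐MPI‐SF scoring and grading described above; the domain names and input format are illustrative assumptions, while the tripartite coding, the averaging over eight domains and the grade cut‐offs follow the text.

```python
# Minimal sketch of the SELFY-MPI-SF aggregation. Domain keys are
# illustrative; the 0 / 0.5 / 1.0 coding, the average over eight
# domains, and the grade cut-offs are taken from the description above.

DOMAINS = (
    "functional_status",  # Barthel ADL sub-scale
    "mobility",           # Barthel mobility sub-scale
    "iadl",               # Lawton's IADL scale
    "cognition",          # Test Your Memory (TYM)
    "nutrition",          # MNA-SF
    "medications",        # number of drugs regularly taken
    "comorbidity",        # CIRS categories under chronic therapy
    "cohabitation",       # alone / institution / with family
)

def selfy_mpi_sf_score(domain_scores: dict) -> float:
    """Average the eight tripartite domain scores (each 0, 0.5 or 1.0)."""
    values = [domain_scores[d] for d in DOMAINS]
    if any(v not in (0.0, 0.5, 1.0) for v in values):
        raise ValueError("each domain must be coded 0, 0.5 or 1.0")
    return sum(values) / len(DOMAINS)

def selfy_mpi_sf_grade(score: float) -> int:
    """Map a score in [0, 1] to the three MPI risk grades."""
    if score <= 0.33:
        return 1  # low risk ("robust" in this study)
    elif score <= 0.66:
        return 2  # moderate risk (pre-frail/frail)
    return 3      # high risk (pre-frail/frail)

# Example: minor problems in mobility and nutrition, major in comorbidity.
example = dict.fromkeys(DOMAINS, 0.0)
example.update(mobility=0.5, nutrition=0.5, comorbidity=1.0)
print(selfy_mpi_sf_score(example), selfy_mpi_sf_grade(0.25))  # 0.25 1
```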
Telephone‐administered MPI (TELE‐MPI)
The TELE‐MPI was collected 12 months after the baseline assessment by contacting participants by phone, from November 1, 2020 to February 28, 2021. The TELE‐MPI considered the same eight CGA areas as the SELFY‐MPI‐SF. 26 The only domains evaluated differently were mobility and cognition. Mobility was evaluated by inquiring about (1) the ability to transfer from bed to chair or wheelchair, (2) walking at least ten feet without any assistance and (3) going up and down the stairs without assistance; one point was assigned for each task the subject was able to perform. 21 Cognitive performance was assessed using the Short Portable Mental Status Questionnaire (SPMSQ), with a score ranging from 10 (worst) to 0 (best). 33 Despite these differences between the two scales, the same tripartite hierarchy was adopted to assign a score of 0 (no problems), 0.5 (minor problems) or 1.0 (major problems), using previously proposed scale‐specific cut‐off values. 26 The sum of the scores assigned to each domain was then divided by 8 to obtain a final TELE‐MPI risk score ranging between 0 and 1 (the higher the score, the greater the degree of frailty). A sketch of the two recoded domains is given below.
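To make the recoding concrete, the sketch below maps the two telephone‐specific instruments onto the tripartite 0/0.5/1.0 scale. The actual scale‐specific cut‐off values come from the TELE‐MPI validation study (reference 26) and are not reproduced in the text, so the thresholds here are hypothetical placeholders.

```python
# Illustrative tripartite recoding of the two TELE-MPI domains that
# differ from the SELFY-MPI-SF. The thresholds below are hypothetical
# placeholders; the study used previously proposed scale-specific
# cut-off values (not reproduced in the text).

def tele_mobility_domain(tasks_performed: int) -> float:
    """tasks_performed: number of the three tasks (bed/chair transfer,
    walking ten feet, stairs) done without assistance (1 point each)."""
    if not 0 <= tasks_performed <= 3:
        raise ValueError("expected 0-3 tasks")
    if tasks_performed == 3:
        return 0.0   # no problems
    if tasks_performed >= 1:
        return 0.5   # minor problems (placeholder threshold)
    return 1.0       # major problems

def tele_cognition_domain(spmsq_score: int) -> float:
    """spmsq_score: SPMSQ errors, from 0 (best) to 10 (worst)."""
    if not 0 <= spmsq_score <= 10:
        raise ValueError("expected 0-10")
    if spmsq_score <= 3:
        return 0.0
    if spmsq_score <= 7:
        return 0.5   # placeholder cut-offs
    return 1.0

def tele_mpi_score(domain_scores: list) -> float:
    """Sum of the eight tripartite domain scores divided by 8."""
    if len(domain_scores) != 8:
        raise ValueError("expected eight domain scores")
    return sum(domain_scores) / 8
```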
Statistical analysis
Descriptive statistics were expressed for continuous variables as mean and standard deviation (SD) or median and interquartile range (IQR), and for discrete variables as absolute and relative frequencies (percentages), by MPI category (MPI grade 1 vs MPI grades 2–3). The independent‐samples t‐test or the Mann–Whitney U test was used to compare continuous variables between groups. Chi‐square and Fisher's exact tests were used to compare categorical factors. Differences in hospitalizations and COVID‐19 cases were analysed using the Cochran–Mantel–Haenszel test. Paired‐samples t‐tests were used to compare MPI scores at baseline and at 12‐month follow‐up among subjects with MPI grade 1 or grades 2–3 at baseline, with or without Sars‐CoV‐2 infection. Multivariable logistic regression models were developed to identify whether frailty status and Sars‐CoV‐2 infection could predict worsening of the frailty condition, defined as a difference of ≥0.1 between the MPI score at 12‐month follow‐up and at baseline. We selected this cut‐off to identify significant worsening of frailty based on previous literature showing that an increase of 0.1 points in the MPI score is a clinically relevant change associated with an increased risk of negative outcomes. 8, 22, 26 A two‐tailed significance level of p = 0.05 was set for each test. All analyses were performed using SPSS v26.0 for Windows (SPSS, Chicago, IL, USA); a minimal sketch of the main model is shown below.
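Although the study used SPSS, the outcome construction and the multivariable model can be sketched in Python for clarity. The column names and simulated data below are assumptions (the flu and anti‐pneumococcal vaccination covariates are omitted for brevity); the ≥0.1 ΔMPI cut‐off and the general adjustment set follow the text.

```python
# Sketch of the primary analysis: flag subjects whose MPI rose by at
# least 0.1 between baseline (T0) and 12 months (T1), then model that
# binary outcome with multivariable logistic regression. The data here
# are simulated and the column names are assumptions, not study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 217
df = pd.DataFrame({
    "mpi_t0": rng.uniform(0.0, 0.7, n),
    "age": rng.normal(79, 8, n),
    "female": rng.integers(0, 2, n),
    "bmi": rng.normal(26, 4, n),
    "multimorbidity": rng.integers(0, 2, n),   # 3+ chronic diseases
    "covid_positive": rng.integers(0, 2, n),
})
df["mpi_t1"] = np.clip(df["mpi_t0"] + rng.normal(0.02, 0.1, n), 0, 1)

# Outcome: clinically relevant worsening, dMPI = MPI(T1) - MPI(T0) >= 0.1
df["worsened"] = ((df["mpi_t1"] - df["mpi_t0"]) >= 0.1).astype(int)
# Baseline frailty status: MPI grades 2-3 vs grade 1
df["prefrail_frail_t0"] = (df["mpi_t0"] > 0.33).astype(int)

model = smf.logit(
    "worsened ~ prefrail_frail_t0 + covid_positive + age + female"
    " + bmi + multimorbidity",
    data=df,
).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```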
RESULTS
Overall, 217 community‐dwelling older adults (mean age 79.44 ± 7.75 years, range 62–107 years; females: 49.8%) completed the MPI both at baseline and at 12‐month follow‐up. The mean MPI score at baseline was 0.30 (SD: 0.18), with a prevalence of pre‐frail/frail subjects (MPI grades 2–3) of 48.4%. As shown in Table 1, pre‐frail/frail subjects had higher levels of functional and cognitive impairment, malnutrition and social isolation, and more comorbidities and medications, than those with MPI grade 1 (robust subjects). Flu and anti‐pneumococcal vaccination had not been received by 21.7% and 59% of participants, respectively, with significantly lower pneumococcal vaccination coverage among pre‐frail/frail subjects (32.4% vs. 49.1% in robust subjects, p = 0.012). The incidence of Sars‐CoV‐2 infection was 12.9% overall, but it was almost five times higher among frailer than robust subjects (21.0% vs. 5.4%, OR: 4.68, 95% CI: 1.82–12.07, p = 0.001). Pre‐frail/frail subjects were also more likely to be hospitalized during follow‐up than robust individuals (26.3% vs. 4.5%, OR: 7.55, 95% CI: 2.70–21.05, p < 0.001).
Table 1: Characteristics of patients by MPI category at baseline. Abbreviations: ADL, activities of daily living; BMI, body mass index; CIRS, Cumulative Illness Rating Scale; IADL, instrumental activities of daily living; MNA‐SF, Mini Nutritional Assessment Short Form; MPI, Multidimensional Prognostic Index; SD, standard deviation; TYM, Test Your Memory.
While subjects who were robust at baseline remained on average stable during follow‐up (no robust subject evolved toward a pre‐frail/frail condition), pre‐frail/frail older adults underwent a significant deterioration of the MPI score over 12 months (0.46 ± 0.09 at baseline vs 0.50 ± 0.17 at 12 months, p = 0.027) (Figure 2). In the multivariable analysis, adjusting for age, gender, BMI, multimorbidity (3 or more chronic diseases), flu and anti‐pneumococcal vaccination and COVID‐19 positivity, pre‐frail/frail subjects at baseline had a significantly higher risk of further worsening of the frailty condition (adjusted odds ratio (aOR): 13.60, 95% confidence interval (CI): 4.01 to 46.09, p < 0.001) than robust subjects (Table 2, Table S1). Pre‐frail/frail subjects experienced a significant worsening of multidimensional frailty at 12‐month follow‐up even when not infected by Sars‐CoV‐2 (MPI: 0.45 ± 0.08 at baseline vs 0.51 ± 0.17 at 12 months, p = 0.005). In the multivariable analysis, adjusting for age, gender, BMI, multimorbidity and flu and anti‐pneumococcal vaccination, pre‐frail/frail older adults both non‐infected (aOR: 14.84, 95% CI: 4.26 to 51.74, p < 0.001) and infected by Sars‐CoV‐2 (aOR: 12.77, 95% CI: 2.66 to 61.40, p = 0.001) had a significantly greater risk of further worsening of the frailty condition than robust, non‐infected subjects (Table 3, Table S2). Older adults who were robust at baseline were more likely to experience worsening in the ADL domain, whereas among pre‐frail/frail subjects the domains that contributed most to further MPI worsening were loss of IADL, poor mobility, cognitive impairment and malnutrition (Table S3).
Figure 2: Trajectories of multidimensional frailty based on MPI category at baseline. Center lines show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles; the cross represents the sample mean; data points are plotted as open circles. T0 = baseline; T1 = 12 months; MPI = Multidimensional Prognostic Index.
Table 2: Predictors of worsening of frailty condition during the COVID‐19 pandemic. Note: *Model adjusted also for BMI, flu vaccination and anti‐pneumococcal vaccination. Abbreviations: ΔMPI, difference in MPI score between 12‐month follow‐up and baseline; CI, confidence interval; OR, odds ratio; MPI, Multidimensional Prognostic Index.
Table 3: Risk of worsening of frailty condition according to frailty status at baseline and COVID‐19 positivity. Note: *Model adjusted for age, gender, BMI, multimorbidity (3 or more chronic diseases), flu vaccination and anti‐pneumococcal vaccination. Abbreviations: ΔMPI, difference in MPI score between 12‐month follow‐up and baseline; CI, confidence interval; OR, odds ratio; MPI, Multidimensional Prognostic Index.
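For readers less familiar with these measures, unadjusted odds ratios of the kind reported above can be derived from a 2×2 contingency table. The sketch below computes an odds ratio with a Wald 95% confidence interval; the example counts are hypothetical numbers chosen only to be roughly consistent with the reported percentages, not the study's raw data.

```python
# Odds ratio with Wald 95% CI from a 2x2 table. The example counts are
# hypothetical (roughly consistent with the reported percentages of
# infection among pre-frail/frail vs robust subjects), not raw data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: exposed with/without outcome; c, d: unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(22, 83, 6, 106))  # ~ (4.68, 1.82, 12.07)
```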
CONCLUSIONS
The effects of the COVID‐19 pandemic on community‐dwelling pre‐frail/frail individuals extend far beyond the infection and disease themselves: the pandemic may also determine a significant deterioration of frailty status. More efforts must be made to recognize and manage frailer subjects early, in order to avoid progression toward irreversible disability. Future studies should better define frailty trajectories, testing whether the slope of increase of multidimensional frailty, or the crossing of a specific MPI threshold, determines differences in short‐ and long‐term outcomes for community‐dwelling older adults.
[ "Study population", "Frailty", "\nSelf‐administered MPI short form (SELFY‐MPI‐SF)", "Telephone‐administered MPI (TELE‐MPI)", "Statistical analysis", "AUTHOR CONTRIBUTIONS", "FUNDING INFORMATION", "ETHICAL APPROVAL AND CONSENT TO PARTICIPATE", "CONSENT FOR PUBLICATION" ]
[ "The PRESTIGE project (Involved and Resilient: Aging in Genoa) is a prospective, observational study aimed to explore frailty and social vulnerability in community‐dwelling older residents in the metropolitan area of Genoa, Italy. We included subjects: (1) aged 65 years or over; (2) community‐dwellers who attended the University of the Third Age (U3A – an international movement whose aim is encouraging the education of retired members of the community) in Genoa according to a lifelong learning program for subjects in their “third age” of life; (3) without acute clinical conditions and (4) able to provide informed consent. Participants were enrolled between November 2019 and February 2020 following the World Medical Association's 2008 Declaration of Helsinki and the guidelines for Good Clinical Practice. Reporting of the study conforms to broad EQUATOR guidelines.\n24\n\n\nFrom the original sample of 1354 subjects, 451 were randomly selected to undergo tele‐consult follow‐up at 12 months. Out of 380 subjects who agreed to participate, 217 completed the follow‐up and were eligible for this post hoc analysis (Figure 1). The included participants were interviewed by phone call at 12‐month follow‐up between November 2020 and February 2021, in order to reassess frailty condition and to gather information related to eventual Sars‐CoV‐2 infection. Considering that COVID‐19 emerged in Italy in March 2020, we were able to estimate the incidence of COVID‐19 positivity during the first and second pandemic waves in Liguria region. At follow‐up were also collected information regarding hospitalizations.\nFlow of study participants through the study\nThe Ethical Committee of Department of Education of the University of Genoa (DISFOR), Genoa, Italy, approved the present study. 
All participants read and signed the informed consent form and all participants' records, and personal information were rendered anonymous before statistical analysis.", "Frailty assessment was based on multidimensional model of frailty using Multidimensional Prognostic Index (MPI) tools.\n18\n In this study we adopted Self‐Administered MPI Short Form (SELFY‐MPI‐SF) \n25\n and Telephone‐administered MPI (TELE‐MPI) \n26\n both deriving from the standard Multidimensional Prognostic Index (MPI).\n19\n These have been developed and validated to extend the spectrum of application of multidimensional approach to frailty, based on the principles of Comprehensive Geriatric Assessment (CGA), in more specific settings as community/general practice (SELFY‐MPI‐SF) \n25\n and telehealth (TELE‐MPI).\n26\n Both tools showed strong agreement with standard MPI.\n25\n, \n26\n\n\n \nSelf‐administered MPI short form (SELFY‐MPI‐SF) The SELFY‐MPI‐SF was used to evaluate frailty at baseline, by combining information on the following eight domains, assessed through eight self‐administered scales;\n25\n\n\nFunctional status: evaluated by the Barthel Activities of Daily Living (ADL) sub‐scale which explores the level of dependence/independence in six daily personal care activities such as feeding, bathing, personal hygiene, dressing, faecal and urinary continence and toilet use;\n27\n, \n28\n\n\nMobility: evaluated by the Barthel mobility sub‐scale assessing the abilities to getting in and out of bed/chair, walking, and going up and down the stairs;\n27\n, \n28\n\n\nIndependence in the instrumental activities of daily living (IADL): assessed through the self‐administered version of Lawton's IADL scale exploring the independence in eight activities such as telephone use, grocery shopping, meal preparation, housekeeping, laundry, travel, medication, handling finances;\n29\n\n\nCognitive status: measured with the Test Your Memory (TYM), a 10‐task self‐administered test that explores following cognitive domains: orientation, ability to copy a sentence, semantic knowledge, calculation, verbal fluency, similarities, naming, visuo‐spatial abilities and recall of a previously copied sentence.\n30\n Final score ranges between 0 and 50, with lower scores indicating worst cognitive function;\n30\n\n\nNutritional status: measured with the self‐administered version of the Mini Nutritional Assessment Short‐Form (MNA‐SF), which collects information on anthropometric measures (body mass index and weight loss), decline in food intake, mobility, recent psychological stress and neuropsychological problems;\n31\n\n\nNumber of drugs regularly taken by the subject;\nComorbidity: evaluated by number of pathologies requiring chronic drug therapy, among the first 13 categories of the Cumulative Illness Rating Scale (CIRS);\n32\n\n\nCo‐habitation status including living alone, in an institution or with family members.\n\n\nFunctional status: evaluated by the Barthel Activities of Daily Living (ADL) sub‐scale which explores the level of dependence/independence in six daily personal care activities such as feeding, bathing, personal hygiene, dressing, faecal and urinary continence and toilet use;\n27\n, \n28\n\n\n\nMobility: evaluated by the Barthel mobility sub‐scale assessing the abilities to getting in and out of bed/chair, walking, and going up and down the stairs;\n27\n, \n28\n\n\n\nIndependence in the instrumental activities of daily living (IADL): assessed through the self‐administered version of Lawton's IADL scale exploring the 
independence in eight activities such as telephone use, grocery shopping, meal preparation, housekeeping, laundry, travel, medication, handling finances;\n29\n\n\n\nCognitive status: measured with the Test Your Memory (TYM), a 10‐task self‐administered test that explores following cognitive domains: orientation, ability to copy a sentence, semantic knowledge, calculation, verbal fluency, similarities, naming, visuo‐spatial abilities and recall of a previously copied sentence.\n30\n Final score ranges between 0 and 50, with lower scores indicating worst cognitive function;\n30\n\n\n\nNutritional status: measured with the self‐administered version of the Mini Nutritional Assessment Short‐Form (MNA‐SF), which collects information on anthropometric measures (body mass index and weight loss), decline in food intake, mobility, recent psychological stress and neuropsychological problems;\n31\n\n\n\nNumber of drugs regularly taken by the subject;\n\nComorbidity: evaluated by number of pathologies requiring chronic drug therapy, among the first 13 categories of the Cumulative Illness Rating Scale (CIRS);\n32\n\n\n\nCo‐habitation status including living alone, in an institution or with family members.\nFor each domain, a tripartite hierarchy is adopted based on conventional cut‐off points: a score of 0 indicates no problems, 0.5 minor problems and 1.0 major problems. The average of all these eight domains corresponds to SELFY‐MPI‐SF score, with values ranging between 0 and 1 (the higher the score, the greater the degree of frailty).\n25\n Also, according to the previously established MPI categories,\n19\n the SELFY‐MPI‐SF was expressed as three grades of risk: SELFY‐MPI‐SF grade 1 low risk (values ≤0.33), SELFY‐MPI‐SF grade 2 moderate risk (values between 0.34 and 0.66) and SELFY‐MPI‐SF grade 3 high risk (MPI value >0.66).\n25\n We defined as pre‐frail/frail subjects those with SELFY‐MPI‐SF grades 2 or 3, conversely those with SELFY‐MPI‐SF grade 1 were identified as robust participants.\nThe SELFY‐MPI‐SF was used to evaluate frailty at baseline, by combining information on the following eight domains, assessed through eight self‐administered scales;\n25\n\n\nFunctional status: evaluated by the Barthel Activities of Daily Living (ADL) sub‐scale which explores the level of dependence/independence in six daily personal care activities such as feeding, bathing, personal hygiene, dressing, faecal and urinary continence and toilet use;\n27\n, \n28\n\n\nMobility: evaluated by the Barthel mobility sub‐scale assessing the abilities to getting in and out of bed/chair, walking, and going up and down the stairs;\n27\n, \n28\n\n\nIndependence in the instrumental activities of daily living (IADL): assessed through the self‐administered version of Lawton's IADL scale exploring the independence in eight activities such as telephone use, grocery shopping, meal preparation, housekeeping, laundry, travel, medication, handling finances;\n29\n\n\nCognitive status: measured with the Test Your Memory (TYM), a 10‐task self‐administered test that explores following cognitive domains: orientation, ability to copy a sentence, semantic knowledge, calculation, verbal fluency, similarities, naming, visuo‐spatial abilities and recall of a previously copied sentence.\n30\n Final score ranges between 0 and 50, with lower scores indicating worst cognitive function;\n30\n\n\nNutritional status: measured with the self‐administered version of the Mini Nutritional Assessment Short‐Form (MNA‐SF), which collects information on 
anthropometric measures (body mass index and weight loss), decline in food intake, mobility, recent psychological stress and neuropsychological problems;\n31\n\n\nNumber of drugs regularly taken by the subject;\nComorbidity: evaluated by number of pathologies requiring chronic drug therapy, among the first 13 categories of the Cumulative Illness Rating Scale (CIRS);\n32\n\n\nCo‐habitation status including living alone, in an institution or with family members.\n\n\nFunctional status: evaluated by the Barthel Activities of Daily Living (ADL) sub‐scale which explores the level of dependence/independence in six daily personal care activities such as feeding, bathing, personal hygiene, dressing, faecal and urinary continence and toilet use;\n27\n, \n28\n\n\n\nMobility: evaluated by the Barthel mobility sub‐scale assessing the abilities to getting in and out of bed/chair, walking, and going up and down the stairs;\n27\n, \n28\n\n\n\nIndependence in the instrumental activities of daily living (IADL): assessed through the self‐administered version of Lawton's IADL scale exploring the independence in eight activities such as telephone use, grocery shopping, meal preparation, housekeeping, laundry, travel, medication, handling finances;\n29\n\n\n\nCognitive status: measured with the Test Your Memory (TYM), a 10‐task self‐administered test that explores following cognitive domains: orientation, ability to copy a sentence, semantic knowledge, calculation, verbal fluency, similarities, naming, visuo‐spatial abilities and recall of a previously copied sentence.\n30\n Final score ranges between 0 and 50, with lower scores indicating worst cognitive function;\n30\n\n\n\nNutritional status: measured with the self‐administered version of the Mini Nutritional Assessment Short‐Form (MNA‐SF), which collects information on anthropometric measures (body mass index and weight loss), decline in food intake, mobility, recent psychological stress and neuropsychological problems;\n31\n\n\n\nNumber of drugs regularly taken by the subject;\n\nComorbidity: evaluated by number of pathologies requiring chronic drug therapy, among the first 13 categories of the Cumulative Illness Rating Scale (CIRS);\n32\n\n\n\nCo‐habitation status including living alone, in an institution or with family members.\nFor each domain, a tripartite hierarchy is adopted based on conventional cut‐off points: a score of 0 indicates no problems, 0.5 minor problems and 1.0 major problems. The average of all these eight domains corresponds to SELFY‐MPI‐SF score, with values ranging between 0 and 1 (the higher the score, the greater the degree of frailty).\n25\n Also, according to the previously established MPI categories,\n19\n the SELFY‐MPI‐SF was expressed as three grades of risk: SELFY‐MPI‐SF grade 1 low risk (values ≤0.33), SELFY‐MPI‐SF grade 2 moderate risk (values between 0.34 and 0.66) and SELFY‐MPI‐SF grade 3 high risk (MPI value >0.66).\n25\n We defined as pre‐frail/frail subjects those with SELFY‐MPI‐SF grades 2 or 3, conversely those with SELFY‐MPI‐SF grade 1 were identified as robust participants.\n Telephone‐administered MPI (TELE‐MPI) The TELE‐MPI was collected 12 months apart from baseline assessment by contacting participants with phone call, from November 1, 2020 to February 28, 2021. Similarly to the SELFY‐MPI‐SF domains, the TELE‐MPI considered the same eight areas of CGA.\n26\n The only domains evaluated in different way compared to the SELFY‐MPI‐SF were mobility and cognition. 
Mobility was evaluated inquiring about: (1) the abilities to transfer from bed to chair or wheelchair, (2) walking at least ten feet without any assistance and (3) going up and down the stairs without assistance.\n21\n If the subject is able to perform the task was assigned 1 point.\n21\n Cognitive performances were assessed using the Short Portable Mental Status Questionnaire (SPMSQ) scale with a score ranging from 10 (worst score) to 0 (best score).\n33\n Despite these differences in these two scales, the same tripartite hierarchy was adopted to assign a score of: 0 (no problems), 0.5 (minor problems) and 1.0 (major problems), using previously proposed scale‐specific cut‐off values.\n26\n Thus, the sum of the scores assigned to each domain was divided by 8 to obtain a final TELE‐MPI risk score ranging ranging between 0 and 1 (the higher the score, the greater the degree of frailty).\nThe TELE‐MPI was collected 12 months apart from baseline assessment by contacting participants with phone call, from November 1, 2020 to February 28, 2021. Similarly to the SELFY‐MPI‐SF domains, the TELE‐MPI considered the same eight areas of CGA.\n26\n The only domains evaluated in different way compared to the SELFY‐MPI‐SF were mobility and cognition. Mobility was evaluated inquiring about: (1) the abilities to transfer from bed to chair or wheelchair, (2) walking at least ten feet without any assistance and (3) going up and down the stairs without assistance.\n21\n If the subject is able to perform the task was assigned 1 point.\n21\n Cognitive performances were assessed using the Short Portable Mental Status Questionnaire (SPMSQ) scale with a score ranging from 10 (worst score) to 0 (best score).\n33\n Despite these differences in these two scales, the same tripartite hierarchy was adopted to assign a score of: 0 (no problems), 0.5 (minor problems) and 1.0 (major problems), using previously proposed scale‐specific cut‐off values.\n26\n Thus, the sum of the scores assigned to each domain was divided by 8 to obtain a final TELE‐MPI risk score ranging ranging between 0 and 1 (the higher the score, the greater the degree of frailty).", "The SELFY‐MPI‐SF was used to evaluate frailty at baseline, by combining information on the following eight domains, assessed through eight self‐administered scales;\n25\n\n\nFunctional status: evaluated by the Barthel Activities of Daily Living (ADL) sub‐scale which explores the level of dependence/independence in six daily personal care activities such as feeding, bathing, personal hygiene, dressing, faecal and urinary continence and toilet use;\n27\n, \n28\n\n\nMobility: evaluated by the Barthel mobility sub‐scale assessing the abilities to getting in and out of bed/chair, walking, and going up and down the stairs;\n27\n, \n28\n\n\nIndependence in the instrumental activities of daily living (IADL): assessed through the self‐administered version of Lawton's IADL scale exploring the independence in eight activities such as telephone use, grocery shopping, meal preparation, housekeeping, laundry, travel, medication, handling finances;\n29\n\n\nCognitive status: measured with the Test Your Memory (TYM), a 10‐task self‐administered test that explores following cognitive domains: orientation, ability to copy a sentence, semantic knowledge, calculation, verbal fluency, similarities, naming, visuo‐spatial abilities and recall of a previously copied sentence.\n30\n Final score ranges between 0 and 50, with lower scores indicating worst cognitive function;\n30\n\n\nNutritional 
status: measured with the self‐administered version of the Mini Nutritional Assessment Short‐Form (MNA‐SF), which collects information on anthropometric measures (body mass index and weight loss), decline in food intake, mobility, recent psychological stress and neuropsychological problems;\n31\n\n\nNumber of drugs regularly taken by the subject;\nComorbidity: evaluated by number of pathologies requiring chronic drug therapy, among the first 13 categories of the Cumulative Illness Rating Scale (CIRS);\n32\n\n\nCo‐habitation status including living alone, in an institution or with family members.\n\n\nFunctional status: evaluated by the Barthel Activities of Daily Living (ADL) sub‐scale which explores the level of dependence/independence in six daily personal care activities such as feeding, bathing, personal hygiene, dressing, faecal and urinary continence and toilet use;\n27\n, \n28\n\n\n\nMobility: evaluated by the Barthel mobility sub‐scale assessing the abilities to getting in and out of bed/chair, walking, and going up and down the stairs;\n27\n, \n28\n\n\n\nIndependence in the instrumental activities of daily living (IADL): assessed through the self‐administered version of Lawton's IADL scale exploring the independence in eight activities such as telephone use, grocery shopping, meal preparation, housekeeping, laundry, travel, medication, handling finances;\n29\n\n\n\nCognitive status: measured with the Test Your Memory (TYM), a 10‐task self‐administered test that explores following cognitive domains: orientation, ability to copy a sentence, semantic knowledge, calculation, verbal fluency, similarities, naming, visuo‐spatial abilities and recall of a previously copied sentence.\n30\n Final score ranges between 0 and 50, with lower scores indicating worst cognitive function;\n30\n\n\n\nNutritional status: measured with the self‐administered version of the Mini Nutritional Assessment Short‐Form (MNA‐SF), which collects information on anthropometric measures (body mass index and weight loss), decline in food intake, mobility, recent psychological stress and neuropsychological problems;\n31\n\n\n\nNumber of drugs regularly taken by the subject;\n\nComorbidity: evaluated by number of pathologies requiring chronic drug therapy, among the first 13 categories of the Cumulative Illness Rating Scale (CIRS);\n32\n\n\n\nCo‐habitation status including living alone, in an institution or with family members.\nFor each domain, a tripartite hierarchy is adopted based on conventional cut‐off points: a score of 0 indicates no problems, 0.5 minor problems and 1.0 major problems. The average of all these eight domains corresponds to SELFY‐MPI‐SF score, with values ranging between 0 and 1 (the higher the score, the greater the degree of frailty).\n25\n Also, according to the previously established MPI categories,\n19\n the SELFY‐MPI‐SF was expressed as three grades of risk: SELFY‐MPI‐SF grade 1 low risk (values ≤0.33), SELFY‐MPI‐SF grade 2 moderate risk (values between 0.34 and 0.66) and SELFY‐MPI‐SF grade 3 high risk (MPI value >0.66).\n25\n We defined as pre‐frail/frail subjects those with SELFY‐MPI‐SF grades 2 or 3, conversely those with SELFY‐MPI‐SF grade 1 were identified as robust participants.", "The TELE‐MPI was collected 12 months apart from baseline assessment by contacting participants with phone call, from November 1, 2020 to February 28, 2021. 
Similarly to the SELFY‐MPI‐SF domains, the TELE‐MPI considered the same eight areas of CGA.\n26\n The only domains evaluated in different way compared to the SELFY‐MPI‐SF were mobility and cognition. Mobility was evaluated inquiring about: (1) the abilities to transfer from bed to chair or wheelchair, (2) walking at least ten feet without any assistance and (3) going up and down the stairs without assistance.\n21\n If the subject is able to perform the task was assigned 1 point.\n21\n Cognitive performances were assessed using the Short Portable Mental Status Questionnaire (SPMSQ) scale with a score ranging from 10 (worst score) to 0 (best score).\n33\n Despite these differences in these two scales, the same tripartite hierarchy was adopted to assign a score of: 0 (no problems), 0.5 (minor problems) and 1.0 (major problems), using previously proposed scale‐specific cut‐off values.\n26\n Thus, the sum of the scores assigned to each domain was divided by 8 to obtain a final TELE‐MPI risk score ranging ranging between 0 and 1 (the higher the score, the greater the degree of frailty).", "Descriptive statistics were expressed for continuous variables as mean and standard deviation (SD) or median and interquartile range (IQR) and for discrete variables as absolute and relative frequencies (percentages) by MPI category (MPI grade 1 vs MPI grades 2–3). Independent sample t‐test or Mann–Whitney U test was used for comparison of continuous variables between groups. Chi‐square and Fisher's exact tests were used to compare categorical factors. Differences in hospitalizations and COVID‐19 cases were analysed using the Cochran–Mantel–Haenszel test. Paired sample t‐tests were used to compare MPI scores at baseline and at 12 months follow‐up among subjects reporting MPI grade 1 or grades 2–3 at baseline with or without Sars‐CoV‐2 infection. Multivariable logistic regression models were developed to identify whether frailty status and Sars‐CoV‐2 infection could predict worsening of frailty condition expressed as difference of MPI scores between 12 months follow‐up and baseline ≥0.1. We selected this cut‐off to identify significant worsening of frailty, based on previous literature showing that increase of 0.1 points of the MPI score was a clinically relevant change associated with increased risk of negative outcomes.\n8\n, \n22\n, \n26\n A two‐tailed significance level at p = 0.05 was set for each test. All the analyses were performed using SPSS v26.0 software for Windows (SPSS, Chicago, IL, USA).", "AP: Conceptualization, funding acquisition, supervision, writing – review & editing; CC: methodology, formal analysis, writing – original draft preparation; SZ: data curation, project administration, writing – review & editing; SP: investigation, writing – review & editing; BS: investigation, writing – review & editing; CP: investigation, writing – review & editing; ET: investigation, writing – review & editing; NV: investigation, writing – review & editing; EZ: investigation, writing – review & editing; CT: investigation, writing – review & editing; CS: investigation, writing – review & editing; AC: conceptualization, investigation, writing – review & editing.", "This work was supported by Fondazione CARIGE (“Stronger, less frail” GRANT 2018).", "The Ethical Committee of Department of Education of the University of Genoa (DISFOR), Genoa, Italy approved the present study on 5 September 2019; study number 030.", "Not applicable." ]
[ "Frailty is a potentially reversible geriatric condition characterized by a reduction of biological reserves that predispose to countless negative outcomes including disability and mortality.\n1\n With the extended life expectancy and the rapid increase of aging population, assessment of frailty status may represent an useful proxy to measure biological age, beyond simple chronological age.\n2\n The divergencies between these two perspectives on patient's age have become particularly evident during the COVID‐19 pandemic.\n3\n, \n4\n Despite the initial epidemiological data suggested COVID‐19 as a geriatric condition with worst prognosis in older and multimorbid subjects,\n5\n such ageistic criterion was progressively reviewed and overcome.\n4\n Patients with same age could have completely different predisposition to contract Sars‐CoV‐2 and to experience severe consequences of the disease. Rather, it emerged that frailty is a better predictor of disease severity and higher incidence of negative outcomes in hospitalized older patients (e.g. mortality at short‐ as well as long term, length of hospital stay, higher incidence of admission to intensive care units and need of invasive mechanical ventilation).\n6\n, \n7\n, \n8\n, \n9\n, \n10\n, \n11\n, \n12\n, \n13\n Frail subjects may commonly experience atypical presentation of the COVID‐19 disease including hypotension, sudden functional decline, falls, and delirium, which may lead to diagnostic delay and further spread of infection.\n14\n\n\nAlso indirect effects of COVID‐19 pandemic, mainly related on restriction measures, have been extensively studied including the obvious difficulties in the access to care for subjects with chronic diseases, as well as with the impressive increased incidence of psychosocial disorders (e.g. depression, anxiety and loneliness), malnutrition (over‐ or under‐nutrition) and cognitive impairment, which may contribute to incidence and progression of frailty condition.\n15\n, \n16\n, \n17\n However, there is still a paucity of direct evidence on the impact of COVID‐19 outbreak on frailty status in a population‐based setting. Recently a novel conceptual model for frailty evaluation has been proposed, based on tenets of Comprehensive Geriatric Assessment (CGA) and called “multidimensional model”.\n18\n This model, using the Multidimensional Prognostic Index (MPI) as assessment tool,\n19\n has been applied in different settings and populations including community‐dwelling subjects in whom was estimated a prevalence of multidimensional frailty of 13.3%.\n20\n Using its largely demonstrated clinimetric capacities,\n18\n the MPI has proven excellent accuracy in predicting several negative outcomes (e.g. mortality, hospitalization and falls) also in community‐based setting.\n21\n, \n22\n, \n23\n Therefore, in this study, we aimed to explore clinical course of frailty condition, as measured by the MPI tools, over 1 year during COVID‐19 pandemic in a cohort of community‐dwelling older adults to understand how COVID‐19 directly and indirectly impacted on frailty condition.", " Study population The PRESTIGE project (Involved and Resilient: Aging in Genoa) is a prospective, observational study aimed to explore frailty and social vulnerability in community‐dwelling older residents in the metropolitan area of Genoa, Italy. 
We included subjects: (1) aged 65 years or over; (2) community‐dwellers who attended the University of the Third Age (U3A – an international movement whose aim is encouraging the education of retired members of the community) in Genoa according to a lifelong learning program for subjects in their “third age” of life; (3) without acute clinical conditions and (4) able to provide informed consent. Participants were enrolled between November 2019 and February 2020 following the World Medical Association's 2008 Declaration of Helsinki and the guidelines for Good Clinical Practice. Reporting of the study conforms to broad EQUATOR guidelines.\n24\n\n\nFrom the original sample of 1354 subjects, 451 were randomly selected to undergo tele‐consult follow‐up at 12 months. Out of 380 subjects who agreed to participate, 217 completed the follow‐up and were eligible for this post hoc analysis (Figure 1). The included participants were interviewed by phone call at 12‐month follow‐up between November 2020 and February 2021, in order to reassess frailty condition and to gather information related to eventual Sars‐CoV‐2 infection. Considering that COVID‐19 emerged in Italy in March 2020, we were able to estimate the incidence of COVID‐19 positivity during the first and second pandemic waves in Liguria region. At follow‐up were also collected information regarding hospitalizations.\nFlow of study participants through the study\nThe Ethical Committee of Department of Education of the University of Genoa (DISFOR), Genoa, Italy, approved the present study. All participants read and signed the informed consent form and all participants' records, and personal information were rendered anonymous before statistical analysis.\nThe PRESTIGE project (Involved and Resilient: Aging in Genoa) is a prospective, observational study aimed to explore frailty and social vulnerability in community‐dwelling older residents in the metropolitan area of Genoa, Italy. We included subjects: (1) aged 65 years or over; (2) community‐dwellers who attended the University of the Third Age (U3A – an international movement whose aim is encouraging the education of retired members of the community) in Genoa according to a lifelong learning program for subjects in their “third age” of life; (3) without acute clinical conditions and (4) able to provide informed consent. Participants were enrolled between November 2019 and February 2020 following the World Medical Association's 2008 Declaration of Helsinki and the guidelines for Good Clinical Practice. Reporting of the study conforms to broad EQUATOR guidelines.\n24\n\n\nFrom the original sample of 1354 subjects, 451 were randomly selected to undergo tele‐consult follow‐up at 12 months. Out of 380 subjects who agreed to participate, 217 completed the follow‐up and were eligible for this post hoc analysis (Figure 1). The included participants were interviewed by phone call at 12‐month follow‐up between November 2020 and February 2021, in order to reassess frailty condition and to gather information related to eventual Sars‐CoV‐2 infection. Considering that COVID‐19 emerged in Italy in March 2020, we were able to estimate the incidence of COVID‐19 positivity during the first and second pandemic waves in Liguria region. At follow‐up were also collected information regarding hospitalizations.\nFlow of study participants through the study\nThe Ethical Committee of Department of Education of the University of Genoa (DISFOR), Genoa, Italy, approved the present study. 
All participants read and signed the informed consent form, and all participants' records and personal information were anonymized before statistical analysis.

Frailty

Frailty assessment was based on the multidimensional model of frailty, using Multidimensional Prognostic Index (MPI) tools.18 In this study we adopted the Self-Administered MPI Short Form (SELFY-MPI-SF)25 and the Telephone-administered MPI (TELE-MPI),26 both derived from the standard MPI.19 These tools were developed and validated to extend the multidimensional approach to frailty, based on the principles of Comprehensive Geriatric Assessment (CGA), to more specific settings: community/general practice (SELFY-MPI-SF)25 and telehealth (TELE-MPI).26 Both tools showed strong agreement with the standard MPI.25, 26

Self-administered MPI short form (SELFY-MPI-SF)

The SELFY-MPI-SF was used to evaluate frailty at baseline by combining information on the following eight domains, each assessed through a self-administered scale:25

1. Functional status: evaluated by the Barthel Activities of Daily Living (ADL) sub-scale, which explores the level of dependence/independence in six daily personal care activities (feeding, bathing, personal hygiene, dressing, faecal and urinary continence and toilet use);27, 28
2. Mobility: evaluated by the Barthel mobility sub-scale, which assesses the ability to get in and out of bed/chair, to walk, and to go up and down stairs;27, 28
3. Independence in the instrumental activities of daily living (IADL): assessed through the self-administered version of Lawton's IADL scale, which explores independence in eight activities (telephone use, grocery shopping, meal preparation, housekeeping, laundry, travel, medication and handling finances);29
4. Cognitive status: measured with the Test Your Memory (TYM), a 10-task self-administered test covering orientation, ability to copy a sentence, semantic knowledge, calculation, verbal fluency, similarities, naming, visuo-spatial abilities and recall of a previously copied sentence. The final score ranges between 0 and 50, with lower scores indicating worse cognitive function;30
5. Nutritional status: measured with the self-administered version of the Mini Nutritional Assessment Short-Form (MNA-SF), which collects information on anthropometric measures (body mass index and weight loss), decline in food intake, mobility, recent psychological stress and neuropsychological problems;31
6. Number of drugs regularly taken by the subject;
7. Comorbidity: evaluated as the number of pathologies requiring chronic drug therapy among the first 13 categories of the Cumulative Illness Rating Scale (CIRS);32
8. Co-habitation status: living alone, in an institution or with family members.

For each domain, a tripartite hierarchy is adopted based on conventional cut-off points: a score of 0 indicates no problems, 0.5 minor problems and 1.0 major problems. The average of the eight domain scores is the SELFY-MPI-SF score, with values ranging between 0 and 1 (the higher the score, the greater the degree of frailty).25 According to the previously established MPI categories,19 the SELFY-MPI-SF was also expressed as three grades of risk: grade 1, low risk (values ≤0.33); grade 2, moderate risk (values between 0.34 and 0.66); and grade 3, high risk (values >0.66).25 We defined subjects with SELFY-MPI-SF grades 2 or 3 as pre-frail/frail, and those with SELFY-MPI-SF grade 1 as robust.
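Because every domain is reduced to the same 0/0.5/1.0 scale, the final score and risk grade follow mechanically. The Python sketch below illustrates the computation; it is not study code, the domain keys are illustrative, and it assumes the raw sub-scales have already been recoded to tripartite values with the published cut-offs.25

```python
# Minimal sketch of the SELFY-MPI-SF scoring rule, assuming each raw
# sub-scale has already been recoded to the tripartite values 0/0.5/1.0
# using the published cut-offs (ref. 25). Domain keys are illustrative.

SELFY_DOMAINS = (
    "adl", "mobility", "iadl", "cognition",
    "nutrition", "n_drugs", "comorbidity", "cohabitation",
)

def selfy_mpi_sf_score(domains: dict) -> float:
    """Average of the eight tripartite domain scores (0, 0.5 or 1.0)."""
    if set(domains) != set(SELFY_DOMAINS):
        raise ValueError("all eight domains are required")
    if any(v not in (0.0, 0.5, 1.0) for v in domains.values()):
        raise ValueError("domain scores must be 0, 0.5 or 1.0")
    return sum(domains.values()) / len(SELFY_DOMAINS)

def mpi_grade(score: float) -> int:
    """Map an MPI score to the three published risk grades."""
    if score <= 0.33:
        return 1  # grade 1: low risk ("robust")
    if score <= 0.66:
        return 2  # grade 2: moderate risk (pre-frail/frail)
    return 3      # grade 3: high risk (pre-frail/frail)

# Example: minor mobility problems (0.5) plus major IADL problems (1.0).
scores = dict.fromkeys(SELFY_DOMAINS, 0.0)
scores["mobility"], scores["iadl"] = 0.5, 1.0
s = selfy_mpi_sf_score(scores)   # (0.5 + 1.0) / 8 = 0.1875
print(s, mpi_grade(s))           # 0.1875 1 -> robust
```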
Telephone-administered MPI (TELE-MPI)

The TELE-MPI was collected 12 months after the baseline assessment by contacting participants by phone call between November 1, 2020 and February 28, 2021. The TELE-MPI covered the same eight CGA areas as the SELFY-MPI-SF;26 only mobility and cognition were evaluated differently. Mobility was evaluated by asking about (1) the ability to transfer from bed to chair or wheelchair, (2) walking at least ten feet without any assistance and (3) going up and down stairs without assistance; 1 point was assigned for each task the subject was able to perform.21 Cognitive performance was assessed using the Short Portable Mental Status Questionnaire (SPMSQ), with scores ranging from 10 (worst) to 0 (best).33 Despite these differences, the same tripartite hierarchy was adopted to assign a score of 0 (no problems), 0.5 (minor problems) or 1.0 (major problems), using previously proposed scale-specific cut-off values.26 The sum of the eight domain scores was then divided by 8 to obtain a final TELE-MPI risk score ranging between 0 and 1 (the higher the score, the greater the degree of frailty).
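The only scoring step that differs from the SELFY-MPI-SF is the recoding of these two telephone-adapted scales. A sketch follows; the thresholds are placeholders standing in for the previously published scale-specific cut-off values,26 which are not reproduced in the text above.

```python
# Sketch of the TELE-MPI recoding for the two domains scored differently
# from the SELFY-MPI-SF. The exact tripartite cut-offs are NOT given in
# the text; the thresholds below are assumed placeholders (see ref. 26).

def tele_mobility_value(tasks_performed: int) -> float:
    """Recode 0-3 self-reported mobility tasks (1 point each)."""
    assert 0 <= tasks_performed <= 3
    if tasks_performed == 3:
        return 0.0   # no problems (assumed cut-off)
    if tasks_performed >= 1:
        return 0.5   # minor problems (assumed cut-off)
    return 1.0       # major problems

def tele_spmsq_value(spmsq_errors: int) -> float:
    """Recode SPMSQ errors (0 = best, 10 = worst)."""
    assert 0 <= spmsq_errors <= 10
    if spmsq_errors <= 2:
        return 0.0   # assumed cut-off
    if spmsq_errors <= 7:
        return 0.5   # assumed cut-off
    return 1.0

def tele_mpi_score(domain_values: list) -> float:
    """Sum of the eight tripartite domain values divided by 8."""
    assert len(domain_values) == 8
    return sum(domain_values) / 8

# Example: fully mobile, 4 SPMSQ errors, no problems in other domains.
values = [0.0] * 6 + [tele_mobility_value(3), tele_spmsq_value(4)]
print(tele_mpi_score(values))    # (0 + 0.5) / 8 = 0.0625
```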
Statistical analysis

Descriptive statistics were expressed, by MPI category (MPI grade 1 vs MPI grades 2–3), as mean and standard deviation (SD) or median and interquartile range (IQR) for continuous variables, and as absolute and relative frequencies (percentages) for discrete variables. The independent-sample t-test or Mann–Whitney U test was used to compare continuous variables between groups; chi-square and Fisher's exact tests were used to compare categorical factors. Differences in hospitalizations and COVID-19 cases were analysed using the Cochran–Mantel–Haenszel test. Paired-sample t-tests were used to compare MPI scores at baseline and at 12-month follow-up among subjects with MPI grade 1 or grades 2–3 at baseline, with or without Sars-CoV-2 infection. Multivariable logistic regression models were developed to test whether frailty status and Sars-CoV-2 infection predicted worsening of the frailty condition, defined as a difference of ≥0.1 between the 12-month and baseline MPI scores. We selected this cut-off to identify significant worsening of frailty based on previous literature showing that an increase of 0.1 points in the MPI score is a clinically relevant change associated with an increased risk of negative outcomes.8, 22, 26 A two-tailed significance level of p = 0.05 was set for each test. All analyses were performed using SPSS v26.0 for Windows (SPSS, Chicago, IL, USA).
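For readers who want to reproduce the model structure, a hedged sketch follows. The analyses were run in SPSS; this Python/statsmodels fragment only mirrors the outcome definition (ΔMPI ≥ 0.1) and covariate set, and the file name and column names are assumptions, not the study's actual data dictionary.

```python
# Illustrative re-creation of the multivariable model structure (the
# authors used SPSS v26; this is not their code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("prestige_followup.csv")  # hypothetical dataset

# Outcome: clinically relevant worsening of multidimensional frailty,
# i.e. a 12-month increase in the MPI score of at least 0.1.
df["mpi_worsening"] = ((df["mpi_12m"] - df["mpi_baseline"]) >= 0.1).astype(int)

model = smf.logit(
    "mpi_worsening ~ prefrail_frail_baseline + covid_positive"
    " + age + gender + bmi + multimorbidity"
    " + flu_vaccine + pneumo_vaccine",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals.
ci = np.exp(model.conf_int())
ci["aOR"] = np.exp(model.params)
print(ci.rename(columns={0: "2.5%", 1: "97.5%"}))
```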
Results

Overall, 217 community-dwelling older adults (mean age 79.44 ± 7.75 years, range 62–107 years; females: 49.8%) completed the MPI both at baseline and at 12-month follow-up. The mean MPI score at baseline was 0.30 (SD: 0.18), with a prevalence of pre-frail/frail subjects (MPI grades 2–3) of 48.4%. As shown in Table 1, pre-frail/frail subjects had higher levels of functional and cognitive impairment, malnutrition and social isolation, and more comorbidities and medications, than subjects with MPI grade 1 (robust subjects). Flu and anti-pneumococcal vaccines had not been administered to 21.7% and 59% of participants, respectively, with significantly lower pneumococcal vaccination coverage among pre-frail/frail subjects (32.4% vs. 49.1% in robust subjects, p = 0.012). The incidence of Sars-CoV-2 infection was 12.9% overall, but almost five times higher among frailer than robust subjects (21.0% vs. 5.4%, OR: 4.68, 95% CI: 1.82–12.07, p = 0.001).
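As an arithmetic check, the crude odds ratio and confidence interval above can be reproduced from the reported group sizes and percentages. The 2×2 cell counts below are reconstructed approximations (48.4% of n = 217 pre-frail/frail; 21.0% vs 5.4% infected), not the study's raw data.

```python
# Quick check of the reported crude odds ratio for Sars-CoV-2 infection.
import math

a, b = 22, 83    # pre-frail/frail: infected / not infected (~105 subjects)
c, d = 6, 106    # robust: infected / not infected (~112 subjects)

odds_ratio = (a * d) / (b * c)                 # ~4.68, matching the text
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Woolf method
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# -> OR = 4.68, 95% CI 1.82-12.08 (the paper reports 1.82-12.07)
```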
Pre-frail/frail subjects were also more likely to be hospitalized during follow-up than robust individuals (26.3% vs. 4.5%, OR: 7.55, 95% CI: 2.70–21.05, p < 0.001).

Table 1. Characteristics of patients by MPI category at baseline. Abbreviations: ADL, activities of daily living; BMI, body mass index; CIRS, Cumulative Illness Rating Scale; IADL, instrumental activities of daily living; MNA-SF, Mini Nutritional Assessment Short Form; MPI, Multidimensional Prognostic Index; SD, standard deviation; TYM, Test Your Memory.

While subjects who were robust at baseline remained on average stable during follow-up (no robust subject evolved toward a pre-frailty/frailty condition), pre-frail/frail older adults showed a significant deterioration of the MPI score over 12 months (0.46 ± 0.09 at baseline vs 0.50 ± 0.17 at 12 months, p = 0.027) (Figure 2). In the multivariable analysis, adjusting for age, gender, BMI, multimorbidity (three or more chronic diseases), flu and anti-pneumococcal vaccination and COVID-19 positivity, pre-frail/frail subjects at baseline had a significantly higher risk of further worsening of the frailty condition (adjusted odds ratio (aOR): 13.60, 95% confidence interval (CI): 4.01 to 46.09, p < 0.001) than robust subjects (Table 2, Table S1). Pre-frail/frail subjects experienced a significant worsening of multidimensional frailty at 12-month follow-up even when not infected by Sars-CoV-2 (MPI: 0.45 ± 0.08 at baseline vs 0.51 ± 0.17 at 12 months, p = 0.005). In the multivariable analysis, adjusting for age, gender, BMI, multimorbidity and flu and anti-pneumococcal vaccination, pre-frail/frail older adults both non-infected (aOR: 14.84, 95% CI: 4.26 to 51.74, p < 0.001) and infected by Sars-CoV-2 (aOR: 12.77, 95% CI: 2.66 to 61.40, p = 0.001) had a significantly greater risk of further worsening of the frailty condition than robust, non-infected subjects (Table 3, Table S2). Older adults who were robust at baseline were more likely to experience worsening in the ADL domain, whereas among pre-frail/frail subjects the domains contributing most to further MPI worsening were loss of IADL, poor mobility, cognitive impairment and malnutrition (Table S3).

Figure 2. Trajectories of multidimensional frailty based on MPI category at baseline. Center lines show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles; the cross represents the sample mean; data points are plotted as open circles. T0 = baseline; T1 = 12 months; MPI = Multidimensional Prognostic Index.

Table 2. Predictors of worsening of the frailty condition during the COVID-19 pandemic. Note: *Model adjusted also for BMI, flu vaccination and anti-pneumococcal vaccination. Abbreviations: ΔMPI, difference of MPI scores between 12-month follow-up and baseline; CI, confidence interval; OR, odds ratio; MPI, Multidimensional Prognostic Index.

Table 3. Risk of worsening of the frailty condition according to frailty status at baseline and COVID-19 positivity.
Note: *Model adjusted for age, gender, BMI, multimorbidity (three or more chronic diseases), flu vaccination and anti-pneumococcal vaccination. Abbreviations: ΔMPI, difference of MPI scores between 12-month follow-up and baseline; CI, confidence interval; OR, odds ratio; MPI, Multidimensional Prognostic Index.

Discussion

In this cohort of community-dwelling older adults assessed immediately before and during the COVID-19 pandemic in Italy, we found that pre-frail/frail subjects had a significantly higher risk of further worsening of their frailty condition after 1 year, independently of age, gender, occurrence of Sars-CoV-2 infection and several other potential confounders.

Solid evidence has shown that frailty among hospitalized patients with Sars-CoV-2 infection is associated with a higher risk of more severe forms of the disease, delirium and death.6, 34, 35 Moreover, pre-infection levels of frailty have been associated with increased care needs after hospitalization and poorer long-term survival, regardless of the features of the acute infection.12, 13, 34 In parallel, frailty has been used as a criterion for less aggressive approaches: among older adults resident in long-term care, COVID-19 positivity combined with frailty was associated with de-escalation of care plans.36 Frailty assessed by the Clinical Frailty Scale (CFS) has been shown to be a better predictor of patient outcome than chronological age and comorbidities,37 although the reliability of the CFS in adequately capturing the frailty condition has been questioned.38 Studies conducted in different settings, including the community, confirmed that frailty measured through a multidimensional approach was associated with a significantly higher risk of negative health outcomes.18 A higher MPI among older patients hospitalized in acute wards for COVID-19, as well as among long-term care and nursing home residents during the COVID-19 outbreak, was a strong predictor of mortality risk, and each 0.1 increase of the MPI score was associated with an almost 40% higher probability of death.8, 39, 40 Our data suggest that the impact of the COVID-19 pandemic on the frailty condition of frail older adults is largely independent of the direct effect of the virus. Consistently, growing evidence draws attention to the burden of the indirect effects of COVID-19 (i.e., psychological distress, cognitive impairment, malnutrition and physical inactivity), which translate into reduced multidimensional well-being.15 It is therefore reasonable to think, even more so during this pandemic, that only a CGA-based approach can truly capture and track changes in the frailty condition.

There is still a paucity of evidence on the effects of the COVID-19 outbreak on the frailty condition among community-dwelling older adults. Here we showed that, depending on pre-pandemic frailty status, older adults experienced different trajectories of frailty during lockdown measures, independently of the occurrence of COVID-19 infection.
Consistently, in a population of community-dwelling frail older adults with hypertension, a concomitant decline of physical and cognitive performance was observed during the COVID-19 pandemic.41 In a Japanese cohort of older adults assessed during the COVID-19 outbreak, the transition rate from non-frailty to frailty over 6 months was estimated at roughly 10%.42 Another study showed an increased prevalence of social frailty, with 10.7% of subjects converting from robust to socially frail during a one-year follow-up spanning the declaration of the state of emergency;43 this transition appeared to be associated with exacerbation of depressive symptoms, but not with physical or cognitive function.43 Moreover, in a prospective study conducted between May and October 2020 among older adults in England and Spain, frailty decreased as restriction measures became less stringent,44 suggesting that the effect of the pandemic on frailty status might be potentially reversible.16

The role of greater biological and social vulnerability in predisposing older adults to Sars-CoV-2 infection has been questioned,45 but few studies have explored, at a population level, the risk associated with being frail during the COVID-19 pandemic.46, 47 In a report from the UK Biobank on 383,845 subjects, frailty status before the COVID-19 pandemic, assessed by both the phenotypic and the accumulation-of-deficits models, was associated with a roughly two-fold higher risk of severe COVID-19 infection resulting in hospital admission and death.46 Another study, carried out on 241 community-dwelling older adults from the SarcoPhAge cohort, showed that frailty assessed by the Fried criteria was associated with a seven-fold higher risk of Sars-CoV-2 infection.47 Here, we found an overall incidence of COVID-19 positivity of 12.9%, which was roughly doubled in pre-frail/frail subjects. However, independently of Sars-CoV-2 infection, we observed a significant worsening of the frailty condition only in those subjects who were pre-frail/frail before the COVID-19 outbreak.

This study also has some limitations that should be disclosed. First, having only two timepoints at which to assess the frailty condition may have limited our ability to accurately capture the trajectories of frailty. Given the potential reversibility and fluctuation of this condition, we cannot state whether the observed worsening of frailty status in pre-frail/frail subjects was a truly continuous trend; an extension of the study follow-up would be essential to address this issue. Second, COVID-19 positivity may be underestimated because it was self-reported by patients: particularly during the first pandemic wave, some pauci-symptomatic cases might have gone unrecognized, given also the difficulties in the provision of diagnostic tests. Third, we used two different tools (SELFY-MPI-SF and TELE-MPI) to assess multidimensional frailty, but both were developed from the standard MPI, with which they showed strong agreement, sharing the same explored domains and the same calculation algorithm, with a mean difference between the MPI and each derived tool below the second decimal place;25, 26 we therefore believe that our results are only marginally affected by this methodological difference.
Fourth, additional confounding factors could have been included in our analyses, such as severity of COVID-19 disease, length of quarantine and presence of social support during COVID-19 disease: the worsening of the frailty condition could depend closely on the immunological status of the subject (and therefore the capacity for viral clearance), the duration of hospitalization and isolation, and the availability of a caregiver or social services. Fifth, owing to the relatively high loss to follow-up, our results might suffer from selection bias; however, the baseline characteristics (e.g., age, gender, comorbidities, SELFY-MPI-SF) of subjects who remained in the longitudinal study overlapped with those of subjects who did not complete the follow-up. Finally, this study was performed on a relatively small population from a specific geographic area with a high density of older adults living in the community. Thus, for example, we were not able to differentiate between pre-frail and frail subjects, and the generalizability of these findings might be limited and should be verified in larger multicenter studies. Moreover, given the post hoc nature of the analysis, the risk of further worsening of multidimensional frailty in pre-frail/frail subjects during the COVID-19 pandemic needs to be confirmed in ad hoc-designed studies.

Conclusions

The effects of the COVID-19 pandemic on community-dwelling pre-frail/frail individuals go far beyond the infection and the disease itself: they may also determine a significant deterioration of frailty status. More effort must be made to recognize and manage frailer subjects early, to avoid progression toward irreversible disability. Future studies should better define the frailty trajectories, testing whether the slope of increase of multidimensional frailty, or the reaching of a specific MPI threshold, determines differences in short- and long-term outcomes for community-dwelling older adults.

Author contributions: AP: conceptualization, funding acquisition, supervision, writing – review & editing; CC: methodology, formal analysis, writing – original draft preparation; SZ: data curation, project administration, writing – review & editing; SP, BS, CP, ET, NV, EZ, CT, CS: investigation, writing – review & editing; AC: conceptualization, investigation, writing – review & editing.

Funding: This work was supported by Fondazione CARIGE ("Stronger, less frail" GRANT 2018).

Competing interests: The authors declare that they have no competing interests.

Ethics approval: The Ethical Committee of the Department of Education of the University of Genoa (DISFOR), Genoa, Italy, approved the present study on 5 September 2019 (study number 030).

Supporting information: Tables S1–S3 are available as an additional data file.
[ "background", "methods", null, null, null, null, null, "results", "discussion", "conclusions", null, null, "COI-statement", null, null, "supplementary-material" ]
[ "community", "comprehensive geriatric assessment", "COVID‐19", "frailty", "multidimensional prognostic index" ]
BACKGROUND: Frailty is a potentially reversible geriatric condition characterized by a reduction of biological reserves that predispose to countless negative outcomes including disability and mortality. 1 With the extended life expectancy and the rapid increase of aging population, assessment of frailty status may represent an useful proxy to measure biological age, beyond simple chronological age. 2 The divergencies between these two perspectives on patient's age have become particularly evident during the COVID‐19 pandemic. 3 , 4 Despite the initial epidemiological data suggested COVID‐19 as a geriatric condition with worst prognosis in older and multimorbid subjects, 5 such ageistic criterion was progressively reviewed and overcome. 4 Patients with same age could have completely different predisposition to contract Sars‐CoV‐2 and to experience severe consequences of the disease. Rather, it emerged that frailty is a better predictor of disease severity and higher incidence of negative outcomes in hospitalized older patients (e.g. mortality at short‐ as well as long term, length of hospital stay, higher incidence of admission to intensive care units and need of invasive mechanical ventilation). 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 Frail subjects may commonly experience atypical presentation of the COVID‐19 disease including hypotension, sudden functional decline, falls, and delirium, which may lead to diagnostic delay and further spread of infection. 14 Also indirect effects of COVID‐19 pandemic, mainly related on restriction measures, have been extensively studied including the obvious difficulties in the access to care for subjects with chronic diseases, as well as with the impressive increased incidence of psychosocial disorders (e.g. depression, anxiety and loneliness), malnutrition (over‐ or under‐nutrition) and cognitive impairment, which may contribute to incidence and progression of frailty condition. 15 , 16 , 17 However, there is still a paucity of direct evidence on the impact of COVID‐19 outbreak on frailty status in a population‐based setting. Recently a novel conceptual model for frailty evaluation has been proposed, based on tenets of Comprehensive Geriatric Assessment (CGA) and called “multidimensional model”. 18 This model, using the Multidimensional Prognostic Index (MPI) as assessment tool, 19 has been applied in different settings and populations including community‐dwelling subjects in whom was estimated a prevalence of multidimensional frailty of 13.3%. 20 Using its largely demonstrated clinimetric capacities, 18 the MPI has proven excellent accuracy in predicting several negative outcomes (e.g. mortality, hospitalization and falls) also in community‐based setting. 21 , 22 , 23 Therefore, in this study, we aimed to explore clinical course of frailty condition, as measured by the MPI tools, over 1 year during COVID‐19 pandemic in a cohort of community‐dwelling older adults to understand how COVID‐19 directly and indirectly impacted on frailty condition. METHODS: Study population The PRESTIGE project (Involved and Resilient: Aging in Genoa) is a prospective, observational study aimed to explore frailty and social vulnerability in community‐dwelling older residents in the metropolitan area of Genoa, Italy. 
We included subjects who were: (1) aged 65 years or over; (2) community‐dwellers attending the University of the Third Age (U3A – an international movement whose aim is to encourage the education of retired members of the community) in Genoa, within a lifelong learning program for subjects in their "third age" of life; (3) without acute clinical conditions; and (4) able to provide informed consent. Participants were enrolled between November 2019 and February 2020, following the World Medical Association's 2008 Declaration of Helsinki and the guidelines for Good Clinical Practice. Reporting of the study conforms to broad EQUATOR guidelines. 24 From the original sample of 1354 subjects, 451 were randomly selected to undergo tele‐consult follow‐up at 12 months. Of the 380 subjects who agreed to participate, 217 completed the follow‐up and were eligible for this post hoc analysis (Figure 1; caption: "Flow of study participants through the study"). The included participants were interviewed by phone call at the 12‐month follow‐up, between November 2020 and February 2021, to reassess frailty and to gather information on any Sars‐CoV‐2 infection. Considering that COVID‐19 emerged in Italy in March 2020, we were able to estimate the incidence of COVID‐19 positivity during the first and second pandemic waves in the Liguria region. Information regarding hospitalizations was also collected at follow‐up. The Ethical Committee of the Department of Education of the University of Genoa (DISFOR), Genoa, Italy, approved the present study.
All participants read and signed the informed consent form, and all participants' records and personal information were anonymized before statistical analysis. Frailty Frailty assessment was based on the multidimensional model of frailty, using Multidimensional Prognostic Index (MPI) tools. 18 In this study we adopted the Self‐Administered MPI Short Form (SELFY‐MPI‐SF) 25 and the Telephone‐administered MPI (TELE‐MPI), 26 both derived from the standard Multidimensional Prognostic Index (MPI). 19 These tools were developed and validated to extend the multidimensional approach to frailty, based on the principles of Comprehensive Geriatric Assessment (CGA), to more specific settings such as the community/general practice (SELFY‐MPI‐SF) 25 and telehealth (TELE‐MPI). 26 Both tools showed strong agreement with the standard MPI. 25, 26 Self‐administered MPI short form (SELFY‐MPI‐SF) The SELFY‐MPI‐SF was used to evaluate frailty at baseline by combining information on the following eight domains, assessed through eight self‐administered scales: 25
(1) Functional status: evaluated by the Barthel Activities of Daily Living (ADL) sub‐scale, which explores the level of dependence/independence in six daily personal care activities: feeding, bathing, personal hygiene, dressing, faecal and urinary continence, and toilet use; 27, 28
(2) Mobility: evaluated by the Barthel mobility sub‐scale, assessing the ability to get in and out of bed/chair, to walk, and to go up and down stairs; 27, 28
(3) Independence in the instrumental activities of daily living (IADL): assessed through the self‐administered version of Lawton's IADL scale, exploring independence in eight activities: telephone use, grocery shopping, meal preparation, housekeeping, laundry, travel, medication, and handling finances; 29
(4) Cognitive status: measured with the Test Your Memory (TYM), a 10‐task self‐administered test that explores the following cognitive domains: orientation, ability to copy a sentence, semantic knowledge, calculation, verbal fluency, similarities, naming, visuo‐spatial abilities, and recall of a previously copied sentence. 30 The final score ranges between 0 and 50, with lower scores indicating worse cognitive function; 30
(5) Nutritional status: measured with the self‐administered version of the Mini Nutritional Assessment Short‐Form (MNA‐SF), which collects information on anthropometric measures (body mass index and weight loss), decline in food intake, mobility, recent psychological stress, and neuropsychological problems; 31
(6) Number of drugs regularly taken by the subject;
(7) Comorbidity: evaluated by the number of pathologies requiring chronic drug therapy, among the first 13 categories of the Cumulative Illness Rating Scale (CIRS); 32
(8) Co‐habitation status: living alone, in an institution, or with family members.
For each domain, a tripartite hierarchy is adopted based on conventional cut‐off points: a score of 0 indicates no problems, 0.5 minor problems, and 1.0 major problems. The average of these eight domain scores is the SELFY‐MPI‐SF score, with values ranging between 0 and 1 (the higher the score, the greater the degree of frailty). 25 According to the previously established MPI categories, 19 the SELFY‐MPI‐SF was also expressed as three grades of risk: grade 1, low risk (values ≤0.33); grade 2, moderate risk (values between 0.34 and 0.66); and grade 3, high risk (values >0.66). 25 We defined subjects with SELFY‐MPI‐SF grades 2 or 3 as pre‐frail/frail, whereas those with grade 1 were identified as robust participants.
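To make the aggregation rule concrete, the following minimal sketch (in Python) implements the scoring just described. It is an illustration rather than the authors' software: the domain keys and example values are hypothetical, and it assumes each domain has already been mapped to the 0/0.5/1.0 hierarchy through the scale‐specific cut‐offs of the validation study. 25

```python
# Minimal sketch of the SELFY-MPI-SF aggregation rule (hypothetical
# domain names; each value is assumed already mapped to 0 / 0.5 / 1.0).
from statistics import mean

def selfy_mpi_sf(domain_scores):
    """Return (score, risk grade) from the eight CGA domain scores."""
    assert len(domain_scores) == 8, "the index averages exactly eight domains"
    assert all(v in (0.0, 0.5, 1.0) for v in domain_scores.values())
    score = mean(domain_scores.values())   # 0 (robust) .. 1 (frail)
    if score <= 0.33:
        grade = 1                          # low risk
    elif score <= 0.66:
        grade = 2                          # moderate risk
    else:
        grade = 3                          # high risk
    return score, grade

score, grade = selfy_mpi_sf({
    "adl": 0.0, "mobility": 0.5, "iadl": 0.5, "cognition": 0.0,
    "nutrition": 0.5, "n_drugs": 0.5, "comorbidity": 1.0, "cohabitation": 0.0,
})
print(score, grade)   # 0.375, grade 2 -> pre-frail/frail in this study's dichotomy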
Telephone‐administered MPI (TELE‐MPI) The TELE‐MPI was collected 12 months after the baseline assessment by contacting participants by phone call, from November 1, 2020 to February 28, 2021. Like the SELFY‐MPI‐SF, the TELE‐MPI considers the same eight areas of the CGA. 26 The only domains evaluated differently from the SELFY‐MPI‐SF were mobility and cognition. Mobility was evaluated by inquiring about: (1) the ability to transfer from bed to chair or wheelchair, (2) the ability to walk at least ten feet without any assistance, and (3) the ability to go up and down stairs without assistance; 21 one point was assigned for each task the subject was able to perform. 21 Cognitive performance was assessed using the Short Portable Mental Status Questionnaire (SPMSQ), with a score ranging from 10 (worst score) to 0 (best score). 33 Despite the differences in these two scales, the same tripartite hierarchy was adopted to assign a score of 0 (no problems), 0.5 (minor problems) or 1.0 (major problems), using previously proposed scale‐specific cut‐off values. 26 Thus, the sum of the scores assigned to each domain was divided by 8 to obtain a final TELE‐MPI risk score ranging between 0 and 1 (the higher the score, the greater the degree of frailty).
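The TELE‐MPI therefore uses the same averaging; only the mapping from the raw telephone sub‐scales to the tripartite hierarchy differs. The sketch below illustrates that mapping step; the cut‐offs shown are illustrative placeholders, not the validated scale‐specific values, which are reported in the TELE‐MPI paper. 26

```python
# Illustrative tripartite mapping for two TELE-MPI domains. The cut-offs
# below are placeholders, not the validated scale-specific values (ref. 26).

def tripartite(problems, minor_from, major_from):
    """Map a raw 'number of problems'-style count to 0, 0.5 or 1.0."""
    if problems >= major_from:
        return 1.0       # major problems
    if problems >= minor_from:
        return 0.5       # minor problems
    return 0.0           # no problems

# Cognition: SPMSQ errors, 0 (best) to 10 (worst); placeholder cut-offs.
cognition = tripartite(problems=3, minor_from=3, major_from=5)    # -> 0.5

# Mobility: number of the three listed tasks the subject cannot perform.
mobility = tripartite(problems=1, minor_from=1, major_from=3)     # -> 0.5

# The six remaining domains are mapped analogously; the TELE-MPI score
# is then the sum of the eight domain scores divided by 8.
```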
Statistical analysis Descriptive statistics were expressed, for continuous variables, as mean and standard deviation (SD) or median and interquartile range (IQR), and, for discrete variables, as absolute and relative frequencies (percentages), by MPI category (MPI grade 1 vs MPI grades 2–3). The independent‐samples t‐test or the Mann–Whitney U test was used to compare continuous variables between groups. Chi‐square and Fisher's exact tests were used to compare categorical factors. Differences in hospitalizations and COVID‐19 cases were analysed using the Cochran–Mantel–Haenszel test. Paired‐samples t‐tests were used to compare MPI scores at baseline and at 12‐month follow‐up among subjects with MPI grade 1 or grades 2–3 at baseline, with or without Sars‐CoV‐2 infection. Multivariable logistic regression models were developed to identify whether frailty status and Sars‐CoV‐2 infection could predict worsening of frailty, defined as a difference of ≥0.1 between the MPI scores at 12‐month follow‐up and at baseline. We selected this cut‐off to identify significant worsening of frailty based on previous literature showing that an increase of 0.1 points in the MPI score is a clinically relevant change associated with an increased risk of negative outcomes. 8, 22, 26 A two‐tailed significance level of p = 0.05 was set for each test. All analyses were performed using SPSS v26.0 software for Windows (SPSS, Chicago, IL, USA).
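As an illustration of the main model, the sketch below dichotomizes the outcome at ΔMPI ≥ 0.1 and fits a logistic regression on synthetic data. It is a stand‐in rather than the study's actual code: statsmodels replaces the SPSS procedures, the data are randomly generated, and only a subset of the covariates is included for brevity.

```python
# Sketch of the frailty-worsening analysis (outcome: delta-MPI >= 0.1)
# on synthetic data; statsmodels stands in for the SPSS procedures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 217
df = pd.DataFrame({
    "mpi_baseline":      rng.uniform(0.0, 0.7, n),
    "mpi_12m":           rng.uniform(0.0, 0.9, n),
    "prefrail_baseline": rng.integers(0, 2, n),
    "covid_positive":    rng.integers(0, 2, n),
    "age":               rng.normal(79.4, 7.8, n),
    "female":            rng.integers(0, 2, n),
})

# Dichotomize the outcome at the clinically relevant 0.1-point change.
df["worsened"] = ((df["mpi_12m"] - df["mpi_baseline"]) >= 0.1).astype(int)

model = smf.logit(
    "worsened ~ prefrail_baseline + covid_positive + age + female",
    data=df,
).fit(disp=False)
print(np.exp(model.params))   # exponentiated coefficients = adjusted ORs
```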
RESULTS: Overall, 217 community‐dwelling older adults (mean age 79.44 ± 7.75 years, range 62–107 years; females: 49.8%) completed the MPI both at baseline and at 12‐month follow‐up. The mean MPI score at baseline was 0.30 (SD: 0.18), with a prevalence of pre‐frail/frail subjects (MPI grades 2–3) of 48.4%. As shown in Table 1, pre‐frail/frail subjects had higher levels of functional and cognitive impairment, malnutrition and social isolation, and more comorbidities and medications, than those with MPI grade 1 (robust subjects). Flu and anti‐pneumococcal vaccines had not been received by 21.7% and 59% of participants, respectively, with significantly lower pneumococcal vaccination coverage among pre‐frail/frail subjects (32.4% vs. 49.1% in robust subjects, p = 0.012). The incidence of Sars‐CoV‐2 infection was 12.9%, but it was almost five times higher among frailer than robust subjects (21.0% vs. 5.4%, OR: 4.68, 95% CI: 1.82–12.07, p = 0.001). Pre‐frail/frail subjects were also more likely to be hospitalized during the follow‐up than robust individuals (26.3% vs.
4.5%, OR: 7.55, 95% CI: 2.70–21.05, p < 0.001). (Table 1: Characteristics of patients by MPI category at baseline. Abbreviations: ADL, activities of daily living; BMI, body mass index; CIRS, Cumulative Illness Rating Scale; IADL, instrumental activities of daily living; MNA‐SF, Mini Nutritional Assessment Short Form; MPI, Multidimensional Prognostic Index; SD, standard deviation; TYM, Test Your Memory.) While subjects who were robust at baseline remained on average stable during the follow‐up (no robust subject evolved toward a pre‐frailty/frailty condition), pre‐frail/frail older adults underwent a significant deterioration of the MPI score over 12 months (0.46 ± 0.09 at baseline vs 0.50 ± 0.17 at 12 months, p = 0.027) (Figure 2). Indeed, in the multivariable analysis, adjusting for age, gender, BMI, multimorbidity (3 or more chronic diseases), flu and anti‐pneumococcal vaccination, and COVID‐19 positivity, pre‐frail/frail subjects at baseline had a significantly higher risk of further worsening of frailty (adjusted odds ratio (aOR): 13.60, 95% confidence interval (CI): 4.01 to 46.09, p < 0.001) than robust subjects (Table 2, Table S1). Pre‐frail/frail subjects, even when not infected by Sars‐CoV‐2, experienced a significant worsening of multidimensional frailty at the 12‐month follow‐up (MPI: 0.45 ± 0.08 at baseline vs 0.51 ± 0.17 at 12 months, p = 0.005). In the multivariable analysis, adjusting for age, gender, BMI, multimorbidity, and flu and anti‐pneumococcal vaccination, pre‐frail/frail older adults both non‐infected (aOR: 14.84, 95% CI: 4.26 to 51.74, p < 0.001) and infected by Sars‐CoV‐2 (aOR: 12.77, 95% CI: 2.66 to 61.40, p = 0.001) had a significantly greater risk of further worsening of frailty than robust, non‐infected subjects (Table 3, Table S2). Older adults who were robust at baseline were more likely to experience a worsening in the ADL domain, whereas among pre‐frail/frail subjects the domains that contributed most to further MPI worsening were loss of IADL, poor mobility, cognitive impairment, and malnutrition (Table S3). (Figure 2: Trajectories of multidimensional frailty based on MPI category at baseline. Center lines show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles; crosses represent sample means; data points are plotted as open circles. T0 = baseline; T1 = 12 months; MPI = Multidimensional Prognostic Index.) (Table 2: Predictors of worsening of frailty condition during the COVID‐19 pandemic. Note: *Model adjusted also for BMI, flu vaccination and anti‐pneumococcal vaccination. Abbreviations: ΔMPI, difference of MPI scores between 12‐month follow‐up and baseline; CI, confidence interval; OR, odds ratio; MPI, Multidimensional Prognostic Index.) (Table 3: Risk of worsening of frailty condition according to frailty status at baseline and COVID‐19 positivity. Note: *Model adjusted for age, gender, BMI, multimorbidity (3 or more chronic diseases), flu vaccination and anti‐pneumococcal vaccination. Abbreviations: ΔMPI, difference of MPI scores between 12‐month follow‐up and baseline; CI, confidence interval; OR, odds ratio; MPI, Multidimensional Prognostic Index.)
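To make the arithmetic behind the unadjusted estimates explicit, the sketch below reproduces the odds ratio for Sars‐CoV‐2 infection by baseline frailty status. The 2 × 2 counts are back‐calculated from the reported percentages (n = 217, 48.4% pre‐frail/frail; 21.0% vs. 5.4% infected) and are therefore approximate reconstructions, not values taken from the source tables.

```python
# Reconstructing the unadjusted OR for infection by frailty status.
# Counts are back-calculated from the reported percentages (approximate).
import math

frail_n, robust_n = 105, 112      # 48.4% of 217 participants vs. the rest
frail_pos, robust_pos = 22, 6     # ~21.0% and ~5.4% Sars-CoV-2 positive

odds_frail = frail_pos / (frail_n - frail_pos)
odds_robust = robust_pos / (robust_n - robust_pos)
or_ = odds_frail / odds_robust    # ~4.68, matching the reported value

# Wald 95% confidence interval on the log-odds-ratio scale
se = math.sqrt(1 / frail_pos + 1 / (frail_n - frail_pos)
               + 1 / robust_pos + 1 / (robust_n - robust_pos))
ci = (math.exp(math.log(or_) - 1.96 * se),
      math.exp(math.log(or_) + 1.96 * se))
print(f"OR = {or_:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}")  # 4.68 (1.82-12.07)
```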
DISCUSSION: In this cohort of community‐dwelling older adults assessed immediately before and during the COVID‐19 pandemic in Italy, we found that pre‐frail/frail subjects, independently of age, gender, the occurrence of Sars‐CoV‐2 infection and several other potential confounders, had a significantly higher risk of experiencing further worsening of their frailty condition after 1 year. Solid evidence has shown that frailty among hospitalized patients with Sars‐CoV‐2 infection is associated with a higher risk of more severe forms of the disease, delirium, and death. 6, 34, 35 Moreover, pre‐infection levels of frailty have been associated with increased care needs after hospitalization and poorer long‐term survival, also regardless of the features of the acute infection. 12, 13, 34 In parallel, frailty has been used as a criterion for less aggressive approaches: among older adults resident in long‐term care, COVID‐19 positivity and the presence of frailty were associated with a de‐escalation of care plans. 36 Frailty assessed by the Clinical Frailty Scale (CFS) has been shown to be a better predictor of patient outcome than chronological age and comorbidities. 37 However, the reliability of the CFS in adequately capturing the frailty condition has been questioned. 38 Results from studies conducted in different settings, including the community, confirmed that frailty, measured through a multidimensional approach, was associated with a significantly higher risk of negative health outcomes. 18 A higher MPI among older patients hospitalized in acute wards for COVID‐19, as well as among long‐term care and nursing home residents during the COVID‐19 outbreak, was a strong predictor of mortality risk, and each 0.1‐point increase of the MPI score was associated with an almost 40% higher probability of death. 8, 39, 40 Our data suggest that the impact of the COVID‐19 pandemic on the frailty condition of frail older adults is largely independent of the direct effects of the virus. Consistently, growing evidence draws attention to the burden of the indirect effects of COVID‐19 (i.e., psychological distress, cognitive impairment, malnutrition and physical inactivity), which translate into impaired multidimensional well‐being. 15 It is therefore reasonable to think, even more so during this pandemic, that only a CGA‐based approach can truly capture and track changes in the frailty condition. There is still a paucity of evidence on the effects of the COVID‐19 outbreak on frailty among community‐dwelling older adults. Here we showed that, depending on their pre‐pandemic frailty status, older adults experienced different trajectories of frailty during lockdown measures, independently of the occurrence of COVID‐19 infection. Consistently, in a population of community‐dwelling frail older adults with hypertension, a parallel impairment of physical and cognitive performance was observed during the COVID‐19 pandemic. 41 In a Japanese cohort of older adults assessed during the COVID‐19 outbreak, the transition rate from non‐frailty to frailty over 6 months was estimated at roughly 10%. 42 Another study showed an increase in the prevalence of social frailty, with 10.7% of subjects converting from robust to socially frail during a one‐year follow‐up spanning the declaration of the state of emergency. 43 This transition seemed to be associated with an exacerbation of depressive symptoms, but not with physical and cognitive functions.
43 Moreover, in a prospective study conducted between May and October 2020 among older adults in England and Spain, a reduction in frailty was observed as the restriction measures became less stringent, 44 suggesting that the effect of the pandemic on frailty status might be reversible. 16 The role of greater biological and social vulnerability in predisposing older adults to Sars‐CoV‐2 infection has been debated, 45 but few studies have explored, at a population level, the risk associated with being frail during the COVID‐19 pandemic. 46 , 47 In a report from the UK Biobank conducted on 383,845 subjects, frailty status before the COVID‐19 pandemic, assessed by both the phenotypic and the accumulation‐of‐deficits models, was associated with a roughly two‐fold higher risk of severe COVID‐19 infection resulting in hospital admission and death. 46 Another study, carried out on 241 community‐dwelling older adults from the SarcoPhAge cohort, showed that frailty, assessed by the Fried criteria, was associated with a seven‐fold higher risk of Sars‐CoV‐2 infection. 47 Here, we found an overall incidence of COVID‐19 positivity of 12.9%, which was roughly double in pre‐frail/frail subjects. However, independently of Sars‐CoV‐2 infection, we observed a significant worsening of the frailty condition only in those subjects who were pre‐frail/frail before the COVID‐19 outbreak. This study also has some limitations that should be disclosed. First, having only two timepoints at which to assess the frailty condition may have limited our ability to accurately capture frailty trajectories. Given the potential reversibility and fluctuation of this condition, we cannot state whether the observed worsening of frailty status in pre‐frail/frail subjects was a true continuous trend. Extending the study follow‐up would therefore be essential to address this issue. Second, COVID‐19 positivity may be underestimated because it was self‐reported by patients. Indeed, particularly during the first pandemic wave, some pauci‐symptomatic cases might have gone unrecognized, given also the difficulties in the provision of diagnostic tests. Third, to assess multidimensional frailty we used two different tools (i.e., SELFY‐MPI‐SF and TELE‐MPI), but both were developed from the standard MPI, with which they showed strong agreement, sharing the same explored domains and the same calculation algorithm; the mean difference between the MPI and each of the two derived tools was below the second decimal point. 25 , 26 Therefore, we believe that our results are only marginally affected by this methodological difference. Fourth, additional confounding factors, such as the severity of COVID‐19 disease, the length of quarantine and the presence of social support during COVID‐19 disease, could have been included in our analyses. Indeed, the worsening of the frailty condition could depend strongly on the immunological status of the subjects, and therefore on the capacity for viral clearance, the duration of hospitalization and isolation, and the availability of a caregiver or social services. Fifth, owing to the relatively high loss to follow‐up, our results might suffer from selection bias. However, the baseline characteristics (e.g., age, gender, comorbidities, SELFY‐MPI‐SF) of subjects who remained in the longitudinal study overlapped with those of subjects who did not complete the follow‐up. Finally, this study was performed on a relatively small population from a specific geographic area with a high density of older adults living in the community. 
Thus, for example, we were not able to differentiate between pre‐frail and frail subjects, and the generalizability of these findings might be limited and should be verified in larger multicenter studies. Moreover, given the post hoc nature of the analysis, the risk of further worsening of multidimensional frailty in pre‐frail/frail subjects during the COVID‐19 pandemic needs to be confirmed in ad hoc‐designed studies. CONCLUSIONS: The effects of the COVID‐19 pandemic among community‐dwelling pre‐frail/frail individuals go far beyond the mere infection and disease, and might also determine a significant deterioration of frailty status. More effort must be made to recognize and manage frailer subjects early, to avoid progression toward irreversible disability. Future studies should better define frailty trajectories, testing whether the slope of increase of multidimensional frailty, or the reaching of a specific MPI score threshold, determines differences in short‐ and long‐term outcomes for community‐dwelling older adults. AUTHOR CONTRIBUTIONS: AP: Conceptualization, funding acquisition, supervision, writing – review & editing; CC: methodology, formal analysis, writing – original draft preparation; SZ: data curation, project administration, writing – review & editing; SP: investigation, writing – review & editing; BS: investigation, writing – review & editing; CP: investigation, writing – review & editing; ET: investigation, writing – review & editing; NV: investigation, writing – review & editing; EZ: investigation, writing – review & editing; CT: investigation, writing – review & editing; CS: investigation, writing – review & editing; AC: conceptualization, investigation, writing – review & editing. FUNDING INFORMATION: This work was supported by Fondazione CARIGE (“Stronger, less frail” GRANT 2018). CONFLICT OF INTEREST: The authors declare that they have no competing interests. ETHICAL APPROVAL AND CONSENT TO PARTICIPATE: The Ethical Committee of the Department of Education of the University of Genoa (DISFOR), Genoa, Italy approved the present study on 5 September 2019; study number 030. CONSENT FOR PUBLICATION: Not applicable. Supporting information: Table S1‐S3
Background: Frailty has been recognized as a potential surrogate of biological age and a relevant risk factor for COVID-19 severity. Thus, it is important to explore frailty trajectories during the COVID-19 pandemic and understand how COVID-19 directly and indirectly impacts the frailty condition. Methods: We enrolled 217 community-dwelling older adults with available information on frailty condition, as assessed by the multidimensional frailty model both at baseline and at one-year follow-up using Multidimensional Prognostic Index (MPI) tools. Pre-frail/frail subjects were identified at baseline as those with an MPI score >0.33 (MPI grades 2-3). Frailty worsening was defined as an MPI difference between 12-month follow-up and baseline ≥0.1. Multivariable logistic regression was modelled to identify predictors of worsening of the frailty condition. Results: Frailer subjects at baseline (MPI grades 2-3 = 48.4%) were older, more frequently female and had higher rates of hospitalization and Sars-CoV-2 infection compared with robust ones (MPI grade 1). Having MPI grades 2-3 at baseline was associated with a higher risk of further worsening of the frailty condition (adjusted odds ratio (aOR): 13.60, 95% confidence interval (CI): 4.01-46.09), independently of age, gender and Sars-CoV-2 infection. Specifically, frail subjects without COVID-19 (aOR: 14.84, 95% CI: 4.26-51.74) as well as those with COVID-19 (aOR: 12.77, 95% CI: 2.66-61.40, p = 0.001) had a significantly higher risk of worsening of the frailty condition. Conclusions: The effects of the COVID-19 pandemic among community-dwelling frailer individuals go far beyond the mere infection and disease, determining a significant deterioration of frailty status in both infected and non-infected subjects.
BACKGROUND: Frailty is a potentially reversible geriatric condition characterized by a reduction of biological reserves that predisposes to countless negative outcomes, including disability and mortality. 1 With extended life expectancy and the rapid growth of the aging population, assessment of frailty status may represent a useful proxy for measuring biological age, beyond simple chronological age. 2 The divergences between these two perspectives on a patient's age have become particularly evident during the COVID‐19 pandemic. 3 , 4 Although initial epidemiological data suggested that COVID‐19 was a geriatric condition with the worst prognosis in older and multimorbid subjects, 5 this ageist criterion was progressively reviewed and overcome. 4 Patients of the same age could have completely different predispositions to contracting Sars‐CoV‐2 and to experiencing severe consequences of the disease. Rather, it emerged that frailty is a better predictor of disease severity and of a higher incidence of negative outcomes in hospitalized older patients (e.g. short‐ as well as long‐term mortality, length of hospital stay, higher incidence of admission to intensive care units and need for invasive mechanical ventilation). 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 Frail subjects may commonly experience atypical presentations of COVID‐19 disease, including hypotension, sudden functional decline, falls, and delirium, which may lead to diagnostic delay and further spread of infection. 14 The indirect effects of the COVID‐19 pandemic, mainly related to restriction measures, have also been extensively studied, including the obvious difficulties in access to care for subjects with chronic diseases, as well as the markedly increased incidence of psychosocial disorders (e.g. depression, anxiety and loneliness), malnutrition (over‐ or under‐nutrition) and cognitive impairment, which may contribute to the incidence and progression of the frailty condition. 15 , 16 , 17 However, there is still a paucity of direct evidence on the impact of the COVID‐19 outbreak on frailty status in a population‐based setting. Recently, a novel conceptual model for frailty evaluation has been proposed, based on the tenets of Comprehensive Geriatric Assessment (CGA) and called the “multidimensional model”. 18 This model, using the Multidimensional Prognostic Index (MPI) as its assessment tool, 19 has been applied in different settings and populations, including community‐dwelling subjects, in whom the prevalence of multidimensional frailty was estimated at 13.3%. 20 Thanks to its widely demonstrated clinimetric capacities, 18 the MPI has shown excellent accuracy in predicting several negative outcomes (e.g. mortality, hospitalization and falls), also in the community‐based setting. 21 , 22 , 23 Therefore, in this study, we aimed to explore the clinical course of the frailty condition, as measured by the MPI tools, over 1 year during the COVID‐19 pandemic in a cohort of community‐dwelling older adults, to understand how COVID‐19 directly and indirectly impacted the frailty condition. CONCLUSIONS: The effects of the COVID‐19 pandemic among community‐dwelling pre‐frail/frail individuals go far beyond the mere infection and disease, and might also determine a significant deterioration of frailty status. More effort must be made to recognize and manage frailer subjects early, to avoid progression toward irreversible disability. 
Future studies should better define frailty trajectories, testing whether the slope of increase of multidimensional frailty, or the reaching of a specific MPI score threshold, determines differences in short‐ and long‐term outcomes for community‐dwelling older adults.
Background: Frailty has been recognized as a potential surrogate of biological age and a relevant risk factor for COVID-19 severity. Thus, it is important to explore frailty trajectories during the COVID-19 pandemic and understand how COVID-19 directly and indirectly impacts the frailty condition. Methods: We enrolled 217 community-dwelling older adults with available information on frailty condition, as assessed by the multidimensional frailty model both at baseline and at one-year follow-up using Multidimensional Prognostic Index (MPI) tools. Pre-frail/frail subjects were identified at baseline as those with an MPI score >0.33 (MPI grades 2-3). Frailty worsening was defined as an MPI difference between 12-month follow-up and baseline ≥0.1. Multivariable logistic regression was modelled to identify predictors of worsening of the frailty condition. Results: Frailer subjects at baseline (MPI grades 2-3 = 48.4%) were older, more frequently female and had higher rates of hospitalization and Sars-CoV-2 infection compared with robust ones (MPI grade 1). Having MPI grades 2-3 at baseline was associated with a higher risk of further worsening of the frailty condition (adjusted odds ratio (aOR): 13.60, 95% confidence interval (CI): 4.01-46.09), independently of age, gender and Sars-CoV-2 infection. Specifically, frail subjects without COVID-19 (aOR: 14.84, 95% CI: 4.26-51.74) as well as those with COVID-19 (aOR: 12.77, 95% CI: 2.66-61.40, p = 0.001) had a significantly higher risk of worsening of the frailty condition. Conclusions: The effects of the COVID-19 pandemic among community-dwelling frailer individuals go far beyond the mere infection and disease, determining a significant deterioration of frailty status in both infected and non-infected subjects.
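As a small worked illustration of the categorization used above, the following Python helper maps raw MPI scores to the study's categories. The 0.33 cut-off and the ΔMPI ≥ 0.1 worsening rule come from the abstract; the 0.66 boundary between grades 2 and 3 is an assumption based on the usual three-grade MPI convention and is flagged as such in the code.

```python
# Sketch of the MPI categorization used in this study. The 0.33 cut-off is
# stated in the abstract; the 0.66 grade-2/grade-3 boundary is an assumption.
def mpi_grade(score: float) -> int:
    """Map a raw MPI score (0-1) to risk grade 1, 2 or 3."""
    if score <= 0.33:
        return 1          # robust (MPI grade 1)
    elif score <= 0.66:   # assumed upper bound for grade 2
        return 2          # pre-frail
    return 3              # frail

def frailty_worsened(mpi_t0: float, mpi_t12: float) -> bool:
    """Worsening as defined in the study: delta-MPI >= 0.1 over 12 months."""
    return (mpi_t12 - mpi_t0) >= 0.1

# Example: a baseline MPI of 0.46 is grade 2, and a rise to 0.57 counts as worsening
assert mpi_grade(0.46) == 2 and frailty_worsened(0.46, 0.57)
```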
12,920
339
[ 340, 2319, 833, 253, 252, 134, 18, 32, 3 ]
16
[ "mpi", "frailty", "sf", "score", "mpi sf", "selfy", "selfy mpi", "selfy mpi sf", "status", "scale" ]
[ "condition frail older", "effect pandemic frailty", "assessed clinical frailty", "19 pandemic frailty", "frailty status covid" ]
[CONTENT] community | comprehensive geriatric assessment | COVID‐19 | frailty | multidimensional prognostic index [SUMMARY]
[CONTENT] community | comprehensive geriatric assessment | COVID‐19 | frailty | multidimensional prognostic index [SUMMARY]
[CONTENT] community | comprehensive geriatric assessment | COVID‐19 | frailty | multidimensional prognostic index [SUMMARY]
[CONTENT] community | comprehensive geriatric assessment | COVID‐19 | frailty | multidimensional prognostic index [SUMMARY]
[CONTENT] community | comprehensive geriatric assessment | COVID‐19 | frailty | multidimensional prognostic index [SUMMARY]
[CONTENT] community | comprehensive geriatric assessment | COVID‐19 | frailty | multidimensional prognostic index [SUMMARY]
[CONTENT] Female | Humans | Aged | Frailty | Independent Living | COVID-19 | Geriatric Assessment | Pandemics | SARS-CoV-2 [SUMMARY]
[CONTENT] Female | Humans | Aged | Frailty | Independent Living | COVID-19 | Geriatric Assessment | Pandemics | SARS-CoV-2 [SUMMARY]
[CONTENT] Female | Humans | Aged | Frailty | Independent Living | COVID-19 | Geriatric Assessment | Pandemics | SARS-CoV-2 [SUMMARY]
[CONTENT] Female | Humans | Aged | Frailty | Independent Living | COVID-19 | Geriatric Assessment | Pandemics | SARS-CoV-2 [SUMMARY]
[CONTENT] Female | Humans | Aged | Frailty | Independent Living | COVID-19 | Geriatric Assessment | Pandemics | SARS-CoV-2 [SUMMARY]
[CONTENT] Female | Humans | Aged | Frailty | Independent Living | COVID-19 | Geriatric Assessment | Pandemics | SARS-CoV-2 [SUMMARY]
[CONTENT] condition frail older | effect pandemic frailty | assessed clinical frailty | 19 pandemic frailty | frailty status covid [SUMMARY]
[CONTENT] condition frail older | effect pandemic frailty | assessed clinical frailty | 19 pandemic frailty | frailty status covid [SUMMARY]
[CONTENT] condition frail older | effect pandemic frailty | assessed clinical frailty | 19 pandemic frailty | frailty status covid [SUMMARY]
[CONTENT] condition frail older | effect pandemic frailty | assessed clinical frailty | 19 pandemic frailty | frailty status covid [SUMMARY]
[CONTENT] condition frail older | effect pandemic frailty | assessed clinical frailty | 19 pandemic frailty | frailty status covid [SUMMARY]
[CONTENT] condition frail older | effect pandemic frailty | assessed clinical frailty | 19 pandemic frailty | frailty status covid [SUMMARY]
[CONTENT] mpi | frailty | sf | score | mpi sf | selfy | selfy mpi | selfy mpi sf | status | scale [SUMMARY]
[CONTENT] mpi | frailty | sf | score | mpi sf | selfy | selfy mpi | selfy mpi sf | status | scale [SUMMARY]
[CONTENT] mpi | frailty | sf | score | mpi sf | selfy | selfy mpi | selfy mpi sf | status | scale [SUMMARY]
[CONTENT] mpi | frailty | sf | score | mpi sf | selfy | selfy mpi | selfy mpi sf | status | scale [SUMMARY]
[CONTENT] mpi | frailty | sf | score | mpi sf | selfy | selfy mpi | selfy mpi sf | status | scale [SUMMARY]
[CONTENT] mpi | frailty | sf | score | mpi sf | selfy | selfy mpi | selfy mpi sf | status | scale [SUMMARY]
[CONTENT] frailty | covid 19 | covid | 19 | condition | mortality | age | including | incidence | disease [SUMMARY]
[CONTENT] mpi | sf | mpi sf | selfy | selfy mpi | selfy mpi sf | administered | self administered | scale | score [SUMMARY]
[CONTENT] frail | mpi | baseline | vaccination | ci | pre | pneumococcal | subjects | 12 | pre frail [SUMMARY]
[CONTENT] determine | frailty | dwelling | community dwelling | community | frail | frailty trajectories | individuals far | individuals far mere | individuals far mere infection [SUMMARY]
[CONTENT] mpi | applicable | frailty | frail | score | sf | selfy mpi sf | mpi sf | selfy mpi | selfy [SUMMARY]
[CONTENT] mpi | applicable | frailty | frail | score | sf | selfy mpi sf | mpi sf | selfy mpi | selfy [SUMMARY]
[CONTENT] COVID-19 ||| COVID-19 | COVID-19 [SUMMARY]
[CONTENT] 217 | one-year | Multidimensional Prognostic Index ||| 0.33 | MPI | 2 ||| 12 months ||| [SUMMARY]
[CONTENT] 2 | 48.4% | 1 ||| 2 | 13.60 | 95% | CI | 4.01 ||| COVID-19 | 14.84 | 95% | CI | 4.26-51.74 | COVID-19 | 12.77 | 95% | CI | 2.66-61.40 | 0.001 [SUMMARY]
[CONTENT] COVID-19 [SUMMARY]
[CONTENT] COVID-19 ||| COVID-19 | COVID-19 ||| 217 | one-year | Multidimensional Prognostic Index ||| 0.33 | MPI | 2 ||| 12 months ||| ||| ||| 2 | 48.4% | 1 ||| 2 | 13.60 | 95% | CI | 4.01 ||| COVID-19 | 14.84 | 95% | CI | 4.26-51.74 | COVID-19 | 12.77 | 95% | CI | 2.66-61.40 | 0.001 ||| COVID-19 [SUMMARY]
[CONTENT] COVID-19 ||| COVID-19 | COVID-19 ||| 217 | one-year | Multidimensional Prognostic Index ||| 0.33 | MPI | 2 ||| 12 months ||| ||| ||| 2 | 48.4% | 1 ||| 2 | 13.60 | 95% | CI | 4.01 ||| COVID-19 | 14.84 | 95% | CI | 4.26-51.74 | COVID-19 | 12.77 | 95% | CI | 2.66-61.40 | 0.001 ||| COVID-19 [SUMMARY]
Double bypass for inoperable pancreatic malignancy at laparotomy: postoperative complications and long-term outcome.
23131226
Between 4% and 13% of patients with operable pancreatic malignancy are found unresectable at the time of surgery. Double bypass is a good option for fit patients but it is associated with a high risk of postoperative complications. The aim of this study was to identify pre-operatively which patients undergoing double bypass are at high risk of complications and to assess their long-term outcome.
INTRODUCTION
Of the 576 patients undergoing pancreatic resections between 2006 and 2011, 50 patients who underwent a laparotomy for a planned pancreaticoduodenectomy had a double bypass procedure for inoperable disease. Demographic data, risk factors for postoperative complications and pre-operative anaesthetic assessment data including the Portsmouth Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity (P-POSSUM) and cardiopulmonary exercise testing (CPET) were collected.
METHODS
Fifty patients (33 men and 17 women) were included in the study. The median patient age was 64 years (range: 39-79 years). The complication rate was 50% and the in-hospital mortality rate was 4%. The P-POSSUM physiology subscore and low anaerobic threshold at CPET were significantly associated with postoperative complications (p=0.005 and p=0.016 respectively) but they were unable to predict them. Overall long-term survival was significantly shorter in patients with postoperative complications (9 vs 18 months). Postoperative complications were independently associated with poorer long-term survival (p=0.003, odds ratio: 3.261).
RESULTS
P-POSSUM and CPET are associated with postoperative complications but the possibility of using them for risk prediction requires further research. However, postoperative complications following double bypass have a significant impact on long-term survival and this type of surgery should therefore only be performed in specialised centres.
CONCLUSIONS
[ "Adult", "Anastomosis, Roux-en-Y", "Female", "Gastroenterostomy", "Humans", "Length of Stay", "Male", "Middle Aged", "Palliative Care", "Pancreatic Neoplasms", "Pancreaticoduodenectomy", "Postoperative Complications", "Preoperative Care", "Prospective Studies", "Risk Assessment", "Stents", "Survival Analysis", "Treatment Outcome", "Young Adult" ]
3954282
null
null
Methods
Patient population The Freeman Hospital in Newcastle upon Tyne is a referral centre for pancreatic diseases in the north-east of England and the hepatopancreaticobiliary unit performs on average 120 pancreatic resections per year for malignancy. All patients who underwent a double bypass procedure between January 2006 and July 2011 at the hepatopancreatobiliary unit were identified from a prospectively held database. Only patients with pre-operatively resectable pancreatic malignancy or those with borderline resectability were included in the study. Patients without histological evidence of pancreatic malignancy were excluded. Endoscopic ultrasonography was performed only when tissue diagnosis was not available or when better assessment of vascular invasion was needed. Contraindications for resectability were: superior mesenteric artery, hepatic artery and coeliac artery involvement; superior mesenteric vein involvement below the first mesenteric branches; liver metastases; peritoneal metastases; and inferior vena cava involvement. In all cases the double bypass surgery consisted of double jejunal loop reconstruction with an end-to-side hepaticojejunostomy on a Roux-en-Y with a loop gastroenterostomy. Patients without up-to-date (within four weeks of surgery) CT were excluded from the study as this clearly relates to outcome of patients found to be unresectable at surgery. None of these patients received pre-operative chemotherapy or radiotherapy. Patients with neuroendocrine tumours were also excluded from the study as they have a less aggressive disease. All patients were assessed routinely for pancreaticoduodenectomy. The American Society of Anesthesiologists (ASA) physical status classification system was adopted (grade 1/2 = normal healthy or mild systemic disease; grade 3 = severe systemic disease; grade 4/5 = severe systemic disease that is a constant threat to life or moribund). The Portsmouth Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity (P-POSSUM) physiology subscore (age, cardiac and respiratory status, electrocardiography report, systolic blood pressure, pulse rate, haemoglobin, white cell count, urea, sodium, potassium, Glasgow coma scale) was also calculated as it has been validated previously as a very good predictor of morbidity in pancreatic surgery.11,12 Patients were also selected for CPET by virtue of a low subjective functional capacity. This was defined by a metabolic equivalent score of ≤7, according to a simple activity-based clinical history. In our institution, we have demonstrated previously (unpublished data) that patients with a simple metabolic equivalent score of ≤7 do not predictably have major complications postoperatively. CPET is the analysis of both cardiac and pulmonary data during exercise. It measures both O2 consumption and CO2 production, and can identify the point at which CO2 levels start to increase at the airway. This correlates to the lactate, or AT where energy systems begin to change over. Outcome measures Postoperative complications were recorded according to the postoperative morbidity survey.13 Secondary outcomes included intensive care unit stay, hospital stay, in-hospital mortality rates, delay or cancellation of palliative chemotherapy due to complications and overall survival. Patients were reviewed routinely in outpatient clinics every three months. All data (including demographic details, medical history, intra-operative details, morbidity and mortality, palliative chemotherapy and overall survival) were collected from a prospectively held database. Patients with in-hospital mortality were excluded from the long-term survival analysis. Statistical analysis T-test (continuous, normal distribution), Mann–Whitney (continuous, non-normal distribution) and chi-square (categorical) analyses were used to compare demographic and clinical variables between groups. A two-tailed p-value of <0.05 was considered significant. The receiver operating characteristic (ROC) curve was used to determine optimal values (upper left corner), area under the curve (AUC), sensitivity and specificity. Survival was calculated by the Kaplan–Meier method and Cox proportional hazards regression. All analyses were performed using SPSS® 19.0 (SPSS, Chicago, IL, US). A p-value of <0.05 was considered significant.
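The "upper left corner" ROC criterion mentioned above can be made concrete with a short sketch. This is not the authors' SPSS workflow; it is an illustrative Python equivalent that picks the cutoff minimising the distance to the point (FPR = 0, TPR = 1). The arrays are synthetic illustrative values, not the study data.

```python
# Sketch of the ROC "upper left corner" cutoff selection described above.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic illustrative values, NOT the study data.
y = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])                 # complication yes/no
score = np.array([14, 16, 15, 19, 17, 18, 21, 16, 20, 17])   # P-POSSUM physiology

fpr, tpr, thresholds = roc_curve(y, score)
auc = roc_auc_score(y, score)

# Upper-left-corner criterion: minimise Euclidean distance to (FPR=0, TPR=1)
dist = np.sqrt(fpr**2 + (1 - tpr)**2)
best = np.argmin(dist)
print(f"AUC={auc:.2f}, optimal cutoff={thresholds[best]}, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```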
null
null
Conclusions
Postoperative complications are independently associated with poorer long-term outcome in patients with pancreatic malignancy found unresectable at laparotomy. This type of surgery should therefore only be performed in specialised centres. A high P-POSSUM physiology subscore and a low cardiopulmonary reserve are pre-operative parameters associated with postoperative complications, but further studies are needed to clarify their value as risk predictors. This would allow better selection of the patients who truly benefit from palliative surgery at the time of laparotomy.
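The survival comparison reported in this study (Kaplan–Meier curves by complication status plus a Cox proportional hazards model) can be sketched as follows. This is a minimal illustrative equivalent using the lifelines library, not the authors' SPSS analysis; the input file and column names (complication, chemotherapy, survival_months, died) are hypothetical placeholders.

```python
# Sketch of the survival analysis: Kaplan-Meier by complication status and a
# Cox model with complications and palliative chemotherapy as covariates.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("bypass_cohort.csv")  # hypothetical file, one row per patient

kmf = KaplanMeierFitter()
for label, grp in df.groupby("complication"):   # 0 = none, 1 = one or more
    kmf.fit(grp["survival_months"], event_observed=grp["died"], label=str(label))
    print(label, kmf.median_survival_time_)     # median survival per group

cph = CoxPHFitter()
cph.fit(df[["survival_months", "died", "complication", "chemotherapy"]],
        duration_col="survival_months", event_col="died")
cph.print_summary()   # hazard ratios with 95% confidence intervals
```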
[ "Patient population", "Outcome measures", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ "The Freeman Hospital in Newcastle upon Tyne is a referral centre for pancreatic diseases in the north-east of England and the hepatopancreaticobiliary unit performs on average 120 pancreatic resections per year for malignancy. All patients who underwent a double bypass procedure between January 2006 and July 2011 at the hepatopancreatobiliary unit were identified from a prospectively held database.\nOnly patients with pre-operatively resectable pancreatic malignancy or those with borderline resectability were included in the study. Patients without histological evidence of pancreatic malignancy were excluded. Endoscopic ultrasonography was performed only when tissue diagnosis was not available or when better assessment of vascular invasion was needed.\nContraindications for resectability were: superior mesenteric artery, hepatic artery and coeliac artery involvement; superior mesenteric vein involvement below the first mesenteric branches; liver metastases; peritoneal metastases; and inferior vena cava involvement.\nIn all cases the double bypass surgery consisted of double jejunal loop reconstruction with an end-to-side hepaticojejunostomy on a Roux-en-Y with a loop gastroenterostomy.\nPatients without up-to-date (within four weeks of surgery) CT were excluded from the study as this clearly relates to outcome of patients found to be unresectable at surgery. None of these patients received pre-operative chemotherapy or radiotherapy. Patients with neuroendocrine tumours were also excluded from the study as they have a less aggressive disease.\nAll patients were assessed routinely for pancreaticoduodenectomy. The American Society of Anesthesiologists (ASA) physical status classification system was adopted (grade 1/2 = normal healthy or mild systemic disease; grade 3 = severe systemic disease; grade 4/5 = severe systemic disease that is a constant threat to life or moribund). The Portsmouth Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity (P-POSSUM) physiology subscore (age, cardiac and respiratory status, electrocardiography report, systolic blood pressure, pulse rate, haemoglobin, white cell count, urea, sodium, potassium, Glasgow coma scale) was also calculated as it has been validated previously as a very good predictor of morbidity in pancreatic surgery.11,12\nPatients were also selected for CPET by virtue of a low subjective functional capacity. This was defined by a metabolic equivalent score of ≤7, according to a simple activity-based clinical history. In our institution, we have demonstrated previously (unpublished data) that patients with a simple metabolic equivalent score of ≤7 do not predictably have major complications postoperatively. CPET is the analysis of both cardiac and pulmonary data during exercise. It measures both O2 consumption and CO2 production, and can identify the point at which CO2 levels start to increase at the airway. This correlates to the lactate, or AT where energy systems begin to change over.", "Postoperative complications were recorded according to the postoperative morbidity survey.13 Secondary outcomes included intensive care unit stay, hospital stay, in-hospital mortality rates, delay or cancellation of palliative chemotherapy due to complications and overall survival. 
Patients were reviewed routinely in outpatient clinics every three months.\nAll data (including demographic details, medical history, intra-operative details, morbidity and mortality, palliative chemotherapy and overall survival) were collected from a prospectively held database. Patients with in-hospital mortality were excluded from the long-term survival analysis.", "T-test (continuous, normal distribution), Mann–Whitney (continuous, non-normal distribution) and chi-square (categorical) analyses were used to compare demographic and clinical variables between groups. A two-tailed p-value of <0.05 was considered significant. The receiver operating characteristic (ROC) curve was used to determine optimal values (upper left corner), area under the curve (AUC), sensitivity and specificity. Survival was calculated by the Kaplan–Meier method and Cox proportional hazards regression. All analyses were performed using SPSS® 19.0 (SPSS, Chicago, IL, US). A p-value of <0.05 was considered significant.", "Out of 576 patients who had an operation for pancreatic malignancy between January 2006 and July 2011, 57 patients (10%) underwent palliative double bypass surgery. All patients had a pre-operative biliary stent. (Usually, a plastic stent was used; four patients had a short metal stent.) No patients had pre-operative gastric outlet obstruction. Four patients with a diagnosis of neuroendocrine malignancy and three patients who were not considered for a pancreaticoduodenectomy at the time of the bypass were excluded. Three patients who did not have up-to-date CT underwent a palliative gastroenterostomy.\nFifty patients were considered in the study. The population data are shown in Table 1. The median age was 64 years (range: 39–79 years), and 33 male and 17 female patients were evaluated. The most common indication for surgery was pancreatic ductal adenocarcinoma (n=35, 70%). Postoperative complications occurred in 50% of patients (n=25). In-hospital mortality was 4% (n=2). The median hospital stay postoperatively was 12.6 days (range: 5–39 days). No patients were lost at follow-up. At the time of analysis, 16 patients (32%) were still alive. The median follow-up duration was 10 months (range: 4–33 months). 
Reasons for inoperability are also shown in Table 1.\nTable 1. Population baseline characteristics\nMedian age in years (range): 64 (39–79)\nSex: male 33 (66%); female 17 (34%)\nMedian body mass index in kg/m2 (range): 25.5 (18–42)\nIndication for surgery: ductal carcinoma 35 (70%); ampullary carcinoma 4 (8%); duodenal carcinoma 4 (8%); cholangiocarcinoma 1 (2%); undifferentiated adenocarcinoma 6 (12%)\nReason for palliative surgery: SMA, CA, HA or SMV involvement 28 (56%); liver metastases 15 (30%); peritoneal metastases 4 (8%); more than one of the above reasons 3 (6%)\nPostoperative complications: 25 (50%)\nMore than 1 complication: 9 (18%)\nType of complication (POMS)*: cardiovascular 2 (4%); pulmonary 6 (12%); renal 3 (6%); gastrointestinal (including GOO) 5 (10%); anastomotic leak (biliary, gastric or enteric) 6 (12%); others (wound infection, uncontrolled diabetes etc) 12 (24%)\nMedian length of hospital stay in days (range): 12.6 (5–39)\nRelaparotomy: 3 (6%)\nIn-hospital mortality: 2 (4%)\nSMA = superior mesenteric artery; CA = coeliac artery; HA = hepatic artery; SMV = superior mesenteric vein; GOO = gastric outlet obstruction\n*POMS (postoperative morbidity survey): anastomotic leak is shown as a separate complication\nPrimary and secondary outcomes are shown in Table 2. Twenty of the fifty patients had pre-operative CPET. P-POSSUM physiology subscore and AT were significantly associated with postoperative complications (p=0.005 and p=0.016 respectively).\nTable 2. Outcome variables comparing patients with and without complications (no complications, n=25, vs 1 or more complications, n=25; p-value)\nMedian age in years (range): 64.6 (45–79) vs 64.6 (39–76); p=1.0\nSex (male/female): 15/10 vs 18/7; p=0.551\nIndication for surgery (ductal/ampullary/duodenal/cholangiocarcinoma/undifferentiated): 18/1/2/1/3 vs 17/3/2/0/3; p=0.731\nReason for palliative surgery (vascular involvement/liver metastases/peritoneal metastases/more than one): 16/6/2/1 vs 12/9/2/2; p=0.681\nMedian body mass index in kg/m2 (range): 25.5 (21–42) vs 25.4 (18–39); p=0.954\nMedian blood loss at surgery in ml (range): 733 (100–2,310) vs 837 (250–2,500); p=0.491\nOperating time in hours (range): 4.0 (1.5–6) vs 4.2 (2–7); p=0.553\nBiliary stent at surgery: 19 vs 19; p=1.0\nBilirubin level at surgery in μmol/l (range): 36.0 (3–100) vs 63.6 (6–208); p=0.064\nDiabetes: 7 vs 7; p=1.0\nCardiac history: 6 vs 6; p=1.0\nASA grade (2/3/4): 20/5/0 vs 12/12/1; p=0.053\nMedian P-POSSUM physiology score (range): 16.5 (12–20) vs 18.7 (15–26); p=0.005\nMedian anaerobic threshold in ml/kg/min (range)*: 14.1 (10.3–16.9) vs 11.3 (6.2–15.4); p=0.016\nChemotherapy following surgery: 16 vs 12; p=0.693\nChemotherapy not given due to prolonged hospital stay: 0 vs 3; p=0.235\nSMA = superior mesenteric artery; CA = coeliac artery; HA = hepatic artery; SMV = superior mesenteric vein; ASA = American Society of Anesthesiologists; P-POSSUM = Portsmouth Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity\n*20 patients only\nThe median P-POSSUM physiology subscore in the
complications group was 18.7. The ROC curve showed an optimal P-POSSUM physiology subscore cutoff at 16.5 with a fair degree of accuracy (AUC 70%, p=0.013), and a sensitivity and specificity of 72% and 56% respectively. ROC analysis was not performed on AT due to the small number of patients.\nThe median overall survival for all patients was 14.6 months. Patients with postoperative complications had a significantly shorter median survival than those without complications (9 vs 18 months, p=0.003) (Fig 1).\nFigure 1. Survival for patients with and without postoperative complications\nOn univariate analysis, age at surgery, sex, tumour histology and reason for inoperability were not significantly associated with long-term survival. On Cox regression analysis, neither P-POSSUM nor AT was associated with overall long-term survival (p=0.274 and p=0.498 respectively). On multivariate analysis, postoperative complications were independently associated with poorer long-term survival (p=0.003, odds ratio [OR]: 3.261, 95% confidence interval [CI]: 1.492–7.129). Palliative chemotherapy was also significantly associated with survival (p=0.028, OR: 0.446, 95% CI: 0.215–0.924). No multicollinearity was evidenced between the two variables.", "There is currently no strong evidence on how to manage patients with inoperable disease at laparotomy. Most of the literature has focused on comparing surgical bypass with biliary stents in biliary obstruction. Three randomised studies that compared surgical bypass with plastic biliary stenting found no difference in the rates of technical or therapeutic success.14–16 However, although the relative risk of complications was lower in the stenting group, there was a highly significant reduction in recurrent biliary obstruction at four months in the surgical group.\nLaparoscopic biliary bypass has also been developed as a minimally invasive approach17 but there is no randomised evidence to show superiority when comparing open and laparoscopic biliary bypass surgery.3 With regard to gastric outlet obstruction, which occurs in 10–20% of pancreatic cancer patients,18 laparoscopic gastroenterostomy has been introduced as a minimally invasive approach to surgical bypass and there is some evidence of a quicker recovery rate19 although duodenal stents may be preferable in patients with a short life expectancy.18\nOne of the questions we tried to answer was whether it is possible to identify those patients with a high risk of complications and perhaps consider avoiding a surgical bypass at the time of laparotomy. We demonstrated that the P-POSSUM physiology subscore and AT (albeit with a small sample size) are significantly associated with postoperative complications but we failed to validate these tests for risk prediction.\nIn fact, one of the problems we encountered was that all our patients were candidates for elective pancreaticoduodenectomy and therefore relatively fit. The median P-POSSUM physiology subscore in the complications group was low, making it difficult to use this test to predict complications, as evidenced by the ROC curve. The same applied to AT (11.3 ml/kg/min in the complications group), which was higher when compared with our previous analysis, where an AT of <10.1 was used to predict complications.20 We believe larger studies are needed to test P-POSSUM and AT as predictors of complications.\nThere is very little literature looking into risk prediction in these patients. 
In 2009 de Castro et al published a study showing that POSSUM overpredicts morbidity and predicts overall long-term survival in patients with unresectable pancreatic cancer.21 In this study, surprisingly, 32% of the patients were found unresectable at laparotomy, possibly because of historical data, in contrast with the literature reporting an unresectability rate at laparotomy of 4–13%.\nPOSSUM includes the same variables as P-POSSUM but a different formula is used to predict the risk of death11 and it consists of 12 physiological factors (age, cardiac status, respiratory status, electrocardiography report, systolic blood pressure, pulse rate, Glasgow coma scale, haemoglobin, white cell count, urea, sodium, potassium) and 6 operative parameters (operative complexity, multiple procedures, blood loss, peritoneal contamination, extent of malignant spread, elective or emergency surgery). Obviously, these last parameters are very difficult to assess before surgery and therefore the role of POSSUM in this study is very limited.\nAn interesting finding of our study was that complications have a significant impact on long-term survival. In 2012 Kamphues et al published the long-term outcomes of 428 patients who underwent resection of pancreatic head cancer.22 The median survival was 15.5 months with a postoperative complication rate of 32.7%. The occurrence of severe postoperative complications was associated with significantly shortened survival compared with patients without complications (16.5 vs 12.4 months; p=0.002) and was identified as an independent prognostic factor (p=0.002). Similar results have been shown in almost every type of gastrointestinal malignancy.23\nAs other studies have indicated, long-term survival after surgery may be affected by the degree to which the systemic immune response to infection or surgical trauma is shifted toward a Th2-type lymphocyte pattern.24–26 IL-10, one of the Th2 cytokines, downregulates tumour-specific immune responses by directly suppressing IFN-γ and IL-12 production, thereby reducing major histocompatibility complex expression on the surface of tumour cells and inhibiting tumour antigen presentation by antigen-presenting cells.27–29 In line with these studies, we have demonstrated that severe postoperative complications have a strong impact on the long-term survival of patients with inoperable pancreatic head cancer and, as a result, these patients should be treated only in specialised centres.", "Postoperative complications are independently associated with poorer long-term outcome in patients with pancreatic malignancy found unresectable at laparotomy. This type of surgery should therefore only be performed in specialised centres. A high P-POSSUM physiology subscore and a low cardiopulmonary reserve are pre-operative parameters associated with postoperative complications, but further studies are needed to clarify their value as risk predictors. This would allow better selection of the patients who truly benefit from palliative surgery at the time of laparotomy." ]
[ null, null, null, null, null, null ]
[ "Methods", "Patient population", "Outcome measures", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ " Patient population The Freeman Hospital in Newcastle upon Tyne is a referral centre for pancreatic diseases in the north-east of England and the hepatopancreaticobiliary unit performs on average 120 pancreatic resections per year for malignancy. All patients who underwent a double bypass procedure between January 2006 and July 2011 at the hepatopancreatobiliary unit were identified from a prospectively held database.\nOnly patients with pre-operatively resectable pancreatic malignancy or those with borderline resectability were included in the study. Patients without histological evidence of pancreatic malignancy were excluded. Endoscopic ultrasonography was performed only when tissue diagnosis was not available or when better assessment of vascular invasion was needed.\nContraindications for resectability were: superior mesenteric artery, hepatic artery and coeliac artery involvement; superior mesenteric vein involvement below the first mesenteric branches; liver metastases; peritoneal metastases; and inferior vena cava involvement.\nIn all cases the double bypass surgery consisted of double jejunal loop reconstruction with an end-to-side hepaticojejunostomy on a Roux-en-Y with a loop gastroenterostomy.\nPatients without up-to-date (within four weeks of surgery) CT were excluded from the study as this clearly relates to outcome of patients found to be unresectable at surgery. None of these patients received pre-operative chemotherapy or radiotherapy. Patients with neuroendocrine tumours were also excluded from the study as they have a less aggressive disease.\nAll patients were assessed routinely for pancreaticoduodenectomy. The American Society of Anesthesiologists (ASA) physical status classification system was adopted (grade 1/2 = normal healthy or mild systemic disease; grade 3 = severe systemic disease; grade 4/5 = severe systemic disease that is a constant threat to life or moribund). The Portsmouth Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity (P-POSSUM) physiology subscore (age, cardiac and respiratory status, electrocardiography report, systolic blood pressure, pulse rate, haemoglobin, white cell count, urea, sodium, potassium, Glasgow coma scale) was also calculated as it has been validated previously as a very good predictor of morbidity in pancreatic surgery.11,12\nPatients were also selected for CPET by virtue of a low subjective functional capacity. This was defined by a metabolic equivalent score of ≤7, according to a simple activity-based clinical history. In our institution, we have demonstrated previously (unpublished data) that patients with a simple metabolic equivalent score of ≤7 do not predictably have major complications postoperatively. CPET is the analysis of both cardiac and pulmonary data during exercise. It measures both O2 consumption and CO2 production, and can identify the point at which CO2 levels start to increase at the airway. This correlates to the lactate, or AT where energy systems begin to change over.\nThe Freeman Hospital in Newcastle upon Tyne is a referral centre for pancreatic diseases in the north-east of England and the hepatopancreaticobiliary unit performs on average 120 pancreatic resections per year for malignancy. 
All patients who underwent a double bypass procedure between January 2006 and July 2011 at the hepatopancreatobiliary unit were identified from a prospectively held database.\nOnly patients with pre-operatively resectable pancreatic malignancy or those with borderline resectability were included in the study. Patients without histological evidence of pancreatic malignancy were excluded. Endoscopic ultrasonography was performed only when tissue diagnosis was not available or when better assessment of vascular invasion was needed.\nContraindications for resectability were: superior mesenteric artery, hepatic artery and coeliac artery involvement; superior mesenteric vein involvement below the first mesenteric branches; liver metastases; peritoneal metastases; and inferior vena cava involvement.\nIn all cases the double bypass surgery consisted of double jejunal loop reconstruction with an end-to-side hepaticojejunostomy on a Roux-en-Y with a loop gastroenterostomy.\nPatients without up-to-date (within four weeks of surgery) CT were excluded from the study as this clearly relates to outcome of patients found to be unresectable at surgery. None of these patients received pre-operative chemotherapy or radiotherapy. Patients with neuroendocrine tumours were also excluded from the study as they have a less aggressive disease.\nAll patients were assessed routinely for pancreaticoduodenectomy. The American Society of Anesthesiologists (ASA) physical status classification system was adopted (grade 1/2 = normal healthy or mild systemic disease; grade 3 = severe systemic disease; grade 4/5 = severe systemic disease that is a constant threat to life or moribund). The Portsmouth Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity (P-POSSUM) physiology subscore (age, cardiac and respiratory status, electrocardiography report, systolic blood pressure, pulse rate, haemoglobin, white cell count, urea, sodium, potassium, Glasgow coma scale) was also calculated as it has been validated previously as a very good predictor of morbidity in pancreatic surgery.11,12\nPatients were also selected for CPET by virtue of a low subjective functional capacity. This was defined by a metabolic equivalent score of ≤7, according to a simple activity-based clinical history. In our institution, we have demonstrated previously (unpublished data) that patients with a simple metabolic equivalent score of ≤7 do not predictably have major complications postoperatively. CPET is the analysis of both cardiac and pulmonary data during exercise. It measures both O2 consumption and CO2 production, and can identify the point at which CO2 levels start to increase at the airway. This correlates to the lactate, or AT where energy systems begin to change over.\n Outcome measures Postoperative complications were recorded according to the postoperative morbidity survey.13 Secondary outcomes included intensive care unit stay, hospital stay, in-hospital mortality rates, delay or cancellation of palliative chemotherapy due to complications and overall survival. Patients were reviewed routinely in outpatient clinics every three months.\nAll data (including demographic details, medical history, intra-operative details, morbidity and mortality, palliative chemotherapy and overall survival) were collected from a prospectively held database. 
Patients with in-hospital mortality were excluded from the long-term survival analysis.\nPostoperative complications were recorded according to the postoperative morbidity survey.13 Secondary outcomes included intensive care unit stay, hospital stay, in-hospital mortality rates, delay or cancellation of palliative chemotherapy due to complications and overall survival. Patients were reviewed routinely in outpatient clinics every three months.\nAll data (including demographic details, medical history, intra-operative details, morbidity and mortality, palliative chemotherapy and overall survival) were collected from a prospectively held database. Patients with in-hospital mortality were excluded from the long-term survival analysis.\n Statistical analysis T-test (continuous, normal distribution), Mann–Whitney (continuous, non-normal distribution) and chi-square (categorical) analyses were used to compare demographic and clinical variables between groups. A two-tailed p-value of <0.05 was considered significant. The receiver operating characteristic (ROC) curve was used to determine optimal values (upper left corner), area under the curve (AUC), sensitivity and specificity. Survival was calculated by the Kaplan–Meier method and Cox proportional hazards regression. All analyses were performed using SPSS® 19.0 (SPSS, Chicago, IL, US). A p-value of <0.05 was considered significant.\nT-test (continuous, normal distribution), Mann–Whitney (continuous, non-normal distribution) and chi-square (categorical) analyses were used to compare demographic and clinical variables between groups. A two-tailed p-value of <0.05 was considered significant. The receiver operating characteristic (ROC) curve was used to determine optimal values (upper left corner), area under the curve (AUC), sensitivity and specificity. Survival was calculated by the Kaplan–Meier method and Cox proportional hazards regression. All analyses were performed using SPSS® 19.0 (SPSS, Chicago, IL, US). A p-value of <0.05 was considered significant.", "The Freeman Hospital in Newcastle upon Tyne is a referral centre for pancreatic diseases in the north-east of England and the hepatopancreaticobiliary unit performs on average 120 pancreatic resections per year for malignancy. All patients who underwent a double bypass procedure between January 2006 and July 2011 at the hepatopancreatobiliary unit were identified from a prospectively held database.\nOnly patients with pre-operatively resectable pancreatic malignancy or those with borderline resectability were included in the study. Patients without histological evidence of pancreatic malignancy were excluded. Endoscopic ultrasonography was performed only when tissue diagnosis was not available or when better assessment of vascular invasion was needed.\nContraindications for resectability were: superior mesenteric artery, hepatic artery and coeliac artery involvement; superior mesenteric vein involvement below the first mesenteric branches; liver metastases; peritoneal metastases; and inferior vena cava involvement.\nIn all cases the double bypass surgery consisted of double jejunal loop reconstruction with an end-to-side hepaticojejunostomy on a Roux-en-Y with a loop gastroenterostomy.\nPatients without up-to-date (within four weeks of surgery) CT were excluded from the study as this clearly relates to outcome of patients found to be unresectable at surgery. None of these patients received pre-operative chemotherapy or radiotherapy. 
Patients with neuroendocrine tumours were also excluded from the study as they have a less aggressive disease.\nAll patients were assessed routinely for pancreaticoduodenectomy. The American Society of Anesthesiologists (ASA) physical status classification system was adopted (grade 1/2 = normal healthy or mild systemic disease; grade 3 = severe systemic disease; grade 4/5 = severe systemic disease that is a constant threat to life or moribund). The Portsmouth Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity (P-POSSUM) physiology subscore (age, cardiac and respiratory status, electrocardiography report, systolic blood pressure, pulse rate, haemoglobin, white cell count, urea, sodium, potassium, Glasgow coma scale) was also calculated as it has been validated previously as a very good predictor of morbidity in pancreatic surgery.11,12\nPatients were also selected for CPET by virtue of a low subjective functional capacity. This was defined by a metabolic equivalent score of ≤7, according to a simple activity-based clinical history. In our institution, we have demonstrated previously (unpublished data) that patients with a simple metabolic equivalent score of ≤7 do not predictably have major complications postoperatively. CPET is the analysis of both cardiac and pulmonary data during exercise. It measures both O2 consumption and CO2 production, and can identify the point at which CO2 levels start to increase at the airway. This correlates to the lactate, or AT where energy systems begin to change over.", "Postoperative complications were recorded according to the postoperative morbidity survey.13 Secondary outcomes included intensive care unit stay, hospital stay, in-hospital mortality rates, delay or cancellation of palliative chemotherapy due to complications and overall survival. Patients were reviewed routinely in outpatient clinics every three months.\nAll data (including demographic details, medical history, intra-operative details, morbidity and mortality, palliative chemotherapy and overall survival) were collected from a prospectively held database. Patients with in-hospital mortality were excluded from the long-term survival analysis.", "T-test (continuous, normal distribution), Mann–Whitney (continuous, non-normal distribution) and chi-square (categorical) analyses were used to compare demographic and clinical variables between groups. A two-tailed p-value of <0.05 was considered significant. The receiver operating characteristic (ROC) curve was used to determine optimal values (upper left corner), area under the curve (AUC), sensitivity and specificity. Survival was calculated by the Kaplan–Meier method and Cox proportional hazards regression. All analyses were performed using SPSS® 19.0 (SPSS, Chicago, IL, US). A p-value of <0.05 was considered significant.", "Out of 576 patients who had an operation for pancreatic malignancy between January 2006 and July 2011, 57 patients (10%) underwent palliative double bypass surgery. All patients had a pre-operative biliary stent. (Usually, a plastic stent was used; four patients had a short metal stent.) No patients had pre-operative gastric outlet obstruction. Four patients with a diagnosis of neuroendocrine malignancy and three patients who were not considered for a pancreaticoduodenectomy at the time of the bypass were excluded. Three patients who did not have up-to-date CT underwent a palliative gastroenterostomy.\nFifty patients were considered in the study. 
Results
Of the 576 patients who had an operation for pancreatic malignancy between January 2006 and July 2011, 57 (10%) underwent palliative double bypass surgery. All patients had a pre-operative biliary stent (usually a plastic stent; four patients had a short metal stent). No patients had pre-operative gastric outlet obstruction. Four patients with a diagnosis of neuroendocrine malignancy and three patients who were not considered for a pancreaticoduodenectomy at the time of the bypass were excluded. Three patients who did not have up-to-date CT underwent a palliative gastroenterostomy. Fifty patients were therefore considered in the study.
The population data are shown in Table 1. The median age was 64 years (range: 39–79 years); 33 male and 17 female patients were evaluated. The most common indication for surgery was pancreatic ductal adenocarcinoma (n=35, 70%). Postoperative complications occurred in 50% of patients (n=25). In-hospital mortality was 4% (n=2). The median postoperative hospital stay was 12.6 days (range: 5–39 days). No patients were lost to follow-up. At the time of analysis, 16 patients (32%) were still alive. The median follow-up duration was 10 months (range: 4–33 months). Reasons for inoperability are also shown in Table 1.

Table 1 Population baseline characteristics
Median age in years (range): 64 (39–79)
Sex
  Male: 33 (66%)
  Female: 17 (34%)
Median body mass index in kg/m2 (range): 25.5 (18–42)
Indication for surgery
  Ductal carcinoma: 35 (70%)
  Ampullary carcinoma: 4 (8%)
  Duodenal carcinoma: 4 (8%)
  Cholangiocarcinoma: 1 (2%)
  Undifferentiated adenocarcinoma: 6 (12%)
Reason for palliative surgery
  SMA, CA, HA or SMV involvement: 28 (56%)
  Liver metastases: 15 (30%)
  Peritoneal metastases: 4 (8%)
  More than one of the above reasons: 3 (6%)
Postoperative complications: 25 (50%)
More than one complication: 9 (18%)
Type of complication (POMS)*
  Cardiovascular: 2 (4%)
  Pulmonary: 6 (12%)
  Renal: 3 (6%)
  Gastrointestinal (including GOO): 5 (10%)
  Anastomotic leak (biliary, gastric or enteric): 6 (12%)
  Others (wound infection, uncontrolled diabetes etc): 12 (24%)
Median length of hospital stay in days (range): 12.6 (5–39)
Relaparotomy: 3 (6%)
In-hospital mortality: 2 (4%)
SMA = superior mesenteric artery; CA = coeliac artery; HA = hepatic artery; SMV = superior mesenteric vein; GOO = gastric outlet obstruction
*POMS (postoperative morbidity survey): anastomotic leak is shown as a separate complication

Primary and secondary outcomes are shown in Table 2. Twenty of the fifty patients had pre-operative CPET.
The P-POSSUM physiology subscore and AT were significantly associated with postoperative complications (p=0.005 and p=0.016 respectively).

Table 2 Outcome variables comparing patients with and without complications (no complications, n=25 vs ≥1 complication, n=25)
Median age in years (range): 64.6 (45–79) vs 64.6 (39–76); p=1.0
Sex (male / female): 15 / 10 vs 18 / 7; p=0.551
Indication for surgery: p=0.731
  Ductal carcinoma: 18 vs 17
  Ampullary carcinoma: 1 vs 3
  Duodenal carcinoma: 2 vs 2
  Cholangiocarcinoma: 1 vs 0
  Undifferentiated adenocarcinoma: 3 vs 3
Reason for palliative surgery: p=0.681
  SMA, CA, HA or SMV involvement: 16 vs 12
  Liver metastases: 6 vs 9
  Peritoneal metastases: 2 vs 2
  More than one of the above reasons: 1 vs 2
Median body mass index in kg/m2 (range): 25.5 (21–42) vs 25.4 (18–39); p=0.954
Median blood loss at surgery in ml (range): 733 (100–2,310) vs 837 (250–2,500); p=0.491
Operating time in hours (range): 4.0 (1.5–6) vs 4.2 (2–7); p=0.553
Biliary stent at surgery: 19 vs 19; p=1.0
Bilirubin level at surgery in μmol/l (range): 36.0 (3–100) vs 63.6 (6–208); p=0.064
Diabetes: 7 vs 7; p=1.0
Cardiac history: 6 vs 6; p=1.0
ASA grade: p=0.053
  Grade 2: 20 vs 12
  Grade 3: 5 vs 12
  Grade 4: 0 vs 1
Median P-POSSUM physiology score (range): 16.5 (12–20) vs 18.7 (15–26); p=0.005
Median anaerobic threshold in ml/kg/min (range)*: 14.1 (10.3–16.9) vs 11.3 (6.2–15.4); p=0.016
Chemotherapy following surgery: 16 vs 12; p=0.693
Chemotherapy not given owing to prolonged hospital stay: 0 vs 3; p=0.235
SMA = superior mesenteric artery; CA = coeliac artery; HA = hepatic artery; SMV = superior mesenteric vein; ASA = American Society of Anesthesiologists; P-POSSUM = Portsmouth Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity
*20 patients only

The median P-POSSUM physiology subscore in the complications group was 18.7. The ROC curve showed an optimal P-POSSUM physiology subscore cut-off of 16.5 with a fair degree of accuracy (AUC 70%, p=0.013), and a sensitivity and specificity of 72% and 56% respectively. A ROC curve was not constructed for AT owing to the small number of patients.
The median overall survival for all patients was 14.6 months. Patients with postoperative complications had a significantly shorter median survival than those without complications (9 vs 18 months, p=0.003) (Fig 1).
Figure 1 Survival for patients with and without postoperative complications
On univariate analysis, age at surgery, sex, tumour histology and reason for inoperability were not significantly associated with long-term survival. On Cox regression analysis, neither P-POSSUM nor AT was associated with overall long-term survival (p=0.274 and p=0.498 respectively). On multivariate analysis, postoperative complications were independently associated with poorer long-term survival (p=0.003, odds ratio [OR]: 3.261, 95% confidence interval [CI]: 1.492–7.129). Palliative chemotherapy was also significantly associated with survival (p=0.028, OR: 0.446, 95% CI: 0.215–0.924). No multicollinearity was evident between the two variables.
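For readers reproducing this kind of analysis outside SPSS, a minimal equivalent of the Kaplan–Meier comparison and the Cox model might look as follows, assuming the lifelines library. The data frame is fabricated for illustration and does not reproduce the study's patient-level data.

```python
# Sketch: Kaplan-Meier survival by complication status, a log-rank test,
# and a Cox proportional hazards model with complication status and
# palliative chemotherapy as covariates. Data are invented.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":       [5, 8, 9, 11, 14, 16, 18, 20, 24, 30],
    "died":         [1, 1, 1, 1,  0,  1,  1,  0,  1,  0],  # 0 = censored
    "complication": [1, 1, 0, 1,  1,  0,  1,  0,  0,  0],
    "chemo":        [0, 0, 1, 0,  1,  1,  1,  1,  0,  1],
})

km = KaplanMeierFitter()
for grp, sub in df.groupby("complication"):
    km.fit(sub["months"], sub["died"], label=f"complication={grp}")
    print(grp, km.median_survival_time_)

with_c, without_c = df[df.complication == 1], df[df.complication == 0]
res = logrank_test(with_c["months"], without_c["months"],
                   with_c["died"], without_c["died"])
print("log-rank p =", res.p_value)

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
print(cph.summary[["exp(coef)", "p"]])  # remaining columns used as covariates
```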
Discussion
There is currently no strong evidence on how to manage patients with inoperable disease at laparotomy. Most of the literature has focused on comparing surgical bypass with biliary stents in biliary obstruction. Three randomised studies that compared surgical bypass with plastic biliary stenting found no difference in the rates of technical or therapeutic success.14–16 However, although the relative risk of complications was lower in the stenting group, there was a highly significant reduction in recurrent biliary obstruction at four months in the surgical group.
Laparoscopic biliary bypass has also been developed as a minimally invasive approach17 but there is no randomised evidence to show superiority when comparing open and laparoscopic biliary bypass surgery.3 With regard to gastric outlet obstruction, which occurs in 10–20% of pancreatic cancer patients,18 laparoscopic gastroenterostomy has been introduced as a minimally invasive approach to surgical bypass and there is some evidence of a quicker recovery rate,19 although duodenal stents may be preferable in patients with a short life expectancy.18
One of the questions we tried to answer was whether it is possible to identify those patients with a high risk of complications and perhaps to consider avoiding a surgical bypass at the time of laparotomy. We demonstrated that the P-POSSUM physiology subscore and AT (albeit with a small sample size) are significantly associated with postoperative complications but we failed to validate these tests for risk prediction.
In fact, one of the problems we encountered was that all our patients were candidates for elective pancreaticoduodenectomy and were therefore relatively fit. The median P-POSSUM physiology subscore in the complications group was low, making it difficult to use this test to predict complications, as evidenced by the ROC curve. The same applied to AT (11.3ml/kg/min in the complications group), which was higher than in our previous analysis, where an AT of <10.1ml/kg/min was used to predict complications.20 We believe larger studies are needed to test P-POSSUM and AT as predictors of complications.
There is very little literature looking into risk prediction in these patients. In 2009 de Castro et al published a study showing that POSSUM overpredicts morbidity and predicts overall long-term survival in patients with unresectable pancreatic cancer.21 In that study, surprisingly, 32% of the patients were found unresectable at laparotomy, possibly because of historical data, in contrast with the literature reporting an unresectability rate at laparotomy of 4–13%.
POSSUM includes the same variables as P-POSSUM but a different formula is used to predict the risk of death.11 It comprises 12 physiological factors (age, cardiac status, respiratory status, electrocardiography report, systolic blood pressure, pulse rate, Glasgow coma scale, haemoglobin, white cell count, urea, sodium, potassium) and 6 operative parameters (operative complexity, multiple procedures, blood loss, peritoneal contamination, extent of malignant spread, elective or emergency surgery). These operative parameters are clearly very difficult to assess before surgery, so the role of POSSUM in this study is very limited.
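For reference, P-POSSUM itself is a logistic model, so the predicted mortality can be written as a one-line function. The coefficients below are those usually quoted for the published P-POSSUM equation, ln(R/(1−R)) = −9.065 + 0.1692 × physiology score + 0.1550 × operative severity score; they are reproduced from memory here and should be verified against the original publication before any real use.

```python
# P-POSSUM predicted mortality from the physiology and operative severity
# scores. Coefficients quoted from the published model -- verify before use.
import math

def p_possum_mortality(physiology_score: int, operative_score: int) -> float:
    logit = -9.065 + 0.1692 * physiology_score + 0.1550 * operative_score
    return 1 / (1 + math.exp(-logit))

# Hypothetical example: a physiology score of 19 (close to the complications
# group's median subscore of 18.7) with an assumed operative score of 15.
print(f"predicted mortality = {p_possum_mortality(19, 15):.1%}")
```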
An interesting finding of our study was that complications have a significant impact on long-term survival. In 2012 Kamphues et al published the long-term outcomes of 428 patients who underwent resection of pancreatic head cancer.22 The median survival was 15.5 months with a postoperative complication rate of 32.7%. The occurrence of severe postoperative complications was associated with significantly shortened survival (12.4 months, vs 16.5 months in patients without complications; p=0.002) and was identified as an independent prognostic factor (p=0.002). Similar results have been shown in almost every type of gastrointestinal malignancy.23
As other studies have indicated, long-term survival after surgery may be affected by the degree to which the systemic immune response to infection or surgical trauma is shifted toward a Th2-type lymphocyte pattern.24–26 IL-10, one of the Th2 cytokines, downregulates tumour-specific immune responses by directly suppressing IFN-γ and IL-12 production, thereby reducing major histocompatibility complex expression on the surface of tumour cells and inhibiting tumour antigen presentation by antigen-presenting cells.27–29 In line with these studies, we have demonstrated that severe postoperative complications have a strong impact on the long-term survival of patients with inoperable pancreatic head cancer and, as a result, these patients should be treated only in specialised centres.

Conclusions
Postoperative complications are independently associated with poorer long-term outcome in patients with pancreatic malignancy found unresectable at laparotomy. This type of surgery should therefore only be performed in specialised centres. A high P-POSSUM physiology subscore and a low cardiopulmonary reserve are pre-operative parameters associated with postoperative complications, but further studies are needed to clarify their value as risk predictors. This would allow better selection of the patients who really benefit from palliative surgery at the time of laparotomy.
[ "methods", null, null, null, null, null, null ]
[ "Pancreatic cancer", "Palliative surgery", "Postoperative complications" ]
Abstract
Background: Between 4% and 13% of patients with operable pancreatic malignancy are found unresectable at the time of surgery. Double bypass is a good option for fit patients but it is associated with a high risk of postoperative complications. The aim of this study was to identify pre-operatively which patients undergoing double bypass are at high risk of complications and to assess their long-term outcome.
Methods: Of the 576 patients undergoing pancreatic resections between 2006 and 2011, 50 patients who underwent a laparotomy for a planned pancreaticoduodenectomy had a double bypass procedure for inoperable disease. Demographic data, risk factors for postoperative complications and pre-operative anaesthetic assessment data including the Portsmouth Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity (P-POSSUM) and cardiopulmonary exercise testing (CPET) were collected.
Results: Fifty patients (33 men and 17 women) were included in the study. The median patient age was 64 years (range: 39–79 years). The complication rate was 50% and the in-hospital mortality rate was 4%. The P-POSSUM physiology subscore and low anaerobic threshold at CPET were significantly associated with postoperative complications (p=0.005 and p=0.016 respectively) but they were unable to predict them. Overall long-term survival was significantly shorter in patients with postoperative complications (9 vs 18 months). Postoperative complications were independently associated with poorer long-term survival (p=0.003, odds ratio: 3.261).
Conclusions: P-POSSUM and CPET are associated with postoperative complications but the possibility of using them for risk prediction requires further research. However, postoperative complications following double bypass have a significant impact on long-term survival and this type of surgery should therefore only be performed in specialised centres.
MeSH descriptors: Adult; Anastomosis, Roux-en-Y; Female; Gastroenterostomy; Humans; Length of Stay; Male; Middle Aged; Palliative Care; Pancreatic Neoplasms; Pancreaticoduodenectomy; Postoperative Complications; Preoperative Care; Prospective Studies; Risk Assessment; Stents; Survival Analysis; Treatment Outcome; Young Adult
Assessing Prevalence and Unique Risk Factors of Suicidal Ideation among First-Year University Students in China Using a Unique Multidimensional University Personality Inventory.
PMID: 36078501
BACKGROUND: University students with suicidal ideation are at high risk of suicide, which constitutes a significant social and public health problem in China. However, little is known about the prevalence and associated risk factors of suicidal ideation among first-year university students in China, especially during the COVID-19 pandemic.
METHODS: Using a cluster sampling technique, a university-wide survey of 686 first-year university students from Hangzhou was conducted in March 2020 using the University Personality Inventory (UPI). The UPI includes an assessment of suicidal ideation and possible risk factors. Suicidal ideation prevalence was calculated for males and females. Univariate analyses and multivariable logistic regression models were conducted, adjusting for age and sex. Analyses were carried out using SPSS version 22.0.
RESULTS: The prevalence of 12-month suicidal ideation among first-year university students during March 2020 was 5.2%, and there was no significant difference between males and females (4.8% vs. 6.0%, χ2 = 0.28, p = 0.597). Multivariable logistic regression analysis identified social avoidance (B = 0.78, OR = 2.17, p < 0.001) and emotional vulnerability (B = 0.71, OR = 2.02, p < 0.001) as positively associated with suicidal ideation.
CONCLUSIONS: Social avoidance and emotional vulnerability are unique factors associated with greater suicidal ideation among first-year university students during the COVID-19 pandemic. The UPI serves as a validated tool to screen suicide risk among Chinese university students. Encouraging social engagement and improving emotional regulation skills are promising targets to reduce suicidal ideation among first-year university students.
[ "COVID-19", "China", "Female", "Humans", "Inventors", "Male", "Pandemics", "Personality", "Prevalence", "Risk Factors", "Students", "Suicidal Ideation", "Universities" ]
PMCID: PMC9517881
1. Introduction
Suicide is a serious public health problem and is one of the leading causes of death among adolescents and young adults, especially among university students. A meta-analysis reported that the point prevalence of suicidal ideation among Chinese university students ranged between 1.2% and 26.0%, and that the prevalence of suicidal ideation among university students in southern China was higher than that in northern China [1]. Understanding the prevalence and risk factors of suicidal ideation is important because it is closely related to suicide attempts and deaths [2,3,4].
Suicidal ideation among first-year university students is of particular importance because of the dynamic developmental, social, and behavioral transitions during their first year at university [5,6,7,8]. Socially, there are changes in residence, family relationships, and peer contexts [5,6,9,10,11]. For most Chinese first-year university students, it is the first time they have started to live in a province other than their place of birth [7,10,12]. Hence, the transition to university involves significant life changes (e.g., increased independence, social demands) and academic challenges (e.g., independent learning) alongside reduced parental support and oversight [13,14]. Developmentally, adolescents and young adults have heterogeneous trajectories of suicidal thoughts and behaviors [15]. Greater impulsivity and self-reported aggression, as well as elevated trait anxiety, are risk factors for suicidal ideation among young adults [15]. Previous studies found that among 16-year-old university students, those with greater tolerance toward suicide, higher family coping, and lower self-esteem were more likely to report suicidal ideation [16]. College entrance may be a strategically well-placed "point of capture" for detecting late adolescents with suicidal thoughts and behaviors [5].
However, a clear epidemiological picture of suicidal thoughts and behaviors among incoming Chinese university students is lacking [5]. Most previous studies relied on patient and hospital records, which may understate the issue of student suicidal ideation [14]. The first twelve months following the onset of suicidal ideation appear crucial, with data from seventeen countries demonstrating that 60% of attempts are made within this period [17]. Nevertheless, previous studies are limited by their focus on psychological risks [14]. Non-psychological risk factors may be particularly important for Chinese first-year university students, as they may not talk about their problems due to mental health stigma [18,19].
There is an urgent need for a customized assessment tool for Chinese university students. The University Personality Inventory (UPI) was developed in 1966 in Japan and has been adopted as a rapid and effective mental health screening tool among Chinese university students, applied in almost every university in Zhejiang province [20,21]. The UPI has been shown to be multidimensional [20,21,22]. Originally, the UPI concentrated only on psychological symptoms, such as depression, anxiety, neuroticism, persecutory beliefs, and obsessive compulsive symptoms [20]. However, these assessments do not help us understand the social and developmental stressors that are specifically relevant to university students [23].
In 2015, a new five-factor structure of the UPI, comprising the domains of physical symptoms, cognitive symptoms, emotional vulnerability, social avoidance, and interpersonal sensitivity, was developed for Chinese university students [21], offering opportunities to understand how these five domains are associated with suicidal ideation.
During COVID-19, there have been concerns, and literature suggesting, an increase in social isolation and mental distress, which are potential risk factors for suicidal ideation [24,25]. Special attention should be placed on disparities in suicide prevention across sociodemographic subgroups during the COVID-19 pandemic [26]. For first-year university students, most courses transitioned to an online format, leaving them unable to connect with their classmates and teachers [27]. Students who felt more connected with other students and teachers were more likely to feel calm and trusting [28].
Previous studies have reported widely varying rates of suicidal ideation among Chinese university students, with prevalence estimates ranging from 1.24% to 26.00% [1,29,30,31,32]. However, the national prevalence of suicidal ideation among first-year university students in China remains unknown. Despite recent attention to the alarming rates of suicidal behaviors among university students [5,33,34,35], less evidence is available on the potential risks associated with suicidal ideation during the COVID-19 pandemic. Comparatively less research has addressed suicide prevention and early intervention for university students than for primary and secondary school students [36]. This is troubling given that the college years represent a critical and unique developmental stage characterized by dynamic social role transitions, new living situations, and changing relationships [37,38]. Additionally, it is vital to understand and design intervention programs specific to first-year university students that target the developmental factors of the transition from adolescence to emerging adulthood [26,39].
This study aimed to investigate the prevalence of suicidal ideation and the potential relationship between the five factor domains of the revised UPI and suicidal ideation among Chinese first-year university students during the pandemic period, and to propose targeted measures.
3. Results
3.1. Sample Characteristics
Table 2 presents the sample characteristics. There were more male university students (n = 438, 63.85%) than female (n = 248, 36.15%). On average, students were 17.79 years old. Most students majored in network engineering, business administration, or preschool education.

3.2. Prevalence of Suicidal Ideation
The 12-month prevalence of suicidal ideation in the total sample was 5.2% (4.8% in men, 6.0% in women). There was no significant sex difference in suicidal ideation prevalence (χ2 = 0.28, p = 0.597).

3.3. Associations between UPI Factors and Suicidal Ideation
For all UPI factors, students with suicidal ideation had significantly higher scores than those without suicidal ideation (see Table 3). Multivariable regression (Table 4) showed significant associations of social avoidance and emotional vulnerability with suicidal ideation, but not of the remaining UPI factors. Specifically, students with social avoidance were 2.17 times more likely to think about suicide than those without. Similarly, students with emotional vulnerability had 2.03 times greater odds of suicidal ideation than those without.
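The two headline statistics of this section can be re-derived in a few lines, assuming scipy. The contingency counts below are reconstructed from the reported percentages (4.8% of 438 men and 6.0% of 248 women), so they are an approximation of, not an extract from, the study data.

```python
# Re-deriving the sex comparison (chi-square with Yates continuity
# correction, as is conventional for 2x2 tables) and converting logistic
# regression coefficients (B) into odds ratios via exp(B).
import math
from scipy.stats import chi2_contingency

table = [[21, 417],   # men: ideation yes / no (about 4.8% of 438)
         [15, 233]]   # women: ideation yes / no (about 6.0% of 248)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # approx. 0.28 and 0.597

for name, b in [("social avoidance", 0.78), ("emotional vulnerability", 0.71)]:
    print(f"{name}: B = {b}, OR = exp(B) = {math.exp(b):.2f}")
```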
5. Conclusions
In conclusion, this study revealed that, for first-year university students amid the COVID-19 pandemic, the factors of physical symptoms, cognitive symptoms, and interpersonal sensitivity may act on suicidal ideation through the factors of social avoidance and emotional vulnerability. If any item of the social avoidance factor or the emotional vulnerability factor is endorsed, the likelihood of suicidal ideation more than doubles for university students. With the five-factor UPI structure, more associated risk factors can be identified. These findings support timely intervention measures to reduce suicidal ideation and the risk of suicide in this college student population.
[ "2. Materials and Methods", "2.1. Data Collection and Sample", "2.2. Measures", "2.3. Data Analyses", "3.1. Sample Characteristics", "3.2. Prevalence of Suicidal Ideation", "3.3. Associations between UPI Factors and Suicidal Ideation" ]
[ "2.1. Data Collection and Sample The study was held in a higher vocational college (similar to a community college) located in Xiaoshan district, Hangzhou, Zhejiang. Hangzhou is the capital and most populous city of Zhejiang Province, People’s Republic of China [40,41,42]. An online questionnaire was distributed to all of the first-year students in this college. This college is somehow like a community college, and is partially guided by Zhejiang Open University, which governs nearly eighty similar colleges all around the province. There are two types of students. One is a full-time student (completes a 3-year course of study to get an associate degree, after which the qualified student could either take an entry exam for further study to get a bachelor’s degree or enter employment); the other is a part-time student who usually has employment and already has an associate degree, and chooses to take courses at the weekend for further study for a bachelor degree. Using cluster sampling, we selected one higher vocational college (Xiaoshan college) to invite 686 first-year students to participate in this investigation (effective response rate = 100.00%). Our study sample is full-time students who just graduated from high school and were of similar age. Our sample is unique, as they were the first batch of first-year students entering college during the COVID-19 pandemic. Using a cluster sampling technique, a total of 686 (248 women, 438 men; ages 17 to 19) of these first-year university students were selected. The study was approved by the Human Research Ethics Committee of the vocational college. Informed written consent was obtained from participants before the study commenced.\nDuring the first month of students’ school entry (March 2020), participants completed the UPI with demographic information (age, sex). Students with a UPI total sum score above 20 or those who responded “yes” to item 25 (“Have an idea of wanting to die”) were identified as a risk group and were referred to a mental health professional [20,21].\nThe study was held in a higher vocational college (similar to a community college) located in Xiaoshan district, Hangzhou, Zhejiang. Hangzhou is the capital and most populous city of Zhejiang Province, People’s Republic of China [40,41,42]. An online questionnaire was distributed to all of the first-year students in this college. This college is somehow like a community college, and is partially guided by Zhejiang Open University, which governs nearly eighty similar colleges all around the province. There are two types of students. One is a full-time student (completes a 3-year course of study to get an associate degree, after which the qualified student could either take an entry exam for further study to get a bachelor’s degree or enter employment); the other is a part-time student who usually has employment and already has an associate degree, and chooses to take courses at the weekend for further study for a bachelor degree. Using cluster sampling, we selected one higher vocational college (Xiaoshan college) to invite 686 first-year students to participate in this investigation (effective response rate = 100.00%). Our study sample is full-time students who just graduated from high school and were of similar age. Our sample is unique, as they were the first batch of first-year students entering college during the COVID-19 pandemic. Using a cluster sampling technique, a total of 686 (248 women, 438 men; ages 17 to 19) of these first-year university students were selected. 
2.2. Measures
The UPI is a 60-item self-report measure assessing whether an individual experienced one or more mental health symptoms during the past year [20]. For each item, a score of 1 is given for "Yes" and 0 for "No". The 56 substantive items describe whether an individual experienced one or more mental health symptoms; the remaining four items (5, 20, 35 and 50) form the "lie" scales. (The full version of the UPI is available in the Supplementary file.)
Suicidal ideation was measured by the response to one item, "Have an idea of wanting to die?" Participants responding "yes" were considered to have suicidal ideation.
The revised UPI includes five main factors tailored to assess physical, social, cognitive, interpersonal, and emotional health for Chinese university students (Table 1): (1) physical symptoms (items 1, 2, 3, 17, 18, 46, 48 and 49); (2) social avoidance (items 10, 11, 41 and 43); (3) cognitive symptoms (items 29, 30, 38 and 39); (4) interpersonal sensitivity (items 57 and 58); and (5) emotional vulnerability (items 6, 15, 21, 24, 28 and 60). Factor scores were the summed scores of the respective items, based on the previously validated scoring guide [21]. For details, see Table 1.

2.3. Data Analyses
Descriptive statistics (frequencies, percentages, means, and standard deviations) were used to report socio-demographic data and the prevalence of suicidal ideation. Univariate analyses (chi-square test, rank-sum test, or binary logistic regression models) were used to assess differences in the five factors by suicidal ideation status. Multivariate logistic regression models were then fitted to explore associations between the five UPI factors and suicidal ideation, controlling for the covariates (sex, age, and major, i.e., academic subject). A significance level of p < 0.05 was applied for all significance tests. Associations are reported as odds ratios (OR) and 95% confidence intervals (CI). All analyses were performed using SPSS for Windows V22.0 (IBM Corp., Armonk, NY, USA). There were no missing values (response rate: 100%).
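As a sketch of how the factor scoring in 2.2 feeds the multivariable model in 2.3, the following assumes pandas and statsmodels. The item responses are simulated, so the fitted coefficients are meaningless; only the mechanics (summing factor items, taking item 25 as the outcome, exponentiating coefficients into odds ratios) mirror the description above.

```python
# UPI factor scoring and a multivariable logistic regression, on simulated
# 0/1 item responses. Factor-item mapping follows the five-factor structure
# described in 2.2; all data here are random placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

FACTORS = {
    "physical":      [1, 2, 3, 17, 18, 46, 48, 49],
    "social_avoid":  [10, 11, 41, 43],
    "cognitive":     [29, 30, 38, 39],
    "interpersonal": [57, 58],
    "emotional":     [6, 15, 21, 24, 28, 60],
}

rng = np.random.default_rng(0)
n = 686
items = pd.DataFrame(rng.integers(0, 2, size=(n, 60)),
                     columns=[f"item{i}" for i in range(1, 61)])

df = pd.DataFrame({name: items[[f"item{i}" for i in idx]].sum(axis=1)
                   for name, idx in FACTORS.items()})
df["ideation"] = items["item25"]  # item 25: "Have an idea of wanting to die"
df["age"] = rng.integers(17, 20, size=n)
df["male"] = rng.integers(0, 2, size=n)

model = smf.logit("ideation ~ physical + social_avoid + cognitive"
                  " + interpersonal + emotional + age + male",
                  data=df).fit(disp=0)
print(np.exp(model.params))  # coefficients reported as odds ratios
```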
2.3. Data Analyses
Descriptive statistics (frequencies, percentages, means, and standard deviations) were used to report socio-demographic data and the prevalence of suicidal ideation. Univariate analyses (chi-square test, rank-sum test, or binary logistic regression models) were used to assess differences in the five factors by suicidal ideation status. Multivariate logistic regression models were then fitted to explore associations between the five UPI factors and suicidal ideation, controlling for covariates (sex, age, and major, i.e., academic subject). A significance level of p < 0.05 was applied for all significance tests. Associations were reported as odds ratios (ORs) with 95% confidence intervals (CIs). All analyses were performed using SPSS for Windows V22.0 (IBM Corp., Armonk, NY, USA). There were no missing values (response rate: 100%).

The study was conducted in a higher vocational college (similar to a community college) located in Xiaoshan district, Hangzhou, Zhejiang. Hangzhou is the capital and most populous city of Zhejiang Province, People’s Republic of China [40,41,42]. An online questionnaire was distributed to all first-year students in this college. The college is partially guided by Zhejiang Open University, which governs nearly eighty similar colleges across the province. It enrolls two types of students: full-time students, who complete a 3-year course of study for an associate degree and may then either take an entrance exam for further study toward a bachelor’s degree or enter employment; and part-time students, who are usually employed, already hold an associate degree, and take weekend courses toward a bachelor’s degree. Using cluster sampling, we selected this college (Xiaoshan college) and invited all 686 first-year students (248 women, 438 men; ages 17 to 19) to participate (effective response rate = 100.00%). The study sample comprises full-time students who had just graduated from high school and were of similar age, and it is unique in that these were the first cohort of first-year students to enter college during the COVID-19 pandemic.
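The study fitted these models in SPSS. As a rough open-source equivalent of the multivariable step described in Section 2.3 above, the sketch below uses statsmodels to regress suicidal ideation on the five factor scores plus sex, age, and major, and exponentiates the coefficients into ORs with 95% CIs. It assumes a DataFrame like the output of the score_upi() sketch above joined with sex/age/major columns; none of these names come from the study’s own code.

```python
import numpy as np
import statsmodels.formula.api as smf

def fit_ideation_model(df):
    """Multivariable logistic regression: five UPI factors plus covariates."""
    formula = (
        "ideation ~ physical_symptoms + social_avoidance + cognitive_symptoms"
        " + interpersonal_sensitivity + emotional_vulnerability"
        " + C(sex) + age + C(major)"
    )
    data = df.assign(ideation=df["suicidal_ideation"].astype(int))
    fit = smf.logit(formula, data=data).fit(disp=False)
    # Exponentiate log-odds coefficients and their confidence bounds into ORs.
    table = np.exp(fit.conf_int())
    table.columns = ["ci_lower_95", "ci_upper_95"]
    table.insert(0, "odds_ratio", np.exp(fit.params))
    return table
```

With roughly 36 ideation cases (5.2% of 686), a model with many categorical levels for major could be unstable; using summed factor scores rather than all 56 items, as the study does, keeps the parameter count modest.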
3. Results

3.1. Sample Characteristics
Table 2 presents the sample characteristics. There were more male university students (n = 438, 63.85%) than female (n = 248, 36.15%). On average, students were 17.79 years old. Most students majored in network engineering, business administration, or preschool education.

3.2. Prevalence of Suicidal Ideation
The 12-month prevalence of suicidal ideation in the total sample was 5.2% (4.8% in men, 6.0% in women). There was no significant sex difference in suicidal ideation prevalence (χ² = 0.28, p = 0.597).

3.3. Associations between UPI Factors and Suicidal Ideation
For all UPI factors, students with suicidal ideation had significantly higher scores than those without suicidal ideation (see Table 3).

Results of the multivariable regression (Table 4) showed significant associations of social avoidance and emotional vulnerability with suicidal ideation, but not of the remaining UPI factors. Specifically, students with social avoidance were 2.17 times more likely to think about suicide than those without. Similarly, students with emotional vulnerability had 2.03 times greater odds of suicidal ideation than those without.
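The sex comparison above can be reproduced from the published percentages alone. In the sketch below, the counts are reconstructed (4.8% of 438 men ≈ 21 cases; 6.0% of 248 women ≈ 15 cases), not taken from the raw data; note that SciPy’s default Yates continuity correction for 2×2 tables is what yields the reported χ² of 0.28. The last lines translate the adjusted OR of 2.17 into an approximate absolute probability against the 5.2% baseline, a back-of-the-envelope reading only, since the OR is covariate-adjusted.

```python
from scipy.stats import chi2_contingency

# Reconstructed 2x2 table: [ideation, no ideation] for men and women.
table = [[21, 417],   # men:   21 of 438 (~4.8%)
         [15, 233]]   # women: 15 of 248 (~6.0%)
chi2, p, dof, expected = chi2_contingency(table)  # Yates correction by default
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")          # chi2 = 0.28, p = 0.598

# Translating OR = 2.17 (social avoidance) into an absolute probability,
# starting from the overall 12-month prevalence of 5.2%.
p0 = 0.052
odds0 = p0 / (1 - p0)            # baseline odds, about 0.055
odds1 = 2.17 * odds0             # odds under social avoidance
p1 = odds1 / (1 + odds1)         # about 0.106, roughly double the baseline risk
print(f"implied probability: {p1:.3f}")
```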
[ "1. Introduction", "2. Materials and Methods", "2.1. Data Collection and Sample", "2.2. Measures", "2.3. Data Analyses", "3. Results", "3.1. Sample Characteristics", "3.2. Prevalence of Suicidal Ideation", "3.3. Associations between UPI Factors and Suicidal Ideation", "4. Discussion", "5. Conclusions" ]
[ "Suicide is a serious public health problem and is one of the leading causes of death among adolescents and young adults, especially among university students. A meta-analysis reported that the point prevalence of suicidal ideation among Chinese university students ranged between 1.2% and 26.0%, and the prevalence of suicidal ideation among university students in southern China was higher than that in northern China [1]. Understanding the prevalence and risk factors of suicidal ideation is important because it is closely related to suicide attempts and deaths [2,3,4].\nSuicidal ideation among first-year university students is of particular importance because of the dynamic developmental, social, and behavioral transitions during their first year at university [5,6,7,8]. Socially, there are changes in residence, family relationships, and peer contexts [5,6,9,10,11]. For most Chinese first-year university students, it is the first time they have started to live in a province other than their place of birth [7,10,12]. Hence, the transition to university involves significant life changes (e.g., increased independence, social demands) and academic challenges (e.g., independent learning) alongside reduced parental support and oversight [13,14] Developmentally, adolescents and young adults have heterogeneous trajectories of suicidal thoughts and behaviors [15]. Greater impulsivity and self-reported aggression, as well as elevated trait anxiety, are the risk factors for suicidal ideation among young adults [15].\nPrevious studies found that among 16-year-old university students, those with greater tolerance toward suicide, higher family coping, and lower self-esteem were more likely to report suicidal ideation [16]. College entrance may be a strategically well-placed “point of capture” for detecting late adolescents with suicidal thoughts and behaviors [5]. However, a clear epidemiological picture of suicidal thoughts and behaviors among incoming Chinese university students is lacking [5].\nMost previous studies relied on patient and hospital records, which may understate the issue of student suicidal ideation [14]. The first twelve months following the onset of suicidal ideation appear crucial, with data from seventeen countries demonstrating that 60% of attempts are made within this period [17]. Nevertheless, previous studies are limited by the focus on psychological risks [14]. However, non-psychological risk factors may be particularly important for Chinese college first-year university students, as they may not talk about their problems due to mental health stigma [18,19]. There is an urgent need to find a customized assessment tool for Chinese university students.\nThe University Personality Inventory (UPI) was developed in 1966 in Japan and has been adopted as a rapid and effective mental health screening tool among Chinese university students, applied in almost every university in Zhejiang province [20,21]. UPI has been shown to be multidimensional [20,21,22]. Originally, the UPI only concentrated on psychological symptoms, such as depression, anxiety, neuroticism, persecutory beliefs, and obsessive compulsive symptoms [20]. However, these assessments do not help us understand the social and developmental stressors that are relevant to university students specifically [23]. 
In 2015, a new five-factor structure of the UPI, consisting of the domains of physical symptoms, cognitive symptoms, emotional vulnerability, social avoidance, and interpersonal sensitivity, was developed for Chinese university students [21], offering opportunities to understand how these five domains are associated with suicidal ideation.

During COVID-19, concerns and literature have suggested an increase in social isolation and mental distress, which are potential risk factors for suicidal ideation [24,25]. Special attention should be paid to disparities in suicide prevention across sociodemographic subgroups during the COVID-19 pandemic [26]. For first-year university students, most courses transitioned to an online format, leaving them unable to connect with their classmates and teachers [27]. Students who felt more connected with other students and teachers were more likely to feel calm and trusting [28]. Previous literature reported widely varying rates of suicidal ideation across Chinese university student populations, from 1.24% to 26.00% [1,29,30,31,32].

However, the national prevalence of suicidal ideation among first-year university students in China remains unknown. Despite recent attention to the alarming rates of suicidal behaviors among university students [5,33,34,35], little evidence is available on the potential risks associated with their suicidal ideation during the COVID-19 pandemic. Comparatively less research has addressed suicide prevention and early intervention for university students than for primary and secondary school students [36]. This is troubling because the college years represent a critical and unique developmental stage characterized by dynamic social role transitions, new living situations, and changing relationships [37,38]. Additionally, it is vital to design intervention programs specific to first-year university students that target the developmental factors of the transition from adolescence to emerging adulthood [26,39].

This study aimed to investigate the prevalence of suicidal ideation and the relationship between the five factor domains of the revised UPI and suicidal ideation among Chinese first-year university students during the pandemic period, and to propose targeted measures.
4. Discussion
In this study, suicidal ideation prevalence was 5.2% among the 686 first-year university students from a higher vocational college in Hangzhou, Eastern China. This rate is similar to that in an earlier study conducted during COVID-19 among 2700 second-year students at higher vocational colleges in China (5.4%) [32], but lower than the rate reported in the US during the COVID-19 quarantine period in early 2020 (20%) [43,44].

Suicidal ideation prevalence in China increased during the COVID-19 period compared with the pre-pandemic period (1.52% in Shandong), based on a prior study from April 2018 to October 2019 [30]. Suicidal ideation prevalence may vary across years and regions. One study found a prevalence of 1.05% among Chinese higher vocational college first-year students in 2008 in Jiangsu province, Eastern China [45]. Another found a prevalence of 2.63% among first-year students at a private higher vocational college in 2009 in Anhui province, Eastern China [46]. These prevalences are all lower than that found in this study.

Only social avoidance and emotional vulnerability remained significant in the multivariable logistic regression after adjusting for covariates, whereas physical symptoms, cognitive symptoms, and interpersonal sensitivity were associated with suicidal ideation only in the univariate analyses. Social avoidance is commonly defined as the desire to escape or actually avoid being with, talking to, or interacting with others for any reason, i.e., in this study, the unwillingness to connect with the outside world [47,48]. Previous literature has also described social avoidance in young patients as a clinically worrisome phenomenon characteristic of impending schizophrenia, and as a core phenomenon in avoidant personality disorder, schizoid personality disorder, and autism spectrum disorder [24,49]. A causal pathway has been demonstrated from social network strain (due to adverse childhood experiences) to suicidal ideation among middle-aged adults [24]. Social networks during adolescence influenced the odds of belonging to distinct suicidal trajectories [50]. Family cohesion protects youth from high-risk developmental courses of suicidal behaviors [50]. Social networks, especially the quality of interactions, may improve suicide risk screening among university students [50].

Social restrictions implemented as quarantine measures during the COVID-19 pandemic, and the resulting perceived social isolation [51], have contributed to significant psychological distress [52]. The COVID-19 pandemic affected many aspects of daily life and mental well-being, making it difficult to reduce its influence to a single factor such as social avoidance [53,54].
Social avoidance has been identified as a social risk in youth development [55]. Because social avoidance increases loneliness, connectedness needs to be built for students [56].

Emotional vulnerability is commonly defined as the willingness to acknowledge one’s emotions, especially painful ones such as shame, sadness, anxiety, and insecurity [57,58]. It has also been defined as “uncertainty, risk, and emotional exposure”, felt as anxiety about being rejected, shamed, or judged as inadequate, i.e., in this study, the manifestation of negative emotion. Previous research suggests that problems with emotion regulation may be implicated in suicidality [59,60]. Early problems in learning to read and spell are related to motivational–emotional vulnerability in learning situations in the school context [61]. Self-reported ability to manage self-relevant emotions was negatively associated with suicidal ideation among university students [62]. Additionally, endorsement of fewer adaptive and more maladaptive responses to negative emotions increased the odds of suicidal ideation, plans, and attempts in children [63,64]. Emotional vulnerability can be related to personality, adverse childhood experiences, and recent stress [24,65,66,67]. This study quantified the effect of risk elements within the emotional vulnerability factor on suicidal ideation in a higher vocational college.

Third, and most importantly, under the five-factor framework of the UPI, each item score within the five factors showed a significant relationship with the risk of suicidal ideation, whereas these relationships could not be detected when the items were considered individually. This may be the fundamental value of the five-factor framework. Thus, when a first-year university student endorses any of the following items: “Do not like meeting others”, “Feel that I am not myself”, “Lack faith in others”, “Unwilling to associate with others”, “Full of dissatisfaction and complaints”, “Over-uneven in emotion”, “Intolerance”, “Irritable”, “Lack of patience”, or “Sensitive emotions”, that student’s risk of suicidal ideation increases. These phenomena have not been fully described by other researchers. Given this wide and varied spectrum of related factors, a logical next step would be to develop and examine the effectiveness of targeted interventions “personalized” not only to suicidal ideation but also to its associated risk factors.

Our study is the first to demonstrate the utility of the revised UPI for understanding the risks of suicidal ideation among university students in China. In particular, through the five factors tailored to the living experiences of Chinese students, we identified two important factors, social avoidance and emotional vulnerability, as potential risk factors for suicidal ideation amid the COVID-19 pandemic. While previous studies mainly focused on psychological distress, social and emotional regulation may be unique protective factors for first-year university students [68]. In China, some Promoting Alternative Thinking Strategies (PATHS) Curriculum interventions help students improve their emotional understanding, emotion regulation, and prosocial behavior [69].
Additionally, child routines were shown to mediate the relationship between parenting and social–emotional development in China, which has important implications for research and practice aimed at enhancing social–emotional outcomes [70]. Our findings suggest that social avoidance and emotional vulnerability are important and distinct risk factors for suicidal thoughts among university students in China, with implications for research and practice. Previous studies show that interpersonal belongingness [71,72], anhedonia, and type D personality [73] are potentially strong predictors in medical students. School-based suicide prevention can target building belongingness, student–teacher connectedness, more engaging school climates, and emotional well-being surveillance to screen high-risk students.

This study has some limitations. It is cross-sectional and lacks comparison with pre-COVID time periods. It was also constrained by COVID-19 prevention and control restrictions and by regional differences in university entrance. We were not able to control for further covariates; for example, we have no data on students’ living arrangements because the survey did not collect this information. Finally, only one college was investigated and all respondents were first-year students, so the representativeness of the sample and the generalizability of the results may be limited.

5. Conclusions
In conclusion, the study revealed that for first-year university students amid the COVID-19 pandemic, the factors of physical symptoms, cognitive symptoms, and interpersonal sensitivity may act through the factors of social avoidance and emotional vulnerability. If any item of the social avoidance factor or the emotional vulnerability factor is endorsed, the odds of suicidal ideation more than double for university students. With the five-factor structure of the UPI, more associated risk factors can be identified. These findings help inform timely intervention measures to reduce suicidal ideation and the risk of suicide in this college student population.
[ "intro", null, null, null, null, "results", null, null, null, "discussion", "conclusions" ]
[ "suicidal ideation", "college student", "psychological influencing factors", "university personality inventory" ]
1. Introduction: Suicide is a serious public health problem and is one of the leading causes of death among adolescents and young adults, especially among university students. A meta-analysis reported that the point prevalence of suicidal ideation among Chinese university students ranged between 1.2% and 26.0%, and the prevalence of suicidal ideation among university students in southern China was higher than that in northern China [1]. Understanding the prevalence and risk factors of suicidal ideation is important because it is closely related to suicide attempts and deaths [2,3,4]. Suicidal ideation among first-year university students is of particular importance because of the dynamic developmental, social, and behavioral transitions during their first year at university [5,6,7,8]. Socially, there are changes in residence, family relationships, and peer contexts [5,6,9,10,11]. For most Chinese first-year university students, it is the first time they have started to live in a province other than their place of birth [7,10,12]. Hence, the transition to university involves significant life changes (e.g., increased independence, social demands) and academic challenges (e.g., independent learning) alongside reduced parental support and oversight [13,14] Developmentally, adolescents and young adults have heterogeneous trajectories of suicidal thoughts and behaviors [15]. Greater impulsivity and self-reported aggression, as well as elevated trait anxiety, are the risk factors for suicidal ideation among young adults [15]. Previous studies found that among 16-year-old university students, those with greater tolerance toward suicide, higher family coping, and lower self-esteem were more likely to report suicidal ideation [16]. College entrance may be a strategically well-placed “point of capture” for detecting late adolescents with suicidal thoughts and behaviors [5]. However, a clear epidemiological picture of suicidal thoughts and behaviors among incoming Chinese university students is lacking [5]. Most previous studies relied on patient and hospital records, which may understate the issue of student suicidal ideation [14]. The first twelve months following the onset of suicidal ideation appear crucial, with data from seventeen countries demonstrating that 60% of attempts are made within this period [17]. Nevertheless, previous studies are limited by the focus on psychological risks [14]. However, non-psychological risk factors may be particularly important for Chinese college first-year university students, as they may not talk about their problems due to mental health stigma [18,19]. There is an urgent need to find a customized assessment tool for Chinese university students. The University Personality Inventory (UPI) was developed in 1966 in Japan and has been adopted as a rapid and effective mental health screening tool among Chinese university students, applied in almost every university in Zhejiang province [20,21]. UPI has been shown to be multidimensional [20,21,22]. Originally, the UPI only concentrated on psychological symptoms, such as depression, anxiety, neuroticism, persecutory beliefs, and obsessive compulsive symptoms [20]. However, these assessments do not help us understand the social and developmental stressors that are relevant to university students specifically [23]. 
In 2015, a new five-factor structure consisting of domains of physical symptoms, cognitive symptoms, emotional vulnerability, social avoidance, and interpersonal sensitivity of UPI was developed for Chinese university students [21], offering opportunities to understand how the new five-factor domains are associated with suicidal ideation. During COVID-19, there have been concerns and literature suggesting an increase in social isolation and mental distress, which are potential risk factors for suicidal ideation [24,25]. Special attention will be placed on disparities in suicide prevention across sociodemographic subgroups during the COVID-19 pandemic [26]. For first-year university students, most of their courses have transitioned to an online format, rendering them unable to connect to their classmates and teachers [27]. Those students who felt more connected with other students and teachers were more likely to feel calm and trusting [28]. Previous literature reported the various suicidal ideation situations in disparate Chinese university students, indicating the prevalence range of suicidal ideation from 1.24% to 26.00% [1,29,30,31,32]. However, it is unknown how national prevalent suicidal ideation was among first-year university students in China. Despite recent attention being paid to the alarming rates of suicidal behaviors among university students [5,33,34,35], less evidence is available to understand the potential risks associated with their suicidal ideation during the COVID-19 pandemic. Less research comparatively addressed suicide prevention and early intervention for university students than for primary and secondary school students [36]. This is troubling on the grounds that the college years represent a critical and unique developmental stage characterized by dynamic social role transitions, new living situations, and changing relationships [37,38]. Additionally, it is vital to understand and design college first-year university students specific intervention programs targeting the developmental factors during the transition from adolescence to emerging adulthood [26,39]. This study aimed to investigate the prevalence of suicidal ideation and the potential relationship between the new five-factor domain in the revised UPI and suicidal ideation among Chinese first-year university students during the pandemic period, and present targeting measures. 2. Materials and Methods: 2.1. Data Collection and Sample The study was held in a higher vocational college (similar to a community college) located in Xiaoshan district, Hangzhou, Zhejiang. Hangzhou is the capital and most populous city of Zhejiang Province, People’s Republic of China [40,41,42]. An online questionnaire was distributed to all of the first-year students in this college. This college is somehow like a community college, and is partially guided by Zhejiang Open University, which governs nearly eighty similar colleges all around the province. There are two types of students. One is a full-time student (completes a 3-year course of study to get an associate degree, after which the qualified student could either take an entry exam for further study to get a bachelor’s degree or enter employment); the other is a part-time student who usually has employment and already has an associate degree, and chooses to take courses at the weekend for further study for a bachelor degree. 
Using cluster sampling, we selected one higher vocational college (Xiaoshan college) to invite 686 first-year students to participate in this investigation (effective response rate = 100.00%). Our study sample is full-time students who just graduated from high school and were of similar age. Our sample is unique, as they were the first batch of first-year students entering college during the COVID-19 pandemic. Using a cluster sampling technique, a total of 686 (248 women, 438 men; ages 17 to 19) of these first-year university students were selected. The study was approved by the Human Research Ethics Committee of the vocational college. Informed written consent was obtained from participants before the study commenced. During the first month of students’ school entry (March 2020), participants completed the UPI with demographic information (age, sex). Students with a UPI total sum score above 20 or those who responded “yes” to item 25 (“Have an idea of wanting to die”) were identified as a risk group and were referred to a mental health professional [20,21]. The study was held in a higher vocational college (similar to a community college) located in Xiaoshan district, Hangzhou, Zhejiang. Hangzhou is the capital and most populous city of Zhejiang Province, People’s Republic of China [40,41,42]. An online questionnaire was distributed to all of the first-year students in this college. This college is somehow like a community college, and is partially guided by Zhejiang Open University, which governs nearly eighty similar colleges all around the province. There are two types of students. One is a full-time student (completes a 3-year course of study to get an associate degree, after which the qualified student could either take an entry exam for further study to get a bachelor’s degree or enter employment); the other is a part-time student who usually has employment and already has an associate degree, and chooses to take courses at the weekend for further study for a bachelor degree. Using cluster sampling, we selected one higher vocational college (Xiaoshan college) to invite 686 first-year students to participate in this investigation (effective response rate = 100.00%). Our study sample is full-time students who just graduated from high school and were of similar age. Our sample is unique, as they were the first batch of first-year students entering college during the COVID-19 pandemic. Using a cluster sampling technique, a total of 686 (248 women, 438 men; ages 17 to 19) of these first-year university students were selected. The study was approved by the Human Research Ethics Committee of the vocational college. Informed written consent was obtained from participants before the study commenced. During the first month of students’ school entry (March 2020), participants completed the UPI with demographic information (age, sex). Students with a UPI total sum score above 20 or those who responded “yes” to item 25 (“Have an idea of wanting to die”) were identified as a risk group and were referred to a mental health professional [20,21]. 2.2. Measures The UPI is a 60-item self-report measure assessing whether an individual experienced one or more mental health symptoms during the past year [20]. For each item, a score of 1 is given for “Yes”, and 0 was given for “No”. The 56 items describe whether an individual experienced one or more mental health symptoms, except for the “lie” scales (items 5, 20, 35, and 50). (The full version of the UPI is available in the Supplementary file). 
Suicidal ideation was measured by one item response to “Have an idea of wanting to die?” Participants responding “yes” were considered to have suicidal ideation. The revised UPI includes five main factors tailored to assess physical, social, cognitive, interpersonal, and emotional health for Chinese university students (Table 1): (1) Physical symptoms (items 1, 2, 3, 17, 18, 46, 48 and 49); (2) social avoidance (items 10, 11, 41 and 43); (3) cognitive symptoms (items 29, 30, 38 and 39); (4) interpersonal sensitivity (items 57, 58); and (5) emotional vulnerability (items 6,15, 21, 24, 28 and 60). Factors were equal to the summed scores of the respective items based on the previously validated scoring guide [21]. For details, see Table 1. The UPI is a 60-item self-report measure assessing whether an individual experienced one or more mental health symptoms during the past year [20]. For each item, a score of 1 is given for “Yes”, and 0 was given for “No”. The 56 items describe whether an individual experienced one or more mental health symptoms, except for the “lie” scales (items 5, 20, 35, and 50). (The full version of the UPI is available in the Supplementary file). Suicidal ideation was measured by one item response to “Have an idea of wanting to die?” Participants responding “yes” were considered to have suicidal ideation. The revised UPI includes five main factors tailored to assess physical, social, cognitive, interpersonal, and emotional health for Chinese university students (Table 1): (1) Physical symptoms (items 1, 2, 3, 17, 18, 46, 48 and 49); (2) social avoidance (items 10, 11, 41 and 43); (3) cognitive symptoms (items 29, 30, 38 and 39); (4) interpersonal sensitivity (items 57, 58); and (5) emotional vulnerability (items 6,15, 21, 24, 28 and 60). Factors were equal to the summed scores of the respective items based on the previously validated scoring guide [21]. For details, see Table 1. 2.3. Data Analyses Descriptive statistics (frequencies, percentages, means, and standard deviations) were used to report socio-demographic data and the prevalence of suicidal ideation. Univariate analyses (Chi-square test, rank-sum test, or binary logistic regression models) were implemented to assess the differences of five factors by suicidal ideation status. Multivariate logistic regression models were then fitted to explore associations between five UPI factors and suicidal ideation, controlling for the covariates (sex, age, major-, e.g., academic subject). A significance level of p < 0.05 was applied for all significance tests. Associations were reported as Odds Ratios (OR) and 95% confidence intervals (CI). All analyses were performed using SPSS for Windows V22.0 (IBM Corp., Armonk, NY, USA). There were no missing values (response rate: 100%). Descriptive statistics (frequencies, percentages, means, and standard deviations) were used to report socio-demographic data and the prevalence of suicidal ideation. Univariate analyses (Chi-square test, rank-sum test, or binary logistic regression models) were implemented to assess the differences of five factors by suicidal ideation status. Multivariate logistic regression models were then fitted to explore associations between five UPI factors and suicidal ideation, controlling for the covariates (sex, age, major-, e.g., academic subject). A significance level of p < 0.05 was applied for all significance tests. Associations were reported as Odds Ratios (OR) and 95% confidence intervals (CI). 
All analyses were performed using SPSS for Windows V22.0 (IBM Corp., Armonk, NY, USA). There were no missing values (response rate: 100%). 2.1. Data Collection and Sample: The study was held in a higher vocational college (similar to a community college) located in Xiaoshan district, Hangzhou, Zhejiang. Hangzhou is the capital and most populous city of Zhejiang Province, People’s Republic of China [40,41,42]. An online questionnaire was distributed to all of the first-year students in this college. This college is somehow like a community college, and is partially guided by Zhejiang Open University, which governs nearly eighty similar colleges all around the province. There are two types of students. One is a full-time student (completes a 3-year course of study to get an associate degree, after which the qualified student could either take an entry exam for further study to get a bachelor’s degree or enter employment); the other is a part-time student who usually has employment and already has an associate degree, and chooses to take courses at the weekend for further study for a bachelor degree. Using cluster sampling, we selected one higher vocational college (Xiaoshan college) to invite 686 first-year students to participate in this investigation (effective response rate = 100.00%). Our study sample is full-time students who just graduated from high school and were of similar age. Our sample is unique, as they were the first batch of first-year students entering college during the COVID-19 pandemic. Using a cluster sampling technique, a total of 686 (248 women, 438 men; ages 17 to 19) of these first-year university students were selected. The study was approved by the Human Research Ethics Committee of the vocational college. Informed written consent was obtained from participants before the study commenced. During the first month of students’ school entry (March 2020), participants completed the UPI with demographic information (age, sex). Students with a UPI total sum score above 20 or those who responded “yes” to item 25 (“Have an idea of wanting to die”) were identified as a risk group and were referred to a mental health professional [20,21]. 2.2. Measures: The UPI is a 60-item self-report measure assessing whether an individual experienced one or more mental health symptoms during the past year [20]. For each item, a score of 1 is given for “Yes”, and 0 was given for “No”. The 56 items describe whether an individual experienced one or more mental health symptoms, except for the “lie” scales (items 5, 20, 35, and 50). (The full version of the UPI is available in the Supplementary file). Suicidal ideation was measured by one item response to “Have an idea of wanting to die?” Participants responding “yes” were considered to have suicidal ideation. The revised UPI includes five main factors tailored to assess physical, social, cognitive, interpersonal, and emotional health for Chinese university students (Table 1): (1) Physical symptoms (items 1, 2, 3, 17, 18, 46, 48 and 49); (2) social avoidance (items 10, 11, 41 and 43); (3) cognitive symptoms (items 29, 30, 38 and 39); (4) interpersonal sensitivity (items 57, 58); and (5) emotional vulnerability (items 6,15, 21, 24, 28 and 60). Factors were equal to the summed scores of the respective items based on the previously validated scoring guide [21]. For details, see Table 1. 2.3. 
Data Analyses: Descriptive statistics (frequencies, percentages, means, and standard deviations) were used to report socio-demographic data and the prevalence of suicidal ideation. Univariate analyses (Chi-square test, rank-sum test, or binary logistic regression models) were implemented to assess the differences of five factors by suicidal ideation status. Multivariate logistic regression models were then fitted to explore associations between five UPI factors and suicidal ideation, controlling for the covariates (sex, age, major-, e.g., academic subject). A significance level of p < 0.05 was applied for all significance tests. Associations were reported as Odds Ratios (OR) and 95% confidence intervals (CI). All analyses were performed using SPSS for Windows V22.0 (IBM Corp., Armonk, NY, USA). There were no missing values (response rate: 100%). 3. Results: 3.1. Sample Characteristics Table 2 presents the sample characteristics. There were more male university students (n = 438, 63.85%) than females (n = 248, 36.15%). On average, students were 17.79 years old. Most students had network engineering, business administration, and preschool education majors. Table 2 presents the sample characteristics. There were more male university students (n = 438, 63.85%) than females (n = 248, 36.15%). On average, students were 17.79 years old. Most students had network engineering, business administration, and preschool education majors. 3.2. Prevalence of Suicidal Ideation The 12-month prevalence of suicidal ideation in the total sample was 5.2% (4.8% in men, 6.0% in women). There was no significant sex difference in suicidal ideation prevalence (x2 = 0.28, p = 0.597). The 12-month prevalence of suicidal ideation in the total sample was 5.2% (4.8% in men, 6.0% in women). There was no significant sex difference in suicidal ideation prevalence (x2 = 0.28, p = 0.597). 3.3. Associations between UPI Factors and Suicidal Ideation For all UPI factors, students with suicidal ideation had significantly higher scores than those without suicidal ideation (See Table 3). Results of multivariable regression (Table 4) showed significant differences in social avoidance and emotional vulnerability to suicidal ideation, but not the rest of the UPI factors. Specifically, students with social avoidance were 2.17 times more likely to think about suicide than those without. Similarly, students with emotional vulnerability had 2.03 greater odds of suicidal ideation than those without. For all UPI factors, students with suicidal ideation had significantly higher scores than those without suicidal ideation (See Table 3). Results of multivariable regression (Table 4) showed significant differences in social avoidance and emotional vulnerability to suicidal ideation, but not the rest of the UPI factors. Specifically, students with social avoidance were 2.17 times more likely to think about suicide than those without. Similarly, students with emotional vulnerability had 2.03 greater odds of suicidal ideation than those without. 3.1. Sample Characteristics: Table 2 presents the sample characteristics. There were more male university students (n = 438, 63.85%) than females (n = 248, 36.15%). On average, students were 17.79 years old. Most students had network engineering, business administration, and preschool education majors. 3.2. Prevalence of Suicidal Ideation: The 12-month prevalence of suicidal ideation in the total sample was 5.2% (4.8% in men, 6.0% in women). 
3. Results: 3.1. Sample Characteristics: Table 2 presents the sample characteristics. There were more male university students (n = 438, 63.85%) than female (n = 248, 36.15%). On average, students were 17.79 years old. Most students majored in network engineering, business administration, or preschool education. 3.2. Prevalence of Suicidal Ideation: The 12-month prevalence of suicidal ideation in the total sample was 5.2% (4.8% in men, 6.0% in women). There was no significant sex difference in suicidal ideation prevalence (χ2 = 0.28, p = 0.597). 3.3. Associations between UPI Factors and Suicidal Ideation: For all UPI factors, students with suicidal ideation had significantly higher scores than those without suicidal ideation (see Table 3). Results of the multivariable regression (Table 4) showed that social avoidance and emotional vulnerability were significantly associated with suicidal ideation, whereas the remaining UPI factors were not. Specifically, students with social avoidance were 2.17 times more likely to think about suicide than those without. Similarly, students with emotional vulnerability had 2.03 times greater odds of suicidal ideation than those without. 4. Discussion: In this study, we found that suicidal ideation prevalence was 5.2% among the 686 first-year university students from a higher vocational college in Hangzhou, Eastern China. This rate is similar to that in an earlier study conducted during COVID-19 among 2700 second-year students of higher vocational colleges in China (5.4%) [32], but lower than that in the US during the COVID-19 quarantine period in early 2020 (20%) [43,44]. Suicidal ideation prevalence in China increased during the COVID-19 period compared with the pre-pandemic period (1.52% in Shandong), based on a prior study conducted from April 2018 to October 2019 [30]. Suicidal ideation prevalence may vary across years and regions. One study found that the prevalence of suicidal ideation among Chinese higher vocational college first-year students was 1.05% in 2008 in Jiangsu province, Eastern China [45]. Another study found a prevalence of 2.63% among first-year students of a Chinese private higher vocational college in 2009 in Anhui province, Eastern China [46]. The suicidal ideation prevalence among higher vocational college first-year students in those studies was lower than that found in the present study. Only social avoidance and emotional vulnerability were significant in the multivariable logistic regression after adjusting for covariates, whereas physical symptoms, cognitive symptoms, and interpersonal sensitivity were associated with suicidal ideation only in univariate analyses. Social avoidance is commonly defined as the desire to escape or actually avoid being with, talking to, or interacting with others for any reason, i.e., the unwillingness to connect with the outside world in this study [47,48]. Previous literature has also indicated that social avoidance in young patients is a clinically worrisome phenomenon that characterizes impending schizophrenia, but it also constitutes a core phenomenon in avoidant personality disorder, schizoid personality disorder, and autism spectrum disorder [24,49]. A causal pathway was demonstrated from social network strain (due to adverse childhood experiences) to suicidal ideation among middle-aged adults [24]. Social networks during adolescence influenced the odds of belonging to distinct suicidal trajectories [50]. Family cohesion protects youth from being on high-risk developmental courses of suicidal behaviors [50]. Social networks, especially the quality of interactions, may improve suicide risk screening among university students [50]. Social restriction implemented as a quarantine measure during the COVID-19 pandemic and the perceived social isolation [51] have contributed to significant psychological distress [52].
The COVID-19 pandemic affected many aspects of daily life and mental well-being, making it difficult to reduce its influence to a single factor such as social avoidance [53,54]. Social avoidance has been identified as a social risk to youth development [55]. Social avoidance increases loneliness, so connectedness needs to be built for students [56]. Emotional vulnerability is commonly defined as the willingness to acknowledge one's emotions, especially painful ones such as shame, sadness, anxiety, and insecurity [57,58]. It has also been defined as "uncertainty, risk, and emotional exposure", and is felt as anxiety about being rejected, shamed, or judged as inadequate, i.e., the manifestation of negative emotion in this study. Previous research suggests that problems with emotion regulation may be implicated in suicidality [59,60]. Early problems in learning to read and spell are related to motivational–emotional vulnerability in learning situations in the school context [61]. Self-reported ability to manage self-relevant emotions was negatively associated with suicidal ideation among university students [62]. Additionally, endorsement of fewer adaptive responses to negative emotions, as well as more maladaptive responses, increased the odds of suicidal ideation, plans, and attempts in children [63,64]. Emotional vulnerability can be related to personality, adverse childhood experiences, and recent stress [24,65,66,67]. This study quantified the effect of risk elements within the emotional vulnerability factor on suicidal ideation in a higher vocational college. Third, and most importantly, under the five-factor framework of the UPI, changes in the item scores within the five factors showed significant relationships with the risk of suicidal ideation, whereas these relationships could not be detected when the items were considered individually. This may be the fundamental value of the five-factor framework of the UPI. Thus, when a first-year university student endorses any of the following items: "Do not like meeting others", "Feel that I am not myself", "Lack faith in others", "Unwilling to associate with others", "Full of dissatisfaction and complaints", "Over-uneven in emotion", "Intolerance", "Irritable", "Lack of patience", or "Sensitive emotions", that student's risk of suicidal ideation increases. These phenomena have not been fully described by other researchers. Given this wide and varied spectrum of related factors, a logical next step would be to develop and examine the effectiveness of targeted interventions that are "personalized" not only to the suicidal ideation, but also to its associated risk factors. Our study is the first to demonstrate the utility of the revised UPI in understanding the risks of suicidal ideation among university students in China. In particular, by adding the five factors relevant to living experiences tailored to Chinese students, we were able to identify two important factors, social avoidance and emotional vulnerability, as potential risk factors of suicidal ideation amid the COVID-19 pandemic. While previous studies mainly focused on psychological distress, social and emotional regulation may be unique protective factors for first-year university students [68]. In China, some Promoting Alternative Thinking Strategies (PATHS) curriculum interventions help students improve their emotional understanding, emotion regulation, and prosocial behavior [69].
Additionally, child routines were shown to mediate the relationship between parenting and social–emotional development in China, which has important implications for research and practice aimed at enhancing social–emotional outcomes [70]. Our findings suggest that social avoidance and emotional vulnerability are important and unique risk factors for suicidal thoughts among university students in China, which has important implications for research and practice. Previous studies demonstrate that interpersonal belongingness [71,72], anhedonia, and type D personality [73] are potentially strong predictors of suicidal ideation in medical students. School-based suicide prevention can target building belongingness, student–teacher connectedness, more engaging school climates, and emotional well-being surveillance to screen high-risk students. Some limitations of this study exist. This is a cross-sectional study, and it lacks comparison with pre-COVID time periods. Moreover, it was limited by COVID-19 epidemic prevention and control restrictions and by regional differences in university entrance. We were not able to control for further covariates; for example, we have no data on students' living arrangements, because this information was not collected in the survey. Only one college was investigated, and all respondents were first-year university students, so there may be some limitations in the representativeness of the sample and the extrapolation of the results. 5. Conclusions: In conclusion, the study revealed that, for first-year university students amid the COVID-19 pandemic, the factors of physical symptoms, cognitive symptoms, and interpersonal sensitivity may act through the factors of social avoidance and emotional vulnerability. If a student endorses any item of the social avoidance or emotional vulnerability factor, the likelihood of suicidal ideation more than doubles. With the five-factor structure of the UPI, more associated risk factors can be identified. These findings contribute to providing timely intervention measures to reduce suicidal ideation and the risk of suicide in this college student population.
Background: University students with suicidal ideation are at high risk of suicide, which constitutes a significant social and public health problem in China. However, little is known about the prevalence and associated risk factors of suicidal ideation among first-year university students in China, especially during the COVID-19 pandemic. Methods: Using a cluster sampling technique, a university-wide survey of 686 first-year university students from Hangzhou was conducted in March 2020 using the University Personality Inventory (UPI). The UPI includes an assessment of suicidal ideation and possible risk factors. Suicidal ideation prevalence was calculated for males and females. Univariate analyses and multivariable logistic regression models were conducted, adjusting for age and sex. Analyses were carried out using SPSS version 22.0. Results: The prevalence of 12-month suicidal ideation among first-year university students during March 2020 was 5.2%, and there was no significant difference between males and females (4.8% vs. 6.0%, χ2 = 0.28, p = 0.597). Multivariable logistic regression analysis identified social avoidance (B = 0.78, OR = 2.17, p < 0.001) and emotional vulnerability (B = 0.71, OR = 2.02, p < 0.001) as positively associated with suicidal ideation. Conclusions: Social avoidance and emotional vulnerability are unique factors associated with greater suicidal ideation among first-year university students during the COVID-19 pandemic. The UPI serves as a validated tool to screen for suicide risk among Chinese university students. Encouraging social engagement and improving emotional regulation skills are promising targets to reduce suicidal ideation among first-year university students.
1. Introduction: Suicide is a serious public health problem and one of the leading causes of death among adolescents and young adults, especially among university students. A meta-analysis reported that the point prevalence of suicidal ideation among Chinese university students ranged between 1.2% and 26.0%, and that the prevalence of suicidal ideation among university students in southern China was higher than in northern China [1]. Understanding the prevalence and risk factors of suicidal ideation is important because it is closely related to suicide attempts and deaths [2,3,4]. Suicidal ideation among first-year university students is of particular importance because of the dynamic developmental, social, and behavioral transitions during their first year at university [5,6,7,8]. Socially, there are changes in residence, family relationships, and peer contexts [5,6,9,10,11]. For most Chinese first-year university students, it is the first time they have started to live in a province other than their place of birth [7,10,12]. Hence, the transition to university involves significant life changes (e.g., increased independence, social demands) and academic challenges (e.g., independent learning) alongside reduced parental support and oversight [13,14]. Developmentally, adolescents and young adults have heterogeneous trajectories of suicidal thoughts and behaviors [15]. Greater impulsivity and self-reported aggression, as well as elevated trait anxiety, are risk factors for suicidal ideation among young adults [15]. Previous studies found that among 16-year-old university students, those with greater tolerance toward suicide, higher family coping, and lower self-esteem were more likely to report suicidal ideation [16]. College entrance may be a strategically well-placed "point of capture" for detecting late adolescents with suicidal thoughts and behaviors [5]. However, a clear epidemiological picture of suicidal thoughts and behaviors among incoming Chinese university students is lacking [5]. Most previous studies relied on patient and hospital records, which may understate the issue of student suicidal ideation [14]. The first twelve months following the onset of suicidal ideation appear crucial, with data from seventeen countries demonstrating that 60% of attempts are made within this period [17]. Nevertheless, previous studies are limited by their focus on psychological risks [14]. Non-psychological risk factors may be particularly important for Chinese first-year university students, as they may not talk about their problems due to mental health stigma [18,19]. There is an urgent need for a customized assessment tool for Chinese university students. The University Personality Inventory (UPI) was developed in 1966 in Japan and has been adopted as a rapid and effective mental health screening tool among Chinese university students, applied in almost every university in Zhejiang province [20,21]. The UPI has been shown to be multidimensional [20,21,22]. Originally, the UPI concentrated only on psychological symptoms, such as depression, anxiety, neuroticism, persecutory beliefs, and obsessive-compulsive symptoms [20]. However, these assessments do not help us understand the social and developmental stressors that are relevant to university students specifically [23].
In 2015, a new five-factor structure of the UPI, consisting of the domains of physical symptoms, cognitive symptoms, emotional vulnerability, social avoidance, and interpersonal sensitivity, was developed for Chinese university students [21], offering opportunities to understand how these five domains are associated with suicidal ideation. During COVID-19, there have been concerns and literature suggesting an increase in social isolation and mental distress, which are potential risk factors for suicidal ideation [24,25]. Special attention should be placed on disparities in suicide prevention across sociodemographic subgroups during the COVID-19 pandemic [26]. For first-year university students, most courses transitioned to an online format, leaving them unable to connect with their classmates and teachers [27]. Students who felt more connected with other students and teachers were more likely to feel calm and trusting [28]. Previous literature reported varied suicidal ideation rates among disparate Chinese university student populations, with prevalence ranging from 1.24% to 26.00% [1,29,30,31,32]. However, the national prevalence of suicidal ideation among first-year university students in China remains unknown. Despite recent attention to the alarming rates of suicidal behaviors among university students [5,33,34,35], less evidence is available to understand the potential risks associated with their suicidal ideation during the COVID-19 pandemic. Comparatively less research has addressed suicide prevention and early intervention for university students than for primary and secondary school students [36]. This is troubling because the college years represent a critical and unique developmental stage characterized by dynamic social role transitions, new living situations, and changing relationships [37,38]. Additionally, it is vital to understand and design intervention programs specific to first-year university students that target the developmental factors of the transition from adolescence to emerging adulthood [26,39]. This study aimed to investigate the prevalence of suicidal ideation and the potential relationship between the new five-factor domains of the revised UPI and suicidal ideation among Chinese first-year university students during the pandemic period, and to propose targeted measures.
5,687
306
[ 1690, 394, 281, 161, 55, 47, 91 ]
11
[ "students", "suicidal", "ideation", "suicidal ideation", "university", "university students", "college", "factors", "social", "year" ]
[ "issue student suicidal", "suicide college student", "factors suicidal", "suicide similarly students", "suicidal ideation chinese" ]
null
[CONTENT] suicidal ideation | college student | psychological influencing factors | university personality inventory [SUMMARY]
null
[CONTENT] suicidal ideation | college student | psychological influencing factors | university personality inventory [SUMMARY]
[CONTENT] suicidal ideation | college student | psychological influencing factors | university personality inventory [SUMMARY]
[CONTENT] suicidal ideation | college student | psychological influencing factors | university personality inventory [SUMMARY]
[CONTENT] suicidal ideation | college student | psychological influencing factors | university personality inventory [SUMMARY]
[CONTENT] COVID-19 | China | Female | Humans | Inventors | Male | Pandemics | Personality | Prevalence | Risk Factors | Students | Suicidal Ideation | Universities [SUMMARY]
null
[CONTENT] COVID-19 | China | Female | Humans | Inventors | Male | Pandemics | Personality | Prevalence | Risk Factors | Students | Suicidal Ideation | Universities [SUMMARY]
[CONTENT] COVID-19 | China | Female | Humans | Inventors | Male | Pandemics | Personality | Prevalence | Risk Factors | Students | Suicidal Ideation | Universities [SUMMARY]
[CONTENT] COVID-19 | China | Female | Humans | Inventors | Male | Pandemics | Personality | Prevalence | Risk Factors | Students | Suicidal Ideation | Universities [SUMMARY]
[CONTENT] COVID-19 | China | Female | Humans | Inventors | Male | Pandemics | Personality | Prevalence | Risk Factors | Students | Suicidal Ideation | Universities [SUMMARY]
[CONTENT] issue student suicidal | suicide college student | factors suicidal | suicide similarly students | suicidal ideation chinese [SUMMARY]
null
[CONTENT] issue student suicidal | suicide college student | factors suicidal | suicide similarly students | suicidal ideation chinese [SUMMARY]
[CONTENT] issue student suicidal | suicide college student | factors suicidal | suicide similarly students | suicidal ideation chinese [SUMMARY]
[CONTENT] issue student suicidal | suicide college student | factors suicidal | suicide similarly students | suicidal ideation chinese [SUMMARY]
[CONTENT] issue student suicidal | suicide college student | factors suicidal | suicide similarly students | suicidal ideation chinese [SUMMARY]
[CONTENT] students | suicidal | ideation | suicidal ideation | university | university students | college | factors | social | year [SUMMARY]
null
[CONTENT] students | suicidal | ideation | suicidal ideation | university | university students | college | factors | social | year [SUMMARY]
[CONTENT] students | suicidal | ideation | suicidal ideation | university | university students | college | factors | social | year [SUMMARY]
[CONTENT] students | suicidal | ideation | suicidal ideation | university | university students | college | factors | social | year [SUMMARY]
[CONTENT] students | suicidal | ideation | suicidal ideation | university | university students | college | factors | social | year [SUMMARY]
[CONTENT] university | students | university students | suicidal | ideation | suicidal ideation | chinese | year | year university | chinese university [SUMMARY]
null
[CONTENT] suicidal ideation | ideation | suicidal | students | table | upi factors | sample | prevalence | sample characteristics | characteristics [SUMMARY]
[CONTENT] factor | reduce | factors | risk | symptoms | emotional vulnerability | avoidance | vulnerability | social | emotional [SUMMARY]
[CONTENT] students | suicidal | suicidal ideation | ideation | university | college | factors | social | items | university students [SUMMARY]
[CONTENT] students | suicidal | suicidal ideation | ideation | university | college | factors | social | items | university students [SUMMARY]
[CONTENT] China ||| first-year | China | COVID-19 [SUMMARY]
null
[CONTENT] 12-month | first-year | March 2020 | 5.2% | 4.8% | 6.0% | 0.28 | 0.597 ||| 0.78 | 2.17 | p < | 0.001 | 0.71 | 2.02 | p < | 0.001 [SUMMARY]
[CONTENT] first-year | COVID-19 ||| UPI | Chinese ||| first-year [SUMMARY]
[CONTENT] China ||| first-year | China | COVID-19 ||| 686 | first-year | Hangzhou | March 2020 | University Personality Inventory | UPI ||| UPI ||| ||| ||| SPSS | 22.0 ||| ||| 12-month | first-year | March 2020 | 5.2% | 4.8% | 6.0% | 0.28 | 0.597 ||| 0.78 | 2.17 | p < | 0.001 | 0.71 | 2.02 | p < | 0.001 ||| first-year | COVID-19 ||| UPI | Chinese ||| first-year [SUMMARY]
[CONTENT] China ||| first-year | China | COVID-19 ||| 686 | first-year | Hangzhou | March 2020 | University Personality Inventory | UPI ||| UPI ||| ||| ||| SPSS | 22.0 ||| ||| 12-month | first-year | March 2020 | 5.2% | 4.8% | 6.0% | 0.28 | 0.597 ||| 0.78 | 2.17 | p < | 0.001 | 0.71 | 2.02 | p < | 0.001 ||| first-year | COVID-19 ||| UPI | Chinese ||| first-year [SUMMARY]
Histone Methyltransferases SUV39H1 and G9a and DNA Methyltransferase DNMT1 in Penumbra Neurons and Astrocytes after Photothrombotic Stroke.
34830365
Cerebral ischemia, a common cerebrovascular disease, is one of the great threats to human health, and new targets for stroke therapy are needed. The transcriptional activity in the cell is regulated by epigenetic processes such as DNA methylation/demethylation, acetylation/deacetylation, histone methylation, etc. Changes in DNA methylation after ischemia can have both neuroprotective and neurotoxic effects depending on the degree of ischemic damage, the time elapsed after injury, and the site of methylation.
BACKGROUND
In this study, we investigated the changes in the expression and intracellular localization of the DNA methyltransferase DNMT1 and the histone methyltransferases SUV39H1 and G9a in penumbra neurons and astrocytes at 4 and 24 h after stroke in the rat cerebral cortex, using the photothrombotic stroke (PTS) model. Immunofluorescence microscopy, apoptosis analysis, and immunoblotting were used. Additionally, we studied the effect of DNMT1 and G9a inhibitors on the volume of PTS-induced infarction and on apoptosis of penumbra cells in the cortex of mice after PTS.
METHODS
This study has shown that the level of DNMT1 increased in the nuclear and cytoplasmic fractions of the penumbra tissue at 24 h after PTS. Inhibition of DNMT1 by 5-aza-2'-deoxycytidine protected cells of the PTS-induced penumbra from apoptosis. An increase in the level of SUV39H1 in the penumbra was found at 24 h after PTS, and G9a was overexpressed at 4 and 24 h after PTS. The G9a inhibitors A-366 and BIX01294 protected penumbra cells from apoptosis and reduced the volume of PTS-induced cerebral infarction.
RESULTS
Thus, the data obtained show that DNA methyltransferase DNMT1 and histone methyltransferase G9a can be potential protein targets in ischemic penumbra cells, and their inhibitors are potential neuroprotective agents capable of protecting penumbra cells from postischemic damage to the cerebral cortex.
CONCLUSION
[ "Animals", "Astrocytes", "Cerebral Cortex", "DNA (Cytosine-5-)-Methyltransferase 1", "DNA Methylation", "Disease Models, Animal", "Gene Expression Regulation, Enzymologic", "Histone-Lysine N-Methyltransferase", "Humans", "Light", "Methyltransferases", "Mice", "Neurons", "Rats", "Repressor Proteins", "Stroke" ]
8619375
1. Introduction
Cerebral ischemia, a common cerebrovascular disease, is one of the great threats to human health [1]. Blockage of cerebral vessels in ischemic stroke (70–80% of all strokes) disrupts blood flow and the supply of oxygen and glucose to surrounding tissues. In the ischemic nucleus, nerve cells die quickly. Additionally, toxic factors (glutamate, K+, free radicals, acidosis, etc.) spread and damage the surrounding cells and tissues over the following hours [2,3]. During this time (2–6 h, the "therapeutic window"), it is possible to save neurons in the transition zone (penumbra), decrease damage, and reduce the neurological consequences of a stroke. However, studies of potential neuroprotectors (calcium channel blockers, excitotoxicity inhibitors, antiapoptotic agents, antioxidants, etc.), which have shown promise in experiments on cell cultures or laboratory animals, have not demonstrated effective protection of the human brain from stroke without unacceptable side effects [4,5,6,7]. The transcriptional activity in the cell is regulated by epigenetic processes such as DNA methylation/demethylation, acetylation/deacetylation, histone methylation/demethylation, histone phosphorylation, etc. DNA methylation is the most well-studied epigenetic modification. It involves the attachment of a methyl group to cytosine in a CpG dinucleotide, where cytosine and guanine are linked by a phosphate group (5′CpG3′). Oxidation of methylated cytosine leads to its demethylation [5,6,7]. DNA methylation does not occur at every cytosine residue in the genome; only about 60% of all CpG dinucleotides are methylated. If CpG islands (CGIs) in the promoter region of a gene undergo methylation, this usually leads to its suppression. Regulatory regions of many human and animal genes contain unmethylated CpG dinucleotides grouped into CGIs. These are usually promoters of housekeeping genes expressed in all tissues; in the promoters of tissue-specific genes, CGIs are unmethylated only in those tissues where the gene is expressed [4,5,6,7]. Adding a methyl group to CpG sites can prevent gene transcription in different ways. DNA methylation can directly prevent the binding of DNA-binding factors to transcription sites. In addition, methyl groups in CpG dinucleotides can be recognized by proteins of the methyl-CpG-binding domain (MBD) family, such as MBD1–4 and MECP2 [5,6]. The binding of these proteins to methylated CpG sites recruits histone- or chromatin-modifying protein complexes that repress gene transcription [4,5,6,7]. At least two methylation systems with different methylases operate in mammalian cells. De novo methylation is carried out by the DNA methyltransferases DNMT3a and DNMT3b, which introduce elements of variability into the methylation profile. Maintenance methylation is provided by DNMT1, which attaches methyl groups to cytosines on one DNA strand where the complementary strand is already methylated (accuracy rate > 99%) [8]. Although DNA methylation is a fairly common epigenetic modification, the level of DNA methylation in the mammalian genome does not exceed 1%. This is due to the instability of 5-methylcytosine, which undergoes spontaneous deamination to thymine during DNA repair, resulting in a rapid depletion of CpG sites in the genome [9]. Nonetheless, DNA methylation is critical in biological processes such as differentiation, genomic imprinting, genomic stability, and X chromosome inactivation.
DNA methylation and repression of the expression of a number of genes increase in brain cells after ischemia [10,11,12]. Occlusion of the middle cerebral artery (MCA) for 30 min increased the total level of DNA methylation by a factor of four, which correlated with increased cell damage and neurological deficit. Changes in DNA methylation after ischemia can occur not only at the global level, but also at the promoters of individual genes [13]. These changes can have both neuroprotective and neurotoxic effects depending on the degree of ischemic damage, the time elapsed after injury, and the site of methylation [14,15]. Methylation of histones by histone methyltransferases regulates transcriptional processes in cells. It is known that methylation of lysines 9 and 27 in histone H3, as well as of lysine 20 in histone H4, globally suppresses transcription, whereas methylation of lysine 4 in histone H3 often correlates with transcriptional activity [16]. The lysine methyltransferases SUV39H1 and G9a are the best-studied histone methyltransferases. SUV39H1 and G9a methylate lysines in histones H3 and H4, leading to the formation of large regions of heterochromatin in which gene expression is suppressed. G9a is mainly responsible for mono- and dimethylation of lysine 9 in histone H3 (H3K9Me1 and H3K9Me2), while SUV39H1 is responsible for trimethylation (H3K9Me3) [16,17]. SUV39H1 and G9a are also involved in the repression of individual genes and in a number of neuropathological processes [18]. Damage to peripheral nerves increased the expression of SUV39H1 in the nuclei of spinal ganglion neurons involved in the transmission of pain information, causing nociceptive hypersensitivity. It is assumed that SUV39H1 inhibitors could serve as promising drugs for the treatment of pain hypersensitivity caused by damage to peripheral nerves [18]. Pharmacological inhibition or knockout of the histone methyltransferases SUV39H1 or G9a protected cultured neurons from ischemic damage in an oxygen- and glucose-free environment [18,19]. In this study, we investigated the changes in the expression and intracellular localization of the DNA methyltransferase DNMT1 and the histone methyltransferases SUV39H1 and G9a in penumbra neurons and astrocytes, and in nuclear and cytoplasmic fractions of the penumbra, at 4 and 24 h after photothrombotic stroke (PTS). In an attempt to find possible neuroprotectors, we studied the effect of DNMT1 and G9a inhibitors on the volume of PTS-induced infarction and on apoptosis of penumbra cells in the cortex of mice after PTS.
4. Methods
4.1. Animals: The experiments were carried out on adult male rats weighing 200–250 g. Experiments with enzyme inhibitors were carried out on male outbred CD-1 mice aged 14–15 weeks and weighing 20–25 g. The animals were kept under standard conditions with free access to water and food at 22–25 °C, a 12-h light/dark schedule, and an air exchange rate of 18 shifts per hour. Body temperature was monitored with a rectal thermometer and maintained at 37 ± 0.5 °C using an electric mat. International, national, and institutional guidelines for the care and conduct of animal experiments were followed. The animal protocols were evaluated and approved by the Animal Care and Use Committee of the Southern Federal University. 4.2. Photothrombotic Stroke Model: Rats were anesthetized by intraperitoneal administration of telazol (50 mg/kg) and xylazine (10 mg/kg) [28]. For mice, 25 mg/kg telazol and 5 mg/kg xylazine were used for anesthesia [29]. The PTS procedure has been described previously [30]. Briefly, PTS in the rat cerebral cortex was induced by laser irradiation (532 nm, 60 mW/cm2, Ø 3 mm, 30 min) of a part of the sensorimotor cortex after intravenous injection of the photosensitizer Rose Bengal (R4507, Sigma, Rehovot, Israel; 20 mg/kg). For PTS in the cerebral cortex of mice, Rose Bengal (15 mg/mL) was injected intraperitoneally (10 μL/g). A section of the mouse skull was freed from the periosteum in the area of the sensorimotor cortex (2 mm lateral to the bregma), and this part of the brain was irradiated with a laser (532 nm, 200 mW/cm2, Ø 1 mm, 15 min) 5 min after the photosensitizer was administered. Sham-operated animals that underwent the same operations, but without the introduction of the photosensitizer, were used as controls.
4.3. Immunofluorescence Microscopy Analysis: For immunofluorescence microscopy, the rats were anesthetized and transcardially perfused with 10% formalin 4 or 24 h after PTS, as described previously [30]. Briefly, control samples were taken from the contralateral cortex of the same animals (CL4 and CL24, respectively) or from the cerebral cortex of sham-operated rats (SO4 and SO24). The brain was fixed in formalin overnight and incubated for 48 h in 20% sucrose in phosphate buffer (PBS) at 4 °C. Frontal brain sections 20 μm thick were prepared using a Leica VT 1000 S vibratome (Freiburg, Germany), frozen in 2-methylbutane, and stored at −80 °C. After thawing, the sections were washed with PBS. Non-specific antibody binding was blocked with 5% BSA and 0.3% Triton X-100 (1 h, 20–25 °C). The sections were then incubated overnight at 4 °C in the same solution with primary rabbit antibodies (Sigma-Aldrich, Rehovot, Israel): anti-DNMT1 (D4567, 1:500), anti-G9a (09-071, 1:100), anti-SUV39H1 (AV32470, 1:500), anti-histone H3 dimethylated at lysine 9 (anti-H3K9diMe; D5567, 1:500), and anti-histone H3 methylated at lysine 4 (anti-H3K4Me; M4819, 1:500), together with mouse anti-NeuN (MAB377; 1:1000) or anti-GFAP (SAB5201104; 1:1000) antibodies. After washing in PBS, the sections were incubated for 1 h with fluorescently labeled secondary anti-rabbit CF488A (SAB4600045, 1:1000) or anti-mouse CF555 (SAB4600302, 1:1000) antibodies. Sections were mounted on glass slides in 60% glycerol/PBS. The negative control contained no primary antibodies. Sections were analyzed using an Eclipse FN1 microscope (Nikon, Tokyo, Japan). In most experiments, fluorescent images were studied in the central region of the penumbra, at a distance of 0.3–0.7 mm from the border of the infarction nucleus. The quantitative assessment of fluorescence was carried out on 10–15 images of experimental and control preparations obtained with the same digital camera settings. The average fluorescence intensity in the area occupied by cells was determined in each image using the ImageJ software (http://rsb.info.nih.gov/ij/, accessed on 20 October 2021). The corrected total cell fluorescence I (CTCF), proportional to the level of protein expression, was calculated as I = Ii − Ac × Ib, where Ii is the integrated fluorescence intensity, Ac is the cell area, and Ib is the average background fluorescence [30]. Threshold values remained constant for all images. The relative change in the fluorescence of penumbra cells compared to the control cortex, ΔI, was calculated as ΔI = (Ipen − Ic)/Ic, where Ipen is the average fluorescence intensity in the penumbra and Ic is the average fluorescence intensity in the control samples. Additionally, the immunofluorescence of the DNMT1, SUV39H1, and G9a proteins in neuronal nuclei was assessed using the ROI Manager tools in ImageJ. Immunofluorescence data for proteins in cell nuclei were normalized to Hoechst 33342 fluorescence. In addition, the number of cells expressing the protein under study was calculated per 100 cells. Protein colocalization with the neuronal marker NeuN or the astrocyte marker GFAP was assessed using ImageJ with the JACoP plugin. On RGB images (1280 × 960), the Manders coefficient M1 reflects the proportion of pixels containing both red (cell markers or TUNEL staining) and green (protein) signals in the total signal recorded in the red channel. In each area of the brain, three fields of vision were analyzed in 7–10 rats. Statistical processing was performed using one-way ANOVA. Results are presented as M ± SEM.
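As a numerical illustration of the CTCF and ΔI formulas above, and of what the Manders M1 coefficient measures, consider the following Python sketch. All measurement values are hypothetical placeholders; in practice the intensities, areas, backgrounds, and channel images would come from ImageJ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-image values exported from ImageJ.
Ii = np.array([12500.0, 14200.0, 13100.0])  # integrated fluorescence intensity
Ac = np.array([480.0, 510.0, 495.0])        # cell area
Ib = np.array([4.2, 4.5, 4.1])              # mean background fluorescence

# Corrected total cell fluorescence: I = Ii - Ac * Ib
ctcf = Ii - Ac * Ib

# Relative change of penumbra fluorescence vs. control cortex:
# dI = (Ipen - Ic) / Ic; Ic here is a hypothetical control mean.
Ipen, Ic = ctcf.mean(), 9800.0
dI = (Ipen - Ic) / Ic

# Manders coefficient M1 in the same spirit as the JACoP plugin:
# the share of red-channel signal falling in pixels that also carry
# green signal above a threshold.
red = rng.random((64, 64))    # e.g., NeuN or TUNEL channel
green = rng.random((64, 64))  # e.g., DNMT1 channel
m1 = red[green > 0.5].sum() / red.sum()

print(f"CTCF: {ctcf.round(0)}, dI: {dI:.2f}, M1: {m1:.2f}")
```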
4.4. Apoptosis Analysis: Apoptotic cells were visualized by the TUNEL method (TdT-mediated dUTP-X nick end labeling) using the In Situ Cell Death Detection Kit, TMR red (#12156792910, Roche, Mannheim, Germany; red fluorescence), as described previously [30]. Briefly, sections were incubated at 37 °C with a primary antibody against the protein under study, washed, treated with the reagents from this kit, and incubated for 1 h with the secondary antibody anti-rabbit CF488A (SAB4600045, 1:1000; green fluorescence) and with the cell nucleus marker Hoechst 33342 (10 μg/mL; blue fluorescence). The apoptotic coefficient (AI) was calculated as a percentage: AI = (number of TUNEL-positive cells/total number of cells stained with Hoechst 33342) × 100%. The analysis was performed on 3 images for each of 7–9 animals in the group.
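The apoptotic coefficient is simple enough to state in a few lines of code; a minimal sketch with hypothetical counts:

```python
def apoptotic_index(tunel_positive: int, hoechst_total: int) -> float:
    """Apoptotic coefficient (AI): share of TUNEL-positive cells among
    all Hoechst 33342-stained cells, expressed as a percentage."""
    return 100.0 * tunel_positive / hoechst_total

# Hypothetical counts for one field of view: 18 TUNEL-positive nuclei
# out of 240 Hoechst-stained nuclei gives AI = 7.5%.
print(apoptotic_index(18, 240))
```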
4.5. Immunoblotting: The immunoblotting procedure has been described previously [30]. Briefly, the rats were anesthetized and decapitated 4 or 24 h after PTS. The brain was removed, and a section of the cortex corresponding to the infarction nucleus was excised on ice with a cylindrical knife (Ø 3 mm); with another knife (Ø 7 mm), a 2-mm ring was cut around the irradiation zone, approximately corresponding to the penumbra tissue (experimental sample). A similar control sample was excised from the unirradiated contralateral hemisphere or from the cerebral cortex of sham-operated rats. Pieces of tissue were homogenized on ice, quickly frozen in liquid nitrogen, and stored at −80 °C. After thawing and centrifugation, the cytoplasmic and nuclear fractions were isolated from the homogenates using the CelLytic NuCLEAR Extraction Kit (Sigma-Aldrich). The total supernatant was used as the cytoplasmic fraction, in which the nuclear protein histone H3 was practically not detected. Primary rabbit antibodies (all Sigma-Aldrich) were used: anti-DNMT1 (D4567, 1:500), anti-G9a (09-071, 1:500), and anti-SUV39H1 (AV32470, 1:500), together with a mouse anti-β-actin antibody (A5441, 1:5000). Secondary antibodies (all Sigma-Aldrich): anti-rabbit IgG-peroxidase (A6154, 1:1000) and anti-mouse IgG-peroxidase (A4416, 1:1000).
4.6. Inhibitors: The putative neuroprotective effects of the following inhibitors of epigenetic proteins were investigated: the DNMT1 inhibitor 5-aza-2′-deoxycytidine (decitabine; 0.2 mg/kg, once a day, 7 days) [31,32,33] and the inhibitors of the histone methyltransferases SUV39H1 and G9a, BIX01294 (0.5 mg/kg, once a day, 7 days) [34] and A-366 (2 mg/kg, once a day, 7 days) [35]. The inhibitors were dissolved in dimethyl sulfoxide (DMSO) and then diluted in sterile saline; the final concentration of DMSO was 5%. All inhibitors were administered 1 h after PTS. Mice were decapitated 4 and 7 days after PTS, 1 h after the last injection of the drugs, to study the level of apoptosis in the peri-infarction area and the volume of infarction. 4.7. Determination of the Volume of Infarction: To determine the volume of infarction at different times after PTS, brain slices of mice were stained with 2,3,5-triphenyltetrazolium chloride (TTC; T8877, Sigma). For this, the mice were anesthetized and decapitated, and the brains were quickly removed and placed in a pre-chilled adult mouse brain matrix (J&K Seiko Electronic Co., Ltd., DongGuan City, China). The matrix with brain tissue was transferred to a freezer (−80 °C) for 3–5 min, and the brain was sliced into 2-mm-thick sections. These sections were stained with 1% TTC for 30 min in the dark at 37 °C. Using the ImageJ image analysis software (http://rsb.info.nih.gov/ij/, accessed on 20 October 2021), the areas of the infarction zones in each section were measured, summed, and multiplied by the section thickness (2 mm).
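For illustration, the volume computation described above reduces to a sum of per-section infarct areas multiplied by the slice thickness; the area values below are hypothetical placeholders for measurements made in ImageJ.

```python
# Infarct volume from serial TTC-stained sections: infarct areas
# (mm^2, one per 2-mm-thick section) are summed and multiplied by
# the section thickness.
SECTION_THICKNESS_MM = 2.0
infarct_areas_mm2 = [0.0, 1.8, 3.2, 2.9, 1.1, 0.0]  # hypothetical values

infarct_volume_mm3 = sum(infarct_areas_mm2) * SECTION_THICKNESS_MM
print(f"infarct volume: {infarct_volume_mm3:.1f} mm^3")  # 18.0 mm^3
```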
2. Results
2.1. Expression and Localization of DNA Methyltransferase DNMT1: DNA methylation leading to gene silencing is carried out by the DNA methyltransferases DNMT1 (maintenance methylation in interphase cells) or DNMT3a and DNMT3b (de novo methylation at early stages of development, as well as its changes during cell differentiation). DNMT1 maintains the DNA methylation pattern after cell repair or division; it attaches methyl groups to one of the DNA strands where the complementary strand is methylated. DNMT1 has been detected in the normal brain [20]. However, the role of DNMT in the brain's response to ischemic injury is poorly understood. It has been shown that global DNA methylation carried out by the DNMT methyltransferases is enhanced 16–24 h after cerebral ischemia and reperfusion, which indicates a decrease in biosynthetic processes [21]. Western blotting data in our experiments showed that the level of DNMT1 in both the nuclear and cytoplasmic fractions of the penumbra tissue did not change 4 h after PTS in the rat cerebral cortex, but significantly increased after 24 h (Figure 1). These results were confirmed by immunofluorescence microscopy (Figure 2a–c), which showed that the level of DNMT1 in the penumbra increased at 24 h after PTS (Figure 2b,c). Colocalization of DNMT1 with the neuronal marker NeuN (Figure 2d) and the astrocyte marker GFAP (Figure 3b) significantly increased. As NeuN selectively labels the nuclei of neurons (Figure 2a) and GFAP stains the bodies of astrocytes, including their processes (Figure 3a), these data explain the results of immunoblotting: the increase in the level of DNMT1 in the nuclear fraction of the penumbra was most likely due to its increased expression in neuronal nuclei, whereas the increase in the cytoplasmic fraction was due to the expression of this protein in the bodies and processes of astrocytes. A 75% increase (p < 0.05) in DNMT1 expression in the nuclei of penumbra cells was observed at 24 h after PTS (Figure 2e). At the same time, there was no increase in the number of cells expressing the protein (Figure 2f). DNMT1 is a nuclear protein, and its appearance in the cytoplasm of astrocytes could be explained by the fact that, after synthesis in the cytoplasm, the protein must be transported to the nucleus, as occurs in neurons. In astrocytes, synthesis of DNMT1 can also be stimulated, but its transport to the nucleus may be hindered or much slower than its synthesis, so that the protein accumulates in the cytoplasm. These data are consistent with previously published results [21,22,23]. A similar nuclear-cytoplasmic localization of DNMT1 was observed in reactive astrocytes after traumatic brain injury, in contrast to the normal localization in the nuclei of neurons [23].
2.2. Expression and Localization of SUV39H1 Histone Methyltransferase: In the present experiments, the histone methyltransferase SUV39H1 was detected exclusively in the nuclear fraction of the rat cerebral cortex in control samples (Figure 4, Figure 5a and Figure 6a). Its level in the cytoplasm of cells was so low that it was practically not detected by immunoblotting (Figure 4c). The level of SUV39H1 in the nuclear fraction of the penumbra did not change at 4 h after PTS, but significantly increased at 24 h after PTS (Figure 4a,b). Immunofluorescence microscopy also revealed an increase in the level of SUV39H1 at 24 h after PTS compared with both control groups (Figure 5a–c). SUV39H1 was localized mainly in the nuclei of neurons (Figure 5a and Figure 6a), and the colocalization of SUV39H1 with neuronal nuclei increased at 24 h after PTS (Figure 5d). PTS did not affect the colocalization of SUV39H1 with the astrocyte marker GFAP (Figure 6b). Analysis of SUV39H1 fluorescence in cell nuclei revealed a 63% increase (p ≤ 0.05) in the level of the protein in the nuclei at 24 h after PTS (Figure 5e), while the number of cells expressing the protein was reduced by 34% (p ≤ 0.05) compared with the contralateral hemisphere at the same time point (Figure 5f).
2.3. Expression and Localization of Histone Methyltransferase G9a

The level of histone methyltransferase G9a was also low in the nuclear fraction of control samples of the rat cerebral cortex, and in the cytoplasmic fraction it was practically undetectable (Figure 7b). The G9a level in the penumbra increased at 4 and 24 h after PTS (Figure 7a,c).

Immunofluorescence microscopy revealed exclusively nuclear localization of G9a in neurons (Figure 8a), but not in penumbra astrocytes (Figure 9a). The G9a level in penumbra cells significantly increased relative to the level in the contralateral hemisphere of the same rats (Figure 8b) or in the cerebral cortex of sham-operated rats (Figure 8c) at 4 and 24 h after PTS. This increase can be attributed to increased expression of G9a in the nuclei of neurons, as at these time points the coefficient of colocalization of G9a with the neuronal nuclear marker NeuN also increased (Figure 8d), while the colocalization of G9a with the astrocyte marker GFAP did not change (Figure 9b).

In addition, analysis of G9a fluorescence in cell nuclei showed increases of 32% and 44% in its nuclear level at 4 and 24 h after PTS, respectively (Figure 8e), although the number of cells expressing the protein did not increase (Figure 8f).

In summary, an increase in the level of SUV39H1 in the penumbra was found at 24 h after PTS, whereas G9a was overexpressed at both 4 and 24 h after PTS.
2.4. The Changes of Histone H3 Methylation Level in the Penumbra: H3K9diMe and H3K4Me

According to Western blotting data, at both 4 h and 24 h after PTS the level of H3K9diMe in the penumbra significantly increased compared to the cortex of sham-operated animals, by 62% (p < 0.001) and 69% (p < 0.05), respectively (Figure 10a). At the same time, in the control contralateral cortex of the experimental rats, the level of H3K9diMe did not differ significantly from the penumbra, but significantly exceeded the level in the cortex of sham-operated animals.

These data are confirmed by immunofluorescence microscopy (Figure 10b). Immunofluorescence of H3K9diMe-positive cells in the penumbra at PTS4 and PTS24 exceeded that in the cerebral cortex of SO rats by 90% and 60%, respectively (Figure 10c), and that in the unirradiated contralateral cortex of the same animals (CL4 and CL24) by approximately 90% and 100%, respectively (Figure 10d). At the same time, the coefficient M1 of H3K9diMe colocalization with the neuronal marker increased by a factor of 2.4 (Figure 10e), and with the astrocyte marker GFAP by a factor of 2 (Figure 10f). Thus, the level of H3K9diMe increased in penumbra cells relative to the cerebral cortex of SO rats and the contralateral cortex of the same animals at PTS4 and PTS24. This increase occurred in both neurons and astrocytes.

Western blotting (Figure 11a) and immunofluorescence microscopy (Figure 11b) did not reveal significant changes in H3K4Me expression in the penumbra at PTS4 and PTS24 as compared with the contralateral cortex of the same rats or with the cerebral cortex of SO animals. Expression of H3K4Me was observed in the nuclei of neurons and in a small number of astrocytes in the rat cerebral cortex. Colocalization of the protein with cellular markers in the penumbra did not change significantly compared to the control groups (Figure 11e,f).
2.5. DNMT1, SUV39H1, and G9a in Apoptotic Penumbra Cells

Double fluorescent staining of penumbra tissue sections with antibodies against the studied epigenetic proteins and the apoptosis marker TUNEL showed that DNMT1 and G9a, but not SUV39H1, colocalized with the nuclei of apoptotic cells at 24 h after PTS (Figure 12). This indicates the possible involvement of the DNMT1 and G9a proteins in PTS-induced apoptosis.

2.6. Inhibitory Analysis

The nuclei of apoptotic cells were localized in a band about 1–1.5 mm wide on sections of the cerebral cortex of mice subjected to PTS. Apoptosis was not observed to the left and right of this band, where the infarction core and normal tissue are located. The effect of inhibitors on the level of apoptosis in the penumbra was studied at 4 days after PTS (Figure 13c): 5-aza-2′-deoxycytidine, A-366, and BIX01294 each significantly reduced the percentage of apoptotic cells in the penumbra. At 7 days after PTS, this effect persisted only for A-366 (Figure 13d).

The administration of inhibitors of the studied epigenetic proteins had no effect on the infarct volume in the mouse brain at 4 days after PTS. However, in the group of mice injected with A-366 (but not the other inhibitors), the infarction volume was reduced compared to the control group (PTS without inhibitor) at 7 days after PTS (Figure 13).
3. Discussion

DNA methyltransferase DNMT1 attaches methyl groups to one DNA strand where the complementary strand is methylated after cell repair or division. This study has shown that the level of DNMT1 in the nuclear and cytoplasmic fractions of the penumbra tissue did not change at 4 h after PTS in the rat cerebral cortex, but increased at 24 h after PTS. Immunofluorescence microscopy confirmed the increase in DNMT1 in the penumbra at 24 h after PTS, and the colocalization of DNMT1 with the markers of neurons (NeuN) and astrocytes (GFAP) was increased. As NeuN selectively labels the nuclei of neurons, and GFAP stains the bodies of astrocytes, including their processes, these data explain the immunoblotting results: the increase in DNMT1 in the nuclear fraction of the penumbra was likely due to increased expression in the nuclei of neurons, and the increase in the cytoplasmic fraction was due to expression of this protein in the bodies and processes of astrocytes.

The investigated interval of 24 h, when DNMT1 is overexpressed, corresponds to the acute post-stroke period. According to previously published data [21,22], DNMT1 expression and the subsequent global DNA methylation that suppresses protein biosynthesis increased at 16–24 h after cerebral ischemia and reperfusion. A number of authors suggest a relationship between DNMT1 activation, DNA methylation, and disordered gene expression, on the one hand, and neuronal death, on the other [21,22,23,24,25]. It is possible that ischemia and reperfusion, which cause oxidative damage to DNA, stimulate DNA repair in the following hours, which necessitates maintenance DNA methylation and DNMT1 expression. In this case, a correlation arises between DNA damage in the ischemic brain, DNA methylation, and apoptosis. Our study showed colocalization of DNMT1 with apoptotic cell nuclei in the PTS-induced penumbra, and inhibition of DNMT1 by 5-aza-2′-deoxycytidine protected cells of the PTS-induced penumbra from apoptosis. Inhibition, but not complete knockout, of DNMT1 protected postmitotic neurons from ischemic damage [21,22].

In our previous study, we showed that inhibition of DNA methylation by 5-azacytidine and 5-aza-2′-deoxycytidine (decitabine) reduced the level of PDT-induced necrosis of glial cells, but not neurons, by factors of 1.3 and 2.0, respectively, and did not significantly influence apoptosis of glial cells [26]. This suggests the involvement of DNMT1 in apoptosis of penumbra cells after ischemic stroke. Therefore, the inhibition of DNA methylation may be a potential therapeutic strategy for the treatment of stroke.

Histone methyltransferases SUV39H1 and G9a are localized in cell nuclei, where they methylate lysines in histones H3 and H4. Methylation of lysines 9 and 27 in histone H3, as well as lysine 20 in histone H4, globally suppresses transcription, while methylation of lysine 4 in histone H3 often correlates with transcriptional activity. In control samples, SUV39H1 and G9a were detected by immunoblotting in the nuclear fraction of the rat cerebral cortex, while they were practically absent in the cytoplasm. An increase in the level of SUV39H1 in the penumbra was found at 24 h after PTS, and G9a was overexpressed at 4 and 24 h after PTS. In our experiments, photothrombotic stroke in the rat cerebral cortex caused an increase in the level of H3K9diMe, but not H3K4Me, in neurons and astrocytes of the penumbra 4 h after exposure, which lasted for at least 24 h. G9a plays a major role in histone H3 lysine 9 methylation [27], so it is possible that G9a played a key role in the dimethylation of histone H3 (H3K9Me2). However, it should be noted that the main function of SUV39H1 is not di- but trimethylation of H3K9. The H3K9Me3 level was not estimated in these experiments; this should clearly be done in the future to clarify the real role of SUV39H1.

We observed colocalization of G9a with apoptotic cell nuclei in the PTS-induced penumbra. The G9a inhibitors A-366 and BIX01294 protected penumbra cells from apoptosis and reduced the volume of PTS-induced cerebral infarction. This suggests the involvement of G9a in apoptosis of penumbra cells after ischemic stroke.
[ "2.1. Expression and Localization of DNA Methyltransferase DNMT1", "2.2. Expression and Localization of SUV39H1 Histone Methyltransferase", "2.3. Expression and Localization of Histone Methyltransferase G9a", "2.4. The Changes of Histone H3 Methylation Level in the Penumbra: H3K9diMe and H3K4Me", "2.5. DNMT1, SUV39H1, and G9a in Apoptotic Penumbra Cells", "2.6. Inhibitory Analysis", "4.1. Animals", "4.2. Photothrombotic Stroke Model", "4.3. Immunofluorescence Microscopy Analysis", "4.4. Apoptosis Analysis", "4.5. Immunoblotting", "4.6. Inhibitors", "4.7. Determination of the Volume of Infarction" ]
[ "DNA methylation leading to gene silencing is carried out by DNA methyltransferases DNMT1 (supporting methylation in interphase cells) or DNMT3a and DNMT3b (de novo methylation at early stages of development as well as its changes during cell differentiation). DNMT1 maintains the DNA methylation pattern after cell repair or division. It attaches methyl groups to one of the DNA strands where the complementary strand is methylated. In a normal brain DNMT1 has been detected [20]. However, the role of DNMT in the brain’s response to ischemic injury is poorly understood. It has been shown that the global DNA methylation carried out by methyltransferases DNMT is enhanced 16–24 h after cerebral ischemia and reperfusion, which indicates a decrease in biosynthetic processes [21].\nWestern blotting data in our experiments have shown that the level of DNMT1 in both the nuclear and cytoplasmic fractions of the penumbra tissue did not change 4 h after PTS in the rat cerebral cortex, but significantly increased after 24 h (Figure 1).\nThese results were confirmed by the data obtained by immunofluorescence microscopy analysis (Figure 2a–c) that have shown that the level of DNMT1 in the penumbra increased (Figure 2b,c) at 24 h after PTS. A colocalization of DNMT1 with markers of neurons NeuN (Figure 2d) and astrocytes GFAP (Figure 3b) significantly increased.\nAs NeuN selectively labels the nuclei of neurons (Figure 2a), and GFAP stains the bodies of astrocytes, including their processes (Figure 3a), these data explain the results of immunoblotting. The increase in the level of DNMT1 in the nuclear fraction of the penumbra most likely was due to increased expression of neurons in the nuclei, and the increase in the level in the cytoplasmic fraction is due to the expression of this protein in the body and processes of astrocytes.\nAn increase by 75% (p < 0.05) in DNMT1 expression in the nuclei of penumbra cells was shown at 24 h after PTS (Figure 2e). At the same time, there was no increase in the number of cells expressing the protein (Figure 2f).\nDNMT1 is a nuclear protein and its appearance in the cytoplasm of astrocytes could be explained by the fact that after synthesis in the cytoplasm the protein must be transported to the nucleus that occurs in neurons. In astrocytes, synthesis of DNMT1 can also be stimulated, but its transport to the nucleus is difficult or much slower than synthesis, so that it accumulates in the cytoplasm. These data are consistent with the results of works published previously [21,22,23]. A similar nuclear-cytoplasmic localization of DNMT1 was observed in reactive astrocytes after traumatic brain injury in contrast to normal localization in the nuclei of neurons [23].", "In the present experiments histone methyltransferase SUV39H1 was detected exclusively in the nuclear fraction of the rat cerebral cortex in control samples (Figure 4, Figure 5a and Figure 6a). Its level in the cytoplasm of cells was so low that it was practically not detected by immunoblotting (Figure 4c). The level of SUV39H1 in the nuclear fraction of the penumbra did not change at 4 h after PTS, but it significantly increased at 24 h after PTS (Figure 4a,b).\nImmunofluorescence microscopy also revealed an increase in the level of SUV39H1 at 24 h after PTS compared with both control groups (Figure 5a–c). SUV39H1 was localized mainly in the nuclei of neurons (Figure 5a and Figure 6a) and the colocalization of SUV39H1 with neuronal nuclei increased at 24 after PTS (Figure 5d). 
PTS did not affect colocalization of SUV39H1 with the astrocyte marker GFAP (Figure 6b).\nAnalysis of SUV39H1 fluorescence in the nucleus of cells revealed an increase by 63% (p ≤ 0.05) in the level of protein in the nuclei (Figure 5e) at 24 h after PTS, while the number of cells expressing the protein was even reduced compared to the indicator in the contralateral hemisphere in the same period by 34% (p ≤ 0.05) (Figure 5f).", "The level of histone methyltransferase G9a was also low in the nuclear fraction of control samples of the rat cerebral cortex, and in the cytoplasmic fraction it was practically undetectable (Figure 7b). The G9a level in the penumbra increased at 4 and 24 h after PTS (Figure 7a,c).\nImmunofluorescence microscopy revealed exclusively nuclear localization of G9a in neurons (Figure 8a), but not penumbra astrocytes (Figure 9a). The G9a level in the penumbra cells significantly increased relative to the level in the contralateral hemisphere of the same rats (Figure 8b) or in the cerebral cortex of sham-operated rats (Figure 8c) at 4 and 24 h after PTS. This increase can be due to the increased expression of G9a in the nuclei of neurons, as at these time points the coefficient of colocalization G9a with the marker of neuronal nuclei NeuN also increased (Figure 8d), while the colocalization of G9a with the marker of astrocytes GFAP did not change (Figure 9b).\nIn addition, analysis of G9a fluorescence in the nucleus of cells showed an increase by 32% and 44% in its level in the nucleus at 4 and 24 h after PTS (Figure 8e), respectively, although the number of cells expressing the protein did not increase (Figure 8f).\nAn increase in the level of SUV39H1 in the penumbra was found at 24 h after PTS, and G9a was overexpressed at 4 and 24 h after PTS.", "According to Western blotting data at both 4 h and 24 h after PTS the level of H3K9diMe in the penumbra significantly increased compared to the cortex of sham-operated animals by 62 (p < 0.001) and 69% (p < 0.05), respectively (Figure 10a). At the same time, in the control contralateral cortex of experimental rats, the level of H3K9diMe did not differ significantly from the penumbra, but significantly exceeded the level in the cortex of sham-operated animals.\nThese data are confirmed by the results of immunofluorescence microscopy (Figure 10b). Immunofluorescence of H3K9diMe-positive cells in the penumbra at PTS4 and PTS24 exceeded that in the cerebral cortex of SO rats by 90 and 60%, respectively (Figure 10c), and in the unirradiated contralateral cortex of the same animals (CL4 and CL24) approximately by 90 and 100%, respectively (Figure 10d). At the same time, the coefficient M1 of H3K9diMe colocalization with the neuron marker increased by a factor of 2.4 (Figure 10e) and by a factor of 2 with the astrocyte marker GFAP (Figure 10f). Thus, the level of H3K9diMe increased in the penumbra cells relative to the cerebral cortex of SO rats and the contralateral cortex of the same animals at PTS4 and PTS24. This increase occurred in both neurons and astrocytes.\nWestern blotting (Figure 11a) and immunofluorescence microscopy (Figure 11b) did not reveal significant changes in H3K4Me expression in the penumbra at 4PTS4 and PTS24 as compared with the contralateral cortex of the same rats and with the cerebral cortex of SO animals.\nExpression of Н3К4Ме was observed in the nuclei of neurons and in a small number of astrocytes in the rat cerebral cortex. 
Colocalization of the protein with cellular markers in the penumbra did not change significantly compared to the control groups (Figure 11e,f).", "Double fluorescent staining of penumbra tissue sections with antibodies against the studied epigenetic proteins and the apoptosis marker TUNEL showed antibodies against DNMT1 and G9a, but not SUV39H1 colocalized with the nuclei of apoptotic cells at 24 h after PTS (Figure 12). This indicates the possible involvement of DNMT1 and G9a proteins in PTI-induced apoptosis.", "The nuclei of apoptotic cells are localized in a band about 1–1.5 mm wide on sections of the cerebral cortex of mice subjected to PTS. The apoptosis was not observed from the left and right of this band where the infarction nucleus and normal tissue are located. The effect of inhibitors on the level of apoptosis in the penumbra was studied at 4 days after PTS (Figure 13c) where 5-aza-2′-deoxycytidine, A-366, or BIX01294 significantly reduced the percentage of apoptotic cells in the penumbra. This effect persisted only for A-366 at 7 days after PTS. (Figure 13d).\nThe administration of inhibitors of the studied epigenetic proteins to mice had no effect on the volume of infarction of the nervous tissue in the brain of mice at 4 days after PTS. However, in the group of mice injected with A-366 (but not other inhibitors) the infarction volume was reduced compared to the control group (PTS without inhibitor) at 7 days after PTS (Figure 13).", "The experiments were carried out on adult male rats weighing 200–250 g. Experiments with enzyme inhibitors were carried out on male outbred CD-1 mice at the age of 14–15 weeks weighing 20–25 g. The animals were kept under standard conditions with free access to water and food at 22–25 °C, 12-h light/dark schedule, and an air exchange rate of 18 shifts per hour. Body temperature was monitored with a rectal thermometer and maintained within 37 ± 0.5 °C using an electric mat. International, national, and institutional guidelines for the care and conduct of animal experiments were followed. The animal protocols were evaluated and approved by the Animal Care and Use Committee of the Southern Federal University.", "Rats were anesthetized by intraperitoneal administration of telazol (50 mg/kg) and xylazine (10 mg/kg) [28]. For mice, 25 mg/kg telazol and 5 mg/kg xylazine were used for anesthesia [29].\nThe PTS procedure has been previously described [30]. Briefly, PTS in the rat cerebral cortex was induced by laser irradiation (532 nm, 60 mW/cm2, Ø 3 mm, 30 min) of a part of the sensorimotor cortex of the rat brain after intravenous injection of the Rose Bengal photosensitizer (R4507, Sigma, Rehovot, Israel; 20 mg/kg). For PTS in the cerebral cortex of mice, Rose Bengal (15 mg/mL) was injected intraperitoneally (10 μL/g). A section of the mouse skull was freed from the periosteum in the area of the sensorimotor cortex (2 mm lateral to the bregma) and this part of brain was irradiated with a laser (532 nm, 200 mW/cm2, Ø 1 mm, 15 min) 5 min after photosensitizer was applied. The sham-operated animals that underwent the same operations, but without the introduction of a photosensitizer, were used as controls.", "For immunofluorescence microscopy, the rats were anesthetized and transcardially perfused with 10% formalin 4 or 24 h after PTS, as described previously [30]. Briefly, control samples were from the contralateral cortex of the same animals (CL4 and CL24, respectively) or the cerebral cortex of sham-operated rats (SO4 and SO24). 
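As a cross-check of the dosing and irradiation parameters quoted above, the short sketch below (our own arithmetic illustration; the function names are assumptions, not part of the published protocol) converts the stock concentration and per-gram injection volume into a dose in mg/kg, and the irradiance and exposure time into a light fluence in J/cm².

```python
# Back-of-the-envelope checks for the PTS parameters quoted above
# (illustrative arithmetic only; not part of the published protocol).

def dose_mg_per_kg(stock_mg_per_ml: float, volume_ul_per_g: float) -> float:
    """Dose delivered when injecting `volume_ul_per_g` µL of stock per gram
    of body weight; mg/mL and µL/g combine into µg/g, i.e., mg/kg."""
    mg_per_ul = stock_mg_per_ml / 1000.0          # 1 mL = 1000 µL
    return mg_per_ul * volume_ul_per_g * 1000.0   # µg/g -> mg/kg

def fluence_j_per_cm2(irradiance_mw_per_cm2: float, minutes: float) -> float:
    """Light dose: irradiance (mW/cm²) times exposure time (s), in J/cm²."""
    return irradiance_mw_per_cm2 / 1000.0 * minutes * 60.0

# Mouse protocol: 15 mg/mL Rose Bengal at 10 µL/g; 200 mW/cm² for 15 min.
print(dose_mg_per_kg(15, 10))        # 150.0 mg/kg
print(fluence_j_per_cm2(200, 15))    # 180.0 J/cm²
# Rat protocol: 60 mW/cm² for 30 min.
print(fluence_j_per_cm2(60, 30))     # 108.0 J/cm²
```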
4.3. Immunofluorescence Microscopy Analysis

For immunofluorescence microscopy, the rats were anesthetized and transcardially perfused with 10% formalin 4 or 24 h after PTS, as described previously [30]. Control samples were taken from the contralateral cortex of the same animals (CL4 and CL24, respectively) or from the cerebral cortex of sham-operated rats (SO4 and SO24). The brain was fixed in formalin overnight and incubated for 48 h in 20% sucrose in phosphate-buffered saline (PBS) at 4 °C. Frontal brain sections 20 μm thick were prepared using a Leica VT 1000 S vibratome (Freiburg, Germany), frozen in 2-methylbutane, and stored at −80 °C. After thawing, the sections were washed with PBS. Non-specific antibody binding was blocked with 5% BSA and 0.3% Triton X-100 (1 h, 20–25 °C). The sections were then incubated overnight at 4 °C with primary rabbit antibodies (Sigma-Aldrich, Rehovot, Israel): anti-DNMT1 (D4567, 1:500), anti-G9a (09-071, 1:100), anti-SUV39H1 (AV32470, 1:500), anti-histone H3 dimethylated at lysine 9 (anti-H3K9diMe; D5567, 1:500), or anti-histone H3 methylated at lysine 4 (anti-H3K4Me; M4819, 1:500), together with a mouse anti-NeuN (MAB377; 1:1000) or anti-GFAP (SAB5201104; 1:1000) antibody. After washing in PBS, the sections were incubated for 1 h with fluorescently labeled secondary anti-rabbit CF488A (SAB4600045, 1:1000) or anti-mouse CF555 (SAB4600302, 1:1000) antibodies. Sections were mounted on glass slides in 60% glycerol/PBS. Negative controls were processed without primary antibodies. Sections were analyzed using an Eclipse FN1 microscope (Nikon, Tokyo, Japan).

In most of the experiments, fluorescent images were examined in the central region of the penumbra, at a distance of 0.3–0.7 mm from the border of the infarction core. The quantitative assessment of fluorescence was carried out on 10–15 images of experimental and control preparations obtained with the same digital camera settings. The average fluorescence intensity in the area occupied by cells was determined in each image using ImageJ (http://rsb.info.nih.gov/ij/, accessed on 20 October 2021). The corrected total cell fluorescence I (CTCF), proportional to the level of protein expression, was calculated as I = Ii − Ac × Ib, where Ii is the integral fluorescence intensity, Ac is the cell area, and Ib is the average background fluorescence [30]. Threshold values remained constant for all images. The relative change in cell fluorescence in the penumbra compared to the control cortex, ΔI, was calculated as ΔI = (Ipen − Ic)/Ic, where Ipen is the average fluorescence intensity in the penumbra and Ic is the average fluorescence intensity in the control samples.

Additionally, the immunofluorescence of the DNMT1, SUV39H1, and G9a proteins in neuronal nuclei was assessed using the ROI Manager tools in ImageJ. Protein immunofluorescence in cell nuclei was normalized to Hoechst 33342 fluorescence. In addition, the number of cells expressing the protein under study was counted per 100 cells.

Protein colocalization with the neuronal marker NeuN or the astrocyte marker GFAP was assessed in ImageJ with the JACoP plugin. On RGB images (1280 × 960), the Manders coefficient M1 reflects the proportion of pixels containing both red (cell markers or TUNEL staining) and green (protein) signals in the total signal recorded in the red channel. In each brain area, three fields of view were analyzed in 7–10 rats. Statistical processing was performed by one-way ANOVA. Results are presented as M ± SEM.
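The quantification in Section 4.3 reduces to three formulas: the corrected fluorescence I = Ii − Ac × Ib, the relative change ΔI = (Ipen − Ic)/Ic, and the Manders coefficient M1. The sketch below is a minimal NumPy re-implementation under our own naming (the study itself used ImageJ with the JACoP plugin), with M1 computed as the fraction of above-threshold red signal found in pixels that are also green-positive.

```python
import numpy as np

# Minimal re-implementation of the quantification formulas in Section 4.3
# (our own sketch; the study used ImageJ and the JACoP plugin).

def ctcf(integral_intensity: float, cell_area: float, background: float) -> float:
    """Corrected total cell fluorescence: I = Ii - Ac * Ib."""
    return integral_intensity - cell_area * background

def relative_change(i_penumbra: float, i_control: float) -> float:
    """Relative change of penumbra fluorescence vs. control:
    dI = (Ipen - Ic) / Ic."""
    return (i_penumbra - i_control) / i_control

def manders_m1(red: np.ndarray, green: np.ndarray, thr: float = 0.0) -> float:
    """Manders M1: fraction of the red signal that colocalizes with green.
    Pixels count as positive when they exceed the background threshold."""
    red_pos = red > thr
    overlap = red_pos & (green > thr)
    total_red = red[red_pos].sum()
    return float(red[overlap].sum() / total_red) if total_red > 0 else 0.0

# Example with synthetic 4x4 channel images.
rng = np.random.default_rng(0)
red = rng.uniform(0, 1, (4, 4))
green = rng.uniform(0, 1, (4, 4))
print(ctcf(integral_intensity=5.0e4, cell_area=300.0, background=20.0))  # 44000.0
print(relative_change(i_penumbra=1.75, i_control=1.0))                   # 0.75
print(round(manders_m1(red, green, thr=0.5), 3))
```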
4.4. Apoptosis Analysis

Apoptotic cells were visualized by the TUNEL method (TdT-mediated dUTP-X nick end labeling) using the In Situ Cell Death Detection Kit, TMR red (#12156792910, Roche, Mannheim, Germany; red fluorescence), as described previously [30]. Briefly, sections were incubated at 37 °C with a primary antibody against the protein under study, washed, treated with the reagents from this kit, and incubated for 1 h with the secondary antibody anti-rabbit CF488A (SAB4600045, 1:1000; green fluorescence) and with the cell nucleus marker Hoechst 33342 (10 μg/mL; blue fluorescence). The apoptotic index (AI) was calculated as a percentage: AI = (number of TUNEL-positive cells/total number of cells stained with Hoechst 33342) × 100 (a computational sketch is given after Section 4.7). The analysis was performed on 3 images for each of 7–9 animals in the group.

4.5. Immunoblotting

The immunoblotting procedure has been described previously [30]. Briefly, the rats were anesthetized and decapitated 4 or 24 h after PTS. The brain was removed, and a section of the cortex corresponding to the infarction core was excised on ice with a cylindrical knife (Ø 3 mm); with another knife (Ø 7 mm), a 2-mm ring was cut around the irradiation zone, corresponding approximately to the penumbra tissue (experimental sample). A similar control sample was excised from the unirradiated contralateral hemisphere or from the cerebral cortex of sham-operated rats. Pieces of tissue were homogenized on ice, quickly frozen in liquid nitrogen, and stored at −80 °C. After thawing and centrifugation, the cytoplasmic and nuclear fractions were isolated from the homogenates using the CelLytic NuCLEAR Extraction Kit (Sigma-Aldrich). The total supernatant was used as the cytoplasmic fraction, in which the nuclear protein histone H3 was practically undetectable. Primary rabbit antibodies (all Sigma-Aldrich) were used: anti-DNMT1 (D4567, 1:500), anti-G9a (09-071, 1:500), and anti-SUV39H1 (AV32470, 1:500), together with a mouse anti-β-actin antibody (A5441, 1:5000). Secondary antibodies (all Sigma-Aldrich): anti-rabbit IgG-peroxidase (A6154, 1:1000) and anti-mouse IgG-peroxidase (A4416, 1:1000).

4.6. Inhibitors

The study investigated the putative neuroprotective effects of the following inhibitors of the studied epigenetic proteins: an inhibitor of DNMT1 (5-azacytidine or decitabine; 0.2 mg/kg, once a day, 7 days) [31,32,33] and inhibitors of the histone methyltransferases SUV39H1 and G9a, BIX01294 (0.5 mg/kg, once a day, 7 days) [34] and A-366 (2 mg/kg, once a day, 7 days) [35]; the dosing arithmetic is illustrated in a sketch below. The inhibitors were dissolved in dimethyl sulfoxide (DMSO) and then diluted in sterile saline; the final concentration of DMSO was 5%. All inhibitors were administered 1 h after PTS. Mice were decapitated 4 and 7 days after PTS, 1 h after the last injection, to study the level of apoptosis in the peri-infarction area and the volume of infarction.

4.7. Determination of the Volume of Infarction

To determine the volume of infarction at different times after PTS, brain slices of mice were stained with 2,3,5-triphenyltetrazolium chloride (TTC; T8877, Sigma). The mice were anesthetized and decapitated, and the brains were quickly removed and placed in a pre-chilled adult mouse brain matrix (J&K Seiko Electronic Co., Ltd., DongGuan City, China). The matrix with the brain tissue was kept at −80 °C for 3–5 min, and the brain was then sliced into 2-mm sections. The sections were stained with 1% TTC for 30 min in the dark at 37 °C. Using ImageJ (http://rsb.info.nih.gov/ij/, accessed on 20 October 2021), the infarction areas in each section were measured, summed, and multiplied by the section thickness (2 mm).
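The apoptotic index of Section 4.4 is a simple proportion. The sketch below (our illustration; the counts are hypothetical) expresses it as the percentage of TUNEL-positive nuclei among all Hoechst-stained nuclei in a field of view.

```python
# Apoptotic index from Section 4.4 (illustrative sketch with made-up counts):
# AI = TUNEL-positive cells / total Hoechst-stained cells, as a percentage.

def apoptotic_index(tunel_positive: int, hoechst_total: int) -> float:
    if hoechst_total == 0:
        raise ValueError("no Hoechst-stained cells counted")
    return 100.0 * tunel_positive / hoechst_total

# Hypothetical field of view: 23 TUNEL-positive nuclei out of 412 nuclei.
print(round(apoptotic_index(23, 412), 1))  # 5.6 (%)
```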
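For the inhibitor regimens of Section 4.6, the per-injection amount scales linearly with body weight. The sketch below (our own illustration; the 25 g body weight is an example value) converts the quoted mg/kg doses into absolute amounts for a single injection.

```python
# Illustrative dosing arithmetic for the regimens in Section 4.6
# (our own sketch; the body weight is an assumed example).

DOSES_MG_PER_KG = {
    "5-aza-2'-deoxycytidine": 0.2,   # once daily, 7 days
    "BIX01294": 0.5,                 # once daily, 7 days
    "A-366": 2.0,                    # once daily, 7 days
}

def dose_for_animal(drug: str, body_weight_g: float) -> float:
    """Absolute dose (mg) for one injection, given body weight in grams."""
    return DOSES_MG_PER_KG[drug] * body_weight_g / 1000.0

for drug in DOSES_MG_PER_KG:
    print(drug, round(dose_for_animal(drug, body_weight_g=25.0), 4), "mg")
# A-366 for a 25 g mouse: 2.0 * 25 / 1000 = 0.05 mg per injection.
```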
[ "1. Introduction", "2. Results", "2.1. Expression and Localization of DNA Methyltransferase DNMT1", "2.2. Expression and Localization of SUV39H1 Histone Methyltransferase", "2.3. Expression and Localization of Histone Methyltransferase G9a", "2.4. The Changes of Histone H3 Methylation Level in the Penumbra: H3K9diMe and H3K4Me", "2.5. DNMT1, SUV39H1, and G9a in Apoptotic Penumbra Cells", "2.6. Inhibitory Analysis", "3. Discussion", "4. Methods", "4.1. Animals", "4.2. Photothrombotic Stroke Model", "4.3. Immunofluorescence Microscopy Analysis", "4.4. Apoptosis Analysis", "4.5. Immunoblotting", "4.6. Inhibitors", "4.7. Determination of the Volume of Infarction", "5. Conclusions" ]
[ "Cerebral ischemia, a common cerebrovascular disease, is one of the great threats to human health [1]. Blockage of cerebral vessels in ischemic stroke (70–80% of all strokes) disrupts blood flow and the supply of oxygen and glucose to surrounding tissues. In the ischemic nucleus nerve cells die fast. Additionally, toxic factors (glutamate, K+, free radicals, acidosis, etc.) spread and damage the surrounding cells and tissues in the following hours [2,3]. During this time (2–6 h, “therapeutic window”), it is possible to save neurons in the transition zone (penumbra), decrease damage, and reduce the neurological consequences of a stroke. However, studies of potential neuroprotectors—calcium channel blockers, excitotoxicity inhibitors, antiapoptotic agents, antioxidants, etc.—which have shown promise in experiments on cell cultures or laboratory animals, have not proven effective protection of the human brain from stroke without unacceptable side effects [4,5,6,7]. The transcriptional activity in the cell is regulated by epigenetic processes such as DNA methylation/demethylation, acetylation/deacetylation, histone methylation/demethylation, histone phosphorylation, etc. DNA methylation is the most well-studied epigenetic modification. DNA methylation involves the attachment of a methyl group to cytosine in the CpG dinucleotide in DNA, where cytosine and guanine are linked by a phosphate group (5′CpG3′). Oxidation of methylated cytosine leads to its demethylation [5,6,7]. DNA methylation does not occur at every cytosine residue in the genome. Only about 60% of all CpG dinucleotides are methylated. If CpG islands (CGIs) in the promoter region of a gene undergo methylation, this usually leads to its suppression. Regulatory regions of many human and animal genes contain unmethylated CpG dinucleotides grouped into CGIs. These are usually promoters of housekeeping genes expressed in all tissues, and in the promoters of tissue-specific genes, CGIs are unmethylated only in those tissues where this gene is expressed [4,5,6,7]. Adding a methyl group to CpG sites can prevent gene transcription in different ways. DNA methylation can directly prevent the binding of DNA-binding factors to transcription sites. In addition, methyl groups in CpG dinucleotides can be recognized by the methyl-CpG-binding domain (MBD) family, such as MBD1–4 and MECP2 [5,6]. The binding of these proteins to methylated CpG sites recruits histone or chromatin-modifying protein complexes, which will assemble a number of complexes that provide repression of gene transcription [4,5,6,7].\nAt least two methylation systems with different methylases operate in mammalian cells. Methylation de novo is carried out by DNA methyltransferases DNMT3a and DNMT3b, which introduces elements of variability in the methylation profile. Maintenance methylation is provided by DNMT1, which attaches methyl groups to cytosines on one of the DNA strands where the complementary strand is already methylated (accuracy rate > 99%) [8].\nDespite the fact that DNA methylation is a fairly common epigenetic modification, the level of DNA methylation in the mammalian genome does not exceed 1%. This is due to the instability of 5-methylcytosine, which undergoes spontaneous deamination to thymine during DNA repair. This results in a rapid depletion of CpG sites in the genome [9]. 
Nonetheless, DNA methylation is still critical in biological processes such as differentiation, genomic imprinting, genomic stability, and X chromosome inactivation.\nDNA methylation and repression of the expression of a number of genes increase in brain cells after ischemia [10,11,12]. The occlusion of the middle cerebral artery (CCA) within 30 min increased the total level of DNA methylation by a factor of four that correlated with the growth of cell damage and neurological deficit. Changes in DNA methylation after ischemia can occur not only at the global level, but also at the promoters of individual genes [13]. These changes can have both neuroprotective and neurotoxic effects depending on the degree of ischemia damage, the time elapsed after injury, and the site of methylation [14,15].\nMethylation of histones by histone methyltransferases regulates transcriptional processes in cells. It is known that methylation of lysines 9 and 27 in histone H3, as well as lysine 20 in histone H4, globally suppresses transcription, and methylation of lysine 4 in histone H3 often correlates with transcriptional activity [16]. Lysine methyltransferases SUV39H1 and G9a are best studied histone methyltransferases. SUV39H1 and G9a methylate lysines in histones H3 and H4, leading to the formation of large regions of heterochromatin where gene expression is suppressed. G9a is more responsible for mono and dimethylation of lysine 9 in histone H3 (H3K9Me1 and H3K9Me2), while SUV39H1 is responsible for trimethylation (H3K9Me3) [16,17]. SUV39H1 and G9a are also involved in the repression of individual genes and in a number of neuropathological processes [18]. The damage to peripheral nerves increased the expression of SUV39H1 in the nuclei of neurons in the ganglia of the spinal cord that are involved in the transmission of pain information. This caused nociceptive hypersensitivity. It is assumed that SUV39H1 inhibitors can serve as promising drugs for the treatment of pain hypersensitivity caused by damage to peripheral nerves [18]. Pharmacological inhibition or knockout of histone methyltransferases SUV39H1 or G9a protected cultured neurons from ischemic damage in an oxygen- and glucose-free environment [18,19].\nIn this study, we investigated the changes in the expression and intracellular localization of DNA methyltransferase DNMT1, histone methyltransferases SUV39H1 and G9a in penumbra neurons and astrocytes in penumbra nuclear and penumbra cytoplasmic fractions at 4 and 24 h after photothrombotic stroke (PTS). In an attempt to find possible neuroprotectors, we studied the effect of DNMT1 and histone methyltransferase G9a inhibitors on the volume of PTS-induced infarction and apoptosis of penumbra cells in the cortex of mice after PTS.", "2.1. Expression and Localization of DNA Methyltransferase DNMT1 DNA methylation leading to gene silencing is carried out by DNA methyltransferases DNMT1 (supporting methylation in interphase cells) or DNMT3a and DNMT3b (de novo methylation at early stages of development as well as its changes during cell differentiation). DNMT1 maintains the DNA methylation pattern after cell repair or division. It attaches methyl groups to one of the DNA strands where the complementary strand is methylated. In a normal brain DNMT1 has been detected [20]. However, the role of DNMT in the brain’s response to ischemic injury is poorly understood. 
It has been shown that the global DNA methylation carried out by methyltransferases DNMT is enhanced 16–24 h after cerebral ischemia and reperfusion, which indicates a decrease in biosynthetic processes [21].\nWestern blotting data in our experiments have shown that the level of DNMT1 in both the nuclear and cytoplasmic fractions of the penumbra tissue did not change 4 h after PTS in the rat cerebral cortex, but significantly increased after 24 h (Figure 1).\nThese results were confirmed by the data obtained by immunofluorescence microscopy analysis (Figure 2a–c) that have shown that the level of DNMT1 in the penumbra increased (Figure 2b,c) at 24 h after PTS. A colocalization of DNMT1 with markers of neurons NeuN (Figure 2d) and astrocytes GFAP (Figure 3b) significantly increased.\nAs NeuN selectively labels the nuclei of neurons (Figure 2a), and GFAP stains the bodies of astrocytes, including their processes (Figure 3a), these data explain the results of immunoblotting. The increase in the level of DNMT1 in the nuclear fraction of the penumbra most likely was due to increased expression of neurons in the nuclei, and the increase in the level in the cytoplasmic fraction is due to the expression of this protein in the body and processes of astrocytes.\nAn increase by 75% (p < 0.05) in DNMT1 expression in the nuclei of penumbra cells was shown at 24 h after PTS (Figure 2e). At the same time, there was no increase in the number of cells expressing the protein (Figure 2f).\nDNMT1 is a nuclear protein and its appearance in the cytoplasm of astrocytes could be explained by the fact that after synthesis in the cytoplasm the protein must be transported to the nucleus that occurs in neurons. In astrocytes, synthesis of DNMT1 can also be stimulated, but its transport to the nucleus is difficult or much slower than synthesis, so that it accumulates in the cytoplasm. These data are consistent with the results of works published previously [21,22,23]. A similar nuclear-cytoplasmic localization of DNMT1 was observed in reactive astrocytes after traumatic brain injury in contrast to normal localization in the nuclei of neurons [23].\nDNA methylation leading to gene silencing is carried out by DNA methyltransferases DNMT1 (supporting methylation in interphase cells) or DNMT3a and DNMT3b (de novo methylation at early stages of development as well as its changes during cell differentiation). DNMT1 maintains the DNA methylation pattern after cell repair or division. It attaches methyl groups to one of the DNA strands where the complementary strand is methylated. In a normal brain DNMT1 has been detected [20]. However, the role of DNMT in the brain’s response to ischemic injury is poorly understood. It has been shown that the global DNA methylation carried out by methyltransferases DNMT is enhanced 16–24 h after cerebral ischemia and reperfusion, which indicates a decrease in biosynthetic processes [21].\nWestern blotting data in our experiments have shown that the level of DNMT1 in both the nuclear and cytoplasmic fractions of the penumbra tissue did not change 4 h after PTS in the rat cerebral cortex, but significantly increased after 24 h (Figure 1).\nThese results were confirmed by the data obtained by immunofluorescence microscopy analysis (Figure 2a–c) that have shown that the level of DNMT1 in the penumbra increased (Figure 2b,c) at 24 h after PTS. 
A colocalization of DNMT1 with markers of neurons NeuN (Figure 2d) and astrocytes GFAP (Figure 3b) significantly increased.\nAs NeuN selectively labels the nuclei of neurons (Figure 2a), and GFAP stains the bodies of astrocytes, including their processes (Figure 3a), these data explain the results of immunoblotting. The increase in the level of DNMT1 in the nuclear fraction of the penumbra most likely was due to increased expression of neurons in the nuclei, and the increase in the level in the cytoplasmic fraction is due to the expression of this protein in the body and processes of astrocytes.\nAn increase by 75% (p < 0.05) in DNMT1 expression in the nuclei of penumbra cells was shown at 24 h after PTS (Figure 2e). At the same time, there was no increase in the number of cells expressing the protein (Figure 2f).\nDNMT1 is a nuclear protein and its appearance in the cytoplasm of astrocytes could be explained by the fact that after synthesis in the cytoplasm the protein must be transported to the nucleus that occurs in neurons. In astrocytes, synthesis of DNMT1 can also be stimulated, but its transport to the nucleus is difficult or much slower than synthesis, so that it accumulates in the cytoplasm. These data are consistent with the results of works published previously [21,22,23]. A similar nuclear-cytoplasmic localization of DNMT1 was observed in reactive astrocytes after traumatic brain injury in contrast to normal localization in the nuclei of neurons [23].\n2.2. Expression and Localization of SUV39H1 Histone Methyltransferase In the present experiments histone methyltransferase SUV39H1 was detected exclusively in the nuclear fraction of the rat cerebral cortex in control samples (Figure 4, Figure 5a and Figure 6a). Its level in the cytoplasm of cells was so low that it was practically not detected by immunoblotting (Figure 4c). The level of SUV39H1 in the nuclear fraction of the penumbra did not change at 4 h after PTS, but it significantly increased at 24 h after PTS (Figure 4a,b).\nImmunofluorescence microscopy also revealed an increase in the level of SUV39H1 at 24 h after PTS compared with both control groups (Figure 5a–c). SUV39H1 was localized mainly in the nuclei of neurons (Figure 5a and Figure 6a) and the colocalization of SUV39H1 with neuronal nuclei increased at 24 after PTS (Figure 5d). PTS did not affect colocalization of SUV39H1 with the astrocyte marker GFAP (Figure 6b).\nAnalysis of SUV39H1 fluorescence in the nucleus of cells revealed an increase by 63% (p ≤ 0.05) in the level of protein in the nuclei (Figure 5e) at 24 h after PTS, while the number of cells expressing the protein was even reduced compared to the indicator in the contralateral hemisphere in the same period by 34% (p ≤ 0.05) (Figure 5f).\nIn the present experiments histone methyltransferase SUV39H1 was detected exclusively in the nuclear fraction of the rat cerebral cortex in control samples (Figure 4, Figure 5a and Figure 6a). Its level in the cytoplasm of cells was so low that it was practically not detected by immunoblotting (Figure 4c). The level of SUV39H1 in the nuclear fraction of the penumbra did not change at 4 h after PTS, but it significantly increased at 24 h after PTS (Figure 4a,b).\nImmunofluorescence microscopy also revealed an increase in the level of SUV39H1 at 24 h after PTS compared with both control groups (Figure 5a–c). 
SUV39H1 was localized mainly in the nuclei of neurons (Figure 5a and Figure 6a) and the colocalization of SUV39H1 with neuronal nuclei increased at 24 after PTS (Figure 5d). PTS did not affect colocalization of SUV39H1 with the astrocyte marker GFAP (Figure 6b).\nAnalysis of SUV39H1 fluorescence in the nucleus of cells revealed an increase by 63% (p ≤ 0.05) in the level of protein in the nuclei (Figure 5e) at 24 h after PTS, while the number of cells expressing the protein was even reduced compared to the indicator in the contralateral hemisphere in the same period by 34% (p ≤ 0.05) (Figure 5f).\n2.3. Expression and Localization of Histone Methyltransferase G9a The level of histone methyltransferase G9a was also low in the nuclear fraction of control samples of the rat cerebral cortex, and in the cytoplasmic fraction it was practically undetectable (Figure 7b). The G9a level in the penumbra increased at 4 and 24 h after PTS (Figure 7a,c).\nImmunofluorescence microscopy revealed exclusively nuclear localization of G9a in neurons (Figure 8a), but not penumbra astrocytes (Figure 9a). The G9a level in the penumbra cells significantly increased relative to the level in the contralateral hemisphere of the same rats (Figure 8b) or in the cerebral cortex of sham-operated rats (Figure 8c) at 4 and 24 h after PTS. This increase can be due to the increased expression of G9a in the nuclei of neurons, as at these time points the coefficient of colocalization G9a with the marker of neuronal nuclei NeuN also increased (Figure 8d), while the colocalization of G9a with the marker of astrocytes GFAP did not change (Figure 9b).\nIn addition, analysis of G9a fluorescence in the nucleus of cells showed an increase by 32% and 44% in its level in the nucleus at 4 and 24 h after PTS (Figure 8e), respectively, although the number of cells expressing the protein did not increase (Figure 8f).\nAn increase in the level of SUV39H1 in the penumbra was found at 24 h after PTS, and G9a was overexpressed at 4 and 24 h after PTS.\nThe level of histone methyltransferase G9a was also low in the nuclear fraction of control samples of the rat cerebral cortex, and in the cytoplasmic fraction it was practically undetectable (Figure 7b). The G9a level in the penumbra increased at 4 and 24 h after PTS (Figure 7a,c).\nImmunofluorescence microscopy revealed exclusively nuclear localization of G9a in neurons (Figure 8a), but not penumbra astrocytes (Figure 9a). The G9a level in the penumbra cells significantly increased relative to the level in the contralateral hemisphere of the same rats (Figure 8b) or in the cerebral cortex of sham-operated rats (Figure 8c) at 4 and 24 h after PTS. This increase can be due to the increased expression of G9a in the nuclei of neurons, as at these time points the coefficient of colocalization G9a with the marker of neuronal nuclei NeuN also increased (Figure 8d), while the colocalization of G9a with the marker of astrocytes GFAP did not change (Figure 9b).\nIn addition, analysis of G9a fluorescence in the nucleus of cells showed an increase by 32% and 44% in its level in the nucleus at 4 and 24 h after PTS (Figure 8e), respectively, although the number of cells expressing the protein did not increase (Figure 8f).\nAn increase in the level of SUV39H1 in the penumbra was found at 24 h after PTS, and G9a was overexpressed at 4 and 24 h after PTS.\n2.4. 
The Changes of Histone H3 Methylation Level in the Penumbra: H3K9diMe and H3K4Me According to Western blotting data at both 4 h and 24 h after PTS the level of H3K9diMe in the penumbra significantly increased compared to the cortex of sham-operated animals by 62 (p < 0.001) and 69% (p < 0.05), respectively (Figure 10a). At the same time, in the control contralateral cortex of experimental rats, the level of H3K9diMe did not differ significantly from the penumbra, but significantly exceeded the level in the cortex of sham-operated animals.\nThese data are confirmed by the results of immunofluorescence microscopy (Figure 10b). Immunofluorescence of H3K9diMe-positive cells in the penumbra at PTS4 and PTS24 exceeded that in the cerebral cortex of SO rats by 90 and 60%, respectively (Figure 10c), and in the unirradiated contralateral cortex of the same animals (CL4 and CL24) approximately by 90 and 100%, respectively (Figure 10d). At the same time, the coefficient M1 of H3K9diMe colocalization with the neuron marker increased by a factor of 2.4 (Figure 10e) and by a factor of 2 with the astrocyte marker GFAP (Figure 10f). Thus, the level of H3K9diMe increased in the penumbra cells relative to the cerebral cortex of SO rats and the contralateral cortex of the same animals at PTS4 and PTS24. This increase occurred in both neurons and astrocytes.\nWestern blotting (Figure 11a) and immunofluorescence microscopy (Figure 11b) did not reveal significant changes in H3K4Me expression in the penumbra at 4PTS4 and PTS24 as compared with the contralateral cortex of the same rats and with the cerebral cortex of SO animals.\nExpression of Н3К4Ме was observed in the nuclei of neurons and in a small number of astrocytes in the rat cerebral cortex. Colocalization of the protein with cellular markers in the penumbra did not change significantly compared to the control groups (Figure 11e,f).\nAccording to Western blotting data at both 4 h and 24 h after PTS the level of H3K9diMe in the penumbra significantly increased compared to the cortex of sham-operated animals by 62 (p < 0.001) and 69% (p < 0.05), respectively (Figure 10a). At the same time, in the control contralateral cortex of experimental rats, the level of H3K9diMe did not differ significantly from the penumbra, but significantly exceeded the level in the cortex of sham-operated animals.\nThese data are confirmed by the results of immunofluorescence microscopy (Figure 10b). Immunofluorescence of H3K9diMe-positive cells in the penumbra at PTS4 and PTS24 exceeded that in the cerebral cortex of SO rats by 90 and 60%, respectively (Figure 10c), and in the unirradiated contralateral cortex of the same animals (CL4 and CL24) approximately by 90 and 100%, respectively (Figure 10d). At the same time, the coefficient M1 of H3K9diMe colocalization with the neuron marker increased by a factor of 2.4 (Figure 10e) and by a factor of 2 with the astrocyte marker GFAP (Figure 10f). Thus, the level of H3K9diMe increased in the penumbra cells relative to the cerebral cortex of SO rats and the contralateral cortex of the same animals at PTS4 and PTS24. 
This increase occurred in both neurons and astrocytes.\nWestern blotting (Figure 11a) and immunofluorescence microscopy (Figure 11b) did not reveal significant changes in H3K4Me expression in the penumbra at 4PTS4 and PTS24 as compared with the contralateral cortex of the same rats and with the cerebral cortex of SO animals.\nExpression of Н3К4Ме was observed in the nuclei of neurons and in a small number of astrocytes in the rat cerebral cortex. Colocalization of the protein with cellular markers in the penumbra did not change significantly compared to the control groups (Figure 11e,f).\n2.5. DNMT1, SUV39H1, and G9a in Apoptotic Penumbra Cells Double fluorescent staining of penumbra tissue sections with antibodies against the studied epigenetic proteins and the apoptosis marker TUNEL showed antibodies against DNMT1 and G9a, but not SUV39H1 colocalized with the nuclei of apoptotic cells at 24 h after PTS (Figure 12). This indicates the possible involvement of DNMT1 and G9a proteins in PTI-induced apoptosis.\nDouble fluorescent staining of penumbra tissue sections with antibodies against the studied epigenetic proteins and the apoptosis marker TUNEL showed antibodies against DNMT1 and G9a, but not SUV39H1 colocalized with the nuclei of apoptotic cells at 24 h after PTS (Figure 12). This indicates the possible involvement of DNMT1 and G9a proteins in PTI-induced apoptosis.\n2.6. Inhibitory Analysis The nuclei of apoptotic cells are localized in a band about 1–1.5 mm wide on sections of the cerebral cortex of mice subjected to PTS. The apoptosis was not observed from the left and right of this band where the infarction nucleus and normal tissue are located. The effect of inhibitors on the level of apoptosis in the penumbra was studied at 4 days after PTS (Figure 13c) where 5-aza-2′-deoxycytidine, A-366, or BIX01294 significantly reduced the percentage of apoptotic cells in the penumbra. This effect persisted only for A-366 at 7 days after PTS. (Figure 13d).\nThe administration of inhibitors of the studied epigenetic proteins to mice had no effect on the volume of infarction of the nervous tissue in the brain of mice at 4 days after PTS. However, in the group of mice injected with A-366 (but not other inhibitors) the infarction volume was reduced compared to the control group (PTS without inhibitor) at 7 days after PTS (Figure 13).\nThe nuclei of apoptotic cells are localized in a band about 1–1.5 mm wide on sections of the cerebral cortex of mice subjected to PTS. The apoptosis was not observed from the left and right of this band where the infarction nucleus and normal tissue are located. The effect of inhibitors on the level of apoptosis in the penumbra was studied at 4 days after PTS (Figure 13c) where 5-aza-2′-deoxycytidine, A-366, or BIX01294 significantly reduced the percentage of apoptotic cells in the penumbra. This effect persisted only for A-366 at 7 days after PTS. (Figure 13d).\nThe administration of inhibitors of the studied epigenetic proteins to mice had no effect on the volume of infarction of the nervous tissue in the brain of mice at 4 days after PTS. 
However, in the group of mice injected with A-366 (but not other inhibitors) the infarction volume was reduced compared to the control group (PTS without inhibitor) at 7 days after PTS (Figure 13).", "DNA methylation leading to gene silencing is carried out by DNA methyltransferases DNMT1 (supporting methylation in interphase cells) or DNMT3a and DNMT3b (de novo methylation at early stages of development as well as its changes during cell differentiation). DNMT1 maintains the DNA methylation pattern after cell repair or division. It attaches methyl groups to one of the DNA strands where the complementary strand is methylated. In a normal brain DNMT1 has been detected [20]. However, the role of DNMT in the brain’s response to ischemic injury is poorly understood. It has been shown that the global DNA methylation carried out by methyltransferases DNMT is enhanced 16–24 h after cerebral ischemia and reperfusion, which indicates a decrease in biosynthetic processes [21].\nWestern blotting data in our experiments have shown that the level of DNMT1 in both the nuclear and cytoplasmic fractions of the penumbra tissue did not change 4 h after PTS in the rat cerebral cortex, but significantly increased after 24 h (Figure 1).\nThese results were confirmed by the data obtained by immunofluorescence microscopy analysis (Figure 2a–c) that have shown that the level of DNMT1 in the penumbra increased (Figure 2b,c) at 24 h after PTS. A colocalization of DNMT1 with markers of neurons NeuN (Figure 2d) and astrocytes GFAP (Figure 3b) significantly increased.\nAs NeuN selectively labels the nuclei of neurons (Figure 2a), and GFAP stains the bodies of astrocytes, including their processes (Figure 3a), these data explain the results of immunoblotting. The increase in the level of DNMT1 in the nuclear fraction of the penumbra most likely was due to increased expression of neurons in the nuclei, and the increase in the level in the cytoplasmic fraction is due to the expression of this protein in the body and processes of astrocytes.\nAn increase by 75% (p < 0.05) in DNMT1 expression in the nuclei of penumbra cells was shown at 24 h after PTS (Figure 2e). At the same time, there was no increase in the number of cells expressing the protein (Figure 2f).\nDNMT1 is a nuclear protein and its appearance in the cytoplasm of astrocytes could be explained by the fact that after synthesis in the cytoplasm the protein must be transported to the nucleus that occurs in neurons. In astrocytes, synthesis of DNMT1 can also be stimulated, but its transport to the nucleus is difficult or much slower than synthesis, so that it accumulates in the cytoplasm. These data are consistent with the results of works published previously [21,22,23]. A similar nuclear-cytoplasmic localization of DNMT1 was observed in reactive astrocytes after traumatic brain injury in contrast to normal localization in the nuclei of neurons [23].", "In the present experiments histone methyltransferase SUV39H1 was detected exclusively in the nuclear fraction of the rat cerebral cortex in control samples (Figure 4, Figure 5a and Figure 6a). Its level in the cytoplasm of cells was so low that it was practically not detected by immunoblotting (Figure 4c). The level of SUV39H1 in the nuclear fraction of the penumbra did not change at 4 h after PTS, but it significantly increased at 24 h after PTS (Figure 4a,b).\nImmunofluorescence microscopy also revealed an increase in the level of SUV39H1 at 24 h after PTS compared with both control groups (Figure 5a–c). 
SUV39H1 was localized mainly in the nuclei of neurons (Figure 5a and Figure 6a) and the colocalization of SUV39H1 with neuronal nuclei increased at 24 after PTS (Figure 5d). PTS did not affect colocalization of SUV39H1 with the astrocyte marker GFAP (Figure 6b).\nAnalysis of SUV39H1 fluorescence in the nucleus of cells revealed an increase by 63% (p ≤ 0.05) in the level of protein in the nuclei (Figure 5e) at 24 h after PTS, while the number of cells expressing the protein was even reduced compared to the indicator in the contralateral hemisphere in the same period by 34% (p ≤ 0.05) (Figure 5f).", "The level of histone methyltransferase G9a was also low in the nuclear fraction of control samples of the rat cerebral cortex, and in the cytoplasmic fraction it was practically undetectable (Figure 7b). The G9a level in the penumbra increased at 4 and 24 h after PTS (Figure 7a,c).\nImmunofluorescence microscopy revealed exclusively nuclear localization of G9a in neurons (Figure 8a), but not penumbra astrocytes (Figure 9a). The G9a level in the penumbra cells significantly increased relative to the level in the contralateral hemisphere of the same rats (Figure 8b) or in the cerebral cortex of sham-operated rats (Figure 8c) at 4 and 24 h after PTS. This increase can be due to the increased expression of G9a in the nuclei of neurons, as at these time points the coefficient of colocalization G9a with the marker of neuronal nuclei NeuN also increased (Figure 8d), while the colocalization of G9a with the marker of astrocytes GFAP did not change (Figure 9b).\nIn addition, analysis of G9a fluorescence in the nucleus of cells showed an increase by 32% and 44% in its level in the nucleus at 4 and 24 h after PTS (Figure 8e), respectively, although the number of cells expressing the protein did not increase (Figure 8f).\nAn increase in the level of SUV39H1 in the penumbra was found at 24 h after PTS, and G9a was overexpressed at 4 and 24 h after PTS.", "According to Western blotting data at both 4 h and 24 h after PTS the level of H3K9diMe in the penumbra significantly increased compared to the cortex of sham-operated animals by 62 (p < 0.001) and 69% (p < 0.05), respectively (Figure 10a). At the same time, in the control contralateral cortex of experimental rats, the level of H3K9diMe did not differ significantly from the penumbra, but significantly exceeded the level in the cortex of sham-operated animals.\nThese data are confirmed by the results of immunofluorescence microscopy (Figure 10b). Immunofluorescence of H3K9diMe-positive cells in the penumbra at PTS4 and PTS24 exceeded that in the cerebral cortex of SO rats by 90 and 60%, respectively (Figure 10c), and in the unirradiated contralateral cortex of the same animals (CL4 and CL24) approximately by 90 and 100%, respectively (Figure 10d). At the same time, the coefficient M1 of H3K9diMe colocalization with the neuron marker increased by a factor of 2.4 (Figure 10e) and by a factor of 2 with the astrocyte marker GFAP (Figure 10f). Thus, the level of H3K9diMe increased in the penumbra cells relative to the cerebral cortex of SO rats and the contralateral cortex of the same animals at PTS4 and PTS24. 
This increase occurred in both neurons and astrocytes.\nWestern blotting (Figure 11a) and immunofluorescence microscopy (Figure 11b) did not reveal significant changes in H3K4Me expression in the penumbra at PTS4 and PTS24 as compared with the contralateral cortex of the same rats and with the cerebral cortex of SO animals.\nExpression of H3K4Me was observed in the nuclei of neurons and in a small number of astrocytes in the rat cerebral cortex. Colocalization of the protein with cellular markers in the penumbra did not change significantly compared to the control groups (Figure 11e,f).", "Double fluorescent staining of penumbra tissue sections with antibodies against the studied epigenetic proteins and the apoptosis marker TUNEL showed that DNMT1 and G9a, but not SUV39H1, colocalized with the nuclei of apoptotic cells at 24 h after PTS (Figure 12). This indicates the possible involvement of the DNMT1 and G9a proteins in PTS-induced apoptosis.", "The nuclei of apoptotic cells are localized in a band about 1–1.5 mm wide on sections of the cerebral cortex of mice subjected to PTS. Apoptosis was not observed to the left or right of this band, where the infarction nucleus and normal tissue are located. The effect of the inhibitors on the level of apoptosis in the penumbra was studied at 4 days after PTS (Figure 13c): 5-aza-2′-deoxycytidine, A-366, and BIX01294 each significantly reduced the percentage of apoptotic cells in the penumbra. This effect persisted only for A-366 at 7 days after PTS (Figure 13d).\nThe administration of inhibitors of the studied epigenetic proteins to mice had no effect on the volume of infarction of the nervous tissue in the brain of mice at 4 days after PTS. However, in the group of mice injected with A-366 (but not other inhibitors), the infarction volume was reduced compared to the control group (PTS without inhibitor) at 7 days after PTS (Figure 13).", "DNA methyltransferase DNMT1 attaches methyl groups to one of the DNA strands where the complementary strand is methylated after cell repair or division. This study has shown that the level of DNMT1 in the nuclear and cytoplasmic fractions of the penumbra tissue did not change at 4 h after PTS in the rat cerebral cortex, but increased at 24 h after PTS. Immunofluorescence microscopy confirmed an increase in the level of DNMT1 in the penumbra at 24 h after PTS. The colocalization of DNMT1 with markers of neurons (NeuN) and astrocytes (GFAP) was increased. As NeuN selectively labels the nuclei of neurons, and GFAP stains the bodies of astrocytes, including their processes, these data explain the results of immunoblotting. It is likely that the increase in the level of DNMT1 in the nuclear fraction of the penumbra was due to increased expression in the nuclei of neurons, and the increase in the level of DNMT1 in the cytoplasmic fraction was due to the expression of this protein in the bodies and processes of astrocytes.\nThe investigated interval, 24 h, when DNMT1 is expressed, corresponds to the acute post-stroke period. According to data published previously [21,22], DNMT1 expression and the subsequent global DNA methylation that suppresses protein biosynthesis increased at 16–24 h after cerebral ischemia and reperfusion. A number of authors link DNMT1 activation, DNA methylation, and gene expression disorders to neuronal death [21,22,23,24,25].
It is possible that ischemia and reperfusion, which cause oxidative damage to DNA, stimulate DNA repair in the following hours, which necessitates the maintenance of DNA methylation and DNMT1 expression. In this case, a correlation emerges between DNA damage in the ischemic brain, DNA methylation, and apoptosis. Our study showed colocalization of DNMT1 with apoptotic cell nuclei in the PTS-induced penumbra. Inhibition of DNMT1 by 5-aza-2′-deoxycytidine protected cells of the PTS-induced penumbra from apoptosis. The inhibition, but not complete knockout, of DNMT1 protected postmitotic neurons from ischemic damage [21,22].\nIn our previous study, we have shown that inhibition of DNA methylation by 5-azacytidine and 5-aza-2′-deoxycytidine (decitabine) reduced the level of PDT-induced necrosis of glial cells, but not neurons, by factors of 1.3 and 2.0, respectively, and did not significantly influence apoptosis of glial cells [26].\nThis suggests the involvement of DNMT1 in apoptosis of penumbra cells after ischemic stroke. Therefore, the inhibition of DNA methylation may be a potential therapeutic strategy for the treatment of stroke.\nHistone methyltransferases SUV39H1 and G9a are localized in cell nuclei, where they methylate lysines in histones H3 and H4. It is known that methylation of lysines 9 and 27 in histone H3, as well as lysine 20 in histone H4, globally suppresses transcription, while methylation of lysine 4 in histone H3 often correlates with transcriptional activity. In control samples, SUV39H1 and G9a were detected by immunoblotting in the nuclear fraction of the rat cerebral cortex, while they were practically absent in the cytoplasm of the cells. An increase in the level of SUV39H1 in the penumbra was found at 24 h after PTS, and G9a was overexpressed at 4 and 24 h after PTS. In our experiments, photothrombotic stroke in the rat cerebral cortex caused an increase in the level of H3K9diMe, but not H3K4Me, in neurons and astrocytes of the penumbra 4 h after exposure, and it lasted for at least 24 h. G9a plays a major role in histone H3 lysine 9 methylation [27]. It is possible that G9a played a key role in the dimethylation of histone H3 (H3K9Me2). However, it should be noted that the main function of SUV39H1 is not di-, but trimethylation of H3K9. In the present experiments, the H3K9Me3 level was not estimated; this clearly must be done in the future to clarify the real role of SUV39H1.\nWe observed colocalization of G9a with apoptotic cell nuclei in the PTS-induced penumbra. The G9a inhibitors A-366 and BIX01294 protected penumbra cells from apoptosis and reduced the volume of PTS-induced cerebral infarction. This suggests the involvement of G9a in apoptosis of penumbra cells after ischemic stroke.", "4.1. Animals The experiments were carried out on adult male rats weighing 200–250 g. Experiments with enzyme inhibitors were carried out on male outbred CD-1 mice at the age of 14–15 weeks, weighing 20–25 g. The animals were kept under standard conditions with free access to water and food at 22–25 °C, a 12-h light/dark schedule, and an air exchange rate of 18 shifts per hour. Body temperature was monitored with a rectal thermometer and maintained within 37 ± 0.5 °C using an electric mat. International, national, and institutional guidelines for the care and conduct of animal experiments were followed. The animal protocols were evaluated and approved by the Animal Care and Use Committee of the Southern Federal University.
4.2. Photothrombotic Stroke Model Rats were anesthetized by intraperitoneal administration of telazol (50 mg/kg) and xylazine (10 mg/kg) [28]. For mice, 25 mg/kg telazol and 5 mg/kg xylazine were used for anesthesia [29].\nThe PTS procedure has been previously described [30]. Briefly, PTS in the rat cerebral cortex was induced by laser irradiation (532 nm, 60 mW/cm2, Ø 3 mm, 30 min) of a part of the sensorimotor cortex of the rat brain after intravenous injection of the Rose Bengal photosensitizer (R4507, Sigma, Rehovot, Israel; 20 mg/kg). For PTS in the cerebral cortex of mice, Rose Bengal (15 mg/mL) was injected intraperitoneally (10 μL/g). A section of the mouse skull was freed from the periosteum in the area of the sensorimotor cortex (2 mm lateral to the bregma), and this part of the brain was irradiated with a laser (532 nm, 200 mW/cm2, Ø 1 mm, 15 min) 5 min after the photosensitizer was applied. Sham-operated animals that underwent the same operations, but without the introduction of the photosensitizer, were used as controls.
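For orientation, the light dose (fluence) delivered under these settings can be estimated as irradiance multiplied by exposure time; the figures below are our own arithmetic from the stated parameters, not values reported in the text:

\[
H_{\mathrm{rat}} = 60~\mathrm{mW/cm^2} \times 1800~\mathrm{s} = 108~\mathrm{J/cm^2},
\qquad
H_{\mathrm{mouse}} = 200~\mathrm{mW/cm^2} \times 900~\mathrm{s} = 180~\mathrm{J/cm^2}.
\]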
4.3. Immunofluorescence Microscopy Analysis For immunofluorescence microscopy, the rats were anesthetized and transcardially perfused with 10% formalin 4 or 24 h after PTS, as described previously [30]. Briefly, control samples were from the contralateral cortex of the same animals (CL4 and CL24, respectively) or the cerebral cortex of sham-operated rats (SO4 and SO24). The brain was fixed in formalin overnight and incubated for 48 h in 20% sucrose in phosphate buffer (PBS) at 4 °C. Frontal sections of the brain with a thickness of 20 μm were prepared using a Leica VT 1000 S vibratome (Freiburg, Germany). They were frozen in 2-methylbutane and stored at −80 °C. After thawing, the sections were washed with PBS. The sections were then incubated overnight at 4 °C in the same solution with primary rabbit antibodies (SigmaAldrich, Rehovot, Israel): anti-DNMT1 (D4567, 1:500), anti-G9a (09–071, 1:100), anti-SUV39H1 (AV32470, 1:500) and anti-NeuN mouse antibody (MAB377; 1:1000) or anti-GFAP (SAB5201104; 1:1000), anti-histone H3, dimethylated at lysine 9 (anti-H3K9diMe; D5567, 1:500), and anti-histone H3, methylated at lysine 4 (anti-H3K4Me; M4819, 1:500). After washing in PBS, the sections were incubated for 1 h with fluorescently labeled secondary anti-rabbit CF488A (SAB4600045, 1:1000) or anti-mouse CF555 (SAB4600302, 1:1000) antibodies. Non-specific antibody binding was blocked by 5% BSA with 0.3% Triton X-100 (1 h, 20–25 °C). Sections were mounted on glass slides in 60% glycerol/PBS. Negative control: no primary antibodies. Sections were analyzed using an Eclipse FN1 microscope (Nikon, Tokyo, Japan).\nIn most of the experiments, fluorescent images were studied in the central region of the penumbra, at a distance of 0.3–0.7 mm from the border of the infarction nucleus. The quantitative assessment of fluorescence was carried out on 10–15 images of experimental and control preparations obtained with the same settings of a digital camera. The average fluorescence intensity in the area occupied by cells was determined in each image using the ImageJ software (http://rsb.info.nih.gov/ij/, accessed on 20 October 2021). The corrected total cell fluorescence (CTCF), I, proportional to the level of protein expression, was calculated as I = Ii − Ac × Ib, where Ii is the integral fluorescence intensity, Ac is the cell area, and Ib is the average background fluorescence [30]. Threshold values remained constant for all images. The relative changes in the fluorescence of cells in the penumbra compared to the control cortex, ΔI, were calculated as ΔI = (Ipen − Ic)/Ic, where Ipen is the average fluorescence intensity in the penumbra, and Ic is the average fluorescence intensity in the control samples.
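A minimal sketch of these two calculations in Python (NumPy), assuming per-cell measurements exported from ImageJ; the array and variable names are ours:

import numpy as np

def ctcf(integral_intensity, cell_area, background_mean):
    # Corrected total cell fluorescence: I = Ii - Ac x Ib
    return integral_intensity - cell_area * background_mean

def relative_change(i_penumbra, i_control):
    # Relative change vs. control cortex: dI = (Ipen - Ic) / Ic
    return (np.mean(i_penumbra) - np.mean(i_control)) / np.mean(i_control)

# Example with made-up numbers:
i_pen = ctcf(np.array([5200.0, 4800.0]), np.array([310.0, 290.0]), 4.0)
i_ctl = ctcf(np.array([3900.0, 4100.0]), np.array([300.0, 305.0]), 4.0)
print(relative_change(i_pen, i_ctl))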
Additionally, using the ROI Manager tools in ImageJ, the immunofluorescence of the DNMT1, SUV39H1, and G9a proteins in neuronal nuclei was assessed. Immunofluorescence data of proteins in cell nuclei were normalized to Hoechst 33342 fluorescence. In addition, the number of cells expressing the protein under study was calculated per 100 cells.\nProtein colocalization with the neuronal marker NeuN or the astrocyte marker GFAP was assessed using ImageJ (http://rsb.info.nih.gov/ij/, accessed on 20 October 2021) with the JACoP plugin. On RGB images (1280 × 960), the Manders coefficient M1 reflects the proportion of pixels containing red (cell markers or TUNEL staining) and green (proteins) signals in the total signal recorded in the red channel. In each area of the brain, three fields of view were analyzed in 7–10 rats. Statistical processing was performed using one-way ANOVA. Results are presented as M ± SEM.
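A sketch of the M1 coefficient as defined above (the share of total red-channel signal found in pixels that also carry a green signal), together with one-way ANOVA and M ± SEM reporting; this is our reading of the definitions, run on invented data:

import numpy as np
from scipy.stats import f_oneway, sem

def manders_m1(red, green, green_threshold=0):
    # M1: fraction of the total red signal located in pixels where green is present.
    coloc = green > green_threshold
    return red[coloc].sum() / red.sum()

# Example: three groups compared by one-way ANOVA, reported as M +/- SEM.
rng = np.random.default_rng(0)
g1, g2, g3 = rng.random(10), rng.random(10) + 0.2, rng.random(10)
F, p = f_oneway(g1, g2, g3)
print(f"group 1: {g1.mean():.2f} +/- {sem(g1):.2f}; ANOVA p = {p:.3f}")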
4.4. Apoptosis Analysis Apoptotic cells were visualized by the TUNEL method (TdT-mediated dUTP-X nick end labeling) using the In Situ Cell Death Detection Kit, TMR red (# 12156792910, Roche, Mannheim, Germany) (red fluorescence), as described previously [30]. Briefly, sections were incubated at 37 °C with a primary antibody against the protein under study (green fluorescence), washed, treated with reagents from this kit, and incubated for 1 h with the secondary antibody anti-rabbit CF488A (SAB4600045, 1:1000) (green fluorescence) and with the cell nucleus marker Hoechst 33342 (10 μg/mL, blue fluorescence). The apoptotic index (AI) was calculated as a percentage: AI = (number of TUNEL-positive cells / total number of cells stained with Hoechst 33342) × 100%. The analysis was performed on 3 images for each of 7–9 animals in the group.
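The AI computation transcribed directly into Python; the ×100 factor follows from the "calculated as a percentage" wording:

def apoptotic_index(n_tunel_positive, n_hoechst_total):
    # AI (%) = TUNEL-positive cells / all Hoechst 33342-stained cells x 100
    return 100.0 * n_tunel_positive / n_hoechst_total

print(apoptotic_index(18, 240))  # 7.5, i.e., 7.5% of the counted cells are apoptotic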
4.5. Immunoblotting The immunoblotting procedure has been previously described [30]. Briefly, the rats were anesthetized and decapitated 4 or 24 h after PTS. The brain was removed, and a section of the cortex corresponding to the infarction nucleus was removed on ice with a cylindrical knife (Ø 3 mm); with another knife (Ø 7 mm), a 2-mm ring was cut around the irradiation zone, approximately corresponding to the penumbra tissue (experimental sample). A similar control sample was excised in the unirradiated contralateral hemisphere or in the cerebral cortex of sham-operated rats. Pieces of tissue were homogenized on ice, quickly frozen in liquid nitrogen, and then stored at −80 °C. After thawing and centrifugation using the CelLytic NuCLEAR Extraction Kit (SigmaAldrich), the cytoplasmic and nuclear fractions were isolated from the homogenates. The total supernatant was used as the cytoplasmic fraction, in which the nuclear protein histone H3 was practically not detected. Primary rabbit antibodies (all SigmaAldrich) were used in the experiments: anti-DNMT1 (D4567, 1:500), anti-G9a (09-071, 1:500), anti-SUV39H1 (AV32470, 1:500), and mouse anti-β-actin antibody (A5441, 1:5000). Secondary antibodies (all SigmaAldrich): anti-Rabbit IgG-Peroxidase (A6154, 1:1000) and anti-Mouse IgG-Peroxidase (A4416, 1:1000).\n4.6. Inhibitors The study investigated the putative neuroprotective effects of the following inhibitors of the epigenetic proteins: the DNMT1 inhibitor 5-aza-2′-deoxycytidine (decitabine; 0.2 mg/kg, once a day, 7 days) [31,32,33], and the inhibitors of the histone methyltransferases SUV39H1 and G9a, BIX01294 (0.5 mg/kg, once a day, 7 days) [34] and A-366 (2 mg/kg, once a day, 7 days) [35]. They were dissolved in dimethyl sulfoxide (DMSO) and then diluted in sterile saline; the final concentration of DMSO was 5%. All inhibitors were administered 1 h after PTS. Mice were decapitated 4 and 7 days after PTS and 1 h after the last injection of the drugs to study the level of apoptosis in the peri-infarction area and the volume of infarction.
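As a worked example, the absolute per-injection dose implied by these mg/kg figures for a 25 g mouse is as follows (our arithmetic, for illustration):

def dose_mg(dose_mg_per_kg, body_mass_g):
    # Convert a mg/kg dose to the absolute amount for one injection.
    return dose_mg_per_kg * body_mass_g / 1000.0

for name, d in [("5-aza-2'-deoxycytidine", 0.2), ("BIX01294", 0.5), ("A-366", 2.0)]:
    print(f"{name}: {dose_mg(d, 25):.4f} mg per injection")
# 0.0050, 0.0125, and 0.0500 mg, respectively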
4.7. Determination of the Volume of Infarction To determine the volume of infarction at different times after PTS, brain slices of mice were stained with 2,3,5-triphenyltetrazolium chloride (TTX; T8877, Sigma). For this, the mice were anesthetized and decapitated, and the brains were quickly removed and placed in a pre-chilled adult mouse brain matrix (J&K Seiko Electronic Co., Ltd., DongGuan City, China). The matrix with brain tissue was transferred to a freezer (−80 °C) for 3–5 min and sliced 2 mm thick. These sections were stained with 1% TTX for 30 min in the dark at 37 °C. Using the ImageJ image analysis software (http://rsb.info.nih.gov/ij/, accessed on 20 October 2021), the areas of the infarction zones in each section were measured, summed up, and multiplied by the section thickness (2 mm).",
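The volume reconstruction described here is a simple sum of per-section areas multiplied by the section thickness; a sketch with hypothetical areas:

def infarct_volume_mm3(section_areas_mm2, section_thickness_mm=2.0):
    # Sum the infarct areas measured in each 2-mm section, then multiply
    # by the section thickness to approximate the total lesion volume.
    return sum(section_areas_mm2) * section_thickness_mm

print(infarct_volume_mm3([1.8, 2.4, 1.1]))  # 10.6 (mm^3)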
"The data obtained show that DNA methyltransferase DNMT1 and histone methyltransferase G9a can be potential protein targets in ischemic penumbra cells, and their inhibitors, such as 5-aza-2′-deoxycytidine, A-366, and BIX01294, are potential neuroprotective agents capable of protecting penumbra cells from postischemic damage to the cerebral cortex. The protective effect of these substances should be studied in more detail in the future. Compound A-366 had the most persistent protective effect on the mouse cerebral cortex after photothrombotic stroke. Therefore, it can be recommended as a promising anti-ischemic neuroprotector." ]
[ "intro", "results", null, null, null, null, null, null, "discussion", "methods", null, null, null, null, null, null, null, "conclusions" ]
[ "ischemic stroke", "epigenetics", "histone methyltransferase", "DNA methyltransferase" ]
1. Introduction: Cerebral ischemia, a common cerebrovascular disease, is one of the great threats to human health [1]. Blockage of cerebral vessels in ischemic stroke (70–80% of all strokes) disrupts blood flow and the supply of oxygen and glucose to surrounding tissues. In the ischemic nucleus, nerve cells die quickly. Additionally, toxic factors (glutamate, K+, free radicals, acidosis, etc.) spread and damage the surrounding cells and tissues in the following hours [2,3]. During this time (2–6 h, the “therapeutic window”), it is possible to save neurons in the transition zone (penumbra), decrease damage, and reduce the neurological consequences of a stroke. However, studies of potential neuroprotectors (calcium channel blockers, excitotoxicity inhibitors, antiapoptotic agents, antioxidants, etc.), which have shown promise in experiments on cell cultures or laboratory animals, have not demonstrated effective protection of the human brain from stroke without unacceptable side effects [4,5,6,7]. The transcriptional activity in the cell is regulated by epigenetic processes such as DNA methylation/demethylation, histone acetylation/deacetylation, histone methylation/demethylation, histone phosphorylation, etc. DNA methylation is the most well-studied epigenetic modification. It involves the attachment of a methyl group to cytosine in the CpG dinucleotide in DNA, where cytosine and guanine are linked by a phosphate group (5′CpG3′). Oxidation of methylated cytosine leads to its demethylation [5,6,7]. DNA methylation does not occur at every cytosine residue in the genome: only about 60% of all CpG dinucleotides are methylated. If CpG islands (CGIs) in the promoter region of a gene undergo methylation, this usually leads to its suppression. Regulatory regions of many human and animal genes contain unmethylated CpG dinucleotides grouped into CGIs. These are usually promoters of housekeeping genes expressed in all tissues; in the promoters of tissue-specific genes, CGIs are unmethylated only in those tissues where the gene is expressed [4,5,6,7]. Adding a methyl group to CpG sites can prevent gene transcription in different ways. DNA methylation can directly prevent the binding of DNA-binding factors to transcription sites. In addition, methyl groups in CpG dinucleotides can be recognized by proteins of the methyl-CpG-binding domain (MBD) family, such as MBD1–4 and MECP2 [5,6]. The binding of these proteins to methylated CpG sites recruits histone- and chromatin-modifying protein complexes that repress gene transcription [4,5,6,7]. At least two methylation systems with different methylases operate in mammalian cells. De novo methylation is carried out by DNA methyltransferases DNMT3a and DNMT3b, which introduce elements of variability into the methylation profile. Maintenance methylation is provided by DNMT1, which attaches methyl groups to cytosines on one of the DNA strands where the complementary strand is already methylated (accuracy rate > 99%) [8]. Although DNA methylation is a fairly common epigenetic modification, the level of DNA methylation in the mammalian genome does not exceed 1%. This is due to the instability of 5-methylcytosine, which undergoes spontaneous deamination to thymine during DNA repair. This results in a rapid depletion of CpG sites in the genome [9]. Nonetheless, DNA methylation is still critical in biological processes such as differentiation, genomic imprinting, genomic stability, and X chromosome inactivation.
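To make the CpG bookkeeping concrete, here is a small Python sketch that locates CpG dinucleotides in a sequence and computes the fraction carrying a methyl mark (the text puts the genome-wide figure at about 60%); the sequence and positions below are invented for illustration:

def cpg_sites(seq):
    # 0-based positions of CpG dinucleotides on one strand.
    seq = seq.upper()
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

def methylated_fraction(seq, methylated_positions):
    # Share of CpG sites whose cytosine carries a methyl group.
    sites = cpg_sites(seq)
    return len(set(sites) & set(methylated_positions)) / len(sites)

seq = "ATCGGGCGTACGATCG"
print(cpg_sites(seq))                         # [2, 6, 10, 14]
print(methylated_fraction(seq, {2, 10, 14}))  # 0.75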
DNA methylation and repression of the expression of a number of genes increase in brain cells after ischemia [10,11,12]. Occlusion of the middle cerebral artery (MCA) for 30 min increased the total level of DNA methylation by a factor of four, which correlated with the growth of cell damage and neurological deficit. Changes in DNA methylation after ischemia can occur not only at the global level, but also at the promoters of individual genes [13]. These changes can have both neuroprotective and neurotoxic effects depending on the degree of ischemic damage, the time elapsed after injury, and the site of methylation [14,15]. Methylation of histones by histone methyltransferases regulates transcriptional processes in cells. It is known that methylation of lysines 9 and 27 in histone H3, as well as lysine 20 in histone H4, globally suppresses transcription, and methylation of lysine 4 in histone H3 often correlates with transcriptional activity [16]. The lysine methyltransferases SUV39H1 and G9a are the best-studied histone methyltransferases. SUV39H1 and G9a methylate lysines in histones H3 and H4, leading to the formation of large regions of heterochromatin where gene expression is suppressed. G9a is more responsible for mono- and dimethylation of lysine 9 in histone H3 (H3K9Me1 and H3K9Me2), while SUV39H1 is responsible for trimethylation (H3K9Me3) [16,17]. SUV39H1 and G9a are also involved in the repression of individual genes and in a number of neuropathological processes [18]. Damage to peripheral nerves increased the expression of SUV39H1 in the nuclei of neurons in the spinal cord ganglia that are involved in the transmission of pain information, causing nociceptive hypersensitivity. It is assumed that SUV39H1 inhibitors can serve as promising drugs for the treatment of pain hypersensitivity caused by damage to peripheral nerves [18]. Pharmacological inhibition or knockout of the histone methyltransferases SUV39H1 or G9a protected cultured neurons from ischemic damage in an oxygen- and glucose-free environment [18,19]. In this study, we investigated the changes in the expression and intracellular localization of the DNA methyltransferase DNMT1 and the histone methyltransferases SUV39H1 and G9a in penumbra neurons and astrocytes, and in nuclear and cytoplasmic fractions of the penumbra, at 4 and 24 h after photothrombotic stroke (PTS). In an attempt to find possible neuroprotectors, we studied the effect of DNMT1 and histone methyltransferase G9a inhibitors on the volume of PTS-induced infarction and on apoptosis of penumbra cells in the cortex of mice after PTS. 2. Results: 2.1. Expression and Localization of DNA Methyltransferase DNMT1: DNA methylation leading to gene silencing is carried out by DNA methyltransferases DNMT1 (supporting methylation in interphase cells) or DNMT3a and DNMT3b (de novo methylation at early stages of development, as well as its changes during cell differentiation). DNMT1 maintains the DNA methylation pattern after cell repair or division. It attaches methyl groups to one of the DNA strands where the complementary strand is methylated. DNMT1 has been detected in the normal brain [20]. However, the role of DNMT in the brain’s response to ischemic injury is poorly understood.
3. Discussion
DNA methyltransferase DNMT1 attaches methyl groups to one strand of DNA where the complementary strand is already methylated, after cell repair or division. This study has shown that the level of DNMT1 in the nuclear and cytoplasmic fractions of the penumbra tissue did not change at 4 h after PTS in the rat cerebral cortex but increased at 24 h after PTS. Immunofluorescence microscopy confirmed an increase in the level of DNMT1 in the penumbra at 24 h after PTS. The colocalization of DNMT1 with markers of neurons (NeuN) and astrocytes (GFAP) was increased. As NeuN selectively labels the nuclei of neurons, and GFAP stains the bodies of astrocytes, including their processes, these data explain the results of immunoblotting. It is likely that the increase in the level of DNMT1 in the nuclear fraction of the penumbra was due to increased expression in the nuclei of neurons, and that the increase in the cytoplasmic fraction was due to expression of this protein in the bodies and processes of astrocytes. The investigated interval, 24 h, when DNMT1 is expressed, corresponds to the acute post-stroke period. According to previously published data [21,22], DNMT1 expression and the subsequent global DNA methylation that suppresses protein biosynthesis increased at 16–24 h after cerebral ischemia and reperfusion. A number of authors suggest a relationship between DNMT1 activation, DNA methylation, and gene expression disorders, on the one hand, and neuronal death, on the other [21,22,23,24,25]. It is possible that ischemia and reperfusion, which cause oxidative damage to DNA, stimulate DNA repair in the subsequent hours, which necessitates the maintenance of DNA methylation and DNMT1 expression. In this case, a correlation arises between DNA damage in the ischemic brain, DNA methylation, and apoptosis. Our study showed colocalization of DNMT1 with apoptotic cell nuclei in the PTS-induced penumbra. Inhibition of DNMT1 by 5-aza-2′-deoxycytidine protected cells of the PTS-induced penumbra from apoptosis. Inhibition, but not complete knockout, of DNMT1 protected postmitotic neurons from ischemic damage [21,22].
In our previous study, we showed that inhibition of DNA methylation by 5-azacytidine and 5-aza-2′-deoxycytidine (decitabine) reduced the level of PDT-induced necrosis of glial cells, but not neurons, by factors of 1.3 and 2.0, respectively, and did not significantly influence apoptosis of glial cells [26]. This suggests the involvement of DNMT1 in apoptosis of penumbra cells after ischemic stroke. Therefore, inhibition of DNA methylation may be a potential therapeutic strategy for the treatment of stroke. Histone methyltransferases SUV39H1 and G9a are localized in cell nuclei, where they methylate lysines in histones H3 and H4. It is known that methylation of lysines 9 and 27 in histone H3, as well as lysine 20 in histone H4, globally suppresses transcription, while methylation of lysine 4 in histone H3 often correlates with transcriptional activity. In control samples, SUV39H1 and G9a were detected by immunoblotting in the nuclear fraction of the rat cerebral cortex, while they were practically absent in the cytoplasm of the cells. An increase in the level of SUV39H1 in the penumbra was found at 24 h after PTS, and G9a was overexpressed at 4 and 24 h after PTS. In our experiments, photothrombotic stroke in the rat cerebral cortex caused an increase in the level of H3K9diMe, but not H3K4Me, in neurons and astrocytes of the penumbra 4 h after exposure, and this increase lasted for at least 24 h. G9a plays a major role in histone H3 lysine 9 methylation [27]. It is possible that G9a played a key role in the dimethylation of histone H3 (H3K9Me2). However, it should be noted that the main function of SUV39H1 is not di- but trimethylation of H3K9. The H3K9Me3 level was not estimated in these experiments, which clearly must be done in the future to clarify the real role of SUV39H1. We observed colocalization of G9a with apoptotic cell nuclei in the PTS-induced penumbra. The G9a inhibitors A-366 and BIX01294 protected penumbra cells from apoptosis and reduced the volume of PTS-induced cerebral infarction. This suggests the involvement of G9a in apoptosis of penumbra cells after ischemic stroke.
4. Methods
4.1. Animals
The experiments were carried out on adult male rats weighing 200–250 g. Experiments with enzyme inhibitors were carried out on male outbred CD-1 mice at the age of 14–15 weeks weighing 20–25 g. The animals were kept under standard conditions with free access to water and food at 22–25 °C, a 12-h light/dark schedule, and an air exchange rate of 18 shifts per hour. Body temperature was monitored with a rectal thermometer and maintained within 37 ± 0.5 °C using an electric mat. International, national, and institutional guidelines for the care and conduct of animal experiments were followed. The animal protocols were evaluated and approved by the Animal Care and Use Committee of the Southern Federal University.
4.2. Photothrombotic Stroke Model
Rats were anesthetized by intraperitoneal administration of telazol (50 mg/kg) and xylazine (10 mg/kg) [28]. For mice, 25 mg/kg telazol and 5 mg/kg xylazine were used for anesthesia [29]. The PTS procedure has been described previously [30]. Briefly, PTS in the rat cerebral cortex was induced by laser irradiation (532 nm, 60 mW/cm2, Ø 3 mm, 30 min) of a part of the sensorimotor cortex after intravenous injection of the photosensitizer Rose Bengal (R4507, Sigma, Rehovot, Israel; 20 mg/kg). For PTS in the cerebral cortex of mice, Rose Bengal (15 mg/mL) was injected intraperitoneally (10 μL/g). A section of the mouse skull was freed from the periosteum in the area of the sensorimotor cortex (2 mm lateral to the bregma), and this part of the brain was irradiated with a laser (532 nm, 200 mW/cm2, Ø 1 mm, 15 min) 5 min after the photosensitizer was applied. Sham-operated animals that underwent the same operations, but without injection of the photosensitizer, were used as controls.
4.3. Immunofluorescence Microscopy Analysis
For immunofluorescence microscopy, the rats were anesthetized and transcardially perfused with 10% formalin 4 or 24 h after PTS, as described previously [30]. Control samples were taken from the contralateral cortex of the same animals (CL4 and CL24, respectively) or from the cerebral cortex of sham-operated rats (SO4 and SO24). The brain was fixed in formalin overnight and incubated for 48 h in 20% sucrose in phosphate-buffered saline (PBS) at 4 °C. Frontal brain sections 20 μm thick were prepared using a Leica VT 1000 S vibratome (Freiburg, Germany). They were frozen in 2-methylbutane and stored at −80 °C. After thawing, the sections were washed with PBS. Non-specific antibody binding was blocked with 5% BSA and 0.3% Triton X-100 (1 h, 20–25 °C). The sections were then incubated overnight at 4 °C in the same solution with primary rabbit antibodies (Sigma-Aldrich, Rehovot, Israel): anti-DNMT1 (D4567, 1:500), anti-G9a (09-071, 1:100), anti-SUV39H1 (AV32470, 1:500), anti-histone H3 dimethylated at lysine 9 (anti-H3K9diMe; D5567, 1:500), and anti-histone H3 methylated at lysine 4 (anti-H3K4Me; M4819, 1:500), together with mouse anti-NeuN (MAB377; 1:1000) or anti-GFAP (SAB5201104; 1:1000) antibodies. After washing in PBS, the sections were incubated for 1 h with fluorescently labeled secondary anti-rabbit CF488A (SAB4600045, 1:1000) or anti-mouse CF555 (SAB4600302, 1:1000) antibodies.
Sections were mounted on glass slides in 60% glycerol/PBS. Negative controls were processed without primary antibodies. Sections were analyzed using an Eclipse FN1 microscope (Nikon, Tokyo, Japan). In most of the experiments, fluorescent images were studied in the central region of the penumbra, at a distance of 0.3–0.7 mm from the border of the infarction nucleus. The quantitative assessment of fluorescence was carried out on 10–15 images of experimental and control preparations obtained with the same digital camera settings. The average fluorescence intensity in the area occupied by cells was determined in each image using the ImageJ software (http://rsb.info.nih.gov/ij/, accessed on 20 October 2021). The corrected total cell fluorescence I (CTCF), proportional to the level of protein expression, was calculated as I = Ii − Ac × Ib, where Ii is the integral fluorescence intensity, Ac is the cell area, and Ib is the average background fluorescence [30]. Threshold values remained constant for all images. The relative change in the fluorescence of penumbra cells compared to the control cortex, ΔI, was calculated as ΔI = (Ipen − Ic)/Ic, where Ipen is the average fluorescence intensity in the penumbra and Ic is the average fluorescence intensity in the control samples. Additionally, using the ROI Manager tools in ImageJ, the immunofluorescence of the DNMT1, SUV39H1, and G9a proteins in neuronal nuclei was assessed. Immunofluorescence data for proteins in cell nuclei were normalized to Hoechst 33342 fluorescence. In addition, the number of cells expressing the protein under study was calculated per 100 cells. Protein colocalization with the neuronal marker NeuN or the astrocyte marker GFAP was assessed using ImageJ with the JACoP plugin. On RGB images (1280 × 960), the Manders coefficient M1 reflects the proportion of pixels containing both red (cell markers or TUNEL staining) and green (protein) signals in the total signal recorded in the red channel. In each area of the brain, three fields of view were analyzed in 7–10 rats. Statistical processing was performed by one-way ANOVA. Results are presented as M ± SEM.
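The paper gives these quantification formulas in prose only. As a minimal illustrative sketch (ours, not the authors' code; function names, thresholds, and the example numbers are assumptions), the metrics could be computed from ImageJ-style measurements as follows:

    # Sketch of the fluorescence metrics described above; the thresholding
    # convention and all names/values are our assumptions.
    import numpy as np

    def ctcf(integrated_intensity, cell_area, mean_background):
        """Corrected total cell fluorescence: I = Ii - Ac * Ib."""
        return integrated_intensity - cell_area * mean_background

    def relative_change(i_penumbra, i_control):
        """Relative change vs. control cortex: dI = (Ipen - Ic) / Ic."""
        return (i_penumbra - i_control) / i_control

    def manders_m1(red, green, red_thr=0.0, green_thr=0.0):
        """Manders M1: fraction of the red-channel signal (cell marker or
        TUNEL) found in pixels also positive in the green (protein)
        channel, as reported by the JACoP plugin."""
        red, green = np.asarray(red, float), np.asarray(green, float)
        red_pos = red > red_thr
        return red[red_pos & (green > green_thr)].sum() / red[red_pos].sum()

    # Made-up example: one cell with integrated intensity 5.2e5, area
    # 1.3e3 px, and background 12.4 intensity units per pixel.
    i_cell = ctcf(5.2e5, 1.3e3, 12.4)
    print(i_cell, relative_change(i_cell, 3.8e5))

Per-group values obtained this way could then be compared with a one-way ANOVA (e.g., scipy.stats.f_oneway on per-animal means), mirroring the statistics reported above.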
4.4. Apoptosis Analysis
Apoptotic cells were visualized by the TUNEL method (TdT-mediated dUTP-X nick end labeling) using the In Situ Cell Death Detection Kit, TMR red (#12156792910, Roche, Mannheim, Germany; red fluorescence), as described previously [30]. Briefly, sections were incubated at 37 °C with a primary antibody against the protein under study, washed, treated with the reagents from this kit, and then incubated for 1 h with the secondary antibody anti-rabbit CF488A (SAB4600045, 1:1000; green fluorescence) and with the cell nucleus marker Hoechst 33342 (10 μg/mL; blue fluorescence). The apoptotic index (AI) was calculated as a percentage: AI = (number of TUNEL-positive cells/total number of cells stained with Hoechst 33342) × 100. The analysis was performed on 3 images for each of 7–9 animals in the group.
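The apoptotic index is a simple cell-count ratio; a one-function sketch of the formula above (the counts in the example are hypothetical):

    def apoptotic_index(tunel_positive, hoechst_total):
        """AI (%) = TUNEL-positive cells / total Hoechst 33342-stained cells * 100."""
        return 100.0 * tunel_positive / hoechst_total

    # Hypothetical field of view: 18 TUNEL-positive nuclei out of 240 nuclei.
    print(apoptotic_index(18, 240))  # -> 7.5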
4.5. Immunoblotting
The immunoblotting procedure has been described previously [30]. Briefly, the rats were anesthetized and decapitated 4 or 24 h after PTS. The brain was removed, and a section of the cortex corresponding to the infarction nucleus was excised on ice with a cylindrical knife Ø 3 mm; with another knife Ø 7 mm, a 2-mm ring was cut around the irradiation zone, approximately corresponding to the penumbra tissue (experimental sample). A similar control sample was excised from the unirradiated contralateral hemisphere or from the cerebral cortex of sham-operated rats. Pieces of tissue were homogenized on ice, quickly frozen in liquid nitrogen, and then stored at −80 °C. After thawing and centrifugation, the cytoplasmic and nuclear fractions were isolated from the homogenates using the CelLytic NuCLEAR Extraction Kit (Sigma-Aldrich). The total supernatant was used as the cytoplasmic fraction, in which the nuclear protein histone H3 was practically not detected. Primary rabbit antibodies (all Sigma-Aldrich) were used: anti-DNMT1 (D4567, 1:500), anti-G9a (09-071, 1:500), and anti-SUV39H1 (AV32470, 1:500), together with a mouse anti-β-actin antibody (A5441, 1:5000). Secondary antibodies (all Sigma-Aldrich) were anti-rabbit IgG-peroxidase (A6154, 1:1000) and anti-mouse IgG-peroxidase (A4416, 1:1000).
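The paper does not spell out the densitometry arithmetic. A common convention, shown here as an assumption rather than the authors' procedure, is to normalize each target band to the β-actin loading control and express penumbra values relative to the matched control sample:

    def normalized_density(target_band, actin_band):
        # Band optical densities (arbitrary units) from the blot images.
        return target_band / actin_band

    def fold_change(target_pen, actin_pen, target_ctrl, actin_ctrl):
        # Penumbra level relative to the contralateral or sham-operated control.
        return (normalized_density(target_pen, actin_pen)
                / normalized_density(target_ctrl, actin_ctrl))

    # Hypothetical DNMT1 bands: penumbra 1.42 vs. control 0.95
    # (beta-actin 1.00 and 1.05, respectively).
    print(fold_change(1.42, 1.00, 0.95, 1.05))  # -> ~1.57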
4.6. Inhibitors
The study investigated the putative neuroprotective effects of the following inhibitors of epigenetic proteins: the DNMT1 inhibitor 5-aza-2′-deoxycytidine (decitabine; 0.2 mg/kg, once a day, 7 days) [31,32,33] and the inhibitors of the histone methyltransferases SUV39H1 and G9a, BIX01294 (0.5 mg/kg, once a day, 7 days) [34] and A-366 (2 mg/kg, once a day, 7 days) [35]. The inhibitors were dissolved in dimethyl sulfoxide (DMSO) and then diluted in sterile saline; the final concentration of DMSO was 5%. All inhibitors were administered 1 h after PTS. Mice were decapitated 4 and 7 days after PTS, 1 h after the last injection of the drugs, to study the level of apoptosis in the peri-infarction area and the volume of infarction.
4.7. Determination of the Volume of Infarction
To determine the volume of infarction at different times after PTS, brain slices of mice were stained with 2,3,5-triphenyltetrazolium chloride (TTC; T8877, Sigma). For this, the mice were anesthetized and decapitated, and the brains were quickly removed and placed in a pre-chilled adult mouse brain matrix (J&K Seiko Electronic Co., Ltd., DongGuan City, China). The matrix with brain tissue was transferred to a freezer (−80 °C) for 3–5 min and sliced 2 mm thick. These sections were stained with 1% TTC for 30 min in the dark at 37 °C. Using the ImageJ image analysis software (http://rsb.info.nih.gov/ij/, accessed on 20 October 2021), the areas of the infarction zones in each section were measured, summed, and multiplied by the section thickness (2 mm).
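The volume computation described above is a sum of per-slice infarct areas times the slice thickness; a small sketch with made-up areas:

    def infarct_volume_mm3(slice_areas_mm2, thickness_mm=2.0):
        """Volume (mm^3) = sum of infarct areas in serial sections * section thickness."""
        return sum(slice_areas_mm2) * thickness_mm

    # Hypothetical areas (mm^2) measured in ImageJ on four 2-mm TTC-stained slices:
    print(infarct_volume_mm3([1.8, 2.4, 2.1, 0.9]))  # -> 14.4 mm^3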
5. Conclusions
The data obtained show that the DNA methyltransferase DNMT1 and the histone methyltransferase G9a can be potential protein targets in ischemic penumbra cells, and that their inhibitors, such as 5-aza-2′-deoxycytidine, A-366, and BIX01294, are potential neuroprotective agents capable of protecting penumbra cells from postischemic damage to the cerebral cortex. The protective effects of these substances should be studied in more detail in the future. Compound A-366 had the most persistent protective effect on the mouse cerebral cortex after photothrombotic stroke; therefore, it can be recommended as a promising anti-ischemic neuroprotector.
Background: Cerebral ischemia, a common cerebrovascular disease, is one of the great threats to human health, and new targets for stroke therapy are needed. The transcriptional activity of the cell is regulated by epigenetic processes such as DNA methylation/demethylation, histone acetylation/deacetylation, histone methylation, etc. Changes in DNA methylation after ischemia can have both neuroprotective and neurotoxic effects depending on the degree of ischemic damage, the time elapsed after injury, and the site of methylation. Methods: In this study, we investigated the changes in the expression and intracellular localization of the DNA methyltransferase DNMT1 and the histone methyltransferases SUV39H1 and G9a in penumbra neurons and astrocytes at 4 and 24 h after stroke in the rat cerebral cortex using the photothrombotic stroke (PTS) model. Immunofluorescence microscopy, apoptosis analysis, and immunoblotting were used. Additionally, we studied the effect of DNMT1 and G9a inhibitors on the volume of PTS-induced infarction and on apoptosis of penumbra cells in the cortex of mice after PTS. Results: This study has shown that the level of DNMT1 increased in the nuclear and cytoplasmic fractions of the penumbra tissue at 24 h after PTS. Inhibition of DNMT1 by 5-aza-2′-deoxycytidine protected cells of the PTS-induced penumbra from apoptosis. An increase in the level of SUV39H1 in the penumbra was found at 24 h after PTS, and G9a was overexpressed at 4 and 24 h after PTS. The G9a inhibitors A-366 and BIX01294 protected penumbra cells from apoptosis and reduced the volume of PTS-induced cerebral infarction. Conclusions: Thus, the data obtained show that the DNA methyltransferase DNMT1 and the histone methyltransferase G9a can be potential protein targets in ischemic penumbra cells, and their inhibitors are potential neuroprotective agents capable of protecting penumbra cells from postischemic damage to the cerebral cortex.
1. Introduction
Cerebral ischemia, a common cerebrovascular disease, is one of the great threats to human health [1]. Blockage of cerebral vessels in ischemic stroke (70–80% of all strokes) disrupts blood flow and the supply of oxygen and glucose to the surrounding tissues. In the ischemic nucleus, nerve cells die quickly. Additionally, toxic factors (glutamate, K+, free radicals, acidosis, etc.) spread and damage the surrounding cells and tissues over the following hours [2,3]. During this time (2–6 h, the "therapeutic window"), it is possible to save neurons in the transition zone (penumbra), decrease damage, and reduce the neurological consequences of a stroke. However, studies of potential neuroprotectors—calcium channel blockers, excitotoxicity inhibitors, antiapoptotic agents, antioxidants, etc.—which have shown promise in experiments on cell cultures or laboratory animals, have not provided effective protection of the human brain from stroke without unacceptable side effects [4,5,6,7]. The transcriptional activity of the cell is regulated by epigenetic processes such as DNA methylation/demethylation, histone acetylation/deacetylation, histone methylation/demethylation, histone phosphorylation, etc. DNA methylation is the most well-studied epigenetic modification. It involves the attachment of a methyl group to cytosine in a CpG dinucleotide in DNA, where cytosine and guanine are linked by a phosphate group (5′CpG3′). Oxidation of methylated cytosine leads to its demethylation [5,6,7]. DNA methylation does not occur at every cytosine residue in the genome; only about 60% of all CpG dinucleotides are methylated. If CpG islands (CGIs) in the promoter region of a gene undergo methylation, this usually leads to suppression of that gene. Regulatory regions of many human and animal genes contain unmethylated CpG dinucleotides grouped into CGIs. These are usually promoters of housekeeping genes expressed in all tissues; in the promoters of tissue-specific genes, CGIs are unmethylated only in those tissues where the gene is expressed [4,5,6,7]. Adding a methyl group to CpG sites can prevent gene transcription in different ways. DNA methylation can directly prevent the binding of DNA-binding factors to transcription sites. In addition, methyl groups in CpG dinucleotides can be recognized by proteins of the methyl-CpG-binding domain (MBD) family, such as MBD1–4 and MeCP2 [5,6]. The binding of these proteins to methylated CpG sites recruits histone- and chromatin-modifying protein complexes that repress gene transcription [4,5,6,7]. At least two methylation systems with different methylases operate in mammalian cells. De novo methylation is carried out by the DNA methyltransferases DNMT3a and DNMT3b, which introduce elements of variability into the methylation profile. Maintenance methylation is provided by DNMT1, which attaches methyl groups to cytosines on one of the DNA strands where the complementary strand is already methylated (accuracy rate > 99%) [8]. Although DNA methylation is a fairly common epigenetic modification, the level of DNA methylation in the mammalian genome does not exceed 1%. This is due to the instability of 5-methylcytosine, which undergoes spontaneous deamination to thymine during DNA repair, resulting in a rapid depletion of CpG sites in the genome [9]. Nonetheless, DNA methylation is critical in biological processes such as differentiation, genomic imprinting, genomic stability, and X-chromosome inactivation.
DNA methylation and repression of the expression of a number of genes increase in brain cells after ischemia [10,11,12]. Occlusion of the middle cerebral artery (MCA) for 30 min increased the total level of DNA methylation by a factor of four, which correlated with the growth of cell damage and neurological deficit. Changes in DNA methylation after ischemia can occur not only at the global level but also at the promoters of individual genes [13]. These changes can have both neuroprotective and neurotoxic effects depending on the degree of ischemic damage, the time elapsed after injury, and the site of methylation [14,15]. Methylation of histones by histone methyltransferases regulates transcriptional processes in cells. It is known that methylation of lysines 9 and 27 in histone H3, as well as lysine 20 in histone H4, globally suppresses transcription, while methylation of lysine 4 in histone H3 often correlates with transcriptional activity [16]. The lysine methyltransferases SUV39H1 and G9a are the best-studied histone methyltransferases. SUV39H1 and G9a methylate lysines in histones H3 and H4, leading to the formation of large regions of heterochromatin where gene expression is suppressed. G9a is mainly responsible for mono- and dimethylation of lysine 9 in histone H3 (H3K9Me1 and H3K9Me2), while SUV39H1 is responsible for trimethylation (H3K9Me3) [16,17]. SUV39H1 and G9a are also involved in the repression of individual genes and in a number of neuropathological processes [18]. Damage to peripheral nerves increased the expression of SUV39H1 in the nuclei of neurons in the spinal cord ganglia involved in the transmission of pain information. This caused nociceptive hypersensitivity. It is assumed that SUV39H1 inhibitors can serve as promising drugs for the treatment of pain hypersensitivity caused by damage to peripheral nerves [18]. Pharmacological inhibition or knockout of the histone methyltransferases SUV39H1 or G9a protected cultured neurons from ischemic damage in an oxygen- and glucose-free environment [18,19]. In this study, we investigated the changes in the expression and intracellular localization of the DNA methyltransferase DNMT1 and the histone methyltransferases SUV39H1 and G9a in penumbra neurons and astrocytes, and in the nuclear and cytoplasmic fractions of the penumbra, at 4 and 24 h after photothrombotic stroke (PTS). In an attempt to find possible neuroprotectors, we also studied the effect of DNMT1 and G9a inhibitors on the volume of PTS-induced infarction and on apoptosis of penumbra cells in the cortex of mice after PTS.
Keywords: ischemic stroke; epigenetics; histone methyltransferase; DNA methyltransferase
Bioactivity effects of extracellular matrix proteins on apical papilla cells.
34495108
Potent signaling agents stimulate and guide pulp tissue regeneration, especially in endodontic treatment of teeth with incomplete root formation.
BACKGROUND
Different concentrations (1, 5, and 10 µg/mL) of fibronectin (FN), laminin (LM), and type I collagen (COL) were applied to the bottom of non-treated wells of sterilized 96-well plates. Non-treated and pre-treated wells were used as negative (NC) and positive (PC) controls. After seeding the hAPCs (5×10³ cells/well) on the different substrates, we assessed the following parameters: adhesion, proliferation, spreading, total collagen/type I collagen synthesis, and gene expression (ITGA5, ITGAV, COL1A1, COL3A1) (ANOVA/Tukey; α=0.05).
METHODOLOGY
We observed the greatest attachment potential for cells on the FN substrate, with the effect depending on concentration. Concentrations of 5 and 10 µg/mL of FN yielded the highest cell proliferation, spreading and collagen synthesis values, with the 10 µg/mL concentration increasing ITGA5, ITGAV, and COL1A1 expression compared with PC. LM (5 and 10 µg/mL) showed higher bioactivity values than NC, although lower than PC, and COL showed no bioactivity at all.
RESULTS
We conclude that FN at a concentration of 10 µg/mL exerted the most intense bioactive effects on hAPCs.
CONCLUSION
[ "Cell Adhesion", "Cells, Cultured", "Collagen Type I", "Extracellular Matrix", "Extracellular Matrix Proteins", "Fibronectins", "Humans", "Laminin" ]
8425894
Introduction
The concepts of tissue engineering were introduced to Dentistry in the last decade, with special attention given to inducing pulp regeneration.1-4 Regenerative endodontics rests on four principles: 1. disinfection and detoxification of root canals; 2. use of biomaterials as scaffolds for cell adhesion, proliferation, and differentiation; 3. presence of cells with the potential to regenerate a new tissue similar to the original; and 4. use of signaling agents to induce cell migration and enhance the bioactive action of the biomaterial.5,6 Two strategies have emerged as the most promising therapies for tissue regeneration: cell-based approaches and cell-free approaches.7,8 The first involves implanting pre-cultured cells associated with a biomaterial at the site of injury. The bioactive materials provide a porous three-dimensional structure and act as a temporary extracellular matrix where attached cells can grow to regenerate the tissue. The second, cell-free approaches, involves the use of biomaterials along with potent signaling agents. When placed into the injury site, these agents stimulate cell migration towards the site of interest, as well as cell proliferation and differentiation, favoring tissue healing.7,8 Cell-free therapies seem to be of particular interest for teeth with incomplete root formation. A histopathological study of rat teeth with periapical lesions demonstrated that a viable apical papilla was maintained even after 90 days of pulp necrosis.9 The apical papilla is known to be a rich source of stem cells. Therefore, biomaterials associated with potent signaling agents could be used to attract these cells into the root canal. These biomaterials should also promote adhesion, proliferation and differentiation of the stem cells along the entire length of the root canal, so they can synthesize new pulp-like tissue.1 Fibronectin (FN), laminin (LM) and type I collagen (COL) – examples of extracellular matrix proteins (ECMp) – have bioactive properties and have been proposed as chemotactic and inducing agents, since their properties have been described in several biomedical fields.10-15 Previous studies showed that FN, whether associated with other proteins or not, can mediate cell adhesion, migration, proliferation, and differentiation, while also helping in tissue formation, remodeling, repair and regeneration.11,13,15 LM is a complex structure that also regulates cell adhesion, migration and proliferation.10,12 LM aids in maintaining cellular functions, especially in the processes of re-epithelialization and angiogenesis.10 COL is a common protein that surrounds cells throughout the body, with over 90% of this protein being type I. COL can be used to mimic the natural cell environment and, under specific conditions, is effective in promoting stem cell differentiation and growth.13,16 One study characterized the dehydrated human umbilical cord (DHUc) in terms of tissue composition and evaluated its in vitro and in vivo effects.14 The authors observed the presence of COL, hyaluronic acid, FN and LM, among other proteins, which included growth factors, inflammatory modulators and cell signaling regulators, in the DHUc. They observed increased migration of fibroblasts and a higher level of mesenchymal stem cell proliferation, associated with concentration-dependent induction of angiogenic potential in vitro, when cells were treated with DHUc. 
When implanted into the subcutaneous tissue of rats, this biomaterial proved to be biocompatible and biodegradable.14 Researchers have reported that decellularized human dental pulp, which maintains several proteins of this specialized connective tissue, such as COL, FN and LM, is a suitable scaffold to mimic the complexity of the dental pulp extracellular matrix.17,18 Decellularized biological scaffolds allow stem cells from the apical papilla17 as well as human dental pulp stem cells18 to attach and proliferate while being stimulated to differentiate into odontoblast-like cells close to the dentin substrate.18 Despite the role of extracellular matrix proteins in pulp regeneration and in the formation of a new pulp-like tissue, the data concerning the influence of FN, LM and COL on the metabolism and function of human apical papilla cells (hAPCs) – especially when young teeth with incomplete root formation have lost their vitality – are scarce. Thus, this in vitro study assessed the effects of FN, LM and COL on the adhesion, proliferation and spreading of hAPCs, as well as on the potential of these cells to synthesize collagen and express genes related to pulp regeneration.
Methodology
Firstly, approval of the research was obtained from the Research Ethics Committee of Araraquara School of Dentistry/UNESP, São Paulo, Brazil (protocol Nº. 80806617.3.0000.5416), as well as signed terms of informed consent from the patients and their guardians, according to the Declaration of Helsinki. Subsequently, the hAPCs were obtained from the apical papilla of four healthy third molars with incomplete root formation. These teeth were provided by volunteers aged 16 to 18 years old, of both sexes, who were patients at the surgery clinic. The teeth were extracted for orthodontic reasons. To obtain the primary culture of hAPCs, the extracted teeth were immediately immersed in Minimum Essential Medium Eagle Alpha culture medium (α-MEM; GIBCO, Carlsbad, CA, USA) supplemented with antibiotic and antifungal agents (100 U/mL penicillin, 100 µg/mL streptomycin, and 0.25 µg/mL amphotericin; GIBCO) at 4ºC, for 1 hour. In a procedure performed in a biosafety cabinet (Bio Protector 12 Plus; VECO, Campinas, SP, Brazil), the papilla was sectioned with a sterile scalpel blade at the apical limit of the root. This tissue was transferred to a 1.5 mL tube containing phosphate buffered saline (PBS 1X; GIBCO), followed by mechanical disintegration with sterile surgical scissors. Then, the apical papilla fragments were incubated in α-MEM containing 3 mg/mL type I collagenase (Sigma-Aldrich, Saint Louis, MO, USA) at 37°C and 5% CO2 for 2 h. After this period, the cells were centrifuged, washed with PBS 1X, and later seeded in wells of a 6-well plate (Corning, Tewksbury, MA, USA); for the seeding, we used α-MEM supplemented with 10% fetal bovine serum (FBS), 100 U/mL penicillin, 100 µg/mL streptomycin, and 0.25 µg/mL amphotericin (GIBCO). After 3 h of incubation, the supernatant was discarded and the cells that remained attached to the bottom of the wells were sub-cultured in 100×15 mm petri dishes (Corning). Cells at passages 3 to 5 were used in the experiments. Fibronectin (2.0 mg/mL, bovine plasma; Sigma-Aldrich), laminin (1 mg/mL, Engelbreth-Holm-Swarm lathyritic mouse tumor; Santa Cruz Biotechnology), and type I collagen (3.67 mg/mL, rat tail; Corning, Bedford, MA, USA) were used in this study. These ECMps were diluted in PBS 1X to obtain concentrations of 1, 5 and 10 µg/mL, which were applied (50 µL) to the bottom of non-pretreated wells of sterilized polystyrene 96-well plates (Corning; Product Number: 3370). The 96-well plates were then centrifuged at 1500 rpm at 4ºC for 10 min. After keeping the plates at 4ºC for 18 h, the remaining material was aspirated. The plates were then incubated with 0.5% bovine serum albumin (BSA; Santa Cruz Biotechnology) for 10 min to block the uncoated surface,19,20 followed by washing with PBS 1X at 37ºC. All these steps were performed for both the negative (NC) and positive (PC) control groups. In the NC, PBS solution without ECMp was applied to the bottom of non-pretreated wells of sterilized polystyrene 96-well plates (Corning; Product Number: 3370). In the PC, PBS solution without ECMp was applied to the bottom of wells of polystyrene 96-well plates that were pre-treated by the manufacturer for cell culture purposes (Corning; Product Number: 3395). Table 1 shows all groups established according to the polystyrene 96-well plates used, as well as the type and concentration of ECMp. 
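As a side note on the dilution arithmetic above, here is a minimal Python sketch; it is not part of the published protocol and only illustrates the C1·V1 = C2·V2 calculation, with the stock concentrations and the 50 µL coating volume per well taken from the text.

STOCKS_UG_PER_ML = {"FN": 2000.0, "LM": 1000.0, "COL": 3670.0}  # 2.0, 1.0, 3.67 mg/mL
TARGETS_UG_PER_ML = (1.0, 5.0, 10.0)  # working concentrations used in the study
WELL_VOLUME_UL = 50.0  # volume applied per well

def stock_volume_ul(stock_ug_ml: float, target_ug_ml: float, final_ul: float) -> float:
    """Stock volume (uL) such that stock_ug_ml * v = target_ug_ml * final_ul."""
    return target_ug_ml * final_ul / stock_ug_ml

for protein, stock in STOCKS_UG_PER_ML.items():
    for target in TARGETS_UG_PER_ML:
        v = stock_volume_ul(stock, target, WELL_VOLUME_UL)
        print(f"{protein} {target:>4.0f} ug/mL: {v:.3f} uL stock + "
              f"{WELL_VOLUME_UL - v:.3f} uL PBS per well")

In practice an intermediate dilution would likely be prepared, since sub-microliter stock volumes are difficult to pipette accurately; the sketch only shows the arithmetic behind the concentrations listed in Table 1 below.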
Table 1. Experimental and control groups according to type of polystyrene 96-well plate and type and concentration of extracellular matrix protein (ECMp)
Group | Wells pre-treated by the manufacturer | ECMp | Concentration (µg/mL)
NC (negative control) | - | - | 0
PC (positive control) | + | - | 0
FN1 | - | Fibronectin | 1
FN5 | - | Fibronectin | 5
FN10 | - | Fibronectin | 10
LM1 | - | Laminin | 1
LM5 | - | Laminin | 5
LM10 | - | Laminin | 10
COL1 | - | Type I collagen | 1
COL5 | - | Type I collagen | 5
COL10 | - | Type I collagen | 10

After performing the treatments of the wells with ECMp, the hAPCs (5×10³ cells/well) were seeded in the respective wells and incubated at 37ºC and 5% CO2 for up to 5 days, with the culture medium being changed every 48 h. Then, the bioactivity parameters were evaluated. All assays were performed twice to ensure data reproducibility. Cell adhesion: Since the non-treated 96-well plates did not allow cell attachment to the bottom of the wells, the attachment potential of the microplates treated with different concentrations of ECMp was assessed. For this purpose, protocols of dynamic systems, in which microchannels are treated with proteins to induce cell chemotaxis,19-21 were adapted to this study. After 24 h of applying the cell suspensions to the plate wells (n=6), the hAPCs capable of migrating and attaching to the bottom of the wells were evaluated under a light microscope. For this purpose, the cells were fixed with 2.5% glutaraldehyde for 2 h, washed with distilled water and stained with 0.1% crystal violet for 15 min. For quantitative analysis, photographs (1 field from the central region of each well of the microplate; 10× magnification) were obtained with a camera (Olympus C5060, Miami, FL, USA) coupled to a microscope (Olympus BX51, Miami, FL, USA). The images were evaluated using ImageJ 1.45S software (Wayne Rasband, National Institutes of Health, USA) to count the number of cells in each image. Cell proliferation: This analysis was performed by the MTT assay (n=6). After intervals of 1, 3 and 5 days in α-MEM, the cells were incubated for 4 h in α-MEM supplemented with 10% MTT solution (5 mg/mL; Sigma-Aldrich).22 Then, the formazan crystals produced were dissolved in acidified isopropanol and the absorbance of the resulting solution was measured at 570 nm (Synergy H1, Biotek, Winooski, VT, USA). The mean absorbance value obtained in the negative control group over the 1-day period was considered 100% cell proliferation. This parameter was used to determine the percentage of viability for the other groups. Cell spreading: For this analysis (n=4), after intervals of 1, 3 and 5 days in α-MEM, the cells were fixed in 4% paraformaldehyde, permeabilized in 0.1% Triton X-100 (Sigma-Aldrich) and incubated with Actin Red 555 probe (Life Technologies; Grand Island, NY, USA) in 2% BSA (1:20) to detect the actin filaments. After washing the cells with PBS, they were incubated with Hoechst (1:5000; Invitrogen, Carlsbad, CA, USA) for 15 min for nuclear counter-staining.23 The F-actin assay was then analyzed using a fluorescence microscope (Leica DM 5500B). Total collagen synthesis: The cells were cultured for 5 days in α-MEM without FBS, and the culture medium was collected and replaced every 48 h. The pool of collected culture medium of each sample was stored at -20ºC until the Sirius Red assay was performed (n=6). For this purpose, the collected medium was transferred to 1.5 mL tubes containing Direct Red solution in saturated picric acid (0.1%), and then incubated for 1 hour, under agitation at 400 rpm, in a dry bath at 25°C. 
The tubes were centrifuged, the supernatant was discarded, and 0.01 M hydrochloric acid was added. The tubes were centrifuged again and the supernatant was discarded, followed by the addition of 0.5 M sodium hydroxide to solubilize the precipitated material.24 The resulting solution from each sample was subjected to absorbance analysis in a spectrophotometer at 555 nm (Synergy H1).24 The percentage of total collagen synthesis for each sample was calculated based on the mean absorbance values of the negative control group. Type I collagen synthesis: The medium pool collected in the previous analysis (n=6) was also used to evaluate type I collagen synthesis (COL-I), which was detected by an enzyme-linked immunosorbent assay (ELISA) performed with a standardized kit (DuoSet human COL-I; R&D Systems, Minneapolis, MN, USA), according to the manufacturer’s instructions. For this purpose, 96-well plates for ELISA (Corning) were treated with 4 µg/mL of Capture Antibody and incubated overnight. After this period, the wells were incubated with Reagent Diluent 1X for 1 hour. Then, standardized aliquots of the samples and standard curve points were applied and incubated for 2 h. After this, the wells were maintained with 100 ng/mL Detection Solution for 2 h, followed by incubation in a dark room with 2.5% Streptavidin-HRP for 20 min. The wells were washed with Wash Solution 1X between all the steps described. Finally, Substrate Solution (color reagent A + color reagent B; 1:1) was applied and, after 20 minutes, the Stop Solution was added. The absorbance reading was performed at a wavelength of 450 nm in a spectrophotometer (Synergy H1). The concentration of type I collagen for each sample was determined according to the standard curve. The percentage of type I collagen synthesis for each sample was calculated based on the average concentration values, in ng/mL, of the positive control group. Reverse transcription followed by quantitative polymerase chain reaction (RT-qPCR): After incubating the cells in α-MEM for 5 days (n=4), the RNAqueous kit (Ambion, Grand Island, NY, USA) was used to isolate the RNA in accordance with the manufacturer’s instructions. To obtain sufficient RNA for the reaction, a pool of cells obtained from two wells was used for each group. Then, 500 ng of total RNA associated with random hexamer primers and Moloney leukemia virus reverse transcriptase were used for cDNA synthesis (n=4), according to the instructions of the High Capacity RT kit supplier (Applied Biosystems, Foster City, CA, USA). Gene expression of ITGA5, ITGAV, COL1A1 and COL3A1 (Figure 1) was analyzed by qPCR, using pre-designed sets of primers and probes (Gene Expression Assays, Applied Biosystems) to detect the target genes by the TaqMan system (TaqMan Universal PCR Master Mix, Applied Biosystems) with the StepOne Plus equipment (Applied Biosystems). In addition, the cycling conditions optimized by the manufacturer were used, and the cycle threshold (CT) values for each sample were calculated using the thermal cycler software and analyzed using the 2^-ΔΔCT method. The results were normalized to the expression of the constitutive gene (GAPDH) and expressed as fold changes in relation to the positive control. Figure 1. TaqMan assays used to analyze gene expression. The data from the cell adhesion, viability, total protein synthesis, type I collagen synthesis, and RT-qPCR assays were evaluated for adherence to the normal curve (Shapiro-Wilk test, p>0.05) and homoscedasticity (Levene test, p>0.05). 
Then, the data were submitted to one-way or two-way ANOVA, followed by Tukey's post-hoc test, using SPSS 20.0 software (SPSS Inc., Chicago, IL, USA). All statistical inferences were based on a 5% significance level. The DSS Research calculator was used to calculate the statistical power of the samples, according to Kim25 (2016), which showed a power level > 95.0% for each analysis. The cell spreading images were analyzed descriptively.
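As an aside, the same statistical pipeline can be reproduced outside SPSS. The sketch below, with purely hypothetical absorbance readings for three of the groups, runs the same sequence of tests (Shapiro-Wilk, Levene, one-way ANOVA, Tukey's HSD) in Python with scipy and statsmodels:

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {  # hypothetical absorbance readings, n=6 per group as in the study
    "NC": rng.normal(0.20, 0.02, 6),
    "PC": rng.normal(0.45, 0.03, 6),
    "FN10": rng.normal(0.80, 0.04, 6),
}

# Normality within each group (Shapiro-Wilk) and homoscedasticity (Levene);
# p > 0.05 means the ANOVA assumptions are not rejected.
for name, values in groups.items():
    print(f"{name}: Shapiro-Wilk p = {stats.shapiro(values).pvalue:.3f}")
print(f"Levene p = {stats.levene(*groups.values()).pvalue:.3f}")

# One-way ANOVA followed by Tukey's HSD post-hoc test at alpha = 0.05.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.2g}")
data = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(data, labels, alpha=0.05))

The two-way case (group × time-point, as used for the proliferation data) would be handled analogously, e.g. with statsmodels' anova_lm on an OLS model that includes an interaction term.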
Results
Cell adhesion: As demonstrated in Figure 2A and B, the highest attachment potential occurred in Group FN10, followed by Groups FN5, FN1, PC, LM10, and LM5 (p<0.05). No significant difference was observed between Groups LM1, COL1, COL5, and COL10 and Group NC (p>0.05). Figure 2. Cell adhesion assay. (A) Mean and standard deviation of the number of cells adhered to the bottom of the plate treated according to the group. Different letters demonstrate a significant difference between groups (n=6; one-way ANOVA/Tukey tests; α=0.05). (B) Representative images of the cell adhesion. Cell proliferation: Figure 3 shows that cell proliferation increased over the time-points for all groups (p<0.05), except for those treated with type I collagen and for Group NC (p>0.05). The best cell viability results were observed in the groups treated with 5 or 10 µg/mL of FN after intervals of 1, 3 and 5 days (p<0.05). At the last time-point, the values were 30× higher than in Group NC, 1.7× higher than in Group PC, and 2.7× higher than in the group treated with 10 µg/mL LM. Group FN1 also showed higher cell viability values than Group LM10 at all time-points (p<0.05). Figure 3. Mean and standard deviation of cell proliferation values (n=6; two-way ANOVA/Tukey tests; α=0.05). Different capital letters indicate a significant difference between the time-points for each group. Different lowercase letters demonstrate a significant difference between groups at each time-point. Cell spreading: Figure 4 shows that cells seeded on the bottom of the wells treated with 5 or 10 µg/mL of FN had better spreading on the substrate and a higher rate of proliferation over the time-points. In the groups treated with COL and in Group NC, lower numbers of cells, organized in clusters and exhibiting contracted cytoskeletons, remained attached to the bottom of the wells. Figure 4. Representative images of the F-actin assay (10×). Red fluorescence indicates the actin filaments. Blue fluorescence indicates the cell nucleus. Total collagen synthesis: Figure 5A shows that an increase in total collagen synthesis occurred in Groups FN10 and FN5, followed by Groups PC and LM10, in comparison with Group NC (p<0.05). Figure 5. Mean and standard deviation of total collagen (A) and type I collagen (B) synthesis values (n=6; one-way ANOVA/Tukey tests; α=0.05). Different letters demonstrate a significant difference between groups. Type I collagen synthesis: Figure 5B demonstrates that more type I collagen was synthesized in Group FN10 (p<0.05), followed by Groups FN5 and PC (p<0.05), which showed no significant difference between them (p>0.05). Groups LM5 and LM10 showed lower values of type I collagen synthesis in comparison with Group PC (p<0.05), but higher than those observed for Group NC (p<0.05). RT-qPCR: Figure 6A-D shows that higher levels of ITGA5, ITGAV, and COL1A1 gene expression occurred in Group FN10 in comparison with Group PC (p<0.05). However, COL3A1 gene expression did not differ significantly between the FN10 and PC groups (p>0.05). Group FN5 showed no statistical difference compared with Group PC for any of the evaluated genes (p>0.05), and Groups LM5 and LM10 demonstrated lower gene expression values than Group PC (p<0.05). Figure 6. Mean and standard deviation of mRNA expression values of the ITGA5 (A), ITGAV (B), COL1A1 (C) and COL3A1 (D) markers (n=4; one-way ANOVA/Tukey tests; α=0.05). Different letters demonstrate a significant difference between groups.
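As a companion to the RT-qPCR results above, here is a minimal sketch of the 2^-ΔΔCT fold-change calculation described in the Methodology, with GAPDH as the reference gene and the positive control (PC) as the calibrator; all Ct values below are hypothetical, chosen only to illustrate the arithmetic:

def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_calibrator: float, ct_ref_calibrator: float) -> float:
    """2^-ddCt: target Ct normalized to the reference gene (GAPDH here),
    then expressed relative to the calibrator condition (the PC group)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical Ct values for one target (e.g. ITGA5) in FN10 vs. PC:
print(fold_change_ddct(ct_target_sample=24.1, ct_ref_sample=18.0,
                       ct_target_calibrator=25.6, ct_ref_calibrator=18.2))
# -> about 2.5, i.e. ~2.5-fold higher expression than in the calibrator

A fold change above 1 indicates higher expression of the target in the sample than in the calibrator, which is how the bars in Figure 6 are read.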
Conclusions
According to the methodologies used in the present study, we conclude that 10 µg/mL of fibronectin could act as a potent bioactive agent for human apical papilla cells by inducing cell adhesion, proliferation, spreading and collagen synthesis.
[]
[]
[]
[ "Introduction", "Methodology", "Results", "Discussion", "Conclusions" ]
[ "The concepts of tissue engineering were introduced to Dentistry in the last decade giving special attention to inducting pulp regeneration.1-4 The principles of regenerative endodontics are four: 1. disinfection and detoxification of root canals; 2. use of biomaterials as scaffold for cell adhesion, proliferation, and differentiation; 3. presence of cells with potential to regenerate a new tissue similar to the original; and 4. use of signaling agents to induce cell migration and enhance the bioactive action of the biomaterial.5,6\nTwo strategies emerged as the most promising therapies for tissue regeneration: cell approaches and cell-free approaches.7,8 The first involves implanting pre-cultured cells associated with a biomaterial at the site of injury. The bioactive materials provide a porous three-dimensional structure and act as a temporary extracellular matrix where attached cells can grow to regenerate the tissue. The second, cell-free approaches, involve the use of biomaterials along potent signaling agents. When put into the injury site, these agents stimulate cell migration towards the site of interest, as well as their proliferation and differentiation, favoring tissue healing.7,8\nCell-free therapies seem to be interesting for teeth with incomplete root formation. A histopathological study of rat teeth with periapical lesion demonstrated a viable apical papilla was maintained even after 90 days of pulp necrosis.9 The apical papilla is known to have a rich source of stem cells. Therefore, biomaterials associated with potent signaling agents could be used to attract these cells into the root canal. These biomaterials should also promote adhesion, proliferation and differentiation of the stem cells along the entire length of the root canal, so they can synthetize new pulp-like tissue.1\nFibronectin (FN), laminin (LM) and type I collagen (COL) – examples of extracellular matrix proteins (ECMp) – have bioactive properties and were proposed as chemotactic and inducing agents, since scientists have described their proprieties in several biomedical fields.10-15 Previous studies showed FN, either associated with other proteins, or not, can mediate cell adhesion, migration, proliferation, and differentiation; while also helping in tissue formation, remodeling, repair and regeneration.11,13,15 LM is a complex structure that also regulates cell adhesion, migration and proliferation.10,12 LM aids in maintaining cellular functions especially in the processes of reeptelialization and angiogenesis.10 COL is a common protein that surround cells throughout the body, with over 90% of this protein being type I. COL can be used to mimic the natural cell environment, and under specific conditions, is effective in promoting stem cell differentiation and growth.13,16\nOne study characterized the dehydrated human umbilical cord (DHUc) in terms of tissue composition and evaluated its in vitro and in vivo effects.14 The authors observed the presence of COL, hyaluronic acid, FN and LM, among other proteins, which included growth factors, inflammatory modulators and cell signaling regulators in the DHUc. They observed increased migration of fibroblasts, and higher level of mesenchymal stem cell proliferation, associated with induction of angiogenic potential in vitro depending on concentration when cells were treated with DHUc. 
When implanted into the subcutaneous tissue of rats, this biomaterial proved to be biocompatible and biodegradable.14 Researchers have reported that decellularized human dental pulp, which maintains several proteins of this specialized connective tissue, such as COL, FN and LM, is a suitable scaffold to mimic the complexity of the dental pulp extracellular matrix.17,18 Decellularized biological scaffolds allow stem cells from the apical papilla17 as well as human dental pulp stem cells18 to attach and proliferate while being stimulated to differentiate into odontoblast-like cells close to the dentin substrate.18\nDespite the role of extracellular matrix proteins in pulp regeneration and in the formation of a new pulp-like tissue, the data concerning the influence of FN, LM and COL on the metabolism and function of human apical papilla cells (hAPCs) – especially when young teeth with incomplete root formation have lost their vitality – are scarce. Thus, this in vitro study assessed the effects of FN, LM and COL on the adhesion, proliferation and spreading of hAPCs, as well as on the potential of these cells to synthesize collagen and express genes related to pulp regeneration.", "Firstly, approval of the research was obtained from the Research Ethics Committee of Araraquara School of Dentistry/UNESP, São Paulo, Brazil (protocol Nº. 80806617.3.0000.5416), as well as signed terms of informed consent from the patients and their guardians, according to the Declaration of Helsinki. Subsequently, the hAPCs were obtained from the apical papilla of four healthy third molars with incomplete root formation. These teeth were provided by volunteers aged 16 to 18 years old, of both sexes, who were patients at the surgery clinic. The teeth were extracted for orthodontic reasons. To obtain the primary culture of hAPCs, the extracted teeth were immediately immersed in Minimum Essential Medium Eagle Alpha culture medium (α-MEM; GIBCO, Carlsbad, CA, USA) supplemented with antibiotic and antifungal agents (100 U/mL penicillin, 100 µg/mL streptomycin, and 0.25 µg/mL amphotericin; GIBCO) at 4ºC, for 1 hour. In a procedure performed in a biosafety cabinet (Bio Protector 12 Plus; VECO, Campinas, SP, Brazil), the papilla was sectioned with a sterile scalpel blade at the apical limit of the root. This tissue was transferred to a 1.5 mL tube containing phosphate buffered saline (PBS 1X; GIBCO), followed by mechanical disintegration with sterile surgical scissors. Then, the apical papilla fragments were incubated in α-MEM containing 3 mg/mL type I collagenase (Sigma-Aldrich, Saint Louis, MO, USA) at 37°C and 5% CO2 for 2 h. After this period, the cells were centrifuged, washed with PBS 1X, and later seeded in wells of a 6-well plate (Corning, Tewksbury, MA, USA); for the seeding, we used α-MEM supplemented with 10% fetal bovine serum (FBS), 100 U/mL penicillin, 100 µg/mL streptomycin, and 0.25 µg/mL amphotericin (GIBCO). After 3 h of incubation, the supernatant was discarded and the cells that remained attached to the bottom of the wells were sub-cultured in 100×15 mm petri dishes (Corning). Cells at passages 3 to 5 were used in the experiments.\nFibronectin (2.0 mg/mL, bovine plasma; Sigma-Aldrich), laminin (1 mg/mL, Engelbreth-Holm-Swarm lathyritic mouse tumor; Santa Cruz Biotechnology), and type I collagen (3.67 mg/mL, rat tail; Corning, Bedford, MA, USA) were used in this study. 
These ECMps were diluted in PBS 1X to obtain concentrations of 1, 5 and 10 µg/mL, which were applied (50 µL) to the bottom of non-pretreated wells of sterilized polystyrene 96-well plates (Corning; Product Number: 3370). The 96-well plates were then centrifuged at 1500 rpm at 4ºC for 10 min. After keeping the plates at 4ºC for 18 h, the remaining material was aspirated. The plates were then incubated with 0.5% bovine serum albumin (BSA; Santa Cruz Biotechnology) for 10 min to block the uncoated surface,19,20 followed by washing with PBS 1X at 37ºC. All these steps were performed for both the negative (NC) and positive (PC) control groups. In the NC, PBS solution without ECMp was applied to the bottom of non-pretreated wells of sterilized polystyrene 96-well plates (Corning; Product Number: 3370). In the PC, PBS solution without ECMp was applied to the bottom of wells of polystyrene 96-well plates that were pre-treated by the manufacturer for cell culture purposes (Corning; Product Number: 3395). Table 1 shows all groups established according to the polystyrene 96-well plates used, as well as the type and concentration of ECMp.\n\nTable 1. Experimental and control groups according to type of polystyrene 96-well plate and type and concentration of extracellular matrix protein (ECMp)\nGroup | Wells pre-treated by the manufacturer | ECMp | Concentration (µg/mL)\nNC (negative control) | - | - | 0\nPC (positive control) | + | - | 0\nFN1 | - | Fibronectin | 1\nFN5 | - | Fibronectin | 5\nFN10 | - | Fibronectin | 10\nLM1 | - | Laminin | 1\nLM5 | - | Laminin | 5\nLM10 | - | Laminin | 10\nCOL1 | - | Type I collagen | 1\nCOL5 | - | Type I collagen | 5\nCOL10 | - | Type I collagen | 10\n\nAfter performing the treatments of the wells with ECMp, the hAPCs (5×10³ cells/well) were seeded in the respective wells and incubated at 37ºC and 5% CO2 for up to 5 days, with the culture medium being changed every 48 h. Then, the bioactivity parameters were evaluated. All assays were performed twice to ensure data reproducibility.\nCell adhesion: Since the non-treated 96-well plates did not allow cell attachment to the bottom of the wells, the attachment potential of the microplates treated with different concentrations of ECMp was assessed. For this purpose, protocols of dynamic systems, in which microchannels are treated with proteins to induce cell chemotaxis,19-21 were adapted to this study. After 24 h of applying the cell suspensions to the plate wells (n=6), the hAPCs capable of migrating and attaching to the bottom of the wells were evaluated under a light microscope. For this purpose, the cells were fixed with 2.5% glutaraldehyde for 2 h, washed with distilled water and stained with 0.1% crystal violet for 15 min. For quantitative analysis, photographs (1 field from the central region of each well of the microplate; 10× magnification) were obtained with a camera (Olympus C5060, Miami, FL, USA) coupled to a microscope (Olympus BX51, Miami, FL, USA). The images were evaluated using ImageJ 1.45S software (Wayne Rasband, National Institutes of Health, USA) to count the number of cells in each image.\nCell proliferation: This analysis was performed by the MTT assay (n=6). After intervals of 1, 3 and 5 days in α-MEM, the cells were incubated for 4 h in α-MEM supplemented with 10% MTT solution (5 mg/mL; Sigma-Aldrich).22 Then, the formazan crystals produced were dissolved in acidified isopropanol and the absorbance of the resulting solution was measured at 570 nm (Synergy H1, Biotek, Winooski, VT, USA). The mean absorbance value obtained in the negative control group over the 1-day period was considered 100% cell proliferation. 
This parameter was used to determine the percentage of viability for the other groups.\nCell spreading: For this analysis (n=4), after intervals of 1, 3 and 5 days in α-MEM, the cells were fixed in 4% paraformaldehyde, permeabilized in 0.1% Triton X-100 (Sigma-Aldrich) and incubated with Actin Red 555 probe (Life Technologies; Grand Island, NY, USA) in 2% BSA (1:20) to detect the actin filaments. After washing the cells with PBS, they were incubated with Hoechst (1:5000; Invitrogen, Carlsbad, CA, USA) for 15 min for nuclear counter-staining.23 The F-actin assay was then analyzed using a fluorescence microscope (Leica DM 5500B).\nTotal collagen synthesis: The cells were cultured for 5 days in α-MEM without FBS, and the culture medium was collected and replaced every 48 h. The pool of collected culture medium of each sample was stored at -20ºC until the Sirius Red assay was performed (n=6). For this purpose, the collected medium was transferred to 1.5 mL tubes containing Direct Red solution in saturated picric acid (0.1%), and then incubated for 1 hour, under agitation at 400 rpm, in a dry bath at 25°C. The tubes were centrifuged, the supernatant was discarded, and 0.01 M hydrochloric acid was added. The tubes were centrifuged again and the supernatant was discarded, followed by the addition of 0.5 M sodium hydroxide to solubilize the precipitated material.24 The resulting solution from each sample was subjected to absorbance analysis in a spectrophotometer at 555 nm (Synergy H1).24 The percentage of total collagen synthesis for each sample was calculated based on the mean absorbance values of the negative control group.\nType I collagen synthesis: The medium pool collected in the previous analysis (n=6) was also used to evaluate type I collagen synthesis (COL-I), which was detected by an enzyme-linked immunosorbent assay (ELISA) performed with a standardized kit (DuoSet human COL-I; R&D Systems, Minneapolis, MN, USA), according to the manufacturer’s instructions. For this purpose, 96-well plates for ELISA (Corning) were treated with 4 µg/mL of Capture Antibody and incubated overnight. After this period, the wells were incubated with Reagent Diluent 1X for 1 hour. Then, standardized aliquots of the samples and standard curve points were applied and incubated for 2 h. After this, the wells were maintained with 100 ng/mL Detection Solution for 2 h, followed by incubation in a dark room with 2.5% Streptavidin-HRP for 20 min. The wells were washed with Wash Solution 1X between all the steps described. Finally, Substrate Solution (color reagent A + color reagent B; 1:1) was applied and, after 20 minutes, the Stop Solution was added. The absorbance reading was performed at a wavelength of 450 nm in a spectrophotometer (Synergy H1). The concentration of type I collagen for each sample was determined according to the standard curve. The percentage of type I collagen synthesis for each sample was calculated based on the average concentration values, in ng/mL, of the positive control group.\nReverse transcription followed by quantitative polymerase chain reaction (RT-qPCR): After incubating the cells in α-MEM for 5 days (n=4), the RNAqueous kit (Ambion, Grand Island, NY, USA) was used to isolate the RNA in accordance with the manufacturer’s instructions. To obtain sufficient RNA for the reaction, a pool of cells obtained from two wells was used for each group. 
Then, 500 ng of total RNA associated with random hexamer primers and Moloney leukemia virus reverse transcriptase were used for cDNA synthesis (n=4), according to the instructions of the High Capacity RT kit supplier (Applied Biosystems, Foster City, CA, USA). Gene expression of ITGA5, ITGAV, COL1A1 and COL3A1 (Figure 1) was analyzed by qPCR, using pre-designed sets of primers and probes (Gene Expression Assays, Applied Biosystems) to detect the target genes by the TaqMan system (TaqMan Universal PCR Master Mix, Applied Biosystems) with the StepOne Plus equipment (Applied Biosystems). In addition, the cycling conditions optimized by the manufacturer were used, and the cycle threshold (CT) values for each sample were calculated using the thermal cycler software and analyzed using the 2^-ΔΔCT method. The results were normalized to the expression of the constitutive gene (GAPDH) and expressed as fold changes in relation to the positive control.\n\nFigure 1. TaqMan assays used to analyze gene expression\n\nThe data from the cell adhesion, viability, total protein synthesis, type I collagen synthesis, and RT-qPCR assays were evaluated for adherence to the normal curve (Shapiro-Wilk test, p>0.05) and homoscedasticity (Levene test, p>0.05). Then, the data were submitted to one-way or two-way ANOVA, followed by Tukey's post-hoc test, using SPSS 20.0 software (SPSS Inc., Chicago, IL, USA). All statistical inferences were based on a 5% significance level. The DSS Research calculator was used to calculate the statistical power of the samples, according to Kim25 (2016), which showed a power level > 95.0% for each analysis. The cell spreading images were analyzed descriptively.", "Cell adhesion: As demonstrated in Figure 2A and B, the highest attachment potential occurred in Group FN10, followed by Groups FN5, FN1, PC, LM10, and LM5 (p<0.05). No significant difference was observed between Groups LM1, COL1, COL5, and COL10 and Group NC (p>0.05).\n\nFigure 2. Cell adhesion assay. (A) Mean and standard deviation of the number of cells adhered to the bottom of the plate treated according to the group. Different letters demonstrate a significant difference between groups (n=6; one-way ANOVA/Tukey tests; α=0.05). (B) Representative images of the cell adhesion\n\nCell proliferation: Figure 3 shows that cell proliferation increased over the time-points for all groups (p<0.05), except for those treated with type I collagen and for Group NC (p>0.05). The best cell viability results were observed in the groups treated with 5 or 10 µg/mL of FN after intervals of 1, 3 and 5 days (p<0.05). At the last time-point, the values were 30× higher than in Group NC, 1.7× higher than in Group PC, and 2.7× higher than in the group treated with 10 µg/mL LM. Group FN1 also showed higher cell viability values than Group LM10 at all time-points (p<0.05).\n\nFigure 3. Mean and standard deviation of cell proliferation values (n=6; two-way ANOVA/Tukey tests; α=0.05). Different capital letters indicate a significant difference between the time-points for each group. Different lowercase letters demonstrate a significant difference between groups at each time-point\n\nCell spreading: Figure 4 shows that cells seeded on the bottom of the wells treated with 5 or 10 µg/mL of FN had better spreading on the substrate and a higher rate of proliferation over the time-points. 
In the groups treated with COL and in Group NC, lower numbers of cells, organized in clusters and exhibiting contracted cytoskeletons, remained attached to the bottom of the wells.\n\nFigure 4. Representative images of the F-actin assay (10×). Red fluorescence indicates the actin filaments. Blue fluorescence indicates the cell nucleus\n\nTotal collagen synthesis: Figure 5A shows that an increase in total collagen synthesis occurred in Groups FN10 and FN5, followed by Groups PC and LM10, in comparison with Group NC (p<0.05).\n\nFigure 5. Mean and standard deviation of total collagen (A) and type I collagen (B) synthesis values (n=6; one-way ANOVA/Tukey tests; α=0.05). Different letters demonstrate a significant difference between groups\n\nType I collagen synthesis: Figure 5B demonstrates that more type I collagen was synthesized in Group FN10 (p<0.05), followed by Groups FN5 and PC (p<0.05), which showed no significant difference between them (p>0.05). Groups LM5 and LM10 showed lower values of type I collagen synthesis in comparison with Group PC (p<0.05), but higher than those observed for Group NC (p<0.05).\nRT-qPCR: Figure 6A-D shows that higher levels of ITGA5, ITGAV, and COL1A1 gene expression occurred in Group FN10 in comparison with Group PC (p<0.05). However, COL3A1 gene expression did not differ significantly between the FN10 and PC groups (p>0.05). Group FN5 showed no statistical difference compared with Group PC for any of the evaluated genes (p>0.05), and Groups LM5 and LM10 demonstrated lower gene expression values than Group PC (p<0.05).\n\nFigure 6. Mean and standard deviation of mRNA expression values of the ITGA5 (A), ITGAV (B), COL1A1 (C) and COL3A1 (D) markers (n=4; one-way ANOVA/Tukey tests; α=0.05). Different letters demonstrate a significant difference between groups\n", "The application of tissue engineering concepts for the purpose of pulp tissue regeneration has been relevant, especially in the endodontic treatment of teeth with incomplete root formation. The thin and fragile walls of the root canal, associated with the short length of the root, make these teeth susceptible to fracturing under the forces to which they are submitted.7,26 Conservative treatments that aim to stimulate cell migration from the apical papilla into the root canal and allow the synthesis of a new pulp-like tissue seem to be an interesting alternative that would allow the return of tooth vitality and the continuity of root formation. For this purpose, the use of a potent signaling agent is crucial to stimulate and guide tissue regeneration.5,6 Studies have shown that extracellular matrix proteins at concentrations between 0.5 and 100 µg/mL can act as bioactive agents on cells from different sources.27,30 Therefore, in the present study, we assessed the biological activities of low concentrations of ECMp (fibronectin, laminin and type I collagen) on hAPCs.\nType I collagen can regulate cell behavior through specific signals and trigger several biological activities essential for tissue development and homeostasis.11 However, the different concentrations of COL tested in this investigation showed no bioactive potential on the hAPCs. As observed in the negative control, only a few cells remained attached to the COL over the time-points. These cells, which exhibited cytoplasm contraction, were organized in clusters. 
Parisi, et al.15 (2020) reported that adhesion represents both the interaction between cells and the substrate on which they are seeded, and the interaction and communication among surrounding cells, which activate the mechanisms of proliferation and differentiation. These authors showed that lack of cell adhesion to the substrate and of interaction with surrounding cells resulted in apoptosis.15 Therefore, according to the data obtained in the present study, and given the relevant role of collagen in different cell functions,4,11,31-33 we suggest that the concentrations of type I collagen tested on the hAPCs were insufficient to stimulate cell adhesion, proliferation and spreading. As the study aimed to evaluate and compare the bioactive effects of low dosages of ECMp, the same concentrations of FN, LM and COL were evaluated on hAPCs. However, based upon the negative data obtained for COL, we recommend the assessment of higher concentrations of this type of ECMp on hAPCs in future studies.\nAlthough cell attachment on LM was lower in comparison with the positive control, we observed a dose-response effect on hAPCs when using different concentrations of this protein. The assays using 5 and 10 µg/mL of LM yielded greater cell adhesion, proliferation and spreading in comparison with the negative control. A recent study showed that LM expression in blood vessels regulated the viability and migration of oligodendrocyte precursor cells via β1 integrin-focal adhesion kinase (FAK).30 In addition, Liu, et al.34 (2017) demonstrated in vivo that 316L stainless steel treated with stromal cell-derived factor-1α (SDF-1α)/laminin had increased biocompatibility and improved the healing of vascular lesions. Enhanced adhesion, migration and proliferation of human limbal epithelial stem cells were also demonstrated in tests with specific isoforms of LM incorporated into a fibrin-based hydrogel.12 These positive effects of LM have been shown on Schwann cells33 as well as on hepatocytes and endothelial cells.11,13 Therefore, the results of the present study, in which LM stimulated the adhesion, proliferation and spreading of hAPCs, corroborate the data previously reported by several researchers.11,12,13,33,35\nThis study also showed that the bioactive potential of 1, 5 and 10 µg/mL of FN was concentration-dependent and that FN promoted strong adhesion of hAPCs compared with the positive control and the other experimental groups. Additionally, concentrations of 5 and 10 µg/mL of FN stimulated the highest proliferation and spreading of hAPCs at all time-points. These data are in line with a previous investigation in which the authors demonstrated that an experimental hydrogel containing FN exerted a higher chemotactic effect on NIH 3T3 fibroblasts than the same hydrogel containing LM.32 The role of FN in inducing cell migration and differentiation in dental pulp cells is already known.36,37 Chang, et al.38 (2016) evaluated the effect of titanium surfaces treated with glow discharge plasma followed by FN adsorption on MG-63 osteoblast-like cells. The authors observed that the modified titanium surfaces enhanced cell adhesion, migration and proliferation. Matsui, et al.28 (2015) also reported increased cell adhesion in situ when FN-coated hydroxyapatite was assessed under a defined experimental condition. 
Amplified cell migration, adhesion, spreading and proliferation occurred when periodontal ligament cells were treated with 5 or 10 µg/mL of fibronectin.27 More recently, Parisi, et al.15 (2020) demonstrated that high concentrations of fluid containing FN allowed rapid deposition of this protein on biomaterials, which improved the adhesion and proliferation of cells on them.\nTaking into consideration all the interesting data concerning the positive effects of FN and LM on different cell lines, and based upon the preliminary results obtained in this in vitro study, the two highest concentrations of FN and LM were selected for the subsequent experiments. In the present investigation, only the use of 10 µg/mL of FN significantly increased the synthesis of total collagen, as well as the gene expression and synthesis of type I collagen, in comparison with the positive control. Despite the tendency towards an increase in the expression of type III collagen, there was no significant difference between FN10 and the positive control. Collagen is an abundant protein in the extracellular matrix of pulp tissue, with type I and type III collagen representing about 56% and 41%, respectively, of the organic content.31 Previous studies described the presence of FN in tissue engineering strategies as an approach to induce collagen synthesis and stimulate tissue regeneration.33,39 Collagen and other proteins present in the extracellular matrix have interaction sites with different specificities for cell membrane receptors, enabling the cells to bind and trigger distinct biological activities related to development, homeostasis and the formation of a new tissue.11\nTo better understand the mechanism of action of FN and LM on hAPCs, we also evaluated the gene expression of α5 integrin (ITGA5) and αv integrin (ITGAV); these genes are related to cell migration and adhesion.25,40-42 The interaction of α5β1 and αvβ3 integrins and the role of these membrane receptors in the detection of extracellular matrix proteins have been demonstrated.42 Although these integrins have distinct intracellular biological mechanisms, both act in the detection of specific ligands that trigger particular cell signaling related to the projection mechanisms and directional migration of the cytoskeleton. The joint action of integrins allows cells to adapt to the microenvironment and perform their functions in tissue remodeling and regeneration.44 In the present study, we observed higher gene expression of ITGA5 and ITGAV only for FN10 in comparison with the positive control (p<0.05). In a study conducted with a parental fibroblast lineage, Spoerri, et al.43 (2020) observed that when the α5β1 and αVβ3 integrins were activated by PAR1 and PAR3 proteases, via Gβγ and PI3K signaling, they changed their conformation, thereby enhancing their affinity for FN. Thus, the cell adhesion established was strengthened by means of specific biological signals mediated by PARs via Gα13, Gαi, ROCK and Src. 
Also, the authors reported that cell migration and proliferation were accelerated after cell adhesion was triggered.33 The effect of proteins that contain a functional domain of FN and LM was also observed in a primary culture of human keratinocytes.44 When these cells were subjected to GTPase inhibitors in the presence of FN- and LM-mimic surfaces, the modulation of cell adhesion and migration was dependent on the Rac1 and Rho pathways.44 Nevertheless, in the presence of the α5β1 integrin receptor, FN-mimic was capable of inducing cell migration and increasing keratinocyte adhesion, whereas under these conditions, LM-mimic was unable to stimulate cell motility.44 Therefore, we could suggest that the type and concentration of the ECMp influence cell adhesion and migration, which at least partly explains the strong bioactive potential of 10 µg/mL of FN on the hAPCs observed in the present study.\nOverall, the need to replace the traditional endodontic treatment of teeth with incomplete root formation (which aims to induce apexification and later fill the root canal with definitive materials) has led to the hypothesis that associating the bioactive properties of FN with those of an interesting biomaterial can regenerate the lost pulp tissue. Previous studies have shown that the association of extracellular matrix proteins with scaffolds approximated the conditions of the in vivo environment.13,29,33 Thus, they mediated the mechanical-chemical signals in the processes of adhesion, migration, proliferation and differentiation of various cell types, and in the synthesis of a new matrix.13,29,33 The results obtained in the present study cannot be directly extrapolated to clinical situations. However, in spite of the limitations of this laboratory investigation, it seems appropriate to conduct further studies to assess techniques using the cell-free approaches proposed by Galler and Widbiller7 (2020). In these, the root canals may be filled with biomaterials containing dosages of fibronectin capable of stimulating the migration of cells from the apical papilla and allowing new pulp-like tissue formation.", "According to the methodologies used in the present study, we conclude that 10 µg/mL of fibronectin could act as a potent bioactive agent for human apical papilla cells by inducing cell adhesion, proliferation, spreading and collagen synthesis." ]
[ "intro", "methods", "results", "discussion", "conclusions" ]
[ "Guided tissue regeneration", "Dental pulp", "Fibronectin", "Laminin", "Collagen" ]
Introduction: The concepts of tissue engineering were introduced to Dentistry in the last decade, with special attention given to inducing pulp regeneration.1-4 Regenerative endodontics rests on four principles: 1. disinfection and detoxification of root canals; 2. use of biomaterials as scaffolds for cell adhesion, proliferation, and differentiation; 3. presence of cells with the potential to regenerate a new tissue similar to the original; and 4. use of signaling agents to induce cell migration and enhance the bioactive action of the biomaterial.5,6 Two strategies have emerged as the most promising therapies for tissue regeneration: cell-based approaches and cell-free approaches.7,8 The first involves implanting pre-cultured cells associated with a biomaterial at the site of injury. The bioactive materials provide a porous three-dimensional structure and act as a temporary extracellular matrix where attached cells can grow to regenerate the tissue. The second, cell-free approaches, involves the use of biomaterials along with potent signaling agents. When placed into the injury site, these agents stimulate cell migration towards the site of interest, as well as cell proliferation and differentiation, favoring tissue healing.7,8 Cell-free therapies seem to be of particular interest for teeth with incomplete root formation. A histopathological study of rat teeth with periapical lesions demonstrated that a viable apical papilla was maintained even after 90 days of pulp necrosis.9 The apical papilla is known to be a rich source of stem cells. Therefore, biomaterials associated with potent signaling agents could be used to attract these cells into the root canal. These biomaterials should also promote adhesion, proliferation and differentiation of the stem cells along the entire length of the root canal, so they can synthesize new pulp-like tissue.1 Fibronectin (FN), laminin (LM) and type I collagen (COL) – examples of extracellular matrix proteins (ECMp) – have bioactive properties and have been proposed as chemotactic and inducing agents, since their properties have been described in several biomedical fields.10-15 Previous studies showed that FN, whether associated with other proteins or not, can mediate cell adhesion, migration, proliferation, and differentiation, while also helping in tissue formation, remodeling, repair and regeneration.11,13,15 LM is a complex structure that also regulates cell adhesion, migration and proliferation.10,12 LM aids in maintaining cellular functions, especially in the processes of re-epithelialization and angiogenesis.10 COL is a common protein that surrounds cells throughout the body, with over 90% of this protein being type I. COL can be used to mimic the natural cell environment and, under specific conditions, is effective in promoting stem cell differentiation and growth.13,16 One study characterized the dehydrated human umbilical cord (DHUc) in terms of tissue composition and evaluated its in vitro and in vivo effects.14 The authors observed the presence of COL, hyaluronic acid, FN and LM, among other proteins, which included growth factors, inflammatory modulators and cell signaling regulators, in the DHUc. They observed increased migration of fibroblasts and a higher level of mesenchymal stem cell proliferation, associated with concentration-dependent induction of angiogenic potential in vitro, when cells were treated with DHUc. 
When implanted into the subcutaneous tissue of rats, this biomaterial proved to be biocompatible and biodegradable.14 Researchers have reported that decellularized human dental pulp, which maintains several proteins of this specialized connective tissue, such as COL, FN and LM, is a suitable scaffold to mimic the complexity of the dental pulp extracellular matrix.17,18 Decellularized biological scaffolds allow stem cells from the apical papilla17 as well as human dental pulp stem cells18 to attach and proliferate while being stimulated to differentiate into odontoblast-like cells close to the dentin substrate.18 Despite the role of extracellular matrix proteins in pulp regeneration and in the formation of a new pulp-like tissue, the data concerning the influence of FN, LM and COL on the metabolism and function of human apical papilla cells (hAPCs) – especially when young teeth with incomplete root formation have lost their vitality – are scarce. Thus, this in vitro study assessed the effects of FN, LM and COL on the adhesion, proliferation and spreading of hAPCs, as well as on the potential of these cells to synthesize collagen and express genes related to pulp regeneration. Methodology: Firstly, approval of the research was obtained from the Research Ethics Committee of Araraquara School of Dentistry/UNESP, São Paulo, Brazil (protocol Nº. 80806617.3.0000.5416), as well as signed terms of informed consent from the patients and their guardians, according to the Declaration of Helsinki. Subsequently, the hAPCs were obtained from the apical papilla of four healthy third molars with incomplete root formation. These teeth were provided by volunteers aged 16 to 18 years old, of both sexes, who were patients at the surgery clinic. The teeth were extracted for orthodontic reasons. To obtain the primary culture of hAPCs, the extracted teeth were immediately immersed in Minimum Essential Medium Eagle Alpha culture medium (α-MEM; GIBCO, Carlsbad, CA, USA) supplemented with antibiotic and antifungal agents (100 U/mL penicillin, 100 µg/mL streptomycin, and 0.25 µg/mL amphotericin; GIBCO) at 4ºC, for 1 hour. In a procedure performed in a biosafety cabinet (Bio Protector 12 Plus; VECO, Campinas, SP, Brazil), the papilla was sectioned with a sterile scalpel blade at the apical limit of the root. This tissue was transferred to a 1.5 mL tube containing phosphate buffered saline (PBS 1X; GIBCO), followed by mechanical disintegration with sterile surgical scissors. Then, the apical papilla fragments were incubated in α-MEM containing 3 mg/mL type I collagenase (Sigma-Aldrich, Saint Louis, MO, USA) at 37°C and 5% CO2 for 2 h. After this period, the cells were centrifuged, washed with PBS 1X, and later seeded in wells of a 6-well plate (Corning, Tewksbury, MA, USA); for the seeding, we used α-MEM supplemented with 10% fetal bovine serum (FBS), 100 U/mL penicillin, 100 µg/mL streptomycin, and 0.25 µg/mL amphotericin (GIBCO). After 3 h of incubation, the supernatant was discarded and the cells that remained attached to the bottom of the wells were sub-cultured in 100×15 mm petri dishes (Corning). Cells at passages 3 to 5 were used in the experiments. Fibronectin (2.0 mg/mL, bovine plasma; Sigma-Aldrich), laminin (1 mg/mL, Engelbreth-Holm-Swarm lathyritic mouse tumor; Santa Cruz Biotechnology), and type I collagen (3.67 mg/mL, rat tail; Corning, Bedford, MA, USA) were used in this study. 
These ECMps were diluted in PBS 1X to obtain concentrations of 1, 5 and 10 µg/mL, which were applied (50 µL) to the bottom of non-pretreated wells of sterilized polystyrene 96-well plates (Corning; Product Number: 3370). The 96-well plates were then centrifuged at 1500 rpm at 4ºC for 10 min. After keeping the plates at 4ºC for 18 h, the remaining material was aspirated. The plates were then incubated with 0.5% bovine serum albumin (BSA; Santa Cruz Biotechnology) for 10 min to block the uncoated surface,19,20 followed by washing with PBS 1X at 37ºC. All these steps were performed for both the negative (NC) and positive (PC) control groups. In the NC, PBS solution without ECMp was applied to the bottom of non-pretreated wells of sterilized polystyrene 96-well plates (Corning; Product Number: 3370). In the PC, PBS solution without ECMp was applied to the bottom of wells of polystyrene 96-well plates that were pre-treated by the manufacturer for cell culture purposes (Corning; Product Number: 3395). Table 1 shows all groups established according to the polystyrene 96-well plates used, as well as the type and concentration of ECMp. Table 1. Experimental and control groups according to type of polystyrene 96-well plate and type and concentration of extracellular matrix protein (ECMp). Columns: group; wells pre-treated by the manufacturer; ECMp; concentration (µg/mL). Rows: NC (negative control), -, -, 0; PC (positive control), +, -, 0; FN1, -, fibronectin, 1; FN5, -, fibronectin, 5; FN10, -, fibronectin, 10; LM1, -, laminin, 1; LM5, -, laminin, 5; LM10, -, laminin, 10; COL1, -, type I collagen, 1; COL5, -, type I collagen, 5; COL10, -, type I collagen, 10. After performing the treatments of the wells with ECMp, the hAPCs (5×10³ cells/well) were seeded in the respective wells and incubated at 37ºC and 5% CO2 for up to 5 days, with the culture medium being changed every 48 h. Then, the bioactivity parameters were evaluated. All assays were performed twice to ensure data reproducibility. Cell adhesion: Since the non-treated 96-well plates did not allow cell attachment to the bottom of the wells, the attachment potential of the microplates treated with different concentrations of ECMp was assessed. For this purpose, protocols of dynamic systems, in which microchannels are treated with proteins to induce cell chemotaxis,19-21 were adapted to this study. After 24 h of applying the cell suspensions to the plate wells (n=6), the hAPCs capable of migrating and attaching to the bottom of the wells were evaluated under a light microscope. For this purpose, the cells were fixed with 2.5% glutaraldehyde for 2 h, washed with distilled water and stained with 0.1% crystal violet for 15 min. For quantitative analysis, photographs (1 field from the central region of each well of the microplate; 10× magnification) were obtained with a camera (Olympus C5060, Miami, FL, USA) coupled to a microscope (Olympus BX51, Miami, FL, USA). The images were evaluated using ImageJ 1.45S software (Wayne Rasband, National Institutes of Health, USA) to count the number of cells in each image. Cell proliferation: This analysis was performed by the MTT assay (n=6). After intervals of 1, 3 and 5 days in α-MEM, the cells were incubated for 4 h in α-MEM supplemented with 10% MTT solution (5 mg/mL; Sigma-Aldrich).22 Then, the formazan crystals produced were dissolved in acidified isopropanol and the absorbance of the resulting solution was measured at 570 nm (Synergy H1, Biotek, Winooski, VT, USA). The mean absorbance value obtained in the negative control group over the 1-day period was considered 100% cell proliferation. 
Cell spreading: For this analysis (n=4), after intervals of 1, 3 and 5 days in α-MEM, the cells were fixed in 4% paraformaldehyde, permeabilized in 0.1% Triton X-100 (Sigma-Aldrich) and incubated with Actin Red 555 probe (Life Technologies; Grand Island, NY, USA) in 2% BSA (1:20) to detect the actin filaments. After washing the cells with PBS, they were incubated with Hoechst (1:5000; Invitrogen, Carlsbad, CA, USA) for 15 min for nuclear counter-staining.23 The F-actin staining was then analyzed using a fluorescence microscope (Leica DM 5500B). Total collagen synthesis: The cells were cultured for 5 days in α-MEM without FBS, and the culture medium was collected and replaced every 48 h. The pool of collected culture medium of each sample was stored at -20ºC until the Sirius Red assay was performed (n=6). For this purpose, the pooled medium was transferred to 1.5 mL tubes containing 0.1% Direct Red solution in saturated picric acid, and then incubated for 1 hour under agitation at 400 rpm in a dry bath at 25°C. The tubes were centrifuged, the supernatant was discarded, and 0.01 M hydrochloric acid was added. The tubes were centrifuged again and the supernatant was discarded, followed by the addition of 0.5 M sodium hydroxide to solubilize the precipitated material.24 The resulting solution from each sample was subjected to absorbance analysis in a spectrophotometer at 555 nm (Synergy H1).24 The percentage of total collagen synthesis for each sample was calculated based on the mean absorbance values of the negative control group. Type I collagen synthesis: The medium pool collected in the previous analysis (n=6) was also used to evaluate type I collagen synthesis (COL-I), which was detected by enzyme-linked immunosorbent assay (ELISA) performed with a standardized kit (Duoset human COL-I; R&D Systems, Minneapolis, MN, USA), according to the manufacturer's instructions. For this purpose, 96-well plates for ELISA (Corning) were treated with 4 µg/mL of Capture Antibody and incubated overnight. After this period, the wells were incubated with Reagent Diluent 1X for 1 hour. Then, standardized aliquots of the samples and standard curve points were applied and incubated for 2 h. After this, the wells were maintained with 100 ng/mL Detection Solution for 2 h, followed by incubation in a dark room with 2.5% Streptavidin-HRP for 20 min. The wells were washed with Wash Solution 1X between all steps described. Finally, Substrate Solution (color reagent A + color reagent B; 1:1) was applied and, after 20 minutes, the Stop Solution was added. The absorbance reading was performed at a wavelength of 450 nm in a spectrophotometer (Synergy H1). The concentration of type I collagen for each sample was determined according to the standard curve. The percentage of type I collagen synthesis for each sample was calculated based on the average concentration values (in ng/mL) of the positive control group.
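The back-calculation from absorbance to concentration via the standard curve can be sketched as follows. The standard-curve points and the sample reading are hypothetical, and commercial kits are often fitted with a four-parameter logistic curve rather than the piecewise-linear interpolation used here for simplicity.

```python
# Reading type I collagen concentrations off an ELISA standard curve by
# linear interpolation between standard points, then expressing the result
# as a percentage of the positive-control mean, as described in the text.
import numpy as np

std_conc_ng_ml = np.array([0.0, 31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])  # hypothetical
std_a450 = np.array([0.05, 0.11, 0.19, 0.34, 0.62, 1.10, 1.85])             # hypothetical

def concentration_from_a450(a450: float) -> float:
    """Interpolate a sample concentration (ng/mL) from its A450 reading."""
    return float(np.interp(a450, std_a450, std_conc_ng_ml))

sample_a450 = 0.48          # hypothetical sample reading
pc_mean_conc = 310.0        # hypothetical positive-control mean (ng/mL)
conc = concentration_from_a450(sample_a450)
print(f"{conc:.1f} ng/mL = {100 * conc / pc_mean_conc:.1f}% of the positive control")
```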
Reverse transcription followed by quantitative polymerase chain reaction (RT-qPCR): After incubating the cells in α-MEM for 5 days (n=4), the RNAqueous kit (Ambion, Grand Island, NY, USA) was used to isolate the RNA in accordance with the manufacturer's instructions. To obtain sufficient RNA for the reaction, a pool of cells obtained from two wells was used for each group. Then, 500 ng of total RNA associated with random hexamer primers and Moloney leukemia virus reverse transcriptase were used for cDNA synthesis (n=4), according to the instructions of the High Capacity RT kit supplier (Applied Biosystems, Foster City, CA, USA). Gene expression of ITGA5, ITGAV, COL1A1 and COL3A1 (Figure 1) was analyzed by qPCR, using pre-designed sets of primers and probes (Gene expression assays, Applied Biosystems) to detect the target genes by the TaqMan system (TaqMan Universal PCR Master Mix, Applied Biosystems) with the StepOne Plus equipment (Applied Biosystems). The cycling conditions optimized by the manufacturer were used, and the cycle threshold (CT) values for each sample were calculated using the thermal cycler software and analyzed by the 2^(−ΔΔCT) method, where ΔΔCT = (CT,target − CT,GAPDH)sample − (CT,target − CT,GAPDH)positive control. The results were normalized to the constitutive gene expression (GAPDH) and expressed as a fold change in relation to the positive control. Figure 1. TaqMan assays used to analyze gene expression. Statistical analysis: The data from the cell adhesion, viability, total collagen synthesis, type I collagen synthesis, and RT-qPCR assays were evaluated for adherence to the normal curve (Shapiro-Wilk test, p>0.05) and homoscedasticity (Levene test, p>0.05). Then, the data were submitted to one-way or two-way ANOVA, followed by the Tukey post-hoc test, using SPSS 20.0 software (SPSS Inc., Chicago, IL, USA). All statistical inferences were based on a 5% level of significance. The DSS Research calculator was used to calculate the statistical power of the samples, according to Kim25 (2016), which showed a power level > 95.0% for each analysis. The cell spreading images were analyzed descriptively.
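The same workflow (normality check, homoscedasticity check, one-way ANOVA, Tukey post hoc) can be sketched outside SPSS; the sketch below uses SciPy with synthetic placeholder data and is not a reproduction of the study's dataset.

```python
# Normality (Shapiro-Wilk), homoscedasticity (Levene), one-way ANOVA and
# Tukey HSD, mirroring the analysis described above. Group data are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {g: rng.normal(loc, 0.05, size=6)            # n=6 per group (synthetic)
          for g, loc in [("NC", 0.2), ("PC", 0.8), ("FN10", 1.3)]}

for name, values in groups.items():                   # Shapiro-Wilk per group
    _, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk p={p:.3f}")

_, p_lev = stats.levene(*groups.values())             # homoscedasticity
_, p_anova = stats.f_oneway(*groups.values())         # one-way ANOVA
print(f"Levene p={p_lev:.3f}, ANOVA p={p_anova:.4f}")

res = stats.tukey_hsd(*groups.values())               # Tukey HSD (recent SciPy)
print(res)
```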
Results: Cell adhesion: As demonstrated in Figures 2A and B, the highest attachment potential occurred in Group FN10, followed by Groups FN5, FN1, PC, LM10, and LM5 (p<0.05). No significant difference was observed between Groups LM1, COL1, COL5, and COL10 and Group NC (p>0.05). Figure 2. Cell adhesion assay. (A) Mean and standard deviation of the number of cells adhered to the bottom plate treated according to the group. Different letters demonstrate a significant difference between groups (n=6; one-way ANOVA/Tukey tests; α=0.05). (B) Representative images of the cell adhesion. Cell proliferation: Figure 3 shows that cell proliferation increased over the time-points for all groups (p<0.05), except for those treated with type I collagen and Group NC (p>0.05). The best cell viability results were observed in the groups treated with 5 or 10 µg/mL of FN after intervals of 1, 3 and 5 days (p<0.05). At the last time-point, this increase was 30× higher than in Group NC, 1.7× higher than in Group PC, and 2.7× higher than in the group treated with 10 µg/mL LM. Group FN1 also showed higher cell viability values than Group LM10 at all time-points (p<0.05). Figure 3. Mean and standard deviation of cell proliferation values (n=6; two-way ANOVA/Tukey tests; α=0.05). Different capital letters indicate a significant difference between the time-points for each group. Different lowercase letters demonstrate a significant difference between groups at each time-point. Cell spreading: Figure 4 shows that cells seeded on the bottom of the wells treated with 5 or 10 µg/mL of FN had better spreading on the substrate and a higher rate of proliferation over the time-points. In the groups treated with COL and in Group NC, lower numbers of cells, organized in clusters and exhibiting contracted cytoskeletons, remained attached to the bottom of the wells. Figure 4. Representative images of the F-actin assay (10×). Red fluorescence indicates the actin filaments; blue fluorescence indicates the cell nucleus. Total collagen synthesis: Figure 5A shows that an increase in total collagen synthesis occurred in Groups FN10 and FN5, followed by Groups PC and LM10, in comparison with Group NC (p<0.05). Figure 5. Mean and standard deviation of total collagen (A) and type I collagen (B) synthesis values (n=6; one-way ANOVA/Tukey tests; α=0.05). Different letters demonstrate a significant difference between groups. Type I collagen synthesis: Figure 5B demonstrates that more type I collagen was synthesized in Group FN10 (p<0.05), followed by Groups FN5 and PC (p<0.05), which showed no significant difference between them (p>0.05). Groups LM5 and LM10 showed lower values of type I collagen synthesis in comparison with Group PC (p<0.05), but higher than those observed for Group NC (p<0.05). RT-qPCR: Figure 6A-D shows that higher levels of ITGA5, ITGAV, and COL1A1 gene expression occurred in Group FN10 in comparison with Group PC (p<0.05). However, COL3A1 gene expression did not differ significantly between Groups FN10 and PC (p>0.05). Group FN5 showed no statistical difference compared with Group PC for all evaluated genes (p>0.05), and Groups LM5 and LM10 demonstrated lower values of gene expression than Group PC (p<0.05). Figure 6. Mean and standard deviation of mRNA expression values of the ITGA5 (A), ITGAV (B), COL1A1 (C) and COL3A1 (D) markers (n=4; one-way ANOVA/Tukey tests; α=0.05). Different letters demonstrate a significant difference between groups. Discussion: The application of tissue engineering concepts for the purpose of pulp tissue regeneration has been relevant, especially in the endodontic treatment of teeth with incomplete root formation. The thin and fragile walls of the root canal, associated with the short length of the root, make these teeth susceptible to fracturing under the forces to which they are subjected.7,26 Conservative treatments that aim to stimulate cell migration from the apical papilla into the root canal and allow the synthesis of a new pulp-like tissue seem to be an interesting alternative that would allow the return of tooth vitality and the continuity of root formation. For this purpose, the use of a potent signaling agent is crucial to stimulate and guide tissue regeneration.5,6 Studies have shown that extracellular matrix proteins at concentrations between 0.5 and 100 µg/mL can act as bioactive agents on cells from different sources.27,30 Therefore, in the present study, we assessed the biological activities of low concentrations of ECMp (fibronectin, laminin and type I collagen) on hAPCs. Type I collagen can regulate cell behavior through specific signals and trigger several biological activities essential for tissue development and homeostasis.11 However, the different concentrations of COL tested in this investigation showed no bioactive potential on the hAPCs. As observed in the negative control, only a few cells remained attached to the COL over the time-points. These cells, which exhibited cytoplasm contraction, were organized in clusters.
Parisi, et al.15 (2020) reported that adhesion represents both the interaction between cells and the substrate on which they were seeded, and the interaction and communication among surrounding cells, which activate the mechanisms of proliferation and differentiation. These authors showed that lack of cell adhesion to the substrate and of interaction with surrounding cells resulted in apoptosis.15 Therefore, according to the data obtained in the present study, and given the relevant role of collagen in different cell functions,4,11,31-33 we suggest that the concentrations of type I collagen tested on the hAPCs were insufficient to stimulate cell adhesion, proliferation and spreading. As the study aimed to evaluate and compare the bioactive effects of low dosages of ECMp, the same concentrations of FN, LM and COL were evaluated on hAPCs. However, based on the negative data obtained for COL, we recommend the assessment of higher concentrations of this type of ECMp on hAPCs in future studies. Although cell attachment on LM was lower in comparison with the positive control, we observed a dose-response effect on hAPCs when using different concentrations of this protein. The assays using 5 and 10 µg/mL of LM yielded greater cell adhesion, proliferation and spreading in comparison with the negative control. A recent study showed that LM expression in blood vessels regulated the viability and migration of oligodendrocyte precursor cells via β1 integrin-focal adhesion kinase (FAK).30 In addition, Liu, et al.34 (2017) demonstrated in vivo that 316L stainless steel treated with stromal cell-derived factor-1α (SDF-1α)/laminin had increased biocompatibility and improved the healing of vascular lesions. Enhanced adhesion, migration and proliferation of human limbal epithelial stem cells was also demonstrated in tests with specific isoforms of LM incorporated into a fibrin-based hydrogel.12 These positive effects of LM have been shown on Schwann cells33 as well as on hepatocytes and endothelial cells.11,13 Therefore, the results of the present study, in which LM stimulated the adhesion, proliferation and spreading of hAPCs, corroborate the data previously reported by several researchers.11,12,13,33,35 This study also showed that the bioactive potential of 1, 5 and 10 µg/mL of FN was concentration-dependent and promoted strong adhesion of hAPCs compared with the positive control and the other experimental groups. Additionally, concentrations of 5 and 10 µg/mL of FN stimulated the highest proliferation and spreading of hAPCs at all time-points. These data are in line with a previous investigation in which the authors demonstrated that an experimental hydrogel containing FN exerted a higher chemotactic effect on NIH 3T3 fibroblasts than the same hydrogel containing LM.32 The role of FN in inducing cell migration and differentiation of dental pulp cells is already known.36,37 Chang, et al.38 (2016) evaluated the effect of titanium surfaces treated with glow discharge plasma followed by FN adsorption on MG-63 osteoblast-like cells. The authors observed that the modified titanium surfaces enhanced cell adhesion, migration and proliferation. Matsui, et al.28 (2015) also reported increased cell adhesion in situ when FN-coated hydroxyapatite was assessed under a defined experimental condition.
Amplified cell migration, adhesion, spreading and proliferation occurred when periodontal ligament cells were treated with 5 or 10 µg/mL of fibronectin.27 More recently, Parisi, et al.15 (2020) demonstrated that high concentrations of fluid containing FN allowed rapid deposition of this protein on biomaterials, which improved the adhesion and proliferation of cells on them. Taking into consideration all the interesting data concerning the positive effects of FN and LM on different cell lines, and based on the preliminary results obtained in this in vitro study, the two highest concentrations of FN and LM were selected for the subsequent experiments. In the present investigation, only the use of 10 µg/mL of FN significantly increased the synthesis of total collagen, as well as the gene expression and synthesis of type I collagen, in comparison with the positive control. Despite a tendency towards increased expression of type III collagen, there was no significant difference between FN10 and the positive control. Collagen is an abundant protein in the extracellular matrix of pulp tissue, with type I and type III collagen representing about 56% and 41%, respectively, of the organic content.31 Previous studies described the presence of FN in tissue engineering strategies as an approach to induce collagen synthesis and stimulate tissue regeneration.33,39 Collagen and other proteins present in the extracellular matrix have interaction sites with different specificities for cell membrane receptors, enabling the cells to bind and trigger distinct biological activities related to development, homeostasis and formation of a new tissue.11 To better understand the mechanism of action of FN and LM on hAPCs, we also evaluated the gene expression of α5 integrin (ITGA5) and αv integrin (ITGAV); these genes are related to cell migration and adhesion.25,40-42 The interaction of α5β1 and αvβ3 integrins and the role of these membrane receptors in the detection of extracellular matrix proteins have been demonstrated.42 Although these integrins have distinct intracellular biological mechanisms, both act in the detection of specific ligands that trigger particular cell signaling related to cytoskeletal projection mechanisms and directional migration. The joint action of integrins allows cells to adapt to the microenvironment and perform their functions in tissue remodeling and regeneration.44 In the present study, we observed higher gene expression of ITGA5 and ITGAV only for FN10 in comparison with the positive control (p<0.05). In a study conducted with a parental fibroblast lineage, Spoerri, et al.43 (2020) observed that when the α5β1 and αVβ3 integrins were activated by PAR1 and PAR3 proteases, via Gβγ and PI3K signaling, they changed their conformation, thereby enhancing their affinity for FN. Thus, the cell adhesion established was strengthened by means of specific biological signals mediated by PARs via Gα13, Gαi, ROCK and Src.
The authors also reported that cell migration and proliferation were accelerated after cell adhesion was triggered.33 The effect of proteins that contain a functional domain of FN and LM was also observed in a primary culture of human keratinocytes.44 When these cells were subjected to GTPase inhibitors in the presence of FN- and LM-mimic surfaces, the modulation of cell adhesion and migration was dependent on the Rac1 and Rho pathways.44 Nevertheless, in the presence of the α5β1 integrin receptor, the FN-mimic was capable of inducing cell migration and increasing keratinocyte adhesion, whereas under these conditions, the LM-mimic was unable to stimulate cell motility.44 Therefore, we suggest that the type and concentration of the ECMp influence cell adhesion and migration, which at least partly explains the strong bioactive potential of 10 µg/mL of FN on the hAPCs observed in the present study. Overall, the need to replace the traditional endodontic treatment of teeth with incomplete root formation (which aims to induce apexification and later fill the root canal with definitive materials) has led to the hypothesis that associating the bioactive properties of FN with those of a suitable biomaterial can regenerate the lost pulp tissue. Previous studies have shown that the association of extracellular matrix proteins with scaffolds approximated the conditions of the in vivo environment.13,29,33 Thus, they mediated the mechanical-chemical signals in the processes of adhesion, migration, proliferation and differentiation of various cell types, and in the synthesis of a new matrix.13,29,33 The results obtained in the present study cannot be directly extrapolated to clinical situations. However, in spite of the limitations of this laboratory investigation, it seems appropriate to conduct further studies to assess techniques using the cell-free approaches proposed by Galler and Widbiller7 (2020), in which the root canals may be filled with biomaterials containing dosages of fibronectin capable of stimulating the migration of cells from the apical papilla and allowing new pulp-like tissue formation. Conclusions: According to the methodologies used in the present study, we conclude that 10 µg/mL of fibronectin could act as a potent bioactive agent for human apical papilla cells by inducing cell adhesion, proliferation, spreading and collagen synthesis.
Background: Potent signaling agents stimulate and guide pulp tissue regeneration, especially in endodontic treatment of teeth with incomplete root formation. Methods: Different concentrations (1, 5, and 10 µg/mL) of fibronectin (FN), laminin (LM), and type I collagen (COL) were applied to the bottom of non-treated wells of sterilized 96-well plates. Non-treated and pre-treated wells were used as negative (NC) and positive (PC) controls. After seeding the hAPCs (5×10³ cells/well) on the different substrates, we assessed the following parameters: adhesion, proliferation, spreading, total collagen/type I collagen synthesis and gene expression (ITGA5, ITGAV, COL1A1, COL3A1) (ANOVA/Tukey; α=0.05). Results: We observed greater attachment potential for cells on the FN substrate, with the effect depending on concentration. Concentrations of 5 and 10 µg/mL of FN yielded the highest cell proliferation, spreading and collagen synthesis values, with the 10 µg/mL concentration increasing ITGA5, ITGAV, and COL1A1 expression compared with PC. LM (5 and 10 µg/mL) showed higher bioactivity values than NC, but those were lower than PC, and COL showed no bioactivity at all. Conclusions: We conclude that FN at 10 µg/mL concentration exerted the most intense bioactive effects on hAPCs.
Introduction: The concepts of tissue engineering were introduced to Dentistry in the last decade, with special attention given to inducing pulp regeneration.1-4 The principles of regenerative endodontics are four: 1. disinfection and detoxification of root canals; 2. use of biomaterials as scaffolds for cell adhesion, proliferation, and differentiation; 3. presence of cells with the potential to regenerate a new tissue similar to the original; and 4. use of signaling agents to induce cell migration and enhance the bioactive action of the biomaterial.5,6 Two strategies have emerged as the most promising therapies for tissue regeneration: cell approaches and cell-free approaches.7,8 The first involves implanting pre-cultured cells associated with a biomaterial at the site of injury. The bioactive materials provide a porous three-dimensional structure and act as a temporary extracellular matrix where attached cells can grow to regenerate the tissue. The second, cell-free approaches, involve the use of biomaterials along with potent signaling agents. When placed into the injury site, these agents stimulate cell migration towards the site of interest, as well as cell proliferation and differentiation, favoring tissue healing.7,8 Cell-free therapies seem to be interesting for teeth with incomplete root formation. A histopathological study of rat teeth with periapical lesions demonstrated that a viable apical papilla was maintained even after 90 days of pulp necrosis.9 The apical papilla is known to be a rich source of stem cells. Therefore, biomaterials associated with potent signaling agents could be used to attract these cells into the root canal. These biomaterials should also promote adhesion, proliferation and differentiation of the stem cells along the entire length of the root canal, so they can synthesize new pulp-like tissue.1 Fibronectin (FN), laminin (LM) and type I collagen (COL) – examples of extracellular matrix proteins (ECMp) – have bioactive properties and have been proposed as chemotactic and inducing agents, since scientists have described their properties in several biomedical fields.10-15 Previous studies showed that FN, whether or not associated with other proteins, can mediate cell adhesion, migration, proliferation, and differentiation, while also helping in tissue formation, remodeling, repair and regeneration.11,13,15 LM is a complex structure that also regulates cell adhesion, migration and proliferation.10,12 LM aids in maintaining cellular functions, especially in the processes of reepithelialization and angiogenesis.10 COL is a common protein that surrounds cells throughout the body, with over 90% of this protein being type I. COL can be used to mimic the natural cell environment and, under specific conditions, is effective in promoting stem cell differentiation and growth.13,16 One study characterized the dehydrated human umbilical cord (DHUc) in terms of tissue composition and evaluated its in vitro and in vivo effects.14 The authors observed the presence of COL, hyaluronic acid, FN and LM, among other proteins, which included growth factors, inflammatory modulators and cell signaling regulators, in the DHUc. They observed increased migration of fibroblasts and a higher level of mesenchymal stem cell proliferation, associated with a concentration-dependent induction of angiogenic potential in vitro, when cells were treated with DHUc.
When implanted into subcutaneous tissue of rats, this biomaterial proved to be biocompatible and biodegradable.14 Researchers have reported that decellularized human dental pulp, which maintains several proteins of this specialized connective tissue, such as COL, FN and LM, is a suitable scaffold to mimic the complexity of the dental pulp extracellular matrix.17,18 Decellularized biological scaffolds allow stem cells from apical papilla17 as well as human dental pulp stem cells18 to attach and proliferate while being stimulated to differentiate into odontoblast-like cells close to the dentin substrate.18 Despite the role of extracellular matrix proteins in pulp regeneration and in the formation of a new pulp-like tissue, data concerning the influence of FN, LM and COL on the metabolism and function of human apical papilla cells (hAPCs) – especially when young teeth with incomplete root formation have lost their vitality – are scarce. Thus, this in vitro study assessed the effects of FN, LM and COL on the adhesion, proliferation and spreading of hAPCs, as well as on the potential of these cells to synthesize collagen and express genes related to pulp regeneration. Conclusions: According to the methodologies used in the present study, we conclude that 10 µg/mL of fibronectin could act as a potent bioactive agent for human apical papilla cells by inducing cell adhesion, proliferation, spreading and collagen synthesis.
Background: Potent signaling agents stimulate and guide pulp tissue regeneration, especially in endodontic treatment of teeth with incomplete root formation. Methods: Different concentrations (1, 5, and 10 µg/mL) of fibronectin (FN), laminin (LM), and type I collagen (COL) were applied to the bottom of non-treated wells of sterilized 96-well plates. Non-treated and pre-treated wells were used as negative (NC) and positive (PC) controls. After seeding the hAPCs (5×10³ cells/well) on the different substrates, we assessed the following parameters: adhesion, proliferation, spreading, total collagen/type I collagen synthesis and gene expression (ITGA5, ITGAV, COL1A1, COL3A1) (ANOVA/Tukey; α=0.05). Results: We observed greater attachment potential for cells on the FN substrate, with the effect depending on concentration. Concentrations of 5 and 10 µg/mL of FN yielded the highest cell proliferation, spreading and collagen synthesis values, with the 10 µg/mL concentration increasing ITGA5, ITGAV, and COL1A1 expression compared with PC. LM (5 and 10 µg/mL) showed higher bioactivity values than NC, but those were lower than PC, and COL showed no bioactivity at all. Conclusions: We conclude that FN at 10 µg/mL concentration exerted the most intense bioactive effects on hAPCs.
5,394
269
[]
5
[ "cell", "cells", "collagen", "adhesion", "ml", "type", "fn", "proliferation", "group", "tissue" ]
[ "tissue regeneration cell", "cells root canal", "dental pulp extracellular", "therapies tissue regeneration", "root canal biomaterials" ]
[CONTENT] Guided tissue regeneration | Dental pulp | Fibronectin | Laminin | Collagen [SUMMARY]
[CONTENT] Guided tissue regeneration | Dental pulp | Fibronectin | Laminin | Collagen [SUMMARY]
[CONTENT] Guided tissue regeneration | Dental pulp | Fibronectin | Laminin | Collagen [SUMMARY]
[CONTENT] Guided tissue regeneration | Dental pulp | Fibronectin | Laminin | Collagen [SUMMARY]
[CONTENT] Guided tissue regeneration | Dental pulp | Fibronectin | Laminin | Collagen [SUMMARY]
[CONTENT] Guided tissue regeneration | Dental pulp | Fibronectin | Laminin | Collagen [SUMMARY]
[CONTENT] Cell Adhesion | Cells, Cultured | Collagen Type I | Extracellular Matrix | Extracellular Matrix Proteins | Fibronectins | Humans | Laminin [SUMMARY]
[CONTENT] Cell Adhesion | Cells, Cultured | Collagen Type I | Extracellular Matrix | Extracellular Matrix Proteins | Fibronectins | Humans | Laminin [SUMMARY]
[CONTENT] Cell Adhesion | Cells, Cultured | Collagen Type I | Extracellular Matrix | Extracellular Matrix Proteins | Fibronectins | Humans | Laminin [SUMMARY]
[CONTENT] Cell Adhesion | Cells, Cultured | Collagen Type I | Extracellular Matrix | Extracellular Matrix Proteins | Fibronectins | Humans | Laminin [SUMMARY]
[CONTENT] Cell Adhesion | Cells, Cultured | Collagen Type I | Extracellular Matrix | Extracellular Matrix Proteins | Fibronectins | Humans | Laminin [SUMMARY]
[CONTENT] Cell Adhesion | Cells, Cultured | Collagen Type I | Extracellular Matrix | Extracellular Matrix Proteins | Fibronectins | Humans | Laminin [SUMMARY]
[CONTENT] tissue regeneration cell | cells root canal | dental pulp extracellular | therapies tissue regeneration | root canal biomaterials [SUMMARY]
[CONTENT] tissue regeneration cell | cells root canal | dental pulp extracellular | therapies tissue regeneration | root canal biomaterials [SUMMARY]
[CONTENT] tissue regeneration cell | cells root canal | dental pulp extracellular | therapies tissue regeneration | root canal biomaterials [SUMMARY]
[CONTENT] tissue regeneration cell | cells root canal | dental pulp extracellular | therapies tissue regeneration | root canal biomaterials [SUMMARY]
[CONTENT] tissue regeneration cell | cells root canal | dental pulp extracellular | therapies tissue regeneration | root canal biomaterials [SUMMARY]
[CONTENT] tissue regeneration cell | cells root canal | dental pulp extracellular | therapies tissue regeneration | root canal biomaterials [SUMMARY]
[CONTENT] cell | cells | collagen | adhesion | ml | type | fn | proliferation | group | tissue [SUMMARY]
[CONTENT] cell | cells | collagen | adhesion | ml | type | fn | proliferation | group | tissue [SUMMARY]
[CONTENT] cell | cells | collagen | adhesion | ml | type | fn | proliferation | group | tissue [SUMMARY]
[CONTENT] cell | cells | collagen | adhesion | ml | type | fn | proliferation | group | tissue [SUMMARY]
[CONTENT] cell | cells | collagen | adhesion | ml | type | fn | proliferation | group | tissue [SUMMARY]
[CONTENT] cell | cells | collagen | adhesion | ml | type | fn | proliferation | group | tissue [SUMMARY]
[CONTENT] tissue | pulp | cell | cells | agents | stem | migration | differentiation | regeneration | fn [SUMMARY]
[CONTENT] usa | wells | ml | incubated | solution | plates | applied | mem | performed | 96 [SUMMARY]
[CONTENT] group | 05 | groups | figure | difference | pc | significant | significant difference | group nc | letters [SUMMARY]
[CONTENT] study conclude | act potent bioactive | fibronectin act potent | potent bioactive agent human | fibronectin act potent bioactive | potent bioactive agent | potent bioactive | ml fibronectin act potent | ml fibronectin act | proliferation spreading collagen [SUMMARY]
[CONTENT] cell | cells | group | fn | tissue | ml | 05 | adhesion | collagen | proliferation [SUMMARY]
[CONTENT] cell | cells | group | fn | tissue | ml | 05 | adhesion | collagen | proliferation [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] 1 | 5 | 10 | COL | 96 ||| NC ||| ITGA5 | ITGAV | ANOVA/Tukey [SUMMARY]
[CONTENT] ||| 5 | 10 | ITGA5 | ITGAV ||| 5 | NC | COL [SUMMARY]
[CONTENT] 10 [SUMMARY]
[CONTENT] ||| 1 | 5 | 10 | COL | 96 ||| NC ||| ITGA5 | ITGAV | ANOVA/Tukey ||| ||| ||| 5 | 10 | ITGA5 | ITGAV ||| 5 | NC | COL ||| 10 [SUMMARY]
[CONTENT] ||| 1 | 5 | 10 | COL | 96 ||| NC ||| ITGA5 | ITGAV | ANOVA/Tukey ||| ||| ||| 5 | 10 | ITGA5 | ITGAV ||| 5 | NC | COL ||| 10 [SUMMARY]
Azole resistance in Aspergillus fumigatus. The first 2 years' data from the Danish National Surveillance Study, 2018-2020.
35104010
Azole resistance complicates the treatment of patients with invasive aspergillosis and is associated with increased mortality. Azole resistance in Aspergillus fumigatus is a growing problem and is associated with human and environmental azole use. Denmark has a considerable and highly efficient agricultural sector. Following reports of environmental azole resistance in A. fumigatus from Danish patients, the Ministry of Health requested a prospective national surveillance of azole-resistant A. fumigatus, particularly that of environmental origin.
BACKGROUND
Unique isolates regarded as clinically relevant, as well as any A. fumigatus isolated on a preferred weekday (background samples), were included. EUCAST susceptibility testing was performed, and azole-resistant isolates underwent cyp51A gene sequencing.
METHODS
The azole resistance prevalence was 6.1% (66/1083) at patient level. The TR34/L98H prevalence was 3.6% (39/1083) and included the variants TR34/L98H, TR34^3/L98H and TR34/L98H/S297T/F495I. Resistance caused by other Cyp51A variants accounted for 1.3% (14/1083) and included G54R, P216S, F219L, G54W, M220I, M220K, M220R, G432S, G448S and Y121F alterations. Non-Cyp51A-mediated resistance accounted for 1.2% (13/1083). Proportionally, TR34/L98H, other Cyp51A variants and non-Cyp51A-mediated resistance accounted for 59.1% (39/66), 21.2% (14/66) and 19.7% (13/66), respectively, of all resistance. Azole resistance was detected in all five regions in Denmark, and TR34/L98H specifically, in four of five regions during the surveillance period.
RESULTS
The azole resistance prevalence does not lead to a change in the initial treatment of aspergillosis at this point, but causes concern and leads to therapeutic challenges in the affected patients.
CONCLUSION
[ "Antifungal Agents", "Aspergillus fumigatus", "Azoles", "Denmark", "Drug Resistance, Fungal", "Fungal Proteins", "Humans", "Microbial Sensitivity Tests", "Prospective Studies" ]
9302650
INTRODUCTION
Azole resistance in Aspergillus fumigatus due to the specific molecular mechanisms TR34/L98H or TR46/Y121F/T289A has been reported from all continents except Antarctica. 1 These mechanisms are found in environmental A. fumigatus isolates and in isolates from azole-naïve as well as from azole-exposed patients. Azole resistance can also arise in A. fumigatus in patients receiving long-term azole treatment. 2 Most resistant isolates harbour mutations in cyp51A, which encodes the azole target 14α-sterol-demethylase, essential for ergosterol biosynthesis. 2 However, azole resistance has also been ascribed to efflux pumps and other non-cyp51A-mediated resistance mutations. 2 , 3 In Denmark, the first isolations of TR34/L98H and TR46/Y121F/T289A were from clinical samples in 2007 and 2014, respectively. 4 , 5 Subsequently, TR34/L98H and TR46/Y121F/T289A have also been found in environmental samples since 2009 and 2019, respectively. 6 , 7 Moreover, an increase in the prevalence of azole resistance among Danish cystic fibrosis (CF) patients was found over a 10-year period. 8 Clinical manifestations of Aspergillus vary according to patient group. In the CF population, Aspergillus occurs most often as part of colonisation, allergic bronchopulmonary aspergillosis (ABPA) and bronchitis. 9 ABPA is also a well-known condition in patients with asthma. 10 Invasive aspergillosis mainly occurs in patients who are immunosuppressed, and chronic aspergillosis in patients with impaired lung tissue architecture. Azoles are the drugs of choice in the management of aspergillosis. 10 Voriconazole and isavuconazole are first choice in invasive aspergillosis, 11 , 12 itraconazole (and voriconazole) in chronic aspergillosis 13 and posaconazole as prophylaxis or salvage treatment, with a potential future broadening of its licensed indication after proving non-inferior to voriconazole for primary therapy. 10 , 12 , 14 At this point, azoles are the only antifungal agents against aspergillosis available for oral administration. The emergence of azole resistance complicates patient treatment, and invasive aspergillosis with azole resistance is associated with an inferior outcome compared to invasive aspergillosis with a susceptible strain. 15 , 16 An international expert opinion suggested that when the environmental resistance rate exceeds 10% in a region, the initial treatment for invasive aspergillosis should be either liposomal amphotericin B or voriconazole combined with an echinocandin. 17 This recommendation was based on two observations: first, the significantly increased mortality found in patients who initially received voriconazole for invasive aspergillosis due to resistant A. fumigatus, 15 , 16 and second, the superior activity of voriconazole in those with susceptible A. fumigatus (~70% survival vs. 55% for amphotericin B and 50% for echinocandins). 17 , 18 This approach requires reliable epidemiological data on the prevalence of azole resistance in A. fumigatus acquired via the environmental route (which may occur even in azole-naïve patients) and the medical route (which is limited to the azole-exposed patient population). The Danish national surveillance programme on azole resistance was established in 2018 upon request from the Ministry of Health due to rising concerns about azole resistance of environmental origin. The objective was to determine the prevalence of azole-resistant A. fumigatus isolates among A. fumigatus colonised and infected patients in Denmark and to determine the underlying resistance mechanisms.
We present data from the first 2 years of the surveillance.
METHODS
Organisation of the national surveillance programme of azole-resistant A. fumigatus: The surveillance programme was initiated on October 1st 2018 with participation from all 10 Danish clinical microbiological departments. Inclusion criteria were as follows: (a) unique A. fumigatus isolates that were regarded as clinically significant and (b) any A. fumigatus isolated on a preferred weekday (regardless of clinical significance), included when marked 'Background'. Adherence to the inclusion criteria varied. Six departments followed the instructions for 'Background' samples, with a potential uncertainty as to whether the isolate represented a clinical condition with aspergillosis or a contamination. Two departments sent all isolates, and two departments sent only clinically relevant isolates. The centres are quite heterogeneous in patient intake and size of uptake area; for example, three are district hospitals (Vejle, Sønderborg and Esbjerg), two house CF centres (AUH and RH) and one has a centre for chronic pulmonary aspergillosis (OUH). Isolates from the same patient were deemed unique if one of the following conditions was met: (1) they were sampled more than 30 days apart, (2) the isolate had a different susceptibility or (3) a different molecular resistance mechanism. The clinical microbiological departments referred isolates to the reference mycological laboratory at Statens Serum Institut prospectively. Some departments performed species identification of moulds to the species level and only referred A. fumigatus, while others referred all Aspergillus isolates or all mould isolates for species identification and susceptibility testing. One department, at Aarhus University Hospital (AUH), performed EUCAST susceptibility testing (E. Def 10.1 and E. Def 9.3.1, as described below) of most A. fumigatus isolates locally and referred the MIC data and all resistant isolates for cyp51A sequencing (and confirmatory MIC determination), thus ensuring that all A. fumigatus isolates from AUH were included in the data analysis. Monthly reports on referred isolates were communicated to the participating laboratories to motivate and ensure adherence to the surveillance programme.
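The three uniqueness criteria lend themselves to a simple predicate; the sketch below expresses them with an illustrative record structure that is not the surveillance database schema.

```python
# The uniqueness rule for same-patient isolates described above: a new
# isolate counts as unique if ANY of the three criteria is met. Field
# names and example values are illustrative.
from dataclasses import dataclass

@dataclass
class Isolate:
    patient_id: str
    sample_day: int          # day of sampling (days since an arbitrary origin)
    susceptible: bool        # azole susceptibility classification
    mechanism: str           # e.g. "TR34/L98H", "M220K", "wild-type"

def is_unique(new: Isolate, previous: Isolate) -> bool:
    if new.sample_day - previous.sample_day > 30:      # (1) >30 days apart
        return True
    if new.susceptible != previous.susceptible:        # (2) different susceptibility
        return True
    return new.mechanism != previous.mechanism         # (3) different mechanism

a = Isolate("P1", 0, True, "wild-type")
b = Isolate("P1", 10, False, "TR34/L98H")
print(is_unique(b, a))  # True: susceptibility and mechanism both differ
```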
Culturing and species identification: Primary cultures were performed using Sabouraud glucose agar (SSI Diagnostika or bioMérieux) or YGC agar (yeast glucose agar; SSI Diagnostika) with incubation at 35-37°C for 5 days. Species identification included classical techniques, including macro- and micromorphology and thermotolerance testing, supplemented with MALDI-TOF MS and β-tubulin sequencing as needed, as previously described. 8 Only A. fumigatus sensu stricto isolates were included in the surveillance. Susceptibility testing and target gene sequencing: A. fumigatus isolates underwent screening for azole resistance following the EUCAST E. Def 10.1 method using VIPcheck azole agar plates (Mediaproducts BV). 19 Screening-positive isolates underwent EUCAST E. Def 9.3.1 susceptibility testing. 20 For consistency, the MIC values from the reference laboratory were used throughout. The applied antifungal concentration ranges for the MIC testing varied slightly during the study period. Susceptibility classification was performed according to the current EUCAST breakpoints v. 10.0. 21 cyp51A sequencing was performed for isolates classified as resistant to at least one azole. The promoter and full coding region of the cyp51A gene were sequenced as previously described, 5 with the exception that for Sanger sequencing, primer 0F was replaced with a new primer 1F (5′-GTGCGTAGCAAGGGAGAAGGA-3′) for improved results.
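The classification logic, including the handling of the area of technical uncertainty (ATU) as applied in the Results below, can be sketched as follows. The MIC cut-offs are those quoted in this article's text (posaconazole ATU at 0.25 mg/L resolved by itraconazole; isavuconazole ATU at 2 mg/L resolved by voriconazole), not a transcription of the official EUCAST tables.

```python
# Azole classification with ATU handling, as described in this article:
# an MIC in the ATU is resolved by the susceptibility of a companion azole.
# Cut-off values are taken from the Results section of this text.

def classify_posaconazole(pos_mic: float, itra_mic: float) -> str:
    if pos_mic >= 0.5:
        return "R"                             # resistant outright
    if pos_mic == 0.25:                        # ATU: decided by itraconazole
        return "R" if itra_mic > 1 else "S"
    return "S"

def classify_isavuconazole(isa_mic: float, vori_mic: float) -> str:
    if isa_mic >= 4:
        return "R"                             # resistant outright
    if isa_mic == 2:                           # ATU: decided by voriconazole
        return "R" if vori_mic > 1 else "S"
    return "S"

print(classify_posaconazole(0.25, itra_mic=2))   # R (ATU resolved by ITR > 1)
print(classify_isavuconazole(2, vori_mic=0.5))   # S (ATU resolved by VOR <= 1)
```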
Data management: The azole resistance prevalence was determined at patient level and compared to that of the Dutch national surveillance, 2013-2018. 22 Azole resistance was divided into environmentally driven resistance (presence of TR34/L98H or TR46/Y121F/T289A), other cyp51A mutations and non-cyp51A-mediated resistance (when resistant, but no cyp51A mutations were identified). A χ2 test was used to compare the azole resistance prevalence at patient level in the Dutch and the Danish populations, using RStudio (R version 4.1.1; R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; https://www.R-project.org/). The surveillance was requested by the Danish Ministry of Health and the scientific study was approved by the QA & Compliance unit at Statens Serum Institut (journal number 21/00765). Preliminary results have previously been presented in part at Trends in Medical Mycology 2019 and at the European Congress on Clinical Microbiology and Infectious Diseases (ECCMID) in 2020 and 2021, and briefly summarised as part of the national Danish Integrated Antimicrobial Resistance Monitoring and Research Programme (DANMAP), established by the Danish Ministry of Food, Agriculture and Fisheries and the Danish Ministry of Health, in the annual reports for 2018, 2019 and 2020. 7 , 23 , 24 The results presented here are updated since then.
RESULTS
A total of 1820 susceptibility-tested A. fumigatus isolates from 1083 patients were included in the analysis. The vast majority originated from the airways, including nose/sinus samples (1609/1820), while ear samples accounted for 182/1820 (Table 1). Table 1. Number of patients and sample types with Aspergillus fumigatus isolates. Table footnotes (sample-type groupings): samples marked as background samples; samples marked as sputum/laryngeal aspirate, sinus, nose/nose vestibule biopsy/nasal aspirate/nose-throat; samples marked as BAL/bronchial aspirate/pleural fluid and lung biopsy/lung/pleura; cerebrospinal fluid, abscess/drain fluid/drain/abdominal swab, abdominal biopsy/organ biopsy not specified/pericardium/pericardial fluid and bone. Itraconazole resistance was found in 5.9% (108/1820) of isolates and voriconazole resistance in 5.6% (102/1820) (Figure 1). Posaconazole resistance was detected in 103 isolates due to MICs of ≥0.5 mg/L; 85 isolates had an MIC of 0.25 mg/L (defined as the area of technical uncertainty [ATU]), of which 6 were classified as resistant due to an itraconazole MIC >1 mg/L. Isavuconazole resistance with MICs of ≥4 mg/L was detected in 90 isolates, and 235 isolates had an MIC of 2 mg/L (ATU), of which 13 were classified as resistant due to a voriconazole MIC >1 mg/L. Overall, susceptibility testing identified 119 isolates resistant to at least one azole from 66 patients, leading to a resistance prevalence among patients of 6.1% (66/1083, 95% CI 4.8%-7.7%). The proportion of isolates that were azole resistant was 6.5% (119/1820). From the lower airways, the proportion of resistant isolates was 4.3% (12/278), compared to 3.6% (6/166) of isolates from tracheal aspirates and 8.5% (99/1165) of isolates from the upper airways (Table 1). Figure 1. MIC values for the included Aspergillus fumigatus isolates. Susceptible isolates (S) are shown in green when susceptible at azole resistance screening. Susceptible isolates with an MIC are shown in blue, resistant isolates in red, and isolates in the ATU, for which the classification depends on the susceptibility to either itraconazole or voriconazole, respectively, are indicated in black. MIC values above 4 mg/L are shown as >4 mg/L. Isolates with no MICs for posaconazole (n = 1) and isavuconazole (n = 365) are not included in the diagrams. The proportional distribution of resistance mechanisms at patient level is shown in Figure 2. cyp51A sequencing of azole-resistant A. fumigatus demonstrated environmental resistance (TR34/L98H, TR34^3/L98H or TR34/L98H/S297T/F495I) in isolates from 39 patients. This corresponds to an environmental resistance prevalence among the patients of 3.6% (39/1083 patients; 95% CI: 2.6%-4.9%) and accounted for 59.1% (39/66 patients; 95% CI: 47.0%-70.1%) of patients with resistant A. fumigatus isolates (Figure 2). Resistance with a tandem repeat was detected in samples from the airways (sputum [n = 44], BAL [n = 6] and tracheal aspirate [n = 4]) and one ear sample. Figure 2. Cyp51A amino acid profiles found in the 66 patients with at least one resistant isolate. Each patient is shown only once. Some patients harbour resistant isolates with a Cyp51A resistance mechanism and resistant isolates with a non-Cyp51A-related mechanism (WT) (for example, the two patients with P216S and wild-type isolates). Resistance involving other cyp51A mutations accounted for 21.2% (14/66; 95% CI: 13.1%-32.5%). The corresponding alterations were G54R (n = 5), P216S (n = 2), F219L (n = 1), G54W (n = 1), M220I (n = 1), M220K (n = 1), M220R (n = 1), G432S (n = 1), G448S (n = 1) and Y121F (n = 1).
One patient had sequential isolates with either M220R or G54R. Non-cyp51A-mediated resistance (wild-type cyp51A) accounted for 19.7% (13/66; 95% CI: 11.9%-30.8%). Isolates from 12 of these patients were voriconazole resistant, with MICs ≥4 mg/L or with an MIC of 2 mg/L and cross-resistance to the other azoles. One patient harboured an isolate that was classified as resistant solely due to a voriconazole MIC of 2 mg/L. All TR34/L98H variants, M220R, G432S, G448S and Y121F were associated with high-level pan-azole resistance (Table 2). In contrast, F219L, G54R, G54W, M220I, M220K and P216S primarily affected itraconazole susceptibility, and to some degree posaconazole, with limited or no MIC elevations for voriconazole and isavuconazole. Table 2. MIC values for Aspergillus fumigatus isolates resistant to at least one azole and which underwent cyp51A sequencing. Resistance mechanisms are shown according to environmental, single point mutations and non-cyp51A-mediated; single point mutations are shown according to decreasing resistance. Abbreviations: ISA, isavuconazole; ITR, itraconazole; POS, posaconazole; VOR, voriconazole. Table footnotes: One resistant isolate is not shown in this table since it was found with F46Y/M172V/E427K, which is not associated with azole resistance, and the same patient had other resistant isolates with TR34/L98H. Four isolates did not undergo cyp51A sequencing and are not shown in the table. One isolate with TR34/L98H was mixed with a wild-type isolate, resulting in lower MICs than normally observed for TR34/L98H. One resistant isolate with N248K was classified as non-cyp51A-mediated since this mutation is not associated with azole resistance, and since the same patient had another resistant isolate with no detected cyp51A-mediated resistance. Among the 66 patients with a resistant isolate, both susceptible and resistant isolates were cultured intermittently during the surveillance from 38 (58%). Twenty-five patients had only one resistant isolate, and three patients had several consecutive resistant isolates. Four unique resistant isolates did not undergo cyp51A sequencing: three isolates from a patient who had several resistant isolates with M220K, and another isolate from a patient who had isolates with P216S. Isolates marked as 'Background samples' included 99 isolates from 94 patients (Table 1). A. fumigatus isolates from three patients (3.2%; 95% CI: 0.9%-9.0%) were azole resistant, and all harboured the TR34/L98H resistance mechanism. One patient had consecutive isolates with TR34/L98H, one marked as background and one not marked as background. Azole resistance was detected in samples from all five Danish regions (Figure 3). TR34/L98H isolates were detected in four out of five regions and in samples from both the hospital and the primary health care sector, whereas isolates with single point mutations in cyp51A were found in three of five regions. Figure 3. Proportion of resistant Aspergillus fumigatus isolates and associated underlying resistance mechanism across the five Danish regions. Each region represented is the region of the health care facility from which the isolate was referred. As some health care services are centralised, this will not in all cases represent the patient's place of residence or the place where the resistant fungus was acquired. Total numbers of isolates were: Capital n = 910, Zealand n = 91, Southern Denmark n = 326, Central Jutland n = 419 and Northern Jutland n = 74.
The resistance mechanism remained uncharacterised in five isolates from five patients who were known to harbour other cyp51A-mutant isolates (blue bar). These included four resistant isolates that did not undergo cyp51A sequencing, of which three derived from a patient who had other isolates with M220K and one from a patient who had other isolates with P216S, and one isolate with F46Y/M172V/E427K from a patient who also had isolates with TR34/L98H. Comparing surveillances at the national level, the azole resistance prevalence was lower in Denmark than in the Netherlands (66/1083 [6.1%] vs. 508/4496 [11.3%]) (p < .0001). 22
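The patient-level comparison and the reported confidence intervals can be reproduced directly from the counts above; a minimal sketch follows (the study itself performed the χ2 test in R).

```python
# Chi-squared comparison of patient-level azole resistance prevalence,
# Denmark vs. the Netherlands, plus Wilson 95% confidence intervals.
# Counts are those reported in the text; the Wilson interval for Denmark
# reproduces the quoted 4.8%-7.7%.
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportion_confint

table = [[66, 1083 - 66],      # Denmark: resistant vs. non-resistant patients
         [508, 4496 - 508]]    # Netherlands
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2e}")

for name, (k, n) in {"Denmark": (66, 1083), "Netherlands": (508, 4496)}.items():
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{name}: {100 * k / n:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f}%)")
```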
null
null
[ "INTRODUCTION", "Organisation of the national surveillance programme of azole‐resistant A. fumigatus\n", "Culturing and species identification", "Susceptibility testing and target gene sequencing", "Data management", "AUTHOR CONTRIBUTIONS" ]
[ "Azole resistance in Aspergillus fumigatus due to the specific molecular mechanisms TR34/L98H or TR46/Y121F/T289A has been reported from all seven continents except Antarctica.\n1\n These mechanisms are found in environmental A. fumigatus isolates and in isolates from azole naïve as well as from exposed patients. Azole resistance can also arise in A. fumigatus in patients receiving long‐term azole treatment.\n2\n Most resistant isolates harbour mutations in cyp51A, which encodes the azole target 14α‐sterol‐demethylase, essential for ergosterol biosynthesis.\n2\n However, azole resistance has also been ascribed to efflux pumps and other non‐cyp51A‐mediated resistance mutations.\n2\n, \n3\n\n\nIn Denmark, the first isolation of TR34/L98H and TR46/Y121F/T289A were from clinical samples in 2007 and 2014, respectively.\n4\n, \n5\n Subsequently, TR34/L98H and TR46/Y121F/T289A have also been found in environmental samples since 2009 and 2019, respectively.\n6\n, \n7\n Moreover, an increase in the prevalence of azole resistance among Danish cystic fibrosis (CF) patients was found over a 10‐year period.\n8\n\n\nClinical manifestations with Aspergillus vary according to patient group. In the CF population, Aspergillus occurs most often as part of colonisation, allergic bronchopulmonary aspergillosis (ABPA) and bronchitis.\n9\n ABPA is also a well‐known condition in patients with asthma.\n10\n Invasive aspergillosis mainly occurs in patients who are immunosuppressed and chronic aspergillosis in patients with impaired lung tissue architecture. Azoles are the drugs of choice in the management of aspergillosis.\n10\n Voriconazole and isavuconazole are first choice in invasive aspergillosis,\n11\n, \n12\n itraconazole (and voriconazole) in chronic aspergillosis\n13\n and posaconazole as prophylaxis or salvage treatment, but with potential future broadening of its licensed indication due to non‐inferior to voriconazole for primary therapy.\n10\n, \n12\n, \n14\n At this point, azoles are the only antifungal agents against aspergillosis for oral administration. The emergence of azole resistance complicates patient treatment, and invasive aspergillosis with azole resistance is associated with an inferior outcome compared to invasive aspergillosis with a susceptible strain.\n15\n, \n16\n\n\nAn international expert opinion suggested that when the environmental resistance rate exceeds 10% in a region, the initial treatment for invasive aspergillosis should be either liposomal amphotericin B or voriconazole combined with an echinocandin.\n17\n This recommendation was based on two observations. First, the significantly increased mortality found in patients who received voriconazole initially for invasive aspergillosis due to resistant A. fumigatus\n\n15\n, \n16\n and second, the superior activity of voriconazole for those with susceptible A. fumigatus (~70% survival vs. 55% for amphotericin B and 50% for echinocandins).\n17\n, \n18\n This approach requires reliable epidemiological data on the prevalence of azole resistance in A. fumigatus due to the environmental route of acquisition (which may occur even in azole naïve patients) and medical route (which is limited to the azole exposed patient population).\nThe Danish national surveillance programme on azole resistance was established in 2018 upon request from the ministry of health due to the rising concerns for azole resistance of environmental origin. The objective was to determine the prevalence of azole‐resistant A. fumigatus isolates among A. 
fumigatus colonised and infected patients in Denmark and to determine the underlying resistance mechanisms. We present data from the first 2 years of the surveillance.", "The surveillance programme was initiated on October 1st 2018 with participation from all 10 Danish clinical microbiological departments. Inclusion criteria were as follows: (a) unique A. fumigatus isolates regarded as clinically significant and (b) any A. fumigatus isolated on a preferred weekday (regardless of clinical significance), which was then marked ‘Background’. Adherence to the inclusion criteria varied. Six departments followed the instructions for ‘Background’ samples, with the inherent uncertainty of whether an isolate represented a clinical condition with aspergillosis or a contamination. Two departments sent all isolates, and two departments sent only clinically relevant isolates. The centres are heterogeneous in patient intake and size of uptake area. For example, three are district hospitals (Vejle, Sønderborg and Esbjerg), two host CF centres (AUH and RH) and one has a centre for chronic pulmonary aspergillosis (OUH). Isolates from the same patient were deemed unique if one of the following conditions was met: (1) they were sampled more than 30 days apart, (2) the isolate had a different susceptibility or (3) the isolate had a different molecular resistance mechanism.\nThe clinical microbiological departments referred isolates prospectively to the reference mycological laboratory at Statens Serum Institut. Some departments performed species identification of moulds to the species level and referred only A. fumigatus, while others referred all Aspergillus isolates or all mould isolates for species identification and susceptibility testing. The department at Aarhus University Hospital (AUH) performed EUCAST susceptibility testing (E. Def 10.1 and E. Def 9.3.1, as described below) of most A. fumigatus isolates locally and referred the MIC data and all resistant isolates for cyp51A sequencing (and confirmatory MIC determination), thus ensuring that all A. fumigatus isolates from AUH were included in the data analysis. Monthly reports on referred isolates were communicated to the participating laboratories to motivate and ensure adherence to the surveillance programme.", "Primary cultures were performed using Sabouraud glucose agar (SSI Diagnostika or bioMérieux) or YGC agar (yeast glucose agar; SSI Diagnostika) with incubation at 35–37°C for 5 days. Species identification included classical techniques such as macro‐ and micromorphology and thermotolerance testing, supplemented with MALDI‐TOF MS and β‐tubulin sequencing as needed, as previously described.\n8\n Only A. fumigatus sensu stricto isolates were included in the surveillance.", "\nA. fumigatus isolates underwent screening for azole resistance following the EUCAST E. Def 10.1 method using VIPcheck azole agar plates (Mediaproducts BV).\n19\n Screening‐positive isolates underwent EUCAST E. Def 9.3.1 susceptibility testing.\n20\n For consistency, the MIC values from the reference laboratory were used throughout. The applied antifungal concentration ranges for the MIC testing varied slightly during the study period. Susceptibility classification was performed according to the current EUCAST breakpoints v. 10.0.\n21\n\ncyp51A sequencing was performed for isolates classified as resistant to at least one azole. 
The promoter and full coding region of the cyp51A gene were sequenced as previously described,\n5\n except that for Sanger sequencing, primer 0F was replaced with a new primer, 1F (5′‐GTGCGTAGCAAGGGAGAAGGA‐3′), for improved results.", "The azole resistance prevalence was determined at patient level and compared with the Dutch national surveillance, 2013–2018.\n22\n Azole resistance was divided into environmentally driven resistance (presence of TR34/L98H or TR46/Y121F/T289A), other cyp51A mutations and non‐cyp51A‐mediated resistance (resistant, but with no cyp51A mutations identified).\nA χ2 test was used to compare the azole resistance prevalence at patient level in the Dutch and Danish populations, using RStudio (R version 4.1.1; R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R‐project.org/).\nThe surveillance was requested by the Danish Ministry of Health, and the scientific study was approved by QA & Compliance at Statens Serum Institut (journal number 21/00765).\nPreliminary results have previously been presented in part at Trends in Medical Mycology 2019 and at the European Congress on Clinical Microbiology and Infectious Diseases (ECCMID) 2020 and 2021 conferences, and briefly summarised as part of the national Danish Integrated Antimicrobial Resistance Monitoring and Research Programme (DANMAP), established by the Danish Ministry of Food, Agriculture and Fisheries and the Danish Ministry of Health, in the 2018, 2019 and 2020 annual reports.\n7\n, \n23\n, \n24\n The results presented here are updated since then.", "\nMalene Risum: Data curation (lead); Formal analysis (equal); Investigation (equal); Project administration (equal); Writing – original draft (lead). Rasmus Krøger Hare: Data curation (equal); Investigation (equal); Methodology (lead); Writing – review & editing (equal). Jan Berg Gertsen: Data curation (equal); Investigation (equal); Methodology (equal); Project administration (equal); Writing – review & editing (supporting). Lise Kristensen: Investigation (equal); Methodology (equal); Writing – review & editing (supporting). Flemming Schønning Rosenvinge: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Sofia Sulim: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Nissrine Abou‐Chakra: Investigation (supporting); Methodology (equal); Writing – review & editing (supporting). Jette Bangsborg: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Bent Røder: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Ea Sofie Marmolin: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Karen Astavad: Investigation (equal); Methodology (supporting); Writing – review & editing (equal). Michael Pedersen: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Esad Dzajic: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Steen Lomborg Andersen: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Maiken Cavling Arendrup: Conceptualization (lead); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (lead); Project administration (lead); Writing – original draft (lead)." ]
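A note on reproducibility: the χ2 comparison and the patient‐level confidence intervals reported in this record can be re‐derived from the published counts alone (Denmark 66/1083 patients vs. the Netherlands 508/4496). The following minimal sketch uses base R, the environment named under Data management; the counts are taken from the text, the variable names are ours, and whether the original analysis applied a continuity correction is not stated:

    # 2x2 table of resistant vs. susceptible patients, Denmark vs. the Netherlands
    dk <- c(resistant = 66, susceptible = 1083 - 66)
    nl <- c(resistant = 508, susceptible = 4496 - 508)
    chisq.test(rbind(dk, nl))      # chi-squared test; p < .0001, as reported
    # Exact (Clopper-Pearson) 95% CI for the Danish prevalence, ~4.8%-7.7%
    binom.test(66, 1083)$conf.int

chisq.test() applies Yates' continuity correction by default for 2 x 2 tables; with these counts the conclusion is unchanged either way.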
[ null, null, null, null, null, null ]
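The Methods above define when repeat isolates from the same patient count as unique (sampled more than 30 days apart, or differing in susceptibility or in molecular resistance mechanism). A minimal sketch of that deduplication rule in R, with hypothetical argument names of our own choosing:

    # Hypothetical helper mirroring the surveillance uniqueness rule:
    # a same-patient isolate is unique if sampled >30 days apart, or if
    # its susceptibility or molecular resistance mechanism differs.
    is_unique <- function(days_apart, same_susceptibility, same_mechanism) {
      days_apart > 30 || !same_susceptibility || !same_mechanism
    }
    is_unique(14, TRUE, TRUE)    # FALSE: counted as the same isolate
    is_unique(14, FALSE, TRUE)   # TRUE: different susceptibility
    is_unique(45, TRUE, TRUE)    # TRUE: sampled more than 30 days apart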
[ "INTRODUCTION", "METHODS", "Organisation of the national surveillance programme of azole‐resistant A. fumigatus\n", "Culturing and species identification", "Susceptibility testing and target gene sequencing", "Data management", "RESULTS", "DISCUSSION", "CONFLICT OF INTEREST", "AUTHOR CONTRIBUTIONS" ]
[ "Azole resistance in Aspergillus fumigatus due to the specific molecular mechanisms TR34/L98H or TR46/Y121F/T289A has been reported from all seven continents except Antarctica.\n1\n These mechanisms are found in environmental A. fumigatus isolates and in isolates from azole naïve as well as from exposed patients. Azole resistance can also arise in A. fumigatus in patients receiving long‐term azole treatment.\n2\n Most resistant isolates harbour mutations in cyp51A, which encodes the azole target 14α‐sterol‐demethylase, essential for ergosterol biosynthesis.\n2\n However, azole resistance has also been ascribed to efflux pumps and other non‐cyp51A‐mediated resistance mutations.\n2\n, \n3\n\n\nIn Denmark, the first isolation of TR34/L98H and TR46/Y121F/T289A were from clinical samples in 2007 and 2014, respectively.\n4\n, \n5\n Subsequently, TR34/L98H and TR46/Y121F/T289A have also been found in environmental samples since 2009 and 2019, respectively.\n6\n, \n7\n Moreover, an increase in the prevalence of azole resistance among Danish cystic fibrosis (CF) patients was found over a 10‐year period.\n8\n\n\nClinical manifestations with Aspergillus vary according to patient group. In the CF population, Aspergillus occurs most often as part of colonisation, allergic bronchopulmonary aspergillosis (ABPA) and bronchitis.\n9\n ABPA is also a well‐known condition in patients with asthma.\n10\n Invasive aspergillosis mainly occurs in patients who are immunosuppressed and chronic aspergillosis in patients with impaired lung tissue architecture. Azoles are the drugs of choice in the management of aspergillosis.\n10\n Voriconazole and isavuconazole are first choice in invasive aspergillosis,\n11\n, \n12\n itraconazole (and voriconazole) in chronic aspergillosis\n13\n and posaconazole as prophylaxis or salvage treatment, but with potential future broadening of its licensed indication due to non‐inferior to voriconazole for primary therapy.\n10\n, \n12\n, \n14\n At this point, azoles are the only antifungal agents against aspergillosis for oral administration. The emergence of azole resistance complicates patient treatment, and invasive aspergillosis with azole resistance is associated with an inferior outcome compared to invasive aspergillosis with a susceptible strain.\n15\n, \n16\n\n\nAn international expert opinion suggested that when the environmental resistance rate exceeds 10% in a region, the initial treatment for invasive aspergillosis should be either liposomal amphotericin B or voriconazole combined with an echinocandin.\n17\n This recommendation was based on two observations. First, the significantly increased mortality found in patients who received voriconazole initially for invasive aspergillosis due to resistant A. fumigatus\n\n15\n, \n16\n and second, the superior activity of voriconazole for those with susceptible A. fumigatus (~70% survival vs. 55% for amphotericin B and 50% for echinocandins).\n17\n, \n18\n This approach requires reliable epidemiological data on the prevalence of azole resistance in A. fumigatus due to the environmental route of acquisition (which may occur even in azole naïve patients) and medical route (which is limited to the azole exposed patient population).\nThe Danish national surveillance programme on azole resistance was established in 2018 upon request from the ministry of health due to the rising concerns for azole resistance of environmental origin. The objective was to determine the prevalence of azole‐resistant A. fumigatus isolates among A. 
fumigatus colonised and infected patients in Denmark and to determine the underlying resistance mechanisms. We present data from the first 2 years of the surveillance.", "Organisation of the national surveillance programme of azole‐resistant A. fumigatus\n The surveillance programme was initiated on October 1st 2018 with participation from all 10 Danish clinical microbiological departments. Inclusion criteria were as follows: (a) unique A. fumigatus isolates regarded as clinically significant and (b) any A. fumigatus isolated on a preferred weekday (regardless of clinical significance), which was then marked ‘Background’. Adherence to the inclusion criteria varied. Six departments followed the instructions for ‘Background’ samples, with the inherent uncertainty of whether an isolate represented a clinical condition with aspergillosis or a contamination. Two departments sent all isolates, and two departments sent only clinically relevant isolates. The centres are heterogeneous in patient intake and size of uptake area. For example, three are district hospitals (Vejle, Sønderborg and Esbjerg), two host CF centres (AUH and RH) and one has a centre for chronic pulmonary aspergillosis (OUH). Isolates from the same patient were deemed unique if one of the following conditions was met: (1) they were sampled more than 30 days apart, (2) the isolate had a different susceptibility or (3) the isolate had a different molecular resistance mechanism.\nThe clinical microbiological departments referred isolates prospectively to the reference mycological laboratory at Statens Serum Institut. Some departments performed species identification of moulds to the species level and referred only A. fumigatus, while others referred all Aspergillus isolates or all mould isolates for species identification and susceptibility testing. The department at Aarhus University Hospital (AUH) performed EUCAST susceptibility testing (E. Def 10.1 and E. Def 9.3.1, as described below) of most A. fumigatus isolates locally and referred the MIC data and all resistant isolates for cyp51A sequencing (and confirmatory MIC determination), thus ensuring that all A. fumigatus isolates from AUH were included in the data analysis. Monthly reports on referred isolates were communicated to the participating laboratories to motivate and ensure adherence to the surveillance programme.\nCulturing and species identification Primary cultures were performed using Sabouraud glucose agar (SSI Diagnostika or bioMérieux) or YGC agar (yeast glucose agar; SSI Diagnostika) with incubation at 35–37°C for 5 days. Species identification included classical techniques such as macro‐ and micromorphology and thermotolerance testing, supplemented with MALDI‐TOF MS and β‐tubulin sequencing as needed, as previously described.\n8\n Only A. fumigatus sensu stricto isolates were included in the surveillance.\nSusceptibility testing and target gene sequencing \nA. fumigatus isolates underwent screening for azole resistance following the EUCAST E. Def 10.1 method using VIPcheck azole agar plates (Mediaproducts BV).\n19\n Screening‐positive isolates underwent EUCAST E. Def 9.3.1 susceptibility testing.\n20\n For consistency, the MIC values from the reference laboratory were used throughout. The applied antifungal concentration ranges for the MIC testing varied slightly during the study period. Susceptibility classification was performed according to the current EUCAST breakpoints v. 10.0.\n21\n\ncyp51A sequencing was performed for isolates classified as resistant to at least one azole. The promoter and full coding region of the cyp51A gene were sequenced as previously described,\n5\n except that for Sanger sequencing, primer 0F was replaced with a new primer, 1F (5′‐GTGCGTAGCAAGGGAGAAGGA‐3′), for improved results.\nData management The azole resistance prevalence was determined at patient level and compared with the Dutch national surveillance, 2013–2018.\n22\n Azole resistance was divided into environmentally driven resistance (presence of TR34/L98H or TR46/Y121F/T289A), other cyp51A mutations and non‐cyp51A‐mediated resistance (resistant, but with no cyp51A mutations identified).\nA χ2 test was used to compare the azole resistance prevalence at patient level in the Dutch and Danish populations, using RStudio (R version 4.1.1; R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R‐project.org/).\nThe surveillance was requested by the Danish Ministry of Health, and the scientific study was approved by QA & Compliance at Statens Serum Institut (journal number 21/00765).\nPreliminary results have previously been presented in part at Trends in Medical Mycology 2019 and at the European Congress on Clinical Microbiology and Infectious Diseases (ECCMID) 2020 and 2021 conferences, and briefly summarised as part of the national Danish Integrated Antimicrobial Resistance Monitoring and Research Programme (DANMAP), established by the Danish Ministry of Food, Agriculture and Fisheries and the Danish Ministry of Health, in the 2018, 2019 and 2020 annual reports.\n7\n, \n23\n, \n24\n The results presented here are updated since then.", "The surveillance programme was initiated on October 1st 2018 with participation from all 10 Danish clinical microbiological departments. Inclusion criteria were as follows: (a) unique A. fumigatus isolates regarded as clinically significant and (b) any A. fumigatus isolated on a preferred weekday (regardless of clinical significance), which was then marked ‘Background’. Adherence to the inclusion criteria varied. Six departments followed the instructions for ‘Background’ samples, with the inherent uncertainty of whether an isolate represented a clinical condition with aspergillosis or a contamination. Two departments sent all isolates, and two departments sent only clinically relevant isolates. The centres are heterogeneous in patient intake and size of uptake area. For example, three are district hospitals (Vejle, Sønderborg and Esbjerg), two host CF centres (AUH and RH) and one has a centre for chronic pulmonary aspergillosis (OUH). Isolates from the same patient were deemed unique if one of the following conditions was met: (1) they were sampled more than 30 days apart, (2) the isolate had a different susceptibility or (3) the isolate had a different molecular resistance mechanism.\nThe clinical microbiological departments referred isolates prospectively to the reference mycological laboratory at Statens Serum Institut. Some departments performed species identification of moulds to the species level and referred only A. fumigatus, while others referred all Aspergillus isolates or all mould isolates for species identification and susceptibility testing. The department at Aarhus University Hospital (AUH) performed EUCAST susceptibility testing (E. Def 10.1 and E. Def 9.3.1, as described below) of most A. fumigatus isolates locally and referred the MIC data and all resistant isolates for cyp51A sequencing (and confirmatory MIC determination), thus ensuring that all A. fumigatus isolates from AUH were included in the data analysis. Monthly reports on referred isolates were communicated to the participating laboratories to motivate and ensure adherence to the surveillance programme.", "Primary cultures were performed using Sabouraud glucose agar (SSI Diagnostika or bioMérieux) or YGC agar (yeast glucose agar; SSI Diagnostika) with incubation at 35–37°C for 5 days. Species identification included classical techniques such as macro‐ and micromorphology and thermotolerance testing, supplemented with MALDI‐TOF MS and β‐tubulin sequencing as needed, as previously described.\n8\n Only A. fumigatus sensu stricto isolates were included in the surveillance.", "\nA. fumigatus isolates underwent screening for azole resistance following the EUCAST E. Def 10.1 method using VIPcheck azole agar plates (Mediaproducts BV).\n19\n Screening‐positive isolates underwent EUCAST E. Def 9.3.1 susceptibility testing.\n20\n For consistency, the MIC values from the reference laboratory were used throughout. The applied antifungal concentration ranges for the MIC testing varied slightly during the study period. Susceptibility classification was performed according to the current EUCAST breakpoints v. 10.0.\n21\n\ncyp51A sequencing was performed for isolates classified as resistant to at least one azole. The promoter and full coding region of the cyp51A gene were sequenced as previously described,\n5\n except that for Sanger sequencing, primer 0F was replaced with a new primer, 1F (5′‐GTGCGTAGCAAGGGAGAAGGA‐3′), for improved results.", "The azole resistance prevalence was determined at patient level and compared with the Dutch national surveillance, 2013–2018.\n22\n Azole resistance was divided into environmentally driven resistance (presence of TR34/L98H or TR46/Y121F/T289A), other cyp51A mutations and non‐cyp51A‐mediated resistance (resistant, but with no cyp51A mutations identified).\nA χ2 test was used to compare the azole resistance prevalence at patient level in the Dutch and Danish populations, using RStudio (R version 4.1.1; R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R‐project.org/).\nThe surveillance was requested by the Danish Ministry of Health, and the scientific study was approved by QA & Compliance at Statens Serum Institut (journal number 21/00765).\nPreliminary results have previously been presented in part at Trends in Medical Mycology 2019 and at the European Congress on Clinical Microbiology and Infectious Diseases (ECCMID) 2020 and 2021 conferences, and briefly summarised as part of the national Danish Integrated Antimicrobial Resistance Monitoring and Research Programme (DANMAP), established by the Danish Ministry of Food, Agriculture and Fisheries and the Danish Ministry of Health, in the 2018, 2019 and 2020 annual reports.\n7\n, \n23\n, \n24\n The results presented here are updated since then.", "A total of 1820 susceptibility‐tested A. fumigatus isolates from 1083 patients were included in the analysis. The vast majority originated from the airways including nose/sinus (1609/1820) and from ear samples (182/1820) (Table 1).\n[Table 1. Number of patients and sample types with Aspergillus fumigatus isolates, with samples marked ‘Background’ listed separately. Footnotes: the upper airway category includes samples marked as sputum/laryngeal aspirate, sinus and nose/nose vestibule biopsy/nasal aspirate/nose‐throat; the lower airway category includes samples marked as BAL/bronchial aspirate/pleura fluid and lung biopsy/lung/pleura; other sites comprise cerebrospinal fluid, abscess/drain fluid/drain/abdominal swab, biopsy abdominal/biopsy organ not specified/pericardium/pericardial fluid and bone.]\nItraconazole resistance was found in 5.9% (108/1820) of isolates and voriconazole resistance in 5.6% (102/1820) (Figure 1). Posaconazole resistance was detected in 103 isolates due to MICs of ≥0.5 mg/L, and 85 isolates had MIC 0.25 mg/L (defined as the area of technical uncertainty [ATU]), of which 6 were classified as resistant due to an itraconazole MIC >1 mg/L. Isavuconazole resistance with MICs of ≥4 mg/L was detected in 90 isolates, and 235 isolates had MIC 2 mg/L (ATU), of which 13 were classified as resistant due to a voriconazole MIC >1 mg/L. Overall, susceptibility testing identified 119 isolates resistant to at least one azole from 66 patients, yielding a resistance prevalence among patients of 6.1% (66/1083, 95% CI 4.8%–7.7%). The proportion of isolates that were azole resistant was 6.5% (119/1820). From the lower airways, the proportion of resistant isolates was 4.3% (12/278), compared to 3.6% (6/166) for tracheal aspirates and 8.5% (99/1165) for the upper airways (Table 1).\n[Figure 1 caption] MIC values for the included Aspergillus fumigatus isolates. Susceptible isolates (S) are shown in green when susceptible at azole resistance screening. 
Susceptible isolates with an MIC are shown in blue, resistant isolates in red and isolates in the ATU, for which the classification depends on the susceptibility to either itraconazole or voriconazole, respectively, are indicated in black. MIC values above 4 mg/L are shown as >4 mg/L. Isolates with no MICs for posaconazole (n = 1) and isavuconazole (n = 365) are not included in the diagrams.\nThe proportional distribution of resistance mechanisms at patient level is shown in Figure 2. cyp51A sequencing of azole‐resistant A. fumigatus demonstrated environmental resistance (TR34/L98H, TR34(3)/L98H or TR34/L98H/297T/F495I) in isolates from 39 patients. This corresponds to an environmental resistance prevalence among the patients of 3.6% (39/1083 patients; 95% CI: 2.6%–4.9%) and accounted for 59.1% (39/66 patients; 95% CI: 47.0%–70.1%) of patients with resistant A. fumigatus isolates (Figure 2). Resistance with a tandem repeat was detected in samples from the airways (sputum [n = 44], BAL [n = 6] and tracheal aspirate [n = 4]) and in one ear sample.\n[Figure 2 caption] Cyp51A amino acid profiles found in the 66 patients with at least one resistant isolate. Each patient is shown only once. Some patients harbour resistant isolates with a Cyp51A resistance mechanism as well as resistant isolates with a non‐Cyp51A‐related mechanism (WT) (for example, the two patients with P216S and wild‐type isolates).\nResistance involving other cyp51A mutations accounted for 21.2% (14/66; 95% CI: 13.1%–32.5%). The corresponding alterations were G54R (n = 5), P216S (n = 2), F219L (n = 1), G54W (n = 1), M220I (n = 1), M220K (n = 1), M220R (n = 1), G432S (n = 1), G448S (n = 1) and Y121F (n = 1). One patient had sequential isolates with either M220R or G54R.\nNon‐cyp51A‐mediated resistance (wild‐type cyp51A) accounted for 19.7% (13/66; 95% CI: 11.9%–30.8%). Isolates from 12 of these patients were voriconazole resistant with MICs ≥4 mg/L, or with MIC 2 mg/L and cross‐resistance to the other azoles. One patient harboured an isolate that was classified as resistant solely due to a voriconazole MIC of 2 mg/L.\nAll TR34/L98H variants, M220R, G432S, G448S and Y121F were associated with pan‐azole high‐level resistance (Table 2). In contrast, F219L, G54R, G54W, M220I, M220K and P216S primarily affected itraconazole susceptibility, and to some degree posaconazole, with limited or no MIC elevations for voriconazole and isavuconazole.\n[Table 2. MIC values for Aspergillus fumigatus isolates resistant to at least one azole that underwent cyp51A sequencing. Resistance mechanisms are grouped as environmental, single point mutations and non‐cyp51A‐mediated; single point mutations are ordered by decreasing resistance. Abbreviations: ISA, isavuconazole; ITR, itraconazole; POS, posaconazole; VOR, voriconazole. Footnotes: one resistant isolate is not shown since it carried F46Y/M172V/E427K, which is not associated with azole resistance, and the same patient had other resistant isolates with TR34/L98H; four isolates did not undergo cyp51A sequencing and are not shown; one isolate with TR34/L98H was mixed with a wild‐type isolate, resulting in lower MICs than normally observed for TR34/L98H; one resistant isolate with N248K was classified as non‐cyp51A‐mediated, since this mutation is not associated with azole resistance and the same patient had another resistant isolate with no detected cyp51A‐mediated resistance.]\nAmong the 66 patients with a resistant isolate, 38 (58%) intermittently yielded both susceptible and resistant isolates during the surveillance. Twenty‐five patients had only one resistant isolate, and three patients had several consecutive resistant isolates.\nFour unique resistant isolates did not undergo cyp51A sequencing: three from a patient who had several resistant isolates with M220K, and one from a patient who had isolates with P216S.\nIsolates marked as ‘Background samples’ comprised 99 isolates from 94 patients (Table 1). A. fumigatus isolates from three patients (3.2%; 95% CI: 0.9%–9.0%) were azole resistant, and all harboured the TR34/L98H resistance mechanism. One patient had consecutive TR34/L98H isolates, of which one was marked as background and one was not.\nAzole resistance was detected in samples from all five Danish regions (Figure 3). TR34/L98H isolates were detected in four out of five regions and in samples from both the hospital and the primary health care sector, whereas isolates with single point mutations in cyp51A were found in three of five regions.\n[Figure 3 caption] Proportion of resistant Aspergillus fumigatus isolates and the associated underlying resistance mechanism across the five Danish Regions. Each Region represented is the Region of the health care facility from which the isolate was referred. As some health care services are centralised, this will not in all cases represent the patients' place of residence or the place where the resistant fungus was acquired. Total numbers of isolates were: Capital n = 910, Zealand n = 91, Southern Denmark n = 326, Central Jutland n = 419 and Northern Jutland n = 74. The resistance mechanism remained uncharacterised in five isolates from five patients who were known to harbour other cyp51A mutant isolates (blue bar). These included the four resistant isolates that did not undergo cyp51A sequencing (three derived from a patient who had other isolates with M220K, and one from a patient who had other isolates with P216S) and one isolate with F46Y/M172V/E427K from a patient who also had isolates with TR34/L98H.\nComparing surveillances at national level, the azole resistance prevalence was lower in Denmark than in the Netherlands (66/1083 [6.1%] vs. 508/4496 [11.3%]) (p < .0001).\n22\n\n", "An azole resistance prevalence of 6.1%, including a TR34/L98H‐related environmental resistance of 3.6%, was documented at patient level during the first 2 years of the Danish nationwide surveillance programme. Whereas the first figure represents the current burden of azole resistance, the second indicates the likelihood of encountering azole resistance among azole‐naïve patients. The resistance prevalence was higher in samples from the upper airways than in tracheal aspirates and lower airway specimens. 
We speculate that this may reflect that out‐patients with chronic lung disease, including CF and aspergillosis, are often provided with sputum containers for regular submission of sputum by mail, and thus that the resistance frequencies across the different sample types are not directly comparable. Overall, the resistance prevalence remains well below 10%, and azole therapy therefore remains the first choice for the initial treatment of aspergillosis in our country.\nSeveral observations suggest that azole resistance in A. fumigatus, and resistance due to TR34/L98H specifically, is increasing in Denmark. In 2007, 1.9% of A. fumigatus isolates were azole resistant, and none due to TR34/L98H mechanisms, in a 3‐month multicentre survey.\n25\n From 2010 to 2014, the azole resistance prevalence at patient level was 2.1% among referred (and thus selected) isolates, with approximately half involving TR34/L98H mechanisms.\n26\n Moreover, in the Danish CF population specifically, an azole resistance prevalence of 4.5%, including 1.5% due to TR34/L98H, was observed in 2007 and 2009, compared to 7.3%, including 3.7% TR34/L98H, in 2018.\n5\n, \n8\n Although the studies are not directly comparable, we speculate that azole resistance is rising both overall and among CF patients and is driven by both medical and environmental azole use. Of note, we did not observe any isolates harbouring TR46/Y121F/T289A during the 2‐year surveillance, although this resistance genotype was found once in Denmark in 2014.\n4\n\n\nThree single point amino acid alterations (G54A, G54R and G432S) have been associated with azole resistance in both azole‐treated patients and the environment.\n27\n, \n28\n, \n29\n, \n30\n Five patients in this surveillance programme harboured isolates with a G54R alteration and one patient an isolate with a G432S alteration. Unfortunately, we did not have access to clinical information or prior medication data to enable a discussion of the origin of these resistance mechanisms.\nThe number of patients with azole‐resistant A. fumigatus was unevenly distributed across the five regions in Denmark. The likely reason for the higher occurrence in the Capital Region is that it is the largest region by population size and hosts one of the two CF centres. TR34/L98H was not detected in Northern Jutland during the study period, but it was detected in a clinical isolate shortly before the surveillance programme was initiated.\n24\n We therefore argue that TR34/L98H is found all over the country and poses a risk for any patient in Denmark susceptible to Aspergillus infections.\nIn comparison to other surveillance studies, our azole resistance prevalence was higher than the 3.2% found from 2009 to 2011 in a multicentre study across 19 countries,\n31\n but lower than the 11% in the more recent Dutch nationwide surveillance in 2017 and 2018.\n22\n Larger studies and surveillances on azole resistance in A. fumigatus are summarised in Table 3.\n22\n, \n25\n, \n32\n, \n33\n, \n34\n, \n35\n, \n36\n, \n37\n, \n38\n, \n39\n These studies show that the azole resistance prevalence in the present surveillance is in line with other European studies from 2011 to 2018 and the Netherlands from 2007 to 2009.\n[Table 3. Studies and surveillances of azole resistance in Aspergillus fumigatus, given as overall azole resistance followed by the TR34/L98H and/or TR46/Y121F proportion. Denmark, 2007 (Mortensen et al.)\n25\n: 1.9% isolate level (2/107). The Netherlands, 2013–2018 (Lestrade et al.)\n22\n: 11.3% patient level (508/4496). The Netherlands, 2007–2009 (van der Linden et al.)\n32\n: 5.3% patient level (63/1192) and 4.6% isolate level (82/1792); TR: 4.1% isolate level (74/1792). Belgium, 2011–2012 (Vermeulen et al.)\n33\n: 5.5% patient level (9/164); TR: 4.3% patient level (7/164). Spain, 2019 (Escribano et al.)\n34\n: 4.7% patient level (34/715); TR: 2.8% patient level (20/715). Italy, 2016–2018 (Prigitano et al.)\n35\n: 6.6% isolate level (19/286); TR: 4.2% isolate level (12/286). USA, 2015–2017 (Berkow et al.)\n36\n: 1.5% isolate level (20/1356); TR: 0.4% isolate level (5/1356). Japan, 2017–2018 (Tsuchido et al.)\n39\n: 12.7% isolate level (7/55); TR: 3.6% isolate level (2/55). Taiwan, 2011–2018 (Wu et al.)\n37\n: 4% patient level (12/297) and 5.1% isolate level (19/375); TR: 3.4% patient level (10/297) and 3.5% isolate level (13/375). China, 2010–2015 (Chen et al.)\n38\n: 2.5% isolate level (8/317); TR: 2.5% isolate level (8/317). Studies shown are nationwide surveillances or multicentre studies within one country, not limited to a certain patient group or a referral hospital, and reporting azole resistance in A. fumigatus specifically. Prevalence is shown at patient level unless otherwise specified; for some studies, the numbers of patients with either TR34/L98H or TR46/Y121F, or the total number of isolates, were not specified.]\nThis study is associated with both strengths and limitations. The primary strength is that it is nationwide and thus population based. Results of studies limited to a specific disease or centre will depend strongly on the case mix and use of azole therapy, which would favour selection of azole‐resistant A. fumigatus. Limitations include a risk of ascertainment bias: we cannot exclude that centres managing many Aspergillus patients are more prone to prioritise referral of isolates – so‐called cluster sampling. Furthermore, the COVID‐19 pandemic emerged during the surveillance, and we cannot be certain that routine sampling was performed as under regular circumstances. Indeed, fewer BALs were performed, and out‐patients with lung diseases were more often encouraged to send sputum samples by mail than to visit the clinic in person.\nOur classification of isolates as background samples is also associated with limitations. Not all laboratories adhered strictly to the inclusion criteria, and not all laboratories referred background samples. Clinical data and information on prior antifungal treatment were not collected, and therefore we cannot verify that the samples marked ‘Background’ actually represented clinically insignificant background samples or that all such samples were indeed marked as Background. 
However, the fact that no isolates with medically driven point mutations were found among background samples suggests that Background samples are at least dominated by isolates from patients without prior azole therapy for clinically documented infection, and are thus representative of the background level of environmental resistance.\nIn conclusion, azole resistance is a significant problem for patients with clinical disease and in need of azole treatment. The scarcity of oral alternative drug options, combined with long treatment durations, is a clinical challenge and results in a worsened prognosis. Initial treatment of invasive aspergillosis can remain unchanged in Denmark – but optimal treatment strategies do depend on the likelihood of azole resistance – highlighting the importance of continued surveillance, rapid susceptibility testing and a one‐health approach to azole use.", "MR: Has received research and travel grants from Gilead. RKH: Has over the past 5 years received travel grants and speaker honoraria from Gilead. JBG: Has over the past 5 years received travel grants and speaker honoraria from Gilead. LK: No conflicts of interest. FSR: No conflicts of interest. SS: No conflicts of interest. NA: No conflicts of interest. JB: No conflicts of interest. BLR: No conflicts of interest. EM: No conflicts of interest. KA: Has received travel grants and speaker honoraria from Gilead. MP: No conflicts of interest. ED: No conflicts of interest. SLA: No conflicts of interest. MCA: Has, outside the current work and over the past 5 years, received research grants/contract work (paid to the SSI) from Amplyx, Basilea, Cidara, F2G, Gilead, Novabiotics and Scynexis, and speaker honoraria (personal fee) from Astellas, Chiesi, Gilead, MSD and SEGES. She is the current chairman of the EUCAST‐AFST.", "\nMalene Risum: Data curation (lead); Formal analysis (equal); Investigation (equal); Project administration (equal); Writing – original draft (lead). Rasmus Krøger Hare: Data curation (equal); Investigation (equal); Methodology (lead); Writing – review & editing (equal). Jan Berg Gertsen: Data curation (equal); Investigation (equal); Methodology (equal); Project administration (equal); Writing – review & editing (supporting). Lise Kristensen: Investigation (equal); Methodology (equal); Writing – review & editing (supporting). Flemming Schønning Rosenvinge: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Sofia Sulim: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Nissrine Abou‐Chakra: Investigation (supporting); Methodology (equal); Writing – review & editing (supporting). Jette Bangsborg: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Bent Røder: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Ea Sofie Marmolin: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Karen Astavad: Investigation (equal); Methodology (supporting); Writing – review & editing (equal). Michael Pedersen: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Esad Dzajic: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Steen Lomborg Andersen: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). 
Maiken Cavling Arendrup: Conceptualization (lead); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (lead); Project administration (lead); Writing – original draft (lead)." ]
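The Results above spell out how MICs falling in the EUCAST area of technical uncertainty (ATU) were resolved: a posaconazole MIC of 0.25 mg/L was called resistant only with an itraconazole MIC >1 mg/L, and an isavuconazole MIC of 2 mg/L only with a voriconazole MIC >1 mg/L. A sketch of that decision logic in R, using the thresholds quoted in the text (an illustration of the rules as described, not a substitute for the EUCAST v. 10.0 breakpoint tables):

    # Posaconazole: R if MIC >= 0.5; ATU at 0.25 resolved by itraconazole
    classify_pos <- function(pos_mic, itr_mic) {
      if (pos_mic >= 0.5) "R"
      else if (pos_mic == 0.25) ifelse(itr_mic > 1, "R", "S")  # ATU
      else "S"
    }
    # Isavuconazole: R if MIC >= 4; ATU at 2 resolved by voriconazole
    classify_isa <- function(isa_mic, vor_mic) {
      if (isa_mic >= 4) "R"
      else if (isa_mic == 2) ifelse(vor_mic > 1, "R", "S")     # ATU
      else "S"
    }
    classify_pos(0.25, itr_mic = 2)   # "R": ATU resolved by cross-resistance
    classify_isa(2, vor_mic = 0.5)    # "S": ATU without voriconazole resistance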
[ null, "methods", null, null, null, null, "results", "discussion", "COI-statement", null ]
[ "antifungal susceptibility", "\nAspergillus fumigatus\n", "azole resistance", "environmental route", "itraconazole", "medical route", "TR34/L98H", "voriconazole" ]
INTRODUCTION: Azole resistance in Aspergillus fumigatus due to the specific molecular mechanisms TR34/L98H or TR46/Y121F/T289A has been reported from all seven continents except Antarctica. 1 These mechanisms are found in environmental A. fumigatus isolates and in isolates from azole‐naïve as well as azole‐exposed patients. Azole resistance can also arise in A. fumigatus in patients receiving long‐term azole treatment. 2 Most resistant isolates harbour mutations in cyp51A, which encodes the azole target 14α‐sterol‐demethylase, essential for ergosterol biosynthesis. 2 However, azole resistance has also been ascribed to efflux pumps and other non‐cyp51A‐mediated resistance mutations. 2 , 3 In Denmark, the first isolations of TR34/L98H and TR46/Y121F/T289A were from clinical samples in 2007 and 2014, respectively. 4 , 5 Subsequently, TR34/L98H and TR46/Y121F/T289A have also been found in environmental samples since 2009 and 2019, respectively. 6 , 7 Moreover, an increase in the prevalence of azole resistance among Danish cystic fibrosis (CF) patients was found over a 10‐year period. 8 Clinical manifestations of Aspergillus vary according to patient group. In the CF population, Aspergillus occurs most often as part of colonisation, allergic bronchopulmonary aspergillosis (ABPA) and bronchitis. 9 ABPA is also a well‐known condition in patients with asthma. 10 Invasive aspergillosis mainly occurs in patients who are immunosuppressed, and chronic aspergillosis in patients with impaired lung tissue architecture. Azoles are the drugs of choice in the management of aspergillosis. 10 Voriconazole and isavuconazole are first choice in invasive aspergillosis, 11 , 12 itraconazole (and voriconazole) in chronic aspergillosis 13 and posaconazole as prophylaxis or salvage treatment, with a potential future broadening of its licensed indication owing to its non‐inferiority to voriconazole for primary therapy. 10 , 12 , 14 At present, azoles are the only antifungal agents against aspergillosis available for oral administration. The emergence of azole resistance complicates patient treatment, and invasive aspergillosis with azole resistance is associated with an inferior outcome compared to invasive aspergillosis with a susceptible strain. 15 , 16 An international expert opinion suggested that when the environmental resistance rate exceeds 10% in a region, the initial treatment for invasive aspergillosis should be either liposomal amphotericin B or voriconazole combined with an echinocandin. 17 This recommendation was based on two observations: first, the significantly increased mortality found in patients who received voriconazole initially for invasive aspergillosis due to resistant A. fumigatus 15 , 16 and, second, the superior activity of voriconazole in those with susceptible A. fumigatus (~70% survival vs. 55% for amphotericin B and 50% for echinocandins). 17 , 18 This approach requires reliable epidemiological data on the prevalence of azole resistance in A. fumigatus via the environmental route of acquisition (which may occur even in azole‐naïve patients) and the medical route (which is limited to the azole‐exposed patient population). The Danish national surveillance programme on azole resistance was established in 2018 upon request from the Ministry of Health due to rising concerns about azole resistance of environmental origin. The objective was to determine the prevalence of azole‐resistant A. fumigatus isolates among A. 
fumigatus colonised and infected patients in Denmark and determine the underlying resistance mechanism. We present data from the first 2 years of the surveillance. METHODS: Organisation of the national surveillance programme of azole‐resistant A. fumigatus The surveillance programme was initiated on October 1st 2018 with participation from all 10 Danish clinical microbiological departments. Inclusion criteria were as follows: (a) unique A. fumigatus isolates that were regarded clinically significant and (b) any A. fumigatus isolated on a preferred weekday (regardless of clinical significance) were included when marked ‘Background’. The adherence to the inclusion criteria varied. Six departments followed the instructions with ‘Background’ samples with a potential uncertainty of whether the isolate represented a clinical condition with aspergillosis or a contamination. Two departments sent all isolates, and two departments sent only clinically relevant isolates. The centres are quite in‐homogeneous in patient up‐take and size of uptake area. For example, three are district hospitals (Vejle, Sønderborg and Esbjerg), two hold CF‐centres (AUH and RH) and one has a centre for chronic pulmonary aspergillosis (OUH). Isolates from the same patients were deemed unique if one of the following conditions were met: (1) when sampled more than 30 days apart, (2) if the isolate had a different susceptibility or (3) a different molecular resistance mechanism. The clinical microbiological departments referred isolates to the reference mycological laboratory at Statens Serum Institut prospectively. Some departments performed species identification of moulds to the species level and only referred A. fumigatus while others referred all Aspergillus isolates or all mould isolates for species identification and susceptibility testing. One department at Aarhus University Hospital (AUH) performed EUCAST susceptibility testing (E. Def 10.1 and E. Def 9.3.1 as described below) of most A. fumigatus isolates locally and referred the MIC data and all resistant isolates for cyp51A sequencing (and confirmatory MIC determination) thus ensuring that all A. fumigatus isolates from AUH were included in the data analysis. Monthly reports on referred isolates were communicated to the participating laboratories to motivate and ensure adherence to the surveillance programme. The surveillance programme was initiated on October 1st 2018 with participation from all 10 Danish clinical microbiological departments. Inclusion criteria were as follows: (a) unique A. fumigatus isolates that were regarded clinically significant and (b) any A. fumigatus isolated on a preferred weekday (regardless of clinical significance) were included when marked ‘Background’. The adherence to the inclusion criteria varied. Six departments followed the instructions with ‘Background’ samples with a potential uncertainty of whether the isolate represented a clinical condition with aspergillosis or a contamination. Two departments sent all isolates, and two departments sent only clinically relevant isolates. The centres are quite in‐homogeneous in patient up‐take and size of uptake area. For example, three are district hospitals (Vejle, Sønderborg and Esbjerg), two hold CF‐centres (AUH and RH) and one has a centre for chronic pulmonary aspergillosis (OUH). 
Organisation of the national surveillance programme of azole‐resistant A. fumigatus: The surveillance programme was initiated on 1 October 2018 with participation from all 10 Danish clinical microbiological departments. Inclusion criteria were as follows: (a) unique A. fumigatus isolates that were regarded as clinically significant and (b) any A. fumigatus isolated on a preferred weekday (regardless of clinical significance), the latter being marked 'Background'. Adherence to the inclusion criteria varied: six departments followed the instructions and referred 'Background' samples (with a potential uncertainty as to whether an isolate represented a clinical condition with aspergillosis or a contamination), two departments sent all isolates, and two departments sent only clinically relevant isolates. The centres are quite heterogeneous in patient mix and in the size of their uptake area; for example, three are district hospitals (Vejle, Sønderborg and Esbjerg), two hold CF centres (AUH and RH) and one has a centre for chronic pulmonary aspergillosis (OUH). Isolates from the same patient were deemed unique if one of the following conditions was met: (1) they were sampled more than 30 days apart, (2) the isolate had a different susceptibility, or (3) the isolate had a different molecular resistance mechanism. The clinical microbiological departments referred isolates prospectively to the reference mycological laboratory at Statens Serum Institut. Some departments identified moulds to species level and referred only A. fumigatus, while others referred all Aspergillus isolates or all mould isolates for species identification and susceptibility testing. One department, at Aarhus University Hospital (AUH), performed EUCAST susceptibility testing (E. Def 10.1 and E. Def 9.3.1, as described below) of most A. fumigatus isolates locally and referred the MIC data and all resistant isolates for cyp51A sequencing (and confirmatory MIC determination), thus ensuring that all A. fumigatus isolates from AUH were included in the data analysis. Monthly reports on referred isolates were communicated to the participating laboratories to motivate and ensure adherence to the surveillance programme.

Culturing and species identification: Primary cultures were performed on Sabouraud glucose agar (SSI Diagnostika or bioMérieux) or YGC agar (yeast glucose agar; SSI Diagnostika) with incubation at 35–37°C for 5 days. Species identification combined classical techniques, including macro‐ and micromorphology and thermotolerance testing, supplemented with MALDI‐TOF MS and β‐tubulin sequencing as needed, as previously described. 8 Only A. fumigatus sensu stricto isolates were included in the surveillance.

Susceptibility testing and target gene sequencing: A. fumigatus isolates underwent screening for azole resistance following the EUCAST E. Def 10.1 method using VIPcheck azole agar plates (Mediaproducts BV). 19 Screening-positive isolates underwent EUCAST E. Def 9.3.1 susceptibility testing. 20 For consistency, the MIC values from the reference laboratory were used throughout. The applied antifungal concentration ranges for the MIC testing varied slightly during the study period. Susceptibility classification was performed according to the current EUCAST breakpoints v. 10.0. 21 cyp51A sequencing was performed for isolates classified as resistant to at least one azole. The promoter and full coding region of the cyp51A gene were sequenced as previously described, 5 with the exception that for Sanger sequencing, primer 0F was replaced with a new primer 1F (5′‐GTGCGTAGCAAGGGAGAAGGA‐3′) for improved results.

Data management: The azole resistance prevalence was determined at patient level and compared to the Dutch national surveillance, 2013–2018. 22 Azole resistance was divided into environmentally driven resistance (presence of TR34/L98H or TR46/Y121F/T289A), other cyp51A mutations and non‐cyp51A‐mediated resistance (when resistant, but no cyp51A mutations were identified). A χ² test was used to compare azole resistance prevalence at patient level between the Dutch and the Danish populations, using R (version 4.1.1; R Core Team, R Foundation for Statistical Computing, Vienna, Austria; https://www.R‐project.org/). The surveillance was requested by the Danish Ministry of Health, and the scientific study was approved by QA & Compliance at Statens Serum Institut (journal number 21/00765). Preliminary results have previously been presented in part at Trends in Medical Mycology 2019 and at the European Congress of Clinical Microbiology and Infectious Diseases (ECCMID) in 2020 and 2021, and briefly summarised as part of the national Danish Integrated Antimicrobial Resistance Monitoring and Research Programme (DANMAP), established by the Danish Ministry of Food, Agriculture and Fisheries and the Danish Ministry of Health, in the annual reports for 2018, 2019 and 2020. 7 , 23 , 24 The results presented here are updated since then.
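As a concrete illustration of the national-level comparison described under Data management: with the patient-level counts reported in the Results below (66/1083 resistant patients in Denmark vs. 508/4496 in the Dutch surveillance), the χ² test of equal proportions can be run in R roughly as follows. This is a minimal sketch; prop.test() is one standard way to perform this test, and the authors' exact call is not specified beyond the use of a χ² test in R.

```r
# Chi-squared comparison of patient-level azole resistance prevalence,
# Denmark vs. the Netherlands (counts from the Results section)
resistant <- c(Denmark = 66, Netherlands = 508)
patients  <- c(Denmark = 1083, Netherlands = 4496)

prop.test(resistant, patients)  # 6.1% vs. 11.3%, p < .0001
```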
RESULTS: A total of 1820 susceptibility‐tested A. fumigatus isolates from 1083 patients were included in the analysis. The vast majority originated from the airways, including nose/sinus (1609/1820), and from ear samples (182/1820) (Table 1).

[Table 1. Number of patients and sample types with Aspergillus fumigatus isolates. Footnotes: samples marked 'Background'; includes samples marked as sputum/laryngeal aspirate, sinus, nose/nose vestibule biopsy/nasal aspirate/nose‐throat; includes samples marked as BAL/bronchial aspirate/pleural fluid and lung biopsy/lung/pleura; cerebrospinal fluid, abscess/drain fluid/drain/abdominal swab, abdominal biopsy/organ biopsy not specified/pericardium/pericardial fluid and bone.]

Itraconazole resistance was found in 5.9% (108/1820) of isolates and voriconazole resistance in 5.6% (102/1820) (Figure 1). Posaconazole resistance was detected in 103 isolates due to MICs of ≥0.5 mg/L; a further 85 isolates had an MIC of 0.25 mg/L (defined as the area of technical uncertainty [ATU]), of which 6 were classified as resistant due to an itraconazole MIC >1 mg/L. Isavuconazole resistance with MICs of ≥4 mg/L was detected in 90 isolates, and 235 isolates had an MIC of 2 mg/L (ATU), of which 13 were classified as resistant due to a voriconazole MIC >1 mg/L. Overall, susceptibility testing identified 119 isolates resistant to at least one azole from 66 patients, corresponding to a resistance prevalence among patients of 6.1% (66/1083, 95% CI 4.8%–7.7%). The proportion of isolates that were azole resistant was 6.5% (119/1820). The proportion of resistant isolates was 4.3% (12/278) from the lower airways, compared with 3.6% (6/166) from tracheal aspirates and 8.5% (99/1165) from the upper airways (Table 1).

[Figure 1. MIC values for the included Aspergillus fumigatus isolates. Susceptible isolates (S) are shown in green when susceptible at azole resistance screening. Susceptible isolates with an MIC are shown in blue, resistant isolates in red, and isolates in the ATU, for which the classification depends on the susceptibility to itraconazole or voriconazole, respectively, in black. MIC values above 4 mg/L are shown as >4 mg/L. Isolates with no MICs for posaconazole (n = 1) and isavuconazole (n = 365) are not included in the diagrams.]
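The ATU handling just described amounts to a small decision rule. A minimal sketch in R, assuming MICs in mg/L and the rules stated above (a posaconazole MIC of 0.25 mg/L and an isavuconazole MIC of 2 mg/L fall in the ATU and are resolved using the itraconazole and voriconazole MICs, respectively); the function names are illustrative and not part of the EUCAST documents:

```r
# Resolve ATU classifications per the rules described in the Results
classify_pos <- function(pos_mic, itr_mic) {
  if (pos_mic >= 0.5) "R"
  else if (pos_mic == 0.25) ifelse(itr_mic > 1, "R", "S")  # ATU
  else "S"
}

classify_isa <- function(isa_mic, vor_mic) {
  if (isa_mic >= 4) "R"
  else if (isa_mic == 2) ifelse(vor_mic > 1, "R", "S")     # ATU
  else "S"
}

classify_pos(0.25, itr_mic = 2)  # "R": ATU resolved by itraconazole resistance
classify_isa(2,    vor_mic = 1)  # "S": ATU with voriconazole still susceptible
```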
The proportional distribution of resistance mechanisms at patient level is shown in Figure 2. cyp51A sequencing of azole‐resistant A. fumigatus demonstrated environmental resistance (TR34/L98H, TR34³/L98H or TR34/L98H/S297T/F495I) in isolates from 39 patients. This corresponds to an environmental resistance prevalence among the patients of 3.6% (39/1083 patients; 95% CI: 2.6%–4.9%) and accounted for 59.1% (39/66 patients; 95% CI: 47.0%–70.1%) of patients with resistant A. fumigatus isolates (Figure 2). Resistance with a tandem repeat was detected in samples from the airways (sputum [n = 44], BAL [n = 6] and tracheal aspirate [n = 4]) and in one ear sample.

[Figure 2. Cyp51A amino acid profiles found in the 66 patients with at least one resistant isolate. Each patient is shown only once. Some patients harbour resistant isolates with a Cyp51A resistance mechanism as well as resistant isolates with a non‐Cyp51A‐related mechanism (WT), for example the two patients with P216S and wild‐type isolates.]

Resistance involving other cyp51A mutations accounted for 21.2% (14/66; 95% CI: 13.1%–32.5%). The corresponding alterations were G54R (n = 5), P216S (n = 2), F219L (n = 1), G54W (n = 1), M220I (n = 1), M220K (n = 1), M220R (n = 1), G432S (n = 1), G448S (n = 1) and Y121F (n = 1). One patient had sequential isolates with either M220R or G54R. Non‐cyp51A‐mediated resistance (wild‐type cyp51A) accounted for 19.7% (13/66; 95% CI: 11.9%–30.8%). Isolates from 12 of these patients were voriconazole resistant with MICs ≥4 mg/L, or with an MIC of 2 mg/L and cross‐resistance to the other azoles. One patient harboured an isolate that was classified as resistant solely due to a voriconazole MIC of 2 mg/L. All TR34/L98H variants, M220R, G432S, G448S and Y121F were associated with pan‐azole high‐level resistance (Table 2). In contrast, F219L, G54R, G54W, M220I, M220K and P216S primarily affected itraconazole susceptibility, and to some degree posaconazole, with limited or no MIC elevations for voriconazole and isavuconazole.

[Table 2. MIC values for Aspergillus fumigatus isolates resistant to at least one azole that underwent cyp51A sequencing. Resistance mechanisms are shown according to environmental, single point mutation and non‐cyp51A‐mediated categories; single point mutations are ordered by decreasing resistance. Abbreviations: ISA, isavuconazole; ITR, itraconazole; POS, posaconazole; VOR, voriconazole. Footnotes: one resistant isolate is not shown, since it carried F46Y/M172V/E427K, which is not associated with azole resistance, and the same patient had other resistant isolates with TR34/L98H; four isolates did not undergo cyp51A sequencing and are not shown; one isolate with TR34/L98H was mixed with a wild‐type isolate, resulting in lower MICs than normally observed for TR34/L98H; one resistant isolate with N248K was classified as non‐cyp51A‐mediated, since this mutation is not associated with azole resistance and since the same patient had another resistant isolate with no detected cyp51A‐mediated resistance.]

Among the 66 patients with a resistant isolate, both susceptible and resistant isolates were cultured intermittently during the surveillance in 38 (58%). Twenty‐five patients had only one resistant isolate, and three patients had several consecutive resistant isolates. Four unique resistant isolates did not undergo cyp51A sequencing.
These comprised three isolates from a patient who had several resistant isolates with M220K, and one isolate from a patient who had isolates with P216S. Isolates marked as 'Background samples' included 99 isolates from 94 patients (Table 1). A. fumigatus isolates from three patients (3.2%; 95% CI: 0.9%–9.0%) were azole resistant, and all harboured the TR34/L98H resistance mechanism. One patient had consecutive TR34/L98H isolates, of which one was marked as background and one was not. Azole resistance was detected in samples from all five Danish regions (Figure 3). TR34/L98H isolates were detected in four out of five regions and in samples from both the hospital and the primary health care sector, whereas isolates with single point mutations in cyp51A were found in three of five regions.

[Figure 3. Proportion of resistant Aspergillus fumigatus isolates and the associated underlying resistance mechanisms across the five Danish regions. Each region shown is the region of the health care facility from which the isolate was referred; as some health care services are centralised, this will not in all cases represent the patient's place of residence or the place where the resistant fungus was acquired. Total numbers of isolates were: Capital n = 910, Zealand n = 91, Southern Denmark n = 326, Central Jutland n = 419 and Northern Jutland n = 74. The resistance mechanism remained uncharacterised in five isolates from five patients who were known to harbour other cyp51A mutant isolates (blue bar). These included four resistant isolates that did not undergo cyp51A sequencing, of which three derived from a patient who had other isolates with M220K and one from a patient who had other isolates with P216S, and one isolate with F46Y/M172V/E427K from a patient who also had isolates with TR34/L98H.]

Comparing surveillances at national level, the azole resistance prevalence was lower in Denmark than in the Netherlands (66/1083 [6.1%] vs. 508/4496 [11.3%]) (p < .0001). 22

DISCUSSION: An azole resistance prevalence of 6.1%, including a TR34/L98H‐related environmental resistance prevalence of 3.6%, was documented at patient level during the first 2 years of the Danish nationwide surveillance programme. Whereas the first figure represents the current burden of azole resistance, the second indicates the likelihood of encountering azole resistance among azole‐naïve patients. The resistance prevalence was higher in samples from the upper airways than in tracheal aspirates and lower airway specimens. We speculate that this may reflect the fact that out‐patients with chronic lung disease, including CF and aspergillosis, are often provided with sputum containers for regular submission of sputum by mail, and thus that the resistance frequencies across the different sample types are not directly comparable. Overall, the resistance prevalence remains well below 10%, and azole therapy therefore remains the first choice for the initial treatment of aspergillosis in our country. Several observations suggest that azole resistance in A. fumigatus, and resistance due to TR34/L98H specifically, is increasing in Denmark. In a 3‐month multicentre survey in 2007, 1.9% of A. fumigatus isolates were azole resistant, none due to TR34/L98H mechanisms. 25 From 2010 to 2014, the azole resistance prevalence at patient level was 2.1% among referred (and thus selected) isolates, with approximately half involving TR34/L98H mechanisms. 26
Moreover, in the Danish CF population specifically, an azole resistance prevalence of 4.5%, including 1.5% due to TR34/L98H, was observed in 2007 and 2009, compared to 7.3%, including 3.7% TR34/L98H, in 2018. 5 , 8 Although the studies are not directly comparable, we speculate that azole resistance is rising both overall and among CF patients and is driven by both medical and environmental azole use. Of note, we did not observe any isolates harbouring TR46/Y121F/T289A during the 2‐year surveillance, although this resistance genotype was found once in Denmark in 2014. 4 Three single point amino acid alterations (G54A, G54R and G432S) have been associated with azole resistance in both azole‐treated patients and the environment. 27 , 28 , 29 , 30 Five patients in this surveillance programme harboured isolates with a G54R and one patient an isolate with a G432S alteration. Unfortunately, we did not have access to clinical information or prior medication data to enable a discussion of the origin of these resistance mechanisms. The number of patients with azole‐resistant A. fumigatus was unevenly distributed across the five regions in Denmark. The likely reason for the higher occurrence in the Capital region is that it is the largest region by population size and that one of the two CF centres is based there. TR34/L98H was not detected in Northern Jutland during the study period but was detected in a clinical isolate shortly before the surveillance programme was initiated. 24 We therefore argue that TR34/L98H is found all over the country and poses a risk for any patient in Denmark susceptible to Aspergillus infections. In comparison with other surveillance studies, our azole resistance prevalence was higher than the 3.2% found from 2009 to 2011 in a multicentre study involving 19 countries, 31 but lower than the 11% in the more recent Dutch nationwide surveillance in 2017 and 2018. 22 Larger studies and surveillances on azole resistance in A. fumigatus are summarised in Table 3. 22 , 25 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 These studies show that the azole resistance prevalence in the present surveillance is in line with other European studies from 2011 to 2018 and with the Netherlands from 2007 to 2009.

[Table 3. Studies and surveillances of azole resistance in Aspergillus fumigatus.
Study | Azole resistance | TR34/L98H and/or TR46/Y121F proportion of resistance
Denmark 2007 (Mortensen et al.) 25 | 1.9% isolate level (2/107) | not specified
The Netherlands 2013–2018 (Lestrade et al.) 22 | 11.3% patient level (508/4496) | not specified
The Netherlands 2007–2009 (Van der Linden et al.) 32 | 5.3% patient level (63/1192); 4.6% isolate level (82/1792) | 4.1% isolate level (74/1792)
Belgium 2011–2012 (Vermeulen et al.) 33 | 5.5% patient level (9/164) | 4.3% patient level (7/164)
Spain 2019 (Escribano et al.) 34 | 4.7% patient level (34/715) | 2.8% patient level (20/715)
Italy 2016–2018 (Prigitano et al.) 35 | 6.6% isolate level (19/286) | 4.2% isolate level (12/286)
USA 2015–2017 (Berkow et al.) 36 | 1.5% isolate level (20/1356) | 0.4% isolate level (5/1356)
Japan 2017–2018 (Tsuchido et al.) 39 | 12.7% isolate level (7/55) | 3.6% isolate level (2/55)
Taiwan 2011–2018 (Wu et al.) 37 | 4% patient level (12/297); 5.1% isolate level (19/375) | 3.4% patient level (10/297); 3.5% isolate level (13/375)
China 2010–2015 (Chen et al.) 38 | 2.5% isolate level (8/317) | 2.5% isolate level (8/317)
Footnotes: Studies shown are either nationwide surveillances or multicentre studies within one country, not limited to a certain patient group or a referral hospital.
Studies included are those that report azole resistance in A. fumigatus specifically. Azole resistance prevalence is shown as numbers of patients unless otherwise specified. Where marked 'not specified', the numbers of patients with either TR34/L98H or TR46/Y121F and the total numbers of isolates were not specified.]

This study is associated with both strengths and limitations. The primary strength is that it is nationwide and thus population based; results in studies limited to a specific disease or centre will depend strongly on the case mix and use of azole therapy, which would favour selection of azole‐resistant A. fumigatus. Limitations include a risk of ascertainment bias. We cannot exclude that centres managing many Aspergillus patients are more prone to prioritise referral of isolates (so‐called cluster sampling). Furthermore, the COVID‐19 pandemic emerged during the surveillance, and we cannot be certain that routine sampling was performed as under regular circumstances. Indeed, fewer BALs were performed, and out‐patients with lung diseases were more often encouraged to send sputum samples by mail than to visit the clinic in person. Our classification of isolates as background samples is also associated with limitations. Not all laboratories adhered strictly to the inclusion criteria, and not all laboratories referred background samples. Clinical data and information on prior antifungal treatment were not collected, and therefore we cannot verify that the samples marked 'Background' actually represented clinically insignificant background samples, or that all such samples were indeed marked as background samples. However, the fact that no isolates with medically driven point mutations were found among background samples suggests that background samples are at least dominated by isolates from patients without prior azole therapy for clinically documented infection and are thus representative of the background level of environmental resistance. In conclusion, azole resistance is a significant problem for patients with clinical disease and in need of azole treatment. The few or absent oral alternative drug options, combined with the long duration of treatment, constitute a clinical challenge and result in a worsened prognosis. Initial treatment of invasive aspergillosis can remain unchanged in Denmark, but optimal treatment strategies depend on the likelihood of azole resistance, highlighting the importance of continued surveillance, rapid susceptibility testing and a one‐health approach to azole use. CONFLICT OF INTEREST: MR: Has received research and travel grants from Gilead. RKH: Has over the past 5 years received travel grants and speaker honoraria from Gilead. JBG: Has over the past 5 years received travel grants and speaker honoraria from Gilead. LK: No conflicts of interest. FSR: No conflicts of interest. SS: No conflicts of interest. NA: No conflicts of interest. JB: No conflicts of interest. BLR: No conflicts of interest. EM: No conflicts of interest. KA: Has received travel grants and speaker honoraria from Gilead. MP: No conflicts of interest. ED: No conflicts of interest. SLA: No conflicts of interest. MCA: Has, outside the current work and over the past 5 years, received research grants/contract work (paid to the SSI) from Amplyx, Basilea, Cidara, F2G, Gilead, Novabiotics and Scynexis, and speaker honoraria (personal fee) from Astellas, Chiesi, Gilead, MSD and SEGES. She is the current chairman of the EUCAST‐AFST.
AUTHOR CONTRIBUTIONS: Malene Risum: Data curation (lead); Formal analysis (equal); Investigation (equal); Project administration (equal); Writing – original draft (lead). Rasmus Krøger Hare: Data curation (equal); Investigation (equal); Methodology (lead); Writing – review & editing (equal). Jan Berg Gertsen: Data curation (equal); Investigation (equal); Methodology (equal); Project administration (equal); Writing – review & editing (supporting). Lise Kristensen: Investigation (equal); Methodology (equal); Writing – review & editing (supporting). Flemming Schønning Rosenvinge: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Sofia Sulim: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Nissrine Abou‐Chakra: Investigation (supporting); Methodology (equal); Writing – review & editing (supporting). Jette Bangsborg: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Bent Røder: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Ea Sofie Marmolin: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Karen Astavad: Investigation (equal); Methodology (supporting); Writing – review & editing (equal). Michael Pedersen: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Esad Dzajic: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Steen Lomborg Andersen: Investigation (supporting); Methodology (supporting); Writing – review & editing (supporting). Maiken Cavling Arendrup: Conceptualization (lead); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (lead); Project administration (lead); Writing – original draft (lead).
Background: Azole resistance complicates treatment of patients with invasive aspergillosis and is associated with increased mortality. Azole resistance in Aspergillus fumigatus is a growing problem and is associated with human and environmental azole use. Denmark has a considerable and highly efficient agricultural sector. Following reports on environmental azole resistance in A. fumigatus from Danish patients, the Ministry of Health requested a prospective national surveillance of azole-resistant A. fumigatus, and particularly that of environmental origin. Methods: Unique isolates regarded as clinically relevant and any A. fumigatus isolated on a preferred weekday (background samples) were included. EUCAST susceptibility testing was performed, and azole-resistant isolates underwent cyp51A gene sequencing. Results: The azole resistance prevalence was 6.1% (66/1083) at patient level. The TR34/L98H prevalence was 3.6% (39/1083) and included the variants TR34/L98H, TR34³/L98H and TR34/L98H/S297T/F495I. Resistance caused by other Cyp51A variants accounted for 1.3% (14/1083) and included G54R, P216S, F219L, G54W, M220I, M220K, M220R, G432S, G448S and Y121F alterations. Non-Cyp51A-mediated resistance accounted for 1.2% (13/1083). Proportionally, TR34/L98H, other Cyp51A variants and non-Cyp51A-mediated resistance accounted for 59.1% (39/66), 21.2% (14/66) and 19.7% (13/66), respectively, of all resistance. Azole resistance was detected in all five regions in Denmark, and TR34/L98H specifically in four of five regions during the surveillance period. Conclusions: The azole resistance prevalence does not lead to a change in the initial treatment of aspergillosis at this point, but it causes concern and leads to therapeutic challenges in the affected patients.
null
null
6,889
324
[ 658, 355, 77, 148, 252, 389 ]
10
[ "isolates", "resistance", "azole", "fumigatus", "resistant", "patients", "azole resistance", "patient", "cyp51a", "level" ]
[ "resistance aspergillus fumigatus", "azole resistance prevalence", "invasive aspergillosis azole", "azole resistance fumigatus", "ergosterol biosynthesis azole" ]
null
null
[CONTENT] antifungal susceptibility | Aspergillus fumigatus | azole resistance | environmental route | itraconazole | medical route | TR34/L98H | voriconazole [SUMMARY]
[CONTENT] antifungal susceptibility | Aspergillus fumigatus | azole resistance | environmental route | itraconazole | medical route | TR34/L98H | voriconazole [SUMMARY]
[CONTENT] antifungal susceptibility | Aspergillus fumigatus | azole resistance | environmental route | itraconazole | medical route | TR34/L98H | voriconazole [SUMMARY]
null
[CONTENT] antifungal susceptibility | Aspergillus fumigatus | azole resistance | environmental route | itraconazole | medical route | TR34/L98H | voriconazole [SUMMARY]
null
[CONTENT] Antifungal Agents | Aspergillus fumigatus | Azoles | Denmark | Drug Resistance, Fungal | Fungal Proteins | Humans | Microbial Sensitivity Tests | Prospective Studies [SUMMARY]
[CONTENT] Antifungal Agents | Aspergillus fumigatus | Azoles | Denmark | Drug Resistance, Fungal | Fungal Proteins | Humans | Microbial Sensitivity Tests | Prospective Studies [SUMMARY]
[CONTENT] Antifungal Agents | Aspergillus fumigatus | Azoles | Denmark | Drug Resistance, Fungal | Fungal Proteins | Humans | Microbial Sensitivity Tests | Prospective Studies [SUMMARY]
null
[CONTENT] Antifungal Agents | Aspergillus fumigatus | Azoles | Denmark | Drug Resistance, Fungal | Fungal Proteins | Humans | Microbial Sensitivity Tests | Prospective Studies [SUMMARY]
null
[CONTENT] resistance aspergillus fumigatus | azole resistance prevalence | invasive aspergillosis azole | azole resistance fumigatus | ergosterol biosynthesis azole [SUMMARY]
[CONTENT] resistance aspergillus fumigatus | azole resistance prevalence | invasive aspergillosis azole | azole resistance fumigatus | ergosterol biosynthesis azole [SUMMARY]
[CONTENT] resistance aspergillus fumigatus | azole resistance prevalence | invasive aspergillosis azole | azole resistance fumigatus | ergosterol biosynthesis azole [SUMMARY]
null
[CONTENT] resistance aspergillus fumigatus | azole resistance prevalence | invasive aspergillosis azole | azole resistance fumigatus | ergosterol biosynthesis azole [SUMMARY]
null
[CONTENT] isolates | resistance | azole | fumigatus | resistant | patients | azole resistance | patient | cyp51a | level [SUMMARY]
[CONTENT] isolates | resistance | azole | fumigatus | resistant | patients | azole resistance | patient | cyp51a | level [SUMMARY]
[CONTENT] isolates | resistance | azole | fumigatus | resistant | patients | azole resistance | patient | cyp51a | level [SUMMARY]
null
[CONTENT] isolates | resistance | azole | fumigatus | resistant | patients | azole resistance | patient | cyp51a | level [SUMMARY]
null
[CONTENT] azole | aspergillosis | resistance | patients | voriconazole | invasive | invasive aspergillosis | azole resistance | fumigatus | environmental [SUMMARY]
[CONTENT] isolates | departments | azole | resistance | fumigatus | species | referred | susceptibility | danish | cyp51a [SUMMARY]
[CONTENT] isolates | resistant | resistance | patients | mg | isolate | cyp51a | 66 | shown | mic [SUMMARY]
null
[CONTENT] isolates | azole | resistance | fumigatus | azole resistance | patients | supporting | cyp51a | level | resistant [SUMMARY]
null
[CONTENT] Azole ||| Aspergillus ||| Denmark ||| azole | A. | Danish | the ministry of health | azole [SUMMARY]
[CONTENT] weekday ||| EUCAST [SUMMARY]
[CONTENT] azole | 6.1% | 66/1083 ||| 3.6% | 39/1083 | 3 ||| 1.3% | 14/1083 | G54R | F219L | G54W ||| Non-Cyp51A-mediated | 1.2% | 13/1083 ||| 59.1% | 39/66 | 21.2% | 14/66 | 19.7% | 13/66 ||| five | Denmark | four | five [SUMMARY]
null
[CONTENT] Azole ||| Aspergillus ||| Denmark ||| azole | A. | Danish | the ministry of health | azole ||| weekday ||| EUCAST ||| ||| azole | 6.1% | 66/1083 ||| 3.6% | 39/1083 | 3 ||| 1.3% | 14/1083 | G54R | F219L | G54W ||| Non-Cyp51A-mediated | 1.2% | 13/1083 ||| 59.1% | 39/66 | 21.2% | 14/66 | 19.7% | 13/66 ||| five | Denmark | four | five ||| azole [SUMMARY]
null
Drugs Dosing in Geriatric Patients Depending on Kidney Function Estimated by MDRD and Cockroft-Gault Formulas.
34916788
According to the current data, regardless of the method used to estimate GFR, the differences between the obtained results should be insignificant and should not affect therapeutic decisions. The aim of this study was to analyze and compare the eGFR results with the estimated creatinine clearance calculated according to the Cockroft-Gault equation, and to assess the significance of the difference between these two results.
INTRODUCTION
The study group consisted of 115 patients, of whom 76 were women and 39 men, aged 55-93 years with a median of 79 years. The study analyzed differences in the assessment of kidney function by comparing the eGFR results assessed by the MDRD method and obtained from the laboratory with the values of creatinine clearance calculated using the Cockroft-Gault formula, and by examining the correlation between the difference D = eGFR − eClCr and BMI and body surface area.
SAMPLE AND METHODS
In the entire group of patients (N = 115), a statistically significant difference was found between eGFR and eClCr. In the subgroup of patients (N = 45) with a lower baseline eGFR <60, there was no significant difference between eGFR and eClCr, while in the subgroup of patients with a baseline eGFR ≥60 (N = 75), there was a significant difference between eGFR and eClCr. The study showed that, based on the GFR estimated using both methods (C-G and MDRD), 29.2% and 32.4% of patients, respectively, were incorrectly assigned to a given stage of chronic kidney disease.
RESULTS
Proper assessment of kidney function is very important for correct drug dosing, especially for adjusting the doses of drugs metabolized by the kidneys, in order to avoid or minimize their nephrotoxic effects.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Humans", "Kidney", "Pharmaceutical Preparations" ]
8672121
Introduction
According to epidemiological data presented by the National Kidney Foundation, chronic kidney disease (CKD) affects about 11% of adults over 20 years of age,1 which in the Polish population corresponds to about 4 million people. According to other registers, the number of patients may be between 10% and 18% of the population,2–4 and in groups at risk, with coexisting diabetes, hypertension, atherosclerosis or obesity, or in elderly patients, it may even reach 50% of the general population.3 CKD is a chronic, progressive and initially asymptomatic disease; some patients are therefore unaware of the existing burden. Some of them will require renal replacement therapy in the future, so it is important to emphasize the importance of diagnosis, control and education as well as nephroprotective activities, including avoiding the use of nephrotoxic drugs and ensuring their correct dosage.1 The concept of chronic kidney disease as a set of symptoms associated with damage to, or a reduction in the number of, nephrons was developed in 2002 by a group of American nephrologists associated with the Kidney Disease Outcomes Quality Initiative (NKF K/DOQI).5 The current definition of chronic kidney disease, approved by Kidney Disease: Improving Global Outcomes (KDIGO) in 2005 and modified in 2012, assumes the demonstration of impaired renal function (in laboratory blood or urine tests) or structure (abnormalities in imaging or histological examinations) resulting from permanent damage to, or depletion of, nephrons due to diseases affecting the renal parenchyma.1,5 The glomerular filtration rate (GFR) and albuminuria are used to assess renal function.2,6 The glomerular filtration rate is the hypothetical volume of plasma cleared of a specific substance per unit of time.7 A method that yields very accurate results is the assessment of clearance using exogenous substances such as inulin or iohexol, but due to its cost and invasiveness it is not widely used and is rather applied as a reference in scientific research.7 The results obtained when determining creatinine clearance may be somewhat inaccurate, since creatinine, an endogenous substance formed in the muscles, is excreted both by glomerular filtration and, up to 20%, by tubular secretion, and the test requires a 24-hour urine collection.8–10 A huge step forward in diagnostics was the introduction of complex mathematical equations allowing the estimation of GFR values on the basis of a single blood creatinine concentration, taking into account variables such as the age, sex and body weight of the examined person (Cockroft-Gault method), or age, gender and race (simplified MDRD formula), and, in the case of MDRD-6, age, gender, race and the creatinine, urea and albumin concentrations, enabling simple and quick results.3,7 The first equation to estimate creatinine clearance (CrCl) from blood creatinine concentration was the Cockroft-Gault formula, published in 1976, based on data from 249 male patients aged 18–92 years with measured creatinine clearance in the range of 30–130 mL/min.11,12 It is based on age, weight, serum creatinine and gender (Table 1). The obtained results were compared with the means of two measurements of daily creatinine clearance.
The method does not take into account the size of the body surface area.13

Table 1. The Cockcroft-Gault formula for estimating creatinine clearance (CrCl)11,12
Male: CrCl = ([140 − age] × weight in kg)/(serum creatinine × 72)
Female: CrCl = CrCl (male) × 0.85

Currently, the GFR value calculated according to the Cockroft-Gault formula is not generally recommended for the diagnosis and monitoring of the course of chronic kidney disease.5,14 The Cockroft-Gault equation underestimates the GFR value in lean and elderly people and overestimates it in obese and hyperhydrated people.7 It should be emphasized, however, that the pharmacokinetic studies related to the registration of drugs, which determine the dosage in patients with kidney disease, were carried out on the basis of the GFR calculated according to the Cockroft-Gault formula.15–17 The MDRD formula was introduced in 1999 based on an analysis of data from 1628 patients with chronic kidney disease (mean GFR 40 mL/min/1.73 m2) who were predominantly Caucasian and had no associated diabetes.18 This formula comes in two forms: the classic one, which takes into account variables such as age, sex, race, serum creatinine, urea and serum albumin (MDRD6), and a simplified one that takes into account only age, gender, race and creatinine concentration (the so-called MDRD4), which has found widespread use in the assessment of eGFR.18,19 MDRD formula: GFR (mL/min/1.73 m2) = 175 × (Scr)^(−1.154) × (Age)^(−0.203) × (0.742 if female) × (1.212 if African American) (conventional units).20–22 To date, the MDRD equation has been evaluated in a number of further patient populations, including African Americans, Europeans and Asians, as well as diabetic patients with and without underlying kidney disease, kidney transplant patients and potential kidney donors.14 Observations have shown that the MDRD formula is a good tool for assessing kidney function in everyday clinical practice.3,5 It should be noted, however, that the MDRD formula has not been tested in persons without confirmed kidney damage (without demonstrated albuminuria), in pregnant women, in children, in the elderly >85 years of age, or in some ethnic groups, eg, Latinos. Nevertheless, the commonness and ease of obtaining the eGFR result have contributed to a significant improvement in the diagnosis and monitoring of chronic kidney disease.16 It should be emphasized, however, that the MDRD formula underestimates results in the GFR range of 60–120 mL/min; for GFR > 60 mL/min/1.73 m2, the MDRD equation may therefore, by lowering the values, lead to a misdiagnosis of CKD in healthy persons.7 In 2009, Levey et al developed another equation to evaluate eGFR, namely CKD-EPI.23,24 This time, a much larger group of subjects was included: 8254 people, both healthy and with kidney disease. The same data as for the simplified version of MDRD (age, sex, race, creatinine concentration) are required to estimate eGFR. It has been shown to have very similar accuracy to MDRD for GFR < 60 mL/min/1.73 m2 and higher accuracy for GFR > 60 mL/min/1.73 m2.19,24 In line with the recommendations of KDIGO (2012), CKD-EPI (named after the Chronic Kidney Disease Epidemiology Collaboration) is currently the preferred method for determining the estimated GFR.
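To make the two equations concrete, the following is a minimal sketch in R, assuming serum creatinine in mg/dL, weight in kg and age in years; the example patient is invented for illustration:

```r
# Cockroft-Gault estimated creatinine clearance (Table 1), in mL/min
eclcr_cg <- function(age, weight, scr, female) {
  clcr <- (140 - age) * weight / (72 * scr)
  if (female) clcr * 0.85 else clcr
}

# Simplified MDRD eGFR, in mL/min/1.73 m2
egfr_mdrd <- function(age, scr, female, african_american = FALSE) {
  gfr <- 175 * scr^-1.154 * age^-0.203
  if (female) gfr <- gfr * 0.742
  if (african_american) gfr <- gfr * 1.212
  gfr
}

# Example: an 80-year-old woman, 60 kg, creatinine 1.1 mg/dL
eclcr_cg(80, 60, 1.1, female = TRUE)   # ~39 mL/min
egfr_mdrd(80, 1.1, female = TRUE)      # ~48 mL/min/1.73 m2
```

Note that Cockroft-Gault returns an absolute clearance (mL/min), whereas MDRD returns a value normalised to 1.73 m2 of body surface area; this is one reason the two estimates diverge in patients of non-standard body size.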
However, some researchers discuss the above recommendation, pointing to the benefits of the commonly used MDRD formula and to CKD-EPI inaccuracy in the case of low GFR values.25 In everyday clinical practice, most determinations are still made using the MDRD equation. At the same time, it is known that the incidence of chronic kidney disease and renal failure increases in the elderly. The most common causes are the coexistence of diseases leading to kidney damage, such as atherosclerosis, arterial hypertension and type 2 diabetes, as well as the use of nephrotoxic drugs and prerenal damage associated with chronic fluid deficiency.2,6 It should also be mentioned that these pathological processes overlap with the slow deterioration of the excretory function of the kidneys associated with organ aging, due to atherosclerotic lesions, atrophy of the renal tubules and glomerular sclerosis.2,26 After the age of 30 years, GFR decreases by ca. 1.0 mL/min yearly.2 In the physiological aging process, both macroscopic changes (thinning of the renal cortex, more frequent occurrence of simple cysts) and microscopic changes, such as a decrease in the number of active nephrons on the basis of atherosclerosis, interstitial fibrosis and atrophy of the renal tubules, are observed.27 Biopsy studies conducted in healthy kidney donors have shown that nephrosclerosis can be observed in only about 2.7% of donors aged <30 years, but in about 58% of donors aged 60–69 years and in 73% of donors over 70 years of age. The physiological decrease in the number of nephrons and the associated reduction in GFR between the age groups of 18–29 and 70–75 years can amount to as much as 48%.27 It should be emphasized that, as Hommos et al reported,27 the mere demonstration of a reduced eGFR value indicating the existence of chronic kidney disease (<60 mL/min/1.73 m2), with a simultaneous lack of albuminuria, does not translate into an increased risk of death for a given age group.28 In a study comparing the results of eGFR and eClCr with reference methods of assessing GFR, the differences between eGFR and eClCr were, for most patients and most of the drugs used, so small that they did not necessitate significant changes in drug dosage; in principle, any method of GFR determination can be used.28 The exception seems to be patients with a body surface area that differs significantly from the standard one,29–31 where these differences may be significant. Moreover, it is worth noting the specificity of therapy in geriatric patients resulting from multimorbidity with subsequent polypharmacy, the more frequent occurrence of CKD compared with the general population, and the above-mentioned difference, significant for drug dosing, between the eGFR and eClCr results in patients with a body surface area significantly different from the mean population values. Taking all of this into consideration, the authors decided to undertake the present analyses in order to ascertain for which patients drug dosing should be determined using the eClCr values calculated with the Cockroft-Gault formula instead of the eGFR results obtained from the laboratory.
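As a hypothetical illustration of the misclassification question the study addresses, the same patient can fall into different CKD stages depending on which estimate is used. The G1–G5 cut-offs below are the standard KDIGO GFR categories, quoted here as background knowledge; the article itself refers only to the <60, 60–89 and <30 mL/min/1.73 m2 thresholds.

```r
# Assign a KDIGO GFR category to an estimated GFR (mL/min/1.73 m2)
ckd_stage <- function(gfr) {
  cut(gfr, breaks = c(-Inf, 15, 30, 45, 60, 90, Inf),
      labels = c("G5", "G4", "G3b", "G3a", "G2", "G1"), right = FALSE)
}

# A patient with eGFR 65 but eClCr 55 is staged G2 by one estimate
# and G3a by the other - the kind of discordance quantified in this study
ckd_stage(c(65, 55))
```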
null
null
null
null
Discussion
Strengths of this work include addressing the important topic of assessing renal function in the geriatric population and drawing attention to possible differences in drug dosing when using the Cockroft-Gault formula as compared with the MDRD equation. Limitations: the small size of the study group does not allow separating subgroups in terms of body weight and height, age or sex, or conducting a thorough assessment of the impact of these parameters on the difference D. This is particularly important in the context of body weight, a variable that appears in the CG formula but not in the MDRD equation. The spread of body weight in the examined patients is quite significant, from 39 kg to 125 kg, with a coefficient of variation of 24%, which, considering the total number of patients (115), makes it impossible to separate sufficiently large subgroups with similar body weight. In the light of the data obtained in the present study, in patients with a BMI or body surface area significantly different from the standard values it is worthwhile to calculate ClCr with the Cockroft-Gault formula and to adjust the dose of the drug according to the information provided by the producer. This approach is in line with the generally accepted recommendations for the prevention and slowing of the progression of CKD, where one of the medical measures is avoiding nephrotoxic effects of the applied therapy, and it meets one of the main ethical postulates in medicine, ie, "Primum non nocere."
[ "Sample and Methods", "Statistical Analyses", "Results", "Discussion" ]
[ "For this analysis, retrospective data from 115 patients hospitalized at the Department of Geriatrics of the University Hospital in Wrocław in 2020 were used. The results of 76 women and 39 men were included in the study. The minimum age of patients is 55 years, the maximum age is 93 years with a median of 79 years.\nThe study analyzed differences in the assessment of kidney function by comparing the results of eGFR assessed by MDRD method obtained from the laboratory with the calculated values of creatinine clearance using the Cockroft-Gault formula, and examining the correlation between the difference D = eGFR -eClCr and BMI and body surface.\nStatistical Analyses The normality of the distributions was tested with the Shapiro–Wilk test. The difference between eGFR and eClCr in the whole study group - the Wilcoxon test, and the “loess method: local regression fitting” method were used to determine the regression line. Calculations were made using the program: The R Project for Statistical Computing - R, v. 4.0.3- R Studio, v. 1.4.1103.\nThe study was approved by the Commission of Bioethics at Wroclaw Medical University (KB-58/2021). All patients gave their informed consent to participate in the study. The study was performed in accordance with the principles of the Declaration of Helsinki.\nThe normality of the distributions was tested with the Shapiro–Wilk test. The difference between eGFR and eClCr in the whole study group - the Wilcoxon test, and the “loess method: local regression fitting” method were used to determine the regression line. Calculations were made using the program: The R Project for Statistical Computing - R, v. 4.0.3- R Studio, v. 1.4.1103.\nThe study was approved by the Commission of Bioethics at Wroclaw Medical University (KB-58/2021). All patients gave their informed consent to participate in the study. The study was performed in accordance with the principles of the Declaration of Helsinki.", "The normality of the distributions was tested with the Shapiro–Wilk test. The difference between eGFR and eClCr in the whole study group - the Wilcoxon test, and the “loess method: local regression fitting” method were used to determine the regression line. Calculations were made using the program: The R Project for Statistical Computing - R, v. 4.0.3- R Studio, v. 1.4.1103.\nThe study was approved by the Commission of Bioethics at Wroclaw Medical University (KB-58/2021). All patients gave their informed consent to participate in the study. 
"The normality of the distributions was tested with the Shapiro–Wilk test. The difference between eGFR and eClCr in the whole study group was assessed with the Wilcoxon test, and the loess method (local regression fitting) was used to determine the regression lines. Calculations were made using R (The R Project for Statistical Computing, v. 4.0.3; R Studio, v. 1.4.1103).
The study was approved by the Commission of Bioethics at Wroclaw Medical University (KB-58/2021). All patients gave their informed consent to participate in the study. The study was performed in accordance with the principles of the Declaration of Helsinki.", "The study group consisted of 115 patients: 76 women and 39 men, aged 55–93 years, with a median of 79 years (Table 2).

Table 2. Results of the study
Parameter | Minimum | Maximum | Median | Range
Patients' age | 55 years | 93 years | 79 years | –
BMI | 16.28 kg/m2 | 43.87 kg/m2 | 26.73 kg/m2 | –
Body surface area | 1.23 m2 | 2.56 m2 | 1.83 m2 | –
Creatinine concentration | 0.49 mg/dl | 2.26 mg/dl | 0.9 mg/dl | 1.97 mg/dl
eGFR | 21 mL/min/1.73 m2 | 131 mL/min/1.73 m2 | 70 mL/min/1.73 m2 | 110 mL/min/1.73 m2
ClCr by C-G formula | 19 mL/min | 123.31 mL/min | 62.5 mL/min | 104.2 mL/min
Number of drugs used continuously | 2 | 13 | 7 | –
Number of drugs metabolized by the kidneys | 0 | 5 | 2 | –

The minimum GFR (estimated creatinine clearance, eClCr) calculated from the Cockroft-Gault formula was 19.11 mL/min and the maximum was 123.31 mL/min, with a median of 62.51 mL/min and a range of 104.2 mL/min (Table 2).
In addition, the analysis assessed the number of chronic medications used by the patients (short-term medications, reliever medications, vitamins, supplements and topical medications were not included). The minimum number of continuous medications was 2 and the maximum was 13, with a median of 7 (Table 2).
Moreover, the number of drugs requiring dose adjustment or discontinuation in the case of renal failure was assessed. The following drugs were considered nephrotoxic: antidiabetics (metformin, sulfonylureas), NOACs, NSAIDs, pregabalin, ACEIs, diuretics and, in one case, chronically used LMWH. The minimum number of these drugs used by patients was 0 and the maximum was 5, with a median of 2 (Table 2).
The differences in the assessment of kidney function were analyzed by comparing the eGFR results obtained from the laboratory using the MDRD method with the values of creatinine clearance calculated using the Cockroft-Gault formula. The correlation between the difference D = eGFR − eClCr and BMI and body surface area was also investigated.
In the entire group of patients (N = 115), a statistically significant difference was found between eGFR and eClCr (p < 0.0001). In the subgroup of patients (N = 45) with a lower baseline eGFR <60, there was no significant difference between eGFR and eClCr (p = 0.48), while in the subgroup of patients with a baseline eGFR ≥60 (N = 75), there was a significant difference between eGFR and eClCr (p < 0.00001).
The further analysis also demonstrated a statistical relationship between the difference D = eGFR − eClCr and both BMI and body surface area (Figures 1–4). There is a statistically strong (rho close to −1) inverse correlation between D and BMI and between D and body surface area. As the BMI or body surface area increases, the eClCr values start to exceed the eGFR values. The regression lines in Figures 1 and 2 were determined using the loess (local regression fitting) method. The points where the value determined by the local regression reaches its minimum, ie, where eGFR and eClCr are closest to each other, are 30.8 kg/m2 for BMI and 1.95 m2 for body surface area.
The further from these points, the greater the difference between the obtained results: with an increase in BMI or body surface area the eClCr values start to exceed eGFR, while for values below the regression minimum, both for BMI and for body surface area, eClCr is smaller than eGFR (Figures 1–4).

[Figure 1. The relationship between the absolute value of D (difference eGFR − eClCr), Y axis, and BMI, X axis. The red line represents the local regression.]
[Figure 2. The relationship between the absolute value of D (difference eGFR − eClCr), Y axis, and body surface area, X axis. The red line represents the local regression.]
[Figure 3. The relationship between the value D (difference eGFR − eClCr), Y axis, and BMI, X axis. The red line represents the local regression.]
[Figure 4. The relationship between the value D (difference eGFR − eClCr), Y axis, and body surface area, X axis. The red line represents the local regression.]

The analysis also covered the number of drugs used chronically by the patients, where the minimum number of continuously used drugs was 2 and the maximum 13, with a median of 7, including the number of drugs metabolized by the kidneys with the potential to worsen kidney function, especially if dosed incorrectly, from 0 to 5 with a median of 2. In order to prevent chronic kidney disease and slow its progression, it is necessary to carefully analyze the drugs used, avoid polytherapy and avoid nephrotoxic drugs. The data collected in our study included gender, age, weight, height, creatinine, eGFR, fCG and the number of drugs taken, including the number of drugs known to be nephrotoxic. Based on these data, BMI and body surface area (BSA) were calculated according to the formulas (an illustrative R sketch of these calculations is given after this section):
BMI = body mass (kg)/height² (m²) and BSA (Haycock) = 0.024265 × h^0.3964 × w^0.5378.
Multiple linear regression was performed for the dependent variable D = eGFR − fCG (where fCG denotes the creatinine clearance calculated with the Cockcroft-Gault formula), adjusted for confounders such as body mass (present in the Cockcroft-Gault formula but not in the MDRD equation), the number of medications taken and the number of nephrotoxic drugs taken.
Relationship between D and BMI: The multiple regression adjusted for body mass, the number of medications taken and the number of nephrotoxic medications taken showed that the difference D is not influenced by the number of medications taken. Adding this information to the regression model did not change the coefficients obtained in the model without it significantly (>20%). Moreover, the calculated coefficients related to the number of drugs did not have a statistically significant (p > 0.05) influence on the model result (the D value).
However, the patient's body weight was found to be an important factor (confounder). Introducing it into the model changed the value of the coefficient for BMI by 73%.
A statistically significant influence of body mass on the value of D was also found. It should be emphasized, however, that the patients' body weight (present in the Cockcroft-Gault formula but not in the MDRD equation, hence its consideration as a potential confounding factor) and the BMI value are directly related to each other. In order to better assess the impact of body weight and BMI on the difference D, a larger observation group should be gathered, which would allow for the assessment of confounding factors not only by statistical methods but also by matching and restriction. The relatively small size of the observation group is one of the limitations of our study (Figure 5).

[Figure 5. The relationship between the value D (difference eGFR − eClCr), Y axis, and body mass, X axis. The red line represents the local regression.]

The relationship between D and body surface area: As in the case of the relationship between D and BMI, the multiple regression taking into account the effect of body weight and the number of drugs taken on the difference D showed that including the number of drugs in the model does not significantly change the model parameters. Introducing the patient's body weight into the model changes the value of the coefficient for body surface area by 50%. Interestingly, it also causes the surface area to cease to be a statistically significant component of the model. Regarding the dependence of D on body surface area and body weight, it should also be noted that the patient's weight is directly related to the body surface area calculated using the Haycock formula.
Summing up, the obtained results allow us, in our opinion, to conclude that the difference D = eGFR − fCG is significantly influenced by BMI and body mass. The more the patient's BMI differs from approximately 31 kg/m2, or the body weight from 82.5 kg, the greater the absolute value of the difference D.
This can also be expressed as follows:
for BMI < 31 kg/m2 or body weight < 82.5 kg, the difference D > 0, meaning eGFR > fCG;
while for BMI > 31 kg/m2 or body weight > 82.5 kg, the difference D < 0, meaning eGFR < fCG.
The study showed that, based on the GFR estimated using both methods (C-G and MDRD), 29.2% and 32.4% of patients, respectively, were incorrectly assigned to a given stage of chronic kidney disease.", "Assessment of kidney function is very important for proper drug dosing, especially for adjusting the doses of drugs metabolized by the kidneys in order to avoid or minimize their nephrotoxic effects.26,28 The current recommendations in the Summary of Product Characteristics are mainly based on old pharmacokinetic studies from before the era of standardization of creatinine assessment and the common use of estimated GFR, when the estimated creatinine clearance (eClCr) calculated from the Cockroft-Gault equation was used to assess renal function.13,28,30
Historically, the use of different methods of creatinine determination resulted in method-dependent results that were difficult to compare and were inconsistently used in recommendations for drug dose adjustment in patients with kidney disease.
The current progress in the diagnosis of kidney diseases, the widespread use of the estimated GFR and the standardization of creatinine concentration assessment make a more precise assessment of kidney function possible.30–36
Perhaps, given the current state of knowledge, pharmacokinetic studies of new drugs should be performed on the basis of eGFR, and not eClCr, in order to ensure safer drug dosing, but it is difficult to imagine a situation in which the manufacturers of all drugs used so far would conduct further pharmacokinetic tests to update the dosing recommendations for existing drugs.30–36
The aim of this study was to analyze and compare the eGFR results with the estimated creatinine clearance calculated according to the Cockroft-Gault equation, and to assess the significance of the difference between these two results. According to the current data,28,33 regardless of the method used to estimate GFR, the differences between the obtained results should be insignificant and should not affect therapeutic decisions.
The present study revealed that, based on the GFR estimated using both methods (C-G and MDRD), 29.2% and 32.4% of patients, respectively, were incorrectly assigned to a given stage of chronic kidney disease.
A further practical conclusion is that in the population of 115 patients under observation, as many as 45 people (39% of the total number of subjects) had an initial eGFR <60 mL/min/1.73 m2. In some cases there are no clear data on the duration of these abnormalities, so if the time criterion (>3 months) is not met, it is not possible to diagnose chronic kidney disease in all these patients; nevertheless, this group requires periodic laboratory control in an outpatient setting, optimization of the treatment of comorbidities, and a careful analysis of the need for chronic medication, especially drugs metabolized in the kidneys, together with their appropriate dosing.
In the Froissart et al cohort study, data from 2095 adult patients were analyzed by comparing the results obtained from the Cockroft-Gault or MDRD equation with the reference creatinine clearance result. While in the overall analysis the results of eGFR and the reference creatinine clearance differed only slightly, regardless of the method used, the differences became more significant in the subgroup analysis taking into account age, sex and body weight. In most cases, the results of the Cockroft-Gault equation turned out to be less accurate. Froissart et al emphasize the need to design large multicentre studies to validate equations for assessing eGFR.38
Various researchers (including Cirillo et al) emphasize limitations in the use of both methods. The MDRD equation has not been validated for GFR > 60 mL/min because the underlying studies did not include healthy people.39 This is reflected in a study by Stevens et al comparing eGFR according to MDRD with GFR measured using radionuclides: eGFR from the MDRD formula was consistent with measured GFR only at GFR values <60 mL/min/1.73 m2.37
According to the recommendations of KDIGO, eGFR values between 60 and 89 mL/min/1.73 m2 are associated with the diagnosis of chronic kidney disease only in the presence of other markers of kidney damage.
The recommendations concerning seniors emphasize the physiological decline in GFR associated with age and the difficulty of distinguishing the causes of a reduced GFR when early-stage kidney disease coexists.1,5\nThe data show that up to 75% of people over 70 years of age may have GFR <90 mL/min/1.73 m2, and 25% <60 mL/min/1.73 m2.5,29 Patients with a GFR result of 60–89 mL/min/1.73 m2 should be tested for comorbidities, mainly cardiovascular disease, lipid disorders, and diabetes mellitus. With a low risk of cardiovascular disease, the main recommendations are regular checks of kidney function (at least once a year), urine sediment testing, blood pressure control, changing to an active lifestyle and avoiding drugs and contrast agents with nephrotoxic potential.5,29,30\nThe situation is slightly different in elderly patients with GFR < 60 mL/min/1.73 m2 or at high risk of developing or being diagnosed with cardiovascular disease. In addition to the above measures, they require assessment of CKD complications (anemia, endocrine disorders, nutritional deficiencies, osteoporosis, neuropathy), dietary and pharmacological treatment aimed at slowing the progression of CKD and mitigating cardiovascular risk factors, regularly repeated blood and urine sediment tests (up to 4 times a year) and nephrological consultation in case of disease progression.1,5\nThe aim of our study was to answer the question whether there is a need for an additional determination of GFR according to the Cockcroft-Gault equation in order to ensure optimal drug dosing and minimize nephrotoxicity. Due to the supposedly slight difference between the results obtained with the Cockcroft-Gault equation and MDRD, an attempt was made to distinguish patients in whom these differences might be more significant. It was observed that the more distant the BMI or body surface area result was from the so-called minimum of the regression, ie, the point where both results differ the least (in the above case BMI 30.8 kg/m2, and body surface area 1.93 m2), the bigger the differences between eGFR and eClCr.\nThis leads to the practical conclusion that in people with BMI or body surface area values significantly different from the standard ones, the eClCr values calculated according to the Cockcroft-Gault equation should be used in determining the doses of drugs metabolized by the kidneys. Of course, it should be emphasized that the eGFR value (most often calculated according to the MDRD formula) is still the standard for assessing kidney function and qualifying the patient to the appropriate stage of chronic kidney disease, while the Cockcroft-Gault formula is intended for determining the exact dose of drugs according to the manufacturer’s information in patients with renal failure, especially those with a body surface area or BMI significantly different from the standard ones.\nAccording to the recommendations of nephrology societies, the prevention and treatment of chronic kidney disease requires an active approach both on the part of the patient (increased physical activity, adequate diet, hydration, avoidance of self-administered drugs, especially those metabolised by the kidneys, including NSAIDs) and the doctor (examinations repeated at control intervals). 
These include blood and urine tests for the assessment of kidney function and the presence of albuminuria, especially in patients at risk of chronic kidney disease, optimal treatment of comorbidities, critical analysis of the indications for the use of contrast agents, analysis of the number of necessary drugs in chronic therapy with precise dosing of drugs metabolized by the kidneys, appropriate patient education, and referral to Nephrology Clinics of patients with eGFR < 30." ]
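To make the stage-misclassification figures above concrete, the sketch below (ours, not from the article; Python, with illustrative example values) assigns a KDIGO GFR category to a result and shows how two estimates for the same patient can land in different stages.

```python
# Hedged illustration: KDIGO GFR categories (G1-G5). Two estimates for the same
# patient can straddle a threshold and yield different CKD stages, which is how
# the 29.2% / 32.4% stage-misassignment rates quoted above can arise.
def gfr_category(gfr_ml_min_1_73m2: float) -> str:
    if gfr_ml_min_1_73m2 >= 90:
        return "G1"
    if gfr_ml_min_1_73m2 >= 60:
        return "G2"
    if gfr_ml_min_1_73m2 >= 45:
        return "G3a"
    if gfr_ml_min_1_73m2 >= 30:
        return "G3b"
    if gfr_ml_min_1_73m2 >= 15:
        return "G4"
    return "G5"

# Example (illustrative numbers only): eGFR 62 vs eClCr 55 for the same patient
print(gfr_category(62), gfr_category(55))  # G2 G3a, ie, different assigned stages
```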
[ null, null, null, null ]
[ "Introduction", "Sample and Methods", "Statistical Analyses", "Results", "Discussion" ]
[ "According to epidemiological data presented by the National Kidney Foundation, chronic kidney disease (CKD) affects about 11% of adults over 20 years of age,1 which accounts in the Polish population for about 4 mln people. According to the other registers, the patients’ number may be between 10% and 18% of the population,2–4 and in groups at risk of coexisting diabetes, hypertension, atherosclerosis, obesity, or in elderly patients, it may even reach 50% of the general population.3\nCKD is a chronic, progressive and initially asymptomatic disease, therefore some patients are unaware of the existing burden. Some of them will require renal replacement therapy in the future, so it is important to emphasize the importance of diagnosis, control and education as well as nephroprotective activities, including avoiding the use of nephrotoxic drugs and their correct dosage.1\nThe concept of chronic kidney disease as a set of symptoms associated with the damage or reduction of the number of nephrons was developed in 2002 by a group of American nephrologists associated with the Kidney Disease Outcome Quality Initiative (NKF K/DOQI).5 The current definition of chronic kidney disease, approved by Kidney Disease Improving Global Outcome (KDIGO) in 2005 and modified in 2012, assumes the demonstration of impaired renal function (in laboratory blood or urine tests) or their structure (abnormalities in tests imaging or histological examinations) resulting from permanent damage or depletion of nephrons due to diseases affecting the renal parenchyma.1,5\nThe glomerular filtration rate GFR and albuminuria are used to assess renal function.2,6 Glomerular filtration is a hypothetical amount of plasma purified from a specific substance in a given unit of time.7 A method that allows to obtain very accurate results is the assessment of the clearance using exogenous substances such as inulin or iohexal, but due to cost and invasiveness it is not widely used and is rather applied as a reference in scientific research.7\nThe results obtained in the determination of creatinine clearance, which as an endogenous substance formed in the muscles, is excreted both by glomerular filtration and up to 20% by tubular secretion, may be somewhat inaccurate, and the test requires a daily urine collection.8–10\nA huge progress in diagnostics was the introduction of complex mathematical equations allowing the estimation of GFR values on the basis of a single blood creatinine concentration, taking into account variables such as age, sex and body weight of the examined person. (Cockroft-Gault method) or age, gender and race (MDRD-simplified formula) and in the case of MDRD-6: age, gender, race, creatinine, urea and albumin concentration, enabling simple and quick results.3,7\nThe first equation to estimate creatinine clearance (CrCl) from blood creatinine concentration was the Cockroft-Gault formula, published in 1976, based on data from 249 male patients aged 18–92 years with measured creatinine clearance in the range of 30–130 mL/m2.11,12 It is based on age, weight, serum creatinine and gender (Table 1). The obtained results were compared with the means of two measurements of daily creatinine clearance. 
The method does not take into account the size of the body surface area.13\nTable 1 The Cockcroft-Gault Formula for Estimating Creatinine Clearance (CrCl)11,12\nMale: CrCl = ([140 − age] × weight in kg)/(serum creatinine × 72)\nFemale: CrCl = CrCl (male) × 0.85\nCurrently, the GFR value calculated according to the Cockcroft-Gault formula is not generally recommended for the diagnosis and monitoring of the course of chronic kidney disease.5,14 The Cockcroft-Gault equation underestimates the GFR value in lean and elderly people, and overestimates it in obese and overhydrated people.7\nIt should be emphasized, however, that the pharmacokinetic studies related to the registration of drugs, carried out to determine the dosage in patients with kidney disease, were based on the GFR calculated according to the Cockcroft-Gault formula.15–17\nThe MDRD formula was introduced in 1999 based on a data analysis of 1628 patients with chronic kidney disease (mean GFR 40 mL/min/1.73 m2) who were predominantly Caucasian and had no associated diabetes.18 This formula comes in two forms: the classic one, which takes into account such variables as age, sex, race, serum creatinine, urea and serum albumin (MDRD6), and a simplified one that takes into account only age, gender, race and creatinine concentration (the so-called MDRD4), which has found widespread use in the assessment of eGFR.18,19\nMDRD formula: GFR (mL/min/1.73 m2) = 175 × (Scr)^−1.154 × (Age)^−0.203 × (0.742 if female) × (1.212 if African American) (conventional units).20–22\nTo date, the MDRD equation has been evaluated in a number of further patient populations, including African Americans, Europeans and Asians, as well as diabetic patients with and without underlying kidney disease, kidney transplant patients, and potential kidney donors.14 Observations have shown that the MDRD formula is a good tool in everyday clinical practice to assess kidney function.3,5\nIt should be noted, however, that the MDRD formula has not been tested in persons without confirmed kidney damage (without demonstrated albuminuria), in pregnant women, in children, in the elderly >85 years of age, and in some ethnic groups, eg, Latinos. Nevertheless, the commonness and ease of obtaining the eGFR result contributed to a significant improvement in the diagnosis and monitoring of chronic kidney disease.16\nIt should be emphasized, though, that the MDRD formula underestimates the results in the GFR range of 60–120 mL/min. Therefore, for GFR > 60 mL/min/1.73 m2 the MDRD equation, by lowering the values, may lead to a misdiagnosis of CKD in healthy persons.7\nIn 2009, Levey et al developed another equation to evaluate the eGFR, namely CKD-EPI.23,24 This time, a much larger group of subjects was included: 8254 people, both healthy and with kidney disease. The same data as for the simplified version of MDRD (age, sex, race, creatinine concentration) are required to assess eGFR. It has been shown to have a very similar accuracy to MDRD in the case of GFR < 60 mL/min/1.73 m2, and a higher accuracy for GFR > 60 mL/min/1.73 m2.19,24\nIn line with the recommendations of KDIGO (2012), CKD-EPI (named after the Chronic Kidney Disease Epidemiology Collaboration) is currently the preferred method for determining the estimated GFR. 
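The two equations quoted above (Table 1 and the simplified MDRD formula) translate directly into code. A minimal sketch in Python, using the formulas as printed, in conventional units; function names and example values are ours, and this is an illustration, not clinical software:

```python
# Cockcroft-Gault and 4-variable MDRD, implemented exactly as quoted above.

def cockcroft_gault_clcr(age_years: float, weight_kg: float,
                         serum_cr_mg_dl: float, female: bool) -> float:
    """CrCl (mL/min) = ([140 - age] x weight in kg) / (serum creatinine x 72),
    multiplied by 0.85 for women."""
    clcr = ((140 - age_years) * weight_kg) / (serum_cr_mg_dl * 72)
    return clcr * 0.85 if female else clcr

def mdrd4_egfr(serum_cr_mg_dl: float, age_years: float,
               female: bool, african_american: bool) -> float:
    """Simplified (4-variable) MDRD, result in mL/min/1.73 m2."""
    egfr = 175.0 * serum_cr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if african_american:
        egfr *= 1.212
    return egfr

# Example (illustrative): a 79-year-old woman, 70 kg, serum creatinine 0.9 mg/dL
print(round(cockcroft_gault_clcr(79, 70, 0.9, female=True), 1))              # eClCr, mL/min
print(round(mdrd4_egfr(0.9, 79, female=True, african_american=False), 1))    # eGFR
```

Note that weight enters only the Cockcroft-Gault result, while MDRD normalizes to 1.73 m2 of body surface area; this asymmetry is what drives the body-size effects analyzed later in the article.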
However, some researchers dispute the above recommendation, pointing to the benefits of the commonly used MDRD formula and to the inaccuracy of CKD-EPI in the case of low GFR values.25\nIn everyday clinical practice, most determinations are still made using the MDRD equation. At the same time, it is known that the incidence of chronic kidney disease and renal failure increases in the elderly. The most common causes are the coexistence of diseases leading to kidney damage, such as atherosclerosis, arterial hypertension and type 2 diabetes, as well as the use of nephrotoxic drugs and prerenal damage associated with chronic fluid deficiency.2,6\nIt should also be mentioned here that these pathological processes overlap with the slow deterioration of the excretory function of the kidneys associated with the aging of organs, due to atherosclerotic lesions, atrophy of the renal tubules and glomerular sclerosis.2,26 After the age of 30 years, GFR decreases by ca. 1.0 mL/min yearly.2\nIn the physiological aging process, both macroscopic changes (thinning of the renal cortex, more frequent occurrence of simple cysts) and microscopic changes, like a decrease in the number of active nephrons on the basis of atherosclerosis, interstitial fibrosis and atrophy of the renal tubules, are observed.27\nBiopsy studies conducted in healthy kidney donors have shown that nephron sclerosis can be observed in only about 2.7% of donors aged <30 years, but in about 58% of donors aged 60–69 years and in 73% of donors over 70 years of age. The physiological decrease in the number of nephrons and the associated reduction in GFR, when people aged 70–75 years are compared with those aged 18–29 years, can amount to as much as 48%.27\nIt should be emphasized that, as Hommos et al reported,27 the mere demonstration of a reduced eGFR value indicating the existence of chronic kidney disease (<60 mL/min/1.73 m2), with a simultaneous lack of albuminuria, does not translate into an increased risk of death for a given age group.28\nIn a study comparing the results of eGFR and eClCr with the reference methods of assessing GFR, in most patients and for most of the drugs used the differences between the eGFR and eClCr results were so small that they did not necessitate significant changes in drug dosage. In principle, any method of GFR determination can be used.28 The exception seems to be patients with a body surface area that differs significantly from the standard one,29–31 where these differences may be significant.\nMoreover, it is worth noting the specificity of therapy in geriatric patients, resulting from multimorbidity with subsequent polypharmacy, the more frequent occurrence of CKD compared to the general population, and the above-mentioned difference, significant for drug dosing, between the eGFR and eClCr results in patients with a body surface area significantly different from the mean population values. Taking all of this into consideration, the authors decided to undertake the present analyses in order to ascertain for which patients drug dosing should be determined using the eClCr values calculated with the Cockcroft-Gault formula instead of the eGFR results obtained from the laboratory.", "For this analysis, retrospective data from 115 patients hospitalized at the Department of Geriatrics of the University Hospital in Wrocław in 2020 were used. The results of 76 women and 39 men were included in the study. 
The minimum age of the patients was 55 years and the maximum 93 years, with a median of 79 years.\nThe study analyzed differences in the assessment of kidney function by comparing the eGFR results assessed by the MDRD method and obtained from the laboratory with the values of creatinine clearance calculated using the Cockcroft-Gault formula, and by examining the correlation between the difference D = eGFR - eClCr and BMI and body surface area.\nStatistical Analyses: The normality of the distributions was tested with the Shapiro–Wilk test. The difference between eGFR and eClCr in the whole study group was assessed with the Wilcoxon test, and the LOESS method (local regression fitting) was used to determine the regression line. Calculations were made using R, v. 4.0.3 (The R Project for Statistical Computing) and RStudio, v. 1.4.1103.\nThe study was approved by the Commission of Bioethics at Wroclaw Medical University (KB-58/2021). All patients gave their informed consent to participate in the study. The study was performed in accordance with the principles of the Declaration of Helsinki.", "The normality of the distributions was tested with the Shapiro–Wilk test. The difference between eGFR and eClCr in the whole study group was assessed with the Wilcoxon test, and the LOESS method (local regression fitting) was used to determine the regression line. Calculations were made using R, v. 4.0.3 (The R Project for Statistical Computing) and RStudio, v. 1.4.1103.\nThe study was approved by the Commission of Bioethics at Wroclaw Medical University (KB-58/2021). All patients gave their informed consent to participate in the study. The study was performed in accordance with the principles of the Declaration of Helsinki.", "The study group consisted of 115 patients, 76 women and 39 men, in the age range of 55–93 years, with a median of 79 years (Table 2).\nTable 2 Results of the Study\nParameter | Minimum | Maximum | Median | Range\nPatients’ age | 55 years | 93 years | 79 years | –\nBMI | 16.28 kg/m2 | 43.87 kg/m2 | 26.73 kg/m2 | –\nBody surface area | 1.23 m2 | 2.56 m2 | 1.83 m2 | –\nCreatinine concentration | 0.49 mg/dl | 2.26 mg/dl | 0.9 mg/dl | 1.97 mg/dl\neGFR | 21 mL/min/1.73 m2 | 131 mL/min/1.73 m2 | 70 mL/min/1.73 m2 | 110 mL/min/1.73 m2\nClCr by C-G formula | 19 mL/min | 123.31 mL/min | 62.5 mL/min | 104.2 mL/min\nNumber of drugs used continuously | 2 | 13 | 7 | –\nNumber of drugs metabolized by the kidneys | 0 | 5 | 2 | –\nThe minimum GFR (estimated creatinine clearance, eClCr) calculated from the Cockcroft-Gault formula was 19.11 mL/min, and the maximum was 123.31 mL/min, with a median of 62.51 mL/min and a range of 104.2 mL/min (Table 2).\nIn addition, the analysis assessed the number of chronic medications used by patients (short-term medications, reliever medications, vitamins, supplements and topical medications were not included). 
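The analysis pipeline described in the Methods above (Shapiro–Wilk, paired Wilcoxon, LOESS) was run in R; a rough Python equivalent might look as follows. Synthetic data stand in for the patient records, and all values are illustrative:

```python
# Hedged re-creation of the described analysis (the study itself used R).
import numpy as np
from scipy.stats import shapiro, wilcoxon
import statsmodels.api as sm

rng = np.random.default_rng(0)
egfr = rng.normal(70, 25, 115)            # synthetic eGFR values, one per patient
eclcr = egfr + rng.normal(-5, 12, 115)    # synthetic Cockcroft-Gault values
bmi = rng.normal(27, 5, 115)              # synthetic BMI values

d = egfr - eclcr                                      # the difference D = eGFR - eClCr
print("Shapiro-Wilk p:", shapiro(d).pvalue)           # normality check on D
print("Wilcoxon p:", wilcoxon(egfr, eclcr).pvalue)    # paired signed-rank test

# LOESS (local regression) of D against BMI, as in the article's figures;
# the result is an array of (sorted x, fitted y) pairs for the red line.
smoothed = sm.nonparametric.lowess(d, bmi, frac=0.6)
```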
The minimum number of continuously used medications was 2 and the maximum was 13, with a median of 7 (Table 2).\nMoreover, the number of drugs requiring dose adjustment or discontinuation in the case of renal failure was assessed. The following drugs were considered nephrotoxic: antidiabetic agents (metformin, sulfonylureas), NOACs, NSAIDs, pregabalin, ACEIs, diuretics and, in one case, chronically used LMWH. The minimum number of these drugs used by patients was 0 and the maximum was 5, with a median of 2 (Table 2).\nThe differences in the assessment of kidney function were analyzed by comparing the eGFR results obtained from the laboratory using the MDRD method with the values of creatinine clearance calculated using the Cockcroft-Gault formula. The correlation between the difference D = eGFR - eClCr and BMI and body surface area was also investigated.\nIn the entire group of patients (N = 115), a statistically significant difference was found between eGFR and eClCr (p < 0.0001). In the subgroup of patients with a lower baseline eGFR (<60; N = 45), there was no significant difference between eGFR and eClCr (p = 0.48), while in the subgroup of patients with baseline eGFR ≥60 (N = 75), there was a significant difference between eGFR and eClCr (p < 0.00001).\nThe further analysis also demonstrated a statistical relationship between the difference D = eGFR - eClCr and BMI and body surface area (Figures 1–4). There is a statistically strong (rho close to −1) inverse correlation between D and BMI and between D and body surface area. As the BMI or body surface area increases, the eClCr values start to exceed the eGFR values. The regression lines in Figures 1 and 2 were determined using the LOESS (local regression fitting) method. The points where the value determined by the local regression reaches its minimum, ie, where eGFR and eClCr are the closest to each other, are 30.8 kg/m2 for BMI and 1.95 m2 for body surface area. The further from the above-mentioned points, the greater the difference between the obtained results; with an increase in BMI or body surface area, the eClCr values start to exceed the eGFR values, while in the range of values lower than the regression minimum, both for BMI and for body surface area, the eClCr value is smaller than the eGFR value (Figures 1–4).\nFigure 1 The relationship between the absolute value of D (difference eGFR - eClCr) - Y axis, and BMI - X axis. The red line represents the local regression.\nFigure 2 The relationship between the absolute value of D (difference eGFR - eClCr) - Y axis, and body surface area - X axis. The red line represents the local regression.\nFigure 3 The relationship between the value D (difference eGFR - eClCr) - Y axis, and BMI - X axis. The red line represents the local regression.\nFigure 4 The relationship between the value D (difference eGFR - eClCr) - Y axis, and body surface area - X axis. The red line represents the local regression.\n
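The "regression minimum" described above (the BMI at which the fitted |D| curve is lowest, 30.8 kg/m2 in this cohort) can be located programmatically. A hedged sketch with synthetic toy data, not the study data:

```python
# Locate the minimum of a LOESS fit of |D| against BMI.
# Synthetic V-shaped toy data; in the article this point was ~30.8 kg/m2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
bmi = rng.uniform(16, 44, 115)
abs_d = np.abs(1.8 * (bmi - 30.8)) + rng.normal(0, 3, 115)   # toy |eGFR - eClCr|

fit = sm.nonparametric.lowess(abs_d, bmi, frac=0.5)   # rows of (sorted x, fitted y)
bmi_at_minimum = fit[np.argmin(fit[:, 1]), 0]         # x where the fitted curve is lowest
print(round(bmi_at_minimum, 1))                       # close to 30.8 for this toy data
```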
The analysis also covered the number of drugs used chronically by patients: the minimum number of constantly used drugs was 2 and the maximum 13, with a median of 7, including from 0 to 5 drugs metabolized by the kidneys (median 2), which have the potential to deteriorate kidney function, especially in the case of incorrect dosing. In order to prevent chronic kidney disease and slow its progression, it is necessary to carefully analyze the drugs used, avoid polytherapy and avoid nephrotoxic drugs. The data collected in our study included: gender, age, weight, height, creatinine, eGFR, fCG, and the number of drugs taken, including the number of drugs known to be nephrotoxic. Based on the above data, BMI and body surface area (BSA) were calculated according to the formulas:\nBMI = body mass (kg)/height^2 (m^2) and BSA by Haycock = 0.024265 × height (cm)^0.3964 × weight (kg)^0.5378.\nMultiple linear regression was performed for the dependent variable D = eGFR – fCG, adjusted for confounders such as body mass (present in the Cockcroft-Gault formula but not in the MDRD formula), the number of medications taken, and the number of nephrotoxic drugs taken.\nRelationship between D and BMI: The analysis of the results of multiple regression adjusted for body mass, the number of medications taken, and the number of nephrotoxic medications taken showed that the difference D is not influenced by the number of medications taken. Adding this information to the regression model did not change the coefficients obtained in the model without this information significantly (by more than 20%). Moreover, the calculated coefficients related to the number of drugs did not have a statistically significant (p > 0.05) influence on the model result (the D value).\nHowever, it was found that the patient’s body weight is an important factor (confounder). Introducing it to the model changed the value of the coefficient for BMI by 73%. A statistically significant influence of the body mass on the value of D was also found. It should be emphasized, however, that the patients’ body weight – although absent in the MDRD formula, hence its consideration as a potential confounding factor – and the BMI value are directly related to each other. In order to better assess the impact of body weight and BMI on the difference “D”, a larger observation group should be gathered, which would allow for the assessment of confounding factors not only by statistical methods, but also by matching and restriction. The relatively small size of the observation group is one of the limitations of our study (Figure 5).\nFigure 5 The relationship between the value D (difference eGFR - eClCr) - Y axis, and body mass - X axis. The red line represents the local regression.\n
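The body-size formulas quoted above and the confounder-adjusted regression for D can be sketched as follows. This is our illustration in Python with synthetic data; the column names and the toy outcome are assumptions, not the study dataset:

```python
# Hedged sketch: BMI and Haycock BSA as quoted above, plus a multiple linear
# regression of D = eGFR - fCG on BMI with candidate confounders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bmi(weight_kg, height_m):
    """BMI = body mass (kg) / height^2 (m^2)."""
    return weight_kg / height_m ** 2

def bsa_haycock(weight_kg, height_cm):
    """Haycock: BSA (m2) = 0.024265 x height(cm)^0.3964 x weight(kg)^0.5378."""
    return 0.024265 * height_cm ** 0.3964 * weight_kg ** 0.5378

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "weight": rng.normal(75, 15, 115),       # kg, synthetic
    "height_m": rng.normal(1.65, 0.1, 115),  # m, synthetic
    "n_drugs": rng.integers(2, 14, 115),     # drugs taken, synthetic
})
df["bmi"] = bmi(df["weight"], df["height_m"])
df["D"] = 20 - 0.6 * df["weight"] + rng.normal(0, 8, 115)   # toy outcome

# Compare coefficients with and without weight to judge confounding,
# as done in the article (a >20% coefficient change was the criterion).
model = smf.ols("D ~ bmi + weight + n_drugs", data=df).fit()
print(model.params, model.pvalues, sep="\n")
```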
The relationship between D and body surface area: As in the case of the relationship between D and BMI, the analysis of the results of multiple regression taking into account the effect of body weight and the number of drugs taken on the difference “D” showed that consideration of the number of drugs in the model does not significantly change the parameters of the model. Introducing the patient’s body weight to the model changes the value of the coefficient for the body surface area by 50%. Interestingly, it also causes the body surface area to cease to be a statistically significant component of the model. In the case of the dependence of D on the body surface area and body weight, it should also be noted that the patient’s weight is directly related to the patient’s body surface area calculated using the Haycock formula.\nSumming up, the obtained results allow us, in our opinion, to conclude that the difference D = eGFR – fCG is significantly influenced by BMI and body mass. The more the patient’s BMI differs from approximately 31 kg/m2, or the body weight from 82.5 kg, the greater the absolute value of the difference D.\nThis can also be expressed as follows:\nfor BMI < 31 kg/m2 or body weight <82.5 kg, the difference D > 0, meaning eGFR > fCG;\nwhile for BMI > 31 kg/m2 or body weight >82.5 kg, the difference D < 0, meaning eGFR < fCG.\nThe study showed that based on the estimated GFR using both methods (C-G and MDRD), 29.2% and 32.4% of patients, respectively, were incorrectly assigned to a given stage of chronic kidney disease.", "Assessment of kidney function is very important for proper drug dosing, especially for adjusting the doses of drugs metabolized by the kidneys in order to avoid or minimize their nephrotoxic effects.26,28 The current recommendations in the Summary of Product Characteristics are mainly based on old pharmacokinetic studies, performed before the era of standardization of creatinine assessment and the common use of estimated GFR, when the estimated creatinine clearance (eClCr) calculated from the Cockcroft-Gault equation was used to assess renal function.13,28,30\nHistorically, the use of different methods of measuring creatinine concentration resulted in method-dependent results which were difficult to compare with one another, yet were inconsistently used in recommendations for drug dose adjustment in patients with kidney disease. The current progress in the diagnosis of kidney diseases, the widespread use of the estimated GFR filtration coefficient and the standardization of creatinine concentration assessment make a more precise assessment of kidney function possible.30–36\nPerhaps, according to the current state of knowledge, pharmacokinetic studies of new drugs should be performed on the basis of eGFR, and not eClCr, in order to ensure safer dosing of drugs; however, it is difficult to imagine a situation in which the manufacturers of all drugs used so far would conduct new pharmacokinetic tests to update the dosing recommendations for existing drugs.30–36\nThe aim of this study was to analyze and compare the eGFR results with the estimated creatinine clearance calculated according to the Cockcroft-Gault equation, and to assess the significance of the difference between these two results. According to the current data,28,33 regardless of the method used to estimate GFR, the differences between the obtained results should be insignificant and should not imply therapeutic decisions.\nThe present study results revealed that based on the estimated GFR using both methods (C-G and MDRD), 29.2% and 32.4% of patients, respectively, were incorrectly assigned to a given stage of chronic kidney disease.\nA further practical conclusion is that in the population of 115 patients who underwent observation, in as many as 45 people (39% of the total number of respondents) the initial eGFR was <60 mL/min/1.73 m2. 
In some cases, there are no clear data on how long the lesions had been present, so if the time criterion (>3 months) is not met, it is not possible to diagnose chronic kidney disease in all these patients; nevertheless, this group requires periodic laboratory control in an outpatient setting, optimization of the treatment of comorbidities, and a careful analysis of the need for chronic drugs, especially drugs metabolized in the kidneys, together with their appropriate dosage.\nIn Froissart et al’s cohort study, data from 2095 adult patients were analyzed by comparing the results obtained from the Cockcroft-Gault or MDRD equation with the reference creatinine clearance result. While in the overall analysis the results of eGFR and the reference creatinine clearance differed only slightly, regardless of the method used, the differences became more significant in the subgroup analysis taking into account age, sex and body weight. In most cases, the results of the Cockcroft-Gault equation turned out to be less accurate. Froissart et al emphasize the need to design large multicentre studies to validate equations used to assess eGFR.38\nVarious researchers (including Cirillo et al) emphasize limitations in the use of both methods. The MDRD equation has not been validated for GFR > 60 mL/min because the underlying studies did not include healthy people.39 This is reflected in a study by Stevens et al comparing eGFR according to MDRD with GFR measured after the use of radionuclides. It was shown that eGFR from the MDRD formula was compatible with measured GFR only at GFR values <60 mL/min/1.73 m2.37\nAccording to the recommendations of KDIGO, the determination of eGFR values between 60 and 89 mL/min/1.73 m2 is associated with the diagnosis of chronic kidney disease only in the presence of other markers of kidney damage. The recommendations concerning seniors emphasize the physiological decline in GFR associated with age and the difficulty of distinguishing the causes of a reduced GFR when early-stage kidney disease coexists.1,5\nThe data show that up to 75% of people over 70 years of age may have GFR <90 mL/min/1.73 m2, and 25% <60 mL/min/1.73 m2.5,29 Patients with a GFR result of 60–89 mL/min/1.73 m2 should be tested for comorbidities, mainly cardiovascular disease, lipid disorders, and diabetes mellitus. With a low risk of cardiovascular disease, the main recommendations are regular checks of kidney function (at least once a year), urine sediment testing, blood pressure control, changing to an active lifestyle and avoiding drugs and contrast agents with nephrotoxic potential.5,29,30\nThe situation is slightly different in elderly patients with GFR < 60 mL/min/1.73 m2 or at high risk of developing or being diagnosed with cardiovascular disease. In addition to the above measures, they require assessment of CKD complications (anemia, endocrine disorders, nutritional deficiencies, osteoporosis, neuropathy), dietary and pharmacological treatment aimed at slowing the progression of CKD and mitigating cardiovascular risk factors, regularly repeated blood and urine sediment tests (up to 4 times a year) and nephrological consultation in case of disease progression.1,5\nThe aim of our study was to answer the question whether there is a need for an additional determination of GFR according to the Cockcroft-Gault equation in order to ensure optimal drug dosing and minimize nephrotoxicity. 
Due to the supposedly slight difference between the results obtained with the Cockcroft-Gault equation and MDRD, an attempt was made to distinguish patients in whom these differences might be more significant. It was observed that the more distant the BMI or body surface area result was from the so-called minimum of the regression, ie, the point where both results differ the least (in the above case BMI 30.8 kg/m2, and body surface area 1.93 m2), the bigger the differences between eGFR and eClCr.\nThis leads to the practical conclusion that in people with BMI or body surface area values significantly different from the standard ones, the eClCr values calculated according to the Cockcroft-Gault equation should be used in determining the doses of drugs metabolized by the kidneys. Of course, it should be emphasized that the eGFR value (most often calculated according to the MDRD formula) is still the standard for assessing kidney function and qualifying the patient to the appropriate stage of chronic kidney disease, while the Cockcroft-Gault formula is intended for determining the exact dose of drugs according to the manufacturer’s information in patients with renal failure, especially those with a body surface area or BMI significantly different from the standard ones.\nAccording to the recommendations of nephrology societies, the prevention and treatment of chronic kidney disease requires an active approach both on the part of the patient (increased physical activity, adequate diet, hydration, avoidance of self-administered drugs, especially those metabolised by the kidneys, including NSAIDs) and the doctor (examinations repeated at control intervals). These include blood and urine tests for the assessment of kidney function and the presence of albuminuria, especially in patients at risk of chronic kidney disease, optimal treatment of comorbidities, critical analysis of the indications for the use of contrast agents, analysis of the number of necessary drugs in chronic therapy with precise dosing of drugs metabolized by the kidneys, appropriate patient education, and referral to Nephrology Clinics of patients with eGFR < 30." ]
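The practical conclusion above can be read as a simple decision aid. A hedged sketch only: the ~31 kg/m2 threshold is a single-cohort estimate from this study, not a validated cutoff, and the width of the "near the minimum" band below is our own assumption:

```python
# Hedged decision-aid sketch based on the article's practical conclusion.
def dosing_estimate_choice(bmi_kg_m2: float,
                           regression_minimum: float = 31.0,
                           band: float = 2.0) -> str:
    """Suggest which renal-function estimate to emphasize for drug dosing.
    'band' (the tolerance around the regression minimum) is an illustrative
    assumption, not a value reported in the study."""
    if abs(bmi_kg_m2 - regression_minimum) < band:
        # near the regression minimum the two estimates nearly coincide
        return "either estimate (eGFR ~ eClCr)"
    # far from the minimum the estimates diverge; the authors favour
    # Cockcroft-Gault eClCr for dose adjustment per product information
    return "eClCr (Cockcroft-Gault)"

print(dosing_estimate_choice(22.5))   # lean patient -> eClCr (Cockcroft-Gault)
print(dosing_estimate_choice(30.5))   # near the minimum -> either estimate
```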
[ "intro", null, null, null, null ]
[ "geriatric patients", "kidney function", "drug dosing" ]
Introduction: According to epidemiological data presented by the National Kidney Foundation, chronic kidney disease (CKD) affects about 11% of adults over 20 years of age,1 which accounts in the Polish population for about 4 mln people. According to the other registers, the patients’ number may be between 10% and 18% of the population,2–4 and in groups at risk of coexisting diabetes, hypertension, atherosclerosis, obesity, or in elderly patients, it may even reach 50% of the general population.3 CKD is a chronic, progressive and initially asymptomatic disease, therefore some patients are unaware of the existing burden. Some of them will require renal replacement therapy in the future, so it is important to emphasize the importance of diagnosis, control and education as well as nephroprotective activities, including avoiding the use of nephrotoxic drugs and their correct dosage.1 The concept of chronic kidney disease as a set of symptoms associated with the damage or reduction of the number of nephrons was developed in 2002 by a group of American nephrologists associated with the Kidney Disease Outcome Quality Initiative (NKF K/DOQI).5 The current definition of chronic kidney disease, approved by Kidney Disease Improving Global Outcome (KDIGO) in 2005 and modified in 2012, assumes the demonstration of impaired renal function (in laboratory blood or urine tests) or their structure (abnormalities in tests imaging or histological examinations) resulting from permanent damage or depletion of nephrons due to diseases affecting the renal parenchyma.1,5 The glomerular filtration rate GFR and albuminuria are used to assess renal function.2,6 Glomerular filtration is a hypothetical amount of plasma purified from a specific substance in a given unit of time.7 A method that allows to obtain very accurate results is the assessment of the clearance using exogenous substances such as inulin or iohexal, but due to cost and invasiveness it is not widely used and is rather applied as a reference in scientific research.7 The results obtained in the determination of creatinine clearance, which as an endogenous substance formed in the muscles, is excreted both by glomerular filtration and up to 20% by tubular secretion, may be somewhat inaccurate, and the test requires a daily urine collection.8–10 A huge progress in diagnostics was the introduction of complex mathematical equations allowing the estimation of GFR values on the basis of a single blood creatinine concentration, taking into account variables such as age, sex and body weight of the examined person. (Cockroft-Gault method) or age, gender and race (MDRD-simplified formula) and in the case of MDRD-6: age, gender, race, creatinine, urea and albumin concentration, enabling simple and quick results.3,7 The first equation to estimate creatinine clearance (CrCl) from blood creatinine concentration was the Cockroft-Gault formula, published in 1976, based on data from 249 male patients aged 18–92 years with measured creatinine clearance in the range of 30–130 mL/m2.11,12 It is based on age, weight, serum creatinine and gender (Table 1). The obtained results were compared with the means of two measurements of daily creatinine clearance. 
The method does not take into account the size of the body surface area.13 Table 1The Cockcroft-Gault Formula for Estimating Creatinine Clearance (CrCl)11,12CrClMale([140-age] × weight in kg)/(serum creatinine × 72)FemaleCrCl (male) × 0.85 The Cockcroft-Gault Formula for Estimating Creatinine Clearance (CrCl)11,12 Currently, the GFR value calculated according to the Cockroft-Gault formula is not generally recommended for the diagnosis and monitoring a course of chronic kidney disease.5,14 The Cockroft-Gault equation underestimates the GFR value in lean and elderly people, and overstates the obese and hyperhydrated people.7 It should be, yet, emphasized that on the basis of the GFR calculated according to the Cockroft-Gault formula the pharmacokinetic studies related to the registration of drugs were carried out to determine the dosage in patients with kidney disease.15–17 The MDRD formula was introduced in 1999 based on data analysis of 1628 patients with chronic kidney disease (mean GFR 40 mL/min/1.73 m2) who were predominantly Caucasian and had no associated diabetes.18 This formula comes in two forms - the classic one, which takes into account such variables as age, sex, race, serum creatinine, urea and serum albumin (MDRD6), and a simplified one that takes into account only age, gender, race and creatinine concentration (the so-called MDRD4), which has found widespread use in the assessment of e GFR.18,19 MDRD formula: GFR (mL/min/1.73 m2) = 175 × (Scr)−1.154 × (Age)−0.203 × (0.742 if female) × (1.212 if African American) (conventional units).20–22 To date, the MDRD equation has been evaluated in a number of further patient populations, including: African Americans, Europeans, and Asians, as well as diabetic patients with and without underlying kidney disease, kidney transplant patients, and potential kidney donors.14 Observations have shown that the MDRD formula is a good tool in everyday clinical practice to assess kidney function.3,5 It should be noted, however, that the MDRD formula has not been tested in persons without confirmed kidney damage (without demonstrated albuminuria), in pregnant women, in children, in the elderly >85 years of age, and in some ethnic groups, eg, Latinos. Nevertheless, the commonness and ease of obtaining the eGFR result contributed to a significant improvement in the diagnosis and monitoring of chronic kidney disease.16 However, it should be emphasized that the MDRD formula underestimates the results in the GFR range of 60–120 mL/min range. Therefore, the MDRD equation for GFR > 60 mL/min/1.73 m2 by lowering the values may lead to a misdiagnosis of CKD in healthy persons.7 In 2009, Levey at all developed another equation to evaluate the eGFR, namely CKD-EPI.23,24 This time, a much larger group of subjects was included – 8254 both healthy and those with kidney disease. The same data as for the simplified version of MDRD (age, sex, race, creatinine concentration) are required to assess e GFR. It has been shown that it has a very similar accuracy compared to MDRD in the case of GFR < 60 mL/min/ 1.73 m2, and higher accuracy for GFR > 60 mL/min/1.73 m2.19,24 In line with the recommendations of KDIGO (2012), CKD-EPI (named after the Chronic Kidney Disease Epidemiology Collaborative) is currently the preferred method for determining the estimated GFR. 
However, some researchers discuss the above recommendation, proving the benefits of the commonly used MDRD formula and the CDI-EPI inaccuracy in the case of low GFR values.25 In everyday clinical practice, most determinations are still made using the MDRD equation. At the same time, it is known that the incidence of chronic kidney disease and renal failure increases in the elderly. The most common causes are the coexistence of diseases leading to kidney damage, such as atherosclerosis, arterial hypertension, diabetes type 2, also the use of nephrotoxic drugs and prerenal damage associated with chronic fluid deficiency.2,6 It should also be mentioned here that the pathological processes overlap with the slow deterioration of the excretory function of the kidneys associated with the aging process of organs due to atherosclerotic lesions, atrophy of the renal tubules and glomerular sclerosis.2,26 After the age of 30 years, there is a decrease in GFR ca. 1.0 mL/min yearly.2 In the physiological aging process, both macroscopic changes (thinning of the renal cortex, more frequent occurrence of simple cysts) and microscopic changes, like a decrease in the number of active nephrons on the basis of atherosclerosis, interstitial fibrosis and atrophy of the renal tubules, are observed.27 Biopsy studies conducted in healthy kidney donors have shown that nephron sclerosis can be observed only in about 2.7% of donors aged <30 years, but in about 58% in the case of donors aged 60–69 years and in 73% of donors over 70 years of age life. The physiological decrease in the number of nephrons and the associated reduced GFR index, compared to people aged 18–29 and 70–75 years of age, can amount to as much as 48%.27 It should be emphasized that, as Hommos et al reported,27 the mere demonstration of a reduced eGFR value, indicating the existence of chronic kidney disease (<60 mL/min/1.73 m2), with the simultaneous lack of albuminuria does not translate into an increased risk of death for a given age group.28 In a study comparing the results of eGFR and eClCr with the reference methods of assessing GFR in most patients and in most of the drugs used, the differences obtained in the results of eGFR and eCrCl are so small that they do not lead to the need for significant changes in drug dosage. In principle, any method of GFR determination can be used.28 The exception seems to be patients with a body surface that differs significantly from the standard one,29–31 where these differences may be significant. Moreover, it is worth noting the specificity of therapy in geriatric patients resulting from multimorbidity with subsequent polypharmacy, more frequent occurrence of CKD, as compared to the general population, as well as the above-mentioned difference, significant for drug dosing, between the eGFR and eCrCl results in patients with body surface area significantly different from the mean population values. Taking all of this into consideration, the authors decided to undertake the present analyses in order to ascertain for which patient drug dosing should be determined using the eClCr values calculated with the Cockroft-Gault formula instead of the eGFR results obtained from the laboratory. Sample and Methods: For this analysis, retrospective data from 115 patients hospitalized at the Department of Geriatrics of the University Hospital in Wrocław in 2020 were used. The results of 76 women and 39 men were included in the study. 
The minimum age of patients is 55 years, the maximum age is 93 years with a median of 79 years. The study analyzed differences in the assessment of kidney function by comparing the results of eGFR assessed by MDRD method obtained from the laboratory with the calculated values of creatinine clearance using the Cockroft-Gault formula, and examining the correlation between the difference D = eGFR -eClCr and BMI and body surface. Statistical Analyses The normality of the distributions was tested with the Shapiro–Wilk test. The difference between eGFR and eClCr in the whole study group - the Wilcoxon test, and the “loess method: local regression fitting” method were used to determine the regression line. Calculations were made using the program: The R Project for Statistical Computing - R, v. 4.0.3- R Studio, v. 1.4.1103. The study was approved by the Commission of Bioethics at Wroclaw Medical University (KB-58/2021). All patients gave their informed consent to participate in the study. The study was performed in accordance with the principles of the Declaration of Helsinki. The normality of the distributions was tested with the Shapiro–Wilk test. The difference between eGFR and eClCr in the whole study group - the Wilcoxon test, and the “loess method: local regression fitting” method were used to determine the regression line. Calculations were made using the program: The R Project for Statistical Computing - R, v. 4.0.3- R Studio, v. 1.4.1103. The study was approved by the Commission of Bioethics at Wroclaw Medical University (KB-58/2021). All patients gave their informed consent to participate in the study. The study was performed in accordance with the principles of the Declaration of Helsinki. Statistical Analyses: The normality of the distributions was tested with the Shapiro–Wilk test. The difference between eGFR and eClCr in the whole study group - the Wilcoxon test, and the “loess method: local regression fitting” method were used to determine the regression line. Calculations were made using the program: The R Project for Statistical Computing - R, v. 4.0.3- R Studio, v. 1.4.1103. The study was approved by the Commission of Bioethics at Wroclaw Medical University (KB-58/2021). All patients gave their informed consent to participate in the study. The study was performed in accordance with the principles of the Declaration of Helsinki. Results: A study group was constituted of 115 patients, of whom 76 were women and 39 men at the age range of 55–93 years, with a median of 79 years (Table 2)Table 2Results of the StudyResultsMinimum ValueMaximum ValueMedianRangePatients’ age55 years93 years79 yearsBMI value16.28 kg/m243.87 kg/ m226.73 kg/ m2Body surface area1.23 m22.56 m21.83 m2Creatinine concentration0.49 mg/dl2.26 mg/dl0.9 mg/dl1.97 mg/dleGFR21 mL/min/1.73 m2131 mL/min/1.73 m270 mL/min/1,73 m2110 mL/min/1.73 m2ClCr by C-G formula19 mL/min123.31 mL/min62.5 mL/min104.2 mL/minNumber of drugs used continuously2137Number of drugs metabolized by the kidneys052 Results of the Study The minimum GFR (estimated creatinine clearance eClCr) calculated from the Cockroft-Gault formula was 19.11 mL/min, and the maximum was 123.31 mL/min with a median of 62.51 mL/min, and a range of 104.2 mL/min (Table 2). In addition, the analysis assessed the number of chronic medications used by patients (short-term medications, reliever medications, vitamins, supplements and topical medications were not included). 
The minimum number of continuous medications was 2 and the maximum was 13, with a median of 7 (Table 2). Moreover, the number of drugs requiring dose adjustment or drug discontinuation in the case of renal failure was assessed. The following drugs: antidiabetic (metformin, sulfonylureas), NOAC drugs, NSAIDs, pregabalin, ACEI, diuretics, and in one case chronically used LMWH) were considered as nephrotoxic. The minimum number of these drugs used by patients was 0 and the maximum was 5, with a median of 2 (Table 2). The differences in the assessment of kidney function were analyzed by comparing the eGFR results obtained from the laboratory using the MDRD method with the calculated values of creatinine clearance using the Cockroft-Gault formula. The correlation between the difference D = eGFR - eClCr and BMI and body surface was also investigated. In the entire group of patients (N = 115), the significant statistical difference was found between eGFR and eClCr (p < 10−4 = 0.0001). In the subgroup of patients (N = 45) with the lower baseline eGFR <60, there was no significant difference between eGFR and eClCr (p = 0.48), while in the subgroup of patients with baseline eGFR ≥60 (N = 75), there was a significant difference between eGFR and eClCr (p < 10−5). The further analysis also proved the existence of a statistical relationship between the difference D = eGFR - eCrCl in correlation to BMI and body surface area (Figures 1–4). There is a statistically strong (rho close to −1) inverse correlation between D and BMI and between D and body surface area. As the BMI or body surface area increases, the eClCr values start to exceed the eGFR values. The regression lines in the Figures 1 and 2 were determined using the LOESS “loess method: local regression fitting” method. The points where the value determined by the local regression reaches the minimum, ie, where the eGFR and eClCr are the closest to each other, are the values of 30.8 kg/m2 for the BMI and 1.95 m2 for the body surface area. The further from the above-mentioned points, the greater the difference between the obtained results, and with an increase in the BMI or body surface area, the eClCr values start to exceed the eGFR, while in the range of values lower than the minimum regression, both in terms of BMI data and body surface area the value eClCr is smaller than eGFR (Figures 1–4).Figure 1The relationship between the absolute value of D (difference eGFR -eClCr) - Y axis and BMI - X axis. The red line represents the local regression.Figure 2The relationship between the absolute value of D (difference eGFR -eClCr) - Y axis, and body surface area - X axis. The red line represents the local regression.Figure 3The relationship between the value D (difference eGFR - eClCr) - Y axis, and BMI - X axis. The red line represents the local regression.Figure 4The relationship between the value D (difference eGFR - eClCr) - Y axis, and body surface area - X axis. The red line represents the local regression. The relationship between the absolute value of D (difference eGFR -eClCr) - Y axis and BMI - X axis. The red line represents the local regression. The relationship between the absolute value of D (difference eGFR -eClCr) - Y axis, and body surface area - X axis. The red line represents the local regression. The relationship between the value D (difference eGFR - eClCr) - Y axis, and BMI - X axis. The red line represents the local regression. The relationship between the value D (difference eGFR - eClCr) - Y axis, and body surface area - X axis. 
The red line represents the local regression. The above observation also analyzed the amount of drugs used chronically by patients, where the minimum number of constantly used drugs was 2 and the maximum 13 with a median of 7, including the number of drugs metabolized by the kidneys with the potential to deteriorate kidney function, especially in the case of incorrect dosing, from 0 to 5 with a median of 2. In order to prevent chronic kidney disease and slow its progression, it is necessary to carefully analyze the drugs used, avoid polytherapy and avoid nephrotoxic drugs. The data collected in our study included: gender, age, weight, height, creatinine, eGFR, fCG, number of drugs taken, including the number of drugs known to be nephrotoxic. Based on the above data, BMI and body surface area (BSA) were calculated according to the formulas: BMI = body mass (kg)/height2 (m) and BSA by Haycock = 0.024265.h0.3964.w0.5378. Multiple linear regression was performed for the dependent variable D = eGFR – fCG adjusted for confounders such as: body mass (present in the MDRD formula but not in the Cockcroft-Gault formula), the number of medications taken, and the number of nephrotoxic drugs taken. Relationship between D and BMI: The analysis of the results of multiple regression adjusted for body mass, number of medications taken, and number of nephrotoxic medications taken showed that the difference in D is not influenced by the number of medications taken. Adding this information to the regression model did not change significantly (>20%) the coefficients obtained in the model without this information. Moreover, the calculated coefficients related to the number of drugs did not have a statistically significant (p > 0.05) influence on the model result (D value). However, it was found that the patient’s body weight is an important factor (confounder). Introducing it to the model changed the value of the coefficient for BMI by 73%. A statistically significant influence of the body mass on the value of D was also found. It should be emphasized, however, that the patients’ body weight – although absent in the Cockcroft-Gault formula, hence its consideration as a potential confounding factor – and the BMI value are directly related to each other. In order to better assess the impact of body weight and BMI on the difference “D”, a larger observation group should be gathered, which would allow for the assessment of confounding factors not only by statistical methods, but also by matching and restriction. The relatively small size of the observation group is one of the limitations of our study (Figure 5).Figure 5The relationship between the value D (difference eGFR - eClCr) - Y axis, and body mass - X axis. The red line represents the local regression. The relationship between the value D (difference eGFR - eClCr) - Y axis, and body mass - X axis. The red line represents the local regression. The relationship between D and body surface area: As in the case of the relationship between D and BMI, the analysis of the results of multiple regression taking into account the effect of body weight and the number of drugs taken on the difference “D” showed that consideration of the number of drugs in the model does not significantly change the parameters of the model. Introducing the patient’s body weight to the model changes the value of the coefficient for the body area by 50%. Interestingly, it also causes the surface area to cease to be a statistically significant component of the model. 
In the case of the dependence of D on the body surface area and body weight, it should also be noted that the patient’s weight is directly related to the patient’s body surface calculated using the Haycock formula. Summing it up – the obtained results allow, in our opinion, to conclude that the difference D = eGFR – fCG is significantly influenced by BMI and body mass. The more the patient’s BMI differs from approx. 31 kg/m2 or 82.5 kg, the greater the difference D is greater in the absolute value. This can also be expressed, as follows: for BMI < 31 kg/m2 or body weight <82.5 kg, the difference D > 0; meaning eGFR > fCG; while for BMI > 31 kg/m2 or body weight >82.5 kg, the difference D < 0; meaning eGFR < fCG. The study showed that based on the estimated GFR using both methods (C-G and MDRD), 29.2% and 32.4% of patients, respectively, were incorrectly assigned to given stage of chronic kidney disease. Discussion: Assessment of kidney function is very important in order to properly drugs dosing, especially to adjust the doses of drugs metabolized by the kidneys in order to avoid or minimize their nephrotoxic effects.26,28 The current recommendations in the Summary of Product Characteristics are mainly based on the old pharmacokinetic studies, before the era of standardization of creatinine assessment and the common use of estimated GFR, when the estimated creatinine clearance (eClCr) calculated from the Cockroft-Gault equation was used to assess renal function.13,28,30 Historically, the use of different methods of creatinine concentration has resulted in different method-dependent results which were difficult to compare with others, yet inconsistently used in recommendations for drug dose adjustment in patients with kidney disease. The current progress in the diagnosis of kidney diseases, the widespread use of the estimated GFR filtration coefficient and the standardization of creatinine concentration assessment result in the possibility of a more precise assessment of kidney function.30–36 Perhaps, according to the current state of knowledge, on the basis of eGFR, and not eCLCr, pharmacokinetic studies of new drugs should be performed in order to ensure safer dosing of drugs, but it is difficult to imagine a situation in which manufacturers of all drugs used so far will conduct another pharmacokinetic test to update the dosing recommendations for existing drugs.30–36 The aim of this study was to analyze and compare the eGFR results with the estimated creatinine clearance score calculated according to the Cockroft-Gault equation, and to assess the significance of the difference between these two results. According to the current data,28,33 regardless of the method used to estimate GFR, the differences between the obtained results should be insignificant and do not imply therapeutic decisions. The present study results revealed that based on the estimated GFR using both methods (C-G and MDRD), 29.2% and 32.4% of patients, respectively, were incorrectly assigned to given stage of chronic kidney disease. The further practical conclusion is that in the population of 115 patients who underwent observation, as many as 45 people (this group constitutes 39% of the total number of respondents), the initial eGFR was <60 mL/min/1.73 m2. 
In some cases, there are no clear data on the duration of the abnormalities, so if the time criterion (>3 months) is not met, it is not possible to diagnose chronic kidney disease in all these patients; nevertheless, this group requires periodic laboratory control in an outpatient setting, optimization of the treatment of comorbidities, and a careful analysis of the need for chronic medication, especially drugs metabolized in the kidneys, together with their appropriate dosing. In Froissart et al’s cohort study, data from 2095 adult patients were analyzed by comparing the results obtained from the Cockcroft-Gault or MDRD equation with the reference creatinine clearance. While in the overall analysis the eGFR results differed only slightly from the reference creatinine clearance regardless of the method used, the differences became more significant in the subgroup analysis taking into account age, sex and body weight. In most cases, the results of the Cockcroft-Gault equation turned out to be less accurate. Froissart et al emphasize the need to design large multicentre studies to validate equations for estimating GFR.38 Various researchers (including Cirillo et al) emphasize limitations in the use of both methods. The MDRD equation has not been validated for GFR > 60 mL/min because the underlying studies did not include healthy people.39 This is reflected in a study by Stevens et al comparing eGFR by MDRD with GFR measured using radionuclide methods: eGFR from the MDRD formula was consistent with measured GFR only at GFR values <60 mL/min/1.73 m².37 According to the KDIGO recommendations, eGFR values between 60 and 89 mL/min/1.73 m² are associated with a diagnosis of chronic kidney disease only in the presence of other markers of kidney damage. The section on seniors emphasized the physiological decline in GFR associated with age and the difficulty of distinguishing the causes of a reduced GFR when early-stage kidney disease coexists.1,5 The data show that up to 75% of people over 70 years of age may have GFR <90 mL/min/1.73 m², and 25% <60 mL/min/1.73 m².5,29 Patients with a GFR of 60–89 mL/min/1.73 m² should be tested for comorbidities, mainly cardiovascular disease, lipid disorders and diabetes mellitus. With a low risk of cardiovascular disease, the main recommendations are regular checks of kidney function (at least once a year), urine sediment testing, blood pressure control, a change to an active lifestyle, and the avoidance of drugs and contrast agents with nephrotoxic potential.5,29,30 The situation is slightly different in elderly patients with GFR < 60 mL/min/1.73 m² or at high risk of developing, or already diagnosed with, cardiovascular disease. In addition to the above measures, they require assessment of CKD complications (anemia, endocrine disorders, nutritional deficiencies, osteoporosis, neuropathy), dietary and pharmacological treatment aimed at slowing the progression of CKD and mitigating cardiovascular risk factors, regularly repeated blood tests and urine sediment tests (up to 4 times a year), and nephrological consultation in case of disease progression.1,5 The aim of our study was to answer the question of whether there is a need for an additional determination of GFR according to the Cockcroft-Gault equation in order to ensure optimal drug dosing and minimize nephrotoxicity.
Due to the supposedly slight difference between the results obtained with the Cockcroft-Gault equation and MDRD, an attempt was made to distinguish patients in whom these differences might be more significant. It was observed that the further the BMI or body surface area was from the so-called regression minimum, ie, the point where both results differ only slightly (in the present case, BMI 30.8 kg/m² and body surface area 1.93 m²), the bigger the differences between eGFR and eClCr. This leads to the practical conclusion that in people with BMI or body surface area values significantly different from the standard ones, the eClCr values calculated according to the Cockcroft-Gault equation should be used when determining the doses of drugs metabolized by the kidneys. Of course, it should be emphasized that eGFR (most often calculated according to the MDRD formula) remains the standard for assessing kidney function and assigning the patient to the appropriate stage of chronic kidney disease, while the Cockcroft-Gault formula is intended for determining the exact dose of drugs according to the manufacturer’s information in patients with renal failure, especially those with a body surface area or BMI significantly different from the standard ones. According to the recommendations of nephrology societies, the prevention and treatment of chronic kidney disease require an active approach both on the part of the patient (increased physical activity, adequate diet, hydration, avoidance of self-administered drugs, especially those metabolized by the kidneys, including NSAIDs) and on the part of the doctor: blood and urine tests repeated at control intervals to assess kidney function and the presence of albuminuria, especially in patients at risk of chronic kidney disease; optimal treatment of comorbidities; critical analysis of the indications for the use of contrast agents; analysis of the number of drugs necessary in chronic therapy and precise dosing of drugs metabolized by the kidneys; appropriate patient education; and referral to nephrology clinics of patients with eGFR < 30 mL/min/1.73 m².
Background: According to the current data, regardless of the method used to estimate GFR, the differences between the obtained results should be insignificant and should not imply therapeutic decisions. The aim of this study was to analyze and compare the eGFR results with the estimated creatinine clearance calculated according to the Cockcroft-Gault equation, and to assess the significance of the difference between these two results. Methods: The study group consisted of 115 patients, of whom 76 were women and 39 men, in the age range of 55-93 years with a median of 79 years. The study analyzed differences in the assessment of kidney function by comparing the eGFR results assessed by the MDRD method obtained from the laboratory with the creatinine clearance values calculated using the Cockcroft-Gault formula, and by examining the correlation between the difference D = eGFR - eClCr and BMI and body surface area. Results: In the entire group of patients (N = 115), a statistically significant difference was found between eGFR and eClCr. In the subgroup of patients (N = 45) with a lower baseline eGFR <60, there was no significant difference between eGFR and eClCr, while in the subgroup of patients with baseline eGFR ≥60 (N = 75), there was a significant difference between eGFR and eClCr. The study showed that based on the GFR estimated using both methods (C-G and MDRD), 29.2% and 32.4% of patients, respectively, were incorrectly assigned to a given stage of chronic kidney disease. Conclusions: Proper assessment of kidney function is very important for proper drug dosing, especially for adjusting the doses of drugs metabolized by the kidneys in order to avoid or minimize their nephrotoxic effects.
Introduction: According to epidemiological data presented by the National Kidney Foundation, chronic kidney disease (CKD) affects about 11% of adults over 20 years of age,1 which corresponds to about 4 million people in the Polish population. According to other registers, the number of patients may be between 10% and 18% of the population,2–4 and in groups at risk – with coexisting diabetes, hypertension, atherosclerosis or obesity, or in elderly patients – it may even reach 50% of the general population.3 CKD is a chronic, progressive and initially asymptomatic disease, and therefore some patients are unaware of the existing burden. Some of them will require renal replacement therapy in the future, so it is important to emphasize the importance of diagnosis, control and education as well as nephroprotective activities, including avoiding the use of nephrotoxic drugs and ensuring their correct dosage.1 The concept of chronic kidney disease as a set of symptoms associated with damage to, or a reduction in the number of, nephrons was developed in 2002 by a group of American nephrologists associated with the Kidney Disease Outcome Quality Initiative (NKF K/DOQI).5 The current definition of chronic kidney disease, approved by Kidney Disease Improving Global Outcomes (KDIGO) in 2005 and modified in 2012, requires the demonstration of impaired renal function (in laboratory blood or urine tests) or structure (abnormalities in imaging or histological examinations) resulting from permanent damage to, or depletion of, nephrons due to diseases affecting the renal parenchyma.1,5 The glomerular filtration rate (GFR) and albuminuria are used to assess renal function.2,6 Glomerular filtration is the hypothetical amount of plasma cleared of a specific substance in a given unit of time.7 A method that yields very accurate results is the assessment of clearance using exogenous substances such as inulin or iohexol, but due to cost and invasiveness it is not widely used and serves rather as a reference in scientific research.7 The results obtained by measuring creatinine clearance may be somewhat inaccurate, because creatinine, an endogenous substance formed in the muscles, is excreted both by glomerular filtration and, up to 20%, by tubular secretion; moreover, the test requires a 24-hour urine collection.8–10 A major advance in diagnostics was the introduction of mathematical equations allowing the estimation of GFR from a single blood creatinine concentration, taking into account variables such as age, sex and body weight of the examined person (Cockcroft-Gault method), or age, gender and race (simplified MDRD formula), or – in the case of MDRD-6 – age, gender, race, creatinine, urea and albumin concentrations, enabling simple and quick results.3,7 The first equation to estimate creatinine clearance (CrCl) from the blood creatinine concentration was the Cockcroft-Gault formula, published in 1976 and based on data from 249 male patients aged 18–92 years with a measured creatinine clearance in the range of 30–130 mL/m².11,12 It is based on age, weight, serum creatinine and gender (Table 1). The obtained results were compared with the means of two measurements of daily creatinine clearance.
The method does not take into account the size of the body surface area.13

Table 1. The Cockcroft-Gault Formula for Estimating Creatinine Clearance (CrCl)11,12
Male: CrCl = ([140 − age] × weight in kg) / (serum creatinine × 72)
Female: CrCl = CrCl (male) × 0.85

Currently, the GFR value calculated according to the Cockcroft-Gault formula is not generally recommended for the diagnosis and monitoring of the course of chronic kidney disease.5,14 The Cockcroft-Gault equation underestimates the GFR value in lean and elderly people, and overestimates it in obese and overhydrated people.7 It should be emphasized, however, that the pharmacokinetic studies related to drug registration, used to determine dosing in patients with kidney disease, were carried out on the basis of the GFR calculated according to the Cockcroft-Gault formula.15–17 The MDRD formula was introduced in 1999 based on an analysis of data from 1628 patients with chronic kidney disease (mean GFR 40 mL/min/1.73 m²) who were predominantly Caucasian and had no associated diabetes.18 This formula comes in two forms: the classic one, which takes into account such variables as age, sex, race, serum creatinine, urea and serum albumin (MDRD6), and a simplified one that takes into account only age, gender, race and creatinine concentration (the so-called MDRD4), which has found widespread use in the assessment of eGFR.18,19 MDRD formula: GFR (mL/min/1.73 m²) = 175 × (Scr)^−1.154 × (Age)^−0.203 × (0.742 if female) × (1.212 if African American) (conventional units).20–22 To date, the MDRD equation has been evaluated in a number of further patient populations, including African Americans, Europeans and Asians, as well as diabetic patients with and without underlying kidney disease, kidney transplant recipients and potential kidney donors.14 Observations have shown that the MDRD formula is a good tool for assessing kidney function in everyday clinical practice.3,5 It should be noted, however, that the MDRD formula has not been tested in persons without confirmed kidney damage (without demonstrated albuminuria), in pregnant women, in children, in the elderly >85 years of age, or in some ethnic groups, eg, Latinos. Nevertheless, the availability and ease of obtaining the eGFR result have contributed to a significant improvement in the diagnosis and monitoring of chronic kidney disease.16 However, it should be emphasized that the MDRD formula underestimates results in the GFR range of 60–120 mL/min. Therefore, for GFR > 60 mL/min/1.73 m², the MDRD equation may, by lowering the values, lead to a misdiagnosis of CKD in healthy persons.7 In 2009, Levey et al developed another equation to evaluate eGFR, namely CKD-EPI.23,24 This time a much larger group of subjects was included – 8254 people, both healthy and with kidney disease. The same data as for the simplified version of MDRD (age, sex, race, creatinine concentration) are required to assess eGFR. It has been shown to have a very similar accuracy to MDRD for GFR < 60 mL/min/1.73 m², and higher accuracy for GFR > 60 mL/min/1.73 m².19,24 In line with the KDIGO (2012) recommendations, CKD-EPI (named after the Chronic Kidney Disease Epidemiology Collaboration) is currently the preferred method for determining the estimated GFR.
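A minimal Python sketch of the two equations as given above (Table 1 and the MDRD4 formula, conventional units with serum creatinine in mg/dL); the function names and example values are illustrative.

```python
def crcl_cockcroft_gault(age: float, weight_kg: float,
                         scr_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance (mL/min), Cockcroft-Gault (Table 1)."""
    crcl = (140 - age) * weight_kg / (scr_mg_dl * 72)
    return crcl * 0.85 if female else crcl

def egfr_mdrd4(age: float, scr_mg_dl: float,
               female: bool, african_american: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2), simplified 4-variable MDRD."""
    egfr = 175 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if african_american:
        egfr *= 1.212
    return egfr

# Example: a hypothetical 79-year-old woman, 70 kg, serum creatinine 1.1 mg/dL
print(round(crcl_cockcroft_gault(79, 70, 1.1, female=True), 1))   # ~45.8 mL/min
print(round(egfr_mdrd4(79, 1.1, female=True,
                       african_american=False), 1))               # ~47.9
```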
However, some researchers question the above recommendation, pointing to the benefits of the commonly used MDRD formula and the inaccuracy of CKD-EPI at low GFR values.25 In everyday clinical practice, most determinations are still made using the MDRD equation. At the same time, it is known that the incidence of chronic kidney disease and renal failure increases in the elderly. The most common causes are the coexistence of diseases leading to kidney damage, such as atherosclerosis, arterial hypertension and type 2 diabetes, as well as the use of nephrotoxic drugs and prerenal damage associated with chronic fluid deficiency.2,6 It should also be mentioned that these pathological processes overlap with the slow deterioration of the excretory function of the kidneys associated with the aging of organs, due to atherosclerotic lesions, atrophy of the renal tubules and glomerular sclerosis.2,26 After the age of 30 years, GFR decreases by about 1.0 mL/min yearly.2 In the physiological aging process, both macroscopic changes (thinning of the renal cortex, more frequent occurrence of simple cysts) and microscopic changes – a decrease in the number of active nephrons due to atherosclerosis, interstitial fibrosis and atrophy of the renal tubules – are observed.27 Biopsy studies conducted in healthy kidney donors have shown that nephron sclerosis can be observed in only about 2.7% of donors aged <30 years, but in about 58% of donors aged 60–69 years and in 73% of donors over 70 years of age. The physiological decrease in the number of nephrons and the associated reduction in GFR between the age groups of 18–29 and 70–75 years can amount to as much as 48%.27 It should be emphasized that, as Hommos et al reported,27 the mere demonstration of a reduced eGFR value indicating the existence of chronic kidney disease (<60 mL/min/1.73 m²), in the simultaneous absence of albuminuria, does not translate into an increased risk of death for a given age group.28 In a study comparing the results of eGFR and eClCr with reference methods of assessing GFR, in most patients and for most of the drugs used, the differences between the eGFR and eClCr results were so small that they did not necessitate significant changes in drug dosage; in principle, any method of GFR determination can be used.28 The exception seems to be patients with a body surface area that differs significantly from the standard one,29–31 where these differences may be significant. Moreover, it is worth noting the specificity of therapy in geriatric patients, resulting from multimorbidity with subsequent polypharmacy, the more frequent occurrence of CKD compared with the general population, and the above-mentioned difference, significant for drug dosing, between the eGFR and eClCr results in patients with a body surface area significantly different from the population mean. Taking all of this into consideration, the authors decided to undertake the present analyses in order to ascertain for which patients drug dosing should be determined using the eClCr values calculated with the Cockcroft-Gault formula instead of the eGFR results obtained from the laboratory. Discussion: The strengths of this work are that it addresses the important topic of assessing renal function in the geriatric population and draws attention to possible differences in drug dosing when using the Cockcroft-Gault formula as compared with the MDRD equation.
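The CKD-EPI equation discussed above can be sketched as follows. The coefficients are those of the published 2009 creatinine equation (Levey et al), which are not quoted in the article text, so treat them as an assumption of this example rather than as part of the study.

```python
def egfr_ckd_epi_2009(age: float, scr_mg_dl: float,
                      female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2), CKD-EPI 2009 creatinine equation.

    Coefficients taken from Levey et al 2009 (assumed here,
    not quoted in the article text).
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141 * min(ratio, 1) ** alpha * max(ratio, 1) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Same hypothetical patient as before: 79-year-old woman, Scr 1.1 mg/dL
print(round(egfr_ckd_epi_2009(79, 1.1, female=True, black=False), 1))  # ~47.7
```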
Limitations: the small size of the study group does not allow subgroups to be separated in terms of body weight and height, age or sex, or a thorough assessment of the impact of these parameters on the difference D. This is particularly important in the context of body weight, a variable that appears in the C-G formula but not in the MDRD equation. The spread of body weight in the examined patients is quite significant – from 39 kg to 125 kg, with a coefficient of variation of 24% – which, considering the total number of patients (115), makes it impossible to separate appropriately numerous subgroups with similar body weight. In the light of the data obtained in the present study, in patients with a BMI or body surface area significantly different from the standard one, it is worthwhile to calculate ClCr with the Cockcroft-Gault formula and to adjust the dose of the drug according to the information provided by the manufacturer. This action is in line with the generally accepted recommendations for the prevention and slowing of the progression of CKD, where one of the medical measures is to avoid nephrotoxic effects of the applied therapy, and it meets one of the main ethical postulates in medicine, ie, “Primum non nocere.”
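As a side note on the body-weight spread quoted above, here is a minimal sketch of the coefficient-of-variation calculation; the weights below are made up to span the reported 39–125 kg range and are not the study data.

```python
import statistics

def coefficient_of_variation(values) -> float:
    """CV = standard deviation / mean, expressed as a percentage."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Illustrative weights (kg) only; the study reported a CV of 24%
# for its own 115 patients.
weights = [39, 55, 62, 70, 74, 81, 88, 95, 104, 125]
print(round(coefficient_of_variation(weights), 1))  # CV in percent
```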
Background: According to the current data, regardless of the method used to estimate GFR, the differences between the obtained results should be insignificant and should not imply therapeutic decisions. The aim of this study was to analyze and compare the eGFR results with the estimated creatinine clearance calculated according to the Cockcroft-Gault equation, and to assess the significance of the difference between these two results. Methods: The study group consisted of 115 patients, of whom 76 were women and 39 men, in the age range of 55-93 years with a median of 79 years. The study analyzed differences in the assessment of kidney function by comparing the eGFR results assessed by the MDRD method obtained from the laboratory with the creatinine clearance values calculated using the Cockcroft-Gault formula, and by examining the correlation between the difference D = eGFR - eClCr and BMI and body surface area. Results: In the entire group of patients (N = 115), a statistically significant difference was found between eGFR and eClCr. In the subgroup of patients (N = 45) with a lower baseline eGFR <60, there was no significant difference between eGFR and eClCr, while in the subgroup of patients with baseline eGFR ≥60 (N = 75), there was a significant difference between eGFR and eClCr. The study showed that based on the GFR estimated using both methods (C-G and MDRD), 29.2% and 32.4% of patients, respectively, were incorrectly assigned to a given stage of chronic kidney disease. Conclusions: Proper assessment of kidney function is very important for proper drug dosing, especially for adjusting the doses of drugs metabolized by the kidneys in order to avoid or minimize their nephrotoxic effects.
5,492
330
[ 358, 118, 1828, 1390 ]
5
[ "egfr", "kidney", "body", "patients", "gfr", "drugs", "difference", "disease", "ml", "eclcr" ]
[ "kidney disease epidemiology", "kidney disease discussion", "diagnosis chronic kidney", "prevent chronic kidney", "risk chronic kidney" ]
null
null
[CONTENT] geriatric patients | kidney function | drug dosing [SUMMARY]
null
null
[CONTENT] geriatric patients | kidney function | drug dosing [SUMMARY]
[CONTENT] geriatric patients | kidney function | drug dosing [SUMMARY]
[CONTENT] geriatric patients | kidney function | drug dosing [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Humans | Kidney | Pharmaceutical Preparations [SUMMARY]
null
null
[CONTENT] Aged | Aged, 80 and over | Humans | Kidney | Pharmaceutical Preparations [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Humans | Kidney | Pharmaceutical Preparations [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Humans | Kidney | Pharmaceutical Preparations [SUMMARY]
[CONTENT] kidney disease epidemiology | kidney disease discussion | diagnosis chronic kidney | prevent chronic kidney | risk chronic kidney [SUMMARY]
null
null
[CONTENT] kidney disease epidemiology | kidney disease discussion | diagnosis chronic kidney | prevent chronic kidney | risk chronic kidney [SUMMARY]
[CONTENT] kidney disease epidemiology | kidney disease discussion | diagnosis chronic kidney | prevent chronic kidney | risk chronic kidney [SUMMARY]
[CONTENT] kidney disease epidemiology | kidney disease discussion | diagnosis chronic kidney | prevent chronic kidney | risk chronic kidney [SUMMARY]
[CONTENT] egfr | kidney | body | patients | gfr | drugs | difference | disease | ml | eclcr [SUMMARY]
null
null
[CONTENT] egfr | kidney | body | patients | gfr | drugs | difference | disease | ml | eclcr [SUMMARY]
[CONTENT] egfr | kidney | body | patients | gfr | drugs | difference | disease | ml | eclcr [SUMMARY]
[CONTENT] egfr | kidney | body | patients | gfr | drugs | difference | disease | ml | eclcr [SUMMARY]
[CONTENT] gfr | kidney | disease | kidney disease | age | creatinine | formula | chronic | mdrd | ml [SUMMARY]
null
null
[CONTENT] gfr | drugs | kidney | disease | according | equation | m2 | min 73 m2 | 73 m2 | ml min 73 m2 [SUMMARY]
[CONTENT] study | egfr | gfr | kidney | patients | drugs | regression | disease | body | ml [SUMMARY]
[CONTENT] study | egfr | gfr | kidney | patients | drugs | regression | disease | body | ml [SUMMARY]
[CONTENT] GFR ||| Cockroft-Gault | two [SUMMARY]
null
null
[CONTENT] [SUMMARY]
[CONTENT] ||| GFR ||| Cockroft-Gault | two ||| 115 | 76 | 39 | 55-93 years | 79 years ||| MDRD | Cockroft-Gault | BMI ||| ||| 115 | eGFR ||| 45 ||| ||| 75 ||| GFR | MDRD | 29.2% | 32.4% ||| [SUMMARY]
[CONTENT] ||| GFR ||| Cockroft-Gault | two ||| 115 | 76 | 39 | 55-93 years | 79 years ||| MDRD | Cockroft-Gault | BMI ||| ||| 115 | eGFR ||| 45 ||| ||| 75 ||| GFR | MDRD | 29.2% | 32.4% ||| [SUMMARY]
Genetic characterization of a Coxsackie A9 virus associated with aseptic meningitis in Alberta, Canada in 2010.
23521862
An unusually high incidence of aseptic meningitis caused by enteroviruses was noted in Alberta, Canada between March and October 2010. Sequence-based typing was performed on the enterovirus-positive samples to gain a better understanding of the molecular characteristics of the Coxsackie A9 (CVA-9) strain responsible for most cases in this outbreak.
BACKGROUND
Molecular typing was performed by amplification and sequencing of the VP2 region. The genomic sequence of one of the 2010 outbreak isolates was compared to a CVA-9 isolate from 2003 and the prototype sequence to study genetic drift and recombination.
METHODS
Of the 4323 samples tested, 213 were positive for enteroviruses (4.93%). The majority of the positives were detected in CSF samples (n = 157, 73.71%) and 81.94% of the sequenced isolates were typed as CVA-9. The sequenced CVA-9 positives were predominantly (94.16%) detected in patients ranging in age from 15 to 29 years and the peak months for detection were between March and October. Full genome sequence comparisons revealed that the CVA-9 viruses isolated in Alberta in 2003 and 2010 were highly homologous to the prototype CVA-9 in the structural VP1, VP2 and VP3 regions but divergent in the VP4, non-structural and non-coding regions.
RESULTS
The increase in cases of aseptic meningitis was associated with enterovirus CVA-9. Sequence divergence between the prototype strain of CVA-9 and the Alberta isolates suggests genetic drifting and/or recombination events; however, the sequence was conserved in the antigenic regions determined by the VP1, VP2 and VP3 genes. These results suggest that the increase in CVA-9 cases likely did not result from the emergence of a radically different immune escape mutant.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Alberta", "Child", "Child, Preschool", "Coxsackievirus Infections", "Disease Outbreaks", "Enterovirus B, Human", "Female", "Genotype", "Humans", "Infant", "Infant, Newborn", "Male", "Meningitis, Aseptic", "Middle Aged", "Molecular Epidemiology", "Molecular Sequence Data", "Molecular Typing", "RNA, Viral", "Sequence Analysis, DNA", "Young Adult" ]
3620579
Background
The enterovirus genome comprises a single open reading frame flanked by the 5’ and 3’ untranslated regions (UTRs), and the encoded polyprotein is cleaved to produce the structural and nonstructural proteins [1]. Although most infections are asymptomatic, non-polio enteroviruses are the most common infectious cause of aseptic meningitis [1]. Several outbreaks resulting from different enterovirus serotypes have been described, including but not restricted to Coxsackievirus A9 (CVA-9) [2], EV71 [3,4], EV68 [5], Coxsackievirus A24 [6], echovirus 18 [7], echovirus 30 [8,9], and Coxsackievirus A16 [10]. Non-polio enteroviruses were traditionally classified into serotypes based on a neutralization assay; these included 64 classical isolates consisting of Coxsackieviruses A, Coxsackieviruses B, echoviruses and several numbered enteroviruses [11]. Genetic studies led to a new classification scheme grouping the enteroviruses into species A, B, C and D, although serotype determination is still widely used, including for epidemiological purposes. Typing methods based on sequencing have been previously described in the VP1 [12-14], VP2 [15,16], and VP4 [17] regions. Molecular typing methods based on the P1 genomic region sequences have been found to yield results equivalent to neutralization assays, which they have essentially replaced [1,14]. Molecular typing has also led to the characterization of new enterovirus serotypes [4,18-20] and provides information on recombination between enteroviruses, a common phenomenon which almost always occurs between viruses belonging to the same species [1,21,22]. Serotype determination remains important for molecular epidemiology, in part because the capsid contains the neutralization epitopes, and a rise or fall in the incidence of a serotype can be linked to serotype-specific humoral immunity within the population [21]. The capsid also determines the cellular receptors used by the virus [23] and is thus an important determinant of viral pathogenesis, although non-structural proteins also contribute to the pathogenesis of the virus [24]. An unusually high incidence of aseptic meningitis was noted in Alberta, Canada between March and October 2010 [25]. A high proportion of CSF samples submitted to the Provincial Laboratory for Public Health of Alberta were positive for enteroviruses, and serotyping by molecular methods revealed that the majority of these enteroviruses belonged to the CVA-9 serotype. The primary goal of this study was to provide a full genetic characterization of the CVA-9 isolates responsible for this outbreak. Secondary goals were to comment on the methods of molecular serotyping for the diagnostic laboratory and on the use of long RT-PCR as a convenient method to obtain the near full-length genomic sequence of enteroviruses, especially when recombination events are involved in the genesis of the viral genome. The genome sequence of these isolates was determined and compared to the sequence of a CVA-9 isolate from Alberta in 2003 and to the prototype CVA-9 sequence (strain: Griggs; D00627) [26]. Serotyping by sequencing within the VP2 region was found to be more reliable than within the VP4 region. A comparison of the different genes revealed higher nucleotide and amino acid conservation in the structural regions, and analysis of the sequence of the non-structural region pointed to recombination events in the genesis of the 2010 isolate.
Methods
Screening for enteroviruses: Specimens submitted to the Provincial Laboratory for Public Health (ProvLab) for enterovirus testing from January 1 to December 31, 2010 were included in this study. A total of 4323 samples from patients ranging in age from one day to 97 years were tested; the most common specimen types included CSF (n = 2687, 62.16%), plasma (n = 497, 11.50%), nasopharyngeal and throat swabs (n = 213, 4.93%) and stools (n = 103, 2.40%). Viral RNA was extracted using the easyMAG® automated extractor (BioMérieux, Durham, NC, USA) and the extracted nucleic acid was screened for enteroviruses using a previously published NASBA assay [35]. Serotyping of enteroviruses: Enteroviruses were serotyped using one-step RT-PCR to amplify the partial 5’ untranslated region (293 bp), the VP4 region (207 bp) and the partial VP2 region (250 bp), using previously described primers 1-EV/RV and 2-EV/RV, from RNA extracted directly from the specimen [36]. Sequencing was performed without the need for nested amplification. Amplification was performed with the One-Step RT-PCR kit from Qiagen (Ontario, Canada), using 10 μl of 5X buffer, 10 μl of Q solution, 2 μl of 10 mM dNTPs, 2 μl of enzyme, 0.125 μl of RNaseOUT (Life Technologies, Ontario, Canada), 0.8 μM of primers and 5 μl of template nucleic acid in a total volume of 50 μl. The reverse transcription step was performed at 50°C for 30 minutes, followed by enzyme activation at 95°C for 15 minutes. The amplification protocol included 45 cycles of denaturation at 95°C for 30 seconds, annealing at 55°C for 1.5 minutes and extension at 72°C for 60 seconds. A final extension step was performed for 10 minutes at 72°C, followed by cooling. Amplified products were sequenced in both directions on the ABI PRISM 3130-Avant Genetic Analyzer (Applied Biosystems (ABI), Foster City, CA).
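The cycling conditions above can be captured as a small configuration structure, for example as a run-sheet check; this is purely an illustrative encoding of the published protocol, not software used in the study.

```python
# One-step RT-PCR program as described above (illustrative encoding only).
# Each step: (name, temperature in Celsius, hold time in seconds).
RT_PCR_PROGRAM = [
    ("reverse transcription", 50, 30 * 60),   # 50 C, 30 min
    ("enzyme activation",     95, 15 * 60),   # 95 C, 15 min
] + 45 * [
    ("denaturation", 95, 30),                 # 95 C, 30 s
    ("annealing",    55, 90),                 # 55 C, 1.5 min
    ("extension",    72, 60),                 # 72 C, 60 s
] + [
    ("final extension", 72, 10 * 60),         # 72 C, 10 min
]

total_minutes = sum(seconds for _, _, seconds in RT_PCR_PROGRAM) / 60
print(f"{len(RT_PCR_PROGRAM)} steps, ~{total_minutes:.0f} min of hold time")
```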
Complete genome sequencing of CVA-9: Two enterovirus-positive samples, from 2003 (CVA-9_2003) and 2010 (CVA-9_2010), with a high viral load as estimated by the Ct value and exhibiting strong growth in primary Rhesus Monkey Kidney cells from Diagnostic Hybrids (Ohio, USA), were used. The near-complete genome of these viruses was amplified by long RT-PCR as previously described [37]. The amplicons were sequenced by genome walking and contig assembly was performed using SeqScape v2.5 (ABI). The genomic sequences of CVA-9_2003 and CVA-9_2010 have been deposited in GenBank under accession numbers JQ837913 and JQ837914, respectively. Sequence analysis: Sequences were aligned using ClustalX (version 1.81, March 2000; ftp://ftp-igbmc.u-strasbg.fr/pub/ClustalX/) and phylogenetic analysis was conducted using Treecon [38]. Distance estimation was performed using the Jukes and Cantor (1969) distance correction, topology inference was performed using the neighbour-joining method, bootstrapping was done with 1000 replicates, and the tree was re-rooted at the internode. Simplot analyses were performed on ClustalX alignments using the Simplot for Windows v3.5.1 program [39].
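As a concrete illustration of the distance correction named above, here is a minimal sketch of the Jukes and Cantor (1969) model applied to two aligned sequences; it is a from-scratch illustration, not the Treecon implementation.

```python
import math

def jukes_cantor_distance(seq_a: str, seq_b: str) -> float:
    """Jukes-Cantor corrected distance between two aligned sequences.

    p is the proportion of differing (non-gap) sites;
    d = -(3/4) * ln(1 - 4p/3). Undefined for p >= 0.75 (saturation).
    """
    pairs = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a != '-' and b != '-']
    p = sum(a != b for a, b in pairs) / len(pairs)
    if p >= 0.75:
        raise ValueError("distance undefined: too many differences")
    return -0.75 * math.log(1 - 4 * p / 3)

# Toy aligned fragments: 1 difference over 10 sites (p = 0.1)
print(round(jukes_cantor_distance("ACGTACGTAC", "ACGTACGTTC"), 4))  # 0.1073
```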
Results
Detection and serotyping: Of the 4323 samples tested, 213 were positive for enteroviruses (4.93%). The majority of the positives were CSF samples (n = 157, 73.71%) followed by specimens of respiratory origin (n = 17, 7.98%), stools (n = 10, 4.70%) and plasma (n = 8, 3.76%). The rest of this analysis concentrates on CSF samples. A total of 72 positive CSF samples were randomly selected to represent different age groups, geographic locations and disease severity for serotype determination based on phylogenetic clustering of the partial sequence of the VP2 region with the prototype sequences. More than 85% of samples could be typed without the need for culture or nested PCR (data not shown). This RT-PCR detects both enteroviruses and rhinoviruses, and allows for differentiation based on the length of the amplicon. The most common serotype detected was CVA-9 (n = 59, 81.94%) and it was clearly the predominant serotype in the population. Closer examination of age distribution revealed that the majority (n = 56, 94.92%) of the CVA-9 positives were from patients ranging in age from 15 to 29 years; only three (5.08%) CVA-9 positives were detected in children less than a year old. The other frequently detected serotypes were CVB-3 (n = 3) and CVB-4 (n = 2). Figure 1A and 1B show the distribution of the different serotypes detected from infants less than one year old and from patients older than one year. A phylogenetic tree including representatives of the different enterovirus serotypes detected and some representative prototype strains in the VP2 region is indicated in Figure 2. The VP2 sequence from all the sequenced isolates was identical, showing that the outbreak was associated with a clonal or nearly clonal CVA-9 population. Figure 1: Distribution of the different serotypes detected in the sequenced samples; A indicates serotype distribution in samples from patients less than one year and B from patients over one year old. Figure 2: Phylogenetic tree including representatives of the different serotypes detected and corresponding prototype sequences (in italics). The alignment used for the phylogenetic tree included 173 bases of the VP2 region from the clinical specimens; the same region from the prototype strains is also included. The GenBank accession numbers for the prototype strains are included in the legend for Table 2. Patient samples are identified using a laboratory number followed by the enterovirus type. Age and gender distribution: Of the 59 samples positive for CVA-9, roughly equal numbers were detected in males (n = 30) and females (n = 29). Figure 3 shows the age distribution and percentage of CSF samples positive for CVA-9 within the different age groups tested. The data indicate that the majority of the CVA-9 positives were detected in patients between 15 and 30 years old. Figure 3: Age distribution of patients positive for enterovirus CVA-9. The bars represent the total number of samples positive for CVA-9 and the broken line indicates the percentage of samples positive for CVA-9 among the serotyped samples in each age group. The solid line indicates the percentage of samples positive for all enteroviruses in each age group based on the number of CSF samples tested within that age group. Seasonality: Samples from January 2010 to December 2010 were used in this study; the peak months for the detection of CVA-9 positives in CSF samples were between March and October.
Comparison of near full length genomes: The genomic sequences of CVA-9_2003 and CVA-9_2010 have been deposited in GenBank under accession numbers JQ837913 and JQ837914, respectively. Comparisons of the nucleotide and amino acid sequences in each of the gene segments from CVA-9_2003 and CVA-9_2010 to the prototype CVA-9 sequence are shown in Table 1. Nucleotide identity in the 5’ untranslated region (5’ UTR) between CVA-9_2003 and CVA-9_2010 was higher (96.5%) than with the prototype CVA-9 sequence, which was around 85%. The nucleotide and amino acid identity was high (around 95%) between the CVA-9_2003 and CVA-9_2010 samples in the structural genes (VP4, VP2, VP3 and VP1). The nucleotide identity in the structural genes for both these samples was lower (around 80%) when compared to the prototype CVA-9 sequence, but amino acid identity was high (around 90%), indicating that the nucleotide changes consisted mostly of synonymous mutations. CVA-9_2010 showed some sequence diversity from the CVA-9_2003 isolate in the 3’ non-structural domain. This domain encodes the 2A protease, 2B protein, 2C helicase, 3A protein, 3B protein (VPg), 3C protease and 3D protein (polymerase). The nucleotide sequence identity in the 2A protease and 2B protein regions was 96.15% and 93.60%, respectively; identity in the 2C helicase, 3A protein and 3D protein was around 80%. The lowest nucleotide identity was observed in the 3B protein at 74.63% and the 3C protease at 77.78%; however, amino acid conservation in these genes was higher, as indicated in Table 1. Comparison of these gene segments to the prototype CVA-9 strain showed a higher percentage of amino acid identity than of nucleotide identity. Table 1: Percent identity comparison of nucleotide and amino acid sequences for isolates from 2003 (JQ837913), 2010 (JQ837914) and the prototype CVA-9 sequence (D00627). Phylogenetic analysis comparing the nucleotide sequences of all the species B prototype sequences to CVA-9_2003 and CVA-9_2010 shows that in the VP2 region these sequences cluster with the prototype CVA-9 sequence with a high bootstrap value; a similar pattern was observed for the VP1 and VP3 regions. However, in the VP4 region, the closest neighbour to the CVA-9_2003 and CVA-9_2010 sequences is enterovirus 101 (EV-101) and there is significant divergence from the prototype CVA-9 sequence at the nucleotide level (data not shown). Table 2 lists the nearest neighbour based on phylogenetic analysis for the structural and non-structural gene segments, compared to the gene sequences from prototype strains. As indicated in the table, the 2A, 2B and 3B regions from CVA-9_2003 and CVA-9_2010 cluster together; however, the 2C, 3A, 3C and 3D regions from CVA-9_2003 and CVA-9_2010 cluster with different enterovirus species B prototype sequences, indicating base pair changes or recombination in these regions. The probable mosaic nature of the genomes of CVA-9_2003 and CVA-9_2010 was further explored by Simplot analysis. Figure 4A compares the sequence of CVA-9_2003 to that of the enteroviruses identified in Table 2 as the most homologous in each gene. All the viruses are highly homologous within the 5’NC, and the higher score of EV-74 may or may not reflect the origin of the 5’UTR segment of the Alberta isolates; nevertheless, sequencing the 5’UTR would not allow for accurate serotyping. It shows that in the VP2-VP3-VP1 region CVA-9_2003 is most homologous to the prototype CVA-9; it is speculated that the differences are attributable to genetic drifting over the years. It is interesting to note that in the VP4 region the highest homology is with EV-101, suggesting that recombination with EV-101 (or a very similar virus) might have occurred. This suggests that relying only on the VP4 sequence would not allow for accurate serotyping. Simplot analysis also shows that in the non-structural coding region recombination with other viruses of species B has occurred, although the dominance of individual viruses is not clear enough to attribute the origin of the segments with certainty. Similarly, Figure 4B compares the sequence of CVA-9_2010 to the enteroviruses identified in Table 2. Similar comments can be made regarding the sequence of CVA-9_2010, although in the non-structural domain, identifying CVB-6, EV-86 and EV-100 as contributing to the recombination events appears more convincing. Tables 1 and 2 show that there is high nucleotide identity between CVA-9_2003 and CVA-9_2010 in the region upstream of that coding for protein 2B. It can be speculated that CVA-9_2010 arose from recombination events between CVA-9_2003, CVB-6, EV-86 and EV-100. This hypothesis is examined by Simplot analysis in Figure 4C, which suggests that CVA-9 (2010 and 2003) are highly homologous from the 5’NC to the 2B domain; recombination with other viruses clearly occurred beyond this point, with contributions from EV-86 and EV-100, and likely from CVB-6. Table 2: Closest neighbour to the 2003 and 2010 samples based on phylogenetic analysis. Phylogenetic analysis was performed using all the available prototype sequences belonging to enterovirus species B. The phylogenetic tree is based on the complete nucleotide sequence of the VP2 and VP4 genes; the GenBank numbers for the prototype sequences used were D00627_CVA-9; M16560_CVB1; AF085363_CVB2; M16572_CVB3; X05690_CVB4; AF114383_CVB5; AF105342_CVB6; AF029859_E1; AY302545_E2; AY302553_E3; AY302557_E4; AF083069_E5; AY302558_E6; AY302559_E7; X84981_E9; X80059_E11; X79047_E12; AY302539_E13; AY302540_E14; AY302541_E15; AY302542_E16; AY302543_E17; AF317694_E18; AY302544_E19; AY302546_E20; AY302547_E21; AY302548_E24; AY302549_E25; AY302550_E26; AY302551_E27; AY302552_E29; AF162711_E30; AY302554_E31; AY302555_E32; AY302556_E33; AY302560_EV69; AF241359_EV73; AY556057_EV74; AY556070_EV75; AJ493062_EV77; AY843297_EV79; AY843298_EV80; AY843299_EV81; AY843300_EV82; AY843301_EV83; DQ902712_EV84; AY843303_EV85; AY843304_EV86; AY843305_EV87; AY843306_EV88; AY843307_EV97; NC_013115.1_EV107; DQ902713_EV100; AY843308_EV101. The GenBank numbers for the in-house sequenced samples from 2003 and 2010 are JQ837913 and JQ837914. Figure 4: Simplot analysis of full genome sequences of CVA-9_2003, CVA-9_2010 and related species B sequences. Genomic regions are indicated at the bottom of each panel based on the numbering of the prototype CVA-9 (D00627): 5’UTR: 1–722; VP4: 723–929; VP2: 930–1712; VP3: 1713–2426; VP1: 2427–3332; 2A: 3333–3773; 2B: 3774–4069; 2C: 4070–5057; 3A: 5058–5323; 3B: 5324–5390; 3C: 5391–5938; 3D: 5939–7324; 3’UTR: 7325 to >7403. The query sequences used are CVA-9_2003 in panel 4A and CVA-9_2010 in panels 4B and 4C. Arrows in panels 4A and 4B indicate recombination with EV-101 in the VP4 region.
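To make the identity comparisons above concrete, here is a minimal sliding-window percent-identity sketch in the spirit of the Simplot analysis; the window and step sizes are illustrative defaults, not the parameters used in the study.

```python
def percent_identity(a: str, b: str) -> float:
    """Percent identity over aligned, equal-length sequence strings."""
    matches = sum(x == y for x, y in zip(a, b) if x != '-' and y != '-')
    sites = sum(1 for x, y in zip(a, b) if x != '-' and y != '-')
    return 100.0 * matches / sites

def sliding_identity(query: str, ref: str,
                     window: int = 200, step: int = 20):
    """Yield (window midpoint, % identity) along an alignment, Simplot-style."""
    for start in range(0, len(query) - window + 1, step):
        chunk_q = query[start:start + window]
        chunk_r = ref[start:start + window]
        yield start + window // 2, percent_identity(chunk_q, chunk_r)

# Toy usage with two short aligned fragments (illustrative only):
# identical first half, fully divergent second half.
q = "ACGT" * 100
r = "ACGT" * 50 + "TGCA" * 50
for pos, ident in list(sliding_identity(q, r))[::5]:
    print(pos, round(ident, 1))   # identity drops across the "breakpoint"
```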
Conclusion
In summary, the sudden increase in cases of aseptic meningitis in Alberta in 2010 was associated with enterovirus CVA-9. The capsid region was highly homologous to the capsid of a 2003 isolate, suggesting that the infections were not the result of the emergence of an immune escape mutant. We thus speculate that the increase in the number of infections may have resulted from a decline in the level of herd immunity against this virus to a level where the virus was able to penetrate the population. When compared to the prototype strain of CVA-9, the Alberta isolates displayed signs of multiple recombination events in addition to genetic drifting.
[ "Background", "Detection and serotyping", "Age and gender distribution", "Seasonality", "Comparison of near full length genomes", "Screening for enteroviruses", "Serotyping of enteroviruses", "Complete genome sequencing of CVA-9", "Sequence analysis", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "The enterovirus genome is comprised of a single open reading frame flanked by the 5’ and 3’ untranslated regions (UTRs), and the encoded polyprotein is cleaved to produce the structural and nonstructural proteins [1]. Although most infections are asymptomatic, non-polio enteroviruses are the most common infectious cause of aseptic meningitis [1]. Several outbreaks resulting from different enterovirus serotypes have been described including but not restricted to Coxsackievirus A9 (CVA-9) [2], EV71 [3,4], EV68 [5], Coxsackievirus A24 [6], echovirus 18 [7], echovirus 30 [8,9], and Coxsackievirus A16 [10].\nNon-polio enteroviruses were traditionally classified into serotypes, based on a neutralization assay and included 64 classical isolates consisting of Coxsackieviruses A, Coxsackieviruses B, echoviruses and several numbered enteroviruses [11]. Genetic studies led to a new classification scheme, grouping the enteroviruses into species A, B, C and D, although serotype determination is still widely used, including for epidemiological purposes. Typing methods based on sequencing have been previously described in the VP1 [12-14], VP2 [15,16], and VP4 [17] regions. Molecular typing methods based on the P1 genomic region sequences, have been found to yield equivalent results to neutralizing assays, which they have essentially replaced [1,14]. Molecular typing has also led to the characterization of new enterovirus serotypes [4,18-20] and provides information on recombination between enteroviruses. This is a common phenomenon which almost always occurs between viruses belonging to the same species [1,21,22]. Serotype determination remains important for molecular epidemiology, in part because the capsid contains the neutralization epitopes, and a rise or fall in incidence of a serotype can be linked to serotype-specific humoral immunity within the population [21]. The capsid also determines the cellular receptors used by the virus [23] and is thus an important determinant of viral pathogenesis, although non-structural proteins also contribute to the pathogenesis of the virus [24].\nAn unusually high incidence of aseptic meningitis was noted in Alberta, Canada between March and October 2010 [25]. A high proportion of CSF samples submitted to the Provincial Laboratory For Public Health of Alberta were positive for enteroviruses and serotyping by molecular methods revealed that majority of these enteroviruses belonged to the CVA-9 serotype. The primary goal of this study was to provide a full genetic characterization of the CVA-9 isolates responsible for this outbreak. Secondary goals were to comment on the methods of molecular serotyping for the diagnostic laboratory and on the use of long RT-PCR as a convenient method to obtain the near full-length genomic sequence of enteroviruses, especially when recombination events are involved in the genesis of the viral genome. The genome sequence of these isolates was determined and compared to the sequence of a CVA-9 isolate from Alberta in 2003 and to the prototype CVA-9 sequence (strain: Griggs; D00627) [26]. Serotyping by sequencing within the VP2 region was found to be more reliable than within the VP4 region. A comparison of the different genes revealed a higher nucleotide and amino acid conservation in the structural regions, and analysis of the sequence of the non-structural region pointed to recombination events in the genesis of the 2010 isolate.", "Of the 4323 samples tested, 213 were positive for enteroviruses (4.93%). 
The majority of the positives were CSF samples (n = 157, 73.71%) followed by specimens of respiratory origin (n = 17, 7.98%), stools (n = 10, 4.70%) and plasma (n = 8, 3.76%). The rest of this analysis concentrates on CSF samples. A total of 72 positive CSF samples were randomly selected to represent different age groups, geographic locations and disease severity for serotype determination based on phylogenetic clustering of the partial sequence of the VP2 region with the prototype sequences. More than 85% of samples could be typed without the need for culture or nested PCR (data not shown). This RT-PCR detects both enteroviruses and rhinoviruses, and allows for differentiation based on the length of the amplicon.\nThe most common serotype detected was CVA-9 (n = 59, 81.94%) and it was clearly the predominant serotype in the population. Closer examination of age distribution revealed that the majority (n = 56, 94.92%) of the CVA-9 positives were from patients ranging in age from 15 to 29 years; only three (5.08%) CVA-9 positives were detected in children less than a year old. The other frequently detected serotypes were CVB-3 (n = 3) and CVB-4 (n = 2). Figures 1A and 1B show the distribution of the different serotypes detected from infants less than one year old and from patients older than one year. A phylogenetic tree including representatives of the different enterovirus serotypes detected and some representative prototype strains in the VP2 region is shown in Figure 2. The VP2 sequence from all the sequenced isolates was identical, showing that the outbreak was associated with a clonal or nearly clonal CVA-9 population.\nDistribution of the different serotypes detected in the sequenced samples. A indicates the serotype distribution in samples from patients less than one year old and B from patients over one year old.\nPhylogenetic tree including representatives of the different serotypes detected and corresponding prototype sequences (in italics). The alignment used for the phylogenetic tree included 173 bases of the VP2 region from the clinical specimens; the same region from the prototype strains is also included. The GenBank accession numbers for the prototype strains are included in the legend of Table 2. Patient samples are identified using a laboratory number followed by the enterovirus type.", "Of the 59 samples positive for CVA-9, roughly equal numbers were detected in males (n = 30) and females (n = 29). Figure 3 shows the age distribution and percentage of CSF samples positive for CVA-9 within the different age groups tested. The data indicate that the majority of the CVA-9 positives were detected in patients between 15 and 30 years old.\nAge distribution of patients positive for enterovirus CVA-9. The bars represent the total number of samples positive for CVA-9 and the broken line indicates the percentage of samples positive for CVA-9 among the serotyped samples in each age group. The solid line indicates the percentage of samples positive for all enteroviruses in each age group based on the number of CSF samples tested within that age group.", "Samples from January 2010 to December 2010 were used in this study; peak months for the detection of CVA-9 positives in CSF samples were between March and October.", "The genomic sequences of the CVA-9_2003 and CVA-9_2010 have been deposited in GenBank under accession numbers JQ837913 and JQ837914, respectively.
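Both records are public, so readers can pull them down directly. The snippet below is an editorial illustration rather than part of the study's pipeline; it assumes Biopython is installed and that a contact e-mail is supplied, as NCBI requires (the address shown is a placeholder).

```python
# Illustrative only: retrieve the two deposited CVA-9 genomes from GenBank.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder; NCBI requires a real address

handle = Entrez.efetch(db="nucleotide", id="JQ837913,JQ837914",
                       rettype="gb", retmode="text")
records = list(SeqIO.parse(handle, "genbank"))
handle.close()

for rec in records:
    print(rec.id, len(rec.seq), rec.description)
```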
Comparisons of the nucleotide and amino acid sequences in each of the gene segments from CVA-9_2003 and CVA-9_2010 to the prototype CVA-9 sequence are shown in Table 1. Nucleotide identity in the 5’ untranslated region (5’ UTR) between CVA-9_2003 and CVA-9_2010 was higher (96.5%) than with the prototype CVA-9 sequence, which was around 85%. The nucleotide and amino acid identity was high (around 95%) between the CVA-9_2003 and CVA-9_2010 samples in the structural genes (VP4, VP2, VP3 and VP1). The nucleotide identity in the structural genes for both these samples was lower (around 80%) when compared to the prototype CVA-9 sequence, but amino acid identity was high (around 90%), indicating that the nucleotide changes consisted mostly of synonymous mutations. CVA-9_2010 showed some sequence diversity from the CVA-9_2003 isolate in the 3’ non-structural domain. This domain encodes the 2A protease, 2B protein, 2C helicase, 3A protein, 3B protein (VPg), 3C protease and 3D protein (polymerase). The nucleotide sequence identity in the 2A protease and 2B protein regions was 96.15% and 93.60%, respectively; identity in the 2C helicase, 3A protein and 3D protein was around 80%. The lowest nucleotide identity was observed in the 3B protein at 74.63% and the 3C protease at 77.78%; however, amino acid conservation in these genes was higher, as indicated in Table 1. Comparison of these gene segments to the prototype CVA-9 strain showed a higher percentage of amino acid identity than nucleotide identity.\nPercent identity comparison of nucleotide and amino acid sequences for isolates from 2003 (JQ837913), 2010 (JQ837914) and the prototype CVA-9 sequence (D00627)\nPhylogenetic analysis comparing the nucleotide sequences of all the species B prototype sequences to CVA-9_2003 and CVA-9_2010 shows that in the VP2 region, these sequences cluster with the prototype CVA-9 sequence with a high bootstrap value; a similar pattern was observed for the VP1 and VP3 regions. However, in the VP4 region, the closest neighbour to the CVA-9_2003 and CVA-9_2010 sequences is enterovirus 101 (EV-101) and there is significant divergence from the prototype CVA-9 sequence at the nucleotide level (data not shown). Table 2 lists the nearest neighbour based on phylogenetic analysis for the structural and non-structural gene segments, compared to the gene sequences from prototype strains. As indicated in the table, the regions 2A, 2B, and 3B from CVA-9_2003 and CVA-9_2010 cluster together; however, the 2C, 3A, 3C and 3D regions from CVA-9_2003 and CVA-9_2010 cluster with different enterovirus species B prototype sequences, indicating base pair changes or recombination in these regions. The probable mosaic nature of the genome of CVA-9_2003 and CVA-9_2010 was further explored by performing Simplot analysis. Figure 4A compares the sequence of CVA-9_2003 to that of the enteroviruses identified in Table 2 as the most homologous in each gene. All the viruses are highly homologous within the 5’NC, and the higher score of EV-74 may or may not reflect the origin of the 5’UTR segment of the Alberta isolates; nevertheless, sequencing the 5’UTR would not allow for accurate serotyping. The analysis shows that in the VP2-VP3-VP1 region CVA-9_2003 is most homologous to the prototype CVA-9; it is speculated that the differences are attributable to genetic drift over the years. It is interesting to note that in the VP4 region the highest homology is with EV-101, suggesting that recombination with EV-101 (or a very similar virus) might have occurred.
This suggests that relying only on the VP4 sequence would not allow for accurate serotyping. Simplot analysis also shows that in the non-structural coding region, recombination with other viruses of species B has occurred, although the dominance of individual viruses is not clear enough to attribute the origin of the segments with certainty. Similarly, Figure 4B compares the sequence of CVA-9_2010 to the enteroviruses identified in Table 2. Similar comments can be made regarding the sequence of CVA-9_2010, although in the non-structural domain, identifying CVB-6, EV-86 and EV-100 as contributing to the recombination events appears more convincing. Tables 1 and 2 show that there is a high nucleotide identity between CVA-9_2003 and CVA-9_2010 in the region upstream of that coding for protein 2B. It can be speculated that CVA-9_2010 arose from recombination events between CVA-9_2003, CVB-6, EV-86 and EV-100. This hypothesis is examined by Simplot analysis in Figure 4C. This analysis suggests that CVA-9 (2010 and 2003) are highly homologous from the 5’NC to the 2B domain; recombination with other viruses clearly occurred beyond this point, with contributions from EV-86 and EV-100, and likely from CVB-6.\nClosest neighbour to the 2003 and 2010 samples based on phylogenetic analysis\nPhylogenetic analysis was performed using all the available prototype sequences belonging to enterovirus species B.\nThe phylogenetic tree is based on the complete nucleotide sequences of the VP2 and VP4 genes; GenBank numbers for the prototype sequences used were D00627_CVA-9; M16560_CVB1; AF085363_CVB2; M16572_CVB3; X05690_CVB4; AF114383_CVB5; AF105342_CVB6; AF029859_E1; AY302545_E2; AY302553_E3; AY302557_E4; AF083069_E5; AY302558_E6; AY302559_E7; X84981_E9; X80059_E11; X79047_E12; AY302539_E13; AY302540_E14; AY302541_E15; AY302542_E16; AY302543_E17; AF317694_E18; AY302544_E19; AY302546_E20; AY302547_E21; AY302548_E24; AY302549_E25; AY302550_E26; AY302551_E27; AY302552_E29; AF162711_E30; AY302554_E31; AY302555_E32; AY302556_E33; AY302560_EV69; AF241359_EV73; AY556057_EV74; AY556070_EV75; AJ493062_EV77; AY843297_EV79; AY843298_EV80; AY843299_EV81; AY843300_EV82; AY843301_EV83; DQ902712_EV84; AY843303_EV85; AY843304_EV86; AY843305_EV87; AY843306_EV88; AY843307_EV97; NC_013115.1_EV107; DQ902713_EV100; AY843308_EV101. The GenBank numbers for the in-house sequenced samples from 2003 and 2010 are JQ837913 and JQ837914.\nSimplot analysis of full genome sequences of CVA-9_2003, CVA-9_2010 and related species B sequences. Genomic regions are indicated at the bottom of each panel based on numbering of the prototype CVA-9 (D00627): 5’UTR:1–722; VP4:723–929; VP2:930–1712; VP3:1713–2426; VP1:2427–3332; 2A:3333–3773; 2B:3774–4069; 2C:4070–5057; 3A:5058–5323; 3B:5324–5390; 3C:5391–5938; 3D:5939–7324; 3’UTR:7325 to >7403. The query sequences used are CVA-9_2003 in panel 4A and CVA-9_2010 in panels 4B and 4C. Arrows in panels 4A and 4B indicate recombination with EV-101 in the VP4 region.", "Specimens submitted to the Provincial Laboratory for Public Health (ProvLab) for enterovirus testing from January 1 to December 31, 2010 were included in this study. A total of 4323 samples from patients ranging in age from one day to 97 years were tested; the most common specimen types tested included CSF (n = 2687, 62.16%), plasma (n = 497, 11.50%), nasopharyngeal and throat swabs (n = 213, 4.93%) and stools (n = 103, 2.40%).
Viral RNA was extracted using the easyMAG® automated extractor (BioMérieux, Durham, NC, USA) and the extracted nucleic acid was screened for enteroviruses using a previously published NASBA assay [35].", "Enteroviruses were serotyped using one-step RT-PCR to amplify the partial 5’ untranslated region (293 bp), the VP4 region (207 bp) and the partial VP2 region (250 bp) using previously described primers 1-EV/RV and 2-EV/RV from RNA extracted directly from the specimen [36]. Sequencing was performed without the need for nested amplification. Amplification was performed using the One-step RT-PCR kit from Qiagen (Ontario, Canada) with 10 μl of 5X buffer, 10 μl of Q solution, 2 μl of 10 mM dNTPs, 2 μl of enzyme, 0.125 μl of RNaseOUT (Life Technologies, Ontario, Canada), 0.8 μM of primers and 5 μl of template nucleic acid in a total volume of 50 μl. The reverse transcription step was performed at 50°C for 30 minutes, followed by enzyme activation at 95°C for 15 minutes. The amplification protocol included 45 cycles of denaturation at 95°C for 30 seconds, followed by annealing at 55°C for 1.5 minutes and amplification at 72°C for 60 seconds. A final extension step was performed for 10 minutes at 72°C followed by cooling. Amplified products were sequenced in both directions on the ABI PRISM 3130-Avant Genetic Analyzer (Applied Biosystems (ABI), Foster City, CA).", "Two enterovirus-positive samples from 2003 (CVA-9_2003) and 2010 (CVA-9_2010), with a high viral load as estimated by the Ct value and strong growth in primary Rhesus Monkey Kidney cells from Diagnostic Hybrids (Ohio, USA), were used. The near-complete genome of these viruses was amplified by long RT-PCR as previously described [37]. The amplicons were sequenced by genome walking and contig assembly was performed using SeqScape v2.5 (ABI). The genomic sequences of the CVA-9_2003 and CVA-9_2010 have been deposited in GenBank under accession numbers JQ837913 and JQ837914, respectively.", "Sequences were aligned using ClustalX (Version 1.81, March 2000; ftp://ftp-igbmc.u-strasbg.fr/pub/ClustalX/) and phylogenetic analysis was conducted using Treecon [38]. Distance estimation was performed using the Jukes and Cantor (1969) distance correction, topology inference was performed using the neighbour-joining method, bootstrapping was done using 1000 replicates, and the tree was re-rooted at the internode. Simplot analyses were performed using alignments done with ClustalX and the Simplot for Windows v3.5.1 program [39].", "CSF: Cerebrospinal fluid; CVA: Coxsackie virus A; CVB: Coxsackie virus B; EV: Enterovirus; UTR: Untranslated region; NASBA: Nucleic acid sequence based amplification; RT-PCR: Reverse-transcription polymerase chain reaction; ABI: Applied Biosystems; CVA-9: Coxsackie virus A9", "The authors declare that they have no competing interests.", "EC carried out the molecular studies and participated in the sequence alignment. KP, SW and RT conceived of the study, participated in its design, coordination, and analysis. KP wrote the manuscript, SW and RT edited the manuscript. All authors read and approved the final manuscript." ]
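As an editorial aside on the distance correction cited in the Sequence analysis entry above: for an observed proportion p of mismatched sites, the Jukes and Cantor (1969) distance is d = -(3/4)·ln(1 - (4/3)·p). The sketch below is a minimal Python illustration of that correction and of neighbour-joining tree construction with Biopython; it is not the published pipeline (the authors used ClustalX and Treecon), and the short aligned sequences are invented for the example.

```python
import math
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

def jukes_cantor(a, b):
    """Jukes-Cantor (1969) corrected distance for two aligned sequences."""
    sites = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    p = sum(x != y for x, y in sites) / len(sites)  # observed p-distance
    return -0.75 * math.log(1.0 - (4.0 / 3.0) * p)  # defined while p < 0.75

# Invented toy alignment; real input would be a ClustalX alignment.
aln = {
    "CVA9_2003": "ATGGCGCAACTGTTGGGAAAC",
    "CVA9_2010": "ATGGCGCAGCTGTTGGGAAAC",
    "CVA9_Griggs": "ATGACGCAACTTTTAGGAAAC",
}
names = list(aln)

# Biopython expects a lower-triangular matrix with a zero diagonal.
matrix = [[0.0 if j == i else jukes_cantor(aln[names[i]], aln[names[j]])
           for j in range(i + 1)] for i in range(len(names))]
tree = DistanceTreeConstructor().nj(DistanceMatrix(names, matrix))
print(tree)  # neighbour-joining topology with branch lengths
```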
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Results", "Detection and serotyping", "Age and gender distribution", "Seasonality", "Comparison of near full length genomes", "Discussion", "Conclusion", "Methods", "Screening for enteroviruses", "Serotyping of enteroviruses", "Complete genome sequencing of CVA-9", "Sequence analysis", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "The enterovirus genome is comprised of a single open reading frame flanked by the 5’ and 3’ untranslated regions (UTRs), and the encoded polyprotein is cleaved to produce the structural and nonstructural proteins [1]. Although most infections are asymptomatic, non-polio enteroviruses are the most common infectious cause of aseptic meningitis [1]. Several outbreaks resulting from different enterovirus serotypes have been described including but not restricted to Coxsackievirus A9 (CVA-9) [2], EV71 [3,4], EV68 [5], Coxsackievirus A24 [6], echovirus 18 [7], echovirus 30 [8,9], and Coxsackievirus A16 [10].\nNon-polio enteroviruses were traditionally classified into serotypes, based on a neutralization assay and included 64 classical isolates consisting of Coxsackieviruses A, Coxsackieviruses B, echoviruses and several numbered enteroviruses [11]. Genetic studies led to a new classification scheme, grouping the enteroviruses into species A, B, C and D, although serotype determination is still widely used, including for epidemiological purposes. Typing methods based on sequencing have been previously described in the VP1 [12-14], VP2 [15,16], and VP4 [17] regions. Molecular typing methods based on the P1 genomic region sequences, have been found to yield equivalent results to neutralizing assays, which they have essentially replaced [1,14]. Molecular typing has also led to the characterization of new enterovirus serotypes [4,18-20] and provides information on recombination between enteroviruses. This is a common phenomenon which almost always occurs between viruses belonging to the same species [1,21,22]. Serotype determination remains important for molecular epidemiology, in part because the capsid contains the neutralization epitopes, and a rise or fall in incidence of a serotype can be linked to serotype-specific humoral immunity within the population [21]. The capsid also determines the cellular receptors used by the virus [23] and is thus an important determinant of viral pathogenesis, although non-structural proteins also contribute to the pathogenesis of the virus [24].\nAn unusually high incidence of aseptic meningitis was noted in Alberta, Canada between March and October 2010 [25]. A high proportion of CSF samples submitted to the Provincial Laboratory For Public Health of Alberta were positive for enteroviruses and serotyping by molecular methods revealed that majority of these enteroviruses belonged to the CVA-9 serotype. The primary goal of this study was to provide a full genetic characterization of the CVA-9 isolates responsible for this outbreak. Secondary goals were to comment on the methods of molecular serotyping for the diagnostic laboratory and on the use of long RT-PCR as a convenient method to obtain the near full-length genomic sequence of enteroviruses, especially when recombination events are involved in the genesis of the viral genome. The genome sequence of these isolates was determined and compared to the sequence of a CVA-9 isolate from Alberta in 2003 and to the prototype CVA-9 sequence (strain: Griggs; D00627) [26]. Serotyping by sequencing within the VP2 region was found to be more reliable than within the VP4 region. A comparison of the different genes revealed a higher nucleotide and amino acid conservation in the structural regions, and analysis of the sequence of the non-structural region pointed to recombination events in the genesis of the 2010 isolate.", " Detection and serotyping Of the 4323 samples tested, 213 were positive for enteroviruses (4.93%). 
The majority of the positives were CSF samples (n = 157, 73.71%) followed by specimens of respiratory origin (n = 17, 7.98%), stools (n = 10, 4.70%) and plasma (n = 8, 3.76%). The rest of this analysis concentrates on CSF samples. A total of 72 positive CSF samples were randomly selected to represent different age groups, geographic locations and disease severity for serotype determination based on phylogenetic clustering of the partial sequence of the VP2 region with the prototype sequences. More than 85% of samples could be typed without the need for culture or nested PCR (data not shown). This RT-PCR detects both enteroviruses and rhinoviruses, and allows for differentiation based on the length of the amplicon.\nThe most common serotype detected was CVA-9 (n = 59, 81.94%) and it was clearly the predominant serotype in the population. Closer examination of age distribution revealed that the majority (n = 56, 94.92%) of the CVA-9 positives were from patients ranging in age from 15 to 29 years; only three (5.08%) CVA-9 positives were detected in children less than a year old. The other frequently detected serotypes were CVB-3 (n = 3) and CVB-4 (n = 2). Figures 1A and 1B show the distribution of the different serotypes detected from infants less than one year old and from patients older than one year. A phylogenetic tree including representatives of the different enterovirus serotypes detected and some representative prototype strains in the VP2 region is shown in Figure 2. The VP2 sequence from all the sequenced isolates was identical, showing that the outbreak was associated with a clonal or nearly clonal CVA-9 population.\nDistribution of the different serotypes detected in the sequenced samples. A indicates the serotype distribution in samples from patients less than one year old and B from patients over one year old.\nPhylogenetic tree including representatives of the different serotypes detected and corresponding prototype sequences (in italics). The alignment used for the phylogenetic tree included 173 bases of the VP2 region from the clinical specimens; the same region from the prototype strains is also included. The GenBank accession numbers for the prototype strains are included in the legend of Table 2. Patient samples are identified using a laboratory number followed by the enterovirus type.\n Age and gender distribution Of the 59 samples positive for CVA-9, roughly equal numbers were detected in males (n = 30) and females (n = 29). Figure 3 shows the age distribution and percentage of CSF samples positive for CVA-9 within the different age groups tested. The data indicate that the majority of the CVA-9 positives were detected in patients between 15 and 30 years old.\nAge distribution of patients positive for enterovirus CVA-9. The bars represent the total number of samples positive for CVA-9 and the broken line indicates the percentage of samples positive for CVA-9 among the serotyped samples in each age group. The solid line indicates the percentage of samples positive for all enteroviruses in each age group based on the number of CSF samples tested within that age group.\n Seasonality Samples from January 2010 to December 2010 were used in this study; peak months for the detection of CVA-9 positives in CSF samples were between March and October.\n Comparison of near full length genomes The genomic sequences of the CVA-9_2003 and CVA-9_2010 have been deposited in GenBank under accession numbers JQ837913 and JQ837914, respectively. Comparisons of the nucleotide and amino acid sequences in each of the gene segments from CVA-9_2003 and CVA-9_2010 to the prototype CVA-9 sequence are shown in Table 1.
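The shape of such a per-segment comparison is easy to reproduce in outline. The sketch below is an editorial illustration only: it assumes two genomes already aligned gap-free in prototype (D00627) coordinates, and it reuses the region boundaries listed in the Simplot figure legend further down; the function names are invented.

```python
# Region boundaries (1-based, inclusive) in prototype CVA-9 (D00627) numbering,
# taken from the Simplot figure legend in this article.
REGIONS = {
    "5'UTR": (1, 722),   "VP4": (723, 929),   "VP2": (930, 1712),
    "VP3": (1713, 2426), "VP1": (2427, 3332), "2A": (3333, 3773),
    "2B": (3774, 4069),  "2C": (4070, 5057),  "3A": (5058, 5323),
    "3B": (5324, 5390),  "3C": (5391, 5938),  "3D": (5939, 7324),
}

def percent_identity(a, b):
    """Percent of positions at which two equal-length sequences agree."""
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

def segment_identities(genome_a, genome_b):
    """Table 1-style per-region identity for two genomes aligned in
    D00627 coordinates (a gap-free alignment is assumed for simplicity)."""
    return {region: round(percent_identity(genome_a[s - 1:e], genome_b[s - 1:e]), 2)
            for region, (s, e) in REGIONS.items()}
```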
Nucleotide identity in the 5’ untranslated region (5’ UTR) between CVA-9_2003 and CVA-9_2010 was higher (96.5%) than with the prototype CVA-9 sequence, which was around 85%. The nucleotide and amino acid identity was high (around 95%) between the CVA-9_2003 and CVA-9_2010 samples in the structural genes (VP4, VP2, VP3 and VP1). The nucleotide identity in the structural genes for both these samples was lower (around 80%) when compared to the prototype CVA-9 sequence, but amino acid identity was high (around 90%), indicating that the nucleotide changes consisted mostly of synonymous mutations. CVA-9_2010 showed some sequence diversity from the CVA-9_2003 isolate in the 3’ non-structural domain. This domain encodes the 2A protease, 2B protein, 2C helicase, 3A protein, 3B protein (VPg), 3C protease and 3D protein (polymerase). The nucleotide sequence identity in the 2A protease and 2B protein regions was 96.15% and 93.60%, respectively; identity in the 2C helicase, 3A protein and 3D protein was around 80%. The lowest nucleotide identity was observed in the 3B protein at 74.63% and the 3C protease at 77.78%; however, amino acid conservation in these genes was higher, as indicated in Table 1. Comparison of these gene segments to the prototype CVA-9 strain showed a higher percentage of amino acid identity than nucleotide identity.\nPercent identity comparison of nucleotide and amino acid sequences for isolates from 2003 (JQ837913), 2010 (JQ837914) and the prototype CVA-9 sequence (D00627)\nPhylogenetic analysis comparing the nucleotide sequences of all the species B prototype sequences to CVA-9_2003 and CVA-9_2010 shows that in the VP2 region, these sequences cluster with the prototype CVA-9 sequence with a high bootstrap value; a similar pattern was observed for the VP1 and VP3 regions. However, in the VP4 region, the closest neighbour to the CVA-9_2003 and CVA-9_2010 sequences is enterovirus 101 (EV-101) and there is significant divergence from the prototype CVA-9 sequence at the nucleotide level (data not shown). Table 2 lists the nearest neighbour based on phylogenetic analysis for the structural and non-structural gene segments, compared to the gene sequences from prototype strains. As indicated in the table, the regions 2A, 2B, and 3B from CVA-9_2003 and CVA-9_2010 cluster together; however, the 2C, 3A, 3C and 3D regions from CVA-9_2003 and CVA-9_2010 cluster with different enterovirus species B prototype sequences, indicating base pair changes or recombination in these regions. The probable mosaic nature of the genome of CVA-9_2003 and CVA-9_2010 was further explored by performing Simplot analysis. Figure 4A compares the sequence of CVA-9_2003 to that of the enteroviruses identified in Table 2 as the most homologous in each gene. All the viruses are highly homologous within the 5’NC, and the higher score of EV-74 may or may not reflect the origin of the 5’UTR segment of the Alberta isolates; nevertheless, sequencing the 5’UTR would not allow for accurate serotyping. The analysis shows that in the VP2-VP3-VP1 region CVA-9_2003 is most homologous to the prototype CVA-9; it is speculated that the differences are attributable to genetic drift over the years. It is interesting to note that in the VP4 region the highest homology is with EV-101, suggesting that recombination with EV-101 (or a very similar virus) might have occurred. This suggests that relying only on the VP4 sequence would not allow for accurate serotyping.
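For readers unfamiliar with the tool, the quantity Simplot plots is simply a windowed percent identity between a query and each reference, scanned along the alignment. A hedged re-implementation of that core idea, not of the Simplot program itself, with arbitrary window and step sizes, would be:

```python
def sliding_similarity(query, reference, window=200, step=20):
    """Windowed percent identity of two aligned, equal-length sequences;
    this is the y-axis quantity of a Simplot-style panel."""
    points = []
    for start in range(0, len(query) - window + 1, step):
        q = query[start:start + window]
        r = reference[start:start + window]
        identity = 100.0 * sum(a == b for a, b in zip(q, r)) / window
        points.append((start + window // 2, identity))  # (midpoint, % id)
    return points

# Scanning the CVA-9_2010 query against each candidate parent (e.g. CVB-6,
# EV-86, EV-100) and plotting the points would recreate the shape of Figure 4.
```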
Simplot analysis also shows that in the non-structural coding region, recombination with other viruses of species B has occurred, although the dominance of individual viruses is not clear enough to attribute the origin of the segments with certainty. Similarly, Figure 4B compares the sequence of CVA-9_2010 to the enteroviruses identified in Table 2. Similar comments can be made regarding the sequence of CVA-9_2010, although in the non-structural domain, identifying CVB-6, EV-86 and EV-100 as contributing to the recombination events appears more convincing. Tables 1 and 2 show that there is a high nucleotide identity between CVA-9_2003 and CVA-9_2010 in the region upstream of that coding for protein 2B. It can be speculated that CVA-9_2010 arose from recombination events between CVA-9_2003, CVB-6, EV-86 and EV-100. This hypothesis is examined by Simplot analysis in Figure 4C. This analysis suggests that CVA-9 (2010 and 2003) are highly homologous from the 5’NC to the 2B domain; recombination with other viruses clearly occurred beyond this point, with contributions from EV-86 and EV-100, and likely from CVB-6.\nClosest neighbour to the 2003 and 2010 samples based on phylogenetic analysis\nPhylogenetic analysis was performed using all the available prototype sequences belonging to enterovirus species B.\nThe phylogenetic tree is based on the complete nucleotide sequences of the VP2 and VP4 genes; GenBank numbers for the prototype sequences used were D00627_CVA-9; M16560_CVB1; AF085363_CVB2; M16572_CVB3; X05690_CVB4; AF114383_CVB5; AF105342_CVB6; AF029859_E1; AY302545_E2; AY302553_E3; AY302557_E4; AF083069_E5; AY302558_E6; AY302559_E7; X84981_E9; X80059_E11; X79047_E12; AY302539_E13; AY302540_E14; AY302541_E15; AY302542_E16; AY302543_E17; AF317694_E18; AY302544_E19; AY302546_E20; AY302547_E21; AY302548_E24; AY302549_E25; AY302550_E26; AY302551_E27; AY302552_E29; AF162711_E30; AY302554_E31; AY302555_E32; AY302556_E33; AY302560_EV69; AF241359_EV73; AY556057_EV74; AY556070_EV75; AJ493062_EV77; AY843297_EV79; AY843298_EV80; AY843299_EV81; AY843300_EV82; AY843301_EV83; DQ902712_EV84; AY843303_EV85; AY843304_EV86; AY843305_EV87; AY843306_EV88; AY843307_EV97; NC_013115.1_EV107; DQ902713_EV100; AY843308_EV101. The GenBank numbers for the in-house sequenced samples from 2003 and 2010 are JQ837913 and JQ837914.\nSimplot analysis of full genome sequences of CVA-9_2003, CVA-9_2010 and related species B sequences. Genomic regions are indicated at the bottom of each panel based on numbering of the prototype CVA-9 (D00627): 5’UTR:1–722; VP4:723–929; VP2:930–1712; VP3:1713–2426; VP1:2427–3332; 2A:3333–3773; 2B:3774–4069; 2C:4070–5057; 3A:5058–5323; 3B:5324–5390; 3C:5391–5938; 3D:5939–7324; 3’UTR:7325 to >7403. The query sequences used are CVA-9_2003 in panel 4A and CVA-9_2010 in panels 4B and 4C. Arrows in panels 4A and 4B indicate recombination with EV-101 in the VP4 region.", "Of the 4323 samples tested, 213 were positive for enteroviruses (4.93%). The majority of the positives were CSF samples (n = 157, 73.71%) followed by specimens of respiratory origin (n = 17, 7.98%), stools (n = 10, 4.70%) and plasma (n = 8, 3.76%). The rest of this analysis concentrates on CSF samples. A total of 72 positive CSF samples were randomly selected to represent different age groups, geographic locations and disease severity for serotype determination based on phylogenetic clustering of the partial sequence of the VP2 region with the prototype sequences. More than 85% of samples could be typed without the need for culture or nested PCR (data not shown). This RT-PCR detects both enteroviruses and rhinoviruses, and allows for differentiation based on the length of the amplicon.\nThe most common serotype detected was CVA-9 (n = 59, 81.94%) and it was clearly the predominant serotype in the population.
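The serotype calls above came from clustering the partial VP2 amplicon with prototype sequences in a tree. A cruder but instructive shortcut, shown below purely as an illustration (pre-aligned 173-base VP2 fragments and the mapping variable are assumed; the study's assignments were made from the phylogenetic tree, not this way), is to report the prototype with the highest identity to the query:

```python
def nearest_prototype(query, prototype_vp2):
    """Return (serotype, % identity) for the prototype VP2 fragment closest
    to the query; `prototype_vp2` maps serotype names to fragments that are
    pre-aligned with (and the same length as) the query."""
    def identity(a, b):
        return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

    best = max(prototype_vp2, key=lambda name: identity(query, prototype_vp2[name]))
    return best, identity(query, prototype_vp2[best])
```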
Closer examination of age distribution revealed that the majority (n = 56, 94.92%) of the CVA-9 positives were from patients ranging in age from 15 to 29 years; only three (5.08%) CVA-9 positives were detected in children less than a year old. The other frequently detected serotypes were CVB-3 (n = 3) and CVB-4 (n = 2). Figures 1A and 1B show the distribution of the different serotypes detected from infants less than one year old and from patients older than one year. A phylogenetic tree including representatives of the different enterovirus serotypes detected and some representative prototype strains in the VP2 region is shown in Figure 2. The VP2 sequence from all the sequenced isolates was identical, showing that the outbreak was associated with a clonal or nearly clonal CVA-9 population.\nDistribution of the different serotypes detected in the sequenced samples. A indicates the serotype distribution in samples from patients less than one year old and B from patients over one year old.\nPhylogenetic tree including representatives of the different serotypes detected and corresponding prototype sequences (in italics). The alignment used for the phylogenetic tree included 173 bases of the VP2 region from the clinical specimens; the same region from the prototype strains is also included. The GenBank accession numbers for the prototype strains are included in the legend of Table 2. Patient samples are identified using a laboratory number followed by the enterovirus type.", "Of the 59 samples positive for CVA-9, roughly equal numbers were detected in males (n = 30) and females (n = 29). Figure 3 shows the age distribution and percentage of CSF samples positive for CVA-9 within the different age groups tested. The data indicate that the majority of the CVA-9 positives were detected in patients between 15 and 30 years old.\nAge distribution of patients positive for enterovirus CVA-9. The bars represent the total number of samples positive for CVA-9 and the broken line indicates the percentage of samples positive for CVA-9 among the serotyped samples in each age group. The solid line indicates the percentage of samples positive for all enteroviruses in each age group based on the number of CSF samples tested within that age group.", "Samples from January 2010 to December 2010 were used in this study; peak months for the detection of CVA-9 positives in CSF samples were between March and October.", "The genomic sequences of the CVA-9_2003 and CVA-9_2010 have been deposited in GenBank under accession numbers JQ837913 and JQ837914, respectively. Comparisons of the nucleotide and amino acid sequences in each of the gene segments from CVA-9_2003 and CVA-9_2010 to the prototype CVA-9 sequence are shown in Table 1. Nucleotide identity in the 5’ untranslated region (5’ UTR) between CVA-9_2003 and CVA-9_2010 was higher (96.5%) than with the prototype CVA-9 sequence, which was around 85%. The nucleotide and amino acid identity was high (around 95%) between the CVA-9_2003 and CVA-9_2010 samples in the structural genes (VP4, VP2, VP3 and VP1). The nucleotide identity in the structural genes for both these samples was lower (around 80%) when compared to the prototype CVA-9 sequence, but amino acid identity was high (around 90%), indicating that the nucleotide changes consisted mostly of synonymous mutations. CVA-9_2010 showed some sequence diversity from the CVA-9_2003 isolate in the 3’ non-structural domain.
This domain encodes the 2A protease, 2B protein, 2C helicase, 3A protein, 3B protein (VPg), 3C protease and 3D protein (polymerase). The nucleotide sequence identity in the 2A protease and 2B protein regions was 96.15% and 93.60%, respectively; identity in the 2C helicase, 3A protein and 3D protein was around 80%. The lowest nucleotide identity was observed in the 3B protein at 74.63% and the 3C protease at 77.78%; however, amino acid conservation in these genes was higher, as indicated in Table 1. Comparison of these gene segments to the prototype CVA-9 strain showed a higher percentage of amino acid identity than nucleotide identity.\nPercent identity comparison of nucleotide and amino acid sequences for isolates from 2003 (JQ837913), 2010 (JQ837914) and the prototype CVA-9 sequence (D00627)\nPhylogenetic analysis comparing the nucleotide sequences of all the species B prototype sequences to CVA-9_2003 and CVA-9_2010 shows that in the VP2 region, these sequences cluster with the prototype CVA-9 sequence with a high bootstrap value; a similar pattern was observed for the VP1 and VP3 regions. However, in the VP4 region, the closest neighbour to the CVA-9_2003 and CVA-9_2010 sequences is enterovirus 101 (EV-101) and there is significant divergence from the prototype CVA-9 sequence at the nucleotide level (data not shown). Table 2 lists the nearest neighbour based on phylogenetic analysis for the structural and non-structural gene segments, compared to the gene sequences from prototype strains. As indicated in the table, the regions 2A, 2B, and 3B from CVA-9_2003 and CVA-9_2010 cluster together; however, the 2C, 3A, 3C and 3D regions from CVA-9_2003 and CVA-9_2010 cluster with different enterovirus species B prototype sequences, indicating base pair changes or recombination in these regions. The probable mosaic nature of the genome of CVA-9_2003 and CVA-9_2010 was further explored by performing Simplot analysis. Figure 4A compares the sequence of CVA-9_2003 to that of the enteroviruses identified in Table 2 as the most homologous in each gene. All the viruses are highly homologous within the 5’NC, and the higher score of EV-74 may or may not reflect the origin of the 5’UTR segment of the Alberta isolates; nevertheless, sequencing the 5’UTR would not allow for accurate serotyping. The analysis shows that in the VP2-VP3-VP1 region CVA-9_2003 is most homologous to the prototype CVA-9; it is speculated that the differences are attributable to genetic drift over the years. It is interesting to note that in the VP4 region the highest homology is with EV-101, suggesting that recombination with EV-101 (or a very similar virus) might have occurred. This suggests that relying only on the VP4 sequence would not allow for accurate serotyping. Simplot analysis also shows that in the non-structural coding region, recombination with other viruses of species B has occurred, although the dominance of individual viruses is not clear enough to attribute the origin of the segments with certainty. Similarly, Figure 4B compares the sequence of CVA-9_2010 to the enteroviruses identified in Table 2. Similar comments can be made regarding the sequence of CVA-9_2010, although in the non-structural domain, identifying CVB-6, EV-86 and EV-100 as contributing to the recombination events appears more convincing. Tables 1 and 2 show that there is a high nucleotide identity between CVA-9_2003 and CVA-9_2010 in the region upstream of that coding for protein 2B.
It can be speculated that CVA-9_2010 arose from recombination events between CVA-9_2003, CVB-6, EV-86 and EV-100. This hypothesis is examined by Simplot analysis in Figure 4C. This analysis suggests that CVA-9 (2010 and 2003) are highly homologous from the 5’NC to the 2B domain; recombination with other viruses clearly occurred beyond this point, with contributions from EV-86 and EV-100, and likely from CVB-6.\nClosest neighbour to the 2003 and 2010 samples based on phylogenetic analysis\nPhylogenetic analysis was performed using all the available prototype sequences belonging to enterovirus species B.\nThe phylogenetic tree is based on the complete nucleotide sequences of the VP2 and VP4 genes; GenBank numbers for the prototype sequences used were D00627_CVA-9; M16560_CVB1; AF085363_CVB2; M16572_CVB3; X05690_CVB4; AF114383_CVB5; AF105342_CVB6; AF029859_E1; AY302545_E2; AY302553_E3; AY302557_E4; AF083069_E5; AY302558_E6; AY302559_E7; X84981_E9; X80059_E11; X79047_E12; AY302539_E13; AY302540_E14; AY302541_E15; AY302542_E16; AY302543_E17; AF317694_E18; AY302544_E19; AY302546_E20; AY302547_E21; AY302548_E24; AY302549_E25; AY302550_E26; AY302551_E27; AY302552_E29; AF162711_E30; AY302554_E31; AY302555_E32; AY302556_E33; AY302560_EV69; AF241359_EV73; AY556057_EV74; AY556070_EV75; AJ493062_EV77; AY843297_EV79; AY843298_EV80; AY843299_EV81; AY843300_EV82; AY843301_EV83; DQ902712_EV84; AY843303_EV85; AY843304_EV86; AY843305_EV87; AY843306_EV88; AY843307_EV97; NC_013115.1_EV107; DQ902713_EV100; AY843308_EV101. The GenBank numbers for the in-house sequenced samples from 2003 and 2010 are JQ837913 and JQ837914.\nSimplot analysis of full genome sequences of CVA-9_2003, CVA-9_2010 and related species B sequences. Genomic regions are indicated at the bottom of each panel based on numbering of the prototype CVA-9 (D00627): 5’UTR:1–722; VP4:723–929; VP2:930–1712; VP3:1713–2426; VP1:2427–3332; 2A:3333–3773; 2B:3774–4069; 2C:4070–5057; 3A:5058–5323; 3B:5324–5390; 3C:5391–5938; 3D:5939–7324; 3’UTR:7325 to >7403. The query sequences used are CVA-9_2003 in panel 4A and CVA-9_2010 in panels 4B and 4C. Arrows in panels 4A and 4B indicate recombination with EV-101 in the VP4 region.", "Recombination between enteroviruses is quite frequent, and occurs typically within the same species [1,27]. It was initially thought that the 5’UTR and P1 regions move independently during recombination, and the frequency of recombination was higher in the non-structural coding region [22]. This led to the conclusion that serotyping by molecular methods involving sequencing of the P1 region should be equivalent to sequencing segments of this region (VP1, 2, 3 or 4). Several studies have confirmed the finding that sequencing of the VP1 or VP2 regions yields similar typing results compared to sequencing the entire P1 region [1,14,28]. Typing using the VP4 region has yielded conflicting results, including studies reporting both consistent and inconsistent results with classical serotyping [28,29]; Perera et al. have noted that the VP4 sequence is unreliable for serotyping of viruses in species B and C [29]. This inconsistency seems to result from recombination events involving the VP4 region [22].
Intuitively, one may still regard the segment coding for the external domains of the capsid as recombining as a block, since the VP4 domain remains completely inside the capsid and is inaccessible to antibodies [1].\nThe amplicon used for serotype determination includes a segment from the 5’UTR, the VP4 region and a segment of the VP2 region; but given the possibility of recombination outside the VP2-VP3-VP1 region, only the sequence within VP2 was used for the typing of samples in this study. It was noted that if serotype assignment is made based on BLAST comparisons with the GenBank database sequences, there is a potential for inaccurate serotype assignment, since contemporary isolates submitted to GenBank may have been serotyped based on the VP4 sequence, which can recombine independently of the VP2-VP3-VP1 segment. In this study, the serotype assignment was achieved by constructing a phylogenetic tree with the partial VP2 sequence obtained by sequencing the amplicon and the corresponding VP2 sequences of the original prototype strains [1]. In spite of the genetic drift evident in the sequences of contemporary viruses, this method provided unambiguous serotype assignment. Instances of recombination within the VP2-VP3-VP1 segment have only rarely been reported. Al-Hello and colleagues reported on an isolate that was identified as echovirus 11 by genetic analysis, but could be neutralized by antisera specific for echovirus 11 or CVA-9. A peptide scan analysis confirmed the presence of epitopes recognized by both antisera, whereas a Simplot analysis failed to reveal a recombination event within the capsid [30]. Should reports of isolates with similar properties become more frequent, the very concept of enterovirus serotype might become untenable. Notwithstanding these considerations, at this point it remains valuable to serotype isolates, because the serotype is determined by the neutralizing epitopes, and remains important for the molecular epidemiology of these viruses. In addition, the capsid determines the choice of the cellular receptor; consequently, the serotype is an important factor in determining the pathogenicity of an isolate [23]; however, the remainder of the genome also contributes to pathogenicity [24] and complete genome sequencing is required for full characterisation of an isolate.\nDuring the 2010 outbreak, a different serotype distribution was noted among infants (< 1 year) and older individuals; this difference in distribution of serotypes between age groups has been noted before [31,32]. The serotypes found among infants included in our study have been typically associated with neonatal disease, for example, Coxsackie B3 and B4, echovirus 9, and EV-71. It was interesting to observe one isolate with the VP2 region most similar to the newly described EV-97 [20]. Among the rest of the population, CVA-9 was clearly the predominant serotype. Phylogenetic analysis of the VP2 region from the CVA-9 isolates showed that all the isolates had the same sequences and therefore originated from the same strain.\nComparison of the capsid regions showed the CVA-9_2003 and CVA-9_2010 isolates to be highly homologous and this was corroborated by Simplot analysis; consequently, it is speculated that the sudden increase in detection of CVA-9 in CSF samples is attributable to an increased number of individuals susceptible to CVA-9 in the population rather than to the emergence of an immune escape mutant in 2010.
For an infectious agent to cause an outbreak, it is necessary that the herd immunity of the population drops below a threshold that is determined by the basic reproduction number R0 (for a homogeneously mixing population, this threshold is classically a fraction 1 - 1/R0 of the population being immune) [33]. For CVA-9 to re-enter the population of Alberta in 2010, either the new isolate was an immune escape mutant against which the population had no prior immunity, or the herd immunity had declined below the threshold. Because of the very high homology in the capsid sequences between the 2003 and 2010 isolates, it does not seem likely that the 2010 isolate was an immune escape mutant.\nA comparison with the sequence of the prototype CVA-9 shows that the two Alberta isolates inherited their VP2-VP3-VP1 segment from the prototype, but the VP4 segment shows sequence divergence. For all the other protein coding segments, phylogenetic trees of the nucleotide sequences were constructed to compare the Alberta strains with the prototype strains within species B, and Simplot analyses were performed. Two important observations can be made: firstly, the two Alberta isolates are highly homologous in most regions, with few changes that can be largely attributed to sequence drift; however, recombination has likely occurred in the 2C, 3A, 3C, and 3D regions. Secondly, the Alberta isolates are very different from the prototype CVA-9 in the VP4 segment (supporting the concept that VP4 can be a site for recombination). The 5’NC, structural and non-structural regions appear to be components of a mosaic genome modified from the prototype CVA-9 by recombination and genetic drift. Similar observations have been made on other contemporary CVA-9 strains [34], and indeed in other contemporary enteroviruses within species B [22,27]. How much the pathogenicity of these CVA-9 viruses has been modified by the non-structural genes, compared to the prototype strain, is unclear [22].", "In summary, the sudden increase in cases of aseptic meningitis in Alberta in 2010 was associated with enterovirus CVA-9. The capsid region was highly homologous to the capsid of a 2003 isolate, suggesting that the infections were not the result of the emergence of an immune escape mutant. We thus speculate that the increase in the number of infections may have resulted from a decline in the level of herd immunity against this virus to a level where the virus was able to penetrate the population. When compared to the prototype strain of CVA-9, the Alberta isolates displayed signs of multiple recombination events in addition to genetic drift.", " Screening for enteroviruses Specimens submitted to the Provincial Laboratory for Public Health (ProvLab) for enterovirus testing from January 1 to December 31, 2010 were included in this study. A total of 4323 samples from patients ranging in age from one day to 97 years were tested; the most common specimen types tested included CSF (n = 2687, 62.16%), plasma (n = 497, 11.50%), nasopharyngeal and throat swabs (n = 213, 4.93%) and stools (n = 103, 2.40%). Viral RNA was extracted using the easyMAG® automated extractor (BioMérieux, Durham, NC, USA) and the extracted nucleic acid was screened for enteroviruses using a previously published NASBA assay [35].\n Serotyping of enteroviruses Enteroviruses were serotyped using one-step RT-PCR to amplify the partial 5’ untranslated region (293 bp), the VP4 region (207 bp) and the partial VP2 region (250 bp) using previously described primers 1-EV/RV and 2-EV/RV from RNA extracted directly from the specimen [36]. Sequencing was performed without the need for nested amplification. Amplification was performed using the One-step RT-PCR kit from Qiagen (Ontario, Canada) with 10 μl of 5X buffer, 10 μl of Q solution, 2 μl of 10 mM dNTPs, 2 μl of enzyme, 0.125 μl of RNaseOUT (Life Technologies, Ontario, Canada), 0.8 μM of primers and 5 μl of template nucleic acid in a total volume of 50 μl. The reverse transcription step was performed at 50°C for 30 minutes, followed by enzyme activation at 95°C for 15 minutes. The amplification protocol included 45 cycles of denaturation at 95°C for 30 seconds, followed by annealing at 55°C for 1.5 minutes and amplification at 72°C for 60 seconds. A final extension step was performed for 10 minutes at 72°C followed by cooling. Amplified products were sequenced in both directions on the ABI PRISM 3130-Avant Genetic Analyzer (Applied Biosystems (ABI), Foster City, CA).\n Complete genome sequencing of CVA-9 Two enterovirus-positive samples from 2003 (CVA-9_2003) and 2010 (CVA-9_2010), with a high viral load as estimated by the Ct value and strong growth in primary Rhesus Monkey Kidney cells from Diagnostic Hybrids (Ohio, USA), were used. The near-complete genome of these viruses was amplified by long RT-PCR as previously described [37]. The amplicons were sequenced by genome walking and contig assembly was performed using SeqScape v2.5 (ABI).
The genomic sequences of the CVA-9_2003 and CVA-9_2010 have been deposited in GenBank under accession numbers JQ837913 and JQ837914, respectively.\nTwo enterovirus positive samples from 2003 (CVA-9_2003) and 2010 (CVA-9_2010) with a high viral load as estimated by the Ct value, that exhibited strong growth in primary Rhesus Monkey Kidney cells from Diagnostic Hybrids (Ohio, USA) were used. The near-complete genome of these viruses was amplified by long RT-PCR as previously described [37]. The amplicons were sequenced by genome walking and contig assembly was performed using seqscape v2.5 (ABI). The genomic sequences of the CVA-9_2003 and CVA-9_2010 have been deposited in GenBank under accession numbers JQ837913 and JQ837914, respectively.\n Sequence analysis Sequences were aligned using ClustalX (Version 1.81, March 2000; ftp://ftp-igbmc.u-strasbg.fr/pub/ClustalX/) and phylogenetic analysis was conducted using Treecon [38]. Distance estimation was performed using the Jukes and Cantor distance correction (1969), topology inference was performed using the neighbour joining method, and bootstraping was done using 1000 replicates and the tree was re-rooted at the internode. Simplot analysis were performed using alignments done with CLustalX and the Simplot for Windows v3.5.1 program [39].\nSequences were aligned using ClustalX (Version 1.81, March 2000; ftp://ftp-igbmc.u-strasbg.fr/pub/ClustalX/) and phylogenetic analysis was conducted using Treecon [38]. Distance estimation was performed using the Jukes and Cantor distance correction (1969), topology inference was performed using the neighbour joining method, and bootstraping was done using 1000 replicates and the tree was re-rooted at the internode. Simplot analysis were performed using alignments done with CLustalX and the Simplot for Windows v3.5.1 program [39].", "Specimens submitted to the Provincial Laboratory for Public Health (ProvLab) for enterovirus testing from January 1 to December 31, 2010 were included in this study. A total of 4323 samples from patients ranging in age from one day old to 97 years of age were tested; the most common specimen types tested included CSF (n = 2687, 62.16%), plasma (n = 497, 11.50%), nasopharyngeal and throat swabs (n = 213, 4.93%) and stools (n = 103, 2.40%). Viral RNA was extracted using the easyMAG® automated extractor (BioMérieux, Durham, NC, USA) and the extracted nucleic acid was screened for enteroviruses using a previously published NASBA assay [35].", "Enteroviruses were serotyped using one-step RT-PCR to amplify the partial 5’untranslated region (293 bp), the VP4 region (207 bp) and partial VP2 region (250 bp) using previously described primers 1-EV/RV and 2-EV/RV from RNA extracted directly from the specimen [36]. Sequencing was performed without the need for nested amplification. Amplification was performed using the One-step RT-PCR kit from Qiagen (Ontario, Canada) using 10 μl of 5X buffer, 10 μl of Q solution, 2 μl of 10 mM dNTPs, 2 μl of enzyme, 0.125 μl of RNaseOUT (Life Technologies, Ontario, Canada), 0.8 μM of primers and 5 μl of template nucleic acid in a total volume of 50 μl. The reverse transcription step was performed at 50°C for 30 mins, followed by enzyme activation at 95°C for 15 mins. The amplification protocol included 45 cycles of denaturation at 95°C for 30 seconds, followed by annealing at 55°C for 1.5 minutes and amplification at 72°C for 60 seconds. A final extension step was performed for 10 minutes at 72°C followed by cooling. 
Amplified products were sequenced in both directions on the ABI PRISM 3130-Avant Genetic Analyzer (Applied Biosystems (ABI), Foster City, CA).", "Two enterovirus positive samples from 2003 (CVA-9_2003) and 2010 (CVA-9_2010) with a high viral load as estimated by the Ct value, that exhibited strong growth in primary Rhesus Monkey Kidney cells from Diagnostic Hybrids (Ohio, USA) were used. The near-complete genome of these viruses was amplified by long RT-PCR as previously described [37]. The amplicons were sequenced by genome walking and contig assembly was performed using seqscape v2.5 (ABI). The genomic sequences of the CVA-9_2003 and CVA-9_2010 have been deposited in GenBank under accession numbers JQ837913 and JQ837914, respectively.", "Sequences were aligned using ClustalX (Version 1.81, March 2000; ftp://ftp-igbmc.u-strasbg.fr/pub/ClustalX/) and phylogenetic analysis was conducted using Treecon [38]. Distance estimation was performed using the Jukes and Cantor distance correction (1969), topology inference was performed using the neighbour joining method, and bootstraping was done using 1000 replicates and the tree was re-rooted at the internode. Simplot analysis were performed using alignments done with CLustalX and the Simplot for Windows v3.5.1 program [39].", "CSF: Cerebrospinal fluid; CVA: Coxsackie virus A; CVB: Coxsackie virus B; EV: Enterovirus; UTR: Untranslated region; NASBA: Nucleic acid sequence based amplification; RT-PCR: Reverse-transcription polymerase chain reaction; ABI: Applied biosystems; CVA-9: Coxsackie virus A9", "The authors declare that they have no competing interests.", "EC carried out the molecular studies and participated in the sequence alignment. KP, SW and RT conceived of the study, participated in its design, coordination, and analysis. KP wrote the manuscript, SW and RT edited the manuscript. All authors read and approved the final manuscript." ]
[ null, "results", null, null, null, null, "discussion", "conclusions", "methods", null, null, null, null, null, null, null ]
[ "Human enteroviruses", "Coxsackie A9", "Aseptic meningitis", "Serotyping" ]
Background: The enterovirus genome consists of a single open reading frame flanked by the 5’ and 3’ untranslated regions (UTRs), and the encoded polyprotein is cleaved to produce the structural and nonstructural proteins [1]. Although most infections are asymptomatic, non-polio enteroviruses are the most common infectious cause of aseptic meningitis [1]. Several outbreaks resulting from different enterovirus serotypes have been described, including but not restricted to Coxsackievirus A9 (CVA-9) [2], EV71 [3,4], EV68 [5], Coxsackievirus A24 [6], echovirus 18 [7], echovirus 30 [8,9] and Coxsackievirus A16 [10].

Non-polio enteroviruses were traditionally classified into serotypes based on a neutralization assay and included 64 classical isolates consisting of Coxsackieviruses A, Coxsackieviruses B, echoviruses and several numbered enteroviruses [11]. Genetic studies led to a new classification scheme grouping the enteroviruses into species A, B, C and D, although serotype determination is still widely used, including for epidemiological purposes. Typing methods based on sequencing have been previously described for the VP1 [12-14], VP2 [15,16] and VP4 [17] regions. Molecular typing methods based on P1 genomic region sequences have been found to yield results equivalent to neutralization assays, which they have essentially replaced [1,14]. Molecular typing has also led to the characterization of new enterovirus serotypes [4,18-20] and provides information on recombination between enteroviruses, a common phenomenon that almost always occurs between viruses belonging to the same species [1,21,22]. Serotype determination remains important for molecular epidemiology, in part because the capsid contains the neutralization epitopes, and a rise or fall in the incidence of a serotype can be linked to serotype-specific humoral immunity within the population [21]. The capsid also determines the cellular receptors used by the virus [23] and is thus an important determinant of viral pathogenesis, although non-structural proteins also contribute to the pathogenesis of the virus [24].

An unusually high incidence of aseptic meningitis was noted in Alberta, Canada between March and October 2010 [25]. A high proportion of CSF samples submitted to the Provincial Laboratory for Public Health of Alberta were positive for enteroviruses, and serotyping by molecular methods revealed that the majority of these enteroviruses belonged to the CVA-9 serotype. The primary goal of this study was to provide a full genetic characterization of the CVA-9 isolates responsible for this outbreak. Secondary goals were to comment on the methods of molecular serotyping for the diagnostic laboratory and on the use of long RT-PCR as a convenient method to obtain the near full-length genomic sequence of enteroviruses, especially when recombination events are involved in the genesis of the viral genome. The genome sequence of these isolates was determined and compared to the sequence of a CVA-9 isolate from Alberta in 2003 and to the prototype CVA-9 sequence (strain Griggs; D00627) [26]. Serotyping by sequencing within the VP2 region was found to be more reliable than within the VP4 region. A comparison of the different genes revealed higher nucleotide and amino acid conservation in the structural regions, and analysis of the sequence of the non-structural region pointed to recombination events in the genesis of the 2010 isolate.
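Sequence-based typing of the kind described above reduces, in essence, to comparing a capsid fragment from a clinical specimen against the prototype strains. The following is a minimal Python sketch of that nearest-prototype principle; the function names and placeholder sequences are hypothetical, and the study itself assigned serotypes by phylogenetic clustering rather than raw identity (see Methods).

    # Minimal sketch: assign a tentative serotype by nearest prototype in VP2.
    # Prototype sequences here are hypothetical placeholders; real typing in
    # this study used phylogenetic clustering of partial VP2 sequences.

    def identity(a: str, b: str) -> float:
        """Fraction of identical positions between equal-length aligned sequences."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def nearest_prototype(query_vp2: str, prototypes: dict[str, str]) -> tuple[str, float]:
        """Return (serotype, identity) of the best-matching prototype VP2 sequence."""
        best = max(prototypes, key=lambda s: identity(query_vp2, prototypes[s]))
        return best, identity(query_vp2, prototypes[best])

    # Usage with hypothetical aligned VP2 fragments:
    # prototypes = {"CVA-9": "...", "CVB-3": "...", "CVB-4": "..."}
    # serotype, ident = nearest_prototype(sample_vp2, prototypes)
    # print(f"{serotype} ({ident:.1%} identity)")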
Results:

Detection and serotyping: Of the 4323 samples tested, 213 were positive for enteroviruses (4.93%). The majority of the positives were CSF samples (n = 157, 73.71%), followed by specimens of respiratory origin (n = 17, 7.98%), stools (n = 10, 4.70%) and plasma (n = 8, 3.76%). The rest of this analysis concentrates on CSF samples. A total of 72 positive CSF samples were randomly selected to represent different age groups, geographic locations and disease severity for serotype determination, based on phylogenetic clustering of the partial VP2 sequence with the prototype sequences. More than 85% of samples could be typed without the need for culture or nested PCR (data not shown). This RT-PCR detects both enteroviruses and rhinoviruses and allows for differentiation based on the length of the amplicon. The most common serotype detected was CVA-9 (n = 59, 81.94%), clearly the predominant serotype in the population. Closer examination of the age distribution revealed that the majority (n = 56, 94.92%) of the CVA-9 positives were from patients ranging in age from 15 to 29 years; only three (5.08%) were detected in children less than a year old. The other frequently detected serotypes were CVB-3 (n = 3) and CVB-4 (n = 2). Figures 1A and 1B show the distribution of the different serotypes detected in infants less than one year old and in patients older than one year. A phylogenetic tree including representatives of the different enterovirus serotypes detected and some representative prototype strains in the VP2 region is shown in Figure 2. The VP2 sequence from all the sequenced isolates was identical, showing that the outbreak was associated with a clonal or nearly clonal CVA-9 population.

Figure 1. Distribution of the different serotypes detected in the sequenced samples. A indicates serotype distribution in samples from patients less than one year old and B from patients over one year old.

Figure 2. Phylogenetic tree including representatives of the different serotypes detected and corresponding prototype sequences (in italics). The alignment used for the phylogenetic tree included 173 bases of the VP2 region from the clinical specimens; the same region from the prototype strains is also included. The GenBank accession numbers for the prototype strains are listed in the legend of Table 2. Patient samples are identified using a laboratory number followed by the enterovirus type.

Age and gender distribution: Of the 59 samples positive for CVA-9, roughly equal numbers were detected in males (n = 30) and females (n = 29). Figure 3 shows the age distribution and the percentage of CSF samples positive for CVA-9 within the different age groups tested. The data indicate that the majority of the CVA-9 positives were detected in patients between 15 and 30 years old.

Figure 3. Age distribution of patients positive for enterovirus CVA-9. The bars represent the total number of samples positive for CVA-9; the broken line indicates the percentage of samples positive for CVA-9 among the serotyped samples in each age group; the solid line indicates the percentage of samples positive for all enteroviruses in each age group, based on the number of CSF samples tested within that age group.

Seasonality: Samples from January 2010 to December 2010 were used in this study; the peak months for detection of CVA-9 positives in CSF samples were between March and October.
Comparison of near full length genomes: The genomic sequences of CVA-9_2003 and CVA-9_2010 have been deposited in GenBank under accession numbers JQ837913 and JQ837914, respectively. Comparisons of the nucleotide and amino acid sequences in each of the gene segments from CVA-9_2003 and CVA-9_2010 to the prototype CVA-9 sequence are shown in Table 1. Nucleotide identity in the 5’ untranslated region (5’ UTR) between CVA-9_2003 and CVA-9_2010 was higher (96.5%) than with the prototype CVA-9 sequence (around 85%). The nucleotide and amino acid identity between the CVA-9_2003 and CVA-9_2010 samples was high (around 95%) in the structural genes (VP4, VP2, VP3 and VP1). The nucleotide identity in the structural genes for both these samples was lower (around 80%) when compared to the prototype CVA-9 sequence, but the amino acid identity was high (around 90%), indicating that the nucleotide changes consisted mostly of synonymous mutations. CVA-9_2010 showed some sequence diversity from the CVA-9_2003 isolate in the 3’ non-structural domain, which encodes the 2A protease, 2B protein, 2C helicase, 3A protein, 3B protein (VPg), 3C protease and 3D protein (polymerase). The nucleotide sequence identity in the 2A protease and 2B protein regions was 96.15% and 93.60%, respectively; identity in the 2C helicase, 3A protein and 3D protein regions was around 80%. The lowest nucleotide identities were observed in the 3B protein (74.63%) and the 3C protease (77.78%); however, amino acid conservation in these genes was higher, as indicated in Table 1. Comparison of these gene segments to the prototype CVA-9 strain showed a higher percent identity at the amino acid level than at the nucleotide level.

Table 1. Percent identity comparison of nucleotide and amino acid sequences for the isolates from 2003 (JQ837913) and 2010 (JQ837914) and the prototype CVA-9 sequence (D00627).
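The contrast in Table 1 between lower nucleotide identity and higher amino acid identity is what identifies the changes as mostly synonymous. A minimal sketch of the two calculations, assuming Biopython is available and the inputs are pre-aligned, gap-free, in-frame coding sequences (the toy fragments below are hypothetical, not study data):

    # Minimal sketch: percent identity at the nucleotide vs amino acid level.
    from Bio.Seq import Seq  # Biopython

    def nt_identity(a: str, b: str) -> float:
        """Fraction of identical positions between equal-length sequences."""
        assert len(a) == len(b)
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def aa_identity(a: str, b: str) -> float:
        """Identity after translating both sequences with the standard code."""
        pa, pb = str(Seq(a).translate()), str(Seq(b).translate())
        return sum(x == y for x, y in zip(pa, pb)) / len(pa)

    # Toy fragment: both third-position changes are synonymous, so nucleotide
    # identity drops while amino acid identity stays at 100%.
    seq_2003 = "ATGGCTTTAGGTCGA"
    seq_2010 = "ATGGCCTTAGGACGA"
    print(f"nt identity: {nt_identity(seq_2003, seq_2010):.1%}")  # 86.7%
    print(f"aa identity: {aa_identity(seq_2003, seq_2010):.1%}")  # 100.0%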
Phylogenetic analysis comparing the nucleotide sequences of all the species B prototype sequences to CVA-9_2003 and CVA-9_2010 shows that in the VP2 region these sequences cluster with the prototype CVA-9 sequence with a high bootstrap value; a similar pattern was observed for the VP1 and VP3 regions. However, in the VP4 region the closest neighbour to the CVA-9_2003 and CVA-9_2010 sequences is enterovirus 101 (EV-101), and there is significant divergence from the prototype CVA-9 sequence at the nucleotide level (data not shown). Table 2 lists the nearest neighbour based on phylogenetic analysis for the structural and non-structural gene segments, compared to the gene sequences from the prototype strains. As indicated in the table, the 2A, 2B and 3B regions from CVA-9_2003 and CVA-9_2010 cluster together; however, the 2C, 3A, 3C and 3D regions from CVA-9_2003 and CVA-9_2010 cluster with different enterovirus species B prototype sequences, indicating base pair changes or recombination in these regions.

The probable mosaic nature of the genomes of CVA-9_2003 and CVA-9_2010 was further explored by Simplot analysis. Figure 4A compares the sequence of CVA-9_2003 to those of the enteroviruses identified in Table 2 as the most homologous in each gene. All the viruses are highly homologous within the 5’NC region, and the higher score of EV-74 may or may not reflect the origin of the 5’UTR segment of the Alberta isolates; in any case, sequencing the 5’UTR would not allow for accurate serotyping. The analysis shows that in the VP2-VP3-VP1 region CVA-9_2003 is most homologous to the prototype CVA-9; it is speculated that the differences are attributable to genetic drift over the years. It is interesting to note that in the VP4 region the highest homology is with EV-101, suggesting that recombination with EV-101 (or a very similar virus) might have occurred; relying only on the VP4 sequence would therefore not allow for accurate serotyping. Simplot analysis also shows that recombination with other viruses of species B has occurred in the non-structural coding region, although the dominance of individual viruses is not clear enough to attribute the origin of the segments with certainty. Similarly, Figure 4B compares the sequence of CVA-9_2010 to the enteroviruses identified in Table 2. Similar comments can be made regarding the sequence of CVA-9_2010, although in the non-structural domain the identification of CVB-6, EV-86 and EV-100 as contributing to the recombination events appears more convincing. Tables 1 and 2 show a high nucleotide identity between CVA-9_2003 and CVA-9_2010 in the region upstream of that coding for protein 2B. It can be speculated that CVA-9_2010 arose from recombination events between CVA-9_2003, CVB-6, EV-86 and EV-100; this hypothesis is examined by Simplot analysis in Figure 4C. The analysis suggests that CVA-9_2010 and CVA-9_2003 are highly homologous from the 5’NC to the 2B domain; recombination with other viruses clearly occurred beyond this point, with contributions from EV-86 and EV-100, and likely from CVB-6.

Table 2. Closest neighbour to the 2003 and 2010 samples based on phylogenetic analysis. Phylogenetic analysis was performed using all the available prototype sequences belonging to enterovirus species B; trees are based on the complete nucleotide sequences of the VP2 and VP4 genes. GenBank accession numbers for the prototype sequences used: D00627_CVA-9; M16560_CVB1; AF085363_CVB2; M16572_CVB3; X05690_CVB4; AF114383_CVB5; AF105342_CVB6; AF029859_E1; AY302545_E2; AY302553_E3; AY302557_E4; AF083069_E5; AY302558_E6; AY302559_E7; X84981_E9; X80059_E11; X79047_E12; AY302539_E13; AY302540_E14; AY302541_E15; AY302542_E16; AY302543_E17; AF317694_E18; AY302544_E19; AY302546_E20; AY302547_E21; AY302548_E24; AY302549_E25; AY302550_E26; AY302551_E27; AY302552_E29; AF162711_E30; AY302554_E31; AY302555_E32; AY302556_E33; AY302560_EV69; AF241359_EV73; AY556057_EV74; AY556070_EV75; AJ493062_EV77; AY843297_EV79; AY843298_EV80; AY843299_EV81; AY843300_EV82; AY843301_EV83; DQ902712_EV84; AY843303_EV85; AY843304_EV86; AY843305_EV87; AY843306_EV88; AY843307_EV97; NC_013115.1_EV107; DQ902713_EV100; AY843308_EV101. The GenBank accession numbers for the in-house sequenced samples from 2003 and 2010 are JQ837913 and JQ837914.

Figure 4. Simplot analysis of the full genome sequences of CVA-9_2003, CVA-9_2010 and related species B sequences. Genomic regions are indicated at the bottom of each panel based on the numbering of the prototype CVA-9 (D00627): 5’UTR: 1–722; VP4: 723–929; VP2: 930–1712; VP3: 1713–2426; VP1: 2427–3332; 2A: 3333–3773; 2B: 3774–4069; 2C: 4070–5057; 3A: 5058–5323; 3B: 5324–5390; 3C: 5391–5938; 3D: 5939–7324; 3’UTR: 7325 to >7403. The query sequences used are CVA-9_2003 in panel 4A and CVA-9_2010 in panels 4B and 4C. Arrows in panels 4A and 4B indicate recombination with EV-101 in the VP4 region.
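The Simplot analysis underlying Figure 4 is conceptually a sliding-window identity scan: percent identity between the query and each candidate reference is computed in a window moved along the alignment, and recombination shows up as a crossover in which reference scores highest. A minimal sketch of the idea, not the Simplot program itself, using the D00627 coordinates above; the window and step sizes are illustrative, and the aligned sequences are assumed inputs:

    # Minimal sketch of a Simplot-style sliding-window identity scan.
    # `query` and `ref` are assumed to be pre-aligned, equal-length strings.

    # Genomic regions of the prototype CVA-9 (D00627), 1-based inclusive.
    REGIONS = {
        "5'UTR": (1, 722),   "VP4": (723, 929),   "VP2": (930, 1712),
        "VP3": (1713, 2426), "VP1": (2427, 3332), "2A": (3333, 3773),
        "2B": (3774, 4069),  "2C": (4070, 5057),  "3A": (5058, 5323),
        "3B": (5324, 5390),  "3C": (5391, 5938),  "3D": (5939, 7324),
    }

    def window_identity(query: str, ref: str, window: int = 200, step: int = 20):
        """Yield (window midpoint, fraction identical) along the alignment."""
        for start in range(0, len(query) - window + 1, step):
            q, r = query[start:start + window], ref[start:start + window]
            yield start + window // 2, sum(a == b for a, b in zip(q, r)) / window

    def region_of(pos: int) -> str:
        """Map a 1-based alignment position to its genomic region."""
        for name, (lo, hi) in REGIONS.items():
            if lo <= pos <= hi:
                return name
        return "3'UTR"

    # Usage (hypothetical aligned sequences):
    # for mid, ident in window_identity(cva9_2010, ev101):
    #     print(f"{region_of(mid + 1)}\t{mid}\t{ident:.2f}")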
Discussion: Recombination between enteroviruses is quite frequent and typically occurs within the same species [1,27]. It was initially thought that the 5’UTR and P1 regions move independently during recombination, and that the frequency of recombination was higher in the non-structural coding region [22]. This led to the conclusion that serotyping by molecular methods involving sequencing of the P1 region should be equivalent to sequencing segments of this region (VP1, VP2, VP3 or VP4). Several studies have confirmed that sequencing of the VP1 or VP2 regions yields typing results similar to sequencing the entire P1 region [1,14,28]. Typing using the VP4 region has yielded conflicting results, with studies reporting both consistent and inconsistent results compared with classical serotyping [28,29]; Perera et al. have noted that the VP4 sequence is unreliable for serotyping of viruses in species B and C [29].
This inconsistency seems to result from recombination events involving the VP4 region [22]. Intuitively, one may still regard the segment coding for the external domains of the capsid as recombining as a block, since the VP4 domain remains completely inside the capsid and is inaccessible to antibodies [1]. The amplicon used for serotype determination includes a segment of the 5’UTR, the VP4 region and a segment of the VP2 region; given the possibility of recombination outside the VP2-VP3-VP1 region, only the sequence within VP2 was used for typing of samples in this study. It was noted that if serotype assignment is made based on BLAST comparisons with GenBank database sequences, there is a potential for inaccurate serotype assignment, since contemporary isolates submitted to GenBank may have been serotyped based on the VP4 sequence, which can reassort independently of the VP2-VP3-VP1 segment. In this study, serotype assignment was achieved by constructing a phylogenetic tree with the partial VP2 sequence obtained by sequencing the amplicon and the corresponding VP2 sequences of the original prototype strains [1]. In spite of the genetic drift evident in the sequences of contemporary viruses, this method provided unambiguous serotype assignment. Instances of recombination within the VP2-VP3-VP1 segment have only rarely been reported. Al-Hello and colleagues reported on an isolate that was identified as echovirus 11 by genetic analysis but could be neutralized by antisera specific for echovirus 11 or CVA-9; a peptide scan analysis confirmed the presence of epitopes recognized by both antisera, whereas a Simplot analysis failed to reveal a recombination event within the capsid [30]. Should reports of isolates with similar properties become more frequent, the very concept of the enterovirus serotype might become untenable. Notwithstanding these considerations, at this point in time it remains valuable to serotype isolates, because the serotype is determined by the neutralizing epitopes and remains important for the molecular epidemiology of these viruses. In addition, the capsid determines the choice of cellular receptor, so the serotype is an important factor in the pathogenicity of an isolate [23]; however, the remainder of the genome also contributes to pathogenicity [24], and complete genome sequencing is required for full characterization of an isolate. During the 2010 outbreak, a different serotype distribution was noted among infants (<1 year) and older individuals; this difference in the distribution of serotypes between age groups has been noted before [31,32]. The serotypes found among the infants included in our study have typically been associated with neonatal disease, for example Coxsackie B3 and B4, echovirus 9, and EV-71. It was interesting to observe one isolate with a VP2 region most similar to the newly described EV-97 [20]. Among the rest of the population, CVA-9 was clearly the predominant serotype. Phylogenetic analysis of the VP2 region from the CVA-9 isolates showed that all the isolates had the same sequence and therefore originated from the same strain.
Comparison of the capsid regions showed the CVA-9_2003 and CVA-9_2010 isolates to be highly homologous, and this was corroborated by Simplot analysis; consequently, it is speculated that the sudden increase in detection of CVA-9 in CSF samples is attributable to an increased number of individuals susceptible to CVA-9 in the population rather than to the emergence of an immune escape mutant in 2010. For an infectious agent to cause an outbreak, the herd immunity of the population must drop below a threshold determined by the basic reproduction number R0 [33]. For CVA-9 to re-enter the population of Alberta in 2010, either the new isolate was an immune escape mutant against which the population had no prior immunity, or the herd immunity had declined below the threshold. Because of the very high homology in the capsid sequences between the 2003 and 2010 isolates, it does not seem likely that the 2010 isolate was an immune escape mutant.
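For reference, the threshold invoked here is the classical herd immunity relation; the study does not estimate R0 for CVA-9, and the numerical example below is purely illustrative. In LaTeX notation:

    R_{\mathrm{eff}} = R_0\,(1 - p) < 1
    \quad\Longleftrightarrow\quad
    p > p_c = 1 - \frac{1}{R_0}

where p is the immune fraction of the population and p_c is the herd immunity threshold. If, for example, R_0 = 5, then p_c = 1 - 1/5 = 0.8: sustained transmission becomes possible once population immunity falls below 80%.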
A comparison with the sequence of the prototype CVA-9 shows that the two Alberta isolates inherited their VP2-VP3-VP1 segment from the prototype, but the VP4 segment shows sequence divergence. For all the other protein coding segments, phylogenetic trees of the nucleotide sequences were constructed to compare the Alberta strains with the prototype strains within species B, and Simplot analyses were performed. Two important observations can be made. Firstly, the two Alberta isolates are highly homologous in most regions, with few changes that can largely be attributed to sequence drift; however, recombination has likely occurred in the 2C, 3A, 3C and 3D regions. Secondly, the Alberta isolates are very different from the prototype CVA-9 in the VP4 segment, supporting the concept that VP4 can be a site for recombination. The 5’NC, structural and non-structural regions appear to be components of a mosaic genome modified from the prototype CVA-9 by recombination and drift. Similar observations have been made on other contemporary CVA-9 strains [34], and indeed on other contemporary enteroviruses within species B [22,27]. How much the pathogenicity of these CVA-9 viruses has been modified by the non-structural genes, compared to the prototype strain, is unclear [22].

Conclusion: In summary, the sudden increase in cases of aseptic meningitis in Alberta in 2010 was associated with enterovirus CVA-9. The capsid region was highly homologous to the capsid of a 2003 isolate, suggesting that the infections were not the result of the emergence of an immune escape mutant. We thus speculate that the increase in the number of infections may have resulted from a decline in the level of herd immunity against this virus to a level where the virus was able to penetrate the population. When compared to the prototype strain of CVA-9, the Alberta isolates displayed signs of multiple recombination events in addition to genetic drift.

Methods:

Screening for enteroviruses: Specimens submitted to the Provincial Laboratory for Public Health (ProvLab) for enterovirus testing from January 1 to December 31, 2010 were included in this study. A total of 4323 samples from patients ranging in age from one day to 97 years were tested; the most common specimen types included CSF (n = 2687, 62.16%), plasma (n = 497, 11.50%), nasopharyngeal and throat swabs (n = 213, 4.93%) and stools (n = 103, 2.40%). Viral RNA was extracted using the easyMAG® automated extractor (BioMérieux, Durham, NC, USA), and the extracted nucleic acid was screened for enteroviruses using a previously published NASBA assay [35].

Serotyping of enteroviruses: Enteroviruses were serotyped using one-step RT-PCR to amplify the partial 5’ untranslated region (293 bp), the VP4 region (207 bp) and the partial VP2 region (250 bp), using the previously described primers 1-EV/RV and 2-EV/RV, from RNA extracted directly from the specimen [36]. Sequencing was performed without the need for nested amplification. Amplification was performed using the One-Step RT-PCR kit from Qiagen (Ontario, Canada) with 10 μl of 5X buffer, 10 μl of Q solution, 2 μl of 10 mM dNTPs, 2 μl of enzyme, 0.125 μl of RNaseOUT (Life Technologies, Ontario, Canada), 0.8 μM of primers and 5 μl of template nucleic acid in a total volume of 50 μl. The reverse transcription step was performed at 50°C for 30 min, followed by enzyme activation at 95°C for 15 min. The amplification protocol included 45 cycles of denaturation at 95°C for 30 seconds, annealing at 55°C for 1.5 minutes and extension at 72°C for 60 seconds. A final extension step was performed for 10 minutes at 72°C, followed by cooling. Amplified products were sequenced in both directions on the ABI PRISM 3130-Avant Genetic Analyzer (Applied Biosystems (ABI), Foster City, CA).
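The cycling conditions above can also be captured in a small structured form, which is convenient for keeping protocols under version control. This is a hypothetical encoding of the published parameters, not a file format used by the authors:

    # Hypothetical structured encoding of the one-step RT-PCR protocol above;
    # times in seconds, temperatures in degrees Celsius.
    RT_PCR_PROTOCOL = {
        "reverse_transcription": {"temp": 50, "time": 30 * 60},
        "enzyme_activation":     {"temp": 95, "time": 15 * 60},
        "cycling": {
            "cycles": 45,
            "steps": [
                {"name": "denaturation", "temp": 95, "time": 30},
                {"name": "annealing",    "temp": 55, "time": 90},
                {"name": "extension",    "temp": 72, "time": 60},
            ],
        },
        "final_extension": {"temp": 72, "time": 10 * 60},
    }

    total = (RT_PCR_PROTOCOL["reverse_transcription"]["time"]
             + RT_PCR_PROTOCOL["enzyme_activation"]["time"]
             + RT_PCR_PROTOCOL["cycling"]["cycles"]
               * sum(s["time"] for s in RT_PCR_PROTOCOL["cycling"]["steps"])
             + RT_PCR_PROTOCOL["final_extension"]["time"])
    print(f"Run time excluding ramping: ~{total / 3600:.1f} h")  # ~3.2 h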
Complete genome sequencing of CVA-9: Two enterovirus-positive samples from 2003 (CVA-9_2003) and 2010 (CVA-9_2010) with a high viral load, as estimated by the Ct value, that exhibited strong growth in primary Rhesus Monkey Kidney cells from Diagnostic Hybrids (Ohio, USA) were used. The near-complete genomes of these viruses were amplified by long RT-PCR as previously described [37]. The amplicons were sequenced by genome walking, and contig assembly was performed using SeqScape v2.5 (ABI). The genomic sequences of CVA-9_2003 and CVA-9_2010 have been deposited in GenBank under accession numbers JQ837913 and JQ837914, respectively.

Sequence analysis: Sequences were aligned using ClustalX (version 1.81, March 2000; ftp://ftp-igbmc.u-strasbg.fr/pub/ClustalX/) and phylogenetic analysis was conducted using Treecon [38]. Distance estimation was performed using the Jukes and Cantor (1969) distance correction, topology inference was performed using the neighbour-joining method, bootstrapping was done using 1000 replicates, and the tree was re-rooted at the internode. Simplot analyses were performed using alignments generated with ClustalX and the Simplot for Windows v3.5.1 program [39].
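The Jukes and Cantor (1969) correction used for distance estimation converts the observed proportion of mismatching sites p into an estimated number of substitutions per site, compensating for multiple substitutions at the same position. A minimal sketch of the calculation (illustrative; not the Treecon implementation):

    # Minimal sketch of the Jukes-Cantor (1969) distance correction.
    import math

    def p_distance(a: str, b: str) -> float:
        """Observed proportion of differing sites between aligned sequences."""
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b)) / len(a)

    def jc69(a: str, b: str) -> float:
        """JC69-corrected distance: d = -3/4 * ln(1 - 4p/3).

        Valid for p < 0.75; beyond that the sequences are effectively saturated.
        """
        p = p_distance(a, b)
        if p >= 0.75:
            raise ValueError("p-distance too large for JC69 correction")
        return -0.75 * math.log(1 - 4 * p / 3)

    # Illustration with toy sequences: a 25% observed difference corresponds
    # to ~0.30 substitutions per site once multiple hits are accounted for.
    a = "ACGT" * 25
    b = "ACGA" * 25   # every 4th site differs, so p = 0.25
    print(f"p = {p_distance(a, b):.2f}, d = {jc69(a, b):.3f}")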
Abbreviations: CSF: cerebrospinal fluid; CVA: Coxsackie virus A; CVB: Coxsackie virus B; EV: enterovirus; UTR: untranslated region; NASBA: nucleic acid sequence based amplification; RT-PCR: reverse-transcription polymerase chain reaction; ABI: Applied Biosystems; CVA-9: Coxsackie virus A9

Competing interests: The authors declare that they have no competing interests.

Authors' contributions: EC carried out the molecular studies and participated in the sequence alignment. KP, SW and RT conceived of the study and participated in its design, coordination and analysis. KP wrote the manuscript; SW and RT edited the manuscript. All authors read and approved the final manuscript.
Abstract:

Background: An unusually high incidence of aseptic meningitis caused by enteroviruses was noted in Alberta, Canada between March and October 2010. Sequence-based typing was performed on the enterovirus-positive samples to gain a better understanding of the molecular characteristics of the Coxsackie A9 (CVA-9) strain responsible for most cases in this outbreak.

Methods: Molecular typing was performed by amplification and sequencing of the VP2 region. The genomic sequence of one of the 2010 outbreak isolates was compared to a CVA-9 isolate from 2003 and to the prototype sequence to study genetic drift and recombination.

Results: Of the 4323 samples tested, 213 were positive for enteroviruses (4.93%). The majority of the positives were detected in CSF samples (n = 157, 73.71%), and 81.94% of the sequenced isolates were typed as CVA-9. The sequenced CVA-9 positives were predominantly (94.92%) detected in patients ranging in age from 15 to 29 years, and the peak months for detection were between March and October. Full genome sequence comparisons revealed that the CVA-9 viruses isolated in Alberta in 2003 and 2010 were highly homologous to the prototype CVA-9 in the structural VP1, VP2 and VP3 regions but divergent in the VP4, non-structural and non-coding regions.

Conclusions: The increase in cases of aseptic meningitis was associated with enterovirus CVA-9. Sequence divergence between the prototype strain of CVA-9 and the Alberta isolates suggests genetic drift and/or recombination events; however, the sequence was conserved in the antigenic regions determined by the VP1, VP2 and VP3 genes. These results suggest that the increase in CVA-9 cases likely did not result from the emergence of a radically different immune escape mutant.
Background: The enterovirus genome is comprised of a single open reading frame flanked by the 5' and 3' untranslated regions (UTRs), and the encoded polyprotein is cleaved to produce the structural and nonstructural proteins [1]. Although most infections are asymptomatic, non-polio enteroviruses are the most common infectious cause of aseptic meningitis [1]. Several outbreaks resulting from different enterovirus serotypes have been described, including but not restricted to Coxsackievirus A9 (CVA-9) [2], EV71 [3,4], EV68 [5], Coxsackievirus A24 [6], echovirus 18 [7], echovirus 30 [8,9], and Coxsackievirus A16 [10]. Non-polio enteroviruses were traditionally classified into serotypes based on a neutralization assay and included 64 classical isolates consisting of Coxsackieviruses A, Coxsackieviruses B, echoviruses, and several numbered enteroviruses [11]. Genetic studies led to a new classification scheme, grouping the enteroviruses into species A, B, C and D, although serotype determination is still widely used, including for epidemiological purposes. Typing methods based on sequencing have been previously described in the VP1 [12-14], VP2 [15,16], and VP4 [17] regions. Molecular typing methods based on the P1 genomic region sequences have been found to yield equivalent results to neutralizing assays, which they have essentially replaced [1,14]. Molecular typing has also led to the characterization of new enterovirus serotypes [4,18-20] and provides information on recombination between enteroviruses, a common phenomenon that almost always occurs between viruses belonging to the same species [1,21,22]. Serotype determination remains important for molecular epidemiology, in part because the capsid contains the neutralization epitopes, and a rise or fall in incidence of a serotype can be linked to serotype-specific humoral immunity within the population [21]. The capsid also determines the cellular receptors used by the virus [23] and is thus an important determinant of viral pathogenesis, although non-structural proteins also contribute to the pathogenesis of the virus [24]. An unusually high incidence of aseptic meningitis was noted in Alberta, Canada between March and October 2010 [25]. A high proportion of CSF samples submitted to the Provincial Laboratory for Public Health of Alberta were positive for enteroviruses, and serotyping by molecular methods revealed that the majority of these enteroviruses belonged to the CVA-9 serotype. The primary goal of this study was to provide a full genetic characterization of the CVA-9 isolates responsible for this outbreak. Secondary goals were to comment on the methods of molecular serotyping for the diagnostic laboratory and on the use of long RT-PCR as a convenient method to obtain the near full-length genomic sequence of enteroviruses, especially when recombination events are involved in the genesis of the viral genome. The genome sequence of these isolates was determined and compared to the sequence of a CVA-9 isolate from Alberta in 2003 and to the prototype CVA-9 sequence (strain: Griggs; D00627) [26]. Serotyping by sequencing within the VP2 region was found to be more reliable than within the VP4 region. A comparison of the different genes revealed a higher nucleotide and amino acid conservation in the structural regions, and analysis of the sequence of the non-structural region pointed to recombination events in the genesis of the 2010 isolate.
Conclusion: In summary, the sudden increase in cases of aseptic meningitis in Alberta in 2010 was associated with enterovirus CVA-9. The capsid region was highly homologous to the capsid of a 2003 isolate, suggesting that the infections were not the result of the emergence of an immune escape mutant. We thus speculate that the increase in the number of infections may have resulted from a decline in the level of herd immunity against this virus to a level where the virus was able to penetrate the population. When compared to the prototype strain of CVA-9, the Alberta isolates displayed signs of multiple recombination events in addition to genetic drifting.
Background: An unusually high incidence of aseptic meningitis caused by enteroviruses was noted in Alberta, Canada between March and October 2010. Sequence-based typing was performed on the enterovirus-positive samples to gain a better understanding of the molecular characteristics of the Coxsackie A9 (CVA-9) strain responsible for most cases in this outbreak. Methods: Molecular typing was performed by amplification and sequencing of the VP2 region. The genomic sequence of one of the 2010 outbreak isolates was compared to a CVA-9 isolate from 2003 and to the prototype sequence to study genetic drift and recombination. Results: Of the 4323 samples tested, 213 were positive for enteroviruses (4.93%). The majority of the positives were detected in CSF samples (n = 157, 73.71%), and 81.94% of the sequenced isolates were typed as CVA-9. The sequenced CVA-9 positives were predominantly (94.16%) detected in patients ranging in age from 15 to 29 years, and the peak months for detection were between March and October. Full genome sequence comparisons revealed that the CVA-9 viruses isolated in Alberta in 2003 and 2010 were highly homologous to the prototype CVA-9 in the structural VP1, VP2 and VP3 regions but divergent in the VP4, non-structural and non-coding regions. Conclusions: The increase in cases of aseptic meningitis was associated with enterovirus CVA-9. Sequence divergence between the prototype strain of CVA-9 and the Alberta isolates suggests genetic drifting and/or recombination events; however, the sequence was conserved in the antigenic regions determined by the VP1, VP2 and VP3 genes. These results suggest that the increase in CVA-9 cases likely did not result from the emergence of a radically different immune escape mutant.
9,391
317
[ 623, 472, 147, 30, 1164, 148, 266, 110, 88, 55, 10, 54 ]
16
[ "cva", "samples", "prototype", "sequence", "region", "sequences", "9_2003", "9_2010", "cva 9_2003", "cva 9_2010" ]
[ "enteroviruses species serotype", "enteroviruses enteroviruses serotyped", "enterovirus serotypes detected", "concept enterovirus serotype", "different enterovirus serotypes" ]
[CONTENT] Human enteroviruses | Coxsackie A9 | Aseptic meningitis | Serotyping [SUMMARY]
[CONTENT] Human enteroviruses | Coxsackie A9 | Aseptic meningitis | Serotyping [SUMMARY]
[CONTENT] Human enteroviruses | Coxsackie A9 | Aseptic meningitis | Serotyping [SUMMARY]
[CONTENT] Human enteroviruses | Coxsackie A9 | Aseptic meningitis | Serotyping [SUMMARY]
[CONTENT] Human enteroviruses | Coxsackie A9 | Aseptic meningitis | Serotyping [SUMMARY]
[CONTENT] Human enteroviruses | Coxsackie A9 | Aseptic meningitis | Serotyping [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Alberta | Child | Child, Preschool | Coxsackievirus Infections | Disease Outbreaks | Enterovirus B, Human | Female | Genotype | Humans | Infant | Infant, Newborn | Male | Meningitis, Aseptic | Middle Aged | Molecular Epidemiology | Molecular Sequence Data | Molecular Typing | RNA, Viral | Sequence Analysis, DNA | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Alberta | Child | Child, Preschool | Coxsackievirus Infections | Disease Outbreaks | Enterovirus B, Human | Female | Genotype | Humans | Infant | Infant, Newborn | Male | Meningitis, Aseptic | Middle Aged | Molecular Epidemiology | Molecular Sequence Data | Molecular Typing | RNA, Viral | Sequence Analysis, DNA | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Alberta | Child | Child, Preschool | Coxsackievirus Infections | Disease Outbreaks | Enterovirus B, Human | Female | Genotype | Humans | Infant | Infant, Newborn | Male | Meningitis, Aseptic | Middle Aged | Molecular Epidemiology | Molecular Sequence Data | Molecular Typing | RNA, Viral | Sequence Analysis, DNA | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Alberta | Child | Child, Preschool | Coxsackievirus Infections | Disease Outbreaks | Enterovirus B, Human | Female | Genotype | Humans | Infant | Infant, Newborn | Male | Meningitis, Aseptic | Middle Aged | Molecular Epidemiology | Molecular Sequence Data | Molecular Typing | RNA, Viral | Sequence Analysis, DNA | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Alberta | Child | Child, Preschool | Coxsackievirus Infections | Disease Outbreaks | Enterovirus B, Human | Female | Genotype | Humans | Infant | Infant, Newborn | Male | Meningitis, Aseptic | Middle Aged | Molecular Epidemiology | Molecular Sequence Data | Molecular Typing | RNA, Viral | Sequence Analysis, DNA | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Alberta | Child | Child, Preschool | Coxsackievirus Infections | Disease Outbreaks | Enterovirus B, Human | Female | Genotype | Humans | Infant | Infant, Newborn | Male | Meningitis, Aseptic | Middle Aged | Molecular Epidemiology | Molecular Sequence Data | Molecular Typing | RNA, Viral | Sequence Analysis, DNA | Young Adult [SUMMARY]
[CONTENT] enteroviruses species serotype | enteroviruses enteroviruses serotyped | enterovirus serotypes detected | concept enterovirus serotype | different enterovirus serotypes [SUMMARY]
[CONTENT] enteroviruses species serotype | enteroviruses enteroviruses serotyped | enterovirus serotypes detected | concept enterovirus serotype | different enterovirus serotypes [SUMMARY]
[CONTENT] enteroviruses species serotype | enteroviruses enteroviruses serotyped | enterovirus serotypes detected | concept enterovirus serotype | different enterovirus serotypes [SUMMARY]
[CONTENT] enteroviruses species serotype | enteroviruses enteroviruses serotyped | enterovirus serotypes detected | concept enterovirus serotype | different enterovirus serotypes [SUMMARY]
[CONTENT] enteroviruses species serotype | enteroviruses enteroviruses serotyped | enterovirus serotypes detected | concept enterovirus serotype | different enterovirus serotypes [SUMMARY]
[CONTENT] enteroviruses species serotype | enteroviruses enteroviruses serotyped | enterovirus serotypes detected | concept enterovirus serotype | different enterovirus serotypes [SUMMARY]
[CONTENT] cva | samples | prototype | sequence | region | sequences | 9_2003 | 9_2010 | cva 9_2003 | cva 9_2010 [SUMMARY]
[CONTENT] cva | samples | prototype | sequence | region | sequences | 9_2003 | 9_2010 | cva 9_2003 | cva 9_2010 [SUMMARY]
[CONTENT] cva | samples | prototype | sequence | region | sequences | 9_2003 | 9_2010 | cva 9_2003 | cva 9_2010 [SUMMARY]
[CONTENT] cva | samples | prototype | sequence | region | sequences | 9_2003 | 9_2010 | cva 9_2003 | cva 9_2010 [SUMMARY]
[CONTENT] cva | samples | prototype | sequence | region | sequences | 9_2003 | 9_2010 | cva 9_2003 | cva 9_2010 [SUMMARY]
[CONTENT] cva | samples | prototype | sequence | region | sequences | 9_2003 | 9_2010 | cva 9_2003 | cva 9_2010 [SUMMARY]
[CONTENT] enteroviruses | molecular | serotype | methods | coxsackievirus | structural | non | typing | sequence | cva [SUMMARY]
[CONTENT] μl | performed | step | amplification | bp | clustalx | 10 | 50 | extracted | followed [SUMMARY]
[CONTENT] cva | prototype | cva 9_2010 | 9_2010 | 9_2003 | cva 9_2003 | samples | identity | nucleotide | sequences [SUMMARY]
[CONTENT] increase | infections | level | capsid | alberta | virus | summary sudden | summary sudden increase | multiple recombination | multiple [SUMMARY]
[CONTENT] cva | samples | region | prototype | sequence | age | μl | 9_2010 | cva 9_2010 | cva 9_2003 [SUMMARY]
[CONTENT] cva | samples | region | prototype | sequence | age | μl | 9_2010 | cva 9_2010 | cva 9_2003 [SUMMARY]
[CONTENT] Alberta | Canada | between March and October 2010 ||| Sequence | the Coxsackie A9 | CVA-9 [SUMMARY]
[CONTENT] ||| one | 2010 | CVA-9 | 2003 [SUMMARY]
[CONTENT] 4323 | 213 | 4.93% ||| CSF | 157 | 73.71% | 81.94% | CVA-9 ||| CVA-9 | 94.16% | 15 to 29 years | months | between March and October ||| CVA-9 | Alberta | 2003 | 2010 | CVA-9 | VP4 [SUMMARY]
[CONTENT] CVA-9 ||| CVA-9 | Alberta ||| CVA-9 [SUMMARY]
[CONTENT] Alberta | Canada | between March and October 2010 ||| Sequence | the Coxsackie A9 | CVA-9 ||| ||| one | 2010 | CVA-9 | 2003 ||| 4323 | 213 | 4.93% ||| CSF | 157 | 73.71% | 81.94% | CVA-9 ||| CVA-9 | 94.16% | 15 to 29 years | months | between March and October ||| CVA-9 | Alberta | 2003 | 2010 | CVA-9 | VP4 ||| CVA-9 ||| CVA-9 | Alberta ||| CVA-9 [SUMMARY]
[CONTENT] Alberta | Canada | between March and October 2010 ||| Sequence | the Coxsackie A9 | CVA-9 ||| ||| one | 2010 | CVA-9 | 2003 ||| 4323 | 213 | 4.93% ||| CSF | 157 | 73.71% | 81.94% | CVA-9 ||| CVA-9 | 94.16% | 15 to 29 years | months | between March and October ||| CVA-9 | Alberta | 2003 | 2010 | CVA-9 | VP4 ||| CVA-9 ||| CVA-9 | Alberta ||| CVA-9 [SUMMARY]
Performance evaluation of three automated quantitative immunoassays and their correlation with a surrogate virus neutralization test in coronavirus disease 19 patients and pre-pandemic controls.
34369009
The SARS-CoV-2 pandemic is currently ongoing; meanwhile, vaccinations are rapidly underway in some countries. Quantitative immunoassays detecting antibodies against the spike antigen of SARS-CoV-2 have been developed based on findings that such antibodies correlate better with neutralizing antibody activity.
BACKGROUND
The performances of the Abbott Architect SARS-CoV-2 IgG II Quant, DiaSorin LIAISON SARS-CoV-2 TrimericS IgG, and Roche Elecsys anti-SARS-CoV-2 S were evaluated on 173 sera from 126 SARS-CoV-2 patients and 151 pre-pandemic sera. Their correlations with GenScript cPass SARS-CoV-2 Neutralization Antibody Detection Kit were also analyzed on 173 sera from 126 SARS-CoV-2 patients.
METHODS
Architect SARS-CoV-2 IgG II Quant and Elecsys anti-SARS-CoV-2 S showed the highest overall sensitivity (96.0%), followed by LIAISON SARS-CoV-2 TrimericS IgG (93.6%). The specificities of Elecsys anti-SARS-CoV-2 S and LIAISON SARS-CoV-2 TrimericS IgG were 100.0%, followed by Architect SARS-CoV-2 IgG II Quant (99.3%). Regarding the correlation with the cPass neutralization antibody assay, LIAISON SARS-CoV-2 TrimericS IgG showed the best correlation (Spearman rho = 0.88), followed by Architect SARS-CoV-2 IgG II Quant and Elecsys anti-SARS-CoV-2 S (both rho = 0.87).
RESULTS
The three automated quantitative immunoassays showed good diagnostic performance and strong correlations with neutralizing antibodies. These assays will be useful for diagnostic assistance, evaluation of the response to vaccination, and assessment of herd immunity in the future.
CONCLUSIONS
[ "Antibodies, Neutralizing", "Antibodies, Viral", "COVID-19", "COVID-19 Serological Testing", "Humans", "Immunoassay", "Immunoglobulin G", "Neutralization Tests", "ROC Curve", "Reproducibility of Results", "SARS-CoV-2", "Sensitivity and Specificity", "Serologic Tests", "Spike Glycoprotein, Coronavirus" ]
8418513
INTRODUCTION
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), first reported in Wuhan, China in 2019, caused a worldwide outbreak that is currently ongoing.1 Coronavirus disease 19 (COVID-19), the infectious disease caused by SARS-CoV-2, became not only an unprecedented threat to public health worldwide but also a tragic shock to the economy across the globe.2 The absence of specific treatment options proven to be effective against SARS-CoV-2 aggravated the situation.3 This has resulted in the heightened importance of SARS-CoV-2 diagnostic testing, as quarantine and social distancing have become the primary strategies for control of COVID-19.4 Molecular testing, especially RT-PCR, a reliable tool for detecting active SARS-CoV-2 infection, is the first option for COVID-19 diagnosis.5, 6 Serologic testing, a secondary weapon in the diagnostic arsenal for COVID-19, has been used as a complement to RT-PCR where RT-PCR has limitations, because serologic testing for COVID-19 has advantages such as cost-effectiveness, short turnaround time, high throughput, the ability to detect past infection, and usefulness in resource-limited areas.7 Recently, the countermeasure strategy against COVID-19 has stepped up from detection and quarantine of infection to the active achievement of herd immunity through vaccination, as several vaccines have been approved for emergency use by governments in Europe and the United States and vaccinations are rapidly underway in some countries.8, 9, 10 In line with these shifts, the importance of tests detecting antibodies to SARS-CoV-2, especially neutralizing antibodies representing the protective ability of host immunity, is further emphasized. The majority of SARS-CoV-2 serologic assays use the whole or parts of the spike protein (S1 subunit, S2 subunit, and the receptor binding domain located in the S1 subunit) or the nucleocapsid (N) protein as target antigens.11 Previous studies reported that assays targeting the spike protein showed a better correlation with virus neutralization assays than those targeting the nucleocapsid protein.12, 13 Recently, commercial manufacturers launched quantitative SARS-CoV-2 antibody assays using the spike protein as a target antigen, which can be a pivotal tool for assessing the effect of vaccination. Abbott Architect anti-SARS-CoV-2 IgG II Quant, a new quantitative SARS-CoV-2 IgG immunoassay released by Abbott, received CE mark approval in the EU. Roche also released its new quantitative assay, Elecsys anti-SARS-CoV-2 S, targeting the receptor binding domain (RBD) of the S1 subunit. DiaSorin LIAISON SARS-CoV-2 TrimericS IgG was developed based on the observation that the trimeric form of the spike protein shows greater sensitivity for the detection of SARS-CoV-2 antibodies.14 Many studies have been conducted on the clinical performances of the previous versions of anti-SARS-CoV-2 assays against the N protein (Abbott and Roche) and against the S1/S2 subunit (Diasorin).15, 16, 17, 18 However, the clinical performances of the newly launched SARS-CoV-2 antibody assays against the S1 RBD (Abbott and Roche) or TrimericS (Diasorin) have not been evaluated thoroughly. To the best of our knowledge, the performance of Abbott Architect anti-SARS-CoV-2 IgG II Quant has not been fully evaluated so far.
The performance of Elecsys anti-SARS-CoV-2 S has been reported in comparison with the previous version (Elecsys anti-SARS-CoV-2 against the N antigen) and showed better sensitivity than Elecsys anti-SARS-CoV-2 against N.19, 20 The performance of LIAISON SARS-CoV-2 TrimericS IgG has been reported once21 and was better than that of the previous version, LIAISON SARS-CoV-2 against S1/S2.15, 16, 17 However, the superiority of clinical performance can be evaluated precisely only when the assays are performed in the same population. We therefore assessed the clinical performance of the three newly developed anti-SARS-CoV-2 assays in the same subjects. Meanwhile, the virus neutralization assay using live SARS-CoV-2 is the gold standard method for assessing neutralizing antibodies, but its utility is limited because it is labor-intensive, time-consuming, and requires specialized facilities such as biosafety level 3.4 For this reason, researchers have tried to develop alternatives that are more appropriate for large-scale use in clinical laboratories.22 GenScript cPass SARS-CoV-2 Neutralization Antibody Detection Kit is an enzyme-linked immunosorbent assay (ELISA) based surrogate virus neutralization test (sVNT) that mimics the reaction of the human ACE2 receptor and RBD. It has been reported that the cPass SARS-CoV-2 Neutralization Antibody test presented an excellent correlation with cell-culture-based virus neutralization assays and could be a useful measure of virus-neutralizing activity.23, 24 The correlations of the cPass SARS-CoV-2 Neutralization Antibody test with three automated anti-SARS-CoV-2 assays (Mindray CL-900i against S and N, BioMerieux VIDAS 3 against RBD, and Diasorin LIAISON SARS-CoV-2 against S1/S2) have been reported, with the best correlation for VIDAS 3 (r = 0.75), followed by LIAISON S1/S2 (r = 0.66) and Mindray CL-900i (r = 0.57).25 However, the correlations of the cPass SARS-CoV-2 Neutralization Antibody test with Abbott SARS-CoV-2 IgG II Quant, Roche Elecsys anti-SARS-CoV-2 S, and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG have not been evaluated. Therefore, we performed a comparative assessment of three fully automated quantitative assays detecting antibodies against the spike protein: Abbott SARS-CoV-2 IgG II Quant, Roche Elecsys anti-SARS-CoV-2 S, and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG. We evaluated their clinical performance and quantitative correlation with the cPass SARS-CoV-2 Neutralization Antibody test.
null
null
RESULTS
Clinical performance: The clinical performances of the three immunoassays are shown in Table 2. The sensitivity of Abbott Quant, DiaSorin TrimericS, and Roche S on 173 sera from 126 COVID-19 patients was 96.0% (95% CI, 91.8%–98.4%), 93.6% (95% CI, 88.9%–96.8%), and 96.0% (95% CI, 91.9%–98.4%), respectively. The sensitivity calculated from the subgroup sampled 14 days after the onset of symptoms was 97.6% (95% CI, 93.2%–99.5%), 96.8% (95% CI, 92.1%–99.1%), and 97.6% (95% CI, 93.2%–99.5%), respectively. The specificity of Abbott Quant, DiaSorin TrimericS, and Roche S on 151 pre-pandemic sera was 99.3% (95% CI, 96.4%–100.0%), 100.0% (95% CI, 97.6%–100.0%), and 100.0% (95% CI, 97.6%–100.0%), respectively. No positive result was observed in the cross-reactivity panel. Table 2: Clinical performance of three immunoassays. Abbreviations: Abbott Quant, Abbott SARS-CoV-2 IgG II Quant; D, days after onset of symptom; DiaSorin TrimericS, DiaSorin LIAISON SARS-CoV-2 TrimericS IgG; Genscript cPass, Genscript cPass SARS-CoV-2 Neutralization Antibody Detection Kit; NT, not tested; Other, other infection; Roche S, Roche Elecsys anti-SARS-CoV-2 S.
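The sensitivities and specificities above are binomial proportions, and the reported 95% CIs are consistent with exact (Clopper-Pearson) intervals. A minimal sketch in Python with scipy, using counts inferred from the reported Abbott Quant sensitivity (166 of 173; the function name is ours, not from the paper):

```python
from scipy.stats import beta

def proportion_with_ci(k: int, n: int, alpha: float = 0.05):
    """Proportion k/n with an exact Clopper-Pearson (1 - alpha) CI."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return k / n, lo, hi

# 166/173 positive COVID-19 sera -> ~96.0%, CI close to the reported 91.8%-98.4%
p, lo, hi = proportion_with_ci(166, 173)
print(f"sensitivity {p:.1%} (95% CI, {lo:.1%}-{hi:.1%})")
```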
Repeatability, within-laboratory imprecision, and linearity: The repeatability and within-laboratory imprecision for the three immunoassays are shown in Table 3. The within-laboratory precisions of Abbott Quant and Roche S were all <4.0%. The within-laboratory precisions of DiaSorin TrimericS were 2.9%–8.2%, which were slightly larger than those claimed by the manufacturer. Table 3: Repeatability and within-laboratory imprecision for three quantitative immunoassays. Abbreviations: Conc., concentration; CV, coefficient of variation; Lab, laboratory; PS, pooled serum. The linearity assessment of the three immunoassays (Figure S1) showed linearity across the analytical measurement range (R2 = 0.9992, 0.9947, and 0.9966 for Abbott Quant, DiaSorin TrimericS, and Roche S, respectively). The % recovery of Abbott Quant, DiaSorin TrimericS, and Roche S was acceptable in all cases (criteria: 100% ± 10%), ranging over 96.4%–100.0%, 100.0%–109.7%, and 92.8%–102.9%, respectively.
Correlation between results from three immunoassays: The correlations between the results of the three immunoassays are shown in Figure 1. Roche S correlated well with Abbott Quant (rho = 0.88) and DiaSorin TrimericS (rho = 0.85), and Abbott Quant correlated well with DiaSorin TrimericS (rho = 0.9). Figure 1: Spearman correlation between three immunoassays and cPass neutralization antibody. (A-C) Abbott SARS-CoV-2 IgG II Quant (A), Roche Elecsys anti-SARS-CoV-2 S (B), and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG (C) demonstrated strong correlations with the Genscript cPass surrogate virus neutralization test (Spearman's rho of 0.87, 0.87, and 0.88, respectively). (D-F) The three immunoassays demonstrated strong correlations with each other: Spearman's rho of 0.88 between Abbott SARS-CoV-2 IgG II Quant and Roche Elecsys anti-SARS-CoV-2 S (D); 0.9 between Abbott SARS-CoV-2 IgG II Quant and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG (E); 0.85 between Roche Elecsys anti-SARS-CoV-2 S and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG (F).
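Spearman's rho, used for all correlations in this section, compares ranks rather than raw values, so monotone but non-linear agreement between two assays still scores high. A minimal sketch with scipy and illustrative paired results (not study data):

```python
from scipy.stats import spearmanr

# Illustrative paired quantitative results from two assays (e.g., AU/ml)
assay_a = [12.0, 55.2, 150.0, 8.4, 310.0, 44.1, 980.0]
assay_b = [0.9, 4.1, 11.8, 1.0, 30.5, 5.0, 75.0]

rho, p_value = spearmanr(assay_a, assay_b)  # rank-order correlation
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
```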
Comparison to cPass SARS-CoV-2 neutralization antibody test: The concordances between the qualitative results of the three immunoassays and the cPass SARS-CoV-2 neutralization test in SARS-CoV-2 positive patients are shown in Table 4. The positive percent agreements of the three immunoassays with cPass were 97.6%–99.4%, the negative percent agreements were 55.6%–77.8%, and the overall percent agreements were 96.5%–97.7%, with Cohen's kappa values of 0.61–0.74. The positive percent agreements between the three immunoassays were 98.8%–99.4%, the negative percent agreements were 54.5%–71.4%, and the overall percent agreements were 96.5%–97.7%, with Cohen's kappa values of 0.65–0.7. Table 4: Concordance between the qualitative results of three immunoassays and the cPass SARS-CoV-2 Neutralization Antibody Detection Kit in 173 sera from SARS-CoV-2 patients. Abbreviations: Abbott Quant, Abbott SARS-CoV-2 IgG II Quant; DiaSorin TrimericS, DiaSorin LIAISON SARS-CoV-2 TrimericS IgG; NPA, negative percent agreement; OPA, overall percent agreement; PPA, positive percent agreement; Roche S, Roche Elecsys anti-SARS-CoV-2 S. The correlations of the quantitative results of the three immunoassays with the % inhibition values of cPass were very strong, with Spearman's rho values of 0.87 for Abbott Quant and Roche S and 0.88 for DiaSorin TrimericS (p < 0.001 for all) (Figure 1).
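Cohen's kappa, reported above alongside the percent agreements, measures agreement between two qualitative results beyond what chance would produce. A minimal sketch of the calculation from a 2x2 concordance table (the counts are illustrative placeholders, not Table 4):

```python
def cohens_kappa(both_pos: int, a_pos_b_neg: int, a_neg_b_pos: int, both_neg: int) -> float:
    """Cohen's kappa for two assays from a 2x2 table of qualitative results."""
    n = both_pos + a_pos_b_neg + a_neg_b_pos + both_neg
    observed = (both_pos + both_neg) / n
    # Expected chance agreement from each assay's marginal positivity rate
    a_pos = (both_pos + a_pos_b_neg) / n
    b_pos = (both_pos + a_neg_b_pos) / n
    expected = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)
    return (observed - expected) / (1 - expected)

# Illustrative: 160 double-positive, 2 and 4 discordant, 7 double-negative sera
print(round(cohens_kappa(160, 2, 4, 7), 2))  # ~0.68, "good agreement" per the bands cited above
```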
Receiver operating characteristics analysis: Receiver operating characteristics (ROC) curve analysis was performed on the three immunoassays (Figure 2). The areas under the curve (AUC) were 0.993 for Abbott Quant, 0.989 for DiaSorin TrimericS, and 0.983 for Roche S, indicating excellent performance for all three immunoassays. Using the ROC curves, we estimated optimized cut-off values based on Youden's index. The optimized cut-offs were 41.9 AU/ml, 3.985 AU/ml, and 0.544 U/ml for Abbott Quant, DiaSorin TrimericS, and Roche S, respectively; the corresponding manufacturer-recommended cut-offs were 50.0, 13.0, and 0.8. Applying the optimized cut-offs, the sensitivity of Abbott Quant improved from 96.0% to 97.1% with no decrease in specificity (99.3%); the sensitivity of DiaSorin TrimericS improved from 93.6% to 98.3% with a slight decrease in specificity (from 100.0% to 97.1%); and the sensitivity of Roche S improved from 96.0% to 96.5% with no decrease in specificity (100.0%). Figure 2: Receiver operating characteristics (ROC) curve analysis for three immunoassays. The areas under the curve were 0.993 for Abbott Quant (A), 0.989 for DiaSorin TrimericS (B), and 0.983 for Roche S (C), indicating excellent performance for all three immunoassays.
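Youden's index J = sensitivity + specificity - 1 = TPR - FPR; the optimized cut-off is the threshold at which J is maximal along the ROC curve. A minimal sketch with scikit-learn and illustrative placeholder data (not the study's measurements):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# 1 = COVID-19 patient serum, 0 = pre-pandemic serum; scores = assay results
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])
scores = np.array([2.1, 8.0, 15.3, 44.0, 39.5, 120.0, 310.0, 55.2, 980.0])

fpr, tpr, thresholds = roc_curve(y_true, scores)
j = tpr - fpr  # Youden's index at each candidate threshold
best = int(np.argmax(j))
print(f"AUC = {roc_auc_score(y_true, scores):.3f}; "
      f"optimized cut-off = {thresholds[best]} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```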
null
null
[ "INTRODUCTION", "Test specimens", "Automated anti‐SARS‐CoV‐2 IgG immunoassay", "GenScript cPass SARS‐CoV‐2 Neutralization Antibody Detection assay", "Precision and linearity assessment", "Statistical analysis", "Clinical performance", "Repeatability, within‐laboratory imprecision, and linearity", "Correlation between results from three immunoassays", "Comparison to cPass SARS‐CoV‐2 neutralization antibody test", "Receiver operating characteristics analysis" ]
[ "Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), first reported in Wuhan, China in 2019, caused a worldwide outbreak that is currently ongoing.1 Coronavirus disease 19 (COVID-19), the infectious disease caused by SARS-CoV-2, became not only an unprecedented threat to public health worldwide but also a tragic shock to the economy across the globe.2 The absence of specific treatment options proven to be effective against SARS-CoV-2 aggravated the situation.3 This has resulted in the heightened importance of SARS-CoV-2 diagnostic testing, as quarantine and social distancing have become the primary strategies for control of COVID-19.4\n\nMolecular testing, especially RT-PCR, a reliable tool for detecting active SARS-CoV-2 infection, is the first option for COVID-19 diagnosis.5, 6 Serologic testing, a secondary weapon in the diagnostic arsenal for COVID-19, has been used as a complement to RT-PCR where RT-PCR has limitations, because serologic testing for COVID-19 has advantages such as cost-effectiveness, short turnaround time, high throughput, the ability to detect past infection, and usefulness in resource-limited areas.7\n\nRecently, the countermeasure strategy against COVID-19 has stepped up from detection and quarantine of infection to the active achievement of herd immunity through vaccination, as several vaccines have been approved for emergency use by governments in Europe and the United States and vaccinations are rapidly underway in some countries.8, 9, 10 In line with these shifts, the importance of tests detecting antibodies to SARS-CoV-2, especially neutralizing antibodies representing the protective ability of host immunity, is further emphasized.\nThe majority of SARS-CoV-2 serologic assays use the whole or parts of the spike protein (S1 subunit, S2 subunit, and the receptor binding domain located in the S1 subunit) or the nucleocapsid (N) protein as target antigens.11 Previous studies reported that assays targeting the spike protein showed a better correlation with virus neutralization assays than those targeting the nucleocapsid protein.12, 13 Recently, commercial manufacturers launched quantitative SARS-CoV-2 antibody assays using the spike protein as a target antigen, which can be a pivotal tool for assessing the effect of vaccination. Abbott Architect anti-SARS-CoV-2 IgG II Quant, a new quantitative SARS-CoV-2 IgG immunoassay released by Abbott, received CE mark approval in the EU. Roche also released its new quantitative assay, Elecsys anti-SARS-CoV-2 S, targeting the receptor binding domain (RBD) of the S1 subunit. DiaSorin LIAISON SARS-CoV-2 TrimericS IgG was developed based on the observation that the trimeric form of the spike protein shows greater sensitivity for the detection of SARS-CoV-2 antibodies.14 Many studies have been conducted on the clinical performances of the previous versions of anti-SARS-CoV-2 assays against the N protein (Abbott and Roche) and against the S1/S2 subunit (Diasorin).15, 16, 17, 18 However, the clinical performances of the newly launched SARS-CoV-2 antibody assays against the S1 RBD (Abbott and Roche) or TrimericS (Diasorin) have not been evaluated thoroughly. To the best of our knowledge, the performance of Abbott Architect anti-SARS-CoV-2 IgG II Quant has not been fully evaluated so far. 
The performance of Elecsys anti-SARS-CoV-2 S has been reported in comparison with the previous version (Elecsys anti-SARS-CoV-2 against the N antigen) and showed better sensitivity than Elecsys anti-SARS-CoV-2 against N.19, 20 The performance of LIAISON SARS-CoV-2 TrimericS IgG has been reported once21 and was better than that of the previous version, LIAISON SARS-CoV-2 against S1/S2.15, 16, 17 However, the superiority of clinical performance can be evaluated precisely only when the assays are performed in the same population. We therefore assessed the clinical performance of the three newly developed anti-SARS-CoV-2 assays in the same subjects.\nMeanwhile, the virus neutralization assay using live SARS-CoV-2 is the gold standard method for assessing neutralizing antibodies, but its utility is limited because it is labor-intensive, time-consuming, and requires specialized facilities such as biosafety level 3.4 For this reason, researchers have tried to develop alternatives that are more appropriate for large-scale use in clinical laboratories.22 GenScript cPass SARS-CoV-2 Neutralization Antibody Detection Kit is an enzyme-linked immunosorbent assay (ELISA) based surrogate virus neutralization test (sVNT) that mimics the reaction of the human ACE2 receptor and RBD. It has been reported that the cPass SARS-CoV-2 Neutralization Antibody test presented an excellent correlation with cell-culture-based virus neutralization assays and could be a useful measure of virus-neutralizing activity.23, 24 The correlations of the cPass SARS-CoV-2 Neutralization Antibody test with three automated anti-SARS-CoV-2 assays (Mindray CL-900i against S and N, BioMerieux VIDAS 3 against RBD, and Diasorin LIAISON SARS-CoV-2 against S1/S2) have been reported, with the best correlation for VIDAS 3 (r = 0.75), followed by LIAISON S1/S2 (r = 0.66) and Mindray CL-900i (r = 0.57).25 However, the correlations of the cPass SARS-CoV-2 Neutralization Antibody test with Abbott SARS-CoV-2 IgG II Quant, Roche Elecsys anti-SARS-CoV-2 S, and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG have not been evaluated.\nTherefore, we performed a comparative assessment of three fully automated quantitative assays detecting antibodies against the spike protein: Abbott SARS-CoV-2 IgG II Quant, Roche Elecsys anti-SARS-CoV-2 S, and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG. We evaluated their clinical performance and quantitative correlation with the cPass SARS-CoV-2 Neutralization Antibody test.", "This study was approved by the Institutional Review Board of Seoul National University Hospital (IRB no. 2011-041-1170). All subjects were admitted to Seoul National University Hospital or Boramae Medical Center between February 2020 and January 2021. Leftover patient specimens obtained for routine serologic testing were used. A total of 173 sera from 126 COVID-19 patients were included in this study. The clinical diagnosis of COVID-19 was determined based on clinical symptoms, imaging diagnosis, and laboratory findings including RT-PCR. Among the 126 COVID-19 patients (44 females and 82 males), two (1.6%), 20 (15.9%), 18 (14.3%), 40 (31.7%), and 46 (36.5%) patients were classified by disease severity as asymptomatic, mild, moderate, severe, and critical, respectively. For each subject, age, sex, the number of days from symptom onset to the day the sample was collected, and disease severity determined by WHO interim guidance26 were acquired through electronic medical records. For specificity evaluation, 151 pre-pandemic sera were tested. 
Out of the 151 sera, 98 were from healthy subjects and 53 were from patients with positive results for various infectious markers: 5 anti-HAV IgG, 5 anti-T. pallidum IgG, 5 anti-HCV, 5 anti-HBc IgG, 5 anti-CMV IgG, 5 anti-rubella IgG, 4 anti-toxoplasma IgG, 3 anti-HIV IgG, 2 anti-mycoplasma IgG, 2 M. tuberculosis PCR, 2 RSV PCR, 5 rhinovirus PCR, 1 adenovirus PCR, 1 bocavirus PCR, 1 parainfluenza virus PCR, 1 coronavirus OC43 PCR, 1 coronavirus 229E PCR.", "Three fully automated commercial immunoassays were evaluated. The specifications of the three immunoassays are summarized in Table 1. Abbott SARS-CoV-2 IgG II Quant (Abbott Laboratories, Sligo, Ireland; hereafter called Abbott Quant) is a chemiluminescent microparticle immunoassay designed for the quantitative determination of IgG antibodies to the RBD of the S1 subunit of the spike protein of SARS-CoV-2. The assay was performed on the Abbott Architect i2000SR system (Abbott Laboratories, Abbott Park). Roche Elecsys anti-SARS-CoV-2 S (Roche Diagnostics; hereafter called Roche S) is an electrochemiluminescence immunoassay for the quantitative determination of antibodies to the RBD of the spike protein of SARS-CoV-2. The assay was performed on the Roche cobas e601 system (Roche Diagnostics). DiaSorin LIAISON SARS-CoV-2 TrimericS IgG (DiaSorin, Stillwater; hereafter called DiaSorin TrimericS) is a chemiluminescent immunoassay using magnetic particles coated with recombinant trimeric SARS-CoV-2 spike protein for the quantitative determination of IgG antibodies. The assay was performed on the LIAISON XL analyzer (DiaSorin). All tests were performed according to the manufacturers' instructions.\nTable 1: Specifications of the four immunoassays claimed by each manufacturer.\nAbbreviations: AMR, analytical measuring range; CI, confidence interval; CLIA, chemiluminescent immunoassay; CLMIA, chemiluminescent microparticle immunoassay; ECLIA, electrochemiluminescent immunoassay; RBD, receptor binding domain; SP, spike protein.\nSensitivity was calculated from patient samples collected 14 days or later after positive PCR results.\nMeasuring range extends up to 80,000 by 1:2 dilution.\nMeasuring range extends up to 2,500 by 1:10 dilution.\nPositive percent agreement with plaque reduction neutralization test (PRNT)50.\nNegative percent agreement with PRNT50.\nNot available (approved for qualitative detection).", "GenScript cPass SARS-CoV-2 Neutralization Antibody Detection Kit (GenScript, Piscataway; hereafter called cPass) is a blocking ELISA detection tool, which mimics the virus neutralization process. The detection kit utilizes horseradish peroxidase (HRP)-conjugated recombinant SARS-CoV-2 RBD protein and the human angiotensin-converting enzyme 2 (ACE2) receptor protein. The protein interaction between HRP-RBD and ACE2 can be blocked by neutralizing antibodies against the SARS-CoV-2 RBD. The assay was performed as below, according to the manufacturer's instructions.\nTest samples, negative control, and positive control were 1:10 diluted with sample dilution buffer. HRP-RBD was diluted 1:1000 with HRP dilution buffer. Diluted samples and diluted HRP-RBD solution were mixed at a volume ratio of 1:1 and incubated at 37°C for 30 min. 100 μl of the mixture was then added to the capture plate coated with the human ACE2 receptor protein and incubated at 37°C for 15 min. After the incubation, the mixture was washed 4 times with 260 μl of wash buffer. Then, 100 μl of tetramethylbenzidine solution was added to the mixture and incubated at room temperature for 15 min. 
Finally, 50 μl of stop solution was added. The absorbance of the final solution was read at 450 nm in a microplate reader.\nSignal inhibition was calculated as follows: Percent Signal Inhibition = (1 - average optical density value of sample / average optical density value of negative control) × 100%.\nThe test results were interpreted as positive when the percent signal inhibition was ≥30%, which is the cut-off for signal inhibition claimed by the manufacturer.", "The precision assessment was performed on the three quantitative assays according to the CLSI EP15-A3 protocol,27 using one quality control material and two pooled patient sera for five consecutive days, five times a day. Repeatability and within-laboratory precision were estimated using ANOVA and compared to the values claimed by the manufacturers.\nLinearity assessment was performed on the three quantitative assays according to the CLSI EP6-A protocol.28 Two patient sera with high (H) and low (L) concentrations were mixed at ratios of 4H, 1L + 3H, 2L + 2H, 3L + 1H, and 4L. All levels were measured in duplicate.", "For the three immunoassays, sensitivity and specificity were calculated. The sensitivity of the subgroup sampled 14 days after the onset of symptoms was also calculated and compared with the manufacturers' claims. This is in line with the recommendation of the Infectious Diseases Society of America guidelines on the diagnosis of COVID-19, which suggest against using serologic testing to diagnose SARS-CoV-2 infection during the first two weeks (14 days) following symptom onset.29 The concordances between the three immunoassays and cPass were assessed using overall, positive, and negative percent agreement as well as Cohen's kappa statistics. Cohen's kappa is a robust statistic of inter-rater reliability, useful for assessing the level of agreement between two diagnostic assays. Ranging between 0 and 1, a kappa value <0.40 represents poor agreement, 0.40–0.59 represents fair agreement, 0.60–0.74 represents good agreement, and ≥0.75 represents excellent agreement.30 We evaluated the correlations of the quantitative values of the three immunoassays with each other and with the % inhibition value of cPass using Spearman's rank-order correlation coefficient (rho). All statistical analyses were performed using R version 4.0.5 (R Foundation for Statistical Computing).", "The clinical performances of the three immunoassays are shown in Table 2. The sensitivity of Abbott Quant, DiaSorin TrimericS, and Roche S on 173 sera from 126 COVID-19 patients was 96.0% (95% CI, 91.8%–98.4%), 93.6% (95% CI, 88.9%–96.8%), and 96.0% (95% CI, 91.9%–98.4%), respectively. The sensitivity calculated from the subgroup sampled 14 days after the onset of symptoms was 97.6% (95% CI, 93.2%–99.5%), 96.8% (95% CI, 92.1%–99.1%), and 97.6% (95% CI, 93.2%–99.5%), respectively. The specificity of Abbott Quant, DiaSorin TrimericS, and Roche S on 151 pre-pandemic sera was 99.3% (95% CI, 96.4%–100.0%), 100.0% (95% CI, 97.6%–100.0%), and 100.0% (95% CI, 97.6%–100.0%), respectively. No positive result was observed in the cross-reactivity panel.\nTable 2: Clinical performance of three immunoassays.\nAbbreviations: Abbott Quant, Abbott SARS-CoV-2 IgG II Quant; D, days after onset of symptom; DiaSorin TrimericS, DiaSorin LIAISON SARS-CoV-2 TrimericS IgG; Genscript cPass, Genscript cPass SARS-CoV-2 Neutralization Antibody Detection Kit; NT, not tested; Other, other infection; Roche S, Roche Elecsys anti-SARS-CoV-2 S.", "The repeatability and within-laboratory imprecision for the three immunoassays are shown in Table 3. 
The within-laboratory precisions of Abbott Quant and Roche S were all <4.0%. The within-laboratory precisions of DiaSorin TrimericS were 2.9%–8.2%, which were slightly larger than those claimed by the manufacturer.\nTable 3: Repeatability and within-laboratory imprecision for three quantitative immunoassays.\nAbbreviations: Conc., concentration; CV, coefficient of variation; Lab, laboratory; PS, pooled serum.\nThe linearity assessment of the three immunoassays (Figure S1) showed linearity across the analytical measurement range (R2 = 0.9992, 0.9947, and 0.9966 for Abbott Quant, DiaSorin TrimericS, and Roche S, respectively). The % recovery of Abbott Quant, DiaSorin TrimericS, and Roche S was acceptable in all cases (criteria: 100% ± 10%), ranging over 96.4%–100.0%, 100.0%–109.7%, and 92.8%–102.9%, respectively.", "The correlations between the results of the three immunoassays are shown in Figure 1. Roche S correlated well with Abbott Quant (rho = 0.88) and DiaSorin TrimericS (rho = 0.85). Abbott Quant correlated well with DiaSorin TrimericS (rho = 0.9).\nFigure 1: Spearman correlation between three immunoassays and cPass neutralization antibody. (A-C) Abbott SARS-CoV-2 IgG II Quant (A), Roche Elecsys anti-SARS-CoV-2 S (B), and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG (C) demonstrated strong correlations with the Genscript cPass surrogate virus neutralization test (Spearman's rho of 0.87, 0.87, and 0.88, respectively). (D-F) The three immunoassays demonstrated strong correlations with each other: Spearman's rho of 0.88 between Abbott SARS-CoV-2 IgG II Quant and Roche Elecsys anti-SARS-CoV-2 S (D); 0.9 between Abbott SARS-CoV-2 IgG II Quant and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG (E); 0.85 between Roche Elecsys anti-SARS-CoV-2 S and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG (F).", "The concordances between the qualitative results of the three immunoassays and the cPass SARS-CoV-2 neutralization test in SARS-CoV-2 positive patients are shown in Table 4. The positive percent agreements of the three immunoassays with cPass were 97.6%–99.4%. The negative percent agreements were 55.6%–77.8%. The overall percent agreements were 96.5%–97.7%, with Cohen's kappa values of 0.61–0.74. The positive percent agreements between the three immunoassays were 98.8%–99.4%, the negative percent agreements were 54.5%–71.4%, and the overall percent agreements were 96.5%–97.7%, with Cohen's kappa values of 0.65–0.7.\nTable 4: Concordance between the qualitative results of three immunoassays and the cPass SARS-CoV-2 Neutralization Antibody Detection Kit in 173 sera from SARS-CoV-2 patients.\nAbbreviations: Abbott Quant, Abbott SARS-CoV-2 IgG II Quant; DiaSorin TrimericS, DiaSorin LIAISON SARS-CoV-2 TrimericS IgG; NPA, negative percent agreement; OPA, overall percent agreement; PPA, positive percent agreement; Roche S, Roche Elecsys anti-SARS-CoV-2 S.\nThe correlations of the quantitative results of the three immunoassays with the % inhibition values of cPass were very strong, with Spearman's rho values of 0.87 for Abbott Quant and Roche S and 0.88 for DiaSorin TrimericS (p < 0.001 for all) (Figure 1).", "Receiver operating characteristics (ROC) curve analysis was performed on the three immunoassays (Figure 2). The areas under the curve (AUC) were 0.993 for Abbott Quant, 0.989 for DiaSorin TrimericS, and 0.983 for Roche S, indicating excellent performance for all three immunoassays. 
Using the ROC curves, we estimated optimized cut-off values based on Youden's index. The optimized cut-offs were 41.9 AU/ml, 3.985 AU/ml, and 0.544 U/ml for Abbott Quant, DiaSorin TrimericS, and Roche S, respectively. The corresponding manufacturer-recommended cut-offs were 50.0, 13.0, and 0.8. Applying the optimized cut-offs, the sensitivity of Abbott Quant improved from 96.0% to 97.1%, with no decrease in specificity (99.3%). The sensitivity of DiaSorin TrimericS improved from 93.6% to 98.3%, with a slight decrease in specificity (from 100.0% to 97.1%). The sensitivity of Roche S improved from 96.0% to 96.5%, with no decrease in specificity (100.0%).\nFigure 2: Receiver operating characteristics (ROC) curve analysis for three immunoassays. The areas under the curve were 0.993 for Abbott Quant (A), 0.989 for DiaSorin TrimericS (B), and 0.983 for Roche S (C), indicating excellent performance for all three immunoassays" ]
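The cPass percent-signal-inhibition formula quoted in the methods entries above is a one-line calculation against the negative control. A minimal sketch in Python with illustrative OD450 readings (the function name and values are ours, not from the kit insert):

```python
def percent_signal_inhibition(sample_ods, negative_control_ods, cutoff=30.0):
    """cPass-style inhibition: (1 - mean OD sample / mean OD negative control) x 100%.

    Returns the inhibition percentage and whether it meets the manufacturer's
    positivity cut-off (>=30%, as stated in the methods above).
    """
    mean_sample = sum(sample_ods) / len(sample_ods)
    mean_negative = sum(negative_control_ods) / len(negative_control_ods)
    inhibition = (1 - mean_sample / mean_negative) * 100
    return inhibition, inhibition >= cutoff

# Strong blocking by neutralizing antibodies lowers the sample OD
inhibition, positive = percent_signal_inhibition([0.21, 0.19], [2.05, 1.95])
print(f"{inhibition:.1f}% inhibition -> {'positive' if positive else 'negative'}")
```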
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Test specimens", "Automated anti‐SARS‐CoV‐2 IgG immunoassay", "GenScript cPass SARS‐CoV‐2 Neutralization Antibody Detection assay", "Precision and linearity assessment", "Statistical analysis", "RESULTS", "Clinical performance", "Repeatability, within‐laboratory imprecision, and linearity", "Correlation between results from three immunoassays", "Comparison to cPass SARS‐CoV‐2 neutralization antibody test", "Receiver operating characteristics analysis", "DISCUSSION", "Supporting information" ]
[ "Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), first reported in Wuhan, China in 2019, caused a worldwide outbreak that is currently ongoing.1 Coronavirus disease 19 (COVID-19), the infectious disease caused by SARS-CoV-2, became not only an unprecedented threat to public health worldwide but also a tragic shock to the economy across the globe.2 The absence of specific treatment options proven to be effective against SARS-CoV-2 aggravated the situation.3 This has resulted in the heightened importance of SARS-CoV-2 diagnostic testing, as quarantine and social distancing have become the primary strategies for control of COVID-19.4\n\nMolecular testing, especially RT-PCR, a reliable tool for detecting active SARS-CoV-2 infection, is the first option for COVID-19 diagnosis.5, 6 Serologic testing, a secondary weapon in the diagnostic arsenal for COVID-19, has been used as a complement to RT-PCR where RT-PCR has limitations, because serologic testing for COVID-19 has advantages such as cost-effectiveness, short turnaround time, high throughput, the ability to detect past infection, and usefulness in resource-limited areas.7\n\nRecently, the countermeasure strategy against COVID-19 has stepped up from detection and quarantine of infection to the active achievement of herd immunity through vaccination, as several vaccines have been approved for emergency use by governments in Europe and the United States and vaccinations are rapidly underway in some countries.8, 9, 10 In line with these shifts, the importance of tests detecting antibodies to SARS-CoV-2, especially neutralizing antibodies representing the protective ability of host immunity, is further emphasized.\nThe majority of SARS-CoV-2 serologic assays use the whole or parts of the spike protein (S1 subunit, S2 subunit, and the receptor binding domain located in the S1 subunit) or the nucleocapsid (N) protein as target antigens.11 Previous studies reported that assays targeting the spike protein showed a better correlation with virus neutralization assays than those targeting the nucleocapsid protein.12, 13 Recently, commercial manufacturers launched quantitative SARS-CoV-2 antibody assays using the spike protein as a target antigen, which can be a pivotal tool for assessing the effect of vaccination. Abbott Architect anti-SARS-CoV-2 IgG II Quant, a new quantitative SARS-CoV-2 IgG immunoassay released by Abbott, received CE mark approval in the EU. Roche also released its new quantitative assay, Elecsys anti-SARS-CoV-2 S, targeting the receptor binding domain (RBD) of the S1 subunit. DiaSorin LIAISON SARS-CoV-2 TrimericS IgG was developed based on the observation that the trimeric form of the spike protein shows greater sensitivity for the detection of SARS-CoV-2 antibodies.14 Many studies have been conducted on the clinical performances of the previous versions of anti-SARS-CoV-2 assays against the N protein (Abbott and Roche) and against the S1/S2 subunit (Diasorin).15, 16, 17, 18 However, the clinical performances of the newly launched SARS-CoV-2 antibody assays against the S1 RBD (Abbott and Roche) or TrimericS (Diasorin) have not been evaluated thoroughly. To the best of our knowledge, the performance of Abbott Architect anti-SARS-CoV-2 IgG II Quant has not been fully evaluated so far. 
The performance of Elecsys anti-SARS-CoV-2 S has been reported in comparison with the previous version (Elecsys anti-SARS-CoV-2 against the N antigen) and showed better sensitivity than Elecsys anti-SARS-CoV-2 against N.19, 20 The performance of LIAISON SARS-CoV-2 TrimericS IgG has been reported once21 and was better than that of the previous version, LIAISON SARS-CoV-2 against S1/S2.15, 16, 17 However, the superiority of clinical performance can be evaluated precisely only when the assays are performed in the same population. We therefore assessed the clinical performance of the three newly developed anti-SARS-CoV-2 assays in the same subjects.\nMeanwhile, the virus neutralization assay using live SARS-CoV-2 is the gold standard method for assessing neutralizing antibodies, but its utility is limited because it is labor-intensive, time-consuming, and requires specialized facilities such as biosafety level 3.4 For this reason, researchers have tried to develop alternatives that are more appropriate for large-scale use in clinical laboratories.22 GenScript cPass SARS-CoV-2 Neutralization Antibody Detection Kit is an enzyme-linked immunosorbent assay (ELISA) based surrogate virus neutralization test (sVNT) that mimics the reaction of the human ACE2 receptor and RBD. It has been reported that the cPass SARS-CoV-2 Neutralization Antibody test presented an excellent correlation with cell-culture-based virus neutralization assays and could be a useful measure of virus-neutralizing activity.23, 24 The correlations of the cPass SARS-CoV-2 Neutralization Antibody test with three automated anti-SARS-CoV-2 assays (Mindray CL-900i against S and N, BioMerieux VIDAS 3 against RBD, and Diasorin LIAISON SARS-CoV-2 against S1/S2) have been reported, with the best correlation for VIDAS 3 (r = 0.75), followed by LIAISON S1/S2 (r = 0.66) and Mindray CL-900i (r = 0.57).25 However, the correlations of the cPass SARS-CoV-2 Neutralization Antibody test with Abbott SARS-CoV-2 IgG II Quant, Roche Elecsys anti-SARS-CoV-2 S, and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG have not been evaluated.\nTherefore, we performed a comparative assessment of three fully automated quantitative assays detecting antibodies against the spike protein: Abbott SARS-CoV-2 IgG II Quant, Roche Elecsys anti-SARS-CoV-2 S, and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG. We evaluated their clinical performance and quantitative correlation with the cPass SARS-CoV-2 Neutralization Antibody test.", "Test specimens: This study was approved by the Institutional Review Board of Seoul National University Hospital (IRB no. 2011-041-1170). All subjects were admitted to Seoul National University Hospital or Boramae Medical Center between February 2020 and January 2021. Leftover patient specimens obtained for routine serologic testing were used. A total of 173 sera from 126 COVID-19 patients were included in this study. The clinical diagnosis of COVID-19 was determined based on clinical symptoms, imaging diagnosis, and laboratory findings including RT-PCR. Among the 126 COVID-19 patients (44 females and 82 males), two (1.6%), 20 (15.9%), 18 (14.3%), 40 (31.7%), and 46 (36.5%) patients were classified by disease severity as asymptomatic, mild, moderate, severe, and critical, respectively. For each subject, age, sex, the number of days from symptom onset to the day the sample was collected, and disease severity determined by WHO interim guidance26 were acquired through electronic medical records. For specificity evaluation, 151 pre-pandemic sera were tested. 
Out of the 151 sera, 98 were from healthy subjects and 53 were from patients with positive results for various infectious markers: 5 anti‐HAV IgG, 5 anti‐T. pallidum IgG, 5 anti‐HCV, 5 anti‐HBc IgG, 5 anti‐CMV IgG, 5 anti‐rubella IgG, 4 anti‐toxoplasma IgG, 3 anti‐HIV IgG, 2 anti‐mycoplasma IgG, 2 M. tuberculosis PCR, 2 RSV PCR, 5 rhinovirus PCR, 1 adenovirus PCR, 1 bocavirus PCR, 1 parainfluenza virus PCR, 1 coronavirus OC43 PCR, and 1 coronavirus 229E PCR.\nAutomated anti‐SARS‐CoV‐2 IgG immunoassay Three fully automated commercial immunoassays were evaluated. The specifications of the three immunoassays are summarized in Table 1. Abbott SARS‐CoV‐2 IgG II Quant (Abbott Laboratories, Sligo, Ireland; hereafter called Abbott Quant) is a chemiluminescent microparticle immunoassay designed for the quantitative determination of IgG antibodies to the RBD of the S1 subunit of the spike protein of SARS‐CoV‐2. The assay was performed on the Abbott Architect i2000SR system (Abbott Laboratories, Abbott Park). Roche Elecsys anti‐SARS‐CoV‐2 S (Roche Diagnostics; hereafter called Roche S) is an electrochemiluminescence immunoassay for the quantitative determination of antibodies to the RBD of the spike protein of SARS‐CoV‐2. The assay was performed on the Roche cobas e601 system (Roche Diagnostics). DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG (DiaSorin, Stillwater; hereafter called DiaSorin TrimericS) is a chemiluminescent immunoassay using magnetic particles coated with recombinant trimeric SARS‐CoV‐2 spike protein for the quantitative determination of IgG antibodies. The assay was performed on the LIAISON XL analyzer (DiaSorin).
All tests were performed according to the manufacturer's instructions.\nSpecifications of the four immunoassays claimed by each manufacturer\nAbbreviations: AMR, analytical measuring range; CI, confidence interval; CLIA, chemiluminescent immunoassay; CLMIA, chemiluminescent microparticle immunoassay; ECLIA, electrochemiluminescent immunoassay; RBD, receptor binding domain; SP, spike protein.\nSensitivity was calculated from patient samples collected 14 days or later after positive PCR results.\nMeasuring range extends up to 80,000 by 1:2 dilution.\nMeasuring range extends up to 2,500 by 1:10 dilution.\nPositive percent agreement with the 50% plaque reduction neutralization test (PRNT50).\nNegative percent agreement with PRNT50.\nNot available (approved for qualitative detection).\nGenScript cPass SARS‐CoV‐2 Neutralization Antibody Detection assay The GenScript cPass SARS‐CoV‐2 Neutralization Antibody Detection Kit (GenScript, Piscataway; hereafter called cPass) is a blocking ELISA, which mimics the virus neutralization process. The kit utilizes horseradish peroxidase (HRP)‐conjugated recombinant SARS‐CoV‐2 RBD protein and the human angiotensin‐converting enzyme 2 (ACE2) receptor protein. The interaction between HRP‐RBD and ACE2 can be blocked by neutralizing antibodies against the SARS‐CoV‐2 RBD. The assay was performed as follows, according to the manufacturer's instructions.\nTest samples, negative control, and positive control were diluted 1:10 with sample dilution buffer. HRP‐RBD was diluted 1:1000 with HRP dilution buffer.
Diluted samples and diluted HRP‐RBD solution were mixed at a volume ratio of 1:1 and incubated at 37°C for 30 min. Then 100 μl of the mixture was added to the capture plate coated with the human ACE2 receptor protein and incubated at 37°C for 15 min. After the incubation, the plate was washed 4 times with 260 μl of wash buffer. Next, 100 μl of tetramethylbenzidine solution was added and incubated at room temperature for 15 min. Finally, 50 μl of stop solution was added. The absorbance of the final solution was read at 450 nm in a microplate reader.\nSignal inhibition was calculated as follows: percent signal inhibition = (1 − average optical density of the sample / average optical density of the negative control) × 100%.\nTest results were interpreted as positive when the percent signal inhibition was ≥30%, the cut‐off claimed by the manufacturer.
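To make the interpretation rule concrete, here is a minimal sketch in R (the language used for the statistical analyses in this study); the OD450 values are hypothetical, not study data.

```r
# Minimal sketch of the cPass interpretation rule; all OD450 values are
# hypothetical and for illustration only.
od_sample   <- c(0.10, 0.85, 0.60)  # duplicate-averaged sample ODs (assumed)
od_negative <- 1.05                 # average OD of the negative control (assumed)

pct_inhibition <- (1 - od_sample / od_negative) * 100           # percent signal inhibition
result <- ifelse(pct_inhibition >= 30, "positive", "negative")  # manufacturer cut-off
data.frame(od_sample, pct_inhibition = round(pct_inhibition, 1), result)
```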
Precision and linearity assessment The precision assessment was performed for the three quantitative assays according to the CLSI EP15‐A3 protocol,27 using one quality control material and two pooled patient sera, run five times a day for five consecutive days. Repeatability and within‐laboratory precision were estimated using ANOVA and compared with the values claimed by the manufacturers.\nLinearity assessment was performed for the three quantitative assays according to the CLSI EP6‐A protocol.28 Two patient sera with high (H) and low (L) concentrations were mixed at ratios of 4H, 1L + 3H, 2L + 2H, 3L + 1H, and 4L. All levels were measured in duplicate.
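The ANOVA-based variance decomposition behind the EP15‐A3 precision estimate, and the admixture arithmetic of the EP6‐A linearity check, can be sketched in R as follows; the 5 × 5 measurements, pool concentrations, and duplicate means are simulated placeholders, not the study's data.

```r
# EP15-A3-style precision estimate from a one-way ANOVA; data are simulated.
set.seed(1)
prec <- data.frame(day   = factor(rep(1:5, each = 5)),   # 5 days x 5 replicates
                   value = rnorm(25, mean = 100, sd = 3))

aov_tab    <- anova(lm(value ~ day, data = prec))
ms_between <- aov_tab["day", "Mean Sq"]        # between-day mean square
ms_within  <- aov_tab["Residuals", "Mean Sq"]  # within-day mean square
n_rep      <- 5                                # replicates per day

s_r  <- sqrt(ms_within)                          # repeatability SD
s_b2 <- max((ms_between - ms_within) / n_rep, 0) # between-day variance component
s_wl <- sqrt(s_r^2 + s_b2)                       # within-laboratory SD
round(100 * c(repeatability = s_r, within_lab = s_wl) / mean(prec$value), 2)  # CV%

# EP6-A-style admixture check; pure-pool results and duplicate means are illustrative.
frac_low <- c(0, 0.25, 0.5, 0.75, 1)         # 4H, 1L+3H, 2L+2H, 3L+1H, 4L
high <- 820; low <- 25                       # measured pure-pool concentrations
expected <- (1 - frac_low) * high + frac_low * low
measured <- c(815, 640, 418, 230, 26)        # duplicate means at each level
round(100 * measured / expected, 1)          # % recovery; acceptance 100% +/- 10%
summary(lm(measured ~ expected))$r.squared   # linearity R^2
```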
Statistical analysis For the three immunoassays, sensitivity and specificity were calculated. The sensitivity of the subgroup sampled 14 days or later after symptom onset was also calculated and compared with the manufacturers' claims, in line with the recommendation of the Infectious Diseases Society of America guidelines on the diagnosis of COVID‐19, which suggest against using serologic testing to diagnose SARS‐CoV‐2 infection during the first two weeks (14 days) following symptom onset.29 Concordance between the three immunoassays and cPass was assessed using overall, positive, and negative percent agreement as well as Cohen's kappa statistics. Cohen's kappa is a robust statistic of inter‐rater reliability, useful for assessing the level of agreement between two diagnostic assays. Ranging between 0 and 1, a kappa value <0.40 represents poor agreement, 0.40–0.59 fair agreement, 0.60–0.74 good agreement, and ≥0.75 excellent agreement.30 We evaluated the correlations of the quantitative values of the three immunoassays with each other and with the % inhibition values of cPass using Spearman's rank‐order correlation coefficient (rho). All statistical analyses were performed using R version 4.0.5 (R Foundation for Statistical Computing).", "This study was approved by the Institutional Review Board of Seoul National University Hospital (IRB no. 2011‐041‐1170). All subjects were admitted to Seoul National University Hospital or Boramae Medical Center between February 2020 and January 2021. Leftover patient specimens obtained for routine serologic testing were used. A total of 173 sera from 126 COVID‐19 patients were included in this study. Clinical diagnosis of COVID‐19 was determined based on clinical symptoms, imaging, and laboratory findings including RT‐PCR. Among the 126 COVID‐19 patients (44 females and 82 males), two (1.6%), 20 (15.9%), 18 (14.3%), 40 (31.7%), and 46 (36.5%) patients were classified by disease severity as asymptomatic, mild, moderate, severe, and critical, respectively. For each subject, age, sex, the number of days from symptom onset to sample collection, and disease severity determined by WHO interim guidance26 were acquired through electronic medical records. For specificity evaluation, 151 pre‐pandemic sera were tested. Out of the 151 sera, 98 were from healthy subjects and 53 were from patients with positive results for various infectious markers: 5 anti‐HAV IgG, 5 anti‐T. pallidum IgG, 5 anti‐HCV, 5 anti‐HBc IgG, 5 anti‐CMV IgG, 5 anti‐rubella IgG, 4 anti‐toxoplasma IgG, 3 anti‐HIV IgG, 2 anti‐mycoplasma IgG, 2 M. tuberculosis PCR, 2 RSV PCR, 5 rhinovirus PCR, 1 adenovirus PCR, 1 bocavirus PCR, 1 parainfluenza virus PCR, 1 coronavirus OC43 PCR, and 1 coronavirus 229E PCR.", "Three fully automated commercial immunoassays were evaluated. The specifications of the three immunoassays are summarized in Table 1. Abbott SARS‐CoV‐2 IgG II Quant (Abbott Laboratories, Sligo, Ireland; hereafter called Abbott Quant) is a chemiluminescent microparticle immunoassay designed for the quantitative determination of IgG antibodies to the RBD of the S1 subunit of the spike protein of SARS‐CoV‐2. The assay was performed on the Abbott Architect i2000SR system (Abbott Laboratories, Abbott Park). Roche Elecsys anti‐SARS‐CoV‐2 S (Roche Diagnostics; hereafter called Roche S) is an electrochemiluminescence immunoassay for the quantitative determination of antibodies to the RBD of the spike protein of SARS‐CoV‐2. The assay was performed on the Roche cobas e601 system (Roche Diagnostics). DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG (DiaSorin, Stillwater; hereafter called DiaSorin TrimericS) is a chemiluminescent immunoassay using magnetic particles coated with recombinant trimeric SARS‐CoV‐2 spike protein for the quantitative determination of IgG antibodies. The assay was performed on the LIAISON XL analyzer (DiaSorin). All tests were performed according to the manufacturer's instructions.\nSpecifications of the four immunoassays claimed by each manufacturer\nAbbreviations: AMR, analytical measuring range; CI, confidence interval; CLIA, chemiluminescent immunoassay; CLMIA, chemiluminescent microparticle immunoassay; ECLIA, electrochemiluminescent immunoassay; RBD, receptor binding domain; SP, spike protein.\nSensitivity was calculated from patient samples collected 14 days or later after positive PCR results.\nMeasuring range extends up to 80,000 by 1:2 dilution.\nMeasuring range extends up to 2,500 by 1:10 dilution.\nPositive percent agreement with the 50% plaque reduction neutralization test (PRNT50).\nNegative percent agreement with PRNT50.\nNot available (approved for qualitative detection).", "The GenScript cPass SARS‐CoV‐2 Neutralization Antibody Detection Kit (GenScript, Piscataway; hereafter called cPass) is a blocking ELISA, which mimics the virus neutralization process. The kit utilizes horseradish peroxidase (HRP)‐conjugated recombinant SARS‐CoV‐2 RBD protein and the human angiotensin‐converting enzyme 2 (ACE2) receptor protein. The interaction between HRP‐RBD and ACE2 can be blocked by neutralizing antibodies against the SARS‐CoV‐2 RBD. The assay was performed as follows, according to the manufacturer's instructions.\nTest samples, negative control, and positive control were diluted 1:10 with sample dilution buffer.
HRP‐RBD was diluted 1:1000 with HRP dilution buffer. Diluted samples and diluted HRP‐RBD solution were mixed at a volume ratio of 1:1 and incubated at 37°C for 30 min. Then 100 μl of the mixture was added to the capture plate coated with the human ACE2 receptor protein and incubated at 37°C for 15 min. After the incubation, the plate was washed 4 times with 260 μl of wash buffer. Next, 100 μl of tetramethylbenzidine solution was added and incubated at room temperature for 15 min. Finally, 50 μl of stop solution was added. The absorbance of the final solution was read at 450 nm in a microplate reader.\nSignal inhibition was calculated as follows: percent signal inhibition = (1 − average optical density of the sample / average optical density of the negative control) × 100%.\nTest results were interpreted as positive when the percent signal inhibition was ≥30%, the cut‐off claimed by the manufacturer.", "The precision assessment was performed for the three quantitative assays according to the CLSI EP15‐A3 protocol,27 using one quality control material and two pooled patient sera, run five times a day for five consecutive days. Repeatability and within‐laboratory precision were estimated using ANOVA and compared with the values claimed by the manufacturers.\nLinearity assessment was performed for the three quantitative assays according to the CLSI EP6‐A protocol.28 Two patient sera with high (H) and low (L) concentrations were mixed at ratios of 4H, 1L + 3H, 2L + 2H, 3L + 1H, and 4L. All levels were measured in duplicate.", "For the three immunoassays, sensitivity and specificity were calculated. The sensitivity of the subgroup sampled 14 days or later after symptom onset was also calculated and compared with the manufacturers' claims, in line with the recommendation of the Infectious Diseases Society of America guidelines on the diagnosis of COVID‐19, which suggest against using serologic testing to diagnose SARS‐CoV‐2 infection during the first two weeks (14 days) following symptom onset.29 Concordance between the three immunoassays and cPass was assessed using overall, positive, and negative percent agreement as well as Cohen's kappa statistics. Cohen's kappa is a robust statistic of inter‐rater reliability, useful for assessing the level of agreement between two diagnostic assays. Ranging between 0 and 1, a kappa value <0.40 represents poor agreement, 0.40–0.59 fair agreement, 0.60–0.74 good agreement, and ≥0.75 excellent agreement.30 We evaluated the correlations of the quantitative values of the three immunoassays with each other and with the % inhibition values of cPass using Spearman's rank‐order correlation coefficient (rho). All statistical analyses were performed using R version 4.0.5 (R Foundation for Statistical Computing).",
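As a concrete illustration of these agreement statistics, the base-R sketch below computes positive, negative, and overall percent agreement, Cohen's kappa from the resulting 2 × 2 table, and Spearman's rho; the paired calls and quantitative values are hypothetical, not study data.

```r
# Agreement and correlation statistics as described above; data are hypothetical.
assay <- factor(c(1, 1, 1, 0, 1, 0, 1, 1, 0, 1), levels = c(0, 1))  # immunoassay call
cpass <- factor(c(1, 1, 0, 0, 1, 0, 1, 1, 1, 1), levels = c(0, 1))  # cPass call

tab <- table(assay, cpass)
ppa <- 100 * tab["1", "1"] / sum(tab[, "1"])  # positive percent agreement vs. cPass
npa <- 100 * tab["0", "0"] / sum(tab[, "0"])  # negative percent agreement vs. cPass
opa <- 100 * sum(diag(tab)) / sum(tab)        # overall percent agreement

po    <- sum(diag(tab)) / sum(tab)                      # observed agreement
pe    <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # chance agreement
kappa <- (po - pe) / (1 - pe)                           # Cohen's kappa

# Spearman's rho between a quantitative antibody value and cPass % inhibition
titre      <- c(120, 85, 3, 0.2, 400, 0.1, 95, 250, 14, 60)
inhibition <- c(92, 88, 25, 5, 97, 3, 90, 95, 40, 80)
rho <- cor(titre, inhibition, method = "spearman")

round(c(PPA = ppa, NPA = npa, OPA = opa, kappa = kappa, rho = rho), 3)
```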
"Clinical performance The clinical performances of the three immunoassays are shown in Table 2. The sensitivities of Abbott Quant, DiaSorin TrimericS, and Roche S on 173 sera from 126 COVID‐19 patients were 96.0% (95% CI, 91.8%–98.4%), 93.6% (95% CI, 88.9%–96.8%), and 96.0% (95% CI, 91.9%–98.4%), respectively. The sensitivities calculated from the subgroup sampled 14 days or later after symptom onset were 97.6% (95% CI, 93.2%–99.5%), 96.8% (95% CI, 92.1%–99.1%), and 97.6% (95% CI, 93.2%–99.5%), respectively. The specificities of Abbott Quant, DiaSorin TrimericS, and Roche S on 151 pre‐pandemic sera were 99.3% (95% CI, 96.4%–100.0%), 100.0% (95% CI, 97.6%–100.0%), and 100.0% (95% CI, 97.6%–100.0%), respectively. No positive result was observed in the cross‐reactivity panel.\nClinical performance of three immunoassays\nAbbreviations: Abbott Quant, Abbott SARS‐CoV‐2 IgG II Quant; D, days after onset of symptoms; DiaSorin TrimericS, DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG; GenScript cPass, GenScript cPass SARS‐CoV‐2 Neutralization Antibody Detection Kit; NT, not tested; Other, other infection; Roche S, Roche Elecsys anti‐SARS‐CoV‐2 S.
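The exact 95% confidence intervals reported above can be reproduced with base R's binom.test (Clopper–Pearson). The counts below are back-calculated from the reported percentages (166/173 ≈ 96.0%, 150/151 ≈ 99.3%) and are shown only to illustrate the computation.

```r
# Exact (Clopper-Pearson) 95% CIs; counts are back-calculated from the reported
# rates and are illustrative, not taken from the source data tables.
round(100 * binom.test(166, 173)$conf.int, 1)  # sensitivity ~96.0% on all sera
round(100 * binom.test(150, 151)$conf.int, 1)  # specificity ~99.3% on pre-pandemic sera
```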
Repeatability, within‐laboratory imprecision, and linearity The repeatability and within‐laboratory imprecision of the three immunoassays are shown in Table 3. The within‐laboratory precisions of Abbott Quant and Roche S were all <4.0%. The within‐laboratory precisions of DiaSorin TrimericS were 2.9%–8.2%, slightly higher than those claimed by the manufacturer.\nRepeatability and within‐laboratory imprecision for three quantitative immunoassays\nAbbreviations: Conc., concentration; CV, coefficient of variation; Lab, laboratory; PS, pooled serum.\nThe linearity assessment of the three immunoassays (Figure S1) showed linearity across the analytical measurement range (R2 = 0.9992, 0.9947, and 0.9966 for Abbott Quant, DiaSorin TrimericS, and Roche S, respectively). The % recovery of Abbott Quant, DiaSorin TrimericS, and Roche S was acceptable in all cases (criterion: 100% ± 10%), ranging from 96.4% to 100.0%, 100.0% to 109.7%, and 92.8% to 102.9%, respectively.\nCorrelation between results from three immunoassays The correlations between the results of the three immunoassays are shown in Figure 1. Roche S correlated well with Abbott Quant (rho = 0.88) and DiaSorin TrimericS (rho = 0.85). Abbott Quant correlated well with DiaSorin TrimericS (rho = 0.9).\nSpearman correlation between three immunoassays and cPass neutralization antibody. (A–C) Abbott SARS‐CoV‐2 IgG II Quant (A), Roche Elecsys anti‐SARS‐CoV‐2 S (B), and DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG (C) demonstrated strong correlations with the GenScript cPass surrogate virus neutralization test (Spearman's rho of 0.87, 0.87, and 0.88, respectively). (D–F) The three immunoassays demonstrated strong correlations with each other: Spearman's rho of 0.88 between Abbott SARS‐CoV‐2 IgG II Quant and Roche Elecsys anti‐SARS‐CoV‐2 S (D); 0.9 between Abbott SARS‐CoV‐2 IgG II Quant and DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG (E); and 0.85 between Roche Elecsys anti‐SARS‐CoV‐2 S and DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG (F).\nComparison to cPass SARS‐CoV‐2 neutralization antibody test The concordances between the qualitative results of the three immunoassays and the cPass SARS‐CoV‐2 neutralization test in SARS‐CoV‐2 positive patients are shown in Table 4.
The positive percent agreements of the three immunoassays with cPass were 97.6%–99.4%, and the negative percent agreements were 55.6%–77.8%. The overall percent agreements of the three immunoassays with cPass were 96.5%–97.7%, with Cohen's kappa values of 0.61–0.74. The positive percent agreements among the three immunoassays were 98.8%–99.4%, the negative percent agreements were 54.5%–71.4%, and the overall percent agreements were 96.5%–97.7%, with Cohen's kappa values of 0.65–0.7.\nConcordance between the qualitative results of three immunoassays and the cPass SARS‐CoV‐2 Neutralization Antibody Detection Kit in 173 sera from SARS‐CoV‐2 patients\nAbbreviations: Abbott Quant, Abbott SARS‐CoV‐2 IgG II Quant; DiaSorin TrimericS, DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG; NPA, negative percent agreement; OPA, overall percent agreement; PPA, positive percent agreement; Roche S, Roche Elecsys anti‐SARS‐CoV‐2 S.\nThe correlations of the quantitative results of the three immunoassays with the % inhibition values of cPass were very strong, with Spearman's rho values of 0.87 for Abbott Quant and Roche S and 0.88 for DiaSorin TrimericS (p < 0.001 for all) (Figure 1).\nReceiver operating characteristics analysis Receiver operating characteristic (ROC) curve analysis was performed for the three immunoassays (Figure 2). The areas under the curve (AUC) were 0.993 for Abbott Quant, 0.989 for DiaSorin TrimericS, and 0.983 for Roche S, indicating excellent performance for all three immunoassays. Using the ROC curves, we derived optimized cut‐off values based on Youden's index. The optimized cut‐offs were 41.9 AU/ml, 3.985 AU/ml, and 0.544 U/ml for Abbott Quant, DiaSorin TrimericS, and Roche S, respectively; the corresponding manufacturer‐recommended cut‐offs were 50.0, 13.0, and 0.8. Applying the optimized cut‐offs, the sensitivity of Abbott Quant improved from 96.0% to 97.1% with no decrease in specificity (99.3%); the sensitivity of DiaSorin TrimericS improved from 93.6% to 98.3% with a slight decrease in specificity (from 100.0% to 97.1%); and the sensitivity of Roche S improved from 96.0% to 96.5% with no decrease in specificity (100.0%).\nReceiver operating characteristic (ROC) curve analysis for three immunoassays. Areas under the curve were 0.993 for Abbott Quant (A), 0.989 for DiaSorin TrimericS (B), and 0.983 for Roche S (C), indicating excellent performance for all three immunoassays.",
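A Youden-index cut-off of this kind can be obtained with the pROC package in R, as sketched below; the case/control labels and assay values are simulated, so the printed threshold is illustrative rather than one of the study's optimized cut-offs.

```r
# ROC analysis with a Youden-optimal cut-off using the pROC package; data simulated.
library(pROC)
set.seed(7)
status <- c(rep(1, 60), rep(0, 60))              # 1 = COVID-19 serum, 0 = pre-pandemic
value  <- c(rlnorm(60, meanlog = 4, sdlog = 1),  # hypothetical quantitative results
            rlnorm(60, meanlog = -1, sdlog = 1))

roc_obj <- roc(status, value, quiet = TRUE)
auc(roc_obj)                                     # area under the ROC curve
coords(roc_obj, "best", best.method = "youden",  # cut-off maximising sens + spec - 1
       ret = c("threshold", "sensitivity", "specificity"))
```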
"The clinical performances of the three immunoassays are shown in Table 2. The sensitivities of Abbott Quant, DiaSorin TrimericS, and Roche S on 173 sera from 126 COVID‐19 patients were 96.0% (95% CI, 91.8%–98.4%), 93.6% (95% CI, 88.9%–96.8%), and 96.0% (95% CI, 91.9%–98.4%), respectively. The sensitivities calculated from the subgroup sampled 14 days or later after symptom onset were 97.6% (95% CI, 93.2%–99.5%), 96.8% (95% CI, 92.1%–99.1%), and 97.6% (95% CI, 93.2%–99.5%), respectively. The specificities of Abbott Quant, DiaSorin TrimericS, and Roche S on 151 pre‐pandemic sera were 99.3% (95% CI, 96.4%–100.0%), 100.0% (95% CI, 97.6%–100.0%), and 100.0% (95% CI, 97.6%–100.0%), respectively. No positive result was observed in the cross‐reactivity panel.\nClinical performance of three immunoassays\nAbbreviations: Abbott Quant, Abbott SARS‐CoV‐2 IgG II Quant; D, days after onset of symptoms; DiaSorin TrimericS, DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG; GenScript cPass, GenScript cPass SARS‐CoV‐2 Neutralization Antibody Detection Kit; NT, not tested; Other, other infection; Roche S, Roche Elecsys anti‐SARS‐CoV‐2 S.", "The repeatability and within‐laboratory imprecision of the three immunoassays are shown in Table 3. The within‐laboratory precisions of Abbott Quant and Roche S were all <4.0%. The within‐laboratory precisions of DiaSorin TrimericS were 2.9%–8.2%, slightly higher than those claimed by the manufacturer.\nRepeatability and within‐laboratory imprecision for three quantitative immunoassays\nAbbreviations: Conc., concentration; CV, coefficient of variation; Lab, laboratory; PS, pooled serum.\nThe linearity assessment of the three immunoassays (Figure S1) showed linearity across the analytical measurement range (R2 = 0.9992, 0.9947, and 0.9966 for Abbott Quant, DiaSorin TrimericS, and Roche S, respectively). The % recovery of Abbott Quant, DiaSorin TrimericS, and Roche S was acceptable in all cases (criterion: 100% ± 10%), ranging from 96.4% to 100.0%, 100.0% to 109.7%, and 92.8% to 102.9%, respectively.", "The correlations between the results of the three immunoassays are shown in Figure 1. Roche S correlated well with Abbott Quant (rho = 0.88) and DiaSorin TrimericS (rho = 0.85). Abbott Quant correlated well with DiaSorin TrimericS (rho = 0.9).\nSpearman correlation between three immunoassays and cPass neutralization antibody. (A–C) Abbott SARS‐CoV‐2 IgG II Quant (A), Roche Elecsys anti‐SARS‐CoV‐2 S (B), and DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG (C) demonstrated strong correlations with the GenScript cPass surrogate virus neutralization test (Spearman's rho of 0.87, 0.87, and 0.88, respectively). (D–F) The three immunoassays demonstrated strong correlations with each other: Spearman's rho of 0.88 between Abbott SARS‐CoV‐2 IgG II Quant and Roche Elecsys anti‐SARS‐CoV‐2 S (D); 0.9 between Abbott SARS‐CoV‐2 IgG II Quant and DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG (E); and 0.85 between Roche Elecsys anti‐SARS‐CoV‐2 S and DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG (F).", "The concordances between the qualitative results of the three immunoassays and the cPass SARS‐CoV‐2 neutralization test in SARS‐CoV‐2 positive patients are shown in Table 4. The positive percent agreements of the three immunoassays with cPass were 97.6%–99.4%, and the negative percent agreements were 55.6%–77.8%.
The overall percent agreements of the three immunoassays with cPass were 96.5%–97.7%, with Cohen's kappa values of 0.61–0.74. The positive percent agreements among the three immunoassays were 98.8%–99.4%, the negative percent agreements were 54.5%–71.4%, and the overall percent agreements were 96.5%–97.7%, with Cohen's kappa values of 0.65–0.7.\nConcordance between the qualitative results of three immunoassays and the cPass SARS‐CoV‐2 Neutralization Antibody Detection Kit in 173 sera from SARS‐CoV‐2 patients\nAbbreviations: Abbott Quant, Abbott SARS‐CoV‐2 IgG II Quant; DiaSorin TrimericS, DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG; NPA, negative percent agreement; OPA, overall percent agreement; PPA, positive percent agreement; Roche S, Roche Elecsys anti‐SARS‐CoV‐2 S.\nThe correlations of the quantitative results of the three immunoassays with the % inhibition values of cPass were very strong, with Spearman's rho values of 0.87 for Abbott Quant and Roche S and 0.88 for DiaSorin TrimericS (p < 0.001 for all) (Figure 1).", "Receiver operating characteristic (ROC) curve analysis was performed for the three immunoassays (Figure 2). The areas under the curve (AUC) were 0.993 for Abbott Quant, 0.989 for DiaSorin TrimericS, and 0.983 for Roche S, indicating excellent performance for all three immunoassays. Using the ROC curves, we derived optimized cut‐off values based on Youden's index. The optimized cut‐offs were 41.9 AU/ml, 3.985 AU/ml, and 0.544 U/ml for Abbott Quant, DiaSorin TrimericS, and Roche S, respectively; the corresponding manufacturer‐recommended cut‐offs were 50.0, 13.0, and 0.8. Applying the optimized cut‐offs, the sensitivity of Abbott Quant improved from 96.0% to 97.1% with no decrease in specificity (99.3%); the sensitivity of DiaSorin TrimericS improved from 93.6% to 98.3% with a slight decrease in specificity (from 100.0% to 97.1%); and the sensitivity of Roche S improved from 96.0% to 96.5% with no decrease in specificity (100.0%).\nReceiver operating characteristic (ROC) curve analysis for three immunoassays. Areas under the curve were 0.993 for Abbott Quant (A), 0.989 for DiaSorin TrimericS (B), and 0.983 for Roche S (C), indicating excellent performance for all three immunoassays.", "In this study, we compared three commercially available automated quantitative immunoassays for the detection of antibodies to SARS‐CoV‐2, together with cPass as a surrogate for viral neutralization. In the sensitivity test, all assays demonstrated excellent sensitivity greater than 90%, with Abbott Quant and Roche S slightly more sensitive than DiaSorin TrimericS. The sensitivities calculated from the subgroup sampled 14 days or later after symptom onset, the period reported as the window for SARS‐CoV‐2 antibody formation,31 were all slightly lower than those claimed by the manufacturers (Abbott Quant, 97.6% vs. 99.4%; DiaSorin TrimericS, 96.8% vs. 98.7%; Roche S, 97.6% vs. 98.8%). Four sera from four patients gave negative results in this subgroup. Two samples were negative on all assays, and the other two were detected at very low quantitative values near the cut‐off, one by Roche S and one by Abbott Quant. Of these four patients, two with underlying hematologic malignancy were suspected of impaired antibody development due to their immunocompromised status. The other two patients were asymptomatic or showed very mild symptoms.
Previous studies have reported that antibodies were not detected in 10%–20% of mild cases of COVID‐19.32, 33 Considering that 28 sera from mild cases were included in this study, the sensitivities of the three immunoassays were satisfactory.\nIn prior reports evaluating the clinical performance of the previous versions of the anti‐SARS‐CoV‐2 assays against N protein (Abbott and Roche) and against the S1/S2 subunits (DiaSorin),15, 16, 17, 18 the sensitivities of Abbott anti‐SARS‐CoV‐2 (against N) were 86.5%–90.8% and those of Roche anti‐SARS‐CoV‐2 (against N) were 83.0%–93.0%. The sensitivities of DiaSorin LIAISON anti‐SARS‐CoV‐2 (against S1/S2) were 70.0%–85.3%, slightly lower than those of the Abbott and Roche anti‐SARS‐CoV‐2 (N) assays.15, 16, 17 In a report evaluating the two Roche anti‐SARS‐CoV‐2 assays against N and S simultaneously,19 anti‐SARS‐CoV‐2 S showed higher sensitivity than anti‐SARS‐CoV‐2 against N (93.0% vs. 89.0%). In a recent report, the sensitivity of DiaSorin anti‐SARS‐CoV‐2 TrimericS was 99.4%, higher than that of the previous DiaSorin assay against S1/S2.21 In our study, sensitivity was highest for Roche S and Abbott Quant (96.0%), followed by DiaSorin TrimericS (93.6%). All three immunoassays showed higher sensitivities than reported for their previous versions (Abbott and Roche against N, and DiaSorin against S1/S2). The sensitivity of DiaSorin TrimericS in this study improved from 93.6% to 98.3%, with a slight decrease in specificity (from 100.0% to 97.1%), when the cut‐off value was adjusted from 13.0 AU/ml to 3.985 AU/ml, which suggests no substantial difference in sensitivity among the three immunoassays.\nThe sensitivity of the cPass sVNT has been reported to be higher (93%) than those of Abbott anti‐SARS‐CoV‐2 (N) (89%) and Roche anti‐SARS‐CoV‐2 total (N) (83%).17 In our study, cPass showed sensitivity (94.8%) similar to the other three immunoassays (93.6%–96.0%), which is consistent with previous reports, considering the lower sensitivity of the Abbott and Roche anti‐SARS‐CoV‐2 (N) assays in prior studies.15, 16, 17, 18\n\nIn the specificity test conducted with 151 pre‐pandemic samples, all three immunoassays showed remarkable specificity, with only one positive result, from Abbott Quant. These results are consistent with the manufacturers' claims and with previous studies, which report superior specificity for the new versions of the anti‐SARS‐CoV‐2 assays (range 99.8%–100.0%).19, 20, 21 Among the previous versions of the three immunoassays, DiaSorin anti‐SARS‐CoV‐2 against S1/S2 showed slightly lower specificities than Abbott anti‐SARS‐CoV‐2 (N) and Roche anti‐SARS‐CoV‐2 total (N).15, 16, 17 For the new DiaSorin assay against trimeric S, specificity was excellent (99.8%).21 The specificities of both the previous and new versions of Roche anti‐SARS‐CoV‐2 were excellent (100.0% for both).19, 20\n\nThe imprecision of the new Roche anti‐SARS‐CoV‐2 S has been reported as 1.06% at 9.06 U/ml.34 The imprecision of the new DiaSorin assay against trimeric S averaged 4.85% (range 3.6%–5.8%) at values ranging from 5 to 591 AU/L,21 higher than that of Roche S. In our study, the imprecision for the low‐ and high‐level pooled sera was 8.2% and 5.2% for DiaSorin TrimericS and 2.5% and 3.2% for Roche S.
The higher imprecision of DiaSorin TrimericS compared with Roche S in our study is consistent with previous reports.21, 34 The imprecision of Abbott Quant, first reported in this study, was similar to that of Roche S.\nThe correlations of the results of the three immunoassays with each other were excellent (rho = 0.85–0.9). In comparison with cPass, all three immunoassays showed overall percent agreement above 95%. Cohen's kappa statistics were 0.61 for Abbott Quant, 0.74 for Roche S, and 0.68 for DiaSorin TrimericS, all denoting good agreement. Although cPass is not intended to be used as a quantitative assay, strong correlations between the quantitative values of the three assays and the % inhibition values of cPass were found in this study, with Spearman's rho values of 0.87–0.88. In previous reports, the Abbott SARS‐CoV‐2 IgG assay (targeting the nucleocapsid antigen) and the Roche anti‐SARS‐CoV‐2 total antibody assay (targeting the nucleocapsid antigen) showed weaker correlations with neutralizing antibody than DiaSorin SARS‐CoV‐2 IgG (targeting the S1/S2 subunits), the Euroimmun SARS‐CoV‐2 IgG ELISA (targeting the S1 subunit), or the Siemens SARS‐CoV‐2 total antibody assay (targeting the RBD).12, 13 The newly launched Abbott Quant and Roche S (both targeting the RBD) showed strong correlations with cPass in this study, as did DiaSorin TrimericS (targeting the trimeric spike protein). All three immunoassays can be applied to assess the immune response to vaccination because the SARS‐CoV‐2 vaccines use the RBD as an immunogen.\nThe Abbott Architect i and Roche cobas systems provide on‐board dilutional testing, whereas the DiaSorin LIAISON analyzer does not. In this study, Abbott Quant required dilution and retesting for four of the 173 samples (2.3%) and Roche S for 58 samples (33.5%), entailing the inconvenience of additional sample and reagent use, especially for Roche S. DiaSorin TrimericS reported 11 of the 173 samples (6.4%) as above the limit of quantitation. Although direct comparison is not possible because the quantitative units of the three assays are not uniform, the Abbott Quant assay appears to have the highest upper limit of quantitation without dilution.\nThis study had some limitations. (1) A culture‐based virus neutralization test was not performed because it requires highly specialized facilities; instead, the cPass sVNT was used as a substitute, based on previous studies demonstrating excellent concordance between the sVNT and conventional virus neutralization tests.23, 24, 35 (2) Few asymptomatic patients were included (2/173) because the patient group consisted mainly of inpatients, so the results of this study should be applied with caution to populations containing a large proportion of asymptomatic patients.\nNevertheless, we characterized the performance of the new versions of three automated quantitative immunoassays detecting antibodies against the SARS‐CoV‐2 spike protein (Abbott and Roche) or trimeric S (DiaSorin). Their sensitivities appeared improved compared with the previous versions of the three immunoassays. Total imprecision was slightly higher for DiaSorin TrimericS than for Roche S or Abbott Quant. The correlations among the results of the three immunoassays were good, and we also demonstrated strong correlations of the three immunoassays with the sVNT.
These high‐throughput immunoassays are expected to be valuable for complementing RT‐PCR in diagnosis, evaluating the response to vaccination, and assessing herd immunity in the future.", "Figure S1" ]
[ null, "materials-and-methods", null, null, null, null, null, "results", null, null, null, null, null, "discussion", "supplementary-material" ]
[ "immunoassays", "neutralizing antibody", "SARS‐CoV‐2", "spike antigen", "virus neutralization test" ]
INTRODUCTION: Severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2), first reported in Wuhan, China in 2019, caused a worldwide outbreak that is currently ongoing.1 Coronavirus disease 2019 (COVID‐19), the infectious disease caused by SARS‐CoV‐2, became not only an unprecedented threat to public health worldwide but also a tragic shock to the global economy.2 The absence of treatment options proven to be effective against SARS‐CoV‐2 aggravated the situation.3 This heightened the importance of SARS‐CoV‐2 diagnostic testing, as quarantine and social distancing became the primary strategies for control of COVID‐19.4 Molecular testing, especially RT‐PCR, a reliable tool for detecting active SARS‐CoV‐2 infection, is the first option for COVID‐19 diagnosis.5, 6 Serologic testing, a secondary weapon in the diagnostic arsenal for COVID‐19, is used as a complement to RT‐PCR where RT‐PCR has limitations, because it offers advantages such as cost‐effectiveness, short turnaround time, high throughput, the ability to detect past infection, and usefulness in resource‐limited areas.7 Recently, the countermeasure strategy against COVID‐19 has stepped up from detection and quarantine of infection to active pursuit of herd immunity through vaccination, as several vaccines have been approved for emergency use in Europe and the United States and vaccinations are rapidly underway in some countries.8, 9, 10 In line with these shifts, the importance of tests detecting antibodies to SARS‐CoV‐2, especially neutralizing antibodies representing the protective ability of host immunity, is further emphasized. The majority of SARS‐CoV‐2 serologic assays use whole or partial spike protein (the S1 subunit, the S2 subunit, or the receptor binding domain located in the S1 subunit) or nucleocapsid (N) protein as target antigens.11 Previous studies reported that assays targeting spike protein showed a better correlation with virus neutralization assays than those targeting nucleocapsid protein.12, 13 Recently, commercial manufacturers launched quantitative SARS‐CoV‐2 antibody assays using spike protein as the target antigen, which can be pivotal tools for assessing the effect of vaccination. Abbott Architect anti‐SARS‐CoV‐2 IgG II Quant, a new quantitative SARS‐CoV‐2 IgG immunoassay released by Abbott, received CE mark approval in the European Union. Roche also released a new quantitative assay, Elecsys anti‐SARS‐CoV‐2 S, targeting the receptor binding domain (RBD) of the S1 subunit. DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG was developed based on the observation that the trimeric form of the spike protein offers greater sensitivity for the detection of SARS‐CoV‐2 antibodies.14 Many studies have been conducted on the clinical performance of the previous versions of the anti‐SARS‐CoV‐2 assays against N protein (Abbott and Roche) and against the S1/S2 subunits (DiaSorin).15, 16, 17, 18 However, the clinical performance of the newly launched SARS‐CoV‐2 antibody assays against the S1 RBD (Abbott and Roche) or trimeric S (DiaSorin) has not been evaluated thoroughly. To the best of our knowledge, the performance of Abbott Architect anti‐SARS‐CoV‐2 IgG II Quant has not been fully evaluated so far.
The performance of Elecsys anti‐SARS‐CoV‐2 S has been reported in comparison with the previous version (Elecsys anti‐SARS‐CoV‐2 against the N antigen) and showed better sensitivity than Elecsys anti‐SARS‐CoV‐2 against N.19, 20 The performance of LIAISON SARS‐CoV‐2 TrimericS IgG has been reported once21 and was better than that of the previous version, LIAISON SARS‐CoV‐2 against S1/S2.15, 16, 17 However, relative clinical performance can be evaluated precisely only when assays are tested in the same population. We assessed the clinical performance of three newly developed anti‐SARS‐CoV‐2 assays in the same subjects. Meanwhile, the virus neutralization assay using live SARS‐CoV‐2 is the gold standard method for assessing neutralizing antibodies, but its utility is limited because it is labor‐intensive, time‐consuming, and requires specialized facilities such as biosafety level 3 containment.4 For this reason, researchers have tried to develop alternatives more appropriate for large‐scale use in clinical laboratories.22 The GenScript cPass SARS‐CoV‐2 Neutralization Antibody Detection Kit is an enzyme‐linked immunosorbent assay (ELISA) based surrogate virus neutralization test (sVNT) that mimics the reaction of the human ACE2 receptor and RBD. It has been reported that the cPass SARS‐CoV‐2 Neutralization Antibody test shows an excellent correlation with cell‐culture‐based virus neutralization assays and could be a useful measure of virus‐neutralizing activity.23, 24 The correlations of the cPass SARS‐CoV‐2 Neutralization Antibody test with three automated anti‐SARS‐CoV‐2 assays (Mindray CL‐900i against S and N, BioMerieux VIDAS 3 against RBD, and DiaSorin LIAISON SARS‐CoV‐2 against S1/S2) have been reported, with the best correlation for VIDAS 3 (r = 0.75), followed by LIAISON S1/S2 (r = 0.66) and Mindray CL‐900i (r = 0.57).25 However, the correlations of the cPass SARS‐CoV‐2 Neutralization Antibody test with Abbott SARS‐CoV‐2 IgG II Quant, Roche Elecsys anti‐SARS‐CoV‐2 S, and DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG have not been evaluated. Therefore, we performed a comparative assessment of three fully automated quantitative assays detecting antibodies against spike protein: Abbott SARS‐CoV‐2 IgG II Quant, Roche Elecsys anti‐SARS‐CoV‐2 S, and DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG. We evaluated their clinical performance and their quantitative correlation with the cPass SARS‐CoV‐2 Neutralization Antibody test. MATERIALS AND METHODS: Test specimens This study was approved by the Institutional Review Board of Seoul National University Hospital (IRB no. 2011‐041‐1170). All subjects were admitted to Seoul National University Hospital or Boramae Medical Center between February 2020 and January 2021. Leftover patient specimens obtained for routine serologic testing were used. A total of 173 sera from 126 COVID‐19 patients were included in this study. Clinical diagnosis of COVID‐19 was determined based on clinical symptoms, imaging, and laboratory findings including RT‐PCR. Among the 126 COVID‐19 patients (44 females and 82 males), two (1.6%), 20 (15.9%), 18 (14.3%), 40 (31.7%), and 46 (36.5%) patients were classified by disease severity as asymptomatic, mild, moderate, severe, and critical, respectively. For each subject, age, sex, the number of days from symptom onset to sample collection, and disease severity determined by WHO interim guidance26 were acquired through electronic medical records. For specificity evaluation, 151 pre‐pandemic sera were tested.
Out of the 151 sera, 98 were from healthy subjects and 53 were from patients with positive results for various infectious markers: 5 anti‐HAV IgG, 5 anti‐T. pallidum IgG, 5 anti‐HCV, 5 anti‐HBc IgG, 5 anti‐CMV IgG, 5 anti‐rubella IgG, 4 anti‐toxoplasma IgG, 3 anti‐HIV IgG, 2 anti‐mycoplasma IgG, 2 M. tuberculosis PCR, 2 RSV PCR, 5 rhinovirus PCR, 1 adenovirus PCR, 1 bocavirus PCR, 1 parainfluenza virus PCR, 1 coronavirus OC43 PCR, and 1 coronavirus 229E PCR. Automated anti‐SARS‐CoV‐2 IgG immunoassay Three fully automated commercial immunoassays were evaluated. The specifications of the three immunoassays are summarized in Table 1. Abbott SARS‐CoV‐2 IgG II Quant (Abbott Laboratories, Sligo, Ireland; hereafter called Abbott Quant) is a chemiluminescent microparticle immunoassay designed for the quantitative determination of IgG antibodies to the RBD of the S1 subunit of the spike protein of SARS‐CoV‐2. The assay was performed on the Abbott Architect i2000SR system (Abbott Laboratories, Abbott Park). Roche Elecsys anti‐SARS‐CoV‐2 S (Roche Diagnostics; hereafter called Roche S) is an electrochemiluminescence immunoassay for the quantitative determination of antibodies to the RBD of the spike protein of SARS‐CoV‐2. The assay was performed on the Roche cobas e601 system (Roche Diagnostics). DiaSorin LIAISON SARS‐CoV‐2 TrimericS IgG (DiaSorin, Stillwater; hereafter called DiaSorin TrimericS) is a chemiluminescent immunoassay using magnetic particles coated with recombinant trimeric SARS‐CoV‐2 spike protein for the quantitative determination of IgG antibodies. The assay was performed on the LIAISON XL analyzer (DiaSorin). All tests were performed according to the manufacturer's instructions. Specifications of the four immunoassays claimed by each manufacturer Abbreviations: AMR, analytical measuring range; CI, confidence interval; CLIA, chemiluminescent immunoassay; CLMIA, chemiluminescent microparticle immunoassay; ECLIA, electrochemiluminescent immunoassay; RBD, receptor binding domain; SP, spike protein.
Sensitivity was calculated from patient samples collected 14 days or later after positive PCR results. Measuring range extends up to 80,000 by 1:2 dilution. Measuring range extends up to 2,500 by 1:10 dilution. Positive percent agreement with the 50% plaque reduction neutralization test (PRNT50). Negative percent agreement with PRNT50. Not available (approved for qualitative detection). GenScript cPass SARS‐CoV‐2 Neutralization Antibody Detection assay The GenScript cPass SARS‐CoV‐2 Neutralization Antibody Detection Kit (GenScript, Piscataway; hereafter called cPass) is a blocking ELISA, which mimics the virus neutralization process. The kit utilizes horseradish peroxidase (HRP)‐conjugated recombinant SARS‐CoV‐2 RBD protein and the human angiotensin‐converting enzyme 2 (ACE2) receptor protein. The interaction between HRP‐RBD and ACE2 can be blocked by neutralizing antibodies against the SARS‐CoV‐2 RBD. The assay was performed as follows, according to the manufacturer's instructions. Test samples, negative control, and positive control were diluted 1:10 with sample dilution buffer. HRP‐RBD was diluted 1:1000 with HRP dilution buffer. Diluted samples and diluted HRP‐RBD solution were mixed at a volume ratio of 1:1 and incubated at 37°C for 30 min. Then 100 μl of the mixture was added to the capture plate coated with the human ACE2 receptor protein and incubated at 37°C for 15 min. After the incubation, the plate was washed 4 times with 260 μl of wash buffer. Next, 100 μl of tetramethylbenzidine solution was added and incubated at room temperature for 15 min.
Precision and linearity assessment: The precision assessment was performed on the three quantitative assays according to the CLSI EP15-A3 protocol,27 using one quality control material and two pooled patient sera for five consecutive days, five times a day. Repeatability and within-laboratory precision were estimated using ANOVA and compared with the values claimed by the manufacturers. Linearity assessment was performed on the three quantitative assays according to the CLSI EP6-A protocol.28 Two patient sera with high (H) and low (L) concentrations were mixed at ratios of 4H, 1L + 3H, 2L + 2H, 3L + 1H, and 4L. All levels were measured in duplicate.
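As a rough illustration of these two assessments, the sketch below decomposes simulated 5 × 5 measurements into repeatability and within-laboratory components via one-way ANOVA (an EP15-A3-style calculation, not the full CLSI procedure) and then checks linearity and percent recovery on the admixture design. All values are simulated, not study data.

```r
# Minimal sketch of an EP15-A3-style variance decomposition in base R, assuming
# a balanced design (5 days x 5 replicates per day).
set.seed(1)
day    <- factor(rep(1:5, each = 5))
values <- rnorm(25, mean = 100, sd = 3) + rep(rnorm(5, 0, 2), each = 5)

fit  <- anova(lm(values ~ day))
ms_b <- fit["day", "Mean Sq"]        # between-day mean square
ms_w <- fit["Residuals", "Mean Sq"]  # within-day (repeatability) mean square
n    <- 5                            # replicates per day

s_r  <- sqrt(ms_w)                   # repeatability SD
s_b2 <- max(0, (ms_b - ms_w) / n)    # between-day variance component
s_wl <- sqrt(s_r^2 + s_b2)           # within-laboratory SD

cat(sprintf("Repeatability CV = %.1f%%, within-lab CV = %.1f%%\n",
            100 * s_r / mean(values), 100 * s_wl / mean(values)))

# Linearity (EP6-A style): regress measured on expected concentrations from the
# 4H ... 4L admixtures; check R^2 and percent recovery (criterion 100% +/- 10%).
expected <- c(4, 3.25, 2.5, 1.75, 1) * 100   # hypothetical target levels
measured <- expected * runif(5, 0.95, 1.05)  # simulated duplicate means
lin <- lm(measured ~ expected)
cat(sprintf("R^2 = %.4f\n", summary(lin)$r.squared))
print(round(100 * measured / expected, 1))   # percent recovery per level
```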
Statistical analysis: For the three immunoassays, sensitivity and specificity were calculated. The sensitivity of the subgroup sampled 14 days after symptom onset was also calculated and compared with the manufacturers' claims. This is in line with the recommendation of the Infectious Diseases Society of America guidelines on the diagnosis of COVID-19, which advise against using serologic testing to diagnose SARS-CoV-2 infection during the first two weeks (14 days) following symptom onset.29 The concordances between the three immunoassays and cPass were assessed using overall, positive, and negative percent agreement as well as Cohen's kappa statistics. Cohen's kappa is a robust statistic of inter-rater reliability, useful for assessing the level of agreement between two diagnostic assays. Ranging between 0 and 1, a kappa value <0.40 represents poor agreement, 0.40–0.59 fair agreement, 0.60–0.74 good agreement, and ≥0.75 excellent agreement.30 We evaluated the correlations of the quantitative values of the three immunoassays with each other and with the % inhibition values of cPass using Spearman's rank-order correlation coefficient (rho). All statistical analyses were performed using R version 4.0.5 (R Foundation for Statistical Computing).
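The agreement statistics described here take only a few lines of base R. The sketch below computes PPA, NPA, OPA, and Cohen's kappa from a 2 × 2 table and maps kappa onto the interpretation bands cited above; the counts are illustrative, not the study's data.

```r
# Minimal sketch of the agreement statistics for two qualitative assay results.
agreement_stats <- function(tab) {
  # tab: 2x2 matrix, rows = assay A (pos/neg), cols = reference (pos/neg)
  ppa <- tab[1, 1] / sum(tab[, 1]) * 100                # positive percent agreement
  npa <- tab[2, 2] / sum(tab[, 2]) * 100                # negative percent agreement
  opa <- sum(diag(tab)) / sum(tab) * 100                # overall percent agreement
  po  <- sum(diag(tab)) / sum(tab)                      # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # chance agreement
  kappa <- (po - pe) / (1 - pe)                         # Cohen's kappa
  band <- cut(kappa, c(-Inf, 0.40, 0.60, 0.75, Inf),
              labels = c("poor", "fair", "good", "excellent"), right = FALSE)
  list(PPA = ppa, NPA = npa, OPA = opa, kappa = kappa, interpretation = band)
}

tab <- matrix(c(160, 2, 4, 7), nrow = 2, byrow = TRUE)  # hypothetical counts
agreement_stats(tab)
```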
RESULTS: Clinical performance: The clinical performances of the three immunoassays are shown in Table 2. The sensitivity of Abbott Quant, DiaSorin TrimericS, and Roche S on 173 sera from 126 COVID-19 patients was 96.0% (95% CI, 91.8%–98.4%), 93.6% (95% CI, 88.9%–96.8%), and 96.0% (95% CI, 91.9%–98.4%), respectively. The sensitivity calculated from the subgroup sampled 14 days after symptom onset was 97.6% (95% CI, 93.2%–99.5%), 96.8% (95% CI, 92.1%–99.1%), and 97.6% (95% CI, 93.2%–99.5%), respectively. The specificity of Abbott Quant, DiaSorin TrimericS, and Roche S on 151 pre-pandemic sera was 99.3% (95% CI, 96.4%–100.0%), 100.0% (95% CI, 97.6%–100.0%), and 100.0% (95% CI, 97.6%–100.0%), respectively. No positive result was observed in the cross-reactivity panel. Table 2. Clinical performance of three immunoassays. Abbreviations: Abbott Quant, Abbott SARS-CoV-2 IgG II Quant; D, days after onset of symptom; DiaSorin TrimericS, DiaSorin LIAISON SARS-CoV-2 TrimericS IgG; GenScript cPass, GenScript cPass SARS-CoV-2 Neutralization Antibody Detection Kit; NT, not tested; Other, other infection; Roche S, Roche Elecsys anti-SARS-CoV-2 S.
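For readers reproducing such figures, the sketch below derives sensitivity and specificity with exact (Clopper-Pearson) 95% CIs using base R's binom.test(); the positive counts are illustrative round-offs consistent with the cohort sizes above, not the exact study tallies.

```r
# Minimal sketch: diagnostic sensitivity/specificity with exact 95% CIs.
diag_performance <- function(true_pos, n_pos, true_neg, n_neg) {
  sens <- binom.test(true_pos, n_pos)   # Clopper-Pearson CI for sensitivity
  spec <- binom.test(true_neg, n_neg)   # Clopper-Pearson CI for specificity
  cat(sprintf("Sensitivity %.1f%% (95%% CI %.1f-%.1f)\n",
              100 * true_pos / n_pos,
              100 * sens$conf.int[1], 100 * sens$conf.int[2]))
  cat(sprintf("Specificity %.1f%% (95%% CI %.1f-%.1f)\n",
              100 * true_neg / n_neg,
              100 * spec$conf.int[1], 100 * spec$conf.int[2]))
}

diag_performance(true_pos = 166, n_pos = 173,   # 166/173 ~ 96.0% sensitivity
                 true_neg = 150, n_neg = 151)   # 150/151 ~ 99.3% specificity
```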
Repeatability, within-laboratory imprecision, and linearity: The repeatability and within-laboratory imprecision of the three immunoassays are shown in Table 3. The within-laboratory precisions of Abbott Quant and Roche S were all <4.0%. The within-laboratory precisions of DiaSorin TrimericS were 2.9%–8.2%, slightly larger than claimed by the manufacturer. Table 3. Repeatability and within-laboratory imprecision for three quantitative immunoassays. Abbreviations: Conc., concentration; CV, coefficient of variation; Lab, laboratory; PS, pooled serum. Linearity assessment (Figure S1) showed that all three immunoassays were linear across the analytical measuring range (R2 = 0.9992, 0.9947, and 0.9966 for Abbott Quant, DiaSorin TrimericS, and Roche S, respectively). Percent recovery was acceptable (criterion: 100% ± 10%) for all three assays, ranging from 96.4% to 100.0% for Abbott Quant, 100.0% to 109.7% for DiaSorin TrimericS, and 92.8% to 102.9% for Roche S. Correlation between results from three immunoassays: The correlations between the results of the three immunoassays are shown in Figure 1. Roche S correlated well with Abbott Quant (rho = 0.88) and DiaSorin TrimericS (rho = 0.85). Abbott Quant correlated well with DiaSorin TrimericS (rho = 0.9). Figure 1. Spearman correlation between the three immunoassays and cPass neutralization antibody. (A–C) Abbott SARS-CoV-2 IgG II Quant (A), Roche Elecsys anti-SARS-CoV-2 S (B), and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG (C) demonstrated strong correlations with the GenScript cPass surrogate virus neutralization test (Spearman's rho of 0.87, 0.87, and 0.88, respectively). (D–F) The three immunoassays demonstrated strong correlations with each other: Spearman's rho of 0.88 between Abbott SARS-CoV-2 IgG II Quant and Roche Elecsys anti-SARS-CoV-2 S (D); 0.9 between Abbott SARS-CoV-2 IgG II Quant and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG (E); and 0.85 between Roche Elecsys anti-SARS-CoV-2 S and DiaSorin LIAISON SARS-CoV-2 TrimericS IgG (F).
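A minimal sketch of the pairwise correlation analysis shown in Figure 1, using simulated titers rather than study measurements:

```r
# Minimal sketch: Spearman's rank-order correlation between two quantitative
# assays. Values are simulated on a log scale to mimic antibody titers.
set.seed(2)
true_titer <- exp(rnorm(100, 5, 1.5))
assay_a <- true_titer * runif(100, 0.8, 1.2)   # e.g., assay A signal (AU/ml)
assay_b <- true_titer * runif(100, 0.7, 1.3)   # e.g., assay B signal (U/ml)

ct <- cor.test(assay_a, assay_b, method = "spearman")
cat(sprintf("Spearman rho = %.2f, p = %.3g\n", ct$estimate, ct$p.value))
```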
Comparison to cPass SARS-CoV-2 neutralization antibody test: The concordances between the qualitative results of the three immunoassays and the cPass SARS-CoV-2 neutralization test in SARS-CoV-2-positive patients are shown in Table 4. The positive percent agreements of the three immunoassays with cPass were 97.6%–99.4%. The negative percent agreements of the three immunoassays with cPass were 55.6%–77.8%. The overall percent agreements of the three immunoassays with cPass were 96.5%–97.7%, with Cohen's kappa values of 0.61–0.74. The positive percent agreements between the three immunoassays were 98.8%–99.4%, and the negative percent agreements between the three immunoassays were 54.5%–71.4%. The overall percent agreements between the three immunoassays were 96.5%–97.7%, with Cohen's kappa values of 0.65–0.7. Table 4. Concordance between the qualitative results of three immunoassays and the cPass SARS-CoV-2 Neutralization Antibody Detection Kit in 173 SARS-CoV-2 patients. Abbreviations: Abbott Quant, Abbott SARS-CoV-2 IgG II Quant; DiaSorin TrimericS, DiaSorin LIAISON SARS-CoV-2 TrimericS IgG; NPA, negative percent agreement; OPA, overall percent agreement; PPA, positive percent agreement; Roche S, Roche Elecsys anti-SARS-CoV-2 S. The correlations of the quantitative results of the three immunoassays with the % inhibition values of cPass were very strong, with Spearman's rho values of 0.87 for Abbott Quant and Roche S and 0.88 for DiaSorin TrimericS (p < 0.001 for all) (Figure 1).
Receiver operating characteristics analysis: Receiver operating characteristics (ROC) curve analysis was performed for the three immunoassays (Figure 2). The areas under the curve (AUC) were 0.993 for Abbott Quant, 0.989 for DiaSorin TrimericS, and 0.983 for Roche S, indicating excellent performance for all three immunoassays. Using the ROC curves, we derived optimized cut-off values based on Youden's index. The optimized cut-offs were 41.9 AU/ml, 3.985 AU/ml, and 0.544 U/ml for Abbott Quant, DiaSorin TrimericS, and Roche S, respectively; the corresponding manufacturer-recommended cut-offs were 50.0, 13.0, and 0.8. Applying the optimized cut-offs, the sensitivity of Abbott Quant improved from 96.0% to 97.1% with no decrease in specificity (99.3%); the sensitivity of DiaSorin TrimericS improved from 93.6% to 98.3% with a slight decrease in specificity (from 100.0% to 97.1%); and the sensitivity of Roche S improved from 96.0% to 96.5% with no decrease in specificity (100.0%). Figure 2. Receiver operating characteristics (ROC) curve analysis for the three immunoassays. The areas under the curve were 0.993 for Abbott Quant (A), 0.989 for DiaSorin TrimericS (B), and 0.983 for Roche S (C), indicating excellent performance for all three immunoassays.
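A base R sketch of the ROC analysis, including a Youden-index-optimal cut-off, on simulated data (dedicated packages such as pROC are commonly used instead; the AUC here is computed via the Mann-Whitney identity):

```r
# Minimal sketch: empirical ROC analysis with a Youden-optimal cut-off.
set.seed(3)
status <- c(rep(1, 173), rep(0, 151))                # 1 = COVID-19, 0 = pre-pandemic
signal <- c(rlnorm(173, 5, 1.5), rlnorm(151, 0, 1))  # hypothetical assay values

cuts <- sort(unique(signal))
sens <- sapply(cuts, function(c) mean(signal[status == 1] >= c))
spec <- sapply(cuts, function(c) mean(signal[status == 0] <  c))

auc  <- mean(outer(signal[status == 1], signal[status == 0], ">"))  # Mann-Whitney AUC
best <- which.max(sens + spec - 1)                                  # Youden's J
cat(sprintf("AUC = %.3f; optimal cut-off = %.2f (sens %.1f%%, spec %.1f%%)\n",
            auc, cuts[best], 100 * sens[best], 100 * spec[best]))
```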
DISCUSSION: In this study, we compared three commercially available automated quantitative immunoassays for the detection of antibodies to SARS-CoV-2 and cPass as a surrogate for viral neutralization. In the sensitivity test, all assays demonstrated excellent sensitivity greater than 90%, and Abbott Quant and Roche S showed slightly higher sensitivity than DiaSorin TrimericS. The sensitivities calculated from the subgroup sampled 14 days or later after symptom onset, which has been reported to be the window period of SARS-CoV-2 antibody formation,31 were all slightly lower than those claimed by the manufacturers (Abbott Quant 97.6% vs. 99.4%; DiaSorin TrimericS 96.8% vs. 98.7%; Roche S 97.6% vs. 98.8%). There were four sera from four patients with negative results in the subgroup sampled 14 days or later after onset. Two samples were negative on all assays, and the other two were detected at very low quantitative values near the cut-off only by Roche S and Abbott Quant, respectively. Of the four patients, two with underlying hematologic malignancy were suspected of having impaired antibody development due to their immunocompromised status.
The other two patients were asymptomatic or showed very mild symptoms. Previous studies have reported that antibodies were not detected in 10%–20% of mild cases of COVID-19.32, 33 Considering that 28 sera from mild cases were included in this study, the sensitivities of the three immunoassays were satisfactory. In prior reports evaluating the clinical performance of the previous versions of anti-SARS-CoV-2 assays against the N protein (Abbott or Roche) and against the S1/S2 subunits (DiaSorin),15, 16, 17, 18 the sensitivities of Abbott anti-SARS-CoV-2 (against N) were 86.5%–90.8%, those of Roche anti-SARS-CoV-2 (against N) were 83.0%–93.0%, and those of DiaSorin LIAISON anti-SARS-CoV-2 (against S1/S2) were 70.0%–85.3%, slightly lower than those of Abbott or Roche anti-SARS-CoV-2 (N).15, 16, 17 In a report evaluating two Roche anti-SARS-CoV-2 assays against N or S simultaneously,19 anti-SARS-CoV-2 S showed higher sensitivity than anti-SARS-CoV-2 against N (93.0% vs. 89.0%). In a recent report, the sensitivity of DiaSorin anti-SARS-CoV-2 TrimericS was 99.4%, higher than that of the previous version of DiaSorin anti-SARS-CoV-2 against S1/S2.21 In our study, the sensitivity was highest for Roche S and Abbott Quant (96.0%), followed by DiaSorin TrimericS (93.6%). All three immunoassays showed higher sensitivities than prior reports of their previous versions (Abbott and Roche against N, or DiaSorin anti-S1/S2). The sensitivity of DiaSorin TrimericS in this study improved from 93.6% to 98.3% with a slight decrease in specificity (from 100.0% to 97.1%) by adjusting the cut-off value from 13.0 AU/ml to 3.985 AU/ml, which suggests no substantial differences in sensitivity among the three immunoassays. The sensitivity of cPass NT has been reported to be higher (93%) than those of Abbott anti-SARS-CoV-2 (N) (89%) or Roche anti-SARS-CoV-2 total (N) (83%).17 In our study, cPass showed sensitivity (94.8%) similar to the other three immunoassays (93.6%–96.0%), which is consistent with previous reports, considering the lower sensitivity of Abbott or Roche anti-SARS-CoV-2 (N) assays in prior studies.15, 16, 17, 18 In the specificity test conducted with 151 pre-pandemic samples, all three immunoassays showed remarkable specificity, with only one positive result on Abbott Quant. These results were consistent with the manufacturers' claims and previous studies, which report superior specificity in the new versions of the anti-SARS-CoV-2 assays (range 99.8%–100.0%).19, 20, 21 Among the previous versions of the three immunoassays, DiaSorin anti-SARS-CoV-2 against S1/S2 showed slightly lower specificities than Abbott anti-SARS-CoV-2 (N) or Roche anti-SARS-CoV-2 total (N).15, 16, 17 With the new DiaSorin anti-SARS-CoV-2 against trimeric S, specificity was excellent (99.8%).21 The specificities of both the previous and new versions of Roche anti-SARS-CoV-2 were excellent (100.0% for both).19, 20 The imprecision of the new Roche anti-SARS-CoV-2 against S has been reported as 1.06% at 9.06 U/ml.34 The imprecision of the new DiaSorin anti-SARS-CoV-2 against trimeric S averaged 4.85% (range 3.6%–5.8%) at values ranging from 5 to 591 AU/L,21 which was higher than that of Roche S. In our study, the imprecision of the low- and high-level pooled sera was 8.2% and 5.2% for DiaSorin TrimericS and 2.5% and 3.2% for Roche S.
The higher imprecision of DiaSorin TrimericS compared with Roche S in our study was consistent with previous reports.21, 34 The imprecision of Abbott Quant, first reported in this study, was similar to that of Roche S. The correlations of the results of the three immunoassays with each other were excellent (rho 0.85–0.9). In comparison with cPass, all three immunoassays showed overall percent agreement above 95%. Cohen's kappa statistics were 0.61 for Abbott Quant, 0.74 for Roche S, and 0.68 for DiaSorin TrimericS, all denoting good agreement. Although cPass is not intended for use as a quantitative assay, strong correlations between the quantitative values of the three assays and the % inhibition values of cPass were found in this study, with Spearman rho values of 0.87–0.88. In previous reports, the Abbott SARS-CoV-2 IgG assay (targeting the nucleocapsid antigen) and the Roche anti-SARS-CoV-2 total antibody (targeting the nucleocapsid antigen) showed weaker correlations with neutralizing antibody than DiaSorin SARS-CoV-2 IgG (targeting the S1/S2 subunits), Euroimmun SARS-CoV-2 IgG ELISA (targeting the S1 subunit), or Siemens SARS-CoV-2 total antibody (targeting the RBD).12, 13 The newly launched Abbott Quant and Roche S (both targeting the RBD) showed strong correlations with cPass in this study, as did DiaSorin TrimericS (targeting the trimeric spike protein). All three immunoassays can be applied to assess the immune response to vaccination because the SARS-CoV-2 vaccines use the RBD as an immunogen. The Abbott Architect and Roche cobas systems provide automated dilution, whereas the DiaSorin LIAISON analyzer does not. In this study, Abbott Quant required dilution and retesting in four of the 173 samples (2.3%) and Roche S in 58 samples (33.5%), entailing additional sample and reagent consumption, especially for Roche S. DiaSorin TrimericS reported 11 of 173 samples (6.4%) as above the limit of quantitation. Although direct comparison was not possible because the quantitative units of the three assays are not uniform, the Abbott Quant assay appears to have the highest upper limit of quantitation without dilution. This study had some limitations. (1) A culture-based virus neutralization test was not performed because it requires highly specialized facilities. Instead, the cPass sVNT was used as a substitute, based on previous studies demonstrating excellent concordance between the sVNT and the conventional virus neutralization test.23, 24, 35 (2) Few asymptomatic patients were included (2/173) because the patient group consisted mainly of inpatients, so the results of this study should be applied carefully in populations containing a large number of asymptomatic patients. Nevertheless, we showed the performance of the new versions of three automated quantitative immunoassays detecting antibodies against the SARS-CoV-2 spike protein (Abbott and Roche) or the trimeric S (DiaSorin). The sensitivities of Abbott SARS-CoV-2 IgG II Quant, DiaSorin LIAISON SARS-CoV-2 TrimericS IgG, and Roche Elecsys anti-SARS-CoV-2 S appeared to be improved compared with the previous versions of the three immunoassays. Total imprecision was slightly higher for DiaSorin TrimericS than for Roche S or Abbott Quant. The correlations among the results of the three immunoassays were good, and we also demonstrated strong correlations of the three immunoassays with the sVNT.
These high-throughput immunoassays should be valuable as an adjunct to RT-PCR diagnosis, in evaluating the response to vaccination, and in the assessment of herd immunity in the future. Supporting information: Figure S1.
Background: The SARS-CoV-2 pandemic is ongoing, while vaccinations are rapidly underway in some countries. Quantitative immunoassays detecting antibodies against the spike antigen of SARS-CoV-2 have been developed based on findings that they correlate better with the neutralizing antibody. Methods: The performances of the Abbott Architect SARS-CoV-2 IgG II Quant, DiaSorin LIAISON SARS-CoV-2 TrimericS IgG, and Roche Elecsys anti-SARS-CoV-2 S were evaluated on 173 sera from 126 SARS-CoV-2 patients and 151 pre-pandemic sera. Their correlations with the GenScript cPass SARS-CoV-2 Neutralization Antibody Detection Kit were also analyzed on the 173 sera from 126 SARS-CoV-2 patients. Results: Architect SARS-CoV-2 IgG II Quant and Elecsys anti-SARS-CoV-2 S showed the highest overall sensitivity (96.0%), followed by LIAISON SARS-CoV-2 TrimericS IgG (93.6%). The specificities of Elecsys anti-SARS-CoV-2 S and LIAISON SARS-CoV-2 TrimericS IgG were 100.0%, followed by Architect SARS-CoV-2 IgG II Quant (99.3%). Regarding correlation with the cPass neutralization antibody assay, LIAISON SARS-CoV-2 TrimericS IgG showed the best correlation (Spearman rho = 0.88), followed by Architect SARS-CoV-2 IgG II Quant and Elecsys anti-SARS-CoV-2 S (both rho = 0.87). Conclusions: The three automated quantitative immunoassays showed good diagnostic performance and strong correlations with neutralization antibodies. These assays will be useful in diagnostic assistance, evaluating the response to vaccination, and the assessment of herd immunity in the future.
Keywords: immunoassays | neutralizing antibody | SARS-CoV-2 | spike antigen | virus neutralization test
MeSH terms: Antibodies, Neutralizing | Antibodies, Viral | COVID-19 | COVID-19 Serological Testing | Humans | Immunoassay | Immunoglobulin G | Neutralization Tests | ROC Curve | Reproducibility of Results | SARS-CoV-2 | Sensitivity and Specificity | Serologic Tests | Spike Glycoprotein, Coronavirus
Predictive factors for successful limb salvage surgery in diabetic foot patients.
25551288
The goal of salvage surgery in the diabetic foot is maximal preservation of the limb, but it is also important to resect unviable tissue sufficiently to avoid reamputation. This study aims to provide information on determining the optimal amputation level that allows preservation of as much limb length as possible without the risk of further reamputation by analyzing several predictive factors.
BACKGROUND
Between April 2004 and July 2013, 154 patients underwent limb salvage surgery for distal diabetic foot gangrene. According to the final level of amputation, the patients were divided into two groups: patients with primary success of limb salvage, and patients who failed to heal after the primary limb salvage surgery. Factors predictive of success, including comorbidity, laboratory findings, and radiologic findings, were evaluated by a retrospective chart review.
METHODS
The mean age of the study population was 63.9 years, with a male-to-female ratio of approximately 2:1. The mean follow-up duration was 30 months. Statistical analysis showed that underlying renal disease, limited activity before surgery, a low hemoglobin level, a high white blood cell count, a high C-reactive protein level, and damage to two or more vessels on preoperative computed tomography (CT) angiogram were significantly associated with the success or failure of limb salvage. The five-year survival rate was 81.6% for the limb salvage success group and 36.4% for the limb salvage failure group.
RESULTS
This study evaluated the factors predictive of the success of limb salvage surgery and identified indicators for preserving as much of the leg as possible in patients with diabetic foot. This should help surgeons establish the appropriate amputation level for a case of diabetic foot and help prevent consecutive operations.
CONCLUSION
[ "Adult", "Aged", "Aged, 80 and over", "Amputation, Surgical", "Diabetic Foot", "Female", "Follow-Up Studies", "Gangrene", "Humans", "Kaplan-Meier Estimate", "Limb Salvage", "Male", "Middle Aged", "Reoperation", "Retrospective Studies", "Risk Factors", "Survival Rate" ]
4320552
Background
Approximately 3–4% of diabetic patients develop foot ulcers sometime during their life. One of the most important strategies for the management of the diabetic foot is to prevent complications that may necessitate a major limb amputation. Even with appropriate treatment, some patients must undergo major amputation or a limb salvage operation [1, 2]. These operations are not only a huge emotional and social burden to the patients due to physical impairment, but also a financial burden [3, 4]. In recent decades, systemization of multidisciplinary management and implementation of free tissue transfer in diabetic foot treatment have led to a notable decrease in the major amputation rate [5, 6]. The key to limb salvage surgery is maximal retention of the limb and minimization of the amputation level. Free tissue transfer has become an alternative option to major amputation for elderly diabetic patients [7, 8]. Successful limb salvage, defined as a stump fit for functional ambulation, is mostly determined by the level of amputation. It is mostly affected by preservation of the talus and calcaneus because it minimizes limb length discrepancy and preserves the heel pad [9]. The level of Chopart amputation is the most proximal among lower limb amputation locations that preserve the talus and calcaneus. Although disputable, the Chopart amputation has been defined as the threshold of successful limb salvage [10–12]. The incidence of reamputation following first toe or transmetatarsal amputation associated with diabetes mellitus has been found to be nearly one-third. Nearly 40% of patients with diabetic foot who had amputations at the foot level have a history of previous amputation [13, 14]. Surgeons should preserve as much limb length as possible. However, it is also important to avoid reamputation, since it is a massive surgical burden to diabetic patients, who usually already are in a poor general condition and face financial difficulties. The purpose of this article is to provide information on determining the optimal amputation level, preserving as much limb length as possible without requiring additional reamputation by analyzing several predictive factors.
Methods
Approval for this retrospective study was obtained from the Institutional Review Board on Human Subjects Research and the Ethics Committee, Hanyang University Guri Hospital (IRB No. 2014-07-010). The study population was composed of patients who presented to the Department of Plastic Surgery with a diabetic foot complication from April 2004 to July 2013. The inclusion criterion was gangrene of the distal foot (distal to the metatarsophalangeal joint) that required hospitalization and amputation. Patients with complete healing without amputation were excluded from this study. The patients were divided into two groups based on their latest amputation level. The group with successful limb salvage consisted of patients with a preserved talus and calcaneus after amputation at the Chopart level or distal to it. The other group comprised patients in whom the limb could not be preserved; these patients required reamputation more proximal than the Chopart level following an unsuccessful limb salvage operation. The primary outcome measures included age, sex, smoking status, presence of comorbidities (hypertension, ischemic heart syndrome, stroke, chronic renal failure, chronic osteomyelitis), status of premorbid activities of daily living, preoperative laboratory findings, and preoperative radiologic findings. Preoperative laboratory investigations, including the hemoglobin level (Hb), white blood cell (WBC) count, glycosylated hemoglobin (HbA1c), creatinine, and C-reactive protein (CRP) levels, were collected. A preoperative computed tomography (CT) angiogram was performed in all patients to evaluate the number of abnormal vessels and the state (patent, partial occlusion, total occlusion) of each vessel of the lower extremity. The secondary outcome measures were six-month and five-year survival rates. Kaplan-Meier survival estimate curves were calculated for all patients. Statistical analysis: All statistical analyses were performed using Stata/SE 12.0 with statistical significance set at P < 0.05. To determine statistically significant differences between the two groups, an independent t-test was used for hemoglobin and a Mann–Whitney U test was used for the other numerical prognostic factors. Fisher's exact test and logistic regression analysis were used for categorical prognostic factors. Kaplan-Meier survival estimate curves were also calculated, and the log-rank test was used to compare the survival rates.
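The study's analyses were run in Stata/SE 12.0; as an illustration only, the equivalent tests look like this in R on a hypothetical per-patient data frame (all variables below are simulated stand-ins, not the study data):

```r
# Minimal sketch of the group comparisons described above.
# 'success' is 1 for limb salvage success, 0 for failure.
set.seed(4)
df <- data.frame(
  success = rbinom(154, 1, 0.8),
  hb      = rnorm(154, 10.3, 2),   # hemoglobin, g/dl
  crp     = rexp(154, 1 / 7),      # CRP, mg/dl (skewed)
  crf     = rbinom(154, 1, 0.15)   # chronic renal failure (yes/no)
)

t.test(hb ~ success, data = df)         # independent t-test (hemoglobin)
wilcox.test(crp ~ success, data = df)   # Mann-Whitney U test
fisher.test(table(df$crf, df$success))  # Fisher's exact test
summary(glm(success ~ hb + crp + crf,   # logistic regression
            family = binomial, data = df))
```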
Results
Patient profiles: Of the 461 consecutively admitted patients with diabetic foot complications between 2004 and 2013, 307 patients with complete remission without amputation were identified and excluded from the study. The remaining 154 patients, who underwent limb salvage surgery and were classified as grade 2–4 in the Wagner system, were divided into two groups: the group with successful limb salvage consisted of 124 patients, and the group with limb salvage failure consisted of 30 patients (Table 1).

Table 1. Final amputation level of both groups (success group / failure group):
Toe amputation or disarticulation: 100 / -
Ray (metatarsal): 8 / -
Transmetatarsal: 6 / -
Midfoot Lisfranc: 7 / -
Chopart: 3 / -
Syme: - / -
BK amputation: - / 30
Total: 124 / 30
BK, below knee.

Risk factor evaluation: We evaluated various factors related to the success and failure of limb salvage (Table 2). Statistical analysis showed that chronic renal failure and the premorbid activities of daily living were significantly associated with the success or failure of limb salvage. Our analysis found no significant association between the outcome of limb salvage and age, sex, smoking status, type of diabetes, ischemic heart syndrome, stroke, or hypertension. The outcome of limb salvage was significantly associated with Hb level, WBC count, and CRP level, but not with HbA1c. Osteomyelitis had no significant relationship with the outcome. On preoperative CT angiogram, multivariate analysis showed that the number of damaged vessels was significantly greater in the failure group than in the success group. The comparison between no vessel damage and single-vessel damage was not statistically significant. The comparison between damage to two or more vessels and damage to fewer than two vessels showed that the failure group had damage to two or more vessels significantly more often. The failure group also contained a significantly greater proportion of cases with damage to three vessels. For each of the major vessels, cases in the failure group were significantly more likely to have an occlusion.
The odds ratio was 10.405 for the anterior tibial artery, 5.062 for the posterior tibial artery, and 4.229 for the peroneal artery, all with a significance level of P < 0.05 (Table 2).

Table 2. Results of evaluation of the value of factors as predictive of outcome (success group vs. failure group; n (%) or mean (SD)):
Age (years): P = 0.449, OR 1.13 (95% CI 0.818–1.574). ≤40: 4 (3.2%) vs. 0 (0%); 41–50: 23 (18.6%) vs. 4 (13.3%); 51–60: 22 (17.7%) vs. 3 (10.0%); 61–70: 30 (24.2%) vs. 13 (43.3%); 71–80: 38 (30.7%) vs. 10 (33.3%); >80: 7 (5.7%) vs. 0 (0%); total: 124 (100%) vs. 30 (100%).
Sex: P = 0.282. Male: 81 (65.3%) vs. 23 (76.7%); female: 43 (34.7%) vs. 7 (23.3%).
Smoking: P = 0.802. Current or ex-smoker: 25 (20.2%) vs. 7 (23.3%); never: 99 (79.8%) vs. 23 (76.7%).
Hypertension: P = 0.419. Yes: 61 (49.2%) vs. 13 (43.3%); no: 63 (50.8%) vs. 17 (56.7%).
Ischemic heart disease: P = 0.966. Yes: 8 (6.5%) vs. 2 (6.7%); no: 116 (93.5%) vs. 28 (93.3%).
Stroke: P = 0.84. Yes: 14 (11.3%) vs. 3 (10.0%); no: 110 (88.7%) vs. 27 (90.0%).
Chronic renal failure: P < 0.01. Yes: 11 (8.9%) vs. 11 (36.7%); no: 113 (91.1%) vs. 19 (63.3%).
Premorbid ambulation state: P < 0.01, OR 1.84 (95% CI 1.26–2.68). Independent: 80 (64.5%) vs. 8 (26.7%); walking with aid: 19 (15.3%) vs. 10 (33.3%); wheelchair: 17 (13.7%) vs. 7 (23.3%); bedridden: 8 (6.5%) vs. 5 (16.7%).
Preoperative laboratory findings: Hb (g/dl): 10.55 (2.3) vs. 9.42 (1.4), P < 0.01; WBC (/mm3): 10471 (5535) vs. 12475 (5263), P < 0.01; CRP (mg/dl): 6.16 (7.5) vs. 10.62 (5.3), P < 0.01; creatinine (mg/dl): 1.82 (2.3) vs. 2.82 (2.6), P = 0.251; HbA1c (%): 7.62 (1.9) vs. 7.53 (1.9), P = 0.11.
Preoperative radiologic findings: X-ray osteomyelitis: 18 (14.5%) vs. 7 (23.3%), P = 0.24; none: 106 (85.5%) vs. 23 (76.7%).
CT angiogram*: damaged vessel†: P < 0.01, OR 4.38 (95% CI 2.360–8.151). Number of damaged vessels††: none versus ≥1: P = 0.133, OR 2.903 (95% CI 0.619–13.615); ≤1 versus ≥2: P < 0.01, OR 16.677 (95% CI 4.980–55.846); ≤2 versus 3: P < 0.01, OR 21.583 (95% CI 6.483–71.86). Anterior tibial artery (patent / stenosis >50%, diffuse atheromatosis / occlusion): P < 0.01, OR 10.405 (95% CI 4.088–26.486); posterior tibial artery: P < 0.01, OR 5.062 (95% CI 2.633–9.733); peroneal artery: P < 0.01, OR 4.229 (95% CI 2.240–7.984).
*Statistical analysis was performed by univariate logistic regression analysis. †Comparison of the distribution of the number of damaged vessels between the limb salvage success and failure groups. ††Comparison of the number of damaged vessels between the limb salvage success and failure groups. The number of damaged vessels was defined as those with partial occlusion plus those with total occlusion. Hb, hemoglobin; WBC, white blood cell; CRP, C-reactive protein; HbA1c, glycosylated hemoglobin; CT, computed tomography.
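As an illustration of how an odds ratio in Table 2 can be derived for a binary factor, the sketch below uses the chronic renal failure counts from the table. The footnote states only "univariate logistic regression", so a regression-based version is shown alongside an exact alternative; the helper variable names are ours, not the authors'.

```r
# Minimal sketch: odds ratio with 95% CI for a binary predictor.
# Counts reproduce the chronic renal failure row of Table 2
# (success group: 11 yes / 113 no; failure group: 11 yes / 19 no).
tab <- matrix(c(11, 11, 113, 19), nrow = 2, byrow = TRUE,
              dimnames = list(CRF = c("yes", "no"),
                              outcome = c("success", "failure")))
fisher.test(tab)   # exact odds ratio with CI and P-value

# The same comparison via univariate logistic regression (Wald CI):
crf     <- rep(c(1, 1, 0, 0), times = c(11, 11, 113, 19))
failure <- rep(c(0, 1, 0, 1), times = c(11, 11, 113, 19))
m <- glm(failure ~ crf, family = binomial)
exp(cbind(OR = coef(m), confint.default(m)))   # ORs with 95% Wald CIs
```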
Follow-up period and survival rate

The average follow-up period was 118 weeks. The six-month survival rate was 91.9% for the success group and 76.7% for the failure group, a difference that was not statistically significant (Table 3). Kaplan-Meier survival estimates were calculated for the limb salvage success and failure groups, and the two groups were compared with the log-rank test. The five-year survival rate was 81.6% for the limb salvage success group and 36.4% for the limb salvage failure group (P < 0.05) (Figure 1).

Table 3. Survival rates

Survived at 6 months | Limb salvage success group, n (%) | Limb salvage failure group, n (%) | P-value
Yes | 114 (91.9%) | 23 (76.7%) | 0.142
No | 10 (8.1%) | 7 (23.3%) |

Figure 1. Kaplan-Meier survival estimate for both groups.
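For the survival comparison, a minimal sketch of the Kaplan-Meier estimate and log-rank test is shown below, using the lifelines package on synthetic follow-up data; the exponential survival times and censoring rates are placeholders, not the study data.

```python
# Minimal sketch: Kaplan-Meier curves and log-rank comparison for the two
# groups, with synthetic follow-up times (weeks) standing in for real data.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Hypothetical follow-up times and event indicators (1 = death observed,
# 0 = censored); distributions are placeholders only.
t_success = rng.exponential(scale=600, size=124)
e_success = rng.binomial(1, 0.4, size=124)
t_failure = rng.exponential(scale=250, size=30)
e_failure = rng.binomial(1, 0.6, size=30)

for label, t, e in [("success", t_success, e_success),
                    ("failure", t_failure, e_failure)]:
    kmf = KaplanMeierFitter()
    kmf.fit(t, event_observed=e, label=f"limb salvage {label}")
    # 260 weeks is roughly the five-year mark discussed in the text.
    print(label, float(kmf.survival_function_at_times(260).iloc[0]))

result = logrank_test(t_success, t_failure,
                      event_observed_A=e_success, event_observed_B=e_failure)
print(f"log-rank p-value: {result.p_value:.4f}")
```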
Conclusions
This study evaluated the factors predictive of the success of limb salvage surgery and identified indicators for preserving the limbs of patients with diabetic foot complications. These indicators support the selection of an appropriate amputation level for the diabetic foot and help minimize subsequent operations.
[ "Background", "Statistical analysis", "Patient profiles", "Risk factor evaluation", "Follow-up period and survival rate" ]
[ "Approximately 3–4% of diabetic patients develop foot ulcers sometime during their life. One of the most important strategies for the management of the diabetic foot is to prevent complications that may necessitate a major limb amputation. Even with appropriate treatment, some patients must undergo major amputation or a limb salvage operation [1, 2]. These operations are not only a huge emotional and social burden to the patients due to physical impairment, but also a financial burden [3, 4]. In recent decades, systemization of multidisciplinary management and implementation of free tissue transfer in diabetic foot treatment have led to a notable decrease in the major amputation rate [5, 6]. The key to limb salvage surgery is maximal retention of the limb and minimization of the amputation level. Free tissue transfer has become an alternative option to major amputation for elderly diabetic patients [7, 8]. Successful limb salvage, defined as a stump fit for functional ambulation, is mostly determined by the level of amputation. It is mostly affected by preservation of the talus and calcaneus because it minimizes limb length discrepancy and preserves the heel pad [9]. The level of Chopart amputation is the most proximal among lower limb amputation locations that preserve the talus and calcaneus. Although disputable, the Chopart amputation has been defined as the threshold of successful limb salvage [10–12]. The incidence of reamputation following first toe or transmetatarsal amputation associated with diabetes mellitus has been found to be nearly one-third. Nearly 40% of patients with diabetic foot who had amputations at the foot level have a history of previous amputation [13, 14]. Surgeons should preserve as much limb length as possible. However, it is also important to avoid reamputation, since it is a massive surgical burden to diabetic patients, who usually already are in a poor general condition and face financial difficulties.\nThe purpose of this article is to provide information on determining the optimal amputation level, preserving as much limb length as possible without requiring additional reamputation by analyzing several predictive factors.", "All statistical analyses were performed using Stata/SE 12.0 with statistical significance set at P < 0.05. To determine the statistically significant differences between the two groups, an independent t-test was used for hemoglobin and a Mann–Whitney U test was used for numerical prognostic factors. Fisher’s exact test and logistic regression analysis were used for categorical prognostic factors. Kaplan-Meier survival estimate curves were also calculated, and the log-rank test was used to compare the survival rate.", "Of the 461 consecutively admitted patients with diabetic foot complications between 2004 and 2013, 307 patients with complete remission without amputation were identified and excluded from the study. The other 154 patients, who underwent limb salvage surgery, and who were classified as grade 2–4 in the Wagner system, were divided into two groups. 
The group with successful limb salvage consisted of 124 patients, and the group with limb salvage failure consisted of 30 patients (Table 1).Table 1\nFinal amputation level of both groups\nSuccess groupFailure groupToe amputation or disarticulation100-Ray (metatarsal)8-Transmetatarsal6-Midfoot Lisfranc7-Chopart3-Syme--BK amputation-30Total12430BK; below knee.\n\nFinal amputation level of both groups\n\nBK; below knee.", "We evaluated various factors related to the success and failure of limb salvage (Table 2). Statistical analysis showed that chronic renal failure and the activities of daily living were significantly associated with the success or failure of limb salvage. Our analysis found no significant association between the outcome of limb salvage and age, sex, smoking status, type of diabetes, ischemic heart syndrome, stroke, or hypertension. The outcome of limb salvage was significantly associated with Hb level, WBC count, and CRP level, but not with HbA1c. Osteomyelitis had no significant relationship with the outcome. On preoperative CT angiogram, multivariate analysis showed that the number of damaged vessels in the failure group was greater than that in the success group with statistical significance. The comparison between the lack of vessel damage and single vessel damage was not statistically significant. The comparison between damage to two or more vessels and less than two vessels showed that the failure group had damage to two or more vessels significantly more often. The failure group also contained a significantly greater proportion of cases with damage to three vessels. For each of the major vessels, cases in the failure group were significantly more likely to have an occlusion. The odds ratio was 10.405 for the anterior tibial artery, 5.062 for the posterior tibial artery, and 4.229 for the perineal artery, all with a significance level of P < 0.05.Table 2\nResults of evaluation of the value of factors as predictive of outcome\nFactorsOutcomeSuccess groupFailure groupP-valueOR95% CIn (%)Mean (SD)n (%)Mean (SD)Lower limitUpper limitAge (years)0.4491.130.8181.574≤404(3.2%)0(0%)41-5023(18.6%)4(13.3%)51-6022(17.7%)3(10.0%)61-7030(24.2%)13(43.3%)71-8038(30.7%)10(33.3%)>807(5.7%)0(0%)Total124(100%)30(100%)Sex0.282Male81(65.3%)23(76.7%)Female43(34.7%)7(23.3%)Smoking0.802Current or Ex-smoker25(20.2%)7(23.3%)Never99(79.8%)23(76.7%)Hypertension0.419Yes61(49.2%)13(43.3%)No63(50.8%)17(56.7%)Ischemic heart disease0.966Yes8(6.5%)2(6.7%)No116(93.5%)28(93.3%)Stroke0.84Yes14(11.3%)3(10.0%)No110(88.7%)27(90.0%)Chronic renal failure<0.01Yes11(8.9%)11(36.7%)No113(91.1%)19(63.3%)Premorbid ambulation state<0.011.841.262.68Independent80(64.5%)8(26.7%)Walking with aid19(15.3%)10(33.3%)Wheel chair17(13.7%)7(23.3%)Bedridden8(6.5%)5(16.7%)Preoperative laboratory findingHb (g/dl)10.55 (2.3)9.42 (1.4)<0.01WBC (/mm3)10471 (5535)12475 (5263)<0.01CRP (mg/dl)6.16 (7.5)10.62 (5.3)<0.01Creatinine (mg/dl)1.82 (2.3)2.82 (2.6)0.251HbA1C (%)7.62 (1.9)7.53 (1.9)0.11Preoperative radiologic findingsX-ray Osteomyelitis18(14.5%)7(23.3%)0.24 None106(85.5%)23(76.7%)CT angiogram* Damaged vessel†\n<0.014.382.3608.151 Number of damaged vessels††\nNone versus ≥ 10.1332.9030.61913.615≤1 versus ≥ 2<0.0116.6774.98055.846≤2 versus 3<0.0121.5836.48371.86 Anterior tibial artery<0.0110.4054.08826.486PatentStenosis > 50%, Diffuse atheromatosisOcclusion Posterior tibial artery<0.015.0622.6339.733PatentStenosis > 50%, Diffuse atheromatosisOcclusion Peroneal artery<0.014.2292.2407.984PatentStenosis > 50%, Diffuse 
atheromatosisOcclusion*Statistical analysis was performed by univariate logistic regression analysis.\n†Comparison of the distribution of the number of damaged vessels between limb salvage success group and failure group.\n††Comparison of the number of damaged vessels between limb salvage success group and failure group.The number of damaged vessels was defined as those with partial occlusion plus those with total occlusion.Hb; hemoglobin, WBC; white blood cell, CRP; C-reactive protein, HbA1c; glycosylated hemoglobin, CT; computed tomography.\n\nResults of evaluation of the value of factors as predictive of outcome\n\n*Statistical analysis was performed by univariate logistic regression analysis.\n\n†Comparison of the distribution of the number of damaged vessels between limb salvage success group and failure group.\n\n††Comparison of the number of damaged vessels between limb salvage success group and failure group.\nThe number of damaged vessels was defined as those with partial occlusion plus those with total occlusion.\nHb; hemoglobin, WBC; white blood cell, CRP; C-reactive protein, HbA1c; glycosylated hemoglobin, CT; computed tomography.", "The average follow-up period was 118 weeks. The six-month survival rate was 91.9% for the success group and 76.7% for the failure group without statistical significance (Table 3). The Kaplan-Meier survival estimate was calculated for the limb salvage success group and failure group patients. A comparison between the two groups was performed with the log-rank test. The five-year survival rate was 81.6% for the limb salvage success group and 36.4% for the limb salvage failure group (P < 0.05) (Figure 1).Table 3\nSurvival rates\nFactorsOutcomeLimb salvage success groupLimb salvage failure groupP-valuen (%)n (%)Survived at 6 months0.142Yes114 (91.9%)23 (76.7%)No10 (8.1%)7 (23.3%)Figure 1\nKaplan-Meier survival estimate for both groups.\n\n\nSurvival rates\n\n\nKaplan-Meier survival estimate for both groups.\n" ]
[ null, null, null, null, null ]
[ "Background", "Methods", "Statistical analysis", "Results", "Patient profiles", "Risk factor evaluation", "Follow-up period and survival rate", "Discussion", "Conclusions" ]
[ "Approximately 3–4% of diabetic patients develop foot ulcers sometime during their life. One of the most important strategies for the management of the diabetic foot is to prevent complications that may necessitate a major limb amputation. Even with appropriate treatment, some patients must undergo major amputation or a limb salvage operation [1, 2]. These operations are not only a huge emotional and social burden to the patients due to physical impairment, but also a financial burden [3, 4]. In recent decades, systemization of multidisciplinary management and implementation of free tissue transfer in diabetic foot treatment have led to a notable decrease in the major amputation rate [5, 6]. The key to limb salvage surgery is maximal retention of the limb and minimization of the amputation level. Free tissue transfer has become an alternative option to major amputation for elderly diabetic patients [7, 8]. Successful limb salvage, defined as a stump fit for functional ambulation, is mostly determined by the level of amputation. It is mostly affected by preservation of the talus and calcaneus because it minimizes limb length discrepancy and preserves the heel pad [9]. The level of Chopart amputation is the most proximal among lower limb amputation locations that preserve the talus and calcaneus. Although disputable, the Chopart amputation has been defined as the threshold of successful limb salvage [10–12]. The incidence of reamputation following first toe or transmetatarsal amputation associated with diabetes mellitus has been found to be nearly one-third. Nearly 40% of patients with diabetic foot who had amputations at the foot level have a history of previous amputation [13, 14]. Surgeons should preserve as much limb length as possible. However, it is also important to avoid reamputation, since it is a massive surgical burden to diabetic patients, who usually already are in a poor general condition and face financial difficulties.\nThe purpose of this article is to provide information on determining the optimal amputation level, preserving as much limb length as possible without requiring additional reamputation by analyzing several predictive factors.", "Approval for this retrospective study was obtained from the Institutional Review Board on Human Subjects Research and the Ethics Committee, Hanyang University Guri Hospital (IRB No. 2014–07-010). The study population was composed of patients who presented to the Department of Plastic Surgery with a diabetic foot complication from April 2004 to July 2013. The inclusion criterion was gangrene of the distal foot (distal to the metatarsophalangeal joint) that required hospitalization and amputation. Patients with complete healing without amputation were excluded from this study. The patients were divided into two groups based on their latest amputation level. The group with successful limb salvage consisted of patients with a preserved talus and calcaneus after amputation at the Chopart level or distal to it. The other group comprised patients in whom the limb could not be preserved. The patients of this group required reamputation more proximal than the Chopart level following an unsuccessful limb salvage operation.\nThe primary outcome measures included age, sex, smoking status, presence of comorbidities (hypertension, ischemic heart syndrome, stroke, chronic renal failure, chronic osteomyelitis), status of premorbid activities of daily living, preoperative laboratory findings, and preoperative radiologic findings. 
Preoperative laboratory investigations, including the hemoglobin level (Hb), white blood cell (WBC) count, glycosylated hemoglobin (HbA1c), creatinine, and C-reactive protein (CRP) levels were collected. A preoperative computed tomography (CT) angiogram was performed in all patients to evaluate the number of abnormal vessels and the state (patent, partial occlusion, total occlusion) of each vessel of the lower extremity. The secondary outcome measures were six-month and five-year survival rates. Kaplan-Meier survival estimate curves were calculated for all patients.\n Statistical analysis All statistical analyses were performed using Stata/SE 12.0 with statistical significance set at P < 0.05. To determine the statistically significant differences between the two groups, an independent t-test was used for hemoglobin and a Mann–Whitney U test was used for numerical prognostic factors. Fisher’s exact test and logistic regression analysis were used for categorical prognostic factors. Kaplan-Meier survival estimate curves were also calculated, and the log-rank test was used to compare the survival rate.\nAll statistical analyses were performed using Stata/SE 12.0 with statistical significance set at P < 0.05. To determine the statistically significant differences between the two groups, an independent t-test was used for hemoglobin and a Mann–Whitney U test was used for numerical prognostic factors. Fisher’s exact test and logistic regression analysis were used for categorical prognostic factors. Kaplan-Meier survival estimate curves were also calculated, and the log-rank test was used to compare the survival rate.", "All statistical analyses were performed using Stata/SE 12.0 with statistical significance set at P < 0.05. To determine the statistically significant differences between the two groups, an independent t-test was used for hemoglobin and a Mann–Whitney U test was used for numerical prognostic factors. Fisher’s exact test and logistic regression analysis were used for categorical prognostic factors. Kaplan-Meier survival estimate curves were also calculated, and the log-rank test was used to compare the survival rate.", " Patient profiles Of the 461 consecutively admitted patients with diabetic foot complications between 2004 and 2013, 307 patients with complete remission without amputation were identified and excluded from the study. The other 154 patients, who underwent limb salvage surgery, and who were classified as grade 2–4 in the Wagner system, were divided into two groups. The group with successful limb salvage consisted of 124 patients, and the group with limb salvage failure consisted of 30 patients (Table 1).Table 1\nFinal amputation level of both groups\nSuccess groupFailure groupToe amputation or disarticulation100-Ray (metatarsal)8-Transmetatarsal6-Midfoot Lisfranc7-Chopart3-Syme--BK amputation-30Total12430BK; below knee.\n\nFinal amputation level of both groups\n\nBK; below knee.\nOf the 461 consecutively admitted patients with diabetic foot complications between 2004 and 2013, 307 patients with complete remission without amputation were identified and excluded from the study. The other 154 patients, who underwent limb salvage surgery, and who were classified as grade 2–4 in the Wagner system, were divided into two groups. 
The group with successful limb salvage consisted of 124 patients, and the group with limb salvage failure consisted of 30 patients (Table 1).Table 1\nFinal amputation level of both groups\nSuccess groupFailure groupToe amputation or disarticulation100-Ray (metatarsal)8-Transmetatarsal6-Midfoot Lisfranc7-Chopart3-Syme--BK amputation-30Total12430BK; below knee.\n\nFinal amputation level of both groups\n\nBK; below knee.\n Risk factor evaluation We evaluated various factors related to the success and failure of limb salvage (Table 2). Statistical analysis showed that chronic renal failure and the activities of daily living were significantly associated with the success or failure of limb salvage. Our analysis found no significant association between the outcome of limb salvage and age, sex, smoking status, type of diabetes, ischemic heart syndrome, stroke, or hypertension. The outcome of limb salvage was significantly associated with Hb level, WBC count, and CRP level, but not with HbA1c. Osteomyelitis had no significant relationship with the outcome. On preoperative CT angiogram, multivariate analysis showed that the number of damaged vessels in the failure group was greater than that in the success group with statistical significance. The comparison between the lack of vessel damage and single vessel damage was not statistically significant. The comparison between damage to two or more vessels and less than two vessels showed that the failure group had damage to two or more vessels significantly more often. The failure group also contained a significantly greater proportion of cases with damage to three vessels. For each of the major vessels, cases in the failure group were significantly more likely to have an occlusion. The odds ratio was 10.405 for the anterior tibial artery, 5.062 for the posterior tibial artery, and 4.229 for the perineal artery, all with a significance level of P < 0.05.Table 2\nResults of evaluation of the value of factors as predictive of outcome\nFactorsOutcomeSuccess groupFailure groupP-valueOR95% CIn (%)Mean (SD)n (%)Mean (SD)Lower limitUpper limitAge (years)0.4491.130.8181.574≤404(3.2%)0(0%)41-5023(18.6%)4(13.3%)51-6022(17.7%)3(10.0%)61-7030(24.2%)13(43.3%)71-8038(30.7%)10(33.3%)>807(5.7%)0(0%)Total124(100%)30(100%)Sex0.282Male81(65.3%)23(76.7%)Female43(34.7%)7(23.3%)Smoking0.802Current or Ex-smoker25(20.2%)7(23.3%)Never99(79.8%)23(76.7%)Hypertension0.419Yes61(49.2%)13(43.3%)No63(50.8%)17(56.7%)Ischemic heart disease0.966Yes8(6.5%)2(6.7%)No116(93.5%)28(93.3%)Stroke0.84Yes14(11.3%)3(10.0%)No110(88.7%)27(90.0%)Chronic renal failure<0.01Yes11(8.9%)11(36.7%)No113(91.1%)19(63.3%)Premorbid ambulation state<0.011.841.262.68Independent80(64.5%)8(26.7%)Walking with aid19(15.3%)10(33.3%)Wheel chair17(13.7%)7(23.3%)Bedridden8(6.5%)5(16.7%)Preoperative laboratory findingHb (g/dl)10.55 (2.3)9.42 (1.4)<0.01WBC (/mm3)10471 (5535)12475 (5263)<0.01CRP (mg/dl)6.16 (7.5)10.62 (5.3)<0.01Creatinine (mg/dl)1.82 (2.3)2.82 (2.6)0.251HbA1C (%)7.62 (1.9)7.53 (1.9)0.11Preoperative radiologic findingsX-ray Osteomyelitis18(14.5%)7(23.3%)0.24 None106(85.5%)23(76.7%)CT angiogram* Damaged vessel†\n<0.014.382.3608.151 Number of damaged vessels††\nNone versus ≥ 10.1332.9030.61913.615≤1 versus ≥ 2<0.0116.6774.98055.846≤2 versus 3<0.0121.5836.48371.86 Anterior tibial artery<0.0110.4054.08826.486PatentStenosis > 50%, Diffuse atheromatosisOcclusion Posterior tibial artery<0.015.0622.6339.733PatentStenosis > 50%, Diffuse atheromatosisOcclusion Peroneal artery<0.014.2292.2407.984PatentStenosis > 50%, 
Diffuse atheromatosisOcclusion*Statistical analysis was performed by univariate logistic regression analysis.\n†Comparison of the distribution of the number of damaged vessels between limb salvage success group and failure group.\n††Comparison of the number of damaged vessels between limb salvage success group and failure group.The number of damaged vessels was defined as those with partial occlusion plus those with total occlusion.Hb; hemoglobin, WBC; white blood cell, CRP; C-reactive protein, HbA1c; glycosylated hemoglobin, CT; computed tomography.\n\nResults of evaluation of the value of factors as predictive of outcome\n\n*Statistical analysis was performed by univariate logistic regression analysis.\n\n†Comparison of the distribution of the number of damaged vessels between limb salvage success group and failure group.\n\n††Comparison of the number of damaged vessels between limb salvage success group and failure group.\nThe number of damaged vessels was defined as those with partial occlusion plus those with total occlusion.\nHb; hemoglobin, WBC; white blood cell, CRP; C-reactive protein, HbA1c; glycosylated hemoglobin, CT; computed tomography.\nWe evaluated various factors related to the success and failure of limb salvage (Table 2). Statistical analysis showed that chronic renal failure and the activities of daily living were significantly associated with the success or failure of limb salvage. Our analysis found no significant association between the outcome of limb salvage and age, sex, smoking status, type of diabetes, ischemic heart syndrome, stroke, or hypertension. The outcome of limb salvage was significantly associated with Hb level, WBC count, and CRP level, but not with HbA1c. Osteomyelitis had no significant relationship with the outcome. On preoperative CT angiogram, multivariate analysis showed that the number of damaged vessels in the failure group was greater than that in the success group with statistical significance. The comparison between the lack of vessel damage and single vessel damage was not statistically significant. The comparison between damage to two or more vessels and less than two vessels showed that the failure group had damage to two or more vessels significantly more often. The failure group also contained a significantly greater proportion of cases with damage to three vessels. For each of the major vessels, cases in the failure group were significantly more likely to have an occlusion. 
The odds ratio was 10.405 for the anterior tibial artery, 5.062 for the posterior tibial artery, and 4.229 for the perineal artery, all with a significance level of P < 0.05.Table 2\nResults of evaluation of the value of factors as predictive of outcome\nFactorsOutcomeSuccess groupFailure groupP-valueOR95% CIn (%)Mean (SD)n (%)Mean (SD)Lower limitUpper limitAge (years)0.4491.130.8181.574≤404(3.2%)0(0%)41-5023(18.6%)4(13.3%)51-6022(17.7%)3(10.0%)61-7030(24.2%)13(43.3%)71-8038(30.7%)10(33.3%)>807(5.7%)0(0%)Total124(100%)30(100%)Sex0.282Male81(65.3%)23(76.7%)Female43(34.7%)7(23.3%)Smoking0.802Current or Ex-smoker25(20.2%)7(23.3%)Never99(79.8%)23(76.7%)Hypertension0.419Yes61(49.2%)13(43.3%)No63(50.8%)17(56.7%)Ischemic heart disease0.966Yes8(6.5%)2(6.7%)No116(93.5%)28(93.3%)Stroke0.84Yes14(11.3%)3(10.0%)No110(88.7%)27(90.0%)Chronic renal failure<0.01Yes11(8.9%)11(36.7%)No113(91.1%)19(63.3%)Premorbid ambulation state<0.011.841.262.68Independent80(64.5%)8(26.7%)Walking with aid19(15.3%)10(33.3%)Wheel chair17(13.7%)7(23.3%)Bedridden8(6.5%)5(16.7%)Preoperative laboratory findingHb (g/dl)10.55 (2.3)9.42 (1.4)<0.01WBC (/mm3)10471 (5535)12475 (5263)<0.01CRP (mg/dl)6.16 (7.5)10.62 (5.3)<0.01Creatinine (mg/dl)1.82 (2.3)2.82 (2.6)0.251HbA1C (%)7.62 (1.9)7.53 (1.9)0.11Preoperative radiologic findingsX-ray Osteomyelitis18(14.5%)7(23.3%)0.24 None106(85.5%)23(76.7%)CT angiogram* Damaged vessel†\n<0.014.382.3608.151 Number of damaged vessels††\nNone versus ≥ 10.1332.9030.61913.615≤1 versus ≥ 2<0.0116.6774.98055.846≤2 versus 3<0.0121.5836.48371.86 Anterior tibial artery<0.0110.4054.08826.486PatentStenosis > 50%, Diffuse atheromatosisOcclusion Posterior tibial artery<0.015.0622.6339.733PatentStenosis > 50%, Diffuse atheromatosisOcclusion Peroneal artery<0.014.2292.2407.984PatentStenosis > 50%, Diffuse atheromatosisOcclusion*Statistical analysis was performed by univariate logistic regression analysis.\n†Comparison of the distribution of the number of damaged vessels between limb salvage success group and failure group.\n††Comparison of the number of damaged vessels between limb salvage success group and failure group.The number of damaged vessels was defined as those with partial occlusion plus those with total occlusion.Hb; hemoglobin, WBC; white blood cell, CRP; C-reactive protein, HbA1c; glycosylated hemoglobin, CT; computed tomography.\n\nResults of evaluation of the value of factors as predictive of outcome\n\n*Statistical analysis was performed by univariate logistic regression analysis.\n\n†Comparison of the distribution of the number of damaged vessels between limb salvage success group and failure group.\n\n††Comparison of the number of damaged vessels between limb salvage success group and failure group.\nThe number of damaged vessels was defined as those with partial occlusion plus those with total occlusion.\nHb; hemoglobin, WBC; white blood cell, CRP; C-reactive protein, HbA1c; glycosylated hemoglobin, CT; computed tomography.\n Follow-up period and survival rate The average follow-up period was 118 weeks. The six-month survival rate was 91.9% for the success group and 76.7% for the failure group without statistical significance (Table 3). The Kaplan-Meier survival estimate was calculated for the limb salvage success group and failure group patients. A comparison between the two groups was performed with the log-rank test. 
The five-year survival rate was 81.6% for the limb salvage success group and 36.4% for the limb salvage failure group (P < 0.05) (Figure 1).Table 3\nSurvival rates\nFactorsOutcomeLimb salvage success groupLimb salvage failure groupP-valuen (%)n (%)Survived at 6 months0.142Yes114 (91.9%)23 (76.7%)No10 (8.1%)7 (23.3%)Figure 1\nKaplan-Meier survival estimate for both groups.\n\n\nSurvival rates\n\n\nKaplan-Meier survival estimate for both groups.\n\nThe average follow-up period was 118 weeks. The six-month survival rate was 91.9% for the success group and 76.7% for the failure group without statistical significance (Table 3). The Kaplan-Meier survival estimate was calculated for the limb salvage success group and failure group patients. A comparison between the two groups was performed with the log-rank test. The five-year survival rate was 81.6% for the limb salvage success group and 36.4% for the limb salvage failure group (P < 0.05) (Figure 1).Table 3\nSurvival rates\nFactorsOutcomeLimb salvage success groupLimb salvage failure groupP-valuen (%)n (%)Survived at 6 months0.142Yes114 (91.9%)23 (76.7%)No10 (8.1%)7 (23.3%)Figure 1\nKaplan-Meier survival estimate for both groups.\n\n\nSurvival rates\n\n\nKaplan-Meier survival estimate for both groups.\n", "Of the 461 consecutively admitted patients with diabetic foot complications between 2004 and 2013, 307 patients with complete remission without amputation were identified and excluded from the study. The other 154 patients, who underwent limb salvage surgery, and who were classified as grade 2–4 in the Wagner system, were divided into two groups. The group with successful limb salvage consisted of 124 patients, and the group with limb salvage failure consisted of 30 patients (Table 1).Table 1\nFinal amputation level of both groups\nSuccess groupFailure groupToe amputation or disarticulation100-Ray (metatarsal)8-Transmetatarsal6-Midfoot Lisfranc7-Chopart3-Syme--BK amputation-30Total12430BK; below knee.\n\nFinal amputation level of both groups\n\nBK; below knee.", "We evaluated various factors related to the success and failure of limb salvage (Table 2). Statistical analysis showed that chronic renal failure and the activities of daily living were significantly associated with the success or failure of limb salvage. Our analysis found no significant association between the outcome of limb salvage and age, sex, smoking status, type of diabetes, ischemic heart syndrome, stroke, or hypertension. The outcome of limb salvage was significantly associated with Hb level, WBC count, and CRP level, but not with HbA1c. Osteomyelitis had no significant relationship with the outcome. On preoperative CT angiogram, multivariate analysis showed that the number of damaged vessels in the failure group was greater than that in the success group with statistical significance. The comparison between the lack of vessel damage and single vessel damage was not statistically significant. The comparison between damage to two or more vessels and less than two vessels showed that the failure group had damage to two or more vessels significantly more often. The failure group also contained a significantly greater proportion of cases with damage to three vessels. For each of the major vessels, cases in the failure group were significantly more likely to have an occlusion. 
The odds ratio was 10.405 for the anterior tibial artery, 5.062 for the posterior tibial artery, and 4.229 for the perineal artery, all with a significance level of P < 0.05.Table 2\nResults of evaluation of the value of factors as predictive of outcome\nFactorsOutcomeSuccess groupFailure groupP-valueOR95% CIn (%)Mean (SD)n (%)Mean (SD)Lower limitUpper limitAge (years)0.4491.130.8181.574≤404(3.2%)0(0%)41-5023(18.6%)4(13.3%)51-6022(17.7%)3(10.0%)61-7030(24.2%)13(43.3%)71-8038(30.7%)10(33.3%)>807(5.7%)0(0%)Total124(100%)30(100%)Sex0.282Male81(65.3%)23(76.7%)Female43(34.7%)7(23.3%)Smoking0.802Current or Ex-smoker25(20.2%)7(23.3%)Never99(79.8%)23(76.7%)Hypertension0.419Yes61(49.2%)13(43.3%)No63(50.8%)17(56.7%)Ischemic heart disease0.966Yes8(6.5%)2(6.7%)No116(93.5%)28(93.3%)Stroke0.84Yes14(11.3%)3(10.0%)No110(88.7%)27(90.0%)Chronic renal failure<0.01Yes11(8.9%)11(36.7%)No113(91.1%)19(63.3%)Premorbid ambulation state<0.011.841.262.68Independent80(64.5%)8(26.7%)Walking with aid19(15.3%)10(33.3%)Wheel chair17(13.7%)7(23.3%)Bedridden8(6.5%)5(16.7%)Preoperative laboratory findingHb (g/dl)10.55 (2.3)9.42 (1.4)<0.01WBC (/mm3)10471 (5535)12475 (5263)<0.01CRP (mg/dl)6.16 (7.5)10.62 (5.3)<0.01Creatinine (mg/dl)1.82 (2.3)2.82 (2.6)0.251HbA1C (%)7.62 (1.9)7.53 (1.9)0.11Preoperative radiologic findingsX-ray Osteomyelitis18(14.5%)7(23.3%)0.24 None106(85.5%)23(76.7%)CT angiogram* Damaged vessel†\n<0.014.382.3608.151 Number of damaged vessels††\nNone versus ≥ 10.1332.9030.61913.615≤1 versus ≥ 2<0.0116.6774.98055.846≤2 versus 3<0.0121.5836.48371.86 Anterior tibial artery<0.0110.4054.08826.486PatentStenosis > 50%, Diffuse atheromatosisOcclusion Posterior tibial artery<0.015.0622.6339.733PatentStenosis > 50%, Diffuse atheromatosisOcclusion Peroneal artery<0.014.2292.2407.984PatentStenosis > 50%, Diffuse atheromatosisOcclusion*Statistical analysis was performed by univariate logistic regression analysis.\n†Comparison of the distribution of the number of damaged vessels between limb salvage success group and failure group.\n††Comparison of the number of damaged vessels between limb salvage success group and failure group.The number of damaged vessels was defined as those with partial occlusion plus those with total occlusion.Hb; hemoglobin, WBC; white blood cell, CRP; C-reactive protein, HbA1c; glycosylated hemoglobin, CT; computed tomography.\n\nResults of evaluation of the value of factors as predictive of outcome\n\n*Statistical analysis was performed by univariate logistic regression analysis.\n\n†Comparison of the distribution of the number of damaged vessels between limb salvage success group and failure group.\n\n††Comparison of the number of damaged vessels between limb salvage success group and failure group.\nThe number of damaged vessels was defined as those with partial occlusion plus those with total occlusion.\nHb; hemoglobin, WBC; white blood cell, CRP; C-reactive protein, HbA1c; glycosylated hemoglobin, CT; computed tomography.", "The average follow-up period was 118 weeks. The six-month survival rate was 91.9% for the success group and 76.7% for the failure group without statistical significance (Table 3). The Kaplan-Meier survival estimate was calculated for the limb salvage success group and failure group patients. A comparison between the two groups was performed with the log-rank test. 
The five-year survival rate was 81.6% for the limb salvage success group and 36.4% for the limb salvage failure group (P < 0.05) (Figure 1).Table 3\nSurvival rates\nFactorsOutcomeLimb salvage success groupLimb salvage failure groupP-valuen (%)n (%)Survived at 6 months0.142Yes114 (91.9%)23 (76.7%)No10 (8.1%)7 (23.3%)Figure 1\nKaplan-Meier survival estimate for both groups.\n\n\nSurvival rates\n\n\nKaplan-Meier survival estimate for both groups.\n", "The objective of this study was to identify any predictive factors of limb salvage success for patients with diabetic foot complications. Many studies have focused on the risk factors of diabetic foot ulceration and independent causation of multiple potential etiologic agents. However, no published studies have examined the risk factors for major amputation after limb-salvage surgery. Risk factors are important in predicting the prognosis of ulceration, yet many patients already have intractable ulceration prior to hospital admission. As a result, these studies are less helpful for the prognosis of patients in need of surgery for complicated diabetic foot [15, 16]. This study differs from previous studies in that it suggests the clinical predictors of limb salvage surgery failure. In this study, Hb, WBC, and CRP were risk factors of limb salvage surgery, but HbA1c was not. This is not surprising, given that Hb, WBC, and CRP are risk factors for diabetic foot complication, and hence a reflection of the patient’s general condition and the degree of wound inflammation [16]. On the other hand, unlike our findings showing no significant relationship between HbA1c and limb salvage surgery outcomes, previous studies have reported that HbA1c is a risk factor for diabetic foot complications [17–19]. Chronic renal failure is also an important risk factor for proximal osteotomy. In our study, 11 of 22 patients with chronic renal failure experienced limb salvage surgery failure. In previous reports, among chronic renal failure patients on dialysis who underwent limb salvage surgery, about 50% experienced failure and went to amputation. It has been reported that the risk of lower limb amputation is greater in diabetic foot patients with kidney disease [15, 20]. However, in this study, creatinine was not a significant risk factor. Why creatinine was not found to be a risk factor for salvage failure in our study, although it has been identified as a risk factor for major limb amputation in previous studies, cannot be explained satisfactorily. The authors supposed that the reason was that creatinine levels could be controlled directly depending on the treatment for renal failure, such as dialysis [15, 21].\nTo reflect the uniqueness of the patients with diabetic foot complications, a simple analysis method, the ambulation state, was used in this study. This can be relatively easily measured through a simple conversation with the patient. We found that the higher the ambulation state prior to surgery, the more successful the limb salvage operation. In other words, the postoperative walking ability was proportional to one’s walking ability before the surgery. CT angiogram was used to identify the status of the blood vessels prior to surgery and has been proven effective in prior studies [22–24]. Nevertheless, no studies have examined the failure of limb salvage surgery using the results of CT angiography until now. 
In the results of this study, the greater the number of damaged vessels as shown on CT angiogram, the greater was the difference of the odds ratio between the two groups. When comparing each blood vessel, a reduced vascular patency was found to be associated with failure. The number of normal blood vessels and the condition of each of the blood vessels had an effect on the results. In addition, it is worth considering that the diameter of the blood vessels of the lower limb is associated with the clinical outcomes of limb salvage surgery. The vessel diameter and odds ratio are largest in the anterior tibial artery, followed by the posterior tibial artery, and last, the peroneal artery [25].\nStone et al. [20] reported the 1-, 3- and 5-year survival rates of diabetic foot patients undergoing transmetatarsal amputation to be 73%, 68%, and 62%, respectively. In our study, the 5-year survival rate of the limb salvage group was 81.6%. This may be higher because we included a high proportion of patients with toe amputation. We also found that in our success and failure groups, the 6-month survival rates showed no statistically significant difference, but the 5-year survival rate of the limb salvage surgery success group was significantly higher, meaning that the patient’s age and life expectancy may help guide further surgical treatment.", "This study evaluated the factors predictive of the success of limb salvage surgery and identified indicators for preserving the limbs of patients with diabetic foot complications, allowing the establishment of an appropriate amputation level of the diabetic foot and minimizing subsequent operations." ]
[ null, "methods", null, "results", null, null, null, "discussion", "conclusions" ]
[ "Diabetic foot", "Major limb amputation", "Limb salvage" ]
Background: Approximately 3–4% of diabetic patients develop foot ulcers sometime during their life. One of the most important strategies for the management of the diabetic foot is to prevent complications that may necessitate a major limb amputation. Even with appropriate treatment, some patients must undergo major amputation or a limb salvage operation [1, 2]. These operations are not only a huge emotional and social burden to the patients due to physical impairment, but also a financial burden [3, 4]. In recent decades, systemization of multidisciplinary management and implementation of free tissue transfer in diabetic foot treatment have led to a notable decrease in the major amputation rate [5, 6]. The key to limb salvage surgery is maximal retention of the limb and minimization of the amputation level. Free tissue transfer has become an alternative option to major amputation for elderly diabetic patients [7, 8]. Successful limb salvage, defined as a stump fit for functional ambulation, is mostly determined by the level of amputation. It is mostly affected by preservation of the talus and calcaneus because it minimizes limb length discrepancy and preserves the heel pad [9]. The level of Chopart amputation is the most proximal among lower limb amputation locations that preserve the talus and calcaneus. Although disputable, the Chopart amputation has been defined as the threshold of successful limb salvage [10–12]. The incidence of reamputation following first toe or transmetatarsal amputation associated with diabetes mellitus has been found to be nearly one-third. Nearly 40% of patients with diabetic foot who had amputations at the foot level have a history of previous amputation [13, 14]. Surgeons should preserve as much limb length as possible. However, it is also important to avoid reamputation, since it is a massive surgical burden to diabetic patients, who usually already are in a poor general condition and face financial difficulties. The purpose of this article is to provide information on determining the optimal amputation level, preserving as much limb length as possible without requiring additional reamputation by analyzing several predictive factors. Methods: Approval for this retrospective study was obtained from the Institutional Review Board on Human Subjects Research and the Ethics Committee, Hanyang University Guri Hospital (IRB No. 2014–07-010). The study population was composed of patients who presented to the Department of Plastic Surgery with a diabetic foot complication from April 2004 to July 2013. The inclusion criterion was gangrene of the distal foot (distal to the metatarsophalangeal joint) that required hospitalization and amputation. Patients with complete healing without amputation were excluded from this study. The patients were divided into two groups based on their latest amputation level. The group with successful limb salvage consisted of patients with a preserved talus and calcaneus after amputation at the Chopart level or distal to it. The other group comprised patients in whom the limb could not be preserved. The patients of this group required reamputation more proximal than the Chopart level following an unsuccessful limb salvage operation. The primary outcome measures included age, sex, smoking status, presence of comorbidities (hypertension, ischemic heart syndrome, stroke, chronic renal failure, chronic osteomyelitis), status of premorbid activities of daily living, preoperative laboratory findings, and preoperative radiologic findings. 
Preoperative laboratory investigations, including the hemoglobin level (Hb), white blood cell (WBC) count, glycosylated hemoglobin (HbA1c), creatinine, and C-reactive protein (CRP) levels were collected. A preoperative computed tomography (CT) angiogram was performed in all patients to evaluate the number of abnormal vessels and the state (patent, partial occlusion, total occlusion) of each vessel of the lower extremity. The secondary outcome measures were six-month and five-year survival rates. Kaplan-Meier survival estimate curves were calculated for all patients. Statistical analysis: All statistical analyses were performed using Stata/SE 12.0 with statistical significance set at P < 0.05. To determine the statistically significant differences between the two groups, an independent t-test was used for hemoglobin and a Mann–Whitney U test was used for numerical prognostic factors. Fisher's exact test and logistic regression analysis were used for categorical prognostic factors. Kaplan-Meier survival estimate curves were also calculated, and the log-rank test was used to compare the survival rate.
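A minimal Python sketch of the comparisons just described is given below (independent t-test, Mann-Whitney U, and Fisher's exact test). The study used Stata/SE 12.0; here scipy is substituted, the continuous values are synthetic draws seeded with the group means and SDs reported in Table 2, the gamma parameters for CRP are arbitrary placeholders, and the 2x2 chronic renal failure table uses the counts from Table 2.

```python
# Minimal sketch of the per-factor group comparisons, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Independent t-test for Hb; means/SDs mirror Table 2, samples are synthetic.
hb_success = rng.normal(10.55, 2.3, 124)   # g/dl
hb_failure = rng.normal(9.42, 1.4, 30)
t_stat, p_hb = stats.ttest_ind(hb_success, hb_failure)

# Mann-Whitney U for a skewed numerical factor such as CRP (mg/dl);
# the gamma shapes/scales are placeholders, not fitted values.
crp_success = rng.gamma(1.0, 6.0, 124)
crp_failure = rng.gamma(2.0, 5.0, 30)
u_stat, p_crp = stats.mannwhitneyu(crp_success, crp_failure)

# Fisher's exact test on the chronic renal failure 2x2 table from Table 2:
# rows = CRF yes/no, columns = success/failure.
crf_table = [[11, 11], [113, 19]]
odds, p_crf = stats.fisher_exact(crf_table)

print(f"Hb t-test p={p_hb:.4f}; CRP Mann-Whitney p={p_crp:.4f}; "
      f"CRF Fisher exact p={p_crf:.4g}")
```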
Results: Patient profiles: Of the 461 consecutively admitted patients with diabetic foot complications between 2004 and 2013, 307 patients with complete remission without amputation were identified and excluded from the study. The other 154 patients, who underwent limb salvage surgery, and who were classified as grade 2–4 in the Wagner system, were divided into two groups. The group with successful limb salvage consisted of 124 patients, and the group with limb salvage failure consisted of 30 patients (Table 1).

Table 1. Final amputation level of both groups

Amputation level | Success group | Failure group
Toe amputation or disarticulation | 100 | -
Ray (metatarsal) | 8 | -
Transmetatarsal | 6 | -
Midfoot Lisfranc | 7 | -
Chopart | 3 | -
Syme | - | -
BK amputation | - | 30
Total | 124 | 30

BK; below knee.

Risk factor evaluation: We evaluated various factors related to the success and failure of limb salvage (Table 2). Statistical analysis showed that chronic renal failure and the activities of daily living were significantly associated with the success or failure of limb salvage. Our analysis found no significant association between the outcome of limb salvage and age, sex, smoking status, type of diabetes, ischemic heart disease, stroke, or hypertension. The outcome of limb salvage was significantly associated with Hb level, WBC count, and CRP level, but not with HbA1c. Osteomyelitis had no significant relationship with the outcome. On preoperative CT angiogram, univariate logistic regression analysis showed that the number of damaged vessels in the failure group was greater than that in the success group, with statistical significance. The comparison between the lack of vessel damage and single vessel damage was not statistically significant. The comparison between damage to two or more vessels and fewer than two vessels showed that the failure group had damage to two or more vessels significantly more often. The failure group also contained a significantly greater proportion of cases with damage to three vessels. For each of the major vessels, cases in the failure group were significantly more likely to have an occlusion. The odds ratio was 10.405 for the anterior tibial artery, 5.062 for the posterior tibial artery, and 4.229 for the peroneal artery, all with a significance level of P < 0.05 (see Table 2 above for the full results, footnotes, and abbreviations).
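Since the Table 2 footnote defines a damaged vessel as one with partial or total occlusion, a tiny hypothetical helper makes the counting rule explicit; the state labels and function name are illustrative assumptions, not code from the study.

```python
# Minimal sketch: counting damaged vessels per the Table 2 footnote
# (damaged = partial occlusion + total occlusion). Labels are hypothetical.
from typing import Iterable

DAMAGED_STATES = {"partial occlusion", "total occlusion"}

def damaged_vessel_count(vessel_states: Iterable[str]) -> int:
    """Count vessels whose CT-angiogram state is partial or total occlusion."""
    return sum(1 for state in vessel_states if state in DAMAGED_STATES)

# Example: anterior tibial occluded, posterior tibial partially occluded,
# peroneal patent -> two damaged vessels.
limb = ["total occlusion", "partial occlusion", "patent"]
print(damaged_vessel_count(limb))  # 2
```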
Follow-up period and survival rate: The average follow-up period was 118 weeks. The six-month survival rate was 91.9% for the success group and 76.7% for the failure group, without statistical significance (Table 3, shown above). The Kaplan-Meier survival estimate was calculated for the limb salvage success group and failure group patients. A comparison between the two groups was performed with the log-rank test.
The five-year survival rate was 81.6% for the limb salvage success group and 36.4% for the limb salvage failure group (P < 0.05) (Figure 1).Table 3 Survival rates FactorsOutcomeLimb salvage success groupLimb salvage failure groupP-valuen (%)n (%)Survived at 6 months0.142Yes114 (91.9%)23 (76.7%)No10 (8.1%)7 (23.3%)Figure 1 Kaplan-Meier survival estimate for both groups. Survival rates Kaplan-Meier survival estimate for both groups. The average follow-up period was 118 weeks. The six-month survival rate was 91.9% for the success group and 76.7% for the failure group without statistical significance (Table 3). The Kaplan-Meier survival estimate was calculated for the limb salvage success group and failure group patients. A comparison between the two groups was performed with the log-rank test. The five-year survival rate was 81.6% for the limb salvage success group and 36.4% for the limb salvage failure group (P < 0.05) (Figure 1).Table 3 Survival rates FactorsOutcomeLimb salvage success groupLimb salvage failure groupP-valuen (%)n (%)Survived at 6 months0.142Yes114 (91.9%)23 (76.7%)No10 (8.1%)7 (23.3%)Figure 1 Kaplan-Meier survival estimate for both groups. Survival rates Kaplan-Meier survival estimate for both groups. Patient profiles: Of the 461 consecutively admitted patients with diabetic foot complications between 2004 and 2013, 307 patients with complete remission without amputation were identified and excluded from the study. The other 154 patients, who underwent limb salvage surgery, and who were classified as grade 2–4 in the Wagner system, were divided into two groups. The group with successful limb salvage consisted of 124 patients, and the group with limb salvage failure consisted of 30 patients (Table 1).Table 1 Final amputation level of both groups Success groupFailure groupToe amputation or disarticulation100-Ray (metatarsal)8-Transmetatarsal6-Midfoot Lisfranc7-Chopart3-Syme--BK amputation-30Total12430BK; below knee. Final amputation level of both groups BK; below knee. Risk factor evaluation: We evaluated various factors related to the success and failure of limb salvage (Table 2). Statistical analysis showed that chronic renal failure and the activities of daily living were significantly associated with the success or failure of limb salvage. Our analysis found no significant association between the outcome of limb salvage and age, sex, smoking status, type of diabetes, ischemic heart syndrome, stroke, or hypertension. The outcome of limb salvage was significantly associated with Hb level, WBC count, and CRP level, but not with HbA1c. Osteomyelitis had no significant relationship with the outcome. On preoperative CT angiogram, multivariate analysis showed that the number of damaged vessels in the failure group was greater than that in the success group with statistical significance. The comparison between the lack of vessel damage and single vessel damage was not statistically significant. The comparison between damage to two or more vessels and less than two vessels showed that the failure group had damage to two or more vessels significantly more often. The failure group also contained a significantly greater proportion of cases with damage to three vessels. For each of the major vessels, cases in the failure group were significantly more likely to have an occlusion. 
Discussion: The objective of this study was to identify any predictive factors of limb salvage success for patients with diabetic foot complications. Many studies have focused on the risk factors of diabetic foot ulceration and the independent causation of multiple potential etiologic agents. However, no published studies have examined the risk factors for major amputation after limb salvage surgery. Risk factors are important in predicting the prognosis of ulceration, yet many patients already have intractable ulceration prior to hospital admission. As a result, these studies are less helpful for the prognosis of patients in need of surgery for complicated diabetic foot [15, 16]. This study differs from previous studies in that it suggests clinical predictors of limb salvage surgery failure. In this study, Hb, WBC, and CRP were risk factors for limb salvage surgery failure, but HbA1c was not. This is not surprising, given that Hb, WBC, and CRP are risk factors for diabetic foot complications, and hence a reflection of the patient's general condition and the degree of wound inflammation [16]. On the other hand, unlike our findings showing no significant relationship between HbA1c and limb salvage surgery outcomes, previous studies have reported that HbA1c is a risk factor for diabetic foot complications [17-19]. Chronic renal failure is also an important risk factor for proximal amputation. In our study, 11 of 22 patients with chronic renal failure experienced limb salvage surgery failure. In previous reports, among chronic renal failure patients on dialysis who underwent limb salvage surgery, about 50% experienced failure and went on to amputation. It has been reported that the risk of lower limb amputation is greater in diabetic foot patients with kidney disease [15, 20]. However, in this study, creatinine was not a significant risk factor. Why creatinine was not found to be a risk factor for salvage failure in our study, although it has been identified as a risk factor for major limb amputation in previous studies, cannot be explained satisfactorily. We supposed that the reason was that creatinine levels can be directly controlled by the treatment for renal failure, such as dialysis [15, 21]. To reflect the uniqueness of patients with diabetic foot complications, a simple analysis measure, the ambulation state, was used in this study. This can be measured relatively easily through a simple conversation with the patient. We found that the better the ambulation state prior to surgery, the more successful the limb salvage operation. In other words, the postoperative walking ability was proportional to the patient's walking ability before surgery. CT angiogram was used to identify the status of the blood vessels prior to surgery and has been proven effective in prior studies [22-24]. Nevertheless, no studies until now have examined the failure of limb salvage surgery using the results of CT angiography.
In the results of this study, the greater the number of damaged vessels shown on CT angiogram, the greater the odds ratio for limb salvage failure. When comparing each blood vessel, reduced vascular patency was found to be associated with failure. Both the number of normal blood vessels and the condition of each blood vessel had an effect on the results. In addition, it is worth considering that the diameter of the blood vessels of the lower limb is associated with the clinical outcomes of limb salvage surgery. The vessel diameter and odds ratio are largest in the anterior tibial artery, followed by the posterior tibial artery, and last the peroneal artery [25]. Stone et al. [20] reported the 1-, 3- and 5-year survival rates of diabetic foot patients undergoing transmetatarsal amputation to be 73%, 68%, and 62%, respectively. In our study, the 5-year survival rate of the limb salvage group was 81.6%. This may be higher because we included a high proportion of patients with toe amputation. We also found that although the 6-month survival rates of our success and failure groups showed no statistically significant difference, the 5-year survival rate of the limb salvage surgery success group was significantly higher, meaning that the patient's age and life expectancy may help guide further surgical treatment. Conclusions: This study evaluated the factors predictive of the success of limb salvage surgery and identified indicators for preserving the limbs of patients with diabetic foot complications, allowing the establishment of an appropriate amputation level of the diabetic foot and minimizing subsequent operations.
Background: The goal of salvage surgery in the diabetic foot is maximal preservation of the limb, but it is also important to resect unviable tissue sufficiently to avoid reamputation. This study aims to provide information on determining the optimal amputation level that allows preservation of as much limb length as possible without the risk of further reamputation, by analyzing several predictive factors. Methods: Between April 2004 and July 2013, 154 patients underwent limb salvage surgery for distal diabetic foot gangrene. According to the final level of amputation, the patients were divided into two groups: patients with primary success of limb salvage, and patients who failed to heal after the primary limb salvage surgery. The factors predictive of success, including comorbidity, laboratory findings, and radiologic findings, were evaluated by a retrospective chart review. Results: The mean age of the study population was 63.9 years, with a male-to-female ratio of approximately 2:1. The mean follow-up duration was 30 months. Statistical analysis showed that underlying renal disease, limited activity before surgery, a low hemoglobin level, a high white blood cell count, a high C-reactive protein level, and damage to two or more vessels on preoperative computed tomography (CT) angiogram were significantly associated with the success or failure of limb salvage. The five-year survival rate was 81.6% for the limb salvage success group and 36.4% for the limb salvage failure group. Conclusions: This study evaluated the factors predictive of the success of limb salvage surgery and identified indicators for preserving as much of the leg as possible in a patient with diabetic foot. This should help surgeons to establish the appropriate amputation level for a case of diabetic foot and help prevent subsequent operations.
Background: Approximately 3–4% of diabetic patients develop foot ulcers at some time during their lives. One of the most important strategies for the management of the diabetic foot is to prevent complications that may necessitate a major limb amputation. Even with appropriate treatment, some patients must undergo major amputation or a limb salvage operation [1, 2]. These operations are not only a huge emotional and social burden to the patients due to physical impairment, but also a financial burden [3, 4]. In recent decades, systemization of multidisciplinary management and implementation of free tissue transfer in diabetic foot treatment have led to a notable decrease in the major amputation rate [5, 6]. The key to limb salvage surgery is maximal retention of the limb and minimization of the amputation level. Free tissue transfer has become an alternative option to major amputation for elderly diabetic patients [7, 8]. Successful limb salvage, defined as a stump fit for functional ambulation, is mostly determined by the level of amputation. It is chiefly affected by preservation of the talus and calcaneus, because this minimizes limb length discrepancy and preserves the heel pad [9]. The Chopart amputation is the most proximal among lower limb amputation levels that preserve the talus and calcaneus. Although disputable, the Chopart amputation has been defined as the threshold of successful limb salvage [10–12]. The incidence of reamputation following first toe or transmetatarsal amputation associated with diabetes mellitus has been found to be nearly one-third. Nearly 40% of patients with diabetic foot who had amputations at the foot level have a history of previous amputation [13, 14]. Surgeons should preserve as much limb length as possible. However, it is also important to avoid reamputation, since it is a massive surgical burden to diabetic patients, who are usually already in poor general condition and facing financial difficulties. The purpose of this article is to provide information on determining the optimal amputation level, preserving as much limb length as possible without requiring additional reamputation, by analyzing several predictive factors. Conclusions: This study evaluated the factors predictive of the success of limb salvage surgery and identified indicators for preserving the limbs of patients with diabetic foot complications, allowing the establishment of an appropriate amputation level of the diabetic foot and minimizing subsequent operations.
4,666
331
[ 385, 94, 133, 620, 168 ]
9
[ "limb", "salvage", "group", "failure", "limb salvage", "vessels", "patients", "success", "amputation", "survival" ]
[ "amputation associated diabetes", "management diabetic foot", "amputation limb salvage", "amputation elderly diabetic", "transfer diabetic foot" ]
[CONTENT] Diabetic foot | Major limb amputation | Limb salvage [SUMMARY]
[CONTENT] Diabetic foot | Major limb amputation | Limb salvage [SUMMARY]
[CONTENT] Diabetic foot | Major limb amputation | Limb salvage [SUMMARY]
[CONTENT] Diabetic foot | Major limb amputation | Limb salvage [SUMMARY]
[CONTENT] Diabetic foot | Major limb amputation | Limb salvage [SUMMARY]
[CONTENT] Diabetic foot | Major limb amputation | Limb salvage [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Amputation, Surgical | Diabetic Foot | Female | Follow-Up Studies | Gangrene | Humans | Kaplan-Meier Estimate | Limb Salvage | Male | Middle Aged | Reoperation | Retrospective Studies | Risk Factors | Survival Rate [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Amputation, Surgical | Diabetic Foot | Female | Follow-Up Studies | Gangrene | Humans | Kaplan-Meier Estimate | Limb Salvage | Male | Middle Aged | Reoperation | Retrospective Studies | Risk Factors | Survival Rate [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Amputation, Surgical | Diabetic Foot | Female | Follow-Up Studies | Gangrene | Humans | Kaplan-Meier Estimate | Limb Salvage | Male | Middle Aged | Reoperation | Retrospective Studies | Risk Factors | Survival Rate [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Amputation, Surgical | Diabetic Foot | Female | Follow-Up Studies | Gangrene | Humans | Kaplan-Meier Estimate | Limb Salvage | Male | Middle Aged | Reoperation | Retrospective Studies | Risk Factors | Survival Rate [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Amputation, Surgical | Diabetic Foot | Female | Follow-Up Studies | Gangrene | Humans | Kaplan-Meier Estimate | Limb Salvage | Male | Middle Aged | Reoperation | Retrospective Studies | Risk Factors | Survival Rate [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Amputation, Surgical | Diabetic Foot | Female | Follow-Up Studies | Gangrene | Humans | Kaplan-Meier Estimate | Limb Salvage | Male | Middle Aged | Reoperation | Retrospective Studies | Risk Factors | Survival Rate [SUMMARY]
[CONTENT] amputation associated diabetes | management diabetic foot | amputation limb salvage | amputation elderly diabetic | transfer diabetic foot [SUMMARY]
[CONTENT] amputation associated diabetes | management diabetic foot | amputation limb salvage | amputation elderly diabetic | transfer diabetic foot [SUMMARY]
[CONTENT] amputation associated diabetes | management diabetic foot | amputation limb salvage | amputation elderly diabetic | transfer diabetic foot [SUMMARY]
[CONTENT] amputation associated diabetes | management diabetic foot | amputation limb salvage | amputation elderly diabetic | transfer diabetic foot [SUMMARY]
[CONTENT] amputation associated diabetes | management diabetic foot | amputation limb salvage | amputation elderly diabetic | transfer diabetic foot [SUMMARY]
[CONTENT] amputation associated diabetes | management diabetic foot | amputation limb salvage | amputation elderly diabetic | transfer diabetic foot [SUMMARY]
[CONTENT] limb | salvage | group | failure | limb salvage | vessels | patients | success | amputation | survival [SUMMARY]
[CONTENT] limb | salvage | group | failure | limb salvage | vessels | patients | success | amputation | survival [SUMMARY]
[CONTENT] limb | salvage | group | failure | limb salvage | vessels | patients | success | amputation | survival [SUMMARY]
[CONTENT] limb | salvage | group | failure | limb salvage | vessels | patients | success | amputation | survival [SUMMARY]
[CONTENT] limb | salvage | group | failure | limb salvage | vessels | patients | success | amputation | survival [SUMMARY]
[CONTENT] limb | salvage | group | failure | limb salvage | vessels | patients | success | amputation | survival [SUMMARY]
[CONTENT] amputation | limb | diabetic | length | limb length | burden | diabetic patients | patients | foot | level [SUMMARY]
[CONTENT] test | patients | survival | prognostic factors | prognostic | distal | preoperative | statistical | hemoglobin | curves [SUMMARY]
[CONTENT] group | failure | vessels | failure group | salvage | 23 | damaged | success | limb salvage | limb [SUMMARY]
[CONTENT] diabetic foot | foot | diabetic | allowing establishment appropriate | success limb salvage surgery | allowing establishment appropriate amputation | limbs | limbs patients | limbs patients diabetic | limbs patients diabetic foot [SUMMARY]
[CONTENT] limb | group | salvage | amputation | failure | patients | limb salvage | survival | success | test [SUMMARY]
[CONTENT] limb | group | salvage | amputation | failure | patients | limb salvage | survival | success | test [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Between April 2004 and July 2013 | 154 ||| two ||| [SUMMARY]
[CONTENT] 63.9 years | approximately 2:1 ||| 30 months ||| two | CT ||| five-year | 81.6% | 36.4% [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| Between April 2004 and July 2013 | 154 ||| two ||| ||| ||| 63.9 years | approximately 2:1 ||| 30 months ||| two | CT ||| five-year | 81.6% | 36.4% ||| ||| [SUMMARY]
[CONTENT] ||| ||| Between April 2004 and July 2013 | 154 ||| two ||| ||| ||| 63.9 years | approximately 2:1 ||| 30 months ||| two | CT ||| five-year | 81.6% | 36.4% ||| ||| [SUMMARY]
Time-dependent risk of developing distant metastasis in breast cancer patients according to treatment, age and tumour characteristics.
24434426
Metastatic breast cancer is a severe condition without curative treatment. How relative and absolute risk of distant metastasis varies over time since diagnosis, as a function of treatment, age and tumour characteristics, has not been studied in detail.
BACKGROUND
A total of 9514 women under the age of 75 when diagnosed with breast cancer in Stockholm and Gotland regions during 1990-2006 were followed up for metastasis (mean follow-up=5.7 years). Time-dependent development of distant metastasis was analysed using flexible parametric survival models and presented as hazard ratio (HR) and cumulative risk.
METHODS
A total of 995 (10.4%) patients developed distant metastasis; the most common sites were the skeleton (32.5%) and multiple sites (28.3%). Women younger than 50 years at diagnosis, with lymph node-positive, oestrogen receptor (ER)-negative, >20 mm tumours and treated only locally, had the highest risk of distant metastasis (0-5 years' cumulative risk = 0.55; 95% confidence interval (CI): 0.47-0.64). Women older than 50 years at diagnosis, with ER-positive, lymph node-negative and ≤20-mm tumours, had the same, and lowest, cumulative risk of developing metastasis at 0-5 and 5-10 years (cumulative risk = 0.03; 95% CI: 0.02-0.04). In the period of 5-10 years after diagnosis, women with ER-positive, lymph node-positive and >20-mm tumours were at the highest risk of distant recurrence. Women with ER-negative tumours showed a decline in risk during this period.
RESULTS
Our data show no support for discontinuing clinical follow-up of breast cancer patients at 5 years, and suggest further investigation of differential clinical follow-up for different subgroups of patients.
CONCLUSION
[ "Aged", "Antineoplastic Agents", "Breast Neoplasms", "Cohort Studies", "Female", "Humans", "Middle Aged", "Neoplasm Metastasis", "Neoplasm Recurrence, Local", "Risk", "Sweden", "Time Factors" ]
3950882
null
null
null
null
Results
The mean age of the patients was 56.4 years at diagnosis. The tumour characteristics were distributed as follows: 63.5% of patients had lymph node-negative tumours, 69% had tumours of size 20 mm or less and 82.3% had ER-positive tumours (Table 1). In our cohort, 87.9% of women underwent systemic adjuvant treatment. The overall rate of first distant metastasis was 19.4 per 1000 person–years (95% CI: 18.2–20.6).

Figure 1 shows rates of first distant metastasis in relation to time since breast cancer diagnosis for different age groups of patients and tumour characteristics. Women younger than 50 years at breast cancer diagnosis, women with ER-negative tumours, positive lymph nodes and tumours larger than 20 mm all showed a peak in the rate of first distant metastasis at about 2 years, in comparison with fairly stable rates in other subgroups of patients. In particular, women with ER-negative tumours showed a sharp decrease in the rate of first distant metastasis after 2 years since breast cancer diagnosis, whereas women with ER-positive tumours showed rather stable rates of first distant metastasis over time from about 1 year up until 10 years since breast cancer diagnosis.

Table 2 shows the frequency distribution of sites of first distant metastasis by time to first distant metastasis, subdivided into 0–2, 2–5 and 5–10 years since diagnosis. Overall, metastasis to the skeleton (32.5%) and multiple sites of metastasis (28.3%) were the most frequent presentations of distant metastasis within 10 years. No particular combination pattern of multiple sites of first distant metastasis was found in the cohort. The site distribution of first distant metastasis changed significantly (P<0.05) over time for the following sites: skeleton, CNS and liver. The proportion of first distant metastasis to the skeleton increased from 29.9% of all first distant metastasis in the first 2 years to 36.5% in the period of 5–10 years since breast cancer diagnosis. The proportion of CNS and liver metastasis instead decreased from 6.8% and 15.4%, respectively, in the first 2 years to 1.8% and 8.0%, respectively, in 5–10 years since breast cancer diagnosis. Between 5 and 10 years since breast cancer diagnosis, 274 (27.5%) of all first distant metastases were diagnosed.

Table 3 shows time-dependent HRs of developing first distant metastasis by age and tumour characteristics at 2, 5 and 10 years from breast cancer diagnosis. For women with positive lymph nodes, the hazard of developing metastasis was still significantly increased at 10 years after diagnosis (HR=2.6; 95% CI: 1.9–3.5) compared with women with negative lymph nodes. Women with ER-negative tumours had an increased hazard at 2 (HR=2.7; 95% CI: 2.2–3.3) and 5 (HR=1.4; 95% CI: 1.1–1.7) years from breast cancer diagnosis compared with women with ER-positive tumours; at 10 years from diagnosis, the same HR was no longer significantly increased (HR=0.9; 95% CI: 0.6–1.4). A tumour larger than 20 mm at breast cancer diagnosis was still associated with an increased hazard of developing first distant metastasis at 10 years (HR=1.5; 95% CI: 1.1–2.0).

Table 4 shows cumulative risks of developing first distant metastasis within 5 years of diagnosis, and within 10 years after surviving 5 years without metastasis, according to all different covariate patterns.
Among those with an adjuvant treatment combination including CT and/or HT, the highest risk of first distant metastasis in the period of 0–5 years was found in patients of 50 years of age or less, with tumour size >20 mm, positive lymph nodes and ER-negative tumours at breast cancer diagnosis (cumulative risk =0.45; 95% CI: 0.38–0.53); whereas the lowest risk was among patients aged 51–74 years with negative lymph nodes, tumour size ⩽20 mm and with ER-positive tumours at breast cancer diagnosis (cumulative risk =0.03; 95% CI: 0.02–0.04). For the same two groups of patients in the period of 5–10 years following diagnosis (that is, among those who survived metastasis-free until 5 years), the risk dropped (cumulative risk=0.09; 95% CI: 0.05–0.17) for the group originally at the highest risk, whereas it remained the same (cumulative risk=0.03; 95% CI: 0.02–0.04) for the group at the lowest risk. The risk at 5–10 years following diagnosis was similar to the risk for the period of 0–5 years in all women with ER-positive tumours and negative lymph nodes, regardless of treatment, tumour size and age. This was also true for women with ER-positive tumours and positive lymph nodes, if the tumour size at diagnosis was ⩽20 mm. For all patients with ER-negative tumours instead, the risk at 5–10 years following diagnosis was significantly lower compared with the risk for the period of 0–5 years regardless of age, tumour size and nodes. Women treated with systemic adjuvant treatment (that is, CT and/or HT) always had lower risks compared to women treated with surgery only, with or without RT.
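The conditional 5–10-year risks quoted above follow from a simple identity: given the cumulative incidence function F(t) for first distant metastasis and the probability S(5) of being alive and metastasis-free at 5 years, the risk of metastasis in years 5–10 conditional on being event-free at 5 years is (F(10) − F(5)) / S(5). A small sketch of that arithmetic follows, with made-up values rather than the study's own (which are reported only per covariate pattern in Table 4).

```python
# Conditional 5-10-year risk from cumulative incidence values.
# F5, F10 and S5 below are hypothetical, for illustration only.
def conditional_risk(cif_5: float, cif_10: float, event_free_5: float) -> float:
    """Risk of first metastasis in years 5-10, given event-free survival to 5."""
    return (cif_10 - cif_5) / event_free_5

F5, F10 = 0.45, 0.50  # hypothetical cumulative incidence at 5 and 10 years
S5 = 0.50             # hypothetical probability of being event-free at 5 years
print(f"conditional 5-10-year risk: {conditional_risk(F5, F10, S5):.2f}")  # 0.10
```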
null
null
[ "Data source", "Study cohort", "Ethics", "Statistical analysis" ]
[ "The Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient.", "A population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer in the period of 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers).\nThe cohort comprised 14 188 women. Those who had a metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis occurring within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) and did not undergo surgery for breast cancer (n=217) were excluded. Women who were diagnosed with second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because of impossibility to infer origin of metastasis. Finally, women who were referred as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or due to the inaccuracy in the reported underlying cause of death. Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis.\nInformation on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. Information on ER status was available for 80.3% of patients. For this reason, in the analysis we eventually included only patients with information available for each covariate used in the models (n=9514), of which 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis.\nInformation on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastasis diagnosed within 2 months from the first distant metastasis diagnosis).", "This study has an ethical permission from the regional ethics committee at the Karolinska Institutet.", "This cohort study was analysed using survival methodology with adjustment for competing risks. 
All women were followed up and they contributed to risk–time from the date of diagnosis until the date of first distant metastasis (event) or diagnosis of second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring).\nRates of first distant metastasis were calculated as number of events (development of first distant metastasis) divided by total risk–time. These rates were modelled using flexible parametric survival models (FPMs) that use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as measure of association between exposures and outcome. For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale used in the analysis was time since diagnosis of breast cancer.\nFrom the FPM it is also possible to post estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk for first distant metastasis within 5 years from diagnosis for various covariate patterns in the presence of competing event death due to causes other than breast cancer. We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional upon surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period at breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy only (RT)) and systemic treatment (any combination with either chemotherapy (CT) or hormone therapy (HT)).\nFirstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow HR to vary over follow-up (that is, time-dependent effects).\nSecondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of competing event death due to other causes. As cumulative risk is an absolute measure, results are shown for a given set of characteristics (covariate patterns). We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop first distant metastasis in the coming years? The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R package version 2.15.1 for calculating CIFs 5–10 years." ]
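The flexible parametric (Royston–Parmar) models above were fitted in Stata; Python's lifelines has no direct equivalent, so as a rough non-parametric stand-in, the cumulative incidence of metastasis in the presence of the competing event (death from other causes) can be sketched with the Aalen–Johansen estimator. All data below are simulated placeholders.

```python
# Non-parametric cumulative incidence of metastasis with a competing risk,
# via the Aalen-Johansen estimator in lifelines. Simulated data only.
import numpy as np
from lifelines import AalenJohansenFitter

rng = np.random.default_rng(1)
n = 500
durations = rng.uniform(0.1, 10.0, size=n)  # years since diagnosis
# Event codes: 0 = censored, 1 = first distant metastasis, 2 = death (other causes)
events = rng.choice([0, 1, 2], size=n, p=[0.7, 0.2, 0.1])

ajf = AalenJohansenFitter()
ajf.fit(durations, events, event_of_interest=1)
# Cumulative incidence of metastasis, accounting for the competing event
print(ajf.cumulative_density_.tail())
```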
[ null, null, null, null ]
[ "Materials and Methods", "Data source", "Study cohort", "Ethics", "Statistical analysis", "Results", "Discussion" ]
[ " Data source The Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient.\nThe Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient.\n Study cohort A population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer in the period of 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers).\nThe cohort comprised 14 188 women. Those who had a metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis occurring within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) and did not undergo surgery for breast cancer (n=217) were excluded. Women who were diagnosed with second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because of impossibility to infer origin of metastasis. Finally, women who were referred as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or due to the inaccuracy in the reported underlying cause of death. Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis.\nInformation on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. Information on ER status was available for 80.3% of patients. For this reason, in the analysis we eventually included only patients with information available for each covariate used in the models (n=9514), of which 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis.\nInformation on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. 
Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastasis diagnosed within 2 months from the first distant metastasis diagnosis).\nA population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer in the period of 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers).\nThe cohort comprised 14 188 women. Those who had a metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis occurring within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) and did not undergo surgery for breast cancer (n=217) were excluded. Women who were diagnosed with second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because of impossibility to infer origin of metastasis. Finally, women who were referred as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or due to the inaccuracy in the reported underlying cause of death. Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis.\nInformation on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. Information on ER status was available for 80.3% of patients. For this reason, in the analysis we eventually included only patients with information available for each covariate used in the models (n=9514), of which 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis.\nInformation on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. 
Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastasis diagnosed within 2 months from the first distant metastasis diagnosis).\n Ethics This study has an ethical permission from the regional ethics committee at the Karolinska Institutet.\nThis study has an ethical permission from the regional ethics committee at the Karolinska Institutet.\n Statistical analysis This cohort study was analysed using survival methodology with adjustment for competing risks. All women were followed up and they contributed to risk–time from the date of diagnosis until the date of first distant metastasis (event) or diagnosis of second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring).\nRates of first distant metastasis were calculated as number of events (development of first distant metastasis) divided by total risk–time. These rates were modelled using flexible parametric survival models (FPMs) that use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as measure of association between exposures and outcome. For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale used in the analysis was time since diagnosis of breast cancer.\nFrom the FPM it is also possible to post estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk for first distant metastasis within 5 years from diagnosis for various covariate patterns in the presence of competing event death due to causes other than breast cancer. We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional upon surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period at breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy only (RT)) and systemic treatment (any combination with either chemotherapy (CT) or hormone therapy (HT)).\nFirstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow HR to vary over follow-up (that is, time-dependent effects).\nSecondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of competing event death due to other causes. As cumulative risk is an absolute measure, results are shown for a given set of characteristics (covariate patterns). We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop first distant metastasis in the coming years? 
The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R package version 2.15.1 for calculating CIFs 5–10 years.\nThis cohort study was analysed using survival methodology with adjustment for competing risks. All women were followed up and they contributed to risk–time from the date of diagnosis until the date of first distant metastasis (event) or diagnosis of second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring).\nRates of first distant metastasis were calculated as number of events (development of first distant metastasis) divided by total risk–time. These rates were modelled using flexible parametric survival models (FPMs) that use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as measure of association between exposures and outcome. For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale used in the analysis was time since diagnosis of breast cancer.\nFrom the FPM it is also possible to post estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk for first distant metastasis within 5 years from diagnosis for various covariate patterns in the presence of competing event death due to causes other than breast cancer. We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional upon surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period at breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy only (RT)) and systemic treatment (any combination with either chemotherapy (CT) or hormone therapy (HT)).\nFirstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow HR to vary over follow-up (that is, time-dependent effects).\nSecondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of competing event death due to other causes. As cumulative risk is an absolute measure, results are shown for a given set of characteristics (covariate patterns). We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop first distant metastasis in the coming years? The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R package version 2.15.1 for calculating CIFs 5–10 years.", "The Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. 
The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient.", "A population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer in the period of 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers).\nThe cohort comprised 14 188 women. Those who had a metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis occurring within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) and did not undergo surgery for breast cancer (n=217) were excluded. Women who were diagnosed with second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because of impossibility to infer origin of metastasis. Finally, women who were referred as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or due to the inaccuracy in the reported underlying cause of death. Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis.\nInformation on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. Information on ER status was available for 80.3% of patients. For this reason, in the analysis we eventually included only patients with information available for each covariate used in the models (n=9514), of which 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis.\nInformation on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastasis diagnosed within 2 months from the first distant metastasis diagnosis).", "This study has an ethical permission from the regional ethics committee at the Karolinska Institutet.", "This cohort study was analysed using survival methodology with adjustment for competing risks. 
All women were followed up and they contributed to risk–time from the date of diagnosis until the date of first distant metastasis (event) or diagnosis of second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring).\nRates of first distant metastasis were calculated as number of events (development of first distant metastasis) divided by total risk–time. These rates were modelled using flexible parametric survival models (FPMs) that use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as measure of association between exposures and outcome. For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale used in the analysis was time since diagnosis of breast cancer.\nFrom the FPM it is also possible to post estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk for first distant metastasis within 5 years from diagnosis for various covariate patterns in the presence of competing event death due to causes other than breast cancer. We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional upon surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period at breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy only (RT)) and systemic treatment (any combination with either chemotherapy (CT) or hormone therapy (HT)).\nFirstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow HR to vary over follow-up (that is, time-dependent effects).\nSecondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of competing event death due to other causes. As cumulative risk is an absolute measure, results are shown for a given set of characteristics (covariate patterns). We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop first distant metastasis in the coming years? The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R package version 2.15.1 for calculating CIFs 5–10 years.", "The mean age of the patients was 56.4 years at diagnosis. The tumour characteristics were distributed as follows: 63.5% of patients had lymph node-negative tumours, 69% had tumours of size 20 mm or less and 82.3% had ER-positive tumours (Table 1). In our cohort, 87.9% of women underwent systemic adjuvant treatment. 
The overall rate of first distant metastasis was 19.4 per 1000 person–years (95% CI: 18.2–20.6).\nFigure 1 shows rates of first distant metastasis in relation to time since breast cancer diagnosis for different age groups of patients and tumour characteristics. Women younger than 50 years at breast cancer diagnosis, women with ER-negative tumours, positive lymph nodes and tumours larger than 20 mm, all showed a peak in the rate of first distant metastasis at about 2 years, in comparison with fairly stable rates in other subgroups of patients. In particular, women with ER-negative tumours showed a sharp decrease in the rate of first distant metastasis after 2 years since breast cancer diagnosis, whereas women with ER-positive tumours showed rather stable rates of first distant metastasis over time from about 1 year up until 10 years since breast cancer diagnosis.\nTable 2 shows frequency distribution of sites of first distant metastasis by time-to-first-distant metastasis subdivided into 0–2, 2–5 and 5–10 years since diagnosis. Overall, metastasis to the skeleton (32.5%) and multiple sites of metastasis (28.3%) were the most frequent presentations of distant metastasis within 10 years. No particular combination pattern of multiple sites of first distant metastasis was found in the cohort. The site distribution of first distant metastasis changed significantly (P<0.05) over time for the following sites: skeleton, CNS and liver. The proportion of first distant metastasis to the skeleton increased from 29.9% of all first distant metastasis in the first 2 years to 36.5% in the period of 5–10 years since breast cancer diagnosis. The proportion of CNS and liver metastasis instead decreased from 6.8% and 15.4%, respectively, in the first 2 years to 1.8% and 8.0%, respectively, in 5–10 years since breast cancer diagnosis. Between 5 and 10 years since breast cancer diagnosis, 274 (27.5%) of all first distant metastasis were diagnosed.\nTable 3 shows time-dependent HRs of developing first distant metastasis by age and tumour characteristics at 2, 5 and 10 years from breast cancer diagnosis. For women with positive lymph nodes, the hazard of developing metastasis was still significantly increased at 10 years after diagnosis (HR=2.6; 95% CI: 1.9–3.5) compared with women with negative lymph nodes. Women with ER-negative tumours had an increased hazard at 2 (HR=2.7; 95% CI: 2.2–3.3) and 5 (HR=1.4; 95% CI: 1.1–1.7) years from breast cancer diagnosis compared with women with ER-positive tumours; at 10 years from diagnosis, the same HR was not significantly increased anymore (HR=0.9; 95% CI: 0.6–1.4). Having a tumour larger than 20 mm at breast cancer diagnosis was still causing an increased hazard of developing first distant metastasis at 10 years (HR=1.5; 95% CI: 1.1–2.0).\nTable 4 shows cumulative risks of developing first distant metastasis within 5 years of diagnosis, and within 10 years surviving 5 years without metastasis, according to all different covariate patterns. 
Among those with an adjuvant treatment combination including CT and/or HT, the highest risk of first distant metastasis in the period of 0–5 years was found in patients of 50 years of age or less, with tumour size >20 mm, positive lymph nodes and ER-negative tumours at breast cancer diagnosis (cumulative risk=0.45; 95% CI: 0.38–0.53), whereas the lowest risk was among patients aged 51–74 years with negative lymph nodes, tumour size ⩽20 mm and ER-positive tumours at breast cancer diagnosis (cumulative risk=0.03; 95% CI: 0.02–0.04). For the same two groups of patients in the period of 5–10 years following diagnosis (that is, among those who survived metastasis-free until 5 years), the risk dropped (cumulative risk=0.09; 95% CI: 0.05–0.17) for the group originally at the highest risk, whereas it remained the same (cumulative risk=0.03; 95% CI: 0.02–0.04) for the group at the lowest risk. The risk at 5–10 years following diagnosis was similar to the risk for the period of 0–5 years in all women with ER-positive tumours and negative lymph nodes, regardless of treatment, tumour size and age. This was also true for women with ER-positive tumours and positive lymph nodes, if the tumour size at diagnosis was ⩽20 mm. For all patients with ER-negative tumours, by contrast, the risk at 5–10 years following diagnosis was significantly lower than the risk for the period of 0–5 years, regardless of age, tumour size and nodal status. Women treated with systemic adjuvant treatment (that is, CT and/or HT) always had lower risks compared with women treated with surgery only, with or without RT.", "We found that the site distribution of first distant metastasis over time changed significantly for metastasis to the skeleton, CNS and liver. The risk of developing distant metastasis within 10 years of breast cancer diagnosis varied significantly across subgroups of patients. The risk remained non-negligible up to 10 years since diagnosis, particularly among women with positive lymph nodes. The risk was particularly high among patients with ER-negative tumours within the first 5 years of diagnosis, whereas it significantly decreased after 5 years since diagnosis. By contrast, the risk of developing distant metastasis remained rather stable for most subgroups of patients with ER-positive tumours, independent of age and other tumour characteristics.\nIn our cohort, one-third of first distant metastases were diagnosed in the skeleton. This proportion significantly increased over time since diagnosis, whereas the proportion of metastases to the liver and CNS significantly decreased. This seems to reflect the natural history of distant recurrences, as women with ER-positive tumours more often develop metastasis to the skeleton later during follow-up, whereas women with ER-negative tumours more often develop early liver and CNS metastasis (Kennecke et al, 2010).\nInterestingly, the risk of developing distant metastasis over time since diagnosis mirrors the pattern observed for breast cancer-specific mortality that has been previously studied in this same cohort: lymph node status and tumour size at diagnosis are significant prognosticators of distant recurrence, as well as of breast cancer death, at 10 years since breast cancer diagnosis, whereas ER status is not.
These findings confirm that distant metastases still indicate a very poor prognosis in breast cancer patients (Hilsenbeck et al, 1998; Louwman et al, 2008; Colzani et al, 2011).\nCumulative risk estimates of developing distant metastasis were obtained for specific subgroups of patients while taking into account the competing risk of dying from other causes and allowing time-dependent effects (interaction with time since diagnosis) for age and tumour characteristics. The use of cumulative risk as a function of time is of relevant clinical value, as it allows quantitative estimation of the actual probability of developing distant metastasis for any given subgroup of breast cancer patients at different time points. This measurement may help clinicians to better estimate the individual risk of developing first distant metastasis during the first 5 years as well as 5–10 years after diagnosis. One of the main messages from this study is that the risk of developing distant metastasis carries over to the second 5 years of follow-up for most metastasis-free patients, particularly those with lymph node-positive tumours, where the risk at 5–10 years after diagnosis is always 8% or higher. In addition, for some subgroups of patients with ER-positive tumours, the cumulative risk of developing distant metastasis in the periods of 0–5 and 5–10 years is essentially the same.\nPatients with ER-positive and lymph node-negative tumours show no change and very low risk over time. This could be explained by a very good effect of treatment, or alternatively even by overtreatment, since patients undergoing local treatment show a similarly low risk. More clinical attention should, however, be given to other subgroups, in which follow-up could be intensified and treatment could be improved. In particular, women with ER-positive, lymph node-positive, >20 mm tumours still have a more than 10% risk of developing a first distant metastasis in the period of 5–10 years after diagnosis. Of note, all subgroups of patients with ER-negative tumours show a sharp, significant decrease in the risk of developing distant metastasis in the period of 5–10 years following diagnosis compared with the period of 0–5 years, independent of age and other tumour characteristics. The American Society of Clinical Oncology (ASCO) recently concluded, in a review of the clinical practice guidelines for primary breast cancer follow-up, that there is currently no need to update the guidelines (Khatcheressian et al, 2013). Although evidence supporting a change of current practice is rather weak (Robertson et al, 2011), following future improvements in the prevention and treatment of metastatic breast cancer, differential follow-up of patients with ER-positive and ER-negative tumours could be considered, given their remarkably different risks of distant spread.\nThis study has some relevant strengths. We used a large cohort of women followed up to 10 years with accurate and complete information, enabling us to apply a comprehensive design and methodology.
We analysed the risk of developing distant metastasis from several perspectives, providing a thorough picture of the topic: the proportion of first distant metastases at different sites, and the relative (HR) and absolute (cumulative) risk of developing metastasis at different follow-up times according to patient and tumour characteristics, taking competing risks into account and allowing the main effects to vary over time.\nThis paper has some limitations as well. The date of diagnosis of distant metastasis might depend on the timing of clinical work-ups and the type of follow-up. In addition, the site of first distant metastasis could be affected by detection bias. As in all studies requiring a long follow-up, the estimated cumulative risk of first distant metastasis might not reflect current risk, as it was observed in women diagnosed between 1995 and 1999. In particular, adjuvant treatment has changed, and we do not know whether the same risk patterns are observed in recently diagnosed patients: for instance, aromatase inhibitors have been widely used instead of tamoxifen since the early 2000s, and high-risk patients with ER-positive tumours are today offered extended endocrine therapy up to 10 years (Burstein et al, 2010).\nIn conclusion, there is still a clinically relevant risk of developing first distant metastasis from 5 to 10 years after breast cancer diagnosis in several groups of patients, especially those with positive lymph nodes at diagnosis. Patients with negative lymph nodes and ER-positive tumours, unlike those with ER-negative tumours, have a very similar low risk of developing first distant metastasis in the first 5 years and in the second 5 years of follow-up, independent of age, other tumour characteristics and competing risks of dying due to other causes. Upcoming improvements in metastasis prevention and treatment should elicit further research aimed at identifying specific clinical follow-up strategies for different subgroups of breast cancer patients. Five-year metastasis-free survival may no longer be an appropriate outcome indicator for breast cancer patients, particularly for those with ER-positive tumours." ]
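The crude rate reported in the results above (events divided by total risk–time, expressed per 1000 person–years) is straightforward to reproduce. Below is a minimal Python sketch, not the authors' code: the paper used Stata and R, the person–year denominator is not reported (so the one used here is purely illustrative), and the log-normal confidence interval is a common choice that may differ from the method behind the published CI.

```python
import numpy as np

def rate_per_1000_py(events, person_years, z=1.96):
    """Crude incidence rate per 1000 person-years with a log-normal 95% CI."""
    rate = events / person_years
    # Normal approximation on the log scale: SE(log rate) ~= 1 / sqrt(events)
    lo = rate * np.exp(-z / np.sqrt(events))
    hi = rate * np.exp(+z / np.sqrt(events))
    return 1000 * rate, 1000 * lo, 1000 * hi

# The paper reports 995 events; the person-year total is not given,
# so the denominator below is a hypothetical, illustrative value.
rate, lo, hi = rate_per_1000_py(events=995, person_years=50_000)
print(f"{rate:.1f} per 1000 person-years (95% CI: {lo:.1f}-{hi:.1f})")
```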
[ "materials|methods", null, null, null, null, "results", "discussion" ]
[ "breast cancer", "distant metastasis", "risk", "survival analysis", "tumour characteristics", "competing risk" ]
Materials and Methods: Data source The Stockholm Breast Cancer Register (SBCR) is a population-based clinical register held by the Regional Cancer Centre of Stockholm–Gotland region, Sweden. The register contains data about all breast cancer diagnoses occurring in the Swedish counties of Stockholm and Gotland since 1976. The SBCR provides detailed clinical information, such as tumour characteristics and intention of treatment, for each patient. Study cohort A population-based cohort was selected from the SBCR, including all women diagnosed with first invasive breast cancer between 1 January 1990 and 31 December 2006, younger than 75 years at diagnosis and without any previous occurrence of cancer. Patients were followed up for at most 10 years from the date of breast cancer diagnosis until the development of first distant metastasis (event), until death, diagnosis of second primary cancer or end of study period (31 December 2006). The records were linked to the Swedish Cancer Register (Barlow et al, 2009) for information on other invasive cancers through linkage by the personal identification number (unique for each Swedish resident and included in all Swedish population registers). The cohort comprised 14 188 women. Those who had metastatic disease at diagnosis (stage IV, n=264), were diagnosed with first distant metastasis within 3 months from breast cancer diagnosis (n=44), had tumour size less than 1 mm (n=52), received neoadjuvant treatment (n=798) or did not undergo surgery for breast cancer (n=217) were excluded. Women who were diagnosed with a second primary cancer at the time of breast cancer diagnosis (n=226) were also excluded because it was impossible to infer the origin of metastasis. Finally, women who were recorded as dying from breast cancer without any record of distant metastasis were also excluded (n=240), as it was not possible to assess whether this was due to missing information about metastatic status or to inaccuracy in the reported underlying cause of death. Of the remaining patients (n=12 322), 1189 (9.7%) had subsequent distant metastasis within 10 years of initial diagnosis. Information on date of breast cancer diagnosis, planned adjuvant treatment and site of first distant metastasis was complete for all patients. Information on number of positive lymph nodes and tumour size was available for 94.6% and 98.4% of patients, respectively. Information on ER status was available for 80.3% of patients. For this reason, the analysis eventually included only patients with information available for each covariate used in the models (n=9514), of whom 995 (10.4%) developed distant metastasis within 10 years of initial diagnosis. Information on the cause of death was obtained from the Swedish Cause of Death Register (Rutqvist, 1985). Patients with an underlying cause of death other than breast cancer (International Classification of Diseases (ICD) version 8=174, ICD9=174, ICD10=C50) were censored. Information on the site of first metastasis was obtained from the SBCR and was divided into eight groups according to the ICD code: skeleton, lung, pleura, liver, central nervous system (CNS), skin, other sites and multiple sites of first distant metastasis (defined as distant metastasis diagnosed within 2 months from the first distant metastasis diagnosis). Ethics This study received ethical approval from the regional ethics committee at the Karolinska Institutet. Statistical analysis This cohort study was analysed using survival methodology with adjustment for competing risks. All women were followed up and contributed risk–time from the date of diagnosis until the date of first distant metastasis (event) or diagnosis of second primary cancer, date of death, 10 years of follow-up or end of study, 31 December 2006 (censoring). Rates of first distant metastasis were calculated as the number of events (development of first distant metastasis) divided by total risk–time. These rates were modelled using flexible parametric survival models (FPMs) that use a restricted cubic spline function for the cumulative baseline hazard rate (Royston and Parmar, 2002; Lambert and Royston, 2009). The models estimate hazard ratios (HRs) with 95% confidence intervals (CIs) as the measure of association between exposures and outcome. For the baseline rate, we used a spline with five degrees of freedom (two knots at each boundary and four intermediate knots placed at quintiles of the distribution of events). The underlying timescale used in the analysis was time since diagnosis of breast cancer. From the FPM it is also possible to post-estimate the cumulative risk of developing metastasis in the presence of competing risks (Hinchliffe and Lambert, 2012). We estimated the cumulative risk of first distant metastasis within 5 years from diagnosis for various covariate patterns in the presence of the competing event death due to causes other than breast cancer. We further estimated the cumulative risk of metastasis during the period of 5–10 years after diagnosis, conditional upon surviving and being metastasis-free at 5 years (that is, conditional probabilities). Variables were categorised as follows: age at breast cancer diagnosis (⩽50, 51–60, 61–74 years), calendar period at breast cancer diagnosis (1990–1994, 1995–1999, 2000–2006), tumour size (⩽20 mm, >20 mm), lymph node status (positive/negative), ER status (ER-positive/negative) and treatment. Treatment was categorised as local treatment (surgery without adjuvant treatment; surgery with radiotherapy only (RT)) and systemic treatment (any combination with either chemotherapy (CT) or hormone therapy (HT)). Firstly, we estimated HRs by age and tumour characteristics using non-proportional hazards models, which allow the HR to vary over follow-up (that is, time-dependent effects). Secondly, we estimated 5-year cumulative risks of developing a first distant metastasis in the presence of the competing event death due to other causes. As cumulative risk is an absolute measure, results are shown for a given set of characteristics (covariate patterns). We also report cumulative risks during 5–10 years after diagnosis conditional on surviving 5 years, answering the question: what is the probability that a patient who has survived 5 years without metastasis will develop a first distant metastasis in the coming years? The data were analysed using Stata Intercooled 12.0 (StataCorp. 2009, Stata Statistical Software: Release 12. College Station, TX, USA: StataCorp LP) and R package version 2.15.1 for calculating CIFs 5–10 years.
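The cumulative risks described above were post-estimated from the flexible parametric model in Stata (Hinchliffe and Lambert, 2012); there is no direct Python equivalent of that post-estimation, so the sketch below substitutes a nonparametric Aalen-Johansen estimator from the lifelines library to illustrate the same two quantities on simulated data: the 0-5-year cumulative incidence of metastasis in the presence of the competing event (death from other causes), and the conditional 5-10-year risk among patients alive and metastasis-free at 5 years. This is a stand-in technique, not the authors' method, and all numbers and variable names are hypothetical.

```python
import numpy as np
from lifelines import AalenJohansenFitter

rng = np.random.default_rng(0)

# Toy cohort: follow-up time in years, with event codes
# 0 = censored, 1 = first distant metastasis, 2 = death from other causes.
n = 2000
time = rng.exponential(scale=12.0, size=n)
event = rng.choice([0, 1, 2], size=n, p=[0.75, 0.15, 0.10])
event[time >= 10.0] = 0          # administrative censoring at 10 years
time = np.minimum(time, 10.0)    # (lifelines may warn about, and jitter, ties)

def cif_at(t_star, event_of_interest):
    """Aalen-Johansen cumulative incidence of one event type at time t_star."""
    ajf = AalenJohansenFitter()
    ajf.fit(time, event, event_of_interest=event_of_interest)
    cif = ajf.cumulative_density_.iloc[:, 0]
    return float(cif[cif.index <= t_star].iloc[-1])

F1_5, F1_10 = cif_at(5, 1), cif_at(10, 1)   # metastasis CIF at 5 and 10 years
F2_5 = cif_at(5, 2)                         # other-cause death CIF at 5 years

# Conditional 5-10-year risk among patients alive and metastasis-free at 5 years:
# (F1(10) - F1(5)) / (1 - F1(5) - F2(5))
conditional = (F1_10 - F1_5) / (1.0 - F1_5 - F2_5)
print(f"0-5y cumulative risk: {F1_5:.3f}; conditional 5-10y risk: {conditional:.3f}")
```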
Results: The mean age of the patients was 56.4 years at diagnosis. The tumour characteristics were distributed as follows: 63.5% of patients had lymph node-negative tumours, 69% had tumours of size 20 mm or less and 82.3% had ER-positive tumours (Table 1). In our cohort, 87.9% of women underwent systemic adjuvant treatment. The overall rate of first distant metastasis was 19.4 per 1000 person–years (95% CI: 18.2–20.6). Figure 1 shows rates of first distant metastasis in relation to time since breast cancer diagnosis for different age groups of patients and tumour characteristics. Women younger than 50 years at breast cancer diagnosis and women with ER-negative tumours, positive lymph nodes or tumours larger than 20 mm all showed a peak in the rate of first distant metastasis at about 2 years, in comparison with fairly stable rates in other subgroups of patients.
In particular, women with ER-negative tumours showed a sharp decrease in the rate of first distant metastasis after 2 years since breast cancer diagnosis, whereas women with ER-positive tumours showed rather stable rates of first distant metastasis from about 1 year up until 10 years since breast cancer diagnosis. Table 2 shows the frequency distribution of sites of first distant metastasis by time to first distant metastasis, subdivided into 0–2, 2–5 and 5–10 years since diagnosis. Overall, metastasis to the skeleton (32.5%) and multiple sites of metastasis (28.3%) were the most frequent presentations of distant metastasis within 10 years. No particular combination pattern of multiple sites of first distant metastasis was found in the cohort. The site distribution of first distant metastasis changed significantly (P<0.05) over time for the following sites: skeleton, CNS and liver. The proportion of first distant metastases to the skeleton increased from 29.9% of all first distant metastases in the first 2 years to 36.5% in the period of 5–10 years since breast cancer diagnosis. The proportion of CNS and liver metastases instead decreased from 6.8% and 15.4%, respectively, in the first 2 years to 1.8% and 8.0%, respectively, in 5–10 years since breast cancer diagnosis. Between 5 and 10 years since breast cancer diagnosis, 274 (27.5%) of all first distant metastases were diagnosed. Table 3 shows time-dependent HRs of developing first distant metastasis by age and tumour characteristics at 2, 5 and 10 years from breast cancer diagnosis. For women with positive lymph nodes, the hazard of developing metastasis was still significantly increased at 10 years after diagnosis (HR=2.6; 95% CI: 1.9–3.5) compared with women with negative lymph nodes. Women with ER-negative tumours had an increased hazard at 2 (HR=2.7; 95% CI: 2.2–3.3) and 5 (HR=1.4; 95% CI: 1.1–1.7) years from breast cancer diagnosis compared with women with ER-positive tumours; at 10 years from diagnosis, the HR was no longer significantly increased (HR=0.9; 95% CI: 0.6–1.4). A tumour larger than 20 mm at breast cancer diagnosis was still associated with an increased hazard of developing first distant metastasis at 10 years (HR=1.5; 95% CI: 1.1–2.0). Table 4 shows cumulative risks of developing first distant metastasis within 5 years of diagnosis, and during 5–10 years among those surviving 5 years without metastasis, according to all covariate patterns. Among those with an adjuvant treatment combination including CT and/or HT, the highest risk of first distant metastasis in the period of 0–5 years was found in patients of 50 years of age or less, with tumour size >20 mm, positive lymph nodes and ER-negative tumours at breast cancer diagnosis (cumulative risk=0.45; 95% CI: 0.38–0.53), whereas the lowest risk was among patients aged 51–74 years with negative lymph nodes, tumour size ⩽20 mm and ER-positive tumours at breast cancer diagnosis (cumulative risk=0.03; 95% CI: 0.02–0.04). For the same two groups of patients in the period of 5–10 years following diagnosis (that is, among those who survived metastasis-free until 5 years), the risk dropped (cumulative risk=0.09; 95% CI: 0.05–0.17) for the group originally at the highest risk, whereas it remained the same (cumulative risk=0.03; 95% CI: 0.02–0.04) for the group at the lowest risk.
The risk at 5–10 years following diagnosis was similar to the risk for the period of 0–5 years in all women with ER-positive tumours and negative lymph nodes, regardless of treatment, tumour size and age. This was also true for women with ER-positive tumours and positive lymph nodes, if the tumour size at diagnosis was ⩽20 mm. For all patients with ER-negative tumours, by contrast, the risk at 5–10 years following diagnosis was significantly lower than the risk for the period of 0–5 years, regardless of age, tumour size and nodal status. Women treated with systemic adjuvant treatment (that is, CT and/or HT) always had lower risks compared with women treated with surgery only, with or without RT. Discussion: We found that the site distribution of first distant metastasis over time changed significantly for metastasis to the skeleton, CNS and liver. The risk of developing distant metastasis within 10 years of breast cancer diagnosis varied significantly across subgroups of patients. The risk remained non-negligible up to 10 years since diagnosis, particularly among women with positive lymph nodes. The risk was particularly high among patients with ER-negative tumours within the first 5 years of diagnosis, whereas it significantly decreased after 5 years since diagnosis. By contrast, the risk of developing distant metastasis remained rather stable for most subgroups of patients with ER-positive tumours, independent of age and other tumour characteristics. In our cohort, one-third of first distant metastases were diagnosed in the skeleton. This proportion significantly increased over time since diagnosis, whereas the proportion of metastases to the liver and CNS significantly decreased. This seems to reflect the natural history of distant recurrences, as women with ER-positive tumours more often develop metastasis to the skeleton later during follow-up, whereas women with ER-negative tumours more often develop early liver and CNS metastasis (Kennecke et al, 2010). Interestingly, the risk of developing distant metastasis over time since diagnosis mirrors the pattern observed for breast cancer-specific mortality that has been previously studied in this same cohort: lymph node status and tumour size at diagnosis are significant prognosticators of distant recurrence, as well as of breast cancer death, at 10 years since breast cancer diagnosis, whereas ER status is not. These findings confirm that distant metastases still indicate a very poor prognosis in breast cancer patients (Hilsenbeck et al, 1998; Louwman et al, 2008; Colzani et al, 2011). Cumulative risk estimates of developing distant metastasis were obtained for specific subgroups of patients while taking into account the competing risk of dying from other causes and allowing time-dependent effects (interaction with time since diagnosis) for age and tumour characteristics. The use of cumulative risk as a function of time is of relevant clinical value, as it allows quantitative estimation of the actual probability of developing distant metastasis for any given subgroup of breast cancer patients at different time points. This measurement may help clinicians to better estimate the individual risk of developing first distant metastasis during the first 5 years as well as 5–10 years after diagnosis.
One of the main messages from this study is that the risk of developing distant metastasis carries over to the second 5 years of follow-up for most metastasis-free patients, particularly those with lymph node-positive tumours, where the risk at 5–10 years after diagnosis is always 8% or higher. In addition, for some subgroups of patients with ER-positive tumours, the cumulative risk of developing distant metastasis in the periods of 0–5 and 5–10 years is essentially the same. Patients with ER-positive and lymph node-negative tumours show no change and very low risk over time. This could be explained by a very good effect of treatment, or alternatively even by overtreatment, since patients undergoing local treatment show a similarly low risk. More clinical attention should, however, be given to other subgroups, in which follow-up could be intensified and treatment could be improved. In particular, women with ER-positive, lymph node-positive, >20 mm tumours still have a more than 10% risk of developing a first distant metastasis in the period of 5–10 years after diagnosis. Of note, all subgroups of patients with ER-negative tumours show a sharp, significant decrease in the risk of developing distant metastasis in the period of 5–10 years following diagnosis compared with the period of 0–5 years, independent of age and other tumour characteristics. The American Society of Clinical Oncology (ASCO) recently concluded, in a review of the clinical practice guidelines for primary breast cancer follow-up, that there is currently no need to update the guidelines (Khatcheressian et al, 2013). Although evidence supporting a change of current practice is rather weak (Robertson et al, 2011), following future improvements in the prevention and treatment of metastatic breast cancer, differential follow-up of patients with ER-positive and ER-negative tumours could be considered, given their remarkably different risks of distant spread. This study has some relevant strengths. We used a large cohort of women followed up to 10 years with accurate and complete information, enabling us to apply a comprehensive design and methodology. We analysed the risk of developing distant metastasis from several perspectives, providing a thorough picture of the topic: the proportion of first distant metastases at different sites, and the relative (HR) and absolute (cumulative) risk of developing metastasis at different follow-up times according to patient and tumour characteristics, taking competing risks into account and allowing the main effects to vary over time. This paper has some limitations as well. The date of diagnosis of distant metastasis might depend on the timing of clinical work-ups and the type of follow-up. In addition, the site of first distant metastasis could be affected by detection bias. As in all studies requiring a long follow-up, the estimated cumulative risk of first distant metastasis might not reflect current risk, as it was observed in women diagnosed between 1995 and 1999. In particular, adjuvant treatment has changed, and we do not know whether the same risk patterns are observed in recently diagnosed patients: for instance, aromatase inhibitors have been widely used instead of tamoxifen since the early 2000s, and high-risk patients with ER-positive tumours are today offered extended endocrine therapy up to 10 years (Burstein et al, 2010).
In conclusion, there is still a clinically relevant risk of developing first distant metastasis from 5 to 10 years after breast cancer diagnosis in several groups of patients, especially those with positive lymph nodes at diagnosis. Patients with negative lymph nodes and ER-positive tumours, unlike those with ER-negative tumours, have a very similar low risk of developing first distant metastasis in the first 5 years and in the second 5 years of follow-up, independent of age, other tumour characteristics and competing risks of dying due to other causes. Upcoming improvements in metastasis prevention and treatment should elicit further research aimed at identifying specific clinical follow-up strategies for different subgroups of breast cancer patients. Five-year metastasis-free survival may no longer be an appropriate outcome indicator for breast cancer patients, particularly for those with ER-positive tumours.
Background: Metastatic breast cancer is a severe condition without curative treatment. How the relative and absolute risk of distant metastasis varies over time since diagnosis, as a function of treatment, age and tumour characteristics, has not been studied in detail. Methods: A total of 9514 women under the age of 75 when diagnosed with breast cancer in the Stockholm and Gotland regions during 1990-2006 were followed up for metastasis (mean follow-up=5.7 years). Time-dependent development of distant metastasis was analysed using flexible parametric survival models and presented as hazard ratio (HR) and cumulative risk. Results: A total of 995 (10.4%) patients developed distant metastasis; the most common sites were skeleton (32.5%) and multiple sites (28.3%). Women younger than 50 years at diagnosis, with lymph node-positive, oestrogen receptor (ER)-negative, >20 mm tumours and treated only locally, had the highest risk of distant metastasis (0-5 years' cumulative risk=0.55; 95% confidence interval (CI): 0.47-0.64). Women older than 50 years at diagnosis, with ER-positive, lymph node-negative and ≤20-mm tumours, had the same, and the lowest, cumulative risk of developing metastasis at 0-5 and 5-10 years (cumulative risk=0.03; 95% CI: 0.02-0.04). In the period of 5-10 years after diagnosis, women with ER-positive, lymph node-positive and >20-mm tumours were at the highest risk of distant recurrence. Women with ER-negative tumours showed a decline in risk during this period. Conclusions: Our data show no support for discontinuing clinical follow-up at 5 years in breast cancer patients and suggest further investigation of differential clinical follow-up for different subgroups of patients.
null
null
5,902
349
[ 70, 541, 16, 588 ]
7
[ "metastasis", "diagnosis", "years", "distant", "cancer", "distant metastasis", "breast cancer", "breast", "risk", "10" ]
[ "breast cancer record", "breast cancer diagnoses", "patient stockholm breast", "swedish cancer register", "cancer register sbcr" ]
null
null
null
null
null
null
[CONTENT] breast cancer | distant metastasis | risk | survival analysis | tumour characteristics | competing risk [SUMMARY]
null
[CONTENT] breast cancer | distant metastasis | risk | survival analysis | tumour characteristics | competing risk [SUMMARY]
null
null
null
[CONTENT] Aged | Antineoplastic Agents | Breast Neoplasms | Cohort Studies | Female | Humans | Middle Aged | Neoplasm Metastasis | Neoplasm Recurrence, Local | Risk | Sweden | Time Factors [SUMMARY]
null
[CONTENT] Aged | Antineoplastic Agents | Breast Neoplasms | Cohort Studies | Female | Humans | Middle Aged | Neoplasm Metastasis | Neoplasm Recurrence, Local | Risk | Sweden | Time Factors [SUMMARY]
null
null
null
[CONTENT] breast cancer record | breast cancer diagnoses | patient stockholm breast | swedish cancer register | cancer register sbcr [SUMMARY]
null
[CONTENT] breast cancer record | breast cancer diagnoses | patient stockholm breast | swedish cancer register | cancer register sbcr [SUMMARY]
null
null
null
[CONTENT] metastasis | diagnosis | years | distant | cancer | distant metastasis | breast cancer | breast | risk | 10 [SUMMARY]
null
[CONTENT] metastasis | diagnosis | years | distant | cancer | distant metastasis | breast cancer | breast | risk | 10 [SUMMARY]
null
null
null
[CONTENT] years | metastasis | tumours | diagnosis | 95 ci | ci | distant | distant metastasis | risk | 10 [SUMMARY]
null
[CONTENT] metastasis | diagnosis | years | cancer | distant | distant metastasis | breast | breast cancer | risk | patients [SUMMARY]
null
null
null
[CONTENT] 995 | 10.4% | 32.5% | 28.3% ||| 50 years | 20 mm | 0-5 years' | 0.55 | 95% | CI | 0.47 ||| older than 50 years | ER | 0 | 5-10 years | 95% | CI | 0.02-0.04 ||| 5-10 years | ER | 20-mm ||| ER [SUMMARY]
null
[CONTENT] ||| ||| 9514 | the age of 75 | Stockholm | Gotland | 1990-2006 | years ||| ||| 995 | 10.4% | 32.5% | 28.3% ||| 50 years | 20 mm | 0-5 years' | 0.55 | 95% | CI | 0.47 ||| older than 50 years | ER | 0 | 5-10 years | 95% | CI | 0.02-0.04 ||| 5-10 years | ER | 20-mm ||| ER ||| 5 years [SUMMARY]
null
A retrospective clinical study of patients with pregnancy-associated breast cancer among multiple centers in China (CSBrS-008).
34553700
Pregnancy-associated breast cancer (PABC) is a special type of breast cancer that occurs during pregnancy and within 1 year after childbirth. With the rapid social development and the adjustment of reproductive policies in China, the average age of females at first childbirth is increasing, which is expected to lead to an increase in the incidence of PABC. This study aimed to accumulate clinical experience and to investigate and summarize the prevalence, diagnosis, and treatment of PABC based on a large multicenter sample in China.
BACKGROUND
According to the Chinese Society of Breast Surgery, a total of 164 patients with PABC in 27 hospitals from January 2016 to December 2018 were identified. The pregnancy status, clinicopathological features, comprehensive treatment methods, and outcomes were retrospectively analyzed. Survival curves were plotted using the Kaplan-Meier method.
METHODS
A total of 164 patients with PABC accounted for 0.30% of all breast cancer cases treated in the same period; of these, 83 patients were diagnosed during pregnancy and 81 during lactation. The median age at PABC diagnosis was 33 years (24-47 years). Stage I patients accounted for 9.1% (15/164), stage II 54.9% (90/164), stage III 24.4% (40/164), and stage IV 2.4% (4/164). Luminal A disease accounted for 9.1% (15/164) of patients, and luminal B was the most common subtype (43.3% [71/164]). About 15.2% (25/164) of patients had human epidermal growth factor receptor 2 (Her-2)-overexpressing disease and 18.9% (31/164) had triple-negative breast cancer. Among patients with breast cancer diagnosed during pregnancy, 36.1% (30/83) underwent direct surgery and 20.5% (17/83) received chemotherapy during pregnancy; 31.3% (26/83) chose abortion or induction of labor. The median follow-up time was 36 months (3-59 months); 11.0% (18/164) of patients had local recurrence or distant metastasis and 3.0% (5/164) died.
RESULTS
Standardized surgery and chemotherapy are safe and feasible for patients with PABC.
CONCLUSIONS
[ "Adult", "Breast Neoplasms", "China", "Female", "Humans", "Neoplasm Recurrence, Local", "Pregnancy", "Pregnancy Complications, Neoplastic", "Prognosis", "Retrospective Studies" ]
8478375
Introduction
Pregnancy-associated breast cancer (PABC) is a special type of breast cancer that occurs during pregnancy and within 1 year after childbirth. PABC can be divided into PABC during pregnancy and postpartum PABC.[1] PABC accounts for 0.2% to 3.8% of all breast cancers, including 10% to 20% of all breast cancers in patients <30 years.[2–4] With the rapid social development and the adjustment of reproductive policies in China, the average age of females at first childbirth is increasing, which is expected to lead to an increase in the incidence of PABC. In this study, a total of 164 cases of PABC from 27 hospitals were collected through the Chinese Society of Breast Surgery to investigate the prevalence of PABC in China and to accumulate and summarize experience with its clinical diagnosis and treatment.
Statistical methods
The results were analyzed using IBM SPSS Statistics 20 (IBM Corp., Armonk, NY, USA). Survival curves were plotted using the Kaplan-Meier method.
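As a hypothetical illustration of the survival analysis described here, the sketch below fits Kaplan-Meier curves for two subgroups and compares them using the Python lifelines library rather than SPSS. The paper does not name its comparison test, so the log-rank test is an assumption (it is the usual companion to Kaplan-Meier curves), and all data and column names are invented.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented stand-in for the study data: follow-up in months and a DFS event
# flag (1 = relapse or metastasis), split by period of diagnosis.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "months": rng.integers(3, 60, size=164),
    "event": rng.choice([0, 1], size=164, p=[0.89, 0.11]),
    "group": ["pregnancy"] * 83 + ["lactation"] * 81,
})

ax = plt.subplot(111)
for name, sub in df.groupby("group"):
    kmf = KaplanMeierFitter()
    kmf.fit(sub["months"], event_observed=sub["event"], label=name)
    kmf.plot_survival_function(ax=ax)  # step curve with a CI band

preg, lact = df[df.group == "pregnancy"], df[df.group == "lactation"]
result = logrank_test(preg["months"], lact["months"],
                      event_observed_A=preg["event"],
                      event_observed_B=lact["event"])
print(f"log-rank P = {result.p_value:.3f}")
ax.set_xlabel("Months since diagnosis")
ax.set_ylabel("DFS probability")
plt.show()
```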
Results
Clinical and pathological characteristics A total of 164 patients with a median age of 33 years (24–47 years) were included in this study. Among them, 15 (8.2%) had a family history of malignancies and ten (5.4%) had a family history of breast cancer. There were six patients (3.7%) in stage 0, 15 (9.1%) in stage I, 51 (31.1%) in stage IIa, 39 (23.8%) in stage IIb, 27 (16.5%) in stage IIIa, 8 (4.9%) in stage IIIb, five (3.0%) in stage IIIc, four (2.4%) in stage IV, and nine (2.4%) in an unknown stage. There were 54 cases (33.0%) of N0, 71 cases (43.3%) of N1, 19 cases (11.6%) of N2, and five cases (3.0%) of N3. Invasive ductal carcinoma of the breast was the most common pathological type, accounting 71.3% (117/164). There were two cases (1.2%) of histological grade G1, 51 (31.1%) of G2, and 49 (30.0%) of G3. There were 15 cases (9.1%) of luminal A type, 48 (29.3%) of luminal B type (Her2−), 23 (14.0%) of luminal B type (Her2+), 25 (15.2%) of human epidermal growth factor receptor 2 (Her-2)-positive breast cancer, and 31 (18.9%) of triple-negative breast cancer (TNBC). The clinicopathological characteristics of the patients are summarized in Table 1. The clinicopathological characteristics of the 164 patients with PABC. ER: Estrogen receptor; PABC: Pregnancy-associated breast cancer; PR: Partial remission; TNBC: Triple-negative breast cancer. A total of 164 patients with a median age of 33 years (24–47 years) were included in this study. Among them, 15 (8.2%) had a family history of malignancies and ten (5.4%) had a family history of breast cancer. There were six patients (3.7%) in stage 0, 15 (9.1%) in stage I, 51 (31.1%) in stage IIa, 39 (23.8%) in stage IIb, 27 (16.5%) in stage IIIa, 8 (4.9%) in stage IIIb, five (3.0%) in stage IIIc, four (2.4%) in stage IV, and nine (2.4%) in an unknown stage. There were 54 cases (33.0%) of N0, 71 cases (43.3%) of N1, 19 cases (11.6%) of N2, and five cases (3.0%) of N3. Invasive ductal carcinoma of the breast was the most common pathological type, accounting 71.3% (117/164). There were two cases (1.2%) of histological grade G1, 51 (31.1%) of G2, and 49 (30.0%) of G3. There were 15 cases (9.1%) of luminal A type, 48 (29.3%) of luminal B type (Her2−), 23 (14.0%) of luminal B type (Her2+), 25 (15.2%) of human epidermal growth factor receptor 2 (Her-2)-positive breast cancer, and 31 (18.9%) of triple-negative breast cancer (TNBC). The clinicopathological characteristics of the patients are summarized in Table 1. The clinicopathological characteristics of the 164 patients with PABC. ER: Estrogen receptor; PABC: Pregnancy-associated breast cancer; PR: Partial remission; TNBC: Triple-negative breast cancer. Outcomes of patients with PABC during pregnancy The 83 patients with PABC during pregnancy included 14 patients in early pregnancy, 38 in mid-pregnancy, and 31 in late pregnancy. All patients with PABC in early pregnancy chose surgical or medical abortion to terminate their pregnancy. Among the patients with PABC in mid-pregnancy, 12 patients chose medical abortion and 26 delivered successfully. All patients with PABC in late pregnancy delivered successfully. All neonates had normal Apgar scores, indicating good development, with 23 neonates weighing >2500 g and 21 neonates weighing <2500 g. The 83 patients with PABC during pregnancy included 14 patients in early pregnancy, 38 in mid-pregnancy, and 31 in late pregnancy. All patients with PABC in early pregnancy chose surgical or medical abortion to terminate their pregnancy. 
Among the patients with PABC in mid-pregnancy, 12 patients chose medical abortion and 26 delivered successfully. All patients with PABC in late pregnancy delivered successfully. All neonates had normal Apgar scores, indicating good development, with 23 neonates weighing >2500 g and 21 neonates weighing <2500 g. Treatment for PABC during pregnancy Among the 83 patients with PABC during pregnancy, 30 (36.1%) underwent surgical treatment during pregnancy, including eight who received breast-conserving surgery and 22 who received modified radical mastectomy; 17 (20.5%) received chemotherapy during pregnancy, including six who received neoadjuvant chemotherapy and 11 who received adjuvant chemotherapy (eight received weekly chemotherapy with paclitaxel and nine received chemotherapy with epirubicin + cyclophosphamide). Among the six patients who received neoadjuvant chemotherapy, one patient with progressive disease after initial chemotherapy received direct surgical treatment, three patients with stable disease after initial chemotherapy achieved partial remission after receiving a different chemotherapy regimen, and two patients had partial remission after initial chemotherapy. Among the 17 patients who received chemotherapy during pregnancy, recurrence and metastasis occurred in four patients, including two who received adjuvant chemotherapy and two who received neoadjuvant chemotherapy. The treatment regimens are shown in Table 2. Treatment conditions of 17 breast cancer patients accepting chemotherapy during pregnancy period. ddEC: Dose-dense epirubicin and cyclophosphamide; EC: Epirubicin + cyclophosphamide; EC-wP: Epirubicin + cyclophosphamide-weekly paclitaxel; IDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; PD: Progressive disease; PR: Partial remission; SD: Stable disease; TNBC: Triple-negative breast cancer. Among the 83 patients with PABC during pregnancy, 30 (36.1%) underwent surgical treatment during pregnancy, including eight who received breast-conserving surgery and 22 who received modified radical mastectomy; 17 (20.5%) received chemotherapy during pregnancy, including six who received neoadjuvant chemotherapy and 11 who received adjuvant chemotherapy (eight received weekly chemotherapy with paclitaxel and nine received chemotherapy with epirubicin + cyclophosphamide). Among the six patients who received neoadjuvant chemotherapy, one patient with progressive disease after initial chemotherapy received direct surgical treatment, three patients with stable disease after initial chemotherapy achieved partial remission after receiving a different chemotherapy regimen, and two patients had partial remission after initial chemotherapy. Among the 17 patients who received chemotherapy during pregnancy, recurrence and metastasis occurred in four patients, including two who received adjuvant chemotherapy and two who received neoadjuvant chemotherapy. The treatment regimens are shown in Table 2. Treatment conditions of 17 breast cancer patients accepting chemotherapy during pregnancy period. ddEC: Dose-dense epirubicin and cyclophosphamide; EC: Epirubicin + cyclophosphamide; EC-wP: Epirubicin + cyclophosphamide-weekly paclitaxel; IDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; PD: Progressive disease; PR: Partial remission; SD: Stable disease; TNBC: Triple-negative breast cancer. Follow-up Follow-up visits were performed for 164 PABC patients. The median follow-up duration was 36 months (3–59 months). 
Among the patients, 18 were lost to follow-up, with a loss to follow-up rate of 11.0%. Among the patients with PABC during pregnancy, two had local recurrence, six had distant metastasis (including two cases of liver metastasis, three cases of bone metastasis, and one case of skin metastasis), and three died. Among the patients with PABC during lactation, four had local recurrence, six had metastasis (including one case of brain metastasis, one case of adrenal metastasis, one case of lumbar metastasis, one case of liver metastasis, and two cases of liver metastasis with bone metastasis), and two died. There were a total of 18 cases (11.0%) of local recurrence and distant metastasis and five cases (3.0%) of death in the 164 patients with PABC. The follow-up data are shown in Tables 3 and 4. Follow-up results of 164 patients with PABC. PABC: Pregnancy-associated breast cancer. Follow-up of 23 cases of recurrence, metastasis, and death. IDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; TNBC: Triple-negative breast cancer. The Kaplan-Meier survival curves of the patients with PABC during pregnancy and those of the patients with PABC during lactation were compared, and the P value was 0.525 for DFS and 0.346 for OS. In the comparison of patients with Ki-67 <20%, patients with Ki-67 ≥20% and < 40%, and patients with Ki-67 ≥40%, the P value was 0.179 for DFS and 0.391 for OS. Similarly, in the comparative analysis of Her-2-positive patients and Her-2-negative patients, the P value was 0.638 for DFS and 0.364 for OS. The survival curves are shown in Figures 1–3. In addition, the Kaplan-Meier survival curves of the 83 patients with PABC during pregnancy who terminated their pregnancy and those of patients who continued their pregnancy [Figure 4] were compared, and the P value was 0.138 for DFS and 0.910 for OS, with no significant difference. The OS and DFS analysis of the patients with PABC during pregnancy and those of the patients with PABC during lactation. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer. The OS and DFS analysis of the subgroups with different Ki-67 index. DFS: Disease free survival; OS: Overall survival. The OS and DFS analysis of the Her2+ and Her2− subgroups. DFS: Disease free survival; OS: Overall survival. The OS and DFS analysis of the 83 patients with PABC during pregnancy who terminated their pregnancy and those of patients who continued their pregnancy. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer. A total of 85 children of 138 patients with PABC during pregnancy or lactation who delivered successfully were followed up for development. The children included 39 boys and 46 girls with an average age of 29 months (19–72 months). Their heights and weights were all within the normal ranges. Follow-up visits were performed for 164 PABC patients. The median follow-up duration was 36 months (3–59 months). Among the patients, 18 were lost to follow-up, with a loss to follow-up rate of 11.0%. Among the patients with PABC during pregnancy, two had local recurrence, six had distant metastasis (including two cases of liver metastasis, three cases of bone metastasis, and one case of skin metastasis), and three died. 
[ "Clinical data A total of 60,998 newly diagnosed breast cancer patients were treated at 27 hospitals nationwide between January 2016 and December 2018, including 164 patients with pathologically confirmed PABC, accounting for 0.3% of the total number of breast cancer cases during the same period. Among these PABC cases, 83 occurred during pregnancy, including 14 cases during early pregnancy (<12 weeks of gestation), 38 cases during mid-pregnancy (12–28 weeks of gestation), and 31 cases during late pregnancy (>28 weeks of gestation) and 81 occurred during lactation.\nA total of 60,998 newly diagnosed breast cancer patients were treated at 27 hospitals nationwide between January 2016 and December 2018, including 164 patients with pathologically confirmed PABC, accounting for 0.3% of the total number of breast cancer cases during the same period. Among these PABC cases, 83 occurred during pregnancy, including 14 cases during early pregnancy (<12 weeks of gestation), 38 cases during mid-pregnancy (12–28 weeks of gestation), and 31 cases during late pregnancy (>28 weeks of gestation) and 81 occurred during lactation.\nFollow-up Follow-up visits were conducted via outpatient visits or telephonic interviews. The follow-up deadline was November 31, 2020, and the median follow-up duration was 36 months (3–59 months). Survival curves were plotted based on the follow-up data. Disease-free survival (DFS) was defined as the time from the pathological diagnosis of PABC to relapse or metastasis. Overall survival (OS) was defined as the interval between the pathological diagnosis of PABC and the date of death or the last follow-up.\nFollow-up visits were conducted via outpatient visits or telephonic interviews. The follow-up deadline was November 31, 2020, and the median follow-up duration was 36 months (3–59 months). Survival curves were plotted based on the follow-up data. Disease-free survival (DFS) was defined as the time from the pathological diagnosis of PABC to relapse or metastasis. Overall survival (OS) was defined as the interval between the pathological diagnosis of PABC and the date of death or the last follow-up.\nStatistical methods The results were analyzed using IBM Statistics SPSS 20 (IBM Corp., Armonk, NY, USA). Survival curves were plotted using the Kaplan-Meier method.\nThe results were analyzed using IBM Statistics SPSS 20 (IBM Corp., Armonk, NY, USA). Survival curves were plotted using the Kaplan-Meier method.", "A total of 60,998 newly diagnosed breast cancer patients were treated at 27 hospitals nationwide between January 2016 and December 2018, including 164 patients with pathologically confirmed PABC, accounting for 0.3% of the total number of breast cancer cases during the same period. Among these PABC cases, 83 occurred during pregnancy, including 14 cases during early pregnancy (<12 weeks of gestation), 38 cases during mid-pregnancy (12–28 weeks of gestation), and 31 cases during late pregnancy (>28 weeks of gestation) and 81 occurred during lactation.", "Follow-up visits were conducted via outpatient visits or telephonic interviews. The follow-up deadline was November 31, 2020, and the median follow-up duration was 36 months (3–59 months). Survival curves were plotted based on the follow-up data. Disease-free survival (DFS) was defined as the time from the pathological diagnosis of PABC to relapse or metastasis. 
Overall survival (OS) was defined as the interval between the pathological diagnosis of PABC and the date of death or the last follow-up.", "A total of 164 patients with a median age of 33 years (24–47 years) were included in this study. Among them, 15 (8.2%) had a family history of malignancies and ten (5.4%) had a family history of breast cancer. There were six patients (3.7%) in stage 0, 15 (9.1%) in stage I, 51 (31.1%) in stage IIa, 39 (23.8%) in stage IIb, 27 (16.5%) in stage IIIa, 8 (4.9%) in stage IIIb, five (3.0%) in stage IIIc, four (2.4%) in stage IV, and nine (2.4%) in an unknown stage. There were 54 cases (33.0%) of N0, 71 cases (43.3%) of N1, 19 cases (11.6%) of N2, and five cases (3.0%) of N3. Invasive ductal carcinoma of the breast was the most common pathological type, accounting 71.3% (117/164). There were two cases (1.2%) of histological grade G1, 51 (31.1%) of G2, and 49 (30.0%) of G3. There were 15 cases (9.1%) of luminal A type, 48 (29.3%) of luminal B type (Her2−), 23 (14.0%) of luminal B type (Her2+), 25 (15.2%) of human epidermal growth factor receptor 2 (Her-2)-positive breast cancer, and 31 (18.9%) of triple-negative breast cancer (TNBC). The clinicopathological characteristics of the patients are summarized in Table 1.\nThe clinicopathological characteristics of the 164 patients with PABC.\nER: Estrogen receptor; PABC: Pregnancy-associated breast cancer; PR: Partial remission; TNBC: Triple-negative breast cancer.", "Among the 83 patients with PABC during pregnancy, 30 (36.1%) underwent surgical treatment during pregnancy, including eight who received breast-conserving surgery and 22 who received modified radical mastectomy; 17 (20.5%) received chemotherapy during pregnancy, including six who received neoadjuvant chemotherapy and 11 who received adjuvant chemotherapy (eight received weekly chemotherapy with paclitaxel and nine received chemotherapy with epirubicin + cyclophosphamide). Among the six patients who received neoadjuvant chemotherapy, one patient with progressive disease after initial chemotherapy received direct surgical treatment, three patients with stable disease after initial chemotherapy achieved partial remission after receiving a different chemotherapy regimen, and two patients had partial remission after initial chemotherapy. Among the 17 patients who received chemotherapy during pregnancy, recurrence and metastasis occurred in four patients, including two who received adjuvant chemotherapy and two who received neoadjuvant chemotherapy. The treatment regimens are shown in Table 2.\nTreatment conditions of 17 breast cancer patients accepting chemotherapy during pregnancy period.\nddEC: Dose-dense epirubicin and cyclophosphamide; EC: Epirubicin + cyclophosphamide; EC-wP: Epirubicin + cyclophosphamide-weekly paclitaxel; IDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; PD: Progressive disease; PR: Partial remission; SD: Stable disease; TNBC: Triple-negative breast cancer.", "Follow-up visits were performed for 164 PABC patients. The median follow-up duration was 36 months (3–59 months). Among the patients, 18 were lost to follow-up, with a loss to follow-up rate of 11.0%. Among the patients with PABC during pregnancy, two had local recurrence, six had distant metastasis (including two cases of liver metastasis, three cases of bone metastasis, and one case of skin metastasis), and three died. 
Among the patients with PABC during lactation, four had local recurrence, six had metastasis (including one case of brain metastasis, one case of adrenal metastasis, one case of lumbar metastasis, one case of liver metastasis, and two cases of liver metastasis with bone metastasis), and two died. There were a total of 18 cases (11.0%) of local recurrence and distant metastasis and five cases (3.0%) of death in the 164 patients with PABC. The follow-up data are shown in Tables 3 and 4.\nFollow-up results of 164 patients with PABC.\nPABC: Pregnancy-associated breast cancer.\nFollow-up of 23 cases of recurrence, metastasis, and death.\nIDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; TNBC: Triple-negative breast cancer.\nThe Kaplan-Meier survival curves of the patients with PABC during pregnancy and those of the patients with PABC during lactation were compared, and the P value was 0.525 for DFS and 0.346 for OS. In the comparison of patients with Ki-67 <20%, patients with Ki-67 ≥20% and < 40%, and patients with Ki-67 ≥40%, the P value was 0.179 for DFS and 0.391 for OS. Similarly, in the comparative analysis of Her-2-positive patients and Her-2-negative patients, the P value was 0.638 for DFS and 0.364 for OS. The survival curves are shown in Figures 1–3. In addition, the Kaplan-Meier survival curves of the 83 patients with PABC during pregnancy who terminated their pregnancy and those of patients who continued their pregnancy [Figure 4] were compared, and the P value was 0.138 for DFS and 0.910 for OS, with no significant difference.\nThe OS and DFS analysis of the patients with PABC during pregnancy and those of the patients with PABC during lactation. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer.\nThe OS and DFS analysis of the subgroups with different Ki-67 index. DFS: Disease free survival; OS: Overall survival.\nThe OS and DFS analysis of the Her2+ and Her2− subgroups. DFS: Disease free survival; OS: Overall survival.\nThe OS and DFS analysis of the 83 patients with PABC during pregnancy who terminated their pregnancy and those of patients who continued their pregnancy. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer.\nA total of 85 children of 138 patients with PABC during pregnancy or lactation who delivered successfully were followed up for development. The children included 39 boys and 46 girls with an average age of 29 months (19–72 months). 
Their heights and weights were all within the normal ranges.", "Thanks to the following professors for providing data for the multicenter clinical study, Ke-Jin Wu (Obstetrics and Gynecology Hospital Affiliated to Fudan University, China), Xing-Song Tian (Shandong Provincial Hospital, China), Jin-Ping Liu (Sichuan Provincial People's Hospital, China), Zhi-Min Fan (Bethune First Hospital of Jilin University, China), Zhong-Wei Cao (People's Hospital of Inner Mongolia Autonomous Region, China), Chuan Wang (Union Hospital Affiliated to Fujian Medical University, China), Rui Ling (Xijing Hospital, China), Yin-Hua Liu (Peking University First Hospital, China), Jian-Dong Wang (General Hospital of Chinese people's Liberation Army, China), Jing-Hua Zhang (Tangshan People's Hospital, China), Zhi-Gang Yu (Second Hospital of Shandong University, China), Wei Zhu (Zhongshan Hospital Affiliated to Fudan University, China), De-Dian Chen (Yunnan Cancer Hospital, China), Yun-Jiang Liu (Fourth Hospital of Hebei Medical University, China), Da-Hua Mao (Wudang Hospital Affiliated to Guizhou Medical University, China), Li-Li Tang (Middle Hospital, Xiangya Hospital of Southern University, China), Shu Wang (People's Hospital of Peking University, China), Er-Wei Song (Sun Yat Sen Memorial Hospital of Sun Yat Sen University, China), Pei-Fen Fu (First Affiliated Hospital of Zhejiang University, China), Jiang-Hua Ou (Cancer Hospital of Xinjiang Medical University, China), Ke Liu (Cancer Hospital of Jilin Province, China), Yong-Hui Luo (Second Affiliated Hospital of Nanchang University, China), Xiang Qu (Beijing Friendship Hospital of Capital Medical University, China), Yi Zhao (Shengjing Hospital Affiliated to China Medical University, China), Jian Huang (The Second Affiliated Hospital of Medical College of Zhejiang University, China), Hong-Chuan Jiang (Beijing Chaoyang Hospital Affiliated to Capital Medical University, China), Rong Ma (Qilu Hospital Affiliated to Shandong University, China), Jian-Guo Zhang (The Second Affiliated Hospital of Harbin Medical University, China), Jun Jiang (Southwest Hospital Affiliated to the Third Military Medical University, China), Zhen-Zhen Liu (Henan Cancer Hospital Affiliated to Capital Medical University, China), Jian-Guo Zhang (Second Affiliated Hospital Affiliated to Harbin Medical University, China), Ai-Lin Song (Second Affiliated Hospital of Lanzhou University, China), and Qiang Zou (Huashan Hospital Affiliated to Fudan University, China)." ]
[ "methods", null, null, null, null, null, null ]
[ "Introduction", "Methods", "Clinical data", "Follow-up", "Statistical methods", "Results", "Clinical and pathological characteristics", "Outcomes of patients with PABC during pregnancy", "Treatment for PABC during pregnancy", "Follow-up", "Discussion", "Acknowledgements", "Conflicts of interest" ]
[ "Pregnancy-associated breast cancer (PABC) is a special type of breast cancer that occurs during pregnancy and within 1 year after childbirth. PABC can be divided into PABC during pregnancy and postpartum PABC.[1] PABC accounts for 0.2% to 3.8% of all breast cancers, including 10% to 20% of all breast cancers in patients <30 years.[2–4] With the rapid social development and the adjustment of reproductive policies in China, the average age of females at first childbirth is increasing, which is expected to lead to an increase in the incidence of PABC. A total of 164 cases of PABC in 27 hospitals were collected in this study through the Chinese Society of Breast Surgery to investigate the prevalence of PABC in China and to accumulate and summarize the experience with the clinical diagnosis and treatment of PABC in China.", "Clinical data A total of 60,998 newly diagnosed breast cancer patients were treated at 27 hospitals nationwide between January 2016 and December 2018, including 164 patients with pathologically confirmed PABC, accounting for 0.3% of the total number of breast cancer cases during the same period. Among these PABC cases, 83 occurred during pregnancy, including 14 cases during early pregnancy (<12 weeks of gestation), 38 cases during mid-pregnancy (12–28 weeks of gestation), and 31 cases during late pregnancy (>28 weeks of gestation) and 81 occurred during lactation.\nA total of 60,998 newly diagnosed breast cancer patients were treated at 27 hospitals nationwide between January 2016 and December 2018, including 164 patients with pathologically confirmed PABC, accounting for 0.3% of the total number of breast cancer cases during the same period. Among these PABC cases, 83 occurred during pregnancy, including 14 cases during early pregnancy (<12 weeks of gestation), 38 cases during mid-pregnancy (12–28 weeks of gestation), and 31 cases during late pregnancy (>28 weeks of gestation) and 81 occurred during lactation.\nFollow-up Follow-up visits were conducted via outpatient visits or telephonic interviews. The follow-up deadline was November 31, 2020, and the median follow-up duration was 36 months (3–59 months). Survival curves were plotted based on the follow-up data. Disease-free survival (DFS) was defined as the time from the pathological diagnosis of PABC to relapse or metastasis. Overall survival (OS) was defined as the interval between the pathological diagnosis of PABC and the date of death or the last follow-up.\nFollow-up visits were conducted via outpatient visits or telephonic interviews. The follow-up deadline was November 31, 2020, and the median follow-up duration was 36 months (3–59 months). Survival curves were plotted based on the follow-up data. Disease-free survival (DFS) was defined as the time from the pathological diagnosis of PABC to relapse or metastasis. Overall survival (OS) was defined as the interval between the pathological diagnosis of PABC and the date of death or the last follow-up.\nStatistical methods The results were analyzed using IBM Statistics SPSS 20 (IBM Corp., Armonk, NY, USA). Survival curves were plotted using the Kaplan-Meier method.\nThe results were analyzed using IBM Statistics SPSS 20 (IBM Corp., Armonk, NY, USA). 
Survival curves were plotted using the Kaplan-Meier method.", "A total of 60,998 newly diagnosed breast cancer patients were treated at 27 hospitals nationwide between January 2016 and December 2018, including 164 patients with pathologically confirmed PABC, accounting for 0.3% of the total number of breast cancer cases during the same period. Among these PABC cases, 83 occurred during pregnancy, including 14 cases during early pregnancy (<12 weeks of gestation), 38 cases during mid-pregnancy (12–28 weeks of gestation), and 31 cases during late pregnancy (>28 weeks of gestation) and 81 occurred during lactation.", "Follow-up visits were conducted via outpatient visits or telephonic interviews. The follow-up deadline was November 31, 2020, and the median follow-up duration was 36 months (3–59 months). Survival curves were plotted based on the follow-up data. Disease-free survival (DFS) was defined as the time from the pathological diagnosis of PABC to relapse or metastasis. Overall survival (OS) was defined as the interval between the pathological diagnosis of PABC and the date of death or the last follow-up.", "The results were analyzed using IBM Statistics SPSS 20 (IBM Corp., Armonk, NY, USA). Survival curves were plotted using the Kaplan-Meier method.", "Clinical and pathological characteristics A total of 164 patients with a median age of 33 years (24–47 years) were included in this study. Among them, 15 (8.2%) had a family history of malignancies and ten (5.4%) had a family history of breast cancer. There were six patients (3.7%) in stage 0, 15 (9.1%) in stage I, 51 (31.1%) in stage IIa, 39 (23.8%) in stage IIb, 27 (16.5%) in stage IIIa, 8 (4.9%) in stage IIIb, five (3.0%) in stage IIIc, four (2.4%) in stage IV, and nine (2.4%) in an unknown stage. There were 54 cases (33.0%) of N0, 71 cases (43.3%) of N1, 19 cases (11.6%) of N2, and five cases (3.0%) of N3. Invasive ductal carcinoma of the breast was the most common pathological type, accounting 71.3% (117/164). There were two cases (1.2%) of histological grade G1, 51 (31.1%) of G2, and 49 (30.0%) of G3. There were 15 cases (9.1%) of luminal A type, 48 (29.3%) of luminal B type (Her2−), 23 (14.0%) of luminal B type (Her2+), 25 (15.2%) of human epidermal growth factor receptor 2 (Her-2)-positive breast cancer, and 31 (18.9%) of triple-negative breast cancer (TNBC). The clinicopathological characteristics of the patients are summarized in Table 1.\nThe clinicopathological characteristics of the 164 patients with PABC.\nER: Estrogen receptor; PABC: Pregnancy-associated breast cancer; PR: Partial remission; TNBC: Triple-negative breast cancer.\nA total of 164 patients with a median age of 33 years (24–47 years) were included in this study. Among them, 15 (8.2%) had a family history of malignancies and ten (5.4%) had a family history of breast cancer. There were six patients (3.7%) in stage 0, 15 (9.1%) in stage I, 51 (31.1%) in stage IIa, 39 (23.8%) in stage IIb, 27 (16.5%) in stage IIIa, 8 (4.9%) in stage IIIb, five (3.0%) in stage IIIc, four (2.4%) in stage IV, and nine (2.4%) in an unknown stage. There were 54 cases (33.0%) of N0, 71 cases (43.3%) of N1, 19 cases (11.6%) of N2, and five cases (3.0%) of N3. Invasive ductal carcinoma of the breast was the most common pathological type, accounting 71.3% (117/164). There were two cases (1.2%) of histological grade G1, 51 (31.1%) of G2, and 49 (30.0%) of G3. 
There were 15 cases (9.1%) of luminal A type, 48 (29.3%) of luminal B type (Her2−), 23 (14.0%) of luminal B type (Her2+), 25 (15.2%) of human epidermal growth factor receptor 2 (Her-2)-positive breast cancer, and 31 (18.9%) of triple-negative breast cancer (TNBC). The clinicopathological characteristics of the patients are summarized in Table 1.\nThe clinicopathological characteristics of the 164 patients with PABC.\nER: Estrogen receptor; PABC: Pregnancy-associated breast cancer; PR: Partial remission; TNBC: Triple-negative breast cancer.\nOutcomes of patients with PABC during pregnancy The 83 patients with PABC during pregnancy included 14 patients in early pregnancy, 38 in mid-pregnancy, and 31 in late pregnancy. All patients with PABC in early pregnancy chose surgical or medical abortion to terminate their pregnancy. Among the patients with PABC in mid-pregnancy, 12 patients chose medical abortion and 26 delivered successfully. All patients with PABC in late pregnancy delivered successfully. All neonates had normal Apgar scores, indicating good development, with 23 neonates weighing >2500 g and 21 neonates weighing <2500 g.\nThe 83 patients with PABC during pregnancy included 14 patients in early pregnancy, 38 in mid-pregnancy, and 31 in late pregnancy. All patients with PABC in early pregnancy chose surgical or medical abortion to terminate their pregnancy. Among the patients with PABC in mid-pregnancy, 12 patients chose medical abortion and 26 delivered successfully. All patients with PABC in late pregnancy delivered successfully. All neonates had normal Apgar scores, indicating good development, with 23 neonates weighing >2500 g and 21 neonates weighing <2500 g.\nTreatment for PABC during pregnancy Among the 83 patients with PABC during pregnancy, 30 (36.1%) underwent surgical treatment during pregnancy, including eight who received breast-conserving surgery and 22 who received modified radical mastectomy; 17 (20.5%) received chemotherapy during pregnancy, including six who received neoadjuvant chemotherapy and 11 who received adjuvant chemotherapy (eight received weekly chemotherapy with paclitaxel and nine received chemotherapy with epirubicin + cyclophosphamide). Among the six patients who received neoadjuvant chemotherapy, one patient with progressive disease after initial chemotherapy received direct surgical treatment, three patients with stable disease after initial chemotherapy achieved partial remission after receiving a different chemotherapy regimen, and two patients had partial remission after initial chemotherapy. Among the 17 patients who received chemotherapy during pregnancy, recurrence and metastasis occurred in four patients, including two who received adjuvant chemotherapy and two who received neoadjuvant chemotherapy. 
The treatment regimens are shown in Table 2.\nTreatment conditions of 17 breast cancer patients accepting chemotherapy during pregnancy period.\nddEC: Dose-dense epirubicin and cyclophosphamide; EC: Epirubicin + cyclophosphamide; EC-wP: Epirubicin + cyclophosphamide-weekly paclitaxel; IDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; PD: Progressive disease; PR: Partial remission; SD: Stable disease; TNBC: Triple-negative breast cancer.\nAmong the 83 patients with PABC during pregnancy, 30 (36.1%) underwent surgical treatment during pregnancy, including eight who received breast-conserving surgery and 22 who received modified radical mastectomy; 17 (20.5%) received chemotherapy during pregnancy, including six who received neoadjuvant chemotherapy and 11 who received adjuvant chemotherapy (eight received weekly chemotherapy with paclitaxel and nine received chemotherapy with epirubicin + cyclophosphamide). Among the six patients who received neoadjuvant chemotherapy, one patient with progressive disease after initial chemotherapy received direct surgical treatment, three patients with stable disease after initial chemotherapy achieved partial remission after receiving a different chemotherapy regimen, and two patients had partial remission after initial chemotherapy. Among the 17 patients who received chemotherapy during pregnancy, recurrence and metastasis occurred in four patients, including two who received adjuvant chemotherapy and two who received neoadjuvant chemotherapy. The treatment regimens are shown in Table 2.\nTreatment conditions of 17 breast cancer patients accepting chemotherapy during pregnancy period.\nddEC: Dose-dense epirubicin and cyclophosphamide; EC: Epirubicin + cyclophosphamide; EC-wP: Epirubicin + cyclophosphamide-weekly paclitaxel; IDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; PD: Progressive disease; PR: Partial remission; SD: Stable disease; TNBC: Triple-negative breast cancer.\nFollow-up Follow-up visits were performed for 164 PABC patients. The median follow-up duration was 36 months (3–59 months). Among the patients, 18 were lost to follow-up, with a loss to follow-up rate of 11.0%. Among the patients with PABC during pregnancy, two had local recurrence, six had distant metastasis (including two cases of liver metastasis, three cases of bone metastasis, and one case of skin metastasis), and three died. Among the patients with PABC during lactation, four had local recurrence, six had metastasis (including one case of brain metastasis, one case of adrenal metastasis, one case of lumbar metastasis, one case of liver metastasis, and two cases of liver metastasis with bone metastasis), and two died. There were a total of 18 cases (11.0%) of local recurrence and distant metastasis and five cases (3.0%) of death in the 164 patients with PABC. The follow-up data are shown in Tables 3 and 4.\nFollow-up results of 164 patients with PABC.\nPABC: Pregnancy-associated breast cancer.\nFollow-up of 23 cases of recurrence, metastasis, and death.\nIDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; TNBC: Triple-negative breast cancer.\nThe Kaplan-Meier survival curves of the patients with PABC during pregnancy and those of the patients with PABC during lactation were compared, and the P value was 0.525 for DFS and 0.346 for OS. In the comparison of patients with Ki-67 <20%, patients with Ki-67 ≥20% and < 40%, and patients with Ki-67 ≥40%, the P value was 0.179 for DFS and 0.391 for OS. 
Similarly, in the comparative analysis of Her-2-positive patients and Her-2-negative patients, the P value was 0.638 for DFS and 0.364 for OS. The survival curves are shown in Figures 1–3. In addition, the Kaplan-Meier survival curves of the 83 patients with PABC during pregnancy who terminated their pregnancy and those of patients who continued their pregnancy [Figure 4] were compared, and the P value was 0.138 for DFS and 0.910 for OS, with no significant difference.\nThe OS and DFS analysis of the patients with PABC during pregnancy and those of the patients with PABC during lactation. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer.\nThe OS and DFS analysis of the subgroups with different Ki-67 index. DFS: Disease free survival; OS: Overall survival.\nThe OS and DFS analysis of the Her2+ and Her2− subgroups. DFS: Disease free survival; OS: Overall survival.\nThe OS and DFS analysis of the 83 patients with PABC during pregnancy who terminated their pregnancy and those of patients who continued their pregnancy. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer.\nA total of 85 children of 138 patients with PABC during pregnancy or lactation who delivered successfully were followed up for development. The children included 39 boys and 46 girls with an average age of 29 months (19–72 months). Their heights and weights were all within the normal ranges.\nFollow-up visits were performed for 164 PABC patients. The median follow-up duration was 36 months (3–59 months). Among the patients, 18 were lost to follow-up, with a loss to follow-up rate of 11.0%. Among the patients with PABC during pregnancy, two had local recurrence, six had distant metastasis (including two cases of liver metastasis, three cases of bone metastasis, and one case of skin metastasis), and three died. Among the patients with PABC during lactation, four had local recurrence, six had metastasis (including one case of brain metastasis, one case of adrenal metastasis, one case of lumbar metastasis, one case of liver metastasis, and two cases of liver metastasis with bone metastasis), and two died. There were a total of 18 cases (11.0%) of local recurrence and distant metastasis and five cases (3.0%) of death in the 164 patients with PABC. The follow-up data are shown in Tables 3 and 4.\nFollow-up results of 164 patients with PABC.\nPABC: Pregnancy-associated breast cancer.\nFollow-up of 23 cases of recurrence, metastasis, and death.\nIDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; TNBC: Triple-negative breast cancer.\nThe Kaplan-Meier survival curves of the patients with PABC during pregnancy and those of the patients with PABC during lactation were compared, and the P value was 0.525 for DFS and 0.346 for OS. In the comparison of patients with Ki-67 <20%, patients with Ki-67 ≥20% and < 40%, and patients with Ki-67 ≥40%, the P value was 0.179 for DFS and 0.391 for OS. Similarly, in the comparative analysis of Her-2-positive patients and Her-2-negative patients, the P value was 0.638 for DFS and 0.364 for OS. The survival curves are shown in Figures 1–3. 
In addition, the Kaplan-Meier survival curves of the 83 patients with PABC during pregnancy who terminated their pregnancy and those of patients who continued their pregnancy [Figure 4] were compared, and the P value was 0.138 for DFS and 0.910 for OS, with no significant difference.\nThe OS and DFS analysis of the patients with PABC during pregnancy and those of the patients with PABC during lactation. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer.\nThe OS and DFS analysis of the subgroups with different Ki-67 index. DFS: Disease free survival; OS: Overall survival.\nThe OS and DFS analysis of the Her2+ and Her2− subgroups. DFS: Disease free survival; OS: Overall survival.\nThe OS and DFS analysis of the 83 patients with PABC during pregnancy who terminated their pregnancy and those of patients who continued their pregnancy. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer.\nA total of 85 children of 138 patients with PABC during pregnancy or lactation who delivered successfully were followed up for development. The children included 39 boys and 46 girls with an average age of 29 months (19–72 months). Their heights and weights were all within the normal ranges.", "A total of 164 patients with a median age of 33 years (24–47 years) were included in this study. Among them, 15 (8.2%) had a family history of malignancies and ten (5.4%) had a family history of breast cancer. There were six patients (3.7%) in stage 0, 15 (9.1%) in stage I, 51 (31.1%) in stage IIa, 39 (23.8%) in stage IIb, 27 (16.5%) in stage IIIa, 8 (4.9%) in stage IIIb, five (3.0%) in stage IIIc, four (2.4%) in stage IV, and nine (2.4%) in an unknown stage. There were 54 cases (33.0%) of N0, 71 cases (43.3%) of N1, 19 cases (11.6%) of N2, and five cases (3.0%) of N3. Invasive ductal carcinoma of the breast was the most common pathological type, accounting 71.3% (117/164). There were two cases (1.2%) of histological grade G1, 51 (31.1%) of G2, and 49 (30.0%) of G3. There were 15 cases (9.1%) of luminal A type, 48 (29.3%) of luminal B type (Her2−), 23 (14.0%) of luminal B type (Her2+), 25 (15.2%) of human epidermal growth factor receptor 2 (Her-2)-positive breast cancer, and 31 (18.9%) of triple-negative breast cancer (TNBC). The clinicopathological characteristics of the patients are summarized in Table 1.\nThe clinicopathological characteristics of the 164 patients with PABC.\nER: Estrogen receptor; PABC: Pregnancy-associated breast cancer; PR: Partial remission; TNBC: Triple-negative breast cancer.", "The 83 patients with PABC during pregnancy included 14 patients in early pregnancy, 38 in mid-pregnancy, and 31 in late pregnancy. All patients with PABC in early pregnancy chose surgical or medical abortion to terminate their pregnancy. Among the patients with PABC in mid-pregnancy, 12 patients chose medical abortion and 26 delivered successfully. All patients with PABC in late pregnancy delivered successfully. 
All neonates had normal Apgar scores, indicating good development, with 23 neonates weighing >2500 g and 21 neonates weighing <2500 g.", "Among the 83 patients with PABC during pregnancy, 30 (36.1%) underwent surgical treatment during pregnancy, including eight who received breast-conserving surgery and 22 who received modified radical mastectomy; 17 (20.5%) received chemotherapy during pregnancy, including six who received neoadjuvant chemotherapy and 11 who received adjuvant chemotherapy (eight received weekly chemotherapy with paclitaxel and nine received chemotherapy with epirubicin + cyclophosphamide). Among the six patients who received neoadjuvant chemotherapy, one patient with progressive disease after initial chemotherapy received direct surgical treatment, three patients with stable disease after initial chemotherapy achieved partial remission after receiving a different chemotherapy regimen, and two patients had partial remission after initial chemotherapy. Among the 17 patients who received chemotherapy during pregnancy, recurrence and metastasis occurred in four patients, including two who received adjuvant chemotherapy and two who received neoadjuvant chemotherapy. The treatment regimens are shown in Table 2.\nTreatment conditions of 17 breast cancer patients accepting chemotherapy during pregnancy period.\nddEC: Dose-dense epirubicin and cyclophosphamide; EC: Epirubicin + cyclophosphamide; EC-wP: Epirubicin + cyclophosphamide-weekly paclitaxel; IDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; PD: Progressive disease; PR: Partial remission; SD: Stable disease; TNBC: Triple-negative breast cancer.", "Follow-up visits were performed for 164 PABC patients. The median follow-up duration was 36 months (3–59 months). Among the patients, 18 were lost to follow-up, with a loss to follow-up rate of 11.0%. Among the patients with PABC during pregnancy, two had local recurrence, six had distant metastasis (including two cases of liver metastasis, three cases of bone metastasis, and one case of skin metastasis), and three died. Among the patients with PABC during lactation, four had local recurrence, six had metastasis (including one case of brain metastasis, one case of adrenal metastasis, one case of lumbar metastasis, one case of liver metastasis, and two cases of liver metastasis with bone metastasis), and two died. There were a total of 18 cases (11.0%) of local recurrence and distant metastasis and five cases (3.0%) of death in the 164 patients with PABC. The follow-up data are shown in Tables 3 and 4.\nFollow-up results of 164 patients with PABC.\nPABC: Pregnancy-associated breast cancer.\nFollow-up of 23 cases of recurrence, metastasis, and death.\nIDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; TNBC: Triple-negative breast cancer.\nThe Kaplan-Meier survival curves of the patients with PABC during pregnancy and those of the patients with PABC during lactation were compared, and the P value was 0.525 for DFS and 0.346 for OS. In the comparison of patients with Ki-67 <20%, patients with Ki-67 ≥20% and < 40%, and patients with Ki-67 ≥40%, the P value was 0.179 for DFS and 0.391 for OS. Similarly, in the comparative analysis of Her-2-positive patients and Her-2-negative patients, the P value was 0.638 for DFS and 0.364 for OS. The survival curves are shown in Figures 1–3. 
Discussion

PABC occurs during pregnancy and lactation. It affects the health of both the mother and the fetus and deserves extensive attention. The definition of PABC varies across studies. In a 2012 meta-analysis of 30 studies on PABC, eight studies defined PABC as breast cancer diagnosed during pregnancy, five defined it as breast cancer diagnosed during lactation, and 15 defined it as breast cancer newly occurring during pregnancy or within 1 year after childbirth.[5] A meta-analysis conducted in 2016 suggested extending the definition of PABC to include new occurrences of breast cancer during pregnancy and within 5 years after childbirth.[6] Currently, the mainstream definition of PABC is newly occurring breast cancer during pregnancy and within 1 year after childbirth,[7] which is the definition used in this clinical study.

The incidence of PABC is low (0.2%–3.8%) in foreign studies, and it ranks third among cancers commonly diagnosed during pregnancy and lactation.[8] A multicenter study involving 16 countries showed that PABC ranked first among cancers diagnosed during pregnancy.[9] No multicenter clinical study of PABC had previously been conducted in China. A large single-center retrospective study performed at Tianjin Medical University found that PABC accounted for 0.36% of all breast cancer cases.[10] The present study included 164 PABC patients from 27 hospitals nationwide, accounting for 0.3% of all breast cancer cases treated during the same period.

The median age of onset of PABC in this study was 33 years, consistent with a previous large-scale study.[11] The relationship between PABC and a family history of breast cancer remains inconclusive. In this study, 5.4% of the PABC patients had a family history of breast cancer, similar to the proportion of people with a family history of breast cancer in the general population, suggesting that a family history of breast cancer may not be associated with PABC.[12,13]

This study showed that patients in stages III and IV accounted for 26.8% (44/164) of the cohort.
Current studies suggest that late-stage disease is associated with delayed diagnosis and that the diagnosis of PABC is typically delayed by 1 to 13 months.[14] Hyperplasia of the mammary glands during pregnancy and lactation increases tissue density, making small masses undetectable by conventional palpation; furthermore, the use of X-ray and other imaging examinations is limited by their potential harm to the fetus, and pregnancy-related hormone fluctuations may stimulate tumor growth and progression. All of these factors hinder the early diagnosis of PABC. The literature has reported that a 1-month delay in treatment may increase the risk of lymph node involvement by 0.9%.[15] Breast ultrasound is safe, simple, and radiation-free and can therefore be used as the first-choice method for the diagnosis of PABC. Mammography with protective measures has also been proven safe for the fetus, but its sensitivity is low because of the increased density of the mammary glands. For patients with clinically suspected disease, the above examinations should be performed as soon as possible and, if necessary, a biopsy should be performed to avoid delays in diagnosis and treatment.

In our study, invasive ductal carcinoma was the most common pathological type, accounting for 71.3% of all cases. Histological grades G2 and G3 accounted for 61.1% of all cases. The estrogen receptor (ER)- and progesterone receptor (PR)-positive rates were not high (52.4% and 42.7%, respectively), and the Her-2-positive rate was 29.3%. These results are consistent with those of similar studies conducted abroad: PABC tends to have a high histological grade, and the expression of ER and PR is low.[10,14] In addition, patients with Ki-67 <20%, Ki-67 ≥20% and <40%, and Ki-67 ≥40% accounted for 15.2%, 29.3%, and 43.3% of the Ki-67-positive patients, respectively. Regarding molecular subtypes, the most common types remain controversial; some studies suggest that TNBC and the Her-2-overexpression type are the most common. In our study, luminal B was the most common type, and the proportions of TNBC and Her-2 overexpression were also high, which is supported by other studies.[10,14,16,17] Through gene expression profiling, some scholars have found that, compared with non-PABC patients, patients with PABC had epithelial cells with enhanced expression of genes associated with invasion and recurrence that had a complex relationship with ER and PR.[18] Other literature has reviewed the mechanisms underlying the development of PABC, including hypotheses involving hormonal effects, immune changes, and inflammatory responses, which await further investigation.[14]

While the treatment of PABC during lactation is similar to that of non-PABC, PABC during pregnancy is difficult to treat because of the restrictions associated with pregnancy.[19] Regarding the decision to pursue surgery or chemotherapy during pregnancy, surgery can theoretically be performed at any time. Among the 57 patients in this study who continued their pregnancy, 30 chose to undergo surgery during pregnancy; most of them underwent modified radical mastectomy, although a few underwent breast-conserving surgery. There were no significant adverse events in the mother or the fetus after surgery.
When administered during pregnancy, chemotherapy is usually given in the middle and late stages of pregnancy, during which the safety of the mother and fetus can be ensured.[20] Because chemotherapy in early pregnancy is likely to cause fetal malformations and chemotherapy in late pregnancy can easily cause bone marrow suppression in the fetus after delivery, it is contraindicated during early and late pregnancy (after 35 weeks of gestation or within 3 weeks before delivery).[1] In this study, 17 patients with PABC during pregnancy underwent chemotherapy, including two in late pregnancy and 15 in mid-pregnancy, and all neonates were healthy. Most of these patients received weekly chemotherapy with anthracycline + cyclophosphamide, and a few received weekly low-dose paclitaxel, mainly because the proportions of anthracycline, cyclophosphamide, and paclitaxel that can cross the placental barrier are <8%, the lowest among chemotherapeutic drugs. More than 50% of carboplatin can cross the placental barrier, the highest among chemotherapeutic drugs. Approximately 25% of cyclophosphamide can cross the placental barrier; although its effect on fetal development is relatively small in mid- and late pregnancy, it tends to cause fetal malformations when administered in early pregnancy.[1] Therefore, weekly chemotherapy with anthracycline + cyclophosphamide was the main regimen used in this study, and no obvious maternal complications or fetal hypoplasia were observed during chemotherapy. In addition, endocrine therapy, targeted therapy, and radiotherapy are prohibited during pregnancy because they can easily affect fetal development or threaten the life of the fetus.

A total of 26 (31.3%) patients terminated their pregnancy. Whether termination of pregnancy can change the prognosis is still controversial. Most studies have shown no difference in prognosis between patients who terminated their pregnancy and those who continued it.[21] In this study, among the 57 patients who continued their pregnancy, eight had recurrence and metastasis and two died. In contrast, among the 26 patients who terminated their pregnancy, only one patient died and no patient had recurrence or metastasis. Survival analysis showed that the DFS of the patients who continued their pregnancy was worse, but not significantly worse, than that of the patients who terminated their pregnancy, and their OS was not significantly different. Therefore, termination of pregnancy is not recommended for women with PABC during pregnancy unless chemotherapy is required in early pregnancy under certain circumstances. Regarding the impact of PABC on neonatal outcomes, a US study based on big data showed that the risks of intrauterine growth restriction, congenital abnormality, and intrauterine fetal death were similar in patients with and without PABC, but the risks of premature birth and premature rupture of membranes were higher in patients with PABC.[22] A total of 57 patients who continued their pregnancy were included in this study; 11 of them, who were in late pregnancy (close to 39 weeks of gestation), gave birth before receiving any treatment. The remaining 46 patients gave birth after receiving chemotherapy and/or surgery during pregnancy, and 89.1% of them were diagnosed in mid-pregnancy.
Among the neonates delivered by the 57 patients with PABC during pregnancy, 23 (37.7%) neonates with normal body weight (>2500 g) and 21 (34.4%) neonates with body weight <2500 g had Apgar scores >8 points, and the birth conditions of 13 neonates were unknown. This result is largely similar to those of previous studies.

The prognosis of PABC is still controversial. Previous studies have reported that the prognosis of patients with PABC is worse than that of non-PABC patients because of diagnosis delay, treatment delay, and hormone stimulation, and that pregnancy and lactation are independent factors for a poor prognosis in patients with breast cancer.[5,6] However, some studies have reported that when age, tumor stage, hormone receptor expression, and other factors were matched, the prognoses of patients with PABC and non-PABC patients were not significantly different.[23,24] In this study, the median follow-up duration was 36 months. Among the 164 patients with PABC, 18 (11.0%) had local recurrence or distant metastasis, and five (3.0%) died. Studies have shown that breast cancer detected during lactation is associated with a worse prognosis,[25] perhaps because persistent inflammatory stimuli during mammary gland involution lead to changes in the tumor microenvironment and promote the migration and metastasis of tumor cells. The comparison of survival between patients with PABC during pregnancy and patients with PABC during lactation showed that the prognosis of the latter group was slightly, but not significantly, worse. In this study, the 23 patients with recurrence, metastasis, or death were analyzed. The 11 cases of PABC during pregnancy were initially diagnosed as stage III breast cancer. The 12 cases of PABC during lactation included one case of stage 0 (8.3%, 1/12), four cases of stage IV (33.3%, 4/12), five cases of stage II (41.6%, 5/12), and two cases of stage IIIc (16.6%, 2/12). No patient with PABC during pregnancy had stage IV disease, perhaps because the absence of systemic evaluation due to pregnancy made it difficult to detect distant metastasis. The four cases of stage IV PABC during lactation were all T2N1M1, which may also reflect the propensity of breast cancer for distant metastasis. TNBC and luminal B (9/11) were the dominant molecular subtypes of PABC during pregnancy, whereas luminal B and Her-2-positive breast cancer (10/12) were the dominant molecular subtypes of PABC during lactation. In addition, the Ki-67 proliferation index is closely related to the prognosis of breast cancer, and a higher Ki-67 index generally indicates a worse prognosis.[26,27] Therefore, the Ki-67 index was divided into three levels in this study: Ki-67 <20%, Ki-67 ≥20% and <40%, and Ki-67 ≥40%. Patients with Ki-67 ≥40% had a worse, but not significantly worse, prognosis than patients with Ki-67 ≥20% and <40%, who in turn had a worse, but not significantly worse, prognosis than patients with Ki-67 <20%. Similarly, the comparison of survival between the Her-2-positive and Her-2-negative patients showed that the Her-2-positive patients had a worse, but not significantly worse, prognosis, possibly because of the small sample size and the short follow-up duration of this study.
More PABC cases and longer follow-up durations are needed to further improve the survival analysis.

Regarding the growth and development of the children of PABC patients, some studies have shown that children of PABC patients who received chemotherapy during pregnancy show no significant differences in neurological and cardiac development and function compared with normal children.[28] In this study, 85 children were followed up. No obvious abnormalities in neurological or cardiac growth and development were observed in the children of PABC patients who received chemotherapy or surgical treatment during pregnancy or in the children of PABC patients who received no treatment during pregnancy. However, further studies are needed because of the large number of children lost to follow-up and the relatively short follow-up duration.

PABC is a special type of breast cancer. Standardized surgery and chemotherapy for PABC during pregnancy are safe for mother and fetus. However, our conclusions still need to be confirmed by large-scale studies and long-term follow-up data. With the adjustment of national birth control policies, the incidence of PABC may increase. Early detection, early diagnosis, and standardized treatment of PABC are very important.

Acknowledgements

Thanks to the following professors for providing data for this multicenter clinical study: Ke-Jin Wu (Obstetrics and Gynecology Hospital Affiliated to Fudan University, China), Xing-Song Tian (Shandong Provincial Hospital, China), Jin-Ping Liu (Sichuan Provincial People's Hospital, China), Zhi-Min Fan (Bethune First Hospital of Jilin University, China), Zhong-Wei Cao (People's Hospital of Inner Mongolia Autonomous Region, China), Chuan Wang (Union Hospital Affiliated to Fujian Medical University, China), Rui Ling (Xijing Hospital, China), Yin-Hua Liu (Peking University First Hospital, China), Jian-Dong Wang (General Hospital of the Chinese People's Liberation Army, China), Jing-Hua Zhang (Tangshan People's Hospital, China), Zhi-Gang Yu (Second Hospital of Shandong University, China), Wei Zhu (Zhongshan Hospital Affiliated to Fudan University, China), De-Dian Chen (Yunnan Cancer Hospital, China), Yun-Jiang Liu (Fourth Hospital of Hebei Medical University, China), Da-Hua Mao (Wudang Hospital Affiliated to Guizhou Medical University, China), Li-Li Tang (Xiangya Hospital of Central South University, China), Shu Wang (People's Hospital of Peking University, China), Er-Wei Song (Sun Yat-sen Memorial Hospital of Sun Yat-sen University, China), Pei-Fen Fu (First Affiliated Hospital of Zhejiang University, China), Jiang-Hua Ou (Cancer Hospital of Xinjiang Medical University, China), Ke Liu (Cancer Hospital of Jilin Province, China), Yong-Hui Luo (Second Affiliated Hospital of Nanchang University, China), Xiang Qu (Beijing Friendship Hospital of Capital Medical University, China), Yi Zhao (Shengjing Hospital Affiliated to China Medical University, China), Jian Huang (Second Affiliated Hospital of the Medical College of Zhejiang University, China), Hong-Chuan Jiang (Beijing Chaoyang Hospital Affiliated to Capital Medical University, China), Rong Ma (Qilu Hospital Affiliated to Shandong University, China), Jian-Guo Zhang (Second Affiliated Hospital of Harbin Medical University, China), Jun Jiang (Southwest Hospital Affiliated to the Third Military Medical University, China), Zhen-Zhen Liu (Henan Cancer Hospital Affiliated to Capital Medical University, China),
Ai-Lin Song (Second Affiliated Hospital of Lanzhou University, China), and Qiang Zou (Huashan Hospital Affiliated to Fudan University, China).

Conflicts of interest

None.
[ "intro", "methods", null, null, "methods", "results", null, "subjects", null, null, "discussion", null, "COI-statement" ]
[ "Pregnancy-associated breast cancer", "Clinicopathological feature", "Treatment", "Prognosis" ]
Introduction: Pregnancy-associated breast cancer (PABC) is a special type of breast cancer that occurs during pregnancy and within 1 year after childbirth. PABC can be divided into PABC during pregnancy and postpartum PABC.[1] PABC accounts for 0.2% to 3.8% of all breast cancers, including 10% to 20% of all breast cancers in patients <30 years.[2–4] With the rapid social development and the adjustment of reproductive policies in China, the average age of females at first childbirth is increasing, which is expected to lead to an increase in the incidence of PABC. A total of 164 cases of PABC in 27 hospitals were collected in this study through the Chinese Society of Breast Surgery to investigate the prevalence of PABC in China and to accumulate and summarize the experience with the clinical diagnosis and treatment of PABC in China. Methods: Clinical data A total of 60,998 newly diagnosed breast cancer patients were treated at 27 hospitals nationwide between January 2016 and December 2018, including 164 patients with pathologically confirmed PABC, accounting for 0.3% of the total number of breast cancer cases during the same period. Among these PABC cases, 83 occurred during pregnancy, including 14 cases during early pregnancy (<12 weeks of gestation), 38 cases during mid-pregnancy (12–28 weeks of gestation), and 31 cases during late pregnancy (>28 weeks of gestation) and 81 occurred during lactation. A total of 60,998 newly diagnosed breast cancer patients were treated at 27 hospitals nationwide between January 2016 and December 2018, including 164 patients with pathologically confirmed PABC, accounting for 0.3% of the total number of breast cancer cases during the same period. Among these PABC cases, 83 occurred during pregnancy, including 14 cases during early pregnancy (<12 weeks of gestation), 38 cases during mid-pregnancy (12–28 weeks of gestation), and 31 cases during late pregnancy (>28 weeks of gestation) and 81 occurred during lactation. Follow-up Follow-up visits were conducted via outpatient visits or telephonic interviews. The follow-up deadline was November 31, 2020, and the median follow-up duration was 36 months (3–59 months). Survival curves were plotted based on the follow-up data. Disease-free survival (DFS) was defined as the time from the pathological diagnosis of PABC to relapse or metastasis. Overall survival (OS) was defined as the interval between the pathological diagnosis of PABC and the date of death or the last follow-up. Follow-up visits were conducted via outpatient visits or telephonic interviews. The follow-up deadline was November 31, 2020, and the median follow-up duration was 36 months (3–59 months). Survival curves were plotted based on the follow-up data. Disease-free survival (DFS) was defined as the time from the pathological diagnosis of PABC to relapse or metastasis. Overall survival (OS) was defined as the interval between the pathological diagnosis of PABC and the date of death or the last follow-up. Statistical methods The results were analyzed using IBM Statistics SPSS 20 (IBM Corp., Armonk, NY, USA). Survival curves were plotted using the Kaplan-Meier method. The results were analyzed using IBM Statistics SPSS 20 (IBM Corp., Armonk, NY, USA). Survival curves were plotted using the Kaplan-Meier method. 
Clinical data: A total of 60,998 newly diagnosed breast cancer patients were treated at 27 hospitals nationwide between January 2016 and December 2018, including 164 patients with pathologically confirmed PABC, accounting for 0.3% of all breast cancer cases during the same period. Among these PABC cases, 83 occurred during pregnancy, including 14 during early pregnancy (<12 weeks of gestation), 38 during mid-pregnancy (12–28 weeks of gestation), and 31 during late pregnancy (>28 weeks of gestation); 81 occurred during lactation. Follow-up: Follow-up visits were conducted via outpatient visits or telephone interviews. The follow-up deadline was November 30, 2020, and the median follow-up duration was 36 months (3–59 months). Survival curves were plotted based on the follow-up data. Disease-free survival (DFS) was defined as the time from the pathological diagnosis of PABC to relapse or metastasis. Overall survival (OS) was defined as the interval between the pathological diagnosis of PABC and the date of death or the last follow-up. Statistical methods: The results were analyzed using IBM SPSS Statistics 20 (IBM Corp., Armonk, NY, USA). Survival curves were plotted using the Kaplan-Meier method. Results: Clinical and pathological characteristics A total of 164 patients with a median age of 33 years (24–47 years) were included in this study. Among them, 15 (8.2%) had a family history of malignancies and ten (5.4%) had a family history of breast cancer. There were six patients (3.7%) in stage 0, 15 (9.1%) in stage I, 51 (31.1%) in stage IIa, 39 (23.8%) in stage IIb, 27 (16.5%) in stage IIIa, eight (4.9%) in stage IIIb, five (3.0%) in stage IIIc, four (2.4%) in stage IV, and nine (5.5%) in an unknown stage. There were 54 cases (33.0%) of N0, 71 (43.3%) of N1, 19 (11.6%) of N2, and five (3.0%) of N3. Invasive ductal carcinoma of the breast was the most common pathological type, accounting for 71.3% (117/164). There were two cases (1.2%) of histological grade G1, 51 (31.1%) of G2, and 49 (30.0%) of G3.
There were 15 cases (9.1%) of luminal A type, 48 (29.3%) of luminal B type (Her2−), 23 (14.0%) of luminal B type (Her2+), 25 (15.2%) of human epidermal growth factor receptor 2 (Her-2)-positive breast cancer, and 31 (18.9%) of triple-negative breast cancer (TNBC). The clinicopathological characteristics of the patients are summarized in Table 1. The clinicopathological characteristics of the 164 patients with PABC. ER: Estrogen receptor; PABC: Pregnancy-associated breast cancer; PR: Progesterone receptor; TNBC: Triple-negative breast cancer. Outcomes of patients with PABC during pregnancy The 83 patients with PABC during pregnancy included 14 patients in early pregnancy, 38 in mid-pregnancy, and 31 in late pregnancy. All patients with PABC in early pregnancy chose surgical or medical abortion to terminate their pregnancy. Among the patients with PABC in mid-pregnancy, 12 chose medical abortion and 26 delivered successfully. All patients with PABC in late pregnancy delivered successfully. All neonates had normal Apgar scores, indicating good development, with 23 neonates weighing >2500 g and 21 neonates weighing <2500 g. Treatment for PABC during pregnancy Among the 83 patients with PABC during pregnancy, 30 (36.1%) underwent surgical treatment during pregnancy, including eight who received breast-conserving surgery and 22 who received modified radical mastectomy; 17 (20.5%) received chemotherapy during pregnancy, including six who received neoadjuvant chemotherapy and 11 who received adjuvant chemotherapy (overall, eight received weekly paclitaxel and nine received epirubicin + cyclophosphamide). Among the six patients who received neoadjuvant chemotherapy, one with progressive disease after initial chemotherapy proceeded directly to surgery, three with stable disease after initial chemotherapy achieved partial remission after switching to a different regimen, and two achieved partial remission after initial chemotherapy. Among the 17 patients who received chemotherapy during pregnancy, recurrence and metastasis occurred in four, including two who received adjuvant chemotherapy and two who received neoadjuvant chemotherapy. The treatment regimens are shown in Table 2. Treatment of the 17 breast cancer patients who received chemotherapy during pregnancy. ddEC: Dose-dense epirubicin and cyclophosphamide; EC: Epirubicin + cyclophosphamide; EC-wP: Epirubicin + cyclophosphamide-weekly paclitaxel; IDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; PD: Progressive disease; PR: Partial remission; SD: Stable disease; TNBC: Triple-negative breast cancer.
Follow-up Follow-up visits were performed for the 164 patients with PABC. The median follow-up duration was 36 months (3–59 months). Eighteen patients were lost to follow-up, a loss to follow-up rate of 11.0%. Among the patients with PABC during pregnancy, two had local recurrence, six had distant metastasis (two cases of liver metastasis, three of bone metastasis, and one of skin metastasis), and three died. Among the patients with PABC during lactation, four had local recurrence, six had metastasis (one case of brain metastasis, one of adrenal metastasis, one of lumbar metastasis, one of liver metastasis, and two of liver metastasis with bone metastasis), and two died. In total, there were 18 cases (11.0%) of local recurrence and distant metastasis and five deaths (3.0%) among the 164 patients with PABC. The follow-up data are shown in Tables 3 and 4. Follow-up results of 164 patients with PABC. PABC: Pregnancy-associated breast cancer. Follow-up of 23 cases of recurrence, metastasis, and death. IDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; TNBC: Triple-negative breast cancer. The Kaplan-Meier survival curves of the patients with PABC during pregnancy and those of the patients with PABC during lactation were compared; the P value was 0.525 for DFS and 0.346 for OS. In the comparison of patients with Ki-67 <20%, Ki-67 ≥20% and <40%, and Ki-67 ≥40%, the P value was 0.179 for DFS and 0.391 for OS. Similarly, in the comparison of Her-2-positive and Her-2-negative patients, the P value was 0.638 for DFS and 0.364 for OS. The survival curves are shown in Figures 1–3. In addition, the Kaplan-Meier survival curves of the patients with PABC during pregnancy who terminated their pregnancy and those who continued their pregnancy [Figure 4] were compared; the P value was 0.138 for DFS and 0.910 for OS, with no significant difference.
The OS and DFS analysis of the patients with PABC during pregnancy and those of the patients with PABC during lactation. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer. The OS and DFS analysis of the subgroups with different Ki-67 index. DFS: Disease free survival; OS: Overall survival. The OS and DFS analysis of the Her2+ and Her2− subgroups. DFS: Disease free survival; OS: Overall survival. The OS and DFS analysis of the 83 patients with PABC during pregnancy who terminated their pregnancy and those of patients who continued their pregnancy. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer. A total of 85 children of 138 patients with PABC during pregnancy or lactation who delivered successfully were followed up for development. The children included 39 boys and 46 girls with an average age of 29 months (19–72 months). Their heights and weights were all within the normal ranges.
Clinical and pathological characteristics: A total of 164 patients with a median age of 33 years (24–47 years) were included in this study. Among them, 15 (8.2%) had a family history of malignancies and ten (5.4%) had a family history of breast cancer. There were six patients (3.7%) in stage 0, 15 (9.1%) in stage I, 51 (31.1%) in stage IIa, 39 (23.8%) in stage IIb, 27 (16.5%) in stage IIIa, eight (4.9%) in stage IIIb, five (3.0%) in stage IIIc, four (2.4%) in stage IV, and nine (5.5%) in an unknown stage. There were 54 cases (33.0%) of N0, 71 (43.3%) of N1, 19 (11.6%) of N2, and five (3.0%) of N3. Invasive ductal carcinoma of the breast was the most common pathological type, accounting for 71.3% (117/164). There were two cases (1.2%) of histological grade G1, 51 (31.1%) of G2, and 49 (30.0%) of G3. There were 15 cases (9.1%) of luminal A type, 48 (29.3%) of luminal B type (Her2−), 23 (14.0%) of luminal B type (Her2+), 25 (15.2%) of human epidermal growth factor receptor 2 (Her-2)-positive breast cancer, and 31 (18.9%) of triple-negative breast cancer (TNBC). The clinicopathological characteristics of the patients are summarized in Table 1. The clinicopathological characteristics of the 164 patients with PABC. ER: Estrogen receptor; PABC: Pregnancy-associated breast cancer; PR: Progesterone receptor; TNBC: Triple-negative breast cancer. Outcomes of patients with PABC during pregnancy: The 83 patients with PABC during pregnancy included 14 patients in early pregnancy, 38 in mid-pregnancy, and 31 in late pregnancy. All patients with PABC in early pregnancy chose surgical or medical abortion to terminate their pregnancy. Among the patients with PABC in mid-pregnancy, 12 chose medical abortion and 26 delivered successfully. All patients with PABC in late pregnancy delivered successfully. All neonates had normal Apgar scores, indicating good development, with 23 neonates weighing >2500 g and 21 neonates weighing <2500 g. Treatment for PABC during pregnancy: Among the 83 patients with PABC during pregnancy, 30 (36.1%) underwent surgical treatment during pregnancy, including eight who received breast-conserving surgery and 22 who received modified radical mastectomy; 17 (20.5%) received chemotherapy during pregnancy, including six who received neoadjuvant chemotherapy and 11 who received adjuvant chemotherapy (overall, eight received weekly paclitaxel and nine received epirubicin + cyclophosphamide). Among the six patients who received neoadjuvant chemotherapy, one with progressive disease after initial chemotherapy proceeded directly to surgery, three with stable disease after initial chemotherapy achieved partial remission after switching to a different regimen, and two achieved partial remission after initial chemotherapy.
Among the 17 patients who received chemotherapy during pregnancy, recurrence and metastasis occurred in four, including two who received adjuvant chemotherapy and two who received neoadjuvant chemotherapy. The treatment regimens are shown in Table 2. Treatment of the 17 breast cancer patients who received chemotherapy during pregnancy. ddEC: Dose-dense epirubicin and cyclophosphamide; EC: Epirubicin + cyclophosphamide; EC-wP: Epirubicin + cyclophosphamide-weekly paclitaxel; IDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; PD: Progressive disease; PR: Partial remission; SD: Stable disease; TNBC: Triple-negative breast cancer. Follow-up: Follow-up visits were performed for the 164 patients with PABC. The median follow-up duration was 36 months (3–59 months). Eighteen patients were lost to follow-up, a loss to follow-up rate of 11.0%. Among the patients with PABC during pregnancy, two had local recurrence, six had distant metastasis (two cases of liver metastasis, three of bone metastasis, and one of skin metastasis), and three died. Among the patients with PABC during lactation, four had local recurrence, six had metastasis (one case of brain metastasis, one of adrenal metastasis, one of lumbar metastasis, one of liver metastasis, and two of liver metastasis with bone metastasis), and two died. In total, there were 18 cases (11.0%) of local recurrence and distant metastasis and five deaths (3.0%) among the 164 patients with PABC. The follow-up data are shown in Tables 3 and 4. Follow-up results of 164 patients with PABC. PABC: Pregnancy-associated breast cancer. Follow-up of 23 cases of recurrence, metastasis, and death. IDC: Invasive ductal carcinoma; ILC: Invasive lobular carcinoma; TNBC: Triple-negative breast cancer. The Kaplan-Meier survival curves of the patients with PABC during pregnancy and those of the patients with PABC during lactation were compared; the P value was 0.525 for DFS and 0.346 for OS. In the comparison of patients with Ki-67 <20%, Ki-67 ≥20% and <40%, and Ki-67 ≥40%, the P value was 0.179 for DFS and 0.391 for OS. Similarly, in the comparison of Her-2-positive and Her-2-negative patients, the P value was 0.638 for DFS and 0.364 for OS. The survival curves are shown in Figures 1–3. In addition, the Kaplan-Meier survival curves of the patients with PABC during pregnancy who terminated their pregnancy and those who continued their pregnancy [Figure 4] were compared; the P value was 0.138 for DFS and 0.910 for OS, with no significant difference. The OS and DFS analysis of the patients with PABC during pregnancy and those of the patients with PABC during lactation. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer. The OS and DFS analysis of the subgroups with different Ki-67 index. DFS: Disease free survival; OS: Overall survival. The OS and DFS analysis of the Her2+ and Her2− subgroups. DFS: Disease free survival; OS: Overall survival. The OS and DFS analysis of the 83 patients with PABC during pregnancy who terminated their pregnancy and those of patients who continued their pregnancy. DFS: Disease free survival; OS: Overall survival; PABC: Pregnancy-associated breast cancer. A total of 85 children of 138 patients with PABC during pregnancy or lactation who delivered successfully were followed up for development.
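For the three-way Ki-67 comparison reported above, a multi-group log-rank test is the natural generalization of the two-group test. The sketch below shows one way to run it with Python's lifelines package; the file name and column names (dfs_months, relapse_or_metastasis, ki67_group) are hypothetical placeholders, not the study's data.

# Minimal sketch of the three-group Ki-67 DFS comparison, assuming the
# lifelines package; column names are hypothetical.
import pandas as pd
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("pabc_followup.csv")

# ki67_group takes the values "<20%", ">=20% and <40%", ">=40%"
res = multivariate_logrank_test(df["dfs_months"], df["ki67_group"],
                                df["relapse_or_metastasis"])
print(f"DFS log-rank across Ki-67 strata: P = {res.p_value:.3f}")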
The children included 39 boys and 46 girls with an average age of 29 months (19–72 months). Their heights and weights were all within the normal ranges. Discussion: PABC occurs during pregnancy and lactation. It affects the health of the mother and the fetus and should receive extensive attention. The definition of PABC varies across studies. In a 2012 meta-analysis of 30 studies on PABC, eight studies defined PABC as breast cancer diagnosed during pregnancy, five defined it as breast cancer diagnosed during lactation, and 15 defined it as newly occurring breast cancer during pregnancy or within 1 year after childbirth.[5] A meta-analysis conducted in 2016 suggested extending the definition of PABC to include new occurrences of breast cancer during pregnancy and within 5 years after childbirth.[6] Currently, the mainstream definition of PABC is newly occurring breast cancer during pregnancy and within 1 year after childbirth,[7] which is the definition used in this clinical study. The incidence of PABC is low (0.2%–3.8%) in foreign studies, and it ranks third among cancers commonly diagnosed during pregnancy and lactation.[8] A multicenter study involving 16 countries showed that PABC ranked first among cancers diagnosed during pregnancy.[9] No multicenter clinical study of PABC had previously been conducted in China. A large single-center retrospective study performed at Tianjin Medical University found that PABC accounted for 0.36% of all breast cancer cases.[10] The present study included 164 PABC patients from 27 hospitals nationwide, accounting for 0.3% of all breast cancer cases during the same period. The median age of onset of PABC in this study was 33 years, consistent with a previous large-scale study.[11] Currently, the relationship between PABC and family history of breast cancer is inconclusive. In this study, 5.4% of the PABC patients had a family history of breast cancer, similar to the proportion in the general population, suggesting that a family history of breast cancer may not be associated with PABC.[12,13] This study showed that patients in stages III and IV accounted for 26.8% (44/164). Current studies suggest that these late stages are associated with delayed diagnosis and that the diagnosis of PABC is typically delayed by 1 to 13 months.[14] Hyperplasia of the mammary glands during pregnancy and lactation increases tissue density, making small masses undetectable by conventional palpation; furthermore, the use of X-ray and other imaging examinations is limited due to their negative impact on the fetus, and pregnancy-related hormone fluctuations may stimulate tumor growth and progression. All these factors hinder the early diagnosis of PABC. The literature has reported that a 1-month delay in treatment may increase the risk of lymph node involvement by 0.9%.[15] Breast ultrasound is safe, simple, and radiation-free; therefore, it can be used as the first-choice method for the diagnosis of PABC. In addition, mammography with protective measures has also been proven safe for the fetus, but its sensitivity is not high because of the increased density of the mammary glands. For patients with clinical suspicion, these examinations should be performed as soon as possible, and if necessary, biopsy should be performed to avoid delays in diagnosis and subsequent treatment. In our study, invasive ductal carcinoma was the most common pathological type, accounting for 71.3% of all cases.
Histological grades G2 and G3 accounted for 61.1% of all histological grades. The estrogen receptor (ER)- and progesterone receptor (PR)-positive rates were not high (52.4% and 42.7%, respectively), and the Her-2-positive rate was 29.3%. These results were consistent with those of similar studies conducted abroad: PABC tends to have a high histological grade, and the expression of ER and PR is low.[10,14] In addition, patients with Ki-67 <20%, Ki-67 ≥20% and <40%, and Ki-67 ≥40% accounted for 15.2%, 29.3%, and 43.3% of the Ki-67-positive patients, respectively. Regarding molecular subtypes, the most common types are still controversial, and some studies suggest that TNBC and Her-2 overexpression types are the most common. In our study, luminal B was the most common type, and the proportions of TNBC and Her-2 overexpression were also high, which is supported by other studies.[10,14,16,17] Through gene expression profiling, some scholars have found that compared with non-PABC patients, patients with PABC had epithelial cells with enhanced expression of genes associated with invasion and recurrence that had a complex relationship with ER and PR.[18] Other literature has reviewed the mechanisms underlying the development of PABC, including various hypotheses such as hormonal effects, immune changes, and inflammatory responses, which await further investigation.[14] While the treatment of PABC during lactation is similar to that of non-PABC, PABC during pregnancy is difficult to treat because of the restrictions associated with pregnancy.[19] Regarding the decision to pursue surgery or chemotherapy during pregnancy, surgery can theoretically be performed at any time. Among the 57 patients in this study who continued their pregnancy, 30 chose to undergo surgery during pregnancy; most of them underwent modified radical mastectomy, although a few underwent breast-conserving surgery. There were no significant adverse events in the mother or the fetus after surgery. When administered during pregnancy, chemotherapy is usually given in the middle and late stages of pregnancy, when the safety of mother and fetus can be ensured.[20] Because chemotherapy in early pregnancy is likely to cause fetal malformations, and chemotherapy in late pregnancy can easily cause bone marrow suppression in the fetus after delivery, it is contraindicated during early and late pregnancy (after 35 weeks of gestation or within 3 weeks before delivery).[1] In this study, 17 patients with PABC during pregnancy underwent chemotherapy, including two in late pregnancy and 15 in mid-pregnancy, and all neonates were healthy. Most of these patients received weekly chemotherapy with anthracycline + cyclophosphamide, and a few received weekly low-dose paclitaxel, mainly because the proportions of anthracycline, cyclophosphamide, and paclitaxel that can pass through the placental barrier are <8%, the lowest among chemotherapeutic drugs. More than 50% of carboplatin can pass through the placental barrier; thus, it ranks highest among chemotherapeutic drugs.
Approximately 25% of cyclophosphamide can pass through the placental barrier, and although its effect on fetal development is relatively small in mid- and late pregnancy, it tends to cause fetal malformations when administered in early pregnancy.[1] Therefore, weekly chemotherapy with anthracycline + cyclophosphamide was the main chemotherapy method used in this study, and no obvious maternal complications or fetal hypoplasia were observed during chemotherapy. In addition, endocrine therapy, targeted therapy, and radiotherapy are prohibited during pregnancy because they can easily affect fetal development or threaten the life of the fetus. A total of 26 (31.3%) patients terminated their pregnancy. Whether termination of pregnancy can change the prognosis is still controversial. Most studies showed no difference in prognosis between patients who terminated their pregnancy and those who continued it.[21] In this study, among the 57 patients who continued their pregnancy, eight had recurrence and metastasis and two died. In contrast, among the 26 patients who terminated their pregnancy, only one died and none had recurrence or metastasis. Survival analysis showed that the DFS of the patients who continued their pregnancy was worse, but not significantly worse, than that of patients who terminated their pregnancy, and their OS was not significantly different. Therefore, termination of pregnancy is not recommended for women with PABC during pregnancy unless chemotherapy is required in early pregnancy under certain circumstances. Regarding the impact of PABC on neonatal outcomes, a US study based on big data showed that the risks of intrauterine growth restriction, congenital abnormality, and intrauterine fetal death were similar in patients with PABC and non-PABC patients, but the risks of premature birth and premature rupture of membranes were higher in patients with PABC.[22] A total of 57 patients who continued their pregnancy were included in this analysis; 11 of them, who were in late pregnancy (close to 39 weeks of gestation), gave birth before receiving any treatment. The remaining 46 patients gave birth after receiving chemotherapy and/or surgery during pregnancy, and 89.1% of them were diagnosed in mid-pregnancy. Among the neonates delivered by the 57 patients with PABC during pregnancy, 23 (37.7%) neonates with normal body weight (>2500 g) and 21 (34.4%) neonates with body weight <2500 g had Apgar scores >8 points, and the birth conditions of 13 neonates were unknown. These results are broadly similar to those of previous studies. The prognosis of PABC is still controversial. Previous studies have reported that the prognosis of patients with PABC is worse than that of non-PABC patients because of diagnosis delay, treatment delay, and hormone stimulation, and that pregnancy and lactation are independent factors for a poor prognosis in patients with breast cancer.[5,6] However, some studies have reported that when age, tumor stage, hormone receptor expression, and other factors were matched, the prognoses of patients with PABC and non-PABC patients were not significantly different.[23,24] In this study, the median follow-up duration was 36 months. Among the 164 patients with PABC, 18 (11.0%) had local recurrence and distant metastasis, and five (3.0%) died.
Studies have shown that breast cancer detected during lactation is associated with a worse prognosis,[25] perhaps because the persistent presence of inflammatory stimuli during mammary gland involution leads to changes in the tumor microenvironment and promotes the migration and metastasis of tumor cells. The comparison of survival between patients with PABC during pregnancy and patients with PABC during lactation showed that the prognosis of the latter group was slightly worse; however, follow-up found no significant difference between the groups. In this study, the 23 patients with recurrence, metastasis, or death were analyzed. The 11 cases of PABC during pregnancy were initially diagnosed as stage III breast cancer. The 12 cases of PABC during lactation included one case of stage 0 (8.3%, 1/12), four cases of stage IV (33.3%, 4/12), five cases of stage II (41.7%, 5/12), and two cases of stage IIIc (16.7%, 2/12). No patient with PABC during pregnancy had stage IV disease, perhaps because the absence of systemic evaluation due to pregnancy made it difficult to detect distant metastasis. The four cases of stage IV PABC during lactation were all T2N1M1, which may also reflect the propensity of breast cancer for distant metastasis. TNBC and luminal B (9/11) were the dominant molecular subtypes of PABC during pregnancy, whereas luminal B and Her-2-positive breast cancer (10/12) were the dominant molecular subtypes of PABC during lactation. In addition, the Ki-67 proliferation index is closely related to the prognosis of breast cancer, and a higher Ki-67 index generally indicates a worse prognosis.[26,27] Therefore, the Ki-67 index was divided into three levels in this study: Ki-67 <20%, Ki-67 ≥20% and <40%, and Ki-67 ≥40%. Patients with Ki-67 ≥40% had a worse, but not significantly worse, prognosis than patients with Ki-67 ≥20% and <40%, who in turn had a worse, but not significantly worse, prognosis than patients with Ki-67 <20%. Similarly, the comparison of survival between Her-2-positive and Her-2-negative patients showed that the Her-2-positive patients had a worse, but not significantly worse, prognosis, possibly because of the small sample size and the short follow-up duration in this study. More PABC cases and longer follow-up durations are needed to further refine the survival analysis. Regarding the growth and development of the children of PABC patients, some studies showed that children of PABC patients who received chemotherapy during pregnancy had no significant differences in neurological and cardiac development and function compared with normal children.[28] In this study, 85 children were followed up. No obvious abnormalities in neurological or cardiac growth and development were observed in the children of PABC patients who received chemotherapy or surgical treatment during pregnancy or in the children of PABC patients who did not receive any treatment during pregnancy. However, further studies are needed because of the large number of children lost to follow-up and the relatively short follow-up duration. PABC is a special type of breast cancer. Standardized surgery and chemotherapy for PABC during pregnancy are safe for mother and fetus. However, our conclusions still need to be confirmed by large-scale studies and long-term follow-up data. With the adjustment of national birth control policies, the incidence of PABC may increase.
Early detection, early diagnosis, and standardized treatment of PABC are very important. Acknowledgements: We thank the following professors for providing data for this multicenter clinical study: Ke-Jin Wu (Obstetrics and Gynecology Hospital Affiliated to Fudan University, China), Xing-Song Tian (Shandong Provincial Hospital, China), Jin-Ping Liu (Sichuan Provincial People's Hospital, China), Zhi-Min Fan (Bethune First Hospital of Jilin University, China), Zhong-Wei Cao (People's Hospital of Inner Mongolia Autonomous Region, China), Chuan Wang (Union Hospital Affiliated to Fujian Medical University, China), Rui Ling (Xijing Hospital, China), Yin-Hua Liu (Peking University First Hospital, China), Jian-Dong Wang (General Hospital of Chinese People's Liberation Army, China), Jing-Hua Zhang (Tangshan People's Hospital, China), Zhi-Gang Yu (Second Hospital of Shandong University, China), Wei Zhu (Zhongshan Hospital Affiliated to Fudan University, China), De-Dian Chen (Yunnan Cancer Hospital, China), Yun-Jiang Liu (Fourth Hospital of Hebei Medical University, China), Da-Hua Mao (Wudang Hospital Affiliated to Guizhou Medical University, China), Li-Li Tang (Middle Hospital, Xiangya Hospital of Southern University, China), Shu Wang (People's Hospital of Peking University, China), Er-Wei Song (Sun Yat Sen Memorial Hospital of Sun Yat Sen University, China), Pei-Fen Fu (First Affiliated Hospital of Zhejiang University, China), Jiang-Hua Ou (Cancer Hospital of Xinjiang Medical University, China), Ke Liu (Cancer Hospital of Jilin Province, China), Yong-Hui Luo (Second Affiliated Hospital of Nanchang University, China), Xiang Qu (Beijing Friendship Hospital of Capital Medical University, China), Yi Zhao (Shengjing Hospital Affiliated to China Medical University, China), Jian Huang (The Second Affiliated Hospital of Medical College of Zhejiang University, China), Hong-Chuan Jiang (Beijing Chaoyang Hospital Affiliated to Capital Medical University, China), Rong Ma (Qilu Hospital Affiliated to Shandong University, China), Jian-Guo Zhang (The Second Affiliated Hospital of Harbin Medical University, China), Jun Jiang (Southwest Hospital Affiliated to the Third Military Medical University, China), Zhen-Zhen Liu (Henan Cancer Hospital Affiliated to Capital Medical University, China), Ai-Lin Song (Second Affiliated Hospital of Lanzhou University, China), and Qiang Zou (Huashan Hospital Affiliated to Fudan University, China). Conflicts of interest: None.
Background: Pregnancy-associated breast cancer (PABC) is a special type of breast cancer that occurs during pregnancy and within 1 year after childbirth. With rapid social development and the adjustment of reproductive policies in China, the average age of females at first childbirth is increasing, which is expected to lead to an increase in the incidence of PABC. This study aimed to accumulate clinical experience and to investigate and summarize the prevalence, diagnosis, and treatment of PABC based on a large multicenter sample in China. Methods: Through the Chinese Society of Breast Surgery, a total of 164 patients with PABC treated in 27 hospitals from January 2016 to December 2018 were identified. Pregnancy status, clinicopathological features, comprehensive treatment methods, and outcomes were retrospectively analyzed. Survival curves were plotted using the Kaplan-Meier method. Results: The 164 patients with PABC accounted for 0.30% of all breast cancer cases in the same period; 83 patients were diagnosed during pregnancy and 81 during lactation. The median age at diagnosis was 33 years (24-47 years). Stage I patients accounted for 9.1% (15/164), stage II for 54.9% (90/164), stage III for 24.4% (40/164), and stage IV for 2.4% (4/164). About 9.1% (15/164) of patients were luminal A, and luminal B patients accounted for the largest proportion (43.3% [71/164]). About 15.2% (25/164) of patients had human epidermal growth factor receptor 2 (Her-2) overexpression, and 18.9% (31/164) had triple-negative breast cancer. For breast cancer during pregnancy, 36.1% (30/83) of patients received direct surgery and 20.5% (17/83) received chemotherapy during pregnancy; 31.3% (26/83) chose abortion or induction of labor. The median follow-up time was 36 months (3-59 months); 11.0% (18/164) of patients had local recurrence or distant metastasis and 3.0% (5/164) died. Conclusions: It is safe and feasible to standardize surgery and chemotherapy for PABC.
null
null
7,877
405
[ 494, 106, 104, 350, 243, 622, 502 ]
13
[ "patients", "pregnancy", "pabc", "breast", "patients pabc", "cancer", "cases", "breast cancer", "chemotherapy", "metastasis" ]
[ "breast cancer cases", "breast cancer follow", "pregnancy associated breast", "pabc breast cancer", "breast cancer pabc" ]
null
null
[CONTENT] Pregnancy-associated breast cancer | Clinicopathological feature | Treatment | Prognosis [SUMMARY]
[CONTENT] Pregnancy-associated breast cancer | Clinicopathological feature | Treatment | Prognosis [SUMMARY]
[CONTENT] Pregnancy-associated breast cancer | Clinicopathological feature | Treatment | Prognosis [SUMMARY]
null
[CONTENT] Pregnancy-associated breast cancer | Clinicopathological feature | Treatment | Prognosis [SUMMARY]
null
[CONTENT] Adult | Breast Neoplasms | China | Female | Humans | Neoplasm Recurrence, Local | Pregnancy | Pregnancy Complications, Neoplastic | Prognosis | Retrospective Studies [SUMMARY]
[CONTENT] Adult | Breast Neoplasms | China | Female | Humans | Neoplasm Recurrence, Local | Pregnancy | Pregnancy Complications, Neoplastic | Prognosis | Retrospective Studies [SUMMARY]
[CONTENT] Adult | Breast Neoplasms | China | Female | Humans | Neoplasm Recurrence, Local | Pregnancy | Pregnancy Complications, Neoplastic | Prognosis | Retrospective Studies [SUMMARY]
null
[CONTENT] Adult | Breast Neoplasms | China | Female | Humans | Neoplasm Recurrence, Local | Pregnancy | Pregnancy Complications, Neoplastic | Prognosis | Retrospective Studies [SUMMARY]
null
[CONTENT] breast cancer cases | breast cancer follow | pregnancy associated breast | pabc breast cancer | breast cancer pabc [SUMMARY]
[CONTENT] breast cancer cases | breast cancer follow | pregnancy associated breast | pabc breast cancer | breast cancer pabc [SUMMARY]
[CONTENT] breast cancer cases | breast cancer follow | pregnancy associated breast | pabc breast cancer | breast cancer pabc [SUMMARY]
null
[CONTENT] breast cancer cases | breast cancer follow | pregnancy associated breast | pabc breast cancer | breast cancer pabc [SUMMARY]
null
[CONTENT] patients | pregnancy | pabc | breast | patients pabc | cancer | cases | breast cancer | chemotherapy | metastasis [SUMMARY]
[CONTENT] patients | pregnancy | pabc | breast | patients pabc | cancer | cases | breast cancer | chemotherapy | metastasis [SUMMARY]
[CONTENT] patients | pregnancy | pabc | breast | patients pabc | cancer | cases | breast cancer | chemotherapy | metastasis [SUMMARY]
null
[CONTENT] patients | pregnancy | pabc | breast | patients pabc | cancer | cases | breast cancer | chemotherapy | metastasis [SUMMARY]
null
[CONTENT] pabc | breast | china | breast cancers | pabc china | childbirth | cancers | pregnancy | breast surgery investigate | 164 cases pabc [SUMMARY]
[CONTENT] ibm | plotted kaplan meier | ibm statistics spss 20 | corp armonk | corp | kaplan meier method | analyzed ibm | analyzed ibm statistics | analyzed ibm statistics spss | usa survival curves plotted [SUMMARY]
[CONTENT] patients | pregnancy | chemotherapy | patients pabc | pabc | received | metastasis | dfs | os | pabc pregnancy [SUMMARY]
null
[CONTENT] patients | pregnancy | pabc | cases | breast | chemotherapy | follow | survival | patients pabc | breast cancer [SUMMARY]
null
[CONTENT] 1 year ||| China | first | PABC ||| PABC | China [SUMMARY]
[CONTENT] the Chinese | 164 | PABC | 27 | January 2016 to December 2018 ||| ||| [SUMMARY]
[CONTENT] 164 | PABC | 0.30% | 83 | 81 ||| PABC | 33 years | 24-47 years ||| 9.1% | 15/164 | 54.9% | 90/164 | 24.4% | 40/164 | 2.4% | 4/164 ||| About 9.1% | 15/164 | A. Luminal B | 43.3% | 71/164 ||| About 15.2% | 25/164 | 2 | 18.9% | 31/164 ||| 36.1% | 30/83 | 20.5% | 17/83 ||| About 31.3% | 26/83 ||| 36 months | 3-59 months | 11.0% | 18/164 | 3.0% | 5/164 [SUMMARY]
null
[CONTENT] 1 year ||| China | first | PABC ||| PABC | China ||| the Chinese | 164 | PABC | 27 | January 2016 to December 2018 ||| ||| ||| ||| 164 | PABC | 0.30% | 83 | 81 ||| PABC | 33 years | 24-47 years ||| 9.1% | 15/164 | 54.9% | 90/164 | 24.4% | 40/164 | 2.4% | 4/164 ||| About 9.1% | 15/164 | A. Luminal B | 43.3% | 71/164 ||| About 15.2% | 25/164 | 2 | 18.9% | 31/164 ||| 36.1% | 30/83 | 20.5% | 17/83 ||| About 31.3% | 26/83 ||| 36 months | 3-59 months | 11.0% | 18/164 | 3.0% | 5/164 ||| PABC [SUMMARY]
null
Totally laparoscopic total gastrectomy using the modified overlap method and conventional open total gastrectomy: A comparative study.
34025073
Although several methods of totally laparoscopic total gastrectomy (TLTG) have been reported, the best anastomosis technique for laparoscopic total gastrectomy (LTG) has not been established.
BACKGROUND
We performed 151 TLTG procedures with the modified overlap method and 131 open total gastrectomy (OTG) procedures for gastric cancer between March 2012 and December 2018. Surgical and oncological outcomes were compared between groups using propensity score matching. In addition, we analyzed the risk factors associated with postoperative complications.
METHODS
Patients who underwent TLTG were discharged earlier than those who underwent OTG [TLTG: 9.62 ± 5.32 d vs OTG: 13.51 ± 10.67 d, P < 0.05]. Time to first flatus and time to soft diet were significantly shorter in the TLTG group. Pain scores at all postoperative time points and opioid administration were significantly lower in the TLTG group than in the OTG group. There were no significant differences between groups in early, late, or esophagojejunostomy (EJ)-related complications, or in 5-year recurrence-free and overall survival. Multivariate analysis demonstrated that body mass index [odds ratio (OR): 1.824; 95% confidence interval (CI): 1.029-3.234, P = 0.040] and American Society of Anesthesiologists (ASA) score (OR: 3.154; 95%CI: 1.084-9.174, P = 0.035) were independent risk factors for early complications. Additionally, age was associated with Clavien-Dindo classification ≥ 3 complications and EJ-related complications.
RESULTS
Although TLTG with the modified overlap method showed complication rates and oncological outcomes similar to those of OTG, it yielded lower pain scores, earlier bowel recovery, and earlier discharge. Surgeons should perform total gastrectomy cautiously and delicately in patients with obesity, high ASA scores, and older age.
CONCLUSION
[ "Aged", "Anastomosis, Surgical", "Gastrectomy", "Humans", "Laparoscopy", "Middle Aged", "Neoplasm Recurrence, Local", "Postoperative Complications", "Stomach Neoplasms", "Treatment Outcome" ]
8117731
INTRODUCTION
Laparoscopic total gastrectomy (LTG) is increasingly used to treat upper or middle third gastric cancer because it shows earlier recovery and is considered less invasive[1-3]. Of the entire LTG procedure, esophagojejunal reconstruction is the most crucial step, because failure of the esophagojejunostomy (EJ), such as leakage or stricture, can cause considerable patient suffering[4]. When performing EJ, linear stapler methods, such as the overlap and functional methods, are widely adopted because of their simplicity compared with the circular stapled method[5-9]. Nonetheless, the linear stapled method has a fundamental drawback: it requires a larger space for dissection around the distal esophagus than the circular stapled method because the linear stapler must be inserted through the abdominal hiatus. Recently, we developed a modified overlap method for totally LTG (TLTG) to overcome these disadvantages of the linear stapled method[10]. This method is performed with an intracorporeal side-to-side esophagojejunal anastomosis using a 45-mm linear stapler at 45° from the longitudinal direction of the esophagus (Figure 1). This procedure requires less dissection around the abdominal esophagus; therefore, it can create a secure esophagojejunal anastomosis with reduced tension, as with the circular stapled method. Two types of intracorporeal esophagojejunostomy methods. A: The conventional overlap method; B: The modified overlap method. Several studies have investigated the surgical outcomes of TLTG compared with open total gastrectomy (OTG), including EJ-related complications[11,12]. However, to the best of our knowledge, few studies have compared TLTG with the overlap method and OTG with the circular stapled method, including oncological outcomes. The aim of the present study was to investigate the technical feasibility and oncological outcomes of TLTG with the modified overlap method compared with OTG with the circular stapled method in the treatment of upper or middle third gastric cancer.
MATERIALS AND METHODS
Patients This study was approved by the institutional review board of the Asan Medical Center. We reviewed retrospectively collected data of 462 patients who underwent curative TLTG (n = 178) or OTG (n = 284) as treatment for upper or middle third gastric cancer between March 2012 and December 2018 at Asan Medical Center. We excluded all patients who received neoadjuvant chemotherapy, were diagnosed with esophagogastric junction cancer, underwent resection of additional organs other than the gallbladder, or did not have a gastric cancer diagnosis. Finally, 151 and 131 patients who underwent TLTG and OTG, respectively, were enrolled. All TLTG procedures were performed as TLTG with the modified overlap method. We evaluated TNM (tumor-node-metastasis) stage using the American Joint Committee on Cancer (AJCC) staging manual, 7th edition[13]. Clinical characteristics and pathologic data were compared between the TLTG and OTG groups. Additionally, we evaluated surgical outcomes, including EJ-related complications, and oncologic outcomes, including recurrence-free survival (RFS) and overall survival (OS). Early and late complications were defined as events occurring within or after 30 d postoperatively, respectively. EJ complications, including bleeding, leakage, and stricture, were diagnosed via upper gastrointestinal series, esophagogastroduodenoscopy, computed tomography, and clinical signs. These complications were reviewed and classified based on the Clavien-Dindo classification system (CDC)[14]. Patients were matched using propensity score matching (PSM) analysis, and surgical and oncological outcomes were evaluated. Surgical technique of anastomosis We performed TLTG with the modified overlap method and OTG with the circular stapled method, as in our previous studies[3,10].
Both procedures were performed by a single experienced surgeon who performs approximately 300 gastrectomies annually. Statistical analysis In the un-matched group, numerical variables were presented as the mean ± SD and compared using the Student's t-test or Kruskal-Wallis test. Categorical variables were compared using the Chi-square test. Univariate and multivariate analyses were performed for the entire patient cohort (un-matched group) using logistic regression. Variables were included in the multivariate analysis if their univariate significance was < 0.1. To reduce the impact of treatment selection bias and potential confounding factors in this observational study, we performed rigorous adjustments for significant differences in the baseline characteristics of patients using logistic regression models with generalized estimating equations (GEE) on a propensity score matched set. With this technique, the propensity scores were estimated without considering the outcomes, using multiple logistic regression analysis. A full non-parsimonious model was developed that included all the variables shown in Table 1. Model discrimination was assessed using the C statistic, and model calibration was evaluated using the Hosmer-Lemeshow statistic. Overall, the model was well calibrated (Hosmer-Lemeshow test; P = 0.368) with reasonable discrimination (C statistic = 0.858). We matched the two groups (1:1 ratio) using a 'greedy nearest-neighbour' algorithm. Matching balance was assessed based on standardized mean differences; a > 10% difference in absolute value was considered significantly imbalanced. Patient clinical characteristics Values are expressed as mean ± SD or n (%). PSM: Propensity score matching; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy. In the matched group, numerical variables were reported as the mean ± SD and compared using the paired t-test. Categorical variables were compared using McNemar's test or the marginal homogeneity test. To evaluate the association between type of surgery and complications, survival, and recurrence, the propensity score adjusted model was applied. Finally, the logistic regression model with GEE was applied using propensity score-based matching. The Cox proportional hazards model was applied using propensity score-based matching with robust standard errors. All reported P values are two-sided; values < 0.05 were considered statistically significant. Data manipulation and statistical analyses were performed using SAS® Version 9.4 (SAS Institute Inc., Cary, NC, United States).
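The univariate/multivariate logistic regression described above, with the odds ratios and confidence intervals reported in the abstract, can be illustrated with Python's statsmodels. This is a sketch, not the authors' SAS code; the file and column names (early_complication, bmi, asa_score, age) are hypothetical.

# Minimal sketch of the multivariate logistic regression for early
# complications, using statsmodels instead of SAS. Columns are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("gastrectomy_cohort.csv")
X = sm.add_constant(df[["bmi", "asa_score", "age"]])
fit = sm.Logit(df["early_complication"], X).fit()

# Odds ratios with 95% confidence intervals, as reported in the abstract
or_ci = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_ci.columns = ["OR", "2.5%", "97.5%"]
print(or_ci)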
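The matching workflow itself (propensity scores from logistic regression, greedy 1:1 nearest-neighbour matching without replacement, and a standardized-mean-difference balance check) can also be sketched in a few lines of Python with scikit-learn and pandas. Again this is an illustrative re-implementation with hypothetical file and column names, not the study's actual SAS workflow; a matched-set Cox model with robust standard errors could then follow, for example via lifelines' CoxPHFitter with robust=True.

# Illustrative sketch of the PSM procedure described above (hypothetical
# column names; the original analysis was performed in SAS 9.4).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("gastrectomy_cohort.csv")
covariates = ["age", "sex", "bmi", "asa_score", "tumor_size", "p_stage"]

# 1. Propensity score: P(TLTG | baseline covariates); outcomes are not used
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["tltg"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Greedy 1:1 nearest-neighbour matching without replacement
controls = df[df["tltg"] == 0].copy()
pairs = []
for idx, row in df[df["tltg"] == 1].iterrows():
    if controls.empty:
        break
    best = (controls["ps"] - row["ps"]).abs().idxmin()
    pairs.append((idx, best))
    controls = controls.drop(best)  # without replacement
matched = df.loc[[i for pair in pairs for i in pair]]

# 3. Balance check: |standardized mean difference| > 0.10 flags imbalance
def smd(treated, control):
    pooled_sd = np.sqrt((treated.var() + control.var()) / 2)
    return abs(treated.mean() - control.mean()) / pooled_sd

for cov in covariates:
    t = matched.loc[matched["tltg"] == 1, cov]
    c = matched.loc[matched["tltg"] == 0, cov]
    print(f"{cov}: SMD = {smd(t, c):.3f}")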
null
null
CONCLUSION
Based on our results, we confirmed that TLTG with the modified overlap method has several advantages over OTG. However, this study has certain limitations: it is a retrospective study performed by a single experienced surgeon at a high-volume center, and the number of enrolled patients is relatively small.
[ "INTRODUCTION", "Patients", "Surgical technique of anastomosis", "Statistical analysis", "RESULTS", "Clinicopathological characteristics", "Surgical outcomes and postoperative complications in PSM", "Oncologic outcomes of PSM", "Risk factors for postoperative complications", "DISCUSSION", "CONCLUSION" ]
[ "Laparoscopic total gastrectomy (LTG) is becoming increasingly used to treat upper or middle third gastric cancer because it shows earlier recovery and is considered less invasive[1-3]. Of the entire procedure of LTG, esophagojejunal reconstruction is the most crucial process. This is because the failure of esophagojejunostomy (EJ) such as leakage and stricture could induce the patients to suffer[4]. When performing EJ, EJ using linear stapler method is widely adopted due to its simplicity in comparison to the circular stapled method, such as overlap and functional method[5-9]. Nonetheless, the linear stapled method has a fundamental problem that it requires larger space to dissect around the distal esophagus than does the circular stapled method because the linear stapler needs to be inserted in the abdominal hiatus.\nRecently, we developed a modified overlap method for totally LTG (TLTG) for overcoming these disadvantages of linear stapled method[10]. This method is performed with an intracorporeal side to side esophagojejunal anastomosis using a 45-mm linear stapler at 45° from the longitudinal direction of the esophagus (Figure 1). This procedure requires less dissection around abdominal esophagus; therefore, it can create a secure esophagojejunal anastomosis with reduced tension as circular stapled method.\n\nTwo types of intracorporeal esophagojejunostomy methods. A: The conventional overlap method; B: The modified overlap method.\nSeveral studies have investigated the surgical outcomes of the TLTG compared with open total gastrectomy (OTG), including EJ-related complications[11,12]. However, to the best of our knowledge, there are few studies which compared TLTG with the overlap method and OTG with circular stapled method including oncological outcome.\nThe aim of the present study was to investigate the technical feasibility and oncological outcome of TLTG with the modified overlap method when compared with OTG with circular stapled method in the treatment of upper or middle third gastric cancer.", "This study was approved by the institutional review board of the Asan Medical Center. We reviewed the retrospectively collected and analyzed data of 462 patients who underwent curative TLTG (n = 178) and OTG (n = 284) as a treatment for upper or middle third gastric cancer between March 2012 and December 2018 at Asan Medical Center. We excluded all patients who received neoadjuvant chemotherapy, were diagnosed with esophagogastric junction cancer, underwent resection additional organs except for gall bladder, and did not have a gastric cancer diagnosis. Finally, 151 and 131 patients who underwent TLTG and OTG, respectively, were enrolled. All TLTG procedures were performed as TLTG with the modified overlap method. We evaluated TNM (tumor-node-metastasis) stage using the American Joint Committee on Cancer (AJCC), 7th edition[13]. Clinical characteristics and pathologic data were compared between TLTG and OTG groups. Additionally, we evaluated surgical outcomes, including EJ-related complications, and oncologic outcomes, including recurrence free survival (RFS) and overall survival (OS). Early and late complications were defined as events occurring within or after 30 d postoperatively, respectively. EJ complications, including bleeding, leakage and stricture, were diagnosed via upper gastrointestinal series, esophagogastroduodenoscopy, computed tomography, and clinical signs. These complications were reviewed and classified based on the Clavien-Dindo classification system (CDC)[14]. 
Patients were matched using propensity score matching (PSM) analysis and surgical and oncological outcomes were evaluated.", "We performed TLTG with modified overlap method and OTG with circular stapled method, which was similar to our previous studies[3,10]. Both procedures were performed by a single experienced surgeon who conducts approximately 300 cases of gastrectomy annually.", "In the un-matched group, numerical variables were presented as the mean ± SD using the Student’s t-test or Kruskal-Wallis test. Categorical variables was performed using the Chi-square test. Univariate and multivariate analyses were performed for the entire patient cohort (un-matched group) using logistic regression. Variables were included in the multivariate analysis if their univariate significance was < 0.1.\nTo reduce the impact of treatment selection bias and potential confounding factors in this observational study, we performed rigorous adjustments for significant differences in the baseline characteristics of patients using the logistic regression models with generalized estimating equations (GEE) with a propensity score matched set. When that technique was used, the propensity scores were estimated without considering the outcomes using multiple logistic regression analysis. A full non-parsimonious model was developed that included all the variables shown in Table 1. Model discrimination was assessed using the C statistic and model calibration was evaluated using Hosmer-Lemeshow statistics. Overall, the model was well calibrated (Hosmer-Lemeshow test; P = 0.368) with reasonable discrimination (C statistic = 0.858). We matched the two groups (1:1 ratio) using a ‘greedy nearest-neighbour’ algorithm method. The Matching balance was measured based on the standardized mean differences. A > 10% difference in the absolute value was considered significantly imbalanced.\nPatient clinical characteristics\nValues are expressed as mean ± SD or n (%). PSM: Propensity score matching; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nIn the matched group, numerical variables were reported as the means ± SD using the paired t-test. Categorical variables were performed using McNemar's test or Marginal homogeniety test. To evaluate the association between type of surgery, and complication and survival (and recurrence), the propensity score adjusted model was applied. Finally, the logistic regression model with GEE was applied using propensity score-based matching. The Cox proportional hazards model was applied using propensity score-based matching with robust standard errors. All reported P values are two-sided; values < 0.05 were considered statistically significant. Data manipulation and statistical analyses were performed using SAS® Version 9.4 (SAS Institute Inc., Cary, NC, United States).", "Clinicopathological characteristics The clinical variables are summarized in Table 1. There was a significant difference in body mass index (BMI), tumor size, and pathologic tumor stage before PSM between groups (all P < 0.05); however, these differences disappeared after PSM. There were no statistically significant differences in all baseline variables included in the model between groups.\nThe clinical variables are summarized in Table 1. 
There was a significant difference in body mass index (BMI), tumor size, and pathologic tumor stage before PSM between groups (all P < 0.05); however, these differences disappeared after PSM. There were no statistically significant differences in all baseline variables included in the model between groups.\nSurgical outcomes and postoperative complications in PSM All surgical outcomes and postoperative complications are shown in Table 2. There was no significant difference in operation time between groups (P = 0.351). Patients who underwent TLTG had significantly lower pain scores on all postoperative days than patients who underwent OTG. Moreover, patients in the TLTG group required significantly less analgesic and opioid administration than in the OTG group. The TLTG group reported earlier time to first flatus (3.62 ± 0.84 d vs 4.15 ± 0.87 d, P = 0.002) and soft diet (4.62 ± 2.67 d vs 7.47 ± 7.92 d, P = 0.001). Furthermore, patients who underwent TLTG stayed statistically significantly fewer days at the hospital after surgery than patients who underwent OTG (9.62 ± 5.32 d vs 13.51 ± 10.67 d; P < 0.001). No significant differences in postoperative complications were noted between the two groups (P = 0.161).\nEarly surgical outcomes and pathologic data in patients undergoing the totally laparoscopic gastrectomy with the modified overlap method and open total gastrectomy\nValues are expressed as mean ± SD or n (%) or median (range). PSM: Propensity score matching; LN: Lymph node; PRM: Proximal resection margin; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nPostoperative complications, including EJ-related complications, are summarized in Table 3. There were no significant differences in the early and late postoperative overall complications between groups (P = 0.317 and P = 0.257, respectively). In addition, there was no difference in the incidence of patients with ≥ 3 CDC complications in the early and late postoperative periods between groups (P = 0.428 and P > 0.999, respectively). There was no significant differences in EJ-related complications. Table 4 shows the details of EJ-related complications. Five cases of EJ leakage were observed, and two cases of EJ bleeding were found. Four patients with CDC 3 complications required interventions such as endoscopic management and pigtail drainage, whereas 2 patients with CDC 2 complications fully recovered by conservative treatment. One postoperative mortality occurred due to EJ bleeding.\nPostoperative complications\nValues are expressed as mean ± SD or n (%). PSM: Propensity score matching; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy.\nCharacteristics of the patients with esophagojejunostomy-related complications\nTNM: Tumor-node-metastasis; CDC: Clavien-Dindo classification; F: Female; M: Male; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nAll surgical outcomes and postoperative complications are shown in Table 2. There was no significant difference in operation time between groups (P = 0.351). Patients who underwent TLTG had significantly lower pain scores on all postoperative days than patients who underwent OTG. Moreover, patients in the TLTG group required significantly less analgesic and opioid administration than in the OTG group. The TLTG group reported earlier time to first flatus (3.62 ± 0.84 d vs 4.15 ± 0.87 d, P = 0.002) and soft diet (4.62 ± 2.67 d vs 7.47 ± 7.92 d, P = 0.001). 
Furthermore, patients who underwent TLTG stayed statistically significantly fewer days at the hospital after surgery than patients who underwent OTG (9.62 ± 5.32 d vs 13.51 ± 10.67 d; P < 0.001). No significant differences in postoperative complications were noted between the two groups (P = 0.161).\nEarly surgical outcomes and pathologic data in patients undergoing the totally laparoscopic gastrectomy with the modified overlap method and open total gastrectomy\nValues are expressed as mean ± SD or n (%) or median (range). PSM: Propensity score matching; LN: Lymph node; PRM: Proximal resection margin; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nPostoperative complications, including EJ-related complications, are summarized in Table 3. There were no significant differences in the early and late postoperative overall complications between groups (P = 0.317 and P = 0.257, respectively). In addition, there was no difference in the incidence of patients with ≥ 3 CDC complications in the early and late postoperative periods between groups (P = 0.428 and P > 0.999, respectively). There was no significant differences in EJ-related complications. Table 4 shows the details of EJ-related complications. Five cases of EJ leakage were observed, and two cases of EJ bleeding were found. Four patients with CDC 3 complications required interventions such as endoscopic management and pigtail drainage, whereas 2 patients with CDC 2 complications fully recovered by conservative treatment. One postoperative mortality occurred due to EJ bleeding.\nPostoperative complications\nValues are expressed as mean ± SD or n (%). PSM: Propensity score matching; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy.\nCharacteristics of the patients with esophagojejunostomy-related complications\nTNM: Tumor-node-metastasis; CDC: Clavien-Dindo classification; F: Female; M: Male; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nOncologic outcomes of PSM There were no significant differences in the number of retrieved lymph nodes between groups (P = 0.713). The 5-year RFS and OS are shown in Figure 2. There were no significant differences in pathologic tumor stage between groups after PSM. The 5-year RFS rates of patients who underwent TLTG and OTG were 87.7% and 92.3%, respectively; however, these differences were no significant (P = 0.653). The 5-year OS rates of patients who underwent TLTG and OTG were 74.6% and 80.4%, respectively (P = 0.476).\n\nSurvival curves for matched patients. A: Overall survival; B: Recurrence free survival. PSM: Propensity score matching; LTG: Laparoscopic total gastrectomy; OTG: Open total gastrectomy; F/U: Follow up.\nThere were no significant differences in the number of retrieved lymph nodes between groups (P = 0.713). The 5-year RFS and OS are shown in Figure 2. There were no significant differences in pathologic tumor stage between groups after PSM. The 5-year RFS rates of patients who underwent TLTG and OTG were 87.7% and 92.3%, respectively; however, these differences were no significant (P = 0.653). The 5-year OS rates of patients who underwent TLTG and OTG were 74.6% and 80.4%, respectively (P = 0.476).\n\nSurvival curves for matched patients. A: Overall survival; B: Recurrence free survival. 
PSM: Propensity score matching; LTG: Laparoscopic total gastrectomy; OTG: Open total gastrectomy; F/U: Follow up.\nRisk factors for postoperative complications Tables 5 and 6 demonstrate the risk factors for postoperative complications after TLTG and OTG. BMI and American Society of Anaesthesiologists (ASA) scores were significantly associated with the occurrence of early complications in the univariate analysis. In addition, ASA score and age were significantly associated with the incidence of ≥ 3 CDC and EJ-related complications, respectively. Multivariate analysis demonstrated that BMI [odds ratio (OR), 1.824; 95% confidence interval (CI): 1.029-3.234, P = 0.040] and ASA score (OR, 3.154; 95%CI: 1.084-9.174, P = 0.035) were independent risk factors of early complications. Furthermore, multivariate analysis revealed that age was associated with ≥ 3 CDC and EJ-related complications.\nUnivariate analysis of risk factors for overall early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy; CI: Confidence interval; OR: Odds ratio.\nMultivariate analysis of risk factors for early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostom; CI: Confidence interval; OR: Odds ratio; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification.\nTables 5 and 6 demonstrate the risk factors for postoperative complications after TLTG and OTG. BMI and American Society of Anaesthesiologists (ASA) scores were significantly associated with the occurrence of early complications in the univariate analysis. In addition, ASA score and age were significantly associated with the incidence of ≥ 3 CDC and EJ-related complications, respectively. Multivariate analysis demonstrated that BMI [odds ratio (OR), 1.824; 95% confidence interval (CI): 1.029-3.234, P = 0.040] and ASA score (OR, 3.154; 95%CI: 1.084-9.174, P = 0.035) were independent risk factors of early complications. Furthermore, multivariate analysis revealed that age was associated with ≥ 3 CDC and EJ-related complications.\nUnivariate analysis of risk factors for overall early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy; CI: Confidence interval; OR: Odds ratio.\nMultivariate analysis of risk factors for early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostom; CI: Confidence interval; OR: Odds ratio; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification.", "The clinical variables are summarized in Table 1. There was a significant difference in body mass index (BMI), tumor size, and pathologic tumor stage before PSM between groups (all P < 0.05); however, these differences disappeared after PSM. 
There were no statistically significant differences in all baseline variables included in the model between groups.", "All surgical outcomes and postoperative complications are shown in Table 2. There was no significant difference in operation time between groups (P = 0.351). Patients who underwent TLTG had significantly lower pain scores on all postoperative days than patients who underwent OTG. Moreover, patients in the TLTG group required significantly less analgesic and opioid administration than in the OTG group. The TLTG group reported earlier time to first flatus (3.62 ± 0.84 d vs 4.15 ± 0.87 d, P = 0.002) and soft diet (4.62 ± 2.67 d vs 7.47 ± 7.92 d, P = 0.001). Furthermore, patients who underwent TLTG stayed statistically significantly fewer days at the hospital after surgery than patients who underwent OTG (9.62 ± 5.32 d vs 13.51 ± 10.67 d; P < 0.001). No significant differences in postoperative complications were noted between the two groups (P = 0.161).\nEarly surgical outcomes and pathologic data in patients undergoing the totally laparoscopic gastrectomy with the modified overlap method and open total gastrectomy\nValues are expressed as mean ± SD or n (%) or median (range). PSM: Propensity score matching; LN: Lymph node; PRM: Proximal resection margin; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nPostoperative complications, including EJ-related complications, are summarized in Table 3. There were no significant differences in the early and late postoperative overall complications between groups (P = 0.317 and P = 0.257, respectively). In addition, there was no difference in the incidence of patients with ≥ 3 CDC complications in the early and late postoperative periods between groups (P = 0.428 and P > 0.999, respectively). There was no significant differences in EJ-related complications. Table 4 shows the details of EJ-related complications. Five cases of EJ leakage were observed, and two cases of EJ bleeding were found. Four patients with CDC 3 complications required interventions such as endoscopic management and pigtail drainage, whereas 2 patients with CDC 2 complications fully recovered by conservative treatment. One postoperative mortality occurred due to EJ bleeding.\nPostoperative complications\nValues are expressed as mean ± SD or n (%). PSM: Propensity score matching; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy.\nCharacteristics of the patients with esophagojejunostomy-related complications\nTNM: Tumor-node-metastasis; CDC: Clavien-Dindo classification; F: Female; M: Male; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.", "There were no significant differences in the number of retrieved lymph nodes between groups (P = 0.713). The 5-year RFS and OS are shown in Figure 2. There were no significant differences in pathologic tumor stage between groups after PSM. The 5-year RFS rates of patients who underwent TLTG and OTG were 87.7% and 92.3%, respectively; however, these differences were no significant (P = 0.653). The 5-year OS rates of patients who underwent TLTG and OTG were 74.6% and 80.4%, respectively (P = 0.476).\n\nSurvival curves for matched patients. A: Overall survival; B: Recurrence free survival. PSM: Propensity score matching; LTG: Laparoscopic total gastrectomy; OTG: Open total gastrectomy; F/U: Follow up.", "Tables 5 and 6 demonstrate the risk factors for postoperative complications after TLTG and OTG. 
BMI and American Society of Anaesthesiologists (ASA) scores were significantly associated with the occurrence of early complications in the univariate analysis. In addition, ASA score and age were significantly associated with the incidence of ≥ 3 CDC and EJ-related complications, respectively. Multivariate analysis demonstrated that BMI [odds ratio (OR), 1.824; 95% confidence interval (CI): 1.029-3.234, P = 0.040] and ASA score (OR, 3.154; 95%CI: 1.084-9.174, P = 0.035) were independent risk factors of early complications. Furthermore, multivariate analysis revealed that age was associated with ≥ 3 CDC and EJ-related complications.\nUnivariate analysis of risk factors for overall early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy; CI: Confidence interval; OR: Odds ratio.\nMultivariate analysis of risk factors for early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostom; CI: Confidence interval; OR: Odds ratio; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification.", "To the best of our knowledge, this is the first study to compare feasibility and oncological outcomes between patients who underwent TLTG with the modified overlap method and OTG. This study demonstrated that TLTG with the modified overlap method is a technically safe procedure based on acceptable postoperative complications, including EJ-related complications.\nThe overlap method is a widely used EJ reconstruction method in TLTG, it which can lessen the tension in the anastomosis and reduce mesentery division. This secures additional jejunum length for anastomosis[15,16]. This method involves a linear stapler for anastomosis; therefore, the area around the abdominal esophagus requires sufficient dissection. Furthermore, and space in the hiatus and length of the esophagus in which the stapler will be placed should be secured. This may lead to tension in the esophagus after anastomosis and hiatal hernia caused by excessive hiatus dissection. We have devised a novel method to minimize these risks, named the modified overlap method. We use a linear stapler; however, compared with the existing side-to-side anastomosis, less esophageal dissection is required. Further, anastomosis is completed obliquely at 45°; therefore, the resulting anastomosis is similar to when a circular stapler is used because end to side anastomosis is possible. This study proved the TLTG with this modified overlap method showed no significant difference in EJ complications when compared with OTG.\nA previous comparative study of LTG and OTG reported similar EJ anastomotic complications; however, a previous large multicenter cohort study in Japan has shown that open surgery is safer for EJ reconstruction[11,12,17]. This indicates that controversy remains regarding the superior method for EJ anastomosis between OTG and LTG. Furthermore, international treatment guidelines, including the Korean gastric cancer treatment and Japanese gastric cancer treatment guidelines, do not yet recognize LTG as a standard treatment[18,19]. 
Nonetheless, our data indicate that a randomized clinical trial assessing the surgical and oncological outcomes using the modified overlap method should be conducted to confirm its safety and efficacy.\nIn this study, we overcame operative time and lymphadenectomy issues using the TLTG with the modified overlap method. First, laparoscopic gastrectomy surgery is longer than open gastrectomy[2,20,21]. However, the institution in which this study was conducted is a high-volume center where more than a thousand laparoscopic gastrectomies are performed annually. The lead surgeon in this study performs > 300 gastric cancer operations per year. All surgical team members in this institution are skilled and experienced; therefore, we predicted a reduced operative time while maintaining acceptable surgical and oncological outcomes. However, it may be difficult to apply the results of this study to low-volume centers or inexperienced surgeons.\nSecond, lymph node dissection is an important procedure in gastric cancer surgery because the oncologic outcome is dependent on a proper lymphadenectomy[22]. The AJCC recommends that ≥ 30 lymph nodes be removed for lymphadenectomy in gastric cancer[23]. In this study, there was no statistically significant difference in the number of retrieved lymph nodes between groups; ≥ 30 lymph nodes were removed in both groups. In addition, this study showed that the 5-year overall and RFS after PSM analysis did not differ between groups.\nIn general, the risk factors associated with surgical complications in laparoscopic gastrectomy are comorbidity, surgeon experience, age, malnutrition, gender, and chronic liver disease[24-26]. Most studies have included and analyzed patients who underwent total and distal gastrectomies. Fewer studies have analyzed total gastrectomy alone. Kosuga et al[25] and Martin et al[26] have classified total gastrectomy as a risk factor for complications (OR 1.63 and 3.13, respectively). This indicates that it is important to evaluate risk factors relative to limited total gastrectomy. Li et al[27] have shown that old age combined with splenectomy is a risk factor for overall complications after total gastrectomy. In this study, preoperative BMI and ASA scores were risk factors associated with early complications. Further, old age was a risk factor associated with EJ complications. Patients over 60-year-old with a BMI over 25 and ASA scores of ≥ 3 were more likely to have surgical complications; therefore, caution is required during surgery and careful perioperative management is necessary.\nThis study has some limitations. First, it is a retrospective study performed by a single experienced surgeon at a high-volume center. Therefore, our method might not be appropriate for relatively inexperienced surgeons or small-volume institutes. Second, the number of enrolled patients is relatively small; therefore, subgroup analysis, such as distinguishing between early gastric cancer and advanced gastric cancer or grouping by stage, was not possible.", "In conclusion, we confirmed that TLTG with the modified overlap method had several advantages over OTG. These included a lower pain score, earlier bowel recovery, and discharge based on acceptable postoperative complications and oncological outcomes." ]
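For readers who want to reproduce the matching strategy described in the Statistical analysis section, the sketch below shows 1:1 greedy nearest-neighbour matching on a logistic-regression propensity score, with the > 10% standardized-mean-difference rule used as the balance check. This is a minimal illustration under stated assumptions, not the authors' SAS 9.4 implementation: the covariates, group sizes, and labels are simulated stand-ins for the Table 1 variables.

```python
# Minimal sketch of 1:1 greedy nearest-neighbour propensity score matching.
# Hypothetical data; the paper's actual model used all baseline variables
# in Table 1 and was fitted in SAS 9.4.
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_match(ps_treated, ps_control):
    """Greedy 1:1 nearest-neighbour matching on the propensity score."""
    available = list(range(len(ps_control)))
    pairs = []
    for i, ps in enumerate(ps_treated):
        if not available:
            break
        j = min(available, key=lambda k: abs(ps_control[k] - ps))
        pairs.append((i, j))
        available.remove(j)          # matching without replacement
    return pairs

def smd(x_treated, x_control):
    """Standardized mean difference; |SMD| > 0.10 flags imbalance."""
    pooled_sd = np.sqrt((x_treated.var(ddof=1) + x_control.var(ddof=1)) / 2)
    return (x_treated.mean() - x_control.mean()) / pooled_sd

rng = np.random.default_rng(0)
X = rng.normal(size=(282, 2))            # e.g., standardized BMI and age (toy)
treated = rng.integers(0, 2, size=282)   # 1 = TLTG, 0 = OTG (toy labels)

# Propensity score: P(TLTG | covariates), estimated without using outcomes.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
pairs = greedy_match(ps[t_idx], ps[c_idx])
print(f"{len(pairs)} matched pairs")
for col in range(X.shape[1]):
    t = X[t_idx[[i for i, _ in pairs]], col]
    c = X[c_idx[[j for _, j in pairs]], col]
    print(f"covariate {col}: SMD after matching = {smd(t, c):.3f}")
```

In practice, the matched pairs produced this way would then feed the GEE logistic models and the Cox model with robust standard errors described in the Statistical analysis section.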
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patients", "Surgical technique of anastomosis", "Statistical analysis", "RESULTS", "Clinicopathological characteristics", "Surgical outcomes and postoperative complications in PSM", "Oncologic outcomes of PSM", "Risk factors for postoperative complications", "DISCUSSION", "CONCLUSION" ]
[ "Laparoscopic total gastrectomy (LTG) is becoming increasingly used to treat upper or middle third gastric cancer because it shows earlier recovery and is considered less invasive[1-3]. Of the entire procedure of LTG, esophagojejunal reconstruction is the most crucial process. This is because the failure of esophagojejunostomy (EJ) such as leakage and stricture could induce the patients to suffer[4]. When performing EJ, EJ using linear stapler method is widely adopted due to its simplicity in comparison to the circular stapled method, such as overlap and functional method[5-9]. Nonetheless, the linear stapled method has a fundamental problem that it requires larger space to dissect around the distal esophagus than does the circular stapled method because the linear stapler needs to be inserted in the abdominal hiatus.\nRecently, we developed a modified overlap method for totally LTG (TLTG) for overcoming these disadvantages of linear stapled method[10]. This method is performed with an intracorporeal side to side esophagojejunal anastomosis using a 45-mm linear stapler at 45° from the longitudinal direction of the esophagus (Figure 1). This procedure requires less dissection around abdominal esophagus; therefore, it can create a secure esophagojejunal anastomosis with reduced tension as circular stapled method.\n\nTwo types of intracorporeal esophagojejunostomy methods. A: The conventional overlap method; B: The modified overlap method.\nSeveral studies have investigated the surgical outcomes of the TLTG compared with open total gastrectomy (OTG), including EJ-related complications[11,12]. However, to the best of our knowledge, there are few studies which compared TLTG with the overlap method and OTG with circular stapled method including oncological outcome.\nThe aim of the present study was to investigate the technical feasibility and oncological outcome of TLTG with the modified overlap method when compared with OTG with circular stapled method in the treatment of upper or middle third gastric cancer.", "Patients This study was approved by the institutional review board of the Asan Medical Center. We reviewed the retrospectively collected and analyzed data of 462 patients who underwent curative TLTG (n = 178) and OTG (n = 284) as a treatment for upper or middle third gastric cancer between March 2012 and December 2018 at Asan Medical Center. We excluded all patients who received neoadjuvant chemotherapy, were diagnosed with esophagogastric junction cancer, underwent resection additional organs except for gall bladder, and did not have a gastric cancer diagnosis. Finally, 151 and 131 patients who underwent TLTG and OTG, respectively, were enrolled. All TLTG procedures were performed as TLTG with the modified overlap method. We evaluated TNM (tumor-node-metastasis) stage using the American Joint Committee on Cancer (AJCC), 7th edition[13]. Clinical characteristics and pathologic data were compared between TLTG and OTG groups. Additionally, we evaluated surgical outcomes, including EJ-related complications, and oncologic outcomes, including recurrence free survival (RFS) and overall survival (OS). Early and late complications were defined as events occurring within or after 30 d postoperatively, respectively. EJ complications, including bleeding, leakage and stricture, were diagnosed via upper gastrointestinal series, esophagogastroduodenoscopy, computed tomography, and clinical signs. 
These complications were reviewed and classified based on the Clavien-Dindo classification system (CDC)[14]. Patients were matched using propensity score matching (PSM) analysis and surgical and oncological outcomes were evaluated.\nThis study was approved by the institutional review board of the Asan Medical Center. We reviewed the retrospectively collected and analyzed data of 462 patients who underwent curative TLTG (n = 178) and OTG (n = 284) as a treatment for upper or middle third gastric cancer between March 2012 and December 2018 at Asan Medical Center. We excluded all patients who received neoadjuvant chemotherapy, were diagnosed with esophagogastric junction cancer, underwent resection additional organs except for gall bladder, and did not have a gastric cancer diagnosis. Finally, 151 and 131 patients who underwent TLTG and OTG, respectively, were enrolled. All TLTG procedures were performed as TLTG with the modified overlap method. We evaluated TNM (tumor-node-metastasis) stage using the American Joint Committee on Cancer (AJCC), 7th edition[13]. Clinical characteristics and pathologic data were compared between TLTG and OTG groups. Additionally, we evaluated surgical outcomes, including EJ-related complications, and oncologic outcomes, including recurrence free survival (RFS) and overall survival (OS). Early and late complications were defined as events occurring within or after 30 d postoperatively, respectively. EJ complications, including bleeding, leakage and stricture, were diagnosed via upper gastrointestinal series, esophagogastroduodenoscopy, computed tomography, and clinical signs. These complications were reviewed and classified based on the Clavien-Dindo classification system (CDC)[14]. Patients were matched using propensity score matching (PSM) analysis and surgical and oncological outcomes were evaluated.\nSurgical technique of anastomosis We performed TLTG with modified overlap method and OTG with circular stapled method, which was similar to our previous studies[3,10]. Both procedures were performed by a single experienced surgeon who conducts approximately 300 cases of gastrectomy annually.\nWe performed TLTG with modified overlap method and OTG with circular stapled method, which was similar to our previous studies[3,10]. Both procedures were performed by a single experienced surgeon who conducts approximately 300 cases of gastrectomy annually.\nStatistical analysis In the un-matched group, numerical variables were presented as the mean ± SD using the Student’s t-test or Kruskal-Wallis test. Categorical variables was performed using the Chi-square test. Univariate and multivariate analyses were performed for the entire patient cohort (un-matched group) using logistic regression. Variables were included in the multivariate analysis if their univariate significance was < 0.1.\nTo reduce the impact of treatment selection bias and potential confounding factors in this observational study, we performed rigorous adjustments for significant differences in the baseline characteristics of patients using the logistic regression models with generalized estimating equations (GEE) with a propensity score matched set. When that technique was used, the propensity scores were estimated without considering the outcomes using multiple logistic regression analysis. A full non-parsimonious model was developed that included all the variables shown in Table 1. 
Model discrimination was assessed using the C statistic and model calibration was evaluated using Hosmer-Lemeshow statistics. Overall, the model was well calibrated (Hosmer-Lemeshow test; P = 0.368) with reasonable discrimination (C statistic = 0.858). We matched the two groups (1:1 ratio) using a ‘greedy nearest-neighbour’ algorithm method. The Matching balance was measured based on the standardized mean differences. A > 10% difference in the absolute value was considered significantly imbalanced.\nPatient clinical characteristics\nValues are expressed as mean ± SD or n (%). PSM: Propensity score matching; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nIn the matched group, numerical variables were reported as the means ± SD using the paired t-test. Categorical variables were performed using McNemar's test or Marginal homogeniety test. To evaluate the association between type of surgery, and complication and survival (and recurrence), the propensity score adjusted model was applied. Finally, the logistic regression model with GEE was applied using propensity score-based matching. The Cox proportional hazards model was applied using propensity score-based matching with robust standard errors. All reported P values are two-sided; values < 0.05 were considered statistically significant. Data manipulation and statistical analyses were performed using SAS® Version 9.4 (SAS Institute Inc., Cary, NC, United States).\nIn the un-matched group, numerical variables were presented as the mean ± SD using the Student’s t-test or Kruskal-Wallis test. Categorical variables was performed using the Chi-square test. Univariate and multivariate analyses were performed for the entire patient cohort (un-matched group) using logistic regression. Variables were included in the multivariate analysis if their univariate significance was < 0.1.\nTo reduce the impact of treatment selection bias and potential confounding factors in this observational study, we performed rigorous adjustments for significant differences in the baseline characteristics of patients using the logistic regression models with generalized estimating equations (GEE) with a propensity score matched set. When that technique was used, the propensity scores were estimated without considering the outcomes using multiple logistic regression analysis. A full non-parsimonious model was developed that included all the variables shown in Table 1. Model discrimination was assessed using the C statistic and model calibration was evaluated using Hosmer-Lemeshow statistics. Overall, the model was well calibrated (Hosmer-Lemeshow test; P = 0.368) with reasonable discrimination (C statistic = 0.858). We matched the two groups (1:1 ratio) using a ‘greedy nearest-neighbour’ algorithm method. The Matching balance was measured based on the standardized mean differences. A > 10% difference in the absolute value was considered significantly imbalanced.\nPatient clinical characteristics\nValues are expressed as mean ± SD or n (%). PSM: Propensity score matching; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nIn the matched group, numerical variables were reported as the means ± SD using the paired t-test. Categorical variables were performed using McNemar's test or Marginal homogeniety test. 
To evaluate the association between type of surgery, and complication and survival (and recurrence), the propensity score adjusted model was applied. Finally, the logistic regression model with GEE was applied using propensity score-based matching. The Cox proportional hazards model was applied using propensity score-based matching with robust standard errors. All reported P values are two-sided; values < 0.05 were considered statistically significant. Data manipulation and statistical analyses were performed using SAS® Version 9.4 (SAS Institute Inc., Cary, NC, United States).", "This study was approved by the institutional review board of the Asan Medical Center. We reviewed the retrospectively collected and analyzed data of 462 patients who underwent curative TLTG (n = 178) and OTG (n = 284) as a treatment for upper or middle third gastric cancer between March 2012 and December 2018 at Asan Medical Center. We excluded all patients who received neoadjuvant chemotherapy, were diagnosed with esophagogastric junction cancer, underwent resection additional organs except for gall bladder, and did not have a gastric cancer diagnosis. Finally, 151 and 131 patients who underwent TLTG and OTG, respectively, were enrolled. All TLTG procedures were performed as TLTG with the modified overlap method. We evaluated TNM (tumor-node-metastasis) stage using the American Joint Committee on Cancer (AJCC), 7th edition[13]. Clinical characteristics and pathologic data were compared between TLTG and OTG groups. Additionally, we evaluated surgical outcomes, including EJ-related complications, and oncologic outcomes, including recurrence free survival (RFS) and overall survival (OS). Early and late complications were defined as events occurring within or after 30 d postoperatively, respectively. EJ complications, including bleeding, leakage and stricture, were diagnosed via upper gastrointestinal series, esophagogastroduodenoscopy, computed tomography, and clinical signs. These complications were reviewed and classified based on the Clavien-Dindo classification system (CDC)[14]. Patients were matched using propensity score matching (PSM) analysis and surgical and oncological outcomes were evaluated.", "We performed TLTG with modified overlap method and OTG with circular stapled method, which was similar to our previous studies[3,10]. Both procedures were performed by a single experienced surgeon who conducts approximately 300 cases of gastrectomy annually.", "In the un-matched group, numerical variables were presented as the mean ± SD using the Student’s t-test or Kruskal-Wallis test. Categorical variables was performed using the Chi-square test. Univariate and multivariate analyses were performed for the entire patient cohort (un-matched group) using logistic regression. Variables were included in the multivariate analysis if their univariate significance was < 0.1.\nTo reduce the impact of treatment selection bias and potential confounding factors in this observational study, we performed rigorous adjustments for significant differences in the baseline characteristics of patients using the logistic regression models with generalized estimating equations (GEE) with a propensity score matched set. When that technique was used, the propensity scores were estimated without considering the outcomes using multiple logistic regression analysis. A full non-parsimonious model was developed that included all the variables shown in Table 1. 
Model discrimination was assessed using the C statistic and model calibration was evaluated using Hosmer-Lemeshow statistics. Overall, the model was well calibrated (Hosmer-Lemeshow test; P = 0.368) with reasonable discrimination (C statistic = 0.858). We matched the two groups (1:1 ratio) using a ‘greedy nearest-neighbour’ algorithm method. The Matching balance was measured based on the standardized mean differences. A > 10% difference in the absolute value was considered significantly imbalanced.\nPatient clinical characteristics\nValues are expressed as mean ± SD or n (%). PSM: Propensity score matching; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nIn the matched group, numerical variables were reported as the means ± SD using the paired t-test. Categorical variables were performed using McNemar's test or Marginal homogeniety test. To evaluate the association between type of surgery, and complication and survival (and recurrence), the propensity score adjusted model was applied. Finally, the logistic regression model with GEE was applied using propensity score-based matching. The Cox proportional hazards model was applied using propensity score-based matching with robust standard errors. All reported P values are two-sided; values < 0.05 were considered statistically significant. Data manipulation and statistical analyses were performed using SAS® Version 9.4 (SAS Institute Inc., Cary, NC, United States).", "Clinicopathological characteristics The clinical variables are summarized in Table 1. There was a significant difference in body mass index (BMI), tumor size, and pathologic tumor stage before PSM between groups (all P < 0.05); however, these differences disappeared after PSM. There were no statistically significant differences in all baseline variables included in the model between groups.\nThe clinical variables are summarized in Table 1. There was a significant difference in body mass index (BMI), tumor size, and pathologic tumor stage before PSM between groups (all P < 0.05); however, these differences disappeared after PSM. There were no statistically significant differences in all baseline variables included in the model between groups.\nSurgical outcomes and postoperative complications in PSM All surgical outcomes and postoperative complications are shown in Table 2. There was no significant difference in operation time between groups (P = 0.351). Patients who underwent TLTG had significantly lower pain scores on all postoperative days than patients who underwent OTG. Moreover, patients in the TLTG group required significantly less analgesic and opioid administration than in the OTG group. The TLTG group reported earlier time to first flatus (3.62 ± 0.84 d vs 4.15 ± 0.87 d, P = 0.002) and soft diet (4.62 ± 2.67 d vs 7.47 ± 7.92 d, P = 0.001). Furthermore, patients who underwent TLTG stayed statistically significantly fewer days at the hospital after surgery than patients who underwent OTG (9.62 ± 5.32 d vs 13.51 ± 10.67 d; P < 0.001). No significant differences in postoperative complications were noted between the two groups (P = 0.161).\nEarly surgical outcomes and pathologic data in patients undergoing the totally laparoscopic gastrectomy with the modified overlap method and open total gastrectomy\nValues are expressed as mean ± SD or n (%) or median (range). 
PSM: Propensity score matching; LN: Lymph node; PRM: Proximal resection margin; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nPostoperative complications, including EJ-related complications, are summarized in Table 3. There were no significant differences in the early and late postoperative overall complications between groups (P = 0.317 and P = 0.257, respectively). In addition, there was no difference in the incidence of patients with ≥ 3 CDC complications in the early and late postoperative periods between groups (P = 0.428 and P > 0.999, respectively). There was no significant differences in EJ-related complications. Table 4 shows the details of EJ-related complications. Five cases of EJ leakage were observed, and two cases of EJ bleeding were found. Four patients with CDC 3 complications required interventions such as endoscopic management and pigtail drainage, whereas 2 patients with CDC 2 complications fully recovered by conservative treatment. One postoperative mortality occurred due to EJ bleeding.\nPostoperative complications\nValues are expressed as mean ± SD or n (%). PSM: Propensity score matching; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy.\nCharacteristics of the patients with esophagojejunostomy-related complications\nTNM: Tumor-node-metastasis; CDC: Clavien-Dindo classification; F: Female; M: Male; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nAll surgical outcomes and postoperative complications are shown in Table 2. There was no significant difference in operation time between groups (P = 0.351). Patients who underwent TLTG had significantly lower pain scores on all postoperative days than patients who underwent OTG. Moreover, patients in the TLTG group required significantly less analgesic and opioid administration than in the OTG group. The TLTG group reported earlier time to first flatus (3.62 ± 0.84 d vs 4.15 ± 0.87 d, P = 0.002) and soft diet (4.62 ± 2.67 d vs 7.47 ± 7.92 d, P = 0.001). Furthermore, patients who underwent TLTG stayed statistically significantly fewer days at the hospital after surgery than patients who underwent OTG (9.62 ± 5.32 d vs 13.51 ± 10.67 d; P < 0.001). No significant differences in postoperative complications were noted between the two groups (P = 0.161).\nEarly surgical outcomes and pathologic data in patients undergoing the totally laparoscopic gastrectomy with the modified overlap method and open total gastrectomy\nValues are expressed as mean ± SD or n (%) or median (range). PSM: Propensity score matching; LN: Lymph node; PRM: Proximal resection margin; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nPostoperative complications, including EJ-related complications, are summarized in Table 3. There were no significant differences in the early and late postoperative overall complications between groups (P = 0.317 and P = 0.257, respectively). In addition, there was no difference in the incidence of patients with ≥ 3 CDC complications in the early and late postoperative periods between groups (P = 0.428 and P > 0.999, respectively). There was no significant differences in EJ-related complications. Table 4 shows the details of EJ-related complications. Five cases of EJ leakage were observed, and two cases of EJ bleeding were found. 
Four patients with CDC 3 complications required interventions such as endoscopic management and pigtail drainage, whereas 2 patients with CDC 2 complications fully recovered by conservative treatment. One postoperative mortality occurred due to EJ bleeding.\nPostoperative complications\nValues are expressed as mean ± SD or n (%). PSM: Propensity score matching; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy.\nCharacteristics of the patients with esophagojejunostomy-related complications\nTNM: Tumor-node-metastasis; CDC: Clavien-Dindo classification; F: Female; M: Male; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nOncologic outcomes of PSM There were no significant differences in the number of retrieved lymph nodes between groups (P = 0.713). The 5-year RFS and OS are shown in Figure 2. There were no significant differences in pathologic tumor stage between groups after PSM. The 5-year RFS rates of patients who underwent TLTG and OTG were 87.7% and 92.3%, respectively; however, these differences were no significant (P = 0.653). The 5-year OS rates of patients who underwent TLTG and OTG were 74.6% and 80.4%, respectively (P = 0.476).\n\nSurvival curves for matched patients. A: Overall survival; B: Recurrence free survival. PSM: Propensity score matching; LTG: Laparoscopic total gastrectomy; OTG: Open total gastrectomy; F/U: Follow up.\nThere were no significant differences in the number of retrieved lymph nodes between groups (P = 0.713). The 5-year RFS and OS are shown in Figure 2. There were no significant differences in pathologic tumor stage between groups after PSM. The 5-year RFS rates of patients who underwent TLTG and OTG were 87.7% and 92.3%, respectively; however, these differences were no significant (P = 0.653). The 5-year OS rates of patients who underwent TLTG and OTG were 74.6% and 80.4%, respectively (P = 0.476).\n\nSurvival curves for matched patients. A: Overall survival; B: Recurrence free survival. PSM: Propensity score matching; LTG: Laparoscopic total gastrectomy; OTG: Open total gastrectomy; F/U: Follow up.\nRisk factors for postoperative complications Tables 5 and 6 demonstrate the risk factors for postoperative complications after TLTG and OTG. BMI and American Society of Anaesthesiologists (ASA) scores were significantly associated with the occurrence of early complications in the univariate analysis. In addition, ASA score and age were significantly associated with the incidence of ≥ 3 CDC and EJ-related complications, respectively. Multivariate analysis demonstrated that BMI [odds ratio (OR), 1.824; 95% confidence interval (CI): 1.029-3.234, P = 0.040] and ASA score (OR, 3.154; 95%CI: 1.084-9.174, P = 0.035) were independent risk factors of early complications. Furthermore, multivariate analysis revealed that age was associated with ≥ 3 CDC and EJ-related complications.\nUnivariate analysis of risk factors for overall early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy; CI: Confidence interval; OR: Odds ratio.\nMultivariate analysis of risk factors for early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). 
TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostom; CI: Confidence interval; OR: Odds ratio; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification.\nTables 5 and 6 demonstrate the risk factors for postoperative complications after TLTG and OTG. BMI and American Society of Anaesthesiologists (ASA) scores were significantly associated with the occurrence of early complications in the univariate analysis. In addition, ASA score and age were significantly associated with the incidence of ≥ 3 CDC and EJ-related complications, respectively. Multivariate analysis demonstrated that BMI [odds ratio (OR), 1.824; 95% confidence interval (CI): 1.029-3.234, P = 0.040] and ASA score (OR, 3.154; 95%CI: 1.084-9.174, P = 0.035) were independent risk factors of early complications. Furthermore, multivariate analysis revealed that age was associated with ≥ 3 CDC and EJ-related complications.\nUnivariate analysis of risk factors for overall early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy; CI: Confidence interval; OR: Odds ratio.\nMultivariate analysis of risk factors for early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostom; CI: Confidence interval; OR: Odds ratio; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification.", "The clinical variables are summarized in Table 1. There was a significant difference in body mass index (BMI), tumor size, and pathologic tumor stage before PSM between groups (all P < 0.05); however, these differences disappeared after PSM. There were no statistically significant differences in all baseline variables included in the model between groups.", "All surgical outcomes and postoperative complications are shown in Table 2. There was no significant difference in operation time between groups (P = 0.351). Patients who underwent TLTG had significantly lower pain scores on all postoperative days than patients who underwent OTG. Moreover, patients in the TLTG group required significantly less analgesic and opioid administration than in the OTG group. The TLTG group reported earlier time to first flatus (3.62 ± 0.84 d vs 4.15 ± 0.87 d, P = 0.002) and soft diet (4.62 ± 2.67 d vs 7.47 ± 7.92 d, P = 0.001). Furthermore, patients who underwent TLTG stayed statistically significantly fewer days at the hospital after surgery than patients who underwent OTG (9.62 ± 5.32 d vs 13.51 ± 10.67 d; P < 0.001). No significant differences in postoperative complications were noted between the two groups (P = 0.161).\nEarly surgical outcomes and pathologic data in patients undergoing the totally laparoscopic gastrectomy with the modified overlap method and open total gastrectomy\nValues are expressed as mean ± SD or n (%) or median (range). PSM: Propensity score matching; LN: Lymph node; PRM: Proximal resection margin; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.\nPostoperative complications, including EJ-related complications, are summarized in Table 3. 
There were no significant differences in the early and late postoperative overall complications between groups (P = 0.317 and P = 0.257, respectively). In addition, there was no difference in the incidence of patients with ≥ 3 CDC complications in the early and late postoperative periods between groups (P = 0.428 and P > 0.999, respectively). There was no significant differences in EJ-related complications. Table 4 shows the details of EJ-related complications. Five cases of EJ leakage were observed, and two cases of EJ bleeding were found. Four patients with CDC 3 complications required interventions such as endoscopic management and pigtail drainage, whereas 2 patients with CDC 2 complications fully recovered by conservative treatment. One postoperative mortality occurred due to EJ bleeding.\nPostoperative complications\nValues are expressed as mean ± SD or n (%). PSM: Propensity score matching; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy.\nCharacteristics of the patients with esophagojejunostomy-related complications\nTNM: Tumor-node-metastasis; CDC: Clavien-Dindo classification; F: Female; M: Male; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy.", "There were no significant differences in the number of retrieved lymph nodes between groups (P = 0.713). The 5-year RFS and OS are shown in Figure 2. There were no significant differences in pathologic tumor stage between groups after PSM. The 5-year RFS rates of patients who underwent TLTG and OTG were 87.7% and 92.3%, respectively; however, these differences were no significant (P = 0.653). The 5-year OS rates of patients who underwent TLTG and OTG were 74.6% and 80.4%, respectively (P = 0.476).\n\nSurvival curves for matched patients. A: Overall survival; B: Recurrence free survival. PSM: Propensity score matching; LTG: Laparoscopic total gastrectomy; OTG: Open total gastrectomy; F/U: Follow up.", "Tables 5 and 6 demonstrate the risk factors for postoperative complications after TLTG and OTG. BMI and American Society of Anaesthesiologists (ASA) scores were significantly associated with the occurrence of early complications in the univariate analysis. In addition, ASA score and age were significantly associated with the incidence of ≥ 3 CDC and EJ-related complications, respectively. Multivariate analysis demonstrated that BMI [odds ratio (OR), 1.824; 95% confidence interval (CI): 1.029-3.234, P = 0.040] and ASA score (OR, 3.154; 95%CI: 1.084-9.174, P = 0.035) were independent risk factors of early complications. Furthermore, multivariate analysis revealed that age was associated with ≥ 3 CDC and EJ-related complications.\nUnivariate analysis of risk factors for overall early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy; CI: Confidence interval; OR: Odds ratio.\nMultivariate analysis of risk factors for early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications\nValues are expressed as mean ± SD or n (%). 
TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostom; CI: Confidence interval; OR: Odds ratio; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification.", "To the best of our knowledge, this is the first study to compare feasibility and oncological outcomes between patients who underwent TLTG with the modified overlap method and OTG. This study demonstrated that TLTG with the modified overlap method is a technically safe procedure based on acceptable postoperative complications, including EJ-related complications.\nThe overlap method is a widely used EJ reconstruction method in TLTG, it which can lessen the tension in the anastomosis and reduce mesentery division. This secures additional jejunum length for anastomosis[15,16]. This method involves a linear stapler for anastomosis; therefore, the area around the abdominal esophagus requires sufficient dissection. Furthermore, and space in the hiatus and length of the esophagus in which the stapler will be placed should be secured. This may lead to tension in the esophagus after anastomosis and hiatal hernia caused by excessive hiatus dissection. We have devised a novel method to minimize these risks, named the modified overlap method. We use a linear stapler; however, compared with the existing side-to-side anastomosis, less esophageal dissection is required. Further, anastomosis is completed obliquely at 45°; therefore, the resulting anastomosis is similar to when a circular stapler is used because end to side anastomosis is possible. This study proved the TLTG with this modified overlap method showed no significant difference in EJ complications when compared with OTG.\nA previous comparative study of LTG and OTG reported similar EJ anastomotic complications; however, a previous large multicenter cohort study in Japan has shown that open surgery is safer for EJ reconstruction[11,12,17]. This indicates that controversy remains regarding the superior method for EJ anastomosis between OTG and LTG. Furthermore, international treatment guidelines, including the Korean gastric cancer treatment and Japanese gastric cancer treatment guidelines, do not yet recognize LTG as a standard treatment[18,19]. Nonetheless, our data indicate that a randomized clinical trial assessing the surgical and oncological outcomes using the modified overlap method should be conducted to confirm its safety and efficacy.\nIn this study, we overcame operative time and lymphadenectomy issues using the TLTG with the modified overlap method. First, laparoscopic gastrectomy surgery is longer than open gastrectomy[2,20,21]. However, the institution in which this study was conducted is a high-volume center where more than a thousand laparoscopic gastrectomies are performed annually. The lead surgeon in this study performs > 300 gastric cancer operations per year. All surgical team members in this institution are skilled and experienced; therefore, we predicted a reduced operative time while maintaining acceptable surgical and oncological outcomes. However, it may be difficult to apply the results of this study to low-volume centers or inexperienced surgeons.\nSecond, lymph node dissection is an important procedure in gastric cancer surgery because the oncologic outcome is dependent on a proper lymphadenectomy[22]. The AJCC recommends that ≥ 30 lymph nodes be removed for lymphadenectomy in gastric cancer[23]. 
In this study, there was no statistically significant difference in the number of retrieved lymph nodes between groups; ≥ 30 lymph nodes were removed in both groups. In addition, this study showed that the 5-year overall and RFS after PSM analysis did not differ between groups.\nIn general, the risk factors associated with surgical complications in laparoscopic gastrectomy are comorbidity, surgeon experience, age, malnutrition, gender, and chronic liver disease[24-26]. Most studies have included and analyzed patients who underwent total and distal gastrectomies. Fewer studies have analyzed total gastrectomy alone. Kosuga et al[25] and Martin et al[26] have classified total gastrectomy as a risk factor for complications (OR 1.63 and 3.13, respectively). This indicates that it is important to evaluate risk factors relative to limited total gastrectomy. Li et al[27] have shown that old age combined with splenectomy is a risk factor for overall complications after total gastrectomy. In this study, preoperative BMI and ASA scores were risk factors associated with early complications. Further, old age was a risk factor associated with EJ complications. Patients over 60-year-old with a BMI over 25 and ASA scores of ≥ 3 were more likely to have surgical complications; therefore, caution is required during surgery and careful perioperative management is necessary.\nThis study has some limitations. First, it is a retrospective study performed by a single experienced surgeon at a high-volume center. Therefore, our method might not be appropriate for relatively inexperienced surgeons or small-volume institutes. Second, the number of enrolled patients is relatively small; therefore, subgroup analysis, such as distinguishing between early gastric cancer and advanced gastric cancer or grouping by stage, was not possible.", "In conclusion, we confirmed that TLTG with the modified overlap method had several advantages over OTG. These included a lower pain score, earlier bowel recovery, and discharge based on acceptable postoperative complications and oncological outcomes." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null ]
[ "Laparoscopic surgery", "Gastrectomy", "Anastomosis", "Stomach neoplasms", "Totally laparoscopic total gastrectomy" ]
INTRODUCTION: Laparoscopic total gastrectomy (LTG) is increasingly used to treat upper or middle third gastric cancer because it allows earlier recovery and is considered less invasive[1-3]. Of the entire LTG procedure, esophagojejunal reconstruction is the most crucial step, because failure of the esophagojejunostomy (EJ), such as leakage or stricture, causes considerable patient morbidity[4]. When performing EJ, linear stapled methods, such as the overlap and functional methods, are widely adopted because they are simpler than the circular stapled method[5-9]. Nonetheless, the linear stapled method has a fundamental drawback: it requires a larger dissection space around the distal esophagus than the circular stapled method because the linear stapler must be inserted through the abdominal hiatus. Recently, we developed a modified overlap method for totally laparoscopic total gastrectomy (TLTG) to overcome these disadvantages of the linear stapled method[10]. This method is performed with an intracorporeal side-to-side esophagojejunal anastomosis using a 45-mm linear stapler at 45° from the longitudinal direction of the esophagus (Figure 1). This procedure requires less dissection around the abdominal esophagus; therefore, it can create a secure esophagojejunal anastomosis with reduced tension, as with the circular stapled method. Two types of intracorporeal esophagojejunostomy methods. A: The conventional overlap method; B: The modified overlap method. Several studies have investigated the surgical outcomes of TLTG compared with open total gastrectomy (OTG), including EJ-related complications[11,12]. However, to the best of our knowledge, few studies have compared TLTG with the overlap method against OTG with the circular stapled method, including oncological outcomes. The aim of the present study was to investigate the technical feasibility and oncological outcome of TLTG with the modified overlap method compared with OTG with the circular stapled method in the treatment of upper or middle third gastric cancer. MATERIALS AND METHODS: Patients: This study was approved by the institutional review board of the Asan Medical Center. We retrospectively reviewed and analyzed the collected data of 462 patients who underwent curative TLTG (n = 178) or OTG (n = 284) as treatment for upper or middle third gastric cancer between March 2012 and December 2018 at Asan Medical Center. We excluded patients who received neoadjuvant chemotherapy, were diagnosed with esophagogastric junction cancer, underwent resection of additional organs other than the gallbladder, or did not have a gastric cancer diagnosis. Finally, 151 and 131 patients who underwent TLTG and OTG, respectively, were enrolled. All TLTG procedures were performed as TLTG with the modified overlap method. We evaluated TNM (tumor-node-metastasis) stage using the American Joint Committee on Cancer (AJCC) staging system, 7th edition[13]. Clinical characteristics and pathologic data were compared between the TLTG and OTG groups. Additionally, we evaluated surgical outcomes, including EJ-related complications, and oncologic outcomes, including recurrence-free survival (RFS) and overall survival (OS). Early and late complications were defined as events occurring within or after 30 d postoperatively, respectively. EJ complications, including bleeding, leakage, and stricture, were diagnosed via upper gastrointestinal series, esophagogastroduodenoscopy, computed tomography, and clinical signs. 
These complications were reviewed and classified based on the Clavien-Dindo classification system (CDC)[14]. Patients were matched using propensity score matching (PSM) analysis, and surgical and oncological outcomes were evaluated. Surgical technique of anastomosis: We performed TLTG with the modified overlap method and OTG with the circular stapled method, as in our previous studies[3,10]. Both procedures were performed by a single experienced surgeon who conducts approximately 300 gastrectomies annually. Statistical analysis: In the un-matched group, numerical variables were presented as mean ± SD and compared using Student's t-test or the Kruskal-Wallis test. Categorical variables were compared using the chi-square test. Univariate and multivariate analyses were performed for the entire patient cohort (un-matched group) using logistic regression. Variables were included in the multivariate analysis if their univariate P value was < 0.1. To reduce the impact of treatment selection bias and potential confounding factors in this observational study, we performed rigorous adjustments for significant differences in the baseline characteristics of patients using logistic regression models with generalized estimating equations (GEE) on a propensity score matched set. With this technique, the propensity scores were estimated without considering the outcomes, using multiple logistic regression analysis. A full non-parsimonious model was developed that included all the variables shown in Table 1. Model discrimination was assessed using the C statistic, and model calibration was evaluated using Hosmer-Lemeshow statistics. 
Overall, the model was well calibrated (Hosmer-Lemeshow test; P = 0.368) with reasonable discrimination (C statistic = 0.858). We matched the two groups (1:1 ratio) using a 'greedy nearest-neighbor' algorithm. Matching balance was assessed based on standardized mean differences; a > 10% difference in absolute value was considered significantly imbalanced. Patient clinical characteristics Values are expressed as mean ± SD or n (%). PSM: Propensity score matching; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy. In the matched group, numerical variables were reported as mean ± SD and compared using the paired t-test. Categorical variables were compared using McNemar's test or the marginal homogeneity test. To evaluate the association between type of surgery and complications, survival, and recurrence, the propensity score-adjusted model was applied. Finally, the logistic regression model with GEE was applied using propensity score-based matching. The Cox proportional hazards model was applied using propensity score-based matching with robust standard errors. All reported P values are two-sided; values < 0.05 were considered statistically significant. Data manipulation and statistical analyses were performed using SAS® Version 9.4 (SAS Institute Inc., Cary, NC, United States).
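The matching pipeline just described (a logistic propensity model, 1:1 greedy nearest-neighbor matching, then a pair-clustered GEE logistic regression) was carried out in SAS 9.4. For readers who want to reproduce the idea, the following is a minimal Python sketch. It is not the authors' code: the column names ("tltg", "complication", the covariate list), the random processing order of treated subjects, and the caliper handling are all illustrative assumptions.

```python
# Minimal sketch of propensity score matching with greedy 1:1 nearest-neighbor
# matching, followed by a GEE logistic model clustered on matched pairs.
# Column names and covariates are hypothetical; covariates are assumed to be
# numerically encoded already.
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

def greedy_nearest_neighbor_match(ps_treated, ps_control, caliper=None):
    """1:1 greedy matching on the propensity score.

    Treated subjects are processed in random order (one common convention);
    each takes the closest still-unmatched control. Returns (treated, control)
    index pairs.
    """
    controls = dict(ps_control)  # index -> propensity score
    pairs = []
    for t_idx, t_ps in ps_treated.sample(frac=1, random_state=0).items():
        if not controls:
            break
        c_idx = min(controls, key=lambda c: abs(controls[c] - t_ps))
        if caliper is None or abs(controls[c_idx] - t_ps) <= caliper:
            pairs.append((t_idx, c_idx))
            del controls[c_idx]
    return pairs

# df: one row per patient; "tltg" = 1 for TLTG, 0 for OTG (assumed layout).
covariates = ["age", "sex", "bmi", "asa", "tumor_size", "stage"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["tltg"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

pairs = greedy_nearest_neighbor_match(
    df.loc[df["tltg"] == 1, "ps"], df.loc[df["tltg"] == 0, "ps"])

# Stack matched pairs and label each pair so it can act as a GEE cluster.
matched = pd.concat(
    [df.loc[[t, c]].assign(pair_id=i) for i, (t, c) in enumerate(pairs)])

# Logistic regression with GEE on the matched set: pairs are the clusters,
# so standard errors account for within-pair correlation.
gee = sm.GEE.from_formula(
    "complication ~ tltg", groups="pair_id", data=matched,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```

In practice one would also recheck standardized mean differences on the matched set, mirroring the > 10% imbalance criterion used above.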
RESULTS: Clinicopathological characteristics: The clinical variables are summarized in Table 1. There were significant differences in body mass index (BMI), tumor size, and pathologic tumor stage between groups before PSM (all P < 0.05); however, these differences disappeared after PSM. There were no statistically significant differences between groups in any baseline variable included in the model. Surgical outcomes and postoperative complications in PSM: All surgical outcomes and postoperative complications are shown in Table 2. There was no significant difference in operation time between groups (P = 0.351). Patients who underwent TLTG had significantly lower pain scores on all postoperative days than patients who underwent OTG. Moreover, patients in the TLTG group required significantly less analgesic and opioid administration than those in the OTG group. The TLTG group reported earlier time to first flatus (3.62 ± 0.84 d vs 4.15 ± 0.87 d, P = 0.002) and soft diet (4.62 ± 2.67 d vs 7.47 ± 7.92 d, P = 0.001). Furthermore, patients who underwent TLTG stayed significantly fewer days in the hospital after surgery than patients who underwent OTG (9.62 ± 5.32 d vs 13.51 ± 10.67 d; P < 0.001). No significant differences in postoperative complications were noted between the two groups (P = 0.161). Early surgical outcomes and pathologic data in patients undergoing totally laparoscopic gastrectomy with the modified overlap method and open total gastrectomy Values are expressed as mean ± SD or n (%) or median (range). PSM: Propensity score matching; LN: Lymph node; PRM: Proximal resection margin; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy. Postoperative complications, including EJ-related complications, are summarized in Table 3. There were no significant differences in early and late postoperative overall complications between groups (P = 0.317 and P = 0.257, respectively). 
In addition, there was no difference in the incidence of patients with ≥ 3 CDC complications in the early and late postoperative periods between groups (P = 0.428 and P > 0.999, respectively). There were no significant differences in EJ-related complications. Table 4 shows the details of EJ-related complications. Five cases of EJ leakage and two cases of EJ bleeding were observed. Four patients with CDC 3 complications required interventions such as endoscopic management and pigtail drainage, whereas two patients with CDC 2 complications recovered fully with conservative treatment. One postoperative mortality occurred due to EJ bleeding. Postoperative complications Values are expressed as mean ± SD or n (%). PSM: Propensity score matching; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy. Characteristics of the patients with esophagojejunostomy-related complications TNM: Tumor-node-metastasis; CDC: Clavien-Dindo classification; F: Female; M: Male; TLTG: Totally laparoscopic gastrectomy; OTG: Open total gastrectomy. Oncologic outcomes of PSM: There were no significant differences in the number of retrieved lymph nodes between groups (P = 0.713). The 5-year RFS and OS are shown in Figure 2. There were no significant differences in pathologic tumor stage between groups after PSM. The 5-year RFS rates of patients who underwent TLTG and OTG were 87.7% and 92.3%, respectively; however, this difference was not significant (P = 0.653). The 5-year OS rates of patients who underwent TLTG and OTG were 74.6% and 80.4%, respectively (P = 0.476). Survival curves for matched patients. A: Overall survival; B: Recurrence-free survival. PSM: Propensity score matching; LTG: Laparoscopic total gastrectomy; OTG: Open total gastrectomy; F/U: Follow up. Risk factors for postoperative complications: Tables 5 and 6 demonstrate the risk factors for postoperative complications after TLTG and OTG. BMI and American Society of Anesthesiologists (ASA) scores were significantly associated with the occurrence of early complications in the univariate analysis. In addition, ASA score and age were significantly associated with the incidence of ≥ 3 CDC and EJ-related complications, respectively. Multivariate analysis demonstrated that BMI [odds ratio (OR), 1.824; 95% confidence interval (CI): 1.029-3.234, P = 0.040] and ASA score (OR, 3.154; 95%CI: 1.084-9.174, P = 0.035) were independent risk factors for early complications. Furthermore, multivariate analysis revealed that age was associated with ≥ 3 CDC and EJ-related complications. Univariate analysis of risk factors for overall early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications Values are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy; CI: Confidence interval; OR: Odds ratio. Multivariate analysis of risk factors for early, Clavien-Dindo classification ≥ 3, and esophagojejunostomy-related complications Values are expressed as mean ± SD or n (%). TLTG: Totally laparoscopic total gastrectomy; OTG: Open total gastrectomy; CDC: Clavien-Dindo classification; EJ: Esophagojejunostomy; CI: Confidence interval; OR: Odds ratio; BMI: Body mass index; ASA score: American Society of Anesthesiologists Physical Status Classification. 
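The survival comparison above (Cox model with robust standard errors on propensity-matched pairs, plus the Kaplan-Meier curves of Figure 2) can be sketched in Python with the lifelines package. This is illustrative only, not the authors' SAS workflow; it assumes the "matched" data frame from the matching sketch earlier, with hypothetical columns "time" (follow-up in months), "recurrence" (event indicator), "tltg", and "pair_id".

```python
# Illustrative matched survival comparison; column names are assumptions.
from lifelines import CoxPHFitter, KaplanMeierFitter

cph = CoxPHFitter()
# cluster_col makes lifelines use the robust sandwich variance estimator,
# mirroring "robust standard errors" over the propensity-matched pairs.
cph.fit(matched[["time", "recurrence", "tltg", "pair_id"]],
        duration_col="time", event_col="recurrence",
        cluster_col="pair_id")
cph.print_summary()

# Kaplan-Meier curves by treatment group, as in Figure 2.
for label, grp in matched.groupby("tltg"):
    KaplanMeierFitter().fit(grp["time"], grp["recurrence"],
                            label=f"tltg={label}").plot_survival_function()
```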
DISCUSSION: To the best of our knowledge, this is the first study to compare the feasibility and oncological outcomes of TLTG with the modified overlap method and OTG. This study demonstrated that TLTG with the modified overlap method is a technically safe procedure based on acceptable postoperative complications, including EJ-related complications. The overlap method is a widely used EJ reconstruction method in TLTG, which can lessen tension in the anastomosis and reduce mesentery division. This secures additional jejunum length for the anastomosis[15,16]. This method involves a linear stapler for anastomosis; therefore, the area around the abdominal esophagus requires sufficient dissection. Furthermore, space in the hiatus and a sufficient length of esophagus in which the stapler will be placed must be secured. This may lead to tension on the esophagus after anastomosis and to hiatal hernia caused by excessive hiatal dissection. We have devised a novel method to minimize these risks, named the modified overlap method. We use a linear stapler; however, compared with the existing side-to-side anastomosis, less esophageal dissection is required. Further, the anastomosis is completed obliquely at 45°; therefore, the resulting anastomosis is similar to that obtained with a circular stapler because an end-to-side anastomosis is possible. This study showed that TLTG with the modified overlap method had no significant difference in EJ complications compared with OTG. A previous comparative study of LTG and OTG reported similar EJ anastomotic complications; however, a large multicenter cohort study in Japan showed that open surgery is safer for EJ reconstruction[11,12,17]. This indicates that controversy remains regarding the superior method for EJ anastomosis between OTG and LTG. Furthermore, international treatment guidelines, including the Korean and Japanese gastric cancer treatment guidelines, do not yet recognize LTG as a standard treatment[18,19]. Nonetheless, our data indicate that a randomized clinical trial assessing the surgical and oncological outcomes of the modified overlap method should be conducted to confirm its safety and efficacy. In this study, we overcame operative time and lymphadenectomy issues using TLTG with the modified overlap method. First, laparoscopic gastrectomy generally takes longer than open gastrectomy[2,20,21]. However, the institution in which this study was conducted is a high-volume center where more than a thousand laparoscopic gastrectomies are performed annually. The lead surgeon in this study performs > 300 gastric cancer operations per year. All surgical team members in this institution are skilled and experienced; therefore, we predicted a reduced operative time while maintaining acceptable surgical and oncological outcomes. However, it may be difficult to apply the results of this study to low-volume centers or inexperienced surgeons. Second, lymph node dissection is an important procedure in gastric cancer surgery because the oncologic outcome depends on a proper lymphadenectomy[22]. The AJCC recommends that ≥ 30 lymph nodes be removed for lymphadenectomy in gastric cancer[23]. In this study, there was no statistically significant difference in the number of retrieved lymph nodes between groups; ≥ 30 lymph nodes were removed in both groups. In addition, this study showed that the 5-year overall and recurrence-free survival after PSM analysis did not differ between groups. 
In general, the risk factors associated with surgical complications in laparoscopic gastrectomy are comorbidity, surgeon experience, age, malnutrition, gender, and chronic liver disease[24-26]. Most studies have included and analyzed patients who underwent total and distal gastrectomies. Fewer studies have analyzed total gastrectomy alone. Kosuga et al[25] and Martin et al[26] have classified total gastrectomy as a risk factor for complications (OR 1.63 and 3.13, respectively). This indicates that it is important to evaluate risk factors in cohorts limited to total gastrectomy. Li et al[27] have shown that old age combined with splenectomy is a risk factor for overall complications after total gastrectomy. In this study, preoperative BMI and ASA scores were risk factors associated with early complications. Further, old age was a risk factor associated with EJ complications. Patients over 60 years of age with a BMI over 25 and ASA scores of ≥ 3 were more likely to have surgical complications; therefore, caution is required during surgery and careful perioperative management is necessary. This study has some limitations. First, it is a retrospective study performed by a single experienced surgeon at a high-volume center. Therefore, our method might not be appropriate for relatively inexperienced surgeons or small-volume institutes. Second, the number of enrolled patients is relatively small; therefore, subgroup analysis, such as distinguishing between early and advanced gastric cancer or grouping by stage, was not possible. CONCLUSION: In conclusion, we confirmed that TLTG with the modified overlap method had several advantages over OTG. These included lower pain scores, earlier bowel recovery, and earlier discharge, with acceptable postoperative complications and oncological outcomes.
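The odds ratios and confidence intervals quoted for BMI and ASA score come directly from the logistic model coefficients: OR = exp(beta) and the 95% CI bounds are exp(beta ± 1.96 × SE). As a small arithmetic check, the beta and SE below are back-calculated from the reported BMI result (OR 1.824, 95%CI: 1.029-3.234) purely to show the conversion; they are not taken from the paper.

```python
# Converting a logistic regression coefficient to an odds ratio with a 95% CI.
# beta and se here are back-solved from the reported BMI odds ratio for
# illustration, not values published by the authors.
import math

def odds_ratio_ci(beta, se, z=1.96):
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

or_, lo, hi = odds_ratio_ci(beta=0.601, se=0.292)
print(f"OR = {or_:.3f}, 95% CI: {lo:.3f}-{hi:.3f}")
# -> OR = 1.824, 95% CI: 1.029-3.233 (matching the reported BMI figures)
```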
Background: Although several methods of totally laparoscopic total gastrectomy (TLTG) have been reported, the best anastomosis technique for LTG has not been established. Methods: We performed 151 and 131 surgeries using TLTG with the modified overlap method and open total gastrectomy (OTG), respectively, for gastric cancer between March 2012 and December 2018. Surgical and oncological outcomes were compared between groups using propensity score matching. In addition, we analyzed the risk factors associated with postoperative complications. Results: Patients who underwent TLTG were discharged earlier than those who underwent OTG [TLTG (9.62 ± 5.32 d) vs OTG (13.51 ± 10.67 d), P < 0.05]. Time to first flatus and soft diet were significantly shorter in the TLTG group. The pain scores at all postoperative periods and administration of opioids were significantly lower in the TLTG group than in the OTG group. There was no significant difference in early, late, and esophagojejunostomy (EJ)-related complications or in 5-year recurrence-free and overall survival between groups. Multivariate analysis demonstrated that body mass index [odds ratio (OR), 1.824; 95% confidence interval (CI): 1.029-3.234, P = 0.040] and American Society of Anesthesiologists (ASA) score (OR, 3.154; 95%CI: 1.084-9.174, P = 0.035) were independent risk factors of early complications. Additionally, age was associated with Clavien-Dindo classification ≥ 3 and EJ-related complications. Conclusions: Although TLTG with the modified overlap method showed complication rates and oncological outcomes similar to those of OTG, it yielded lower pain scores, earlier bowel recovery, and earlier discharge. Surgeons should perform total gastrectomy cautiously and delicately in patients with obesity, high ASA scores, and older ages.
INTRODUCTION: Laparoscopic total gastrectomy (LTG) is increasingly used to treat upper or middle third gastric cancer because it allows earlier recovery and is considered less invasive[1-3]. Of the entire LTG procedure, esophagojejunal reconstruction is the most crucial step, because failure of the esophagojejunostomy (EJ), such as leakage or stricture, causes considerable patient morbidity[4]. When performing EJ, linear stapled methods, such as the overlap and functional methods, are widely adopted because they are simpler than the circular stapled method[5-9]. Nonetheless, the linear stapled method has a fundamental drawback: it requires a larger dissection space around the distal esophagus than the circular stapled method because the linear stapler must be inserted through the abdominal hiatus. Recently, we developed a modified overlap method for totally laparoscopic total gastrectomy (TLTG) to overcome these disadvantages of the linear stapled method[10]. This method is performed with an intracorporeal side-to-side esophagojejunal anastomosis using a 45-mm linear stapler at 45° from the longitudinal direction of the esophagus (Figure 1). This procedure requires less dissection around the abdominal esophagus; therefore, it can create a secure esophagojejunal anastomosis with reduced tension, as with the circular stapled method. Two types of intracorporeal esophagojejunostomy methods. A: The conventional overlap method; B: The modified overlap method. Several studies have investigated the surgical outcomes of TLTG compared with open total gastrectomy (OTG), including EJ-related complications[11,12]. However, to the best of our knowledge, few studies have compared TLTG with the overlap method against OTG with the circular stapled method, including oncological outcomes. The aim of the present study was to investigate the technical feasibility and oncological outcome of TLTG with the modified overlap method compared with OTG with the circular stapled method in the treatment of upper or middle third gastric cancer. CONCLUSION: Based on our results, we confirmed that TLTG with the modified overlap method has several advantages over OTG. However, this study has certain limitations. It is a retrospective study performed by a single experienced surgeon at a high-volume center, and the number of enrolled patients is relatively small.
Background: Although several methods of totally laparoscopic total gastrectomy (TLTG) have been reported, the best anastomosis technique for LTG has not been established. Methods: We performed 151 and 131 surgeries using TLTG with the modified overlap method and open total gastrectomy (OTG), respectively, for gastric cancer between March 2012 and December 2018. Surgical and oncological outcomes were compared between groups using propensity score matching. In addition, we analyzed the risk factors associated with postoperative complications. Results: Patients who underwent TLTG were discharged earlier than those who underwent OTG [TLTG (9.62 ± 5.32 d) vs OTG (13.51 ± 10.67 d), P < 0.05]. Time to first flatus and soft diet were significantly shorter in the TLTG group. The pain scores at all postoperative periods and administration of opioids were significantly lower in the TLTG group than in the OTG group. There was no significant difference in early, late, and esophagojejunostomy (EJ)-related complications or in 5-year recurrence-free and overall survival between groups. Multivariate analysis demonstrated that body mass index [odds ratio (OR), 1.824; 95% confidence interval (CI): 1.029-3.234, P = 0.040] and American Society of Anesthesiologists (ASA) score (OR, 3.154; 95%CI: 1.084-9.174, P = 0.035) were independent risk factors of early complications. Additionally, age was associated with Clavien-Dindo classification ≥ 3 and EJ-related complications. Conclusions: Although TLTG with the modified overlap method showed complication rates and oncological outcomes similar to those of OTG, it yielded lower pain scores, earlier bowel recovery, and earlier discharge. Surgeons should perform total gastrectomy cautiously and delicately in patients with obesity, high ASA scores, and older ages.
6,654
318
[ 350, 278, 41, 448, 2021, 65, 486, 152, 295, 864, 40 ]
12
[ "complications", "tltg", "patients", "gastrectomy", "otg", "ej", "method", "total", "score", "total gastrectomy" ]
[ "esophagus stapler placed", "esophagojejunostomy methods conventional", "total gastrectomy ltg", "gastrectomy ltg increasingly", "overlap method laparoscopic" ]
null
[CONTENT] Laparoscopic surgery | Gastrectomy | Anastomosis | Stomach neoplasms | Totally laparoscopic total gastrectomy [SUMMARY]
[CONTENT] Laparoscopic surgery | Gastrectomy | Anastomosis | Stomach neoplasms | Totally laparoscopic total gastrectomy [SUMMARY]
null
[CONTENT] Laparoscopic surgery | Gastrectomy | Anastomosis | Stomach neoplasms | Totally laparoscopic total gastrectomy [SUMMARY]
[CONTENT] Laparoscopic surgery | Gastrectomy | Anastomosis | Stomach neoplasms | Totally laparoscopic total gastrectomy [SUMMARY]
[CONTENT] Laparoscopic surgery | Gastrectomy | Anastomosis | Stomach neoplasms | Totally laparoscopic total gastrectomy [SUMMARY]
[CONTENT] Aged | Anastomosis, Surgical | Gastrectomy | Humans | Laparoscopy | Middle Aged | Neoplasm Recurrence, Local | Postoperative Complications | Stomach Neoplasms | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Anastomosis, Surgical | Gastrectomy | Humans | Laparoscopy | Middle Aged | Neoplasm Recurrence, Local | Postoperative Complications | Stomach Neoplasms | Treatment Outcome [SUMMARY]
null
[CONTENT] Aged | Anastomosis, Surgical | Gastrectomy | Humans | Laparoscopy | Middle Aged | Neoplasm Recurrence, Local | Postoperative Complications | Stomach Neoplasms | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Anastomosis, Surgical | Gastrectomy | Humans | Laparoscopy | Middle Aged | Neoplasm Recurrence, Local | Postoperative Complications | Stomach Neoplasms | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Anastomosis, Surgical | Gastrectomy | Humans | Laparoscopy | Middle Aged | Neoplasm Recurrence, Local | Postoperative Complications | Stomach Neoplasms | Treatment Outcome [SUMMARY]
[CONTENT] esophagus stapler placed | esophagojejunostomy methods conventional | total gastrectomy ltg | gastrectomy ltg increasingly | overlap method laparoscopic [SUMMARY]
[CONTENT] esophagus stapler placed | esophagojejunostomy methods conventional | total gastrectomy ltg | gastrectomy ltg increasingly | overlap method laparoscopic [SUMMARY]
null
[CONTENT] esophagus stapler placed | esophagojejunostomy methods conventional | total gastrectomy ltg | gastrectomy ltg increasingly | overlap method laparoscopic [SUMMARY]
[CONTENT] esophagus stapler placed | esophagojejunostomy methods conventional | total gastrectomy ltg | gastrectomy ltg increasingly | overlap method laparoscopic [SUMMARY]
[CONTENT] esophagus stapler placed | esophagojejunostomy methods conventional | total gastrectomy ltg | gastrectomy ltg increasingly | overlap method laparoscopic [SUMMARY]
[CONTENT] complications | tltg | patients | gastrectomy | otg | ej | method | total | score | total gastrectomy [SUMMARY]
[CONTENT] complications | tltg | patients | gastrectomy | otg | ej | method | total | score | total gastrectomy [SUMMARY]
null
[CONTENT] complications | tltg | patients | gastrectomy | otg | ej | method | total | score | total gastrectomy [SUMMARY]
[CONTENT] complications | tltg | patients | gastrectomy | otg | ej | method | total | score | total gastrectomy [SUMMARY]
[CONTENT] complications | tltg | patients | gastrectomy | otg | ej | method | total | score | total gastrectomy [SUMMARY]
[CONTENT] method | stapled | stapled method | linear | circular stapled | circular stapled method | circular | esophagojejunal | overlap | stapler [SUMMARY]
[CONTENT] test | model | performed | variables | propensity | matched | logistic regression | regression | logistic | propensity score [SUMMARY]
null
[CONTENT] lower pain score earlier | bowel recovery discharge | lower pain score | conclusion confirmed tltg modified | bowel | method advantages otg included | postoperative complications oncological outcomes | postoperative complications oncological | earlier bowel recovery discharge | advantages otg included lower [SUMMARY]
[CONTENT] complications | tltg | patients | method | gastrectomy | otg | ej | significant | differences | score [SUMMARY]
[CONTENT] complications | tltg | patients | method | gastrectomy | otg | ej | significant | differences | score [SUMMARY]
[CONTENT] ||| LTG [SUMMARY]
[CONTENT] 151 | 131 | TLTG | March 2012 | December 2018 ||| ||| [SUMMARY]
null
[CONTENT] TLTG ||| [SUMMARY]
[CONTENT] ||| LTG ||| 151 | 131 | TLTG | March 2012 | December 2018 ||| ||| ||| ||| TLTG ||| TLTG | 9.62 | 5.32 | OTG | 13.51 | 10.67 | 0.05 ||| first | TLTG ||| TLTG ||| 5-year ||| 1.824 | 95% | CI | 1.029 | 0.040 | American Society of Anaesthesiologists | 3.154 | 1.084 | 0.035 ||| ≥ | 3 ||| TLTG ||| [SUMMARY]
[CONTENT] ||| LTG ||| 151 | 131 | TLTG | March 2012 | December 2018 ||| ||| ||| ||| TLTG ||| TLTG | 9.62 | 5.32 | OTG | 13.51 | 10.67 | 0.05 ||| first | TLTG ||| TLTG ||| 5-year ||| 1.824 | 95% | CI | 1.029 | 0.040 | American Society of Anaesthesiologists | 3.154 | 1.084 | 0.035 ||| ≥ | 3 ||| TLTG ||| [SUMMARY]
Percutaneous transforaminal endoscopic surgery (PTES) for symptomatic lumbar disc herniation: a surgical technique, outcome, and complications in 209 consecutive cases.
28178992
We designed an easy posterolateral transforaminal endoscopic decompression technique, termed PTES, for radiculopathy secondary to lumbar disc herniation. The purpose of this study is to describe the PTES technique, to evaluate its efficacy and safety for the treatment of lumbar disc herniation, including primary herniation, reherniation, intracanal herniation, and extracanal herniation, and to report outcomes and complications.
BACKGROUND
PTES was performed to treat 209 cases of intracanal or extracanal herniation, with or without extruded or sequestrated fragments, high iliac crest, scoliosis, calcification, or cauda equina syndrome, including recurrent herniation after previous surgical intervention at the index level and adjacent disc herniation after decompression and fusion. Preoperative and postoperative leg pain was evaluated using the 10-point visual analog scale (VAS), and the results were rated as excellent, good, fair, or poor according to the MacNab classification at 2-year follow-up.
METHODS
The patients were followed for an average of 26.3 ± 2.3 months. The VAS score of leg pain significantly dropped from 9 (6-10) before the operation to 1 (0-3) (P < 0.001) immediately after the operation and to 0 (0-3) (P < 0.001) 2 years after the operation. At 2-year follow-up, 95.7% (200/209) of the patients showed excellent or good outcomes, 2.9% (6/209) fair, and 1.4% (3/209) poor. No patient had any form of permanent iatrogenic nerve damage or a major complication, although there was one case of infection and one case of recurrence.
RESULTS
PTES for lumbar disc herniation is an effective and safe method with simple orientation, easy puncture, reduced steps, and little X-ray exposure, which can be applied in almost all kinds of lumbar disc herniation, including L5/S1 level with high iliac crest, herniation with scoliosis or calcification, recurrent herniation, and adjacent disc herniation after decompression and fusion. The learning curve is no longer steep for surgeons.
CONCLUSIONS
[ "Adult", "Aged", "Decompression, Surgical", "Diskectomy, Percutaneous", "Endoscopy", "Female", "Humans", "Intervertebral Disc Displacement", "Lumbar Vertebrae", "Magnetic Resonance Imaging", "Male", "Middle Aged", "Minimally Invasive Surgical Procedures", "Radiculopathy", "Tomography, X-Ray Computed", "Treatment Outcome", "Visual Analog Scale" ]
5299691
Background
The radicular syndrome caused by lumbar disc herniation compressing neurologic elements is a clear indication for surgical decompression. In recent decades, posterolateral transforaminal endoscopic surgery has been developed as a minimally invasive technique to perform discectomy for neurologic decompression under direct view and local anesthesia, including YESS (Yeung Endoscopy Spine Surgery) [1–5] and TESS (Transforaminal Endoscopic Spine Surgery) [6–11]. There was a high percentage of patient satisfaction and a low rate of complications in YESS or TESS for lumbar disc herniation [1–11]. Compared with traditional lumbar discectomy, YESS and TESS have certain advantages: (1) no need for general anesthesia, (2) fewer cases of iatrogenic neurologic damage, (3) no retraction on the intracanal nerve elements, (4) significantly fewer infections, (5) only minimal disturbance of the ligamentum flavum or intracanal capsular structures and therefore less scar formation, (6) no interference from scar tissue when reaching recurrent herniated tissue in cases of previous dorsal discectomy, and (7) shorter hospital stay, earlier functional recovery, earlier return to work, and higher cost-effectiveness [1–11]. Although nearly all kinds of disc herniation are accessible with the outside-in TESS technique entering the spinal canal directly [2, 3], the complexity of C-arm-guided orientation, the difficulty of finding the optimal trajectory to the target, and the additional steps of surgical manipulation led to considerable X-ray exposure, long operative duration, and a steep learning curve. We designed an easy posterolateral transforaminal endoscopic decompression technique for radiculopathy secondary to lumbar disc herniation, termed PTES (percutaneous transforaminal endoscopic surgery). The purpose of this study is to describe the PTES technique, to evaluate its efficacy and safety for the treatment of lumbar disc herniation, including primary herniation, reherniation, intracanal herniation, and extracanal herniation, and to report outcomes and complications.
null
null
Results
The 209 patients who met the inclusion criteria were followed for an average of 26.3 ± 2.3 months. There were 116 (55.5%) male patients and 93 (44.5%) female patients. The average ages were 46.4 ± 14.9 years for the male patients and 46.8 ± 11.1 years for the female patients. The mean duration of the operation was 50.9 ± 9.9 min per level. The mean frequency of intraoperative fluoroscopy was 5 (3–14) times per level. The mean blood loss was 5 (2–20) ml per level. The mean stay in the hospital was 3 (2–4) days. The VAS score of leg pain significantly dropped from 9 (6–10) before the operation to 1 (0–3) (P < 0.001) immediately after the operation and to 0 (0–3) (P < 0.001) 2 years after the operation. However, 16 (7.7%) patients experienced a rebound effect of leg pain: their VAS score of 9 (7–10) preoperatively dropped to 0 (0–2) immediately after the operation but rose to 7 (5–9) 1 week postoperatively. Fourteen of the 16 patients obtained pain relief within 2 months, and the VAS score became 3 (2–4) 2 months postoperatively and 0.5 (0–3) 2 years postoperatively. The other two underwent reoperation about 1 month after surgery in other hospitals. At 2-year follow-up, 95.7% (200/209) of the patients showed excellent or good outcomes, 2.9% (6/209) fair, and 1.4% (3/209) poor. The percentages of excellent or good results were 100% (29/29) for L5/S1 herniations with high iliac crest, 100% (25/25) for herniations with scoliosis, and 96.8% (30/31) for herniation with calcification. The percentage of fair results was 3.2% (1/31) for herniation with calcification (Table 4). Excellent or good outcomes were shown in 100% (24/24) of recurrent herniations or missed fragments after previous surgical intervention at the index level and adjacent disc herniations after decompression and fusion. Voiding dysfunction recovered within 1 day after surgery, and the strength of the quadriceps, foot/toe extensors, or triceps improved within 3 months in the two patients with cauda equina syndrome who showed fair outcomes. Postoperative complications (Table 6) included three patients who experienced increased weakness of quadriceps or foot/toe extensor strength, which recovered fully about 1 month after surgery. One other patient experienced a low-toxicity disc infection, which was cured with intravenous antibiotics for 2 weeks. There was one case of herniation recurrence 8 months after surgery, which was successfully treated with MIS-TLIF (minimally invasive surgery-transforaminal lumbar intervertebral fusion). No dural tears required treatment, no dural leaks occurred after surgery, and no meningoceles or dural cysts in the surgical area were observed on the postoperative MRI scans. No patient had any form of permanent iatrogenic nerve damage or a major complication such as intraoperative vascular injury. There were no deaths. Table 6 Complications (number, percent): increased weakness of quadriceps or foot/toe extensor strength, 3 (1.4%); disc infection, 1 (0.5%); herniation recurrence, 1 (0.5%); permanent iatrogenic nerve damage, 0; dural tear or dural leak, 0; intraoperative vascular injury, 0; death, 0.
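The excerpt reports paired pre- and post-operative VAS medians (ranges) with P < 0.001 but does not state which test produced those P values; a Wilcoxon signed-rank test on paired scores is one standard nonparametric choice for such ordinal data. The sketch below uses hypothetical placeholder scores, not the study's patient-level data.

```python
# Hypothetical paired pre/post VAS comparison with the Wilcoxon signed-rank
# test (the excerpt does not name the test used; this is one standard choice).
import numpy as np
from scipy.stats import wilcoxon

vas_pre = np.array([9, 8, 10, 7, 9, 6, 9, 10])   # placeholder scores
vas_post = np.array([1, 0, 2, 1, 0, 1, 3, 0])

stat, p = wilcoxon(vas_pre, vas_post)
print(f"Wilcoxon signed-rank: W={stat}, P={p:.4f}")
```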
Conclusions
The current data indicate that PTES is an effective and safe method for lumbar disc herniation, offering simple orientation, easy puncture, reduced steps, and little X-ray exposure. It can be applied to almost all types of lumbar disc herniation, including herniation at the L5/S1 level with high iliac crest, herniation with scoliosis or calcification, recurrent herniation, and adjacent disc herniation after decompression and fusion, and its learning curve is no longer steep for surgeons.
[ "Pre- and postoperative imaging", "Surgical procedure", "Clinical follow-up", "Statistical analysis" ]
[ "All patients were evaluated before the procedure by CT and MRI imaging to determine the type of disc herniation (intracanal or extracanal) or to determine if there was calcification. An intracanal herniation is defined as its apex between the bilateral pedicles. The foraminal herniation had its apex within the mediolateral borders of the adjacent pedicle, and the apex is lateral to the bordering pedicle in an extraforaminal herniation. Posteoanterior and lateral radiographs were obtained to detect scoliosis or high iliac crest when the lower plate of L4 vertebral body was not higher than the line between the highest points of bilateral iliac crest (Fig. 2a). After the treatment, MRI images were obtained to assess neurological decompression or exclude dural cyst, myelomeningocele, dural tears or spinal fluid leaks, and reherniation.", "Anesthesia consisted of 1% local lidocaine infiltration, supplemented with conscious sedation. The patient was placed in a prone position with hyperkyphotic bolsters placed under the abdomen on a radiolucent table, especially in the cases of L5/S1 herniation with high iliac crest. A biplane fluoroscopy was used for radiograph imaging. Good posteoanterior and lateral images should be obtained by rotating the C-arm relative to the patient whose back was positioned parallel to the horizontal plane especially in the scoliosis case.\nThe midline is marked on the skin surface by palpating the spinal processes. Then a transverse line bisecting the disc is drawn along the K-wire which is placed transversely across the center of the target disc on the posteoanterior image (Fig. 4b). The surface marking of anatomic disc center is identified by the intersection of transverse line and longitudinal midline (Figs. 4c, d and 5c, d), which is used as the aiming reference point of puncture.Fig. 4Male patient of 65 years with L5/S1 disc herniation underwent the procedure of PTES. a Sagittal and axial MR image showed L5/S1 disc herniation. A transverse line bisecting the disc was drawn along the metal rod which was placed transversely across the center of the target disc on b posteoanterior C-arm view. c Photography showed the surface marking of anatomic disc center identified by the intersection of transverse line and longitudinal midline, and the entrance point of puncture located at the corner of flat back turning to lateral side. The puncture needle was inserted at about 35° angle (25°–45°) to the horizontal plane anteromedially toward the perpendicular line through anatomic disc center on d schematic diagram. The tip of puncture needle was in the posterior intervertebral space close to spinal canal on e lateral C-arm view and near the medial border of pedicle on f posteoanterior C-arm view. Through the 8.8-mm cannula, a 7.5-mm hand reamer was introduced to ream facet bone until the resistance faded, which was checked with g posteoanterior image. h Lateral C-arm view showed that the tip of the thick guiding rod was positioned at the herniated fragment. A 7.5-mm working cannula was advanced over the guiding rod directly to the sequestrated fragment on i posteoanterior fluoroscopic image. Under j, k endoscopic view, the fragments underneath the nerve root were removed and the freed nerve root could be visualized. l Photography showed minimally invasive result 1 months after surgery\nFig. 5\na Sagittal and b axial MR images showed L4/5 disc herniation in 41-year-old man. 
Surgical procedure

Anesthesia consisted of 1% local lidocaine infiltration, supplemented with conscious sedation. The patient was placed in a prone position on a radiolucent table with hyperkyphotic bolsters under the abdomen, especially in cases of L5/S1 herniation with high iliac crest. Biplane fluoroscopy was used for radiographic imaging. Good posteroanterior and lateral images should be obtained by rotating the C-arm relative to the patient, whose back is positioned parallel to the horizontal plane, especially in scoliosis cases.

The midline is marked on the skin by palpating the spinous processes. A transverse line bisecting the disc is then drawn along a K-wire placed transversely across the center of the target disc on the posteroanterior image (Fig. 4b). The surface marking of the anatomic disc center is identified by the intersection of the transverse line and the longitudinal midline (Figs. 4c, d and 5c, d) and is used as the aiming reference point for the puncture.

Fig. 4 A 65-year-old male patient with L5/S1 disc herniation underwent the PTES procedure. a Sagittal and axial MR images showed L5/S1 disc herniation. A transverse line bisecting the disc was drawn along the metal rod placed transversely across the center of the target disc on the b posteroanterior C-arm view. c Photography showed the surface marking of the anatomic disc center, identified by the intersection of the transverse line and the longitudinal midline, and the entrance point of the puncture located at the corner where the flat back turns to the lateral side. The puncture needle was inserted at about a 35° angle (25°–45°) to the horizontal plane, anteromedially toward the perpendicular line through the anatomic disc center, as shown on the d schematic diagram. The tip of the puncture needle was in the posterior intervertebral space close to the spinal canal on the e lateral C-arm view and near the medial border of the pedicle on the f posteroanterior C-arm view. Through the 8.8-mm cannula, a 7.5-mm hand reamer was introduced to ream facet bone until the resistance faded, which was checked on the g posteroanterior image. h The lateral C-arm view showed the tip of the thick guiding rod positioned at the herniated fragment. A 7.5-mm working cannula was advanced over the guiding rod directly to the sequestrated fragment on the i posteroanterior fluoroscopic image. Under the j, k endoscopic views, the fragments underneath the nerve root were removed and the freed nerve root could be visualized. l Photography showed the minimally invasive result 1 month after surgery.

Fig. 5 a Sagittal and b axial MR images showed L4/5 disc herniation in a 41-year-old man. c, d Photography showed the surface marking of the anatomic disc center, identified by the intersection of the transverse L4/5 level line and the longitudinal midline, and the entrance point of the puncture located at the corner where the flat back turns to the lateral side. The tip of the puncture needle was in the posterior one third of the intervertebral space on the e lateral C-arm view and beyond the medial border of the pedicle on the f posteroanterior C-arm view. A 7.5-mm hand reamer was introduced through the cannula to ream away the ventral bone of the superior facet joint and the ligamentum flavum to enlarge the foramen (press-down enlargement of the foramen) until the resistance disappeared. The tip of the reamer should exceed the medial border of the pedicle to the midline between the pedicle and the spinous process on the g posteroanterior C-arm view and should be close to the posterior wall of the target disc on the h lateral C-arm view. i Endoscopic picture showed that both the ipsilateral and contralateral nerve roots were exposed for complete decompression after removal of the j sequestrated disc fragments.

In practice, we found that the entrance point was located at the corner where the flat back turns to the lateral side (Figs. 4c, d and 5c, d), named "Gu's point", regardless of patient size, gender, or level. This point lies at the level of, more cranial to, or slightly more caudal to the horizontal line of the target disc. In patients with scoliosis, the entrance point is adjusted medially or laterally according to the vertebral rotation.

The skin, subcutaneous tissue, and trajectory tract were then infiltrated with local anesthesia, and an 18-gauge needle was inserted at about a 35° angle (25°–45°) to the horizontal plane, anteromedially toward the perpendicular line through the anatomic disc center (Fig. 4d); the angle should be adjusted according to the vertebral rotation in scoliosis. Once the needle reached the target, signaled by the disappearance of resistance, a lateral C-arm view was taken to confirm that the tip lay in the posterior one third of the intervertebral space or in the intracanal area close to the posterior wall of the disc (Figs. 2d, 4e, 5e, and 6c). The plane of the needle bevel could be used for minor directional adjustment, and the bevel was sometimes rotated to skirt an obstacle on the way to the target, especially at the L5/S1 level with a high iliac crest. The angle and direction of the puncture, or the entrance point itself, should be adjusted whenever nerve root symptoms occur. The C-arm was then rotated to the posteroanterior projection, on which the needle tip should lie near the medial border of the pedicle (Figs. 2e, 4f, and 5f). Subsequently, contrast solution [9 ml iohexol (300 mg/ml) + 1 ml methylene blue] or methylene blue solution (9 ml 0.9% NaCl + 1 ml methylene blue) was injected to dye the nucleus, and any pain reaction or dye leakage was recorded; this step could sometimes be omitted.

Fig. 6 A 47-year-old male patient with L4/5 disc herniation and an extruding fragment underwent the PTES procedure. a Sagittal and b axial MR images showed L4/5 disc herniation. The puncture needle was inserted anteromedially at about a 45° angle to the horizontal plane, and its tip was in the posterior one third of the intervertebral space on the c lateral C-arm view. Over the guiding wire, stepwise-dilating cannulas were introduced to the annulus fibrosus through the foramen (d). The thick guiding rod was advanced with a mallet into the foramen after removal of the guiding wire (e). An 8.8-mm cannula was pushed over the rod to the facet joint area, docked at the superior facet after the rod was removed, and pressed down to reduce the angle of the cannula to the horizontal plane. Through this cannula, a 7.5-mm hand reamer was introduced to ream the ventral part of the facet bone until the resistance faded (press-down enlargement of the foramen) (f), which was checked on the g posteroanterior C-arm image. The reamer was removed, and the thick guiding rod was reintroduced and advanced with the aid of a mallet until its tip entered the herniated fragment (h). A 7.5-mm working cannula was advanced over the guiding rod directly to the extruding fragment, and the spine endoscope was inserted (i). Under the j, k, and l endoscopic views, the extruding disc fragment could be observed, and the freed nerve root could be visualized after the massive nucleus underneath the nerve root was removed.
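As a rough aid to the puncture geometry described above, and not a formula given in the paper: for a straight trajectory inserted at angle θ to the horizontal plane and aimed at the perpendicular line through the anatomic disc center, the lateral distance from that line to the skin entry point is approximately depth/tan θ. The sketch below evaluates this for the quoted 25°–45° range; the 80-mm depth is chosen purely for illustration, and in practice the entry point is Gu's point, with the trajectory always verified fluoroscopically.

```python
from math import radians, tan

def entry_offset(depth_mm, angle_deg):
    """Lateral distance from the vertical line through the anatomic disc
    center to the skin entry point, for a straight needle trajectory
    inserted at angle_deg to the horizontal plane. Illustrative only:
    the actual entry point is found anatomically (Gu's point) and the
    needle position is confirmed on C-arm views."""
    return depth_mm / tan(radians(angle_deg))

# For a hypothetical disc center 80 mm below the skin:
for angle in (25, 35, 45):
    print(f"{angle} deg -> entry about {entry_offset(80, angle):.0f} mm lateral")
```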
After the guiding wire was introduced into the disc space, a stab incision of about 6 mm was made, and the 18-gauge needle was inserted along the guiding wire to the facet joint capsule for lidocaine infiltration. Over the guiding wire, stepwise-dilating cannulas of 1.5, 3.5, and 5.5 mm were introduced to the annulus fibrosus through the foramen (Fig. 6d). After the dilating cannulas were removed, a thick guiding rod 6.3 mm in diameter was introduced over the guiding wire and then advanced with a mallet into the foramen after removal of the wire (Fig. 6e). An 8.8-mm cannula with a one-sided opening was pushed over the rod to the facet joint area, docked at the superior facet after the rod was removed, and pressed down to reduce the angle of the cannula to the horizontal plane according to the position of the needle tip on the lateral and posteroanterior C-arm views, which indicates the inclination angle of the puncture trajectory. A 7.5-mm hand reamer was then introduced through this cannula to ream away the ventral bone of the superior facet joint and the ligamentum flavum, enlarging the foramen until resistance disappeared, which means that the spinal canal has been entered; we call this press-down enlargement of the foramen (Fig. 6f). The tip of the reamer should exceed the medial border of the pedicle to the midline between the pedicle and the spinous process on a posteroanterior C-arm view (Figs. 4g, 5g, and 6g) and should be close to the posterior wall of the target disc on the lateral C-arm view (Fig. 5h), to ensure that the vicinity of the extruding or sequestrated fragment has been reached. Only the posteroanterior C-arm image was needed to determine the position of the reamer during foramen enlargement, and whether the target area had been reached could be evaluated from the endoscopic image in place of the lateral C-arm image. If the target could not be reached, the facet reaming step was repeated with the 7.5-mm reamer to remove more of the ventral superior facet for further foramen enlargement, through the 8.8-mm cannula docked at the facet and pressed down further to reduce the angle of the cannula to the horizontal plane (Fig. 7). In cases with difficult access to the fragment, particularly in the presence of foraminal stenosis, an 8.8-mm reamer was used to enlarge the foramen through a 10-mm cannula. For lumbar disc herniation with calcification, the reamer could be advanced deeper to remove some of the calcified disc tissue. If the reamer in the foramen was higher or lower than the disc level on the posteroanterior C-arm image, its position and direction could be adjusted through minor movements of the cannula or the thick guiding rod. If nerve root symptoms occurred, the surgeon had to stop reaming immediately and either adjust the angle and direction of the reamer (moving the tip of the cannula laterally and dorsally toward the superior facet and pressing the cannula down) or move the entrance point medially until the symptoms disappeared. The reamer was removed, and the thick guiding rod was reintroduced and advanced with the aid of a mallet until its tip entered the herniated fragment (Fig. 6h). A 7.5-mm working cannula with a one-sided opening was then advanced over the guiding rod directly to the target area (Figs. 1c, d, 2g, 3d, e, 4i, and 6i). For extracanal disc herniation, the reaming step could usually be omitted.

Fig. 7 If the extruding or sequestrated fragment could not be reached, the facet reaming step was repeated to remove more of the ventral superior facet for further enlargement of the foramen, through the cannula docked at the facet and pressed down further to reduce the angle of the cannula to the horizontal plane.

The spine endoscope was introduced (Fig. 6i); generally, the extruding or sequestrated disc fragment could be observed, and the nerve root appeared on the screen after the fragments were removed (Figs. 2i and 5i). When the nerve root was also visible, a working forceps was introduced through the endoscope to remove the fragments underneath the nerve root and the central dura, and even the contralateral nerve root, under endoscopic view (Figs. 4j, 5i, and 6j, k). In disc protrusions with an intact annular wall, the annulus was perforated to pull out the herniated nucleus. A massive disc fragment could be grabbed and extracted together with the endoscope. After the scope was reinserted, the nerve root was inspected (Figs. 4k, 5i, and 6l) and the remaining disc fragments were removed under endoscopic vision. When calcification was encountered, a 3.5-mm small reamer or an electric shaver was used to grind off the calcified tissue under endoscopic view (Fig. 3f).
The cannula was then rotated cephalad to check the exiting nerve root and rotated back to find and remove any bone fragments that had been reamed off. The freed nerve root, always pulsating with the heart rate, could be observed at the end of the procedure (Figs. 2i, 4k, 5i, and 6l). A trigger-flex bipolar radiofrequency probe could be used to clean the access way, stop bleeding, ablate the nucleus and annulus, shrink annular tears, and, in recurrent herniation after previous surgical intervention at the index level, release the scar tissue around the nerve root.

The patients were mobilized 5 h after surgery, and a flexible back brace was used for 3 weeks. After leaving the hospital, patients were encouraged to resume their daily routine and were followed up as outpatients.

Clinical follow-up

Leg pain was evaluated using the 10-point visual analog scale (VAS) preoperatively; immediately after surgery; at 1 week; at 1, 2, 3, and 6 months; and at 1 and 2 years after surgery. The clinical evaluation included a straight leg raising test and assessment of the strength of the quadriceps, foot/toe extensors, and triceps. In a retrospective assessment, the results 2 years postoperatively were graded as excellent, good, fair, or poor according to the MacNab classification [12] (Table 5). Grades of fair or better also required that the patient would be willing to choose the same procedure again for the same problem in the future. During follow-up, all complications were recorded, including iatrogenic nerve damage, vascular injury, infection, wound-healing problems, thrombosis, and recurrence.

Table 5 MacNab classification [12]
Result      Criteria
Excellent   No pain; no restriction of activity
Good        Occasional back or leg pain not interfering with the patient's ability to do his or her normal work, or to enjoy leisure activities
Fair        Improved functional capacity, but handicapped by intermittent pain of sufficient severity to curtail or modify work or leisure activities
Poor        No improvement or insufficient improvement to enable an increase in activities, or further operative intervention required

Statistical analysis

Comparison of preoperative and postoperative VAS scores was performed using a linear mixed effects model for multiple comparison procedures. Statistically significant differences were defined at the 95% confidence level. Statistical evaluation was supported by SPSS software (version 17.0, SPSS Inc., Chicago, IL, USA).
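The original VAS comparison was run as a linear mixed effects model in SPSS 17.0. For readers who prefer an open-source route, a minimal equivalent in Python's statsmodels might look like the sketch below; the long-format layout and column names are our own assumptions, and the few rows shown are placeholder values, not study data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per follow-up visit.
# Values are placeholders, not the study's data.
df = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "visit":      ["pre", "immediate", "2y"] * 3,
    "vas":        [9, 1, 0, 8, 2, 0, 10, 3, 1],
})

# Fixed effect of visit (preoperative scores as reference level) and a
# random intercept per patient to account for repeated measures.
model = smf.mixedlm("vas ~ C(visit, Treatment('pre'))",
                    data=df, groups=df["patient_id"])
result = model.fit()
print(result.summary())
```

With real data, the visit coefficients estimate the mean VAS change from the preoperative baseline at each time point, which is the comparison reported in the Results.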
[ null, null, null, null ]
[ "Background", "Methods", "Pre- and postoperative imaging", "Surgical procedure", "Clinical follow-up", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ "The radicular syndrome caused by lumbar disc herniation compressing neurologic elements is a clear indication for surgical decompression. In the last decades, as a minimally invasive surgical technique, the posterior lateral transforaminal endoscopic surgery has been developed to perform discectomy for neurologic decompression under direct view and local anesthesia including YESS (Yeung Endoscopy Spine Surgery) [1–5] and TESS (Transforaminal Endoscopic Spine Surgery) [6–11]. There was a high percentage of patient satisfaction and a low rate of complications in YESS or TESS for lumbar disc herniation [1–11]. Compared with traditional lumbar discectomy, YESS and TESS have certain advantages: (1) no need for general anesthesia, (2) less cases of iatrogenic neurologic damage, (3) no retraction on the intracanal nerve elements, (4) significantly less infections, (5) only minimal disturbance of ligamentum flavum or intracanal capsular structures, therefore, less scar formation, (6) no interference of scar tissue to reach the recurrent herniated tissue in cases of previous dorsal-discectomy, and (7) shorter hospital stay, earlier functional recovery, earlier return to work, and higher cost-effectiveness [1–11]. Although nearly all kinds of disc herniations are accessible for TESS of outside disc-inside technique directly into the spinal canal [2, 3], complexity of C-arm guided orientation, difficulty to find the optimal trajectory for target, and more steps of surgical manipulation leaded to much exposure of X-ray, long duration of operation, and steep learning curve.\nWe designed an easy posterolateral transforaminal endoscopic decompression technique for radiculopathy secondary to lumbar disc herniation, termed PTES (percutaneous transforaminal endoscopic surgery). The purpose of the study is to describe the technique of PTES and evaluate the efficacy and safety for treatment of lumbar disc herniation including primary herniation, reherniation, intracanal herniation, and extracanal herniation and to report outcome and complications.", "The clinical study proposal was approved by Zhongshan Hospital Ethical Committee (the medical ethical committee of the authors’ hospital). Informed consent to participate in the study has been obtained from patients. From January 2012 to June 2013, PTES was performed to treat 209 consecutive patients of intracanal or extracanal herniations with or without extruding or sequestrated fragment, high iliac crest, scoliosis, calcification, or cauda equina syndrome including recurrent herniation after previous surgical intervention at the index level or adjacent disc herniation after decompression and fusion. During the same period, other additional patients underwent PTES for various other conditions that did not meet the inclusion criteria for this study. 
The excluded patients had the primary diagnoses of chronic discogenic pain, foraminal stenosis, lateral recess stenosis, or pyogenic discitis.\nInclusion criteria were as follows: (1) primarily radicular pain of the unilateral leg; (2) disc herniation at one level from L2 to S1 (Table 1) proven by magnetic resonance imaging (MRI) or computed tomography (CT) corresponding to the neurologic findings; (3) clear nerve root tension sign with a straight leg raising sign of less than 45°, or a positive neurologic finding in terms of absent knee or ankle reflex, corresponding dermatomal numbness or weakness of quadriceps, foot/toe extensors, or triceps; and (4) in all patients, conservative treatment had failed.Table 1Location of lumbar disc herniation according to levelHerniation levelNo. of patientsPercentL5/S18339.7L4/L58138.8L3/L43215.3L2/L3136.2Total209100\n\nLocation of lumbar disc herniation according to level\nThis study included 181 intracanal and 28 extracanal (foraminal and extraforaminal) herniations (Table 2). Eighteen recurrent herniations (Fig. 1), three missed fragments after previous surgical intervention at the index level (Table 3), and three adjacent disc herniations after decompression and fusion also were included. For the group that had undergone prior surgical intervention, there were 14 laminectomy and discectomies, 2 endoscopic discectomy, 5 decompression and fusion, 2 radiofrequency ablation, and 1 ozone injections. This study included 29 L5/S1 herniations with high iliac crest (Fig. 2), 25 with scoliosis, and 31 with calcification (Fig. 3) (Table 4), and cauda equina syndrome occurring in 2 patients.Table 2Location of the lumbar disc herniation in relation to the pedicle and spinal canalLocation of herniationNo. of patientsPercentIntracanal18186.6Foraminal167.7Extraforaminal125.7Total209100\nFig. 1\na Sagittal and b axial MR images showed L4/5 recurrent herniation after decompression and fusion in 61-year-old woman. During the procedure of PTES, c posteoanterior and d lateral C-arm image confirmed that a 7.5-mm working cannula was advanced directly to the protruding fragment\nTable 3Prior surgical intervention undertaken at index levelPrior procedure at index levelNo. of patientsPercentLaminectomy and discectomy146.7Endoscopic discectomy21.0Decompression and fusion21.0Radiofrequency ablation21.0Ozone injections10.5\nFig. 2\na Posteoanterior X-ray picture, b sagittal, and c axial MR images showed L5/S1 disc herniation with high iliac crest in a 44-year-old man. The tip of the puncture needle was in the posterior one third of intervertebral space on d lateral C-arm view and beyond the medial border of pedicle on e posteoanterior C-arm view. During the procedure of PTES, a 7.5-mm working cannula was advanced over the guiding rod to the vicinity of the sequestrated fragment on f lateral and g posteoanterior C-arm view after the enlargement of the foramen. i Endoscopic picture showed that the nerve root was exposed for complete decompression after removal of h sequestrated disc fragments, which was confirmed on j sagittal and k axial MR images 1 week after operation. After 2 years follow-up, l sagittal and m axial MR images showed no recurrent herniation\nFig. 3Male patient of 42 years with L4/5 disc herniation and calcification underwent the procedure of PTES. a Sagittal, b axial MR images, and c CT scan showed calcification in L4/5 disc herniation. 
During the procedure of PTES, d posteoanterior and e lateral C-arm image confirmed that a 7.5-mm working cannula was advanced directly to the protruding fragment. A 3.5-mm small reamer was used to grind off g calcified tissue under f view through the endoscope, which was confirmed on h CT scan after surgery\nTable 4Outcome according to the MacNab classificationAll patientsL5/S1 herniations with high iliac crestHerniations with scoliosisHerniation with calcificationNo. of patients in category209292531Excellent or good n (%)200 (95.7)29 (100)25 (100)30 (96.8)Fair n (%)6 (2.9)0 (0)0 (0)1 (3.2)Poor n (%)3 (1.4)0 (0)0 (0)0 (0)\n\nLocation of the lumbar disc herniation in relation to the pedicle and spinal canal\n\na Sagittal and b axial MR images showed L4/5 recurrent herniation after decompression and fusion in 61-year-old woman. During the procedure of PTES, c posteoanterior and d lateral C-arm image confirmed that a 7.5-mm working cannula was advanced directly to the protruding fragment\nPrior surgical intervention undertaken at index level\n\na Posteoanterior X-ray picture, b sagittal, and c axial MR images showed L5/S1 disc herniation with high iliac crest in a 44-year-old man. The tip of the puncture needle was in the posterior one third of intervertebral space on d lateral C-arm view and beyond the medial border of pedicle on e posteoanterior C-arm view. During the procedure of PTES, a 7.5-mm working cannula was advanced over the guiding rod to the vicinity of the sequestrated fragment on f lateral and g posteoanterior C-arm view after the enlargement of the foramen. i Endoscopic picture showed that the nerve root was exposed for complete decompression after removal of h sequestrated disc fragments, which was confirmed on j sagittal and k axial MR images 1 week after operation. After 2 years follow-up, l sagittal and m axial MR images showed no recurrent herniation\nMale patient of 42 years with L4/5 disc herniation and calcification underwent the procedure of PTES. a Sagittal, b axial MR images, and c CT scan showed calcification in L4/5 disc herniation. During the procedure of PTES, d posteoanterior and e lateral C-arm image confirmed that a 7.5-mm working cannula was advanced directly to the protruding fragment. A 3.5-mm small reamer was used to grind off g calcified tissue under f view through the endoscope, which was confirmed on h CT scan after surgery\nOutcome according to the MacNab classification\n Pre- and postoperative imaging All patients were evaluated before the procedure by CT and MRI imaging to determine the type of disc herniation (intracanal or extracanal) or to determine if there was calcification. An intracanal herniation is defined as its apex between the bilateral pedicles. The foraminal herniation had its apex within the mediolateral borders of the adjacent pedicle, and the apex is lateral to the bordering pedicle in an extraforaminal herniation. Posteoanterior and lateral radiographs were obtained to detect scoliosis or high iliac crest when the lower plate of L4 vertebral body was not higher than the line between the highest points of bilateral iliac crest (Fig. 2a). After the treatment, MRI images were obtained to assess neurological decompression or exclude dural cyst, myelomeningocele, dural tears or spinal fluid leaks, and reherniation.\nAll patients were evaluated before the procedure by CT and MRI imaging to determine the type of disc herniation (intracanal or extracanal) or to determine if there was calcification. 
An intracanal herniation is defined as its apex between the bilateral pedicles. The foraminal herniation had its apex within the mediolateral borders of the adjacent pedicle, and the apex is lateral to the bordering pedicle in an extraforaminal herniation. Posteoanterior and lateral radiographs were obtained to detect scoliosis or high iliac crest when the lower plate of L4 vertebral body was not higher than the line between the highest points of bilateral iliac crest (Fig. 2a). After the treatment, MRI images were obtained to assess neurological decompression or exclude dural cyst, myelomeningocele, dural tears or spinal fluid leaks, and reherniation.\n Surgical procedure Anesthesia consisted of 1% local lidocaine infiltration, supplemented with conscious sedation. The patient was placed in a prone position with hyperkyphotic bolsters placed under the abdomen on a radiolucent table, especially in the cases of L5/S1 herniation with high iliac crest. A biplane fluoroscopy was used for radiograph imaging. Good posteoanterior and lateral images should be obtained by rotating the C-arm relative to the patient whose back was positioned parallel to the horizontal plane especially in the scoliosis case.\nThe midline is marked on the skin surface by palpating the spinal processes. Then a transverse line bisecting the disc is drawn along the K-wire which is placed transversely across the center of the target disc on the posteoanterior image (Fig. 4b). The surface marking of anatomic disc center is identified by the intersection of transverse line and longitudinal midline (Figs. 4c, d and 5c, d), which is used as the aiming reference point of puncture.Fig. 4Male patient of 65 years with L5/S1 disc herniation underwent the procedure of PTES. a Sagittal and axial MR image showed L5/S1 disc herniation. A transverse line bisecting the disc was drawn along the metal rod which was placed transversely across the center of the target disc on b posteoanterior C-arm view. c Photography showed the surface marking of anatomic disc center identified by the intersection of transverse line and longitudinal midline, and the entrance point of puncture located at the corner of flat back turning to lateral side. The puncture needle was inserted at about 35° angle (25°–45°) to the horizontal plane anteromedially toward the perpendicular line through anatomic disc center on d schematic diagram. The tip of puncture needle was in the posterior intervertebral space close to spinal canal on e lateral C-arm view and near the medial border of pedicle on f posteoanterior C-arm view. Through the 8.8-mm cannula, a 7.5-mm hand reamer was introduced to ream facet bone until the resistance faded, which was checked with g posteoanterior image. h Lateral C-arm view showed that the tip of the thick guiding rod was positioned at the herniated fragment. A 7.5-mm working cannula was advanced over the guiding rod directly to the sequestrated fragment on i posteoanterior fluoroscopic image. Under j, k endoscopic view, the fragments underneath the nerve root were removed and the freed nerve root could be visualized. l Photography showed minimally invasive result 1 months after surgery\nFig. 5\na Sagittal and b axial MR images showed L4/5 disc herniation in 41-year-old man. c, d Photography showed the surface marking of anatomic disc center identified by the intersection of transverse L4/5 level line and longitudinal midline, and the entrance point of puncture located at the corner of the flat back turning to the lateral side. 
The tip of puncture needle was in the posterior one third of intervertebral space on e lateral C-arm view and beyond the medial border of pedicle on f posteoanterior C-arm view. A 7.5-mm hand reamer was introduced through the cannula to ream away the ventral bone of superior facet joint and ligmentum flavum for enlargement of foramen (press-down enlargement of foramen) until the resistance disappeared. The tip of reamer should exceed the medial border of pedicle till the midline between the pedicle and spinal process on g posteoanterior C-arm view, and it should be close to posterior wall of target disc on h lateral C-arm view. i Endoscopic picture showed that both the ipsilateral nerve root and the contralateral nerve root were exposed for complete decompression after removal of j sequestrated disc fragments\n\nMale patient of 65 years with L5/S1 disc herniation underwent the procedure of PTES. a Sagittal and axial MR image showed L5/S1 disc herniation. A transverse line bisecting the disc was drawn along the metal rod which was placed transversely across the center of the target disc on b posteoanterior C-arm view. c Photography showed the surface marking of anatomic disc center identified by the intersection of transverse line and longitudinal midline, and the entrance point of puncture located at the corner of flat back turning to lateral side. The puncture needle was inserted at about 35° angle (25°–45°) to the horizontal plane anteromedially toward the perpendicular line through anatomic disc center on d schematic diagram. The tip of puncture needle was in the posterior intervertebral space close to spinal canal on e lateral C-arm view and near the medial border of pedicle on f posteoanterior C-arm view. Through the 8.8-mm cannula, a 7.5-mm hand reamer was introduced to ream facet bone until the resistance faded, which was checked with g posteoanterior image. h Lateral C-arm view showed that the tip of the thick guiding rod was positioned at the herniated fragment. A 7.5-mm working cannula was advanced over the guiding rod directly to the sequestrated fragment on i posteoanterior fluoroscopic image. Under j, k endoscopic view, the fragments underneath the nerve root were removed and the freed nerve root could be visualized. l Photography showed minimally invasive result 1 months after surgery\n\na Sagittal and b axial MR images showed L4/5 disc herniation in 41-year-old man. c, d Photography showed the surface marking of anatomic disc center identified by the intersection of transverse L4/5 level line and longitudinal midline, and the entrance point of puncture located at the corner of the flat back turning to the lateral side. The tip of puncture needle was in the posterior one third of intervertebral space on e lateral C-arm view and beyond the medial border of pedicle on f posteoanterior C-arm view. A 7.5-mm hand reamer was introduced through the cannula to ream away the ventral bone of superior facet joint and ligmentum flavum for enlargement of foramen (press-down enlargement of foramen) until the resistance disappeared. The tip of reamer should exceed the medial border of pedicle till the midline between the pedicle and spinal process on g posteoanterior C-arm view, and it should be close to posterior wall of target disc on h lateral C-arm view. 
i Endoscopic picture showed that both the ipsilateral nerve root and the contralateral nerve root were exposed for complete decompression after removal of j sequestrated disc fragments\nIn practice, we found that the entrance point was located at the corner of the flat back turning to the lateral side (Figs. 4c, d and 5c, d), named “Gu’s Point”, not depending on the size of the patient, gender, and level. This point is as high as, or more cranial, or slightly more caudal than the horizontal line of target disc. The entrance point for the patient with scoliosis is adjusted medially or laterally according to the rotation.\nThen the skin, subcutaneous tissue, and trajectory tract were infiltrated with local anesthesia, and an 18-gauge needle was inserted at about 35° angle (25°–45°) to the horizontal plane anteromedially toward the perpendicular line through the anatomic disc center (Fig. 4d), which should be adjusted according to the rotation of vertebra for scoliosis. Once the needle reached the target when resistance disappeared, the lateral C-arm view was taken to ensure that the tip should be in the posterior one third of the intervertebral space or intracanal area close to the posterior wall of the disc (Figs. 2d, 4e, 5e, and 6c). The plane of the needle bevel could be used for minor directional adjustment. Sometimes the direction of the needle bevel was rotated to skirt the obstacle for target especially at L5/S1 level with high iliac crest. The angle and direction of puncture or the entrance point should be adjusted once there was nerve root symptom. Then the C-arm was rotated to the posteroanterior projection, and the needle tip should be near the medial border of the pedicle on the C-arm view (Figs. 2e, 4f, and 5f). Subsequently, contrast solution [9 ml iohexol (300 mg/mL) + 1 ml methylene blue] or methylene blue solution (9 ml 0.9% NaCl + 1 ml methylene blue) was injected to dye the nucleus, and pain reaction or dye leakage was recorded, which sometimes could be omitted.Fig. 6Male patient (47 years) of L4/5 disc herniation with extruding fragment underwent the procedure of PTES. a Sagittal and b axial MR images showed L4/5 disc herniation. During the procedure of PTES, The puncture needle was anteromedially inserted at about 45° angle to the horizontal plane and the tip of puncture needle was in the posterior one third intervertebral space on c lateral C-arm view. Over the guiding wire, stepwise-dilating cannulas were introduced to annulus fibrosus through the foramen d. The thick guiding rod was advanced with a mallet into the foramen after removal of the guiding wire e. An 8.8-mm cannula was pushed over the rod to the facet joint area, docked at the superior facet after the rod was  removed, and pressed down to make the angle of the cannula to the horizontal plane smaller. Through this cannula, a 7.5-mm hand reamer was then introduced to ream the ventral part of facet bone until the resistance faded (press-down enlargement of the foramen) f, which was checked with g posteoanterior C-arm image. The reamer was removed, and the thick guiding rod then was reintroduced and advanced with the aid of a mallet until the tip of the guiding rod was into the herniated fragment h. A 7.5-mm working cannula was advanced over the guiding rod directly to the extruding fragment and the spine endoscope was inserted i. 
Under j, k, and l endoscopic view, the extruding disc fragment could be observed and the freed nerve root could be visualized after the massive nucleus underneath the nerve root were removed\n\nMale patient (47 years) of L4/5 disc herniation with extruding fragment underwent the procedure of PTES. a Sagittal and b axial MR images showed L4/5 disc herniation. During the procedure of PTES, The puncture needle was anteromedially inserted at about 45° angle to the horizontal plane and the tip of puncture needle was in the posterior one third intervertebral space on c lateral C-arm view. Over the guiding wire, stepwise-dilating cannulas were introduced to annulus fibrosus through the foramen d. The thick guiding rod was advanced with a mallet into the foramen after removal of the guiding wire e. An 8.8-mm cannula was pushed over the rod to the facet joint area, docked at the superior facet after the rod was  removed, and pressed down to make the angle of the cannula to the horizontal plane smaller. Through this cannula, a 7.5-mm hand reamer was then introduced to ream the ventral part of facet bone until the resistance faded (press-down enlargement of the foramen) f, which was checked with g posteoanterior C-arm image. The reamer was removed, and the thick guiding rod then was reintroduced and advanced with the aid of a mallet until the tip of the guiding rod was into the herniated fragment h. A 7.5-mm working cannula was advanced over the guiding rod directly to the extruding fragment and the spine endoscope was inserted i. Under j, k, and l endoscopic view, the extruding disc fragment could be observed and the freed nerve root could be visualized after the massive nucleus underneath the nerve root were removed\nAfter the guiding wire was introduced into the disc space, a stab incision of about 6 mm was made and the 18-gauge needle was inserted along the guiding wire to the facet joint capsule for lidocaine infiltration. Over the guiding wire, stepwise-dilating cannulas of 1.5, 3.5, and 5.5 mm were introduced to the anulus fibrosus through the foramen (Fig. 6d). After the dilating cannulas were removed, the thick guiding rod of 6.3 mm in diameter was introduced over the guiding wire and then advanced with a mallet into the foramen after removal of guiding wire (Fig. 6e). An 8.8-mm cannula of one-sided opening was pushed over the rod to the facet joint area, docked at the superior facet after the rod was removed, and pressed down to make the angle of cannula to the horizontal plane smaller according to the position of the needle tip on the lateral and posteroanterior C-arm view, which indicates the inclination angle of puncture trajectory. A 7.5-mm hand reamer was then introduced through this cannula to ream away the ventral bone of the superior facet joint and the ligmentum flavum for enlargement of foramen until resistance disappeared, meaning that the spinal canal has been entered, which was called press-down enlargement of foramen (Fig. 6f). The tip of the reamer should exceed the medial border of the pedicle till the midline is between the pedicle and the spinal process on a posteoanterior C-arm view (Figs. 4g, 5g and 6g), and it should be close to the posterior wall of the target disc on the lateral C-arm view (Fig. 5h), to ensure that the vicinity of the extruding or sequestrated fragment was reached. 
Only the posteoanterior C-arm image was needed to determine the position of the reamer during the foramen enlargement and it could be evaluated through endoscopic image replacing lateral C-arm image whether the target area had been reached. In case this could not be achieved, the step of reaming facet was repeated to remove more ventral parts of the superior facet with 7.5-mm reamer for further enlargement of foramen through the 8.8-mm cannula docked at the facet and pressed down further to make the angle of cannula to the horizontal plane smaller (Fig. 7). In cases with difficult access to the fragment, particularly in the presence of foraminal stenosis, an 8.8-mm reamer was used to enlarge the foramen through a 10-mm cannula. The reamer could be advanced deeper to remove some calcified tissue of disc for lumbar disc herniation with calcification. If the reamer in the foramen was higher or lower than the disc level on the posteoranterior C-arm image, the position and direction of the reamer could be adjusted through a minor movement of the cannula or a thick guiding rod. Once there was nerve root symptom, the surgeon must stop reaming immediately, adjust the angle and direction of reamer (move the tip of cannula laterally and dorsally toward the superior facet and press the cannula down), or change the entrance point medially until the symptom disappear. The reamer was removed, and the thick guiding rod was reintroduced and advanced with the aid of a mallet till the tip of the guiding rod was into the herniated fragment (Fig. 6h). A 7.5-mm working cannula with a one-sided opening was then advanced over the guiding rod directly to the target area (Figs. 1c, d, 2g, 3d, e, 4i, and 6i). For extracanal disc herniation, the process of reaming could usually be omitted.Fig. 7In case the extruding or sequestrated fragment could not be achieved, the step of reaming facet was repeated to remove more ventral parts of the superior facet for further enlargement of the foramen through the cannula docked at the facet and pressed down further to make the angle of the cannula to the horizontal plane smaller\n\nIn case the extruding or sequestrated fragment could not be achieved, the step of reaming facet was repeated to remove more ventral parts of the superior facet for further enlargement of the foramen through the cannula docked at the facet and pressed down further to make the angle of the cannula to the horizontal plane smaller\nThe spine endoscope was introduced (Fig. 6i), and generally, the extruding or sequestrated disc fragment could be observed and the nerve root was then shown on the screen (Figs. 2i and 5i) after the fragments were removed. In cases where the nerve root was also visible, a working forceps was introduced through the endoscope to remove the fragments underneath the nerve root and the central dura, even the contralateral nerve root, under endoscopic view (Figs. 4j, 5i, and 6j, k). In disc protrusion where the annular wall was complete, the annulus was perforated to pull out the herniated nucleus. The massive disc fragment could be grabbed and extracted together with the endoscope. After the scope was again inserted, the nerve root was inspected (Figs. 4k, 5i, and 6l) and the remaining fragments of the disc were removed under endoscopic vision. When the calcification was encountered, a 3.5-mm small reamer or electric shaver was used to grind off the calcified tissue under the view through the endoscope (Fig. 3f). 
The cannula was then rotated cephalad to check the existing nerve root and rotated back to detect bone fragments that reamed off and take those out. The freed nerve root that was always pulsating with the heart rate could be observed at the end of procedure (Figs. 2i, 4k, 5i, and 6l). The trigger-flex bipolar radiofrequency probe could be used to clean the access way, stop the bleeding, ablate the nucleus and annulus, shrink the annular tears, and release the scar tissue around the nerve root for recurrent herniation after previous surgical intervention at the index level.\nThe patients were mobilized 5 h after surgery. A flexible back brace was used for 3 weeks. After leaving the hospital, patients were encouraged to resume their daily routine and were followed up as outpatients at the hospital ward.\nAnesthesia consisted of 1% local lidocaine infiltration, supplemented with conscious sedation. The patient was placed in a prone position with hyperkyphotic bolsters placed under the abdomen on a radiolucent table, especially in the cases of L5/S1 herniation with high iliac crest. A biplane fluoroscopy was used for radiograph imaging. Good posteoanterior and lateral images should be obtained by rotating the C-arm relative to the patient whose back was positioned parallel to the horizontal plane especially in the scoliosis case.\nThe midline is marked on the skin surface by palpating the spinal processes. Then a transverse line bisecting the disc is drawn along the K-wire which is placed transversely across the center of the target disc on the posteoanterior image (Fig. 4b). The surface marking of anatomic disc center is identified by the intersection of transverse line and longitudinal midline (Figs. 4c, d and 5c, d), which is used as the aiming reference point of puncture.Fig. 4Male patient of 65 years with L5/S1 disc herniation underwent the procedure of PTES. a Sagittal and axial MR image showed L5/S1 disc herniation. A transverse line bisecting the disc was drawn along the metal rod which was placed transversely across the center of the target disc on b posteoanterior C-arm view. c Photography showed the surface marking of anatomic disc center identified by the intersection of transverse line and longitudinal midline, and the entrance point of puncture located at the corner of flat back turning to lateral side. The puncture needle was inserted at about 35° angle (25°–45°) to the horizontal plane anteromedially toward the perpendicular line through anatomic disc center on d schematic diagram. The tip of puncture needle was in the posterior intervertebral space close to spinal canal on e lateral C-arm view and near the medial border of pedicle on f posteoanterior C-arm view. Through the 8.8-mm cannula, a 7.5-mm hand reamer was introduced to ream facet bone until the resistance faded, which was checked with g posteoanterior image. h Lateral C-arm view showed that the tip of the thick guiding rod was positioned at the herniated fragment. A 7.5-mm working cannula was advanced over the guiding rod directly to the sequestrated fragment on i posteoanterior fluoroscopic image. Under j, k endoscopic view, the fragments underneath the nerve root were removed and the freed nerve root could be visualized. l Photography showed minimally invasive result 1 months after surgery\nFig. 5\na Sagittal and b axial MR images showed L4/5 disc herniation in 41-year-old man. 
c, d Photography showed the surface marking of anatomic disc center identified by the intersection of transverse L4/5 level line and longitudinal midline, and the entrance point of puncture located at the corner of the flat back turning to the lateral side. The tip of puncture needle was in the posterior one third of intervertebral space on e lateral C-arm view and beyond the medial border of pedicle on f posteoanterior C-arm view. A 7.5-mm hand reamer was introduced through the cannula to ream away the ventral bone of superior facet joint and ligmentum flavum for enlargement of foramen (press-down enlargement of foramen) until the resistance disappeared. The tip of reamer should exceed the medial border of pedicle till the midline between the pedicle and spinal process on g posteoanterior C-arm view, and it should be close to posterior wall of target disc on h lateral C-arm view. i Endoscopic picture showed that both the ipsilateral nerve root and the contralateral nerve root were exposed for complete decompression after removal of j sequestrated disc fragments\n\nMale patient of 65 years with L5/S1 disc herniation underwent the procedure of PTES. a Sagittal and axial MR image showed L5/S1 disc herniation. A transverse line bisecting the disc was drawn along the metal rod which was placed transversely across the center of the target disc on b posteoanterior C-arm view. c Photography showed the surface marking of anatomic disc center identified by the intersection of transverse line and longitudinal midline, and the entrance point of puncture located at the corner of flat back turning to lateral side. The puncture needle was inserted at about 35° angle (25°–45°) to the horizontal plane anteromedially toward the perpendicular line through anatomic disc center on d schematic diagram. The tip of puncture needle was in the posterior intervertebral space close to spinal canal on e lateral C-arm view and near the medial border of pedicle on f posteoanterior C-arm view. Through the 8.8-mm cannula, a 7.5-mm hand reamer was introduced to ream facet bone until the resistance faded, which was checked with g posteoanterior image. h Lateral C-arm view showed that the tip of the thick guiding rod was positioned at the herniated fragment. A 7.5-mm working cannula was advanced over the guiding rod directly to the sequestrated fragment on i posteoanterior fluoroscopic image. Under j, k endoscopic view, the fragments underneath the nerve root were removed and the freed nerve root could be visualized. l Photography showed minimally invasive result 1 months after surgery\n\na Sagittal and b axial MR images showed L4/5 disc herniation in 41-year-old man. c, d Photography showed the surface marking of anatomic disc center identified by the intersection of transverse L4/5 level line and longitudinal midline, and the entrance point of puncture located at the corner of the flat back turning to the lateral side. The tip of puncture needle was in the posterior one third of intervertebral space on e lateral C-arm view and beyond the medial border of pedicle on f posteoanterior C-arm view. A 7.5-mm hand reamer was introduced through the cannula to ream away the ventral bone of superior facet joint and ligmentum flavum for enlargement of foramen (press-down enlargement of foramen) until the resistance disappeared. 
The tip of reamer should exceed the medial border of pedicle till the midline between the pedicle and spinal process on g posteoanterior C-arm view, and it should be close to posterior wall of target disc on h lateral C-arm view. i Endoscopic picture showed that both the ipsilateral nerve root and the contralateral nerve root were exposed for complete decompression after removal of j sequestrated disc fragments\nIn practice, we found that the entrance point was located at the corner of the flat back turning to the lateral side (Figs. 4c, d and 5c, d), named “Gu’s Point”, not depending on the size of the patient, gender, and level. This point is as high as, or more cranial, or slightly more caudal than the horizontal line of target disc. The entrance point for the patient with scoliosis is adjusted medially or laterally according to the rotation.\nThen the skin, subcutaneous tissue, and trajectory tract were infiltrated with local anesthesia, and an 18-gauge needle was inserted at about 35° angle (25°–45°) to the horizontal plane anteromedially toward the perpendicular line through the anatomic disc center (Fig. 4d), which should be adjusted according to the rotation of vertebra for scoliosis. Once the needle reached the target when resistance disappeared, the lateral C-arm view was taken to ensure that the tip should be in the posterior one third of the intervertebral space or intracanal area close to the posterior wall of the disc (Figs. 2d, 4e, 5e, and 6c). The plane of the needle bevel could be used for minor directional adjustment. Sometimes the direction of the needle bevel was rotated to skirt the obstacle for target especially at L5/S1 level with high iliac crest. The angle and direction of puncture or the entrance point should be adjusted once there was nerve root symptom. Then the C-arm was rotated to the posteroanterior projection, and the needle tip should be near the medial border of the pedicle on the C-arm view (Figs. 2e, 4f, and 5f). Subsequently, contrast solution [9 ml iohexol (300 mg/mL) + 1 ml methylene blue] or methylene blue solution (9 ml 0.9% NaCl + 1 ml methylene blue) was injected to dye the nucleus, and pain reaction or dye leakage was recorded, which sometimes could be omitted.Fig. 6Male patient (47 years) of L4/5 disc herniation with extruding fragment underwent the procedure of PTES. a Sagittal and b axial MR images showed L4/5 disc herniation. During the procedure of PTES, The puncture needle was anteromedially inserted at about 45° angle to the horizontal plane and the tip of puncture needle was in the posterior one third intervertebral space on c lateral C-arm view. Over the guiding wire, stepwise-dilating cannulas were introduced to annulus fibrosus through the foramen d. The thick guiding rod was advanced with a mallet into the foramen after removal of the guiding wire e. An 8.8-mm cannula was pushed over the rod to the facet joint area, docked at the superior facet after the rod was  removed, and pressed down to make the angle of the cannula to the horizontal plane smaller. Through this cannula, a 7.5-mm hand reamer was then introduced to ream the ventral part of facet bone until the resistance faded (press-down enlargement of the foramen) f, which was checked with g posteoanterior C-arm image. The reamer was removed, and the thick guiding rod then was reintroduced and advanced with the aid of a mallet until the tip of the guiding rod was into the herniated fragment h. 
Fig. 6 Male patient (47 years) with L4/5 disc herniation and an extruding fragment underwent the procedure of PTES. a Sagittal and b axial MR images showed L4/5 disc herniation. During the procedure of PTES, the puncture needle was anteromedially inserted at about a 45° angle to the horizontal plane and the tip of the puncture needle was in the posterior one third of the intervertebral space on c lateral C-arm view. Over the guiding wire, stepwise-dilating cannulas were introduced to the annulus fibrosus through the foramen d. The thick guiding rod was advanced with a mallet into the foramen after removal of the guiding wire e. An 8.8-mm cannula was pushed over the rod to the facet joint area, docked at the superior facet after the rod was removed, and pressed down to make the angle of the cannula to the horizontal plane smaller. Through this cannula, a 7.5-mm hand reamer was then introduced to ream the ventral part of the facet bone until the resistance faded (press-down enlargement of the foramen) f, which was checked with g posteroanterior C-arm image. The reamer was removed, and the thick guiding rod was then reintroduced and advanced with the aid of a mallet until its tip was within the herniated fragment h. A 7.5-mm working cannula was advanced over the guiding rod directly to the extruding fragment and the spine endoscope was inserted i. Under j, k, and l endoscopic view, the extruding disc fragment could be observed and the freed nerve root could be visualized after the massive nucleus underneath the nerve root was removed.

After the guiding wire was introduced into the disc space, a stab incision of about 6 mm was made and the 18-gauge needle was inserted along the guiding wire to the facet joint capsule for lidocaine infiltration. Over the guiding wire, stepwise-dilating cannulas of 1.5, 3.5, and 5.5 mm were introduced to the annulus fibrosus through the foramen (Fig. 6d). After the dilating cannulas were removed, the thick guiding rod of 6.3 mm in diameter was introduced over the guiding wire and then advanced with a mallet into the foramen after removal of the guiding wire (Fig. 6e). An 8.8-mm cannula with a one-sided opening was pushed over the rod to the facet joint area, docked at the superior facet after the rod was removed, and pressed down to make the angle of the cannula to the horizontal plane smaller according to the position of the needle tip on the lateral and posteroanterior C-arm views, which indicates the inclination angle of the puncture trajectory. A 7.5-mm hand reamer was then introduced through this cannula to ream away the ventral bone of the superior facet joint and the ligamentum flavum for enlargement of the foramen until resistance disappeared, meaning that the spinal canal had been entered; this was called press-down enlargement of the foramen (Fig. 6f). The tip of the reamer should exceed the medial border of the pedicle to the midline between the pedicle and the spinous process on a posteroanterior C-arm view (Figs. 4g, 5g and 6g), and it should be close to the posterior wall of the target disc on the lateral C-arm view (Fig. 5h), to ensure that the vicinity of the extruding or sequestrated fragment was reached.
Only the posteroanterior C-arm image was needed to determine the position of the reamer during the foramen enlargement; whether the target area had been reached could be evaluated on the endoscopic image, replacing the lateral C-arm image. In case this could not be achieved, the step of reaming the facet was repeated to remove more of the ventral part of the superior facet with the 7.5-mm reamer for further enlargement of the foramen through the 8.8-mm cannula docked at the facet and pressed down further to make the angle of the cannula to the horizontal plane smaller (Fig. 7). In cases with difficult access to the fragment, particularly in the presence of foraminal stenosis, an 8.8-mm reamer was used to enlarge the foramen through a 10-mm cannula. The reamer could be advanced deeper to remove some calcified tissue of the disc in lumbar disc herniation with calcification. If the reamer in the foramen was higher or lower than the disc level on the posteroanterior C-arm image, the position and direction of the reamer could be adjusted through a minor movement of the cannula or the thick guiding rod. Once a nerve root symptom occurred, the surgeon had to stop reaming immediately, adjust the angle and direction of the reamer (move the tip of the cannula laterally and dorsally toward the superior facet and press the cannula down), or move the entrance point medially until the symptom disappeared. The reamer was removed, and the thick guiding rod was reintroduced and advanced with the aid of a mallet until its tip was within the herniated fragment (Fig. 6h). A 7.5-mm working cannula with a one-sided opening was then advanced over the guiding rod directly to the target area (Figs. 1c, d, 2g, 3d, e, 4i, and 6i). For extracanal disc herniation, the reaming step could usually be omitted.

Fig. 7 In case the extruding or sequestrated fragment could not be reached, the step of reaming the facet was repeated to remove more of the ventral part of the superior facet for further enlargement of the foramen through the cannula docked at the facet and pressed down further to make the angle of the cannula to the horizontal plane smaller.

The spine endoscope was introduced (Fig. 6i); generally, the extruding or sequestrated disc fragment could be observed, and the nerve root was then shown on the screen after the fragments were removed (Figs. 2i and 5i). In cases where the nerve root was also visible, a working forceps was introduced through the endoscope to remove the fragments underneath the nerve root and the central dura, and even the contralateral nerve root, under endoscopic view (Figs. 4j, 5i, and 6j, k). In disc protrusion where the annular wall was intact, the annulus was perforated to pull out the herniated nucleus. A massive disc fragment could be grabbed and extracted together with the endoscope. After the scope was reinserted, the nerve root was inspected (Figs. 4k, 5i, and 6l) and the remaining fragments of the disc were removed under endoscopic vision. When calcification was encountered, a 3.5-mm small reamer or an electric shaver was used to grind off the calcified tissue under endoscopic view (Fig. 3f).
The cannula was then rotated cephalad to check the exiting nerve root and rotated back to detect bone fragments that had been reamed off and take them out. The freed nerve root, always pulsating with the heart rate, could be observed at the end of the procedure (Figs. 2i, 4k, 5i, and 6l). The trigger-flex bipolar radiofrequency probe could be used to clean the access way, stop bleeding, ablate the nucleus and annulus, shrink annular tears, and release the scar tissue around the nerve root in recurrent herniation after previous surgical intervention at the index level.

The patients were mobilized 5 h after surgery. A flexible back brace was used for 3 weeks. After leaving the hospital, patients were encouraged to resume their daily routine and were followed up as outpatients at the hospital ward.

Clinical follow-up

Leg pain was evaluated using the 10-point visual analog scale (VAS) preoperatively; immediately, 1 week, 1, 2, 3, and 6 months after surgery; and 1 and 2 years after surgery. The clinical evaluation included a straight leg raising test and a check of the strength of the quadriceps, foot/toe extensors, and triceps.

In a retrospective assessment, the results were graded as excellent, good, fair, or poor according to the MacNab classification [12] (Table 5) 2 years postoperatively. The fair and better grades also required that the patients were willing to choose the same procedure again for the same problem in the future.

Table 5 MacNab classification [12]
  Excellent: No pain; no restriction of activity
  Good: Occasional back or leg pain not interfering with the patient's ability to do his or her normal work, or to enjoy leisure activities
  Fair: Improved functional capacity, but handicapped by intermittent pain of sufficient severity to curtail or modify work or leisure activities
  Poor: No improvement or insufficient improvement to enable an increase in activities, or further operative intervention required

During the follow-up, all complications were registered, including iatrogenic nerve damage, vascular injury, infection, wound-healing problems, thrombosis, and recurrence.
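For readers tabulating outcomes from this protocol, the follow-up schedule and the MacNab grades translate directly into a small data structure. This is a minimal sketch; the names and the "fair or better" helper encode Table 5 and the study's success definition as we read them, and are not code from the study.

```python
# VAS assessment timepoints (months after surgery; 0 = preoperative,
# "immediate" and "1 week" approximated as fractional months).
VAS_TIMEPOINTS_MONTHS = [0, 0.03, 0.25, 1, 2, 3, 6, 12, 24]

# MacNab classification (Table 5), abbreviated.
MACNAB = {
    "excellent": "no pain; no restriction of activity",
    "good": "occasional pain not interfering with work or leisure",
    "fair": "improved function, but pain curtails or modifies work/leisure",
    "poor": "no/insufficient improvement, or further operation required",
}

def is_satisfactory(grade: str) -> bool:
    """Fair or better, which in this study also required willingness
    to choose the same procedure again."""
    return grade in ("excellent", "good", "fair")

assert is_satisfactory("good") and not is_satisfactory("poor")
```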
Statistical analysis

Comparison of preoperative and postoperative VAS scores was performed using a linear mixed effects model with multiple comparison procedures. Statistically significant differences were defined at the 95% confidence level. Statistical evaluation was supported by SPSS software (17.0, SPSS Inc., Chicago, IL, USA).
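The same class of comparison can be reproduced outside SPSS; for example, a linear mixed effects model with a random intercept per patient can be fitted with statsmodels in Python. The sketch below assumes hypothetical long-format toy data (one row per patient per timepoint); it illustrates the model class named in the text, not the authors' exact SPSS specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format VAS data: patient_id, timepoint, score.
df = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "timepoint":  ["pre", "immediate", "2yr"] * 3,
    "vas":        [9, 1, 0, 8, 2, 1, 10, 3, 0],
})

# A random intercept per patient accounts for repeated measures; the fixed
# effects estimate the change from the preoperative score at each timepoint.
model = smf.mixedlm("vas ~ C(timepoint, Treatment('pre'))",
                    data=df, groups=df["patient_id"])
result = model.fit()
print(result.summary())
```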
Results

The 209 patients who met the inclusion criteria were followed for an average of 26.3 ± 2.3 months. There were 116 (55.5%) male patients and 93 (44.5%) female patients. The average age was 46.4 ± 14.9 years for the male patients and 46.8 ± 11.1 years for the female patients.

The mean duration of the operation was 50.9 ± 9.9 min per level. The mean frequency of intraoperative fluoroscopy was 5 (3–14) exposures per level. The mean blood loss was 5 (2–20) ml per level. The mean hospital stay was 3 (2–4) days.

The VAS score of leg pain dropped significantly from 9 (6–10) before the operation to 1 (0–3) (P < 0.001) immediately after the operation and to 0 (0–3) (P < 0.001) 2 years after the operation. However, 16 (7.7%) patients showed a rebound effect of leg pain: a preoperative VAS score of 9 (7–10) dropped to 0 (0–2) immediately after the operation and rose to 7 (5–9) 1 week postoperatively. Fourteen of the 16 patients obtained pain relief within 2 months, with VAS scores of 3 (2–4) 2 months postoperatively and 0.5 (0–3) 2 years postoperatively. The other two underwent reoperation about 1 month after surgery in other hospitals.

At the 2-year follow-up, 95.7% (200/209) of the patients showed excellent or good outcomes, 2.9% (6/209) fair, and 1.4% (3/209) poor. The percentages of excellent or good results were 100% (29/29) for L5/S1 herniations with a high iliac crest, 100% (25/25) for herniations with scoliosis, and 96.8% (30/31) for herniations with calcification. The percentage of fair results was 3.2% (1/31) for herniations with calcification (Table 4). Excellent or good outcomes were seen in 100% (24/24) of recurrent herniations or missed fragments after previous surgical intervention at the index level and adjacent disc herniations after decompression and fusion. In the two patients with cauda equina syndrome, who showed fair outcomes, voiding dysfunction recovered within 1 day after surgery and the strength of the quadriceps, foot/toe extensors, or triceps improved within 3 months.
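The reported leg-pain course, including the rebound subgroup, is easiest to scan as median (range) values per timepoint. The snippet below merely re-encodes the medians and ranges quoted above for plotting or tabulation; it contains no patient-level data.

```python
# Median (min, max) VAS leg-pain scores as reported in the text.
vas_all = {
    "preoperative": (9, 6, 10),
    "immediate": (1, 0, 3),
    "2 years": (0, 0, 3),
}
vas_rebound = {  # 16/209 patients (7.7%)
    "preoperative": (9, 7, 10),
    "immediate": (0, 0, 2),
    "1 week": (7, 5, 9),
    "2 months": (3, 2, 4),    # the 14/16 who settled without reoperation
    "2 years": (0.5, 0, 3),
}
for label, (med, lo, hi) in vas_rebound.items():
    print(f"rebound subgroup, {label}: median {med} (range {lo}-{hi})")
```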
Postoperative complications (Table 6) included three patients who experienced increased weakness of quadriceps or foot/toe extensor strength, which fully recovered about 1 month after surgery. One other patient experienced a low-toxicity disc infection, which was cured by intravenous antibiotics for 2 weeks. There was one case of herniation recurrence 8 months after surgery, which was successfully treated with MIS-TLIF (minimally invasive surgery-transforaminal lumbar interbody fusion). No dural tears had to be treated, no dural leaks occurred after surgery, and no meningoceles or dural cysts in the surgical area were observed on the postoperative MRI scans. No patient had any form of permanent iatrogenic nerve damage or a major complication such as intraoperative vascular injury. There were no deaths.

Table 6 Complications
  Complication                                              Number   Percent
  Increased weakness of quadriceps or foot/toe extensors    3        1.4
  Disc infection                                            1        0.5
  Herniation recurrence                                     1        0.5
  Permanent iatrogenic nerve damage                         0
  Dural tear or dural leak                                  0
  Intraoperative vascular injury                            0
  Death                                                     0
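Table 6's percentages are simple proportions of the 209-patient cohort, which a few lines can verify:

```python
TOTAL = 209
complications = {
    "Increased weakness of quadriceps or foot/toe extensors": 3,
    "Disc infection": 1,
    "Herniation recurrence": 1,
}
for name, n in complications.items():
    print(f"{name}: {n}/{TOTAL} = {100 * n / TOTAL:.1f}%")
# -> 1.4%, 0.5%, 0.5%, matching Table 6.
```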
Discussion

In 2002, Yeung et al. [5] reported the outcomes and complications in 307 cases of posterolateral endoscopic discectomy (YESS) with a minimum follow-up of 1 year. They reported 83.6% excellent or good results and a 9.3% rate of poor results. Their reoperation rate was 5%, with an average follow-up of 19 months. In 2006, the study by Hoogland et al. [10] showed a recurrence rate of 6.9% at 1 year postoperatively in 142 cases treated with posterolateral endoscopic discectomy (TESS); at the 2-year follow-up, 85.4% of the patients had an excellent or good result and 7.7% were not satisfied. These outcomes are comparable to those in our study of posterolateral endoscopic discectomy (PTES): at the 2-year follow-up, 95.7% (200/209) of our patients showed excellent or good results, 2.9% (6/209) fair, and 1.4% (3/209) poor. The recurrence rate was 0.5% (1/209), and the reoperation rate was 1.4% (3/209).

Since 1994, Hoogland et al. [6–9] have used special reamers to enlarge the foramen, a technique named TESS, so that the anterior spinal canal could be made accessible for the endoscope and instruments, including at the L5/S1 level, avoiding injury to the exiting nerve root, a problem that has been reported after the regular transforaminal approach. At that point, nearly all types of disc herniation became accessible with the lateral percutaneous approach [13]. However, in TESS it was complicated to determine the entrance point through C-arm-guided orientation, which depended on measuring the distance to the midline; this was sometimes inaccurate because of the different sizes of patients. In TESS the tip of the puncture should reach the posterior wall of the disc or the superior facet, and this rigid puncture target made it difficult to find the optimal trajectory without repeated directional adjustments. In addition, there were more steps for enlargement of the foramen through step-by-step reaming of the superior facet bone during the procedure. These factors led to high X-ray exposure, long operations, and a steep learning curve.

For the orientation before puncture in PTES, only the posteroanterior C-arm projection was needed when the K-wire was placed transversely across the center of the target disc. A transverse line bisecting the disc was drawn along the K-wire, and the surface projection of the anatomic disc center was located where the transverse line crossed the longitudinal midline drawn by palpation. The puncture should aim at the perpendicular line through the anatomic disc center.

Our study showed that the entrance point was located at the corner of the flat back turning to the lateral side, as high as, more cranial than, or slightly more caudal than the horizontal line of the target disc, reminiscent of "All roads lead to Rome (the herniated fragment)." This had never been mentioned by other scholars, and we named this entrance point "Gu's Point". It was not necessary to take a C-arm projection and measure the distance to the midline to determine the entrance point, so the orientation before puncture in PTES was simplified compared with TESS. On the lateral C-arm view, the target of puncture in PTES should be in the posterior one third of the intervertebral space or in the intracanal area close to the posterior wall of the disc, which made the direction and angle of the puncture flexible. Sometimes the needle was inserted into the disc at a 45° angle to the horizontal plane, which still made it achievable to place the working cannula into the spinal canal (Fig. 6). Thus, with the PTES technique it was easy to find the optimal puncture trajectory.

Although the tip of the puncture needle was in the disc and sometimes higher or lower than the target disc, the 6.3-mm guiding rod could be advanced into the foramen with a mallet after the guiding wire was withdrawn, and a good position could be achieved more easily by minor adjustment of the guiding rod than with the soft puncture needle. When enlargement of the foramen was performed through the cannula docked at the facet during PTES, the cannula was pressed down to make the angle of the reamer to the horizontal plane smaller, so that more bone in the ventral part of the superior facet was cut away (Fig. 6f). We called this press-down enlargement of the foramen; it made it easy to advance the working cannula into the spinal canal between the dura and the disc even if the puncture angle was 45°, and to remove the fragments underneath the nerve root and the central dura, and even the contralateral nerve root (Fig. 5i), without retraction of the intracanal nerve elements. If the reamer in the foramen was higher or lower than the disc level, the position and direction of the reamer could be adjusted through minor movement of the cannula or the thick guiding rod. If the vicinity of the extruding or sequestrated fragment could not be reached, the following steps were taken: (1) repeated reaming of more bone of the ventral part of the superior facet using the 7.5-mm reamer through the 8.8-mm cannula pressed down further, achieved by moving the tip of the cannula laterally and dorsally toward the superior facet (Fig. 7); or (2) using a bigger 8.8-mm reamer through a 10-mm cannula for further enlargement of the foramen in the presence of foraminal stenosis. All these measures made it possible to simplify the orientation and facilitate the puncture in PTES.

During PTES, after orientation and puncture, enlargement of the foramen was performed by one-step reaming with a 7.5-mm reamer instead of step-by-step reaming before endoscopic discectomy. Simple orientation, easy puncture, and fewer steps decreased the number of C-arm projections. Our results showed that the mean frequency of intraoperative fluoroscopy was 5 (3–14) exposures per level. In general, 4 C-arm projections were needed during the procedure of PTES: the orientation of the involved disc center on the posteroanterior image (Fig. 4b);
confirmation of the puncture needle tip reaching the target on the lateral (Fig. 4e) and posteroanterior views (Fig. 4f); and the position of the reamer checked with a posteroanterior image (Fig. 4g) when resistance faded. The endoscopic image, replacing the lateral fluoroscopic projection, could confirm that the vicinity of the extruding or sequestrated fragment had been reached. In extracanal disc herniation, the reaming step could usually be omitted and only three intraoperative fluoroscopic exposures were needed during PTES (Fig. 4b, e, f). The amount of radiation received by the surgeon and the patient could thus be kept as low as possible. Compared with TESS, simple orientation, easy puncture, fewer steps, and few C-arm projections shortened the duration of the PTES procedure and flattened the learning curve. From positioning the patient to closing the skin, the mean duration of the operation was 50.9 ± 9.9 min per level.

L5/S1 herniation with a high iliac crest and disc herniation with scoliosis or calcification are difficult cases for transforaminal endoscopic surgery. In our study, the percentages of excellent or good results were 100% (29/29) for L5/S1 herniations with a high iliac crest, 100% (25/25) for herniations with scoliosis, and 96.8% (30/31) for herniations with calcification. The percentage of fair results was 3.2% (1/31) for herniations with calcification. The following are some key points for overcoming the difficulties in these special cases. In L5/S1 disc herniation with a high iliac crest, the puncture needle is usually blocked by the iliac crest, the sacral promontory, or the transverse process. The patient should be placed hyperkyphotically in a prone position to enlarge the space among the iliac crest, sacral promontory, and transverse process. The obstacle could be skirted by rotating the direction of the needle bevel during the puncture. For disc herniation with scoliosis, the C-arm should be rotated relative to the patient to obtain good posteroanterior and lateral images, and the entrance point and angle of puncture should be adjusted according to the rotation of the lumbar vertebrae. When there was calcification in the disc herniation, some calcified tissue could be reamed away during enlargement of the foramen, and a small reamer or electric shaver was used to grind off the calcified tissue under endoscopic view.

In recurrent herniation after previous surgical intervention at the index level, the lateral transforaminal approach could bypass the scar tissue in the previously operated dorsal area and reduce the risk of dural tears. Hoogland et al. [11] reported that a spinal fluid leak during surgery was suspected in 11 cases, and no dural tears had to be treated in their series of 262 patients with recurrent disc herniation undergoing endoscopic transforaminal discectomy; no postoperative dural leaks, meningoceles, or dural cysts in the surgical area were observed on the postoperative MRI scans obtained in almost all patients. By contrast, the incidence of dural tears requiring treatment in dorsal microdiscectomy is about 10% [14]. In our series of 18 such patients treated by PTES, there were no dural tears, and no postoperative dural leaks, meningoceles, or dural cysts in the surgical area. The extruding or sequestrated disc material could be removed through the working tunnel of the lateral transforaminal approach without interference from scar tissue.
After removal of the protruded material, the nerve root could be inspected and the scar tissue around the nerve root could be released by radiofrequency using the trigger-flex bipolar probe. In comparison, the scar must be removed in a dorsal reintervention, and tedious retraction of the compressed nerve root is needed for the removal of protruded disc tissue, which increases the risk of neurologic injury. When PTES was performed to treat recurrent herniation and adjacent disc herniation after decompression and fusion, there was no need to remove, replace, or extend the previous internal fixation, which significantly decreased the invasiveness, reduced blood loss, and hastened recovery compared with open revision surgery.

The patient remained continuously awake under local anesthesia supplemented with intravenous sedation during PTES, so the surgeon could be alerted to any physical irritation of the neurologic elements. Once a nerve root symptom occurred during puncture or enlargement of the foramen, usually indicating involvement of the exiting nerve root, the maneuver had to be stopped immediately. During puncture, the angle and direction should be adjusted or the entrance point moved medially until the symptom disappeared. During enlargement of the foramen, the surgeon could change the angle and direction of the reamer by moving the tip of the cannula laterally and dorsally toward the superior facet and pressing the cannula down to avoid irritating the exiting nerve root. If a neurologic symptom persisted during reaming of the superior facet, the entrance point should be moved medially. Although no patient had any form of permanent iatrogenic nerve damage in this study, three patients experienced transient weakness of quadriceps or foot/toe extensor strength, which was related to not stopping the operation immediately when the nerve root symptom occurred. In addition, 16 (7.7%) patients showed a rebound effect of leg pain 1 week after the operation; 14 of them obtained pain relief within 2 months, and the other two underwent reoperation after about 1 month in other hospitals. This suggests that observation for at least 2 months should be preferred to immediate reoperation when the rebound effect of leg pain occurs, although further study is needed to identify the possible factors. There was only one case of recurrence in this study, and the recurrence rate of 0.5% (1/209) was significantly lower than that of other reports. Attention should be paid to taking good care of the lumbar spine after surgery, such as not bending frequently, not lifting heavy loads, and not keeping the same position for a long time, which is an important factor in preventing recurrent herniation.

Conclusions

The current data indicate that PTES for lumbar disc herniation is an effective and safe method with simple orientation, easy puncture, reduced steps, and little X-ray exposure, which can be applied to almost all kinds of lumbar disc herniation, including the L5/S1 level with a high iliac crest, herniation with scoliosis or calcification, recurrent herniation, and adjacent disc herniation after decompression and fusion. The learning curve is no longer steep for surgeons.
[ "introduction", "materials|methods", null, null, null, null, "results", "discussion", "conclusion" ]
[ "Lumbar disc herniation", "Transforaminal", "Endoscopic discectomy", "Minimally invasive surgery" ]
Background: The radicular syndrome caused by lumbar disc herniation compressing neurologic elements is a clear indication for surgical decompression. In recent decades, posterolateral transforaminal endoscopic surgery has been developed as a minimally invasive technique to perform discectomy for neurologic decompression under direct view and local anesthesia, including YESS (Yeung Endoscopy Spine Surgery) [1–5] and TESS (Transforaminal Endoscopic Spine Surgery) [6–11]. There was a high percentage of patient satisfaction and a low rate of complications with YESS or TESS for lumbar disc herniation [1–11]. Compared with traditional lumbar discectomy, YESS and TESS have certain advantages: (1) no need for general anesthesia; (2) fewer cases of iatrogenic neurologic damage; (3) no retraction of the intracanal nerve elements; (4) significantly fewer infections; (5) only minimal disturbance of the ligamentum flavum and intracanal capsular structures, and therefore less scar formation; (6) no interference from scar tissue when reaching recurrent herniated tissue after previous dorsal discectomy; and (7) shorter hospital stay, earlier functional recovery, earlier return to work, and higher cost-effectiveness [1–11]. Although nearly all kinds of disc herniation are accessible for TESS with the outside-in technique directly into the spinal canal [2, 3], the complexity of C-arm-guided orientation, the difficulty of finding the optimal trajectory to the target, and the many steps of surgical manipulation led to high X-ray exposure, long operations, and a steep learning curve. We designed an easy posterolateral transforaminal endoscopic decompression technique for radiculopathy secondary to lumbar disc herniation, termed PTES (percutaneous transforaminal endoscopic surgery). The purpose of the study is to describe the technique of PTES and to evaluate its efficacy and safety for the treatment of lumbar disc herniation, including primary herniation, reherniation, intracanal herniation, and extracanal herniation, and to report outcomes and complications.

Methods: The clinical study proposal was approved by the Zhongshan Hospital Ethical Committee (the medical ethics committee of the authors' hospital). Informed consent to participate in the study was obtained from the patients. From January 2012 to June 2013, PTES was performed to treat 209 consecutive patients with intracanal or extracanal herniations with or without extruding or sequestrated fragments, a high iliac crest, scoliosis, calcification, or cauda equina syndrome, including recurrent herniation after previous surgical intervention at the index level and adjacent disc herniation after decompression and fusion. During the same period, additional patients underwent PTES for various other conditions that did not meet the inclusion criteria for this study. The excluded patients had primary diagnoses of chronic discogenic pain, foraminal stenosis, lateral recess stenosis, or pyogenic discitis.
Inclusion criteria were as follows: (1) primarily radicular pain of one leg; (2) disc herniation at one level from L2 to S1 (Table 1) proven by magnetic resonance imaging (MRI) or computed tomography (CT) and corresponding to the neurologic findings; (3) a clear nerve root tension sign with a straight leg raising sign of less than 45°, or a positive neurologic finding in terms of an absent knee or ankle reflex, or corresponding dermatomal numbness or weakness of the quadriceps, foot/toe extensors, or triceps; and (4) failed conservative treatment in all patients.

Table 1 Location of lumbar disc herniation according to level
  Herniation level   No. of patients   Percent
  L5/S1              83                39.7
  L4/L5              81                38.8
  L3/L4              32                15.3
  L2/L3              13                6.2
  Total              209               100

This study included 181 intracanal and 28 extracanal (foraminal and extraforaminal) herniations (Table 2). Eighteen recurrent herniations (Fig. 1), three missed fragments after previous surgical intervention at the index level (Table 3), and three adjacent disc herniations after decompression and fusion were also included. In the group that had undergone prior surgical intervention, there were 14 laminectomies and discectomies, 2 endoscopic discectomies, 5 decompressions and fusions, 2 radiofrequency ablations, and 1 ozone injection. The study also included 29 L5/S1 herniations with a high iliac crest (Fig. 2), 25 herniations with scoliosis, and 31 with calcification (Fig. 3) (Table 4); cauda equina syndrome occurred in 2 patients.

Table 2 Location of the lumbar disc herniation in relation to the pedicle and spinal canal
  Location of herniation   No. of patients   Percent
  Intracanal               181               86.6
  Foraminal                16                7.7
  Extraforaminal           12                5.7
  Total                    209               100

Table 3 Prior surgical intervention undertaken at the index level
  Prior procedure at index level   No. of patients   Percent
  Laminectomy and discectomy       14                6.7
  Endoscopic discectomy            2                 1.0
  Decompression and fusion         2                 1.0
  Radiofrequency ablation          2                 1.0
  Ozone injection                  1                 0.5

Fig. 1 a Sagittal and b axial MR images showed L4/5 recurrent herniation after decompression and fusion in a 61-year-old woman. During the procedure of PTES, c posteroanterior and d lateral C-arm images confirmed that a 7.5-mm working cannula was advanced directly to the protruding fragment.

Fig. 2 a Posteroanterior X-ray picture and b sagittal and c axial MR images showed L5/S1 disc herniation with a high iliac crest in a 44-year-old man. The tip of the puncture needle was in the posterior one third of the intervertebral space on d lateral C-arm view and beyond the medial border of the pedicle on e posteroanterior C-arm view. During the procedure of PTES, a 7.5-mm working cannula was advanced over the guiding rod to the vicinity of the sequestrated fragment on f lateral and g posteroanterior C-arm views after enlargement of the foramen. i Endoscopic picture showed that the nerve root was exposed for complete decompression after removal of h sequestrated disc fragments, which was confirmed on j sagittal and k axial MR images 1 week after the operation. After 2 years of follow-up, l sagittal and m axial MR images showed no recurrent herniation.

Fig. 3 Male patient of 42 years with L4/5 disc herniation and calcification underwent the procedure of PTES. a Sagittal and b axial MR images and c CT scan showed calcification in the L4/5 disc herniation. During the procedure of PTES, d posteroanterior and e lateral C-arm images confirmed that a 7.5-mm working cannula was advanced directly to the protruding fragment.
A 3.5-mm small reamer was used to grind off g calcified tissue under f endoscopic view, which was confirmed on h CT scan after surgery.

Table 4 Outcome according to the MacNab classification
                            All patients   L5/S1 with high iliac crest   Scoliosis   Calcification
  No. of patients           209            29                            25          31
  Excellent or good n (%)   200 (95.7)     29 (100)                      25 (100)    30 (96.8)
  Fair n (%)                6 (2.9)        0 (0)                         0 (0)       1 (3.2)
  Poor n (%)                3 (1.4)        0 (0)                         0 (0)       0 (0)

Pre- and postoperative imaging

All patients were evaluated before the procedure by CT and MRI to determine the type of disc herniation (intracanal or extracanal) and whether calcification was present. An intracanal herniation is defined as having its apex between the bilateral pedicles. A foraminal herniation has its apex within the mediolateral borders of the adjacent pedicle, and the apex is lateral to the bordering pedicle in an extraforaminal herniation. Posteroanterior and lateral radiographs were obtained to detect scoliosis or a high iliac crest, defined as the lower plate of the L4 vertebral body not being higher than the line between the highest points of the bilateral iliac crests (Fig. 2a). After the treatment, MRI was performed to assess the neurological decompression and to exclude dural cysts, myelomeningocele, dural tears or spinal fluid leaks, and reherniation.
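These definitions reduce to a decision rule on where the herniation apex lies relative to the pedicle's mediolateral borders on the axial image. The following sketch makes the rule explicit; the coordinate convention (distance from the midline on the affected side) and the example values are hypothetical, not taken from the study.

```python
def classify_herniation(apex_mm: float, pedicle_medial_mm: float,
                        pedicle_lateral_mm: float) -> str:
    """Classify by the apex position, measured from the midline outward."""
    if apex_mm < pedicle_medial_mm:
        return "intracanal"       # apex between the bilateral pedicles
    if apex_mm <= pedicle_lateral_mm:
        return "foraminal"        # apex within the pedicle's borders
    return "extraforaminal"       # apex lateral to the bordering pedicle

# Hypothetical pedicle spanning 15-25 mm from the midline:
for apex in (10, 20, 30):
    print(apex, "->", classify_herniation(apex, 15, 25))
```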
Surgical procedure

Anesthesia consisted of 1% local lidocaine infiltration, supplemented with conscious sedation. The patient was placed in a prone position with hyperkyphotic bolsters placed under the abdomen on a radiolucent table, especially in cases of L5/S1 herniation with a high iliac crest. Biplane fluoroscopy was used for radiographic imaging. Good posteroanterior and lateral images should be obtained by rotating the C-arm relative to the patient, whose back is positioned parallel to the horizontal plane, especially in scoliosis cases.

The midline is marked on the skin surface by palpating the spinous processes. Then a transverse line bisecting the disc is drawn along the K-wire, which is placed transversely across the center of the target disc on the posteroanterior image (Fig. 4b). The surface marking of the anatomic disc center is identified by the intersection of the transverse line and the longitudinal midline (Figs. 4c, d and 5c, d), which is used as the aiming reference point of the puncture.

Fig. 4 Male patient of 65 years with L5/S1 disc herniation underwent the procedure of PTES. a Sagittal and axial MR images showed L5/S1 disc herniation. A transverse line bisecting the disc was drawn along the metal rod which was placed transversely across the center of the target disc on b posteroanterior C-arm view. c Photography showed the surface marking of the anatomic disc center identified by the intersection of the transverse line and the longitudinal midline, and the entrance point of puncture located at the corner of the flat back turning to the lateral side. The puncture needle was inserted at about a 35° angle (25°–45°) to the horizontal plane anteromedially toward the perpendicular line through the anatomic disc center on d schematic diagram. The tip of the puncture needle was in the posterior intervertebral space close to the spinal canal on e lateral C-arm view and near the medial border of the pedicle on f posteroanterior C-arm view. Through the 8.8-mm cannula, a 7.5-mm hand reamer was introduced to ream facet bone until the resistance faded, which was checked with g posteroanterior image. h Lateral C-arm view showed that the tip of the thick guiding rod was positioned at the herniated fragment. A 7.5-mm working cannula was advanced over the guiding rod directly to the sequestrated fragment on i posteroanterior fluoroscopic image. Under j, k endoscopic view, the fragments underneath the nerve root were removed and the freed nerve root could be visualized. l Photography showed the minimally invasive result 1 month after surgery.

Fig. 5 a Sagittal and b axial MR images showed L4/5 disc herniation in a 41-year-old man. c, d Photography showed the surface marking of the anatomic disc center identified by the intersection of the transverse L4/5 level line and the longitudinal midline, and the entrance point of puncture located at the corner of the flat back turning to the lateral side.
The tip of the puncture needle was in the posterior one third of the intervertebral space on the e lateral C-arm view and beyond the medial border of the pedicle on the f posteroanterior C-arm view. A 7.5-mm hand reamer was introduced through the cannula to ream away the ventral bone of the superior facet joint and the ligamentum flavum for enlargement of the foramen (press-down enlargement of the foramen) until the resistance disappeared. The tip of the reamer should exceed the medial border of the pedicle up to the midline between the pedicle and the spinous process on the g posteroanterior C-arm view and should be close to the posterior wall of the target disc on the h lateral C-arm view. i Endoscopic picture showing that both the ipsilateral and the contralateral nerve roots were exposed for complete decompression after removal of the j sequestrated disc fragments
In practice, we found that the entrance point was located at the corner where the flat back turns to the lateral side (Figs. 4c, d and 5c, d), which we named "Gu's Point"; its location does not depend on patient size, gender, or level. This point is as high as, more cranial than, or slightly more caudal than the horizontal line of the target disc. In patients with scoliosis, the entrance point was adjusted medially or laterally according to the vertebral rotation. The skin, subcutaneous tissue, and trajectory tract were then infiltrated with local anesthetic, and an 18-gauge needle was inserted at about a 35° angle (25°–45°) to the horizontal plane, anteromedially toward the perpendicular line through the anatomic disc center (Fig. 4d); the trajectory was adjusted according to the vertebral rotation in scoliosis. Once the needle reached the target, signaled by the disappearance of resistance, a lateral C-arm view was taken to ensure that the tip lay in the posterior one third of the intervertebral space or in the intracanal area close to the posterior wall of the disc (Figs. 2d, 4e, 5e, and 6c). The plane of the needle bevel could be used for minor directional adjustments, and the bevel was sometimes rotated to skirt an obstacle on the way to the target, especially at the L5/S1 level with a high iliac crest. The angle and direction of the puncture, or the entrance point itself, were adjusted whenever nerve root symptoms occurred. The C-arm was then rotated to the posteroanterior projection, on which the needle tip should lie near the medial border of the pedicle (Figs. 2e, 4f, and 5f). Subsequently, contrast solution [9 ml iohexol (300 mg/ml) + 1 ml methylene blue] or methylene blue solution (9 ml 0.9% NaCl + 1 ml methylene blue) was injected to dye the nucleus, and any pain reaction or dye leakage was recorded; this step could sometimes be omitted.
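The relation between the insertion angle and the entrance point follows simple trigonometry: for a target at depth d below the skin plane of the prone patient, a straight trajectory at angle θ to the horizontal plane starts roughly d/tan(θ) lateral to the perpendicular line through the anatomic disc center. The sketch below is purely illustrative, with hypothetical example values; in the study the entrance point was located anatomically at Gu's Point rather than by calculation.

import math

def lateral_offset_mm(target_depth_mm: float, angle_deg: float) -> float:
    """Horizontal distance from the entrance point to the perpendicular
    line through the anatomic disc center, for a straight trajectory at
    angle_deg to the horizontal plane."""
    return target_depth_mm / math.tan(math.radians(angle_deg))

# Hypothetical example: a disc center 90 mm deep, punctured at 25°, 35°, 45°
for angle in (25, 35, 45):
    print(f"{angle} deg -> entry about {lateral_offset_mm(90, angle):.0f} mm lateral")
# Steeper angles bring the entry point closer to the midline, which is why
# the quoted 25°–45° range tolerates some variation in the entry site.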
Fig. 6 Male patient (47 years) with L4/5 disc herniation and an extruding fragment who underwent PTES. a Sagittal and b axial MR images showed the L4/5 disc herniation. During the PTES procedure, the puncture needle was inserted anteromedially at about a 45° angle to the horizontal plane, and the tip of the puncture needle was in the posterior one third of the intervertebral space on the c lateral C-arm view. Over the guiding wire, stepwise-dilating cannulas were introduced to the annulus fibrosus through the foramen (d). The thick guiding rod was advanced with a mallet into the foramen after removal of the guiding wire (e). An 8.8-mm cannula was pushed over the rod to the facet joint area, docked at the superior facet after the rod was removed, and pressed down to decrease the angle of the cannula to the horizontal plane. Through this cannula, a 7.5-mm hand reamer was introduced to ream the ventral part of the facet bone until the resistance faded (press-down enlargement of the foramen) (f), which was checked on the g posteroanterior C-arm image. The reamer was removed, and the thick guiding rod was reintroduced and advanced with the aid of a mallet until its tip was in the herniated fragment (h). A 7.5-mm working cannula was advanced over the guiding rod directly to the extruding fragment, and the spine endoscope was inserted (i). Under j, k, and l endoscopic view, the extruding disc fragment could be observed, and the freed nerve root could be visualized after the massive nucleus underneath the nerve root was removed

After the guiding wire was introduced into the disc space, a stab incision of about 6 mm was made, and the 18-gauge needle was inserted along the guiding wire to the facet joint capsule for lidocaine infiltration. Over the guiding wire, stepwise-dilating cannulas of 1.5, 3.5, and 5.5 mm were introduced to the annulus fibrosus through the foramen (Fig. 6d). After the dilating cannulas were removed, a thick guiding rod of 6.3 mm in diameter was introduced over the guiding wire and then, once the wire was withdrawn, advanced with a mallet into the foramen (Fig. 6e). An 8.8-mm cannula with a one-sided opening was pushed over the rod to the facet joint area, docked at the superior facet after the rod was removed, and pressed down to decrease the angle of the cannula to the horizontal plane, guided by the position of the needle tip on the lateral and posteroanterior C-arm views, which indicates the inclination angle of the puncture trajectory. A 7.5-mm hand reamer was then introduced through this cannula to ream away the ventral bone of the superior facet joint and the ligamentum flavum for enlargement of the foramen until the resistance disappeared, indicating that the spinal canal had been entered; we called this press-down enlargement of the foramen (Fig. 6f). The tip of the reamer should exceed the medial border of the pedicle up to the midline between the pedicle and the spinous process on the posteroanterior C-arm view (Figs. 4g, 5g and 6g) and should be close to the posterior wall of the target disc on the lateral C-arm view (Fig. 5h), to ensure that the vicinity of the extruding or sequestrated fragment has been reached.
Only the posteroanterior C-arm image was needed to determine the position of the reamer during foramen enlargement; whether the target area had been reached could be judged from the endoscopic image instead of a lateral C-arm image. If the target could not be reached, the facet-reaming step was repeated to remove more of the ventral part of the superior facet with the 7.5-mm reamer, through the 8.8-mm cannula docked at the facet and pressed down further to decrease the angle of the cannula to the horizontal plane (Fig. 7). In cases with difficult access to the fragment, particularly in the presence of foraminal stenosis, an 8.8-mm reamer was used to enlarge the foramen through a 10-mm cannula. For lumbar disc herniation with calcification, the reamer could be advanced deeper to remove some of the calcified disc tissue. If the reamer in the foramen sat higher or lower than the disc level on the posteroanterior C-arm image, its position and direction could be adjusted by a minor movement of the cannula or the thick guiding rod. Whenever nerve root symptoms occurred, the surgeon had to stop reaming immediately and either adjust the angle and direction of the reamer (moving the tip of the cannula laterally and dorsally toward the superior facet and pressing the cannula down) or move the entrance point medially until the symptoms disappeared; these adjustment rules are summarized in the sketch below. The reamer was removed, and the thick guiding rod was reintroduced and advanced with the aid of a mallet until its tip was in the herniated fragment (Fig. 6h). A 7.5-mm working cannula with a one-sided opening was then advanced over the guiding rod directly to the target area (Figs. 1c, d, 2g, 3d, e, 4i, and 6i). For extracanal disc herniation, the reaming step could usually be omitted.

Fig. 7 If the extruding or sequestrated fragment could not be reached, the facet-reaming step was repeated to remove more of the ventral part of the superior facet for further enlargement of the foramen, through the cannula docked at the facet and pressed down further to decrease the angle of the cannula to the horizontal plane

The spine endoscope was introduced (Fig. 6i); generally, the extruding or sequestrated disc fragment could be observed, and the nerve root appeared on the screen once the fragments were removed (Figs. 2i and 5i). In cases where the nerve root was also visible, working forceps were introduced through the endoscope to remove the fragments underneath the nerve root and the central dura, and even underneath the contralateral nerve root, under endoscopic view (Figs. 4j, 5i, and 6j, k). In disc protrusion with an intact annular wall, the annulus was perforated to pull out the herniated nucleus. A massive disc fragment could be grabbed and extracted together with the endoscope; after the scope was reinserted, the nerve root was inspected (Figs. 4k, 5i, and 6l) and the remaining disc fragments were removed under endoscopic vision. When calcification was encountered, a 3.5-mm small reamer or an electric shaver was used to grind off the calcified tissue under endoscopic view (Fig. 3f).
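The intraoperative adjustment rules described above form a small decision procedure. The following sketch writes them out explicitly; the function name and the discrete encoding of the fluoroscopic findings are hypothetical, and the real decisions rest on the surgeon's fluoroscopic and clinical judgment, not on software.

def next_reaming_step(nerve_root_symptom: bool,
                      reamer_level: str,       # "at_disc", "higher", or "lower"
                      target_reached: bool,
                      foraminal_stenosis: bool) -> str:
    """Illustrative summary of the reaming adjustments described in the text."""
    if nerve_root_symptom:
        # Highest priority: stop immediately, then redirect laterally and
        # dorsally toward the superior facet, or move the entry point medially.
        return "stop reaming; adjust reamer angle/direction or move entrance point medially"
    if reamer_level != "at_disc":
        # Reamer above or below the disc level on the posteroanterior image.
        return "adjust position via minor movement of the cannula or thick guiding rod"
    if not target_reached:
        if foraminal_stenosis:
            return "enlarge further with an 8.8-mm reamer through a 10-mm cannula"
        return "repeat facet reaming with the 7.5-mm reamer, cannula pressed down further"
    return "exchange reamer for the thick guiding rod and advance the working cannula"

print(next_reaming_step(False, "at_disc", False, True))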
The cannula was then rotated cephalad to check the exiting nerve root and rotated back to detect any bone fragments that had been reamed off and remove them. The freed nerve root, always pulsating with the heart rate, could be observed at the end of the procedure (Figs. 2i, 4k, 5i, and 6l). A trigger-flex bipolar radiofrequency probe could be used to clean the access way, stop bleeding, ablate the nucleus and annulus, shrink annular tears, and, in recurrent herniation after previous surgical intervention at the index level, release the scar tissue around the nerve root. The patients were mobilized 5 h after surgery. A flexible back brace was used for 3 weeks. After leaving the hospital, patients were encouraged to resume their daily routine and were followed up as outpatients at the hospital.
Clinical follow-up

Leg pain was evaluated using the 10-point visual analog scale (VAS) preoperatively, immediately after surgery, and at 1 week, 1, 2, 3, and 6 months, and 1 and 2 years after surgery. The clinical evaluation included a straight leg raising test and an assessment of the strength of the quadriceps, foot/toe extensors, and triceps. In a retrospective assessment 2 years postoperatively, the results were graded as excellent, good, fair, or poor according to the MacNab classification [12] (Table 5). The fair and better grades additionally required that the patient would be willing to choose the same procedure again for the same problem in the future.

Table 5 MacNab classification [12]
Excellent: no pain; no restriction of activity
Good: occasional back or leg pain not interfering with the patient's ability to do his or her normal work or to enjoy leisure activities
Fair: improved functional capacity, but handicapped by intermittent pain of sufficient severity to curtail or modify work or leisure activities
Poor: no improvement or insufficient improvement to enable an increase in activities, or further operative intervention required

During follow-up, all complications were registered, including iatrogenic nerve damage, vascular injury, infection, wound-healing problems, thrombosis, and recurrence.
Statistical analysis

Preoperative and postoperative VAS scores were compared using a linear mixed-effects model with multiple-comparison procedures. Statistical significance was defined at the 95% confidence level. SPSS software (version 17.0, SPSS Inc., Chicago, IL, USA) was used for the statistical evaluation.
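As an illustration of this kind of repeated-measures analysis (not the authors' actual SPSS procedure), a minimal sketch in Python using statsmodels, assuming a long-format table with one VAS measurement per patient per time point; the column names and toy values are hypothetical:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per assessment.
# 'timepoint' would take values such as 'preop', 'immediate', '1w', ..., '2y'.
df = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "timepoint":  ["preop", "immediate", "2y"] * 3,
    "vas":        [9, 1, 0, 8, 2, 1, 10, 1, 0],
})

# Linear mixed-effects model: fixed effect of time point, random intercept
# per patient to account for the repeated measurements within each patient.
model = smf.mixedlm("vas ~ C(timepoint, Treatment('preop'))",
                    data=df, groups=df["patient_id"])
result = model.fit()
print(result.summary())  # coefficients estimate the change from baseline

Pairwise contrasts between time points, with an adjustment for multiplicity, would then be derived from the fitted model; with toy data this small the fit serves only to show the structure of the analysis.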
Results

The 209 patients who met the inclusion criteria were followed for an average of 26.3 ± 2.3 months. There were 116 (55.5%) male patients and 93 (44.5%) female patients.
The average age was 46.4 ± 14.9 years for the male patients and 46.8 ± 11.1 years for the female patients. The mean duration of the operation was 50.9 ± 9.9 min per level. The mean frequency of intraoperative fluoroscopy was 5 (3–14) exposures per level. The mean blood loss was 5 (2–20) ml per level. The mean hospital stay was 3 (2–4) days. The VAS score for leg pain dropped significantly from 9 (6–10) before the operation to 1 (0–3) immediately after the operation (P < 0.001) and to 0 (0–3) 2 years after the operation (P < 0.001). However, 16 (7.7%) patients experienced a rebound of leg pain: their preoperative VAS score of 9 (7–10) dropped to 0 (0–2) immediately after the operation but rose to 7 (5–9) 1 week postoperatively. Fourteen of these 16 patients obtained pain relief within 2 months, with VAS scores of 3 (2–4) at 2 months and 0.5 (0–3) at 2 years postoperatively; the other two underwent reoperation about 1 month after surgery at other hospitals. At the 2-year follow-up, 95.7% (200/209) of the patients showed excellent or good outcomes, 2.9% (6/209) fair, and 1.4% (3/209) poor. The percentages of excellent or good results were 100% (29/29) for L5/S1 herniations with a high iliac crest, 100% (25/25) for herniations with scoliosis, and 96.8% (30/31) for herniations with calcification; the single fair result, 3.2% (1/31), occurred in a herniation with calcification (Table 4). Excellent or good outcomes were achieved in 100% (24/24) of recurrent herniations or missed fragments after previous surgical intervention at the index level and of adjacent-level disc herniations after decompression and fusion. In the two patients with cauda equina syndrome, who showed fair outcomes, voiding dysfunction recovered within 1 day after surgery, and the strength of the quadriceps, foot/toe extensors, or triceps improved within 3 months. Postoperative complications (Table 6) included three patients with increased weakness of the quadriceps or foot/toe extensors, which fully recovered about 1 month after surgery, and one patient with a low-grade disc infection, which was cured by intravenous antibiotics for 2 weeks. There was one recurrence of herniation 8 months after surgery, which was successfully treated with MIS-TLIF (minimally invasive transforaminal lumbar interbody fusion). No dural tears had to be treated, no dural leaks occurred after surgery, and no meningoceles or dural cysts were observed in the surgical area on postoperative MRI scans. No patient had any form of permanent iatrogenic nerve damage or a major complication such as intraoperative vascular injury. There were no deaths.

Table 6 Complications
Complication | Number | Percent
Increased weakness of quadriceps or foot/toe extensor strength | 3 | 1.4
Disc infection | 1 | 0.5
Herniation recurrence | 1 | 0.5
Permanent iatrogenic nerve damage | 0 | 0
Dural tear or dural leak | 0 | 0
Intraoperative vascular injury | 0 | 0
Death | 0 | 0

Discussion

In 2002, Yeung et al. [5] reported the outcomes and complications of 307 posterolateral endoscopic discectomies (YESS) with a minimum follow-up of 1 year: 83.6% excellent or good results, 9.3% poor results, and a reoperation rate of 5% at an average follow-up of 19 months. In 2006, a study by Hoogland et al. [10] of 142 cases treated with posterolateral endoscopic discectomy (TESS) showed a recurrence rate of 6.9% at 1 year postoperatively; at the 2-year follow-up, 85.4% of the patients had an excellent or good result and 7.7% were not satisfied.
These outcomes are comparable to those in our study of PTES. At 2-year follow-up, 95.7% (200/209) of the patients showed excellent or good results, 2.9% (6/209) fair, and 1.4% (3/209) poor. The recurrence rate was 0.5% (1/209), and the reoperation rate was 1.4% (3/209). Since 1994, Hoogland et al. [6–9] have used special reamers to enlarge the foramen, a technique named TESS, so that the anterior spinal canal could be made accessible to the endoscope and instruments even at the L5/S1 level, avoiding injury to the exiting nerve root, a problem that has been reported with the regular transforaminal approach. At that point, nearly all types of disc herniation became accessible with the lateral percutaneous approach [13]. However, in TESS it was complicated to determine the entrance point through C-arm-guided orientation, which depended on measuring the distance to the midline and was sometimes inaccurate because of differences in patient size. In TESS, the tip of the puncture needle must reach the posterior wall of the disc or the superior facet, and this rigid puncture target made it difficult to find the optimal trajectory without repeated directional adjustments. In addition, enlargement of the foramen required more steps, with step-by-step reaming of the superior facet bone during the procedure. These factors led to much X-ray exposure, long duration of operation, and a steep learning curve. For the orientation before puncture in PTES, only the posteroanterior C-arm projection was needed when the K-wire was placed transversely across the center of the target disc. A transverse line bisecting the disc was drawn along the K-wire, and the surface projection of the anatomic disc center was located where the transverse line crossed the longitudinal midline drawn by palpation. The puncture was aimed at the perpendicular line through the anatomic disc center. Our study showed that the entrance point was located at the corner where the flat back turns to the lateral side, level with, more cranial than, or slightly more caudal than the horizontal line of the target disc, reminiscent of "All roads lead to Rome (the herniated fragment)." This has never been mentioned by other scholars, and we named this entrance point "Gu's Point". It was not necessary to take a C-arm projection and measure the distance to the midline to determine the entrance point; the orientation before puncture in PTES was thus simplified compared with TESS. On the lateral C-arm view, the puncture target in PTES should lie in the posterior one third of the intervertebral space or in the intracanal area close to the posterior wall of the disc, which made the direction and angle of the puncture flexible. Sometimes the needle was inserted into the disc at a 45° angle to the horizontal plane, and it was still possible to place the working cannula into the spinal canal (Fig. 6). Thus, with the PTES technique it was easy to find the optimal puncture trajectory. Although the tip of the puncture needle was in the disc and sometimes higher or lower than the target disc, the 6.3-mm-diameter guiding rod could be advanced into the foramen with a mallet after the guiding wire was withdrawn, and a good position was more easily achieved by minor adjustment of the guiding rod than with the soft puncture needle.
When enlargement of the foramen was performed through the cannula docked at the facet during PTES, the cannula was pressed down to reduce the angle of the reamer to the horizontal plane so that more bone in the ventral part of the superior facet was cut away (Fig. 6f). We called this press-down enlargement of the foramen; it made it easy to advance the working cannula into the spinal canal between the dura and the disc, even if the angle of puncture was 45°, and to remove the fragments underneath the nerve root and the central dura, and even the contralateral nerve root (Fig. 5i), without retraction of the intracanal nerve elements. If the reamer in the foramen was higher or lower than the disc level, the position and direction of the reamer could be adjusted through minor movement of the cannula or a thick guiding rod. If the vicinity of the extruding or sequestrated fragment could not be reached, the following steps were taken: (1) repeated reaming of more bone from the ventral part of the superior facet using a 7.5-mm reamer through the 8.8-mm cannula pressed down further, which could be achieved by moving the tip of the cannula laterally and dorsally toward the superior facet (Fig. 7), or (2) using a bigger 8.8-mm reamer through a 10-mm cannula for further enlargement of the foramen in the presence of foraminal stenosis. All these measures made it possible to simplify the orientation and facilitate the puncture in PTES. During PTES, after orientation and puncture, enlargement of the foramen was performed by one-step reaming with a 7.5-mm reamer, instead of step-by-step reaming, before endoscopic discectomy. Simple orientation, easy puncture, and reduced steps decreased the number of C-arm projections. Our results showed that the mean frequency of intraoperative fluoroscopy was 5 (3–14) times per level. In general, four C-arm projections were needed during the PTES procedure: orientation to the involved disc center on the posteroanterior image (Fig. 4b), confirmation that the tip of the puncture needle had reached the target on the lateral (Fig. 4e) and posteroanterior (Fig. 4f) views, and a check of the position of the reamer on a posteroanterior image (Fig. 4g) when resistance faded. The endoscopic image, replacing the lateral fluoroscopic projection, could confirm that the vicinity of the extruding or sequestrated fragment had been reached. In extracanal disc herniation, the reaming procedure could usually be omitted, and only three intraoperative fluoroscopic projections were needed during PTES (Fig. 4b, e, f). The amount of radiation received by the surgeon and the patient could thus be kept as low as possible. Compared with TESS, simple orientation, easy puncture, reduced steps, and few C-arm projections shortened the duration of the PTES procedure and flattened the learning curve. From positioning the patient to closing the skin, the mean duration of the operation was 50.9 ± 9.9 min per level. L5/S1 herniation with high iliac crest and disc herniation with scoliosis or calcification are difficult cases for transforaminal endoscopic surgery. In our study, the percentages of excellent or good results were 100% (29/29) for L5/S1 herniations with high iliac crest, 100% (25/25) for herniations with scoliosis, and 96.8% (30/31) for herniations with calcification. The percentage of fair results was 3.2% (1/31) for herniations with calcification. The following are some key points for overcoming the difficulties in these special cases.
In L5/S1 disc herniation with high iliac crest, the puncture needle was usually blocked by the iliac crest, sacral promontory, or transverse process. The patient should be placed prone in a hyperkyphotic position to enlarge the space among the iliac crest, sacral promontory, and transverse process. The obstacle could be skirted by rotating the bevel of the needle during the puncture. For disc herniation with scoliosis, the C-arm should be rotated relative to the patient to obtain good posteroanterior and lateral images, and the entrance point and angle of puncture should be adjusted according to the rotation of the lumbar vertebrae. When there was calcification in the disc herniation, some calcified tissue could be reamed away during enlargement of the foramen, and the small reamer or electric shaver was used to grind off the remaining calcified tissue under endoscopic view. In recurrent herniation after previous surgical intervention at the index level, the lateral transforaminal approach could bypass the scar tissue in the previous dorsal area and reduce the risk of dural tears. Hoogland et al. [11] reported that a spinal fluid leak during surgery was suspected in 11 cases, but no dural tears required treatment, in their series of 262 patients with recurrent disc herniation undergoing endoscopic transforaminal discectomy; no dural leaks occurred after surgery, and no meningoceles or dural cysts in the surgical area were observed on the postoperative MRI scans, which were obtained in almost all patients. By contrast, the incidence of dural tears requiring treatment in dorsal microdiscectomy is about 10% [14]. In our series of 18 such patients treated by PTES, there were no dural tears, and no dural leaks, meningoceles, or dural cysts in the surgical area after surgery. The extruding or sequestrated disc material could be removed through the working tunnel of the lateral transforaminal approach without interference from scar tissue. After removal of the protruded material, the nerve root could be inspected and the scar tissue around the nerve root released by radiofrequency using the trigger-flex bipolar probe. In comparison, in dorsal reintervention the scar must be removed, and tedious retraction of the compressed nerve root is needed to remove the protruded disc tissue, which increases the risk of neurologic injury. When PTES was performed to treat recurrent herniation or adjacent disc herniation after decompression and fusion, there was no need to remove, replace, or extend the previous internal fixation, which significantly decreased the invasiveness, reduced blood loss, and hastened recovery compared with open revision surgery. The patient remained awake under local anesthesia supplemented with intravenous sedation during PTES, so the surgeon could be alerted to any physical irritation of the neurologic elements. If a nerve root symptom occurred during puncture or enlargement of the foramen, which usually indicated involvement of the exiting nerve root, the procedure had to be stopped immediately. During puncture, the angle and direction were adjusted, or the entrance point moved medially, until the symptom disappeared.
The surgeon could change the angle and direction of the reamer by moving the tip of the cannula laterally and dorsally toward the superior facet and pressing the cannula down, to avoid irritation of the exiting nerve root during enlargement of the foramen. If a neurologic symptom still occurred during reaming of the superior facet, the entrance point had to be moved medially. Although no patient had any form of permanent iatrogenic nerve damage in this study, three patients experienced transient weakness of the quadriceps or foot/toe extensors, which was related to not stopping the operation immediately when the nerve root symptom occurred. In addition, 16 (7.7%) patients experienced a rebound of leg pain 1 week after the operation; 14 of them obtained pain relief within 2 months, and the other two underwent reoperation in other hospitals after about 1 month. This suggests that observation for at least 2 months should be preferred to immediate reoperation when a rebound of leg pain occurs, although further study is needed to identify the possible contributing factors. There was only one case of recurrence in this study, and the recurrence rate of 0.5% (1/209) was markedly lower than that in other reports. Patients should take good care of the lumbar spine after surgery, avoiding frequent bending, lifting heavy loads, and keeping the same position for a long time, which is an important factor in preventing recurrent herniation. Conclusions: The current data indicate that PTES for lumbar disc herniation is an effective and safe method with simple orientation, easy puncture, reduced steps, and little X-ray exposure, which can be applied to almost all kinds of lumbar disc herniation, including L5/S1 level with high iliac crest, herniation with scoliosis or calcification, recurrent herniation, and adjacent disc herniation after decompression and fusion. The learning curve is no longer steep for surgeons.
Background: We designed an easy posterolateral transforaminal endoscopic decompression technique, termed PTES, for radiculopathy secondary to lumbar disc herniation. The purpose of the study is to describe the technique of PTES and evaluate the efficacy and safety for treatment of lumbar disc herniation including primary herniation, reherniation, intracanal herniation, and extracanal herniation and to report outcome and complications. Methods: PTES was performed to treat 209 cases of intracanal or extracanal herniations with or without extruding or sequestrated fragment, high iliac crest, scoliosis, calcification, or cauda equina syndrome, including recurrent herniation after previous surgical intervention at the index level or adjacent disc herniation after decompression and fusion. Preoperative and postoperative leg pain was evaluated using the 10-point visual analog scale (VAS), and the results were determined to be excellent, good, fair, or poor according to the MacNab classification at 2-year follow-up. Results: The patients were followed for an average of 26.3 ± 2.3 months. The VAS score of leg pain significantly dropped from 9 (6-10) before the operation to 1 (0-3) (P < 0.001) immediately after the operation and to 0 (0-3) (P < 0.001) 2 years after the operation. At 2-year follow-up, 95.7% (200/209) of the patients showed excellent or good outcomes, 2.9% (6/209) fair, and 1.4% (3/209) poor. No patient had any form of permanent iatrogenic nerve damage or a major complication, although there was one case of infection and one case of recurrence. Conclusions: PTES for lumbar disc herniation is an effective and safe method with simple orientation, easy puncture, reduced steps, and little X-ray exposure, which can be applied to almost all kinds of lumbar disc herniation, including L5/S1 level with high iliac crest, herniation with scoliosis or calcification, recurrent herniation, and adjacent disc herniation after decompression and fusion. The learning curve is no longer steep for surgeons.
Background: The radicular syndrome caused by lumbar disc herniation compressing neurologic elements is a clear indication for surgical decompression. In the last decades, as a minimally invasive surgical technique, posterolateral transforaminal endoscopic surgery has been developed to perform discectomy for neurologic decompression under direct view and local anesthesia, including YESS (Yeung Endoscopy Spine Surgery) [1–5] and TESS (Transforaminal Endoscopic Spine Surgery) [6–11]. There was a high percentage of patient satisfaction and a low rate of complications in YESS or TESS for lumbar disc herniation [1–11]. Compared with traditional lumbar discectomy, YESS and TESS have certain advantages: (1) no need for general anesthesia, (2) fewer cases of iatrogenic neurologic damage, (3) no retraction of the intracanal nerve elements, (4) significantly fewer infections, (5) only minimal disturbance of the ligamentum flavum or intracanal capsular structures and, therefore, less scar formation, (6) no interference from scar tissue when reaching the recurrent herniated tissue in cases of previous dorsal discectomy, and (7) shorter hospital stay, earlier functional recovery, earlier return to work, and higher cost-effectiveness [1–11]. Although nearly all kinds of disc herniation are accessible with the "outside disc-inside" technique of TESS, entering directly into the spinal canal [2, 3], the complexity of C-arm-guided orientation, the difficulty of finding the optimal trajectory to the target, and the many steps of surgical manipulation led to much X-ray exposure, long duration of operation, and a steep learning curve. We designed an easy posterolateral transforaminal endoscopic decompression technique for radiculopathy secondary to lumbar disc herniation, termed PTES (percutaneous transforaminal endoscopic surgery). The purpose of the study is to describe the technique of PTES and evaluate the efficacy and safety for treatment of lumbar disc herniation including primary herniation, reherniation, intracanal herniation, and extracanal herniation and to report outcome and complications. Conclusions: The current data indicate that PTES for lumbar disc herniation is an effective and safe method with simple orientation, easy puncture, reduced steps, and little X-ray exposure, which can be applied to almost all kinds of lumbar disc herniation, including L5/S1 level with high iliac crest, herniation with scoliosis or calcification, recurrent herniation, and adjacent disc herniation after decompression and fusion. The learning curve is no longer steep for surgeons.
Keywords: Lumbar disc herniation | Transforaminal | Endoscopic discectomy | Minimally invasive surgery
MeSH terms: Adult | Aged | Decompression, Surgical | Diskectomy, Percutaneous | Endoscopy | Female | Humans | Intervertebral Disc Displacement | Lumbar Vertebrae | Magnetic Resonance Imaging | Male | Middle Aged | Minimally Invasive Surgical Procedures | Radiculopathy | Tomography, X-Ray Computed | Treatment Outcome | Visual Analog Scale
High prevalence of low bone mass and associated factors in Korean HIV-positive male patients undergoing antiretroviral therapy.
PMID: 24433984
INTRODUCTION: Low bone mass is prevalent in HIV-positive patients. However, compared to Western countries, less is known about HIV-associated osteopenia in Asian populations.
METHODS: We performed a cross-sectional survey in Seoul National University Hospital from December 2011 to May 2012. We measured bone mineral density using central dual energy X-ray absorptiometry, with consent, in male HIV-positive patients aged 40 years and older. Diagnosis of low bone mass was made using International Society for Clinical Densitometry Z-score criteria in the 40-49 years age group and World Health Organization T-score criteria in the >50 years age group. The data were compared with those of a community-based cohort in Korea.
RESULTS: Eighty-four HIV-positive male patients were included in this study. Median age was 49 (interquartile range [IQR], 45-56) years, and median body mass index (BMI) was 22.6 (IQR, 20.9-24.4) kg/m2. Viral suppression was achieved in 75 (89.3%) patients, and the median duration of antiretroviral therapy was 71 (IQR, 36-120) months. The overall prevalence of low bone mass was 16.7% in the 40-49 years age group and 54.8% in the >50 years age group. Our cohort had significantly lower bone mass at the femur neck and total hip than HIV-negative Koreans in the 40-49 years age group. Low bone mass was significantly associated with low BMI and a high level of serum carboxy-terminal collagen crosslinks, but was not associated with antiretroviral regimen or duration of antiretroviral therapy.
CONCLUSIONS: Low bone mass is prevalent in Korean HIV-positive males undergoing antiretroviral therapy, and may be associated with increased bone resorption.
[ "Absorptiometry, Photon", "Adult", "Age Factors", "Anti-HIV Agents", "Bone Density", "Bone Resorption", "Cross-Sectional Studies", "HIV Infections", "Humans", "Male", "Middle Aged", "Osteoporosis", "Republic of Korea" ]
PMCID: PMC3888902
Introduction
The advent of combination antiretroviral therapy (CART) has improved the prognosis of patients infected with HIV [1]. Long-term CART is associated with several metabolic complications including lipodystrophy, insulin resistance, diabetes and dyslipidemia [2]. It is also well known that low bone mass is prevalent in HIV-positive patients [3]. In one meta-analysis, osteoporosis was three times more prevalent among HIV-positive patients than among HIV-negative controls, and was especially common among those receiving antiretroviral therapy [4]. The prevalence of low bone mass may vary in different ethnic groups, and less is known about the characteristics of low bone mass in Asian HIV-positive patients than in Western patients. Considering that there is a marked predominance of men in the clinic where this study was conducted, and that gender is an important risk factor for low bone mass, we chose to include only male patients. The present study was undertaken to investigate the prevalence of, and risk factors for, low bone mass in Korean HIV-positive males undergoing antiretroviral therapy.
Methods
Study population: This cross-sectional study included HIV-positive male patients over 40 years old who had undergone CART for at least three months at Seoul National University Hospital. A board-certified infectious disease specialist took a complete history from all participants. Demographic data (sex, ethnicity and date of birth), HIV exposure category (men who have sex with men [MSM], heterosexual), lifestyle (smoking, alcohol consumption and physical activity), nadir CD4 cell count, current CD4 cell count and history of CART were recorded. Serum levels of 25(OH) vitamin D3, parathyroid hormone, carboxy-terminal collagen crosslinks (CTX), bone-specific alkaline phosphatase (ALP), total testosterone and free testosterone were measured for each participant. Serum CTX level was measured by electrochemiluminescence immunoassay using the Elecsys® ß-Crosslaps serum assay kit (Roche Diagnostics). The study protocol was in accordance with institutional guidelines and approved by an institutional review board. Informed consent was obtained from the study participants.

Measurements of anthropometric parameters and BMD: Height and body weight were measured by standard methods in light clothes. Body mass index (BMI) was calculated as weight divided by height squared (kg/m2). Bone mineral density (BMD) (g/cm2) measurements at central skeletal sites (lumbar spine, femoral neck and total hip) were obtained using dual energy X-ray absorptiometry (Lunar Prodigy, GE Medical System).
Diagnosis of low bone mass was made using the International Society for Clinical Densitometry (ISCD) Z-score criteria (low BMD for chronological age, Z-score ≤ −2.0) and the World Health Organization (WHO) T-score criteria (osteopenia, −2.5 < T-score < −1.0; osteoporosis, T-score ≤ −2.5) [5]. For comparison with the general population, the BMD of subjects in a representative Korean community-based cohort was used [6]. Both the HIV-positive study group and the HIV-negative general population group were scanned on the same Lunar Prodigy machine with the same software (Encore, GE) and the same manufacturer-provided Korean reference.

Statistics: BMD differences were analyzed using Student's t-test. Risk factors for low bone mass were analyzed using a linear regression model with BMD as the dependent variable. Additional analysis was performed using a binary logistic regression model with any BMD T-score < −1.0 as the dependent variable, and odds ratios per one standard deviation increment were shown for continuous variables. All significance tests were two-sided, and data analyses were performed with SPSS software (version 19.0; SPSS Inc., Chicago, IL, USA).
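As an illustration of the age-dependent diagnostic rules described above, the following is a minimal Python sketch; it is not part of the study's SPSS workflow, and the function name and the exact handling of age 50 are our own assumptions, since the study reports the 40-49 years and >50 years groups separately.

# Hypothetical helper applying the ISCD/WHO criteria used in this study.
def classify_bmd(age, t_score, z_score):
    if age < 50:
        # ISCD Z-score criterion for the 40-49 years group.
        return "low BMD for chronological age" if z_score <= -2.0 else "normal"
    # WHO T-score criteria for the older group.
    if t_score <= -2.5:
        return "osteoporosis"
    if t_score < -1.0:
        return "osteopenia"
    return "normal"

# Example: a 47-year-old is classified by the Z-score rule regardless of T-score.
print(classify_bmd(47, t_score=-1.8, z_score=-2.1))  # low BMD for chronological age
print(classify_bmd(56, t_score=-2.7, z_score=-1.2))  # osteoporosis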
Results
A total of 84 HIV-positive patients were included in this study. All were male, and 39 (46.4%) of them had homosexual behaviour as a risk factor. All were Korean, and the median age was 49 years (IQR, 45–56) (Table 1). The median CD4 cell count was 531 cells/µL (IQR, 345–762), and 75 (89.3%) patients had an HIV RNA level < 40 copies/mL. The median duration of antiretroviral therapy was 71 months (IQR, 36–120). Fifty-five (65.5%) of the subjects had vitamin D deficiency.

Table 1. Demographics and clinical characteristics of the study participants. IQR, interquartile range; HIV, human immunodeficiency virus.

The overall prevalence of low bone mass was 16.7% in the 40–49 years age group and 54.8% in the >50 years age group (Table 2). Compared with HIV-negative Koreans, the subjects in our cohort with HIV infection had significantly lower bone mass at the femur neck and total hip in the 40–49 years age group (Figure 1B and 1C). In the 50–59 years age group, subjects with HIV infection had significantly lower bone mass at the femur neck than HIV-negative Koreans. There was no significant difference in BMD at the lumbar spine between HIV-negative Koreans and our subjects.

Figure 1. Comparison of the bone mineral densities of the study participants with those of non-HIV-positive Koreans. *Data from the Ansung cohort study [6]. **p < 0.05.

Table 2. Prevalence of low bone mass in the study participants. Diagnosed using the International Society for Clinical Densitometry (ISCD) Z-score criteria (low BMD for chronological age, Z-score ≤ −2.0) and the World Health Organization (WHO) T-score criteria (osteopenia, −2.5 < T-score < −1.0; osteoporosis, T-score ≤ −2.5). BMD, bone mineral density.

The association between BMD and other factors was investigated. There was no significant association between BMD and HIV exposure category (MSM vs. heterosexual men), CART category (protease inhibitor-based vs. non-nucleoside reverse transcriptase inhibitor-based), smoking, alcohol intake, or exercise status. BMD was positively associated with BMI at all three sites, and these associations were independent of other factors (Table 3). BMD at the femoral neck and total hip was negatively associated with serum CTX, and these associations were independent of other factors. Bone-specific ALP was negatively associated with lumbar spine BMD in multivariate analysis. The logistic regression analysis revealed that the independent risk factors for low BMD were low BMI, high serum CTX, and high bone-specific ALP (Table 4).

Table 3. Summary of linear regression analysis of factors associated with bone mineral density. Exercise: running more than one hour a week. BMI, body mass index; PI, protease inhibitor; PTH, parathyroid hormone; CTX, carboxy-terminal collagen crosslinks; ALP, alkaline phosphatase; Ca, calcium; Cr, creatinine.

Table 4. Summary of logistic regression analysis of factors associated with low bone mineral density (any T-score ≤ −1). Odds ratios are shown per one standard deviation increment. Exercise: running more than one hour a week. OR, odds ratio; CI, confidence interval; BMI, body mass index; PI, protease inhibitor; PTH, parathyroid hormone; CTX, carboxy-terminal collagen crosslinks; ALP, alkaline phosphatase; Ca, calcium; Cr, creatinine.
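The odds ratios per one standard deviation increment reported in Table 4 can be obtained by standardizing the continuous predictors before fitting the logistic model. The following is a minimal sketch in Python with statsmodels, assuming hypothetical column and file names (the original analysis was performed in SPSS):

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("bmd_risk_factors.csv")  # hypothetical per-patient table
y = df["low_bmd"]                         # 1 if any T-score <= -1.0, else 0
covariates = ["bmi", "ctx", "bone_alp"]   # assumed column names

# Standardize so each coefficient corresponds to a one-SD increment.
X = (df[covariates] - df[covariates].mean()) / df[covariates].std()
X = sm.add_constant(X)

fit = sm.Logit(y, X).fit()
print(np.exp(fit.params))      # odds ratios per 1-SD increment
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the OR scale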
Discussion
In the present study, BMDs at the lumbar spine, femur neck and total hip were investigated among Korean HIV-positive male patients undergoing CART, and clinical factors and serum markers were measured. Low bone mass was prevalent among Korean HIV-positive male patients undergoing antiretroviral treatment, and several factors were associated with low bone mass.

A high prevalence of low bone mass has been reported in HIV-positive individuals in many cross-sectional studies [7, 8]. The prevalence of osteoporosis was more than three times greater among HIV-positive patients than among HIV-negative control subjects in a meta-analysis [4]. A decrease of BMD ranging from −1.5 to −5.8%, depending on the antiretroviral regimen, was observed within the first year of antiretroviral therapy [9]. The overall prevalence of fractures was significantly higher in HIV-positive patients than in HIV-negative patients (2.87 vs. 1.77 fractures per 100 persons) [10]. Accordingly, in the present study, the patients with HIV infection had significantly lower BMD than age-matched HIV-negative Koreans in the general population. In the present study, the overall prevalence of osteoporosis and osteopenia was 9.5% and 46.4%, respectively. The reported prevalence of osteoporosis in HIV-positive patients varies widely according to age group and gender (3–33%) [3]. Several researchers have reported the prevalence of osteoporosis in HIV-positive men in Western countries (France: 16%, mean age 40 years; Australia: 3%, mean age 43 years) [7, 11]. A recent study reported the prevalence of osteoporosis in HIV-positive men in Taiwan (3.1%; median age 36.5 years) [12]. However, as discussed by its authors, the Taiwanese study was limited by several factors: the majority of the subjects were middle aged, no bone turnover markers were measured and, most importantly, only lumbar spine BMD was measured, not femur neck or total hip. In addition, owing to the study design (which excluded potential osteoporosis patients), the authors state that the prevalence of osteoporosis was underestimated. In the present study, the prevalence of osteoporosis was determined using all three BMD sites, and the BMDs of HIV-positive men were compared with those of age-matched same-race controls. The association between BMD and bone-metabolism-related factors, including bone turnover markers, was also investigated.

The causes of low bone mass in HIV appear to be multifactorial. First, there are complex interactions between HIV infection and traditional osteoporosis risk factors [13]. Chronic HIV infection leads to poor nutrition and low weight. Indeed, the BMI of the HIV-negative control group was 23.8 ± 3.1 and that of the HIV-positive study group was 22.6 ± 2.5; this difference in BMI could explain some portion of the difference in BMD. Low vitamin D levels [14] and hypogonadism [15] are associated with HIV infection. The prevalence of diabetes was high among the study subjects, which could have a substantial effect on bone metabolism. However, BMD was not significantly different between diabetic and non-diabetic patients, and similar trends were observed when diabetic subjects were excluded. High rates of alcohol and tobacco use are reported among HIV-positive patients. Second, uncontrolled HIV viremia per se may decrease bone mass [4, 16]: systemic inflammation caused by HIV infection can affect bone remodelling. There have been several studies of the association between HIV infection and bone remodelling. HIV proteins have been reported to augment bone resorption through increased osteoclast activity [17] and to attenuate bone formation by stimulating osteoblast apoptosis [18]. Third, CART-related factors may also link low bone mass and HIV infection. Continuous CART decreased BMD [19], and intermittent CART resulted in less loss of BMD [20]. These complex mechanisms could have resulted in the low BMD observed in the present study.

We found that low BMI was associated with low BMD at all three sites. Similar findings were obtained in previous studies [11, 12], demonstrating the important association between body weight and BMD. Markers of bone metabolism (CTX and bone-specific ALP) were negatively associated with BMD in the present study. This is in accordance with previous studies of HIV-positive patients [21, 22], suggesting that high bone turnover is associated with the reduction in BMD. The initiation of CART led to an increase in markers of bone turnover [22], and intermittent application of CART reduced the decline in markers of bone turnover and in BMD [20], suggesting that CART decreases BMD by increasing bone turnover. These findings indicate that measurements of bone turnover markers in HIV-positive patients could be used in the clinic to assess bone metabolism status in HIV.

In the present study, we observed significantly lower femur neck BMD and total hip BMD in HIV-positive patients than in HIV-negative individuals in the general Korean population, whereas no significant difference was observed in lumbar spine BMD. A previous study also reported that the difference in lumbar spine BMD was borderline whereas the difference in hip BMD was significant [7]. Others have reported that both lumbar spine BMD and femur neck BMD are significantly lower in HIV-positive patients [8].

A recent study showed that intermittent CART has less effect on BMD than continuous antiretroviral therapy [20], which suggests a deleterious effect of CART on bone. Also, longer duration of CART was associated with reduced BMD [12]. However, in the present analysis, the duration of CART was not associated with BMD.

In one recent study, the prevalence of low BMD was similar among HIV-positive MSM and HIV-negative MSM [23], suggesting that the low BMD found in the former may precede HIV acquisition. Compared with heterosexual men, MSM are more likely to have the conventional risk factors for osteoporosis (such as low body weight, alcohol use, and smoking) [24]. Based on these findings, it has been suggested that the low BMD found among HIV-positive MSM may be the result of MSM-related factors and may not be fully attributable to HIV infection alone or the use of ART [23]. However, in the present study, there was no significant difference in BMD between MSM and heterosexual men, suggesting that the low BMD is more likely the result of HIV infection and/or the use of CART than of MSM-related lifestyle factors.

This study has several limitations. First, because of its cross-sectional design, causal relationships between HIV infection and reduced BMD could not be determined. Second, its statistical power was low due to the small sample size. Third, half of the study subjects were under the age of 50 years; inclusion of these young participants could have attenuated the statistical significance regarding the effect of ageing on bone loss. However, to the best of our knowledge, this is the first study in the English language to report the prevalence of osteoporosis based on all three BMD sites, as well as factors related to bone metabolism including bone turnover markers, in HIV-positive Asian men.

Conclusions
In this first survey of HIV-associated osteopenia in Asians undergoing antiretroviral therapy, the prevalence of low bone mass was significantly higher than in non-HIV-infected individuals. Low bone mass was associated with increased bone resorption.
[ null, "methods", null, null, null, "results", "discussion", "conclusions" ]
[ "HIV", "AIDS", "osteopenia", "osteoporosis" ]
Introduction: The advent of combination antiretroviral therapy (CART) has improved the prognosis of patients infected with HIV [1]. Long-term CART is associated with several metabolic complications including lipodystrophy, insulin resistance, diabetes and dyslipidemia [2]. It is also well known that low bone mass is prevalent in HIV-positive patients [3]. In one meta-analysis, osteoporosis was three times more prevalent among HIV-positive patients than among HIV-negative controls, and was especially common among those receiving antiretroviral therapy [4]. The prevalence of low bone mass may vary in different ethnic groups, and less is known about the characteristics of low bone mass in Asian HIV-positive patients than in Western patients. Considering that there is a marked predominance of men in the clinic where this study was conducted, and that gender is an important risk factor for low bone mass, we chose to include only male patients. The present study was undertaken to investigate the prevalence of, and risk factors for, low bone mass in Korean HIV-positive males undergoing antiretroviral therapy. Methods: Study population This cross-sectional study included HIV-positive male patients over 40 years old who underwent CART for at least three months at Seoul National University Hospital. A board-certified infectious disease specialist took a complete history from all participants. Demographic data (sex, ethnicity and date of birth), HIV exposure category (men who have sex with men [MSM], heterosexual), life style (smoking, alcohol consumption and physical activity), nadir CD4 cell count, current CD4 cell count and history of CART were recorded. Serum level of 25(OH) Vitamin D3, parathyroid hormone, carboxy-terminal collagen crosslinks (CTX), bone-specific alkaline phosphatase (ALP), total testosterone and free testosterone were measured for each participant. Serum CTX level was measured by electrochemiluminescence immunoassay using the Elecsys® ß-Crosslaps serum assay kit (Roche Diagnostics). The study protocol was in accordance with institutional guidelines and approved by an institutional review board. Informed consent was obtained from the study participants. Measurements of anthropometric parameters and BMD Height and body weight were measured by standard methods in light clothes. Body mass index (BMI) was calculated as weight divided by height squared (kg/m2).
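For reference, the BMI definition just stated, written out with a worked example (a 70 kg man of height 1.70 m has BMI = 70/1.70² ≈ 24.2 kg/m², close to this cohort's median of 22.6):

$$\mathrm{BMI} = \frac{\text{weight [kg]}}{(\text{height [m]})^{2}}$$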
Bone mineral density (BMD) (g/cm2) measurements at central skeletal sites (lumbar spine, femoral neck and total hip) were obtained using dual energy X-ray absorptiometry (Lunar Prodigy, GE Medical System). Diagnosis of low bone mass was made using the International Society for Clinical Densitometry (ISCD) Z-score criteria (low BMD for chronological age, Z-score ≤ −2.0) and the World Health Organization (WHO) T-score criteria (Osteopenia, −2.5 < T-score < −1.0; Osteoporosis, T-score ≤ −2.5) [5]. For comparison with the general population, the BMD of subjects in a representative Korean community-based cohort was used [6]. Both the HIV-positive study group and the HIV-negative general population group used the same Lunar Prodigy machine with the same software (Encore, GE) and the same manufacturer-provided Korean reference. Statistics BMD differences were analyzed using Student's t-test. Risk factors for low bone mass were analyzed using a linear regression model with BMD as the dependent variable. Additional analysis was performed using binary logistic regression model with any BMD T-score < −1.0 as the dependent variable and odds ratios per one standard deviation increment were shown for continuous variables. All significance tests were two-sided, and data analyses were performed with SPSS software (version 19.0; SPSS Inc., Chicago, IL, USA). Results: A total of 84 HIV-positive patients were included in this study. All were male, and 39 (46.4%) of them had the risk factors of homosexual behaviour. All were Korean, and median age was 49 years (IQR 45–56) (Table 1). Median CD4 cell count was 531 cells/µL (IQR, 345–762), and 75 (89.3%) patients had an HIV RNA level < 40 copies/mL. Median duration of antiretroviral therapy was 71 months (IQR, 36–120). Fifty-five (65.5%) of the subjects had vitamin D deficiency. Demographics and clinical characteristics of the study participants IQR, interquartile range; HIV, human immunodeficiency virus. The overall prevalence of low bone mass was 16.7% in the 40–49 years age group and 54.8% in the >50 years age group (Table 2). Compared with HIV-negative Koreans, the subjects in our cohort with HIV infection had significantly lower bone mass at femur neck and total hip in the 40–49 years age group (Figure 1B and 1C). In the 50–59 year age group, subjects with HIV infection had significantly lower bone mass at the femur neck than HIV-negative Koreans. There was no significant difference in the BMD at lumbar spine between HIV-negative Koreans and our subjects. Comparison of the bone mineral densities of the study participants with those of non-HIV-positive Koreans. *Data from the Ansung cohort study [6]. **p<0.05.
Prevalence of low bone mass in the study participants Diagnosed using the International Society for Clinical Densitometry (ISCD) Z-score criteria (low BMD for chronological age, Z-score ≤ −2.0). Diagnosed using the World Health Organization (WHO) T-score criteria (Osteopenia, −2.5 < T-score < −1.0; Osteoporosis, T-score ≤ −2.5). BMD, bone mineral density. The association between BMD and other factors was investigated. There was no significant association between BMD and HIV exposure category (between MSM and heterosexual men), CART category (between protease inhibitor-based and non-nucleoside reverse transcriptase inhibitor-based), smoking, alcohol intake or exercise status. BMD was positively associated with BMI at all three sites, and these associations were independent of other factors (Table 3). BMD at the femoral neck and total hip was negatively associated with serum CTX, and these associations were independent of other factors. Bone-specific ALP was negatively associated with lumbar spine BMD in multivariate analysis. The logistic regression analysis revealed that the independent risk factors for low BMD were low BMI, high serum CTX, and high bone-specific ALP (Table 4). Summary of linear regression analysis of factors associated with bone mineral density Running more than one hour a week. BMI, body mass index; PI, protease inhibitor; PTH, parathyroid hormone; CTX, carboxy-terminal collagen crosslinks; ALP, alkaline phosphatase; Ca, calcium; Cr, creatinine. Summary of logistic regression analysis of factors associated with low bone mineral density (any T-score ≤ −1) Odds ratios are shown per one standard deviation increment. Running more than one hour a week. OR, odds ratio; CI, confidence interval; BMI, body mass index; PI, protease inhibitor; PTH, parathyroid hormone; CTX, carboxy-terminal collagen crosslinks; ALP, alkaline phosphatase; Ca, calcium; Cr, creatinine. Discussion: In the present study, BMDs at the lumbar spine, femur neck and total hip were investigated among Korean HIV-positive male patients undergoing CART, and clinical factors and serum markers were measured. Low bone mass was prevalent among Korean HIV-positive male patients undergoing antiretroviral treatment. Several factors were associated with low bone mass. A high prevalence of low bone mass has been reported in HIV-positive individuals in many cross-sectional studies [7, 8]. The prevalence of osteoporosis was more than three times greater among HIV-positive patients than among HIV-negative control subjects in a meta-analysis [4]. A decrease of BMD ranging from −1.5 to −5.8%, depending on the antiretroviral regimen, was observed within the first year of antiretroviral therapy [9]. The overall prevalence of fractures was significantly higher in HIV-positive patients than in HIV-negative patients (2.87 vs. 1.77 fractures per 100 persons) [10]. Accordingly, in the present study, the patients with HIV infections had significantly lower BMD than age-matched HIV-negative Koreans in the general population. In the present study, the overall prevalences of osteoporosis and osteopenia were 9.5 and 46.4%, respectively. The reported prevalence of osteoporosis in HIV-positive patients varies widely according to age group and gender (3–33%) [3]. Several researchers have reported the prevalence of osteoporosis in HIV-positive men in Western countries (France: 16%, mean age 40 years; Australia: 3%, mean age 43 years) [7, 11].
A recent study reported the prevalence of osteoporosis in HIV-positive men in Taiwan (3.1%; median age 36.5 years) [12]. However, as discussed by its authors, the Taiwanese study was limited by several factors (the majority of the subjects were middle-aged, no bone turnover markers were measured and, most importantly, only lumbar spine BMD was measured, not femur neck or total hip). In addition to these limitations, due to the study design (which excluded potential osteoporosis patients), the authors state that the prevalence of osteoporosis was underestimated. In the present study, the prevalence of osteoporosis was determined using all three BMD sites, and the BMDs of HIV-positive men were compared with those of age-matched, same-race controls. The association between BMD and bone-metabolism-related factors, including bone turnover markers, was also investigated. The causes of low bone mass in HIV appear to be multifactorial. First, there are complex interactions between HIV infection and traditional osteoporosis risk factors [13]. Chronic HIV infection leads to poor nutrition and low weight. Indeed, the BMI of the HIV-negative control group was 23.8±3.1, whereas the BMI of the HIV-positive study group was 22.6±2.5. This difference in BMI could explain some portion of the difference in BMD. Low vitamin D levels [14] and hypogonadism [15] are associated with HIV infection. The prevalence of diabetes was high among the study subjects. The high prevalence of diabetes could have a substantial effect on bone metabolism. However, the BMD was not significantly different between diabetic and non-diabetic patients, and similar trends were observed when diabetic subjects were excluded. High rates of alcohol and tobacco use are reported among HIV-positive patients. Second, uncontrolled HIV viremia per se may decrease bone mass [4, 16]: systemic inflammation caused by HIV infection can affect bone remodelling. There have been several studies of the association between HIV infection and bone remodelling. HIV proteins have been reported to augment bone resorption through increased osteoclast activity [17] and to attenuate bone formation by stimulating osteoblast apoptosis [18]. Third, CART-related factors may also link low bone mass and HIV infection. Continuous CART decreased BMD [19], and intermittent CART resulted in less loss of BMD [20]. These complex mechanisms could have resulted in the low BMD observed in the present study. We found that low BMI was associated with low BMD at all three sites. Similar findings were obtained in previous studies [11, 12], demonstrating the important association between body weight and BMD. Markers of bone metabolism (CTX and bone-specific ALP) were negatively associated with BMD in the present study. This is in accordance with previous studies of HIV-positive patients [21, 22], suggesting that high bone turnover is associated with the reduction in BMD. The initiation of CART led to an increase in markers of bone turnover [22], and intermittent application of CART reduced the decline in markers of bone turnover and in BMD [20], suggesting that CART decreases BMD by increasing bone turnover. These findings indicate that measurements of bone turnover markers in HIV-positive patients could be used in the clinic to assess bone metabolism status in HIV. In the present study, we observed significantly lower femur neck BMD and total hip BMD in HIV-positive patients than in HIV-negative patients in the general Korean population.
However, no significant difference was observed in lumbar spine BMD. A previous study also reported that the difference in lumbar spine BMD was borderline whereas the difference in hip BMD was significant [7]. Others have reported that both lumbar spine BMD and femur neck BMD are significantly lower in HIV-positive patients [8]. A recent study showed that intermittent CART has less effect on BMD than continuous antiretroviral therapy [20], which suggests a deleterious effect of CART on bone. Also, longer duration of CART was associated with reduced BMD [12]. However, in the present analysis, the duration of CART was not associated with BMD. In one recent study, the prevalence of low BMD was similar among HIV-positive MSM and HIV-negative MSM [23], suggesting that the low BMD found in the former may precede HIV acquisition. Compared with heterosexual men, MSM are more likely to have the conventional risk factors for osteoporosis (such as low body weight, alcohol use, and smoking) [24]. Based on these findings, it has been suggested that the low BMD found among HIV-positive MSM may be the result of MSM-related factors, and may not be fully attributable to HIV infection alone or the use of ART [23]. However, in the present study, there was no significant difference in BMD between MSM and heterosexual men, suggesting that the low BMD is more likely to be the result of HIV infection and/or the use of CART than of MSM-related lifestyle factors. This study has several limitations. First, because of its cross-sectional design, causal relationships between HIV infection and reduced BMD could not be determined. Second, its statistical power was low due to the small sample size. Third, half of the study subjects were under the age of 50 years. Inclusion of these young participants could have attenuated the statistical significance regarding the effect of ageing on bone loss. However, to the best of our knowledge, this is the first study in the English language to report the prevalence of osteoporosis based on all three BMD sites, as well as factors related to bone metabolism, including bone turnover markers, in HIV-positive Asian men. Conclusions: In this first survey of HIV-associated osteopenia in Asians undergoing antiretroviral therapy, the prevalence of low bone mass was significantly higher than in non-HIV-infected individuals. Low bone mass was associated with increased bone resorption.
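The source analyses were run in SPSS; purely as an illustration, a minimal Python/statsmodels sketch of the two models described in the Statistics section (linear regression with BMD as the dependent variable, and binary logistic regression on any T-score ≤ −1.0 with odds ratios per one standard deviation increment) might look as follows. The input file, dataframe, and column names are hypothetical stand-ins, not the study's data; the diagnostic helper encodes the ISCD/WHO criteria stated above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def classify_bmd(age: int, z_score: float, t_score: float) -> str:
    """Apply the stated criteria: ISCD Z-score under age 50,
    WHO T-score at age 50 and older."""
    if age < 50:
        return "low BMD for chronological age" if z_score <= -2.0 else "normal"
    if t_score <= -2.5:
        return "osteoporosis"
    if t_score < -1.0:
        return "osteopenia"
    return "normal"

# Hypothetical dataframe; file and column names are illustrative only.
# Columns: bmd (g/cm2), bmi, ctx, bone_alp, low_bmd (1 if any T-score <= -1)
df = pd.read_csv("cohort.csv")

predictors = ["bmi", "ctx", "bone_alp"]

# Linear regression with BMD as the dependent variable (cf. Table 3).
ols = sm.OLS(df["bmd"], sm.add_constant(df[predictors])).fit()
print(ols.summary())

# Logistic regression with low BMD as the outcome (cf. Table 4).
# Predictors are standardized so each exponentiated coefficient is an
# odds ratio per one standard deviation increment.
Xz = (df[predictors] - df[predictors].mean()) / df[predictors].std()
logit = sm.Logit(df["low_bmd"], sm.add_constant(Xz)).fit()
print(np.exp(logit.params))      # odds ratios per 1 SD
print(np.exp(logit.conf_int()))  # 95% confidence intervals
```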
Background: Low bone mass is prevalent in HIV-positive patients. However, compared to Western countries, less is known about HIV-associated osteopenia in Asian populations. Methods: We performed a cross-sectional survey in Seoul National University Hospital from December 2011 to May 2012. We measured bone mineral density using central dual energy X-ray absorptiometry, with consent, in male HIV-positive patients, aged 40 years and older. Diagnosis of low bone mass was made using International Society for Clinical Densitometry Z-score criteria in the 40-49 years age group and World Health Organization T-score criteria in the >50-year age group. The data were compared with those of a community-based cohort in Korea. Results: Eighty-four HIV-positive male patients were included in this study. Median age was 49 (interquartile range [IQR], 45-56) years, and median body mass index (BMI) was 22.6 (IQR, 20.9-24.4). Viral suppression was achieved in 75 (89.3%) patients and median duration of antiretroviral therapy was 71 (IQR, 36-120) months. The overall prevalence of low bone mass was 16.7% in the 40-49 years age group and 54.8% in the >50 years age group. Our cohort had significantly lower bone mass at the femur neck and total hip than HIV-negative Koreans in the 40-49 years age group. Low bone mass was significantly associated with low BMI and a high level of serum carboxy-terminal collagen crosslinks, but was not associated with antiretroviral regimen or duration of antiretroviral therapy. Conclusions: Low bone mass is prevalent in Korean HIV-positive males undergoing antiretroviral therapy, and may be associated with increased bone resorption.
Introduction: The advent of combination antiretroviral therapy (CART) has improved the prognosis of patients infected with HIV [1]. Long-term CART is associated with several metabolic complications including lipodystrophy, insulin resistance, diabetes and dyslipidemia [2]. It is also well known that low bone mass is prevalent in HIV-positive patients [3]. In one meta-analysis, osteoporosis was three times more prevalent among HIV-positive patients than among HIV-negative controls, and was especially common among those receiving antiretroviral therapy [4]. The prevalence of low bone mass may vary in different ethnic groups, and less is known about the characteristics of low bone mass in Asian HIV-positive patients than in Western patients. Considering that there is a marked predominance of men in the clinic where this study was conducted, and that gender is an important risk factor for low bone mass, we chose to include only male patients. The present study was undertaken to investigate the prevalence of, and risk factors for, low bone mass in Korean HIV-positive males undergoing antiretroviral therapy. Conclusions: In this first survey for HIV-associated osteopenia in Asians undergoing antiretroviral therapy, the prevalence of low bone mass was significantly higher compared to non HIV-infected individuals. Low bone mass was associated with increased bone resorption.
Background: Low bone mass is prevalent in HIV-positive patients. However, compared to Western countries, less is known about HIV-associated osteopenia in Asian populations. Methods: We performed a cross-sectional survey in Seoul National University Hospital from December 2011 to May 2012. We measured bone mineral density using central dual energy X-ray absorptiometry, with consent, in male HIV-positive patients, aged 40 years and older. Diagnosis of low bone mass was made using International Society for Clinical Densitometry Z-score criteria in the 40-49 years age group and World Health Organization T-score criteria in the >50-year age group. The data were compared with those of a community-based cohort in Korea. Results: Eighty-four HIV-positive male patients were included in this study. Median age was 49 (interquartile range [IQR], 45-56) years, and median body mass index (BMI) was 22.6 (IQR, 20.9-24.4). Viral suppression was achieved in 75 (89.3%) patients and median duration of antiretroviral therapy was 71 (IQR, 36-120) months. The overall prevalence of low bone mass was 16.7% in the 40-49 years age group and 54.8% in the >50 years age group. Our cohort had significantly lower bone mass at the femur neck and total hip than HIV-negative Koreans in the 40-49 years age group. Low bone mass was significantly associated with low BMI and a high level of serum carboxy-terminal collagen crosslinks, but was not associated with antiretroviral regimen or duration of antiretroviral therapy. Conclusions: Low bone mass is prevalent in Korean HIV-positive males undergoing antiretroviral therapy, and may be associated with increased bone resorption.
3,858
342
[ 206, 190, 213, 97 ]
8
[ "hiv", "bmd", "bone", "study", "low", "hiv positive", "positive", "mass", "patients", "score" ]
[ "mass korean hiv", "bmi hiv negative", "hiv associated osteopenia", "prevalence low bone", "bone remodelling hiv" ]
[CONTENT] HIV | AIDS | osteopenia | osteoporosis [SUMMARY]
[CONTENT] HIV | AIDS | osteopenia | osteoporosis [SUMMARY]
[CONTENT] HIV | AIDS | osteopenia | osteoporosis [SUMMARY]
[CONTENT] HIV | AIDS | osteopenia | osteoporosis [SUMMARY]
[CONTENT] HIV | AIDS | osteopenia | osteoporosis [SUMMARY]
[CONTENT] HIV | AIDS | osteopenia | osteoporosis [SUMMARY]
[CONTENT] Absorptiometry, Photon | Adult | Age Factors | Anti-HIV Agents | Bone Density | Bone Resorption | Cross-Sectional Studies | HIV Infections | Humans | Male | Middle Aged | Osteoporosis | Republic of Korea [SUMMARY]
[CONTENT] Absorptiometry, Photon | Adult | Age Factors | Anti-HIV Agents | Bone Density | Bone Resorption | Cross-Sectional Studies | HIV Infections | Humans | Male | Middle Aged | Osteoporosis | Republic of Korea [SUMMARY]
[CONTENT] Absorptiometry, Photon | Adult | Age Factors | Anti-HIV Agents | Bone Density | Bone Resorption | Cross-Sectional Studies | HIV Infections | Humans | Male | Middle Aged | Osteoporosis | Republic of Korea [SUMMARY]
[CONTENT] Absorptiometry, Photon | Adult | Age Factors | Anti-HIV Agents | Bone Density | Bone Resorption | Cross-Sectional Studies | HIV Infections | Humans | Male | Middle Aged | Osteoporosis | Republic of Korea [SUMMARY]
[CONTENT] Absorptiometry, Photon | Adult | Age Factors | Anti-HIV Agents | Bone Density | Bone Resorption | Cross-Sectional Studies | HIV Infections | Humans | Male | Middle Aged | Osteoporosis | Republic of Korea [SUMMARY]
[CONTENT] Absorptiometry, Photon | Adult | Age Factors | Anti-HIV Agents | Bone Density | Bone Resorption | Cross-Sectional Studies | HIV Infections | Humans | Male | Middle Aged | Osteoporosis | Republic of Korea [SUMMARY]
[CONTENT] mass korean hiv | bmi hiv negative | hiv associated osteopenia | prevalence low bone | bone remodelling hiv [SUMMARY]
[CONTENT] mass korean hiv | bmi hiv negative | hiv associated osteopenia | prevalence low bone | bone remodelling hiv [SUMMARY]
[CONTENT] mass korean hiv | bmi hiv negative | hiv associated osteopenia | prevalence low bone | bone remodelling hiv [SUMMARY]
[CONTENT] mass korean hiv | bmi hiv negative | hiv associated osteopenia | prevalence low bone | bone remodelling hiv [SUMMARY]
[CONTENT] mass korean hiv | bmi hiv negative | hiv associated osteopenia | prevalence low bone | bone remodelling hiv [SUMMARY]
[CONTENT] mass korean hiv | bmi hiv negative | hiv associated osteopenia | prevalence low bone | bone remodelling hiv [SUMMARY]
[CONTENT] hiv | bmd | bone | study | low | hiv positive | positive | mass | patients | score [SUMMARY]
[CONTENT] hiv | bmd | bone | study | low | hiv positive | positive | mass | patients | score [SUMMARY]
[CONTENT] hiv | bmd | bone | study | low | hiv positive | positive | mass | patients | score [SUMMARY]
[CONTENT] hiv | bmd | bone | study | low | hiv positive | positive | mass | patients | score [SUMMARY]
[CONTENT] hiv | bmd | bone | study | low | hiv positive | positive | mass | patients | score [SUMMARY]
[CONTENT] hiv | bmd | bone | study | low | hiv positive | positive | mass | patients | score [SUMMARY]
[CONTENT] patients | hiv | low | low bone | mass | bone mass | low bone mass | hiv positive patients | positive patients | prevalent hiv positive patients [SUMMARY]
[CONTENT] score | bmd | study | measured | serum | population | hiv | dependent | variable | lunar [SUMMARY]
[CONTENT] bmd | hiv | inhibitor | iqr | table | bone | factors | age | score | koreans [SUMMARY]
[CONTENT] associated | bone | higher compared non | increased bone resorption | osteopenia asians | mass associated increased bone | bone mass significantly | bone mass significantly higher | associated osteopenia | mass associated [SUMMARY]
[CONTENT] hiv | bmd | bone | low | score | mass | study | bone mass | low bone | patients [SUMMARY]
[CONTENT] hiv | bmd | bone | low | score | mass | study | bone mass | low bone | patients [SUMMARY]
[CONTENT] ||| Asian [SUMMARY]
[CONTENT] Seoul National University Hospital | December 2011 to May 2012 ||| 40 years ||| International Society for Clinical Densitometry Z | 40-49 years | World Health Organization | 50-year ||| Korea [SUMMARY]
[CONTENT] Eighty-four ||| 49 | 45 | years | BMI | 22.6 | IQR | 20.9 ||| 75 | 89.3% | 71 | IQR | 36 | months ||| 16.7% | 40-49 years | 54.8% | years ||| Koreans | 40-49 years ||| BMI [SUMMARY]
[CONTENT] Korean [SUMMARY]
[CONTENT] ||| Asian ||| Seoul National University Hospital | December 2011 to May 2012 ||| 40 years ||| International Society for Clinical Densitometry Z | 40-49 years | World Health Organization | 50-year ||| Korea ||| Eighty-four ||| 49 | 45 | years | BMI | 22.6 | IQR | 20.9 ||| 75 | 89.3% | 71 | IQR | 36 | months ||| 16.7% | 40-49 years | 54.8% | years ||| Koreans | 40-49 years ||| BMI ||| Korean [SUMMARY]
[CONTENT] ||| Asian ||| Seoul National University Hospital | December 2011 to May 2012 ||| 40 years ||| International Society for Clinical Densitometry Z | 40-49 years | World Health Organization | 50-year ||| Korea ||| Eighty-four ||| 49 | 45 | years | BMI | 22.6 | IQR | 20.9 ||| 75 | 89.3% | 71 | IQR | 36 | months ||| 16.7% | 40-49 years | 54.8% | years ||| Koreans | 40-49 years ||| BMI ||| Korean [SUMMARY]
COVID-19 Mortality Risk Correlates Inversely with Vitamin D3 Status, and a Mortality Rate Close to Zero Could Theoretically Be Achieved at 50 ng/mL 25(OH)D3: Results of a Systematic Review and Meta-Analysis.
34684596
Much research shows that blood calcidiol (25(OH)D3) levels correlate strongly with SARS-CoV-2 infection severity. There is open discussion regarding whether low D3 is caused by the infection or if deficiency negatively affects immune defense. The aim of this study was to collect further evidence on this topic.
BACKGROUND
Systematic literature search was performed to identify retrospective cohort as well as clinical studies on COVID-19 mortality rates versus D3 blood levels. Mortality rates from clinical studies were corrected for age, sex, and diabetes. Data were analyzed using correlation and linear regression.
METHODS
One population study and seven clinical studies were identified, which reported D3 blood levels preinfection or on the day of hospital admission. The two independent datasets showed a negative Pearson correlation of D3 levels and mortality risk (r(17) = -0.4154, p = 0.0770 and r(13) = -0.4886, p = 0.0646, respectively). For the combined data, median (IQR) D3 levels were 23.2 ng/mL (17.4-26.8), and a significant Pearson correlation was observed (r(32) = -0.3989, p = 0.0194). Regression suggested a theoretical point of zero mortality at approximately 50 ng/mL D3.
RESULTS
The datasets provide strong evidence that low D3 is a predictor rather than just a side effect of the infection. Despite ongoing vaccinations, we recommend raising serum 25(OH)D levels to above 50 ng/mL to prevent or mitigate new outbreaks due to escape mutations or decreasing antibody activity.
CONCLUSIONS
[ "COVID-19", "Calcifediol", "Cholecalciferol", "Humans", "Nutritional Status", "Risk Assessment", "SARS-CoV-2" ]
8541492
1. Introduction
The SARS-CoV-2 pandemic causing acute respiratory distress syndrome (ARDS) has lasted for more than 18 months. It has created a major global health crisis due to the high number of patients requiring intensive care, and the high death rate has substantially affected everyday life through contact restrictions and lockdowns. According to many scientists and medical professionals, we are far from the end of this disaster and hence must learn to coexist with the virus for several more years, perhaps decades [1,2]. It is realistic to assume that there will be new mutations, which are possibly more infectious or more deadly. In the known history of virus infections, we have never faced a similar global spread. Due to the great number of viral genome replications that occur in infected individuals and the error-prone nature of RNA-dependent RNA polymerase, the progressive accrual of mutations does and will continue to occur [3,4,5]. Thus, similar to other virus infections such as influenza, we have to expect that the effectiveness of vaccination is limited in time, especially with the current vaccines designed to trigger an immunological response against a single viral protein [6,7,8]. We have already learned that even fully vaccinated people can be infected [9]. Currently, most of these infections do not result in hospitalization, especially for young individuals without comorbidities. However, these infections are the basis for the ongoing dissemination of the virus in a situation where worldwide herd immunity against SARS-CoV-2 is rather unlikely. Instead, humanity could be trapped in an insuperable race between new mutations and new vaccines, with an increasing risk of newly arising mutations becoming resistant to the current vaccines [3,10,11]. Thus, a return to normal life in the near future seems unlikely. Mask requirements as well as limitations of public life will likely accompany us for a long time if we are not able to establish additional methods that reduce virus dissemination. Vaccination is an important part of the fight against SARS-CoV-2 but, with respect to the situation described above, should not be the only focus. One strong pillar in the protection against any type of virus infection is the strength of our immune system [12]. Unfortunately, thus far, this unquestioned basic principle of nature has been more or less neglected by the responsible authorities. It is well known that our modern lifestyle is far from optimal with respect to nutrition, physical fitness, and recreation. In particular, many people do not spend enough time outside in the sun, even in summer. The consequence is widespread vitamin D deficiency, which limits the performance of their immune systems, resulting in the increased spread of some preventable diseases of civilization, reduced protection against infections, and reduced effectiveness of vaccination [13]. In this publication, we will demonstrate that vitamin D3 deficiency, which is a well-documented worldwide problem [13,14,15,16,17,18,19,20], is one of the main reasons for severe courses of SARS-CoV-2 infections. The fatality rates correlate well with the findings that elderly people, black people, and people with comorbidities show very low vitamin D3 levels [16,21,22,23]. Additionally, with only a few exceptions, we are facing the highest infection rates in the winter months and in northern countries, which are known to suffer from low vitamin D3 levels due to low endogenous sun-triggered vitamin D3 synthesis [24,25,26,27].
Vitamin D3 was first discovered at the beginning of the 20th century as an essential factor needed to guarantee skeletal health. This discovery came after a long period of dealing with the dire consequences of rickets, which causes osteomalacia (softening of bones). This disease especially affected children in northern countries, who were deprived of sunlight and often worked in dark production halls during the industrial revolution [28]. By the early 20th century, it had become clear that sunlight can cure rickets by triggering vitamin D3 synthesis in the skin, and cod liver oil was recognized as a natural source of vitamin D3 [29]. At the time, a blood level of 20 ng/mL was sufficient to stop osteomalacia. This target is still the recommended blood level today, as stated in many official documents [30]. In accordance with many other publications, we will show that this level is considerably too low to guarantee optimal functioning of the human body. In the late 1920s, Adolf Windaus elucidated the structure of vitamin D3. The metabolic pathway of vitamin D3 (biochemical name cholecalciferol) is shown in Figure 1 [31]. The precursor, 7-dehydrocholesterol, is transformed into cholecalciferol in our skin by photoisomerization caused by UV-B exposure (wavelength 280–315 nm). After transportation to the liver, cholecalciferol is hydroxylated, resulting in 25-hydroxycholecalciferol (25(OH)D3, also called calcidiol), which can be stored in fat tissue for several months and is released back into blood circulation when needed. The biologically active form is generated by a further hydroxylation step, resulting in 1,25-dihydroxycholecalciferol (1,25(OH)2D3, also called calcitriol). Early investigations assumed that this transformation takes place mainly in the kidney. Over the last decades, knowledge regarding the mechanisms through which vitamin D3 affects human health has improved dramatically. It was discovered that the vitamin D3 receptor (VDR) and the vitamin D3 activating enzyme 1-α-hydroxylase (CYP27B1) are expressed in many cell types that are not involved in bone and mineral metabolism, such as the intestine, pancreas, and prostate, as well as cells of the immune system [32,33,34,35,36]. This finding demonstrates that vitamin D3 has a much wider impact on human health than previously understood [37,38]. Vitamin D turned out to be a powerful epigenetic regulator, influencing more than 2500 genes [39] and impacting dozens of our most serious health challenges [40], including cancer [41,42], diabetes mellitus [43], acute respiratory tract infections [44], chronic inflammatory diseases [45], and autoimmune diseases such as multiple sclerosis [46]. In the field of human immunology, the extrarenal synthesis of the active metabolite calcitriol (1,25(OH)2D3) by immune cells and lung epithelial cells has been shown to have immunomodulatory properties [47,48,49,50,51,52]. Today, a compelling body of experimental evidence indicates that activated vitamin D3 plays a fundamental role in regulating both the innate and adaptive immune systems [53,54,55,56]. Intracellular vitamin D3 receptors (VDRs) are present in nearly all cell types involved in the human immune response, such as monocytes/macrophages, T cells, B cells, natural killer (NK) cells, and dendritic cells (DCs). Receptor binding triggers the formation of the “vitamin D3 response element” (VDRE), regulating a large number of target genes involved in the immune response [57].
As a consequence of this knowledge, the scientific community now agrees that calcitriol is not merely a vitamin but a highly effective hormone, with the same level of importance to human metabolism as other steroid hormones. The blood level ensuring the reliable effectiveness of vitamin D3 with respect to all its important functions came under discussion again, and it turned out that 40–60 ng/mL is preferable [38], which is considerably above the level required to prevent rickets. Long before the SARS-CoV-2 pandemic, an increasing number of scientific publications showed the effectiveness of a sufficient vitamin D3 blood level in curing many of the human diseases caused by a weak or unregulated immune system [38,58,59,60]. This includes all types of virus infections [44,61,62,63,64,65,66,67,68,69,70], with a main emphasis on lung infections that cause ARDS [71,72,73], as well as autoimmune diseases [46,63,74,75]. However, routine vitamin D3 testing and supplementation are still not established today. Unfortunately, it seems that the new findings about vitamin D3 have not been well accepted in the medical community. Many official recommendations to define vitamin D3 deficiency still stick to the 20 ng/mL established 100 years ago to cure rickets [76]. Additionally, many recommendations for vitamin D3 supplementation are in the range of 5 to 20 µg per day (200 to 800 international units), which is much too low to guarantee the optimal blood level of 40–60 ng/mL [38,77]. One reason for these incorrect recommendations turned out to be a calculation error [78,79]. Another reason is that vitamin D3 treatment to cure osteomalacia was commonly combined with high doses of calcium to support bone calcification. When examining the side effects of overdoses of such combination products, it turned out that there is a high risk of calcium deposits in blood vessels, especially in the kidney. Today, it is clear that such combination preparations are nonsensical because vitamin D3 itself stimulates calcium uptake in the intestine. Without calcium supplementation, even very high vitamin D3 supplementation does not cause vascular calcification, especially when another important finding is taken into account: even when calcium blood levels are high, the culprit for undesirable vascular calcification is not vitamin D but an insufficient blood level of vitamin K2. Thus, daily vitamin D3 supplementation in the range of 4000 to 10,000 units (100 to 250 µg), which is needed to generate an optimal vitamin D3 blood level in the range of 40–60 ng/mL, has been shown to be completely safe when combined with approximately 200 µg of vitamin K2 per day [80,81,82]. However, this knowledge is still not widespread in the medical community, and obsolete warnings about the risks of vitamin D3 overdose are unfortunately still in common circulation. Based on these circumstances, the SARS-CoV-2 pandemic is becoming the second breakthrough in the history of vitamin D3's association with disease (after rickets), and we have to ensure that full advantage is taken of its medical properties to keep people healthy. The most life-threatening events in the course of a SARS-CoV-2 infection are ARDS and cytokine release syndrome (CRS). It is well established that vitamin D3 is able to inhibit the underlying metabolic pathways [83,84], because a very specific interaction exists between the mechanism of SARS-CoV-2 infection and vitamin D3.
Angiotensin-converting enzyme 2 (ACE2), a part of the renin-angiotensin system (RAS), serves as the major entry point for SARS-CoV-2 into cells (Figure 2). When SARS-CoV-2 binds to ACE2, ACE2 expression is reduced, causing lung injury and pneumonia [85,86,87]. Vitamin D3 is a negative RAS modulator: it inhibits renin expression and stimulates ACE2 expression. It therefore has a protective role against ARDS caused by SARS-CoV-2. Sufficient vitamin D3 levels prevent the development of ARDS by reducing the levels of angiotensin II and increasing the level of angiotensin-(1,7) [18,88,89,90,91,92]. There are several additional important functions of vitamin D3 supporting immune defense [18,77,94,95]:
Vitamin D decreases the production of Th1 cells. Thus, it can suppress the progression of inflammation by reducing the generation of inflammatory cytokines [74,96,97].
Vitamin D3 reduces the severity of cytokine release syndrome (CRS). This “cytokine storm” causes multiple organ damage and is therefore the main cause of death in the late stage of SARS-CoV-2 infection. The systemic inflammatory response due to viral infection is attenuated by promoting the differentiation of regulatory T cells [98,99,100,101].
Vitamin D3 induces the production of the endogenous antimicrobial peptide cathelicidin (LL-37) in macrophages and lung epithelial cells, which acts against invading respiratory viruses by disrupting viral envelopes and altering the viability of host target cells [52,102,103,104,105,106,107].
Experimental studies have shown that vitamin D and its metabolites modulate endothelial function and vascular permeability via multiple genomic and extragenomic pathways [108,109].
Vitamin D reduces coagulation abnormalities in critically ill COVID-19 patients [110,111,112].
A rapidly increasing number of publications are investigating the vitamin D3 status of SARS-CoV-2 patients and have confirmed both low vitamin D levels in cases of severe courses of infection [113,114,115,116,117,118,119,120,121,122,123,124,125,126,127] and positive results of vitamin D3 treatments [128,129,130,131,132,133,134]. Therefore, many scientists recommend vitamin D3 as an indispensable part of a medical treatment plan to avoid severe courses of SARS-CoV-2 infection [14,18,77,84,135,136], which has additionally resulted in proposals for consistent supplementation of the whole population [137].
A comprehensive overview and discussion of the current literature is given in a review by Linda Benskin [138]. Unfortunately, all these studies are based on relatively low numbers of patients. Well-accepted, placebo-controlled, double-blinded studies are still missing. The finding that most SARS-CoV-2 patients admitted to hospitals have vitamin D3 blood levels that are too low is unquestioned even by opponents of vitamin D supplementation. However, there is an ongoing discussion as to whether we are facing a causal relationship or just a decline in vitamin D levels caused by the infection itself [84,139,140,141]. There are reliable data on the average vitamin D3 levels in the population [15,19,142] in several countries, in parallel with the data about death rates caused by SARS-CoV-2 in these countries [143,144]. Obviously, these vitamin D3 data are not affected by SARS-CoV-2 infections. While meta-studies using such data [26,136,140,145] are already available, our goal was to analyze these data in the same manner as selected clinical data. In this article, we identify a vitamin D threshold that virtually eliminates excess mortality caused by SARS-CoV-2. In contrast to published D3/SARS-CoV-2 correlations [146,147,148,149,150,151,152], our data include studies assessing preinfection vitamin D values as well as studies with vitamin D values measured post-infection, at the latest on the day after hospitalization. Thus, we can expect that the measured vitamin D status is still close to the preinfection level. In contrast to other meta-studies, which also included large retrospective cohort studies [151,152], our aim was to perform regressions on the combined data after correcting for patient characteristics. These results from independent datasets, which include data from before and after the onset of the disease, also further strengthen the assumption of a causal relationship between vitamin D3 blood levels and SARS-CoV-2 death rates. Our results therefore also confirm the importance of establishing vitamin D3 supplementation as a general method to prevent severe courses of SARS-CoV-2 infections.
2. Methods
2.1. Search Strategy and Selection Criteria
Initially, a systematic literature review was performed to identify relevant COVID-19 studies. Included studies were observational cohort studies that grouped two or more cohorts by their vitamin D3 values and listed mortality rates for the respective cohorts. PubMed and the https://c19vitamind.com (accessed on 27 March 2021) registry were searched according to Table 1. Subsequently, titles and abstracts were screened, and full-text articles were further analyzed for eligibility.
2.2. Data Analysis
Collected studies were divided into a population study [142] and seven hospital studies. Notably, these data sources are fundamentally different, as one assesses vitamin D values long-term, whereas the other measures vitamin D values postinfection, thereby masking a possible causal relationship between the preinfection vitamin D level and mortality.
Several corrections for the crude mortality rates (CMRs) recorded by Ahmad were attempted to understand the underlying causes within the population study data and the outliers. In the end, none were used in the final data evaluation to avoid the risk of introducing hidden variables that also correlate with D3.
Mortality rates and D3 blood levels from studies on hospitalized COVID-19 patients were assembled in a separate dataset. When no median D3 blood levels were provided for the individual study cohorts, the IQR, mean ± SD or estimated values within the grouping criteria were used in that order. Patient characteristics, including age IQR, sex and diabetes status, were used to compute expected mortality rates with a machine learning model [154], which is available online (https://www.economist.com/graphic-detail/covid-pandemic-mortality-risk-estimator (accessed on 27 March 2021)). While other comorbidities from the source studies were not considered in our analysis, they also have lower impact on the model's output, as can be easily confirmed using the online tool. Based on the expected disease mortality rate for the respective patient cohorts, the reported mortality rates from the source studies were corrected. Thereby, the relationship between the initial vitamin D levels and the resulting mortality becomes more apparent.
The two datasets were combined, and the mortality rates of the hospital studies were scaled according to the mortality range of the population studies, resulting in a uniform list of patient cohorts, their vitamin D status and dimensionless mortality coefficients. Linear regressions (OLS), Pearson and Spearman correlations of vitamin D, and the mortality values for the separate and combined datasets were generated with a Python 3.7 kernel using the scipy.stats 1.7.0 and statsmodels 0.12.2 libraries in a https://deepnote.com (accessed on 30 July 2021) Jupyter notebook.
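Purely as an illustration of this pipeline, a minimal sketch using the libraries the authors name (scipy.stats and statsmodels) follows. The two input arrays are made-up placeholder values, not the per-cohort data extracted from the eight source studies, and the correction/scaling steps are indicated only in comments.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Illustrative placeholder values only -- NOT the study's extracted data.
d3 = np.array([12.0, 18.5, 23.2, 27.0, 34.1, 41.3])   # median 25(OH)D3, ng/mL
mortality = np.array([1.9, 1.4, 1.1, 0.8, 0.5, 0.2])  # scaled mortality coefficient

# (Conceptual correction step, per the text above: reported cohort mortality
# is divided by the model-expected mortality for the cohort's age/sex/diabetes
# profile, then scaled to the mortality range of the population study.)

# Pearson and Spearman correlations of D3 vs. mortality (cf. Table 4).
r, p = stats.pearsonr(d3, mortality)
rs, ps = stats.spearmanr(d3, mortality)
print(f"Pearson r = {r:.4f} (p = {p:.4f}); Spearman rs = {rs:.4f} (p = {ps:.4f})")

# OLS regression of mortality on D3 (cf. Table 5).
fit = sm.OLS(mortality, sm.add_constant(d3)).fit()
intercept, slope = fit.params

# D3 level at which the fitted line reaches zero mortality (x-intercept).
print(f"Theoretical zero-mortality point: {-intercept / slope:.1f} ng/mL")
```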
3. Results
Database and registry searches resulted in 563 and 66 records, respectively. Nonsystematic web searches accounted for 13 studies, from which an additional 31 references were assessed. After removal of 104 duplicates and initial screening, 44 studies remained. Four meta-studies, one comment, one retracted study, one report with unavailable data, one off-topic report, and one Russian-language record were excluded. The remaining 35 studies were assessed in full text, 20 of which did not meet the eligibility criteria due to their study design or lack of quantitative mortality data. Four further studies were excluded due to missing data for individual patient cohorts. Finally, three studies were excluded due to skewed or nonrepresentative patient characteristics, as reviewed by LB and JVM [114,155,156]. Eight eligible studies for quantitative analysis remained, as listed in Table 2. A PRISMA flowchart [157] is presented in Figure 3. The observed median (IQR) vitamin D value over all collected study cohorts was 23.2 ng/mL (17.4–26.8). A frequency distribution of vitamin D levels is shown in Figure 4. One population study by Ahmad et al. [142] was identified. Therein, the CMRs are compiled for 19 European countries based on COVID-19 pandemic data from Johns Hopkins University [143] in the time frame from 21 March 2020 to 22 January 2021, as well as D3 blood levels for the respective countries collected by literature review. Furthermore, the proportions of the 70+ age population were collected. The median vitamin D3 level across countries was 23.2 ng/mL (19.9–25.5 ng/mL). A moderately negative Spearman's correlation with the corresponding mean vitamin D3 levels in the respective populations was observed at rs = −0.430 (95% CI: −0.805 to −0.081). No further adjustments of these CMR values were performed by Ahmad. The correlations shown in Table 3 suggest the sex/age distribution, diabetes, and the rigidity of public health measures as some of the causes of outliers within the Ahmad dataset. However, this has little effect on the further results discussed below. The extracted data from seven hospital studies showed a median vitamin D3 level of 23.2 ng/mL (14.5–30.9 ng/mL). These data are plotted in Figure 5, after correction for patient characteristics and scaling, in combination with the data points from Ahmad. The correlation results are shown in Table 4; the combined data show a significant negative Pearson correlation at r(32) = −0.3989, p = 0.0194. The linear regression results can be found in Table 5. The regression for the combined data intersects the D3 axis at 50.7 ng/mL, suggesting a theoretical point of zero mortality.
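The "theoretical point of zero mortality" is simply the x-intercept of the fitted line. For an OLS fit $\hat{m}(D_3) = \beta_0 + \beta_1 D_3$ with negative slope $\beta_1$, the estimated mortality reaches zero at

$$D_3^{0} = -\frac{\beta_0}{\beta_1},$$

which for the combined data evaluates to approximately 50.7 ng/mL; the fitted coefficient values themselves are reported in the paper's Table 5 and are not reproduced here.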
[ "2.1. Search Strategy and Selection Criteria", "2.2. Data Analysis", "5. Limitations" ]
[ "Initially, a systematic literature review was performed to identify relevant COVID-19 studies. Included studies were observational cohort studies that grouped two or more cohorts by their vitamin D3 values and listed mortality rates for the respective cohorts. PubMed and the https://c19vitamind.com (accessed on 27 March 2021) registry were searched according to Table 1. Subsequently, titles and abstracts were screened, and full-text articles were further analyzed for eligibility.", "Collected studies were divided into a population study [142] and seven hospital studies. Notably, these data sources are fundamentally different, as one assesses vitamin D values long-term, whereas the other measures vitamin D values postinfection, thereby masking a possible causal relationship between the preinfection vitamin D level and mortality.\nSeveral corrections for the crude mortality rates (CMRs) recorded by Ahmad were attempted to understand the underlying causes within the population study data and the outliers. In the end, none were used in the final data evaluation to avoid the risk of introducing hidden variables that also correlate with D3.\nMortality rates and D3 blood levels from studies on hospitalized COVID-19 patients were assembled in a separate dataset. When no median D3 blood levels were provided for the individual study cohorts, the IQR, mean ± SD or estimated values within the grouping criteria were used in that order. Patient characteristics, including age IQR, sex and diabetes status, were used to compute expected mortality rates with a machine learning model [154], which is available online (https://www.economist.com/graphic-detail/covid-pandemic-mortality-risk-estimator (accessed on 27 March 2021)). While other comorbidities from the source studies were not considered in our analysis, they also have lower impact on the model’s output, as can be easily confirmed using the online tool. Based on the expected disease mortality rate for the respective patient cohorts, the reported mortality rates from the source studies were corrected. Thereby, the relationship between the initial vitamin D levels and the resulting mortality becomes more apparent.\nThe two datasets were combined, and the mortality rates of the hospital studies were scaled according to the mortality range of the population studies, resulting in a uniform list of patient cohorts, their vitamin D status and dimensionless mortality coefficients. Linear regressions (OLS), Pearson and Spearman correlations of vitamin D, and the mortality values for the separate and combined datasets were generated with a Python 3.7 kernel using the scipy.stats 1.7.0 and statsmodels 0.12.2 libraries in a https://deepnote.com (accessed on 30 July 2021) Jupyter notebook.", "This study does not question the vital role that vaccination will play in coping with the COVID-19 pandemic. Nor does it claim that in the case of an acute SARS-CoV-2 infection, a high boost of 25(OH)D3 is or could be a helpful remedy when vitamin D deficiency is evident, as this is another question. Furthermore, empirical data on COVID-19 mortality for vitamin D3 blood levels above 35 ng/mL are sparse." ]
[ null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Search Strategy and Selection Criteria", "2.2. Data Analysis", "3. Results", "4. Discussion", "5. Limitations", "6. Conclusions" ]
[ "The SARS-CoV-2 pandemic causing acute respiratory distress syndrome (ARDS) has lasted for more than 18 months. It has created a major global health crisis due to the high number of patients requiring intensive care, and the high death rate has substantially affected everyday life through contact restrictions and lockdowns. According to many scientists and medical professionals, we are far from the end of this disaster and hence must learn to coexist with the virus for several more years, perhaps decades [1,2].\nIt is realistic to assume that there will be new mutations, which are possibly more infectious or more deadly. In the known history of virus infections, we have never faced a similar global spread. Due to the great number of viral genome replications that occur in infected individuals and the error-prone nature of RNA-dependent RNA polymerase, progressive accrual mutations do and will continue to occur [3,4,5]. Thus, similar to other virus infections such as influenza, we have to expect that the effectiveness of vaccination is limited in time, especially with the current vaccines designed to trigger an immunological response against a single viral protein [6,7,8].\nWe have already learned that even fully vaccinated people can be infected [9]. Currently, most of these infections do not result in hospitalization, especially for young individuals without comorbidities. However, these infections are the basis for the ongoing dissemination of the virus in a situation where worldwide herd immunity against SARS-CoV-2 is rather unlikely. Instead, humanity could be trapped in an insuperable race between new mutations and new vaccines, with an increasing risk of newly arising mutations becoming resistant to the current vaccines [3,10,11]. Thus, a return to normal life in the near future seems unlikely. Mask requirements as well as limitations of public life will likely accompany us for a long time if we are not able to establish additional methods that reduce virus dissemination.\nVaccination is an important part in the fight against SARS-CoV-2 but, with respect to the situation described above, should not be the only focus. One strong pillar in the protection against any type of virus infection is the strength of our immune system [12]. Unfortunately, thus far, this unquestioned basic principle of nature has been more or less neglected by the responsible authorities. It is well known that our modern lifestyle is far from optimal with respect to nutrition, physical fitness, and recreation. In particular, many people are not spending enough time outside in the sun, even in summer. The consequence is widespread vitamin D deficiency, which limits the performance of their immune systems, resulting in the increased spread of some preventable diseases of civilization, reduced protection against infections, and reduced effectiveness of vaccination [13].\nIn this publication, we will demonstrate that vitamin D3 deficiency, which is a well-documented worldwide problem [13,14,15,16,17,18,19,20], is one of the main reasons for severe courses of SARS-CoV-2 infections. The fatality rates correlate well with the findings that elderly people, black people, and people with comorbidities show very low vitamin D3 levels [16,21,22,23]. 
Additionally, with only a few exceptions, we are facing the highest infection rates in the winter months and in northern countries, which are known to suffer from low vitamin D3 levels due to low endogenous sun-triggered vitamin D3 synthesis [24,25,26,27].\nVitamin D3 was first discovered at the beginning of the 19th century as an essential factor needed to guarantee skeletal health. This discovery came after a long period of dealing with the dire consequences of rickets, which causes osteomalacia (softening of bones). This disease especially affected children in northern countries, who were deprived of sunlight and often worked in dark production halls during the industrial revolution [28]. At the beginning of the 20th century, it became clear that sunlight can cure rickets by triggering vitamin D3 synthesis in the skin. Cod liver oil is recognized as a natural source of vitamin D3 [29]. At the time, a blood level of 20 ng/mL was sufficient to stop osteomalacia. This target is still the recommended blood level today, as stated in many official documents [30]. In accordance with many other publications, we will show that this level is considerably too low to guarantee optimal functioning of the human body.\nIn the late 1920s, Adolf Windaus elucidated the structure of vitamin D3. The metabolic pathway of vitamin D3 (biochemical name cholecalciferol) is shown in Figure 1 [31]. The precursor, 7-dehydrocholesterol, is transformed into cholecalciferol in our skin by photoisomerization caused by UV-B exposure (wavelength 280–315 nm). After transportation to the liver, cholecalciferol is hydroxylated, resulting in 25-hydroxycholecalciferol (25(OH)D3, also called calcidiol), which can be stored in fat tissue for several months and is released back into blood circulation when needed. The biologically active form is generated by a further hydroxylation step, resulting in 1,25-dihydroxycholecalciferol (1,25(OH)2D3, also called calcitriol). Early investigations assumed that this transformation takes place mainly in the kidney.\nOver the last decades, knowledge regarding the mechanisms through which vitamin D3 affects human health has improved dramatically. It was discovered that the vitamin D3 receptor (VDR) and the vitamin D3 activating enzyme 1-α-hydroxylase (CYP27B1) are expressed in many cell types that are not involved in bone and mineral metabolism, such as the intestine, pancreas, and prostate as well as cells of the immune system [32,33,34,35,36]. This finding demonstrates the important, much wider impact of vitamin D3 on human health than previously understood [37,38]. Vitamin D turned out to be a powerful epigenetic regulator, influencing more than 2500 genes [39] and impacting dozens of our most serious health challenges [40], including cancer [41,42], diabetes mellitus [43], acute respiratory tract infections [44], chronic inflammatory diseases [45], and autoimmune diseases such as multiple sclerosis [46].\nIn the field of human immunology, the extrarenal synthesis of the active metabolite calcitriol-1,25(OH)2D3-by immune cells and lung epithelial cells has been shown to have immunomodulatory properties [47,48,49,50,51,52]. Today, a compelling body of experimental evidence indicates that activated vitamin D3 plays a fundamental role in regulating both innate and adaptive immune systems [53,54,55,56]. 
Intracellular vitamin D3 receptors (VDRs) are present in nearly all cell types involved in the human immune response, such as monocytes/macrophages, T cells, B cells, natural killer (NK) cells, and dendritic cells (DCs). Receptor binding engages the formation of the “vitamin D3 response element” (VDRE), regulating a large number of target genes involved in the immune response [57]. As a consequence of this knowledge, the scientific community now agrees that calcitriol is much more than a vitamin but rather a highly effective hormone with the same level of importance to human metabolism as other steroid hormones.\nThe blood level ensuring the reliable effectiveness of vitamin D3 with respect to all its important functions came under discussion again, and it turned out that 40–60 ng/mL is preferable [38], which is considerably above the level required to prevent rickets.\nLong before the SARS-CoV-2 pandemic, an increasing number of scientific publications showed the effectiveness of a sufficient vitamin D3 blood level in curing many of the human diseases caused by a weak or unregulated immune system [38,58,59,60]. This includes all types of virus infections [44,61,62,63,64,65,66,67,68,69,70], with a main emphasis on lung infections that cause ARDS [71,72,73], as well as autoimmune diseases [46,63,74,75]. However, routine vitamin D3 testing and supplementation are still not established today. Unfortunately, it seems that the new findings about vitamin D3 have not been well accepted in the medical community. Many official recommendations to define vitamin D3 deficiency still stick to the 20 ng/mL established 100 years ago to cure rickets [76].\nAdditionally, many recommendations for vitamin D3 supplementation are in the range of 5 to 20 µg per day (200 to 800 international units), which is much too low to guarantee the optimal blood level of 40–60 ng/mL [38,77]. One reason for these incorrect recommendations turned out to be calculation error [78,79]. Another reason for the error is because vitamin D3 treatment to cure osteomalacia was commonly combined with high doses of calcium to support bone calcification. When examining for the side effects of overdoses of such combination products, it turned out that there is a high risk of calcium deposits in blood vessels, especially in the kidney. Today, it is clear that such combination preparations are nonsensical because vitamin D3 stimulates calcium uptake in the intestine itself. Without calcium supplementation, even very high vitamin D3 supplementation does not cause vascular calcification, especially if another important finding is included. Even when calcium blood levels are high, the culprit for undesirable vascular calcification is not vitamin D but insufficient blood levels of vitamin K2. Thus, daily vitamin D3 supplementation in the range of 4000 to 10,000 units (100 to 250 µg) needed to generate an optimal vitamin D3 blood level in the range of 40–60 ng/mL has been shown to be completely safe when combined with approximately 200 µg/mL vitamin K2 [80,81,82]. However, this knowledge is still not widespread in the medical community, and obsolete warnings about the risks of vitamin D3 overdoses unfortunately are still commonly circulating.\nBased on these circumstances, the SARS-CoV-2 pandemic is becoming the second breakthrough in the history of vitamin D3 association with disease (after rickets), and we have to ensure that full advantage is being taken of its medical properties to keep people healthy. 
The most life-threatening events in the course of a SARS-CoV-2 infection are ARDS and cytokine release syndrome (CRS). It is well established that vitamin D3 is able to inhibit the underlying metabolic pathways [83,84] because a very specific interaction exists between the mechanism of SARS-CoV-2 infection and vitamin D3.\nAngiotensin-converting enzyme 2 (ACE2), a part of the renin-angiotensin system (RAS), serves as the major entry point for SARS-CoV-2 into cells (Figure 2). When SARS-CoV-2 is attached to ACE2 its expression is reduced, thus causing lung injury and pneumonia [85,86,87]. Vitamin D3 is a negative RAS modulator by inhibition of renin expression and stimulation of ACE2 expression. It therefore has a protective role against ARDS caused by SARS-CoV-2. Sufficient vitamin D3 levels prevent the development of ARDS by reducing the levels of angiotensin II and increasing the level of angiotensin-(1,7) [18,88,89,90,91,92].\nThere are several additional important functions of vitamin D3 supporting immune defense [18,77,94,95]:Vitamin D decreases the production of Th1 cells. Thus, it can suppress the progression of inflammation by reducing the generation of inflammatory cytokines [74,96,97].Vitamin D3 reduces the severity of cytokine release syndrome (CRS). This “cytokine storm” causes multiple organ damage and is therefore the main cause of death in the late stage of SARS-CoV-2 infection. The systemic inflammatory response due to viral infection is attenuated by promoting the differentiation of regulatory T cells [98,99,100,101].Vitamin D3 induces the production of the endogenous antimicrobial peptide cathelicidin (LL-37) in macrophages and lung epithelial cells, which acts against invading respiratory viruses by disrupting viral envelopes and altering the viability of host target cells [52,102,103,104,105,106,107].Experimental studies have shown that vitamin D and its metabolites modulate endothelial function and vascular permeability via multiple genomic and extragenomic pathways [108,109].Vitamin D reduces coagulation abnormalities in critically ill COVID-19 patients [110,111,112].\nVitamin D decreases the production of Th1 cells. Thus, it can suppress the progression of inflammation by reducing the generation of inflammatory cytokines [74,96,97].\nVitamin D3 reduces the severity of cytokine release syndrome (CRS). This “cytokine storm” causes multiple organ damage and is therefore the main cause of death in the late stage of SARS-CoV-2 infection. The systemic inflammatory response due to viral infection is attenuated by promoting the differentiation of regulatory T cells [98,99,100,101].\nVitamin D3 induces the production of the endogenous antimicrobial peptide cathelicidin (LL-37) in macrophages and lung epithelial cells, which acts against invading respiratory viruses by disrupting viral envelopes and altering the viability of host target cells [52,102,103,104,105,106,107].\nExperimental studies have shown that vitamin D and its metabolites modulate endothelial function and vascular permeability via multiple genomic and extragenomic pathways [108,109].\nVitamin D reduces coagulation abnormalities in critically ill COVID-19 patients [110,111,112].\nA rapidly increasing number of publications are investigating the vitamin D3 status of SARS-CoV-2 patients and have confirmed both low vitamin D levels in cases of severe courses of infection [113,114,115,116,117,118,119,120,121,122,123,124,125,126,127] and positive results of vitamin D3 treatments [128,129,130,131,132,133,134]. 
Therefore, many scientists recommend vitamin D3 as an indispensable part of a medical treatment plan to avoid severe courses of SARS-CoV-2 infection [14,18,77,84,135,136], which has additionally resulted in proposals for the consequent supplementation of the whole population [137]. A comprehensive overview and discussion of the current literature is given in a review by Linda Benskin [138]. Unfortunately, all these studies are based on relatively low numbers of patients. Well-accepted, placebo-controlled, double-blinded studies are still missing.\nThe finding that most SARS-CoV-2 patients admitted to hospitals have vitamin D3 blood levels that are too low is unquestioned even by opponents of vitamin D supplementation. However, there is an ongoing discussion as to whether we are facing a causal relationship or just a decline in the vitamin D levels caused by the infection itself [84,139,140,141].\nThere are reliable data on the average vitamin D3 levels in the population [15,19,142] in several countries, in parallel to the data about death rates caused by SARS-CoV-2 in these countries [143,144]. Obviously, these vitamin D3 data are not affected by SARS-CoV-2 infections. While meta-studies using such data [26,136,140,145] are already available, our goal was to analyze these data in the same manner as selected clinical data. In this article, we identify a vitamin D threshold that virtually eliminates excess mortality caused by SARS-CoV-2. In contrast to published D3/SARS-CoV-2 correlations [146,147,148,149,150,151,152], our data include studies assessing preinfection vitamin D values as well as studies with vitamin D values measured post-infection latest on the day after hospitalization. Thus, we can expect that the measured vitamin D status is still close to the preinfection level. In contrast to other meta-studies which also included large retrospective cohort studies [151,152], our aim was to perform regressions on the combined data after correcting for patient characteristics.\nThese results from independent datasets, which include data from before and after the onset of the disease, also further strengthen the assumption of a causal relationship between vitamin D3 blood levels and SARS-CoV-2 death rates. Our results therefore also confirm the importance of establishing vitamin D3 supplementation as a general method to prevent severe courses of SARS-CoV-2 infections.", " 2.1. Search Strategy and Selection Criteria Initially, a systematic literature review was performed to identify relevant COVID-19 studies. Included studies were observational cohort studies that grouped two or more cohorts by their vitamin D3 values and listed mortality rates for the respective cohorts. PubMed and the https://c19vitamind.com (accessed on 27 March 2021) registry were searched according to Table 1. Subsequently, titles and abstracts were screened, and full-text articles were further analyzed for eligibility.\nInitially, a systematic literature review was performed to identify relevant COVID-19 studies. Included studies were observational cohort studies that grouped two or more cohorts by their vitamin D3 values and listed mortality rates for the respective cohorts. PubMed and the https://c19vitamind.com (accessed on 27 March 2021) registry were searched according to Table 1. Subsequently, titles and abstracts were screened, and full-text articles were further analyzed for eligibility.\n 2.2. Data Analysis Collected studies were divided into a population study [142] and seven hospital studies. 
Notably, these data sources are fundamentally different, as one assesses vitamin D values long-term, whereas the other measures vitamin D values postinfection, thereby masking a possible causal relationship between the preinfection vitamin D level and mortality.\nSeveral corrections for the crude mortality rates (CMRs) recorded by Ahmad were attempted to understand the underlying causes within the population study data and the outliers. In the end, none were used in the final data evaluation to avoid the risk of introducing hidden variables that also correlate with D3.\nMortality rates and D3 blood levels from studies on hospitalized COVID-19 patients were assembled in a separate dataset. When no median D3 blood levels were provided for the individual study cohorts, the IQR, mean ± SD or estimated values within the grouping criteria were used in that order. Patient characteristics, including age IQR, sex and diabetes status, were used to compute expected mortality rates with a machine learning model [154], which is available online (https://www.economist.com/graphic-detail/covid-pandemic-mortality-risk-estimator (accessed on 27 March 2021)). While other comorbidities from the source studies were not considered in our analysis, they also have lower impact on the model’s output, as can be easily confirmed using the online tool. Based on the expected disease mortality rate for the respective patient cohorts, the reported mortality rates from the source studies were corrected. Thereby, the relationship between the initial vitamin D levels and the resulting mortality becomes more apparent.\nThe two datasets were combined, and the mortality rates of the hospital studies were scaled according to the mortality range of the population studies, resulting in a uniform list of patient cohorts, their vitamin D status and dimensionless mortality coefficients. Linear regressions (OLS), Pearson and Spearman correlations of vitamin D, and the mortality values for the separate and combined datasets were generated with a Python 3.7 kernel using the scipy.stats 1.7.0 and statsmodels 0.12.2 libraries in a https://deepnote.com (accessed on 30 July 2021) Jupyter notebook.\nCollected studies were divided into a population study [142] and seven hospital studies. Notably, these data sources are fundamentally different, as one assesses vitamin D values long-term, whereas the other measures vitamin D values postinfection, thereby masking a possible causal relationship between the preinfection vitamin D level and mortality.\nSeveral corrections for the crude mortality rates (CMRs) recorded by Ahmad were attempted to understand the underlying causes within the population study data and the outliers. In the end, none were used in the final data evaluation to avoid the risk of introducing hidden variables that also correlate with D3.\nMortality rates and D3 blood levels from studies on hospitalized COVID-19 patients were assembled in a separate dataset. When no median D3 blood levels were provided for the individual study cohorts, the IQR, mean ± SD or estimated values within the grouping criteria were used in that order. Patient characteristics, including age IQR, sex and diabetes status, were used to compute expected mortality rates with a machine learning model [154], which is available online (https://www.economist.com/graphic-detail/covid-pandemic-mortality-risk-estimator (accessed on 27 March 2021)). 
While other comorbidities from the source studies were not considered in our analysis, they also have lower impact on the model’s output, as can be easily confirmed using the online tool. Based on the expected disease mortality rate for the respective patient cohorts, the reported mortality rates from the source studies were corrected. Thereby, the relationship between the initial vitamin D levels and the resulting mortality becomes more apparent.\nThe two datasets were combined, and the mortality rates of the hospital studies were scaled according to the mortality range of the population studies, resulting in a uniform list of patient cohorts, their vitamin D status and dimensionless mortality coefficients. Linear regressions (OLS), Pearson and Spearman correlations of vitamin D, and the mortality values for the separate and combined datasets were generated with a Python 3.7 kernel using the scipy.stats 1.7.0 and statsmodels 0.12.2 libraries in a https://deepnote.com (accessed on 30 July 2021) Jupyter notebook.", "Initially, a systematic literature review was performed to identify relevant COVID-19 studies. Included studies were observational cohort studies that grouped two or more cohorts by their vitamin D3 values and listed mortality rates for the respective cohorts. PubMed and the https://c19vitamind.com (accessed on 27 March 2021) registry were searched according to Table 1. Subsequently, titles and abstracts were screened, and full-text articles were further analyzed for eligibility.", "Collected studies were divided into a population study [142] and seven hospital studies. Notably, these data sources are fundamentally different, as one assesses vitamin D values long-term, whereas the other measures vitamin D values postinfection, thereby masking a possible causal relationship between the preinfection vitamin D level and mortality.\nSeveral corrections for the crude mortality rates (CMRs) recorded by Ahmad were attempted to understand the underlying causes within the population study data and the outliers. In the end, none were used in the final data evaluation to avoid the risk of introducing hidden variables that also correlate with D3.\nMortality rates and D3 blood levels from studies on hospitalized COVID-19 patients were assembled in a separate dataset. When no median D3 blood levels were provided for the individual study cohorts, the IQR, mean ± SD or estimated values within the grouping criteria were used in that order. Patient characteristics, including age IQR, sex and diabetes status, were used to compute expected mortality rates with a machine learning model [154], which is available online (https://www.economist.com/graphic-detail/covid-pandemic-mortality-risk-estimator (accessed on 27 March 2021)). While other comorbidities from the source studies were not considered in our analysis, they also have lower impact on the model’s output, as can be easily confirmed using the online tool. Based on the expected disease mortality rate for the respective patient cohorts, the reported mortality rates from the source studies were corrected. Thereby, the relationship between the initial vitamin D levels and the resulting mortality becomes more apparent.\nThe two datasets were combined, and the mortality rates of the hospital studies were scaled according to the mortality range of the population studies, resulting in a uniform list of patient cohorts, their vitamin D status and dimensionless mortality coefficients. 
4. Discussion
This study illustrates that, at a time when vaccination was not yet available, patients with sufficiently high D3 serum levels preceding the infection were highly unlikely to suffer a fatal outcome. The partial risk at this D3 level seems to vanish beneath the normal statistical mortality risk for a given age and set of comorbidities. This correlation should have been good news when vaccination was not available, but it was instead widely ignored. Nonetheless, this result may offer hope for combating future variants of the rapidly changing virus as well as the dreaded breakthrough infections, in which severe outcomes have been seen in 10.5% of the vaccinated versus 26.5% of the unvaccinated group [164], with breakthroughs even being fatal in 2% of cases [165].

Could a virus that spreads so easily and is much deadlier than H1N1 influenza be kept under control if the human immune system could work at its fullest capacity? Zero mortality, a phrase used above, is of course an impossibility, as there is always an intrinsic mortality rate at any age. Statistical variations in genetics as well as in lifestyle often prevent us from identifying the exact medical cause of death, especially when risk factors (i.e., comorbidities) and an acute infection compete with one another. Risk factors also tend to reinforce each other. In COVID-19, it is common knowledge that type II diabetes, obesity, and high blood pressure easily double the risk of death [166], depending on age. The discussion of whether a patient has died "because of" or "with" COVID-19, or "from" or only "with" his or her comorbidities, thus seems obsolete. SARS-CoV-2 infection statistically adds to the overall mortality risk, but obviously to a much higher degree than most other infectious diseases or general risk factors.

The introduction has shown that the vitamin D system plays a crucial role not only in the health and strength of the skeletal system (rickets/osteoporosis) but also in the outcome of many infectious and/or autoimmune diseases [167,168]. Preexisting D3 deficiency is highly correlated with all of these conditions.

Many argue that, because correlation does not imply causality, a low D3 level may be merely a biomarker for an existing disease rather than its cause. However, the sheer range of diseases for which the empirical evidence shows an inverse relationship between disease severity and long-term D3 levels suggests that this assumption should be reversed [169].

This study investigated vitamin D levels, as a marker of a patient's immune defense and resilience, in relation to mortality from COVID-19 and presumably other respiratory infections. It compared and merged data from two completely different datasets. The strength of the chosen approach lies in its diversity: data from opposite and independent parts of the data universe yielded similar results. This strengthens the hypothesis that a fatal outcome of a COVID-19 infection, apart from other risk factors, depends strongly on the vitamin D status of the patient. The regressions suggest that the lower threshold for a healthy vitamin D level lies at approximately 125 nmol/L (50 ng/mL) 25(OH)D3, which would save most lives and reduce the impact of the disease even for patients with various comorbidities.

This is, to our knowledge, the first study that aimed to determine an optimal D3 level for minimizing COVID-19 mortality; other studies typically limit themselves to identifying odds ratios for two or three patient cohorts split at 30 ng/mL or lower.

Another study, with a cohort size close to 200,000, confirmed that the number of infections clearly correlated with the respective D3 levels [122]. A minimum number of infections was observed at 55 ng/mL.

Does that mean that vitamin D protects people from getting infected? Physically, an infection occurs when viruses or bacteria intercept and enter body cells. Medically, infections are defined by their symptomatic aftereffects. However, a positive PCR test presumes the individual to be infectious even when there are no clinical symptoms and can be followed by quarantine. There is ample evidence that many people with a confirmed SARS-CoV-2 infection never show any symptoms [170].

A "physical infection", which a PCR test can later detect, can only be avoided by physical measures such as disinfection, masks, and/or virucidal sprays, which prevent the virus from entering the body or otherwise attaching to and infecting body cells. However, if we define "infection" as having to be clinically symptomatic, then we have to call an infection "silent" when the immune system fights down the virus without any symptoms beyond the production of specific T cells or antibodies. Nevertheless, the PCR test will show such people as "infected/infectious", which justifies counting them as "cases" even without confirmation by clinical symptoms, e.g., in Worldometer statistics [171].

Just as D3 status correlates not only with the severity of symptoms but also with the duration of the disease [172], it is fair to assume that the same reasoning applies to silent infections. Thus, the period during which a silent infection is active, i.e., infectious and therefore producing a positive PCR result, may be shortened. We suggest that this may have a clear effect on the reproduction rate. A good immune defense, whether naturally present through good preconditioning or acquired as cross-immunity from earlier human coronavirus infections, cannot "protect" against infection the way physical measures do, but it can protect against clinical symptoms. Finding only half as many "infected" patients (confirmed by PCR tests) with a vitamin D level >30 ng/mL [122] does not prove protection against physical infection but rather against its consequences: a reduction in the number of days during which people are infectious must statistically lead to the demonstrated result of only half as many positive PCR tests in the group >30 ng/mL versus the group <30 ng/mL. This "protection" was most effective at ~55 ng/mL, which agrees well with our results.

This result was also confirmed by a 2012 study, which showed that one of the fatal and most feared symptoms of COVID-19, the out-of-control inflammation leading to respiratory failure, is directly correlated with vitamin D levels. Cells incubated in 30 ng/mL vitamin D and above displayed a significantly reduced response to lipopolysaccharides (LPS), with the highest inflammatory inhibition observed at 50 ng/mL [173].

This result matches the natural vitamin D3 levels observed among people with traditional hunter-gatherer lifestyles in a highly infectious environment, which were 110–125 nmol/L (45–50 ng/mL) [174].

There is a major discrepancy between these values and the 30 ng/mL D3 value considered by the WHO as the threshold for sufficiency, let alone the 20 ng/mL limit assumed by the D-A-CH countries.

Three directors of the Iranian Hospital Dubai likewise report from their practical experience that, among 21 COVID-19 patients with D3 levels above 40 ng/mL (supplemented with D3 for up to nine years for ophthalmologic reasons), none remained hospitalized for more than 4 days, and no cytokine storm, hypercoagulation, or complement deregulation occurred [175].

Thus, we hypothesize that long-standing D3 supplementation preceding an acute infection reduces the risk of a fatal outcome to practically nil and generally mitigates the course of the disease.

However, we have to point out that there are exceptions to this rule of nature: as in any multifactorial setting, we find a bell-curve distribution in the activation of the large number of genes that are under the control of vitamin D. There may be genetic reasons for this finding, but additional parameters, such as magnesium, zinc, and selenium, are also necessary for the production of the enzymes and cells of the immune system. Carlberg et al. found this bell-curve distribution when verifying the activation of 500–700 genes contributing to the production of immune-relevant cells and proteins after D3 supplementation [176]. Participants at the low end showed only 33% activation, while others at the high end showed well over 80% "of the 36 vitamin D3-triggered parameters". Carlberg used the terms (vitamin D3) low and high responders to describe this observation.

This finding may explain why a "D3-deficient" high responder may show only mild or even no symptoms, while a low responder may experience a fatal outcome. It also explains why many so-called "autoimmune", inflammation-based diseases correlate strongly with D3 levels, e.g., at higher latitudes or higher ages, when D3 production decreases, yet affect only part of the population: it is presumably the low responders who are mostly affected. Thus, for the 68–95% of mid-level responders (within one or two SDs), the suggested D3 level may be sufficient to fight everyday infections; for the 2.5–16% of high responders, it is more than sufficient and completely harmless. For the 2.5–16% of low responders, however, the level should be raised further, to 75 ng/mL or even >100 ng/mL, to achieve the same immune status as mid-level responders.
A vitamin D3 test before the start of any supplementation, combined with the patient's personal history of diseases, might provide a good indication of which group a patient belongs to and thus whether 50 ng/mL would be sufficient. If "normal" levels of D3 (between 20 and 30 ng/mL) are found together with any of the known D3-dependent autoimmune diseases, a higher level should be targeted as a precaution, especially as levels up to 120 ng/mL are declared by the WHO to have no adverse effects.

As future mutations of the SARS-CoV-2 virus may not be susceptible to the immunity acquired from vaccination or from a preceding infection, the entire population should raise its serum vitamin D to a safe level as soon as possible. As long as enough vitamin K2 is provided, the suggested D3 levels are entirely safe to reach by supplementation. However, the body is neither monothematic nor monocausal but a complicated system of dependencies and interactions among many different metabolites, hormones, vitamins, micronutrients, and enzymes. Selenium, magnesium, zinc, and vitamins A and E should therefore also be monitored and supplemented where necessary to optimize the conditions for a well-functioning immune system.

A simple observational study could prove or disprove all of the above. If one were to test PCR-positive contacts of an infected person for D3 levels immediately, i.e., before the onset of any symptoms, then follow them for 4 weeks and relate the course of their symptomatology to the D3 level, the same result as shown above should be obtained: a regression should cross the zero baseline at 45–55 ng/mL. We therefore strongly recommend performing such a study, which could be carried out with very little human and economic effort; a toy sketch of the expected analysis is given below.
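The following minimal simulation illustrates that proposed analysis under the stated assumption that symptom severity declines roughly linearly with the preinfection D3 level and crosses zero near 50 ng/mL; every number here is invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
# Hypothetical D3 levels of PCR-positive contacts at enrollment (ng/mL).
d3 = rng.uniform(10, 70, n)
# Toy model: severity score falls linearly and crosses zero at 50 ng/mL;
# slightly negative scores are an artifact of the linear toy model.
severity = 1.5 - 0.03 * d3 + rng.normal(0.0, 0.15, n)

# OLS recovers the zero-severity crossing point -b0/b1.
fit = sm.OLS(severity, sm.add_constant(d3)).fit()
b0, b1 = fit.params
print(f"estimated zero-severity crossing: {-b0 / b1:.1f} ng/mL")  # close to 50
```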
Even diseases caused by low vitamin D3 levels cannot be entirely resolved by ensuring a certain fixed D3 level for the population, as immune system activation varies between individuals. However, to fulfill Scribonius Largus' still valid maxim "primum non nocere, secundum cavere, tertium sanare" from 50 A.D., it should be the duty of the medical profession to look closely into a medication or supplementation that might help (tertium sanare) as long as it carries no known risks (primum non nocere) within the limits of the dosages needed to reach the blood levels mentioned (secundum cavere).

Unfortunately, this does not imply that, in the case of an acute SARS-CoV-2 infection, newly started supplementation with 25(OH)D3 will be a helpful remedy when calcidiol deficiency is evident, especially if the deficiency has been long-lasting and has caused or exacerbated the typical comorbidities that can now aggravate the outcome of the infection. This was not a question we aimed to answer in this study.

5. Limitations
This study does not question the vital role that vaccination will play in coping with the COVID-19 pandemic. Nor does it claim that, in the case of an acute SARS-CoV-2 infection, a high boost of 25(OH)D3 is or could be a helpful remedy when vitamin D deficiency is evident, as this is a separate question. Furthermore, empirical data on COVID-19 mortality at vitamin D3 blood levels above 35 ng/mL are sparse.

6. Conclusions
Although a vast number of publications support a correlation between the severity and death rate of SARS-CoV-2 infections and the vitamin D3 blood level, it is still debated whether this relationship is causal. In most studies, the vitamin D level was determined several days after the onset of infection, so a low vitamin D level may be the result, rather than the trigger, of a severe course of infection.

In this publication, we used a meta-analysis of two independent sets of data. One analysis is based on the long-term average vitamin D3 levels documented for 19 countries. The second is based on 1601 hospitalized patients, 784 of whom had their vitamin D levels measured within a day of admission and 817 whose vitamin D levels were known preinfection. Both datasets show a strong correlation between the death rate caused by SARS-CoV-2 and the vitamin D blood level. At a threshold level of 30 ng/mL, mortality decreases considerably. In addition, our analysis shows that the regression for the combined datasets intersects the D3 axis at approximately 50 ng/mL, suggesting that this vitamin D3 blood level may prevent any excess mortality. These findings are supported not only by a large infection study showing the same optimum but also by the natural vitamin D levels observed in traditional peoples living in the region where humanity originated, whose immune systems were able to fight off most (not all) infections in most (not all) individuals.

Vaccination is and will remain an important keystone in our fight against SARS-CoV-2. However, current data clearly show that vaccination alone cannot prevent all SARS-CoV-2 infections and all dissemination of the virus. This scenario may become much worse with new virus mutations that are less susceptible to the current vaccines or even insensitive to any vaccine.

Therefore, based on our data, the authors strongly recommend combining vaccination with routine strengthening of the immune system of the whole population by vitamin D3 supplementation, to consistently guarantee blood levels above 50 ng/mL (125 nmol/L). From a medical point of view, this will not only save many lives but also increase the success of vaccination. From a social and political point of view, it will lower the need for further contact restrictions and lockdowns. From an economic point of view, it will save billions of dollars worldwide, as vitamin D3 is inexpensive and, together with vaccines, provides a good opportunity to get the spread of SARS-CoV-2 under control.

Although very broad data-based support exists for the protective effect of vitamin D against severe SARS-CoV-2 infections, we strongly recommend initiating the well-designed observational studies mentioned above and/or double-blind randomized controlled trials (RCTs) to convince the medical community and the health authorities that vitamin D testing and supplementation are needed to avoid fatal breakthrough infections and to be prepared for new, dangerous mutations.
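For reference, the two unit systems used above, ng/mL and nmol/L, are related through the molar mass of 25(OH)D3 (approximately 400.6 g/mol), which gives the familiar conversion factor of about 2.5; a small helper makes this explicit.

```python
# 1 ng/mL = 1 ug/L; dividing by the molar mass of 25(OH)D3 (~400.64 g/mol)
# and converting to nmol/L gives a factor of ~2.496, commonly rounded to 2.5.
def ng_ml_to_nmol_l(ng_ml: float) -> float:
    return ng_ml * 1000.0 / 400.64

print(f"{ng_ml_to_nmol_l(50):.0f} nmol/L")  # ~125 nmol/L for 50 ng/mL
```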
[ "intro", "methods", null, null, "results", "discussion", null, "conclusions" ]
[ "mortality", "vitamin D", "calcidiol", "calcitriol", "D3", "COVID-19", "inflammation", "SARS-CoV-2", "ARDS", "immune status", "immunodeficiency", "renin", "angiotensin", "ACE2", "virus infection", "cytokine release syndrome", "CRS" ]
1. Introduction: The SARS-CoV-2 pandemic causing acute respiratory distress syndrome (ARDS) has lasted for more than 18 months. It has created a major global health crisis due to the high number of patients requiring intensive care, and the high death rate has substantially affected everyday life through contact restrictions and lockdowns. According to many scientists and medical professionals, we are far from the end of this disaster and hence must learn to coexist with the virus for several more years, perhaps decades [1,2]. It is realistic to assume that there will be new mutations, which are possibly more infectious or more deadly. In the known history of virus infections, we have never faced a similar global spread. Due to the great number of viral genome replications that occur in infected individuals and the error-prone nature of RNA-dependent RNA polymerase, progressive accrual mutations do and will continue to occur [3,4,5]. Thus, similar to other virus infections such as influenza, we have to expect that the effectiveness of vaccination is limited in time, especially with the current vaccines designed to trigger an immunological response against a single viral protein [6,7,8]. We have already learned that even fully vaccinated people can be infected [9]. Currently, most of these infections do not result in hospitalization, especially for young individuals without comorbidities. However, these infections are the basis for the ongoing dissemination of the virus in a situation where worldwide herd immunity against SARS-CoV-2 is rather unlikely. Instead, humanity could be trapped in an insuperable race between new mutations and new vaccines, with an increasing risk of newly arising mutations becoming resistant to the current vaccines [3,10,11]. Thus, a return to normal life in the near future seems unlikely. Mask requirements as well as limitations of public life will likely accompany us for a long time if we are not able to establish additional methods that reduce virus dissemination. Vaccination is an important part in the fight against SARS-CoV-2 but, with respect to the situation described above, should not be the only focus. One strong pillar in the protection against any type of virus infection is the strength of our immune system [12]. Unfortunately, thus far, this unquestioned basic principle of nature has been more or less neglected by the responsible authorities. It is well known that our modern lifestyle is far from optimal with respect to nutrition, physical fitness, and recreation. In particular, many people are not spending enough time outside in the sun, even in summer. The consequence is widespread vitamin D deficiency, which limits the performance of their immune systems, resulting in the increased spread of some preventable diseases of civilization, reduced protection against infections, and reduced effectiveness of vaccination [13]. In this publication, we will demonstrate that vitamin D3 deficiency, which is a well-documented worldwide problem [13,14,15,16,17,18,19,20], is one of the main reasons for severe courses of SARS-CoV-2 infections. The fatality rates correlate well with the findings that elderly people, black people, and people with comorbidities show very low vitamin D3 levels [16,21,22,23]. Additionally, with only a few exceptions, we are facing the highest infection rates in the winter months and in northern countries, which are known to suffer from low vitamin D3 levels due to low endogenous sun-triggered vitamin D3 synthesis [24,25,26,27]. 
Vitamin D3 was first discovered at the beginning of the 19th century as an essential factor needed to guarantee skeletal health. This discovery came after a long period of dealing with the dire consequences of rickets, which causes osteomalacia (softening of bones). This disease especially affected children in northern countries, who were deprived of sunlight and often worked in dark production halls during the industrial revolution [28]. At the beginning of the 20th century, it became clear that sunlight can cure rickets by triggering vitamin D3 synthesis in the skin. Cod liver oil is recognized as a natural source of vitamin D3 [29]. At the time, a blood level of 20 ng/mL was sufficient to stop osteomalacia. This target is still the recommended blood level today, as stated in many official documents [30]. In accordance with many other publications, we will show that this level is considerably too low to guarantee optimal functioning of the human body. In the late 1920s, Adolf Windaus elucidated the structure of vitamin D3. The metabolic pathway of vitamin D3 (biochemical name cholecalciferol) is shown in Figure 1 [31]. The precursor, 7-dehydrocholesterol, is transformed into cholecalciferol in our skin by photoisomerization caused by UV-B exposure (wavelength 280–315 nm). After transportation to the liver, cholecalciferol is hydroxylated, resulting in 25-hydroxycholecalciferol (25(OH)D3, also called calcidiol), which can be stored in fat tissue for several months and is released back into blood circulation when needed. The biologically active form is generated by a further hydroxylation step, resulting in 1,25-dihydroxycholecalciferol (1,25(OH)2D3, also called calcitriol). Early investigations assumed that this transformation takes place mainly in the kidney. Over the last decades, knowledge regarding the mechanisms through which vitamin D3 affects human health has improved dramatically. It was discovered that the vitamin D3 receptor (VDR) and the vitamin D3 activating enzyme 1-α-hydroxylase (CYP27B1) are expressed in many cell types that are not involved in bone and mineral metabolism, such as the intestine, pancreas, and prostate as well as cells of the immune system [32,33,34,35,36]. This finding demonstrates the important, much wider impact of vitamin D3 on human health than previously understood [37,38]. Vitamin D turned out to be a powerful epigenetic regulator, influencing more than 2500 genes [39] and impacting dozens of our most serious health challenges [40], including cancer [41,42], diabetes mellitus [43], acute respiratory tract infections [44], chronic inflammatory diseases [45], and autoimmune diseases such as multiple sclerosis [46]. In the field of human immunology, the extrarenal synthesis of the active metabolite calcitriol-1,25(OH)2D3-by immune cells and lung epithelial cells has been shown to have immunomodulatory properties [47,48,49,50,51,52]. Today, a compelling body of experimental evidence indicates that activated vitamin D3 plays a fundamental role in regulating both innate and adaptive immune systems [53,54,55,56]. Intracellular vitamin D3 receptors (VDRs) are present in nearly all cell types involved in the human immune response, such as monocytes/macrophages, T cells, B cells, natural killer (NK) cells, and dendritic cells (DCs). Receptor binding engages the formation of the “vitamin D3 response element” (VDRE), regulating a large number of target genes involved in the immune response [57]. 
As a consequence of this knowledge, the scientific community now agrees that calcitriol is much more than a vitamin but rather a highly effective hormone with the same level of importance to human metabolism as other steroid hormones. The blood level ensuring the reliable effectiveness of vitamin D3 with respect to all its important functions came under discussion again, and it turned out that 40–60 ng/mL is preferable [38], which is considerably above the level required to prevent rickets. Long before the SARS-CoV-2 pandemic, an increasing number of scientific publications showed the effectiveness of a sufficient vitamin D3 blood level in curing many of the human diseases caused by a weak or unregulated immune system [38,58,59,60]. This includes all types of virus infections [44,61,62,63,64,65,66,67,68,69,70], with a main emphasis on lung infections that cause ARDS [71,72,73], as well as autoimmune diseases [46,63,74,75]. However, routine vitamin D3 testing and supplementation are still not established today. Unfortunately, it seems that the new findings about vitamin D3 have not been well accepted in the medical community. Many official recommendations to define vitamin D3 deficiency still stick to the 20 ng/mL established 100 years ago to cure rickets [76]. Additionally, many recommendations for vitamin D3 supplementation are in the range of 5 to 20 µg per day (200 to 800 international units), which is much too low to guarantee the optimal blood level of 40–60 ng/mL [38,77]. One reason for these incorrect recommendations turned out to be calculation error [78,79]. Another reason for the error is because vitamin D3 treatment to cure osteomalacia was commonly combined with high doses of calcium to support bone calcification. When examining for the side effects of overdoses of such combination products, it turned out that there is a high risk of calcium deposits in blood vessels, especially in the kidney. Today, it is clear that such combination preparations are nonsensical because vitamin D3 stimulates calcium uptake in the intestine itself. Without calcium supplementation, even very high vitamin D3 supplementation does not cause vascular calcification, especially if another important finding is included. Even when calcium blood levels are high, the culprit for undesirable vascular calcification is not vitamin D but insufficient blood levels of vitamin K2. Thus, daily vitamin D3 supplementation in the range of 4000 to 10,000 units (100 to 250 µg) needed to generate an optimal vitamin D3 blood level in the range of 40–60 ng/mL has been shown to be completely safe when combined with approximately 200 µg/mL vitamin K2 [80,81,82]. However, this knowledge is still not widespread in the medical community, and obsolete warnings about the risks of vitamin D3 overdoses unfortunately are still commonly circulating. Based on these circumstances, the SARS-CoV-2 pandemic is becoming the second breakthrough in the history of vitamin D3 association with disease (after rickets), and we have to ensure that full advantage is being taken of its medical properties to keep people healthy. The most life-threatening events in the course of a SARS-CoV-2 infection are ARDS and cytokine release syndrome (CRS). It is well established that vitamin D3 is able to inhibit the underlying metabolic pathways [83,84] because a very specific interaction exists between the mechanism of SARS-CoV-2 infection and vitamin D3. 
Angiotensin-converting enzyme 2 (ACE2), a part of the renin–angiotensin system (RAS), serves as the major entry point for SARS-CoV-2 into cells (Figure 2). When SARS-CoV-2 binds to ACE2, ACE2 expression is downregulated, contributing to lung injury and pneumonia [85,86,87]. Vitamin D3 is a negative RAS modulator: it inhibits renin expression and stimulates ACE2 expression. It therefore has a protective role against ARDS caused by SARS-CoV-2. Sufficient vitamin D3 levels prevent the development of ARDS by reducing the levels of angiotensin II and increasing the level of angiotensin-(1–7) [18,88,89,90,91,92]. There are several additional important functions of vitamin D3 supporting immune defense [18,77,94,95]: Vitamin D decreases the production of Th1 cells; thus, it can suppress the progression of inflammation by reducing the generation of inflammatory cytokines [74,96,97]. Vitamin D3 reduces the severity of cytokine release syndrome (CRS). This “cytokine storm” causes multiple organ damage and is therefore the main cause of death in the late stage of SARS-CoV-2 infection. The systemic inflammatory response due to viral infection is attenuated by promoting the differentiation of regulatory T cells [98,99,100,101]. Vitamin D3 induces the production of the endogenous antimicrobial peptide cathelicidin (LL-37) in macrophages and lung epithelial cells, which acts against invading respiratory viruses by disrupting viral envelopes and altering the viability of host target cells [52,102,103,104,105,106,107]. Experimental studies have shown that vitamin D and its metabolites modulate endothelial function and vascular permeability via multiple genomic and extragenomic pathways [108,109]. Vitamin D reduces coagulation abnormalities in critically ill COVID-19 patients [110,111,112]. A rapidly increasing number of publications are investigating the vitamin D3 status of SARS-CoV-2 patients and have confirmed both low vitamin D levels in cases of severe courses of infection [113,114,115,116,117,118,119,120,121,122,123,124,125,126,127] and positive results of vitamin D3 treatments [128,129,130,131,132,133,134]. Therefore, many scientists recommend vitamin D3 as an indispensable part of a medical treatment plan to avoid severe courses of SARS-CoV-2 infection [14,18,77,84,135,136], which has additionally resulted in proposals for consistent supplementation of the whole population [137]. 
A comprehensive overview and discussion of the current literature is given in a review by Linda Benskin [138]. Unfortunately, all these studies are based on relatively small numbers of patients. Well-accepted, placebo-controlled, double-blinded studies are still missing. The finding that most SARS-CoV-2 patients admitted to hospitals have vitamin D3 blood levels that are too low is unquestioned even by opponents of vitamin D supplementation. However, there is an ongoing discussion as to whether we are facing a causal relationship or merely a decline in vitamin D levels caused by the infection itself [84,139,140,141]. There are reliable data on the average vitamin D3 levels in the populations of several countries [15,19,142], in parallel to the data about death rates caused by SARS-CoV-2 in these countries [143,144]. Obviously, these vitamin D3 data are not affected by SARS-CoV-2 infections. While meta-studies using such data [26,136,140,145] are already available, our goal was to analyze these data in the same manner as selected clinical data. In this article, we identify a vitamin D threshold that virtually eliminates excess mortality caused by SARS-CoV-2. In contrast to published D3/SARS-CoV-2 correlations [146,147,148,149,150,151,152], our data include studies assessing preinfection vitamin D values as well as studies with vitamin D values measured post-infection, at the latest on the day after hospitalization. Thus, we can expect that the measured vitamin D status is still close to the preinfection level. In contrast to other meta-studies that also included large retrospective cohort studies [151,152], our aim was to perform regressions on the combined data after correcting for patient characteristics. These results from independent datasets, which include data from before and after the onset of the disease, further strengthen the assumption of a causal relationship between vitamin D3 blood levels and SARS-CoV-2 death rates. Our results therefore also confirm the importance of establishing vitamin D3 supplementation as a general method to prevent severe courses of SARS-CoV-2 infections. 2. Methods: 2.1. Search Strategy and Selection Criteria: Initially, a systematic literature review was performed to identify relevant COVID-19 studies. Included studies were observational cohort studies that grouped two or more cohorts by their vitamin D3 values and listed mortality rates for the respective cohorts. PubMed and the https://c19vitamind.com (accessed on 27 March 2021) registry were searched according to Table 1. Subsequently, titles and abstracts were screened, and full-text articles were further analyzed for eligibility. 
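The exact query terms are defined in Table 1 of the paper and are not reproduced here; the following is only a hypothetical sketch of how such a PubMed search could be scripted with Biopython's Entrez module, with an illustrative search term of our own:

```python
# Hypothetical reconstruction of the PubMed query step. The search term below
# is an assumption for illustration, not the authors' actual Table 1 query.
from Bio import Entrez

Entrez.email = "editor@example.org"  # required by NCBI; placeholder address

term = '("COVID-19" OR "SARS-CoV-2") AND ("vitamin D" OR "25(OH)D") AND mortality'

handle = Entrez.esearch(db="pubmed", term=term, retmax=600)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found; first IDs: {record['IdList'][:5]}")
```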
2.2. Data Analysis: Collected studies were divided into a population study [142] and seven hospital studies. Notably, these data sources are fundamentally different, as one assesses vitamin D values long-term, whereas the other measures vitamin D values postinfection, thereby potentially masking a causal relationship between the preinfection vitamin D level and mortality. Several corrections to the crude mortality rates (CMRs) recorded by Ahmad were attempted to understand the underlying causes within the population study data and the outliers. In the end, none were used in the final data evaluation to avoid the risk of introducing hidden variables that also correlate with D3. Mortality rates and D3 blood levels from studies on hospitalized COVID-19 patients were assembled in a separate dataset. When no median D3 blood levels were provided for the individual study cohorts, the IQR, the mean ± SD, or estimated values within the grouping criteria were used, in that order. Patient characteristics, including age IQR, sex, and diabetes status, were used to compute expected mortality rates with a machine learning model [154], which is available online (https://www.economist.com/graphic-detail/covid-pandemic-mortality-risk-estimator (accessed on 27 March 2021)). While other comorbidities from the source studies were not considered in our analysis, they also have a lower impact on the model’s output, as can easily be confirmed using the online tool. Based on the expected disease mortality rate for the respective patient cohorts, the reported mortality rates from the source studies were corrected. Thereby, the relationship between the initial vitamin D levels and the resulting mortality becomes more apparent. 
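The paper does not spell out the correction formula. One plausible reading, assumed here, is an SMR-style ratio of the observed cohort mortality to the mortality expected from the risk model for that cohort's age/sex/diabetes profile; all numbers below are illustrative placeholders, not the study's data:

```python
# Sketch of the cohort-level correction described above. Dividing observed by
# model-expected mortality (a standardized-mortality-ratio-style correction)
# is an assumed interpretation; the source does not give the exact formula.
cohorts = [
    # (median D3 in ng/mL, observed mortality, expected mortality from the
    #  risk model given age/sex/diabetes) -- illustrative numbers only
    (10.0, 0.35, 0.20),
    (22.0, 0.18, 0.15),
    (38.0, 0.05, 0.12),
]

corrected = [(d3, observed / expected) for d3, observed, expected in cohorts]
for d3, coefficient in corrected:
    print(f"D3 = {d3:4.1f} ng/mL -> corrected mortality coefficient {coefficient:.2f}")
```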
The two datasets were combined, and the mortality rates of the hospital studies were scaled to the mortality range of the population study, resulting in a uniform list of patient cohorts, their vitamin D status, and dimensionless mortality coefficients. Linear regressions (OLS) and Pearson and Spearman correlations of vitamin D and the mortality values for the separate and combined datasets were generated with a Python 3.7 kernel using the scipy.stats 1.7.0 and statsmodels 0.12.2 libraries in a https://deepnote.com (accessed on 30 July 2021) Jupyter notebook. 
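A minimal sketch of this analysis step with the named libraries, run on made-up data (the study's cohort values are in Tables 2–5 and are not reproduced here); the x-intercept of the OLS fit is what the Results section later reports as the theoretical point of zero mortality:

```python
import numpy as np
import scipy.stats as st
import statsmodels.api as sm

# Illustrative cohort data: median D3 (ng/mL) and dimensionless mortality
# coefficients -- placeholders, not the values extracted from the studies.
d3 = np.array([8.0, 12.0, 15.0, 18.0, 22.0, 25.0, 28.0, 33.0, 38.0, 45.0])
mortality = np.array([1.9, 1.6, 1.5, 1.2, 1.0, 0.8, 0.7, 0.5, 0.3, 0.2])

print(st.pearsonr(d3, mortality))    # Pearson r and p-value
print(st.spearmanr(d3, mortality))   # Spearman rho and p-value

ols = sm.OLS(mortality, sm.add_constant(d3)).fit()  # OLS fit: mortality ~ D3
intercept, slope = ols.params
# The D3 level at which the fitted line crosses zero mortality:
print(f"zero-mortality D3 level ~ {-intercept / slope:.1f} ng/mL")
```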
3. Results: Database and registry searches resulted in 563 and 66 records, respectively. Nonsystematic web searches accounted for 13 studies, from which an additional 31 references were assessed. After removal of 104 duplicates and initial screening, 44 studies remained. Four meta-studies, one comment, one retracted study, one report with unavailable data, one off-topic report, and one Russian-language record were excluded. The remaining 35 studies were assessed in full text, 20 of which did not meet the eligibility criteria due to their study design or a lack of quantitative mortality data. Four further studies were excluded due to missing data for individual patient cohorts. Finally, three studies were excluded due to skewed or nonrepresentative patient characteristics, as reviewed by LB and JVM [114,155,156]. Eight studies eligible for quantitative analysis remained, as listed in Table 2. A PRISMA flowchart [157] is presented in Figure 3. The observed median (IQR) vitamin D value over all collected study cohorts was 23.2 ng/mL (17.4–26.8). A frequency distribution of vitamin D levels is shown in Figure 4. One population study, by Ahmad et al. [142], was identified. Therein, the CMRs for 19 European countries are compiled from Johns Hopkins University COVID-19 pandemic data [143] for the time frame from 21 March 2020 to 22 January 2021, together with D3 blood levels for the respective countries collected by literature review. Furthermore, the proportions of the population aged 70+ were collected. The median vitamin D3 level across countries was 23.2 ng/mL (19.9–25.5 ng/mL). Between the CMRs and the corresponding mean vitamin D3 levels in the respective populations, a moderately negative Spearman correlation was observed, rs = −0.430 (95% CI: −0.805 to −0.081). No further adjustments of these CMR values were performed by Ahmad. The correlations shown in Table 3 suggest the sex/age distribution, diabetes, and the stringency of public health measures as some of the causes of outliers within the Ahmad dataset. However, this has little effect on the further results discussed below. The extracted data from the seven hospital studies showed a median vitamin D3 level of 23.2 ng/mL (14.5–30.9 ng/mL). After correction for patient characteristics and scaling, these data are plotted together with the data points from Ahmad in Figure 5. The correlation results are shown in Table 4, in which the combined data show a significant negative Pearson correlation, r(32) = −0.3989, p = 0.0194. The linear regression results can be found in Table 5. The regression for the combined data intersects the D3 axis at 50.7 ng/mL, suggesting a theoretical point of zero mortality. 4. Discussion: This study illustrates that, at a time when vaccination was not yet available, patients with sufficiently high D3 serum levels preceding the infection were highly unlikely to suffer a fatal outcome. The partial risk at this D3 level seems to vanish beneath the normal statistical mortality risk for a given age and in light of given comorbidities. This correlation should have been good news when vaccination was not available but instead was widely ignored. Nonetheless, this result may offer hope for combating future variants of the rapidly changing virus as well as the dreaded breakthrough infections, in which severe outcomes have been seen in 10.5% of the vaccinated versus 26.5% of the unvaccinated group [164], with breakthrough infections even being fatal in 2% of cases [165]. 
Could a virus that is spreading so easily and is much deadlier than H1N1 influenza be kept under control if the human immune system could work at its fullest capacity? Zero mortality, a phrase used above, is of course an impossibility, as there is always a given intrinsic mortality rate for any age. Statistical variations in genetics as well as in lifestyle often prevent us from identifying the exact medical cause of death, especially when risk factors (i.e., comorbidities) and an acute infection are in competition with one another. Risk factors also tend to reinforce each other. In COVID-19, it is common knowledge that type II diabetes, obesity, and high blood pressure easily double the risk of death [166], depending on age. The debate over whether a patient died “of” or merely “with” COVID-19, or “of” or merely “with” his or her comorbidities, thus seems moot. SARS-CoV-2 infection statistically just adds to the overall mortality risk, but obviously to a much higher degree than most other infectious diseases or general risk factors. The background section has shown that the vitamin D system plays a crucial role not only in the health and strength of the skeletal system (rickets/osteoporosis) but also in the outcome of many infectious and/or autoimmune diseases [167,168]. Preexisting D3 deficiency is strongly correlated with poor outcomes in all of the previously mentioned cases. Many argue that, because a correlation does not imply causality, a low D3 level may be merely a biomarker for an existing disease rather than its cause. However, the range of diseases for which existing empirical evidence shows an inverse relationship between disease severity and long-term D3 levels suggests that this assumption should be reversed [169]. This study investigated the correlation between vitamin D levels, as a marker of a patient’s immune defense and resilience, and the outcome of COVID-19 and presumably other respiratory infections. It compared and merged data from two completely different datasets. The strength of the chosen approach lies in its diversity, as data from opposite and independent parts of the data universe yielded similar results. This result strengthens the hypothesis that a fatal outcome from COVID-19 infection, apart from other risk factors, is strongly dependent on the vitamin D status of the patient. The mathematical regressions suggested that the lower threshold for healthy vitamin D levels should lie at approximately 125 nmol/L or 50 ng/mL 25(OH)D3, which would save most lives, reducing the impact even for patients with various comorbidities. This is, to our knowledge, the first study that aimed to determine an optimum D3 level to minimize COVID-19 mortality, as other studies typically limit themselves to identifying odds ratios for 2–3 patient cohorts split at 30 ng/mL or lower. Another study, with a cohort size close to 200,000, confirmed that the number of infections clearly correlated with the respective D3 levels [122]. A minimum number of infections was observed at 55 ng/mL. Does that mean that vitamin D protects people from getting infected? Physically, an infection occurs when viruses or bacteria intercept and enter body cells. Medically, infections are defined as having symptomatic aftereffects. However, a positive PCR test presumes the individual to be infectious even when there are no clinical symptoms and can be followed by quarantine. There is ample evidence that many people with a confirmed SARS-CoV-2 infection have not shown any symptoms [170]. 
A “physical infection”, which a PCR test can later detect, can only be avoided by physical measures such as disinfection, masks, and/or virucidal sprays, which prevent the virus from either entering the body or otherwise attaching to body cells to infect them. However, if we define “infection” as having to be clinically symptomatic, then we have to refer to an infection as “silent” to describe what happens when the immune system fights down the virus without any symptoms appearing, apart from the production of specific T cells or antibodies. Nevertheless, the PCR test will show such people as being “infected/infectious”, which justifies counting them as “cases” even without confirmation by clinical symptoms, e.g., in Worldometer statistics [171]. Just as the D3 status correlates not only with the severity of symptoms but also with the length of the ongoing disease [172], it is fair to assume that the same reasoning also applies to silent infections. Thus, the duration in which a silent infection itself is active, i.e., infectious and will therefore produce a positive PCR result, may be reduced. We suggest that this may have a clear effect on the reproduction rate. Thus, it seems clear that a good immune defense, be it naturally present because of good preconditioning or from an acquired cross-immunity from earlier human coronavirus infections, cannot “protect” against the infection like physical measures but can protect against clinical symptoms. Finding only half as many “infected” patients (confirmed by PCR tests) with a vitamin D level >30 ng/mL [122] does not prove protection against physical infection but rather against its consequences: a reduction in the number of days during which people are infectious must statistically lead to the demonstrated result of only half as many positive PCR tests recorded in the group >30 ng/mL vs. the group <30 ng/mL. This “protection” was most effective at ~55 ng/mL, which agrees well with our results. This result was also confirmed in a 2012 study, which showed that the process underlying one of the most feared and fatal complications of COVID-19, out-of-control inflammation leading to respiratory failure, is directly correlated with vitamin D levels. Cells incubated in 30 ng/mL vitamin D and above displayed a significantly reduced response to lipopolysaccharides (LPS), with the highest inflammatory inhibition observed at 50 ng/mL [173]. This result matches scientific data on the natural vitamin D3 levels seen among traditional hunter/gatherer lifestyles in a highly infectious environment, which were 110–125 nmol/L (45–50 ng/mL) [174]. These values stand in major discrepancy to the 30 ng/mL D3 value considered by the WHO as the threshold for sufficiency and the 20 ng/mL limit assumed by the D-A-CH countries. Three directors of the Iranian Hospital Dubai also state from their practical experience that among 21 COVID-19 patients with D3 levels above 40 ng/mL (supplemented with D3 for up to nine years for ophthalmologic reasons), none remained hospitalized for over 4 days, with no cytokine storm, hypercoagulation, or complement deregulation occurring [175]. Thus, we hypothesize that long-standing supplementation with D3 preceding an acute infection will reduce the risk of a fatal outcome to practically nil and generally mitigate the course of the disease. However, we have to point out that, as a rule of nature, there are exceptions: as in any multifactorial setting, we find a bell curve distribution in the activation of the huge number of genes that are under the control of vitamin D. 
There may be genetic reasons for this finding, but there are also additional influencing parameters necessary for the production of enzymes and cells of the immune system, such as magnesium, zinc, and selenium. Carlberg et al. found this bell curve distribution when verifying the activation of 500–700 genes contributing to the production of immune system-relevant cells and proteins after D3 supplementation [176]. Participants at the low end showed only 33% activation “of the 36 vitamin D3-triggered parameters”, while others at the high end showed well over 80%. Carlberg used the terms (vitamin D3) “low responders” and “high responders” to describe this observation. This finding may explain why a “D3-deficient” high responder may show only mild or even no symptoms, while a low responder may experience a fatal outcome. It also explains why, on the one hand, many so-called “autoimmune” inflammation-based diseases highly correlate with the D3 level, e.g., at higher latitudes or higher ages, when D3 production decreases, but why only parts of the population are affected: it is presumably the low responders who are mostly affected. Thus, for the middle 68–95% of the population (within 1 or 2 SDs of a normal responder distribution; see the sketch below), the suggested D3 level may be sufficient to fight everyday infections, and for the 2.5–16% of high responders, it is more than sufficient and completely harmless. However, for the 2.5–16% of low responders, this level should be raised further, to 75 ng/mL or even >100 ng/mL, to achieve the same immune status as mid-level responders. A vitamin D3 test before the start of any supplementation, combined with the patient’s personal history of diseases, might provide a good indication of which group the patient belongs to and thus whether 50 ng/mL would be sufficient. If “normal” D3 levels (between 20 and 30 ng/mL) are found together with any of the known D3-dependent autoimmune diseases, a higher level should be targeted as a precaution, especially as levels up to 120 ng/mL are declared by the WHO to have no adverse effects. As future mutations of the SARS-CoV-2 virus may not be susceptible to the acquired immunity from vaccination or from a preceding infection, the entire population should raise their serum vitamin D level to a safe level as soon as possible. As long as enough vitamin K2 is provided, the suggested D3 levels are entirely safe to achieve by supplementation. However, the body is neither monothematic nor monocausal but a complicated system of dependencies and interactions of many different metabolites, hormones, vitamins, micronutrients, enzymes, etc. Selenium, magnesium, zinc, and vitamins A and E should also be controlled for and supplemented where necessary to optimize the conditions for a well-functioning immune system. A simple observational study could prove or disprove all of the above. If one were to test PCR-positive contacts of an infected person for D3 levels immediately, i.e., before the onset of any symptoms, and then follow them for 4 weeks and relate the course of their symptomatology to the D3 level, the same result as shown above should be obtained: a regression should cross the zero baseline at 45–55 ng/mL. Therefore, we strongly recommend the performance of such a study, which could be carried out with very little human and economic effort. Even diseases caused by low vitamin D3 levels cannot be entirely resolved by ensuring a certain (fixed) D3 level for the population, as immune system activation varies. 
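The percentage bands used above are simply the standard shares of a normal distribution (that the responder distribution is normal is the text's assumption); a quick check with scipy:

```python
from scipy.stats import norm

# Shares of a standard normal "responder" distribution:
print(f"within 1 SD: {norm.cdf(1) - norm.cdf(-1):.1%}")   # ~68.3%
print(f"within 2 SD: {norm.cdf(2) - norm.cdf(-2):.1%}")   # ~95.4%
print(f"below -1 SD (low responders, wide definition):   {norm.cdf(-1):.1%}")  # ~15.9%
print(f"below -2 SD (low responders, narrow definition): {norm.cdf(-2):.1%}")  # ~2.3%
```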
However, to fulfill Scribonius Largus’ still-valid maxim “primum non nocere, secundum cavere, tertium sanare” from 50 A.D., it should be the duty of the medical profession to look closely into a medication or supplementation that might help (tertium sanare) as long as it has no known risks (primum non nocere) within the limits of the dosages needed for the blood level mentioned (secundum cavere). Unfortunately, this does not imply that, in the case of an acute SARS-CoV-2 infection, newly started supplementation with 25(OH)D3 will be a helpful remedy when calcidiol deficiency is evident, especially if this deficiency has been long-lasting and has caused or exacerbated typical comorbidities that can now aggravate the outcome of the infection. This was not a question we aimed to answer in this study. 5. Limitations: This study does not question the vital role that vaccination will play in coping with the COVID-19 pandemic. Nor does it claim that, in the case of an acute SARS-CoV-2 infection, a high boost of 25(OH)D3 is or could be a helpful remedy when vitamin D deficiency is evident, as this is another question. Furthermore, empirical data on COVID-19 mortality for vitamin D3 blood levels above 35 ng/mL are sparse. 6. Conclusions: Although there is a vast number of publications supporting a correlation between the severity and death rate of SARS-CoV-2 infections and the blood level of vitamin D3, there is still an open debate about whether this relation is causal. This is because in most studies, the vitamin D level was determined several days after the onset of infection; therefore, a low vitamin D level may be the result and not the trigger of the course of infection. In this publication, we used a meta-analysis of two independent sets of data. One analysis is based on the long-term average vitamin D3 levels documented for 19 countries. The second analysis is based on 1601 hospitalized patients, 784 of whom had their vitamin D levels measured within a day after admission and 817 of whom had vitamin D levels known preinfection. Both datasets show a strong correlation between the death rate caused by SARS-CoV-2 and the vitamin D blood level. Above a threshold level of 30 ng/mL, mortality decreases considerably. In addition, our analysis shows that the regression for the combined datasets intersects the D3 axis at approximately 50 ng/mL, which suggests that this vitamin D3 blood level may prevent any excess mortality. These findings are supported not only by a large infection study showing the same optimum but also by the natural levels observed in traditional peoples living in the region where humanity originated, who were able to fight off most (not all) infections in most (not all) individuals. Vaccination is and will be an important keystone in our fight against SARS-CoV-2. However, current data clearly show that vaccination alone cannot prevent all SARS-CoV-2 infections and the dissemination of the virus. This scenario may well become much worse in the case of new virus mutations that are not very susceptible to the current vaccines or even not sensitive to any vaccine. Therefore, based on our data, the authors strongly recommend combining vaccination with routine strengthening of the immune system of the whole population by vitamin D3 supplementation to consistently guarantee blood levels above 50 ng/mL (125 nmol/L). From a medical point of view, this will not only save many lives but also increase the success of vaccination. 
From a social and political point of view, it will lower the need for further contact restrictions and lockdowns. From an economic point of view, it will save billions of dollars worldwide, as vitamin D3 is inexpensive and, together with vaccines, provides a good opportunity to bring the spread of SARS-CoV-2 under control. Although very broad data-based support exists for the protective effect of vitamin D against severe SARS-CoV-2 infections, we strongly recommend initiating well-designed observational studies as mentioned above and/or double-blind randomized controlled trials (RCTs) to convince the medical community and the health authorities that vitamin D testing and supplementation are needed to avoid fatal breakthrough infections and to be prepared for new dangerous mutations.
Background: Much research shows that blood calcidiol (25(OH)D3) levels correlate strongly with SARS-CoV-2 infection severity. There is open discussion regarding whether low D3 is caused by the infection or whether deficiency negatively affects immune defense. The aim of this study was to collect further evidence on this topic. Methods: A systematic literature search was performed to identify retrospective cohort studies as well as clinical studies on COVID-19 mortality rates versus D3 blood levels. Mortality rates from clinical studies were corrected for age, sex, and diabetes. Data were analyzed using correlation and linear regression. Results: One population study and seven clinical studies were identified, which reported D3 blood levels preinfection or on the day of hospital admission. The two independent datasets showed a negative Pearson correlation of D3 levels and mortality risk (r(17) = -0.4154, p = 0.0770/r(13) = -0.4886, p = 0.0646). For the combined data, the median (IQR) D3 level was 23.2 ng/mL (17.4-26.8), and a significant Pearson correlation was observed (r(32) = -0.3989, p = 0.0194). Regression suggested a theoretical point of zero mortality at approximately 50 ng/mL D3. Conclusions: The datasets provide strong evidence that low D3 is a predictor rather than just a side effect of the infection. Despite ongoing vaccinations, we recommend raising serum 25(OH)D levels to above 50 ng/mL to prevent or mitigate new outbreaks due to escape mutations or decreasing antibody activity.
1. Introduction: The SARS-CoV-2 pandemic, causing acute respiratory distress syndrome (ARDS), has lasted for more than 18 months. It has created a major global health crisis due to the high number of patients requiring intensive care, and the high death rate has substantially affected everyday life through contact restrictions and lockdowns. According to many scientists and medical professionals, we are far from the end of this disaster and hence must learn to coexist with the virus for several more years, perhaps decades [1,2]. It is realistic to assume that there will be new mutations, possibly more infectious or more deadly ones. In the known history of virus infections, we have never faced a similar global spread. Due to the great number of viral genome replications that occur in infected individuals and the error-prone nature of RNA-dependent RNA polymerase, mutations do and will continue to accrue progressively [3,4,5]. Thus, similar to other virus infections such as influenza, we have to expect that the effectiveness of vaccination is limited in time, especially with the current vaccines designed to trigger an immunological response against a single viral protein [6,7,8]. We have already learned that even fully vaccinated people can be infected [9]. Currently, most of these infections do not result in hospitalization, especially for young individuals without comorbidities. However, these infections are the basis for the ongoing dissemination of the virus in a situation where worldwide herd immunity against SARS-CoV-2 is rather unlikely. Instead, humanity could be trapped in an insuperable race between new mutations and new vaccines, with an increasing risk of newly arising mutations becoming resistant to the current vaccines [3,10,11]. Thus, a return to normal life in the near future seems unlikely. Mask requirements as well as limitations of public life will likely accompany us for a long time if we are not able to establish additional methods that reduce virus dissemination. Vaccination is an important part of the fight against SARS-CoV-2 but, with respect to the situation described above, should not be the only focus. One strong pillar of protection against any type of virus infection is the strength of our immune system [12]. Unfortunately, thus far, this unquestioned basic principle of nature has been more or less neglected by the responsible authorities. It is well known that our modern lifestyle is far from optimal with respect to nutrition, physical fitness, and recreation. In particular, many people do not spend enough time outside in the sun, even in summer. The consequence is widespread vitamin D deficiency, which limits the performance of their immune systems, resulting in the increased spread of some preventable diseases of civilization, reduced protection against infections, and reduced effectiveness of vaccination [13]. In this publication, we will demonstrate that vitamin D3 deficiency, which is a well-documented worldwide problem [13,14,15,16,17,18,19,20], is one of the main reasons for severe courses of SARS-CoV-2 infections. The fatality rates correlate well with the findings that elderly people, black people, and people with comorbidities show very low vitamin D3 levels [16,21,22,23]. Additionally, with only a few exceptions, we are facing the highest infection rates in the winter months and in northern countries, which are known to suffer from low vitamin D3 levels due to low endogenous sun-triggered vitamin D3 synthesis [24,25,26,27]. 
7,779
283
[ 78, 385, 80 ]
8
[ "vitamin", "d3", "vitamin d3", "mortality", "studies", "levels", "data", "level", "ml", "ng ml" ]
[ "immunity sars cov", "future mutations sars", "vaccines increasing risk", "respiratory viruses", "virus mutations susceptible" ]
[CONTENT] mortality | vitamin D | calcidiol | calcitriol | D3 | COVID-19 | inflammation | SARS-CoV-2 | ARDS | immune status | immunodeficiency | renin | angiotensin | ACE2 | virus infection | cytokine release syndrome | CRS [SUMMARY]
[CONTENT] COVID-19 | Calcifediol | Cholecalciferol | Humans | Nutritional Status | Risk Assessment | SARS-CoV-2 [SUMMARY]
[CONTENT] immunity sars cov | future mutations sars | vaccines increasing risk | respiratory viruses | virus mutations susceptible [SUMMARY]
[CONTENT] vitamin | d3 | vitamin d3 | mortality | studies | levels | data | level | ml | ng ml [SUMMARY]
[CONTENT] vitamin | vitamin d3 | d3 | sars cov | sars | cov | cells | infections | infection | immune [SUMMARY]
[CONTENT] mortality | studies | mortality rates | rates | values | cohorts | vitamin | com | https | accessed [SUMMARY]
[CONTENT] data | studies | ml | ng | ng ml | excluded | 23 ng ml | 23 ng | table | ahmad [SUMMARY]
[CONTENT] vitamin | sars cov | sars | cov | infections | view | point view | level | analysis | vaccination [SUMMARY]
[CONTENT] vitamin | d3 | studies | mortality | vitamin d3 | data | ml | levels | ng ml | ng [SUMMARY]
Does dropout from school matter in taking antenatal care visits among women in Bangladesh? An application of a marginalized Poisson-Poisson mixture model.
35698030
There is a lack of research explaining the link between dropout from school and antenatal care (ANC) visits of women during pregnancy in Bangladesh. The aim of this study is to investigate how dropout from school influences the number of ANC visits, after controlling for relevant covariates, using an appropriate count regression model.
BACKGROUND
The association between the explanatory variables and the outcome of interest, the number of ANC visits, was assessed using one-way analysis of variance and independent-sample t-tests. To examine the adjusted effects of covariates on the marginal mean of the count outcome, a marginalized Poisson-Poisson mixture regression model was fitted.
METHODS
The estimated incidence rate of antenatal care visits was 10.6% lower for mothers who did not continue their education after marriage but had at least 10 years of schooling (p-value <0.01), and 20.2% lower for mothers who dropped out of school (p-value <0.01), than for mothers who continued their education after marriage.
RESULTS
To ensure the WHO-recommended 8+ ANC visits for pregnant women in Bangladesh, it is essential to promote maternal education so that every woman completes at least ten years of schooling and dropout from school after marriage is prevented.
CONCLUSIONS
[ "Bangladesh", "Educational Status", "Female", "Humans", "Patient Acceptance of Health Care", "Pregnancy", "Pregnant Women", "Prenatal Care", "Schools" ]
9190147
Introduction
Antenatal care (ANC) is an important element of the continuum of care that a mother receives before and during pregnancy, at childbirth, and in the postnatal period. The aim of ANC is to detect pregnancy complications, take immediate steps to resolve them, and prepare the mother for a safe and healthy birth. It has both direct and indirect influences on the survival and health of the mother as well as the newborn [1, 2]. The World Health Organization (WHO) recently recommended at least eight ANC visits for a woman during pregnancy [2]. The percentage of pregnant women receiving at least four ANC visits during pregnancy in Bangladesh has been increasing over the past two decades, from 6% in 1993–94 to 47% in 2017–18 [3]. Despite this substantial progress in ANC services, only about 11.5% of sampled women received the WHO-recommended 8+ ANC visits during November 2014–March 2018 [3]. To achieve the health-related Sustainable Development Goals, it is therefore essential to make quality ANC services easily accessible to pregnant mothers, with proper monitoring to ensure the services are implemented throughout the country [4].

Education is one of the important determinants of good health, as health disparities can be traced to educational disparities [5, 6]. A mother's education is a crucial factor in preventing and treating poor health outcomes and in the effective use of health care services [7]. To ensure best health practice, public policies should focus on securing the best attainable education for women across all socioeconomic classes [8], as there is extreme gender differentiation in economic roles, lower parental investment in daughters than in sons, and significant restrictions on girls' public mobility [9]. Women with more education are more likely to receive ANC services; educated women understand the benefits of frequent ANC visits because they are more knowledgeable about reproductive health care [10, 11].

School dropout should be considered a public health problem, because education is a strong predictor of long-term health and dropouts have poorer health outcomes [12, 13]. Although school dropout and child marriage are interrelated outcomes with an enormous adverse impact on adolescent girls' health and wellbeing, as well as on that of their children, many parents in Bangladesh regard marriage and leaving school as the better option for their daughter's prosperity [14]. The complex relationship between girls' high-school dropout and risky adolescent behavior suggests that dropout due to marriage is associated with a high risk of subsequent teen pregnancy [15–17], and teen pregnancy can lead to medical complications for mother and infant [18].

There is a lack of research explaining the link between whether a woman continued her education after marriage and her ANC visits. Although ANC uptake in Bangladesh has been increasing over the decades, access to ANC services must improve substantially to achieve the WHO-recommended positive pregnancy outcomes and the health-related Sustainable Development Goals. The aim of the current study is therefore to investigate the determinants of the frequency of ANC visits of women during their most recent pregnancy in Bangladesh by applying an appropriate count regression model, with special emphasis on how schoolgirl dropout influences the response of interest.
In practice, sample count data may arise from two or more populations, and analyzing such data with standard or zero-augmented count models can lead to misleading conclusions [19, 20]. To overcome this difficulty, it may be necessary to model the data as a mixture of populations [21], so it is important to check whether count data come from a mixture. Moreover, the importance of inference about the marginal mean under a mixture-model setup is well documented [22–25]. Although finite mixtures of standard count models can describe over-dispersed count data, inferences about overall exposure effects on the marginal mean cannot be drawn directly from these models [22, 24]. The regression parameters for the marginal means can sometimes be recovered indirectly from the latent-class parameters, but such parameters do not properly elucidate the link between the covariates and population-wide quantities [26]. For this purpose, Benecha et al. [26] proposed marginally specified mean models for mixtures of two count distributions that facilitate maximum likelihood estimation, in which the mixing proportion is included in the model along with the parameters of the marginal mean: the marginalized Poisson-Poisson (MPois-Pois) model and the marginalized negative binomial-Poisson (MNB-Pois) model. In this study, the MPois-Pois model is used to draw inferences about overall exposure effects on the marginal mean of ANC visits.
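A quick simulation makes the over-dispersion point concrete. The sketch below (illustrative parameter values only, not estimates from this study) draws counts from a two-component Poisson mixture and checks them against the marginal mean and variance formulas given later in the Model section; the variance exceeds the mean whenever the two component means differ.

```python
import numpy as np

rng = np.random.default_rng(2017)

# Hypothetical two-component Poisson mixture: with probability pi a count
# is drawn from Poisson(mu1), otherwise from Poisson(mu2).
pi, mu1, mu2 = 0.55, 2.0, 6.0
n = 10_000
component = rng.random(n) < pi
y = np.where(component, rng.poisson(mu1, n), rng.poisson(mu2, n))

# Marginal moments of the mixture:
#   E(Y)   = pi*mu1 + (1-pi)*mu2
#   Var(Y) = E(Y) + pi*(1-pi)*(mu1-mu2)**2   (> E(Y): over-dispersion)
mean_theory = pi * mu1 + (1 - pi) * mu2
var_theory = mean_theory + pi * (1 - pi) * (mu1 - mu2) ** 2
print("sample mean:", y.mean(), " theory:", mean_theory)
print("sample var :", y.var(), " theory:", var_theory)
```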
null
null
Results
To analyse the count variable of interest, the number of ANC visits during pregnancy, we considered 4,941 women who gave the index birth in the three years preceding the survey, after adjusting for missing values. The mean, standard deviation, minimum, and maximum of the number of ANC visits were 3.93, 2.88, 0, and 20, respectively. The frequency of ANC visits in Bangladesh was found to arise from a mixture of two unobserved populations with proportions 0.55 and 0.45 in the absence of covariates. The mean number of ANC visits for each category of the explanatory variables, with its standard error (SE) and 95% confidence interval (CI), and the percent distribution of respondents across the categories of the selected covariates, are presented in Table 1.

Table 1. Mean, SE, and 95% CI for the number of ANC visits during pregnancy of women in Bangladesh, by selected socioeconomic and demographic characteristics, BDHS 2017

Variables | n (%) | Mean | SE | 95% CI
Study continuity status ***
  Continued after marriage | 552 (11.2) | 5.57 | 0.128 | 5.32-5.82
  Not continued but had ≥10 years schooling | 609 (12.3) | 4.89 | 0.115 | 4.67-5.12
  Dropped out | 3780 (76.5) | 3.54 | 0.045 | 3.45-3.62
Place of residence ***
  Urban | 1696 (34.3) | 4.71 | 0.076 | 4.56-4.86
  Rural | 3245 (65.7) | 3.52 | 0.046 | 3.43-3.62
Exposed to media ***
  No | 2265 (45.8) | 3.07 | 0.052 | 2.97-3.17
  Yes | 2676 (54.2) | 4.66 | 0.058 | 4.55-4.77
Mother's age at birth (years) ***
  <20 | 1377 (27.9) | 3.79 | 0.074 | 3.64-3.93
  20-29 | 2816 (57.0) | 4.05 | 0.055 | 3.94-4.16
  ≥30 | 748 (15.1) | 3.74 | 0.108 | 3.53-3.95
Difference between husband and wife age (years) ***
  Non-positive | 55 (1.1) | 5.22 | 0.514 | 4.21-6.23
  1-5 | 1602 (32.4) | 3.88 | 0.071 | 3.74-4.02
  6-10 | 2098 (42.5) | 3.92 | 0.065 | 3.79-4.04
  >10 | 1186 (24.0) | 3.97 | 0.079 | 3.81-4.12
Number of decisions woman participated
  None | 747 (15.1) | 3.76 | 0.103 | 3.55-3.96
  1-2 | 1492 (30.2) | 3.92 | 0.071 | 3.78-4.06
  All 3 | 2702 (54.7) | 3.99 | 0.057 | 3.87-4.10
Number of reasons wife beating justified ***
  Not at all | 4055 (82.1) | 4.07 | 0.046 | 3.98-4.17
  1-2 | 708 (14.3) | 3.33 | 0.095 | 3.14-3.52
  3-5 | 178 (3.6) | 3.03 | 0.188 | 2.66-3.40
Wealth index ***
  Poor | 1823 (36.9) | 3.00 | 0.060 | 2.88-3.11
  Middle | 1588 (32.1) | 3.92 | 0.069 | 3.79-4.06
  Rich | 1530 (31.0) | 5.05 | 0.077 | 4.90-5.20
Birth order ***
  1 | 1870 (37.9) | 4.37 | 0.066 | 4.24-4.50
  2 | 1623 (32.8) | 4.03 | 0.073 | 3.89-4.18
  3 | 848 (17.2) | 3.68 | 0.097 | 3.49-3.87
  ≥4 | 600 (12.1) | 2.64 | 0.100 | 2.45-2.84
***p-value <0.01; **p-value <0.05; *p-value <0.10

From Table 1, three out of every four mothers (76.5%) had dropped out of school, 12.3% had completed at least 10 years of schooling but stopped education after marriage, and 11.2% had continued their education after marriage and reached at least 10 years of schooling. About two-thirds of the mothers (65.7%) were sampled from rural areas and the rest from urban areas. About 54.2% of the mothers were exposed to at least one of the three media at least once a week. Most mothers (57.0%) gave the index birth at age 20–29 years, 15.1% at age 30 years or above, and a little more than one-fourth (27.9%) below age 20. For 24.0% of couples the age gap (husband's age minus wife's age) exceeded 10 years, for 42.5% it was 6–10 years, and for 32.4% it was 1–5 years, while for 1.1% of couples the husband's age did not exceed the wife's. Most women (54.7%) participated in all three major decisions regarding themselves, 30.2% participated in 1–2, and 15.1% in none.

Most women (82.1%) justified none of the five listed reasons for wife beating, 14.3% justified 1–2 reasons, and only 3.6% justified 3–5 reasons. Women were distributed almost equally over the wealth-index categories. For most mothers (37.9%) the index birth was a first birth, for 32.8% a second birth, for 17.2% a third birth, and for the rest the birth order was 4 or more. The exposure variable 'study continuity status' had a significant association (p-value <0.01) with ANC-taking behavior: the average number of ANC visits was largest (5.57) for mothers who continued their education after marriage and smallest (3.54) for those who dropped out of school. The average number of ANC visits was significantly higher for urban mothers than for rural mothers (4.71 vs. 3.52, p-value <0.01), and higher for mothers exposed to at least one of the three media at least once a week than for unexposed mothers (4.66 vs. 3.07, p-value <0.01). The mean number of visits was highest (4.05) for mothers aged 20–29 years at the index birth, compared with 3.79 below age 20 and 3.74 at age 30 or above (p-value <0.01). The average was significantly higher (5.22, p-value <0.01) for mothers whose husband was not older than them, and similar across the other categories of husband-wife age difference. The averages did not differ significantly across categories of 'number of decisions woman participated'. The average number of visits was inversely related to the number of reasons for which a woman justified beating by her husband (p-value <0.01) and to birth order (p-value <0.01), and positively related to the wealth-index categories (p-value <0.01).

Since one of the mixture proportions was 0.55 in the absence of covariates, we fitted the MPois-Pois model to the number of ANC visits taken by women during pregnancy in Bangladesh. As the main interest is in the marginal parameters, the estimates $\hat{\boldsymbol{\beta}}$ along with the IDRs are presented in Table 2. From Table 2, the estimated incidence rate of ANC visits was 10.6% lower for mothers who did not continue their education after marriage but had at least 10 years of schooling (p-value <0.01), and 20.2% lower for mothers who dropped out (p-value <0.01), than for mothers who continued their education after marriage. The IDR of ANC visits was 1.12 for urban versus rural mothers (p-value <0.01) and 1.24 for mothers exposed to media at least once a week versus unexposed mothers (p-value <0.01). The rate of ANC visits was 7.7% lower for mothers who gave the index birth below age 20 (p-value <0.01) and 5.3% higher for mothers who gave birth at age 30 or above (p-value <0.10) than for those who gave birth at age 20–29. The IDR was 1.26 for women not younger than their husbands relative to women 1–5 years younger than their husbands (p-value <0.05); the IDRs for the other categories of husband-wife age difference were statistically insignificant.
The rates of ANC visits did not differ significantly across the categories of 'number of decisions woman participated'. The IDR of ANC visits was 0.88 for mothers who justified beating by their partners for 1–2 reasons relative to mothers who justified none of the five reasons (p-value <0.01), and about the same (0.88) for mothers who justified 3–5 reasons (p-value <0.05). The rate of ANC visits was 13.8% higher for mothers from middle-wealth households and 24.1% higher for mothers from rich households than for those from poor households (both p-value <0.01). As the birth order of the index child increased, mothers were significantly less likely to take ANC during pregnancy: the rates of ANC visits were 7.2%, 10.9%, and 30.9% lower for second, third, and fourth-or-higher-order births, respectively, compared with the first birth. After adjusting for covariates, the marginal mean parameters were estimated from a mixture of two latent subpopulations with proportions 0.61 and 0.39.

Table 2. Estimated marginal parameters ($\hat{\beta}$), standard errors (SE), z-values, p-values, 95% CIs for the regression parameters, and IDRs under the MPois-Pois mixture model for the number of ANC visits during a pregnancy, BDHS 2017

Variable | $\hat{\beta}$ | SE | z-value | p-value | 95% CI | IDR
Intercept | 1.359 | 0.042 | 32.708 | <0.001 | - | 3.893
Study continuity status: Continued after marriage (ref)
  Not continued but had ≥10 years schooling | -0.112 | 0.030 | -3.755 | <0.001 | -0.171 to -0.053 | 0.894
  Dropped out | -0.225 | 0.026 | -8.679 | <0.001 | -0.276 to -0.174 | 0.798
Place of residence: Rural (ref)
  Urban | 0.113 | 0.021 | 5.457 | <0.001 | 0.072 to 0.154 | 1.120
Exposed to media: No (ref)
  Yes | 0.218 | 0.022 | 10.072 | <0.001 | 0.175 to 0.261 | 1.243
Mother's age at birth (years): 20-29 (ref)
  <20 | -0.080 | 0.025 | -3.226 | 0.001 | -0.129 to -0.031 | 0.923
  ≥30 | 0.052 | 0.031 | 1.691 | 0.091 | -0.009 to 0.113 | 1.053
Difference between husband and wife age (years): 1-5 (ref)
  Non-positive | 0.228 | 0.098 | 2.328 | 0.020 | 0.036 to 0.420 | 1.256
  6-10 | -0.002 | 0.021 | -0.110 | 0.913 | -0.043 to 0.039 | 0.998
  >10 | 0.011 | 0.025 | 0.432 | 0.666 | -0.038 to 0.060 | 1.011
Number of decisions woman participated: None (ref)
  1-2 | 0.002 | 0.029 | 0.079 | 0.937 | -0.055 to 0.059 | 1.002
  All 3 | 0.021 | 0.027 | 0.781 | 0.435 | -0.032 to 0.074 | 1.021
Number of reasons wife beating justified: Not at all (ref)
  1-2 | -0.126 | 0.028 | -4.535 | <0.001 | -0.181 to -0.071 | 0.881
  3-5 | -0.128 | 0.057 | -2.221 | 0.026 | -0.240 to -0.016 | 0.880
Wealth index: Poor (ref)
  Middle | 0.129 | 0.025 | 5.082 | <0.001 | 0.080 to 0.178 | 1.138
  Rich | 0.216 | 0.029 | 7.338 | <0.001 | 0.159 to 0.273 | 1.241
Birth order: 1 (ref)
  2 | -0.075 | 0.024 | -3.140 | 0.002 | -0.122 to -0.028 | 0.928
  3 | -0.116 | 0.032 | -3.601 | <0.001 | -0.179 to -0.053 | 0.891
  ≥4 | -0.369 | 0.043 | -8.492 | <0.001 | -0.453 to -0.285 | 0.691
Mixing proportion $\hat{\pi}$ | 0.610 | 0.049 | 12.472 | <0.001 | - | -

To assess goodness of fit, Poisson and negative binomial regression models with the same covariates were also fitted alongside the MPois-Pois model. The AIC values of the Poisson, negative binomial, and MPois-Pois models were 23468, 22543, and 22470, respectively, indicating that the MPois-Pois model is the best choice for this dataset.
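As a quick arithmetic check on Table 2, each IDR is simply the exponential of its coefficient; for the dropout category, for example,

$$ \mathrm{IDR} = e^{\hat{\beta}} = e^{-0.225} \approx 0.798, \qquad 1 - 0.798 = 0.202, $$

which corresponds to the 20.2% lower visit rate reported above (and likewise $e^{-0.112} \approx 0.894$ gives the 10.6% reduction for the 'not continued but ≥10 years schooling' category).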
Discussion and conclusion
This study has the following limitations: (1) cross-sectional data were used to study the association between schoolgirl dropout and the frequency of ANC visits during pregnancy while controlling for socio-demographic variables, so causal relationships cannot be assessed; (2) although the data were collected using a stratified cluster sampling technique, cluster-level variation was not incorporated into the analysis. Further research could be carried out taking these limitations into account.
[ "Data and methods", "Data", "Variables", "Statistical analyses", "Model", "Discussion and conclusion" ]
[ "Data In this study, a nationwide data extracted from Bangladesh Demographic and Health Survey (BDHS) 2017–2018 have been utilized to analyze the number of ANC visits taken by a woman during her pregnancy period. The survey was based on a two-stage stratified sample of households. In the first stage of sampling, 250 enumeration areas (EAs) from urban and 425 from rural areas were selected. In the second stage, an average of 30 households per EA were selected. Based on the design, A total of 20127 ever married women of reproductive age were interviewed to collect data on fertility, family planning along with socioeconomic and demographic characteristics. BDHS data also provides information on several aspect of maternal and newborn health, including antenatal care (ANC). The information regarding number of ANC that a woman receives during pregnancy was collected from 5012 women who gave their most recent birth in the 3 years preceding the survey. Finally, a sample of 4941 women was included in the analysis as they provide complete information about all the variables considered in the study.\nIn this study, a nationwide data extracted from Bangladesh Demographic and Health Survey (BDHS) 2017–2018 have been utilized to analyze the number of ANC visits taken by a woman during her pregnancy period. The survey was based on a two-stage stratified sample of households. In the first stage of sampling, 250 enumeration areas (EAs) from urban and 425 from rural areas were selected. In the second stage, an average of 30 households per EA were selected. Based on the design, A total of 20127 ever married women of reproductive age were interviewed to collect data on fertility, family planning along with socioeconomic and demographic characteristics. BDHS data also provides information on several aspect of maternal and newborn health, including antenatal care (ANC). The information regarding number of ANC that a woman receives during pregnancy was collected from 5012 women who gave their most recent birth in the 3 years preceding the survey. Finally, a sample of 4941 women was included in the analysis as they provide complete information about all the variables considered in the study.\nVariables The outcome variable is ‘the number of ANC visits’ that a woman received during pregnancy for her most recent birth. The exposure variable ‘study continuity status’ is categorized into three categories. The categories of this variable were created based on ‘the number of years of schooling of a woman’ and ‘whether she studying or attending school just before getting married’ or ‘whether she continued studying after marriage’. These categories are \ncontinued studying after marriage: continued studying after marriage and total number of schooling ≥10 years;not continued but having ≥10 years schooling: ‘stop attending school before getting married or stopped continuing study after marriage’ but having ≥10 years of schooling;drop-outed: having <10 years of schooling.\ncontinued studying after marriage: continued studying after marriage and total number of schooling ≥10 years;\nnot continued but having ≥10 years schooling: ‘stop attending school before getting married or stopped continuing study after marriage’ but having ≥10 years of schooling;\ndrop-outed: having <10 years of schooling.\nTherefore, the women who could not reach at 10 academic years of schooling is considered as ‘schoolgirl dropout’ in this study. 
Note that ‘dropout’ category consists of three mutually exclusive classes: ‘stop attending school before getting married’, ‘stopped continuing study after marriage’, ‘continued studying after marriage’. The effects of some other covariates were controlled in this study based on some available literatures [10, 27, 28]. These variables are: place of residence (urban, rural); exposed to any of the three media newspaper/magazine, radio, and television at least once a week (yes, no); mother’s age at birth in years (<20, 20-29, ≥30); difference between husband and wife age in years (non-positive, 1-5, 6-10, >10); number of decisions woman participated out of three major decisions regarding her own health care, large household purchases, and visits to family/relatives (none, 1-2, all 3); number of reasons a woman justified beating by her husband out of five regarding if she goes out without telling husband, neglects the children, argues with husband, refuses to have sex with husband, and burns the food (not at all, 1-2, 3-5); wealth index (poor, middle, rich) was created by using the ranked wealth index factor scores and then dividing the ranking into three equal categories, each comprising 33.33 percent of the sampled households; and order of birth of the index pregnancy (1, 2, 3, ≥4).\nThe outcome variable is ‘the number of ANC visits’ that a woman received during pregnancy for her most recent birth. The exposure variable ‘study continuity status’ is categorized into three categories. The categories of this variable were created based on ‘the number of years of schooling of a woman’ and ‘whether she studying or attending school just before getting married’ or ‘whether she continued studying after marriage’. These categories are \ncontinued studying after marriage: continued studying after marriage and total number of schooling ≥10 years;not continued but having ≥10 years schooling: ‘stop attending school before getting married or stopped continuing study after marriage’ but having ≥10 years of schooling;drop-outed: having <10 years of schooling.\ncontinued studying after marriage: continued studying after marriage and total number of schooling ≥10 years;\nnot continued but having ≥10 years schooling: ‘stop attending school before getting married or stopped continuing study after marriage’ but having ≥10 years of schooling;\ndrop-outed: having <10 years of schooling.\nTherefore, the women who could not reach at 10 academic years of schooling is considered as ‘schoolgirl dropout’ in this study. Note that ‘dropout’ category consists of three mutually exclusive classes: ‘stop attending school before getting married’, ‘stopped continuing study after marriage’, ‘continued studying after marriage’. The effects of some other covariates were controlled in this study based on some available literatures [10, 27, 28]. 
These variables are: place of residence (urban, rural); exposed to any of the three media newspaper/magazine, radio, and television at least once a week (yes, no); mother’s age at birth in years (<20, 20-29, ≥30); difference between husband and wife age in years (non-positive, 1-5, 6-10, >10); number of decisions woman participated out of three major decisions regarding her own health care, large household purchases, and visits to family/relatives (none, 1-2, all 3); number of reasons a woman justified beating by her husband out of five regarding if she goes out without telling husband, neglects the children, argues with husband, refuses to have sex with husband, and burns the food (not at all, 1-2, 3-5); wealth index (poor, middle, rich) was created by using the ranked wealth index factor scores and then dividing the ranking into three equal categories, each comprising 33.33 percent of the sampled households; and order of birth of the index pregnancy (1, 2, 3, ≥4).\nStatistical analyses In this study, all the explanatory variables considered are categorical and the outcome variable of interest is the number of ANC visits during a pregnancy, which is discrete in nature. Therefore, we have performed descriptive statistics by computing percent distribution for the explanatory variables and by mean and standard deviation for the outcome variable. As all the explanatory variables are categorical, in order to draw inference about the association between the explanatory variables and the count outcome of interest we have performed one-way analysis of variance (ANOVA)/independent sample t-test. To justify whether data arise from a single or two different populations, we have computed the mixing proportion of the count variable using the marginalized Poisson-Poisson (MPois-Pois) mixture model [26] in the absence of covariates. Finally, to examine the adjusted effects of covariates on the marginal mean of count data, MPois-Pois mixture model has been used along with computing incidence density ratio (IDR).\nIn this study, all the explanatory variables considered are categorical and the outcome variable of interest is the number of ANC visits during a pregnancy, which is discrete in nature. Therefore, we have performed descriptive statistics by computing percent distribution for the explanatory variables and by mean and standard deviation for the outcome variable. As all the explanatory variables are categorical, in order to draw inference about the association between the explanatory variables and the count outcome of interest we have performed one-way analysis of variance (ANOVA)/independent sample t-test. To justify whether data arise from a single or two different populations, we have computed the mixing proportion of the count variable using the marginalized Poisson-Poisson (MPois-Pois) mixture model [26] in the absence of covariates. Finally, to examine the adjusted effects of covariates on the marginal mean of count data, MPois-Pois mixture model has been used along with computing incidence density ratio (IDR).\nModel The source population from which the count data have been collected is assumed to be partitioned into two latent subpopulations having Poisson distributions with mean μ1 and μ2, respectively. Let Y1,...,Yn be a random sample of size n. 
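For the bivariate step just described, a minimal sketch is given below. It uses SciPy's standard ANOVA and t-test routines on hypothetical visit-count arrays (the arrays `continued`, `not_cont`, and `dropped_out` are illustrative placeholders, not BDHS data).

```python
import numpy as np
from scipy import stats

# Hypothetical ANC visit counts grouped by 'study continuity status'.
continued = np.array([6, 8, 5, 7, 4, 6])
not_cont = np.array([5, 4, 6, 5, 3, 5])
dropped_out = np.array([3, 2, 4, 3, 5, 2])

# One-way ANOVA across the three categories of the exposure variable
f_stat, p_anova = stats.f_oneway(continued, not_cont, dropped_out)

# Independent-sample t-test, as would be used for a two-level covariate
t_stat, p_ttest = stats.ttest_ind(continued, dropped_out)

print(f"ANOVA:  F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"t-test: t = {t_stat:.2f}, p = {p_ttest:.4f}")
```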
"Model: The source population from which the count data were collected is assumed to be partitioned into two latent subpopulations having Poisson distributions with means μ1 and μ2, respectively. Let Y1,...,Yn be a random sample of size n. Following [21], the Poisson-Poisson mixture probability distribution of Yi (i=1,...,n) can be written as

$$ f(y_i \mid \pi, \boldsymbol{\mu}_i') = \pi \frac{e^{-\mu_{1i}} \mu_{1i}^{y_i}}{y_i!} + (1-\pi) \frac{e^{-\mu_{2i}} \mu_{2i}^{y_i}}{y_i!}, \quad (1) $$

where $\boldsymbol{\mu}_i = (\mu_{1i}, \mu_{2i})'$ and π is the mixing proportion. Hence the marginal mean and variance of Yi are, respectively,

$$ E(Y_i) = \mu_i = \pi \mu_{1i} + (1-\pi) \mu_{2i}, \qquad \mathrm{Var}(Y_i) = \sigma_i^2 = \mu_i + \pi (1-\pi) (\mu_{1i} - \mu_{2i})^2. $$

The MPois-Pois mixture model is obtained by replacing $\mu_{2i}$ with $(1-\pi)^{-1}(\mu_i - \pi \mu_{1i})$ in (1) [26]. Covariates enter the marginalized model through the specifications

$$ \log(\mu_i) = \boldsymbol{x}_i'\boldsymbol{\beta}, \qquad \log(\mu_{1i}) = \boldsymbol{z}_i'\boldsymbol{\alpha}, \qquad \mathrm{logit}(\pi) = \tau, $$

where $\boldsymbol{x}_i$ and $\boldsymbol{\beta}$ are the $p_1 \times 1$ vectors of covariates and regression parameters associated with the marginal mean $\mu_i$, $\boldsymbol{z}_i$ and $\boldsymbol{\alpha}$ are the $p_2 \times 1$ vectors of covariates and regression parameters associated with the subpopulation mean $\mu_{1i}$, and $-\infty < \tau < \infty$ is a constant. The same covariates may be used for both parts ($\boldsymbol{x}_i = \boldsymbol{z}_i$, $p_1 = p_2 = p$). For the jth covariate in such a model, the IDR, i.e., the ratio of marginal means for a one-unit increase in $x_{ij}$, is

$$ \frac{E(Y_i \mid x_{ij}=k+1, \tilde{\boldsymbol{x}}_i)}{E(Y_i \mid x_{ij}=k, \tilde{\boldsymbol{x}}_i)} = \exp(\beta_j), $$

where $\tilde{\boldsymbol{x}}_i$ denotes all covariates except $x_{ij}$. The primary interest is in estimating the regression parameters $\boldsymbol{\beta}$ of the marginal mean $\mu_i$, while the nuisance parameters $\boldsymbol{\alpha}$ and $\tau$ are estimated to facilitate the maximum likelihood estimation of $\boldsymbol{\beta}$. The likelihood function of the MPois-Pois model is

$$ L(\tau, \boldsymbol{\alpha}, \boldsymbol{\beta} \mid \boldsymbol{y}) = \prod_{i=1}^{n} \frac{1}{(1+e^{\tau}) y_i!} \left\{ e^{\tau} \exp\!\left(-e^{\boldsymbol{z}_i'\boldsymbol{\alpha}}\right) e^{\boldsymbol{z}_i'\boldsymbol{\alpha} y_i} + e^{-\eta(\tau,\boldsymbol{\beta},\boldsymbol{\alpha};\, \boldsymbol{x}_i, \boldsymbol{z}_i)}\, \eta(\tau,\boldsymbol{\beta},\boldsymbol{\alpha};\, \boldsymbol{x}_i, \boldsymbol{z}_i)^{y_i} \right\}, $$

where $\eta(\tau,\boldsymbol{\beta},\boldsymbol{\alpha};\, \boldsymbol{x}_i, \boldsymbol{z}_i) = e^{\boldsymbol{x}_i'\boldsymbol{\beta}}(1+e^{\tau}) - e^{\tau} e^{\boldsymbol{z}_i'\boldsymbol{\alpha}}$. With carefully chosen starting values, the MPois-Pois parameters can be estimated by a quasi-Newton optimization method, implemented, for example, with the SAS 'nlmixed' procedure or the R 'optim' function. Starting values for τ and α can be obtained by an EM algorithm for the Poisson-Poisson mixture model, and initial values of β by fitting a marginalized zero-inflated Poisson (MZIP) model [26].",
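The likelihood above translates directly into code. Below is a minimal, illustrative sketch (not the authors' SAS/R implementation) of the MPois-Pois negative log-likelihood with a shared design matrix for $\boldsymbol{x}_i$ and $\boldsymbol{z}_i$, optimized by a quasi-Newton method via SciPy; the names `X`, `y`, `alpha0`, and `beta0` are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, logsumexp

def mpois_pois_nll(params, X, y):
    """Negative log-likelihood of the marginalized Poisson-Poisson model.

    params = [tau, alpha_1..alpha_p, beta_1..beta_p], with
    logit(pi) = tau, log(mu1_i) = x_i'alpha, log(mu_i) = x_i'beta, and
    mu2_i recovered as eta_i = exp(x_i'beta)(1 + e^tau) - e^tau exp(x_i'alpha).
    """
    p = X.shape[1]
    tau, alpha, beta = params[0], params[1:1 + p], params[1 + p:]
    mu1 = np.exp(X @ alpha)
    eta = np.exp(X @ beta) * (1.0 + np.exp(tau)) - np.exp(tau) * mu1
    if np.any(eta <= 0):                      # mu2_i must remain positive
        return np.inf
    log_pi = -np.log1p(np.exp(-tau))          # log(pi)
    log_1mpi = -np.log1p(np.exp(tau))         # log(1 - pi)
    # Log of each mixture component (the shared y! term enters via gammaln)
    ll1 = log_pi - mu1 + y * np.log(mu1)
    ll2 = log_1mpi - eta + y * np.log(eta)
    loglik = logsumexp(np.stack([ll1, ll2]), axis=0) - gammaln(y + 1)
    return -loglik.sum()

# Hypothetical usage: X is an n x p design matrix with a leading column of
# ones, y the observed ANC visit counts; BFGS plays the role of the
# quasi-Newton optimizer mentioned in the text, with starting values taken,
# e.g., from an EM fit of the plain mixture (tau, alpha) and an MZIP fit (beta).
# start = np.concatenate(([0.4], alpha0, beta0))
# fit = minimize(mpois_pois_nll, start, args=(X, y), method="BFGS")
# idr = np.exp(fit.x[1 + X.shape[1]:])   # exp(beta_hat), as in Table 2
```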
The exposure variable ‘study continuity status’ is categorized into three categories. The categories of this variable were created based on ‘the number of years of schooling of a woman’ and ‘whether she studying or attending school just before getting married’ or ‘whether she continued studying after marriage’. These categories are \ncontinued studying after marriage: continued studying after marriage and total number of schooling ≥10 years;not continued but having ≥10 years schooling: ‘stop attending school before getting married or stopped continuing study after marriage’ but having ≥10 years of schooling;drop-outed: having <10 years of schooling.\ncontinued studying after marriage: continued studying after marriage and total number of schooling ≥10 years;\nnot continued but having ≥10 years schooling: ‘stop attending school before getting married or stopped continuing study after marriage’ but having ≥10 years of schooling;\ndrop-outed: having <10 years of schooling.\nTherefore, the women who could not reach at 10 academic years of schooling is considered as ‘schoolgirl dropout’ in this study. Note that ‘dropout’ category consists of three mutually exclusive classes: ‘stop attending school before getting married’, ‘stopped continuing study after marriage’, ‘continued studying after marriage’. The effects of some other covariates were controlled in this study based on some available literatures [10, 27, 28]. These variables are: place of residence (urban, rural); exposed to any of the three media newspaper/magazine, radio, and television at least once a week (yes, no); mother’s age at birth in years (<20, 20-29, ≥30); difference between husband and wife age in years (non-positive, 1-5, 6-10, >10); number of decisions woman participated out of three major decisions regarding her own health care, large household purchases, and visits to family/relatives (none, 1-2, all 3); number of reasons a woman justified beating by her husband out of five regarding if she goes out without telling husband, neglects the children, argues with husband, refuses to have sex with husband, and burns the food (not at all, 1-2, 3-5); wealth index (poor, middle, rich) was created by using the ranked wealth index factor scores and then dividing the ranking into three equal categories, each comprising 33.33 percent of the sampled households; and order of birth of the index pregnancy (1, 2, 3, ≥4).", "In this study, all the explanatory variables considered are categorical and the outcome variable of interest is the number of ANC visits during a pregnancy, which is discrete in nature. Therefore, we have performed descriptive statistics by computing percent distribution for the explanatory variables and by mean and standard deviation for the outcome variable. As all the explanatory variables are categorical, in order to draw inference about the association between the explanatory variables and the count outcome of interest we have performed one-way analysis of variance (ANOVA)/independent sample t-test. To justify whether data arise from a single or two different populations, we have computed the mixing proportion of the count variable using the marginalized Poisson-Poisson (MPois-Pois) mixture model [26] in the absence of covariates. 
Finally, to examine the adjusted effects of covariates on the marginal mean of count data, MPois-Pois mixture model has been used along with computing incidence density ratio (IDR).", "The source population from which the count data have been collected is assumed to be partitioned into two latent subpopulations having Poisson distributions with mean μ1 and μ2, respectively. Let Y1,...,Yn be a random sample of size n. Following [21], the Poisson-Poisson mixture probability distribution of Yi(i=1,...,n) can be written as \n1\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document} $$ f(Y_{i}=y_{i}|\\pi, \\boldsymbol{\\mu}_{i}^{\\prime})=\\pi\\frac{e^{-\\mu_{1i}}\\mu_{1i}^{y_{i}}}{y_{i}!}+(1-\\pi)\\frac{e^{-\\mu_{2i}}\\mu_{2i}^{y_{i}}}{y_{i}!}, $$ \\end{document}f(Yi=yi|π,μi′)=πe−μ1iμ1iyiyi!+(1−π)e−μ2iμ2iyiyi!,\nwhere μi=(μi1,μi2)′ and π is the mixing proportion. Hence, the marginal mean and variance of Yi can be given respectively as \n\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document} $$\\begin{aligned} E(Y_{i})&=\\mu_{i}=\\pi\\mu_{1i}+(1-\\pi)\\mu_{2i}\\quad\\text{and} \\\\ \\text{Var}(Y_{i})&=\\sigma^{2}_{i}=\\mu_{i}+\\pi\\left(1-\\pi\\right)\\left(\\mu_{1i}-\\mu_{2i}\\right)^{2}. \\end{aligned} $$ \\end{document}E(Yi)=μi=πμ1i+(1−π)μ2iandVar(Yi)=σi2=μi+π1−πμ1i−μ2i2. One may obtain the MPois-Pois mixture model by replacing μ2i with (1−π)−1(μi−πμ1i) in (1) [26]. 
To introduce the covariates into the marginalized model, the following specifications have been considered \n\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document} $$\\log(\\mu_{i})=\\boldsymbol{x}_{i}^{\\prime}\\boldsymbol{\\beta},\\quad \\log(\\mu_{1i})=\\boldsymbol{z}_{i}^{\\prime}\\boldsymbol{\\alpha},\\quad \\text{logit}(\\pi)=\\tau, $$ \\end{document}log(μi)=xi′β,log(μ1i)=zi′α,logit(π)=τ, where \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$ \\boldsymbol {\\mathcalligra {x}}_{\\boldsymbol {\\mathcalligra {i}}}$\\end{document}xi and β respectively are p1×1 vector of covariates and regression parameters associated with the marginal mean μi and \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$ \\boldsymbol {\\mathcalligra {z}}_{\\boldsymbol {\\mathcalligra {i}}}$\\end{document}zi and α respectively are p2×1 vector of covariates and regression parameters associated with the subpopulation mean μ1i and −∞<τ<∞ is a constant. It can be permissible to use same covariates for both classes (i.e., \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$\\boldsymbol {\\mathcalligra {x}}_{\\boldsymbol {\\mathcalligra {i}}}=\\boldsymbol {\\mathcalligra {z}}_{\\boldsymbol {\\mathcalligra {i}}}$\\end{document}xi=zi) with p1=p2=p.\nFor the jth covariate in a MPois-Pois model where \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$\\boldsymbol {\\mathcalligra {x}}_{\\boldsymbol {\\mathcalligra {i}}}=\\boldsymbol {\\mathcalligra {z}}_{\\boldsymbol {\\mathcalligra {i}}}$\\end{document}xi=zi, IDR, the ratio of means for a one-unit increase in xij, is obtained as follows \n\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document} $$\\frac{E(Y_{i}|x_{ij}=k+1,\\tilde{\\boldsymbol{\\mathcalligra{x}}}_{\\boldsymbol{\\mathcalligra{i}}})}{E(Y_{i}|x_{ij}=k,\\tilde{\\boldsymbol{\\mathcalligra{x}}}_{\\boldsymbol{\\mathcalligra{i}}})}=\\exp(\\beta_{j}); $$ \\end{document}E(Yi|xij=k+1,x~i)E(Yi|xij=k,x~i)=exp(βj); where 
where $\tilde{\boldsymbol{x}}_{i}$ denotes all covariates except $x_{ij}$. The primary interest is to estimate the regression parameters ($\boldsymbol{\beta}$) of the marginal mean ($\mu_{i}$), whereas the nuisance parameters $\boldsymbol{\alpha}$ and $\tau$ must also be estimated to facilitate the maximum likelihood estimation of $\boldsymbol{\beta}$. The likelihood function of the MPois-Pois model is

$$L(\tau,\boldsymbol{\alpha},\boldsymbol{\beta}\mid\boldsymbol{y})=\prod_{i=1}^{n}\frac{1}{(1+e^{\tau})\,y_{i}!}\left\{e^{\tau}\exp\!\left(-e^{\boldsymbol{z}_{i}^{\prime}\boldsymbol{\alpha}}\right)e^{\boldsymbol{z}_{i}^{\prime}\boldsymbol{\alpha}y_{i}}+e^{-\eta(\tau,\boldsymbol{\beta},\boldsymbol{\alpha};\boldsymbol{x}_{i},\boldsymbol{z}_{i})}\,\eta(\tau,\boldsymbol{\beta},\boldsymbol{\alpha};\boldsymbol{x}_{i},\boldsymbol{z}_{i})^{y_{i}}\right\},$$

where $\eta(\tau,\boldsymbol{\beta},\boldsymbol{\alpha};\boldsymbol{x}_{i},\boldsymbol{z}_{i})=e^{\boldsymbol{x}_{i}^{\prime}\boldsymbol{\beta}}(1+e^{\tau})-e^{\tau}e^{\boldsymbol{z}_{i}^{\prime}\boldsymbol{\alpha}}$. With carefully chosen starting values, the parameters of the MPois-Pois model can be estimated by a quasi-Newton optimization method; this can be implemented with the SAS 'nlmixed' procedure or the R 'optim' function applied to the likelihood above. Starting values for $\tau$ and $\boldsymbol{\alpha}$ can be obtained by the EM algorithm applied to the Poisson-Poisson mixture model, and initial values for $\boldsymbol{\beta}$ can be obtained by fitting a marginalized zero-inflated Poisson (MZIP) model [26].
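Since $e^{\tau}/(1+e^{\tau})=\pi$ and $1/(1+e^{\tau})=1-\pi$, each likelihood term above equals $\pi\,\mathrm{dpois}(y_{i},\mu_{1i})+(1-\pi)\,\mathrm{dpois}(y_{i},\eta_{i})$, with $\eta_{i}$ playing the role of the implied second-component mean. The following is a minimal R sketch of this estimation, not the authors' implementation; `y`, `X` and `Z` are assumed to be the response vector and model matrices with $\boldsymbol{x}_{i}=\boldsymbol{z}_{i}$.

```r
## Minimal sketch (not the authors' code) of MPois-Pois estimation with optim().
mpois_pois_nll <- function(par, y, X, Z) {
  p2    <- ncol(Z)
  tau   <- par[1]
  alpha <- par[2:(1 + p2)]
  beta  <- par[-(1:(1 + p2))]
  pi1   <- plogis(tau)                     # mixing proportion, logit(pi) = tau
  mu1   <- as.vector(exp(Z %*% alpha))     # subpopulation mean
  mu    <- as.vector(exp(X %*% beta))      # marginal mean
  eta   <- mu * (1 + exp(tau)) - exp(tau) * mu1   # implied second-component mean
  if (any(eta <= 0)) return(1e10)          # keep optim inside the valid region
  -sum(log(pi1 * dpois(y, mu1) + (1 - pi1) * dpois(y, eta)))
}

## Hypothetical usage, with starting values chosen as described in the text:
## fit <- optim(c(tau0, alpha0, beta0), mpois_pois_nll, y = y, X = X, Z = X,
##              method = "BFGS", hessian = TRUE)
## sqrt(diag(solve(fit$hessian)))          # Wald standard errors
```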
Discussion and conclusion

To draw valid inferences from count data, it is necessary to check whether the data come from a single population or from a mixture of populations. If the target population consists of a mixture of populations, estimates of the latent-class regression parameters can be obtained by fitting a mixture model. However, it is often difficult to obtain estimates of the regression parameters for the marginal means from the mixture-model setup, and interpretation of the population-wide parameters in terms of the IDR is not possible from such a model. The marginalized mixture model can be used under these circumstances to draw inferences about the whole population. The objective of this study was to explore the effect of schoolgirl dropout due to marriage on the number of ANC visits in Bangladesh, i.e., inference regarding population-wide parameters. From the results, we observed that the population count data can be regarded as a mixture of two latent count distributions with proportions 0.55 and 0.45 in the absence of covariates. Therefore, a marginalized mixture model, the MPois-Pois model, was used to meet the study objective.

Although substantial progress has been documented in ANC services for pregnant women, Bangladesh is still far from ensuring the WHO-recommended 8+ ANC visits. Receiving health care during pregnancy is associated with organization, accessibility, standards, value and health beliefs, as well as some socio-demographic factors [27]. The maternal health care programme in Bangladesh faces many challenges, including limited accessibility, lack of equity, a shortage of public health facilities, a scarcity of skilled workers and inadequate financial resource allocation [29]. Comprehensive research is needed to identify the obstacles to receiving ANC services. This study aimed to identify socio-demographic factors, with special emphasis on schoolgirl dropout due to marriage, that are associated with the frequency of ANC visits in Bangladesh.

Previous studies have investigated which socioeconomic and demographic factors are responsible for the frequency of ANC visits women make during pregnancy in Bangladesh [10, 27, 28, 30]. Although they included education as an independent variable and found it to be an important determinant of the frequency of ANC visits, we considered 'study continuity status after marriage' as the exposure variable in this study. The results revealed that schoolgirl dropout has a strong negative influence on the rate at which pregnant women in Bangladesh receive ANC services. Consistent with the findings of the aforementioned studies, this study found that women who live in urban areas, possess a higher wealth index and are exposed to media have higher rates of ANC visits.

Hossain et al. [28] found that the frequency of ANC visits was lower for women aged 20 years or below compared with women aged 20–35 years at the time of delivery, and Kabir and Islam [30] found that the odds of receiving ≥4 ANC services were low if the mother's age at conception was ≥35 years. In contrast, we found that the rate of receiving ANC services was lower for women aged below 20 years but higher for women aged 30 years or more than for women aged 20–29 years at the time of delivery. We also found that higher birth order was associated with a lower rate of ANC visits, in line with previous studies [28, 30].

Bhowmik et al. [10] found that women who were more involved in decision-making about their own health care had significantly more ANC visits, and Kabir and Islam [30] found increased odds of taking ≥4 ANC visits when women and their husbands jointly made decisions about the woman's health care, whereas Hossain et al. [28] found no association between women's empowerment and the frequency of ANC services.
In this study we considered three aspects of women's empowerment as independent variables: the number of decisions the woman participated in, out of three major decisions concerning herself; the number of reasons, out of five, for which she justified beating by her husband; and the age difference between husband and wife (in years). Although the number of decisions the woman participated in was not associated with the frequency of ANC visits, justifying more reasons for wife beating was associated with a lower rate of receiving ANC services. The rate of receiving ANC services was also higher for women who were not younger than their husbands.

Since an adequate number of ANC visits during pregnancy helps maintain proper maternal health and protects to a great extent against adverse pregnancy outcomes, it is essential to ensure that every pregnant mother receives the WHO-recommended minimum of eight ANC visits. This can be accomplished by encouraging women to make a sufficient number of ANC visits during pregnancy. In Bangladesh, it might be achieved by promoting maternal education, so that a woman completes at least ten years of schooling and dropout from school after marriage is prevented. Beyond schoolgirl dropout, this study helps policy makers identify other socio-demographic factors, such as place of residence, media exposure, mother's age at birth, women's empowerment, wealth index and birth order, that might support women in receiving more ANC visits during pregnancy.
Introduction

Antenatal care (ANC) is an important element of the continuing care that a mother receives before and during pregnancy, at the time of childbirth, and in the postnatal period. The aim of ANC is to detect any pregnancy complications, take immediate steps to resolve them, and prepare the mother for a safe and healthy birth. It has both direct and indirect influences on the survival and health of the mother as well as the newborn [1, 2].

The World Health Organization (WHO) recently recommended at least eight ANC visits for a woman during pregnancy [2]. The percentage of pregnant women receiving at least four ANC visits during pregnancy in Bangladesh has been increasing over the past two decades, from 6% in 1993–94 to 47% in 2017–18 [3]. Despite this substantial progress in ANC services, only about 11.5% of sampled women received the WHO-recommended 8+ ANC visits during November 2014–March 2018 in Bangladesh [3]. Therefore, to achieve the health-related Sustainable Development Goals, it is essential to make access to quality ANC services easy for pregnant mothers, and a proper monitoring system is required to observe whether this service is implemented throughout the country [4].

Education is one of the important determinants of good health, as health disparities can be driven by educational disparities [5, 6]. Mother's education is considered a crucial factor in preventing and treating poor health outcomes and in the effective use of health care services [7]. To ensure the best health practice, public policies should focus on getting the best attainable education for women across all socioeconomic classes [8], as there is extreme gender differentiation in economic roles, lower parental investment in daughters than in sons, and significant restriction on girls' public mobility [9]. Women with more education are more likely to receive ANC services; educated women understand the benefits of frequent ANC visits because they are more knowledgeable about reproductive health care [10, 11]. School dropout should be considered a public health problem, because education is a strong predictor of long-term health and dropouts have poorer health outcomes [12, 13]. Although school dropout and child marriage are interrelated outcomes with an enormous adverse impact on adolescent girls' health and wellbeing, as well as on their progeny, many parents in Bangladesh consider marriage, and leaving school, the better option for their daughter's prosperity [14]. The complex relationship between girls' high-school dropout and risky adolescent behavior suggests that schoolgirl dropout due to marriage is associated with a high risk of subsequent teen pregnancy [15–17], and teen pregnancy can lead to medical complications for the mother and infant [18].

There is a lack of research explaining the link between whether a woman continued her education after marriage and ANC visits. Although ANC services in Bangladesh have shown an increasing trend over the decades, access to ANC services must be improved substantially to achieve the WHO-recommended positive pregnancy outcome and the health-related Sustainable Development Goals.
Therefore, the aim of the current study is to investigate the determinants of the frequency of ANC visits women made during their most recent pregnancy in Bangladesh by applying an appropriate count regression model, with special emphasis on how schoolgirl dropout influences the response of interest.

In practice, sample count data may arise from two or more populations, and analysing such data with standard or zero-augmented count models may lead to misleading conclusions [19, 20]. To overcome this difficulty, it may be necessary to analyse the data taking the mixture of populations into account [21]; it is therefore necessary to check whether count data come from a mixture of populations. Moreover, the importance of inference regarding the marginal mean under a mixture-model setup is well documented [22–25]. Although finite mixtures of standard count models have the advantage of describing over-dispersed count data, inferences regarding the overall exposure effects on the marginal mean cannot be made from these models [22, 24].

The regression parameters for the marginal means can sometimes be estimated indirectly from the latent-class parameters, but such parameters cannot properly elucidate the link between the covariates and the population-wide parameters [26]. For this purpose, Benecha et al. [26] proposed marginally specified mean models for mixtures of two count distributions, in which the mixing proportion is included in the model along with the parameters for the marginal mean, to facilitate maximum likelihood estimation: the marginalized Poisson-Poisson (MPois-Pois) model and the marginalized negative binomial-Poisson (MNB-Pois) model. In this study, an attempt has been made to draw inferences about the overall exposure effects on the marginal mean of ANC visits using the MPois-Pois model.

Data and methods

Data

In this study, nationwide data extracted from the Bangladesh Demographic and Health Survey (BDHS) 2017–18 were used to analyse the number of ANC visits made by a woman during pregnancy. The survey was based on a two-stage stratified sample of households. In the first stage, 250 enumeration areas (EAs) were selected from urban areas and 425 from rural areas; in the second stage, an average of 30 households per EA was selected. Under this design, a total of 20,127 ever-married women of reproductive age were interviewed to collect data on fertility and family planning, along with socioeconomic and demographic characteristics. The BDHS also provides information on several aspects of maternal and newborn health, including antenatal care (ANC). The number of ANC visits received during pregnancy was collected from the 5,012 women who gave their most recent birth in the three years preceding the survey. Finally, a sample of 4,941 women who provided complete information on all the variables considered was included in the analysis.

Variables

The outcome variable is the number of ANC visits a woman received during the pregnancy for her most recent birth. The exposure variable, 'study continuity status', has three categories, created from 'the number of years of schooling of a woman', 'whether she was studying or attending school just before getting married' and 'whether she continued studying after marriage' (a sketch of the recoding follows this list):

continued studying after marriage: continued studying after marriage and total schooling ≥10 years;
not continued but having ≥10 years schooling: stopped attending school before getting married, or stopped studying after marriage, but completed ≥10 years of schooling;
drop-outed: having <10 years of schooling.
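As an illustration of this categorization, the following R sketch recodes the exposure variable; the column names (`years_school`, `continued_after_marriage`) are hypothetical stand-ins, not actual BDHS variable names.

```r
## Illustrative recode of 'study continuity status' (hypothetical column names).
## Any woman with fewer than 10 years of schooling is a dropout, regardless of
## whether she continued studying after marriage.
bdhs$study_status <- with(bdhs, ifelse(
  years_school < 10, "drop-outed",
  ifelse(continued_after_marriage == "yes",
         "continued after marriage",
         "not continued but >=10 years schooling")))
bdhs$study_status <- factor(bdhs$study_status,
                            levels = c("continued after marriage",
                                       "not continued but >=10 years schooling",
                                       "drop-outed"))
```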
Therefore, women who did not reach 10 academic years of schooling are considered 'schoolgirl dropouts' in this study; note that this 'drop-outed' category spans three mutually exclusive classes ('stopped attending school before getting married', 'stopped continuing study after marriage' and 'continued studying after marriage'). The effects of some other covariates were controlled based on the available literature [10, 27, 28]. These variables are: place of residence (urban, rural); exposure to any of three media (newspaper/magazine, radio, television) at least once a week (yes, no); mother's age at birth in years (<20, 20–29, ≥30); difference between husband's and wife's age in years (non-positive, 1–5, 6–10, >10); number of decisions the woman participated in, out of three major decisions regarding her own health care, large household purchases, and visits to family/relatives (none, 1–2, all 3); number of reasons, out of five, for which the woman justified beating by her husband, namely going out without telling him, neglecting the children, arguing with him, refusing to have sex with him, and burning the food (not at all, 1–2, 3–5); wealth index (poor, middle, rich), created by ranking the wealth-index factor scores and dividing the ranking into three equal categories, each comprising 33.33 percent of the sampled households; and birth order of the index pregnancy (1, 2, 3, ≥4).

Statistical analyses

All the explanatory variables considered are categorical, and the outcome variable of interest, the number of ANC visits during a pregnancy, is discrete. Descriptive statistics were therefore computed as percent distributions for the explanatory variables and as the mean and standard deviation for the outcome variable. To draw inference about the association between the explanatory variables and the count outcome, one-way analysis of variance (ANOVA) or independent-sample t-tests were performed, as sketched below. To judge whether the data arise from a single population or from two different populations, the mixing proportion of the count variable was computed using the marginalized Poisson-Poisson (MPois-Pois) mixture model [26] in the absence of covariates.
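A minimal sketch of these bivariate checks in R, again with hypothetical variable names:

```r
## Percent distributions, group comparisons (hypothetical variable names).
prop.table(table(bdhs$study_status)) * 100            # percent distribution
summary(aov(anc_visits ~ study_status, data = bdhs))  # one-way ANOVA (3+ groups)
t.test(anc_visits ~ residence, data = bdhs)           # independent-sample t-test (2 groups)
```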
Finally, to examine the adjusted effects of the covariates on the marginal mean of the count data, the MPois-Pois mixture model was used, together with the incidence density ratio (IDR).

Model

The source population from which the count data have been collected is assumed to be partitioned into two latent subpopulations having Poisson distributions with means $\mu_{1}$ and $\mu_{2}$, respectively. Let $Y_{1},\ldots,Y_{n}$ be a random sample of size $n$. Following [21], the Poisson-Poisson mixture probability distribution of $Y_{i}$ $(i=1,\ldots,n)$ can be written as

$$f(Y_{i}=y_{i}\mid\pi,\boldsymbol{\mu}_{i}^{\prime})=\pi\frac{e^{-\mu_{1i}}\mu_{1i}^{y_{i}}}{y_{i}!}+(1-\pi)\frac{e^{-\mu_{2i}}\mu_{2i}^{y_{i}}}{y_{i}!},\qquad(1)$$

where $\boldsymbol{\mu}_{i}=(\mu_{1i},\mu_{2i})^{\prime}$ and $\pi$ is the mixing proportion. Hence, the marginal mean and variance of $Y_{i}$ are, respectively,

$$E(Y_{i})=\mu_{i}=\pi\mu_{1i}+(1-\pi)\mu_{2i}\quad\text{and}\quad \operatorname{Var}(Y_{i})=\sigma^{2}_{i}=\mu_{i}+\pi(1-\pi)\left(\mu_{1i}-\mu_{2i}\right)^{2}.$$

The MPois-Pois mixture model is obtained by replacing $\mu_{2i}$ with $(1-\pi)^{-1}(\mu_{i}-\pi\mu_{1i})$ in (1) [26]; the covariate specification, IDR, and likelihood function of this marginalized model are as given earlier.
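An intercept-only two-component Poisson mixture underlies both the no-covariate mixing-proportion check and the EM-based starting values for $\tau$ and $\boldsymbol{\alpha}$ mentioned earlier. The following is a minimal EM sketch (our illustration, not the authors' code):

```r
## EM for a two-component Poisson mixture without covariates; returns the
## mixing proportion, component means, and tau = logit(pi), usable as
## starting values for the MPois-Pois fit.
em_pois_mix <- function(y, pi1 = 0.5, mu1 = mean(y) / 2, mu2 = mean(y) * 2,
                        tol = 1e-8, maxit = 1000) {
  for (it in seq_len(maxit)) {
    d1 <- pi1 * dpois(y, mu1)
    d2 <- (1 - pi1) * dpois(y, mu2)
    w  <- d1 / (d1 + d2)                     # E-step: P(component 1 | y_i)
    upd <- c(mean(w),                        # M-step: weighted updates
             sum(w * y) / sum(w),
             sum((1 - w) * y) / sum(1 - w))
    if (max(abs(upd - c(pi1, mu1, mu2))) < tol) break
    pi1 <- upd[1]; mu1 <- upd[2]; mu2 <- upd[3]
  }
  c(pi = pi1, mu1 = mu1, mu2 = mu2, tau = qlogis(pi1))
}
```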
Results

To analyse the count variable of interest, the number of ANC visits during pregnancy, we considered the 4,941 women who gave the index birth in the three years preceding the survey, after accounting for missing values. The mean, standard deviation, minimum and maximum of the number of ANC visits were 3.93, 2.88, 0 and 20, respectively. The frequency of ANC visits in Bangladesh was found to arise from a mixture of two unobserved populations with proportions 0.55 and 0.45 in the absence of covariates.
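Using the EM sketch from the Model section, this check can be reproduced along the following lines (hypothetical data frame `bdhs` with count column `anc_visits`):

```r
## Descriptive statistics and the no-covariate mixture check.
c(mean = mean(bdhs$anc_visits), sd = sd(bdhs$anc_visits),
  min = min(bdhs$anc_visits), max = max(bdhs$anc_visits))
em_pois_mix(bdhs$anc_visits)   # mixing proportions near 0.55 / 0.45 per the text
```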
The mean number of ANC visits in each category of the explanatory variables, together with its standard error and 95% confidence interval (CI), and the percent distribution of respondents across the categories of the selected covariates, are presented in Table 1.

Table 1. Mean, standard error (SE) and 95% confidence interval (CI) of the number of ANC visits during pregnancy among women in Bangladesh, by selected socioeconomic and demographic characteristics, BDHS 2017

| Variable | n (%) | Mean | SE | 95% CI |
|---|---|---|---|---|
| Study continuity status*** | | | | |
| Continued after marriage | 552 (11.2) | 5.57 | 0.128 | 5.32–5.82 |
| Not continued but had ≥10 years schooling | 609 (12.3) | 4.89 | 0.115 | 4.67–5.12 |
| Drop-outed | 3780 (76.5) | 3.54 | 0.045 | 3.45–3.62 |
| Place of residence*** | | | | |
| Urban | 1696 (34.3) | 4.71 | 0.076 | 4.56–4.86 |
| Rural | 3245 (65.7) | 3.52 | 0.046 | 3.43–3.62 |
| Exposed to media*** | | | | |
| No | 2265 (45.8) | 3.07 | 0.052 | 2.97–3.17 |
| Yes | 2676 (54.2) | 4.66 | 0.058 | 4.55–4.77 |
| Mother's age at birth (years)*** | | | | |
| <20 | 1377 (27.9) | 3.79 | 0.074 | 3.64–3.93 |
| 20–29 | 2816 (57.0) | 4.05 | 0.055 | 3.94–4.16 |
| ≥30 | 748 (15.1) | 3.74 | 0.108 | 3.53–3.95 |
| Difference between husband and wife age (years)*** | | | | |
| Non-positive | 55 (1.1) | 5.22 | 0.514 | 4.21–6.23 |
| 1–5 | 1602 (32.4) | 3.88 | 0.071 | 3.74–4.02 |
| 6–10 | 2098 (42.5) | 3.92 | 0.065 | 3.79–4.04 |
| >10 | 1186 (24.0) | 3.97 | 0.079 | 3.81–4.12 |
| Number of decisions woman participated | | | | |
| None | 747 (15.1) | 3.76 | 0.103 | 3.55–3.96 |
| 1–2 | 1492 (30.2) | 3.92 | 0.071 | 3.78–4.06 |
| All 3 | 2702 (54.7) | 3.99 | 0.057 | 3.87–4.10 |
| Number of reasons wife beating justified*** | | | | |
| Not at all | 4055 (82.1) | 4.07 | 0.046 | 3.98–4.17 |
| 1–2 | 708 (14.3) | 3.33 | 0.095 | 3.14–3.52 |
| 3–5 | 178 (3.6) | 3.03 | 0.188 | 2.66–3.40 |
| Wealth index*** | | | | |
| Poor | 1823 (36.9) | 3.00 | 0.060 | 2.88–3.11 |
| Middle | 1588 (32.1) | 3.92 | 0.069 | 3.79–4.06 |
| Rich | 1530 (31.0) | 5.05 | 0.077 | 4.90–5.20 |
| Birth order*** | | | | |
| 1 | 1870 (37.9) | 4.37 | 0.066 | 4.24–4.50 |
| 2 | 1623 (32.8) | 4.03 | 0.073 | 3.89–4.18 |
| 3 | 848 (17.2) | 3.68 | 0.097 | 3.49–3.87 |
| ≥4 | 600 (12.1) | 2.64 | 0.100 | 2.45–2.84 |

***p-value <0.01; **p-value <0.05; *p-value <0.10

From Table 1, about three out of four mothers (76.5%) had dropped out of school, 12.3% had completed at least 10 years of schooling but stopped their education after marriage, and 11.2% had continued their education after marriage and reached at least 10 years of schooling. About two-thirds of the mothers (65.7%) were sampled from rural areas and the rest from urban areas. About 54.2% of the mothers were exposed to at least one of the three media at least once a week. Most mothers (57.0%) gave the index birth at age 20–29 years, 15.1% at age 30 years or above, and a little more than one-fourth (27.9%) below age 20. For 24.0% of couples the age gap (husband's minus wife's age) exceeded 10 years, for 42.5% it was 6–10 years, and for 32.4% it was 1–5 years, whereas for 1.1% of couples the husband's age did not exceed the wife's. A majority of women (54.7%) participated in all three major decisions regarding themselves, 30.2% participated in 1–2, and 15.1% participated in none. Most women (82.1%) justified none of the five listed reasons for wife beating, 14.3% justified 1–2 reasons, and only 3.6% justified 3–5 reasons. Women were distributed almost equally across the wealth-index categories. The index birth was the first birth for 37.9% of the mothers, the second for 32.8%, the third for 17.2%, and of order 4 or higher for the rest. The exposure variable, study continuity status, showed a significant association (p-value <0.01) with ANC-seeking behaviour.
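The per-category summaries in Table 1 can be computed with a small helper; the following sketch assumes normal-approximation CIs (the paper does not state its CI method) and the hypothetical variable names used above.

```r
## Mean, SE and approximate 95% CI of ANC visits by group.
tab1_row <- function(x, g) {
  m  <- tapply(x, g, mean)
  se <- tapply(x, g, function(v) sd(v) / sqrt(length(v)))
  data.frame(n = as.vector(table(g)), mean = m, se = se,
             lcl = m - 1.96 * se, ucl = m + 1.96 * se)
}
tab1_row(bdhs$anc_visits, bdhs$study_status)
```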
The average number of ANC visits was largest (5.57) for mothers who had continued their education after marriage and smallest (3.54) for those who had dropped out of school. It was significantly higher (p-value <0.01) for urban mothers (4.71) than for rural mothers (3.52), and higher (4.66) for mothers exposed to at least one of the three media at least once a week than for those who were not (3.07; p-value <0.01). The mean number of ANC visits was highest (4.05) for mothers aged 20–29 years at the index birth, compared with 3.79 for those below 20 years and 3.74 for those aged 30 years or above (p-value <0.01). The average was significantly higher (5.22; p-value <0.01) for mothers whose partner's age did not exceed their own, and similar across the other categories of the husband-wife age difference. The averages did not differ significantly across categories of the number of decisions the woman participated in. The average number of visits was inversely related to the number of reasons for which a woman justified beating by her husband (p-value <0.01) and to birth order (p-value <0.01), and positively related to the wealth categories (p-value <0.01).

Since one of the mixture proportions in the absence of covariates was 0.55, we fitted the MPois-Pois model to the number of ANC visits taken by women during pregnancy in Bangladesh. As the main interest is in the marginal parameters, the estimates $\hat{\boldsymbol{\beta}}$ and the corresponding IDRs are presented in Table 2. The estimated incidence rate of ANC visits was 10.6% lower for mothers who did not continue their education after marriage but had at least 10 years of schooling (p-value <0.01), and 20.2% lower for those who dropped out (p-value <0.01), than for mothers who continued their education after marriage. The IDR of ANC visits was 1.12 for urban versus rural mothers (p-value <0.01) and 1.24 for mothers exposed to media at least once a week versus unexposed mothers (p-value <0.01). The rate of ANC visits was 7.7% lower for mothers who gave the index birth below 20 years of age (p-value <0.01) and 5.3% higher for mothers who gave the index birth at 30 years of age or later (p-value <0.10) than for those who gave birth at age 20–29 years. The IDR was 1.26 for women who were not younger than their husbands relative to women 1–5 years younger than their husbands (p-value <0.05); it was statistically insignificant for the other categories of the husband-wife age difference. The rates of ANC visits did not differ significantly across the categories of the number of decisions the woman participated in. The IDR was 0.88 for mothers who justified beating by their partners for 1–2 reasons relative to mothers who justified none of the five reasons (p-value <0.01), and about the same for mothers who justified 3–5 reasons (p-value <0.05).
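As a worked example of the IDR interpretation, the 'drop-outed' row of Table 2 below follows directly from the fitted coefficient and its standard error:

```r
## IDR and Wald 95% CI for the 'drop-outed' coefficient (Table 2).
beta_hat <- -0.225; se <- 0.026
exp(beta_hat)                         # 0.798: a 20.2% lower rate of ANC visits
exp(beta_hat + c(-1.96, 1.96) * se)   # 0.759 to 0.840
```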
The rate of ANC visits was 13.8% higher for mothers from middle-wealth households and 24.1% higher for mothers from rich households than for those from poor households; both results were statistically significant (p-value <0.01). As the birth order of the index child increased, mothers were significantly less likely to take ANC during pregnancy: the rate of ANC visits was 7.2%, 10.9%, and 30.9% lower for second, third, and fourth or higher order births, respectively, compared with first births. After adjusting for the covariates, the marginal mean parameters were estimated from a mixture of two latent subpopulations with proportions 0.61 and 0.39.

Table 2. Estimated marginal parameters (β̂), standard errors (SE), z-values, p-values, 95% CIs for the regression parameters, and IDRs under the MPois-Pois mixture model for the number of ANC visits during pregnancy, BDHS 2017

Variable                                      | β̂     | SE    | z-value | p-value | 95% CI           | IDR
Intercept                                     | 1.359  | 0.042 | 32.708  | <0.001  | -                | 3.893
Study continuity status
  Continued after marriage (ref)
  Not continued but had ≥10 years schooling   | -0.112 | 0.030 | -3.755  | <0.001  | -0.171 to -0.053 | 0.894
  Dropped out                                 | -0.225 | 0.026 | -8.679  | <0.001  | -0.276 to -0.174 | 0.798
Place of residence
  Rural (ref)
  Urban                                       | 0.113  | 0.021 | 5.457   | <0.001  | 0.072 to 0.154   | 1.120
Exposed to media
  No (ref)
  Yes                                         | 0.218  | 0.022 | 10.072  | <0.001  | 0.175 to 0.261   | 1.243
Mother's age at birth (years)
  <20                                         | -0.080 | 0.025 | -3.226  | 0.001   | -0.129 to -0.031 | 0.923
  20-29 (ref)
  ≥30                                         | 0.052  | 0.031 | 1.691   | 0.091   | -0.009 to 0.113  | 1.053
Difference between husband and wife age (years)
  Non-positive                                | 0.228  | 0.098 | 2.328   | 0.020   | 0.036 to 0.420   | 1.256
  1-5 (ref)
  6-10                                        | -0.002 | 0.021 | -0.110  | 0.913   | -0.043 to 0.039  | 0.998
  >10                                         | 0.011  | 0.025 | 0.432   | 0.666   | -0.038 to 0.060  | 1.011
Number of decisions woman participated
  None (ref)
  1-2                                         | 0.002  | 0.029 | 0.079   | 0.937   | -0.055 to 0.059  | 1.002
  All 3                                       | 0.021  | 0.027 | 0.781   | 0.435   | -0.032 to 0.074  | 1.021
Number of reasons wife beating justified
  Not at all (ref)
  1-2                                         | -0.126 | 0.028 | -4.535  | <0.001  | -0.181 to -0.071 | 0.881
  3-5                                         | -0.128 | 0.057 | -2.221  | 0.026   | -0.240 to -0.016 | 0.880
Wealth index
  Poor (ref)
  Middle                                      | 0.129  | 0.025 | 5.082   | <0.001  | 0.080 to 0.178   | 1.138
  Rich                                        | 0.216  | 0.029 | 7.338   | <0.001  | 0.159 to 0.273   | 1.241
Birth order
  1 (ref)
  2                                           | -0.075 | 0.024 | -3.140  | 0.002   | -0.122 to -0.028 | 0.928
  3                                           | -0.116 | 0.032 | -3.601  | <0.001  | -0.179 to -0.053 | 0.891
  ≥4                                          | -0.369 | 0.043 | -8.492  | <0.001  | -0.453 to -0.285 | 0.691
Mixing proportion (π̂)                        | 0.610  | 0.049 | 12.472  | <0.001  | -                | -

To assess the goodness of fit of the MPois-Pois model with the selected covariates, Poisson and negative binomial regression models were also fitted. The AIC values for the Poisson, negative binomial, and MPois-Pois models were 23468, 22543, and 22470, respectively, implying that the MPois-Pois model is the best choice for this data set.
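The reference-model side of this AIC comparison can be sketched in R as follows; `bdhs` and the covariate names are placeholders, and the MPois-Pois AIC would be computed from its maximised likelihood (see the `optim` sketch in the Methods):

```r
## Goodness-of-fit comparison sketch: Poisson vs. negative binomial GLMs.
## Formula terms are hypothetical stand-ins for the coded BDHS variables.
library(MASS)  # for glm.nb()

f <- anc ~ study_status + residence + media + age_group + age_gap +
           decisions + beating_reasons + wealth + birth_order

fit_pois <- glm(f, family = poisson, data = bdhs)
fit_nb   <- glm.nb(f, data = bdhs)
AIC(fit_pois, fit_nb)

## For an MPois-Pois fit obtained with optim() on its negative
## log-likelihood (value = minimised nll, par = parameter vector):
## aic_mpois <- 2 * fit_mpois$value + 2 * length(fit_mpois$par)
```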
To draw valid inferences from count data, it is necessary to check whether the data come from a single population or from a mixture of populations. If the target population consists of a mixture of populations, estimates of the latent class parameters of the regression model can be obtained by fitting a mixture model. However, it is sometimes difficult to estimate the regression parameters for the marginal means within the mixture model setup, and interpretation in terms of the IDR for the population-wide parameters is not possible from such a model. The marginalized mixture model can be utilized under these circumstances for drawing inferences about the whole population. The objective of this study is to explore the effect of schoolgirl dropout due to marriage on the number of ANC visits in Bangladesh (inference regarding the population-wide parameters). From the results, we observed that the population count data can be regarded as a mixture of two latent count distributions with proportions 0.55 and 0.45 in the absence of covariates. Therefore, a marginalized mixture model, the MPois-Pois model, was utilized to meet the study objective.

Although tremendous success has been documented for ANC services to pregnant women, Bangladesh is still far from ensuring the WHO-recommended 8+ ANC visits. Receiving health care services during pregnancy is associated with organization, accessibility, standards, values, and health beliefs, as well as some socio-demographic factors [27]. The maternal health care programme in Bangladesh has confronted many challenges, including limited accessibility, lack of equity, lack of public health facilities, scarcity of a skilled workforce, and inadequate financial resource allocation [29]. Identifying the obstacles to receiving ANC services requires comprehensive research studies. This study aimed to identify socio-demographic factors, with special emphasis on schoolgirl dropout due to marriage, that are associated with the frequency of ANC visits in Bangladesh.

Previous studies investigated which socioeconomic and demographic factors are responsible for the frequency of ANC visits women took during their pregnancy in Bangladesh [10, 27, 28, 30]. Although these studies included education as an independent variable and found it to be an important determinant of the frequency of ANC visits, we considered 'study continuity status after marriage' as the exposure variable in this study. The results revealed that schoolgirl dropout has a tremendous negative influence on the rate at which pregnant women in Bangladesh receive ANC services.
Consistent with the findings of the aforementioned studies, this study found that women living in urban areas, possessing a higher wealth index, and exposed to media have higher rates of receiving ANC visits.

Hossain et al. [28] found that the frequency of ANC visits was lower for women aged 20 years or below compared with women aged 20–35 years at the time of delivery. Besides, Kabir and Islam [30] found low odds of receiving ≥4 ANC services when the mother's age at conception was ≥35 years. However, we found that the rate of receiving ANC services was lower for women aged below 20 years but higher for women aged 30 years or more, compared with women aged 20–29 years at the time of delivery. In this study, we also found that higher birth order was associated with a lower rate of ANC visits. These findings are similar to previous studies [28, 30].

Bhowmik et al. [10] found that greater involvement of women in decision making regarding their own health care can significantly improve the frequency of ANC visits. Kabir and Islam [30], likewise, found increased odds of taking ≥4 ANC visits when women and their husbands jointly took decisions regarding the woman's own health care, whereas Hossain et al. [28] did not find any association between women's empowerment and the frequency of receiving ANC services. In this study we considered three aspects of women's empowerment as independent variables: the number of decisions a woman participated in, out of three major decisions regarding herself; the number of reasons for which a woman justified beating by her husband, out of five reasons for wife beating; and the difference between husband and wife age (in years). Although the number of decisions a woman participated in was not associated with the frequency of ANC visits, justifying more reasons for beating was associated with a lower rate of receiving ANC services. Also, the rate of receiving ANC services was higher for women who were not younger than their husbands.

Since an adequate level of ANC visits during pregnancy contributes to maintaining proper maternal health and to preventing adverse pregnancy outcomes to a great extent, it is essential to ensure that every pregnant mother receives the WHO-recommended minimum of eight ANC visits. This can be accomplished by encouraging women to take a sufficient number of ANC visits during pregnancy. In Bangladesh, this might be achieved by promoting maternal education so that at least ten years of schooling are completed by a woman and dropout from school after marriage is prevented. Besides schoolgirl dropout, this study helps policy makers identify other socio-demographic factors, such as place of residence, media exposure, mother's age at birth, women's empowerment, wealth index, and birth order, that might encourage women to receive more ANC visits during pregnancy.
[ "introduction", null, null, null, null, null, "results", null ]
[ "Antenatal care", "BDHS", "Count model", "Dropout", "Marginalized", "Mixture" ]
Introduction: Antenatal care (ANC) is an important element of the continuum of care that a mother receives before and during pregnancy, at the time of childbirth, and in the postnatal period. The aim of ANC is to detect any pregnancy complications, take immediate steps to resolve them, and prepare the mother for a safe and healthy birth. It has both direct and indirect influences on the survival and health of the mother as well as the newborn [1, 2]. The World Health Organization (WHO) recently recommended at least eight ANC visits for a woman during pregnancy [2]. The percentage of pregnant women receiving at least four ANC visits during pregnancy in Bangladesh has been increasing over the past two decades, from 6% in 1993–94 to 47% in 2017–18 [3]. Despite this substantial progress in ANC services, only about 11.5% of sampled women received the WHO-recommended 8+ ANC visits during November 2014–March 2018 in Bangladesh [3]. Therefore, to achieve the health-related Sustainable Development Goals, it is essential to make quality ANC services easily accessible to pregnant mothers, and a proper monitoring system is required to observe whether these services are implemented throughout the country [4].

Education is one of the important determinants of good health, as health disparities can be traced to educational disparities [5, 6]. A mother's education is considered a crucial factor in preventing and treating poor health outcomes and in the effective use of health care services [7]. To ensure the best health practices, public policies should focus on securing the best attainable education for women of all socioeconomic classes [8], as there exist extreme gender differentiation in economic roles, lower parental investments in daughters than in sons, and significant restrictions on girls' public mobility [9]. Women with more education are more likely to receive ANC services: educated women understand the benefits of taking frequent ANC services, as they are more knowledgeable about reproductive health care [10, 11]. School dropout should be considered a public health problem, because education is a strong predictor of long-term health and dropouts have poorer health outcomes [12, 13]. Although school dropout and child marriage are interrelated outcomes with an enormous adverse impact on adolescent girls' health and wellbeing as well as on their progeny, most parents in Bangladesh think that marriage, and stopping school, is the better option for their daughter's prosperity [14]. The complex relationship between girls' high school dropout and risky adolescent behavior suggests that schoolgirl dropout due to marriage is associated with a high risk of subsequent teen pregnancy [15–17], and teen pregnancy can lead to medical complications for the mother and infant [18].

There is a lack of research explaining the link between whether or not a woman continued her education after marriage and ANC visits. Although ANC coverage in Bangladesh has shown an increasing trend over the decades, access to ANC services must be improved drastically to achieve the WHO-recommended positive pregnancy outcomes and the health-related Sustainable Development Goals. Therefore, the aim of the current study is to investigate the determinants of the frequency of ANC visits of women during their most recent pregnancy in Bangladesh by applying an appropriate count regression model, with special emphasis on how schoolgirl dropout influences the response of interest.
In practice, sample count data may arise from two or more populations, and analysing such data with standard or zero-augmented count models may lead to misleading conclusions [19, 20]. To overcome this difficulty, it may be necessary to analyze the data taking the mixture of populations into account [21]; it is therefore necessary to check whether count data come from a mixture of populations. Moreover, the significance of inference regarding the marginal mean under a mixture model setup is well documented in many studies [22–25]. Although finite mixtures of the standard count models have the advantage of describing over-dispersed count data, inferences regarding the overall exposure effects on the marginal mean cannot be made from these models [22, 24]. The regression parameters for the marginal means can sometimes also be estimated indirectly from the latent class parameters, but such parameters cannot properly elucidate the link between the covariates and the population-wide parameters [26]. For this purpose, Benecha et al. [26] proposed marginally specified mean models for mixtures of two count distributions that facilitate maximum likelihood estimation, in which the mixing proportion is included in the model along with the parameters for the marginal mean: the marginalized Poisson-Poisson (MPois-Pois) model and the marginalized negative binomial-Poisson (MNB-Pois) model. In this study, an attempt is made to draw inferences about the overall exposure effects on the marginal means of ANC visits using the MPois-Pois model.

Data and methods:

Data: In this study, nationwide data extracted from the Bangladesh Demographic and Health Survey (BDHS) 2017–2018 were utilized to analyze the number of ANC visits taken by a woman during her pregnancy. The survey was based on a two-stage stratified sample of households. In the first stage of sampling, 250 enumeration areas (EAs) were selected from urban areas and 425 from rural areas. In the second stage, an average of 30 households per EA were selected. Based on this design, a total of 20127 ever-married women of reproductive age were interviewed to collect data on fertility and family planning along with socioeconomic and demographic characteristics. The BDHS also provides information on several aspects of maternal and newborn health, including antenatal care (ANC). The number of ANC visits a woman received during pregnancy was collected from the 5012 women who gave their most recent birth in the 3 years preceding the survey. Finally, a sample of 4941 women who provided complete information on all the variables considered was included in the analysis.
Variables: The outcome variable is the number of ANC visits that a woman received during pregnancy for her most recent birth. The exposure variable, 'study continuity status', has three categories, created from the number of years of schooling of a woman, whether she was attending school just before getting married, and whether she continued studying after marriage:

- continued studying after marriage: continued studying after marriage, with at least 10 years of total schooling;
- not continued but having ≥10 years schooling: stopped attending school before getting married or stopped studying after marriage, but with at least 10 years of schooling;
- dropped out: fewer than 10 years of schooling.

Thus, a woman who did not reach 10 academic years of schooling is considered a 'schoolgirl dropout' in this study. Note that the 'dropout' category may contain women from any of the three mutually exclusive classes, 'stopped attending school before getting married', 'stopped studying after marriage', and 'continued studying after marriage', provided their schooling fell short of 10 years.

The effects of some other covariates were controlled for in this study, based on the available literature [10, 27, 28]. These variables are: place of residence (urban, rural); exposure to any of the three media newspaper/magazine, radio, and television at least once a week (yes, no); mother's age at birth in years (<20, 20-29, ≥30); difference between husband and wife age in years (non-positive, 1-5, 6-10, >10); number of decisions the woman participated in, out of three major decisions regarding her own health care, large household purchases, and visits to family/relatives (none, 1-2, all 3); number of reasons for which a woman justified beating by her husband, out of five (going out without telling the husband, neglecting the children, arguing with the husband, refusing to have sex with the husband, and burning the food) (not at all, 1-2, 3-5); wealth index (poor, middle, rich), created by ranking the wealth index factor scores and then dividing the ranking into three equal categories, each comprising 33.33 percent of the sampled households; and order of birth of the index pregnancy (1, 2, 3, ≥4).
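The wealth-index recoding just described (rank the factor scores, then split the ranks into three equal groups) can be sketched in R; here `score` is a placeholder standing in for the DHS wealth index factor score:

```r
## Wealth tertiles: rank the factor scores, then cut the ranks into
## three equal groups of 33.33% each. `score` is a hypothetical input.
set.seed(2)
score   <- rnorm(4941)                      # stand-in for the factor scores
tertile <- ceiling(3 * rank(score, ties.method = "first") / length(score))
wealth  <- factor(tertile, levels = 1:3, labels = c("poor", "middle", "rich"))
table(wealth)                               # 1647 women per category
```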
Statistical analyses: All the explanatory variables considered in this study are categorical, and the outcome variable of interest, the number of ANC visits during a pregnancy, is discrete in nature. We therefore computed descriptive statistics: percent distributions for the explanatory variables, and the mean and standard deviation for the outcome variable. As all the explanatory variables are categorical, to draw inferences about the association between the explanatory variables and the count outcome we performed one-way analysis of variance (ANOVA) or independent-samples t-tests. To judge whether the data arise from a single population or from two different populations, we computed the mixing proportion of the count variable using the marginalized Poisson-Poisson (MPois-Pois) mixture model [26] in the absence of covariates. Finally, to examine the adjusted effects of the covariates on the marginal mean of the count data, the MPois-Pois mixture model was used, along with the incidence density ratio (IDR).
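A minimal R sketch of these bivariate tests, again assuming a placeholder data frame `bdhs` holding the analysis variables:

```r
## One-way ANOVA for factors with three or more levels, and a two-sample
## t-test for binary factors. Variable names are hypothetical.
summary(aov(anc ~ wealth, data = bdhs))   # e.g., the 3-level wealth index
t.test(anc ~ residence, data = bdhs)      # e.g., urban vs. rural (Welch test)
```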
Model: The source population from which the count data were collected is assumed to be partitioned into two latent subpopulations having Poisson distributions with means μ1 and μ2, respectively. Let Y1,...,Yn be a random sample of size n. Following [21], the Poisson-Poisson mixture probability distribution of Yi (i=1,...,n) can be written as

$$ f(Y_i = y_i \mid \pi, \boldsymbol{\mu}_i) = \pi \frac{e^{-\mu_{1i}}\mu_{1i}^{y_i}}{y_i!} + (1-\pi)\frac{e^{-\mu_{2i}}\mu_{2i}^{y_i}}{y_i!}, \qquad (1) $$

where μi = (μ1i, μ2i)' and π is the mixing proportion. Hence, the marginal mean and variance of Yi are, respectively,

$$ E(Y_i) = \mu_i = \pi\mu_{1i} + (1-\pi)\mu_{2i} \quad\text{and}\quad \mathrm{Var}(Y_i) = \sigma_i^2 = \mu_i + \pi(1-\pi)(\mu_{1i}-\mu_{2i})^2. $$

One may obtain the MPois-Pois mixture model by replacing μ2i with (1−π)−1(μi−πμ1i) in (1) [26].
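To make the over-dispersion implied by these moments concrete, here is a small R illustration of the two-component density in Eq. (1) and its marginal mean and variance, for arbitrary illustrative parameter values:

```r
## Two-component Poisson mixture density and its marginal moments.
dmixpois <- function(y, pi_mix, mu1, mu2) {
  pi_mix * dpois(y, mu1) + (1 - pi_mix) * dpois(y, mu2)
}

pi_mix <- 0.55; mu1 <- 2; mu2 <- 6                 # illustrative values only
mu <- pi_mix * mu1 + (1 - pi_mix) * mu2            # E(Y)   = 3.8
v  <- mu + pi_mix * (1 - pi_mix) * (mu1 - mu2)^2   # Var(Y) = 7.76 > E(Y)
c(mean = mu, variance = v)                         # variance exceeds the mean

sum(dmixpois(0:100, pi_mix, mu1, mu2))             # ~1: valid probability mass
```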
To introduce the covariates into the marginalized model, the following specifications are considered:

$$ \log(\mu_i) = \boldsymbol{x}_i'\boldsymbol{\beta}, \qquad \log(\mu_{1i}) = \boldsymbol{z}_i'\boldsymbol{\alpha}, \qquad \mathrm{logit}(\pi) = \tau, $$

where xi and β are, respectively, the p1×1 vectors of covariates and regression parameters associated with the marginal mean μi; zi and α are, respectively, the p2×1 vectors of covariates and regression parameters associated with the subpopulation mean μ1i; and −∞ < τ < ∞ is a constant. The same covariates may be used for both classes (i.e., xi = zi), with p1 = p2 = p. For the jth covariate in an MPois-Pois model with xi = zi, the IDR, the ratio of means for a one-unit increase in xij, is obtained as

$$ \frac{E(Y_i \mid x_{ij}=k+1, \tilde{\boldsymbol{x}}_i)}{E(Y_i \mid x_{ij}=k, \tilde{\boldsymbol{x}}_i)} = \exp(\beta_j), $$

where x̃i denotes all covariates except xij. The primary interest is in estimating the regression parameters β of the marginal mean μi, whereas the nuisance parameters α and τ need to be estimated to facilitate the maximum likelihood estimation of β.
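For instance, a fitted coefficient and its standard error convert to an IDR with a 95% CI by exponentiation; the values below are the paper's own estimates for the dropped-out category:

```r
## IDR and 95% CI from a marginal coefficient (Table 2 values).
beta <- -0.225; se <- 0.026
round(c(IDR   = exp(beta),
        lower = exp(beta - 1.96 * se),
        upper = exp(beta + 1.96 * se)), 3)   # 0.798 (0.759, 0.840)
```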
The likelihood function of the MPois-Pois model is

$$ L(\tau, \boldsymbol{\alpha}, \boldsymbol{\beta} \mid \boldsymbol{y}) = \prod_{i=1}^{n} \frac{1}{(1+e^{\tau})\, y_i!} \left\{ e^{\tau} \exp\!\big(-e^{\boldsymbol{z}_i'\boldsymbol{\alpha}}\big)\, e^{\boldsymbol{z}_i'\boldsymbol{\alpha}\, y_i} + e^{-\eta(\tau,\boldsymbol{\beta},\boldsymbol{\alpha};\,\boldsymbol{x}_i,\boldsymbol{z}_i)}\, \eta(\tau,\boldsymbol{\beta},\boldsymbol{\alpha};\,\boldsymbol{x}_i,\boldsymbol{z}_i)^{y_i} \right\}, $$

where η(τ,β,α; xi, zi) = e^{xi'β}(1+e^τ) − e^τ e^{zi'α}. With carefully chosen starting values, the parameters of the MPois-Pois model can be estimated by a quasi-Newton optimization method, implemented, for example, with SAS's 'nlmixed' procedure or R's 'optim' function applied to the above likelihood. Starting values for τ and α can be obtained by the EM algorithm from the Poisson-Poisson mixture model, and initial values of β by fitting a marginalized zero-inflated Poisson (MZIP) model [26].
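A minimal R sketch of this estimation step follows. It assumes the same model matrix X for both the marginal and the class-1 means (the xi = zi case above); note that η is exactly the induced class-2 mean μ2i, so the mixture density can be coded directly. This is an illustration of the approach, not the authors' code:

```r
## Negative log-likelihood of the MPois-Pois model and a quasi-Newton fit.
mpois_nll <- function(theta, y, X) {
  p     <- ncol(X)
  beta  <- theta[1:p]                    # marginal-mean parameters
  alpha <- theta[(p + 1):(2 * p)]        # class-1 (nuisance) parameters
  tau   <- theta[2 * p + 1]              # logit of the mixing proportion
  pi    <- plogis(tau)                   # pi = e^tau / (1 + e^tau)
  mu    <- drop(exp(X %*% beta))         # marginal mean mu_i
  mu1   <- drop(exp(X %*% alpha))        # class-1 mean mu_1i
  mu2   <- (mu - pi * mu1) / (1 - pi)    # induced class-2 mean (eta above)
  if (any(mu2 <= 0)) return(1e10)        # keep the search in the valid region
  -sum(log(pi * dpois(y, mu1) + (1 - pi) * dpois(y, mu2)))
}

## Hypothetical usage, given a response y and model matrix X:
## start <- c(beta0, alpha0, tau0)       # e.g., from MZIP and EM pre-fits
## fit   <- optim(start, mpois_nll, y = y, X = X, method = "BFGS",
##                hessian = TRUE)        # SEs via sqrt(diag(solve(fit$hessian)))
## aic   <- 2 * fit$value + 2 * length(fit$par)
```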
Results: To analyse the count variable of interest, the number of ANC visits during pregnancy, we considered 4,941 women who gave the index birth in the three years preceding the survey, after adjusting for missing values. The mean, standard deviation, minimum, and maximum of the number of ANC visits were 3.93, 2.88, 0, and 20, respectively. The frequency of ANC visits in Bangladesh was found to arise from a mixture of two unobserved populations with proportions 0.55 and 0.45 in the absence of covariates.

The mean number of ANC visits for each category of the explanatory variables, along with its standard error and 95% confidence interval (CI), and the percent distribution of respondents across the categories of the selected covariates, are presented in Table 1.

Table 1. Mean, standard error (SE), and 95% confidence interval (CI) for the number of ANC visits during pregnancy of women in Bangladesh, by selected socioeconomic and demographic characteristics, BDHS 2017

Variable                                      | n (%)       | Mean | SE    | 95% CI
Study continuity status ***
  Continued after marriage                    | 552 (11.2)  | 5.57 | 0.128 | 5.32-5.82
  Not continued but had ≥10 years schooling   | 609 (12.3)  | 4.89 | 0.115 | 4.67-5.12
  Dropped out                                 | 3780 (76.5) | 3.54 | 0.045 | 3.45-3.62
Place of residence ***
  Urban                                       | 1696 (34.3) | 4.71 | 0.076 | 4.56-4.86
  Rural                                       | 3245 (65.7) | 3.52 | 0.046 | 3.43-3.62
Exposed to media ***
  No                                          | 2265 (45.8) | 3.07 | 0.052 | 2.97-3.17
  Yes                                         | 2676 (54.2) | 4.66 | 0.058 | 4.55-4.77
Mother's age at birth (years) ***
  <20                                         | 1377 (27.9) | 3.79 | 0.074 | 3.64-3.93
  20-29                                       | 2816 (57.0) | 4.05 | 0.055 | 3.94-4.16
  ≥30                                         | 748 (15.1)  | 3.74 | 0.108 | 3.53-3.95
Difference between husband and wife age (years) ***
  Non-positive                                | 55 (1.1)    | 5.22 | 0.514 | 4.21-6.23
  1-5                                         | 1602 (32.4) | 3.88 | 0.071 | 3.74-4.02
  6-10                                        | 2098 (42.5) | 3.92 | 0.065 | 3.79-4.04
  >10                                         | 1186 (24.0) | 3.97 | 0.079 | 3.81-4.12
Number of decisions woman participated
  None                                        | 747 (15.1)  | 3.76 | 0.103 | 3.55-3.96
  1-2                                         | 1492 (30.2) | 3.92 | 0.071 | 3.78-4.06
  All 3                                       | 2702 (54.7) | 3.99 | 0.057 | 3.87-4.10
Number of reasons wife beating justified ***
  Not at all                                  | 4055 (82.1) | 4.07 | 0.046 | 3.98-4.17
  1-2                                         | 708 (14.3)  | 3.33 | 0.095 | 3.14-3.52
  3-5                                         | 178 (3.6)   | 3.03 | 0.188 | 2.66-3.40
Wealth index ***
  Poor                                        | 1823 (36.9) | 3.00 | 0.060 | 2.88-3.11
  Middle                                      | 1588 (32.1) | 3.92 | 0.069 | 3.79-4.06
  Rich                                        | 1530 (31.0) | 5.05 | 0.077 | 4.90-5.20
Birth order ***
  1                                           | 1870 (37.9) | 4.37 | 0.066 | 4.24-4.50
  2                                           | 1623 (32.8) | 4.03 | 0.073 | 3.89-4.18
  3                                           | 848 (17.2)  | 3.68 | 0.097 | 3.49-3.87
  ≥4                                          | 600 (12.1)  | 2.64 | 0.100 | 2.45-2.84

*** p-value <0.01; ** p-value <0.05; * p-value <0.10

Table 1 shows that about three out of four mothers (76.5%) had dropped out of school, 12.3% had completed at least 10 years of schooling but stopped their education after marriage, and 11.2% had continued their education after marriage and completed at least 10 years of schooling. About two-thirds of the mothers (65.7%) were sampled from rural areas and the rest from urban areas. About 54.2% of the mothers were exposed to at least one of the three media at least once a week. Most of the mothers (57.0%) had given their index birth at age 20–29 years, 15.1% at age 30 years or above, and a little more than one-fourth (27.9%) below age 20 years. Some 24.0% of couples had an age gap (difference between husband and wife age) of more than 10 years, 42.5% had a gap of 6–10 years, and 32.4% a gap of 1–5 years, whereas for 1.1% of couples the husband's age did not exceed the wife's.
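The intercept-only mixture fit behind the 0.55/0.45 split reported above can be sketched with a short EM iteration in R (this is also a natural source of starting values for the full model); `y` is a placeholder for the vector of ANC visit counts:

```r
## EM algorithm for an intercept-only two-component Poisson mixture.
em_pois_mix <- function(y, pi = 0.5, mu1 = mean(y) / 2, mu2 = mean(y) * 2,
                        tol = 1e-8, maxit = 500) {
  for (it in seq_len(maxit)) {
    d1 <- pi * dpois(y, mu1)             # component-1 weight x density
    d2 <- (1 - pi) * dpois(y, mu2)       # component-2 weight x density
    w  <- d1 / (d1 + d2)                 # E-step: posterior membership
    pi_new <- mean(w)                    # M-step: update the proportion
    mu1    <- sum(w * y) / sum(w)        # weighted mean, component 1
    mu2    <- sum((1 - w) * y) / sum(1 - w)
    if (abs(pi_new - pi) < tol) break
    pi <- pi_new
  }
  c(pi = pi, mu1 = mu1, mu2 = mu2)
}
## Hypothetical usage: em_pois_mix(bdhs$anc)
```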
Most of the women (54.7%) participated in all three major decisions regarding themselves, 30.2% participated in one or two, and 15.1% did not participate in any such decisions. Most of the women (82.1%) justified none of the five listed reasons for wife beating, 14.3% justified 1–2 reasons, and only 3.6% justified 3–5 reasons. Women were distributed almost equally over the categories of the wealth index. For most of the mothers (37.9%) the index birth was a first birth, for 32.8% a second birth, for 17.2% a third birth, and for the rest the birth order was 4 or more.

The exposure variable 'study continuity status' had a statistically significant association (p-value <0.01) with ANC-taking behavior. The mean number of ANC visits was largest (5.57) for mothers who had continued their education after marriage and smallest (3.54) for those who had dropped out of school. The mean number of ANC visits was significantly (p-value <0.01) higher for urban mothers (4.71) than for rural mothers (3.52). It was also higher (4.66) for mothers exposed to at least one of the three media at least once a week than for those not exposed (3.07), with p-value <0.01. The mean was highest (4.05) for mothers aged 20–29 years at the index birth, against 3.79 and 3.74 for mothers aged below 20 years and at least 30 years, respectively; the differences were statistically significant (p-value <0.01). The mean number of visits was significantly (p-value <0.01) higher (5.22) for mothers whose husband's age did not exceed their own, and similar across the other categories of 'difference between husband and wife age'. The means did not differ significantly across the categories of 'number of decisions woman participated'. The mean number of visits was inversely related to the number of reasons for which a woman justified beating by her husband (p-value <0.01) and to birth order (p-value <0.01), but positively related to the wealth index categories (p-value <0.01).

Since one of the mixture proportions was estimated as 0.55 in the absence of covariates, we fitted the MPois-Pois model to analyse the number of ANC visits taken by women during pregnancy in Bangladesh. As our main interest is in the marginal parameters, the estimates β̂ along with the IDRs are presented in Table 2. Table 2 shows that the estimated incidence rate of ANC visits was 10.6% lower for mothers who did not continue their education after marriage but had at least 10 years of schooling (p-value <0.01), and 20.2% lower for those who had dropped out (p-value <0.01), than for mothers who continued their education after marriage. The IDR of ANC visits was 1.12 for urban relative to rural mothers (p-value <0.01) and 1.24 for mothers exposed to media at least once a week relative to unexposed mothers (p-value <0.01). The rate of ANC visits was 7.7% lower for mothers who gave the index birth below 20 years of age (p-value <0.01) and 5.3% higher for mothers who gave the index birth at or after 30 years of age (p-value <0.10) than for those who gave birth at age 20–29 years.
The IDR of ANC visits was 1.26 for women who were not younger than their husbands relative to women 1–5 years younger than their husbands (p-value <0.05); this IDR was statistically insignificant for the other categories of 'difference between husband and wife age'. The differences in the rates of ANC visits were statistically insignificant among the categories of 'number of decisions woman participated'. The IDR of ANC visits was 0.88 for mothers who justified beating by their partners for 1–2 reasons relative to mothers who never justified any of the five reasons (p-value <0.01); this ratio was about the same for mothers who justified beating by their partners for 3–5 reasons (p-value <0.05). The rate of ANC visits was 13.8% higher for mothers from middle-wealth households and 24.1% higher for mothers from rich households than for those from poor households; both results were statistically significant with p-value <0.01. As the birth order of the index child increased, mothers were significantly less likely to receive ANC during their pregnancy. For example, the rates of ANC visits were 7.2%, 10.9% and 30.9% lower for the second, third, and fourth- or higher-order births, respectively, compared with the first birth. It was also seen that the marginal mean parameters were estimated from a mixture of two latent subpopulations with proportions 0.61 and 0.39 after adjusting for the covariates.

Table 2. Estimated marginal parameters ($\hat{\beta}$), standard errors (SE), z-values, p-values, 95% CIs for the regression parameters, and IDRs under the MPois-Pois mixture model on the number of ANC visits during a pregnancy, BDHS 2017

| Variable | $\hat{\beta}$ | SE | z-value | p-value | 95% CI | IDR |
|---|---|---|---|---|---|---|
| Intercept | 1.359 | 0.042 | 32.708 | <0.001 | – | 3.893 |
| Study continuity status: Continued after marriage (ref) | | | | | | |
| Not continued but had ≥10 years schooling | -0.112 | 0.030 | -3.755 | <0.001 | (-0.171, -0.053) | 0.894 |
| Dropped out | -0.225 | 0.026 | -8.679 | <0.001 | (-0.276, -0.174) | 0.798 |
| Place of residence: Rural (ref) | | | | | | |
| Urban | 0.113 | 0.021 | 5.457 | <0.001 | (0.072, 0.154) | 1.120 |
| Exposed to media: No (ref) | | | | | | |
| Yes | 0.218 | 0.022 | 10.072 | <0.001 | (0.175, 0.261) | 1.243 |
| Mother's age at birth (years): 20–29 (ref) | | | | | | |
| <20 | -0.080 | 0.025 | -3.226 | 0.001 | (-0.129, -0.031) | 0.923 |
| ≥30 | 0.052 | 0.031 | 1.691 | 0.091 | (-0.009, 0.113) | 1.053 |
| Difference between husband and wife age (years): 1–5 (ref) | | | | | | |
| Non-positive | 0.228 | 0.098 | 2.328 | 0.020 | (0.036, 0.420) | 1.256 |
| 6–10 | -0.002 | 0.021 | -0.110 | 0.913 | (-0.043, 0.039) | 0.998 |
| >10 | 0.011 | 0.025 | 0.432 | 0.666 | (-0.038, 0.060) | 1.011 |
| Number of decisions woman participated: None (ref) | | | | | | |
| 1–2 | 0.002 | 0.029 | 0.079 | 0.937 | (-0.055, 0.059) | 1.002 |
| All 3 | 0.021 | 0.027 | 0.781 | 0.435 | (-0.032, 0.074) | 1.021 |
| Number of reasons wife beating justified: Not at all (ref) | | | | | | |
| 1–2 | -0.126 | 0.028 | -4.535 | <0.001 | (-0.181, -0.071) | 0.881 |
| 3–5 | -0.128 | 0.057 | -2.221 | 0.026 | (-0.240, -0.016) | 0.880 |
| Wealth index: Poor (ref) | | | | | | |
| Middle | 0.129 | 0.025 | 5.082 | <0.001 | (0.080, 0.178) | 1.138 |
| Rich | 0.216 | 0.029 | 7.338 | <0.001 | (0.159, 0.273) | 1.241 |
| Birth order: 1 (ref) | | | | | | |
| 2 | -0.075 | 0.024 | -3.140 | 0.002 | (-0.122, -0.028) | 0.928 |
| 3 | -0.116 | 0.032 | -3.601 | <0.001 | (-0.179, -0.053) | 0.891 |
| ≥4 | -0.369 | 0.043 | -8.492 | <0.001 | (-0.453, -0.285) | 0.691 |
| Mixing proportion ($\hat{\pi}$) | 0.610 | 0.049 | 12.472 | <0.001 | – | – |

To assess the goodness of fit of the MPois-Pois model, the Poisson and negative binomial regression models with the selected covariates were also fitted. The AIC values of the Poisson, negative binomial and MPois-Pois models were 23468, 22543 and 22470, respectively, implying that the MPois-Pois model is the best choice for this data set. Discussion and conclusion: To analyze count data and draw valid inference, it is required to check whether the data come from a single population or from a mixture of populations. If the target population consists of a mixture of populations, estimates of the latent-class parameters of the regression model can be obtained by fitting a mixture model. However, it is sometimes difficult to estimate the regression parameters for the marginal means from the mixture-model setup. Also, interpretation in terms of IDRs for the population-wide parameters is not possible from such a model. The marginalized mixture model can be utilized under these circumstances for drawing inferences about the whole population. The objective of this study is to explore the effect of schoolgirl dropout due to marriage on the number of ANC visits in Bangladesh (inference regarding the population-wide parameters). From the results, we observed that the population count data can be regarded as a mixture of two latent count distributions with proportions 0.55 and 0.45 in the absence of covariates. Therefore, a marginalized mixture model, the MPois-Pois model, was utilized to meet the study objective. Although tremendous success has been documented for ANC services for pregnant women, Bangladesh is still far from ensuring the WHO-recommended 8+ ANC visits for pregnant women. Receiving health care services during pregnancy is associated with organization, accessibility, standards, value and health beliefs, as well as some socio-demographic factors [27]. The maternal health care programme in Bangladesh has confronted many challenges, including accessibility, lack of equity, lack of public health facilities, scarcity of a skilled workforce and inadequate financial resource allocation [29]. To identify the obstacles to receiving ANC services, it is necessary to undertake comprehensive research studies. This study aimed to identify socio-demographic factors, with special emphasis on schoolgirl dropout due to marriage, that are associated with the frequency of ANC visits in Bangladesh. Several previous studies investigated which socioeconomic and demographic factors are responsible for the frequency of ANC visits women took during their pregnancy in Bangladesh [10, 27, 28, 30].
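(As a brief aside on the model comparison just reported: AIC-based selection can be reproduced for the two standard baselines with statsmodels, as sketched below. The outcome y and design matrix X are simulated placeholders, not the BDHS data; the MPois-Pois model itself is not available in statsmodels and would require a custom likelihood, e.g. following Benecha et al. [26].)

```python
# Sketch: fit Poisson and negative binomial count regressions and compare
# AIC (lower is better), as done in the text. X and y are simulated
# placeholders, not the BDHS data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 3)))                # toy design matrix
y = rng.poisson(np.exp(X @ np.array([1.0, 0.2, -0.1, 0.3])))  # toy counts

poisson_fit = sm.Poisson(y, X).fit(disp=0)
negbin_fit = sm.NegativeBinomial(y, X).fit(disp=0)
print("Poisson AIC:", round(poisson_fit.aic, 1))
print("NegBin  AIC:", round(negbin_fit.aic, 1))
```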
Although they included education as an independent variable and found it to be an important determinant of the frequency of ANC visits, we considered 'study continuity status after marriage' as the exposure variable in this study. The results revealed that schoolgirl dropout has a tremendous negative influence on the rate of receiving ANC services among pregnant women in Bangladesh. Consistent with the findings of the aforementioned studies, this study found that women living in urban areas, possessing a higher wealth index and exposed to media have a higher rate of receiving ANC visits. Hossain et al. [28] found that the frequency of ANC visits was lower for women aged 20 years or below compared with women aged 20–35 years at the time of delivery. Besides, Kabir and Islam [30] found that the odds of receiving ≥4 ANC services were low if the mother's age at conception was ≥35 years. However, we found that the rate of receiving ANC services was lower for women aged below 20 years but higher for women aged 30 years or more than for those aged 20–29 years at the time of delivery. In this study, we found that a higher birth order was associated with a lower rate of ANC visits. These findings are similar to those of previous studies [28, 30]. Bhowmik et al. [10] found that women who were more involved in making decisions regarding their own health care could significantly improve the frequency of ANC visits. Kabir and Islam [30] found increased odds of taking ≥4 ANC visits if women and their husbands jointly took decisions regarding the woman's own health care. However, Hossain et al. [28] did not find any association between women's empowerment and the frequency of receiving ANC services. In this study we considered three aspects of women's empowerment as independent variables: the number of decisions a woman participated in out of three major decisions regarding herself, the number of reasons out of five for which a woman justified beating by her husband, and the difference between husband and wife age (in years). Although the number of decisions a woman participated in was not associated with the frequency of ANC visits, an increased number of reasons for justifying beating was associated with a lower rate of receiving ANC services. Also, the rate of receiving ANC services was higher for women who were not younger than their husbands. Since an adequate level of ANC visits during pregnancy contributes greatly to maintaining proper maternal health and protecting against adverse pregnancy outcomes, it is essential to ensure that every pregnant mother receives the WHO-recommended minimum of eight ANC visits. This can be accomplished by encouraging women to take a sufficient number of ANC visits during their pregnancy. It might be achieved in Bangladesh by promoting maternal education so that women complete at least ten years of schooling and dropout from school after marriage is prevented. Besides schoolgirl dropout, this study helps policy makers identify other socio-demographic factors, such as place of residence, media exposure, mother's age at birth, women's empowerment, wealth index and birth order, that might encourage women to receive more ANC visits during their pregnancy.
Background: There is a lack of research explaining the link between dropout from school and antenatal care (ANC) visits of women during pregnancy in Bangladesh. The aim of this study is to investigate how dropout from school influences ANC visits after controlling for relevant covariates, using an appropriate count regression model. Methods: The association between the explanatory variables and the outcome of interest, ANC visits, was assessed using one-way analysis of variance/independent-sample t-tests. To examine the adjusted effects of the covariates on the marginal mean of the count data, a marginalized Poisson-Poisson mixture regression model was fitted. Results: The estimated incidence rate of antenatal care visits was 10.6% lower for mothers who did not continue their education after marriage but had at least 10 years of schooling (p-value <0.01) and 20.2% lower for mothers who dropped out (p-value <0.01) than for mothers who continued their education after marriage. Conclusions: To ensure the WHO-recommended 8+ ANC visits for pregnant women in Bangladesh, it is essential to promote maternal education so that women complete at least ten years of schooling and dropout from school after marriage is prevented.
Introduction: Antenatal care (ANC) is an important element of the continuum of care that a mother receives before and during pregnancy, at the time of childbirth, and in the postnatal period. The aim of ANC is to detect any pregnancy complications, take immediate necessary steps to resolve such complications, and prepare the mother for a safe and healthy birth. It has both direct and indirect influences on the survival and health of the mother as well as the newborn [1, 2]. The World Health Organization (WHO) recently recommended at least eight ANC visits for a woman during a pregnancy [2]. The percentage of pregnant women receiving at least four ANC visits during a pregnancy in Bangladesh has been increasing over the past two decades, from 6% in 1993–94 to 47% in 2017–18 [3]. Despite this substantial progress in ANC services, only about 11.5% of sampled women received the WHO-recommended 8+ ANC visits during November 2014–March 2018 in Bangladesh [3]. Therefore, to achieve the health-related Sustainable Development Goals, it is essential to make access to quality ANC services easy for pregnant mothers, and a proper monitoring system is required to observe whether this service is implemented throughout the country [4]. Education is one of the important determinants of good health, as health disparities can be determined by educational disparities [5, 6]. Mother's education is considered a crucial factor for preventing and treating poor health outcomes and for the effective use of health care services [7]. To ensure the best health practice, public policies should focus on getting the best attainable education for women among all socioeconomic classes [8], as there exist extreme gender differentiation in economic roles, lower parental investments in daughters than in sons, and significant restrictions on girls' public mobility [9]. Women with more education are more likely to receive ANC services. Educated women understand the benefits of taking frequent ANC services, as they are more knowledgeable about reproductive health care [10, 11]. School dropout should be considered a public health problem because education is a strong predictor of long-term health and dropouts have poorer health outcomes [12, 13]. Although school dropout and child marriage are interrelated outcomes that have an enormous adverse impact on adolescent girls' health and wellbeing as well as on their progeny, most parents in Bangladesh think that marriage and stopping school is the better option for their daughter's prosperity [14]. The complex relationship between girls' high-school dropout and risky adolescent behavior suggests that schoolgirl dropout due to marriage is associated with a high risk of subsequent teen pregnancy [15–17], and teen pregnancy can lead to medical complications for the mother and infant [18]. There is a lack of research explaining the link between whether or not a woman continued her education after marriage and ANC visits. Although ANC services in Bangladesh have shown an increasing trend over the decades, access to ANC services should be improved drastically to achieve the WHO-recommended positive pregnancy outcomes and the health-related Sustainable Development Goals. Therefore, the aim of the current study is to investigate the determinants of the frequency of ANC visits of women during their most recent pregnancy in Bangladesh by applying an appropriate count regression model, giving special emphasis to how schoolgirl dropout influences the response of interest.
In practice, it may happen that sample count data arise from two or more populations. Analysing such data using standard or zero-augmented count models may result in misleading conclusions [19, 20]. To overcome this difficulty, it may be necessary to analyze the data taking the mixture of populations into account [21]. Therefore, it is necessary to check whether count data come from a mixture of populations. Moreover, the significance of inference regarding the marginal mean is well documented in many studies under the mixture-model setup [22–25]. Although finite mixtures of the standard count models have the advantage of describing over-dispersed count data, inferences regarding the overall exposure effects on the marginal mean cannot be made from these models [22, 24]. The regression parameters for the marginal means can sometimes be estimated indirectly by using the latent-class parameters, but such parameters cannot properly elucidate the link between the covariates and the population-wide parameters [26]. For this purpose, Benecha et al. [26] proposed marginally specified mean models for mixtures of two count distributions to facilitate maximum likelihood estimation, in which the mixing proportion is included in the model along with the parameters for the marginal mean. These are the marginalized Poisson-Poisson (MPois-Pois) model and the marginalized negative binomial-Poisson (MNB-Pois) model. In this study, an attempt has been made to draw inferences about the overall exposure effects on the marginal means of ANC visits by using the MPois-Pois model. Discussion and conclusion: The study has the following limitations: (1) We used cross-sectional data to study the association between schoolgirl dropout and the frequency of ANC visits during pregnancy, controlling for some socio-demographic variables, and therefore it is not possible to assess the causal relationship between them; (2) Although the data were collected using a stratified cluster sampling technique, cluster-level variation has not been considered in the analysis. Further research can be carried out by taking these limitations into account.
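(For reference, the general structure of the marginalized two-component mixture described above can be sketched as follows; this is only a schematic, and the exact parameterization and likelihood are those of Benecha et al. [26].)

```latex
% Schematic of a marginalized two-component Poisson mixture (the exact
% parameterization follows Benecha et al. [26]).
% Latent-class form:  Y_i ~ pi * Pois(mu_1i) + (1 - pi) * Pois(mu_2i)
% Marginal mean:      nu_i = pi * mu_1i + (1 - pi) * mu_2i
% The regression is placed directly on the marginal mean, so that
% exp(beta_k) is a population-wide incidence density ratio (IDR).
\[
  Y_i \sim \pi\,\mathrm{Pois}(\mu_{1i}) + (1-\pi)\,\mathrm{Pois}(\mu_{2i}),
  \qquad
  \nu_i = \pi\,\mu_{1i} + (1-\pi)\,\mu_{2i},
  \qquad
  \log \nu_i = \mathbf{x}_i^{\top}\boldsymbol{\beta}.
\]
```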
Background: There is a lack of research explaining the link between dropout from school and antenatal care (ANC) visits of women during pregnancy in Bangladesh. The aim of this study is to investigate how dropout from school influences ANC visits after controlling for relevant covariates, using an appropriate count regression model. Methods: The association between the explanatory variables and the outcome of interest, ANC visits, was assessed using one-way analysis of variance/independent-sample t-tests. To examine the adjusted effects of the covariates on the marginal mean of the count data, a marginalized Poisson-Poisson mixture regression model was fitted. Results: The estimated incidence rate of antenatal care visits was 10.6% lower for mothers who did not continue their education after marriage but had at least 10 years of schooling (p-value <0.01) and 20.2% lower for mothers who dropped out (p-value <0.01) than for mothers who continued their education after marriage. Conclusions: To ensure the WHO-recommended 8+ ANC visits for pregnant women in Bangladesh, it is essential to promote maternal education so that women complete at least ten years of schooling and dropout from school after marriage is prevented.
9,557
244
[ 3710, 199, 514, 181, 955, 1004 ]
8
[ "usepackage", "boldsymbol", "document", "anc", "years", "boldsymbol mathcalligra", "mathcalligra", "visits", "10", "anc visits" ]
[ "visits woman pregnancy", "bangladesh promoting maternal", "anc visits pregnancy", "maternal health care", "pregnant women bangladesh" ]
null
[CONTENT] Antenatal care | BDHS | Count model | Dropout | Marginalized | Mixture [SUMMARY]
null
[CONTENT] Antenatal care | BDHS | Count model | Dropout | Marginalized | Mixture [SUMMARY]
[CONTENT] Antenatal care | BDHS | Count model | Dropout | Marginalized | Mixture [SUMMARY]
[CONTENT] Antenatal care | BDHS | Count model | Dropout | Marginalized | Mixture [SUMMARY]
[CONTENT] Antenatal care | BDHS | Count model | Dropout | Marginalized | Mixture [SUMMARY]
[CONTENT] Bangladesh | Educational Status | Female | Humans | Patient Acceptance of Health Care | Pregnancy | Pregnant Women | Prenatal Care | Schools [SUMMARY]
null
[CONTENT] Bangladesh | Educational Status | Female | Humans | Patient Acceptance of Health Care | Pregnancy | Pregnant Women | Prenatal Care | Schools [SUMMARY]
[CONTENT] Bangladesh | Educational Status | Female | Humans | Patient Acceptance of Health Care | Pregnancy | Pregnant Women | Prenatal Care | Schools [SUMMARY]
[CONTENT] Bangladesh | Educational Status | Female | Humans | Patient Acceptance of Health Care | Pregnancy | Pregnant Women | Prenatal Care | Schools [SUMMARY]
[CONTENT] Bangladesh | Educational Status | Female | Humans | Patient Acceptance of Health Care | Pregnancy | Pregnant Women | Prenatal Care | Schools [SUMMARY]
[CONTENT] visits woman pregnancy | bangladesh promoting maternal | anc visits pregnancy | maternal health care | pregnant women bangladesh [SUMMARY]
null
[CONTENT] visits woman pregnancy | bangladesh promoting maternal | anc visits pregnancy | maternal health care | pregnant women bangladesh [SUMMARY]
[CONTENT] visits woman pregnancy | bangladesh promoting maternal | anc visits pregnancy | maternal health care | pregnant women bangladesh [SUMMARY]
[CONTENT] visits woman pregnancy | bangladesh promoting maternal | anc visits pregnancy | maternal health care | pregnant women bangladesh [SUMMARY]
[CONTENT] visits woman pregnancy | bangladesh promoting maternal | anc visits pregnancy | maternal health care | pregnant women bangladesh [SUMMARY]
[CONTENT] usepackage | boldsymbol | document | anc | years | boldsymbol mathcalligra | mathcalligra | visits | 10 | anc visits [SUMMARY]
null
[CONTENT] usepackage | boldsymbol | document | anc | years | boldsymbol mathcalligra | mathcalligra | visits | 10 | anc visits [SUMMARY]
[CONTENT] usepackage | boldsymbol | document | anc | years | boldsymbol mathcalligra | mathcalligra | visits | 10 | anc visits [SUMMARY]
[CONTENT] usepackage | boldsymbol | document | anc | years | boldsymbol mathcalligra | mathcalligra | visits | 10 | anc visits [SUMMARY]
[CONTENT] usepackage | boldsymbol | document | anc | years | boldsymbol mathcalligra | mathcalligra | visits | 10 | anc visits [SUMMARY]
[CONTENT] health | anc | services | education | anc services | pregnancy | model | count | mother | dropout [SUMMARY]
null
[CONTENT] usepackage | mothers | value | 01 | value 01 | years | age | anc | birth | visits [SUMMARY]
[CONTENT] anc | women | receiving | receiving anc | receiving anc services | found | anc visits | visits | services | anc services [SUMMARY]
[CONTENT] usepackage | boldsymbol | anc | years | women | visits | anc visits | 10 | document | number [SUMMARY]
[CONTENT] usepackage | boldsymbol | anc | years | women | visits | anc visits | 10 | document | number [SUMMARY]
[CONTENT] ANC | Bangladesh ||| ANC [SUMMARY]
null
[CONTENT] 10.6% | at least 10 years | 20.2% [SUMMARY]
[CONTENT] WHO | 8 | ANC | Bangladesh | at least ten years [SUMMARY]
[CONTENT] ANC | Bangladesh ||| ANC ||| ANC | one ||| ||| 10.6% | at least 10 years | 20.2% ||| WHO | 8 | ANC | Bangladesh | at least ten years [SUMMARY]
[CONTENT] ANC | Bangladesh ||| ANC ||| ANC | one ||| ||| 10.6% | at least 10 years | 20.2% ||| WHO | 8 | ANC | Bangladesh | at least ten years [SUMMARY]
Effect of waiting time for COVID-19 screening on postoperative outcomes of type A aortic dissection: An institutional study.
35971227
Since November 2020, all patients undergoing emergency surgery at our hospital have been subjected to preoperative reverse transcription polymerase chain reaction (RT-PCR) screening to prevent nosocomial COVID-19 infection, with admission to the operating room requiring a negative result. Herein, we compared the pre- and postoperative outcomes of acute type A aortic dissection surgery before and after implementing the RT-PCR screening for all patients.
BACKGROUND
We compared the postoperative results of 105 patients who underwent acute type A aortic dissection emergency surgery from January 2019 to October 2020 (Group I) and 109 patients who underwent the surgery following RT-PCR screening from November 2020 to March 2022 (Group II).
METHODS
The average waiting time from arrival at the hospital to admission to the operating room was 36 and 81 min in Groups I and II, respectively. Ruptured cardiac tamponade was observed preoperatively in 26.6% and 21.1% of Groups I and II patients, respectively. The preoperative waiting time due to RT-PCR screening did not contribute to the cardiac tamponade. Surgical complications such as bleeding (reopened chest), respiratory failure, cerebral neuropathy, or mediastinitis did not increase significantly. The number of deaths within 30 days after surgery (13 in each of Groups I and II) showed no significant difference between the groups. There were no cases of nosocomial COVID-19 infections.
RESULTS
Preoperative COVID-19 screening is an important method to prevent nosocomial infections. The associated waiting time did not affect the number of preoperative ruptures, postoperative complications, or mortality.
CONCLUSIONS
[ "Aortic Dissection", "COVID-19", "Cardiac Tamponade", "Cross Infection", "Humans", "Postoperative Complications", "Retrospective Studies", "Treatment Outcome", "Waiting Lists" ]
9382570
Introduction
To prevent nosocomial COVID-19 infection, screening tests, such as reverse transcription polymerase chain reaction (RT-PCR), are necessary at the time of admission. However, in emergency diseases requiring cardiovascular surgery, the time from admission to the start of surgery is crucial. The Japanese Surgical Association has suggested that if COVID-19 cannot be ruled out before emergency surgery, surgery should be performed after waiting for the maximum possible time for the RT-PCR results. However, in cases where the patient's preoperative condition is critical, our hospital policy is to perform the surgery in a negative pressure room while wearing personal protective equipment. Our hospital provides a 24-h RT-PCR service, but results are released after an hour. After November 2020, even emergency surgeries were postponed until the patients' RT-PCR results were available. In this study, we compared the pre- and postoperative outcomes in patients who underwent acute type A aortic dissection emergency surgery before and after implementing the RT-PCR screening for all patients.
null
null
Results
Groups I and II included 105 (49 men, 56 women) and 109 (51 men, 58 women) patients, respectively. Although it is well known that the proportion of men with acute type A dissection is about 2–3 times higher than that of women, the proportion of men and women in this study was similar. The results of the pre- and postoperative findings are presented in Table 1. No patient tested positive for COVID-19 preoperatively. There were no nosocomial infections with COVID-19 during the observation period. The average waiting time from arrival at the hospital to admission to the operating room was 36 and 81 min in Groups I and II, respectively. Preoperative ruptured cardiac tamponade was observed in 28 (27%) patients in Group I and 23 (21%) patients in Group II. Fisher's exact test showed that the preoperative waiting time associated with RT-PCR testing did not contribute to the ruptured cardiac tamponade in Group II patients (p = 0.42). No cases of cardiac tamponade occurred during the preoperative waiting period in Group II. Seven patients in Group I and 13 patients in Group II were on catecholamine preoperatively. There was no significant difference between the two groups (p = 0.24). Catecholamine was administered to some patients with cardiac tamponade to allow a certain degree of hypotension and prevent the progression of cardiac tamponade if the patients maintained consciousness, and there was no progression of organ ischemia. In Group I and Group II, 87 and 79 patients underwent ascending aortic artery replacement, respectively, 8 and 20 patients underwent ascending arch aortic replacement, respectively, 2 and no patients underwent redo ascending aortic artery replacement, respectively. In each group, four patients underwent coronary artery bypass surgery in addition to ascending aortic artery replacement. Three cases of Bentall surgery and two cases of ascending aortic artery replacement were performed, while ascending aortic artery replacement plus aortic valve replacement was performed in one and four cases, respectively. Regarding post-surgical complications, bleeding (reopened chest) occurred in two patients in Group I (1.9%) and six patients in Group II (5.5%), with no significant difference between the two groups (p = 0.28). Respiratory failure was observed in 14 patients (13%) in Group I and six patients (5.5%) in Group II, but the difference between the two groups was not significant (p = 0.06). A total of 15 (14%) patients in Group I and 10 patients (9.2%) in Group II developed cerebral neurological disorders, with no significant difference between the two groups (p = 0.29). The number of patients that developed mediastinitis was also not significantly different (p = 1) between Groups I (0.95%) and II (0.92%). The proportion of patients who died within 30 days post-surgery was 12% (13 patients) in each of Groups I and II, with no significant difference between the two groups (p = 1). Pre- and postoperative findings. The average time to enter the operating room has been delayed by 45 min owing to the mandatory COVID-19 PCR test. No significant increase in preoperative cardiac tamponade, preoperative catecholamine use, and postoperative complications was seen.
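(The 2×2 group comparisons above use Fisher's exact test. A small self-contained check of the cardiac tamponade comparison, with counts taken from the text, is sketched below; the article reports p = 0.42, and minor differences can arise from rounding of the published counts.)

```python
# Sketch: Fisher's exact test on preoperative ruptured cardiac tamponade,
# using the counts reported above (Group I: 28/105, Group II: 23/109).
from scipy.stats import fisher_exact

table = [[28, 105 - 28],   # Group I: tamponade yes / no
         [23, 109 - 23]]   # Group II: tamponade yes / no
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.2f}")
```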
Conclusion
Although COVID-19 RT-PCR screening increases the preoperative waiting time, it does not contribute to postoperative mortality. To prevent nosocomial infections, it is advisable to perform RT-PCR before surgery. However, some patients with poor preoperative condition can suffer adverse events, and precautions, such as preoperative antihypertensive rest, may be necessary to prevent cardiac tamponade rupture.
[ "Material and methods", "Conclusion" ]
[ "We compared the pre- and postoperative results of 105 patients who underwent emergency surgery for acute type A aortic dissection from January 2019 to October 2020 (Group I) and 109 patients who underwent the same procedure from November 2020 to March 2022 (Group II), only after preoperative COVID-19 RT-PCR results were available. Basically, all patients with acute type A aortic dissection are treated with emergency surgery in our hospital (if they do not agree to surgery, they are treated conservatively). The flow of patient selection is shown in Figure 1. To adjust for confounding factors other than preoperative RT-PCR testing, we ensured that the surgical team (including the cardiovascular surgery medical specialist) and surgical procedure used for both groups were identical. In our hospital, we performed ascending aorta replacement if the entry tear is in the ascending aorta and total arch replacement if it is in the aortic arch. If the dissection involves a coronary artery, coronary artery bypass surgery is added. In moderate or severe cases of aortic regurgitation, aortic valve replacement is performed simultaneously. In cases of aortic root aneurysm, Bentall's procedure is also performed. If the aortic leaflet quality and shape are basically normal and only the sinus expands by more than 4.5 cm, we consider David's surgery. In this study, we also defined cardiac tamponade as a case of shock vitality in the presence of hemopericardium due to pericardial membrane collapse. We evaluated the time from arrival at the hospital to admission to the operating room, preoperative cardiac tamponade, postoperative chest reopening, respiratory failure, cerebral neuropathy, mediastinitis, and mortality. Respiratory failure was defined as the need for tracheal intubation for more than 1 week after surgery. Neurological disorders were defined as abnormal neurological or imaging findings within 1 week after surgery. Mediastinitis was defined as an infection extending down the sternum. Fisher's exact test was used to compare the variables between the two groups. Statistically significant differences were considered at p < 0.05.\nPatient selection chart. The number of patients with type A acute aortic dissection who presented to our hospital between January 2019 and March 2022 was 214. Eleven patients who refused surgery because of their advanced age or hindrance in their activities of daily living were excluded from the study while the rest were divided into two groups (Group I, 105 patients; Group II, 109 patients) as part of the study.", "Although COVID-19 RT-PCR screening increases the preoperative waiting time, it does not contribute to postoperative mortality. To prevent nosocomial infections, it is advisable to perform RT-PCR before surgery. However, some patients with poor preoperative condition can suffer adverse events, and precautions, such as preoperative antihypertensive rest, may be necessary to prevent cardiac tamponade rupture." ]
[ null, null ]
[ "Introduction", "Material and methods", "Results", "Discussion", "Conclusion" ]
[ "To prevent nosocomial COVID-19 infection, screening tests, such as reverse transcription polymerase chain reaction (RT-PCR), are necessary at the time of admission. However, in emergency diseases requiring cardiovascular surgery, the time from admission to the start of surgery is crucial. The Japanese Surgical Association has suggested that if COVID-19 cannot be ruled out before emergency surgery, surgery should be performed after waiting for the maximum possible time for the RT-PCR results. However, in detrimental cases caused by the patients’ preoperative condition, our hospital policy is to perform the surgery in a negative pressure room while wearing personal protective equipment. Our hospital provides a 24-h RT-PCR service, but results are released after an hour. Even after November 2020, surgeries were postponed with the aim of waiting for patients’ RT-PCR results. In this study, we compared the pre- and postoperative outcomes in patients who underwent acute type A aortic dissection emergency surgery before and after implementing the RT-PCR screening for all patients.", "We compared the pre- and postoperative results of 105 patients who underwent emergency surgery for acute type A aortic dissection from January 2019 to October 2020 (Group I) and 109 patients who underwent the same procedure from November 2020 to March 2022 (Group II), only after preoperative COVID-19 RT-PCR results were available. Basically, all patients with acute type A aortic dissection are treated with emergency surgery in our hospital (if they do not agree to surgery, they are treated conservatively). The flow of patient selection is shown in Figure 1. To adjust for confounding factors other than preoperative RT-PCR testing, we ensured that the surgical team (including the cardiovascular surgery medical specialist) and surgical procedure used for both groups were identical. In our hospital, we performed ascending aorta replacement if the entry tear is in the ascending aorta and total arch replacement if it is in the aortic arch. If the dissection involves a coronary artery, coronary artery bypass surgery is added. In moderate or severe cases of aortic regurgitation, aortic valve replacement is performed simultaneously. In cases of aortic root aneurysm, Bentall's procedure is also performed. If the aortic leaflet quality and shape are basically normal and only the sinus expands by more than 4.5 cm, we consider David's surgery. In this study, we also defined cardiac tamponade as a case of shock vitality in the presence of hemopericardium due to pericardial membrane collapse. We evaluated the time from arrival at the hospital to admission to the operating room, preoperative cardiac tamponade, postoperative chest reopening, respiratory failure, cerebral neuropathy, mediastinitis, and mortality. Respiratory failure was defined as the need for tracheal intubation for more than 1 week after surgery. Neurological disorders were defined as abnormal neurological or imaging findings within 1 week after surgery. Mediastinitis was defined as an infection extending down the sternum. Fisher's exact test was used to compare the variables between the two groups. Statistically significant differences were considered at p < 0.05.\nPatient selection chart. The number of patients with type A acute aortic dissection who presented to our hospital between January 2019 and March 2022 was 214. 
Eleven patients who refused surgery because of their advanced age or hindrance in their activities of daily living were excluded from the study while the rest were divided into two groups (Group I, 105 patients; Group II, 109 patients) as part of the study.", "Groups I and II included 105 (49 men, 56 women) and 109 (51 men, 58 women) patients, respectively. Although it is well known that the proportion of men with acute type A dissection is about 2–3 times higher than that of women, the proportion of men and women in this study was similar. The results of the pre- and postoperative findings are presented in Table 1. No patient tested positive for COVID-19 preoperatively. There were no nosocomial infections with COVID-19 during the observation period. The average waiting time from arrival at the hospital to admission to the operating room was 36 and 81 min in Groups I and II, respectively. Preoperative ruptured cardiac tamponade was observed in 28 (27%) patients in Group I and 23 (21%) patients in Group II. Fisher's exact test showed that the preoperative waiting time associated with RT-PCR testing did not contribute to the ruptured cardiac tamponade in Group II patients (p = 0.42). No cases of cardiac tamponade occurred during the preoperative waiting period in Group II. Seven patients in Group I and 13 patients in Group II were on catecholamine preoperatively. There was no significant difference between the two groups (p = 0.24). Catecholamine was administered to some patients with cardiac tamponade to allow a certain degree of hypotension and prevent the progression of cardiac tamponade if the patients maintained consciousness, and there was no progression of organ ischemia. In Group I and Group II, 87 and 79 patients underwent ascending aortic artery replacement, respectively, 8 and 20 patients underwent ascending arch aortic replacement, respectively, 2 and no patients underwent redo ascending aortic artery replacement, respectively. In each group, four patients underwent coronary artery bypass surgery in addition to ascending aortic artery replacement. Three cases of Bentall surgery and two cases of ascending aortic artery replacement were performed, while ascending aortic artery replacement plus aortic valve replacement was performed in one and four cases, respectively. Regarding post-surgical complications, bleeding (reopened chest) occurred in two patients in Group I (1.9%) and six patients in Group II (5.5%), with no significant difference between the two groups (p = 0.28). Respiratory failure was observed in 14 patients (13%) in Group I and six patients (5.5%) in Group II, but the difference between the two groups was not significant (p = 0.06). A total of 15 (14%) patients in Group I and 10 patients (9.2%) in Group II developed cerebral neurological disorders, with no significant difference between the two groups (p = 0.29). The number of patients that developed mediastinitis was also not significantly different (p = 1) between Groups I (0.95%) and II (0.92%). The proportion of patients who died within 30 days post-surgery was 12% (13 patients) in each of Groups I and II, with no significant difference between the two groups (p = 1).\nPre- and postoperative findings.\nThe average time to enter the operating room has been delayed by 45 min owing to the mandatory COVID-19 PCR test. 
No significant increase in preoperative cardiac tamponade, preoperative catecholamine use, and postoperative complications was seen.", "Considering the recent spread of COVID-19, it is necessary to control nosocomial infections. Since RT-PCR screening was implemented, there has been no nosocomial COVID-19 transmission in our hospital from patients to medical staff, suggesting that RT-PCR COVID-19 testing at the time of scheduled admission may be effective in controlling nosocomial infections. However, RT-PCR testing before emergency admission may not be available at some facilities. Despite the usefulness of RT-PCR screening, the prognosis of patients with type A acute aortic dissection, who must wait for an hour for RT-PCR test results, needs to be examined. Furthermore, antihypertensive rest and analgesia are important for managing any preoperative complications.1 In our hospital, intravenous calcium channel blockers are used to strictly maintain systolic blood pressure below 120 mmHg, while fentanyl, an analgesic, is administered continuously for pain relief. However, sudden changes may occur during the waiting period, including cardiac arrest associated with (1) acute myocardial infarction due to closure of the coronary artery as the aortic dissection progresses, (2) cardiac tamponade due to rupture of the ascending aorta, and (3) rupture of the descending or abdominal aorta due to vulnerability associated with the dissection.2,3 To date, we have not experienced a case of cardiac arrest in our hospital while waiting for COVID-19 RT-PCR test results. Previous studies have shown that the one-step reverse transcription loop-mediated isothermal amplification (RT-LAMP) test is comparable to RT-PCR. Yu et al.4 showed that the RT-LAMP test could detect synthesized RNA equivalent to 10 copies of the SARS-CoV-2 virus, with a clinical sensitivity of 97.6%. In addition, RT-LAMP test provides results in 40 min. This may be a useful method for shortening the preoperative waiting time. Overall, although COVID-19 RT-PCR screening significantly prolonged preoperative waiting time, it did not contribute to postoperative mortality in our study. However, it is established that aortic type A dissection is the most dangerous cardiovascular disease, which can occur with pericardial tamponade, arterial rupture, or coronary myocardial infarction, leading to patient mortality. Furthermore, even if the patient does not die, delayed surgery leads to impaired ischemic function, which affects patient recovery. Therefore, it is necessary to devise alternatives for prompt transfer to the operating room, which will facilitate early surgical intervention.", "Although COVID-19 RT-PCR screening increases the preoperative waiting time, it does not contribute to postoperative mortality. To prevent nosocomial infections, it is advisable to perform RT-PCR before surgery. However, some patients with poor preoperative condition can suffer adverse events, and precautions, such as preoperative antihypertensive rest, may be necessary to prevent cardiac tamponade rupture." ]
[ "intro", null, "results", "discussion", null ]
[ "Acute type A aortic dissection", "COVID-19 screening", "nosocomial", "RT-PCR" ]
Introduction: To prevent nosocomial COVID-19 infection, screening tests, such as reverse transcription polymerase chain reaction (RT-PCR), are necessary at the time of admission. However, in emergency diseases requiring cardiovascular surgery, the time from admission to the start of surgery is crucial. The Japanese Surgical Association has suggested that if COVID-19 cannot be ruled out before emergency surgery, surgery should be performed after waiting for the maximum possible time for the RT-PCR results. However, in cases where the patient's preoperative condition is critical, our hospital policy is to perform the surgery in a negative pressure room while wearing personal protective equipment. Our hospital provides a 24-h RT-PCR service, but results are released after an hour. After November 2020, even emergency surgeries were postponed until the patients' RT-PCR results were available. In this study, we compared the pre- and postoperative outcomes in patients who underwent acute type A aortic dissection emergency surgery before and after implementing the RT-PCR screening for all patients. Material and methods: We compared the pre- and postoperative results of 105 patients who underwent emergency surgery for acute type A aortic dissection from January 2019 to October 2020 (Group I) and 109 patients who underwent the same procedure from November 2020 to March 2022 (Group II), only after preoperative COVID-19 RT-PCR results were available. Basically, all patients with acute type A aortic dissection are treated with emergency surgery in our hospital (if they do not agree to surgery, they are treated conservatively). The flow of patient selection is shown in Figure 1. To adjust for confounding factors other than preoperative RT-PCR testing, we ensured that the surgical team (including the cardiovascular surgery medical specialist) and surgical procedure used for both groups were identical. In our hospital, we performed ascending aorta replacement if the entry tear is in the ascending aorta and total arch replacement if it is in the aortic arch. If the dissection involves a coronary artery, coronary artery bypass surgery is added. In moderate or severe cases of aortic regurgitation, aortic valve replacement is performed simultaneously. In cases of aortic root aneurysm, Bentall's procedure is also performed. If the aortic leaflet quality and shape are basically normal and only the sinus expands by more than 4.5 cm, we consider David's surgery. In this study, we also defined cardiac tamponade as a case of shock vitality in the presence of hemopericardium due to pericardial membrane collapse. We evaluated the time from arrival at the hospital to admission to the operating room, preoperative cardiac tamponade, postoperative chest reopening, respiratory failure, cerebral neuropathy, mediastinitis, and mortality. Respiratory failure was defined as the need for tracheal intubation for more than 1 week after surgery. Neurological disorders were defined as abnormal neurological or imaging findings within 1 week after surgery. Mediastinitis was defined as an infection extending down the sternum. Fisher's exact test was used to compare the variables between the two groups. Statistically significant differences were considered at p < 0.05. Patient selection chart. The number of patients with type A acute aortic dissection who presented to our hospital between January 2019 and March 2022 was 214. 
Eleven patients who refused surgery because of their advanced age or hindrance in their activities of daily living were excluded from the study while the rest were divided into two groups (Group I, 105 patients; Group II, 109 patients) as part of the study. Results: Groups I and II included 105 (49 men, 56 women) and 109 (51 men, 58 women) patients, respectively. Although it is well known that the proportion of men with acute type A dissection is about 2–3 times higher than that of women, the proportion of men and women in this study was similar. The results of the pre- and postoperative findings are presented in Table 1. No patient tested positive for COVID-19 preoperatively. There were no nosocomial infections with COVID-19 during the observation period. The average waiting time from arrival at the hospital to admission to the operating room was 36 and 81 min in Groups I and II, respectively. Preoperative ruptured cardiac tamponade was observed in 28 (27%) patients in Group I and 23 (21%) patients in Group II. Fisher's exact test showed that the preoperative waiting time associated with RT-PCR testing did not contribute to the ruptured cardiac tamponade in Group II patients (p = 0.42). No cases of cardiac tamponade occurred during the preoperative waiting period in Group II. Seven patients in Group I and 13 patients in Group II were on catecholamine preoperatively. There was no significant difference between the two groups (p = 0.24). Catecholamine was administered to some patients with cardiac tamponade to allow a certain degree of hypotension and prevent the progression of cardiac tamponade if the patients maintained consciousness, and there was no progression of organ ischemia. In Group I and Group II, 87 and 79 patients underwent ascending aortic artery replacement, respectively, 8 and 20 patients underwent ascending arch aortic replacement, respectively, 2 and no patients underwent redo ascending aortic artery replacement, respectively. In each group, four patients underwent coronary artery bypass surgery in addition to ascending aortic artery replacement. Three cases of Bentall surgery and two cases of ascending aortic artery replacement were performed, while ascending aortic artery replacement plus aortic valve replacement was performed in one and four cases, respectively. Regarding post-surgical complications, bleeding (reopened chest) occurred in two patients in Group I (1.9%) and six patients in Group II (5.5%), with no significant difference between the two groups (p = 0.28). Respiratory failure was observed in 14 patients (13%) in Group I and six patients (5.5%) in Group II, but the difference between the two groups was not significant (p = 0.06). A total of 15 (14%) patients in Group I and 10 patients (9.2%) in Group II developed cerebral neurological disorders, with no significant difference between the two groups (p = 0.29). The number of patients that developed mediastinitis was also not significantly different (p = 1) between Groups I (0.95%) and II (0.92%). The proportion of patients who died within 30 days post-surgery was 12% (13 patients) in each of Groups I and II, with no significant difference between the two groups (p = 1). Pre- and postoperative findings. The average time to enter the operating room has been delayed by 45 min owing to the mandatory COVID-19 PCR test. No significant increase in preoperative cardiac tamponade, preoperative catecholamine use, and postoperative complications was seen. 
Discussion: Considering the recent spread of COVID-19, it is necessary to control nosocomial infections. Since RT-PCR screening was implemented, there has been no nosocomial COVID-19 transmission in our hospital from patients to medical staff, suggesting that RT-PCR COVID-19 testing at the time of scheduled admission may be effective in controlling nosocomial infections. However, RT-PCR testing before emergency admission may not be available at some facilities. Despite the usefulness of RT-PCR screening, the prognosis of patients with type A acute aortic dissection, who must wait for an hour for RT-PCR test results, needs to be examined. Furthermore, antihypertensive rest and analgesia are important for managing any preoperative complications.1 In our hospital, intravenous calcium channel blockers are used to strictly maintain systolic blood pressure below 120 mmHg, while fentanyl, an analgesic, is administered continuously for pain relief. However, sudden changes may occur during the waiting period, including cardiac arrest associated with (1) acute myocardial infarction due to closure of the coronary artery as the aortic dissection progresses, (2) cardiac tamponade due to rupture of the ascending aorta, and (3) rupture of the descending or abdominal aorta due to vulnerability associated with the dissection.2,3 To date, we have not experienced a case of cardiac arrest in our hospital while waiting for COVID-19 RT-PCR test results. Previous studies have shown that the one-step reverse transcription loop-mediated isothermal amplification (RT-LAMP) test is comparable to RT-PCR. Yu et al.4 showed that the RT-LAMP test could detect synthesized RNA equivalent to 10 copies of the SARS-CoV-2 virus, with a clinical sensitivity of 97.6%. In addition, RT-LAMP test provides results in 40 min. This may be a useful method for shortening the preoperative waiting time. Overall, although COVID-19 RT-PCR screening significantly prolonged preoperative waiting time, it did not contribute to postoperative mortality in our study. However, it is established that aortic type A dissection is the most dangerous cardiovascular disease, which can occur with pericardial tamponade, arterial rupture, or coronary myocardial infarction, leading to patient mortality. Furthermore, even if the patient does not die, delayed surgery leads to impaired ischemic function, which affects patient recovery. Therefore, it is necessary to devise alternatives for prompt transfer to the operating room, which will facilitate early surgical intervention. Conclusion: Although COVID-19 RT-PCR screening increases the preoperative waiting time, it does not contribute to postoperative mortality. To prevent nosocomial infections, it is advisable to perform RT-PCR before surgery. However, some patients with poor preoperative condition can suffer adverse events, and precautions, such as preoperative antihypertensive rest, may be necessary to prevent cardiac tamponade rupture.
Background: Since November 2020, all patients undergoing emergency surgery at our hospital have been subjected to preoperative reverse transcription polymerase chain reaction (RT-PCR) screening to prevent nosocomial COVID-19 infection, with admission to the operating room requiring a negative result. Herein, we compared the pre- and postoperative outcomes of acute type A aortic dissection surgery before and after implementing the RT-PCR screening for all patients. Methods: We compared the postoperative results of 105 patients who underwent acute type A aortic dissection emergency surgery from January 2019 to October 2020 (Group I) and 109 patients who underwent the surgery following RT-PCR screening from November 2020 to March 2022 (Group II). Results: The average waiting time from arrival at the hospital to admission to the operating room was 36 and 81 min in Groups I and II, respectively. Ruptured cardiac tamponade was observed preoperatively in 26.6% and 21.1% of Groups I and II patients, respectively. The preoperative waiting time due to RT-PCR screening did not contribute to the cardiac tamponade. Surgical complications such as bleeding (reopened chest), respiratory failure, cerebral neuropathy, or mediastinitis did not increase significantly. The number of deaths within 30 days after surgery (13 in each of Groups I and II) showed no significant difference between the groups. There were no cases of nosocomial COVID-19 infections. Conclusions: Preoperative COVID-19 screening is an important method to prevent nosocomial infections. The associated waiting time did not affect the number of preoperative ruptures, postoperative complications, or mortality.
Introduction: To prevent nosocomial COVID-19 infection, screening tests, such as reverse transcription polymerase chain reaction (RT-PCR), are necessary at the time of admission. However, in emergency diseases requiring cardiovascular surgery, the time from admission to the start of surgery is crucial. The Japanese Surgical Association has suggested that if COVID-19 cannot be ruled out before emergency surgery, surgery should be performed after waiting for the maximum possible time for the RT-PCR results. However, in cases where the patient's preoperative condition is critical, our hospital policy is to perform the surgery in a negative pressure room while wearing personal protective equipment. Our hospital provides a 24-h RT-PCR service, but results are released after an hour. After November 2020, even emergency surgeries were postponed until the patients' RT-PCR results were available. In this study, we compared the pre- and postoperative outcomes in patients who underwent acute type A aortic dissection emergency surgery before and after implementing the RT-PCR screening for all patients. Conclusion: Although COVID-19 RT-PCR screening increases the preoperative waiting time, it does not contribute to postoperative mortality. To prevent nosocomial infections, it is advisable to perform RT-PCR before surgery. However, some patients with poor preoperative condition can suffer adverse events, and precautions, such as preoperative antihypertensive rest, may be necessary to prevent cardiac tamponade rupture.
Background: Since November 2020, all patients undergoing emergency surgery at our hospital have been subjected to preoperative reverse transcription polymerase chain reaction (RT-PCR) screening to prevent nosocomial COVID-19 infection, with admission to the operating room requiring a negative result. Herein, we compared the pre- and postoperative outcomes of acute type A aortic dissection surgery before and after implementing the RT-PCR screening for all patients. Methods: We compared the postoperative results of 105 patients who underwent acute type A aortic dissection emergency surgery from January 2019 to October 2020 (Group I) and 109 patients who underwent the surgery following RT-PCR screening from November 2020 to March 2022 (Group II). Results: The average waiting time from arrival at the hospital to admission to the operating room was 36 and 81 min in Groups I and II, respectively. Ruptured cardiac tamponade was observed preoperatively in 26.6% and 21.1% of Groups I and II patients, respectively. The preoperative waiting time due to RT-PCR screening did not contribute to the cardiac tamponade. Surgical complications such as bleeding (reopened chest), respiratory failure, cerebral neuropathy, or mediastinitis did not increase significantly. The number of deaths within 30 days after surgery (13 in each of Groups I and II) showed no significant difference between the groups. There were no cases of nosocomial COVID-19 infections. Conclusions: Preoperative COVID-19 screening is an important method to prevent nosocomial infections. The associated waiting time did not affect the number of preoperative ruptures, postoperative complications, or mortality.
1,823
301
[ 457, 67 ]
5
[ "patients", "rt", "surgery", "aortic", "pcr", "group", "rt pcr", "preoperative", "ii", "covid 19" ]
[ "preoperative covid 19", "19 preoperatively nosocomial", "rt pcr surgery", "emergency surgery implementing", "aortic dissection wait" ]
null
Keywords: Acute type A aortic dissection | COVID-19 screening | nosocomial | RT-PCR
MeSH terms: Aortic Dissection | COVID-19 | Cardiac Tamponade | Cross Infection | Humans | Postoperative Complications | Retrospective Studies | Treatment Outcome | Waiting Lists
Low expression of chloride channel accessory 1 predicts a poor prognosis in colorectal cancer.
25603912
Chloride channel accessory 1 (CLCA1) is a CLCA protein that plays a functional role in regulating the differentiation and proliferation of colorectal cancer (CRC) cells. Here we investigated the relationship between the level of CLCA1 and the prognosis of CRC.
BACKGROUND
First, the level of CLCA1 was detected quantitatively in normal and cancerous colonic epithelial tissues with immunohistochemistry. Next, the correlations between CLCA1 expression, pathological tumor features, and the overall survival rate of patients were analyzed. Finally, 3 publicly available data sets from the Gene Expression Omnibus were examined: normal colonic tissue versus early CRC (GSE4107), primary CRC versus metastatic lesions (GSE28702), and low chromosomal instability versus high chromosomal instability (GSE30540).
METHODS
The expression of CLCA1 was decreased markedly in tumor specimens. CLCA1 expression was correlated significantly with the histological grade (P < .01) and lymph node metastasis (P < .01). A significantly poorer overall survival rate was found in patients with low levels of CLCA1 expression versus those with high expression levels (P < .05). The results confirmed that the low expression of CLCA1 in CRC was highly associated with tumorigenesis, metastasis, and high chromosomal instability. In addition, the loss of CLCA1 disrupted the differentiation of human colon adenocarcinoma cells (Caco-2) in vitro.
RESULTS
These findings suggest that CLCA1 levels may be a potential predictor of prognosis in primary human CRC. Low expression of CLCA1 predicts disease recurrence and lower survival, and this has implications for the selection of patients most likely to need and benefit from adjuvant chemotherapy.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Biomarkers, Tumor", "Blotting, Western", "Cadherins", "Chloride Channels", "Colorectal Neoplasms", "Down-Regulation", "Female", "Gene Expression Regulation, Neoplastic", "Humans", "Immunohistochemistry", "Kaplan-Meier Estimate", "Ki-67 Antigen", "Male", "Middle Aged", "Neoplasm Grading", "Neoplasm Staging", "PTEN Phosphohydrolase", "Predictive Value of Tests", "Prognosis", "Tumor Suppressor Protein p53" ]
4654332
Introduction
The chloride channel accessory (CLCA) family (also called the calcium-sensitive chloride conductance protein family) comprises 4 genes in humans and at least 6 genes in mice.1–5 All members of the CLCA family map to the same region on chromosome 1p31-p22 and share a high degree of homology in size, sequence, and predicted structure, but they differ significantly in their tissue distribution. The human genome encodes 3 functional CLCA proteins, including CLCA1, CLCA2, and CLCA4. CLCA2 has been identified to be a p53-inducible inhibitor of cell proliferation and to be a marker of differentiated epithelium that is downregulated with tumor progression.1,6 CLCA4 is downregulated in breast cancer cells, and low expression of CLCA4 indicates a poor prognosis for patients with breast cancer.7 CLCA4 and CLCA1 are expressed in intestinal epithelium.3,8–10 CLCA1 and CLCA4 may function as tumor suppressors and are associated negatively with tumorigenicity.11 We have shown that CLCA1 contributes to differentiation and proliferation inhibition in colon cell lines,10 but the role of CLCA1 in the prognosis of patients with colorectal cancer (CRC) remains unclear.

In cancer progression, individual cells undergo complex and important interactions with the extracellular environment through transmembrane signal transduction. These interactions regulate cancer cell proliferation, invasion, and metastasis. Ion channels are a crucial first step in this process, and they play emerging roles in carcinogenesis and tumor progression.11 For example, activation of the chloride current through specialized volume-regulated anion channels in response to cell swelling is one of the major mechanisms by which cells restore their volume after hypo-osmotic stress (regulatory volume decrease, RVD).12 This is important because there is a direct link between the apoptotic resistance conferred by the antiapoptotic Bcl-2 protein and the strengthening of RVD capability due to upregulation of the chloride current and swelling.13 Therefore, further investigations will elucidate important roles for ion channels in cancer development and progression and potentially establish ion channels as effective discriminative markers and therapeutic targets.14

We have reported that CLCA1 is expressed in differentiated, growth-arrested mammalian epithelial cells but is downregulated during tumor progression.10 We have identified CLCA1 as a regulator of the transition from proliferation to differentiation in Caco-2 cells. In this study, we have determined further that the expression of CLCA1 in human CRC intestinal tissue is associated with the primary tumor status (the degree of invasion of the intestinal wall), lymph node metastases, and the overall survival rate. CLCA1 also is correlated closely with tumor suppressor p53 and E-cadherin, which have been determined to influence the prognosis of CRC.15–18 Our findings suggest that the expression level of CLCA1 may predict disease relapse and outcomes for patients with CRC.
Results
Patients and Tumors
The characteristics of the 36 CRC patients in the study cohort are shown in Table 1. There were 11 women (30.6%) and 25 men (69.4%). The median age was 55.5 years (range, 25-80 years). In terms of the anatomical location of the tumors, 15 (41.7%) were in the ascending colon, 2 (5.6%) in the transverse colon, 3 (8.3%) in the left colon, 6 (16.7%) in the sigmoid colon, and 10 (27.8%) in the rectum. Twenty-six patients (72.2%) had moderately or poorly differentiated tumors, and the remaining 10 (27.8%) had well-differentiated cancers. The Dukes staging was as follows: A, 7 (19.4%); B, 18 (50%); C, 5 (13.9%); and D, 6 (16.7%).

Expression of CLCA1 and Clinical Grades of CRCs
The level of expression of CLCA1 with respect to tumor staging is shown in Table 2. There was a high level of CLCA1 expression in normal colonic epithelium in stark contrast to the tumor tissue (Fig. 1A). The mean percentage of CLCA1-positive cells was 88% in the normal samples (n = 6) and 54% in the tumor samples (n = 36). In addition, the expression pattern of CLCA1 was predominantly membranous and cytoplasmic in normal colonic epithelium, but this pattern was altered in the tumor area, which showed an absence of cytoplasmic and/or membranous staining (Fig. 1A). Furthermore, we analyzed the relationship between the level of expression of CLCA1 and the primary tumor status (T1-T4), lymph node status (N0 or N1/N2), Dukes stage (A-D), and histological grade (well, moderately, or poorly differentiated). This analysis showed that CLCA1 expression in noncancerous control mucosa samples was significantly higher than that in samples with tumors. There also was a significant difference between normal tissue and early CRC tissue (T1, P < .05). In addition, in the more advanced tumor stages (T3 and T4), the expression of CLCA1 was reduced in comparison with earlier stage tumors (T1/T2 vs T3/T4, P < .01; Fig. 1B and Table 2). Furthermore, CLCA1 expression levels in primary tumors were reduced significantly when the patients had positive lymph nodes (N1/N2, P < .01; Fig. 1C). In Dukes stage A and B tumors, there was much higher expression of CLCA1 in comparison with Dukes stage C and D tumors (P < .01; Fig. 1D and Table 2). An analysis by histological grade also showed that the expression of CLCA1 was reduced significantly in poorly differentiated tumors versus well-differentiated tumors (P < .01; Fig. 1E and Table 2). Our data indicate that low CLCA1 expression levels are associated with an advanced tumor stage, a less differentiated tumor histological grade, and metastases in regional lymph nodes.

Table 2. Expression Level of CLCA1 and Status of Primary Tumors. Abbreviation: CLCA1, chloride channel accessory 1. Expression levels were compared with the Pearson chi-square test.
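As a worked illustration of the Pearson chi-square comparison used for Table 2, the sketch below cross-tabulates dichotomized CLCA1 expression (high vs low) against tumor stage (T1/T2 vs T3/T4). The counts are hypothetical placeholders, not the study's data.

from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: rows = CLCA1 expression (high, low),
# columns = tumor stage (T1/T2, T3/T4). Replace with the real counts.
observed = [[10, 6],
            [2, 18]]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")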
Figure 1. The expression of CLCA1 correlates with tumor stage, histological grade, and lymph node metastasis in colorectal cancer. (A) CLCA1 in normal colonic tissue showed (i) preservation of high levels of expression (brown) on cell membranes and in the cytoplasm, (ii) reduced expression on membranes and in the cytoplasm in colon cancer tissue, (iii) reduced expression in the cytoplasm only in colon cancer tissue, and (iv) reduced expression in rectal cancer. CLCA1 expression was strongly associated with (B) tumor status, (C) lymph node metastasis, (D) Dukes staging, and (E) histological grade. Expression levels of CLCA1 were quantified as the ratio of positive cells to total cells. Data are presented as means and standard errors of the mean. CLCA1 indicates chloride channel accessory 1.

Reduced Expression of CLCA1 Is an Indicator of the Likelihood of Disease Relapse and Poorer Survival
The median disease-free survival (DFS) for all patients was 34.5 months. The postoperative median DFS for patients with high expression of CLCA1 was 40 months, whereas that of patients with low CLCA1 expression was 23 months. CRC patients with reduced CLCA1 expression had a higher risk of disease relapse and death than patients with high CLCA1 expression (P < .01; Table 2). Kaplan-Meier analysis was used to evaluate the correlation between the survival of patients with CRC and the level of expression of CLCA1. Patients were divided into 2 groups: CRC patients with high CLCA1 expression (>30%) and those with low CLCA1 expression (≤30%). Our data showed that the DFS for CRC patients whose tumors had preserved CLCA1 expression was significantly higher than the DFS for patients with low CLCA1 expression (P < .05; Fig. 2A). Kaplan-Meier analysis was also performed with stratification by the characteristics of the primary tumor and lymph node metastases. We classified patients into groups with T1/T2 (n = 12) and T3/T4 tumors (n = 24) because of the case numbers and the better prognosis of T1/T2 tumors versus T3/T4 tumors29 and of lymph node-negative versus lymph node-positive tumors. Our results also showed that the primary tumor stage was correlated with the DFS of CRC patients: the DFS of patients with T3/T4 tumors was reduced significantly in comparison with that of patients with T1/T2 tumors (P < .05; Fig. 2B). However, the lymph node status (N0 vs N1/N2) showed a difference in DFS with a P value of .06 (Fig. 2C). Ki-67 has been studied as a prognosticator for CRC,30 but we found no significant association between the expression levels of Ki-67 and the survival of CRC patients (Fig. 2D). Overall, our results demonstrated that the CLCA1 expression level was a prognostic factor for the survival of patients with CRC in the univariate analysis. Patients with CRC characterized by high levels of CLCA1 expression had a favorable prognosis, whereas patients with low CLCA1 expression had a poorer survival rate.

Figure 2. CLCA1 expression levels and disease-free survival for CRC patients. Kaplan-Meier curves of disease-free survival are shown for CRC patients stratified by (A) CLCA1 expression (P < .05), (B) tumor status (P < .05), (C) lymph node metastasis (P > .05), and (D) Ki-67 expression (P > .05). The differences between curves were analyzed with the log-rank test. CLCA1 indicates chloride channel accessory 1; CRC, colorectal cancer.
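The survival analysis described above (Kaplan-Meier curves stratified by the 30% CLCA1 cutoff and compared with a log-rank test) can be sketched with the lifelines package. The column names and the toy data frame below are assumptions for illustration, not the study data.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy cohort: disease-free survival in months, event flag (1 = relapse/death),
# and the fraction of CLCA1-positive cells per patient. Values are illustrative only.
df = pd.DataFrame({
    "dfs_months": [40, 52, 23, 18, 45, 30, 12, 48],
    "event": [0, 0, 1, 1, 0, 1, 1, 0],
    "clca1_fraction": [0.6, 0.8, 0.1, 0.2, 0.5, 0.25, 0.05, 0.7],
})
high = df[df["clca1_fraction"] > 0.30]   # high expression (>30% of cells stained)
low = df[df["clca1_fraction"] <= 0.30]   # low expression

# Fit one Kaplan-Meier estimator per group.
kmf_high = KaplanMeierFitter().fit(high["dfs_months"], event_observed=high["event"], label="CLCA1 high")
kmf_low = KaplanMeierFitter().fit(low["dfs_months"], event_observed=low["event"], label="CLCA1 low")
print(kmf_high.median_survival_time_, kmf_low.median_survival_time_)

# Compare the two curves with a log-rank test.
result = logrank_test(high["dfs_months"], low["dfs_months"],
                      event_observed_A=high["event"],
                      event_observed_B=low["event"])
print(f"log-rank p = {result.p_value:.4f}")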
Correlation Between CLCA1 and PTEN, p53, E-Cadherin, and Ki-67 in CRC Patients
Abnormal expression or mutations of PTEN, p53, E-cadherin, and Ki-67 are associated with a poor prognosis for patients with CRC.15,18,30–32 We therefore examined the expression of PTEN, p53, Ki-67, and E-cadherin in patients and compared the expression levels of CLCA1 with those of these 4 tumor-associated genes. We found that 16 of 28 CRC patients (57%) had high expression (>50% score) of Ki-67 in cancer cells. However, no correlation was found between the levels of CLCA1 and Ki-67 expression in the primary tumors (P = .29; Fig. 3A). We also investigated the correlation between the expression of PTEN and CLCA1 and found no association between the expression levels of these 2 molecules (P = .95; Fig. 3B). However, CLCA1 expression levels correlated strongly and positively with the levels of p53 and E-cadherin expression (P < .01; Fig. 3C,D). Downregulation of E-cadherin at the membrane indicates a poor outcome.15 p53 is a tumor suppressor in CRC; mutations of p53 are associated with worse survival, and normal levels of p53 are required for CRCs to respond to chemotherapy.16 These data further confirm that low expression of CLCA1 correlates strongly with a poor prognosis in CRC.

Figure 3. Correlation between the expression of CLCA1 and known prognostic markers for colorectal cancer. Expression levels of CLCA1, Ki-67, E-cad, PTEN, and p53 were analyzed through the percentages of positively staining cells. The correlations of CLCA1 expression with (A) Ki-67 (n = 28, R2 = 0.18, P = .29), (B) PTEN (n = 9, R2 = -0.02, P = .95), (C) E-cad (n = 12, R2 = 0.8, P = .0016), and (D) p53 (n = 15, R2 = 0.78, P = .0007) are shown; strong positive correlations between high CLCA1 expression and high E-cad and p53 expression levels are indicated. CLCA1 indicates chloride channel accessory 1; E-cad, E-cadherin; PTEN, phosphatase and tensin homolog.
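The marker correlations above can be reproduced with a rank correlation, which is the method named in the statistical analysis (Spearman). A minimal sketch follows; the per-patient percentage vectors are placeholders, not the study measurements.

from scipy.stats import spearmanr

# Percentages of positively staining cells per patient (placeholder values).
clca1 = [55, 70, 20, 10, 80, 35, 15, 60]
p53 = [50, 65, 25, 15, 75, 40, 10, 55]

rho, p = spearmanr(clca1, p53)
print(f"Spearman rho={rho:.2f}, p={p:.4f}")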
Confirmation of CLCA1 Expression in CRC With Microarray Data Sets
To further validate our data, we analyzed 3 public, independent microarray data sets from GEO. Our analysis showed that the expression level of CLCA1 was inhibited significantly in early CRC specimens (normal vs early CRC, P < .05; Fig. 4A). In comparison with nonmetastatic specimens, the expression of CLCA1 was downregulated significantly in metastatic CRC (P < .01; Fig. 4B). CIN-high tumors have shown the worst survival for CRC patients.27 Our results showed that CRC with the CIN-high phenotype had lower expression of CLCA1 than CRC with the CIN-low phenotype (P = .015; Fig. 4C). Therefore, the results from these 3 independent microarray sets were consistent with the results from our clinical analysis.

Figure 4. Publicly available microarray data sets used for validation. (A) In public microarray data set GSE4107, we analyzed the expression of CLCA1 in normal colon mucosa and CRC tissues; the expression of CLCA1 was inhibited significantly in early CRC patients. (B) In the GSE28702 microarray gene set, low expression of CLCA1 was associated with CRC metastasis. (C) The analysis of the GSE30540 microarray gene set showed that low CLCA1 expression was associated with CRC with a CIN-high signature, which indicated significantly poorer survival in comparison with a CIN-low signature. CIN indicates chromosomal instability; CLCA1, chloride channel accessory 1; CRC, colorectal cancer.
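This GEO validation step can be reproduced offline by downloading the already-normalized series matrix file for a data set such as GSE4107 and comparing the CLCA1 probe values between groups. The sketch below uses only pandas and SciPy; the file name, probe ID, and group assignment are placeholders that would come from the platform annotation and series metadata.

import pandas as pd
from scipy.stats import mannwhitneyu

# GEO series matrix files are tab-separated; metadata lines start with '!'.
expr = pd.read_csv("GSE4107_series_matrix.txt", sep="\t",
                   comment="!", index_col=0)

probe_id = "210107_at"  # placeholder: look up the CLCA1 probe for the platform
clca1 = expr.loc[probe_id]

# Sample-to-group assignment must come from the series metadata; hypothetical here.
normal_samples = clca1.iloc[:10]
tumor_samples = clca1.iloc[10:]
stat, p = mannwhitneyu(normal_samples, tumor_samples)
print(f"CLCA1 normal vs early CRC: p={p:.4f}")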
CLCA1 Regulates the Differentiation of Human CRC Cells
Culturing to confluence induces the spontaneous differentiation of human colon adenocarcinoma cells (Caco-2).11,33–35 Using this model, we further investigated the functional role of CLCA1 in the differentiation of Caco-2 cells. First, we detected the expression of CLCA1 and the differentiation marker E-cadherin in a confluent culture. We found that the expression of both CLCA1 and E-cadherin increased in a time-dependent manner (Fig. 5A). These data suggest that the expression of CLCA1 may contribute to the spontaneous differentiation of Caco-2 cells. Next, we used a stealth siRNA (siRNACLCA1) to knock down the expression of CLCA1 in Caco-2 cells. After 72 hours of transfection, cells were tested for the expression of CLCA1, ALPI, and E-cadherin by western blotting. We found that knockdown of CLCA1 significantly inhibited the expression levels of ALPI and E-cadherin (Fig. 5B).10 These results indicate that CLCA1 expression plays a key role in the regulation of the spontaneous differentiation of Caco-2 cells.

Figure 5. CLCA1 contributes to the differentiation of colorectal cancer cells. Human colon adenocarcinoma cells (Caco-2) were cultured to confluence for 10 days. (A) The expression of CLCA1 was upregulated by day 4 of confluent culture and persisted for up to 10 days of culturing. The mature epithelial marker E-cad increased in a time-dependent manner. The histogram shows the relative intensity of E-cad expressed as a ratio with respect to the GAPDH control. (B) Caco-2 cells were transfected transiently with 150 nM siRNACLCA1 and blotted for CLCA1, E-cad, and ALPI. siRNACLCA1 effectively inhibited CLCA1 and downregulated the expression of E-cad and ALPI. The histogram shows the relative intensity of CLCA1, E-cad, and ALPI expressed as a ratio with respect to the GAPDH control. Data are presented as means and standard errors of the mean. All results were based on 3 independent experiments. ALPI indicates intestinal alkaline phosphatase; CLCA1, chloride channel accessory 1; Cont, control; E-cad, E-cadherin; GAPDH, glyceraldehyde 3-phosphate dehydrogenase; siRNA, small interfering RNA.
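The western blot quantification in the Figure 5 caption (band intensity expressed as a ratio to the GAPDH loading control, averaged over 3 independent experiments) amounts to a simple normalization. A minimal NumPy sketch with made-up densitometry values:

import numpy as np

# Raw band intensities from 3 independent experiments (arbitrary units, made up).
clca1 = np.array([1250.0, 980.0, 1100.0])
gapdh = np.array([2000.0, 1900.0, 2100.0])

ratio = clca1 / gapdh                          # normalize each blot to its loading control
mean = ratio.mean()
sem = ratio.std(ddof=1) / np.sqrt(len(ratio))  # standard error of the mean
print(f"CLCA1/GAPDH = {mean:.2f} +/- {sem:.2f} (mean +/- SEM, n={len(ratio)})")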
[ "Introduction", "Material and Methods", "Patients", "Clinical Follow-Up", "Histology and Colon Cancer Staging", "Immunohistochemistry", "Knockdown of CLCA1 and Western Blotting", "Quantitative Analysis", "Microarray Data Analysis", "Statistical Analysis", "Patients and Tumors", "Expression of CLCA1 and Clinical Grades of CRCs", "Reduced Expression of CLCA1 Is an Indicator of the Likelihood of Disease Relapse and Poorer Survival", "Correlation Between CLCA1 and PTEN, p53, E-Cadherin, and Ki-67 in CRC Patients", "Confirmation of CLCA1 Expression in CRC With Microarray Data Sets", "CLCA1 Regulates the Differentiation of Human CRC Cells" ]
[ "The chloride channel accessory (CLCA) family (also called the calcium-sensitive chloride conductance protein family) comprises 4 genes in humans and at least 6 genes in mice.1–5 All members of the CLCA family map to the same region on chromosome 1p31-p22 and share a high degree of homology in size, sequence, and predicted structure, but they differ significantly in their tissue distribution. The human genome encodes 3 functional CLCA proteins, including CLCA1, CLCA2, and CLCA4. CLCA2 has been identified to be a p53-inducible inhibitor of cell proliferation and to be a marker of differentiated epithelium that is downregulated with tumor progression.1,6 CLCA4 is downregulated in breast cancer cells, and low expression of CLCA4 indicates a poor prognosis for patients with breast cancer.7 CLCA4 and CLCA1 are expressed in intestinal epithelium.3,8–10 CLCA1 and CLCA4 may function as tumor suppressors and are associated negatively with tumorigenicity.11 We have shown that CLCA1 contributes to differentiation and proliferation inhibition in colon cell lines,10 but the role of CLCA1 in the prognosis of patients with colorectal cancer (CRC) remains unclear.\nIn cancer progression, individual cells undergo complex and important interactions with the extracellular environment through transmembrane signal transduction. These interactions regulate cancer cell proliferation, invasion, and metastasis. Ion channels are a crucial first step in this process, and they play emerging roles in carcinogenesis and tumor progression.11 For example, activation of the chloride current through specialized volume-regulated anion channels in response to cell swelling is one of the major mechanisms by which cells restore their volume after hypo-osmotic stress (regulatory volume decrease, RVD).12 This is important because there is a direct link between the apoptotic resistance conferred by the antiapoptotic Bcl-2 protein and the strengthening of RVD capability due to upregulation of the chloride current and swelling.13 Therefore, further investigations will elucidate important roles for ion channels in cancer development and progression and potentially establish ion channels as effective discriminative markers and therapeutic targets.14\nWe have reported that CLCA1 is expressed in differentiated, growth-arrested mammalian epithelial cells but is downregulated during tumor progression.10 We have identified CLCA1 as a regulator of the transition from proliferation to differentiation in Caco-2 cells. In this study, we have determined further that the expression of CLCA1 in human CRC intestinal tissue is associated with the primary tumor status (the degree of invasion of the intestinal wall), lymph node metastases, and the overall survival rate. CLCA1 also is correlated closely with tumor suppressor p53 and E-cadherin, which has been determined to influence the prognosis of CRC.15–18 Our findings suggest that the expression level of CLCA1 may predict disease relapse and outcomes for patients with CRC.", " Patients Thirty-six patients were diagnosed with CRC (26 with colon cancer and 10 with rectal cancer; Table 1), which was classified with the International Union Against Cancer TNM staging system and the Dukes staging system. All patients underwent surgical resection of their tumors at the 309th Hospital in Beijing, China. Ethical approval for the study was granted by the 309th Hospital's ethics committee. 
Informed written consent was obtained from all participants involved in the study.\nClinical Characteristics of Patients With Colorectal Cancer\nThe key clinical characteristics of the patients are summarized in Table 1. Normal specimens were obtained from 2 sources: 2 samples came from normal colon tissue of noncancer patients, and 4 samples were obtained from adjacent, grossly normal-appearing tissue taken at least 10 cm away from the cancer. This was in accordance with other research.19 All tumor samples were obtained from surgical resection. After surgical resection of their tumors, patients were followed up with clinical examinations, abdominal ultrasonography or abdominal computer tomography scans, and carcinoembryonic antigen measurements every 6 months for the first 5 years. Thereafter, these investigations were performed annually until 5 years after the initial treatment. The patients who died as a result of any postoperative complications or non–cancer-related diseases were excluded from the survival analysis. A patient with a family history suggestive of hereditary nonpolyposis colon cancer syndrome was excluded. None of the patients included in this study had chemotherapy or radiotherapy before surgery. In all, 36 CRC patients were included in the survival analysis. Adjuvant chemotherapy was given to all CRC patients except those with stage I disease (cancer had not invaded the outermost layers of the colon or rectum and had not spread to lymph nodes or distant sites). The mFOLFOX6 chemotherapy regimen consisted of a 2-hour intravenous infusion of oxaliplatin (85 mg/m2) and folinic acid (400 mg/m2), which was followed by an intravenous bolus injection of 5-fluorouracil (400 mg/m2) plus a 46-hour intravenous infusion of 5-fluorouracil (2400 mg/m2); this was repeated every 2 weeks. After 4 cycles of therapy, all lesions were assessed with computer tomography.\nThirty-six patients were diagnosed with CRC (26 with colon cancer and 10 with rectal cancer; Table 1), which was classified with the International Union Against Cancer TNM staging system and the Dukes staging system. All patients underwent surgical resection of their tumors at the 309th Hospital in Beijing, China. Ethical approval for the study was granted by the 309th Hospital's ethics committee. Informed written consent was obtained from all participants involved in the study.\nClinical Characteristics of Patients With Colorectal Cancer\nThe key clinical characteristics of the patients are summarized in Table 1. Normal specimens were obtained from 2 sources: 2 samples came from normal colon tissue of noncancer patients, and 4 samples were obtained from adjacent, grossly normal-appearing tissue taken at least 10 cm away from the cancer. This was in accordance with other research.19 All tumor samples were obtained from surgical resection. After surgical resection of their tumors, patients were followed up with clinical examinations, abdominal ultrasonography or abdominal computer tomography scans, and carcinoembryonic antigen measurements every 6 months for the first 5 years. Thereafter, these investigations were performed annually until 5 years after the initial treatment. The patients who died as a result of any postoperative complications or non–cancer-related diseases were excluded from the survival analysis. A patient with a family history suggestive of hereditary nonpolyposis colon cancer syndrome was excluded. None of the patients included in this study had chemotherapy or radiotherapy before surgery. 
In all, 36 CRC patients were included in the survival analysis. Adjuvant chemotherapy was given to all CRC patients except those with stage I disease (cancer had not invaded the outermost layers of the colon or rectum and had not spread to lymph nodes or distant sites). The mFOLFOX6 chemotherapy regimen consisted of a 2-hour intravenous infusion of oxaliplatin (85 mg/m2) and folinic acid (400 mg/m2), which was followed by an intravenous bolus injection of 5-fluorouracil (400 mg/m2) plus a 46-hour intravenous infusion of 5-fluorouracil (2400 mg/m2); this was repeated every 2 weeks. After 4 cycles of therapy, all lesions were assessed with computer tomography.\n Clinical Follow-Up All patients were prospectively followed up according to the schedule described previously. The mean follow-up time in this study was 46 months (range, 7-52 months). The status of each patient was determined at the date of the last follow-up or at the end of a 5-year follow-up period, and if they were deceased, the cause of death was ascertained from the medical records and/or death certificate information. At the initial diagnosis, 6 patients with CRC had metastases.\nAll patients were prospectively followed up according to the schedule described previously. The mean follow-up time in this study was 46 months (range, 7-52 months). The status of each patient was determined at the date of the last follow-up or at the end of a 5-year follow-up period, and if they were deceased, the cause of death was ascertained from the medical records and/or death certificate information. At the initial diagnosis, 6 patients with CRC had metastases.\n Histology and Colon Cancer Staging Resected tumors were obtained immediately after surgical resection and were fixed in 10% pH-neutral formalin and embedded in paraffin. Paraffin-embedded tissue sections (5 µm thick) were cut serially and used for hematoxylin-eosin staining and immunohistochemical analysis. The patients were staged according to the International Union Against Cancer TNM staging system and the Dukes staging system (Table 1).20 The Dukes staging system (used for colon cancer staging originally) was defined as follows: (A) tumor in the mucosa, (B) tumor in the muscle layer, (C) involvement of lymph nodes, and (D) distant metastases. Tumors were classified also by their degree of histological differentiation: (well differentiated [I]), moderately differentiated [II], or poorly differentiated [III]), the presence or absence of perineural invasion, the presence or absence of venous emboli, and the number of lymph nodes involved by the tumor.\nResected tumors were obtained immediately after surgical resection and were fixed in 10% pH-neutral formalin and embedded in paraffin. Paraffin-embedded tissue sections (5 µm thick) were cut serially and used for hematoxylin-eosin staining and immunohistochemical analysis. The patients were staged according to the International Union Against Cancer TNM staging system and the Dukes staging system (Table 1).20 The Dukes staging system (used for colon cancer staging originally) was defined as follows: (A) tumor in the mucosa, (B) tumor in the muscle layer, (C) involvement of lymph nodes, and (D) distant metastases. 
Tumors were classified also by their degree of histological differentiation: (well differentiated [I]), moderately differentiated [II], or poorly differentiated [III]), the presence or absence of perineural invasion, the presence or absence of venous emboli, and the number of lymph nodes involved by the tumor.\n Immunohistochemistry After paraffin embedding, the tissue was cut into 5-µm-thick sections and mounted onto Superfrost Plus slides. The immunohistochemistry was performed on Leica Bond Max and included dewaxing and heat-induced epitope retrieval with Epitope Retrieval 1 (Leica Biosystems) at 100°C. Peroxidase block from the Bond Refine Detection System was used. The primary antibody CLCA1 (polyclone; Santa Cruz) was applied at a 1:100 dilution for 30 minutes at room temperature. The staining was completed with the Bond Polymer Refine Detection system. The Bond Polymer Refine Detection system contained a peroxide block, post primary, a polymer reagent, 3,3′-diaminobenzidine chromogen, and a hematoxylin counterstain (Leica Biosystems). p53, E-cadherin, phosphatase and tensin homolog (PTEN), and Ki-67 expression was evaluated according to the proportion of positively stained tumor cells.\nAfter paraffin embedding, the tissue was cut into 5-µm-thick sections and mounted onto Superfrost Plus slides. The immunohistochemistry was performed on Leica Bond Max and included dewaxing and heat-induced epitope retrieval with Epitope Retrieval 1 (Leica Biosystems) at 100°C. Peroxidase block from the Bond Refine Detection System was used. The primary antibody CLCA1 (polyclone; Santa Cruz) was applied at a 1:100 dilution for 30 minutes at room temperature. The staining was completed with the Bond Polymer Refine Detection system. The Bond Polymer Refine Detection system contained a peroxide block, post primary, a polymer reagent, 3,3′-diaminobenzidine chromogen, and a hematoxylin counterstain (Leica Biosystems). p53, E-cadherin, phosphatase and tensin homolog (PTEN), and Ki-67 expression was evaluated according to the proportion of positively stained tumor cells.\n Knockdown of CLCA1 and Western Blotting Caco-2 cells (5×105; American Type Culture Collection) were cultured on collagen I–precoated 6-well plates for the indicated time. Cells were lysed with a cell lysis buffer (Sigma-Aldrich) and a protease inhibitor cocktail (Thermo Scientific) for western blot analysis. Knockdown of CLCA1 in Caco-2 cells has been described previously.10 Briefly, a stealth RNAi small interfering RNA (siRNA) duplex with sense-strand sequences (5′-CAAUGCUACCCUGCCUCC AAUUACA-3′; Invitrogen, United Kingdom) was used to specifically target the CLCA1 gene. Caco-2 cells were transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol with a final siRNA concentration of 150 nM. Nontargeting negative control siRNA was used for non–sequence-specific effects of these molecules. After 72 hours, cells were lysed for western blot analysis. The primary antibodies used for western blotting included anti-CLCA1 (1:1000; Santa Cruz), anti–E-cadherin (1:5000; BD), anti–intestinal alkaline phosphatase (anti-ALPI; 1:2000, Novus), and anti–glyceraldehyde 3-phosphate dehydrogenase (1:50,000; Santa Cruz). Membranes were incubated with the relevant primary antibodies overnight at 4°C. 
A secondary antibody with horseradish peroxidase (1:5000; Sigma-Aldrich, United Kingdom) was used, and the immunoblots were detected with WesternBright ECL (AGTC Bioproducts).\nCaco-2 cells (5×105; American Type Culture Collection) were cultured on collagen I–precoated 6-well plates for the indicated time. Cells were lysed with a cell lysis buffer (Sigma-Aldrich) and a protease inhibitor cocktail (Thermo Scientific) for western blot analysis. Knockdown of CLCA1 in Caco-2 cells has been described previously.10 Briefly, a stealth RNAi small interfering RNA (siRNA) duplex with sense-strand sequences (5′-CAAUGCUACCCUGCCUCC AAUUACA-3′; Invitrogen, United Kingdom) was used to specifically target the CLCA1 gene. Caco-2 cells were transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol with a final siRNA concentration of 150 nM. Nontargeting negative control siRNA was used for non–sequence-specific effects of these molecules. After 72 hours, cells were lysed for western blot analysis. The primary antibodies used for western blotting included anti-CLCA1 (1:1000; Santa Cruz), anti–E-cadherin (1:5000; BD), anti–intestinal alkaline phosphatase (anti-ALPI; 1:2000, Novus), and anti–glyceraldehyde 3-phosphate dehydrogenase (1:50,000; Santa Cruz). Membranes were incubated with the relevant primary antibodies overnight at 4°C. A secondary antibody with horseradish peroxidase (1:5000; Sigma-Aldrich, United Kingdom) was used, and the immunoblots were detected with WesternBright ECL (AGTC Bioproducts).\n Quantitative Analysis Slides were assessed with light microscopy. All slides were analyzed by 2 independent observers (L.C. and J.P.) who were blinded to the clinical data. For each colon carcinoma, staining was evaluated on separate slides, which included the core and the invasive edge of the tumor, respectively. Slides were examined at ×400 (40× objective and 10× ocular) and were analyzed via the counting of all cells present in them (at least 5000 cells). All of the slides were reviewed independently by each researcher twice. Discrepancies between investigators (<10% of the cases) required a third joint observation with a conclusive agreement.21 The expression of CLCA1 was scored as the ratio of the number of positive cells to the total number of cells (stained cells/total number evaluated). The samples were considered CLCA1-positive if any positive staining was detected unless there were only rare, isolated single cells. To translate a continuous variable of CLCA1 expression into a clinical decision, it is necessary to stratify patients into 2 groups that may have different prognoses. Currently, there is no standard method or standard software for biomarker cutoff determination.22 However, our approach was to use X-tile 3.6 plot software (developed by Camp et al23), which provides a single, global assessment of every possible way of dividing a population into low-, medium-, and high-level marker expression. Using an X-tile plot, we determined the significant, optimal cut point to be 30% (P = .0224). Therefore, we used 30% CLCA1 positive staining as a cutoff value to unequivocally categorize cases into 2 groups: high expression of CLCA1 (>30% of cells stained) and low expression of CLCA1 (none or ≤30% of cells stained). This was in accordance with other research.21,24\nSlides were assessed with light microscopy. All slides were analyzed by 2 independent observers (L.C. and J.P.) who were blinded to the clinical data. 
For each colon carcinoma, staining was evaluated on separate slides, which included the core and the invasive edge of the tumor, respectively. Slides were examined at ×400 (40× objective and 10× ocular) and were analyzed via the counting of all cells present in them (at least 5000 cells). All of the slides were reviewed independently by each researcher twice. Discrepancies between investigators (<10% of the cases) required a third joint observation with a conclusive agreement.21 The expression of CLCA1 was scored as the ratio of the number of positive cells to the total number of cells (stained cells/total number evaluated). The samples were considered CLCA1-positive if any positive staining was detected unless there were only rare, isolated single cells. To translate a continuous variable of CLCA1 expression into a clinical decision, it is necessary to stratify patients into 2 groups that may have different prognoses. Currently, there is no standard method or standard software for biomarker cutoff determination.22 However, our approach was to use X-tile 3.6 plot software (developed by Camp et al23), which provides a single, global assessment of every possible way of dividing a population into low-, medium-, and high-level marker expression. Using an X-tile plot, we determined the significant, optimal cut point to be 30% (P = .0224). Therefore, we used 30% CLCA1 positive staining as a cutoff value to unequivocally categorize cases into 2 groups: high expression of CLCA1 (>30% of cells stained) and low expression of CLCA1 (none or ≤30% of cells stained). This was in accordance with other research.21,24\n Microarray Data Analysis The microarray data sources were obtained from the Gene Expression Omnibus (GEO).25 Three data sets (series accession numbers GSE4107, GSE28702, and GSE30540) were not subjected to any additional normalization because all had been normalized when we obtained them.19 In GSE4107, early CRC specimens (n = 12) and adjacent, grossly normal-appearing tissue (n = 10) at least 8 cm away were collected routinely and archived from patients undergoing colorectal resection at the Singapore General Hospital.19 In the GSE28702 study, 83 patients with unresectable CRC, including 56 patients with primary CRC and 27 patients with metastatic lesions in the liver (23 tumors), lungs (1 tumor), and peritoneum (3 tumors), were recruited from April 2007 to December 2010 at Teikyo University Hospital and Gifu University Hospital.26 All CRC samples were obtained before mFOLFOX6 therapy. GSE30540 included 25 chromosomal instability–high (CIN-high) CRC patients and 10 chromosomal instability–low (CIN-low) CRC patients.27 We analyzed the expression of CLCA1 in these published microarray data sets with the GEO software. 
The identity of genes across microarray data sets was established with public annotations primarily based on Unigene.28\nThe microarray data sources were obtained from the Gene Expression Omnibus (GEO).25 Three data sets (series accession numbers GSE4107, GSE28702, and GSE30540) were not subjected to any additional normalization because all had been normalized when we obtained them.19 In GSE4107, early CRC specimens (n = 12) and adjacent, grossly normal-appearing tissue (n = 10) at least 8 cm away were collected routinely and archived from patients undergoing colorectal resection at the Singapore General Hospital.19 In the GSE28702 study, 83 patients with unresectable CRC, including 56 patients with primary CRC and 27 patients with metastatic lesions in the liver (23 tumors), lungs (1 tumor), and peritoneum (3 tumors), were recruited from April 2007 to December 2010 at Teikyo University Hospital and Gifu University Hospital.26 All CRC samples were obtained before mFOLFOX6 therapy. GSE30540 included 25 chromosomal instability–high (CIN-high) CRC patients and 10 chromosomal instability–low (CIN-low) CRC patients.27 We analyzed the expression of CLCA1 in these published microarray data sets with the GEO software. The identity of genes across microarray data sets was established with public annotations primarily based on Unigene.28\n Statistical Analysis Statistical analysis was performed with Excel and Prism. The equality of group means and comparisons between proportions were analyzed with an unpaired Student t test and chi-square test, respectively. Univariate statistical analysis was performed with a log-rank test (Mantel-Cox). X-tile 3.6.1 plot software (http://medicine.yale.edu/lab/rimm/research/software.aspx) was used to determine the cutoff point of the CLCA1 expression level for separating all patients into 2 groups to examine the impact of CLCA1 expression on prognosis. The curves were plotted with the product-limit method (Kaplan-Meier) and were analyzed with Spearman correlation coefficients and Wilcoxon tests for all survival analyses. For covariates retained in the model, relative hazards with 95% confidence intervals were estimated. Differences with a P value of .05 or less were considered to be statistically significant.\nStatistical analysis was performed with Excel and Prism. The equality of group means and comparisons between proportions were analyzed with an unpaired Student t test and chi-square test, respectively. Univariate statistical analysis was performed with a log-rank test (Mantel-Cox). X-tile 3.6.1 plot software (http://medicine.yale.edu/lab/rimm/research/software.aspx) was used to determine the cutoff point of the CLCA1 expression level for separating all patients into 2 groups to examine the impact of CLCA1 expression on prognosis. The curves were plotted with the product-limit method (Kaplan-Meier) and were analyzed with Spearman correlation coefficients and Wilcoxon tests for all survival analyses. For covariates retained in the model, relative hazards with 95% confidence intervals were estimated. Differences with a P value of .05 or less were considered to be statistically significant.", "Thirty-six patients were diagnosed with CRC (26 with colon cancer and 10 with rectal cancer; Table 1), which was classified with the International Union Against Cancer TNM staging system and the Dukes staging system. All patients underwent surgical resection of their tumors at the 309th Hospital in Beijing, China. 
Statistical Analysis

Statistical analysis was performed with Excel and Prism. The equality of group means and comparisons between proportions were analyzed with an unpaired Student t test and a chi-square test, respectively. Univariate survival analysis was performed with the log-rank (Mantel-Cox) test. X-tile 3.6.1 plot software (http://medicine.yale.edu/lab/rimm/research/software.aspx) was used to determine the cutoff point of the CLCA1 expression level for separating all patients into 2 groups to examine the impact of CLCA1 expression on prognosis. Survival curves were plotted with the product-limit (Kaplan-Meier) method and were analyzed with Spearman correlation coefficients and Wilcoxon tests. For covariates retained in the model, relative hazards with 95% confidence intervals were estimated. Differences with a P value of .05 or less were considered statistically significant.
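As one concrete illustration of the survival workflow just described (Kaplan-Meier curves compared with the log-rank test), the sketch below uses Python's lifelines package rather than Prism; the array names, the 30% grouping, and the plotting details are assumptions for demonstration.

```python
import matplotlib.pyplot as plt
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def plot_dfs_by_clca1(months, relapse, pct_positive, cutoff=30.0):
    """Kaplan-Meier DFS curves for CLCA1 high (>cutoff) vs low groups,
    compared with the log-rank (Mantel-Cox) test."""
    months, relapse = np.asarray(months), np.asarray(relapse)
    high = np.asarray(pct_positive) > cutoff
    kmf, ax = KaplanMeierFitter(), None
    for label, mask in [("CLCA1 high (>30%)", high),
                        ("CLCA1 low (<=30%)", ~high)]:
        kmf.fit(months[mask], event_observed=relapse[mask], label=label)
        ax = kmf.plot_survival_function(ax=ax)
    p = logrank_test(months[high], months[~high],
                     event_observed_A=relapse[high],
                     event_observed_B=relapse[~high]).p_value
    ax.set_xlabel("Months after surgery")
    ax.set_ylabel("Disease-free survival")
    ax.set_title(f"Log-rank P = {p:.3f}")
    plt.show()
```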
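The relative hazards with 95% confidence intervals mentioned above come from a proportional-hazards model; the section does not name the software used for this step, so the following lifelines CoxPHFitter sketch, with invented column names and toy values, is only an illustration (the summary column labels follow recent lifelines releases).

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy data; column names and values are invented for illustration.
df = pd.DataFrame({
    "dfs_months": [40, 23, 35, 12, 48, 30, 20, 44, 16, 38],
    "relapse":    [0,  1,  0,  1,  0,  0,  1,  0,  1,  0],
    "clca1_low":  [0,  1,  1,  1,  0,  0,  1,  0,  1,  0],
    "t34":        [0,  1,  1,  1,  0,  1,  0,  0,  1,  1],
})
# A small penalizer keeps the fit stable on tiny toy data
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="dfs_months", event_col="relapse")
# Hazard ratios (exp(coef)) with their 95% confidence intervals
print(cph.summary[["exp(coef)", "exp(coef) lower 95%",
                   "exp(coef) upper 95%", "p"]])
```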
Results

Patients and Tumors

The characteristics of the 36 CRC patients in the study cohort are shown in Table 1. There were 11 women (30.6%) and 25 men (69.4%). The median age was 55.5 years (range, 25-80 years). In terms of anatomical location, 15 tumors (41.7%) were in the ascending colon, 2 (5.6%) in the transverse colon, 3 (8.3%) in the left colon, 6 (16.7%) in the sigmoid colon, and 10 (27.8%) in the rectum. Twenty-six patients (72.2%) had moderately or poorly differentiated tumors; the remaining 10 (27.8%) had well-differentiated cancers. The Dukes stage distribution was as follows: A, 7 (19.4%); B, 18 (50.0%); C, 5 (13.9%); and D, 6 (16.7%).

Expression of CLCA1 and Clinical Grades of CRCs

The level of expression of CLCA1 with respect to tumor staging is shown in Table 2. There was a high level of CLCA1 expression in normal colonic epithelium, in stark contrast to the tumor tissue (Fig. 1A). The mean percentage of CLCA1-positive cells was 88% in the normal samples (n = 6) and 54% in the tumor samples (n = 36).
In addition, the expression pattern of CLCA1 was predominantly membranous and cytoplasmic in normal colonic epithelium, but this pattern was altered in the tumor area, which showed an absence of cytoplasmic and/or membranous staining (Fig. 1A). Furthermore, we analyzed the relationship between the level of CLCA1 expression and the primary tumor status (T1-T4), lymph node status (N0 or N1/N2), Dukes stage (A-D), and histological grade (well, moderately, or poorly differentiated). This analysis showed that CLCA1 expression in noncancerous control mucosa was significantly higher than in tumor samples. There was also a significant difference between normal tissue and early CRC tissue (T1, P < .05). In the more advanced tumor stages (T3 and T4), the expression of CLCA1 was reduced in comparison with earlier stage tumors (T1/T2 vs T3/T4, P < .01; Fig. 1B and Table 2). Furthermore, CLCA1 expression levels in primary tumors were significantly reduced when patients had positive lymph nodes (N1/N2, P < .01; Fig. 1C). Dukes stage A and B tumors showed much higher expression of CLCA1 than Dukes stage C and D tumors (P < .01; Fig. 1D and Table 2). An analysis by histological grade also showed that the expression of CLCA1 was significantly reduced in poorly differentiated versus well-differentiated tumors (P < .01; Fig. 1E and Table 2). Our data indicate that low CLCA1 expression levels are associated with an advanced tumor stage, a less differentiated histological grade, and metastases in regional lymph nodes.

Table 2. Expression Level of CLCA1 and Status of Primary Tumors. Abbreviation: CLCA1, chloride channel accessory 1. Expression levels were compared with the Pearson chi-square test.

Figure 1. The expression of CLCA1 correlates with tumor stage, histological grade, and lymph node metastasis in colorectal cancer. (A) CLCA1 in normal colonic tissue showed (i) preservation of high levels of expression (brown) on cell membranes and in the cytoplasm, (ii) reduced expression on membranes and in the cytoplasm in colon cancer tissue, (iii) reduced expression in the cytoplasm only in colon cancer tissue, and (iv) reduced expression in rectal cancer. CLCA1 expression was strongly associated with (B) tumor status, (C) lymph node metastasis, (D) Dukes stage, and (E) histological grade. Expression levels of CLCA1 were quantified as the ratio of positive cells to total cells. Data are presented as means and standard errors of the mean. CLCA1 indicates chloride channel accessory 1.
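As a concrete illustration of how the group comparisons behind Table 2 can be computed (chi-square for proportions, unpaired t test for means), here is a short scipy sketch; all counts and scores are invented, chosen only so the marginals match the cohort size.

```python
import numpy as np
from scipy import stats

# Invented counts: CLCA1 high/low by Dukes stage group (Table 2-style)
table = np.array([[18, 7],   # Dukes A/B: CLCA1 high, CLCA1 low
                  [3,  8]])  # Dukes C/D: CLCA1 high, CLCA1 low
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Invented percent-positive scores: normal mucosa vs tumor samples
normal = np.array([92, 88, 85, 90, 86, 87], dtype=float)
tumor = np.array([60, 45, 72, 30, 55, 64], dtype=float)
t_stat, p_t = stats.ttest_ind(normal, tumor)  # unpaired Student t test
print(f"chi-square P = {p_chi:.3f}; t test P = {p_t:.4f}")
```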
Reduced Expression of CLCA1 Is an Indicator of the Likelihood of Disease Relapse and Poorer Survival

The median disease-free survival (DFS) for all patients was 34.5 months. The postoperative median DFS was 40 months for patients with high CLCA1 expression versus 23 months for patients with low CLCA1 expression. CRC patients with reduced CLCA1 expression had a higher risk of disease relapse and death than patients with high CLCA1 expression (P < .01; Table 2). Kaplan-Meier analysis was used to evaluate the correlation between the survival of patients with CRC and the level of CLCA1 expression. Patients were divided into 2 groups: those with high CLCA1 expression (>30%) and those with low CLCA1 expression (≤30%). The DFS of CRC patients whose tumors had preserved CLCA1 expression was significantly higher than that of patients with low CLCA1 expression (P < .05; Fig. 2A). Kaplan-Meier analysis was also performed with stratification by the characteristics of the primary tumor and lymph node metastases. We classified patients into T1/T2 (n = 12) and T3/T4 (n = 24) groups because of the case numbers and the better prognosis of T1/T2 tumors versus T3/T4 tumors,29 and likewise into lymph node–negative and lymph node–positive groups. The primary tumor stage was correlated with DFS: the DFS of patients with T3/T4 tumors was significantly reduced in comparison with that of patients with T1/T2 tumors (P < .05; Fig. 2B).

Figure 2. CLCA1 expression levels and disease-free survival for CRC patients. Kaplan-Meier curves of disease-free survival are shown by (A) CLCA1 expression (P < .05), (B) tumor status (P < .05), (C) lymph node metastasis (P > .05), and (D) Ki-67 expression (P > .05). The differences between curves were analyzed with the log-rank test. CLCA1 indicates chloride channel accessory 1; CRC, colorectal cancer.

However, lymph node status (N0 vs N1/N2) showed only a borderline difference in the DFS of this group of CRC patients (P = .06; Fig. 2C). Ki-67 has been studied as a prognosticator for CRC,30 but we found no significant association between Ki-67 expression levels and the survival of CRC patients (Fig. 2D). Overall, our results demonstrated that the CLCA1 expression level was a prognostic factor for the survival of patients with CRC in the univariate analysis. Patients whose tumors had high levels of CLCA1 expression had a favorable prognosis, whereas patients with low CLCA1 expression had a poorer survival rate.

Correlation Between CLCA1 and PTEN, p53, E-Cadherin, and Ki-67 in CRC Patients

Abnormal expression or mutation of PTEN, p53, E-cadherin, and Ki-67 is associated with a poor prognosis for patients with CRC.15,18,30–32 We therefore examined the expression of PTEN, p53, Ki-67, and E-cadherin in these patients and compared it with the CLCA1 expression levels. We found that 16 of 28 CRC patients (57%) had high Ki-67 expression (>50% of cells stained) in cancer cells. However, no correlation was found between CLCA1 and Ki-67 expression levels in the primary tumors (P = .29; Fig. 3A). We also found no association between the expression levels of PTEN and CLCA1 (P = .95; Fig. 3B). In contrast, CLCA1 expression levels correlated strongly and positively with p53 and E-cadherin expression (P < .01; Fig. 3C,D). Downregulation of E-cadherin at the membrane indicates a poor outcome.15 p53 is a tumor suppressor in CRC; its mutations are associated with worse survival, and normal p53 levels are required for CRCs to respond to chemotherapy.16 These data further confirm that low expression of CLCA1 correlates strongly with a poor prognosis in CRC.
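The marker correlations just described can be reproduced in outline with scipy; the text reports both Spearman coefficients and R2 values, so the sketch below computes Spearman and Pearson statistics on invented paired percent-positive scores.

```python
from scipy import stats

# Invented paired percent-positive scores for tumors stained for both
clca1 = [10, 25, 40, 55, 70, 80, 35, 60]
ecad = [15, 20, 45, 50, 75, 85, 30, 70]
rho, p_s = stats.spearmanr(clca1, ecad)   # rank-based correlation
r, p_p = stats.pearsonr(clca1, ecad)      # linear correlation
print(f"Spearman rho = {rho:.2f} (P = {p_s:.4f}); Pearson r = {r:.2f}")
```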
Figure 3. Correlation between the expression of CLCA1 and known prognostic markers for colorectal cancer. Expression levels of CLCA1, Ki-67, E-cad, PTEN, and p53 were analyzed as the percentages of positively staining cells. The correlations of CLCA1 expression with (A) Ki-67 (n = 28, R2 = 0.18, P = .29), (B) PTEN (n = 9, R2 = –0.02, P = .95), (C) E-cad (n = 12, R2 = 0.8, P = .0016), and (D) p53 (n = 15, R2 = 0.78, P = .0007) are shown; strong positive correlations between high CLCA1 expression and high E-cad and p53 expression levels are evident. CLCA1 indicates chloride channel accessory 1; E-cad, E-cadherin; PTEN, phosphatase and tensin homolog.

Confirmation of CLCA1 Expression in CRC With Microarray Data Sets

To further validate our data, we analyzed the 3 public, independent microarray data sets from GEO. This analysis showed that the expression of CLCA1 was significantly inhibited in early CRC specimens (normal vs early CRC, P < .05; Fig. 4A). In comparison with nonmetastatic specimens, the expression of CLCA1 was significantly downregulated in metastatic CRC (P < .01; Fig. 4B). The CIN-high phenotype carries the worst survival for CRC patients.27 Our results showed that CRC with the CIN-high phenotype had lower expression of CLCA1 than CRC with the CIN-low phenotype (P = .015; Fig. 4C). The results from these 3 independent microarray sets were therefore consistent with our clinical analysis.

Figure 4. Publicly available microarray data sets used for validation. (A) In public microarray data set GSE4107, we analyzed the expression of CLCA1 in normal colon mucosa and CRC tissues; the expression of CLCA1 was significantly inhibited in early CRC patients. (B) In the GSE28702 microarray gene set, low expression of CLCA1 was associated with CRC metastasis. (C) Analysis of the GSE30540 microarray gene set showed that low CLCA1 expression was associated with CRC with a CIN-high signature, which indicates significantly poorer survival in comparison with a CIN-low signature. CIN indicates chromosomal instability; CLCA1, chloride channel accessory 1; CRC, colorectal cancer.

CLCA1 Regulates the Differentiation of Human CRC Cells

Culturing to confluence induces the spontaneous differentiation of human colon adenocarcinoma (Caco-2) cells.11,33–35 Using this model, we further investigated the functional role of CLCA1 in the differentiation of Caco-2 cells. First, we detected the expression of CLCA1 and the differentiation marker E-cadherin in a confluent culture and found that the expression of both increased in a time-dependent manner (Fig. 5A). These data suggest that the expression of CLCA1 may contribute to the spontaneous differentiation of Caco-2 cells. Next, we used stealth siRNA (siRNACLCA1) to knock down the expression of CLCA1 in Caco-2 cells. After 72 hours of transfection, cells were tested for the expression of CLCA1, ALPI, and E-cadherin by western blotting. Knockdown of CLCA1 significantly inhibited the expression of ALPI and E-cadherin (Fig. 5B).10 These results indicate that CLCA1 plays a key role in the regulation of the spontaneous differentiation of Caco-2 cells.

Figure 5. CLCA1 contributes to the differentiation of colorectal cancer cells. Human colon adenocarcinoma (Caco-2) cells were cultured to confluence for 10 days. (A) The expression of CLCA1 was upregulated by 4 days in confluent culture and lasted for up to 10 days of culturing; the mature epithelial marker E-cad increased in a time-dependent manner. The histogram shows the relative intensity of E-cad expressed as a ratio with respect to the GAPDH control. (B) Caco-2 cells were transfected transiently with 150 nM siRNACLCA1 and blotted for CLCA1, E-cad, and ALPI. siRNACLCA1 effectively inhibited CLCA1 and downregulated the expression of E-cad and ALPI. The histogram shows the relative intensities of CLCA1, E-cad, and ALPI expressed as ratios with respect to the GAPDH control. Data are presented as means and standard errors of the mean. All results are based on 3 independent experiments.
ALPI indicates intestinal alkaline phosphatase; CLCA1, chloride channel accessory 1; Cont, control; E-cad, E-cadherin; GAPDH, glyceraldehyde 3-phosphate dehydrogenase; siRNA, small interfering RNA.
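The relative-intensity histograms in Figure 5 are simple ratios of band densitometry to the GAPDH loading control; a minimal sketch of that arithmetic, with invented readings, is shown below.

```python
import numpy as np

# Invented densitometry readings from the blots (arbitrary units)
days = [4, 7, 10]
clca1_band = np.array([1200.0, 2600.0, 4100.0])
gapdh_band = np.array([3000.0, 3100.0, 2950.0])

relative = clca1_band / gapdh_band      # normalize to loading control
fold_vs_day4 = relative / relative[0]   # express as fold of first point
for d, f in zip(days, fold_vs_day4):
    print(f"day {d}: {f:.2f}-fold relative CLCA1")
```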
[ "Introduction", "Material and Methods", "Patients", "Clinical Follow-Up", "Histology and Colon Cancer Staging", "Immunohistochemistry", "Knockdown of CLCA1 and Western Blotting", "Quantitative Analysis", "Microarray Data Analysis", "Statistical Analysis", "Results", "Patients and Tumors", "Expression of CLCA1 and Clinical Grades of CRCs", "Reduced Expression of CLCA1 Is an Indicator of the Likelihood of Disease Relapse and Poorer Survival", "Correlation Between CLCA1 and PTEN, p53, E-Cadherin, and Ki-67 in CRC Patients", "Confirmation of CLCA1 Expression in CRC With Microarray Data Sets", "CLCA1 Regulates the Differentiation of Human CRC Cells", "Discussion" ]
[ "The chloride channel accessory (CLCA) family (also called the calcium-sensitive chloride conductance protein family) comprises 4 genes in humans and at least 6 genes in mice.1–5 All members of the CLCA family map to the same region on chromosome 1p31-p22 and share a high degree of homology in size, sequence, and predicted structure, but they differ significantly in their tissue distribution. The human genome encodes 3 functional CLCA proteins, including CLCA1, CLCA2, and CLCA4. CLCA2 has been identified to be a p53-inducible inhibitor of cell proliferation and to be a marker of differentiated epithelium that is downregulated with tumor progression.1,6 CLCA4 is downregulated in breast cancer cells, and low expression of CLCA4 indicates a poor prognosis for patients with breast cancer.7 CLCA4 and CLCA1 are expressed in intestinal epithelium.3,8–10 CLCA1 and CLCA4 may function as tumor suppressors and are associated negatively with tumorigenicity.11 We have shown that CLCA1 contributes to differentiation and proliferation inhibition in colon cell lines,10 but the role of CLCA1 in the prognosis of patients with colorectal cancer (CRC) remains unclear.\nIn cancer progression, individual cells undergo complex and important interactions with the extracellular environment through transmembrane signal transduction. These interactions regulate cancer cell proliferation, invasion, and metastasis. Ion channels are a crucial first step in this process, and they play emerging roles in carcinogenesis and tumor progression.11 For example, activation of the chloride current through specialized volume-regulated anion channels in response to cell swelling is one of the major mechanisms by which cells restore their volume after hypo-osmotic stress (regulatory volume decrease, RVD).12 This is important because there is a direct link between the apoptotic resistance conferred by the antiapoptotic Bcl-2 protein and the strengthening of RVD capability due to upregulation of the chloride current and swelling.13 Therefore, further investigations will elucidate important roles for ion channels in cancer development and progression and potentially establish ion channels as effective discriminative markers and therapeutic targets.14\nWe have reported that CLCA1 is expressed in differentiated, growth-arrested mammalian epithelial cells but is downregulated during tumor progression.10 We have identified CLCA1 as a regulator of the transition from proliferation to differentiation in Caco-2 cells. In this study, we have determined further that the expression of CLCA1 in human CRC intestinal tissue is associated with the primary tumor status (the degree of invasion of the intestinal wall), lymph node metastases, and the overall survival rate. CLCA1 also is correlated closely with tumor suppressor p53 and E-cadherin, which has been determined to influence the prognosis of CRC.15–18 Our findings suggest that the expression level of CLCA1 may predict disease relapse and outcomes for patients with CRC.", " Patients Thirty-six patients were diagnosed with CRC (26 with colon cancer and 10 with rectal cancer; Table 1), which was classified with the International Union Against Cancer TNM staging system and the Dukes staging system. All patients underwent surgical resection of their tumors at the 309th Hospital in Beijing, China. Ethical approval for the study was granted by the 309th Hospital's ethics committee. 
Informed written consent was obtained from all participants involved in the study.\nClinical Characteristics of Patients With Colorectal Cancer\nThe key clinical characteristics of the patients are summarized in Table 1. Normal specimens were obtained from 2 sources: 2 samples came from normal colon tissue of noncancer patients, and 4 samples were obtained from adjacent, grossly normal-appearing tissue taken at least 10 cm away from the cancer. This was in accordance with other research.19 All tumor samples were obtained from surgical resection. After surgical resection of their tumors, patients were followed up with clinical examinations, abdominal ultrasonography or abdominal computer tomography scans, and carcinoembryonic antigen measurements every 6 months for the first 5 years. Thereafter, these investigations were performed annually until 5 years after the initial treatment. The patients who died as a result of any postoperative complications or non–cancer-related diseases were excluded from the survival analysis. A patient with a family history suggestive of hereditary nonpolyposis colon cancer syndrome was excluded. None of the patients included in this study had chemotherapy or radiotherapy before surgery. In all, 36 CRC patients were included in the survival analysis. Adjuvant chemotherapy was given to all CRC patients except those with stage I disease (cancer had not invaded the outermost layers of the colon or rectum and had not spread to lymph nodes or distant sites). The mFOLFOX6 chemotherapy regimen consisted of a 2-hour intravenous infusion of oxaliplatin (85 mg/m2) and folinic acid (400 mg/m2), which was followed by an intravenous bolus injection of 5-fluorouracil (400 mg/m2) plus a 46-hour intravenous infusion of 5-fluorouracil (2400 mg/m2); this was repeated every 2 weeks. After 4 cycles of therapy, all lesions were assessed with computer tomography.\nThirty-six patients were diagnosed with CRC (26 with colon cancer and 10 with rectal cancer; Table 1), which was classified with the International Union Against Cancer TNM staging system and the Dukes staging system. All patients underwent surgical resection of their tumors at the 309th Hospital in Beijing, China. Ethical approval for the study was granted by the 309th Hospital's ethics committee. Informed written consent was obtained from all participants involved in the study.\nClinical Characteristics of Patients With Colorectal Cancer\nThe key clinical characteristics of the patients are summarized in Table 1. Normal specimens were obtained from 2 sources: 2 samples came from normal colon tissue of noncancer patients, and 4 samples were obtained from adjacent, grossly normal-appearing tissue taken at least 10 cm away from the cancer. This was in accordance with other research.19 All tumor samples were obtained from surgical resection. After surgical resection of their tumors, patients were followed up with clinical examinations, abdominal ultrasonography or abdominal computer tomography scans, and carcinoembryonic antigen measurements every 6 months for the first 5 years. Thereafter, these investigations were performed annually until 5 years after the initial treatment. The patients who died as a result of any postoperative complications or non–cancer-related diseases were excluded from the survival analysis. A patient with a family history suggestive of hereditary nonpolyposis colon cancer syndrome was excluded. None of the patients included in this study had chemotherapy or radiotherapy before surgery. 
In all, 36 CRC patients were included in the survival analysis. Adjuvant chemotherapy was given to all CRC patients except those with stage I disease (cancer had not invaded the outermost layers of the colon or rectum and had not spread to lymph nodes or distant sites). The mFOLFOX6 chemotherapy regimen consisted of a 2-hour intravenous infusion of oxaliplatin (85 mg/m2) and folinic acid (400 mg/m2), which was followed by an intravenous bolus injection of 5-fluorouracil (400 mg/m2) plus a 46-hour intravenous infusion of 5-fluorouracil (2400 mg/m2); this was repeated every 2 weeks. After 4 cycles of therapy, all lesions were assessed with computer tomography.\n Clinical Follow-Up All patients were prospectively followed up according to the schedule described previously. The mean follow-up time in this study was 46 months (range, 7-52 months). The status of each patient was determined at the date of the last follow-up or at the end of a 5-year follow-up period, and if they were deceased, the cause of death was ascertained from the medical records and/or death certificate information. At the initial diagnosis, 6 patients with CRC had metastases.\nAll patients were prospectively followed up according to the schedule described previously. The mean follow-up time in this study was 46 months (range, 7-52 months). The status of each patient was determined at the date of the last follow-up or at the end of a 5-year follow-up period, and if they were deceased, the cause of death was ascertained from the medical records and/or death certificate information. At the initial diagnosis, 6 patients with CRC had metastases.\n Histology and Colon Cancer Staging Resected tumors were obtained immediately after surgical resection and were fixed in 10% pH-neutral formalin and embedded in paraffin. Paraffin-embedded tissue sections (5 µm thick) were cut serially and used for hematoxylin-eosin staining and immunohistochemical analysis. The patients were staged according to the International Union Against Cancer TNM staging system and the Dukes staging system (Table 1).20 The Dukes staging system (used for colon cancer staging originally) was defined as follows: (A) tumor in the mucosa, (B) tumor in the muscle layer, (C) involvement of lymph nodes, and (D) distant metastases. Tumors were classified also by their degree of histological differentiation: (well differentiated [I]), moderately differentiated [II], or poorly differentiated [III]), the presence or absence of perineural invasion, the presence or absence of venous emboli, and the number of lymph nodes involved by the tumor.\nResected tumors were obtained immediately after surgical resection and were fixed in 10% pH-neutral formalin and embedded in paraffin. Paraffin-embedded tissue sections (5 µm thick) were cut serially and used for hematoxylin-eosin staining and immunohistochemical analysis. The patients were staged according to the International Union Against Cancer TNM staging system and the Dukes staging system (Table 1).20 The Dukes staging system (used for colon cancer staging originally) was defined as follows: (A) tumor in the mucosa, (B) tumor in the muscle layer, (C) involvement of lymph nodes, and (D) distant metastases. 
Tumors were classified also by their degree of histological differentiation: (well differentiated [I]), moderately differentiated [II], or poorly differentiated [III]), the presence or absence of perineural invasion, the presence or absence of venous emboli, and the number of lymph nodes involved by the tumor.\n Immunohistochemistry After paraffin embedding, the tissue was cut into 5-µm-thick sections and mounted onto Superfrost Plus slides. The immunohistochemistry was performed on Leica Bond Max and included dewaxing and heat-induced epitope retrieval with Epitope Retrieval 1 (Leica Biosystems) at 100°C. Peroxidase block from the Bond Refine Detection System was used. The primary antibody CLCA1 (polyclone; Santa Cruz) was applied at a 1:100 dilution for 30 minutes at room temperature. The staining was completed with the Bond Polymer Refine Detection system. The Bond Polymer Refine Detection system contained a peroxide block, post primary, a polymer reagent, 3,3′-diaminobenzidine chromogen, and a hematoxylin counterstain (Leica Biosystems). p53, E-cadherin, phosphatase and tensin homolog (PTEN), and Ki-67 expression was evaluated according to the proportion of positively stained tumor cells.\nAfter paraffin embedding, the tissue was cut into 5-µm-thick sections and mounted onto Superfrost Plus slides. The immunohistochemistry was performed on Leica Bond Max and included dewaxing and heat-induced epitope retrieval with Epitope Retrieval 1 (Leica Biosystems) at 100°C. Peroxidase block from the Bond Refine Detection System was used. The primary antibody CLCA1 (polyclone; Santa Cruz) was applied at a 1:100 dilution for 30 minutes at room temperature. The staining was completed with the Bond Polymer Refine Detection system. The Bond Polymer Refine Detection system contained a peroxide block, post primary, a polymer reagent, 3,3′-diaminobenzidine chromogen, and a hematoxylin counterstain (Leica Biosystems). p53, E-cadherin, phosphatase and tensin homolog (PTEN), and Ki-67 expression was evaluated according to the proportion of positively stained tumor cells.\n Knockdown of CLCA1 and Western Blotting Caco-2 cells (5×105; American Type Culture Collection) were cultured on collagen I–precoated 6-well plates for the indicated time. Cells were lysed with a cell lysis buffer (Sigma-Aldrich) and a protease inhibitor cocktail (Thermo Scientific) for western blot analysis. Knockdown of CLCA1 in Caco-2 cells has been described previously.10 Briefly, a stealth RNAi small interfering RNA (siRNA) duplex with sense-strand sequences (5′-CAAUGCUACCCUGCCUCC AAUUACA-3′; Invitrogen, United Kingdom) was used to specifically target the CLCA1 gene. Caco-2 cells were transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol with a final siRNA concentration of 150 nM. Nontargeting negative control siRNA was used for non–sequence-specific effects of these molecules. After 72 hours, cells were lysed for western blot analysis. The primary antibodies used for western blotting included anti-CLCA1 (1:1000; Santa Cruz), anti–E-cadherin (1:5000; BD), anti–intestinal alkaline phosphatase (anti-ALPI; 1:2000, Novus), and anti–glyceraldehyde 3-phosphate dehydrogenase (1:50,000; Santa Cruz). Membranes were incubated with the relevant primary antibodies overnight at 4°C. 
A secondary antibody with horseradish peroxidase (1:5000; Sigma-Aldrich, United Kingdom) was used, and the immunoblots were detected with WesternBright ECL (AGTC Bioproducts).\nCaco-2 cells (5×105; American Type Culture Collection) were cultured on collagen I–precoated 6-well plates for the indicated time. Cells were lysed with a cell lysis buffer (Sigma-Aldrich) and a protease inhibitor cocktail (Thermo Scientific) for western blot analysis. Knockdown of CLCA1 in Caco-2 cells has been described previously.10 Briefly, a stealth RNAi small interfering RNA (siRNA) duplex with sense-strand sequences (5′-CAAUGCUACCCUGCCUCC AAUUACA-3′; Invitrogen, United Kingdom) was used to specifically target the CLCA1 gene. Caco-2 cells were transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol with a final siRNA concentration of 150 nM. Nontargeting negative control siRNA was used for non–sequence-specific effects of these molecules. After 72 hours, cells were lysed for western blot analysis. The primary antibodies used for western blotting included anti-CLCA1 (1:1000; Santa Cruz), anti–E-cadherin (1:5000; BD), anti–intestinal alkaline phosphatase (anti-ALPI; 1:2000, Novus), and anti–glyceraldehyde 3-phosphate dehydrogenase (1:50,000; Santa Cruz). Membranes were incubated with the relevant primary antibodies overnight at 4°C. A secondary antibody with horseradish peroxidase (1:5000; Sigma-Aldrich, United Kingdom) was used, and the immunoblots were detected with WesternBright ECL (AGTC Bioproducts).\n Quantitative Analysis Slides were assessed with light microscopy. All slides were analyzed by 2 independent observers (L.C. and J.P.) who were blinded to the clinical data. For each colon carcinoma, staining was evaluated on separate slides, which included the core and the invasive edge of the tumor, respectively. Slides were examined at ×400 (40× objective and 10× ocular) and were analyzed via the counting of all cells present in them (at least 5000 cells). All of the slides were reviewed independently by each researcher twice. Discrepancies between investigators (<10% of the cases) required a third joint observation with a conclusive agreement.21 The expression of CLCA1 was scored as the ratio of the number of positive cells to the total number of cells (stained cells/total number evaluated). The samples were considered CLCA1-positive if any positive staining was detected unless there were only rare, isolated single cells. To translate a continuous variable of CLCA1 expression into a clinical decision, it is necessary to stratify patients into 2 groups that may have different prognoses. Currently, there is no standard method or standard software for biomarker cutoff determination.22 However, our approach was to use X-tile 3.6 plot software (developed by Camp et al23), which provides a single, global assessment of every possible way of dividing a population into low-, medium-, and high-level marker expression. Using an X-tile plot, we determined the significant, optimal cut point to be 30% (P = .0224). Therefore, we used 30% CLCA1 positive staining as a cutoff value to unequivocally categorize cases into 2 groups: high expression of CLCA1 (>30% of cells stained) and low expression of CLCA1 (none or ≤30% of cells stained). This was in accordance with other research.21,24\nSlides were assessed with light microscopy. All slides were analyzed by 2 independent observers (L.C. and J.P.) who were blinded to the clinical data. 
For each colon carcinoma, staining was evaluated on separate slides, which included the core and the invasive edge of the tumor, respectively. Slides were examined at ×400 (40× objective and 10× ocular) and were analyzed via the counting of all cells present in them (at least 5000 cells). All of the slides were reviewed independently by each researcher twice. Discrepancies between investigators (<10% of the cases) required a third joint observation with a conclusive agreement.21 The expression of CLCA1 was scored as the ratio of the number of positive cells to the total number of cells (stained cells/total number evaluated). The samples were considered CLCA1-positive if any positive staining was detected unless there were only rare, isolated single cells. To translate a continuous variable of CLCA1 expression into a clinical decision, it is necessary to stratify patients into 2 groups that may have different prognoses. Currently, there is no standard method or standard software for biomarker cutoff determination.22 However, our approach was to use X-tile 3.6 plot software (developed by Camp et al23), which provides a single, global assessment of every possible way of dividing a population into low-, medium-, and high-level marker expression. Using an X-tile plot, we determined the significant, optimal cut point to be 30% (P = .0224). Therefore, we used 30% CLCA1 positive staining as a cutoff value to unequivocally categorize cases into 2 groups: high expression of CLCA1 (>30% of cells stained) and low expression of CLCA1 (none or ≤30% of cells stained). This was in accordance with other research.21,24\n Microarray Data Analysis The microarray data sources were obtained from the Gene Expression Omnibus (GEO).25 Three data sets (series accession numbers GSE4107, GSE28702, and GSE30540) were not subjected to any additional normalization because all had been normalized when we obtained them.19 In GSE4107, early CRC specimens (n = 12) and adjacent, grossly normal-appearing tissue (n = 10) at least 8 cm away were collected routinely and archived from patients undergoing colorectal resection at the Singapore General Hospital.19 In the GSE28702 study, 83 patients with unresectable CRC, including 56 patients with primary CRC and 27 patients with metastatic lesions in the liver (23 tumors), lungs (1 tumor), and peritoneum (3 tumors), were recruited from April 2007 to December 2010 at Teikyo University Hospital and Gifu University Hospital.26 All CRC samples were obtained before mFOLFOX6 therapy. GSE30540 included 25 chromosomal instability–high (CIN-high) CRC patients and 10 chromosomal instability–low (CIN-low) CRC patients.27 We analyzed the expression of CLCA1 in these published microarray data sets with the GEO software. 
The identity of genes across microarray data sets was established with public annotations primarily based on Unigene.28\nThe microarray data sources were obtained from the Gene Expression Omnibus (GEO).25 Three data sets (series accession numbers GSE4107, GSE28702, and GSE30540) were not subjected to any additional normalization because all had been normalized when we obtained them.19 In GSE4107, early CRC specimens (n = 12) and adjacent, grossly normal-appearing tissue (n = 10) at least 8 cm away were collected routinely and archived from patients undergoing colorectal resection at the Singapore General Hospital.19 In the GSE28702 study, 83 patients with unresectable CRC, including 56 patients with primary CRC and 27 patients with metastatic lesions in the liver (23 tumors), lungs (1 tumor), and peritoneum (3 tumors), were recruited from April 2007 to December 2010 at Teikyo University Hospital and Gifu University Hospital.26 All CRC samples were obtained before mFOLFOX6 therapy. GSE30540 included 25 chromosomal instability–high (CIN-high) CRC patients and 10 chromosomal instability–low (CIN-low) CRC patients.27 We analyzed the expression of CLCA1 in these published microarray data sets with the GEO software. The identity of genes across microarray data sets was established with public annotations primarily based on Unigene.28\n Statistical Analysis Statistical analysis was performed with Excel and Prism. The equality of group means and comparisons between proportions were analyzed with an unpaired Student t test and chi-square test, respectively. Univariate statistical analysis was performed with a log-rank test (Mantel-Cox). X-tile 3.6.1 plot software (http://medicine.yale.edu/lab/rimm/research/software.aspx) was used to determine the cutoff point of the CLCA1 expression level for separating all patients into 2 groups to examine the impact of CLCA1 expression on prognosis. The curves were plotted with the product-limit method (Kaplan-Meier) and were analyzed with Spearman correlation coefficients and Wilcoxon tests for all survival analyses. For covariates retained in the model, relative hazards with 95% confidence intervals were estimated. Differences with a P value of .05 or less were considered to be statistically significant.\nStatistical analysis was performed with Excel and Prism. The equality of group means and comparisons between proportions were analyzed with an unpaired Student t test and chi-square test, respectively. Univariate statistical analysis was performed with a log-rank test (Mantel-Cox). X-tile 3.6.1 plot software (http://medicine.yale.edu/lab/rimm/research/software.aspx) was used to determine the cutoff point of the CLCA1 expression level for separating all patients into 2 groups to examine the impact of CLCA1 expression on prognosis. The curves were plotted with the product-limit method (Kaplan-Meier) and were analyzed with Spearman correlation coefficients and Wilcoxon tests for all survival analyses. For covariates retained in the model, relative hazards with 95% confidence intervals were estimated. Differences with a P value of .05 or less were considered to be statistically significant.", "Thirty-six patients were diagnosed with CRC (26 with colon cancer and 10 with rectal cancer; Table 1), which was classified with the International Union Against Cancer TNM staging system and the Dukes staging system. All patients underwent surgical resection of their tumors at the 309th Hospital in Beijing, China. 
Ethical approval for the study was granted by the 309th Hospital's ethics committee. Informed written consent was obtained from all participants involved in the study.\nClinical Characteristics of Patients With Colorectal Cancer\nThe key clinical characteristics of the patients are summarized in Table 1. Normal specimens were obtained from 2 sources: 2 samples came from normal colon tissue of noncancer patients, and 4 samples were obtained from adjacent, grossly normal-appearing tissue taken at least 10 cm away from the cancer. This was in accordance with other research.19 All tumor samples were obtained from surgical resection. After surgical resection of their tumors, patients were followed up with clinical examinations, abdominal ultrasonography or abdominal computer tomography scans, and carcinoembryonic antigen measurements every 6 months for the first 5 years. Thereafter, these investigations were performed annually until 5 years after the initial treatment. The patients who died as a result of any postoperative complications or non–cancer-related diseases were excluded from the survival analysis. A patient with a family history suggestive of hereditary nonpolyposis colon cancer syndrome was excluded. None of the patients included in this study had chemotherapy or radiotherapy before surgery. In all, 36 CRC patients were included in the survival analysis. Adjuvant chemotherapy was given to all CRC patients except those with stage I disease (cancer had not invaded the outermost layers of the colon or rectum and had not spread to lymph nodes or distant sites). The mFOLFOX6 chemotherapy regimen consisted of a 2-hour intravenous infusion of oxaliplatin (85 mg/m2) and folinic acid (400 mg/m2), which was followed by an intravenous bolus injection of 5-fluorouracil (400 mg/m2) plus a 46-hour intravenous infusion of 5-fluorouracil (2400 mg/m2); this was repeated every 2 weeks. After 4 cycles of therapy, all lesions were assessed with computer tomography.", "All patients were prospectively followed up according to the schedule described previously. The mean follow-up time in this study was 46 months (range, 7-52 months). The status of each patient was determined at the date of the last follow-up or at the end of a 5-year follow-up period, and if they were deceased, the cause of death was ascertained from the medical records and/or death certificate information. At the initial diagnosis, 6 patients with CRC had metastases.", "Resected tumors were obtained immediately after surgical resection and were fixed in 10% pH-neutral formalin and embedded in paraffin. Paraffin-embedded tissue sections (5 µm thick) were cut serially and used for hematoxylin-eosin staining and immunohistochemical analysis. The patients were staged according to the International Union Against Cancer TNM staging system and the Dukes staging system (Table 1).20 The Dukes staging system (used for colon cancer staging originally) was defined as follows: (A) tumor in the mucosa, (B) tumor in the muscle layer, (C) involvement of lymph nodes, and (D) distant metastases. Tumors were classified also by their degree of histological differentiation: (well differentiated [I]), moderately differentiated [II], or poorly differentiated [III]), the presence or absence of perineural invasion, the presence or absence of venous emboli, and the number of lymph nodes involved by the tumor.", "After paraffin embedding, the tissue was cut into 5-µm-thick sections and mounted onto Superfrost Plus slides. 
The immunohistochemistry was performed on Leica Bond Max and included dewaxing and heat-induced epitope retrieval with Epitope Retrieval 1 (Leica Biosystems) at 100°C. Peroxidase block from the Bond Refine Detection System was used. The primary antibody CLCA1 (polyclone; Santa Cruz) was applied at a 1:100 dilution for 30 minutes at room temperature. The staining was completed with the Bond Polymer Refine Detection system. The Bond Polymer Refine Detection system contained a peroxide block, post primary, a polymer reagent, 3,3′-diaminobenzidine chromogen, and a hematoxylin counterstain (Leica Biosystems). p53, E-cadherin, phosphatase and tensin homolog (PTEN), and Ki-67 expression was evaluated according to the proportion of positively stained tumor cells.", "Caco-2 cells (5×105; American Type Culture Collection) were cultured on collagen I–precoated 6-well plates for the indicated time. Cells were lysed with a cell lysis buffer (Sigma-Aldrich) and a protease inhibitor cocktail (Thermo Scientific) for western blot analysis. Knockdown of CLCA1 in Caco-2 cells has been described previously.10 Briefly, a stealth RNAi small interfering RNA (siRNA) duplex with sense-strand sequences (5′-CAAUGCUACCCUGCCUCC AAUUACA-3′; Invitrogen, United Kingdom) was used to specifically target the CLCA1 gene. Caco-2 cells were transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol with a final siRNA concentration of 150 nM. Nontargeting negative control siRNA was used for non–sequence-specific effects of these molecules. After 72 hours, cells were lysed for western blot analysis. The primary antibodies used for western blotting included anti-CLCA1 (1:1000; Santa Cruz), anti–E-cadherin (1:5000; BD), anti–intestinal alkaline phosphatase (anti-ALPI; 1:2000, Novus), and anti–glyceraldehyde 3-phosphate dehydrogenase (1:50,000; Santa Cruz). Membranes were incubated with the relevant primary antibodies overnight at 4°C. A secondary antibody with horseradish peroxidase (1:5000; Sigma-Aldrich, United Kingdom) was used, and the immunoblots were detected with WesternBright ECL (AGTC Bioproducts).", "Slides were assessed with light microscopy. All slides were analyzed by 2 independent observers (L.C. and J.P.) who were blinded to the clinical data. For each colon carcinoma, staining was evaluated on separate slides, which included the core and the invasive edge of the tumor, respectively. Slides were examined at ×400 (40× objective and 10× ocular) and were analyzed via the counting of all cells present in them (at least 5000 cells). All of the slides were reviewed independently by each researcher twice. Discrepancies between investigators (<10% of the cases) required a third joint observation with a conclusive agreement.21 The expression of CLCA1 was scored as the ratio of the number of positive cells to the total number of cells (stained cells/total number evaluated). The samples were considered CLCA1-positive if any positive staining was detected unless there were only rare, isolated single cells. To translate a continuous variable of CLCA1 expression into a clinical decision, it is necessary to stratify patients into 2 groups that may have different prognoses. Currently, there is no standard method or standard software for biomarker cutoff determination.22 However, our approach was to use X-tile 3.6 plot software (developed by Camp et al23), which provides a single, global assessment of every possible way of dividing a population into low-, medium-, and high-level marker expression. 
Using an X-tile plot, we determined the significant, optimal cut point to be 30% (P = .0224). Therefore, we used 30% CLCA1 positive staining as a cutoff value to unequivocally categorize cases into 2 groups: high expression of CLCA1 (>30% of cells stained) and low expression of CLCA1 (none or ≤30% of cells stained). This was in accordance with other research.21,24", "The microarray data sources were obtained from the Gene Expression Omnibus (GEO).25 Three data sets (series accession numbers GSE4107, GSE28702, and GSE30540) were not subjected to any additional normalization because all had been normalized when we obtained them.19 In GSE4107, early CRC specimens (n = 12) and adjacent, grossly normal-appearing tissue (n = 10) at least 8 cm away were collected routinely and archived from patients undergoing colorectal resection at the Singapore General Hospital.19 In the GSE28702 study, 83 patients with unresectable CRC, including 56 patients with primary CRC and 27 patients with metastatic lesions in the liver (23 tumors), lungs (1 tumor), and peritoneum (3 tumors), were recruited from April 2007 to December 2010 at Teikyo University Hospital and Gifu University Hospital.26 All CRC samples were obtained before mFOLFOX6 therapy. GSE30540 included 25 chromosomal instability–high (CIN-high) CRC patients and 10 chromosomal instability–low (CIN-low) CRC patients.27 We analyzed the expression of CLCA1 in these published microarray data sets with the GEO software. The identity of genes across microarray data sets was established with public annotations primarily based on Unigene.28", "Statistical analysis was performed with Excel and Prism. The equality of group means and comparisons between proportions were analyzed with an unpaired Student t test and chi-square test, respectively. Univariate statistical analysis was performed with a log-rank test (Mantel-Cox). X-tile 3.6.1 plot software (http://medicine.yale.edu/lab/rimm/research/software.aspx) was used to determine the cutoff point of the CLCA1 expression level for separating all patients into 2 groups to examine the impact of CLCA1 expression on prognosis. The curves were plotted with the product-limit method (Kaplan-Meier) and were analyzed with Spearman correlation coefficients and Wilcoxon tests for all survival analyses. For covariates retained in the model, relative hazards with 95% confidence intervals were estimated. Differences with a P value of .05 or less were considered to be statistically significant.", " Patients and Tumors The characteristics of the 36 CRC patients in the study cohort are shown in Table 1. There were 11 women (30.6%) and 25 men (69.4%). The median age was 55.5 years with a range of 25 to 80 years. In terms of the anatomical location of the tumors, 15 (41.7%) were in the ascending colon, 2 (5.6%) were in the transverse colon, 3 (8.3%) were in the left colon, 6 (16.7%) were in the sigmoid colon, and 10 (27.8%) were in the rectum. Twenty-six patients (72.2%) had moderately or poorly differentiated tumors, with the remaining 10 (27.8%) having well-differentiated cancers. The Dukes staging was as follows: (A) 7 or 19.4%, (B) 18 or 50%, (C) 5 or 13.9%, and (D) 6 or 16.7%.\nThe characteristics of the 36 CRC patients in the study cohort are shown in Table 1. There were 11 women (30.6%) and 25 men (69.4%). The median age was 55.5 years with a range of 25 to 80 years. 
In terms of the anatomical location of the tumors, 15 (41.7%) were in the ascending colon, 2 (5.6%) were in the transverse colon, 3 (8.3%) were in the left colon, 6 (16.7%) were in the sigmoid colon, and 10 (27.8%) were in the rectum. Twenty-six patients (72.2%) had moderately or poorly differentiated tumors, with the remaining 10 (27.8%) having well-differentiated cancers. The Dukes staging was as follows: (A) 7 or 19.4%, (B) 18 or 50%, (C) 5 or 13.9%, and (D) 6 or 16.7%.\n Expression of CLCA1 and Clinical Grades of CRCs The level of expression of CLCA1 with respect to tumor staging is shown in Table 2. There was a high level of CLCA1 expression in normal colonic epithelium in stark contrast to the tumor tissue (Fig. 1A). The mean percentage of CLCA1-positive cells was 88% in the normal samples (n = 6) and 54% in the tumor samples (n = 36). In addition, the expression pattern of CLCA1 was predominantly membranous and cytoplasmic in normal colonic epithelium, but this pattern was altered in the tumor area, which showed an absence of cytoplasmic and/or membranous staining (Fig. 1A). Furthermore, we analyzed the relationship between the level of expression of CLCA1 and the primary tumor status (T1-T4), lymph nodes status (N0 or N1/N2), Dukes stage (A-D), and histological grade (well, moderately, or poorly differentiated). This analysis showed that the CLCA1 expression in noncancerous control mucosa samples was significantly higher than that in samples with tumors. There also was a significant difference between normal tissue and early CRC tissue (T1, P < .05). In addition, in the more advanced tumor stages (T3 and T4), the expression of CLCA1 was reduced in comparison with earlier stage tumors (T1/T2 vs T3/T4, P < .01; Fig. 1B and Table 2). Furthermore, CLCA1 expression levels in primary tumors were reduced significantly when the patients had positive lymph nodes (N1/N2, P < .01; Fig. 1C). In Dukes stage A and B tumors, there was much higher expression of CLCA1 in comparison with Dukes stage C and D tumors (P < .01; Fig. 1D and Table 2). An analysis by histological grades also showed that the expression of CLCA1 was reduced significantly in poorly differentiated tumors versus well-differentiated tumor (P < .01; Fig. 1E and Table 2). Our data indicate that low CLCA1 expression levels are associated with an advanced tumor stage, a less differentiated tumor histological grade, and metastases in regional lymph nodes.\nExpression Level of CLCA1 and Status of Primary Tumors\nAbbreviation: CLCA1, chloride channel accessory 1.\nExpression levels were compared with the Pearson chi-square test.\nThe expression of CLCA1 correlates with tumor stages, histological grades, and lymph node metastasis in colorectal cancer. (A) CLCA1 in normal colonic tissue showed (i) preservation of high levels of expression (brown) on cell membranes and in the cytoplasm, (ii) reduced expression on membranes and in the cytoplasm in colon cancer tissue, (iii) reduced expression in cytoplasm only in colon cancer tissue, and (iiii) reduced expression in rectal cancer. CLCA1 expression was strongly associated with (B) a different tumor status, (C) lymph node metastasis, (D) Dukes staging, and (E) histological grades. Expression levels of CLCA1 were quantified via the ratio of positive cells to total cells. Data are presented as means and standard errors of the mean. CLCA1 indicates chloride channel accessory 1.\nThe level of expression of CLCA1 with respect to tumor staging is shown in Table 2. 
Reduced Expression of CLCA1 Is an Indicator of the Likelihood of Disease Relapse and Poorer Survival
The median disease-free survival (DFS) for all patients was 34.5 months. The postoperative median DFS for patients with high expression of CLCA1 was 40 months, whereas that of patients with low CLCA1 expression was 23 months. CRC patients with reduced CLCA1 expression had a higher risk of disease relapse and death than patients with high CLCA1 expression (P < .01; Table 2). Kaplan-Meier analysis was used to evaluate the correlation between the survival of patients with CRC and the level of expression of CLCA1.
Patients were divided into 2 groups: CRC patients with high CLCA1 expression (>30%) and those with low CLCA1 expression (≤30%). Our data showed that the DFS for CRC patients whose tumors had preserved CLCA1 expression was significantly higher than the DFS for patients with low CLCA1 expression (P < .05; Fig. 2A). Kaplan-Meier analysis was also performed with stratification by the characteristics of the primary tumor and lymph node metastases. We classified patients into T1/T2 (n = 12) and T3/T4 (n = 24) groups because of the case numbers and because T1/T2 tumors carry a better prognosis than T3/T4 tumors,29 as do lymph node–negative tumors versus lymph node–positive tumors. Our results also showed that the primary tumor stage was correlated with the DFS of CRC patients. The DFS of patients with T3/T4 tumors was reduced significantly in comparison with the DFS of patients with T1/T2 tumors (P < .05; Fig. 2B).
Figure 2. CLCA1 expression levels and disease-free survival for CRC patients. Kaplan-Meier curves of disease-free survival are shown for CRC patients by (A) CLCA1 expression (P < .05), (B) tumor status (P < .05), (C) lymph node metastasis (P > .05), and (D) Ki-67 expression (P > .05). The differences between curves were analyzed with the log-rank test. CLCA1 indicates chloride channel accessory 1; CRC, colorectal cancer.

However, the lymph node status (N0 vs N1/N2) with respect to the DFS of this group of CRC patients showed a difference with a P value of .06 (Fig. 2C). Ki-67 has been studied as a prognosticator for CRC,30 but we found no significant association between the expression levels of Ki-67 and the survival of CRC patients (Fig. 2D). Overall, our results demonstrated that the CLCA1 expression level was a prognostic factor for the survival of patients with CRC in the univariate analysis. Patients with CRC characterized by high levels of CLCA1 expression had a favorable prognosis, whereas patients with CRC with low CLCA1 expression had a poorer survival rate.
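As a sketch of how a comparison like Figure 2A could be reproduced, the snippet below fits Kaplan-Meier DFS curves for the two CLCA1 groups with the lifelines package and reports the log-rank P value. `df` and its columns are the same hypothetical inputs assumed in the earlier cutoff sketch.

```python
# Kaplan-Meier DFS curves by CLCA1 group (sketch; see earlier df assumptions).
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

high = df["clca1_pct"] > 30  # cutoff chosen by the X-tile-style scan

ax = None
for label, mask in [("CLCA1 high (>30%)", high), ("CLCA1 low (<=30%)", ~high)]:
    kmf = KaplanMeierFitter()
    kmf.fit(df.loc[mask, "dfs_months"], df.loc[mask, "relapse"], label=label)
    ax = kmf.plot_survival_function(ax=ax)

res = logrank_test(
    df.loc[high, "dfs_months"], df.loc[~high, "dfs_months"],
    event_observed_A=df.loc[high, "relapse"],
    event_observed_B=df.loc[~high, "relapse"],
)
ax.set_xlabel("Months")
ax.set_ylabel("Disease-free survival")
ax.set_title(f"log-rank P = {res.p_value:.3f}")
plt.show()
```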
Correlation Between CLCA1 and PTEN, p53, E-Cadherin, and Ki-67 in CRC Patients
Abnormal expression or mutations of PTEN, p53, E-cadherin, and Ki-67 are associated with a poor prognosis for patients with CRC.15,18,30–32 We therefore examined the expression of PTEN, p53, Ki-67, and E-cadherin in patients and compared the expression levels of CLCA1 with those of these 4 tumor-associated genes. We found that 16 of 28 CRC patients (57%) had high Ki-67 expression (>50% score) in cancer cells. However, no correlation was found between the levels of CLCA1 and Ki-67 expression in the primary tumors (P = .29; Fig. 3A). Furthermore, we investigated the correlation between the expression of PTEN and CLCA1 and found no association between the expression levels of these 2 molecules (P = .95; Fig. 3B). However, CLCA1 expression levels correlated strongly and positively with p53 and E-cadherin expression levels (P < .01; Fig. 3C,D). Downregulation of E-cadherin at the membrane indicates a poor outcome.15 p53 is a tumor suppressor in CRC; mutations of p53 are associated with worse survival, and normal levels of p53 are required for CRCs to respond to chemotherapy.16 These data further confirm that low expression of CLCA1 appears to correlate strongly with a poor prognosis in CRC.

Figure 3. Correlation between the expression of CLCA1 and known prognostic markers for colorectal cancer. Expression levels of CLCA1, Ki-67, E-cad, PTEN, and p53 were analyzed through the percentages of positively staining cells. The correlations of CLCA1 expression with (A) Ki-67 (n = 28, R2 = 0.18, P = .29), (B) PTEN (n = 9, R2 = –0.02, P = .95), (C) E-cad (n = 12, R2 = 0.8, P = .0016), and (D) p53 (n = 15, R2 = 0.78, P = .0007) are shown; strong positive correlations between high CLCA1 expression and high E-cad and p53 expression levels are indicated. CLCA1 indicates chloride channel accessory 1; E-cad, E-cadherin; PTEN, phosphatase and tensin homolog.
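A marker-marker correlation like those in Figure 3 reduces to a rank (or product-moment) correlation over paired per-patient staining percentages. A minimal sketch with synthetic illustrative values, not the study's data:

```python
# Marker-marker correlation sketch (synthetic values, not study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
clca1 = rng.uniform(0, 100, size=15)             # % positive cells, marker 1
p53 = 0.8 * clca1 + rng.normal(0, 10, size=15)   # positively related marker 2

rho, p_s = stats.spearmanr(clca1, p53)   # rank correlation (named in Methods)
r, p_p = stats.pearsonr(clca1, p53)      # product-moment; r**2 gives an R2
print(f"Spearman rho = {rho:.2f} (P = {p_s:.4f}); Pearson R2 = {r**2:.2f}")
```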
Confirmation of CLCA1 Expression in CRC With Microarray Data Sets
To further validate our data, we analyzed 3 public, independent microarray data sets from GEO. Our analysis showed that the expression level of CLCA1 was reduced significantly in early CRC specimens (normal vs early CRC, P < .05; Fig. 4A). In comparison with nonmetastatic specimens, the expression of CLCA1 was downregulated significantly in metastatic CRC (P < .01; Fig. 4B). The CIN-high phenotype carries the worst survival for CRC patients.27 Our results showed that CRC with the CIN-high phenotype had lower expression of CLCA1 than CRC with the CIN-low phenotype (P = .015; Fig. 4C). Therefore, the results from these 3 independent microarray sets were consistent with the results from our clinical analysis.

Figure 4. Publicly available microarray data sets for validation. (A) In public microarray data set GSE4107, we analyzed the expression of CLCA1 in normal colon mucosa and CRC tissues; the expression of CLCA1 was reduced significantly in early CRC patients. (B) In the GSE28702 microarray gene set, low expression of CLCA1 was associated with CRC metastasis. (C) Analysis of the GSE30540 microarray gene set showed that low CLCA1 expression was associated with CRC with a CIN-high signature, which indicates significantly poorer survival in comparison with a CIN-low signature. CIN indicates chromosomal instability; CLCA1, chloride channel accessory 1; CRC, colorectal cancer.
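For readers who want to repeat the GEO check, series such as GSE4107 can be pulled programmatically. The sketch below uses the GEOparse package; the CLCA1 probe ID is left as a placeholder because it must be looked up in each series' platform (GPL) annotation, and the metadata key shown is a common convention rather than a guarantee for these particular series.

```python
# Sketch: extracting one gene's expression from a GEO series with GEOparse.
# The probe ID is a placeholder; look it up in the GPL annotation table.
import GEOparse

gse = GEOparse.get_GEO(geo="GSE4107", destdir="./geo_cache")
expr = gse.pivot_samples("VALUE")          # DataFrame: probes x samples

CLCA1_PROBE = "PROBE_ID_FOR_CLCA1"         # hypothetical placeholder
clca1 = expr.loc[CLCA1_PROBE]              # CLCA1 values across all samples

# Group labels (e.g., normal mucosa vs early CRC) come from GSM metadata.
groups = {name: gsm.metadata.get("characteristics_ch1", ["NA"])[0]
          for name, gsm in gse.gsms.items()}
```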
CLCA1 Regulates the Differentiation of Human CRC Cells
Culturing to confluence induces the spontaneous differentiation of human colon adenocarcinoma (Caco-2) cells.11,33–35 Using this model, we investigated further the functional role of CLCA1 in the differentiation of Caco-2 cells. First, we detected the expression of CLCA1 and the differentiation marker E-cadherin in a confluent culture. We found that the expression of both CLCA1 and E-cadherin increased in a time-dependent manner (Fig. 5A). These data suggest that the expression of CLCA1 may contribute to the spontaneous differentiation of Caco-2 cells. Next, we used stealth siRNA (siRNACLCA1) to knock down the expression of CLCA1 in Caco-2 cells. After 72 hours of transfection, cells were tested for the expression of CLCA1, ALPI, and E-cadherin by western blotting. We found that knockdown of CLCA1 significantly inhibited the expression of ALPI and E-cadherin (Fig. 5B).10 These results indicate that CLCA1 plays a key role in the regulation of the spontaneous differentiation of Caco-2 cells.

Figure 5. CLCA1 contributes to the differentiation of colorectal cancer cells. Human colon adenocarcinoma (Caco-2) cells were cultured to confluence for 10 days. (A) The expression of CLCA1 was upregulated by day 4 of confluent culture and persisted for up to 10 days of culturing; the mature epithelial marker E-cad increased in a time-dependent manner. The histogram shows the relative intensity of E-cad expressed as a ratio with respect to the GAPDH control. (B) Caco-2 cells were transfected transiently with 150 nM siRNACLCA1 and blotted for CLCA1, E-cad, and ALPI. siRNACLCA1 effectively inhibited CLCA1 and downregulated the expression of E-cad and ALPI. The histogram shows the relative intensities of CLCA1, E-cad, and ALPI expressed as ratios with respect to the GAPDH control. Data are presented as means and standard errors of the mean. All results are based on 3 independent experiments. ALPI indicates intestinal alkaline phosphatase; CLCA1, chloride channel accessory 1; Cont, control; E-cad, E-cadherin; GAPDH, glyceraldehyde 3-phosphate dehydrogenase; siRNA, small interfering RNA.
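The "relative intensity" histograms in Figure 5 come down to dividing each band's densitometry value by its GAPDH loading control and averaging over the 3 independent experiments. A minimal sketch, with placeholder intensities standing in for values read out of gel-analysis software (e.g., ImageJ):

```python
# Densitometry normalization sketch (placeholder intensities, not real data).
import numpy as np
from scipy import stats

# One value per independent experiment (n = 3), e.g. from ImageJ gel analysis
ecad_control  = np.array([1.00, 0.95, 1.08])
ecad_sirna    = np.array([0.42, 0.55, 0.48])
gapdh_control = np.array([1.00, 1.05, 1.10])
gapdh_sirna   = np.array([0.98, 0.92, 1.01])

rel_control = ecad_control / gapdh_control   # relative intensity vs GAPDH
rel_sirna = ecad_sirna / gapdh_sirna

mean_c, sem_c = rel_control.mean(), stats.sem(rel_control)
mean_s, sem_s = rel_sirna.mean(), stats.sem(rel_sirna)
t_stat, p = stats.ttest_ind(rel_control, rel_sirna)
print(f"control {mean_c:.2f}+/-{sem_c:.2f} vs siRNA {mean_s:.2f}+/-{sem_s:.2f}, "
      f"P = {p:.3f}")
```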
Discussion
In 2010, 40,695 people in the United Kingdom were diagnosed with bowel cancer, and 15,708 died from the disease. In CRC, one of the important challenges is the accurate prediction of which patients will experience disease relapse after surgery and which patients will require and most benefit from adjuvant therapies. At present, the TNM staging system is the gold standard for determining the prognosis of patients with CRC.
The staging system is dependent on the extent of local invasion, the degree of lymph node involvement, and the presence or absence of distant metastases. However, this system can cause problems, especially with respect to treatment decisions.36 For example, the number of paraffin blocks containing tumor tissue (tumor blocks) directly affects the likelihood of detecting submucosal, mesocolic, mesorectal, or peritoneal invasion: the greater the number of tumor blocks dissected, the higher the expected T stage.36 Hence, additional prognostic biomarkers are needed urgently for the improved management of CRC patients.37

The high tissue specificity of transcription of some CLCAs38 suggested initially that the detection of their expression in specific tissues might be useful for early diagnosis, as a means of molecular staging, and for postoperative surveillance.11 In this study, primary CRC tumors showed significantly lower expression of CLCA1 in both membranes and cytoplasm in comparison with normal colonic tissues. This may suggest that reduced CLCA1 expression is a biomarker of malignant cells. Patients at different stages of disease also show a large discrepancy in survival. Molecules involved in cancer relapse and prognosis might serve as markers for the early detection of metastasis and as measures for therapeutic intervention (eg, N-Myc downstream-regulated gene 2, E-cadherin, and p53).15,16,18,39 In CRC, CLCA1 and CLCA4 are downregulated significantly in approximately 80% of patients.11 The loss of expression of both CLCA1 and CLCA4 during tumorigenesis suggests that strong activation of either might inhibit the survival of tumor cells.4 Our findings showed that CLCA1 expression was downregulated significantly in CRC and was highly associated with known high-risk factors such as tumor status, lymph node metastasis, Dukes stage, and histological grade. Our data suggested that CLCA1 expression also decreased progressively from well-differentiated to poorly differentiated tumors and with the progression of tumor stages. Specifically, the expression of CLCA1 in advanced CRC was decreased further in comparison with early stages of CRC. This indicated that the level of CLCA1 expression in primary CRC might be associated with its prognosis.

Several molecular prognostic factors, such as p53, Ki-67, K-ras, and E-cadherin, are being evaluated in CRC patients,15,16,30,31 and it is still not possible to predict accurately the probability of recurrence of CRC in patients after surgery.32 Therefore, it is necessary to find reliable and sensitive markers that can help us to make well-informed decisions regarding which patients should receive chemotherapy and which should not. Here we have shown that the level of CLCA1 expression is correlated with the expression of known predictors: E-cadherin and p53. CLCA2 has been reported to be a p53-inducible inhibitor of cell proliferation and to be downregulated with tumor progression.1,6 Downregulation of E-cadherin at the membrane indicates a poorer prognosis.15 These results further support the notion that CLCA1 may be a tumor suppressor and that its loss may be associated with progressive potential.

Kaplan-Meier analysis estimates the probability of surviving for a given length of time and is used to compare the fraction of patients living for a certain amount of time after treatment.40 To further validate our hypothesis, we used Kaplan-Meier analysis to test the prognostic sensitivity of CLCA1 in CRC.
Although we analyzed a small group of 36 patients, we found that CRC patients with high CLCA1 expression levels had better DFS than CRC patients with low CLCA1 expression levels. The tumor stage as a univariate prognostic factor in CRC also correlated with the survival time of this group of patients. Higher CLCA1 expression was correlated with longer survival for patients with CRC. Extending DFS means preventing or delaying recurrence or metastasis, which is a clinical benefit. To validate the prognostic potential of CLCA1, we analyzed the expression of CLCA1 in normal and early CRC, primary CRC and metastases, and CIN-high and CIN-low phenotypes from 3 independent microarray data sets. Our results further confirmed that low expression of CLCA1 correlated with the tumorigenesis of CRC, metastasis, and a CIN-high gene signature. In this respect, our findings indicate that measurements of CLCA1 expression may help to identify patients at high risk of a poor outcome. Therefore, an assessment of CLCA1 levels could contribute to an accurate prediction of the prognosis and recurrence probability of patients after potentially curative surgery and, consequently, to individualized treatment for each patient.

Recent studies have shown that CLCA1 can increase cell differentiation through regulation of the proliferation-to-differentiation transition.10 CLCA2 also has been reported to suppress the epithelial-mesenchymal transition in breast cancer.1 The CLCA1 precursor is approximately 900 amino acids long with 1 proteolytic cleavage site after the amino-terminal signal sequence; eventually, 2 products of 90 and 30 to 40 kDa play functional roles.3,4,8,9 Using cultured Caco-2 monolayers as a model, we found that CLCA1 promotes intestinal epithelial differentiation through enhancement of E-cadherin and ALPI expression (Fig. 5).10 In some tumor types, including CRC, E-cadherin expression is often downregulated in highly invasive, poorly differentiated carcinomas.41,42 Decreased E-cadherin expression through the inhibition of CLCA1 might block cell differentiation and increase the likelihood of distant metastasis. Thus, the loss of expression of CLCA1 appears to be an important step in tumorigenic progression. Although the clinical role and therapeutic effect of CLCA1 are still to be investigated, the current study improves our understanding of the biological profile and behavior of CRC.

In summary, we have shown that aberrant CLCA1 expression is a sign of a poor outcome in CRC. Our data indicate that low levels of CLCA1 in primary CRC might be a powerful predictor of disease relapse and prognosis. Although further prospective studies will be needed to determine the actual clinical utility of this observation, our findings indicate that CLCA1 might be a sensitive prognostic marker for evaluating the recurrence, early metastasis, and prognosis of CRC. Moreover, it might also be a potential therapeutic target for molecular therapy.
[ "chloride channel accessory 1 (CLCA1) expression", "colorectal carcinoma", "prognosis", "cell differentiation" ]
Introduction: The chloride channel accessory (CLCA) family (also called the calcium-sensitive chloride conductance protein family) comprises 4 genes in humans and at least 6 genes in mice.1–5 All members of the CLCA family map to the same region on chromosome 1p31-p22 and share a high degree of homology in size, sequence, and predicted structure, but they differ significantly in their tissue distribution. The human genome encodes 3 functional CLCA proteins, including CLCA1, CLCA2, and CLCA4. CLCA2 has been identified to be a p53-inducible inhibitor of cell proliferation and to be a marker of differentiated epithelium that is downregulated with tumor progression.1,6 CLCA4 is downregulated in breast cancer cells, and low expression of CLCA4 indicates a poor prognosis for patients with breast cancer.7 CLCA4 and CLCA1 are expressed in intestinal epithelium.3,8–10 CLCA1 and CLCA4 may function as tumor suppressors and are associated negatively with tumorigenicity.11 We have shown that CLCA1 contributes to differentiation and proliferation inhibition in colon cell lines,10 but the role of CLCA1 in the prognosis of patients with colorectal cancer (CRC) remains unclear. In cancer progression, individual cells undergo complex and important interactions with the extracellular environment through transmembrane signal transduction. These interactions regulate cancer cell proliferation, invasion, and metastasis. Ion channels are a crucial first step in this process, and they play emerging roles in carcinogenesis and tumor progression.11 For example, activation of the chloride current through specialized volume-regulated anion channels in response to cell swelling is one of the major mechanisms by which cells restore their volume after hypo-osmotic stress (regulatory volume decrease, RVD).12 This is important because there is a direct link between the apoptotic resistance conferred by the antiapoptotic Bcl-2 protein and the strengthening of RVD capability due to upregulation of the chloride current and swelling.13 Therefore, further investigations will elucidate important roles for ion channels in cancer development and progression and potentially establish ion channels as effective discriminative markers and therapeutic targets.14 We have reported that CLCA1 is expressed in differentiated, growth-arrested mammalian epithelial cells but is downregulated during tumor progression.10 We have identified CLCA1 as a regulator of the transition from proliferation to differentiation in Caco-2 cells. In this study, we have determined further that the expression of CLCA1 in human CRC intestinal tissue is associated with the primary tumor status (the degree of invasion of the intestinal wall), lymph node metastases, and the overall survival rate. CLCA1 also is correlated closely with tumor suppressor p53 and E-cadherin, which has been determined to influence the prognosis of CRC.15–18 Our findings suggest that the expression level of CLCA1 may predict disease relapse and outcomes for patients with CRC. Material and Methods: Patients Thirty-six patients were diagnosed with CRC (26 with colon cancer and 10 with rectal cancer; Table 1), which was classified with the International Union Against Cancer TNM staging system and the Dukes staging system. All patients underwent surgical resection of their tumors at the 309th Hospital in Beijing, China. Ethical approval for the study was granted by the 309th Hospital's ethics committee. Informed written consent was obtained from all participants involved in the study. 
Clinical Characteristics of Patients With Colorectal Cancer The key clinical characteristics of the patients are summarized in Table 1. Normal specimens were obtained from 2 sources: 2 samples came from normal colon tissue of noncancer patients, and 4 samples were obtained from adjacent, grossly normal-appearing tissue taken at least 10 cm away from the cancer. This was in accordance with other research.19 All tumor samples were obtained from surgical resection. After surgical resection of their tumors, patients were followed up with clinical examinations, abdominal ultrasonography or abdominal computer tomography scans, and carcinoembryonic antigen measurements every 6 months for the first 5 years. Thereafter, these investigations were performed annually until 5 years after the initial treatment. The patients who died as a result of any postoperative complications or non–cancer-related diseases were excluded from the survival analysis. A patient with a family history suggestive of hereditary nonpolyposis colon cancer syndrome was excluded. None of the patients included in this study had chemotherapy or radiotherapy before surgery. In all, 36 CRC patients were included in the survival analysis. Adjuvant chemotherapy was given to all CRC patients except those with stage I disease (cancer had not invaded the outermost layers of the colon or rectum and had not spread to lymph nodes or distant sites). The mFOLFOX6 chemotherapy regimen consisted of a 2-hour intravenous infusion of oxaliplatin (85 mg/m2) and folinic acid (400 mg/m2), which was followed by an intravenous bolus injection of 5-fluorouracil (400 mg/m2) plus a 46-hour intravenous infusion of 5-fluorouracil (2400 mg/m2); this was repeated every 2 weeks. After 4 cycles of therapy, all lesions were assessed with computer tomography. Thirty-six patients were diagnosed with CRC (26 with colon cancer and 10 with rectal cancer; Table 1), which was classified with the International Union Against Cancer TNM staging system and the Dukes staging system. All patients underwent surgical resection of their tumors at the 309th Hospital in Beijing, China. Ethical approval for the study was granted by the 309th Hospital's ethics committee. Informed written consent was obtained from all participants involved in the study. Clinical Characteristics of Patients With Colorectal Cancer The key clinical characteristics of the patients are summarized in Table 1. Normal specimens were obtained from 2 sources: 2 samples came from normal colon tissue of noncancer patients, and 4 samples were obtained from adjacent, grossly normal-appearing tissue taken at least 10 cm away from the cancer. This was in accordance with other research.19 All tumor samples were obtained from surgical resection. After surgical resection of their tumors, patients were followed up with clinical examinations, abdominal ultrasonography or abdominal computer tomography scans, and carcinoembryonic antigen measurements every 6 months for the first 5 years. Thereafter, these investigations were performed annually until 5 years after the initial treatment. The patients who died as a result of any postoperative complications or non–cancer-related diseases were excluded from the survival analysis. A patient with a family history suggestive of hereditary nonpolyposis colon cancer syndrome was excluded. None of the patients included in this study had chemotherapy or radiotherapy before surgery. In all, 36 CRC patients were included in the survival analysis. 
Adjuvant chemotherapy was given to all CRC patients except those with stage I disease (cancer had not invaded the outermost layers of the colon or rectum and had not spread to lymph nodes or distant sites). The mFOLFOX6 chemotherapy regimen consisted of a 2-hour intravenous infusion of oxaliplatin (85 mg/m2) and folinic acid (400 mg/m2), which was followed by an intravenous bolus injection of 5-fluorouracil (400 mg/m2) plus a 46-hour intravenous infusion of 5-fluorouracil (2400 mg/m2); this was repeated every 2 weeks. After 4 cycles of therapy, all lesions were assessed with computer tomography. Clinical Follow-Up All patients were prospectively followed up according to the schedule described previously. The mean follow-up time in this study was 46 months (range, 7-52 months). The status of each patient was determined at the date of the last follow-up or at the end of a 5-year follow-up period, and if they were deceased, the cause of death was ascertained from the medical records and/or death certificate information. At the initial diagnosis, 6 patients with CRC had metastases. All patients were prospectively followed up according to the schedule described previously. The mean follow-up time in this study was 46 months (range, 7-52 months). The status of each patient was determined at the date of the last follow-up or at the end of a 5-year follow-up period, and if they were deceased, the cause of death was ascertained from the medical records and/or death certificate information. At the initial diagnosis, 6 patients with CRC had metastases. Histology and Colon Cancer Staging Resected tumors were obtained immediately after surgical resection and were fixed in 10% pH-neutral formalin and embedded in paraffin. Paraffin-embedded tissue sections (5 µm thick) were cut serially and used for hematoxylin-eosin staining and immunohistochemical analysis. The patients were staged according to the International Union Against Cancer TNM staging system and the Dukes staging system (Table 1).20 The Dukes staging system (used for colon cancer staging originally) was defined as follows: (A) tumor in the mucosa, (B) tumor in the muscle layer, (C) involvement of lymph nodes, and (D) distant metastases. Tumors were classified also by their degree of histological differentiation: (well differentiated [I]), moderately differentiated [II], or poorly differentiated [III]), the presence or absence of perineural invasion, the presence or absence of venous emboli, and the number of lymph nodes involved by the tumor. Resected tumors were obtained immediately after surgical resection and were fixed in 10% pH-neutral formalin and embedded in paraffin. Paraffin-embedded tissue sections (5 µm thick) were cut serially and used for hematoxylin-eosin staining and immunohistochemical analysis. The patients were staged according to the International Union Against Cancer TNM staging system and the Dukes staging system (Table 1).20 The Dukes staging system (used for colon cancer staging originally) was defined as follows: (A) tumor in the mucosa, (B) tumor in the muscle layer, (C) involvement of lymph nodes, and (D) distant metastases. Tumors were classified also by their degree of histological differentiation: (well differentiated [I]), moderately differentiated [II], or poorly differentiated [III]), the presence or absence of perineural invasion, the presence or absence of venous emboli, and the number of lymph nodes involved by the tumor. 
Immunohistochemistry After paraffin embedding, the tissue was cut into 5-µm-thick sections and mounted onto Superfrost Plus slides. The immunohistochemistry was performed on Leica Bond Max and included dewaxing and heat-induced epitope retrieval with Epitope Retrieval 1 (Leica Biosystems) at 100°C. Peroxidase block from the Bond Refine Detection System was used. The primary antibody CLCA1 (polyclone; Santa Cruz) was applied at a 1:100 dilution for 30 minutes at room temperature. The staining was completed with the Bond Polymer Refine Detection system. The Bond Polymer Refine Detection system contained a peroxide block, post primary, a polymer reagent, 3,3′-diaminobenzidine chromogen, and a hematoxylin counterstain (Leica Biosystems). p53, E-cadherin, phosphatase and tensin homolog (PTEN), and Ki-67 expression was evaluated according to the proportion of positively stained tumor cells. After paraffin embedding, the tissue was cut into 5-µm-thick sections and mounted onto Superfrost Plus slides. The immunohistochemistry was performed on Leica Bond Max and included dewaxing and heat-induced epitope retrieval with Epitope Retrieval 1 (Leica Biosystems) at 100°C. Peroxidase block from the Bond Refine Detection System was used. The primary antibody CLCA1 (polyclone; Santa Cruz) was applied at a 1:100 dilution for 30 minutes at room temperature. The staining was completed with the Bond Polymer Refine Detection system. The Bond Polymer Refine Detection system contained a peroxide block, post primary, a polymer reagent, 3,3′-diaminobenzidine chromogen, and a hematoxylin counterstain (Leica Biosystems). p53, E-cadherin, phosphatase and tensin homolog (PTEN), and Ki-67 expression was evaluated according to the proportion of positively stained tumor cells. Knockdown of CLCA1 and Western Blotting Caco-2 cells (5×105; American Type Culture Collection) were cultured on collagen I–precoated 6-well plates for the indicated time. Cells were lysed with a cell lysis buffer (Sigma-Aldrich) and a protease inhibitor cocktail (Thermo Scientific) for western blot analysis. Knockdown of CLCA1 in Caco-2 cells has been described previously.10 Briefly, a stealth RNAi small interfering RNA (siRNA) duplex with sense-strand sequences (5′-CAAUGCUACCCUGCCUCC AAUUACA-3′; Invitrogen, United Kingdom) was used to specifically target the CLCA1 gene. Caco-2 cells were transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol with a final siRNA concentration of 150 nM. Nontargeting negative control siRNA was used for non–sequence-specific effects of these molecules. After 72 hours, cells were lysed for western blot analysis. The primary antibodies used for western blotting included anti-CLCA1 (1:1000; Santa Cruz), anti–E-cadherin (1:5000; BD), anti–intestinal alkaline phosphatase (anti-ALPI; 1:2000, Novus), and anti–glyceraldehyde 3-phosphate dehydrogenase (1:50,000; Santa Cruz). Membranes were incubated with the relevant primary antibodies overnight at 4°C. A secondary antibody with horseradish peroxidase (1:5000; Sigma-Aldrich, United Kingdom) was used, and the immunoblots were detected with WesternBright ECL (AGTC Bioproducts). Caco-2 cells (5×105; American Type Culture Collection) were cultured on collagen I–precoated 6-well plates for the indicated time. Cells were lysed with a cell lysis buffer (Sigma-Aldrich) and a protease inhibitor cocktail (Thermo Scientific) for western blot analysis. 
Knockdown of CLCA1 in Caco-2 cells has been described previously.10 Briefly, a stealth RNAi small interfering RNA (siRNA) duplex with sense-strand sequences (5′-CAAUGCUACCCUGCCUCC AAUUACA-3′; Invitrogen, United Kingdom) was used to specifically target the CLCA1 gene. Caco-2 cells were transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol with a final siRNA concentration of 150 nM. Nontargeting negative control siRNA was used for non–sequence-specific effects of these molecules. After 72 hours, cells were lysed for western blot analysis. The primary antibodies used for western blotting included anti-CLCA1 (1:1000; Santa Cruz), anti–E-cadherin (1:5000; BD), anti–intestinal alkaline phosphatase (anti-ALPI; 1:2000, Novus), and anti–glyceraldehyde 3-phosphate dehydrogenase (1:50,000; Santa Cruz). Membranes were incubated with the relevant primary antibodies overnight at 4°C. A secondary antibody with horseradish peroxidase (1:5000; Sigma-Aldrich, United Kingdom) was used, and the immunoblots were detected with WesternBright ECL (AGTC Bioproducts). Quantitative Analysis Slides were assessed with light microscopy. All slides were analyzed by 2 independent observers (L.C. and J.P.) who were blinded to the clinical data. For each colon carcinoma, staining was evaluated on separate slides, which included the core and the invasive edge of the tumor, respectively. Slides were examined at ×400 (40× objective and 10× ocular) and were analyzed via the counting of all cells present in them (at least 5000 cells). All of the slides were reviewed independently by each researcher twice. Discrepancies between investigators (<10% of the cases) required a third joint observation with a conclusive agreement.21 The expression of CLCA1 was scored as the ratio of the number of positive cells to the total number of cells (stained cells/total number evaluated). The samples were considered CLCA1-positive if any positive staining was detected unless there were only rare, isolated single cells. To translate a continuous variable of CLCA1 expression into a clinical decision, it is necessary to stratify patients into 2 groups that may have different prognoses. Currently, there is no standard method or standard software for biomarker cutoff determination.22 However, our approach was to use X-tile 3.6 plot software (developed by Camp et al23), which provides a single, global assessment of every possible way of dividing a population into low-, medium-, and high-level marker expression. Using an X-tile plot, we determined the significant, optimal cut point to be 30% (P = .0224). Therefore, we used 30% CLCA1 positive staining as a cutoff value to unequivocally categorize cases into 2 groups: high expression of CLCA1 (>30% of cells stained) and low expression of CLCA1 (none or ≤30% of cells stained). This was in accordance with other research.21,24 Slides were assessed with light microscopy. All slides were analyzed by 2 independent observers (L.C. and J.P.) who were blinded to the clinical data. For each colon carcinoma, staining was evaluated on separate slides, which included the core and the invasive edge of the tumor, respectively. Slides were examined at ×400 (40× objective and 10× ocular) and were analyzed via the counting of all cells present in them (at least 5000 cells). All of the slides were reviewed independently by each researcher twice. 
Discrepancies between investigators (<10% of the cases) required a third joint observation with a conclusive agreement.21 The expression of CLCA1 was scored as the ratio of the number of positive cells to the total number of cells (stained cells/total number evaluated). The samples were considered CLCA1-positive if any positive staining was detected unless there were only rare, isolated single cells. To translate a continuous variable of CLCA1 expression into a clinical decision, it is necessary to stratify patients into 2 groups that may have different prognoses. Currently, there is no standard method or standard software for biomarker cutoff determination.22 However, our approach was to use X-tile 3.6 plot software (developed by Camp et al23), which provides a single, global assessment of every possible way of dividing a population into low-, medium-, and high-level marker expression. Using an X-tile plot, we determined the significant, optimal cut point to be 30% (P = .0224). Therefore, we used 30% CLCA1 positive staining as a cutoff value to unequivocally categorize cases into 2 groups: high expression of CLCA1 (>30% of cells stained) and low expression of CLCA1 (none or ≤30% of cells stained). This was in accordance with other research.21,24 Microarray Data Analysis The microarray data sources were obtained from the Gene Expression Omnibus (GEO).25 Three data sets (series accession numbers GSE4107, GSE28702, and GSE30540) were not subjected to any additional normalization because all had been normalized when we obtained them.19 In GSE4107, early CRC specimens (n = 12) and adjacent, grossly normal-appearing tissue (n = 10) at least 8 cm away were collected routinely and archived from patients undergoing colorectal resection at the Singapore General Hospital.19 In the GSE28702 study, 83 patients with unresectable CRC, including 56 patients with primary CRC and 27 patients with metastatic lesions in the liver (23 tumors), lungs (1 tumor), and peritoneum (3 tumors), were recruited from April 2007 to December 2010 at Teikyo University Hospital and Gifu University Hospital.26 All CRC samples were obtained before mFOLFOX6 therapy. GSE30540 included 25 chromosomal instability–high (CIN-high) CRC patients and 10 chromosomal instability–low (CIN-low) CRC patients.27 We analyzed the expression of CLCA1 in these published microarray data sets with the GEO software. The identity of genes across microarray data sets was established with public annotations primarily based on Unigene.28 The microarray data sources were obtained from the Gene Expression Omnibus (GEO).25 Three data sets (series accession numbers GSE4107, GSE28702, and GSE30540) were not subjected to any additional normalization because all had been normalized when we obtained them.19 In GSE4107, early CRC specimens (n = 12) and adjacent, grossly normal-appearing tissue (n = 10) at least 8 cm away were collected routinely and archived from patients undergoing colorectal resection at the Singapore General Hospital.19 In the GSE28702 study, 83 patients with unresectable CRC, including 56 patients with primary CRC and 27 patients with metastatic lesions in the liver (23 tumors), lungs (1 tumor), and peritoneum (3 tumors), were recruited from April 2007 to December 2010 at Teikyo University Hospital and Gifu University Hospital.26 All CRC samples were obtained before mFOLFOX6 therapy. 
Patients: Thirty-six patients were diagnosed with CRC (26 with colon cancer and 10 with rectal cancer; Table 1), which was classified with the International Union Against Cancer TNM staging system and the Dukes staging system. All patients underwent surgical resection of their tumors at the 309th Hospital in Beijing, China. Ethical approval for the study was granted by the 309th Hospital's ethics committee. Informed written consent was obtained from all participants involved in the study. The key clinical characteristics of the patients are summarized in Table 1 (Clinical Characteristics of Patients With Colorectal Cancer). Normal specimens were obtained from 2 sources: 2 samples came from normal colon tissue of noncancer patients, and 4 samples were obtained from adjacent, grossly normal-appearing tissue taken at least 10 cm away from the cancer. This was in accordance with other research.19 All tumor samples were obtained from surgical resection. After surgical resection of their tumors, patients were followed up with clinical examinations, abdominal ultrasonography or abdominal computed tomography scans, and carcinoembryonic antigen measurements every 6 months for the first 5 years. Thereafter, these investigations were performed annually until 5 years after the initial treatment.
The patients who died as a result of any postoperative complications or non–cancer-related diseases were excluded from the survival analysis. A patient with a family history suggestive of hereditary nonpolyposis colon cancer syndrome was excluded. None of the patients included in this study had chemotherapy or radiotherapy before surgery. In all, 36 CRC patients were included in the survival analysis. Adjuvant chemotherapy was given to all CRC patients except those with stage I disease (cancer had not invaded the outermost layers of the colon or rectum and had not spread to lymph nodes or distant sites). The mFOLFOX6 chemotherapy regimen consisted of a 2-hour intravenous infusion of oxaliplatin (85 mg/m²) and folinic acid (400 mg/m²), which was followed by an intravenous bolus injection of 5-fluorouracil (400 mg/m²) plus a 46-hour intravenous infusion of 5-fluorouracil (2400 mg/m²); this was repeated every 2 weeks. After 4 cycles of therapy, all lesions were assessed with computed tomography.

Clinical Follow-Up: All patients were prospectively followed up according to the schedule described previously. The mean follow-up time in this study was 46 months (range, 7-52 months). The status of each patient was determined at the date of the last follow-up or at the end of a 5-year follow-up period, and if they were deceased, the cause of death was ascertained from the medical records and/or death certificate information. At the initial diagnosis, 6 patients with CRC had metastases.

Histology and Colon Cancer Staging: Resected tumors were obtained immediately after surgical resection and were fixed in 10% pH-neutral formalin and embedded in paraffin. Paraffin-embedded tissue sections (5 µm thick) were cut serially and used for hematoxylin-eosin staining and immunohistochemical analysis. The patients were staged according to the International Union Against Cancer TNM staging system and the Dukes staging system (Table 1).20 The Dukes staging system (originally used for colon cancer staging) was defined as follows: (A) tumor in the mucosa, (B) tumor in the muscle layer, (C) involvement of lymph nodes, and (D) distant metastases. Tumors were also classified by their degree of histological differentiation (well differentiated [I], moderately differentiated [II], or poorly differentiated [III]), the presence or absence of perineural invasion, the presence or absence of venous emboli, and the number of lymph nodes involved by the tumor.

Immunohistochemistry: After paraffin embedding, the tissue was cut into 5-µm-thick sections and mounted onto Superfrost Plus slides. The immunohistochemistry was performed on a Leica Bond Max and included dewaxing and heat-induced epitope retrieval with Epitope Retrieval 1 (Leica Biosystems) at 100°C. Peroxidase block from the Bond Refine Detection System was used. The primary antibody against CLCA1 (polyclonal; Santa Cruz) was applied at a 1:100 dilution for 30 minutes at room temperature. The staining was completed with the Bond Polymer Refine Detection system, which contained a peroxide block, a post-primary reagent, a polymer reagent, 3,3′-diaminobenzidine chromogen, and a hematoxylin counterstain (Leica Biosystems). p53, E-cadherin, phosphatase and tensin homolog (PTEN), and Ki-67 expression was evaluated according to the proportion of positively stained tumor cells.

Knockdown of CLCA1 and Western Blotting: Caco-2 cells (5 × 10⁵; American Type Culture Collection) were cultured on collagen I–precoated 6-well plates for the indicated time.
Cells were lysed with a cell lysis buffer (Sigma-Aldrich) and a protease inhibitor cocktail (Thermo Scientific) for western blot analysis. Knockdown of CLCA1 in Caco-2 cells has been described previously.10 Briefly, a stealth RNAi small interfering RNA (siRNA) duplex with sense-strand sequence 5′-CAAUGCUACCCUGCCUCCAAUUACA-3′ (Invitrogen, United Kingdom) was used to specifically target the CLCA1 gene. Caco-2 cells were transfected with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocol with a final siRNA concentration of 150 nM. Nontargeting negative control siRNA was used to control for non–sequence-specific effects of these molecules. After 72 hours, cells were lysed for western blot analysis. The primary antibodies used for western blotting included anti-CLCA1 (1:1000; Santa Cruz), anti–E-cadherin (1:5000; BD), anti–intestinal alkaline phosphatase (anti-ALPI; 1:2000; Novus), and anti–glyceraldehyde 3-phosphate dehydrogenase (1:50,000; Santa Cruz). Membranes were incubated with the relevant primary antibodies overnight at 4°C. A horseradish peroxidase–conjugated secondary antibody (1:5000; Sigma-Aldrich, United Kingdom) was used, and the immunoblots were detected with WesternBright ECL (AGTC Bioproducts).

Quantitative Analysis: Slides were assessed with light microscopy. All slides were analyzed by 2 independent observers (L.C. and J.P.) who were blinded to the clinical data. For each colon carcinoma, staining was evaluated on separate slides, which included the core and the invasive edge of the tumor, respectively. Slides were examined at ×400 (40× objective and 10× ocular) and were analyzed by counting all cells present (at least 5000 cells). All of the slides were reviewed independently by each researcher twice. Discrepancies between investigators (<10% of the cases) required a third joint observation with a conclusive agreement.21 The expression of CLCA1 was scored as the ratio of the number of positive cells to the total number of cells (stained cells/total number evaluated). The samples were considered CLCA1-positive if any positive staining was detected unless there were only rare, isolated single cells. To translate a continuous variable of CLCA1 expression into a clinical decision, it is necessary to stratify patients into 2 groups that may have different prognoses. Currently, there is no standard method or standard software for biomarker cutoff determination.22 However, our approach was to use X-tile 3.6 plot software (developed by Camp et al23), which provides a single, global assessment of every possible way of dividing a population into low-, medium-, and high-level marker expression. Using an X-tile plot, we determined the significant, optimal cut point to be 30% (P = .0224). Therefore, we used 30% CLCA1-positive staining as a cutoff value to unequivocally categorize cases into 2 groups: high expression of CLCA1 (>30% of cells stained) and low expression of CLCA1 (none or ≤30% of cells stained). This was in accordance with other research.21,24
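As an illustration of the scoring and cut-point logic described above, the minimal Python sketch below computes the positive-cell ratio, applies the 30% dichotomy, and emulates X-tile's core idea of scanning every candidate split with a log-rank test. It assumes the lifelines package; the scan omits the multiple-testing handling of the real X-tile tool, and all inputs are placeholders rather than study data.

```python
from lifelines.statistics import logrank_test

def clca1_score(stained_cells: int, total_cells: int) -> float:
    """Expression score = stained cells / total cells evaluated."""
    return stained_cells / total_cells

def classify(score: float, cutoff: float = 0.30) -> str:
    """Dichotomize at the 30% cut point chosen with X-tile."""
    return "high" if score > cutoff else "low"

def scan_cutoffs(scores, months, events, candidates):
    """Emulate X-tile's core idea: try every candidate split and keep
    the cut point whose log-rank test gives the smallest P value.
    (No multiple-testing correction is attempted in this sketch.)"""
    best = None
    for c in candidates:
        high = [i for i, s in enumerate(scores) if s > c]
        low = [i for i, s in enumerate(scores) if s <= c]
        if not high or not low:
            continue  # a valid split must leave both groups nonempty
        res = logrank_test(
            [months[i] for i in high], [months[i] for i in low],
            event_observed_A=[events[i] for i in high],
            event_observed_B=[events[i] for i in low],
        )
        if best is None or res.p_value < best[1]:
            best = (c, res.p_value)
    return best  # (cutoff, p_value)

# Example: 1800 stained cells out of 5000 evaluated -> 0.36 -> "high".
print(classify(clca1_score(stained_cells=1800, total_cells=5000)))
```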
Microarray Data Analysis: The microarray data sources were obtained from the Gene Expression Omnibus (GEO).25 Three data sets (series accession numbers GSE4107, GSE28702, and GSE30540) were not subjected to any additional normalization because all had been normalized when we obtained them.19 In GSE4107, early CRC specimens (n = 12) and adjacent, grossly normal-appearing tissue (n = 10) at least 8 cm away were collected routinely and archived from patients undergoing colorectal resection at the Singapore General Hospital.19 In the GSE28702 study, 83 patients with unresectable CRC, including 56 patients with primary CRC and 27 patients with metastatic lesions in the liver (23 tumors), lungs (1 tumor), and peritoneum (3 tumors), were recruited from April 2007 to December 2010 at Teikyo University Hospital and Gifu University Hospital.26 All CRC samples were obtained before mFOLFOX6 therapy. GSE30540 included 25 chromosomal instability–high (CIN-high) CRC patients and 10 chromosomal instability–low (CIN-low) CRC patients.27 We analyzed the expression of CLCA1 in these published microarray data sets with the GEO software. The identity of genes across microarray data sets was established with public annotations primarily based on Unigene.28

Statistical Analysis: Statistical analysis was performed with Excel and Prism. The equality of group means and comparisons between proportions were analyzed with an unpaired Student t test and a chi-square test, respectively. Univariate statistical analysis was performed with a log-rank (Mantel-Cox) test. X-tile 3.6.1 plot software (http://medicine.yale.edu/lab/rimm/research/software.aspx) was used to determine the cutoff point of the CLCA1 expression level for separating all patients into 2 groups to examine the impact of CLCA1 expression on prognosis. Survival curves were plotted with the product-limit (Kaplan-Meier) method and were analyzed with Spearman correlation coefficients and Wilcoxon tests for all survival analyses. For covariates retained in the model, relative hazards with 95% confidence intervals were estimated. Differences with a P value of .05 or less were considered to be statistically significant.
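The two basic comparisons named above (an unpaired Student t test for group means and a chi-square test for proportions) can be reproduced with scipy, as in this minimal sketch. The arrays and the 2 × 2 table are illustrative placeholders, not the study's raw data.

```python
import numpy as np
from scipy import stats

# Placeholder data: CLCA1-positive cell percentages in normal vs tumor
# tissue (illustrative values only).
normal = np.array([90, 85, 92, 88, 84, 89], dtype=float)
tumor = np.array([60, 45, 70, 30, 55, 66], dtype=float)

# Equality of group means: unpaired (two-sample) Student t test.
t_stat, p_means = stats.ttest_ind(normal, tumor)

# Comparison of proportions: chi-square test on a 2x2 table of
# CLCA1-high vs CLCA1-low counts in two hypothetical patient groups.
table = np.array([[14, 4],   # e.g., Dukes A/B: high, low
                  [5, 13]])  # e.g., Dukes C/D: high, low
chi2, p_prop, dof, _expected = stats.chi2_contingency(table)

print(f"t test P = {p_means:.4f}; chi-square P = {p_prop:.4f}")
```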
Results: Patients and Tumors: The characteristics of the 36 CRC patients in the study cohort are shown in Table 1. There were 11 women (30.6%) and 25 men (69.4%). The median age was 55.5 years with a range of 25 to 80 years. In terms of the anatomical location of the tumors, 15 (41.7%) were in the ascending colon, 2 (5.6%) were in the transverse colon, 3 (8.3%) were in the left colon, 6 (16.7%) were in the sigmoid colon, and 10 (27.8%) were in the rectum. Twenty-six patients (72.2%) had moderately or poorly differentiated tumors, with the remaining 10 (27.8%) having well-differentiated cancers. The Dukes staging was as follows: A, 7 (19.4%); B, 18 (50%); C, 5 (13.9%); and D, 6 (16.7%).

Expression of CLCA1 and Clinical Grades of CRCs: The level of expression of CLCA1 with respect to tumor staging is shown in Table 2. There was a high level of CLCA1 expression in normal colonic epithelium in stark contrast to the tumor tissue (Fig. 1A). The mean percentage of CLCA1-positive cells was 88% in the normal samples (n = 6) and 54% in the tumor samples (n = 36). In addition, the expression pattern of CLCA1 was predominantly membranous and cytoplasmic in normal colonic epithelium, but this pattern was altered in the tumor area, which showed an absence of cytoplasmic and/or membranous staining (Fig. 1A). Furthermore, we analyzed the relationship between the level of expression of CLCA1 and the primary tumor status (T1-T4), lymph node status (N0 or N1/N2), Dukes stage (A-D), and histological grade (well, moderately, or poorly differentiated). This analysis showed that the CLCA1 expression in noncancerous control mucosa samples was significantly higher than that in samples with tumors. There also was a significant difference between normal tissue and early CRC tissue (T1, P < .05). In addition, in the more advanced tumor stages (T3 and T4), the expression of CLCA1 was reduced in comparison with earlier stage tumors (T1/T2 vs T3/T4, P < .01; Fig. 1B and Table 2). Furthermore, CLCA1 expression levels in primary tumors were reduced significantly when the patients had positive lymph nodes (N1/N2, P < .01; Fig. 1C). In Dukes stage A and B tumors, there was much higher expression of CLCA1 in comparison with Dukes stage C and D tumors (P < .01; Fig. 1D and Table 2). An analysis by histological grades also showed that the expression of CLCA1 was reduced significantly in poorly differentiated tumors versus well-differentiated tumors (P < .01; Fig. 1E and Table 2). Our data indicate that low CLCA1 expression levels are associated with an advanced tumor stage, a less differentiated tumor histological grade, and metastases in regional lymph nodes.

Table 2. Expression Level of CLCA1 and Status of Primary Tumors. Abbreviation: CLCA1, chloride channel accessory 1. Expression levels were compared with the Pearson chi-square test.

Figure 1. The expression of CLCA1 correlates with tumor stages, histological grades, and lymph node metastasis in colorectal cancer. (A) CLCA1 in normal colonic tissue showed (i) preservation of high levels of expression (brown) on cell membranes and in the cytoplasm, (ii) reduced expression on membranes and in the cytoplasm in colon cancer tissue, (iii) reduced expression in cytoplasm only in colon cancer tissue, and (iv) reduced expression in rectal cancer. CLCA1 expression was strongly associated with (B) a different tumor status, (C) lymph node metastasis, (D) Dukes staging, and (E) histological grades. Expression levels of CLCA1 were quantified via the ratio of positive cells to total cells. Data are presented as means and standard errors of the mean. CLCA1 indicates chloride channel accessory 1.
Reduced Expression of CLCA1 Is an Indicator of the Likelihood of Disease Relapse and Poorer Survival: The median disease-free survival (DFS) for all patients was 34.5 months. The postoperative median DFS for patients with high expression of CLCA1 was 40 months, whereas that of patients with low CLCA1 expression was 23 months. CRC patients with reduced CLCA1 expression had a higher risk of disease relapse and death than patients with high CLCA1 expression (P < .01; Table 2). Kaplan-Meier analysis was used to evaluate the correlation between the survival of patients with CRC and the level of expression of CLCA1. Patients were divided into 2 groups: CRC patients with high CLCA1 expression (>30%) and those with low CLCA1 expression (≤30%). Our data showed that the DFS for CRC patients whose tumors had preserved CLCA1 expression was significantly higher than the DFS for patients with low CLCA1 expression (P < .05; Fig. 2A).
Kaplan-Meier analysis was also performed with stratification by the characteristics of the primary tumor and lymph node metastases. We classified patients into T1/T2 (n = 12) and T3/T4 (n = 24) groups because of the case numbers and the better prognosis of T1/T2 tumors versus T3/T4 tumors29 and of lymph node–negative tumors versus lymph node–positive tumors. Our results also showed that the primary tumor stage was correlated with the DFS of CRC patients. The DFS of patients with T3/T4 tumors was reduced significantly in comparison with the DFS of patients with T1/T2 tumors (P < .05; Fig. 2B). However, the lymph node status (N0 vs N1/N2) with respect to the DFS of this group of CRC patients showed a difference with a P value of .06 (Fig. 2C). Ki-67 has been studied as a prognosticator for CRC,30 but we showed that there was no significant association between the expression levels of Ki-67 and the survival of CRC patients (Fig. 2D). Overall, our results demonstrated that the CLCA1 expression level was a prognostic factor for the survival of patients with CRC in the univariate analysis. Patients with CRC characterized by high levels of CLCA1 expression had a favorable prognosis, whereas patients with CRC with low CLCA1 expression had a poorer survival rate.

Figure 2. CLCA1 expression levels and disease-free survival for CRC patients. Kaplan-Meier curves of disease-free survival are shown for CRC patients: (A) different CLCA1 expression (P < .05), (B) tumor status (P < .05), (C) lymph node metastasis (P > .05), and (D) Ki-67 expression (P > .05). The differences between curves were analyzed with the log-rank test. CLCA1 indicates chloride channel accessory 1; CRC, colorectal cancer.
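A minimal sketch of the Kaplan-Meier/log-rank workflow underlying Figure 2 follows, assuming the Python lifelines package; the toy cohort is invented for illustration and does not reproduce the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Placeholder cohort: months of disease-free survival, relapse events
# (1 = relapse/death, 0 = censored), and the 30% CLCA1 dichotomy.
df = pd.DataFrame({
    "months": [40, 52, 23, 46, 12, 50, 18, 44, 30, 48],
    "event":  [0, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "clca1":  ["high", "high", "low", "high", "low",
               "high", "low", "high", "low", "high"],
})

# Fit and plot one product-limit (Kaplan-Meier) curve per group.
kmf = KaplanMeierFitter()
for group, sub in df.groupby("clca1"):
    kmf.fit(sub["months"], event_observed=sub["event"],
            label=f"CLCA1 {group}")
    kmf.plot_survival_function()

# Compare the two curves with a log-rank (Mantel-Cox) test.
hi, lo = df[df.clca1 == "high"], df[df.clca1 == "low"]
res = logrank_test(hi["months"], lo["months"],
                   event_observed_A=hi["event"],
                   event_observed_B=lo["event"])
print(f"log-rank P = {res.p_value:.4f}")
```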
Correlation Between CLCA1 and PTEN, p53, E-Cadherin, and Ki-67 in CRC Patients: Abnormal expression or mutations of PTEN, p53, E-cadherin, and Ki-67 are associated with a poor prognosis for patients with CRC.15,18,30–32 We also checked the expression of PTEN, p53, Ki-67, and E-cadherin in patients and compared the relationship between expression levels of CLCA1 and these 4 tumor-associated genes. We found that 16 of 28 CRC patients (57%) had high expression of Ki-67 (>50% score) in cancer cells. However, no correlation was found between the levels of CLCA1 and Ki-67 expression in the primary tumors (P = .29; Fig. 3A). Furthermore, we investigated the correlation between the expression of PTEN and CLCA1 and found no association between the expression levels of these 2 molecules (P = .95; Fig. 3B). However, CLCA1 expression levels correlated strongly and positively with the levels of p53 and E-cadherin expression (P < .01; Fig. 3C,D). The downregulation of E-cadherin at the membrane indicated a poor outcome.15 p53 is a tumor suppressor in CRC. Mutations of p53 are associated with worse survival, and normal levels of p53 are required for CRCs to respond to chemotherapy.16 These data further confirm that low expression of CLCA1 appears to correlate strongly with a poor prognosis of CRC.

Figure 3. Correlation between the expression of CLCA1 and known prognostic markers for colorectal cancer. Expression levels of CLCA1, Ki-67, E-cad, PTEN, and p53 were analyzed through the percentages of positively staining cells. The correlations of CLCA1 expression with (A) Ki-67 (n = 28, R² = 0.18, P = .29), (B) PTEN (n = 9, R² = –0.02, P = .95), (C) E-cad (n = 12, R² = 0.8, P = .0016), and (D) p53 (n = 15, R² = 0.78, P = .0007) are shown, and strong positive correlations between high CLCA1 expression and high E-cad and p53 expression levels are indicated. CLCA1 indicates chloride channel accessory 1; E-cad, E-cadherin; PTEN, phosphatase and tensin homolog.
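The marker-versus-marker comparisons in Figure 3 amount to paired correlation tests on percent-positive scores. A minimal scipy sketch with invented paired scores follows; the study's own software (Excel/Prism) computes the same statistics.

```python
from scipy import stats

# Placeholder paired scores (percent positive cells) for CLCA1 and a
# comparison marker measured in the same tumors; illustrative only.
clca1 = [80, 65, 70, 40, 30, 90, 55, 20, 75, 60, 35, 85]
ecad  = [78, 60, 72, 35, 28, 88, 50, 25, 70, 58, 30, 80]

# Pearson correlation; squaring r gives the R^2 reported in Figure 3.
r, p = stats.pearsonr(clca1, ecad)
print(f"Pearson r = {r:.2f}, R^2 = {r * r:.2f}, P = {p:.4f}")

# A rank-based alternative that is robust to non-normal scores.
rho, p_s = stats.spearmanr(clca1, ecad)
print(f"Spearman rho = {rho:.2f}, P = {p_s:.4f}")
```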
Confirmation of CLCA1 Expression in CRC With Microarray Data Sets: To further validate our data, we analyzed the 3 public, independent microarray data sets from GEO. Our analysis showed that the expression level of CLCA1 was inhibited significantly in early CRC specimens (normal vs early CRC, P < .05; Fig. 4A). In comparison with nonmetastatic specimens, the expression of CLCA1 was downregulated significantly in metastatic CRC (P < .01; Fig. 4B). CIN-high showed the worst survival for CRC patients.27 Our results showed that CRC with the CIN-high phenotype had lower expression of CLCA1 than CRC with the CIN-low phenotype (P = .015; Fig. 4C). Therefore, the results from these 3 independent microarray sets were consistent with the results from our clinical analysis data.

Figure 4. Publicly available microarray data sets for validation. (A) In public microarray data set GSE4107, we analyzed the expression of CLCA1 in normal colon mucosa and CRC tissues. The results showed that the expression of CLCA1 was inhibited significantly in early CRC patients. (B) In the GSE28702 microarray gene set, low expression of CLCA1 was associated with CRC metastasis. (C) The analysis of the GSE30540 microarray gene set showed that low CLCA1 expression was associated with CRC with a CIN-high signature, and this indicated significantly poorer survival in comparison with a CIN-low signature. CIN indicates chromosomal instability; CLCA1, chloride channel accessory 1; CRC, colorectal cancer.
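The GEO analyses above were run with GEO's own web tools; as an assumed programmatic alternative, the GEOparse Python package can fetch the same series. The probe identifier below is a placeholder, since CLCA1's probe IDs depend on each platform's annotation table.

```python
import GEOparse

# Download one of the series named above (cached locally in ./geo).
gse = GEOparse.get_GEO(geo="GSE4107", destdir="./geo")

# Each GSM is one sample; pivot into a probes-by-samples table.
expr = gse.pivot_samples("VALUE")

# The probe IDs mapping to CLCA1 would be looked up in the platform
# (GPL) annotation table; this ID is a placeholder, not a verified
# identifier for this platform.
clca1_probe = "PLACEHOLDER_PROBE_ID"
if clca1_probe in expr.index:
    print(expr.loc[clca1_probe])
```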
CLCA1 Regulates the Differentiation of Human CRC Cells: Culturing to confluence induces the spontaneous differentiation of human colon adenocarcinoma cells (Caco-2).11,33–35 Using this model, we further investigated the functional role of CLCA1 in the differentiation of Caco-2 cells. First, we detected the expression of CLCA1 and the differentiation marker E-cadherin in a confluent culture. We found that the expression of both CLCA1 and E-cadherin was increased in a time-dependent manner (Fig. 5A). These data suggest that the expression of CLCA1 may contribute to the spontaneous differentiation of Caco-2 cells. Next, we used stealth siRNA (siRNACLCA1) to knock down the expression of CLCA1 in Caco-2 cells. After 72 hours of transfection, cells were tested for the expression of CLCA1, ALPI, and E-cadherin by western blotting. We found that knockdown of CLCA1 inhibited expression levels of ALPI and E-cadherin significantly (Fig. 5B).10 These results indicate that CLCA1 expression plays a key role in the regulation of the spontaneous differentiation of Caco-2 cells.

Figure 5. CLCA1 contributes to the differentiation of colorectal cancer cells. Human colon adenocarcinoma cells (Caco-2) were cultured to confluence for 10 days. (A) The expression of CLCA1 was upregulated by day 4 in a confluent culture and remained elevated for up to 10 days of culturing. The mature epithelial marker E-cad increased in a time-dependent manner. The histogram shows the relative intensity of E-cad expressed as a ratio with respect to the GAPDH control. (B) Caco-2 cells were transfected transiently with 150 nM siRNACLCA1 and blotted for CLCA1, E-cad, and ALPI. siRNACLCA1 effectively inhibited CLCA1 and downregulated the expression of E-cad and ALPI. The histogram shows the relative intensity of CLCA1, E-cad, and ALPI expressed as a ratio with respect to the GAPDH control. Data are presented as means and standard errors of the mean. All results were analyzed on the basis of 3 independent experiments. ALPI indicates intestinal alkaline phosphatase; CLCA1, chloride channel accessory 1; Cont, control; E-cad, E-cadherin; GAPDH, glyceraldehyde 3-phosphate dehydrogenase; siRNA, small interfering RNA.
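The Figure 5 histograms report band intensity as a ratio to the GAPDH loading control, averaged over 3 independent experiments. A minimal sketch of that normalization and the mean ± SEM computation, with made-up densitometry values, follows.

```python
import statistics

# Hypothetical densitometry readings (arbitrary units) from 3
# independent experiments: (target band, GAPDH loading control).
ecad_runs = [(1520, 980), (1760, 1010), (1410, 950)]

# Normalize each run to its own GAPDH signal.
ratios = [band / gapdh for band, gapdh in ecad_runs]

mean = statistics.mean(ratios)
sem = statistics.stdev(ratios) / len(ratios) ** 0.5  # SEM = SD / sqrt(n)
print(f"E-cad/GAPDH = {mean:.2f} ± {sem:.2f} (mean ± SEM, n = 3)")
```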
Discussion: In 2010, 40,695 people in the United Kingdom were diagnosed with bowel cancer, and 15,708 died from the disease. In CRC, one of the important challenges is the accurate prediction of which patients will experience disease relapse after surgery and which patients will require and most benefit from adjuvant therapies. At present, the TNM staging system for tumors is the gold standard for determining the prognosis of patients with CRC. The staging system is dependent on the extent of local invasion, the degree of lymph node involvement, and the presence or absence of distant metastases.
However, this system can cause problems, especially with respect to treatment decisions.36 For example, the number of paraffin blocks containing tumor tissue (tumor blocks) directly affects the likelihood of detecting submucosal, mesocolic, mesorectal, or peritoneal invasion, and the greater the number of tumor blocks dissected, the higher the expected T stage.36 Hence, additional prognostic biomarkers are urgently needed for the improved management of CRC patients.37 The high tissue specificity of transcription of some CLCAs38 initially suggested that the detection of their expression in specific tissues might be useful for early diagnosis, as a means of molecular staging, and for postoperative surveillance.11 In this study, primary CRC tumors showed significantly lower expression of CLCA1 in both membranes and cytoplasm in comparison with the normal colonic tissues. This may suggest that reduced CLCA1 expression is a biomarker of malignant cells. Patients at different stages of disease also show large differences in survival. Molecules involved in cancer relapse and prognosis might serve as markers for the early detection of metastasis and as a measure for therapeutic intervention (eg, N-Myc downstream-regulated gene 2, E-cadherin, and p53).15,16,18,39 In CRC, CLCA1 and CLCA4 are downregulated significantly in approximately 80% of patients.11 The loss of expression of both CLCA1 and CLCA4 during tumorigenesis suggests that strong activation of either might inhibit the survival of tumor cells.4 Our findings showed that CLCA1 expression was downregulated significantly in CRC and was highly associated with known high-risk factors such as the tumor status, lymph node metastasis, Dukes grade, and histological staging. Our data suggested that CLCA1 expression also decreased progressively from good to poor differentiation and with the progression of tumor stages. Specifically, the expression of CLCA1 in advanced CRC was decreased further in comparison with early stages of CRC. This indicated that the level of CLCA1 expression in primary CRC might be associated with its prognosis. Several molecular prognostic factors, such as p53, Ki-67, K-ras, and E-cadherin, are being evaluated in CRC patients,15,16,30,31 and it is still not possible to accurately predict the probability of recurrence of CRC in patients after surgery.32 Therefore, it is necessary to find reliable and sensitive markers that can help us to make well-informed decisions regarding which patients should receive chemotherapy and which should not. Here we have shown that the level of CLCA1 expression is correlated with the expression of known predictors: E-cadherin and p53. CLCA2 has been reported to be a p53-inducible inhibitor of cell proliferation and to be downregulated with tumor progression.1,6 Downregulation of E-cadherin at the membrane indicates a poorer prognosis.15 These results further support the notion that CLCA1 may be a tumor suppressor and may be associated with progressive potential. Kaplan-Meier survival analysis estimates the probability of surviving for a given length of time and is used to compare the fraction of patients living for a certain amount of time after treatment.40 To further validate our hypothesis, we used Kaplan-Meier analysis to test the prognostic sensitivity of CLCA1 in CRC. Although we analyzed a small group of 36 patients, we found that CRC patients with high CLCA1 expression levels had better DFS than CRC patients with low CLCA1 expression levels.
The tumor stage as a univariate prognostic factor in CRC also correlated with the survival time of this group of patients. Higher CLCA1 expression was correlated with longer survival for patients with CRC. Extending DFS means the prevention or delay of recurrence or metastasis, and this is a clinical benefit. To validate the prognostic potential of CLCA1, we analyzed the expression of CLCA1 in normal and early CRC, primary CRC and metastases, and CIN-high and CIN-low phenotypes from 3 independent microarray data sets. Our results further confirmed that low expression of CLCA1 correlated with the tumorigenesis of CRC, metastasis, and a CIN-high gene signature. In this respect, our findings indicate that measurements of CLCA1 expression may help to identify patients at high risk of a poorer prognosis. Therefore, an assessment of CLCA1 levels could contribute to an accurate prediction of the prognosis and recurrence probability of patients after potentially curative surgery and, consequently, to individualized treatment for each patient. Recent studies have shown that CLCA1 could increase cell differentiation through the regulation of the proliferation-to-differentiation transition.10 CLCA2 also has been reported to suppress the epithelial-mesenchymal transition in breast cancer.1 The CLCA1 precursor is approximately 900 amino acids long with 1 proteolytic cleavage site after the amino-terminal signal sequence; eventually, 2 products of 90 and 30 to 40 kDa play functional roles.3,4,8,9 Using cultured Caco-2 monolayers as a model, we found that CLCA1 promotes intestinal epithelial differentiation through enhancement of E-cadherin and ALPI expression (Fig. 5).10 In some tumor types, including CRC, E-cadherin expression is often downregulated in highly invasive, poorly differentiated carcinomas.41,42 Decreased E-cadherin expression through the inhibition of CLCA1 might block cell differentiation and increase the likelihood of distant metastasis. Thus, the loss of expression of CLCA1 appears to be an important step in tumorigenic progression. Although the clinical role and therapeutic effect of CLCA1 are still to be investigated, the current study will continue to improve our understanding of the biological profile and behavior of CRC. In summary, we have shown that aberrant CLCA1 expression is a sign of a poor outcome in CRC. Our data indicate that low levels of CLCA1 in primary CRC might be a powerful predictor of disease relapse and prognosis. Although further prospective studies will be needed to determine the actual clinical utility of this observation, our findings indicate that CLCA1 might be a sensitive prognostic marker for evaluating the recurrence, early metastasis, and prognosis of CRC. Moreover, it might also be a potential therapeutic target for molecular therapy.
Background: Chloride channel accessory 1 (CLCA1) is a CLCA protein that plays a functional role in regulating the differentiation and proliferation of colorectal cancer (CRC) cells. Here we investigated the relationship between the level of CLCA1 and the prognosis of CRC. Methods: First, the level of CLCA1 was detected quantitatively in normal and cancerous colonic epithelial tissues with immunohistochemistry. Next, the correlations between CLCA1 expression, pathological tumor features, and the overall survival rate of patients were analyzed. Finally, 3 publicly available data sets from the Gene Expression Omnibus were examined: normal tissue versus early CRC (GSE4107), primary CRC versus metastatic lesions (GSE28702), and low chromosomal instability versus high chromosomal instability (GSE30540). Results: The expression of CLCA1 was decreased markedly in tumor specimens. CLCA1 expression was correlated significantly with the histological grade (P < .01) and lymph node metastasis (P < .01). A significantly poorer overall survival rate was found in patients with low levels of CLCA1 expression versus those with high expression levels (P < .05). The results confirmed that the low expression of CLCA1 in CRC was highly associated with tumorigenesis, metastasis, and high chromosomal instability. In addition, the loss of CLCA1 disrupted the differentiation of human colon adenocarcinoma cells (Caco-2) in vitro. Conclusions: These findings suggest that CLCA1 levels may be a potential predictor of prognosis in primary human CRC. Low expression of CLCA1 predicts disease recurrence and lower survival, and this has implications for the selection of patients most likely to need and benefit from adjuvant chemotherapy.
null
null
14,489
309
[ 486, 3662, 412, 97, 178, 155, 256, 340, 217, 152, 182, 592, 524, 409, 269, 400 ]
18
[ "clca1", "expression", "patients", "crc", "cells", "clca1 expression", "expression clca1", "tumor", "tumors", "cancer" ]
[ "cancer clca1 precursor", "cells clca1 contributes", "clca1 clca4 tumorigenesis", "clca1 tumor suppressor", "clca1 promotes intestinal" ]
null
null
null
[CONTENT] chloride channel accessory 1 (CLCA1) expression | colorectal carcinoma | prognosis | cell differentiation [SUMMARY]
null
[CONTENT] chloride channel accessory 1 (CLCA1) expression | colorectal carcinoma | prognosis | cell differentiation [SUMMARY]
null
[CONTENT] chloride channel accessory 1 (CLCA1) expression | colorectal carcinoma | prognosis | cell differentiation [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Blotting, Western | Cadherins | Chloride Channels | Colorectal Neoplasms | Down-Regulation | Female | Gene Expression Regulation, Neoplastic | Humans | Immunohistochemistry | Kaplan-Meier Estimate | Ki-67 Antigen | Male | Middle Aged | Neoplasm Grading | Neoplasm Staging | PTEN Phosphohydrolase | Predictive Value of Tests | Prognosis | Tumor Suppressor Protein p53 [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Blotting, Western | Cadherins | Chloride Channels | Colorectal Neoplasms | Down-Regulation | Female | Gene Expression Regulation, Neoplastic | Humans | Immunohistochemistry | Kaplan-Meier Estimate | Ki-67 Antigen | Male | Middle Aged | Neoplasm Grading | Neoplasm Staging | PTEN Phosphohydrolase | Predictive Value of Tests | Prognosis | Tumor Suppressor Protein p53 [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Biomarkers, Tumor | Blotting, Western | Cadherins | Chloride Channels | Colorectal Neoplasms | Down-Regulation | Female | Gene Expression Regulation, Neoplastic | Humans | Immunohistochemistry | Kaplan-Meier Estimate | Ki-67 Antigen | Male | Middle Aged | Neoplasm Grading | Neoplasm Staging | PTEN Phosphohydrolase | Predictive Value of Tests | Prognosis | Tumor Suppressor Protein p53 [SUMMARY]
null
[CONTENT] cancer clca1 precursor | cells clca1 contributes | clca1 clca4 tumorigenesis | clca1 tumor suppressor | clca1 promotes intestinal [SUMMARY]
null
[CONTENT] cancer clca1 precursor | cells clca1 contributes | clca1 clca4 tumorigenesis | clca1 tumor suppressor | clca1 promotes intestinal [SUMMARY]
null
[CONTENT] cancer clca1 precursor | cells clca1 contributes | clca1 clca4 tumorigenesis | clca1 tumor suppressor | clca1 promotes intestinal [SUMMARY]
null
[CONTENT] clca1 | expression | patients | crc | cells | clca1 expression | expression clca1 | tumor | tumors | cancer [SUMMARY]
null
[CONTENT] clca1 | expression | patients | crc | cells | clca1 expression | expression clca1 | tumor | tumors | cancer [SUMMARY]
null
[CONTENT] clca1 | expression | patients | crc | cells | clca1 expression | expression clca1 | tumor | tumors | cancer [SUMMARY]
null
[CONTENT] progression | clca4 | channels | clca1 | proliferation | ion | clca | ion channels | volume | cancer [SUMMARY]
null
[CONTENT] expression | clca1 | crc | patients | clca1 expression | fig | expression clca1 | levels | showed | tumors [SUMMARY]
null
[CONTENT] clca1 | expression | crc | patients | cells | tumor | clca1 expression | expression clca1 | tumors | cancer [SUMMARY]
null
[CONTENT] 1 | CLCA1 | CLCA | CRC ||| CLCA1 | CRC [SUMMARY]
null
[CONTENT] CLCA1 ||| CLCA1 ||| CLCA1 ||| CLCA1 | CRC ||| CLCA1 | Caco-2 [SUMMARY]
null
[CONTENT] 1 | CLCA1 | CLCA | CRC ||| CLCA1 | CRC ||| First | CLCA1 ||| CLCA1 ||| 3 | the Gene Expression Omnibus | CRC | CRC | GSE28702 ||| ||| CLCA1 ||| CLCA1 ||| CLCA1 ||| CLCA1 | CRC ||| CLCA1 | Caco-2 ||| CLCA1 | CRC ||| CLCA1 [SUMMARY]
null
Patterns and obstacles to oral antidiabetic medications adherence among type 2 diabetics in Ismailia, Egypt: a cross section study.
26113919
Diabetes is a costly and increasingly common chronic disease. Effective management of diabetes to achieve glycemic control improves patient quality of life. Adherence rates to drug regimens in patients with type 2 diabetes are relatively low and vary widely between populations. Many factors can affect patient adherence to drug therapy. The aim of the present study was to assess patterns of and obstacles to adherence among type 2 diabetic patients taking oral hypoglycemic drugs.
INTRODUCTION
The present work is a descriptive cross-sectional study carried out on type 2 diabetic patients who were on oral hypoglycemic drugs. Adherence to drugs was assessed using the Measure Treatment Adherence (MTA) scale.
METHODS
A total of 372 patients (55.59% males and 44.41% females) with type 2 diabetes fulfilled the inclusion criteria and were included in the study. Among the participants, 26.1% were found to have good adherence, 47.9% had fair adherence, and 26% had poor adherence.
RESULTS
The overall rate of medication adherence among the diabetic patient population was suboptimal and unacceptable. Evaluation of adherence is vital for patients with diabetes in order to determine the factors and barriers affecting adherence and to manage them.
CONCLUSION
[ "Administration, Oral", "Adult", "Cross-Sectional Studies", "Diabetes Mellitus, Type 2", "Egypt", "Female", "Humans", "Hypoglycemic Agents", "Male", "Medication Adherence", "Middle Aged", "Quality of Life" ]
4469448
Introduction
Diabetes is a costly and increasingly common chronic disease. Data from the Egyptian Demographic and Health Survey in 2008 showed the crude prevalence rate of physician-diagnosed diabetes among the adult population of Egypt aged 15-59 to be 4.07% [1]. Although the clinical and economic value of glycemic control is clear, many diabetic patients still develop diabetes-related complications, which increases the burden on patients and the health services. Achieving optimal glycemic control reduces the likelihood of diabetic complications and the risk of death. However, achieving a blood glucose level as close to normal as possible relies on the rational use of the available anti-diabetic regimen, good adherence to prescribed treatments, and successful self-management by patients [2]. One of the reasons behind unachieved glycemic control may be patients' lack of adherence to the therapeutic regimen. Adherence rates to drug regimens in patients with type 2 diabetes are relatively low and vary widely between populations. It is estimated that only a third of diabetic patients have adequate treatment adherence [3]. Physicians and nurses can motivate patients to be more adherent to their anti-diabetic regimen if they work on the factors that may affect adherence to oral hypoglycemic medications. Many factors can contribute to and affect patient adherence to drug therapy. The main factors can be divided into three groups: patient factors, social and medical support, and medication-related aspects [4]. Patient factors include, for example, the patient's age (older patients are more adherent), economic status (patients with a higher economic status are more adherent), and health beliefs (patients who believe medicines are harmful are less adherent) [4, 5]. Social and medical support includes, among others, family help and the patient-health care provider relationship; patients with more support are more adherent. Medication-related factors take into account the attitude towards medicines, the complexity of the medication regimen, and the experience of side effects [6]. A positive attitude towards medicines, a less complex medication regimen, and less experience of side effects are related to higher adherence rates. Studies focusing on the patient's perspective and his or her experience with drug adherence have been performed less frequently [7]. Few studies have investigated adherence to oral hypoglycemic medication in Egyptian society, which has special demographic characteristics in urban and rural areas. These studies have different designs and inconsistent results, leading to difficulties in generalizing their results to our diabetic population. Therefore, the aim of the present study was to assess patterns of and obstacles to adherence among type 2 diabetic patients taking oral hypoglycemic drugs.
Methods
Ethical approval was obtained from the Research and Ethics Committee of the Faculty of Medicine, Suez Canal University. The objectives of the study were explained to individual patients, and voluntary informed consent was obtained from each patient. The study was conducted in the Fanara Family Medicine Center, which belongs to the Faculty of Medicine, Suez Canal University. Fanara is a rural area located 45 km south of Ismailia governorate, Egypt. The data were collected between the beginning of April and the end of May 2013. Study design: The present study is a descriptive cross-sectional study carried out on all type 2 diabetic patients who lived in Fanara and had medical records at Fanara's Family Medicine Center. Exclusion criteria were patients who were on insulin therapy, who were unconscious or attended in an emergency condition, or who were not interested in the study. A qualitative structured questionnaire was first piloted on 40 type 2 diabetic patients; these patients were subsequently excluded from the study. After the pilot testing, some question items were modified and reframed to ensure the validity of the method. The 36-item questionnaire took an average of 20 minutes to complete and was administered to the participants at the study site. The questionnaire had two sections: the first covered the socio-demographic characteristics of the type 2 diabetic patients, and the second assessed adherence to oral anti-diabetic drugs. Each section consisted of open- and closed-ended questions. The adherence section explored patients' experience with their current anti-diabetic prescriptions, possible factors that could affect adherence, and patients' knowledge and practice of diabetes self-management behaviors such as self-monitoring of blood glucose, the optimal blood glucose target, and complications of poor glycemic control. Data concerning adherence to drugs, its effect, and its determinants were assessed using questions from the Measure Treatment Adherence scale developed by Delgado and Lima. This method is used frequently to measure patient compliance with drug treatment. The Measure Treatment Adherence scale is a variation of the Morisky-Green test, which was used to assess patient behavior patterns associated with the use of medicines. It has also shown good concurrent validity, with high correlations, 0.77 sensitivity, and 0.73 specificity [8]. All questions were read out to the participant, and the answers were recorded. Patients achieving a result of more than 75% were included in the good compliance group, patients achieving a result of less than 50% in the poor compliance group, and patients achieving a result between 50% and 75% in the fair compliance group. All collected data were analyzed using the Statistical Package for the Social Sciences (SPSS), version 13. Tests of proportions were done with ANOVA and a p-value of
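The compliance grouping defined above (more than 75% good, 50-75% fair, less than 50% poor) maps directly onto a small helper function. This is a minimal sketch assuming the MTA results have already been converted to percentage scores; the exact item scoring of the scale is not reproduced here.

```python
# Minimal sketch of the compliance grouping described in the Methods:
# >75% -> good, 50-75% -> fair, <50% -> poor. Scores are assumed to be
# MTA results already expressed as percentages.
def compliance_group(score_pct: float) -> str:
    if score_pct > 75:
        return "good"
    if score_pct >= 50:
        return "fair"
    return "poor"

scores = [82.5, 63.0, 41.7]  # illustrative percentages
print([compliance_group(s) for s in scores])  # ['good', 'fair', 'poor']
```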
Results
A total of 372 patients with type 2 diabetes (209 males (55.59%) and 167 females (44.41%)) fulfilled the inclusion criteria and were included in the study. The mean age was 51.64 ± 10.76 years. The sociodemographic characteristics of the studied population showed that 159 patients (42.29%) had no formal education, 100 (26.60%) could read and write, 61 (16.22%) had received higher education, and 56 (14.89%) had received basic education. Also, 221 (58.78%) of the patients were married, 109 (28.99%) were widowed, 40 (10.64%) were divorced, and 6 (1.6%) were single. A total of 144 (38.3%) were unemployed and 228 (61.7%) had a job. Among the participants, 256 (68.09%) perceived their economic standard as being below their needs and 120 (31.91%) perceived it as being enough or more than their needs. In addition, 217 (57.71%) had family support and 159 (42.29%) had none. The results from the adherence section are summarized in Table 1. Among the patients, 98 (26.1%) were found to have good adherence, whereas 180 (47.9%) had fair adherence and 99 (26%) had poor adherence. The statistical analysis of factors that could affect adherence to oral hypoglycemic drugs is illustrated in Table 2. Most of the patients (255 (67.82%)) had weak beliefs and motivation. In addition, the majority of the participants (230 (61.17%)) had a poor relationship with their health care providers. Another contributing factor was blood glucose monitoring: 272 (72.34%) participants did not monitor their blood glucose level regularly. The number of drugs taken was also a factor; 134 (35.64%) patients took only one oral hypoglycemic drug, and 242 (64.36%) took more than one. Among the patients, 246 (65.43%) had experienced a side effect of their oral hypoglycemic drug. In addition, 258 (68.62%) patients could not obtain their oral hypoglycemic drugs regularly because of their direct and indirect cost relative to income. Finally, 196 (52.13%) of all patients had poor knowledge about diabetes and its complications. An ANOVA with post hoc multiple-comparison analysis across all the studied factors, comparing the frequency of the factors that could affect adherence to oral hypoglycemic drugs (OHD) among the adherence scale groups, found highly significant differences (Table 3). The results showed that 201 (53.46%) of the participants had sometimes forgotten to take their medications, and 33 (8.78%) always forgot. Of all the participants, 187 (49.73%) were sometimes careless about taking medications on time, while 31 (8.24%) always were. Similarly, 162 (43.09%) of the participants sometimes stopped taking medications when they felt well, and 30 (7.98%) always did. Also, 214 (56.91%) of the participants sometimes stopped taking medications when they felt worse, and 29 (7.71%) always did. In the same manner, 168 (44.68%) of the participants sometimes hated taking pills, while 30 (7.98%) always did. Regarding medical advice, 201 (53.46%) of the participants sometimes stopped following it and 30 (7.98%) always did. Table 4 shows the linear regression analysis model used to assess the association between adherence to oral hypoglycemic drugs and selected independent adherence factors.
There was a statistically significant effect of the patient-healthcare provider relationship (P = 0.046; beta = 0.186; 95% CI, -0.547 to -0.005). Thus, the only factor that could be used as a potential predictor of adherence to oral hypoglycemic drugs was the patient-healthcare provider relationship.
Table captions: Frequency of the overall MTA scale for oral hypoglycemic drugs among the participants; Comparison of the frequency of the factors affecting oral hypoglycemic drug adherence across adherence scale levels in the study population (n = 376); Assessment of patient adherence to oral hypoglycemic drugs using the MTA scale (n = 376); Linear regression analysis model assessing the association between oral hypoglycemic drug adherence and selected independent adherence barriers (n = 376).
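The two analyses reported here (an ANOVA with post hoc comparisons across the adherence groups, and a linear regression of adherence on candidate barriers) can be sketched with scipy and statsmodels. The data file, column names, and predictor list below are illustrative assumptions; only the overall analysis structure follows the text.

```python
# Illustrative sketch of the Results analyses. The file and column names are
# assumptions; only the analysis structure mirrors the text above.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("adherence_survey.csv")
# assumed columns: mta_score, adherence_group ('good'/'fair'/'poor'),
# provider_relationship, beliefs, knowledge, n_drugs, side_effects, cost_burden

# One-way ANOVA: does a factor (here, knowledge) differ across adherence groups?
groups = [g["knowledge"].values for _, g in df.groupby("adherence_group")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Linear regression of the MTA score on candidate barriers, analogous to the
# model in which the patient-provider relationship was the only significant
# predictor (P = .046).
model = smf.ols(
    "mta_score ~ provider_relationship + beliefs + knowledge"
    " + n_drugs + side_effects + cost_burden",
    data=df,
).fit()
print(model.summary())
```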
Conclusion
The overall rate of medication adherence among the diabetic patient population was found to be suboptimal and unacceptable. Evaluation of adherence is vital for patients with diabetes in order to determine the factors and barriers affecting adherence. In addition, in order to manage poor adherence and better identify the contributing factors, individualized recommendations are essential for better healthcare management.
[]
[]
[]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion" ]
[ "Diabetes is a costly and increasingly common chronic disease. The data from the Egyptian Demographic and Health Survey in 2008 showed the crude prevalence rate of physician-diagnosed diabetes among the adult population of Egypt aged 15-59 to be 4.07% [1]. Despite the clinical and economic value of the glycemic control is clear, many diabetic patientsstilldevelopdiabetes related complications, hence increases the burden on patients and the health services. Achievement of optimal glycemic control reduces the likelihood of diabetic complications and risk of death. However, achieving blood glucose level as close to normal as possible relays on the rational use of available anti-diabetic regimen, good adherence to prescribed treatments and successful self-management by patients [2]. One of the reasonsbehind unachievable glycemic controlmay be due to lack of patient's adherent to therapeutic regimen. Adherence rates to drug regimens in patients with type 2 diabetes are relatively low and vary widely between populations. It is estimated that only third of diabetic patients have adequate treatment adherence [3]. Physicians and nurses can motivate patients to be more adherentto their anti-diabetic regimen if they work on the factors that possibly affect oral hypoglycemic medications adherence. Many factors cancontribute and affect patient adherence to drug therapy. However, physicians and nurses can motivate patients to be more adherences to their anti-diabetic regimen and work on the causes. The main factors can be divided into three groups: patient factors, social and medical support, and medication related aspects [4]. Patient factors are for example the patient's age (i.e. older patients are more adherent), economic status (patients with a higher economic status being more adherent) and health beliefs (patients with beliefs about medicines as something harmful are less adherent) [4, 5]. Social and medical support included among others family help and the patient-health care provider relationship; patients with more support are more adherence. Medication related factors take into account the attitude towards medicines, the complexity of the medication regimen and the experience of side effects [6]. A positive attitude towards medicines, a less complex medication regimen and less experience of side effects are related to higher adherence rates. Studies that focused on the patient's perspective and his/her experiences with drug adherence have been performed less frequently [7]. Few studies have investigatedthe adherence to oral hypoglycemic medication in our Egyptian society, which has special demographic characteristics in urban and rural areas. Thesestudies have different designs and inconsistent results, leading to difficulties ingeneralizing their results on our diabetic population. Therefore, the aim of the present study was toassess patterns and obstacles that affecting adherence in type 2 diabetic patients to their oral hypoglycemic drugs.", "An ethical approval was obtained from the research and Ethical Committee of the Faculty of Medicine, Suez Canal University. The objectives of the study were explained to individual patients and voluntary informed consent of the patients was also taken. The study was conducted in the Fanara Family Medicine Center that belongs to the Faculty of Medicine, Suez Canal University. Fanara is a rural areathat located 45 Km south Ismailia governorate, Egypt. The data was collected over the period between the beginning of April and end of May 2013. 
Study Design:The present study is a descriptive cross section study that was carried on all type 2 diabetic patients who lived in Fanara city andhave medical records at Fanara's Family Medicine Center. Exclusion criteria werepatients who were on insulin therapy, unconscious, or attended emergency condition and, or who were not interested in the study. A qualitative structured questionnaire was first used as a pilot in this study, and was testedon 40 type 2 diabetic patients. These patients were subsequently excluded from the study. After the pilot testing, some question-items in the questionnaire were modified and reframed to ensure validity of the method. The 36-item questionnaire took an average of 20 minutes to fill in and was administered to the participantsat the study site. The questionnaire was designed to have two sections; the first section included the socio-demographic characteristics of the type 2 diabetic patients while the second section assessed the adherence to oral anti-diabetic drugs. Each sectionconsisted of opened-/closed-ended questions. The adherence section focused on exploring patients’ experience with current anti-diabetic prescriptions and possible factors that could affect the adherence and patients’ knowledge and practice of diabetes self management behaviors such as self blood glucose monitoring, optimal blood glucose target, and complications from poor glycemic control. Data concerning adherence to drugs, its effect and its determinants, was assessed using questions of the Measure Treatment Adherence scale developed by Delgado and Lima. This method is used frequently to measure patient compliance with drug treatment. The Measure Treatment Adherence scale is a variation of the Morisky-Green test, which was used to assess patient behavior patterns associated with the use of medicines. It also showed good concurrent validity with high correlations with any answer and 0.77 sensitivity and 0.73 specificity [8]. All questions were read out to the participant, and the answers were recorded. Patients achieving a result of more than 75% were included in the good compliance group. Patients achieving a result less than 50% were included in the poor compliance group. Patients achieving a result between 50 and 75% were included in the fair compliance group. All collected data were analyzed using the Statistical Package for the Social Sciences (SPSS), version 13 software. Tests of proportions were done with ANOVA and a p-value of", "A total of 372 males (209 (55.59%) and 167 females (44.41%))patients with type-2 diabetes who fulfilled the inclusion criteria were included in the study. The mean age was 51.64±10.76 years. The sociodemographic characteristics of the studied population showed that among the patients, 159 (42.29%) had no formal education,100(26.60%) can read and write, 61 (16.22%) had received high education and 56 (14.89%) had received basic education. Also, 221 (58.78%) of the patientswere married, 109 (28.99%) were widows, 40 (10.64%) were divorced and 6 (1.6%) were single. A total of 144 (38.3%) were unemployed and 228 (61.7%) had a job. Among the participants, 256 (68.09%) perceived their economic standard as being below their needs and 120 (31.91%) perceived it as being enough or more than their needs. In addition, 217 (57.71%) had a family support and 159 (42.29%) had no family support. The results from the adherence section are summarized in Table 1. 
Among the patients, 98 (26.1%) were found to have good adherence, whereas 180(47.9%) had a fair adherence, and 99(26%) had poor adherence. Statistical analysis of factors that could affect adherence to oral hypoglycemic drugs results is illustrated in Table 2. Most of the patients (255 (67.82%)) had weak believes and motivations. In addition, the majority of the participants (230 (61.17%)) had poor relationship with health care providers. Another affecting factor is monitoring their blood glucose level. Theresults found that 272 (72.34%) participants did not monitor their blood level regularly. In addition, the number of drugs taken was found to be another factor, the present study found that 134 (35.64%) patients had only one oral hypoglycemic drug, and 242 (64.36%) had more than one oral hypoglycemic drugs. Among the patients, 246 (65.43%)had experienced a side effect of their oral hypoglycemic drug. On the other hand, 258 (68.62%) patients could not get their oral hypoglycemic drugs regularly because of its direct and indirect cost in relation to their income. Finally, 196 (52.13%) of allpatients had poor knowledge about diabetes and its complications. The ANOVA test using post Hoc for multi comparisons analysis through all the studied factors and different adherence scale compare frequency of these factors that could affecting adherence to oral hypoglycemic drug (OHD) among different adherence scale groups found highly significant different at P Table 3. The results showed that 201(53.46%) of the participants sometimes have forgotten to take medications, and 33 (8.78%) of the participants always forgotten to take medications. Of all the participants 187(49.73%) sometimes careless to take medications at time while 31(8.24%) participants were always careless to take medications at time. On the same times 162(43.09%) of the participants sometimes stopped taking medications when becomewell. On the other hand, 30 (7.98%) of the participants always stopped taking them. Also, 214(56.91%) of the participants sometimes stopped taking medications when become worse, but 29(7.71%) of theparticipants always stopped taking them. In the same manner, 168(44.68%) of the participants sometimes hate pills while 30(7.98%) participants always hated them. Regarding the medical advisory, 201(53.46%) of the participants sometimes stopped it and 30(7.98%) of the participants always stopped it. Table 4 shows the linear regression analysis model to assess the association between adherence to oral hypoglycemic drugs and selected independent adherence factors. There was statistical significant effect of the patient and healthcare provider′s relationship as P value was 0.046 and beta was 0.186 (CI95% (-0.547-0.005)). Thus, the only factor that could be used as potential predictors of adherence to oral hypoglycemic drugs was patient healthcare providers relationship.\n\nFrequency of overall MTA scale to oral hypoglycemic drugs among the participants\nComparing the frequency of the factors affecting oral hypoglycemic drug adherence among different adherence scale in study population (n = 376)\nAssessment of patient adherence to oral hypoglycemic drug using the MTA Scale (n = 376)\nLinear regression analysis model to assess the association between oral hypoglycemic drug adherence and selected independent adherence barriers (n = 376)", "Diabetes mellitus is a chronic disease that requires continuing medical care and education to prevent acute complications and reduce the risk of long term complications. 
Thus, there is a need for an integrated approach to facilitate adherence that addresses patient′s motivation to follow treatment advice as well as their ability to do so [9]. The study was carried out aiming to: assess patterns and obstacles of oral hypoglycemic drugs adherence among type 2 diabetic patients to reach recommendations to improve the patient's adherence. The results of the present study show that, among the participants 26.1% were found to have good adherence, 47.9% had a fair adherence, and 26% had poor adherence. These results are not in agreement with Nahla et al., [10], who reported that about 57% of patients always took their medication as prescribed and on time. In addition, the results by Kravitzetet al., 11 in Scotland found that 91% of the diabetic patients reported that they actually took their medication as prescribed. In the same manner, our results are inconsistent with the results of Gimenes and his colleges who found that the patient's adherence level to antidiabetic therapy was 78.3% among their study sample [12]. Finally, Yelena et al., in 2008 proved that the overall oral antidiabetic medication was 81% [13]. This difference in results may return to awarenessdifferences in the importance of adherence to antidiabetic medication and the policies and strategies that different countries adopt. Moreover, we found that 67.82% of the participants had week believes and motivations about the disease with statistical significant effect on adherence. These findings are in agreement with Spikmans et al.,14. Also, the results of the present study was in agreement with results of Donnan et al., [15] and Nagwa [16] who showed that the majority of patients had a strong perception about seriousness of the disease and the benefits of adherence to treatment with significance effect. This is an important point that physician should understand culture, perceptions and believes of their patients before recommending the treatment. Also, 61.17% of the participants had poor communications with health care providers with statistical significant among different adherence scale. The resultsare in agreement with Rolla [17] and Rubin [18], and in contrast to Shams and Barkat [19] who showed that the majority of patients had good communications with health care providers with no statistical significance on adherence level.\nThe present study showed that (72.34%) were not monitoring their blood sugar regularly with statistical significant effect on adherence. Many of our diabetic patients were not aware of self-monitoring of blood glucose level at home (SMBG) or lack financial support to buy the apparatus for regular and prompt detection of fluctuations in their blood glucose levels. This finding was in conformity with the report of a study made by Harris et al. [20] in US where many diabetic patients reported never to have monitored their blood glucose levels. The absence of established guidelines on SMBG and lack of its perceived importance by patients, as well as, the cost of the blood glucose monitoring devices especially in a developing country as in Egypt may have accounted for the low level of regular blood sugar monitoring among patients. Concerning medicationrelated factors: Our study revealed that patients who take complex treatment more than who take simple treatment with significant effect on adherence rate and this indicate that once daily dosing is the best. This disagrees with Iskedjian et al. [21], Shams and Barakat. 
[19], Sweileh et al [22] who showed that the non-adherence was least with the single drug regimen while it was highest among patients who were on combined oral and insulin treatment. The current study showed that 65.43% had side effects towards the treatment with statistical significant effect on adherence scale. This was in agreement with the findings in the study made by Jayant et al. [23] who reported that the side effects of medication may be a significant factor that can affect diabetic patients’ long-term adherence to treatment programs and this was a main factor for limiting adherence. Another study made by Girered [24] who reported something different than our results as he found that majority of diabetic patients (58%) had side effects with no statistical significant effect on adherence. Moreover, the present study showed that 68.62% of the patients had not adequate cost of medications in relation to the income with statistical significant effect on adherence. The same results was conducted with Shams and Barakat [19] who showed the same results where is a significant higher rate of adherence to oral treatment was observed in patients who exhibited adequate healthcare costs in relation to their income or full coverage health insurance compared with the others who did not have. This was also in agreement with Ohene-Buabeng et al., [25] and Adisa et al., [26] who reported that financial variables especially the direct and indirect costs associated with a prescribed regimen and restricted access to therapy had been found by several studies to influence patients’ commitment to medication adherence in developing countries and patients who had no insurance coverage or who had low income were more likely to be non-adherent to treatment In regards to patient-related factors,52.13% had poor knowledge about the disease and there was a strong positive relation between total knowledge and adherence rate to the medication. Fitzgerald et al., [27] found that the level of adherence increased with the improvement of the patient's level of knowledge. On the other hand, findings of the study conducted in China by Chan et al., [28] indicated that there was no association between the patient′s knowledge and adherence.\nThe current study also noted that only 26.1% had good adherence and 47.9% had fair adherence towards treatment. This was in agreement with Wild [29] who reported that 39% of patients had good medication adherence, 49% medium adherence and 12% poor adherence. These results are not in agreement with Nahla et al., [10] who reported that about 57% of patients always took their medication as prescribed and on time. Also results of the study by Kravitzet al., [11] in Scotland found that 91% of the diabetic patients reported that they actually took their medication as prescribed and this difference in results may be due to differences in awareness of importance of adherence and be may return to the national health policies. Previous studies return the causes behind poor adherence to financial, being busy with work, too many medicines being prescribed. Concerning the linear regression analysis model which assessed the association between adherence and selected independent adherence barriers, the healthcare provider′s relationship was the dominant predictor to good adherence. Many other factors which could affect adherence still remain unidentified. 
The results of this study emphasize the considerable role of good report with the doctor and the entire treatment team for shaping the awareness of the disease and acquiring the necessary knowledge. From our results, we endorse the role of family physician to screen patients at high risk for poor adherence, and the family physician should try more than usual to apply multiple interventions in order to improve adherence including educational, behavioral, and affective interventions. Educational interventions seek to improve adherence by providing information and/or skills. Education may take the form of individual instruction or group classes. In any event, a key element of successful educational strategies is providing simple and clear messages, hopefully tailored to the needs of the individual, and verifying that the messages have been understood [29]. Behavioral approaches have their roots in cognitive-behavioral psychology and use techniques such as reminders, memory aids, synchronizing therapeutic activities with routine life events (e.g., taking pills before you shower), goal-setting, self-monitoring, contracting, skill-building, and rewards. What is important is that the behavior in question has been negotiated with and accepted by individual patients so that adoption of the behavior has a chance of succeeding in the long term. Affective interventions seek to enhance adherence by providing emotional support and encouragement. Finally, it should be remembered that application of multiple interventions of different types is more effective than any single intervention [30]. Our results should be viewed with consideration of some limitations. One limitation was the use of self-report data on medication adherence, because of a resulting tendency to overestimate adherence due to recall biases and social desirability.", "The overall rate of medication adherence among the diabetic patients population was found to be suboptimal and non-acceptable. Evaluation of adherence is vital for patients with diabetes in order to determine factors and barriers affecting adherence. In addition, in order to manage poor adherence and better identification of affecting factors individualized suitable recommendations are essential for better healthcare management." ]
[ "intro", "methods", "results", "discussion", "conclusion" ]
[ "Adherence", "antidiabetic medication", "type 2 diabetes", "self-management" ]
Introduction: Diabetes is a costly and increasingly common chronic disease. Data from the Egyptian Demographic and Health Survey in 2008 showed the crude prevalence rate of physician-diagnosed diabetes among the adult population of Egypt aged 15-59 to be 4.07% [1]. Although the clinical and economic value of glycemic control is clear, many diabetic patients still develop diabetes-related complications, which increases the burden on patients and the health services. Achieving optimal glycemic control reduces the likelihood of diabetic complications and the risk of death. However, achieving a blood glucose level as close to normal as possible relies on the rational use of the available anti-diabetic regimen, good adherence to prescribed treatments, and successful self-management by patients [2]. One of the reasons behind unachieved glycemic control may be patients' lack of adherence to the therapeutic regimen. Adherence rates to drug regimens in patients with type 2 diabetes are relatively low and vary widely between populations. It is estimated that only a third of diabetic patients have adequate treatment adherence [3]. Physicians and nurses can motivate patients to be more adherent to their anti-diabetic regimen if they work on the factors that may affect adherence to oral hypoglycemic medications. Many factors can contribute to and affect patient adherence to drug therapy. The main factors can be divided into three groups: patient factors, social and medical support, and medication-related aspects [4]. Patient factors include, for example, the patient's age (older patients are more adherent), economic status (patients with a higher economic status are more adherent), and health beliefs (patients who believe medicines are harmful are less adherent) [4, 5]. Social and medical support includes, among others, family help and the patient-health care provider relationship; patients with more support are more adherent. Medication-related factors take into account the attitude towards medicines, the complexity of the medication regimen, and the experience of side effects [6]. A positive attitude towards medicines, a less complex medication regimen, and less experience of side effects are related to higher adherence rates. Studies focusing on the patient's perspective and his or her experience with drug adherence have been performed less frequently [7]. Few studies have investigated adherence to oral hypoglycemic medication in Egyptian society, which has special demographic characteristics in urban and rural areas. These studies have different designs and inconsistent results, leading to difficulties in generalizing their results to our diabetic population. Therefore, the aim of the present study was to assess patterns of and obstacles to adherence among type 2 diabetic patients taking oral hypoglycemic drugs. Methods: Ethical approval was obtained from the Research and Ethics Committee of the Faculty of Medicine, Suez Canal University. The objectives of the study were explained to individual patients, and voluntary informed consent was obtained from each patient. The study was conducted in the Fanara Family Medicine Center, which belongs to the Faculty of Medicine, Suez Canal University. Fanara is a rural area located 45 km south of Ismailia governorate, Egypt. The data were collected between the beginning of April and the end of May 2013. Study design: The present study is a descriptive cross-sectional study carried out on all type 2 diabetic patients who lived in Fanara and had medical records at Fanara's Family Medicine Center. Exclusion criteria were patients who were on insulin therapy, who were unconscious or attended in an emergency condition, or who were not interested in the study. A qualitative structured questionnaire was first piloted on 40 type 2 diabetic patients; these patients were subsequently excluded from the study. After the pilot testing, some question items were modified and reframed to ensure the validity of the method. The 36-item questionnaire took an average of 20 minutes to complete and was administered to the participants at the study site. The questionnaire had two sections: the first covered the socio-demographic characteristics of the type 2 diabetic patients, and the second assessed adherence to oral anti-diabetic drugs. Each section consisted of open- and closed-ended questions. The adherence section explored patients' experience with their current anti-diabetic prescriptions, possible factors that could affect adherence, and patients' knowledge and practice of diabetes self-management behaviors such as self-monitoring of blood glucose, the optimal blood glucose target, and complications of poor glycemic control. Data concerning adherence to drugs, its effect, and its determinants were assessed using questions from the Measure Treatment Adherence scale developed by Delgado and Lima. This method is used frequently to measure patient compliance with drug treatment. The Measure Treatment Adherence scale is a variation of the Morisky-Green test, which was used to assess patient behavior patterns associated with the use of medicines. It has also shown good concurrent validity, with high correlations, 0.77 sensitivity, and 0.73 specificity [8]. All questions were read out to the participant, and the answers were recorded. Patients achieving a result of more than 75% were included in the good compliance group, patients achieving a result of less than 50% in the poor compliance group, and patients achieving a result between 50% and 75% in the fair compliance group. All collected data were analyzed using the Statistical Package for the Social Sciences (SPSS), version 13. Tests of proportions were done with ANOVA and a p-value of Results: A total of 372 patients with type 2 diabetes (209 males (55.59%) and 167 females (44.41%)) fulfilled the inclusion criteria and were included in the study. The mean age was 51.64 ± 10.76 years. The sociodemographic characteristics of the studied population showed that 159 patients (42.29%) had no formal education, 100 (26.60%) could read and write, 61 (16.22%) had received higher education, and 56 (14.89%) had received basic education. Also, 221 (58.78%) of the patients were married, 109 (28.99%) were widowed, 40 (10.64%) were divorced, and 6 (1.6%) were single. A total of 144 (38.3%) were unemployed and 228 (61.7%) had a job. Among the participants, 256 (68.09%) perceived their economic standard as being below their needs and 120 (31.91%) perceived it as being enough or more than their needs. In addition, 217 (57.71%) had family support and 159 (42.29%) had none. The results from the adherence section are summarized in Table 1. Among the patients, 98 (26.1%) were found to have good adherence, whereas 180 (47.9%) had fair adherence and 99 (26%) had poor adherence. The statistical analysis of factors that could affect adherence to oral hypoglycemic drugs is illustrated in Table 2. Most of the patients (255 (67.82%)) had weak beliefs and motivation. In addition, the majority of the participants (230 (61.17%)) had a poor relationship with their health care providers. Another contributing factor was blood glucose monitoring: 272 (72.34%) participants did not monitor their blood glucose level regularly. The number of drugs taken was also a factor; 134 (35.64%) patients took only one oral hypoglycemic drug, and 242 (64.36%) took more than one. Among the patients, 246 (65.43%) had experienced a side effect of their oral hypoglycemic drug. In addition, 258 (68.62%) patients could not obtain their oral hypoglycemic drugs regularly because of their direct and indirect cost relative to income. Finally, 196 (52.13%) of all patients had poor knowledge about diabetes and its complications. An ANOVA with post hoc multiple-comparison analysis across all the studied factors, comparing the frequency of the factors that could affect adherence to oral hypoglycemic drugs (OHD) among the adherence scale groups, found highly significant differences (Table 3). The results showed that 201 (53.46%) of the participants had sometimes forgotten to take their medications, and 33 (8.78%) always forgot. Of all the participants, 187 (49.73%) were sometimes careless about taking medications on time, while 31 (8.24%) always were. Similarly, 162 (43.09%) of the participants sometimes stopped taking medications when they felt well, and 30 (7.98%) always did. Also, 214 (56.91%) of the participants sometimes stopped taking medications when they felt worse, and 29 (7.71%) always did. In the same manner, 168 (44.68%) of the participants sometimes hated taking pills, while 30 (7.98%) always did. Regarding medical advice, 201 (53.46%) of the participants sometimes stopped following it and 30 (7.98%) always did. Table 4 shows the linear regression analysis model used to assess the association between adherence to oral hypoglycemic drugs and selected independent adherence factors. There was a statistically significant effect of the patient-healthcare provider relationship (P = 0.046; beta = 0.186; 95% CI, -0.547 to -0.005). Thus, the only factor that could be used as a potential predictor of adherence to oral hypoglycemic drugs was the patient-healthcare provider relationship. Table captions: Frequency of the overall MTA scale for oral hypoglycemic drugs among the participants; Comparison of the frequency of the factors affecting oral hypoglycemic drug adherence across adherence scale levels in the study population (n = 376); Assessment of patient adherence to oral hypoglycemic drugs using the MTA scale (n = 376); Linear regression analysis model assessing the association between oral hypoglycemic drug adherence and selected independent adherence barriers (n = 376). Discussion: Diabetes mellitus is a chronic disease that requires continuing medical care and education to prevent acute complications and reduce the risk of long-term complications. Thus, there is a need for an integrated approach to facilitate adherence that addresses patients' motivation to follow treatment advice as well as their ability to do so [9]. The study was carried out to assess patterns of and obstacles to oral hypoglycemic drug adherence among type 2 diabetic patients and to reach recommendations for improving patients' adherence. The results of the present study show that, among the participants, 26.1% were found to have good adherence, 47.9% had fair adherence, and 26% had poor adherence. These results do not agree with Nahla et al. [10], who reported that about 57% of patients always took their medication as prescribed and on time. In addition, Kravitz et al. [11] in Scotland found that 91% of the diabetic patients reported that they actually took their medication as prescribed. In the same manner, our results are inconsistent with those of Gimenes and his colleagues, who found that the patients' adherence level to antidiabetic therapy was 78.3% in their study sample [12]. Finally, Yelena et al. in 2008 showed that overall oral antidiabetic medication adherence was 81% [13]. This difference in results may be attributable to differences in awareness of the importance of adherence to antidiabetic medication and to the policies and strategies that different countries adopt. Moreover, we found that 67.82% of the participants had weak beliefs and motivation about the disease, with a statistically significant effect on adherence. These findings are in agreement with Spikmans et al. [14]. The results of the present study were also in agreement with those of Donnan et al. [15] and Nagwa [16], who showed that the majority of patients had a strong perception of the seriousness of the disease and the benefits of adherence to treatment, with a significant effect. This is an important point: physicians should understand the culture, perceptions, and beliefs of their patients before recommending treatment. Also, 61.17% of the participants had poor communication with health care providers, with statistical significance across the adherence scale levels. These results agree with Rolla [17] and Rubin [18], and contrast with Shams and Barakat [19], who showed that the majority of patients had good communication with health care providers, with no statistically significant effect on adherence level. The present study showed that 72.34% were not monitoring their blood sugar regularly, with a statistically significant effect on adherence. Many of our diabetic patients were not aware of self-monitoring of blood glucose at home (SMBG) or lacked the financial means to buy the apparatus for regular and prompt detection of fluctuations in their blood glucose levels. This finding is in conformity with a study by Harris et al. [20] in the US, where many diabetic patients reported never having monitored their blood glucose levels. The absence of established guidelines on SMBG and the lack of its perceived importance by patients, as well as the cost of blood glucose monitoring devices, especially in a developing country such as Egypt, may have accounted for the low level of regular blood sugar monitoring among patients. Concerning medication-related factors, our study revealed that more patients took complex treatment than simple treatment, with a significant effect on the adherence rate, indicating that once-daily dosing is best. This disagrees with Iskedjian et al. [21] and with Shams and Barakat [19] and Sweileh et al. [22], who showed that non-adherence was least with single-drug regimens and highest among patients on combined oral and insulin treatment. The current study showed that 65.43% had side effects from the treatment, with a statistically significant effect on the adherence scale. This agrees with the findings of Jayant et al. [23], who reported that medication side effects may be a significant factor affecting diabetic patients' long-term adherence to treatment programs and a main factor limiting adherence. Another study, by Girered [24], reported something different from our results: the majority of diabetic patients (58%) had side effects, with no statistically significant effect on adherence. Moreover, the present study showed that 68.62% of the patients could not adequately afford the cost of medications relative to their income, with a statistically significant effect on adherence. The same results were reported by Shams and Barakat [19], who observed a significantly higher rate of adherence to oral treatment in patients with adequate healthcare costs relative to income or full health insurance coverage compared with those without. This also agrees with Ohene-Buabeng et al. [25] and Adisa et al. [26], who reported that financial variables, especially the direct and indirect costs associated with a prescribed regimen and restricted access to therapy, have been found by several studies to influence patients' commitment to medication adherence in developing countries, and that patients with no insurance coverage or low income were more likely to be non-adherent to treatment. In regard to patient-related factors, 52.13% had poor knowledge about the disease, and there was a strong positive relation between total knowledge and the adherence rate to medication. Fitzgerald et al. [27] found that the level of adherence increased with the improvement of the patient's level of knowledge. On the other hand, findings of a study conducted in China by Chan et al. [28] indicated no association between patients' knowledge and adherence. The current study also noted that only 26.1% had good adherence and 47.9% had fair adherence to treatment. This agrees with Wild [29], who reported that 39% of patients had good medication adherence, 49% medium adherence, and 12% poor adherence. These results do not agree with Nahla et al. [10], who reported that about 57% of patients always took their medication as prescribed and on time. Also, the study by Kravitz et al. [11] in Scotland found that 91% of the diabetic patients reported that they actually took their medication as prescribed; this difference in results may be due to differences in awareness of the importance of adherence and to national health policies. Previous studies attributed poor adherence to financial causes, being busy with work, and too many medicines being prescribed. Concerning the linear regression analysis model, which assessed the association between adherence and selected independent adherence barriers, the healthcare provider relationship was the dominant predictor of good adherence. Many other factors that could affect adherence remain unidentified. The results of this study emphasize the considerable role of good rapport with the doctor and the entire treatment team in shaping awareness of the disease and acquiring the necessary knowledge. From our results, we endorse the role of the family physician in screening patients at high risk of poor adherence, and the family physician should try harder than usual to apply multiple interventions in order to improve adherence, including educational, behavioral, and affective interventions. Educational interventions seek to improve adherence by providing information and/or skills. Education may take the form of individual instruction or group classes. In any event, a key element of successful educational strategies is providing simple and clear messages, ideally tailored to the needs of the individual, and verifying that the messages have been understood [29]. Behavioral approaches have their roots in cognitive-behavioral psychology and use techniques such as reminders, memory aids, synchronizing therapeutic activities with routine life events (e.g., taking pills before showering), goal-setting, self-monitoring, contracting, skill-building, and rewards. What is important is that the behavior in question has been negotiated with and accepted by individual patients so that its adoption has a chance of succeeding in the long term. Affective interventions seek to enhance adherence by providing emotional support and encouragement. Finally, it should be remembered that applying multiple interventions of different types is more effective than any single intervention [30]. Our results should be viewed with consideration of some limitations. One limitation was the use of self-report data on medication adherence, which tends to overestimate adherence due to recall bias and social desirability. Conclusion: The overall rate of medication adherence among the diabetic patient population was found to be suboptimal and unacceptable. Evaluation of adherence is vital for patients with diabetes in order to determine the factors and barriers affecting adherence. In addition, in order to manage poor adherence and better identify the contributing factors, individualized recommendations are essential for better healthcare management.
Background: Diabetes is a costly and increasingly common chronic disease. Effective management of diabetes to achieve glycemic control improves patient quality of life. Adherence rates to drug regimens in patients with type 2 diabetes are relatively low and vary widely between populations. Many factors can affect patient adherence to drug therapy. The aim of the present study was to assess patterns of and obstacles to adherence among type 2 diabetic patients taking oral hypoglycemic drugs. Methods: The present work is a descriptive cross-sectional study carried out on type 2 diabetic patients who were on oral hypoglycemic drugs. Adherence to drugs was assessed using the Measure Treatment Adherence (MTA) scale. Results: A total of 372 patients (55.59% males and 44.41% females) with type 2 diabetes fulfilled the inclusion criteria and were included in the study. Among the participants, 26.1% were found to have good adherence, 47.9% had fair adherence, and 26% had poor adherence. Conclusions: The overall rate of medication adherence among the diabetic patient population was suboptimal and unacceptable. Evaluation of adherence is vital for patients with diabetes in order to determine the factors and barriers affecting adherence and to manage them.
Introduction: Diabetes is a costly and increasingly common chronic disease. Data from the Egyptian Demographic and Health Survey in 2008 showed the crude prevalence rate of physician-diagnosed diabetes among the adult population of Egypt aged 15-59 to be 4.07% [1]. Although the clinical and economic value of glycemic control is clear, many diabetic patients still develop diabetes-related complications, which increases the burden on patients and the health services. Achieving optimal glycemic control reduces the likelihood of diabetic complications and the risk of death. However, achieving a blood glucose level as close to normal as possible relies on the rational use of the available anti-diabetic regimen, good adherence to prescribed treatments, and successful self-management by patients [2]. One of the reasons behind unachieved glycemic control may be patients' lack of adherence to the therapeutic regimen. Adherence rates to drug regimens in patients with type 2 diabetes are relatively low and vary widely between populations. It is estimated that only a third of diabetic patients have adequate treatment adherence [3]. Physicians and nurses can motivate patients to be more adherent to their anti-diabetic regimen if they work on the factors that may affect adherence to oral hypoglycemic medications. Many factors can contribute to and affect patient adherence to drug therapy. The main factors can be divided into three groups: patient factors, social and medical support, and medication-related aspects [4]. Patient factors include, for example, the patient's age (older patients are more adherent), economic status (patients with a higher economic status are more adherent), and health beliefs (patients who believe medicines are harmful are less adherent) [4, 5]. Social and medical support includes, among others, family help and the patient-health care provider relationship; patients with more support are more adherent. Medication-related factors take into account the attitude towards medicines, the complexity of the medication regimen, and the experience of side effects [6]. A positive attitude towards medicines, a less complex medication regimen, and less experience of side effects are related to higher adherence rates. Studies focusing on the patient's perspective and his or her experience with drug adherence have been performed less frequently [7]. Few studies have investigated adherence to oral hypoglycemic medication in Egyptian society, which has special demographic characteristics in urban and rural areas. These studies have different designs and inconsistent results, leading to difficulties in generalizing their results to our diabetic population. Therefore, the aim of the present study was to assess patterns of and obstacles to adherence among type 2 diabetic patients taking oral hypoglycemic drugs. Conclusion: The overall rate of medication adherence among the diabetic patient population was found to be suboptimal and unacceptable. Evaluation of adherence is vital for patients with diabetes in order to determine the factors and barriers affecting adherence. In addition, in order to manage poor adherence and better identify the contributing factors, individualized recommendations are essential for better healthcare management.
Background: Diabetes is a costly and increasingly common chronic disease. Effective management of diabetes to achieve glycemic control improves patient quality of life. Adherence rates to drug regimens in patients with type 2 diabetes are relatively low and vary widely between populations. Many factors can affect patient adherence to drug therapy. The aim of the present study was to assess patterns of, and obstacles to, adherence to oral hypoglycemic drugs among patients with type 2 diabetes. Methods: The present work is a descriptive cross-sectional study of type 2 diabetic patients who were on oral hypoglycemic drugs. Adherence was assessed using the Measure Treatment Adherence (MTA) scale. Results: A total of 372 patients with type 2 diabetes (55.59% male and 44.41% female) met the inclusion criteria and were included in the study. Among the participants, 26.1% had good adherence, 47.9% had fair adherence, and 26% had poor adherence. Conclusions: The overall rate of medication adherence in this diabetic patient population was suboptimal and unacceptable. Evaluating adherence is vital in patients with diabetes in order to identify the factors and barriers affecting adherence and to address them.
3,540
231
[]
5
[ "adherence", "patients", "study", "results", "diabetic", "oral", "patient", "participants", "factors", "medication" ]
[ "diabetic regimen good", "diabetic regimen", "importance adherence antidiabetic", "adherence diabetic patients", "adherence antidiabetic medication" ]
[CONTENT] Adherence | antidiabetic medication | type 2 diabetes | self-management [SUMMARY]
[CONTENT] Adherence | antidiabetic medication | type 2 diabetes | self-management [SUMMARY]
[CONTENT] Adherence | antidiabetic medication | type 2 diabetes | self-management [SUMMARY]
[CONTENT] Adherence | antidiabetic medication | type 2 diabetes | self-management [SUMMARY]
[CONTENT] Adherence | antidiabetic medication | type 2 diabetes | self-management [SUMMARY]
[CONTENT] Adherence | antidiabetic medication | type 2 diabetes | self-management [SUMMARY]
[CONTENT] Administration, Oral | Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Egypt | Female | Humans | Hypoglycemic Agents | Male | Medication Adherence | Middle Aged | Quality of Life [SUMMARY]
[CONTENT] Administration, Oral | Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Egypt | Female | Humans | Hypoglycemic Agents | Male | Medication Adherence | Middle Aged | Quality of Life [SUMMARY]
[CONTENT] Administration, Oral | Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Egypt | Female | Humans | Hypoglycemic Agents | Male | Medication Adherence | Middle Aged | Quality of Life [SUMMARY]
[CONTENT] Administration, Oral | Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Egypt | Female | Humans | Hypoglycemic Agents | Male | Medication Adherence | Middle Aged | Quality of Life [SUMMARY]
[CONTENT] Administration, Oral | Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Egypt | Female | Humans | Hypoglycemic Agents | Male | Medication Adherence | Middle Aged | Quality of Life [SUMMARY]
[CONTENT] Administration, Oral | Adult | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Egypt | Female | Humans | Hypoglycemic Agents | Male | Medication Adherence | Middle Aged | Quality of Life [SUMMARY]
[CONTENT] diabetic regimen good | diabetic regimen | importance adherence antidiabetic | adherence diabetic patients | adherence antidiabetic medication [SUMMARY]
[CONTENT] diabetic regimen good | diabetic regimen | importance adherence antidiabetic | adherence diabetic patients | adherence antidiabetic medication [SUMMARY]
[CONTENT] diabetic regimen good | diabetic regimen | importance adherence antidiabetic | adherence diabetic patients | adherence antidiabetic medication [SUMMARY]
[CONTENT] diabetic regimen good | diabetic regimen | importance adherence antidiabetic | adherence diabetic patients | adherence antidiabetic medication [SUMMARY]
[CONTENT] diabetic regimen good | diabetic regimen | importance adherence antidiabetic | adherence diabetic patients | adherence antidiabetic medication [SUMMARY]
[CONTENT] diabetic regimen good | diabetic regimen | importance adherence antidiabetic | adherence diabetic patients | adherence antidiabetic medication [SUMMARY]
[CONTENT] adherence | patients | study | results | diabetic | oral | patient | participants | factors | medication [SUMMARY]
[CONTENT] adherence | patients | study | results | diabetic | oral | patient | participants | factors | medication [SUMMARY]
[CONTENT] adherence | patients | study | results | diabetic | oral | patient | participants | factors | medication [SUMMARY]
[CONTENT] adherence | patients | study | results | diabetic | oral | patient | participants | factors | medication [SUMMARY]
[CONTENT] adherence | patients | study | results | diabetic | oral | patient | participants | factors | medication [SUMMARY]
[CONTENT] adherence | patients | study | results | diabetic | oral | patient | participants | factors | medication [SUMMARY]
[CONTENT] patients | regimen | adherence | diabetic | patient | medication | related | adherent | anti diabetic regimen | diabetic regimen [SUMMARY]
[CONTENT] patients | study | compliance | fanara | questionnaire | medicine | section | compliance group | questions | measure [SUMMARY]
[CONTENT] participants | hypoglycemic | oral hypoglycemic | adherence | oral | stopped | hypoglycemic drug | oral hypoglycemic drug | participants stopped | adherence oral hypoglycemic [SUMMARY]
[CONTENT] better | adherence | order | affecting | factors barriers affecting adherence | poor adherence better | determine factors | determine factors barriers | determine factors barriers affecting | rate medication adherence [SUMMARY]
[CONTENT] adherence | patients | study | diabetic | participants | medication | factors | results | oral | hypoglycemic [SUMMARY]
[CONTENT] adherence | patients | study | diabetic | participants | medication | factors | results | oral | hypoglycemic [SUMMARY]
[CONTENT] ||| ||| 2 ||| ||| 2 [SUMMARY]
[CONTENT] 2 ||| MTA [SUMMARY]
[CONTENT] 372 | 55.59% | 44.41% ||| 26.1% | 47.9% | 26% [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| 2 ||| ||| 2 ||| 2 ||| MTA ||| 372 | 55.59% | 44.41% ||| 26.1% | 47.9% | 26% ||| ||| [SUMMARY]
[CONTENT] ||| ||| 2 ||| ||| 2 ||| 2 ||| MTA ||| 372 | 55.59% | 44.41% ||| 26.1% | 47.9% | 26% ||| ||| [SUMMARY]
High-resolution 3D X-ray imaging of intracranial nitinol stents.
21331601
To assess an optimized 3D imaging protocol for intracranial nitinol stents in 3D C-arm flat detector imaging. For this purpose, an image quality simulation and an in vitro study were carried out.
INTRODUCTION
Nitinol stents of various brands were placed inside an anthropomorphic head phantom, using iodine contrast. Experiments with objects were preceded by image quality and dose simulations. We varied the X-ray imaging parameters of a commercially available interventional X-ray system to set the 3D image quality in the contrast-noise-sharpness space. Beam quality was varied to evaluate the contrast of the stents while keeping the absorbed dose below recommended values. Two detector formats were used, each paired with an appropriate pixel size and X-ray focus size. Zoomed reconstructions were carried out and snapshot images were acquired. High-contrast spatial resolution was assessed with a CT phantom.
METHODS
We found an optimal protocol for imaging intracranial nitinol stents. Contrast resolution was optimized for nickel-titanium-containing stents. A spatial resolution greater than 2.1 lp/mm allows the struts to be visualized. We obtained images of stents of various brands, and a representative set of images is shown. Independent of the make, struts can be imaged with virtually continuous strokes. Measured absorbed doses are below a Computed Tomography Dose Index (CTDI) of 50 mGy.
RESULTS
By balancing the modulation transfer of the imaging components and tuning the high-contrast imaging capabilities, we have shown that thin nitinol stent wires can be reconstructed with high contrast-to-noise ratio and good detail, while keeping radiation doses within recommended values. Experimental results compare well with imaging simulations.
CONCLUSION
[ "Alloys", "Humans", "Imaging, Three-Dimensional", "Intracranial Aneurysm", "Phantoms, Imaging", "Radiation Dosage", "Radiographic Image Interpretation, Computer-Assisted", "Stents", "Tomography, X-Ray Computed", "X-Rays" ]
3261414
Introduction
The use of flat detectors allows display of vessel anatomy with submillimeter resolution and a high contrast-to-noise ratio (CNR). Three-dimensional (3D) cone beam imaging using a flat detector in a C-arm system was adopted and eventually delivered an image quality approaching that of CT with respect to contrast resolution [1, 2]. Despite this improvement, however, 3D imaging of objects with high X-ray transparency and small detail may still be difficult [3]. Materials such as nitinol, with its excellent biocompatibility and self-deployment by shape memory [4], are widely used for intracranial stents and generally yield good clinical results [5]. The use of nitinol stents as a coiling scaffold [6–8] to prevent wire herniation [9] has become common practice in endovascular treatment. The visualization of nitinol stents in the treatment of atherosclerotic stenoses [10, 11] is challenging and necessitates a highly developed X-ray imaging technique. Generally, in 2D clinical imaging protocols, only the stent end-markers made of tantalum or platinum can be recognized; the stent body itself and the struts are barely visible due to the low absorption of the constituents. To improve visualization, we need high contrast resolution combined with high spatial resolution imaging. With 3D cone beam imaging based on flat detectors, both CNR and spatial resolution can be tailored such that fine detail rendition is sufficient to visualize the stent's struts. We describe a vascular imaging technique validated by an image quality assessment, using phantom objects with a variety of commercially available nitinol stents. The purpose of this study is to visualize details of a stent, to support an improved analysis of its placement, by exploiting a joint optimization of all components of the imaging chain and the associated 3D vascular imaging platform.
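The introduction leans on two image-quality measures, CNR and spatial resolution. As a small, self-contained illustration of the CNR definition used throughout (not code from the paper; the ROI-based estimate below is one common convention among several, and the pixel values are synthetic):

```python
import numpy as np

# Illustrative contrast-to-noise ratio (CNR) from two regions of interest:
# CNR = |mean(object ROI) - mean(background ROI)| / std(background ROI).
# This ROI-based convention is an assumption for illustration, not the
# paper's own measurement procedure.

def cnr(object_roi: np.ndarray, background_roi: np.ndarray) -> float:
    signal = abs(object_roi.mean() - background_roi.mean())
    noise = background_roi.std()
    return signal / noise

rng = np.random.default_rng(0)
background = rng.normal(loc=100.0, scale=5.0, size=(64, 64))  # noisy background
stent_strut = rng.normal(loc=130.0, scale=5.0, size=(8, 8))   # brighter detail
print(f"CNR = {cnr(stent_strut, background):.1f}")            # ~6 for these values
```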
null
null
Results
The high contrast limiting resolution has been optimized by balancing the relevant MTF of the imaging components, taking noise transfer into account. Focus size, detector pixel size, and reconstruction voxel size are matched such that the transfer contributions of each parameter are balanced, resulting in an optimal voxel size and voxel number. The nearest predefined volume is chosen such that it matches the size of the stent under test. A further reduction of the voxel size would increase the image noise, whereas an increase of the voxel size would result in a loss of spatial resolution. The results on limiting spatial resolution are shown in Table 3. The true resolution of the high-resolution protocol is better than the maximum reading of the Catphan® 500 phantom and could therefore not be read with this phantom. Deviations of 10-20% between measured and simulated values were found, which can be accounted for by the choice of the 4% threshold modulation. We optimized for the MTF and the inherent system impulse response, including the stent strut diameter of up to 0.08 mm, as well as for contrast and noise. Since the basic materials of the stents are identical for all types (see Table 1), we decided to use only one stent type, namely a Wingspan nitinol stent. Only for the high-resolution/high-contrast case are the imaging results of all stents shown, including the Pharos reference stent.

Table 3. Limiting high-contrast spatial resolution as measured with a Catphan® 500 phantom (The Phantom Factory) and simulated with the IQ analysis program

Protocol | Volume definition | Measured (lp/mm) | Simulated (lp/mm)
Standard | Non-zoomed reconstruction volume | 0.5 | 0.5
Standard | 33% reconstruction volume | 1.3 | 1.6
Standard | 17% reconstruction volume | 1.6 | 1.8
HIRES | Non-zoomed reconstruction volume | 1.1 | 1.2
HIRES | 50% reconstruction volume | 2.0 | 2.5
HIRES | 33% reconstruction volume | 2.1 (a) | 3.2

(a) Catphan® 500 phantom range limit.

Reconstructed stent images inside a head phantom

Figures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stents' axes and centered within the volume, with the window settings on the workstation chosen such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. Figure 1 shows the results of the three protocols for 31 HU; an optical image of the stent is also shown for reference. The reconstruction scaling for the high- and standard resolution cases is found in Table 2. Figure 2 shows the images for the high-contrast tube filling of 550 HU. Apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 3, a sequence of reconstructed images displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. Figure 4 depicts the effect of streaking due to highly opaque platinum coils for the 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance in the high-resolution/high-contrast mode, again for 31-HU contrast liquid. Figure 5 also shows example results for the more radiopaque platinum in the Silk and Leo stents and for the Co–Cr steel alloy-based Pharos stent.

Fig. 1. A Wingspan nitinol stent. A comparison between (a) the high-resolution/high-contrast case, (b) the standard resolution/high-contrast case, and (c) the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub)volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast-inverted optical image is shown on the far left.

Fig. 2. A Wingspan nitinol stent. A comparison between (a) the high-resolution/high-contrast case, (b) the standard resolution/high-contrast case, and (c) the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub)volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown.

Fig. 3. A Wingspan nitinol stent. A comparison between the high-resolution cases for high contrast (80 kV, a), intermediate contrast (100 kV, b), and standard contrast (120 kV, c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU.

Fig. 4. Streak artifacts for Pt coils neighboring the Neuroform stent. (a) High-resolution/high-contrast, (b) standard resolution/high-contrast, and (c) standard resolution/standard contrast case.

Fig. 5. High-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out because its large length does not fit this reduced volume. The steel-based Pharos stent was included for reference.

High contrast limiting spatial resolution

Measured and modeled limiting resolutions are summarized in Table 3. The measured resolutions were found by reading the reconstructed images of a Catphan® 500 phantom in a transaxial plane.

Dose CTDI

By adjusting the technique factors, a CTDI-type weighted dose is kept at the desired low value of 50 mGy for all protocols. The dose measurement and analysis were carried out with a radiation field covering the 16-cm CTDI phantom. Table 4 shows the measured and simulated values. The measured intersystem CTDI dose variability was +/−0.5 mGy. We observed a small systematic error between measured and analytically modeled dose values.

Table 4. Measured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocol

Protocol | Measured dose (mGy) | Simulated dose (mGy)
Standard | 45 | 43
HIRES | 49 | 55
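The dose figures in Table 4 are weighted CTDI values for a 16-cm head phantom. The paper does not reproduce the weighting formula, but the standard CTDI_w convention (one center and four peripheral pencil-chamber measurements in the PMMA cylinder) would be computed as in this sketch; the per-position readings below are invented for illustration, since the paper reports only the resulting weighted values:

```python
# Standard weighted CTDI from pencil-chamber readings in a 16-cm PMMA
# head phantom: CTDI_w = (1/3) * CTDI_center + (2/3) * mean(CTDI_periphery).
# The individual chamber readings here are hypothetical; the paper reports
# the weighted results (45 mGy standard, 49 mGy HIRES), not the raw readings.

def ctdi_weighted(center_mgy: float, peripheral_mgy: list[float]) -> float:
    periphery = sum(peripheral_mgy) / len(peripheral_mgy)
    return center_mgy / 3 + 2 * periphery / 3

# Hypothetical readings that would yield roughly the reported standard-protocol dose:
print(f"CTDI_w = {ctdi_weighted(40.0, [47.0, 48.0, 47.5, 47.5]):.1f} mGy")  # -> 45.0 mGy
```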
Conclusions
We have optimized nitinol stent imaging within the framework of a full-scale model of the 3D X-ray imaging chain. By balancing the sharpness and noise transfer of the imaging components and tuning the contrast-rendering capabilities of a 3D X-ray imaging system, we showed that thin nitinol stent struts can be viewed with a high CNR and good detail rendering. Throughout the optimization, the CTDI radiation doses were kept below recommended values. The quality of the 3D images produced with optimal system settings proved that, independent of the type or manufacturer of the stents, the detail rendering is adequate for assessing the post-deployment shape. The stent's struts can be imaged with virtually continuous strokes.
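The reconstruction zoom factors in Table 2 tie directly to the limiting resolutions in Table 3: with a fixed 256^3 voxel matrix, the voxel size is the zoomed field of view divided by 256, and the voxel grid caps the recoverable resolution at a Nyquist frequency of 1/(2 x voxel size). A short sanity check of that arithmetic follows; the input numbers come from Tables 2 and 3, but the Nyquist comparison itself is our cross-check, not a computation the paper reports:

```python
# Cross-check of Table 2 voxel sizes and the Nyquist ceiling they impose
# on the simulated limiting resolutions of Table 3.

MATRIX = 256  # reconstruction voxel matrix is 256^3 in all protocols

cases = {
    # name: (in-plane field of view in mm, simulated resolution in lp/mm)
    "HIRES 50%":    (52.8, 2.5),
    "HIRES 33%":    (34.4, 3.2),
    "Standard 33%": (82.7, 1.6),
    "Standard 17%": (42.6, 1.8),
}

for name, (fov_mm, simulated_lp_mm) in cases.items():
    voxel_mm = fov_mm / MATRIX
    nyquist_lp_mm = 1.0 / (2.0 * voxel_mm)
    print(f"{name}: voxel {voxel_mm:.2f} mm, Nyquist {nyquist_lp_mm:.1f} lp/mm, "
          f"simulated {simulated_lp_mm} lp/mm")

# For HIRES 33% the grid allows ~3.7 lp/mm while the simulated chain reaches
# 3.2 lp/mm, i.e. the imaging chain rather than the voxel grid is limiting
# there; the other protocols land close to their grid limits, consistent with
# the text's point that shrinking voxels further would mainly add noise.
```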
[ "Reconstructed stent images inside a head phantom", "High contrast limiting spatial resolution", "Dose CTDI" ]
[ "Figures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stent's axes and centered within the volume, where window settings on the work station were such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. Figure 1 shows the results of the three protocols for 31 HU, where for reference, an optical image of the stent is shown also. The reconstruction scaling for the high- and standard resolution cases is found in Table 2. Figure 2 shows the images for the high contrast tube filling of 550 HU. Apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 3, referring to a sequence of reconstructed images, displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. Figure 4 depicts the effect of streaking due to highly opaque platinum coils for 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance in the high-resolution/high-contrast mode, for again 31-HU contrast liquid. Figure 5 also shows example results of the more radiopaque material platinum in the Silk and Leo stents and the Co–Cr steel alloy-based Pharos stent.\nFig. 1A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nFig. 2A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nFig. 3A Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nFig. 4Streak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nFig. 5High-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nA Wingspan nitinol stent. 
A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nA Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nStreak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nHigh-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference", "Measured and modeled limiting resolutions are summarized in Table 3. The results on measured resolution were found by reading the reconstructed images from a Catphan™ 500 phantom in a transaxial plane.", "By adjusting the technique factors, a CTDI-type weighted dose is kept at the desired low value of 50 mGy for all protocols. The actual dose measurement and the analysis were carried out with a radiation field covering the 16-cm CTDI phantom. Table 4 shows the measured and simulated values. The measured intersystem CTDI dose variability was +/−0.5 mGy. We observed a systematic low error between measured and analytically modeled dose values.\nTable 4Measured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocolCTDIMeasured dose (mGy)Simulated dose (mGy)Standard4543HIRES4955\n\nMeasured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocol" ]
[ null, null, null ]
[ "Introduction", "Materials and methods", "Results", "Reconstructed stent images inside a head phantom", "High contrast limiting spatial resolution", "Dose CTDI", "Discussion", "Conclusions" ]
[ "The use of flat detectors allows display of vessel anatomy with submillimeter resolution and a high contrast-to-noise ratio (CNR). Three-dimensional (3D) cone beam imaging using a flat detector in a C-arc system was adopted and eventually displayed an image quality approaching that of CT with respect to contrast resolution [1, 2]. However, and despite the above improvement, 3D imaging of objects with high X-ray transparency and small detail may still be difficult [3]. Materials such as nitinol with its excellent biocompatibility and self deployment by shape memory [4] are widely used for intracranial stents, and generally yield good clinical results [5]. The usage of nitinol stents has become a common practice in the endovascular treatment as a coiling scaffold [6–8] to prevent wire herniation [9]. The visualization of nitinol stents in treatment of atherosclerotic stenoses [10, 11] is challenging and necessitates a highly developed X-ray imaging technique. Generally, in 2D clinical imaging protocols, only the stent end-markers made of tantalum or platinum can be recognized, but the stent body itself and struts are barely visible due to the low absorption of the constituents. To improve visualization, we need a high contrast resolution combined with high spatial resolution imaging. With 3D cone beam imaging based on flat detectors, both CNR and spatial resolution can be tailored such that fine detail rendition is sufficient to visualize the stent's struts. We describe a vascular imaging technique validated by an image quality assessment, using phantom objects with a variety of commercially available Nitinol stents. The purpose of this study is to visualize details of a stent, to support an improved analysis of its placement, by exploiting a joint optimization of all components of the imaging chain and the associated 3D vascular imaging platform.", "A proprietary SW package calculates the image quality in terms of signal flow through the entire imaging chain, using image quality descriptors like Modulation Transfer Function (MTF), impulse response, noise power spectrum, low-contrast and high-contrast (spatial) resolution. Associated with these descriptors, the acquired dose can be calculated at any point in the physical chain, including a computed tomography dose index (CTDI) type of dose assessment for cone beam imaging. The analytical model also considers all the intra- and inter-component relations within the imaging system (also before and after detection). The quality description also incorporates an image quality degradation resulting from, e.g., system remnant blurring upon geometrical calibration and arc movement. The analytical model, comprehensively described by Kroon et al. [12, 13], allows us to vary the system parameters in X-ray generation, absorption, dose, and detection as well as in image processing quality, yielding accurate verified results.\nA number of self-expandable nitinol stents (Table 1), as well as one steel-based stent for reference, were deployed in plastic (infuse) tubes with an inner diameter of 3.5 mm which were inserted in a channel of an anthropomorphic head phantom (CIRS, Norfolk, Virginia; model 603), placed in the system isocenter. The tubes were filled with a diluted contrast agent: 500 ml H2O and 50 ml Visip 270 (GE), in order to produce a 550 HU density mimicking a contrast-filled vessel. The remainder of the phantom channel was filled with diluted contrast agent: 5,000 ml H2O and 50 ml Visip 270, producing 31-HU density. 
Another set of experiments was carried out by filling the tubes with 31-HU diluted contrast agent and the remaining volume again with 31 HU, representing blood. Yet, another experiment was performed by inserting platinum coiling wires close to the stents, in an air-filled cavity in the phantom, enabling an assessment of streaking effects.\nTable 1An overview of the stent propertiesTypeManufacturerLength (mm)Diameter (mm)Strut cross-section (μm)MaterialNeuroformBoston Scientific153w68*t66NitinolNatick, MAEnterpriseCordis374.5–a\nNitinolBridgewater, NJSolitaireev3204Ø 80NitinolPlymouth, MNWingspanBoston Scientific203.5w68*t73NitinolNatick, MASilkBalt Extrusion253.5–a\nNitinolMontmorency, FrFour Pt wiresLeo+Balt Extrusion252.5–a\nNitinolMontmorency, FrTwo Pt wiresPharosMicrus/Biotroniks252.75Ø 60SteelSan Jose, CACr–Co alloy\naThe manufacturers did not supply the dimensions\n\nAn overview of the stent properties\n\naThe manufacturers did not supply the dimensions\nWe used a Philips Xper™ vascular system equipped with a large (30 × 40 cm) flat detector and a 3D workstation for producing the 3D rendered images and hosting the typical 3D image post-processing modules. Images were acquired with a standard 3D protocol (with 45–49 mGy CTDI dose) using a large detector zoom format in combination with a large tube focus, or in a zoomed detector format (22-cm imaging diagonal) and a small tube focus. A set of 620 images was acquired with a 30 fr/s, resulting in a 20-s scan-time over a 200-degree arc-travel. CTDI was measured using a standard measurement protocol with a 16-cm diameter polymethyl methacrylate (PMMA) cylinder, and dose was measured with the Unfors (Billdal, Sweden) Mult-O-Meter 601-PMS. The CTDI dose was kept below 50 mGy. We determined the values using the largest irradiation field while keeping the technique factors identical to those for the high-resolution case with its smaller field. The CTDI protocol would be meaningless for a smaller beam format, as it is not irradiating the complete cylinder with a diameter of 16 cm, and thus not the peripheral probe positions. Spatial resolution was measured with a Catphan® 500 phantom (The Phantom Factory, Salem NY). The resolution reading was limited by the phantom maximum of 2.1 lp/mm. For the modeled limiting resolution analysis, a 4% modulation threshold was chosen. Object (physics governed) contrast was set by three tube voltages. This led us to the following protocols: standard 3D neuro protocol (120 kV), intracranial stent protocol (ics, 80 kV) and ics high resolution (hires, 80 kV, zoomed detector format).\nThe reconstructions were carried out in a zoomed mode, i.e., a region of interest was chosen as a fraction of the maximum volume, determined by the detector size and the projection geometry. This volume is divided into the desired voxel matrix. The corresponding technical parameters are shown in Table 2, where linear reconstruction zoom factors of 50% and 33% for the HIRES protocol and 33% and 17% for the standard protocol are listed. A voxel matrix of 2563 was applied for all cases. The final images were rendered by maximum intensity projection with a slice thickness of 5.0 mm, accommodating the stent's radial dimensions. 
The zoomed secondary reconstructions were carried out by panning the volume such that the relevant stent phantom portions were centered therein.\nTable 2Volume and voxel dimensions for the standard protocol and the high-resolution/high-contrast imaging protocolReconstruction zooming (%)Volume (2563 voxel matrix) (mm3)Voxel size (mm)HIRES5052.8 × 52.8 × 52.80.213334.4 × 34.4 × 34.40.13Standard3382.7 × 82.7 × 63.90.321742.6 × 42.6 × 32.90.17\n\nVolume and voxel dimensions for the standard protocol and the high-resolution/high-contrast imaging protocol", "The high contrast limiting resolution has been optimized by balancing the relevant MTF of the imaging components, taking noise transfer into account. Focus size, detector pixel size, and the reconstruction voxel size are matched such that the transfer contributions of each parameter are balanced, resulting in an optimal voxel size and voxel number. The nearest predefined volume is chosen such that it matches the size of the stent under test. A further reduction of the voxel size would increase the image noise, whereas an increase of the voxel size would result in a spatial resolution loss. The results on spatial limiting resolution are shown in Table 3. The true resolution of the high-resolution protocol is better than the maximum reading of the Catphan™ 500 phantom and could thus not be read by this phantom. Deviations of 10–20% between measured and simulated values were found, which can be accounted for by the choice of the threshold modulation of 4%. We optimized for the MTF and the inherent system impulse response, including the stent strut diameter of up to 0.08 mm, as well as for contrast and noise. Since the basic materials of the stents are identical for all types (see Table 1), we decided to only use one stent type, namely a Wingspan nitinol stent. Only in case of the high-resolution/high-contrast case, the imaging results of all stents are shown, including the Pharos reference stent.\nTable 3Limiting high contrast spatial resolution as measured with a Catphan® 500 phantom (The Phantom Factory) and simulated with the IQ analysis programSpatial resolutionVolume definitionMeasured (lp/mm)Simulated (lp/mm)StandardNon-zoomed reconstruction volume0.50.533% reconstruction volume1.31.617% reconstruction volume1.61.8HIRESNon-zoomed reconstruction volume1.11.250% reconstruction volume2.02.533% reconstruction volume2.1a\n3.2\naCatphan® 500 phantom range limit\n\nLimiting high contrast spatial resolution as measured with a Catphan® 500 phantom (The Phantom Factory) and simulated with the IQ analysis program\n\naCatphan® 500 phantom range limit\n Reconstructed stent images inside a head phantom Figures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stent's axes and centered within the volume, where window settings on the work station were such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. Figure 1 shows the results of the three protocols for 31 HU, where for reference, an optical image of the stent is shown also. The reconstruction scaling for the high- and standard resolution cases is found in Table 2. Figure 2 shows the images for the high contrast tube filling of 550 HU. Apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 3, referring to a sequence of reconstructed images, displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. 
Figure 4 depicts the effect of streaking due to highly opaque platinum coils for 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance in the high-resolution/high-contrast mode, for again 31-HU contrast liquid. Figure 5 also shows example results of the more radiopaque material platinum in the Silk and Leo stents and the Co–Cr steel alloy-based Pharos stent.\nFig. 1A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nFig. 2A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nFig. 3A Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nFig. 4Streak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nFig. 5High-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nA Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nStreak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nHigh-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. 
The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\nFigures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stent's axes and centered within the volume, where window settings on the work station were such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. Figure 1 shows the results of the three protocols for 31 HU, where for reference, an optical image of the stent is shown also. The reconstruction scaling for the high- and standard resolution cases is found in Table 2. Figure 2 shows the images for the high contrast tube filling of 550 HU. Apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 3, referring to a sequence of reconstructed images, displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. Figure 4 depicts the effect of streaking due to highly opaque platinum coils for 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance in the high-resolution/high-contrast mode, for again 31-HU contrast liquid. Figure 5 also shows example results of the more radiopaque material platinum in the Silk and Leo stents and the Co–Cr steel alloy-based Pharos stent.\nFig. 1A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nFig. 2A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nFig. 3A Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nFig. 4Streak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nFig. 5High-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. 
For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nA Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nStreak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nHigh-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\n High contrast limiting spatial resolution Measured and modeled limiting resolutions are summarized in Table 3. The results on measured resolution were found by reading the reconstructed images from a Catphan™ 500 phantom in a transaxial plane.\nMeasured and modeled limiting resolutions are summarized in Table 3. The results on measured resolution were found by reading the reconstructed images from a Catphan™ 500 phantom in a transaxial plane.\n Dose CTDI By adjusting the technique factors, a CTDI-type weighted dose is kept at the desired low value of 50 mGy for all protocols. The actual dose measurement and the analysis were carried out with a radiation field covering the 16-cm CTDI phantom. Table 4 shows the measured and simulated values. The measured intersystem CTDI dose variability was +/−0.5 mGy. We observed a systematic low error between measured and analytically modeled dose values.\nTable 4Measured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocolCTDIMeasured dose (mGy)Simulated dose (mGy)Standard4543HIRES4955\n\nMeasured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocol\nBy adjusting the technique factors, a CTDI-type weighted dose is kept at the desired low value of 50 mGy for all protocols. The actual dose measurement and the analysis were carried out with a radiation field covering the 16-cm CTDI phantom. Table 4 shows the measured and simulated values. The measured intersystem CTDI dose variability was +/−0.5 mGy. We observed a systematic low error between measured and analytically modeled dose values.\nTable 4Measured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocolCTDIMeasured dose (mGy)Simulated dose (mGy)Standard4543HIRES4955\n\nMeasured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocol", "Figures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stent's axes and centered within the volume, where window settings on the work station were such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. 
Figure 1 shows the results of the three protocols for 31 HU, where for reference, an optical image of the stent is shown also. The reconstruction scaling for the high- and standard resolution cases is found in Table 2. Figure 2 shows the images for the high contrast tube filling of 550 HU. Apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 3, referring to a sequence of reconstructed images, displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. Figure 4 depicts the effect of streaking due to highly opaque platinum coils for 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance in the high-resolution/high-contrast mode, for again 31-HU contrast liquid. Figure 5 also shows example results of the more radiopaque material platinum in the Silk and Leo stents and the Co–Cr steel alloy-based Pharos stent.\nFig. 1A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nFig. 2A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nFig. 3A Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nFig. 4Streak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nFig. 5High-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. 
For the standard resolution, 17% and 33% images are shown\nA Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nStreak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nHigh-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference", "Measured and modeled limiting resolutions are summarized in Table 3. The results on measured resolution were found by reading the reconstructed images from a Catphan™ 500 phantom in a transaxial plane.", "By adjusting the technique factors, a CTDI-type weighted dose is kept at the desired low value of 50 mGy for all protocols. The actual dose measurement and the analysis were carried out with a radiation field covering the 16-cm CTDI phantom. Table 4 shows the measured and simulated values. The measured intersystem CTDI dose variability was +/−0.5 mGy. We observed a systematic low error between measured and analytically modeled dose values.\nTable 4Measured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocolCTDIMeasured dose (mGy)Simulated dose (mGy)Standard4543HIRES4955\n\nMeasured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocol", "Intracranial stenting became feasible as a routine treatment by the introduction of very flexible stents and have become a single treatment for a number of neurovascular conditions (Biondi et al. [14], Kurre et al. [15]). The stents are manufactured from a very thin material and most are self expanding, i.e., made from an alloy such as nitinol. However, most nitinol stents are difficult to visualize with fluoroscopy or C-arm cone beam CT. Visualization can be improved by increasing the density of the stent, more radiation, or enhanced image acquisition settings and processing. Increasing the density will add material and therefore unfavorably alters the stent characteristics and is therefore undesirable. An increase in dose will not be accepted (Struelens et al. [16]). Consequently, the visibility is preferably improved by further optimizing the imaging chain. To support and judge correct positioning, the nitinol stent should be visualized in its wall apposition, while conforming to varying diameters throughout the stent's length. To this end, it is advantageous to view details with virtually optical quality.\nDuring the past 6 years, the image quality of CT-like reconstructions, using a C-arm interventional X-ray system equipped with a flat detector, became increasingly notable. High-resolution imaging, with submillimeter isotropic spatial resolution, outperformed digital radiography, fluoroscopy, and even conventional CT (Kamran et al. [17]). A continuous effort to enhance image quality enabled imaging of details with low absorption. 
Evidence of high-quality in vitro imaging of the proper deployment of nitinol stents, covering area coverage, kinking, prolapse, and flattening, is given by Aurboonyawat [11], Ebrahimi [18], and Alvarado [19]. The stent conformity in curved vascular models and simulated aneurysm necks could be studied in detail. In a clinical setting, Benndorf [20] showed in vivo flat panel imaging of the Neuroform nitinol stent. Imaging of balloon-mounted stents based on a Cr–Co steel alloy, with intravenous administration of contrast medium, was carried out by Buhk et al. [21]. The reconstructed images allowed an accurate assessment of the stented lumen. Recently, Patel et al. [22] demonstrated high-resolution and contrast-enhanced simultaneous imaging of intraarterial cerebrovascular stents and their host arteries.
In our study, a Wingspan stent was chosen as a representative example object, and the image quality improvement was assessed by evaluating the reconstructed images. The spatial impulse response clearly improved with the introduction of the HIRES protocol, as shown in Fig. 1 for the 31-HU contrast-filled tube, where the CNR was improved as well by the increased object contrast. These two measures led to an improved visibility of the stent's struts when comparing the standard protocol (17% zooming) with the results of the high-resolution/high-contrast protocol (33% zooming). The open cell structure of the stent is clearly discerned through virtually continuous strokes of the struts, so that the X-ray rendering approaches optical quality. In Fig. 2, a comparison can be made for the 550-HU contrast filling. In this case, the contrast between struts and background is lower due to the denser filling of the tube, giving an accordingly reduced CNR. The sensitivity to object contrast can be readily viewed in Fig. 3, where the comparison is made for varying tube voltage: the evident increase in CNR makes the stent more pronounced with respect to the (noisy) background and accounts for the choice of the lowest voltage, 80 kV, for the ICS and HIRES protocols. In Fig. 4, streak artifacts associated with high-density platinum coils are shown. Although the window settings are optimized for stent perception, details are still rendered with a sufficient contrast-to-artifact distance. Finally, Fig. 5 displays reconstructions of all stent types for the high-resolution/high-contrast case, showing the detail rendering of the protocol. As the strut cross-sections of the investigated types vary between 60 and 80 μm, the rendered contrasts vary accordingly, since the optimized system impulse response dominates in all cases. The Silk and Leo stents stand out more owing to the high object contrast of their platinum auxiliary wires. In spite of the minimal strut diameter, the Pharos struts are rendered with the highest contrast; the Co–Cr steel alloy accounts for an increased visibility due to its inherently higher absorption. The remaining stent types are comparable in their visualization quality, which is expected, considering the similarity in dimensions and composition.
For all protocols under test, the maximum weighted CTDI dose of 50 mGy is lower than the European guideline EUR 16262 [23], which recommends 60 mGy for routine head examinations. Due to the limited anatomical coverage of the smaller irradiated field in the high-resolution case, the dose-length product is smaller as well. 
Moreover, we found that the difference between modeled and measured doses is sufficiently small.", "We have optimized nitinol stent imaging within the framework of a full-scale 3D X-ray imaging model. By balancing the sharpness and noise transfer of the imaging components and tuning the contrast-rendering capabilities of a 3D X-ray imaging system, we showed that thin nitinol stent struts can be viewed with a high CNR and good detail rendering. Throughout the optimization, the CTDI radiation doses were kept below recommended values.
The quality of 3D images produced with optimum system settings proved that, independent of the type or manufacturer of the stents, the detail rendering is adequate to assess the post-deployment shape. The stent's struts can be imaged with virtually continuous strokes." ]
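The dose results above report a "CTDI-type weighted dose" for the 16-cm PMMA phantom without spelling out the weighting. A minimal sketch, assuming the conventional weighted-CTDI definition (one-third central probe reading plus two-thirds mean peripheral reading); the probe values below are hypothetical and chosen only to land near the ~45 mGy standard-protocol value in Table 4:

```python
def ctdi_weighted(center_mGy: float, periphery_mGy: list[float]) -> float:
    """Conventional weighted CTDI for a 16-cm PMMA head phantom:
    1/3 of the central probe reading plus 2/3 of the mean peripheral reading.
    (Assumed definition; the study does not state its exact weighting.)"""
    return center_mGy / 3.0 + 2.0 / 3.0 * (sum(periphery_mGy) / len(periphery_mGy))

# Hypothetical probe readings (mGy), not measured values from the study:
print(round(ctdi_weighted(41.0, [46.0, 47.0, 48.0, 46.0]), 1))  # -> 44.8
```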
[ "introduction", "materials|methods", "results", null, null, null, "discussion", "conclusion" ]
[ "X-ray imaging", "Cone beam 3D", "Intracranial stents", "Nitinol", "Aneurysms" ]
Introduction: The use of flat detectors allows display of vessel anatomy with submillimeter resolution and a high contrast-to-noise ratio (CNR). Three-dimensional (3D) cone beam imaging using a flat detector in a C-arm system was adopted and eventually displayed an image quality approaching that of CT with respect to contrast resolution [1, 2]. However, despite this improvement, 3D imaging of objects with high X-ray transparency and small detail may still be difficult [3]. Materials such as nitinol, with its excellent biocompatibility and self-deployment by shape memory [4], are widely used for intracranial stents and generally yield good clinical results [5]. The use of nitinol stents has become common practice in endovascular treatment as a coiling scaffold [6–8] to prevent wire herniation [9]. The visualization of nitinol stents in the treatment of atherosclerotic stenoses [10, 11] is challenging and necessitates a highly developed X-ray imaging technique. Generally, in 2D clinical imaging protocols, only the stent end-markers made of tantalum or platinum can be recognized; the stent body itself and its struts are barely visible due to the low absorption of the constituents. To improve visualization, we need high contrast resolution combined with high spatial resolution imaging. With 3D cone beam imaging based on flat detectors, both CNR and spatial resolution can be tailored such that fine detail rendition is sufficient to visualize the stent's struts. We describe a vascular imaging technique validated by an image quality assessment, using phantom objects with a variety of commercially available nitinol stents. The purpose of this study is to visualize details of a stent, to support an improved analysis of its placement, by exploiting a joint optimization of all components of the imaging chain and the associated 3D vascular imaging platform. Materials and methods: A proprietary software package calculates the image quality in terms of signal flow through the entire imaging chain, using image quality descriptors such as the modulation transfer function (MTF), impulse response, noise power spectrum, and low-contrast and high-contrast (spatial) resolution. Associated with these descriptors, the acquired dose can be calculated at any point in the physical chain, including a computed tomography dose index (CTDI) type of dose assessment for cone beam imaging. The analytical model also considers all the intra- and inter-component relations within the imaging system (both before and after detection). The quality description also incorporates image quality degradation resulting from, e.g., remnant system blurring after geometrical calibration and arc movement. The analytical model, comprehensively described by Kroon et al. [12, 13], allows us to vary the system parameters in X-ray generation, absorption, dose, and detection, as well as in image processing quality, yielding accurate, verified results. A number of self-expandable nitinol stents (Table 1), as well as one steel-based stent for reference, were deployed in plastic (infuse) tubes with an inner diameter of 3.5 mm, which were inserted into a channel of an anthropomorphic head phantom (CIRS, Norfolk, Virginia; model 603) placed in the system isocenter. The tubes were filled with a diluted contrast agent (500 mL H2O and 50 mL Visip 270, GE) in order to produce a 550 HU density mimicking a contrast-filled vessel. 
The remainder of the phantom channel was filled with diluted contrast agent (5,000 mL H2O and 50 mL Visip 270), producing a 31-HU density. Another set of experiments was carried out by filling the tubes with the 31-HU diluted contrast agent and the remaining volume again with 31 HU, representing blood. Yet another experiment was performed by inserting platinum coiling wires close to the stents, in an air-filled cavity in the phantom, enabling an assessment of streaking effects.
Table 1 An overview of the stent properties
Type | Manufacturer | Length (mm) | Diameter (mm) | Strut cross-section (μm) | Material
Neuroform | Boston Scientific, Natick, MA | 15 | 3 | w68 × t66 | Nitinol
Enterprise | Cordis, Bridgewater, NJ | 37 | 4.5 | –(a) | Nitinol
Solitaire | ev3, Plymouth, MN | 20 | 4 | Ø 80 | Nitinol
Wingspan | Boston Scientific, Natick, MA | 20 | 3.5 | w68 × t73 | Nitinol
Silk | Balt Extrusion, Montmorency, Fr | 25 | 3.5 | –(a) | Nitinol, four Pt wires
Leo+ | Balt Extrusion, Montmorency, Fr | 25 | 2.5 | –(a) | Nitinol, two Pt wires
Pharos | Micrus/Biotronik, San Jose, CA | 25 | 2.75 | Ø 60 | Steel, Cr–Co alloy
(a) The manufacturers did not supply the dimensions.
We used a Philips Xper™ vascular system equipped with a large (30 × 40 cm) flat detector and a 3D workstation for producing the 3D rendered images and hosting the typical 3D image post-processing modules. Images were acquired with a standard 3D protocol (with a 45–49 mGy CTDI dose) using a large detector zoom format in combination with a large tube focus, or in a zoomed detector format (22-cm imaging diagonal) with a small tube focus. A set of 620 images was acquired at 30 frames/s, resulting in a 20-s scan time over a 200-degree arc travel. CTDI was measured using a standard measurement protocol with a 16-cm diameter polymethyl methacrylate (PMMA) cylinder, and dose was measured with the Unfors (Billdal, Sweden) Mult-O-Meter 601-PMS. The CTDI dose was kept below 50 mGy. We determined the values using the largest irradiation field while keeping the technique factors identical to those for the high-resolution case with its smaller field; the CTDI protocol would be meaningless for a smaller beam format, as it does not irradiate the complete 16-cm cylinder, and thus misses the peripheral probe positions. Spatial resolution was measured with a Catphan® 500 phantom (The Phantom Factory, Salem, NY). The resolution reading was limited by the phantom maximum of 2.1 lp/mm. For the modeled limiting resolution analysis, a 4% modulation threshold was chosen. Object (physics-governed) contrast was set via three tube voltages. This led to the following protocols: the standard 3D neuro protocol (120 kV), the intracranial stent protocol (ICS, 80 kV), and the ICS high-resolution protocol (HIRES, 80 kV, zoomed detector format). The reconstructions were carried out in a zoomed mode, i.e., a region of interest was chosen as a fraction of the maximum volume determined by the detector size and the projection geometry. This volume is divided into the desired voxel matrix. The corresponding technical parameters are shown in Table 2, where linear reconstruction zoom factors of 50% and 33% for the HIRES protocol and 33% and 17% for the standard protocol are listed. A voxel matrix of 256³ was applied in all cases; the resulting voxel sizes are worked out in the sketch below. The final images were rendered by maximum intensity projection with a slice thickness of 5.0 mm, accommodating the stents' radial dimensions. The zoomed secondary reconstructions were carried out by panning the volume such that the relevant stent phantom portions were centered therein. 
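Because the voxel matrix is fixed at 256³, the voxel size follows directly from the edge length of the (zoomed) reconstruction volume. A minimal sketch reproducing the voxel sizes of Table 2 (the in-plane edge is used for the non-cubic standard volumes; the helper function is illustrative, not part of the system software):

```python
MATRIX = 256  # fixed reconstruction matrix edge (voxels)

def voxel_size_mm(volume_edge_mm: float, matrix: int = MATRIX) -> float:
    """Isotropic in-plane voxel edge for a given reconstruction volume edge."""
    return volume_edge_mm / matrix

for label, edge_mm in [("HIRES 50%", 52.8), ("HIRES 33%", 34.4),
                       ("Standard 33%", 82.7), ("Standard 17%", 42.6)]:
    print(f"{label}: {voxel_size_mm(edge_mm):.2f} mm/voxel")
# -> 0.21, 0.13, 0.32, 0.17 mm, matching Table 2
```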
Table 2 Volume and voxel dimensions for the standard protocol and the high-resolution/high-contrast imaging protocol
Protocol | Reconstruction zooming (%) | Volume (256³ voxel matrix) (mm³) | Voxel size (mm)
HIRES | 50 | 52.8 × 52.8 × 52.8 | 0.21
HIRES | 33 | 34.4 × 34.4 × 34.4 | 0.13
Standard | 33 | 82.7 × 82.7 × 63.9 | 0.32
Standard | 17 | 42.6 × 42.6 × 32.9 | 0.17
Results: The high-contrast limiting resolution was optimized by balancing the relevant MTFs of the imaging components, taking noise transfer into account. Focus size, detector pixel size, and reconstruction voxel size are matched such that the transfer contributions of each parameter are balanced, resulting in an optimal voxel size and voxel number. The nearest predefined volume is chosen such that it matches the size of the stent under test. A further reduction of the voxel size would increase the image noise, whereas an increase of the voxel size would result in a loss of spatial resolution. The results on limiting spatial resolution are shown in Table 3. The true resolution of the high-resolution protocol is better than the maximum reading of the Catphan® 500 phantom and could thus not be determined with this phantom. Deviations of 10–20% between measured and simulated values were found, which can be accounted for by the choice of the 4% threshold modulation. We optimized for the MTF and the inherent system impulse response, including the stent strut diameter of up to 0.08 mm, as well as for contrast and noise. Since the basic materials of the stents are identical for all types (see Table 1), we decided to use only one stent type, namely a Wingspan nitinol stent. Only for the high-resolution/high-contrast case are the imaging results of all stents shown, including the Pharos reference stent.
Table 3 Limiting high-contrast spatial resolution as measured with a Catphan® 500 phantom (The Phantom Factory) and simulated with the IQ analysis program
Protocol | Volume definition | Measured (lp/mm) | Simulated (lp/mm)
Standard | Non-zoomed reconstruction volume | 0.5 | 0.5
Standard | 33% reconstruction volume | 1.3 | 1.6
Standard | 17% reconstruction volume | 1.6 | 1.8
HIRES | Non-zoomed reconstruction volume | 1.1 | 1.2
HIRES | 50% reconstruction volume | 2.0 | 2.5
HIRES | 33% reconstruction volume | 2.1(a) | 3.2
(a) Catphan® 500 phantom range limit.
Reconstructed stent images inside a head phantom: Figures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stents' axes and centered within the volume, with window settings on the workstation chosen such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. Figure 1 shows the results of the three protocols for 31 HU; for reference, an optical image of the stent is also shown. The reconstruction scaling for the high- and standard-resolution cases is given in Table 2. Figure 2 shows the images for the high-contrast tube filling of 550 HU; apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 3, a sequence of reconstructed images displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. Figure 4 depicts the streaking caused by highly opaque platinum coils for the 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance of the high-resolution/high-contrast mode, again with 31-HU contrast liquid. Figure 5 also shows example results for the more radiopaque platinum in the Silk and Leo stents and for the Co–Cr steel alloy-based Pharos stent.
Fig. 1 A Wingspan nitinol stent. A comparison between (a) the high-resolution/high-contrast case, (b) the standard resolution/high-contrast case, and (c) the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub)volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast-inverted optical image is shown on the far left.
Fig. 2 A Wingspan nitinol stent. A comparison between (a) the high-resolution/high-contrast case, (b) the standard resolution/high-contrast case, and (c) the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub)volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown.
Fig. 3 A Wingspan nitinol stent. A comparison between the high-resolution cases for high contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU.
Fig. 4 Streak artifacts for Pt coils neighboring the Neuroform stent. (a) High-resolution/high-contrast, (b) standard resolution/high-contrast, and (c) standard resolution/standard contrast case.
Fig. 5 High-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out because its large length does not fit this reduced volume. The steel-based Pharos stent was included for reference.
High contrast limiting spatial resolution: Measured and modeled limiting resolutions are summarized in Table 3. The measured resolutions were obtained by reading the reconstructed images of a Catphan® 500 phantom in a transaxial plane.
Dose CTDI: By adjusting the technique factors, a CTDI-type weighted dose was kept at the desired low value of 50 mGy for all protocols. The dose measurement and analysis were carried out with a radiation field covering the 16-cm CTDI phantom. Table 4 shows the measured and simulated values. The measured intersystem CTDI dose variability was ±0.5 mGy. We observed only a small systematic deviation between measured and analytically modeled dose values.
Table 4 Measured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocol
CTDI | Measured dose (mGy) | Simulated dose (mGy)
Standard | 45 | 43
HIRES | 49 | 55
Discussion: Intracranial stenting became feasible as a routine treatment with the introduction of very flexible stents and has become a stand-alone treatment for a number of neurovascular conditions (Biondi et al. [14], Kurre et al. [15]). The stents are manufactured from very thin material, and most are self-expanding, i.e., made from an alloy such as nitinol. However, most nitinol stents are difficult to visualize with fluoroscopy or C-arm cone beam CT. Visualization can be improved by increasing the density of the stent, applying more radiation, or enhancing the image acquisition settings and processing. Increasing the density adds material and thereby unfavorably alters the stent characteristics, which is undesirable. An increase in dose will not be accepted (Struelens et al. [16]). Consequently, visibility is best improved by further optimizing the imaging chain. To support and judge correct positioning, the nitinol stent should be visualized in its wall apposition while conforming to varying diameters throughout the stent's length. To this end, it is advantageous to view details with virtually optical quality. During the past 6 years, the image quality of CT-like reconstructions using a C-arm interventional X-ray system equipped with a flat detector has become increasingly notable. High-resolution imaging, with submillimeter isotropic spatial resolution, outperformed digital radiography, fluoroscopy, and even conventional CT (Kamran et al. [17]). A continuous effort to enhance image quality enabled imaging of details with low absorption. 
Evidence of high-quality in vitro imaging of the proper deployment of nitinol stents, covering area coverage, kinking, prolapse, and flattening, is given by Aurboonyawat [11], Ebrahimi [18], and Alvarado [19]. The stent conformity in curved vascular models and simulated aneurysm necks could be studied in detail. In a clinical setting, Benndorf [20] showed in vivo flat panel imaging of the Neuroform nitinol stent. Imaging of balloon-mounted stents based on a Cr–Co steel alloy, with intravenous administration of contrast medium, was carried out by Buhk et al. [21]. The reconstructed images allowed an accurate assessment of the stented lumen. Recently, Patel et al. [22] demonstrated high-resolution and contrast-enhanced simultaneous imaging of intraarterial cerebrovascular stents and their host arteries. In our study, a Wingspan stent was chosen as a representative example object, and the image quality improvement was assessed by evaluating the reconstructed images. The spatial impulse response clearly improved with the introduction of the HIRES protocol, as shown in Fig. 1 for the 31-HU contrast-filled tube, where the CNR was improved as well by the increased object contrast. These two measures led to an improved visibility of the stent's struts when comparing the standard protocol (17% zooming) with the results of the high-resolution/high-contrast protocol (33% zooming). The open cell structure of the stent is clearly discerned through virtually continuous strokes of the struts, so that the X-ray rendering approaches optical quality. In Fig. 2, a comparison can be made for the 550-HU contrast filling. In this case, the contrast between struts and background is lower due to the denser filling of the tube, giving an accordingly reduced CNR. The sensitivity to object contrast can be readily viewed in Fig. 3, where the comparison is made for varying tube voltage: the evident increase in CNR makes the stent more pronounced with respect to the (noisy) background and accounts for the choice of the lowest voltage, 80 kV, for the ICS and HIRES protocols. In Fig. 4, streak artifacts associated with high-density platinum coils are shown. Although the window settings are optimized for stent perception, details are still rendered with a sufficient contrast-to-artifact distance. Finally, Fig. 5 displays reconstructions of all stent types for the high-resolution/high-contrast case, showing the detail rendering of the protocol. As the strut cross-sections of the investigated types vary between 60 and 80 μm, the rendered contrasts vary accordingly, since the optimized system impulse response dominates in all cases. The Silk and Leo stents stand out more owing to the high object contrast of their platinum auxiliary wires. In spite of the minimal strut diameter, the Pharos struts are rendered with the highest contrast; the Co–Cr steel alloy accounts for an increased visibility due to its inherently higher absorption. The remaining stent types are comparable in their visualization quality, which is expected, considering the similarity in dimensions and composition. For all protocols under test, the maximum weighted CTDI dose of 50 mGy is lower than the European guideline EUR 16262 [23], which recommends 60 mGy for routine head examinations. Due to the limited anatomical coverage of the smaller irradiated field in the high-resolution case, the dose-length product is smaller as well. 
Moreover, we found that the difference between modeled and measured doses is sufficiently small. Conclusions: We have optimized nitinol stent imaging within the framework of a full-scale 3D X-ray imaging model. By balancing the sharpness and noise transfer of the imaging components and tuning the contrast-rendering capabilities of a 3D X-ray imaging system, we showed that thin nitinol stent struts can be viewed with a high CNR and good detail rendering. Throughout the optimization, the CTDI radiation doses were kept below recommended values. The quality of 3D images produced with optimum system settings proved that, independent of the type or manufacturer of the stents, the detail rendering is adequate to assess the post-deployment shape. The stent's struts can be imaged with virtually continuous strokes.
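The balancing of sharpness and noise transfer described in the conclusion can be made concrete by cascading component MTFs and reading off the frequency at which the combined modulation falls to the 4% threshold used for the modeled limiting resolution. The sketch below is not the proprietary IQ model cited in the methods; it assumes a Gaussian focal-spot blur and ideal square apertures for the detector pixel and reconstruction voxel, a common first-order approximation, and the parameter values are illustrative only:

```python
import math

def aperture_mtf(f_lp_mm: float, width_mm: float) -> float:
    """MTF of an ideal square aperture (detector pixel or voxel)."""
    x = math.pi * width_mm * f_lp_mm
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

def gaussian_mtf(f_lp_mm: float, sigma_mm: float) -> float:
    """MTF of a Gaussian blur, e.g., the effective focal-spot blur."""
    return math.exp(-2.0 * (math.pi * sigma_mm * f_lp_mm) ** 2)

def system_mtf(f, focus_sigma_mm, pixel_mm, voxel_mm):
    # Independent blur stages cascade: their MTFs multiply.
    return (gaussian_mtf(f, focus_sigma_mm)
            * aperture_mtf(f, pixel_mm)
            * aperture_mtf(f, voxel_mm))

def limiting_resolution(focus_sigma_mm, pixel_mm, voxel_mm,
                        threshold=0.04, f_step=0.001):
    """Highest spatial frequency (lp/mm) with modulation above the threshold."""
    f = 0.0
    while system_mtf(f, focus_sigma_mm, pixel_mm, voxel_mm) > threshold:
        f += f_step
    return f

# Illustrative parameters only (not the study's actual component values):
print(f"{limiting_resolution(0.05, 0.154, 0.13):.1f} lp/mm")
```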
Background: To assess an optimized 3D imaging protocol for intracranial nitinol stents in 3D C-arm flat detector imaging. For this purpose, an image quality simulation and an in vitro study were carried out. Methods: Nitinol stents of various brands were placed inside an anthropomorphic head phantom, using iodine contrast. Experiments with objects were preceded by image quality and dose simulations. We varied X-ray imaging parameters in a commercial interventional X-ray system to set 3D image quality in the contrast-noise-sharpness space. Beam quality was varied to evaluate the contrast of the stents while keeping the absorbed dose below recommended values. Two detector formats were used, paired with an appropriate pixel size and X-ray focus size. Zoomed reconstructions were carried out and snapshot images acquired. High-contrast spatial resolution was assessed with a CT phantom. Results: We found an optimal protocol for imaging intracranial nitinol stents. Contrast resolution was optimized for nickel-titanium-containing stents. A high spatial resolution, greater than 2.1 lp/mm, allows the struts to be visualized. We obtained images of stents of various brands, and a representative set of images is shown. Independent of the make, struts can be imaged with virtually continuous strokes. Measured absorbed doses are shown to be lower than a 50 mGy Computed Tomography Dose Index (CTDI). Conclusions: By balancing the modulation transfer of the imaging components and tuning the high-contrast imaging capabilities, we have shown that thin nitinol stent wires can be reconstructed with a high contrast-to-noise ratio and good detail, while keeping radiation doses within recommended values. Experimental results compare well with imaging simulations.
Introduction: The use of flat detectors allows display of vessel anatomy with submillimeter resolution and a high contrast-to-noise ratio (CNR). Three-dimensional (3D) cone beam imaging using a flat detector in a C-arm system was adopted and eventually displayed an image quality approaching that of CT with respect to contrast resolution [1, 2]. However, despite this improvement, 3D imaging of objects with high X-ray transparency and small detail may still be difficult [3]. Materials such as nitinol, with its excellent biocompatibility and self-deployment by shape memory [4], are widely used for intracranial stents and generally yield good clinical results [5]. The use of nitinol stents has become common practice in endovascular treatment as a coiling scaffold [6–8] to prevent wire herniation [9]. The visualization of nitinol stents in the treatment of atherosclerotic stenoses [10, 11] is challenging and necessitates a highly developed X-ray imaging technique. Generally, in 2D clinical imaging protocols, only the stent end-markers made of tantalum or platinum can be recognized; the stent body itself and its struts are barely visible due to the low absorption of the constituents. To improve visualization, we need high contrast resolution combined with high spatial resolution imaging. With 3D cone beam imaging based on flat detectors, both CNR and spatial resolution can be tailored such that fine detail rendition is sufficient to visualize the stent's struts. We describe a vascular imaging technique validated by an image quality assessment, using phantom objects with a variety of commercially available nitinol stents. The purpose of this study is to visualize details of a stent, to support an improved analysis of its placement, by exploiting a joint optimization of all components of the imaging chain and the associated 3D vascular imaging platform. Conclusions: We have optimized nitinol stent imaging within the framework of a full-scale 3D X-ray imaging model. By balancing the sharpness and noise transfer of the imaging components and tuning the contrast-rendering capabilities of a 3D X-ray imaging system, we showed that thin nitinol stent struts can be viewed with a high CNR and good detail rendering. Throughout the optimization, the CTDI radiation doses were kept below recommended values. The quality of 3D images produced with optimum system settings proved that, independent of the type or manufacturer of the stents, the detail rendering is adequate to assess the post-deployment shape. The stent's struts can be imaged with virtually continuous strokes.
Background: To assess an optimized 3D imaging protocol for intracranial nitinol stents in 3D C-arm flat detector imaging. For this purpose, an image quality simulation and an in vitro study were carried out. Methods: Nitinol stents of various brands were placed inside an anthropomorphic head phantom, using iodine contrast. Experiments with objects were preceded by image quality and dose simulations. We varied X-ray imaging parameters in a commercial interventional X-ray system to set 3D image quality in the contrast-noise-sharpness space. Beam quality was varied to evaluate the contrast of the stents while keeping the absorbed dose below recommended values. Two detector formats were used, paired with an appropriate pixel size and X-ray focus size. Zoomed reconstructions were carried out and snapshot images acquired. High-contrast spatial resolution was assessed with a CT phantom. Results: We found an optimal protocol for imaging intracranial nitinol stents. Contrast resolution was optimized for nickel-titanium-containing stents. A high spatial resolution, greater than 2.1 lp/mm, allows the struts to be visualized. We obtained images of stents of various brands, and a representative set of images is shown. Independent of the make, struts can be imaged with virtually continuous strokes. Measured absorbed doses are shown to be lower than a 50 mGy Computed Tomography Dose Index (CTDI). Conclusions: By balancing the modulation transfer of the imaging components and tuning the high-contrast imaging capabilities, we have shown that thin nitinol stent wires can be reconstructed with a high contrast-to-noise ratio and good detail, while keeping radiation doses within recommended values. Experimental results compare well with imaging simulations.
6,521
316
[ 1018, 35, 133 ]
8
[ "contrast", "high", "resolution", "stent", "standard", "high contrast", "case", "resolution high", "resolution high contrast", "standard resolution" ]
[ "expandable nitinol stents", "optimized nitinol stent", "nitinol stents table", "nitinol stent imagery", "visualization nitinol stents" ]
null
[CONTENT] X-ray imaging | Cone beam 3D | Intracranial stents | Nitinol | Aneurysms [SUMMARY]
null
[CONTENT] X-ray imaging | Cone beam 3D | Intracranial stents | Nitinol | Aneurysms [SUMMARY]
[CONTENT] X-ray imaging | Cone beam 3D | Intracranial stents | Nitinol | Aneurysms [SUMMARY]
[CONTENT] X-ray imaging | Cone beam 3D | Intracranial stents | Nitinol | Aneurysms [SUMMARY]
[CONTENT] X-ray imaging | Cone beam 3D | Intracranial stents | Nitinol | Aneurysms [SUMMARY]
[CONTENT] Alloys | Humans | Imaging, Three-Dimensional | Intracranial Aneurysm | Phantoms, Imaging | Radiation Dosage | Radiographic Image Interpretation, Computer-Assisted | Stents | Tomography, X-Ray Computed | X-Rays [SUMMARY]
null
[CONTENT] Alloys | Humans | Imaging, Three-Dimensional | Intracranial Aneurysm | Phantoms, Imaging | Radiation Dosage | Radiographic Image Interpretation, Computer-Assisted | Stents | Tomography, X-Ray Computed | X-Rays [SUMMARY]
[CONTENT] Alloys | Humans | Imaging, Three-Dimensional | Intracranial Aneurysm | Phantoms, Imaging | Radiation Dosage | Radiographic Image Interpretation, Computer-Assisted | Stents | Tomography, X-Ray Computed | X-Rays [SUMMARY]
[CONTENT] Alloys | Humans | Imaging, Three-Dimensional | Intracranial Aneurysm | Phantoms, Imaging | Radiation Dosage | Radiographic Image Interpretation, Computer-Assisted | Stents | Tomography, X-Ray Computed | X-Rays [SUMMARY]
[CONTENT] Alloys | Humans | Imaging, Three-Dimensional | Intracranial Aneurysm | Phantoms, Imaging | Radiation Dosage | Radiographic Image Interpretation, Computer-Assisted | Stents | Tomography, X-Ray Computed | X-Rays [SUMMARY]
[CONTENT] expandable nitinol stents | optimized nitinol stent | nitinol stents table | nitinol stent imagery | visualization nitinol stents [SUMMARY]
null
[CONTENT] expandable nitinol stents | optimized nitinol stent | nitinol stents table | nitinol stent imagery | visualization nitinol stents [SUMMARY]
[CONTENT] expandable nitinol stents | optimized nitinol stent | nitinol stents table | nitinol stent imagery | visualization nitinol stents [SUMMARY]
[CONTENT] expandable nitinol stents | optimized nitinol stent | nitinol stents table | nitinol stent imagery | visualization nitinol stents [SUMMARY]
[CONTENT] expandable nitinol stents | optimized nitinol stent | nitinol stents table | nitinol stent imagery | visualization nitinol stents [SUMMARY]
[CONTENT] contrast | high | resolution | stent | standard | high contrast | case | resolution high | resolution high contrast | standard resolution [SUMMARY]
null
[CONTENT] contrast | high | resolution | stent | standard | high contrast | case | resolution high | resolution high contrast | standard resolution [SUMMARY]
[CONTENT] contrast | high | resolution | stent | standard | high contrast | case | resolution high | resolution high contrast | standard resolution [SUMMARY]
[CONTENT] contrast | high | resolution | stent | standard | high contrast | case | resolution high | resolution high contrast | standard resolution [SUMMARY]
[CONTENT] contrast | high | resolution | stent | standard | high contrast | case | resolution high | resolution high contrast | standard resolution [SUMMARY]
[CONTENT] imaging | 3d | resolution | nitinol stents | flat | detectors | 3d cone | 3d cone beam | 3d cone beam imaging | contrast resolution [SUMMARY]
null
[CONTENT] contrast | high | resolution | standard | standard resolution | case | contrast case | case standard resolution | case standard | high contrast [SUMMARY]
[CONTENT] imaging | rendering | 3d | detail rendering | ray imaging | stent | detail | stent struts | transfer | struts [SUMMARY]
[CONTENT] contrast | high | resolution | stent | dose | standard | imaging | high contrast | measured | case [SUMMARY]
[CONTENT] contrast | high | resolution | stent | dose | standard | imaging | high contrast | measured | case [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] ||| ||| larger than 2.1 ||| ||| ||| 50 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| Nitinol ||| ||| ||| ||| Two ||| ||| CT ||| ||| ||| ||| larger than 2.1 ||| ||| ||| 50 ||| ||| [SUMMARY]
[CONTENT] ||| ||| Nitinol ||| ||| ||| ||| Two ||| ||| CT ||| ||| ||| ||| larger than 2.1 ||| ||| ||| 50 ||| ||| [SUMMARY]
Prolonged survival in patients with hand-foot skin reaction secondary to cooperative sorafenib treatment.
34539142
Sorafenib is an oral drug that prolongs overall survival (OS) in patients with hepatocellular carcinoma. Adverse events, including hand-foot skin reaction (HFSR), lead to permanent sorafenib discontinuation.
BACKGROUND
We performed a retrospective, multicenter study of patients treated with sorafenib monotherapy between May 2009 and March 2018. We developed a mutual cooperation system that was initiated at the start of sorafenib treatment to effectively manage adverse events. The mutual cooperation system entailed patients receiving consultations during which pharmacists provided accurate information about sorafenib to alleviate the fear and anxiety related to adverse events. We stratified the patients into three groups: Group A, patients without HFSR but with pharmacist intervention; Group B, patients with HFSR and pharmacist interventions unreported to oncologists (nonmutual cooperation system); and Group C, patients with HFSR and pharmacist interventions known to oncologists (mutual cooperation system). OS and time to treatment failure (TTF) were evaluated using the Kaplan-Meier method.
METHODS
We enrolled 134 patients (Group A, n = 41; Group B, n = 30; Group C, n = 63). The median OS was significantly different between Groups A and C (6.2 vs 13.9 mo, P < 0.01) but not between Groups A and B (6.2 vs 7.7 mo, P = 0.62). Group A vs Group C was an independent predictor of OS (HR, 0.41; 95%CI: 0.25-0.66; P < 0.01). In Group B alone, TTF was significantly lower and the nonadherence rate was higher (P < 0.01). In addition, the Spearman's rank correlation coefficients between OS and TTF in each group were 0.41 (Group A; P < 0.01), 0.13 (Group B; P = 0.51), and 0.58 (Group C; P < 0.01). There was a highly significant correlation between OS and TTF in Group C, but no correlation between OS and TTF in Group B.
RESULTS
The mutual cooperation system increased treatment duration and improved prognosis in patients with HFSR. Future prospective studies (e.g., randomized controlled trials) and improved adherence could help prevent OS underestimation.
CONCLUSION
[ "Antineoplastic Agents", "Carcinoma, Hepatocellular", "Humans", "Liver Neoplasms", "Niacinamide", "Phenylurea Compounds", "Prospective Studies", "Retrospective Studies", "Sorafenib", "Treatment Outcome" ]
8409165
INTRODUCTION
Sorafenib is a multikinase inhibitor used to treat advanced hepatocellular carcinoma (HCC)[1,2]. Although sorafenib prolongs overall survival (OS) in patients with HCC, it is associated with various adverse events (AEs) that may lead to permanent discontinuation[3]. Previous studies found that hand-foot skin reaction (HFSR) was a prognostic marker of longer survival[4-6]. While HFSR is an important predictor of survival outcomes in clinical practice and clinical sorafenib trials, AE management could influence the efficacy of HFSR as a prognostic factor. A recent study showed that increased clinician experience with AEs reduced the potential for discontinuing sorafenib therapy, resulting in a longer OS in patients with HCC[7]. Nevertheless, it takes a long time for clinicians to develop the necessary experience for the management of AEs, and even with experience, it takes a substantial amount of time to provide a system of adequate follow-up after sorafenib initiation. As sorafenib is administered orally, its successful use for HCC treatment relies on patient medication adherence. However, many studies indicate that patients with cancer are sometimes nonadherent when prescribed oral drugs[8,9], and AEs are the main cause of poor adherence[10]. Poor adherence can lead to poor outcomes, and clinicians may wrongly conclude that a drug is ineffective because the response to treatment is insufficient[11]. It is important for patients to actively participate in making treatment decisions and then receive treatment according to their decisions to improve adherence[12]. We introduced behavior change techniques (patient education, medication regimen management, pharmacist-led interventions)[13,14] in our facilities as interventions to promote adherence. Using a preliminary simulation, we estimated that collecting patient information takes at least 20 min. From this, we concluded that it is difficult for oncologists to manage drugs that cause various AEs (e.g., sorafenib) without assistance due to their obligations to many patients. Thus, we developed a mutual cooperation system involving collaboration between oncologists and pharmacists to ensure effective AE management. This mutual cooperation system consisted of the initial intervention by a pharmacist followed by a medical examination by an oncologist. However, this system was affected by the patient’s behavior because patients were not obliged to follow the system. Some patients received intervention from a pharmacist after a medical examination by an oncologist. Effective AE management that improves medication adherence has a considerable impact on survival outcomes. Previous single-center studies suggest that healthcare provider interventions improve adherence, and the onset of HFSR was a favorable prognostic factor of OS in patients with HCC[15,16]. However, little is known about the association between prognosis and medication adherence in patients with HCC, and multicenter studies on this relationship are lacking. Therefore, we aimed to compare the impact of different AE interventions on patient prognosis.
MATERIALS AND METHODS
Study design We retrospectively evaluated patients with advanced HCC treated with sorafenib monotherapy and no subsequent chemotherapeutic agent between May 2009 and March 2018, using the medical records of the following hospitals in Japan: Hitachi General Hospital, Ibaraki Prefectural Central Hospital, Ibaraki Cancer Center, and Tokyo Medical University Ibaraki Medical Center. These core hospitals were designated by the government to provide specialized cancer care. The patients were separated into three groups: Group A, patients without HFSR but with pharmacist intervention (performed by pharmacists who did not share interview information with the oncologist; the nonmutual cooperation system); Group B, patients with HFSR and the nonmutual cooperation system; and Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (the mutual cooperation system). Patient selection We included patients with stage B or C HCC according to the Barcelona Clinic Liver Cancer (BCLC) staging system. The indication criteria for sorafenib administration were as follows: Child-Pugh grade A or B; Eastern Cooperative Oncology Group (ECOG) performance status 0 or 1; alanine aminotransferase < 5-fold the upper limit of the normal range; total bilirubin level < 2.0 mg/dL; neutrophil count > 1500/µL; hemoglobin level ≥ 8.5 g/dL; platelet count > 75000/µL; and no dialysis requirement. The exclusion criteria were as follows: patients with a history of thrombosis or ischemic heart disease, pregnant women and those who could become pregnant, and patients with brain metastases. Our study protocol was approved by the ethics committee of each hospital and was performed according to the ethical guidelines of the 1975 Declaration of Helsinki. We obtained informed consent using an opt-out option on each facility’s website (see Institution website uniform resource locators). This study was registered with the University Hospital Medical Information Network (UMIN) (ID: UMIN000038701). Data collection We collected patient data from the start of sorafenib treatment, including age, sex, etiology of underlying liver disease, Child-Pugh score, history of present illness, medical history, tumor marker level [alpha-fetoprotein (AFP)], ECOG performance status, and relevant laboratory tests, including total bilirubin, albumin, and international normalized ratio (INR). Laboratory tests and tumor marker levels were obtained every 8–10 wk until permanent sorafenib discontinuation. Computed tomography evaluations Sorafenib response evaluations on computed tomography (CT) were scheduled for 8 wk after the first treatment, and subsequent evaluations were planned every 8 wk. Thoracic, abdominal, and pelvic CT scans were performed with intravenous iodinated contrast media. CT evaluations were conducted by an oncologist based on the modified Response Evaluation Criteria in Solid Tumors (mRECIST)[17]. Intervention Pharmacists with special expertise provided medical care at the pharmacist’s outpatient clinic before or after a patient was medically examined by an oncologist.
Mutual cooperation system We developed a mutual cooperation system, initiated at the start of sorafenib treatment, to manage AEs effectively. Although patients in Groups B and C received medical advice from both oncologists and pharmacists, the systems differed. Group C received 20- to 30-min consultations during which pharmacists provided accurate information about sorafenib to alleviate fear and anxiety related to AEs. After each visit, the pharmacist summarized the consultation in a report and discussed the findings with an oncologist. Group B patients received a 5- to 10-min session during which pharmacists provided the same information about sorafenib that Group C had received. These consultations were brief because a thorough medical examination by an oncologist had already been completed, and the pharmacist did not record the consultation content in the medical chart because the visit involved verbal intervention only, making a detailed report unnecessary. Sorafenib therapy and AE management Sorafenib was administered at a dose of 400 or 800 mg/d; the initial dose was determined by an oncologist. At the start of sorafenib treatment, all patients received information about AEs from a pharmacist and an oncologist. Patients who developed an AE could confer with their consulting pharmacist or prescribing oncologist. Pharmacists collected and recorded patient data, including the AE grade (according to NCI-CTCAE version 5.0), the time of AE onset, and the patient’s emotional response to the AE. Oncologists performed dose modifications throughout treatment, including reductions, interruptions, and reintroductions, according to the drug manufacturer’s package insert for sorafenib.
Criteria for permanent sorafenib discontinuation Sorafenib was permanently discontinued when any of the following events occurred: (1) Tumor progression, defined as either radiologic (by the mRECIST criteria) or clinically progressive disease (e.g., ECOG performance status decline or onset of severe symptoms unrelated to liver failure); (2) Unacceptable AEs, defined as moderate to severe AEs (grades 2–4) that persisted after dose reduction or temporary treatment interruption; or (3) Liver decompensation, defined as gastrointestinal bleeding, ascites, jaundice, or encephalopathy[3]. All patients were managed by an oncologist and received best supportive care after sorafenib was permanently discontinued. Time to treatment failure (TTF) was defined as the duration from the start of sorafenib treatment to permanent discontinuation. The proportion of days covered (PDC) was defined as the TTF divided by the time to radiologic progressive disease after sorafenib[18]. Nonadherence was defined as a PDC of ≤ 80%[19].
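In symbols (our notation, not the authors’; TTP denotes the time to radiologic progressive disease):

```latex
\mathrm{PDC} \;=\; \frac{\mathrm{TTF}}{\mathrm{TTP}}, \qquad \text{nonadherence} \iff \mathrm{PDC} \le 0.8
```

For example, a patient who permanently discontinued sorafenib at TTF = 4.0 mo with radiologic progression at TTP = 5.0 mo has PDC = 4.0/5.0 = 0.8 and is classified as nonadherent, because the 80% threshold is inclusive.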
Statistical analysis Categorical variables were assessed using the chi-square test and are presented as frequencies or percentages. Continuous variables were analyzed using the Mann-Whitney U test and are expressed as the mean ± SD. OS and TTF were evaluated using the Kaplan-Meier method. A landmark analysis[20] was performed to address guarantee-time bias, i.e., to account for HFSR that could not be observed in patients who died early. The landmark was set at the time by which the highest-grade HFSR had occurred in 50% or more of affected patients (here, 30 d). The log-rank test was used to estimate differences in survival curves. Additionally, we used Cox regression analyses to evaluate the relationship between the time to the occurrence of an event and explanatory variables, and logistic regression analyses to evaluate the relationship between nonadherence and explanatory variables. We included the following baseline characteristics as variables in our univariate analysis: age, sex, etiology of liver disease, bilirubin level, albumin level, INR, BCLC stage, ECOG performance status, macrovascular invasion, extrahepatic spread, serum AFP level, and number of previous transarterial chemoembolization (TACE) procedures for liver cancer. Variables identified as significant in the univariate analysis were included in the multivariate analysis. The correlations between OS and TTF were assessed by Spearman’s rank correlation coefficient. A P value of less than 0.05 was considered statistically significant, and the Bonferroni correction was used to control the familywise error rate for the pairwise comparisons among the three groups. The statistical methods of this study were reviewed by Kamoshida T from the Department of Gastroenterology, Hitachi General Hospital, Japan. All statistical analyses were performed using SPSS software, version 22 (IBM Corp., Armonk, NY, United States).
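The analyses above can be approximated with open-source tooling. The sketch below is illustrative only: the column names, the synthetic data, and the use of the lifelines package in place of the authors’ SPSS workflow are all assumptions, not part of the study.

```python
# Illustrative sketch of the survival analyses described above
# (synthetic data; lifelines stands in for the authors' SPSS workflow).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "os_days": rng.exponential(300, n).round() + 1,  # hypothetical follow-up
    "event": rng.integers(0, 2, n),                  # 1 = death observed
    "group": rng.choice(list("ABC"), n),
    "age": rng.integers(41, 90, n),
})

# 30-day landmark analysis: exclude patients whose follow-up ended before
# the landmark and restart the clock there (guarantee-time-bias correction).
landmark = 30
lm = df[df["os_days"] >= landmark].copy()
lm["os_days"] -= landmark

# Kaplan-Meier estimate of median OS per group.
for g, sub in lm.groupby("group"):
    km = KaplanMeierFitter().fit(sub["os_days"], sub["event"], label=g)
    print(f"Group {g} median OS (days): {km.median_survival_time_}")

# Pairwise log-rank test; with three pairwise comparisons, the
# Bonferroni-corrected significance threshold is 0.05 / 3.
a, c = lm[lm["group"] == "A"], lm[lm["group"] == "C"]
lr = logrank_test(a["os_days"], c["os_days"],
                  event_observed_A=a["event"], event_observed_B=c["event"])
print(f"A vs C log-rank P = {lr.p_value:.3f} (threshold 0.05/3)")

# Multivariable Cox regression (covariates kept from univariate screening).
cox_df = pd.get_dummies(lm[["os_days", "event", "age", "group"]],
                        columns=["group"], drop_first=True, dtype=float)
cph = CoxPHFitter().fit(cox_df, duration_col="os_days", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios with P values
```

The landmark step mirrors the paper’s 30-d choice; varying the landmark is an inexpensive sensitivity check on the guarantee-time-bias correction.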
CONCLUSION
Future prospective studies (e.g., randomized controlled trials) and improved adherence could help avoid OS underestimation.
[ "INTRODUCTION", "Study design", "Patient selection", "Data collection", "Computed tomography evaluations", "Intervention", "Mutual cooperation system", "Sorafenib therapy and AE management", "Criteria for permanent sorafenib discontinuation", "Statistical analysis", "RESULTS", "Patients", "Baseline characteristics", "AEs", "Radiological response evaluations", "Permanent sorafenib discontinuation", "OS after sorafenib therapy", "Mutual cooperation system evaluation", "Correlation between OS and TTF", "DISCUSSION", "CONCLUSION" ]
[ "Sorafenib is a multikinase inhibitor used to treat advanced hepatocellular carcinoma (HCC)[1,2]. Although sorafenib prolongs overall survival (OS) in patients with HCC, it is associated with various adverse events (AEs) that may lead to permanent discontinuation[3].\nPrevious studies found that hand-foot skin reaction (HFSR) was a prognostic marker of longer survival[4-6]. While HFSR is an important predictor of survival outcomes in clinical practice and clinical sorafenib trials, AE management could influence the efficacy of HFSR as a prognostic factor. A recent study showed that increased clinician experience with AEs reduced the potential for discontinuing sorafenib therapy, resulting in a longer OS in patients with HCC[7]. Nevertheless, it takes a long time for clinicians to develop the necessary experience for the management of AEs, and even with experience, it takes a substantial amount of time to provide a system of adequate follow-up after sorafenib initiation.\nAs sorafenib is administered orally, its successful use for HCC treatment relies on patient medication adherence. However, many studies indicate that patients with cancer are sometimes nonadherent when prescribed oral drugs[8,9], and AEs are the main cause of poor adherence[10]. Poor adherence can lead to poor outcomes, and clinicians may wrongly conclude that a drug is ineffective because the response to treatment is insufficient[11].\nIt is important for patients to actively participate in making treatment decisions and then receive treatment according to their decisions to improve adherence[12]. We introduced behavior change techniques (patient education, medication regimen management, pharmacist-led interventions)[13,14] in our facilities as interventions to promote adherence. Using a preliminary simulation, we estimated that collecting patient information takes at least 20 min. From this, we concluded that it is difficult for oncologists to manage drugs that cause various AEs (e.g., sorafenib) without assistance due to their obligations to many patients. Thus, we developed a mutual cooperation system involving collaboration between oncologists and pharmacists to ensure effective AE management. This mutual cooperation system consisted of the initial intervention by a pharmacist followed by a medical examination by an oncologist. However, this system was affected by the patient’s behavior because patients were not obliged to follow the system. Some patients received intervention from a pharmacist after a medical examination by an oncologist.\nEffective AE management that improves medication adherence has a considerable impact on survival outcomes. Previous single-center studies suggest that healthcare provider interventions improve adherence, and the onset of HFSR was a favorable prognostic factor of OS in patients with HCC[15,16]. However, little is known about the association between prognosis and medication adherence in patients with HCC, and multicenter studies on this relationship are lacking. Therefore, we aimed to compare the impact of different AE interventions on patient prognosis.", "We retrospectively evaluated patients with advanced HCC treated with sorafenib monotherapy and no subsequent chemotherapeutic agent between May 2009 and March 2018 using the medical records of the following core hospitals in Japan: Hitachi General Hospital, Ibaraki Prefectural Central Hospital, Ibaraki Cancer Center, and Tokyo Medical University Ibaraki Medical Center. 
These core hospitals were designated by the government to provide specialized cancer care. The patients were separated into three groups: Group A, patients without HFSR but with pharmacist intervention (performed by pharmacists who did not share interview information with the oncologist; hereafter, the nonmutual cooperation system); Group B, patients with HFSR under the nonmutual cooperation system; and Group C, patients with HFSR who received intervention from pharmacists who shared interview information with the oncologist (the mutual cooperation system).", "We included patients with stage B or C HCC according to the Barcelona Clinic Liver Cancer (BCLC) staging system. The indication criteria for sorafenib administration were as follows: Child-Pugh grade A or B; Eastern Cooperative Oncology Group (ECOG) performance status 0 or 1; alanine aminotransferase < 5-fold the upper limit of the normal range; total bilirubin level < 2.0 mg/dL; neutrophil count > 1500/µL; hemoglobin level ≥ 8.5 g/dL; platelet count > 75000/µL; and no dialysis requirement. The study exclusion criteria were as follows: a history of thrombosis or ischemic heart disease, pregnancy or the potential to become pregnant, and brain metastases. Our study protocol was approved by the ethics committee of each hospital and was performed according to the ethical guidelines of the 1975 Declaration of Helsinki. We obtained informed consent using an opt-out option on each facility’s website (see Institution website uniform resource locators). This study was registered with the University Hospital Medical Information Network (UMIN) (ID: UMIN000038701).", "We collected patient data from the start of sorafenib treatment, including age, sex, etiology of underlying liver disease, Child-Pugh score, history of present illness, medical history, tumor marker level [alpha-fetoprotein (AFP)], ECOG performance status, and relevant laboratory tests, including total bilirubin, albumin, and international normalized ratio (INR). Laboratory tests and tumor marker levels were obtained every 8–10 wk until permanent sorafenib discontinuation.", "Sorafenib response evaluations on computed tomography (CT) were scheduled 8 wk after the first treatment, with subsequent evaluations planned every 8 wk. Thoracic, abdominal, and pelvic CT scans were performed with intravenous iodinated contrast media. CT evaluations were conducted by an oncologist based on the modified Response Evaluation Criteria in Solid Tumors (mRECIST)[17].", "Pharmacists with special expertise provided medical care at the pharmacist’s outpatient clinic before or after a patient was medically examined by an oncologist. At each visit (every 8 wk), pharmacists conducted an AE evaluation, a residual drug count, self-management advice, and patient education, including descriptions of successful cases of AE management, pharmacist support, and advice for relieving patient anxiety and misunderstanding. AEs were evaluated according to the National Cancer Institute Common Toxicity Criteria for Adverse Events (NCI-CTCAE) version 5.0. Pharmacists recommended that all patients use heparinoids before sorafenib treatment to prevent AEs; prophylactic urea cream to prevent dermatologic AEs was also recommended after beginning sorafenib treatment. 
Pharmacist intervention began at the start of sorafenib treatment and continued until treatment ended.", "We developed a mutual cooperation system, initiated at the start of sorafenib treatment, to manage AEs effectively. Although patients in Groups B and C received medical advice from both oncologists and pharmacists, the systems differed. Group C received 20- to 30-min consultations during which pharmacists provided accurate information about sorafenib to alleviate fear and anxiety related to AEs. After each visit, the pharmacist summarized the consultation in a report and discussed the findings with an oncologist. Group B patients received a 5- to 10-min session during which pharmacists provided the same information about sorafenib that Group C had received. These consultations were brief because a thorough medical examination by an oncologist had already been completed, and the pharmacist did not record the consultation content in the medical chart because the visit involved verbal intervention only, making a detailed report unnecessary.", "Sorafenib was administered at a dose of 400 or 800 mg/d; the initial dose was determined by an oncologist. At the start of sorafenib treatment, all patients received information about AEs from a pharmacist and an oncologist. Patients who developed an AE could confer with their consulting pharmacist or prescribing oncologist. Pharmacists collected and recorded patient data, including the AE grade (according to NCI-CTCAE version 5.0), the time of AE onset, and the patient’s emotional response to the AE. Oncologists performed dose modifications throughout treatment, including reductions, interruptions, and reintroductions, according to the drug manufacturer’s package insert for sorafenib.", "Sorafenib was permanently discontinued when any of the following events occurred: (1) Tumor progression, defined as either radiologic (by the mRECIST criteria) or clinically progressive disease (e.g., ECOG performance status decline or onset of severe symptoms unrelated to liver failure); (2) Unacceptable AEs, defined as moderate to severe AEs (grades 2–4) that persisted after dose reduction or temporary treatment interruption; or (3) Liver decompensation, defined as gastrointestinal bleeding, ascites, jaundice, or encephalopathy[3]. All patients were managed by an oncologist and received best supportive care after sorafenib was permanently discontinued. Time to treatment failure (TTF) was defined as the duration from the start of sorafenib treatment to permanent discontinuation. The proportion of days covered (PDC) was defined as the TTF divided by the time to radiologic progressive disease after sorafenib[18]. Nonadherence was defined as a PDC of ≤ 80%[19].", "Categorical variables were assessed using the chi-square test and are presented as frequencies or percentages. Continuous variables were analyzed using the Mann-Whitney U test and are expressed as the mean ± SD. OS and TTF were evaluated using the Kaplan-Meier method. A landmark analysis[20] was performed to address guarantee-time bias, i.e., to account for HFSR that could not be observed in patients who died early. The landmark was set at the time by which the highest-grade HFSR had occurred in 50% or more of affected patients (here, 30 d). The log-rank test was used to estimate differences in survival curves. 
Additionally, we used Cox regression analyses to evaluate the relationship between the time to the occurrence of an event and explanatory variables, and logistic regression analyses to evaluate the relationship between nonadherence and explanatory variables.\nWe included the following baseline characteristics as variables in our univariate analysis: age, sex, etiology of liver disease, bilirubin level, albumin level, INR, BCLC stage, ECOG performance status, macrovascular invasion, extrahepatic spread, serum AFP level, and number of previous transarterial chemoembolization (TACE) procedures for liver cancer. Variables identified as significant in the univariate analysis were included in the multivariate analysis. The correlations between OS and TTF were assessed by Spearman’s rank correlation coefficient. A P value of less than 0.05 was considered statistically significant, and the Bonferroni correction was used to control the familywise error rate for the pairwise comparisons among the three groups. The statistical methods of this study were reviewed by Kamoshida T from the Department of Gastroenterology, Hitachi General Hospital, Japan. All statistical analyses were performed using SPSS software, version 22 (IBM Corp., Armonk, NY, United States).", "Patients We included 134 patients [median age, 69 years (range, 41–89 years); male, n = 99; female, n = 35] with advanced HCC who received sorafenib monotherapy without posttreatment (Group A, n = 41; Group B, n = 30; Group C, n = 63). The main etiological factor was hepatitis C virus (HCV) (77/134 patients, 57.5%), followed by hepatitis B virus (HBV) (30/134 patients, 22.4%).\nBaseline characteristics All patients had cirrhosis [Child-Pugh A, n = 117 (87.3%); Child-Pugh B, n = 17 (12.7%)]. HCC was BCLC stage B in 55 patients (41.0%) and BCLC stage C in 79 patients (59.0%). None of the patients had a second primary cancer. Portal vein thrombosis was present in 35 patients (26.1%), and extrahepatic metastases were found in 67 patients (50.0%) (Table 1).\nBaseline characteristics of patients in Groups A, B, and C\nAFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HCV: Hepatitis C virus; HBV: Hepatitis B virus; INR: International normalized ratio; TACE: Transcatheter arterial chemoembolization; ECOG: Eastern Cooperative Oncology Group.\nAEs An AE of at least grade 1 was observed in all patients after sorafenib administration, but none of the patients experienced a grade 4 AE. 
The main AEs in all groups were fatigue (30.6%), diarrhea (39.6%), hypertension (31.3%), anorexia (29.9%), and thrombocytopenia (38.8%) (Table 2). Many patients required temporary sorafenib interruption because of AEs (Group A, 19.5%; Group B, 6.7%; Group C, 41.3%). Of the patients who temporarily stopped taking sorafenib, the proportion who resumed treatment at a reduced dose was highest in Group C (Group A vs Group B, P = 0.70; Group A vs Group C, P = 0.11; Group B vs Group C, P < 0.01) (Table 3).\nPrevalence of adverse events after beginning sorafenib, according to CTCAE version 5.0, n (%)\nDose modification related to adverse events\nNA: Not available.\nRadiological response evaluations CT examinations performed every 2 mo showed that the disease control rate (DCR) gradually decreased in all groups. The response rate (RR) and DCR 8 mo after the start of sorafenib treatment were highest in Group C (RR, 9.5%; DCR, 65.1%) (Table 4).\nRadiological response according to the modified response evaluation criteria in solid tumors\nPermanent sorafenib discontinuation The main causes of permanent drug discontinuation were HCC progression and sorafenib-related AE intolerance. Permanent discontinuation due to AE intolerance occurred most frequently in Group B (Group A (17.1%) vs Group B (60.0%), P < 0.01; Group A (17.1%) vs Group C (20.6%), P = 1.00; Group B (60.0%) vs Group C (20.6%), P < 0.05) (Table 5).\nReasons for permanent sorafenib discontinuation, n (%)\nOS after sorafenib therapy The median OS was 6.2 mo in Group A, 7.7 mo in Group B, and 13.9 mo in Group C. The difference in median OS between Groups A and C was significant (P < 0.01). In multivariate analysis, Group A vs Group C (HR, 0.41; 95%CI: 0.25–0.66; P < 0.01) and BCLC-B (HR, 0.60; 95%CI: 0.41–0.89; P = 0.01) were independent predictors of survival (Figures 1 and 2, Table 6).\nKaplan-Meier estimates and prognostic factors of overall survival (comparison between each group). 
Group A, patients without hand-foot skin reaction (HFSR) but with pharmacist intervention; Group B, patients with HFSR and the nonmutual cooperation system; Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system).\nKaplan-Meier estimates and prognostic factors of overall survival (Barcelona Clinic Liver Cancer B vs Barcelona Clinic Liver Cancer C). BCLC: Barcelona Clinic Liver Cancer.\nPrognostic factors of overall survival by multivariable Cox regression analysis\nAFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HBV: Hepatitis B virus; ECOG: Eastern Cooperative Oncology Group.\nMutual cooperation system evaluation The median TTF in Group C was 5.0 mo (95%CI: 3.8–6.5), the longest of the three groups (Group C (5.0 mo) vs Group A (2.1 mo), P < 0.01; Group C (5.0 mo) vs Group B (0.5 mo), P < 0.01). In multivariable Cox regression analysis, Group A vs Group B (HR, 1.69; 95%CI: 1.04–2.75; P = 0.03) and Group A vs Group C (HR, 0.53; 95%CI: 0.35–0.81; P < 0.01) were significant predictors of TTF (Table 7). The proportions of patients with a PDC of ≤ 0.8 were 29.3% in Group A, 73.3% in Group B, and 23.8% in Group C; that is, nonadherence was significantly more frequent in Group B than in Groups A (P < 0.01) and C (P < 0.01). Adjusted logistic regression analysis showed that nonadherence (PDC ≤ 0.8) was less likely in Group A (OR, 0.11; 95%CI: 0.04–0.36; P < 0.01) and Group C (OR, 0.09; 95%CI: 0.03–0.27; P < 0.01) than in Group B (Figure 3, Table 8).\nProportion and prognostic factors of nonadherence. Group A, patients without hand-foot skin reaction (HFSR) but with pharmacist intervention; Group B, patients with HFSR and the nonmutual cooperation system; Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system). 
\nPrognostic factors of time-to-treatment failure by multivariable Cox regression analysis\nAFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HBV: Hepatitis B virus; ECOG: Eastern Cooperative Oncology Group.\nPrognostic factors of proportion of days covered by logistic regression analyses\nCorrelation between OS and TTF The Spearman’s rank correlation coefficients between OS and TTF were 0.41 in Group A (P < 0.01), 0.13 in Group B (P = 0.51), and 0.58 in Group C (P < 0.01). Thus, OS and TTF were strongly and significantly correlated in Group C, whereas no correlation was observed in Group B.", "We included 134 patients [median age, 69 years (range, 41–89 years); male, n = 99; female, n = 35] with advanced HCC who received sorafenib monotherapy without posttreatment (Group A, n = 41; Group B, n = 30; Group C, n = 63). The main etiological factor was hepatitis C virus (HCV) (77/134 patients, 57.5%), followed by hepatitis B virus (HBV) (30/134 patients, 22.4%).", "All patients had cirrhosis [Child-Pugh A, n = 117 (87.3%); Child-Pugh B, n = 17 (12.7%)]. HCC was BCLC stage B in 55 patients (41.0%) and BCLC stage C in 79 patients (59.0%). None of the patients had a second primary cancer. Portal vein thrombosis was present in 35 patients (26.1%), and extrahepatic metastases were found in 67 patients (50.0%) (Table 1).\nBaseline characteristics of patients in Groups A, B, and C\nAFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HCV: Hepatitis C virus; HBV: Hepatitis B virus; INR: International normalized ratio; TACE: Transcatheter arterial chemoembolization; ECOG: Eastern Cooperative Oncology Group.", "An AE of at least grade 1 was observed in all patients after sorafenib administration. 
However, none of the patients experienced any grade 4 AEs. The main AEs in all groups were fatigue (30.6%), diarrhea (39.6%), hypertension (31.3%), anorexia (29.9%), and thrombocytopenia (38.8%) (Table 2). Many patients required temporary sorafenib interruption because of AEs (Group A, 19.5%; Group B, 6.7%; Group C, 41.3%). Of the patients who temporarily stopped taking sorafenib, the proportion who resumed treatment at a reduced dose was highest in Group C (Group A vs Group B, P = 0.70; Group A vs Group C, P = 0.11; Group B vs Group C, P < 0.01) (Table 3).\nPrevalence of adverse events after beginning sorafenib, according to CTCAE version 5.0, n (%)\nDose modification related to adverse events\nNA: Not available.", "CT examinations performed every 2 mo showed that the disease control rate (DCR) gradually decreased in all groups. The response rate (RR) and DCR 8 mo after the start of sorafenib treatment were highest in Group C (RR, 9.5%; DCR, 65.1%) (Table 4).\nRadiological response according to the modified response evaluation criteria in solid tumors", "The main causes of permanent drug discontinuation were HCC progression and sorafenib-related AE intolerance. Permanent discontinuation due to AE intolerance occurred most frequently in Group B (Group A (17.1%) vs Group B (60.0%), P < 0.01; Group A (17.1%) vs Group C (20.6%), P = 1.00; Group B (60.0%) vs Group C (20.6%), P < 0.05) (Table 5).\nReasons for permanent sorafenib discontinuation, n (%)", "The median OS was 6.2 mo in Group A, 7.7 mo in Group B, and 13.9 mo in Group C. The difference in median OS between Groups A and C was significant (P < 0.01). In multivariate analysis, Group A vs Group C (HR, 0.41; 95%CI: 0.25–0.66; P < 0.01) and BCLC-B (HR, 0.60; 95%CI: 0.41–0.89; P = 0.01) were independent predictors of survival (Figures 1 and 2, Table 6).\nKaplan-Meier estimates and prognostic factors of overall survival (comparison between each group). Group A, patients without hand-foot skin reaction (HFSR) but with pharmacist intervention; Group B, patients with HFSR and the nonmutual cooperation system; Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system).\nKaplan-Meier estimates and prognostic factors of overall survival (Barcelona Clinic Liver Cancer B vs Barcelona Clinic Liver Cancer C). BCLC: Barcelona Clinic Liver Cancer.\nPrognostic factors of overall survival by multivariable Cox regression analysis\nAFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HBV: Hepatitis B virus; ECOG: Eastern Cooperative Oncology Group.", "The median TTF in Group C was 5.0 mo (95%CI: 3.8–6.5), the longest of the three groups (Group C (5.0 mo) vs Group A (2.1 mo), P < 0.01; Group C (5.0 mo) vs Group B (0.5 mo), P < 0.01). In multivariable Cox regression analysis, Group A vs Group B (HR, 1.69; 95%CI: 1.04–2.75; P = 0.03) and Group A vs Group C (HR, 0.53; 95%CI: 0.35–0.81; P < 0.01) were significant predictors of TTF (Table 7). The proportions of patients with a PDC of ≤ 0.8 were 29.3% in Group A, 73.3% in Group B, and 23.8% in Group C; that is, nonadherence was significantly more frequent in Group B than in Groups A (P < 0.01) and C (P < 0.01). Adjusted logistic regression analysis showed that nonadherence (PDC ≤ 0.8) was less likely in Group A (OR, 0.11; 95%CI: 0.04–0.36; P < 0.01) and Group C (OR, 0.09; 95%CI: 0.03–0.27; P < 0.01) than in Group B (Figure 3, Table 8).\nProportion and prognostic factors of nonadherence. 
Group A, patients without hand-foot skin reaction (HFSR) but with pharmacist intervention; Group B, patients with HFSR and the nonmutual cooperation system; Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system).\nPrognostic factors of time-to-treatment failure by multivariable Cox regression analysis\nAFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HBV: Hepatitis B virus; ECOG: Eastern Cooperative Oncology Group.\nPrognostic factors of proportion of days covered by logistic regression analyses", "The Spearman’s rank correlation coefficients between OS and TTF were 0.41 in Group A (P < 0.01), 0.13 in Group B (P = 0.51), and 0.58 in Group C (P < 0.01). Thus, OS and TTF were strongly and significantly correlated in Group C, whereas no correlation was observed in Group B.", "We investigated the effect of cooperation between oncologists and pharmacists on the prognosis of patients with advanced HCC treated with sorafenib monotherapy. In the present study, the occurrence of HFSR was associated with improved patient prognosis, and this improvement was significantly enhanced by appropriate medication adherence. Close cooperation between oncologists and pharmacists increased adherence, and a strong correlation was observed between OS and TTF.\nSeveral studies have indicated that the emergence of HFSR is associated with prolonged survival in patients with advanced HCC treated with sorafenib[5,6,21]. However, these studies did not evaluate the correlation between medication adherence and survival after the appearance of an AE, including HFSR. Targeted therapies, including sorafenib, can result in unexpected AEs that do not occur after the administration of earlier chemotherapy drugs[22]. Oncologists must recognize these novel AEs at an early stage and provide appropriate treatment to the extent possible. However, previous studies revealed that optimal AE management requires considerable experience and time[7,23]. Management of sorafenib-related AEs includes data collection for AE grading, patient education, and determination of the appropriate sorafenib dose by an oncologist[15].\nThe use of sorafenib is associated with various AEs, including gastrointestinal, constitutional, or dermatologic events[1,2], and their management may require dose reduction or temporary discontinuation to avoid sorafenib treatment cessation. For example, an appropriate sorafenib dose reduction yielded a decreased rate of permanent discontinuation due to AEs[7]. However, in many patients, these dose changes do not mitigate intolerable or severe AEs, and permanent sorafenib discontinuation is required[24].\nIn our study, only the mutual cooperation system promoted dose reduction after an AE and extended the TTF. In the mutual cooperation system, pharmacists were responsible for collecting data on AE grades, educating patients, and managing any leftover medicine. Furthermore, they documented their findings in a report for the oncologist. On the other hand, in the nonmutual cooperation system, an oncologist examined the patient before pharmacists were involved in patient management. Oncologists were required to evaluate AE grading data, educate patients, and determine the appropriate sorafenib dose. Only after the medical examination did a pharmacist provide additional patient management, and their findings were not reported to the oncologist. 
In the nonmutual cooperation system, the oncologist had to obtain a substantial amount of information to maintain or revise the sorafenib treatment regimen within 5 to 10 min.\nGiven these differences between the systems, our results suggest that the mutual cooperation system led to appropriate dose reductions, as reflected by the extended TTF. However, we do not recommend starting sorafenib at half the standard dose (800 mg/d); dose reductions were guided by the results obtained through the mutual cooperation system. Additionally, we demonstrated that the best outcomes occur when optimal dosing and good medication adherence are achieved early in the course of sorafenib treatment. In our study, only the mutual cooperation system ensured good adherence in patients who experienced HFSR secondary to sorafenib treatment and prevented unnecessary permanent medication discontinuation.\nA previous study showed that, despite dramatic improvements in adjuvant hormonal therapy for breast cancer, nonadherence and early discontinuation driven by treatment-related toxicity still undermine effective treatment, and appropriate interventions are needed to improve breast cancer survival[25]. Another cohort study indicated that long-term tamoxifen therapy for breast cancer reduced the risk of death, while the risk of death increased with a low adherence rate[26]. In our study, the medical team continued sharing patient information after the patient started taking sorafenib; therefore, the mutual cooperation system enabled the medical team to prevent HFSR or promptly provide patient management, as appropriate. In contrast, the nonmutual cooperation system did not allow patient information to be shared at an early stage; thus, the medical team was not able to take measures to prevent HFSR or plan palliative care in a timely manner. The differences in the effectiveness of HFSR prevention and palliation between the two systems highlight the importance of the various hurdles that can affect medication adherence.\nHurdles to medication adherence are complex and include patient-, clinician-, and healthcare system-related factors. Patient-related factors, such as limited involvement in the treatment decision-making process, poor health literacy, doubts about medication effectiveness, and previous adverse effects, influence adherence. Clinician-related factors include failure to recognize nonadherence, poor patient communication, and inadequate multidisciplinary communication between oncologists and pharmacists. Healthcare system-related factors include relationships with clinicians and clinicians’ satisfaction with patient care[27,28]. Thus, multiple factors may become hurdles to improving adherence. The mutual cooperation system coordinates interactions among patients, clinicians, and the health system, thereby minimizing barriers to adherence.\nSurprisingly, this study revealed that the prognostic value of HFSR was enhanced by appropriate medication adherence. On the other hand, BCLC-B HCC was an independent predictor of improved OS[29]. BCLC stage did not affect the difference in OS between Groups A and C, as there was no significant between-group difference in the baseline stage distribution.\nSeveral considerations support the validity of our results. First, variables such as age, sex, etiology, ECOG performance status, liver function, comorbidities, and TACE procedure count were not significantly different between the groups. 
Second, we verified that all patients had received sorafenib monotherapy and no subsequent chemotherapy; therefore, neither our patients’ prognoses nor the prolonged OS we observed was affected by other chemotherapeutic agents.\nNevertheless, our study has a few limitations. First, our study design was based on the mutual cooperation system. After a patient was first checked by a specialized pharmacist, the oncologist determined whether to prescribe sorafenib based on the pharmacist’s report. However, patients were not required to participate in the mutual cooperation system, and participation was at the patient’s discretion. After the patients underwent a medical examination by an oncologist, a specialized pharmacist could also provide patient guidance about sorafenib. While patients who were unwilling to participate in the mutual cooperation system may have been included in Group A or B, it is unknown whether this enrollment could have affected the adherence rate. Second, OS and TTF were longer in the mutual cooperation system group (Group C) than in Groups A and B, and it is difficult to determine whether these results were caused by improved adherence or by the mechanism underlying the prognostic efficacy of HFSR.", "The mutual cooperation system increased treatment duration and improved prognosis in patients with HFSR secondary to sorafenib treatment. Additionally, the mutual cooperation system allowed us to promptly initiate sorafenib treatment. Our study clearly demonstrates the clinical and research benefits of this system. The mutual cooperation system for sorafenib treatment management described in this study could be applied to the management of patients treated with other multikinase inhibitors to extend OS. The increased OS resulting from the mutual cooperation system could have a substantial impact on the design of clinical studies in which sorafenib is used as the control drug. Additionally, nonadherence may have adversely affected OS in previous studies, leading researchers to underestimate drug efficacy. We propose that future clinical investigations designed to improve medication adherence could eliminate OS underestimation." ]
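As a minimal, self-contained illustration of the per-group correlation analysis reported above (the numbers are made up, the column names are assumptions, and scipy replaces the SPSS procedure):

```python
# Spearman's rank correlation between OS and TTF within each group
# (toy data for illustration only; not the study's dataset).
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "group":      ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
    "os_months":  [5.8, 6.2, 6.9, 8.1, 7.0, 7.7, 8.2, 9.0, 11.5, 13.9, 15.0, 18.2],
    "ttf_months": [1.8, 2.1, 2.6, 3.0, 0.4, 0.5, 0.9, 0.3, 4.2, 5.0, 5.6, 6.8],
})
for g, sub in df.groupby("group"):
    rho, p = spearmanr(sub["os_months"], sub["ttf_months"])
    print(f"Group {g}: rho = {rho:.2f}, P = {p:.3f}")
```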
[ "INTRODUCTION", "MATERIALS AND METHODS", "Study design", "Patient selection", "Data collection", "Computed tomography evaluations", "Intervention", "Mutual cooperation system", "Sorafenib therapy and AE management", "Criteria for permanent sorafenib discontinuation", "Statistical analysis", "RESULTS", "Patients", "Baseline characteristics", "AEs", "Radiological response evaluations", "Permanent sorafenib discontinuation", "OS after sorafenib therapy", "Mutual cooperation system evaluation", "Correlation between OS and TTF", "DISCUSSION", "CONCLUSION" ]
[ "Sorafenib is a multikinase inhibitor used to treat advanced hepatocellular carcinoma (HCC)[1,2]. Although sorafenib prolongs overall survival (OS) in patients with HCC, it is associated with various adverse events (AEs) that may lead to permanent discontinuation[3].\nPrevious studies found that hand-foot skin reaction (HFSR) was a prognostic marker of longer survival[4-6]. While HFSR is an important predictor of survival outcomes in clinical practice and clinical sorafenib trials, AE management could influence the efficacy of HFSR as a prognostic factor. A recent study showed that increased clinician experience with AEs reduced the potential for discontinuing sorafenib therapy, resulting in a longer OS in patients with HCC[7]. Nevertheless, it takes a long time for clinicians to develop the necessary experience for the management of AEs, and even with experience, it takes a substantial amount of time to provide a system of adequate follow-up after sorafenib initiation.\nAs sorafenib is administered orally, its successful use for HCC treatment relies on patient medication adherence. However, many studies indicate that patients with cancer are sometimes nonadherent when prescribed oral drugs[8,9], and AEs are the main cause of poor adherence[10]. Poor adherence can lead to poor outcomes, and clinicians may wrongly conclude that a drug is ineffective because the response to treatment is insufficient[11].\nIt is important for patients to actively participate in making treatment decisions and then receive treatment according to their decisions to improve adherence[12]. We introduced behavior change techniques (patient education, medication regimen management, pharmacist-led interventions)[13,14] in our facilities as interventions to promote adherence. Using a preliminary simulation, we estimated that collecting patient information takes at least 20 min. From this, we concluded that it is difficult for oncologists to manage drugs that cause various AEs (e.g., sorafenib) without assistance due to their obligations to many patients. Thus, we developed a mutual cooperation system involving collaboration between oncologists and pharmacists to ensure effective AE management. This mutual cooperation system consisted of the initial intervention by a pharmacist followed by a medical examination by an oncologist. However, this system was affected by the patient’s behavior because patients were not obliged to follow the system. Some patients received intervention from a pharmacist after a medical examination by an oncologist.\nEffective AE management that improves medication adherence has a considerable impact on survival outcomes. Previous single-center studies suggest that healthcare provider interventions improve adherence, and the onset of HFSR was a favorable prognostic factor of OS in patients with HCC[15,16]. However, little is known about the association between prognosis and medication adherence in patients with HCC, and multicenter studies on this relationship are lacking. Therefore, we aimed to compare the impact of different AE interventions on patient prognosis.", "Study design We retrospectively evaluated patients with advanced HCC treated with sorafenib monotherapy and no subsequent chemotherapeutic agent between May 2009 and March 2018 using the medical records of the following core hospitals in Japan: Hitachi General Hospital, Ibaraki Prefectural Central Hospital, Ibaraki Cancer Center, and Tokyo Medical University Ibaraki Medical Center. 
These hospitals are core hospitals that were designated by the government to provide specialized cancer medical care. The patients were separated into three groups: Group A, patients without HFSR but with pharmacist intervention (this intervention was performed by pharmacists who did not share interview information with the oncologist; it is called nonmutual cooperation system); Group B, patients with HFSR and nonmutual cooperation system; and Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system).\nWe retrospectively evaluated patients with advanced HCC treated with sorafenib monotherapy and no subsequent chemotherapeutic agent between May 2009 and March 2018 using the medical records of the following core hospitals in Japan: Hitachi General Hospital, Ibaraki Prefectural Central Hospital, Ibaraki Cancer Center, and Tokyo Medical University Ibaraki Medical Center. These hospitals are core hospitals that were designated by the government to provide specialized cancer medical care. The patients were separated into three groups: Group A, patients without HFSR but with pharmacist intervention (this intervention was performed by pharmacists who did not share interview information with the oncologist; it is called nonmutual cooperation system); Group B, patients with HFSR and nonmutual cooperation system; and Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system).\nPatient selection We included patients with stage B or C HCC according to the Barcelona Clinic Liver Cancer (BCLC) staging system. The indication criteria for sorafenib administration were as follows: Child-Pugh grade A or B; Eastern Cooperative Oncology Group (ECOG) performance status 0 or 1; alanine aminotransferase < 5-fold the upper limit of the normal range; total bilirubin level < 2.0 mg/dL; neutrophil count > 1500/µL; hemoglobin level ≥ 8.5 g/dL; platelet count > 75000/µL; and no dialysis requirement. The study exclusion criteria were as follows: Patients with a history of thrombosis or ischemic heart disease, pregnant women and those who could become pregnant, and patients with brain metastases. Our study protocol was approved by the ethics committee of each hospital and was performed according to the ethical guidelines of the 1975 Declaration of Helsinki. We obtained informed consent using an opt-out option on each facility’s website (see Institution website uniform resource locators). This study was registered with the University Hospital Medical Information Network (UMIN) (ID: UMIN000038701).\nWe included patients with stage B or C HCC according to the Barcelona Clinic Liver Cancer (BCLC) staging system. The indication criteria for sorafenib administration were as follows: Child-Pugh grade A or B; Eastern Cooperative Oncology Group (ECOG) performance status 0 or 1; alanine aminotransferase < 5-fold the upper limit of the normal range; total bilirubin level < 2.0 mg/dL; neutrophil count > 1500/µL; hemoglobin level ≥ 8.5 g/dL; platelet count > 75000/µL; and no dialysis requirement. The study exclusion criteria were as follows: Patients with a history of thrombosis or ischemic heart disease, pregnant women and those who could become pregnant, and patients with brain metastases. Our study protocol was approved by the ethics committee of each hospital and was performed according to the ethical guidelines of the 1975 Declaration of Helsinki. 
We obtained informed consent using an opt-out option on each facility’s website (see Institution website uniform resource locators). This study was registered with the University Hospital Medical Information Network (UMIN) (ID: UMIN000038701).\nData collection We collected patient data from the start of sorafenib, including age, sex, etiology of underlying liver disease, Child-Pugh score, history of present illness, medical history, tumor marker level [alpha-fetoprotein (AFP)], ECOG performance status, and relevant laboratory tests, including total bilirubin, albumin, and international normalized ratio (INR). Laboratory tests and tumor marker levels were obtained every 8–10 wk until permanent sorafenib discontinuation.\nWe collected patient data from the start of sorafenib, including age, sex, etiology of underlying liver disease, Child-Pugh score, history of present illness, medical history, tumor marker level [alpha-fetoprotein (AFP)], ECOG performance status, and relevant laboratory tests, including total bilirubin, albumin, and international normalized ratio (INR). Laboratory tests and tumor marker levels were obtained every 8–10 wk until permanent sorafenib discontinuation.\nComputed tomography evaluations Sorafenib response evaluations on computed tomography (CT) were scheduled for 8 wk after the first treatment, and subsequent evaluations were planned every 8 wk. Thoracic, abdominal, and pelvic CT scans were performed with intravenous iodinated contrast media. CT evaluations were conducted based on the modified Response Evaluation Criteria in Solid Tumors (mRECIST)[17] by an oncologist.\nSorafenib response evaluations on computed tomography (CT) were scheduled for 8 wk after the first treatment, and subsequent evaluations were planned every 8 wk. Thoracic, abdominal, and pelvic CT scans were performed with intravenous iodinated contrast media. CT evaluations were conducted based on the modified Response Evaluation Criteria in Solid Tumors (mRECIST)[17] by an oncologist.\nIntervention Pharmacists with special expertise provided medical care at the pharmacist’s outpatient clinic before or after a patient was medically examined by an oncologist. Every 8 wk at each visit, an AE evaluation, a residual drug count, self-management advice, and patient education, including descriptions of successful cases of AE management, pharmacist support, and advice for relieving patient anxiety and misunderstanding, were conducted. AEs were evaluated according to the National Cancer Institute Common Toxicity Criteria for Adverse Events (NCI-CTCAE) version 5.0. Pharmacists recommended that all patients use heparinoids before sorafenib treatment to prevent AEs. Additionally, the prophylactic use of urea cream to prevent dermatologic AEs was recommended after beginning sorafenib treatment. Pharmacist intervention began at the start of sorafenib treatment and continued until treatment ended.\nPharmacists with special expertise provided medical care at the pharmacist’s outpatient clinic before or after a patient was medically examined by an oncologist. Every 8 wk at each visit, an AE evaluation, a residual drug count, self-management advice, and patient education, including descriptions of successful cases of AE management, pharmacist support, and advice for relieving patient anxiety and misunderstanding, were conducted. AEs were evaluated according to the National Cancer Institute Common Toxicity Criteria for Adverse Events (NCI-CTCAE) version 5.0. 
Mutual cooperation system
We developed a mutual cooperation system, initiated at the start of sorafenib treatment, to manage AEs effectively. Although patients in Groups B and C received medical advice from both oncologists and pharmacists, the systems differed. Group C patients received 20- to 30-min consultations during which pharmacists provided accurate information about sorafenib to alleviate fear and anxiety related to AEs. After each visit, the pharmacist summarized the consultation in a report and discussed the findings with an oncologist. Group B patients received a 5- to 10-min session during which pharmacists provided the same information about sorafenib that Group C received. These consultations were brief because a thorough medical examination by an oncologist had already been completed, and the pharmacist did not record the consultation content in the medical chart because the visit involved verbal intervention only and a detailed report was considered unnecessary.

Sorafenib therapy and AE management
Sorafenib was administered at a dose of 400 or 800 mg/d; the initial dose was determined by an oncologist. At the start of treatment, all patients received information about AEs from both a pharmacist and an oncologist. Patients who developed an AE could confer with their consulting pharmacist or prescribing oncologist. Pharmacists collected and recorded patient data, including the AE grade (NCI-CTCAE version 5.0), time of AE onset, and the patient's emotional response to the AE. Oncologists performed dose modifications throughout treatment, including reductions, interruptions, and reintroductions, according to the manufacturer's package insert for sorafenib.
Criteria for permanent sorafenib discontinuation
Sorafenib was permanently discontinued when any of the following occurred: (1) tumor progression, defined as either radiologic progression (by the mRECIST criteria) or clinically progressive disease (e.g., ECOG performance status decline or onset of severe symptoms unrelated to liver failure); (2) unacceptable AEs, defined as moderate to severe AEs (grades 2–4) that persisted after dose reduction or temporary treatment interruption; or (3) liver decompensation, defined as gastrointestinal bleeding, ascites, jaundice, or encephalopathy[3]. After permanent discontinuation, all patients were managed by an oncologist and received best supportive care. Time to treatment failure (TTF) was defined as the duration from the start of sorafenib treatment to permanent discontinuation. The proportion of days covered (PDC) was defined as the TTF divided by the time to radiologic progressive disease after sorafenib initiation[18]. Nonadherence was defined as a PDC of ≤ 80%[19].
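Because TTF, PDC, and nonadherence are defined arithmetically, they are straightforward to express in code. The sketch below simply restates the definitions above; the function and variable names are ours, and times are assumed to be measured in days.

```python
# Minimal sketch of the adherence definitions above. Variable names are
# illustrative; the formulas follow the text: PDC = TTF / time to
# radiologic progression, and nonadherence means PDC <= 0.8.

def proportion_of_days_covered(ttf_days: float, progression_days: float) -> float:
    """PDC = time to treatment failure / time to radiologic progression."""
    return ttf_days / progression_days

def is_nonadherent(pdc: float, threshold: float = 0.8) -> bool:
    """Nonadherence was defined as a PDC of <= 80%."""
    return pdc <= threshold

# Example: sorafenib stopped permanently at day 120, radiologic
# progression at day 200 -> PDC = 0.6, i.e., nonadherent.
pdc = proportion_of_days_covered(120, 200)
print(round(pdc, 2), is_nonadherent(pdc))  # 0.6 True
```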
Statistical analysis
Categorical variables were assessed using the chi-square test and are presented as frequencies or percentages. Continuous variables were analyzed using the Mann-Whitney U test and are expressed as the mean ± SD. OS and TTF were evaluated using the Kaplan-Meier method. A landmark analysis[20] was performed to account for guarantee-time bias, i.e., the fact that HFSR could only be observed in patients who survived long enough for it to occur. The landmark was set at the time by which the highest-grade HFSR had occurred in 50% or more of the patients (here, 30 d). The log-rank test was used to compare survival curves. Cox regression analyses were used to evaluate the relationship between the time to an event and explanatory variables, and logistic regression analyses were used to evaluate the relationship between nonadherence and explanatory variables.

We included the following baseline characteristics as variables in the univariate analysis: age, sex, etiology of liver disease, bilirubin level, albumin level, INR, BCLC stage, ECOG performance status, macrovascular invasion, extrahepatic spread, serum AFP level, and number of previous transarterial chemoembolization (TACE) procedures for liver cancer. Variables identified as significant in the univariate analysis were included in the multivariate analysis. Correlations between OS and TTF were assessed with Spearman's rank correlation coefficient. A P value less than 0.05 was considered statistically significant, and the Bonferroni correction was used to control the familywise error rate in comparisons among the three groups. The statistical methods of this study were reviewed by Kamoshida T of the Department of Gastroenterology, Hitachi General Hospital, Japan. All statistical analyses were performed using SPSS software, version 22 (IBM Corp., Armonk, NY, United States).
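For readers who want to reproduce this kind of analysis, the sketch below shows how the landmark handling, Kaplan-Meier curves, log-rank test, and Cox regression could be wired together in Python with pandas and lifelines. The input file, DataFrame layout, column names, and month-to-day conversion are our assumptions; the study itself used SPSS.

```python
# Minimal sketch of the survival workflow described above, using the
# lifelines library. The CSV file and its columns (os_months, death,
# group, hfsr_day, bclc_stage) are illustrative assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

LANDMARK_DAYS = 30  # day by which the highest-grade HFSR had occurred
                    # in >= 50% of patients

df = pd.read_csv("sorafenib_cohort.csv")  # placeholder file name

# Landmark analysis: keep only patients still at risk at day 30 (using OS
# as follow-up time), then fix HFSR status at the landmark to avoid
# guarantee-time bias.
df = df[df["os_months"] * 30.4 >= LANDMARK_DAYS].copy()   # approx. month length
df["hfsr_by_landmark"] = df["hfsr_day"] <= LANDMARK_DAYS  # NaN compares as False

# Kaplan-Meier estimates of OS per group
kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["os_months"], sub["death"], label=f"Group {name}")
    print(f"Group {name}: median OS =", kmf.median_survival_time_)

# Log-rank test for one pairwise comparison (Groups A vs C), with a
# Bonferroni adjustment for the three pairwise comparisons.
a, c = df[df["group"] == "A"], df[df["group"] == "C"]
res = logrank_test(a["os_months"], c["os_months"], a["death"], c["death"])
p_adj = min(res.p_value * 3, 1.0)
print("A vs C: adjusted log-rank P =", p_adj)

# Cox regression on dummy-coded covariates
cox_df = pd.get_dummies(df[["os_months", "death", "group", "bclc_stage"]],
                        columns=["group", "bclc_stage"],
                        drop_first=True, dtype=float)
cph = CoxPHFitter().fit(cox_df, duration_col="os_months", event_col="death")
cph.print_summary()
```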
RESULTS

Patients
We included 134 patients [median age, 69 years (range, 41–89 years); 99 men and 35 women] with advanced HCC who received sorafenib monotherapy without subsequent treatment (Group A, n = 41; Group B, n = 30; Group C, n = 63). The main etiologic factor was hepatitis C virus (HCV) infection (77/134 patients, 57.5%), followed by hepatitis B virus (HBV) infection (30/134 patients, 22.4%).

Baseline characteristics
All patients had cirrhosis [Child-Pugh A, n = 117 (87.3%); Child-Pugh B, n = 17 (12.7%)]. HCC was BCLC stage B in 55 patients (41.0%) and BCLC stage C in 79 patients (59.0%). No patient had a second primary cancer. Portal vein thrombosis was present in 35 patients (26.1%), and extrahepatic metastases were found in 67 patients (50.0%) (Table 1).

Table 1. Baseline characteristics of patients in Groups A, B, and C. AFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HCV: Hepatitis C virus; HBV: Hepatitis B virus; INR: International normalized ratio; TACE: Transcatheter arterial chemoembolization; ECOG: Eastern Cooperative Oncology Group.
AEs
An AE of at least grade 1 was observed in all patients after sorafenib administration; no patient experienced a grade 4 AE. The main AEs across all groups were fatigue (30.6%), diarrhea (39.6%), hypertension (31.3%), anorexia (29.9%), and thrombocytopenia (38.8%) (Table 2). Many patients required temporary sorafenib interruption because of AEs (Group A, 19.5%; Group B, 6.7%; Group C, 41.3%). Among patients who temporarily stopped sorafenib, the rate of resuming treatment at a reduced dose was highest in Group C (Group A vs Group B, P = 0.70; Group A vs Group C, P = 0.11; Group B vs Group C, P < 0.01) (Table 3).

Table 2. Prevalence of adverse events after beginning sorafenib, according to NCI-CTCAE version 5.0, n (%).
Table 3. Dose modification related to adverse events. NA: Not available.

Radiological response evaluations
CT examinations performed every 2 mo showed that the disease control rate (DCR) gradually decreased in all groups. At 8 mo after the start of sorafenib treatment, the response rate (RR) and DCR were highest in Group C (RR, 9.5%; DCR, 65.1%) (Table 4).

Table 4. Radiological response according to the modified Response Evaluation Criteria in Solid Tumors.

Permanent sorafenib discontinuation
The main causes of permanent drug discontinuation were HCC progression and intolerance of sorafenib-related AEs. Permanent discontinuation due to AE intolerance occurred most frequently in Group B [Group A (17.1%) vs Group B (60.0%), P < 0.01; Group A (17.1%) vs Group C (20.6%), P = 1.00; Group B (60.0%) vs Group C (20.6%), P < 0.05] (Table 5).

Table 5. Reasons for permanent sorafenib discontinuation, n (%).
OS after sorafenib therapy
The median OS was 6.2 mo in Group A, 7.7 mo in Group B, and 13.9 mo in Group C. The difference in median OS between Groups A and C was significant (P < 0.01). In the multivariate analysis, Group A vs Group C (HR, 0.41; 95%CI: 0.25–0.66; P < 0.01) and BCLC stage B (HR, 0.60; 95%CI: 0.41–0.89; P = 0.01) were independent predictors of survival (Figures 1 and 2, Table 6).

Figure 1. Kaplan-Meier estimates and prognostic factors of overall survival (comparison between the groups). Group A, patients without hand-foot skin reaction (HFSR) but with pharmacist intervention; Group B, patients with HFSR and the nonmutual cooperation system; Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system).
Figure 2. Kaplan-Meier estimates and prognostic factors of overall survival (BCLC stage B vs BCLC stage C). BCLC: Barcelona Clinic Liver Cancer.
Table 6. Prognostic factors of overall survival by multivariable Cox regression analysis. AFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HBV: Hepatitis B virus; ECOG: Eastern Cooperative Oncology Group.

Mutual cooperation system evaluation
The median TTF was 5.0 mo (95%CI: 3.8–6.5) in Group C, the longest of the three groups [Group C (5.0 mo) vs Group A (2.1 mo), P < 0.01; Group C (5.0 mo) vs Group B (0.5 mo), P < 0.01]. In the multivariable Cox regression analysis, Group A vs Group B (HR, 1.69; 95%CI: 1.04–2.75; P = 0.03) and Group A vs Group C (HR, 0.53; 95%CI: 0.35–0.81; P < 0.01) were significant predictors of TTF (Table 7). The proportion of patients with a PDC of ≤ 0.8 was 29.3% in Group A, 73.3% in Group B, and 23.8% in Group C; nonadherence was thus significantly more frequent in Group B than in Groups A (P < 0.01) and C (P < 0.01). Adjusted logistic regression analysis showed that nonadherence (PDC ≤ 0.8) was associated with Group B vs Group A (OR, 0.11; 95%CI: 0.04–0.36; P < 0.01) and Group B vs Group C (OR, 0.09; 95%CI: 0.03–0.27; P < 0.01) (Figure 3, Table 8).

Figure 3. Proportion and prognostic factors of nonadherence. Group A, patients without hand-foot skin reaction (HFSR) but with pharmacist intervention; Group B, patients with HFSR and the nonmutual cooperation system; Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system).
Table 7. Prognostic factors of time to treatment failure by multivariable Cox regression analysis. AFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HBV: Hepatitis B virus; ECOG: Eastern Cooperative Oncology Group.
Table 8. Prognostic factors of the proportion of days covered by logistic regression analyses.
Correlation between OS and TTF
Spearman's rank correlation coefficients between OS and TTF were 0.41 in Group A (P < 0.01), 0.13 in Group B (P = 0.51), and 0.58 in Group C (P < 0.01). The correlation between OS and TTF was strongest and highly significant in Group C, whereas no correlation was observed in Group B.
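As a small illustration, the per-group Spearman coefficients reported above can be computed with SciPy; the paired arrays below are placeholders, not data from the study.

```python
# Minimal sketch: Spearman's rank correlation between OS and TTF for one
# group. os_months and ttf_months are illustrative paired arrays.
from scipy.stats import spearmanr

os_months  = [3.1, 6.2, 9.8, 13.9, 20.4]   # placeholder values
ttf_months = [1.0, 2.5, 4.2,  5.0,  7.3]

rho, p = spearmanr(os_months, ttf_months)
print(f"Spearman rho = {rho:.2f}, P = {p:.3f}")
```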
DISCUSSION

We investigated the effect of cooperation between oncologists and pharmacists on the prognosis of patients with advanced HCC treated with sorafenib monotherapy. In the present study, the occurrence of HFSR was associated with improved prognosis, and this improvement was significantly enhanced by appropriate medication adherence. Close cooperation between oncologists and pharmacists increased adherence, and a strong correlation was observed between OS and TTF.

Several studies have indicated that the emergence of HFSR is associated with prolonged survival in patients with advanced HCC treated with sorafenib[5,6,21]. However, those studies did not evaluate the correlation between medication adherence and survival after the appearance of an AE such as HFSR. Targeted therapies, including sorafenib, can cause unexpected AEs that did not occur with earlier chemotherapy drugs[22]. Oncologists must recognize these novel AEs at an early stage and provide appropriate treatment to the extent possible. However, previous studies have shown that optimal AE management requires considerable experience and time[7,23]. Management of sorafenib-related AEs includes data collection for AE grading, patient education, and determination of the appropriate sorafenib dose by an oncologist[15].

Sorafenib is associated with various gastrointestinal, constitutional, and dermatologic AEs[1,2], and their management may require dose reduction or temporary discontinuation to avoid permanent treatment cessation. For example, appropriate sorafenib dose reduction has been shown to decrease the rate of permanent discontinuation due to AEs[7]. In many patients, however, these dose changes do not mitigate intolerable or severe AEs, and permanent sorafenib discontinuation is required[24].

In our study, only the mutual cooperation system promoted dose reduction after an AE and extended the TTF. In the mutual cooperation system, pharmacists were responsible for collecting AE grading data, educating patients, and managing leftover medicine, and they documented their findings in a report for the oncologist. In the nonmutual cooperation system, by contrast, an oncologist examined the patient before pharmacists were involved in patient management, and the oncologist alone had to evaluate AE grading data, educate the patient, and determine the appropriate sorafenib dose. Only after the medical examination did a pharmacist provide additional patient management, and those findings were not reported to the oncologist.
Under the nonmutual cooperation system, the oncologist therefore had to gather a substantial amount of information within 5 to 10 min to maintain or revise the sorafenib regimen.

Given these differences between the systems, our results suggest that the mutual cooperation system led to appropriate dose reductions, as reflected by the extended TTF. However, we do not recommend starting sorafenib at half the standard dose (800 mg/d); dose reductions were guided by the findings generated by the mutual cooperation system. Additionally, we demonstrated that the best outcomes occur when optimal dosing and good medication adherence are achieved early in the course of sorafenib treatment. In our study, only the mutual cooperation system ensured good adherence in patients who experienced HFSR secondary to sorafenib treatment and prevented unnecessary permanent discontinuation of the medication.

A previous study showed that, despite dramatic improvements in adjuvant hormonal therapy for breast cancer, nonadherence and early discontinuation are driven by treatment-related toxicity, and appropriate interventions are needed to improve breast cancer survival[25]. Another cohort study indicated that long-term tamoxifen therapy for breast cancer reduced the risk of death, whereas low adherence increased it[26]. In our study, the medical team continued sharing patient information after the patient started taking sorafenib; the mutual cooperation system therefore enabled the team to prevent HFSR or promptly provide patient management, as appropriate. In contrast, the nonmutual cooperation system did not allow patient information to be shared at an early stage, so the team could not take timely measures to prevent HFSR or plan palliative care. The difference in the effectiveness of HFSR prevention and palliation between the two systems highlights the importance of the various hurdles that can affect medication adherence.

Hurdles to medication adherence are complex and include patient-, clinician-, and healthcare system-related factors. Patient-related factors include limited involvement in treatment decision-making, poor health literacy, doubts about medication effectiveness, and previous adverse effects. Clinician-related factors include failure to recognize nonadherence, poor patient communication, and inadequate multidisciplinary communication between oncologists and pharmacists. Healthcare system-related factors include relationships with clinicians and clinicians' satisfaction with patient care[27,28]. Thus, multiple factors may hinder adherence. The mutual cooperation system coordinates interactions among patients, clinicians, and the health system, thereby minimizing barriers to adherence.

Notably, this study revealed that the prognostic value of HFSR was enhanced by appropriate medication adherence. BCLC stage B HCC was also an independent predictor of improved OS[29]; however, BCLC stage did not account for the difference in OS between Groups A and C, as there was no significant between-group difference in the baseline stage distribution.

Several factors support the validity of our results. First, variables such as age, sex, etiology, ECOG performance status, liver function, comorbidities, and TACE procedure count did not differ significantly between the groups.
Second, we verified that all patients had received sorafenib monotherapy and no subsequent chemotherapy; therefore, neither our patients' prognoses nor the prolonged OS we observed was affected by other chemotherapeutic agents.

Nevertheless, our study has limitations. First, the study design was based on the mutual cooperation system: after a patient was first assessed by a specialized pharmacist, the oncologist decided whether to prescribe sorafenib based on the pharmacist's report. However, participation in the mutual cooperation system was not mandatory and depended on the patient's wishes; a specialized pharmacist could also provide guidance about sorafenib after the patient had been examined by an oncologist. Patients who were unwilling to participate in the mutual cooperation system may therefore have been included in Group A or B, and it is unknown whether this self-selection affected the adherence rate. Second, OS and TTF were higher in the mutual cooperation system group (Group C) than in Groups A and B, and it is difficult to determine whether these results were driven by improved adherence or by the mechanism underlying the prognostic efficacy of HFSR.

CONCLUSION

The mutual cooperation system increased treatment duration and improved prognosis in patients with HFSR secondary to sorafenib treatment. It also allowed us to initiate sorafenib treatment promptly. Our study clearly demonstrates the clinical and research benefits of this system. The mutual cooperation system for managing sorafenib treatment described here could be applied to patients treated with other multikinase inhibitors to extend OS. The increased OS resulting from the mutual cooperation system could have a substantial impact on the design of clinical studies in which sorafenib is used as the control drug. Additionally, nonadherence may have adversely affected OS in previous studies, leading researchers to underestimate drug efficacy; we propose that future clinical investigations designed to improve medication adherence could eliminate this underestimation.
[ "Hepatocellular carcinoma", "Sorafenib", "Pharmacists", "Oncologists", "Prognosis", "Duration of therapy" ]
INTRODUCTION: Sorafenib is a multikinase inhibitor used to treat advanced hepatocellular carcinoma (HCC)[1,2]. Although sorafenib prolongs overall survival (OS) in patients with HCC, it is associated with various adverse events (AEs) that may lead to permanent discontinuation[3]. Previous studies found that hand-foot skin reaction (HFSR) was a prognostic marker of longer survival[4-6]. While HFSR is an important predictor of survival outcomes in clinical practice and clinical sorafenib trials, AE management could influence the efficacy of HFSR as a prognostic factor. A recent study showed that increased clinician experience with AEs reduced the potential for discontinuing sorafenib therapy, resulting in a longer OS in patients with HCC[7]. Nevertheless, it takes a long time for clinicians to develop the necessary experience for the management of AEs, and even with experience, it takes a substantial amount of time to provide a system of adequate follow-up after sorafenib initiation. As sorafenib is administered orally, its successful use for HCC treatment relies on patient medication adherence. However, many studies indicate that patients with cancer are sometimes nonadherent when prescribed oral drugs[8,9], and AEs are the main cause of poor adherence[10]. Poor adherence can lead to poor outcomes, and clinicians may wrongly conclude that a drug is ineffective because the response to treatment is insufficient[11]. It is important for patients to actively participate in making treatment decisions and then receive treatment according to their decisions to improve adherence[12]. We introduced behavior change techniques (patient education, medication regimen management, pharmacist-led interventions)[13,14] in our facilities as interventions to promote adherence. Using a preliminary simulation, we estimated that collecting patient information takes at least 20 min. From this, we concluded that it is difficult for oncologists to manage drugs that cause various AEs (e.g., sorafenib) without assistance due to their obligations to many patients. Thus, we developed a mutual cooperation system involving collaboration between oncologists and pharmacists to ensure effective AE management. This mutual cooperation system consisted of the initial intervention by a pharmacist followed by a medical examination by an oncologist. However, this system was affected by the patient’s behavior because patients were not obliged to follow the system. Some patients received intervention from a pharmacist after a medical examination by an oncologist. Effective AE management that improves medication adherence has a considerable impact on survival outcomes. Previous single-center studies suggest that healthcare provider interventions improve adherence, and the onset of HFSR was a favorable prognostic factor of OS in patients with HCC[15,16]. However, little is known about the association between prognosis and medication adherence in patients with HCC, and multicenter studies on this relationship are lacking. Therefore, we aimed to compare the impact of different AE interventions on patient prognosis. MATERIALS AND METHODS: Study design We retrospectively evaluated patients with advanced HCC treated with sorafenib monotherapy and no subsequent chemotherapeutic agent between May 2009 and March 2018 using the medical records of the following core hospitals in Japan: Hitachi General Hospital, Ibaraki Prefectural Central Hospital, Ibaraki Cancer Center, and Tokyo Medical University Ibaraki Medical Center. 
These hospitals are core hospitals that were designated by the government to provide specialized cancer medical care. The patients were separated into three groups: Group A, patients without HFSR but with pharmacist intervention (this intervention was performed by pharmacists who did not share interview information with the oncologist; it is called nonmutual cooperation system); Group B, patients with HFSR and nonmutual cooperation system; and Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system). We retrospectively evaluated patients with advanced HCC treated with sorafenib monotherapy and no subsequent chemotherapeutic agent between May 2009 and March 2018 using the medical records of the following core hospitals in Japan: Hitachi General Hospital, Ibaraki Prefectural Central Hospital, Ibaraki Cancer Center, and Tokyo Medical University Ibaraki Medical Center. These hospitals are core hospitals that were designated by the government to provide specialized cancer medical care. The patients were separated into three groups: Group A, patients without HFSR but with pharmacist intervention (this intervention was performed by pharmacists who did not share interview information with the oncologist; it is called nonmutual cooperation system); Group B, patients with HFSR and nonmutual cooperation system; and Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system). Patient selection We included patients with stage B or C HCC according to the Barcelona Clinic Liver Cancer (BCLC) staging system. The indication criteria for sorafenib administration were as follows: Child-Pugh grade A or B; Eastern Cooperative Oncology Group (ECOG) performance status 0 or 1; alanine aminotransferase < 5-fold the upper limit of the normal range; total bilirubin level < 2.0 mg/dL; neutrophil count > 1500/µL; hemoglobin level ≥ 8.5 g/dL; platelet count > 75000/µL; and no dialysis requirement. The study exclusion criteria were as follows: Patients with a history of thrombosis or ischemic heart disease, pregnant women and those who could become pregnant, and patients with brain metastases. Our study protocol was approved by the ethics committee of each hospital and was performed according to the ethical guidelines of the 1975 Declaration of Helsinki. We obtained informed consent using an opt-out option on each facility’s website (see Institution website uniform resource locators). This study was registered with the University Hospital Medical Information Network (UMIN) (ID: UMIN000038701). We included patients with stage B or C HCC according to the Barcelona Clinic Liver Cancer (BCLC) staging system. The indication criteria for sorafenib administration were as follows: Child-Pugh grade A or B; Eastern Cooperative Oncology Group (ECOG) performance status 0 or 1; alanine aminotransferase < 5-fold the upper limit of the normal range; total bilirubin level < 2.0 mg/dL; neutrophil count > 1500/µL; hemoglobin level ≥ 8.5 g/dL; platelet count > 75000/µL; and no dialysis requirement. The study exclusion criteria were as follows: Patients with a history of thrombosis or ischemic heart disease, pregnant women and those who could become pregnant, and patients with brain metastases. Our study protocol was approved by the ethics committee of each hospital and was performed according to the ethical guidelines of the 1975 Declaration of Helsinki. 
We obtained informed consent using an opt-out option on each facility’s website (see Institution website uniform resource locators). This study was registered with the University Hospital Medical Information Network (UMIN) (ID: UMIN000038701). Data collection We collected patient data from the start of sorafenib, including age, sex, etiology of underlying liver disease, Child-Pugh score, history of present illness, medical history, tumor marker level [alpha-fetoprotein (AFP)], ECOG performance status, and relevant laboratory tests, including total bilirubin, albumin, and international normalized ratio (INR). Laboratory tests and tumor marker levels were obtained every 8–10 wk until permanent sorafenib discontinuation. We collected patient data from the start of sorafenib, including age, sex, etiology of underlying liver disease, Child-Pugh score, history of present illness, medical history, tumor marker level [alpha-fetoprotein (AFP)], ECOG performance status, and relevant laboratory tests, including total bilirubin, albumin, and international normalized ratio (INR). Laboratory tests and tumor marker levels were obtained every 8–10 wk until permanent sorafenib discontinuation. Computed tomography evaluations Sorafenib response evaluations on computed tomography (CT) were scheduled for 8 wk after the first treatment, and subsequent evaluations were planned every 8 wk. Thoracic, abdominal, and pelvic CT scans were performed with intravenous iodinated contrast media. CT evaluations were conducted based on the modified Response Evaluation Criteria in Solid Tumors (mRECIST)[17] by an oncologist. Sorafenib response evaluations on computed tomography (CT) were scheduled for 8 wk after the first treatment, and subsequent evaluations were planned every 8 wk. Thoracic, abdominal, and pelvic CT scans were performed with intravenous iodinated contrast media. CT evaluations were conducted based on the modified Response Evaluation Criteria in Solid Tumors (mRECIST)[17] by an oncologist. Intervention Pharmacists with special expertise provided medical care at the pharmacist’s outpatient clinic before or after a patient was medically examined by an oncologist. Every 8 wk at each visit, an AE evaluation, a residual drug count, self-management advice, and patient education, including descriptions of successful cases of AE management, pharmacist support, and advice for relieving patient anxiety and misunderstanding, were conducted. AEs were evaluated according to the National Cancer Institute Common Toxicity Criteria for Adverse Events (NCI-CTCAE) version 5.0. Pharmacists recommended that all patients use heparinoids before sorafenib treatment to prevent AEs. Additionally, the prophylactic use of urea cream to prevent dermatologic AEs was recommended after beginning sorafenib treatment. Pharmacist intervention began at the start of sorafenib treatment and continued until treatment ended. Pharmacists with special expertise provided medical care at the pharmacist’s outpatient clinic before or after a patient was medically examined by an oncologist. Every 8 wk at each visit, an AE evaluation, a residual drug count, self-management advice, and patient education, including descriptions of successful cases of AE management, pharmacist support, and advice for relieving patient anxiety and misunderstanding, were conducted. AEs were evaluated according to the National Cancer Institute Common Toxicity Criteria for Adverse Events (NCI-CTCAE) version 5.0. 
Pharmacists recommended that all patients use heparinoids before sorafenib treatment to prevent AEs. Additionally, the prophylactic use of urea cream to prevent dermatologic AEs was recommended after beginning sorafenib treatment. Pharmacist intervention began at the start of sorafenib treatment and continued until treatment ended. Mutual cooperation system We developed a mutual cooperation system that was initiated at the start of sorafenib treatment to manage AEs effectively. Although patients in Groups B and C received medical advice from oncologists and pharmacists, the systems differed. Group C received 20- to 30-min consultations during which pharmacists provided accurate information about sorafenib to alleviate fear and anxiety related to AEs. After each visit, the pharmacist summarized the consultation results in a report and discussed their findings with an oncologist. Group B patients received a 5- to 10-min session during which pharmacists provided the same information about sorafenib that Group C had received. These pharmacist consultations were brief because a thorough medical examination by an oncologist had already been completed. Furthermore, the pharmacist did not record the consultation content in the medical chart because the visit involved verbal intervention only, and a detailed report was unnecessary. We developed a mutual cooperation system that was initiated at the start of sorafenib treatment to manage AEs effectively. Although patients in Groups B and C received medical advice from oncologists and pharmacists, the systems differed. Group C received 20- to 30-min consultations during which pharmacists provided accurate information about sorafenib to alleviate fear and anxiety related to AEs. After each visit, the pharmacist summarized the consultation results in a report and discussed their findings with an oncologist. Group B patients received a 5- to 10-min session during which pharmacists provided the same information about sorafenib that Group C had received. These pharmacist consultations were brief because a thorough medical examination by an oncologist had already been completed. Furthermore, the pharmacist did not record the consultation content in the medical chart because the visit involved verbal intervention only, and a detailed report was unnecessary. Sorafenib therapy and AE management Sorafenib was administered at a dose of 400 or 800 mg/d. The initial dose of sorafenib was determined by an oncologist. At the start of sorafenib treatment, all patients received information about AEs from a pharmacist and an oncologist. Patients who developed an AE could confer with their consulting pharmacist or prescribing oncologist. Pharmacists collected and recorded patient data, including the AE grade (according to NCI-CTCAE version 5.0), AE time of onset, and emotional response of the patient to the AE. Oncologists performed dose modifications throughout treatment, including reductions, interruptions, and reintroductions, according to the drug manufacturer’s package insert for sorafenib. Sorafenib was administered at a dose of 400 or 800 mg/d. The initial dose of sorafenib was determined by an oncologist. At the start of sorafenib treatment, all patients received information about AEs from a pharmacist and an oncologist. Patients who developed an AE could confer with their consulting pharmacist or prescribing oncologist. 
Criteria for permanent sorafenib discontinuation
Sorafenib was permanently discontinued when any of the following events occurred: (1) Tumor progression, defined as either radiologic progression (by the mRECIST criteria) or clinically progressive disease (e.g., ECOG performance status decline or onset of severe symptoms unrelated to liver failure); (2) Unacceptable AEs, defined as moderate to severe AEs (grades 2–4) that persisted after dose reduction or temporary treatment interruption; or (3) Liver decompensation, defined as gastrointestinal bleeding, ascites, jaundice, or encephalopathy[3]. All patients were managed by an oncologist and received best supportive care after sorafenib was permanently discontinued. Time to treatment failure (TTF) was defined as the duration from the start of sorafenib treatment to permanent discontinuation. The proportion of days covered (PDC) was defined as the TTF divided by the time to radiologic progressive disease after sorafenib[18]. Nonadherence was defined as a PDC of ≤ 80%[19].
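To make the adherence definitions above concrete, the following is a minimal Python sketch, not taken from the study, that computes the PDC from a patient’s TTF and time to radiologic progression and applies the ≤ 80% nonadherence cutoff; the function names and example values are hypothetical.

def proportion_of_days_covered(ttf_days: float, time_to_progression_days: float) -> float:
    """PDC = TTF divided by the time to radiologic progressive disease."""
    if time_to_progression_days <= 0:
        raise ValueError("time to progression must be positive")
    return ttf_days / time_to_progression_days

def is_nonadherent(pdc: float, threshold: float = 0.8) -> bool:
    """A patient is classified as nonadherent when PDC <= 80%."""
    return pdc <= threshold

# Hypothetical example: sorafenib stopped at day 120, progression at day 200.
pdc = proportion_of_days_covered(120, 200)   # 0.6
print(pdc, is_nonadherent(pdc))              # 0.6 True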
Statistical analysis
Categorical variables were assessed using the chi-square test and are presented as frequencies or percentages. Continuous variables were analyzed using the Mann-Whitney U test and are expressed as the mean ± SD. OS and TTF were evaluated using the Kaplan-Meier method. A landmark analysis[20] was performed to account for guarantee-time bias, i.e., HFSR events that could not be observed in patients who died early. The landmark was set at the time by which the highest-grade HFSR had occurred in 50% or more of the patients (here, 30 d). The log-rank test was used to estimate differences in survival curves. Additionally, we used Cox regression analyses to evaluate the relationship between the time to the occurrence of an event and explanatory variables. Logistic regression analyses were used to evaluate the relationship between nonadherence and explanatory variables. We included the following baseline characteristics as variables in our univariate analysis: age, sex, etiology of liver disease, bilirubin level, albumin level, INR, BCLC stage, ECOG performance status, macrovascular invasion, extrahepatic spread, serum AFP level, and number of previous transarterial chemoembolization (TACE) procedures for liver cancer. Variables identified as significant in the univariate analysis were included in the multivariate analysis. The correlations between OS and TTF were assessed by Spearman’s rank correlation coefficient. A P value less than 0.05 was considered statistically significant. We used the Bonferroni correction for multiple comparisons to adjust the familywise error rate when comparing differences between the three groups. The statistical methods of this study were reviewed by Kamoshida T from the Department of Gastroenterology, Hitachi General Hospital, Japan. All statistical analyses were performed using SPSS software, version 22 (IBM Corp., Armonk, NY, United States).
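As a hedged illustration of the survival analyses described above, the sketch below reimplements Kaplan-Meier estimation, pairwise log-rank tests with a Bonferroni correction, a simplified 30-d landmark restriction, and a Cox model in Python with pandas and lifelines. The study itself used SPSS, and all column names and data values here are hypothetical.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "os_months": [6.2, 13.9, 7.7, 4.0, 20.1, 9.3],  # hypothetical survival times
    "event":     [1, 1, 0, 1, 0, 1],                # 1 = death observed
    "group":     ["A", "C", "B", "A", "C", "B"],
    "bclc_b":    [1, 0, 1, 0, 1, 0],                # 1 = BCLC stage B
})

# Simplified landmark analysis: keep only patients still on study at the
# 30-d landmark (about 1.0 month), then compare curves from that point.
df = df[df["os_months"] >= 1.0]

# Kaplan-Meier curves per group (one fitter reused per group for brevity)
kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["os_months"], event_observed=sub["event"], label=f"Group {name}")

# Pairwise log-rank tests with a Bonferroni correction for 3 comparisons
pairs = [("A", "B"), ("A", "C"), ("B", "C")]
for g1, g2 in pairs:
    s1, s2 = df[df["group"] == g1], df[df["group"] == g2]
    res = logrank_test(s1["os_months"], s2["os_months"],
                       event_observed_A=s1["event"], event_observed_B=s2["event"])
    p_adj = min(res.p_value * len(pairs), 1.0)  # Bonferroni adjustment
    print(g1, "vs", g2, "adjusted P =", p_adj)

# Cox regression on explanatory variables (only BCLC stage here for brevity)
cph = CoxPHFitter()
cph.fit(df[["os_months", "event", "bclc_b"]],
        duration_col="os_months", event_col="event")
cph.print_summary()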
RESULTS
Patients
We included 134 patients [median age, 69 years (range, 41–89 years); male, n = 99; female, n = 35] with advanced HCC who received sorafenib monotherapy without subsequent treatment (Group A, n = 41; Group B, n = 30; Group C, n = 63). The main etiological factor was hepatitis C virus (HCV) (77/134 patients, 57.5%), followed by hepatitis B virus (HBV) (30/134 patients, 22.4%).

Baseline characteristics
All patients had cirrhosis [Child-Pugh A, n = 117 (87.3%); Child-Pugh B, n = 17 (12.7%)]. HCC was BCLC stage B in 55 patients (41.0%) and BCLC stage C in 79 patients (59.0%). None of the patients had a second primary cancer. Portal vein thrombosis was present in 35 patients (26.1%), and extrahepatic metastases were found in 67 patients (50.0%) (Table 1). Table 1: Baseline characteristics of patients in Groups A, B, and C. AFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HCV: Hepatitis C virus; HBV: Hepatitis B virus; INR: International normalized ratio; TACE: Transcatheter arterial chemoembolization; ECOG: Eastern Cooperative Oncology Group.

AEs
An AE of at least grade 1 was observed in all patients after sorafenib administration. However, none of the patients experienced any grade 4 AEs.
The main AEs in all groups were fatigue (30.6%), diarrhea (39.6%), hypertension (31.3%), anorexia (29.9%), and thrombocytopenia (38.8%) (Table 2). Many patients required temporary sorafenib interruption because of AEs (Group A, 19.5%; Group B, 6.7%; Group C, 41.3%). Of the patients who temporarily stopped taking sorafenib, the rate of those who resumed treatment at a reduced dose was highest in Group C (Group A vs Group B, P = 0.70; Group A vs Group C, P = 0.11; Group B vs Group C, P < 0.01) (Table 3). Table 2: Prevalence of adverse events after beginning sorafenib, according to CTCAE version 5.0, n (%). Table 3: Dose modification related to adverse events. NA: Not available.

Radiological response evaluations
CT examinations performed every 2 mo showed that the disease control rate (DCR) gradually decreased in all groups. The response rate (RR) and DCR 8 mo after the start of sorafenib treatment were highest in Group C (RR, 9.5%; DCR, 65.1%) (Table 4). Table 4: Radiological response according to the modified Response Evaluation Criteria in Solid Tumors.

Permanent sorafenib discontinuation
The main causes of permanent drug discontinuation were HCC progression and sorafenib-related AE intolerance. Permanent discontinuation due to AE intolerance occurred most frequently in Group B [Group A (17.1%) vs Group B (60.0%), P < 0.01; Group A (17.1%) vs Group C (20.6%), P = 1.00; Group B (60.0%) vs Group C (20.6%), P < 0.05] (Table 5). Table 5: Reasons for permanent sorafenib discontinuation, n (%).

OS after sorafenib therapy
The median OS was 6.2 mo in Group A, 7.7 mo in Group B, and 13.9 mo in Group C. The difference in median OS was significant between Groups A and C (P < 0.01). In multivariate analysis, Group A vs Group C (HR, 0.41; 95%CI: 0.25–0.66; P < 0.01) and BCLC-B (HR, 0.60; 95%CI: 0.41–0.89; P = 0.01) were independent predictors of survival (Figures 1 and 2, Table 6). Figure 1: Kaplan-Meier estimates and prognostic factors of overall survival (comparison between each group).
Group A, patients without hand-foot skin reaction (HFSR) but with pharmacist intervention; Group B, patients with HFSR and the nonmutual cooperation system; Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system). Figure 2: Kaplan-Meier estimates and prognostic factors of overall survival (Barcelona Clinic Liver Cancer B vs Barcelona Clinic Liver Cancer C). BCLC: Barcelona Clinic Liver Cancer. Table 6: Prognostic factors of overall survival by multivariable Cox regression analysis. AFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HBV: Hepatitis B virus; ECOG: Eastern Cooperative Oncology Group.

Mutual cooperation system evaluation
The median TTF in Group C was 5.0 mo (95%CI: 3.8–6.5), the longest of all the groups [Group C (5.0 mo) vs Group A (2.1 mo), P < 0.01; Group C (5.0 mo) vs Group B (0.5 mo), P < 0.01]. In multivariable Cox regression analysis, Group A vs Group B (HR, 1.69; 95%CI: 1.04–2.75; P = 0.03) and Group A vs Group C (HR, 0.53; 95%CI: 0.35–0.81; P < 0.01) were significant predictors of TTF (Table 7). The proportions of patients with a PDC of ≤ 0.8 were 29.3% in Group A, 73.3% in Group B, and 23.8% in Group C; the nonadherence rate in Group B was significantly higher than in Groups A (P < 0.01) and C (P < 0.01). Adjusted logistic regression analysis showed that nonadherence (PDC ≤ 0.8) was associated with group assignment: Group B vs Group A (OR, 0.11; 95%CI: 0.04–0.36; P < 0.01) and Group B vs Group C (OR, 0.09; 95%CI: 0.03–0.27; P < 0.01) (Figure 3, Table 8). Figure 3: Proportion and prognostic factors of nonadherence. Group A, patients without hand-foot skin reaction (HFSR) but with pharmacist intervention; Group B, patients with HFSR and the nonmutual cooperation system; Group C, patients with HFSR and intervention by pharmacists who shared interview information with the oncologist (mutual cooperation system). Table 7: Prognostic factors of time-to-treatment failure by multivariable Cox regression analysis. AFP: Alpha-fetoprotein; BCLC: Barcelona Clinic Liver Cancer; HBV: Hepatitis B virus; ECOG: Eastern Cooperative Oncology Group. Table 8: Prognostic factors of proportion of days covered by logistic regression analyses.
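As a sketch of how an adjusted logistic regression of nonadherence on group membership could be computed outside SPSS, the following uses statsmodels with hypothetical dummy-coded data; the variable names are assumptions, and the fitted odds ratios will not match the values reported above.

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "nonadherent": [1, 1, 1, 0, 1, 0, 0, 0, 0, 1],   # 1 if PDC <= 0.8
    "group_A":     [0, 0, 0, 0, 1, 1, 1, 0, 0, 0],   # dummy codes; Group B
    "group_C":     [0, 0, 0, 0, 0, 0, 0, 1, 1, 1],   # is the reference level
})

# Fit the logistic model and report odds ratios relative to Group B
X = sm.add_constant(df[["group_A", "group_C"]])
fit = sm.Logit(df["nonadherent"], X).fit(disp=0)
print(np.exp(fit.params))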
Correlation between OS and TTF
The Spearman’s rank correlation coefficients between OS and TTF were 0.41 in Group A (P < 0.01), 0.13 in Group B (P = 0.51), and 0.58 in Group C (P < 0.01). There was a highly significant correlation between OS and TTF in Group C, whereas there was no correlation between OS and TTF in Group B.
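The correlation reported above can be reproduced on one's own paired data with a few lines of Python; this is an illustrative sketch with hypothetical values, not the study data.

from scipy.stats import spearmanr

os_months  = [6.2, 13.9, 7.7, 4.0, 20.1, 9.3]   # hypothetical OS per patient
ttf_months = [2.1, 5.0, 0.5, 1.8, 6.5, 0.7]     # hypothetical TTF per patient

rho, p_value = spearmanr(os_months, ttf_months)
print(f"rho = {rho:.2f}, P = {p_value:.3f}")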
DISCUSSION
We investigated the effect of cooperation between oncologists and pharmacists on the prognosis of patients with advanced HCC treated with sorafenib monotherapy. In the present study, the occurrence of HFSR was associated with improved patient prognosis, and this improvement was significantly enhanced by appropriate medication adherence. Close cooperation between oncologists and pharmacists increased adherence, and a strong correlation was observed between OS and TTF. Several studies have indicated that the emergence of HFSR is associated with prolonged survival in patients with advanced HCC treated with sorafenib[5,6,21]. However, these studies did not evaluate the correlation between medication adherence and survival after the appearance of an AE, including HFSR. Targeted therapies, including sorafenib, can result in unexpected AEs that do not occur after the administration of earlier chemotherapy drugs[22]. Oncologists must recognize these novel AEs at an early stage and provide appropriate treatment to the extent possible. However, previous studies revealed that optimal AE management requires considerable experience and time[7,23]. Management of sorafenib-related AEs includes data collection for AE grading, patient education, and determination of the appropriate sorafenib dose by an oncologist[15]. The use of sorafenib is associated with various AEs, including gastrointestinal, constitutional, and dermatologic events[1,2], and their management may require dose reduction or temporary discontinuation to avoid sorafenib treatment cessation. For example, an appropriate sorafenib dose reduction yielded a decreased rate of permanent discontinuation due to AEs[7]. However, in many patients, these dose changes do not mitigate intolerable or severe AEs, and permanent sorafenib discontinuation is required[24]. In our study, only the mutual cooperation system promoted dose reduction after an AE and extended the TTF. In the mutual cooperation system, pharmacists were responsible for collecting data on AE grades, educating patients, and managing any leftover medicine. Furthermore, they documented their findings in a report for the oncologist. In the nonmutual cooperation system, by contrast, an oncologist examined the patient before pharmacists were involved in patient management. Oncologists were required to evaluate AE grading data, educate patients, and determine the appropriate sorafenib dose. Only after the medical examination did a pharmacist provide additional patient management, and the pharmacist’s findings were not reported to the oncologist.
In the nonmutual cooperation system, the oncologist had to obtain a substantial amount of information to maintain or revise the sorafenib treatment regimen within 5 to 10 min. Given these differences between the systems, our results suggest that the mutual cooperation system led to appropriate dose reductions, as reflected by the extended TTF. However, we do not recommend starting sorafenib at half the standard dose (800 mg/d). Dose reductions were guided by the results obtained through the mutual cooperation system. Additionally, we demonstrated that the best outcomes occur when optimal dosing and good medication adherence are achieved early in the course of sorafenib treatment. In our study, only the mutual cooperation system ensured good adherence in patients who experienced HFSR secondary to sorafenib treatment and prevented unnecessary permanent medication discontinuation. A previous study showed that, despite dramatic improvements in adjuvant hormonal therapy for breast cancer, nonadherence and early discontinuation driven by treatment-related toxicity undermine effective cancer treatment, and appropriate interventions are needed to improve breast cancer survival[25]. Another cohort study indicated that long-term tamoxifen therapy for breast cancer reduced the risk of death, while the risk of death increased with a low adherence rate[26]. In our study, the medical team continued sharing patient information after the patient started taking sorafenib; therefore, the mutual cooperation system enabled the medical team to prevent HFSR or promptly provide patient management, as appropriate. In contrast, the nonmutual cooperation system did not allow patient information to be shared at an early stage; thus, the medical team was not able to take measures to prevent HFSR or plan palliative care in a timely manner. The differences in the effectiveness of HFSR prevention and palliation between the two systems highlight the importance of the various hurdles that can affect medication adherence. Hurdles to medication adherence are complex and include patient-, clinician-, and healthcare system-related factors. Patient-related factors, such as limited involvement in the treatment decision-making process, poor health literacy, doubts about medication effectiveness, and previous adverse effects, influence adherence. Clinician-related factors include failure to recognize nonadherence, poor patient communication, and inadequate multidisciplinary communication between oncologists and pharmacists. Healthcare system-related factors include relationships with clinicians and clinicians’ satisfaction with patient care[27,28]. Thus, multiple factors may become hurdles to improving adherence. The mutual cooperation system coordinates interactions among patients, clinicians, and the health system, thereby minimizing barriers to adherence. Surprisingly, this study revealed that the prognostic value of HFSR was enhanced by appropriate medication adherence. In addition, BCLC-B HCC was an independent predictor of improved OS[29]. BCLC stage did not affect the difference in OS between Groups A and C, as there was no significant between-group difference in the baseline stage distribution. We have reasonable evidence to confirm the validity of our results. First, variables such as age, sex, etiology, ECOG performance status, liver function, comorbidities, and TACE procedure count were not significantly different between the groups.
Second, we verified that all patients had received sorafenib monotherapy and no subsequent chemotherapy; therefore, neither our patients’ prognoses nor the prolonged OS we observed was affected by other chemotherapeutic agents. Nevertheless, our study has a few limitations. First, our study design was based on the mutual cooperation system. After a patient was first seen by a specialized pharmacist, the oncologist determined whether to prescribe sorafenib based on the pharmacist’s report. However, patients were not required to participate in the mutual cooperation system, and involvement was subject to the patient’s wishes. After a medical examination by an oncologist, a specialized pharmacist could also provide patient guidance about sorafenib. Although patients who were unwilling to participate in the mutual cooperation system may have been included in Group A or B, it is unknown whether this enrollment could have affected the adherence rate. Second, OS and TTF were higher in the mutual cooperation system group (Group C) than in Groups A and B. It is difficult to determine whether these results were caused by improved adherence or by the mechanism underlying the prognostic efficacy of HFSR.

CONCLUSION
The mutual cooperation system increased treatment duration and improved prognosis in patients with HFSR secondary to sorafenib treatment. Additionally, the mutual cooperation system allowed us to promptly initiate sorafenib treatment. Our study clearly demonstrates the clinical and research benefits of this system. The mutual cooperation system for sorafenib treatment management described in this study could be applied to the management of patients treated with other multikinase inhibitors to extend OS. The increased OS resulting from the mutual cooperation system could have a substantial impact on the design of clinical studies in which sorafenib is used as the control drug. Additionally, nonadherence may have adversely affected OS in previous studies, leading researchers to underestimate drug efficacy. We propose that future clinical investigations designed to improve medication adherence could eliminate this underestimation of OS.
Background: Sorafenib is an oral drug that prolongs overall survival (OS) in patients with hepatocellular carcinoma. Adverse events, including hand-foot skin reaction (HFSR), lead to permanent sorafenib discontinuation. Methods: We performed a retrospective, multicenter study of patients treated with sorafenib monotherapy between May 2009 and March 2018. We developed a mutual cooperation system that was initiated at the start of sorafenib treatment to effectively manage adverse events. The mutual cooperation system entailed patients receiving consultations during which pharmacists provided accurate information about sorafenib to alleviate the fear and anxiety related to adverse events. We stratified the patients into three groups: Group A, patients without HFSR but with pharmacist intervention; Group B, patients with HFSR and pharmacist interventions unreported to oncologists (nonmutual cooperation system); and Group C, patients with HFSR and pharmacist interventions known to oncologists (mutual cooperation system). OS and time to treatment failure (TTF) were evaluated using the Kaplan-Meier method. Results: We enrolled 134 patients (Group A, n = 41; Group B, n = 30; Group C, n = 63). The median OS was significantly different between Groups A and C (6.2 vs 13.9 mo, P < 0.01) but not between Groups A and B (6.2 vs 7.7 mo, P = 0.62). Group A vs Group C was an independent OS predictor (HR, 0.41; 95%CI: 0.25-0.66; P < 0.01). In Group B alone, TTF was significantly lower and the nonadherence rate was higher (P < 0.01). In addition, the Spearman's rank correlation coefficients between OS and TTF in each group were 0.41 (Group A; P < 0.01), 0.13 (Group B; P = 0.51), and 0.58 (Group C; P < 0.01). There was a highly significant correlation between OS and TTF in Group C. However, there was no correlation between OS and TTF in Group B. Conclusions: The mutual cooperation system increased treatment duration and improved prognosis in patients with HFSR. Future prospective studies (e.g., randomized controlled trials) and improved adherence could help prevent OS underestimation.
INTRODUCTION: Sorafenib is a multikinase inhibitor used to treat advanced hepatocellular carcinoma (HCC)[1,2]. Although sorafenib prolongs overall survival (OS) in patients with HCC, it is associated with various adverse events (AEs) that may lead to permanent discontinuation[3]. Previous studies found that hand-foot skin reaction (HFSR) was a prognostic marker of longer survival[4-6]. While HFSR is an important predictor of survival outcomes in clinical practice and clinical sorafenib trials, AE management could influence the efficacy of HFSR as a prognostic factor. A recent study showed that increased clinician experience with AEs reduced the potential for discontinuing sorafenib therapy, resulting in a longer OS in patients with HCC[7]. Nevertheless, it takes a long time for clinicians to develop the necessary experience for the management of AEs, and even with experience, it takes a substantial amount of time to provide a system of adequate follow-up after sorafenib initiation. As sorafenib is administered orally, its successful use for HCC treatment relies on patient medication adherence. However, many studies indicate that patients with cancer are sometimes nonadherent when prescribed oral drugs[8,9], and AEs are the main cause of poor adherence[10]. Poor adherence can lead to poor outcomes, and clinicians may wrongly conclude that a drug is ineffective because the response to treatment is insufficient[11]. It is important for patients to actively participate in making treatment decisions and then receive treatment according to their decisions to improve adherence[12]. We introduced behavior change techniques (patient education, medication regimen management, pharmacist-led interventions)[13,14] in our facilities as interventions to promote adherence. Using a preliminary simulation, we estimated that collecting patient information takes at least 20 min. From this, we concluded that it is difficult for oncologists to manage drugs that cause various AEs (e.g., sorafenib) without assistance due to their obligations to many patients. Thus, we developed a mutual cooperation system involving collaboration between oncologists and pharmacists to ensure effective AE management. This mutual cooperation system consisted of the initial intervention by a pharmacist followed by a medical examination by an oncologist. However, this system was affected by the patient’s behavior because patients were not obliged to follow the system. Some patients received intervention from a pharmacist after a medical examination by an oncologist. Effective AE management that improves medication adherence has a considerable impact on survival outcomes. Previous single-center studies suggest that healthcare provider interventions improve adherence, and the onset of HFSR was a favorable prognostic factor of OS in patients with HCC[15,16]. However, little is known about the association between prognosis and medication adherence in patients with HCC, and multicenter studies on this relationship are lacking. Therefore, we aimed to compare the impact of different AE interventions on patient prognosis. CONCLUSION: Future prospective studies (e.g., randomized controlled trials) and improved adherence could help avoid OS underestimation.
10,172
415
[ 519, 151, 204, 85, 65, 145, 159, 121, 176, 336, 2562, 96, 154, 192, 72, 100, 236, 341, 71, 1200, 140 ]
22
[ "group", "patients", "sorafenib", "treatment", "system", "cooperation", "cooperation system", "hfsr", "aes", "oncologist" ]
[ "hcc treated sorafenib", "sorafenib treatment permanent", "carcinoma hcc sorafenib", "discontinuing sorafenib therapy", "hcc sorafenib prolongs" ]
null
[CONTENT] Hepatocellular carcinoma | Sorafenib | Pharmacists | Oncologists | Prognosis | Duration of therapy [SUMMARY]
[CONTENT] Hepatocellular carcinoma | Sorafenib | Pharmacists | Oncologists | Prognosis | Duration of therapy [SUMMARY]
null
[CONTENT] Hepatocellular carcinoma | Sorafenib | Pharmacists | Oncologists | Prognosis | Duration of therapy [SUMMARY]
[CONTENT] Hepatocellular carcinoma | Sorafenib | Pharmacists | Oncologists | Prognosis | Duration of therapy [SUMMARY]
[CONTENT] Hepatocellular carcinoma | Sorafenib | Pharmacists | Oncologists | Prognosis | Duration of therapy [SUMMARY]
[CONTENT] Antineoplastic Agents | Carcinoma, Hepatocellular | Humans | Liver Neoplasms | Niacinamide | Phenylurea Compounds | Prospective Studies | Retrospective Studies | Sorafenib | Treatment Outcome [SUMMARY]
[CONTENT] Antineoplastic Agents | Carcinoma, Hepatocellular | Humans | Liver Neoplasms | Niacinamide | Phenylurea Compounds | Prospective Studies | Retrospective Studies | Sorafenib | Treatment Outcome [SUMMARY]
null
[CONTENT] Antineoplastic Agents | Carcinoma, Hepatocellular | Humans | Liver Neoplasms | Niacinamide | Phenylurea Compounds | Prospective Studies | Retrospective Studies | Sorafenib | Treatment Outcome [SUMMARY]
[CONTENT] Antineoplastic Agents | Carcinoma, Hepatocellular | Humans | Liver Neoplasms | Niacinamide | Phenylurea Compounds | Prospective Studies | Retrospective Studies | Sorafenib | Treatment Outcome [SUMMARY]
[CONTENT] Antineoplastic Agents | Carcinoma, Hepatocellular | Humans | Liver Neoplasms | Niacinamide | Phenylurea Compounds | Prospective Studies | Retrospective Studies | Sorafenib | Treatment Outcome [SUMMARY]
[CONTENT] hcc treated sorafenib | sorafenib treatment permanent | carcinoma hcc sorafenib | discontinuing sorafenib therapy | hcc sorafenib prolongs [SUMMARY]
[CONTENT] hcc treated sorafenib | sorafenib treatment permanent | carcinoma hcc sorafenib | discontinuing sorafenib therapy | hcc sorafenib prolongs [SUMMARY]
null
[CONTENT] hcc treated sorafenib | sorafenib treatment permanent | carcinoma hcc sorafenib | discontinuing sorafenib therapy | hcc sorafenib prolongs [SUMMARY]
[CONTENT] hcc treated sorafenib | sorafenib treatment permanent | carcinoma hcc sorafenib | discontinuing sorafenib therapy | hcc sorafenib prolongs [SUMMARY]
[CONTENT] hcc treated sorafenib | sorafenib treatment permanent | carcinoma hcc sorafenib | discontinuing sorafenib therapy | hcc sorafenib prolongs [SUMMARY]
[CONTENT] group | patients | sorafenib | treatment | system | cooperation | cooperation system | hfsr | aes | oncologist [SUMMARY]
[CONTENT] group | patients | sorafenib | treatment | system | cooperation | cooperation system | hfsr | aes | oncologist [SUMMARY]
null
[CONTENT] group | patients | sorafenib | treatment | system | cooperation | cooperation system | hfsr | aes | oncologist [SUMMARY]
[CONTENT] group | patients | sorafenib | treatment | system | cooperation | cooperation system | hfsr | aes | oncologist [SUMMARY]
[CONTENT] group | patients | sorafenib | treatment | system | cooperation | cooperation system | hfsr | aes | oncologist [SUMMARY]
[CONTENT] adherence | patients hcc | interventions | management | patients | medication | studies | hcc | takes | os patients [SUMMARY]
[CONTENT] sorafenib | medical | patients | defined | variables | treatment | oncologist | pharmacist | patient | level [SUMMARY]
null
[CONTENT] clinical | system | os | mutual cooperation system | mutual cooperation | mutual | cooperation system | cooperation | treatment | sorafenib treatment [SUMMARY]
[CONTENT] group | patients | sorafenib | treatment | system | vs | vs group | 01 | aes | patient [SUMMARY]
[CONTENT] group | patients | sorafenib | treatment | system | vs | vs group | 01 | aes | patient [SUMMARY]
[CONTENT] Sorafenib ||| [SUMMARY]
[CONTENT] May 2009 and March 2018 ||| ||| ||| three | Group A | Group B | HFSR | Group C | HFSR ||| TTF [SUMMARY]
null
[CONTENT] HFSR ||| [SUMMARY]
[CONTENT] Sorafenib ||| ||| May 2009 and March 2018 ||| ||| ||| three | Group A | Group B | HFSR | Group C | HFSR ||| TTF ||| ||| 134 | Group A | 41 | Group B | 30 | Group C | 63 ||| Groups A | 6.2 | 13.9 mo | Groups A | 6.2 | 7.7 | 0.62 ||| Group A vs Group C | 0.41 | 0.25-0.66 | 0.01 ||| TTF ||| Spearman | TTF | 0.41 (Group A | 0.01 | 0.13 | 0.51 | 0.58 | Group C | 0.01 ||| TTF | Group C. However | TTF | Group B. Conclusions | HFSR ||| [SUMMARY]
[CONTENT] Sorafenib ||| ||| May 2009 and March 2018 ||| ||| ||| three | Group A | Group B | HFSR | Group C | HFSR ||| TTF ||| ||| 134 | Group A | 41 | Group B | 30 | Group C | 63 ||| Groups A | 6.2 | 13.9 mo | Groups A | 6.2 | 7.7 | 0.62 ||| Group A vs Group C | 0.41 | 0.25-0.66 | 0.01 ||| TTF ||| Spearman | TTF | 0.41 (Group A | 0.01 | 0.13 | 0.51 | 0.58 | Group C | 0.01 ||| TTF | Group C. However | TTF | Group B. Conclusions | HFSR ||| [SUMMARY]
Cerebrospinal Fluid Gusher in Cochlear Implantation and Its Association with Inner-Ear Malformations.
36349668
This study aimed to investigate the incidence of cerebrospinal fluid gusher in cochlear implantation and the association between cerebrospinal fluid gusher and inner-ear malformations in adult and pediatric patients.
BACKGROUND
A retrospective case review of 1025 primary cochlear implantation procedures was performed. Patients with inner-ear malformation or cerebrospinal fluid gusher during primary cochlear implantation were included and divided into 2 groups according to age: pediatric and adult groups.
METHODS
The incidence of inner-ear malformation was 4.19% (17/405) in the adult group and 7.6% (47/620) in the pediatric group; this difference was significant, with a higher incidence of inner-ear malformation in the pediatric group. The incidence of cerebrospinal fluid gusher was 0.9% (4/405) in the adult group and 4.1% (26/620) in the pediatric group; the difference in gusher incidence between the adult and pediatric groups was also significant.
RESULTS
The incidence of cerebrospinal fluid gusher is higher in pediatric patients than in adults, owing to a higher rate of inner-ear malformation in children. Inner-ear malformation is a risk factor for cerebrospinal fluid gusher.
CONCLUSION
[ "Adult", "Humans", "Child", "Cochlear Implantation", "Retrospective Studies", "Ear, Inner", "Cochlear Implants", "Cerebrospinal Fluid Otorrhea" ]
9682834
Introduction
Profuse clear fluid leak after opening the inner ear during surgery is called a “gusher” in the literature. Gushing of cerebrospinal fluid (CSF) during cochlear implantation (CI) can lead to an increased risk of meningitis, erroneous electrode insertion, and poor speech understanding and word acquisition.1,2 The incidence of CSF gusher in CI has been reported to be between 1% and 5%.3 It is known that the incidence can reach up to 40% in the presence of inner-ear malformation (IEM).4 Management of gusher is important for the safety of the patient and the surgery. Both conservative and surgical treatments are used in the presence of a gusher. Although a gusher can usually be managed by minor surgical intervention, it may be a life-threatening complication. Thus, the risk and management of gusher in CI should be considered. In this study, we aimed to investigate the incidence of CSF gusher in CI with or without IEM and the association between CSF gusher and IEM in adult and pediatric patients.
Methods
After ethical committee approval (May 8, 2019/03), a retrospective chart review was performed on 1025 CI procedures, including children and adults, at the Otorhinolaryngology Clinic of the University of Health Sciences, İzmir Bozyaka Education and Research Hospital between 2009 and 2019. The study included only patients with profuse CSF gusher and/or IEM during primary surgery. Cases with oozing, pulsatile perilymph, or mild leaks were excluded. Patients with CI were divided into 2 groups according to age: pediatric (≤17 years) and adult (≥18 years) patients. Cochlear and vestibular malformations were classified according to the Sennaroglu classification.5 The patients were then subdivided into groups according to the presence or absence of gusher and IEM, and patients with IEM were further divided into subgroups according to the type of anomaly. The demographic characteristics of the patients, surgical results, and management techniques for CSF gusher were evaluated. Computed tomography (CT) and magnetic resonance imaging (MRI) examinations of the patients were re-evaluated using advanced techniques. All MRI examinations were performed on a 1.5T MRI system (Philips Achieva, Philips Healthcare, Best, the Netherlands). Inner-ear structures were examined with a 3-dimensional balanced gradient echo-weighted sequence. Temporal bone CT imaging was performed using a 64-channel multi-detector scanner (Aquilion; Toshiba Medical Systems, Tokyo, Japan). Temporal bones were visualized in 3 basic planes (axial, sagittal, and coronal), with isometric reconstructions at 0.5, 1, or 2 mm.

Statistical Analysis
For statistical analysis, we used the Statistical Package for the Social Sciences 22 (IBM SPSS Corp.; Armonk, NY, USA). Chi-square and Fisher’s exact tests were performed to evaluate any significant difference between the groups, based on a P value < .05.
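The group comparisons described above can be illustrated with scipy in Python (the study itself used SPSS). The sketch below uses the incidence counts reported in this article; applying Fisher's exact test specifically to the gusher comparison is an assumption made here because one cell count is small.

from scipy.stats import chi2_contingency, fisher_exact

# IEM incidence: adults 17/405, children 47/620
iem_table = [[17, 405 - 17],
             [47, 620 - 47]]
chi2, p, dof, expected = chi2_contingency(iem_table)
print(f"IEM, adults vs children: chi2 = {chi2:.2f}, P = {p:.3f}")

# Gusher incidence: adults 4/405, children 26/620
gusher_table = [[4, 405 - 4],
                [26, 620 - 26]]
odds_ratio, p = fisher_exact(gusher_table)
print(f"Gusher, adults vs children: OR = {odds_ratio:.2f}, P = {p:.4f}")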
Results
A total of 1025 CIs were performed at our clinic between 2009 and 2019: 620 in the pediatric group and 405 in the adult group. There were 64 patients with IEM, 17 and 47 in the adult and pediatric groups, respectively. The incidence of IEM in CI was 6.24% (64/1025) overall: 4.19% (17/405) in the adult group and 7.6% (47/620) in the pediatric group. A chi-square test showed a significantly higher incidence of IEM in the pediatric group (Table 1). In the patients with IEM, 22 operations were performed on the left side and 42 on the right side. The mean age was 10.54 years (SD = 13.57). Gusher was detected in 25 of 64 (39%) ears. Despite having IEM, gusher was not observed in 39 ears, whereas it was seen in 5 of the remaining 961 patients without IEM. A comparison between the gusher without IEM (n = 5) and gusher with IEM (n = 25) groups showed that the presence of IEM was a statistically significant risk factor for gusher (P = .002) (Table 2). The incidence of gusher was 2.92% (30/1025) in patients with CI: 0.9% (4/405) in the adult group and 4.1% (26/620) in the pediatric group. A chi-square test showed a significant difference between the adult and pediatric groups in the incidence of gusher (Table 1). In the gusher group (n = 30), 7 surgeries were performed on the left side and 23 on the right side. The mean age was 6.07 years (SD = 6.79) in the gusher group, which included 4 adult and 26 pediatric patients. Inner-ear malformation was detected in 25 of 30 ears in patients with gusher. The probability of detecting IEM in patients with gusher was significantly higher than in patients without gusher (P < .001). In the postoperative period, the gusher persisted in 4 patients. The complaints of 2 patients resolved with conservative methods; the remaining 2 patients were re-operated to avoid gusher complications such as meningitis, and the gusher was controlled. Five patients had gusher without IEM; the radiological findings of these 5 patients were re-evaluated. Two of them had a defect at the internal acoustic canal (IAC) fundus, but there was no cochlear basal turn defect. Among the patients with IEM, 5 (7.8%) had incomplete partition type 1 (IP1), 28 (43.7%) incomplete partition type 2 (IP2), 9 (14%) incomplete partition type 3 (IP3), 3 (4.6%) common cavity (CC), 13 (20.3%) isolated enlarged vestibular aqueduct syndrome (EVA), 5 (7.8%) cochlear hypoplasia (CH), and 1 (1.5%) isolated semicircular canal aplasia. Among the 9 patients with IP3, 8 had gusher (P = .04), a rate of 88.8%. The analysis results and the data for all groups are presented in Table 3.
null
null
[ "Main Points", "Statistical Analysis" ]
[ "Gusher may be seen with normal radiological anatomy during cochlear implantation.\nInner-ear malformations increase the risk of gusher in the cochlear implantation.\nIncomplete partition type 3 patients have a higher risk of gusher compared to other inner-ear malformation subgroups.\nConservative methods and meticulous postoperative care are generally enough to stop the gusher.", "For statistical analysis, we used the Statistical Package for the Social Sciences 22 (IBM SPSS Corp.; Armonk, NY, USA). Chi-square and Fisher’s exact test were performed to evaluate any significant difference between the groups based on P value < .05. " ]
[ null, null ]
[ "Main Points", "Introduction", "Methods", "Statistical Analysis", "Results", "Discussion", "Conclusion" ]
[ "Gusher may be seen with normal radiological anatomy during cochlear implantation.\nInner-ear malformations increase the risk of gusher in the cochlear implantation.\nIncomplete partition type 3 patients have a higher risk of gusher compared to other inner-ear malformation subgroups.\nConservative methods and meticulous postoperative care are generally enough to stop the gusher.", "Profuse clear fluid leak after opening the inner ear during surgery is called “gusher” in the literature. Gushing of cerebrospinal fluid (CSF) during cochlear implantation (CI) can lead to an increased risk of meningitis, erroneous electrode insertion, poor speech understanding, and word acquisition.1,2 The incidence of CSF gusher in CI has been reported to be between 1% and 5%.3 It is known that the incidence can reach up to 40% in the presence of inner-ear malformation (IEM).4 Management of gusher is important for the safety of the patients and surgery. Both conservative and surgical treatments are used in the presence of a gusher. Although it can be managed by minor surgical intervention, it may be a life-threatening complication. Thus, the risk and management of gusher in CI should be considered.\nIn this study, we aimed to investigate the incidence of CSF gusher in CI with or without IEM and the association between CSF gusher and IEM in adult and pediatric patients. ", "After ethical committee approval (May 8, 2019/03), a retrospective chart review was performed on 1025 CI procedures, including children and adults, at the Otorhinolaryngology Clinic of the University of Health Sciences, İzmir Bozyaka Education and Research Hospital between 2009 and 2019. The study included only patients with profuse CSF gusher and/or IEM, during primary surgery. Cases with oozing, pulsatile perilymph, or mild leaks were excluded. Patients with CI were divided into 2 groups according to age: pediatric (≤17 years) and adult (≥18 years) patients. Cochlear and vestibular malformations were classified according to the Sennaroglu classification.5 Then, the patients were subdivided into groups according to the presence/absence of gusher and IEM. Patients with IEM were further divided into subgroups according to the type of anomalies. The demographic characteristics of the patients, surgical results, and management techniques for CSF gusher were evaluated. \nComputed tomography (CT) and magnetic resonance imaging (MRI) examinations of the patients were re-evaluated using advanced techniques. All MRI examinations were performed on a 1.5T MRI system (Philips Achieva, Philips Healthcare, Best, the Netherlands). Inner-ear structures were examined with a 3-dimensional balanced gradient echo-weighted sequence. Temporal bone CT imaging was performed using a 64-channel multi-detector scanner (Aquilion; Toshiba Medical Systems, Tokyo, Japan). Temporal bones were visualized in 3 basic planes (axial, sagittal, and coronal), and isometric reconstruction at each 0.5, 1, or 2 mm.\nStatistical Analysis For statistical analysis, we used the Statistical Package for the Social Sciences 22 (IBM SPSS Corp.; Armonk, NY, USA). Chi-square and Fisher’s exact test were performed to evaluate any significant difference between the groups based on P value < .05. \nFor statistical analysis, we used the Statistical Package for the Social Sciences 22 (IBM SPSS Corp.; Armonk, NY, USA). Chi-square and Fisher’s exact test were performed to evaluate any significant difference between the groups based on P value < .05. 
", "For statistical analysis, we used the Statistical Package for the Social Sciences 22 (IBM SPSS Corp.; Armonk, NY, USA). Chi-square and Fisher’s exact test were performed to evaluate any significant difference between the groups based on P value < .05. ", "A total of 1025 CIs were performed at our clinic between 2009 and 2019. Of these operations, 620 were in the pediatric group and 405 in the adult group. There were 64 patients with IEM, 17 and 47 in the adult and pediatric groups, respectively. The incidence of IEM in CI was 6.24% (64/1025), 4.19% (17/405), in the adult group and 7.6% (47/620) in the pediatric group. A chi-square test showed that there was a significant difference in the pediatric group in the incidence of IEM (Table 1). In the patients with IEM, 22 operations were performed on the left side and 42 operations on the right side. The mean age was 10.54 years (SD = 13.57). Gusher was detected in 25 of 64 (39%) ears. Despite having IEM, the gusher was not observed in 39 ears whereas it was seen in 5 of the remaining 961 patients without IEM. A comparison between the gusher without IEM (n = 5) and gusher with IEM groups (n = 25) showed that the presence of IEM was a statistically significant high risk for the gusher (P = .002) (Table 2).\nThe incidence of gusher was 2.92% (30/1025) in patients with CI, and it was 0.9% (4/405) and 4.1% (26/620) in the adult and pediatric groups, respectively. A chi-square test showed that there was a significant difference between the adult and pediatric groups in the incidence of gusher (Table 1). In the gusher group (n = 30), 7 surgeries were performed on the left side and 23 operations on the right side. The mean age was 6.07 (SD = 6.79) in the gusher group, including 4 patients in the adult group and 26 patients in the pediatric group. Inner-ear malformation was detected in 25 of 30 ears in patients with gusher. The probability of detecting IEM in patients with gusher was significantly higher than in patients without gusher (P < .001). \nIn the postoperative period, the gusher persisted in 4 patients. The complaints of 2 patients were resolved with conservative methods. The remaining two patients were re-operated to avoid gusher complications such as meningitis, and the gusher was controlled.\nFive patients had gusher, but IEM was not present. Radiological findings of 5 patients without IEM were re-evaluated. Two of these 5 patients had a defect at internal acoustic canal (IAC) fundus. However, there was no cochlear basal turn defect. \nIn patients with IEM, 5 (7.8%) patients had incomplete partition type 1 (IP1), 28 (43.7%) incomplete partition type 2 (IP2), 9 (14%) incomplete partition type 3 (IP3), 3 (4.6%) common cavity (CC), 13 (20.3%) isolated enlarged vestibular aqueduct syndrome (EVA), 5 (7.8%) cochlear hypoplasia (CH), and 1 (1.5%) isolated semicircular canal aplasia. The findings of our study showed that among 9 patients with IP3, 8 had gusher (P = .04). The rate of gusher in patients with IP3 was 88.8%. The analysis results and the data for all groups are presented in Table 3. ", "Gusher is a problem that may occur during CI and/or postoperative period. The gentle flow of clear fluid that develops during cochleostomy is called “oozing,” while profuse and rapid flow is called “gusher.”6 The risk of postoperative meningitis is much higher in the presence of a gusher. Also, it may lead to difficulties during electrode insertion causing misplacement, poor speech understanding, and word acquisition. 
Excessive CSF can usually access the cochlea through patent developmental pathways of the otic capsule. It has been shown that the risk of gusher may increase in IEM.4 Preoperative radiology should be carefully examined, and postoperative follow-up should be done closely to prevent any potential complications. According to Sennaroglu et al,5 IEM can be classified as labyrinthine aplasia, cochlear aplasia, cochlear hypoplasia, CC deformity, and incomplete partition, which itself has 3 subgroups. Inner-ear malformation represents approximately 20-30% of congenital hearing loss cases based on radiology results.7 Ding et al3 mentioned in a review that IEM, including cochlear ossification and round window dysplasia, had an incidence of 22.4% in CI. In our study, the incidence of IEM was 6.24% in 1025 ears with CI, which is consistent with the literature.8,9 When patients with IEM were examined separately in the adult and pediatric groups, we found that the incidence of IEM was 4.19% and 7.6% in the adult and pediatric groups, respectively. In the literature, the gusher rate in CI has been reported to be between 5% and 6.7%.3,10-12 In our study, the CSF gusher rate in CI was 2.92%, which is also consistent with the literature. The incidence of gusher was 0.9% and 4.1% in the adult and pediatric groups, respectively. We believe that this difference is due to the higher rate of IEM in pediatric patients. 
Inner-ear malformation is considered a predisposing factor for CSF gusher.13 Merhy et al14 have previously shown that the incidence of CSF gusher increases in the presence of IEM. Hashemi et al15 statistically demonstrated a correlation between IEM and CSF gusher. In our study, the gusher rate in patients with IEM was 39.1% (25/64), in parallel with the literature.13 It can be said that IEMs increase the risk of gusher in CI. 
There were 5 (0.5%) ears with gusher among the 961 patients without IEM. It has previously been reported that gusher was found in patients with bone defects at the fundus of the IAC or at the cochlear basal turn.16-18 We re-evaluated the radiologic examinations of these 5 patients. They had no cochlear basal turn or IAC defect; in 2 of the 5, there was an isolated cochlear septal defect only. This factor might have played a role in the mechanism of gusher.17
Although on preoperative radiological examination some patients had defects in the inner ear that could lead to CSF leak, gusher did not occur during surgery in 3 patients with IP1, 20 patients with IP2, 1 patient with IP3, 4 patients with CH, 8 patients with EVA, or any of the ears with CC. Sennaroglu et al,19 referring to this dilemma, suggested that fibrotic tissue not detectable on radiological examinations may prevent gusher during surgery.
There are limited data on the incidence of gusher in IEM subtypes in the literature. Gusher rates of 39-45.9% in IP1, 8-15% in IP2, 100% in IP3, 27% in CH, 23-27% in CC, and 0-11% in EVA groups have been reported,13,20-24 but no statistical analyses were performed between the groups. Sennaroglu et al19 reported in their review that gusher rates were 100% in IP3, 39% in IP1, and 15% in IP2, with no gusher in the CC and EVA groups. In our study, there was a CSF gusher in 8 of 9 patients (88%) with IP3 malformation, followed by 40% in the IP1, 38.4% in the EVA, 28.57% in the IP2, and 20% in the CH groups. The IP3 group differed significantly from the other IEM groups. 
In a meta-analysis, Farhood et al20 reported that the gusher rate was 45.9% in the IP1 group. Despite the small sample size in our study, the gusher rate was 40% in the IP1 group. The gusher risk is thought to be high in individuals with CC malformation because of the frequent lateral wall anomalies of the IAC; however, in our study, no patient with gusher was found in the CC malformation subgroup. Similar to our findings, the review by Sennaroglu et al19 and the meta-analysis by Shi et al21 reported that the incidence of gusher was low in patients with CC malformation. Bajin et al22 reported that the gusher rate was 8% in the IP2 group, whereas in our study it was 28% (8/28). Bajin et al emphasized the relationship between modiolar base defect grade and risk of gusher; the higher rate of gusher in our IP2 group may be explained by the presence of high-grade modiolar base defects. However, the IP2 group was not evaluated in this respect in our study. Further studies are needed to demonstrate the relationship between the level of modiolar base defect and the risk of gusher.
Our study covers a period of 10 years, and all surgeons adopted the same surgical philosophy in our center. During surgery, if gusher occurs, waiting for about 10 minutes is generally enough to reduce the outflow speed of CSF. When it slows down, surgeons should perform a large cochleostomy, as Graham et al23 suggested. This makes electrode insertion easier, and with the large cochleostomy, application of a piece of muscle and fascia for a watertight seal is also easier. Additionally, we use Tisseel Kit fibrin sealant (Baxter, Deerfield, Ill, USA) to strengthen the seal, but not to stop the gusher outflow. Following this, the eustachian tube should be blocked temporarily with oxidized cellulose to prevent CSF from leaking to the nasopharynx. Also, especially in the IP groups, we use a cork-type stopper cochlear implant electrode or prepare a perichondrium stopper that covers the electrode shaft circumferentially, as suggested by Sennaroglu et al.24 A 3-layer wound closure is important to achieve a watertight closure. A compression head dressing is applied, and head elevation during bed rest for up to 3 days is recommended for all patients. Meticulous packing of the cochleostomy with these conservative precautions was enough to stop the gusher in our patients; there was no need for further interventions such as petrosectomy or lumbar CSF drainage. In all patients, complete insertion of all active electrodes was accomplished, and subsequent functioning of the implants was satisfactory. ", "The incidence of gusher is higher in the pediatric group than in adults, owing to a higher rate of IEM. Inner-ear malformation is a risk factor for gusher, and IP3 patients had a statistically significantly higher risk of gusher than the other IEM subgroups. Gusher may be seen with normal radiological anatomy during CI. Therefore, CI surgeons should always be prepared for the risk of gusher and know how to manage it." ]
[ null, "intro", "methods", null, "results", "discussion", "H1" ]
[ "Cochlear implantation", "ear", "inner", "radiology", "cerebrospinal fluid otorrhea" ]
Main Points: Gusher may be seen with normal radiological anatomy during cochlear implantation. Inner-ear malformations increase the risk of gusher in the cochlear implantation. Incomplete partition type 3 patients have a higher risk of gusher compared to other inner-ear malformation subgroups. Conservative methods and meticulous postoperative care are generally enough to stop the gusher. Introduction: Profuse clear fluid leak after opening the inner ear during surgery is called “gusher” in the literature. Gushing of cerebrospinal fluid (CSF) during cochlear implantation (CI) can lead to an increased risk of meningitis, erroneous electrode insertion, poor speech understanding, and word acquisition.1,2 The incidence of CSF gusher in CI has been reported to be between 1% and 5%.3 It is known that the incidence can reach up to 40% in the presence of inner-ear malformation (IEM).4 Management of gusher is important for the safety of the patients and surgery. Both conservative and surgical treatments are used in the presence of a gusher. Although it can be managed by minor surgical intervention, it may be a life-threatening complication. Thus, the risk and management of gusher in CI should be considered. In this study, we aimed to investigate the incidence of CSF gusher in CI with or without IEM and the association between CSF gusher and IEM in adult and pediatric patients. Methods: After ethical committee approval (May 8, 2019/03), a retrospective chart review was performed on 1025 CI procedures, including children and adults, at the Otorhinolaryngology Clinic of the University of Health Sciences, İzmir Bozyaka Education and Research Hospital between 2009 and 2019. The study included only patients with profuse CSF gusher and/or IEM, during primary surgery. Cases with oozing, pulsatile perilymph, or mild leaks were excluded. Patients with CI were divided into 2 groups according to age: pediatric (≤17 years) and adult (≥18 years) patients. Cochlear and vestibular malformations were classified according to the Sennaroglu classification.5 Then, the patients were subdivided into groups according to the presence/absence of gusher and IEM. Patients with IEM were further divided into subgroups according to the type of anomalies. The demographic characteristics of the patients, surgical results, and management techniques for CSF gusher were evaluated. Computed tomography (CT) and magnetic resonance imaging (MRI) examinations of the patients were re-evaluated using advanced techniques. All MRI examinations were performed on a 1.5T MRI system (Philips Achieva, Philips Healthcare, Best, the Netherlands). Inner-ear structures were examined with a 3-dimensional balanced gradient echo-weighted sequence. Temporal bone CT imaging was performed using a 64-channel multi-detector scanner (Aquilion; Toshiba Medical Systems, Tokyo, Japan). Temporal bones were visualized in 3 basic planes (axial, sagittal, and coronal), and isometric reconstruction at each 0.5, 1, or 2 mm. Statistical Analysis For statistical analysis, we used the Statistical Package for the Social Sciences 22 (IBM SPSS Corp.; Armonk, NY, USA). Chi-square and Fisher’s exact test were performed to evaluate any significant difference between the groups based on P value < .05. For statistical analysis, we used the Statistical Package for the Social Sciences 22 (IBM SPSS Corp.; Armonk, NY, USA). Chi-square and Fisher’s exact test were performed to evaluate any significant difference between the groups based on P value < .05. 
Statistical Analysis: For statistical analysis, we used the Statistical Package for the Social Sciences 22 (IBM SPSS Corp.; Armonk, NY, USA). Chi-square and Fisher’s exact test were performed to evaluate any significant difference between the groups based on P value < .05. Results: A total of 1025 CIs were performed at our clinic between 2009 and 2019. Of these operations, 620 were in the pediatric group and 405 in the adult group. There were 64 patients with IEM, 17 and 47 in the adult and pediatric groups, respectively. The incidence of IEM in CI was 6.24% (64/1025), 4.19% (17/405), in the adult group and 7.6% (47/620) in the pediatric group. A chi-square test showed that there was a significant difference in the pediatric group in the incidence of IEM (Table 1). In the patients with IEM, 22 operations were performed on the left side and 42 operations on the right side. The mean age was 10.54 years (SD = 13.57). Gusher was detected in 25 of 64 (39%) ears. Despite having IEM, the gusher was not observed in 39 ears whereas it was seen in 5 of the remaining 961 patients without IEM. A comparison between the gusher without IEM (n = 5) and gusher with IEM groups (n = 25) showed that the presence of IEM was a statistically significant high risk for the gusher (P = .002) (Table 2). The incidence of gusher was 2.92% (30/1025) in patients with CI, and it was 0.9% (4/405) and 4.1% (26/620) in the adult and pediatric groups, respectively. A chi-square test showed that there was a significant difference between the adult and pediatric groups in the incidence of gusher (Table 1). In the gusher group (n = 30), 7 surgeries were performed on the left side and 23 operations on the right side. The mean age was 6.07 (SD = 6.79) in the gusher group, including 4 patients in the adult group and 26 patients in the pediatric group. Inner-ear malformation was detected in 25 of 30 ears in patients with gusher. The probability of detecting IEM in patients with gusher was significantly higher than in patients without gusher (P < .001). In the postoperative period, the gusher persisted in 4 patients. The complaints of 2 patients were resolved with conservative methods. The remaining two patients were re-operated to avoid gusher complications such as meningitis, and the gusher was controlled. Five patients had gusher, but IEM was not present. Radiological findings of 5 patients without IEM were re-evaluated. Two of these 5 patients had a defect at internal acoustic canal (IAC) fundus. However, there was no cochlear basal turn defect. In patients with IEM, 5 (7.8%) patients had incomplete partition type 1 (IP1), 28 (43.7%) incomplete partition type 2 (IP2), 9 (14%) incomplete partition type 3 (IP3), 3 (4.6%) common cavity (CC), 13 (20.3%) isolated enlarged vestibular aqueduct syndrome (EVA), 5 (7.8%) cochlear hypoplasia (CH), and 1 (1.5%) isolated semicircular canal aplasia. The findings of our study showed that among 9 patients with IP3, 8 had gusher (P = .04). The rate of gusher in patients with IP3 was 88.8%. The analysis results and the data for all groups are presented in Table 3. Discussion: Gusher is a problem that may occur during CI and/or postoperative period. The gentle flow of clear fluid that develops during cochleostomy is called “oozing,” while profuse and rapid flow is called “gusher.”6 The risk of postoperative meningitis is much higher in the presence of a gusher. Also, it may lead to difficulties during electrode insertion causing misplacement, poor speech understanding, and word acquisition. 
Excessive CSF can usually access the cochlea through patent developmental pathways of the otic capsule. It has been shown that the risk of gusher may increase in IEM.4 Preoperative radiology should be carefully examined, and postoperative follow-up should be done closely to prevent any potential complications. According to Sennaroglu et al.5 IEM can be classified as labyrinthine aplasia, cochlear aplasia, cochlear hypoplasia, CC deformity, and incomplete partition, which, in itself, has 3 subgroups. Inner-ear malformation represents approximately 20-30% of the congenital hearing loss cases based on radiology results.7 Ding et al3 mentioned in a review that IEM, including cochlear ossification and round window dysplasia, had an incidence of 22.4% in CI. In our study, the incidence of IEM was 6.24% in 1025 ears with CI, which is consistent with the literature.8,9 When patients with IEM were examined separately in the adult and pediatric groups, we found that the incidence of IEM was 4.19% and 8.38% in the adult and pediatric groups, respectively. In the literature, the gusher rate was reported between 5% and 6.7 % in CI.3,10-12 In our study, CSF gusher rate in CI was 2.92 %, which was also consistent with the literature. The incidence of gusher was 0.9% and 4.1% in the the adult and pediatric groups, respectively. We believe that this difference is due to the higher rate of IEM in pediatric patients. Inner-ear malformation is considered a predisposing factor for CSF gusher.13 Merhy et al14 previously have shown that the incidence of CSF gusher increases in the presence of IEM. Hashemi et al15 statistically proved the presence of a correlation of IEM with the CSF gusher. In our study, gusher rate of patients who had IEM was 39.1% (25/64) in parallel with the literature.13 It can be said IEMs increase the risk of gusher in the CI. There were 5 (0.5%) ears with gusher in 956 patients without IEM. It has been previously reported that gusher was found in patients with bone defects at the fundus of the IAC or at cochlear basal turn.16-18 We evaluated the radiologic examinations of our 5 patients again. They had no cochlear basal turn or IAC defect. In 2 of these 5 patients, there was isolated cochlear septal defect only. This factor might have played a role in the mechanism of gusher.17 Although during the preoperative radiological examinations, some patients had defects in the inner ear, which may lead to CSF leak, gusher did not occur in 3 patients with IP1, 20 patients with IP2, 1 patient with IP3, 4 patients with CH, 8 patients with EVA, and none of the ears with CC had gusher during surgery. Sennaroglu et al referring to this dilemma suggested that a fibrotic tissue, not detectable on radiological examinations, may prevent gusher during surgery.19 There is limited data on the incidence of gusher in IEM subtypes in the literature. It has been reported that the gusher rate was 39-45.9% in IP1, 8-15% in IP2, 100% in IP3, 27% in CH, 23-27% in CC, and 0-11% in EVA groups.13,20-24 But no statistical analyses were performed between the groups. Sennaroglu et al19 reported in their review that gusher rates were 100% in IP3, 39% in IP1, 15% in IP2 and there was not any gusher in CC and EVA groups. In our study, there was a CSF gusher in 8 of 9 patients (88%) who had IP3 malformation, followed by 40% in IP1, 38.4% in EVA, 28.57% in IP2, and 20% in CH groups. There was a statistically significant difference in IP3 group among other IEM groups. 
In a meta-analysis, Farhood et al20 reported that the gusher rate was 45.9% in IP1 group. Despite a small sample size in our study, the gusher rate was 40% in IP1 group. It is thought that the gusher risk may be high in individuals with CC malformation, due to the frequent lateral wall anomalies in IAC. However, in our study, no patient with gusher was found in the CC malformation subgroup. Similar to our findings, in the review of Sennaroglu et al19 and in the meta-analysis of Shi et al.21 it was reported that the incidence of gusher was low in patients with CC malformation. Bajin et al22 reported that the gusher rate was 8% in IP2 group. In our study, the gusher rate was 28% (8/28) in IP2 group. Bajin et al emphasized the relationship between modiolar base defect grade and risk of gusher. In our study, the reason for the higher rate of gusher in IP2 group can be explained by the presence of high-grade modiolar base defect. However, the IP2 group was not evaluated in this respect in our study. Further studies are needed to demonstrate the relationship between the level of modiolar base defect and the risk of gusher. Our study covers a period of 10 years and all surgeons adopted the same surgical philosophy in our center. During surgery, if gusher occurs, waiting for about 10 minutes is generally enough to reduce the outflow speed of CSF. When it slows down, surgeons should perform a large cochleostomy as Graham et al23 suggested. It makes electrode insertion easier and with the large cochleostomy, application of a piece of muscle and fascia is easier for achieving a watertight sealing. Additionally, we use Tisseel Kit fibrin sealant (Baxter, Deerfield, Ill, USA) to strengthen the seal, but not to stop the gusher outflow. Following this, the eustachian tube should be blocked temporarily with oxidized cellulose to prevent CSF from leaking to the nasopharynx. Also especially in IP groups, we use a cork-type stopper cochlear implant electrode or prepare a perichondrium stopper that covers the electrode shaft circumferentially, which was suggested by Sennaroglu et al.24 A 3-layer wound closure is important to achieve a watertight closure. Compression head dressing is applied and head elevation during bed rest for up to 3 days is recommended for all patients. Meticulous packing of the cochleostomy with these conservative precautions was enough to stop the gusher in our patients. There was no need for further interventions like petrosectomy or lumbar CSF drainage. In all patients, complete insertion of all active electrodes was accomplished and subsequent functioning of the implants was satisfactory. Conclusion: The incidence of gusher is higher in the pediatric group, compared to adults due to a higher rate of IEM. Inner-ear malfunction is a risk factor for gusher, and in IP3 patients, there was a statistically significant higher risk of gusher compared to other IEM subgroups. Gusher may be seen with normal radiological anatomy during CI. Therefore, CI surgeons should always be prepared for the risk of gusher and know what to do for management of gusher.
Background: This study aimed to investigate the incidence of cerebrospinal fluid gusher in cochlear implantation and the association between cerebrospinal fluid gusher and inner-ear malformations in adult and pediatric patients. Methods: A retrospective case review of 1025 primary cochlear implantation procedures was performed. Patients with inner-ear malformation or cerebrospinal fluid gusher during primary cochlear implantation were included and divided into 2 groups according to age: pediatric and adult groups. Results: The incidence of inner-ear malformation was 4.19% (17/405) in the adult group and 7.6% (47/620) in the pediatric group, a significantly higher incidence in the pediatric group. The incidence of cerebrospinal fluid gusher was 0.9% (4/405) and 4.1% (26/620) in the adult and pediatric groups, respectively; this difference between the adult and pediatric groups was significant. Conclusions: The incidence of a cerebrospinal fluid gusher is higher in the pediatric group than in adults, owing to a higher rate of inner-ear malformation. Inner-ear malformation is a risk factor for cerebrospinal fluid gusher.
null
null
2,749
221
[ 63, 51 ]
7
[ "gusher", "patients", "iem", "groups", "group", "csf", "ci", "incidence", "risk", "pediatric" ]
[ "surgery gusher occurs", "ears patients gusher", "cochlear implantation inner", "csf cochlear implantation", "risk gusher cochlear" ]
null
null
[CONTENT] Cochlear implantation | ear | inner | radiology | cerebrospinal fluid otorrhea [SUMMARY]
[CONTENT] Cochlear implantation | ear | inner | radiology | cerebrospinal fluid otorrhea [SUMMARY]
[CONTENT] Cochlear implantation | ear | inner | radiology | cerebrospinal fluid otorrhea [SUMMARY]
null
[CONTENT] Cochlear implantation | ear | inner | radiology | cerebrospinal fluid otorrhea [SUMMARY]
null
[CONTENT] Adult | Humans | Child | Cochlear Implantation | Retrospective Studies | Ear, Inner | Cochlear Implants | Cerebrospinal Fluid Otorrhea [SUMMARY]
[CONTENT] Adult | Humans | Child | Cochlear Implantation | Retrospective Studies | Ear, Inner | Cochlear Implants | Cerebrospinal Fluid Otorrhea [SUMMARY]
[CONTENT] Adult | Humans | Child | Cochlear Implantation | Retrospective Studies | Ear, Inner | Cochlear Implants | Cerebrospinal Fluid Otorrhea [SUMMARY]
null
[CONTENT] Adult | Humans | Child | Cochlear Implantation | Retrospective Studies | Ear, Inner | Cochlear Implants | Cerebrospinal Fluid Otorrhea [SUMMARY]
null
[CONTENT] surgery gusher occurs | ears patients gusher | cochlear implantation inner | csf cochlear implantation | risk gusher cochlear [SUMMARY]
[CONTENT] surgery gusher occurs | ears patients gusher | cochlear implantation inner | csf cochlear implantation | risk gusher cochlear [SUMMARY]
[CONTENT] surgery gusher occurs | ears patients gusher | cochlear implantation inner | csf cochlear implantation | risk gusher cochlear [SUMMARY]
null
[CONTENT] surgery gusher occurs | ears patients gusher | cochlear implantation inner | csf cochlear implantation | risk gusher cochlear [SUMMARY]
null
[CONTENT] gusher | patients | iem | groups | group | csf | ci | incidence | risk | pediatric [SUMMARY]
[CONTENT] gusher | patients | iem | groups | group | csf | ci | incidence | risk | pediatric [SUMMARY]
[CONTENT] gusher | patients | iem | groups | group | csf | ci | incidence | risk | pediatric [SUMMARY]
null
[CONTENT] gusher | patients | iem | groups | group | csf | ci | incidence | risk | pediatric [SUMMARY]
null
[CONTENT] gusher | csf | gusher ci | ci | csf gusher | incidence csf gusher ci | csf gusher ci | incidence | incidence csf gusher | management gusher [SUMMARY]
[CONTENT] statistical | patients | according | performed | mri | statistical analysis statistical | analysis statistical | sciences | statistical analysis | groups [SUMMARY]
[CONTENT] patients | gusher | iem | group | showed | table | patients gusher | operations | pediatric | adult [SUMMARY]
null
[CONTENT] gusher | patients | iem | risk | groups | ci | csf | group | statistical | incidence [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] 1025 ||| 2 [SUMMARY]
[CONTENT] 4.19% | 7.6% | 47/620 ||| ||| 0.9% | 4/405 | 4.1% | 26/620 ||| [SUMMARY]
null
[CONTENT] ||| 1025 ||| 2 ||| 4.19% | 7.6% | 47/620 ||| ||| 0.9% | 4/405 | 4.1% | 26/620 ||| ||| ||| Inner [SUMMARY]
null
Glucose-6-phosphate dehydrogenase mutations in malaria endemic area of Thailand by multiplexed high-resolution melting curve analysis.
33879156
Glucose-6-phosphate dehydrogenase (G6PD) deficiency, the most common enzymopathy in humans, is prevalent in tropical and subtropical areas where malaria is endemic. Anti-malarial drugs, such as primaquine and tafenoquine, can cause haemolysis in G6PD-deficient individuals. Hence, G6PD testing is recommended before radical treatment against vivax malaria. Phenotypic assays have been widely used for screening G6PD deficiency, but in heterozygous females, the random lyonization causes difficulty in interpreting the results. Over 200 G6PD variants have been identified, which form genotypes associated with differences in the degree of G6PD deficiency and vulnerability to haemolysis. This study aimed to assess the frequency of G6PD mutations using a newly developed molecular genotyping test.
BACKGROUND
A multiplexed high-resolution melting (HRM) assay was developed to detect eight G6PD mutations, in which four mutations can be tested simultaneously. Validation of the method was performed using 70 G6PD-deficient samples. The test was then applied to screen 725 blood samples from people living along the Thai-Myanmar border. The enzyme activity of these samples was also determined using water-soluble tetrazolium salts (WST-8) assay. Then, the correlation between genotype and enzyme activity was analysed.
METHODS
The sensitivity of the multiplexed HRM assay for detecting G6PD mutations was 100 % [95 % confidence interval (CI): 94.87-100 %] with specificity of 100 % (95 % CI: 87.66-100 %). The overall prevalence of G6PD deficiency in the studied population as revealed by phenotypic WST-8 assay was 20.55 % (149/725). In contrast, by the multiplexed HRM assay, 27.17 % (197/725) of subjects were shown to have G6PD mutations. The mutations detected in this study included four single variants, G6PD Mahidol (187/197), G6PD Canton (4/197), G6PD Viangchan (3/197) and G6PD Chinese-5 (1/197), and two double mutations, G6PD Mahidol + Canton (1/197) and G6PD Chinese-4 + Viangchan (1/197). A broad range of G6PD enzyme activities were observed in individuals carrying G6PD Mahidol, especially in females.
RESULTS
The multiplexed HRM-based assay is sensitive and reliable for detecting G6PD mutations. This genotyping assay can facilitate the detection of heterozygotes, which could be useful as a supplementary approach for high-throughput screening of G6PD deficiency in malaria endemic areas before the administration of primaquine and tafenoquine.
CONCLUSIONS
[ "Female", "Genotyping Techniques", "Glucosephosphate Dehydrogenase Deficiency", "Humans", "Malaria, Vivax", "Male", "Thailand" ]
8056697
Background
Glucose-6-phosphate dehydrogenase (G6PD) deficiency is an inherited genetic defect and the most common enzymopathy, affecting approximately 500 million people worldwide, with more than 200 variants identified [1]. G6PD deficiency is prevalent in tropical and subtropical areas where malaria is endemic, including Africa and Southeast Asia [2]. Evidence has suggested that G6PD deficiency confers protection against malaria infection [3–5]. However, this is still controversial because several studies have yielded contradictory results, with some claiming that the protective effects of G6PD deficiency are observed in male hemizygotes only, in female heterozygotes only, or in both [6–9]. The major clinical concern associated with G6PD deficiency is haemolysis upon exposure to oxidant drugs, including anti-malarials such as the 8-aminoquinolines (primaquine and tafenoquine) [10–13]. Primaquine and tafenoquine are the only medications capable of killing Plasmodium vivax and Plasmodium ovale at the dormant liver stage (hypnozoite). The World Health Organization (WHO) recommends that G6PD activity be measured before radical treatment of malaria [14]. G6PD deficiency can be diagnosed by either phenotypic or genotypic assay. Phenotypic tests are based on the assessment of G6PD activity, measuring the production of reduced nicotinamide adenine dinucleotide phosphate (NADPH), which can be done quantitatively. The standard quantitative method is spectrophotometry, in which NADPH production is monitored at 340 nm [15]. This method is accurate and reliable, but it is laborious and time-consuming and requires complicated sample preparation and technical skills; as such, it is not commonly used for field-based screening. A colorimetric G6PD assay, based on water-soluble tetrazolium salts (WST-8), was developed as an alternative to the gold standard of spectrophotometry [16]. In this approach, no sample preparation is required, and whole blood or dried blood spots can be used to test G6PD activity [17]. The WST-8-based assay can be used quantitatively or, by the naked eye, qualitatively, offering the possibility of mass screening for G6PD deficiency in the context of malaria elimination using primaquine and tafenoquine [16, 18]. Although WST-8 is not the standard method for measuring G6PD activity, its sensitivity for detecting NAD(P)H was found to be five-fold greater than that of the spectrophotometric assay. Moreover, results obtained by measuring dehydrogenase activities in biological samples using the WST-8 assay paralleled those of the standard method [19]. For G6PD testing, WST-8 was applied, in 96-well format, to the screening of G6PD deficiency in different populations [20–22]. The sensitivity and specificity of WST-8 for detecting G6PD activity < 30 % were 55 % and 98 %, respectively, compared with the spectrophotometric method [20]. In addition, sensitivity of 72 % and specificity of 98 % were reported for WST-8 in comparison with a standard quantitative G6PD assay (R&D Diagnostics) [21]. This suggests that WST-8 could be a key tool for G6PD testing, but it requires further development before deployment in the field. G6PD diagnostic tests are currently available, including qualitative tests such as the fluorescent spot test (FST) and the CareStart™ G6PD rapid diagnostic test, as well as quantitative point-of-care tests such as the CareStart™ G6PD biosensor and the STANDARD™ G6PD test. 
Unfortunately, these tests are not widely used for G6PD testing because they are expensive and can be difficult to interpret [23–25]. Qualitative tests are reliable for identifying G6PD deficiency in hemizygous males and homozygous females, but are unable to identify heterozygous females [26–28]. This is because, in heterozygous females, a wide range of G6PD activities is observed as a result of random X-chromosome inactivation, or lyonization [29]. To date, over 200 G6PD variants have been identified, which form genotypes associated with differences in the degree of deficiency and vulnerability to haemolysis [30]. Moreover, G6PD activities vary among G6PD-deficient individuals carrying the same genotype [31, 32]. G6PD genotyping can be performed using restriction fragment length polymorphism [33, 34], the amplification refractory mutation system [35, 36], gold nanoparticle-based assays [37], high resolution melting (HRM) curve analysis [38, 39] and DNA sequencing [40, 41]. Additionally, multiplex genotyping systems are currently available. The DiaPlexC™ G6PD Genotyping Kit (Asian type) can detect eight mutations, namely, G6PD Vanua Lava, G6PD Mahidol, G6PD Mediterranean, G6PD Coimbra, G6PD Viangchan, G6PD Union, G6PD Canton, and G6PD Kaiping, and thus offers high-throughput screening of G6PD mutations by one-step PCR [42]. However, after PCR amplification, an additional gel electrophoresis step is required to check the size of the amplified PCR products, which is impractical for large population screening. The HRM assay is a powerful and reliable tool that has been widely used in the detection of gene mutations [43–45]. Previously, HRM assays were applied to detect G6PD mutations in different population groups [38, 46–48]. However, these earlier HRM assays could detect only one or two mutations at a time. Although a multiplexed system to detect six mutations in four reactions was later described, the assay system and interpretation of results were complex [49]. The prevalence of G6PD deficiency in Thailand ranges between 5 and 18 %, depending on the geographical area [50–54]. More than 20 G6PD variants have been identified in the country, among which the most common is G6PD Viangchan, followed by G6PD Mahidol, G6PD Canton, G6PD Union, G6PD Kaiping, G6PD Gaohe, G6PD Chinese-4, G6PD Chinese-5, G6PD Valladolid, G6PD Coimbra and G6PD Aures. Along the Thai–Myanmar border, a malaria endemic area, a prevalence of G6PD deficiency of 9–18 % has been reported in males [26]. Moreover, a G6PD deficiency rate of 7.4 % was reported from the screening of 1,340 newborns [27]. G6PD Mahidol was shown to be the most common variant in this population, accounting for 88 % of all variants, followed by G6PD Chinese-4, G6PD Viangchan, and G6PD Mediterranean. Generally, to avoid the risk of haemolysis during malaria treatment, G6PD testing is recommended before the administration of primaquine and tafenoquine. The aim of this study was to develop a molecular diagnostic test providing an accurate, reliable and high-throughput platform for detecting G6PD mutations, which can be used as a supplement to the screening of G6PD deficiency, especially in heterozygous females. To validate the method, 70 G6PD-deficient and 28 non-deficient samples were tested, and the results were compared with those obtained by direct DNA sequencing. The potential utility of the developed HRM test for the detection of G6PD variants in a study area in Thailand was then examined. The correlation between genotype and the enzyme activity phenotype (as determined using WST-8) was also analysed.
null
null
Results
Development and validation of 4-plex HRM assay
A multiplexed HRM assay was developed to detect, in two reactions, eight G6PD variants that are common in Thailand (Fig. 1). By using a specific primer pair for each mutation, reaction 1 simultaneously detects four mutations [G6PD Gaohe (A95G), G6PD Mahidol (G487A), G6PD Viangchan (G871A), and G6PD Canton (G1376T)]. Reaction 2 concurrently detects another four mutations [G6PD Chinese-4 (G392T), G6PD Chinese-5 (C1024T), G6PD Union (C1360T), and G6PD Kaiping (G1388A)]. The assay is based on a single fluorescent dye, EvaGreen, without the need for a quenching probe. The primers were designed to detect the mutations by generating PCR products with distinctive melting temperatures (Tm), whereas no amplification occurs in wild-type (WT) samples; a peak at the corresponding Tm therefore reveals the genotype of each sample. The gDNA of known G6PD mutations was used as positive controls. Overall, 70 G6PD-deficient samples and 28 non-deficient samples were used to evaluate the performance of the developed 4-plex HRM assay, with direct DNA sequencing as the reference test (Table 3).

Table 3. G6PD mutations of 70 deficient samples detected by 4-plex HRM and direct DNA sequencing
Mutation | HRM assay | DNA sequencing
Gaohe (A95G) | 4/70 | 4/70
Chinese-4 (G392T) | 3/70 | 3/70
Mahidol (G487A) | 5/70 | 5/70
Viangchan (G871A) | 28/70 | 28/70
Chinese-5 (C1024T) | 1/70 | 1/70
Canton (G1376T) | 14/70 | 14/70
Kaiping (G1388A) | 13/70 | 13/70
Mahidol + Canton (G487A + G1376T) | 1/70 | 1/70
Gaohe + Kaiping (A95G + G1388A) | 1/70 | 1/70

In comparison to direct DNA sequencing, the 4-plex HRM assay was 100 % sensitive [95 % confidence interval (CI): 94.87–100 %] and 100 % specific (95 % CI: 87.66–100 %), with no cross-reactivity for the detection of G6PD mutations (Table 4). Additionally, the multiplexed HRM assay correctly identified the double mutations (G6PD Mahidol + Canton and G6PD Gaohe + Kaiping). This indicates that the developed method is reliable for detecting G6PD mutations.

Table 4. Performance of the HRM assay for the identification of G6PD mutations
Parameter | HRM assay
True positive | 70/70
True negative | 28/28
False positive | 0/28
False negative | 0/70
Sensitivity | 100 %
Specificity | 100 %
Positive predictive value | 100 %
Negative predictive value | 100 %
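The paper does not state how the 95 % CIs for sensitivity and specificity were computed; the exact (Clopper-Pearson) binomial interval is an assumption here, chosen because it reproduces the reported bounds of 94.87–100 % and 87.66–100 %. A minimal sketch:

from scipy.stats import beta

def clopper_pearson(successes: int, trials: int, alpha: float = 0.05):
    # Exact two-sided binomial confidence interval (Clopper-Pearson).
    lo = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, trials - successes + 1)
    hi = 1.0 if successes == trials else beta.ppf(1 - alpha / 2, successes + 1, trials - successes)
    return lo, hi

# Sensitivity: 70/70 deficient samples correctly genotyped.
print([round(100 * v, 2) for v in clopper_pearson(70, 70)])  # [94.87, 100.0]
# Specificity: 28/28 non-deficient samples correctly called wild type.
print([round(100 * v, 2) for v in clopper_pearson(28, 28)])  # [87.66, 100.0]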
Phenotypic screening of G6PD deficiency by WST-8 assay
The prevalence of G6PD deficiency in people living in a malaria endemic area in Thailand, namely, Tha Song Yang District, Tak Province, was determined by the WST-8 G6PD activity assay. Figure 2 shows the G6PD enzyme activity of the 725 samples. The average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively. The adjusted male median (AMM) value was determined to be 10.31 ± 3.81 U/gHb and defined as 100 % G6PD activity [56]. The WHO defines G6PD activity of less than 30 % as deficient and activity between 30 and 80 % as intermediate [57]; nonetheless, G6PD activity of 70 % has been used as the threshold for tafenoquine prescription [58, 59]. In this study, G6PD activity below 30 % of the AMM was therefore considered deficient, activity of 30–70 % intermediate, and activity above 70 % normal.

[Fig. 2: G6PD activity of 725 individuals (368 males and 357 females) measured by the WST-8 method; dotted horizontal lines indicate G6PD activity at 30 and 70 % of the AMM.]

Based on the WST-8 assay, the prevalence of G6PD deficiency (deficient or intermediate activity) in the studied population was 20.55 % (149/725; Table 5), with rates of 20.11 % (74/368) in males and 21.01 % (75/357) in females. The average G6PD activity of deficient males and females was 1.59 ± 0.89 and 1.69 ± 0.77 U/gHb, respectively. Intermediate G6PD activity (30–70 %) was found in 7.34 % (27/368) of males and 16.25 % (58/357) of females. The average G6PD activity of non-deficient (> 70 %) cases was 11.78 ± 2.11 U/gHb in males and 11.89 ± 2.49 U/gHb in females. The frequency distribution of G6PD activity of the 725 individuals is shown in Fig. 3a; the majority of enzyme activities fell between 7 and 16 U/gHb, with an overall average of 10.19 ± 3.96 U/gHb. The distribution by sex (Fig. 3b, c) was broader in females than in males.

Table 5. Prevalence of G6PD deficiency determined by WST-8 enzyme activity assay
G6PD status | Male, N (%) | Female, N (%) | Total, N (%)
Deficient (< 30 %) | 47 (12.77 %) | 17 (4.76 %) | 64 (8.83 %)
Intermediate (30–70 %) | 27 (7.34 %) | 58 (16.25 %) | 85 (11.72 %)
Normal (> 70 %) | 294 (79.89 %) | 282 (78.99 %) | 576 (79.45 %)
Total | 368 (100 %) | 357 (100 %) | 725 (100 %)

[Fig. 3: Frequency distribution of G6PD activity for (a) all 725 samples, (b) 368 males, and (c) 357 females.]
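A minimal sketch of this classification rule follows; the sample values are hypothetical, and the adjustment step used to derive the AMM in reference [56] is simplified here to a plain median of male activities:

from statistics import median

def classify(activity_u_per_ghb: float, amm: float) -> str:
    # G6PD status as a percentage of the adjusted male median (AMM).
    pct = 100 * activity_u_per_ghb / amm
    if pct < 30:
        return "deficient"
    if pct <= 70:
        return "intermediate"
    return "normal"

male_activities = [9.8, 11.2, 1.5, 12.0, 7.1]  # hypothetical male samples, U/gHb
amm = median(male_activities)                   # defined as 100 % activity
for act in [1.6, 5.0, 11.9]:                    # hypothetical test samples
    print(act, classify(act, amm))

In the study itself, the AMM was 10.31 U/gHb, so the 30 % and 70 % cut-offs correspond to roughly 3.1 and 7.2 U/gHb.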
Notably, G6PD enzyme activity in the double mutants (G6PD Mahidol + Canton and G6PD Chinese-4 + Viangchan) was markedly lower than in the corresponding single mutants.

Table 6 Observed ranges of enzyme activity and G6PD genotypes identified by HRM assay
Gender            Variant                 N     G6PD activity (U/gHb)   Nucleotide change   Amino acid change       WHO classification
Male (n = 368)    Mahidol                 72    0.10–10.73              G487A               Gly163Ser               III
                  Chinese-5               1     2.10                    C1024T              Leu342Phe               II
                  Viangchan               1     0.89                    G871A               Val291Met               II
                  Non-variant             294   7.16–18.05              -                   -                       Normal
Female (n = 357)  Mahidol                 115   0.10–17.72              G487A               Gly163Ser               III
                  Canton                  4     6.50–10.48              G1376T              Arg459Leu               II
                  Viangchan               2     6.07–6.25               G871A               Val291Met               II
                  Mahidol + Canton        1     4.12                    G487A + G1376T      Gly163Ser + Arg459Leu   II/III
                  Chinese-4 + Viangchan   1     0.69                    G392T + G871A       Gly131Val + Val291Met   II
                  Non-variant             234   4.96–18.67              -                   -                       Normal

Fig. 4 G6PD activity of deficient and normal samples identified by HRM assay in (a) male and (b) female subjects. The average G6PD activity of deficient males and females was 3.16 ± 2.45 and 7.66 ± 4.19 U/gHb, respectively; that of normal males and females was 11.77 ± 2.13 and 11.76 ± 2.68 U/gHb, respectively.

Fig. 5 Distribution of G6PD activity by mutation type. Males carrying G6PD Mahidol showed enzyme activity ranging from 0.10 to 10.73 U/gHb, whereas females carrying G6PD Mahidol showed a wider range (0.10–17.72 U/gHb). Females with G6PD Canton exhibited activity between 6.50 and 10.48 U/gHb, and females with G6PD Viangchan between 6.07 and 6.25 U/gHb. Normal males showed activity of 7.16–18.05 U/gHb and normal females of 4.96–18.67 U/gHb.
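A genotype–phenotype summary of the kind shown in Table 6 and Fig. 5 can be produced by grouping activities by sex and called variant, then computing N, range, and mean ± SD. The pandas sketch below is a minimal example on hypothetical subject-level records; the column names and values are invented for illustration, not taken from the study's dataset.

```python
import pandas as pd

# Hypothetical records: sex, HRM genotype call, WST-8 activity (U/gHb).
records = [
    ("male", "Mahidol", 0.45), ("male", "Mahidol", 3.10), ("male", "Non-variant", 11.9),
    ("female", "Mahidol", 7.80), ("female", "Canton", 9.20), ("female", "Non-variant", 12.4),
]
df = pd.DataFrame(records, columns=["sex", "variant", "activity"])

# Summarize per (sex, variant) group, as in Table 6 / Fig. 5.
summary = (
    df.groupby(["sex", "variant"])["activity"]
      .agg(N="count", min="min", max="max", mean="mean", sd="std")
      .round(2)
)
print(summary)
```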
Conclusions
A multiplexed HRM assay for the detection of eight common G6PD mutations in Thailand was developed. The performance of the assay was excellent, with 100 % sensitivity and 100 % specificity against direct DNA sequencing. The prevalence of G6PD mutations among 725 people living in a malaria endemic area along the Thai–Myanmar border was 27.17 % by HRM, which is greater than the prevalence of G6PD deficiency determined by the WST-8 phenotypic assay (20.55 %). A phenotypic assay alone may therefore be inadequate and may not accurately identify G6PD deficiency, especially in heterozygous females. The multiplexed HRM assay offers a way to overcome this problem: it is rapid, accurate and reliable for detecting G6PD mutations and enables high-throughput screening. This assay could serve as a supplementary approach for large-scale screening of G6PD deficiency before the administration of 8-aminoquinolines in malaria endemic areas.
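As a worked check of the validation figures cited here, the counts from the comparison with direct DNA sequencing (70 deficient samples, all detected; 28 non-deficient samples, all called negative) can be substituted into the standard diagnostic-performance formulas used in the study's statistical analysis. The Python sketch below is illustrative only.

```python
def diagnostic_performance(tp, tn, fp, fn):
    """Sensitivity, specificity, PPV and NPV, each as a percentage."""
    return {
        "sensitivity": 100.0 * tp / (tp + fn),
        "specificity": 100.0 * tn / (tn + fp),
        "PPV": 100.0 * tp / (tp + fp),
        "NPV": 100.0 * tn / (tn + fn),
    }

# 4-plex HRM vs. direct DNA sequencing: 70 deficient samples all detected,
# 28 non-deficient samples all called negative (no false calls).
print(diagnostic_performance(tp=70, tn=28, fp=0, fn=0))
# -> {'sensitivity': 100.0, 'specificity': 100.0, 'PPV': 100.0, 'NPV': 100.0}
```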
[ "Background", "Methods", "Blood samples", "Phenotypic screening of G6PD deficiency using WST-8 assay", "DNA extraction", "Primer design", "PCR amplification and melting curve analysis", "PCR amplification and DNA sequencing", "Statistical analysis", "Development and validation of 4-plex HRM assay", "Phenotypic screening of G6PD deficiency by WST-8 assay", "\nGenotypic screening of G6PD deficiency using the multiplexed HRM assay\n" ]
[ "Glucose-6-phosphate dehydrogenase (G6PD) deficiency is an inherited genetic defect and the most common enzymopathy, affecting approximately 500 million people worldwide with more than 200 variants have been identified [1]. G6PD deficiency is prevalent in tropical and subtropical areas where malaria is endemic, including Africa and Southeast Asia [2]. Evidence has suggested that G6PD deficiency confers protection against malaria infection [3–5]. However, this is still controversial because several studies have yielded contradictory results with some claiming that the protective effects of G6PD deficiency were observed in male hemizygotes only, in female heterozygotes only, or in both [6–9]. The major clinical concern associated with G6PD deficiency is haemolysis upon exposure to oxidant drugs, including anti-malarials such as 8-aminoquinolines (primaquine and tafenoquine) [10–13]. Primaquine and tafenoquine are the only medications capable of killing Plasmodium vivax and Plasmodium ovale at the dormant liver stage (hypnozoite). The World Health Organization (WHO) recommends that G6PD activity be measured before efforts to perform radical treatment of malaria [14].\nG6PD deficiency can be diagnosed by either phenotypic or genotypic assay. Phenotypic tests are based on the assessment of G6PD activity, measuring the production of reduced nicotinamide adenine dinucleotide phosphate (NADPH), which can be done quantitatively. The standard quantitative method is spectrophotometry, in which NADPH production is monitored at 340 nm [15]. This method is accurate and reliable, but is laborious, time-consuming, and requires complicated sample preparation and technical skills; as such, it is not commonly used for field-based screening. A colorimetric G6PD assay, based on water-soluble tetrazolium salts (WST-8), was developed as an alternative to the gold standard of spectrophotometry [16]. In this approach, no sample preparation is required and whole blood or dried blood spots can be used to test G6PD activity [17]. The WST-8-based assay can be used as a quantitative method or a qualitative one by the naked eye, offering the possibility of performing mass screening of G6PD deficiency in the context of malaria elimination using primaquine and tafenoquine [16, 18]. Although not the standard method for measuring G6PD activity, the sensitivity of WST-8 for detecting NAD(P)H was found to be five-fold greater than that of the spectrophotometric assay. Moreover, results obtained by measuring dehydrogenase activities in biological samples using WST-8 assay were in parallel with the standard method [19]. For G6PD testing, WST-8 was applied, in 96-well format, to the screening of G6PD deficiency in different populations [20–22]. The sensitivity and specificity of WST-8 for detecting G6PD activity < 30 % were 55 % and 98 %, respectively, compared with the spectrophotometric method [20]. In addition, sensitivity of 72 % and specificity of 98 % were reported for WST-8, in comparison with the standard quantitative G6PD assay (R&D Diagnostics) [21]. This suggests that WST-8 could be a key tool for G6PD testing, but it requires further development before deployment in the field.\nG6PD diagnostic tests are currently available, including qualitative tests such as fluorescent spot test (FST) and CareStart™ G6PD rapid diagnostic test, as well as quantitative point-of-care tests such as CareStart™ G6PD biosensor and STANDARD™ G6PD test. 
Unfortunately, these tests are not widely used for G6PD testing because they are too expensive and can be difficult to interpret [23–25]. Qualitative tests are reliable for identifying G6PD deficiency in hemizygous males and homozygous females, but are unable to identify heterozygous females [26–28]. This is because, in heterozygous females, a wide range of G6PD activities are observed as a result of the random X-chromosome inactivation or lyonization [29]. To date, over 200 G6PD variants have been identified, which form genotypes associated with differences in the degree of deficiency and vulnerability to haemolysis [30]. Moreover, G6PD activities vary among G6PD-deficient individuals carrying the same genotype [31, 32].\nG6PD genotyping can be performed using restriction fragment length polymorphism [33, 34], amplification refractory mutation system [35, 36], gold nanoparticles-based assay [37], high resolution melting (HRM) curve analysis [38, 39] and DNA sequencing [40, 41]. Additionally, multiplex genotyping systems are currently available. DiaPlexC™ G6PD Genotyping Kit (Asian type) can detect eight mutations, namely, G6PD Vanua Lava, G6PD Mahidol, G6PD Mediterranean, G6PD Coimbra, G6PD Viangchan, G6PD Union, G6PD Canton, and G6PD Kaiping. Thus, this assay offers high-throughput screening of G6PD mutations by one-step PCR [42]. However, after PCR amplification, an additional gel electrophoresis step is required to check the size of the amplified PCR products, which is impractical for large population screening. The HRM assay is a powerful and reliable tool that has been widely used in the detection of gene mutations [43–45]. Previously, HRM assays were applied to detect G6PD mutations in different population groups [38, 46–48]. However, previous HRM assays could detect only one or two mutations at a time. Although a multiplexed system to detect six mutations in four reactions was later described, the assay system and interpretation of results were complex [49].\nThe prevalence of G6PD deficiency in Thailand ranges between 5 and 18 %, depending on the geographical area [50–54]. More than 20 G6PD variants have been identified in the country, among which the most common is G6PD Viangchan, followed by G6PD Mahidol, G6PD Canton, G6PD Union, G6PD Kaiping, G6PD Gaohe, G6PD Chinese-4, G6PD Chinese-5, G6PD Valladolid, G6PD Coimbra and G6PD Aures. Along the Thai–Myanmar border, a malaria endemic area, prevalence of G6PD deficiency of 9–18 % was reported in males [26]. Moreover, a rate of G6PD deficiency of 7.4 % was reported from the screening of 1,340 newborns [27]. G6PD Mahidol was shown to be the most common variant in this population, accounting for 88 % of all variants, followed by G6PD Chinese-4, G6PD Viangchan, and G6PD Mediterranean. Generally, to avoid the risk of haemolysis upon malaria treatment, G6PD testing is recommended before the administration of primaquine and tafenoquine. The aim of this study was to develop a molecular diagnostic test to enable an accurate, reliable and high-throughput platform for detecting G6PD mutations, which can be used as a supplement to the screening of G6PD deficiency, especially in heterozygous females. To validate the method, 70 G6PD-deficient and 28 non-deficient samples were tested and the results were compared with the findings obtained by direct DNA sequencing. The potential utility of the developed HRM test for the detection of G6PD variants in a study area in Thailand was then examined. 
The correlation between genotype and the phenotype of enzyme activity (as determined using WST-8) was also analysed.", "Blood samples Blood samples were collected in ethylenediaminetetraacetic acid (EDTA) tubes and transported to the laboratory under storage at 4 °C. Thereafter, samples were stored at −20 °C until use, for approximately 1−3 months. Under these conditions, the integrity of samples for phenotypic analysis was maintained as it was recently reported that blood samples were stable for up to 7–12 months when stored in EDTA tubes at − 20 °C [55].\nFor the validation of HRM assays, 70 G6PD-deficient and 28 non-deficient blood samples were collected from healthy Thai volunteers at the Faculty of Medicine Ramathibodi Hospital. All samples were spectrophotometrically tested for G6PD activity and genotyped by DNA sequencing. Ethical approval for the study was provided by the Committee on Human Rights Related to Research Involving Human Subjects, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand (approval number MURA 2018/252).\nFor the screening of G6PD deficiency, 725 blood samples (from 368 males and 357 females) were collected in EDTA tubes from residents living along the Thai–Myanmar border, a malaria endemic area, namely, in Tha Song Yang District, Tak Province, Thailand. Ethical approval for the study was provided by the Human Ethics Committee, Faculty of Tropical Medicine, Mahidol University (approval number MUTM 2019-016-01).\nBlood samples were collected in ethylenediaminetetraacetic acid (EDTA) tubes and transported to the laboratory under storage at 4 °C. Thereafter, samples were stored at −20 °C until use, for approximately 1−3 months. Under these conditions, the integrity of samples for phenotypic analysis was maintained as it was recently reported that blood samples were stable for up to 7–12 months when stored in EDTA tubes at − 20 °C [55].\nFor the validation of HRM assays, 70 G6PD-deficient and 28 non-deficient blood samples were collected from healthy Thai volunteers at the Faculty of Medicine Ramathibodi Hospital. All samples were spectrophotometrically tested for G6PD activity and genotyped by DNA sequencing. Ethical approval for the study was provided by the Committee on Human Rights Related to Research Involving Human Subjects, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand (approval number MURA 2018/252).\nFor the screening of G6PD deficiency, 725 blood samples (from 368 males and 357 females) were collected in EDTA tubes from residents living along the Thai–Myanmar border, a malaria endemic area, namely, in Tha Song Yang District, Tak Province, Thailand. Ethical approval for the study was provided by the Human Ethics Committee, Faculty of Tropical Medicine, Mahidol University (approval number MUTM 2019-016-01).\nPhenotypic screening of G6PD deficiency using WST-8 assay WST-8 is not a standard method for measuring G6PD activity. However, this assay was used for phenotypic screening in this study because its performance was found to be indistinguishable from that of the spectrophotometric method involving measurement of the absorbance of NAD(P)H at 340 nm [19]. The method showed high accuracy with % relative error of 0.7–0.25. For precision, % coefficient of variation for within-run and between-run of the WST-8 method ranged between 0.6 and 4.5. WST-8 also exhibited excellent reproducibility with Z′ values of 0.90–0.99. 
Although WST-8 provides advantages regarding the diagnosis of G6PD deficiency, this method will require further development before being deployed in a clinical context [20].\nReaction mixtures of 100 µl, consisting of 20 mM Tris-HCl pH 8.0, 10 mM MgCl2, 500 µM glucose-6-phosphate (G6P), 100 µM NADP+, and 100 µM WST-8 (Sigma-Aldrich, Darmstadt, Germany), were mixed with a blood sample of 2 µl in a 96-well plate. The absorbance was measured at 450 nm with a reference at 650 nm using a microplate reader (Sunrise; Tecan, Männedorf, Switzerland). The absorbance at 450 nm of a reaction mixture set up in the absence of G6P substrate was used for background subtraction. The G6PD activity was calculated using an NADPH calibration curve. Haemoglobin concentration was measured using Drabkin’s reagent (Sigma-Aldrich). G6PD activity was reported as units (U) per gram of haemoglobin (gHb). Experiments were performed in triplicate.\nWST-8 is not a standard method for measuring G6PD activity. However, this assay was used for phenotypic screening in this study because its performance was found to be indistinguishable from that of the spectrophotometric method involving measurement of the absorbance of NAD(P)H at 340 nm [19]. The method showed high accuracy with % relative error of 0.7–0.25. For precision, % coefficient of variation for within-run and between-run of the WST-8 method ranged between 0.6 and 4.5. WST-8 also exhibited excellent reproducibility with Z′ values of 0.90–0.99. Although WST-8 provides advantages regarding the diagnosis of G6PD deficiency, this method will require further development before being deployed in a clinical context [20].\nReaction mixtures of 100 µl, consisting of 20 mM Tris-HCl pH 8.0, 10 mM MgCl2, 500 µM glucose-6-phosphate (G6P), 100 µM NADP+, and 100 µM WST-8 (Sigma-Aldrich, Darmstadt, Germany), were mixed with a blood sample of 2 µl in a 96-well plate. The absorbance was measured at 450 nm with a reference at 650 nm using a microplate reader (Sunrise; Tecan, Männedorf, Switzerland). The absorbance at 450 nm of a reaction mixture set up in the absence of G6P substrate was used for background subtraction. The G6PD activity was calculated using an NADPH calibration curve. Haemoglobin concentration was measured using Drabkin’s reagent (Sigma-Aldrich). G6PD activity was reported as units (U) per gram of haemoglobin (gHb). Experiments were performed in triplicate.\nDNA extraction DNA extraction was performed using the QIAsymphony DNA Mini Kit (QIAGEN, Hilden, Germany), in accordance with the manufacturer’s instructions. Blood samples of 100 µl were extracted and eluted into a final volume of 50 µl. DNA concentration was measured using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA).\nDNA extraction was performed using the QIAsymphony DNA Mini Kit (QIAGEN, Hilden, Germany), in accordance with the manufacturer’s instructions. Blood samples of 100 µl were extracted and eluted into a final volume of 50 µl. DNA concentration was measured using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA).\nPrimer design Primers were designed to detect eight common G6PD mutations in the Thai population: G6PD Gaohe (A95G), G6PD Chinese-4 (G392T), G6PD Mahidol (G487A), G6PD Viangchan (G871A), G6PD Chinese-5 (C1024T), G6PD Union (C1360T), G6PD Canton (G1376T) and G6PD Kaiping (G1388A; Table 1). The primers were designed to detect the mutations by generating PCR products with distinctive melting temperatures (Tm; Fig. 
1).\n\nTable 1HRM primers used in this studyReaction systemPrimer nameG6PD variantPrimer sequence (from 5’ to 3’)Primer concentration (nM)Amplicon size (bp)Tm of PCRproduct (°C)1A95G_FGaoheTTCCATCAGTCGGATACACG60010081.05A95G_R(His32Arg)AGGCATGGAGCAGGCACTTC600G487A_FMahidolTCCGGGCTCCCAGCAGAA4008784.80G487A_R(Gly163Ser)GGTTGGACAGCCGGTCA400G871A_FViangchanGGCTTTCTCTCAGGTCAAGA4006678.32G871A_R(Val291Met)CCCAGGACCACATTGTTGGC400G1376T_FCantonCCTCAGCGACGAGCTCCT6009983.65G1376T_R(Arg459Leu)CTGCCATAAATATAGGGGATGG6002G392T_FChinese-4CATGAATGCCCTCCACCTGGT2008785.05G392T_R(Gly131Val)TTCTTGGTGACGGCCTCGTA200C1024T_FChinese-5CACTTTTGCAGCCGTCGTCT4009983.10C1024T_R(Leu342Phe)CACACAGGGCATGCCCAGTT400C1360T_FUnionGAGCCAGATGCACTTCGTGT20012787.67C1360T_R(Arg454Cys)GAGGGGACATAGTATGGCTT200G1388A_FKaipingGCTCCGTGAGGCCTGGCA4005778.97G1388A_R(Arg463His)TTCTCCAGCTCAATCTGGTGC400\nHRM primers used in this study\n\nFig. 1Identification of G6PD mutations by the multiplexed HRM assay. The assay is based on base complementarity between primers and the DNA template. Mutant samples produce a peak at the corresponding Tm, while WT samples do not produce PCR products, giving a flat line\nIdentification of G6PD mutations by the multiplexed HRM assay. The assay is based on base complementarity between primers and the DNA template. Mutant samples produce a peak at the corresponding Tm, while WT samples do not produce PCR products, giving a flat line\nPrimers were designed to detect eight common G6PD mutations in the Thai population: G6PD Gaohe (A95G), G6PD Chinese-4 (G392T), G6PD Mahidol (G487A), G6PD Viangchan (G871A), G6PD Chinese-5 (C1024T), G6PD Union (C1360T), G6PD Canton (G1376T) and G6PD Kaiping (G1388A; Table 1). The primers were designed to detect the mutations by generating PCR products with distinctive melting temperatures (Tm; Fig. 1).\n\nTable 1HRM primers used in this studyReaction systemPrimer nameG6PD variantPrimer sequence (from 5’ to 3’)Primer concentration (nM)Amplicon size (bp)Tm of PCRproduct (°C)1A95G_FGaoheTTCCATCAGTCGGATACACG60010081.05A95G_R(His32Arg)AGGCATGGAGCAGGCACTTC600G487A_FMahidolTCCGGGCTCCCAGCAGAA4008784.80G487A_R(Gly163Ser)GGTTGGACAGCCGGTCA400G871A_FViangchanGGCTTTCTCTCAGGTCAAGA4006678.32G871A_R(Val291Met)CCCAGGACCACATTGTTGGC400G1376T_FCantonCCTCAGCGACGAGCTCCT6009983.65G1376T_R(Arg459Leu)CTGCCATAAATATAGGGGATGG6002G392T_FChinese-4CATGAATGCCCTCCACCTGGT2008785.05G392T_R(Gly131Val)TTCTTGGTGACGGCCTCGTA200C1024T_FChinese-5CACTTTTGCAGCCGTCGTCT4009983.10C1024T_R(Leu342Phe)CACACAGGGCATGCCCAGTT400C1360T_FUnionGAGCCAGATGCACTTCGTGT20012787.67C1360T_R(Arg454Cys)GAGGGGACATAGTATGGCTT200G1388A_FKaipingGCTCCGTGAGGCCTGGCA4005778.97G1388A_R(Arg463His)TTCTCCAGCTCAATCTGGTGC400\nHRM primers used in this study\n\nFig. 1Identification of G6PD mutations by the multiplexed HRM assay. The assay is based on base complementarity between primers and the DNA template. Mutant samples produce a peak at the corresponding Tm, while WT samples do not produce PCR products, giving a flat line\nIdentification of G6PD mutations by the multiplexed HRM assay. The assay is based on base complementarity between primers and the DNA template. Mutant samples produce a peak at the corresponding Tm, while WT samples do not produce PCR products, giving a flat line\nPCR amplification and melting curve analysis Assay conditions, including primer concentrations, assay protocol, and detection conditions, were optimized to maximize the sensitivity and specificity of the assay and to minimize the cross-reactivity. 
Multiplexed HRM assay was performed in a total volume of 12.5 µl, containing 6.25 µl of 2× HRM Type-It mix (QIAGEN), various concentrations of each primer (Table 1), molecular-grade water and 2.5 µl of the gDNA template (3–10 ng/µl). PCR amplification and melting curve analysis were performed using the Rotor-Gene Q (QIAGEN) with the following conditions: 1 cycle of 95 °C for 5 min, and then 30 cycles of 95 °C for 10 s, 63 °C for 30 s, and 72 °C for 10 s. Subsequently, HRM analysis was performed by melting from 75 to 90 °C, reading at every 0.1 °C step with 2 s of stabilization. Positive (gDNA with known mutations, confirmed by DNA sequencing) and negative controls (gDNA of G6PD wild-type (WT), confirmed by DNA sequencing) were included in every run. Data analysis was carried out using the Rotor-Gene Q software. Experiments were performed in triplicate.\nPCR amplification and DNA sequencing To validate the HRM results, PCR and sequencing primers were designed, as shown in Table 2. For DNA amplification, extracted gDNA was used as a template. The g6pd gene was amplified using four primer sets (Exon2F−Exon2R, Exon3F−Exon5R, Exon6F−Exon8R, and Exon9F−Exon13R), which cover all 13 exons. The PCR reaction was set up in a final volume of 50 µl, containing 1⋅ Taq Buffer with (NH4)2SO4, 2.5 mM MgCl2, 200 µM of each dNTP, 0.25 µM of each primer, 50 ng gDNA and 1.25 U of Taq DNA polymerase (Thermo Fisher Scientific). The thermal cycling profile was as follows: initial denaturation at 95 °C for 3 min; 35 cycles of denaturation at 95 °C for 30 s, annealing for 30 s, and extension at 72 °C for 1 min; followed by final extension at 72 °C for 10 min. The annealing temperature was 60 °C for the primers Exon2F−Exon2R, Exon3F−Exon5R, and Exon6F−Exon8R, and 65 °C for Exon9F−Exon13R. PCR products were subjected to gel purification and sequenced (Bio Basic, Ontario, Canada).\n\nTable 2Sequencing primers used in this studyPrimer namePrimer sequence (from 5’ to 3’)Exon2FGGGCAATCAGGTGTCACCExon2RGGCTTTTAAGATTGGGGCCTExon3FAGACATGCTTGTGGCCCAGTAExon5FGGACACTGACTTCTGAGGGCAExon5RAAGGGAGGGCAACGGCAAExon6FCACGGGGGCGAGGAGGTTExon8FCGGTTTTATGATTCAGTGATAExon8RAGGGCATGCTCCTGGGGAExon9FGTGAGCAGAGCCAAGCAGExon11FCAGATACAAGGTGCCCTACAGExon13RTGGCGGGGGTGGAGGTGG\nSequencing primers used in this study\nTo validate the HRM results, PCR and sequencing primers were designed, as shown in Table 2. For DNA amplification, extracted gDNA was used as a template. The g6pd gene was amplified using four primer sets (Exon2F−Exon2R, Exon3F−Exon5R, Exon6F−Exon8R, and Exon9F−Exon13R), which cover all 13 exons. The PCR reaction was set up in a final volume of 50 µl, containing 1⋅ Taq Buffer with (NH4)2SO4, 2.5 mM MgCl2, 200 µM of each dNTP, 0.25 µM of each primer, 50 ng gDNA and 1.25 U of Taq DNA polymerase (Thermo Fisher Scientific). The thermal cycling profile was as follows: initial denaturation at 95 °C for 3 min; 35 cycles of denaturation at 95 °C for 30 s, annealing for 30 s, and extension at 72 °C for 1 min; followed by final extension at 72 °C for 10 min. The annealing temperature was 60 °C for the primers Exon2F−Exon2R, Exon3F−Exon5R, and Exon6F−Exon8R, and 65 °C for Exon9F−Exon13R. 
PCR products were subjected to gel purification and sequenced (Bio Basic, Ontario, Canada).\n\nTable 2Sequencing primers used in this studyPrimer namePrimer sequence (from 5’ to 3’)Exon2FGGGCAATCAGGTGTCACCExon2RGGCTTTTAAGATTGGGGCCTExon3FAGACATGCTTGTGGCCCAGTAExon5FGGACACTGACTTCTGAGGGCAExon5RAAGGGAGGGCAACGGCAAExon6FCACGGGGGCGAGGAGGTTExon8FCGGTTTTATGATTCAGTGATAExon8RAGGGCATGCTCCTGGGGAExon9FGTGAGCAGAGCCAAGCAGExon11FCAGATACAAGGTGCCCTACAGExon13RTGGCGGGGGTGGAGGTGG\nSequencing primers used in this study\nAssay conditions, including primer concentrations, assay protocol, and detection conditions, were optimized to maximize the sensitivity and specificity of the assay and to minimize the cross-reactivity. Multiplexed HRM assay was performed in a total volume of 12.5 µl, containing 6.25 µl of 2× HRM Type-It mix (QIAGEN), various concentrations of each primer (Table 1), molecular-grade water and 2.5 µl of the gDNA template (3–10 ng/µl). PCR amplification and melting curve analysis were performed using the Rotor-Gene Q (QIAGEN) with the following conditions: 1 cycle of 95 °C for 5 min, and then 30 cycles of 95 °C for 10 s, 63 °C for 30 s, and 72 °C for 10 s. Subsequently, HRM analysis was performed by melting from 75 to 90 °C, reading at every 0.1 °C step with 2 s of stabilization. Positive (gDNA with known mutations, confirmed by DNA sequencing) and negative controls (gDNA of G6PD wild-type (WT), confirmed by DNA sequencing) were included in every run. Data analysis was carried out using the Rotor-Gene Q software. Experiments were performed in triplicate.\nPCR amplification and DNA sequencing To validate the HRM results, PCR and sequencing primers were designed, as shown in Table 2. For DNA amplification, extracted gDNA was used as a template. The g6pd gene was amplified using four primer sets (Exon2F−Exon2R, Exon3F−Exon5R, Exon6F−Exon8R, and Exon9F−Exon13R), which cover all 13 exons. The PCR reaction was set up in a final volume of 50 µl, containing 1⋅ Taq Buffer with (NH4)2SO4, 2.5 mM MgCl2, 200 µM of each dNTP, 0.25 µM of each primer, 50 ng gDNA and 1.25 U of Taq DNA polymerase (Thermo Fisher Scientific). The thermal cycling profile was as follows: initial denaturation at 95 °C for 3 min; 35 cycles of denaturation at 95 °C for 30 s, annealing for 30 s, and extension at 72 °C for 1 min; followed by final extension at 72 °C for 10 min. The annealing temperature was 60 °C for the primers Exon2F−Exon2R, Exon3F−Exon5R, and Exon6F−Exon8R, and 65 °C for Exon9F−Exon13R. PCR products were subjected to gel purification and sequenced (Bio Basic, Ontario, Canada).\n\nTable 2Sequencing primers used in this studyPrimer namePrimer sequence (from 5’ to 3’)Exon2FGGGCAATCAGGTGTCACCExon2RGGCTTTTAAGATTGGGGCCTExon3FAGACATGCTTGTGGCCCAGTAExon5FGGACACTGACTTCTGAGGGCAExon5RAAGGGAGGGCAACGGCAAExon6FCACGGGGGCGAGGAGGTTExon8FCGGTTTTATGATTCAGTGATAExon8RAGGGCATGCTCCTGGGGAExon9FGTGAGCAGAGCCAAGCAGExon11FCAGATACAAGGTGCCCTACAGExon13RTGGCGGGGGTGGAGGTGG\nSequencing primers used in this study\nTo validate the HRM results, PCR and sequencing primers were designed, as shown in Table 2. For DNA amplification, extracted gDNA was used as a template. The g6pd gene was amplified using four primer sets (Exon2F−Exon2R, Exon3F−Exon5R, Exon6F−Exon8R, and Exon9F−Exon13R), which cover all 13 exons. The PCR reaction was set up in a final volume of 50 µl, containing 1⋅ Taq Buffer with (NH4)2SO4, 2.5 mM MgCl2, 200 µM of each dNTP, 0.25 µM of each primer, 50 ng gDNA and 1.25 U of Taq DNA polymerase (Thermo Fisher Scientific). 
The thermal cycling profile was as follows: initial denaturation at 95 °C for 3 min; 35 cycles of denaturation at 95 °C for 30 s, annealing for 30 s, and extension at 72 °C for 1 min; followed by final extension at 72 °C for 10 min. The annealing temperature was 60 °C for the primers Exon2F−Exon2R, Exon3F−Exon5R, and Exon6F−Exon8R, and 65 °C for Exon9F−Exon13R. PCR products were subjected to gel purification and sequenced (Bio Basic, Ontario, Canada).\n\nTable 2Sequencing primers used in this studyPrimer namePrimer sequence (from 5’ to 3’)Exon2FGGGCAATCAGGTGTCACCExon2RGGCTTTTAAGATTGGGGCCTExon3FAGACATGCTTGTGGCCCAGTAExon5FGGACACTGACTTCTGAGGGCAExon5RAAGGGAGGGCAACGGCAAExon6FCACGGGGGCGAGGAGGTTExon8FCGGTTTTATGATTCAGTGATAExon8RAGGGCATGCTCCTGGGGAExon9FGTGAGCAGAGCCAAGCAGExon11FCAGATACAAGGTGCCCTACAGExon13RTGGCGGGGGTGGAGGTGG\nSequencing primers used in this study\nStatistical analysis Data are presented as mean ± SD. Statistical analyses and plotting of graphs were performed using GraphPad Prism (GraphPad Software, La Jolla, CA, USA). To assess the performance of multiplexed HRM in the detection of G6PD mutations, the numbers of true positives, true negatives, false positives, and false negatives were determined. The following parameters were calculated: sensitivity = [true positives/(true positives + false negatives)] ⋅100; specificity = [true negatives/(true negatives + false positives)] ⋅100; positive predictive value = [true positives/(true positives + false positives)] ⋅100; and negative predictive value = [true negatives/(true positives + false negatives)] ⋅100.\nData are presented as mean ± SD. Statistical analyses and plotting of graphs were performed using GraphPad Prism (GraphPad Software, La Jolla, CA, USA). To assess the performance of multiplexed HRM in the detection of G6PD mutations, the numbers of true positives, true negatives, false positives, and false negatives were determined. The following parameters were calculated: sensitivity = [true positives/(true positives + false negatives)] ⋅100; specificity = [true negatives/(true negatives + false positives)] ⋅100; positive predictive value = [true positives/(true positives + false positives)] ⋅100; and negative predictive value = [true negatives/(true positives + false negatives)] ⋅100.", "Blood samples were collected in ethylenediaminetetraacetic acid (EDTA) tubes and transported to the laboratory under storage at 4 °C. Thereafter, samples were stored at −20 °C until use, for approximately 1−3 months. Under these conditions, the integrity of samples for phenotypic analysis was maintained as it was recently reported that blood samples were stable for up to 7–12 months when stored in EDTA tubes at − 20 °C [55].\nFor the validation of HRM assays, 70 G6PD-deficient and 28 non-deficient blood samples were collected from healthy Thai volunteers at the Faculty of Medicine Ramathibodi Hospital. All samples were spectrophotometrically tested for G6PD activity and genotyped by DNA sequencing. Ethical approval for the study was provided by the Committee on Human Rights Related to Research Involving Human Subjects, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand (approval number MURA 2018/252).\nFor the screening of G6PD deficiency, 725 blood samples (from 368 males and 357 females) were collected in EDTA tubes from residents living along the Thai–Myanmar border, a malaria endemic area, namely, in Tha Song Yang District, Tak Province, Thailand. 
Ethical approval for the study was provided by the Human Ethics Committee, Faculty of Tropical Medicine, Mahidol University (approval number MUTM 2019-016-01).", "WST-8 is not a standard method for measuring G6PD activity. However, this assay was used for phenotypic screening in this study because its performance was found to be indistinguishable from that of the spectrophotometric method involving measurement of the absorbance of NAD(P)H at 340 nm [19]. The method showed high accuracy with % relative error of 0.7–0.25. For precision, % coefficient of variation for within-run and between-run of the WST-8 method ranged between 0.6 and 4.5. WST-8 also exhibited excellent reproducibility with Z′ values of 0.90–0.99. Although WST-8 provides advantages regarding the diagnosis of G6PD deficiency, this method will require further development before being deployed in a clinical context [20].\nReaction mixtures of 100 µl, consisting of 20 mM Tris-HCl pH 8.0, 10 mM MgCl2, 500 µM glucose-6-phosphate (G6P), 100 µM NADP+, and 100 µM WST-8 (Sigma-Aldrich, Darmstadt, Germany), were mixed with a blood sample of 2 µl in a 96-well plate. The absorbance was measured at 450 nm with a reference at 650 nm using a microplate reader (Sunrise; Tecan, Männedorf, Switzerland). The absorbance at 450 nm of a reaction mixture set up in the absence of G6P substrate was used for background subtraction. The G6PD activity was calculated using an NADPH calibration curve. Haemoglobin concentration was measured using Drabkin’s reagent (Sigma-Aldrich). G6PD activity was reported as units (U) per gram of haemoglobin (gHb). Experiments were performed in triplicate.", "DNA extraction was performed using the QIAsymphony DNA Mini Kit (QIAGEN, Hilden, Germany), in accordance with the manufacturer’s instructions. Blood samples of 100 µl were extracted and eluted into a final volume of 50 µl. DNA concentration was measured using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA).", "Primers were designed to detect eight common G6PD mutations in the Thai population: G6PD Gaohe (A95G), G6PD Chinese-4 (G392T), G6PD Mahidol (G487A), G6PD Viangchan (G871A), G6PD Chinese-5 (C1024T), G6PD Union (C1360T), G6PD Canton (G1376T) and G6PD Kaiping (G1388A; Table 1). The primers were designed to detect the mutations by generating PCR products with distinctive melting temperatures (Tm; Fig. 1).\n\nTable 1HRM primers used in this studyReaction systemPrimer nameG6PD variantPrimer sequence (from 5’ to 3’)Primer concentration (nM)Amplicon size (bp)Tm of PCRproduct (°C)1A95G_FGaoheTTCCATCAGTCGGATACACG60010081.05A95G_R(His32Arg)AGGCATGGAGCAGGCACTTC600G487A_FMahidolTCCGGGCTCCCAGCAGAA4008784.80G487A_R(Gly163Ser)GGTTGGACAGCCGGTCA400G871A_FViangchanGGCTTTCTCTCAGGTCAAGA4006678.32G871A_R(Val291Met)CCCAGGACCACATTGTTGGC400G1376T_FCantonCCTCAGCGACGAGCTCCT6009983.65G1376T_R(Arg459Leu)CTGCCATAAATATAGGGGATGG6002G392T_FChinese-4CATGAATGCCCTCCACCTGGT2008785.05G392T_R(Gly131Val)TTCTTGGTGACGGCCTCGTA200C1024T_FChinese-5CACTTTTGCAGCCGTCGTCT4009983.10C1024T_R(Leu342Phe)CACACAGGGCATGCCCAGTT400C1360T_FUnionGAGCCAGATGCACTTCGTGT20012787.67C1360T_R(Arg454Cys)GAGGGGACATAGTATGGCTT200G1388A_FKaipingGCTCCGTGAGGCCTGGCA4005778.97G1388A_R(Arg463His)TTCTCCAGCTCAATCTGGTGC400\nHRM primers used in this study\n\nFig. 1Identification of G6PD mutations by the multiplexed HRM assay. The assay is based on base complementarity between primers and the DNA template. 
Mutant samples produce a peak at the corresponding Tm, while WT samples do not produce PCR products, giving a flat line\nIdentification of G6PD mutations by the multiplexed HRM assay. The assay is based on base complementarity between primers and the DNA template. Mutant samples produce a peak at the corresponding Tm, while WT samples do not produce PCR products, giving a flat line", "Assay conditions, including primer concentrations, assay protocol, and detection conditions, were optimized to maximize the sensitivity and specificity of the assay and to minimize the cross-reactivity. Multiplexed HRM assay was performed in a total volume of 12.5 µl, containing 6.25 µl of 2× HRM Type-It mix (QIAGEN), various concentrations of each primer (Table 1), molecular-grade water and 2.5 µl of the gDNA template (3–10 ng/µl). PCR amplification and melting curve analysis were performed using the Rotor-Gene Q (QIAGEN) with the following conditions: 1 cycle of 95 °C for 5 min, and then 30 cycles of 95 °C for 10 s, 63 °C for 30 s, and 72 °C for 10 s. Subsequently, HRM analysis was performed by melting from 75 to 90 °C, reading at every 0.1 °C step with 2 s of stabilization. Positive (gDNA with known mutations, confirmed by DNA sequencing) and negative controls (gDNA of G6PD wild-type (WT), confirmed by DNA sequencing) were included in every run. Data analysis was carried out using the Rotor-Gene Q software. Experiments were performed in triplicate.\nPCR amplification and DNA sequencing To validate the HRM results, PCR and sequencing primers were designed, as shown in Table 2. For DNA amplification, extracted gDNA was used as a template. The g6pd gene was amplified using four primer sets (Exon2F−Exon2R, Exon3F−Exon5R, Exon6F−Exon8R, and Exon9F−Exon13R), which cover all 13 exons. The PCR reaction was set up in a final volume of 50 µl, containing 1⋅ Taq Buffer with (NH4)2SO4, 2.5 mM MgCl2, 200 µM of each dNTP, 0.25 µM of each primer, 50 ng gDNA and 1.25 U of Taq DNA polymerase (Thermo Fisher Scientific). The thermal cycling profile was as follows: initial denaturation at 95 °C for 3 min; 35 cycles of denaturation at 95 °C for 30 s, annealing for 30 s, and extension at 72 °C for 1 min; followed by final extension at 72 °C for 10 min. The annealing temperature was 60 °C for the primers Exon2F−Exon2R, Exon3F−Exon5R, and Exon6F−Exon8R, and 65 °C for Exon9F−Exon13R. PCR products were subjected to gel purification and sequenced (Bio Basic, Ontario, Canada).\n\nTable 2Sequencing primers used in this studyPrimer namePrimer sequence (from 5’ to 3’)Exon2FGGGCAATCAGGTGTCACCExon2RGGCTTTTAAGATTGGGGCCTExon3FAGACATGCTTGTGGCCCAGTAExon5FGGACACTGACTTCTGAGGGCAExon5RAAGGGAGGGCAACGGCAAExon6FCACGGGGGCGAGGAGGTTExon8FCGGTTTTATGATTCAGTGATAExon8RAGGGCATGCTCCTGGGGAExon9FGTGAGCAGAGCCAAGCAGExon11FCAGATACAAGGTGCCCTACAGExon13RTGGCGGGGGTGGAGGTGG\nSequencing primers used in this study\nTo validate the HRM results, PCR and sequencing primers were designed, as shown in Table 2. For DNA amplification, extracted gDNA was used as a template. The g6pd gene was amplified using four primer sets (Exon2F−Exon2R, Exon3F−Exon5R, Exon6F−Exon8R, and Exon9F−Exon13R), which cover all 13 exons. The PCR reaction was set up in a final volume of 50 µl, containing 1⋅ Taq Buffer with (NH4)2SO4, 2.5 mM MgCl2, 200 µM of each dNTP, 0.25 µM of each primer, 50 ng gDNA and 1.25 U of Taq DNA polymerase (Thermo Fisher Scientific). 
The thermal cycling profile was as follows: initial denaturation at 95 °C for 3 min; 35 cycles of denaturation at 95 °C for 30 s, annealing for 30 s, and extension at 72 °C for 1 min; followed by final extension at 72 °C for 10 min. The annealing temperature was 60 °C for the primers Exon2F−Exon2R, Exon3F−Exon5R, and Exon6F−Exon8R, and 65 °C for Exon9F−Exon13R. PCR products were subjected to gel purification and sequenced (Bio Basic, Ontario, Canada).\n\nTable 2Sequencing primers used in this studyPrimer namePrimer sequence (from 5’ to 3’)Exon2FGGGCAATCAGGTGTCACCExon2RGGCTTTTAAGATTGGGGCCTExon3FAGACATGCTTGTGGCCCAGTAExon5FGGACACTGACTTCTGAGGGCAExon5RAAGGGAGGGCAACGGCAAExon6FCACGGGGGCGAGGAGGTTExon8FCGGTTTTATGATTCAGTGATAExon8RAGGGCATGCTCCTGGGGAExon9FGTGAGCAGAGCCAAGCAGExon11FCAGATACAAGGTGCCCTACAGExon13RTGGCGGGGGTGGAGGTGG\nSequencing primers used in this study", "To validate the HRM results, PCR and sequencing primers were designed, as shown in Table 2. For DNA amplification, extracted gDNA was used as a template. The g6pd gene was amplified using four primer sets (Exon2F−Exon2R, Exon3F−Exon5R, Exon6F−Exon8R, and Exon9F−Exon13R), which cover all 13 exons. The PCR reaction was set up in a final volume of 50 µl, containing 1⋅ Taq Buffer with (NH4)2SO4, 2.5 mM MgCl2, 200 µM of each dNTP, 0.25 µM of each primer, 50 ng gDNA and 1.25 U of Taq DNA polymerase (Thermo Fisher Scientific). The thermal cycling profile was as follows: initial denaturation at 95 °C for 3 min; 35 cycles of denaturation at 95 °C for 30 s, annealing for 30 s, and extension at 72 °C for 1 min; followed by final extension at 72 °C for 10 min. The annealing temperature was 60 °C for the primers Exon2F−Exon2R, Exon3F−Exon5R, and Exon6F−Exon8R, and 65 °C for Exon9F−Exon13R. PCR products were subjected to gel purification and sequenced (Bio Basic, Ontario, Canada).\n\nTable 2Sequencing primers used in this studyPrimer namePrimer sequence (from 5’ to 3’)Exon2FGGGCAATCAGGTGTCACCExon2RGGCTTTTAAGATTGGGGCCTExon3FAGACATGCTTGTGGCCCAGTAExon5FGGACACTGACTTCTGAGGGCAExon5RAAGGGAGGGCAACGGCAAExon6FCACGGGGGCGAGGAGGTTExon8FCGGTTTTATGATTCAGTGATAExon8RAGGGCATGCTCCTGGGGAExon9FGTGAGCAGAGCCAAGCAGExon11FCAGATACAAGGTGCCCTACAGExon13RTGGCGGGGGTGGAGGTGG\nSequencing primers used in this study", "Data are presented as mean ± SD. Statistical analyses and plotting of graphs were performed using GraphPad Prism (GraphPad Software, La Jolla, CA, USA). To assess the performance of multiplexed HRM in the detection of G6PD mutations, the numbers of true positives, true negatives, false positives, and false negatives were determined. The following parameters were calculated: sensitivity = [true positives/(true positives + false negatives)] ⋅100; specificity = [true negatives/(true negatives + false positives)] ⋅100; positive predictive value = [true positives/(true positives + false positives)] ⋅100; and negative predictive value = [true negatives/(true positives + false negatives)] ⋅100.", "A multiplexed HRM assay was developed to detect eight G6PD variants that are common in Thailand in two reactions (Fig. 1). By using a specific primer pair for each mutation, reaction 1 simultaneously detects four mutations [G6PD Gaohe (A95G), G6PD Mahidol (G487A), G6PD Viangchan (G871A), and G6PD Canton (G1376T)]. Reaction 2 concurrently detects another four mutations [G6PD Chinese-4 (G392T), G6PD Chinese-5 (C1024T), G6PD Union (C1360T), and G6PD Kaiping (G1388A)]. The assay is based on a single fluorescent dye, EvaGreen, without the need for a quenching probe. 
The primers were designed to detect the mutations by generating PCR products with distinctive melting temperatures, Tm. In contrast, no amplification occurred in WT samples. A peak at the corresponding Tm reveals the genotype of each sample. The gDNA of known G6PD mutations was used as positive controls. Overall, 70 G6PD-deficient samples and 28 non-deficient samples were used to evaluate the performance of the developed 4-plex HRM assay, while direct DNA sequencing was used as a reference test (Table 3).\n\nTable 3G6PD mutations of 70-deficient samples detected by 4-plex HRM and direct DNA sequencingMutationHRM assayDNA sequencingGaohe (A95G)4/704/70Chinese-4 (G392T)3/703/70Mahidol (G487A)5/705/70Viangchan (G871A)28/7028/70Chinese-5 (C1024T)1/701/70Canton (G1376T)14/7014/70Kaiping (G1388A)13/7013/70Mahidol + Canton (G487A + G1376T)1/701/70Gaohe + Kaiping (A95G + G1388A)1/701/70\nG6PD mutations of 70-deficient samples detected by 4-plex HRM and direct DNA sequencing\nIn comparison to direct DNA sequencing, the 4-plex HRM assay was 100 % sensitive [95 % confidence interval (CI): 94.87–100 %] and 100 % specific (95 % CI: 87.66–100 %), with no cross-reactivity for the detection of G6PD mutations (Table 4). Additionally, the multiplexed HRM assay could correctly identify the double mutations (G6PD Mahidol + Canton and G6PD Gaohe + Kaiping). This indicates that the developed method is reliable for detecting G6PD mutations.\n\nTable 4Performance of the HRM assay for the identification of G6PD mutationsParameterHRM assayTrue positive70/70True negative28/28False positive0/28False negative0/70Sensitivity100 %Specificity100 %Positive predictive value100 %Negative predictive value100 %\nPerformance of the HRM assay for the identification of G6PD mutations", "The prevalence of G6PD deficiency in people living in a malaria endemic area in Thailand, namely, Tha Song Yang District, Tak Province, was determined by G6PD activity assay (WST-8). Figure 2 indicates the G6PD enzyme activity of 725 samples measured by WST-8. The average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively. The adjusted male median (AMM) value was determined (10.31 ± 3.81 U/gHb) and defined as 100 % G6PD activity [56]. The WHO defined G6PD activity of less than 30 % as deficient and G6PD activity ranging between 30 and 80 % as intermediate [57].\n\nFig. 2G6PD activity of 725 individuals (368 males and 357 females) measured by the WST-8 method. The adjusted male median (AMM) value was determined to be 10.31 ± 3.81 U/gHb and defined as 100 % G6PD activity. Dotted horizontal lines indicate G6PD activity at 30 and 70 % of the AMM\nG6PD activity of 725 individuals (368 males and 357 females) measured by the WST-8 method. The adjusted male median (AMM) value was determined to be 10.31 ± 3.81 U/gHb and defined as 100 % G6PD activity. Dotted horizontal lines indicate G6PD activity at 30 and 70 % of the AMM\nNonetheless, G6PD activity of 70 % was used as a threshold for tafenoquine prescription [58, 59]. In this study, G6PD activity levels of less than 30 % and 30–70 % of the AMM were thus considered as deficient and intermediate, respectively. Subjects with G6PD activity over 70 % of the AMM were defined as normal. Based on the WST-8 assay, the prevalence of G6PD deficiency in the studied population was 20.55 % (149/725; Table 5). Prevalence rates of G6PD deficiency of 20.11 % (74/368) and 21.01 % (75/357) were observed in males and females, respectively. 
In addition, average G6PD activity of deficient males and females was 1.59 ± 0.89 and 1.69 ± 0.77 U/gHb, respectively. Intermediate G6PD activity (30–70 %) was found in 7.34 % (27/368) of males and 16.25 % (58/357) of females. Average G6PD activity of non-deficient (> 70 %) cases was 11.78 ± 2.11 U/gHb in males and 11.89 ± 2.49 U/gHb in females. The frequency distribution of G6PD activity of the 725 individuals measured by WST-8 is shown in Fig. 3a. The majority of the enzyme activities were distributed between 7 and 16 U/gHb. The frequency distribution of G6PD activity by sex is illustrated in Fig. 3b, c. A broader distribution of G6PD activities was seen in females than in males.\n\nTable 5Prevalence of G6PD deficiency determined by WST-8 enzyme activity assayG6PD statusMale, N (%)Female, N (%)Total, N (%)Deficient (< 30 %)47 (12.77 %)17 (4.76 %)64 (8.83 %)Intermediate (30–70 %)27 (7.34 %)58 (16.25 %)85 (11.72 %)Normal (> 70 %)294 (79.89 %)282 (78.99 %)576 (79.45 %)Total368 (100 %)357 (100 %)725 (100 %)\nPrevalence of G6PD deficiency determined by WST-8 enzyme activity assay\n\nFig. 3Frequency distribution of G6PD activity. a G6PD activity for all 725 samples, showing the majority of samples in the range between 7 and 16 U/gHb. The average G6PD activity of the 725 samples was 10.19 ± 3.96 U/gHb. G6PD activity for (b) 368 males and (c) 357 females. The average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively\nFrequency distribution of G6PD activity. a G6PD activity for all 725 samples, showing the majority of samples in the range between 7 and 16 U/gHb. The average G6PD activity of the 725 samples was 10.19 ± 3.96 U/gHb. G6PD activity for (b) 368 males and (c) 357 females. The average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively", "The developed 4-plex HRM assay was applied to screen for G6PD mutations in 725 blood samples. This assay identified 197 of the 725 (27.17 %) individuals as possessing at least one mutation with an adverse effect on function (Table 6). The prevalence of subjects carrying at least one G6PD mutation was 20.11 % (74/368) in males and 34.45 % (123/357) in females. The most common G6PD mutation detected in the studied population was G6PD Mahidol, accounting for 94.92 % of the total (n = 187; 72 in males and 115 in females). Other single mutations observed in the study included G6PD Canton (2.03 %; 4 in females), G6PD Viangchan (1.52 %; 1 in a male and 2 in females), and G6PD Chinese-5 (0.51 %; 1 in a male). The HRM assay could also detect the double mutant variants, which were G6PD Mahidol + Canton (0.51 %; 1 in a female) and G6PD Chinese-4 + Viangchan (0.51 %; 1 in a female). Figure 4 shows the G6PD activity of deficient and normal samples identified by HRM for males and females. G6PD enzyme activity of deficient subjects, especially in females, spanned from the deficient region (< 30 %) to the normal region (> 70 %). A large distribution of G6PD enzyme activities in females is caused by genetic mosaicism as a result of X-inactivation. The distribution of G6PD activity by mutation type is illustrated in Fig. 5. Non-variant individuals are also included in this plot. Variation of G6PD activities among the different mutations was observed. Moreover, compared with the G6PD enzyme activity in males with the same mutation, that in females was greater. Enzyme activity of 0.89 and 6.16 U/gHb was observed for G6PD Viangchan in males and females, respectively. 
Interestingly, G6PD Mahidol, a Class III variant with mild deficiency, which was the most prevalent variant in the studied population, exhibited a wide range of G6PD activities, in both males (range: 0.10–10.73 U/gHb, mean: 3.20 ± 2.46 U/gHb) and females (range: 0.10–17.72 U/gHb, mean: 7.72 ± 4.24 U/gHb). Notably, G6PD enzyme activity in the double mutant variants (G6PD Mahidol + Canton and G6PD Chinese-4 + Viangchan) was significantly decreased compared with that of the single mutants.\n\nTable 6Observed ranges of enzyme activity and G6PD genotypes identified by HRM assayGenderVariantNG6PD activity (U/gHb)Nucleotide changeAmino acid changeWHO Classification\nMale Mahidol720.10-10.73G487AGly163SerIII(n = 368)Chinese-5Viangchan112.100.89C1024TG871ALeu342PheVal291MetIIIINon-variant2947.16–18.05--Normal\nFemale Mahidol1150.10-17.72G487AGly163SerIII(n = 357)Canton46.50-10.48G1376TArg249LeuIIViangchan26.07–6.25G871AVal291MetIIMahidol + Canton14.12G487A + G1376TGly163Ser + Arg249LeuII/IIIChinese-4 + Viangchan10.69G392T + G871AGly131Val + Val291MetIINon-variant2344.96–18.67--Normal\nObserved ranges of enzyme activity and G6PD genotypes identified by HRM assay\nChinese-5\nViangchan\n1\n1\n2.10\n0.89\nC1024T\nG871A\nLeu342Phe\nVal291Met\nII\nII\n\nFig. 4G6PD activity of deficient and normal samples identified by HRM assay. G6PD activity in a male and b female subjects. The average G6PD activity of deficient males and females was 3.16 ± 2.45 and 7.66 ± 4.19 U/gHb, respectively. The average G6PD activity of normal males and females was 11.77 ± 2.13 and 11.76 ± 2.68 U/gHb, respectively\nG6PD activity of deficient and normal samples identified by HRM assay. G6PD activity in a male and b female subjects. The average G6PD activity of deficient males and females was 3.16 ± 2.45 and 7.66 ± 4.19 U/gHb, respectively. The average G6PD activity of normal males and females was 11.77 ± 2.13 and 11.76 ± 2.68 U/gHb, respectively\n\nFig. 5Distribution of G6PD activity by mutation type. Males carrying G6PD Mahidol showed G6PD enzyme activity ranging from 0.10 to 10.73 U/gHb. Females carrying G6PD Mahidol showed a wider range of G6PD enzyme activities (0.10–17.72 U/gHb). Females with G6PD Canton exhibited G6PD activity between 6.50 and 10.48 U/gHb. Females with G6PD Viangchan showed G6PD activity of 6.07–6.25 U/gHb. Normal males showed G6PD activity ranging from 7.16 to 18.05 U/gHb and normal females showed that between 4.96 and 18.67 U/gHb\nDistribution of G6PD activity by mutation type. Males carrying G6PD Mahidol showed G6PD enzyme activity ranging from 0.10 to 10.73 U/gHb. Females carrying G6PD Mahidol showed a wider range of G6PD enzyme activities (0.10–17.72 U/gHb). Females with G6PD Canton exhibited G6PD activity between 6.50 and 10.48 U/gHb. Females with G6PD Viangchan showed G6PD activity of 6.07–6.25 U/gHb. Normal males showed G6PD activity ranging from 7.16 to 18.05 U/gHb and normal females showed that between 4.96 and 18.67 U/gHb" ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Blood samples", "Phenotypic screening of G6PD deficiency using WST-8 assay", "DNA extraction", "Primer design", "PCR amplification and melting curve analysis", "PCR amplification and DNA sequencing", "Statistical analysis", "Results", "Development and validation of 4-plex HRM assay", "Phenotypic screening of G6PD deficiency by WST-8 assay", "\nGenotypic screening of G6PD deficiency using the multiplexed HRM assay\n", "Discussion", "Conclusions" ]
[ "Glucose-6-phosphate dehydrogenase (G6PD) deficiency is an inherited genetic defect and the most common enzymopathy, affecting approximately 500 million people worldwide with more than 200 variants have been identified [1]. G6PD deficiency is prevalent in tropical and subtropical areas where malaria is endemic, including Africa and Southeast Asia [2]. Evidence has suggested that G6PD deficiency confers protection against malaria infection [3–5]. However, this is still controversial because several studies have yielded contradictory results with some claiming that the protective effects of G6PD deficiency were observed in male hemizygotes only, in female heterozygotes only, or in both [6–9]. The major clinical concern associated with G6PD deficiency is haemolysis upon exposure to oxidant drugs, including anti-malarials such as 8-aminoquinolines (primaquine and tafenoquine) [10–13]. Primaquine and tafenoquine are the only medications capable of killing Plasmodium vivax and Plasmodium ovale at the dormant liver stage (hypnozoite). The World Health Organization (WHO) recommends that G6PD activity be measured before efforts to perform radical treatment of malaria [14].\nG6PD deficiency can be diagnosed by either phenotypic or genotypic assay. Phenotypic tests are based on the assessment of G6PD activity, measuring the production of reduced nicotinamide adenine dinucleotide phosphate (NADPH), which can be done quantitatively. The standard quantitative method is spectrophotometry, in which NADPH production is monitored at 340 nm [15]. This method is accurate and reliable, but is laborious, time-consuming, and requires complicated sample preparation and technical skills; as such, it is not commonly used for field-based screening. A colorimetric G6PD assay, based on water-soluble tetrazolium salts (WST-8), was developed as an alternative to the gold standard of spectrophotometry [16]. In this approach, no sample preparation is required and whole blood or dried blood spots can be used to test G6PD activity [17]. The WST-8-based assay can be used as a quantitative method or a qualitative one by the naked eye, offering the possibility of performing mass screening of G6PD deficiency in the context of malaria elimination using primaquine and tafenoquine [16, 18]. Although not the standard method for measuring G6PD activity, the sensitivity of WST-8 for detecting NAD(P)H was found to be five-fold greater than that of the spectrophotometric assay. Moreover, results obtained by measuring dehydrogenase activities in biological samples using WST-8 assay were in parallel with the standard method [19]. For G6PD testing, WST-8 was applied, in 96-well format, to the screening of G6PD deficiency in different populations [20–22]. The sensitivity and specificity of WST-8 for detecting G6PD activity < 30 % were 55 % and 98 %, respectively, compared with the spectrophotometric method [20]. In addition, sensitivity of 72 % and specificity of 98 % were reported for WST-8, in comparison with the standard quantitative G6PD assay (R&D Diagnostics) [21]. This suggests that WST-8 could be a key tool for G6PD testing, but it requires further development before deployment in the field.\nG6PD diagnostic tests are currently available, including qualitative tests such as fluorescent spot test (FST) and CareStart™ G6PD rapid diagnostic test, as well as quantitative point-of-care tests such as CareStart™ G6PD biosensor and STANDARD™ G6PD test. 
Unfortunately, these tests are not widely used for G6PD testing because they are too expensive and can be difficult to interpret [23–25]. Qualitative tests are reliable for identifying G6PD deficiency in hemizygous males and homozygous females, but are unable to identify heterozygous females [26–28]. This is because, in heterozygous females, a wide range of G6PD activities are observed as a result of the random X-chromosome inactivation or lyonization [29]. To date, over 200 G6PD variants have been identified, which form genotypes associated with differences in the degree of deficiency and vulnerability to haemolysis [30]. Moreover, G6PD activities vary among G6PD-deficient individuals carrying the same genotype [31, 32].\nG6PD genotyping can be performed using restriction fragment length polymorphism [33, 34], amplification refractory mutation system [35, 36], gold nanoparticles-based assay [37], high resolution melting (HRM) curve analysis [38, 39] and DNA sequencing [40, 41]. Additionally, multiplex genotyping systems are currently available. DiaPlexC™ G6PD Genotyping Kit (Asian type) can detect eight mutations, namely, G6PD Vanua Lava, G6PD Mahidol, G6PD Mediterranean, G6PD Coimbra, G6PD Viangchan, G6PD Union, G6PD Canton, and G6PD Kaiping. Thus, this assay offers high-throughput screening of G6PD mutations by one-step PCR [42]. However, after PCR amplification, an additional gel electrophoresis step is required to check the size of the amplified PCR products, which is impractical for large population screening. The HRM assay is a powerful and reliable tool that has been widely used in the detection of gene mutations [43–45]. Previously, HRM assays were applied to detect G6PD mutations in different population groups [38, 46–48]. However, previous HRM assays could detect only one or two mutations at a time. Although a multiplexed system to detect six mutations in four reactions was later described, the assay system and interpretation of results were complex [49].\nThe prevalence of G6PD deficiency in Thailand ranges between 5 and 18 %, depending on the geographical area [50–54]. More than 20 G6PD variants have been identified in the country, among which the most common is G6PD Viangchan, followed by G6PD Mahidol, G6PD Canton, G6PD Union, G6PD Kaiping, G6PD Gaohe, G6PD Chinese-4, G6PD Chinese-5, G6PD Valladolid, G6PD Coimbra and G6PD Aures. Along the Thai–Myanmar border, a malaria endemic area, prevalence of G6PD deficiency of 9–18 % was reported in males [26]. Moreover, a rate of G6PD deficiency of 7.4 % was reported from the screening of 1,340 newborns [27]. G6PD Mahidol was shown to be the most common variant in this population, accounting for 88 % of all variants, followed by G6PD Chinese-4, G6PD Viangchan, and G6PD Mediterranean. Generally, to avoid the risk of haemolysis upon malaria treatment, G6PD testing is recommended before the administration of primaquine and tafenoquine. The aim of this study was to develop a molecular diagnostic test to enable an accurate, reliable and high-throughput platform for detecting G6PD mutations, which can be used as a supplement to the screening of G6PD deficiency, especially in heterozygous females. To validate the method, 70 G6PD-deficient and 28 non-deficient samples were tested and the results were compared with the findings obtained by direct DNA sequencing. The potential utility of the developed HRM test for the detection of G6PD variants in a study area in Thailand was then examined. 
The correlation between genotype and the phenotype of enzyme activity (as determined using WST-8) was also analysed.

Methods

Blood samples

Blood samples were collected in ethylenediaminetetraacetic acid (EDTA) tubes and transported to the laboratory at 4 °C. Thereafter, samples were stored at −20 °C until use, for approximately 1–3 months. Under these conditions, the integrity of samples for phenotypic analysis was maintained, as blood samples were recently reported to be stable for up to 7–12 months when stored in EDTA tubes at −20 °C [55].

For the validation of HRM assays, 70 G6PD-deficient and 28 non-deficient blood samples were collected from healthy Thai volunteers at the Faculty of Medicine Ramathibodi Hospital. All samples were spectrophotometrically tested for G6PD activity and genotyped by DNA sequencing. Ethical approval for this part of the study was provided by the Committee on Human Rights Related to Research Involving Human Subjects, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand (approval number MURA 2018/252).

For the screening of G6PD deficiency, 725 blood samples (from 368 males and 357 females) were collected in EDTA tubes from residents living along the Thai–Myanmar border, a malaria endemic area, in Tha Song Yang District, Tak Province, Thailand. Ethical approval for this part of the study was provided by the Human Ethics Committee, Faculty of Tropical Medicine, Mahidol University (approval number MUTM 2019-016-01).

Phenotypic screening of G6PD deficiency using WST-8 assay

WST-8 is not a standard method for measuring G6PD activity. It was nonetheless used for phenotypic screening in this study because its performance was found to be indistinguishable from that of the spectrophotometric method, in which the absorbance of NAD(P)H is measured at 340 nm [19]. The method showed high accuracy, with a relative error of 0.25–0.7 %, and good precision, with within-run and between-run coefficients of variation of 0.6–4.5 %. WST-8 also exhibited excellent reproducibility, with Z′ values of 0.90–0.99.
Although WST-8 offers advantages for the diagnosis of G6PD deficiency, the method will require further development before being deployed in a clinical context [20].

Reaction mixtures of 100 µl, consisting of 20 mM Tris-HCl pH 8.0, 10 mM MgCl2, 500 µM glucose-6-phosphate (G6P), 100 µM NADP+, and 100 µM WST-8 (Sigma-Aldrich, Darmstadt, Germany), were mixed with 2 µl of blood in a 96-well plate. The absorbance was measured at 450 nm, with a reference at 650 nm, using a microplate reader (Sunrise; Tecan, Männedorf, Switzerland). The absorbance at 450 nm of a reaction mixture set up without the G6P substrate was used for background subtraction. G6PD activity was calculated using an NADPH calibration curve. Haemoglobin concentration was measured using Drabkin's reagent (Sigma-Aldrich), and G6PD activity was reported as units (U) per gram of haemoglobin (gHb). Experiments were performed in triplicate.
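The conversion from raw absorbance to U/gHb follows directly from the description above: background-subtract the no-substrate control, convert the absorbance change to NADPH via the calibration curve, and normalize by the haemoglobin mass in the 2 µl blood input. The sketch below illustrates this arithmetic; the function name, the linear calibration slope, and the single-interval rate estimate are assumptions for illustration, not the authors' actual analysis code.

```python
def g6pd_activity_u_per_ghb(a450_with_g6p: float,
                            a450_without_g6p: float,
                            calib_slope_abs_per_nmol: float,
                            reaction_min: float,
                            hb_g_per_dl: float,
                            blood_ul: float = 2.0) -> float:
    """Estimate G6PD activity in U/gHb (1 U = 1 umol NADPH per minute).

    Assumes a linear NADPH calibration curve with slope
    `calib_slope_abs_per_nmol` (A450 units per nmol NADPH) and a rate
    computed over a single interval of `reaction_min` minutes.
    """
    # Background subtraction with the no-substrate control, as described above.
    delta_a = a450_with_g6p - a450_without_g6p
    # Absorbance -> nmol NADPH via the calibration curve, then to a rate.
    nmol_nadph = delta_a / calib_slope_abs_per_nmol
    umol_per_min = nmol_nadph / 1000.0 / reaction_min
    # Grams of haemoglobin in the blood input (g/dL = g per 0.1 L).
    g_hb = hb_g_per_dl * (blood_ul * 1e-6) / 0.1
    return umol_per_min / g_hb
```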
DNA extraction

DNA extraction was performed using the QIAsymphony DNA Mini Kit (QIAGEN, Hilden, Germany), in accordance with the manufacturer's instructions. Blood samples of 100 µl were extracted and eluted into a final volume of 50 µl. DNA concentration was measured using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA).

Primer design

Primers were designed to detect eight common G6PD mutations in the Thai population: G6PD Gaohe (A95G), G6PD Chinese-4 (G392T), G6PD Mahidol (G487A), G6PD Viangchan (G871A), G6PD Chinese-5 (C1024T), G6PD Union (C1360T), G6PD Canton (G1376T) and G6PD Kaiping (G1388A; Table 1). The primers were designed so that each mutation yields a PCR product with a distinctive melting temperature (Tm; Fig. 1).

Table 1 HRM primers used in this study

| Reaction | Primer name | G6PD variant (amino acid change) | Primer sequence (5'→3') | Primer concentration (nM) | Amplicon size (bp) | Tm of PCR product (°C) |
|---|---|---|---|---|---|---|
| 1 | A95G_F | Gaohe (His32Arg) | TTCCATCAGTCGGATACACG | 600 | 100 | 81.05 |
| 1 | A95G_R | | AGGCATGGAGCAGGCACTTC | 600 | | |
| 1 | G487A_F | Mahidol (Gly163Ser) | TCCGGGCTCCCAGCAGAA | 400 | 87 | 84.80 |
| 1 | G487A_R | | GGTTGGACAGCCGGTCA | 400 | | |
| 1 | G871A_F | Viangchan (Val291Met) | GGCTTTCTCTCAGGTCAAGA | 400 | 66 | 78.32 |
| 1 | G871A_R | | CCCAGGACCACATTGTTGGC | 400 | | |
| 1 | G1376T_F | Canton (Arg459Leu) | CCTCAGCGACGAGCTCCT | 600 | 99 | 83.65 |
| 1 | G1376T_R | | CTGCCATAAATATAGGGGATGG | 600 | | |
| 2 | G392T_F | Chinese-4 (Gly131Val) | CATGAATGCCCTCCACCTGGT | 200 | 87 | 85.05 |
| 2 | G392T_R | | TTCTTGGTGACGGCCTCGTA | 200 | | |
| 2 | C1024T_F | Chinese-5 (Leu342Phe) | CACTTTTGCAGCCGTCGTCT | 400 | 99 | 83.10 |
| 2 | C1024T_R | | CACACAGGGCATGCCCAGTT | 400 | | |
| 2 | C1360T_F | Union (Arg454Cys) | GAGCCAGATGCACTTCGTGT | 200 | 127 | 87.67 |
| 2 | C1360T_R | | GAGGGGACATAGTATGGCTT | 200 | | |
| 2 | G1388A_F | Kaiping (Arg463His) | GCTCCGTGAGGCCTGGCA | 400 | 57 | 78.97 |
| 2 | G1388A_R | | TTCTCCAGCTCAATCTGGTGC | 400 | | |

Fig. 1 Identification of G6PD mutations by the multiplexed HRM assay. The assay is based on base complementarity between the primers and the DNA template. Mutant samples produce a peak at the corresponding Tm, while wild-type (WT) samples yield no PCR product, giving a flat line.
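Because each variant maps to a distinct product Tm (Table 1), genotype calling reduces to matching observed derivative-melt peaks against the expected Tm values. The sketch below illustrates that matching step; the ±0.5 °C tolerance and the function itself are illustrative assumptions, not the Rotor-Gene Q software's algorithm.

```python
# Expected product Tm (degrees C) per multiplex reaction, from Table 1.
EXPECTED_TM = {
    1: {81.05: "Gaohe (A95G)", 84.80: "Mahidol (G487A)",
        78.32: "Viangchan (G871A)", 83.65: "Canton (G1376T)"},
    2: {85.05: "Chinese-4 (G392T)", 83.10: "Chinese-5 (C1024T)",
        87.67: "Union (C1360T)", 78.97: "Kaiping (G1388A)"},
}

def call_variants(reaction: int, observed_peaks: list[float],
                  tol: float = 0.5) -> list[str]:
    """Return the variant(s) whose expected Tm matches an observed melt peak."""
    calls = []
    for peak in observed_peaks:
        for tm, variant in EXPECTED_TM[reaction].items():
            if abs(peak - tm) <= tol:
                calls.append(variant)
    # An empty list means no amplification, i.e. wild type for this reaction.
    return calls

# e.g. call_variants(1, [84.7]) -> ["Mahidol (G487A)"]
```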
PCR amplification and melting curve analysis

Assay conditions, including primer concentrations, assay protocol, and detection conditions, were optimized to maximize the sensitivity and specificity of the assay and to minimize cross-reactivity. The multiplexed HRM assay was performed in a total volume of 12.5 µl, containing 6.25 µl of 2× HRM Type-It mix (QIAGEN), the primers at the concentrations given in Table 1, molecular-grade water and 2.5 µl of the gDNA template (3–10 ng/µl). PCR amplification and melting curve analysis were performed on the Rotor-Gene Q (QIAGEN) under the following conditions: 1 cycle of 95 °C for 5 min, then 30 cycles of 95 °C for 10 s, 63 °C for 30 s, and 72 °C for 10 s. HRM analysis was then performed by melting from 75 to 90 °C, reading at every 0.1 °C step with 2 s of stabilization. Positive controls (gDNA with known mutations, confirmed by DNA sequencing) and negative controls (gDNA of G6PD wild type, confirmed by DNA sequencing) were included in every run. Data analysis was carried out using the Rotor-Gene Q software. Experiments were performed in triplicate.

PCR amplification and DNA sequencing

To validate the HRM results, PCR and sequencing primers were designed as shown in Table 2. For DNA amplification, extracted gDNA was used as the template. The g6pd gene was amplified using four primer sets (Exon2F−Exon2R, Exon3F−Exon5R, Exon6F−Exon8R, and Exon9F−Exon13R), which together cover all 13 exons. The PCR reaction was set up in a final volume of 50 µl, containing 1× Taq Buffer with (NH4)2SO4, 2.5 mM MgCl2, 200 µM of each dNTP, 0.25 µM of each primer, 50 ng gDNA and 1.25 U of Taq DNA polymerase (Thermo Fisher Scientific). The thermal cycling profile was as follows: initial denaturation at 95 °C for 3 min; 35 cycles of denaturation at 95 °C for 30 s, annealing for 30 s, and extension at 72 °C for 1 min; followed by a final extension at 72 °C for 10 min. The annealing temperature was 60 °C for the primer sets Exon2F−Exon2R, Exon3F−Exon5R, and Exon6F−Exon8R, and 65 °C for Exon9F−Exon13R. PCR products were gel-purified and sequenced (Bio Basic, Ontario, Canada).

Table 2 Sequencing primers used in this study

| Primer name | Primer sequence (5'→3') |
|---|---|
| Exon2F | GGGCAATCAGGTGTCACC |
| Exon2R | GGCTTTTAAGATTGGGGCCT |
| Exon3F | AGACATGCTTGTGGCCCAGTA |
| Exon5F | GGACACTGACTTCTGAGGGCA |
| Exon5R | AAGGGAGGGCAACGGCAA |
| Exon6F | CACGGGGGCGAGGAGGTT |
| Exon8F | CGGTTTTATGATTCAGTGATA |
| Exon8R | AGGGCATGCTCCTGGGGA |
| Exon9F | GTGAGCAGAGCCAAGCAG |
| Exon11F | CAGATACAAGGTGCCCTACAG |
| Exon13R | TGGCGGGGGTGGAGGTGG |
Statistical analysis

Data are presented as mean ± SD. Statistical analyses and plotting of graphs were performed using GraphPad Prism (GraphPad Software, La Jolla, CA, USA). To assess the performance of the multiplexed HRM assay in the detection of G6PD mutations, the numbers of true positives, true negatives, false positives, and false negatives were determined, and the following parameters were calculated: sensitivity = [true positives/(true positives + false negatives)] × 100; specificity = [true negatives/(true negatives + false positives)] × 100; positive predictive value = [true positives/(true positives + false positives)] × 100; and negative predictive value = [true negatives/(true negatives + false negatives)] × 100.
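The four formulas above translate directly into code. The sketch below is a hypothetical helper, not part of the study's analysis (which used GraphPad Prism), included only to make the definitions concrete; note that the negative predictive value uses true negatives over (true negatives + false negatives).

```python
def diagnostic_performance(tp: int, tn: int, fp: int, fn: int) -> dict[str, float]:
    """Sensitivity, specificity, PPV and NPV (as percentages) from a 2x2 table."""
    return {
        "sensitivity": 100.0 * tp / (tp + fn),
        "specificity": 100.0 * tn / (tn + fp),
        "ppv": 100.0 * tp / (tp + fp),
        "npv": 100.0 * tn / (tn + fn),
    }

# With the validation counts reported in the Results (70/70 and 28/28):
print(diagnostic_performance(tp=70, tn=28, fp=0, fn=0))
# -> all four metrics are 100.0
```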
Results

Development and validation of 4-plex HRM assay

A multiplexed HRM assay was developed to detect, in two reactions, eight G6PD variants that are common in Thailand (Fig. 1). Using a specific primer pair for each mutation, reaction 1 simultaneously detects four mutations [G6PD Gaohe (A95G), G6PD Mahidol (G487A), G6PD Viangchan (G871A), and G6PD Canton (G1376T)], and reaction 2 concurrently detects another four [G6PD Chinese-4 (G392T), G6PD Chinese-5 (C1024T), G6PD Union (C1360T), and G6PD Kaiping (G1388A)].
The assay is based on a single fluorescent dye, EvaGreen, without the need for a quenching probe. Because the primers generate PCR products with distinctive melting temperatures, a peak at the corresponding Tm reveals the genotype of each sample, whereas no amplification occurs in WT samples. The gDNA of known G6PD mutations was used as positive controls. Overall, 70 G6PD-deficient samples and 28 non-deficient samples were used to evaluate the performance of the developed 4-plex HRM assay, with direct DNA sequencing as the reference test (Table 3).

Table 3 G6PD mutations of 70 deficient samples detected by 4-plex HRM and direct DNA sequencing

| Mutation | HRM assay | DNA sequencing |
|---|---|---|
| Gaohe (A95G) | 4/70 | 4/70 |
| Chinese-4 (G392T) | 3/70 | 3/70 |
| Mahidol (G487A) | 5/70 | 5/70 |
| Viangchan (G871A) | 28/70 | 28/70 |
| Chinese-5 (C1024T) | 1/70 | 1/70 |
| Canton (G1376T) | 14/70 | 14/70 |
| Kaiping (G1388A) | 13/70 | 13/70 |
| Mahidol + Canton (G487A + G1376T) | 1/70 | 1/70 |
| Gaohe + Kaiping (A95G + G1388A) | 1/70 | 1/70 |

In comparison with direct DNA sequencing, the 4-plex HRM assay was 100 % sensitive [95 % confidence interval (CI): 94.87–100 %] and 100 % specific (95 % CI: 87.66–100 %), with no cross-reactivity in the detection of G6PD mutations (Table 4). The multiplexed HRM assay also correctly identified the double mutations (G6PD Mahidol + Canton and G6PD Gaohe + Kaiping), indicating that the developed method is reliable for detecting G6PD mutations.

Table 4 Performance of the HRM assay for the identification of G6PD mutations

| Parameter | HRM assay |
|---|---|
| True positive | 70/70 |
| True negative | 28/28 |
| False positive | 0/28 |
| False negative | 0/70 |
| Sensitivity | 100 % |
| Specificity | 100 % |
| Positive predictive value | 100 % |
| Negative predictive value | 100 % |
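The reported CIs are what an exact (Clopper–Pearson) binomial interval gives for 70/70 and 28/28. A sketch of that calculation is shown below using scipy's beta distribution; the use of scipy is an assumption for illustration, not a tool named by the study.

```python
from scipy.stats import beta

def clopper_pearson(successes: int, n: int, alpha: float = 0.05):
    """Exact binomial (Clopper-Pearson) confidence interval, as fractions."""
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

print(clopper_pearson(70, 70))  # sensitivity: (~0.9487, 1.0) -> 94.87-100 %
print(clopper_pearson(28, 28))  # specificity: (~0.8766, 1.0) -> 87.66-100 %
```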
Phenotypic screening of G6PD deficiency by WST-8 assay

The prevalence of G6PD deficiency among people living in a malaria endemic area of Thailand, Tha Song Yang District, Tak Province, was determined by the WST-8 G6PD activity assay. Figure 2 shows the G6PD enzyme activity of the 725 samples. The average G6PD activity was 9.99 ± 4.14 U/gHb in males and 10.35 ± 3.81 U/gHb in females. The adjusted male median (AMM) was determined to be 10.31 ± 3.81 U/gHb and was defined as 100 % G6PD activity [56]. The WHO defines G6PD activity of less than 30 % as deficient and activity of 30–80 % as intermediate [57]; however, 70 % activity has been used as the threshold for tafenoquine prescription [58, 59]. In this study, G6PD activity below 30 % of the AMM was therefore considered deficient, activity of 30–70 % intermediate, and activity above 70 % normal.

Fig. 2 G6PD activity of 725 individuals (368 males and 357 females) measured by the WST-8 method. The adjusted male median (AMM) value was determined to be 10.31 ± 3.81 U/gHb and defined as 100 % G6PD activity. Dotted horizontal lines indicate G6PD activity at 30 and 70 % of the AMM.

Based on the WST-8 assay, the prevalence of G6PD deficiency (activity < 70 % of the AMM) in the studied population was 20.55 % (149/725; Table 5): 20.11 % (74/368) in males and 21.01 % (75/357) in females.
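Classification against the AMM is a pair of threshold comparisons. The sketch below restates it; the function and the hard-coded AMM value are illustrative only (the AMM itself is derived from the male activity distribution, per [56]).

```python
AMM = 10.31  # U/gHb, the adjusted male median reported above (100 % activity)

def classify_g6pd(activity_u_per_ghb: float, amm: float = AMM) -> str:
    """Classify a sample as deficient/intermediate/normal relative to the AMM."""
    percent = 100.0 * activity_u_per_ghb / amm
    if percent < 30.0:
        return "deficient"
    if percent <= 70.0:
        return "intermediate"
    return "normal"

# e.g. classify_g6pd(1.6) -> 'deficient'; classify_g6pd(11.8) -> 'normal'
```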
Average G6PD activity of non-deficient (> 70 %) cases was 11.78 ± 2.11 U/gHb in males and 11.89 ± 2.49 U/gHb in females. The frequency distribution of G6PD activity of the 725 individuals measured by WST-8 is shown in Fig. 3a. The majority of the enzyme activities were distributed between 7 and 16 U/gHb. The frequency distribution of G6PD activity by sex is illustrated in Fig. 3b, c. A broader distribution of G6PD activities was seen in females than in males.\n\nTable 5Prevalence of G6PD deficiency determined by WST-8 enzyme activity assayG6PD statusMale, N (%)Female, N (%)Total, N (%)Deficient (< 30 %)47 (12.77 %)17 (4.76 %)64 (8.83 %)Intermediate (30–70 %)27 (7.34 %)58 (16.25 %)85 (11.72 %)Normal (> 70 %)294 (79.89 %)282 (78.99 %)576 (79.45 %)Total368 (100 %)357 (100 %)725 (100 %)\nPrevalence of G6PD deficiency determined by WST-8 enzyme activity assay\n\nFig. 3Frequency distribution of G6PD activity. a G6PD activity for all 725 samples, showing the majority of samples in the range between 7 and 16 U/gHb. The average G6PD activity of the 725 samples was 10.19 ± 3.96 U/gHb. G6PD activity for (b) 368 males and (c) 357 females. The average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively\nFrequency distribution of G6PD activity. a G6PD activity for all 725 samples, showing the majority of samples in the range between 7 and 16 U/gHb. The average G6PD activity of the 725 samples was 10.19 ± 3.96 U/gHb. G6PD activity for (b) 368 males and (c) 357 females. The average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively\nThe prevalence of G6PD deficiency in people living in a malaria endemic area in Thailand, namely, Tha Song Yang District, Tak Province, was determined by G6PD activity assay (WST-8). Figure 2 indicates the G6PD enzyme activity of 725 samples measured by WST-8. The average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively. The adjusted male median (AMM) value was determined (10.31 ± 3.81 U/gHb) and defined as 100 % G6PD activity [56]. The WHO defined G6PD activity of less than 30 % as deficient and G6PD activity ranging between 30 and 80 % as intermediate [57].\n\nFig. 2G6PD activity of 725 individuals (368 males and 357 females) measured by the WST-8 method. The adjusted male median (AMM) value was determined to be 10.31 ± 3.81 U/gHb and defined as 100 % G6PD activity. Dotted horizontal lines indicate G6PD activity at 30 and 70 % of the AMM\nG6PD activity of 725 individuals (368 males and 357 females) measured by the WST-8 method. The adjusted male median (AMM) value was determined to be 10.31 ± 3.81 U/gHb and defined as 100 % G6PD activity. Dotted horizontal lines indicate G6PD activity at 30 and 70 % of the AMM\nNonetheless, G6PD activity of 70 % was used as a threshold for tafenoquine prescription [58, 59]. In this study, G6PD activity levels of less than 30 % and 30–70 % of the AMM were thus considered as deficient and intermediate, respectively. Subjects with G6PD activity over 70 % of the AMM were defined as normal. Based on the WST-8 assay, the prevalence of G6PD deficiency in the studied population was 20.55 % (149/725; Table 5). Prevalence rates of G6PD deficiency of 20.11 % (74/368) and 21.01 % (75/357) were observed in males and females, respectively. In addition, average G6PD activity of deficient males and females was 1.59 ± 0.89 and 1.69 ± 0.77 U/gHb, respectively. 
Intermediate G6PD activity (30–70 %) was found in 7.34 % (27/368) of males and 16.25 % (58/357) of females. Average G6PD activity of non-deficient (> 70 %) cases was 11.78 ± 2.11 U/gHb in males and 11.89 ± 2.49 U/gHb in females. The frequency distribution of G6PD activity of the 725 individuals measured by WST-8 is shown in Fig. 3a. The majority of the enzyme activities were distributed between 7 and 16 U/gHb. The frequency distribution of G6PD activity by sex is illustrated in Fig. 3b, c. A broader distribution of G6PD activities was seen in females than in males.\n\nTable 5Prevalence of G6PD deficiency determined by WST-8 enzyme activity assayG6PD statusMale, N (%)Female, N (%)Total, N (%)Deficient (< 30 %)47 (12.77 %)17 (4.76 %)64 (8.83 %)Intermediate (30–70 %)27 (7.34 %)58 (16.25 %)85 (11.72 %)Normal (> 70 %)294 (79.89 %)282 (78.99 %)576 (79.45 %)Total368 (100 %)357 (100 %)725 (100 %)\nPrevalence of G6PD deficiency determined by WST-8 enzyme activity assay\n\nFig. 3Frequency distribution of G6PD activity. a G6PD activity for all 725 samples, showing the majority of samples in the range between 7 and 16 U/gHb. The average G6PD activity of the 725 samples was 10.19 ± 3.96 U/gHb. G6PD activity for (b) 368 males and (c) 357 females. The average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively\nFrequency distribution of G6PD activity. a G6PD activity for all 725 samples, showing the majority of samples in the range between 7 and 16 U/gHb. The average G6PD activity of the 725 samples was 10.19 ± 3.96 U/gHb. G6PD activity for (b) 368 males and (c) 357 females. The average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively\n\nGenotypic screening of G6PD deficiency using the multiplexed HRM assay\n The developed 4-plex HRM assay was applied to screen for G6PD mutations in 725 blood samples. This assay identified 197 of the 725 (27.17 %) individuals as possessing at least one mutation with an adverse effect on function (Table 6). The prevalence of subjects carrying at least one G6PD mutation was 20.11 % (74/368) in males and 34.45 % (123/357) in females. The most common G6PD mutation detected in the studied population was G6PD Mahidol, accounting for 94.92 % of the total (n = 187; 72 in males and 115 in females). Other single mutations observed in the study included G6PD Canton (2.03 %; 4 in females), G6PD Viangchan (1.52 %; 1 in a male and 2 in females), and G6PD Chinese-5 (0.51 %; 1 in a male). The HRM assay could also detect the double mutant variants, which were G6PD Mahidol + Canton (0.51 %; 1 in a female) and G6PD Chinese-4 + Viangchan (0.51 %; 1 in a female). Figure 4 shows the G6PD activity of deficient and normal samples identified by HRM for males and females. G6PD enzyme activity of deficient subjects, especially in females, spanned from the deficient region (< 30 %) to the normal region (> 70 %). A large distribution of G6PD enzyme activities in females is caused by genetic mosaicism as a result of X-inactivation. The distribution of G6PD activity by mutation type is illustrated in Fig. 5. Non-variant individuals are also included in this plot. Variation of G6PD activities among the different mutations was observed. Moreover, compared with the G6PD enzyme activity in males with the same mutation, that in females was greater. Enzyme activity of 0.89 and 6.16 U/gHb was observed for G6PD Viangchan in males and females, respectively. 
Interestingly, G6PD Mahidol, a Class III variant with mild deficiency, which was the most prevalent variant in the studied population, exhibited a wide range of G6PD activities, in both males (range: 0.10–10.73 U/gHb, mean: 3.20 ± 2.46 U/gHb) and females (range: 0.10–17.72 U/gHb, mean: 7.72 ± 4.24 U/gHb). Notably, G6PD enzyme activity in the double mutant variants (G6PD Mahidol + Canton and G6PD Chinese-4 + Viangchan) was significantly decreased compared with that of the single mutants.\n\nTable 6Observed ranges of enzyme activity and G6PD genotypes identified by HRM assayGenderVariantNG6PD activity (U/gHb)Nucleotide changeAmino acid changeWHO Classification\nMale Mahidol720.10-10.73G487AGly163SerIII(n = 368)Chinese-5Viangchan112.100.89C1024TG871ALeu342PheVal291MetIIIINon-variant2947.16–18.05--Normal\nFemale Mahidol1150.10-17.72G487AGly163SerIII(n = 357)Canton46.50-10.48G1376TArg249LeuIIViangchan26.07–6.25G871AVal291MetIIMahidol + Canton14.12G487A + G1376TGly163Ser + Arg249LeuII/IIIChinese-4 + Viangchan10.69G392T + G871AGly131Val + Val291MetIINon-variant2344.96–18.67--Normal\nObserved ranges of enzyme activity and G6PD genotypes identified by HRM assay\nChinese-5\nViangchan\n1\n1\n2.10\n0.89\nC1024T\nG871A\nLeu342Phe\nVal291Met\nII\nII\n\nFig. 4G6PD activity of deficient and normal samples identified by HRM assay. G6PD activity in a male and b female subjects. The average G6PD activity of deficient males and females was 3.16 ± 2.45 and 7.66 ± 4.19 U/gHb, respectively. The average G6PD activity of normal males and females was 11.77 ± 2.13 and 11.76 ± 2.68 U/gHb, respectively\nG6PD activity of deficient and normal samples identified by HRM assay. G6PD activity in a male and b female subjects. The average G6PD activity of deficient males and females was 3.16 ± 2.45 and 7.66 ± 4.19 U/gHb, respectively. The average G6PD activity of normal males and females was 11.77 ± 2.13 and 11.76 ± 2.68 U/gHb, respectively\n\nFig. 5Distribution of G6PD activity by mutation type. Males carrying G6PD Mahidol showed G6PD enzyme activity ranging from 0.10 to 10.73 U/gHb. Females carrying G6PD Mahidol showed a wider range of G6PD enzyme activities (0.10–17.72 U/gHb). Females with G6PD Canton exhibited G6PD activity between 6.50 and 10.48 U/gHb. Females with G6PD Viangchan showed G6PD activity of 6.07–6.25 U/gHb. Normal males showed G6PD activity ranging from 7.16 to 18.05 U/gHb and normal females showed that between 4.96 and 18.67 U/gHb\nDistribution of G6PD activity by mutation type. Males carrying G6PD Mahidol showed G6PD enzyme activity ranging from 0.10 to 10.73 U/gHb. Females carrying G6PD Mahidol showed a wider range of G6PD enzyme activities (0.10–17.72 U/gHb). Females with G6PD Canton exhibited G6PD activity between 6.50 and 10.48 U/gHb. Females with G6PD Viangchan showed G6PD activity of 6.07–6.25 U/gHb. Normal males showed G6PD activity ranging from 7.16 to 18.05 U/gHb and normal females showed that between 4.96 and 18.67 U/gHb\nThe developed 4-plex HRM assay was applied to screen for G6PD mutations in 725 blood samples. This assay identified 197 of the 725 (27.17 %) individuals as possessing at least one mutation with an adverse effect on function (Table 6). The prevalence of subjects carrying at least one G6PD mutation was 20.11 % (74/368) in males and 34.45 % (123/357) in females. The most common G6PD mutation detected in the studied population was G6PD Mahidol, accounting for 94.92 % of the total (n = 187; 72 in males and 115 in females). 
Other single mutations observed in the study included G6PD Canton (2.03 %; 4 in females), G6PD Viangchan (1.52 %; 1 in a male and 2 in females), and G6PD Chinese-5 (0.51 %; 1 in a male). The HRM assay could also detect the double mutant variants, which were G6PD Mahidol + Canton (0.51 %; 1 in a female) and G6PD Chinese-4 + Viangchan (0.51 %; 1 in a female). Figure 4 shows the G6PD activity of deficient and normal samples identified by HRM for males and females. G6PD enzyme activity of deficient subjects, especially in females, spanned from the deficient region (< 30 %) to the normal region (> 70 %). A large distribution of G6PD enzyme activities in females is caused by genetic mosaicism as a result of X-inactivation. The distribution of G6PD activity by mutation type is illustrated in Fig. 5. Non-variant individuals are also included in this plot. Variation of G6PD activities among the different mutations was observed. Moreover, compared with the G6PD enzyme activity in males with the same mutation, that in females was greater. Enzyme activity of 0.89 and 6.16 U/gHb was observed for G6PD Viangchan in males and females, respectively. Interestingly, G6PD Mahidol, a Class III variant with mild deficiency, which was the most prevalent variant in the studied population, exhibited a wide range of G6PD activities, in both males (range: 0.10–10.73 U/gHb, mean: 3.20 ± 2.46 U/gHb) and females (range: 0.10–17.72 U/gHb, mean: 7.72 ± 4.24 U/gHb). Notably, G6PD enzyme activity in the double mutant variants (G6PD Mahidol + Canton and G6PD Chinese-4 + Viangchan) was significantly decreased compared with that of the single mutants.\n\nTable 6Observed ranges of enzyme activity and G6PD genotypes identified by HRM assayGenderVariantNG6PD activity (U/gHb)Nucleotide changeAmino acid changeWHO Classification\nMale Mahidol720.10-10.73G487AGly163SerIII(n = 368)Chinese-5Viangchan112.100.89C1024TG871ALeu342PheVal291MetIIIINon-variant2947.16–18.05--Normal\nFemale Mahidol1150.10-17.72G487AGly163SerIII(n = 357)Canton46.50-10.48G1376TArg249LeuIIViangchan26.07–6.25G871AVal291MetIIMahidol + Canton14.12G487A + G1376TGly163Ser + Arg249LeuII/IIIChinese-4 + Viangchan10.69G392T + G871AGly131Val + Val291MetIINon-variant2344.96–18.67--Normal\nObserved ranges of enzyme activity and G6PD genotypes identified by HRM assay\nChinese-5\nViangchan\n1\n1\n2.10\n0.89\nC1024T\nG871A\nLeu342Phe\nVal291Met\nII\nII\n\nFig. 4G6PD activity of deficient and normal samples identified by HRM assay. G6PD activity in a male and b female subjects. The average G6PD activity of deficient males and females was 3.16 ± 2.45 and 7.66 ± 4.19 U/gHb, respectively. The average G6PD activity of normal males and females was 11.77 ± 2.13 and 11.76 ± 2.68 U/gHb, respectively\nG6PD activity of deficient and normal samples identified by HRM assay. G6PD activity in a male and b female subjects. The average G6PD activity of deficient males and females was 3.16 ± 2.45 and 7.66 ± 4.19 U/gHb, respectively. The average G6PD activity of normal males and females was 11.77 ± 2.13 and 11.76 ± 2.68 U/gHb, respectively\n\nFig. 5Distribution of G6PD activity by mutation type. Males carrying G6PD Mahidol showed G6PD enzyme activity ranging from 0.10 to 10.73 U/gHb. Females carrying G6PD Mahidol showed a wider range of G6PD enzyme activities (0.10–17.72 U/gHb). Females with G6PD Canton exhibited G6PD activity between 6.50 and 10.48 U/gHb. Females with G6PD Viangchan showed G6PD activity of 6.07–6.25 U/gHb. 
", "A multiplexed HRM assay was developed to detect, in two reactions, eight G6PD variants that are common in Thailand (Fig. 1). Using a specific primer pair for each mutation, reaction 1 simultaneously detects four mutations [G6PD Gaohe (A95G), G6PD Mahidol (G487A), G6PD Viangchan (G871A), and G6PD Canton (G1376T)], and reaction 2 concurrently detects the other four [G6PD Chinese-4 (G392T), G6PD Chinese-5 (C1024T), G6PD Union (C1360T), and G6PD Kaiping (G1388A)]. The assay is based on a single fluorescent dye, EvaGreen, without the need for a quenching probe. The primers were designed to detect the mutations by generating PCR products with distinctive melting temperatures (Tm); in contrast, no amplification occurred in WT samples, so a peak at the corresponding Tm reveals the genotype of each sample. gDNA samples with known G6PD mutations were used as positive controls. Overall, 70 G6PD-deficient samples and 28 non-deficient samples were used to evaluate the performance of the developed 4-plex HRM assay, with direct DNA sequencing as the reference test (Table 3).

Table 3. G6PD mutations of the 70 deficient samples detected by 4-plex HRM and direct DNA sequencing
Mutation | HRM assay | DNA sequencing
Gaohe (A95G) | 4/70 | 4/70
Chinese-4 (G392T) | 3/70 | 3/70
Mahidol (G487A) | 5/70 | 5/70
Viangchan (G871A) | 28/70 | 28/70
Chinese-5 (C1024T) | 1/70 | 1/70
Canton (G1376T) | 14/70 | 14/70
Kaiping (G1388A) | 13/70 | 13/70
Mahidol + Canton (G487A + G1376T) | 1/70 | 1/70
Gaohe + Kaiping (A95G + G1388A) | 1/70 | 1/70

In comparison with direct DNA sequencing, the 4-plex HRM assay was 100 % sensitive [95 % confidence interval (CI): 94.87–100 %] and 100 % specific (95 % CI: 87.66–100 %), with no cross-reactivity in the detection of G6PD mutations (Table 4). Additionally, the multiplexed HRM assay correctly identified the double mutations (G6PD Mahidol + Canton and G6PD Gaohe + Kaiping), indicating that the developed method is reliable for detecting G6PD mutations.

Table 4. Performance of the HRM assay for the identification of G6PD mutations
Parameter | HRM assay
True positive | 70/70
True negative | 28/28
False positive | 0/28
False negative | 0/70
Sensitivity | 100 %
Specificity | 100 %
Positive predictive value | 100 %
Negative predictive value | 100 %", "The prevalence of G6PD deficiency in people living in a malaria endemic area in Thailand, namely, Tha Song Yang District, Tak Province, was determined by G6PD activity assay (WST-8). Figure 2 shows the G6PD enzyme activity of the 725 samples measured by WST-8. The average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively. The adjusted male median (AMM) value was determined (10.31 ± 3.81 U/gHb) and defined as 100 % G6PD activity [56].
The WHO defines G6PD activity of less than 30 % as deficient and activity between 30 and 80 % as intermediate [57].

Fig. 2 G6PD activity of 725 individuals (368 males and 357 females) measured by the WST-8 method. The adjusted male median (AMM) value was determined to be 10.31 ± 3.81 U/gHb and defined as 100 % G6PD activity. Dotted horizontal lines indicate G6PD activity at 30 and 70 % of the AMM.

Nonetheless, G6PD activity of 70 % was used as the threshold for tafenoquine prescription [58, 59]. In this study, G6PD activity levels of less than 30 % and of 30–70 % of the AMM were therefore considered deficient and intermediate, respectively; subjects with G6PD activity over 70 % of the AMM were defined as normal. Based on the WST-8 assay, the prevalence of G6PD deficiency in the studied population was 20.55 % (149/725; Table 5). Prevalence rates of 20.11 % (74/368) and 21.01 % (75/357) were observed in males and females, respectively. The average G6PD activity of deficient males and females was 1.59 ± 0.89 and 1.69 ± 0.77 U/gHb, respectively. Intermediate G6PD activity (30–70 %) was found in 7.34 % (27/368) of males and 16.25 % (58/357) of females. Average G6PD activity of non-deficient (> 70 %) cases was 11.78 ± 2.11 U/gHb in males and 11.89 ± 2.49 U/gHb in females. The frequency distribution of G6PD activity of the 725 individuals measured by WST-8 is shown in Fig. 3a. The majority of the enzyme activities were distributed between 7 and 16 U/gHb. The frequency distribution by sex is illustrated in Fig. 3b, c; a broader distribution of G6PD activities was seen in females than in males.

Table 5. Prevalence of G6PD deficiency determined by WST-8 enzyme activity assay
G6PD status | Male, N (%) | Female, N (%) | Total, N (%)
Deficient (< 30 %) | 47 (12.77 %) | 17 (4.76 %) | 64 (8.83 %)
Intermediate (30–70 %) | 27 (7.34 %) | 58 (16.25 %) | 85 (11.72 %)
Normal (> 70 %) | 294 (79.89 %) | 282 (78.99 %) | 576 (79.45 %)
Total | 368 (100 %) | 357 (100 %) | 725 (100 %)

Fig. 3 Frequency distribution of G6PD activity. (a) G6PD activity for all 725 samples, showing the majority of samples in the range between 7 and 16 U/gHb; the average G6PD activity of the 725 samples was 10.19 ± 3.96 U/gHb. G6PD activity for (b) 368 males and (c) 357 females; the average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively.
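The AMM-based classification described above is straightforward to express in code. The sketch below is a minimal illustration, not the authors' pipeline: it assumes the usual AMM procedure of recomputing the male median after excluding males below 10 % of the overall male median (following reference [56]), and applies the 30 %/70 % cut-offs used in this study.

```python
# Minimal sketch (assumptions noted above): AMM computation and classification.
import numpy as np

def adjusted_male_median(male_activities):
    """AMM: male median recomputed after excluding severely deficient males."""
    med = np.median(male_activities)
    kept = [a for a in male_activities if a >= 0.10 * med]
    return np.median(kept)

def classify(activity, amm):
    """Deficient < 30 %, intermediate 30-70 %, normal > 70 % of the AMM."""
    pct = 100.0 * activity / amm
    if pct < 30.0:
        return "deficient"
    if pct <= 70.0:
        return "intermediate"
    return "normal"

# Hypothetical usage: activities in U/gHb
males = [11.2, 9.8, 1.4, 12.0, 10.5]
amm = adjusted_male_median(males)
print(round(amm, 2), classify(6.5, amm))
```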
", "HRM assays have been widely used to detect gene mutations [43–45]. However, for G6PD genotyping, most of the developed HRM assays are singleplex or duplex, and can therefore detect only one or two mutations simultaneously [38, 46, 47]. To enable the detection of multiple mutations, more than one fluorescent dye must be included in the reaction mixture, which usually makes the reaction more expensive [39, 60, 61].
As reported in this paper, a 4-plex HRM assay for the detection of G6PD mutations common in Thailand and elsewhere in Asia was developed, using a single dye and a run time of 80 min. Evaluation of the 4-plex HRM assay using 70 G6PD-deficient samples indicated that it was accurate and reliable for detecting G6PD mutations (specificity and sensitivity of 100 %) compared with DNA sequencing. Among the 70 deficient assay validation samples, G6PD Viangchan was the most prevalent variant, followed by G6PD Canton and G6PD Kaiping. This is in accordance with previous reports showing that G6PD Viangchan is the most common variant in Thais [28, 51]. Two double mutations, G6PD Mahidol + Canton and G6PD Gaohe + Kaiping, were also identified in the studied population. G6PD Mahidol + Canton was first identified in people living along the Thai–Myanmar border, where Karen and Burman are the major population groups [62].
After validation, the multiplexed HRM assay was applied to screen for G6PD mutations in 725 people living in a malaria endemic area along the Thai–Myanmar border. The prevalence of G6PD deficiency in this population was also determined by phenotypic enzyme activity assay using WST-8. Considering a 30 % activity cut-off, the overall prevalence of G6PD deficiency was 8.83 % by the WST-8 assay; with the upper limit of a 70 % cut-off, the overall prevalence increased to 20.55 %. By sex, at the 70 % cut-off, the prevalence of G6PD deficiency was 20.11 % in males and 21.01 % in females. In contrast, by the multiplexed HRM assay, the frequency of G6PD mutations was 20.11 % in males and 34.45 % in females. Thus, the prevalence of G6PD deficiency in males is equivalent between the two assays, but in females, genetic analysis using HRM indicated a frequency of g6pd gene mutations (34.45 %) notably greater than the prevalence of G6PD deficiency measured by WST-8 (21.01 %). The multiplexed HRM assay identified four single mutants (94.92 % G6PD Mahidol, 2.03 % G6PD Canton, 1.52 % G6PD Viangchan, and 0.51 % G6PD Chinese-5) and two double mutants (0.51 % G6PD Mahidol + Canton and 0.51 % G6PD Chinese-4 + Viangchan) in the studied population. In good agreement with previous reports, G6PD Mahidol was the most common variant in the Karen population [26, 62].
The double mutant G6PD Chinese-4 + Viangchan was identified here for the first time.
A broad range of G6PD activities was observed among the different genotypes. G6PD Mahidol showed enzyme activity ranges of 0.10–10.73 and 0.10–17.72 U/gHb in males and females, respectively. For G6PD Canton, the range of enzyme activity was 6.50–10.48 U/gHb in females. A wider distribution of enzyme activities was observed in females carrying G6PD Mahidol than in males. Additionally, 5 males and 58 females carrying G6PD Mahidol and 3 females carrying G6PD Canton showed G6PD activity even greater than the 70 % activity cut-off (7 U/gHb). Similar findings were previously described in genotype–phenotype association studies [31, 32]. This is mainly attributable to the fact that the degree of deficiency and vulnerability to haemolysis vary substantially among the different genotypes. Furthermore, because G6PD deficiency is an X-linked genetic defect, males exhibit hemizygous deficiency, while females can exhibit either homozygous or heterozygous deficiency. In heterozygous females, a wide range of G6PD activities can be observed, in which the activity of the normal erythrocyte population may compensate for the lost activity of the deficient erythrocyte population. Abnormally high G6PD activity observed in persons carrying G6PD mutations could also be attributable to the following factors: first, the limited performance of the WST-8 assay used in this study; second, an elevated reticulocyte count in blood samples, as young red blood cells usually exhibit greater G6PD enzyme activity than aged red blood cells; and last, the presence of leukocytes in the tested samples (in this study, G6PD activity was measured from whole blood), because leukocytes retain their nucleus and can therefore continuously synthesize the G6PD enzyme.
G6PD-deficient individuals have increased susceptibility to haemolysis upon exposure to oxidative agents, including primaquine and tafenoquine, which are the only medications effective for the radical treatment of infection by P. vivax and P. ovale. To ensure successful and safe treatment, a single dose of 300 mg of tafenoquine, or 0.5 mg/kg/day primaquine for 14 days, should be prescribed only in patients with at least 70 % G6PD activity [63]. The effect of the primaquine dose on haemolysis was reported to differ between G6PD-normal and G6PD Mahidol heterozygous individuals [64]; these heterozygous females were identified as normal by the FST assay, while the quantitative assay revealed G6PD activity of 62 and 99 %. Tafenoquine was also reported to cause dose-dependent haemolysis in G6PD Mahidol heterozygous individuals with enzyme activity of 61–80 % [13]. As such, drug-induced haemolysis associated with G6PD deficiency depends on two major factors: the level of G6PD activity, which is determined by G6PD genotype, and the exposure to oxidative stress, namely, metabolites of anti-malarial drugs (8-aminoquinolines). Therefore, the drug dose and the ability to metabolize the parent compounds also contribute to the severity of haemolysis in malaria treatment.
Currently, the cut-off of 70 % of the AMM is widely accepted as an appropriate threshold for determining whether or not to administer tafenoquine [58, 59, 63]. However, the cut-off value should be carefully defined and tested in each population group.
Based on the obtained results, to enable accurate determination of the prevalence of G6PD deficiency using WST-8, different cut-off values are required in males and females. The upper limit of 70 % of the AMM is recommended for males. However, in heterozygous females, neither the lower (30 %) nor the upper (70 %) limit is reliable for screening G6PD deficiency. It should be noted that the results reported here are based on the WST-8 assay, which is an alternative to the standard method for measuring G6PD activity; with other G6PD tests, the results might differ. For male populations, phenotypic tests are useful for screening G6PD deficiency and enabling safe treatment. In contrast, in heterozygous females, in whom a wide range of G6PD activities is observed, a phenotypic enzyme assay alone might be insufficient to identify G6PD deficiency. Hence, alternative approaches such as genetic analysis could be useful for determining whether drugs should be administered in populations suspected of having G6PD deficiency. The multiplexed HRM assay developed here could be useful for identifying G6PD variants in Thai populations. Although other multiplex systems for genetic analysis are currently available, they might be unsuitable for large population screening. The G6PD gene chip kit can detect 13–14 mutations common in Chinese populations, but must be combined with a hybridization kit, which is time-consuming [32, 65]. The DiaPlexC™ (Asian type) can simultaneously detect eight mutations, but requires an additional gel electrophoresis step to check the amplified products [42]. Additionally, that kit might not be applicable in regions where populations carry other G6PD mutations (e.g., G6PD Gaohe, G6PD Chinese-4, and G6PD Chinese-5), for which it cannot test.
It should be mentioned that, in this study, no mutation was detected in 12 females who were considered likely to be G6PD-deficient because their observed G6PD activity was lower than 7 U/gHb. This might have been because of the limited performance (sensitivity of 55–72 %) of the WST-8 assay used in this study [20, 21]. Alternatively, these subjects might carry G6PD mutations for which the multiplexed HRM assay cannot test; DNA sequencing of the whole g6pd gene might be required to detect mutations in such individuals. Additionally, to enable mutational screening in more diverse population groups, the assay should be expanded to include other mutations, such as G6PD Mediterranean, G6PD Valladolid, G6PD Coimbra, and G6PD Aures [26]. It should also be noted that the multiplexed HRM assays developed here are not able to identify the zygosity of samples and thus require further development before being deployed. Nevertheless, the HRM assays could be of great use for analysing G6PD mutations as a supplement to phenotypic G6PD screening in heterozygous females, as well as in populations suspected of having G6PD deficiency. At present, G6PD genotyping is primarily performed in G6PD-deficient individuals. However, more data on the genotype–phenotype association of G6PD deficiency in diverse population groups should be obtained, which requires a high-throughput screening platform.", "A multiplexed HRM assay for the detection of eight common G6PD mutations in Thailand was developed. The performance of the assay was excellent, with 100 % specificity and 100 % sensitivity.
The prevalence of G6PD mutations in 725 people living in a malaria endemic area along the Thai–Myanmar border was determined to be 27.17 % by HRM, which is greater than the prevalence of G6PD deficiency determined by the WST-8 phenotypic assay (20.55 %). Performing a phenotypic assay alone might thus be inadequate, and the result might not be an accurate predictor of G6PD deficiency, especially in heterozygous females. As an option to overcome this problem, the multiplexed HRM assay is rapid, accurate and reliable for detecting G6PD mutations, enabling high-throughput screening. This assay could be useful as a supplementary approach for high-throughput screening of G6PD deficiency before the administration of 8-aminoquinolines in malaria endemic areas." ]
[ null, null, null, null, null, null, null, null, null, "results", null, null, null, "discussion", "conclusion" ]
[ "G6PD deficiency", "G6PD mutations", "High-resolution melting", "Phenotype", "G6PD enzyme activity", "G6PD genotype", "WST-8" ]
Background: Glucose-6-phosphate dehydrogenase (G6PD) deficiency is an inherited genetic defect and the most common enzymopathy, affecting approximately 500 million people worldwide; more than 200 variants have been identified [1]. G6PD deficiency is prevalent in tropical and subtropical areas where malaria is endemic, including Africa and Southeast Asia [2]. Evidence has suggested that G6PD deficiency confers protection against malaria infection [3–5]. However, this is still controversial because several studies have yielded contradictory results, with some claiming that the protective effects of G6PD deficiency were observed in male hemizygotes only, in female heterozygotes only, or in both [6–9]. The major clinical concern associated with G6PD deficiency is haemolysis upon exposure to oxidant drugs, including anti-malarials such as the 8-aminoquinolines (primaquine and tafenoquine) [10–13]. Primaquine and tafenoquine are the only medications capable of killing Plasmodium vivax and Plasmodium ovale at the dormant liver stage (hypnozoite). The World Health Organization (WHO) recommends that G6PD activity be measured before radical treatment of malaria is attempted [14].
G6PD deficiency can be diagnosed by either phenotypic or genotypic assay. Phenotypic tests are based on the assessment of G6PD activity, measuring the production of reduced nicotinamide adenine dinucleotide phosphate (NADPH), which can be done quantitatively. The standard quantitative method is spectrophotometry, in which NADPH production is monitored at 340 nm [15]. This method is accurate and reliable, but it is laborious and time-consuming and requires complicated sample preparation and technical skills; as such, it is not commonly used for field-based screening. A colorimetric G6PD assay, based on water-soluble tetrazolium salts (WST-8), was developed as an alternative to the gold standard of spectrophotometry [16]. In this approach, no sample preparation is required, and whole blood or dried blood spots can be used to test G6PD activity [17]. The WST-8-based assay can be used as a quantitative method or as a qualitative one read by the naked eye, offering the possibility of mass screening for G6PD deficiency in the context of malaria elimination using primaquine and tafenoquine [16, 18]. Although not the standard method for measuring G6PD activity, the sensitivity of WST-8 for detecting NAD(P)H was found to be five-fold greater than that of the spectrophotometric assay. Moreover, results obtained by measuring dehydrogenase activities in biological samples using the WST-8 assay paralleled those of the standard method [19]. For G6PD testing, WST-8 was applied, in 96-well format, to the screening of G6PD deficiency in different populations [20–22]. The sensitivity and specificity of WST-8 for detecting G6PD activity < 30 % were 55 % and 98 %, respectively, compared with the spectrophotometric method [20]. In addition, a sensitivity of 72 % and a specificity of 98 % were reported for WST-8 in comparison with a standard quantitative G6PD assay (R&D Diagnostics) [21]. This suggests that WST-8 could be a key tool for G6PD testing, but it requires further development before deployment in the field.
G6PD diagnostic tests are currently available, including qualitative tests such as the fluorescent spot test (FST) and the CareStart™ G6PD rapid diagnostic test, as well as quantitative point-of-care tests such as the CareStart™ G6PD biosensor and the STANDARD™ G6PD test.
Unfortunately, these tests are not widely used for G6PD testing because they are too expensive and can be difficult to interpret [23–25]. Qualitative tests are reliable for identifying G6PD deficiency in hemizygous males and homozygous females, but are unable to identify heterozygous females [26–28]. This is because, in heterozygous females, a wide range of G6PD activities is observed as a result of random X-chromosome inactivation, or lyonization [29]. To date, over 200 G6PD variants have been identified, forming genotypes associated with differences in the degree of deficiency and vulnerability to haemolysis [30]. Moreover, G6PD activities vary among G6PD-deficient individuals carrying the same genotype [31, 32].
G6PD genotyping can be performed using restriction fragment length polymorphism [33, 34], the amplification refractory mutation system [35, 36], gold nanoparticle-based assays [37], high resolution melting (HRM) curve analysis [38, 39] and DNA sequencing [40, 41]. Additionally, multiplex genotyping systems are currently available. The DiaPlexC™ G6PD Genotyping Kit (Asian type) can detect eight mutations, namely, G6PD Vanua Lava, G6PD Mahidol, G6PD Mediterranean, G6PD Coimbra, G6PD Viangchan, G6PD Union, G6PD Canton, and G6PD Kaiping. Thus, this assay offers high-throughput screening of G6PD mutations by one-step PCR [42]. However, after PCR amplification, an additional gel electrophoresis step is required to check the size of the amplified PCR products, which is impractical for large population screening. The HRM assay is a powerful and reliable tool that has been widely used in the detection of gene mutations [43–45]. Previously, HRM assays were applied to detect G6PD mutations in different population groups [38, 46–48]. However, previous HRM assays could detect only one or two mutations at a time. Although a multiplexed system to detect six mutations in four reactions was later described, the assay system and interpretation of the results were complex [49].
The prevalence of G6PD deficiency in Thailand ranges between 5 and 18 %, depending on the geographical area [50–54]. More than 20 G6PD variants have been identified in the country, among which the most common is G6PD Viangchan, followed by G6PD Mahidol, G6PD Canton, G6PD Union, G6PD Kaiping, G6PD Gaohe, G6PD Chinese-4, G6PD Chinese-5, G6PD Valladolid, G6PD Coimbra and G6PD Aures. Along the Thai–Myanmar border, a malaria endemic area, a prevalence of G6PD deficiency of 9–18 % was reported in males [26]. Moreover, a rate of G6PD deficiency of 7.4 % was reported from the screening of 1,340 newborns [27]. G6PD Mahidol was shown to be the most common variant in this population, accounting for 88 % of all variants, followed by G6PD Chinese-4, G6PD Viangchan, and G6PD Mediterranean. Generally, to avoid the risk of haemolysis upon malaria treatment, G6PD testing is recommended before the administration of primaquine and tafenoquine.
The aim of this study was to develop a molecular diagnostic test providing an accurate, reliable and high-throughput platform for detecting G6PD mutations, which can be used as a supplement to the screening of G6PD deficiency, especially in heterozygous females. To validate the method, 70 G6PD-deficient and 28 non-deficient samples were tested, and the results were compared with the findings obtained by direct DNA sequencing. The potential utility of the developed HRM test for the detection of G6PD variants in a study area in Thailand was then examined.
The correlation between genotype and the phenotype of enzyme activity (as determined using WST-8) was also analysed.
Methods:
Blood samples: Blood samples were collected in ethylenediaminetetraacetic acid (EDTA) tubes and transported to the laboratory under storage at 4 °C. Thereafter, samples were stored at −20 °C until use, for approximately 1−3 months. Under these conditions, the integrity of samples for phenotypic analysis was maintained, as blood samples were recently reported to be stable for up to 7–12 months when stored in EDTA tubes at −20 °C [55]. For the validation of the HRM assays, 70 G6PD-deficient and 28 non-deficient blood samples were collected from healthy Thai volunteers at the Faculty of Medicine Ramathibodi Hospital. All samples were spectrophotometrically tested for G6PD activity and genotyped by DNA sequencing. Ethical approval for this part of the study was provided by the Committee on Human Rights Related to Research Involving Human Subjects, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand (approval number MURA 2018/252). For the screening of G6PD deficiency, 725 blood samples (from 368 males and 357 females) were collected in EDTA tubes from residents living along the Thai–Myanmar border, a malaria endemic area, in Tha Song Yang District, Tak Province, Thailand. Ethical approval was provided by the Human Ethics Committee, Faculty of Tropical Medicine, Mahidol University (approval number MUTM 2019-016-01).
Phenotypic screening of G6PD deficiency using WST-8 assay: WST-8 is not a standard method for measuring G6PD activity. However, this assay was used for phenotypic screening in this study because its performance was found to be indistinguishable from that of the spectrophotometric method, which measures the absorbance of NAD(P)H at 340 nm [19]. The method showed high accuracy, with a relative error (%) of 0.7–0.25. For precision, the within-run and between-run coefficients of variation (%) of the WST-8 method ranged between 0.6 and 4.5. WST-8 also exhibited excellent reproducibility, with Z′ values of 0.90–0.99. Although WST-8 provides advantages regarding the diagnosis of G6PD deficiency, the method will require further development before being deployed in a clinical context [20]. Reaction mixtures of 100 µl, consisting of 20 mM Tris-HCl pH 8.0, 10 mM MgCl2, 500 µM glucose-6-phosphate (G6P), 100 µM NADP+, and 100 µM WST-8 (Sigma-Aldrich, Darmstadt, Germany), were mixed with a 2 µl blood sample in a 96-well plate. The absorbance was measured at 450 nm with a reference at 650 nm using a microplate reader (Sunrise; Tecan, Männedorf, Switzerland). The absorbance at 450 nm of a reaction mixture set up in the absence of the G6P substrate was used for background subtraction. G6PD activity was calculated using an NADPH calibration curve. Haemoglobin concentration was measured using Drabkin’s reagent (Sigma-Aldrich). G6PD activity was reported as units (U) per gram of haemoglobin (gHb). Experiments were performed in triplicate.
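As an illustration of the activity calculation just described (background-corrected absorbance, converted to NADPH via the calibration curve, then normalized to haemoglobin as U/gHb), a minimal Python sketch follows. All numeric values, the calibration slope, and the incubation time are hypothetical placeholders, not the study's parameters.

```python
# Minimal sketch (not the authors' code): converting WST-8 absorbance readings
# into G6PD activity in U/gHb. One unit (U) = 1 umol NADPH produced per minute.
def g6pd_activity_u_per_ghb(a450, a650, a_background, nadph_slope_abs_per_umol,
                            minutes, blood_volume_ml, hb_g_per_ml):
    corrected = (a450 - a650) - a_background           # reference and no-G6P blank
    umol_nadph = corrected / nadph_slope_abs_per_umol  # from calibration curve
    units = umol_nadph / minutes                       # umol/min = U
    grams_hb = hb_g_per_ml * blood_volume_ml           # Hb via Drabkin's reagent
    return units / grams_hb

# Hypothetical example: 2 uL of whole blood, 30 min incubation
print(round(g6pd_activity_u_per_ghb(0.85, 0.05, 0.10, 2.0, 30, 0.002, 0.15), 2))
```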
DNA extraction: DNA extraction was performed using the QIAsymphony DNA Mini Kit (QIAGEN, Hilden, Germany), in accordance with the manufacturer’s instructions. Blood samples of 100 µl were extracted and eluted into a final volume of 50 µl. DNA concentration was measured using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA).
Primer design: Primers were designed to detect eight common G6PD mutations in the Thai population: G6PD Gaohe (A95G), G6PD Chinese-4 (G392T), G6PD Mahidol (G487A), G6PD Viangchan (G871A), G6PD Chinese-5 (C1024T), G6PD Union (C1360T), G6PD Canton (G1376T) and G6PD Kaiping (G1388A) (Table 1). The primers were designed to detect the mutations by generating PCR products with distinctive melting temperatures (Tm; Fig. 1).
Table 1. HRM primers used in this study
Reaction | Primer name | G6PD variant (amino acid change) | Primer sequence (5’→3’) | Primer concentration (nM) | Amplicon size (bp) | Tm of PCR product (°C)
1 | A95G_F | Gaohe | TTCCATCAGTCGGATACACG | 600 | 100 | 81.05
1 | A95G_R | (His32Arg) | AGGCATGGAGCAGGCACTTC | 600 | |
1 | G487A_F | Mahidol | TCCGGGCTCCCAGCAGAA | 400 | 87 | 84.80
1 | G487A_R | (Gly163Ser) | GGTTGGACAGCCGGTCA | 400 | |
1 | G871A_F | Viangchan | GGCTTTCTCTCAGGTCAAGA | 400 | 66 | 78.32
1 | G871A_R | (Val291Met) | CCCAGGACCACATTGTTGGC | 400 | |
1 | G1376T_F | Canton | CCTCAGCGACGAGCTCCT | 600 | 99 | 83.65
1 | G1376T_R | (Arg459Leu) | CTGCCATAAATATAGGGGATGG | 600 | |
2 | G392T_F | Chinese-4 | CATGAATGCCCTCCACCTGGT | 200 | 87 | 85.05
2 | G392T_R | (Gly131Val) | TTCTTGGTGACGGCCTCGTA | 200 | |
2 | C1024T_F | Chinese-5 | CACTTTTGCAGCCGTCGTCT | 400 | 99 | 83.10
2 | C1024T_R | (Leu342Phe) | CACACAGGGCATGCCCAGTT | 400 | |
2 | C1360T_F | Union | GAGCCAGATGCACTTCGTGT | 200 | 127 | 87.67
2 | C1360T_R | (Arg454Cys) | GAGGGGACATAGTATGGCTT | 200 | |
2 | G1388A_F | Kaiping | GCTCCGTGAGGCCTGGCA | 400 | 57 | 78.97
2 | G1388A_R | (Arg463His) | TTCTCCAGCTCAATCTGGTGC | 400 | |

Fig. 1 Identification of G6PD mutations by the multiplexed HRM assay. The assay is based on base complementarity between primers and the DNA template. Mutant samples produce a peak at the corresponding Tm, while WT samples do not produce PCR products, giving a flat line.

PCR amplification and melting curve analysis: Assay conditions, including primer concentrations, the assay protocol, and detection conditions, were optimized to maximize the sensitivity and specificity of the assay and to minimize cross-reactivity.
The multiplexed HRM assay was performed in a total volume of 12.5 µl, containing 6.25 µl of 2× HRM Type-It mix (QIAGEN), the indicated concentration of each primer (Table 1), molecular-grade water and 2.5 µl of the gDNA template (3–10 ng/µl). PCR amplification and melting curve analysis were performed on the Rotor-Gene Q (QIAGEN) under the following conditions: 1 cycle of 95 °C for 5 min, then 30 cycles of 95 °C for 10 s, 63 °C for 30 s, and 72 °C for 10 s. Subsequently, HRM analysis was performed by melting from 75 to 90 °C, reading at every 0.1 °C step with 2 s of stabilization. Positive controls (gDNA with known mutations, confirmed by DNA sequencing) and negative controls (gDNA of G6PD wild type (WT), confirmed by DNA sequencing) were included in every run. Data analysis was carried out using the Rotor-Gene Q software. Experiments were performed in triplicate.
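Genotype calling from the melt data reduces to matching each observed melt-peak Tm against the expected product Tm values in Table 1 for the relevant reaction. The following sketch illustrates this logic; the ±0.5 °C tolerance and the example readings are assumptions for illustration, not values from the paper.

```python
# Minimal sketch (tolerance assumed): assign a variant to each melt peak by
# nearest expected product Tm from Table 1, per multiplex reaction.
EXPECTED_TM = {
    1: {"Gaohe": 81.05, "Viangchan": 78.32, "Canton": 83.65, "Mahidol": 84.80},
    2: {"Kaiping": 78.97, "Chinese-5": 83.10, "Chinese-4": 85.05, "Union": 87.67},
}

def call_variants(reaction, peak_tms, tol=0.5):
    """Return the variant whose expected Tm is nearest each observed peak.

    No peaks (a flat melt curve) means no mutation detected in this reaction.
    """
    calls = []
    for tm in peak_tms:
        variant, expected = min(EXPECTED_TM[reaction].items(),
                                key=lambda kv: abs(kv[1] - tm))
        if abs(expected - tm) <= tol:
            calls.append(variant)
    return calls or ["no mutation detected"]

print(call_variants(1, [84.7]))        # -> ['Mahidol'] (hypothetical reading)
print(call_variants(1, [84.7, 83.7]))  # double mutant, e.g. Mahidol + Canton
print(call_variants(2, []))            # wild type for the reaction 2 targets
```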
PCR amplification and DNA sequencing: To validate the HRM results, PCR and sequencing primers were designed as shown in Table 2. For DNA amplification, extracted gDNA was used as the template. The g6pd gene was amplified using four primer sets (Exon2F−Exon2R, Exon3F−Exon5R, Exon6F−Exon8R, and Exon9F−Exon13R), which cover all 13 exons. The PCR reaction was set up in a final volume of 50 µl, containing 1× Taq Buffer with (NH4)2SO4, 2.5 mM MgCl2, 200 µM of each dNTP, 0.25 µM of each primer, 50 ng of gDNA and 1.25 U of Taq DNA polymerase (Thermo Fisher Scientific). The thermal cycling profile was as follows: initial denaturation at 95 °C for 3 min; 35 cycles of denaturation at 95 °C for 30 s, annealing for 30 s, and extension at 72 °C for 1 min; followed by final extension at 72 °C for 10 min. The annealing temperature was 60 °C for the primer sets Exon2F−Exon2R, Exon3F−Exon5R, and Exon6F−Exon8R, and 65 °C for Exon9F−Exon13R. PCR products were subjected to gel purification and sequenced (Bio Basic, Ontario, Canada).

Table 2. Sequencing primers used in this study
Primer name | Primer sequence (5’→3’)
Exon2F | GGGCAATCAGGTGTCACC
Exon2R | GGCTTTTAAGATTGGGGCCT
Exon3F | AGACATGCTTGTGGCCCAGTA
Exon5F | GGACACTGACTTCTGAGGGCA
Exon5R | AAGGGAGGGCAACGGCAA
Exon6F | CACGGGGGCGAGGAGGTT
Exon8F | CGGTTTTATGATTCAGTGATA
Exon8R | AGGGCATGCTCCTGGGGA
Exon9F | GTGAGCAGAGCCAAGCAG
Exon11F | CAGATACAAGGTGCCCTACAG
Exon13R | TGGCGGGGGTGGAGGTGG
Statistical analysis: Data are presented as mean ± SD. Statistical analyses and plotting of graphs were performed using GraphPad Prism (GraphPad Software, La Jolla, CA, USA). To assess the performance of the multiplexed HRM assay in the detection of G6PD mutations, the numbers of true positives, true negatives, false positives, and false negatives were determined, and the following parameters were calculated: sensitivity = [true positives/(true positives + false negatives)] × 100; specificity = [true negatives/(true negatives + false positives)] × 100; positive predictive value = [true positives/(true positives + false positives)] × 100; and negative predictive value = [true negatives/(true negatives + false negatives)] × 100.
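For concreteness, the four parameters defined above can be computed as follows. This is a minimal sketch using the counts reported in Table 4, not the authors' code.

```python
# Minimal sketch of the diagnostic metrics defined above.
def diagnostic_metrics(tp, tn, fp, fn):
    return {
        "sensitivity_%": 100.0 * tp / (tp + fn),
        "specificity_%": 100.0 * tn / (tn + fp),
        "ppv_%": 100.0 * tp / (tp + fp),
        "npv_%": 100.0 * tn / (tn + fn),
    }

print(diagnostic_metrics(tp=70, tn=28, fp=0, fn=0))  # all 100 % in this study
```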
Results:
Development and validation of 4-plex HRM assay: A multiplexed HRM assay was developed to detect, in two reactions, eight G6PD variants that are common in Thailand (Fig. 1). Using a specific primer pair for each mutation, reaction 1 simultaneously detects four mutations [G6PD Gaohe (A95G), G6PD Mahidol (G487A), G6PD Viangchan (G871A), and G6PD Canton (G1376T)], and reaction 2 concurrently detects the other four [G6PD Chinese-4 (G392T), G6PD Chinese-5 (C1024T), G6PD Union (C1360T), and G6PD Kaiping (G1388A)].
Results

Development and validation of the 4-plex HRM assay: A multiplexed HRM assay was developed to detect, in two reactions, eight G6PD variants that are common in Thailand (Fig. 1). By using a specific primer pair for each mutation, reaction 1 simultaneously detects four mutations [G6PD Gaohe (A95G), G6PD Mahidol (G487A), G6PD Viangchan (G871A), and G6PD Canton (G1376T)]. Reaction 2 concurrently detects another four mutations [G6PD Chinese-4 (G392T), G6PD Chinese-5 (C1024T), G6PD Union (C1360T), and G6PD Kaiping (G1388A)]. The assay is based on a single fluorescent dye, EvaGreen, without the need for a quenching probe. The primers were designed to detect the mutations by generating PCR products with distinctive melting temperatures (Tm); in WT samples, no amplification occurs. A peak at the corresponding Tm thus reveals the genotype of each sample. gDNA with known G6PD mutations was used as positive controls. Overall, 70 G6PD-deficient samples and 28 non-deficient samples were used to evaluate the performance of the developed 4-plex HRM assay, with direct DNA sequencing as the reference test (Table 3).

Table 3 G6PD mutations of 70 deficient samples detected by 4-plex HRM and direct DNA sequencing
  Mutation                            HRM assay   DNA sequencing
  Gaohe (A95G)                        4/70        4/70
  Chinese-4 (G392T)                   3/70        3/70
  Mahidol (G487A)                     5/70        5/70
  Viangchan (G871A)                   28/70       28/70
  Chinese-5 (C1024T)                  1/70        1/70
  Canton (G1376T)                     14/70       14/70
  Kaiping (G1388A)                    13/70       13/70
  Mahidol + Canton (G487A + G1376T)   1/70        1/70
  Gaohe + Kaiping (A95G + G1388A)     1/70        1/70

In comparison to direct DNA sequencing, the 4-plex HRM assay was 100 % sensitive [95 % confidence interval (CI): 94.87–100 %] and 100 % specific (95 % CI: 87.66–100 %), with no cross-reactivity in the detection of G6PD mutations (Table 4). Additionally, the multiplexed HRM assay correctly identified the double mutations (G6PD Mahidol + Canton and G6PD Gaohe + Kaiping). This indicates that the developed method is reliable for detecting G6PD mutations.

Table 4 Performance of the HRM assay for the identification of G6PD mutations
  Parameter                   HRM assay
  True positive               70/70
  True negative               28/28
  False positive              0/28
  False negative              0/70
  Sensitivity                 100 %
  Specificity                 100 %
  Positive predictive value   100 %
  Negative predictive value   100 %
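As described above, each variant-specific amplicon melts at a distinctive Tm and WT samples give a flat line. The sketch below illustrates how such melt-peak calls could be automated; the Tm windows and peak-height threshold are hypothetical placeholders, not calibrated values from this assay.

```python
# Illustrative only: the Tm windows below are hypothetical placeholders,
# not calibrated values from this study. Each reaction maps a detected
# melt-peak temperature to the variant whose amplicon melts in that window.
REACTION_1_TM_WINDOWS = {          # reaction 1 variants (hypothetical Tm ranges, deg C)
    "Gaohe (A95G)":      (76.0, 77.5),
    "Mahidol (G487A)":   (79.0, 80.5),
    "Viangchan (G871A)": (82.0, 83.5),
    "Canton (G1376T)":   (85.0, 86.5),
}

def call_genotype(peak_tms, windows, min_peak_height=0.1):
    """Map detected melt-peak Tm values (-dF/dT maxima) to variant names.
    An empty peak list corresponds to a flat line, i.e. wild type."""
    calls = []
    for tm, height in peak_tms:
        if height < min_peak_height:
            continue  # ignore baseline noise
        for variant, (lo, hi) in windows.items():
            if lo <= tm <= hi:
                calls.append(variant)
    return calls or ["wild type (no amplification)"]

# Example: one peak at 79.8 C is called as G6PD Mahidol;
# two peaks in one reaction would report a double mutant.
print(call_genotype([(79.8, 0.9)], REACTION_1_TM_WINDOWS))
```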
Phenotypic screening of G6PD deficiency by WST-8 assay: The prevalence of G6PD deficiency in people living in a malaria endemic area in Thailand, namely, Tha Song Yang District, Tak Province, was determined by G6PD activity assay (WST-8). Figure 2 shows the G6PD enzyme activity of the 725 samples measured by WST-8. The average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively. The adjusted male median (AMM) value was determined (10.31 ± 3.81 U/gHb) and defined as 100 % G6PD activity [56]. The WHO defines G6PD activity of less than 30 % as deficient and G6PD activity ranging between 30 and 80 % as intermediate [57]. Nonetheless, G6PD activity of 70 % has been used as a threshold for tafenoquine prescription [58, 59]. In this study, G6PD activity levels of less than 30 % and 30–70 % of the AMM were thus considered deficient and intermediate, respectively; subjects with G6PD activity over 70 % of the AMM were defined as normal.

Fig. 2 G6PD activity of 725 individuals (368 males and 357 females) measured by the WST-8 method. The adjusted male median (AMM) value was determined to be 10.31 ± 3.81 U/gHb and defined as 100 % G6PD activity. Dotted horizontal lines indicate G6PD activity at 30 and 70 % of the AMM.

Based on the WST-8 assay, the prevalence of G6PD deficiency in the studied population was 20.55 % (149/725; Table 5). Prevalence rates of G6PD deficiency of 20.11 % (74/368) and 21.01 % (75/357) were observed in males and females, respectively. In addition, the average G6PD activity of deficient males and females was 1.59 ± 0.89 and 1.69 ± 0.77 U/gHb, respectively. Intermediate G6PD activity (30–70 %) was found in 7.34 % (27/368) of males and 16.25 % (58/357) of females. The average G6PD activity of non-deficient (> 70 %) cases was 11.78 ± 2.11 U/gHb in males and 11.89 ± 2.49 U/gHb in females.
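These cut-offs translate into a simple classification rule. The sketch below applies the 30 % and 70 % thresholds to the AMM reported here (10.31 U/gHb); treating exactly 70 % as intermediate is an assumption, since the text does not state how boundary values were assigned.

```python
AMM = 10.31  # adjusted male median activity reported in this study, U/gHb

def classify_g6pd(activity_u_per_ghb, amm=AMM):
    """Classify G6PD status from measured activity, using the study's
    cut-offs: <30% of AMM = deficient, 30-70% = intermediate, >70% = normal.
    Boundary handling (exactly 70% -> intermediate) is an assumption."""
    percent = 100.0 * activity_u_per_ghb / amm
    if percent < 30.0:
        return "deficient"
    if percent <= 70.0:
        return "intermediate"
    return "normal"

print(classify_g6pd(1.6))   # -> deficient    (about 16% of AMM)
print(classify_g6pd(5.0))   # -> intermediate (about 49% of AMM)
print(classify_g6pd(11.8))  # -> normal       (about 114% of AMM)
```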
The frequency distribution of G6PD activity of the 725 individuals measured by WST-8 is shown in Fig. 3a. The majority of the enzyme activities were distributed between 7 and 16 U/gHb. The frequency distribution of G6PD activity by sex is illustrated in Fig. 3b, c. A broader distribution of G6PD activities was seen in females than in males.

Table 5 Prevalence of G6PD deficiency determined by WST-8 enzyme activity assay
  G6PD status              Male, N (%)     Female, N (%)   Total, N (%)
  Deficient (< 30 %)       47 (12.77 %)    17 (4.76 %)     64 (8.83 %)
  Intermediate (30–70 %)   27 (7.34 %)     58 (16.25 %)    85 (11.72 %)
  Normal (> 70 %)          294 (79.89 %)   282 (78.99 %)   576 (79.45 %)
  Total                    368 (100 %)     357 (100 %)     725 (100 %)

Fig. 3 Frequency distribution of G6PD activity. a G6PD activity for all 725 samples, showing the majority of samples in the range between 7 and 16 U/gHb; the average G6PD activity of the 725 samples was 10.19 ± 3.96 U/gHb. G6PD activity for (b) 368 males and (c) 357 females; the average G6PD activity in males and females was 9.99 ± 4.14 and 10.35 ± 3.81 U/gHb, respectively.
Genotypic screening of G6PD deficiency using the multiplexed HRM assay: The developed 4-plex HRM assay was applied to screen for G6PD mutations in the 725 blood samples. The assay identified 197 of the 725 (27.17 %) individuals as carrying at least one mutation with an adverse effect on function (Table 6). The prevalence of subjects carrying at least one G6PD mutation was 20.11 % (74/368) in males and 34.45 % (123/357) in females. The most common G6PD mutation detected in the studied population was G6PD Mahidol, accounting for 94.92 % of the total (n = 187; 72 in males and 115 in females). Other single mutations observed in the study included G6PD Canton (2.03 %; 4 in females), G6PD Viangchan (1.52 %; 1 in a male and 2 in females), and G6PD Chinese-5 (0.51 %; 1 in a male). The HRM assay also detected the double mutant variants G6PD Mahidol + Canton (0.51 %; 1 in a female) and G6PD Chinese-4 + Viangchan (0.51 %; 1 in a female). Figure 4 shows the G6PD activity of deficient and normal samples identified by HRM for males and females. The G6PD enzyme activity of deficient subjects, especially females, spanned from the deficient region (< 30 %) to the normal region (> 70 %). The large distribution of G6PD enzyme activities in females is caused by genetic mosaicism resulting from X-inactivation. The distribution of G6PD activity by mutation type is illustrated in Fig. 5; non-variant individuals are also included in this plot. Variation of G6PD activities among the different mutations was observed. Moreover, for the same mutation, G6PD enzyme activity was greater in females than in males: enzyme activity of 0.89 and 6.16 U/gHb was observed for G6PD Viangchan in males and females, respectively. Interestingly, G6PD Mahidol, a Class III variant with mild deficiency and the most prevalent variant in the studied population, exhibited a wide range of G6PD activities in both males (range: 0.10–10.73 U/gHb, mean: 3.20 ± 2.46 U/gHb) and females (range: 0.10–17.72 U/gHb, mean: 7.72 ± 4.24 U/gHb).
Notably, G6PD enzyme activity in the double mutant variants (G6PD Mahidol + Canton and G6PD Chinese-4 + Viangchan) was significantly decreased compared with that of the single mutants.

Table 6 Observed ranges of enzyme activity and G6PD genotypes identified by HRM assay
  Gender             Variant                 N     G6PD activity (U/gHb)   Nucleotide change   Amino acid change       WHO classification
  Male (n = 368)     Mahidol                 72    0.10–10.73              G487A               Gly163Ser               III
                     Chinese-5               1     2.10                    C1024T              Leu342Phe               II
                     Viangchan               1     0.89                    G871A               Val291Met               II
                     Non-variant             294   7.16–18.05              –                   –                       Normal
  Female (n = 357)   Mahidol                 115   0.10–17.72              G487A               Gly163Ser               III
                     Canton                  4     6.50–10.48              G1376T              Arg249Leu               II
                     Viangchan               2     6.07–6.25               G871A               Val291Met               II
                     Mahidol + Canton        1     4.12                    G487A + G1376T      Gly163Ser + Arg249Leu   II/III
                     Chinese-4 + Viangchan   1     0.69                    G392T + G871A       Gly131Val + Val291Met   II
                     Non-variant             234   4.96–18.67              –                   –                       Normal

Fig. 4 G6PD activity of deficient and normal samples identified by HRM assay. G6PD activity in a male and b female subjects. The average G6PD activity of deficient males and females was 3.16 ± 2.45 and 7.66 ± 4.19 U/gHb, respectively. The average G6PD activity of normal males and females was 11.77 ± 2.13 and 11.76 ± 2.68 U/gHb, respectively.

Fig. 5 Distribution of G6PD activity by mutation type. Males carrying G6PD Mahidol showed G6PD enzyme activity ranging from 0.10 to 10.73 U/gHb; females carrying G6PD Mahidol showed a wider range (0.10–17.72 U/gHb). Females with G6PD Canton exhibited G6PD activity between 6.50 and 10.48 U/gHb, and females with G6PD Viangchan showed G6PD activity of 6.07–6.25 U/gHb. Normal males showed G6PD activity ranging from 7.16 to 18.05 U/gHb and normal females between 4.96 and 18.67 U/gHb.
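Table 6 aggregates per-subject activities into per-genotype counts, ranges, and means. A minimal sketch of that aggregation is shown below; the records are made-up stand-ins, as the study's per-subject data are not reproduced here.

```python
from statistics import mean, stdev

# Hypothetical records (sex, genotype, activity in U/gHb) standing in for
# the study's per-subject data, which are not reproduced here.
records = [
    ("M", "Mahidol", 0.10), ("M", "Mahidol", 3.2), ("M", "Mahidol", 10.73),
    ("F", "Mahidol", 7.7),  ("F", "Canton", 6.5),  ("F", "Canton", 10.48),
]

def summarize(records):
    """Group activities by (sex, genotype) and report n, range and mean +/- SD,
    mirroring how Table 6 aggregates per-genotype enzyme activity."""
    groups = {}
    for sex, genotype, activity in records:
        groups.setdefault((sex, genotype), []).append(activity)
    for (sex, genotype), values in sorted(groups.items()):
        sd = stdev(values) if len(values) > 1 else 0.0
        print(f"{sex} {genotype}: n={len(values)}, "
              f"range {min(values):.2f}-{max(values):.2f} U/gHb, "
              f"mean {mean(values):.2f} +/- {sd:.2f}")

summarize(records)
```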
Discussion

HRM assays have been widely used to detect gene mutations [43–45]. However, for G6PD genotyping, most of the developed HRM assays are singleplex or duplex, which can only detect one or two mutations simultaneously [38, 46, 47]. To enable the detection of multiple mutations, more than one fluorescent dye must be included in the reaction mixture, which usually makes the reaction more expensive [39, 60, 61]. As reported in this paper, a 4-plex HRM assay for the detection of G6PD mutations common in Thailand and elsewhere in Asia was developed, using a single dye with a run time of 80 min. Evaluation of the 4-plex HRM assay using 70 G6PD-deficient samples indicated that it was accurate and reliable for detecting G6PD mutations (with sensitivity and specificity of 100 %) compared with DNA sequencing. Among the 70 deficient validation samples, G6PD Viangchan was the most prevalent variant, followed by G6PD Canton and G6PD Kaiping. This is in accordance with previous reports showing that G6PD Viangchan is the most common variant in Thais [28, 51]. Two double mutations, G6PD Mahidol + Canton and G6PD Gaohe + Kaiping, were also identified in the studied population. G6PD Mahidol + Canton was first identified in people living along the Thai–Myanmar border, where Karen and Burman are the major population groups [62]. After validation, the multiplexed HRM assay was applied to screen for G6PD mutations in 725 people living in a malaria endemic area along the Thai–Myanmar border. The prevalence of G6PD deficiency in this population was also determined by phenotypic enzyme activity assay using WST-8. Considering a 30 % activity cut-off, the overall prevalence of G6PD deficiency was 8.83 % by WST-8 assay. If the upper limit of a 70 % cut-off was considered, the overall prevalence increased to 20.55 %. By sex, at the 70 % cut-off, the prevalence of G6PD deficiency was 20.11 % in males and 21.01 % in females. In contrast, by the multiplexed HRM assay, the frequency of G6PD mutations was 20.11 % in males and 34.45 % in females. Thus, the prevalence of G6PD deficiency in males is equivalent between these two assays, but in females, genetic analysis using HRM indicated a frequency of g6pd gene mutations (34.45 %) notably greater than the prevalence of G6PD deficiency measured by WST-8 (21.01 %). The multiplexed HRM assay identified four single mutants (94.92 % G6PD Mahidol, 2.03 % G6PD Canton, 1.52 % G6PD Viangchan, and 0.51 % G6PD Chinese-5) and two double mutants (0.51 % G6PD Mahidol + Canton and 0.51 % G6PD Chinese-4 + Viangchan) in the studied population. In good agreement with previous reports, G6PD Mahidol was the most common variant in the Karen population [26, 62]. The double mutant G6PD Chinese-4 + Viangchan was identified here for the first time. A broad range of G6PD activities was observed among the different genotypes. G6PD Mahidol showed ranges of enzyme activity of 0.10–10.73 and 0.10–17.72 U/gHb in males and females, respectively. For G6PD Canton, the range of enzyme activity was 6.50–10.48 U/gHb in females.
A wider distribution of enzyme activities was observed in females carrying G6PD Mahidol than in males. Additionally, 5 males and 58 females carrying G6PD Mahidol and 3 females carrying G6PD Canton showed G6PD activity even greater than the 70 % activity cut-off (7 U/gHb). Similar findings were previously described in genotype–phenotype association studies [31, 32]. This is mainly attributable to the fact that the degree of deficiency and vulnerability to haemolysis vary substantially among the different genotypes. Furthermore, because G6PD deficiency is an X-linked genetic defect, males exhibit hemizygous deficiency while females can exhibit either homozygous or heterozygous deficiency. In heterozygous females, a wide range of G6PD activities can be observed, in which the activity of the normal erythrocyte population may compensate for the lost activity of the deficient erythrocyte population. In addition, abnormally high G6PD activity observed in persons carrying G6PD mutations could also be attributable to the following factors. The first is the limited performance quality of the WST-8 used in this study. The second is elevated reticulocyte count in blood samples. Young red blood cells usually exhibit greater G6PD enzyme activity than aged red blood cells. The last is the presence of leukocytes in tested samples (in this study, G6PD activity was measured from whole blood samples). This is because leukocytes retain their nucleus and therefore can continuously synthesize the G6PD enzyme. G6PD-deficient individuals have increased susceptibility to haemolysis upon exposure to oxidative agents, including primaquine and tafenoquine which are the only medications effective for the radical treatment of infection by P. vivax and P. ovale. To ensure successful and safe treatment, a single dose of 300 mg of tafenoquine or 0.5 mg/kg/day primaquine for 14 days should be prescribed only in patients with at least 70 % G6PD activity [63]. The effect of the primaquine dose on haemolysis was reported to differ between G6PD normal and G6PD Mahidol heterozygous individuals [64]. These heterozygous females were identified as normal by FST assay while the quantitative assay revealed G6PD activity of 62 and 99 %. Tafenoquine was also reported to cause dose-dependent haemolysis in G6PD Mahidol heterozygous individuals with enzyme activity of 61–80 % [13]. As such, drug-induced haemolysis associated with G6PD deficiency depends on two major factors: the first is the level of G6PD activity, which is determined by G6PD genotype, and the second is the exposure to oxidative stress, namely, metabolites of antimalarial drugs (8-aminoquinolines). Therefore, the drug dose and the ability to metabolize the parent compounds also contribute to the severity of haemolysis in malaria treatment. Currently, the cut-off of 70 % AMM is widely accepted as an appropriate threshold to determine whether or not to administer tafenoquine [58, 59, 63]. However, the cut-off value should be carefully defined and tested in each population group. Based on the obtained results, to enable accurate determination of the prevalence of G6PD deficiency using WST-8, different cut-off values are required in males and females. The upper limit of 70 % of AMM is recommended for males. However, in heterozygous females, neither the lower (30 %) nor the upper (70 %) limit is reliable for screening G6PD deficiency. 
It should be noted that the results reported here are based on the WST-8 assay, which is an alternative to the standard method for measuring G6PD activity; with other G6PD tests, the results might differ. For male populations, phenotype tests are useful for G6PD deficiency screening and for enabling safe treatment. In contrast, in heterozygous females, in whom a wide range of G6PD activities is observed, a phenotypic enzyme assay alone might be insufficient to identify G6PD deficiency. Hence, alternative approaches such as genetic analysis could be useful for determining whether drugs should be administered in populations suspected of having G6PD deficiency. The multiplexed HRM assay developed here could be useful for identifying G6PD variants in Thai populations. Although other multiplex systems for genetic analysis are currently available, they might be unsuitable for large population screening. The G6PD gene chip kit can detect 13–14 mutations common in Chinese populations, but must be combined with a hybridization kit, which is time-consuming [32, 65]. The DiaPlexC™ (Asian type) can simultaneously detect eight mutations, but requires an additional gel electrophoresis step to check the amplified products [42]. Additionally, that kit might not be applicable for deployment in regions where populations carry other G6PD mutations (e.g., G6PD Gaohe, G6PD Chinese-4, and G6PD Chinese-5), for which the kit cannot test. It should be mentioned that, in this study, no mutation was detected in 12 females who were considered likely to be G6PD-deficient because their observed G6PD activity was lower than 7 U/gHb. This might have been because of the limited performance quality (sensitivity of 55–72 %) of the WST-8 assay used in this study [20, 21]. Alternatively, these subjects might carry G6PD mutations for which the multiplexed HRM assay cannot test; DNA sequencing of the whole g6pd gene might be required to detect mutations in such individuals. Additionally, to enable mutational screening in more diverse population groups, the assay should be expanded to include other mutations, such as G6PD Mediterranean, G6PD Valladolid, G6PD Coimbra, and G6PD Aures [26]. It should also be noted that the multiplexed HRM assays developed here are not able to identify the zygosity of the samples and thus require further development before being deployed. Nevertheless, the HRM assays could be of great use for analysing G6PD mutations as a supplement to phenotypic G6PD screening in heterozygous females as well as in populations suspected of having G6PD deficiency. At present, G6PD genotyping is performed mainly in G6PD-deficient individuals. However, more data on the genotype–phenotype association of G6PD deficiency in diverse population groups should be obtained, which requires a high-throughput screening platform.

Conclusions

A multiplexed HRM assay for the detection of eight common G6PD mutations in Thailand was developed. The performance of the assay was excellent, with 100 % specificity and 100 % sensitivity. The prevalence of G6PD mutations in 725 people living in a malaria endemic area along the Thai–Myanmar border was determined to be 27.17 % by HRM, which is greater than the prevalence of G6PD deficiency determined by the WST-8 phenotypic assay (20.55 %). Performing a phenotypic assay alone might thus be inadequate and the result might not be an accurate predictor of G6PD deficiency, especially in heterozygous females.
As an option to overcome this problem, the multiplexed HRM assay is rapid, accurate, and reliable for detecting G6PD mutations and enables high-throughput screening. This assay could be useful as a supplementary approach for high-throughput screening of G6PD deficiency before the administration of 8-aminoquinolines in malaria endemic areas.
Background: Glucose-6-phosphate dehydrogenase (G6PD) deficiency, the most common enzymopathy in humans, is prevalent in tropical and subtropical areas where malaria is endemic. Anti-malarial drugs, such as primaquine and tafenoquine, can cause haemolysis in G6PD-deficient individuals. Hence, G6PD testing is recommended before radical treatment against vivax malaria. Phenotypic assays have been widely used for screening G6PD deficiency, but in heterozygous females, the random lyonization causes difficulty in interpreting the results. Over 200 G6PD variants have been identified, which form genotypes associated with differences in the degree of G6PD deficiency and vulnerability to haemolysis. This study aimed to assess the frequency of G6PD mutations using a newly developed molecular genotyping test. Methods: A multiplexed high-resolution melting (HRM) assay was developed to detect eight G6PD mutations, in which four mutations can be tested simultaneously. Validation of the method was performed using 70 G6PD-deficient samples. The test was then applied to screen 725 blood samples from people living along the Thai-Myanmar border. The enzyme activity of these samples was also determined using water-soluble tetrazolium salts (WST-8) assay. Then, the correlation between genotype and enzyme activity was analysed. Results: The sensitivity of the multiplexed HRM assay for detecting G6PD mutations was 100 % [95 % confidence interval (CI): 94.87-100 %] with specificity of 100 % (95 % CI: 87.66-100 %). The overall prevalence of G6PD deficiency in the studied population as revealed by phenotypic WST-8 assay was 20.55 % (149/725). In contrast, by the multiplexed HRM assay, 27.17 % (197/725) of subjects were shown to have G6PD mutations. The mutations detected in this study included four single variants, G6PD Mahidol (187/197), G6PD Canton (4/197), G6PD Viangchan (3/197) and G6PD Chinese-5 (1/197), and two double mutations, G6PD Mahidol + Canton (1/197) and G6PD Chinese-4 + Viangchan (1/197). A broad range of G6PD enzyme activities were observed in individuals carrying G6PD Mahidol, especially in females. Conclusions: The multiplexed HRM-based assay is sensitive and reliable for detecting G6PD mutations. This genotyping assay can facilitate the detection of heterozygotes, which could be useful as a supplementary approach for high-throughput screening of G6PD deficiency in malaria endemic areas before the administration of primaquine and tafenoquine.
Background: Glucose-6-phosphate dehydrogenase (G6PD) deficiency is an inherited genetic defect and the most common enzymopathy, affecting approximately 500 million people worldwide, with more than 200 variants identified [1]. G6PD deficiency is prevalent in tropical and subtropical areas where malaria is endemic, including Africa and Southeast Asia [2]. Evidence has suggested that G6PD deficiency confers protection against malaria infection [3–5]. However, this is still controversial because several studies have yielded contradictory results, with some claiming that the protective effects of G6PD deficiency were observed in male hemizygotes only, in female heterozygotes only, or in both [6–9]. The major clinical concern associated with G6PD deficiency is haemolysis upon exposure to oxidant drugs, including anti-malarials such as 8-aminoquinolines (primaquine and tafenoquine) [10–13]. Primaquine and tafenoquine are the only medications capable of killing Plasmodium vivax and Plasmodium ovale at the dormant liver stage (hypnozoite). The World Health Organization (WHO) recommends that G6PD activity be measured before efforts to perform radical treatment of malaria [14]. G6PD deficiency can be diagnosed by either phenotypic or genotypic assay. Phenotypic tests are based on the assessment of G6PD activity, measuring the production of reduced nicotinamide adenine dinucleotide phosphate (NADPH), which can be done quantitatively. The standard quantitative method is spectrophotometry, in which NADPH production is monitored at 340 nm [15]. This method is accurate and reliable, but is laborious, time-consuming, and requires complicated sample preparation and technical skills; as such, it is not commonly used for field-based screening. A colorimetric G6PD assay, based on water-soluble tetrazolium salts (WST-8), was developed as an alternative to the gold standard of spectrophotometry [16]. In this approach, no sample preparation is required and whole blood or dried blood spots can be used to test G6PD activity [17]. The WST-8-based assay can be used quantitatively or qualitatively (by the naked eye), offering the possibility of performing mass screening of G6PD deficiency in the context of malaria elimination using primaquine and tafenoquine [16, 18]. Although not the standard method for measuring G6PD activity, the sensitivity of WST-8 for detecting NAD(P)H was found to be five-fold greater than that of the spectrophotometric assay. Moreover, results obtained by measuring dehydrogenase activities in biological samples using the WST-8 assay paralleled those of the standard method [19]. For G6PD testing, WST-8 was applied, in 96-well format, to the screening of G6PD deficiency in different populations [20–22]. The sensitivity and specificity of WST-8 for detecting G6PD activity < 30 % were 55 % and 98 %, respectively, compared with the spectrophotometric method [20]. In addition, a sensitivity of 72 % and specificity of 98 % were reported for WST-8 in comparison with the standard quantitative G6PD assay (R&D Diagnostics) [21]. This suggests that WST-8 could be a key tool for G6PD testing, but it requires further development before deployment in the field. Several G6PD diagnostic tests are currently available, including qualitative tests such as the fluorescent spot test (FST) and the CareStart™ G6PD rapid diagnostic test, as well as quantitative point-of-care tests such as the CareStart™ G6PD biosensor and the STANDARD™ G6PD test.
Unfortunately, these tests are not widely used for G6PD testing because they are too expensive and can be difficult to interpret [23–25]. Qualitative tests are reliable for identifying G6PD deficiency in hemizygous males and homozygous females, but are unable to identify heterozygous females [26–28]. This is because, in heterozygous females, a wide range of G6PD activities is observed as a result of random X-chromosome inactivation, or lyonization [29]. To date, over 200 G6PD variants have been identified, which form genotypes associated with differences in the degree of deficiency and vulnerability to haemolysis [30]. Moreover, G6PD activities vary among G6PD-deficient individuals carrying the same genotype [31, 32]. G6PD genotyping can be performed using restriction fragment length polymorphism [33, 34], the amplification refractory mutation system [35, 36], gold nanoparticle-based assays [37], high-resolution melting (HRM) curve analysis [38, 39] and DNA sequencing [40, 41]. Additionally, multiplex genotyping systems are currently available. The DiaPlexC™ G6PD Genotyping Kit (Asian type) can detect eight mutations, namely, G6PD Vanua Lava, G6PD Mahidol, G6PD Mediterranean, G6PD Coimbra, G6PD Viangchan, G6PD Union, G6PD Canton, and G6PD Kaiping. Thus, this assay offers high-throughput screening of G6PD mutations by one-step PCR [42]. However, after PCR amplification, an additional gel electrophoresis step is required to check the size of the amplified PCR products, which is impractical for large population screening. The HRM assay is a powerful and reliable tool that has been widely used in the detection of gene mutations [43–45]. Previously, HRM assays were applied to detect G6PD mutations in different population groups [38, 46–48]. However, previous HRM assays could detect only one or two mutations at a time. Although a multiplexed system to detect six mutations in four reactions was later described, the assay system and interpretation of results were complex [49]. The prevalence of G6PD deficiency in Thailand ranges between 5 and 18 %, depending on the geographical area [50–54]. More than 20 G6PD variants have been identified in the country, among which the most common is G6PD Viangchan, followed by G6PD Mahidol, G6PD Canton, G6PD Union, G6PD Kaiping, G6PD Gaohe, G6PD Chinese-4, G6PD Chinese-5, G6PD Valladolid, G6PD Coimbra and G6PD Aures. Along the Thai–Myanmar border, a malaria endemic area, a prevalence of G6PD deficiency of 9–18 % has been reported in males [26]. Moreover, a rate of G6PD deficiency of 7.4 % was reported from the screening of 1,340 newborns [27]. G6PD Mahidol was shown to be the most common variant in this population, accounting for 88 % of all variants, followed by G6PD Chinese-4, G6PD Viangchan, and G6PD Mediterranean. Generally, to avoid the risk of haemolysis upon malaria treatment, G6PD testing is recommended before the administration of primaquine and tafenoquine. The aim of this study was to develop a molecular diagnostic test to provide an accurate, reliable and high-throughput platform for detecting G6PD mutations, which can be used as a supplement to the screening of G6PD deficiency, especially in heterozygous females. To validate the method, 70 G6PD-deficient and 28 non-deficient samples were tested and the results were compared with the findings obtained by direct DNA sequencing. The potential utility of the developed HRM test for the detection of G6PD variants in a study area in Thailand was then examined.
The correlation between genotype and the phenotype of enzyme activity (as determined using WST-8) was also analysed.
Keywords: G6PD deficiency; G6PD mutations; High-resolution melting; Phenotype; G6PD enzyme activity; G6PD genotype; WST-8

MeSH terms: Female; Genotyping Techniques; Glucosephosphate Dehydrogenase Deficiency; Humans; Malaria, Vivax; Male; Thailand
Vasorelaxant Effect of Trachelospermi caulis Extract on Rat Mesenteric Resistance Arteries
36014534
Trachelospermi caulis (T. caulis) has been used as a traditional herbal medicine in Asian countries. Although T. caulis is well known to have beneficial effects, sufficient research data are not available on its cardiovascular effects. In this study, we investigated whether T. caulis extract has vascular effects in rat resistance arteries.
BACKGROUND
To examine whether T. caulis extract affects vascular reactivity, we measured isometric tension of rat mesenteric resistance arteries using a multi-wire myograph system. T. caulis extract was administered after arteries were pre-contracted with high K+ (70 mM) or phenylephrine (5 µM). Vanillin, a single active component of T. caulis, was used to treat mesenteric arteries.
METHODS
T. caulis extract caused vascular relaxation in a concentration-dependent, endothelium-independent manner. To further identify the mechanism, we incubated the arteries in Ca2+-free solution containing high K+, followed by cumulative administration of CaCl2 (0.01-2.0 mM) with or without T. caulis extract (250 µg/mL). Treatment with T. caulis extract decreased the contractile responses induced by the addition of Ca2+, suggesting that the extract inhibits extracellular Ca2+ influx. Moreover, an active compound of T. caulis extract, vanillin, also induced vasodilation in mesenteric resistance arteries.
RESULTS
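The CaCl2 add-back protocol described above is conventionally quantified by expressing each contraction as a percentage of the vessel's maximal 70 mM KCl response, as the paper's methods state. A minimal sketch of that normalization; the tension values are illustrative, not the study's data:

```python
# Normalize cumulative CaCl2-induced contractions to the maximal 70 mM KCl
# response. All tensions (mN) below are invented for illustration.
import numpy as np

kcl_max_mN = 12.5                                     # peak 70 mM KCl tension
cacl2_mM   = np.array([0.1, 0.25, 0.5, 1.0, 2.0])     # cumulative additions
ctrl_mN    = np.array([2.1, 4.8, 7.9, 10.6, 12.0])    # without extract
extract_mN = np.array([0.9, 2.0, 3.4, 4.9, 5.8])      # with extract 250 µg/mL

for c, a, b in zip(cacl2_mM, 100 * ctrl_mN / kcl_max_mN,
                   100 * extract_mN / kcl_max_mN):
    print(f"CaCl2 {c:.2f} mM: control {a:5.1f}%  +extract {b:5.1f}%")
```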
T. caulis extract and its active compound, vanillin, concentration-dependently induced vascular relaxation in mesenteric resistance arteries. These results suggest that the administration of T. caulis extract could help decrease blood pressure.
CONCLUSION
[ "Animals", "Endothelium, Vascular", "Mesenteric Arteries", "Plant Extracts", "Rats", "Vasodilation", "Vasodilator Agents" ]
9413539
1. Introduction
Cardiovascular disease (CVD) is one of the leading causes of morbidity and mortality worldwide. CVD was responsible for 17.3 million deaths worldwide in 2017, a figure expected to increase to >23.6 million by 2030 [1,2]. CVD encompasses hypertension, heart failure, stroke and a number of other vascular and cardiac problems [3]. Hypertension is defined as a systolic blood pressure (SBP) of 130 mmHg or more and a diastolic blood pressure (DBP) of 90 mmHg or more [4]. The arterial system comprises capillaries, arterioles, small arteries, medium-sized arteries and large conduit arteries. Resistance arteries, vessels with lumen diameters of <400 μm, are major elements of the cardiovascular system and regulate blood flow and peripheral resistance [5,6]. An increase in vascular resistance caused by a decrease in the internal lumen diameter of a blood vessel is a major cause of elevated blood pressure [7,8]. Conversely, relaxation of blood vessels increases the lumen diameter, resulting in an immediate decrease in blood pressure. The relaxation responses of arteries are not only endothelium-dependent but also endothelium-independent. Evidence has been accumulating that endothelium-independent vasodilation is impaired in various vascular beds, such as the coronary, brachial, forearm and peripheral conduit arteries, in patients with cardiovascular diseases and cardiovascular risk factors [9]. In particular, hypertension and diabetes mellitus have been shown to be associated with impaired endothelium-independent vasodilation [10,11]. Thus, it is as necessary to discover substances that induce endothelium-independent vasodilation as it is to discover substances that cause endothelium-dependent vasodilation. Although synthetic medications have been widely used to treat patients at various stages of CVD, including hypertension, their adverse effects remain a challenge. Because the treatment of hypertension continues over a long period of time, the use of synthetic drugs may result in drug resistance and adverse effects [12]. In addition to synthetic drugs, the use of natural products to treat hypertension is widely increasing [13,14]. The use of plant extracts has grown steadily because of their low toxicity and well-established place in therapy [15]. Various plants used in traditional medicine have been studied for their potential in the treatment of cardiovascular disease [16]. The vasodilatory effects of medicinal plants have been extensively explored over the last two decades and have proven potentially effective for the treatment of CVD in clinics all over the world [17,18,19,20]. Trachelospermi caulis (T. caulis), the dried leafy stem of Trachelospermum asiaticum var. intermedium, belongs to the Apocynaceae family [21]. T. caulis has been used in Asian countries as a traditional herbal medicine to attenuate fever and pain of the knees and loins because of its antipyretic and antinociceptive effects [22]. It is well known in oriental medicine that T. caulis lowers blood pressure [23]. Although T. caulis has been suggested to have beneficial effects, its effect and mechanism of action in the cardiovascular system are unknown. Therefore, the aim of our research is to explore the effect of T. caulis extract on vascular function in resistance arteries and to elucidate the underlying mechanism.
null
null
2. Results
2.1. Effect of T. caulis on Contraction Induced by KCl or PE in Rat Mesenteric Arteries Trachelospermi caulis (5–250 μg/mL) induced vascular relaxation in a concentration-dependent manner in endothelium-intact mesenteric arteries pre-contracted with high K+ (70 mM) or phenylephrine (PE, 5 μM) and in endothelium-denuded mesenteric arteries pre-contracted with PE (5 μM), as shown in Figure 1. The EC50 of T. caulis is 98.1 μg/mL in endothelium-intact arteries pre-contracted with high K+ (70 mM), 62.36 μg/mL in endothelium-denuded arteries pre-contracted with PE and 36.61 μg/mL in endothelium-intact arteries pre-contracted with PE. The maximal relaxation value of T. caulis-induced relaxation is 80.73 ± 6.05% in endothelium-intact arteries pre-contracted with high K+, 89.6 ± 2.28% in endothelium-intact arteries pre-contracted with PE and 93.30 ± 4.46% in endothelium-denuded arteries pre-contracted with PE. Cumulative administration of vehicle (DMSO, 0.0025–0.125%) did not affect the contraction induced by PE (Figure 1 inset).
2.2. T. caulis-Induced Endothelium-Independent Relaxation in Rat Mesenteric Arteries To explore whether T. caulis-induced relaxation depends on the endothelium, T. caulis extract was applied to endothelium-intact (Figure 2A) or endothelium-denuded (Figure 2B) mesenteric arteries. T. caulis extract induced vasodilation both in the presence and in the absence of the endothelium, with no significant difference (93.4 ± 3.56% and 97.17 ± 6.75%, respectively, Figure 2C).
2.3. Effects of L-NNA, Indomethacin and ODQ on T. caulis-Induced Vasodilation To investigate the involvement of the nitric oxide (NO)/cyclic guanosine monophosphate (cGMP) and cyclooxygenase (COX)/prostacyclin (PGI2) pathways in T. caulis-induced vasodilation, arteries were incubated for 20 min with the endothelial nitric oxide synthase (eNOS) inhibitor Nω-nitro-L-arginine (L-NNA, 500 μM), the soluble guanylyl cyclase (sGC) inhibitor 1H-(1,2,4)oxadiazolo[4,3-a]quinoxalin-1-one (ODQ, 5 μM) or the COX inhibitor indomethacin (10 μM) before being contracted with PE (5 μM). The relaxation responses of T. caulis were 89.39 ± 5.12%, 94.41 ± 5.41% and 92.03 ± 4.45% in the presence of L-NNA, ODQ and indomethacin, respectively (Figure 3).
2.4. Effects of K+ Channel Blockers on T. caulis Extract-Induced Vascular Relaxation To determine whether K+ channels are involved in T. caulis-induced relaxation, the non-selective K+ channel blocker tetraethylammonium (TEA, 2 mM), the inward rectifier K+ channel blocker BaCl2 (30 μM), the voltage-dependent K+ channel blocker 4-aminopyridine (4-AP, 100 μM) or the ATP-sensitive K+ channel blocker glibenclamide (10 μM) was applied 20 min before the arteries were contracted with PE (5 μM). The relaxation responses of T. caulis were 96.23 ± 2.72%, 95.26 ± 0.27%, 93.44 ± 2.10% and 93.51 ± 1.62% in the presence of TEA, BaCl2, 4-AP and glibenclamide, respectively (Figure 4).
2.5. Effect of T. caulis on Extracellular Ca2+-Induced Contraction To identify whether the vasodilatory effect of T. caulis depends on the inhibition of extracellular Ca2+ influx, the mesenteric arteries were incubated in a Ca2+-free solution containing the sarcoplasmic reticulum Ca2+-ATPase (SERCA) inhibitor cyclopiazonic acid (CPA, 5 μΜ) and KCl (70 mM), and CaCl2 was then added cumulatively (0.1–2.0 mM) to increase the Ca2+ concentration in the bath. Before the arteries were treated with T. caulis, it was confirmed that the contraction responses caused by the repeated addition of Ca2+ did not change in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5A,C). Pre-treatment with T. caulis significantly reduced the contractile responses induced by the cumulative addition of Ca2+ in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5B,D).
2.6. Effect of T. caulis on the BAY K8644-Induced Contraction To confirm that voltage-gated calcium channels are involved in T. caulis extract-induced vasodilation, arteries were pre-treated with the L-type voltage-gated calcium channel activator BAY K8644, and T. caulis extract was then administered to the mesenteric arteries. Treatment with vehicle (DMSO, 0.001–0.04%) did not affect the BAY K8644-induced contraction (Figure 6A). T. caulis extract concentration-dependently induced vascular relaxation (Figure 6B).
2.7. Effects of Vanillin, a Single Active Compound of T. caulis, on Mesenteric Arteries To support the findings on the vascular effect of T. caulis, we further investigated whether single active compounds of T. caulis exert a similar effect to the extract. The gas chromatogram of the compounds identified in the extract of T. caulis is shown in Figure S1 of the supplementary materials. The identities of 12 compounds were determined along with their retention times (Table S1 of the supplementary materials). The compounds identified by gas chromatography–mass spectrometry (GC/MS) analysis include butanoic acid (butyric acid), cyclobutanol, 3-nitropropanoic acid, furan-2-carbaldehyde (furfural), 4-hydroxy-3-methoxybenzaldehyde (vanillin), 4-hydroxy-3-methoxybenzaldehyde, (1R,2S,3S,4R,5R)-6,8-dioxabicyclo[3.2.1]octane-2,3,4-triol, 3-hydroxy-4-methoxybenzoic acid, 3,4,5,6-tetrahydroxy-2-methoxyhexanal, 3,4,5,6-tetrahydroxy-2-methoxyhexanal, (3S,4S,5R,6S)-3-methoxy-6-(methoxymethyl)oxane-2,4,5-triol, hexanoic acid, 1-(2,6-dihydroxy-4-methoxyphenyl)butan-1-one and benzoic acid. Among these candidate single compounds, we found that 4-hydroxy-3-methoxybenzaldehyde (vanillin) has a vascular relaxation effect. We observed that vanillin (0.01–20 mM) induced vasodilation in a concentration-dependent manner in rat mesenteric arteries pre-contracted with high-K+ solution (70 mM) or PE (5 μM) (Figure 7). The EC50 of vanillin is 1.1 mM in mesenteric arteries pre-contracted with high K+ (70 mM) and 1.9 mM in arteries pre-contracted with PE. We also tested the vascular effects of two more single compounds, butyric acid and furfural, in mesenteric resistance arteries. Butyric acid did not induce significant vasodilation (Figure S2 of the supplementary materials). As shown in Figure S3 of the supplementary materials, furfural induced vasodilation only at a very high concentration (10 mM).
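The EC50 values quoted above are the kind of estimate obtained by fitting a Hill-type concentration-response function to cumulative relaxation data; the paper does not specify its fitting procedure, so the following is a hedged sketch with invented data points:

```python
# Sketch: estimate EC50 by least-squares fit of a Hill equation to a
# cumulative concentration-response curve. Data points are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, emax, ec50, slope):
    """Percent relaxation as a function of concentration (units of ec50)."""
    return emax * conc**slope / (ec50**slope + conc**slope)

conc_ug_ml = np.array([5, 10, 25, 50, 100, 250], dtype=float)  # extract, µg/mL
relax_pct  = np.array([6, 14, 31, 54, 77, 91], dtype=float)    # % relaxation

(emax, ec50, slope), _ = curve_fit(hill, conc_ug_ml, relax_pct,
                                   p0=[100.0, 50.0, 1.0], maxfev=10000)
print(f"Emax ~ {emax:.1f}%, EC50 ~ {ec50:.1f} µg/mL, Hill slope ~ {slope:.2f}")
```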
5. Conclusions
In the present study, we discovered for the first time that T. caulis extract induces vascular relaxation in rat mesenteric resistance arteries. The vasodilatory effect of T. caulis was endothelium-independent, and inhibition of extracellular Ca2+ influx contributed to the extract-induced relaxation. Our results suggest that T. caulis could be a valuable herbal resource for future research and for the treatment of cardiovascular diseases.
[ "2.1. Effect of T. caulis on Contraction Induced by KCl or PE in Rat Mesenteric Arteries", "2.2. T. caulis-Induced Endothelium-Independent Relaxation in Rat Mesenteric Arteries", "2.3. Effects of L-NNA, Indomethacin and ODQ on T. caulis-Induced Vasodilation", "2.4. Effects of K+ Channel Blockers on T. caulis Extract-Induced Vascular Relaxation", "2.5. Effect of T. caulis on Extracellular Ca2+-Induced Contraction", "2.6. Effect of T. caulis on the BAY K8644-Induced Contraction", "2.7. Effects of Vanillin, a Single Active Compound of T. caulis, on Mesenteric Arteries", "4. Materials and Methods", "4.1. Animals and Tissue Preparation", "4.2. Preparation of T. caulis Extract", "4.3. Measurement of Isometric Tension in Mesenteric Arteries", "4.4. Chemicals and Reagents", "4.5. Statistical Analysis" ]
[ "Trachelospermi caulis (5–250 μg/mL) induced vascular relaxation in a concentration-dependent manner in endothelium-intact mesenteric arteries pre-contracted with high K+ (70 mM) or phenylephrine (PE, 5 μM) and in endothelium-denuded mesenteric arteries pre-contracted with PE (5 μM) as shown in Figure 1. EC50 of T. caulis is 98.1 μg/mL in the endothelium-intact arteries pre-contracted with high K+ (70 mM), 62.36 μg/mL in the endothelium-denuded arteries pre-contracted with PE and 36.61 μg/mL in the endothelium-intact arteries pre-contracted with PE. The maximal relaxation value of T. caulis-induced relaxation is 80.73 ± 6.05% in the endothelium-intact arteries pre-contracted with high K+, 89.6 ± 2.28% in the endothelium-intact arteries pre-contracted with PE and 93.30 ± 4.46% in the endothelium-denuded arteries pre-contracted with PE. Cumulative administration of vehicle (DMSO, 0.0025–0.125%) did not affect the contraction induced by PE (Figure 1 inset).", "To explore whether T. caulis-induced relaxation is dependent on endothelium. T. caulis extract was applied in endothelium-intact (Figure 2A) or endothelium-denuded (Figure 2B) mesenteric arteries. T. caulis extract induced vasodilation in the presence and in the absence of the endothelium, and there was no significant difference (93.4 ± 3.56% and 97.17 ± 6.75%, respectively, Figure 2C).", "To investigate the involvement of the nitric oxide (NO)/cyclic guanosine monophosphate (cGMP) and cyclooxygenase (COX)/prostacyclin (PGI2) pathways in T. caulis-induced vasodilation, arteries were incubated for 20 min with endothelial nitric oxide (eNOS) inhibitor, Nω-Nitro-L-arginine (L-NNA, 500 μM), or soluble guanylyl cyclase (sGC) inhibitor, 1H-(1,2,4)oxadiazolo[4,3-a]quinoxalin-1-one (ODQ, 5 μM), or COX inhibitor, indomethacin (10 μM), before being contracted with PE (5 μM). The relaxation responses of T. caulis were 89.39 ± 5.12%, 94.41 ± 5.41% and 92.03 ± 4.45% in the presence of L-NNA, ODQ and indomethacin, respectively (Figure 3).", "To determine whether K+ channels are involved in T. caulis-induced relaxation, non-selective K+ channel blocker, tetraethylammonium (TEA, 2 mM) or inward rectifier K+ channel blocker, BaCl2 (30 μM) or voltage-dependent K+ channel blocker, 4-aminopyridine (4-AP, 100 μM) or ATP-sensitive K+ channel blocker, glibenclamide (10 μM), were pre-treated 20 min before being contracted with PE (5 μM). The relaxation responses of T. caulis were 96.23 ± 2.72%, 95.26 ± 0.27%, 93.44 ± 2.10% and 93.51 ± 1.62%, in the presence of TEA, BaCl2, 4-AP and glibenclamide, respectively (Figure 4).", "To identify whether the vasodilatory effect of T. caulis depends on the inhibition of extracellular Ca2+ influx, the mesenteric arteries were incubated in a Ca2+-free solution containing sarcoplasmic reticulum Ca2+-ATPase (SERCA) inhibitor, cyclopiazonic acid (CPA, 5 μΜ) and KCl (70 mM), and then CaCl2 was added by concentration (0.1–2.0 mM) to increase the Ca2+ concentration in the arteries. Before treating the arteries with T. caulis, it was confirmed that the contraction responses caused by the repeated addition of Ca2+ were not changed in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5A,C). Pre-treatment of T. caulis significantly reduced the contractile responses induced by the cumulative addition of Ca2+ in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5B,D).", "To confirm that voltage-gated calcium channel is involved in the T. 
caulis extract-induced vasodilation, arteries were pre-treated with L-type voltage-gated calcium channel activator, BAY K8644, and then T. caulis extract was administered in the mesenteric arteries. Treatment of vehicle (DMSO 0.001–0.04%) did not affect BAY K8644-induced contraction (Figure 6A). T. caulis extract concentration dependently induced vascular relaxation (Figure 6B).", "To support the findings about vascular effect of T. caulis, we further investigated whether single active compounds of T. caulis exert similar effect as T. caulis extract. The gas chromatogram of the compounds identified in the extract of T. caulis is shown in Figure S1 of supplementary materials. The identities of 12 compounds were determined along with their retention time (Table S1 of supplementary materials). The compounds identified based on the gas chromatography–mass spectrometry (GC/MS) analysis include butanoic acid (butyric acid), cyclobutanol, 3-nitropropanoic acid, furan-2-carbaldehyde (furfural), 4-hydroxy-3-methoxybenzaldehyde (vanillin), 4-hydroxy-3-methoxybenzaldehyde,(1R,2S,3S,4R,5R)-6,8-dioxabicyclo[3.2.1]octane-2,3,4-triol, 3-hydroxy-4-methoxybenzoic acid, 3,4,5,6-tetrahydroxy-2-methoxyhexanal, 3,4,5,6-tetrahydroxy-2-methoxyhexanal,(3S,4S,5R,6S)-3-methoxy-6-(methoxymethyl)oxane-2,4,5-triol, hexanoic acid, 1-(2,6-dihydroxy-4-methoxyphenyl)butan-1-one and benzoic acid. Among these candidate single compounds, we found that 4-hydroxy-3-methoxybenzaldehyde (vanillin) has a vascular relaxation effect. We observed that vanillin (0.01–20 mM) induced vasodilation in a concentration-dependent manner in rat mesenteric arteries pre-contracted with high-K+ solution (70 mM) or PE (5 μM). (Figure 7). EC50 of vanillin is 1.1 mM in the mesenteric arteries pre-contracted with high K+ (70 mM) and 1.9 mM in the arteries pre-contracted with PE. We also tested vascular effects of two more single compounds, butyric acid and furfural, in mesenteric resistance arteries. However, butyric acid did not induce significant vasodilation (Figure S2 of supplementary materials). As shown in Figure S3 of supplementary materials, furfural induced vasodilation in a very high concentration (10 mM).", "4.1. Animals and Tissue Preparation All experiments were performed according to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH publication No. 85-23, 2011) and were approved by the Ethics Committee and the Institutional Animal Care and Use Committee of Yonsei University, College of Medicine (Approval number: 2021-0058). In this experiment, 8–11-week-old male Sprague Dawley rats were used. After being sacrificed, the mesenteric arteries were rapidly dissected and placed in an ice-cold Krebs–Henseleit (K-H) solution (composition (mM): NaCl 119, KCl 4.6, MgSO4 1.2, KH2PO4 1.2, CaCl2 2.5, NaHCO3 25 and glucose 11.1) bubbled with 95% O2 and 5% CO2. Adipose and connective tissue were removed from the mesenteric arteries using a microscope (model SZ-40, Olympus, Shinjuku, Tokyo, Japan). The 2nd branches of mesenteric arteries (250–300 μm) were cut into 2–3 mm-long sections and used in this experiment. If necessary, endothelium was removed by gently rubbing using thin forceps.\nAll experiments were performed according to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH publication No. 
85-23, 2011) and were approved by the Ethics Committee and the Institutional Animal Care and Use Committee of Yonsei University, College of Medicine (Approval number: 2021-0058). In this experiment, 8–11-week-old male Sprague Dawley rats were used. After being sacrificed, the mesenteric arteries were rapidly dissected and placed in an ice-cold Krebs–Henseleit (K-H) solution (composition (mM): NaCl 119, KCl 4.6, MgSO4 1.2, KH2PO4 1.2, CaCl2 2.5, NaHCO3 25 and glucose 11.1) bubbled with 95% O2 and 5% CO2. Adipose and connective tissue were removed from the mesenteric arteries using a microscope (model SZ-40, Olympus, Shinjuku, Tokyo, Japan). The 2nd branches of mesenteric arteries (250–300 μm) were cut into 2–3 mm-long sections and used in this experiment. If necessary, endothelium was removed by gently rubbing using thin forceps.\n4.2. Preparation of T. caulis Extract The plant extract (CW01-037) used in this research was obtained from the Korea Plant Extract Bank at the Korea Research Institute of Bioscience and Biotechnology (Daejeon, Korea). The plant (103 g) dried in the shade and powdered was added to 1 L of distilled water and heat-extracted for 150 min at 100 °C in an extractor (DW-290, DAEWOONG ELECTRONIC APPIIANCE). After filtration and drying under freeze drying using a freeze dryer (Clean Vac 8, HANIL SCIENCE Co., Ltd.), the yield of the T. caulis extract was 8.3% (8.55 g) of the plant powder and was dissolved in 5% of dimethyl sulfoxide (DMSO).\nThe plant extract (CW01-037) used in this research was obtained from the Korea Plant Extract Bank at the Korea Research Institute of Bioscience and Biotechnology (Daejeon, Korea). The plant (103 g) dried in the shade and powdered was added to 1 L of distilled water and heat-extracted for 150 min at 100 °C in an extractor (DW-290, DAEWOONG ELECTRONIC APPIIANCE). After filtration and drying under freeze drying using a freeze dryer (Clean Vac 8, HANIL SCIENCE Co., Ltd.), the yield of the T. caulis extract was 8.3% (8.55 g) of the plant powder and was dissolved in 5% of dimethyl sulfoxide (DMSO).\n4.3. Measurement of Isometric Tension in Mesenteric Arteries Isolated segments were mounted in wire myography (model 620M, Danish Myotechnology, Aarhus, Denmark) for the recording of isometric tension. Arteries were bathed in 37 °C K-H solution, constantly bubbled with 95% O2 and 5% CO2. Vessels were equilibrated for 20 min and stretched to their optimal resting tension ~4 mN. Contractility of the vessel was evaluated by incubating KCl (70 mM) 3 times. The response was recorded by stabilizing the vessel by contracting the arteries to KCl (70 mM) or PE (5 μM), followed by a cumulative addition of extract (5–250 μg/mL) or vehicle (DMSO, 0.00025–0.125%). To investigate the mechanism of vascular relaxation of the aqueous extract of T. caulis, L-NNA, indomethacin, ODQ, TEA, BaCl2, glibenclamide or 4-AP were pre-treated for 20 min, and then the relaxation response of the T. caulis (250 μg/mL) to phenylephrine (5 μM) contraction was recorded. An experiment was conducted to determine the effect of Ca2+ on vascular relaxation when the T. caulis (250 μg/mL) is treated. CPA (5 μM) was treated in Ca2+-free solution to remove both intracellular and extracellular calcium. After replacing the solution with Ca2+-free K-H solution containing 70 mM of KCl and CPA to Ca2+-free K-H solution containing 70 mM of KCl, changes in contraction were recorded by increasing the concentration of CaCl2 on arteries. 
The extract was then pre-treated in the same arteries for 20 min and the changes in contractility in the same experiment were compared with those before pre-treatment with the T. caulis. The CaCl2-induced contraction was calculated as the percentage of maximum contraction recorded from the KCl contraction. In addition, some arteries were pre-contracted by BAY K8644 (30 nM) in K-H solution containing 15 mM KCl to investigate T. caulis-induced relaxation.\nIsolated segments were mounted in wire myography (model 620M, Danish Myotechnology, Aarhus, Denmark) for the recording of isometric tension. Arteries were bathed in 37 °C K-H solution, constantly bubbled with 95% O2 and 5% CO2. Vessels were equilibrated for 20 min and stretched to their optimal resting tension ~4 mN. Contractility of the vessel was evaluated by incubating KCl (70 mM) 3 times. The response was recorded by stabilizing the vessel by contracting the arteries to KCl (70 mM) or PE (5 μM), followed by a cumulative addition of extract (5–250 μg/mL) or vehicle (DMSO, 0.00025–0.125%). To investigate the mechanism of vascular relaxation of the aqueous extract of T. caulis, L-NNA, indomethacin, ODQ, TEA, BaCl2, glibenclamide or 4-AP were pre-treated for 20 min, and then the relaxation response of the T. caulis (250 μg/mL) to phenylephrine (5 μM) contraction was recorded. An experiment was conducted to determine the effect of Ca2+ on vascular relaxation when the T. caulis (250 μg/mL) is treated. CPA (5 μM) was treated in Ca2+-free solution to remove both intracellular and extracellular calcium. After replacing the solution with Ca2+-free K-H solution containing 70 mM of KCl and CPA to Ca2+-free K-H solution containing 70 mM of KCl, changes in contraction were recorded by increasing the concentration of CaCl2 on arteries. The extract was then pre-treated in the same arteries for 20 min and the changes in contractility in the same experiment were compared with those before pre-treatment with the T. caulis. The CaCl2-induced contraction was calculated as the percentage of maximum contraction recorded from the KCl contraction. In addition, some arteries were pre-contracted by BAY K8644 (30 nM) in K-H solution containing 15 mM KCl to investigate T. caulis-induced relaxation.\n4.4. Chemicals and Reagents Phenylephrine hydrochloride (PE), ACh, L-NNA, ODQ, TEA, BaCl2, 4-AP, butyric acid, furfural and vanillin were purchased from Sigma-Aldrich (St. Louis, MO, USA). Indomethacin was obtained from Calbiochem (Darmstadt, Germany). Glibenclamide was purchased from Tocris Bioscience (Bristol, UK). CPA was obtained from Enzo Life Sciences (Farmingdale, NY, USA).\nPhenylephrine hydrochloride (PE), ACh, L-NNA, ODQ, TEA, BaCl2, 4-AP, butyric acid, furfural and vanillin were purchased from Sigma-Aldrich (St. Louis, MO, USA). Indomethacin was obtained from Calbiochem (Darmstadt, Germany). Glibenclamide was purchased from Tocris Bioscience (Bristol, UK). CPA was obtained from Enzo Life Sciences (Farmingdale, NY, USA).\n4.5. Statistical Analysis All values are expressed as mean ± standard deviations. One-way or two-way ANOVA was used to compare the groups when appropriate. Comparisons between groups were performed with t-tests when the ANOVA test was statistically significant. For all experiments measuring diameter, the n-values mean the number of vessels derived from each different animal. Values of * p < 0.05 were considered statistically significant. 
Differences between specified groups were analyzed using the Student’s t test (2-tailed) to compare the two groups, with * p < 0.05 considered statistically significant.\nAll values are expressed as mean ± standard deviations. One-way or two-way ANOVA was used to compare the groups when appropriate. Comparisons between groups were performed with t-tests when the ANOVA test was statistically significant. For all experiments measuring diameter, the n-values mean the number of vessels derived from each different animal. Values of * p < 0.05 were considered statistically significant. Differences between specified groups were analyzed using the Student’s t test (2-tailed) to compare the two groups, with * p < 0.05 considered statistically significant.", "All experiments were performed according to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH publication No. 85-23, 2011) and were approved by the Ethics Committee and the Institutional Animal Care and Use Committee of Yonsei University, College of Medicine (Approval number: 2021-0058). In this experiment, 8–11-week-old male Sprague Dawley rats were used. After being sacrificed, the mesenteric arteries were rapidly dissected and placed in an ice-cold Krebs–Henseleit (K-H) solution (composition (mM): NaCl 119, KCl 4.6, MgSO4 1.2, KH2PO4 1.2, CaCl2 2.5, NaHCO3 25 and glucose 11.1) bubbled with 95% O2 and 5% CO2. Adipose and connective tissue were removed from the mesenteric arteries using a microscope (model SZ-40, Olympus, Shinjuku, Tokyo, Japan). The 2nd branches of mesenteric arteries (250–300 μm) were cut into 2–3 mm-long sections and used in this experiment. If necessary, endothelium was removed by gently rubbing using thin forceps.", "The plant extract (CW01-037) used in this research was obtained from the Korea Plant Extract Bank at the Korea Research Institute of Bioscience and Biotechnology (Daejeon, Korea). The plant (103 g) dried in the shade and powdered was added to 1 L of distilled water and heat-extracted for 150 min at 100 °C in an extractor (DW-290, DAEWOONG ELECTRONIC APPIIANCE). After filtration and drying under freeze drying using a freeze dryer (Clean Vac 8, HANIL SCIENCE Co., Ltd.), the yield of the T. caulis extract was 8.3% (8.55 g) of the plant powder and was dissolved in 5% of dimethyl sulfoxide (DMSO).", "Isolated segments were mounted in wire myography (model 620M, Danish Myotechnology, Aarhus, Denmark) for the recording of isometric tension. Arteries were bathed in 37 °C K-H solution, constantly bubbled with 95% O2 and 5% CO2. Vessels were equilibrated for 20 min and stretched to their optimal resting tension ~4 mN. Contractility of the vessel was evaluated by incubating KCl (70 mM) 3 times. The response was recorded by stabilizing the vessel by contracting the arteries to KCl (70 mM) or PE (5 μM), followed by a cumulative addition of extract (5–250 μg/mL) or vehicle (DMSO, 0.00025–0.125%). To investigate the mechanism of vascular relaxation of the aqueous extract of T. caulis, L-NNA, indomethacin, ODQ, TEA, BaCl2, glibenclamide or 4-AP were pre-treated for 20 min, and then the relaxation response of the T. caulis (250 μg/mL) to phenylephrine (5 μM) contraction was recorded. An experiment was conducted to determine the effect of Ca2+ on vascular relaxation when the T. caulis (250 μg/mL) is treated. CPA (5 μM) was treated in Ca2+-free solution to remove both intracellular and extracellular calcium. 
After replacing the solution with Ca2+-free K-H solution containing 70 mM of KCl and CPA to Ca2+-free K-H solution containing 70 mM of KCl, changes in contraction were recorded by increasing the concentration of CaCl2 on arteries. The extract was then pre-treated in the same arteries for 20 min and the changes in contractility in the same experiment were compared with those before pre-treatment with the T. caulis. The CaCl2-induced contraction was calculated as the percentage of maximum contraction recorded from the KCl contraction. In addition, some arteries were pre-contracted by BAY K8644 (30 nM) in K-H solution containing 15 mM KCl to investigate T. caulis-induced relaxation.", "Phenylephrine hydrochloride (PE), ACh, L-NNA, ODQ, TEA, BaCl2, 4-AP, butyric acid, furfural and vanillin were purchased from Sigma-Aldrich (St. Louis, MO, USA). Indomethacin was obtained from Calbiochem (Darmstadt, Germany). Glibenclamide was purchased from Tocris Bioscience (Bristol, UK). CPA was obtained from Enzo Life Sciences (Farmingdale, NY, USA).", "All values are expressed as mean ± standard deviations. One-way or two-way ANOVA was used to compare the groups when appropriate. Comparisons between groups were performed with t-tests when the ANOVA test was statistically significant. For all experiments measuring diameter, the n-values mean the number of vessels derived from each different animal. Values of * p < 0.05 were considered statistically significant. Differences between specified groups were analyzed using the Student’s t test (2-tailed) to compare the two groups, with * p < 0.05 considered statistically significant." ]
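The statistical workflow stated in Section 4.5 (one-way ANOVA followed by pairwise two-tailed t-tests when the ANOVA is significant) can be reproduced with standard scipy routines. A minimal sketch; the per-vessel relaxation values are placeholders, not the study's data:

```python
# One-way ANOVA across treatment groups, then two-tailed t-tests between
# pairs when the ANOVA is significant, per the paper's stated analysis.
from scipy import stats

control = [93.4, 95.1, 90.8, 96.2]   # extract alone, % relaxation per vessel
l_nna   = [89.4, 92.7, 88.1, 91.5]   # + L-NNA pre-treatment (illustrative)
odq     = [94.4, 93.2, 96.0, 92.8]   # + ODQ pre-treatment (illustrative)

f_stat, p_anova = stats.f_oneway(control, l_nna, odq)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
if p_anova < 0.05:
    for name, group in (("L-NNA", l_nna), ("ODQ", odq)):
        t, p = stats.ttest_ind(control, group)   # two-tailed Student's t-test
        print(f"control vs {name}: t = {t:.2f}, p = {p:.3f}")
```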
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Results", "2.1. Effect of T. caulis on Contraction Induced by KCl or PE in Rat Mesenteric Arteries", "2.2. T. caulis-Induced Endothelium-Independent Relaxation in Rat Mesenteric Arteries", "2.3. Effects of L-NNA, Indomethacin and ODQ on T. caulis-Induced Vasodilation", "2.4. Effects of K+ Channel Blockers on T. caulis Extract-Induced Vascular Relaxation", "2.5. Effect of T. caulis on Extracellular Ca2+-Induced Contraction", "2.6. Effect of T. caulis on the BAY K8644-Induced Contraction", "2.7. Effects of Vanillin, a Single Active Compound of T. caulis, on Mesenteric Arteries", "3. Discussion", "4. Materials and Methods", "4.1. Animals and Tissue Preparation", "4.2. Preparation of T. caulis Extract", "4.3. Measurement of Isometric Tension in Mesenteric Arteries", "4.4. Chemicals and Reagents", "4.5. Statistical Analysis", "5. Conclusions" ]
[ "Cardiovascular disease (CVD) is the one of the leading causes of morbidity and mortality worldwide. CVD was responsible for 17.3 million deaths worldwide in 2017 which is expected to increase to >23.6 million by 2030 [1,2]. CVD consists of hypertension, heart failure, stroke and a number of vascular and cardiac problems [3]. Hypertension is defined as a systolic blood pressure (SBP) of 130 mmHg or more and a diastolic blood pressure (DBP) of 90 mmHg or more [4].\nThe arterial system comprises capillaries, arterioles, small arteries, medium-sized arteries and large conduit arteries. Resistance arteries, vessels with lumen diameters measuring <400 μm, are major organs of the cardiovascular system and regulate blood flow and peripheral resistance [5,6]. An increase in vascular resistance caused by the decrease in the internal lumen diameter of the blood vessel is a major cause of elevated blood pressure [7,8]. On the other hand, the relaxation of blood vessels leads to an increase in the lumen diameter of the blood vessels, resulting in an immediate decrease in blood pressure. The relaxation responses of arteries were not only endothelium-dependent but also endothelium-independent. Recently, evidence has been accumulating that endothelium-independent vasodilation is impaired in various vascular beds such as coronary, brachial, forearm and peripheral conduit arteries in patients with cardiovascular diseases and cardiovascular risk factors [9]. In particular, hypertension and diabetes mellitus have been shown to be associated with the impairment of endothelium-independent vasodilation [10,11]. Thus, it is necessary to discover substances that induce endothelium-independent vasodilation as much as substances that cause endothelium-dependent vasodilation.\nAlthough synthetic medications have been widely used to treat and cure patients at various stages of CVD, including hypertension, the adverse effects remain a challenge. Because the treatment of hypertension is continued over a long period of time; therefore, the use of synthetic drugs may result in drug resistance and adverse effects [12]. In addition to the use of synthetic drugs to treat hypertension, the use of a natural product is widely increasing [13,14]. Recently, the use of plant extract has shown a steady growth because of low toxicity and well-established therapy [15]. Various plants used in traditional medicine have been studied for their potential use in the treatment of cardiovascular disease [16]. Vasodilatory effects of medicinal plants have been extensively explored over the last two decades and have proven to be potentially effective for the treatment of CVD in clinics all over the world [17,18,19,20].\nTrachelospermi caulis (T. caulis) belongs to the Apocynaceae family, which is the dried leafy stem of trachelospermum asiaticum var. intermedium [21]. T. caulis has been used as traditional herbal medicine to attenuate fever and pain of the knees and loins because of its antipyretic and antinociceptive effects in Asian countries [22]. It is well known that T. caulis lowers blood pressure in oriental medicine [23]. Although T. caulis has been suggested to provoke beneficial effects, the effect and mechanism of action in cardiovascular system are unknown. Therefore, the aim of our research is to explore the effect of T. caulis extract on vascular functionality in resistance arteries and to elucidate the underlying mechanism.", "2.1. Effect of T. 
caulis on Contraction Induced by KCl or PE in Rat Mesenteric Arteries Trachelospermi caulis (5–250 μg/mL) induced vascular relaxation in a concentration-dependent manner in endothelium-intact mesenteric arteries pre-contracted with high K+ (70 mM) or phenylephrine (PE, 5 μM) and in endothelium-denuded mesenteric arteries pre-contracted with PE (5 μM) as shown in Figure 1. EC50 of T. caulis is 98.1 μg/mL in the endothelium-intact arteries pre-contracted with high K+ (70 mM), 62.36 μg/mL in the endothelium-denuded arteries pre-contracted with PE and 36.61 μg/mL in the endothelium-intact arteries pre-contracted with PE. The maximal relaxation value of T. caulis-induced relaxation is 80.73 ± 6.05% in the endothelium-intact arteries pre-contracted with high K+, 89.6 ± 2.28% in the endothelium-intact arteries pre-contracted with PE and 93.30 ± 4.46% in the endothelium-denuded arteries pre-contracted with PE. Cumulative administration of vehicle (DMSO, 0.0025–0.125%) did not affect the contraction induced by PE (Figure 1 inset).\nTrachelospermi caulis (5–250 μg/mL) induced vascular relaxation in a concentration-dependent manner in endothelium-intact mesenteric arteries pre-contracted with high K+ (70 mM) or phenylephrine (PE, 5 μM) and in endothelium-denuded mesenteric arteries pre-contracted with PE (5 μM) as shown in Figure 1. EC50 of T. caulis is 98.1 μg/mL in the endothelium-intact arteries pre-contracted with high K+ (70 mM), 62.36 μg/mL in the endothelium-denuded arteries pre-contracted with PE and 36.61 μg/mL in the endothelium-intact arteries pre-contracted with PE. The maximal relaxation value of T. caulis-induced relaxation is 80.73 ± 6.05% in the endothelium-intact arteries pre-contracted with high K+, 89.6 ± 2.28% in the endothelium-intact arteries pre-contracted with PE and 93.30 ± 4.46% in the endothelium-denuded arteries pre-contracted with PE. Cumulative administration of vehicle (DMSO, 0.0025–0.125%) did not affect the contraction induced by PE (Figure 1 inset).\n2.2. T. caulis-Induced Endothelium-Independent Relaxation in Rat Mesenteric Arteries To explore whether T. caulis-induced relaxation is dependent on endothelium. T. caulis extract was applied in endothelium-intact (Figure 2A) or endothelium-denuded (Figure 2B) mesenteric arteries. T. caulis extract induced vasodilation in the presence and in the absence of the endothelium, and there was no significant difference (93.4 ± 3.56% and 97.17 ± 6.75%, respectively, Figure 2C).\nTo explore whether T. caulis-induced relaxation is dependent on endothelium. T. caulis extract was applied in endothelium-intact (Figure 2A) or endothelium-denuded (Figure 2B) mesenteric arteries. T. caulis extract induced vasodilation in the presence and in the absence of the endothelium, and there was no significant difference (93.4 ± 3.56% and 97.17 ± 6.75%, respectively, Figure 2C).\n2.3. Effects of L-NNA, Indomethacin and ODQ on T. caulis-Induced Vasodilation To investigate the involvement of the nitric oxide (NO)/cyclic guanosine monophosphate (cGMP) and cyclooxygenase (COX)/prostacyclin (PGI2) pathways in T. caulis-induced vasodilation, arteries were incubated for 20 min with endothelial nitric oxide (eNOS) inhibitor, Nω-Nitro-L-arginine (L-NNA, 500 μM), or soluble guanylyl cyclase (sGC) inhibitor, 1H-(1,2,4)oxadiazolo[4,3-a]quinoxalin-1-one (ODQ, 5 μM), or COX inhibitor, indomethacin (10 μM), before being contracted with PE (5 μM). The relaxation responses of T. 
caulis were 89.39 ± 5.12%, 94.41 ± 5.41% and 92.03 ± 4.45% in the presence of L-NNA, ODQ and indomethacin, respectively (Figure 3).\nTo investigate the involvement of the nitric oxide (NO)/cyclic guanosine monophosphate (cGMP) and cyclooxygenase (COX)/prostacyclin (PGI2) pathways in T. caulis-induced vasodilation, arteries were incubated for 20 min with endothelial nitric oxide (eNOS) inhibitor, Nω-Nitro-L-arginine (L-NNA, 500 μM), or soluble guanylyl cyclase (sGC) inhibitor, 1H-(1,2,4)oxadiazolo[4,3-a]quinoxalin-1-one (ODQ, 5 μM), or COX inhibitor, indomethacin (10 μM), before being contracted with PE (5 μM). The relaxation responses of T. caulis were 89.39 ± 5.12%, 94.41 ± 5.41% and 92.03 ± 4.45% in the presence of L-NNA, ODQ and indomethacin, respectively (Figure 3).\n2.4. Effects of K+ Channel Blockers on T. caulis Extract-Induced Vascular Relaxation To determine whether K+ channels are involved in T. caulis-induced relaxation, non-selective K+ channel blocker, tetraethylammonium (TEA, 2 mM) or inward rectifier K+ channel blocker, BaCl2 (30 μM) or voltage-dependent K+ channel blocker, 4-aminopyridine (4-AP, 100 μM) or ATP-sensitive K+ channel blocker, glibenclamide (10 μM), were pre-treated 20 min before being contracted with PE (5 μM). The relaxation responses of T. caulis were 96.23 ± 2.72%, 95.26 ± 0.27%, 93.44 ± 2.10% and 93.51 ± 1.62%, in the presence of TEA, BaCl2, 4-AP and glibenclamide, respectively (Figure 4).\nTo determine whether K+ channels are involved in T. caulis-induced relaxation, non-selective K+ channel blocker, tetraethylammonium (TEA, 2 mM) or inward rectifier K+ channel blocker, BaCl2 (30 μM) or voltage-dependent K+ channel blocker, 4-aminopyridine (4-AP, 100 μM) or ATP-sensitive K+ channel blocker, glibenclamide (10 μM), were pre-treated 20 min before being contracted with PE (5 μM). The relaxation responses of T. caulis were 96.23 ± 2.72%, 95.26 ± 0.27%, 93.44 ± 2.10% and 93.51 ± 1.62%, in the presence of TEA, BaCl2, 4-AP and glibenclamide, respectively (Figure 4).\n2.5. Effect of T. caulis on Extracellular Ca2+-Induced Contraction To identify whether the vasodilatory effect of T. caulis depends on the inhibition of extracellular Ca2+ influx, the mesenteric arteries were incubated in a Ca2+-free solution containing sarcoplasmic reticulum Ca2+-ATPase (SERCA) inhibitor, cyclopiazonic acid (CPA, 5 μΜ) and KCl (70 mM), and then CaCl2 was added by concentration (0.1–2.0 mM) to increase the Ca2+ concentration in the arteries. Before treating the arteries with T. caulis, it was confirmed that the contraction responses caused by the repeated addition of Ca2+ were not changed in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5A,C). Pre-treatment of T. caulis significantly reduced the contractile responses induced by the cumulative addition of Ca2+ in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5B,D).\nTo identify whether the vasodilatory effect of T. caulis depends on the inhibition of extracellular Ca2+ influx, the mesenteric arteries were incubated in a Ca2+-free solution containing sarcoplasmic reticulum Ca2+-ATPase (SERCA) inhibitor, cyclopiazonic acid (CPA, 5 μΜ) and KCl (70 mM), and then CaCl2 was added by concentration (0.1–2.0 mM) to increase the Ca2+ concentration in the arteries. Before treating the arteries with T. caulis, it was confirmed that the contraction responses caused by the repeated addition of Ca2+ were not changed in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5A,C). 
Pre-treatment of T. caulis significantly reduced the contractile responses induced by the cumulative addition of Ca2+ in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5B,D).\n2.6. Effect of T. caulis on the BAY K8644-Induced Contraction To confirm that voltage-gated calcium channel is involved in the T. caulis extract-induced vasodilation, arteries were pre-treated with L-type voltage-gated calcium channel activator, BAY K8644, and then T. caulis extract was administered in the mesenteric arteries. Treatment of vehicle (DMSO 0.001–0.04%) did not affect BAY K8644-induced contraction (Figure 6A). T. caulis extract concentration dependently induced vascular relaxation (Figure 6B).\nTo confirm that voltage-gated calcium channel is involved in the T. caulis extract-induced vasodilation, arteries were pre-treated with L-type voltage-gated calcium channel activator, BAY K8644, and then T. caulis extract was administered in the mesenteric arteries. Treatment of vehicle (DMSO 0.001–0.04%) did not affect BAY K8644-induced contraction (Figure 6A). T. caulis extract concentration dependently induced vascular relaxation (Figure 6B).\n2.7. Effects of Vanillin, a Single Active Compound of T. caulis, on Mesenteric Arteries To support the findings about vascular effect of T. caulis, we further investigated whether single active compounds of T. caulis exert similar effect as T. caulis extract. The gas chromatogram of the compounds identified in the extract of T. caulis is shown in Figure S1 of supplementary materials. The identities of 12 compounds were determined along with their retention time (Table S1 of supplementary materials). The compounds identified based on the gas chromatography–mass spectrometry (GC/MS) analysis include butanoic acid (butyric acid), cyclobutanol, 3-nitropropanoic acid, furan-2-carbaldehyde (furfural), 4-hydroxy-3-methoxybenzaldehyde (vanillin), 4-hydroxy-3-methoxybenzaldehyde,(1R,2S,3S,4R,5R)-6,8-dioxabicyclo[3.2.1]octane-2,3,4-triol, 3-hydroxy-4-methoxybenzoic acid, 3,4,5,6-tetrahydroxy-2-methoxyhexanal, 3,4,5,6-tetrahydroxy-2-methoxyhexanal,(3S,4S,5R,6S)-3-methoxy-6-(methoxymethyl)oxane-2,4,5-triol, hexanoic acid, 1-(2,6-dihydroxy-4-methoxyphenyl)butan-1-one and benzoic acid. Among these candidate single compounds, we found that 4-hydroxy-3-methoxybenzaldehyde (vanillin) has a vascular relaxation effect. We observed that vanillin (0.01–20 mM) induced vasodilation in a concentration-dependent manner in rat mesenteric arteries pre-contracted with high-K+ solution (70 mM) or PE (5 μM). (Figure 7). EC50 of vanillin is 1.1 mM in the mesenteric arteries pre-contracted with high K+ (70 mM) and 1.9 mM in the arteries pre-contracted with PE. We also tested vascular effects of two more single compounds, butyric acid and furfural, in mesenteric resistance arteries. However, butyric acid did not induce significant vasodilation (Figure S2 of supplementary materials). As shown in Figure S3 of supplementary materials, furfural induced vasodilation in a very high concentration (10 mM).\nTo support the findings about vascular effect of T. caulis, we further investigated whether single active compounds of T. caulis exert similar effect as T. caulis extract. The gas chromatogram of the compounds identified in the extract of T. caulis is shown in Figure S1 of supplementary materials. The identities of 12 compounds were determined along with their retention time (Table S1 of supplementary materials). 
3. Discussion

The aim of the present study was to examine the direct effect of T. caulis extract on vascular function in resistance arteries and to determine the underlying mechanism. We demonstrated that T. caulis reduced the contraction induced by KCl or PE in rat mesenteric resistance arteries. T. caulis concentration-dependently induced vascular relaxation in both the presence and absence of the endothelium. Additionally, pre-treatment with L-NNA, ODQ or indomethacin did not affect the vasodilatory effect of T. caulis, which indicates that T. caulis-induced relaxation is not related to the NO pathway. The K+ channel blockers TEA, BaCl2, 4-AP and glibenclamide did not affect the T. caulis-induced relaxation either. T. caulis inhibited extracellular Ca2+-induced vasoconstriction responses in the mesenteric arteries. In addition, one of the active compounds of T. caulis, vanillin, has a similar relaxation effect.

Trachelospermi caulis is a well-known herb used to alleviate swelling from sore throats and carbuncles, as well as to lower fever from muscular contracture and rheumatoid arthritis [24]. Although the beneficial effects of T. caulis have been well reported, insufficient data are available on its cardiovascular effects. This is the first study to present the effect of T. caulis on vascular responses in rat mesenteric resistance arteries.

The vascular endothelium is a single cell layer lining the luminal surface of vessels. In response to various stimuli, the endothelium releases vasoactive substances such as nitric oxide (NO), prostacyclin (prostaglandin I2; PGI2), endothelium-derived hyperpolarizing factor (EDHF), thromboxane (TXA2) and endothelin-1 (ET-1) [25]. Among these substances, NO, PGI2 and EDHF are well known to induce vasodilation [26]. In the present study, we investigated whether T. caulis causes vascular relaxation through endothelial cells. We found that T. caulis induced vasodilation in both the presence and absence of the endothelium, with no significant difference between the two. These findings suggest that T. caulis-induced vasodilation is endothelium-independent. To confirm that T. caulis-induced vascular relaxation occurs independently of the endothelium, we performed experiments using inhibitors such as L-NNA, ODQ and indomethacin. After NO is generated by nitric oxide synthase (NOS) in the endothelium, it diffuses into smooth muscle cells and activates sGC to increase the intracellular cyclic guanosine monophosphate (cGMP) concentration, resulting in relaxation [27]. We found no significant difference in the vasodilatory effect after pre-treatment with L-NNA or ODQ. These results show that T. caulis does not relax blood vessels through the NO-cGMP pathway. Another vasodilator factor released from endothelial cells, PGI2, is produced by cyclooxygenase (COX) [28]. After pre-treatment with indomethacin, a non-selective COX inhibitor, there was no significant difference in the vasodilatory effect of T. caulis compared with the control. These results indicate that the vasodilatory effects of T. caulis are not related to PGI2. Taken together, T. caulis does not cause relaxation through the endothelium.

Since T. caulis-induced vasodilation was confirmed to be endothelium-independent, we next investigated whether T. caulis causes vascular relaxation by acting directly on smooth muscle.
Vascular smooth muscle relaxation is initiated by a decrease in intracellular Ca2+, which results from a reduction of extracellular Ca2+ influx or of Ca2+ release from the intracellular store, the sarcoplasmic reticulum (SR) [29]. Activation of K+ channels induces K+ efflux, which causes membrane hyperpolarization. Membrane hyperpolarization contributes to closure of voltage-dependent Ca2+ channels (VDCC), blocking the influx of extracellular Ca2+ and thereby inducing relaxation of the smooth muscle cells [30]. A previous study reported that K+ channels are involved in the vasodilatory effect of a plant extract in mesenteric arteries [31]. In this study, we treated mesenteric arteries with various types of K+ channel blockers, namely TEA, BaCl2, 4-AP and glibenclamide. Pre-treatment with these K+ channel blockers did not affect the vasodilatory effect of T. caulis. These results indicate that the vasodilatory effect of T. caulis is not mediated by the activation of K+ channels.

Next, we examined whether T. caulis caused relaxation through a reduction of Ca2+ influx. Because blocking Ca2+ channels does not itself induce contraction of the arteries, the relaxation effect could not be tested in the presence of Ca2+ channel blockers. Thus, we used an alternative method to test the effect of T. caulis. The mesenteric arteries were incubated in Ca2+-free K-H solution containing CPA, a state in which intracellular Ca2+ is depleted. Then, 70 mM K+ was administered to enable the opening of VDCC. The cumulative addition of Ca2+ induced contractile responses in the mesenteric arteries, and these responses were reduced by administration of T. caulis. To confirm that VDCC are inhibited by treatment with T. caulis extract, we used the VDCC activator BAY K8644. Since treatment with BAY K8644 alone did not cause stable contraction, arteries were incubated in K-H solution containing 15 mM K+, which creates an environment in which VDCC can open. The T. caulis extract concentration-dependently induced vascular relaxation in the arteries pre-contracted with BAY K8644 (Figure 6B). These results suggest that extracellular Ca2+ influx is inhibited by treatment with T. caulis extract.

Although we discovered the vasodilatory effect of T. caulis, a further question was whether single active compounds of the T. caulis extract also induce vascular relaxation in mesenteric arteries. Among the compounds identified by gas chromatography–mass spectrometry (GC/MS), we tested the effects of 4-hydroxy-3-methoxybenzaldehyde (vanillin), butyric acid and furfural on the mesenteric arteries. We found that butyric acid did not induce significant relaxation responses (Figure S2 of the supplementary materials) and that only a high concentration of furfural exerted a vasodilatory effect (Figure S3 of the supplementary materials). Interestingly, vanillin has a potent vasodilatory effect on mesenteric arteries, similar to that of the T. caulis extract (Figure 7). This result is in accordance with a previous study reporting that vanillin induced relaxation in porcine coronary and basilar arteries [32]. Vanillin is a small molecule that is commonly used as a flavoring agent or additive by the food and cosmetics industries. Vanillin is considered to have various biological functions, including anti-inflammatory, antioxidative and neuroprotective activities [33,34]. In the present study, vanillin induced vascular relaxation in a concentration-dependent manner in mesenteric arteries pre-contracted with PE or high K+. These results support our finding that T.
caulis induces vasodilation in mesenteric resistance arteries.

Plants contain a variety of metabolites, including polyphenols, catechins, flavonoids, alkaloids and many volatile components [35]. Some of these metabolites have been suggested to have cardioprotective and antihypertensive effects [16]. The present study suggests the potential use of T. caulis extracts as antihypertensive agents by showing that the T. caulis extract significantly relaxes resistance arteries.

4. Materials and Methods

4.1. Animals and Tissue Preparation

All experiments were performed according to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH publication No. 85-23, 2011) and were approved by the Ethics Committee and the Institutional Animal Care and Use Committee of Yonsei University College of Medicine (approval number: 2021-0058). In this experiment, 8–11-week-old male Sprague Dawley rats were used. After the rats were sacrificed, the mesenteric arteries were rapidly dissected and placed in ice-cold Krebs–Henseleit (K-H) solution (composition (mM): NaCl 119, KCl 4.6, MgSO4 1.2, KH2PO4 1.2, CaCl2 2.5, NaHCO3 25 and glucose 11.1) bubbled with 95% O2 and 5% CO2. Adipose and connective tissue were removed from the mesenteric arteries under a microscope (model SZ-40, Olympus, Shinjuku, Tokyo, Japan). Second-order branches of the mesenteric arteries (250–300 μm) were cut into 2–3 mm-long segments and used in this experiment. Where required, the endothelium was removed by gentle rubbing with thin forceps.

4.2. Preparation of T. caulis Extract

The plant extract (CW01-037) used in this research was obtained from the Korea Plant Extract Bank at the Korea Research Institute of Bioscience and Biotechnology (Daejeon, Korea). The plant material (103 g), dried in the shade and powdered, was added to 1 L of distilled water and heat-extracted for 150 min at 100 °C in an extractor (DW-290, Daewoong Electronic Appliance). After filtration and freeze-drying using a freeze dryer (Clean Vac 8, HANIL SCIENCE Co., Ltd.), the yield of the T. caulis extract was 8.3% (8.55 g) of the plant powder; the extract was dissolved in 5% dimethyl sulfoxide (DMSO).
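Two routine calculations sit behind Sections 4.1 and 4.2: the extraction yield and the conversion of the K-H recipe from millimolar targets to weighable amounts. The sketch below reproduces both; the molar masses assume anhydrous salts, which is our assumption rather than a detail stated in the text.

```python
# Minimal sketch of the arithmetic behind Sections 4.1 and 4.2.

# Extraction yield (Section 4.2): 8.55 g extract from 103 g plant powder.
yield_pct = 8.55 / 103 * 100
print(f"extract yield = {yield_pct:.1f}%")  # 8.3%, matching the text

# K-H solution (Section 4.1): convert target mM to grams per litre.
MOLAR_MASS = {  # g/mol, assuming anhydrous salts
    "NaCl": 58.44, "KCl": 74.55, "MgSO4": 120.37, "KH2PO4": 136.09,
    "CaCl2": 110.98, "NaHCO3": 84.01, "glucose": 180.16,
}
TARGET_MM = {"NaCl": 119, "KCl": 4.6, "MgSO4": 1.2, "KH2PO4": 1.2,
             "CaCl2": 2.5, "NaHCO3": 25, "glucose": 11.1}
for solute, mM in TARGET_MM.items():
    print(f"{solute}: {mM / 1000 * MOLAR_MASS[solute]:.3f} g/L")
```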
4.3. Measurement of Isometric Tension in Mesenteric Arteries

Isolated segments were mounted in a wire myograph (model 620M, Danish Myo Technology, Aarhus, Denmark) for the recording of isometric tension. Arteries were bathed in 37 °C K-H solution, constantly bubbled with 95% O2 and 5% CO2. Vessels were equilibrated for 20 min and stretched to their optimal resting tension of ~4 mN. The contractility of each vessel was evaluated by applying KCl (70 mM) three times. Once the response had stabilized, arteries were contracted with KCl (70 mM) or PE (5 μM), followed by cumulative addition of the extract (5–250 μg/mL) or vehicle (DMSO, 0.00025–0.125%). To investigate the mechanism of vascular relaxation by the aqueous extract of T. caulis, L-NNA, indomethacin, ODQ, TEA, BaCl2, glibenclamide or 4-AP was applied for 20 min, and the relaxation response to T. caulis (250 μg/mL) of the phenylephrine (5 μM)-induced contraction was then recorded. An experiment was also conducted to determine the effect of T. caulis (250 μg/mL) on Ca2+-induced contraction. CPA (5 μM) was applied in Ca2+-free solution so that both extracellular Ca2+ and intracellular Ca2+ stores were depleted. After the Ca2+-free K-H solution containing 70 mM KCl and CPA was replaced with Ca2+-free K-H solution containing 70 mM KCl, changes in contraction were recorded while the CaCl2 concentration was cumulatively increased. The extract was then pre-applied to the same arteries for 20 min, and the changes in contractility in the same protocol were compared with those recorded before pre-treatment with T. caulis. The CaCl2-induced contraction was calculated as a percentage of the maximum contraction recorded during the KCl contraction. In addition, some arteries were pre-contracted with BAY K8644 (30 nM) in K-H solution containing 15 mM KCl to investigate T. caulis-induced relaxation.
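The two normalizations used in this protocol (relaxation as a percentage of the agonist-induced tone, and CaCl2-induced contraction as a percentage of the maximal KCl response) reduce to simple ratios of tension readings. The sketch below shows one plausible implementation; the function names and example tensions are ours, not the authors'.

```python
# Minimal sketch of the normalizations described in Section 4.3.
# Function names and example tensions are illustrative assumptions.

def percent_relaxation(tension_mn, baseline_mn, plateau_mn):
    """Relaxation as % of the pre-contraction tone above resting baseline."""
    return (plateau_mn - tension_mn) / (plateau_mn - baseline_mn) * 100.0

def percent_of_kcl_max(tension_mn, baseline_mn, kcl_max_mn):
    """CaCl2-induced contraction as % of the maximal KCl contraction."""
    return (tension_mn - baseline_mn) / (kcl_max_mn - baseline_mn) * 100.0

# Hypothetical readings: resting tone 4 mN, PE plateau 12 mN, 5 mN after extract.
print(f"{percent_relaxation(5.0, 4.0, 12.0):.1f}% relaxation")    # 87.5
# Hypothetical CaCl2 step reaching 10 mN against a 14 mN KCl maximum.
print(f"{percent_of_kcl_max(10.0, 4.0, 14.0):.1f}% of KCl max")   # 60.0
```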
4.4. Chemicals and Reagents

Phenylephrine hydrochloride (PE), ACh, L-NNA, ODQ, TEA, BaCl2, 4-AP, butyric acid, furfural and vanillin were purchased from Sigma-Aldrich (St. Louis, MO, USA). Indomethacin was obtained from Calbiochem (Darmstadt, Germany). Glibenclamide was purchased from Tocris Bioscience (Bristol, UK). CPA was obtained from Enzo Life Sciences (Farmingdale, NY, USA).

4.5. Statistical Analysis

All values are expressed as mean ± standard deviation. One-way or two-way ANOVA was used to compare groups as appropriate; when the ANOVA was statistically significant, pairwise comparisons between groups were performed with two-tailed Student's t-tests. For all experiments measuring diameter, n denotes the number of vessels, each derived from a different animal. p < 0.05 was considered statistically significant.
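The stated pipeline (ANOVA first, pairwise two-tailed t-tests only when the ANOVA is significant) can be expressed compactly. The sketch below uses SciPy, with placeholder group values standing in for the measured relaxation data.

```python
# Minimal sketch of the analysis order in Section 4.5: one-way ANOVA,
# then pairwise two-tailed t-tests only if the ANOVA is significant.
# Group values are illustrative placeholders, not study data.
from itertools import combinations
from scipy import stats

groups = {
    "control": [92.1, 95.4, 90.8, 94.2],
    "L-NNA":   [89.0, 91.5, 86.7, 90.2],
    "ODQ":     [93.8, 96.0, 91.2, 95.1],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

if p_anova < 0.05:  # proceed to pairwise comparisons only when significant
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        t, p = stats.ttest_ind(a, b)  # two-tailed by default
        print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.3f}")
```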
5. Conclusions

In the present study, for the first time, we discovered that T.
caulis extract induced vascular relaxation in rat mesenteric resistance arteries. The vasodilatory effect of T. caulis was endothelium-independent, and inhibition of extracellular Ca2+ influx was involved in the extract-induced vascular relaxation. Our results suggest that T. caulis could be a valuable herbal resource for future research and for the treatment of cardiovascular diseases.
[ "intro", "results", null, null, null, null, null, null, null, "discussion", null, null, null, null, null, null, "conclusions" ]
[ "\nTrachelospermi caulis\n", "vasodilation", "mesenteric resistance arteries", "vanillin", "relaxation", "Ca2+" ]
1. Introduction: Cardiovascular disease (CVD) is the one of the leading causes of morbidity and mortality worldwide. CVD was responsible for 17.3 million deaths worldwide in 2017 which is expected to increase to >23.6 million by 2030 [1,2]. CVD consists of hypertension, heart failure, stroke and a number of vascular and cardiac problems [3]. Hypertension is defined as a systolic blood pressure (SBP) of 130 mmHg or more and a diastolic blood pressure (DBP) of 90 mmHg or more [4]. The arterial system comprises capillaries, arterioles, small arteries, medium-sized arteries and large conduit arteries. Resistance arteries, vessels with lumen diameters measuring <400 μm, are major organs of the cardiovascular system and regulate blood flow and peripheral resistance [5,6]. An increase in vascular resistance caused by the decrease in the internal lumen diameter of the blood vessel is a major cause of elevated blood pressure [7,8]. On the other hand, the relaxation of blood vessels leads to an increase in the lumen diameter of the blood vessels, resulting in an immediate decrease in blood pressure. The relaxation responses of arteries were not only endothelium-dependent but also endothelium-independent. Recently, evidence has been accumulating that endothelium-independent vasodilation is impaired in various vascular beds such as coronary, brachial, forearm and peripheral conduit arteries in patients with cardiovascular diseases and cardiovascular risk factors [9]. In particular, hypertension and diabetes mellitus have been shown to be associated with the impairment of endothelium-independent vasodilation [10,11]. Thus, it is necessary to discover substances that induce endothelium-independent vasodilation as much as substances that cause endothelium-dependent vasodilation. Although synthetic medications have been widely used to treat and cure patients at various stages of CVD, including hypertension, the adverse effects remain a challenge. Because the treatment of hypertension is continued over a long period of time; therefore, the use of synthetic drugs may result in drug resistance and adverse effects [12]. In addition to the use of synthetic drugs to treat hypertension, the use of a natural product is widely increasing [13,14]. Recently, the use of plant extract has shown a steady growth because of low toxicity and well-established therapy [15]. Various plants used in traditional medicine have been studied for their potential use in the treatment of cardiovascular disease [16]. Vasodilatory effects of medicinal plants have been extensively explored over the last two decades and have proven to be potentially effective for the treatment of CVD in clinics all over the world [17,18,19,20]. Trachelospermi caulis (T. caulis) belongs to the Apocynaceae family, which is the dried leafy stem of trachelospermum asiaticum var. intermedium [21]. T. caulis has been used as traditional herbal medicine to attenuate fever and pain of the knees and loins because of its antipyretic and antinociceptive effects in Asian countries [22]. It is well known that T. caulis lowers blood pressure in oriental medicine [23]. Although T. caulis has been suggested to provoke beneficial effects, the effect and mechanism of action in cardiovascular system are unknown. Therefore, the aim of our research is to explore the effect of T. caulis extract on vascular functionality in resistance arteries and to elucidate the underlying mechanism. 2. Results: 2.1. Effect of T. 
caulis on Contraction Induced by KCl or PE in Rat Mesenteric Arteries Trachelospermi caulis (5–250 μg/mL) induced vascular relaxation in a concentration-dependent manner in endothelium-intact mesenteric arteries pre-contracted with high K+ (70 mM) or phenylephrine (PE, 5 μM) and in endothelium-denuded mesenteric arteries pre-contracted with PE (5 μM) as shown in Figure 1. EC50 of T. caulis is 98.1 μg/mL in the endothelium-intact arteries pre-contracted with high K+ (70 mM), 62.36 μg/mL in the endothelium-denuded arteries pre-contracted with PE and 36.61 μg/mL in the endothelium-intact arteries pre-contracted with PE. The maximal relaxation value of T. caulis-induced relaxation is 80.73 ± 6.05% in the endothelium-intact arteries pre-contracted with high K+, 89.6 ± 2.28% in the endothelium-intact arteries pre-contracted with PE and 93.30 ± 4.46% in the endothelium-denuded arteries pre-contracted with PE. Cumulative administration of vehicle (DMSO, 0.0025–0.125%) did not affect the contraction induced by PE (Figure 1 inset). Trachelospermi caulis (5–250 μg/mL) induced vascular relaxation in a concentration-dependent manner in endothelium-intact mesenteric arteries pre-contracted with high K+ (70 mM) or phenylephrine (PE, 5 μM) and in endothelium-denuded mesenteric arteries pre-contracted with PE (5 μM) as shown in Figure 1. EC50 of T. caulis is 98.1 μg/mL in the endothelium-intact arteries pre-contracted with high K+ (70 mM), 62.36 μg/mL in the endothelium-denuded arteries pre-contracted with PE and 36.61 μg/mL in the endothelium-intact arteries pre-contracted with PE. The maximal relaxation value of T. caulis-induced relaxation is 80.73 ± 6.05% in the endothelium-intact arteries pre-contracted with high K+, 89.6 ± 2.28% in the endothelium-intact arteries pre-contracted with PE and 93.30 ± 4.46% in the endothelium-denuded arteries pre-contracted with PE. Cumulative administration of vehicle (DMSO, 0.0025–0.125%) did not affect the contraction induced by PE (Figure 1 inset). 2.2. T. caulis-Induced Endothelium-Independent Relaxation in Rat Mesenteric Arteries To explore whether T. caulis-induced relaxation is dependent on endothelium. T. caulis extract was applied in endothelium-intact (Figure 2A) or endothelium-denuded (Figure 2B) mesenteric arteries. T. caulis extract induced vasodilation in the presence and in the absence of the endothelium, and there was no significant difference (93.4 ± 3.56% and 97.17 ± 6.75%, respectively, Figure 2C). To explore whether T. caulis-induced relaxation is dependent on endothelium. T. caulis extract was applied in endothelium-intact (Figure 2A) or endothelium-denuded (Figure 2B) mesenteric arteries. T. caulis extract induced vasodilation in the presence and in the absence of the endothelium, and there was no significant difference (93.4 ± 3.56% and 97.17 ± 6.75%, respectively, Figure 2C). 2.3. Effects of L-NNA, Indomethacin and ODQ on T. caulis-Induced Vasodilation To investigate the involvement of the nitric oxide (NO)/cyclic guanosine monophosphate (cGMP) and cyclooxygenase (COX)/prostacyclin (PGI2) pathways in T. caulis-induced vasodilation, arteries were incubated for 20 min with endothelial nitric oxide (eNOS) inhibitor, Nω-Nitro-L-arginine (L-NNA, 500 μM), or soluble guanylyl cyclase (sGC) inhibitor, 1H-(1,2,4)oxadiazolo[4,3-a]quinoxalin-1-one (ODQ, 5 μM), or COX inhibitor, indomethacin (10 μM), before being contracted with PE (5 μM). The relaxation responses of T. 
caulis were 89.39 ± 5.12%, 94.41 ± 5.41% and 92.03 ± 4.45% in the presence of L-NNA, ODQ and indomethacin, respectively (Figure 3). To investigate the involvement of the nitric oxide (NO)/cyclic guanosine monophosphate (cGMP) and cyclooxygenase (COX)/prostacyclin (PGI2) pathways in T. caulis-induced vasodilation, arteries were incubated for 20 min with endothelial nitric oxide (eNOS) inhibitor, Nω-Nitro-L-arginine (L-NNA, 500 μM), or soluble guanylyl cyclase (sGC) inhibitor, 1H-(1,2,4)oxadiazolo[4,3-a]quinoxalin-1-one (ODQ, 5 μM), or COX inhibitor, indomethacin (10 μM), before being contracted with PE (5 μM). The relaxation responses of T. caulis were 89.39 ± 5.12%, 94.41 ± 5.41% and 92.03 ± 4.45% in the presence of L-NNA, ODQ and indomethacin, respectively (Figure 3). 2.4. Effects of K+ Channel Blockers on T. caulis Extract-Induced Vascular Relaxation To determine whether K+ channels are involved in T. caulis-induced relaxation, non-selective K+ channel blocker, tetraethylammonium (TEA, 2 mM) or inward rectifier K+ channel blocker, BaCl2 (30 μM) or voltage-dependent K+ channel blocker, 4-aminopyridine (4-AP, 100 μM) or ATP-sensitive K+ channel blocker, glibenclamide (10 μM), were pre-treated 20 min before being contracted with PE (5 μM). The relaxation responses of T. caulis were 96.23 ± 2.72%, 95.26 ± 0.27%, 93.44 ± 2.10% and 93.51 ± 1.62%, in the presence of TEA, BaCl2, 4-AP and glibenclamide, respectively (Figure 4). To determine whether K+ channels are involved in T. caulis-induced relaxation, non-selective K+ channel blocker, tetraethylammonium (TEA, 2 mM) or inward rectifier K+ channel blocker, BaCl2 (30 μM) or voltage-dependent K+ channel blocker, 4-aminopyridine (4-AP, 100 μM) or ATP-sensitive K+ channel blocker, glibenclamide (10 μM), were pre-treated 20 min before being contracted with PE (5 μM). The relaxation responses of T. caulis were 96.23 ± 2.72%, 95.26 ± 0.27%, 93.44 ± 2.10% and 93.51 ± 1.62%, in the presence of TEA, BaCl2, 4-AP and glibenclamide, respectively (Figure 4). 2.5. Effect of T. caulis on Extracellular Ca2+-Induced Contraction To identify whether the vasodilatory effect of T. caulis depends on the inhibition of extracellular Ca2+ influx, the mesenteric arteries were incubated in a Ca2+-free solution containing sarcoplasmic reticulum Ca2+-ATPase (SERCA) inhibitor, cyclopiazonic acid (CPA, 5 μΜ) and KCl (70 mM), and then CaCl2 was added by concentration (0.1–2.0 mM) to increase the Ca2+ concentration in the arteries. Before treating the arteries with T. caulis, it was confirmed that the contraction responses caused by the repeated addition of Ca2+ were not changed in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5A,C). Pre-treatment of T. caulis significantly reduced the contractile responses induced by the cumulative addition of Ca2+ in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5B,D). To identify whether the vasodilatory effect of T. caulis depends on the inhibition of extracellular Ca2+ influx, the mesenteric arteries were incubated in a Ca2+-free solution containing sarcoplasmic reticulum Ca2+-ATPase (SERCA) inhibitor, cyclopiazonic acid (CPA, 5 μΜ) and KCl (70 mM), and then CaCl2 was added by concentration (0.1–2.0 mM) to increase the Ca2+ concentration in the arteries. Before treating the arteries with T. caulis, it was confirmed that the contraction responses caused by the repeated addition of Ca2+ were not changed in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5A,C). 
Pre-treatment of T. caulis significantly reduced the contractile responses induced by the cumulative addition of Ca2+ in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5B,D). 2.6. Effect of T. caulis on the BAY K8644-Induced Contraction To confirm that voltage-gated calcium channel is involved in the T. caulis extract-induced vasodilation, arteries were pre-treated with L-type voltage-gated calcium channel activator, BAY K8644, and then T. caulis extract was administered in the mesenteric arteries. Treatment of vehicle (DMSO 0.001–0.04%) did not affect BAY K8644-induced contraction (Figure 6A). T. caulis extract concentration dependently induced vascular relaxation (Figure 6B). To confirm that voltage-gated calcium channel is involved in the T. caulis extract-induced vasodilation, arteries were pre-treated with L-type voltage-gated calcium channel activator, BAY K8644, and then T. caulis extract was administered in the mesenteric arteries. Treatment of vehicle (DMSO 0.001–0.04%) did not affect BAY K8644-induced contraction (Figure 6A). T. caulis extract concentration dependently induced vascular relaxation (Figure 6B). 2.7. Effects of Vanillin, a Single Active Compound of T. caulis, on Mesenteric Arteries To support the findings about vascular effect of T. caulis, we further investigated whether single active compounds of T. caulis exert similar effect as T. caulis extract. The gas chromatogram of the compounds identified in the extract of T. caulis is shown in Figure S1 of supplementary materials. The identities of 12 compounds were determined along with their retention time (Table S1 of supplementary materials). The compounds identified based on the gas chromatography–mass spectrometry (GC/MS) analysis include butanoic acid (butyric acid), cyclobutanol, 3-nitropropanoic acid, furan-2-carbaldehyde (furfural), 4-hydroxy-3-methoxybenzaldehyde (vanillin), 4-hydroxy-3-methoxybenzaldehyde,(1R,2S,3S,4R,5R)-6,8-dioxabicyclo[3.2.1]octane-2,3,4-triol, 3-hydroxy-4-methoxybenzoic acid, 3,4,5,6-tetrahydroxy-2-methoxyhexanal, 3,4,5,6-tetrahydroxy-2-methoxyhexanal,(3S,4S,5R,6S)-3-methoxy-6-(methoxymethyl)oxane-2,4,5-triol, hexanoic acid, 1-(2,6-dihydroxy-4-methoxyphenyl)butan-1-one and benzoic acid. Among these candidate single compounds, we found that 4-hydroxy-3-methoxybenzaldehyde (vanillin) has a vascular relaxation effect. We observed that vanillin (0.01–20 mM) induced vasodilation in a concentration-dependent manner in rat mesenteric arteries pre-contracted with high-K+ solution (70 mM) or PE (5 μM). (Figure 7). EC50 of vanillin is 1.1 mM in the mesenteric arteries pre-contracted with high K+ (70 mM) and 1.9 mM in the arteries pre-contracted with PE. We also tested vascular effects of two more single compounds, butyric acid and furfural, in mesenteric resistance arteries. However, butyric acid did not induce significant vasodilation (Figure S2 of supplementary materials). As shown in Figure S3 of supplementary materials, furfural induced vasodilation in a very high concentration (10 mM). To support the findings about vascular effect of T. caulis, we further investigated whether single active compounds of T. caulis exert similar effect as T. caulis extract. The gas chromatogram of the compounds identified in the extract of T. caulis is shown in Figure S1 of supplementary materials. The identities of 12 compounds were determined along with their retention time (Table S1 of supplementary materials). 
The compounds identified based on the gas chromatography–mass spectrometry (GC/MS) analysis include butanoic acid (butyric acid), cyclobutanol, 3-nitropropanoic acid, furan-2-carbaldehyde (furfural), 4-hydroxy-3-methoxybenzaldehyde (vanillin), 4-hydroxy-3-methoxybenzaldehyde,(1R,2S,3S,4R,5R)-6,8-dioxabicyclo[3.2.1]octane-2,3,4-triol, 3-hydroxy-4-methoxybenzoic acid, 3,4,5,6-tetrahydroxy-2-methoxyhexanal, 3,4,5,6-tetrahydroxy-2-methoxyhexanal,(3S,4S,5R,6S)-3-methoxy-6-(methoxymethyl)oxane-2,4,5-triol, hexanoic acid, 1-(2,6-dihydroxy-4-methoxyphenyl)butan-1-one and benzoic acid. Among these candidate single compounds, we found that 4-hydroxy-3-methoxybenzaldehyde (vanillin) has a vascular relaxation effect. We observed that vanillin (0.01–20 mM) induced vasodilation in a concentration-dependent manner in rat mesenteric arteries pre-contracted with high-K+ solution (70 mM) or PE (5 μM). (Figure 7). EC50 of vanillin is 1.1 mM in the mesenteric arteries pre-contracted with high K+ (70 mM) and 1.9 mM in the arteries pre-contracted with PE. We also tested vascular effects of two more single compounds, butyric acid and furfural, in mesenteric resistance arteries. However, butyric acid did not induce significant vasodilation (Figure S2 of supplementary materials). As shown in Figure S3 of supplementary materials, furfural induced vasodilation in a very high concentration (10 mM). 2.1. Effect of T. caulis on Contraction Induced by KCl or PE in Rat Mesenteric Arteries: Trachelospermi caulis (5–250 μg/mL) induced vascular relaxation in a concentration-dependent manner in endothelium-intact mesenteric arteries pre-contracted with high K+ (70 mM) or phenylephrine (PE, 5 μM) and in endothelium-denuded mesenteric arteries pre-contracted with PE (5 μM) as shown in Figure 1. EC50 of T. caulis is 98.1 μg/mL in the endothelium-intact arteries pre-contracted with high K+ (70 mM), 62.36 μg/mL in the endothelium-denuded arteries pre-contracted with PE and 36.61 μg/mL in the endothelium-intact arteries pre-contracted with PE. The maximal relaxation value of T. caulis-induced relaxation is 80.73 ± 6.05% in the endothelium-intact arteries pre-contracted with high K+, 89.6 ± 2.28% in the endothelium-intact arteries pre-contracted with PE and 93.30 ± 4.46% in the endothelium-denuded arteries pre-contracted with PE. Cumulative administration of vehicle (DMSO, 0.0025–0.125%) did not affect the contraction induced by PE (Figure 1 inset). 2.2. T. caulis-Induced Endothelium-Independent Relaxation in Rat Mesenteric Arteries: To explore whether T. caulis-induced relaxation is dependent on endothelium. T. caulis extract was applied in endothelium-intact (Figure 2A) or endothelium-denuded (Figure 2B) mesenteric arteries. T. caulis extract induced vasodilation in the presence and in the absence of the endothelium, and there was no significant difference (93.4 ± 3.56% and 97.17 ± 6.75%, respectively, Figure 2C). 2.3. Effects of L-NNA, Indomethacin and ODQ on T. caulis-Induced Vasodilation: To investigate the involvement of the nitric oxide (NO)/cyclic guanosine monophosphate (cGMP) and cyclooxygenase (COX)/prostacyclin (PGI2) pathways in T. caulis-induced vasodilation, arteries were incubated for 20 min with endothelial nitric oxide (eNOS) inhibitor, Nω-Nitro-L-arginine (L-NNA, 500 μM), or soluble guanylyl cyclase (sGC) inhibitor, 1H-(1,2,4)oxadiazolo[4,3-a]quinoxalin-1-one (ODQ, 5 μM), or COX inhibitor, indomethacin (10 μM), before being contracted with PE (5 μM). The relaxation responses of T. 
caulis were 89.39 ± 5.12%, 94.41 ± 5.41% and 92.03 ± 4.45% in the presence of L-NNA, ODQ and indomethacin, respectively (Figure 3). 2.4. Effects of K+ Channel Blockers on T. caulis Extract-Induced Vascular Relaxation: To determine whether K+ channels are involved in T. caulis-induced relaxation, non-selective K+ channel blocker, tetraethylammonium (TEA, 2 mM) or inward rectifier K+ channel blocker, BaCl2 (30 μM) or voltage-dependent K+ channel blocker, 4-aminopyridine (4-AP, 100 μM) or ATP-sensitive K+ channel blocker, glibenclamide (10 μM), were pre-treated 20 min before being contracted with PE (5 μM). The relaxation responses of T. caulis were 96.23 ± 2.72%, 95.26 ± 0.27%, 93.44 ± 2.10% and 93.51 ± 1.62%, in the presence of TEA, BaCl2, 4-AP and glibenclamide, respectively (Figure 4). 2.5. Effect of T. caulis on Extracellular Ca2+-Induced Contraction: To identify whether the vasodilatory effect of T. caulis depends on the inhibition of extracellular Ca2+ influx, the mesenteric arteries were incubated in a Ca2+-free solution containing sarcoplasmic reticulum Ca2+-ATPase (SERCA) inhibitor, cyclopiazonic acid (CPA, 5 μΜ) and KCl (70 mM), and then CaCl2 was added by concentration (0.1–2.0 mM) to increase the Ca2+ concentration in the arteries. Before treating the arteries with T. caulis, it was confirmed that the contraction responses caused by the repeated addition of Ca2+ were not changed in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5A,C). Pre-treatment of T. caulis significantly reduced the contractile responses induced by the cumulative addition of Ca2+ in endothelium-intact and endothelium-denuded mesenteric arteries (Figure 5B,D). 2.6. Effect of T. caulis on the BAY K8644-Induced Contraction: To confirm that voltage-gated calcium channel is involved in the T. caulis extract-induced vasodilation, arteries were pre-treated with L-type voltage-gated calcium channel activator, BAY K8644, and then T. caulis extract was administered in the mesenteric arteries. Treatment of vehicle (DMSO 0.001–0.04%) did not affect BAY K8644-induced contraction (Figure 6A). T. caulis extract concentration dependently induced vascular relaxation (Figure 6B). 2.7. Effects of Vanillin, a Single Active Compound of T. caulis, on Mesenteric Arteries: To support the findings about vascular effect of T. caulis, we further investigated whether single active compounds of T. caulis exert similar effect as T. caulis extract. The gas chromatogram of the compounds identified in the extract of T. caulis is shown in Figure S1 of supplementary materials. The identities of 12 compounds were determined along with their retention time (Table S1 of supplementary materials). The compounds identified based on the gas chromatography–mass spectrometry (GC/MS) analysis include butanoic acid (butyric acid), cyclobutanol, 3-nitropropanoic acid, furan-2-carbaldehyde (furfural), 4-hydroxy-3-methoxybenzaldehyde (vanillin), 4-hydroxy-3-methoxybenzaldehyde,(1R,2S,3S,4R,5R)-6,8-dioxabicyclo[3.2.1]octane-2,3,4-triol, 3-hydroxy-4-methoxybenzoic acid, 3,4,5,6-tetrahydroxy-2-methoxyhexanal, 3,4,5,6-tetrahydroxy-2-methoxyhexanal,(3S,4S,5R,6S)-3-methoxy-6-(methoxymethyl)oxane-2,4,5-triol, hexanoic acid, 1-(2,6-dihydroxy-4-methoxyphenyl)butan-1-one and benzoic acid. Among these candidate single compounds, we found that 4-hydroxy-3-methoxybenzaldehyde (vanillin) has a vascular relaxation effect. 
We observed that vanillin (0.01–20 mM) induced vasodilation in a concentration-dependent manner in rat mesenteric arteries pre-contracted with high-K+ solution (70 mM) or PE (5 μM). (Figure 7). EC50 of vanillin is 1.1 mM in the mesenteric arteries pre-contracted with high K+ (70 mM) and 1.9 mM in the arteries pre-contracted with PE. We also tested vascular effects of two more single compounds, butyric acid and furfural, in mesenteric resistance arteries. However, butyric acid did not induce significant vasodilation (Figure S2 of supplementary materials). As shown in Figure S3 of supplementary materials, furfural induced vasodilation in a very high concentration (10 mM). 3. Discussion: The aim of the present study was to examine the direct effect of T. caulis extract on the vascular functionality in resistance arteries and to determine the underlying mechanism. We demonstrated that T. caulis reduced contraction induced by KCl or PE in rat mesenteric resistance arteries. T. caulis concentration-dependently induced vascular relaxation in the presence and absence of the endothelium. Additionally, the pre-treatment of L-NNA, ODQ and indomethacin did not affect the vasodilatory effect of T. caulis, which indicates that T. caulis-induced relaxation is not related with the NO pathway. The K+ channel blockers, TEA, BaCl2, 4-AP and glibenclamide, did not affect the T. caulis-induced relaxation either. T. caulis inhibited extracellular Ca2+-induced vasoconstriction responses in the mesenteric arteries. In addition, one of the active compounds of T. caulis, vanillin, also has a similar relaxation effect. Trachelospermi caulis is well-known herb that is used to alleviate swelling from sore throats and carbuncles, as well as to lower fever from muscular contracture and rheumatoid arthritis [24]. Although, beneficial effects of T. caulis have been well-reported, no sufficient data are available on the cardiovascular effect of T. caulis. This is the first study that presents the effect of T. caulis on the vascular responses in rat mesenteric resistance arteries. The vascular endothelium is a single layer lining the luminal surface of vessels. In response to various stimuli, endothelium releases vasoactive substances such as nitric oxide (NO), prostacyclin (prostaglandin I2; PGI2), endothelium-derived hyperpolarizing factor (EDHF), thromboxane (TXA2) and endothelin-1 (ET-1) [25]. Among these substances, it is well known that NO, PGI2 and EDHF induce vasodilation [26]. In the present study, we investigated whether T. caulis causes vascular relaxation through endothelial cells. We found that T. caulis induced vasodilation in the presence and in the absence of endothelium, and there was no significant difference. These finding suggested that T. caulis-induced vasodilation is endothelium-independent. In order to confirm that T. caulis-induced vascular relaxation occurs endothelium independent, we performed experiments using various inhibitors such as L-NNA, ODQ and indomethacin. After NO is generated by nitric oxide synthase (NOS) in endothelium, it diffuses into smooth muscle cells and activates sGC to increase intracellular cyclic guanosine monophosphate (cGMP) concentration, resulting in relaxation [27]. We found that there was no significant difference in the vasodilatory effect of pre-treatment with the L-NNA and ODQ. These results show that T. caulis does not relax blood vessels through the NO-cGMP pathway. 
Another vasodilator factors released from endothelial cells, PGI2, is produced by cyclooxygenase (COX) [28]. In the case of pre-treatment with indomethacin, a non-selective COX inhibitor, there was no significant difference in the vasodilation effect of T. caulis compared to the control. These results indicate that the vasodilatory effects of T. caulis are not related to PGI2. Taken together, T. caulis does not cause a relaxation effect through endothelium. Since it has been confirmed that T. caulis-induced vasodilation is endothelium-independent, we next investigated whether T. caulis causes a vascular relaxation by acting on smooth muscle directly. Vascular smooth muscle relaxation is initiated by decrease in intracellular Ca2+, which results from reduction of extracellular Ca2+ influx or Ca2+ releases from intracellular store (SR) [29]. Activation of K+ channel induces K+ efflux which causes membrane hyperpolarization. Membrane hyperpolarization contributes to closure of VDCC to block the influx of extracellular Ca2+, which induces relaxation of the smooth muscle cells [30]. In the previous study, it has been reported that K+ channel is involved in the vasodilatory effect of plant extract in mesenteric arteries [31]. In this study, we treated mesenteric arteries with various types of K+ channel blockers such as TEA, BaCl2, 4-AP and glibenclamide. Pre-treatment with K+ channel blockers did not affect the vasodilatory effect of T. caulis. These results indicate that the vasodilatory effect of T. caulis was not induced by the activation of K+ channels. Next, we examined whether T. caulis caused relaxation through reduction of Ca2+ influx. Because the blocking the Ca2+ channels did not induce contraction of the arteries, the relaxation effect failed to be tested in the presence of Ca2+ channel blockers. Thus, we used an alternative method to test the effect of T. caulis. The mesenteric arteries were incubated in the Ca2+ free K-H solution containing CPA, which is the state that intracellular Ca2+ is depleted. Then, the 70 mM K+ is administered to enable the opening of VDCC. The cumulative addition of Ca2+ induced contractile responses in mesenteric arteries, which is reduced by administration of T. caulis. To confirm that VDCC is inhibited by treatment of T. caulis extract, we used VDCC activator, BAY K8644. Since treatment of BAY K8644 alone did not cause stable contraction, arteries were incubated with K-H solution containing 15 mM of K+ which create an environment where VDCC could be opened. T. caulis extract concentration dependently induced vascular relaxation in the arteries pre-contracted with BAY K8644 (Figure 6B). These results suggest that extracellular Ca2+ influx was inhibited by treatment of T. caulis extract. Although we discovered vasodilatory effect of T. caulis, further study was required to confirm whether single active compound of T. caulis extract also induces vascular relaxation in mesenteric arteries. Among the several compounds identified based on gas chromatography–mass spectrometry (GC/MS), we the tested effect of 4-hydroxy-3-methoxybenzaldehyde (vanillin), butyric acid and furfural on the mesenteric arteries. We found that butyric acid did not induce significant relaxation responses in the arteries (Figure S2 of supplementary materials) and only a high concentration of furfural could exert a vasodilatory effect on the mesenteric arteries (Figure S3 of supplementary materials). Interestingly, vanillin has a potent vasodilatory effect, such as T. 
caulis extract (Figure 7), on mesenteric arteries. This result is in accordance with a previous study which reported that vanillin induced relaxation in porcine coronary and basilar arteries [32]. Vanillin is a small molecule that is commonly used as flavoring agents or as additives by the food and cosmetics industries. It is considered that vanillin has various biological functions such as anti-inflammatory, antioxidative and neuroprotective functions [33,34]. In the present study, vanillin induced vascular relaxation in a concentration-dependent manner in mesenteric arteries pre-contracted with PE and high K+. These results support our findings that T. caulis induces vasodilation in mesenteric resistance arteries. Plants contain a variety of metabolites including polyphenols, catechins, flavonoids, alkaloid and many volatile components [35]. Some of these metabolites have been suggested to have cardio-protective and antihypertensive effects [16]. The present study suggests the potential use of T. caulis extracts as antihypertensive agents by showing that the T. caulis extract significantly relaxes resistance arteries. 4. Materials and Methods: 4.1. Animals and Tissue Preparation All experiments were performed according to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH publication No. 85-23, 2011) and were approved by the Ethics Committee and the Institutional Animal Care and Use Committee of Yonsei University, College of Medicine (Approval number: 2021-0058). In this experiment, 8–11-week-old male Sprague Dawley rats were used. After being sacrificed, the mesenteric arteries were rapidly dissected and placed in an ice-cold Krebs–Henseleit (K-H) solution (composition (mM): NaCl 119, KCl 4.6, MgSO4 1.2, KH2PO4 1.2, CaCl2 2.5, NaHCO3 25 and glucose 11.1) bubbled with 95% O2 and 5% CO2. Adipose and connective tissue were removed from the mesenteric arteries using a microscope (model SZ-40, Olympus, Shinjuku, Tokyo, Japan). The 2nd branches of mesenteric arteries (250–300 μm) were cut into 2–3 mm-long sections and used in this experiment. If necessary, endothelium was removed by gently rubbing using thin forceps. All experiments were performed according to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH publication No. 85-23, 2011) and were approved by the Ethics Committee and the Institutional Animal Care and Use Committee of Yonsei University, College of Medicine (Approval number: 2021-0058). In this experiment, 8–11-week-old male Sprague Dawley rats were used. After being sacrificed, the mesenteric arteries were rapidly dissected and placed in an ice-cold Krebs–Henseleit (K-H) solution (composition (mM): NaCl 119, KCl 4.6, MgSO4 1.2, KH2PO4 1.2, CaCl2 2.5, NaHCO3 25 and glucose 11.1) bubbled with 95% O2 and 5% CO2. Adipose and connective tissue were removed from the mesenteric arteries using a microscope (model SZ-40, Olympus, Shinjuku, Tokyo, Japan). The 2nd branches of mesenteric arteries (250–300 μm) were cut into 2–3 mm-long sections and used in this experiment. If necessary, endothelium was removed by gently rubbing using thin forceps. 4.2. Preparation of T. caulis Extract The plant extract (CW01-037) used in this research was obtained from the Korea Plant Extract Bank at the Korea Research Institute of Bioscience and Biotechnology (Daejeon, Korea). 
The plant (103 g) dried in the shade and powdered was added to 1 L of distilled water and heat-extracted for 150 min at 100 °C in an extractor (DW-290, DAEWOONG ELECTRONIC APPIIANCE). After filtration and drying under freeze drying using a freeze dryer (Clean Vac 8, HANIL SCIENCE Co., Ltd.), the yield of the T. caulis extract was 8.3% (8.55 g) of the plant powder and was dissolved in 5% of dimethyl sulfoxide (DMSO). The plant extract (CW01-037) used in this research was obtained from the Korea Plant Extract Bank at the Korea Research Institute of Bioscience and Biotechnology (Daejeon, Korea). The plant (103 g) dried in the shade and powdered was added to 1 L of distilled water and heat-extracted for 150 min at 100 °C in an extractor (DW-290, DAEWOONG ELECTRONIC APPIIANCE). After filtration and drying under freeze drying using a freeze dryer (Clean Vac 8, HANIL SCIENCE Co., Ltd.), the yield of the T. caulis extract was 8.3% (8.55 g) of the plant powder and was dissolved in 5% of dimethyl sulfoxide (DMSO). 4.3. Measurement of Isometric Tension in Mesenteric Arteries Isolated segments were mounted in wire myography (model 620M, Danish Myotechnology, Aarhus, Denmark) for the recording of isometric tension. Arteries were bathed in 37 °C K-H solution, constantly bubbled with 95% O2 and 5% CO2. Vessels were equilibrated for 20 min and stretched to their optimal resting tension ~4 mN. Contractility of the vessel was evaluated by incubating KCl (70 mM) 3 times. The response was recorded by stabilizing the vessel by contracting the arteries to KCl (70 mM) or PE (5 μM), followed by a cumulative addition of extract (5–250 μg/mL) or vehicle (DMSO, 0.00025–0.125%). To investigate the mechanism of vascular relaxation of the aqueous extract of T. caulis, L-NNA, indomethacin, ODQ, TEA, BaCl2, glibenclamide or 4-AP were pre-treated for 20 min, and then the relaxation response of the T. caulis (250 μg/mL) to phenylephrine (5 μM) contraction was recorded. An experiment was conducted to determine the effect of Ca2+ on vascular relaxation when the T. caulis (250 μg/mL) is treated. CPA (5 μM) was treated in Ca2+-free solution to remove both intracellular and extracellular calcium. After replacing the solution with Ca2+-free K-H solution containing 70 mM of KCl and CPA to Ca2+-free K-H solution containing 70 mM of KCl, changes in contraction were recorded by increasing the concentration of CaCl2 on arteries. The extract was then pre-treated in the same arteries for 20 min and the changes in contractility in the same experiment were compared with those before pre-treatment with the T. caulis. The CaCl2-induced contraction was calculated as the percentage of maximum contraction recorded from the KCl contraction. In addition, some arteries were pre-contracted by BAY K8644 (30 nM) in K-H solution containing 15 mM KCl to investigate T. caulis-induced relaxation. Isolated segments were mounted in wire myography (model 620M, Danish Myotechnology, Aarhus, Denmark) for the recording of isometric tension. Arteries were bathed in 37 °C K-H solution, constantly bubbled with 95% O2 and 5% CO2. Vessels were equilibrated for 20 min and stretched to their optimal resting tension ~4 mN. Contractility of the vessel was evaluated by incubating KCl (70 mM) 3 times. The response was recorded by stabilizing the vessel by contracting the arteries to KCl (70 mM) or PE (5 μM), followed by a cumulative addition of extract (5–250 μg/mL) or vehicle (DMSO, 0.00025–0.125%). 
4.4. Chemicals and Reagents
Phenylephrine hydrochloride (PE), ACh, L-NNA, ODQ, TEA, BaCl2, 4-AP, butyric acid, furfural and vanillin were purchased from Sigma-Aldrich (St. Louis, MO, USA). Indomethacin was obtained from Calbiochem (Darmstadt, Germany). Glibenclamide was purchased from Tocris Bioscience (Bristol, UK). CPA was obtained from Enzo Life Sciences (Farmingdale, NY, USA).

4.5. Statistical Analysis
All values are expressed as mean ± standard deviation. One-way or two-way ANOVA was used to compare groups, as appropriate; when the ANOVA was statistically significant, comparisons between specified groups were performed with two-tailed Student's t-tests. For all experiments, the n-values denote the number of vessels, each derived from a different animal. Values of p < 0.05 were considered statistically significant.
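As a hedged sketch of the workflow just described (one-way ANOVA followed by pairwise two-tailed t-tests only when the ANOVA is significant), using SciPy with made-up placeholder data:

```python
# Sketch of the reported statistical workflow; the data arrays are
# hypothetical relaxation (%) values, not from the study.
from scipy import stats

control = [72.1, 68.4, 70.9, 74.2]
treated = [41.3, 38.7, 45.0, 40.2]
vehicle = [69.8, 71.5, 67.2, 73.0]

f_stat, p_anova = stats.f_oneway(control, treated, vehicle)
if p_anova < 0.05:
    # Two-tailed Student's t-test between specified groups
    t_stat, p_pair = stats.ttest_ind(control, treated)
    print(f"ANOVA p={p_anova:.4f}, control vs treated p={p_pair:.4f}")
```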
5. Conclusions
In the present study, we found for the first time that T. caulis extract induces vascular relaxation in rat mesenteric resistance arteries. The vasodilatory effect of T. caulis was endothelium-independent, and inhibition of extracellular Ca2+ influx contributed to the extract-induced relaxation. Our results suggest that T. caulis could be a valuable herbal resource for future research and for the treatment of cardiovascular diseases.
Background: Trachelospermi caulis (T. caulis) has been used as a traditional herbal medicine in Asian countries. Although T. caulis is known to have beneficial effects, sufficient research data on its cardiovascular effects are not available. In this study, we investigated whether T. caulis extract has vascular effects in rat resistance arteries. Methods: To examine whether T. caulis extract affects vascular reactivity, we measured the isometric tension of rat mesenteric resistance arteries using a multi-wire myograph system. T. caulis extract was administered after the arteries were pre-contracted with high K+ (70 mM) or phenylephrine (5 µM). Vanillin, a single active component of T. caulis, was also applied to mesenteric arteries. Results: T. caulis extract caused vascular relaxation in a concentration-dependent, endothelium-independent manner. To further identify the mechanism, we incubated the arteries in Ca2+-free solution containing high K+, followed by cumulative administration of CaCl2 (0.01-2.0 mM) with or without T. caulis extract (250 µg/mL). Treatment with T. caulis extract decreased the contractile responses induced by the addition of Ca2+, suggesting that extracellular Ca2+ influx was inhibited by the extract. Moreover, vanillin, an active compound of T. caulis extract, also induced vasodilation in mesenteric resistance arteries. Conclusions: T. caulis extract and its active compound, vanillin, concentration-dependently induced vascular relaxation in mesenteric resistance arteries. These results suggest that the administration of T. caulis extract could help decrease blood pressure.
1. Introduction: Cardiovascular disease (CVD) is one of the leading causes of morbidity and mortality worldwide. CVD was responsible for 17.3 million deaths worldwide in 2017, a figure expected to rise to more than 23.6 million by 2030 [1,2]. CVD comprises hypertension, heart failure, stroke and a number of other vascular and cardiac disorders [3]. Hypertension is defined as a systolic blood pressure (SBP) of 130 mmHg or more or a diastolic blood pressure (DBP) of 90 mmHg or more [4]. The arterial system comprises capillaries, arterioles, small arteries, medium-sized arteries and large conduit arteries. Resistance arteries, vessels with lumen diameters of <400 μm, are a major component of the cardiovascular system and regulate blood flow and peripheral resistance [5,6]. An increase in vascular resistance caused by a decrease in the internal lumen diameter of blood vessels is a major cause of elevated blood pressure [7,8]. Conversely, relaxation of blood vessels increases their lumen diameter, resulting in an immediate decrease in blood pressure. The relaxation responses of arteries are not only endothelium-dependent but also endothelium-independent. Evidence has been accumulating that endothelium-independent vasodilation is impaired in various vascular beds, such as the coronary, brachial, forearm and peripheral conduit arteries, in patients with cardiovascular diseases and cardiovascular risk factors [9]. In particular, hypertension and diabetes mellitus have been shown to be associated with impaired endothelium-independent vasodilation [10,11]. Thus, it is as important to discover substances that induce endothelium-independent vasodilation as it is to find those that cause endothelium-dependent vasodilation. Although synthetic medications have been widely used to treat patients at various stages of CVD, including hypertension, their adverse effects remain a challenge. Because the treatment of hypertension continues over a long period of time, the use of synthetic drugs may result in drug resistance and adverse effects [12]. In addition to synthetic drugs, the use of natural products to treat hypertension is increasing widely [13,14]. Recently, the use of plant extracts has grown steadily because of their low toxicity and well-established use in therapy [15]. Various plants used in traditional medicine have been studied for their potential in the treatment of cardiovascular disease [16]. The vasodilatory effects of medicinal plants have been extensively explored over the last two decades and have proven potentially effective for the treatment of CVD in clinics all over the world [17,18,19,20]. Trachelospermi caulis (T. caulis), the dried leafy stem of Trachelospermum asiaticum var. intermedium, belongs to the Apocynaceae family [21]. T. caulis has been used in Asian countries as a traditional herbal medicine to attenuate fever and pain of the knees and loins because of its antipyretic and antinociceptive effects [22]. In oriental medicine, T. caulis is well known to lower blood pressure [23]. Although T. caulis has been suggested to provide beneficial effects, its effects and mechanism of action in the cardiovascular system are unknown. Therefore, the aim of our research was to explore the effect of T. caulis extract on vascular function in resistance arteries and to elucidate the underlying mechanism. 5. Conclusions: In the present study, for the first time, we discovered that T.
caulis extract induced vascular relaxation in rat mesenteric resistance arteries. The vasodilatory effect of T. caulis was endothelium-independent, and inhibition of extracellular Ca2+ influx contributed to the extract-induced relaxation. Our results suggest that T. caulis could be a valuable herbal resource for future research and for the treatment of cardiovascular diseases.
Background: Trachelospermi caulis (T. caulis) has been used as a traditional herbal medicine in Asian countries. Although T. caulis is known to have beneficial effects, sufficient research data on its cardiovascular effects are not available. In this study, we investigated whether T. caulis extract has vascular effects in rat resistance arteries. Methods: To examine whether T. caulis extract affects vascular reactivity, we measured the isometric tension of rat mesenteric resistance arteries using a multi-wire myograph system. T. caulis extract was administered after the arteries were pre-contracted with high K+ (70 mM) or phenylephrine (5 µM). Vanillin, a single active component of T. caulis, was also applied to mesenteric arteries. Results: T. caulis extract caused vascular relaxation in a concentration-dependent, endothelium-independent manner. To further identify the mechanism, we incubated the arteries in Ca2+-free solution containing high K+, followed by cumulative administration of CaCl2 (0.01-2.0 mM) with or without T. caulis extract (250 µg/mL). Treatment with T. caulis extract decreased the contractile responses induced by the addition of Ca2+, suggesting that extracellular Ca2+ influx was inhibited by the extract. Moreover, vanillin, an active compound of T. caulis extract, also induced vasodilation in mesenteric resistance arteries. Conclusions: T. caulis extract and its active compound, vanillin, concentration-dependently induced vascular relaxation in mesenteric resistance arteries. These results suggest that the administration of T. caulis extract could help decrease blood pressure.
8,464
299
[ 205, 76, 141, 136, 156, 84, 325, 1842, 207, 131, 374, 81, 108 ]
17
[ "caulis", "arteries", "induced", "endothelium", "relaxation", "mesenteric", "pre", "mm", "extract", "mesenteric arteries" ]
[ "conduit arteries resistance", "vascular resistance caused", "resistance arteries vessels", "cardiovascular disease cvd", "cvd consists hypertension" ]
null
[CONTENT] Trachelospermi caulis | vasodilation | mesenteric resistance arteries | vanillin | relaxation | Ca2+ [SUMMARY]
null
[CONTENT] Trachelospermi caulis | vasodilation | mesenteric resistance arteries | vanillin | relaxation | Ca2+ [SUMMARY]
[CONTENT] Trachelospermi caulis | vasodilation | mesenteric resistance arteries | vanillin | relaxation | Ca2+ [SUMMARY]
[CONTENT] Trachelospermi caulis | vasodilation | mesenteric resistance arteries | vanillin | relaxation | Ca2+ [SUMMARY]
[CONTENT] Trachelospermi caulis | vasodilation | mesenteric resistance arteries | vanillin | relaxation | Ca2+ [SUMMARY]
[CONTENT] Animals | Endothelium, Vascular | Mesenteric Arteries | Plant Extracts | Rats | Vasodilation | Vasodilator Agents [SUMMARY]
null
[CONTENT] Animals | Endothelium, Vascular | Mesenteric Arteries | Plant Extracts | Rats | Vasodilation | Vasodilator Agents [SUMMARY]
[CONTENT] Animals | Endothelium, Vascular | Mesenteric Arteries | Plant Extracts | Rats | Vasodilation | Vasodilator Agents [SUMMARY]
[CONTENT] Animals | Endothelium, Vascular | Mesenteric Arteries | Plant Extracts | Rats | Vasodilation | Vasodilator Agents [SUMMARY]
[CONTENT] Animals | Endothelium, Vascular | Mesenteric Arteries | Plant Extracts | Rats | Vasodilation | Vasodilator Agents [SUMMARY]
[CONTENT] conduit arteries resistance | vascular resistance caused | resistance arteries vessels | cardiovascular disease cvd | cvd consists hypertension [SUMMARY]
null
[CONTENT] conduit arteries resistance | vascular resistance caused | resistance arteries vessels | cardiovascular disease cvd | cvd consists hypertension [SUMMARY]
[CONTENT] conduit arteries resistance | vascular resistance caused | resistance arteries vessels | cardiovascular disease cvd | cvd consists hypertension [SUMMARY]
[CONTENT] conduit arteries resistance | vascular resistance caused | resistance arteries vessels | cardiovascular disease cvd | cvd consists hypertension [SUMMARY]
[CONTENT] conduit arteries resistance | vascular resistance caused | resistance arteries vessels | cardiovascular disease cvd | cvd consists hypertension [SUMMARY]
[CONTENT] caulis | arteries | induced | endothelium | relaxation | mesenteric | pre | mm | extract | mesenteric arteries [SUMMARY]
null
[CONTENT] caulis | arteries | induced | endothelium | relaxation | mesenteric | pre | mm | extract | mesenteric arteries [SUMMARY]
[CONTENT] caulis | arteries | induced | endothelium | relaxation | mesenteric | pre | mm | extract | mesenteric arteries [SUMMARY]
[CONTENT] caulis | arteries | induced | endothelium | relaxation | mesenteric | pre | mm | extract | mesenteric arteries [SUMMARY]
[CONTENT] caulis | arteries | induced | endothelium | relaxation | mesenteric | pre | mm | extract | mesenteric arteries [SUMMARY]
[CONTENT] blood | hypertension | pressure | cvd | blood pressure | cardiovascular | use | effects | resistance | lumen [SUMMARY]
null
[CONTENT] arteries | caulis | endothelium | figure | pre | contracted | arteries pre | pre contracted | arteries pre contracted | induced [SUMMARY]
[CONTENT] extract induced vascular relaxation | caulis extract induced vascular | extract induced vascular | caulis | extract induced | caulis extract induced | induced vascular relaxation | induced vascular | vascular relaxation | induced vascular relaxation rat [SUMMARY]
[CONTENT] caulis | arteries | endothelium | extract | induced | figure | ca2 | pre | relaxation | mesenteric [SUMMARY]
[CONTENT] caulis | arteries | endothelium | extract | induced | figure | ca2 | pre | relaxation | mesenteric [SUMMARY]
[CONTENT] Trachelospermi | T. | Asian ||| T. | T. ||| T. [SUMMARY]
null
[CONTENT] ||| K+ | 0.01-2.0 | T. | 250 ||| T. | T. ||| T. | vanillin [SUMMARY]
[CONTENT] vanillin ||| T. [SUMMARY]
[CONTENT] Trachelospermi | T. | Asian ||| T. | T. ||| T. ||| T. ||| 70 mM | 5 ||| Vanillin | T. ||| ||| ||| K+ | 0.01-2.0 | T. | 250 ||| T. | T. ||| T. | vanillin ||| vanillin ||| T. [SUMMARY]
[CONTENT] Trachelospermi | T. | Asian ||| T. | T. ||| T. ||| T. ||| 70 mM | 5 ||| Vanillin | T. ||| ||| ||| K+ | 0.01-2.0 | T. | 250 ||| T. | T. ||| T. | vanillin ||| vanillin ||| T. [SUMMARY]
Are COPD Prescription Patterns Aligned with Guidelines? Evidence from a Canadian Population-Based Study.
33790551
In contemporary guidelines for the management of Chronic Obstructive Pulmonary Disease (COPD), the history of acute exacerbations plays an important role in the choice of long-term inhaled therapies. This study aimed at evaluating population-level trends of filled inhaled prescriptions over the time course of COPD and their relation to the history of exacerbations.
BACKGROUND
We used administrative health databases in British Columbia, Canada (1997-2015), to create a retrospective incident cohort of individuals with diagnosed COPD. We quantified long-acting inhaled medication prescriptions within each year of follow-up and documented their trend over the time course of COPD. Using generalized linear models, we investigated the association between the frequent exacerbator status (≥2 moderate or ≥1 severe exacerbation(s) in the previous 12 months) and filling a prescription after a physician visit.
METHODS
132,004 COPD patients were included (mean age 68.6, 49.2% female). The most common medication class during the first year of diagnosis was inhaled corticosteroids (ICS, used by 49.9%), followed by long-acting beta-2 adrenoreceptor agonists (LABA, 31.8%). Long-acting muscarinic receptor antagonists (LAMA) were the least commonly prescribed (10.4%). ICS remained the most common prescription throughout follow-up, being used by approximately 50% of patients during each year. 39.0% of patients received combination inhaled therapies in their first year of diagnosis, with ICS+LABA being the most common (30.7%). The association with exacerbation history was the most pronounced for triple therapy with an odds ratio (OR) of 2.68 for general practitioners and 2.02 for specialists (p<0.001 for both). Such associations were generally stronger among GPs compared with specialists, with the exception of monotherapy with LABA or ICS.
RESULTS
We documented low utilization of monotherapies (specifically LAMA) and high utilization of combination therapies (particularly ICS-containing). Specialists were less likely to consider exacerbation history in the choice of inhaled therapies compared with GPs.
CONCLUSION
[ "Administration, Inhalation", "Adrenal Cortex Hormones", "Adrenergic beta-2 Receptor Agonists", "Aged", "British Columbia", "Bronchodilator Agents", "Drug Prescriptions", "Drug Therapy, Combination", "Female", "Humans", "Male", "Muscarinic Antagonists", "Pulmonary Disease, Chronic Obstructive", "Retrospective Studies" ]
8006812
Introduction
Chronic obstructive pulmonary disease (COPD) is a common disease of the airways characterized by progressive airflow limitation and periods of intensified disease activity referred to as acute exacerbations (or lung attacks).1 COPD is among the leading causes of disease burden in terms of lost disability-adjusted life years (DALYs).2,3 In Canada, exacerbations of COPD are the leading cause of medical hospital admissions and a major cause of morbidity and mortality.4 In all stages of COPD, risk factor modification (eg, smoking cessation) has a high potential for modifying the course of the disease. However, for most patients, pharmacotherapy is an essential component of disease management.5 In contemporary COPD management guidelines, exacerbations are an important determinant of initiation and choice of drug therapy. For example, in the current Canadian Thoracic Society (CTS) guidelines, treatment choice is based on the dichotomization of patients into frequent or non-frequent exacerbators,6 with frequent exacerbator status defined based on 12-month history (≥2 moderate or ≥1 severe exacerbation(s)). In general, treatment is stepped up (intensified) if the patient becomes a frequent exacerbator, and can be stepped down if the patient is a non-frequent exacerbator. Other disease management strategies such as the influential Global Initiative for Chronic Obstructive Lung Disease (GOLD)5 and the American Thoracic Society’s guidelines7 adopt a similar approach. Adherence to guideline recommendations on pharmacotherapies is generally low for chronic diseases.8 This might be due to the lack of awareness about guidelines by care providers, or low acceptance of, and adherence to, care providers’ recommendations by patients. Evaluating the level of adherence to guidelines in “real world” settings enables the identification of preventable gaps in care. Population-based health databases that capture treatment patterns at the community level without the risk of Hawthorne effect (individuals modifying their behavior as a reaction to being observed) or recall bias are a unique resource for this purpose. The aims of the current study were to use such databases to describe the trend of filled prescriptions for inhaled medication throughout the time course of COPD (primary objective), and to evaluate the association between prescribed inhaled therapies with the frequent-exacerbator definition (secondary objective).
null
null
null
null
Conclusion
Taken together, these results suggest that the utilization of inhaled therapies for COPD is not aligned with current Canadian or international guidelines. This is most clearly demonstrated in the low utilization of LAMA and high utilization of ICS in newly diagnosed patients, as well as high utilization of combination therapies as first-line. An interesting finding of our study was that, compared to treatment recommendations by specialists, treatment recommendations made by GPs were potentially more reflective of the exacerbation history, but this might well reflect the more complicated clinical scenarios that specialists manage. Overall, improving adherence to guideline recommendations should be promoted across all specialty groups. Our results also suggest that treatments are not stepped down as recommended by guidelines, potentially resulting in significant levels of overtreatment.
[ "Methods", "Data Sources", "Study Population", "Outcomes", "Statistical Analysis", "Results", "Trends of Filled Prescriptions for Inhaled Medications Over the Time Course of COPD", "Association Between Filled Prescriptions After an Outpatient Visit and Exacerbation History", "Discussion", "Conclusion" ]
[ "This study was approved by The University of British Columbia’s human ethics board (#H13-00684). All inferences, opinions, and conclusions drawn in this research are those of the authors and do not reflect the opinions or policies of the Data Steward(s). The health databases complied with the Freedom of Information and Protection of Privacy Act.\nData Sources We used administrative health databases of British Columbia (a Canadian Province with a population of 5.07M as of 20199) from January 1997 to December 2015. The administrative needs of the Province’s public healthcare system have resulted in the accumulation of healthcare encounter data for all legal residents of the Province. The data have high reliability with a very low rate of missing values.10 We had access to demographics,11 hospitalization,12 outpatient services,13 and filled prescriptions14 records (including drug dose, quantity and day supply). Hospitalization and outpatient services records contain diagnostic International Classification of Diseases (ICD) codes. Each filled prescription record contains a unique identifier for the drug, as well as the dose and duration of supply. These data have frequently been used to quantity healthcare resource use patterns and their association with outcomes.15,16\nWe used administrative health databases of British Columbia (a Canadian Province with a population of 5.07M as of 20199) from January 1997 to December 2015. The administrative needs of the Province’s public healthcare system have resulted in the accumulation of healthcare encounter data for all legal residents of the Province. The data have high reliability with a very low rate of missing values.10 We had access to demographics,11 hospitalization,12 outpatient services,13 and filled prescriptions14 records (including drug dose, quantity and day supply). Hospitalization and outpatient services records contain diagnostic International Classification of Diseases (ICD) codes. Each filled prescription record contains a unique identifier for the drug, as well as the dose and duration of supply. These data have frequently been used to quantity healthcare resource use patterns and their association with outcomes.15,16\nStudy Population We created an incident cohort of individuals with diagnosed COPD using a previously validated case definition.17 In this definition, a patient is categorized as having COPD if during any 24-month rolling window, they had ≥1 hospitalization or ≥2 outpatient visits with COPD as the primary diagnosis (ICD-9: 491, 492, 493.2, 496; ICD-10: J41-J44). In a previous review study, this definition had a sensitivity of 65.5% and a specificity of 91.5% (which is likely to be even higher in our sample given that individuals had to have COPD-related prescription records to contribute to the results).17 The lower bound for the age criteria was 35, similar to previous studies18,19 to ensure that asthma patients (who might have similar health records as patients with COPD) were not over-represented. Patients entered the cohort on the date of their first COPD-related outpatient visit, which was considered the entry date, marking the beginning of follow-up. To ensure that an incident cohort of COPD patients is being selected, only those who were captured in the data for at least five years before their entry date were included. 
"We created an incident cohort of individuals with diagnosed COPD using a previously validated case definition.17 In this definition, a patient is categorized as having COPD if, during any 24-month rolling window, they had ≥1 hospitalization or ≥2 outpatient visits with COPD as the primary diagnosis (ICD-9: 491, 492, 493.2, 496; ICD-10: J41-J44). In a previous review study, this definition had a sensitivity of 65.5% and a specificity of 91.5% (the latter is likely to be even higher in our sample, given that individuals had to have COPD-related prescription records to contribute to the results).17 The lower bound for the age criterion was 35, similar to previous studies,18,19 to ensure that asthma patients (who might have health records similar to those of patients with COPD) were not over-represented. Patients entered the cohort on the date of their first COPD-related outpatient visit, which was considered the entry date, marking the beginning of follow-up. To ensure that an incident cohort of COPD patients was selected, only those who were captured in the data for at least five years before their entry date were included. Patients were followed from their entry date to the date of their last resource use of any kind, death, end of registration in the database, or December 31st, 2015 (the administrative end of the study), whichever occurred first.",
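The rolling-window case definition lends itself to a simple per-patient check. A minimal sketch, assuming sorted per-patient lists of encounter dates (the record structures are hypothetical, not the actual database layout):

```python
# Sketch of the validated COPD case definition as described: within any
# 24-month rolling window, >=1 hospitalization or >=2 outpatient visits
# with COPD as the primary diagnosis.
from datetime import timedelta

WINDOW = timedelta(days=730)  # 24-month rolling window

def meets_copd_definition(hosp_dates, outpatient_dates):
    """hosp_dates / outpatient_dates: sorted lists of datetime.date for
    encounters with a primary COPD diagnosis (ICD-9: 491, 492, 493.2,
    496; ICD-10: J41-J44)."""
    if hosp_dates:  # any 24-month window containing >=1 hospitalization qualifies
        return True
    # otherwise, look for any pair of outpatient visits within 24 months
    for first, second in zip(outpatient_dates, outpatient_dates[1:]):
        if second - first <= WINDOW:
            return True
    return False
```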
"We evaluated two outcomes for the primary objective (illustrating medication trends over the time course of COPD). The first was filled prescriptions for long-acting inhaled medications during the time course of COPD, quantified in terms of the proportion of patients who filled prescriptions for a given medication class, as well as the average (per patient) dose-adjusted number of canisters dispensed. The following medication classes were evaluated: long-acting muscarinic receptor antagonists (LAMA), long-acting beta-2 adrenoreceptor agonists (LABA), and inhaled corticosteroids (ICS). We used a master list of COPD-related inhaled medications identified by their unique identifiers, linked with dose-equivalency tables (Supplementary Material – Section 1). The second outcome was filled prescriptions for combination therapies. The following combination therapies were considered: LAMA+LABA, LAMA+ICS, LABA+ICS, and LAMA+LABA+ICS. This outcome was quantified in terms of 1) the proportion of patients who, at least for part of the year, received inhaled therapies of different classes, and 2) the medication possession ratio (MPR) for combination therapies. MPR was determined by counting the number of days within a follow-up year on which the patient was within the duration of supply of filled prescriptions for more than one class of medication (in a single inhaler or separate inhalers) and dividing it by the length of the follow-up period (365 days). If different classes of therapies had been dispensed in separate inhalers, they counted towards combination therapy if their durations of supply overlapped. For this (primary) objective, we included all prescription records, regardless of whether general practitioners (GPs) or specialists prescribed them. The outcome for the secondary objective (the association between frequent-exacerbator status and inhaled medication prescription) was a filled prescription within 14 days after a COPD-related outpatient visit to a physician. We limited the visits to those made to GPs, internal medicine specialists, and respirologists (the latter two are collectively referred to as specialists), because the clinical experts on our team believed that other specialties rarely initiate or change COPD-related medications and mostly refer the patient to their treating physician. If multiple visits occurred within 14 days of each other for the same patient, only the last visit was considered. At the time of a physician visit, a patient was classified as a frequent or non-frequent exacerbator based on their exacerbation history in the previous 12 months. We identified moderate and severe exacerbations using a previously published algorithm.20 A moderate exacerbation was defined as filled prescriptions for oral corticosteroids or antibiotics within seven days after an outpatient visit with a main diagnostic (ICD) code of COPD (list of medications in Supplementary Material – Section 1). A severe exacerbation was defined as a hospital admission with a main discharge diagnosis of COPD.",
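The MPR computation described above amounts to counting covered days. A minimal day-level sketch, assuming a simplified dispensing record of (class, start day, days supplied) tuples, with day 0 marking the start of the follow-up year (these structures are illustrative, not the actual database layout):

```python
# Sketch of the medication possession ratio (MPR) for combination
# therapy: count the days on which the patient was simultaneously within
# the supply of >=2 medication classes, divided by 365.

def combination_mpr(dispensings, year_len=365):
    covered = 0
    for day in range(year_len):
        classes_today = {cls for cls, start, supply in dispensings
                         if start <= day < start + supply}
        if len(classes_today) >= 2:  # overlapping supply of different classes
            covered += 1
    return covered / year_len

records = [("ICS", 0, 90), ("LABA", 30, 90)]  # 60 overlapping days
print(combination_mpr(records))               # 0.164...
```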
"For the primary objective, we divided the follow-up time of each patient into adjacent 12-month periods, starting from the entry date (if the last period was <12 months, it was ignored). We plotted the proportion of COPD patients who received a given class of COPD medication as a function of years since COPD diagnosis. We also plotted the average number of dose-adjusted canisters per COPD patient during each follow-up period, separately for each inhaled medication class. To convert dispensed quantities to numbers of canisters, we used a specific formulation as the reference for each medication class, as follows: 1) salmeterol 50 mcg, 60 doses per canister for LABA; 2) beclomethasone 100 mcg, 200 doses per canister for ICS; and 3) tiotropium 18 mcg, 30 doses per canister for LAMA. To test for time trends, we used a generalized linear model (with logit link function and binomial distribution) with generalized estimating equations to account for the clustering of observation units (patient-years) within patients. Years since the entry date was the independent variable, whose coefficient captured the time trend. We did not control for any other variable in this analysis, given that the primary interest was examining temporal trends. For the secondary objective, we used a generalized linear model (with the same link function and distribution as above) to associate filled prescriptions after physician visits with exacerbation history. This model was fitted separately for each medication class, and for GPs versus specialists. Again, we used generalized estimating equations to account for the clustered nature of the data (multiple physician visits per patient). We controlled for the following variables: sex, age on the date of visit, and socio-economic status on the date of visit (estimated based on neighbourhood income quantiles). SAS Enterprise Guide (version 7.1, Cary, NC, USA) was used for all analyses. Two-tailed p-values were considered significant at the 0.05 level. We performed a sensitivity analysis that explored whether our findings were affected by the inadvertent inclusion of patients with asthma but without fixed airway obstruction. For this analysis, we excluded all individuals who met a previously validated case definition of asthma21 at any time during the study period (before or after COPD diagnosis), and reported the proportion of each medication class prescribed after a physician visit, separately by frequent exacerbator status.",
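The study used SAS, but as a hedged sketch of this regression (a logistic model fit with generalized estimating equations and an exchangeable working correlation to handle repeated visits per patient), here is an equivalent specification in Python's statsmodels on synthetic data; all column names are hypothetical, not from the study's datasets:

```python
# Sketch of a GEE-fitted logistic model: filled prescription ~ frequent
# exacerbator status, adjusted for sex, age, and socio-economic status,
# clustered by patient.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "patient_id": rng.integers(0, 100, n),      # repeated visits per patient
    "frequent_exacerbator": rng.integers(0, 2, n),
    "sex": rng.integers(0, 2, n),
    "age": rng.normal(68, 12, n),
    "ses": rng.integers(1, 6, n),               # income quantile 1..5
})
logit = -2 + 1.0 * df.frequent_exacerbator      # synthetic effect
df["filled"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

result = smf.gee(
    "filled ~ frequent_exacerbator + sex + age + C(ses)",
    groups="patient_id", data=df,
    family=sm.families.Binomial(),              # logit link by default
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(np.exp(result.params))                    # exponentiate for odds ratios
```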
"In our study, 132,004 individuals satisfied the case definition of COPD. Of these, 64,942 patients (49.2%) were female and the mean age at entry was 68.6 (SD=12.5) years. These patients contributed 707,575 years of follow-up (mean follow-up 5.4 years, SD=3.5). Table 1 provides basic sociodemographic characteristics of these patients.

Table 1. Characteristics of Patients and the Visits in the Cohort

Patient characteristics at index date (study cohort, n=132,004):
  Female sex: 64,942 (49.2%)
  Age, mean (SD): 68.6 (12.5)
  Years from COPD diagnosis, mean (SD): 5.4 (3.5)
  Socioeconomic status:
    Quantile 1: 34,699 (26.3%)
    Quantile 2: 28,116 (21.3%)
    Quantile 3: 24,630 (18.7%)
    Quantile 4: 22,500 (17.1%)
    Quantile 5: 19,545 (14.8%)
    Unknown: 2,514 (1.9%)

Visit characteristics for the sub-cohort of COPD patients alive in 2010 (visits followed by a prescription, n=260,164):
  Patient was a frequent exacerbator in the previous 12 months: 79,820 (30.7%)
  History of PFT before visit: 228,464 (87.8%)
  History of asthma diagnosis before visit: 142,457 (54.8%)
  COPD-related medications in the three months before the visit:
    ICS only: 28,624 (11.0%)
    LABA only: 2,039 (0.78%)
    LAMA only: 18,414 (7.08%)
    LAMA+LABA: 2,086 (0.8%)
    ICS+LABA: 80,205 (30.83%)
    ICS+LAMA: 4,861 (1.87%)
    ICS+LAMA+LABA: 71,152 (27.35%)

Note: Number of patients (and percent of cohort) are reported unless otherwise indicated.
Abbreviations: COPD, chronic obstructive pulmonary disease; PFT, pulmonary function test; ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists.",
"The trends are depicted in Figure 1. ICS was the most common medication class prescribed to patients in their first year of diagnosis (49.9%; panel A), followed by LABA (31.8%) and LAMA (10.4%). The percentage of patients who filled ICS-containing prescriptions per year remained stable (~50%) throughout the time course of COPD, whereas this percentage increased modestly for LABA and LAMA (by 7% and 9%, respectively, over 15 years). In terms of the quantity of dispensed medications, ICS was again the most prescribed ingredient (average of 2.9 canisters per patient per year over the 15-year period; panel B). The average number of canisters per year for LAMA, LABA, and ICS increased by 119%, 56%, and 40%, respectively, over 15 years.

Figure 1. Trends in the proportion of patients filling at least one prescription (A) and average dose-adjusted number of canisters (B) for major COPD inhaled therapies, during disease course. Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists.

The trends for combination therapies are illustrated in Figure 2. In the first year of diagnosis, 30.7% of patients received ICS+LABA (panel A), and on average 11.0% of COPD patient-time was covered by these combination therapies (panel B). The second most common combination therapy was triple therapy (LAMA+LABA+ICS), taken by 6.1% of patients in their first year of diagnosis and covering 2.0% of their time in the first follow-up year. Other patterns of combination therapy were not common. Among patients with 12–15 years of COPD history, on average 58.5% received combination therapies, with an MPR of 33.0%. Both the proportion of patients receiving combination therapies and the average MPR increased for ICS+LABA and triple therapy over the time course of COPD.

Figure 2. Trends in the proportion of patients on combination therapies (A) and average medication possession ratio (B) over the time course of COPD. Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists; ICS/LABA/LAMA, triple therapy.",
The percentage of patients who filled ICS-containing prescriptions per year remained the same (~50%) throughout the time course of COPD, whereas this percentage increased modestly for LABA and LAMA (by 7% and 9%, respectively, over 15 years). In terms of the quantity of dispensed medications, again ICS was the most prescribed ingredient (average of 2.9 canisters per patient per year over the 15-year period – panel B). The average number of canisters (per year) for LAMA, LABA, and ICS increased by 119%, 56%, and 40%, respectively, over 15 years.Figure 1Trends in the proportion of patients filling at least one prescription (A) and average dose-adjusted number of canisters (B) for major COPD inhaled therapies, during disease course.Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists.\nTrends in the proportion of patients filling at least one prescription (A) and average dose-adjusted number of canisters (B) for major COPD inhaled therapies, during disease course.\nThe trends for combination therapies are illustrated in Figure 2. In the first year of diagnosis, 30.7% of patients received ICS+LABA (panel A), and on average 11.0% of a COPD patient-times was covered by these combination therapies (panel B). The second most common combination therapy was triple therapy (LAMA+LABA+ICS), taken by 6.1% of patients in their first year of diagnosis, covering 2.0% of their time in the first follow-up year. Other patterns of combination therapies were not common. Among patients with 12–15 years of COPD history, on average 58.5% received combination therapies, with MPR of 33.0%. Both the proportion of patients receiving combination therapies and the average MPR increased for ICS+LABA and triple therapy over the time course of COPD.Figure 2Trends in the proportion of patients on combination therapies (A) and average medication possession ratio (B) over the time course of COPD.Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic agents; ICS/LABA/LAMA, triple therapy.\nTrends in the proportion of patients on combination therapies (A) and average medication possession ratio (B) over the time course of COPD.", "During follow-up, 260,164 eligible outpatient visits were recorded that were followed with a filled prescription for an inhaled medication. 85% of the outpatient visits were made to GPs. In 79,820 (30.7%) of visits, the patient was classified as frequent-exacerbator given their 12-month exacerbation history (Table 1).\nThe frequencies (by percentage) of different inhaled therapies prescribed for COPD patients, stratified by the frequent exacerbator status, are provided in Figure 3. For non-frequent exacerbators, the most common treatment was ICS+LABA (14.5%), followed by ICS alone (5.8%). 
Outpatient visits among frequent exacerbators were most commonly followed by filled prescriptions for ICS+LABA (18.9%) and triple therapy (11.6%).Figure 3Frequency of long-acting inhaled medication prescription in frequent (dark bars) and non-frequent (light bars) exacerbators.Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists.\nFrequency of long-acting inhaled medication prescription in frequent (dark bars) and non-frequent (light bars) exacerbators.\nFigure 4 provides the adjusted odds ratios (ORs) and 95% confidence intervals (CI) associating filled prescriptions with the frequent exacerbator status (compared to non-frequent exacerbators), separately for GPs and specialists. All associations were significant, except for two: LABA+LAMA therapy for specialists, and monotherapy with LABA for GPs. For both groups, triple therapy showed the strongest association with a positive exacerbation history (OR of 2.68 for GPs and 2.02 for specialists; p<0.0001 for both). The associations were generally stronger (further away than 1.00) for GPs, with the exception of monotherapies with ICS or LABA.Figure 4Forest plot of odds ratio (OR) and 95% confidence interval between frequent-exacerbator status and filled prescriptions for each medication type, separately for GP and specialist compared to non-frequent exacerbators.Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists; GP, general practitioner.\nForest plot of odds ratio (OR) and 95% confidence interval between frequent-exacerbator status and filled prescriptions for each medication type, separately for GP and specialist compared to non-frequent exacerbators.\nThe result of the sensitivity analysis is provided in Supplementary Material – Section 2, Supplementary Figure 1. The trend of medication prescription did not differ substantially from the main analysis, nor did the association between frequent exacerbator definition and prescription patterns. Again, ICS+LABA was the most common combination therapy after a physician visit, regardless of exacerbation history. Similar to the main analysis, the second most common therapy was monotherapy with ICS among non-frequent exacerbators and triple therapy among frequent exacerbators.", "Using 15 years of population-based data, we quantified the pattern of long-acting inhaled therapies in an incident cohort of patients with diagnosed COPD. We demonstrated that ICS was the most commonly prescribed medication class throughout the time course of the disease, with about 50% of patients having filled prescriptions for ICS during their first year of diagnosis. While this proportion remained roughly the same over the subsequent follow-up years, the average number of canisters of ICS per patient increased, indicating that the dosage of ICS increased over time among ICS users. The second most commonly used ingredient was LABA, received by a third of the cohort upon diagnosis. Further, we documented that 39.0% of patients received combination therapies in their first year of diagnosis, with ICS+LABA being the most common (30.7%), followed by triple therapy. In the Canadian guidelines, the recommended first-line therapy is monotherapy with LAMA or LABA, and ICS should only be used among individuals who do not respond to dual LABA+LAMA therapy. ICS can be started earlier in the course of the disease in patients with a history of asthma. 
However, as our sensitivity analysis indicates, the use of ICS was also common in COPD patients without any history of asthma. As such, these results demonstrate potentially significant departures from guideline recommendations, in particular the overuse of ICS and combination therapies.\nIn addition, our results revealed a difference between GPs and specialists in terms of considering exacerbation history in prescribing inhalers. The Canadian (and GOLD) guidelines generally recommend intensifying medications (eg, adding a second or third therapy) among patients who experience exacerbations in the previous 12 months. Following this recommendation should make intensive treatments (dual therapies and triple therapy) positively associated with exacerbation history. Our results suggest that GPs are more responsive in their treatment choices to the positive exacerbation history. On the other hand, following guidelines should cause monotherapies to have a negative association with the frequent exacerbator status, as no patient with frequent exacerbations in the previous year should be on monotherapy. However, there was only one association for monotherapies that was negative (LABA monotherapy among specialists). Overall, this might suggest that treatment step-downs are not generally adhered to among GPs and specialists alike.\nOur findings are similar to those of the previous studies which have shown that many prescriptions of ICS do not have clear indications.16,23 A study from the Province of Manitoba, Canada (for the 1997–2012 period), reported that ICS was by far the most commonly prescribed first-line therapy in patients with COPD (28.2%), while LAMA (2.1%) and LABA (2.3%) were prescribed much less frequently. The proportion of patients who received ICS, which was often in combination with LABA, increased from 23.5% to 34.4% during the study period.18 Similar results have also been documented in two other Canadian provinces.24 Similar to our results, a UK-based study also reported overuse of ICS and undertreatment with LAMA+LABA.25\nWe are not aware of any previous study that correlates the choice of medication with the frequent exacerbator status and specialty of care. Previous studies have compared the differences in GP and respiratory specialist in terms of post-exacerbation management and outcomes, with one showing that although there were no differences in the 12-month re-admission and mortality rates between patients, there was significant difference in the management of acute exacerbations between the two groups.22 One potential mechanism explaining the seemingly higher adherence to guidelines among GPs in our study is that generally more complicated patients, such as those with more intensive symptoms, unsatisfactory response to previous therapies, and higher burden of comorbidity, are referred to specialists. This might result in specialists having to consider several different factors in their choice of treatment, not just what guidelines recommend. Another possible factor is the local healthcare policies. In British Columbia, both LAMA and ICS+LABA are “special authority” medications for GPs, requiring GPs to objectively document fixed airflow obstruction before being able to prescribe such medications. 
Obtaining a special authority for such prescriptions after an exacerbation might have been easier for GPs (eg, if the patient already received spirometry during their admission or visit to Emergency Room), making the prescriptions to be more strongly correlated with exacerbation history. Overall, the purpose of this study was not to explore the etiology of non-adherence to treatment recommendations. A study that addresses the reasons for the difference in treatment decisions between GPs and specialists is an important one that requires a dedicated design and analysis.\nThis study has multiple strengths. The use of data from the entire population of a well-defined geographic region reduces the risk of bias due to self-selection into the cohort. Retrieval of filled prescription records eliminates the risk of information or recall bias or Hawthorne effect. Avoiding recall bias is particularly important for the analysis of the associations between medication use and exacerbation history, as a history of exacerbation might affect patient recall. The large size of the data and several years of follow-up enabled us to have high statistical power to evaluate long-term trends. On the other hand, the limitations of this study should also be acknowledged. Filled prescription records do not necessarily equate to the actual medication that was recommended to the patient, as physicians might advise their patients to continue using previously stocked medications. Some physicians might also provide medication samples to patients that are not recorded in the data. One possible bias in this study is the under-reporting of exacerbations. Our algorithm for eliciting exacerbation history might not be completely aligned with physician assessment of exacerbation history.20", "Taken together, these results suggest that the utilization of inhaled therapies for COPD is not aligned with current Canadian or international guidelines. This is most clearly demonstrated in the low utilization of LAMA and high utilization of ICS in newly diagnosed patients, as well as high utilization of combination therapies as first-line. An interesting finding of our study was that, compared to treatment recommendations by specialists, treatment recommendations made by GPs were potentially more reflective of the exacerbation history, but this might well reflect the more complicated clinical scenarios that specialists manage. Overall, improving adherence to guideline recommendations should be promoted across all specialty groups. Our results also suggest that treatments are not stepped down as recommended by guidelines, potentially resulting in significant levels of overtreatment." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Data Sources", "Study Population", "Outcomes", "Statistical Analysis", "Results", "Trends of Filled Prescriptions for Inhaled Medications Over the Time Course of COPD", "Association Between Filled Prescriptions After an Outpatient Visit and Exacerbation History", "Discussion", "Conclusion" ]
[ "Chronic obstructive pulmonary disease (COPD) is a common disease of the airways characterized by progressive airflow limitation and periods of intensified disease activity referred to as acute exacerbations (or lung attacks).1 COPD is among the leading causes of disease burden in terms of lost disability-adjusted life years (DALYs).2,3 In Canada, exacerbations of COPD are the leading cause of medical hospital admissions and a major cause of morbidity and mortality.4\nIn all stages of COPD, risk factor modification (eg, smoking cessation) has a high potential for modifying the course of the disease. However, for most patients, pharmacotherapy is an essential component of disease management.5 In contemporary COPD management guidelines, exacerbations are an important determinant of initiation and choice of drug therapy. For example, in the current Canadian Thoracic Society (CTS) guidelines, treatment choice is based on the dichotomization of patients into frequent or non-frequent exacerbators,6 with frequent exacerbator status defined based on 12-month history (≥2 moderate or ≥1 severe exacerbation(s)). In general, treatment is stepped up (intensified) if the patient becomes a frequent exacerbator, and can be stepped down if the patient is a non-frequent exacerbator. Other disease management strategies such as the influential Global Initiative for Chronic Obstructive Lung Disease (GOLD)5 and the American Thoracic Society’s guidelines7 adopt a similar approach.\nAdherence to guideline recommendations on pharmacotherapies is generally low for chronic diseases.8 This might be due to the lack of awareness about guidelines by care providers, or low acceptance of, and adherence to, care providers’ recommendations by patients. Evaluating the level of adherence to guidelines in “real world” settings enables the identification of preventable gaps in care. Population-based health databases that capture treatment patterns at the community level without the risk of Hawthorne effect (individuals modifying their behavior as a reaction to being observed) or recall bias are a unique resource for this purpose. The aims of the current study were to use such databases to describe the trend of filled prescriptions for inhaled medication throughout the time course of COPD (primary objective), and to evaluate the association between prescribed inhaled therapies with the frequent-exacerbator definition (secondary objective).", "This study was approved by The University of British Columbia’s human ethics board (#H13-00684). All inferences, opinions, and conclusions drawn in this research are those of the authors and do not reflect the opinions or policies of the Data Steward(s). The health databases complied with the Freedom of Information and Protection of Privacy Act.\nData Sources We used administrative health databases of British Columbia (a Canadian Province with a population of 5.07M as of 20199) from January 1997 to December 2015. The administrative needs of the Province’s public healthcare system have resulted in the accumulation of healthcare encounter data for all legal residents of the Province. The data have high reliability with a very low rate of missing values.10 We had access to demographics,11 hospitalization,12 outpatient services,13 and filled prescriptions14 records (including drug dose, quantity and day supply). Hospitalization and outpatient services records contain diagnostic International Classification of Diseases (ICD) codes. 
Study Population

We created an incident cohort of individuals with diagnosed COPD using a previously validated case definition.17 Under this definition, a patient is categorized as having COPD if, during any 24-month rolling window, they had ≥1 hospitalization or ≥2 outpatient visits with COPD as the primary diagnosis (ICD-9: 491, 492, 493.2, 496; ICD-10: J41–J44). In a previous review study, this definition had a sensitivity of 65.5% and a specificity of 91.5% (likely even higher in our sample, given that individuals had to have COPD-related prescription records to contribute to the results).17 The lower age bound was 35 years, as in previous studies,18,19 to ensure that asthma patients (who might have health records similar to those of patients with COPD) were not over-represented. Patients entered the cohort on the date of their first COPD-related outpatient visit, which was considered the entry date and marked the beginning of follow-up. To ensure that an incident cohort of COPD patients was selected, only those captured in the data for at least five years before their entry date were included. Patients were followed from their entry date to the date of their last resource use of any kind, death, end of registration in the database, or December 31st, 2015 (the administrative end of the study), whichever occurred first.
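As an illustration of this case definition, the sketch below checks whether any 24-month window in a patient's encounter history satisfies the ≥1 hospitalization or ≥2 outpatient visits criterion. This is a minimal sketch, not the study's actual implementation; the encounter representation and all names are assumptions.

```python
from datetime import date, timedelta

WINDOW = timedelta(days=730)  # 24 months

def meets_copd_definition(encounters):
    """Return True if any 24-month window holds >=1 COPD hospitalization
    or >=2 COPD-coded outpatient visits.

    `encounters` is a list of (date, kind) tuples, with kind in
    {"hospitalization", "outpatient"} (an illustrative layout)."""
    events = sorted(encounters)
    if any(kind == "hospitalization" for _, kind in events):
        return True  # a single hospitalization anywhere suffices
    for i, (start, _) in enumerate(events):
        # count outpatient visits in the 24 months starting at this event
        visits = sum(1 for d, kind in events[i:]
                     if d <= start + WINDOW and kind == "outpatient")
        if visits >= 2:
            return True
    return False

# Example: two outpatient visits 18 months apart -> meets the definition
print(meets_copd_definition([(date(2005, 1, 10), "outpatient"),
                             (date(2006, 7, 2), "outpatient")]))  # True
```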
Outcomes

We evaluated two outcomes for the primary objective (illustrating medication trends over the time course of COPD). The first was filled prescriptions for long-acting inhaled medications during the time course of COPD, quantified as the proportion of patients who filled prescriptions for a given medication class and as the average (per-patient) dose-adjusted number of canisters dispensed. The following medication classes were evaluated: long-acting muscarinic receptor antagonists (LAMA), long-acting beta-2 adrenoceptor agonists (LABA), and inhaled corticosteroids (ICS). We used a master list of COPD-related inhaled medications identified by their unique identifiers, linked with dose-equivalency tables (Supplementary Material – Section 1). The second outcome was filled prescriptions for combination therapies. The following combination therapies were considered: LAMA+LABA, LAMA+ICS, LABA+ICS, and LAMA+LABA+ICS. This outcome was quantified as 1) the proportion of patients who, at least for part of the year, received inhaled therapies of different classes, and 2) the medication possession ratio (MPR) for combination therapies. MPR was determined by counting the number of days within a follow-up year on which the patient was within the duration of supply of filled prescriptions for more than one class of medication (in a single inhaler or separate inhalers) and dividing this count by the length of the follow-up period (365 days). If different classes of therapy were dispensed in separate inhalers, they counted towards combination therapy if their durations of supply overlapped. For the primary objective, we included all prescription records, whether written by general practitioners (GPs) or specialists.
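The MPR calculation described above can be sketched as follows. The record layout (drug class, fill day relative to the start of the follow-up year, days of supply) is illustrative, not the study's actual data schema.

```python
def combination_mpr(prescriptions, year_length=365):
    """Share of a 365-day follow-up year covered by >=2 medication
    classes simultaneously (single or separate inhalers)."""
    covered_by = [set() for _ in range(year_length)]  # classes held each day
    for rx in prescriptions:
        start = rx["fill_day"]
        for day in range(start, min(start + rx["days_supply"], year_length)):
            if day >= 0:
                covered_by[day].add(rx["drug_class"])
    days_on_combo = sum(1 for classes in covered_by if len(classes) >= 2)
    return days_on_combo / year_length

rx_records = [
    {"drug_class": "ICS",  "fill_day": 0,  "days_supply": 90},
    {"drug_class": "LABA", "fill_day": 30, "days_supply": 90},
]
# Supplies overlap on days 30-89 (60 days) -> MPR of 60/365
print(round(combination_mpr(rx_records), 3))  # 0.164
```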
The outcome for the secondary objective (the association between frequent-exacerbator status and inhaled medication prescription) was filled prescriptions within 14 days after a COPD-related outpatient visit to a physician. We limited the visits to GPs, internal medicine specialists, and respirologists (the latter two are collectively referred to as specialists), because the clinical experts on our team believed that other specialties rarely initiate or change COPD-related medications and mostly refer the patient back to their treating physician. If multiple visits occurred within 14 days of each other for the same patient, only the last visit was considered. At the time of a physician visit, a patient was classified as a frequent or non-frequent exacerbator based on their exacerbation history in the previous 12 months. We identified moderate and severe exacerbations using a previously published algorithm.20 A moderate exacerbation was defined as filled prescriptions for oral corticosteroids or antibiotics within seven days after an outpatient visit with a main diagnostic (ICD) code of COPD (list of medications in Supplementary Material – Section 1). A severe exacerbation was defined as a hospital admission with a main discharge diagnosis of COPD.
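A minimal sketch of the frequent-exacerbator classification applied at each visit, assuming exacerbations are available as dated, severity-labelled events (the names are illustrative):

```python
from datetime import date, timedelta

def is_frequent_exacerbator(visit_date, exacerbations):
    """>=2 moderate or >=1 severe exacerbation in the 12 months before
    the visit; `exacerbations` is a list of (date, severity) tuples with
    severity in {"moderate", "severe"}."""
    window_start = visit_date - timedelta(days=365)
    recent = [sev for d, sev in exacerbations if window_start <= d < visit_date]
    return recent.count("severe") >= 1 or recent.count("moderate") >= 2

history = [(date(2011, 3, 1), "moderate"), (date(2011, 9, 15), "moderate")]
print(is_frequent_exacerbator(date(2011, 12, 1), history))  # True
```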
Statistical Analysis

For the primary objective, we divided the follow-up time of each patient into adjacent 12-month periods, starting from the entry date (if the last period was <12 months, it was ignored). We plotted the proportion of COPD patients who received a given class of COPD medication as a function of years since COPD diagnosis. We also plotted the average number of dose-adjusted canisters per COPD patient during each follow-up period, separately for each inhaled medication class. To convert dispensed quantities to numbers of canisters, we used a specific formulation as the reference for each medication class, as follows: 1) salmeterol 50 mcg, 60 doses per canister, for LABA; 2) beclomethasone 100 mcg, 200 doses per canister, for ICS; and 3) tiotropium 18 mcg, 30 doses per canister, for LAMA.
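The dose-adjusted canister conversion can be sketched as below. It assumes dispensed quantities have already been mapped to reference-strength dose equivalents through the study's dose-equivalency tables, which are not reproduced here; the function name and record layout are illustrative.

```python
DOSES_PER_REFERENCE_CANISTER = {
    "LABA": 60,   # salmeterol 50 mcg, 60 doses per canister
    "ICS": 200,   # beclomethasone 100 mcg, 200 doses per canister
    "LAMA": 30,   # tiotropium 18 mcg, 30 doses per canister
}

def canisters_dispensed(drug_class, reference_equivalent_doses):
    """Dose-adjusted canister count for one dispensing record."""
    return reference_equivalent_doses / DOSES_PER_REFERENCE_CANISTER[drug_class]

# Example: 600 beclomethasone-100-mcg-equivalent doses in one year
print(canisters_dispensed("ICS", 600))  # 3.0 canisters
```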
To test for time trends, we used a generalized linear model (with a logit link function and binomial distribution) with generalized estimating equations to account for the clustering of observation units (patient-years) within patients. Years since the entry date was the independent variable, whose coefficient captured the time trend. We did not control for any other variables in this analysis, given that the primary interest was in temporal trends.

For the secondary objective, we used a generalized linear model (with the same link function and distribution as above) to associate filled prescriptions after physician visits with exacerbation history. This model was fitted separately for each medication class, and for GPs versus specialists. Again, we used generalized estimating equations to account for the clustered nature of the data (multiple physician visits per patient). We controlled for the following variables: sex, age on the date of the visit, and socioeconomic status on the date of the visit (estimated from neighbourhood income quantiles). SAS Enterprise Guide (version 7.1, Cary, NC, USA) was used for all analyses. Two-tailed p-values were considered significant at the 0.05 level.

We performed a sensitivity analysis exploring whether our findings were affected by the inadvertent inclusion of patients with asthma but without fixed airway obstruction. For this analysis, we excluded all individuals who met a previously validated case definition of asthma21 at any time during the study period (before or after COPD diagnosis). We reported the proportion of each medication class prescribed after a physician visit, separately by frequent-exacerbator status.
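The analyses were run in SAS; as a rough illustration of the modelling approach only, the following Python sketch fits a logistic GEE with an exchangeable working correlation to synthetic patient-year data, mirroring the time-trend model. All data and variable names here are fabricated for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_years = 200, 5
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_years),
    "years_since_dx": np.tile(np.arange(n_years), n_patients),
})
# Synthetic outcome: probability of filling a prescription rises with time
logit = -0.5 + 0.15 * df["years_since_dx"]
df["filled"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logit link, binomial distribution, patient-years clustered within patients
model = smf.gee("filled ~ years_since_dx", groups="patient_id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())  # the years_since_dx coefficient is the time trend
```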
Results

In our study, 132,004 individuals satisfied the case definition of COPD. Of these, 64,942 (49.2%) were female, and the mean age at entry was 68.6 (SD=12.5) years. These patients contributed 707,575 years of follow-up (mean 5.4 years, SD=3.5).
Table 1 provides basic sociodemographic characteristics of these patients.

Table 1. Characteristics of Patients and the Visits in the Cohort

Patient characteristics at index date (study cohort, n=132,004):
- Female sex: 64,942 (49.2%)
- Age, years, mean (SD): 68.6 (12.5)
- Years from COPD diagnosis, mean (SD): 5.4 (3.5)
- Socioeconomic status:
  - Quantile 1: 34,699 (26.3%)
  - Quantile 2: 28,116 (21.3%)
  - Quantile 3: 24,630 (18.7%)
  - Quantile 4: 22,500 (17.1%)
  - Quantile 5: 19,545 (14.8%)
  - Unknown: 2,514 (1.9%)

Visit characteristics for the sub-cohort of COPD patients alive in 2010 (visits followed by a prescription, n=260,164):
- Patient was a frequent exacerbator in the previous 12 months: 79,820 (30.7%)
- History of PFT before visit: 228,464 (87.8%)
- History of asthma diagnosis before visit: 142,457 (54.8%)
- COPD-related medications in the three months before the visit:
  - ICS only: 28,624 (11.0%)
  - LABA only: 2,039 (0.78%)
  - LAMA only: 18,414 (7.08%)
  - LAMA+LABA: 2,086 (0.8%)
  - ICS+LABA: 80,205 (30.83%)
  - ICS+LAMA: 4,861 (1.87%)
  - ICS+LAMA+LABA: 71,152 (27.35%)

Note: Numbers of patients (and percent of cohort) are reported unless otherwise indicated. Abbreviations: COPD, chronic obstructive pulmonary disease; PFT, pulmonary function test; ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists.

Trends of Filled Prescriptions for Inhaled Medications Over the Time Course of COPD

The trends are depicted in Figure 1. ICS was the most common medication class prescribed to patients in their first year of diagnosis (49.9%; panel A), followed by LABA (31.8%) and LAMA (10.4%). The percentage of patients who filled ICS-containing prescriptions per year remained stable at roughly 50% throughout the time course of COPD, whereas this percentage increased modestly for LABA and LAMA (by 7% and 9%, respectively, over 15 years). In terms of the quantity of dispensed medications, ICS was again the most prescribed ingredient (an average of 2.9 canisters per patient per year over the 15-year period; panel B). The average number of canisters per year for LAMA, LABA, and ICS increased by 119%, 56%, and 40%, respectively, over 15 years.

Figure 1. Trends in the proportion of patients filling at least one prescription (A) and average dose-adjusted number of canisters (B) for major COPD inhaled therapies, during the disease course. Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists.
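For readers who want to reproduce this kind of tabulation, the sketch below computes the Figure 1A-style quantity (the share of followed patients filling at least one prescription per class, by year since diagnosis) on a toy dataset; all column names and values are illustrative.

```python
import pandas as pd

fills = pd.DataFrame({  # one row per filled prescription
    "patient_id":    [1, 1, 2, 2, 3],
    "year_since_dx": [0, 1, 0, 0, 1],
    "drug_class":    ["ICS", "ICS", "LABA", "ICS", "LAMA"],
})
followed = pd.DataFrame({  # denominator: patients under follow-up each year
    "year_since_dx": [0, 1],
    "n_followed":    [3, 2],
})

# Count each patient at most once per class per follow-up year
users = (fills.drop_duplicates(["patient_id", "year_since_dx", "drug_class"])
              .groupby(["year_since_dx", "drug_class"])["patient_id"]
              .nunique().rename("n_users").reset_index())
trend = users.merge(followed, on="year_since_dx")
trend["proportion"] = trend["n_users"] / trend["n_followed"]
print(trend)
```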
The trends for combination therapies are illustrated in Figure 2. In the first year of diagnosis, 30.7% of patients received ICS+LABA (panel A), and on average these therapies covered 11.0% of patient-time (panel B). The second most common combination therapy was triple therapy (LAMA+LABA+ICS), taken by 6.1% of patients in their first year of diagnosis and covering 2.0% of their time in the first follow-up year. Other combination patterns were uncommon. Among patients with 12–15 years of COPD history, on average 58.5% received combination therapies, with an average MPR of 33.0%. Both the proportion of patients receiving combination therapies and the average MPR increased for ICS+LABA and triple therapy over the time course of COPD.

Figure 2. Trends in the proportion of patients on combination therapies (A) and average medication possession ratio (B) over the time course of COPD. Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists; ICS/LABA/LAMA, triple therapy.

Association Between Filled Prescriptions After an Outpatient Visit and Exacerbation History

During follow-up, 260,164 eligible outpatient visits were recorded that were followed by a filled prescription for an inhaled medication; 85% of these visits were to GPs.
In 79,820 (30.7%) of the visits, the patient was classified as a frequent exacerbator given their 12-month exacerbation history (Table 1).

The frequencies (by percentage) of the different inhaled therapies prescribed for COPD patients, stratified by frequent-exacerbator status, are provided in Figure 3. For non-frequent exacerbators, the most common treatment was ICS+LABA (14.5%), followed by ICS alone (5.8%). Outpatient visits among frequent exacerbators were most commonly followed by filled prescriptions for ICS+LABA (18.9%) and triple therapy (11.6%).

Figure 3. Frequency of long-acting inhaled medication prescription in frequent (dark bars) and non-frequent (light bars) exacerbators. Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists.

Figure 4 provides the adjusted odds ratios (ORs) and 95% confidence intervals (CIs) associating filled prescriptions with frequent-exacerbator status (compared with non-frequent exacerbators), separately for GPs and specialists. All associations were significant except two: LABA+LAMA therapy for specialists, and LABA monotherapy for GPs. For both groups, triple therapy showed the strongest association with a positive exacerbation history (OR of 2.68 for GPs and 2.02 for specialists; p<0.0001 for both). The associations were generally stronger (further from 1.00) for GPs, with the exception of monotherapies with ICS or LABA.

Figure 4. Forest plot of odds ratios (ORs) and 95% confidence intervals between frequent-exacerbator status and filled prescriptions for each medication type, separately for GPs and specialists, compared with non-frequent exacerbators. Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists; GP, general practitioner.

The results of the sensitivity analysis are provided in Supplementary Material – Section 2, Supplementary Figure 1. The trend of medication prescription did not differ substantially from the main analysis, nor did the association between the frequent-exacerbator definition and prescription patterns. Again, ICS+LABA was the most common combination therapy after a physician visit, regardless of exacerbation history. As in the main analysis, the second most common therapy was ICS monotherapy among non-frequent exacerbators and triple therapy among frequent exacerbators.
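As a companion to the Figure 4 estimates, this sketch shows how adjusted ORs and 95% CIs can be obtained by exponentiating the coefficients of a fitted logistic GEE. The data are synthetic and the covariates merely echo the adjustment set described in the Methods; nothing here reproduces the study's actual estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "patient_id": rng.integers(0, 120, n),  # repeated visits per patient
    "frequent_exacerbator": rng.binomial(1, 0.3, n),
    "age": rng.normal(69, 12, n),
    "female": rng.binomial(1, 0.5, n),
})
# Synthetic triple-therapy-like outcome tied to exacerbation history
logit = -2 + 0.9 * df["frequent_exacerbator"]
df["filled_triple"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.gee("filled_triple ~ frequent_exacerbator + age + female",
              groups="patient_id", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()

# Exponentiate coefficients and confidence bounds to get ORs and 95% CIs
or_table = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```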
The associations were generally stronger (further away than 1.00) for GPs, with the exception of monotherapies with ICS or LABA.Figure 4Forest plot of odds ratio (OR) and 95% confidence interval between frequent-exacerbator status and filled prescriptions for each medication type, separately for GP and specialist compared to non-frequent exacerbators.Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists; GP, general practitioner.\nForest plot of odds ratio (OR) and 95% confidence interval between frequent-exacerbator status and filled prescriptions for each medication type, separately for GP and specialist compared to non-frequent exacerbators.\nThe result of the sensitivity analysis is provided in Supplementary Material – Section 2, Supplementary Figure 1. The trend of medication prescription did not differ substantially from the main analysis, nor did the association between frequent exacerbator definition and prescription patterns. Again, ICS+LABA was the most common combination therapy after a physician visit, regardless of exacerbation history. Similar to the main analysis, the second most common therapy was monotherapy with ICS among non-frequent exacerbators and triple therapy among frequent exacerbators.", "Using 15 years of population-based data, we quantified the pattern of long-acting inhaled therapies in an incident cohort of patients with diagnosed COPD. We demonstrated that ICS was the most commonly prescribed medication class throughout the time course of the disease, with about 50% of patients having filled prescriptions for ICS during their first year of diagnosis. While this proportion remained roughly the same over the subsequent follow-up years, the average number of canisters of ICS per patient increased, indicating that the dosage of ICS increased over time among ICS users. The second most commonly used ingredient was LABA, received by a third of the cohort upon diagnosis. Further, we documented that 39.0% of patients received combination therapies in their first year of diagnosis, with ICS+LABA being the most common (30.7%), followed by triple therapy. In the Canadian guidelines, the recommended first-line therapy is monotherapy with LAMA or LABA, and ICS should only be used among individuals who do not respond to dual LABA+LAMA therapy. ICS can be started earlier in the course of the disease in patients with a history of asthma. However, as our sensitivity analysis indicates, the use of ICS was also common in COPD patients without any history of asthma. As such, these results demonstrate potentially significant departures from guideline recommendations, in particular the overuse of ICS and combination therapies.\nIn addition, our results revealed a difference between GPs and specialists in terms of considering exacerbation history in prescribing inhalers. The Canadian (and GOLD) guidelines generally recommend intensifying medications (eg, adding a second or third therapy) among patients who experience exacerbations in the previous 12 months. Following this recommendation should make intensive treatments (dual therapies and triple therapy) positively associated with exacerbation history. Our results suggest that GPs are more responsive in their treatment choices to the positive exacerbation history. 
On the other hand, following guidelines should cause monotherapies to have a negative association with the frequent exacerbator status, as no patient with frequent exacerbations in the previous year should be on monotherapy. However, there was only one association for monotherapies that was negative (LABA monotherapy among specialists). Overall, this might suggest that treatment step-downs are not generally adhered to among GPs and specialists alike.\nOur findings are similar to those of the previous studies which have shown that many prescriptions of ICS do not have clear indications.16,23 A study from the Province of Manitoba, Canada (for the 1997–2012 period), reported that ICS was by far the most commonly prescribed first-line therapy in patients with COPD (28.2%), while LAMA (2.1%) and LABA (2.3%) were prescribed much less frequently. The proportion of patients who received ICS, which was often in combination with LABA, increased from 23.5% to 34.4% during the study period.18 Similar results have also been documented in two other Canadian provinces.24 Similar to our results, a UK-based study also reported overuse of ICS and undertreatment with LAMA+LABA.25\nWe are not aware of any previous study that correlates the choice of medication with the frequent exacerbator status and specialty of care. Previous studies have compared the differences in GP and respiratory specialist in terms of post-exacerbation management and outcomes, with one showing that although there were no differences in the 12-month re-admission and mortality rates between patients, there was significant difference in the management of acute exacerbations between the two groups.22 One potential mechanism explaining the seemingly higher adherence to guidelines among GPs in our study is that generally more complicated patients, such as those with more intensive symptoms, unsatisfactory response to previous therapies, and higher burden of comorbidity, are referred to specialists. This might result in specialists having to consider several different factors in their choice of treatment, not just what guidelines recommend. Another possible factor is the local healthcare policies. In British Columbia, both LAMA and ICS+LABA are “special authority” medications for GPs, requiring GPs to objectively document fixed airflow obstruction before being able to prescribe such medications. Obtaining a special authority for such prescriptions after an exacerbation might have been easier for GPs (eg, if the patient already received spirometry during their admission or visit to Emergency Room), making the prescriptions to be more strongly correlated with exacerbation history. Overall, the purpose of this study was not to explore the etiology of non-adherence to treatment recommendations. A study that addresses the reasons for the difference in treatment decisions between GPs and specialists is an important one that requires a dedicated design and analysis.\nThis study has multiple strengths. The use of data from the entire population of a well-defined geographic region reduces the risk of bias due to self-selection into the cohort. Retrieval of filled prescription records eliminates the risk of information or recall bias or Hawthorne effect. Avoiding recall bias is particularly important for the analysis of the associations between medication use and exacerbation history, as a history of exacerbation might affect patient recall. 
The large size of the data and several years of follow-up enabled us to have high statistical power to evaluate long-term trends. On the other hand, the limitations of this study should also be acknowledged. Filled prescription records do not necessarily equate to the actual medication that was recommended to the patient, as physicians might advise their patients to continue using previously stocked medications. Some physicians might also provide medication samples to patients that are not recorded in the data. One possible bias in this study is the under-reporting of exacerbations. Our algorithm for eliciting exacerbation history might not be completely aligned with physician assessment of exacerbation history.20", "Taken together, these results suggest that the utilization of inhaled therapies for COPD is not aligned with current Canadian or international guidelines. This is most clearly demonstrated in the low utilization of LAMA and high utilization of ICS in newly diagnosed patients, as well as high utilization of combination therapies as first-line. An interesting finding of our study was that, compared to treatment recommendations by specialists, treatment recommendations made by GPs were potentially more reflective of the exacerbation history, but this might well reflect the more complicated clinical scenarios that specialists manage. Overall, improving adherence to guideline recommendations should be promoted across all specialty groups. Our results also suggest that treatments are not stepped down as recommended by guidelines, potentially resulting in significant levels of overtreatment." ]
[ "intro", null, null, null, null, null, null, null, null, null, null ]
[ "COPD", "prescription", "medication", "exacerbation", "time trend" ]
Introduction: Chronic obstructive pulmonary disease (COPD) is a common disease of the airways characterized by progressive airflow limitation and periods of intensified disease activity referred to as acute exacerbations (or lung attacks).1 COPD is among the leading causes of disease burden in terms of lost disability-adjusted life years (DALYs).2,3 In Canada, exacerbations of COPD are the leading cause of medical hospital admissions and a major cause of morbidity and mortality.4 In all stages of COPD, risk factor modification (eg, smoking cessation) has a high potential for modifying the course of the disease. However, for most patients, pharmacotherapy is an essential component of disease management.5 In contemporary COPD management guidelines, exacerbations are an important determinant of the initiation and choice of drug therapy. For example, in the current Canadian Thoracic Society (CTS) guidelines, treatment choice is based on the dichotomization of patients into frequent or non-frequent exacerbators,6 with frequent-exacerbator status defined based on 12-month history (≥2 moderate or ≥1 severe exacerbation(s)). In general, treatment is stepped up (intensified) if the patient becomes a frequent exacerbator, and can be stepped down if the patient is a non-frequent exacerbator. Other disease management strategies, such as the influential Global Initiative for Chronic Obstructive Lung Disease (GOLD)5 and the American Thoracic Society guidelines,7 adopt a similar approach. Adherence to guideline recommendations on pharmacotherapies is generally low for chronic diseases.8 This might be due to a lack of awareness of guidelines among care providers, or low acceptance of, and adherence to, care providers' recommendations by patients. Evaluating the level of adherence to guidelines in "real world" settings enables the identification of preventable gaps in care. Population-based health databases that capture treatment patterns at the community level without the risk of the Hawthorne effect (individuals modifying their behavior as a reaction to being observed) or recall bias are a unique resource for this purpose. The aims of the current study were to use such databases to describe the trend of filled prescriptions for inhaled medications throughout the time course of COPD (primary objective), and to evaluate the association between prescribed inhaled therapies and the frequent-exacerbator definition (secondary objective).

Methods: This study was approved by The University of British Columbia's human ethics board (#H13-00684). All inferences, opinions, and conclusions drawn in this research are those of the authors and do not reflect the opinions or policies of the Data Steward(s). The health databases complied with the Freedom of Information and Protection of Privacy Act.

Data Sources: We used administrative health databases of British Columbia (a Canadian province with a population of 5.07M as of 2019)9 from January 1997 to December 2015. The administrative needs of the Province's public healthcare system have resulted in the accumulation of healthcare encounter data for all legal residents of the Province. The data have high reliability with a very low rate of missing values.10 We had access to demographics,11 hospitalization,12 outpatient services,13 and filled prescription14 records (including drug dose, quantity, and days of supply). Hospitalization and outpatient services records contain diagnostic International Classification of Diseases (ICD) codes. Each filled prescription record contains a unique identifier for the drug, as well as the dose and duration of supply. These data have frequently been used to quantify healthcare resource use patterns and their association with outcomes.15,16
Study Population: We created an incident cohort of individuals with diagnosed COPD using a previously validated case definition.17 Under this definition, a patient is categorized as having COPD if, during any 24-month rolling window, they had ≥1 hospitalization or ≥2 outpatient visits with COPD as the primary diagnosis (ICD-9: 491, 492, 493.2, 496; ICD-10: J41–J44). In a previous review study, this definition had a sensitivity of 65.5% and a specificity of 91.5% (the latter is likely to be even higher in our sample, given that individuals had to have COPD-related prescription records to contribute to the results).17 The lower bound for the age criterion was 35 years, similar to previous studies,18,19 to ensure that asthma patients (who might have health records similar to those of patients with COPD) were not over-represented. Patients entered the cohort on the date of their first COPD-related outpatient visit, which was considered the entry date, marking the beginning of follow-up. To ensure that an incident cohort of COPD patients was selected, only those who were captured in the data for at least five years before their entry date were included. Patients were followed from their entry date to the date of their last resource use of any kind, death, end of registration in the database, or December 31st, 2015 (the administrative end of the study), whichever occurred first.
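To make the rolling-window logic concrete, here is a minimal sketch in Python (the study itself used SAS). The input layout — one row per COPD-coded encounter, with columns `visit_date` and `is_hospitalization` — and the function name are illustrative assumptions; the ICD filtering, the age (≥35) criterion, and the five-year lookback would all be applied upstream.

```python
from datetime import timedelta

import pandas as pd

def meets_copd_case_definition(encounters: pd.DataFrame) -> bool:
    """Validated case definition: within any 24-month rolling window,
    >=1 hospitalization or >=2 outpatient visits with COPD as the
    primary diagnosis. `encounters` holds one patient's COPD-coded
    records (columns: `visit_date` datetime64, `is_hospitalization` bool)."""
    if encounters["is_hospitalization"].any():
        return True  # a single COPD hospitalization qualifies on its own
    dates = encounters["visit_date"].sort_values().to_list()
    window = timedelta(days=730)  # 24 months
    # Any two outpatient visits within one window qualify; after sorting,
    # it suffices to check consecutive pairs.
    return any(b - a <= window for a, b in zip(dates, dates[1:]))
```

Under this sketch, the entry date would then be taken as the earliest COPD-related outpatient visit of any qualifying patient.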
Outcomes: We evaluated two outcomes for the primary objective (illustrating medication trends over the time course of COPD). The first was filled prescriptions for long-acting inhaled medications during the time course of COPD. This was quantified in terms of the proportion of patients who filled prescriptions for a given medication class, as well as the average (per patient) dose-adjusted number of canisters dispensed. The following medication classes were evaluated: long-acting muscarinic receptor antagonists (LAMA), long-acting beta-2 adrenoreceptor agonists (LABA), and inhaled corticosteroids (ICS). We used a master list of COPD-related inhaled medications identified by their unique identifiers, linked with dose-equivalency tables (Supplementary Material – Section 1). The second outcome was filled prescriptions for combination therapies. The following combination therapies were considered: LAMA+LABA, LAMA+ICS, LABA+ICS, and LAMA+LABA+ICS. This outcome was quantified in terms of 1) the proportion of patients who, at least for part of the year, received inhaled therapies of different classes, and 2) the medication possession ratio (MPR) for combination therapies. MPR was determined by counting the number of days within a follow-up year on which the patient was within the duration of supply of filled prescriptions for more than one class of medication (in a single inhaler or separate inhalers) and dividing it by the length of the follow-up period (365 days). If different classes of therapies had been dispensed to the patient in separate inhalers, they counted towards combination therapy on the days their durations of supply overlapped (see the sketch below). For this (primary) objective, we included all prescription records by general practitioners (GPs) and specialists, regardless of who prescribed them.
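A worked illustration of the MPR arithmetic: the sketch below walks the 365 days of a follow-up year, counts the days covered by overlapping supply from more than one medication class, and divides by 365. The column names are hypothetical, and real code would also need the combined-inhaler expansion described above (one fill of a combined inhaler contributing a row per class).

```python
from datetime import timedelta

import pandas as pd

def combination_mpr(fills: pd.DataFrame, year_start: pd.Timestamp) -> float:
    """Medication possession ratio for combination therapy in one
    follow-up year: the fraction of 365 days on which the patient was
    within the days-supply of prescriptions from more than one class.
    `fills` has columns `med_class` ('ICS'/'LABA'/'LAMA'), `fill_date`
    (datetime64), and `days_supply` (int)."""
    supply_end = fills["fill_date"] + pd.to_timedelta(fills["days_supply"], unit="D")
    covered = 0
    for offset in range(365):
        day = year_start + timedelta(days=offset)
        active = fills[(fills["fill_date"] <= day) & (day < supply_end)]
        if active["med_class"].nunique() > 1:  # overlapping supply of >=2 classes
            covered += 1
    return covered / 365
```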
The outcome for the secondary objective (the association between frequent-exacerbator status and inhaled medication prescriptions) was a filled prescription within 14 days after a COPD-related outpatient visit to a physician. We limited the visits to GPs, internal medicine specialists, and respirologists (the latter two are collectively referred to as specialists), because the clinical experts on our team believed that other specialties rarely initiate or change COPD-related medications and mostly refer the patient to their treating physician. If multiple visits occurred within 14 days of each other for the same patient, only the last visit was considered. At the time of a physician visit, a patient was classified as a frequent or non-frequent exacerbator based on their exacerbation history in the previous 12 months. We identified moderate and severe exacerbations using a previously published algorithm.20 A moderate exacerbation was defined as filled prescriptions for oral corticosteroids or antibiotics within seven days after an outpatient visit with a main diagnostic (ICD) code of COPD (list of medications in Supplementary Material – Section 1). A severe exacerbation was defined as a hospital admission with a main discharge diagnosis of COPD.
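The visit-level exacerbator flag then reduces to counting events in a 12-month lookback; a minimal sketch, assuming the moderate and severe exacerbation dates have already been derived per patient with the algorithm above:

```python
from datetime import timedelta

def is_frequent_exacerbator(visit_date, moderate_dates, severe_dates) -> bool:
    """CTS-style flag at a physician visit: >=2 moderate or >=1 severe
    exacerbation(s) in the 12 months before the visit. The two date
    iterables are this patient's exacerbation histories."""
    start = visit_date - timedelta(days=365)
    n_moderate = sum(start <= d < visit_date for d in moderate_dates)
    n_severe = sum(start <= d < visit_date for d in severe_dates)
    return n_severe >= 1 or n_moderate >= 2
```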
Statistical Analysis: For the primary objective, we divided the follow-up time of each patient into adjacent 12-month periods, starting from the entry date (if the last period was <12 months, it was ignored). We plotted the proportion of COPD patients who received a given class of COPD medication as a function of years since COPD diagnosis. We also plotted the average number of dose-adjusted canisters per COPD patient during each follow-up period, separately for each inhaled medication class. To convert dispensed quantities to numbers of canisters, we used a specific formulation as the reference for each medication class, as follows: 1) salmeterol 50 mcg, 60 doses per canister, for LABA; 2) beclomethasone 100 mcg, 200 doses per canister, for ICS; and 3) tiotropium 18 mcg, 30 doses per canister, for LAMA.
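The dose adjustment itself is simple arithmetic once quantities are expressed in reference-drug equivalents; a sketch, with the caveat that the between-drug dose-equivalency step (handled in the study via published tables) is assumed to have happened already:

```python
# Micrograms per reference canister for each class, from the formulations
# listed above: salmeterol 50 mcg x 60 doses (LABA), beclomethasone
# 100 mcg x 200 doses (ICS), tiotropium 18 mcg x 30 doses (LAMA).
MCG_PER_REFERENCE_CANISTER = {
    "LABA": 50 * 60,    # 3,000 mcg
    "ICS": 100 * 200,   # 20,000 mcg
    "LAMA": 18 * 30,    # 540 mcg
}

def canister_equivalents(total_mcg: float, med_class: str) -> float:
    """Express a dispensed quantity (already converted to the reference
    drug's equivalent dose in mcg) as reference canisters."""
    return total_mcg / MCG_PER_REFERENCE_CANISTER[med_class]

# Example: 6,000 mcg of salmeterol-equivalent LABA in a follow-up year
# corresponds to 6000 / 3000 = 2.0 reference canisters.
print(canister_equivalents(6000, "LABA"))  # 2.0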
To test for time trends, we used a generalized linear model (with a logit link function and binomial distribution) with generalized estimating equations to account for the clustering of observation units (patient-years) within patients. Years since the entry date was the independent variable, whose coefficient captured the time trend. We did not control for any other variable in this analysis, given that the primary interest was in examining temporal trends. For the secondary objective, we used a generalized linear model (with the same link function and distribution as above) to associate filled prescriptions after physician visits with exacerbation history. This model was fitted separately for each medication class, and for GPs versus specialists. Again, we used generalized estimating equations to account for the clustered nature of the data (multiple physician visits per patient). We controlled for the following variables: sex, age on the date of the visit, and socioeconomic status on the date of the visit (estimated from neighbourhood income quantiles). SAS Enterprise Guide (version 7.1, Cary, NC, USA) was used for all analyses. Two-tailed p-values were considered significant at the 0.05 level. We performed a sensitivity analysis that explored whether our findings were affected by the inadvertent inclusion of patients with asthma but without fixed airway obstruction. For this analysis, we excluded all individuals who met a previously validated case definition of asthma21 at any time during the study period (before or after COPD diagnosis), and reported the proportion of each medication class prescribed after a physician visit, separately by frequent-exacerbator status.
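Both regressions can be reproduced in spirit with any GEE implementation. The sketch below uses Python's statsmodels rather than the SAS procedures actually used, on a synthetic stand-in dataset; all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy stand-in for the real patient-year data: one row per patient-year.
rng = np.random.default_rng(0)
df_years = pd.DataFrame({
    "patient_id": np.repeat(np.arange(200), 5),
    "years_since_dx": np.tile(np.arange(5), 200),
})
df_years["filled_ics"] = rng.binomial(1, 0.5, size=len(df_years))

# Logistic (logit link, binomial) GEE; `groups` handles the clustering of
# patient-years within patients. The coefficient on `years_since_dx`
# captures the time trend.
trend = smf.gee(
    "filled_ics ~ years_since_dx",
    groups="patient_id",
    data=df_years,
    family=sm.families.Binomial(),
).fit()
print(trend.summary())

# The visit-level association model has the same form, eg
# "filled_triple ~ frequent_exac + sex + age + ses", fitted separately
# per medication class and per prescriber type (GP vs specialist).
```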
Results: In our study, 132,004 individuals satisfied the case definition of COPD. Of these, 64,942 patients (49.2%) were female, and the mean age at entry was 68.6 (SD=12.5) years. These patients contributed 707,575 years of follow-up (mean follow-up 5.4 years, SD=3.5).
Table 1 provides basic sociodemographic characteristics of these patients.

Table 1. Characteristics of Patients and the Visits in the Cohort

Patient Characteristics at Index Date — Study Cohort (n=132,004)
  Female sex (%)a: 64,942 (49.2)
  Age, years (SD): 68.6 (12.5)
  Years from COPD diagnosis (SD): 5.4 (3.5)
  Socioeconomic status (%)
    Quantile 1: 34,699 (26.3%)
    Quantile 2: 28,116 (21.3%)
    Quantile 3: 24,630 (18.7%)
    Quantile 4: 22,500 (17.1%)
    Quantile 5: 19,545 (14.8%)
    Unknown: 2,514 (1.9%)

Visit Characteristics for the Sub-Cohort of COPD Patients Alive in 2010 — Visits Followed by Prescription (n=260,164)
  Patient was a frequent exacerbator in the previous 12 months: 79,820 (30.7%)
  History of PFT before visit: 228,464 (87.8%)
  History of asthma diagnosis before visit: 142,457 (54.8%)
  COPD-related medications in the three months before the visit
    ICS only: 28,624 (11.0%)
    LABA only: 2,039 (0.78%)
    LAMA only: 18,414 (7.08%)
    LAMA+LABA: 2,086 (0.8%)
    ICS+LABA: 80,205 (30.83%)
    ICS+LAMA: 4,861 (1.87%)
    ICS+LAMA+LABA: 71,152 (27.35%)

Note: aNumber of patients (and percent of cohort) are reported unless otherwise indicated.
Abbreviations: COPD, chronic obstructive pulmonary disease; PFT, pulmonary function test; ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists.

Trends of Filled Prescriptions for Inhaled Medications Over the Time Course of COPD: The trends are depicted in Figure 1. ICS was the most common medication class prescribed to patients in their first year of diagnosis (49.9%; panel A). It was followed by LABA (31.8%) and LAMA (10.4%). The percentage of patients who filled ICS-containing prescriptions per year remained roughly the same (~50%) throughout the time course of COPD, whereas this percentage increased modestly for LABA and LAMA (by 7% and 9%, respectively, over 15 years). In terms of the quantity of dispensed medications, ICS was again the most prescribed ingredient (an average of 2.9 canisters per patient per year over the 15-year period; panel B). The average number of canisters per year for LAMA, LABA, and ICS increased by 119%, 56%, and 40%, respectively, over 15 years.

Figure 1. Trends in the proportion of patients filling at least one prescription (A) and average dose-adjusted number of canisters (B) for major COPD inhaled therapies, during disease course. Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists.
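The panel A proportions are simple per-year aggregates of patient-year fill flags; a minimal sketch, reusing the hypothetical `df_years` layout from the Statistical Analysis sketch:

```python
# Mean of a 0/1 fill flag within each follow-up year = proportion of
# patients filling at least one prescription of that class in that year
# (Figure 1A); LABA and LAMA flags, and the canister counts for panel B,
# would be aggregated the same way.
fig1a_ics = df_years.groupby("years_since_dx")["filled_ics"].mean()
print(fig1a_ics)
```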
The trends for combination therapies are illustrated in Figure 2. In the first year of diagnosis, 30.7% of patients received ICS+LABA (panel A), and on average 11.0% of COPD patient-time was covered by these combination therapies (panel B). The second most common combination therapy was triple therapy (LAMA+LABA+ICS), taken by 6.1% of patients in their first year of diagnosis and covering 2.0% of their time in the first follow-up year. Other patterns of combination therapies were not common. Among patients with 12–15 years of COPD history, on average 58.5% received combination therapies, with an MPR of 33.0%. Both the proportion of patients receiving combination therapies and the average MPR increased for ICS+LABA and triple therapy over the time course of COPD.

Figure 2. Trends in the proportion of patients on combination therapies (A) and average medication possession ratio (B) over the time course of COPD. Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists; ICS/LABA/LAMA, triple therapy.

Association Between Filled Prescriptions After an Outpatient Visit and Exacerbation History: During follow-up, 260,164 eligible outpatient visits were recorded that were followed by a filled prescription for an inhaled medication; 85% of these visits were to GPs.
In 79,820 (30.7%) of the visits, the patient was classified as a frequent exacerbator given their 12-month exacerbation history (Table 1). The frequencies (as percentages) of the different inhaled therapies prescribed for COPD patients, stratified by frequent-exacerbator status, are provided in Figure 3. For non-frequent exacerbators, the most common treatment was ICS+LABA (14.5%), followed by ICS alone (5.8%). Outpatient visits among frequent exacerbators were most commonly followed by filled prescriptions for ICS+LABA (18.9%) and triple therapy (11.6%).

Figure 3. Frequency of long-acting inhaled medication prescription in frequent (dark bars) and non-frequent (light bars) exacerbators. Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists.

Figure 4 provides the adjusted odds ratios (ORs) and 95% confidence intervals (CIs) associating filled prescriptions with frequent-exacerbator status (compared to non-frequent exacerbators), separately for GPs and specialists. All associations were significant except for two: LABA+LAMA therapy for specialists, and monotherapy with LABA for GPs. For both groups, triple therapy showed the strongest association with a positive exacerbation history (OR of 2.68 for GPs and 2.02 for specialists; p<0.0001 for both). The associations were generally stronger (further from 1.00) for GPs, with the exception of monotherapy with ICS or LABA.

Figure 4. Forest plot of odds ratios (ORs) and 95% confidence intervals between frequent-exacerbator status and filled prescriptions for each medication type, separately for GPs and specialists, compared to non-frequent exacerbators. Abbreviations: ICS, inhaled corticosteroids; LABA, long-acting beta-2 adrenoceptor agonists; LAMA, long-acting muscarinic receptor antagonists; GP, general practitioner.

The results of the sensitivity analysis are provided in Supplementary Material – Section 2, Supplementary Figure 1. The trend of medication prescription did not differ substantially from the main analysis, nor did the association between the frequent-exacerbator definition and prescription patterns. Again, ICS+LABA was the most common combination therapy after a physician visit, regardless of exacerbation history. As in the main analysis, the second most common therapy was monotherapy with ICS among non-frequent exacerbators and triple therapy among frequent exacerbators.
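For readers reproducing Figure 4, the adjusted ORs and 95% CIs are obtained by exponentiating the fitted coefficients and their Wald limits; a small helper, assuming a formula-based statsmodels GEE result such as the one sketched in the Methods:

```python
import numpy as np
import pandas as pd

def odds_ratio_table(fit) -> pd.DataFrame:
    """Adjusted ORs with 95% CIs from a fitted logistic GEE result:
    the coefficients and their Wald confidence limits are exponentiated
    off the log-odds scale, ready for a forest plot."""
    ci = fit.conf_int()  # DataFrame of lower/upper bounds on log-odds scale
    return pd.DataFrame({
        "OR": np.exp(fit.params),
        "CI 2.5%": np.exp(ci.iloc[:, 0]),
        "CI 97.5%": np.exp(ci.iloc[:, 1]),
    })
```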
Discussion: Using 15 years of population-based data, we quantified the pattern of long-acting inhaled therapy use in an incident cohort of patients with diagnosed COPD. We demonstrated that ICS was the most commonly prescribed medication class throughout the time course of the disease, with about 50% of patients filling prescriptions for ICS during their first year of diagnosis. While this proportion remained roughly the same over the subsequent follow-up years, the average number of canisters of ICS per patient increased, indicating that the dosage of ICS increased over time among ICS users. The second most commonly used ingredient was LABA, received by a third of the cohort upon diagnosis. Further, we documented that 39.0% of patients received combination therapies in their first year of diagnosis, with ICS+LABA being the most common (30.7%), followed by triple therapy. In the Canadian guidelines, the recommended first-line therapy is monotherapy with LAMA or LABA, and ICS should only be used in individuals who do not respond to dual LABA+LAMA therapy. ICS can be started earlier in the course of the disease in patients with a history of asthma. However, as our sensitivity analysis indicates, the use of ICS was also common in COPD patients without any history of asthma. As such, these results demonstrate potentially significant departures from guideline recommendations, in particular the overuse of ICS and combination therapies.

In addition, our results revealed a difference between GPs and specialists in how exacerbation history is considered when prescribing inhalers. The Canadian (and GOLD) guidelines generally recommend intensifying medications (eg, adding a second or third therapy) among patients who experienced exacerbations in the previous 12 months. Following this recommendation should make intensive treatments (dual therapies and triple therapy) positively associated with exacerbation history. Our results suggest that GPs are more responsive in their treatment choices to a positive exacerbation history.
On the other hand, following the guidelines should cause monotherapies to have a negative association with frequent-exacerbator status, as no patient with frequent exacerbations in the previous year should be on monotherapy. However, only one association for monotherapies was negative (LABA monotherapy among specialists). Overall, this might suggest that treatment step-downs are not generally adhered to among GPs and specialists alike.

Our findings are similar to those of previous studies, which have shown that many ICS prescriptions do not have clear indications.16,23 A study from the Province of Manitoba, Canada (for the 1997–2012 period), reported that ICS was by far the most commonly prescribed first-line therapy in patients with COPD (28.2%), while LAMA (2.1%) and LABA (2.3%) were prescribed much less frequently. The proportion of patients who received ICS, often in combination with LABA, increased from 23.5% to 34.4% during the study period.18 Similar results have also been documented in two other Canadian provinces.24 Consistent with our results, a UK-based study also reported overuse of ICS and undertreatment with LAMA+LABA.25

We are not aware of any previous study that correlates the choice of medication with frequent-exacerbator status and specialty of care. Previous studies have compared GPs and respiratory specialists in terms of post-exacerbation management and outcomes; one showed that although there were no differences in 12-month re-admission and mortality rates between patients, there was a significant difference in the management of acute exacerbations between the two groups.22 One potential mechanism explaining the seemingly higher adherence to guidelines among GPs in our study is that more complicated patients, such as those with more intensive symptoms, unsatisfactory responses to previous therapies, and a higher burden of comorbidity, are generally referred to specialists. This might result in specialists having to weigh several different factors in their choice of treatment, not just what the guidelines recommend. Another possible factor is local healthcare policy. In British Columbia, both LAMA and ICS+LABA are "special authority" medications for GPs, requiring GPs to objectively document fixed airflow obstruction before being able to prescribe them. Obtaining special authority for such prescriptions after an exacerbation might have been easier for GPs (eg, if the patient had already received spirometry during their admission or visit to the Emergency Room), making these prescriptions more strongly correlated with exacerbation history. Overall, the purpose of this study was not to explore the etiology of non-adherence to treatment recommendations. A study that addresses the reasons for the difference in treatment decisions between GPs and specialists is an important one that requires a dedicated design and analysis.

This study has multiple strengths. The use of data from the entire population of a well-defined geographic region reduces the risk of bias due to self-selection into the cohort. Retrieval of filled prescription records eliminates the risk of information bias, recall bias, and the Hawthorne effect. Avoiding recall bias is particularly important for the analysis of the associations between medication use and exacerbation history, as a history of exacerbation might affect patient recall.
The large size of the dataset and several years of follow-up provided high statistical power to evaluate long-term trends. On the other hand, the limitations of this study should also be acknowledged. Filled prescription records do not necessarily equate to the actual medication that was recommended to the patient, as physicians might advise their patients to continue using previously stocked medications. Some physicians might also provide patients with medication samples that are not recorded in the data. One possible bias in this study is the under-reporting of exacerbations. Our algorithm for eliciting exacerbation history might not be completely aligned with physician assessment of exacerbation history.20 Conclusion: Taken together, these results suggest that the utilization of inhaled therapies for COPD is not aligned with current Canadian or international guidelines. This is most clearly demonstrated in the low utilization of LAMA and the high utilization of ICS in newly diagnosed patients, as well as the high utilization of combination therapies as first-line treatment. An interesting finding of our study was that, compared to treatment recommendations by specialists, treatment recommendations made by GPs were potentially more reflective of the exacerbation history, but this might well reflect the more complicated clinical scenarios that specialists manage. Overall, improving adherence to guideline recommendations should be promoted across all specialty groups. Our results also suggest that treatments are not stepped down as recommended by guidelines, potentially resulting in significant levels of overtreatment.
Background: In contemporary guidelines for the management of Chronic Obstructive Pulmonary Disease (COPD), the history of acute exacerbations plays an important role in the choice of long-term inhaled therapies. This study aimed at evaluating population-level trends of filled inhaled prescriptions over the time course of COPD and their relation to the history of exacerbations. Methods: We used administrative health databases in British Columbia, Canada (1997-2015), to create a retrospective incident cohort of individuals with diagnosed COPD. We quantified long-acting inhaled medication prescriptions within each year of follow-up and documented their trend over the time course of COPD. Using generalized linear models, we investigated the association between the frequent exacerbator status (≥2 moderate or ≥1 severe exacerbation(s) in the previous 12 months) and filling a prescription after a physician visit. Results: 132,004 COPD patients were included (mean age 68.6, 49.2% female). The most common medication class during the first year of diagnosis was inhaled corticosteroids (ICS, used by 49.9%), followed by long-acting beta-2 adrenoreceptor agonists (LABA, 31.8%). Long-acting muscarinic receptor antagonists (LAMA) were the least commonly prescribed (10.4%). ICS remained the most common prescription throughout follow-up, being used by approximately 50% of patients during each year. 39.0% of patients received combination inhaled therapies in their first year of diagnosis, with ICS+LABA being the most common (30.7%). The association with exacerbation history was the most pronounced for triple therapy with an odds ratio (OR) of 2.68 for general practitioners and 2.02 for specialists (p<0.001 for both). Such associations were generally stronger among GPs compared with specialists, with the exception of monotherapy with LABA or ICS. Conclusions: We documented low utilization of monotherapies (specifically LAMA) and high utilization of combination therapies (particularly ICS containing). Specialists were less likely to consider exacerbation history in the choice of inhaled therapies compared with GPs.
Introduction: Chronic obstructive pulmonary disease (COPD) is a common disease of the airways characterized by progressive airflow limitation and periods of intensified disease activity referred to as acute exacerbations (or lung attacks).1 COPD is among the leading causes of disease burden in terms of disability-adjusted life years (DALYs) lost.2,3 In Canada, exacerbations of COPD are the leading cause of medical hospital admissions and a major cause of morbidity and mortality.4 In all stages of COPD, risk factor modification (eg, smoking cessation) has a high potential for modifying the course of the disease. However, for most patients, pharmacotherapy is an essential component of disease management.5 In contemporary COPD management guidelines, exacerbations are an important determinant of the initiation and choice of drug therapy. For example, in the current Canadian Thoracic Society (CTS) guidelines, treatment choice is based on the dichotomization of patients into frequent or non-frequent exacerbators,6 with frequent-exacerbator status defined based on a 12-month history (≥2 moderate or ≥1 severe exacerbation(s)). In general, treatment is stepped up (intensified) if the patient becomes a frequent exacerbator, and can be stepped down if the patient is a non-frequent exacerbator. Other disease management strategies, such as the influential Global Initiative for Chronic Obstructive Lung Disease (GOLD)5 and the American Thoracic Society's guidelines,7 adopt a similar approach. Adherence to guideline recommendations on pharmacotherapies is generally low for chronic diseases.8 This might be due to a lack of awareness of guidelines among care providers, or low acceptance of, and adherence to, care providers' recommendations by patients. Evaluating the level of adherence to guidelines in "real world" settings enables the identification of preventable gaps in care. Population-based health databases that capture treatment patterns at the community level without the risk of the Hawthorne effect (individuals modifying their behavior as a reaction to being observed) or recall bias are a unique resource for this purpose. The aims of the current study were to use such databases to describe the trend of filled prescriptions for inhaled medication throughout the time course of COPD (primary objective), and to evaluate the association between prescribed inhaled therapies and the frequent-exacerbator definition (secondary objective).
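The frequent-exacerbator dichotomization described above is a simple rule. A sketch of it follows; the function name and signature are ours, for illustration only, not from the study.

```python
# CTS frequent-exacerbator rule described above: >=2 moderate or
# >=1 severe exacerbation(s) in the previous 12 months.
# Hypothetical helper, not the study's code.
def is_frequent_exacerbator(moderate_12m: int, severe_12m: int) -> bool:
    return moderate_12m >= 2 or severe_12m >= 1

assert is_frequent_exacerbator(moderate_12m=2, severe_12m=0)
assert is_frequent_exacerbator(moderate_12m=0, severe_12m=1)
assert not is_frequent_exacerbator(moderate_12m=1, severe_12m=0)
```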
whole_article_text_length: 9,475
whole_article_abstract_length: 385
other_sections_lengths: [ 2907, 147, 262, 540, 464, 2445, 499, 531, 1061, 140 ]
num_sections: 11
most_frequent_words: [ "copd", "ics", "patients", "laba", "frequent", "medication", "inhaled", "lama", "therapies", "patient" ]
keybert_topics: [ "copd management guidelines", "chronic obstructive pulmonary", "lung attacks copd", "prescribed copd patients", "inhaled therapies copd" ]
null
null
[CONTENT] COPD | prescription | medication | exacerbation | time trend [SUMMARY]
null
null
[CONTENT] COPD | prescription | medication | exacerbation | time trend [SUMMARY]
[CONTENT] COPD | prescription | medication | exacerbation | time trend [SUMMARY]
[CONTENT] COPD | prescription | medication | exacerbation | time trend [SUMMARY]
[CONTENT] Administration, Inhalation | Adrenal Cortex Hormones | Adrenergic beta-2 Receptor Agonists | Aged | British Columbia | Bronchodilator Agents | Drug Prescriptions | Drug Therapy, Combination | Female | Humans | Male | Muscarinic Antagonists | Pulmonary Disease, Chronic Obstructive | Retrospective Studies [SUMMARY]
null
null
[CONTENT] Administration, Inhalation | Adrenal Cortex Hormones | Adrenergic beta-2 Receptor Agonists | Aged | British Columbia | Bronchodilator Agents | Drug Prescriptions | Drug Therapy, Combination | Female | Humans | Male | Muscarinic Antagonists | Pulmonary Disease, Chronic Obstructive | Retrospective Studies [SUMMARY]
[CONTENT] Administration, Inhalation | Adrenal Cortex Hormones | Adrenergic beta-2 Receptor Agonists | Aged | British Columbia | Bronchodilator Agents | Drug Prescriptions | Drug Therapy, Combination | Female | Humans | Male | Muscarinic Antagonists | Pulmonary Disease, Chronic Obstructive | Retrospective Studies [SUMMARY]
[CONTENT] Administration, Inhalation | Adrenal Cortex Hormones | Adrenergic beta-2 Receptor Agonists | Aged | British Columbia | Bronchodilator Agents | Drug Prescriptions | Drug Therapy, Combination | Female | Humans | Male | Muscarinic Antagonists | Pulmonary Disease, Chronic Obstructive | Retrospective Studies [SUMMARY]
[CONTENT] copd management guidelines | chronic obstructive pulmonary | lung attacks copd | prescribed copd patients | inhaled therapies copd [SUMMARY]
null
null
[CONTENT] copd management guidelines | chronic obstructive pulmonary | lung attacks copd | prescribed copd patients | inhaled therapies copd [SUMMARY]
[CONTENT] copd management guidelines | chronic obstructive pulmonary | lung attacks copd | prescribed copd patients | inhaled therapies copd [SUMMARY]
[CONTENT] copd management guidelines | chronic obstructive pulmonary | lung attacks copd | prescribed copd patients | inhaled therapies copd [SUMMARY]
[CONTENT] copd | ics | patients | laba | frequent | medication | inhaled | lama | therapies | patient [SUMMARY]
null
null
[CONTENT] copd | ics | patients | laba | frequent | medication | inhaled | lama | therapies | patient [SUMMARY]
[CONTENT] copd | ics | patients | laba | frequent | medication | inhaled | lama | therapies | patient [SUMMARY]
[CONTENT] copd | ics | patients | laba | frequent | medication | inhaled | lama | therapies | patient [SUMMARY]
[CONTENT] disease | frequent | guidelines | management | chronic | care | copd | adherence | exacerbations | frequent exacerbator [SUMMARY]
null
null
[CONTENT] utilization | recommendations | high utilization | potentially | treatment recommendations | results suggest | suggest | guidelines | results | high [SUMMARY]
[CONTENT] copd | ics | laba | patients | frequent | medication | therapies | lama | date | inhaled [SUMMARY]
[CONTENT] copd | ics | laba | patients | frequent | medication | therapies | lama | date | inhaled [SUMMARY]
[CONTENT] Chronic Obstructive Pulmonary Disease ||| COPD [SUMMARY]
null
null
[CONTENT] ICS ||| [SUMMARY]
[CONTENT] Chronic Obstructive Pulmonary Disease ||| COPD ||| British Columbia | Canada | 1997-2015 ||| each year | COPD ||| linear | ≥1 | the previous 12 months ||| 132,004 | age 68.6 | 49.2% ||| the first year | ICS | 49.9% | LABA | 31.8% ||| 10.4% ||| ICS | approximately 50% | each year ||| 39.0% | their first year | ICS+LABA | 30.7% ||| 2.68 | 2.02 ||| LABA | ICS ||| ICS ||| [SUMMARY]
[CONTENT] Chronic Obstructive Pulmonary Disease ||| COPD ||| British Columbia | Canada | 1997-2015 ||| each year | COPD ||| linear | ≥1 | the previous 12 months ||| 132,004 | age 68.6 | 49.2% ||| the first year | ICS | 49.9% | LABA | 31.8% ||| 10.4% ||| ICS | approximately 50% | each year ||| 39.0% | their first year | ICS+LABA | 30.7% ||| 2.68 | 2.02 ||| LABA | ICS ||| ICS ||| [SUMMARY]
A comparison of different antibiotic regimens for the treatment of naturally acquired shigellosis in rhesus and pigtailed macaques (Macaca mulatta and nemestrina).
36045594
Shigella spp. are common enteric pathogens in captive non-human primates. Treatment of symptomatic infections involves supportive care and antibiotic therapy, typically with an empirical choice of antibiotic.
BACKGROUND
Twenty-four clinically ill, Shigella PCR-positive animals were randomly assigned to one of four treatment groups: single-dose ceftiofur crystalline free acid (CCFA), single-dose azithromycin gavage, a 5-day tapering azithromycin dose, or a 7-day course of enrofloxacin. We hypothesized that all antimicrobial therapies would have similar efficacy.
METHODS
Animals in all groups cleared Shigella, based on fecal PCR, and had resolution of clinical signs 2 weeks after treatment. Eight out of nine clinically ill and PCR-positive animals tested negative by fecal culture.
RESULTS
Single-dose CCFA, single-dose azithromycin, and a 5-day tapering course of azithromycin all performed as well as a 7-day course of enrofloxacin in eliminating Shigella infection. Fecal PCR may be a better diagnostic than culture for Shigella.
CONCLUSIONS
[ "Animals", "Dysentery, Bacillary", "Macaca mulatta", "Macaca nemestrina", "Anti-Bacterial Agents", "Enrofloxacin", "Azithromycin", "Shigella" ]
9805204
INTRODUCTION
Shigella spp. are non-spore-forming, facultatively anaerobic, gram-negative rods that are transmitted fecal-orally and are a major causative agent of dysentery in humans and non-human primates.1,2 Shigella remains an important pathogen worldwide and is a major cause of morbidity among humans, particularly in children under the age of 5.2 In the United States alone, it accounts for an estimated 450 000 infections per year.2 Based on biochemical and serological properties, Shigella can be separated into four serotypes, which include sonnei, boydii, dysenteriae, and flexneri. All four serotypes are important causes of morbidity and mortality in humans, particularly in low-resource areas.2 Captive non-human primates may acquire shigellosis due to their close contact with humans or from conspecifics in endemically infected colonies, with S. flexneri being the most frequent serotype isolated.1,2,3 The symptoms of shigellosis range from acute life-threatening disease to subclinical carrier states.1,4 Acute cases often present as severe dysentery with bloody mucoid diarrhea, anorexia, weight loss, and secondary dehydration, resulting in rapid decline and potentially death.1,4 The development of disease relies on invasion of bacteria into the colonic epithelial cells, resulting in mucosal inflammation, hemorrhage, and necrosis.5 While subclinical carriers appear clinically healthy, they remain an important source of infection to conspecifics and laboratory animal workers.1 Animals may also develop non-enteric forms of the disease, including reactive arthritis, gingivitis, abortion, and air sac infection (Figure 1).4,6,7,8,9 The morbidity and mortality associated with these infections can significantly impact animal welfare and confound their use in biomedical research studies. Figure 1: Pigtailed macaque with Shigella gingivitis. Large, group-housed breeding colonies pose a particular challenge in both the transmission and treatment of Shigella. Complete elimination from a colony is often not practical due to intermittent shedding and asymptomatic or subclinical infections. Unfortunately, antibody production does not provide long-lasting protection from subsequent infections, and animals may be chronically reinfected.4 For this reason, treatment is typically directed toward symptomatic animals, with supportive care being the primary intervention, while antibiotic therapy is often reserved for more severe or protracted cases. Prolonged treatment of individual animals within a breeding colony poses its own unique challenges. Long-term removal of animals for multi-day antibiotic therapy can cause unnecessary stress on the individual animal and instabilities within the group social structure, each of which can potentiate the symptoms and transmission of Shigella within a group via the stress response. Another consideration for cage-side, multi-day oral medication is compliance. The individual being treated may not ingest the full dose or may be unable or unwilling to take oral medications within the group, often due to low social rank. In this study, we sought to identify novel antibiotic treatments for mild to moderate shigellosis that would ensure compliance while minimizing the stress and social disruption often associated with drug administration.
Although enrofloxacin is the common antibiotic of choice for empirical treatment in macaques, additional classes of antibiotics such as macrolides and cephalosporins are reported to be effective in treating shigellosis in humans.10 Specific antibiotics within these classes, such as azithromycin and ceftiofur crystalline free acid (CCFA), can be given as a single dose or an abbreviated course, offering refinement options for veterinarians. Reports in the literature have shown that a single high dose of azithromycin for the treatment of shigellosis in children resulted in increased weight and survivability.11 In human medicine, a 5-day tapering dose of azithromycin, also known by the brand names Zithromax™ and Z-Pak™, can be used for a variety of clinical diseases due to its wide spectrum of activity.12 Additionally, CCFA, a fourth-generation cephalosporin, has been demonstrated to reach therapeutic concentrations in macaques for at least 7 days when given as a convenient one-time dose.13 We compared these three treatments to the most commonly used antibiotic for Shigella treatment in macaques, enrofloxacin. We hypothesized that multiple, more conveniently dosed antimicrobial therapies would be effective in treating naturally acquired shigellosis in rhesus and pigtailed macaques (Macaca mulatta and nemestrina). As a secondary aim, we wanted to compare the diagnostic value of fecal PCR and fecal culture for detection of Shigella within our colony. Based on previous experience utilizing both diagnostic methods, we hypothesized that PCR would be more sensitive than culture for the detection of Shigella.
null
null
RESULTS
Treatment results: Clinically ill animals found positive for Shigella by PCR were randomly assigned to one of the four treatment groups: single-dose ceftiofur crystalline free acid (CCFA), single-dose azithromycin gavage, a 5-day tapering azithromycin dose, and enrofloxacin. At the fecal PCR follow-up, all animals, regardless of treatment group, had cleared infection and clinical signs had resolved, indicating that all four treatments were effective. This offers a refinement in our current treatment of choice for shigellosis in macaques (Figure 5). Figure 5: Fecal and PCR results. Comparison of fecal PCR vs. culture: While not a primary aim of this study, we compared fecal PCR and culture results to further explore our own previous observation that detection of Shigella increased dramatically when our testing lab switched to PCR methods. Interestingly, 8 out of the 9 animals that had paired fecal cultures submitted were negative by culture but positive by PCR testing, indicating that culture is not the ideal diagnostic when Shigella is suspected.
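Using the counts reported above (1 of 9 PCR-positive animals was also culture-positive), an exact binomial interval can bound the implied sensitivity of culture relative to PCR. The sketch below shows that calculation; the choice of a Clopper-Pearson interval is ours, not an analysis performed in the study.

```python
# Sketch: culture sensitivity relative to PCR, from the counts above
# (8 of 9 PCR-positive animals were culture-negative, so 1 of 9 positive).
# The Clopper-Pearson (exact) interval is our own illustrative choice.
from statsmodels.stats.proportion import proportion_confint

culture_positive = 1
pcr_positive = 9

sensitivity = culture_positive / pcr_positive
low, high = proportion_confint(culture_positive, pcr_positive,
                               alpha=0.05, method="beta")  # Clopper-Pearson
print(f"culture sensitivity vs PCR: {sensitivity:.2f} "
      f"(95% CI {low:.2f}-{high:.2f})")
```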
null
null
[ "INTRODUCTION", "Humane care guidelines", "Husbandry", "Animals", "Study design", "Drug administration and dosages", "5‐day tapering dose of azithromycin", "Single‐dose azithromycin gavage", "Single‐dose CCFA", "Enrofloxacin", "Sample collection", "Statistical analysis", "Treatment results", "Comparison of fecal PCR vs. culture" ]
MATERIALS AND METHODS
Humane care guidelines: The Institutional Animal Care and Use Committee of Johns Hopkins University, an AAALAC-accredited institution, approved the protocol under which this study was conducted. Animals and procedures were in compliance with the US Public Health Service's Policy on Humane Care and Use of Laboratory Animals,14 the US Department of Agriculture's Animal Welfare Act,15 and the National Research Council's Guide for the Care and Use of Laboratory Animals.16
Husbandry: All animals were housed at the Johns Hopkins University Research Farm in single-species harem breeding groups or same-sex juvenile/young adult groups. Animal enclosures consisted of runs with concrete flooring or raised corncrib cages. All animals had indoor and outdoor access. Animals were fed a standard commercial diet (rhesus macaques: 5049 Fiber-Plus Monkey Diet, LabDiet; pigtailed macaques: 5038 Monkey Diet, LabDiet) and rotating food enrichment items, including fresh fruits, vegetables, and dried fruit treats. Animals were provided water ad libitum. Annual colony health screening included intradermal tuberculin testing and serology for Macacine herpesvirus 1 (B virus), simian immunodeficiency virus, simian T-cell leukemia virus, and simian retrovirus. All animals were consistently negative on tuberculosis testing and viral serology.
Animals: Fourteen rhesus and ten pigtailed macaques (Macaca mulatta and nemestrina) were enrolled in this study. All animals enrolled met the following inclusion criteria: clinically stable, at least 2.5 kg in weight, testing positive by fecal PCR for Shigella, and showing one or more symptoms of shigellosis, such as mild or moderate diarrhea, dehydration, hematochezia, fever, or gingivitis. The most common clinical signs were dehydration, fecal staining, and/or diarrhea. Animals were excluded from the study if they were less than 2.5 kg in weight, clinically unstable and requiring aggressive supportive care, or immunocompromised. Excluded animals received the Johns Hopkins University standard of care for severe shigellosis, which includes enrofloxacin antibiotic therapy.
Study design: Once inclusion criteria were met, animals were randomized (online software; random.org) into one of four treatment groups (Figure 2): single-dose CCFA, single-dose azithromycin, a 5-day tapering dose of azithromycin, or enrofloxacin (positive control). Animals were sedated with 10-15 mg/kg ketamine intramuscularly (IM) for pre-study physical examinations and drug administration. Animals that received CCFA or a single dose of azithromycin were returned to their groups. Animals were singly housed with visual access to other macaques if they were in the enrofloxacin group receiving injectable medication, or if they were in the 5-day tapering azithromycin group and determined to be low-ranking animals that could not receive oral medications within their group. Animals were monitored daily until treatment was completed and clinical signs resolved. Follow-up PCR was collected at least 2 weeks following treatment (Figure 3). Figure 2: Treatment groups. Figure 3: Study design.
Drug administration and dosages:
5-day tapering dose of azithromycin (Chewables, 120 mg/tablet, Bio-Serv; Azithromycin tablets, 250 mg, TAGI Pharma, Inc.): given orally in a flavored tablet or crushed in a food enrichment vehicle (typically a peanut butter and graham cracker sandwich). Animals less than 5 kg in weight were dosed at 125 mg PO once, followed by 62.5 mg PO q24h for 4 days; animals greater than 5 kg were dosed at 250 mg PO once, followed by 125 mg PO q24h for 4 days.
Single-dose azithromycin gavage (Azithromycin for Oral Suspension, 40 mg/ml; TEVA Pharmaceuticals USA Inc.): azithromycin as a compounded suspension or a crushed tablet mixed with water, administered via oral gavage with a red rubber catheter. The animal was sedated with 10-15 mg/kg ketamine IM; the catheter was measured from the mouth to just below the last rib (where the stomach sits). The monkey was then held in a seated position (i.e., High Fowler's position), and the gavage tube was passed through the oropharynx into the esophagus and subsequently the stomach. The appropriate amount of drug was drawn into a syringe attached to the end of the catheter. The drug was administered with the monkey in the upright position (Figure 4) and followed by 3-5 ml of water. The catheter was then clamped off and withdrawn directly from the animal's mouth. The animal was left in the sitting upright position for 3-5 min, or until it exhibited a robust swallowing reflex, before being returned to the cage for recovery. Animals were monitored for adverse side effects such as vomiting for 24 h, and no adverse effects were reported. Figure 4: (A) Materials needed: saline, azithromycin suspension, syringes, and a red rubber catheter. (B) The red rubber catheter measured to the level of the stomach in a sedated macaque. (C) The catheter is gently passed through a sedated macaque's mouth, down the esophagus, and into the stomach. (D) The animal remains in the High Fowler's position for 3-5 min before being returned to the cage and recovered.
Single-dose CCFA (Excede Sterile Suspension, 100 mg/ml; Zoetis): administered at 20 mg/kg subcutaneously as a one-time dose.
Enrofloxacin (Baytril 100, 100 mg/ml; Bayer): administered at 10 mg/kg intramuscularly q24h for 7 days.
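The tapering schedule above reduces to a loading dose on day 1 followed by four half-size daily doses, with the dose level set by body weight. A hypothetical helper encoding that rule is sketched below; the function itself and the handling of exactly 5 kg are our own assumptions, not part of the study protocol.

```python
# Hypothetical helper for the 5-day tapering azithromycin schedule above:
# day 1 loading dose, then half that dose daily for 4 more days.
# The source specifies <5 kg and >5 kg tiers; treating exactly 5 kg as
# the higher tier is our assumption.
def azithromycin_taper_mg(weight_kg: float) -> list[float]:
    """Daily oral azithromycin doses (mg) for days 1-5."""
    loading = 125.0 if weight_kg < 5.0 else 250.0
    return [loading] + [loading / 2] * 4

print(azithromycin_taper_mg(4.2))  # [125.0, 62.5, 62.5, 62.5, 62.5]
print(azithromycin_taper_mg(7.8))  # [250.0, 125.0, 125.0, 125.0, 125.0]
```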
Sample collection: All animals were anesthetized with 10-15 mg/kg ketamine HCl (Zetamine™, 100 mg/ml; MWI) intramuscularly. For fecal PCR, a cotton-tipped applicator was inserted into the rectum and then placed into a liquid Cary-Blair medium with indicator (C&S Collection Vials Modified Cary Blair, 15 ml; Medical Chemical Corporation) and refrigerated and/or kept on ice until the sample could be delivered to the Johns Hopkins Medical Microbiology Laboratory (Baltimore, MD) for analysis. A parallel fecal culture was collected from select animals for the purpose of comparing results. Feces was collected directly from the rectum of each animal tested, placed immediately into transport medium (CultureSwab Cary-Blair Agar, Becton Dickinson), and submitted to IDEXX Laboratories (Glen Burnie, MD), which cultured samples for the presence of enteric pathogens (Salmonella, Shigella, Campylobacter spp., Yersinia, Aeromonas, Plesiomonas, E. coli O157, and Vibrio). Follow-up fecal PCRs were collected at least 14 days after the completion of treatment. For CCFA, follow-up PCR was collected at least 2 weeks after the last metabolically active dose, which was considered to be 7 days after initial treatment.
Statistical analysis: This study was designed to produce data in a contingency table with the four treatment arms as rows and post-treatment shigellosis status as a dependent, binary outcome variable across two columns. Because all treatment conditions eliminated shigellosis in all animals tested (one of the dependent columns contained all zeros), statistical analysis of the resulting contingency table is neither possible nor warranted.
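As the statistical analysis note explains, a contingency-table test is uninformative when one outcome column is all zeros. What can still be computed is an upper confidence bound on the per-arm failure rate. The sketch below does this for an assumed arm size of 6 (24 animals across 4 arms); both the arm size and the bound itself are our inference and illustration, not an analysis reported by the study.

```python
# Sketch: one-sided 95% upper bound on the failure probability when an
# arm shows 0 failures. Arm size n=6 is assumed (24 animals / 4 arms).
from scipy.stats import beta

n, failures = 6, 0
# Clopper-Pearson upper bound: quantile of Beta(failures + 1, n - failures).
upper = beta.ppf(0.95, failures + 1, n - failures)
print(f"95% upper bound on failure rate: {upper:.2f}")   # ~0.39
print(f"rule-of-three approximation (3/n): {3 / n:.2f}")  # 0.50
```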
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Humane care guidelines", "Husbandry", "Animals", "Study design", "Drug administration and dosages", "5‐day tapering dose of azithromycin", "Single‐dose azithromycin gavage", "Single‐dose CCFA", "Enrofloxacin", "Sample collection", "Statistical analysis", "RESULTS", "Treatment results", "Comparison of fecal PCR vs. culture", "DISCUSSION", "CONFLICT OF INTEREST" ]
[ "\nShigella spp. are non‐spore forming, facultatively anaerobic, gram‐negative rods that are transmitted fecal‐orally and a major causative agent of dysentery in humans and non‐human primates.\n1\n, \n2\n\nShigella remains an important pathogen worldwide and is a major cause of morbidity among humans, particularly in children under the age of 5.\n2\n In the United States alone, it accounts for an estimated 450 000 infections per year.\n2\n Based on biochemical and serological properties, Shigella can be separated into four serotypes, which include sonnei, boydii, dysenteriae, and flexneri. All four serotypes are important causes of morbidity and mortality in humans particularly in low‐resource areas.\n2\n Captive non‐human primates may acquire shigellosis due to their close contact with humans or from conspecifics in endemically infected colonies, with S. flexneri being the most frequent serotype isolated.\n1\n, \n2\n, \n3\n\n\nThe symptoms of shigellosis range from acute life threatening disease to subclinical carrier states.\n1\n, \n4\n Acute cases often present as severe dysentery with bloody mucoid diarrhea, anorexia, weight loss, and secondary dehydration resulting in rapid decline and potentially death.\n1\n, \n4\n The development of disease relies on invasion of bacteria into the colonic epithelial cells resulting in mucosal inflammation, hemorrhage, and necrosis.\n5\n While subclinical carriers appear clinically healthy, they remain an important source of infection to conspecifics and laboratory animal workers.\n1\n Animals may also develop non‐enteric forms of the disease including reactive arthritis, gingivitis, abortion, and air sac infection\n4\n, \n6\n, \n7\n, \n8\n, \n9\n (Figure 1). The morbidity and mortality associated with these infections can significantly impact animal welfare and be a confounder for their use in biomedical research studies.\nPigtailed macaque with Shigella gingivitis\n\nLarge, group‐housed breeding colonies pose a particular challenge in both the transmission and treatment of Shigella. Complete elimination from a colony is often not practical due to intermittent shedding and asymptomatic or subclinical infections. Unfortunately, antibody production does not provide long‐lasting protection from subsequent infections, and animals may be chronically reinfected.\n4\n For this reason, treatment is typically directed toward symptomatic animals with supportive care being the primary intervention while antibiotic therapy is often reserved for more severe or protracted cases. Prolonged treatment of individual animals within a breeding colony poses its own unique challenges. Long‐term removal of animals for multi‐day antibiotic therapy can cause unnecessary stress on the individual animal and instabilities within the group social structure, each of which can potentiate the symptoms and transmission of Shigella within a group via the stress response. Another consideration for administration of cage‐side, multi‐day, oral medication, is compliance. The individual being treated may not ingest the full dose or may be unable or unwilling to take oral medications within the group, often due to low social rank.\nIn this study, we sought to identify novel antibiotic treatments for mild to moderate shigellosis that would ensure compliance while minimizing the stress and social disruption often associated with drug administration. 
Although enrofloxacin is the common antibiotic of choice for empirical treatment in macaques, additional classes of antibiotics such as macrolides and cephalosporins are reported to be effective in treating shigellosis in humans.\n10\n Specific antibiotics within these classes, such as azithromycin and Ceftiofur Crystalline Free Acid (CCFA), can be given as a single dose or abbreviated course and offers refinement options for veterinarians as one‐time dosing or abbreviated courses may be utilized. Reports in the literature have shown that a single, high‐dose azithromycin for the treatment of shigellosis in children resulted in increased weight and survivability.\n11\n In human medicine, a 5‐day tapering dose of azithromycin, also known by brand names Zithromax™ and Z‐Pak™, can be used for a variety of clinical diseases due to its wide spectrum of activity.\n12\n Additionally, CCFA, a fourth generation cephalosporin, has been demonstrated to reach therapeutic concentrations in macaques for at least 7 days when given as a one‐time convenient dose.\n13\n We compared these three treatments to the most used antibiotic for Shigella treatment in macaques, enrofloxacin. We hypothesized that multiple, more conveniently dosed antimicrobial therapies would be effective in treating naturally acquired shigellosis in rhesus and pigtailed macaques (Macaca mulatta and nemestrina). As a secondary aim, we wanted to compare the diagnostic value of fecal PCR and fecal culture for detection of Shigella within our colony. Based on previous experience utilizing both diagnostic methods, we hypothesized that PCR would be more sensitive than culture for the detection of Shigella.", "Humane care guidelines The Institutional Care and Use Committee of Johns Hopkins University, an AAALAC‐accredited institution, approved the protocol in which this study was conducted under. Animals and procedures were in compliance with the US Public Health Service's Policy on Humane Care and Use of Laboratory Animals,\n14\n the US Department of Agriculture's Animal Welfare Act,\n15\n and the National Research Council's Guide for the Care and Use of Laboratory Animals.\n16\n\n\nThe Institutional Care and Use Committee of Johns Hopkins University, an AAALAC‐accredited institution, approved the protocol in which this study was conducted under. Animals and procedures were in compliance with the US Public Health Service's Policy on Humane Care and Use of Laboratory Animals,\n14\n the US Department of Agriculture's Animal Welfare Act,\n15\n and the National Research Council's Guide for the Care and Use of Laboratory Animals.\n16\n\n\nHusbandry All animals were housed at the Johns Hopkins University Research Farm in single‐species harem breeding groups or same sex juvenile/young adult groups. Animal enclosures consisted of runs with concrete flooring or raised corncrib cages. All animals had indoor and outdoor access. Animals were fed a standard commercial diet (rhesus macaques: 5049 Fiber‐Plus Monkey Diet, LabDiet); (pigtailed macaques: 5038 Monkey Diet, LabDiet) and rotating food enrichment items including fresh fruits, vegetables, and dried fruit treats. Animals were provided water ad libitum. Annual colony health screening included intradermal tuberculin testing and serology for Macacine herpesvirus 1 (B virus), simian immunodeficiency virus, simian T‐cell leukemia virus, and simian retrovirus. 
All animals were consistently negative on tuberculosis testing and viral serology.\nAll animals were housed at the Johns Hopkins University Research Farm in single‐species harem breeding groups or same sex juvenile/young adult groups. Animal enclosures consisted of runs with concrete flooring or raised corncrib cages. All animals had indoor and outdoor access. Animals were fed a standard commercial diet (rhesus macaques: 5049 Fiber‐Plus Monkey Diet, LabDiet); (pigtailed macaques: 5038 Monkey Diet, LabDiet) and rotating food enrichment items including fresh fruits, vegetables, and dried fruit treats. Animals were provided water ad libitum. Annual colony health screening included intradermal tuberculin testing and serology for Macacine herpesvirus 1 (B virus), simian immunodeficiency virus, simian T‐cell leukemia virus, and simian retrovirus. All animals were consistently negative on tuberculosis testing and viral serology.\nAnimals Fourteen rhesus and ten pigtailed macaques (Macaca mulatta and nemestrina) were enrolled in this study. All animals enrolled met the following inclusion criteria: clinically stable and at least 2.5 kg in weight, testing positive by fecal PCR for Shigella and showing one or more symptoms of shigellosis, such as mild or moderate diarrhea, dehydration, hematochezia, fever, or gingivitis. The most common clinical signs seen were dehydration, fecal staining, and/or diarrhea. Animals were excluded from the study if they were less than 2.5 kg in weight, were clinically unstable requiring aggressive supportive care, or immunocompromised. Excluded animals received John Hopkins University standard of care treatment for severe shigellosis, which includes enrofloxacin antibiotic therapy.\nFourteen rhesus and ten pigtailed macaques (Macaca mulatta and nemestrina) were enrolled in this study. All animals enrolled met the following inclusion criteria: clinically stable and at least 2.5 kg in weight, testing positive by fecal PCR for Shigella and showing one or more symptoms of shigellosis, such as mild or moderate diarrhea, dehydration, hematochezia, fever, or gingivitis. The most common clinical signs seen were dehydration, fecal staining, and/or diarrhea. Animals were excluded from the study if they were less than 2.5 kg in weight, were clinically unstable requiring aggressive supportive care, or immunocompromised. Excluded animals received John Hopkins University standard of care treatment for severe shigellosis, which includes enrofloxacin antibiotic therapy.\nStudy design Once inclusion criteria were met animals were randomized (online software; random.org) into one of four treatment groups (Figure 2): Single‐dose CCFA, single‐dose azithromycin, 5‐day tapering dose of azithromycin, or enrofloxacin (positive control). Animals were sedated with 10–15 mg/kg ketamine intramuscularly (IM) for pre‐study physical examinations and drug administration. Animals that received CCFA and a single dose of azithromycin were returned to their groups. Animals were singly housed with visual access to other macaques if they were in the enrofloxacin group receiving injectable medication or the 5‐day tapering dose azithromycin group and determined to be low‐ranking animals that could not receive oral medications within their group. Animals were monitored daily until treatment was completed and clinical signs resolved. 
Follow‐up PCR was collected at least 2 weeks following treatment (Figure 3).
Treatment groups
Study design
Drug administration and dosages
5‐day tapering dose of azithromycin (Chewables, 120 mg/tablet; Bio‐Serv) (Azithromycin tablets, 250 mg; TAGI Pharma, Inc.). Azithromycin was given orally in a flavored tablet or crushed in a food enrichment vehicle (typically a peanut butter and graham cracker sandwich). Animals less than 5 kg were dosed at 125 mg PO once, followed by 62.5 mg PO q24h × 4 days; animals greater than 5 kg were dosed at 250 mg PO once, followed by 125 mg PO q24h × 4 days.
Single‐dose azithromycin gavage (Azithromycin for Oral Suspension, 40 mg/ml; TEVA Pharmaceuticals USA Inc.). Azithromycin, as a compounded suspension or a crushed tablet mixed with water, was administered via oral gavage with a red rubber catheter. The animal was sedated with 10–15 mg/kg ketamine IM, and the catheter was measured from the mouth to just below the last rib (where the stomach sits). The monkey was then held in a seated position (i.e., High Fowler's position), and the gavage tube was passed through the oropharynx into the esophagus and subsequently the stomach. The appropriate amount of drug was drawn into a syringe attached to the end of the catheter. The drug was administered with the monkey in the upright position (Figure 4) and followed by 3–5 ml of water. The catheter was then clamped off and withdrawn directly from the animal's mouth. The animal was left sitting upright for 3–5 min, or until it exhibited a robust swallowing reflex, before being returned to the cage for recovery. Animals were monitored for 24 h for adverse effects such as vomiting; none were reported.
(A) Materials needed: saline, azithromycin suspension, syringes, and a red rubber catheter. (B) The red rubber catheter being measured to the level of the stomach in a sedated macaque. (C) The red rubber catheter is gently passed through a sedated macaque's mouth, down the esophagus, and into the stomach. (D) The animal remains in the High Fowler's position for 3–5 min before being returned to the cage and recovered.
Single‐dose CCFA (Excede Sterile Suspension, 100 mg/ml; Zoetis). Administered at 20 mg/kg subcutaneously as a one‐time dose.
Enrofloxacin (Baytril 100, 100 mg/ml; Bayer). Administered at 10 mg/kg intramuscularly q24h for 7 days.
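As an illustration of the dose arithmetic above, the following minimal sketch (hypothetical helper names, not from the paper) computes the weight‐banded oral azithromycin taper and converts the mg/kg injectable doses into syringe draw volumes for the stated formulations.

```python
# A minimal, hypothetical sketch (helper names are ours, not the paper's)
# of the dose arithmetic described above.

def azithromycin_taper(weight_kg):
    """Return the 5-day oral azithromycin schedule (mg) by weight band."""
    if weight_kg < 5.0:
        return [125.0] + [62.5] * 4   # loading dose, then four daily doses
    return [250.0] + [125.0] * 4

def draw_volume_ml(dose_mg_per_kg, weight_kg, concentration_mg_per_ml):
    """Convert a mg/kg dose into a syringe draw volume for a formulation."""
    return round(dose_mg_per_kg * weight_kg / concentration_mg_per_ml, 2)

# Example for a hypothetical 4.2 kg macaque:
print(azithromycin_taper(4.2))        # [125.0, 62.5, 62.5, 62.5, 62.5]
print(draw_volume_ml(20, 4.2, 100))   # CCFA at 100 mg/ml -> 0.84 ml SC
print(draw_volume_ml(10, 4.2, 100))   # enrofloxacin at 100 mg/ml -> 0.42 ml IM
```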
Sample collection
All animals were anesthetized with 10–15 mg/kg ketamine HCl (Zetamine™, 100 mg/ml; MWI) intramuscularly. For fecal PCR, a cotton‐tipped applicator was inserted into the rectum and then placed into liquid Cary‐Blair transport medium with indicator (C&S Collection Vials Modified Cary Blair, Single Vial‐Orange Cap, 15 ml; Medical Chemical Corporation) and refrigerated and/or kept on ice until the sample could be delivered to the Johns Hopkins Medical Microbiology Laboratory (Baltimore, MD) for analysis. A parallel fecal culture was collected from select animals for the purpose of comparing results. Feces were collected directly from the rectum of each animal tested, placed immediately into transport medium (CultureSwab Cary‐Blair Agar; Becton Dickinson), and submitted to IDEXX Laboratories (Glen Burnie, MD), which cultured samples for the presence of enteric pathogens (Salmonella, Shigella, Campylobacter spp., Yersinia, Aeromonas, Plesiomonas, E. coli O157, and Vibrio). Follow‐up fecal PCRs were collected at least 14 days after the completion of treatment. For CCFA, the follow‐up PCR was collected at least 2 weeks after the last metabolically active dose, which was considered to be 7 days after initial administration.
Statistical analysis
This study was designed to produce data in a contingency table with the four treatment arms as rows and post‐treatment shigellosis status as a dependent, binary outcome variable across two columns. Because all treatment conditions eliminated shigellosis in all animals tested, that is, one of the dependent columns contained all zeros, statistical analysis of the resulting contingency table is neither possible nor warranted.
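The degenerate table can be made concrete with a short sketch. The per‐arm counts below are hypothetical (the paper reports only that every animal cleared; six per arm is assumed from 24 animals in four arms), and the behavior shown assumes a recent SciPy version.

```python
# Hypothetical per-arm counts; rows = treatment arms,
# columns = [cleared infection, still PCR-positive].
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [6, 0],  # single-dose CCFA
    [6, 0],  # single-dose azithromycin gavage
    [6, 0],  # 5-day tapering azithromycin
    [6, 0],  # enrofloxacin (positive control)
])

try:
    chi2_contingency(table)
except ValueError as err:
    # The all-zero "still positive" column makes every expected frequency in
    # that column zero, so the association test is undefined -- mirroring the
    # authors' note that analysis of the table is not possible or warranted.
    print("Association test undefined:", err)
```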
Humane care guidelines
The Institutional Animal Care and Use Committee of Johns Hopkins University, an AAALAC‐accredited institution, approved the protocol under which this study was conducted. Animals and procedures were in compliance with the US Public Health Service's Policy on Humane Care and Use of Laboratory Animals,14 the US Department of Agriculture's Animal Welfare Act,15 and the National Research Council's Guide for the Care and Use of Laboratory Animals.16
Husbandry
All animals were housed at the Johns Hopkins University Research Farm in single‐species harem breeding groups or same‐sex juvenile/young adult groups. Animal enclosures consisted of runs with concrete flooring or raised corncrib cages. All animals had indoor and outdoor access. Animals were fed a standard commercial diet (rhesus macaques: 5049 Fiber‐Plus Monkey Diet, LabDiet; pigtailed macaques: 5038 Monkey Diet, LabDiet) and rotating food enrichment items including fresh fruits, vegetables, and dried fruit treats. Animals were provided water ad libitum. Annual colony health screening included intradermal tuberculin testing and serology for Macacine herpesvirus 1 (B virus), simian immunodeficiency virus, simian T‐cell leukemia virus, and simian retrovirus. All animals were consistently negative on tuberculosis testing and viral serology.
Animals
Fourteen rhesus and ten pigtailed macaques (Macaca mulatta and Macaca nemestrina) were enrolled in this study. All animals enrolled met the following inclusion criteria: clinically stable and at least 2.5 kg in weight, testing positive by fecal PCR for Shigella, and showing one or more symptoms of shigellosis, such as mild or moderate diarrhea, dehydration, hematochezia, fever, or gingivitis. The most common clinical signs were dehydration, fecal staining, and/or diarrhea. Animals were excluded from the study if they were less than 2.5 kg in weight, clinically unstable requiring aggressive supportive care, or immunocompromised. Excluded animals received the Johns Hopkins University standard of care for severe shigellosis, which includes enrofloxacin antibiotic therapy.
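For a concrete picture of how eligible animals could be allocated to the four arms, here is a minimal sketch. It is hypothetical: the study randomized via random.org, and a balanced round‐robin allocation is assumed here purely for illustration.

```python
# A minimal sketch of group assignment (hypothetical: the study used
# random.org; balanced round-robin dealing is assumed here).
import random

ARMS = ["single-dose CCFA", "single-dose azithromycin gavage",
        "5-day tapering azithromycin", "enrofloxacin (positive control)"]

def randomize(animal_ids, seed=None):
    """Shuffle eligible animals, then deal them round-robin into the arms."""
    rng = random.Random(seed)
    shuffled = list(animal_ids)
    rng.shuffle(shuffled)
    return {animal: ARMS[i % len(ARMS)] for i, animal in enumerate(shuffled)}

# 24 enrolled animals -> six per arm under round-robin dealing.
assignments = randomize([f"NHP{n:02d}" for n in range(1, 25)], seed=1)
```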
RESULTS
Treatment results
Clinically ill animals that tested positive for Shigella by PCR were randomly assigned to one of the four treatment groups: single‐dose ceftiofur crystalline free acid (CCFA), single‐dose azithromycin gavage, a 5‐day tapering azithromycin dose, or enrofloxacin. At the fecal PCR follow‐up, all animals, regardless of treatment group, had cleared the infection and their clinical signs had resolved, indicating that all four treatments were effective.
This offers a refinement of our current treatment of choice for shigellosis in macaques (Figure 5).
Fecal and PCR results
Comparison of fecal PCR vs. culture
While not a primary aim of this study, we compared fecal PCR and culture results to further explore our previous observation that detection of Shigella increased dramatically when our testing laboratory switched to PCR methods. Notably, 8 of the 9 animals with paired fecal cultures were negative by culture but positive by PCR, indicating that culture is not the ideal diagnostic when Shigella is suspected.
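To put the paired‐testing result in perspective, the sketch below (hypothetical code, not from the paper) treats PCR as the reference standard and computes culture's relative sensitivity with an exact binomial confidence interval; the binomtest call assumes a recent SciPy version.

```python
from scipy.stats import binomtest

# Paired results reported above: culture detected 1 of 9 PCR-positive animals.
detected, paired = 1, 9
ci = binomtest(detected, paired).proportion_ci(confidence_level=0.95,
                                               method="exact")
print(f"Culture sensitivity relative to PCR: {detected / paired:.0%} "
      f"(95% CI {ci.low:.0%}-{ci.high:.0%})")
# -> 11% (95% CI roughly 0%-48%), underscoring the preference for PCR here.
```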
DISCUSSION
This study demonstrates multiple convenient options for the treatment of shigellosis in macaques, as well as the limitations of culturing Shigella. Single‐dose CCFA, single‐dose azithromycin, and a 5‐day tapering course of azithromycin all performed as well as a 7‐day course of enrofloxacin in eliminating Shigella infection, as confirmed by resolution of clinical signs and a negative fecal PCR at least 14 days post‐treatment in every animal on study. While these uniform results are encouraging from the standpoint of treatment efficacy, they defy analysis by the means anticipated in our study design, that is, a 4 × 2 contingency table and Fisher's exact test. During the design phase we considered the possibility of such a result, though it seemed remote. The inclusion of a control condition, that is, no therapy, undoubtedly would have solved this statistical problem. In our estimation, however, the benefit of ensuring a p value in the face of potential, and subsequently realized, perfect treatment efficacy did not outweigh the ethical cost of delaying effective treatment in animals showing symptoms of shigellosis.
Interestingly, only one of our cultures, the traditional method of diagnosis, came back positive for Shigella. Cultures have limited sensitivity and are dependent on various factors, including stage of disease, storage of the culturette, and bacterial load.1, 5, 17 Because of this limited sensitivity, Shigella cases are often underreported and underestimated.5 Historically, animals in our primate colony at Johns Hopkins rarely cultured positive for Shigella, but when our microbiology laboratory switched to PCR testing we found that the organism was endemic within our colony. PCR has proven to be a more sensitive and effective method of diagnosis and is our recommended diagnostic of choice at this time.17 One limitation of PCR detection is that the DNA detected may come from either a living or a dead organism; an animal without clinical signs therefore may not need to be treated. It is also important to take a direct rectal swab rather than a stool sample from the cage floor, as environmental contamination could confound the results.
Historically, macaques infected with shigellosis have been treated with a variety of injectables such as penicillin and, more recently, enrofloxacin.17, 18 Enrofloxacin is the common empirical choice when shigellosis is suspected but is prone to antimicrobial resistance within a colony.18 Having multiple treatment options for Shigella offers refinements to traditional treatments, as animals may remain in their social groups and may receive oral medications rather than injectables. In this study, we have shown that all four antibiotic regimens are effective treatments, but each facility should consider which treatment is best for the animal based on its life stage, severity of disease, and housing. Antimicrobial resistance should be considered when treating groups of animals, given that resistance is on the rise and resistant Shigella has been documented.19 Mechanisms of drug resistance in Shigella spp. include decreased cellular permeability, extrusion of drugs by active efflux pumps, overexpression of drug‐modifying and inactivating enzymes, and target modification by mutation.19 Having multiple classes of antibiotics with which to treat Shigella within a colony may be critical given the organism's tendency to develop resistance.18
Diagnosing and treating Shigella spp. within the vivarium is imperative not only for animal health, as Shigella is a significant cause of morbidity and mortality in captive non‐human primates, but also because the organism's zoonotic potential presents an occupational health concern for facility staff.1, 4 Due to its highly infectious nature, close contact with non‐human primates has led to small outbreaks among staff at biomedical research facilities and zoos.20, 21 More concerningly, transmissible drug‐resistant Shigella has been isolated from pet monkeys and their owners.22 Prevention of laboratory‐associated infections should be a top priority given the zoonotic and reverse‐zoonotic nature of the disease. It is therefore imperative not only to wear appropriate personal protective equipment when working with animals but also to detect and efficiently treat Shigella within a research colony. Eradication within a colony would be extremely difficult, if not impossible; early detection via clinical signs and prompt treatment are therefore imperative for health and welfare, increased survivability, and decreased transmission within a colony. Although vaccines are in development, none is currently available to prevent shigellosis. Given these limitations, our study emphasized treatment rather than eradication. In this study, we have demonstrated that multiple, more conveniently dosed antimicrobial therapies are effective in treating naturally acquired shigellosis in rhesus and pigtailed macaques (Macaca mulatta and Macaca nemestrina) and that detection by fecal PCR is more sensitive than culture.
CONFLICT OF INTEREST
The authors have no conflict of interest to declare.
[ null, "materials-and-methods", null, null, null, null, null, null, null, null, null, null, null, "results", null, null, "discussion", "COI-statement" ]
[ "non‐human primate", "\nShigella\n", "shigella antibiotic" ]
INTRODUCTION
Shigella spp. are non‐spore‐forming, facultatively anaerobic, gram‐negative rods that are transmitted via the fecal–oral route and are a major causative agent of dysentery in humans and non‐human primates.1, 2 Shigella remains an important pathogen worldwide and a major cause of morbidity in humans, particularly in children under the age of 5.2 In the United States alone, it accounts for an estimated 450 000 infections per year.2 Based on biochemical and serological properties, Shigella can be separated into four serotypes: sonnei, boydii, dysenteriae, and flexneri. All four serotypes are important causes of morbidity and mortality in humans, particularly in low‐resource areas.2 Captive non‐human primates may acquire shigellosis through close contact with humans or from conspecifics in endemically infected colonies, with S. flexneri being the most frequently isolated serotype.1, 2, 3 The symptoms of shigellosis range from acute, life‐threatening disease to subclinical carrier states.1, 4 Acute cases often present as severe dysentery with bloody mucoid diarrhea, anorexia, weight loss, and secondary dehydration, resulting in rapid decline and potentially death.1, 4 The development of disease relies on invasion of the bacteria into colonic epithelial cells, resulting in mucosal inflammation, hemorrhage, and necrosis.5 While subclinical carriers appear clinically healthy, they remain an important source of infection for conspecifics and laboratory animal workers.1 Animals may also develop non‐enteric forms of the disease, including reactive arthritis, gingivitis, abortion, and air sac infection4, 6, 7, 8, 9 (Figure 1). The morbidity and mortality associated with these infections can significantly impact animal welfare and confound the animals' use in biomedical research studies.
Pigtailed macaque with Shigella gingivitis
Large, group‐housed breeding colonies pose a particular challenge for both the transmission and the treatment of Shigella. Complete elimination from a colony is often not practical due to intermittent shedding and asymptomatic or subclinical infections. Unfortunately, antibody production does not provide long‐lasting protection from subsequent infections, and animals may be chronically reinfected.4 For this reason, treatment is typically directed toward symptomatic animals, with supportive care as the primary intervention, while antibiotic therapy is often reserved for more severe or protracted cases. Prolonged treatment of individual animals within a breeding colony poses its own unique challenges. Long‐term removal of animals for multi‐day antibiotic therapy can cause unnecessary stress for the individual animal and instability within the group social structure, each of which can potentiate the symptoms and transmission of Shigella within a group via the stress response. Another consideration for cage‐side, multi‐day oral medication is compliance: the individual being treated may not ingest the full dose or may be unable or unwilling to take oral medications within the group, often due to low social rank. In this study, we sought to identify novel antibiotic treatments for mild to moderate shigellosis that would ensure compliance while minimizing the stress and social disruption often associated with drug administration.
Although enrofloxacin is the common antibiotic of choice for empirical treatment in macaques, additional classes of antibiotics, such as macrolides and cephalosporins, are reported to be effective in treating shigellosis in humans.10 Specific antibiotics within these classes, such as azithromycin and ceftiofur crystalline free acid (CCFA), can be given as a single dose or an abbreviated course, offering refinement options for veterinarians. Reports in the literature have shown that a single high dose of azithromycin for the treatment of shigellosis in children resulted in increased weight and survivability.11 In human medicine, a 5‐day tapering dose of azithromycin, also known by the brand names Zithromax™ and Z‐Pak™, is used for a variety of clinical diseases due to its wide spectrum of activity.12 Additionally, CCFA, a third‐generation cephalosporin, has been demonstrated to reach therapeutic concentrations in macaques for at least 7 days when given as a convenient one‐time dose.13 We compared these three treatments with the antibiotic most commonly used for Shigella treatment in macaques, enrofloxacin. We hypothesized that multiple, more conveniently dosed antimicrobial therapies would be effective in treating naturally acquired shigellosis in rhesus and pigtailed macaques (Macaca mulatta and Macaca nemestrina). As a secondary aim, we wanted to compare the diagnostic value of fecal PCR and fecal culture for detection of Shigella within our colony. Based on previous experience with both diagnostic methods, we hypothesized that PCR would be more sensitive than culture for the detection of Shigella.
All animals were housed at the Johns Hopkins University Research Farm in single‐species harem breeding groups or same sex juvenile/young adult groups. Animal enclosures consisted of runs with concrete flooring or raised corncrib cages. All animals had indoor and outdoor access. Animals were fed a standard commercial diet (rhesus macaques: 5049 Fiber‐Plus Monkey Diet, LabDiet); (pigtailed macaques: 5038 Monkey Diet, LabDiet) and rotating food enrichment items including fresh fruits, vegetables, and dried fruit treats. Animals were provided water ad libitum. Annual colony health screening included intradermal tuberculin testing and serology for Macacine herpesvirus 1 (B virus), simian immunodeficiency virus, simian T‐cell leukemia virus, and simian retrovirus. All animals were consistently negative on tuberculosis testing and viral serology. Animals Fourteen rhesus and ten pigtailed macaques (Macaca mulatta and nemestrina) were enrolled in this study. All animals enrolled met the following inclusion criteria: clinically stable and at least 2.5 kg in weight, testing positive by fecal PCR for Shigella and showing one or more symptoms of shigellosis, such as mild or moderate diarrhea, dehydration, hematochezia, fever, or gingivitis. The most common clinical signs seen were dehydration, fecal staining, and/or diarrhea. Animals were excluded from the study if they were less than 2.5 kg in weight, were clinically unstable requiring aggressive supportive care, or immunocompromised. Excluded animals received John Hopkins University standard of care treatment for severe shigellosis, which includes enrofloxacin antibiotic therapy. Fourteen rhesus and ten pigtailed macaques (Macaca mulatta and nemestrina) were enrolled in this study. All animals enrolled met the following inclusion criteria: clinically stable and at least 2.5 kg in weight, testing positive by fecal PCR for Shigella and showing one or more symptoms of shigellosis, such as mild or moderate diarrhea, dehydration, hematochezia, fever, or gingivitis. The most common clinical signs seen were dehydration, fecal staining, and/or diarrhea. Animals were excluded from the study if they were less than 2.5 kg in weight, were clinically unstable requiring aggressive supportive care, or immunocompromised. Excluded animals received John Hopkins University standard of care treatment for severe shigellosis, which includes enrofloxacin antibiotic therapy. Study design Once inclusion criteria were met animals were randomized (online software; random.org) into one of four treatment groups (Figure 2): Single‐dose CCFA, single‐dose azithromycin, 5‐day tapering dose of azithromycin, or enrofloxacin (positive control). Animals were sedated with 10–15 mg/kg ketamine intramuscularly (IM) for pre‐study physical examinations and drug administration. Animals that received CCFA and a single dose of azithromycin were returned to their groups. Animals were singly housed with visual access to other macaques if they were in the enrofloxacin group receiving injectable medication or the 5‐day tapering dose azithromycin group and determined to be low‐ranking animals that could not receive oral medications within their group. Animals were monitored daily until treatment was completed and clinical signs resolved. Follow‐up PCR was collected at least 2 weeks following treatment (Figure 3). 
Treatment groups Study design Once inclusion criteria were met animals were randomized (online software; random.org) into one of four treatment groups (Figure 2): Single‐dose CCFA, single‐dose azithromycin, 5‐day tapering dose of azithromycin, or enrofloxacin (positive control). Animals were sedated with 10–15 mg/kg ketamine intramuscularly (IM) for pre‐study physical examinations and drug administration. Animals that received CCFA and a single dose of azithromycin were returned to their groups. Animals were singly housed with visual access to other macaques if they were in the enrofloxacin group receiving injectable medication or the 5‐day tapering dose azithromycin group and determined to be low‐ranking animals that could not receive oral medications within their group. Animals were monitored daily until treatment was completed and clinical signs resolved. Follow‐up PCR was collected at least 2 weeks following treatment (Figure 3). Treatment groups Study design Drug administration and dosages 5‐day tapering dose of azithromycin (Chewables, 120 mg/tablet; Bio‐Serv) (Azithromycin tablets, 250 mg; TAGI Pharma, Inc.). Five‐day tapering dose of Azithromycin given orally in a flavored tablet or crushed in a food enrichment vehicle (typically a peanut butter and graham cracker sandwich). Dosed at 125 mg PO once, followed by 62.5 mg PO q 24 h × 4 days for animals less than 5 kg in weight. Animals greater than 5 kg were dosed at 250 mg PO once, followed by 125 mg PO q24h × 4 days. (Chewables, 120 mg/tablet; Bio‐Serv) (Azithromycin tablets, 250 mg; TAGI Pharma, Inc.). Five‐day tapering dose of Azithromycin given orally in a flavored tablet or crushed in a food enrichment vehicle (typically a peanut butter and graham cracker sandwich). Dosed at 125 mg PO once, followed by 62.5 mg PO q 24 h × 4 days for animals less than 5 kg in weight. Animals greater than 5 kg were dosed at 250 mg PO once, followed by 125 mg PO q24h × 4 days. Single‐dose azithromycin gavage (Azithromycin for Oral Suspension, 40 mg/ml; TEVA Pharmaceuticals USA Inc.) Azithromycin as a compounded suspension or tablet crushed and mixed with water administered via oral gavage with a red rubber catheter. The animal was sedated with 10–15 mg/kg ketamine IM; a red rubber catheter was measured from the mouth to just below the last rib (where the stomach sits). The monkey was then held in a seated position (i.e., High Fowler's position), and the gavage tube was passed through the oropharynx into the esophagus and subsequently the stomach. The appropriate amount of drug was pulled up into a syringe and attached to the end of the red rubber catheter. The drug was administered with the monkey in the upright position (Figure 4) and followed by 3–5 ml of water. The red rubber was then clamped off and pulled directly out of the animals' mouth. The animal was then left in the sitting upright position for 3–5 min or until the animal exhibited a robust swallowing reflex before being returned to the cage for recovery. Animals were monitored for adverse side effects such as vomiting for 24 h, and no adverse effects were reported. (A) Materials needed: saline, azithromycin suspension, syringes, and a red rubber catheter. (B) The red rubber catheter being measured to the level of the stomach in a sedated macaque. (C) The red rubber is gently passed through a sedated macaque's mouth down the esophagus and into the stomach. (D) The animal remains in the High Fowler's position for 3–5 min before being returned to the cage and recovered. 
(Azithromycin for Oral Suspension, 40 mg/ml; TEVA Pharmaceuticals USA Inc.) Azithromycin as a compounded suspension or tablet crushed and mixed with water administered via oral gavage with a red rubber catheter. The animal was sedated with 10–15 mg/kg ketamine IM; a red rubber catheter was measured from the mouth to just below the last rib (where the stomach sits). The monkey was then held in a seated position (i.e., High Fowler's position), and the gavage tube was passed through the oropharynx into the esophagus and subsequently the stomach. The appropriate amount of drug was pulled up into a syringe and attached to the end of the red rubber catheter. The drug was administered with the monkey in the upright position (Figure 4) and followed by 3–5 ml of water. The red rubber was then clamped off and pulled directly out of the animals' mouth. The animal was then left in the sitting upright position for 3–5 min or until the animal exhibited a robust swallowing reflex before being returned to the cage for recovery. Animals were monitored for adverse side effects such as vomiting for 24 h, and no adverse effects were reported. (A) Materials needed: saline, azithromycin suspension, syringes, and a red rubber catheter. (B) The red rubber catheter being measured to the level of the stomach in a sedated macaque. (C) The red rubber is gently passed through a sedated macaque's mouth down the esophagus and into the stomach. (D) The animal remains in the High Fowler's position for 3–5 min before being returned to the cage and recovered. Single‐dose CCFA (Excede Sterile Suspension, 100 mg/ml; Zoetis). Administered at 20 mg/kg subcutaneously as a one‐time dose. (Excede Sterile Suspension, 100 mg/ml; Zoetis). Administered at 20 mg/kg subcutaneously as a one‐time dose. Enrofloxacin (Baytril 100, 100 mg/ml; Bayer). Administered as 10 mg/kg intramuscularly q24 h for 7 days. (Baytril 100, 100 mg/ml; Bayer). Administered as 10 mg/kg intramuscularly q24 h for 7 days. 5‐day tapering dose of azithromycin (Chewables, 120 mg/tablet; Bio‐Serv) (Azithromycin tablets, 250 mg; TAGI Pharma, Inc.). Five‐day tapering dose of Azithromycin given orally in a flavored tablet or crushed in a food enrichment vehicle (typically a peanut butter and graham cracker sandwich). Dosed at 125 mg PO once, followed by 62.5 mg PO q 24 h × 4 days for animals less than 5 kg in weight. Animals greater than 5 kg were dosed at 250 mg PO once, followed by 125 mg PO q24h × 4 days. (Chewables, 120 mg/tablet; Bio‐Serv) (Azithromycin tablets, 250 mg; TAGI Pharma, Inc.). Five‐day tapering dose of Azithromycin given orally in a flavored tablet or crushed in a food enrichment vehicle (typically a peanut butter and graham cracker sandwich). Dosed at 125 mg PO once, followed by 62.5 mg PO q 24 h × 4 days for animals less than 5 kg in weight. Animals greater than 5 kg were dosed at 250 mg PO once, followed by 125 mg PO q24h × 4 days. Single‐dose azithromycin gavage (Azithromycin for Oral Suspension, 40 mg/ml; TEVA Pharmaceuticals USA Inc.) Azithromycin as a compounded suspension or tablet crushed and mixed with water administered via oral gavage with a red rubber catheter. The animal was sedated with 10–15 mg/kg ketamine IM; a red rubber catheter was measured from the mouth to just below the last rib (where the stomach sits). The monkey was then held in a seated position (i.e., High Fowler's position), and the gavage tube was passed through the oropharynx into the esophagus and subsequently the stomach. 
The appropriate amount of drug was pulled up into a syringe and attached to the end of the red rubber catheter. The drug was administered with the monkey in the upright position (Figure 4) and followed by 3–5 ml of water. The red rubber was then clamped off and pulled directly out of the animals' mouth. The animal was then left in the sitting upright position for 3–5 min or until the animal exhibited a robust swallowing reflex before being returned to the cage for recovery. Animals were monitored for adverse side effects such as vomiting for 24 h, and no adverse effects were reported. (A) Materials needed: saline, azithromycin suspension, syringes, and a red rubber catheter. (B) The red rubber catheter being measured to the level of the stomach in a sedated macaque. (C) The red rubber is gently passed through a sedated macaque's mouth down the esophagus and into the stomach. (D) The animal remains in the High Fowler's position for 3–5 min before being returned to the cage and recovered. (Azithromycin for Oral Suspension, 40 mg/ml; TEVA Pharmaceuticals USA Inc.) Azithromycin as a compounded suspension or tablet crushed and mixed with water administered via oral gavage with a red rubber catheter. The animal was sedated with 10–15 mg/kg ketamine IM; a red rubber catheter was measured from the mouth to just below the last rib (where the stomach sits). The monkey was then held in a seated position (i.e., High Fowler's position), and the gavage tube was passed through the oropharynx into the esophagus and subsequently the stomach. The appropriate amount of drug was pulled up into a syringe and attached to the end of the red rubber catheter. The drug was administered with the monkey in the upright position (Figure 4) and followed by 3–5 ml of water. The red rubber was then clamped off and pulled directly out of the animals' mouth. The animal was then left in the sitting upright position for 3–5 min or until the animal exhibited a robust swallowing reflex before being returned to the cage for recovery. Animals were monitored for adverse side effects such as vomiting for 24 h, and no adverse effects were reported. (A) Materials needed: saline, azithromycin suspension, syringes, and a red rubber catheter. (B) The red rubber catheter being measured to the level of the stomach in a sedated macaque. (C) The red rubber is gently passed through a sedated macaque's mouth down the esophagus and into the stomach. (D) The animal remains in the High Fowler's position for 3–5 min before being returned to the cage and recovered. Single‐dose CCFA (Excede Sterile Suspension, 100 mg/ml; Zoetis). Administered at 20 mg/kg subcutaneously as a one‐time dose. (Excede Sterile Suspension, 100 mg/ml; Zoetis). Administered at 20 mg/kg subcutaneously as a one‐time dose. Enrofloxacin (Baytril 100, 100 mg/ml; Bayer). Administered as 10 mg/kg intramuscularly q24 h for 7 days. (Baytril 100, 100 mg/ml; Bayer). Administered as 10 mg/kg intramuscularly q24 h for 7 days. Sample collection All animals were anesthetized with 10–15 mg/kg ketamine HCL (Zetamine™, 100 mg/ml, MWI) intramuscularly. For fecal PCR, a cotton‐tipped applicator was inserted into the rectum and then placed into a liquid Cary‐Blair with Indicator Single Vial‐Orange Cap (C&S Collection Vials Modified Cary Blair, 15 ml; Medical Chemical Corporation) and refrigerated and/or kept on ice until the sample could be delivered to the Johns Hopkins Medical Microbiology Laboratory (Baltimore, MD) for analysis. 
A parallel fecal culture was collected from select animals for the purpose of comparing results. Feces was collected directly from the rectum of each animal tested and placed immediately into transport medium (CultureSwab Cary‐Blair Agar, Becton Dickinson) and submitted IDEXX Laboratories, Glen Burnie, MD, who cultured samples for the presence of enteric pathogens (Salmonella, Shigella, Campylobacter spp., Yersinia, Aeromonas, Plesiomonas, E. coli 0157, and Vibrio). Follow‐up fecal PCRs were collected at least 14 days after the completion of treatment. For CCFA, follow‐up PCR was collected at least 2 weeks after last metabolically active dose which was considered 7 days after initial treatment. All animals were anesthetized with 10–15 mg/kg ketamine HCL (Zetamine™, 100 mg/ml, MWI) intramuscularly. For fecal PCR, a cotton‐tipped applicator was inserted into the rectum and then placed into a liquid Cary‐Blair with Indicator Single Vial‐Orange Cap (C&S Collection Vials Modified Cary Blair, 15 ml; Medical Chemical Corporation) and refrigerated and/or kept on ice until the sample could be delivered to the Johns Hopkins Medical Microbiology Laboratory (Baltimore, MD) for analysis. A parallel fecal culture was collected from select animals for the purpose of comparing results. Feces was collected directly from the rectum of each animal tested and placed immediately into transport medium (CultureSwab Cary‐Blair Agar, Becton Dickinson) and submitted IDEXX Laboratories, Glen Burnie, MD, who cultured samples for the presence of enteric pathogens (Salmonella, Shigella, Campylobacter spp., Yersinia, Aeromonas, Plesiomonas, E. coli 0157, and Vibrio). Follow‐up fecal PCRs were collected at least 14 days after the completion of treatment. For CCFA, follow‐up PCR was collected at least 2 weeks after last metabolically active dose which was considered 7 days after initial treatment. Statistical analysis This study was designed to produce data in a contingency table with the four treatment arms as rows, and post‐treatment shigellosis status as a dependent, binary outcome variable across two columns. Due to the fact that all treatment conditions eliminated shigellosis in all animals tested, that is, one of the dependent columns contained all “zeros,” statistical analysis of the resulting contingency table is not possible or warranted. This study was designed to produce data in a contingency table with the four treatment arms as rows, and post‐treatment shigellosis status as a dependent, binary outcome variable across two columns. Due to the fact that all treatment conditions eliminated shigellosis in all animals tested, that is, one of the dependent columns contained all “zeros,” statistical analysis of the resulting contingency table is not possible or warranted. Humane care guidelines: The Institutional Care and Use Committee of Johns Hopkins University, an AAALAC‐accredited institution, approved the protocol in which this study was conducted under. Animals and procedures were in compliance with the US Public Health Service's Policy on Humane Care and Use of Laboratory Animals, 14 the US Department of Agriculture's Animal Welfare Act, 15 and the National Research Council's Guide for the Care and Use of Laboratory Animals. 16 Husbandry: All animals were housed at the Johns Hopkins University Research Farm in single‐species harem breeding groups or same sex juvenile/young adult groups. Animal enclosures consisted of runs with concrete flooring or raised corncrib cages. 
All animals had indoor and outdoor access. Animals were fed a standard commercial diet (rhesus macaques: 5049 Fiber‐Plus Monkey Diet, LabDiet); (pigtailed macaques: 5038 Monkey Diet, LabDiet) and rotating food enrichment items including fresh fruits, vegetables, and dried fruit treats. Animals were provided water ad libitum. Annual colony health screening included intradermal tuberculin testing and serology for Macacine herpesvirus 1 (B virus), simian immunodeficiency virus, simian T‐cell leukemia virus, and simian retrovirus. All animals were consistently negative on tuberculosis testing and viral serology. Animals: Fourteen rhesus and ten pigtailed macaques (Macaca mulatta and nemestrina) were enrolled in this study. All animals enrolled met the following inclusion criteria: clinically stable and at least 2.5 kg in weight, testing positive by fecal PCR for Shigella and showing one or more symptoms of shigellosis, such as mild or moderate diarrhea, dehydration, hematochezia, fever, or gingivitis. The most common clinical signs seen were dehydration, fecal staining, and/or diarrhea. Animals were excluded from the study if they were less than 2.5 kg in weight, were clinically unstable requiring aggressive supportive care, or immunocompromised. Excluded animals received John Hopkins University standard of care treatment for severe shigellosis, which includes enrofloxacin antibiotic therapy. Study design: Once inclusion criteria were met animals were randomized (online software; random.org) into one of four treatment groups (Figure 2): Single‐dose CCFA, single‐dose azithromycin, 5‐day tapering dose of azithromycin, or enrofloxacin (positive control). Animals were sedated with 10–15 mg/kg ketamine intramuscularly (IM) for pre‐study physical examinations and drug administration. Animals that received CCFA and a single dose of azithromycin were returned to their groups. Animals were singly housed with visual access to other macaques if they were in the enrofloxacin group receiving injectable medication or the 5‐day tapering dose azithromycin group and determined to be low‐ranking animals that could not receive oral medications within their group. Animals were monitored daily until treatment was completed and clinical signs resolved. Follow‐up PCR was collected at least 2 weeks following treatment (Figure 3). Treatment groups Study design Drug administration and dosages: 5‐day tapering dose of azithromycin (Chewables, 120 mg/tablet; Bio‐Serv) (Azithromycin tablets, 250 mg; TAGI Pharma, Inc.). Five‐day tapering dose of Azithromycin given orally in a flavored tablet or crushed in a food enrichment vehicle (typically a peanut butter and graham cracker sandwich). Dosed at 125 mg PO once, followed by 62.5 mg PO q 24 h × 4 days for animals less than 5 kg in weight. Animals greater than 5 kg were dosed at 250 mg PO once, followed by 125 mg PO q24h × 4 days. (Chewables, 120 mg/tablet; Bio‐Serv) (Azithromycin tablets, 250 mg; TAGI Pharma, Inc.). Five‐day tapering dose of Azithromycin given orally in a flavored tablet or crushed in a food enrichment vehicle (typically a peanut butter and graham cracker sandwich). Dosed at 125 mg PO once, followed by 62.5 mg PO q 24 h × 4 days for animals less than 5 kg in weight. Animals greater than 5 kg were dosed at 250 mg PO once, followed by 125 mg PO q24h × 4 days. Single‐dose azithromycin gavage (Azithromycin for Oral Suspension, 40 mg/ml; TEVA Pharmaceuticals USA Inc.) 
Azithromycin as a compounded suspension or tablet crushed and mixed with water administered via oral gavage with a red rubber catheter. The animal was sedated with 10–15 mg/kg ketamine IM; a red rubber catheter was measured from the mouth to just below the last rib (where the stomach sits). The monkey was then held in a seated position (i.e., High Fowler's position), and the gavage tube was passed through the oropharynx into the esophagus and subsequently the stomach. The appropriate amount of drug was pulled up into a syringe and attached to the end of the red rubber catheter. The drug was administered with the monkey in the upright position (Figure 4) and followed by 3–5 ml of water. The red rubber was then clamped off and pulled directly out of the animals' mouth. The animal was then left in the sitting upright position for 3–5 min or until the animal exhibited a robust swallowing reflex before being returned to the cage for recovery. Animals were monitored for adverse side effects such as vomiting for 24 h, and no adverse effects were reported. (A) Materials needed: saline, azithromycin suspension, syringes, and a red rubber catheter. (B) The red rubber catheter being measured to the level of the stomach in a sedated macaque. (C) The red rubber is gently passed through a sedated macaque's mouth down the esophagus and into the stomach. (D) The animal remains in the High Fowler's position for 3–5 min before being returned to the cage and recovered. (Azithromycin for Oral Suspension, 40 mg/ml; TEVA Pharmaceuticals USA Inc.) Azithromycin as a compounded suspension or tablet crushed and mixed with water administered via oral gavage with a red rubber catheter. The animal was sedated with 10–15 mg/kg ketamine IM; a red rubber catheter was measured from the mouth to just below the last rib (where the stomach sits). The monkey was then held in a seated position (i.e., High Fowler's position), and the gavage tube was passed through the oropharynx into the esophagus and subsequently the stomach. The appropriate amount of drug was pulled up into a syringe and attached to the end of the red rubber catheter. The drug was administered with the monkey in the upright position (Figure 4) and followed by 3–5 ml of water. The red rubber was then clamped off and pulled directly out of the animals' mouth. The animal was then left in the sitting upright position for 3–5 min or until the animal exhibited a robust swallowing reflex before being returned to the cage for recovery. Animals were monitored for adverse side effects such as vomiting for 24 h, and no adverse effects were reported. (A) Materials needed: saline, azithromycin suspension, syringes, and a red rubber catheter. (B) The red rubber catheter being measured to the level of the stomach in a sedated macaque. (C) The red rubber is gently passed through a sedated macaque's mouth down the esophagus and into the stomach. (D) The animal remains in the High Fowler's position for 3–5 min before being returned to the cage and recovered. Single‐dose CCFA (Excede Sterile Suspension, 100 mg/ml; Zoetis). Administered at 20 mg/kg subcutaneously as a one‐time dose. (Excede Sterile Suspension, 100 mg/ml; Zoetis). Administered at 20 mg/kg subcutaneously as a one‐time dose. Enrofloxacin (Baytril 100, 100 mg/ml; Bayer). Administered as 10 mg/kg intramuscularly q24 h for 7 days. (Baytril 100, 100 mg/ml; Bayer). Administered as 10 mg/kg intramuscularly q24 h for 7 days. 
Single-dose CCFA (Excede Sterile Suspension, 100 mg/ml; Zoetis): administered at 20 mg/kg subcutaneously as a one-time dose. Enrofloxacin (Baytril 100, 100 mg/ml; Bayer): administered at 10 mg/kg intramuscularly q24h for 7 days. Sample collection: All animals were anesthetized with 10-15 mg/kg ketamine HCl (Zetamine™, 100 mg/ml; MWI) intramuscularly. For fecal PCR, a cotton-tipped applicator was inserted into the rectum and then placed into liquid Cary-Blair transport medium with indicator (C&S Collection Vials Modified Cary Blair, 15 ml, orange cap; Medical Chemical Corporation) and refrigerated and/or kept on ice until the sample could be delivered to the Johns Hopkins Medical Microbiology Laboratory (Baltimore, MD) for analysis. A parallel fecal culture was collected from select animals for the purpose of comparing results. Feces were collected directly from the rectum of each animal tested, placed immediately into transport medium (CultureSwab Cary-Blair Agar, Becton Dickinson), and submitted to IDEXX Laboratories (Glen Burnie, MD), which cultured the samples for the presence of enteric pathogens (Salmonella, Shigella, Campylobacter spp., Yersinia, Aeromonas, Plesiomonas, E. coli O157, and Vibrio). Follow-up fecal PCRs were collected at least 14 days after the completion of treatment. For CCFA, the follow-up PCR was collected at least 2 weeks after the last metabolically active dose, which was considered to be 7 days after initial treatment.
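The follow-up windows above differ by arm, since "completion of treatment" falls on a different day for each regimen. The sketch below encodes one reading of those rules; the per-arm completion days are our interpretation (for example, treating the single gavage dose as complete on day 0), not values stated in the text.

```python
from datetime import date, timedelta

# Days from the first dose until treatment (or, for CCFA, the
# metabolically active period) is considered complete.
COMPLETION_DAYS = {
    "single-dose CCFA": 7,             # last active dose ~7 days post-injection
    "single-dose azithromycin": 0,     # complete on the day of dosing (assumption)
    "5-day tapering azithromycin": 4,  # loading dose on day 0 + 4 daily doses
    "enrofloxacin": 6,                 # day 0 plus 6 more daily doses = 7-day course
}

def earliest_followup_pcr(first_dose, arm):
    """Earliest follow-up fecal PCR date: at least 14 days after the
    completion of treatment (or the last metabolically active CCFA dose)."""
    return first_dose + timedelta(days=COMPLETION_DAYS[arm] + 14)

print(earliest_followup_pcr(date(2023, 1, 1), "single-dose CCFA"))  # 2023-01-22
```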
Statistical analysis: This study was designed to produce data in a contingency table with the four treatment arms as rows and post-treatment shigellosis status as a dependent, binary outcome variable across two columns. Because all treatment conditions eliminated shigellosis in all animals tested, that is, one of the outcome columns contained all zeros, statistical analysis of the resulting contingency table was neither possible nor warranted. RESULTS: Treatment results: Clinically ill animals found positive for Shigella by PCR were randomly assigned to one of the four treatment groups: single-dose ceftiofur crystalline free acid (CCFA), single-dose azithromycin gavage, a 5-day tapering azithromycin dose, or enrofloxacin. At the fecal PCR follow-up, all animals, regardless of treatment group, had cleared the infection and their clinical signs had resolved, indicating that all four treatments were effective. This offers a refinement of our current treatment of choice for shigellosis in macaques (Figure 5: Fecal and PCR results). Comparison of fecal PCR vs. culture: While not a primary aim of this study, we compared fecal PCR and culture results to further explore our previous observation that detection of Shigella increased dramatically when our testing lab switched to PCR methods. Interestingly, 8 of the 9 animals that had paired fecal cultures submitted were negative by culture but positive by PCR, indicating that culture is not the ideal diagnostic when Shigella is suspected.
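To make the statistical point concrete, the sketch below builds the intended 4 × 2 table. The six-per-arm counts are illustrative (an even split of the 24 animals across four arms is an assumption; exact group sizes are not restated here). With one outcome column summing to zero, every expected count in that column is zero, so Fisher's exact test, or any other association test, is undefined.

```python
# Rows: treatment arms; columns: (PCR-positive at follow-up, cleared).
table = {
    "single-dose CCFA":            (0, 6),
    "single-dose azithromycin":    (0, 6),
    "5-day tapering azithromycin": (0, 6),
    "enrofloxacin":                (0, 6),
}

col_sums = [sum(col) for col in zip(*table.values())]  # -> [0, 24]
if 0 in col_sums:
    # The outcome is constant across arms, so the table carries no
    # information about differences between treatments.
    print("Outcome constant across arms; no contingency test applicable.")
```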
DISCUSSION: This study demonstrates multiple convenient options for the treatment of shigellosis in macaques, as well as the limitations of culturing Shigella. Single-dose CCFA, single-dose azithromycin, and a 5-day tapering course of azithromycin all performed as well as a 7-day course of enrofloxacin in eliminating Shigella infection, as confirmed by resolution of clinical signs and a negative follow-up fecal PCR. All animals on study had resolution of clinical signs and a negative fecal PCR at least 14 days post-treatment. While these uniform results are certainly encouraging from the standpoint of treatment efficacy, they defy analysis by the means anticipated in our study design, that is, a 4 × 2 contingency table and Fisher's exact test. During the design phase of this study we considered the possibility of such a result, though it seemed remote. The inclusion of a control condition, that is, no therapy, undoubtedly would have solved this statistical problem. In our estimation, however, the benefit of ensuring an analyzable p value in the face of potential, and subsequently realized, perfect treatment efficacy did not outweigh the ethical cost of delaying effective treatment in animals showing signs of shigellosis. Interestingly, only one of our cultures, the traditional method of diagnosis, came back positive for Shigella. Cultures have limited sensitivity and are dependent on various factors including stage of disease, storage of the culturette, and bacterial load. 1, 5, 17 Because of this limited sensitivity, Shigella cases are often underreported and underestimated. 5 Historically, in our primate colony at Johns Hopkins we rarely had animals culture positive for Shigella, but when our microbiology lab switched to PCR testing we found that it was endemic within our colony. PCR has been shown to be an effective and more sensitive method of diagnosis and is our recommended diagnostic of choice at this time. 17 One limitation to consider when using PCR detection is that the DNA being detected could be from a living or dead organism; therefore, an animal without clinical signs may not need to be treated. It is also important to take a direct rectal swab rather than a stool sample from the cage floor, as environmental contamination could confound the results. Historically, macaques infected with shigellosis have been treated with a variety of injectables such as penicillin and, more recently, enrofloxacin. 17, 18 Enrofloxacin is the common empirical choice when shigellosis is suspected but is prone to antimicrobial resistance within a colony. 18 Having multiple treatment options for Shigella offers refinements to traditional treatments, as animals may remain in their social groups and may receive oral medications rather than injectables. In this study, we have shown that all four antibiotic regimens are effective treatments, but each facility should consider which treatment is best for the animal based on its life stage, severity of disease, and how the animal is housed. Antimicrobial resistance should be considered when treating groups of animals, given that antimicrobial resistance is on the rise and there have been documented cases of resistant Shigella. 19 Mechanisms of drug resistance in Shigella spp. include decreased cellular permeability, extrusion of drugs by active efflux pumps, overexpression of drug-modifying and inactivating enzymes, and target modification by mutation.
19 Having multiple classes of antibiotics available to treat Shigella within a colony may be critical given the organism's tendency to develop resistance. 18 Diagnosing and treating Shigella spp. within the vivarium is imperative not only for the health of the animals, as Shigella is a significant cause of mortality and morbidity in captive non-human primates, but also because it presents an occupational health concern for the staff working within the facility due to its zoonotic potential. 1, 4 Due to its highly infectious nature, close contact with non-human primates has led to small outbreaks among staff at biomedical research facilities and zoos. 20, 21 Additionally, and more concerningly, transmissible drug-resistant Shigella has been isolated from pet monkeys and their owners. 22 Prevention of laboratory-associated infections should be a top priority given the zoonotic and reverse zoonotic nature of the disease. It is therefore imperative not only to wear appropriate personal protective equipment when working with animals but also to detect and efficiently treat Shigella within a research colony. Eradication within a colony would be extremely difficult, if not impossible; therefore, early detection via clinical signs and prompt treatment are imperative for health and welfare, as well as for increased survivability and decreased transmission within a colony. While vaccines are in development, no vaccine is currently available to prevent shigellosis. Given these limitations, our study emphasized treatment rather than eradication. In this study, we have demonstrated that multiple, more conveniently dosed antimicrobial therapies are effective in treating naturally acquired shigellosis, and that detection by fecal PCR is more sensitive than culture, in rhesus and pigtailed macaques (Macaca mulatta and Macaca nemestrina). CONFLICT OF INTEREST: The authors have no conflict of interest to declare.
Background: Shigella spp. are common enteric pathogens in captive non-human primates. Treatment of symptomatic infections involves supportive care and antibiotic therapy, typically with an empirical choice of antibiotic. Methods: Twenty-four clinically ill, Shigella PCR-positive animals were randomly assigned to one of four treatment groups: single-dose ceftiofur crystalline free acid (CCFA), single-dose azithromycin gavage, a 5-day tapering azithromycin dose, or a 7-day course of enrofloxacin. We hypothesized that all antimicrobial therapies would have similar efficacy. Results: Animals in all groups cleared Shigella, based on fecal PCR, and had resolution of clinical signs 2 weeks after treatment. Eight of nine clinically ill, PCR-positive animals tested negative by fecal culture. Conclusions: Single-dose CCFA, single-dose azithromycin, and a 5-day tapering course of azithromycin all performed as well as a 7-day course of enrofloxacin in eliminating Shigella infection. Fecal PCR may be a better diagnostic than culture for Shigella.
null
null
8,364
202
[ 876, 84, 142, 133, 161, 992, 117, 315, 27, 28, 220, 76, 93, 76 ]
18
[ "animals", "mg", "azithromycin", "dose", "treatment", "red rubber", "rubber", "red", "kg", "animal" ]
[ "shigellosis suspected", "shigellosis range acute", "treating shigella spp", "naturally acquired shigellosis", "eliminating shigella infection" ]
null
null
null
[CONTENT] non‐human primate | Shigella | shigella antibiotic [SUMMARY]
null
[CONTENT] non‐human primate | Shigella | shigella antibiotic [SUMMARY]
null
[CONTENT] non‐human primate | Shigella | shigella antibiotic [SUMMARY]
null
[CONTENT] Animals | Dysentery, Bacillary | Macaca mulatta | Macaca nemestrina | Anti-Bacterial Agents | Enrofloxacin | Azithromycin | Shigella [SUMMARY]
null
[CONTENT] Animals | Dysentery, Bacillary | Macaca mulatta | Macaca nemestrina | Anti-Bacterial Agents | Enrofloxacin | Azithromycin | Shigella [SUMMARY]
null
[CONTENT] Animals | Dysentery, Bacillary | Macaca mulatta | Macaca nemestrina | Anti-Bacterial Agents | Enrofloxacin | Azithromycin | Shigella [SUMMARY]
null
[CONTENT] shigellosis suspected | shigellosis range acute | treating shigella spp | naturally acquired shigellosis | eliminating shigella infection [SUMMARY]
null
[CONTENT] shigellosis suspected | shigellosis range acute | treating shigella spp | naturally acquired shigellosis | eliminating shigella infection [SUMMARY]
null
[CONTENT] shigellosis suspected | shigellosis range acute | treating shigella spp | naturally acquired shigellosis | eliminating shigella infection [SUMMARY]
null
[CONTENT] animals | mg | azithromycin | dose | treatment | red rubber | rubber | red | kg | animal [SUMMARY]
null
[CONTENT] animals | mg | azithromycin | dose | treatment | red rubber | rubber | red | kg | animal [SUMMARY]
null
[CONTENT] animals | mg | azithromycin | dose | treatment | red rubber | rubber | red | kg | animal [SUMMARY]
null
[CONTENT] humans | shigella | antibiotic | non | infections | shigellosis | treatment | subclinical | individual | stress [SUMMARY]
null
[CONTENT] pcr | fecal | culture | fecal pcr | treatment | shigella | indicating | results | dose | testing [SUMMARY]
null
[CONTENT] mg | animals | treatment | dose | pcr | azithromycin | fecal | kg | shigella | rubber [SUMMARY]
null
[CONTENT] Shigella ||| ||| [SUMMARY]
null
[CONTENT] Shigella | PCR | 2 weeks ||| Eight | nine [SUMMARY]
null
[CONTENT] ||| ||| ||| Twenty-four | Shigella PCR-positive | one | four | CCFA | azithromycin gavage | 5-day | 7-day ||| ||| Shigella | PCR | 2 weeks ||| Eight | nine ||| CCFA | 5-day | 7-day | Shigella ||| Shigella [SUMMARY]
null
Microglia P2Y₆ receptors mediate nitric oxide release and astrocyte apoptosis.
25178395
During cerebral inflammation uracil nucleotides leak to the extracellular medium and activate glial pyrimidine receptors contributing to the development of a reactive phenotype. Chronically activated microglia acquire an anti-inflammatory phenotype that favors neuronal differentiation, but the impact of these microglia on astrogliosis is unknown. We investigated the contribution of pyrimidine receptors to microglia-astrocyte signaling in a chronic model of inflammation and its impact on astrogliosis.
BACKGROUND
Co-cultures of astrocytes and microglia were chronically treated with lipopolysaccharide (LPS) and incubated with uracil nucleotides for 48 h. The effect of the nucleotides was evaluated by methyl-[3H]-thymidine incorporation. Western blot and immunofluorescence were performed to detect the expression of P2Y6 receptors and the inducible form of nitric oxide synthase (iNOS). Nitric oxide (NO) release was quantified by the Griess reaction. Cell death was also investigated by the LDH assay and by the TUNEL assay or Hoechst 33258 staining.
METHODS
UTP, UDP (0.001 to 1 mM) or PSB 0474 (0.01 to 10 μM) inhibited cell proliferation by up to 43 ± 2% (n = 10, P <0.05), an effect prevented by the selective P2Y6 receptor antagonist MRS 2578 (1 μM). UTP was rapidly metabolized into UDP, which had a longer half-life. The inhibitory effect of UDP (1 mM) was abolished by phospholipase C (PLC), protein kinase C (PKC) and nitric oxide synthase (NOS) inhibitors. Both UDP (1 mM) and PSB 0474 (10 μM) increased NO release by up to 199 ± 20% (n = 4, P <0.05), an effect dependent on activation of the P2Y6 receptor-PLC-PKC pathway, indicating that this pathway mediates NO release. Western blot and immunocytochemistry analysis indicated that P2Y6 receptors were expressed in the cultures, being mainly localized in microglia. Moreover, the expression of iNOS was mainly observed in microglia and was upregulated by UDP (1 mM) or PSB 0474 (10 μM). UDP-mediated NO release induced apoptosis in astrocytes, but not in microglia.
RESULTS
In LPS treated co-cultures of astrocytes and microglia, UTP is rapidly converted into UDP, which activates P2Y6 receptors inducing the release of NO by microglia that causes astrocyte apoptosis, thus controlling their rate of proliferation and preventing an excessive astrogliosis.
CONCLUSIONS
[ "Animals", "Animals, Newborn", "Apoptosis", "Astrocytes", "Cell Cycle", "Cell Proliferation", "Cells, Cultured", "Coculture Techniques", "Enzyme Inhibitors", "Gene Expression Regulation", "Glial Fibrillary Acidic Protein", "Lipopolysaccharides", "Microglia", "Nitric Oxide", "Rats", "Rats, Wistar", "Receptors, Purinergic P2", "Thymidine", "Time Factors", "Tritium", "Uracil Nucleotides" ]
4158093
Background
Chronic inflammation is characteristic of several brain disorders leading to loss of cognitive function. In the central nervous system (CNS), the inflammatory response is mediated by glial cells that acquire reactive phenotypes to participate in neuronal repair mechanisms [1,2]. In particular, astrocytes respond with a complex reaction named astrogliosis that includes several morphological and functional changes, such as cell hypertrophy, glial fibrillary acidic protein (GFAP) and nestin up-regulation [3], and cell proliferation [4]. These progressive changes are time and context dependent, being regulated by inflammatory mediators produced at the lesion site [5]. Activated microglia are the main source of these inflammatory mediators, assuming an important role in the modulation of astrogliosis progression during the course of the inflammatory response [6,7]. These mediators may be pro-inflammatory, such as IL-1β, TNF-α and nitric oxide (NO), or anti-inflammatory, such as IL-10, IL-4 and TGF-β, according to the microglia phenotype, which is highly dependent on the pathological context [2,8]. Lipopolysaccharide (LPS) is an agonist of toll-like receptor-4 (TLR4), inducing a pro-inflammatory phenotype in microglia. However, chronic activation of TLR4 receptors has been shown to promote microglia polarization toward an anti-inflammatory phenotype [9,10], but its impact on the inflammatory response and on the modulation of astrogliosis remains to be established. In fact, different extents of astrogliosis and microgliosis have different impacts on neuronal regeneration [1,2]. At the extreme end of the astrogliosis spectrum, proliferating astrocytes may interact with fibroblasts and other glial cells to form a glial scar, creating an environment that prevents axon regeneration [11], leading to the idea that inhibition or control of this response would be beneficial to neuronal survival after injury. Therefore, the mediators produced by chronically activated microglia may have an important role in preventing excessive astrogliosis and promoting neuronal regeneration and sprouting. In a context of chronic brain inflammation, both adenine and uracil nucleotides attain high concentrations in the extracellular medium (in the mM range) due to cell damage or death, and activate P2 receptors in both types of glial cells, contributing to astrogliosis [12] and reinforcing the release of inflammatory messengers produced by microglia [13]. In particular, the uracil nucleotides may activate pyrimidine receptors, such as the P2Y2,4,6 and P2Y14 receptor subtypes [14], that participate in the inflammatory response [15]. P2Y6 receptors contribute to the clearance of necrotic cell debris by stimulating microglial phagocytosis of dying neurons [16], whereas P2Y2 receptors mediate astrocyte migration [17], but the effect of uracil nucleotides on the modulation of astroglial proliferation and their role in the control of glial scar formation are largely unknown. To investigate the role of pyrimidine receptors in microglia-astrocyte signaling and its impact on the control of astrogliosis, we used a cell culture model representing a state of chronic brain inflammation, consisting of co-cultures of astrocytes and microglia submitted to long-term treatment with LPS (0.1 μg/ml).
The cultures obtained were used to investigate: i) the effect of uracil nucleotides on cell proliferation, ii) the influence of ectonucleotidases on uracil nucleotide metabolism and the consequent impact on cell proliferation, iii) the signaling pathways and mechanisms activated by the pyrimidine receptors involved in the control of cell proliferation, and iv) the contribution of microglial pyrimidine receptors to the modulation of astroglial proliferation.
null
null
Results
Characterization of the co-cultures: The primary cortical brain cultures treated with lipopolysaccharide (LPS; 0.1 μg/ml) for 30 days in vitro consisted of monolayers of astrocytes exhibiting a flattened, polygonal morphology, with 4.36 ± 0.42% (n = 5) of microglia spread over the top of the astrocyte monolayer (Figure 1). The co-cultures obtained were named LPS cultures; in these cultures, microglial cells exhibited an amoeboid phenotype with retracted or short, thick processes, suggestive of their activation [23], as expected for in vitro LPS-treated microglia. In support of the potential of LPS to activate microglia and to prevent their proliferation, co-cultures grown without any treatment had a higher percentage of microglia, approximately 8.0%, with longer processes [22]. Figure 1: Immunofluorescent micrograph representative of co-cultures treated with LPS (0.1 μg/ml). Astrocytes were labeled with rabbit anti-GFAP (TRITC, red) and microglia with mouse anti-CD11b (Alexa Fluor 488, green). The percentage of microglia in cultures was 4.36 ± 0.42% (n = 5). Scale bar: 50 μm. In this study, LPS cultures were used to study the effect of uracil nucleotides on cell proliferation, as well as the contribution of activated microglia to this response. Effects of uracil nucleotides on cell proliferation: LPS cultures were incubated with several uracil nucleotides to evaluate their influence on cell proliferation.
UTP, which activates the P2Y2,4 subtypes; UDP and its analogue PSB 0474, both selective for P2Y6 receptors; and UDP-glucose, which is selective for P2Y14 receptors [14], were tested over a wide range of concentrations. Except for UDP-glucose, the uracil nucleotides UTP, UDP and PSB 0474 caused a concentration-dependent inhibition of cell proliferation (Figure 2). Figure 2: Effects of uracil nucleotides on cell proliferation. LPS cultures were incubated with nucleotides for 48 h, and in the last 24 h methyl-[3H]-thymidine was added to the medium at a concentration of 1 μCi/ml. Effects on cell proliferation were estimated by methyl-[3H]-thymidine incorporation and expressed as percentage of control. Values are means ± SEM from five to ten experiments. *P <0.05, significant differences from control. Extracellular metabolism of uracil nucleotides: UTP metabolism was very fast, with a half-life of 10.3 ± 0.5 min (n = 5), and the main metabolite formed during the first hour was UDP, which remained in the medium for up to 8 h (Figure 3A). When tested from the beginning, UDP metabolism was much slower than that of UTP (Figure 3B); its half-life was 77.3 ± 2.3 min (n = 4; P <0.05). The PSB 0474 half-life could not be evaluated because the highest concentration tested that caused an inhibition of cell proliferation was still below the detection limit of the method used to study the metabolism of these compounds. Figure 3: Metabolism of uracil nucleotides in LPS cultures. Cells were incubated with 0.1 mM of (A) UTP or (B) UDP, and samples were collected at 0, 1, 3, 8, 24 and 48 h. Uracil nucleotides and their metabolites were quantified by HPLC-UV. Values are means ± SEM from four experiments.
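Half-lives such as those above are typically obtained by fitting a first-order decay to the HPLC time course. The sketch below does this with an ordinary least-squares fit of ln(concentration) against time; the monoexponential assumption and the example concentrations are ours, not data from Figure 3.

```python
import math

def half_life_min(times_min, concentrations):
    """Estimate a first-order half-life by least squares on
    ln(C) = ln(C0) - k*t, so t_half = ln(2)/k.
    Assumes monoexponential decay and nonzero concentrations."""
    xs = list(times_min)
    ys = [math.log(c) for c in concentrations]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return math.log(2) / -slope

# Hypothetical UTP time course (0.1 mM at t = 0, halving every ~10 min):
t = [0, 5, 10, 20, 30]
c = [0.100, 0.071, 0.050, 0.025, 0.0125]
print(round(half_life_min(t, c), 1))  # ~10.0 min
```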
Expression and pharmacological characterization of the P2Y receptor subtype involved in the inhibition of cell proliferation induced by uracil nucleotides: The inhibitory effect of both PSB 0474 (1 μM) and UDP (1 mM) on cell proliferation was abolished by the selective P2Y6 receptor antagonist MRS 2578 (1 μM; Figure 4A). The inhibitory effect of UTP (0.1 mM) was also abolished by MRS 2578 (not shown), suggesting that this effect depends on its conversion into UDP and activation of P2Y6 receptors. Uncoupling Gi/o proteins from receptors with pertussis toxin (PTX, 0.1 μg/ml) did not change the effect of UDP (1 mM), which was attenuated by the phospholipase C (PLC) inhibitor U 73122 (1 μM), but not by its inactive analog U 73343 (1 μM), and by the protein kinase C (PKC) inhibitor RO 32-0432 (1 μM; Figure 4A), confirming the coupling of P2Y6 receptors to the Gq-PLC-PKC pathway.
Figure 4: Pyrimidine receptors and signaling pathway involved in the inhibition of cell proliferation mediated by uracil nucleotides. (A) LPS cultures were incubated with the nucleotides for 48 h, and in the last 24 h methyl-[3H]-thymidine was added to the medium at a concentration of 1 μCi/ml. The P2Y6 antagonist MRS 2578 and the enzyme inhibitors were added to the medium 1 h before the nucleotides, except PTX, which was added 24 h before. Effects on cell proliferation were estimated by methyl-[3H]-thymidine incorporation and expressed as percentage of change from the respective control. Values are means ± SEM from eight to twenty experiments. *P <0.05, significant differences from respective control; +P <0.05, significant differences from the agonist alone. (B) Representative western blots showing the expression of P2Y6 receptors obtained from whole cell lysates. Two bands of 25 kDa, one of 36 kDa and another of 86 kDa specifically reacted with the anti-P2Y6 antibody. These bands were absent in the presence of the respective neutralizing peptide (np). P2Y6 receptor expression in LPS cultures comprised four bands, two of 25 kDa, one of 36 kDa and another of 86 kDa, all of which were absent in the presence of the P2Y6 receptor neutralizing peptide (np; Figure 4B). Analysis of the cellular localization of P2Y6 receptors by immunocytochemistry revealed a preferential co-localization with microglia (Figure 5), suggesting that uracil nucleotides may inhibit cell proliferation via microglial cells. Figure 5: Cellular distribution and localization of P2Y6 receptors in LPS cultures. Microglia were labeled with mouse anti-CD11b (Alexa Fluor 488, green), P2Y6 receptors were labeled with rabbit anti-P2Y6 (TRITC, red) and nuclei were labeled with Hoechst 33258 (blue). The orange spots represent the expression of P2Y6 receptors coincident with microglia, but not with astrocytes (blue nuclei that do not label with CD11b and P2Y6 receptor antibodies). Scale bar: 20 μm.
P2Y6 receptor-mediated nitric oxide production: LPS increases iNOS expression and NO production by microglia [24,25], but this effect is attenuated during chronic LPS stimulation [9,10]. Since NO may inhibit astroglial proliferation [26], we investigated whether P2Y6 receptor activation by uracil nucleotides modulated NO release in chronically LPS-stimulated microglia. UDP (1 mM) and PSB 0474 (10 μM) increased NO release into the culture medium (Figure 6), an effect abolished by the selective P2Y6 receptor antagonist MRS 2578 (1 μM), by the PLC inhibitor U 73122 (1 μM), or by the PKC inhibitor RO 32-0432 (1 μM; Figure 6). Additionally, the inhibitory effect of UDP on cell proliferation (1 mM; 44 ± 2, n = 25) was abolished by the NOS inhibitor L-NAME (0.1 mM; 7 ± 3, n = 8, P <0.05), and this effect was reversed in the presence of L-arginine (3 mM; 28 ± 6, n = 6; P <0.05). Figure 6: Nitric oxide synthesis mediated by uracil nucleotides in LPS cultures. Cells were incubated with UDP or PSB 0474 for 48 h. The P2Y6 antagonist MRS 2578 and the enzyme inhibitors were added to the medium 1 h before the nucleotides. The concentration of nitrites plus nitrates was evaluated in the culture supernatants and expressed as percentage of change from the respective control. Values are means ± SEM from four experiments. *P <0.05, significant differences from the respective control; +P <0.05, significant differences from the agonist alone. In order to identify the cellular source of the NO released upon P2Y6 receptor activation, iNOS expression was immunolocalized with either microglia or astrocytes in LPS cultures. No iNOS expression was detected in astrocytes, either in control conditions or after treatment with the uracil nucleotides (Figure 7), whereas in microglia iNOS expression was residual in control conditions but was significantly increased after 48 h incubation with PSB 0474 (10 μM) or UDP (1 mM; Figure 7). Figure 7: Cellular localization of inducible nitric oxide synthase (iNOS) in LPS cultures. Cells were incubated with UDP or PSB 0474 for 48 h. Microglia were labeled with mouse anti-CD11b (Alexa Fluor 488, green), astrocytes with mouse anti-GFAP (Alexa Fluor 488, green) and iNOS with rabbit anti-iNOS (TRITC, red). Cell nuclei were labeled with Hoechst 33258 (blue).
The orange spots represent the expression of iNOS in the cells and are coincident with an increased expression of iNOS in microglia, but not in astrocytes, upon stimulation with the uracil nucleotides. Scale bar = 10 μm.
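Nitrite plus nitrate in the supernatants (Griess reaction) is read against a standard curve and then expressed as percent of control, as in Figure 6. The sketch below shows that interpolation and normalization step; the standards, absorbances, and linear response are assumed example values, not the study's calibration data.

```python
def linear_fit(xs, ys):
    """Least-squares slope and intercept for a standard curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def nitrite_uM(absorbance, slope, intercept):
    """Interpolate a Griess absorbance reading on the standard curve."""
    return (absorbance - intercept) / slope

# Hypothetical nitrite standards (uM) and their 540 nm absorbances:
std_conc = [0, 5, 10, 25, 50]
std_abs = [0.02, 0.08, 0.15, 0.33, 0.65]
m, b = linear_fit(std_conc, std_abs)

control = nitrite_uM(0.20, m, b)  # untreated culture supernatant
udp = nitrite_uM(0.42, m, b)      # UDP-treated culture supernatant
print("NO release: %.0f%% of control" % (100 * udp / control))
```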
P2Y6 receptor-mediated inhibition of cell proliferation, mechanisms involved: Under these experimental conditions, microglial cells are responsible for the NO production mediated by P2Y6 receptors. To clarify the mechanisms behind the inhibition of cell proliferation by uracil nucleotides, their effect on cell cycle progression and cell death was investigated. Uracil nucleotides had no effect on cell cycle progression of glial cells. The percentages of cells in the G0/G1, S and G2/M phases of the cell cycle were similar in control cultures (75.4 ± 1.8, 17.6 ± 3.2 and 7.1 ± 1.5, respectively, n = 3) and in those treated with UDP (1 mM; 76.4 ± 1.8, 13.6 ± 2.8 and 10.0 ± 1.5, respectively, n = 3), therefore excluding cell cycle arrest as the mechanism involved in the inhibition of cell proliferation. Another possibility was that the inhibition of cell proliferation mediated by uracil nucleotides could result from an increase in cell death. UDP (1 mM) and PSB 0474 (10 μM) caused no change in LDH release, which excluded cell death by necrosis (Figure 8A). However, both UDP (1 mM) and PSB 0474 (10 μM) induced cell death by apoptosis, as assessed by the TUNEL assay (Figure 8A). Figure 8: Effects of uracil nucleotides on cell death in LPS cultures. (A) Necrotic cell death was evaluated by measuring the release of lactate dehydrogenase (LDH), and apoptotic cell death was evaluated by the TUNEL assay, after incubation with uracil nucleotides or solvent for 48 h. LDH activity was measured in the culture medium and in the culture extracts, and the fraction released is represented as percentage of total LDH. The number of apoptotic cells is expressed as percentage of the total number of cells counted. Values are means ± SEM from four to seven experiments. *P <0.05, significant differences from respective control (solvent). (B) Cellular localization of apoptotic nuclei, obtained with Hoechst 33258 staining in LPS cultures. Astrocytes were labeled with rabbit anti-GFAP (TRITC, red), microglia with mouse anti-CD11b (Alexa Fluor 488, green) and cell nuclei with Hoechst 33258 (blue). LPS cultures were incubated with solvent or UDP for 48 h. Shrunken nuclei with a bright fluorescence appearance, characteristic of apoptotic nuclei, are clearly coincident with astrocytes (white arrows), but not with microglia. Scale bar = 20 μm.
Cultures treated with UDP (1 mM) showed an increase in the number of shrunken nuclei with a bright fluorescence appearance on Hoechst 33258 staining (Figure 8B). The percentage of apoptotic nuclei was 6.75 ± 0.65% (n = 8) in control cultures and increased to 16.02 ± 0.75% (n = 8, P <0.05) in cultures treated with UDP (1 mM). This increase in the number of apoptotic cells was attenuated to 11.83 ± 0.61% (n = 8, P <0.05) when UDP was tested in the presence of L-NAME (0.1 mM). Additionally, the apoptotic nuclei co-localized with astrocytes but not with microglia (Figure 8B).
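The apoptosis figures above are per-culture counts summarized as mean ± SEM (n = 8 cultures in the study). A minimal sketch of that summary step, using made-up counts from four hypothetical cultures:

```python
import statistics

def percent_apoptotic(apoptotic_counts, total_counts):
    """Per-culture percentage of shrunken, brightly fluorescent
    (apoptotic) nuclei, summarized as (mean, SEM) across cultures."""
    pcts = [100.0 * a / t for a, t in zip(apoptotic_counts, total_counts)]
    mean = statistics.mean(pcts)
    sem = statistics.stdev(pcts) / len(pcts) ** 0.5
    return mean, sem

# Hypothetical counts from four UDP-treated cultures:
apoptotic = [32, 28, 35, 30]
total = [200, 190, 210, 185]
m, s = percent_apoptotic(apoptotic, total)
print("%.2f +/- %.2f %%" % (m, s))
```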
Conclusions
The present study shows that chronically activated microglia influence the astroglial response to uracil nucleotides, favoring astroglial apoptosis as a consequence of microglial P2Y6 receptor activation that induces NO release (Figure 9). Therefore, P2Y6 receptor activation may represent an important mechanism by which microglia control excessive astrogliosis that could hamper neuronal regeneration. Nevertheless, human cortical astrocytes are known to be diverse and structurally and functionally more complex than their rodent counterparts [45]; this hypothesis should therefore be further confirmed in human glial cells. Figure 9: Schematic representation of the purinergic mechanisms mediating microglia-astrocyte communication in LPS cultures. Uracil nucleotides released during the inflammatory response activate microglial P2Y6 receptors coupled to the phospholipase C (PLC)-protein kinase C (PKC) pathway, which mediates an increase in inducible nitric oxide synthase (iNOS) expression and, consequently, in nitric oxide (NO) release. Diffusible NO mediates astroglial apoptosis.
[ "Materials", "Cell cultures", "Immunocytochemistry", "DNA synthesis", "Metabolism of nucleotides", "Western blot analysis", "Nitric oxide assay", "Cell cycle", "Cell death assays", "Statistical analysis", "Characterization of the co-cultures", "Effects of uracil nucleotides in cell proliferation", "Extracellular metabolism of uracil nucleotides", "Expression and pharmacological characterization of the P2Y receptor subtype involved in the inhibition of cell proliferation induced by uracil nucleotides", "P2Y6 receptor-mediated nitric oxide production", "P2Y6 receptor-mediated inhibition of cell proliferation: mechanisms involved" ]
[ "The antibodies used and the respective information are listed in Table 1. The following drugs and reagents were used: L-arginine (L-ARG), lipopolysaccharide from Salmonella thyphimurium (LPS), N-nitro-L-arginine methyl ester hydrochloride (L-NAME), pertussis toxin (PTX), bisindolylmaleimide XI hydrochloride (RO 32-0432), penicillin, streptomycin, uracil, uridine, uridine-5’-monophosphate disodium (UMP), uridine-5’-diphosphate sodium (UDP), uridine 5'-triphosphate trisodium (UTP), uridine 5'-diphosphoglucose disodium (UDP-glucose), 1-[6-[((17β)-3-methoxyestra-1,3,5[10]-trien-17-yl)amino]hexyl]-2,5-pyrrolidinedione (U 73343), 1-[6-[((17β)-3-methoxyestra-1,3,5[10]-trien-17-yl)amino]hexyl]-1H-pyrrole-2,5dione (U 73122), 2'-(4-hydroxyphenyl)-5-(4-methyl-1-piperazinyl)-2,5'-bi-1H-benzimidazole trihydrochloride hydrate (Hoechst 33258), Ribonuclease A (RNAse) and propidium iodide (PI) from Sigma-Aldrich (Sintra, Portugal); N,N''-1,4 butanediylbis[N'-(3-isothiocyanatophenyl)thiourea] (MRS 2578) and 3-(2-oxo-2-phenylethyl)uridine-5'-diphosphate disodium (PSB 0474) from Tocris (Bristol, UK); methyl-[3H]thymidine (specific activity 80 to 86 Ci/mmol) and enhanced chemiluminescence (ECL) western blotting system from Amersham Biosciences (Lisbon, Portugal). Stock solutions of drugs were prepared with dimethyl sulfoxide or distilled water and kept at -20°C. Solutions of drugs were prepared from stock solutions diluted in culture medium immediately before use.Table 1\nPrimary and secondary antibodies used in immunocytochemistry and western blotting\n\nPrimary antibodies\n\nAntigen\n\nCode\n\nHost\n\nDilution\n\nSupplier\nGFAPG9269Rabbit1:600 (IF)SigmaGFAPG6171Mouse1:600 (IF)SigmaCD11bsc-53086Mouse1:50 (IF)Santa Cruz Biotechnology, IncP2Y6\nAPR-011Rabbit1:200 (IF)Alomone1:300 (WB)iNOSAB5382Rabbit1:5 000 (IF)ChemiconActinsc-1615-RRabbit1:200 (WB)Santa Cruz Biotechnology, Inc\nSecondary antibodies\n\nAntigen\n\nCode\n\nHost\n\nDilution\n\nSupplier\nTRITC anti-rabbitT6778Goat1:400 (IF; GFAP, P2Y6)Sigma1:2 000 (IF; iNOS)Alexa Fluor 488 anti-mouseA-11034Goat1:400 (IF)Mol. Probesanti-rabbit conjugated to horseradish peroxidasesc-2004Goat1:10 000 (WB)Santa Cruz Biotechnology, IncIF, immunofluorescence; WB, western blot analysis.\n\nPrimary and secondary antibodies used in immunocytochemistry and western blotting\n\nIF, immunofluorescence; WB, western blot analysis.", "Animal handling and experiments were in accordance with the guidelines prepared by Committee on Care and Use of Laboratory Animal Resources (National Research Council, USA), followed the Directive 2010/63/EU of the European Parliament and the Council of the European Union and were approved by the ethics committee of the Faculty of Pharmacy from the University of Porto. Primary co-cultures of astrocytes and microglia were prepared from newborn (P0-P2) Wistar rats (Charles River, Barcelona, Spain) as previously described [18] with minor modifications. Cell cultures were treated with 0.1 μg/ml LPS and were incubated at 37°C in a humidified atmosphere of 95% air, 5% CO2. The medium containing 0.1 μg/ml LPS was replaced one day after cell cultures preparation, and subsequently, twice a week, with LPS remaining in the cultures from the first day in vitro (DIV1) until the end of the experiments. 
Immunocytochemistry

Cultures were fixed and permeabilized as described in previous studies [19]. For double immunofluorescence, cultures were incubated with the primary antibodies (Table 1) overnight at 4°C. Visualization of GFAP, CD11b, P2Y6 receptor and iNOS positive cells was accomplished after 1 h incubation at room temperature with the secondary antibodies (Table 1). In negative controls, the primary antibody was omitted. Cell nuclei were labeled with Hoechst 33258 (5 μg/ml) for 30 min at room temperature. To evaluate the percentage of microglia in the cultures, approximately 200 cells per culture were counted, and the number of CD11b positive cells was expressed as a percentage of the total number of cells counted.

DNA synthesis

Cultures grown in 24-well plates were incubated with uracil nucleotides or solvent for 48 h, and methyl-[3H]-thymidine was added in the last 24 h at a concentration of 1 μCi/ml. Antagonists or enzyme inhibitors were added to the medium 1 h before the uracil nucleotides. In experiments performed in the presence of PTX, the drug was added to the culture medium 24 h before the uracil nucleotides. At the end of the 48-h incubation period, the protein content and methyl-[3H]-thymidine incorporation were evaluated as previously described [20].
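The proliferation readout described above reduces to counts normalized to protein content and expressed relative to the solvent control; the following sketch shows that arithmetic with illustrative numbers, not study data.

```python
# Hypothetical sketch: methyl-[3H]-thymidine incorporation normalized to
# protein content, then expressed as a percentage of the solvent control.
import numpy as np

def incorporation_percent_of_control(cpm, protein_ug, cpm_ctrl, protein_ctrl_ug):
    """(cpm per microgram protein) as % of the control cultures' mean."""
    norm = np.asarray(cpm) / np.asarray(protein_ug)
    norm_ctrl = np.mean(np.asarray(cpm_ctrl) / np.asarray(protein_ctrl_ug))
    return 100.0 * norm / norm_ctrl

# Illustrative values: treated wells show roughly half the control signal.
print(incorporation_percent_of_control(
    cpm=[5200, 4800, 5100], protein_ug=[105, 98, 101],
    cpm_ctrl=[9800, 10250, 10050], protein_ctrl_ug=[100, 103, 99]))
```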
Metabolism of nucleotides

The metabolism of nucleotides was evaluated as previously described [20]. Briefly, cultures were incubated with uracil nucleotides, all at 0.1 mM, and samples were collected at 0, 1, 3, 8, 24 and 48 h. For evaluation of the UTP half-life, additional samples were collected at 0, 5, 10, 15, 30 and 60 min. The uracil nucleotides and their metabolites were separated and quantified by ion-pair reverse-phase high-performance liquid chromatography (HPLC) with UV detection set at 254 nm [21]. Standards were analyzed under the same conditions, and the retention times identified were (min): uracil (0.95), uridine (1.32), UMP (2.15), UDP (4.40) and UTP (6.40). The concentrations of nucleotides and metabolites were calculated by peak area integration, followed by interpolation on calibration curves obtained with standards.
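Two quantitative steps in this protocol can be made concrete with a short sketch: interpolating concentrations from HPLC peak areas on a linear calibration curve, and estimating a half-life from a mono-exponential fit of the early time points. The peak areas below are invented, and the mono-exponential form is an assumption for illustration.

```python
# Hypothetical sketch: (1) concentration from peak area via a linear
# calibration curve; (2) half-life from a mono-exponential decay fit.
import numpy as np
from scipy.optimize import curve_fit

# (1) Linear calibration: peak area vs known standard concentrations (mM).
std_conc = np.array([0.0125, 0.025, 0.05, 0.1])
std_area = np.array([12.4, 25.1, 49.8, 99.5])  # illustrative areas

slope, intercept = np.polyfit(std_conc, std_area, 1)

def area_to_conc(area):
    """Interpolate concentration (mM) from peak area on the calibration line."""
    return (area - intercept) / slope

# (2) Decay fit over the early sampling times (min), as used for UTP.
t = np.array([0, 5, 10, 15, 30, 60])
conc = area_to_conc(np.array([99.0, 71.0, 50.5, 36.0, 13.0, 1.8]))
(c0, k), _ = curve_fit(lambda t, c0, k: c0 * np.exp(-k * t), t, conc,
                       p0=(0.1, 0.07))
print(f"half-life = {np.log(2) / k:.1f} min")  # ~10 min for these numbers
```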
Western blot analysis

The expression of P2Y6 receptors was evaluated as previously described [22]. Membranes were probed for 2 h at room temperature with appropriately diluted primary rabbit polyclonal antibodies anti-P2Y6 or anti-actin, followed by the secondary antibody, goat anti-rabbit IgG conjugated to horseradish peroxidase (Table 1). The immunocomplexes were detected by ECL.

Nitric oxide assay

Cultures were incubated with uracil nucleotides or solvent for 48 h. The P2Y6 antagonist MRS 2578 or other enzyme inhibitors, when tested, were added 1 h before the uracil nucleotides. At the end of the 48-h incubation period, the nitric oxide released into the culture medium was assessed by measuring the accumulation of nitrates plus nitrites according to the instructions of a Nitrate/Nitrite Colorimetric Assay kit (Cayman, France). The content of nitrates plus nitrites in the supernatants was expressed as a percentage of the respective control.
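Because the NO readout is expressed relative to control, the conversion is a one-line calculation; the concentrations below are hypothetical.

```python
# Hypothetical sketch: nitrate + nitrite accumulation expressed as
# percentage of change from the respective control.
def percent_change_from_control(sample_uM: float, control_uM: float) -> float:
    return 100.0 * (sample_uM - control_uM) / control_uM

print(percent_change_from_control(sample_uM=18.4, control_uM=12.0))  # ~53%
```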
Cell cycle

The ability of uracil nucleotides to arrest glial cells at a specific cell cycle stage was evaluated in cultures treated with uracil nucleotides or solvent for 48 h. Cells were harvested by trypsinization, rinsed with ice-cold PBS and fixed in ice-cold 70% ethanol for 15 min at -20°C. Cells were rinsed again with PBS, incubated with 0.2 mg/ml RNAse A at 37°C for 15 min and then with 0.5 mg/ml propidium iodide for at least 30 min in the dark at room temperature. The percentage of cells in each phase of the cell cycle was determined by flow cytometric analysis using a FACSCalibur flow cytometer and CellQuest software, both from BD Biosciences (Enzifarma, Porto, Portugal). Cell cycle phases were identified and quantified using ModFit LT software (Verity Software House Inc., Topsham, USA).
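The flow-cytometry output ultimately reduces to the fraction of events in each DNA-content window. The toy example below uses fixed intensity gates on simulated propidium iodide signals purely to illustrate that reduction; ModFit LT fits overlapping distributions rather than applying hard thresholds.

```python
# Toy sketch: fraction of cells in G0/G1, S and G2/M from propidium
# iodide (PI) intensities, using fixed gates on simulated data.
import numpy as np

rng = np.random.default_rng(1)
pi = np.concatenate([
    rng.normal(100, 8, 750),     # G0/G1 peak (2N DNA content)
    rng.uniform(115, 185, 175),  # S phase (between 2N and 4N)
    rng.normal(200, 12, 75),     # G2/M peak (4N DNA content)
])

g0g1 = np.mean(pi < 115) * 100
s    = np.mean((pi >= 115) & (pi < 185)) * 100
g2m  = np.mean(pi >= 185) * 100
print(f"G0/G1: {g0g1:.1f}%  S: {s:.1f}%  G2/M: {g2m:.1f}%")
```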
Cell death assays

Necrotic cell death was assessed by measuring lactate dehydrogenase (LDH) release with an enzymatic assay according to the manufacturer's instructions (Sigma-Aldrich, Sintra, Portugal). Cultures were incubated with uracil nucleotides or solvent for 48 h. LDH activity was determined in the culture supernatants and in the respective extracts. The amount of LDH released into the culture medium was expressed as a percentage of total LDH.

Apoptotic cell death was evaluated either by indirect terminal transferase-mediated dUTP-digoxigenin nick end-labeling (TUNEL), to detect DNA fragmentation, using an ApopTag peroxidase detection kit (Millipore, Madrid, Spain), or by analysis of nuclear morphology with Hoechst 33258 staining (described above). Cultures were treated with uracil nucleotides or solvent for 48 h and, when present, L-NAME was added 1 h before the nucleotides. The number of TUNEL positive cells was evaluated as previously described [20]. The number of apoptotic cells observed with Hoechst 33258 staining was evaluated by analyzing eight high-power fields (×400) per culture, and the number of cells showing shrunken nuclei with a bright fluorescent appearance was expressed as a percentage of the total cell number counted.
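The released LDH fraction is simply the medium activity over the total (medium plus extract); with hypothetical activities:

```python
# Hypothetical sketch: LDH released into the medium as % of total LDH.
def ldh_percent_released(medium_activity: float, extract_activity: float) -> float:
    return 100.0 * medium_activity / (medium_activity + extract_activity)

print(ldh_percent_released(medium_activity=42.0, extract_activity=318.0))  # ~11.7%
```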
Statistical analysis

Data are expressed as means ± standard errors of the mean (SEM) from n independent experiments. Statistical analysis was carried out using the unpaired Student's t-test or ANOVA followed by Dunnett's multiple comparison test. Differences were considered significant at P values lower than 0.05.
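For completeness, a minimal illustration of the descriptive and inferential statistics named here, on invented data (the Dunnett variant is sketched earlier, after the apoptosis results):

```python
# Hypothetical sketch: mean +/- SEM and an unpaired Student's t-test.
import numpy as np
from scipy import stats

a = np.array([6.1, 7.2, 6.8, 7.5, 6.3, 7.0, 6.6, 7.4])          # e.g., control
b = np.array([15.2, 16.8, 15.9, 16.3, 15.5, 16.9, 15.7, 16.0])  # e.g., treated

def sem(x):
    """Standard error of the mean (sample SD over sqrt(n))."""
    return np.std(x, ddof=1) / np.sqrt(x.size)

print(f"{a.mean():.2f} +/- {sem(a):.2f} vs {b.mean():.2f} +/- {sem(b):.2f}")
print(f"unpaired t-test: P = {stats.ttest_ind(a, b).pvalue:.2e}")
```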
[ "Background", "Methods", "Materials", "Cell cultures", "Immunocytochemistry", "DNA synthesis", "Metabolism of nucleotides", "Western blot analysis", "Nitric oxide assay", "Cell cycle", "Cell death assays", "Statistical analysis", "Results", "Characterization of the co-cultures", "Effects of uracil nucleotides in cell proliferation", "Extracellular metabolism of uracil nucleotides", "Expression and pharmacological characterization of the P2Y receptor subtype involved in the inhibition of cell proliferation induced by uracil nucleotides", "P2Y6 receptor-mediated nitric oxide production", "P2Y6 receptor-mediated inhibition of cell proliferation: mechanisms involved", "Discussion", "Conclusions" ]
[ "Chronic inflammation is characteristic of several brain disorders leading to loss of cognitive function. In the central nervous system (CNS), the inflammatory response is mediated by glial cells that acquire reactive phenotypes to participate in neuronal repair mechanisms [1,2]. In particular, astrocytes respond with a complex reaction named astrogliosis that includes several morphological and functional changes, such as cell hypertrophy, glial fibrillary acidic protein (GFAP) and nestin up-regulation [3], and cell proliferation [4]. These progressive changes are time and context dependent, being regulated by inflammatory mediators produced in the lesion site [5]. Activated microglia are the main source of these inflammatory mediators, assuming an important role in the modulation of astrogliosis progression during the course of the inflammatory response [6,7]. These mediators may be pro-inflammatory, such as IL-1β, TNF-α and nitric oxide (NO), or anti-inflammatory, such as IL-10, IL-4, TGF-β, according to the microglia phenotype, which is highly dependent on the pathological context [2,8]. Lipopolysaccharide (LPS) is an agonist of toll-like receptors-4 (TLR4), inducing a pro-inflammatory phenotype in microglia. However, chronic activation of TLR4 receptors has been shown to promote microglia polarization toward an anti-inflammatory phenotype [9,10], but its impact in the inflammatory response and in the modulation of astrogliosis remains to be established. In fact, different extents of astrogliosis and microgliosis have different impacts in neuronal regeneration [1,2]. In the extreme end of the astrogliosis spectrum, proliferating astrocytes may interact with fibroblasts and other glial cells to form a glial scar, creating an environment that prevents axon regeneration [11], leading to the idea that inhibition or control of this response would be beneficial to neuronal survival after injury. Therefore, the mediators produced by chronically activated microglia may have an important role to prevent excessive astrogliosis and promote neuronal regeneration and sprouting.\nIn a context of chronic brain inflammation, both adenine and uracil nucleotides attain high concentrations in the extracellular medium (in the mM range) due to cell damage or death, and activate P2 receptors in both types of glial cells, contributing to astrogliosis [12] and reinforcing the release of inflammatory messengers produced by microglia [13]. Particularly, the uracil nucleotides may activate pyrimidine receptors, such as the P2Y2,4,6 and P2Y14 receptor subtypes [14] that participate in the inflammatory response [15]. P2Y6 receptors contribute to the clearance of necrotic cell debris by stimulating microglia phagocytosis of dying neurons [16], whereas the P2Y2 receptors mediate astrocyte migration [17], but the effect of uracil nucleotides in the modulation of astroglial proliferation and their role in the control of glial scar formation is largely unknown.\nTo investigate the role of pyrimidine receptors in microglia-astrocyte signaling and its impact in the control of astrogliosis, it was used a cell culture model that could represent a state of chronic brain inflammation, which consisted of co-cultures of astrocytes and microglia submitted to a long-term treatment with LPS (0.1 μg/ml). 
The cultures obtained were used to investigate: i) the effect of uracil nucleotides in cell proliferation, ii) the influence of ectonucleotidases on uracil nucleotides metabolism and consequent impact in cell proliferation, iii) the signaling pathways and the mechanisms activated by the pyrimidine receptors involved in the control of cell proliferation, and iv) the contribution of microglia pyrimidine receptors to the modulation of astroglial proliferation.", " Materials The antibodies used and the respective information are listed in Table 1. The following drugs and reagents were used: L-arginine (L-ARG), lipopolysaccharide from Salmonella thyphimurium (LPS), N-nitro-L-arginine methyl ester hydrochloride (L-NAME), pertussis toxin (PTX), bisindolylmaleimide XI hydrochloride (RO 32-0432), penicillin, streptomycin, uracil, uridine, uridine-5’-monophosphate disodium (UMP), uridine-5’-diphosphate sodium (UDP), uridine 5'-triphosphate trisodium (UTP), uridine 5'-diphosphoglucose disodium (UDP-glucose), 1-[6-[((17β)-3-methoxyestra-1,3,5[10]-trien-17-yl)amino]hexyl]-2,5-pyrrolidinedione (U 73343), 1-[6-[((17β)-3-methoxyestra-1,3,5[10]-trien-17-yl)amino]hexyl]-1H-pyrrole-2,5dione (U 73122), 2'-(4-hydroxyphenyl)-5-(4-methyl-1-piperazinyl)-2,5'-bi-1H-benzimidazole trihydrochloride hydrate (Hoechst 33258), Ribonuclease A (RNAse) and propidium iodide (PI) from Sigma-Aldrich (Sintra, Portugal); N,N''-1,4 butanediylbis[N'-(3-isothiocyanatophenyl)thiourea] (MRS 2578) and 3-(2-oxo-2-phenylethyl)uridine-5'-diphosphate disodium (PSB 0474) from Tocris (Bristol, UK); methyl-[3H]thymidine (specific activity 80 to 86 Ci/mmol) and enhanced chemiluminescence (ECL) western blotting system from Amersham Biosciences (Lisbon, Portugal). Stock solutions of drugs were prepared with dimethyl sulfoxide or distilled water and kept at -20°C. Solutions of drugs were prepared from stock solutions diluted in culture medium immediately before use.Table 1\nPrimary and secondary antibodies used in immunocytochemistry and western blotting\n\nPrimary antibodies\n\nAntigen\n\nCode\n\nHost\n\nDilution\n\nSupplier\nGFAPG9269Rabbit1:600 (IF)SigmaGFAPG6171Mouse1:600 (IF)SigmaCD11bsc-53086Mouse1:50 (IF)Santa Cruz Biotechnology, IncP2Y6\nAPR-011Rabbit1:200 (IF)Alomone1:300 (WB)iNOSAB5382Rabbit1:5 000 (IF)ChemiconActinsc-1615-RRabbit1:200 (WB)Santa Cruz Biotechnology, Inc\nSecondary antibodies\n\nAntigen\n\nCode\n\nHost\n\nDilution\n\nSupplier\nTRITC anti-rabbitT6778Goat1:400 (IF; GFAP, P2Y6)Sigma1:2 000 (IF; iNOS)Alexa Fluor 488 anti-mouseA-11034Goat1:400 (IF)Mol. Probesanti-rabbit conjugated to horseradish peroxidasesc-2004Goat1:10 000 (WB)Santa Cruz Biotechnology, IncIF, immunofluorescence; WB, western blot analysis.\n\nPrimary and secondary antibodies used in immunocytochemistry and western blotting\n\nIF, immunofluorescence; WB, western blot analysis.\nThe antibodies used and the respective information are listed in Table 1. 
The following drugs and reagents were used: L-arginine (L-ARG), lipopolysaccharide from Salmonella thyphimurium (LPS), N-nitro-L-arginine methyl ester hydrochloride (L-NAME), pertussis toxin (PTX), bisindolylmaleimide XI hydrochloride (RO 32-0432), penicillin, streptomycin, uracil, uridine, uridine-5’-monophosphate disodium (UMP), uridine-5’-diphosphate sodium (UDP), uridine 5'-triphosphate trisodium (UTP), uridine 5'-diphosphoglucose disodium (UDP-glucose), 1-[6-[((17β)-3-methoxyestra-1,3,5[10]-trien-17-yl)amino]hexyl]-2,5-pyrrolidinedione (U 73343), 1-[6-[((17β)-3-methoxyestra-1,3,5[10]-trien-17-yl)amino]hexyl]-1H-pyrrole-2,5dione (U 73122), 2'-(4-hydroxyphenyl)-5-(4-methyl-1-piperazinyl)-2,5'-bi-1H-benzimidazole trihydrochloride hydrate (Hoechst 33258), Ribonuclease A (RNAse) and propidium iodide (PI) from Sigma-Aldrich (Sintra, Portugal); N,N''-1,4 butanediylbis[N'-(3-isothiocyanatophenyl)thiourea] (MRS 2578) and 3-(2-oxo-2-phenylethyl)uridine-5'-diphosphate disodium (PSB 0474) from Tocris (Bristol, UK); methyl-[3H]thymidine (specific activity 80 to 86 Ci/mmol) and enhanced chemiluminescence (ECL) western blotting system from Amersham Biosciences (Lisbon, Portugal). Stock solutions of drugs were prepared with dimethyl sulfoxide or distilled water and kept at -20°C. Solutions of drugs were prepared from stock solutions diluted in culture medium immediately before use.Table 1\nPrimary and secondary antibodies used in immunocytochemistry and western blotting\n\nPrimary antibodies\n\nAntigen\n\nCode\n\nHost\n\nDilution\n\nSupplier\nGFAPG9269Rabbit1:600 (IF)SigmaGFAPG6171Mouse1:600 (IF)SigmaCD11bsc-53086Mouse1:50 (IF)Santa Cruz Biotechnology, IncP2Y6\nAPR-011Rabbit1:200 (IF)Alomone1:300 (WB)iNOSAB5382Rabbit1:5 000 (IF)ChemiconActinsc-1615-RRabbit1:200 (WB)Santa Cruz Biotechnology, Inc\nSecondary antibodies\n\nAntigen\n\nCode\n\nHost\n\nDilution\n\nSupplier\nTRITC anti-rabbitT6778Goat1:400 (IF; GFAP, P2Y6)Sigma1:2 000 (IF; iNOS)Alexa Fluor 488 anti-mouseA-11034Goat1:400 (IF)Mol. Probesanti-rabbit conjugated to horseradish peroxidasesc-2004Goat1:10 000 (WB)Santa Cruz Biotechnology, IncIF, immunofluorescence; WB, western blot analysis.\n\nPrimary and secondary antibodies used in immunocytochemistry and western blotting\n\nIF, immunofluorescence; WB, western blot analysis.\n Cell cultures Animal handling and experiments were in accordance with the guidelines prepared by Committee on Care and Use of Laboratory Animal Resources (National Research Council, USA), followed the Directive 2010/63/EU of the European Parliament and the Council of the European Union and were approved by the ethics committee of the Faculty of Pharmacy from the University of Porto. Primary co-cultures of astrocytes and microglia were prepared from newborn (P0-P2) Wistar rats (Charles River, Barcelona, Spain) as previously described [18] with minor modifications. Cell cultures were treated with 0.1 μg/ml LPS and were incubated at 37°C in a humidified atmosphere of 95% air, 5% CO2. The medium containing 0.1 μg/ml LPS was replaced one day after cell cultures preparation, and subsequently, twice a week, with LPS remaining in the cultures from the first day in vitro (DIV1) until the end of the experiments. 
Cultures were synchronized to a quiescent phase of the cell cycle, by shifting fetal bovine serum concentration in the medium from 10% to 0.1% for 48 h, and then used in experiments at DIV30.\nAnimal handling and experiments were in accordance with the guidelines prepared by Committee on Care and Use of Laboratory Animal Resources (National Research Council, USA), followed the Directive 2010/63/EU of the European Parliament and the Council of the European Union and were approved by the ethics committee of the Faculty of Pharmacy from the University of Porto. Primary co-cultures of astrocytes and microglia were prepared from newborn (P0-P2) Wistar rats (Charles River, Barcelona, Spain) as previously described [18] with minor modifications. Cell cultures were treated with 0.1 μg/ml LPS and were incubated at 37°C in a humidified atmosphere of 95% air, 5% CO2. The medium containing 0.1 μg/ml LPS was replaced one day after cell cultures preparation, and subsequently, twice a week, with LPS remaining in the cultures from the first day in vitro (DIV1) until the end of the experiments. Cultures were synchronized to a quiescent phase of the cell cycle, by shifting fetal bovine serum concentration in the medium from 10% to 0.1% for 48 h, and then used in experiments at DIV30.\n Immunocytochemistry Cultures were fixed and permeabilized as described in previous studies [19]. For double immunofluorescence, cultures were incubated with the primary antibodies (Table 1) overnight at 4°C. Visualization of GFAP, CD11b, and P2Y6 receptors and iNOS positive cells was accomplished upon 1 h incubation, at room temperature, with the secondary antibodies (Table 1). In negative controls, the primary antibody was omitted. Cell nuclei were labeled with Hoechst 33258 (5 μg/ml) for 30 min at room temperature. To evaluate the percentage of microglia in the cultures, approximately 200 cells per culture were counted, and the number of CD11b positive cells was expressed as percentage of the total number of cells counted.\nCultures were fixed and permeabilized as described in previous studies [19]. For double immunofluorescence, cultures were incubated with the primary antibodies (Table 1) overnight at 4°C. Visualization of GFAP, CD11b, and P2Y6 receptors and iNOS positive cells was accomplished upon 1 h incubation, at room temperature, with the secondary antibodies (Table 1). In negative controls, the primary antibody was omitted. Cell nuclei were labeled with Hoechst 33258 (5 μg/ml) for 30 min at room temperature. To evaluate the percentage of microglia in the cultures, approximately 200 cells per culture were counted, and the number of CD11b positive cells was expressed as percentage of the total number of cells counted.\n DNA synthesis Cultures grown in 24-well plates were incubated with uracil nucleotides or solvent for 48 h, and methyl-[3H]-thymidine was added in the last 24 h, at a concentration of 1 μCi/ml. Antagonists or enzymatic inhibitors were added to the medium 1 h before uracil nucleotides. In experiments performed in the presence of PTX, the drug was added to the culture medium 24 h before the uracil nucleotides. At the end of the 48-h period of incubation, the protein content and methyl-[3H]-thymidine incorporation were evaluated as previously described [20].\nCultures grown in 24-well plates were incubated with uracil nucleotides or solvent for 48 h, and methyl-[3H]-thymidine was added in the last 24 h, at a concentration of 1 μCi/ml. 
Antagonists or enzymatic inhibitors were added to the medium 1 h before uracil nucleotides. In experiments performed in the presence of PTX, the drug was added to the culture medium 24 h before the uracil nucleotides. At the end of the 48-h period of incubation, the protein content and methyl-[3H]-thymidine incorporation were evaluated as previously described [20].\n Metabolism of nucleotides The metabolism of nucleotides was evaluated as previously described [20]. Briefly, cultures were incubated with uracil nucleotides, all at 0.1 mM, and samples were collected at 0, 1, 3, 8, 24 and 48 h. For evaluation of UTP half-life, additional samples were collected at 0, 5, 10, 15, 30, 60 min. The uracil nucleotides or their metabolites were separated and quantified by ion-pair-reverse-phase high-performance liquid chromatography (HPLC) with UV detection set at 254 nm [21]. Standards were analyzed in the same conditions and the retention time identified was (min): uracil (0.95), uridine (1.32), UMP (2.15), UDP (4.40) and UTP (6.40). The concentration of nucleotides and metabolites was calculated by peak area integration, followed by interpolation in calibration curves obtained with standards.\nThe metabolism of nucleotides was evaluated as previously described [20]. Briefly, cultures were incubated with uracil nucleotides, all at 0.1 mM, and samples were collected at 0, 1, 3, 8, 24 and 48 h. For evaluation of UTP half-life, additional samples were collected at 0, 5, 10, 15, 30, 60 min. The uracil nucleotides or their metabolites were separated and quantified by ion-pair-reverse-phase high-performance liquid chromatography (HPLC) with UV detection set at 254 nm [21]. Standards were analyzed in the same conditions and the retention time identified was (min): uracil (0.95), uridine (1.32), UMP (2.15), UDP (4.40) and UTP (6.40). The concentration of nucleotides and metabolites was calculated by peak area integration, followed by interpolation in calibration curves obtained with standards.\n Western blot analysis The expression of P2Y6 receptors was evaluated as previously described [22]. Membranes were probed for 2 h at room temperature with appropriately diluted primary rabbit polyclonal antibodies anti-P2Y6 or anti-actin, followed by the secondary antibody goat anti-rabbit IgG conjugated to horseradish peroxidase (Table 1). The immunocomplexes were detected by ECL.\nThe expression of P2Y6 receptors was evaluated as previously described [22]. Membranes were probed for 2 h at room temperature with appropriately diluted primary rabbit polyclonal antibodies anti-P2Y6 or anti-actin, followed by the secondary antibody goat anti-rabbit IgG conjugated to horseradish peroxidase (Table 1). The immunocomplexes were detected by ECL.\n Nitric oxide assay Cultures were incubated with uracil nucleotides or solvent for 48 h. The P2Y6 antagonist MRS 2578 or other enzyme inhibitors, when tested, were added 1 h before the uracil nucleotides. At the end of the 48-h period of incubation, the nitric oxide released into the culture medium was assessed by measuring the accumulation of nitrates plus nitrites according to the instructions of a Nitrate/Nitrite Colorimetric Assay kit (Cayman, France). The content of nitrates plus nitrites present in the supernatants was expressed as percentage of respective control.\nCultures were incubated with uracil nucleotides or solvent for 48 h. The P2Y6 antagonist MRS 2578 or other enzyme inhibitors, when tested, were added 1 h before the uracil nucleotides. 
At the end of the 48-h period of incubation, the nitric oxide released into the culture medium was assessed by measuring the accumulation of nitrates plus nitrites according to the instructions of a Nitrate/Nitrite Colorimetric Assay kit (Cayman, France). The content of nitrates plus nitrites present in the supernatants was expressed as percentage of respective control.\n Cell cycle The ability of uracil nucleotides to arrest glial cells in a specific cell cycle stage was evaluated in cultures treated with uracil nucleotides or solvent for 48 h. Cells were harvested by trypsinization, rinsed with ice-cold PBS and fixed in ice-cold 70% ethanol for 15 min at -20°C. Cells were rinsed again with PBS and incubated with 0.2 mg/ml RNAse A at 37°C for 15 min and further with 0.5 mg/ml propidium iodide for at least 30 min in the dark at room temperature. The percentage of cells in each phase of the cell cycle was determined by a flow cytometric analysis using the FACSCalibur flow cytometer from BD Biosciences (Enzifarma, Porto, Portugal) and the CellQuest software from BD Biosciences (Enzifarma, Porto, Portugal). Cell cycle phases were identified and quantified using ModFit LT software (Verity Software House Inc., Topsham, USA).\nThe ability of uracil nucleotides to arrest glial cells in a specific cell cycle stage was evaluated in cultures treated with uracil nucleotides or solvent for 48 h. Cells were harvested by trypsinization, rinsed with ice-cold PBS and fixed in ice-cold 70% ethanol for 15 min at -20°C. Cells were rinsed again with PBS and incubated with 0.2 mg/ml RNAse A at 37°C for 15 min and further with 0.5 mg/ml propidium iodide for at least 30 min in the dark at room temperature. The percentage of cells in each phase of the cell cycle was determined by a flow cytometric analysis using the FACSCalibur flow cytometer from BD Biosciences (Enzifarma, Porto, Portugal) and the CellQuest software from BD Biosciences (Enzifarma, Porto, Portugal). Cell cycle phases were identified and quantified using ModFit LT software (Verity Software House Inc., Topsham, USA).\n Cell death assays Necrotic cell death was assessed by measuring the lactate dehydrogenase (LDH) release with an enzymatic assay according to the manufacturer’s instructions (Sigma-Aldrich, Sintra, Portugal). Cultures were incubated with uracil nucleotides or solvent for 48 h. LDH activity was determined in the culture supernatants and respective extracts. The amount of LDH released into the culture medium was expressed as the percentage of total LDH.\nApoptotic cell death was evaluated either by the indirect terminal transferase-mediated dUTP-digoxigenin nick end-labeling (TUNEL) to detect DNA fragmentation using an ApopTag peroxidase detection kit (Millipore, Madrid, Spain), or by the analysis of nuclear morphology with Hoechst 33258 staining (described above). Cultures were treated with uracil nucleotides or solvent for 48 h and, when present, L-NAME was added 1 h before nucleotides. The number of TUNEL positive cells was evaluated as previously described [20]. The number of apoptotic cells, observed with Hoechst 33258 staining, was evaluated by analyzing eight high-power fields (×400) in each culture, and the number of cells showing shrunken nuclei with a bright fluorescence appearance was expressed as percentage of total cell number counted.\nNecrotic cell death was assessed by measuring the lactate dehydrogenase (LDH) release with an enzymatic assay according to the manufacturer’s instructions (Sigma-Aldrich, Sintra, Portugal). 
Cultures were incubated with uracil nucleotides or solvent for 48 h. LDH activity was determined in the culture supernatants and respective extracts. The amount of LDH released into the culture medium was expressed as the percentage of total LDH.\nApoptotic cell death was evaluated either by the indirect terminal transferase-mediated dUTP-digoxigenin nick end-labeling (TUNEL) to detect DNA fragmentation using an ApopTag peroxidase detection kit (Millipore, Madrid, Spain), or by the analysis of nuclear morphology with Hoechst 33258 staining (described above). Cultures were treated with uracil nucleotides or solvent for 48 h and, when present, L-NAME was added 1 h before nucleotides. The number of TUNEL positive cells was evaluated as previously described [20]. The number of apoptotic cells, observed with Hoechst 33258 staining, was evaluated by analyzing eight high-power fields (×400) in each culture, and the number of cells showing shrunken nuclei with a bright fluorescence appearance was expressed as percentage of total cell number counted.\n Statistical analysis Data are expressed as means ± standard errors of the mean (SEM) from n number of experiments. Statistical analysis was carried out using the unpaired Student’s t-test or ANOVA followed by Dunnett’s multiple comparison test. Significant differences were indicated by P values lower than 0.05.\nData are expressed as means ± standard errors of the mean (SEM) from n number of experiments. Statistical analysis was carried out using the unpaired Student’s t-test or ANOVA followed by Dunnett’s multiple comparison test. Significant differences were indicated by P values lower than 0.05.", "The antibodies used and the respective information are listed in Table 1. The following drugs and reagents were used: L-arginine (L-ARG), lipopolysaccharide from Salmonella thyphimurium (LPS), N-nitro-L-arginine methyl ester hydrochloride (L-NAME), pertussis toxin (PTX), bisindolylmaleimide XI hydrochloride (RO 32-0432), penicillin, streptomycin, uracil, uridine, uridine-5’-monophosphate disodium (UMP), uridine-5’-diphosphate sodium (UDP), uridine 5'-triphosphate trisodium (UTP), uridine 5'-diphosphoglucose disodium (UDP-glucose), 1-[6-[((17β)-3-methoxyestra-1,3,5[10]-trien-17-yl)amino]hexyl]-2,5-pyrrolidinedione (U 73343), 1-[6-[((17β)-3-methoxyestra-1,3,5[10]-trien-17-yl)amino]hexyl]-1H-pyrrole-2,5dione (U 73122), 2'-(4-hydroxyphenyl)-5-(4-methyl-1-piperazinyl)-2,5'-bi-1H-benzimidazole trihydrochloride hydrate (Hoechst 33258), Ribonuclease A (RNAse) and propidium iodide (PI) from Sigma-Aldrich (Sintra, Portugal); N,N''-1,4 butanediylbis[N'-(3-isothiocyanatophenyl)thiourea] (MRS 2578) and 3-(2-oxo-2-phenylethyl)uridine-5'-diphosphate disodium (PSB 0474) from Tocris (Bristol, UK); methyl-[3H]thymidine (specific activity 80 to 86 Ci/mmol) and enhanced chemiluminescence (ECL) western blotting system from Amersham Biosciences (Lisbon, Portugal). Stock solutions of drugs were prepared with dimethyl sulfoxide or distilled water and kept at -20°C. 
Solutions of drugs were prepared from stock solutions diluted in culture medium immediately before use.Table 1\nPrimary and secondary antibodies used in immunocytochemistry and western blotting\n\nPrimary antibodies\n\nAntigen\n\nCode\n\nHost\n\nDilution\n\nSupplier\nGFAPG9269Rabbit1:600 (IF)SigmaGFAPG6171Mouse1:600 (IF)SigmaCD11bsc-53086Mouse1:50 (IF)Santa Cruz Biotechnology, IncP2Y6\nAPR-011Rabbit1:200 (IF)Alomone1:300 (WB)iNOSAB5382Rabbit1:5 000 (IF)ChemiconActinsc-1615-RRabbit1:200 (WB)Santa Cruz Biotechnology, Inc\nSecondary antibodies\n\nAntigen\n\nCode\n\nHost\n\nDilution\n\nSupplier\nTRITC anti-rabbitT6778Goat1:400 (IF; GFAP, P2Y6)Sigma1:2 000 (IF; iNOS)Alexa Fluor 488 anti-mouseA-11034Goat1:400 (IF)Mol. Probesanti-rabbit conjugated to horseradish peroxidasesc-2004Goat1:10 000 (WB)Santa Cruz Biotechnology, IncIF, immunofluorescence; WB, western blot analysis.\n\nPrimary and secondary antibodies used in immunocytochemistry and western blotting\n\nIF, immunofluorescence; WB, western blot analysis.", "Animal handling and experiments were in accordance with the guidelines prepared by Committee on Care and Use of Laboratory Animal Resources (National Research Council, USA), followed the Directive 2010/63/EU of the European Parliament and the Council of the European Union and were approved by the ethics committee of the Faculty of Pharmacy from the University of Porto. Primary co-cultures of astrocytes and microglia were prepared from newborn (P0-P2) Wistar rats (Charles River, Barcelona, Spain) as previously described [18] with minor modifications. Cell cultures were treated with 0.1 μg/ml LPS and were incubated at 37°C in a humidified atmosphere of 95% air, 5% CO2. The medium containing 0.1 μg/ml LPS was replaced one day after cell cultures preparation, and subsequently, twice a week, with LPS remaining in the cultures from the first day in vitro (DIV1) until the end of the experiments. Cultures were synchronized to a quiescent phase of the cell cycle, by shifting fetal bovine serum concentration in the medium from 10% to 0.1% for 48 h, and then used in experiments at DIV30.", "Cultures were fixed and permeabilized as described in previous studies [19]. For double immunofluorescence, cultures were incubated with the primary antibodies (Table 1) overnight at 4°C. Visualization of GFAP, CD11b, and P2Y6 receptors and iNOS positive cells was accomplished upon 1 h incubation, at room temperature, with the secondary antibodies (Table 1). In negative controls, the primary antibody was omitted. Cell nuclei were labeled with Hoechst 33258 (5 μg/ml) for 30 min at room temperature. To evaluate the percentage of microglia in the cultures, approximately 200 cells per culture were counted, and the number of CD11b positive cells was expressed as percentage of the total number of cells counted.", "Cultures grown in 24-well plates were incubated with uracil nucleotides or solvent for 48 h, and methyl-[3H]-thymidine was added in the last 24 h, at a concentration of 1 μCi/ml. Antagonists or enzymatic inhibitors were added to the medium 1 h before uracil nucleotides. In experiments performed in the presence of PTX, the drug was added to the culture medium 24 h before the uracil nucleotides. At the end of the 48-h period of incubation, the protein content and methyl-[3H]-thymidine incorporation were evaluated as previously described [20].", "The metabolism of nucleotides was evaluated as previously described [20]. 
Briefly, cultures were incubated with uracil nucleotides, all at 0.1 mM, and samples were collected at 0, 1, 3, 8, 24 and 48 h. For evaluation of UTP half-life, additional samples were collected at 0, 5, 10, 15, 30, 60 min. The uracil nucleotides or their metabolites were separated and quantified by ion-pair-reverse-phase high-performance liquid chromatography (HPLC) with UV detection set at 254 nm [21]. Standards were analyzed in the same conditions and the retention time identified was (min): uracil (0.95), uridine (1.32), UMP (2.15), UDP (4.40) and UTP (6.40). The concentration of nucleotides and metabolites was calculated by peak area integration, followed by interpolation in calibration curves obtained with standards.", "The expression of P2Y6 receptors was evaluated as previously described [22]. Membranes were probed for 2 h at room temperature with appropriately diluted primary rabbit polyclonal antibodies anti-P2Y6 or anti-actin, followed by the secondary antibody goat anti-rabbit IgG conjugated to horseradish peroxidase (Table 1). The immunocomplexes were detected by ECL.", "Cultures were incubated with uracil nucleotides or solvent for 48 h. The P2Y6 antagonist MRS 2578 or other enzyme inhibitors, when tested, were added 1 h before the uracil nucleotides. At the end of the 48-h period of incubation, the nitric oxide released into the culture medium was assessed by measuring the accumulation of nitrates plus nitrites according to the instructions of a Nitrate/Nitrite Colorimetric Assay kit (Cayman, France). The content of nitrates plus nitrites present in the supernatants was expressed as percentage of respective control.", "The ability of uracil nucleotides to arrest glial cells in a specific cell cycle stage was evaluated in cultures treated with uracil nucleotides or solvent for 48 h. Cells were harvested by trypsinization, rinsed with ice-cold PBS and fixed in ice-cold 70% ethanol for 15 min at -20°C. Cells were rinsed again with PBS and incubated with 0.2 mg/ml RNAse A at 37°C for 15 min and further with 0.5 mg/ml propidium iodide for at least 30 min in the dark at room temperature. The percentage of cells in each phase of the cell cycle was determined by a flow cytometric analysis using the FACSCalibur flow cytometer from BD Biosciences (Enzifarma, Porto, Portugal) and the CellQuest software from BD Biosciences (Enzifarma, Porto, Portugal). Cell cycle phases were identified and quantified using ModFit LT software (Verity Software House Inc., Topsham, USA).", "Necrotic cell death was assessed by measuring the lactate dehydrogenase (LDH) release with an enzymatic assay according to the manufacturer’s instructions (Sigma-Aldrich, Sintra, Portugal). Cultures were incubated with uracil nucleotides or solvent for 48 h. LDH activity was determined in the culture supernatants and respective extracts. The amount of LDH released into the culture medium was expressed as the percentage of total LDH.\nApoptotic cell death was evaluated either by the indirect terminal transferase-mediated dUTP-digoxigenin nick end-labeling (TUNEL) to detect DNA fragmentation using an ApopTag peroxidase detection kit (Millipore, Madrid, Spain), or by the analysis of nuclear morphology with Hoechst 33258 staining (described above). Cultures were treated with uracil nucleotides or solvent for 48 h and, when present, L-NAME was added 1 h before nucleotides. The number of TUNEL positive cells was evaluated as previously described [20]. 
The number of apoptotic cells, observed with Hoechst 33258 staining, was evaluated by analyzing eight high-power fields (×400) in each culture, and the number of cells showing shrunken nuclei with a bright fluorescence appearance was expressed as percentage of total cell number counted.", "Data are expressed as means ± standard errors of the mean (SEM) from n number of experiments. Statistical analysis was carried out using the unpaired Student’s t-test or ANOVA followed by Dunnett’s multiple comparison test. Significant differences were indicated by P values lower than 0.05.", " Characterization of the co-cultures The primary cortical brain cultures treated with lipopolysaccharide (LPS; 0.1 μg/ml) for 30 days in vitro, consisted of monolayers of astrocytes exhibiting a flattened, polygonal morphology and containing 4.36 ± 0.42% (n = 5) of microglia spread over the top of the astrocyte monolayer (Figure 1). The co-cultures obtained were named LPS cultures where microglial cells exhibited an amoeboid phenotype with retracted or short thick processes, suggestive of their activation [23], as expected for in vitro LPS treated microglia. In support of the potential of LPS to activate microglia and to prevent their proliferation, it was observed that in co-cultures grown without any treatment, the percentage of microglia was higher, approximately 8.0%, and presented longer processes [22].Figure 1\nImmunofluorescent micrograph representative of co-cultures treated with lipopolysaccharide (LPS) (0.1 μg/ml). Astrocytes were labeled with rabbit anti-GFAP (TRICT, red) and microglia with mouse anti-CD11b (Alexa Fluor 488, green). The percentage of microglia in cultures was 4.36 ± 0.42% (n = 5). Scale bar: 50 μm.\n\nImmunofluorescent micrograph representative of co-cultures treated with lipopolysaccharide (LPS) (0.1 μg/ml). Astrocytes were labeled with rabbit anti-GFAP (TRICT, red) and microglia with mouse anti-CD11b (Alexa Fluor 488, green). The percentage of microglia in cultures was 4.36 ± 0.42% (n = 5). Scale bar: 50 μm.\nIn this study, LPS cultures were used to study the effect of uracil nucleotides in cell proliferation, as well as the contribution of activated microglia to this response.\nThe primary cortical brain cultures treated with lipopolysaccharide (LPS; 0.1 μg/ml) for 30 days in vitro, consisted of monolayers of astrocytes exhibiting a flattened, polygonal morphology and containing 4.36 ± 0.42% (n = 5) of microglia spread over the top of the astrocyte monolayer (Figure 1). The co-cultures obtained were named LPS cultures where microglial cells exhibited an amoeboid phenotype with retracted or short thick processes, suggestive of their activation [23], as expected for in vitro LPS treated microglia. In support of the potential of LPS to activate microglia and to prevent their proliferation, it was observed that in co-cultures grown without any treatment, the percentage of microglia was higher, approximately 8.0%, and presented longer processes [22].Figure 1\nImmunofluorescent micrograph representative of co-cultures treated with lipopolysaccharide (LPS) (0.1 μg/ml). Astrocytes were labeled with rabbit anti-GFAP (TRICT, red) and microglia with mouse anti-CD11b (Alexa Fluor 488, green). The percentage of microglia in cultures was 4.36 ± 0.42% (n = 5). Scale bar: 50 μm.\n\nImmunofluorescent micrograph representative of co-cultures treated with lipopolysaccharide (LPS) (0.1 μg/ml). 
Effects of uracil nucleotides on cell proliferation

LPS cultures were incubated with several uracil nucleotides to evaluate their influence on cell proliferation. UTP, which activates the P2Y2,4 subtypes, UDP and its analogue PSB 0474, both selective for P2Y6 receptors, and UDP-glucose, which is selective for P2Y14 receptors [14], were tested over a wide range of concentrations. Except for UDP-glucose, the uracil nucleotides UTP, UDP and PSB 0474 caused a concentration-dependent inhibition of cell proliferation (Figure 2).

Figure 2. Effects of uracil nucleotides on cell proliferation. Lipopolysaccharide (LPS) cultures were incubated with nucleotides for 48 h and, for the last 24 h, methyl-[3H]-thymidine was added to the medium at a concentration of 1 μCi/ml. Effects on cell proliferation were estimated by methyl-[3H]-thymidine incorporation and expressed as a percentage of control. Values are means ± SEM from five to ten experiments. *P <0.05, significant differences from control.
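A concentration-dependent inhibition such as the one in Figure 2 is often summarized by fitting a Hill (four-parameter logistic) curve and reporting an IC50. The sketch below illustrates such a fit; it is not an analysis performed in the paper, and the concentrations and responses are invented.

# Illustrative Hill-equation fit to a concentration-response curve.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ic50, n):
    """Four-parameter logistic: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** n)

conc = np.array([1e-7, 1e-6, 1e-5, 1e-4, 1e-3])   # M, placeholder values
resp = np.array([98.0, 90.0, 72.0, 55.0, 45.0])   # % of control, placeholder

p0 = [100.0, 40.0, 1e-5, 1.0]                     # initial guess
params, _ = curve_fit(hill, conc, resp, p0=p0, maxfev=10000)
top, bottom, ic50, n = params
print(f"estimated IC50 ~ {ic50:.2e} M")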
Extracellular metabolism of uracil nucleotides

UTP metabolism was very fast, with a half-life of 10.3 ± 0.5 min (n = 5), and the main metabolite formed during the first hour was UDP, which remained in the medium for up to 8 h (Figure 3A). When tested from the beginning, UDP metabolism was much slower than that of UTP (Figure 3B); its half-life was 77.3 ± 2.3 min (n = 4; P <0.05). The PSB 0474 half-life could not be evaluated, because the highest concentration that inhibited cell proliferation was still below the detection limit of the method used to study the metabolism of these compounds.

Figure 3. Metabolism of uracil nucleotides in lipopolysaccharide (LPS) cultures. Cells were incubated with 0.1 mM of (A) UTP or (B) UDP and samples were collected at 0, 1, 3, 8, 24 and 48 h. Uracil nucleotides and their metabolites were quantified by HPLC-UV. Values are means ± SEM from four experiments.
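Half-lives such as those reported above can be obtained from time courses like the 0- to 60-min sampling described in the methods, by fitting a mono-exponential decay and converting the rate constant to t1/2 = ln(2)/k. A minimal sketch follows, with made-up data chosen to give roughly the reported UTP half-life.

# Mono-exponential fit of a nucleotide decay time course (illustrative).
import numpy as np
from scipy.optimize import curve_fit

def decay(t, c0, k):
    """Simple first-order decay: c(t) = c0 * exp(-k t)."""
    return c0 * np.exp(-k * t)

t_min  = np.array([0, 5, 10, 15, 30, 60], dtype=float)          # sampling times
utp_mm = np.array([0.100, 0.071, 0.051, 0.036, 0.013, 0.002])   # made-up data

(c0, k), _ = curve_fit(decay, t_min, utp_mm, p0=[0.1, 0.07])
print(f"t1/2 ~ {np.log(2) / k:.1f} min")  # compare: 10.3 ± 0.5 min reported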
Expression and pharmacological characterization of the P2Y receptor subtype involved in the inhibition of cell proliferation induced by uracil nucleotides

The inhibitory effect of both PSB 0474 (1 μM) and UDP (1 mM) on cell proliferation was abolished by the selective P2Y6 receptor antagonist MRS 2578 (1 μM; Figure 4A). The inhibitory effect of UTP (0.1 mM) was also abolished by MRS 2578 (not shown), suggesting that this effect depends on its conversion into UDP and activation of P2Y6 receptors. Uncoupling Gi/o proteins from receptors with pertussis toxin (PTX, 0.1 μg/ml) did not change the effect of UDP (1 mM), which was attenuated by the phospholipase C (PLC) inhibitor U 73122 (1 μM), but not by its inactive analog U 73343 (1 μM), and by the protein kinase C (PKC) inhibitor RO 32-0432 (1 μM; Figure 4A), confirming the coupling of P2Y6 receptors to the Gq-PLC-PKC pathway.

Figure 4. Pyrimidine receptors and signaling pathway involved in the inhibition of cell proliferation mediated by uracil nucleotides. (A) Lipopolysaccharide (LPS) cultures were incubated with the nucleotides for 48 h and, for the last 24 h, methyl-[3H]-thymidine was added to the medium at a concentration of 1 μCi/ml. The P2Y6 antagonist MRS 2578 and enzyme inhibitors were added to the medium 1 h before the nucleotides, except PTX, which was added 24 h before. Effects on cell proliferation were estimated by methyl-[3H]-thymidine incorporation and expressed as percentage of change from the respective control. Values are means ± SEM from eight to twenty experiments. *P <0.05, significant differences from the respective control; +P <0.05, significant differences from the agonist alone. (B) Representative western blots showing the expression of P2Y6 receptors in whole cell lysates. Two bands of 25 kDa, one of 36 kDa and another of 86 kDa specifically reacted with the anti-P2Y6 antibody. These bands were absent in the presence of the respective neutralizing peptide (np).

P2Y6 receptor expression in LPS cultures comprised four bands, two of 25 kDa, one of 36 kDa and another of 86 kDa, all of which were absent in the presence of the P2Y6 receptor neutralizing peptide (np; Figure 4B). Analysis of the cellular localization of P2Y6 receptors by immunocytochemistry revealed a preferential co-localization with microglia (Figure 5), suggesting that uracil nucleotides may inhibit cell proliferation via microglial cells.

Figure 5. Cellular distribution and localization of P2Y6 receptors in lipopolysaccharide (LPS) cultures. Microglia were labeled with mouse anti-CD11b (Alexa Fluor 488, green), P2Y6 receptors with rabbit anti-P2Y6 (TRITC, red) and nuclei with Hoechst 33258 (blue). The orange spots represent P2Y6 receptor expression coincident with microglia, but not with astrocytes (blue nuclei that do not label with the CD11b and P2Y6 receptor antibodies). Scale bar: 20 μm.
P2Y6 receptor-mediated nitric oxide production

LPS increases iNOS expression and NO production by microglia [24,25], but this effect is attenuated during chronic LPS stimulation [9,10]. Since NO may inhibit astroglial proliferation [26], it was investigated whether P2Y6 receptor activation by uracil nucleotides modulated NO release in chronically LPS-stimulated microglia. UDP (1 mM) and PSB 0474 (10 μM) increased NO release into the culture medium (Figure 6), an effect abolished by the selective P2Y6 receptor antagonist MRS 2578 (1 μM), by the PLC inhibitor U 73122 (1 μM), and by the PKC inhibitor RO 32-0432 (1 μM; Figure 6). Additionally, the inhibitory effect of UDP on cell proliferation (1 mM; 44 ± 2, n = 25) was abolished by the NOS inhibitor L-NAME (0.1 mM; 7 ± 3, n = 8, P <0.05), and this effect was reversed in the presence of L-arginine (3 mM; 28 ± 6, n = 6; P <0.05).

Figure 6. Nitric oxide synthesis mediated by uracil nucleotides in lipopolysaccharide (LPS) cultures. Cells were incubated with UDP or PSB 0474 for 48 h. The P2Y6 antagonist MRS 2578 and enzyme inhibitors were added to the medium 1 h before the nucleotides. The concentration of nitrites plus nitrates was evaluated in the culture supernatants and expressed as percentage of change from the respective control. Values are means ± SEM from four experiments. *P <0.05, significant differences from the respective control; +P <0.05, significant differences from the agonist alone.

To identify the cellular source of the NO released upon P2Y6 receptor activation, iNOS expression was immunolocalized with either microglia or astrocytes in LPS cultures. No iNOS expression was detected in astrocytes, either in control conditions or after treatment with the uracil nucleotides (Figure 7), whereas in microglia iNOS expression was residual in control conditions but was significantly increased after 48 h of incubation with PSB 0474 (10 μM) or UDP (1 mM; Figure 7).

Figure 7. Cellular localization of inducible nitric oxide synthase (iNOS) in lipopolysaccharide (LPS) cultures. Cells were incubated with UDP or PSB 0474 for 48 h. Microglia were labeled with mouse anti-CD11b (Alexa Fluor 488, green), astrocytes with mouse anti-GFAP (Alexa Fluor 488, green) and iNOS with rabbit anti-iNOS (TRITC, red). Cell nuclei were labeled with Hoechst 33258 (blue). The orange spots represent iNOS expression in the cells and are coincident with an increased expression of iNOS in microglia, but not in astrocytes, upon stimulation with the uracil nucleotides. Scale bar: 10 μm.
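The NO readout above is expressed as a percentage of change from the respective control. Assuming the kit's standard curve has already converted absorbances to micromolar nitrate plus nitrite, the normalization itself is straightforward; the values below are illustrative, not measured data.

# Percentage-of-control normalization for the nitrate + nitrite readout.
import numpy as np

def percent_change_from_control(treated_um, control_um):
    """Express treated readings as % change from the control mean."""
    control_mean = np.mean(control_um)
    return 100.0 * (np.asarray(treated_um) - control_mean) / control_mean

control = [4.1, 3.8, 4.3, 4.0]   # µM nitrate + nitrite, solvent wells (made up)
udp     = [6.9, 7.4, 6.5, 7.1]   # µM, UDP-treated wells (made up)
print(percent_change_from_control(udp, control))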
P2Y6 receptor-mediated inhibition of cell proliferation: mechanisms involved

Under these experimental conditions, microglial cells are responsible for the NO production mediated by P2Y6 receptors. To clarify the mechanisms behind the inhibition of cell proliferation by uracil nucleotides, their effects on cell cycle progression and cell death were investigated.

Uracil nucleotides had no effect on the cell cycle progression of glial cells. The percentage of cells in the G0/G1, S and G2/M phases of the cell cycle was similar in control cultures (75.4 ± 1.8, 17.6 ± 3.2 and 7.1 ± 1.5, respectively; n = 3) and in cultures treated with UDP (1 mM; 76.4 ± 1.8, 13.6 ± 2.8 and 10.0 ± 1.5, respectively; n = 3), excluding cell cycle arrest as the mechanism involved in the inhibition of cell proliferation.

Another possibility was that the inhibition of cell proliferation mediated by uracil nucleotides resulted from an increase in cell death. UDP (1 mM) and PSB 0474 (10 μM) caused no change in LDH release, which excluded cell death by necrosis (Figure 8A). However, both UDP (1 mM) and PSB 0474 (10 μM) induced cell death by apoptosis, as assessed by the TUNEL assay (Figure 8A).

Figure 8. Effects of uracil nucleotides on cell death in lipopolysaccharide (LPS) cultures. (A) Necrotic cell death was evaluated by measuring the release of lactate dehydrogenase (LDH) and apoptotic cell death by the TUNEL assay, after incubation with uracil nucleotides or solvent for 48 h. LDH activity was measured in the culture medium and in the culture extracts, and the fraction released is represented as a percentage of total LDH. The number of apoptotic cells was expressed as a percentage of the total number of cells counted. Values are means ± SEM from four to seven experiments. *P <0.05, significant differences from the respective control (solvent). (B) Cellular localization of apoptotic nuclei, obtained with Hoechst 33258 staining in LPS cultures. Astrocytes were labeled with rabbit anti-GFAP (TRITC, red), microglia with mouse anti-CD11b (Alexa Fluor 488, green) and cell nuclei with Hoechst 33258 (blue). LPS cultures were incubated with solvent or UDP for 48 h. Shrunken nuclei with a bright fluorescent appearance, characteristic of apoptotic nuclei, are clearly coincident with astrocytes (white arrows), but not with microglia. Scale bar: 20 μm.

Cultures treated with UDP (1 mM) showed an increase in the number of shrunken nuclei with a bright fluorescent appearance on Hoechst 33258 staining (Figure 8B). The percentage of apoptotic nuclei in control cultures was 6.75 ± 0.65% (n = 8) and increased to 16.02 ± 0.75% (n = 8, P <0.05) in cultures treated with UDP (1 mM). This increase in the number of apoptotic cells was attenuated to 11.83 ± 0.61% (n = 8, P <0.05) when UDP was tested in the presence of L-NAME (0.1 mM). Additionally, the apoptotic nuclei co-localized with astrocytes but not with microglia (Figure 8B).
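The cell cycle fractions above were quantified with ModFit LT, which fits overlapping distributions to the DNA histogram. As a crude, purely illustrative stand-in, cells can be binned by propidium iodide intensity relative to the G0/G1 peak (about 1× DNA content for G0/G1, about 2× for G2/M, intermediate for S). The thresholds and synthetic data below are placeholders, not the study's gating.

# Naive threshold-based cell cycle gating (illustration only).
import numpy as np

def cell_cycle_fractions(pi_intensity):
    """Return (%G0/G1, %S, %G2/M) from per-cell PI intensities."""
    pi = np.asarray(pi_intensity, dtype=float)
    g1_peak = np.median(pi)              # assumes most cells are in G0/G1
    g1 = pi < 1.25 * g1_peak
    g2m = pi > 1.75 * g1_peak
    s = ~(g1 | g2m)
    n = pi.size
    return 100 * g1.sum() / n, 100 * s.sum() / n, 100 * g2m.sum() / n

rng = np.random.default_rng(0)
# Synthetic histogram: roughly 75% G0/G1, 18% S, 7% G2/M
pi = np.concatenate([
    rng.normal(100, 5, 7500),            # G0/G1 at ~1x DNA content
    rng.uniform(130, 170, 1800),         # S phase, intermediate content
    rng.normal(200, 8, 700),             # G2/M at ~2x DNA content
])
print(cell_cycle_fractions(pi))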
Discussion

The CNS, with the contribution of glial cells (both astrocytes and microglia), can mount an innate immune reaction in response to danger signals, such as endogenous nucleotides released upon cerebral injury or exogenous pathogens, systemic bacteria or viruses [27]. It is now well recognized that microglial functional plasticity is strictly stimulus-dependent [2,8]; however, it is not known how microglia coordinate the inflammatory response and the progress of astrogliosis in a paradigm of chronic glial activation, characteristic of several inflammatory pathologies. Exposure of the CNS to endotoxins, such as LPS, is a useful approach to activate immunity, and microglial cells in particular [28]. Therefore, this immunological challenge was used in an in vitro model in which microglia and astrocytes were present and could cooperate to reproduce some of the conditions that might be observed during chronic brain inflammation. It consisted of co-cultures of astrocytes containing 4 to 5% microglia chronically stimulated with 0.1 μg/ml LPS for 30 days.

In previous studies, we have shown that in co-cultures of astrocytes and microglia containing a higher percentage of microglia, but without LPS treatment, UTP caused an inhibition of cell proliferation that could be correlated with a higher expression of P2Y6 receptors in microglia. In contrast, in highly enriched astroglial cultures, whether or not treated with LPS, uracil nucleotides had no effect on cell proliferation [22], suggesting a fundamental role of microglial cells in the P2Y6 receptor-mediated inhibitory effect. In LPS cultures, UTP also inhibited cell proliferation, and this effect extended to UDP and to the selective P2Y6 receptor agonist PSB 0474 [29], but not to the selective P2Y14 receptor agonist UDP-glucose [30]. The inhibitory effect of uracil nucleotides was mediated by P2Y6 receptors, since it was abolished by MRS 2578, the selective antagonist of this receptor subtype [31].
Although UTP has some affinity for P2Y6 receptors [32], its inhibitory effect was mainly dependent on its metabolism and the formation of UDP, since UTP had a very short half-life, being rapidly converted into UDP, which remained in the culture medium for about 8 h. This conclusion is further supported by the observation that an inhibitory effect of UTP and of the other uracil nucleotides was already observed within 8 h of incubation (not shown), before any significant accumulation of the other metabolites, such as uridine or uracil, could be detected in the culture medium.

The UDP half-life was much longer than that of UTP, which may be explained by the NTPDases expressed in these cultures. Microglia express mainly NTPDase1, which hydrolyses UTP faster than UDP [33], and astrocytes, the main cell type present in these cultures, express high levels of NTPDase2 [34]. This enzyme seems to be upregulated upon LPS stimulation [35] and preferentially hydrolyzes UTP over UDP [36]. Thus, the predominance of NTPDase2 activity in these cultures favors UDP accumulation and a preferential activation of P2Y6 receptors.

P2Y6 receptors were coupled to the Gq-PLC-PKC pathway [14]; however, activation of this pathway in astrocytes has been shown to mediate cell proliferation [20], suggesting a microglial localization. Therefore, the mechanisms involved in the inhibitory effect mediated by P2Y6 receptors were further investigated by examining the cellular localization of these receptors. Expression of P2Y6 receptors in LPS cultures revealed a multiple-band pattern, as previously described and discussed [22]. Only three of the four bands were lost after adsorption with the neutralizing peptide, suggesting that this antibody also detects some other antigen [37]. The cellular localization of P2Y6 receptors was analyzed by immunocytochemistry, which revealed that this P2Y6 antibody reacts mainly with microglial antigens. This observation is also in agreement with previous studies showing an up-regulation of P2Y6 receptor expression in LPS-activated microglia [16,38]. Although no specific P2Y6 antibodies are available [37], the results obtained by western blot and immunocytochemistry, together with the pharmacological data and the fact that UDP had no effect in highly enriched astroglial cultures [22], all support the conclusion that P2Y6 receptors are mainly localized in microglia.

LPS increases iNOS expression and NO production in microglia via PKC activation [24,25,39]. However, NO release by microglia decays upon chronic LPS stimulation [9,10], suggesting a downregulation of this signaling pathway. In LPS cultures, all the uracil nucleotides that inhibited cell proliferation also increased the release of NO, an effect mediated by P2Y6 receptors coupled to the PLC-PKC pathway. Thus, a crosstalk between the LPS and P2Y6 receptor signaling pathways may ensure additional iNOS expression and NO production when the LPS response is already downregulated. The inhibitory effect of UDP was also prevented by L-NAME, a NOS inhibitor, confirming the involvement of NO, which was shown to be exclusively produced by microglial iNOS [25,40].
Our results confirm previous observations, since only microglia showed iNOS immunoreactivity, which was upregulated when LPS cultures were treated with uracil nucleotides, suggesting that microglia are the main source of the NO detected in the culture medium, both under basal conditions and upon P2Y6 receptor activation.

NO may potentially damage cells through the formation of reactive nitrogen species that cause DNA fragmentation [41]. This mechanism could be responsible for the inhibition of cell proliferation induced by uracil nucleotides, since these compounds increased the number of cells presenting DNA fragmentation, an indicator of cell death by apoptosis. In some systems NO-mediated apoptosis is preceded by cell cycle arrest [42], which was not observed in this study. In LPS cultures, UDP induced cell death without any prior effect on the cycle of confluent, synchronized cells. P2Y6 receptor activation and NO release affected the viability of astrocytes, but not of microglia. Shrunken nuclei with a bright fluorescent appearance, indicating the chromatin condensation characteristic of apoptotic cells, were highly coincident with astrocytes but did not co-localize with microglia. Because the cell population in the S-phase of the cell cycle was not changed by UDP, the increase in NO release seems to result from iNOS upregulation in individual pre-existing microglia, excluding the possibility of microglial proliferation. This result also indicates that P2Y6 receptors do not mediate microglial proliferation and, therefore, their activation cannot modify the anti-proliferative profile established by 0.1 μg/ml LPS [43].

Although uracil nucleotides had no effect in highly enriched astroglial cultures, cell death by apoptosis mediated by pyrimidine receptors was already observed in co-cultures of astrocytes and microglia (without LPS treatment), albeit to a smaller extent [22], suggesting that LPS and/or uracil nucleotides are not able to induce astrocyte death without the contribution of microglia. Nevertheless, it seems that in co-cultures of astrocytes and microglia LPS potentiates the astrocyte apoptosis mediated by uracil nucleotides, since it increases from 6% in co-cultures without LPS treatment [22] to 15% in LPS cultures. LPS facilitates P2Y6 receptor-mediated NO release by microglia, but other cytokines, such as IL-1β or TNF-α [44], may come into play, contributing to astroglial cell death.

Conclusions

The present study shows that chronically activated microglia influence the astroglial response to uracil nucleotides, favoring astroglial apoptosis as a consequence of microglial P2Y6 receptor activation that induces NO release (Figure 9). Therefore, P2Y6 receptor activation may represent an important mechanism by which microglia control excessive astrogliosis that may hamper neuronal regeneration. Nevertheless, human cortical astrocytes are diverse and structurally and functionally more complex than their rodent counterparts [45]; therefore, this hypothesis should be further confirmed in human glial cells.

Figure 9. Schematic representation of the purinergic mechanisms mediating microglia-astrocyte communication in lipopolysaccharide (LPS) cultures. Uracil nucleotides released during the inflammatory response activate microglial P2Y6 receptors coupled to the phospholipase C (PLC) - protein kinase C (PKC) pathway, which mediates an increase in inducible nitric oxide synthase (iNOS) expression and, consequently, in nitric oxide (NO) release.
Diffusible NO mediates astroglial apoptosis.
[ "introduction", "materials|methods", null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", "conclusion" ]
[ "lipopolysaccharide", "astroglial proliferation", "microglia", "uracil nucleotides", "P2Y6 receptors", "nitric oxide", "apoptosis" ]
Background

Chronic inflammation is characteristic of several brain disorders leading to loss of cognitive function. In the central nervous system (CNS), the inflammatory response is mediated by glial cells that acquire reactive phenotypes to participate in neuronal repair mechanisms [1,2]. In particular, astrocytes respond with a complex reaction named astrogliosis, which includes several morphological and functional changes, such as cell hypertrophy, glial fibrillary acidic protein (GFAP) and nestin up-regulation [3], and cell proliferation [4]. These progressive changes are time- and context-dependent, being regulated by inflammatory mediators produced at the lesion site [5]. Activated microglia are the main source of these inflammatory mediators, assuming an important role in the modulation of astrogliosis progression during the course of the inflammatory response [6,7]. These mediators may be pro-inflammatory, such as IL-1β, TNF-α and nitric oxide (NO), or anti-inflammatory, such as IL-10, IL-4 and TGF-β, according to the microglial phenotype, which is highly dependent on the pathological context [2,8]. Lipopolysaccharide (LPS) is an agonist of toll-like receptor 4 (TLR4), inducing a pro-inflammatory phenotype in microglia. However, chronic activation of TLR4 receptors has been shown to promote microglial polarization toward an anti-inflammatory phenotype [9,10], but its impact on the inflammatory response and on the modulation of astrogliosis remains to be established. In fact, different extents of astrogliosis and microgliosis have different impacts on neuronal regeneration [1,2]. At the extreme end of the astrogliosis spectrum, proliferating astrocytes may interact with fibroblasts and other glial cells to form a glial scar, creating an environment that prevents axon regeneration [11], leading to the idea that inhibition or control of this response would be beneficial to neuronal survival after injury. Therefore, the mediators produced by chronically activated microglia may have an important role in preventing excessive astrogliosis and promoting neuronal regeneration and sprouting. In a context of chronic brain inflammation, both adenine and uracil nucleotides attain high concentrations in the extracellular medium (in the mM range) due to cell damage or death, and activate P2 receptors in both types of glial cells, contributing to astrogliosis [12] and reinforcing the release of inflammatory messengers produced by microglia [13]. In particular, the uracil nucleotides may activate pyrimidine receptors, such as the P2Y2,4,6 and P2Y14 receptor subtypes [14], which participate in the inflammatory response [15]. P2Y6 receptors contribute to the clearance of necrotic cell debris by stimulating microglial phagocytosis of dying neurons [16], whereas P2Y2 receptors mediate astrocyte migration [17], but the effect of uracil nucleotides on the modulation of astroglial proliferation and their role in the control of glial scar formation are largely unknown. To investigate the role of pyrimidine receptors in microglia-astrocyte signaling and its impact on the control of astrogliosis, a cell culture model that could represent a state of chronic brain inflammation was used, consisting of co-cultures of astrocytes and microglia submitted to long-term treatment with LPS (0.1 μg/ml).
The cultures obtained were used to investigate: i) the effect of uracil nucleotides on cell proliferation; ii) the influence of ectonucleotidases on uracil nucleotide metabolism and the consequent impact on cell proliferation; iii) the signaling pathways and mechanisms activated by the pyrimidine receptors involved in the control of cell proliferation; and iv) the contribution of microglial pyrimidine receptors to the modulation of astroglial proliferation.

Methods:

Materials

The antibodies used and the respective information are listed in Table 1. The following drugs and reagents were used: L-arginine (L-ARG), lipopolysaccharide from Salmonella typhimurium (LPS), N-nitro-L-arginine methyl ester hydrochloride (L-NAME), pertussis toxin (PTX), bisindolylmaleimide XI hydrochloride (RO 32-0432), penicillin, streptomycin, uracil, uridine, uridine-5'-monophosphate disodium (UMP), uridine-5'-diphosphate sodium (UDP), uridine-5'-triphosphate trisodium (UTP), uridine-5'-diphosphoglucose disodium (UDP-glucose), 1-[6-[((17β)-3-methoxyestra-1,3,5[10]-trien-17-yl)amino]hexyl]-2,5-pyrrolidinedione (U 73343), 1-[6-[((17β)-3-methoxyestra-1,3,5[10]-trien-17-yl)amino]hexyl]-1H-pyrrole-2,5-dione (U 73122), 2'-(4-hydroxyphenyl)-5-(4-methyl-1-piperazinyl)-2,5'-bi-1H-benzimidazole trihydrochloride hydrate (Hoechst 33258), ribonuclease A (RNase) and propidium iodide (PI) from Sigma-Aldrich (Sintra, Portugal); N,N''-1,4-butanediylbis[N'-(3-isothiocyanatophenyl)thiourea] (MRS 2578) and 3-(2-oxo-2-phenylethyl)uridine-5'-diphosphate disodium (PSB 0474) from Tocris (Bristol, UK); methyl-[3H]thymidine (specific activity 80 to 86 Ci/mmol) and the enhanced chemiluminescence (ECL) western blotting system from Amersham Biosciences (Lisbon, Portugal). Stock solutions of drugs were prepared in dimethyl sulfoxide or distilled water and kept at -20°C. Working solutions were prepared from stock solutions diluted in culture medium immediately before use.

Table 1. Primary and secondary antibodies used in immunocytochemistry and western blotting

Primary antibodies
| Antigen | Code | Host | Dilution | Supplier |
| GFAP | G9269 | Rabbit | 1:600 (IF) | Sigma |
| GFAP | G6171 | Mouse | 1:600 (IF) | Sigma |
| CD11b | sc-53086 | Mouse | 1:50 (IF) | Santa Cruz Biotechnology, Inc |
| P2Y6 | APR-011 | Rabbit | 1:200 (IF); 1:300 (WB) | Alomone |
| iNOS | AB5382 | Rabbit | 1:5 000 (IF) | Chemicon |
| Actin | sc-1615-R | Rabbit | 1:200 (WB) | Santa Cruz Biotechnology, Inc |

Secondary antibodies
| Antibody | Code | Host | Dilution | Supplier |
| TRITC anti-rabbit | T6778 | Goat | 1:400 (IF; GFAP, P2Y6); 1:2 000 (IF; iNOS) | Sigma |
| Alexa Fluor 488 anti-mouse | A-11034 | Goat | 1:400 (IF) | Molecular Probes |
| Anti-rabbit conjugated to horseradish peroxidase | sc-2004 | Goat | 1:10 000 (WB) | Santa Cruz Biotechnology, Inc |

IF, immunofluorescence; WB, western blot analysis.
Cell cultures

Animal handling and experiments were in accordance with the guidelines prepared by the Committee on Care and Use of Laboratory Animal Resources (National Research Council, USA), followed Directive 2010/63/EU of the European Parliament and of the Council of the European Union, and were approved by the ethics committee of the Faculty of Pharmacy of the University of Porto. Primary co-cultures of astrocytes and microglia were prepared from newborn (P0-P2) Wistar rats (Charles River, Barcelona, Spain) as previously described [18], with minor modifications. Cell cultures were treated with 0.1 μg/ml LPS and incubated at 37°C in a humidified atmosphere of 95% air, 5% CO2. The medium containing 0.1 μg/ml LPS was replaced one day after culture preparation and subsequently twice a week, so that LPS remained in the cultures from the first day in vitro (DIV1) until the end of the experiments. Cultures were synchronized to a quiescent phase of the cell cycle by shifting the fetal bovine serum concentration in the medium from 10% to 0.1% for 48 h, and were then used in experiments at DIV30.
Immunocytochemistry

Cultures were fixed and permeabilized as described in previous studies [19]. For double immunofluorescence, cultures were incubated with the primary antibodies (Table 1) overnight at 4°C. GFAP-, CD11b-, P2Y6 receptor- and iNOS-positive cells were visualized after 1 h incubation at room temperature with the secondary antibodies (Table 1). In negative controls, the primary antibody was omitted. Cell nuclei were labeled with Hoechst 33258 (5 μg/ml) for 30 min at room temperature. To evaluate the percentage of microglia in the cultures, approximately 200 cells per culture were counted, and the number of CD11b-positive cells was expressed as a percentage of the total number of cells counted.

DNA synthesis

Cultures grown in 24-well plates were incubated with uracil nucleotides or solvent for 48 h, and methyl-[3H]-thymidine was added in the last 24 h at a concentration of 1 μCi/ml. Antagonists or enzyme inhibitors were added to the medium 1 h before the uracil nucleotides. In experiments performed in the presence of PTX, the drug was added to the culture medium 24 h before the uracil nucleotides. At the end of the 48-h incubation period, the protein content and methyl-[3H]-thymidine incorporation were evaluated as previously described [20].
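The read-out of this assay reduces to correcting incorporated counts for protein content and expressing the result as a percentage of control. A minimal sketch of that normalization follows; the variable names and counts are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical per-well data: scintillation counts (cpm) and protein (mg).
control_cpm = np.array([12100.0, 11800.0, 12450.0])
control_protein = np.array([0.21, 0.20, 0.22])
treated_cpm = np.array([6900.0, 7300.0, 6600.0])
treated_protein = np.array([0.20, 0.21, 0.19])

# Correct thymidine incorporation for protein content (cpm per mg protein).
control_norm = control_cpm / control_protein
treated_norm = treated_cpm / treated_protein

# Express treated wells as a percentage of the mean control value.
percent_of_control = 100.0 * treated_norm / control_norm.mean()
mean = percent_of_control.mean()
sem = percent_of_control.std(ddof=1) / np.sqrt(percent_of_control.size)
print(f"proliferation: {mean:.1f} ± {sem:.1f} % of control")
```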
Metabolism of nucleotides

The metabolism of nucleotides was evaluated as previously described [20]. Briefly, cultures were incubated with uracil nucleotides, all at 0.1 mM, and samples were collected at 0, 1, 3, 8, 24 and 48 h. For evaluation of the UTP half-life, additional samples were collected at 0, 5, 10, 15, 30 and 60 min. The uracil nucleotides and their metabolites were separated and quantified by ion-pair reverse-phase high-performance liquid chromatography (HPLC) with UV detection set at 254 nm [21]. Standards were analyzed under the same conditions and the retention times were (min): uracil (0.95), uridine (1.32), UMP (2.15), UDP (4.40) and UTP (6.40). The concentrations of nucleotides and metabolites were calculated by peak area integration, followed by interpolation in calibration curves obtained with standards.

Western blot analysis

The expression of P2Y6 receptors was evaluated as previously described [22]. Membranes were probed for 2 h at room temperature with appropriately diluted primary rabbit polyclonal antibodies anti-P2Y6 or anti-actin, followed by the secondary goat anti-rabbit IgG antibody conjugated to horseradish peroxidase (Table 1). The immunocomplexes were detected by ECL.

Nitric oxide assay

Cultures were incubated with uracil nucleotides or solvent for 48 h. The P2Y6 antagonist MRS 2578 or other enzyme inhibitors, when tested, were added 1 h before the uracil nucleotides. At the end of the 48-h incubation period, the nitric oxide released into the culture medium was assessed by measuring the accumulation of nitrates plus nitrites according to the instructions of a Nitrate/Nitrite Colorimetric Assay kit (Cayman, France). The content of nitrates plus nitrites present in the supernatants was expressed as a percentage of the respective control.
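Quantification by "interpolation in calibration curves", as described for the HPLC-UV measurements, amounts to mapping a sample's peak area onto the known concentrations of the standards. A minimal sketch, with made-up peak areas (the actual calibration data are not reported in the paper):

```python
import numpy as np

# Calibration standards: known concentrations (mM) and measured peak areas
# (arbitrary units). Values are illustrative, not the study's calibration data.
std_conc = np.array([0.0125, 0.025, 0.05, 0.1])
std_area = np.array([310.0, 640.0, 1250.0, 2480.0])

# Interpolate a sample's peak area onto the calibration points.
# np.interp requires the x-coordinates (here, the areas) to be increasing.
sample_area = 980.0
sample_conc = np.interp(sample_area, std_area, std_conc)
print(f"UDP concentration ≈ {sample_conc:.4f} mM")
```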
Cell cycle

The ability of uracil nucleotides to arrest glial cells at a specific cell cycle stage was evaluated in cultures treated with uracil nucleotides or solvent for 48 h. Cells were harvested by trypsinization, rinsed with ice-cold PBS and fixed in ice-cold 70% ethanol for 15 min at -20°C. Cells were rinsed again with PBS, incubated with 0.2 mg/ml RNase A at 37°C for 15 min and then with 0.5 mg/ml propidium iodide for at least 30 min in the dark at room temperature. The percentage of cells in each phase of the cell cycle was determined by flow cytometric analysis using a FACSCalibur flow cytometer and CellQuest software, both from BD Biosciences (Enzifarma, Porto, Portugal). Cell cycle phases were identified and quantified using ModFit LT software (Verity Software House Inc., Topsham, USA).

Cell death assays

Necrotic cell death was assessed by measuring lactate dehydrogenase (LDH) release with an enzymatic assay according to the manufacturer's instructions (Sigma-Aldrich, Sintra, Portugal). Cultures were incubated with uracil nucleotides or solvent for 48 h. LDH activity was determined in the culture supernatants and the respective extracts, and the amount of LDH released into the culture medium was expressed as a percentage of total LDH. Apoptotic cell death was evaluated either by the indirect terminal transferase-mediated dUTP-digoxigenin nick end-labeling (TUNEL) assay to detect DNA fragmentation, using an ApopTag peroxidase detection kit (Millipore, Madrid, Spain), or by the analysis of nuclear morphology with Hoechst 33258 staining (described above). Cultures were treated with uracil nucleotides or solvent for 48 h and, when present, L-NAME was added 1 h before the nucleotides. The number of TUNEL-positive cells was evaluated as previously described [20]. The number of apoptotic cells observed with Hoechst 33258 staining was evaluated by analyzing eight high-power fields (×400) in each culture, and the number of cells showing shrunken nuclei with a bright fluorescent appearance was expressed as a percentage of the total number of cells counted.
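Both cell death read-outs reduce to simple proportions. A short sketch with hypothetical numbers (not the study's raw data):

```python
# LDH released as a percentage of total LDH (supernatant + cell extract).
ldh_supernatant = 42.0   # enzymatic activity in the culture medium (U/l)
ldh_extract = 310.0      # activity recovered from the cell extract (U/l)
percent_released = 100.0 * ldh_supernatant / (ldh_supernatant + ldh_extract)

# Apoptotic nuclei as a percentage of all cells counted over eight fields.
apoptotic_counts = [3, 5, 2, 4, 6, 3, 4, 5]      # shrunken, bright nuclei
total_counts = [28, 31, 25, 30, 33, 27, 29, 32]  # all Hoechst-labeled nuclei
percent_apoptotic = 100.0 * sum(apoptotic_counts) / sum(total_counts)

print(f"LDH released: {percent_released:.1f}% of total")
print(f"apoptotic nuclei: {percent_apoptotic:.1f}% of cells counted")
```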
Statistical analysis

Data are expressed as means ± standard errors of the mean (SEM) from n independent experiments. Statistical analysis was carried out using the unpaired Student's t-test or ANOVA followed by Dunnett's multiple comparison test. Differences were considered significant at P values lower than 0.05.
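As a sketch of this workflow in SciPy (scipy.stats.dunnett requires SciPy ≥ 1.11); the group values below are invented for illustration only:

```python
import numpy as np
from scipy import stats

# Hypothetical percent-of-control values from independent experiments.
control = np.array([100.0, 97.0, 103.0, 99.0, 101.0])
udp = np.array([58.0, 54.0, 61.0, 56.0, 52.0])
psb = np.array([63.0, 60.0, 66.0, 58.0, 62.0])

# Two groups: unpaired Student's t-test.
t, p = stats.ttest_ind(control, udp)
print(f"t-test control vs UDP: P = {p:.4f}")

# Several groups: one-way ANOVA, then Dunnett's test against the control.
f, p_anova = stats.f_oneway(control, udp, psb)
dunnett = stats.dunnett(udp, psb, control=control)
print(f"ANOVA: P = {p_anova:.4f}; Dunnett P-values: {dunnett.pvalue}")
```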
Results:

Characterization of the co-cultures

The primary cortical brain cultures treated with lipopolysaccharide (LPS; 0.1 μg/ml) for 30 days in vitro consisted of monolayers of astrocytes exhibiting a flattened, polygonal morphology, with 4.36 ± 0.42% (n = 5) microglia spread over the top of the astrocyte monolayer (Figure 1). The co-cultures obtained were named LPS cultures; in these cultures, microglial cells exhibited an amoeboid phenotype with retracted or short thick processes, suggestive of their activation [23], as expected for LPS-treated microglia in vitro. Supporting the potential of LPS to activate microglia and to prevent their proliferation, in co-cultures grown without any treatment the percentage of microglia was higher (approximately 8.0%) and the cells presented longer processes [22].

Figure 1. Immunofluorescent micrograph representative of co-cultures treated with lipopolysaccharide (LPS) (0.1 μg/ml). Astrocytes were labeled with rabbit anti-GFAP (TRITC, red) and microglia with mouse anti-CD11b (Alexa Fluor 488, green). The percentage of microglia in cultures was 4.36 ± 0.42% (n = 5). Scale bar: 50 μm.

In this study, LPS cultures were used to examine the effect of uracil nucleotides on cell proliferation, as well as the contribution of activated microglia to this response.

Effects of uracil nucleotides in cell proliferation

LPS cultures were incubated with several uracil nucleotides to evaluate their influence on cell proliferation. UTP, which activates the P2Y2,4 subtypes, UDP and its analogue PSB 0474, both selective for P2Y6 receptors, and UDP-glucose, which is selective for P2Y14 receptors [14], were tested over a wide range of concentrations. Except for UDP-glucose, the uracil nucleotides UTP, UDP and PSB 0474 caused a concentration-dependent inhibition of cell proliferation (Figure 2).

Figure 2. Effects of uracil nucleotides in cell proliferation. Lipopolysaccharide (LPS) cultures were incubated with nucleotides for 48 h and in the last 24 h methyl-[3H]-thymidine was added to the medium at a concentration of 1 μCi/ml. Effects in cell proliferation were estimated by methyl-[3H]-thymidine incorporation and expressed as percentage of control. Values are means ± SEM from five to ten experiments. *P <0.05, significant differences from control.
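Concentration-dependent inhibition like that in Figure 2 is commonly summarized by fitting a four-parameter Hill curve to the percent-of-control values. A minimal sketch with invented data points (the paper reports the curves only graphically):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, top, bottom, ic50, n_hill):
    """Four-parameter logistic: percent of control falling from top to bottom."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** n_hill)

# Hypothetical UDP concentrations (mM) and proliferation (% of control).
conc = np.array([0.001, 0.01, 0.1, 0.3, 1.0, 3.0])
resp = np.array([98.0, 92.0, 75.0, 62.0, 48.0, 40.0])

params, _ = curve_fit(hill_inhibition, conc, resp, p0=[100.0, 40.0, 0.1, 1.0])
top, bottom, ic50, n_hill = params
print(f"IC50 ≈ {ic50:.2f} mM, Hill coefficient ≈ {n_hill:.2f}")
```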
Extracellular metabolism of uracil nucleotides

UTP metabolism was very fast, with a half-life of 10.3 ± 0.5 min (n = 5); the main metabolite formed during the first hour was UDP, which remained in the medium for up to 8 h (Figure 3A). When tested from the beginning, UDP metabolism was much slower than that of UTP (Figure 3B), with a half-life of 77.3 ± 2.3 min (n = 4; P <0.05). The half-life of PSB 0474 could not be evaluated because the highest concentration tested that inhibited cell proliferation was still below the detection limit of the method used to study the metabolism of these compounds.

Figure 3. Metabolism of uracil nucleotides in lipopolysaccharide (LPS) cultures. Cells were incubated with 0.1 mM of (A) UTP or (B) UDP and samples were collected at 0, 1, 3, 8, 24 and 48 h. Uracil nucleotides and their metabolites were quantified by HPLC-UV. Values are means ± SEM from four experiments.
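Half-lives like those quoted above can be obtained by fitting a single-exponential decay to the concentration–time course and converting the rate constant via t1/2 = ln 2 / k. A sketch with invented time points shaped like Figure 3A:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, c0, k):
    """Single-exponential decay: C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

# Hypothetical UTP time course (min; mM), consistent with a ~10 min half-life.
t = np.array([0.0, 5.0, 10.0, 15.0, 30.0, 60.0])
c = np.array([0.100, 0.071, 0.051, 0.036, 0.013, 0.002])

(c0, k), _ = curve_fit(exp_decay, t, c, p0=[0.1, 0.07])
half_life = np.log(2) / k
print(f"estimated half-life ≈ {half_life:.1f} min")
```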
Expression and pharmacological characterization of the P2Y receptor subtype involved in the inhibition of cell proliferation induced by uracil nucleotides

The inhibitory effect of both PSB 0474 (1 μM) and UDP (1 mM) on cell proliferation was abolished by the selective P2Y6 receptor antagonist MRS 2578 (1 μM; Figure 4A). The inhibitory effect of UTP (0.1 mM) was also abolished by MRS 2578 (not shown), suggesting that this effect depends on its conversion into UDP and the subsequent activation of P2Y6 receptors. Uncoupling Gi/o proteins from receptors with pertussis toxin (PTX, 0.1 μg/ml) did not change the effect of UDP (1 mM), which was attenuated by the phospholipase C (PLC) inhibitor U 73122 (1 μM), but not by its inactive analog U 73343 (1 μM), and by the protein kinase C (PKC) inhibitor RO 32-0432 (1 μM; Figure 4A), confirming the coupling of P2Y6 receptors to the Gq-PLC-PKC pathway.

Figure 4. Pyrimidine receptors and signaling pathway involved in the inhibition of cell proliferation mediated by uracil nucleotides. (A) Lipopolysaccharide (LPS) cultures were incubated with the nucleotides for 48 h and in the last 24 h methyl-[3H]-thymidine was added to the medium at a concentration of 1 μCi/ml. The P2Y6 antagonist MRS 2578 and enzyme inhibitors were added to the medium 1 h before the nucleotides, except PTX, which was added 24 h before. Effects in cell proliferation were estimated by methyl-[3H]-thymidine incorporation and expressed as percentage of change from the respective control. Values are means ± SEM from eight to twenty experiments. *P <0.05, significant differences from respective control; +P <0.05, significant differences from the agonist alone. (B) Representative western blots showing the expression of P2Y6 receptors obtained from whole cell lysates. Two bands of 25 kDa, one of 36 kDa and another of 86 kDa specifically reacted with the anti-P2Y6 antibody. These bands were absent in the presence of the respective neutralizing peptide (np).
P2Y6 receptor expression in LPS cultures comprised four bands, two of 25 kDa, one of 36 kDa and another of 86 kDa, all of which were absent in the presence of the P2Y6 receptor neutralizing peptide (np; Figure 4B). Analysis of the cellular localization of P2Y6 receptors by immunocytochemistry revealed a preferential co-localization with microglia (Figure 5), suggesting that uracil nucleotides may inhibit cell proliferation via microglial cells.

Figure 5. Cellular distribution and localization of P2Y6 receptors in lipopolysaccharide (LPS) cultures. Microglia were labeled with mouse anti-CD11b (Alexa Fluor 488, green), P2Y6 receptors were labeled with rabbit anti-P2Y6 (TRITC, red) and nuclei were labeled with Hoechst 33258 (blue). The orange spots represent P2Y6 receptor expression coincident with microglia, but not with astrocytes (blue nuclei that label with neither the CD11b nor the P2Y6 receptor antibody). Scale bar: 20 μm.
P2Y6 receptor-mediated nitric oxide production

LPS increases iNOS expression and NO production by microglia [24,25], but this effect is attenuated during chronic LPS stimulation [9,10]. Since NO may inhibit astroglial proliferation [26], we investigated whether P2Y6 receptor activation by uracil nucleotides modulated NO release in chronically LPS-stimulated microglia. UDP (1 mM) and PSB 0474 (10 μM) increased NO release into the culture medium (Figure 6), an effect abolished by the selective P2Y6 receptor antagonist MRS 2578 (1 μM), by the PLC inhibitor U 73122 (1 μM) and by the PKC inhibitor RO 32-0432 (1 μM; Figure 6). Additionally, the inhibitory effect of UDP (1 mM) on cell proliferation (44 ± 2, n = 25) was abolished by the NOS inhibitor L-NAME (0.1 mM; 7 ± 3, n = 8, P <0.05), and this effect was reversed in the presence of L-arginine (3 mM; 28 ± 6, n = 6; P <0.05).

Figure 6. Nitric oxide synthesis mediated by uracil nucleotides in lipopolysaccharide (LPS) cultures. Cells were incubated with UDP or PSB 0474 for 48 h. The P2Y6 antagonist MRS 2578 and enzyme inhibitors were added to the medium 1 h before the nucleotides. The concentration of nitrites plus nitrates was evaluated in the culture supernatants and expressed as percentage of change from the respective control. Values are means ± SEM from four experiments. *P <0.05, significant differences from the respective control; +P <0.05, significant differences from the agonist alone.

To identify the cellular source of the NO released upon P2Y6 receptor activation, iNOS expression was immunolocalized with either microglia or astrocytes in LPS cultures. No iNOS expression was detected in astrocytes, either in control conditions or after treatment with the uracil nucleotides (Figure 7), whereas in microglia iNOS expression was residual in control conditions but significantly increased after 48 h incubation with PSB 0474 (10 μM) or UDP (1 mM; Figure 7).

Figure 7. Cellular localization of inducible nitric oxide synthase (iNOS) in lipopolysaccharide (LPS) cultures. Cells were incubated with UDP or PSB 0474 for 48 h. Microglia were labeled with mouse anti-CD11b (Alexa Fluor 488, green), astrocytes with mouse anti-GFAP (Alexa Fluor 488, green) and iNOS with rabbit anti-iNOS (TRITC, red). Cell nuclei were labeled with Hoechst 33258 (blue). The orange spots represent iNOS expression and show increased iNOS expression in microglia, but not in astrocytes, upon stimulation with the uracil nucleotides. Scale bar: 10 μm.
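The colorimetric nitrate/nitrite read-out reduces to a linear standard curve (absorbance versus nitrite concentration) followed by normalization to the respective control. A sketch with invented absorbances; the kit's actual standards and plate layout are not reported here:

```python
import numpy as np

# Linear standard curve: nitrite standards (μM) vs. absorbance at 540 nm.
# Values are illustrative only.
std_conc = np.array([0.0, 5.0, 10.0, 20.0, 35.0])
std_abs = np.array([0.02, 0.11, 0.20, 0.38, 0.65])
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def to_concentration(absorbance):
    """Invert the standard curve: concentration = (A - b) / m."""
    return (absorbance - intercept) / slope

control = to_concentration(np.array([0.18, 0.20, 0.19]))
udp = to_concentration(np.array([0.31, 0.29, 0.33]))
percent_change = 100.0 * (udp.mean() - control.mean()) / control.mean()
print(f"nitrite + nitrate: {percent_change:+.0f}% vs control")
```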
P2Y6 receptor-mediated inhibition of cell proliferation: mechanisms involved

Under these experimental conditions, microglial cells are responsible for the NO production mediated by P2Y6 receptors. To clarify the mechanisms behind the inhibition of cell proliferation by uracil nucleotides, their effects on cell cycle progression and cell death were investigated. Uracil nucleotides had no effect on the cell cycle progression of glial cells: the percentages of cells in the G0/G1, S and G2/M phases were similar in control cultures (75.4 ± 1.8, 17.6 ± 3.2 and 7.1 ± 1.5, respectively, n = 3) and in cultures treated with UDP (1 mM; 76.4 ± 1.8, 13.6 ± 2.8 and 10.0 ± 1.5, respectively, n = 3), therefore excluding cell cycle arrest as the mechanism involved in the inhibition of cell proliferation. Another possibility was that the inhibition of cell proliferation mediated by uracil nucleotides resulted from an increase in cell death. UDP (1 mM) and PSB 0474 (10 μM) caused no change in LDH release, which excluded cell death by necrosis (Figure 8A). However, both UDP (1 mM) and PSB 0474 (10 μM) induced cell death by apoptosis, as assessed by the TUNEL assay (Figure 8A).

Figure 8. Effects of uracil nucleotides in cell death in lipopolysaccharide (LPS) cultures. (A) Necrotic cell death was evaluated by measuring the release of lactate dehydrogenase (LDH) and apoptotic cell death was evaluated by the TUNEL assay, after incubation with uracil nucleotides or solvent for 48 h. LDH activity was measured in the culture medium and in the culture extracts and the fraction released is represented as percentage of total LDH. The number of apoptotic cells was expressed as percentage of the total number of cells counted. Values are means ± SEM from four to seven experiments. *P <0.05, significant differences from respective control (solvent). (B) Cellular localization of apoptotic nuclei, obtained with Hoechst 33258 staining in LPS cultures. Astrocytes were labeled with rabbit anti-GFAP (TRITC, red), microglia with mouse anti-CD11b (Alexa Fluor 488, green) and cell nuclei with Hoechst 33258 (blue). LPS cultures were incubated with solvent or UDP for 48 h. Shrunken nuclei with a bright fluorescent appearance, characteristic of apoptotic nuclei, are clearly coincident with astrocytes (white arrows), but not with microglia. Scale bar: 20 μm.

Cultures treated with UDP (1 mM) showed an increase in the number of shrunken nuclei with a bright fluorescent appearance on Hoechst 33258 staining (Figure 8B). The percentage of apoptotic nuclei was 6.75 ± 0.65% (n = 8) in control cultures and increased to 16.02 ± 0.75% (n = 8, P <0.05) in cultures treated with UDP (1 mM). This increase in the number of apoptotic cells was attenuated to 11.83 ± 0.61% (n = 8, P <0.05) when UDP was tested in the presence of L-NAME (0.1 mM). Additionally, the apoptotic nuclei co-localized with astrocytes but not with microglia (Figure 8B).
Characterization of the co-cultures: The primary cortical brain cultures treated with lipopolysaccharide (LPS; 0.1 μg/ml) for 30 days in vitro consisted of monolayers of astrocytes exhibiting a flattened, polygonal morphology and containing 4.36 ± 0.42% (n = 5) of microglia spread over the top of the astrocyte monolayer (Figure 1). The co-cultures obtained were named LPS cultures; in these cultures, microglial cells exhibited an amoeboid phenotype with retracted or short thick processes, suggestive of their activation [23], as expected for in vitro LPS-treated microglia. In support of the potential of LPS to activate microglia and to prevent their proliferation, co-cultures grown without any treatment contained a higher percentage of microglia, approximately 8.0%, which presented longer processes [22].
Figure 1: Immunofluorescent micrograph representative of co-cultures treated with lipopolysaccharide (LPS) (0.1 μg/ml). Astrocytes were labeled with rabbit anti-GFAP (TRITC, red) and microglia with mouse anti-CD11b (Alexa Fluor 488, green). The percentage of microglia in cultures was 4.36 ± 0.42% (n = 5). Scale bar: 50 μm.
In this study, LPS cultures were used to study the effect of uracil nucleotides on cell proliferation, as well as the contribution of activated microglia to this response.
Effects of uracil nucleotides on cell proliferation: LPS cultures were incubated with several uracil nucleotides to evaluate their influence on cell proliferation. UTP, which activates the P2Y2,4 subtypes, UDP and its analogue PSB 0474, both selective for P2Y6 receptors, and UDP-glucose, which is selective for P2Y14 receptors [14], were tested in a wide range of concentrations.
Except for UDP-glucose, the uracil nucleotides UTP, UDP and PSB 0474 caused a concentration-dependent inhibition of cell proliferation (Figure 2).
Figure 2: Effects of uracil nucleotides on cell proliferation. Lipopolysaccharide (LPS) cultures were incubated with nucleotides for 48 h and in the last 24 h methyl-[3H]-thymidine was added to the medium at a concentration of 1 μCi/ml. Effects on cell proliferation were estimated by methyl-[3H]-thymidine incorporation and expressed as percentage of control. Values are means ± SEM from five to ten experiments. *P <0.05, significant differences from control.
Extracellular metabolism of uracil nucleotides: UTP metabolism was very fast, with a half-life of 10.3 ± 0.5 min (n = 5), and the main metabolite formed during the first hour was UDP, which remained in the medium for up to 8 h (Figure 3A). When tested from the beginning, UDP metabolism was much slower than that of UTP (Figure 3B); its half-life was 77.3 ± 2.3 min (n = 4; P <0.05). The half-life of PSB 0474 could not be evaluated because the highest concentration tested that caused an inhibition of cell proliferation was still below the detection limit of the method used to study the metabolism of these compounds.
Figure 3: Metabolism of uracil nucleotides in lipopolysaccharide (LPS) cultures. Cells were incubated with 0.1 mM of (A) UTP or (B) UDP and samples were collected at 0, 1, 3, 8, 24 and 48 h. Uracil nucleotides and their metabolites were quantified by HPLC-UV. Values are means ± SEM from four experiments.
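Half-lives such as those reported here can be derived from HPLC-UV time courses by fitting a first-order decay, C(t) = C0·e^(−kt), with t1/2 = ln 2 / k. Below is a minimal sketch of such a fit in Python; the first-order assumption and the concentration values are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_decay(t, c0, k):
    """First-order decay: C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

# Hypothetical UTP time course (minutes, mM) -- placeholder values only.
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 60.0])
c = np.array([0.100, 0.072, 0.051, 0.026, 0.013, 0.002])

(c0_hat, k_hat), _ = curve_fit(first_order_decay, t, c, p0=(0.1, 0.05))
half_life = np.log(2) / k_hat  # t1/2 = ln 2 / k
print(f"fitted half-life: {half_life:.1f} min")
```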
Expression and pharmacological characterization of the P2Y receptor subtype involved in the inhibition of cell proliferation induced by uracil nucleotides: The inhibitory effect of both PSB 0474 (1 μM) and UDP (1 mM) on cell proliferation was abolished by the selective P2Y6 receptor antagonist MRS 2578 (1 μM; Figure 4A). The inhibitory effect of UTP (0.1 mM) was also abolished by MRS 2578 (not shown), suggesting that this effect depends on its conversion into UDP and activation of P2Y6 receptors. Uncoupling Gi/o proteins from receptors with pertussis toxin (PTX, 0.1 μg/ml) did not change the effect of UDP (1 mM), which was attenuated by the phospholipase C (PLC) inhibitor U 73122 (1 μM), but not by its inactive analog U 73343 (1 μM), and by the protein kinase C (PKC) inhibitor RO 32-0432 (1 μM; Figure 4A), confirming the coupling of P2Y6 receptors to the Gq-PLC-PKC pathway.
Figure 4: Pyrimidine receptors and signaling pathway involved in the inhibition of cell proliferation mediated by uracil nucleotides. (A) Lipopolysaccharide (LPS) cultures were incubated with the nucleotides for 48 h and in the last 24 h methyl-[3H]-thymidine was added to the medium at a concentration of 1 μCi/ml. The P2Y6 antagonist MRS 2578 and enzyme inhibitors were added to the medium 1 h before the nucleotides, except PTX, which was added 24 h before. Effects on cell proliferation were estimated by methyl-[3H]-thymidine incorporation and expressed as percentage of change from the respective control. Values are means ± SEM from eight to twenty experiments. *P <0.05, significant differences from respective control; +P <0.05, significant differences from the agonist alone. (B) Representative western blots showing the expression of P2Y6 receptors obtained from whole cell lysates. Two bands of 25 kDa, one of 36 kDa and another of 86 kDa specifically reacted with the anti-P2Y6 antibody. These bands were absent in the presence of the respective neutralizing peptide (np).
P2Y6 receptor expression in LPS cultures comprised four bands, two of 25 kDa, one of 36 kDa and another of 86 kDa, which were all absent in the presence of the P2Y6 receptor neutralizing peptide (np, Figure 4B). Analysis of the cellular localization of P2Y6 receptors by immunocytochemistry revealed a preferential co-localization with microglia (Figure 5), suggesting that uracil nucleotides may inhibit cell proliferation via microglial cells.
Figure 5: Cellular distribution and localization of P2Y6 receptors in lipopolysaccharide (LPS) cultures. Microglia were labeled with mouse anti-CD11b (Alexa Fluor 488, green), P2Y6 receptors were labeled with rabbit anti-P2Y6 (TRITC, red) and nuclei were labeled with Hoechst 33258 (blue). The orange spots represent the expression of P2Y6 receptors that are coincident with microglia, but not with astrocytes (blue nuclei that do not label with CD11b and P2Y6 receptor antibodies). Scale bar: 20 μm.
P2Y6 receptor-mediated nitric oxide production: LPS increases iNOS expression and NO production by microglia [24,25], but this effect is attenuated during chronic LPS stimulation [9,10].
Since NO may inhibit astroglial proliferation [26], it was investigated whether P2Y6 receptor activation by uracil nucleotides modulated NO release in chronically LPS-stimulated microglia. UDP (1 mM) and PSB 0474 (10 μM) increased NO release into the culture medium (Figure 6), an effect abolished by the selective P2Y6 receptor antagonist MRS 2578 (1 μM), by the PLC inhibitor U 73122 (1 μM), and by the PKC inhibitor RO 32-0432 (1 μM; Figure 6). Additionally, the inhibitory effect of UDP on cell proliferation (1 mM; 44 ± 2, n = 25) was abolished by the NOS inhibitor L-NAME (0.1 mM; 7 ± 3, n = 8, P <0.05), and this effect was reversed in the presence of L-arginine (3 mM; 28 ± 6, n = 6; P <0.05).
Figure 6: Nitric oxide synthesis mediated by uracil nucleotides in lipopolysaccharide (LPS) cultures. Cells were incubated with UDP or PSB 0474 for 48 h. The P2Y6 antagonist MRS 2578 and enzyme inhibitors were added to the medium 1 h before the nucleotides. The concentration of nitrites plus nitrates was evaluated in the culture supernatants and expressed as percentage of change from the respective control. Values are means ± SEM from four experiments. *P <0.05, significant differences from the respective control; +P <0.05, significant differences from the agonist alone.
In order to identify the cellular source of NO released upon P2Y6 receptor activation, the expression of iNOS was immunolocalized with either microglia or astrocytes in LPS cultures. No iNOS expression was detected in astrocytes, either in control conditions or after treatment with the uracil nucleotides (Figure 7), whereas in microglia iNOS expression was residual in control conditions but was significantly increased after 48 h incubation with PSB 0474 (10 μM) or UDP (1 mM; Figure 7).
Figure 7: Cellular localization of inducible nitric oxide synthase (iNOS) in lipopolysaccharide (LPS) cultures. Cells were incubated with UDP or PSB 0474 for 48 h. Microglia were labeled with mouse anti-CD11b (Alexa Fluor 488, green), astrocytes with mouse anti-GFAP (Alexa Fluor 488, green) and iNOS with rabbit anti-iNOS (TRITC, red). Cell nuclei were labeled with Hoechst 33258 (blue). The orange spots represent the expression of iNOS and are coincident with an increased expression of iNOS in microglia, but not in astrocytes, upon stimulation with the uracil nucleotides. Scale bar = 10 μm.
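The nitrite/nitrate readout used here (Griess reaction, per the Methods) is typically an absorbance measurement converted to concentration through a nitrite standard curve. The sketch below shows that conversion with a linear fit; all numbers are hypothetical placeholders, not values from this study.

```python
import numpy as np

# Hypothetical nitrite standard curve: concentration (uM) vs. absorbance at 540 nm.
std_conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
std_abs = np.array([0.02, 0.07, 0.12, 0.27, 0.52, 1.01])

# Linear fit A = slope * C + intercept, then invert it for the samples.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

sample_abs = np.array([0.18, 0.35])  # placeholder sample readings
sample_conc = (sample_abs - intercept) / slope
print(np.round(sample_conc, 1))  # estimated nitrite concentrations in uM
```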
P2Y6 receptor-mediated inhibition of cell proliferation: mechanisms involved: In these experimental conditions, microglial cells are responsible for NO production mediated by P2Y6 receptors. To clarify the mechanisms behind the inhibition of cell proliferation by uracil nucleotides, their effect on cell cycle progression and cell death was investigated. Uracil nucleotides had no effect on cell cycle progression of glial cells. The percentage of cells in the G0/G1, S or G2/M phase of the cell cycle was similar in control cultures (75.4 ± 1.8, 17.6 ± 3.2, 7.1 ± 1.5, respectively, n = 3) and in those treated with UDP (1 mM; 76.4 ± 1.8, 13.6 ± 2.8, 10.0 ± 1.5, respectively, n = 3), thereby excluding cell cycle arrest as the mechanism involved in the inhibition of cell proliferation. Another possibility was that the inhibition of cell proliferation mediated by uracil nucleotides could result from an increase in cell death. UDP (1 mM) and PSB 0474 (10 μM) caused no change in LDH release, which excluded cell death by necrosis (Figure 8A). However, both UDP (1 mM) and PSB 0474 (10 μM) induced cell death by apoptosis, as assessed by the TUNEL assay (Figure 8A).
Figure 8: Effects of uracil nucleotides on cell death in lipopolysaccharide (LPS) cultures. (A) Necrotic cell death was evaluated by measuring the release of lactate dehydrogenase (LDH) and apoptotic cell death was evaluated by the TUNEL assay, after incubation with uracil nucleotides or solvent for 48 h. LDH activity was measured in the culture medium and in the culture extracts, and the released fraction is expressed as a percentage of total LDH. The number of apoptotic cells was expressed as a percentage of the total number of cells counted. Values are means ± SEM from four to seven experiments. *P <0.05, significant differences from respective control (solvent). (B) Cellular localization of apoptotic nuclei, obtained with Hoechst 33258 staining in LPS cultures. Astrocytes were labeled with rabbit anti-GFAP (TRITC, red), microglia with mouse anti-CD11b (Alexa Fluor 488, green) and cell nuclei with Hoechst 33258 (blue). LPS cultures were incubated with solvent or UDP for 48 h. Shrunken nuclei with a bright fluorescent appearance, characteristic of apoptotic nuclei, are clearly coincident with astrocytes (white arrows), but not with microglia. Scale bar = 20 μm.
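The LDH readout described in the Figure 8 legend reduces to a simple ratio: released fraction = medium activity / (medium activity + extract activity) × 100. A minimal sketch of that computation, with hypothetical activity values:

```python
def ldh_released_percent(medium_activity: float, extract_activity: float) -> float:
    """Fraction of total LDH released into the medium, in percent."""
    total = medium_activity + extract_activity
    return 100.0 * medium_activity / total

# Hypothetical LDH activities (arbitrary units) for a control well.
print(round(ldh_released_percent(medium_activity=12.0, extract_activity=88.0), 1))  # 12.0
```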
Cultures treated with UDP (1 mM) showed an increase in the number of shrunken nuclei with a bright fluorescent appearance, as revealed by Hoechst 33258 staining (Figure 8B). The percentage of apoptotic nuclei in control cultures was 6.75 ± 0.65% (n = 8) and increased to 16.02 ± 0.75% (n = 8, P <0.05) in cultures treated with UDP (1 mM). This increase in the number of apoptotic cells was attenuated to 11.83 ± 0.61% (n = 8, P <0.05) when UDP was tested in the presence of L-NAME (0.1 mM). Additionally, the apoptotic nuclei co-localized with astrocytes but not with microglia (Figure 8B).
Discussion: The CNS, with the contribution of glial cells (both astrocytes and microglia), is able to mount an innate immune reaction in response to danger signals, such as endogenous nucleotides released upon cerebral injury or exogenous pathogens, systemic bacteria or viruses [27]. It is now well recognized that microglial functional plasticity is strictly stimulus-dependent [2,8]; however, it is not known how microglia coordinate the inflammatory response and the progress of astrogliosis in a paradigm of chronic glial activation, characteristic of several inflammatory pathologies. Exposure of the CNS to endotoxins, such as LPS, is a useful approach to activate immunity, in particular microglial cells [28]. Therefore, this immunological challenge was used in an in vitro model where microglia and astrocytes were present and could cooperate to reproduce some of the conditions that might be observed during chronic brain inflammation. It consisted of co-cultures of astrocytes containing 4 to 5% microglia chronically stimulated with 0.1 μg/ml LPS for 30 days. In previous studies, we have shown that in co-cultures of astrocytes and microglia containing a higher percentage of microglia, but without LPS treatment, UTP caused an inhibition of cell proliferation that could be correlated with a higher expression of P2Y6 receptors in microglia. In contrast, in highly enriched astroglial cultures, either treated or not with LPS, uracil nucleotides had no effect on cell proliferation [22], suggesting a fundamental role of microglial cells in the P2Y6 receptor-mediated inhibitory effect. In LPS cultures, UTP also inhibited cell proliferation, and this effect extended to UDP and to PSB 0474, the selective agonist of P2Y6 receptors [29], but not to UDP-glucose, the selective agonist of P2Y14 receptors [30]. The inhibitory effect of uracil nucleotides was mediated by P2Y6 receptors, since it was abolished by MRS 2578, the selective antagonist of this receptor subtype [31]. Although UTP has some affinity for P2Y6 receptors [32], its inhibitory effect was mainly dependent on its metabolism and the formation of UDP, since UTP had a very short half-life, being rapidly converted into UDP, which remained in the culture medium for about 8 h. This conclusion is further supported by the observation that an inhibitory effect of UTP and of the other uracil nucleotides was already observed within 8 h of incubation (not shown), before a significant accumulation of the other metabolites, such as uridine or uracil, could be detected in the culture medium. The UDP half-life was much longer than that of UTP, which may be explained by the NTPDases expressed in these cultures. Microglia express mainly NTPDase1, which hydrolyses UTP faster than UDP [33], and astrocytes, the main cell type present in these cultures, express high levels of NTPDase2 [34].
This enzyme appears to be upregulated upon LPS stimulation [35] and preferentially hydrolyzes UTP over UDP [36]. Thus, the predominance of NTPDase2 activity in these cultures favors UDP accumulation and a preferential activation of P2Y6 receptors. P2Y6 receptors were coupled to the Gq-PLC-PKC pathway [14]; however, activation of this pathway in astrocytes has been shown to mediate cell proliferation [20], suggesting a microglial localization. Therefore, the mechanisms involved in the inhibitory effect mediated by P2Y6 receptors were further investigated by examining the cellular localization of these receptors. Expression of P2Y6 receptors in LPS cultures revealed a multiple-band pattern, as previously described and discussed [22]. Only three of the four bands were lost after adsorption with the neutralizing peptide, suggesting that this antibody also detects some other antigen [37]. The cellular localization of P2Y6 receptors was analyzed by immunocytochemistry, which revealed that this P2Y6 antibody reacts mainly with microglial antigens. This observation is also in agreement with previous studies showing an up-regulation of P2Y6 receptor expression in LPS-activated microglia [16,38]. Although there are no specific P2Y6 antibodies available [37], the results obtained by western blot and immunocytochemistry analysis, together with the pharmacological data and the fact that UDP had no effect in highly enriched astroglial cultures [22], all support the conclusion that P2Y6 receptors are mainly localized in microglia. LPS increases iNOS expression and NO production in microglia via PKC activation [24,25,39]. However, NO release by microglia decays upon chronic LPS stimulation [9,10], suggesting a downregulation of this signaling pathway. In LPS cultures, all the uracil nucleotides that inhibited cell proliferation also increased the release of NO, an effect mediated by P2Y6 receptors coupled to the PLC-PKC pathway. Thus, a crosstalk between the LPS and P2Y6 receptor signaling pathways may ensure additional iNOS expression and NO production when the LPS response is already downregulated. The inhibitory effect of UDP was also prevented by L-NAME, a NOS inhibitor, confirming the involvement of NO, which was shown to be exclusively produced by microglial iNOS [25,40]. Our results confirm previous observations, since only microglia showed iNOS immunoreactivity, which was upregulated when LPS cultures were treated with uracil nucleotides, suggesting that microglia are the main source of the NO detected in the culture medium, either under basal conditions or upon P2Y6 receptor activation. NO may potentially damage cells through the formation of reactive nitrogen species that cause DNA fragmentation [41]. This mechanism could be responsible for the inhibition of cell proliferation induced by uracil nucleotides, since these compounds increased the number of cells presenting DNA fragmentation, an indicator of cell death by apoptosis. In some systems NO-mediated apoptosis is preceded by cell cycle arrest [42], which was not observed in this study. In LPS cultures, UDP induced cell death without any prior effect on the cycle of confluent and synchronized cells. P2Y6 receptor activation and NO release affected the viability of astrocytes, but not of microglia. Shrunken nuclei with a bright fluorescent appearance, indicating the chromatin condensation characteristic of apoptotic cells, were highly coincident with astrocytes but did not co-localize with microglia.
Because the cell population in the S-phase of the cell cycle was not changed by UDP, the increase in NO release seems to result from iNOS upregulation in individual pre-existing microglia, excluding the possibility of microglial proliferation. This result also indicates that P2Y6 receptors do not mediate microglial proliferation and that their activation therefore cannot modify the anti-proliferative profile established by 0.1 μg/ml LPS [43]. Although uracil nucleotides had no effect in highly enriched astroglial cultures, cell death by apoptosis mediated by pyrimidine receptors was already observed in co-cultures of astrocytes and microglia (without LPS treatment), although to a smaller extent [22], suggesting that LPS and/or uracil nucleotides are not able to induce astrocyte death without the contribution of microglia. Nevertheless, it seems that in co-cultures of astrocytes and microglia, LPS potentiates the astrocyte apoptosis mediated by uracil nucleotides, since it increases from 6% in co-cultures without LPS treatment [22] to 15% in LPS cultures. LPS facilitates P2Y6 receptor-mediated NO release by microglia, but other cytokines such as IL-1β or TNF-α [44] may come into play, contributing to astroglial cell death.
Conclusions: The present study shows that chronically activated microglia influence the astroglial response to uracil nucleotides, favoring astroglial apoptosis as a consequence of microglial P2Y6 receptor activation that induces NO release (Figure 9). Therefore, P2Y6 receptor activation may represent an important mechanism by which microglia control excessive astrogliosis that could hamper neuronal regeneration. Nevertheless, human cortical astrocytes are known to be diverse and structurally and functionally more complex than their rodent counterparts [45]; therefore, this hypothesis should be further confirmed in human glial cells.
Figure 9: Schematic representation of the purinergic mechanisms mediating microglia-astrocyte communication in lipopolysaccharide (LPS) cultures. Uracil nucleotides released during the inflammatory response activate microglial P2Y6 receptors coupled to the phospholipase C (PLC)-protein kinase C (PKC) pathway, which mediates an increase in inducible nitric oxide synthase (iNOS) expression and, consequently, in nitric oxide (NO) release. Diffusible NO mediates astroglial apoptosis.
Background: During cerebral inflammation, uracil nucleotides leak into the extracellular medium and activate glial pyrimidine receptors, contributing to the development of a reactive phenotype. Chronically activated microglia acquire an anti-inflammatory phenotype that favors neuronal differentiation, but the impact of these microglia on astrogliosis is unknown. We investigated the contribution of pyrimidine receptors to microglia-astrocyte signaling in a chronic model of inflammation and its impact on astrogliosis. Methods: Co-cultures of astrocytes and microglia were chronically treated with lipopolysaccharide (LPS) and incubated with uracil nucleotides for 48 h. The effect of nucleotides was evaluated by methyl-[3H]-thymidine incorporation. Western blot and immunofluorescence were performed to detect the expression of P2Y6 receptors and the inducible form of nitric oxide synthase (iNOS). Nitric oxide (NO) release was quantified through the Griess reaction. Cell death was also investigated by the LDH assay and by the TUNEL assay or Hoechst 33258 staining. Results: UTP, UDP (0.001 to 1 mM) or PSB 0474 (0.01 to 10 μM) inhibited cell proliferation by up to 43 ± 2% (n = 10, P <0.05), an effect prevented by the selective P2Y6 receptor antagonist MRS 2578 (1 μM). UTP was rapidly metabolized into UDP, which had a longer half-life. The inhibitory effect of UDP (1 mM) was abolished by phospholipase C (PLC), protein kinase C (PKC) and nitric oxide synthase (NOS) inhibitors. Both UDP (1 mM) and PSB 0474 (10 μM) increased NO release by up to 199 ± 20% (n = 4, P <0.05), an effect dependent on activation of the P2Y6 receptor-PLC-PKC pathway, indicating that this pathway mediates NO release. Western blot and immunocytochemistry analysis indicated that P2Y6 receptors were expressed in the cultures, being mainly localized in microglia. Moreover, the expression of iNOS was mainly observed in microglia and was upregulated by UDP (1 mM) or PSB 0474 (10 μM). UDP-mediated NO release induced apoptosis in astrocytes, but not in microglia. Conclusions: In LPS-treated co-cultures of astrocytes and microglia, UTP is rapidly converted into UDP, which activates P2Y6 receptors, inducing the release of NO by microglia that causes astrocyte apoptosis, thus controlling their rate of proliferation and preventing excessive astrogliosis.
Background: Chronic inflammation is characteristic of several brain disorders leading to loss of cognitive function. In the central nervous system (CNS), the inflammatory response is mediated by glial cells that acquire reactive phenotypes to participate in neuronal repair mechanisms [1,2]. In particular, astrocytes respond with a complex reaction named astrogliosis that includes several morphological and functional changes, such as cell hypertrophy, glial fibrillary acidic protein (GFAP) and nestin up-regulation [3], and cell proliferation [4]. These progressive changes are time and context dependent, being regulated by inflammatory mediators produced at the lesion site [5]. Activated microglia are the main source of these inflammatory mediators, assuming an important role in the modulation of astrogliosis progression during the course of the inflammatory response [6,7]. These mediators may be pro-inflammatory, such as IL-1β, TNF-α and nitric oxide (NO), or anti-inflammatory, such as IL-10, IL-4 and TGF-β, according to the microglial phenotype, which is highly dependent on the pathological context [2,8]. Lipopolysaccharide (LPS) is an agonist of toll-like receptor 4 (TLR4), inducing a pro-inflammatory phenotype in microglia. However, chronic activation of TLR4 receptors has been shown to promote microglial polarization toward an anti-inflammatory phenotype [9,10], but its impact on the inflammatory response and on the modulation of astrogliosis remains to be established. In fact, different extents of astrogliosis and microgliosis have different impacts on neuronal regeneration [1,2]. At the extreme end of the astrogliosis spectrum, proliferating astrocytes may interact with fibroblasts and other glial cells to form a glial scar, creating an environment that prevents axon regeneration [11], leading to the idea that inhibition or control of this response would be beneficial to neuronal survival after injury. Therefore, the mediators produced by chronically activated microglia may have an important role in preventing excessive astrogliosis and promoting neuronal regeneration and sprouting. In a context of chronic brain inflammation, both adenine and uracil nucleotides attain high concentrations in the extracellular medium (in the mM range) due to cell damage or death, and activate P2 receptors in both types of glial cells, contributing to astrogliosis [12] and reinforcing the release of inflammatory messengers produced by microglia [13]. In particular, uracil nucleotides may activate pyrimidine receptors, such as the P2Y2,4,6 and P2Y14 receptor subtypes [14], which participate in the inflammatory response [15]. P2Y6 receptors contribute to the clearance of necrotic cell debris by stimulating microglial phagocytosis of dying neurons [16], whereas P2Y2 receptors mediate astrocyte migration [17], but the effect of uracil nucleotides on the modulation of astroglial proliferation and their role in the control of glial scar formation is largely unknown. To investigate the role of pyrimidine receptors in microglia-astrocyte signaling and its impact on the control of astrogliosis, we used a cell culture model intended to represent a state of chronic brain inflammation, consisting of co-cultures of astrocytes and microglia submitted to long-term treatment with LPS (0.1 μg/ml).
The cultures obtained were used to investigate: i) the effect of uracil nucleotides on cell proliferation, ii) the influence of ectonucleotidases on uracil nucleotide metabolism and its consequent impact on cell proliferation, iii) the signaling pathways and the mechanisms activated by the pyrimidine receptors involved in the control of cell proliferation, and iv) the contribution of microglial pyrimidine receptors to the modulation of astroglial proliferation.
whole_article_text_length: 17,768
whole_article_abstract_length: 448
other_sections_lengths: [ 400, 219, 140, 108, 175, 66, 101, 176, 228, 56, 341, 256, 283, 893, 781, 854 ]
num_sections: 21
most_frequent_words: [ "cell", "cultures", "nucleotides", "uracil", "uracil nucleotides", "lps", "microglia", "p2y6", "udp", "cells" ]
keybert_topics: [ "astrocytes microglia shown", "astrocytes containing microglia", "mediating microglia astrocyte", "microglia astrocyte signaling", "microglia influence astroglial" ]
[CONTENT] lipopolysaccharide | astroglial proliferation | microglia | uracil nucleotides | P2Y6 receptors | nitric oxide | apoptosis [SUMMARY]
[CONTENT] Animals | Animals, Newborn | Apoptosis | Astrocytes | Cell Cycle | Cell Proliferation | Cells, Cultured | Coculture Techniques | Enzyme Inhibitors | Gene Expression Regulation | Glial Fibrillary Acidic Protein | Lipopolysaccharides | Microglia | Nitric Oxide | Rats | Rats, Wistar | Receptors, Purinergic P2 | Thymidine | Time Factors | Tritium | Uracil Nucleotides [SUMMARY]
[CONTENT] astrocytes microglia shown | astrocytes containing microglia | mediating microglia astrocyte | microglia astrocyte signaling | microglia influence astroglial [SUMMARY]
[CONTENT] cell | cultures | nucleotides | uracil | uracil nucleotides | lps | microglia | p2y6 | udp | cells [SUMMARY]
[CONTENT] inflammatory | astrogliosis | microglia | receptors | modulation | mediators | glial | neuronal | role | response [SUMMARY]
[CONTENT] cell | μm | figure | lps | udp | nucleotides | microglia | lps cultures | p2y6 | cultures [SUMMARY]
[CONTENT] mediates | astroglial apoptosis | microglia | astroglial | nitric | nitric oxide | oxide | apoptosis | response | release diffusible mediates astroglial [SUMMARY]
[CONTENT] nucleotides | cell | cultures | microglia | uracil | uracil nucleotides | lps | p2y6 | cells | udp [SUMMARY]
[CONTENT] UTP | UDP | 0.001 | 1 mM | PSB 0474 | 0.01 | 2% | 10 | 1 ||| UTP | UDP | half ||| UDP | 1 mM | C (PLC | NOS ||| UDP | 1 mM | PSB 0474 | 10 | 199 ± | 20% | 4 ||| ||| UDP | 1 mM | PSB 0474 | 10 ||| [SUMMARY]
[CONTENT] LPS | UTP | UDP [SUMMARY]
[CONTENT] ||| ||| ||| LPS | 48 h. ||| ||| ||| LDH | TUNEL | Hoechst ||| UTP | UDP | 0.001 | 1 mM | PSB 0474 | 0.01 | 2% | 10 | 1 ||| UTP | UDP | half ||| UDP | 1 mM | C (PLC | NOS ||| UDP | 1 mM | PSB 0474 | 10 | 199 ± | 20% | 4 ||| ||| UDP | 1 mM | PSB 0474 | 10 ||| ||| LPS | UTP | UDP [SUMMARY]
Type 1 diabetes mellitus and SARS-CoV-2 in pediatric and adult patients - Data from the DPV network.
pmid: 36443963
Background: Data on patients with type 1 diabetes mellitus (T1DM) and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections are sparse. This study aimed to investigate the association between SARS-CoV-2 infection and T1DM.
Methods: Data from the Prospective Diabetes Follow-up (DPV) Registry were analyzed for diabetes patients tested for SARS-CoV-2 by polymerase chain reaction (PCR) in Germany, Austria, Switzerland, and Luxembourg during January 2020-June 2021, using Wilcoxon rank-sum and chi-square tests for continuous and dichotomous variables, adjusted for multiple testing.
Results: Data analysis of 1855 pediatric T1DM patients revealed no differences between asymptomatic/symptomatic infected and SARS-CoV-2-negative/positive patients regarding age, new-onset diabetes, diabetes duration, and body mass index. Glycated hemoglobin A1c (HbA1c) and the diabetic ketoacidosis (DKA) rate were not elevated in SARS-CoV-2-positive vs. -negative patients. The COVID-19 manifestation index was 37.5% in individuals with known T1DM, but 57.1% in individuals with new-onset diabetes. 68.8% of positively tested patients were managed as outpatients/telemedically. Data analysis of 240 adult T1DM patients revealed no differences between positively and negatively tested patients except lower HbA1c. Of these patients, 83.3% had symptomatic infections; 35.7% of positively tested patients were hospitalized.
Conclusions: Our results indicate low morbidity in SARS-CoV-2-infected pediatric T1DM patients. Most patients with known T1DM and SARS-CoV-2 infections could be managed as outpatients. However, SARS-CoV-2 infection was usually symptomatic if it coincided with new-onset diabetes. In adult patients, symptomatic SARS-CoV-2 infection and hospitalization were associated with age.
[ "Adult", "Child", "Humans", "SARS-CoV-2", "Diabetes Mellitus, Type 1", "COVID-19", "Glycated Hemoglobin", "Prospective Studies", "Diabetic Ketoacidosis" ]
pmcid: 9705805
INTRODUCTION
In late 2019, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic began to spread across the world [1,2]. Growing evidence showed increased morbidity and mortality in patients with type 2 diabetes mellitus [3,4,5]. On the other hand, data on patients with type 1 diabetes mellitus (T1DM), especially pediatric patients, are sparse and at times inconsistent [6,7]. In part, conclusions for pediatric patients have been drawn by extrapolation from data on adult patients with diabetes mellitus. This procedure is not valid and may lead to misinformation [8]. Thus, targeted analysis of pediatric as well as adult populations with T1DM is essential in order to gain clarity regarding the impact of SARS-CoV-2 infection on these patients. The aim of the present study was to (1) investigate the influence of SARS-CoV-2 infections on the course of T1DM and of T1DM on infections with SARS-CoV-2 in children and adolescents in Germany, Austria, Switzerland, and Luxembourg, (2) gain information about SARS-CoV-2-infected adult patients with T1DM in central Europe, and (3) learn about the differences between pediatric and adult patients with T1DM.
METHODS
Data source and study population: Data from the Prospective Diabetes Follow-up (DPV) Registry were analyzed. The DPV registry is maintained by the Institute of Epidemiology and Medical Biometry at the University of Ulm, Ulm, Germany and collects patient data from diabetes centers in Germany, Austria, Switzerland, and Luxembourg, covering >90% of all pediatric diabetes centers in Germany and Austria [9]. Informed consent (verbal or written) to participate in the DPV registry was available from patients and/or their parents. The ethics committee of the University of Ulm approved the analysis of the anonymized DPV data (approval no. 314/21). Data analysis included all T1DM patients aged >6 months with documented polymerase chain reaction (PCR) tests for SARS-CoV-2. All patients tested upon inpatient admission as well as outpatients and externally tested patients were included in the analysis. The data collection period was 01/2020-06/2021, and data were analyzed separately for pediatric (age <18 years) and adult (age ≥18 years) patients. Data were aggregated per patient ±10 days around the test. Overall, 162 centers contributed to the data collection, including 150 centers from Germany, 10 from Austria, and one each from Switzerland and Luxembourg. Data from SARS-CoV-2-positive and -negative T1DM patients were compared. In a second step, the following subgroups of data were further analyzed: First, data from SARS-CoV-2-positive T1DM patients were analyzed for whom there was documented information on symptoms (comparison of patients with/without symptoms of coronavirus disease 2019 [COVID-19]). Second, data from hospitalized SARS-CoV-2-positive and -negative T1DM patients were compared. Third, a separate analysis of pediatric patients with new-onset diabetes or follow-up examinations was performed with respect to glycated hemoglobin A1c (HbA1c) and the rate of diabetic ketoacidosis (DKA). In addition, we analyzed data from SARS-CoV-2-positive and -negative adult T1DM patients (total population; hospitalized patients).
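The per-patient aggregation "±10 days around the test" can be expressed as a windowed merge of visit records onto test dates. Below is a minimal pandas sketch; the column names (patient_id, test_date, visit_date, hba1c) and values are illustrative assumptions, not the DPV schema.

```python
import pandas as pd

# Illustrative frames; column names and values are assumptions, not DPV data.
tests = pd.DataFrame({"patient_id": [1, 2],
                      "test_date": pd.to_datetime(["2020-11-03", "2021-02-17"])})
visits = pd.DataFrame({"patient_id": [1, 1, 2],
                       "visit_date": pd.to_datetime(["2020-10-28", "2020-12-20", "2021-02-10"]),
                       "hba1c": [7.9, 8.4, 6.8]})

merged = tests.merge(visits, on="patient_id")
# Keep only visits within +/-10 days of the test date.
in_window = merged[(merged.visit_date - merged.test_date).abs() <= pd.Timedelta(days=10)]

# One aggregated record per patient and test.
agg = in_window.groupby(["patient_id", "test_date"], as_index=False)["hba1c"].mean()
print(agg)
```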
Variables: Demographic data included patient age, age at new-onset diabetes, diabetes duration, sex, and migrant background (patient and/or at least one parent born outside Germany, Austria, Switzerland, or Luxembourg). Clinical data included HbA1c (%, mmol/mol) and body mass index (BMI, kg/m2). For the pediatric patients, BMI values were expressed as standard deviation scores (SDS) based on the German KiGGS percentiles (German Health Interview and Examination Survey for Children and Adolescents) [10]. To ensure comparability of the local HbA1c values, the respective values were converted according to the "multiple of the mean" method for normalization to the reference range of the Diabetes Control and Complications Trial (DCCT) [11]. The other clinical parameters were the reason for admission (unknown, new-onset diabetes, education, DKA, and "other reasons", e.g., admission for SARS-CoV-2 infection), the mode of care (outpatient, inpatient, telemedical care), the proportion of patients with DKA, and DKA at new onset of diabetes. DKA was defined as pH <7.3 and/or bicarbonate levels <15 mmol/L. The COVID-19 manifestation index was defined as the percentage of symptomatic infections among all patients testing positive for SARS-CoV-2.
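Two of these derived variables reduce to simple formulas: the "multiple of the mean" normalization expresses a local HbA1c as a multiple of the local reference mean and rescales it to the DCCT reference mean, and the manifestation index is symptomatic positives divided by all positives. A hedged sketch follows; the reference means and lab value are placeholders, not the study's laboratory parameters.

```python
def hba1c_to_dcct(local_value: float, local_ref_mean: float,
                  dcct_ref_mean: float = 5.05) -> float:
    """'Multiple of the mean' normalization: express the local HbA1c as a
    multiple of the local reference mean, then rescale to the DCCT mean.
    The default DCCT mean here is an assumed placeholder."""
    return local_value / local_ref_mean * dcct_ref_mean

def manifestation_index(symptomatic: int, positives: int) -> float:
    """Percentage of symptomatic infections among all SARS-CoV-2-positive patients."""
    return 100.0 * symptomatic / positives

print(round(hba1c_to_dcct(8.0, local_ref_mean=5.3), 2))  # placeholder lab values
print(round(manifestation_index(12, 32), 1))             # 37.5, as reported for known T1DM
```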
Statistical analyses: Analyses were performed as unadjusted comparisons using SAS 9.4 software (TS1M7; SAS Institute Inc) on a Windows Server 2019 mainframe to compute the Wilcoxon rank-sum test for continuous variables and the chi-square test for dichotomous variables. All analyses were adjusted for multiple testing using the Bonferroni-Holm method (Bonferroni stepdown). Two-tailed p values <0.05 were considered significant.
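The same analysis pattern (Wilcoxon rank-sum for continuous variables, chi-square for dichotomous ones, Bonferroni-Holm across the family of tests) can be reproduced outside SAS, for example with scipy and statsmodels. The arrays below are hypothetical group data, not DPV records.

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency
from statsmodels.stats.multitest import multipletests

# Hypothetical HbA1c values for two groups (continuous -> rank-sum test).
hba1c_neg = np.array([7.4, 8.1, 7.9, 8.6, 7.2])
hba1c_pos = np.array([6.9, 7.1, 7.5, 6.8, 7.3])
p_cont = mannwhitneyu(hba1c_neg, hba1c_pos, alternative="two-sided").pvalue

# Hypothetical DKA yes/no counts per group (dichotomous -> chi-square test).
table = np.array([[12, 188],   # negative: DKA / no DKA
                  [ 4,  96]])  # positive: DKA / no DKA
chi2, p_dich, dof, expected = chi2_contingency(table)

# Bonferroni-Holm adjustment across the family of tests.
reject, p_adj, _, _ = multipletests([p_cont, p_dich], alpha=0.05, method="holm")
print(p_adj, reject)
```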
RESULTS
T1DM in children and adolescents: In total, 1855 pediatric T1DM patients underwent SARS-CoV-2 testing by PCR; 157 patients tested positive (cumulative positive rate 8.5%). There was no statistical difference between negatively and positively tested patients with respect to age, age at new-onset diabetes, diabetes duration, and BMI-SDS (see Table 1). HbA1c was significantly lower in positively tested patients (p = 0.003). The difference was due to the patients with known and treated T1DM; there was no difference in this respect in the patients with new-onset diabetes. The rate of DKA was not increased in patients who tested positive for SARS-CoV-2 compared to patients who tested negative (patients with known diabetes: 2.5% vs. 6.3%; new-onset diabetes: 31.4% vs. 35.1%; see Table 2).
Table 1: Pediatric patients with type 1 diabetes mellitus by PCR result. Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1c, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction; SARS-CoV-2, severe acute respiratory syndrome coronavirus 2.
Table 2: Pediatric patients with type 1 diabetes mellitus by PCR result; patients with diabetes onset and known diabetes. Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1c, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS-CoV-2, severe acute respiratory syndrome coronavirus 2.
SARS-CoV-2-positive patients mostly received inpatient care during the first 6 months of the pandemic (93.3%), whereas only 38% of negatively tested patients were treated as inpatients. As the pandemic went on, this pattern changed. As of 06/2021, patients who tested negative for SARS-CoV-2 received care as inpatients in 75.8% of cases, as outpatients in 24% of cases, and by telemedicine in 0.2% of cases. By contrast, only 31.2% of positively tested patients received care as inpatients, 59.3% as outpatients, and another 9.6% by telemedicine (p < 0.001 in each case). For SARS-CoV-2-negative patients, the most common reasons for admission were new-onset diabetes (41.5%) and admission for diabetes education (41.2%). By contrast, 69.4% of patients who tested positive were admitted to hospital due to new-onset diabetes and only 10.2% were admitted for education (p < 0.001 in each case). Of 46 patients with a positive SARS-CoV-2 PCR result and available data on symptoms of SARS-CoV-2 infection, 20 (43.5%) had COVID-19 symptoms. In individuals with known T1DM, the COVID-19 manifestation index was 37.5% (12/32). By contrast, 8/14 (57.1%) children and adolescents with new-onset diabetes and 5/6 (83.3%) hospitalized adult patients with new-onset diabetes showed symptoms of COVID-19 (Figure 1). There was no significant difference between patients with COVID-19 and asymptomatic patients with positive SARS-CoV-2 PCR results in terms of age, age at new-onset diabetes, diabetes duration, HbA1c, and BMI-SDS (see Table 3). 17/26 (65.4%) asymptomatically and 11/20 (55%) symptomatically infected pediatric patients were managed as outpatients.
Only 6 (30%) of the symptomatically infected patients were admitted as inpatients, all due to acute diabetes complications (5 patients with new‐onset diabetes and 1 with DKA) and 4 (15.4%) of the asymptomatically infected patients (1 each with new‐onset diabetes/DKA and 2 for diabetes education). 3 (15%) patients with COVID‐19 and 5 (19.2%) asymptomatically infected patients received telemedical instructions. At new‐onset diabetes, DKA was present in 4/8 (50%) patients with symptomatic infection and 2/6 (33.3%) patients with asymptomatic infection. COVID‐19 manifestation index. T1DM, type 1 diabetes mellitus. Pediatric patients with type 1 diabetes mellitus by PCR result and reported COVID‐19 symptoms Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction. In total, 1855 pediatric T1DM patients underwent SARS‐CoV‐2 testing by PCR; 157 patients tested positive (cumulative positive rate 8.5%). There was no statistical difference between negatively and positively tested patients with respect to age, age at new‐onset diabetes, diabetes duration, and BMI‐SDS (see Table 1). HbA1c was significantly lowered in positively tested patients (p = 0.003). The difference was due to the patients with known and treated T1DM; there was no difference in this respect in the patients with new‐onset diabetes. The rate of DKA was not increased in patients who tested positive for SARS‐CoV‐2 compared to patients who tested negative (patients with known diabetes: 2.5% vs. 6.3%, new‐onset diabetes: 31.4% vs. 35.1%; see Table 2). Pediatric patients with type 1 diabetes mellitus by PCR result Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. Pediatric patients with type 1 diabetes mellitus by PCR result; patients with diabetes onset and known diabetes Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. SARS‐CoV‐2‐positive patients received mostly inpatient care during the first 6 months of the pandemic (93.3%) whereas negatively tested patients were only treated in 38% as inpatients. As the pandemic went on, this treatment changed. As for 06/2021, patients who tested negative for SARS‐CoV‐2 received care as inpatients in 75.8% of cases, as outpatients in 24% of cases, and by telemedicine in 0.2% of cases. By contrast, only 31.2% of positively tested patients received care as inpatients, 59.3% as outpatients, and another 9.6% by telemedicine (p < 0.001 in each case). For SARS‐CoV‐2‐negative patients, the most common reasons for admission were new‐onset diabetes (41.5%) and admission for diabetes education (41.2%). By contrast, 69.4% of patients who tested positive were admitted to hospital due to new‐onset diabetes and only 10.2% were admitted for education, (p < 0.001 in each case). 
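As a transparency aid (not part of the original analysis), the manifestation index values above follow directly from the reported counts:

```python
# Worked check of the COVID-19 manifestation index (symptomatic / all
# PCR-positive), using only counts reported in this section.
def manifestation_index(symptomatic: int, positive_total: int) -> float:
    """Percentage of PCR-positive patients with COVID-19 symptoms."""
    return 100 * symptomatic / positive_total

print(round(manifestation_index(20, 46), 1))  # all pediatric positives -> 43.5
print(round(manifestation_index(12, 32), 1))  # known T1DM              -> 37.5
print(round(manifestation_index(8, 14), 1))   # new-onset diabetes      -> 57.1
```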
T1DM in adults
Analysis of the data from 240 adult patients with T1DM showed no statistically significant differences between SARS‐CoV‐2‐negatively and ‐positively tested patients except for HbA1c, which was significantly lower in positively tested patients (see Table 4). The most frequent reasons for admission of negatively tested patients to hospital were diabetes education (25.9%), DKA (19.4%), and new‐onset diabetes (14.8%), whereas positively tested patients were mostly admitted for “other reasons” (not primarily diabetes‐related, e.g., SARS‐CoV‐2 infection [35%]) and for DKA (30%). Questioning about infection symptoms was documented for 12 patients who tested positive, 10 (83.3%) of whom had symptomatic infection (Figure 1). Nevertheless, only 35.7% of the positively tested patients required hospitalization (Figure 2).
Table 4. Adult patients with type 1 diabetes mellitus by PCR result. Abbreviations as in Table 1.
Figure 2. SARS‐CoV‐2 hospitalization rate. T1DM, type 1 diabetes mellitus.
CONCLUSIONS
Our results indicate low morbidity in children and adolescents with T1DM and SARS‐CoV‐2 infection. Importantly, our study found no increase in the DKA rate among patients with positive SARS‐CoV‐2 tests. Patients who tested positive for SARS‐CoV‐2 did not have elevated HbA1c levels compared to those who tested negative. In addition, no association was found between age, diabetes duration, HbA1c, or BMI and symptoms of infection. Thus, no evidence emerged that these factors led to increased SARS‐CoV‐2 morbidity in pediatric patients with a previous diagnosis of T1DM in our study population. Most patients with known T1DM and SARS‐CoV‐2 infection could be managed as outpatients. However, when SARS‐CoV‐2 infection coincided with new‐onset diabetes, the infection was usually symptomatic. In adult patients, an association was observed between age and the frequency of symptomatic SARS‐CoV‐2 infection as well as hospitalization. Since diabetes sequelae are associated with a more severe course, optimization of patients' metabolic situation is recommended.
[ "INTRODUCTION", "Data source and study population", "Variables", "Statistical analyses", "\nT1DM in children and adolescents", "\nT1DM in adults", "\nT1DM in pediatric patients", "Adult patients", "Limitations", "AUTHOR CONTRIBUTIONS", "FUNDING INFORMATION", "ETHICS STATEMENT", "PATIENT CONSENT STATEMENT" ]
[ "In late 2019, the severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) pandemic began to spread across the world.\n1\n, \n2\n Growing evidence showed increased morbidity and mortality in patients with type 2 diabetes mellitus.\n3\n, \n4\n, \n5\n On the other hand, data on patients with type 1 diabetes mellitus (T1DM), especially pediatric patients, are sparse and at times inconsistent.\n6\n, \n7\n In part, conclusions for pediatric patients have been drawn by extrapolation from data on adult patients with diabetes mellitus. This procedure is not valid and may lead to misinformation.\n8\n Thus, targeted analysis of pediatric populations as well as adult populations with T1DM is essential in order to gain clarity regarding the impact of SARS‐CoV‐2 infection on these patients. The aim of the present study was to (1) investigate the influence of SARS‐CoV‐2 infections on the course of T1DM and of T1DM on infections with SARS‐CoV‐2 in children and adolescents in Germany, Austria, Switzerland, and Luxembourg, (2) gain information about SARS‐CoV‐2‐infected adult patients with T1DM in central Europe, and (3) learn about the differences between pediatric and adult patients with T1DM.", "Data from the Prospective Diabetes Follow‐up (DPV) Registry were analyzed. The DPV registry is maintained by the Institute of Epidemiology and Medical Biometry at the University of Ulm, Ulm, Germany and collects patient data from diabetes centers in Germany, Austria, Switzerland, and Luxembourg, covering >90% of all pediatric diabetes centers in Germany and Austria.\n9\n Informed consent (verbal or written) to participate in the DPV registry was available from patients and/or their parents. The ethics committee of the University of Ulm approved the analysis of the anonymized DPV data (approval no. 314/21).\nData analysis included all T1DM patients aged >6 months with documented polymerase chain reaction (PCR) tests for SARS‐CoV‐2. All patients tested upon inpatient admission as well as outpatients and externally tested patients were included in the analysis. The data collection period was 01/2020–06/2021, and data were analyzed separately for pediatric (age <18 years) and adult (age ≥ 18 years) patients. Data were aggregated per patient ±10 days around the test. Overall, 162 centers contributed to the data collection, including 150 centers from Germany, 10 from Austria, and one each from Switzerland and Luxembourg. Data from SARS‐CoV‐2‐positive and ‐negative T1DM patients were compared. In a second step, the following subgroups of data were further analyzed: First, data from SARS‐CoV‐2‐positive T1DM patients were analyzed, for whom there was documented information on symptoms (comparison of patients with/without symptoms of coronavirus disease 2019 [COVID‐19]). Second, data from hospitalized SARS‐CoV‐2‐positive and ‐negative T1DM patients were compared. Third, a separate analysis of pediatric patients with new‐onset diabetes or follow‐up examinations was performed with respect to glycated hemoglobin A1c (HbA1c) and the rate of diabetic ketoacidosis (DKA). In addition, we analyzed data from SARS‐CoV‐2‐positive and ‐negative adult T1DM patients (total population; hospitalized patients).", "Demographic data included patient age, age at new‐onset diabetes, diabetes duration, sex, and migrant background (patient and/or at least one parent born outside Germany, Austria, Switzerland, or Luxembourg). Clinical data included HbA1c (%, mmol/mol) and body mass index (BMI, kg/m2). 
"Demographic data included patient age, age at new‐onset diabetes, diabetes duration, sex, and migrant background (patient and/or at least one parent born outside Germany, Austria, Switzerland, or Luxembourg). Clinical data included HbA1c (%, mmol/mol) and body mass index (BMI, kg/m2). For the pediatric patients, BMI values were expressed as standard deviation scores (SDS) based on the German KiGGS percentiles (German Health Interview and Examination Survey for Children and Adolescents).10 To ensure comparability of the local HbA1c values, the respective values were converted according to the “multiple of the mean” method for normalization to the reference range of the Diabetes Control and Complications Trial (DCCT).11 The other clinical parameters were the reason for admission (unknown, new‐onset diabetes, education, DKA, and “other reasons”, e.g., admission for SARS‐CoV‐2 infection), the mode of care (outpatient, inpatient, telemedical care), the proportion of patients with DKA, and DKA at new onset of diabetes. DKA was defined as pH <7.3 and/or bicarbonate levels <15 mmol/L. The COVID‐19 manifestation index was defined as the percentage of symptomatic infections among all patients testing positive for SARS‐CoV‐2.", "Analyses were performed as unadjusted comparisons using SAS 9.4 software (TS1M7; SAS Institute Inc) on a Windows Server 2019 mainframe to compute the Wilcoxon rank‐sum test for continuous variables and the chi‐square test for dichotomous variables. All analyses were adjusted for multiple testing using the Bonferroni‐Holm method (Bonferroni stepdown). Two‐tailed p values <0.05 were considered significant.",
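As a rough illustration of the definitions above, the DKA criterion and the “multiple of the mean” HbA1c normalization can be sketched as follows. The formula shown (local value scaled by the ratio of the DCCT reference mean to the local laboratory mean) is the usual reading of the method, and the DCCT reference mean used below is an assumed constant, not a value taken from the paper.

```python
# Illustrative sketch of two definitions from the Variables section. The DCCT
# reference mean (5.05%) is an assumed constant, not taken from the paper.
from typing import Optional

def has_dka(ph: Optional[float], bicarbonate_mmol_l: Optional[float]) -> bool:
    """DKA as defined above: pH < 7.3 and/or bicarbonate < 15 mmol/L."""
    return (ph is not None and ph < 7.3) or (
        bicarbonate_mmol_l is not None and bicarbonate_mmol_l < 15
    )

def hba1c_mom_normalized(local_value: float, local_lab_mean: float,
                         dcct_ref_mean: float = 5.05) -> float:
    """'Multiple of the mean' normalization: scale the local HbA1c by the
    ratio of the DCCT reference mean to the local laboratory mean."""
    return local_value * dcct_ref_mean / local_lab_mean

print(has_dka(ph=7.25, bicarbonate_mmol_l=18))   # True (pH criterion)
print(has_dka(ph=7.35, bicarbonate_mmol_l=14))   # True (bicarbonate criterion)
print(round(hba1c_mom_normalized(8.0, 5.2), 2))  # 7.77 under the assumed means
```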
"In total, 1855 pediatric T1DM patients underwent SARS‐CoV‐2 testing by PCR; 157 patients tested positive (cumulative positive rate 8.5%). There was no statistical difference between negatively and positively tested patients with respect to age, age at new‐onset diabetes, diabetes duration, and BMI‐SDS (see Table 1). HbA1c was significantly lower in positively tested patients (p = 0.003). The difference was attributable to the patients with known and treated T1DM; there was no such difference in the patients with new‐onset diabetes. The rate of DKA was not increased in patients who tested positive for SARS‐CoV‐2 compared to patients who tested negative (patients with known diabetes: 2.5% vs. 6.3%; new‐onset diabetes: 31.4% vs. 35.1%; see Table 2). SARS‐CoV‐2‐positive patients mostly received inpatient care during the first 6 months of the pandemic (93.3%), whereas only 38% of negatively tested patients were treated as inpatients. This pattern changed as the pandemic went on. As of 06/2021, patients who tested negative for SARS‐CoV‐2 received care as inpatients in 75.8% of cases, as outpatients in 24% of cases, and by telemedicine in 0.2% of cases. By contrast, only 31.2% of positively tested patients received care as inpatients, 59.3% as outpatients, and another 9.6% by telemedicine (p < 0.001 in each case). For SARS‐CoV‐2‐negative patients, the most common reasons for admission were new‐onset diabetes (41.5%) and admission for diabetes education (41.2%). By contrast, 69.4% of patients who tested positive were admitted to hospital due to new‐onset diabetes and only 10.2% were admitted for education (p < 0.001 in each case). Of 46 patients with a positive SARS‐CoV‐2 PCR result and available data on symptoms of SARS‐CoV‐2 infection, 20 (43.5%) had COVID‐19 symptoms. In individuals with known T1DM, the COVID‐19 manifestation index was 37.5% (12/32). By contrast, 8/14 (57.1%) children and adolescents with new‐onset diabetes and 5/6 (83.3%) hospitalized adult patients with new‐onset diabetes showed symptoms of COVID‐19 (Figure 1). There was no significant difference between patients with COVID‐19 and asymptomatic patients with positive SARS‐CoV‐2 PCR results in terms of age, age at new‐onset diabetes, diabetes duration, HbA1c, and BMI‐SDS (see Table 3). Outpatient management was possible for 17/26 (65.4%) asymptomatically and 11/20 (55%) symptomatically infected pediatric patients. Only 6 (30%) of the symptomatically infected patients were admitted as inpatients, all due to acute diabetes complications (5 patients with new‐onset diabetes and 1 with DKA), as were 4 (15.4%) of the asymptomatically infected patients (1 each with new‐onset diabetes/DKA and 2 for diabetes education). Three (15%) patients with COVID‐19 and 5 (19.2%) asymptomatically infected patients received telemedical instructions. At new‐onset diabetes, DKA was present in 4/8 (50%) patients with symptomatic infection and in 2/6 (33.3%) patients with asymptomatic infection.", "Analysis of the data from 240 adult patients with T1DM showed no statistically significant differences between SARS‐CoV‐2‐negatively and ‐positively tested patients except for HbA1c, which was significantly lower in positively tested patients (see Table 4). The most frequent reasons for admission of negatively tested patients to hospital were diabetes education (25.9%), DKA (19.4%), and new‐onset diabetes (14.8%), whereas positively tested patients were mostly admitted for “other reasons” (not primarily diabetes‐related, e.g., SARS‐CoV‐2 infection [35%]) and for DKA (30%). Questioning about infection symptoms was documented for 12 patients who tested positive, 10 (83.3%) of whom had symptomatic infection (Figure 1). Nevertheless, only 35.7% of the positively tested patients required hospitalization (Figure 2).", "SARS‐CoV‐2‐positive patients initially received mostly inpatient care during the first 6 months of the pandemic. The data may reflect the initial uncertainty in the management of SARS‐CoV‐2. After initial results suggested that there was no greatly increased risk of disease in young patients with T1DM, care could be provided predominantly as outpatient treatment and by telemedicine. In cases in which a pediatric patient with a positive test required inpatient care, this was primarily due to the concurrent new onset of T1DM (69.4%, 34/49 patients). Our analysis found that SARS‐CoV‐2‐positive patients had significantly lower HbA1c levels than negatively tested patients. Further analysis showed that this difference was due to significantly lower HbA1c levels in patients with known T1DM, while HbA1c levels were the same in positively and negatively tested pediatric patients with new‐onset diabetes. Our database did not contain any information as to why patients were tested for SARS‐CoV‐2, so the reason for the significantly lower HbA1c in SARS‐CoV‐2‐positive patients remains unclear. In the group of SARS‐CoV‐2‐negative patients, the percentage of patients admitted to the hospital for diabetes education was much higher than in the SARS‐CoV‐2‐positive group. The main reason for these admissions was poor diabetes control, which may explain the higher HbA1c level in children and adolescents with negative SARS‐CoV‐2 tests. Population‐wide screening studies among pediatric patients with diabetes may provide clarity in this regard. Importantly, no increased HbA1c levels, in terms of a risk factor for SARS‐CoV‐2 infection, were observed in SARS‐CoV‐2‐positive patients. There were no other significant differences between negatively and positively tested pediatric patients. The results are indicative of low morbidity in children and adolescents with T1DM and SARS‐CoV‐2 infection, as has also been reported for children from the United States and England.8, 12 In addition, the data show that metabolic control generally remained stable in the setting of infection and that diabetes‐associated morbidity did not worsen with SARS‐CoV‐2 infection. Our results indicate the success of patient education and outpatient management of patients with T1DM in Germany, Austria, Switzerland, and Luxembourg. The rate of DKA at new onset of diabetes was significantly higher during the COVID‐19 pandemic than in prior years. Kamrath et al. reported a significantly increased value of 44.7% for 2020.13 During the 01/2020–06/2021 analysis period, the DKA rate in our pediatric cohort was still higher than in previous years (35.8% compared to approximately 24%). Possible reasons include delayed presentation due to fear of visiting the hospital, delayed presentation due to infection with SARS‐CoV‐2, and delayed diagnosis of new‐onset diabetes due to infection symptoms. In their analysis of adult patient data, Pasquel et al. found evidence that infection with SARS‐CoV‐2 itself led to more severe DKA.14 Li et al. demonstrated that SARS‐CoV‐2 infection can lead to ketosis and ketoacidosis, and not only in patients with diabetes.15 In our study population, there was no evidence in this regard, as the rate of DKA was not increased in SARS‐CoV‐2‐positive compared with SARS‐CoV‐2‐negative pediatric patients. Further analysis showed that neither patients with known diabetes nor patients with new‐onset diabetes had an elevated DKA rate compared to patients from the SARS‐CoV‐2‐negative control group. Among the subgroup of patients with new‐onset diabetes and available data on COVID‐19 symptoms, infection was symptomatic in 8/14 cases (57.1%), markedly higher than the 43.5% observed in the overall study population. This finding indicates that new‐onset diabetes may have aggravated COVID‐19, which is consistent with observations by Singh et al.16 By contrast, no such evidence was found for patients with previously known T1DM, where the COVID‐19 manifestation index of 37.5% was even below the value of about 50% reported for the general population.17", "In adult patients with T1DM, the proportion of symptomatic infections was almost twice as high as in pediatric patients (83.3% vs. 43.5%). Definitive statements are not yet available regarding the COVID‐19 manifestation index, which depends on multiple covariables, including age, preexisting disease, and socioeconomic status of the population.18, 19, 20 COVID‐19 manifestation index values reported in the literature vary from 55% to >90%21, 22 and are reported as being about 50% in children,17 which is consistent with our data. Age and diabetes duration in SARS‐CoV‐2‐positive adult patients with T1DM tended to be higher than in those with T1DM who tested negative in our study. This was particularly true for hospitalized patients with SARS‐CoV‐2 infection (24.2 vs. 37.1 vs. 49.2 years). No such associations were observed in our pediatric patients. The hospitalization rate was slightly higher in adult patients with T1DM, even though there were significantly fewer patients with diabetes manifestation in this group from the DPV network. A separate analysis of the DPV data from adult patients with T2DM, which is not part of the present study, showed an even higher hospitalization rate of 82.3% (median age 73.5 years; unpublished data). This could mean that the sequelae of long‐standing diabetes are risk factors for a symptomatic and/or severe course of SARS‐CoV‐2 infection in adult patients. In addition to higher age, which is itself a risk factor for severe SARS‐CoV‐2 infections, this may explain the higher hospitalization rate in adult patients. Indeed, the literature has demonstrated the association between increasing age, cardiovascular risk factors, diabetes sequelae, and a more severe course of SARS‐CoV‐2 infection.14, 23, 24 From these observations follows the recommendation to lower blood glucose and HbA1c levels as a primary prevention measure to reduce the risk of severe COVID‐19 courses.25, 26, 27 Due to a lack of data on COVID‐19 symptoms in adult T1DM patients, we were unable to test this hypothesis with our current data. The significantly higher HbA1c in SARS‐CoV‐2‐negatively tested compared to positively tested adult patients with T1DM is most likely due to a higher percentage of inpatients with diabetes manifestation (14.8% vs. 5.0%) and diabetes education (25.9% vs. 20.0%).", "Due to the retrospective design of the study as an analysis of existing database records and the absence of a nondiabetic comparison group, our observations need to be considered tentative and interpreted with caution.
Since our analysis compared a relatively small group of SARS‐CoV‐2‐positive patients with T1DM with a markedly larger group of negatively tested patients, it was often impossible to detect statistically significant differences between the groups despite clear numerical differences in the data. Further studies in larger cohorts may be useful to confirm our observations. Our database did not contain any clear information on the reasons why patients were tested for SARS‐CoV‐2. Since SARS‐CoV‐2 testing was usually performed on hospital admission but not at outpatient appointments, our data originated mainly from inpatients and from patients with infection symptoms, which limits the conclusions that can be drawn for the remaining patients. Population‐wide screening studies among pediatric patients with diabetes may provide clarity in this regard. Our results reflect the COVID‐19 pandemic during the periods when the original and the “delta” variants of SARS‐CoV‐2 were predominant. Infection rates were much higher during the subsequent “omicron” wave, especially in pediatric patients, and some symptoms differed considerably. Hence, results for this later period may differ from the present findings. DPV registry data for this period will become available in the fall of 2022.", "B.R.B. was responsible for conceptualization, data analysis and interpretation, investigation, and writing of the original draft of the manuscript. S.R.T. was responsible for methodology, data analysis, formal analysis, and review and editing of the manuscript. C.K. was responsible for review and editing of the manuscript. T.R.R. was responsible for conceptualization, investigation, and review and editing of the manuscript. R.W.H. was responsible for conceptualization, supervision, funding acquisition, and review and editing of the manuscript. B.K., K.K., A.M., E.M.‐R., F.R., T.R.R., and R.W.H. were responsible for data acquisition. S.R.T., C.K., B.K., K.K., E.M.‐R., F.R., T.R.R., and R.W.H. were responsible for scientific discussion of the results and important intellectual content and for review and editing of the manuscript. All authors reviewed the draft manuscript for important intellectual content and approved the manuscript prior to submission. S.R.T. and R.W.H. are the guarantors of this work and, as such, had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.", "The DPV is supported through the German Federal Ministry for Education and Research within the German Centre for Diabetes Research (grant 82DZD14A02). The funding organization had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; the preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.", "The ethics committee of the University of Ulm approved the analysis of the anonymized DPV data (approval no. 314/21).", "Informed consent (verbal or written) to participate in the DPV registry was available from patients and/or their parents." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Data source and study population", "Variables", "Statistical analyses", "RESULTS", "\nT1DM in children and adolescents", "\nT1DM in adults", "DISCUSSION", "\nT1DM in pediatric patients", "Adult patients", "Limitations", "CONCLUSIONS", "AUTHOR CONTRIBUTIONS", "FUNDING INFORMATION", "CONFLICT OF INTEREST", "ETHICS STATEMENT", "PATIENT CONSENT STATEMENT", "Supporting information" ]
[ "In late 2019, the severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) pandemic began to spread across the world.\n1\n, \n2\n Growing evidence showed increased morbidity and mortality in patients with type 2 diabetes mellitus.\n3\n, \n4\n, \n5\n On the other hand, data on patients with type 1 diabetes mellitus (T1DM), especially pediatric patients, are sparse and at times inconsistent.\n6\n, \n7\n In part, conclusions for pediatric patients have been drawn by extrapolation from data on adult patients with diabetes mellitus. This procedure is not valid and may lead to misinformation.\n8\n Thus, targeted analysis of pediatric populations as well as adult populations with T1DM is essential in order to gain clarity regarding the impact of SARS‐CoV‐2 infection on these patients. The aim of the present study was to (1) investigate the influence of SARS‐CoV‐2 infections on the course of T1DM and of T1DM on infections with SARS‐CoV‐2 in children and adolescents in Germany, Austria, Switzerland, and Luxembourg, (2) gain information about SARS‐CoV‐2‐infected adult patients with T1DM in central Europe, and (3) learn about the differences between pediatric and adult patients with T1DM.", "Data source and study population Data from the Prospective Diabetes Follow‐up (DPV) Registry were analyzed. The DPV registry is maintained by the Institute of Epidemiology and Medical Biometry at the University of Ulm, Ulm, Germany and collects patient data from diabetes centers in Germany, Austria, Switzerland, and Luxembourg, covering >90% of all pediatric diabetes centers in Germany and Austria.\n9\n Informed consent (verbal or written) to participate in the DPV registry was available from patients and/or their parents. The ethics committee of the University of Ulm approved the analysis of the anonymized DPV data (approval no. 314/21).\nData analysis included all T1DM patients aged >6 months with documented polymerase chain reaction (PCR) tests for SARS‐CoV‐2. All patients tested upon inpatient admission as well as outpatients and externally tested patients were included in the analysis. The data collection period was 01/2020–06/2021, and data were analyzed separately for pediatric (age <18 years) and adult (age ≥ 18 years) patients. Data were aggregated per patient ±10 days around the test. Overall, 162 centers contributed to the data collection, including 150 centers from Germany, 10 from Austria, and one each from Switzerland and Luxembourg. Data from SARS‐CoV‐2‐positive and ‐negative T1DM patients were compared. In a second step, the following subgroups of data were further analyzed: First, data from SARS‐CoV‐2‐positive T1DM patients were analyzed, for whom there was documented information on symptoms (comparison of patients with/without symptoms of coronavirus disease 2019 [COVID‐19]). Second, data from hospitalized SARS‐CoV‐2‐positive and ‐negative T1DM patients were compared. Third, a separate analysis of pediatric patients with new‐onset diabetes or follow‐up examinations was performed with respect to glycated hemoglobin A1c (HbA1c) and the rate of diabetic ketoacidosis (DKA). In addition, we analyzed data from SARS‐CoV‐2‐positive and ‐negative adult T1DM patients (total population; hospitalized patients).\nData from the Prospective Diabetes Follow‐up (DPV) Registry were analyzed. 
The DPV registry is maintained by the Institute of Epidemiology and Medical Biometry at the University of Ulm, Ulm, Germany and collects patient data from diabetes centers in Germany, Austria, Switzerland, and Luxembourg, covering >90% of all pediatric diabetes centers in Germany and Austria.\n9\n Informed consent (verbal or written) to participate in the DPV registry was available from patients and/or their parents. The ethics committee of the University of Ulm approved the analysis of the anonymized DPV data (approval no. 314/21).\nData analysis included all T1DM patients aged >6 months with documented polymerase chain reaction (PCR) tests for SARS‐CoV‐2. All patients tested upon inpatient admission as well as outpatients and externally tested patients were included in the analysis. The data collection period was 01/2020–06/2021, and data were analyzed separately for pediatric (age <18 years) and adult (age ≥ 18 years) patients. Data were aggregated per patient ±10 days around the test. Overall, 162 centers contributed to the data collection, including 150 centers from Germany, 10 from Austria, and one each from Switzerland and Luxembourg. Data from SARS‐CoV‐2‐positive and ‐negative T1DM patients were compared. In a second step, the following subgroups of data were further analyzed: First, data from SARS‐CoV‐2‐positive T1DM patients were analyzed, for whom there was documented information on symptoms (comparison of patients with/without symptoms of coronavirus disease 2019 [COVID‐19]). Second, data from hospitalized SARS‐CoV‐2‐positive and ‐negative T1DM patients were compared. Third, a separate analysis of pediatric patients with new‐onset diabetes or follow‐up examinations was performed with respect to glycated hemoglobin A1c (HbA1c) and the rate of diabetic ketoacidosis (DKA). In addition, we analyzed data from SARS‐CoV‐2‐positive and ‐negative adult T1DM patients (total population; hospitalized patients).\nVariables Demographic data included patient age, age at new‐onset diabetes, diabetes duration, sex, and migrant background (patient and/or at least one parent born outside Germany, Austria, Switzerland, or Luxembourg). Clinical data included HbA1c (%, mmol/mol) and body mass index (BMI, kg/m2). For the pediatric patients, BMI values were expressed as standard deviation scores (SDS) based on the German KiGGS percentiles (German Health Interview and Examination Survey for Children and Adolescents).\n10\n To ensure comparability of the local HbA1c values, the respective values were converted according to the “multiple of the mean” method for normalization to the reference range of the Diabetes Control and Complications Trial (DCCT).\n11\n The other clinical parameters were the reason for admission (unknown, new‐onset diabetes, education, DKA, and “other reasons”, e.g., admission for SARS‐CoV‐2 infection), the mode of care (outpatient, inpatient, telemedical care), the proportion of patients with DKA, and DKA at new onset of diabetes. DKA was defined as pH <7.3 and/or bicarbonate levels <15 mmol/L. The COVID‐19 manifestation index was defined as the percentage of symptomatic infections in all patients testing positive for SARS‐CoV‐2.\nDemographic data included patient age, age at new‐onset diabetes, diabetes duration, sex, and migrant background (patient and/or at least one parent born outside Germany, Austria, Switzerland, or Luxembourg). Clinical data included HbA1c (%, mmol/mol) and body mass index (BMI, kg/m2). 
For the pediatric patients, BMI values were expressed as standard deviation scores (SDS) based on the German KiGGS percentiles (German Health Interview and Examination Survey for Children and Adolescents).\n10\n To ensure comparability of the local HbA1c values, the respective values were converted according to the “multiple of the mean” method for normalization to the reference range of the Diabetes Control and Complications Trial (DCCT).\n11\n The other clinical parameters were the reason for admission (unknown, new‐onset diabetes, education, DKA, and “other reasons”, e.g., admission for SARS‐CoV‐2 infection), the mode of care (outpatient, inpatient, telemedical care), the proportion of patients with DKA, and DKA at new onset of diabetes. DKA was defined as pH <7.3 and/or bicarbonate levels <15 mmol/L. The COVID‐19 manifestation index was defined as the percentage of symptomatic infections in all patients testing positive for SARS‐CoV‐2.\nStatistical analyses Analyses were performed as unadjusted comparisons using SAS 9.4 software (TS1M7; SAS Institute Inc) on a Windows Server 2019 mainframe to compute the Wilcoxon rank‐sum test for continuous variables and the chi‐square test for dichotomous variables. All analyses were adjusted for multiple testing using the Bonferroni‐Holm method (Bonferroni stepdown). Two‐tailed p values <0.05 were considered significant.\nAnalyses were performed as unadjusted comparisons using SAS 9.4 software (TS1M7; SAS Institute Inc) on a Windows Server 2019 mainframe to compute the Wilcoxon rank‐sum test for continuous variables and the chi‐square test for dichotomous variables. All analyses were adjusted for multiple testing using the Bonferroni‐Holm method (Bonferroni stepdown). Two‐tailed p values <0.05 were considered significant.", "Data from the Prospective Diabetes Follow‐up (DPV) Registry were analyzed. The DPV registry is maintained by the Institute of Epidemiology and Medical Biometry at the University of Ulm, Ulm, Germany and collects patient data from diabetes centers in Germany, Austria, Switzerland, and Luxembourg, covering >90% of all pediatric diabetes centers in Germany and Austria.\n9\n Informed consent (verbal or written) to participate in the DPV registry was available from patients and/or their parents. The ethics committee of the University of Ulm approved the analysis of the anonymized DPV data (approval no. 314/21).\nData analysis included all T1DM patients aged >6 months with documented polymerase chain reaction (PCR) tests for SARS‐CoV‐2. All patients tested upon inpatient admission as well as outpatients and externally tested patients were included in the analysis. The data collection period was 01/2020–06/2021, and data were analyzed separately for pediatric (age <18 years) and adult (age ≥ 18 years) patients. Data were aggregated per patient ±10 days around the test. Overall, 162 centers contributed to the data collection, including 150 centers from Germany, 10 from Austria, and one each from Switzerland and Luxembourg. Data from SARS‐CoV‐2‐positive and ‐negative T1DM patients were compared. In a second step, the following subgroups of data were further analyzed: First, data from SARS‐CoV‐2‐positive T1DM patients were analyzed, for whom there was documented information on symptoms (comparison of patients with/without symptoms of coronavirus disease 2019 [COVID‐19]). Second, data from hospitalized SARS‐CoV‐2‐positive and ‐negative T1DM patients were compared. 
Third, a separate analysis of pediatric patients with new‐onset diabetes or follow‐up examinations was performed with respect to glycated hemoglobin A1c (HbA1c) and the rate of diabetic ketoacidosis (DKA). In addition, we analyzed data from SARS‐CoV‐2‐positive and ‐negative adult T1DM patients (total population; hospitalized patients).", "Demographic data included patient age, age at new‐onset diabetes, diabetes duration, sex, and migrant background (patient and/or at least one parent born outside Germany, Austria, Switzerland, or Luxembourg). Clinical data included HbA1c (%, mmol/mol) and body mass index (BMI, kg/m2). For the pediatric patients, BMI values were expressed as standard deviation scores (SDS) based on the German KiGGS percentiles (German Health Interview and Examination Survey for Children and Adolescents).\n10\n To ensure comparability of the local HbA1c values, the respective values were converted according to the “multiple of the mean” method for normalization to the reference range of the Diabetes Control and Complications Trial (DCCT).\n11\n The other clinical parameters were the reason for admission (unknown, new‐onset diabetes, education, DKA, and “other reasons”, e.g., admission for SARS‐CoV‐2 infection), the mode of care (outpatient, inpatient, telemedical care), the proportion of patients with DKA, and DKA at new onset of diabetes. DKA was defined as pH <7.3 and/or bicarbonate levels <15 mmol/L. The COVID‐19 manifestation index was defined as the percentage of symptomatic infections in all patients testing positive for SARS‐CoV‐2.", "Analyses were performed as unadjusted comparisons using SAS 9.4 software (TS1M7; SAS Institute Inc) on a Windows Server 2019 mainframe to compute the Wilcoxon rank‐sum test for continuous variables and the chi‐square test for dichotomous variables. All analyses were adjusted for multiple testing using the Bonferroni‐Holm method (Bonferroni stepdown). Two‐tailed p values <0.05 were considered significant.", "\nT1DM in children and adolescents In total, 1855 pediatric T1DM patients underwent SARS‐CoV‐2 testing by PCR; 157 patients tested positive (cumulative positive rate 8.5%). There was no statistical difference between negatively and positively tested patients with respect to age, age at new‐onset diabetes, diabetes duration, and BMI‐SDS (see Table 1). HbA1c was significantly lowered in positively tested patients (p = 0.003). The difference was due to the patients with known and treated T1DM; there was no difference in this respect in the patients with new‐onset diabetes. The rate of DKA was not increased in patients who tested positive for SARS‐CoV‐2 compared to patients who tested negative (patients with known diabetes: 2.5% vs. 6.3%, new‐onset diabetes: 31.4% vs. 
35.1%; see Table 2).\nPediatric patients with type 1 diabetes mellitus by PCR result\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.\nPediatric patients with type 1 diabetes mellitus by PCR result; patients with diabetes onset and known diabetes\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.\nSARS‐CoV‐2‐positive patients received mostly inpatient care during the first 6 months of the pandemic (93.3%) whereas negatively tested patients were only treated in 38% as inpatients. As the pandemic went on, this treatment changed. As for 06/2021, patients who tested negative for SARS‐CoV‐2 received care as inpatients in 75.8% of cases, as outpatients in 24% of cases, and by telemedicine in 0.2% of cases. By contrast, only 31.2% of positively tested patients received care as inpatients, 59.3% as outpatients, and another 9.6% by telemedicine (p < 0.001 in each case). For SARS‐CoV‐2‐negative patients, the most common reasons for admission were new‐onset diabetes (41.5%) and admission for diabetes education (41.2%). By contrast, 69.4% of patients who tested positive were admitted to hospital due to new‐onset diabetes and only 10.2% were admitted for education, (p < 0.001 in each case).\nOf 46 patients with a positive SARS‐CoV‐2 PCR result and available data on symptoms of SARS‐CoV‐2 infection, 20 (43.5%) had COVID‐19 symptoms. In individuals with known T1DM, the COVID‐19 manifestation index was 37.5% (12/32). By contrast, 8/14 (57.1%) children and adolescents with new‐onset diabetes and 5/6 (83.3%) hospitalized adult patients with new‐onset diabetes showed symptoms of COVID‐19 (Figure 1). There was no significant difference between patients with COVID‐19 and asymptomatic patients with positive SARS‐CoV‐2 PCR results in terms of age, age at new‐onset diabetes, diabetes duration, HbA1c, and BMI‐SDS (see Table 3). 17/26 (65.4%) asymptomatically and 11/20 (55%) symptomatically infected pediatric patients were managed as outpatients. Only 6 (30%) of the symptomatically infected patients were admitted as inpatients, all due to acute diabetes complications (5 patients with new‐onset diabetes and 1 with DKA) and 4 (15.4%) of the asymptomatically infected patients (1 each with new‐onset diabetes/DKA and 2 for diabetes education). 3 (15%) patients with COVID‐19 and 5 (19.2%) asymptomatically infected patients received telemedical instructions. At new‐onset diabetes, DKA was present in 4/8 (50%) patients with symptomatic infection and 2/6 (33.3%) patients with asymptomatic infection.\nCOVID‐19 manifestation index. 
T1DM, type 1 diabetes mellitus.\nPediatric patients with type 1 diabetes mellitus by PCR result and reported COVID‐19 symptoms\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction.\nIn total, 1855 pediatric T1DM patients underwent SARS‐CoV‐2 testing by PCR; 157 patients tested positive (cumulative positive rate 8.5%). There was no statistical difference between negatively and positively tested patients with respect to age, age at new‐onset diabetes, diabetes duration, and BMI‐SDS (see Table 1). HbA1c was significantly lowered in positively tested patients (p = 0.003). The difference was due to the patients with known and treated T1DM; there was no difference in this respect in the patients with new‐onset diabetes. The rate of DKA was not increased in patients who tested positive for SARS‐CoV‐2 compared to patients who tested negative (patients with known diabetes: 2.5% vs. 6.3%, new‐onset diabetes: 31.4% vs. 35.1%; see Table 2).\nPediatric patients with type 1 diabetes mellitus by PCR result\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.\nPediatric patients with type 1 diabetes mellitus by PCR result; patients with diabetes onset and known diabetes\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.\nSARS‐CoV‐2‐positive patients received mostly inpatient care during the first 6 months of the pandemic (93.3%) whereas negatively tested patients were only treated in 38% as inpatients. As the pandemic went on, this treatment changed. As for 06/2021, patients who tested negative for SARS‐CoV‐2 received care as inpatients in 75.8% of cases, as outpatients in 24% of cases, and by telemedicine in 0.2% of cases. By contrast, only 31.2% of positively tested patients received care as inpatients, 59.3% as outpatients, and another 9.6% by telemedicine (p < 0.001 in each case). For SARS‐CoV‐2‐negative patients, the most common reasons for admission were new‐onset diabetes (41.5%) and admission for diabetes education (41.2%). By contrast, 69.4% of patients who tested positive were admitted to hospital due to new‐onset diabetes and only 10.2% were admitted for education, (p < 0.001 in each case).\nOf 46 patients with a positive SARS‐CoV‐2 PCR result and available data on symptoms of SARS‐CoV‐2 infection, 20 (43.5%) had COVID‐19 symptoms. In individuals with known T1DM, the COVID‐19 manifestation index was 37.5% (12/32). By contrast, 8/14 (57.1%) children and adolescents with new‐onset diabetes and 5/6 (83.3%) hospitalized adult patients with new‐onset diabetes showed symptoms of COVID‐19 (Figure 1). 
There was no significant difference between patients with COVID‐19 and asymptomatic patients with positive SARS‐CoV‐2 PCR results in terms of age, age at new‐onset diabetes, diabetes duration, HbA1c, and BMI‐SDS (see Table 3). 17/26 (65.4%) asymptomatically and 11/20 (55%) symptomatically infected pediatric patients were managed as outpatients. Only 6 (30%) of the symptomatically infected patients were admitted as inpatients, all due to acute diabetes complications (5 patients with new‐onset diabetes and 1 with DKA) and 4 (15.4%) of the asymptomatically infected patients (1 each with new‐onset diabetes/DKA and 2 for diabetes education). 3 (15%) patients with COVID‐19 and 5 (19.2%) asymptomatically infected patients received telemedical instructions. At new‐onset diabetes, DKA was present in 4/8 (50%) patients with symptomatic infection and 2/6 (33.3%) patients with asymptomatic infection.\nCOVID‐19 manifestation index. T1DM, type 1 diabetes mellitus.\nPediatric patients with type 1 diabetes mellitus by PCR result and reported COVID‐19 symptoms\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction.\n\nT1DM in adults Analysis of the data from 240 adult patients with T1DM showed no statistically significant differences between SARS‐CoV‐2 negatively/positively tested patients except for HbA1c, which was significantly lower in positively tested patients (see Table 4). The most frequent reasons for admission of negatively tested patients to hospital were diabetes education (25.9%), DKA (19.4%), and new‐onset diabetes (14.8%), whereas positively tested patients were mostly admitted for “other reasons” (not primarily diabetes‐related reasons, e.g., SARS‐CoV‐2 infection [35%]) and DKA (30%). Questioning about infection symptoms was documented in 12 patients who tested positive, 10 (83.3%) of whom had symptomatic infection (Figure 1). Nevertheless, only 35.7% of the positively tested patients required hospitalization (Figure 2).\nAdult patients with type 1 diabetes mellitus by PCR result\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.\nSARS‐CoV‐2 hospitalization rate. T1DM, type 1 diabetes mellitus.\nAnalysis of the data from 240 adult patients with T1DM showed no statistically significant differences between SARS‐CoV‐2 negatively/positively tested patients except for HbA1c, which was significantly lower in positively tested patients (see Table 4). The most frequent reasons for admission of negatively tested patients to hospital were diabetes education (25.9%), DKA (19.4%), and new‐onset diabetes (14.8%), whereas positively tested patients were mostly admitted for “other reasons” (not primarily diabetes‐related reasons, e.g., SARS‐CoV‐2 infection [35%]) and DKA (30%). Questioning about infection symptoms was documented in 12 patients who tested positive, 10 (83.3%) of whom had symptomatic infection (Figure 1). 
Nevertheless, only 35.7% of the positively tested patients required hospitalization (Figure 2).\nAdult patients with type 1 diabetes mellitus by PCR result\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.\nSARS‐CoV‐2 hospitalization rate. T1DM, type 1 diabetes mellitus.", "In total, 1855 pediatric T1DM patients underwent SARS‐CoV‐2 testing by PCR; 157 patients tested positive (cumulative positive rate 8.5%). There was no statistical difference between negatively and positively tested patients with respect to age, age at new‐onset diabetes, diabetes duration, and BMI‐SDS (see Table 1). HbA1c was significantly lowered in positively tested patients (p = 0.003). The difference was due to the patients with known and treated T1DM; there was no difference in this respect in the patients with new‐onset diabetes. The rate of DKA was not increased in patients who tested positive for SARS‐CoV‐2 compared to patients who tested negative (patients with known diabetes: 2.5% vs. 6.3%, new‐onset diabetes: 31.4% vs. 35.1%; see Table 2).\nPediatric patients with type 1 diabetes mellitus by PCR result\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.\nPediatric patients with type 1 diabetes mellitus by PCR result; patients with diabetes onset and known diabetes\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.\nSARS‐CoV‐2‐positive patients received mostly inpatient care during the first 6 months of the pandemic (93.3%) whereas negatively tested patients were only treated in 38% as inpatients. As the pandemic went on, this treatment changed. As for 06/2021, patients who tested negative for SARS‐CoV‐2 received care as inpatients in 75.8% of cases, as outpatients in 24% of cases, and by telemedicine in 0.2% of cases. By contrast, only 31.2% of positively tested patients received care as inpatients, 59.3% as outpatients, and another 9.6% by telemedicine (p < 0.001 in each case). For SARS‐CoV‐2‐negative patients, the most common reasons for admission were new‐onset diabetes (41.5%) and admission for diabetes education (41.2%). By contrast, 69.4% of patients who tested positive were admitted to hospital due to new‐onset diabetes and only 10.2% were admitted for education, (p < 0.001 in each case).\nOf 46 patients with a positive SARS‐CoV‐2 PCR result and available data on symptoms of SARS‐CoV‐2 infection, 20 (43.5%) had COVID‐19 symptoms. In individuals with known T1DM, the COVID‐19 manifestation index was 37.5% (12/32). By contrast, 8/14 (57.1%) children and adolescents with new‐onset diabetes and 5/6 (83.3%) hospitalized adult patients with new‐onset diabetes showed symptoms of COVID‐19 (Figure 1). 
There was no significant difference between patients with COVID‐19 and asymptomatic patients with positive SARS‐CoV‐2 PCR results in terms of age, age at new‐onset diabetes, diabetes duration, HbA1c, and BMI‐SDS (see Table 3). 17/26 (65.4%) asymptomatically and 11/20 (55%) symptomatically infected pediatric patients were managed as outpatients. Only 6 (30%) of the symptomatically infected patients were admitted as inpatients, all due to acute diabetes complications (5 patients with new‐onset diabetes and 1 with DKA) and 4 (15.4%) of the asymptomatically infected patients (1 each with new‐onset diabetes/DKA and 2 for diabetes education). 3 (15%) patients with COVID‐19 and 5 (19.2%) asymptomatically infected patients received telemedical instructions. At new‐onset diabetes, DKA was present in 4/8 (50%) patients with symptomatic infection and 2/6 (33.3%) patients with asymptomatic infection.\nCOVID‐19 manifestation index. T1DM, type 1 diabetes mellitus.\nPediatric patients with type 1 diabetes mellitus by PCR result and reported COVID‐19 symptoms\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction.", "Analysis of the data from 240 adult patients with T1DM showed no statistically significant differences between SARS‐CoV‐2 negatively/positively tested patients except for HbA1c, which was significantly lower in positively tested patients (see Table 4). The most frequent reasons for admission of negatively tested patients to hospital were diabetes education (25.9%), DKA (19.4%), and new‐onset diabetes (14.8%), whereas positively tested patients were mostly admitted for “other reasons” (not primarily diabetes‐related reasons, e.g., SARS‐CoV‐2 infection [35%]) and DKA (30%). Questioning about infection symptoms was documented in 12 patients who tested positive, 10 (83.3%) of whom had symptomatic infection (Figure 1). Nevertheless, only 35.7% of the positively tested patients required hospitalization (Figure 2).\nAdult patients with type 1 diabetes mellitus by PCR result\nAbbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.\nSARS‐CoV‐2 hospitalization rate. T1DM, type 1 diabetes mellitus.", "\nT1DM in pediatric patients SARS‐CoV‐2‐positive patients initially received mostly inpatient care during the first 6 months of the pandemic. The data may reflect the initial uncertainty in the management of SARS‐CoV‐2. After initial results suggested that there was no greatly increased risk of disease in young patients with T1DM, care was able to be provided predominantly as outpatient treatment and by telemedicine. In cases in which a pediatric patient with a positive test required inpatient care, this was primarily due to the concurrent new onset of T1DM (69.4%, 34/49 patients).\nOur analysis found that SARS‐CoV‐2‐positive patients had significantly lower HbA1c levels than negatively tested patients. 
Further analysis showed that this difference was due to significantly lower HbA1c levels in patients with known T1DM, while HbA1c levels were the same in pediatric patients with new‐onset diabetes who tested positive or negative for SARS‐CoV‐2. Our database did not contain any information on why patients were tested for SARS‐CoV‐2, so the reason for the significantly lower HbA1c in SARS‐CoV‐2‐positive patients remains unclear. In the group of SARS‐CoV‐2‐negative patients, the percentage of patients admitted to the hospital for diabetes education was much higher than in the SARS‐CoV‐2‐positive group. The main reason for these admissions was poor diabetes control, which may explain the higher HbA1c levels in children and adolescents with negative SARS‐CoV‐2 tests. Population‐wide screening studies among pediatric patients with diabetes may provide clarity in this regard. Importantly, elevated HbA1c was not observed as a risk factor for SARS‐CoV‐2 infection in SARS‐CoV‐2‐positive patients. There were no other significant differences between negatively and positively tested pediatric patients. These results are indicative of the low morbidity in children and adolescents with T1DM and SARS‐CoV‐2 infection, as has also been reported for children from the United States and England. 8, 12 In addition, the data show that metabolic control generally remained stable in the setting of infection and that diabetes‐associated morbidity did not worsen with SARS‐CoV‐2 infection. Our results indicate the success of patient education and outpatient management of patients with T1DM in Germany, Austria, Switzerland, and Luxembourg.

The rate of DKA at new onset of diabetes was significantly higher during the COVID‐19 pandemic than in prior years: Kamrath et al. reported a significantly increased value of 44.7% for 2020. 13 During the 01/2020–06/2021 analysis period, the DKA rate in our pediatric cohort was still higher than in previous years (35.8% compared to approximately 24%). Possible reasons include delayed presentation due to fear of visiting the hospital, delayed presentation due to infection with SARS‐CoV‐2, and delayed diagnosis of new‐onset diabetes due to infection symptoms. In their analysis of adult patient data, Pasquel et al. found evidence that infection with SARS‐CoV‐2 itself led to more severe DKA. 14 Li et al. demonstrated that SARS‐CoV‐2 infection can lead to ketosis and ketoacidosis, and not only in patients with diabetes. 15 In our study population, there was no evidence of this: the rate of DKA was not increased in SARS‐CoV‐2‐positive compared with SARS‐CoV‐2‐negative pediatric patients. Further analysis showed that neither patients with known diabetes nor patients with new‐onset diabetes had an elevated DKA rate compared to the SARS‐CoV‐2‐negative control group.

Among the subgroup of patients with new‐onset diabetes and available data on COVID‐19 symptoms, infection was symptomatic in 8/14 cases (57.1%), markedly more than in the overall study population (43.5%).
This finding indicates that new‐onset diabetes may have aggravated COVID‐19 infection, which is consistent with observations by Singh et al. 16 By contrast, no such evidence was found for patients with previously known T1DM, where the COVID‐19 manifestation index of 37.5% was even below the value of about 50% reported for the general population. 17
Adult patients

In adult patients with T1DM, the proportion of symptomatic infections was almost twice as high as in pediatric patients (83.3% vs. 43.5%). Definitive statements are not yet available regarding the COVID‐19 manifestation index, which depends on multiple covariables, including age, preexisting disease, and the socioeconomic status of the population. 18, 19, 20 COVID‐19 manifestation index values reported in the literature vary from 55% to >90% 21, 22 and are reported as being about 50% in children, 17 which is consistent with our data. Age and diabetes duration in SARS‐CoV‐2‐positive adult patients with T1DM tended to be higher than in those who tested negative for SARS‐CoV‐2 in our study. This was particularly true for hospitalized patients with SARS‐CoV‐2 infection (24.2 vs. 37.1 vs. 49.2 years). No such associations were observed in our pediatric patients. The hospitalization rate was slightly higher in adult patients with T1DM, even though significantly fewer patients in this group of the DPV network presented with new‐onset diabetes. A separate analysis of the DPV data from adult patients with T2DM, which is not part of the present study, showed an even higher hospitalization rate of 82.3% (median age 73.5 years; unpublished data). This could mean that the sequelae of long‐standing diabetes are risk factors for a symptomatic and/or severe course of SARS‐CoV‐2 infection in adult patients. Together with higher age, which is itself a risk factor for severe SARS‐CoV‐2 infection, this may explain the higher hospitalization rate in adult patients. Indeed, the literature has demonstrated an association between increasing age, cardiovascular risk factors, diabetes sequelae, and a more severe course of SARS‐CoV‐2 infection. 14, 23, 24 From these observations follows the recommendation to lower blood glucose and HbA1c levels as a primary prevention measure to reduce the risk of severe COVID‐19 courses. 25, 26, 27 Due to a lack of data on COVID‐19 symptoms in adult T1DM patients, we were unable to test this hypothesis with our current data. The significantly higher HbA1c in SARS‐CoV‐2‐negatively tested adult patients with T1DM compared to positively tested patients is most likely due to a higher percentage of inpatients admitted for new‐onset diabetes (14.8% vs. 5.0%) and diabetes education (25.9% vs. 20.0%).
Limitations

Due to the retrospective design of the study as an analysis of existing database records and the absence of a nondiabetic comparison group, our observations need to be considered tentative and interpreted with caution. Since our analysis compared a relatively small group of SARS‐CoV‐2‐positive patients with T1DM with a markedly larger group of negatively tested patients, statistical significance repeatedly could not be demonstrated despite clear numerical differences between the groups. Further studies in larger cohorts may be useful to confirm our observations.

Our database did not contain any clear information on the reasons why patients were tested for SARS‐CoV‐2. Since SARS‐CoV‐2 testing was usually performed on hospital admission but not at outpatient appointments, our data originated mainly from inpatients and from patients with infection symptoms, which limits the conclusions that can be drawn beyond these groups.
Population‐wide screening studies among pediatric patients with diabetes may provide clarity in this regard.

Our results reflect the COVID‐19 pandemic during the periods when the original and "delta" variants of SARS‐CoV‐2 were predominant. Infection rates were much higher during the subsequent "omicron" wave, especially in pediatric patients, and some symptoms differed considerably. Hence, results for this later period may differ from the present findings. DPV registry data for this period will become available in the fall of 2022.
CONCLUSIONS

Our results are indicative of the low morbidity in children and adolescents with T1DM and SARS‐CoV‐2 infection. Importantly, our study found no increase in the DKA rate among patients with positive SARS‐CoV‐2 tests. Patients who tested positive for SARS‐CoV‐2 did not have elevated HbA1c levels compared to those who tested negative. In addition, no association was found between age, diabetes duration, HbA1c, or BMI and symptomatic infection.
Thus, no evidence emerged that these factors led to increased SARS‐CoV‐2 morbidity in pediatric patients with a previous diagnosis of T1DM in our study population. Most patients with known T1DM and SARS‐CoV‐2 infection could be managed as outpatients. However, when SARS‐CoV‐2 infection coincided with new‐onset diabetes, the infection was usually symptomatic.

In adult patients, an association was observed between age and the frequency of symptomatic SARS‐CoV‐2 infection as well as hospitalization. Since diabetes sequelae are associated with a more severe course, optimization of patients' metabolic situation is recommended.

AUTHOR CONTRIBUTIONS

B.R.B. was responsible for conceptualization, data analysis and interpretation, investigation, and writing of the original draft of the manuscript. S.R.T. was responsible for methodology, data analysis, formal analysis, and review and editing of the manuscript. C.K. was responsible for review and editing of the manuscript. T.R.R. was responsible for conceptualization, investigation, and review and editing of the manuscript. R.W.H. was responsible for conceptualization, supervision, funding acquisition, and review and editing of the manuscript. B.K., K.K., A.M., E.M.‐R., F.R., T.R.R., and R.W.H. were responsible for data acquisition. S.R.T., C.K., B.K., K.K., E.M.‐R., F.R., T.R.R., and R.W.H. were responsible for scientific discussion of the results and important intellectual content and for review and editing of the manuscript. All authors reviewed the draft manuscript for important intellectual content and approved the manuscript prior to submission. S.R.T. and R.W.H. are the guarantors of this work and, as such, had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

FUNDING

The DPV is supported through the German Federal Ministry for Education and Research within the German Centre for Diabetes Research (grant 82DZD14A02). The funding organization had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; the preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.

CONFLICT OF INTEREST

None declared.

ETHICS APPROVAL

The ethics committee of the University of Ulm approved the analysis of the anonymized DPV data (approval no. 314/21).

INFORMED CONSENT

Informed consent (verbal or written) to participate in the DPV registry was available from patients and/or their parents.

SUPPORTING INFORMATION

Supporting information is available as an additional data file.
[ null, "methods", null, null, null, "results", null, null, "discussion", null, null, null, "conclusions", null, null, "COI-statement", null, null, "supplementary-material" ]
[ "COVID‐19", "diabetic ketoacidosis", "DPV database", "SARS‐CoV‐2", "type 1 diabetes mellitus", "新冠肺炎", "1型糖尿病", "糖尿病酮症酸中毒", "前瞻性糖尿病随访数据", "严重急性呼吸综合征冠状病毒2型" ]
INTRODUCTION

In late 2019, the severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) pandemic began to spread across the world. 1, 2 Growing evidence showed increased morbidity and mortality in patients with type 2 diabetes mellitus. 3, 4, 5 On the other hand, data on patients with type 1 diabetes mellitus (T1DM), especially pediatric patients, are sparse and at times inconsistent. 6, 7 In part, conclusions for pediatric patients have been drawn by extrapolation from data on adult patients with diabetes mellitus. This approach is not valid and may lead to misinformation. 8 Thus, targeted analysis of pediatric as well as adult populations with T1DM is essential to gain clarity regarding the impact of SARS‐CoV‐2 infection on these patients. The aims of the present study were to (1) investigate the influence of SARS‐CoV‐2 infection on the course of T1DM, and of T1DM on infection with SARS‐CoV‐2, in children and adolescents in Germany, Austria, Switzerland, and Luxembourg; (2) gain information about SARS‐CoV‐2‐infected adult patients with T1DM in central Europe; and (3) learn about the differences between pediatric and adult patients with T1DM.

METHODS

Data source and study population

Data from the Prospective Diabetes Follow‐up (DPV) Registry were analyzed. The DPV registry is maintained by the Institute of Epidemiology and Medical Biometry at the University of Ulm, Ulm, Germany, and collects patient data from diabetes centers in Germany, Austria, Switzerland, and Luxembourg, covering >90% of all pediatric diabetes centers in Germany and Austria. 9 Informed consent (verbal or written) to participate in the DPV registry was available from patients and/or their parents. The ethics committee of the University of Ulm approved the analysis of the anonymized DPV data (approval no. 314/21).

The analysis included all T1DM patients aged >6 months with documented polymerase chain reaction (PCR) tests for SARS‐CoV‐2. All patients tested upon inpatient admission, as well as outpatients and externally tested patients, were included. The data collection period was 01/2020–06/2021, and data were analyzed separately for pediatric (age <18 years) and adult (age ≥18 years) patients. Data were aggregated per patient ±10 days around the test. Overall, 162 centers contributed to the data collection: 150 from Germany, 10 from Austria, and one each from Switzerland and Luxembourg.

Data from SARS‐CoV‐2‐positive and ‐negative T1DM patients were compared. In a second step, the following subgroups were further analyzed: First, data from SARS‐CoV‐2‐positive T1DM patients with documented information on symptoms were analyzed (comparison of patients with/without symptoms of coronavirus disease 2019 [COVID‐19]). Second, data from hospitalized SARS‐CoV‐2‐positive and ‐negative T1DM patients were compared. Third, a separate analysis of pediatric patients with new‐onset diabetes or follow‐up examinations was performed with respect to glycated hemoglobin A1c (HbA1c) and the rate of diabetic ketoacidosis (DKA). In addition, we analyzed data from SARS‐CoV‐2‐positive and ‐negative adult T1DM patients (total population; hospitalized patients).
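The per‐patient aggregation described above (±10 days around each PCR test) can be illustrated with a short sketch. This is a hypothetical Python/pandas reconstruction, not the registry's actual SAS code: the column names (patient_id, visit_date, test_date, pcr_result, hba1c, bmi) and the use of the median as the aggregate are assumptions for illustration, since the paper does not specify these details.

```python
import pandas as pd

def aggregate_around_test(visits: pd.DataFrame, tests: pd.DataFrame) -> pd.DataFrame:
    """Aggregate clinical values recorded within +/-10 days of each PCR test."""
    merged = visits.merge(tests, on="patient_id")  # one row per (visit, test) pair
    day_diff = (merged["visit_date"] - merged["test_date"]).dt.days.abs()
    window = merged[day_diff <= 10]  # keep only visits inside the +/-10 day window
    return (
        window.groupby(["patient_id", "test_date", "pcr_result"], as_index=False)
        .agg(hba1c=("hba1c", "median"), bmi=("bmi", "median"))
    )
```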
Variables

Demographic data included patient age, age at new‐onset diabetes, diabetes duration, sex, and migrant background (patient and/or at least one parent born outside Germany, Austria, Switzerland, or Luxembourg). Clinical data included HbA1c (%, mmol/mol) and body mass index (BMI, kg/m²). For the pediatric patients, BMI values were expressed as standard deviation scores (SDS) based on the German KiGGS percentiles (German Health Interview and Examination Survey for Children and Adolescents). 10 To ensure comparability of the local HbA1c values, the respective values were converted according to the "multiple of the mean" method for normalization to the reference range of the Diabetes Control and Complications Trial (DCCT). 11 The other clinical parameters were the reason for admission (unknown, new‐onset diabetes, education, DKA, or "other reasons", e.g., admission for SARS‐CoV‐2 infection), the mode of care (outpatient, inpatient, or telemedical care), the proportion of patients with DKA, and DKA at new onset of diabetes. DKA was defined as pH <7.3 and/or bicarbonate levels <15 mmol/L. The COVID‐19 manifestation index was defined as the percentage of symptomatic infections among all patients testing positive for SARS‐CoV‐2.
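As a worked illustration, the sketch below restates these definitions in Python. It is illustrative only: the DCCT reference mean of 6.05% used in the "multiple of the mean" normalization is an assumed, commonly cited value and is not taken from this paper.

```python
def has_dka(ph: float, bicarbonate_mmol_l: float) -> bool:
    """DKA criterion from the text: pH < 7.3 and/or bicarbonate < 15 mmol/L."""
    return ph < 7.3 or bicarbonate_mmol_l < 15.0

def hba1c_mom_normalized(local_value: float, local_mean: float,
                         dcct_mean: float = 6.05) -> float:
    """'Multiple of the mean': express a local HbA1c value as a multiple of the
    local laboratory mean, then rescale to the DCCT reference mean
    (the 6.05% default is an assumed reference value)."""
    return local_value / local_mean * dcct_mean

def manifestation_index(n_symptomatic: int, n_positive: int) -> float:
    """COVID-19 manifestation index: percentage of symptomatic infections
    among all SARS-CoV-2 PCR-positive patients."""
    return 100.0 * n_symptomatic / n_positive

# Example from the results: 12 of 32 positive patients with known T1DM
# were symptomatic, giving a manifestation index of 37.5%.
assert round(manifestation_index(12, 32), 1) == 37.5
```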
Statistical analyses

Analyses were performed as unadjusted comparisons using SAS 9.4 software (TS1M7; SAS Institute Inc.) on a Windows Server 2019 mainframe, computing the Wilcoxon rank‐sum test for continuous variables and the chi‐square test for dichotomous variables. All analyses were adjusted for multiple testing using the Bonferroni‐Holm method (Bonferroni step‐down). Two‐tailed p values <0.05 were considered significant.
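For readers without SAS, a minimal sketch of the same testing scheme in Python (SciPy and statsmodels) might look as follows. The function and variable names are assumptions for illustration; the original analyses were performed in SAS 9.4.

```python
from scipy.stats import mannwhitneyu, chi2_contingency
from statsmodels.stats.multitest import multipletests

def compare_groups(continuous_pos, continuous_neg, dichotomous_2x2):
    """Wilcoxon rank-sum (Mann-Whitney U) for a continuous variable,
    chi-square for a dichotomous one, then Holm (Bonferroni step-down)
    adjustment across the resulting p values."""
    p_values = [
        mannwhitneyu(continuous_pos, continuous_neg,
                     alternative="two-sided").pvalue,
        chi2_contingency(dichotomous_2x2)[1],  # second element is the p value
    ]
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
    return reject, p_adjusted  # adjusted two-tailed p < 0.05 -> significant

# Usage sketch: HbA1c values by PCR result plus a 2x2 DKA-by-PCR table,
# e.g. compare_groups(hba1c_pos, hba1c_neg,
#                     [[dka_pos, no_dka_pos], [dka_neg, no_dka_neg]])
```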
Third, a separate analysis of pediatric patients with new‐onset diabetes or follow‐up examinations was performed with respect to glycated hemoglobin A1c (HbA1c) and the rate of diabetic ketoacidosis (DKA). In addition, we analyzed data from SARS‐CoV‐2‐positive and ‐negative adult T1DM patients (total population; hospitalized patients). Variables: Demographic data included patient age, age at new‐onset diabetes, diabetes duration, sex, and migrant background (patient and/or at least one parent born outside Germany, Austria, Switzerland, or Luxembourg). Clinical data included HbA1c (%, mmol/mol) and body mass index (BMI, kg/m2). For the pediatric patients, BMI values were expressed as standard deviation scores (SDS) based on the German KiGGS percentiles (German Health Interview and Examination Survey for Children and Adolescents). 10 To ensure comparability of the local HbA1c values, the respective values were converted according to the “multiple of the mean” method for normalization to the reference range of the Diabetes Control and Complications Trial (DCCT). 11 The other clinical parameters were the reason for admission (unknown, new‐onset diabetes, education, DKA, and “other reasons”, e.g., admission for SARS‐CoV‐2 infection), the mode of care (outpatient, inpatient, telemedical care), the proportion of patients with DKA, and DKA at new onset of diabetes. DKA was defined as pH <7.3 and/or bicarbonate levels <15 mmol/L. The COVID‐19 manifestation index was defined as the percentage of symptomatic infections in all patients testing positive for SARS‐CoV‐2. Statistical analyses: Analyses were performed as unadjusted comparisons using SAS 9.4 software (TS1M7; SAS Institute Inc) on a Windows Server 2019 mainframe to compute the Wilcoxon rank‐sum test for continuous variables and the chi‐square test for dichotomous variables. All analyses were adjusted for multiple testing using the Bonferroni‐Holm method (Bonferroni stepdown). Two‐tailed p values <0.05 were considered significant. RESULTS: T1DM in children and adolescents In total, 1855 pediatric T1DM patients underwent SARS‐CoV‐2 testing by PCR; 157 patients tested positive (cumulative positive rate 8.5%). There was no statistical difference between negatively and positively tested patients with respect to age, age at new‐onset diabetes, diabetes duration, and BMI‐SDS (see Table 1). HbA1c was significantly lowered in positively tested patients (p = 0.003). The difference was due to the patients with known and treated T1DM; there was no difference in this respect in the patients with new‐onset diabetes. The rate of DKA was not increased in patients who tested positive for SARS‐CoV‐2 compared to patients who tested negative (patients with known diabetes: 2.5% vs. 6.3%, new‐onset diabetes: 31.4% vs. 35.1%; see Table 2). Pediatric patients with type 1 diabetes mellitus by PCR result Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. 
Pediatric patients with type 1 diabetes mellitus by PCR result; patients with diabetes onset and known diabetes Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. SARS‐CoV‐2‐positive patients received mostly inpatient care during the first 6 months of the pandemic (93.3%) whereas negatively tested patients were only treated in 38% as inpatients. As the pandemic went on, this treatment changed. As for 06/2021, patients who tested negative for SARS‐CoV‐2 received care as inpatients in 75.8% of cases, as outpatients in 24% of cases, and by telemedicine in 0.2% of cases. By contrast, only 31.2% of positively tested patients received care as inpatients, 59.3% as outpatients, and another 9.6% by telemedicine (p < 0.001 in each case). For SARS‐CoV‐2‐negative patients, the most common reasons for admission were new‐onset diabetes (41.5%) and admission for diabetes education (41.2%). By contrast, 69.4% of patients who tested positive were admitted to hospital due to new‐onset diabetes and only 10.2% were admitted for education, (p < 0.001 in each case). Of 46 patients with a positive SARS‐CoV‐2 PCR result and available data on symptoms of SARS‐CoV‐2 infection, 20 (43.5%) had COVID‐19 symptoms. In individuals with known T1DM, the COVID‐19 manifestation index was 37.5% (12/32). By contrast, 8/14 (57.1%) children and adolescents with new‐onset diabetes and 5/6 (83.3%) hospitalized adult patients with new‐onset diabetes showed symptoms of COVID‐19 (Figure 1). There was no significant difference between patients with COVID‐19 and asymptomatic patients with positive SARS‐CoV‐2 PCR results in terms of age, age at new‐onset diabetes, diabetes duration, HbA1c, and BMI‐SDS (see Table 3). 17/26 (65.4%) asymptomatically and 11/20 (55%) symptomatically infected pediatric patients were managed as outpatients. Only 6 (30%) of the symptomatically infected patients were admitted as inpatients, all due to acute diabetes complications (5 patients with new‐onset diabetes and 1 with DKA) and 4 (15.4%) of the asymptomatically infected patients (1 each with new‐onset diabetes/DKA and 2 for diabetes education). 3 (15%) patients with COVID‐19 and 5 (19.2%) asymptomatically infected patients received telemedical instructions. At new‐onset diabetes, DKA was present in 4/8 (50%) patients with symptomatic infection and 2/6 (33.3%) patients with asymptomatic infection. COVID‐19 manifestation index. T1DM, type 1 diabetes mellitus. Pediatric patients with type 1 diabetes mellitus by PCR result and reported COVID‐19 symptoms Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction. In total, 1855 pediatric T1DM patients underwent SARS‐CoV‐2 testing by PCR; 157 patients tested positive (cumulative positive rate 8.5%). There was no statistical difference between negatively and positively tested patients with respect to age, age at new‐onset diabetes, diabetes duration, and BMI‐SDS (see Table 1). HbA1c was significantly lowered in positively tested patients (p = 0.003). 
The difference was due to the patients with known and treated T1DM; there was no difference in this respect in the patients with new‐onset diabetes. The rate of DKA was not increased in patients who tested positive for SARS‐CoV‐2 compared to patients who tested negative (patients with known diabetes: 2.5% vs. 6.3%, new‐onset diabetes: 31.4% vs. 35.1%; see Table 2). Pediatric patients with type 1 diabetes mellitus by PCR result Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. Pediatric patients with type 1 diabetes mellitus by PCR result; patients with diabetes onset and known diabetes Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. SARS‐CoV‐2‐positive patients received mostly inpatient care during the first 6 months of the pandemic (93.3%) whereas negatively tested patients were only treated in 38% as inpatients. As the pandemic went on, this treatment changed. As for 06/2021, patients who tested negative for SARS‐CoV‐2 received care as inpatients in 75.8% of cases, as outpatients in 24% of cases, and by telemedicine in 0.2% of cases. By contrast, only 31.2% of positively tested patients received care as inpatients, 59.3% as outpatients, and another 9.6% by telemedicine (p < 0.001 in each case). For SARS‐CoV‐2‐negative patients, the most common reasons for admission were new‐onset diabetes (41.5%) and admission for diabetes education (41.2%). By contrast, 69.4% of patients who tested positive were admitted to hospital due to new‐onset diabetes and only 10.2% were admitted for education, (p < 0.001 in each case). Of 46 patients with a positive SARS‐CoV‐2 PCR result and available data on symptoms of SARS‐CoV‐2 infection, 20 (43.5%) had COVID‐19 symptoms. In individuals with known T1DM, the COVID‐19 manifestation index was 37.5% (12/32). By contrast, 8/14 (57.1%) children and adolescents with new‐onset diabetes and 5/6 (83.3%) hospitalized adult patients with new‐onset diabetes showed symptoms of COVID‐19 (Figure 1). There was no significant difference between patients with COVID‐19 and asymptomatic patients with positive SARS‐CoV‐2 PCR results in terms of age, age at new‐onset diabetes, diabetes duration, HbA1c, and BMI‐SDS (see Table 3). 17/26 (65.4%) asymptomatically and 11/20 (55%) symptomatically infected pediatric patients were managed as outpatients. Only 6 (30%) of the symptomatically infected patients were admitted as inpatients, all due to acute diabetes complications (5 patients with new‐onset diabetes and 1 with DKA) and 4 (15.4%) of the asymptomatically infected patients (1 each with new‐onset diabetes/DKA and 2 for diabetes education). 3 (15%) patients with COVID‐19 and 5 (19.2%) asymptomatically infected patients received telemedical instructions. At new‐onset diabetes, DKA was present in 4/8 (50%) patients with symptomatic infection and 2/6 (33.3%) patients with asymptomatic infection. COVID‐19 manifestation index. T1DM, type 1 diabetes mellitus. 
Pediatric patients with type 1 diabetes mellitus by PCR result and reported COVID‐19 symptoms Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction. T1DM in adults Analysis of the data from 240 adult patients with T1DM showed no statistically significant differences between SARS‐CoV‐2 negatively/positively tested patients except for HbA1c, which was significantly lower in positively tested patients (see Table 4). The most frequent reasons for admission of negatively tested patients to hospital were diabetes education (25.9%), DKA (19.4%), and new‐onset diabetes (14.8%), whereas positively tested patients were mostly admitted for “other reasons” (not primarily diabetes‐related reasons, e.g., SARS‐CoV‐2 infection [35%]) and DKA (30%). Questioning about infection symptoms was documented in 12 patients who tested positive, 10 (83.3%) of whom had symptomatic infection (Figure 1). Nevertheless, only 35.7% of the positively tested patients required hospitalization (Figure 2). Adult patients with type 1 diabetes mellitus by PCR result Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. SARS‐CoV‐2 hospitalization rate. T1DM, type 1 diabetes mellitus. Analysis of the data from 240 adult patients with T1DM showed no statistically significant differences between SARS‐CoV‐2 negatively/positively tested patients except for HbA1c, which was significantly lower in positively tested patients (see Table 4). The most frequent reasons for admission of negatively tested patients to hospital were diabetes education (25.9%), DKA (19.4%), and new‐onset diabetes (14.8%), whereas positively tested patients were mostly admitted for “other reasons” (not primarily diabetes‐related reasons, e.g., SARS‐CoV‐2 infection [35%]) and DKA (30%). Questioning about infection symptoms was documented in 12 patients who tested positive, 10 (83.3%) of whom had symptomatic infection (Figure 1). Nevertheless, only 35.7% of the positively tested patients required hospitalization (Figure 2). Adult patients with type 1 diabetes mellitus by PCR result Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. SARS‐CoV‐2 hospitalization rate. T1DM, type 1 diabetes mellitus. T1DM in children and adolescents: In total, 1855 pediatric T1DM patients underwent SARS‐CoV‐2 testing by PCR; 157 patients tested positive (cumulative positive rate 8.5%). There was no statistical difference between negatively and positively tested patients with respect to age, age at new‐onset diabetes, diabetes duration, and BMI‐SDS (see Table 1). HbA1c was significantly lowered in positively tested patients (p = 0.003). The difference was due to the patients with known and treated T1DM; there was no difference in this respect in the patients with new‐onset diabetes. 
The rate of DKA was not increased in patients who tested positive for SARS‐CoV‐2 compared to patients who tested negative (patients with known diabetes: 2.5% vs. 6.3%, new‐onset diabetes: 31.4% vs. 35.1%; see Table 2). Pediatric patients with type 1 diabetes mellitus by PCR result Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. Pediatric patients with type 1 diabetes mellitus by PCR result; patients with diabetes onset and known diabetes Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. SARS‐CoV‐2‐positive patients received mostly inpatient care during the first 6 months of the pandemic (93.3%) whereas negatively tested patients were only treated in 38% as inpatients. As the pandemic went on, this treatment changed. As for 06/2021, patients who tested negative for SARS‐CoV‐2 received care as inpatients in 75.8% of cases, as outpatients in 24% of cases, and by telemedicine in 0.2% of cases. By contrast, only 31.2% of positively tested patients received care as inpatients, 59.3% as outpatients, and another 9.6% by telemedicine (p < 0.001 in each case). For SARS‐CoV‐2‐negative patients, the most common reasons for admission were new‐onset diabetes (41.5%) and admission for diabetes education (41.2%). By contrast, 69.4% of patients who tested positive were admitted to hospital due to new‐onset diabetes and only 10.2% were admitted for education, (p < 0.001 in each case). Of 46 patients with a positive SARS‐CoV‐2 PCR result and available data on symptoms of SARS‐CoV‐2 infection, 20 (43.5%) had COVID‐19 symptoms. In individuals with known T1DM, the COVID‐19 manifestation index was 37.5% (12/32). By contrast, 8/14 (57.1%) children and adolescents with new‐onset diabetes and 5/6 (83.3%) hospitalized adult patients with new‐onset diabetes showed symptoms of COVID‐19 (Figure 1). There was no significant difference between patients with COVID‐19 and asymptomatic patients with positive SARS‐CoV‐2 PCR results in terms of age, age at new‐onset diabetes, diabetes duration, HbA1c, and BMI‐SDS (see Table 3). 17/26 (65.4%) asymptomatically and 11/20 (55%) symptomatically infected pediatric patients were managed as outpatients. Only 6 (30%) of the symptomatically infected patients were admitted as inpatients, all due to acute diabetes complications (5 patients with new‐onset diabetes and 1 with DKA) and 4 (15.4%) of the asymptomatically infected patients (1 each with new‐onset diabetes/DKA and 2 for diabetes education). 3 (15%) patients with COVID‐19 and 5 (19.2%) asymptomatically infected patients received telemedical instructions. At new‐onset diabetes, DKA was present in 4/8 (50%) patients with symptomatic infection and 2/6 (33.3%) patients with asymptomatic infection. COVID‐19 manifestation index. T1DM, type 1 diabetes mellitus. 
Pediatric patients with type 1 diabetes mellitus by PCR result and reported COVID‐19 symptoms Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; KiGGS, German Health Interview and Examination Survey for Children and Adolescents; n, overall number of patients; PCR, polymerase chain reaction. T1DM in adults: Analysis of the data from 240 adult patients with T1DM showed no statistically significant differences between SARS‐CoV‐2 negatively/positively tested patients except for HbA1c, which was significantly lower in positively tested patients (see Table 4). The most frequent reasons for admission of negatively tested patients to hospital were diabetes education (25.9%), DKA (19.4%), and new‐onset diabetes (14.8%), whereas positively tested patients were mostly admitted for “other reasons” (not primarily diabetes‐related reasons, e.g., SARS‐CoV‐2 infection [35%]) and DKA (30%). Questioning about infection symptoms was documented in 12 patients who tested positive, 10 (83.3%) of whom had symptomatic infection (Figure 1). Nevertheless, only 35.7% of the positively tested patients required hospitalization (Figure 2). Adult patients with type 1 diabetes mellitus by PCR result Abbreviations: %, percentage of patients relative to the overall number of patients; DKA, diabetic ketoacidosis; HbA1C, glycated hemoglobin A1c; IQR, interquartile range; n, overall number of patients; PCR, polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. SARS‐CoV‐2 hospitalization rate. T1DM, type 1 diabetes mellitus. DISCUSSION: T1DM in pediatric patients SARS‐CoV‐2‐positive patients initially received mostly inpatient care during the first 6 months of the pandemic. The data may reflect the initial uncertainty in the management of SARS‐CoV‐2. After initial results suggested that there was no greatly increased risk of disease in young patients with T1DM, care was able to be provided predominantly as outpatient treatment and by telemedicine. In cases in which a pediatric patient with a positive test required inpatient care, this was primarily due to the concurrent new onset of T1DM (69.4%, 34/49 patients). Our analysis found that SARS‐CoV‐2‐positive patients had significantly lower HbA1c levels than negatively tested patients. Further analysis showed that this difference was due to significantly lower HbA1c levels in patients with known T1DM, while HbA1c levels were the same in positively and negatively SARS‐CoV‐2 tested pediatric patients with new‐onset diabetes. Our database did not contain any information as to why patients were tested for SARS‐CoV‐2, so the reason for the significantly lower HbA1c value in SARS‐CoV‐2 positive patients remains unclear. In the group of SARS‐CoV‐2 negative patients the percentage of patients admitted to the hospital for diabetes education was much higher than in the SARS‐CoV‐2 positive group. The main reason for these admissions was poor diabetes control. This may explain the higher HbA1c level in children and adolescents with negative SARS‐CoV‐2 tests. Population‐wide screening studies among diabetic pediatric patients may provide clarity in this regard. Importantly, no increased HbA1c levels in terms of a risk factor for SARS‐CoV‐2 infection were observed in SARS‐CoV‐2‐positive patients. There were no other significant differences between negatively and positively tested pediatric patients. 
The results are indicative of the low morbidity in children and adolescents with T1DM and SARS‐CoV‐2 infection, as has also been reported for children from the United States and England. 8, 12 In addition, the data show that metabolic control generally remained stable in the setting of infection and that diabetes‐associated morbidity did not worsen with SARS‐CoV‐2 infection. Our results indicate the success of patient education and outpatient management of patients with T1DM in Germany, Austria, Switzerland, and Luxembourg. The rate of DKA at new onset of diabetes was significantly higher during the COVID‐19 pandemic than in prior years: Kamrath et al. reported a significantly increased value of 44.7% for 2020. 13 During the 01/2020–06/2021 analysis period, the DKA rate in our pediatric cohort was still higher than in previous years (35.8% compared to approximately 24%). Possible reasons include delayed presentation due to fear of visiting the hospital, delayed presentation due to infection with SARS‐CoV‐2, and delayed diagnosis of new‐onset diabetes due to infection symptoms. In their analysis of adult patient data, Pasquel et al. found evidence that infection with SARS‐CoV‐2 itself led to more severe DKA. 14 Li et al. demonstrated that SARS‐CoV‐2 infection can lead to ketosis and ketoacidosis, and not only in patients with diabetes. 15 In our study population, there was no evidence in this regard, as the rate of DKA was not increased in SARS‐CoV‐2‐positive compared with SARS‐CoV‐2‐negative pediatric patients. Further analysis showed that neither patients with known diabetes nor patients with new‐onset diabetes had an elevated DKA rate compared to patients from the SARS‐CoV‐2‐negative control group. Among the subgroup of patients with new‐onset diabetes and available data on COVID‐19 symptoms, infection was symptomatic in 8/14 cases (57.1%), markedly more often than in the overall study population (43.5%). This finding indicates that new‐onset diabetes may have aggravated COVID‐19 infection, which is consistent with observations by Singh et al. 16 By contrast, no such evidence was found for patients with previously known T1DM, where the COVID‐19 manifestation index of 37.5% was even below the value of about 50% reported for the general population. 17
Adult patients: In adult patients with T1DM, the proportion of symptomatic infections was almost twice as high as in pediatric patients (83.3% vs. 43.5%). Definitive statements are not yet available regarding the COVID‐19 manifestation index, which depends on multiple covariables, including age, preexisting disease, and socioeconomic status of the population.
18, 19, 20 COVID‐19 manifestation index values reported in the literature vary from 55% to >90% 21, 22 and are reported as being about 50% in children, 17 which is consistent with our data. Age and diabetes duration in SARS‐CoV‐2‐positive adult patients with T1DM tended to be higher than in those who tested negative in our study. This was particularly true for hospitalized patients with SARS‐CoV‐2 infection (24.2 vs. 37.1 vs. 49.2 years). No such associations were observed in our pediatric patients. The hospitalization rate was slightly higher in adult patients with T1DM, even though there were significantly fewer patients with diabetes manifestation in this group from the DPV network. A separate analysis of the DPV data from adult patients with T2DM, which is not part of the present study, showed an even higher hospitalization rate of 82.3% (median age 73.5 years; unpublished data). This could mean that the sequelae of long‐standing diabetes are risk factors for a symptomatic and/or severe course of SARS‐CoV‐2 infection in adult patients. In addition to higher age, which in itself is a risk factor for severe SARS‐CoV‐2 infection, this may explain the higher hospitalization rate in adult patients. Indeed, the literature has demonstrated the association between increasing age, cardiovascular risk factors, diabetes sequelae, and a more severe course of SARS‐CoV‐2 infection. 14, 23, 24 From these observations follows the recommendation to lower blood glucose and HbA1c levels as a primary prevention measure to reduce the risk of severe COVID‐19 courses. 25, 26, 27 Due to the lack of data on COVID‐19 symptoms in adult T1DM patients, we were unable to test this hypothesis with our current data. The significantly higher HbA1c in SARS‐CoV‐2 negatively tested patients compared to positively tested adult patients with T1DM is most likely due to a higher percentage of inpatients with diabetes manifestation (14.8% vs. 5.0%) and diabetes education (25.9% vs. 20.0%).
Limitations: Due to the retrospective design of the study as an analysis of existing database records and the absence of a nondiabetic comparison group, our observations need to be considered tentative and interpreted with caution. Since our analysis compared a relatively small group of SARS‐CoV‐2‐positive patients with T1DM with a markedly larger group of negatively tested patients, significant differences between the groups repeatedly could not be demonstrated despite clear numerical differences. Further studies in larger cohorts may be useful to confirm our observations. Our database did not contain any clear information on the reasons why patients were tested for SARS‐CoV‐2. Since SARS‐CoV‐2 testing was usually performed on hospital admission but not at outpatient appointments, our data originated mainly from inpatients and from patients with infection symptoms, which limits the statements that can be made about these groups. Population‐wide screening studies among pediatric patients with diabetes may provide clarity in this regard. Our results reflect the COVID‐19 pandemic during the periods when the original and “delta” variants of SARS‐CoV‐2 were predominant. Infection rates were much higher during the subsequent “omicron” wave, especially in pediatric patients, and some symptoms differed considerably. Hence, results for this later period may differ from the present findings. DPV registry data for this period will become available in the fall of 2022.
CONCLUSIONS: Our results are indicative of the low morbidity in children and adolescents with T1DM and SARS‐CoV‐2 infection. Importantly, our study found no increase in the DKA rate among patients with positive SARS‐CoV‐2 tests. Patients who tested positive for SARS‐CoV‐2 did not have elevated HbA1c levels compared to those who tested negative. In addition, no association was found between age, diabetes duration, HbA1c, or BMI and symptoms of infection. Thus, no evidence emerged that these factors led to increased SARS‐CoV‐2 morbidity in pediatric patients with a previous diagnosis of T1DM in our study population. Most patients with known T1DM and SARS‐CoV‐2 infection could be managed as outpatients. However, when SARS‐CoV‐2 infection coincided with new‐onset diabetes, the infection was usually symptomatic. In adult patients, an association was observed between age and the frequency of symptomatic SARS‐CoV‐2 infection as well as hospitalization. Since diabetes sequelae are associated with a more severe course, optimization of patients' metabolic situation is recommended.

AUTHOR CONTRIBUTIONS: B.R.B. was responsible for conceptualization, data analysis and interpretation, investigation, and writing of the original draft of the manuscript. S.R.T. was responsible for methodology, data analysis, formal analysis, and review and editing of the manuscript. C.K. was responsible for review and editing of the manuscript. T.R.R. was responsible for conceptualization, investigation, and review and editing of the manuscript. R.W.H. was responsible for conceptualization, supervision, funding acquisition, and review and editing of the manuscript. B.K., K.K., A.M., E.M.‐R., F.R., T.R.R., and R.W.H. were responsible for data acquisition. S.R.T., C.K., B.K., K.K., E.M.‐R., F.R., T.R.R., and R.W.H. were responsible for scientific discussion of the results and important intellectual content and for review and editing of the manuscript. All authors reviewed the draft manuscript for important intellectual content and approved the manuscript prior to submission.
S.R.T. and R.W.H. are the guarantors of this work and, as such, had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. FUNDING INFORMATION: The DPV is supported through the German Federal Ministry for Education and Research within the German Centre for Diabetes Research (grant 82DZD14A02). The funding organization had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; the preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication. CONFLICT OF INTEREST: None declared. ETHICS STATEMENT: The ethics committee of the University of Ulm approved the analysis of the anonymized DPV data (approval no. 314/21). PATIENT CONSENT STATEMENT: Informed consent (verbal or written) to participate in the DPV registry was obtained from patients and/or their parents. Supporting information: Supporting Information.
Background: Data on patients with type 1 diabetes mellitus (T1DM) and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections are sparse. This study aimed to investigate the association between SARS-CoV-2 infection and T1DM. Methods: Data from the Prospective Diabetes Follow-up (DPV) Registry were analyzed for diabetes patients tested for SARS-CoV-2 by polymerase chain reaction (PCR) in Germany, Austria, Switzerland, and Luxembourg during January 2020-June 2021, using Wilcoxon rank-sum and chi-square tests for continuous and dichotomous variables, adjusted for multiple testing. Results: Data analysis of 1855 pediatric T1DM patients revealed no differences between asymptomatic/symptomatic infected and SARS-CoV-2 negative/positive patients regarding age, new-onset diabetes, diabetes duration, and body mass index. Glycated hemoglobin A1c (HbA1c) and diabetic ketoacidosis (DKA) rate were not elevated in SARS-CoV-2-positive vs. -negative patients. The COVID-19 manifestation index was 37.5% in individuals with known T1DM, but 57.1% in individuals with new-onset diabetes. 68.8% of positively tested patients were managed as outpatients/telemedically. Data analysis of 240 adult T1DM patients revealed no differences between positively and negatively tested patients except lower HbA1c. Of these patients, 83.3% had symptomatic infections; 35.7% of positively tested patients were hospitalized. Conclusions: Our results indicate low morbidity in SARS-CoV-2-infected pediatric T1DM patients. Most patients with known T1DM and SARS-CoV-2 infections could be managed as outpatients. However, SARS-CoV-2 infection was usually symptomatic if it coincided with new-onset diabetes. In adult patients, symptomatic SARS-CoV-2 infection and hospitalization were associated with age.
Keywords: COVID‐19 | diabetic ketoacidosis | DPV database | SARS‐CoV‐2 | type 1 diabetes mellitus
MeSH terms: Adult | Child | Humans | SARS-CoV-2 | Diabetes Mellitus, Type 1 | COVID-19 | Glycated Hemoglobin | Prospective Studies | Diabetic Ketoacidosis
Correlations of switch/sucrose nonfermentable complex mutations with clinical outcomes in advanced non-small cell lung cancer.
36126963
BACKGROUND: The switch/sucrose nonfermentable complex mutations (SWI/SNF-mut) are common in non-small cell lung cancer (NSCLC). However, the association of SWI/SNF-mut with the clinical outcomes of immune checkpoint inhibitors (ICIs), and particularly of epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs), has not been established.

METHODS: We retrospectively collected data of patients at the Cancer Hospital, Chinese Academy of Medical Sciences. Patients with advanced NSCLC who received programmed cell death protein-1 or programmed cell death ligand 1 (PD-[L]1) inhibitors were included in cohort 1, and those with EGFR mutations (EGFR-mutant) who received EGFR-TKI monotherapy were included in cohort 2. Two reported Memorial Sloan-Kettering Cancer Center (MSKCC) cohorts that received immunotherapy alone were used as validation for cohort 1. We analyzed the relationship between SWI/SNF alterations and clinical outcomes in each cohort.

RESULTS: In total, 1162 patients were included, of whom 230 (19.8%) were identified as SWI/SNF-mut, with the most common genetic alterations being ARID1A (33.4%) and SMARCA4 (28.3%). In cohort 1 (n = 146), patients with co-mutations of SWI/SNF and Kirsten rat sarcoma oncogene (KRAS) (SWI/SNFmutKRASmut, n = 18) had significantly prolonged progression-free survival (PFS) (8.6 m vs. 1.9 m; hazard ratio [HR], 0.31; 95% confidence interval [CI], 0.11-0.83; p = 0.032) on PD-(L)1 inhibitor monotherapy, which was consistent with the MSKCC cohorts (not reached [NR] vs. 6.3 m; HR, 0.36; 95% CI, 0.15-0.82; p = 0.016). In cohort 2 (n = 205), ARID1A-mut (n = 16) was associated with improved PFS after EGFR-TKIs (20.6 m vs. 11.2 m; HR, 0.47; 95% CI, 0.27-0.94; p = 0.023).

CONCLUSIONS: In advanced NSCLC, patients with SWI/SNFmutKRASmut seem to benefit more from ICIs. Furthermore, ARID1A-mut may provide a protective effect for EGFR-TKIs in EGFR-mutant patients. However, this is a retrospective single-institution analysis that requires further validation by large prospective studies.
[ "Humans", "Carcinoma, Non-Small-Cell Lung", "Retrospective Studies", "ErbB Receptors", "Lung Neoplasms", "Prospective Studies", "Sucrose", "Prognosis", "Mutation", "Protein Kinase Inhibitors", "DNA Helicases", "Nuclear Proteins", "Transcription Factors" ]
9626335
INTRODUCTION
The human switch/sucrose nonfermentable (SWI/SNF) complex, a chromatin remodeling complex dependent on adenosine triphosphate (ATP), affects DNA replication and repair by regulating genomic architecture. 1 SWI/SNF is encoded by 29 genes belonging to three broad subfamilies: canonical BAF (cBAF), polybromo‐associated BAF (PBAF), and non‐canonical BAF (ncBAF). 2, 3 Genomic abnormalities in SWI/SNF are present in nearly 20% of non–small cell lung cancers (NSCLC), the most frequently mutated genes being ARID1A, SMARCA4, ARID2, ARID1B, and PBRM1. 4, 5, 6 Previous studies in NSCLC have shown that SWI/SNF mutations (SWI/SNF‐mut) frequently co‐occur with mutations in Kirsten rat sarcoma oncogene (KRAS), STK11, and KEAP1, and are largely mutually exclusive with sensitizing driver alterations including epidermal growth factor receptor (EGFR), ALK, MET, ROS1, and RET. 7, 8 In NSCLC, ARID1A and SMARCA4 are the most frequently mutated SWI/SNF genes, each occurring in 8% to 9% of patients. 8, 9, 10, 11 The impact of SWI/SNF gene mutations on the efficacy of immune checkpoint inhibitors (ICIs) in NSCLC has recently drawn considerable attention. A retrospective study of 292 NSCLC patients treated with ICIs found that patients with SMARCA4 alterations had improved overall survival (OS) (hazard ratio [HR], 0.67; p = 0.01). 10 However, another retrospective study showed that ICI‐treated patients with homozygous truncating SMARCA4‐mut had significantly shortened OS (HR, 1.62; p = 0.01), whereas no statistical difference was observed between patients with non‐homozygous truncating SMARCA4‐mut and SMARCA4 wild‐type (wt). 8 Thus, most previous studies on the relationship between SWI/SNF and ICIs in NSCLC included patients from non‐Asian populations, and their conclusions were inconsistent. EGFR tyrosine kinase inhibitors (EGFR‐TKIs), as the archetypal targeted inhibitors, have greatly propelled the evolution of precision treatment and prolonged survival for advanced NSCLC patients with EGFR mutations. 12, 13, 14 EGFR exon 19 deletions (19del) and the exon 21 Leu858Arg (21L858R) mutation are the prevalent EGFR‐TKI‐sensitizing mutations. 15 However, to date there is almost no evidence on the association of SWI/SNF‐mut with clinical outcomes of EGFR‐TKIs. Here, we characterized the clinical features of SWI/SNF‐mut and evaluated the relationship between SWI/SNF‐mut and clinical outcomes of ICIs and EGFR‐TKIs in Chinese patients with NSCLC.
METHODS
Study population: From January 2019 to October 2021, we collected data of patients tested by next‐generation sequencing (NGS) at the Cancer Hospital, Chinese Academy of Medical Sciences. Patients with advanced NSCLC were evaluated in two cohorts: cohort 1 included patients who received programmed cell death protein‐1 or programmed cell death ligand 1 (PD‐[L]1) inhibitors alone (monotherapy) or in combination with chemotherapy (combined therapy), and cohort 2 included those with EGFR 19del or 21L858R mutations (EGFR‐mutant) who received EGFR‐TKI monotherapy. Patients who underwent PD‐L1 inhibitor or EGFR‐TKI treatment as consolidation therapy after concurrent radiotherapy, or who had no response assessment after treatment, were excluded. Two Memorial Sloan‐Kettering Cancer Center (MSKCC) cohorts totaling 109 NSCLC patients who received immunotherapy alone (PD‐1 inhibitors or CTLA4 inhibitors) were included as validation for cohort 1. 16, 17 The clinical data and whole exome sequencing (WES) data of the MSKCC cohorts were downloaded from cBioPortal (http://www.cbioportal.org/). Clinical characteristics, treatment data, and survival information were collected for analysis. Response was determined based on Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1. Pathology and gene sequencing results were reviewed independently by two oncologists. This study was approved by the Institutional Review Board of the Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (NCC21/032‐2703). We performed all procedures according to the approved guidelines and obtained comprehensive informed consent from each patient.

Targeted tumor NGS: Genetic alterations were detected by the OncoScreen assay, 18 which surveys the following 10 SWI/SNF subunits: ARID1A, SMARCA4, ARID1B, ARID2, PBRM1, SMARCA2, SMARCB1, SMARCD1, BRD7, and BCL11A.
SWI/SNF‐mut was defined as having a mutation in at least one of these subunits; otherwise, the tumor was classified as SWI/SNF‐wt. Tissue blocks containing at least 20% tumor content were selected to obtain genomic DNA. Single‐nucleotide variants were retained from all samples after filtering out germline and intronic mutations. We classified SWI/SNF gene alterations into three groups: (i) SWI/SNF truncating mutations (SWI/SNF‐tm), comprising frameshift insertion–deletions (indels), nonsense mutations, or splice mutations; (ii) SWI/SNF non‐truncating mutations (SWI/SNF‐ntm), comprising missense mutations, in‐frame indels, or fusions; and (iii) SWI/SNF compound mutations (SWI/SNF‐cm), comprising both truncating and non‐truncating alterations. Tumor mutational burden (TMB) was measured as the number of somatic, coding, base‐substitution, and indel mutations per megabase (Mb).
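For clarity, the grouping rule and the TMB definition above can be expressed as short decision procedures. The following Python sketch is illustrative only (not the authors' pipeline); the variant-type labels and function names are hypothetical stand-ins.

```python
# A minimal sketch (not the authors' pipeline) of the SWI/SNF mutation-type
# grouping defined above; the variant-type labels are hypothetical stand-ins.
TRUNCATING = {"frameshift_indel", "nonsense", "splice"}
NON_TRUNCATING = {"missense", "inframe_indel", "fusion"}

def classify_swi_snf(variant_types: set) -> str:
    """Classify one tumor from the mutation types found in its SWI/SNF genes."""
    has_tm = bool(variant_types & TRUNCATING)
    has_ntm = bool(variant_types & NON_TRUNCATING)
    if has_tm and has_ntm:
        return "SWI/SNF-cm"    # compound: both alteration classes present
    if has_tm:
        return "SWI/SNF-tm"    # truncating alterations only
    if has_ntm:
        return "SWI/SNF-ntm"   # non-truncating alterations only
    return "SWI/SNF-wt"        # no qualifying SWI/SNF alteration

def tmb(n_eligible_mutations: int, covered_mb: float) -> float:
    """TMB as defined above: eligible somatic mutations per megabase covered."""
    return n_eligible_mutations / covered_mb

print(classify_swi_snf({"missense", "nonsense"}))  # -> SWI/SNF-cm
print(tmb(72, 10.0))                               # -> 7.2 mut/Mb
```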
Immunohistochemistry: The expression of PD‐L1 was detected with an anti‐PD‐L1 antibody (Dako 22C3) according to the manufacturer's instructions, and the tumor proportion score (TPS) was determined per routine procedure.

Statistical analysis: The overall response rate (ORR), disease control rate (DCR), and progression‐free survival (PFS) were analyzed among patients who underwent PD‐(L)1 inhibitor or EGFR‐TKI treatment. The time from initiation of treatment to progression or death was defined as PFS; patients without an event were censored at the last follow‐up date. We performed statistical analyses with IBM SPSS 26.0, R 4.0.3, and GraphPad Prism 8. Clinical characteristics and response rates of the groups were compared by t‐test, Wilcoxon test, χ2 test, or Fisher's exact test, as appropriate. Kaplan–Meier methodology and log‐rank tests were used to estimate event‐time distributions and their differences, with 95% confidence intervals (CI). HRs for PFS were estimated in Cox proportional hazards models; variables showing a signal of association (p < 0.1) in univariate analysis were included in the multivariate analysis. All tests were two‐sided, and p < 0.05 was predefined as statistically significant.
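As a concrete illustration of the PFS comparisons described above, the sketch below reproduces the Kaplan–Meier / log-rank step in Python with the lifelines package. This is not the authors' code (they used SPSS, R, and GraphPad Prism), and the data frame, its values, and its column names are hypothetical.

```python
# Hypothetical example of the Kaplan-Meier / log-rank PFS comparison;
# not the study's actual code or data.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "pfs_months":  [8.6, 3.0, 20.6, 1.9, 6.3, 11.2],  # hypothetical durations
    "progressed":  [1,   1,   0,    1,   0,   1],     # 1 = event, 0 = censored
    "swi_snf_mut": [1,   1,   1,    0,   0,   0],     # hypothetical group flag
})

mut, wt = df[df.swi_snf_mut == 1], df[df.swi_snf_mut == 0]

# Median PFS in one group via Kaplan-Meier estimation
kmf = KaplanMeierFitter()
kmf.fit(mut.pfs_months, event_observed=mut.progressed, label="SWI/SNF-mut")
print("median PFS (mut):", kmf.median_survival_time_)

# Two-sided log-rank test between the groups
res = logrank_test(mut.pfs_months, wt.pfs_months,
                   event_observed_A=mut.progressed,
                   event_observed_B=wt.progressed)
print(f"log-rank p = {res.p_value:.3f}")
```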
RESULTS
Clinical characteristics: In total, 1162 NSCLC patients tested by NGS were included (Figure 1), of whom 230 (19.8%) had SWI/SNF‐mut (Figure 2(a)). Compared with the 932 (80.2%) SWI/SNF‐wt tumors, SWI/SNF‐mut tumors were more common in patients of older age (p = 0.009), male sex (p = 0.002), and smoking history (p < 0.001), and were associated with positive PD‐L1 expression (p = 0.028), high TMB (7.2 vs. 3.2 mut/Mb, p = 0.009; Figure 2(b)), advanced disease (p < 0.001), bone metastasis (p = 0.018), and liver metastasis (p < 0.001) (Table 1).

[Figure 1: Detailed inclusion and exclusion criteria.]

[Figure 2: (a) Frequency of SWI/SNF genomic alterations among 1162 NSCLC. (b) TMB by SWI/SNF mutation status. Frequency of (c) the full spectrum of SWI/SNF genomic alterations and (d) the co‐mutations in the SWI/SNF complex.]

[Table 1: Clinical characteristics by SWI/SNF mutation status in the overall cohort of 1162 NSCLC. "Other" histologies comprise adenosquamous carcinoma, sarcomatoid carcinoma, carcinoid, large cell carcinoma, high‐grade neuroendocrine carcinoma, and poorly differentiated non–small cell lung carcinoma not otherwise specified (NOS).]
Spectrum of SWI/SNF genomic alterations: Of the 230 patients with SWI/SNF‐mut, 100 (43.5%) had truncating alterations, 113 (49.1%) non‐truncating alterations, and 17 (7.4%) compound alterations (Figure 2(a)). The frequencies of SWI/SNF genomic alterations were ARID1A (33.4%), SMARCA4 (28.3%), ARID2 (15.6%), PBRM1 (14.8%), ARID1B (6.9%), SMARCA2 (3.9%), SMARCB1 (3.0%), BRD7 (2.5%), SMARCD1 (0.9%), and BCL11A (0.4%); 8.3% of patients had multiple alterations in SWI/SNF genes (Figure 2(c)). In our cohort, the genes most commonly co‐mutated with SWI/SNF were TP53 (52.1%), EGFR (27.3%), KRAS (13.4%), and KEAP1 (11.7%) (Figure 2(d)). The DNA and protein changes of SWI/SNF‐mut are listed in Table S1.

Association of SWI/SNF and KRAS co‐mutations with clinical outcome to immunotherapy in advanced NSCLC: In total, 146 patients with advanced NSCLC were included in cohort 1 (Table S2). Of these, 127 (87.0%) received combined therapy, whereas the remaining patients received PD‐(L)1 inhibitor monotherapy. Although SWI/SNF‐mut tumors had a higher TMB (10.3 vs. 5.0 mut/Mb, p = 0.005) than SWI/SNF‐wt tumors, there were no statistically significant differences between them in ORR (40.0% vs. 34.4%, p = 0.493) (Figure 3(a)), DCR (77.6% vs. 75.4%, p = 0.752) (Figure 3(a)), or PFS (6.5 m vs. 5.0 m; HR, 0.77; 95% CI, 0.51–1.18; p = 0.231) (Figure 3(b)). Moreover, neither of the most common SWI/SNF‐mut types, ARID1A‐mut and SMARCA4‐mut, affected the efficacy of PD‐(L)1 inhibitors (Figure S1(a)–(c)). Because published data show that KRAS is frequently co‐mutated with SWI/SNF and is associated with the prognosis of ICIs in NSCLC, 11 we further analyzed clinical characteristics (Table S4) and efficacy of ICIs by KRAS mutation status and found no significant difference in PFS after ICIs between KRAS‐wt and KRAS‐mut patients (Figure S1(d)). However, patients with concurrent SWI/SNF and KRAS mutations (SWI/SNFmutKRASmut, n = 18) had significantly higher TMB (14.0 vs. 7.0 mut/Mb, p = 0.002) (Figure 3(c)) and longer PFS (8.9 m vs. 4.9 m; HR, 0.53; 95% CI, 0.26–1.06; p = 0.071) (Figure 3(d)) on PD‐(L)1 inhibitors than those without the co‐mutation (non‐SWI/SNFmutKRASmut, defined as SWI/SNF‐wt and KRAS‐wt, SWI/SNF‐wt and KRAS‐mut, or SWI/SNF‐mut and KRAS‐wt; n = 128), especially with monotherapy (8.6 m vs. 1.9 m; HR, 0.31; 95% CI, 0.11–0.83; p = 0.032) (Figure 3(e),(f)).

[Figure 3: (a) Response rate and (b) Kaplan–Meier analysis for PFS to PD‐(L)1 inhibitors in patients with SWI/SNF‐mut vs. SWI/SNF‐wt in cohort 1. (c) Response rate and (d) Kaplan–Meier analysis for PFS to PD‐(L)1 inhibitors in patients with SWI/SNFmutKRASmut vs. non‐SWI/SNFmutKRASmut in cohort 1. Kaplan–Meier analysis for PFS to (e) PD‐(L)1 inhibitor monotherapy and (f) combined therapy in patients with SWI/SNFmutKRASmut vs. non‐SWI/SNFmutKRASmut in cohort 1. (g) Response rate and (h) Kaplan–Meier analysis for PFS to ICIs in patients with SWI/SNFmutKRASmut vs. non‐SWI/SNFmutKRASmut in the MSKCC cohorts.]

Because the number of cases treated with ICI monotherapy was limited in this cohort, we included two MSKCC cohorts containing a total of 109 patients with advanced NSCLC who received ICIs alone for validation (Table S3).
Consistently, no significant differences in survival were observed either between patients with SWI/SNF‐mut and SWI/SNF‐wt (Figure S1(e)) or between patients with KRAS‐mut and KRAS‐wt (Figure S1(f)), whereas SWI/SNFmutKRASmut patients exhibited higher TMB (9.8 vs. 5.7 mut/Mb, p = 0.043) (Figure 3(g)) and longer PFS on ICIs than non‐SWI/SNFmutKRASmut patients (NR vs. 6.3 m; HR, 0.36; 95% CI, 0.15–0.82; p = 0.016) (Figure 3(h)).
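The four-way grouping used in this comparison reduces to a single boolean condition: only tumors carrying both a SWI/SNF mutation and a KRAS mutation form the SWI/SNFmutKRASmut group, and the three remaining combinations are pooled as non-SWI/SNFmutKRASmut. A minimal illustration in Python (hypothetical function, not study code):

```python
# Hypothetical sketch of the co-mutation grouping; not the authors' code.
def comutation_group(swi_snf_mut: bool, kras_mut: bool) -> str:
    """Pool all tumors lacking the double mutation into one comparator group."""
    if swi_snf_mut and kras_mut:
        return "SWI/SNFmutKRASmut"
    return "non-SWI/SNFmutKRASmut"  # wt/wt, wt/mut, and mut/wt combinations

for s in (True, False):
    for k in (True, False):
        print(f"SWI/SNF-mut={s}, KRAS-mut={k} -> {comutation_group(s, k)}")
```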
Association of ARID1A-Mut with clinical survival to EGFR-TKIs in EGFR-mutant NSCLC

In cohort 2, 205 patients with advanced EGFR-mutant NSCLC received EGFR-TKIs alone (Table S5); 82.9% (n = 170) were treated with first- or second-generation TKIs (gefitinib, erlotinib, icotinib, afatinib, or dacomitinib), and the remainder with third-generation TKIs (osimertinib or almonertinib). Comparing SWI/SNF-wt patients (n = 164, 80.0%) with SWI/SNF-mut patients (n = 41, 20.0%), there were no significant differences in ORR, DCR, or PFS in the overall group (n = 205) (Figure 4(a),(b)) or in the subgroups treated with first- or second-generation (n = 170) (Figure S2(a),(b)) or third-generation TKIs (n = 35) (Figure S2(c),(d)).

(a) Response rate and (b) Kaplan–Meier analysis for PFS to TKIs in patients with SWI/SNF-wt vs. SWI/SNF-mut. (c) Response rate and (d) Kaplan–Meier analysis for PFS to TKIs in patients with ARID1A-wt vs. ARID1A-mut. (e) Multivariable survival analysis of clinical factors for PFS to TKIs. SWI/SNF, switch/sucrose nonfermentable family; wt, wild-type; mut, mutation; PFS, progression-free survival; TKIs, tyrosine kinase inhibitors; HR, hazard ratio; CI, confidence interval.

In cohort 2, the most frequently altered SWI/SNF gene was ARID1A (n = 16, 7.8%), and baseline clinical characteristics were similar between patients with ARID1A-mut and ARID1A-wt (Table S5). However, patients with ARID1A-mut had a higher DCR (100.0% vs. 92.6%, p = 0.541) (Figure 4(c)) and significantly prolonged PFS on EGFR-TKIs compared with ARID1A-wt patients (20.6 m vs. 11.2 m; HR, 0.47; 95% CI, 0.27-0.94; p = 0.023) (Figure 4(d)); this finding was consistent in both the first- or second-generation (Figure S2(e)) and third-generation EGFR-TKI subgroups (Figure S2(f)). Conversely, none of the other SWI/SNF gene alterations was associated with the efficacy of EGFR-TKIs (Table S6). Importantly, in the multivariable survival analysis adjusted for other variables, ARID1A-mut remained associated with improved survival on EGFR-TKIs (HR, 0.49; 95% CI, 0.25-0.98; p = 0.047) (Figure 4(e)).
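As a side note on how the response rates above are derived: ORR counts complete and partial responses, and DCR additionally counts stable disease, per the RECIST v1.1 assessment described in the methods. A minimal R sketch follows; best_response and arid1a_mut are hypothetical column names.

```r
# Hypothetical per-patient best response coded as "CR", "PR", "SD", or "PD".
orr <- function(resp) mean(resp %in% c("CR", "PR"))         # objective response rate
dcr <- function(resp) mean(resp %in% c("CR", "PR", "SD"))   # disease control rate

# Small-sample comparison of DCR by ARID1A status (Fisher's exact test,
# as used for sparse contingency tables in the statistical analysis).
fisher.test(table(df$arid1a_mut,
                  df$best_response %in% c("CR", "PR", "SD")))
```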
[ "INTRODUCTION", "METHODS", "Study population", "Targeted tumor NGS\n", "Immunohistochemistry", "Statistical analysis", "RESULTS", "Clinical characteristics", "Spectrum of SWI/SNF genomic alterations", "Association of SWI/SNF and KRAS co‐mutations with clinical outcome to immunotherapy in advanced NSCLC\n", "Association of ARID1A‐Mut with clinical survival to EGFR‐TKIs in EGFR‐mutant NSCLC\n", "DISCUSSION", "DISCLOSURE", "Supporting information" ]
[ "The human switch/sucrose nonfermentable (SWI/SNF), which is a chromatin remodeling complex dependent on adenosine triphosphate (ATP), affects DNA replication and repair by regulating genomic architecture.\n1\n SWI/SNF encoded by 29 genes belongs to three broad subfamilies: canonical BAF (cBAF), polybromo‐associated BAF (PBAF), and non‐canonical BAF (ncBAF).\n2\n, \n3\n Genomic abnormalities in SWI/SNF were presented in nearly 20% of non–small cell lung cancer (NSCLC), of which the frequently mutated genes were ARID1A, SMARCA4, ARID2, ARID1B, and PBRM1.\n4\n, \n5\n, \n6\n Previous studies in NSCLC have identified that SWI/SNF mutations (SWI/SNF‐mut) were frequently co‐mutated with Kirsten rat sarcoma oncogene (KRAS), STK11, KEAP1, and mutually exclusive with current sensitive driver mutations including epidermal growth factor receptor (EGFR), ALK, MET, ROS1, and RET.\n7\n, \n8\n\n\nIn NSCLC, ARID1A, and SMARCA4 were the most frequently mutated genes, which occur in 8% to 9% of patients.\n8\n, \n9\n, \n10\n, \n11\n The impact of SWI/SNF gene mutations on the efficacy of immune checkpoint inhibitors (ICIs) in NSCLC has recently drawn considerable attention. A retrospective study among 292 NSCLC patients treated with ICIs found that patients with SMARCA4 alterations had improved overall survival (OS) (hazard ratio [HR], 0.67; p = 0.01).\n10\n However, another retrospective study showed that ICI‐treated patients with homozygous truncating SMARCA4‐mut presented significantly shortened OS (HR, 1.62; p = 0.01), but no statistical difference was observed between patients with non‐homozygous truncating SMARCA4‐mut and SMARCA4 wild‐type (wt).\n8\n Therefore, most of the previous studies on the relationship between SWI/SNF and ICIs in NSCLC included patients from non‐Asian populations and conclusions were inconsistent.\nEGFR tyrosine kinase inhibitors (EGFR‐TKIs), as the typical targeted inhibitors, have greatly propelled the evolution of precision treatment and prolonged survival for advanced NSCLC patients with EGFR mutations.\n12\n, \n13\n, \n14\n EGFR exon 19 deletions (19del) and exon 21 Leu858Arg (21L858R) mutation represent the prevalent sensitive mutations to EGFR‐TKIs.\n15\n However, there is almost no evidence on the association of SWI/SNF‐mut with the clinical outcome to EGFR‐TKIs to date. Here, we characterized the clinical characteristics of SWI/SNF‐mut and evaluated the relationship between SWI/SNF‐mut and clinical outcomes of ICIs and EGFR‐TKIs in Chinese patients with NSCLC.", "Study population From January 2019 to October 2021, we collected data of patients detected by next‐generation sequencing (NGS) in the Cancer Hospital Chinese Academy of Medical Sciences for analysis. Patients with advanced NSCLC were evaluated in two cohorts: cohort 1 included patients who received programmed cell death protein‐1 or programmed cell death ligand 1 (PD[L]‐1) inhibitors alone (monotherapy) or in combination with chemotherapy (combined therapy), and cohort 2 included those with EGFR 19del or 21L858R mutation (EGFR‐mutant) who received EGFR‐TKIs monotherapy. Patients underwent PD‐L1 inhibitors or EGFR‐TKIs as consolidation therapy after concurrent radiotherapy or without assessment after treatment were excluded. 
Two Memorial Sloan-Kettering Cancer Center (MSKCC) cohorts totaling 109 patients with NSCLC who received immunotherapy alone (PD-1 or CTLA-4 inhibitors) were included as validation for cohort 1. 16, 17 The clinical data and whole exome sequencing (WES) data of the MSKCC cohorts were downloaded from cBioPortal (http://www.cbioportal.org/). Clinical characteristics, treatment data, and survival information were collected for analysis. Response was determined according to Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1. Pathology and gene sequencing results were reviewed independently by two oncologists. This study was approved by the Institutional Review Board of the Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (NCC21/032-2703). We performed all procedures according to approved guidelines and obtained comprehensive informed consent from each patient.

Targeted tumor NGS

Genetic alterations were detected by the OncoScreen assay, 18 which surveys the following 10 SWI/SNF subunits: ARID1A, SMARCA4, ARID1B, ARID2, PBRM1, SMARCA2, SMARCB1, SMARCD1, BRD7, and BCL11A. SWI/SNF-mut was defined as a mutation in at least one of these subunits; otherwise, the tumor was classified as SWI/SNF-wt. Tissue blocks containing at least 20% tumor content were selected for genomic DNA extraction. Single-nucleotide variants were selected from all samples after excluding germline and intronic mutations. We classified SWI/SNF gene alterations into three groups: (i) SWI/SNF truncating mutations (SWI/SNF-tm), comprising frameshift insertion-deletions (indels), nonsense mutations, or splice-site mutations; (ii) SWI/SNF non-truncating mutations (SWI/SNF-ntm), comprising missense mutations, in-frame indels, or fusions; and (iii) SWI/SNF compound mutations (SWI/SNF-cm), comprising both truncating and non-truncating alterations.
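To make the three-way grouping concrete, here is a minimal R sketch of the classification rule; the variant-type labels are illustrative strings, not the assay's actual output vocabulary.

```r
# Variant-type vocabularies per the rule above (labels are illustrative).
truncating     <- c("frameshift_indel", "nonsense", "splice_site")
non_truncating <- c("missense", "inframe_indel", "fusion")

# Classify one patient's set of SWI/SNF variant types.
classify_swisnf <- function(types) {
  has_tm  <- any(types %in% truncating)
  has_ntm <- any(types %in% non_truncating)
  if (has_tm && has_ntm) "SWI/SNF-cm"    # compound: both kinds present
  else if (has_tm)       "SWI/SNF-tm"    # truncating only
  else if (has_ntm)      "SWI/SNF-ntm"   # non-truncating only
  else                   "SWI/SNF-wt"
}

classify_swisnf(c("missense", "nonsense"))   # returns "SWI/SNF-cm"
```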
Tumor mutational burden (TMB) was measured as the number of somatic coding base-substitution and indel mutations per megabase (Mb).

Immunohistochemistry

PD-L1 expression was detected with an anti-PD-L1 antibody (Dako 22C3) according to the manufacturer's instructions, and the tumor proportion score (TPS) was determined per routine procedure.

Statistical analysis

The overall response rate (ORR), disease control rate (DCR), and progression-free survival (PFS) were analyzed among patients who received PD-(L)1 inhibitors or EGFR-TKIs. PFS was defined as the time from treatment initiation to progression or death, with patients otherwise censored at the last follow-up date. Statistical analyses were performed with IBM SPSS 26.0, R 4.0.3, and GraphPad Prism 8. Clinical characteristics and response rates were compared between groups using the t-test, Wilcoxon test, χ2 test, or Fisher's exact test, as appropriate. Kaplan–Meier methodology and log-rank tests were used to estimate event-time distributions and their differences, with 95% confidence intervals (CI). HRs for PFS were estimated with Cox proportional hazards models; variables showing a signal of association (p < 0.1) in univariate analysis were included in the multivariate model. All tests were two-sided, and p < 0.05 was predefined as statistically significant.
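The univariate screen feeding the multivariate Cox model can be written compactly. A minimal sketch under the same hypothetical data-frame assumptions as above; the candidate covariate names are placeholders, and binary covariates are assumed so that each univariate fit yields a single coefficient.

```r
library(survival)

candidates <- c("age_group", "sex", "smoking", "stage", "arid1a_mut")  # placeholders

# Univariate Cox p value for each candidate covariate.
uni_p <- sapply(candidates, function(v) {
  f   <- as.formula(paste("Surv(pfs_months, progressed) ~", v))
  fit <- coxph(f, data = df)
  summary(fit)$coefficients[1, "Pr(>|z|)"]   # first row: assumes a binary covariate
})

# Carry covariates with p < 0.1 into the multivariate model.
keep  <- names(uni_p)[uni_p < 0.1]
multi <- coxph(as.formula(paste("Surv(pfs_months, progressed) ~",
                                paste(keep, collapse = " + "))), data = df)
summary(multi)   # adjusted HRs with 95% CIs, as in Figure 4(e)
```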
RESULTS

Clinical characteristics

In total, 1162 patients with NSCLC profiled by NGS were included (Figure 1), of whom 230 (19.8%) had SWI/SNF-mut (Figure 2(a)). Compared with the 932 (80.2%) SWI/SNF-wt tumors, SWI/SNF-mut was more common in patients of older age (p = 0.009) and male sex (p = 0.002), in smokers (p < 0.001), and in tumors with positive PD-L1 expression (p = 0.028), high TMB (7.2 vs. 3.2 mut/Mb, p = 0.009, Figure 2(b)), advanced disease (p < 0.001), bone metastasis (p = 0.018), and liver metastasis (p < 0.001) (Table 1).

Detailed inclusion and exclusion criteria.

(a) Frequency of SWI/SNF genomic alterations among 1162 NSCLC. (b) TMB by SWI/SNF mutation status. Frequency of (c) the full spectrum of SWI/SNF genomic alterations and (d) the co-mutations in the SWI/SNF complex. SWI/SNF, switch/sucrose nonfermentable family; NSCLC, non–small cell lung cancer; TMB, tumor mutational burden; wt, wild-type; mut, mutation; Mb, megabase.

Clinical characteristics by SWI/SNF mutation status in the overall cohort of 1162 NSCLC. Abbreviations: Mut, mutation; mut/Mb, mutations per megabase; N.A., not available; PD-L1, programmed cell death-ligand 1; SWI/SNF, switch/sucrose nonfermentable; TMB, tumor mutational burden; wt, wild-type. Adenosquamous carcinomas, sarcomatoid carcinoma, carcinoid, large cell carcinoma, high-grade neuroendocrine carcinoma, and poorly differentiated non-small cell lung carcinomas not otherwise specified (NOS).
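The baseline comparisons in Table 1 are simple two-group tests. A minimal R sketch under the same hypothetical data frame as above (tmb, swisnf_mut, and sex are placeholder columns):

```r
# TMB is right-skewed, hence a rank-based test (7.2 vs. 3.2 mut/Mb comparison).
wilcox.test(tmb ~ swisnf_mut, data = df)

# Categorical baseline characteristics, e.g., sex by SWI/SNF mutation status.
chisq.test(table(df$swisnf_mut, df$sex))
```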
DISCUSSION

In our institutional study of 1162 Chinese patients with NSCLC, we found a mutation rate of approximately 20% in SWI/SNF genes. Consistent with previous studies, 9, 11, 19, 20 SWI/SNF-mut correlated with older age, smoking, male sex, higher TMB, and a higher proportion of PD-L1 positivity. Moreover, SWI/SNF-mut was more often observed in tumors with advanced disease and with bone or liver metastasis. In advanced NSCLC, neither SWI/SNF-mut overall nor its common individual gene alterations affected the clinical outcome of ICIs. However, patients with SWI/SNFmutKRASmut had improved clinical outcomes on ICIs compared with those without the co-mutation. Furthermore, our study indicated that ARID1A-mut was associated with prolonged survival on EGFR-TKIs in patients with EGFR-mutant disease.

Previous studies reported that SWI/SNF genes are frequently co-mutated with KRAS and that patients with SMARCA4 and KRAS co-mutation had poorer survival after ICIs. 11, 21 In contrast, we found that patients with SWI/SNFmutKRASmut had higher ORR and DCR and longer PFS on ICIs, especially those treated with ICI monotherapy. Similarly, multiple studies have shown that KRAS-mut influences the clinical outcomes of ICIs in NSCLC. 22, 23, 24, 25, 26 A meta-analysis of six randomized clinical trials of ICI monotherapy or combined therapy as first- or second-line treatment for advanced NSCLC (IMpower-150, Keynote-189, Keynote-042, OAK, POPLAR, and CheckMate-057) demonstrated that ICIs prolonged PFS and OS in KRAS-mut patients. 26 KRAS-mut correlates with PD-L1 positivity (p = 0.031) 27 and has been confirmed as a potential driver of increased neoantigen production. 23 In our study, patients harboring SWI/SNFmutKRASmut had significantly higher TMB than those without the co-mutation.
Overall, the superior clinical outcomes observed in SWI/SNFmutKRASmut patients may reflect increased neoantigen production and higher PD-L1 expression, both of which correlate with improved response to ICIs.

The EGFR mutation rate was higher in our cohort (27.3%) than in previous studies (11%-14%), 8, 11 likely because of the higher prevalence of EGFR mutations in Asian NSCLC (50%-60%) compared with Caucasian NSCLC (10%-20%). 12, 13, 28 In advanced EGFR-mutant NSCLC, we observed significantly longer PFS on EGFR-TKIs in patients with ARID1A-mut than in those with ARID1A-wt. As a cBAF-specific subunit, the ARID1A protein is a critical component that modifies nucleosome positioning on DNA and is associated with proliferation, migration, invasion, and metastasis in various cancers. 29, 30, 31, 32, 33, 34 Recent studies have shown that enhanced glutathione metabolism contributes to EGFR-TKI resistance. 34 Inhibiting glutathione de novo synthesis by targeting AKR1B1 or METTL7B can overcome acquired resistance to both first- and third-generation EGFR-TKIs in NSCLC. 35, 36 ARID1A-inactivating mutations are associated with reduced glutathione metabolism in ARID1A-deficient cancers, 37 which may be one factor affecting sensitivity to EGFR-TKIs. On the contrary, a retrospective study of 19 EGFR-mutant NSCLC patients showed that icotinib treatment yielded significantly shorter PFS (p = 0.001) in patients with ARID1A-mut (n = 3). 38 That sample size was clearly small, however, and all patients received first-generation EGFR-TKIs, which differs from our cohort. For the remaining SWI/SNF gene alterations, the efficacy of EGFR-TKIs showed marked heterogeneity, and further studies with larger sample sizes are needed for validation.

Several limitations of our study should be considered. First, this was a retrospective single-institution analysis with inherent biases. Second, our assay covered the most commonly mutated SWI/SNF genes but not all SWI/SNF genes. Third, the results are descriptive, and further research is required to determine the mechanisms by which SWI/SNF gene mutations affect response to ICIs and EGFR-TKIs.

In conclusion, our study represents a comprehensive cohort of SWI/SNF-mut NSCLC in China and provides new knowledge on the contribution of SWI/SNF-mut to response to ICIs and EGFR-TKIs in advanced NSCLC. Although no association was observed between SWI/SNF-mut alone and the clinical outcomes of ICIs or TKIs, co-occurrence of SWI/SNF-mut and KRAS-mut appeared to improve clinical survival for patients who received ICI treatment, especially ICI monotherapy. Furthermore, we demonstrated that ARID1A-mut appears to prolong clinical survival on EGFR-TKIs in EGFR-mutant advanced NSCLC. These findings could improve outcomes in NSCLC by supporting more patient-specific treatment informed by SWI/SNF genetic alterations. Additional prospective studies with sequencing of all SWI/SNF genes and larger sample sizes are needed, and unveiling the mechanisms by which SWI/SNF affects clinical outcomes remains an important issue.

DISCLOSURE

The authors declare that there are no conflicts of interest.

Supporting information

Figure S1. Kaplan–Meier analysis for PFS to PD-(L)1 inhibitors in (a) SWI/SNF-wt vs. SWI/SNF-tm vs. SWI/SNF-ntm vs.
SWI/SNF‐cm patients, (b) ARID1A‐wt vs. ARID1A‐mut patients (c) SMARCA4‐wt vs. SMARCA4‐mut patients and (d) KRAS‐wt vs. KRAS‐mut patients in cohort 1. Kaplan–Meier analysis for PFS to ICIs in (e) SWI/SNF‐wt vs. SWI/SNF‐mut patients and (f) KRAS‐wt vs. KRAS‐mut patients in MSKCC cohorts. SWI/SNF, switch/sucrose nonfermentable family; wt, wild‐type; mut, mutation; PFS, median progression‐free survival; HR, hazard ratio; ICIs, immune checkpoint inhibitors.\nClick here for additional data file.\n\nFigure S2 (a) Response rate and (b) Kaplan–Meier analysis for PFS to first and second generation TKIs in patients with SWI/SNF‐wt vs. SWI/SNF‐mut. (c) Response rate and (d) Kaplan–Meier analysis for PFS to third generation TKIs in patients withSWI/SNF‐wt vs. SWI/SNFmut. Kaplan–Meier analysis for PFS to (e) first, second generation and (f) third generation TKIs in patients with ARID1A‐wt vs. ARID1A‐mut. SWI/SNF, switch/sucrose nonfermentable family; wt, wild‐type; mut, mutation; TKIs, tyrosine kinase inhibitors; PFS, progression‐free survival; HR, hazard ratio.\nClick here for additional data file.\n\nTable S1 The list of SWI/SNF mutations in non–small cell lung carcinomas.\nClick here for additional data file.\n\nTable S2. Clinical characteristics by SWI/SNF and KRAS mutation status in cohort 1.\nClick here for additional data file." ]
[ null, "methods", null, null, null, null, "results", null, null, null, null, "discussion", null, "supplementary-material" ]
[ "epidermal growth factor receptor", "immune checkpoint inhibitors", "non–small cell lung cancer", "SWI/SNF", "tyrosine kinase inhibitors" ]
INTRODUCTION: The human switch/sucrose nonfermentable (SWI/SNF) complex, an adenosine triphosphate (ATP)-dependent chromatin remodeling complex, affects DNA replication and repair by regulating genomic architecture. 1 SWI/SNF, encoded by 29 genes, comprises three broad subfamilies: canonical BAF (cBAF), polybromo‐associated BAF (PBAF), and non‐canonical BAF (ncBAF). 2, 3 Genomic abnormalities in SWI/SNF are present in nearly 20% of non–small cell lung cancers (NSCLC), in which the frequently mutated genes are ARID1A, SMARCA4, ARID2, ARID1B, and PBRM1. 4, 5, 6 Previous studies in NSCLC have identified that SWI/SNF mutations (SWI/SNF‐mut) frequently co‐occur with mutations in Kirsten rat sarcoma oncogene (KRAS), STK11, and KEAP1, and are mutually exclusive with current sensitive driver mutations including epidermal growth factor receptor (EGFR), ALK, MET, ROS1, and RET. 7, 8 In NSCLC, ARID1A and SMARCA4 were the most frequently mutated genes, occurring in 8% to 9% of patients. 8, 9, 10, 11 The impact of SWI/SNF gene mutations on the efficacy of immune checkpoint inhibitors (ICIs) in NSCLC has recently drawn considerable attention. A retrospective study among 292 NSCLC patients treated with ICIs found that patients with SMARCA4 alterations had improved overall survival (OS) (hazard ratio [HR], 0.67; p = 0.01). 10 However, another retrospective study showed that ICI‐treated patients with homozygous truncating SMARCA4‐mut presented significantly shortened OS (HR, 1.62; p = 0.01), whereas no statistical difference was observed between patients with non‐homozygous truncating SMARCA4‐mut and SMARCA4 wild‐type (wt). 8 Overall, most previous studies on the relationship between SWI/SNF and ICIs in NSCLC included patients from non‐Asian populations, and their conclusions were inconsistent. EGFR tyrosine kinase inhibitors (EGFR‐TKIs), as typical targeted agents, have greatly propelled the evolution of precision treatment and prolonged survival for advanced NSCLC patients with EGFR mutations. 12, 13, 14 EGFR exon 19 deletions (19del) and the exon 21 Leu858Arg (21L858R) mutation represent the prevalent mutations sensitive to EGFR‐TKIs. 15 However, to date there is almost no evidence on the association of SWI/SNF‐mut with clinical outcomes of EGFR‐TKIs. Here, we characterized the clinical features of SWI/SNF‐mut and evaluated the relationship between SWI/SNF‐mut and clinical outcomes of ICIs and EGFR‐TKIs in Chinese patients with NSCLC.
METHODS: Study population: From January 2019 to October 2021, we collected data of patients profiled by next‐generation sequencing (NGS) at the Cancer Hospital, Chinese Academy of Medical Sciences for analysis. Patients with advanced NSCLC were evaluated in two cohorts: cohort 1 included patients who received programmed cell death protein‐1 or programmed cell death ligand 1 (PD‐(L)1) inhibitors alone (monotherapy) or in combination with chemotherapy (combined therapy), and cohort 2 included those with EGFR 19del or 21L858R mutation (EGFR‐mutant) who received EGFR‐TKI monotherapy. Patients who received PD‐L1 inhibitors or EGFR‐TKIs as consolidation therapy after concurrent radiotherapy, or who had no response assessment after treatment, were excluded. Two Memorial Sloan‐Kettering Cancer Center (MSKCC) cohorts totaling 109 NSCLC patients who received immunotherapy alone (PD‐1 or CTLA‐4 inhibitors) were included as validation for cohort 1. 16, 17 The clinical data and whole exome sequencing (WES) data of the MSKCC cohorts were downloaded from cBioPortal (http://www.cbioportal.org/).
Clinical characteristics, treatment data, and survival information were collected for analysis. Response was determined based on the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1. Pathology and gene sequencing results were reviewed independently by two oncologists. This study was approved by the Institutional Review Board of the Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (NCC21/032‐2703). We performed all methods according to approved guidelines and obtained comprehensive informed consent from each patient.
Targeted tumor NGS: Genetic alterations were detected by the OncoScreen assay, 18 which surveys the following 10 SWI/SNF subunits: ARID1A, SMARCA4, ARID1B, ARID2, PBRM1, SMARCA2, SMARCB1, SMARCD1, BRD7, and BCL11A. SWI/SNF‐mut was defined as a mutation in at least one of these subunits; otherwise, tumors were classified as SWI/SNF‐wt. Tissue blocks containing at least 20% tumor content were selected to obtain genomic DNA. Single‐nucleotide variants were selected from all samples, excluding germline and intron mutations. We classified SWI/SNF gene alterations into three groups: (i) SWI/SNF truncating mutations (SWI/SNF‐tm), comprising frameshift insertion–deletions (indels), nonsense mutations, or splice‐site mutations; (ii) SWI/SNF non‐truncating mutations (SWI/SNF‐ntm), comprising missense mutations, in‐frame indels, or fusions; and (iii) SWI/SNF compound mutations (SWI/SNF‐cm), comprising both truncating and non‐truncating alterations. Tumor mutational burden (TMB) was measured as the number of somatic, coding, base‐substitution, and indel mutations per megabase (Mb).
Immunohistochemistry: The expression of PD‐L1 was detected with an anti‐PD‐L1 antibody (Dako 22C3) according to the manufacturer's instructions, and the tumor proportion score (TPS) was determined per routine procedure.
Statistical analysis: The overall response rate (ORR), disease control rate (DCR), and progression‐free survival (PFS) were analyzed among patients who received PD‐L1 inhibitors or EGFR‐TKIs. PFS was defined as the time from initiation of treatment to progression or death; otherwise, it was censored at the last follow‐up date. We performed statistical analyses with IBM SPSS 26.0, R 4.0.3, and GraphPad Prism 8. Clinical characteristics and response rates were compared between groups by t‐test, Wilcoxon test, χ2 test, or Fisher's exact test, as appropriate. Kaplan–Meier methodology and log‐rank tests were used to estimate the distributions of, and differences in, event times with 95% confidence intervals (CI). In univariate and multivariate models, the HR for PFS was estimated with Cox proportional hazards models, and variables showing a signal of association at p < 0.1 were included in the multivariate analysis. All analyses were two‐sided, and p < 0.05 was predefined as statistically significant.
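The survival workflow just described (Kaplan–Meier estimation, log-rank testing, and Cox proportional hazards modelling with a p < 0.1 screening step for the multivariate model) maps directly onto standard survival libraries. The following is a minimal sketch in Python with the lifelines package, not the authors' actual SPSS/R/Prism code; the file name and the column names (pfs_months, progressed, swi_snf_mut, age, smoker) are hypothetical placeholders.

```python
# Minimal sketch of the survival analysis described above, using the
# Python "lifelines" package. The authors used SPSS, R and GraphPad
# Prism; file and column names here are hypothetical placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cohort1.csv")  # hypothetical: one row per patient
mut = df[df["swi_snf_mut"] == 1]
wt = df[df["swi_snf_mut"] == 0]

# Kaplan-Meier estimate of PFS in the mutated group
km = KaplanMeierFitter()
km.fit(mut["pfs_months"], event_observed=mut["progressed"], label="SWI/SNF-mut")
print("median PFS:", km.median_survival_time_)

# Log-rank test for the difference between the two PFS distributions
lr = logrank_test(mut["pfs_months"], wt["pfs_months"],
                  event_observed_A=mut["progressed"],
                  event_observed_B=wt["progressed"])
print("log-rank p:", lr.p_value)

# Univariate Cox screening: covariates with p < 0.1 enter the final model
candidates = ["swi_snf_mut", "age", "smoker"]
selected = []
for var in candidates:
    cph = CoxPHFitter()
    cph.fit(df[["pfs_months", "progressed", var]],
            duration_col="pfs_months", event_col="progressed")
    if cph.summary.loc[var, "p"] < 0.1:
        selected.append(var)

# Multivariate Cox model (assumes at least one covariate passed the
# screen); exp(coef) gives the adjusted hazard ratios
cph = CoxPHFitter()
cph.fit(df[["pfs_months", "progressed"] + selected],
        duration_col="pfs_months", event_col="progressed")
cph.print_summary()
```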
RESULTS: Clinical characteristics: In total, 1162 NSCLC patients detected by NGS were included (Figure 1), and 230 (19.8%) patients with SWI/SNF‐mut were identified (Figure 2(a)). Compared with the 932 (80.2%) SWI/SNF‐wt tumors, SWI/SNF‐mut tumors were more common in older patients (p = 0.009), males (p = 0.002), smokers (p < 0.001), and patients with positive PD‐L1 expression (p = 0.028), high TMB (7.2 vs. 3.2 mut/mb, p = 0.009, Figure 2(b)), advanced disease (p < 0.001), bone metastasis (p = 0.018), and liver metastasis (p < 0.001) (Table 1). Figure 1: Detailed inclusion and exclusion criteria. Figure 2: (a) Frequency of SWI/SNF genomic alterations among 1162 NSCLC. (b) TMB by SWI/SNF mutation status. Frequency of (c) the full spectrum of SWI/SNF genomic alterations and (d) the co‐mutations in the SWI/SNF complex. SWI/SNF, switch/sucrose nonfermentable family; NSCLC, non–small cell lung cancer; TMB, tumor mutational burden; wt, wild‐type; mut, mutation; Mb, megabase. Table 1: Clinical characteristics by SWI/SNF mutation status in the overall cohort of 1162 NSCLC. Abbreviations: Mut, mutation; mut/mb, mutations per megabase; N.A., not available; PD‐L1, programmed cell death‐ligand 1; SWI/SNF, switch/sucrose nonfermentable; TMB, tumor mutational burden; wt, wild‐type. Adenosquamous carcinomas, sarcomatoid carcinoma, carcinoid, large cell carcinoma, high‐grade neuroendocrine carcinoma, and poorly differentiated non–small cell lung carcinomas not otherwise specified (NOS).
Spectrum of SWI/SNF genomic alterations: Of the 230 patients with SWI/SNF‐mut, there were 100 (43.5%) truncating alterations, 113 (49.1%) non‐truncating alterations, and 17 (7.4%) compound alterations (Figure 2(a)). The frequencies of SWI/SNF genomic alterations were ARID1A (33.4%), SMARCA4 (28.3%), ARID2 (15.6%), PBRM1 (14.8%), ARID1B (6.9%), SMARCA2 (3.9%), SMARCB1 (3.0%), BRD7 (2.5%), SMARCD1 (0.9%), and BCL11A (0.4%); 8.3% of patients had multiple alterations in SWI/SNF genes (Figure 2(c)). In our cohort, the genes most commonly co‐mutated with SWI/SNF were TP53 (52.1%), EGFR (27.3%), KRAS (13.4%), and KEAP1 (11.7%) (Figure 2(d)). The DNA and protein changes of SWI/SNF‐mut are listed in Table S1.
Association of SWI/SNF and KRAS co‐mutations with clinical outcome to immunotherapy in advanced NSCLC: In total, 146 patients with advanced NSCLC were included in cohort 1 (Table S2). Of these, 127 (87.0%) patients received combined therapy, whereas the remaining patients received PD‐(L)1 inhibitor monotherapy. Although SWI/SNF‐mut tumors had a higher TMB than SWI/SNF‐wt tumors (10.3 vs. 5.0 mut/mb, p = 0.005), there were no statistically significant differences between them in ORR (40.0% vs. 34.4%, p = 0.493) (Figure 3(a)), DCR (77.6% vs. 75.4%, p = 0.752) (Figure 3(a)), or PFS (6.5 m vs. 5.0 m; HR, 0.77; 95% CI, 0.51–1.18; p = 0.231) (Figure 3(b)). Moreover, neither the different SWI/SNF‐mut types nor ARID1A‐mut or SMARCA4‐mut impacted the efficacy of PD‐(L)1 inhibitors (Figure S1(a)–(c)). Because published data showed that KRAS is frequently co‐mutated with SWI/SNF and is associated with the prognosis of ICIs in NSCLC, 11 we further analyzed clinical characteristics (Table S4) and efficacy of ICIs by KRAS mutation status and found no significant difference in PFS after ICIs between KRAS‐wt and KRAS‐mut patients (Figure S1(d)). However, patients with concurrent SWI/SNF and KRAS mutations (SWI/SNFmutKRASmut, n = 18) had significantly higher TMB (14.0 vs. 7.0 mut/mb, p = 0.002) (Figure 3(c)) and longer PFS to PD‐L1 inhibitors (8.9 m vs. 4.9 m; HR, 0.53; 95% CI, 0.26–1.06; p = 0.071) (Figure 3(d)) than those with non‐SWI/SNFmutKRASmut (defined as SWI/SNF wild‐type and KRAS wild‐type, SWI/SNF wild‐type and KRAS mutation, or SWI/SNF mutation and KRAS wild‐type; n = 128), especially with monotherapy (8.6 m vs. 1.9 m; HR, 0.31; 95% CI, 0.11–0.83; p = 0.032) (Figure 3(e),(f)). Figure 3: (a) Response rate and (b) Kaplan–Meier analysis for PFS to PD‐(L)1 inhibitors in patients with SWI/SNF‐mut vs. SWI/SNF‐wt in cohort 1. (c) Response rate and (d) Kaplan–Meier analysis for PFS to PD‐(L)1 inhibitors in patients with SWI/SNFmutKRASmut vs. non‐SWI/SNFmutKRASmut in cohort 1. Kaplan–Meier analysis for PFS to (e) PD‐(L)1 inhibitor monotherapy and (f) combined therapy in patients with SWI/SNFmutKRASmut vs. non‐SWI/SNFmutKRASmut in cohort 1. (g) Response rate and (h) Kaplan–Meier analysis for PFS to ICIs in patients with SWI/SNFmutKRASmut vs. non‐SWI/SNFmutKRASmut in MSKCC cohorts. SWI/SNF, switch/sucrose nonfermentable family; wt, wild‐type; mut, mutation; Mb, megabase; PFS, progression‐free survival; PD‐(L)1, programmed cell death ligand 1; HR, hazard ratio; ICIs, immune checkpoint inhibitors. Because the number of cases treated with ICI monotherapy was limited in this cohort, we included two MSKCC cohorts containing a total of 109 patients with advanced NSCLC who received ICIs alone for validation (Table S3). Consistently, no significant differences in survival were observed between patients with SWI/SNF‐mut and SWI/SNF‐wt (Figure S1(e)) or between patients with KRAS‐mut and KRAS‐wt (Figure S1(f)), whereas SWI/SNFmutKRASmut patients exhibited higher TMB (9.8 vs. 5.7 mut/mb, p = 0.043) (Figure 3(g)) and longer PFS to ICIs than non‐SWI/SNFmutKRASmut patients (NR vs. 6.3 m; HR, 0.36; 95% CI, 0.15–0.82; p = 0.016) (Figure 3(h)).
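Categorical endpoints such as ORR and DCR in these comparisons reduce to 2x2 contingency tests (χ2 or Fisher's exact test, per the methods). A minimal SciPy sketch follows; the responder counts are hypothetical, chosen only to illustrate the calls, since the per-group counts are not given in the text.

```python
# Minimal sketch of a 2x2 response-rate comparison between two groups,
# as in the chi-square / Fisher's exact tests named in the methods.
# The counts below are hypothetical illustration values.
from scipy.stats import chi2_contingency, fisher_exact

#        responders  non-responders
table = [[18, 27],   # hypothetical SWI/SNF-mut group
         [35, 66]]   # hypothetical SWI/SNF-wt group

chi2, p_chi2, dof, expected = chi2_contingency(table)

# Fisher's exact test is preferred when expected cell counts are small
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
```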
Association of ARID1A‐mut with clinical survival to EGFR‐TKIs in EGFR‐mutant NSCLC: There were 205 patients with advanced EGFR‐mutant NSCLC who received EGFR‐TKIs alone in cohort 2 (Table S5), of whom 82.9% (n = 170) were treated with first‐ or second‐generation TKIs (gefitinib, erlotinib, icotinib, afatinib, or dacomitinib) and the remainder with third‐generation TKIs (osimertinib or almonertinib). Comparing SWI/SNF‐wt patients (n = 164, 80.0%) with SWI/SNF‐mut patients (n = 41, 20.0%), there were no significant differences in ORR, DCR, or PFS in the overall group (n = 205) (Figure 4(a),(b)) or in the subgroups treated with first‐/second‐generation (n = 170) (Figure S2(a),(b)) or third‐generation TKIs (n = 35) (Figure S2(c),(d)). Figure 4: (a) Response rate and (b) Kaplan–Meier analysis for PFS to TKIs in patients with SWI/SNF‐wt vs. SWI/SNF‐mut. (c) Response rate and (d) Kaplan–Meier analysis for PFS to TKIs in patients with ARID1A‐wt vs. ARID1A‐mut. (e) Multivariable survival analysis of clinical factors for PFS to TKIs. SWI/SNF, switch/sucrose nonfermentable family; wt, wild‐type; mut, mutation; PFS, progression‐free survival; TKIs, tyrosine kinase inhibitors; HR, hazard ratio; CI, confidence interval. In cohort 2, the most frequent SWI/SNF genomic alteration was ARID1A‐mut (n = 16, 7.8%), and the baseline clinical characteristics were similar between patients with ARID1A‐mut and ARID1A‐wt (Table S5). However, patients with ARID1A‐mut had a higher DCR (100.0% vs. 92.6%, p = 0.541) (Figure 4(c)) and significantly longer PFS to EGFR‐TKIs than those with ARID1A‐wt (20.6 m vs. 11.2 m; HR, 0.47; 95% CI, 0.27–0.94; p = 0.023) (Figure 4(d)), which was consistent in both the first‐/second‐generation (Figure S2(e)) and third‐generation EGFR‐TKI subgroups (Figure S2(f)). Conversely, none of the other SWI/SNF gene alterations were associated with the efficacy of EGFR‐TKIs (Table S6). Importantly, in the multivariable survival analysis adjusted for other variables, ARID1A‐mut remained associated with improved survival to EGFR‐TKIs (HR, 0.49; 95% CI, 0.25–0.98; p = 0.047) (Figure 4(e)).
DISCUSSION: In our institutional study of 1162 Chinese patients with NSCLC, we report a mutational incidence of ~20% in SWI/SNF genes. Consistent with previous studies, 9, 11, 19, 20 SWI/SNF‐mut was correlated with older age, smoking, male sex, higher TMB, and a higher proportion of PD‐L1 positivity. Moreover, SWI/SNF‐mut was more often observed in tumors with advanced disease, bone metastasis, and liver metastasis. In advanced NSCLC, neither SWI/SNF‐mut nor its common individual gene alterations affected the clinical outcome of ICIs. However, patients with SWI/SNFmutKRASmut had improved clinical outcomes to ICIs compared to those with non‐SWI/SNFmutKRASmut. Furthermore, our study indicated that ARID1A‐mut was associated with prolonged survival to EGFR‐TKIs in EGFR‐mutant patients. Previous studies reported that SWI/SNF genes are frequently co‐mutated with KRAS and that patients with SMARCA4 and KRAS co‐mutation had poorer survival after ICIs. 11, 21 However, we found that patients with SWI/SNFmutKRASmut presented higher ORR and DCR and longer PFS to ICIs, especially those treated with ICI monotherapy. Similarly, multiple studies have shown that KRAS‐mut impacts the clinical outcomes of ICIs in NSCLC. 22, 23, 24, 25, 26 A meta‐analysis of six randomized clinical trials of ICI monotherapy or combined therapy as second‐ or first‐line treatment for advanced NSCLC (IMpower‐150, Keynote‐189, Keynote‐042, Oak, Poplar, and CheckMate‐057) demonstrated that ICIs prolonged PFS and OS in KRAS‐mut patients. 26 KRAS‐mut was correlated with PD‐L1 positivity (p = 0.031) 27 and has been confirmed to be a potential driver of increased neoantigen production. 23 In our study, patients harboring SWI/SNFmutKRASmut had significantly higher TMB compared to those without SWI/SNFmutKRASmut.
Overall, it is possible that the superior clinical outcome observed in SWI/SNFmutKRASmut patients reflects the increased production of neoantigens and higher PD‐L1 expression, both of which are highly correlated with improved response to ICIs. EGFR mutation rates were higher in our cohort (27.3%) than in previous studies (11%–14%), 8, 11 which is likely because of the higher prevalence of EGFR mutations in Asian NSCLC (50%–60%) compared to Caucasian NSCLC (10%–20%). 12, 13, 28 In advanced EGFR‐mut NSCLC, we observed significantly longer PFS after EGFR‐TKIs in patients with ARID1A‐mut compared to those with ARID1A‐wt. As a cBAF‐specific subunit, the ARID1A protein has been confirmed to be one of the critical components that modify nucleosome positioning on DNA and is associated with proliferation, migration, invasion, and metastasis of various cancers. 29, 30, 31, 32, 33, 34 Recent studies showed that enhanced glutathione metabolism contributes to EGFR‐TKI resistance. 34 Inhibition of de novo glutathione synthesis by targeting AKR1B1 or METTL7B can overcome acquired resistance to both first‐ and third‐generation EGFR‐TKIs in NSCLC. 35, 36 ARID1A‐inactivating mutations were associated with reduced glutathione metabolism in ARID1A‐deficient cancers, 37 which may be one of the factors affecting sensitivity to EGFR‐TKIs. In contrast, a retrospective study of 19 EGFR‐mutant NSCLC patients showed significantly reduced PFS (p = 0.001) after icotinib treatment in patients with ARID1A‐mut (n = 3). 38 However, that sample size was small and all patients received first‐generation EGFR‐TKIs, which differs from our study. For the remaining SWI/SNF gene alterations, the efficacy of EGFR‐TKIs exhibited significant heterogeneity; further studies with larger sample sizes are needed for validation. Several limitations of our study should be considered. First, this was a retrospective single‐institution analysis with some inherent bias. Second, our assay covered the most common SWI/SNF genes but not all SWI/SNF genes. Third, the results of this study were descriptive, and further research is required to determine the mechanisms of the effect of SWI/SNF gene mutations on the response to ICIs and EGFR‐TKIs. In conclusion, our study represents a comprehensive cohort of SWI/SNF‐mut NSCLC in China and provides new knowledge on the genetic contributions of SWI/SNF‐mut to the response to ICIs and EGFR‐TKIs in advanced NSCLC. Although no association was observed between SWI/SNF‐mut overall and the clinical outcomes of ICIs or TKIs, SWI/SNF‐mut and KRAS‐mut co‐occurrence appeared to improve clinical survival for patients who received ICI treatment, especially those who received ICIs alone. Furthermore, we demonstrated that ARID1A‐mut seems to prolong clinical survival after EGFR‐TKIs in EGFR‐mutant advanced NSCLC. These findings could lead to improved outcomes for NSCLC by identifying more patient‐specific treatments that address SWI/SNF genetic alterations. Additional prospective studies with more detailed sequencing of all SWI/SNF genes and larger sample sizes are needed. Meanwhile, unveiling the mechanism by which SWI/SNF affects clinical outcomes remains an important issue. DISCLOSURE: The authors declare that there are no conflicts of interest. Supporting information: Figure S1 Kaplan–Meier analysis for PFS to PD‐L1 inhibitors in (a) SWI/SNF‐wt vs. SWI/SNF‐tm vs. SWI/SNF‐ntm vs. SWI/SNF‐cm patients, (b) ARID1A‐wt vs. ARID1A‐mut patients, (c) SMARCA4‐wt vs. SMARCA4‐mut patients, and (d) KRAS‐wt vs. KRAS‐mut patients in cohort 1. Kaplan–Meier analysis for PFS to ICIs in (e) SWI/SNF‐wt vs. SWI/SNF‐mut patients and (f) KRAS‐wt vs. KRAS‐mut patients in MSKCC cohorts. SWI/SNF, switch/sucrose nonfermentable family; wt, wild‐type; mut, mutation; PFS, progression‐free survival; HR, hazard ratio; ICIs, immune checkpoint inhibitors. Figure S2 (a) Response rate and (b) Kaplan–Meier analysis for PFS to first‐ and second‐generation TKIs in patients with SWI/SNF‐wt vs. SWI/SNF‐mut. (c) Response rate and (d) Kaplan–Meier analysis for PFS to third‐generation TKIs in patients with SWI/SNF‐wt vs. SWI/SNF‐mut. Kaplan–Meier analysis for PFS to (e) first‐/second‐generation and (f) third‐generation TKIs in patients with ARID1A‐wt vs. ARID1A‐mut. SWI/SNF, switch/sucrose nonfermentable family; wt, wild‐type; mut, mutation; TKIs, tyrosine kinase inhibitors; PFS, progression‐free survival; HR, hazard ratio. Table S1 The list of SWI/SNF mutations in non–small cell lung carcinomas. Table S2 Clinical characteristics by SWI/SNF and KRAS mutation status in cohort 1.
Background: The switch/sucrose nonfermentable complex mutations (SWI/SNF-mut) are common in non-small cell lung cancer (NSCLC). However, the association of SWI/SNF-mut with the clinical outcomes of immune checkpoint inhibitors (ICIs), and particularly of epidermal growth factor receptor tyrosine kinase inhibitors (EGFR-TKIs), has not been established. Methods: We retrospectively collected data of patients at the Cancer Hospital, Chinese Academy of Medical Sciences. Patients with advanced NSCLC who received programmed cell death protein-1 or programmed cell death ligand 1 (PD-[L]1) inhibitors were included in cohort 1, and those with EGFR mutations (EGFR-mutant) who received EGFR-TKI monotherapy were included in cohort 2. Two reported Memorial Sloan-Kettering Cancer Center (MSKCC) cohorts that received immunotherapy alone were used as validation for cohort 1. We analyzed the relationship between SWI/SNF alterations and clinical outcomes in each cohort. Results: In total, 1162 patients were included, of whom 230 (19.8%) were identified as SWI/SNF-mut, with the most common genetic alterations being ARID1A (33.4%) and SMARCA4 (28.3%). In cohort 1 (n = 146), patients with co-mutations of SWI/SNF and Kirsten rat sarcoma oncogene (KRAS) (SWI/SNFmutKRASmut, n = 18) had significantly prolonged progression-free survival (PFS) (8.6 m vs. 1.9 m; hazard ratio [HR], 0.31; 95% confidence interval [CI], 0.11-0.83; p = 0.032) with PD-(L)1 inhibitor monotherapy, consistent with the MSKCC cohorts (not reached [NR] vs. 6.3 m; HR, 0.36; 95% CI, 0.15-0.82; p = 0.016). In cohort 2 (n = 205), ARID1A-mut (n = 16) was associated with improved PFS after EGFR-TKIs (20.6 m vs. 11.2 m; HR, 0.47; 95% CI, 0.27-0.94; p = 0.023). Conclusions: In advanced NSCLC, patients with SWI/SNFmutKRASmut seem to benefit more from ICIs. Furthermore, ARID1A-mut may provide a protective effect with EGFR-TKIs in EGFR-mutant patients. However, this is a retrospective single-institution analysis that requires further validation by large prospective studies.
null
null
9,291
469
[ 510, 261, 209, 33, 182, 342, 192, 738, 484, 11 ]
14
[ "swi", "snf", "swi snf", "patients", "mut", "figure", "vs", "pfs", "egfr", "tkis" ]
[ "snf gene mutations", "lung carcinomas specified", "mutated genes arid1a", "swi snf chromatin", "cell lung cancer" ]
null
null
[CONTENT] epidermal growth factor receptor | immune checkpoint inhibitors | non–small cell lung cancer | SWI/SNF | tyrosine kinase inhibitors [SUMMARY]
[CONTENT] epidermal growth factor receptor | immune checkpoint inhibitors | non–small cell lung cancer | SWI/SNF | tyrosine kinase inhibitors [SUMMARY]
[CONTENT] epidermal growth factor receptor | immune checkpoint inhibitors | non–small cell lung cancer | SWI/SNF | tyrosine kinase inhibitors [SUMMARY]
null
[CONTENT] epidermal growth factor receptor | immune checkpoint inhibitors | non–small cell lung cancer | SWI/SNF | tyrosine kinase inhibitors [SUMMARY]
null
[CONTENT] Humans | Carcinoma, Non-Small-Cell Lung | Retrospective Studies | ErbB Receptors | Lung Neoplasms | Prospective Studies | Sucrose | Prognosis | Mutation | Protein Kinase Inhibitors | DNA Helicases | Nuclear Proteins | Transcription Factors [SUMMARY]
[CONTENT] Humans | Carcinoma, Non-Small-Cell Lung | Retrospective Studies | ErbB Receptors | Lung Neoplasms | Prospective Studies | Sucrose | Prognosis | Mutation | Protein Kinase Inhibitors | DNA Helicases | Nuclear Proteins | Transcription Factors [SUMMARY]
[CONTENT] Humans | Carcinoma, Non-Small-Cell Lung | Retrospective Studies | ErbB Receptors | Lung Neoplasms | Prospective Studies | Sucrose | Prognosis | Mutation | Protein Kinase Inhibitors | DNA Helicases | Nuclear Proteins | Transcription Factors [SUMMARY]
null
[CONTENT] Humans | Carcinoma, Non-Small-Cell Lung | Retrospective Studies | ErbB Receptors | Lung Neoplasms | Prospective Studies | Sucrose | Prognosis | Mutation | Protein Kinase Inhibitors | DNA Helicases | Nuclear Proteins | Transcription Factors [SUMMARY]
null
[CONTENT] snf gene mutations | lung carcinomas specified | mutated genes arid1a | swi snf chromatin | cell lung cancer [SUMMARY]
[CONTENT] snf gene mutations | lung carcinomas specified | mutated genes arid1a | swi snf chromatin | cell lung cancer [SUMMARY]
[CONTENT] snf gene mutations | lung carcinomas specified | mutated genes arid1a | swi snf chromatin | cell lung cancer [SUMMARY]
null
[CONTENT] snf gene mutations | lung carcinomas specified | mutated genes arid1a | swi snf chromatin | cell lung cancer [SUMMARY]
null
[CONTENT] swi | snf | swi snf | patients | mut | figure | vs | pfs | egfr | tkis [SUMMARY]
[CONTENT] swi | snf | swi snf | patients | mut | figure | vs | pfs | egfr | tkis [SUMMARY]
[CONTENT] swi | snf | swi snf | patients | mut | figure | vs | pfs | egfr | tkis [SUMMARY]
null
[CONTENT] swi | snf | swi snf | patients | mut | figure | vs | pfs | egfr | tkis [SUMMARY]
null
[CONTENT] snf | swi | swi snf | nsclc | egfr | patients | smarca4 | baf | mutations | mut [SUMMARY]
[CONTENT] swi snf | snf | swi | test | mutations | included | pd | medical | truncating | data [SUMMARY]
[CONTENT] swi | snf | swi snf | figure | mut | vs | patients | pfs | snfmutkrasmut | swi snfmutkrasmut [SUMMARY]
null
[CONTENT] swi | snf | swi snf | mut | patients | figure | vs | egfr | pfs | tkis [SUMMARY]
null
[CONTENT] SWI/SNF-mut | NSCLC ||| SWI/SNF-mut [SUMMARY]
[CONTENT] Cancer Hospital Chinese Academy of Medical Sciences ||| NSCLC | 1 | EGFR | 2 ||| Two | Memorial Sloan-Kettering Cancer Center | 1 ||| SWI [SUMMARY]
[CONTENT] 1162 | 230 | 19.8% | SWI | 33.4% | SMARCA4 | 28.3% ||| 1 | 146 | SWI/SNF | Kirsten | KRAS | SWI | 8.6 | 1.9 | 0.31 | 95% ||| CI] | 0.11 | 0.032 | MSKCC | 6.3 | 0.36 | 95% | CI | 0.15-0.82 ||| 2 | 205 | 16 | 20.6 | 11.2 | 0.47 | 95% | CI | 0.27 | 0.023 [SUMMARY]
null
[CONTENT] SWI/SNF-mut | NSCLC ||| SWI/SNF-mut ||| Cancer Hospital Chinese Academy of Medical Sciences ||| NSCLC | 1 | EGFR | 2 ||| Two | Memorial Sloan-Kettering Cancer Center | 1 ||| SWI ||| ||| 1162 | 230 | 19.8% | SWI | 33.4% | SMARCA4 | 28.3% ||| 1 | 146 | SWI/SNF | Kirsten | KRAS | SWI | 8.6 | 1.9 | 0.31 | 95% ||| CI] | 0.11 | 0.032 | MSKCC | 6.3 | 0.36 | 95% | CI | 0.15-0.82 ||| 2 | 205 | 16 | 20.6 | 11.2 | 0.47 | 95% | CI | 0.27 | 0.023 ||| NSCLC | SWI ||| EGFR ||| [SUMMARY]
null
Impact of the COVID-19 pandemic on malaria cases in health facilities in northern Ghana: a retrospective analysis of routine surveillance data.
35570272
The COVID-19 pandemic and its collateral damage severely impact health systems globally and risk to worsen the malaria situation in endemic countries. Malaria is a leading cause of morbidity and mortality in Ghana. This study aims to describe the potential effects of the COVID-19 pandemic on malaria cases observed in health facilities in the Northern Region of Ghana.
BACKGROUND
Monthly routine data from the District Health Information Management System II (DHIMS2) of the Northern Region of Ghana were analysed. Overall outpatient department visits (OPD) and malaria case rates from the years 2015-2019 were compared to the corresponding data of the year 2020.
METHODS
Compared to the corresponding periods of the years 2015-2019, overall visits and malaria cases in paediatric and adult OPDs in northern Ghana decreased in March and April 2020, when major movement and social restrictions were implemented in response to the pandemic. Cases slightly rebounded afterwards in 2020, but stayed below the average of the previous years. Malaria data from inpatient departments showed a similar but more pronounced trend when compared to OPDs. In pregnant women, however, malaria cases in OPDs increased after the first COVID-19 wave.
RESULTS
The findings from this study show that the COVID-19 pandemic affects the malaria burden in health facilities of northern Ghana, with declines in inpatient and outpatient rates except for pregnant women. They may have experienced reduced access to insecticide-treated nets and intermittent preventive malaria treatment in pregnancy, resulting in subsequent higher malaria morbidity. Further data, particularly from community-based studies and ideally complemented by qualitative research, are needed to fully determine the impact of the pandemic on the malaria situation in Africa.
CONCLUSIONS
[ "Adult", "COVID-19", "Child", "Female", "Ghana", "Health Facilities", "Humans", "Malaria", "Pandemics", "Pregnancy", "Retrospective Studies" ]
9107588
Background
Malaria remains one of the leading causes of morbidity and mortality in sub-Saharan Africa (SSA). Globally, there were 627,000 malaria-related deaths in 2020, 12% more than in 2019; 68% of these additional deaths were attributed to indirect consequences of the COVID-19 pandemic [1]. In Ghana, malaria is responsible for 10% of overall mortality and nearly one quarter of all under-five childhood deaths [1, 2]. The 2015–2020 Ghana Strategic Action Plan aimed to reduce the burden of malaria by 75.0% [3], but the COVID-19 pandemic could halt or even reverse the declining trends. In 2020, malaria was the cause of more than one third of all outpatient department (OPD) attendances [4]; moreover, 17.6% of OPD visits of pregnant women were due to malaria [5]. The global spread of the coronavirus disease 2019 (COVID-19) was declared a Public Health Emergency of International Concern at the end of January 2020 [6]. Many African governments responded rapidly to this threat by implementing control measures even before the first cases were detected in their countries, comprising border closures, movement restrictions, social distancing, and school closures [7]. SSA accounted for only about 3.5% of the globally reported COVID-19 morbidity and mortality, mostly from the southern and northern rims of the continent, while being home to 17% of the global population [8]. This may be explained by factors such as a younger population, a hotter climate, and interference with other infectious diseases, but especially by lack of diagnostics and underreporting [9, 10]. Ghana was among the countries with the highest reported COVID-19 cases and deaths in western and central SSA as of the end of 2020 [11]. COVID-19 vaccinations started in February 2021, but coverage in Ghana is still low, with only 10% of the population fully vaccinated by February 2022 [12]. The socio-economic disruptions associated with the disease and the preventive measures present huge challenges for health systems and whole societies, especially in low- and middle-income countries [13]. In the highly malaria-endemic African countries, the progress made in malaria control during the last two decades was feared to be reversed by the side effects of the COVID-19 pandemic [14, 15]. In the Ghanaian context, as no insecticide-treated net (ITN) mass campaigns were scheduled for 2020, the worst-case scenario presented in a modelling study by Weiss et al. would have been a decline in access to anti-malarial medication by 75%, resulting in increases of malaria morbidity and mortality by 13% and 55%, respectively [14]. Overall, the predicted public health effects of the COVID-19 pandemic on malaria include shared clinical disease manifestations leading to diagnostic challenges; shortages of anti-malarial medication, rapid diagnostic tests, preventive tools, and personal protective equipment; decreasing quality of surveillance systems; and re-allocation of funds and professionals towards COVID-19 control activities [16]. This study aims to describe potential effects of the COVID-19 pandemic on malaria cases in health facilities in the Northern Region of Ghana, a highly malaria-endemic region. The research hypothesis that "lower access to health care services in combination with impaired malaria surveillance systems may have led to a lower number of reported malaria cases in highly endemic countries" will be investigated [16]. This hypothesis leads to the specific research question of whether the COVID-19 pandemic led to fewer reported malaria cases in northern Ghana in 2020.
null
null
Results
Table 1 presents a brief description of the dataset. Altogether 5.8 million OPD visits were reported between 2015 and 2020; 39% of those were diagnosed with malaria. Of all malaria cases, 20% were children under the age of five years and 2% were pregnant women. 295,465 patients were hospitalized with diagnosed malaria, 56% of those were children under the age of five. The mean population in the Northern Region of Ghana of the years from 2015 to 2020 was 1,842,701. Table 1Description of the dataset on outpatients and malaria patients recorded in northern Ghana health facilities during the years 2015–2020Total numberPercentage (%)Outpatient department visits All OPD5,804,910100 Malaria confirmed2,278,29639 Malaria confirmed among children < 5 years454,77920 Malaria confirmed among pregnant women46,6932Hospital–admitted patients  Malaria confirmed295,465100 Malaria confirmed among children < 5 years165,31356Mean mid–year population Total population1,842,701100 Children < 5 years257,97814 Women aged 15 to 45423,82123 Description of the dataset on outpatients and malaria patients recorded in northern Ghana health facilities during the years 2015–2020 Figure 1 presents the case rates of the different outcomes reported from health facilities in the Northern Region of Ghana for the years 2015-20 separately as well as a combined rate for the period 2015 to 2019. All OPD visits (Fig. 1a), including also non-malaria patients, have experienced a decline in March/April 2020 (the months where COVID-19 control measures were implemented in the country) and stayed low during the following months. After a further decrease in September 2020, the numbers increased again in October 2020 to the levels observed in previous years. This trend is similar but not as pronounced in the malaria OPD visits (Fig. 1b). The decline in accessing OPD malaria health care is strongest in children under the age of five years, especially from June to September 2020 (Fig. 1c). In pregnant women, however, a different trend with an increase of malaria cases, starting in June and exceeding previous year levels, can be observed (Fig. 1d). The 2020 numbers of the hospital-admitted malaria patients (March till October) stayed consistently below the numbers from previous years (Fig. 1e); and in accordance with the OPD figures, this trend is more pronounced in children under five years (Fig. 1f). Fig. 
Figure 1 presents the case rates of the different outcomes reported from health facilities in the Northern Region of Ghana for the years 2015–2020 separately, as well as a combined rate for the period 2015 to 2019. All OPD visits (Fig. 1a), including non-malaria patients, declined in March/April 2020 (the months when COVID-19 control measures were implemented in the country) and stayed low during the following months. After a further decrease in September 2020, the numbers increased again in October 2020 to the levels observed in previous years. This trend is similar, but less pronounced, in the malaria OPD visits (Fig. 1b). The decline in access to OPD malaria care is strongest in children under the age of five years, especially from June to September 2020 (Fig. 1c). In pregnant women, however, a different trend can be observed, with an increase in malaria cases starting in June and exceeding previous-year levels (Fig. 1d). The 2020 numbers of hospital-admitted malaria patients (March to October) stayed consistently below the numbers from previous years (Fig. 1e); in accordance with the OPD figures, this trend is more pronounced in children under five years (Fig. 1f).

Fig. 1. Reported monthly rates (A, all outpatient department (OPD) visits per 100,000 of the general population; B, OPD visits with confirmed malaria per 100,000 of the general population; C, OPD visits with malaria in children under the age of 5 per 100,000 of all children under 5; D, OPD visits with malaria in pregnant women per 100,000 of all women aged 15–45; E, hospital-admitted patients with malaria per 100,000 of the general population; F, hospital-admitted malaria in children under 5 per 100,000 of all children under 5) in health facilities of the Northern Region, Ghana, for the years 2015–2020
The rate ratios (RR), comparing the quarterly rates of 2020 with the combined rates of the years 2015 to 2019, are presented in Table 2. General OPD visits were significantly reduced in the 2nd and 3rd quarters of 2020 compared with the previous years, by up to 27%, with a return to previous levels at the end of the year. For overall malaria cases, only the 3rd quarter of 2020 showed a significant decrease (by 26%), followed by an increase of 27% in the 4th quarter. Ambulatory malaria cases in children under five showed stronger reductions, by 43% in the 3rd quarter of 2020, and an increase of about 20% in the 4th quarter. These trends are not mirrored in pregnant women with malaria, where no major reductions were observed during the first quarters of 2020 compared with previous years, but an earlier and stronger increase reached 48% in the 4th quarter. The situation is slightly different for patients admitted to hospital with malaria: the reductions in the 2nd and 3rd quarters of 2020 are more pronounced (46% and 43%), and the numbers recover at the end of the year but do not exceed previous levels. Again, as for the outpatient population, this trend is more pronounced in children under five years of age (54% in the 2nd and 57% in the 3rd quarter, per the rate ratios in Table 2).

Table 2. Quarterly rate ratios (RR), rate differences (ΔR) and p-values comparing the rates of 2020 with the combined rates of the years 2015 to 2019 for outpatients and malaria patients in health facilities of northern Ghana

                                  1st quarter           2nd quarter           3rd quarter            4th quarter
                                  RR    ΔR      p       RR    ΔR      p       RR    ΔR       p       RR    ΔR      p
Outpatient department visits
  All visits                      0.93  −269.0  0.202   0.80  −745.3  <0.001  0.73  −1448.2  <0.001  1.03  125.4   0.283
  All malaria                     1.04  43.0    0.679   0.91  −110.8  0.300   0.74  −654.3   0.002   1.27  570.8   <0.001
  Malaria, children < 5 years     0.96  −60.6   0.697   0.81  −310.9  0.021   0.57  −1614.5  <0.001  1.19  589.5   0.005
  Malaria, pregnant women         0.87  −14.2   0.269   0.96  −4.4    0.709   1.14  27.6     0.127   1.48  80.7    <0.001
Inpatient department visits
  All malaria                     0.79  −36.8   0.139   0.54  −65.1   0.008   0.57  −139.7   <0.001  0.94  −16.5   0.526
  Malaria, children < 5 years     0.74  −186.7  0.035   0.46  −300.7  0.020   0.43  −764.5   <0.001  0.82  −224.5  0.067
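For readers unfamiliar with rate ratios, the percentage changes quoted above follow directly from Table 2: an RR below 1 corresponds to a percentage reduction relative to the 2015–2019 baseline, and an RR above 1 to an increase. A trivial illustration, using values taken from Table 2:

    # Percentage change implied by a rate ratio: (RR - 1) * 100
    for label, rr in [("All OPD visits, 3rd quarter", 0.73),
                      ("All malaria OPD, 4th quarter", 1.27),
                      ("Malaria OPD, children < 5, 3rd quarter", 0.57)]:
        print(f"{label}: {(rr - 1) * 100:+.0f}%")   # -27%, +27%, -43%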
Discussion

The COVID-19 pandemic has had major consequences for the functioning of health services, with direct and indirect effects on the burden of various diseases [29, 30]. In this paper, effects of the pandemic on malaria case numbers in health facilities of northern Ghana, a region highly endemic for malaria, are described.

In northern Ghana, a slight but significant decline was observed in malaria cases during the 2nd and 3rd quarters of 2020. This decline is all the more notable because the period coincides with the rainy season in northern Ghana (May–November), when the majority of malaria cases are usually recorded. Cases only rebounded to the average levels of previous years at the end of 2020. This pattern was visible in both outpatient and inpatient settings, but was more pronounced in the hospitalized population. The same applies to children and adults: reductions were observed in both groups but were more marked in children under five years of age. The marked decline in March/April 2020 can be explained by the extensive movement and gathering restrictions and the early advice to stay at home with COVID-19-like symptoms unless these became severe. Such measures have likely reinforced the hesitancy to visit health facilities during the pandemic, which in turn poses a major risk for developing severe malaria [13, 31]. The decline observed in March/April 2020 was even more remarkable in inpatients. This does not support our initial hypothesis that, in cases of more severe malaria manifestations, patients would still be brought to health facilities and hospitalized despite the pandemic. The findings from this analysis support the hypothesis that the reported malaria burden in health facilities shrinks under the effects of the COVID-19 pandemic in highly malaria-endemic countries [32]. They also support the results of the WHO World Malaria Report [13], and they agree with results of similar studies from other SSA countries classified as highly endemic for malaria, such as Sierra Leone, Uganda and the Democratic Republic of the Congo [33–36].

The distinct decrease in OPD visits in the health facilities of northern Ghana in September 2020 could be explained by unusually heavy floods that started in mid-August, which might have further complicated access to health services. These floods provided a favourable habitat for Anopheles mosquitoes, which could explain the observed increase in malaria cases in October 2020.

Malaria cases seen in health facilities among pregnant women show a different trend. After a decline in April 2020, cases rebounded rapidly in this population and even exceeded the levels of previous years. The most likely explanation for such an opposite trend is the early hesitancy of pregnant women to visit health facilities, probably due to fear of getting infected with COVID-19, combined with initial disruptions in the provision of IPTp in antenatal care (ANC) services as well as disruption of the routine distribution of ITNs [37]. The disrupted access to and delivery of ANC services is likely to explain the malaria case trend in April. However, without IPTp and ITNs, more women were at risk of malaria thereafter, which can explain the subsequent rise in malaria cases over the following months. In addition, many pregnant women probably sought the missed ANC, with subsequent malaria diagnoses, after the initial movement restrictions were lifted.

Ghana had already achieved high levels of ITN coverage, and no ITN mass campaign was planned for 2020 [13]. However, the routine distribution of ITNs, which is usually done in health facilities during ANC sessions and in primary schools, needed to be adapted to the COVID-19 measures, which included school closures from March 2020 until January 2021 [38, 39]. The seasonal malaria chemoprevention intervention for children and the annual indoor residual spraying of insecticides, which both require physical contact between health workers and the community, also needed to be modified [40, 41]. As another consequence of the COVID-19 pandemic, the provision of rapid diagnostic tests for malaria was fragile, which may have led to under-diagnosis of cases [42]. The main explanation for the lower number of malaria cases seen in health facilities was limited access to the facilities themselves: public transport was unavailable or unaffordable, and health facilities were closed or provided only reduced services [43, 44]. This is supported by findings from a study in Rwanda, which showed that health facility visits for malaria decreased while community health services for malaria increased [43]. Furthermore, reports of hesitancy to visit health facilities for fear of getting infected with COVID-19 were common [37, 42]. Finally, malaria health care worker capacities were limited due to frequent reassignments to COVID-19 control, to stigmatization or absence following quarantine, or to the development of COVID-19 disease or even death [14, 39, 45].

This study has strengths and limitations. A strength is that the data represent a whole year of follow-up into the pandemic, which provides a more comprehensive picture of the effects than previous studies with much shorter study periods. Also, the subgroup analyses of children under the age of five and pregnant women allow for a more complete picture. A major limitation is that the surveillance system itself may have been affected by the pandemic, biasing the reported numbers; massive underreporting could have distorted the observed trends and our conclusions. Moreover, it is not clear whether the quality of the surveillance data is fully comparable across the five years observed. The data from the Northern Region of Ghana may also not be representative of other malaria-endemic areas in SSA; thus, the study has limited external validity. Absenteeism from health facilities by people with malaria symptoms who switched to self-medication or traditional medicine, or who could not afford to reach official health care during the pandemic, could also have had an albeit unknown effect on the malaria figures [46]. Especially in the first months of the pandemic, many people may have used malaria medication off-label to prevent and treat COVID-19, which may also have affected the malaria situation [13].

Conclusions
This study shows that the COVID-19 pandemic has been associated with reduced overall outpatient visits and reduced malaria cases reported from northern Ghana’s health facilities. Further data and qualitative explanations from Ghana and other SSA countries and in particular data from community-based studies are needed to fully judge the impact of the pandemic on the malaria situation on the African continent.
[ "Background", "Methods", "Study area", "Study design and data", "Analysis" ]
[ "Malaria remains one of the leading causes of morbidity and mortality in sub-Saharan Africa (SSA). Globally, there have been 627.000 malaria related deaths in 2020, 12% more than in 2019; 68% of these additional deaths were attributed to indirect consequences of the COVID-19 pandemic [1]. In Ghana, malaria is responsible for 10% of the overall mortality and nearly one quarter of all under five childhood deaths [1, 2]. The 2015–2020 Ghana Strategic Action Plan aimed to reduce the burden of malaria by 75.0% [3], but the COVID-19 pandemic could halt or even reverse the declining trends. In 2020, malaria was the cause of more than one third of all OPD attendances [4]; moreover 17.6% of OPD visits of pregnant women were due to malaria [5].\nThe global spread of the coronavirus disease 2019 (COVID-19) was declared a Public Health Emergency of International Concern at the end of January 2020 [6]. Many African governments responded rapidly to this threat by implementing control measures even before first cases were detected in their countries, comprising border closures, movement restrictions, social distancing and school closures [7]. SSA accounted for only about 3.5% of the globally reported COVID-19 morbidity and mortality, mostly from the southern and northern rims of the continent, while being home to 17% of the global population [8]. This may be explained by factors such as a younger population, hotter climate, interferences with other infectious diseases, but especially lack of diagnostics and underreporting [9, 10]. Ghana was among the countries with the highest reported COVID-19 cases and deaths in western and central SSA, as of the end of 2020 [11]. COVID-19 vaccinations started in February 2021 but coverage in Ghana is still low with only 10% of the population fully vaccinated by February 2022 [12].\nThe socio-economic disruptions associated with the disease and the preventive measures present huge challenges for health systems and whole societies, especially in low- and middle income countries [13]. In the highly malaria-endemic African countries, the progress made in malaria control during the last two decades was feared to be reversed by the side effects of the COVID-19 pandemic [14, 15]. In the Ghanaian context, as no ITN mass campaigns were scheduled for 2020, the worst-case scenario presented in a modelling study by Weiss et al. could have been a decline in access to anti-malarial medication by 75%, resulting in an increase of malaria morbidity and mortality by 13% and 55%, respectively [14]. Overall, the predicted public health relevant effects of the COVID-19 pandemic on malaria include shared clinical disease manifestations leading to diagnostic challenges, shortages of anti-malarial medication, rapid diagnostic tests, preventive tools and personal protective equipment, decreasing quality of surveillance systems, and re-allocation of funds and professionals towards COVID-19 control activities [16].\nThis study aims to describe potential effects of the COVID-19 pandemic on malaria cases in health facilities in the Northern Region of Ghana – a highly malaria endemic region. The research hypothesis that “lower access to health care services in combination with impaired malaria surveillance systems may have led to a lower number of reported malaria cases in highly endemic countries” will be investigated [16]. 
This hypothesis leads to the specific research question if the COVID-19 pandemic has led to less reported malaria cases in northern Ghana in 2020.", "Study area Ghana, with its population of about 31 million, lies in western SSA and has a relatively well functioning health care system [17, 18]. The Northern Region, with its capital city Tamale, had a population of 1.9 million in 2020. The socio-economic situation of the Northern Region is below the national average of the country and the region has the highest mortality rate in children under the age of five years [19].\nMalaria is highly endemic in northern Ghana with a seasonal transmission pattern that is strongest between July and November. ITNs are a major malaria prevention strategy in Ghana; in 2019, about 74% of households owned at least one ITN and 52% of all households had at least one ITN per two people [20]. According to the state of the nation’s health report 2018, the malaria prevalence in children under- five years in the Northern Region is 40%, the highest in Ghana [21]. Regarding the epidemiology of COVID-19, Ghana recorded the first cases in March 2020. The government responded immediately with the implementation of social gathering and travel restrictions as well as school closures. The country’s major cities were placed under partial lockdown soon after. This lockdown started on March 30, 2020, and lasted for two weeks. Schools were partially reopened on June 21, 2020, and borders were reopened to international airlines on September 21, 2020 [22]. As at end of March 2021, the country had a total of 90,583 confirmed cases and 743 deaths [23]. These cases were clustered around two major waves in March-September 2020 and January-March 2021. The first wave coincided partly with the rainy season in the Northern Region, the time when the majority of cases of malaria in children and pregnant women are recorded [24].\nIn Ghana, effects of the COVID-19 pandemic on malaria control interventions concerned the country’s stock of artemisinin-based combination therapy, the functioning of its insecticide-treated mosquito net (ITN) routine distribution, and the overall access to primary health care services and facilities [25].\nAlthough the lockdown did not include the Northern Region directly, the other pandemic control measures including the ‘stay at home unless absolutely necessary‘ campaign, suspension of OPD services in many hospitals and the general anxiety among the population, led to reduction in antenatal and child welfare clinic attendance. This situation could have further affected malaria control measures as the antenatal and child welfare clinics are two major service delivery points where education on malaria prevention, intermittent preventive treatment in pregnancy (IPTp) services and distribution of ITNs to children and pregnant women are carried out [26].\nGhana, with its population of about 31 million, lies in western SSA and has a relatively well functioning health care system [17, 18]. The Northern Region, with its capital city Tamale, had a population of 1.9 million in 2020. The socio-economic situation of the Northern Region is below the national average of the country and the region has the highest mortality rate in children under the age of five years [19].\nMalaria is highly endemic in northern Ghana with a seasonal transmission pattern that is strongest between July and November. 
ITNs are a major malaria prevention strategy in Ghana; in 2019, about 74% of households owned at least one ITN and 52% of all households had at least one ITN per two people [20]. According to the state of the nation’s health report 2018, the malaria prevalence in children under- five years in the Northern Region is 40%, the highest in Ghana [21]. Regarding the epidemiology of COVID-19, Ghana recorded the first cases in March 2020. The government responded immediately with the implementation of social gathering and travel restrictions as well as school closures. The country’s major cities were placed under partial lockdown soon after. This lockdown started on March 30, 2020, and lasted for two weeks. Schools were partially reopened on June 21, 2020, and borders were reopened to international airlines on September 21, 2020 [22]. As at end of March 2021, the country had a total of 90,583 confirmed cases and 743 deaths [23]. These cases were clustered around two major waves in March-September 2020 and January-March 2021. The first wave coincided partly with the rainy season in the Northern Region, the time when the majority of cases of malaria in children and pregnant women are recorded [24].\nIn Ghana, effects of the COVID-19 pandemic on malaria control interventions concerned the country’s stock of artemisinin-based combination therapy, the functioning of its insecticide-treated mosquito net (ITN) routine distribution, and the overall access to primary health care services and facilities [25].\nAlthough the lockdown did not include the Northern Region directly, the other pandemic control measures including the ‘stay at home unless absolutely necessary‘ campaign, suspension of OPD services in many hospitals and the general anxiety among the population, led to reduction in antenatal and child welfare clinic attendance. This situation could have further affected malaria control measures as the antenatal and child welfare clinics are two major service delivery points where education on malaria prevention, intermittent preventive treatment in pregnancy (IPTp) services and distribution of ITNs to children and pregnant women are carried out [26].", "Ghana, with its population of about 31 million, lies in western SSA and has a relatively well functioning health care system [17, 18]. The Northern Region, with its capital city Tamale, had a population of 1.9 million in 2020. The socio-economic situation of the Northern Region is below the national average of the country and the region has the highest mortality rate in children under the age of five years [19].\nMalaria is highly endemic in northern Ghana with a seasonal transmission pattern that is strongest between July and November. ITNs are a major malaria prevention strategy in Ghana; in 2019, about 74% of households owned at least one ITN and 52% of all households had at least one ITN per two people [20]. According to the state of the nation’s health report 2018, the malaria prevalence in children under- five years in the Northern Region is 40%, the highest in Ghana [21]. Regarding the epidemiology of COVID-19, Ghana recorded the first cases in March 2020. The government responded immediately with the implementation of social gathering and travel restrictions as well as school closures. The country’s major cities were placed under partial lockdown soon after. This lockdown started on March 30, 2020, and lasted for two weeks. 
Schools were partially reopened on June 21, 2020, and borders were reopened to international airlines on September 21, 2020 [22]. As at end of March 2021, the country had a total of 90,583 confirmed cases and 743 deaths [23]. These cases were clustered around two major waves in March-September 2020 and January-March 2021. The first wave coincided partly with the rainy season in the Northern Region, the time when the majority of cases of malaria in children and pregnant women are recorded [24].\nIn Ghana, effects of the COVID-19 pandemic on malaria control interventions concerned the country’s stock of artemisinin-based combination therapy, the functioning of its insecticide-treated mosquito net (ITN) routine distribution, and the overall access to primary health care services and facilities [25].\nAlthough the lockdown did not include the Northern Region directly, the other pandemic control measures including the ‘stay at home unless absolutely necessary‘ campaign, suspension of OPD services in many hospitals and the general anxiety among the population, led to reduction in antenatal and child welfare clinic attendance. This situation could have further affected malaria control measures as the antenatal and child welfare clinics are two major service delivery points where education on malaria prevention, intermittent preventive treatment in pregnancy (IPTp) services and distribution of ITNs to children and pregnant women are carried out [26].", "This retrospective observational study uses monthly malaria morbidity data on the overall number of outpatients (interpreted as less severe cases) and inpatients (more severe cases). Additionally, all outpatient visits (including non-malaria related visits) were analysed. Subgroup analysis was performed for the two groups at high risk for severe malaria manifestations, children under five years of age and pregnant women [1]. Cases were extracted from the District Health Information Management System II (DHIMS2) on demographic and health parameters of northern Ghana from January 1, 2015, to December 31, 2020. This system was implemented in 2007 with an update in 2012 and has improved the data quality and completeness since [27].\nMalaria diagnosis was based either on the results of rapid diagnostic tests or microscopy.\nMid-year population estimates of the Northern Region of Ghana were also provided through the DHIMS2.", "The data have been processed with Microsoft Excel Version 16.52 and analyzed with Stata IC Version 16 (StataCorp, College Station, TX, USA). Monthly rates of OPD visits and confirmed malaria cases for the year 2020 and as a comparison for the years 2015–2019 separately as well as mean rates with 95% confidence intervals (95% CI) have been calculated and plotted using population figures of the Northern Region of Ghana. Additionally, quarterly rates are presented; rates of 2020 versus the combined rates of 2015–2019 have been compared with the z-test. The data allowed analysing children under five years and pregnant women separately using the fraction of the under-five population (14% of the population) and the fraction of women between 15 and 45 years (23% of the population) as estimates of the respective population denominators [28]. Malaria deaths have not been analysed due to low quality of the mortality data sets." ]
[ null, null, null, null, null ]
[ "Background", "Methods", "Study area", "Study design and data", "Analysis", "Results", "Discussion", "Conclusions" ]
[ "Malaria remains one of the leading causes of morbidity and mortality in sub-Saharan Africa (SSA). Globally, there have been 627.000 malaria related deaths in 2020, 12% more than in 2019; 68% of these additional deaths were attributed to indirect consequences of the COVID-19 pandemic [1]. In Ghana, malaria is responsible for 10% of the overall mortality and nearly one quarter of all under five childhood deaths [1, 2]. The 2015–2020 Ghana Strategic Action Plan aimed to reduce the burden of malaria by 75.0% [3], but the COVID-19 pandemic could halt or even reverse the declining trends. In 2020, malaria was the cause of more than one third of all OPD attendances [4]; moreover 17.6% of OPD visits of pregnant women were due to malaria [5].\nThe global spread of the coronavirus disease 2019 (COVID-19) was declared a Public Health Emergency of International Concern at the end of January 2020 [6]. Many African governments responded rapidly to this threat by implementing control measures even before first cases were detected in their countries, comprising border closures, movement restrictions, social distancing and school closures [7]. SSA accounted for only about 3.5% of the globally reported COVID-19 morbidity and mortality, mostly from the southern and northern rims of the continent, while being home to 17% of the global population [8]. This may be explained by factors such as a younger population, hotter climate, interferences with other infectious diseases, but especially lack of diagnostics and underreporting [9, 10]. Ghana was among the countries with the highest reported COVID-19 cases and deaths in western and central SSA, as of the end of 2020 [11]. COVID-19 vaccinations started in February 2021 but coverage in Ghana is still low with only 10% of the population fully vaccinated by February 2022 [12].\nThe socio-economic disruptions associated with the disease and the preventive measures present huge challenges for health systems and whole societies, especially in low- and middle income countries [13]. In the highly malaria-endemic African countries, the progress made in malaria control during the last two decades was feared to be reversed by the side effects of the COVID-19 pandemic [14, 15]. In the Ghanaian context, as no ITN mass campaigns were scheduled for 2020, the worst-case scenario presented in a modelling study by Weiss et al. could have been a decline in access to anti-malarial medication by 75%, resulting in an increase of malaria morbidity and mortality by 13% and 55%, respectively [14]. Overall, the predicted public health relevant effects of the COVID-19 pandemic on malaria include shared clinical disease manifestations leading to diagnostic challenges, shortages of anti-malarial medication, rapid diagnostic tests, preventive tools and personal protective equipment, decreasing quality of surveillance systems, and re-allocation of funds and professionals towards COVID-19 control activities [16].\nThis study aims to describe potential effects of the COVID-19 pandemic on malaria cases in health facilities in the Northern Region of Ghana – a highly malaria endemic region. The research hypothesis that “lower access to health care services in combination with impaired malaria surveillance systems may have led to a lower number of reported malaria cases in highly endemic countries” will be investigated [16]. 
This hypothesis leads to the specific research question if the COVID-19 pandemic has led to less reported malaria cases in northern Ghana in 2020.", "Study area Ghana, with its population of about 31 million, lies in western SSA and has a relatively well functioning health care system [17, 18]. The Northern Region, with its capital city Tamale, had a population of 1.9 million in 2020. The socio-economic situation of the Northern Region is below the national average of the country and the region has the highest mortality rate in children under the age of five years [19].\nMalaria is highly endemic in northern Ghana with a seasonal transmission pattern that is strongest between July and November. ITNs are a major malaria prevention strategy in Ghana; in 2019, about 74% of households owned at least one ITN and 52% of all households had at least one ITN per two people [20]. According to the state of the nation’s health report 2018, the malaria prevalence in children under- five years in the Northern Region is 40%, the highest in Ghana [21]. Regarding the epidemiology of COVID-19, Ghana recorded the first cases in March 2020. The government responded immediately with the implementation of social gathering and travel restrictions as well as school closures. The country’s major cities were placed under partial lockdown soon after. This lockdown started on March 30, 2020, and lasted for two weeks. Schools were partially reopened on June 21, 2020, and borders were reopened to international airlines on September 21, 2020 [22]. As at end of March 2021, the country had a total of 90,583 confirmed cases and 743 deaths [23]. These cases were clustered around two major waves in March-September 2020 and January-March 2021. The first wave coincided partly with the rainy season in the Northern Region, the time when the majority of cases of malaria in children and pregnant women are recorded [24].\nIn Ghana, effects of the COVID-19 pandemic on malaria control interventions concerned the country’s stock of artemisinin-based combination therapy, the functioning of its insecticide-treated mosquito net (ITN) routine distribution, and the overall access to primary health care services and facilities [25].\nAlthough the lockdown did not include the Northern Region directly, the other pandemic control measures including the ‘stay at home unless absolutely necessary‘ campaign, suspension of OPD services in many hospitals and the general anxiety among the population, led to reduction in antenatal and child welfare clinic attendance. This situation could have further affected malaria control measures as the antenatal and child welfare clinics are two major service delivery points where education on malaria prevention, intermittent preventive treatment in pregnancy (IPTp) services and distribution of ITNs to children and pregnant women are carried out [26].\nGhana, with its population of about 31 million, lies in western SSA and has a relatively well functioning health care system [17, 18]. The Northern Region, with its capital city Tamale, had a population of 1.9 million in 2020. The socio-economic situation of the Northern Region is below the national average of the country and the region has the highest mortality rate in children under the age of five years [19].\nMalaria is highly endemic in northern Ghana with a seasonal transmission pattern that is strongest between July and November. 
ITNs are a major malaria prevention strategy in Ghana; in 2019, about 74% of households owned at least one ITN and 52% of all households had at least one ITN per two people [20]. According to the state of the nation’s health report 2018, the malaria prevalence in children under- five years in the Northern Region is 40%, the highest in Ghana [21]. Regarding the epidemiology of COVID-19, Ghana recorded the first cases in March 2020. The government responded immediately with the implementation of social gathering and travel restrictions as well as school closures. The country’s major cities were placed under partial lockdown soon after. This lockdown started on March 30, 2020, and lasted for two weeks. Schools were partially reopened on June 21, 2020, and borders were reopened to international airlines on September 21, 2020 [22]. As at end of March 2021, the country had a total of 90,583 confirmed cases and 743 deaths [23]. These cases were clustered around two major waves in March-September 2020 and January-March 2021. The first wave coincided partly with the rainy season in the Northern Region, the time when the majority of cases of malaria in children and pregnant women are recorded [24].\nIn Ghana, effects of the COVID-19 pandemic on malaria control interventions concerned the country’s stock of artemisinin-based combination therapy, the functioning of its insecticide-treated mosquito net (ITN) routine distribution, and the overall access to primary health care services and facilities [25].\nAlthough the lockdown did not include the Northern Region directly, the other pandemic control measures including the ‘stay at home unless absolutely necessary‘ campaign, suspension of OPD services in many hospitals and the general anxiety among the population, led to reduction in antenatal and child welfare clinic attendance. This situation could have further affected malaria control measures as the antenatal and child welfare clinics are two major service delivery points where education on malaria prevention, intermittent preventive treatment in pregnancy (IPTp) services and distribution of ITNs to children and pregnant women are carried out [26].", "Ghana, with its population of about 31 million, lies in western SSA and has a relatively well functioning health care system [17, 18]. The Northern Region, with its capital city Tamale, had a population of 1.9 million in 2020. The socio-economic situation of the Northern Region is below the national average of the country and the region has the highest mortality rate in children under the age of five years [19].\nMalaria is highly endemic in northern Ghana with a seasonal transmission pattern that is strongest between July and November. ITNs are a major malaria prevention strategy in Ghana; in 2019, about 74% of households owned at least one ITN and 52% of all households had at least one ITN per two people [20]. According to the state of the nation’s health report 2018, the malaria prevalence in children under- five years in the Northern Region is 40%, the highest in Ghana [21]. Regarding the epidemiology of COVID-19, Ghana recorded the first cases in March 2020. The government responded immediately with the implementation of social gathering and travel restrictions as well as school closures. The country’s major cities were placed under partial lockdown soon after. This lockdown started on March 30, 2020, and lasted for two weeks. 
Schools were partially reopened on June 21, 2020, and borders were reopened to international airlines on September 21, 2020 [22]. As at end of March 2021, the country had a total of 90,583 confirmed cases and 743 deaths [23]. These cases were clustered around two major waves in March-September 2020 and January-March 2021. The first wave coincided partly with the rainy season in the Northern Region, the time when the majority of cases of malaria in children and pregnant women are recorded [24].\nIn Ghana, effects of the COVID-19 pandemic on malaria control interventions concerned the country’s stock of artemisinin-based combination therapy, the functioning of its insecticide-treated mosquito net (ITN) routine distribution, and the overall access to primary health care services and facilities [25].\nAlthough the lockdown did not include the Northern Region directly, the other pandemic control measures including the ‘stay at home unless absolutely necessary‘ campaign, suspension of OPD services in many hospitals and the general anxiety among the population, led to reduction in antenatal and child welfare clinic attendance. This situation could have further affected malaria control measures as the antenatal and child welfare clinics are two major service delivery points where education on malaria prevention, intermittent preventive treatment in pregnancy (IPTp) services and distribution of ITNs to children and pregnant women are carried out [26].", "This retrospective observational study uses monthly malaria morbidity data on the overall number of outpatients (interpreted as less severe cases) and inpatients (more severe cases). Additionally, all outpatient visits (including non-malaria related visits) were analysed. Subgroup analysis was performed for the two groups at high risk for severe malaria manifestations, children under five years of age and pregnant women [1]. Cases were extracted from the District Health Information Management System II (DHIMS2) on demographic and health parameters of northern Ghana from January 1, 2015, to December 31, 2020. This system was implemented in 2007 with an update in 2012 and has improved the data quality and completeness since [27].\nMalaria diagnosis was based either on the results of rapid diagnostic tests or microscopy.\nMid-year population estimates of the Northern Region of Ghana were also provided through the DHIMS2.", "The data have been processed with Microsoft Excel Version 16.52 and analyzed with Stata IC Version 16 (StataCorp, College Station, TX, USA). Monthly rates of OPD visits and confirmed malaria cases for the year 2020 and as a comparison for the years 2015–2019 separately as well as mean rates with 95% confidence intervals (95% CI) have been calculated and plotted using population figures of the Northern Region of Ghana. Additionally, quarterly rates are presented; rates of 2020 versus the combined rates of 2015–2019 have been compared with the z-test. The data allowed analysing children under five years and pregnant women separately using the fraction of the under-five population (14% of the population) and the fraction of women between 15 and 45 years (23% of the population) as estimates of the respective population denominators [28]. Malaria deaths have not been analysed due to low quality of the mortality data sets.", "Table 1 presents a brief description of the dataset. Altogether 5.8 million OPD visits were reported between 2015 and 2020; 39% of those were diagnosed with malaria. 
Of all malaria cases, 20% were children under the age of five years and 2% were pregnant women. 295,465 patients were hospitalized with diagnosed malaria, 56% of those were children under the age of five. The mean population in the Northern Region of Ghana of the years from 2015 to 2020 was 1,842,701.\n\nTable 1Description of the dataset on outpatients and malaria patients recorded in northern Ghana health facilities during the years 2015–2020Total numberPercentage (%)Outpatient department visits All OPD5,804,910100 Malaria confirmed2,278,29639 Malaria confirmed among children < 5 years454,77920 Malaria confirmed among pregnant women46,6932Hospital–admitted patients  Malaria confirmed295,465100 Malaria confirmed among children < 5 years165,31356Mean mid–year population Total population1,842,701100 Children < 5 years257,97814 Women aged 15 to 45423,82123\nDescription of the dataset on outpatients and malaria patients recorded in northern Ghana health facilities during the years 2015–2020\nFigure 1 presents the case rates of the different outcomes reported from health facilities in the Northern Region of Ghana for the years 2015-20 separately as well as a combined rate for the period 2015 to 2019. All OPD visits (Fig. 1a), including also non-malaria patients, have experienced a decline in March/April 2020 (the months where COVID-19 control measures were implemented in the country) and stayed low during the following months. After a further decrease in September 2020, the numbers increased again in October 2020 to the levels observed in previous years. This trend is similar but not as pronounced in the malaria OPD visits (Fig. 1b). The decline in accessing OPD malaria health care is strongest in children under the age of five years, especially from June to September 2020 (Fig. 1c). In pregnant women, however, a different trend with an increase of malaria cases, starting in June and exceeding previous year levels, can be observed (Fig. 1d). The 2020 numbers of the hospital-admitted malaria patients (March till October) stayed consistently below the numbers from previous years (Fig. 1e); and in accordance with the OPD figures, this trend is more pronounced in children under five years (Fig. 1f).\n\nFig. 
1Reported monthly rates (A for all outpatient department (OPD) visits per 100,000 of the general population; B for OPD visits with confirmed malaria per 100,000 of the general population; C for OPD visits with malaria in children under the age of 5 per 100,000 of all children under 5, D for OPD visits with malaria in pregnant women per 100,000 of all women aged 15–45, E for hospital-admitted patients with malaria per 100,000 of the general population; F for hospital-admitted malaria in children under 5 per 100,000 of all children under 5) in health facilities of the Northern Region, Ghana, for the years 2015–2020\nReported monthly rates (A for all outpatient department (OPD) visits per 100,000 of the general population; B for OPD visits with confirmed malaria per 100,000 of the general population; C for OPD visits with malaria in children under the age of 5 per 100,000 of all children under 5, D for OPD visits with malaria in pregnant women per 100,000 of all women aged 15–45, E for hospital-admitted patients with malaria per 100,000 of the general population; F for hospital-admitted malaria in children under 5 per 100,000 of all children under 5) in health facilities of the Northern Region, Ghana, for the years 2015–2020\nThe rate ratios (RR), depicting quarterly measures comparing the rates of 2020 to the combined rates of the years 2015 to 2019, are presented in Table 2. General OPD visits were significantly reduced in the 2nd and 3rd quarters of 2020 compared to the previous years by up to 27%, with a return to previous standards at the end of the year. For the overall malaria cases only the 3rd quarter of 2020 experienced a decrease by 26%, but increases by 27% in the 4th quarter. Ambulatory malaria cases in children under five experienced stronger reductions compared to previous years by 43% in the 3rd quarter of 2020, and increases by 20% in the 4th quarter. These evolutions are not mirrored by the population of pregnant women with malaria infections, where no major reductions were observed during the first quarters of 2020 compared to previous years but with an earlier and stronger increase that reached 48% in the 4th quarter. The situation is slightly different in patients admitted to the hospital with malaria. The reductions in the 2nd and 3rd quarters of 2020 are more pronounced (46% and 43%) and the numbers recover at the end of the year but do not exceed previous levels. 
Again, as for the outpatient population, this trend is more pronounced in children under five years of age (64% in the 2nd, 67% in the 3rd quarter).\n\nTable 2Quarterly rate ratios (RR), rate differences (ΔR) and p − values comparing the rates of 2020 with the combined rates of the years 2015 to 2019 for outpatients and malaria patients in health facilities of northern GhanaRRΔRp − value1st quarter2nd quarter3rd quarter4th quarterOutpatient department visitsAll visits0.93− 269.00.2020.80− 745.3< 0.0010.73− 1448.2< 0.0011.03125.40.283All malaria1.0443.00.6790.91− 110.80.3000.74− 654.30.0021.27570.8< 0.001Malaria children < 5 years0.96− 60.60.6970.81− 310.90.0210.57− 1614.5< 0.0011.19589.50.005Malaria pregnant women0.87− 14.20.2690.96− 4.40.7091.1427.60.1271.4880.7< 0.001Inpatient department vistitsAll malaria0.79− 36.80.1390.54− 65.10.0080.57− 139.7< 0.0010.94−16.50.526Malaria children < 5 years0.74− 186.70.0350.46− 300.70.0200.43− 764.5< 0.0010.82− 224.50.067\nQuarterly rate ratios (RR), rate differences (ΔR) and p − values comparing the rates of 2020 with the combined rates of the years 2015 to 2019 for outpatients and malaria patients in health facilities of northern Ghana\n0.93\n− 269.0\n0.202\n0.80\n− 745.3\n< 0.001\n0.73\n− 1448.2\n< 0.001\n1.03\n125.4\n0.283\n1.04\n43.0\n0.679\n0.91\n− 110.8\n0.300\n0.74\n− 654.3\n0.002\n1.27\n570.8\n< 0.001\n0.96\n− 60.6\n0.697\n0.81\n− 310.9\n0.021\n0.57\n− 1614.5\n< 0.001\n1.19\n589.5\n0.005\n0.87\n− 14.2\n0.269\n0.96\n− 4.4\n0.709\n1.14\n27.6\n0.127\n1.48\n80.7\n< 0.001\n0.79\n− 36.8\n0.139\n0.54\n− 65.1\n0.008\n0.57\n− 139.7\n< 0.001\n0.94\n−16.5\n0.526\n0.74\n− 186.7\n0.035\n0.46\n− 300.7\n0.020\n0.43\n− 764.5\n< 0.001\n0.82\n− 224.5\n0.067", "The Covid-19 pandemic has major consequences on the functioning of health services and direct and indirect effects on the burden of various diseases [29, 30]. In this paper, effects of the pandemic on malaria case numbers in health facilities of northern Ghana, a region highly endemic for malaria, are described.\nIn northern Ghana, a slight but significant decline was observed in malaria cases during the 2nd and 3rd quarter of 2020. This decline is even more significant considering that the period coincides with the rainy season in northern Ghana (May-November) when usually the majority of malaria cases are recorded. Cases only rebounded to the average levels of previous years at the end of 2020. This pattern was visible in both, outpatient and inpatient settings, but more pronounced in the hospitalized population. The same applies to children and adults, where reductions were also observed in both groups, but were more marked in children under five years of age. The marked decline in March/April 2020 can be explained by the extensive movement and gathering restrictions and early stay-at-home advices for COVID-19-like symptoms unless these get severe. Such measures have likely supported the hesitancy to visit health facilities during the pandemic, which in turn poses a major risk for developing severe malaria [13, 31]. The decline observed in March/April 2020 was even more remarkable in inpatients. This does not support our initial hypothesis, that in cases of more severe malaria manifestation, patients were still brought to health facilities and hospitalized, despite the pandemic. The findings from this analysis support the hypothesis, that the reported malaria burden in health facilities will shrink due to the effects of the COVID-19 pandemic in highly malaria-endemic countries [32]. 
They also support results of the WHO World Malaria Report [13], and they agree with results of similar studies from other SSA countries classified as highly endemic for malaria, such as Sierra Leone, Uganda and the Democratic Republic of the Congo [33–36].\nThe distinct decrease of OPD visits in the health facilities of northern Ghana in September 2020 could be explained by unusual heavy floods that started mid-August, which might have further complicated the access to health services. These floods have provided a favourable habitat for Anopheles mosquitoes, what could explain the observed increase of malaria cases in October 2020.\nMalaria cases seen in health facilities among pregnant women show a different trend. After a decline in April 2020, cases have rebounded rapidly in this population and reached even higher levels compared to previous years. The most likely explanation of such an opposite trend would be the early hesitancy of pregnant women to visit health facilities. This is probably due to the fear of getting infected with COVID-19, combined with initial disruptions of the provision of IPTp to women in antenatal care (ANC) services as well as the disruption of routine distribution of ITNs [37]. The disrupted access to and delivery of ANC services is likely to explain the malaria case trend in April. However, without IPTp and ITNs, more women were at risk for malaria thereafter, which can explain the subsequent rise in malaria cases over the following months. Also, many pregnant women probably have sought the missed ANC with subsequent malaria diagnosis after the initial movement restrictions were lifted.\nGhana had already achieved high levels of ITN coverage, and no ITN mass campaign was planned for 2020 [13]. However, the routine distribution of ITNs, which is usually done in health facilities during ANC sessions and in primary schools, needed to be adapted to the COVID-19 measures, which included school closures from March 2020 until January 2021 [38, 39]. Also the seasonal malaria chemoprevention intervention for children and the annual indoor residual spraying of insecticides, which both require physical contact between the health workers and the community, needed to be modified [40, 41]. As another consequence of the COVID-19 pandemic, the provision of rapid diagnostic tests for malaria was fragile, which may have led to under-diagnosis of cases [42]. The main explanation for the lower number of malaria cases seen in health facilities was limited access to health facilities – public transportations were unavailable or unaffordable, and health facilities were closed or only provided reduced services [43, 44]. This is supported by findings from a study from Rwanda which showed that health facility visits for malaria decreased while community health services for malaria increased [43]. Finally, reports of hesitancy to visit health facilities due to fear of getting infected with COVID-19 were common [37, 42]. Last but not least, the malaria health care worker capacities were limited due to frequent reassignments to the control of COVID-19, to stigmatization or absence following quarantine, or to the development of COVID-19 disease or even death [14, 39, 45].\nThis study has strengths and limitations. A strength of the study is that the data represent a whole year of follow-up into the pandemic, which provides a more comprehensive picture of the effects compared to the previous studies with much shorter study periods. 
Also, the subgroup analysis of children under the age of five and pregnant women allows for a more complete picture. A major limitation is that the surveillance system itself may probably have been affected by the pandemic, producing a bias in the reported numbers. Massive underreporting could have falsified the observed trends and our conclusions. Moreover, it is not clear if the quality of surveillance data is fully comparable during the five years observed. The data from the Northern Region of Ghana may also not be representative for other malaria endemic areas in SSA, thus, the study has a limited external validity. Absenteeism in health facilities by people with malaria symptoms that have switched to self-medication or traditional medicine or that could not afford reaching official health care during the pandemic could also have had an albeit unknown effect on the malaria figures [46]. Especially in the first months of the pandemic, many people may have used malaria medication off-label to prevent and treat COVID-19 what may also have impacted the malaria situation [13].", "This study shows that the COVID-19 pandemic has been associated with reduced overall outpatient visits and reduced malaria cases reported from northern Ghana’s health facilities. Further data and qualitative explanations from Ghana and other SSA countries and in particular data from community-based studies are needed to fully judge the impact of the pandemic on the malaria situation on the African continent." ]
[ null, null, null, null, null, "results", "discussion", "conclusion" ]
[ "COVID-19", "Pandemic", "Malaria", "Sub-Saharan Africa", "Ghana", "Northern Region", "Health information system", "Surveillance", "Morbidity", "Routine data" ]
Background: Malaria remains one of the leading causes of morbidity and mortality in sub-Saharan Africa (SSA). Globally, there have been 627.000 malaria related deaths in 2020, 12% more than in 2019; 68% of these additional deaths were attributed to indirect consequences of the COVID-19 pandemic [1]. In Ghana, malaria is responsible for 10% of the overall mortality and nearly one quarter of all under five childhood deaths [1, 2]. The 2015–2020 Ghana Strategic Action Plan aimed to reduce the burden of malaria by 75.0% [3], but the COVID-19 pandemic could halt or even reverse the declining trends. In 2020, malaria was the cause of more than one third of all OPD attendances [4]; moreover 17.6% of OPD visits of pregnant women were due to malaria [5]. The global spread of the coronavirus disease 2019 (COVID-19) was declared a Public Health Emergency of International Concern at the end of January 2020 [6]. Many African governments responded rapidly to this threat by implementing control measures even before first cases were detected in their countries, comprising border closures, movement restrictions, social distancing and school closures [7]. SSA accounted for only about 3.5% of the globally reported COVID-19 morbidity and mortality, mostly from the southern and northern rims of the continent, while being home to 17% of the global population [8]. This may be explained by factors such as a younger population, hotter climate, interferences with other infectious diseases, but especially lack of diagnostics and underreporting [9, 10]. Ghana was among the countries with the highest reported COVID-19 cases and deaths in western and central SSA, as of the end of 2020 [11]. COVID-19 vaccinations started in February 2021 but coverage in Ghana is still low with only 10% of the population fully vaccinated by February 2022 [12]. The socio-economic disruptions associated with the disease and the preventive measures present huge challenges for health systems and whole societies, especially in low- and middle income countries [13]. In the highly malaria-endemic African countries, the progress made in malaria control during the last two decades was feared to be reversed by the side effects of the COVID-19 pandemic [14, 15]. In the Ghanaian context, as no ITN mass campaigns were scheduled for 2020, the worst-case scenario presented in a modelling study by Weiss et al. could have been a decline in access to anti-malarial medication by 75%, resulting in an increase of malaria morbidity and mortality by 13% and 55%, respectively [14]. Overall, the predicted public health relevant effects of the COVID-19 pandemic on malaria include shared clinical disease manifestations leading to diagnostic challenges, shortages of anti-malarial medication, rapid diagnostic tests, preventive tools and personal protective equipment, decreasing quality of surveillance systems, and re-allocation of funds and professionals towards COVID-19 control activities [16]. This study aims to describe potential effects of the COVID-19 pandemic on malaria cases in health facilities in the Northern Region of Ghana – a highly malaria endemic region. The research hypothesis that “lower access to health care services in combination with impaired malaria surveillance systems may have led to a lower number of reported malaria cases in highly endemic countries” will be investigated [16]. This hypothesis leads to the specific research question if the COVID-19 pandemic has led to less reported malaria cases in northern Ghana in 2020. 
Methods: Study area Ghana, with its population of about 31 million, lies in western SSA and has a relatively well functioning health care system [17, 18]. The Northern Region, with its capital city Tamale, had a population of 1.9 million in 2020. The socio-economic situation of the Northern Region is below the national average of the country and the region has the highest mortality rate in children under the age of five years [19]. Malaria is highly endemic in northern Ghana with a seasonal transmission pattern that is strongest between July and November. ITNs are a major malaria prevention strategy in Ghana; in 2019, about 74% of households owned at least one ITN and 52% of all households had at least one ITN per two people [20]. According to the state of the nation’s health report 2018, the malaria prevalence in children under- five years in the Northern Region is 40%, the highest in Ghana [21]. Regarding the epidemiology of COVID-19, Ghana recorded the first cases in March 2020. The government responded immediately with the implementation of social gathering and travel restrictions as well as school closures. The country’s major cities were placed under partial lockdown soon after. This lockdown started on March 30, 2020, and lasted for two weeks. Schools were partially reopened on June 21, 2020, and borders were reopened to international airlines on September 21, 2020 [22]. As at end of March 2021, the country had a total of 90,583 confirmed cases and 743 deaths [23]. These cases were clustered around two major waves in March-September 2020 and January-March 2021. The first wave coincided partly with the rainy season in the Northern Region, the time when the majority of cases of malaria in children and pregnant women are recorded [24]. In Ghana, effects of the COVID-19 pandemic on malaria control interventions concerned the country’s stock of artemisinin-based combination therapy, the functioning of its insecticide-treated mosquito net (ITN) routine distribution, and the overall access to primary health care services and facilities [25]. Although the lockdown did not include the Northern Region directly, the other pandemic control measures including the ‘stay at home unless absolutely necessary‘ campaign, suspension of OPD services in many hospitals and the general anxiety among the population, led to reduction in antenatal and child welfare clinic attendance. This situation could have further affected malaria control measures as the antenatal and child welfare clinics are two major service delivery points where education on malaria prevention, intermittent preventive treatment in pregnancy (IPTp) services and distribution of ITNs to children and pregnant women are carried out [26]. Ghana, with its population of about 31 million, lies in western SSA and has a relatively well functioning health care system [17, 18]. The Northern Region, with its capital city Tamale, had a population of 1.9 million in 2020. The socio-economic situation of the Northern Region is below the national average of the country and the region has the highest mortality rate in children under the age of five years [19]. Malaria is highly endemic in northern Ghana with a seasonal transmission pattern that is strongest between July and November. ITNs are a major malaria prevention strategy in Ghana; in 2019, about 74% of households owned at least one ITN and 52% of all households had at least one ITN per two people [20]. 
According to the state of the nation’s health report 2018, the malaria prevalence in children under- five years in the Northern Region is 40%, the highest in Ghana [21]. Regarding the epidemiology of COVID-19, Ghana recorded the first cases in March 2020. The government responded immediately with the implementation of social gathering and travel restrictions as well as school closures. The country’s major cities were placed under partial lockdown soon after. This lockdown started on March 30, 2020, and lasted for two weeks. Schools were partially reopened on June 21, 2020, and borders were reopened to international airlines on September 21, 2020 [22]. As at end of March 2021, the country had a total of 90,583 confirmed cases and 743 deaths [23]. These cases were clustered around two major waves in March-September 2020 and January-March 2021. The first wave coincided partly with the rainy season in the Northern Region, the time when the majority of cases of malaria in children and pregnant women are recorded [24]. In Ghana, effects of the COVID-19 pandemic on malaria control interventions concerned the country’s stock of artemisinin-based combination therapy, the functioning of its insecticide-treated mosquito net (ITN) routine distribution, and the overall access to primary health care services and facilities [25]. Although the lockdown did not include the Northern Region directly, the other pandemic control measures including the ‘stay at home unless absolutely necessary‘ campaign, suspension of OPD services in many hospitals and the general anxiety among the population, led to reduction in antenatal and child welfare clinic attendance. This situation could have further affected malaria control measures as the antenatal and child welfare clinics are two major service delivery points where education on malaria prevention, intermittent preventive treatment in pregnancy (IPTp) services and distribution of ITNs to children and pregnant women are carried out [26]. Study area: Ghana, with its population of about 31 million, lies in western SSA and has a relatively well functioning health care system [17, 18]. The Northern Region, with its capital city Tamale, had a population of 1.9 million in 2020. The socio-economic situation of the Northern Region is below the national average of the country and the region has the highest mortality rate in children under the age of five years [19]. Malaria is highly endemic in northern Ghana with a seasonal transmission pattern that is strongest between July and November. ITNs are a major malaria prevention strategy in Ghana; in 2019, about 74% of households owned at least one ITN and 52% of all households had at least one ITN per two people [20]. According to the state of the nation’s health report 2018, the malaria prevalence in children under- five years in the Northern Region is 40%, the highest in Ghana [21]. Regarding the epidemiology of COVID-19, Ghana recorded the first cases in March 2020. The government responded immediately with the implementation of social gathering and travel restrictions as well as school closures. The country’s major cities were placed under partial lockdown soon after. This lockdown started on March 30, 2020, and lasted for two weeks. Schools were partially reopened on June 21, 2020, and borders were reopened to international airlines on September 21, 2020 [22]. As at end of March 2021, the country had a total of 90,583 confirmed cases and 743 deaths [23]. 
These cases were clustered around two major waves in March-September 2020 and January-March 2021. The first wave coincided partly with the rainy season in the Northern Region, the time when the majority of cases of malaria in children and pregnant women are recorded [24]. In Ghana, effects of the COVID-19 pandemic on malaria control interventions concerned the country’s stock of artemisinin-based combination therapy, the functioning of its insecticide-treated mosquito net (ITN) routine distribution, and the overall access to primary health care services and facilities [25]. Although the lockdown did not include the Northern Region directly, the other pandemic control measures including the ‘stay at home unless absolutely necessary‘ campaign, suspension of OPD services in many hospitals and the general anxiety among the population, led to reduction in antenatal and child welfare clinic attendance. This situation could have further affected malaria control measures as the antenatal and child welfare clinics are two major service delivery points where education on malaria prevention, intermittent preventive treatment in pregnancy (IPTp) services and distribution of ITNs to children and pregnant women are carried out [26]. Study design and data: This retrospective observational study uses monthly malaria morbidity data on the overall number of outpatients (interpreted as less severe cases) and inpatients (more severe cases). Additionally, all outpatient visits (including non-malaria related visits) were analysed. Subgroup analysis was performed for the two groups at high risk for severe malaria manifestations, children under five years of age and pregnant women [1]. Cases were extracted from the District Health Information Management System II (DHIMS2) on demographic and health parameters of northern Ghana from January 1, 2015, to December 31, 2020. This system was implemented in 2007 with an update in 2012 and has improved the data quality and completeness since [27]. Malaria diagnosis was based either on the results of rapid diagnostic tests or microscopy. Mid-year population estimates of the Northern Region of Ghana were also provided through the DHIMS2. Analysis: The data have been processed with Microsoft Excel Version 16.52 and analyzed with Stata IC Version 16 (StataCorp, College Station, TX, USA). Monthly rates of OPD visits and confirmed malaria cases for the year 2020 and as a comparison for the years 2015–2019 separately as well as mean rates with 95% confidence intervals (95% CI) have been calculated and plotted using population figures of the Northern Region of Ghana. Additionally, quarterly rates are presented; rates of 2020 versus the combined rates of 2015–2019 have been compared with the z-test. The data allowed analysing children under five years and pregnant women separately using the fraction of the under-five population (14% of the population) and the fraction of women between 15 and 45 years (23% of the population) as estimates of the respective population denominators [28]. Malaria deaths have not been analysed due to low quality of the mortality data sets. Results: Table 1 presents a brief description of the dataset. Altogether 5.8 million OPD visits were reported between 2015 and 2020; 39% of those were diagnosed with malaria. Of all malaria cases, 20% were children under the age of five years and 2% were pregnant women. 295,465 patients were hospitalized with diagnosed malaria, 56% of those were children under the age of five. 
Results: Table 1 presents a brief description of the dataset. Altogether, 5.8 million OPD visits were reported between 2015 and 2020; 39% of these were diagnosed as malaria. Of all malaria cases, 20% were children under the age of five years and 2% were pregnant women. A total of 295,465 patients were hospitalized with diagnosed malaria, 56% of them children under the age of five. The mean population of the Northern Region of Ghana over the years 2015 to 2020 was 1,842,701.

Table 1. Description of the dataset on outpatients and malaria patients recorded in northern Ghana health facilities during the years 2015–2020
Outpatient department visits:
  All OPD visits: 5,804,910 (100%)
  Malaria confirmed: 2,278,296 (39%)
  Malaria confirmed among children < 5 years: 454,779 (20%)
  Malaria confirmed among pregnant women: 46,693 (2%)
Hospital-admitted patients:
  Malaria confirmed: 295,465 (100%)
  Malaria confirmed among children < 5 years: 165,313 (56%)
Mean mid-year population:
  Total population: 1,842,701 (100%)
  Children < 5 years: 257,978 (14%)
  Women aged 15 to 45: 423,821 (23%)

Figure 1 presents the case rates of the different outcomes reported from health facilities in the Northern Region of Ghana for the years 2015–2020 separately, as well as a combined rate for the period 2015 to 2019. All OPD visits (Fig. 1a), including non-malaria patients, declined in March/April 2020 (the months in which COVID-19 control measures were implemented in the country) and stayed low during the following months. After a further decrease in September 2020, the numbers increased again in October 2020 to the levels observed in previous years. This trend is similar but less pronounced for malaria OPD visits (Fig. 1b). The decline in access to OPD malaria care is strongest in children under the age of five years, especially from June to September 2020 (Fig. 1c). In pregnant women, however, a different trend can be observed, with an increase in malaria cases starting in June and exceeding previous-year levels (Fig. 1d). The 2020 numbers of hospital-admitted malaria patients (March to October) stayed consistently below those of previous years (Fig. 1e); in accordance with the OPD figures, this trend is more pronounced in children under five years (Fig. 1f).

Fig. 1. Reported monthly rates (A: all outpatient department (OPD) visits per 100,000 of the general population; B: OPD visits with confirmed malaria per 100,000 of the general population; C: OPD visits with malaria in children under the age of 5 per 100,000 of all children under 5; D: OPD visits with malaria in pregnant women per 100,000 of all women aged 15–45; E: hospital-admitted patients with malaria per 100,000 of the general population; F: hospital-admitted malaria in children under 5 per 100,000 of all children under 5) in health facilities of the Northern Region, Ghana, for the years 2015–2020.

The rate ratios (RR), depicting quarterly measures comparing the rates of 2020 to the combined rates of the years 2015 to 2019, are presented in Table 2.
General OPD visits were significantly reduced in the 2nd and 3rd quarters of 2020 compared with previous years, by up to 27%, returning to previous levels at the end of the year. Overall malaria cases decreased significantly only in the 3rd quarter of 2020, by 26%, and increased by 27% in the 4th quarter. Ambulatory malaria cases in children under five showed stronger reductions compared with previous years, by 43% in the 3rd quarter of 2020, and increased by 20% in the 4th quarter. These developments are not mirrored in pregnant women with malaria infections, where no major reductions were observed during the first quarters of 2020 compared with previous years, but where an earlier and stronger increase reached 48% in the 4th quarter. The situation is slightly different in patients admitted to hospital with malaria. The reductions in the 2nd and 3rd quarters of 2020 are more pronounced (46% and 43%) and the numbers recover at the end of the year but do not exceed previous levels. Again, as for the outpatient population, this trend is more pronounced in children under five years of age (54% in the 2nd and 57% in the 3rd quarter, per Table 2).

Table 2. Quarterly rate ratios (RR), rate differences (ΔR) and p-values comparing the rates of 2020 with the combined rates of the years 2015 to 2019 for outpatients and malaria patients in health facilities of northern Ghana
Outpatient department visits
  All visits: Q1 RR 0.93, ΔR −269.0, p = 0.202 | Q2 RR 0.80, ΔR −745.3, p < 0.001 | Q3 RR 0.73, ΔR −1448.2, p < 0.001 | Q4 RR 1.03, ΔR 125.4, p = 0.283
  All malaria: Q1 RR 1.04, ΔR 43.0, p = 0.679 | Q2 RR 0.91, ΔR −110.8, p = 0.300 | Q3 RR 0.74, ΔR −654.3, p = 0.002 | Q4 RR 1.27, ΔR 570.8, p < 0.001
  Malaria, children < 5 years: Q1 RR 0.96, ΔR −60.6, p = 0.697 | Q2 RR 0.81, ΔR −310.9, p = 0.021 | Q3 RR 0.57, ΔR −1614.5, p < 0.001 | Q4 RR 1.19, ΔR 589.5, p = 0.005
  Malaria, pregnant women: Q1 RR 0.87, ΔR −14.2, p = 0.269 | Q2 RR 0.96, ΔR −4.4, p = 0.709 | Q3 RR 1.14, ΔR 27.6, p = 0.127 | Q4 RR 1.48, ΔR 80.7, p < 0.001
Inpatient department visits
  All malaria: Q1 RR 0.79, ΔR −36.8, p = 0.139 | Q2 RR 0.54, ΔR −65.1, p = 0.008 | Q3 RR 0.57, ΔR −139.7, p < 0.001 | Q4 RR 0.94, ΔR −16.5, p = 0.526
  Malaria, children < 5 years: Q1 RR 0.74, ΔR −186.7, p = 0.035 | Q2 RR 0.46, ΔR −300.7, p = 0.020 | Q3 RR 0.43, ΔR −764.5, p < 0.001 | Q4 RR 0.82, ΔR −224.5, p = 0.067

Discussion: The COVID-19 pandemic has major consequences for the functioning of health services, with direct and indirect effects on the burden of various diseases [29, 30]. In this paper, effects of the pandemic on malaria case numbers in health facilities of northern Ghana, a region highly endemic for malaria, are described. In northern Ghana, a slight but significant decline was observed in malaria cases during the 2nd and 3rd quarters of 2020. This decline is all the more notable because the period coincides with the rainy season in northern Ghana (May–November), when the majority of malaria cases are usually recorded. Cases only rebounded to the average levels of previous years at the end of 2020. This pattern was visible in both outpatient and inpatient settings, but was more pronounced in the hospitalized population.
The same applies to children and adults: reductions were observed in both groups but were more marked in children under five years of age. The marked decline in March/April 2020 can be explained by the extensive movement and gathering restrictions and the early advice to stay at home with COVID-19-like symptoms unless they became severe. Such measures have likely fostered hesitancy to visit health facilities during the pandemic, which in turn poses a major risk for developing severe malaria [13, 31]. The decline observed in March/April 2020 was even more remarkable in inpatients. This does not support our initial hypothesis that, in cases of more severe malaria manifestation, patients were still brought to health facilities and hospitalized despite the pandemic. The findings from this analysis support the hypothesis that the reported malaria burden in health facilities would shrink due to the effects of the COVID-19 pandemic in highly malaria-endemic countries [32]. They also support the results of the WHO World Malaria Report [13] and agree with results of similar studies from other SSA countries classified as highly endemic for malaria, such as Sierra Leone, Uganda and the Democratic Republic of the Congo [33–36]. The distinct decrease in OPD visits in the health facilities of northern Ghana in September 2020 could be explained by unusually heavy floods that started in mid-August, which might have further complicated access to health services. These floods provided a favourable habitat for Anopheles mosquitoes, which could explain the observed increase in malaria cases in October 2020. Malaria cases seen in health facilities among pregnant women show a different trend. After a decline in April 2020, cases rebounded rapidly in this population and reached even higher levels than in previous years. The most likely explanation for such an opposite trend is the early hesitancy of pregnant women to visit health facilities. This is probably due to the fear of getting infected with COVID-19, combined with initial disruptions in the provision of IPTp to women in antenatal care (ANC) services as well as the disruption of the routine distribution of ITNs [37]. The disrupted access to and delivery of ANC services is likely to explain the malaria case trend in April. However, without IPTp and ITNs, more women were at risk of malaria thereafter, which can explain the subsequent rise in malaria cases over the following months. In addition, many pregnant women probably sought the missed ANC, with subsequent malaria diagnosis, after the initial movement restrictions were lifted. Ghana had already achieved high levels of ITN coverage, and no ITN mass campaign was planned for 2020 [13]. However, the routine distribution of ITNs, which is usually done in health facilities during ANC sessions and in primary schools, needed to be adapted to the COVID-19 measures, which included school closures from March 2020 until January 2021 [38, 39]. The seasonal malaria chemoprevention intervention for children and the annual indoor residual spraying of insecticides, which both require physical contact between health workers and the community, also needed to be modified [40, 41]. As another consequence of the COVID-19 pandemic, the supply of rapid diagnostic tests for malaria was fragile, which may have led to under-diagnosis of cases [42].
The main explanation for the lower number of malaria cases seen in health facilities was limited access to them: public transport was unavailable or unaffordable, and health facilities were closed or provided only reduced services [43, 44]. This is supported by findings from a study in Rwanda, which showed that health facility visits for malaria decreased while community health services for malaria increased [43]. Furthermore, reports of hesitancy to visit health facilities for fear of getting infected with COVID-19 were common [37, 42]. Finally, malaria health care worker capacity was limited by frequent reassignments to COVID-19 control, by stigmatization or absence following quarantine, or by COVID-19 disease or even death [14, 39, 45]. This study has strengths and limitations. A strength is that the data represent a whole year of follow-up into the pandemic, which provides a more comprehensive picture of the effects than previous studies with much shorter study periods. The subgroup analysis of children under the age of five and pregnant women also allows for a more complete picture. A major limitation is that the surveillance system itself may have been affected by the pandemic, biasing the reported numbers. Massive underreporting could have distorted the observed trends and our conclusions. Moreover, it is not clear whether the quality of the surveillance data is fully comparable across the five years observed. The data from the Northern Region of Ghana may also not be representative of other malaria-endemic areas in SSA; thus, the study has limited external validity. Absenteeism from health facilities by people with malaria symptoms who switched to self-medication or traditional medicine, or who could not afford to reach official health care during the pandemic, could also have had an unknown effect on the malaria figures [46]. Especially in the first months of the pandemic, many people may have used malaria medication off-label to prevent and treat COVID-19, which may also have affected the malaria situation [13]. Conclusions: This study shows that the COVID-19 pandemic has been associated with reduced overall outpatient visits and reduced malaria cases reported from northern Ghana's health facilities. Further data and qualitative explanations from Ghana and other SSA countries, and in particular data from community-based studies, are needed to fully judge the impact of the pandemic on the malaria situation on the African continent.
Background: The COVID-19 pandemic and its collateral damage severely impact health systems globally and risk worsening the malaria situation in endemic countries. Malaria is a leading cause of morbidity and mortality in Ghana. This study aims to describe the potential effects of the COVID-19 pandemic on malaria cases observed in health facilities in the Northern Region of Ghana. Methods: Monthly routine data from the District Health Information Management System II (DHIMS2) of the Northern Region of Ghana were analysed. Overall outpatient department (OPD) visits and malaria case rates from the years 2015–2019 were compared with the corresponding data of the year 2020. Results: Compared with the corresponding periods of the years 2015–2019, overall visits and malaria cases in paediatric and adult OPDs in northern Ghana decreased in March and April 2020, when major movement and social restrictions were implemented in response to the pandemic. Cases rebounded slightly later in 2020 but stayed below the average of the previous years. Malaria data from inpatient departments showed a similar but more pronounced trend compared with OPDs. In pregnant women, however, malaria cases in OPDs increased after the first COVID-19 wave. Conclusions: The findings from this study show that the COVID-19 pandemic affected the malaria burden in health facilities of northern Ghana, with declines in inpatient and outpatient rates except among pregnant women, who may have experienced reduced access to insecticide-treated nets and intermittent preventive malaria treatment in pregnancy, resulting in subsequent higher malaria morbidity. Further data, particularly from community-based studies and ideally complemented by qualitative research, are needed to fully determine the impact of the pandemic on the malaria situation in Africa.
Background: Malaria remains one of the leading causes of morbidity and mortality in sub-Saharan Africa (SSA). Globally, there were 627,000 malaria-related deaths in 2020, 12% more than in 2019; 68% of these additional deaths were attributed to indirect consequences of the COVID-19 pandemic [1]. In Ghana, malaria is responsible for 10% of overall mortality and nearly one quarter of all under-five childhood deaths [1, 2]. The 2015–2020 Ghana Strategic Action Plan aimed to reduce the burden of malaria by 75.0% [3], but the COVID-19 pandemic could halt or even reverse the declining trends. In 2020, malaria was the cause of more than one third of all OPD attendances [4]; moreover, 17.6% of OPD visits by pregnant women were due to malaria [5]. The global spread of the coronavirus disease 2019 (COVID-19) was declared a Public Health Emergency of International Concern at the end of January 2020 [6]. Many African governments responded rapidly to this threat by implementing control measures even before the first cases were detected in their countries, comprising border closures, movement restrictions, social distancing and school closures [7]. SSA accounted for only about 3.5% of the globally reported COVID-19 morbidity and mortality, mostly from the southern and northern rims of the continent, while being home to 17% of the global population [8]. This may be explained by factors such as a younger population, a hotter climate and interference from other infectious diseases, but especially by a lack of diagnostics and by underreporting [9, 10]. Ghana was among the countries with the highest reported COVID-19 cases and deaths in western and central SSA as of the end of 2020 [11]. COVID-19 vaccinations started in February 2021, but coverage in Ghana is still low, with only 10% of the population fully vaccinated by February 2022 [12]. The socio-economic disruptions associated with the disease and the preventive measures present huge challenges for health systems and whole societies, especially in low- and middle-income countries [13]. In the highly malaria-endemic African countries, the progress made in malaria control during the last two decades was feared to be reversed by the side effects of the COVID-19 pandemic [14, 15]. In the Ghanaian context, as no ITN mass campaigns were scheduled for 2020, the worst-case scenario presented in a modelling study by Weiss et al. was a decline in access to anti-malarial medication by 75%, resulting in increases of malaria morbidity and mortality by 13% and 55%, respectively [14]. Overall, the predicted public-health-relevant effects of the COVID-19 pandemic on malaria include shared clinical disease manifestations leading to diagnostic challenges; shortages of anti-malarial medication, rapid diagnostic tests, preventive tools and personal protective equipment; decreasing quality of surveillance systems; and re-allocation of funds and professionals towards COVID-19 control activities [16]. This study aims to describe potential effects of the COVID-19 pandemic on malaria cases in health facilities in the Northern Region of Ghana, a highly malaria-endemic region. The research hypothesis that "lower access to health care services in combination with impaired malaria surveillance systems may have led to a lower number of reported malaria cases in highly endemic countries" is investigated [16]. This hypothesis leads to the specific research question of whether the COVID-19 pandemic led to fewer reported malaria cases in northern Ghana in 2020.
Conclusions: This study shows that the COVID-19 pandemic has been associated with reduced overall outpatient visits and reduced malaria cases reported from northern Ghana’s health facilities. Further data and qualitative explanations from Ghana and other SSA countries and in particular data from community-based studies are needed to fully judge the impact of the pandemic on the malaria situation on the African continent.
Background: The COVID-19 pandemic and its collateral damage severely impact health systems globally and risk worsening the malaria situation in endemic countries. Malaria is a leading cause of morbidity and mortality in Ghana. This study aims to describe the potential effects of the COVID-19 pandemic on malaria cases observed in health facilities in the Northern Region of Ghana. Methods: Monthly routine data from the District Health Information Management System II (DHIMS2) of the Northern Region of Ghana were analysed. Overall outpatient department (OPD) visits and malaria case rates from the years 2015–2019 were compared with the corresponding data of the year 2020. Results: Compared with the corresponding periods of the years 2015–2019, overall visits and malaria cases in paediatric and adult OPDs in northern Ghana decreased in March and April 2020, when major movement and social restrictions were implemented in response to the pandemic. Cases rebounded slightly later in 2020 but stayed below the average of the previous years. Malaria data from inpatient departments showed a similar but more pronounced trend compared with OPDs. In pregnant women, however, malaria cases in OPDs increased after the first COVID-19 wave. Conclusions: The findings from this study show that the COVID-19 pandemic affected the malaria burden in health facilities of northern Ghana, with declines in inpatient and outpatient rates except among pregnant women, who may have experienced reduced access to insecticide-treated nets and intermittent preventive malaria treatment in pregnancy, resulting in subsequent higher malaria morbidity. Further data, particularly from community-based studies and ideally complemented by qualitative research, are needed to fully determine the impact of the pandemic on the malaria situation in Africa.
5,145
316
[ 661, 1027, 512, 168, 178 ]
8
[ "malaria", "2020", "health", "ghana", "northern", "children", "cases", "19", "years", "population" ]
[ "ghana representative malaria", "impact pandemic malaria", "pandemic malaria control", "ghana 21 epidemiology", "ghana highly malaria" ]
null
[CONTENT] COVID-19 | Pandemic | Malaria | Sub-Saharan Africa | Ghana | Northern Region | Health information system | Surveillance | Morbidity | Routine data [SUMMARY]
null
[CONTENT] COVID-19 | Pandemic | Malaria | Sub-Saharan Africa | Ghana | Northern Region | Health information system | Surveillance | Morbidity | Routine data [SUMMARY]
[CONTENT] COVID-19 | Pandemic | Malaria | Sub-Saharan Africa | Ghana | Northern Region | Health information system | Surveillance | Morbidity | Routine data [SUMMARY]
[CONTENT] COVID-19 | Pandemic | Malaria | Sub-Saharan Africa | Ghana | Northern Region | Health information system | Surveillance | Morbidity | Routine data [SUMMARY]
[CONTENT] COVID-19 | Pandemic | Malaria | Sub-Saharan Africa | Ghana | Northern Region | Health information system | Surveillance | Morbidity | Routine data [SUMMARY]
[CONTENT] Adult | COVID-19 | Child | Female | Ghana | Health Facilities | Humans | Malaria | Pandemics | Pregnancy | Retrospective Studies [SUMMARY]
null
[CONTENT] Adult | COVID-19 | Child | Female | Ghana | Health Facilities | Humans | Malaria | Pandemics | Pregnancy | Retrospective Studies [SUMMARY]
[CONTENT] Adult | COVID-19 | Child | Female | Ghana | Health Facilities | Humans | Malaria | Pandemics | Pregnancy | Retrospective Studies [SUMMARY]
[CONTENT] Adult | COVID-19 | Child | Female | Ghana | Health Facilities | Humans | Malaria | Pandemics | Pregnancy | Retrospective Studies [SUMMARY]
[CONTENT] Adult | COVID-19 | Child | Female | Ghana | Health Facilities | Humans | Malaria | Pandemics | Pregnancy | Retrospective Studies [SUMMARY]
[CONTENT] ghana representative malaria | impact pandemic malaria | pandemic malaria control | ghana 21 epidemiology | ghana highly malaria [SUMMARY]
null
[CONTENT] ghana representative malaria | impact pandemic malaria | pandemic malaria control | ghana 21 epidemiology | ghana highly malaria [SUMMARY]
[CONTENT] ghana representative malaria | impact pandemic malaria | pandemic malaria control | ghana 21 epidemiology | ghana highly malaria [SUMMARY]
[CONTENT] ghana representative malaria | impact pandemic malaria | pandemic malaria control | ghana 21 epidemiology | ghana highly malaria [SUMMARY]
[CONTENT] ghana representative malaria | impact pandemic malaria | pandemic malaria control | ghana 21 epidemiology | ghana highly malaria [SUMMARY]
[CONTENT] malaria | 2020 | health | ghana | northern | children | cases | 19 | years | population [SUMMARY]
null
[CONTENT] malaria | 2020 | health | ghana | northern | children | cases | 19 | years | population [SUMMARY]
[CONTENT] malaria | 2020 | health | ghana | northern | children | cases | 19 | years | population [SUMMARY]
[CONTENT] malaria | 2020 | health | ghana | northern | children | cases | 19 | years | population [SUMMARY]
[CONTENT] malaria | 2020 | health | ghana | northern | children | cases | 19 | years | population [SUMMARY]
[CONTENT] malaria | 19 | covid 19 | covid | countries | pandemic | 19 pandemic | covid 19 pandemic | 2020 | systems [SUMMARY]
null
[CONTENT] malaria | 100 000 | 100 | 000 | children | patients | years | 2020 | opd visits | rates [SUMMARY]
[CONTENT] reduced | data | pandemic | impact | ghana health facilities data | ssa countries particular | ssa countries particular data | reported northern | reported northern ghana | reported northern ghana health [SUMMARY]
[CONTENT] malaria | 2020 | health | ghana | 19 | cases | northern | pandemic | children | covid 19 [SUMMARY]
[CONTENT] malaria | 2020 | health | ghana | 19 | cases | northern | pandemic | children | covid 19 [SUMMARY]
[CONTENT] COVID-19 ||| Malaria | Ghana ||| COVID-19 | the Northern Region | Ghana [SUMMARY]
null
[CONTENT] Ghana | March | April 2020 ||| 2020 | the previous years ||| Malaria ||| first | COVID-19 [SUMMARY]
[CONTENT] COVID-19 | Ghana ||| ||| Africa [SUMMARY]
[CONTENT] COVID-19 ||| Malaria | Ghana ||| COVID-19 | the Northern Region | Ghana ||| Monthly | the District Health Information Management System II | the Northern Region | Ghana ||| the years 2015-2019 | the year 2020 ||| Ghana | March | April 2020 ||| 2020 | the previous years ||| Malaria ||| first | COVID-19 ||| COVID-19 | Ghana ||| ||| Africa [SUMMARY]
[CONTENT] COVID-19 ||| Malaria | Ghana ||| COVID-19 | the Northern Region | Ghana ||| Monthly | the District Health Information Management System II | the Northern Region | Ghana ||| the years 2015-2019 | the year 2020 ||| Ghana | March | April 2020 ||| 2020 | the previous years ||| Malaria ||| first | COVID-19 ||| COVID-19 | Ghana ||| ||| Africa [SUMMARY]
Combined Treatment With Carotid Endoarterectomy and Coronary Artery Bypass Grafting: A Single-Institutional Experience in 222 Patients.
35499500
Carotid atherosclerotic disease is a known independent risk factor for postoperative stroke after coronary artery bypass grafting (CABG). The best management of concomitant coronary artery disease and carotid artery disease remains debated. Current strategies include simultaneous carotid endarterectomy (CEA) and CABG, staged CEA followed by CABG, staged CABG followed by CEA, staged transfemoral carotid artery stenting (TF-CAS) followed by CABG, simultaneous TF-CAS and CABG, and transcarotid artery stenting.
INTRODUCTION
We report our experience based on a cohort of 222 patients undergoing combined CEA and CABG surgery who came under our observation from 2004 to 2020. All patients with >70% carotid stenosis and severe multivessel or common truncal coronary artery disease underwent combined CEA and CABG surgery at our institution. 30% of patients had previous remote neurological symptoms or a cerebral CT scan with ischemic lesions. Patients with carotid stenosis >70%, either asymptomatic or symptomatic, underwent CT scanning without contrast media to assess ischemic brain injury and, in some cases, if necessary, CT angiography of the neck and intracranial vessels.
METHODS
The overall perioperative mortality rate was 4.1% (9/222 patients). Two patients (.9%) had a periprocedural ipsilateral transient ischemic attack (TIA), which completely resolved by the second postoperative day. Two patients (.9%) had an ipsilateral stroke, while 7 patients (3.2%) had a stroke of the contralateral brain hemisphere. Two patients (.9%) were affected by periprocedural coma caused by cerebral hypoperfusion due to perioperative heart failure. There were no statistically significant differences between patients operated on with extracorporeal circulation (ECC) and off-pump patients in the onset of perioperative stroke.
RESULTS
Our experience indicates that combined surgical treatment with CEA and CABG, possibly off-pump, is a feasible procedure, able to minimize the risk of postoperative stroke and cognitive deficits.
CONCLUSION
[ "Carotid Artery Diseases", "Carotid Stenosis", "Coronary Artery Bypass", "Coronary Artery Disease", "Endarterectomy, Carotid", "Humans", "Stents", "Stroke", "Treatment Outcome" ]
10233500
Introduction
Postoperative stroke is one of the most important complications after coronary artery bypass grafting (CABG) in patients with severe coronary disease. The rate of post-CABG stroke ranges between .5% and 7%.1,2 The etiology of post-CABG stroke is multifactorial. Carotid atherosclerotic disease is a known independent risk factor for postoperative stroke after coronary revascularization.3,4 The risk of stroke after CABG surgery is closely related to the degree of carotid stenosis.5,6 However, the best management of concomitant coronary artery disease and carotid artery disease remains controversial. Current treatment options include simultaneous carotid endarterectomy (CEA) and CABG, staged CEA followed by CABG, staged CABG followed by CEA, staged transfemoral carotid artery stenting (TF-CAS) and CABG, and simultaneous TF-CAS and CABG. None of these procedures has been universally accepted. We report our experience in a large group of patients suffering from severe carotid stenosis and concomitant severe coronary artery disease treated with a combined approach of carotid CEA and CABG.
null
null
Results
The patients undergoing combined CEA + CABG surgery underwent coronary revascularization either with ECC or off-pump: 150 patients (67.5%) underwent CABG using ECC, while 72 (32.5%) were done off-pump. The overall perioperative mortality rate was 4.1% (n = 9 patients). Two patients (.9%) had a periprocedural ipsilateral TIA, which completely resolved by the second postoperative day. Two patients (.9%) had an ipsilateral stroke, while 7 patients had a stroke of the contralateral cerebral hemisphere. Two patients (.9%) suffered periprocedural coma caused by cerebral hypoperfusion due to perioperative heart failure (Table 3). No patient required re-operation for neck hematoma after CEA, and only 3 patients (1.4%) underwent re-exploration for bleeding following CABG. There were no statistically significant differences in the incidence of stroke between ECC and off-pump CABG patients. The 2 perioperative strokes ipsilateral to the CEA occurred with one technique each, and the same was true for the 5 strokes contralateral to the carotid CEA (3 in patients with ECC and 2 in off-pump patients).

Table 3. 30-day perioperative complications in 222 combined cardiac and carotid procedures
  Death: 9 (4.1%)
  Myocardial infarction: 13 (5.9%)
  Stroke ipsilateral to CEA: 2 (.9%)
  TIA ipsilateral to carotid CEA: 2 (.9%)
  Stroke contralateral to carotid CEA: 5 (2.3%)
  Hypoxic coma: 2 (.9%)
  Transient myocardial ischemia: 32 (14.4%)
  Transient rise of creatinine > 1.5 mg/dl: 54 (24.3%)
  Surgical reoperation for bleeding (from median sternotomy): 3 (1.4%)
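As a reading aid for Table 3, the short Python sketch below recomputes each reported proportion from the raw counts out of 222 procedures and adds an approximate 95% Wilson confidence interval. The confidence intervals are an illustration added here; they are not reported in the original study.

import math

N = 222  # combined CEA + CABG procedures in the series

# 30-day complication counts reported in Table 3
complications = {
    "Death": 9,
    "Myocardial infarction": 13,
    "Stroke ipsilateral to CEA": 2,
    "TIA ipsilateral to CEA": 2,
    "Stroke contralateral to CEA": 5,
    "Hypoxic coma": 2,
}

def wilson_ci(k: int, n: int, z: float = 1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

for name, k in complications.items():
    lo, hi = wilson_ci(k, N)
    print(f"{name}: {k}/{N} = {k/N:.1%} (95% CI {lo:.1%}-{hi:.1%})")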
Conclusions
In conclusion, our experience indicates that concomitant CEA + CABG treatment is safe and effective in patients with severe multivessel coronary artery disease and symptomatic severe carotid stenosis.
[ "Surgical Treatment" ]
[ "All procedures were performed under general anesthesia with full invasive\nmonitoring, before onset of cardiopulmonary bypass (CPB). In no cases was CEA\nperformed using loco-regional anesthesia. 72 (32.5%) patients underwent off-pump\nbypass. Neurological assessment was monitored with near infrared refractory\nspectroscopy and stump pressure measurement with 40 mmHg threshold. In\nparticular, the intra-operative neuromonitoring was performed by the INVOS™\nsystem (Medtronic, Dublin, Ireland), so as to provide a continuous non-invasive\nmeasurement of cerebral oxygen saturation and a reliable indication of changes\nin cerebral perfusion.\nSurgical dissection of the carotid arteries was usually completed while cardiac\nsurgeons harvested the saphenous vein. All patients were anticoagulated with\nheparin 5000 IU before the carotid arteries were cross-clamped. During the CEA,\ncarotid shunting was performed in 200 patients (90%) and all reconstructions\nwere performed with bovine pericardium patch. After CEA, the neck wound was\nusually left open to identify any suture bleeding during cardiac surgery due to\nanticoagulation with heparin. In particular, in our experience, in all patients\nundergoing synchronous CEA and CABG, the ECC was performed lowering brain\ntemperature by 2-5°C during ischemia (mild intra-ischemic hypothermia). This\nprocedure was found the most efficacious neuro-protective strategy because it\nwas associated to a reduction in the incidence and severity of cognitive deficits.\n7\n The intra-operative anticoagulation status was managed using a\nthromboelastograph (Haemonetics®, Boston, USA). After CABG and removal of the\nECC cannulae, the heparin was reversed with protamine sulphate. The cervical\nwound was closed at the completion of the cardiac surgery." ]
[ null ]
[ "Introduction", "Materials and Methods", "Surgical Treatment", "Results", "Discussion", "Conclusions" ]
[ "Postoperative stroke is one the most important complication after coronary artery\ngrafting (CABG) in patients with severe coronary disease. The rate of post-CABG\nstroke range between .5% and 7%.1,2 The etiology of post-CABG\nstroke is multifactorial. Carotid atherosclerotic disease is a known independent\nrisk factor of post operative stroke after coronary revascularization.3,4 The risk of stroke after CABG\nsurgery has been closely related to the degree of carotid stenosis.5,\n6\n However, the best management of concomitant coronary artery disease and\ncarotid artery disease remains controversial. Current treatment options include\nsimultaneous carotid endoarterectomy (CEA) and CABG, staged CEA followed by CABG,\nstaged CABG followed CEA, staged transfemoral carotid artery stenting (TF-CAS) and\nCABG, and simultaneous TF-CAS and CABG. None of these procedures have been\nuniversally accepted.\nWe report our experience based on a large group of patients suffering from severe\ncarotid stenosis and concomitant severe coronary artery disease through a combined\napproach of carotid CEA and CABG.", "From April 2004 to December 2020 a total of 2332 patients underwent CABG in our\ndepartment. Of these, 222 patients (9.5%) presented a severe stenosis >70% and\nsevere multivessel or common truncal coronary artery disease, and underwent\nCABG+CEA. The percentage of patients treated with combined CEA/CABG did not change\nover time and was approximately 13-15 patients per years. Symptomatic patients with\nmoderate (50-69%) stenoses underwent only CABG. The characteristics of our\npopulation are reported in Table 1. 155 patients (70%) were asymptomatic and presented no findings\nat brain CT imaging; the remaining 67 patients (30%) presented previous neurological\nsymptoms and ischemic brain lesions at brain CT imaging. CT findings are defined as\nchronic infarct or lacunar infarct or ipsilateral hemispheric infarct. Before CABG\nsurgery all patients underwent ultrasonography of the neck vessels. Patients with\ncarotid stenosis greater than 70%, either asymptomatic or symptomatic, underwent CT\nscan without contrast media to assess ischemic brain injury, and in some cases, if\nnecessary, CT angiography of the neck and intracranial vessels. 
The preoperative\ncardiac and neurological clinical assessment is shown in Table 2.Table 1.Baseline Risk-factors in patients undergoing combined carotid-CABG\nsurgery.Patients222Sex M/F172/50Age average ± SD68.9 ± 10Hypertension209 (94.1%)Dislipidemia88 (39.6%)Diabetes132 (59.5%)Chronic renal failure33 (14.9%)Peripheral arterial occlusive disease66 (29.7%)Chronic pulmonary obstructive disease149 (67.1%)Table 2.Preoperative Cardiac and Neurological evaluation in 222 pts undergoing\ncombined surgery.N°%Cardiac history • Myocardical infarction13259.5 • Unstable angina (III-IV CCS)18985.1 • Chronic stable angina (I-II CCS)3314.9Neurological history • Asymptomatic15569.8 • Stroke115.0 • TIAs5625.2CT evidence of ipsilateral stroke • Positive6730.2 • Negative15569.8Carotid stenosis on ultrasound • Ipsilateral >70% and contralateral\n>50%5625.2 • Ipsilateral >70% and contralateral\n>70%2310.4 • Ipsilateraal >70% and contralateral\nocclusion115.0 • >70% stenosis carotid artery\nunilateral13360.0\nBaseline Risk-factors in patients undergoing combined carotid-CABG\nsurgery.\nPreoperative Cardiac and Neurological evaluation in 222 pts undergoing\ncombined surgery.\nSurgical Treatment All procedures were performed under general anesthesia with full invasive\nmonitoring, before onset of cardiopulmonary bypass (CPB). In no cases was CEA\nperformed using loco-regional anesthesia. 72 (32.5%) patients underwent off-pump\nbypass. Neurological assessment was monitored with near infrared refractory\nspectroscopy and stump pressure measurement with 40 mmHg threshold. In\nparticular, the intra-operative neuromonitoring was performed by the INVOS™\nsystem (Medtronic, Dublin, Ireland), so as to provide a continuous non-invasive\nmeasurement of cerebral oxygen saturation and a reliable indication of changes\nin cerebral perfusion.\nSurgical dissection of the carotid arteries was usually completed while cardiac\nsurgeons harvested the saphenous vein. All patients were anticoagulated with\nheparin 5000 IU before the carotid arteries were cross-clamped. During the CEA,\ncarotid shunting was performed in 200 patients (90%) and all reconstructions\nwere performed with bovine pericardium patch. After CEA, the neck wound was\nusually left open to identify any suture bleeding during cardiac surgery due to\nanticoagulation with heparin. In particular, in our experience, in all patients\nundergoing synchronous CEA and CABG, the ECC was performed lowering brain\ntemperature by 2-5°C during ischemia (mild intra-ischemic hypothermia). This\nprocedure was found the most efficacious neuro-protective strategy because it\nwas associated to a reduction in the incidence and severity of cognitive deficits.\n7\n The intra-operative anticoagulation status was managed using a\nthromboelastograph (Haemonetics®, Boston, USA). After CABG and removal of the\nECC cannulae, the heparin was reversed with protamine sulphate. The cervical\nwound was closed at the completion of the cardiac surgery.\nAll procedures were performed under general anesthesia with full invasive\nmonitoring, before onset of cardiopulmonary bypass (CPB). In no cases was CEA\nperformed using loco-regional anesthesia. 72 (32.5%) patients underwent off-pump\nbypass. Neurological assessment was monitored with near infrared refractory\nspectroscopy and stump pressure measurement with 40 mmHg threshold. 
In\nparticular, the intra-operative neuromonitoring was performed by the INVOS™\nsystem (Medtronic, Dublin, Ireland), so as to provide a continuous non-invasive\nmeasurement of cerebral oxygen saturation and a reliable indication of changes\nin cerebral perfusion.\nSurgical dissection of the carotid arteries was usually completed while cardiac\nsurgeons harvested the saphenous vein. All patients were anticoagulated with\nheparin 5000 IU before the carotid arteries were cross-clamped. During the CEA,\ncarotid shunting was performed in 200 patients (90%) and all reconstructions\nwere performed with bovine pericardium patch. After CEA, the neck wound was\nusually left open to identify any suture bleeding during cardiac surgery due to\nanticoagulation with heparin. In particular, in our experience, in all patients\nundergoing synchronous CEA and CABG, the ECC was performed lowering brain\ntemperature by 2-5°C during ischemia (mild intra-ischemic hypothermia). This\nprocedure was found the most efficacious neuro-protective strategy because it\nwas associated to a reduction in the incidence and severity of cognitive deficits.\n7\n The intra-operative anticoagulation status was managed using a\nthromboelastograph (Haemonetics®, Boston, USA). After CABG and removal of the\nECC cannulae, the heparin was reversed with protamine sulphate. The cervical\nwound was closed at the completion of the cardiac surgery.", "All procedures were performed under general anesthesia with full invasive\nmonitoring, before onset of cardiopulmonary bypass (CPB). In no cases was CEA\nperformed using loco-regional anesthesia. 72 (32.5%) patients underwent off-pump\nbypass. Neurological assessment was monitored with near infrared refractory\nspectroscopy and stump pressure measurement with 40 mmHg threshold. In\nparticular, the intra-operative neuromonitoring was performed by the INVOS™\nsystem (Medtronic, Dublin, Ireland), so as to provide a continuous non-invasive\nmeasurement of cerebral oxygen saturation and a reliable indication of changes\nin cerebral perfusion.\nSurgical dissection of the carotid arteries was usually completed while cardiac\nsurgeons harvested the saphenous vein. All patients were anticoagulated with\nheparin 5000 IU before the carotid arteries were cross-clamped. During the CEA,\ncarotid shunting was performed in 200 patients (90%) and all reconstructions\nwere performed with bovine pericardium patch. After CEA, the neck wound was\nusually left open to identify any suture bleeding during cardiac surgery due to\nanticoagulation with heparin. In particular, in our experience, in all patients\nundergoing synchronous CEA and CABG, the ECC was performed lowering brain\ntemperature by 2-5°C during ischemia (mild intra-ischemic hypothermia). This\nprocedure was found the most efficacious neuro-protective strategy because it\nwas associated to a reduction in the incidence and severity of cognitive deficits.\n7\n The intra-operative anticoagulation status was managed using a\nthromboelastograph (Haemonetics®, Boston, USA). After CABG and removal of the\nECC cannulae, the heparin was reversed with protamine sulphate. The cervical\nwound was closed at the completion of the cardiac surgery.", "The patients undergoing combined surgery with CEA + CABG underwent coronary\nrevascularization both in ECC and Off-Pump. 150 patients (67.5%) underwent CABG\nusing ECC while 72 (32,5%) were done Off-Pump. The overall perioperative mortality\nrate was 4% (n = 9 patients). 
Two patients (.9%) had a periprocedural ipsilateral TIA, which completely resolved by the second postoperative day. Two patients (.9%) had an ipsilateral stroke, while 7 patients had a stroke of the contralateral cerebral hemisphere. Two patients (.9%) suffered periprocedural coma caused by cerebral hypoperfusion due to perioperative heart failure (Table 3). No patient required re-operation for neck hematoma after CEA, and only 3 patients (1.4%) underwent re-exploration for bleeding following CABG. There were no statistically significant differences in the incidence of stroke between ECC and off-pump CABG patients. The 2 perioperative strokes ipsilateral to the CEA occurred with one technique each, and the same was true for the 5 strokes contralateral to the carotid CEA (3 in patients with ECC and 2 in off-pump patients). Table 3. 30-day perioperative complications in 222 combined cardiac and carotid procedures: Death 9 (4.1%); Myocardial infarction 13 (5.9%); Stroke ipsilateral to CEA 2 (.9%); TIA ipsilateral to carotid CEA 2 (.9%); Stroke contralateral to carotid CEA 5 (2.3%); Hypoxic coma 2 (.9%); Transient myocardial ischemia 32 (14.4%); Transient rise of creatinine >1.5 mg/dl 54 (24.3%); Surgical reoperation for bleeding (from median sternotomy) 3 (1.4%).", "Atherosclerosis is a disease that simultaneously affects the carotid and coronary arteries. Concomitant disease occurs in 2-20% of patients, with an average incidence of 8%. Approximately 28% of patients undergoing CEA have significant coronary disease requiring revascularization.8 Similarly, 12% of patients undergoing coronary revascularization have hemodynamically significant carotid disease.9 The presence of significant carotid disease increases the risk of stroke after CABG.10,11 Patients undergoing CABG who are free from significant carotid disease have been shown to have a relatively low risk of stroke, as low as 1.9%, while those with hemodynamically significant stenosis run a greater risk of cerebrovascular complications following CABG. Indeed, the risk of stroke has been shown to be as high as 3% in patients with unilateral stenosis >70%, 5% in patients with bilateral stenosis >70% and 7-11% in patients with unilateral carotid occlusion and contralateral carotid stenosis >70%.12 The principal risk of stroke during CABG can be explained by manipulation of the atherosclerotic aorta during cannulation and the clamping necessary to establish cardiopulmonary bypass (CPB).13,14 Therefore, avoiding the use of CPB could help to reduce this incidence. CABG without CPB, so-called "beating heart" CABG, could represent an acceptable alternative to conventional CABG with CPB, in order to reduce the incidence of adverse events in patients with significant carotid artery stenosis. A number of studies have shown no difference in mortality and cerebrovascular complications between on-pump and off-pump CABG.14
On the other hand, off-pump CABG is associated with shorter operative time, shorter ICU stay and shorter overall hospitalization, as well as decreases in postoperative bleeding and blood transfusion.15 There is no consensus on the optimal management of synchronous carotid and coronary disease, although several retrospective studies have attempted to answer this question. A recent meta-analysis compared simultaneous carotid endarterectomy (CEA) and CABG versus staged CEA and CABG for patients with concomitant CAD and carotid artery stenosis in terms of perioperative outcomes.16 Eleven studies comprising 44,895 patients were included in this analysis (21,710 in the synchronous group and 23,185 patients in the staged group). The reported results showed that the synchronous CEA and CABG group had a statistically significantly lower risk of myocardial infarction and higher risks of stroke and death. In addition, transient ischemic attacks, postoperative bleeding and pulmonary complications were similar between the 2 groups. On the other hand, research conducted by Snider et al.17 asserted that combined CEA and CABG surgery has an acceptable 30-day morbidity and mortality (2% deaths, 3% myocardial infarctions, 1% major strokes). However, these authors emphasize that the management of these patients requires the use of carotid shunting during CEA, careful and appropriate pre-, intra- and post-operative assessment, and a short intra-operative time to reduce ischemic injury, especially during CPB.17 Another meta-analysis, by Sharma et al.,18 found no difference in mortality and stroke between combined CEA-CABG and staged CEA-CABG. In this case, twelve studies were identified, with a total of 17,469 and 7552 patients in the combined and staged groups, respectively. A pooled analysis revealed no differences in early mortality, postoperative stroke, combined early mortality or stroke, or the combined endpoint of MI or stroke between the 2 surgical approaches. The authors conclude that the 2 strategies can be used interchangeably in clinical practice, with each having specific applications linked to specific clinical conditions. Another approach is transfemoral carotid artery stenting (TF-CAS) as an alternative to CEA in patients with severe coronary artery disease. However, TF-CAS has not been widely adopted in patients requiring concomitant CABG and carotid interventions because of higher stroke rates compared with CEA.19 In fact, a meta-analysis of 31 studies by Paraskevas et al.19 reported a perioperative stroke rate of 15% for patients with symptomatic carotid disease undergoing TF-CAS and CABG. Transcarotid artery revascularization (TCAR) with dynamic flow reversal is a new endovascular option for carotid revascularization. Initial results appear superior to TF-CAS, with a decreased incidence of stroke, perhaps in part because, unlike TF-CAS, TCAR avoids arch manipulation.20 Williams et al.21 have recently described TCAR performed concomitantly at the time of CABG. In this setting, TCAR can be achieved via a small supraclavicular incision. However, this hybrid strategy during CABG requires deviation from the protocol of perioperative dual antiplatelet therapy that is so critical to protect against early stent thrombosis. Indeed, these authors did not administer clopidogrel preoperatively and only loaded the patient with clopidogrel postoperatively.
In our experience based on 222 patients undergoing simultaneous carotid and coronary revascularization, the overall mortality was 4.1% and neurologic morbidity was 5.0%. In our opinion, the 2 patients with perioperative hypoxic coma due to diffuse cerebral hypoperfusion and severe perioperative heart failure should not be counted as neurological morbidity. The incidences of perioperative stroke and transient cerebrovascular complications (TIAs) were 4.1% (9 patients) and .9% (2 patients), respectively. The 2 perioperative strokes were ipsilateral to the carotid CEA, while the remaining 7 occurred in the contralateral hemisphere. In the present study, the stroke rate for asymptomatic carotid stenosis was .9% (2/222 patients), while the rate of stroke in the contralateral hemisphere was 3.2% (7/222 patients). Considering the incidence of stroke in relation to the type of coronary revascularization, it must be underlined that the 2 ipsilateral strokes occurred with ECC (1 patient) and off-pump (1 patient), while the contralateral strokes occurred off-pump (3 patients) and with ECC (4 patients). The occurrence of 7 strokes contralateral to the CEA and only 2 ipsilateral to the CEA suggests that the cause of cerebral embolization is mainly related to manipulation of the ascending aorta rather than to clamping or cannulation for ECC in patients with aggressive atherosclerosis. Our results for unilateral versus bilateral disease are very similar. In our opinion, this is because bilateral carotid stenoses (>70%) were treated first with isolated CEA of one side and then, after 7 days, with CEA + CABG; in this way, the patient reaches CABG with both carotid arteries already operated on, reducing the risk of stroke. Unilateral asymptomatic stenosis remains a controversial area. In our clinical practice, the historical tendency was to treat unilateral asymptomatic stenosis >70% with CEA + CABG. In the last 4 years, however, patients with unilateral asymptomatic stenosis >70% underwent CEA only if they showed ipsilateral ischemic lesions on brain CT; if no ischemic lesions were present on brain CT, they underwent CABG alone without CEA. The findings of the present study are supported by the results of a recent large multicentric observational study22 that evaluated the association of CEA and CABG with postoperative outcomes. Klarin et al.22 identified 994 off-pump CABG patients (497 CABG only and 497 CABG-CEA) and 5952 on-pump CABG patients (2976 CABG only and 2976 CABG-CEA). In patients who received on-pump operations, those undergoing CABG-CEA had no observed difference in the rate of in-hospital stroke (OR, .93; 95% CI, .72-1.21; P = .6), a higher incidence of STS morbidity composite events (OR, 1.15; 95% CI, 1.01-1.31; P = .03), and no observed difference in 30-day mortality (OR, 1.28; 95% CI, .97-1.69; P = .08) compared with those undergoing CABG only. For off-pump procedures, CABG-CEA patients had no observed difference in the rate of in-hospital stroke (OR, .80; 95% CI, .37-1.69; P = .56) compared with those undergoing CABG only. Limitations of our study include its retrospective and nonrandomized nature. In addition, our analysis describes CABG-CEA only in the setting of CABG using CPB versus off-pump CABG and does not include other strategies such as staged CABG followed by CEA or CEA followed by CABG.
Finally, the availability of CT findings for all patients deserves a thorough analysis; we aim to explore the correlation between CT findings and symptom status in a future paper.", "In conclusion, our experience indicates that concomitant CEA + CABG treatment is safe and effective in patients with severe multivessel coronary artery disease and symptomatic severe carotid stenosis." ]
[ "intro", "materials|methods", null, "results", "discussion", "conclusions" ]
[ "carotid", "coronary", "coronary artery bypass grafting" ]
Introduction: Postoperative stroke is one the most important complication after coronary artery grafting (CABG) in patients with severe coronary disease. The rate of post-CABG stroke range between .5% and 7%.1,2 The etiology of post-CABG stroke is multifactorial. Carotid atherosclerotic disease is a known independent risk factor of post operative stroke after coronary revascularization.3,4 The risk of stroke after CABG surgery has been closely related to the degree of carotid stenosis.5, 6 However, the best management of concomitant coronary artery disease and carotid artery disease remains controversial. Current treatment options include simultaneous carotid endoarterectomy (CEA) and CABG, staged CEA followed by CABG, staged CABG followed CEA, staged transfemoral carotid artery stenting (TF-CAS) and CABG, and simultaneous TF-CAS and CABG. None of these procedures have been universally accepted. We report our experience based on a large group of patients suffering from severe carotid stenosis and concomitant severe coronary artery disease through a combined approach of carotid CEA and CABG. Materials and Methods: From April 2004 to December 2020 a total of 2332 patients underwent CABG in our department. Of these, 222 patients (9.5%) presented a severe stenosis >70% and severe multivessel or common truncal coronary artery disease, and underwent CABG+CEA. The percentage of patients treated with combined CEA/CABG did not change over time and was approximately 13-15 patients per years. Symptomatic patients with moderate (50-69%) stenoses underwent only CABG. The characteristics of our population are reported in Table 1. 155 patients (70%) were asymptomatic and presented no findings at brain CT imaging; the remaining 67 patients (30%) presented previous neurological symptoms and ischemic brain lesions at brain CT imaging. CT findings are defined as chronic infarct or lacunar infarct or ipsilateral hemispheric infarct. Before CABG surgery all patients underwent ultrasonography of the neck vessels. Patients with carotid stenosis greater than 70%, either asymptomatic or symptomatic, underwent CT scan without contrast media to assess ischemic brain injury, and in some cases, if necessary, CT angiography of the neck and intracranial vessels. The preoperative cardiac and neurological clinical assessment is shown in Table 2.Table 1.Baseline Risk-factors in patients undergoing combined carotid-CABG surgery.Patients222Sex M/F172/50Age average ± SD68.9 ± 10Hypertension209 (94.1%)Dislipidemia88 (39.6%)Diabetes132 (59.5%)Chronic renal failure33 (14.9%)Peripheral arterial occlusive disease66 (29.7%)Chronic pulmonary obstructive disease149 (67.1%)Table 2.Preoperative Cardiac and Neurological evaluation in 222 pts undergoing combined surgery.N°%Cardiac history • Myocardical infarction13259.5 • Unstable angina (III-IV CCS)18985.1 • Chronic stable angina (I-II CCS)3314.9Neurological history • Asymptomatic15569.8 • Stroke115.0 • TIAs5625.2CT evidence of ipsilateral stroke • Positive6730.2 • Negative15569.8Carotid stenosis on ultrasound • Ipsilateral >70% and contralateral >50%5625.2 • Ipsilateral >70% and contralateral >70%2310.4 • Ipsilateraal >70% and contralateral occlusion115.0 • >70% stenosis carotid artery unilateral13360.0 Baseline Risk-factors in patients undergoing combined carotid-CABG surgery. Preoperative Cardiac and Neurological evaluation in 222 pts undergoing combined surgery. 
Surgical Treatment All procedures were performed under general anesthesia with full invasive monitoring, before onset of cardiopulmonary bypass (CPB). In no cases was CEA performed using loco-regional anesthesia. 72 (32.5%) patients underwent off-pump bypass. Neurological assessment was monitored with near infrared refractory spectroscopy and stump pressure measurement with 40 mmHg threshold. In particular, the intra-operative neuromonitoring was performed by the INVOS™ system (Medtronic, Dublin, Ireland), so as to provide a continuous non-invasive measurement of cerebral oxygen saturation and a reliable indication of changes in cerebral perfusion. Surgical dissection of the carotid arteries was usually completed while cardiac surgeons harvested the saphenous vein. All patients were anticoagulated with heparin 5000 IU before the carotid arteries were cross-clamped. During the CEA, carotid shunting was performed in 200 patients (90%) and all reconstructions were performed with bovine pericardium patch. After CEA, the neck wound was usually left open to identify any suture bleeding during cardiac surgery due to anticoagulation with heparin. In particular, in our experience, in all patients undergoing synchronous CEA and CABG, the ECC was performed lowering brain temperature by 2-5°C during ischemia (mild intra-ischemic hypothermia). This procedure was found the most efficacious neuro-protective strategy because it was associated to a reduction in the incidence and severity of cognitive deficits. 7 The intra-operative anticoagulation status was managed using a thromboelastograph (Haemonetics®, Boston, USA). After CABG and removal of the ECC cannulae, the heparin was reversed with protamine sulphate. The cervical wound was closed at the completion of the cardiac surgery. All procedures were performed under general anesthesia with full invasive monitoring, before onset of cardiopulmonary bypass (CPB). In no cases was CEA performed using loco-regional anesthesia. 72 (32.5%) patients underwent off-pump bypass. Neurological assessment was monitored with near infrared refractory spectroscopy and stump pressure measurement with 40 mmHg threshold. In particular, the intra-operative neuromonitoring was performed by the INVOS™ system (Medtronic, Dublin, Ireland), so as to provide a continuous non-invasive measurement of cerebral oxygen saturation and a reliable indication of changes in cerebral perfusion. Surgical dissection of the carotid arteries was usually completed while cardiac surgeons harvested the saphenous vein. All patients were anticoagulated with heparin 5000 IU before the carotid arteries were cross-clamped. During the CEA, carotid shunting was performed in 200 patients (90%) and all reconstructions were performed with bovine pericardium patch. After CEA, the neck wound was usually left open to identify any suture bleeding during cardiac surgery due to anticoagulation with heparin. In particular, in our experience, in all patients undergoing synchronous CEA and CABG, the ECC was performed lowering brain temperature by 2-5°C during ischemia (mild intra-ischemic hypothermia). This procedure was found the most efficacious neuro-protective strategy because it was associated to a reduction in the incidence and severity of cognitive deficits. 7 The intra-operative anticoagulation status was managed using a thromboelastograph (Haemonetics®, Boston, USA). After CABG and removal of the ECC cannulae, the heparin was reversed with protamine sulphate. 
Results: The patients undergoing combined CEA + CABG surgery underwent coronary revascularization either with ECC or off-pump: 150 patients (67.5%) underwent CABG using ECC, while 72 (32.5%) were operated off-pump. The overall perioperative mortality rate was 4% (n = 9 patients). Two patients (.9%) had a periprocedural ipsilateral TIA which completely resolved by the second postoperative day. Two patients (.9%) had an ipsilateral stroke, while 7 patients had a stroke of the contralateral cerebral hemisphere. Two patients (.9%) suffered periprocedural coma caused by cerebral hypoperfusion due to perioperative heart failure (Table 3). No patient required re-operation for neck hematoma after CEA, and only 3 patients (1.4%) underwent re-exploration for bleeding following CABG. There were no statistically significant differences in the incidence of stroke between ECC and off-pump CABG patients. The 2 perioperative strokes ipsilateral to the CEA occurred one with each technique. The same was true for the 5 contralateral strokes to the carotid CEA (3 in patients with ECC and 2 in off-pump patients).

Table 3. 30-day perioperative complications in 222 combined cardiac and carotid procedures.
  Death: 9 (4.1%)
  Myocardial infarction: 13 (5.9%)
  Stroke ipsilateral to CEA: 2 (.9%)
  TIA ipsilateral to CEA: 2 (.9%)
  Stroke contralateral to CEA: 5 (2.3%)
  Hypoxic coma: 2 (.9%)
  Transient myocardial ischemia: 32 (14.4%)
  Transient rise of creatinine >1.5 mg/dl: 54 (24.3%)
  Surgical re-operation for bleeding (from median sternotomy): 3 (1.4%)
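For reference, the complication rates above are simple proportions out of 222 patients. The short R check below recomputes a few of them with exact binomial 95% confidence intervals; this is an illustrative addition, not part of the original analysis, and the vector names are our own labels.

    # Crude 30-day complication rates out of n = 222, with exact 95% CIs.
    n <- 222
    events <- c(death = 9, stroke_ipsi = 2, stroke_contra = 5, tia = 2)
    t(sapply(events, function(k) {
      ci <- binom.test(k, n)$conf.int   # exact (Clopper-Pearson) interval
      c(rate = k / n, lower = ci[1], upper = ci[2])
    }))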
Discussion: Atherosclerosis is a disease that simultaneously affects the carotid and coronary arteries. Concomitant disease occurs in 2-20% of patients, with an average incidence of 8%. Approximately 28% of patients undergoing CEA have significant coronary disease requiring revascularization.8 Similarly, 12% of patients undergoing coronary revascularization have hemodynamically significant carotid disease.9 The presence of significant carotid disease increases the risk of stroke after CABG.10,11 Patients undergoing CABG who are free from significant carotid disease have been shown to have a relatively low risk of stroke, as low as 1.9%, while those with hemodynamically significant stenosis run a greater risk of cerebrovascular complications following CABG. Indeed, the risk of stroke has been shown to be as high as 3% in patients with unilateral stenosis >70%, 5% in patients with bilateral stenosis >70%, and 7-11% in patients with unilateral carotid occlusion and contralateral carotid stenosis >70%.12 The principal risk of stroke during CABG can be explained by manipulation of the atherosclerotic aorta during cannulation and the clamping necessary to establish cardiopulmonary bypass (CPB).13,14 Therefore, avoiding the use of CPB could help to reduce this incidence. CABG without CPB, so-called "beating heart" CABG, could represent an acceptable alternative to conventional CABG with CPB in order to reduce the incidence of adverse events in patients with significant carotid artery stenosis. A number of studies have shown no difference in mortality and cerebrovascular complications between on-pump and off-pump CABG.14 On the other hand, off-pump CABG is associated with shorter operative time, shorter ICU stay, shorter overall hospitalization, and decreased postoperative bleeding and blood transfusion.15 There is no consensus on the optimal management of synchronous carotid and coronary disease, although several retrospective studies have attempted to answer this question. A recent meta-analysis compared simultaneous carotid endarterectomy (CEA) and CABG vs staged CEA and CABG for patients with concomitant coronary artery disease and carotid artery stenosis in terms of perioperative outcomes.16 Eleven studies comprising 44,895 patients were included in this analysis (21,710 in the synchronous group and 23,185 in the staged group). The reported results showed that the synchronous CEA and CABG group had a statistically significantly lower risk of myocardial infarction and a higher risk of stroke and death. In addition, rates of transient ischemic attacks, postoperative bleeding, and pulmonary complications were similar between the 2 groups. On the other hand, Snider et al.17 reported that combined CEA and CABG surgery has an acceptable 30-day morbidity and mortality (2% deaths, 3% myocardial infarctions, 1% major strokes). However, these authors emphasize that the management of these patients requires the use of carotid shunting during CEA, a careful and appropriate pre-, intra-, and post-operative assessment, and short intraoperative times to reduce ischemic injury, especially during CPB.17 Another meta-analysis, by Sharma et al.,18 found no difference in mortality and stroke between combined CEA-CABG and staged CEA-CABG. In this case, twelve studies were identified, with a total of 17,469 and 7,552 patients in the combined and staged groups, respectively.
A pooled analysis revealed no difference between the 2 surgical approaches in early mortality, postoperative stroke, combined early mortality or stroke, or the combined endpoint of MI or stroke. The authors conclude that the 2 strategies can be used interchangeably in clinical practice, each having specific applications linked to specific clinical conditions. Another approach is transfemoral carotid artery stenting (TF-CAS) as an alternative to CEA in patients with severe coronary artery disease. However, TF-CAS has not been widely adopted in patients requiring concomitant CABG and carotid interventions because of its higher stroke rates compared with CEA.19 In fact, a meta-analysis of 31 studies by Paraskevas et al.19 reported a perioperative stroke rate of 15% for patients with symptomatic carotid disease undergoing TF-CAS and CABG. Transcarotid artery revascularization (TCAR) with dynamic flow reversal is a new endovascular option for carotid revascularization. The initial results appear to be superior to TF-CAS, with a decreased incidence of stroke, perhaps in part because, unlike TF-CAS, TCAR avoids arch manipulation.20 Williams et al.21 have recently described TCAR performed concomitantly at the time of CABG. In this setting, TCAR can be achieved via a small supraclavicular incision. However, this hybrid strategy during CABG requires deviation from the perioperative dual antiplatelet therapy (DAPT) protocol that is so critical to protect against early stent thrombosis. Indeed, these authors did not administer clopidogrel preoperatively and only loaded the patient with clopidogrel postoperatively. In our experience, based on 222 patients undergoing simultaneous carotid and coronary revascularization, the overall mortality was 4.1% and neurologic morbidity was 5.0%. In our opinion, the 2 patients with perioperative hypoxic coma due to diffuse cerebral hypoperfusion and severe perioperative heart failure should not be counted in the neurological morbidity. The incidence of perioperative stroke and of transient cerebrovascular complications (TIAs) was 4.1% (9 patients) and .9% (2 patients), respectively. Two of the perioperative strokes were ipsilateral to the carotid CEA, while the remaining 7 occurred in the contralateral hemisphere. In the present study, the rate of stroke ipsilateral to an asymptomatic carotid stenosis was .9% (2/222 patients), while the rate of stroke in the contralateral hemisphere was 3.2% (7/222 patients). Considering the incidence of stroke in relation to the type of coronary revascularization, the 2 ipsilateral strokes occurred one with ECC and one off-pump, while the contralateral strokes occurred off-pump in 3 patients and with ECC in 4 patients. The occurrence of 7 strokes contralateral to the CEA and only 2 ipsilateral to the CEA suggests that the cause of cerebral embolization is mainly related to manipulation of the ascending aorta rather than to clamping or cannulation for ECC in patients with aggressive atherosclerosis. Our results for unilateral vs bilateral disease are very similar. In our opinion, this is because bilateral carotid stenoses (>70%) were treated first with isolated CEA and then, after 7 days, with CEA + CABG. In this way, the patient comes to CABG with both carotids already treated, thus reducing the risk of stroke. Unilateral asymptomatic stenosis remains a controversial area today. In our clinical practice, the historical tendency was to treat unilateral asymptomatic stenosis >70% with CEA + CABG.
In the last 4 years, however, patients with unilateral asymptomatic stenosis >70% have undergone CEA only if they show ipsilateral ischemic lesions on brain CT. If these patients have no ischemic lesions on brain CT, they undergo CABG alone, without CEA. The findings of the present study are supported by the results of a recent large multicenter observational study22 that evaluated the association of CEA combined with CABG with postoperative outcomes. Klarin et al.22 identified 994 off-pump CABG patients (497 CABG only and 497 CABG-CEA) and 5952 on-pump CABG patients (2976 CABG only and 2976 CABG-CEA). Among patients who received on-pump operations, those undergoing CABG-CEA had no observed difference in the rate of in-hospital stroke (OR, .93; 95% CI, .72-1.21; P = .6), a higher incidence of STS morbidity composite events (OR, 1.15; 95% CI, 1.01-1.31; P = .03), and no observed difference in 30-day mortality (OR, 1.28; 95% CI, .97-1.69; P = .08) compared with those undergoing CABG only. For off-pump procedures, CABG-CEA patients had no observed difference in the rate of in-hospital stroke (OR, .80; 95% CI, .37-1.69; P = .56) compared with those undergoing CABG only. Limitations of our study include its retrospective and nonrandomized nature. In addition, our analysis describes CABG-CEA only in the setting of CABG using CPB vs off-pump CABG and does not include other strategies such as staged CABG followed by CEA or CEA followed by CABG. Finally, the availability of CT findings for all patients deserves a thorough analysis, and we aim to explore the correlation between CT findings and symptom status in a future paper. Conclusions: In conclusion, our experience indicates that concomitant CEA + CABG treatment is safe and effective in patients with severe multivessel coronary artery disease and symptomatic severe carotid stenosis.
Background: Carotid atherosclerotic disease is a known independent risk factor for postoperative stroke after coronary artery bypass grafting (CABG). The best management of concomitant coronary artery disease and carotid artery disease remains debated. Current strategies include simultaneous carotid endarterectomy (CEA) and CABG, staged CEA followed by CABG, staged CABG followed by CEA, staged transfemoral carotid artery stenting (TF-CAS) followed by CABG, simultaneous TF-CAS and CABG, and transcarotid artery stenting. Methods: We report our experience based on a cohort of 222 patients undergoing combined CEA and CABG surgery who came to our observation from 2004 to 2020. All patients with >70% carotid stenosis and severe multivessel or common truncal coronary artery disease underwent combined CEA and CABG surgery at our institution. 30% of patients had remote neurological symptoms or a cerebral CT scan with ischemic lesions. Patients with carotid stenosis >70%, either asymptomatic or symptomatic, underwent CT scan without contrast media to assess ischemic brain injury and, when necessary, CT angiography of the neck and intracranial vessels. Results: The overall perioperative mortality rate was 4.1% (9/222 patients). Two patients (.9%) had a periprocedural ipsilateral transient ischemic attack (TIA) which completely resolved by the second postoperative day. Two patients (.9%) had an ipsilateral stroke, while 7 patients (3.2%) had a stroke of the contralateral brain hemisphere. Two patients (.9%) were affected by periprocedural coma caused by cerebral hypoperfusion due to perioperative heart failure. There were no statistically significant differences in the onset of perioperative stroke between patients treated with extracorporeal circulation (ECC) and Off-pump patients. Conclusions: Our experience indicates that combined surgical treatment with CEA and CABG, possibly Off-Pump, is a feasible procedure, able to minimize the risk of postoperative stroke and cognitive deficits.
Introduction: Postoperative stroke is one of the most important complications after coronary artery bypass grafting (CABG) in patients with severe coronary disease. The rate of post-CABG stroke ranges between .5% and 7%.1,2 The etiology of post-CABG stroke is multifactorial. Carotid atherosclerotic disease is a known independent risk factor for postoperative stroke after coronary revascularization.3,4 The risk of stroke after CABG surgery has been closely related to the degree of carotid stenosis.5,6 However, the best management of concomitant coronary artery disease and carotid artery disease remains controversial. Current treatment options include simultaneous carotid endarterectomy (CEA) and CABG, staged CEA followed by CABG, staged CABG followed by CEA, staged transfemoral carotid artery stenting (TF-CAS) and CABG, and simultaneous TF-CAS and CABG. None of these procedures has been universally accepted. We report our experience with a large group of patients suffering from severe carotid stenosis and concomitant severe coronary artery disease treated with a combined approach of CEA and CABG. Conclusions: In conclusion, our experience indicates that concomitant CEA + CABG treatment is safe and effective in patients with severe multivessel coronary artery disease and symptomatic severe carotid stenosis.
Background: Carotid atherosclerotic disease is a known independent risk factor for postoperative stroke after coronary artery bypass grafting (CABG). The best management of concomitant coronary artery disease and carotid artery disease remains debated. Current strategies include simultaneous carotid endarterectomy (CEA) and CABG, staged CEA followed by CABG, staged CABG followed by CEA, staged transfemoral carotid artery stenting (TF-CAS) followed by CABG, simultaneous TF-CAS and CABG, and transcarotid artery stenting. Methods: We report our experience based on a cohort of 222 patients undergoing combined CEA and CABG surgery who came to our observation from 2004 to 2020. All patients with >70% carotid stenosis and severe multivessel or common truncal coronary artery disease underwent combined CEA and CABG surgery at our institution. 30% of patients had remote neurological symptoms or a cerebral CT scan with ischemic lesions. Patients with carotid stenosis >70%, either asymptomatic or symptomatic, underwent CT scan without contrast media to assess ischemic brain injury and, when necessary, CT angiography of the neck and intracranial vessels. Results: The overall perioperative mortality rate was 4.1% (9/222 patients). Two patients (.9%) had a periprocedural ipsilateral transient ischemic attack (TIA) which completely resolved by the second postoperative day. Two patients (.9%) had an ipsilateral stroke, while 7 patients (3.2%) had a stroke of the contralateral brain hemisphere. Two patients (.9%) were affected by periprocedural coma caused by cerebral hypoperfusion due to perioperative heart failure. There were no statistically significant differences in the onset of perioperative stroke between patients treated with extracorporeal circulation (ECC) and Off-pump patients. Conclusions: Our experience indicates that combined surgical treatment with CEA and CABG, possibly Off-Pump, is a feasible procedure, able to minimize the risk of postoperative stroke and cognitive deficits.
3,709
367
[ 335 ]
6
[ "cabg", "patients", "cea", "carotid", "stroke", "performed", "stenosis", "cea cabg", "pump", "disease" ]
[ "artery stenting tf", "carotid coronary revascularization", "stroke cabg surgery", "infarct cabg surgery", "simultaneous carotid endarterectomy" ]
null
[CONTENT] carotid | coronary | coronary artery bypass grafting [SUMMARY]
null
[CONTENT] carotid | coronary | coronary artery bypass grafting [SUMMARY]
[CONTENT] carotid | coronary | coronary artery bypass grafting [SUMMARY]
[CONTENT] carotid | coronary | coronary artery bypass grafting [SUMMARY]
[CONTENT] carotid | coronary | coronary artery bypass grafting [SUMMARY]
[CONTENT] Carotid Artery Diseases | Carotid Stenosis | Coronary Artery Bypass | Coronary Artery Disease | Endarterectomy, Carotid | Humans | Stents | Stroke | Treatment Outcome [SUMMARY]
null
[CONTENT] Carotid Artery Diseases | Carotid Stenosis | Coronary Artery Bypass | Coronary Artery Disease | Endarterectomy, Carotid | Humans | Stents | Stroke | Treatment Outcome [SUMMARY]
[CONTENT] Carotid Artery Diseases | Carotid Stenosis | Coronary Artery Bypass | Coronary Artery Disease | Endarterectomy, Carotid | Humans | Stents | Stroke | Treatment Outcome [SUMMARY]
[CONTENT] Carotid Artery Diseases | Carotid Stenosis | Coronary Artery Bypass | Coronary Artery Disease | Endarterectomy, Carotid | Humans | Stents | Stroke | Treatment Outcome [SUMMARY]
[CONTENT] Carotid Artery Diseases | Carotid Stenosis | Coronary Artery Bypass | Coronary Artery Disease | Endarterectomy, Carotid | Humans | Stents | Stroke | Treatment Outcome [SUMMARY]
[CONTENT] artery stenting tf | carotid coronary revascularization | stroke cabg surgery | infarct cabg surgery | simultaneous carotid endarterectomy [SUMMARY]
null
[CONTENT] artery stenting tf | carotid coronary revascularization | stroke cabg surgery | infarct cabg surgery | simultaneous carotid endarterectomy [SUMMARY]
[CONTENT] artery stenting tf | carotid coronary revascularization | stroke cabg surgery | infarct cabg surgery | simultaneous carotid endarterectomy [SUMMARY]
[CONTENT] artery stenting tf | carotid coronary revascularization | stroke cabg surgery | infarct cabg surgery | simultaneous carotid endarterectomy [SUMMARY]
[CONTENT] artery stenting tf | carotid coronary revascularization | stroke cabg surgery | infarct cabg surgery | simultaneous carotid endarterectomy [SUMMARY]
[CONTENT] cabg | patients | cea | carotid | stroke | performed | stenosis | cea cabg | pump | disease [SUMMARY]
null
[CONTENT] cabg | patients | cea | carotid | stroke | performed | stenosis | cea cabg | pump | disease [SUMMARY]
[CONTENT] cabg | patients | cea | carotid | stroke | performed | stenosis | cea cabg | pump | disease [SUMMARY]
[CONTENT] cabg | patients | cea | carotid | stroke | performed | stenosis | cea cabg | pump | disease [SUMMARY]
[CONTENT] cabg | patients | cea | carotid | stroke | performed | stenosis | cea cabg | pump | disease [SUMMARY]
[CONTENT] cabg | carotid | disease | stroke | artery | coronary | post | staged | artery disease | cabg stroke [SUMMARY]
null
[CONTENT] patients | perioperative | stroke | ipsilateral | day | pump | ecc | carotid | contralateral | day perioperative complications 222 [SUMMARY]
[CONTENT] severe | multivessel coronary artery | conclusion experience reported | cabg treatment | multivessel coronary artery disease | conclusion experience reported concomitant | experience reported | experience reported concomitant | experience reported concomitant cea | treatment safe effective patients [SUMMARY]
[CONTENT] cabg | patients | carotid | cea | stroke | performed | disease | artery | severe | stenosis [SUMMARY]
[CONTENT] cabg | patients | carotid | cea | stroke | performed | disease | artery | severe | stenosis [SUMMARY]
[CONTENT] ||| ||| CEA | CABG | CEA | CABG | CEA | CABG | CABG [SUMMARY]
null
[CONTENT] 4.1% | 9/222 ||| Two | the second postoperative day ||| Two | 7 | 3.2% ||| Two ||| [SUMMARY]
[CONTENT] CEA | CABG | Off-Pump [SUMMARY]
[CONTENT] ||| ||| CEA | CABG | CEA | CABG | CEA | CABG | CABG ||| 222 | CEA | CABG | 2004 | 2020 ||| 70% | CEA | CABG ||| 30% ||| 70% ||| ||| 4.1% | 9/222 ||| Two | the second postoperative day ||| Two | 7 | 3.2% ||| Two ||| ||| CEA | CABG | Off-Pump [SUMMARY]
[CONTENT] ||| ||| CEA | CABG | CEA | CABG | CEA | CABG | CABG ||| 222 | CEA | CABG | 2004 | 2020 ||| 70% | CEA | CABG ||| 30% ||| 70% ||| ||| 4.1% | 9/222 ||| Two | the second postoperative day ||| Two | 7 | 3.2% ||| Two ||| ||| CEA | CABG | Off-Pump [SUMMARY]
The spatial distribution of known predictors of autism spectrum disorders impacts geographic variability in prevalence in central North Carolina.
23113973
The causes of autism spectrum disorders (ASD) remain largely unknown and widely debated; however, evidence increasingly points to the importance of environmental exposures. A growing number of studies use geographic variability in ASD prevalence or exposure patterns to investigate the association between environmental factors and ASD. However, differences in the geographic distribution of established risk and predictive factors for ASD, such as maternal education or age, can interfere with investigations of ASD etiology. We evaluated geographic variability in the prevalence of ASD in central North Carolina and the impact of spatial confounding by known risk and predictive factors.
BACKGROUND
Children meeting a standardized case definition for ASD at 8 years of age were identified through records-based surveillance for 8 counties biennially from 2002 to 2008 (n=532). Vital records were used to identify the underlying cohort (15% random sample of children born in the same years as children with an ASD, n=11,034), and to obtain birth addresses. We used generalized additive models (GAMs) to estimate the prevalence of ASD across the region by smoothing latitude and longitude. GAMs, unlike methods used in previous spatial analyses of ASD, allow for extensive adjustment of individual-level risk factors (e.g. maternal age and education) when evaluating spatial variability of disease prevalence.
METHODS
Unadjusted maps revealed geographic variation in surveillance-recognized ASD. Children born in certain regions of the study area were up to 1.27 times as likely to be recognized as having ASD compared to children born in the study area as a whole (prevalence ratio (PR) range across the study area 0.57-1.27; global P=0.003). However, geographic gradients of ASD prevalence were attenuated after adjusting for spatial confounders (adjusted PR range 0.72-1.12 across the study area; global P=0.052).
RESULTS
In these data, spatial variation of ASD in central NC can be explained largely by factors impacting diagnosis, such as maternal education, emphasizing the importance of adjusting for differences in the geographic distribution of known individual-level predictors in spatial analyses of ASD. These results underscore the critical importance of accounting for such factors in studies of environmental exposures that vary across regions.
CONCLUSIONS
[ "Child", "Child Development Disorders, Pervasive", "Demography", "Educational Status", "Female", "Geographic Information Systems", "Humans", "Male", "North Carolina", "Population Surveillance", "Predictive Value of Tests", "Prevalence", "Risk Factors", "Spatial Analysis" ]
3499188
Background
Autism spectrum disorders (ASD) are complex neurodevelopmental disorders characterized by impaired social interaction and communication, and restrictive and repetitive behavior [1]. Estimates in 2008 indicate that approximately 1 in 88 children have ASD and that the prevalence of documented ASD is on the rise [2]. The causes for ASD remain largely unknown and widely debated [3,4]. Environmental exposures are hypothesized to contribute to ASD etiology [5]; however, identifying exposures of concern has been complicated by the relative rarity of ASD, the extensive number of candidate exposures, and the lack of exposure measurements during early life, the developmentally relevant time period for ASD. Thus, studies have explored the geographic distribution of ASD as a means for generating hypotheses about spatially distributed environmental exposures. Additionally, studies have used geographically-based exposure assignment to evaluate the impact of specific exposures such as air pollutants, pesticides, and hazardous wastes on ASD risk [6-9]. One problem with evaluating the geographically non-random prevalence of ASD for etiologic purposes lies in difficulties disentangling the geographic distribution of other factors associated with diagnosis [10,11]. For example, higher maternal education is associated with increased ASD diagnosis in the United States [12,13] but not in some European countries [14]. These results suggest that maternal education is a factor in promoting recognition of ASD, but not necessarily the occurrence of ASD. Identifying patterns related to ASD diagnosis may be helpful for public health services allocation; however, to generate hypotheses about etiology, we must distinguish diagnostic patterns from patterns of ASD occurrence. For example, prioritizing investigations of a geographically-based environmental cause may be unwarranted if the observed spatial pattern is driven by maternal education (i.e. spatial confounding). Two previous studies investigated geographic variability of ASD as a means of identifying environmental exposures related to ASD prevalence [15,16]. Both reported that children born in certain regions of California were more likely to have a recognized ASD than children born in other parts of the state [15,16]. The authors attributed their findings to regional differences in the underlying population (i.e. demographic and socioeconomic factors) or geographic variability of environmental exposures; but were unable to disentangle factors promoting diagnosis from environmental exposures because they did not adjust for potentially important individual-level spatial confounding [15,16]. In order to determine the potential for environmentally distributed exposure to be associated with ASD in central North Carolina (such as contaminants in air or water), we first explored whether spatial differences in ASD prevalence existed and then whether differences remained after accounting for spatially distributed covariates associated with ASD risk and diagnosis. We used a method of spatial epidemiology that applies generalized additive models (GAMs) to assess the spatial variation of disease in a region while simultaneously adjusting for other geographically distributed individual-level factors [17,18] such as maternal education, age, and smoking. Combining GAMs and geographic information systems allowed us to predict a continuous surface of ASD prevalence across our study area. 
Our research serves not only to expand the consideration of spatial patterns of ASD to geographic regions other than California but also to improve the utility of such studies by directly examining how adjustment for known risk and predictive factors influences geographic patterns. Patterns remaining after accounting for factors that influence ASD recognition may better reveal the distribution of novel geographically distributed etiologic factors impacting ASD prevalence.
Methods
Study population: The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active, population-based surveillance program that monitors the prevalence of developmental disabilities among children aged 8 years, an age by which most children with ASD have been evaluated [19], in selected geographic regions across the United States [20]. The ADDM Network conducts standardized review of medical and educational records, and trained clinicians determine whether standardized case definitions for ASD and intellectual disability (ID) are met [20]. Our analyses utilized data from the North Carolina ADDM site (NC-ADDM), which began biennial surveillance in 2002. Analyses were restricted to children born in the 8 counties that were under surveillance during all study years (2002–2008). To represent the underlying population, we randomly selected a 15% sample of birth records for children born in the same study counties and years as children included in ADDM (biennial 1994–2000; n=11,908, representing a region averaging 20,000 births per year). Figure 1 provides orientation to the population distribution in NC, where red dots indicate children with developmental disabilities (ASD or ID) and blue dots indicate children randomly sampled from the entire central NC area. We excluded children who were adopted, because information on birth address was missing, and those who died in infancy, because they were not part of the risk set for developmental disabilities (n=93; <1%). NC-ADDM and our current analyses were reviewed and approved by the Institutional Review Board at the University of North Carolina-Chapel Hill.

Figure 1. Eight county central North Carolina study area. The residential addresses at birth for the birth cohort (blue points) and children with ASD or ID (red points) born in 1994, 1996, 1998 and 2000 are displayed with altered locations to preserve confidentiality.
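For illustration, a minimal R sketch of drawing such a control sample appears below. The data frame `births` and its `county` and `birth_year` columns are hypothetical stand-ins, and the sampling is stratified by county and year here purely for illustration; this is not the study's actual code.

    # Draw a 15% random sample of birth records within county-by-year strata.
    set.seed(42)  # make the random draw reproducible
    strata <- interaction(births$county, births$birth_year, drop = TRUE)
    idx <- unlist(lapply(split(seq_len(nrow(births)), strata), function(i) {
      i[sample.int(length(i), round(0.15 * length(i)))]
    }))
    cohort <- births[idx, ]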
ASD and ID classification: To fully explore the impact of spatial confounding on the geographic distribution of developmental disability, we considered 4 different disability groups: all children with an ASD, 2 subgroups within all ASD classified by the co-occurrence of ID (ASD+ID and ASD-ID), and the independent group of all children with ID without regard to ASD status. Examining these independent and cross-classified groups provided a more nuanced disentanglement of factors which may act differently to promote recognition of ASD versus ID, as follows. Diagnosis of ASD requires comprehensive evaluation of a constellation of behaviors, which can be more involved than a single assessment of Intelligence Quotient (IQ) to determine ID. Yet ASD often occurs with ID, frequently presenting as more severe disability that is recognized at an earlier age. In addition, Durkin et al. (2010) reported a differential socioeconomic (SES) gradient in ASD prevalence by the presence or absence of ID; SES acts as a stronger risk factor for ASD-ID compared to ASD+ID [3]. Because we expect SES to vary spatially and be a spatial confounder, analyzing ASD without respect to ID may mask the true spatial patterning of disease. Children met the standardized case definition for ASD if clinician reviewers determined their developmental evaluation records indicated behaviors consistent with ASD, based on the Diagnostic and Statistical Manual-IV™ criteria for Autistic Disorder, Asperger Disorder, or Pervasive Developmental Disorder Not-Otherwise-Specified [21]. Children met the standardized definition for ID if clinician review of developmental evaluations determined they had an IQ ≤ 70 on the most recently administered psychometric test, such as the Battelle–cognitive domain [22], Differential Ability Scales [23], Stanford-Binet–4th ed. [24], Wechsler Preschool and Primary Scale of Intelligence [25], and the Wechsler Intelligence Scale for Children-III [26], or, in the absence of test scores, a written statement in the records indicated the presence of a severe or profound intellectual disability [20]. Our analyses included 561 children with ASD, who were further classified into two subgroups: children with ASD without ID (ASD-ID, n=330) and children with ASD and ID (ASD+ID, n=231). As a comparison to the ASD analyses, we conducted additional analyses investigating the spatial variability in the prevalence of ID (n=1,028) regardless of ASD status.
Residential location and covariates: Surveillance data were linked to birth records to obtain the residential address and covariate information at the time of birth of children with ASD and ID. Birth data were chosen to reflect the most etiologically relevant time period for brain development [27]. We combined several methods to geocode (i.e. assign latitude and longitude coordinates to) the residential birth addresses of study children, to improve geocoding accuracy and reduce positional errors [28].
First, we ran all addresses through the ZP4 software, which cleans and updates addresses using U.S. Postal Service databases (e.g. converting rural routes to E-911 street names) (version expiring March 2011, Semphoria; Memphis, Tennessee). We then cleaned all addresses individually and geocoded them using ArcGIS (version 9.3, Redlands, California) and U.S. Census Tigerline files [29]. For unmatched addresses, we used Google Maps to locate the residence where possible [30]. Using these methods, we successfully geocoded 12,299 (93.41%) of the 13,167 residential addresses. Of the addresses we were unable to geocode, 379 (2.88%) were post office boxes and 489 (3.71%) were addresses that were either incomplete or that we were not able to geocode to a specific location. Geocoding success was similar for the birth cohort and children with ASD and ID.

Spatial analysis: We estimated the log odds of ASD and ID using GAMs, an extension of linear regression models that can analyze binary outcome data and accommodate both parametric and non-parametric model components [17]. For non-parametric model terms, GAMs replace the traditional exposure term in an ordinary logistic regression with a smooth term (i.e. a term of best fit after adjusting for other covariates). In these analyses we applied a bivariate smooth to the latitude and longitude coordinates and included all other covariates as parametric terms [17,18]. We used a locally weighted regression smoother (loess), which adapts to the changes in data density that are likely to occur in analyses of residential locations due to variability in population density. To predict prevalence, the loess smoother uses information from nearby data points, weighting each point's information by its distance from the prediction point. The region or neighborhood from which data are drawn to predict prevalence is based on the percentage of data points in the neighborhood and is referred to as the span size. Choice of span size is a trade-off between bias and variability: a larger span size (more data included) results in a flatter surface with low variability but increased bias, while a small span size results in high variability and comparatively low bias. We determined the optimal amount of smoothing (optimal span size) by minimizing Akaike's Information Criterion (AIC) [17,18]. A large optimal span size indicates less spatial variability in prevalence compared to a small optimal span. We created a grid covering the study area that extended across the latitude and longitude coordinates of all addresses. We excluded grid points in regions of the study area where no children in our sample were born. The resulting grid comprised approximately 4,300 points.
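As a rough illustration of this modeling step, the sketch below fits crude binomial GAMs with a bivariate loess smooth of location over a range of candidate spans and keeps the span that minimizes AIC, using the gam package's lo() smoother. The data frame `dat` and its columns `case`, `x`, and `y` (a 0/1 outcome and coordinates) are hypothetical stand-ins, not the study's actual code.

    # Fit crude spatial GAMs over a range of loess span sizes and keep the
    # span minimizing AIC (a larger optimal span implies a flatter surface).
    library(gam)
    spans <- seq(0.10, 0.95, by = 0.05)
    fits <- lapply(spans, function(s)
      gam(case ~ lo(x, y, span = s), family = binomial, data = dat))
    aics <- sapply(fits, function(f) f$aic)  # glm-style AIC component
    best <- which.min(aics)
    crude <- fits[[best]]
    spans[best]  # the optimal span size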
To provide information in the interpretation of spatial patterns that could be driven by sparse data, we preformed a 2-step statistical screening procedure, as follows. First, we tested the null hypothesis that developmental disability does not depend on geographic location, generally (e.g. the predicted map surface is a flat horizontal plane). Residential locations of individuals were permuted 999 times while preserving their case status and covariates [as described in [20]. In each permutation, the GAM with the optimal span size from the original dataset was run and the global deviance statistic computed. We used a conservative p<0.025, which accounts for inflated type 1 error rates associated with using the optimal span size for the original dataset in permutations, to determine if location was a statistically significant predictor of disability [details in [31]. If the global statistic indicated that location was a statistically significant predictor of disability generally, we next evaluated location-specific (point-wise) departures from the null hypothesis using the same set of permutations [18,31]. Regions with significantly increased or decreased surveillance-recognized ASD or ID prevalence were defined as points ranking in the upper or lower 2.5% of the distribution of permuted prevalence ratios at each point, respectively [18]. Statistical analyses were performed in the R Package 2.12.02 (Vienna, Austria) using the gam library and a local scoring algorithm GAM estimation procedure and publically available statistical code [32,33]. All maps were created using ArcGIS 9.3 (version 9.3, Redlands, California). We estimated the log odds of ASD and ID using GAMs, an extension of linear regression models that can analyze binary outcome data and accommodate both parametric and non-parametric model components [17]. For non-parametric model terms, GAMs replace the traditional exposure term in an ordinary logistic regression with a smooth term (i.e. a term of best fit after adjusting for other covariates). In these analyses we applied a bivariate smooth to latitude and longitude coordinates and included all other covariates as parametric terms [17,18]. We used a locally weighted regression smoother (loess) which adapts to changes in the data density that are likely to occur in analyses of residential locations due to variability in population density. To predict prevalence the loess smoother utilizes information from nearby data points (weighting information based on its distance from the prediction point). The region or neighborhood from which data are drawn to predict prevalence is based on the percentage of data points in the neighborhood and is referred to as the span size. Choice of span size is a trade-off between bias and variability. A larger span size (more data included) results in a flatter surface with low variability but increased bias, while a small span size results in high variability and comparatively low bias. We determined the optimal amount of smoothing (optimal span size), by minimizing Akaike’s Information Criterion (AIC) [17,18]. A large optimal span size indicates less spatial variability in prevalence compared to a small optimal span. We created a grid covering the study area that extended across all latitude and longitude coordinates of all addresses. We excluded grid points in regions of the study area where no children in our sample were born. The resulting grid comprised approximately 4,300 points. 
Confounding: We adjusted models for several previously established ASD predictive factors, including year of birth; plurality; maternal age, race/ethnicity, and level of education; and report of tobacco use during pregnancy (categorizations in Table 1; [3,11,12,34-36]). Covariate data were nearly complete for these variables; however, 35 children (<1%) had missing data and were excluded from all analyses. Additionally, we investigated confounding by other factors potentially related to ASD risk or recognition, including method of delivery (vaginal delivery vs. cesarean section), marital status (married vs. unmarried), birth weight (<2500 g; 2501–3000 g; 3001–4000 g; >4000 g), and adequacy of prenatal care [assessed using the Adequacy of Prenatal Care Utilization Index; categorized as less than adequate (inadequate or intermediate prenatal care), adequate, or adequate-plus [37]].
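A sketch of the corresponding covariate-adjusted model, keeping the bivariate smooth and entering the established predictors as parametric terms, might look like the following; the covariate column names are hypothetical stand-ins for the birth-record variables, not the study's actual code.

    # Adjusted GAM: the bivariate loess smooth plus parametric covariates.
    adj_fits <- lapply(spans, function(s)
      gam(case ~ lo(x, y, span = s) + factor(birth_year) + plurality +
            mat_age_cat + mat_race_eth + mat_educ + tobacco_use,
          family = binomial, data = dat))
    adj_best <- which.min(sapply(adj_fits, function(f) f$aic))
    spans[adj_best]  # an increase over the crude optimal span suggests the
                     # covariates absorbed part of the spatial signal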
We used 3 approaches to fully assess the confounding influence of these factors. 1) We assessed the change in patterns of spatial variability using side-by-side visual inspection of maps before and after adjustment, comparing the areas of reduced and elevated prevalence ratios and the color intensities (which indicate the magnitude of the prevalence ratios). To assure equivalency, the number of observations and the span size were held constant [18]. 2) We also investigated changes in the model-selected optimal span size of analyses with and without adjustment. A smaller optimal span size, using less of the data, is selected when the data support more peaks and valleys in prevalence. It follows that a change in the optimal span size after the inclusion of a covariate can indicate spatial confounding. 3) Finally, to investigate the spatial variability of the predictive factors themselves (spatial associations between the factor and disability being necessary to cause confounding), we compared maps of each covariate to maps of the developmental disability.

Table 1. Selected characteristics of the birth cohort and children with ASD and ID in eight North Carolina counties in 2002, 2004, 2006 and 2008.

Robustness of analyses: Our final dataset contained some siblings. In addition to being genetically more similar to each other, siblings typically share the same residence. Including siblings living at the same address in analyses could induce spatial clustering as a result of familial (i.e. genetic) similarities rather than geographically-linked factors. To assess the robustness of our results to including a small number of sibling groups, we conducted secondary analyses including only one randomly selected child per family. Families were defined as children for whom the mother had the same first and maiden name and date of birth (obtained from birth certificates). Because information for fathers was missing or incomplete on many birth records, we did not attempt to identify paternal siblings.
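A minimal sketch of this sensitivity step, keying families on the maternal identifiers and keeping one randomly chosen child per family, is shown below; the column names are hypothetical stand-ins for the birth-certificate fields.

    # Keep one randomly selected child per maternal family key.
    key <- with(dat, paste(mother_first, mother_maiden, mother_dob, sep = "|"))
    set.seed(7)
    keep <- unlist(lapply(split(seq_len(nrow(dat)), key),
                          function(i) i[sample.int(length(i), 1)]))
    dat_single <- dat[keep, ]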
Results
Selected characteristics for the birth cohort and children with ASD or ID are displayed in Table 1. Children with ASD were more likely to be male, to have older mothers, and to have mothers with higher educational attainment. As has been reported previously [2,38], the age-8 prevalence of ASD in the region increased from 2002 to 2008; however, the prevalence of ID remained relatively stable (Table 1). Residential locations at birth for our study population are displayed in Figure 1 with slight alteration to preserve confidentiality. We found geographic variability in ASD prevalence across the study area in unadjusted analyses, as indicated by the global statistical test (Figure 2a; optimal span size=0.75; global P=0.003; Table 2). Point-wise statistical tests identified areas of increased ASD prevalence in portions of Alamance, Durham, and Orange Counties, and children living in these areas were 1.10 to 1.27 times as likely to have a surveillance-recognized ASD at age 8 years compared to children born in the study area as a whole. Conversely, children born in the western part of the region were 0.57 to 0.90 times as likely to have a surveillance-recognized ASD. Geographic variability in ASD prevalence was attenuated after adjusting for confounding by year of birth; plurality; maternal age, race, and level of education; and report of tobacco use during pregnancy, as indicated by the following results. In the adjusted model, the optimal span size (determined by minimizing the model AIC) increased, indicating less variability, and the global p-value was not consistent with departures from a flat pattern of ASD prevalence (Figure 2b; optimal span=0.95; global P=0.052; Table 2). The range of prevalence ratios across the study area was diminished: the adjusted model yielded PRs ranging from 0.72 to 1.12, in contrast to the unadjusted model, where PRs ranged from 0.57 to 1.27.

Figure 2. Geographic distribution of ASD prevalence relative to the birth cohort (n=11,034) and ASD (n=532): unadjusted (A) and fully adjusted (B) models are presented using the optimal span size of each (0.75 and 0.95, respectively). The unadjusted model is significantly different than flat (global P=0.003); areas of significantly increased and decreased prevalence are indicated by black contour bands. The adjusted model is not significantly different than flat (global P=0.052). Adjustment factors were year of birth; plurality; maternal age, race, and level of education; and report of tobacco use during pregnancy.

Table 2. Summary of spatial analyses. AF – Additional File.

Additionally adjusting for method of delivery, marital status, birthweight, and adequacy of prenatal care did not change the appearance of the maps for any of the 4 developmental disability groups considered, or the range of prevalence ratios observed across the study area; these variables were dropped from adjusted analyses. Spatial confounding was driven primarily by the higher educational attainment in Alamance, Chatham, Durham, and Orange Counties and, to a lesser extent, by the greater maternal age observed in the same counties; maps with adjustment for maternal age and education only had the same optimal span size (0.95), a similar range of prevalence ratios, and were visually very similar to the fully adjusted maps. When we examined the patterns of maternal educational attainment and age across the study area, both were similar to the unadjusted ASD pattern.
For example, mothers living in the areas where the unadjusted ASD prevalence was highest were approximately 1.75 times as likely to have completed college as women in the study area as a whole (Additional file 1: Figure S1). Among the 532 children with ASD, 214 (40.2%) also had an ID. The spatial patterns of both subgroups (ASD+ID and ASD-ID) were similar to each other and to the map of the combined group (all ASD); the locations of increased prevalence and the intensity of the increases were similar across all maps (Table 2; Additional file 1: Figure S2). Adjusting for spatial confounding resulted in flatter maps of ASD+ID and ASD-ID; however, greater attenuation was observed in the analysis of ASD-ID, in which the optimal span size increased from 0.70 to 0.95 after adjustment (indicating the surface was less variable, i.e. flatter, after adjustment; Table 2; unadjusted figures not shown). Additionally, the range of PRs was more substantially attenuated by adjustment in the analysis of ASD-ID. In our sample, the prevalence of ASD-ID was 1.33 times as high among children whose mothers had a college education or more (referent group=children of mothers with some postsecondary education; P=0.059); but these same mothers were less likely than mothers with some postsecondary education to have a child with ASD+ID (aPR=0.67, P=0.060). When ID was analyzed as a separate disability, unadjusted analyses revealed significant spatial variability in prevalence across the study area; several areas of increased and decreased ID prevalence were observed (Figure 3a; optimal span=0.10; global P<0.001). Geographic gradients in prevalence ratios were somewhat attenuated in the adjusted analyses; however, patterns of residual variability in the prevalence of ID remained after accounting for known predictors (Figure 3b; optimal span=0.30; global P=0.065). Although adjusting for spatial confounding resulted in a larger span size, the optimal span size of the adjusted analyses remained small (0.30), indicating spatial variability; the adjusted analyses did not, however, reach global statistical significance at the alpha=0.025 level.

Figure 3. Geographic distribution of ID prevalence relative to the birth cohort (n=11,034) and children with ID (n=916). Unadjusted (A) and fully adjusted (B) models are presented using the optimal span size of each (0.10 and 0.30, respectively). The unadjusted model is significantly different from flat (global P<0.001); areas of significantly increased and decreased risk are indicated by black contour bands. The adjusted model is not significantly different from flat (global P=0.065). Adjustment factors were year of birth; plurality; maternal age, race, and level of education; and report of tobacco use during pregnancy.

Only 269 sibling pairs were included in our analyses, and their impact on the analyses of both ASD and ID was negligible. When we randomly selected one child from each family, the pattern of spatial variability was quite similar to that of analyses including all children (results not shown).
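Continuing the hypothetical sketch above, the global permutation test behind the reported global P-values could be coded as follows: residential coordinates are shuffled 999 times while case status and covariates stay fixed, and the deviance explained by the location smooth is recomputed under each permutation. Object names (dat, full, reduced, best_span) carry over from the previous sketch and remain illustrative.

n_perm <- 999
obs_dev <- reduced$deviance - full$deviance    # deviance explained by location

perm_dev <- replicate(n_perm, {
  idx <- sample(nrow(dat))                     # permute locations only,
  perm <- dat                                  # keeping lon/lat pairs together
  perm$lon <- dat$lon[idx]
  perm$lat <- dat$lat[idx]
  f <- gam(asd ~ lo(lon, lat, span = best_span) + factor(birth_year) +
             plurality + maternal_age + maternal_race + maternal_educ +
             tobacco, family = binomial, data = perm)
  reduced$deviance - f$deviance
})

# One-sided p-value for the null of a flat surface; judged against
# alpha = 0.025 to offset optimism from reusing the span selected on the
# observed data.
p_global <- (1 + sum(perm_dev >= obs_dev)) / (n_perm + 1)

Point-wise contour bands can be derived from the same permutations by ranking each grid point's prevalence ratio against its permutation distribution and flagging points falling in the upper or lower 2.5%.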
Conclusions
Our results demonstrate the importance of adjusting for predictive and diagnostic factors that may spatially confound the search for novel, environmentally distributed risk factors for ASD. These adjustment methods can help the search for causes of developmental disabilities proceed more efficiently by aiding the interpretation of geographic patterns. Cumulatively, our results suggest that known predictive factors for ASD account for a large portion of the observed geographic variability in prevalence in central North Carolina. Although we did not identify spatial variability in ASD in NC after controlling for known predictive and risk factors, follow-up of these results in other regions may provide clues to ASD etiology.
[ "Background", "Study population", "ASD and ID classification", "Residential location and covariates", "Spatial analysis", "Confounding", "Robustness of analyses", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Autism spectrum disorders (ASD) are complex neurodevelopmental disorders characterized by impaired social interaction and communication, and restrictive and repetitive behavior\n[1]. Estimates in 2008 indicate that approximately 1 in 88 children have ASD and that the prevalence of documented ASD is on the rise\n[2]. The causes for ASD remain largely unknown and widely debated\n[3,4]. Environmental exposures are hypothesized to contribute to ASD etiology\n[5]; however, identifying exposures of concern has been complicated by the relative rarity of ASD, the extensive number of candidate exposures, and the lack of exposure measurements during early life, the developmentally relevant time period for ASD. Thus, studies have explored the geographic distribution of ASD as a means for generating hypotheses about spatially distributed environmental exposures. Additionally, studies have used geographically-based exposure assignment to evaluate the impact of specific exposures such as air pollutants, pesticides, and hazardous wastes on ASD risk\n[6-9].\nOne problem with evaluating the geographically non-random prevalence of ASD for etiologic purposes lies in difficulties disentangling the geographic distribution of other factors associated with diagnosis\n[10,11]. For example, higher maternal education is associated with increased ASD diagnosis in the United States\n[12,13] but not in some European countries\n[14]. These results suggest that maternal education is a factor in promoting recognition of ASD, but not necessarily the occurrence of ASD. Identifying patterns related to ASD diagnosis may be helpful for public health services allocation; however, to generate hypotheses about etiology, we must distinguish diagnostic patterns from patterns of ASD occurrence. For example, prioritizing investigations of a geographically-based environmental cause may be unwarranted if the observed spatial pattern is driven by maternal education (i.e. spatial confounding).\nTwo previous studies investigated geographic variability of ASD as a means of identifying environmental exposures related to ASD prevalence\n[15,16]. Both reported that children born in certain regions of California were more likely to have a recognized ASD than children born in other parts of the state\n[15,16]. The authors attributed their findings to regional differences in the underlying population (i.e. demographic and socioeconomic factors) or geographic variability of environmental exposures; but were unable to disentangle factors promoting diagnosis from environmental exposures because they did not adjust for potentially important individual-level spatial confounding\n[15,16].\nIn order to determine the potential for environmentally distributed exposure to be associated with ASD in central North Carolina (such as contaminants in air or water), we first explored whether spatial differences in ASD prevalence existed and then whether differences remained after accounting for spatially distributed covariates associated with ASD risk and diagnosis. We used a method of spatial epidemiology that applies generalized additive models (GAMs) to assess the spatial variation of disease in a region while simultaneously adjusting for other geographically distributed individual-level factors\n[17,18] such as maternal education, age, and smoking. Combining GAMs and geographic information systems allowed us to predict a continuous surface of ASD prevalence across our study area. 
Our research serves not only to expand the consideration of spatial patterns of ASD to geographic regions other than California but also to improve the utility of such studies by directly examining how adjustment for known risk and predictive factors influences geographic patterns. Patterns remaining after accounting for factors that influence ASD recognition may better reveal the distribution of novel geographically distributed etiologic factors impacting ASD prevalence.", "The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active, population-based surveillance program that monitors the prevalence of developmental disabilities among children aged 8 years, an age by which most children with ASD have been evaluated\n[19], in selected geographic regions across the United States\n[20]. The ADDM Network conducts standardized review of medical and educational records and trained clinicians determine whether standardized case definitions for ASD and intellectual disability (ID) are met\n[20]. Our analyses utilized data from the North Carolina ADDM site (NC-ADDM), which began biennial surveillance in 2002. Analyses were restricted to children born in the 8 counties that were under surveillance during all study years (2002–2008).\nTo represent the underlying population, we randomly selected a 15% sample of birth records for children born in the same study counties and years as children included in ADDM (biennial 1994–2000; n=11,908, representing a region averaging 20,000 births per year). Figure\n1 provides orientation to the population distribution in NC, where red dots indicate children with developmental disabilities (ASD or ID) and blue dots indicate children randomly sampled from the entire central NC area. We excluded children who were adopted because information on birth address was missing and those who died in infancy because they were not part of the risk set for development disabilities (n=93; <1%). NC-ADDM and our current analyses were reviewed and approved by the Institutional Review Board at the University of North Carolina-Chapel Hill.\nEight county central North Carolina study area. The residential addresses at birth for the birth cohort (blue points) and children with ASD or ID (red points) born in 1994, 1996, 1998 and 2000 are displayed with altered locations to preserve confidentiality.", "To fully explore the impact of spatial confounding on the geographic distribution of developmental disability, we considered 4 different disability groups: all children with an ASD, 2 subgroups within all ASD classified by the co-occurrence of ID (ASD+ID and ASD-ID), and the independent group of all children with ID without regard to ASD status. Examining these independent and cross-classified groups provided a more nuanced disentanglement of factors which may act differently to promote ASD, versus ID, recognition, as follows. Diagnosis of ASD requires comprehensive evaluation of a constellation of behaviors that can be more involved than a singular evaluation assessment of Intelligence Quotient (IQ) to determine ID. Yet, ASD often occurs with ID, frequently presenting more severe disability that is recognized at an earlier age. In addition, Durkin et al. (2010) reported a differential socioeconomic (SES) gradient in ASD prevalence by the presence or absence of ID; SES acts as a stronger risk factor for ASD-ID compared to ASD+ID\n[3]. 
Because we expect SES to vary spatially and be a spatial confounder, analyzing ASD without respect to ID may mask the true spatial patterning of disease.\nChildren met the standardized case definition for ASD if clinician reviewers determined their developmental evaluation records indicated behaviors consistent with ASD, based on the Diagnostic and Statistical Manual IVTM criteria for Autistic Disorder, Asperger Disorder, or Pervasive Developmental Disorder Not-Otherwise-Specified\n[21]. Children met the standardized definition for ID if clinician review of developmental evaluations determined they had an IQ ≤ 70 on the most recently administered psychometrics test such as the Battelle–cognitive domain\n[22], Differential Ability Scales\n[23], Stanford-Binet–4th ed.\n[24], Wechsler Preschool and Primary Scale of Intelligence\n[25], and the Wechsler Intelligence Scale for Children-III\n[26], or, in the absence of test scores, a written statement in the records indicated the presence of a severe or profound intellectual disability\n[20]. Our analyses included 561 children with ASD, who were further classified into two subgroups: children with ASD without ID (ASD-ID, n=330), and children with ASD and ID (ASD+ID, n=231). As a comparison to the ASD analyses, we conducted additional analyses investigating the spatial variability in the prevalence of ID (n=1,028) regardless of ASD status.", "Surveillance data were linked to birth records to obtain the residential address and covariate information at the time of birth of children with ASD and ID. Birth data were chosen to reflect the most etiologically relevant time period for brain development\n[27].\nWe combined several methods to geocode (i.e. assign latitude and longitude coordinates) the residential birth addresses of study children to improve geocoding accuracy and reduce positional errors\n[28]. First, we ran all addresses through the ZP4 software which cleans and updates addresses using U.S. Postal Service databases (e.g. converting rural routes to E-911 street names) (version expiring March 2011, Semphoria; Memphis Tennessee). We then cleaned all addresses individually and geocoded them using ArcGIS (version 9.3, Redlands, California) and U.S. Census Tigerline files\n[29]. For unmatched addresses, we used Google Maps to locate the residence where possible\n[30]. Using these methods, we successfully geocoded 12,299 (93.41%) of the 13,167 residential addresses. Of the addresses we were unable to geocode, 379 (2.88%) were post office boxes and 489 (3.71%) were addresses that were either incomplete or that we were not able to geocode to a specific location. Geocoding success was similar for the birth cohort and children with ASD and ID.", "We estimated the log odds of ASD and ID using GAMs, an extension of linear regression models that can analyze binary outcome data and accommodate both parametric and non-parametric model components\n[17]. For non-parametric model terms, GAMs replace the traditional exposure term in an ordinary logistic regression with a smooth term (i.e. a term of best fit after adjusting for other covariates). In these analyses we applied a bivariate smooth to latitude and longitude coordinates and included all other covariates as parametric terms\n[17,18].\nWe used a locally weighted regression smoother (loess) which adapts to changes in the data density that are likely to occur in analyses of residential locations due to variability in population density. 
To predict prevalence the loess smoother utilizes information from nearby data points (weighting information based on its distance from the prediction point). The region or neighborhood from which data are drawn to predict prevalence is based on the percentage of data points in the neighborhood and is referred to as the span size. Choice of span size is a trade-off between bias and variability. A larger span size (more data included) results in a flatter surface with low variability but increased bias, while a small span size results in high variability and comparatively low bias. We determined the optimal amount of smoothing (optimal span size), by minimizing Akaike’s Information Criterion (AIC)\n[17,18]. A large optimal span size indicates less spatial variability in prevalence compared to a small optimal span.\nWe created a grid covering the study area that extended across all latitude and longitude coordinates of all addresses. We excluded grid points in regions of the study area where no children in our sample were born. The resulting grid comprised approximately 4,300 points. We predicted the log odds for each of the 4 developmental outcome groups (e.g. ASD) at each point on the grid and calculated an odds ratio using the entire study area as the referent group; the odds at each point was divided by the odds from a reduced model which omitted the latitude and longitude smooth term\n[18]. We did not remove children with developmental outcomes from the random selection of births drawn to represent the underlying distribution of births and, consequently, some cases were included in the denominator. As a result, the odds ratios from these models are mathematically equivalent to prevalence ratios. We mapped prevalence ratios using a continuous color scale (blue to red) and a constant scale range for all maps with the same outcome (i.e. ASD; ASD+ID; ASD-ID; and ID).\nTo provide information in the interpretation of spatial patterns that could be driven by sparse data, we preformed a 2-step statistical screening procedure, as follows. First, we tested the null hypothesis that developmental disability does not depend on geographic location, generally (e.g. the predicted map surface is a flat horizontal plane). Residential locations of individuals were permuted 999 times while preserving their case status and covariates [as described in\n[20]. In each permutation, the GAM with the optimal span size from the original dataset was run and the global deviance statistic computed. We used a conservative p<0.025, which accounts for inflated type 1 error rates associated with using the optimal span size for the original dataset in permutations, to determine if location was a statistically significant predictor of disability [details in\n[31]. If the global statistic indicated that location was a statistically significant predictor of disability generally, we next evaluated location-specific (point-wise) departures from the null hypothesis using the same set of permutations\n[18,31]. Regions with significantly increased or decreased surveillance-recognized ASD or ID prevalence were defined as points ranking in the upper or lower 2.5% of the distribution of permuted prevalence ratios at each point, respectively\n[18].\nStatistical analyses were performed in the R Package 2.12.02 (Vienna, Austria) using the gam library and a local scoring algorithm GAM estimation procedure and publically available statistical code\n[32,33]. 
All maps were created using ArcGIS 9.3 (version 9.3, Redlands, California).", "We adjusted models for several previously established ASD predictive factors including year of birth; plurality; maternal age, race/ethnicity, and level of education; and report of tobacco use during pregnancy (categorizations in Table\n1,\n[3,11,12,34-36]). Covariate data were nearly complete for these variables; however 35 children (<1%) had missing data and were excluded from all analyses. Additionally, we investigated confounding by other factors potentially related to ASD risk or recognition including method of delivery (vaginal delivery vs. cesarean section), marital status (married vs. unmarried), birth weight (<2500 g; 2501–3000 g; 3001–4000 g; >4000 g), and adequacy of prenatal care [assessed using the Adequacy of Prenatal Care Utilization Index; categorized as less than adequate (inadequate or intermediate prenatal care); adequate; adequate-plus\n[37]]. We used 3 approaches to fully assess the confounding influence of these factors: 1) We assessed the change in patterns of spatial variability, using a side by side visual inspection of maps before and after adjustment, comparing the areas of reduced and elevated prevalence ratios and the color intensities (indicating the magnitude of prevalence ratios). To assure equivalency, the number of observations and span size were held constant\n[18]. 2) We also investigated changes in the model-selected optimal span size of analyses with and without adjustment. A smaller optimal span size, using less of the data, is selected when the data supports more peaks and valleys in prevalence. It follows that a change in the optimal span size after the inclusion of a covariate can indicate spatial confounding. 3) Finally, to investigate the spatial variability of the predictive factors themselves (spatial associations between the factor and disability, necessary to cause confounding), we compared maps of each covariate to maps of the developmental disability.\nSelected Characteristics of the Birth Cohort and Children with ASD and ID in Eight North Carolina Counties in 2002, 2004, 2006 and 2008", "Our final dataset contained some siblings. In addition to being genetically more similar to each other, siblings typically share the same residence. Including siblings living at the same address in analyses could induce spatial clustering as a result of familial (i.e. genetic) similarities rather than geographically-linked factors. To assess the robustness of our results to including a small number of sibling groups, we conducted secondary analyses including only one randomly selected child per family. Families were defined as children for whom the mother had the same first and maiden name and date of birth (obtained from birth certificates). 
Abbreviations
AF: Additional file; aPR: Adjusted prevalence ratio; ADDM Network: Autism and Developmental Disabilities Monitoring Network; AIC: Akaike’s information criterion; ASD: Autism spectrum disorders; ASD+ID: Autism spectrum disorders with intellectual disability; ASD-ID: Autism spectrum disorders without intellectual disability; ID: Intellectual disability; GAM: Generalized additive model; GIS: Geographic information systems; PR: Prevalence ratio; SES: Socioeconomic status.

Competing interests
All authors declare that they have no actual or potential competing interests.

Authors’ contributions
KH assisted in conceiving the study, conducted the analyses, and drafted the manuscript. AK assisted in conceiving the study and the analysis plan, and assisted in manuscript editing. VV collaborated on statistical issues, and assisted in the analysis plan and manuscript editing. JD, the principal investigator of the North Carolina Autism and Developmental Disabilities Monitoring Network, assisted in the analysis plan and manuscript editing. All authors read and approved the final manuscript.
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "ASD and ID classification", "Residential location and covariates", "Spatial analysis", "Confounding", "Robustness of analyses", "Results", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Supplementary Material" ]
[ "Autism spectrum disorders (ASD) are complex neurodevelopmental disorders characterized by impaired social interaction and communication, and restrictive and repetitive behavior\n[1]. Estimates in 2008 indicate that approximately 1 in 88 children have ASD and that the prevalence of documented ASD is on the rise\n[2]. The causes for ASD remain largely unknown and widely debated\n[3,4]. Environmental exposures are hypothesized to contribute to ASD etiology\n[5]; however, identifying exposures of concern has been complicated by the relative rarity of ASD, the extensive number of candidate exposures, and the lack of exposure measurements during early life, the developmentally relevant time period for ASD. Thus, studies have explored the geographic distribution of ASD as a means for generating hypotheses about spatially distributed environmental exposures. Additionally, studies have used geographically-based exposure assignment to evaluate the impact of specific exposures such as air pollutants, pesticides, and hazardous wastes on ASD risk\n[6-9].\nOne problem with evaluating the geographically non-random prevalence of ASD for etiologic purposes lies in difficulties disentangling the geographic distribution of other factors associated with diagnosis\n[10,11]. For example, higher maternal education is associated with increased ASD diagnosis in the United States\n[12,13] but not in some European countries\n[14]. These results suggest that maternal education is a factor in promoting recognition of ASD, but not necessarily the occurrence of ASD. Identifying patterns related to ASD diagnosis may be helpful for public health services allocation; however, to generate hypotheses about etiology, we must distinguish diagnostic patterns from patterns of ASD occurrence. For example, prioritizing investigations of a geographically-based environmental cause may be unwarranted if the observed spatial pattern is driven by maternal education (i.e. spatial confounding).\nTwo previous studies investigated geographic variability of ASD as a means of identifying environmental exposures related to ASD prevalence\n[15,16]. Both reported that children born in certain regions of California were more likely to have a recognized ASD than children born in other parts of the state\n[15,16]. The authors attributed their findings to regional differences in the underlying population (i.e. demographic and socioeconomic factors) or geographic variability of environmental exposures; but were unable to disentangle factors promoting diagnosis from environmental exposures because they did not adjust for potentially important individual-level spatial confounding\n[15,16].\nIn order to determine the potential for environmentally distributed exposure to be associated with ASD in central North Carolina (such as contaminants in air or water), we first explored whether spatial differences in ASD prevalence existed and then whether differences remained after accounting for spatially distributed covariates associated with ASD risk and diagnosis. We used a method of spatial epidemiology that applies generalized additive models (GAMs) to assess the spatial variation of disease in a region while simultaneously adjusting for other geographically distributed individual-level factors\n[17,18] such as maternal education, age, and smoking. Combining GAMs and geographic information systems allowed us to predict a continuous surface of ASD prevalence across our study area. 
Our research serves not only to expand the consideration of spatial patterns of ASD to geographic regions other than California but also to improve the utility of such studies by directly examining how adjustment for known risk and predictive factors influences geographic patterns. Patterns remaining after accounting for factors that influence ASD recognition may better reveal the distribution of novel geographically distributed etiologic factors impacting ASD prevalence.", " Study population The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active, population-based surveillance program that monitors the prevalence of developmental disabilities among children aged 8 years, an age by which most children with ASD have been evaluated\n[19], in selected geographic regions across the United States\n[20]. The ADDM Network conducts standardized review of medical and educational records and trained clinicians determine whether standardized case definitions for ASD and intellectual disability (ID) are met\n[20]. Our analyses utilized data from the North Carolina ADDM site (NC-ADDM), which began biennial surveillance in 2002. Analyses were restricted to children born in the 8 counties that were under surveillance during all study years (2002–2008).\nTo represent the underlying population, we randomly selected a 15% sample of birth records for children born in the same study counties and years as children included in ADDM (biennial 1994–2000; n=11,908, representing a region averaging 20,000 births per year). Figure\n1 provides orientation to the population distribution in NC, where red dots indicate children with developmental disabilities (ASD or ID) and blue dots indicate children randomly sampled from the entire central NC area. We excluded children who were adopted because information on birth address was missing and those who died in infancy because they were not part of the risk set for development disabilities (n=93; <1%). NC-ADDM and our current analyses were reviewed and approved by the Institutional Review Board at the University of North Carolina-Chapel Hill.\nEight county central North Carolina study area. The residential addresses at birth for the birth cohort (blue points) and children with ASD or ID (red points) born in 1994, 1996, 1998 and 2000 are displayed with altered locations to preserve confidentiality.\nThe Autism and Developmental Disabilities Monitoring (ADDM) Network is an active, population-based surveillance program that monitors the prevalence of developmental disabilities among children aged 8 years, an age by which most children with ASD have been evaluated\n[19], in selected geographic regions across the United States\n[20]. The ADDM Network conducts standardized review of medical and educational records and trained clinicians determine whether standardized case definitions for ASD and intellectual disability (ID) are met\n[20]. Our analyses utilized data from the North Carolina ADDM site (NC-ADDM), which began biennial surveillance in 2002. Analyses were restricted to children born in the 8 counties that were under surveillance during all study years (2002–2008).\nTo represent the underlying population, we randomly selected a 15% sample of birth records for children born in the same study counties and years as children included in ADDM (biennial 1994–2000; n=11,908, representing a region averaging 20,000 births per year). 
Figure\n1 provides orientation to the population distribution in NC, where red dots indicate children with developmental disabilities (ASD or ID) and blue dots indicate children randomly sampled from the entire central NC area. We excluded children who were adopted because information on birth address was missing and those who died in infancy because they were not part of the risk set for development disabilities (n=93; <1%). NC-ADDM and our current analyses were reviewed and approved by the Institutional Review Board at the University of North Carolina-Chapel Hill.\nEight county central North Carolina study area. The residential addresses at birth for the birth cohort (blue points) and children with ASD or ID (red points) born in 1994, 1996, 1998 and 2000 are displayed with altered locations to preserve confidentiality.\n ASD and ID classification To fully explore the impact of spatial confounding on the geographic distribution of developmental disability, we considered 4 different disability groups: all children with an ASD, 2 subgroups within all ASD classified by the co-occurrence of ID (ASD+ID and ASD-ID), and the independent group of all children with ID without regard to ASD status. Examining these independent and cross-classified groups provided a more nuanced disentanglement of factors which may act differently to promote ASD, versus ID, recognition, as follows. Diagnosis of ASD requires comprehensive evaluation of a constellation of behaviors that can be more involved than a singular evaluation assessment of Intelligence Quotient (IQ) to determine ID. Yet, ASD often occurs with ID, frequently presenting more severe disability that is recognized at an earlier age. In addition, Durkin et al. (2010) reported a differential socioeconomic (SES) gradient in ASD prevalence by the presence or absence of ID; SES acts as a stronger risk factor for ASD-ID compared to ASD+ID\n[3]. Because we expect SES to vary spatially and be a spatial confounder, analyzing ASD without respect to ID may mask the true spatial patterning of disease.\nChildren met the standardized case definition for ASD if clinician reviewers determined their developmental evaluation records indicated behaviors consistent with ASD, based on the Diagnostic and Statistical Manual IVTM criteria for Autistic Disorder, Asperger Disorder, or Pervasive Developmental Disorder Not-Otherwise-Specified\n[21]. Children met the standardized definition for ID if clinician review of developmental evaluations determined they had an IQ ≤ 70 on the most recently administered psychometrics test such as the Battelle–cognitive domain\n[22], Differential Ability Scales\n[23], Stanford-Binet–4th ed.\n[24], Wechsler Preschool and Primary Scale of Intelligence\n[25], and the Wechsler Intelligence Scale for Children-III\n[26], or, in the absence of test scores, a written statement in the records indicated the presence of a severe or profound intellectual disability\n[20]. Our analyses included 561 children with ASD, who were further classified into two subgroups: children with ASD without ID (ASD-ID, n=330), and children with ASD and ID (ASD+ID, n=231). 
As a comparison to the ASD analyses, we conducted additional analyses investigating the spatial variability in the prevalence of ID (n=1,028) regardless of ASD status.\nTo fully explore the impact of spatial confounding on the geographic distribution of developmental disability, we considered 4 different disability groups: all children with an ASD, 2 subgroups within all ASD classified by the co-occurrence of ID (ASD+ID and ASD-ID), and the independent group of all children with ID without regard to ASD status. Examining these independent and cross-classified groups provided a more nuanced disentanglement of factors which may act differently to promote ASD, versus ID, recognition, as follows. Diagnosis of ASD requires comprehensive evaluation of a constellation of behaviors that can be more involved than a singular evaluation assessment of Intelligence Quotient (IQ) to determine ID. Yet, ASD often occurs with ID, frequently presenting more severe disability that is recognized at an earlier age. In addition, Durkin et al. (2010) reported a differential socioeconomic (SES) gradient in ASD prevalence by the presence or absence of ID; SES acts as a stronger risk factor for ASD-ID compared to ASD+ID\n[3]. Because we expect SES to vary spatially and be a spatial confounder, analyzing ASD without respect to ID may mask the true spatial patterning of disease.\nChildren met the standardized case definition for ASD if clinician reviewers determined their developmental evaluation records indicated behaviors consistent with ASD, based on the Diagnostic and Statistical Manual IVTM criteria for Autistic Disorder, Asperger Disorder, or Pervasive Developmental Disorder Not-Otherwise-Specified\n[21]. Children met the standardized definition for ID if clinician review of developmental evaluations determined they had an IQ ≤ 70 on the most recently administered psychometrics test such as the Battelle–cognitive domain\n[22], Differential Ability Scales\n[23], Stanford-Binet–4th ed.\n[24], Wechsler Preschool and Primary Scale of Intelligence\n[25], and the Wechsler Intelligence Scale for Children-III\n[26], or, in the absence of test scores, a written statement in the records indicated the presence of a severe or profound intellectual disability\n[20]. Our analyses included 561 children with ASD, who were further classified into two subgroups: children with ASD without ID (ASD-ID, n=330), and children with ASD and ID (ASD+ID, n=231). As a comparison to the ASD analyses, we conducted additional analyses investigating the spatial variability in the prevalence of ID (n=1,028) regardless of ASD status.\n Residential location and covariates Surveillance data were linked to birth records to obtain the residential address and covariate information at the time of birth of children with ASD and ID. Birth data were chosen to reflect the most etiologically relevant time period for brain development\n[27].\nWe combined several methods to geocode (i.e. assign latitude and longitude coordinates) the residential birth addresses of study children to improve geocoding accuracy and reduce positional errors\n[28]. First, we ran all addresses through the ZP4 software which cleans and updates addresses using U.S. Postal Service databases (e.g. converting rural routes to E-911 street names) (version expiring March 2011, Semphoria; Memphis Tennessee). We then cleaned all addresses individually and geocoded them using ArcGIS (version 9.3, Redlands, California) and U.S. Census Tigerline files\n[29]. 
For unmatched addresses, we used Google Maps to locate the residence where possible\n[30]. Using these methods, we successfully geocoded 12,299 (93.41%) of the 13,167 residential addresses. Of the addresses we were unable to geocode, 379 (2.88%) were post office boxes and 489 (3.71%) were addresses that were either incomplete or that we were not able to geocode to a specific location. Geocoding success was similar for the birth cohort and children with ASD and ID.\nSurveillance data were linked to birth records to obtain the residential address and covariate information at the time of birth of children with ASD and ID. Birth data were chosen to reflect the most etiologically relevant time period for brain development\n[27].\nWe combined several methods to geocode (i.e. assign latitude and longitude coordinates) the residential birth addresses of study children to improve geocoding accuracy and reduce positional errors\n[28]. First, we ran all addresses through the ZP4 software which cleans and updates addresses using U.S. Postal Service databases (e.g. converting rural routes to E-911 street names) (version expiring March 2011, Semphoria; Memphis Tennessee). We then cleaned all addresses individually and geocoded them using ArcGIS (version 9.3, Redlands, California) and U.S. Census Tigerline files\n[29]. For unmatched addresses, we used Google Maps to locate the residence where possible\n[30]. Using these methods, we successfully geocoded 12,299 (93.41%) of the 13,167 residential addresses. Of the addresses we were unable to geocode, 379 (2.88%) were post office boxes and 489 (3.71%) were addresses that were either incomplete or that we were not able to geocode to a specific location. Geocoding success was similar for the birth cohort and children with ASD and ID.\n Spatial analysis We estimated the log odds of ASD and ID using GAMs, an extension of linear regression models that can analyze binary outcome data and accommodate both parametric and non-parametric model components\n[17]. For non-parametric model terms, GAMs replace the traditional exposure term in an ordinary logistic regression with a smooth term (i.e. a term of best fit after adjusting for other covariates). In these analyses we applied a bivariate smooth to latitude and longitude coordinates and included all other covariates as parametric terms\n[17,18].\nWe used a locally weighted regression smoother (loess) which adapts to changes in the data density that are likely to occur in analyses of residential locations due to variability in population density. To predict prevalence the loess smoother utilizes information from nearby data points (weighting information based on its distance from the prediction point). The region or neighborhood from which data are drawn to predict prevalence is based on the percentage of data points in the neighborhood and is referred to as the span size. Choice of span size is a trade-off between bias and variability. A larger span size (more data included) results in a flatter surface with low variability but increased bias, while a small span size results in high variability and comparatively low bias. We determined the optimal amount of smoothing (optimal span size), by minimizing Akaike’s Information Criterion (AIC)\n[17,18]. A large optimal span size indicates less spatial variability in prevalence compared to a small optimal span.\nWe created a grid covering the study area that extended across all latitude and longitude coordinates of all addresses. 
We excluded grid points in regions of the study area where no children in our sample were born. The resulting grid comprised approximately 4,300 points. We predicted the log odds for each of the 4 developmental outcome groups (e.g. ASD) at each point on the grid and calculated an odds ratio using the entire study area as the referent group; the odds at each point was divided by the odds from a reduced model which omitted the latitude and longitude smooth term\n[18]. We did not remove children with developmental outcomes from the random selection of births drawn to represent the underlying distribution of births and, consequently, some cases were included in the denominator. As a result, the odds ratios from these models are mathematically equivalent to prevalence ratios. We mapped prevalence ratios using a continuous color scale (blue to red) and a constant scale range for all maps with the same outcome (i.e. ASD; ASD+ID; ASD-ID; and ID).\nTo provide information in the interpretation of spatial patterns that could be driven by sparse data, we preformed a 2-step statistical screening procedure, as follows. First, we tested the null hypothesis that developmental disability does not depend on geographic location, generally (e.g. the predicted map surface is a flat horizontal plane). Residential locations of individuals were permuted 999 times while preserving their case status and covariates [as described in\n[20]. In each permutation, the GAM with the optimal span size from the original dataset was run and the global deviance statistic computed. We used a conservative p<0.025, which accounts for inflated type 1 error rates associated with using the optimal span size for the original dataset in permutations, to determine if location was a statistically significant predictor of disability [details in\n[31]. If the global statistic indicated that location was a statistically significant predictor of disability generally, we next evaluated location-specific (point-wise) departures from the null hypothesis using the same set of permutations\n[18,31]. Regions with significantly increased or decreased surveillance-recognized ASD or ID prevalence were defined as points ranking in the upper or lower 2.5% of the distribution of permuted prevalence ratios at each point, respectively\n[18].\nStatistical analyses were performed in the R Package 2.12.02 (Vienna, Austria) using the gam library and a local scoring algorithm GAM estimation procedure and publically available statistical code\n[32,33]. All maps were created using ArcGIS 9.3 (version 9.3, Redlands, California).\nWe estimated the log odds of ASD and ID using GAMs, an extension of linear regression models that can analyze binary outcome data and accommodate both parametric and non-parametric model components\n[17]. For non-parametric model terms, GAMs replace the traditional exposure term in an ordinary logistic regression with a smooth term (i.e. a term of best fit after adjusting for other covariates). In these analyses we applied a bivariate smooth to latitude and longitude coordinates and included all other covariates as parametric terms\n[17,18].\nWe used a locally weighted regression smoother (loess) which adapts to changes in the data density that are likely to occur in analyses of residential locations due to variability in population density. To predict prevalence the loess smoother utilizes information from nearby data points (weighting information based on its distance from the prediction point). 
The region or neighborhood from which data are drawn to predict prevalence is based on the percentage of data points in the neighborhood and is referred to as the span size. Choice of span size is a trade-off between bias and variability. A larger span size (more data included) results in a flatter surface with low variability but increased bias, while a small span size results in high variability and comparatively low bias. We determined the optimal amount of smoothing (optimal span size), by minimizing Akaike’s Information Criterion (AIC)\n[17,18]. A large optimal span size indicates less spatial variability in prevalence compared to a small optimal span.\nWe created a grid covering the study area that extended across all latitude and longitude coordinates of all addresses. We excluded grid points in regions of the study area where no children in our sample were born. The resulting grid comprised approximately 4,300 points. We predicted the log odds for each of the 4 developmental outcome groups (e.g. ASD) at each point on the grid and calculated an odds ratio using the entire study area as the referent group; the odds at each point was divided by the odds from a reduced model which omitted the latitude and longitude smooth term\n[18]. We did not remove children with developmental outcomes from the random selection of births drawn to represent the underlying distribution of births and, consequently, some cases were included in the denominator. As a result, the odds ratios from these models are mathematically equivalent to prevalence ratios. We mapped prevalence ratios using a continuous color scale (blue to red) and a constant scale range for all maps with the same outcome (i.e. ASD; ASD+ID; ASD-ID; and ID).\nTo provide information in the interpretation of spatial patterns that could be driven by sparse data, we preformed a 2-step statistical screening procedure, as follows. First, we tested the null hypothesis that developmental disability does not depend on geographic location, generally (e.g. the predicted map surface is a flat horizontal plane). Residential locations of individuals were permuted 999 times while preserving their case status and covariates [as described in\n[20]. In each permutation, the GAM with the optimal span size from the original dataset was run and the global deviance statistic computed. We used a conservative p<0.025, which accounts for inflated type 1 error rates associated with using the optimal span size for the original dataset in permutations, to determine if location was a statistically significant predictor of disability [details in\n[31]. If the global statistic indicated that location was a statistically significant predictor of disability generally, we next evaluated location-specific (point-wise) departures from the null hypothesis using the same set of permutations\n[18,31]. Regions with significantly increased or decreased surveillance-recognized ASD or ID prevalence were defined as points ranking in the upper or lower 2.5% of the distribution of permuted prevalence ratios at each point, respectively\n[18].\nStatistical analyses were performed in the R Package 2.12.02 (Vienna, Austria) using the gam library and a local scoring algorithm GAM estimation procedure and publically available statistical code\n[32,33]. 
All maps were created using ArcGIS 9.3 (version 9.3, Redlands, California).\n Confounding We adjusted models for several previously established ASD predictive factors including year of birth; plurality; maternal age, race/ethnicity, and level of education; and report of tobacco use during pregnancy (categorizations in Table\n1,\n[3,11,12,34-36]). Covariate data were nearly complete for these variables; however 35 children (<1%) had missing data and were excluded from all analyses. Additionally, we investigated confounding by other factors potentially related to ASD risk or recognition including method of delivery (vaginal delivery vs. cesarean section), marital status (married vs. unmarried), birth weight (<2500 g; 2501–3000 g; 3001–4000 g; >4000 g), and adequacy of prenatal care [assessed using the Adequacy of Prenatal Care Utilization Index; categorized as less than adequate (inadequate or intermediate prenatal care); adequate; adequate-plus\n[37]]. We used 3 approaches to fully assess the confounding influence of these factors: 1) We assessed the change in patterns of spatial variability, using a side by side visual inspection of maps before and after adjustment, comparing the areas of reduced and elevated prevalence ratios and the color intensities (indicating the magnitude of prevalence ratios). To assure equivalency, the number of observations and span size were held constant\n[18]. 2) We also investigated changes in the model-selected optimal span size of analyses with and without adjustment. A smaller optimal span size, using less of the data, is selected when the data supports more peaks and valleys in prevalence. It follows that a change in the optimal span size after the inclusion of a covariate can indicate spatial confounding. 3) Finally, to investigate the spatial variability of the predictive factors themselves (spatial associations between the factor and disability, necessary to cause confounding), we compared maps of each covariate to maps of the developmental disability.\nSelected Characteristics of the Birth Cohort and Children with ASD and ID in Eight North Carolina Counties in 2002, 2004, 2006 and 2008\nWe adjusted models for several previously established ASD predictive factors including year of birth; plurality; maternal age, race/ethnicity, and level of education; and report of tobacco use during pregnancy (categorizations in Table\n1,\n[3,11,12,34-36]). Covariate data were nearly complete for these variables; however 35 children (<1%) had missing data and were excluded from all analyses. Additionally, we investigated confounding by other factors potentially related to ASD risk or recognition including method of delivery (vaginal delivery vs. cesarean section), marital status (married vs. unmarried), birth weight (<2500 g; 2501–3000 g; 3001–4000 g; >4000 g), and adequacy of prenatal care [assessed using the Adequacy of Prenatal Care Utilization Index; categorized as less than adequate (inadequate or intermediate prenatal care); adequate; adequate-plus\n[37]]. We used 3 approaches to fully assess the confounding influence of these factors: 1) We assessed the change in patterns of spatial variability, using a side by side visual inspection of maps before and after adjustment, comparing the areas of reduced and elevated prevalence ratios and the color intensities (indicating the magnitude of prevalence ratios). To assure equivalency, the number of observations and span size were held constant\n[18]. 
2) We also investigated changes in the model-selected optimal span size of analyses with and without adjustment. A smaller optimal span size, using less of the data, is selected when the data supports more peaks and valleys in prevalence. It follows that a change in the optimal span size after the inclusion of a covariate can indicate spatial confounding. 3) Finally, to investigate the spatial variability of the predictive factors themselves (spatial associations between the factor and disability, necessary to cause confounding), we compared maps of each covariate to maps of the developmental disability.\nSelected Characteristics of the Birth Cohort and Children with ASD and ID in Eight North Carolina Counties in 2002, 2004, 2006 and 2008\n Robustness of analyses Our final dataset contained some siblings. In addition to being genetically more similar to each other, siblings typically share the same residence. Including siblings living at the same address in analyses could induce spatial clustering as a result of familial (i.e. genetic) similarities rather than geographically-linked factors. To assess the robustness of our results to including a small number of sibling groups, we conducted secondary analyses including only one randomly selected child per family. Families were defined as children for whom the mother had the same first and maiden name and date of birth (obtained from birth certificates). Because information for fathers was missing and incomplete on many birth records, we did not attempt to identify paternal siblings.\nOur final dataset contained some siblings. In addition to being genetically more similar to each other, siblings typically share the same residence. Including siblings living at the same address in analyses could induce spatial clustering as a result of familial (i.e. genetic) similarities rather than geographically-linked factors. To assess the robustness of our results to including a small number of sibling groups, we conducted secondary analyses including only one randomly selected child per family. Families were defined as children for whom the mother had the same first and maiden name and date of birth (obtained from birth certificates). Because information for fathers was missing and incomplete on many birth records, we did not attempt to identify paternal siblings.", "The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active, population-based surveillance program that monitors the prevalence of developmental disabilities among children aged 8 years, an age by which most children with ASD have been evaluated\n[19], in selected geographic regions across the United States\n[20]. The ADDM Network conducts standardized review of medical and educational records and trained clinicians determine whether standardized case definitions for ASD and intellectual disability (ID) are met\n[20]. Our analyses utilized data from the North Carolina ADDM site (NC-ADDM), which began biennial surveillance in 2002. Analyses were restricted to children born in the 8 counties that were under surveillance during all study years (2002–2008).\nTo represent the underlying population, we randomly selected a 15% sample of birth records for children born in the same study counties and years as children included in ADDM (biennial 1994–2000; n=11,908, representing a region averaging 20,000 births per year). 
Figure\n1 provides orientation to the population distribution in NC, where red dots indicate children with developmental disabilities (ASD or ID) and blue dots indicate children randomly sampled from the entire central NC area. We excluded children who were adopted because information on birth address was missing and those who died in infancy because they were not part of the risk set for development disabilities (n=93; <1%). NC-ADDM and our current analyses were reviewed and approved by the Institutional Review Board at the University of North Carolina-Chapel Hill.\nEight county central North Carolina study area. The residential addresses at birth for the birth cohort (blue points) and children with ASD or ID (red points) born in 1994, 1996, 1998 and 2000 are displayed with altered locations to preserve confidentiality.", "To fully explore the impact of spatial confounding on the geographic distribution of developmental disability, we considered 4 different disability groups: all children with an ASD, 2 subgroups within all ASD classified by the co-occurrence of ID (ASD+ID and ASD-ID), and the independent group of all children with ID without regard to ASD status. Examining these independent and cross-classified groups provided a more nuanced disentanglement of factors which may act differently to promote ASD, versus ID, recognition, as follows. Diagnosis of ASD requires comprehensive evaluation of a constellation of behaviors that can be more involved than a singular evaluation assessment of Intelligence Quotient (IQ) to determine ID. Yet, ASD often occurs with ID, frequently presenting more severe disability that is recognized at an earlier age. In addition, Durkin et al. (2010) reported a differential socioeconomic (SES) gradient in ASD prevalence by the presence or absence of ID; SES acts as a stronger risk factor for ASD-ID compared to ASD+ID\n[3]. Because we expect SES to vary spatially and be a spatial confounder, analyzing ASD without respect to ID may mask the true spatial patterning of disease.\nChildren met the standardized case definition for ASD if clinician reviewers determined their developmental evaluation records indicated behaviors consistent with ASD, based on the Diagnostic and Statistical Manual IVTM criteria for Autistic Disorder, Asperger Disorder, or Pervasive Developmental Disorder Not-Otherwise-Specified\n[21]. Children met the standardized definition for ID if clinician review of developmental evaluations determined they had an IQ ≤ 70 on the most recently administered psychometrics test such as the Battelle–cognitive domain\n[22], Differential Ability Scales\n[23], Stanford-Binet–4th ed.\n[24], Wechsler Preschool and Primary Scale of Intelligence\n[25], and the Wechsler Intelligence Scale for Children-III\n[26], or, in the absence of test scores, a written statement in the records indicated the presence of a severe or profound intellectual disability\n[20]. Our analyses included 561 children with ASD, who were further classified into two subgroups: children with ASD without ID (ASD-ID, n=330), and children with ASD and ID (ASD+ID, n=231). As a comparison to the ASD analyses, we conducted additional analyses investigating the spatial variability in the prevalence of ID (n=1,028) regardless of ASD status.", "Surveillance data were linked to birth records to obtain the residential address and covariate information at the time of birth of children with ASD and ID. 
Birth data were chosen to reflect the most etiologically relevant time period for brain development\n[27].\nWe combined several methods to geocode (i.e. assign latitude and longitude coordinates) the residential birth addresses of study children to improve geocoding accuracy and reduce positional errors\n[28]. First, we ran all addresses through the ZP4 software which cleans and updates addresses using U.S. Postal Service databases (e.g. converting rural routes to E-911 street names) (version expiring March 2011, Semphoria; Memphis Tennessee). We then cleaned all addresses individually and geocoded them using ArcGIS (version 9.3, Redlands, California) and U.S. Census Tigerline files\n[29]. For unmatched addresses, we used Google Maps to locate the residence where possible\n[30]. Using these methods, we successfully geocoded 12,299 (93.41%) of the 13,167 residential addresses. Of the addresses we were unable to geocode, 379 (2.88%) were post office boxes and 489 (3.71%) were addresses that were either incomplete or that we were not able to geocode to a specific location. Geocoding success was similar for the birth cohort and children with ASD and ID.", "We estimated the log odds of ASD and ID using GAMs, an extension of linear regression models that can analyze binary outcome data and accommodate both parametric and non-parametric model components\n[17]. For non-parametric model terms, GAMs replace the traditional exposure term in an ordinary logistic regression with a smooth term (i.e. a term of best fit after adjusting for other covariates). In these analyses we applied a bivariate smooth to latitude and longitude coordinates and included all other covariates as parametric terms\n[17,18].\nWe used a locally weighted regression smoother (loess) which adapts to changes in the data density that are likely to occur in analyses of residential locations due to variability in population density. To predict prevalence the loess smoother utilizes information from nearby data points (weighting information based on its distance from the prediction point). The region or neighborhood from which data are drawn to predict prevalence is based on the percentage of data points in the neighborhood and is referred to as the span size. Choice of span size is a trade-off between bias and variability. A larger span size (more data included) results in a flatter surface with low variability but increased bias, while a small span size results in high variability and comparatively low bias. We determined the optimal amount of smoothing (optimal span size), by minimizing Akaike’s Information Criterion (AIC)\n[17,18]. A large optimal span size indicates less spatial variability in prevalence compared to a small optimal span.\nWe created a grid covering the study area that extended across all latitude and longitude coordinates of all addresses. We excluded grid points in regions of the study area where no children in our sample were born. The resulting grid comprised approximately 4,300 points. We predicted the log odds for each of the 4 developmental outcome groups (e.g. ASD) at each point on the grid and calculated an odds ratio using the entire study area as the referent group; the odds at each point was divided by the odds from a reduced model which omitted the latitude and longitude smooth term\n[18]. We did not remove children with developmental outcomes from the random selection of births drawn to represent the underlying distribution of births and, consequently, some cases were included in the denominator. 
We created a grid covering the study area, extending across the latitude and longitude coordinates of all addresses, and excluded grid points in regions where no children in our sample were born; the resulting grid comprised approximately 4,300 points. We predicted the log odds for each of the 4 developmental outcome groups (e.g., ASD) at each grid point and calculated an odds ratio using the entire study area as the referent group: the odds at each point were divided by the odds from a reduced model that omitted the latitude and longitude smooth term [18]. We did not remove children with developmental outcomes from the random selection of births drawn to represent the underlying distribution of births, so some cases were included in the denominator. As a result, the odds ratios from these models are mathematically equivalent to prevalence ratios. We mapped prevalence ratios using a continuous color scale (blue to red) and a constant scale range for all maps with the same outcome (i.e., ASD; ASD+ID; ASD-ID; and ID).

To aid interpretation of spatial patterns that could be driven by sparse data, we performed a 2-step statistical screening procedure. First, we tested the null hypothesis that developmental disability does not depend on geographic location at all (i.e., that the predicted map surface is a flat horizontal plane). Residential locations of individuals were permuted 999 times while preserving their case status and covariates (as described in [20]). In each permutation, the GAM with the optimal span size from the original dataset was run and the global deviance statistic computed. We used a conservative p<0.025, which accounts for the inflated type 1 error rate associated with reusing the optimal span size from the original dataset in the permutations, to determine whether location was a statistically significant predictor of disability (details in [31]). If the global statistic indicated that location was a statistically significant predictor of disability generally, we next evaluated location-specific (point-wise) departures from the null hypothesis using the same set of permutations [18,31]. Regions with significantly increased or decreased surveillance-recognized ASD or ID prevalence were defined as points ranking in the upper or lower 2.5% of the distribution of permuted prevalence ratios at each point, respectively [18].

Statistical analyses were performed in R 2.12.02 (Vienna, Austria) using the gam library, its local scoring algorithm for GAM estimation, and publicly available statistical code [32,33]. All maps were created using ArcGIS (version 9.3, Redlands, California).
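Continuing the sketch above, the grid prediction, prevalence-ratio surface, and global permutation screen could look like the following. The grid resolution and the handling of covariate reference levels are our assumptions, not the authors' exact procedure.

```r
## Prediction grid spanning the geocoded addresses (grid points with no
## births nearby were excluded; ~4,300 points remained in the study).
grid <- expand.grid(x = seq(min(births$x), max(births$x), length.out = 80),
                    y = seq(min(births$y), max(births$y), length.out = 55))
## Hold the parametric covariates at one reference profile for prediction.
grid <- cbind(grid,
              births[1, c("byear", "plurality", "mage_cat",
                          "race_eth", "educ_cat", "smoked")],
              row.names = NULL)

## Reduced model: identical covariates, no spatial smooth.
fit_reduced <- gam(case ~ factor(byear) + plurality + mage_cat +
                     race_eth + educ_cat + smoked,
                   family = binomial, data = births)

## Odds at each point over odds without the smooth term; with cases kept
## in the sampled denominator this is interpretable as a prevalence ratio.
pr_surface <- exp(predict(fit_full, newdata = grid) -
                  predict(fit_reduced, newdata = grid))

## Global screen: permute locations, keeping case status and covariates.
obs_stat  <- fit_reduced$deviance - fit_full$deviance
perm_stat <- replicate(999, {
  i <- sample(nrow(births))
  d <- transform(births, x = x[i], y = y[i])
  fit_reduced$deviance -
    gam(span_formula(best_span), family = binomial, data = d)$deviance
})
p_global <- mean(c(obs_stat, perm_stat) >= obs_stat)  # screened at p < 0.025
```

The point-wise bands would come from ranking each grid point's prevalence ratio within the same set of permutations. We note, as an aside, that a packaged implementation of this general GAM-mapping workflow is available for R (the MapGAM package), though the paper relies on the gam library and separately published code.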
Confounding

We adjusted models for several previously established ASD predictive factors: year of birth; plurality; maternal age, race/ethnicity, and level of education; and report of tobacco use during pregnancy (categorizations in Table 1; [3,11,12,34-36]). Covariate data were nearly complete for these variables; the 35 children (<1%) with missing data were excluded from all analyses. Additionally, we investigated confounding by other factors potentially related to ASD risk or recognition: method of delivery (vaginal delivery vs. cesarean section), marital status (married vs. unmarried), birth weight (<2500 g; 2501–3000 g; 3001–4000 g; >4000 g), and adequacy of prenatal care (assessed using the Adequacy of Prenatal Care Utilization Index and categorized as less than adequate (inadequate or intermediate prenatal care), adequate, or adequate-plus [37]).

We used 3 approaches to assess the confounding influence of these factors. 1) We assessed the change in patterns of spatial variability by side-by-side visual inspection of maps before and after adjustment, comparing the areas of reduced and elevated prevalence ratios and the color intensities (which indicate the magnitude of the prevalence ratios); to assure equivalency, the number of observations and the span size were held constant [18]. 2) We investigated changes in the model-selected optimal span size with and without adjustment. A smaller optimal span size, using less of the data, is selected when the data support more peaks and valleys in prevalence; it follows that a change in the optimal span size after the inclusion of a covariate can indicate spatial confounding. 3) Finally, to investigate the spatial variability of the predictive factors themselves (spatial associations between the factor and disability are necessary to cause confounding), we compared maps of each covariate to maps of the developmental disability.

Table 1. Selected Characteristics of the Birth Cohort and Children with ASD and ID in Eight North Carolina Counties in 2002, 2004, 2006 and 2008

Robustness of analyses

Our final dataset contained some siblings. In addition to being genetically more similar to each other, siblings typically share the same residence. Including siblings living at the same address could therefore induce spatial clustering through familial (i.e., genetic) similarity rather than geographically linked factors. To assess the robustness of our results to the inclusion of a small number of sibling groups, we conducted secondary analyses including only one randomly selected child per family. Families were defined as children for whom the mother had the same first name, maiden name, and date of birth (obtained from birth certificates). Because information on fathers was missing or incomplete on many birth records, we did not attempt to identify paternal siblings.
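A minimal sketch of this family-level deduplication in base R (the maternal identifier column names are hypothetical):

```r
set.seed(20130501)  # arbitrary seed, so the random selection is reproducible
births$family_id <- with(births,
                         paste(mom_first_name, mom_maiden_name, mom_dob,
                               sep = "|"))

## Keep one randomly chosen child per family for the sensitivity analysis.
pick_one <- function(fam) fam[sample(nrow(fam), 1), , drop = FALSE]
one_per_family <- do.call(rbind,
                          lapply(split(births, births$family_id), pick_one))
```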
Results

Selected characteristics of the birth cohort and children with ASD or ID are displayed in Table 1. Children with ASD were more likely to be male, to have older mothers, and to have mothers with higher educational attainment. As reported previously [2,38], the age-8 prevalence of ASD in the region increased from 2002–2008, whereas the prevalence of ID remained relatively stable (Table 1). Residential locations at birth for our study population are displayed in Figure 1, with slight alteration to preserve confidentiality.

We found geographic variability in ASD prevalence across the study area in unadjusted analyses, as indicated by the global statistical test (Figure 2a; optimal span size=0.75; global P=0.003; Table 2). The point-wise statistical tests identified areas of increased ASD prevalence in portions of Alamance, Durham, and Orange Counties; children living in these areas were 1.10 to 1.27 times as likely to have a surveillance-recognized ASD at age 8 years as children born in the study area as a whole. Conversely, children born in the western part of the region were 0.57 to 0.90 times as likely to have a surveillance-recognized ASD. Geographic variability in ASD prevalence was attenuated after adjusting for confounding by year of birth; plurality; maternal age, race, and level of education; and report of tobacco use during pregnancy, as indicated by the following results. In the adjusted model, the optimal span size (determined by minimizing the model AIC) increased, indicating less variability, and the global p-value was not consistent with departures from a flat pattern of ASD prevalence (Figure 2b; optimal span=0.95; global P=0.052; Table 2). The range of prevalence ratios across the study area was also diminished: the adjusted model yielded PRs ranging from 0.72 to 1.12, in contrast to the unadjusted model, where PRs ranged from 0.57 to 1.27.

Figure 2. Geographic distribution of ASD prevalence relative to the birth cohort (n=11,034; ASD n=532): unadjusted (A) and fully adjusted (B) models are presented using the optimal span size of each (0.75 and 0.95, respectively). The unadjusted model is significantly different from flat (global P=0.003). Areas of significantly increased and decreased prevalence are indicated by black contour bands. The adjusted model is not significantly different from flat (global P=0.052). Adjustment factors were year of birth; plurality; maternal age, race, and level of education; and report of tobacco use during pregnancy.

Table 2. Summary of Spatial Analyses. AF, Additional File.

Additionally adjusting for method of delivery, marital status, birthweight, and adequacy of prenatal care did not change the appearance of the maps for any of the 4 developmental disability groups, or the range of prevalence ratios observed across the study area; these variables were therefore dropped from the adjusted analyses. Spatial confounding was driven primarily by the higher educational attainment in Alamance, Chatham, Durham, and Orange Counties and, to a lesser extent, by the greater maternal age observed in the same counties: maps adjusted for maternal age and education only had the same optimal span size (0.95) and a similar range of prevalence ratios, and were visually very similar to the fully adjusted maps. When we examined the patterns of maternal educational attainment and age across the study area, both were similar to the unadjusted ASD pattern. For example, mothers living in the areas where unadjusted ASD prevalence was highest were approximately 1.75 times as likely to have completed college as women in the study area as a whole (Additional file 1: Figure S1).

Among the 532 children with ASD, 214 (40.22%) also had an ID. The spatial patterns of both subgroups (ASD+ID and ASD-ID) were similar to each other and to the map of the combined group (all ASD); the locations of increased prevalence and the intensity of the increases were similar across all maps (Table 2; Additional file 1: Figure S2). Adjusting for spatial confounding resulted in flatter maps of both ASD+ID and ASD-ID; however, greater attenuation was observed in the analysis of ASD-ID, in which the optimal span size increased from 0.70 to 0.95 after adjustment (indicating a less variable, flatter surface after adjustment; Table 2; unadjusted figures not shown). The range of PRs was also more substantially attenuated by adjustment in the analysis of ASD-ID. In our sample, ASD-ID prevalence was 1.33 times as high in children whose mothers had a college education or more (referent group = children of mothers with some postsecondary education; P=0.059), but these same mothers were less likely than mothers with some postsecondary education to have a child with ASD+ID (aPR = 0.67, P = 0.060).

When ID was analyzed as a separate disability, unadjusted analyses revealed significant spatial variability in prevalence across the study area, with several areas of increased and decreased ID prevalence (Figure 3a; optimal span = 0.10; global P < 0.001). Geographic gradients in prevalence ratios were somewhat attenuated in the adjusted analyses; however, patterns of residual variability in the prevalence of ID remained after accounting for known predictors (Figure 3b; optimal span = 0.30; global P = 0.065).
Additionally, although adjusting for spatial confounding resulted in a larger span size, the optimal span size of the adjusted ID analysis remained small (0.30), indicating spatial variability; however, the adjusted analysis did not reach global statistical significance at the alpha = 0.025 level.

Figure 3. Geographic distribution of ID prevalence relative to the birth cohort (n=11,034; ID n=916): unadjusted (A) and fully adjusted (B) models are presented using the optimal span size of each (0.10 and 0.30, respectively). The unadjusted model is significantly different from flat (global P<0.001). Areas of significantly increased and decreased risk are indicated by black contour bands. The adjusted model is not significantly different from flat (global P=0.065). Adjustment factors were year of birth; plurality; maternal age, race, and level of education; and report of tobacco use during pregnancy.

Only 269 sibling pairs were included in our analyses, and their impact on the analyses of both ASD and ID was negligible. When we randomly selected one child from each family, the pattern of spatial variability was very similar to that of analyses including all children (results not shown).

Discussion

Although we observed spatial variability in ASD prevalence in unadjusted analyses, the pattern of ASD appeared to be largely explained by factors influencing diagnosis (maternal age and education) that differed across the study area. The larger optimal span size and global p-value in the adjusted analyses, together with the decreased variability in prevalence ratios across the study area, all indicated less geographic variability after adjustment for known risk and predictive factors for ASD. Our results corroborate previous reports [15,16] demonstrating spatial variability in ASD prevalence before adjustment for spatial confounding. In secondary analyses, Van Meter and colleagues reported that the impact of living in neighborhoods with high ASD prevalence was diminished after adjusting for parental education, a finding consistent with our results [16].

Our results highlight the importance of adjusting for the geographic distribution of known individual-level predictors in spatial analyses. Here, adjustment for socioeconomic factors associated with diagnosis greatly attenuated geographic variability in ASD prevalence. This has implications for studies of ASD etiology that assign environmental exposures based on geography, such as air pollutants and agricultural pesticides; we caution against causal interpretation of geographic patterns that have not been controlled for individual-level factors. Using GAMs allowed us to carefully account for known risk and diagnostic factors, a major strength of these analyses [18]. In addition, we considered both the statistical and qualitative aspects of the observed geographic patterns, tempering interpretation of areas with sparse data by weighing tests of global deviance and changes in the optimal span size alongside visual inspection of the patterns.

The NC-ADDM Network provided a large population-based sample with information on co-occurring conditions, allowing us to consider differences in spatial confounding by the severity and type of disability. Patterns for the separate disability groups, ASD+ID and ASD-ID, were visually similar to those for all ASD; however, ASD-ID appeared to be more affected by spatial confounding, as indicated by a greater attenuation in prevalence ratios and an increase in the optimal span size in the adjusted analysis.
The associations between ASD (with or without ID) and maternal education and age that we observed are consistent with previously reported associations from non-spatial analyses [3,4], which indicate that ASD-ID is more strongly associated with higher SES than ASD+ID.

Although the global significance test did not indicate variability, our results suggest that ID prevalence varied geographically across the study area: even after adjusting for known spatial confounders, the optimal span size remained small and the results were visually suggestive of geographic differences in ID prevalence. There are several possible explanations for the observed variability in ID, including residual confounding by unmeasured variables and differences in the distribution of environmental factors across the study area.

Linking data from the NC-ADDM Network and vital records also strengthened our analyses. Vital records provided individual-level data on a number of covariates, allowing us to account for spatial confounding by these factors, and provided residential location at the time of birth, which may be more relevant to ASD etiology than the address at diagnosis. Although the majority of children with ASD at age 8 in ADDM were also born in the study area, local changes of address were common (68.14% of children with ASD changed residential address between birth and age 8). If the spatial patterns we observed are due to differences in diagnosis, spatial variability may be more apparent using a later address corresponding to the time of diagnosis.

A final strength of our study was our ability to evaluate the influence of potential sibling clusters. While it is often suggested that including siblings in analyses will induce clustering, we did not observe this pattern in our data. One possible explanation is that the number of siblings in our analyses was relatively small (e.g., 269 sibling pairs in the ASD analysis of 11,566 children), due in part to the biennial study design and the selection of only part of the source population (15%).

There are also several important limitations of our methods. We chose the optimal trade-off between the bias and variance of the smooth (the optimal span size) by minimizing the AIC. Selecting a single optimal span size for a dataset, however, may obscure important small-scale variability in maps; it is possible that our analyses were conducted on a scale too large to identify small-scale environmental exposures relevant to the etiology of ASD, and examining different span sizes may reveal important features of the data. Similarly, we assessed spatial confounding by visually comparing maps of prevalence before and after adjustment (holding the span size and included observations constant) and by investigating changes in the optimal span size before and after adjustment [18]; more objective methods of assessing confounding are needed and are an important topic for future research. We also used p-values as a screening tool to evaluate global spatial variation as well as areas of increased or decreased prevalence. Our use of p-values here was to evaluate whether spatial variability existed at all (i.e., whether the prevalence of ASD was constant across the geographic area). Nonetheless, many epidemiologists prefer confidence intervals, which provide information on the precision of the observed association [39].
While it is possible to calculate confidence intervals in these analyses, it is visually difficult to display three surfaces simultaneously on a map. Additionally, we conducted permutation tests using the span size of the observed data to test the null hypothesis that the map is flat. Permuted datasets under this null hypothesis may have had a larger optimal span size, particularly for the ID analyses, which had a relatively small span size for the observed data (optimal span = 0.30). Consequently, using the span size of the observed data could yield a p-value that is too small [18].

Conclusions

Our results demonstrate the importance of adjusting for predictive and diagnostic factors that may spatially confound the search for novel, environmentally distributed risk factors for ASD. These adjustment methods can help the search for causes of developmental disabilities proceed more efficiently by aiding the interpretation of geographic patterns. Cumulatively, our results suggest that known predictive factors for ASD account for a large portion of the observed geographic variability in prevalence in central North Carolina. Although we did not identify spatial variability in ASD in NC after controlling for known predictive and risk factors, follow-up of these results in other regions may provide clues to ASD etiology.

Abbreviations

AF: Additional file; aPR: Adjusted prevalence ratio; ADDM Network: Autism and Developmental Disabilities Monitoring Network; AIC: Akaike's information criterion; ASD: Autism spectrum disorders; ASD+ID: Autism spectrum disorders with intellectual disability; ASD-ID: Autism spectrum disorders without intellectual disability; ID: Intellectual disability; GAM: Generalized additive model; GIS: Geographic information systems; PR: Prevalence ratio; SES: Socioeconomic status.

Competing interests

All authors declare that they have no actual or potential competing interests.

Authors' contributions

KH assisted in conceiving the study, conducted the analyses, and drafted the manuscript. AK assisted in conceiving the study and the analysis plan and assisted in manuscript editing. VV collaborated on statistical issues and assisted in the analysis plan and manuscript editing. JD, the principal investigator of the North Carolina Autism and Developmental Disabilities Monitoring Network, assisted in the analysis plan and manuscript editing. All authors read and approved the final manuscript.

Additional files

Figure S1. Mother's educational attainment at the time of birth (n=11,034). The map reflects the optimal span size of the ASD analyses (span=0.95; global P<0.001); the optimal span size of the education analysis was 0.05 (global P<0.001). Larger prevalence ratios indicate a higher prevalence of mothers with college or more education at the child's birth. Areas of significantly increased and decreased risk are indicated by black contour bands.

Figure S2. Adjusted maps for (A) ASD prevalence (birth cohort n=11,034; ASD n=532), (B) ASD-ID (birth cohort n=11,034; ASD-ID n=318), and (C) ASD+ID (birth cohort n=11,034; ASD+ID n=214). Maps are not significantly different from flat (global P=0.052, P=0.294, and P=0.196, respectively). Adjustment factors were year of birth; plurality; maternal age, race, and level of education; and report of tobacco use during pregnancy.
Keywords: Autism spectrum disorders (ASD); Intellectual disability (ID); Spatial analysis; Disease mapping; Generalized additive models (GAMs); Geographic information systems (GIS)
Background: Autism spectrum disorders (ASD) are complex neurodevelopmental disorders characterized by impaired social interaction and communication, and restrictive and repetitive behavior [1]. Estimates in 2008 indicate that approximately 1 in 88 children have ASD and that the prevalence of documented ASD is on the rise [2]. The causes for ASD remain largely unknown and widely debated [3,4]. Environmental exposures are hypothesized to contribute to ASD etiology [5]; however, identifying exposures of concern has been complicated by the relative rarity of ASD, the extensive number of candidate exposures, and the lack of exposure measurements during early life, the developmentally relevant time period for ASD. Thus, studies have explored the geographic distribution of ASD as a means for generating hypotheses about spatially distributed environmental exposures. Additionally, studies have used geographically-based exposure assignment to evaluate the impact of specific exposures such as air pollutants, pesticides, and hazardous wastes on ASD risk [6-9]. One problem with evaluating the geographically non-random prevalence of ASD for etiologic purposes lies in difficulties disentangling the geographic distribution of other factors associated with diagnosis [10,11]. For example, higher maternal education is associated with increased ASD diagnosis in the United States [12,13] but not in some European countries [14]. These results suggest that maternal education is a factor in promoting recognition of ASD, but not necessarily the occurrence of ASD. Identifying patterns related to ASD diagnosis may be helpful for public health services allocation; however, to generate hypotheses about etiology, we must distinguish diagnostic patterns from patterns of ASD occurrence. For example, prioritizing investigations of a geographically-based environmental cause may be unwarranted if the observed spatial pattern is driven by maternal education (i.e. spatial confounding). Two previous studies investigated geographic variability of ASD as a means of identifying environmental exposures related to ASD prevalence [15,16]. Both reported that children born in certain regions of California were more likely to have a recognized ASD than children born in other parts of the state [15,16]. The authors attributed their findings to regional differences in the underlying population (i.e. demographic and socioeconomic factors) or geographic variability of environmental exposures; but were unable to disentangle factors promoting diagnosis from environmental exposures because they did not adjust for potentially important individual-level spatial confounding [15,16]. In order to determine the potential for environmentally distributed exposure to be associated with ASD in central North Carolina (such as contaminants in air or water), we first explored whether spatial differences in ASD prevalence existed and then whether differences remained after accounting for spatially distributed covariates associated with ASD risk and diagnosis. We used a method of spatial epidemiology that applies generalized additive models (GAMs) to assess the spatial variation of disease in a region while simultaneously adjusting for other geographically distributed individual-level factors [17,18] such as maternal education, age, and smoking. Combining GAMs and geographic information systems allowed us to predict a continuous surface of ASD prevalence across our study area. 
Our research serves not only to expand the consideration of spatial patterns of ASD to geographic regions other than California but also to improve the utility of such studies by directly examining how adjustment for known risk and predictive factors influences geographic patterns. Patterns remaining after accounting for factors that influence ASD recognition may better reveal the distribution of novel geographically distributed etiologic factors impacting ASD prevalence. Methods: Study population The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active, population-based surveillance program that monitors the prevalence of developmental disabilities among children aged 8 years, an age by which most children with ASD have been evaluated [19], in selected geographic regions across the United States [20]. The ADDM Network conducts standardized review of medical and educational records and trained clinicians determine whether standardized case definitions for ASD and intellectual disability (ID) are met [20]. Our analyses utilized data from the North Carolina ADDM site (NC-ADDM), which began biennial surveillance in 2002. Analyses were restricted to children born in the 8 counties that were under surveillance during all study years (2002–2008). To represent the underlying population, we randomly selected a 15% sample of birth records for children born in the same study counties and years as children included in ADDM (biennial 1994–2000; n=11,908, representing a region averaging 20,000 births per year). Figure 1 provides orientation to the population distribution in NC, where red dots indicate children with developmental disabilities (ASD or ID) and blue dots indicate children randomly sampled from the entire central NC area. We excluded children who were adopted because information on birth address was missing and those who died in infancy because they were not part of the risk set for development disabilities (n=93; <1%). NC-ADDM and our current analyses were reviewed and approved by the Institutional Review Board at the University of North Carolina-Chapel Hill. Eight county central North Carolina study area. The residential addresses at birth for the birth cohort (blue points) and children with ASD or ID (red points) born in 1994, 1996, 1998 and 2000 are displayed with altered locations to preserve confidentiality. The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active, population-based surveillance program that monitors the prevalence of developmental disabilities among children aged 8 years, an age by which most children with ASD have been evaluated [19], in selected geographic regions across the United States [20]. The ADDM Network conducts standardized review of medical and educational records and trained clinicians determine whether standardized case definitions for ASD and intellectual disability (ID) are met [20]. Our analyses utilized data from the North Carolina ADDM site (NC-ADDM), which began biennial surveillance in 2002. Analyses were restricted to children born in the 8 counties that were under surveillance during all study years (2002–2008). To represent the underlying population, we randomly selected a 15% sample of birth records for children born in the same study counties and years as children included in ADDM (biennial 1994–2000; n=11,908, representing a region averaging 20,000 births per year). 
Figure 1 provides orientation to the population distribution in NC, where red dots indicate children with developmental disabilities (ASD or ID) and blue dots indicate children randomly sampled from the entire central NC area. We excluded children who were adopted because information on birth address was missing and those who died in infancy because they were not part of the risk set for development disabilities (n=93; <1%). NC-ADDM and our current analyses were reviewed and approved by the Institutional Review Board at the University of North Carolina-Chapel Hill. Eight county central North Carolina study area. The residential addresses at birth for the birth cohort (blue points) and children with ASD or ID (red points) born in 1994, 1996, 1998 and 2000 are displayed with altered locations to preserve confidentiality. ASD and ID classification To fully explore the impact of spatial confounding on the geographic distribution of developmental disability, we considered 4 different disability groups: all children with an ASD, 2 subgroups within all ASD classified by the co-occurrence of ID (ASD+ID and ASD-ID), and the independent group of all children with ID without regard to ASD status. Examining these independent and cross-classified groups provided a more nuanced disentanglement of factors which may act differently to promote ASD, versus ID, recognition, as follows. Diagnosis of ASD requires comprehensive evaluation of a constellation of behaviors that can be more involved than a singular evaluation assessment of Intelligence Quotient (IQ) to determine ID. Yet, ASD often occurs with ID, frequently presenting more severe disability that is recognized at an earlier age. In addition, Durkin et al. (2010) reported a differential socioeconomic (SES) gradient in ASD prevalence by the presence or absence of ID; SES acts as a stronger risk factor for ASD-ID compared to ASD+ID [3]. Because we expect SES to vary spatially and be a spatial confounder, analyzing ASD without respect to ID may mask the true spatial patterning of disease. Children met the standardized case definition for ASD if clinician reviewers determined their developmental evaluation records indicated behaviors consistent with ASD, based on the Diagnostic and Statistical Manual IVTM criteria for Autistic Disorder, Asperger Disorder, or Pervasive Developmental Disorder Not-Otherwise-Specified [21]. Children met the standardized definition for ID if clinician review of developmental evaluations determined they had an IQ ≤ 70 on the most recently administered psychometrics test such as the Battelle–cognitive domain [22], Differential Ability Scales [23], Stanford-Binet–4th ed. [24], Wechsler Preschool and Primary Scale of Intelligence [25], and the Wechsler Intelligence Scale for Children-III [26], or, in the absence of test scores, a written statement in the records indicated the presence of a severe or profound intellectual disability [20]. Our analyses included 561 children with ASD, who were further classified into two subgroups: children with ASD without ID (ASD-ID, n=330), and children with ASD and ID (ASD+ID, n=231). As a comparison to the ASD analyses, we conducted additional analyses investigating the spatial variability in the prevalence of ID (n=1,028) regardless of ASD status. 
To fully explore the impact of spatial confounding on the geographic distribution of developmental disability, we considered 4 different disability groups: all children with an ASD, 2 subgroups within all ASD classified by the co-occurrence of ID (ASD+ID and ASD-ID), and the independent group of all children with ID without regard to ASD status. Examining these independent and cross-classified groups provided a more nuanced disentanglement of factors which may act differently to promote ASD, versus ID, recognition, as follows. Diagnosis of ASD requires comprehensive evaluation of a constellation of behaviors that can be more involved than a singular evaluation assessment of Intelligence Quotient (IQ) to determine ID. Yet, ASD often occurs with ID, frequently presenting more severe disability that is recognized at an earlier age. In addition, Durkin et al. (2010) reported a differential socioeconomic (SES) gradient in ASD prevalence by the presence or absence of ID; SES acts as a stronger risk factor for ASD-ID compared to ASD+ID [3]. Because we expect SES to vary spatially and be a spatial confounder, analyzing ASD without respect to ID may mask the true spatial patterning of disease. Children met the standardized case definition for ASD if clinician reviewers determined their developmental evaluation records indicated behaviors consistent with ASD, based on the Diagnostic and Statistical Manual IVTM criteria for Autistic Disorder, Asperger Disorder, or Pervasive Developmental Disorder Not-Otherwise-Specified [21]. Children met the standardized definition for ID if clinician review of developmental evaluations determined they had an IQ ≤ 70 on the most recently administered psychometrics test such as the Battelle–cognitive domain [22], Differential Ability Scales [23], Stanford-Binet–4th ed. [24], Wechsler Preschool and Primary Scale of Intelligence [25], and the Wechsler Intelligence Scale for Children-III [26], or, in the absence of test scores, a written statement in the records indicated the presence of a severe or profound intellectual disability [20]. Our analyses included 561 children with ASD, who were further classified into two subgroups: children with ASD without ID (ASD-ID, n=330), and children with ASD and ID (ASD+ID, n=231). As a comparison to the ASD analyses, we conducted additional analyses investigating the spatial variability in the prevalence of ID (n=1,028) regardless of ASD status. Residential location and covariates Surveillance data were linked to birth records to obtain the residential address and covariate information at the time of birth of children with ASD and ID. Birth data were chosen to reflect the most etiologically relevant time period for brain development [27]. We combined several methods to geocode (i.e. assign latitude and longitude coordinates) the residential birth addresses of study children to improve geocoding accuracy and reduce positional errors [28]. First, we ran all addresses through the ZP4 software which cleans and updates addresses using U.S. Postal Service databases (e.g. converting rural routes to E-911 street names) (version expiring March 2011, Semphoria; Memphis Tennessee). We then cleaned all addresses individually and geocoded them using ArcGIS (version 9.3, Redlands, California) and U.S. Census Tigerline files [29]. For unmatched addresses, we used Google Maps to locate the residence where possible [30]. Using these methods, we successfully geocoded 12,299 (93.41%) of the 13,167 residential addresses. 
Of the addresses we were unable to geocode, 379 (2.88%) were post office boxes and 489 (3.71%) were addresses that were either incomplete or that we were not able to geocode to a specific location. Geocoding success was similar for the birth cohort and children with ASD and ID. Surveillance data were linked to birth records to obtain the residential address and covariate information at the time of birth of children with ASD and ID. Birth data were chosen to reflect the most etiologically relevant time period for brain development [27]. We combined several methods to geocode (i.e. assign latitude and longitude coordinates) the residential birth addresses of study children to improve geocoding accuracy and reduce positional errors [28]. First, we ran all addresses through the ZP4 software which cleans and updates addresses using U.S. Postal Service databases (e.g. converting rural routes to E-911 street names) (version expiring March 2011, Semphoria; Memphis Tennessee). We then cleaned all addresses individually and geocoded them using ArcGIS (version 9.3, Redlands, California) and U.S. Census Tigerline files [29]. For unmatched addresses, we used Google Maps to locate the residence where possible [30]. Using these methods, we successfully geocoded 12,299 (93.41%) of the 13,167 residential addresses. Of the addresses we were unable to geocode, 379 (2.88%) were post office boxes and 489 (3.71%) were addresses that were either incomplete or that we were not able to geocode to a specific location. Geocoding success was similar for the birth cohort and children with ASD and ID. Spatial analysis We estimated the log odds of ASD and ID using GAMs, an extension of linear regression models that can analyze binary outcome data and accommodate both parametric and non-parametric model components [17]. For non-parametric model terms, GAMs replace the traditional exposure term in an ordinary logistic regression with a smooth term (i.e. a term of best fit after adjusting for other covariates). In these analyses we applied a bivariate smooth to latitude and longitude coordinates and included all other covariates as parametric terms [17,18]. We used a locally weighted regression smoother (loess) which adapts to changes in the data density that are likely to occur in analyses of residential locations due to variability in population density. To predict prevalence the loess smoother utilizes information from nearby data points (weighting information based on its distance from the prediction point). The region or neighborhood from which data are drawn to predict prevalence is based on the percentage of data points in the neighborhood and is referred to as the span size. Choice of span size is a trade-off between bias and variability. A larger span size (more data included) results in a flatter surface with low variability but increased bias, while a small span size results in high variability and comparatively low bias. We determined the optimal amount of smoothing (optimal span size), by minimizing Akaike’s Information Criterion (AIC) [17,18]. A large optimal span size indicates less spatial variability in prevalence compared to a small optimal span. We created a grid covering the study area that extended across all latitude and longitude coordinates of all addresses. We excluded grid points in regions of the study area where no children in our sample were born. The resulting grid comprised approximately 4,300 points. We predicted the log odds for each of the 4 developmental outcome groups (e.g. 
ASD) at each point on the grid and calculated an odds ratio using the entire study area as the referent group; the odds at each point was divided by the odds from a reduced model which omitted the latitude and longitude smooth term [18]. We did not remove children with developmental outcomes from the random selection of births drawn to represent the underlying distribution of births and, consequently, some cases were included in the denominator. As a result, the odds ratios from these models are mathematically equivalent to prevalence ratios. We mapped prevalence ratios using a continuous color scale (blue to red) and a constant scale range for all maps with the same outcome (i.e. ASD; ASD+ID; ASD-ID; and ID). To provide information in the interpretation of spatial patterns that could be driven by sparse data, we preformed a 2-step statistical screening procedure, as follows. First, we tested the null hypothesis that developmental disability does not depend on geographic location, generally (e.g. the predicted map surface is a flat horizontal plane). Residential locations of individuals were permuted 999 times while preserving their case status and covariates [as described in [20]. In each permutation, the GAM with the optimal span size from the original dataset was run and the global deviance statistic computed. We used a conservative p<0.025, which accounts for inflated type 1 error rates associated with using the optimal span size for the original dataset in permutations, to determine if location was a statistically significant predictor of disability [details in [31]. If the global statistic indicated that location was a statistically significant predictor of disability generally, we next evaluated location-specific (point-wise) departures from the null hypothesis using the same set of permutations [18,31]. Regions with significantly increased or decreased surveillance-recognized ASD or ID prevalence were defined as points ranking in the upper or lower 2.5% of the distribution of permuted prevalence ratios at each point, respectively [18]. Statistical analyses were performed in the R Package 2.12.02 (Vienna, Austria) using the gam library and a local scoring algorithm GAM estimation procedure and publically available statistical code [32,33]. All maps were created using ArcGIS 9.3 (version 9.3, Redlands, California). We estimated the log odds of ASD and ID using GAMs, an extension of linear regression models that can analyze binary outcome data and accommodate both parametric and non-parametric model components [17]. For non-parametric model terms, GAMs replace the traditional exposure term in an ordinary logistic regression with a smooth term (i.e. a term of best fit after adjusting for other covariates). In these analyses we applied a bivariate smooth to latitude and longitude coordinates and included all other covariates as parametric terms [17,18]. We used a locally weighted regression smoother (loess) which adapts to changes in the data density that are likely to occur in analyses of residential locations due to variability in population density. To predict prevalence the loess smoother utilizes information from nearby data points (weighting information based on its distance from the prediction point). The region or neighborhood from which data are drawn to predict prevalence is based on the percentage of data points in the neighborhood and is referred to as the span size. Choice of span size is a trade-off between bias and variability. 
A larger span size (more data included) results in a flatter surface with low variability but increased bias, while a small span size results in high variability and comparatively low bias. We determined the optimal amount of smoothing (optimal span size), by minimizing Akaike’s Information Criterion (AIC) [17,18]. A large optimal span size indicates less spatial variability in prevalence compared to a small optimal span. We created a grid covering the study area that extended across all latitude and longitude coordinates of all addresses. We excluded grid points in regions of the study area where no children in our sample were born. The resulting grid comprised approximately 4,300 points. We predicted the log odds for each of the 4 developmental outcome groups (e.g. ASD) at each point on the grid and calculated an odds ratio using the entire study area as the referent group; the odds at each point was divided by the odds from a reduced model which omitted the latitude and longitude smooth term [18]. We did not remove children with developmental outcomes from the random selection of births drawn to represent the underlying distribution of births and, consequently, some cases were included in the denominator. As a result, the odds ratios from these models are mathematically equivalent to prevalence ratios. We mapped prevalence ratios using a continuous color scale (blue to red) and a constant scale range for all maps with the same outcome (i.e. ASD; ASD+ID; ASD-ID; and ID). To provide information in the interpretation of spatial patterns that could be driven by sparse data, we preformed a 2-step statistical screening procedure, as follows. First, we tested the null hypothesis that developmental disability does not depend on geographic location, generally (e.g. the predicted map surface is a flat horizontal plane). Residential locations of individuals were permuted 999 times while preserving their case status and covariates [as described in [20]. In each permutation, the GAM with the optimal span size from the original dataset was run and the global deviance statistic computed. We used a conservative p<0.025, which accounts for inflated type 1 error rates associated with using the optimal span size for the original dataset in permutations, to determine if location was a statistically significant predictor of disability [details in [31]. If the global statistic indicated that location was a statistically significant predictor of disability generally, we next evaluated location-specific (point-wise) departures from the null hypothesis using the same set of permutations [18,31]. Regions with significantly increased or decreased surveillance-recognized ASD or ID prevalence were defined as points ranking in the upper or lower 2.5% of the distribution of permuted prevalence ratios at each point, respectively [18]. Statistical analyses were performed in the R Package 2.12.02 (Vienna, Austria) using the gam library and a local scoring algorithm GAM estimation procedure and publically available statistical code [32,33]. All maps were created using ArcGIS 9.3 (version 9.3, Redlands, California). Confounding We adjusted models for several previously established ASD predictive factors including year of birth; plurality; maternal age, race/ethnicity, and level of education; and report of tobacco use during pregnancy (categorizations in Table 1, [3,11,12,34-36]). Covariate data were nearly complete for these variables; however 35 children (<1%) had missing data and were excluded from all analyses. 
Additionally, we investigated confounding by other factors potentially related to ASD risk or recognition including method of delivery (vaginal delivery vs. cesarean section), marital status (married vs. unmarried), birth weight (<2500 g; 2501–3000 g; 3001–4000 g; >4000 g), and adequacy of prenatal care [assessed using the Adequacy of Prenatal Care Utilization Index; categorized as less than adequate (inadequate or intermediate prenatal care); adequate; adequate-plus [37]]. We used 3 approaches to fully assess the confounding influence of these factors: 1) We assessed the change in patterns of spatial variability, using a side by side visual inspection of maps before and after adjustment, comparing the areas of reduced and elevated prevalence ratios and the color intensities (indicating the magnitude of prevalence ratios). To assure equivalency, the number of observations and span size were held constant [18]. 2) We also investigated changes in the model-selected optimal span size of analyses with and without adjustment. A smaller optimal span size, using less of the data, is selected when the data supports more peaks and valleys in prevalence. It follows that a change in the optimal span size after the inclusion of a covariate can indicate spatial confounding. 3) Finally, to investigate the spatial variability of the predictive factors themselves (spatial associations between the factor and disability, necessary to cause confounding), we compared maps of each covariate to maps of the developmental disability. Selected Characteristics of the Birth Cohort and Children with ASD and ID in Eight North Carolina Counties in 2002, 2004, 2006 and 2008 We adjusted models for several previously established ASD predictive factors including year of birth; plurality; maternal age, race/ethnicity, and level of education; and report of tobacco use during pregnancy (categorizations in Table 1, [3,11,12,34-36]). Covariate data were nearly complete for these variables; however 35 children (<1%) had missing data and were excluded from all analyses. Additionally, we investigated confounding by other factors potentially related to ASD risk or recognition including method of delivery (vaginal delivery vs. cesarean section), marital status (married vs. unmarried), birth weight (<2500 g; 2501–3000 g; 3001–4000 g; >4000 g), and adequacy of prenatal care [assessed using the Adequacy of Prenatal Care Utilization Index; categorized as less than adequate (inadequate or intermediate prenatal care); adequate; adequate-plus [37]]. We used 3 approaches to fully assess the confounding influence of these factors: 1) We assessed the change in patterns of spatial variability, using a side by side visual inspection of maps before and after adjustment, comparing the areas of reduced and elevated prevalence ratios and the color intensities (indicating the magnitude of prevalence ratios). To assure equivalency, the number of observations and span size were held constant [18]. 2) We also investigated changes in the model-selected optimal span size of analyses with and without adjustment. A smaller optimal span size, using less of the data, is selected when the data supports more peaks and valleys in prevalence. It follows that a change in the optimal span size after the inclusion of a covariate can indicate spatial confounding. 
3) Finally, to investigate the spatial variability of the predictive factors themselves (spatial associations between the factor and disability, necessary to cause confounding), we compared maps of each covariate to maps of the developmental disability. Selected Characteristics of the Birth Cohort and Children with ASD and ID in Eight North Carolina Counties in 2002, 2004, 2006 and 2008 Robustness of analyses Our final dataset contained some siblings. In addition to being genetically more similar to each other, siblings typically share the same residence. Including siblings living at the same address in analyses could induce spatial clustering as a result of familial (i.e. genetic) similarities rather than geographically-linked factors. To assess the robustness of our results to including a small number of sibling groups, we conducted secondary analyses including only one randomly selected child per family. Families were defined as children for whom the mother had the same first and maiden name and date of birth (obtained from birth certificates). Because information for fathers was missing and incomplete on many birth records, we did not attempt to identify paternal siblings. Our final dataset contained some siblings. In addition to being genetically more similar to each other, siblings typically share the same residence. Including siblings living at the same address in analyses could induce spatial clustering as a result of familial (i.e. genetic) similarities rather than geographically-linked factors. To assess the robustness of our results to including a small number of sibling groups, we conducted secondary analyses including only one randomly selected child per family. Families were defined as children for whom the mother had the same first and maiden name and date of birth (obtained from birth certificates). Because information for fathers was missing and incomplete on many birth records, we did not attempt to identify paternal siblings. Study population: The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active, population-based surveillance program that monitors the prevalence of developmental disabilities among children aged 8 years, an age by which most children with ASD have been evaluated [19], in selected geographic regions across the United States [20]. The ADDM Network conducts standardized review of medical and educational records and trained clinicians determine whether standardized case definitions for ASD and intellectual disability (ID) are met [20]. Our analyses utilized data from the North Carolina ADDM site (NC-ADDM), which began biennial surveillance in 2002. Analyses were restricted to children born in the 8 counties that were under surveillance during all study years (2002–2008). To represent the underlying population, we randomly selected a 15% sample of birth records for children born in the same study counties and years as children included in ADDM (biennial 1994–2000; n=11,908, representing a region averaging 20,000 births per year). Figure 1 provides orientation to the population distribution in NC, where red dots indicate children with developmental disabilities (ASD or ID) and blue dots indicate children randomly sampled from the entire central NC area. We excluded children who were adopted because information on birth address was missing and those who died in infancy because they were not part of the risk set for development disabilities (n=93; <1%). 
NC-ADDM and our current analyses were reviewed and approved by the Institutional Review Board at the University of North Carolina-Chapel Hill. Eight county central North Carolina study area. The residential addresses at birth for the birth cohort (blue points) and children with ASD or ID (red points) born in 1994, 1996, 1998 and 2000 are displayed with altered locations to preserve confidentiality. ASD and ID classification: To fully explore the impact of spatial confounding on the geographic distribution of developmental disability, we considered 4 different disability groups: all children with an ASD, 2 subgroups within all ASD classified by the co-occurrence of ID (ASD+ID and ASD-ID), and the independent group of all children with ID without regard to ASD status. Examining these independent and cross-classified groups provided a more nuanced disentanglement of factors which may act differently to promote ASD, versus ID, recognition, as follows. Diagnosis of ASD requires comprehensive evaluation of a constellation of behaviors that can be more involved than a singular evaluation assessment of Intelligence Quotient (IQ) to determine ID. Yet, ASD often occurs with ID, frequently presenting more severe disability that is recognized at an earlier age. In addition, Durkin et al. (2010) reported a differential socioeconomic (SES) gradient in ASD prevalence by the presence or absence of ID; SES acts as a stronger risk factor for ASD-ID compared to ASD+ID [3]. Because we expect SES to vary spatially and be a spatial confounder, analyzing ASD without respect to ID may mask the true spatial patterning of disease. Children met the standardized case definition for ASD if clinician reviewers determined their developmental evaluation records indicated behaviors consistent with ASD, based on the Diagnostic and Statistical Manual IVTM criteria for Autistic Disorder, Asperger Disorder, or Pervasive Developmental Disorder Not-Otherwise-Specified [21]. Children met the standardized definition for ID if clinician review of developmental evaluations determined they had an IQ ≤ 70 on the most recently administered psychometrics test such as the Battelle–cognitive domain [22], Differential Ability Scales [23], Stanford-Binet–4th ed. [24], Wechsler Preschool and Primary Scale of Intelligence [25], and the Wechsler Intelligence Scale for Children-III [26], or, in the absence of test scores, a written statement in the records indicated the presence of a severe or profound intellectual disability [20]. Our analyses included 561 children with ASD, who were further classified into two subgroups: children with ASD without ID (ASD-ID, n=330), and children with ASD and ID (ASD+ID, n=231). As a comparison to the ASD analyses, we conducted additional analyses investigating the spatial variability in the prevalence of ID (n=1,028) regardless of ASD status. Residential location and covariates: Surveillance data were linked to birth records to obtain the residential address and covariate information at the time of birth of children with ASD and ID. Birth data were chosen to reflect the most etiologically relevant time period for brain development [27]. We combined several methods to geocode (i.e. assign latitude and longitude coordinates) the residential birth addresses of study children to improve geocoding accuracy and reduce positional errors [28]. First, we ran all addresses through the ZP4 software which cleans and updates addresses using U.S. Postal Service databases (e.g. 
converting rural routes to E-911 street names) (version expiring March 2011, Semphoria; Memphis Tennessee). We then cleaned all addresses individually and geocoded them using ArcGIS (version 9.3, Redlands, California) and U.S. Census Tigerline files [29]. For unmatched addresses, we used Google Maps to locate the residence where possible [30]. Using these methods, we successfully geocoded 12,299 (93.41%) of the 13,167 residential addresses. Of the addresses we were unable to geocode, 379 (2.88%) were post office boxes and 489 (3.71%) were addresses that were either incomplete or that we were not able to geocode to a specific location. Geocoding success was similar for the birth cohort and children with ASD and ID. Spatial analysis: We estimated the log odds of ASD and ID using GAMs, an extension of linear regression models that can analyze binary outcome data and accommodate both parametric and non-parametric model components [17]. For non-parametric model terms, GAMs replace the traditional exposure term in an ordinary logistic regression with a smooth term (i.e. a term of best fit after adjusting for other covariates). In these analyses we applied a bivariate smooth to latitude and longitude coordinates and included all other covariates as parametric terms [17,18]. We used a locally weighted regression smoother (loess) which adapts to changes in the data density that are likely to occur in analyses of residential locations due to variability in population density. To predict prevalence the loess smoother utilizes information from nearby data points (weighting information based on its distance from the prediction point). The region or neighborhood from which data are drawn to predict prevalence is based on the percentage of data points in the neighborhood and is referred to as the span size. Choice of span size is a trade-off between bias and variability. A larger span size (more data included) results in a flatter surface with low variability but increased bias, while a small span size results in high variability and comparatively low bias. We determined the optimal amount of smoothing (optimal span size), by minimizing Akaike’s Information Criterion (AIC) [17,18]. A large optimal span size indicates less spatial variability in prevalence compared to a small optimal span. We created a grid covering the study area that extended across all latitude and longitude coordinates of all addresses. We excluded grid points in regions of the study area where no children in our sample were born. The resulting grid comprised approximately 4,300 points. We predicted the log odds for each of the 4 developmental outcome groups (e.g. ASD) at each point on the grid and calculated an odds ratio using the entire study area as the referent group; the odds at each point was divided by the odds from a reduced model which omitted the latitude and longitude smooth term [18]. We did not remove children with developmental outcomes from the random selection of births drawn to represent the underlying distribution of births and, consequently, some cases were included in the denominator. As a result, the odds ratios from these models are mathematically equivalent to prevalence ratios. We mapped prevalence ratios using a continuous color scale (blue to red) and a constant scale range for all maps with the same outcome (i.e. ASD; ASD+ID; ASD-ID; and ID). 
To aid the interpretation of spatial patterns that could be driven by sparse data, we performed a 2-step statistical screening procedure, as follows. First, we tested the null hypothesis that developmental disability does not depend on geographic location generally (e.g. the predicted map surface is a flat horizontal plane). Residential locations of individuals were permuted 999 times while preserving their case status and covariates [as described in 20]. In each permutation, the GAM with the optimal span size from the original dataset was run and the global deviance statistic computed. We used a conservative p<0.025, which accounts for the inflated type I error rate associated with using the optimal span size for the original dataset in permutations, to determine if location was a statistically significant predictor of disability [details in 31]. If the global statistic indicated that location was a statistically significant predictor of disability generally, we next evaluated location-specific (point-wise) departures from the null hypothesis using the same set of permutations [18,31]. Regions with significantly increased or decreased surveillance-recognized ASD or ID prevalence were defined as points ranking in the upper or lower 2.5% of the distribution of permuted prevalence ratios at each point, respectively [18]. (A code sketch of this screening procedure follows the discussion of confounding below.) Statistical analyses were performed in R (version 2.12.02; Vienna, Austria) using the gam library with its local scoring GAM estimation algorithm and publicly available statistical code [32,33]. All maps were created using ArcGIS (version 9.3; Redlands, California). Confounding: We adjusted models for several previously established ASD predictive factors, including year of birth; plurality; maternal age, race/ethnicity, and level of education; and report of tobacco use during pregnancy (categorizations in Table 1; [3,11,12,34-36]). Covariate data were nearly complete for these variables; however, 35 children (<1%) had missing data and were excluded from all analyses. Additionally, we investigated confounding by other factors potentially related to ASD risk or recognition, including method of delivery (vaginal delivery vs. cesarean section), marital status (married vs. unmarried), birth weight (<2500 g; 2501–3000 g; 3001–4000 g; >4000 g), and adequacy of prenatal care [assessed using the Adequacy of Prenatal Care Utilization Index; categorized as less than adequate (inadequate or intermediate prenatal care), adequate, or adequate-plus [37]]. We used 3 approaches to fully assess the confounding influence of these factors. 1) We assessed the change in patterns of spatial variability using a side-by-side visual inspection of maps before and after adjustment, comparing the areas of reduced and elevated prevalence ratios and the color intensities (indicating the magnitude of prevalence ratios). To ensure equivalence, the number of observations and the span size were held constant [18]. 2) We also investigated changes in the model-selected optimal span size of analyses with and without adjustment. A smaller optimal span size, using less of the data, is selected when the data support more peaks and valleys in prevalence. It follows that a change in the optimal span size after the inclusion of a covariate can indicate spatial confounding. 3) Finally, to investigate the spatial variability of the predictive factors themselves (spatial associations between the factor and disability are necessary to cause confounding), we compared maps of each covariate to maps of the developmental disability. 
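Here is the screening sketch referenced under Spatial analysis above, continuing with the hypothetical dat, full, reduced, newd, pr, and best_span objects from the previous example. The deviance-difference statistic and the 2.5% cut-offs mirror the description above, but this is a sketch under those assumptions, not the study's code.

```r
# Sketch of the 2-step permutation screen: permute residential locations while
# preserving case status and covariates, refit at the observed optimal span,
# and compare deviance statistics and point-wise prevalence ratios.
set.seed(1)
n_perm <- 999
dev_obs <- reduced$deviance - full$deviance   # global deviance statistic
dev_perm <- numeric(n_perm)
pr_perm <- matrix(NA_real_, nrow = n_perm, ncol = nrow(newd))

for (i in seq_len(n_perm)) {
  dat_i <- dat
  idx <- sample(nrow(dat))                               # shuffle locations only
  dat_i[, c("lon", "lat")] <- dat[idx, c("lon", "lat")]
  fit_i <- gam(case ~ lo(lon, lat, span = best_span) + factor(birth_year) +
                 plurality + maternal_age + race + education + tobacco,
               family = binomial, data = dat_i)
  dev_perm[i] <- reduced$deviance - fit_i$deviance
  pr_perm[i, ] <- exp(predict(fit_i, newdata = newd, type = "link") -
                        predict(reduced, newdata = newd, type = "link"))
}

# Step 1: global test of the flat-surface null (judged at the conservative
# alpha = 0.025 described above).
p_global <- mean(c(dev_perm, dev_obs) >= dev_obs)

# Step 2 (only if step 1 rejects): flag grid points where the observed PR
# ranks in the upper or lower 2.5% of the permutation distribution there.
lo_cut <- apply(pr_perm, 2, quantile, probs = 0.025)
hi_cut <- apply(pr_perm, 2, quantile, probs = 0.975)
flagged <- pr < lo_cut | pr > hi_cut
```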
Table 1 (caption): Selected characteristics of the birth cohort and children with ASD and ID in eight North Carolina counties in 2002, 2004, 2006 and 2008. Robustness of analyses: Our final dataset contained some siblings. In addition to being genetically more similar to each other, siblings typically share the same residence. Including siblings living at the same address in analyses could induce spatial clustering as a result of familial (i.e. genetic) similarities rather than geographically-linked factors. To assess the robustness of our results to the inclusion of a small number of sibling groups, we conducted secondary analyses including only one randomly selected child per family. Families were defined as children for whom the mother had the same first and maiden name and date of birth (obtained from birth certificates). Because information for fathers was missing or incomplete on many birth records, we did not attempt to identify paternal siblings. Results: Selected characteristics for the birth cohort and children with ASD or ID are displayed in Table 1. Children with ASD were more likely to be male, to have older mothers, and to have mothers with higher educational attainment. As has been reported previously [2,38], the age 8 prevalence of ASD in the region increased from 2002–2008; however, the prevalence of ID remained relatively stable (Table 1). Residential locations at birth for our study population are displayed in Figure 1 with slight alteration to preserve confidentiality. We found geographic variability in ASD prevalence across the study area in unadjusted analyses, as indicated by the global statistical test (Figure 2a; optimal span size=0.75; global P=0.003; Table 2). The point-wise statistical tests identified areas of increased ASD prevalence in portions of Alamance, Durham, and Orange Counties, and children living in these areas were 1.10 to 1.27 times as likely to have a surveillance-recognized ASD at age 8 years compared to children born in the study area as a whole. Conversely, children born in the western part of the region were 0.57 to 0.90 times as likely to have a surveillance-recognized ASD. Geographic variability in ASD prevalence was attenuated after adjusting for confounding by year of birth; plurality; maternal age, race, and level of education; and report of tobacco use during pregnancy, as indicated by the following results. In the adjusted model, the optimal span size (determined by minimizing the model AIC) increased, indicating less variability, and the global p-value was not consistent with departures from a flat pattern of ASD prevalence (Figure 2b; optimal span=0.95; global P=0.052; Table 2). The range of prevalence ratios across the study area was also diminished: the adjusted model yielded PRs ranging from 0.72 to 1.12, in contrast to the unadjusted model, where PRs ranged from 0.57 to 1.27. Figure 2 (caption): Geographic distribution of ASD prevalence relative to the birth cohort (n=11,034) and ASD (n=532). Unadjusted (A) and fully adjusted (B) models are presented using the optimal span size of each (0.75 and 0.95, respectively). The unadjusted model is significantly different from flat (global P=0.003). Areas of significantly increased and decreased prevalence are indicated by black contour bands. 
The adjusted model is not significantly different from flat (global P=0.052). Adjustment factors were year of birth; plurality; maternal age, race, and level of education; and report of tobacco use during pregnancy. Table 2 (caption): Summary of spatial analyses (AF – additional file). Additionally adjusting for method of delivery, marital status, birth weight, and adequacy of prenatal care did not change the appearance of the maps for any of the 4 developmental disability groups considered, or the range of prevalence ratios observed across the study area; these variables were therefore dropped from adjusted analyses. Spatial confounding was driven primarily by the higher educational attainment in Alamance, Chatham, Durham, and Orange Counties and, to a lesser extent, by the greater maternal age observed in the same counties; maps with adjustment for maternal age and education only had the same optimal span size (0.95), a similar range of prevalence ratios, and were visually very similar to the fully adjusted maps. When we examined the patterns of maternal educational attainment and age across the study area, both were similar to the unadjusted ASD pattern. For example, mothers living in the areas where the unadjusted ASD prevalence was highest were approximately 1.75 times as likely to have completed college as women in the study area as a whole (Additional file 1: Figure S1). Among the 532 children with ASD, 214 (40.22%) also had an ID. The spatial patterns of both subgroups (ASD+ID and ASD-ID) were similar to each other and to the map of the combined group (all ASD); the locations of increased prevalence and the intensity of the increases were similar across all maps (Table 2; Additional file 1: Figure S2). Adjusting for spatial confounding resulted in flatter maps of both ASD+ID and ASD-ID; however, greater attenuation was observed in the analysis of ASD-ID, in which the optimal span size increased from 0.70 to 0.95 after adjustment (indicating the surface was less variable, i.e. flatter, after adjustment; Table 2; unadjusted figures not shown). Additionally, the range of PRs was more substantially attenuated by adjustment in the analysis of ASD-ID. In our sample, the prevalence of ASD-ID was 1.33 times as high among children whose mothers had a college education or more (referent group: children of mothers with some postsecondary education; P=0.059), but these same mothers were less likely than mothers with some postsecondary education to have a child with ASD+ID (aPR = 0.67, P = 0.060). When ID was analyzed as a separate disability, unadjusted analyses revealed significant spatial variability in prevalence across the study area; several areas of increased and decreased ID prevalence were observed (Figure 3a; optimal span = 0.10; global P < 0.001). Geographic gradients in prevalence ratios were somewhat attenuated in the adjusted analyses; however, patterns of residual variability in the prevalence of ID remained after accounting for known predictors (Figure 3b; optimal span = 0.30; global P = 0.065). Although adjusting for spatial confounding resulted in a larger span size, the optimal span size of the adjusted analyses remained small (0.30), indicating spatial variability; the adjusted analyses did not, however, reach global statistical significance at the alpha = 0.025 level. Figure 3 (caption): Geographic distribution of ID prevalence relative to the birth cohort (n=11,034) and ID (n=916). Unadjusted (A) and fully adjusted (B) models are presented using the optimal span size of each (0.10 and 0.30, respectively). 
The unadjusted model is significantly different from flat (global P<0.001). Areas of significantly increased and decreased risk are indicated by black contour bands. The adjusted model is not significantly different from flat (global P=0.065). Adjustment factors were year of birth; plurality; maternal age, race, and level of education; and report of tobacco use during pregnancy. Only 269 sibling pairs were included in our analyses, and their impact on analyses of both ASD and ID was negligible. When we randomly selected one child from each family for analyses, the pattern of spatial variability was quite similar to that of analyses including all children (results not shown). Discussion: Although we observed spatial variability in ASD prevalence in unadjusted analyses, the pattern of ASD appeared to be largely explained by factors influencing diagnosis (maternal age and education) that differed across the study area. The larger optimal span size and global p-value in the adjusted analyses, as well as the decreased variability in prevalence ratios across the study area, all indicated less geographic variability after adjustment for known risk and predictive factors for ASD. Our results corroborate previous reports [15,16] demonstrating spatial variability in ASD prevalence before adjusting for spatial confounding. In secondary analyses, Van Meter and colleagues reported that the impact of living in neighborhoods with high ASD prevalence was diminished in analyses adjusting for parental education, a finding consistent with our results [16]. Our results highlight the importance of adjusting for the geographic distribution of known individual-level predictors in spatial analyses. Here, adjustment for socioeconomic factors associated with diagnosis greatly attenuated geographic variability in ASD prevalence. This has implications for studies of ASD etiology that assign environmental exposures based on geography, such as air pollutants and agricultural pesticides, for which we caution against causal interpretation of geographic patterns that have not been controlled for individual-level factors. Using GAMs allowed us to carefully account for known risk and diagnostic factors, a major strength of these analyses [18]. In addition, we considered both the statistical and qualitative aspects of the observed geographic patterns, tempering interpretation of areas of sparse data by considering tests of global deviance and changes in the optimal span size alongside visual inspection of the patterns. The NC-ADDM Network provided a large population-based sample with information on co-occurring conditions, allowing us to consider differences in spatial confounding by the severity and type of disability. Patterns for the separate disability groups, ASD+ID and ASD-ID, were visually similar to those for all ASD; however, ASD-ID appeared to be more impacted by spatial confounding, as indicated by a greater attenuation in prevalence ratios and an increase in the optimal span size in the adjusted analysis. The associations between ASD (+/−ID) and maternal education and age that we observed in these analyses are consistent with previously reported associations in non-spatial analyses [3,4], which indicate that ASD-ID is more strongly associated with higher SES than ASD+ID. 
Although the global significance test did not indicate variability, our results suggest that ID prevalence varied geographically across the study area; even after adjusting for known spatial confounders, the optimal span size remained small and the results were visually suggestive of geographic differences in ID prevalence. There are several possible explanations for the observed variability in ID, including residual confounding by unmeasured variables and differences in the distribution of environmental factors across the study area. Linking data from the NC-ADDM Network and vital records also strengthened our analyses. Vital records provided individual-level data on a number of covariates, allowing us to account for spatial confounding by these factors, and also provided residential location at the time of birth, which may be more relevant to ASD etiology than the address at diagnosis. Although the majority of children with ASD at age 8 in ADDM were also born in the study area, local changes in address were common (68.14% of children with ASD had a change in residential address between birth and age 8). If the spatial patterns we observed are due to differences in diagnosis, spatial variability may be more apparent using a later address corresponding to the time of diagnosis. A final strength of our study was our ability to evaluate the influence of potential sibling clusters in analyses. While it is often suggested that including siblings in analyses will induce clustering, we did not observe this pattern in our data. One possible explanation is that the number of siblings in our analyses was relatively small (e.g. 269 sibling pairs in the ASD analysis of 11,566 children), due in part to the biennial study design and the selection of only part of the source population (15%). There are also several important limitations of our methods. We chose the optimal trade-off between bias and variance of the smooth (the optimal span size) by minimizing the AIC. Selecting the optimal span size for a dataset, however, may obscure important small-scale variability in maps; it is possible that our analyses were conducted on a scale too large to identify small-scale environmental exposures relevant to the etiology of ASD. Examining different span sizes may reveal important features of the data. Similarly, we assessed spatial confounding by visually comparing maps of prevalence before and after adjustment, holding the span size and included observations constant, and by investigating changes in the optimal span size before and after adjustment [18]; however, more objective methods of assessing confounding are needed and are an important topic for future research. We also used p-values as a screening tool, to evaluate the global spatial variation as well as areas of increased or decreased prevalence. Our use of p-values here was to evaluate whether or not spatial variability existed (i.e. whether the prevalence of ASD was constant across the geographic area). Nonetheless, many epidemiologists prefer the use of confidence intervals, which provide information on the precision of the observed association [39]. While it is possible to calculate confidence intervals in these analyses, it is visually difficult to display three surfaces simultaneously on maps. Additionally, we conducted permutation tests using the span size of the observed data to test the null hypothesis that the map is flat. 
Permuted datasets under this null hypothesis may have had a larger optimal span size, particularly for the ID analyses, which had a relatively small span size for the observed data (optimal span = 0.30). Consequently, using the span size of the observed data could result in a p-value that is too small [18]. Conclusions: Our results demonstrate the importance of adjusting for predictive and diagnostic factors that may spatially confound the search for novel environmentally-distributed risk factors for ASD. These adjustment methods can help the search for causes of developmental disabilities proceed more efficiently by aiding the interpretation of geographic patterns. Cumulatively, our results suggest that known predictive factors for ASD account for a large portion of the observed geographic variability in prevalence in central North Carolina. Although we did not identify spatial variability in ASD in NC after controlling for known predictive and risk factors, follow-up of these results in other regions may provide clues to ASD etiology. Abbreviations: AF: Additional file; aPR: Adjusted prevalence ratio; ADDM Network: Autism and Developmental Disabilities Monitoring Network; AIC: Akaike's information criterion; ASD: Autism spectrum disorders; ASD+ID: Autism spectrum disorders with intellectual disability; ASD-ID: Autism spectrum disorders without intellectual disability; ID: Intellectual disability; GAM: Generalized additive model; GIS: Geographic information systems; PR: Prevalence ratio; SES: Socioeconomic status. Competing interests: All authors declare that they have no actual or potential competing interests. Authors' contributions: KH assisted in conceiving the study, conducted the analyses, and drafted the manuscript. AK assisted in conceiving the study and the analysis plan, and assisted in manuscript editing. VV collaborated on statistical issues, and assisted in the analysis plan and manuscript editing. JD, the principal investigator of the North Carolina Autism and Developmental Disabilities Monitoring Network, assisted in the analysis plan and manuscript editing. All authors read and approved the final manuscript. Supplementary Material: Figure S1. Mother's educational attainment at the time of birth (n=11,034). Map reflects the optimal span size of the ASD analyses (span=0.95; global P<0.001); the optimal span size of the education analysis was 0.05 (global P<0.001). Larger prevalence ratios indicate a higher prevalence of mothers with college or more education at the child's birth. Areas of significantly increased and decreased risk are indicated by black contour bands. Figure S2. Adjusted maps for (A) ASD prevalence (birth cohort n=11,034 and ASD n=532), (B) ASD-ID (birth cohort n=11,034 and ASD-ID n=318), and (C) ASD+ID (birth cohort n=11,034 and ASD+ID n=214). Maps are not significantly different from flat (global P=0.052, global P=0.294 and global P=0.196, respectively). Adjustment factors were year of birth; plurality; maternal age, race, and level of education; and report of tobacco use during pregnancy.
Background: The causes of autism spectrum disorders (ASD) remain largely unknown and widely debated; however, evidence increasingly points to the importance of environmental exposures. A growing number of studies use geographic variability in ASD prevalence or exposure patterns to investigate the association between environmental factors and ASD. However, differences in the geographic distribution of established risk and predictive factors for ASD, such as maternal education or age, can interfere with investigations of ASD etiology. We evaluated geographic variability in the prevalence of ASD in central North Carolina and the impact of spatial confounding by known risk and predictive factors. Methods: Children meeting a standardized case definition for ASD at 8 years of age were identified through records-based surveillance for 8 counties biennially from 2002 to 2008 (n=532). Vital records were used to identify the underlying cohort (15% random sample of children born in the same years as children with an ASD, n=11,034), and to obtain birth addresses. We used generalized additive models (GAMs) to estimate the prevalence of ASD across the region by smoothing latitude and longitude. GAMs, unlike methods used in previous spatial analyses of ASD, allow for extensive adjustment of individual-level risk factors (e.g. maternal age and education) when evaluating spatial variability of disease prevalence. Results: Unadjusted maps revealed geographic variation in surveillance-recognized ASD. Children born in certain regions of the study area were up to 1.27 times as likely to be recognized as having ASD compared to children born in the study area as a whole (prevalence ratio (PR) range across the study area 0.57-1.27; global P=0.003). However, geographic gradients of ASD prevalence were attenuated after adjusting for spatial confounders (adjusted PR range 0.72-1.12 across the study area; global P=0.052). Conclusions: In these data, spatial variation of ASD in central NC can be explained largely by factors impacting diagnosis, such as maternal education, emphasizing the importance of adjusting for differences in the geographic distribution of known individual-level predictors in spatial analyses of ASD. These results underscore the critical importance of accounting for such factors in studies of environmental exposures that vary across regions.
Background: Autism spectrum disorders (ASD) are complex neurodevelopmental disorders characterized by impaired social interaction and communication, and restrictive and repetitive behavior [1]. Estimates in 2008 indicate that approximately 1 in 88 children have ASD and that the prevalence of documented ASD is on the rise [2]. The causes for ASD remain largely unknown and widely debated [3,4]. Environmental exposures are hypothesized to contribute to ASD etiology [5]; however, identifying exposures of concern has been complicated by the relative rarity of ASD, the extensive number of candidate exposures, and the lack of exposure measurements during early life, the developmentally relevant time period for ASD. Thus, studies have explored the geographic distribution of ASD as a means for generating hypotheses about spatially distributed environmental exposures. Additionally, studies have used geographically-based exposure assignment to evaluate the impact of specific exposures such as air pollutants, pesticides, and hazardous wastes on ASD risk [6-9]. One problem with evaluating the geographically non-random prevalence of ASD for etiologic purposes lies in difficulties disentangling the geographic distribution of other factors associated with diagnosis [10,11]. For example, higher maternal education is associated with increased ASD diagnosis in the United States [12,13] but not in some European countries [14]. These results suggest that maternal education is a factor in promoting recognition of ASD, but not necessarily the occurrence of ASD. Identifying patterns related to ASD diagnosis may be helpful for public health services allocation; however, to generate hypotheses about etiology, we must distinguish diagnostic patterns from patterns of ASD occurrence. For example, prioritizing investigations of a geographically-based environmental cause may be unwarranted if the observed spatial pattern is driven by maternal education (i.e. spatial confounding). Two previous studies investigated geographic variability of ASD as a means of identifying environmental exposures related to ASD prevalence [15,16]. Both reported that children born in certain regions of California were more likely to have a recognized ASD than children born in other parts of the state [15,16]. The authors attributed their findings to regional differences in the underlying population (i.e. demographic and socioeconomic factors) or geographic variability of environmental exposures; but were unable to disentangle factors promoting diagnosis from environmental exposures because they did not adjust for potentially important individual-level spatial confounding [15,16]. In order to determine the potential for environmentally distributed exposure to be associated with ASD in central North Carolina (such as contaminants in air or water), we first explored whether spatial differences in ASD prevalence existed and then whether differences remained after accounting for spatially distributed covariates associated with ASD risk and diagnosis. We used a method of spatial epidemiology that applies generalized additive models (GAMs) to assess the spatial variation of disease in a region while simultaneously adjusting for other geographically distributed individual-level factors [17,18] such as maternal education, age, and smoking. Combining GAMs and geographic information systems allowed us to predict a continuous surface of ASD prevalence across our study area. 
Our research serves not only to expand the consideration of spatial patterns of ASD to geographic regions other than California but also to improve the utility of such studies by directly examining how adjustment for known risk and predictive factors influences geographic patterns. Patterns remaining after accounting for factors that influence ASD recognition may better reveal the distribution of novel geographically distributed etiologic factors impacting ASD prevalence. Conclusions: Our results demonstrate the importance of adjusting for predictive and diagnostic factors that may spatially confound the search for novel environmentally-distributed risk factors for ASD. These adjustment methods can help the search for causes of developmental disabilities proceed more efficiently by aiding the interpretation of geographic patterns. Cumulatively, our results suggest that known predictive factors for ASD account for a large portion of the observed geographic variability in prevalence in central North Carolina. Although we did not identify spatial variability in ASD in NC after controlling for known predictive and risk factors, follow-up of these results in other regions may provide clues to ASD etiology.
Background: The causes of autism spectrum disorders (ASD) remain largely unknown and widely debated; however, evidence increasingly points to the importance of environmental exposures. A growing number of studies use geographic variability in ASD prevalence or exposure patterns to investigate the association between environmental factors and ASD. However, differences in the geographic distribution of established risk and predictive factors for ASD, such as maternal education or age, can interfere with investigations of ASD etiology. We evaluated geographic variability in the prevalence of ASD in central North Carolina and the impact of spatial confounding by known risk and predictive factors. Methods: Children meeting a standardized case definition for ASD at 8 years of age were identified through records-based surveillance for 8 counties biennially from 2002 to 2008 (n=532). Vital records were used to identify the underlying cohort (15% random sample of children born in the same years as children with an ASD, n=11,034), and to obtain birth addresses. We used generalized additive models (GAMs) to estimate the prevalence of ASD across the region by smoothing latitude and longitude. GAMs, unlike methods used in previous spatial analyses of ASD, allow for extensive adjustment of individual-level risk factors (e.g. maternal age and education) when evaluating spatial variability of disease prevalence. Results: Unadjusted maps revealed geographic variation in surveillance-recognized ASD. Children born in certain regions of the study area were up to 1.27 times as likely to be recognized as having ASD compared to children born in the study area as a whole (prevalence ratio (PR) range across the study area 0.57-1.27; global P=0.003). However, geographic gradients of ASD prevalence were attenuated after adjusting for spatial confounders (adjusted PR range 0.72-1.12 across the study area; global P=0.052). Conclusions: In these data, spatial variation of ASD in central NC can be explained largely by factors impacting diagnosis, such as maternal education, emphasizing the importance of adjusting for differences in the geographic distribution of known individual-level predictors in spatial analyses of ASD. These results underscore the critical importance of accounting for such factors in studies of environmental exposures that vary across regions.
10,627
414
[ 648, 342, 460, 248, 787, 387, 133, 81, 14, 81 ]
15
[ "asd", "id", "children", "prevalence", "asd id", "analyses", "span", "spatial", "birth", "size" ]
[ "etiology identifying exposures", "criterion asd autism", "impacting asd prevalence", "environmental exposures related", "asd autism spectrum" ]
[CONTENT] Autism spectrum disorders (ASD) | Intellectual disability (ID) | Spatial analysis | Disease mapping | Generalized additive models (GAMs) | Geographic information systems (GIS) [SUMMARY]
[CONTENT] Autism spectrum disorders (ASD) | Intellectual disability (ID) | Spatial analysis | Disease mapping | Generalized additive models (GAMs) | Geographic information systems (GIS) [SUMMARY]
[CONTENT] Autism spectrum disorders (ASD) | Intellectual disability (ID) | Spatial analysis | Disease mapping | Generalized additive models (GAMs) | Geographic information systems (GIS) [SUMMARY]
[CONTENT] Autism spectrum disorders (ASD) | Intellectual disability (ID) | Spatial analysis | Disease mapping | Generalized additive models (GAMs) | Geographic information systems (GIS) [SUMMARY]
[CONTENT] Autism spectrum disorders (ASD) | Intellectual disability (ID) | Spatial analysis | Disease mapping | Generalized additive models (GAMs) | Geographic information systems (GIS) [SUMMARY]
[CONTENT] Autism spectrum disorders (ASD) | Intellectual disability (ID) | Spatial analysis | Disease mapping | Generalized additive models (GAMs) | Geographic information systems (GIS) [SUMMARY]
[CONTENT] Child | Child Development Disorders, Pervasive | Demography | Educational Status | Female | Geographic Information Systems | Humans | Male | North Carolina | Population Surveillance | Predictive Value of Tests | Prevalence | Risk Factors | Spatial Analysis [SUMMARY]
[CONTENT] Child | Child Development Disorders, Pervasive | Demography | Educational Status | Female | Geographic Information Systems | Humans | Male | North Carolina | Population Surveillance | Predictive Value of Tests | Prevalence | Risk Factors | Spatial Analysis [SUMMARY]
[CONTENT] Child | Child Development Disorders, Pervasive | Demography | Educational Status | Female | Geographic Information Systems | Humans | Male | North Carolina | Population Surveillance | Predictive Value of Tests | Prevalence | Risk Factors | Spatial Analysis [SUMMARY]
[CONTENT] Child | Child Development Disorders, Pervasive | Demography | Educational Status | Female | Geographic Information Systems | Humans | Male | North Carolina | Population Surveillance | Predictive Value of Tests | Prevalence | Risk Factors | Spatial Analysis [SUMMARY]
[CONTENT] Child | Child Development Disorders, Pervasive | Demography | Educational Status | Female | Geographic Information Systems | Humans | Male | North Carolina | Population Surveillance | Predictive Value of Tests | Prevalence | Risk Factors | Spatial Analysis [SUMMARY]
[CONTENT] Child | Child Development Disorders, Pervasive | Demography | Educational Status | Female | Geographic Information Systems | Humans | Male | North Carolina | Population Surveillance | Predictive Value of Tests | Prevalence | Risk Factors | Spatial Analysis [SUMMARY]
[CONTENT] etiology identifying exposures | criterion asd autism | impacting asd prevalence | environmental exposures related | asd autism spectrum [SUMMARY]
[CONTENT] etiology identifying exposures | criterion asd autism | impacting asd prevalence | environmental exposures related | asd autism spectrum [SUMMARY]
[CONTENT] etiology identifying exposures | criterion asd autism | impacting asd prevalence | environmental exposures related | asd autism spectrum [SUMMARY]
[CONTENT] etiology identifying exposures | criterion asd autism | impacting asd prevalence | environmental exposures related | asd autism spectrum [SUMMARY]
[CONTENT] etiology identifying exposures | criterion asd autism | impacting asd prevalence | environmental exposures related | asd autism spectrum [SUMMARY]
[CONTENT] etiology identifying exposures | criterion asd autism | impacting asd prevalence | environmental exposures related | asd autism spectrum [SUMMARY]
[CONTENT] asd | id | children | prevalence | asd id | analyses | span | spatial | birth | size [SUMMARY]
[CONTENT] asd | id | children | prevalence | asd id | analyses | span | spatial | birth | size [SUMMARY]
[CONTENT] asd | id | children | prevalence | asd id | analyses | span | spatial | birth | size [SUMMARY]
[CONTENT] asd | id | children | prevalence | asd id | analyses | span | spatial | birth | size [SUMMARY]
[CONTENT] asd | id | children | prevalence | asd id | analyses | span | spatial | birth | size [SUMMARY]
[CONTENT] asd | id | children | prevalence | asd id | analyses | span | spatial | birth | size [SUMMARY]
[CONTENT] asd | exposures | environmental | distributed | environmental exposures | diagnosis | geographically | studies | maternal education | spatial [SUMMARY]
[CONTENT] asd | id | children | data | asd id | span | birth | span size | size | addresses [SUMMARY]
[CONTENT] asd | unadjusted | id | prevalence | global | adjusted | span | mothers | optimal span | optimal [SUMMARY]
[CONTENT] known predictive | search | risk factors | predictive | factors | factors asd | results | asd | known | variability [SUMMARY]
[CONTENT] asd | id | children | birth | span | prevalence | asd id | span size | size | spatial [SUMMARY]
[CONTENT] asd | id | children | birth | span | prevalence | asd id | span size | size | spatial [SUMMARY]
[CONTENT] ||| ASD | ASD ||| ASD | ASD ||| ASD | North Carolina [SUMMARY]
[CONTENT] ASD | 8 years of age | 8 | 2002 ||| 15% | the same years | ASD ||| ASD ||| GAMs | ASD [SUMMARY]
[CONTENT] ASD ||| 1.27 | ASD | 0.57-1.27 ||| ASD | 0.72-1.12 [SUMMARY]
[CONTENT] ASD | NC | ASD ||| [SUMMARY]
[CONTENT] ||| ASD | ASD ||| ASD | ASD ||| ASD | North Carolina ||| ASD | 8 years of age | 8 | 2002 ||| 15% | the same years | ASD ||| ASD ||| GAMs | ASD ||| ASD ||| 1.27 | ASD | 0.57-1.27 ||| ASD | 0.72-1.12 ||| ASD | NC | ASD ||| [SUMMARY]
[CONTENT] ||| ASD | ASD ||| ASD | ASD ||| ASD | North Carolina ||| ASD | 8 years of age | 8 | 2002 ||| 15% | the same years | ASD ||| ASD ||| GAMs | ASD ||| ASD ||| 1.27 | ASD | 0.57-1.27 ||| ASD | 0.72-1.12 ||| ASD | NC | ASD ||| [SUMMARY]
Streptozocin chemotherapy for advanced/metastatic well-differentiated neuroendocrine tumors: an analysis of a multi-center survey in Japan.
25348496
Neuroendocrine tumors (NETs) are believed to be relatively rare and to follow a generally indolent course. However, liver metastases are common in NET patients and the outcome of NET liver metastasis is poor. In Western countries, streptozocin (STZ) has been established as a first-line anticancer drug for unresectable NET; however, STZ cannot be used in daily practice in Japan. The aim of the present study was to determine the status of STZ usage in Japan and to evaluate the effectiveness and safety of STZ chemotherapy in Japanese NET patients.
BACKGROUND
A retrospective multi-center survey was conducted. Five institutions with experience performing STZ chemotherapy participated in the study. The patient demographics, tumor characteristics, context of STZ chemotherapy, and patient outcome were collected and assessed.
METHODS
Fifty-four patients were enrolled. The main recipients of STZ chemotherapy were middle-aged patients with pancreatic NET and unresectable liver metastases. The predominant regimen was the weekly/bi-weekly intravenous administration of STZ combined with other oral anticancer agents. STZ monotherapy was used in one-fourth of the patients. The median progression-free and overall survival periods were 11.8 and 38.7 months, respectively, and sustained stable disease was obtained in some selected patients. The adverse events profile was mild and tolerable.
RESULTS
Our survey showed the clinical benefit and safety of STZ therapy for Japanese patients with unresectable NET. Therefore, we recommend that STZ, which is the only cytotoxic agent available against NET, should be used in daily practice in Japan.
CONCLUSIONS
[ "Adult", "Aged", "Antibiotics, Antineoplastic", "Disease-Free Survival", "Female", "Humans", "Japan", "Male", "Middle Aged", "Neuroendocrine Tumors", "Pancreatic Neoplasms", "Retrospective Studies", "Streptozocin", "Survival Rate", "Treatment Outcome", "Young Adult" ]
4493796
Introduction
Neuroendocrine tumors (NETs) have been regarded as relatively rare neoplasms, but the number of patients with NET is increasing in the US [1], Europe [2], and Japan [3, 4]. The epidemiological pattern of NET is highly heterogeneous; for example, the tumor location, biological behavior (functioning NET or non-functioning NET), and percentage of distant metastases differ extensively among databases. Therefore, the clinical outcomes of the various treatment modalities also differ according to the characteristics of the study cohort. The clinical course of well-differentiated NET (NET G1 or NET G2) is believed to be generally indolent, but some previous studies have documented that 40–95 % of NET patients are metastatic at presentation, and the 5-year survival rate is 56–83 % for metastatic intestinal NETs and 40–60 % for metastatic pancreatic NETs. Thus, optimal management of metastatic lesions, especially of liver metastases, is key to improving the outcomes of NET patients [5]. Streptozocin (STZ) was first discovered as an antibiotic derived from Streptomyces achromogenes, and was approved in the US as a cytotoxic antitumor drug for symptomatic or advanced pancreatic NET in 1982. In Western countries, STZ combined with doxorubicin (DOX) or fluorouracil (5-FU) has been established as a first-line chemotherapy for both pancreatic and gastrointestinal NETs based on several clinical trials, including randomized clinical trials [6–12]. However, STZ has not been covered by the Japanese insurance system, and Japanese oncologists/gastroenterologists cannot choose this powerful option for the treatment of advanced or metastatic NETs. Because of these specific circumstances, STZ chemotherapy has only been used in clinical trials in Japan. Therefore, the aim of this study was to investigate the actual situations in which STZ is used in Japan and to evaluate the effectiveness and safety of STZ chemotherapy among Japanese NET patients.
null
null
Results
Patients: The data of 54 patients were collected. The patient cohort consisted of 24 male and 30 female patients, and the median age was 54.0 years (range 24–76 years) at the onset of the disease and 56.0 years (range 31–77 years) at the start of STZ administration (Table 1). Regarding the distribution of age at the onset of the disease, a peak occurred between 50 and 59 years, followed by a second peak at 60–69 years (as shown in the Electronic supplementary material). The performance status of most of the patients was 0 or 1 (Table 1).
Table 1. Patient demographics and tumor characteristics, n (%):
- Sex: male 24 (44.4); female 30 (55.6).
- Age at onset, years: mean 52.5; median 54.0; range 24–76.
- Age at the beginning of STZ administration, years: mean 56.0; median 56.0; range 31–77.
- Performance status: 0, 34 (63.0); 1, 17 (31.5); 2, 3 (5.5); 3–4, 0 (0.0).
- Primary site: pancreaticoduodenal NET 46 (85.2), comprising pancreas head 12 (26.1), pancreas body 10 (21.7), pancreas tail 19 (41.3), head/body/tail 1 (2.2), and duodenum 4 (8.7); gastrointestinal NET 8 (14.8), comprising stomach 2 (25.0), small intestine 1 (12.5), rectum 4 (50.0), and others 1 (12.5).
- Pathological diagnosis (WHO 2000): well-differentiated endocrine tumor 0 (0.0); well-differentiated endocrine carcinoma 52 (96.4); poorly differentiated endocrine carcinoma/small cell carcinoma 1 (1.8); others 1 (1.8).
- Functioning/non-functioning NET: functioning 18 (33.3), comprising gastrinoma 9 (16.7), insulinoma 7 (13.0), glucagonoma 4 (7.4), somatostatinoma 1 (1.9), and serotonin/tachykinin-producing tumor 1 (1.9); non-functioning 36 (66.7).
- Metastatic site(s): liver 53 (98.1); lymph nodes 26 (48.1); peritoneum 3 (5.6); lung 2 (3.7); others 10 (18.5).
Tumor characteristics: The characteristics of the tumors are summarized in Table 1. Forty-two patients had pancreatic NET (P-NET), and the duodenum and gastrointestinal tract were the original sites in 4 and 8 patients, respectively. The pathological diagnosis based on the WHO Classification 2000 was well-differentiated endocrine carcinoma in 52 patients (96.4 %). One-third of the tumors (n = 18) were functioning, with 9 gastrinomas and 7 insulinomas; the other two-thirds were non-functioning NETs. All the patients had metastatic sites: all but one patient had liver metastasis, with lymph node metastasis being the second most common site (n = 26, 48.1 %).
STZ therapy: STZ chemotherapy was used as a first-line therapy in 39 patients, as a second-line therapy in 11 patients, and as a third-line therapy in 4 patients. The treatments used prior to STZ chemotherapy included transcatheter arterial chemoembolization (TACE), octreotide, 5-FU, and gemcitabine. STZ was administered intravenously in 35 patients (64.8 %) and intra-arterially in 3 patients; both routes were used in 15 patients. The dosing regimen was daily [350–500 mg/m2 of STZ administered for 5 consecutive days (days 1–5) every 6 weeks] in 14 patients and weekly or bi-weekly in 31 patients (350–1,000 mg/m2 of STZ administered at each treatment); both regimens were used in 3 patients. Interestingly, the participating institutions in Eastern Japan applied a weekly or bi-weekly regimen, while the institutions in Western Japan applied a daily regimen. Thirteen patients received STZ monotherapy, while a combination therapy was used in the other 41 patients. The combined antitumor agents included tegafur-uracil (UFT, n = 26), octreotide (n = 20), fluorouracil (5-FU, n = 15), and oral fluoropyrimidine (S-1, n = 6) (Table 2).
Table 2. STZ therapy, n (%):
- Dosing route: intravenous (IV) 35 (64.8); intra-arterial (IA) 3 (5.6); IV/IA 15 (27.8); unknown 1 (1.9).
- Dosing regimen: daily 14 (25.9); weekly/bi-weekly 31 (57.4); daily/weekly 3 (5.6); others 6 (11.1).
- Antitumor agents combined with STZ: doxorubicin 1 (1.9); fluorouracil (5-FU) 15 (27.8); oral fluoropyrimidine (S-1) 6 (11.1); tegafur-uracil (UFT) 26 (48.1); octreotide 20 (37.0); mitomycin C 3 (5.6); interferon 1 (1.9); sunitinib 1 (1.9); none (STZ monotherapy) 13 (24.1).
The dosing period ranged from 0 to 105 months, with a median of 12.4 months, and was within 20 months in most patients (Fig. 1a). The total amount of STZ administered ranged from 1.0 to 128.0 g (median 18.8 g) (Fig. 1b). Figure 1 (caption): Distribution of the dosing period (a) and the total amount of STZ administered (b) (n = 54).
The tumor response as evaluated according to the RECIST criteria is shown in Table 3. The tumor response was CR in 2 patients, PR in 11 patients, SD in 9 patients, PD in 25 patients, and unknown in 7 patients, giving a response rate of 27.7 % (13 patients with CR or PR among the 47 with an evaluable response). The response to STZ monotherapy was CR in 1 patient, PR in 4 patients, SD in 1 patient, PD in 8 patients, and unknown in 4 patients, giving a response rate of 35.7 % (5 of 14 evaluable patients).
Table 3. Tumor response, evaluated according to the RECIST criteria, n (%); UK = unknown:
- All cases (n = 54): CR 2 (4.3); PR 11 (23.9); SD 9 (19.6); PD 25 (54.3); UK 7.
- Pancreaticoduodenal NET, subtotal (n = 46): CR 2 (5.3); PR 9 (23.7); SD 8 (21.1); PD 20 (52.6); UK 7.
- Pancreaticoduodenal NET, STZ monotherapy (n = 14): CR 1 (11.1); PR 3 (33.3); SD 1 (11.1); PD 5 (55.6); UK 4.
- Pancreaticoduodenal NET, combination therapy (n = 32): CR 1 (3.4); PR 6 (20.7); SD 7 (24.1); PD 15 (51.7); UK 3.
- Gastrointestinal NET, subtotal (n = 8): CR 0 (0.0); PR 2 (25.0); SD 1 (12.5); PD 5 (62.5); UK 0.
- Gastrointestinal NET, STZ monotherapy (n = 4): CR 0 (0.0); PR 1 (25.0); SD 0 (0.0); PD 3 (75.0); UK 0.
- Gastrointestinal NET, combination therapy (n = 4): CR 0 (0.0); PR 1 (25.0); SD 1 (25.0); PD 2 (50.0); UK 0.
Documented adverse events included nausea (n = 12, 22.2 %), vomiting (n = 7, 13.0 %), and lethargy (n = 4, 7.4 %). Other adverse hematological, hepatobiliary, or nervous system events were observed in a few patients. Grade 3 adverse events were observed in 6 patients (3 nausea and 3 vomiting), but no grade 4 adverse events were documented (Table 4). New-onset diabetes mellitus was not documented, but control of the disease was impaired during STZ therapy in one patient who had been treated for diabetes mellitus.
Table 4. Adverse events, n (%), with CTCAE grades:
- Gastrointestinal disorders: abdominal pain 1 (1.9), G2; diarrhea 2 (3.7), G1 1/G2 1; epigastric pain 1 (1.9), G1; nausea 12 (22.2), G1 5/G2 4/G3 3; acute pancreatitis 1 (2.9), G2; vomiting 7 (13.0), G1 1/G2 3/G3 3.
- Hematolymphoid system disorders: leukopenia 1 (1.9), G1; neutropenia 2 (3.7), G1 1/G2 1; thrombocytopenia 1 (1.9), G1.
- Ocular lesion: abnormal ocular sensation 1 (1.9), grade unknown.
- Hepatobiliary system disorder: liver function abnormality 1 (1.9), G1.
- Nervous system disorders: syncope 1 (1.9), G3; headache 1 (1.9), grade unknown.
- Others: lethargy 4 (7.4), G1 3/G2 1; back pain 1 (1.9), G2.
STZ therapy was discontinued in 46 patients. The reasons for the discontinuation were tumor progression in 43 patients, conversion to other treatments in 2 patients, and a severe adverse event in 1 patient.
Patient outcome: Data regarding patient outcome were available for 38 patients. The progression-free and overall survival curves are shown in Figs. 2 and 3. The median progression-free period was 11.8 months in all of the patients (mean 23.0 ± 3.5 months), 7.6 months in the functioning NET patients, and 16.8 months in the non-functioning NET patients (P = 0.14). Meanwhile, the median survival period was 38.7 months in all of the patients (mean 28.7 ± 2.6 months), 23.6 months in the functioning NET patients, and 38.7 months in the non-functioning NET patients (P = 0.32). Figure 2 (caption): Progression-free survival curves for all of the patients (n = 38) (a) and stratified according to functioning (n = 12) and non-functioning (n = 26) tumors (b). Figure 3 (caption): Overall survival curves for all of the patients (n = 38) (a) and curves stratified according to functioning (n = 12) and non-functioning (n = 26) tumors (b). The median amount of STZ administered was 18.8 g. When the patients were stratified according to the amount of STZ received (≥18.8 or <18.8 g), both the overall survival rate and the progression-free survival rate were better in the patients who received ≥18.8 g of STZ (see the Electronic supplementary material 2). The overall and progression-free survival outcomes were similar between the patients who received a daily regimen and those receiving a weekly/bi-weekly regimen (data not shown). In addition, the outcomes did not differ between patients with pancreaticoduodenal NET (n = 46) and those with gastrointestinal NET (n = 8) (see the Electronic supplementary material 3).
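As an illustration of how the survival comparisons above are conventionally computed, here is a minimal R sketch using the survival package. The data frame stz and its column names are hypothetical, and the log-rank test shown is an assumption on our part, since the survey does not name the test behind its P values.

```r
# Minimal sketch of the survival analyses reported above ('survival' package).
# 'stz' is a hypothetical one-row-per-patient data frame with follow-up times
# in months, event indicators (1 = event occurred), and a factor for
# functioning vs. non-functioning NET.
library(survival)

# Progression-free survival, stratified by functioning status.
pfs <- survfit(Surv(pfs_months, progressed) ~ functioning, data = stz)
summary(pfs)$table                                   # per-stratum medians
survdiff(Surv(pfs_months, progressed) ~ functioning, data = stz)  # log-rank P

# Overall survival, same structure, with Kaplan-Meier curves.
os <- survfit(Surv(os_months, died) ~ functioning, data = stz)
plot(os, lty = 1:2, xlab = "Months from start of STZ",
     ylab = "Survival probability")
legend("bottomleft", legend = levels(stz$functioning), lty = 1:2)
```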
null
null
[ "Patients", "Tumor characteristics", "STZ therapy", "Patient outcome", "" ]
[ "The data of 54 patients were collected. The patient cohort consisted of 24 male and 30 female patients, and the median age was 54.0 years (range 24–76 years) at the onset of the disease and 56.0 years (range 31–77 years) at the start of STZ administration (Table 1). Regarding the distribution of age at the onset of the disease, a peak occurred at between 50 and 59 years, followed by a second peak at 60–69 years (as shown in the Electronic supplementary material). The performance status of most of the patients was 0 or 1 (Table 1).Table 1Patient demographics and tumor characteristicsParametersNo. of patientsPercent (%)Sex Male2444.4 Female3055.6Age at onset Mean52.5 Median54.0 Range24–76Age at the beginning of STZ administration Mean56.0 Median56.0 Range31–77Performance status 03463.0 11731.5 235.5 3–400.0Primary site Pancreaticoduodenal NET(46)(85.2)  Pancreas head1226.1  Pancreas body1021.7  Pancreas tail1941.3  Head, body and tail12.2  Duodenal48.7 Gastrointestinal NET(8)(14.8)  Stomach225.0  Small Intestine112.5  Rectum450.0  Others112.5Pathological diagnosis (WHO 2000) Well-differentiated endocrine tumor00.0 Well-differentiated endocrine carcinoma5296.4 Poorly-differentiated endocrine carcinoma/small cell carcinoma11.8 Others11.8Functioning NET/non-functioning NET Functioning(18)(33.3)  Gastrinoma916.7  Insulinoma713.0  Glucagonoma47.4  Somatostatinoma11.9  Serotonin, tachykinins producing tumor11.9 Non-functioning(36)(66.7)Metastatic site(s) Liver5398.1 Lymph nodes2648.1 Peritoneum35.6 Lung23.7 Others1018.5\n\nPatient demographics and tumor characteristics", "The characteristics of the tumors are summarized in Table 1. Forty-two patients had pancreatic NET (P-NET), and the duodenum and gastrointestinal tract were the original sites in 4 and 8 patients, respectively. The pathological diagnosis based on the WHO Classification 2000 was well-differentiated endocrine carcinoma in 52 patients (96.4 %). One-third of the tumors (n = 18) were functioning, with 9 gastrinomas and 7 insulinomas; the other two-thirds were non-functioning NETs. All the patients had metastatic sites: all but one patient had liver metastasis, with lymph node metastasis being the second most common site (n = 26, 48.1 %).", "STZ chemotherapy was used as a first-line therapy in 39 patients, as a second-line therapy in 11 patients, and as a third-line therapy in 4 patients. The treatments used prior to STZ chemotherapy included transcatheter arterial chemoembolization (TACE), octreotide, 5FU, and gemcitabine.\nSTZ was administered intravenously in 35 patients (64.8 %) and intra-arterially in 3 patients. Both routes were used in 15 patients. The dosing regimen was daily [350–500 mg/m2 of STZ administered for 5 consecutive days (days 1–5) every 6 weeks] in 14 patients and weekly or bi-weekly in 31 patients (350–1,000 mg/m2 of STZ administered at each treatment).\nBoth regimens were used in 3 patients. Interestingly, the participating institutions in Eastern Japan applied a weekly or bi-weekly regimen, while the institutions in Western Japan applied a daily regimen.\nThirteen patients received STZ monotherapy, while a combination therapy was used in the other 41 patients. The combined antitumor agents included tegafur-uracil (UFT, n = 26), octreotide (n = 20), fluorouracil (5-FU, n = 15), and oral fluoropyrimidine (S-1, n = 6) (Table 2).Table 2STZ therapyParametersNo. 
of patientsPercent (%)Dosing route Intravenous (IV)3564.8 Intra-arterial (IA)35.6 IV/IA1527.8 Unknown11.9Dosing regimen Daily1425.9 Weekly/bi-weekly3157.4 Daily/weekly35.6 Others611.1Antitumor agents combined with STZ Doxorubicin11.9 Fluorouracil (5-FU)1527.8 Oral fluoropyrimidine (S-1)611.1 Tegafur-uracil (UFT)2648.1 Octreotide2037.0 Mitomycin C35.6 Interferon11.9 Sunitinib11.9 None (STZ monotherapy)1324.1\n\nSTZ therapy\nThe dosing period ranged from 0 to 105 months, with a median of 12.4 months. The dosing period was within 20 months in most patients (Fig. 1a). The total amount of STZ administered ranged from 1.0 to 128.0 g (median 18.8 g) (Fig. 1b).Fig. 1Distribution of the dosing period (a) and the total amount of STZ administered (b) (n = 54)\n\nDistribution of the dosing period (a) and the total amount of STZ administered (b) (n = 54)\nThe tumor response as evaluated according to the RECIST criteria is shown in Table 3. The tumor response was CR in 2 patients, PR in 11 patients, SD in 9 patients, PD in 25 patients, and unknown in 7 patients, with a response rate of 27.7 %. The response to STZ monotherapy was CR in 1 patient, PR in 4 patients, SD in 1 patient, PD in 8 patients, and unknown in 4 patients, with a response rate of 35.7 %.Table 3Tumor response, evaluated according to the RECIST criteriaTumor responseaccording to RECIST criteriaAll casesPancreaticoduodenal NETGastrointestinal NETSubtotalSTZ monotherapyCombinationtherapySubtotalSTZ monotherapyCombination therapy\nn\n%\nn\n%\nn\n%\nn\n%\nn\n%\nn\n%\nn\n%54461432844CR24.325.3111.113.400.000.000.0PR1123.9923.7333.3620.7225.0125.0125.0SD919.6821.1111.1724.1112.500.0125.0PD2554.32052.6555.61551.7562.5375.0250.0UK7743000\nUK unknown\n\nTumor response, evaluated according to the RECIST criteria\n\nUK unknown\nDocumented adverse events included nausea (n = 12, 22.2 %), vomiting (n = 7, 13.0 %), and lethargy (n = 4, 7.4 %). Other adverse hematological, hepatobiliary, or nervous system events were observed in a few patients. Grade 3 adverse events were observed in 6 patients (3 nausea and 3 vomiting), but no grade 4 adverse events were documented (Table 4).\nNew-onset diabetes mellitus was not documented, but the control of the disease was impaired during STZ therapy in one patient who had been treated for diabetes mellitus.Table 4Adverse eventsAdverse events\nn\n%CTCAE gradeG1G2G3G4UnknownGastointestinal disorder Abdominal pain11.9–1––– Diarrhea23.711––– Epigastric pain11.91–––– Nausea1222.2543–– Acute pancreatitis12.9–1––– Vomiting713.0133––Hematolymphoid system disorder Leukopenia11.91–––– Neutropenia23.711––– Thrombocytopenia11.91––––Ocular lesion Abnormal ocular sensation11.9––––1Hepatobiliary system disorder Liver function abnormality11.91––––Nerve system disorder Syncope11.9––1–– Headache11.9––––1Others Lethargy47.431––– Back pain11.9–1–––\n\nAdverse events\nSTZ therapy was discontinued in 46 patients. The reasons for the discontinuation were tumor progression in 43 patients, conversion to other treatments in 2 patients, and a severe adverse event in 1 patient.", "Data regarding patient outcome were available for 38 patients. The progression-free and overall survival curves are shown in Figs. 2 and 3. The median progression-free period was 11.8 months in all of the patients (mean 23.0 ± 3.5 months), 7.6 months in the functioning NET patients, and 16.8 months in the non-functioning NET patients (P = 0.14). 
Meanwhile, the median survival period was 38.7 months in all of the patients (mean 28.7 ± 2.6 months), 23.6 months in the functioning NET patients, and 38.7 months in the non-functioning NET patients (P = 0.32).Fig. 2Progression-free survival curves for all of the patients (n = 38) (a) and stratified according to functioning (n = 12) and non-functioning (n = 26) tumors (b)\nFig. 3Overall survival curves for all the patients (n = 38) (a) and curves stratified according to functioning (n = 12) and non-functioning (n = 26) tumors (b)\n\nProgression-free survival curves for all of the patients (n = 38) (a) and stratified according to functioning (n = 12) and non-functioning (n = 26) tumors (b)\nOverall survival curves for all the patients (n = 38) (a) and curves stratified according to functioning (n = 12) and non-functioning (n = 26) tumors (b)\nThe median amount of STZ administered was 18.8 g. When the patients were stratified according to the amount of STZ (≥18.8 or <18.8 g), both the overall survival rate and the progression-free survival rate were better in the patients who received ≥18.8 g STZ (see the Electronic supplementary material 2).\nThe overall and progression-free survival outcomes were similar among the patients who received a daily regimen and those receiving a weekly/bi-weekly regimen (data not shown). In addition, the outcomes did not differ between patients with pancreaticoduodenal NET (n = 46) and those with gastrointestinal NET (n = 8) (see the Electronic supplementary material 3).", "Below are the links to the electronic supplementary material.\nSupplementary material 1 (TIFF 104 kb)\nSupplementary material 2 (TIFF 106 kb)\nSupplementary material 3 (TIFF 96 kb)\n\nSupplementary material 1 (TIFF 104 kb)\nSupplementary material 2 (TIFF 106 kb)\nSupplementary material 3 (TIFF 96 kb)" ]
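The reported response rates follow from the RECIST counts once patients with an unknown response are excluded from the denominator; this convention is inferred from the reported percentages rather than stated explicitly in the text. A quick arithmetic check in Python:

```python
# Sanity check of the reported RECIST response rates: response rate is
# (CR + PR) / evaluable, where "evaluable" excludes patients whose response
# was unknown. This denominator convention is inferred from the reported
# percentages (27.7 % and 35.7 %), not stated explicitly in the paper.
def response_rate(cr: int, pr: int, sd: int, pd: int) -> float:
    evaluable = cr + pr + sd + pd  # unknowns excluded
    return 100.0 * (cr + pr) / evaluable

# All patients: CR 2, PR 11, SD 9, PD 25 (7 unknown) -> 27.7 %
print(f"all patients:    {response_rate(2, 11, 9, 25):.1f} %")
# STZ monotherapy: CR 1, PR 4, SD 1, PD 8 (4 unknown) -> 35.7 %
print(f"STZ monotherapy: {response_rate(1, 4, 1, 8):.1f} %")
```

Note that the monotherapy response categories sum to 18 patients (including the 4 unknowns), while the running text says 13 patients received monotherapy; the check above simply uses the counts as reported in the response paragraph.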
[ null, null, null, null, null ]
[ "Introduction", "Methods", "Results", "Patients", "Tumor characteristics", "STZ therapy", "Patient outcome", "Discussion", "Electronic supplementary material", "" ]
[ "Neuroendocrine tumors (NETs) have been regarded as relatively rare neoplasms, but the number of patients with NET is increasing in the US [1], Europe [2], and Japan [3, 4]. The epidemiological pattern of NET is highly heterogeneous; for example, the tumor location, biological behavior (functioning NET or non-functioning NET), and percentage of distant metastases differ extensively among databases. Therefore, the clinical outcomes of the various treatment modalities also differ according to the characteristics of the study cohort.\nThe clinical course of well-differentiated NET (NET G1 or NET G2) is believed to be generally indolent, but some previous studies have documented that 40–95 % of NET patients are metastatic at presentation, and the 5-year survival rate is 56–83 % for metastatic intestinal NETs and 40–60 % for metastatic pancreatic NETs. Thus, optimal management of metastatic lesions, especially of liver metastases, is key to improving the outcomes of NET patients [5].\nStreptozocin (STZ) was first discovered as an antibiotic derived from Streptomyces achromogenes, and was approved in the US as a cytotoxic antitumor drug for symptomatic or advanced pancreatic NET in 1982. In Western countries, STZ combined with doxorubicin (DOX) or fluorouracil (5-FU) has been established as a first-line chemotherapy for both pancreatic and gastrointestinal NETs based on several clinical trails including randomized clinical trials [6–12]. However, STZ has not been covered by the Japanese insurance system, and Japanese oncologists/gastroenterologists cannot choose this powerful option for the treatment of advanced or metastatic NETs.\nBecause of these specific circumstances, STZ chemotherapy has only been used in clinical trials in Japan. Therefore, the aim of the study was to investigate the actual situations in which STZ is used in Japan and to evaluate the effectiveness and safety of STZ chemotherapy among Japanese NET patients.", "This study was conducted as a retrospective multi-center survey. Five institutions (The University of Tokyo Hospital, Tokyo; Osaka Saiseikai Noe Hospital, Osaka; Kyoto University Hospital, Kyoto; Japanese Red Cross Medical Center, Tokyo; and Yamagata University Hospital, Yamagata) with experience performing STZ chemotherapy participated in the present study.\nPatients who were treated with STZ between September 1995 and November 2011 were included as the study subjects. The following clinicopathological factors were investigated: (1) sex, age at the start of STZ chemotherapy, date at the start of STZ chemotherapy, and performance status at the start of STZ chemotherapy; (2) clinical diagnosis, site of the primary tumor, age of tumor presentation, behavior of the tumor (functioning or non-functioning), presence or absence of metastasis, and metastatic site(s); (3) STZ treatment regimen, period of treatment, total dose of administered STZ, anti-tumor drugs used in combination with STZ; and (4) efficacy of STZ chemotherapy, adverse events, progression-free survival period, and overall survival period. The tumor response to STZ therapy was evaluated using RECIST criteria [13], and adverse events were assessed according to the National Cancer Institute Common Terminology Criteria for Adverse Events (CTCAE version 4.0). The survival curves were generated using Kaplan–Meier methods [14], and the differences among the curves were evaluated using a log-rank test [15]. 
Differences were considered significant when P < 0.05.
The protocol was approved by the local ethical committee of each institution that participated in the study. The clinical data were summarized in a blinded manner.", " Patients The data of 54 patients were collected. The patient cohort consisted of 24 male and 30 female patients, and the median age was 54.0 years (range 24–76 years) at the onset of the disease and 56.0 years (range 31–77 years) at the start of STZ administration (Table 1). Regarding the distribution of age at the onset of the disease, a peak occurred between 50 and 59 years, followed by a second peak at 60–69 years (as shown in the Electronic supplementary material). The performance status of most of the patients was 0 or 1 (Table 1).
Table 1 Patient demographics and tumor characteristics
Sex: male 24 (44.4 %); female 30 (55.6 %)
Age at onset (years): mean 52.5; median 54.0; range 24–76
Age at the beginning of STZ administration (years): mean 56.0; median 56.0; range 31–77
Performance status: 0 in 34 (63.0 %); 1 in 17 (31.5 %); 2 in 3 (5.5 %); 3–4 in 0 (0.0 %)
Primary site: pancreaticoduodenal NET 46 (85.2 %), comprising pancreas head 12 (26.1 %), pancreas body 10 (21.7 %), pancreas tail 19 (41.3 %), head/body/tail 1 (2.2 %), and duodenum 4 (8.7 %); gastrointestinal NET 8 (14.8 %), comprising stomach 2 (25.0 %), small intestine 1 (12.5 %), rectum 4 (50.0 %), and others 1 (12.5 %)
Pathological diagnosis (WHO 2000): well-differentiated endocrine tumor 0 (0.0 %); well-differentiated endocrine carcinoma 52 (96.4 %); poorly differentiated endocrine carcinoma/small cell carcinoma 1 (1.8 %); others 1 (1.8 %)
Functioning/non-functioning NET: functioning 18 (33.3 %), comprising gastrinoma 9 (16.7 %), insulinoma 7 (13.0 %), glucagonoma 4 (7.4 %), somatostatinoma 1 (1.9 %), and serotonin/tachykinin-producing tumor 1 (1.9 %); non-functioning 36 (66.7 %)
Metastatic site(s): liver 53 (98.1 %); lymph nodes 26 (48.1 %); peritoneum 3 (5.6 %); lung 2 (3.7 %); others 10 (18.5 %)
 Tumor characteristics The characteristics of the tumors are summarized in Table 1. Forty-two patients had pancreatic NET (P-NET), and the duodenum and gastrointestinal tract were the original sites in 4 and 8 patients, respectively. The pathological diagnosis based on the WHO Classification 2000 was well-differentiated endocrine carcinoma in 52 patients (96.4 %). One-third of the tumors (n = 18) were functioning, with 9 gastrinomas and 7 insulinomas; the other two-thirds were non-functioning NETs. All the patients had metastatic sites: all but one patient had liver metastasis, with lymph node metastasis being the second most common site (n = 26, 48.1 %).
 STZ therapy STZ chemotherapy was used as a first-line therapy in 39 patients, as a second-line therapy in 11 patients, and as a third-line therapy in 4 patients. The treatments used prior to STZ chemotherapy included transcatheter arterial chemoembolization (TACE), octreotide, 5-FU, and gemcitabine.
STZ was administered intravenously in 35 patients (64.8 %) and intra-arterially in 3 patients. Both routes were used in 15 patients. The dosing regimen was daily [350–500 mg/m2 of STZ administered for 5 consecutive days (days 1–5) every 6 weeks] in 14 patients and weekly or bi-weekly in 31 patients (350–1,000 mg/m2 of STZ administered at each treatment).
Both regimens were used in 3 patients. Interestingly, the participating institutions in Eastern Japan applied a weekly or bi-weekly regimen, while the institutions in Western Japan applied a daily regimen.
Thirteen patients received STZ monotherapy, while a combination therapy was used in the other 41 patients. The combined antitumor agents included tegafur-uracil (UFT, n = 26), octreotide (n = 20), fluorouracil (5-FU, n = 15), and oral fluoropyrimidine (S-1, n = 6) (Table 2).
Table 2 STZ therapy
Dosing route: intravenous (IV) 35 (64.8 %); intra-arterial (IA) 3 (5.6 %); IV/IA 15 (27.8 %); unknown 1 (1.9 %)
Dosing regimen: daily 14 (25.9 %); weekly/bi-weekly 31 (57.4 %); daily/weekly 3 (5.6 %); others 6 (11.1 %)
Antitumor agents combined with STZ: doxorubicin 1 (1.9 %); fluorouracil (5-FU) 15 (27.8 %); oral fluoropyrimidine (S-1) 6 (11.1 %); tegafur-uracil (UFT) 26 (48.1 %); octreotide 20 (37.0 %); mitomycin C 3 (5.6 %); interferon 1 (1.9 %); sunitinib 1 (1.9 %); none (STZ monotherapy) 13 (24.1 %)
The dosing period ranged from 0 to 105 months, with a median of 12.4 months. The dosing period was within 20 months in most patients (Fig. 1a). The total amount of STZ administered ranged from 1.0 to 128.0 g (median 18.8 g) (Fig. 1b).
Fig. 1 Distribution of the dosing period (a) and the total amount of STZ administered (b) (n = 54)
The tumor response as evaluated according to the RECIST criteria is shown in Table 3. The tumor response was CR in 2 patients, PR in 11 patients, SD in 9 patients, PD in 25 patients, and unknown in 7 patients, with a response rate of 27.7 %. The response to STZ monotherapy was CR in 1 patient, PR in 4 patients, SD in 1 patient, PD in 8 patients, and unknown in 4 patients, with a response rate of 35.7 %.
Table 3 Tumor response, evaluated according to the RECIST criteria, given as n (%); UK unknown
Group sizes: all cases 54; pancreaticoduodenal NET 46 (STZ monotherapy 14, combination therapy 32); gastrointestinal NET 8 (STZ monotherapy 4, combination therapy 4)
CR: all 2 (4.3); pancreaticoduodenal 2 (5.3), of which monotherapy 1 (11.1) and combination 1 (3.4); gastrointestinal 0 (0.0), monotherapy 0 (0.0), combination 0 (0.0)
PR: all 11 (23.9); pancreaticoduodenal 9 (23.7), of which monotherapy 3 (33.3) and combination 6 (20.7); gastrointestinal 2 (25.0), monotherapy 1 (25.0), combination 1 (25.0)
SD: all 9 (19.6); pancreaticoduodenal 8 (21.1), of which monotherapy 1 (11.1) and combination 7 (24.1); gastrointestinal 1 (12.5), monotherapy 0 (0.0), combination 1 (25.0)
PD: all 25 (54.3); pancreaticoduodenal 20 (52.6), of which monotherapy 5 (55.6) and combination 15 (51.7); gastrointestinal 5 (62.5), monotherapy 3 (75.0), combination 2 (50.0)
UK: all 7; pancreaticoduodenal 7 (monotherapy 4, combination 3); gastrointestinal 0
Documented adverse events included nausea (n = 12, 22.2 %), vomiting (n = 7, 13.0 %), and lethargy (n = 4, 7.4 %). Other adverse hematological, hepatobiliary, or nervous system events were observed in a few patients. Grade 3 adverse events were observed in 6 patients (3 nausea and 3 vomiting), but no grade 4 adverse events were documented (Table 4).
New-onset diabetes mellitus was not documented, but the control of the disease was impaired during STZ therapy in one patient who had been treated for diabetes mellitus.
Table 4 Adverse events, given as n (%) with CTCAE grades
Gastrointestinal disorders: abdominal pain 1 (1.9), G2 1; diarrhea 2 (3.7), G1 1, G2 1; epigastric pain 1 (1.9), G1 1; nausea 12 (22.2), G1 5, G2 4, G3 3; acute pancreatitis 1 (1.9), G2 1; vomiting 7 (13.0), G1 1, G2 3, G3 3
Hematolymphoid system disorders: leukopenia 1 (1.9), G1 1; neutropenia 2 (3.7), G1 1, G2 1; thrombocytopenia 1 (1.9), G1 1
Ocular lesion: abnormal ocular sensation 1 (1.9), grade unknown
Hepatobiliary system disorder: liver function abnormality 1 (1.9), G1 1
Nervous system disorders: syncope 1 (1.9), G3 1; headache 1 (1.9), grade unknown
Others: lethargy 4 (7.4), G1 3, G2 1; back pain 1 (1.9), G2 1
STZ therapy was discontinued in 46 patients. The reasons for the discontinuation were tumor progression in 43 patients, conversion to other treatments in 2 patients, and a severe adverse event in 1 patient.
 Patient outcome Data regarding patient outcome were available for 38 patients. The progression-free and overall survival curves are shown in Figs. 2 and 3. The median progression-free period was 11.8 months in all of the patients (mean 23.0 ± 3.5 months), 7.6 months in the functioning NET patients, and 16.8 months in the non-functioning NET patients (P = 0.14). Meanwhile, the median survival period was 38.7 months in all of the patients (mean 28.7 ± 2.6 months), 23.6 months in the functioning NET patients, and 38.7 months in the non-functioning NET patients (P = 0.32).
Fig. 2 Progression-free survival curves for all of the patients (n = 38) (a) and stratified according to functioning (n = 12) and non-functioning (n = 26) tumors (b)
Fig. 3 Overall survival curves for all the patients (n = 38) (a) and curves stratified according to functioning (n = 12) and non-functioning (n = 26) tumors (b)
The median amount of STZ administered was 18.8 g. When the patients were stratified according to the amount of STZ (≥18.8 or <18.8 g), both the overall survival rate and the progression-free survival rate were better in the patients who received ≥18.8 g STZ (see the Electronic supplementary material 2).
The overall and progression-free survival outcomes were similar among the patients who received a daily regimen and those receiving a weekly/bi-weekly regimen (data not shown). In addition, the outcomes did not differ between patients with pancreaticoduodenal NET (n = 46) and those with gastrointestinal NET (n = 8) (see the Electronic supplementary material 3).", "The data of 54 patients were collected. The patient cohort consisted of 24 male and 30 female patients, and the median age was 54.0 years (range 24–76 years) at the onset of the disease and 56.0 years (range 31–77 years) at the start of STZ administration (Table 1). Regarding the distribution of age at the onset of the disease, a peak occurred between 50 and 59 years, followed by a second peak at 60–69 years (as shown in the Electronic supplementary material). The performance status of most of the patients was 0 or 1 (Table 1, reproduced above).", "The characteristics of the tumors are summarized in Table 1. Forty-two patients had pancreatic NET (P-NET), and the duodenum and gastrointestinal tract were the original sites in 4 and 8 patients, respectively. The pathological diagnosis based on the WHO Classification 2000 was well-differentiated endocrine carcinoma in 52 patients (96.4 %). One-third of the tumors (n = 18) were functioning, with 9 gastrinomas and 7 insulinomas; the other two-thirds were non-functioning NETs. All the patients had metastatic sites: all but one patient had liver metastasis, with lymph node metastasis being the second most common site (n = 26, 48.1 %).", "STZ chemotherapy was used as a first-line therapy in 39 patients, as a second-line therapy in 11 patients, and as a third-line therapy in 4 patients. The treatments used prior to STZ chemotherapy included transcatheter arterial chemoembolization (TACE), octreotide, 5-FU, and gemcitabine.
STZ was administered intravenously in 35 patients (64.8 %) and intra-arterially in 3 patients. Both routes were used in 15 patients. The dosing regimen was daily [350–500 mg/m2 of STZ administered for 5 consecutive days (days 1–5) every 6 weeks] in 14 patients and weekly or bi-weekly in 31 patients (350–1,000 mg/m2 of STZ administered at each treatment).
Both regimens were used in 3 patients. Interestingly, the participating institutions in Eastern Japan applied a weekly or bi-weekly regimen, while the institutions in Western Japan applied a daily regimen.
Thirteen patients received STZ monotherapy, while a combination therapy was used in the other 41 patients. The combined antitumor agents included tegafur-uracil (UFT, n = 26), octreotide (n = 20), fluorouracil (5-FU, n = 15), and oral fluoropyrimidine (S-1, n = 6) (Table 2, reproduced above).
The dosing period ranged from 0 to 105 months, with a median of 12.4 months. The dosing period was within 20 months in most patients (Fig. 1a). The total amount of STZ administered ranged from 1.0 to 128.0 g (median 18.8 g) (Fig. 1b).
The tumor response as evaluated according to the RECIST criteria is shown in Table 3 (reproduced above). The tumor response was CR in 2 patients, PR in 11 patients, SD in 9 patients, PD in 25 patients, and unknown in 7 patients, with a response rate of 27.7 %. The response to STZ monotherapy was CR in 1 patient, PR in 4 patients, SD in 1 patient, PD in 8 patients, and unknown in 4 patients, with a response rate of 35.7 %.
Documented adverse events included nausea (n = 12, 22.2 %), vomiting (n = 7, 13.0 %), and lethargy (n = 4, 7.4 %). Other adverse hematological, hepatobiliary, or nervous system events were observed in a few patients. Grade 3 adverse events were observed in 6 patients (3 nausea and 3 vomiting), but no grade 4 adverse events were documented (Table 4, reproduced above).
New-onset diabetes mellitus was not documented, but the control of the disease was impaired during STZ therapy in one patient who had been treated for diabetes mellitus.
STZ therapy was discontinued in 46 patients. The reasons for the discontinuation were tumor progression in 43 patients, conversion to other treatments in 2 patients, and a severe adverse event in 1 patient.", "Data regarding patient outcome were available for 38 patients. The progression-free and overall survival curves are shown in Figs. 2 and 3. The median progression-free period was 11.8 months in all of the patients (mean 23.0 ± 3.5 months), 7.6 months in the functioning NET patients, and 16.8 months in the non-functioning NET patients (P = 0.14). Meanwhile, the median survival period was 38.7 months in all of the patients (mean 28.7 ± 2.6 months), 23.6 months in the functioning NET patients, and 38.7 months in the non-functioning NET patients (P = 0.32).
The median amount of STZ administered was 18.8 g. When the patients were stratified according to the amount of STZ (≥18.8 or <18.8 g), both the overall survival rate and the progression-free survival rate were better in the patients who received ≥18.8 g STZ (see the Electronic supplementary material 2).
The overall and progression-free survival outcomes were similar among the patients who received a daily regimen and those receiving a weekly/bi-weekly regimen (data not shown). In addition, the outcomes did not differ between patients with pancreaticoduodenal NET (n = 46) and those with gastrointestinal NET (n = 8) (see the Electronic supplementary material 3).", "The present study was a retrospective multi-center cohort study in patients with unresectable NET receiving STZ chemotherapy. This is the first attempt to determine the circumstances surrounding chemotherapy for NET patients in Japan. The five participating centers were high-volume centers treating NET patients in Japan, and most of the patients who received STZ therapy before 2011 were thought to have been included in the study.
During the study period (from 1995 to 2011), octreotide was the only antitumor agent against NET available in Japan until everolimus and sunitinib began to be covered by the Japanese insurance system. STZ is not yet covered by the Japanese insurance system, so STZ therapy had only been conducted on a clinical trial basis using imported STZ at all of the institutions that participated in the present study. One of the aims of our study was to encourage the approval of STZ use in a daily clinical setting in Japan.
The results of the present study revealed that the main recipients of STZ chemotherapy were patients with P-NET (well-differentiated endocrine carcinoma based on the WHO Classification 2000) with liver metastases. The dosing routes and dosing regimens varied among the regions and institutions, but an intravenous weekly/bi-weekly regimen was widely applied. The original regimen proposed by Moertel et al. [7] was a combination therapy of STZ with doxorubicin or STZ with fluorouracil (5-FU); however, in the present study, various antitumor agents were combined with STZ, and STZ monotherapy was applied in one-fourth of the patients.
The reasons for this were likely twofold: first, the use of oral anticancer drugs, such as S-1 and UFT, is popular in Japan; second, the use of other cytotoxic anticancer drugs has not been approved.
Our results showed that the response rate was 27.7 % for all of the enrolled patients, and a subgroup analysis showed that the response rate was 28.2 % for pancreaticoduodenal NET patients and 25.0 % for gastrointestinal NET patients, respectively. In addition, STZ monotherapy was associated with a response rate of 35.7 % (40.0 % for pancreaticoduodenal NET and 25.0 % for gastrointestinal NET). These figures were comparable with those obtained in Western series in which radiological measurements were used to evaluate tumor response [10–12].
The dosing period was less than 10 months in 45 % of the patients, and 10–20 months in 22 % of the patients. As a result, the total amount of STZ administered was less than 20 g in over 50 % of the patients (Fig. 1b). These results corresponded to a median progression-free period of 11.8 months (Fig. 2). The figure of 11.8 months was similar to that obtained in studies examining everolimus and sunitinib [16, 17]. However, the progression-free survival curve in STZ therapy patients reached a plateau about 2 years after the start of the therapy (Fig. 2), showing a difference from the everolimus and sunitinib studies. This finding suggested that sustained stable disease can be expected in some selected patients receiving STZ, and that these patients can undergo STZ chemotherapy for a long period because of the mild adverse event profile. Indeed, some patients in our study received STZ therapy for over 5 years. As expected, the outcomes were better among the patients who received a larger dose of STZ (see the Electronic supplementary material 2). These results also support the idea that long-term STZ chemotherapy is associated with long-term SD maintenance. In our analyses, the progression-free and overall survival outcomes were comparable between the patients with functioning NET and those with non-functioning NET, suggesting that STZ is applicable to all NET patients with the same dosing regimen.
Our survey showed that the adverse events associated with STZ chemotherapy were acceptable. Studies using animal models showed that high-dose STZ administration induced impaired glucose tolerance, leading to diabetes mellitus. In the present survey, new-onset diabetes mellitus induced by STZ was not documented. In addition, STZ therapy was discontinued because of a severe adverse event in only one patient. This mild adverse event profile can likely be attributed to the relatively low-dose regimens used in our series (350–500 mg/m2 in the daily regimen, and 350–1,000 mg/m2 in the weekly/bi-weekly regimen).
In conclusion, our survey showed the clinical benefit and safety of STZ therapy for pancreaticoduodenal and gastrointestinal NET.
Therefore, we recommend that STZ, the only cytotoxic agent available for NET, should be used in daily practice in Japan.", " Below are the links to the electronic supplementary material.
Supplementary material 1 (TIFF 104 kb)
Supplementary material 2 (TIFF 106 kb)
Supplementary material 3 (TIFF 96 kb)", "Below are the links to the electronic supplementary material.
Supplementary material 1 (TIFF 104 kb)
Supplementary material 2 (TIFF 106 kb)
Supplementary material 3 (TIFF 96 kb)" ]
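The dose-stratified analysis mentioned in both the Results and the Discussion (dichotomizing at the median cumulative dose of 18.8 g) fits the same Kaplan–Meier framework. A minimal sketch, again assuming the `lifelines` package and hypothetical placeholder arrays rather than the study data:

```python
# Minimal sketch (assumed workflow, not the authors' code) of the
# dose-stratified comparison: dichotomize patients at the median cumulative
# STZ dose (18.8 g in the paper) and compare survival curves.
# `doses_g`, `os_months`, and `death` are hypothetical placeholder arrays.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 38
doses_g = rng.uniform(1.0, 128.0, size=n)   # total STZ per patient (g)
os_months = rng.exponential(38.0, size=n)   # overall survival (months)
death = rng.integers(0, 2, size=n)          # 1 = death observed, 0 = censored

high = doses_g >= np.median(doses_g)        # >= median dose group

kmf = KaplanMeierFitter()
kmf.fit(os_months[high], event_observed=death[high], label=">= median dose")
ax = kmf.plot_survival_function()
kmf.fit(os_months[~high], event_observed=death[~high], label="< median dose")
kmf.plot_survival_function(ax=ax)

res = logrank_test(os_months[high], os_months[~high],
                   event_observed_A=death[high], event_observed_B=death[~high])
print("log-rank P =", res.p_value)
```

Dichotomizing at the median guarantees balanced group sizes, but the cut point (18.8 g here) is sample-dependent; a different cohort would split at a different dose.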
[ "introduction", "materials|methods", "results", null, null, null, null, "discussion", "supplementary-material", null ]
[ "Neuroendocrine tumors", "Streptozocin", "Multi-center survey", "Tumor response", "Progression-free survival rate" ]
Introduction: Neuroendocrine tumors (NETs) have been regarded as relatively rare neoplasms, but the number of patients with NET is increasing in the US [1], Europe [2], and Japan [3, 4]. The epidemiological pattern of NET is highly heterogeneous; for example, the tumor location, biological behavior (functioning NET or non-functioning NET), and percentage of distant metastases differ extensively among databases. Therefore, the clinical outcomes of the various treatment modalities also differ according to the characteristics of the study cohort. The clinical course of well-differentiated NET (NET G1 or NET G2) is believed to be generally indolent, but some previous studies have documented that 40–95 % of NET patients are metastatic at presentation, and the 5-year survival rate is 56–83 % for metastatic intestinal NETs and 40–60 % for metastatic pancreatic NETs. Thus, optimal management of metastatic lesions, especially of liver metastases, is key to improving the outcomes of NET patients [5]. Streptozocin (STZ) was first discovered as an antibiotic derived from Streptomyces achromogenes, and was approved in the US as a cytotoxic antitumor drug for symptomatic or advanced pancreatic NET in 1982. In Western countries, STZ combined with doxorubicin (DOX) or fluorouracil (5-FU) has been established as a first-line chemotherapy for both pancreatic and gastrointestinal NETs based on several clinical trials, including randomized clinical trials [6–12]. However, STZ has not been covered by the Japanese insurance system, and Japanese oncologists/gastroenterologists cannot choose this powerful option for the treatment of advanced or metastatic NETs. Because of these specific circumstances, STZ chemotherapy has only been used in clinical trials in Japan. Therefore, the aim of the study was to investigate the actual situations in which STZ is used in Japan and to evaluate the effectiveness and safety of STZ chemotherapy among Japanese NET patients. Methods: This study was conducted as a retrospective multi-center survey. Five institutions (The University of Tokyo Hospital, Tokyo; Osaka Saiseikai Noe Hospital, Osaka; Kyoto University Hospital, Kyoto; Japanese Red Cross Medical Center, Tokyo; and Yamagata University Hospital, Yamagata) with experience performing STZ chemotherapy participated in the present study. Patients who were treated with STZ between September 1995 and November 2011 were included as the study subjects. The following clinicopathological factors were investigated: (1) sex, age at the start of STZ chemotherapy, date at the start of STZ chemotherapy, and performance status at the start of STZ chemotherapy; (2) clinical diagnosis, site of the primary tumor, age of tumor presentation, behavior of the tumor (functioning or non-functioning), presence or absence of metastasis, and metastatic site(s); (3) STZ treatment regimen, period of treatment, total dose of administered STZ, anti-tumor drugs used in combination with STZ; and (4) efficacy of STZ chemotherapy, adverse events, progression-free survival period, and overall survival period. The tumor response to STZ therapy was evaluated using RECIST criteria [13], and adverse events were assessed according to the National Cancer Institute Common Terminology Criteria for Adverse Events (CTCAE version 4.0). The survival curves were generated using Kaplan–Meier methods [14], and the differences among the curves were evaluated using a log-rank test [15]. Differences were considered significant when P < 0.05.
The protocol was approved by the local ethical committee of each institution that participated in the study. The clinical data were summarized in a blinded manner. Results: Patients The data of 54 patients were collected. The patient cohort consisted of 24 male and 30 female patients, and the median age was 54.0 years (range 24–76 years) at the onset of the disease and 56.0 years (range 31–77 years) at the start of STZ administration (Table 1). Regarding the distribution of age at the onset of the disease, a peak occurred at between 50 and 59 years, followed by a second peak at 60–69 years (as shown in the Electronic supplementary material). The performance status of most of the patients was 0 or 1 (Table 1).Table 1Patient demographics and tumor characteristicsParametersNo. of patientsPercent (%)Sex Male2444.4 Female3055.6Age at onset Mean52.5 Median54.0 Range24–76Age at the beginning of STZ administration Mean56.0 Median56.0 Range31–77Performance status 03463.0 11731.5 235.5 3–400.0Primary site Pancreaticoduodenal NET(46)(85.2)  Pancreas head1226.1  Pancreas body1021.7  Pancreas tail1941.3  Head, body and tail12.2  Duodenal48.7 Gastrointestinal NET(8)(14.8)  Stomach225.0  Small Intestine112.5  Rectum450.0  Others112.5Pathological diagnosis (WHO 2000) Well-differentiated endocrine tumor00.0 Well-differentiated endocrine carcinoma5296.4 Poorly-differentiated endocrine carcinoma/small cell carcinoma11.8 Others11.8Functioning NET/non-functioning NET Functioning(18)(33.3)  Gastrinoma916.7  Insulinoma713.0  Glucagonoma47.4  Somatostatinoma11.9  Serotonin, tachykinins producing tumor11.9 Non-functioning(36)(66.7)Metastatic site(s) Liver5398.1 Lymph nodes2648.1 Peritoneum35.6 Lung23.7 Others1018.5 Patient demographics and tumor characteristics The data of 54 patients were collected. The patient cohort consisted of 24 male and 30 female patients, and the median age was 54.0 years (range 24–76 years) at the onset of the disease and 56.0 years (range 31–77 years) at the start of STZ administration (Table 1). Regarding the distribution of age at the onset of the disease, a peak occurred at between 50 and 59 years, followed by a second peak at 60–69 years (as shown in the Electronic supplementary material). The performance status of most of the patients was 0 or 1 (Table 1).Table 1Patient demographics and tumor characteristicsParametersNo. of patientsPercent (%)Sex Male2444.4 Female3055.6Age at onset Mean52.5 Median54.0 Range24–76Age at the beginning of STZ administration Mean56.0 Median56.0 Range31–77Performance status 03463.0 11731.5 235.5 3–400.0Primary site Pancreaticoduodenal NET(46)(85.2)  Pancreas head1226.1  Pancreas body1021.7  Pancreas tail1941.3  Head, body and tail12.2  Duodenal48.7 Gastrointestinal NET(8)(14.8)  Stomach225.0  Small Intestine112.5  Rectum450.0  Others112.5Pathological diagnosis (WHO 2000) Well-differentiated endocrine tumor00.0 Well-differentiated endocrine carcinoma5296.4 Poorly-differentiated endocrine carcinoma/small cell carcinoma11.8 Others11.8Functioning NET/non-functioning NET Functioning(18)(33.3)  Gastrinoma916.7  Insulinoma713.0  Glucagonoma47.4  Somatostatinoma11.9  Serotonin, tachykinins producing tumor11.9 Non-functioning(36)(66.7)Metastatic site(s) Liver5398.1 Lymph nodes2648.1 Peritoneum35.6 Lung23.7 Others1018.5 Patient demographics and tumor characteristics Tumor characteristics The characteristics of the tumors are summarized in Table 1. 
Forty-two patients had pancreatic NET (P-NET), and the duodenum and gastrointestinal tract were the original sites in 4 and 8 patients, respectively. The pathological diagnosis based on the WHO Classification 2000 was well-differentiated endocrine carcinoma in 52 patients (96.4 %). One-third of the tumors (n = 18) were functioning, with 9 gastrinomas and 7 insulinomas; the other two-thirds were non-functioning NETs. All the patients had metastatic sites: all but one patient had liver metastasis, with lymph node metastasis being the second most common site (n = 26, 48.1 %). The characteristics of the tumors are summarized in Table 1. Forty-two patients had pancreatic NET (P-NET), and the duodenum and gastrointestinal tract were the original sites in 4 and 8 patients, respectively. The pathological diagnosis based on the WHO Classification 2000 was well-differentiated endocrine carcinoma in 52 patients (96.4 %). One-third of the tumors (n = 18) were functioning, with 9 gastrinomas and 7 insulinomas; the other two-thirds were non-functioning NETs. All the patients had metastatic sites: all but one patient had liver metastasis, with lymph node metastasis being the second most common site (n = 26, 48.1 %). STZ therapy STZ chemotherapy was used as a first-line therapy in 39 patients, as a second-line therapy in 11 patients, and as a third-line therapy in 4 patients. The treatments used prior to STZ chemotherapy included transcatheter arterial chemoembolization (TACE), octreotide, 5FU, and gemcitabine. STZ was administered intravenously in 35 patients (64.8 %) and intra-arterially in 3 patients. Both routes were used in 15 patients. The dosing regimen was daily [350–500 mg/m2 of STZ administered for 5 consecutive days (days 1–5) every 6 weeks] in 14 patients and weekly or bi-weekly in 31 patients (350–1,000 mg/m2 of STZ administered at each treatment). Both regimens were used in 3 patients. Interestingly, the participating institutions in Eastern Japan applied a weekly or bi-weekly regimen, while the institutions in Western Japan applied a daily regimen. Thirteen patients received STZ monotherapy, while a combination therapy was used in the other 41 patients. The combined antitumor agents included tegafur-uracil (UFT, n = 26), octreotide (n = 20), fluorouracil (5-FU, n = 15), and oral fluoropyrimidine (S-1, n = 6) (Table 2).Table 2STZ therapyParametersNo. of patientsPercent (%)Dosing route Intravenous (IV)3564.8 Intra-arterial (IA)35.6 IV/IA1527.8 Unknown11.9Dosing regimen Daily1425.9 Weekly/bi-weekly3157.4 Daily/weekly35.6 Others611.1Antitumor agents combined with STZ Doxorubicin11.9 Fluorouracil (5-FU)1527.8 Oral fluoropyrimidine (S-1)611.1 Tegafur-uracil (UFT)2648.1 Octreotide2037.0 Mitomycin C35.6 Interferon11.9 Sunitinib11.9 None (STZ monotherapy)1324.1 STZ therapy The dosing period ranged from 0 to 105 months, with a median of 12.4 months. The dosing period was within 20 months in most patients (Fig. 1a). The total amount of STZ administered ranged from 1.0 to 128.0 g (median 18.8 g) (Fig. 1b).Fig. 1Distribution of the dosing period (a) and the total amount of STZ administered (b) (n = 54) Distribution of the dosing period (a) and the total amount of STZ administered (b) (n = 54) The tumor response as evaluated according to the RECIST criteria is shown in Table 3. The tumor response was CR in 2 patients, PR in 11 patients, SD in 9 patients, PD in 25 patients, and unknown in 7 patients, with a response rate of 27.7 %. 
The response to STZ monotherapy was CR in 1 patient, PR in 4 patients, SD in 1 patient, PD in 8 patients, and unknown in 4 patients, with a response rate of 35.7 %.Table 3Tumor response, evaluated according to the RECIST criteriaTumor responseaccording to RECIST criteriaAll casesPancreaticoduodenal NETGastrointestinal NETSubtotalSTZ monotherapyCombinationtherapySubtotalSTZ monotherapyCombination therapy n % n % n % n % n % n % n %54461432844CR24.325.3111.113.400.000.000.0PR1123.9923.7333.3620.7225.0125.0125.0SD919.6821.1111.1724.1112.500.0125.0PD2554.32052.6555.61551.7562.5375.0250.0UK7743000 UK unknown Tumor response, evaluated according to the RECIST criteria UK unknown Documented adverse events included nausea (n = 12, 22.2 %), vomiting (n = 7, 13.0 %), and lethargy (n = 4, 7.4 %). Other adverse hematological, hepatobiliary, or nervous system events were observed in a few patients. Grade 3 adverse events were observed in 6 patients (3 nausea and 3 vomiting), but no grade 4 adverse events were documented (Table 4). New-onset diabetes mellitus was not documented, but the control of the disease was impaired during STZ therapy in one patient who had been treated for diabetes mellitus.Table 4Adverse eventsAdverse events n %CTCAE gradeG1G2G3G4UnknownGastointestinal disorder Abdominal pain11.9–1––– Diarrhea23.711––– Epigastric pain11.91–––– Nausea1222.2543–– Acute pancreatitis12.9–1––– Vomiting713.0133––Hematolymphoid system disorder Leukopenia11.91–––– Neutropenia23.711––– Thrombocytopenia11.91––––Ocular lesion Abnormal ocular sensation11.9––––1Hepatobiliary system disorder Liver function abnormality11.91––––Nerve system disorder Syncope11.9––1–– Headache11.9––––1Others Lethargy47.431––– Back pain11.9–1––– Adverse events STZ therapy was discontinued in 46 patients. The reasons for the discontinuation were tumor progression in 43 patients, conversion to other treatments in 2 patients, and a severe adverse event in 1 patient. STZ chemotherapy was used as a first-line therapy in 39 patients, as a second-line therapy in 11 patients, and as a third-line therapy in 4 patients. The treatments used prior to STZ chemotherapy included transcatheter arterial chemoembolization (TACE), octreotide, 5FU, and gemcitabine. STZ was administered intravenously in 35 patients (64.8 %) and intra-arterially in 3 patients. Both routes were used in 15 patients. The dosing regimen was daily [350–500 mg/m2 of STZ administered for 5 consecutive days (days 1–5) every 6 weeks] in 14 patients and weekly or bi-weekly in 31 patients (350–1,000 mg/m2 of STZ administered at each treatment). Both regimens were used in 3 patients. Interestingly, the participating institutions in Eastern Japan applied a weekly or bi-weekly regimen, while the institutions in Western Japan applied a daily regimen. Thirteen patients received STZ monotherapy, while a combination therapy was used in the other 41 patients. The combined antitumor agents included tegafur-uracil (UFT, n = 26), octreotide (n = 20), fluorouracil (5-FU, n = 15), and oral fluoropyrimidine (S-1, n = 6) (Table 2).Table 2STZ therapyParametersNo. 
of patientsPercent (%)Dosing route Intravenous (IV)3564.8 Intra-arterial (IA)35.6 IV/IA1527.8 Unknown11.9Dosing regimen Daily1425.9 Weekly/bi-weekly3157.4 Daily/weekly35.6 Others611.1Antitumor agents combined with STZ Doxorubicin11.9 Fluorouracil (5-FU)1527.8 Oral fluoropyrimidine (S-1)611.1 Tegafur-uracil (UFT)2648.1 Octreotide2037.0 Mitomycin C35.6 Interferon11.9 Sunitinib11.9 None (STZ monotherapy)1324.1 STZ therapy The dosing period ranged from 0 to 105 months, with a median of 12.4 months. The dosing period was within 20 months in most patients (Fig. 1a). The total amount of STZ administered ranged from 1.0 to 128.0 g (median 18.8 g) (Fig. 1b).Fig. 1Distribution of the dosing period (a) and the total amount of STZ administered (b) (n = 54) Distribution of the dosing period (a) and the total amount of STZ administered (b) (n = 54) The tumor response as evaluated according to the RECIST criteria is shown in Table 3. The tumor response was CR in 2 patients, PR in 11 patients, SD in 9 patients, PD in 25 patients, and unknown in 7 patients, with a response rate of 27.7 %. The response to STZ monotherapy was CR in 1 patient, PR in 4 patients, SD in 1 patient, PD in 8 patients, and unknown in 4 patients, with a response rate of 35.7 %.Table 3Tumor response, evaluated according to the RECIST criteriaTumor responseaccording to RECIST criteriaAll casesPancreaticoduodenal NETGastrointestinal NETSubtotalSTZ monotherapyCombinationtherapySubtotalSTZ monotherapyCombination therapy n % n % n % n % n % n % n %54461432844CR24.325.3111.113.400.000.000.0PR1123.9923.7333.3620.7225.0125.0125.0SD919.6821.1111.1724.1112.500.0125.0PD2554.32052.6555.61551.7562.5375.0250.0UK7743000 UK unknown Tumor response, evaluated according to the RECIST criteria UK unknown Documented adverse events included nausea (n = 12, 22.2 %), vomiting (n = 7, 13.0 %), and lethargy (n = 4, 7.4 %). Other adverse hematological, hepatobiliary, or nervous system events were observed in a few patients. Grade 3 adverse events were observed in 6 patients (3 nausea and 3 vomiting), but no grade 4 adverse events were documented (Table 4). New-onset diabetes mellitus was not documented, but the control of the disease was impaired during STZ therapy in one patient who had been treated for diabetes mellitus.Table 4Adverse eventsAdverse events n %CTCAE gradeG1G2G3G4UnknownGastointestinal disorder Abdominal pain11.9–1––– Diarrhea23.711––– Epigastric pain11.91–––– Nausea1222.2543–– Acute pancreatitis12.9–1––– Vomiting713.0133––Hematolymphoid system disorder Leukopenia11.91–––– Neutropenia23.711––– Thrombocytopenia11.91––––Ocular lesion Abnormal ocular sensation11.9––––1Hepatobiliary system disorder Liver function abnormality11.91––––Nerve system disorder Syncope11.9––1–– Headache11.9––––1Others Lethargy47.431––– Back pain11.9–1––– Adverse events STZ therapy was discontinued in 46 patients. The reasons for the discontinuation were tumor progression in 43 patients, conversion to other treatments in 2 patients, and a severe adverse event in 1 patient. Patient outcome Data regarding patient outcome were available for 38 patients. The progression-free and overall survival curves are shown in Figs. 2 and 3. The median progression-free period was 11.8 months in all of the patients (mean 23.0 ± 3.5 months), 7.6 months in the functioning NET patients, and 16.8 months in the non-functioning NET patients (P = 0.14). 
Patient outcome: Data regarding patient outcome were available for 38 patients. The progression-free and overall survival curves are shown in Figs. 2 and 3. The median progression-free period was 11.8 months in all of the patients (mean 23.0 ± 3.5 months), 7.6 months in the functioning NET patients, and 16.8 months in the non-functioning NET patients (P = 0.14). Meanwhile, the median survival period was 38.7 months in all of the patients (mean 28.7 ± 2.6 months), 23.6 months in the functioning NET patients, and 38.7 months in the non-functioning NET patients (P = 0.32).

Fig. 2. Progression-free survival curves for all of the patients (n = 38) (a) and stratified according to functioning (n = 12) and non-functioning (n = 26) tumors (b).

Fig. 3. Overall survival curves for all of the patients (n = 38) (a) and stratified according to functioning (n = 12) and non-functioning (n = 26) tumors (b).

The median amount of STZ administered was 18.8 g. When the patients were stratified according to the amount of STZ received (≥18.8 or <18.8 g), both the overall survival rate and the progression-free survival rate were better in the patients who received ≥18.8 g of STZ (see the Electronic supplementary material 2). The overall and progression-free survival outcomes were similar between the patients who received a daily regimen and those who received a weekly/bi-weekly regimen (data not shown). In addition, the outcomes did not differ between patients with pancreaticoduodenal NET (n = 46) and those with gastrointestinal NET (n = 8) (see the Electronic supplementary material 3).
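Survival comparisons like the ones above can be reproduced from per-patient follow-up data. The sketch below is a minimal illustration assuming the third-party lifelines library and a hypothetical DataFrame of follow-up times; the survey does not state which software or test produced the quoted P values, so the log-rank test is shown only as one common choice.

```python
# Minimal Kaplan-Meier / log-rank sketch (hypothetical data; lifelines assumed).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":      [12.0, 38.7, 7.6, 23.6, 16.8, 40.2],  # made-up follow-up times
    "event":       [1, 1, 1, 0, 1, 0],                   # 1 = event observed
    "functioning": [True, False, True, True, False, False],
})

kmf = KaplanMeierFitter()
for label, grp in df.groupby("functioning"):
    kmf.fit(grp["months"], event_observed=grp["event"], label=f"functioning={label}")
    print(label, kmf.median_survival_time_)  # median survival per stratum

f, nf = df[df["functioning"]], df[~df["functioning"]]
result = logrank_test(f["months"], nf["months"],
                      event_observed_A=f["event"], event_observed_B=nf["event"])
print(result.p_value)  # functioning vs. non-functioning comparison
```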
Patients: The data of 54 patients were collected. The patient cohort consisted of 24 male and 30 female patients, and the median age was 54.0 years (range 24–76 years) at the onset of the disease and 56.0 years (range 31–77 years) at the start of STZ administration (Table 1). Regarding the distribution of age at disease onset, a peak occurred at 50–59 years, followed by a second peak at 60–69 years (as shown in the Electronic supplementary material). The performance status of most of the patients was 0 or 1 (Table 1).

Table 1. Patient demographics and tumor characteristics.

Parameter | No. of patients | %
Sex
  Male | 24 | 44.4
  Female | 30 | 55.6
Age at onset (years)
  Mean | 52.5 |
  Median | 54.0 |
  Range | 24–76 |
Age at the start of STZ administration (years)
  Mean | 56.0 |
  Median | 56.0 |
  Range | 31–77 |
Performance status
  0 | 34 | 63.0
  1 | 17 | 31.5
  2 | 3 | 5.5
  3–4 | 0 | 0.0
Primary site
  Pancreaticoduodenal NET | (46) | (85.2)
    Pancreas head | 12 | 26.1
    Pancreas body | 10 | 21.7
    Pancreas tail | 19 | 41.3
    Head, body, and tail | 1 | 2.2
    Duodenum | 4 | 8.7
  Gastrointestinal NET | (8) | (14.8)
    Stomach | 2 | 25.0
    Small intestine | 1 | 12.5
    Rectum | 4 | 50.0
    Others | 1 | 12.5
Pathological diagnosis (WHO 2000)
  Well-differentiated endocrine tumor | 0 | 0.0
  Well-differentiated endocrine carcinoma | 52 | 96.4
  Poorly differentiated endocrine carcinoma/small cell carcinoma | 1 | 1.8
  Others | 1 | 1.8
Functioning NET/non-functioning NET
  Functioning | (18) | (33.3)
    Gastrinoma | 9 | 16.7
    Insulinoma | 7 | 13.0
    Glucagonoma | 4 | 7.4
    Somatostatinoma | 1 | 1.9
    Serotonin- and tachykinin-producing tumor | 1 | 1.9
  Non-functioning | (36) | (66.7)
Metastatic site(s)
  Liver | 53 | 98.1
  Lymph nodes | 26 | 48.1
  Peritoneum | 3 | 5.6
  Lung | 2 | 3.7
  Others | 10 | 18.5

Tumor characteristics: The characteristics of the tumors are summarized in Table 1. Forty-two patients had pancreatic NET (P-NET), and the duodenum and gastrointestinal tract were the primary sites in 4 and 8 patients, respectively. The pathological diagnosis based on the WHO Classification 2000 was well-differentiated endocrine carcinoma in 52 patients (96.4 %). One-third of the tumors (n = 18) were functioning, including 9 gastrinomas and 7 insulinomas; the other two-thirds were non-functioning NETs. All of the patients had metastatic sites: all but one patient had liver metastasis, and the lymph nodes were the second most common metastatic site (n = 26, 48.1 %).
Discussion: The present study was a retrospective multi-center cohort study of patients with unresectable NET receiving STZ chemotherapy. This is the first attempt to determine the circumstances surrounding chemotherapy for NET patients in Japan. The five participating centers were high-volume centers treating NET patients in Japan, and most of the patients who received STZ therapy before 2011 were thought to have been included in the study. During the study period (1995–2011), octreotide was the only antitumor agent against NET available in Japan until everolimus and sunitinib began to be covered by the Japanese insurance system. STZ is not yet covered by the Japanese insurance system, so STZ therapy had been conducted only on a clinical trial basis using imported STZ at all of the institutions that participated in the present study. One of the aims of our study was to encourage the approval of STZ for use in the daily clinical setting in Japan. The results of the present study revealed that the main recipients of STZ chemotherapy were patients with P-NET (well-differentiated endocrine carcinoma based on the WHO Classification 2000) with liver metastases. The dosing routes and regimens varied among the regions and institutions, but an intravenous weekly/bi-weekly regimen was the most widely applied. The original regimen proposed by Moertel et al. [7] was a combination therapy of STZ with doxorubicin or of STZ with fluorouracil (5-FU); however, in the present study, various antitumor agents were combined with STZ, and STZ monotherapy was applied in one-fourth of the patients. The reasons for this were likely twofold: first, the use of oral anticancer drugs, such as S-1 and UFT, is popular in Japan; second, the use of other cytotoxic anticancer drugs has not been approved.
Our results showed that the response rate was 27.7 % for all of the enrolled patients; in subgroup analyses, the response rate was 28.2 % for pancreaticoduodenal NET patients and 25.0 % for gastrointestinal NET patients. In addition, STZ monotherapy was associated with a response rate of 35.7 % (40.0 % for pancreaticoduodenal NET and 25.0 % for gastrointestinal NET). These figures are comparable with those obtained in Western series in which radiological measurements were used to evaluate tumor response [10–12]. The dosing period was less than 10 months in 45 % of the patients and 10–20 months in 22 % of the patients. As a result, the total amount of STZ administered was less than 20 g in over 50 % of the patients (Fig. 1b). These results corresponded to a median progression-free period of 11.8 months (Fig. 2), a figure similar to those obtained in studies examining everolimus and sunitinib [16, 17]. However, the progression-free survival curve of the STZ therapy patients reached a plateau about 2 years after the start of therapy (Fig. 2), a difference from the everolimus and sunitinib studies. This finding suggests that sustained stable disease can be expected in some selected patients receiving STZ, and that these patients can undergo STZ chemotherapy for a long period because of its mild adverse event profile. Indeed, some patients in our study received STZ therapy for over 5 years. As expected, the outcomes were better among the patients who received a larger dose of STZ (see the Electronic supplementary material 2). These results also support the idea that long-term STZ chemotherapy is associated with long-term maintenance of stable disease. In our analyses, progression-free and overall survival were comparable between the patients with functioning NET and those with non-functioning NET, suggesting that STZ is applicable to all NET patients with the same dosing regimen. Our survey showed that the adverse events associated with STZ chemotherapy were acceptable. Studies using animal models have shown that high-dose STZ administration induces impaired glucose tolerance, leading to diabetes mellitus. In the present survey, new-onset diabetes mellitus induced by STZ was not documented. In addition, STZ therapy was discontinued because of a severe adverse event in only one patient. This mild adverse event profile can likely be attributed to the relatively low-dose regimens used in our series (350–500 mg/m2 in the daily regimen and 350–1,000 mg/m2 in the weekly/bi-weekly regimen). In conclusion, our survey showed the clinical benefit and safety of STZ therapy for pancreaticoduodenal and gastrointestinal NET. Therefore, we recommend that STZ, the only cytotoxic agent available for NET, should be used in daily practice in Japan.

Electronic supplementary material: Below are the links to the electronic supplementary material. Supplementary material 1 (TIFF 104 kb). Supplementary material 2 (TIFF 106 kb). Supplementary material 3 (TIFF 96 kb).
Background: Neuroendocrine tumors (NETs) are believed to be relatively rare and to follow a generally indolent course. However, liver metastases are common in NET patients and the outcome of NET liver metastasis is poor. In Western countries, streptozocin (STZ) has been established as a first-line anticancer drug for unresectable NET; however, STZ cannot be used in daily practice in Japan. The aim of the present study was to determine the status of STZ usage in Japan and to evaluate the effectiveness and safety of STZ chemotherapy in Japanese NET patients. Methods: A retrospective multi-center survey was conducted. Five institutions with experience performing STZ chemotherapy participated in the study. The patient demographics, tumor characteristics, context of STZ chemotherapy, and patient outcome were collected and assessed. Results: Fifty-four patients were enrolled. The main recipients of STZ chemotherapy were middle-aged patients with pancreatic NET and unresectable liver metastases. The predominant regimen was the weekly/bi-weekly intravenous administration of STZ combined with other oral anticancer agents. STZ monotherapy was used in one-fourth of the patients. The median progression-free and overall survival periods were 11.8 and 38.7 months, respectively, and sustained stable disease was obtained in some selected patients. The adverse events profile was mild and tolerable. Conclusions: Our survey showed the clinical benefit and safety of STZ therapy for Japanese patients with unresectable NET. Therefore, we recommend that STZ, which is the only cytotoxic agent available against NET, should be used in daily practice in Japan.
null
null
7,072
301
[ 276, 138, 864, 465, 70 ]
10
[ "patients", "stz", "net", "functioning", "months", "therapy", "supplementary", "material", "survival", "supplementary material" ]
[ "metastatic intestinal nets", "chemotherapy net", "patient liver metastasis", "introduction neuroendocrine tumors", "net patients streptozocin" ]
null
null
null
[CONTENT] Neuroendocrine tumors | Streptozocin | Multi-center survey | Tumor response | Progression-free survival rate [SUMMARY]
null
[CONTENT] Neuroendocrine tumors | Streptozocin | Multi-center survey | Tumor response | Progression-free survival rate [SUMMARY]
null
[CONTENT] Neuroendocrine tumors | Streptozocin | Multi-center survey | Tumor response | Progression-free survival rate [SUMMARY]
null
[CONTENT] Adult | Aged | Antibiotics, Antineoplastic | Disease-Free Survival | Female | Humans | Japan | Male | Middle Aged | Neuroendocrine Tumors | Pancreatic Neoplasms | Retrospective Studies | Streptozocin | Survival Rate | Treatment Outcome | Young Adult [SUMMARY]
null
[CONTENT] Adult | Aged | Antibiotics, Antineoplastic | Disease-Free Survival | Female | Humans | Japan | Male | Middle Aged | Neuroendocrine Tumors | Pancreatic Neoplasms | Retrospective Studies | Streptozocin | Survival Rate | Treatment Outcome | Young Adult [SUMMARY]
null
[CONTENT] Adult | Aged | Antibiotics, Antineoplastic | Disease-Free Survival | Female | Humans | Japan | Male | Middle Aged | Neuroendocrine Tumors | Pancreatic Neoplasms | Retrospective Studies | Streptozocin | Survival Rate | Treatment Outcome | Young Adult [SUMMARY]
null
[CONTENT] metastatic intestinal nets | chemotherapy net | patient liver metastasis | introduction neuroendocrine tumors | net patients streptozocin [SUMMARY]
null
[CONTENT] metastatic intestinal nets | chemotherapy net | patient liver metastasis | introduction neuroendocrine tumors | net patients streptozocin [SUMMARY]
null
[CONTENT] metastatic intestinal nets | chemotherapy net | patient liver metastasis | introduction neuroendocrine tumors | net patients streptozocin [SUMMARY]
null
[CONTENT] patients | stz | net | functioning | months | therapy | supplementary | material | survival | supplementary material [SUMMARY]
null
[CONTENT] patients | stz | net | functioning | months | therapy | supplementary | material | survival | supplementary material [SUMMARY]
null
[CONTENT] patients | stz | net | functioning | months | therapy | supplementary | material | survival | supplementary material [SUMMARY]
null
[CONTENT] net | clinical | nets | metastatic | stz | japanese | pancreatic | advanced | trials | clinical trials [SUMMARY]
null
[CONTENT] patients | stz | functioning | months | table | net | 38 | therapy | survival | curves [SUMMARY]
null
[CONTENT] patients | stz | net | kb | tiff | material tiff | supplementary material tiff | functioning | supplementary material | material [SUMMARY]
null
[CONTENT] ||| NET | NET ||| STZ | first | NET | STZ | daily | Japan ||| STZ | Japan | STZ | Japanese | NET [SUMMARY]
null
[CONTENT] Fifty-four ||| STZ | NET ||| weekly | STZ ||| STZ | one-fourth ||| 11.8 | 38.7 months ||| [SUMMARY]
null
[CONTENT] ||| NET | NET ||| STZ | first | NET | STZ | daily | Japan ||| STZ | Japan | STZ | Japanese | NET ||| ||| Five | STZ ||| STZ ||| ||| Fifty-four ||| STZ | NET ||| weekly | STZ ||| STZ | one-fourth ||| 11.8 | 38.7 months ||| ||| STZ | Japanese | NET ||| STZ | NET | daily | Japan [SUMMARY]
null
Urinary Podocyte Count as a Potential Routine Laboratory Test for Glomerular Disease: A Novel Method Using Liquid-Based Cytology and Immunoenzyme Staining.
35350010
This study investigated whether our urinary podocyte detection method using podocalyxin (PDX) and Wilms tumor 1 (WT1) immunoenzyme staining combined with liquid-based cytology can serve as a noninvasive routine laboratory test for glomerular disease.
INTRODUCTION
The presence of PDX- and WT1-positive cells was investigated in 79 patients with glomerular disease and 51 patients with nonglomerular disease.
METHODS
The frequencies and numbers of PDX- and WT1-positive cells were significantly higher in the glomerular disease group than in the nonglomerular disease group. The best cutoffs for PDX- and WT1-positive cell counts for identifying patients with glomerular disease were 3.5 cells/10 mL (sensitivity = 67.1% and specificity = 100%) and 1.2 cells/10 mL (sensitivity = 43.0% and specificity = 100%), respectively.
RESULTS
Because our urinary podocyte detection method using PDX immunoenzyme staining can be standardized and it detected glomerular disease with high accuracy, it can likely serve as a noninvasive routine laboratory test for various glomerular diseases.
CONCLUSION
[ "Cytodiagnosis", "Humans", "Kidney Diseases", "Podocytes", "Staining and Labeling" ]
9501740
Introduction
Podocytes are highly specialized epithelial cells that play an important role as filters that prevent the leakage of high-molecular-weight proteins, glomerular basement membrane (GBM) components, and endothelial cells [1, 2]. Podocytes are injured by various insults (genetic, mechanical, immunologic, and toxic). Upon podocyte injury, localized changes occur in the slit membrane, followed by foot process effacement [3]. This phenomenon causes a decrease in glomerular filtration function and clinically appears as proteinuria. Other podocyte injury mechanisms include hypertrophy, detachment, apoptosis, and epithelial-to-mesenchymal transition [4, 5, 6, 7]. In these cases, podocytes frequently detach from the GBM. Podocytes are terminally differentiated cells that are generally unable to replicate in response to injury [8]. Thus, to correct local podocyte loss, parietal epithelial cells (PECs) are activated and attached to the naked GBM. When podocyte loss is excessive, PECs are further activated and induced to proliferate, eventually causing glomerulosclerosis. Additionally, severe glomerular injury associated with defects of the capillary wall and GBM leads to the proliferation of PECs and crescent formation. In this situation, podocytes are severely damaged, leading to their detachment from the GBM [2, 9]. Because highly damaged podocytes detach from the GBM and accumulate in urine, several studies have revealed that the detection of podocytes in the urine is a valuable noninvasive biomarker of disease activity and therapeutic efficacy in various glomerular diseases [2, 10, 11]. These previous studies relied on the combination of conventional sample preparation methods (direct smears and cytospins) and podocalyxin (PDX) immunofluorescence staining to detect urinary podocytes. However, conventional sample preparation methods have problems and difficulties regarding standardization, including air-drying artifacts and significant cell loss during the fixation process [12]. Moreover, because immunofluorescence staining requires a fluorescent microscope, a darkroom, and human resources, it is difficult to perform this method as a routine laboratory test in small- to medium-sized hospitals, and this staining method cannot create permanent specimens. Therefore, we have developed a new urinary podocyte detection method as a routine laboratory test [13]. For urine cytologic specimen preparation, we adopted the SurePath method, a major liquid-based cytology (LBC) method. The advantages of the SurePath method include its higher cell recovery rate, better cell preservation, and standardized technique [13]. Additionally, this method can be used both automatically and manually. In particular, the manual procedure can be used even in small- to medium-sized hospitals and developing countries, provided a slide rack and centrifuge are available. Meanwhile, we leveraged the immunoenzyme method for immunostaining. Consequently, we developed a new method for detecting urinary podocytes that combines the SurePath method and immunoenzyme staining using the Wilms tumor 1 (WT1) antibody (a podocyte marker) [13]. This method proved effective for detecting glomerular disease, in line with previous reports using PDX immunofluorescence staining [10, 11], and it was also possible to detect crescent formation [14]. Although PDX is considered the best marker for detecting podocytes, in our experiments, the PDX antibody used in previous studies did not work with immunoenzyme staining in urine cytology [10, 11].
Therefore, we chose WT1, which is commonly employed in routine pathologic examination, to detect urinary podocytes by immunoenzyme staining in our previous studies. However, because WT1 shows cytoplasmic positivity in immunoenzyme staining of alcohol-fixed cytologic specimens [13, 14, 15], researchers accustomed to immunofluorescence staining for nuclear expression may feel uncomfortable with this technique. Thus, we searched for a PDX antibody that works in immunoenzyme staining in urine cytology. After identifying a commercial PDX antibody, we investigated its effectiveness. This study assessed whether our urinary podocyte detection method using PDX immunoenzyme staining can serve as a noninvasive routine laboratory test for glomerular disease. Therefore, we investigated the frequencies and numbers of PDX- and WT1-positive cells in patients with and without glomerular disease. Meanwhile, we examined the correlation between the number of positive cells and renal function markers. This is the first report investigating glomerular disease using a urinary podocyte detection method combining SurePath and PDX immunoenzyme staining.
null
null
Results
Comparisons of PDX- and WT1-Positive Cell Counts between the Groups: PDX- and WT1-positive cells were strongly positive in the cytoplasm. These positive cells appeared singly; they were 20–40 μm in maximum length and round or oval in shape (shown in Fig. 1). In addition, these positive cells were occasionally encased by a cast. The numbers of PDX- and WT1-positive cells were significantly higher in the glomerular disease group than in the nonglomerular disease group (shown in Fig. 2). In addition, there were no significant differences in PDX- and WT1-positive cell numbers within the nonglomerular disease group. However, in the glomerular disease group, the number of PDX-positive cells was significantly higher than that of WT1-positive cells. The best cutoff for urinary PDX-positive cells to differentiate glomerular disease from nonglomerular disease was 3.5 cells/10 mL (sensitivity = 67.1%, specificity = 100%, positive predictive value = 100%, negative predictive value = 66.2%), and the cutoff for WT1-positive cells was 1.2 cells/10 mL (sensitivity = 43.0%, specificity = 100%, positive predictive value = 100%, negative predictive value = 53.1%). These cutoffs produced areas under the curve of 0.835 and 0.705, respectively (shown in Fig. 3), and revealed that glomerular disease could be detected with moderate accuracy (Table 2).

Comparison of PDX- and WT1-Positive Cell Counts among the Three Glomerular Disease Groups: The numbers of PDX- and WT1-positive cells were significantly higher in the IgA nephropathy and HSPN groups than in the MGAs group (shown in Fig. 4). On the other hand, there were no significant differences in PDX- and WT1-positive cell numbers between the IgA nephropathy and HSPN groups.
Correlations of PDX- and WT1-Positive Cell Counts with Renal Function Markers in Glomerular Disease: In the glomerular disease group, the numbers of PDX- and WT1-positive cells were not correlated with patient sex, height, weight, age, serum creatinine levels, urine protein-to-creatinine ratio, or occult blood in urine.
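A cutoff of the kind reported above is typically read off an ROC curve. The sketch below assumes scikit-learn and uses toy counts rather than the study data; the Youden index is shown only as one common selection rule, since the paper does not state which rule was applied.

```python
# Hypothetical ROC-based cutoff selection for cells/10 mL counts.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])                  # 1 = glomerular disease
counts = np.array([9.2, 4.1, 0.8, 6.5, 0.0, 0.3, 0.0, 1.1])  # positive cells/10 mL (toy)

fpr, tpr, thresholds = roc_curve(y_true, counts)
youden = tpr - fpr               # Youden index at each candidate threshold
best = np.argmax(youden)

print("AUC:", roc_auc_score(y_true, counts))
print("cutoff:", thresholds[best])
print("sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```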
null
null
[ "Patients, Urine Samples, and LBC (SurePath) Slides", "Immunoenzyme Staining", "Counting of PDX- and WT1-Positive Urinary Cells", "Laboratory Test Data", "Statistical Analysis", "Comparisons of PDX- and WT1-Positive Cell Counts between the Groups", "Comparison of PDX- and WT1-Positive Cell Counts among the Three Glomerular Disease Groups", "Correlations of PDX- and WT1-Positive Cell Counts with Renal Function Markers in Glomerular Disease", "Statement of Ethics", "Funding Sources", "Author Contributions" ]
[ "This study used data of 79 patients (46 males and 33 females, mean age = 10.6 ± 4.1 years) with glomerular disease who underwent renal biopsy and 51 patients (32 males and 19 females, mean age = 6.0 ± 4.8 years) with nonglomerular disease (underwent orthopedic or plastic surgery) in Shizuoka Children's Hospital. The distribution of each glomerular disease is summarized in Table 1.\nAll samples were residual voided urine after a routine test (glomerular disease: median = 30 mL [5–68 mL], nonglomerular disease: median = 23 mL [5–84 mL]), and catheterized urine and bladder washing urine were not included. In the glomerular disease group, voided urine was collected immediately before renal biopsy. The same urine sample was divided into two aliquots and used to prepare two LBC slides. Then, each slide was stained with PDX and WT1 antibodies. LBC slides were prepared using the SurePath (Becton, Dickinson, Franklin Lakes, NJ, USA) modified manual protocols [13].", "LBC slides were fixed in 95% ethanol for 30 min and labeled with primary antibodies of PDX (dilution 1:16,000, clone EPR9518; Abcam, Cambridge, UK) and WT1 (dilution 1:1,000, clone 6F-H2; DakoCytomation, Glostrup, Denmark) using the BOND-MAX automatic immunostaining system (Leica Microsystems, Wetzlar, Germany).", "The total numbers of PDX- and WT1-positive cells on each LBC slide were counted on the entire smear area under a light microscope (×10 and ×40 magnification) by 2 authors (J.S. and T.I.) blinded to the patients' diagnoses. Some patients were too young to provide constant amounts of samples. Therefore, to eliminate the effects of urine volume, the number of podocytes in 10 mL of urine was calculated for each patient.\nFirst, we compared the frequencies and numbers of urinary PDX- and WT1-positive cells between the glomerular disease and nonglomerular disease groups. Second, we compared the numbers of PDX- and WT1-positive cells among 3 groups (i.e., IgA nephropathy, Henoch-Schönlein purpura nephritis [HSPN], and minor glomerular abnormalities [MGAs] groups), which are the top three glomerular diseases in this study. Third, we examined the correlations of the numbers of positive cells with renal function markers such as serum creatinine, urine protein-to-creatinine ratio, and occult blood in the urine in patients with glomerular disease.", "Serum and urine creatinine contents were measured by the enzymatic method (Cygnus Auto CRE; Shino-Test Corp., Tokyo, Japan) using a BioMajesty JCA-BM9130 automatic biochemical analyzer (JEOL Ltd., Tokyo, Japan). Urine protein levels were measured by the pyrogallol red method (Micro TP-AR 2; Wako Pure Chemical Industries, Osaka, Japan) using a BioMajesty JCA-BM9130 automatic biochemical analyzer (JEOL Ltd.). Occult blood in urine scores of 1+ or more on dipstick testing (Uropaper α||| Eiken; Eiken Chemical, Tokyo, Japan) as read by an automated dipstick reader (US-3500; Eiken Chemical, Tokyo, Japan) was defined as positive.", "The Mann-Whitney U test and Kruskal-Wallis test were used where appropriate. p < 0.05 indicated statistical significance. All analyses were performed using StatFlex software (version 7.0; Artec Inc., Osaka, Japan).", "PDX- and WT1-positive cells were strongly positive in the cytoplasm. These positive cells appeared singly; they were 20–40 μm in maximum length and round or oval in shape (shown in Fig. 1). 
In addition, these positive cells were occasionally encased by a cast.\nThe numbers of PDX- and WT1-positive cells were significantly higher in the glomerular disease group than in the nonglomerular disease group (shown in Fig. 2). In addition, there were no significant differences in PDX- and WT1-positive cell numbers within the nonglomerular disease group. However, in the glomerular disease group, the number of PDX-positive cells was significantly higher than that of WT1-positive cells.\nThe best cutoff for urinary PDX-positive cells to differentiate glomerular disease from nonglomerular disease was 3.5 cells/10 mL (sensitivity = 67.1%, specificity = 100%, positive predictive value = 100%, negative predictive value = 66.2%), and the cutoff for WT1-positive cells was 1.2 cells/10 mL (sensitivity = 43.0%, specificity = 100%, positive predictive value = 100%, negative predictive value = 53.1%). These cutoffs produced area under the curves of 0.835 and 0.705 (shown in Fig. 3), respectively, and revealed that glomerular disease could be detected with moderate accuracy (Table 2).", "The numbers of PDX- and WT1-positive cells were significantly higher in the IgA nephropathy and HSPN groups than those in the MGAs group (shown in Fig. 4). On the other hand, there were no significant differences in PDX- and WT1-positive cell numbers between the IgA nephropathy and HSPN groups.", "In the glomerular disease group, the numbers of PDX- and WT1-positive cells were not correlated with patient sex, height, weight, age, serum creatinine levels, urine protein-to-creatinine ratio, and occult blood in urine.", "This study was approved by the Ethics Committee of Shizuoka Children's Hospital (no. 2020-28) and Kobe University Graduate School of Health Sciences (no. 977) and conducted in accordance with the principles of the Declaration of Helsinki. Informed consent was obtained from each patient's parent or legal guardian.", "This work was financially supported by the Shizuoka General Hospital Encouragement Research Grant and the Japanese Society of Clinical Cytology Grant.", "Methodology: Ohsaki H. Formal analysis: Sakane J., Inoue T., Nakamura A., and Ohsaki H; resources: Kitayama H., Yamada M., Miyama Y., Kawamura H., Iwafuchi H., and Ohsaki H; writing − original draft: Ohsaki H; writing − review and editing: Kitayama H., Nakamura A., Kamoshida S., and Ohsaki H; and supervision: Ohsaki H." ]
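Two steps described in the methods above, the per-10-mL normalization of slide counts and the between-group comparison, reduce to a few lines of code. A minimal sketch assuming SciPy (the study itself used StatFlex, and all numbers below are made up):

```python
# Per-10-mL normalization and a Mann-Whitney U comparison (hypothetical data).
from scipy.stats import mannwhitneyu

def cells_per_10ml(total_positive_cells: int, urine_volume_ml: float) -> float:
    """Normalize a whole-slide count to cells per 10 mL of urine."""
    return total_positive_cells * 10.0 / urine_volume_ml

print(cells_per_10ml(12, 30))  # 12 cells counted in 30 mL -> 4.0 cells/10 mL

glomerular    = [9.2, 4.1, 0.8, 6.5, 12.3]  # cells/10 mL, toy values
nonglomerular = [0.0, 0.3, 0.0, 1.1, 0.2]
stat, p = mannwhitneyu(glomerular, nonglomerular, alternative="two-sided")
print(stat, p)
```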
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and Methods", "Patients, Urine Samples, and LBC (SurePath) Slides", "Immunoenzyme Staining", "Counting of PDX- and WT1-Positive Urinary Cells", "Laboratory Test Data", "Statistical Analysis", "Results", "Comparisons of PDX- and WT1-Positive Cell Counts between the Groups", "Comparison of PDX- and WT1-Positive Cell Counts among the Three Glomerular Disease Groups", "Correlations of PDX- and WT1-Positive Cell Counts with Renal Function Markers in Glomerular Disease", "Discussion", "Statement of Ethics", "Conflict of Interest Statement", "Funding Sources", "Author Contributions", "Data Availability Statement" ]
[ "Podocytes are highly specialized epithelial cells that play an important role as filters that prevent the leakage of high-molecular-weight proteins, glomerular basement membrane (GBM) components, and endothelial cells [1, 2]. Podocytes are injured by various insults (genetic, mechanical, immunologic, and toxic). Upon podocyte injury, localized changes occur in the slit membrane, followed by foot process effacement [3]. This phenomenon causes a decrease in glomerular filtration function and clinically appears as proteinuria. Other podocyte injury mechanisms include hypertrophy, detachment, apoptosis, and epithelial-to-mesenchymal transition [4, 5, 6, 7]. In these cases, podocytes detach from the GBM frequently. Podocytes are terminally differentiated cells that are generally unable to replicate in response to injury [8]. Thus, to correct local podocyte loss, parietal epithelial cells (PECs) are activated and attached to the naked GBM. When podocyte loss is excessive, PECs are further activated and induced to proliferate, eventually causing glomerulosclerosis. Additionally, severe glomerular injury associated with defects of the capillary wall and GBM leads to the proliferation of PECs and crescent formation. In this situation, podocytes are severely damaged, leading to their detachment from the GBM [2, 9].\nBecause highly damaged podocytes detach from the GBM and accumulate in urine, several studies revealed that the detection of podocytes in the urine is a valuable noninvasive biomarker for disease activity and therapeutic efficacy in various glomerular diseases [2, 10, 11]. These previous studies relied on the combination of conventional sample preparation methods (direct smears and cytospins) and podocalyxin (PDX) immunofluorescence staining to detect urinary podocytes. However, conventional sample preparation methods have problems and difficulty regarding standardization, including air-drying artifacts and significant cell loss during the fixation process [12]. Moreover, because immunofluorescence staining requires a fluorescent microscope, darkroom, and human resources, it is difficult to perform this method as a routine laboratory test in small- to medium-sized hospitals, and this staining method cannot create permanent specimens. Therefore, we have developed a new urinary podocyte detection method as a routine laboratory test [13]. For urine cytologic specimen preparation, we adopted the SurePath method, a major liquid-based cytology (LBC) method. The advantages of the SurePath method include its higher cell recovery rate, better cell preservation, and standardized technique [13]. Additionally, this method can be used both automatically and manually. In particular, the manual procedure can be used even in small- to medium-sized hospitals and developing countries, provided a slide rack and centrifuge are available. Meanwhile, we leveraged immunoenzyme method for immunostaining. Consequently, we developed a new method for detecting urinary podocytes that combines the SurePath method and immunoenzyme staining using the Wilms tumor 1 (WT1) antibody (podocyte marker) [13]. This method proved effective for detecting glomerular disease, in line with previous reports using PDX immunofluorescence staining [10, 11], and it was also possible to detect crescent formation [14]. Although PDX is considered the best marker for detecting podocytes, in our experiments, the PDX antibody used in previous studies did not work with immunoenzyme staining in urine cytology [10, 11]. 
Therefore, we chose WT1, which is commonly employed in routine pathologic examination, to detect urinary podocytes by immunoenzyme staining in our previous studies. However, because WT1 shows cytoplasmic positivity in immunoenzyme staining of alcohol-fixed cytologic specimens [13, 14, 15], researchers accustomed to immunofluorescence staining for nuclear expression may feel uncomfortable with this technique. Thus, we searched for a PDX antibody that works in immunoenzyme staining in urine cytology. After identifying a commercial PDX antibody, we investigated its effectiveness.\nThis study assessed whether our urinary podocyte detection method using PDX immunoenzyme staining can serve as a noninvasive routine laboratory test for glomerular disease. Therefore, we investigated the frequencies and numbers of PDX- and WT1-positive cells in patients with and without glomerular disease. Meanwhile, we examined the correlation between the number of positive cells and renal function markers. This is the first report investigating glomerular disease using a urinary podocyte detection method combining SurePath and PDX immunoenzyme staining.", "Patients, Urine Samples, and LBC (SurePath) Slides This study used data from 79 patients (46 males and 33 females, mean age = 10.6 ± 4.1 years) with glomerular disease who underwent renal biopsy and 51 patients (32 males and 19 females, mean age = 6.0 ± 4.8 years) with nonglomerular disease (underwent orthopedic or plastic surgery) at Shizuoka Children's Hospital. The distribution of each glomerular disease is summarized in Table 1.\nAll samples were residual voided urine after a routine test (glomerular disease: median = 30 mL [5–68 mL], nonglomerular disease: median = 23 mL [5–84 mL]), and catheterized urine and bladder washing urine were not included. In the glomerular disease group, voided urine was collected immediately before renal biopsy. The same urine sample was divided into two aliquots and used to prepare two LBC slides. Then, each slide was stained with PDX and WT1 antibodies. LBC slides were prepared using the SurePath (Becton, Dickinson, Franklin Lakes, NJ, USA) modified manual protocols [13].\nImmunoenzyme Staining LBC slides were fixed in 95% ethanol for 30 min and labeled with primary antibodies against PDX (dilution 1:16,000, clone EPR9518; Abcam, Cambridge, UK) and WT1 (dilution 1:1,000, clone 6F-H2; DakoCytomation, Glostrup, Denmark) using the BOND-MAX automatic immunostaining system (Leica Microsystems, Wetzlar, Germany).\nCounting of PDX- and WT1-Positive Urinary Cells The total numbers of PDX- and WT1-positive cells on each LBC slide were counted on the entire smear area under a light microscope (×10 and ×40 magnification) by 2 authors (J.S. and T.I.) blinded to the patients' diagnoses. Some patients were too young to provide constant amounts of samples. Therefore, to eliminate the effects of urine volume, the number of podocytes in 10 mL of urine was calculated for each patient.\nFirst, we compared the frequencies and numbers of urinary PDX- and WT1-positive cells between the glomerular disease and nonglomerular disease groups. Second, we compared the numbers of PDX- and WT1-positive cells among 3 groups (i.e., IgA nephropathy, Henoch-Schönlein purpura nephritis [HSPN], and minor glomerular abnormalities [MGAs] groups), which are the top three glomerular diseases in this study. Third, we examined the correlations of the numbers of positive cells with renal function markers such as serum creatinine, urine protein-to-creatinine ratio, and occult blood in the urine in patients with glomerular disease.\nLaboratory Test Data Serum and urine creatinine contents were measured by the enzymatic method (Cygnus Auto CRE; Shino-Test Corp., Tokyo, Japan) using a BioMajesty JCA-BM9130 automatic biochemical analyzer (JEOL Ltd., Tokyo, Japan). Urine protein levels were measured by the pyrogallol red method (Micro TP-AR 2; Wako Pure Chemical Industries, Osaka, Japan) using a BioMajesty JCA-BM9130 automatic biochemical analyzer (JEOL Ltd.). Occult blood in urine scores of 1+ or more on dipstick testing (Uropaper αIII 'Eiken'; Eiken Chemical, Tokyo, Japan) as read by an automated dipstick reader (US-3500; Eiken Chemical, Tokyo, Japan) was defined as positive.\nStatistical Analysis The Mann-Whitney U test and Kruskal-Wallis test were used where appropriate. p < 0.05 indicated statistical significance. All analyses were performed using StatFlex software (version 7.0; Artec Inc., Osaka, Japan).", "This study used data from 79 patients (46 males and 33 females, mean age = 10.6 ± 4.1 years) with glomerular disease who underwent renal biopsy and 51 patients (32 males and 19 females, mean age = 6.0 ± 4.8 years) with nonglomerular disease (underwent orthopedic or plastic surgery) at Shizuoka Children's Hospital. The distribution of each glomerular disease is summarized in Table 1.\nAll samples were residual voided urine after a routine test (glomerular disease: median = 30 mL [5–68 mL], nonglomerular disease: median = 23 mL [5–84 mL]), and catheterized urine and bladder washing urine were not included. In the glomerular disease group, voided urine was collected immediately before renal biopsy. The same urine sample was divided into two aliquots and used to prepare two LBC slides. Then, each slide was stained with PDX and WT1 antibodies. LBC slides were prepared using the SurePath (Becton, Dickinson, Franklin Lakes, NJ, USA) modified manual protocols [13].", "LBC slides were fixed in 95% ethanol for 30 min and labeled with primary antibodies against PDX (dilution 1:16,000, clone EPR9518; Abcam, Cambridge, UK) and WT1 (dilution 1:1,000, clone 6F-H2; DakoCytomation, Glostrup, Denmark) using the BOND-MAX automatic immunostaining system (Leica Microsystems, Wetzlar, Germany).", "The total numbers of PDX- and WT1-positive cells on each LBC slide were counted on the entire smear area under a light microscope (×10 and ×40 magnification) by 2 authors (J.S. and T.I.) blinded to the patients' diagnoses. Some patients were too young to provide constant amounts of samples. Therefore, to eliminate the effects of urine volume, the number of podocytes in 10 mL of urine was calculated for each patient.\nFirst, we compared the frequencies and numbers of urinary PDX- and WT1-positive cells between the glomerular disease and nonglomerular disease groups. Second, we compared the numbers of PDX- and WT1-positive cells among 3 groups (i.e., IgA nephropathy, Henoch-Schönlein purpura nephritis [HSPN], and minor glomerular abnormalities [MGAs] groups), which are the top three glomerular diseases in this study.
Third, we examined the correlations of the numbers of positive cells with renal function markers such as serum creatinine, urine protein-to-creatinine ratio, and occult blood in the urine in patients with glomerular disease.", "Serum and urine creatinine contents were measured by the enzymatic method (Cygnus Auto CRE; Shino-Test Corp., Tokyo, Japan) using a BioMajesty JCA-BM9130 automatic biochemical analyzer (JEOL Ltd., Tokyo, Japan). Urine protein levels were measured by the pyrogallol red method (Micro TP-AR 2; Wako Pure Chemical Industries, Osaka, Japan) using a BioMajesty JCA-BM9130 automatic biochemical analyzer (JEOL Ltd.). Occult blood in urine scores of 1+ or more on dipstick testing (Uropaper αIII 'Eiken'; Eiken Chemical, Tokyo, Japan) as read by an automated dipstick reader (US-3500; Eiken Chemical, Tokyo, Japan) was defined as positive.", "The Mann-Whitney U test and Kruskal-Wallis test were used where appropriate. p < 0.05 indicated statistical significance. All analyses were performed using StatFlex software (version 7.0; Artec Inc., Osaka, Japan).", "Comparisons of PDX- and WT1-Positive Cell Counts between the Groups PDX- and WT1-positive cells were strongly positive in the cytoplasm. These positive cells appeared singly; they were 20–40 μm in maximum length and round or oval in shape (shown in Fig. 1). In addition, these positive cells were occasionally encased by a cast.\nThe numbers of PDX- and WT1-positive cells were significantly higher in the glomerular disease group than in the nonglomerular disease group (shown in Fig. 2). In addition, there were no significant differences in PDX- and WT1-positive cell numbers within the nonglomerular disease group. However, in the glomerular disease group, the number of PDX-positive cells was significantly higher than that of WT1-positive cells.\nThe best cutoff for urinary PDX-positive cells to differentiate glomerular disease from nonglomerular disease was 3.5 cells/10 mL (sensitivity = 67.1%, specificity = 100%, positive predictive value = 100%, negative predictive value = 66.2%), and the cutoff for WT1-positive cells was 1.2 cells/10 mL (sensitivity = 43.0%, specificity = 100%, positive predictive value = 100%, negative predictive value = 53.1%). These cutoffs produced areas under the curve of 0.835 and 0.705 (shown in Fig. 3), respectively, and revealed that glomerular disease could be detected with moderate accuracy (Table 2).\nComparison of PDX- and WT1-Positive Cell Counts among the Three Glomerular Disease Groups The numbers of PDX- and WT1-positive cells were significantly higher in the IgA nephropathy and HSPN groups than those in the MGAs group (shown in Fig. 4). On the other hand, there were no significant differences in PDX- and WT1-positive cell numbers between the IgA nephropathy and HSPN groups.\nCorrelations of PDX- and WT1-Positive Cell Counts with Renal Function Markers in Glomerular Disease In the glomerular disease group, the numbers of PDX- and WT1-positive cells were not correlated with patient sex, height, weight, age, serum creatinine levels, urine protein-to-creatinine ratio, and occult blood in urine.", "PDX- and WT1-positive cells were strongly positive in the cytoplasm. These positive cells appeared singly; they were 20–40 μm in maximum length and round or oval in shape (shown in Fig. 1). In addition, these positive cells were occasionally encased by a cast.\nThe numbers of PDX- and WT1-positive cells were significantly higher in the glomerular disease group than in the nonglomerular disease group (shown in Fig. 2). In addition, there were no significant differences in PDX- and WT1-positive cell numbers within the nonglomerular disease group. However, in the glomerular disease group, the number of PDX-positive cells was significantly higher than that of WT1-positive cells.\nThe best cutoff for urinary PDX-positive cells to differentiate glomerular disease from nonglomerular disease was 3.5 cells/10 mL (sensitivity = 67.1%, specificity = 100%, positive predictive value = 100%, negative predictive value = 66.2%), and the cutoff for WT1-positive cells was 1.2 cells/10 mL (sensitivity = 43.0%, specificity = 100%, positive predictive value = 100%, negative predictive value = 53.1%). These cutoffs produced areas under the curve of 0.835 and 0.705 (shown in Fig. 3), respectively, and revealed that glomerular disease could be detected with moderate accuracy (Table 2).", "The numbers of PDX- and WT1-positive cells were significantly higher in the IgA nephropathy and HSPN groups than those in the MGAs group (shown in Fig. 4). On the other hand, there were no significant differences in PDX- and WT1-positive cell numbers between the IgA nephropathy and HSPN groups.", "In the glomerular disease group, the numbers of PDX- and WT1-positive cells were not correlated with patient sex, height, weight, age, serum creatinine levels, urine protein-to-creatinine ratio, and occult blood in urine.", "PDX- and WT1-positive cells exhibited cytoplasmic expression in this study. Researchers accustomed to immunofluorescence staining may raise concerns regarding the WT1 immunoenzyme staining results (cytoplasmic positivity of WT1).
"PDX- and WT1-positive cells exhibited cytoplasmic expression in this study. Researchers accustomed to immunofluorescence staining may raise concerns regarding the WT1 immunoenzyme staining results (cytoplasmic positivity of WT1). Previous studies have reported that WT1 is involved in transcriptional regulation within the nucleus and in RNA metabolism and translational regulation in the cytoplasm [16, 17]. Likewise, Western blotting of nuclear and cytoplasmic fractions using a WT1 antibody revealed that WT1 predominantly resides in the nucleus, although the cytoplasmic fractions were also positive for WT1 [18]. These observations explain why WT1-positive cells displayed cytoplasmic expression in this study.
PDX- and WT1-positive cells of various sizes and shapes were observed in this study. These morphologic features are consistent with previous reports on PDX immunofluorescence staining [10, 11, 19]. One hypothesis explaining the various shapes and sizes of podocytes is miniaturization attributable to apoptosis [8]. Conversely, hypertrophy may be induced to allow the remaining podocytes to cover the GBM in denuded areas following the detachment of other podocytes [20]. Although podocytes are considered terminally differentiated cells, they have been reported to reenter the cell cycle in several glomerular diseases [20, 21, 22]. Podocytes proceed to the mitosis phase under stress or injury; however, as they cannot assemble mitotic spindles, cytokinesis is impossible. Consequently, these podocytes show hypertrophy and become multinucleated [20]. Additionally, it has been demonstrated that both podocytes and active PECs are positive for PDX and WT1 in several glomerular diseases [11, 14, 23, 24]. Therefore, the finding that urinary PDX- and WT1-positive cells had diverse morphologic features may be explained by these cells being derived from both podocytes and active PECs. On the other hand, the PDX- and WT1-positive cells were mainly round, about 20 μm in maximum length, and very few were present on each slide. Hence, it is difficult to detect these cells by cytomorphology alone, and immunocytochemistry is therefore necessary. In this study, PDX- and WT1-positive cells were occasionally encased by a cast. Because casts are formed as molds of the lumina of renal tubules and collecting ducts, the positive cells found inside casts were suggested to have arisen from nephrons [14].
Wang et al. [25] reported that, in a pediatric study, PDX-positive cells were detected in 53.8% (35/65) of patients in the glomerular disease group, and the number of PDX-positive cells was higher in this group than in the healthy control group. Their differences were smaller than our findings, although the data cannot be compared directly because the urine volume, cytologic smear preparation method, and immunocytochemistry method differed. One possible explanation for the increased frequency of PDX-positive cells in our study is the improved cell recovery of the SurePath method [26, 27, 28]. Meanwhile, another study found that PDX immunofluorescence staining of urine sediments was positive in 92.5% (62/65) of children with glomerular disease [10]. However, this discrepancy is likely caused by the inclusion of positive casts and granules, in addition to PDX-positive cells, in that study. The sensitivity and specificity of WT1-positive cells in the present study (sensitivity = 43.0% and specificity = 100%) were similar to those in a previous study of adult patients with glomerular disease (sensitivity = 50.0% and specificity = 100%) [13].
The results of the present study indicated that immunoenzyme staining using the PDX antibody can detect glomerular disease with higher accuracy than staining with the WT1 antibody. Some researchers may prefer PDX immunofluorescence staining, but because immunoenzyme staining is routinely performed in pathologic examinations, we consider it better suited for routine tests [14].
This study targeted 79 cases of glomerular disease, but the number of cases was small for most glomerular disease types. Therefore, we compared the numbers of PDX- and WT1-positive cells among the 3 groups (IgA nephropathy, HSPN, and MGAs groups), which had relatively large numbers of cases. As a result, the numbers of PDX- and WT1-positive cells were significantly higher in the IgA nephropathy and HSPN groups than in the MGAs group. On the other hand, there were no significant differences in these positive cell numbers between the IgA nephropathy and HSPN groups. Our results were similar to those of previous studies on the detection of PDX-positive cells using immunofluorescence staining [7, 29, 30, 31].
The lack of correlations of PDX- and WT1-positive cell counts with renal function markers likely has several explanations. The number of urinary PDX-positive cells was significantly higher in patients with focal segmental glomerulosclerosis than in those with membranous nephropathy or minimal change nephrotic syndrome in prior research [29, 30]. The number of urinary PDX-positive cells was also significantly higher in patients with active IgA nephropathy or HSPN than in those with inactive disease [7, 31]. Furthermore, in minimal change nephrotic syndrome, it is known that a large amount of protein appears in the urine even though PDX- and WT1-positive cells are rarely observed [13, 29, 30]. In IgA nephropathy, hematuria is usually present, although proteinuria is rare in the early stages. As mentioned previously, urinary PDX-positive cells and some renal function markers reflect both glomerular disease type and activity, but the current study population included patients with various disease types and activities. These findings potentially explain the lack of correlations of PDX- and WT1-positive cell counts with serum creatinine, urine protein-to-creatinine ratio, and occult blood in the urine in the present study. From this result, we note that our method is a noninvasive biomarker independent of other tests, and glomerular disease can be detected with high accuracy using our cutoffs for PDX-positive cells.
In conclusion, because our urinary podocyte detection method can be standardized and detected glomerular disease with high accuracy, we believe it could emerge as a noninvasive routine laboratory test for various glomerular diseases. Future work should focus on the relationship between the activity of individual glomerular diseases and PDX-positive cell counts.", "This study was approved by the Ethics Committee of Shizuoka Children's Hospital (no. 2020-28) and Kobe University Graduate School of Health Sciences (no. 977) and conducted in accordance with the principles of the Declaration of Helsinki. Informed consent was obtained from each patient's parent or legal guardian.", "The authors have no conflicts of interest to declare.", "This work was financially supported by the Shizuoka General Hospital Encouragement Research Grant and the Japanese Society of Clinical Cytology Grant.", "Methodology: Ohsaki H.
Formal analysis: Sakane J., Inoue T., Nakamura A., and Ohsaki H; resources: Kitayama H., Yamada M., Miyama Y., Kawamura H., Iwafuchi H., and Ohsaki H; writing − original draft: Ohsaki H; writing − review and editing: Kitayama H., Nakamura A., Kamoshida S., and Ohsaki H; and supervision: Ohsaki H.", "All data generated or analyzed during this study are included in this article. Further inquiries can be directed to the corresponding author." ]
[ "intro", "materials|methods", null, null, null, null, null, "results", null, null, null, "discussion", null, "COI-statement", null, null, "data-availability" ]
[ "Podocyte", "Liquid-based cytology", "Immunocytochemistry", "Urine cytology", "Glomerular disease" ]
Introduction: Podocytes are highly specialized epithelial cells that play an important role as filters that prevent the leakage of high-molecular-weight proteins, glomerular basement membrane (GBM) components, and endothelial cells [1, 2]. Podocytes are injured by various insults (genetic, mechanical, immunologic, and toxic). Upon podocyte injury, localized changes occur in the slit membrane, followed by foot process effacement [3]. This phenomenon causes a decrease in glomerular filtration function and clinically appears as proteinuria. Other podocyte injury mechanisms include hypertrophy, detachment, apoptosis, and epithelial-to-mesenchymal transition [4, 5, 6, 7]. In these cases, podocytes frequently detach from the GBM. Podocytes are terminally differentiated cells that are generally unable to replicate in response to injury [8]. Thus, to correct local podocyte loss, parietal epithelial cells (PECs) are activated and attached to the naked GBM. When podocyte loss is excessive, PECs are further activated and induced to proliferate, eventually causing glomerulosclerosis. Additionally, severe glomerular injury associated with defects of the capillary wall and GBM leads to the proliferation of PECs and crescent formation. In this situation, podocytes are severely damaged, leading to their detachment from the GBM [2, 9]. Because highly damaged podocytes detach from the GBM and accumulate in urine, several studies revealed that the detection of podocytes in the urine is a valuable noninvasive biomarker for disease activity and therapeutic efficacy in various glomerular diseases [2, 10, 11]. These previous studies relied on the combination of conventional sample preparation methods (direct smears and cytospins) and podocalyxin (PDX) immunofluorescence staining to detect urinary podocytes. However, conventional sample preparation methods have problems with standardization, including air-drying artifacts and significant cell loss during the fixation process [12]. Moreover, because immunofluorescence staining requires a fluorescence microscope, a darkroom, and dedicated personnel, it is difficult to perform this method as a routine laboratory test in small- to medium-sized hospitals, and this staining method cannot create permanent specimens. Therefore, we have developed a new urinary podocyte detection method as a routine laboratory test [13]. For urine cytologic specimen preparation, we adopted the SurePath method, a major liquid-based cytology (LBC) method. The advantages of the SurePath method include its higher cell recovery rate, better cell preservation, and standardized technique [13]. Additionally, this method can be used both automatically and manually. In particular, the manual procedure can be used even in small- to medium-sized hospitals and developing countries, provided a slide rack and centrifuge are available. Meanwhile, we adopted the immunoenzyme method for immunostaining. Consequently, we developed a new method for detecting urinary podocytes that combines the SurePath method and immunoenzyme staining using the Wilms tumor 1 (WT1) antibody (podocyte marker) [13]. This method proved effective for detecting glomerular disease, in line with previous reports using PDX immunofluorescence staining [10, 11], and it was also possible to detect crescent formation [14]. Although PDX is considered the best marker for detecting podocytes, in our experiments, the PDX antibody used in previous studies did not work with immunoenzyme staining in urine cytology [10, 11].
Therefore, in our previous studies we chose WT1, which is commonly employed in routine pathologic examination, to detect urinary podocytes by immunoenzyme staining. However, because WT1 stains the cytoplasm in immunoenzyme staining of alcohol-fixed cytologic specimens [13, 14, 15], researchers accustomed to immunofluorescence staining of nuclear expression may feel uncomfortable with this technique. Thus, we searched for a PDX antibody that works in immunoenzyme staining in urine cytology. After identifying a commercial PDX antibody, we investigated its effectiveness. This study assessed whether our urinary podocyte detection method using PDX immunoenzyme staining can represent a noninvasive routine laboratory test for glomerular disease. Therefore, we investigated the frequencies and numbers of PDX- and WT1-positive cells in patients with and without glomerular disease. Meanwhile, we examined the correlation between the number of positive cells and renal function markers. This is the first report investigating glomerular disease using a urinary podocyte detection method combining SurePath and PDX immunoenzyme staining.
Materials and Methods: Patients, Urine Samples, and LBC (SurePath) Slides: This study used data from 79 patients (46 males and 33 females, mean age = 10.6 ± 4.1 years) with glomerular disease who underwent renal biopsy and 51 patients (32 males and 19 females, mean age = 6.0 ± 4.8 years) with nonglomerular disease (who underwent orthopedic or plastic surgery) at Shizuoka Children's Hospital. The distribution of each glomerular disease is summarized in Table 1. All samples were residual voided urine after a routine test (glomerular disease: median = 30 mL [5–68 mL], nonglomerular disease: median = 23 mL [5–84 mL]); catheterized urine and bladder washing urine were not included. In the glomerular disease group, voided urine was collected immediately before renal biopsy. The same urine sample was divided into two aliquots and used to prepare two LBC slides. Each slide was then stained with the PDX or WT1 antibody. LBC slides were prepared using the SurePath (Becton, Dickinson, Franklin Lakes, NJ, USA) modified manual protocols [13].
Immunoenzyme Staining: LBC slides were fixed in 95% ethanol for 30 min and labeled with primary antibodies against PDX (dilution 1:16,000, clone EPR9518; Abcam, Cambridge, UK) and WT1 (dilution 1:1,000, clone 6F-H2; DakoCytomation, Glostrup, Denmark) using the BOND-MAX automatic immunostaining system (Leica Microsystems, Wetzlar, Germany).
Counting of PDX- and WT1-Positive Urinary Cells: The total numbers of PDX- and WT1-positive cells on each LBC slide were counted over the entire smear area under a light microscope (×10 and ×40 magnification) by 2 authors (J.S. and T.I.) blinded to the patients' diagnoses. Some patients were too young to provide constant sample volumes. Therefore, to eliminate the effect of urine volume, the number of podocytes in 10 mL of urine was calculated for each patient. First, we compared the frequencies and numbers of urinary PDX- and WT1-positive cells between the glomerular disease and nonglomerular disease groups. Second, we compared the numbers of PDX- and WT1-positive cells among 3 groups (i.e., IgA nephropathy, Henoch-Schönlein purpura nephritis [HSPN], and minor glomerular abnormalities [MGAs] groups), which are the top three glomerular diseases in this study. Third, we examined the correlations of the numbers of positive cells with renal function markers such as serum creatinine, urine protein-to-creatinine ratio, and occult blood in the urine in patients with glomerular disease.
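A small sketch of the volume normalization just described, under the assumption that the whole-slide count represents the full urine aliquot (the paper does not spell out the exact conversion); the example values are hypothetical.

```python
# Sketch of per-volume normalization: a whole-slide count is rescaled to
# cells per 10 mL of urine, since sample volumes varied between children.
# Assumes the counted slide represents the entire aliquot; values are
# hypothetical examples, not study data.

def cells_per_10ml(total_positive_cells: int, urine_volume_ml: float) -> float:
    """Rescale a whole-slide positive-cell count to a standard 10 mL of urine."""
    if urine_volume_ml <= 0:
        raise ValueError("urine volume must be positive")
    return total_positive_cells * 10.0 / urine_volume_ml

# e.g. 12 PDX-positive cells found in a 30 mL sample:
print(cells_per_10ml(12, 30.0))  # -> 4.0 cells/10 mL
```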
Laboratory Test Data: Serum and urine creatinine concentrations were measured by the enzymatic method (Cygnus Auto CRE; Shino-Test Corp., Tokyo, Japan) using a BioMajesty JCA-BM9130 automatic biochemical analyzer (JEOL Ltd., Tokyo, Japan). Urine protein levels were measured by the pyrogallol red method (Micro TP-AR 2; Wako Pure Chemical Industries, Osaka, Japan) using the same analyzer (JEOL Ltd.). Occult blood in urine was defined as positive at a score of 1+ or more on dipstick testing (Uropaper αIII Eiken; Eiken Chemical, Tokyo, Japan) as read by an automated dipstick reader (US-3500; Eiken Chemical, Tokyo, Japan).
Statistical Analysis: The Mann-Whitney U test and Kruskal-Wallis test were used where appropriate. p < 0.05 indicated statistical significance. All analyses were performed using StatFlex software (version 7.0; Artec Inc., Osaka, Japan).
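As a hedged illustration of the nonparametric tests named above, the following SciPy sketch runs a Mann-Whitney U test on two groups and a Kruskal-Wallis test on three. The per-10 mL counts are invented placeholders, and the original analyses were done in StatFlex, not Python.

```python
# Illustrative sketch of the reported comparisons using SciPy, on
# hypothetical per-10 mL counts (NOT the study data). Mann-Whitney U
# compares two independent groups; Kruskal-Wallis compares three or more.
from scipy.stats import mannwhitneyu, kruskal

glomerular    = [4, 7, 9, 12, 20, 31]   # hypothetical PDX counts/10 mL
nonglomerular = [0, 0, 1, 2, 3, 3]

u, p = mannwhitneyu(glomerular, nonglomerular, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.4f}")   # significant if p < 0.05

iga, hspn, mga = [9, 12, 20], [7, 11, 18], [1, 2, 4]  # hypothetical groups
h, p = kruskal(iga, hspn, mga)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```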
Results: Comparisons of PDX- and WT1-Positive Cell Counts between the Groups: PDX- and WT1-positive cells were strongly positive in the cytoplasm. These positive cells appeared singly; they were 20–40 μm in maximum length and round or oval in shape (shown in Fig. 1). In addition, these positive cells were occasionally encased by a cast. The numbers of PDX- and WT1-positive cells were significantly higher in the glomerular disease group than in the nonglomerular disease group (shown in Fig. 2). There were no significant differences in PDX- and WT1-positive cell numbers within the nonglomerular disease group. However, in the glomerular disease group, the number of PDX-positive cells was significantly higher than that of WT1-positive cells. The best cutoff for urinary PDX-positive cells to differentiate glomerular disease from nonglomerular disease was 3.5 cells/10 mL (sensitivity = 67.1%, specificity = 100%, positive predictive value = 100%, negative predictive value = 66.2%), and the cutoff for WT1-positive cells was 1.2 cells/10 mL (sensitivity = 43.0%, specificity = 100%, positive predictive value = 100%, negative predictive value = 53.1%). These cutoffs produced areas under the curve (AUCs) of 0.835 and 0.705, respectively (shown in Fig. 3), indicating that glomerular disease could be detected with moderate accuracy (Table 2).
Comparison of PDX- and WT1-Positive Cell Counts among the Three Glomerular Disease Groups: The numbers of PDX- and WT1-positive cells were significantly higher in the IgA nephropathy and HSPN groups than in the MGAs group (shown in Fig. 4). On the other hand, there were no significant differences in PDX- and WT1-positive cell numbers between the IgA nephropathy and HSPN groups.
Correlations of PDX- and WT1-Positive Cell Counts with Renal Function Markers in Glomerular Disease: In the glomerular disease group, the numbers of PDX- and WT1-positive cells were not correlated with patient sex, height, weight, age, serum creatinine levels, urine protein-to-creatinine ratio, or occult blood in urine.
Discussion: PDX- and WT1-positive cells exhibited cytoplasmic expression in this study. Researchers accustomed to immunofluorescence staining may raise concerns regarding the WT1 immunoenzyme staining results (cytoplasmic positivity of WT1). Previous studies have reported that WT1 is involved in transcriptional regulation within the nucleus and in RNA metabolism and translational regulation in the cytoplasm [16, 17].
Likewise, Western blotting of nuclear and cytoplasmic fractions using a WT1 antibody revealed that WT1 predominantly resides in the nucleus, although the cytoplasmic fractions were also positive for WT1 [18]. These observations explain why WT1-positive cells displayed cytoplasmic expression in this study.
PDX- and WT1-positive cells of various sizes and shapes were observed in this study. These morphologic features are consistent with previous reports on PDX immunofluorescence staining [10, 11, 19]. One hypothesis explaining the various shapes and sizes of podocytes is miniaturization attributable to apoptosis [8]. Conversely, hypertrophy may be induced to allow the remaining podocytes to cover the GBM in denuded areas following the detachment of other podocytes [20]. Although podocytes are considered terminally differentiated cells, they have been reported to reenter the cell cycle in several glomerular diseases [20, 21, 22]. Podocytes proceed to the mitosis phase under stress or injury; however, as they cannot assemble mitotic spindles, cytokinesis is impossible. Consequently, these podocytes show hypertrophy and become multinucleated [20]. Additionally, it has been demonstrated that both podocytes and active PECs are positive for PDX and WT1 in several glomerular diseases [11, 14, 23, 24]. Therefore, the finding that urinary PDX- and WT1-positive cells had diverse morphologic features may be explained by these cells being derived from both podocytes and active PECs. On the other hand, the PDX- and WT1-positive cells were mainly round, about 20 μm in maximum length, and very few were present on each slide. Hence, it is difficult to detect these cells by cytomorphology alone, and immunocytochemistry is therefore necessary. In this study, PDX- and WT1-positive cells were occasionally encased by a cast. Because casts are formed as molds of the lumina of renal tubules and collecting ducts, the positive cells found inside casts were suggested to have arisen from nephrons [14].
Wang et al. [25] reported that, in a pediatric study, PDX-positive cells were detected in 53.8% (35/65) of patients in the glomerular disease group, and the number of PDX-positive cells was higher in this group than in the healthy control group. Their differences were smaller than our findings, although the data cannot be compared directly because the urine volume, cytologic smear preparation method, and immunocytochemistry method differed. One possible explanation for the increased frequency of PDX-positive cells in our study is the improved cell recovery of the SurePath method [26, 27, 28]. Meanwhile, another study found that PDX immunofluorescence staining of urine sediments was positive in 92.5% (62/65) of children with glomerular disease [10]. However, this discrepancy is likely caused by the inclusion of positive casts and granules, in addition to PDX-positive cells, in that study. The sensitivity and specificity of WT1-positive cells in the present study (sensitivity = 43.0% and specificity = 100%) were similar to those in a previous study of adult patients with glomerular disease (sensitivity = 50.0% and specificity = 100%) [13]. The results of the present study indicated that immunoenzyme staining using the PDX antibody can detect glomerular disease with higher accuracy than staining with the WT1 antibody.
Some researchers may prefer PDX immunofluorescence staining, but because immunoenzyme staining is routinely performed in pathologic examinations, we consider it better suited for routine tests [14].
This study targeted 79 cases of glomerular disease, but the number of cases was small for most glomerular disease types. Therefore, we compared the numbers of PDX- and WT1-positive cells among the 3 groups (IgA nephropathy, HSPN, and MGAs groups), which had relatively large numbers of cases. As a result, the numbers of PDX- and WT1-positive cells were significantly higher in the IgA nephropathy and HSPN groups than in the MGAs group. On the other hand, there were no significant differences in these positive cell numbers between the IgA nephropathy and HSPN groups. Our results were similar to those of previous studies on the detection of PDX-positive cells using immunofluorescence staining [7, 29, 30, 31].
The lack of correlations of PDX- and WT1-positive cell counts with renal function markers likely has several explanations. The number of urinary PDX-positive cells was significantly higher in patients with focal segmental glomerulosclerosis than in those with membranous nephropathy or minimal change nephrotic syndrome in prior research [29, 30]. The number of urinary PDX-positive cells was also significantly higher in patients with active IgA nephropathy or HSPN than in those with inactive disease [7, 31]. Furthermore, in minimal change nephrotic syndrome, it is known that a large amount of protein appears in the urine even though PDX- and WT1-positive cells are rarely observed [13, 29, 30]. In IgA nephropathy, hematuria is usually present, although proteinuria is rare in the early stages. As mentioned previously, urinary PDX-positive cells and some renal function markers reflect both glomerular disease type and activity, but the current study population included patients with various disease types and activities. These findings potentially explain the lack of correlations of PDX- and WT1-positive cell counts with serum creatinine, urine protein-to-creatinine ratio, and occult blood in the urine in the present study. From this result, we note that our method is a noninvasive biomarker independent of other tests, and glomerular disease can be detected with high accuracy using our cutoffs for PDX-positive cells.
In conclusion, because our urinary podocyte detection method can be standardized and detected glomerular disease with high accuracy, we believe it could emerge as a noninvasive routine laboratory test for various glomerular diseases. Future work should focus on the relationship between the activity of individual glomerular diseases and PDX-positive cell counts.
Statement of Ethics: This study was approved by the Ethics Committee of Shizuoka Children's Hospital (no. 2020-28) and Kobe University Graduate School of Health Sciences (no. 977) and conducted in accordance with the principles of the Declaration of Helsinki. Informed consent was obtained from each patient's parent or legal guardian.
Conflict of Interest Statement: The authors have no conflicts of interest to declare.
Funding Sources: This work was financially supported by the Shizuoka General Hospital Encouragement Research Grant and the Japanese Society of Clinical Cytology Grant.
Author Contributions: Methodology: Ohsaki H.
Formal analysis: Sakane J., Inoue T., Nakamura A., and Ohsaki H; resources: Kitayama H., Yamada M., Miyama Y., Kawamura H., Iwafuchi H., and Ohsaki H; writing − original draft: Ohsaki H; writing − review and editing: Kitayama H., Nakamura A., Kamoshida S., and Ohsaki H; and supervision: Ohsaki H. Data Availability Statement: All data generated or analyzed during this study are included in this article. Further inquiries can be directed to the corresponding author.
Background: This study investigated whether our urinary podocyte detection method using podocalyxin (PDX) and Wilms tumor 1 (WT1) immunoenzyme staining combined with liquid-based cytology can serve as a noninvasive routine laboratory test for glomerular disease. Methods: The presence of PDX- and WT1-positive cells was investigated in 79 patients with glomerular disease and 51 patients with nonglomerular disease. Results: The frequencies and numbers of PDX- and WT1-positive cells were significantly higher in the glomerular disease group than in the nonglomerular disease group. The best cutoffs for PDX- and WT1-positive cell counts for identifying patients with glomerular disease were 3.5 (sensitivity = 67.1% and specificity = 100%) and 1.2 cells/10 mL (sensitivity = 43.0% and specificity = 100%), respectively. Conclusions: Because our urinary podocyte detection method using PDX immunoenzyme staining can be standardized and it detected glomerular disease with high accuracy, it can likely serve as a noninvasive routine laboratory test for various glomerular diseases.
null
null
5,391
191
[ 193, 65, 202, 130, 42, 255, 59, 46, 59, 22, 73 ]
17
[ "positive", "pdx", "cells", "wt1", "disease", "glomerular", "positive cells", "wt1 positive", "urine", "pdx wt1" ]
[ "podocytes active pecs", "endothelial cells podocytes", "cells glomerular", "proteinuria podocyte", "podocyte injury mechanisms" ]
null
null
null
[CONTENT] Podocyte | Liquid-based cytology | Immunocytochemistry | Urine cytology | Glomerular disease [SUMMARY]
null
[CONTENT] Podocyte | Liquid-based cytology | Immunocytochemistry | Urine cytology | Glomerular disease [SUMMARY]
null
[CONTENT] Podocyte | Liquid-based cytology | Immunocytochemistry | Urine cytology | Glomerular disease [SUMMARY]
null
[CONTENT] Cytodiagnosis | Humans | Kidney Diseases | Podocytes | Staining and Labeling [SUMMARY]
null
[CONTENT] Cytodiagnosis | Humans | Kidney Diseases | Podocytes | Staining and Labeling [SUMMARY]
null
[CONTENT] Cytodiagnosis | Humans | Kidney Diseases | Podocytes | Staining and Labeling [SUMMARY]
null
[CONTENT] podocytes active pecs | endothelial cells podocytes | cells glomerular | proteinuria podocyte | podocyte injury mechanisms [SUMMARY]
null
[CONTENT] podocytes active pecs | endothelial cells podocytes | cells glomerular | proteinuria podocyte | podocyte injury mechanisms [SUMMARY]
null
[CONTENT] podocytes active pecs | endothelial cells podocytes | cells glomerular | proteinuria podocyte | podocyte injury mechanisms [SUMMARY]
null
[CONTENT] positive | pdx | cells | wt1 | disease | glomerular | positive cells | wt1 positive | urine | pdx wt1 [SUMMARY]
null
[CONTENT] positive | pdx | cells | wt1 | disease | glomerular | positive cells | wt1 positive | urine | pdx wt1 [SUMMARY]
null
[CONTENT] positive | pdx | cells | wt1 | disease | glomerular | positive cells | wt1 positive | urine | pdx wt1 [SUMMARY]
null
[CONTENT] staining | method | podocyte | podocytes | immunoenzyme | immunoenzyme staining | gbm | glomerular | pdx | urinary [SUMMARY]
null
[CONTENT] positive | cells | positive cells | wt1 positive | disease | pdx | wt1 | pdx wt1 positive | pdx wt1 | predictive value [SUMMARY]
null
[CONTENT] positive | cells | disease | pdx | positive cells | wt1 | glomerular | urine | wt1 positive | pdx wt1 [SUMMARY]
null
[CONTENT] 1 [SUMMARY]
null
[CONTENT] PDX- ||| PDX- | 3.5 | 67.1% | 100% | 1.2 | 43.0% | 100% [SUMMARY]
null
[CONTENT] ||| 1 ||| PDX- | 79 | 51 ||| PDX- ||| PDX- | 3.5 | 67.1% | 100% | 1.2 | 43.0% | 100% ||| PDX [SUMMARY]
null
[The surgeon's balancing act-Teaching in the clinical routine].
34297149
Surgery is facing an unprecedented shortage of junior staff; thus, medical students must be inspired to undertake this specialty. Students complain that teaching is subordinate to patient care and limited by a lack of time and medical personnel. Although there are many studies assessing student perceptions, few focus on the issues that teachers face.
BACKGROUND
In this prospective study, a guide for semistructured interviews with open, fully formulated questions was created; each question was supplemented with further specifying questions. All interviews were conducted using this guide and were recorded. The number of interviews was determined by the principle of content saturation.
MATERIAL AND METHODS
All 22 participants perceived teaching in clinical practice to be of paramount importance. Nevertheless, respondents described that learning goals in the clinical routine are not always achieved. The main reason is a lack of time; however, as clinical experience increases, other factors become similarly important: consultants and heads of departments complain about deficiencies in students' prior knowledge as well as insufficient motivation. Most respondents described that they do not feel appreciated for teaching. Overall, student teaching was perceived as an additional burden, but all respondents found the task to be extremely worthwhile.
RESULTS
In addition to the lack of personnel, a lack of appreciation is the most significant obstacle to effective teaching. It is therefore important to increase the value of teaching by rewarding good teaching performance and by creating effective transparency.
CONCLUSION
[ "Attitude of Health Personnel", "Humans", "Motivation", "Prospective Studies", "Students, Medical", "Surgeons", "Teaching" ]
8894151
null
null
null
null
null
null
Conclusion for clinical practice
Although surgeons find teaching absolutely worthwhile, they regard it as an additional burden. Alongside the lack of personnel and time, the lack of recognition for teaching in everyday clinical practice is the most important obstacle to effective teaching. Teachers on the ward need better support, with the creation of protected time slots that are reserved for teaching and do not have to be compensated by overtime. The value of teaching should be raised through greater appreciation, or through hospitals rewarding good teaching performance.
[ "Hintergrund", "Material und Methoden", "Fragebogenerstellung", "Durchführung der Interviews", "Datenanalyse und Auswertung", "Ergebnisse", "Diskussion", "Schlussfolgerung" ]
[ "Der Nachwuchsmangel stellt die Chirurgie vor bisher nicht gekannte Probleme [20]: Im Jahr 2030 werden 7300 Chirurgen und Orthopäden im stationären Sektor fehlen [11]. Diese Entwicklung ist schon im Medizinstudium sichtbar: Sowohl bundesweite [16] Umfragen als auch Befragungen an einzelnen Fakultäten [21] zeigten, dass die Anzahl derjenigen Studierenden, die eine Weiterbildung in einem chirurgischen Fach anstreben, während des Studiums deutlich sinkt. Damit ist die Chirurgie nach der Labormedizin das Fach, das den größten Interessensverlust während des Studiums erleidet [16]. Insgesamt entschieden sich gemäß des bayerischen Absolventenpanels ein Jahr nach Studienabschluss weniger als 10 % der Absolventen für eine Weiterbildung in den chirurgischen Fächern [5].\nVor diesem Hintergrund muss die Chirurgie vermehrt für ihr Fach werben und die Studierenden für die Weiterbildung zum Chirurgen begeistern [20]. Die beste Möglichkeit bieten Blockpraktika und Famulaturen [19], da hier das Interesse nicht nur für eine PJ(Praktisches-Jahr)-Rotation in ein chirurgisches Fach, sondern auch für eine chirurgische Weiterbildung signifikant gesteigert werden kann [8, 13].\nInsgesamt bewerten die Studierenden die Lehre am Krankenbett als dringend notwendig zum Erlernen praktischer Fertigkeiten [17–19, 26]. Jedoch bemängeln sie, dass ihr eigener Unterricht der Patientenversorgung nachgeordnetund damit durch die Faktoren Zeit und ärztliches Personal deutlich und zunehmend limitiert ist [19, 22, 26]. Dies konnte in verschiedenen Arbeiten bestätigt werden, die einen Rückgang der Lehre am Krankenbett beschreiben [1, 17, 18].\nHierfür gibt es multifaktorielle Gründe, insbesondere strukturelle Änderungen, die zu einer zunehmenden Arbeitsverdichtung führen wie die Verkürzung der Liegedauer mit höherem Patientenaufkommen [17] und zunehmendem Dokumentationsaufwand ohne zusätzlich geschaffene Zeiträume [1, 18]. Darüber hinaus werden vor allem die gestiegenen Anforderungen auf Seiten der Lernenden [1] und fehlendes Feedback zur Qualität der Lehre [1] sowie insbesondere die mangelnde Wertschätzung für das Engagement in der Lehre [1, 14] als wichtige Gründe diskutiert. Dies führt dazu, dass Lehre im klinischen Alltag zu selten erfolgt und gute Gelegenheiten hierfür verpasst werden [26].\nDie Studierenden sind davon überzeugt, dass jeder Arzt am Krankenbett unterrichten kann [26]. Kasch et al. konnten zeigen, dass fast die Hälfte der befragten Studierenden jedoch unzufrieden mit der didaktischen Qualität der Lehre in ihrer chirurgischen Famulatur war und nur 35,3 % das stattgehabte „bedside teaching“ als gut empfanden [7].\nEs existieren bisher nur wenige Arbeiten mit dem Fokus auf die Lehrenden in den Universitätskliniken. Unseres Wissens gibt es bisher sogar nur eine Arbeit, die sich mit der Lehre in den peripheren Lehrkrankenhäusern beschäftigt [15], obwohl hier der Großteil der Studierenden am Krankenbett unterrichtet wird. Ziel der vorliegenden Arbeit ist es daher, die Lehre an den Lehrkrankenhäusern und den Universitätskliniken im Fach Chirurgie im Stationsalltag aus Sicht der Lehrenden genauer zu beleuchten und Ursachen möglicher Probleme und Hemmnisse aufzuzeigen.", "Das Studiendesign war prospektiv. Gemäß Aussage der Ethikkommission der Universitätsklinik Frankfurt war für die vorliegende Studie kein Ethikvotum notwendig, da es sich nicht um ein biomedizinisches Forschungsvorhaben im engeren Sinne der Deklaration von Helsinki handelt. 
Participation in the study was voluntary, with written informed consent, and could be ended at any time without giving reasons.", "Based on the research questions of the study, a guide for semistructured interviews was created. The guide served to narrow down the interview topics and to set out thematic blocks in order to ensure the comparability of the results across the different interviews. The guide was tested in a pilot interview with surgeons who had not been involved in its creation and was refined again on that basis. After revision, the guide consisted of 15 open, fully formulated questions, each supplemented with further specifying questions. The guide thus ensured the comparability of the individual interviews.", "Participants were recruited by directly approaching teachers at the University Hospital Frankfurt and at the academic teaching hospitals. Participation was voluntary, after detailed information about the study and the associated audio recording, and after consent to anonymized analysis and later publication of the study results. Participants received no expense allowance. All interviews took place in a low-distraction environment in which only the interviewed teacher and the interviewer were present. The interviews were recorded with an audio recorder. Each interview was opened with a standardized initial question. Over the course of the interview, the order of the questions was varied depending on how the conversation developed. The interviews ended when all questions had been answered to the teacher's satisfaction. The number of interviews conducted followed the principle of content saturation.", "All interviews were transcribed verbatim using the program f5 (dr. dresing & pehl GmbH, Marburg, Germany) and anonymized for further analysis. The analysis was carried out with the program MAXQDA (VERBI Software, Consult Sozialforschung GmbH, Berlin, Germany) using structuring content analysis [12]. First, main categories and initial subcategories were defined on the basis of the interview guide, against which all interviews were coded independently by two of the authors. Divergent codings were then discussed and the category definitions refined. All interviews were evaluated on the basis of these definitions."
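The two-coder procedure just described invites a quantitative agreement check. The authors resolved divergent codings by discussion and do not report an agreement statistic; a common option would be Cohen's κ, sketched below on hypothetical category codes (the category names, data, and function are ours, not the study's).

```python
# Hypothetical sketch: Cohen's kappa for two coders assigning categories to
# interview segments. The paper resolves disagreements by discussion and does
# not report kappa; this only illustrates how such a check could look.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # observed agreement: fraction of segments coded identically
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # chance agreement: probability both coders pick the same category at random
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["time", "time", "motivation", "recognition", "time", "knowledge"]
b = ["time", "motivation", "motivation", "recognition", "time", "time"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # -> 0.50 for this toy example
```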
"In total, 22 interviews were conducted. The sociodemographic data of the participants are shown in Table 1.

Table 1. Sociodemographic data of the participants (total / women / men):
  Participants: 22 / 6 / 16
  Surgical specialty: general surgery 8 / 2 / 6; vascular surgery 2 / 0 / 2; cardiac and thoracic surgery 3 / 0 / 3; oral, maxillofacial, and facial plastic surgery 4 / 0 / 4; trauma surgery 5 / 4 / 1
  Stage of training: head of department 3 / 0 / 3; senior physician 7 / 1 / 6; specialist 1 / 0 / 1; resident 11 / 5 / 6
  Workplace: university hospital 14 / 5 / 9; academic teaching hospital 8 / 1 / 7

All physicians surveyed in the present study (22/22) attach great importance to teaching in the clinical routine. All participants are aware of the importance of training the next generation for the future. The importance the respondents attach to teaching in the clinical routine is independent of their stage of training.
"Well, I think it's super important, because you are training the doctors of tomorrow, and if they don't pick it up during their studies, when are they supposed to learn it?"
"I take it fairly seriously, because I am of the opinion that future generations of patients above all benefit when we, as young physicians, teach things to those even younger than us, or to students."
Learning objectives play a major role in teaching. Nevertheless, the respondents describe that the learning objectives they define for students in the clinical routine are not always achieved. Various reasons are given for this (see also Table 2). Independently of both their own stage of training and the type of workplace, lack of time is cited as the main obstacle to teaching in the clinical routine:
"Time. We are scheduled so tightly that we aren't bored even without students, and if we are then also given the task of teaching the students properly, it comes out of my free time."

Table 2. Obstacles to teaching (number of mentions):
  Lack of time and personnel: 19 (86.4%)
  Insufficient motivation of the students: 10 (45.4%)
  Inappropriate expectations of the students: 3 (13.6%)
  Deficient prior knowledge of the students: 2 (9.1%)

With increasing clinical experience of the teachers, however, further factors come into play: in particular, the senior physicians and heads of department we surveyed complain about sometimes glaring deficiencies in the students' prior knowledge, which make effective teaching in the clinical routine impossible. They complain that without sound theoretical knowledge, which students must have acquired before their ward placement, for example in lectures and seminars, no effective teaching on the ward is possible.
"But it already starts with the fact that the sound theoretical knowledge has to be there in the first place..."
"Because they arrive with sometimes extremely varied prior knowledge and, to put it a little provocatively, the prior knowledge is rather limited [...] If you don't know the traffic rules, you can't drive a car. If you have no idea about surgery, you are here [...] an interested layperson."
Beyond this, the surgeons we surveyed see a further obstacle to effective teaching in the clinical routine in what they consider the sometimes insufficient motivation of some students. In their view, this is fostered in particular by the mandatory surgery rotation (Pflichttertial), since not only students with a genuine interest in the specialty have to be taught, but also those who already know that they do not want to work in surgery.
"And there are always problems now and then when there are students among them who are completely uninterested."
"There are people who turn up disinterested, where you get no pleasure out of teaching them anything, [...] I'm glad when they leave again after a while."
The senior physicians and heads of department surveyed also report that deficient prior knowledge and insufficient motivation are further aggravated by students often starting their clinical placements with expectations that do not match their stage of training. These heightened expectations lead to frustration on both sides.
"That there is no longer much willingness to be taught things, because it doesn't match their expectations. Along the lines of: 'What, dressing changes? I want to become a doctor; a nurse does that.'"
"Today's students demand much more. And, in quotation marks, they don't even know what a doctor faces every day."
In addition, the senior physicians and heads of department also describe constraints arising from the organizational framework: they find it particularly limiting that, on the wards, it is often precisely those who are themselves still inexperienced who are responsible for supervising the students.
Hierdurch käme es zu der Situation, dass die jüngsten ärztlichen Kollegen, während sie selbst noch ein Gefühl der Überforderung erleben und für die Ausführung ihrer Aufgaben länger benötigen als die erfahreneren Kollegen, zusätzlich zu ihren Aufgaben in der Patientenversorgung auf der Station gleichzeitig noch für die Betreuung der Studierenden verantwortlich seien. Ihnen fehle nicht nur die Zeit, sondern auch das nötige Wissen, um die Studierenden unterrichten zu können.\n... mit einem jungen Assistenzarzt, der selber noch nicht weiß, wo die Herrentoilette ist, sagen wir mal, ist das halt ein Problem.\n… dass man den Jüngsten die Verantwortung für die Studierenden überträgt und das als selbstverständlich erachtet, dass genau die das auch können sollen. [...] Die wissen mit sich selbst nichts anzufangen. Die brauchen selbst noch Hilfe bei der Stationsführung und da kann man nicht noch sagen, den bildest du jetzt noch mit aus.\nNur zwei der von uns befragten Dozenten (9,1 %) geben an, regelmäßig und strukturiert Feedback für ihre Lehrleistung zu erhalten. Dennoch beschreiben die in der vorliegenden Studie befragten Dozenten, dass das häufig positive Feedback der Studierenden für sie die wichtigste Form der Anerkennung für ihre Bemühungen um die studentische Lehre sei. Sie fühlten sich hierdurch in ihren Bemühungen bestärkt und seien daher auch gerne bereit, Zeit in die Lehre zu investieren, auch wenn sie dafür Überstunden machen müssen.\nIch krieg schon Anerkennungen, insofern, dass ich glaube, dass Studenten die gefördert werden, auch sehr dankbar sind.\nJa, ein Benefit ist natürlich darin zu sehen, dass man Leute für den eigenen Beruf interessiert.\nOb die Dozenten darüber hinaus eine weitere Anerkennung von Seiten ihrer Vorgesetzten erhalten, wird von ihnen sehr unterschiedlich wahrgenommen. Insgesamt beschreiben die meisten Befragten, keine Anerkennung für ihre Lehre zu erhalten (68,2 % oder 15/22). Diejenigen Dozenten, die angeben, dass ihre Tätigkeit in der Lehre anerkannt wird, erfahren diese Anerkennung meist von ihren Vorgesetzten, weniger von Seiten der Klinikverwaltung. Auffällig hierbei ist, dass es keinen Zusammenhang zwischen der wahrgenommenen Anerkennung der Lehrtätigkeit und der Art der Arbeitsstätte gibt.\nDa krieg ich kein Dankeschön, vielleicht von den Studenten, ich krieg eher Druck, die Klinik geht vor und dann macht man ‘ne Triage und sagt, okay, jetzt muss ich abwägen.\nEine Anerkennung von Seiten des Arbeitgebers bekomme ich nicht. Ich bekomme eine Anerkennung von Seiten meines Chefs, der sehr wohl feststellt [...], dass die Studenten hier zufrieden sind.\nNur 27,3 % (6/22) der befragten Dozenten geben an, didaktisch ausreichend geschult zu sein. Diejenigen Befragten, die sich nicht ausreichend geschult fühlen, empfinden dies als nachteilig.\nNein. Gar nicht. Niemals im Studium und auch später nicht.\nRelativ schockierend wenig, würde ich sagen.\nDie Dozenten, die sich als ausreichend geschult betrachten, sind hauptsächlich diejenigen, die bereits im Rahmen eines Habilitationsverfahrens Didaktikkurse besucht haben. Diese Kurse werden von den Befragten, die sie bereits besucht haben, als sehr hilfreich bewertet. Auch die jüngeren Dozenten schätzen den Besuch von Didaktikkursen als prinzipiell hilfreich ein. Sie berichteten aber, dass sie häufig keine Möglichkeit sähen, hierfür in ihrem ohnehin vollen Zeitplan Freiräume zu schaffen.\nSachen, an die man vorher im Prinzip nicht gedacht hat, weil die didaktische Aus- und Weiterbildung in der Medizin im Prinzip überhaupt nicht vorkommt und so kann man denke ich aus jedem dieser Kurse was ziehen.\n…dass die Jobkomponente so schon genug Herausforderungen und zeitliche Einnahme darstellt. Ich würd’, glaube ich, davon mit Sicherheit profitieren, wüsste aber nicht, wo ich das in meiner Freizeit noch unterbringen kann ehrlicherweise.\nVor diesen Hintergründen wird die studentische Lehre auf den chirurgischen Stationen generell als zusätzliche Belastung wahrgenommen. Alle von uns befragten Dozenten betonten aber, diese Belastung als gewinnbringend und lohnenswert zu empfinden.\nEs macht auch Spaß und dafür bleibt man ehrlich gesagt auch gerne mal eine viertel oder halbe Stunde länger. Ist schon ok.\nEine zusätzliche Belastung ist es, zweifelsohne. Aber es ist es wert.", "Die vorliegende Arbeit belegt, dass die Lehre im klinischen Alltag für Chirurgen sowohl an den Universitätskliniken als auch an den akademischen Lehrkrankenhäusern einen hohen Stellenwert hat. Dennoch zeigt sie auch, mit welchen teilweise eklatanten Hemmnissen die Lehrenden konfrontiert sind, insbesondere dem Mangel an Zeit zum Lehren. In ihrem Review aus dem Jahre 2014 zeigten Peters und ten Cate ähnliche Ergebnisse auf: Ein häufiger Grund für den Rückgang des „bedside teaching“ sind die Veränderungen, mit denen die Krankenhäuser in den letzten Jahren konfrontiert waren, wie z. B. die Verkürzung der Liegedauer oder die zunehmende Arbeitsbelastung der Ärzte [17]. Auch für Deutschland sind eine zunehmende Arbeitsplatzbelastung, gestiegene Patientenzahlen und eine Zunahme der patientenfernen Tätigkeiten belegt [2].
Hier bedarf es einer verbesserten Unterstützung der Lehrenden auf der Station mit der Schaffung von Zeiträumen, die der Lehre vorbehalten sind und nicht durch Überstunden kompensiert werden müssen. Dies wird mit dem Eintritt der Generation Y in den Arbeitsmarkt umso bedeutsamer, da für diese Generation eine ausgeglichene Work-Life-Balance einer der wichtigsten Faktoren für die Wahl des Arbeitsplatzes ist [7]. Auch für andere klinische Fächer kann angenommen werden, dass Herausforderungen wie die Verkürzung der Liegezeit oder die zunehmende Arbeitsbelastung Hemmnisse für die Lehre im klinischen Alltag darstellen. Aussagekräftige Studien hierzu fehlen allerdings bisher. Weitere Studien sollten daher darauf fokussieren, welche der in der vorliegenden Studie aufgezeigten Hindernisse auch in anderen klinischen Fächern relevant sind und ob es Unterschiede in den wahrgenommenen Belastungen zwischen den Fächern gibt.\nDie in der vorliegenden Arbeit befragten Dozenten berichten, im klinischen Alltag zu wenig Zeit für die Lehre der Studierenden zur Verfügung zu haben. Die Studierenden befürchten häufig bereits vor ihrem tatsächlichen Stationseinsatz, dass sie zu wenig ärztliche Betreuung erfahren werden [10]. Dies bestätigt sich insbesondere im Praktischen Jahr: Hier beschreiben die Studierenden zu wenig Zeit und zu wenig Supervision durch die sie betreuenden Ärzte; sie plädieren daher für die Einrichtung von ausdrücklich für die Lehre freigehaltenen Zeitfenstern [22]. Auch in einer bundesweiten Umfrage unter Medizinstudierenden im chirurgischen Tertial des Praktischen Jahres bestätigte sich dies: Nur 38,3 % der Studierenden bewerteten den Kontakt zu den Lehrenden als gut oder sehr gut. Hierbei zeigten sich diejenigen Studierenden zufriedener, die über einen guten oder sehr guten Kontakt zu den betreuenden Ober- und Fachärzten berichteten [4]. Dieses Manko erscheint umso gravierender, da Meder et al. belegen konnten, dass mit intensiver, qualitativ hochwertiger Lehre Studierende für ein chirurgisches Fach begeistert werden können [13]. Da die Chirurgie wie wenige andere Fächer mit einem massiven Nachwuchsmangel konfrontiert ist [20, 24, 25], erscheint es umso notwendiger, die Studierenden durch hochwertige Lehre für eine spätere Tätigkeit in einem chirurgischen Fach zu begeistern.\nIn der vorliegenden Arbeit gaben nur 27,3 % der Dozenten an, ausreichend didaktisch für die Lehre im klinischen Alltag geschult zu sein. Diese subjektive Wahrnehmung der Dozenten spiegelt sich auch in der Bewertung durch die Studierenden wider: Fröhlich et al. zeigten, dass die didaktische Qualität der Lehre nur von 37,8 % der Studierenden im Pflichttertial Chirurgie als gut bis sehr gut bewertet wird [4]. Eine fundierte Ausbildung der Studierenden im klinischen Alltag ist aber nur möglich, wenn die Dozenten ausreichend didaktisch geschult sind. Dies wird in den kommenden Jahren umso wichtiger, da im Zuge des Masterplanes 2020 umfassende Änderungen nicht nur für das Medizinstudium als solches, sondern auch eine grundlegende Umgestaltung des M3-Examens geplant sind [3, 6]. Hierbei sollen vermehrt diejenigen Kompetenzen geprüft werden, die die Studierenden im direkten Umgang mit den Patienten und Kollegen auf den Stationen erwerben müssen. Gemeinsam mit den Ergebnissen der vorliegenden Studie unterstützt dies die Forderung von Stieger et al.
nach der Etablierung klinischer Lehrexperten, die sowohl die fachärztliche Qualifikation als auch die medizindidaktische in sich vereinen und so neben der klinischen Lehre auch Aufgaben in Kurrikulums- und Prüfungsentwicklung, Koordination und Organisation der Lehre sowie evidenzbasierter Lehr- und Lernforschung erfüllen sollen [23]. Doch auch für die Qualifikation zu einem solchen klinischen Lehrexperten muss durch die Dozenten zusätzliche Zeit aufgebracht werden. In der vorliegenden Studie berichten besonders die noch unerfahreneren Dozenten, diese Zeit für ihre eigene didaktische Weiterbildung nicht mit dem ohnehin vollen Dienstplan vereinbaren zu können. Hier erscheint die Förderung von mit dem klinischen Alltag vergleichbaren Kursen, wie beispielsweise dem durch die chirurgische Arbeitsgemeinschaft Lehre der Deutschen Gesellschaft für Chirurgie initiierten Train-the-Trainer-Kurs, unbedingt erforderlich [1].\nInsgesamt betrachten alle in der vorliegenden Arbeit befragten Dozenten die Lehre im klinischen Alltag als lohnenswert. Als starker Motivator wird hierbei das positive Feedback durch die Studierenden genannt, während Anerkennung oder Feedback durch die Vorgesetzten oder die Kliniken selbst häufig fehlen. Hierin unterscheiden sich die chirurgischen Dozenten nicht von anderen Hochschullehrern. Kiefer et al. zeigten 2013, dass, obwohl die Lehre an sich 97 % der Hochschullehrer Freude bereitet, 53 % in dieser Hinsicht grundsätzlichen Veränderungsbedarf sehen. Nur 37 % der Hochschullehrer berichten, Wertschätzung für gute Lehre durch ihre Vorgesetzten zu erhalten [9]. Als Hauptmotivatoren werden auch hier der Wunsch, die Studierenden für das eigene Fach zu begeistern, und das positive Feedback durch die Studierenden berichtet [9]. Insgesamt erscheint es daher notwendig, standardisierte Bewertungsmöglichkeiten für die Lehre im klinischen Alltag zu schaffen, durch die niederschwellig und transparent eine Anerkennung guter Lehrleistung durch Vorgesetzte und Kliniken begründet werden kann.", "Neben Personalmangel und dem daraus resultierenden Mangel an Zeit erscheint die fehlende Anerkennung der Lehre im klinischen Alltag als das wichtigste Hemmnis für eine effektive Lehre. Somit erscheint es wichtig, neben einer nachhaltigen Verbesserung der personellen Rahmenbedingungen, die Wertigkeit der Lehre durch die Wertschätzung oder Belohnung guter Lehrleistungen und Schaffung einer dahingehenden Transparenz zu erhöhen. So können durch qualitativ hochwertige Lehre Studierende für die Chirurgie begeistert werden." ]
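The results fields above report each hindrance as a count together with a percentage of the n = 22 interviewed surgeons (Tab. 2). As a quick plausibility check, here is a minimal Python sketch; the counts are copied from Tab. 2, while the script itself is illustrative and not part of the original article or the dataset tooling. Note that 10/22 rounds to 45.5 %, whereas the article reports 45,4 %, apparently truncated rather than rounded.

```python
# Recompute the Tab. 2 percentages (counts out of n = 22 interviews).
# Counts taken from the article text; everything else is illustrative.

hindrances = {
    "Zeit- und Personalmangel": 19,
    "Zu geringe Motivation der Studierenden": 10,
    "Unangemessene Erwartungen der Studierenden": 3,
    "Mangelhafte Vorkenntnisse der Studierenden": 2,
}
n_participants = 22

for name, count in hindrances.items():
    share = 100 * count / n_participants
    # Prints 86.4, 45.5, 13.6, 9.1 - the article reports 45,4 % for 10/22.
    print(f"{name}: {count} ({share:.1f} %)")
```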
[ null, null, null, null, null, null, null, null ]
[ "Hintergrund", "Material und Methoden", "Fragebogenerstellung", "Durchführung der Interviews", "Datenanalyse und Auswertung", "Ergebnisse", "Diskussion", "Schlussfolgerung", "Fazit für die Praxis" ]
[ "Der Nachwuchsmangel stellt die Chirurgie vor bisher nicht gekannte Probleme [20]: Im Jahr 2030 werden 7300 Chirurgen und Orthopäden im stationären Sektor fehlen [11]. Diese Entwicklung ist schon im Medizinstudium sichtbar: Sowohl bundesweite [16] Umfragen als auch Befragungen an einzelnen Fakultäten [21] zeigten, dass die Anzahl derjenigen Studierenden, die eine Weiterbildung in einem chirurgischen Fach anstreben, während des Studiums deutlich sinkt. Damit ist die Chirurgie nach der Labormedizin das Fach, das den größten Interessensverlust während des Studiums erleidet [16]. Insgesamt entschieden sich gemäß des bayerischen Absolventenpanels ein Jahr nach Studienabschluss weniger als 10 % der Absolventen für eine Weiterbildung in den chirurgischen Fächern [5].\nVor diesem Hintergrund muss die Chirurgie vermehrt für ihr Fach werben und die Studierenden für die Weiterbildung zum Chirurgen begeistern [20]. Die beste Möglichkeit bieten Blockpraktika und Famulaturen [19], da hier das Interesse nicht nur für eine PJ(Praktisches-Jahr)-Rotation in ein chirurgisches Fach, sondern auch für eine chirurgische Weiterbildung signifikant gesteigert werden kann [8, 13].\nInsgesamt bewerten die Studierenden die Lehre am Krankenbett als dringend notwendig zum Erlernen praktischer Fertigkeiten [17–19, 26]. Jedoch bemängeln sie, dass ihr eigener Unterricht der Patientenversorgung nachgeordnetund damit durch die Faktoren Zeit und ärztliches Personal deutlich und zunehmend limitiert ist [19, 22, 26]. Dies konnte in verschiedenen Arbeiten bestätigt werden, die einen Rückgang der Lehre am Krankenbett beschreiben [1, 17, 18].\nHierfür gibt es multifaktorielle Gründe, insbesondere strukturelle Änderungen, die zu einer zunehmenden Arbeitsverdichtung führen wie die Verkürzung der Liegedauer mit höherem Patientenaufkommen [17] und zunehmendem Dokumentationsaufwand ohne zusätzlich geschaffene Zeiträume [1, 18]. Darüber hinaus werden vor allem die gestiegenen Anforderungen auf Seiten der Lernenden [1] und fehlendes Feedback zur Qualität der Lehre [1] sowie insbesondere die mangelnde Wertschätzung für das Engagement in der Lehre [1, 14] als wichtige Gründe diskutiert. Dies führt dazu, dass Lehre im klinischen Alltag zu selten erfolgt und gute Gelegenheiten hierfür verpasst werden [26].\nDie Studierenden sind davon überzeugt, dass jeder Arzt am Krankenbett unterrichten kann [26]. Kasch et al. konnten zeigen, dass fast die Hälfte der befragten Studierenden jedoch unzufrieden mit der didaktischen Qualität der Lehre in ihrer chirurgischen Famulatur war und nur 35,3 % das stattgehabte „bedside teaching“ als gut empfanden [7].\nEs existieren bisher nur wenige Arbeiten mit dem Fokus auf die Lehrenden in den Universitätskliniken. Unseres Wissens gibt es bisher sogar nur eine Arbeit, die sich mit der Lehre in den peripheren Lehrkrankenhäusern beschäftigt [15], obwohl hier der Großteil der Studierenden am Krankenbett unterrichtet wird. Ziel der vorliegenden Arbeit ist es daher, die Lehre an den Lehrkrankenhäusern und den Universitätskliniken im Fach Chirurgie im Stationsalltag aus Sicht der Lehrenden genauer zu beleuchten und Ursachen möglicher Probleme und Hemmnisse aufzuzeigen.", "Das Studiendesign war prospektiv. Gemäß Aussage der Ethikkommission der Universitätsklinik Frankfurt war für die vorliegende Studie kein Ethikvotum notwendig, da es sich nicht um ein biomedizinisches Forschungsvorhaben im engeren Sinne der Deklaration von Helsinki handelt. 
Die Teilnahme an der Studie erfolgte freiwillig mit schriftlicher Einverständniserklärung und konnte jederzeit ohne Angabe von Gründen beendet werden.\nFragebogenerstellung: Basierend auf den Fragestellungen der Studie wurde ein Leitfaden für semistrukturierte Interviews erstellt. Der Leitfaden diente dazu, die Interviewthematik einzugrenzen und Themenkomplexe vorzugeben, um die Vergleichbarkeit der Ergebnisse der verschiedenen Interviews zu sichern. Der Leitfaden wurde in einem Probeinterview mit Chirurgen, die nicht an der Erstellung des Leitfadens beteiligt waren, getestet und basierend hierauf erneut verfeinert. Nach der Überarbeitung bestand der Leitfaden aus 15 offenen, ausformulierten Fragen, von denen jede mit weiteren Spezifizierungsfragen versehen war. So stellte der Leitfaden die Vergleichbarkeit der einzelnen Interviews sicher.\nDurchführung der Interviews: Die Rekrutierung der Teilnehmer erfolgte durch direkte Ansprache der Lehrenden an der Universitätsklinik Frankfurt und an den akademischen Lehrkrankenhäusern. Die Teilnahme erfolgte freiwillig nach einer ausführlichen Aufklärung über die Studie und die damit verbundene Tonaufnahme sowie der Einwilligung zur anonymisierten Auswertung und zur späteren Veröffentlichung der Studienergebnisse. Eine Aufwandsentschädigung erhielten die Teilnehmer nicht.\nAlle Interviews erfolgten in einer reizarmen Umgebung, wobei nur der interviewte Dozent und der Interviewer anwesend waren. Die Interviews wurden mittels eines Tonaufnahmegerätes aufgezeichnet. Jedes Interview wurde mit einer standardisierten Eingangsfrage eröffnet. Im Verlauf des Interviews wurde die Reihenfolge der Fragen in Abhängigkeit vom Gesprächsverlauf variiert. Die Interviews endeten, wenn alle Fragen zur Zufriedenheit der Dozenten beantwortet waren. Die Anzahl der geführten Interviews ergab sich aus dem Prinzip der inhaltlichen Sättigung.\nDatenanalyse und Auswertung: Alle Interviews wurden wortwörtlich unter Verwendung des Programms f5 (dr. dresing & pehl GmbH, Marburg, Deutschland) transkribiert und zur weiteren Auswertung anonymisiert. Die Auswertung erfolgte mit dem Programm MAXQDA (VERBI Software, Consult Sozialforschung GmbH, Berlin, Deutschland) mittels strukturierender Inhaltsanalyse [12]. Hierfür erfolgte zunächst basierend auf dem Leitfaden die Definition von Hauptkategorien und ersten Unterkategorien, anhand derer alle Interviews durch zwei der Autoren unabhängig voneinander ausgewertet wurden. Im Anschluss wurden abweichende Codierungen diskutiert und die Definitionen der Kategorien verfeinert. Anhand dieser Definitionen wurden alle Interviews ausgewertet.", "Basierend auf den Fragestellungen der Studie wurde ein Leitfaden für semistrukturierte Interviews erstellt. Der Leitfaden diente dazu, die Interviewthematik einzugrenzen und Themenkomplexe vorzugeben, um die Vergleichbarkeit der Ergebnisse der verschiedenen Interviews zu sichern. Der Leitfaden wurde in einem Probeinterview mit Chirurgen, die nicht an der Erstellung des Leitfadens beteiligt waren, getestet und basierend hierauf erneut verfeinert. Nach der Überarbeitung bestand der Leitfaden aus 15 offenen, ausformulierten Fragen, von denen jede mit weiteren Spezifizierungsfragen versehen war. So stellte der Leitfaden die Vergleichbarkeit der einzelnen Interviews sicher.", "Die Rekrutierung der Teilnehmer erfolgte durch direkte Ansprache der Lehrenden an der Universitätsklinik Frankfurt und an den akademischen Lehrkrankenhäusern. Die Teilnahme erfolgte freiwillig nach einer ausführlichen Aufklärung über die Studie und die damit verbundene Tonaufnahme sowie der Einwilligung zur anonymisierten Auswertung und zur späteren Veröffentlichung der Studienergebnisse. Eine Aufwandsentschädigung erhielten die Teilnehmer nicht.\nAlle Interviews erfolgten in einer reizarmen Umgebung, wobei nur der interviewte Dozent und der Interviewer anwesend waren. Die Interviews wurden mittels eines Tonaufnahmegerätes aufgezeichnet. Jedes Interview wurde mit einer standardisierten Eingangsfrage eröffnet. Im Verlauf des Interviews wurde die Reihenfolge der Fragen in Abhängigkeit vom Gesprächsverlauf variiert. Die Interviews endeten, wenn alle Fragen zur Zufriedenheit der Dozenten beantwortet waren. Die Anzahl der geführten Interviews ergab sich aus dem Prinzip der inhaltlichen Sättigung.", "Alle Interviews wurden wortwörtlich unter Verwendung des Programms f5 (dr. dresing & pehl GmbH, Marburg, Deutschland) transkribiert und zur weiteren Auswertung anonymisiert.
Die Auswertung erfolgte mit dem Programm MAXQDA (VERBI Software, Consult Sozialforschung GmbH, Berlin, Deutschland) mittels strukturierender Inhaltsanalyse [12]. Hierfür erfolgte zunächst basierend auf dem Leitfaden die Definition von Hauptkategorien und ersten Unterkategorien, anhand derer alle Interviews durch zwei der Autoren unabhängig voneinander ausgewertet wurden. Im Anschluss wurden abweichende Codierungen diskutiert und die Definitionen der Kategorien verfeinert. Anhand dieser Definitionen wurden alle Interviews ausgewertet.", "Insgesamt wurden 22 Interviews geführt. Die soziodemographischen Daten der Teilnehmer sind in Tab. 1 dargestellt.\nTab. 1: Soziodemographische Daten der Teilnehmer (Gesamt | Frauen | Männer)\nTeilnehmer: 22 | 6 | 16\nChirurgisches Fach, Allgemeinchirurgie: 8 | 2 | 6\nChirurgisches Fach, Gefäßchirurgie: 2 | 0 | 2\nChirurgisches Fach, Herz- und Thoraxchirurgie: 3 | 0 | 3\nChirurgisches Fach, Mund‑, Kiefer- und plastische Gesichtschirurgie: 4 | 0 | 4\nChirurgisches Fach, Unfallchirurgie: 5 | 4 | 1\nWeiterbildungsstand, Chefarzt: 3 | 0 | 3\nWeiterbildungsstand, Oberarzt: 7 | 1 | 6\nWeiterbildungsstand, Facharzt: 1 | 0 | 1\nWeiterbildungsstand, Assistenzarzt: 11 | 5 | 6\nArbeitsstätte, Universitätsklinik: 14 | 5 | 9\nArbeitsstätte, Akademisches Lehrkrankenhaus: 8 | 1 | 7\nAlle im Rahmen der vorliegenden Arbeit befragten Ärzte (22/22) messen der Lehre im klinischen Alltag einen hohen Stellenwert bei. Die Bedeutung der Ausbildung des Nachwuchses für die Zukunft ist allen Teilnehmenden bewusst. Die Wichtigkeit, die die Befragten der Lehre im klinischen Alltag beimessen, ist hierbei unabhängig vom Weiterbildungsstand.\nAlso ich finde es super wichtig, weil man ja Ärzte von morgen ausbildet und wenn die es nicht mitbekommen im Studium, wann sollen sie es denn dann lernen?\nIch nehme das relativ ernst, weil ich der Meinung bin, dass eben auch kommende Generationen von Patienten vor allem davon profitieren, wenn wir als Jungmediziner den noch Jüngeren oder Studenten Sachen beibringen.\nEine große Bedeutung kommt in der Lehre den Lernzielen zu. Dennoch beschreiben die Befragten, dass die Lernziele, die sie für die Studierenden im klinischen Alltag definieren, nicht immer erreicht würden. Hierfür werden verschiedene Gründe angeführt (siehe auch Tab. 2). Unabhängig sowohl vom eigenen Stand der Weiterbildung als auch von der Art der Arbeitsstätte wird der Zeitmangel als Haupthemmnis für Lehre im klinischen Alltag angeführt:\nZeit. Wir sind ja so straff eingebunden, dass wir uns auch ohne Studenten nicht langweilen, und wenn wir dann die Aufgabe zusätzlich noch kriegen, den Studenten adäquat zu unterrichten, geht das dann von meiner Freizeit ab.\nTab. 2: Hemmnisse für die Lehre im klinischen Alltag (Anzahl der Nennungen)\nZeit- und Personalmangel: 19 (86,4 %)\nZu geringe Motivation der Studierenden: 10 (45,4 %)\nUnangemessene Erwartungen der Studierenden: 3 (13,6 %)\nMangelhafte Vorkenntnisse der Studierenden: 2 (9,1 %)\nMit zunehmender klinischer Erfahrung der Dozenten kommen jedoch weitere Faktoren hinzu: Besonders die von uns befragten Ober- und Chefärzte beklagen teilweise eklatante Mängel bezüglich der Vorkenntnisse der Studierenden, welche ein effektives Unterrichten im klinischen Alltag nicht möglich machen.
Beklagt wird hierbei, dass ohne fundierte theoretische Kenntnisse, welche die Studierenden im Vorfeld des Stationseinsatzes beispielsweise in Vorlesungen und Seminaren erworben haben müssen, keine effektive Lehre auf der Station möglich sei.\nAber es fängt schon allein damit an, dass die fundierten theoretischen Kenntnisse erstmal vorhanden sein müssen...\nWeil die mit zum Teil extrem unterschiedlichen Vorkenntnissen kommen und um es mal etwas provokant zu sagen, die Vorkenntnisse sind relativ gering [...] Wenn man die Verkehrsregeln nicht kennt, kann man nicht Autofahren. Wenn man keine Ahnung von Chirurgie hat, ist man hier [...] interessierter Laie.\nDarüber hinaus sehen die durch uns befragten Chirurgen ein weiteres Hemmnis für effektive Lehre im klinischen Alltag in der ihrer Meinung nach teilweise zu geringen Motivation einiger Studierender. Dies werde insbesondere durch das Pflichttertial Chirurgie begünstigt, da hier nicht nur Studierende unterrichtet werden müssen, die ein originäres Interesse am Fach haben, sondern eben auch solche, die bereits wissen, dass sie nicht in der Chirurgie arbeiten wollen.\nUnd es gibt immer mal Probleme, wenn dann Studenten dazwischen sind, die völlig desinteressiert sind.\nEs gibt Leute, die halt mit Desinteresse ankommen, wo man keinen Spaß hat, denen was beizubringen, [...] bin ich froh, wenn die nach ‘ner Zeit wieder gehen.\nDie befragten Ober- und Chefärzte berichten zudem, dass mangelnde Vorkenntnis und eine zu geringe Motivation noch dadurch verschärft würden, dass die Studierenden häufig mit nicht ihrem Ausbildungsstand angemessenen Erwartungen in die Klinikeinsätze starten würden. Durch diese erhöhte Erwartung käme es zu Frustration auf beiden Seiten.\nDass gar nicht mehr so die Bereitschaft da ist, was beigebracht zu bekommen, weil’s halt ihren Erwartungen nicht entspricht. So von wegen: „Wie Verbandswechsel? Ich will Arzt werden, das macht eine Schwester“.\nDie heutigen Studenten verlangen viel mehr. Und wissen gar nicht, in Anführungsstrichen, was auf den Arzt täglich zukommt.\nZusätzlich beschreiben die Ober- und Chefärzte auch Einschränkungen aufgrund der organisatorischen Rahmenbedingungen: Als besonders limitierend empfinden sie, dass häufig auf den Stationen gerade diejenigen für die Betreuung der Studierenden zuständig seien, die selbst noch unerfahren sind.
Hierdurch käme es zu der Situation, dass die jüngsten ärztlichen Kollegen, während sie selbst noch ein Gefühl der Überforderung erleben und für die Ausführung ihrer Aufgaben länger benötigen als die erfahreneren Kollegen, zusätzlich zu ihren Aufgaben in der Patientenversorgung auf der Station gleichzeitig noch für die Betreuung der Studierenden verantwortlich seien. Ihnen fehle nicht nur die Zeit, sondern auch das nötige Wissen, um die Studierenden unterrichten zu können.\n... mit einem jungen Assistenzarzt, der selber noch nicht weiß, wo die Herrentoilette ist, sagen wir mal, ist das halt ein Problem.\n… dass man den Jüngsten die Verantwortung für die Studierenden überträgt und das als selbstverständlich erachtet, dass genau die das auch können sollen. [...] Die wissen mit sich selbst nichts anzufangen. Die brauchen selbst noch Hilfe bei der Stationsführung und da kann man nicht noch sagen, den bildest du jetzt noch mit aus.\nNur zwei der von uns befragten Dozenten (9,1 %) geben an, regelmäßig und strukturiert Feedback für ihre Lehrleistung zu erhalten. Dennoch beschreiben die in der vorliegenden Studie befragten Dozenten, dass das häufig positive Feedback der Studierenden für sie die wichtigste Form der Anerkennung für ihre Bemühungen um die studentische Lehre sei. Sie fühlten sich hierdurch in ihren Bemühungen bestärkt und seien daher auch gerne bereit, Zeit in die Lehre zu investieren, auch wenn sie dafür Überstunden machen müssen.\nIch krieg schon Anerkennungen, insofern, dass ich glaube, dass Studenten die gefördert werden, auch sehr dankbar sind.\nJa, ein Benefit ist natürlich darin zu sehen, dass man Leute für den eigenen Beruf interessiert.\nOb die Dozenten darüber hinaus eine weitere Anerkennung von Seiten ihrer Vorgesetzten erhalten, wird von ihnen sehr unterschiedlich wahrgenommen. Insgesamt beschreiben die meisten Befragten, keine Anerkennung für ihre Lehre zu erhalten (68,2 % oder 15/22). Diejenigen Dozenten, die angeben, dass ihre Tätigkeit in der Lehre anerkannt wird, erfahren diese Anerkennung meist von ihren Vorgesetzten, weniger von Seiten der Klinikverwaltung. Auffällig hierbei ist, dass es keinen Zusammenhang zwischen der wahrgenommenen Anerkennung der Lehrtätigkeit und der Art der Arbeitsstätte gibt.\nDa krieg ich kein Dankeschön, vielleicht von den Studenten, ich krieg eher Druck, die Klinik geht vor und dann macht man ‘ne Triage und sagt, okay, jetzt muss ich abwägen.\nEine Anerkennung von Seiten des Arbeitgebers bekomme ich nicht. Ich bekomme eine Anerkennung von Seiten meines Chefs, der sehr wohl feststellt [...], dass die Studenten hier zufrieden sind.\nNur 27,3 % (6/22) der befragten Dozenten geben an, didaktisch ausreichend geschult zu sein. Diejenigen Befragten, die sich nicht ausreichend geschult fühlen, empfinden dies als nachteilig.\nNein. Gar nicht. Niemals im Studium und auch später nicht.\nRelativ schockierend wenig, würde ich sagen.\nDie Dozenten, die sich als ausreichend geschult betrachten, sind hauptsächlich diejenigen, die bereits im Rahmen eines Habilitationsverfahrens Didaktikkurse besucht haben. Diese Kurse werden von den Befragten, die sie bereits besucht haben, als sehr hilfreich bewertet. Auch die jüngeren Dozenten schätzen den Besuch von Didaktikkursen als prinzipiell hilfreich ein. Sie berichteten aber, dass sie häufig keine Möglichkeit sähen, hierfür in ihrem ohnehin vollen Zeitplan Freiräume zu schaffen.\nSachen, an die man vorher im Prinzip nicht gedacht hat, weil die didaktische Aus- und Weiterbildung in der Medizin im Prinzip überhaupt nicht vorkommt und so kann man denke ich aus jedem dieser Kurse was ziehen.\n…dass die Jobkomponente so schon genug Herausforderungen und zeitliche Einnahme darstellt. Ich würd’, glaube ich, davon mit Sicherheit profitieren, wüsste aber nicht, wo ich das in meiner Freizeit noch unterbringen kann ehrlicherweise.\nVor diesen Hintergründen wird die studentische Lehre auf den chirurgischen Stationen generell als zusätzliche Belastung wahrgenommen. Alle von uns befragten Dozenten betonten aber, diese Belastung als gewinnbringend und lohnenswert zu empfinden.\nEs macht auch Spaß und dafür bleibt man ehrlich gesagt auch gerne mal eine viertel oder halbe Stunde länger. Ist schon ok.\nEine zusätzliche Belastung ist es, zweifelsohne. Aber es ist es wert.", "Die vorliegende Arbeit belegt, dass die Lehre im klinischen Alltag für Chirurgen sowohl an den Universitätskliniken als auch an den akademischen Lehrkrankenhäusern einen hohen Stellenwert hat. Dennoch zeigt sie auch, mit welchen teilweise eklatanten Hemmnissen die Lehrenden konfrontiert sind, insbesondere dem Mangel an Zeit zum Lehren. In ihrem Review aus dem Jahre 2014 zeigten Peters und ten Cate ähnliche Ergebnisse auf: Ein häufiger Grund für den Rückgang des „bedside teaching“ sind die Veränderungen, mit denen die Krankenhäuser in den letzten Jahren konfrontiert waren, wie z. B. die Verkürzung der Liegedauer oder die zunehmende Arbeitsbelastung der Ärzte [17]. Auch für Deutschland sind eine zunehmende Arbeitsplatzbelastung, gestiegene Patientenzahlen und eine Zunahme der patientenfernen Tätigkeiten belegt [2].
Hier bedarf es einer verbesserten Unterstützung der Lehrenden auf der Station mit der Schaffung von Zeiträumen, die der Lehre vorbehalten sind und nicht durch Überstunden kompensiert werden müssen. Dies wird mit dem Eintritt der Generation Y in den Arbeitsmarkt umso bedeutsamer, da für diese Generation eine ausgeglichene Work-Life-Balance einer der wichtigsten Faktoren für die Wahl des Arbeitsplatzes ist [7]. Auch für andere klinische Fächer kann angenommen werden, dass Herausforderungen wie die Verkürzung der Liegezeit oder die zunehmende Arbeitsbelastung Hemmnisse für die Lehre im klinischen Alltag darstellen. Aussagekräftige Studien hierzu fehlen allerdings bisher. Weitere Studien sollten daher darauf fokussieren, welche der in der vorliegenden Studie aufgezeigten Hindernisse auch in anderen klinischen Fächern relevant sind und ob es Unterschiede in den wahrgenommenen Belastungen zwischen den Fächern gibt.\nDie in der vorliegenden Arbeit befragten Dozenten berichten, im klinischen Alltag zu wenig Zeit für die Lehre der Studierenden zur Verfügung zu haben. Die Studierenden befürchten häufig bereits vor ihrem tatsächlichen Stationseinsatz, dass sie zu wenig ärztliche Betreuung erfahren werden [10]. Dies bestätigt sich insbesondere im Praktischen Jahr: Hier beschreiben die Studierenden zu wenig Zeit und zu wenig Supervision durch die sie betreuenden Ärzte; sie plädieren daher für die Einrichtung von ausdrücklich für die Lehre freigehaltenen Zeitfenstern [22]. Auch in einer bundesweiten Umfrage unter Medizinstudierenden im chirurgischen Tertial des Praktischen Jahres bestätigte sich dies: Nur 38,3 % der Studierenden bewerteten den Kontakt zu den Lehrenden als gut oder sehr gut. Hierbei zeigten sich diejenigen Studierenden zufriedener, die über einen guten oder sehr guten Kontakt zu den betreuenden Ober- und Fachärzten berichteten [4]. Dieses Manko erscheint umso gravierender, da Meder et al. belegen konnten, dass mit intensiver, qualitativ hochwertiger Lehre Studierende für ein chirurgisches Fach begeistert werden können [13]. Da die Chirurgie wie wenige andere Fächer mit einem massiven Nachwuchsmangel konfrontiert ist [20, 24, 25], erscheint es umso notwendiger, die Studierenden durch hochwertige Lehre für eine spätere Tätigkeit in einem chirurgischen Fach zu begeistern.\nIn der vorliegenden Arbeit gaben nur 27,3 % der Dozenten an, ausreichend didaktisch für die Lehre im klinischen Alltag geschult zu sein. Diese subjektive Wahrnehmung der Dozenten spiegelt sich auch in der Bewertung durch die Studierenden wider: Fröhlich et al. zeigten, dass die didaktische Qualität der Lehre nur von 37,8 % der Studierenden im Pflichttertial Chirurgie als gut bis sehr gut bewertet wird [4]. Eine fundierte Ausbildung der Studierenden im klinischen Alltag ist aber nur möglich, wenn die Dozenten ausreichend didaktisch geschult sind. Dies wird in den kommenden Jahren umso wichtiger, da im Zuge des Masterplanes 2020 umfassende Änderungen nicht nur für das Medizinstudium als solches, sondern auch eine grundlegende Umgestaltung des M3-Examens geplant sind [3, 6]. Hierbei sollen vermehrt diejenigen Kompetenzen geprüft werden, die die Studierenden im direkten Umgang mit den Patienten und Kollegen auf den Stationen erwerben müssen. Gemeinsam mit den Ergebnissen der vorliegenden Studie unterstützt dies die Forderung von Stieger et al.
nach der Etablierung klinischer Lehrexperten, die sowohl die fachärztliche Qualifikation als auch die medizindidaktische in sich vereinen und so neben der klinischen Lehre auch Aufgaben in Kurrikulums- und Prüfungsentwicklung, Koordination und Organisation der Lehre sowie evidenzbasierter Lehr- und Lernforschung erfüllen sollen [23]. Doch auch für die Qualifikation zu einem solchen klinischen Lehrexperten muss durch die Dozenten zusätzliche Zeit aufgebracht werden. In der vorliegenden Studie berichten besonders die noch unerfahreneren Dozenten, diese Zeit für ihre eigene didaktische Weiterbildung nicht mit dem ohnehin vollen Dienstplan vereinbaren zu können. Hier erscheint die Förderung von mit dem klinischen Alltag vergleichbaren Kursen, wie beispielsweise dem durch die chirurgische Arbeitsgemeinschaft Lehre der Deutschen Gesellschaft für Chirurgie initiierten Train-the-Trainer-Kurs, unbedingt erforderlich [1].\nInsgesamt betrachten alle in der vorliegenden Arbeit befragten Dozenten die Lehre im klinischen Alltag als lohnenswert. Als starker Motivator wird hierbei das positive Feedback durch die Studierenden genannt, während Anerkennung oder Feedback durch die Vorgesetzten oder die Kliniken selbst häufig fehlen. Hierin unterscheiden sich die chirurgischen Dozenten nicht von anderen Hochschullehrern. Kiefer et al. zeigten 2013, dass, obwohl die Lehre an sich 97 % der Hochschullehrer Freude bereitet, 53 % in dieser Hinsicht grundsätzlichen Veränderungsbedarf sehen. Nur 37 % der Hochschullehrer berichten, Wertschätzung für gute Lehre durch ihre Vorgesetzten zu erhalten [9]. Als Hauptmotivatoren werden auch hier der Wunsch, die Studierenden für das eigene Fach zu begeistern, und das positive Feedback durch die Studierenden berichtet [9]. Insgesamt erscheint es daher notwendig, standardisierte Bewertungsmöglichkeiten für die Lehre im klinischen Alltag zu schaffen, durch die niederschwellig und transparent eine Anerkennung guter Lehrleistung durch Vorgesetzte und Kliniken begründet werden kann.", "Neben Personalmangel und dem daraus resultierenden Mangel an Zeit erscheint die fehlende Anerkennung der Lehre im klinischen Alltag als das wichtigste Hemmnis für eine effektive Lehre. Somit erscheint es wichtig, neben einer nachhaltigen Verbesserung der personellen Rahmenbedingungen, die Wertigkeit der Lehre durch die Wertschätzung oder Belohnung guter Lehrleistungen und Schaffung einer dahingehenden Transparenz zu erhöhen. So können durch qualitativ hochwertige Lehre Studierende für die Chirurgie begeistert werden.", "Obwohl Chirurgen die Lehre als absolut lohnenswert empfinden, betrachten sie sie als zusätzliche Belastung.\nNeben Personal- und Zeitmangel ist fehlende Anerkennung der Lehre im klinischen Alltag das wichtigste Hemmnis für eine effektive Lehre.\nEs bedarf einer verbesserten Unterstützung der Lehrenden auf der Station mit der Schaffung von Zeiträumen, die der Lehre vorbehalten sind und nicht durch Überstunden kompensiert werden müssen.\nDie Wertigkeit der Lehre sollte durch eine vermehrte Wertschätzung oder Belohnung guter Lehrleistung durch die Kliniken erhöht werden." ]
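The section texts above originally carried a typical PMC extraction artifact: pull quotes and boxed paragraphs appeared once fused into the running text and once more as standalone repeats. Where such repetition is verbatim, it can be collapsed mechanically; a minimal sketch of that cleanup is shown below (a hypothetical helper, not part of the dataset's published tooling).

```python
# Collapse paragraphs that repeat verbatim within one section text,
# keeping the first occurrence. Hypothetical cleanup helper.

def dedupe_paragraphs(text: str) -> str:
    seen = set()
    kept = []
    for para in text.split("\n"):
        key = para.strip()
        if key and key in seen:
            continue  # drop a verbatim repeat of an earlier paragraph
        seen.add(key)
        kept.append(para)
    return "\n".join(kept)

sample = "Quote A\nQuote A\nParagraph B"
print(dedupe_paragraphs(sample))  # -> Quote A\nParagraph B
```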
[ null, null, null, null, null, null, null, null, "conclusion" ]
[ "Lehre im klinischen Alltag", "Personalmangel", "Nachwuchsmangel", "Anerkennung", "Qualitative Analyse", "Teaching in daily surgical routine", "Personnel shortage", "Shortage of recruits", "Recognition", "Qualitative analysis" ]
Hintergrund: Der Nachwuchsmangel stellt die Chirurgie vor bisher nicht gekannte Probleme [20]: Im Jahr 2030 werden 7300 Chirurgen und Orthopäden im stationären Sektor fehlen [11]. Diese Entwicklung ist schon im Medizinstudium sichtbar: Sowohl bundesweite [16] Umfragen als auch Befragungen an einzelnen Fakultäten [21] zeigten, dass die Anzahl derjenigen Studierenden, die eine Weiterbildung in einem chirurgischen Fach anstreben, während des Studiums deutlich sinkt. Damit ist die Chirurgie nach der Labormedizin das Fach, das den größten Interessensverlust während des Studiums erleidet [16]. Insgesamt entschieden sich gemäß dem bayerischen Absolventenpanel ein Jahr nach Studienabschluss weniger als 10 % der Absolventen für eine Weiterbildung in den chirurgischen Fächern [5]. Vor diesem Hintergrund muss die Chirurgie vermehrt für ihr Fach werben und die Studierenden für die Weiterbildung zum Chirurgen begeistern [20]. Die beste Möglichkeit bieten Blockpraktika und Famulaturen [19], da hier das Interesse nicht nur für eine PJ(Praktisches-Jahr)-Rotation in ein chirurgisches Fach, sondern auch für eine chirurgische Weiterbildung signifikant gesteigert werden kann [8, 13]. Insgesamt bewerten die Studierenden die Lehre am Krankenbett als dringend notwendig zum Erlernen praktischer Fertigkeiten [17–19, 26]. Jedoch bemängeln sie, dass ihr eigener Unterricht der Patientenversorgung nachgeordnet und damit durch die Faktoren Zeit und ärztliches Personal deutlich und zunehmend limitiert ist [19, 22, 26]. Dies konnte in verschiedenen Arbeiten bestätigt werden, die einen Rückgang der Lehre am Krankenbett beschreiben [1, 17, 18]. Hierfür gibt es multifaktorielle Gründe, insbesondere strukturelle Änderungen, die zu einer zunehmenden Arbeitsverdichtung führen wie die Verkürzung der Liegedauer mit höherem Patientenaufkommen [17] und zunehmendem Dokumentationsaufwand ohne zusätzlich geschaffene Zeiträume [1, 18]. Darüber hinaus werden vor allem die gestiegenen Anforderungen auf Seiten der Lernenden [1] und fehlendes Feedback zur Qualität der Lehre [1] sowie insbesondere die mangelnde Wertschätzung für das Engagement in der Lehre [1, 14] als wichtige Gründe diskutiert. Dies führt dazu, dass Lehre im klinischen Alltag zu selten erfolgt und gute Gelegenheiten hierfür verpasst werden [26]. Die Studierenden sind davon überzeugt, dass jeder Arzt am Krankenbett unterrichten kann [26]. Kasch et al. konnten zeigen, dass fast die Hälfte der befragten Studierenden jedoch unzufrieden mit der didaktischen Qualität der Lehre in ihrer chirurgischen Famulatur war und nur 35,3 % das stattgehabte „bedside teaching“ als gut empfanden [7]. Es existieren bisher nur wenige Arbeiten mit dem Fokus auf die Lehrenden in den Universitätskliniken. Unseres Wissens gibt es bisher sogar nur eine Arbeit, die sich mit der Lehre in den peripheren Lehrkrankenhäusern beschäftigt [15], obwohl hier der Großteil der Studierenden am Krankenbett unterrichtet wird. Ziel der vorliegenden Arbeit ist es daher, die Lehre an den Lehrkrankenhäusern und den Universitätskliniken im Fach Chirurgie im Stationsalltag aus Sicht der Lehrenden genauer zu beleuchten und Ursachen möglicher Probleme und Hemmnisse aufzuzeigen. Material und Methoden: Das Studiendesign war prospektiv. Gemäß Aussage der Ethikkommission der Universitätsklinik Frankfurt war für die vorliegende Studie kein Ethikvotum notwendig, da es sich nicht um ein biomedizinisches Forschungsvorhaben im engeren Sinne der Deklaration von Helsinki handelt.
Die Teilnahme an der Studie erfolgte freiwillig mit schriftlicher Einverständniserklärung und konnte jederzeit ohne Angabe von Gründen beendet werden. Fragebogenerstellung: Basierend auf den Fragestellungen der Studie wurde ein Leitfaden für semistrukturierte Interviews erstellt. Der Leitfaden diente dazu, die Interviewthematik einzugrenzen und Themenkomplexe vorzugeben, um die Vergleichbarkeit der Ergebnisse der verschiedenen Interviews zu sichern. Der Leitfaden wurde in einem Probeinterview mit Chirurgen, die nicht an der Erstellung des Leitfadens beteiligt waren, getestet und basierend hierauf erneut verfeinert. Nach der Überarbeitung bestand der Leitfaden aus 15 offenen, ausformulierten Fragen, von denen jede mit weiteren Spezifizierungsfragen versehen war. So stellte der Leitfaden die Vergleichbarkeit der einzelnen Interviews sicher. Durchführung der Interviews: Die Rekrutierung der Teilnehmer erfolgte durch direkte Ansprache der Lehrenden an der Universitätsklinik Frankfurt und an den akademischen Lehrkrankenhäusern. Die Teilnahme erfolgte freiwillig nach einer ausführlichen Aufklärung über die Studie und die damit verbundene Tonaufnahme sowie der Einwilligung zur anonymisierten Auswertung und zur späteren Veröffentlichung der Studienergebnisse. Eine Aufwandsentschädigung erhielten die Teilnehmer nicht. Alle Interviews erfolgten in einer reizarmen Umgebung, wobei nur der interviewte Dozent und der Interviewer anwesend waren. Die Interviews wurden mittels eines Tonaufnahmegerätes aufgezeichnet. Jedes Interview wurde mit einer standardisierten Eingangsfrage eröffnet. Im Verlauf des Interviews wurde die Reihenfolge der Fragen in Abhängigkeit vom Gesprächsverlauf variiert. Die Interviews endeten, wenn alle Fragen zur Zufriedenheit der Dozenten beantwortet waren. Die Anzahl der geführten Interviews ergab sich aus dem Prinzip der inhaltlichen Sättigung. Datenanalyse und Auswertung: Alle Interviews wurden wortwörtlich unter Verwendung des Programms f5 (dr. dresing & pehl GmbH, Marburg, Deutschland) transkribiert und zur weiteren Auswertung anonymisiert. Die Auswertung erfolgte mit dem Programm MAXQDA (VERBI Software, Consult Sozialforschung GmbH, Berlin, Deutschland) mittels strukturierender Inhaltsanalyse [12]. Hierfür erfolgte zunächst basierend auf dem Leitfaden die Definition von Hauptkategorien und ersten Unterkategorien, anhand derer alle Interviews durch zwei der Autoren unabhängig voneinander ausgewertet wurden. Im Anschluss wurden abweichende Codierungen diskutiert und die Definitionen der Kategorien verfeinert. Anhand dieser Definitionen wurden alle Interviews ausgewertet. Fragebogenerstellung: Basierend auf den Fragestellungen der Studie wurde ein Leitfaden für semistrukturierte Interviews erstellt. Der Leitfaden diente dazu, die Interviewthematik einzugrenzen und Themenkomplexe vorzugeben, um die Vergleichbarkeit der Ergebnisse der verschiedenen Interviews zu sichern. Der Leitfaden wurde in einem Probeinterview mit Chirurgen, die nicht an der Erstellung des Leitfadens beteiligt waren, getestet und basierend hierauf erneut verfeinert. Nach der Überarbeitung bestand der Leitfaden aus 15 offenen, ausformulierten Fragen, von denen jede mit weiteren Spezifizierungsfragen versehen war. So stellte der Leitfaden die Vergleichbarkeit der einzelnen Interviews sicher. Durchführung der Interviews: Die Rekrutierung der Teilnehmer erfolgte durch direkte Ansprache der Lehrenden an der Universitätsklinik Frankfurt und an den akademischen Lehrkrankenhäusern. Die Teilnahme erfolgte freiwillig nach einer ausführlichen Aufklärung über die Studie und die damit verbundene Tonaufnahme sowie der Einwilligung zur anonymisierten Auswertung und zur späteren Veröffentlichung der Studienergebnisse. Eine Aufwandsentschädigung erhielten die Teilnehmer nicht. Alle Interviews erfolgten in einer reizarmen Umgebung, wobei nur der interviewte Dozent und der Interviewer anwesend waren. Die Interviews wurden mittels eines Tonaufnahmegerätes aufgezeichnet. Jedes Interview wurde mit einer standardisierten Eingangsfrage eröffnet. Im Verlauf des Interviews wurde die Reihenfolge der Fragen in Abhängigkeit vom Gesprächsverlauf variiert. Die Interviews endeten, wenn alle Fragen zur Zufriedenheit der Dozenten beantwortet waren. Die Anzahl der geführten Interviews ergab sich aus dem Prinzip der inhaltlichen Sättigung. Datenanalyse und Auswertung: Alle Interviews wurden wortwörtlich unter Verwendung des Programms f5 (dr. dresing & pehl GmbH, Marburg, Deutschland) transkribiert und zur weiteren Auswertung anonymisiert.
Die Auswertung erfolgte mit dem Programm MAXQDA (VERBI Software, Consult Sozialforschung GmbH, Berlin, Deutschland) mittels strukturierender Inhaltsanalyse [12]. Hierfür erfolgte zunächst basierend auf dem Leitfaden die Definition von Hauptkategorien und ersten Unterkategorien, anhand derer alle Interviews durch zwei der Autoren unabhängig voneinander ausgewertet wurden. Im Anschluss wurden abweichende Codierungen diskutiert und die Definitionen der Kategorien verfeinert. Anhand dieser Definitionen wurden alle Interviews ausgewertet. Ergebnisse: Insgesamt wurden 22 Interviews geführt. Die soziodemographischen Daten der Teilnehmer sind in Tab. 1 dargestellt. Tab. 1 (Gesamt/Frauen/Männer): Teilnehmer 22/6/16; Chirurgisches Fach: Allgemeinchirurgie 8/2/6, Gefäßchirurgie 2/0/2, Herz- und Thoraxchirurgie 3/0/3, Mund‑, Kiefer- und plastische Gesichtschirurgie 4/0/4, Unfallchirurgie 5/4/1; Weiterbildungsstand: Chefarzt 3/0/3, Oberarzt 7/1/6, Facharzt 1/0/1, Assistenzarzt 11/5/6; Arbeitsstätte: Universitätsklinik 14/5/9, Akademisches Lehrkrankenhaus 8/1/7. Alle im Rahmen der vorliegenden Arbeit befragten Ärzte (22/22) messen der Lehre im klinischen Alltag einen hohen Stellenwert bei. Die Bedeutung der Ausbildung des Nachwuchses für die Zukunft ist allen Teilnehmenden bewusst. Die Wichtigkeit, die die Befragten der Lehre im klinischen Alltag beimessen, ist hierbei unabhängig vom Weiterbildungsstand. Also ich finde es super wichtig, weil man ja Ärzte von morgen ausbildet und wenn die es nicht mitbekommen im Studium, wann sollen sie es denn dann lernen? Ich nehme das relativ ernst, weil ich der Meinung bin, dass eben auch kommende Generationen von Patienten vor allem davon profitieren, wenn wir als Jungmediziner den noch Jüngeren oder Studenten Sachen beibringen. Eine große Bedeutung kommt in der Lehre den Lernzielen zu. Dennoch beschreiben die Befragten, dass die Lernziele, die sie für die Studierenden im klinischen Alltag definieren, nicht immer erreicht würden. Hierfür werden verschiedene Gründe angeführt (siehe auch Tab. 2). Unabhängig sowohl vom eigenen Stand der Weiterbildung als auch von der Art der Arbeitsstätte wird der Zeitmangel als Haupthemmnis für Lehre im klinischen Alltag angeführt: Zeit. Wir sind ja so straff eingebunden, dass wir uns auch ohne Studenten nicht langweilen, und wenn wir dann die Aufgabe zusätzlich noch kriegen, den Studenten adäquat zu unterrichten, geht das dann von meiner Freizeit ab. Tab. 2 (Hemmnis: Anzahl der Nennungen): Zeit- und Personalmangel 19 (86,4 %); Zu geringe Motivation der Studierenden 10 (45,4 %); Unangemessene Erwartungen der Studierenden 3 (13,6 %); Mangelhafte Vorkenntnisse der Studierenden 2 (9,1 %). Mit zunehmender klinischer Erfahrung der Dozenten kommen jedoch weitere Faktoren hinzu: Besonders die von uns befragten Ober- und Chefärzte beklagen teilweise eklatante Mängel bezüglich der Vorkenntnisse der Studierenden, welche ein effektives Unterrichten im klinischen Alltag nicht möglich machen.
Beklagt wird hierbei, dass ohne fundierte theoretische Kenntnisse, welche die Studierenden im Vorfeld des Stationseinsatzes beispielsweise in Vorlesungen und Seminaren erworben haben müssen, keine effektive Lehre auf der Station möglich sei.Aber es fängt schon allein damit an, dass die fundierten theoretischen Kenntnisse erstmal vorhanden sein müssen...Weil die mit zum Teil extrem unterschiedlichen Vorkenntnissen kommen und um es mal etwas provokant zu sagen, die Vorkenntnisse sind relativ gering [...] Wenn man die Verkehrsregeln nicht kennt, kann man nicht Autofahren. Wenn man keine Ahnung von Chirurgie hat, ist man hier [...] interessierter Laie. Aber es fängt schon allein damit an, dass die fundierten theoretischen Kenntnisse erstmal vorhanden sein müssen... Weil die mit zum Teil extrem unterschiedlichen Vorkenntnissen kommen und um es mal etwas provokant zu sagen, die Vorkenntnisse sind relativ gering [...] Wenn man die Verkehrsregeln nicht kennt, kann man nicht Autofahren. Wenn man keine Ahnung von Chirurgie hat, ist man hier [...] interessierter Laie. Darüber hinaus sehen die durch uns befragten Chirurgen ein weiteres Hemmnis für effektive Lehre im klinischen Alltag in der ihrer Meinung nach teilweise zu geringen Motivation einiger Studierender. Dies werde insbesondere durch das Pflichttertial Chirurgie begünstigt, da hier nicht nur Studierende unterrichtet werden müssen, die ein originäres Interesse am Fach haben, sondern eben auch solche, die bereits wissen, dass sie nicht in der Chirurgie arbeiten wollen.Und es gibt immer mal Probleme, wenn dann Studenten dazwischen sind, die völlig desinteressiert sind.Es gibt Leute, die halt mit Desinteresse ankommen, wo man keinen Spaß hat, denen was beizubringen, [...] bin ich froh, wenn die nach ‘ner Zeit wieder gehen. Und es gibt immer mal Probleme, wenn dann Studenten dazwischen sind, die völlig desinteressiert sind. Es gibt Leute, die halt mit Desinteresse ankommen, wo man keinen Spaß hat, denen was beizubringen, [...] bin ich froh, wenn die nach ‘ner Zeit wieder gehen. Die befragten Ober- und Chefärzte berichten zudem, dass mangelnde Vorkenntnis und eine zu geringe Motivation noch dadurch verschärft würden, dass die Studierenden häufig mit nicht ihrem Ausbildungsstand angemessenen Erwartungen in die Klinikeinsätze starten würden. Durch diese erhöhte Erwartung käme es zu Frustration auf beiden Seiten.Dass gar nicht mehr so die Bereitschaft da ist, was beigebracht zu bekommen, weil’s halt ihren Erwartungen nicht entspricht. So von wegen: „Wie Verbandswechsel? Ich will Arzt werden, das macht eine Schwester“.Die heutigen Studenten verlangen viel mehr. Und wissen gar nicht, in Anführungsstrichen, was auf den Arzt täglich zukommt Dass gar nicht mehr so die Bereitschaft da ist, was beigebracht zu bekommen, weil’s halt ihren Erwartungen nicht entspricht. So von wegen: „Wie Verbandswechsel? Ich will Arzt werden, das macht eine Schwester“. Die heutigen Studenten verlangen viel mehr. Und wissen gar nicht, in Anführungsstrichen, was auf den Arzt täglich zukommt Zusätzlich beschreiben die Ober- und Chefärzte auch Einschränkungen aufgrund der organisatorischen Rahmenbedingungen: Als besonders limitierend empfinden sie, dass häufig auf den Stationen gerade diejenigen für die Betreuung der Studierenden zuständig seien, die selbst noch unerfahren sind. 
Hierdurch käme es zu der Situation, dass die jüngsten ärztlichen Kollegen, während sie selbst noch ein Gefühl der Überforderung erleben und für die Ausführung ihrer Aufgaben länger benötigen als die erfahreneren Kollegen, zusätzlich zu ihren Aufgaben in der Patientenversorgung auf der Station gleichzeitig noch für die Betreuung der Studierenden verantwortlich seien. Ihnen fehle nicht nur die Zeit, sondern auch das nötige Wissen, um die Studierenden unterrichten zu können.... mit einem jungen Assistenzarzt, der selber noch nicht weiß, wo die Herrentoilette ist, sagen wir mal, ist das halt ein Problem.… dass man den Jüngsten die Verantwortung für die Studierenden überträgt und das als selbstverständlich erachtet, dass genau die das auch können sollen. [...] Die wissen mit sich selbst nichts anzufangen. Die brauchen selbst noch Hilfe bei der Stationsführung und da kann man nicht noch sagen, den bildest du jetzt noch mit aus. ... mit einem jungen Assistenzarzt, der selber noch nicht weiß, wo die Herrentoilette ist, sagen wir mal, ist das halt ein Problem. … dass man den Jüngsten die Verantwortung für die Studierenden überträgt und das als selbstverständlich erachtet, dass genau die das auch können sollen. [...] Die wissen mit sich selbst nichts anzufangen. Die brauchen selbst noch Hilfe bei der Stationsführung und da kann man nicht noch sagen, den bildest du jetzt noch mit aus. Nur zwei der von uns befragten Dozenten (9,1 %) geben an, regelmäßig und strukturiert Feedback für ihre Lehrleistung zu erhalten. Dennoch beschreiben die in der vorliegenden Studie befragten Dozenten, dass das häufig positive Feedback der Studierenden für sie die wichtigste Form der Anerkennung für ihre Bemühungen um die studentische Lehre sei. Sie fühlten sich hierdurch in ihren Bemühungen bestärkt und seien daher auch gerne bereit, Zeit in die Lehre zu investieren, auch wenn sie dafür Überstunden machen müssen.Ich krieg schon Anerkennungen, insofern, dass ich glaube, dass Studenten die gefördert werden, auch sehr dankbar sind.Ja, ein Benefit ist natürlich darin zu sehen, dass man Leute für den eigenen Beruf interessiert. Ich krieg schon Anerkennungen, insofern, dass ich glaube, dass Studenten die gefördert werden, auch sehr dankbar sind. Ja, ein Benefit ist natürlich darin zu sehen, dass man Leute für den eigenen Beruf interessiert. Ob die Dozenten darüber hinaus eine weitere Anerkennung von Seiten ihrer Vorgesetzten erhalten, wird von ihnen sehr unterschiedlich wahrgenommen. Insgesamt beschreiben die meisten Befragten, keine Anerkennung für ihre Lehre zu erhalten (68,2 % oder 15/22). Diejenigen Dozenten, die angeben, dass ihre Tätigkeit in der Lehre anerkannt wird, erfahren diese Anerkennung meist von ihren Vorgesetzten, weniger von Seiten der Klinikverwaltung. Auffällig hierbei ist, dass es keinen Zusammenhang zwischen der wahrgenommenen Anerkennung der Lehrtätigkeit und der Art der Arbeitsstätte gibt.Da krieg ich kein Dankeschön, vielleicht von den Studenten, ich krieg eher Druck, die Klinik geht vor und dann macht man ‘ne Triage und sagt, okay, jetzt muss ich abwägen.Eine Anerkennung von Seiten des Arbeitsgebers bekomme ich nicht. Ich bekomme eine Anerkennung von Seiten meines Chefs, der sehr wohl feststellt [...], dass die Studenten hier zufrieden sind. Da krieg ich kein Dankeschön, vielleicht von den Studenten, ich krieg eher Druck, die Klinik geht vor und dann macht man ‘ne Triage und sagt, okay, jetzt muss ich abwägen. Eine Anerkennung von Seiten des Arbeitsgebers bekomme ich nicht. 
Ich bekomme eine Anerkennung von Seiten meines Chefs, der sehr wohl feststellt [...], dass die Studenten hier zufrieden sind. Nur 27,3 % (6/22) der befragten Dozenten geben an, didaktisch ausreichend geschult zu sein. Diejenigen Befragten, die sich nicht ausreichend geschult fühlen, empfinden dies als nachteilig.Nein. Gar nicht. Niemals im Studium und auch später nicht.Relativ schockierend wenig, würde ich sagen. Nein. Gar nicht. Niemals im Studium und auch später nicht. Relativ schockierend wenig, würde ich sagen. Die Dozenten, die sich als ausreichend geschult betrachten, sind hauptsächlich diejenigen, die bereits im Rahmen eines Habilitationsverfahrens Didaktikkurse besucht haben. Diese Kurse werden von den Befragten, die sie bereits besucht haben, als sehr hilfreich bewertet. Auch die jüngeren Dozenten schätzen den Besuch von Didaktikkursen als prinzipiell hilfreich ein. Sie berichteten aber, dass sie häufig keine Möglichkeit sähen, hierfür in ihrem ohnehin vollen Zeitplan Freiräume zu schaffen.Sachen, an die man vorher im Prinzip nicht gedacht hat, weil die didaktische Aus- und Weiterbildung in der Medizin im Prinzip überhaupt nicht vorkommt und so kann man denke ich aus jedem dieser Kurse was ziehen.…dass die Jobkomponente so schon genug Herausforderungen und zeitliche Einnahme darstellt. Ich würd’, glaube ich, davon mit Sicherheit profitieren, wüsste aber nicht, wo ich das in meiner Freizeit noch unterbringen kann ehrlicherweise. Sachen, an die man vorher im Prinzip nicht gedacht hat, weil die didaktische Aus- und Weiterbildung in der Medizin im Prinzip überhaupt nicht vorkommt und so kann man denke ich aus jedem dieser Kurse was ziehen. …dass die Jobkomponente so schon genug Herausforderungen und zeitliche Einnahme darstellt. Ich würd’, glaube ich, davon mit Sicherheit profitieren, wüsste aber nicht, wo ich das in meiner Freizeit noch unterbringen kann ehrlicherweise. Vor diesen Hintergründen wird die studentische Lehre auf den chirurgischen Stationen generell als zusätzliche Belastung wahrgenommen. Alle von uns befragten Dozenten betonten aber, diese Belastung als gewinnbringend und lohnenswert zu empfinden.Es macht auch Spaß und dafür bleibt man ehrlich gesagt auch gerne mal eine viertel oder halbe Stunde länger. Ist schon ok.Eine zusätzliche Belastung ist es, zweifelsohne. Aber es ist es wert. Es macht auch Spaß und dafür bleibt man ehrlich gesagt auch gerne mal eine viertel oder halbe Stunde länger. Ist schon ok. Eine zusätzliche Belastung ist es, zweifelsohne. Aber es ist es wert. Diskussion: Die vorliegende Arbeit belegt, dass die Lehre im klinischen Alltag für Chirurgen sowohl an den Universitätskliniken als auch an den akademischen Lehrkrankenhäusern einen hohen Stellenwert hat. Dennoch zeigt sie auch, mit welchen teilweise eklatanten Hemmnissen die Lehrenden konfrontiert sind, insbesondere dem Mangel an Zeit zum Lehren. In ihrem Review aus dem Jahre 2014 zeigten Peters und ten Cate ähnliche Ergebnisse auf: Ein häufiger Grund für den Rückgang des „bedside teaching“ sind die Veränderungen, mit denen die Krankenhäuser in den letzten Jahren konfrontiert waren, wie z. B. die Verkürzung der Liegedauer oder die zunehmende Arbeitsbelastung der Ärzte [17]. Auch für Deutschland sind eine zunehmende Arbeitsplatzbelastung, gestiegene Patientenzahlen und eine Zunahme der patientenfernen Tätigkeiten belegt [2]. 
Hier bedarf es einer verbesserten Unterstützung der Lehrenden auf der Station mit der Schaffung von Zeiträumen, die der Lehre vorbehalten sind und nicht durch Überstunden kompensiert werden müssen. Dies wird mit dem Eintritt der Generation Y in den Arbeitsmarkt umso bedeutsamer, da für diese Generation eine ausgeglichene Work-Life-Balance einer der wichtigsten Faktoren für die Wahl des Arbeitsplatzes ist [7]. Auch für andere klinische Fächer kann angenommen werden, dass Herausforderungen wie die Verkürzung der Liegezeit oder die zunehmende Arbeitsbelastung Hemmnisse für die Lehre im klinischen Alltag darstellen. Aussagekräftige Studien hierzu fehlen allerdings bisher. Weitere Studien sollten daher darauf fokussieren, welche der in der vorliegenden Studie aufgezeigten Hindernisse auch in anderen klinischen Fächern relevant sind und ob es Unterschiede in den wahrgenommenen Belastungen zwischen den Fächern gibt. Die in der vorliegenden Arbeit befragten Dozenten berichten, im klinischen Alltag zu wenig Zeit für die Lehre der Studierenden zur Verfügung zu haben. Die Studierenden befürchten häufig bereits vor ihrem tatsächlichen Stationseinsatz, dass sie zu wenig ärztliche Betreuung erfahren werden [10]. Dies bestätigt sich insbesondere im Praktischen Jahr, hier beschreiben die Studierenden zu wenig Zeit und zu wenig Supervision durch die sie betreuenden Ärzte, sie plädieren daher für die Einrichtung von ausdrücklich für die Lehre freigehaltenen Zeitfenstern [22]. Auch in einer bundesweiten Umfrage unter Medizinstudierenden im chirurgischen Tertial des Praktischen Jahres bestätigte sich dies: Nur 38,3 % der Studierenden bewerteten den Kontakt zu den Lehrenden als gut oder sehr gut. Hierbei zeigten sich diejenigen Studierenden zufriedener, die über einen guten oder sehr guten Kontakt zu den betreuenden Ober- und Fachärzten berichteten [4]. Dieses Manko erscheint umso gravierender, da Meder et al. belegen konnten, dass mit intensiver, qualitativ hochwertiger Lehre Studierende für ein chirurgisches Fach begeistert werden können [13]. Da die Chirurgie wie wenige andere Fächer mit einem massiven Nachwuchsmangel konfrontiert ist [20, 24, 25], erscheint es umso notwendiger, die Studierenden durch hochwertige Lehre für eine spätere Tätigkeit in einem chirurgischen Fach zu begeistern. In der vorliegenden Arbeit gaben nur 27,3 % der Dozenten an, ausreichend didaktisch für die Lehre im klinischen Alltag geschult zu sein. Diese subjektive Wahrnehmung der Dozenten spiegelt sich auch in der Bewertung durch die Studierenden wider: Fröhlich et al. zeigten, dass die didaktische Qualität der Lehre nur von 37,8 % der Studierenden im Pflichttertial Chirurgie als gut bis sehr gut bewertet wird [4]. Eine fundierte Ausbildung der Studierenden im klinischen Alltag ist aber nur möglich, wenn die Dozenten ausreichend didaktisch geschult sind. Dies wird in den kommenden Jahren umso wichtiger, da im Zuge des Masterplanes 2020 umfassende Änderungen nicht nur für das Medizinstudium als solches, sondern auch eine grundlegende Umgestaltung des M3-Examens geplant sind [3, 6]. Hierbei sollen vermehrt diejenigen Kompetenzen geprüft werden, die die Studierenden im direkten Umgang mit den Patienten und Kollegen auf den Stationen erwerben müssen. Gemeinsam mit den Ergebnissen der vorliegenden Studie unterstützt dies die Forderung von Stieger et al. 
nach der Etablierung klinischer Lehrexperten, die sowohl die fachärztliche Qualifikation als auch die medizindidaktische in sich vereinen und so neben der klinischen Lehre auch Aufgaben in Kurrikulums- und Prüfungsentwicklung, Koordination und Organisation der Lehre sowie evidenzbasierter Lehr- und Lernforschung erfüllen sollen [23]. Doch auch für die Qualifikation zu einem solchen klinischen Lehrexperten muss durch die Dozenten zusätzliche Zeit aufgebracht werden. In der vorliegenden Studie berichten besonders die noch unerfahreneren Dozenten, diese Zeit für ihre eigene didaktische Weiterbildung nicht mit dem ohnehin vollen Dienstplan vereinbaren zu können. Hier erscheint die Förderung von mit dem klinischen Alltag vergleichbaren Kursen, wie beispielsweise dem durch die chirurgische Arbeitsgemeinschaft Lehre der Deutschen Gesellschaft für Chirurgie initiierten Train-the-Trainer-Kurs, unbedingt erforderlich [1]. Insgesamt betrachten alle in der vorliegenden Arbeit befragten Dozenten die Lehre im klinischen Alltag als lohnenswert. Als starker Motivator wird hierbei das positive Feedback durch die Studierenden genannt, während Anerkennung oder Feedback durch die Vorgesetzten oder die Kliniken selbst häufig fehlen. Hierin unterscheiden sich die chirurgischen Dozenten nicht von anderen Hochschullehrern. Kiefer et al. zeigten 2013, dass, obwohl die Lehre an sich 97 % der Hochschullehrer Freude bereitet, 53 % in dieser Hinsicht grundsätzlichen Veränderungsbedarf sehen. Nur 37 % der Hochschullehrer berichten, Wertschätzung für gute Lehre durch ihre Vorgesetzten zu erhalten [9]. Als Hauptmotivator wird auch hier der Wunsch, die Studierenden für das eigene Fach zu begeistern, und das positive Feedback durch die Studierenden berichtet [9]. Insgesamt erscheint es daher notwendig, standardisierte Bewertungsmöglichkeiten für die Lehre im klinischen Alltag zu schaffen, durch die niederschwellig und transparent eine Anerkennung guter Lehrleistung durch Vorgesetzte und Kliniken begründet werden kann. Schlussfolgerung: Neben Personalmangel und dem daraus resultierende Mangel an Zeit erscheint die fehlende Anerkennung der Lehre im klinischen Alltag das wichtigste Hemmnis für eine effektive Lehre. Somit erscheint es wichtig, neben einer nachhaltigen Verbesserung der personellen Rahmenbedingungen, die Wertigkeit der Lehre durch die Wertschätzung oder Belohnung guter Lehrleistungen und Schaffung einer dahingehenden Transparenz zu erhöhen. So können durch qualitativ hochwertige Lehre Studierende für die Chirurgie begeistert werden. Fazit für die Praxis: Obwohl Chirurgen die Lehre als absolut lohnenswert empfinden, betrachten sie sie als zusätzliche Belastung.Neben Personal- und Zeitmangel ist fehlende Anerkennung der Lehre im klinischen Alltag das wichtigste Hemmnis für eine effektive Lehre.Es bedarf einer verbesserten Unterstützung der Lehrenden auf der Station mit der Schaffung von Zeiträumen, die der Lehre vorbehalten sind und nicht durch Überstunden kompensiert werden müssen.Die Wertigkeit der Lehre sollte durch eine vermehrte Wertschätzung oder Belohnung guter Lehrleistung durch die Kliniken erhöht werden. Obwohl Chirurgen die Lehre als absolut lohnenswert empfinden, betrachten sie sie als zusätzliche Belastung. Neben Personal- und Zeitmangel ist fehlende Anerkennung der Lehre im klinischen Alltag das wichtigste Hemmnis für eine effektive Lehre. 
Es bedarf einer verbesserten Unterstützung der Lehrenden auf der Station mit der Schaffung von Zeiträumen, die der Lehre vorbehalten sind und nicht durch Überstunden kompensiert werden müssen. Die Wertigkeit der Lehre sollte durch eine vermehrte Wertschätzung oder Belohnung guter Lehrleistung durch die Kliniken erhöht werden.
Background: Surgery is confronted with a pronounced shortage of recruits; thus medical students must be inspired to undertake this specialty. Students complain that teaching is subordinate to patient care and limited by a lack of time and medical personnel. Although there are many studies assessing student perceptions, few exist that focus on the issues that teachers face. Methods: In this prospective study, a guide for semistructured interviews with open, fully formulated questions was created; each question was supplemented with further specifying questions. All interviews were conducted using this guide and recorded. The number of interviews was determined by the principle of content saturation. Results: All 22 participants perceived teaching in clinical practice to be of paramount importance. Nevertheless, respondents described that learning goals in the clinical routine are not always achieved. The main reason is a lack of time; however, as clinical experience increases, other factors become more important: consultants and heads of departments complain about deficiencies in students' prior knowledge as well as insufficient motivation. Most respondents described that they do not feel appreciated for their teaching. Overall, student teaching was perceived as an additional burden, but all respondents found the task to be extremely worthwhile. Conclusions: In addition to the lack of personnel and the resulting lack of time, a lack of appreciation is the most significant obstacle to effective teaching. It is therefore important to increase the value of teaching by rewarding good teaching performance and creating the corresponding transparency.
null
null
5,074
265
[ 540, 748, 97, 137, 104, 2157, 1000, 72 ]
9
[ "die", "der", "und", "nicht", "für", "lehre", "den", "im", "dass", "mit" ]
[ "chirurgischen stationen generell", "des bayerischen absolventenpanels", "studiums erleidet 16", "eine chirurgische weiterbildung", "7300 chirurgen und" ]
null
null
null
null
null
null
null
[CONTENT] Lehre im klinischen Alltag | Personalmangel | Nachwuchsmangel | Anerkennung | Qualitative Analyse | Teaching in daily surgical routine | Personnel shortage | Shortage of recruits | Recognition | Qualitative analysis [SUMMARY]
[CONTENT] Lehre im klinischen Alltag | Personalmangel | Nachwuchsmangel | Anerkennung | Qualitative Analyse | Teaching in daily surgical routine | Personnel shortage | Shortage of recruits | Recognition | Qualitative analysis [SUMMARY]
null
null
null
null
[CONTENT] Attitude of Health Personnel | Humans | Motivation | Prospective Studies | Students, Medical | Surgeons | Teaching [SUMMARY]
[CONTENT] Attitude of Health Personnel | Humans | Motivation | Prospective Studies | Students, Medical | Surgeons | Teaching [SUMMARY]
null
null
null
null
[CONTENT] chirurgischen stationen generell | des bayerischen absolventenpanels | studiums erleidet 16 | eine chirurgische weiterbildung | 7300 chirurgen und [SUMMARY]
[CONTENT] chirurgischen stationen generell | des bayerischen absolventenpanels | studiums erleidet 16 | eine chirurgische weiterbildung | 7300 chirurgen und [SUMMARY]
null
null
null
null
[CONTENT] die | der | und | nicht | für | lehre | den | im | dass | mit [SUMMARY]
[CONTENT] die | der | und | nicht | für | lehre | den | im | dass | mit [SUMMARY]
null
null
null
null
[CONTENT] lehre | der | der lehre | die | als | sie | durch | werden | die kliniken erhöht | zusätzliche belastung neben personal [SUMMARY]
[CONTENT] der | die | und | lehre | interviews | leitfaden | für | den | mit | dass [SUMMARY]
null
null
null
null
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| ||| ||| ||| 22 ||| ||| ||| ||| ||| ||| [SUMMARY]
null
State of equity: childhood immunization in the World Health Organization African Region.
29296140
In 2010, the Global Vaccine Action Plan called on all countries to reach and sustain 90% national coverage and 80% coverage in all districts for the third dose of diphtheria-tetanus-pertussis vaccine (DTP3) by 2015 and for all vaccines in national immunization schedules by 2020. The aims of this study are to analyze recent trends in national vaccination coverage in the World Health Organization African Region and to assess how these trends differ by country income category.
INTRODUCTION
We compared national vaccination coverage estimates for DTP3 and the first dose of measles-containing vaccine (MCV) obtained from the World Health Organization (WHO)/United Nations Children's Fund (UNICEF) joint estimates of national immunization coverage for all African Region countries. Using United Nations (UN) population estimates of surviving infants and country income category for the corresponding year, we calculated population-weighted average vaccination coverage by country income category (i.e., low, lower middle, and upper middle-income) for the years 2000, 2005, 2010 and 2015.
METHODS
DTP3 coverage in the African Region increased from 52% in 2000 to 76% in 2015, and MCV1 coverage increased from 53% to 74% during the same period, but with considerable differences among countries. Thirty-six African Region countries were low-income in 2000 with an average DTP3 coverage of 50%, while 26 were low-income in 2015 with an average coverage of 80%. Five countries were lower middle-income in 2000 with an average DTP3 coverage of 84%, while 12 were lower middle-income in 2015 with an average coverage of 69%. Five countries were upper middle-income in 2000 with an average DTP3 coverage of 73%, and eight were upper middle-income in 2015 with an average coverage of 76%.
RESULTS
Disparities in vaccination coverage by country persist in the African Region, with countries that were lower middle-income having the lowest coverage on average in 2015. Monitoring and addressing these disparities is essential for meeting global immunization targets.
CONCLUSION
[ "Africa", "Developing Countries", "Diphtheria-Tetanus-Pertussis Vaccine", "Global Health", "Humans", "Immunization Programs", "Immunization Schedule", "Income", "Infant", "Measles Vaccine", "Vaccination", "Vaccination Coverage", "World Health Organization" ]
5745948
Introduction
In 2012, the World Health Assembly endorsed the Global Vaccine Action Plan 2011-2020 (GVAP), which calls on all countries to reach and sustain 90% national coverage and 80% coverage in all districts for the third dose of diphtheria-tetanus-pertussis-containing vaccine (DTP3) by 2015 and 90% national coverage and 80% coverage in all districts for all vaccines included in national immunization schedules by 2020 [1]. The GVAP also calls for a focus on equity in vaccination coverage through reducing the gap in coverage between low-income countries and high-income countries, and by reducing pockets of low sub-national vaccination coverage. Within the WHO African Region (AFR), a majority of countries have historically been classified by the World Bank as low-income [2], and immunization services in many countries usually operate within a relatively resource-constrained environment compared to high-income countries. Most of the 47 African Region countries are the focus of heavy investments from foreign donors, including Gavi, the Vaccine Alliance, which has disbursed at least US$6 billion during 2000-2016 for vaccine introductions and health system strengthening activities in 37 African Region countries [3, 4]. No published analyses have described recent trends in vaccination coverage, equity and vaccine introductions among African Region countries in light of the substantial investments made in these countries over the past 16 years. Examining how these trends differ by country income status and by Gavi-eligibility status may provide useful insight into gaps to be addressed in future work by the countries and their external partners. Additionally, as global partners such as Gavi, WHO and UNICEF look to further support countries in reaching GVAP goals, identifying countries which have outperformed their peers may also yield useful lessons learned for strengthening immunization systems within resource-constrained settings. The aims of this study are to analyze recent trends in national and subnational vaccination coverage in the African Region, to assess how these trends differ by country income category and Gavi-eligibility, to review the coverage achieved with recently introduced vaccines in African Region countries and to identify low-income African Region countries which outperform their peers in reaching and sustaining high levels of vaccination coverage.
Methods
Data sources: Global, regional and national vaccination coverage estimates were obtained from the WHO/UNICEF estimates of national immunization coverage (WUENIC) released in July 2016 [5, 6]. These data provide coverage estimates for multiple routine vaccinations, including the third dose of DTP-containing vaccine (DTP3), third dose of Haemophilus influenzae type b vaccine (Hib3), third dose of polio vaccine (Pol3), first dose of measles-containing vaccine (MCV1) and second dose (MCV2), third dose of pneumococcal conjugate vaccine (PCV3), first dose of rubella-containing vaccine (RCV1), and last dose of rotavirus vaccine (Rota-last) during 1980-2015. Country-reported estimates of the proportion of districts that have reached 80% vaccination coverage in a given year were obtained from annual country-reported data collected through the WHO/UNICEF Joint Reporting Form on Immunization (JRF) process and made available for public use [7]. Information on vaccine introductions was obtained from data made publicly available by WHO [8]. National estimates for the number of surviving infants were obtained from the United Nations Population Division (UNPD), 2015 revision [9]. National estimates of gross national income per capita (GNI per capita) and categorization of countries by national income level from 1990 to 2015 were obtained from the World Bank [10]. Information on whether a country was eligible for Gavi funding was obtained from sections of the Gavi website on countries' eligibility for and transitions from Gavi support [11, 12].
Definitions: Country income categories were defined using World Bank historical criteria for low-income, lower middle-income, upper middle-income, and high-income countries within the given year of analysis [2]. The World Bank uses GNI per capita, calculated using the World Bank Atlas method, to categorize countries by income level.
For instance, in 2015, low-income countries were defined as those with a GNI per capita of $1,025 or less, lower middle-income countries were those with a GNI per capita between $1,026 and $4,035, upper middle-income countries were those with a GNI per capita between $4,036 and $12,475, and high-income countries were those with a GNI per capita of $12,476 or more. Gavi-eligible countries are defined as those that were eligible (GNI per capita ≤ US$1,000) for Gavi funding in Phase 1, commencing in 2000. This group includes 37 countries in AFR. Despite changes in the criteria for countries' eligibility for Gavi support, 35 countries in the African Region were still eligible for Gavi support in 2015 [11]. Vaccination coverage is defined as the proportion of surviving infants in a given year that received the vaccine. DTP1–3 dropout is defined as the proportion of infants who received the first dose of DTP vaccine (DTP1) but did not receive the third dose of DTP vaccine (DTP3). According to WHO Immunization in Practice guidelines, a dropout rate of 10% or higher indicates that challenges may exist with ensuring that children who start a country's recommended schedule are able to receive all doses of recommended vaccination series [13, 14].
Data analysis: For several vaccines and numbers of doses (DTP3, Pol3, MCV1, MCV2), we evaluated trends in African Region coverage compared to global-level coverage between 2000 and 2015. We compared national vaccination coverage estimates, the proportion of districts in a country reported to reach ≥90% DTP3 coverage, and the number of unvaccinated children for DTP3 across African Region countries for 2015, as well as the trend in subnational coverage from 2010 to 2015. We calculated the number of unvaccinated children in each country by using the UNPD estimates for surviving infants in 2015 and the WUENIC 2015 DTP3 estimates.
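Both derived quantities are simple arithmetic on the reported coverage proportions and cohort sizes. As a minimal sketch of the calculations (hypothetical helper names and toy figures, not the study's actual code or data):

```python
# Minimal sketch of the two derived quantities defined above.
# All inputs are illustrative; coverages are proportions of surviving infants.

def dropout_rate(dtp1: float, dtp3: float) -> float:
    """DTP1-3 dropout: share of DTP1 recipients who did not receive DTP3."""
    return (dtp1 - dtp3) / dtp1

def unvaccinated_children(surviving_infants: int, dtp3: float) -> int:
    """Surviving infants in the birth cohort not reached with DTP3."""
    return round(surviving_infants * (1.0 - dtp3))

# Hypothetical country: 90% DTP1, 76% DTP3, 1,000,000 surviving infants.
print(f"{dropout_rate(0.90, 0.76):.1%}")       # 15.6% -> above the 10% threshold
print(unvaccinated_children(1_000_000, 0.76))  # 240000 children missed by DTP3
```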
We identified the African Region countries that had achieved ≥90% DTP3 coverage in 2015, and grouped the remaining countries according to level of DTP3 coverage achieved (80%–89%, 70%–79% and <70%). We used the 2010 and 2015 estimates to calculate the relative change in national DTP3 coverage over the five-year period and compared the relative change in coverage across the Region. Using the UNPD estimates for surviving infants [9], we calculated weighted average vaccination coverage by income category and Gavi-eligibility status for the years 2000, 2005, 2010 and 2015 and compared the relative change by income category groups and Gavi-eligibility. We identified African Region countries with particularly good performance using the following three criteria: 1) countries that maintained a national DTP1 and DTP3 coverage ≥90% for the period 2013 to 2015, 2) countries that maintained DTP1–3 dropout ≤10% for the period 2013 to 2015 and 3) countries that maintained a MCV1 coverage ≥90% for the period 2013 to 2015. Countries that met all three criteria were defined as positive deviants. Finally, we reviewed the African Region's performance regarding the introduction of and coverage with selected vaccines that have become available since 1980 or remain underused. We compared the African Region's coverage with each of these vaccines to that of the other five WHO Regions and to global coverage.
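The population weighting described above amounts to averaging country coverages with surviving-infant cohorts as weights. A minimal sketch, assuming a simple per-country record layout (the field names and example values are illustrative, not the study's data structures):

```python
# Sketch of population-weighted average coverage for a group of countries.

def weighted_coverage(countries: list) -> float:
    """Average coverage weighted by each country's surviving infants."""
    total_infants = sum(c["infants"] for c in countries)
    return sum(c["coverage"] * c["infants"] for c in countries) / total_infants

# Toy two-country group: the larger birth cohort dominates the group average.
group = [
    {"name": "A", "coverage": 0.98, "infants": 400_000},
    {"name": "B", "coverage": 0.70, "infants": 1_600_000},
]
print(f"{weighted_coverage(group):.1%}")  # 75.6%, far closer to B than to A
```

This weighting is why a single populous country can dominate its income group's average, as the Results describe for Nigeria.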
Results
DTP3 and Pol3 coverage trends: In 2015, regional DTP3 coverage in the African Region reached 76% from a 2000 level of 52%. In comparison, global DTP3 coverage increased from 72% to 86% over the same period (Figure 1). In 2015, national DTP3 coverage for African Region countries ranged from 16% (Equatorial Guinea) to 98% (Rwanda and Tanzania), with 16 (34%) of all 47 African Region countries achieving ≥90% national DTP3 coverage and 6 (13%) reaching ≥80% DTP3 coverage in every district. Thirteen of the countries achieving ≥90% national DTP3 coverage had also done so in 2014 and 2013. Of the 31 African Region countries that did not achieve ≥90% national DTP3 coverage in 2015, 16 (34%) had coverage of 80%-89%, 3 (6%) had coverage of 70%-79% and 12 (26%) had coverage of <70%. The 2015 African Region coverage of 76% resulted in 8.1 million children unvaccinated with DTP3 (42% of the global total in 2015); nearly 3.9 million lived in three African Region countries: Nigeria (2.9 million), Democratic Republic of the Congo (0.6 million) and Ethiopia (0.4 million). In 2000, 11.7 million were unvaccinated with DTP3.
Figure 1: Routine immunization coverage globally and in the African Region, 2000-2015
From 2010 to 2015, the relative change in national DTP3 coverage varied between -28% and 25% across countries. Eighteen African Region countries experienced a reduction in DTP3 coverage from 2010 to 2015; of these, 4 countries (Equatorial Guinea, Liberia, Angola and Guinea) had a greater than 10% reduction. Over the same period, African Region Pol3 coverage showed similar patterns as DTP3 coverage (Figure 1). Pol3 coverage increased minimally, from 74% in 2010 to 76% in 2015, with 2015 national Pol3 coverages ranging between 17% (Equatorial Guinea) and 98% (Rwanda and Tanzania).
MCV1 and MCV2 coverage trends: Estimated MCV1 coverage in the African Region increased from 53% in 2000 to 73% by 2010 and was 74% in 2015 (Figure 1). In comparison, global MCV1 coverage increased from 72% in 2000 to 85% in 2010 and was still 85% in 2015. In 2015, national MCV1 coverage for African Region countries ranged between 20% and 99%, with 12 (26%) countries reaching ≥90% national MCV1 coverage. From 2010 to 2015, 20 African Region countries experienced a decrease in MCV1 coverage, 23 experienced an increase, and 4 had no change. Five countries (Swaziland, Kenya, Equatorial Guinea, Eritrea, and Angola) had a greater than 10% decrease in MCV1 coverage from 2010 to 2015. MCV2 coverage in the African Region (including countries that had not yet introduced MCV2) increased from 5% in 2000 to 18% in 2015, with 23 (49%) African Region countries including MCV2 in their 2015 immunization schedule compared to 5 (11%) in 2000. In comparison, 83% of countries globally had introduced MCV2 by 2015, compared to 46% in 2000, and global MCV2 coverage was 61% in 2015, compared to 15% in 2000.
Analysis by income category and Gavi-eligibility status: When classified by country income category in 2015, 26 African Region countries were low-income, 12 were lower middle-income, 8 were upper middle-income and 1 was high-income; these numbers changed from 36, 5, 5, and 0, respectively, in 2000 (Table 1).
Table 1: Income category of each country in the World Health Organization African Region by year, 2000, 2005, 2010 and 2015. Shaded cells indicate where a country changed income category from the previous 5-year time point. Source: World Bank analytical categorizations using country gross national income per capita for the given year.
The population-weighted average national DTP3 coverage across low-income African Region countries increased from 50% in 2000 to 80% in 2015 (average annual change, 2.0%), with a lower annual change during 2010-2015, when coverage increased from 74% to 80% (average annual change, 1.2%), than during 2000-2010, when coverage increased from 50% to 74% (average annual change, 2.4%) (Figure 2 A).
In lower middle-income countries, the average DTP3 coverage decreased from 84% in 2000 to 69% in 2015 (average annual change, -1.0%), though there was a slight increase from 67% to 69% during 2010-2015. The decrease in lower middle-income countries' average DTP3 coverage coincided with Nigeria being reclassified as a lower middle-income country instead of a low-income country. Nigeria's 2015 DTP3 coverage of 56% was below the 87% average for the other lower middle-income countries. For lower middle-income countries excluding Nigeria, DTP3 coverage was 84% in 2000, 68% in 2005, 84% in 2010, and 87% in 2015. In comparison, for low-income countries excluding Nigeria, DTP3 coverage increased from 55% in 2000 to 69% in 2005, 74% in 2010 and 80% in 2015.
Figure 2: Immunization program performance (MCV1 coverage, DTP3 coverage and DTP1–3 dropout rate) for countries in the African Region, by country income category and Gavi-eligibility status, 2000–2015
Among upper middle-income countries, a small annual change (average annual change, 0.2%) was observed during 2000-2015 as DTP3 coverage increased from 73% to 76%; however, coverage reached a high of 79% in 2010 before decreasing to 76% by 2015. The average national DTP3 coverage increased from 77% to 81% between 2000 and 2015 (average annual change, 0.3%) for Gavi-ineligible countries and from 49% to 76% (average annual change, 1.8%) for Gavi-eligible countries (Figure 2 B). African Region MCV1 coverage by income and Gavi-eligibility category exhibited similar trends as DTP3 coverage during the 2000-2015 period (Figure 2 C, D). In low-income African Region countries, MCV1 coverage increased from 57% to 79% (average annual change, 1.6 percentage points) during 2000-2015, while in lower middle-income countries, coverage was 82% in 2000 and 65% in 2015 (average annual change, -1.1%). For lower middle-income countries without Nigeria, MCV1 coverage was 82% in 2000 and 60% in 2005, then reached 81% in 2010 and 80% in 2015 (average annual change, -0.1%). In upper middle-income countries, coverage was 65% in 2000 and reached 75% in 2015 (average annual change, 0.7%). For upper middle-income countries, the largest change in MCV1 coverage occurred during 2005-2010 (64% to 83%). Between 2000 and 2015, MCV1 coverage rose from 60% to 73% in Gavi-eligible countries (average annual change, 0.9%) and from 72% to 77% (average annual change, 0.33%) in Gavi-ineligible countries. The MCV1 coverage gap between Gavi-eligible and Gavi-ineligible African Region countries decreased from 12% to 4% during 2000 to 2015. The average DTP1–3 dropout rate was 24% in 2000 and 9% in 2015 among African Region countries that were low-income in those years, 11% in 2000 and 15% in 2015 in countries that were lower middle-income in those years, and 14% in 2000 and 9% in 2015 among African Region countries that were upper middle-income in those years (Figure 2 E). The average DTP1–3 dropout rate decreased from 13% to 5% from 2000 to 2015 for Gavi-ineligible countries and from 24% to 11% for Gavi-eligible countries (Figure 2 F).
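The "average annual change" figures quoted above can be read as the total change in coverage, in percentage points, divided by the number of elapsed years; a quick check against the low-income DTP3 values (this reading is an assumption, but it reproduces the quoted numbers exactly):

```python
# Worked check of the average annual change figures for low-income DTP3 coverage.

def avg_annual_change(start_pct: float, end_pct: float, years: int) -> float:
    """Total change in percentage points divided by elapsed years."""
    return (end_pct - start_pct) / years

print(avg_annual_change(50, 80, 15))  # 2.0 points/year, 2000-2015
print(avg_annual_change(74, 80, 5))   # 1.2 points/year, 2010-2015
print(avg_annual_change(50, 74, 10))  # 2.4 points/year, 2000-2010
```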
Positive deviance analysis: During 2013-2015, eleven African Region countries met all three positive deviance thresholds: national DTP1 and DTP3 coverage ≥90%, DTP1–3 dropout ≤10% and MCV1 coverage ≥90% for each of the last three years. Of these, four were low-income countries in 2015 (Rwanda, Burundi, The Gambia and the United Republic of Tanzania), three were lower middle-income (Lesotho, Sao Tome and Principe and Cabo Verde), three were upper middle-income countries (Algeria, Botswana, and Mauritius) and one was a high-income country (Seychelles). Of these 11 countries, The Gambia, Mauritius, Rwanda and Sao Tome and Principe also achieved ≥80% DTP3 coverage in 100% of districts. Two additional countries in the African Region, Eritrea and Swaziland, achieved national DTP1 and DTP3 coverage ≥90% and DTP1–3 dropout ≤10% for the last three years but did not reach the positive deviance threshold of MCV1 coverage ≥90%.
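The screen described in the Methods reduces to a per-country filter over three annual thresholds. A minimal sketch under an assumed data layout (per-year coverage proportions for 2013-2015; not the authors' actual code):

```python
# Sketch of the positive-deviance screen: all thresholds must hold each year.

YEARS = (2013, 2014, 2015)

def is_positive_deviant(country: dict) -> bool:
    """DTP1, DTP3 and MCV1 coverage >= 90% and DTP1-3 dropout <= 10%
    in every year 2013-2015."""
    for y in YEARS:
        dtp1 = country["dtp1"][y]
        dtp3 = country["dtp3"][y]
        mcv1 = country["mcv1"][y]
        dropout = (dtp1 - dtp3) / dtp1
        if min(dtp1, dtp3, mcv1) < 0.90 or dropout > 0.10:
            return False
    return True

# Hypothetical record that passes the screen in all three years.
steady = {k: {y: 0.98 for y in YEARS} for k in ("dtp1", "dtp3", "mcv1")}
print(is_positive_deviant(steady))  # True
```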
Vaccine introductions: Several vaccines have been added to the WHO recommended schedule since the inception of the Expanded Programme on Immunization (EPI) in 1974. By 2015, 23 (49%) African Region countries had introduced MCV2, while 37 (79%) had introduced PCV and 29 (62%) had introduced rotavirus vaccine. Regional coverage for Hib3, PCV3, RCV1 and Rota-last increased in the African Region between 2010 and 2015 (Table 2), as 33 African Region countries introduced PCV, 27 introduced rotavirus vaccine, 7 introduced Hib, and 6 introduced RCV1. African Region PCV3 coverage reached 59% in 2015 compared to global coverage of 37%. Among the six WHO Regions, the African Region ranked fourth in 2015 for Hib3 coverage and second for both PCV3 and Rota-last coverage (data not shown). However, African Region RCV1 coverage was the lowest among all regions at 12%. In the African Region, the number and proportion of countries reaching 90% national coverage in 2015 with Hib3, PCV3, Rota-last and RCV1 vaccines were 16 (34%), 6 (13%), 6 (13%) and 5 (11%), respectively. Of the four positive deviant low-income countries (as of 2015), all achieved ≥90% coverage with Hib3, PCV3, and Rota-last in 2015; Tanzania and Rwanda also achieved ≥90% coverage with RCV1 in 2015, while Burundi and The Gambia had yet to introduce RCV1.
Table 2: African Region and global coverage of recently introduced vaccines, 2010 and 2015. Definitions: Rota = final dose of rotavirus vaccine; Hib3 = third dose of Haemophilus influenzae type b vaccine; PCV3 = third dose of pneumococcal conjugate vaccine; RCV1 = first dose of rubella-containing vaccine.
Conclusion
The described inequities in vaccination coverage by region, country and at the subnational level are an important barrier to achieving higher coverage and disease control and elimination goals. Moving forward, successful immunization programs will need to ensure high-level political commitment to immunization, ownership and stewardship of the program, and the financial and human resource capacity to institute a wide range of innovative and cohesive strategies to reach every child and to stimulate demand for vaccination services in all communities. The February 2016 Addis Declaration on Immunization, in which African heads of state committed to increasing domestic resources for immunization and improving the effectiveness and efficiency of immunization programs, is a sign that the necessary political commitment exists [29]. The fulfillment of the pledges in the declaration will be essential to maximizing the benefits from immunization in the African Region.
What is known about this topic
- Childhood immunization is a safe, effective and cost-effective intervention that is critical to reducing infant and under-five morbidity and mortality. It is also a key part of the struggle against antimicrobial resistance and can be used as a tool in health system strengthening;
- Equity is a key component of the strategies outlined by the Global Vaccine Action Plan 2011-2020 and the UN 2030 Agenda for Sustainable Development;
- The Global Vaccine Action Plan 2011-2020 called for all countries to reach and sustain 90% national coverage and 80% coverage in all districts for the third dose of diphtheria, tetanus, and pertussis vaccine (DTP3) by 2015 and 90% national coverage and 80% coverage in all districts for all vaccines in national immunization schedules by 2020.
What this study adds
- As yet, no countries in the African Region are meeting the 2020 coverage targets;
- During 2013-2015, eleven African Region countries achieved national DTP1 and DTP3 coverage ≥90%, DTP1–3 dropout ≤10% and MCV1 coverage ≥90% for each of the three years; of these 11 countries, only Rwanda, a low-income country, achieved 80% DTP3 coverage in all districts and ≥90% coverage for Hib, PCV3, RCV and Rota in 2015;
- Despite improvements in immunization coverage in the African Region since 2000, more progress is needed in lower middle-income and upper middle-income countries as well as in low-income countries.
[ "Data sources", "Definitions", "Data analysis", "DTP3 and Pol3 coverage trends", "MCV1 and MCV2 coverage trends", "Analysis by income category and Gavi-eligibility status", "Positive deviance analysis", "Vaccine introductions", "What is known about this topic", "What this study adds", "Competing interests" ]
[ "Global, regional and national vaccination coverage estimates were obtained from the WHO/UNICEF estimates of national immunization coverage (WUENIC) released in July 2016 [5, 6]. These data provide coverage estimates for multiple routine vaccinations, including third dose of DTP-containing vaccine (DTP3), third dose of Haemophilus influenza type b vaccine (Hib3), third dose of polio vaccine (Pol3), measles-containing vaccine first dose (MCV1) and second dose (MCV2), third dose of pneumococcal conjugate vaccine (PCV3), first dose of rubella-containing vaccine (RCV1), and last dose of rotavirus vaccine (Rota-last) during 1980-2015. Country-reported estimates of the proportion of districts that have reached 80% vaccination coverage in a given year were obtained from annual country-reported data collected through the WHO/UNICEF Joint Reporting Form on Immunization (JRF) process and made available for public use [7]. Information on vaccine introductions was obtained from data made publicly available by WHO [8].\nNational estimates for the number of surviving infants were obtained from the United Nations Population Division (UNPD), 2015 revision [9]. National estimates of gross national income per capita (GNI per capita) and categorization of countries by national income level from 1990 to 2015 were obtained from the World Bank [10]. Information on whether a country was eligible forGavi funding was obtained from sections of the Gavi website on countries’ eligibility for and transitions from Gavi support [11, 12].", "Country income categories were defined using World Bank historical criteria for low-income, lower middle-income, upper middle-income, and high-income countries within the given year of analysis [2]. The World Bank uses GNI per capita, calculated using the World Bank Atlas method, to categorize countries by income level. For instance, in 2015, low-income countries were defined as those with a GNI per capita of $1,025 or less, lower middle-income countries were those with a GNI per capita between $1,026 and $4,035, upper middle-income countries were those with a GNI per capita between $4,036 and $12,475, and high-income countries were those with a GNI per capita of $12,476 or more.\nGavi-eligible countries are defined as those that were eligible (GNI per capita ≤US$ 1,000)for Gavi funding in Phase 1, commencing in 2000. This group includes 37 countries in AFR. Despite changes in the criteria for countries’ eligibility for Gavi support, 35 countries in the African Region were still eligible for Gavi support in 2015 [11].\nVaccination coverage is defined as the proportion of surviving infants in a given year that received the vaccine. DTP1–3 dropout is defined as the proportion of infants who received the first dose of DTP vaccine (DTP1) but did not receive the third dose of DTP vaccine (DTP3). According to WHO Immunization in Practice guidelines, a dropout rate of 10% or higher indicates that challenges may exist with ensuring that children who start a country’s recommended schedule are able to receive all doses of recommended vaccination series [13,14].", "For several vaccines and numbers of doses (DTP3, Pol3, MCV1, MCV2), we evaluated trends in African Region coverage compared to global-level coverage between 2000 and 2015. 
We compared national vaccination coverage estimates, the proportion of districts in a country reported to reach ≥80% DTP3 coverage, and the number of unvaccinated children for DTP3 across African Region countries for 2015, as well as the trend in subnational coverage from 2010 to 2015. We calculated the number of unvaccinated children in each country by using the UNPD estimates for surviving infants in 2015 and the WUENIC 2015 DTP3 estimates. We identified the African Region countries that had achieved ≥90% DTP3 coverage in 2015, and grouped the remaining countries according to level of DTP3 coverage achieved (80%–89%, 70%–79% and <70%). We used the 2010 and 2015 estimates to calculate the relative change in national DTP3 coverage over the five-year period and compared the relative change in coverage across the Region.\nUsing the UNPD estimates for surviving infants [9], we calculated weighted average vaccination coverage by income category and Gavi-eligibility status for the years 2000, 2005, 2010 and 2015 and compared the relative change by income category groups and Gavi-eligibility.\nWe identified African Region countries with particularly good performance using the following three criteria: 1) countries that maintained national DTP1 and DTP3 coverage ≥90% for the period 2013 to 2015, 2) countries that maintained DTP1-3 dropout ≤10% for the period 2013 to 2015 and 3) countries that maintained MCV1 coverage ≥90% for the period 2013 to 2015. Countries that met all three criteria were defined as positive deviants.\nFinally, we reviewed the African Region’s performance regarding the introduction of and coverage with selected vaccines that have become available since 1980 or remain underused. We compared the African Region’s coverage with each of these vaccines to that of the other five WHO Regions and to global coverage.", "In 2015, regional DTP3 coverage in the African Region reached 76% from a 2000 level of 52%. In comparison, global DTP3 coverage increased from 72% to 86% over the same period (Figure 1). In 2015, national DTP3 coverage for African Region countries ranged from 16% (Equatorial Guinea) to 98% (Rwanda and Tanzania), with 16 (34%) of all 47 African Region countries achieving ≥90% national DTP3 coverage and 6 (13%) reaching ≥80% DTP3 coverage in every district. Thirteen of the countries achieving ≥90% national DTP3 coverage had also done so in 2014 and 2013. Of the 31 African Region countries that did not achieve ≥90% national DTP3 coverage in 2015, 16 (34%) had coverage of 80%-89%, 3 (6%) had coverage of 70%-79% and 12 (26%) had coverage of <70%. The 2015 African Region coverage of 76% resulted in 8.1 million children unvaccinated with DTP3 (42% of the global total in 2015); nearly 3.9 million lived in three African Region countries: Nigeria (2.9 million), Democratic Republic of the Congo (0.6 million) and Ethiopia (0.4 million). In 2000, 11.7 million were unvaccinated with DTP3.\nRoutine immunization coverage globally and in the African Region, 2000-2015\nFrom 2010 to 2015, the relative change in national DTP3 coverage varied between -28% and 25% across countries. Eighteen African Region countries experienced a reduction in DTP3 coverage from 2010 to 2015; of these, 4 countries (Equatorial Guinea, Liberia, Angola and Guinea) had a greater than 10% reduction in DTP3 coverage from 2010 to 2015. Over the same period, the African Region Pol3 coverage showed similar patterns as DTP3 coverage (Figure 1).
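The country-level quantities described in the data analysis above reduce to simple arithmetic; the sketch below (hypothetical inputs, not the actual UNPD or WUENIC figures) shows the three calculations:

def unvaccinated_children(surviving_infants, dtp3_pct):
    # Children not reached with DTP3 = surviving infants x (1 - coverage)
    return surviving_infants * (1.0 - dtp3_pct / 100.0)

def relative_change_pct(cov_2010, cov_2015):
    # Relative change in national coverage over the five-year period
    return 100.0 * (cov_2015 - cov_2010) / cov_2010

def dtp3_band(cov_pct):
    # Grouping used above for countries below the 90% target
    if cov_pct >= 90:
        return ">=90%"
    if cov_pct >= 80:
        return "80%-89%"
    if cov_pct >= 70:
        return "70%-79%"
    return "<70%"

# Hypothetical country: 1,000,000 surviving infants, 76% DTP3 coverage
print(unvaccinated_children(1_000_000, 76.0))  # 240000.0
print(relative_change_pct(80.0, 76.0))         # -5.0 (a 5% relative decline)
print(dtp3_band(76.0))                         # 70%-79%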
The African Region Pol3 coverage minimally increased, moving from 74% in 2010 to 76% in 2015, with 2015 Pol3 national coverages ranging between 17% (Equatorial Guinea) and 98% (Rwanda and Tanzania).", "Estimated MCV1 coverage in the African Region increased from 53% in 2000 to 73% by 2010 and was 74% in 2015 (Figure 1). In comparison, global MCV1 coverage increased from 72% in 2000 to 85% in 2010 and was still 85% in 2015. In 2015, national MCV1 coverage for African Region countries ranged between 20% and 99%, with 12 (26%) countries reaching ≥90% national MCV1 coverage. From 2010 to 2015, 20 African Region countries experienced a decrease in MCV1 coverage, 23 experienced an increase, and 4 had no change. Five countries (Swaziland, Kenya, Equatorial Guinea, Eritrea, and Angola) had a greater than 10% decrease in MCV1 coverage from 2010 to 2015. MCV2 coverage in the African Region (including countries that had not yet introduced MCV2) increased from 5% in 2000 to 18% in 2015, with 23 (49%) African Region countries including MCV2 in their 2015 immunization schedule compared to 5 (11%) in 2000. In comparison, 83% of countries globally had introduced MCV2 by 2015, compared to 46% in 2000, and global MCV2 coverage was 61% in 2015, compared to 15% in 2000.", "When classified by country income category in 2015, 26 African Region countries were low-income, 12 were lower middle-income, 8 were upper middle-income and 1 was high-income; these numbers changed from 36, 5, 5, and 0, respectively, in 2000 (Table 1).\nIncome category of each country in the World Health Organization African Region by year, 2000, 2005, 2010 and 2015\nShaded cells indicate where country changed income category from previous 5-year time point. Source: World Bank analytical categorizations using country gross national income per capita for given year\nThe population-weighted average national DTP3 coverage across low-income African Region countries increased from 50% in 2000 to 80% in 2015 (average annual change, 2.0%), with a lower annual change during 2010-2015, when coverage increased from 74% to 80% (average annual change, 1.2%), than during 2000-2010, when coverage increased from 50% to 74% (average annual change, 2.4%) (Figure 2 A). In lower middle-income countries, the average DTP3 coverage decreased from 84% in 2000 to 69% in 2015 (average annual change, -1.0%), though there was a slight increase from 67% to 69% during 2010-2015. The decrease in lower middle-income countries’ average DTP3 coverage coincided with Nigeria being reclassified as a lower middle-income country instead of a low-income country. Nigeria’s 2015 DTP3 coverage of 56% was below the 87% average for the other lower middle-income countries. For lower middle-income countries excluding Nigeria, DTP3 coverage was 84% in 2000, 68% in 2005, 84% in 2010, and 87% in 2015. In comparison, for low-income countries excluding Nigeria, DTP3 coverage increased from 55% in 2000 to 69% in 2005, 74% in 2010 and 80% in 2015.\nImmunization program performance (MCV1 coverage, DTP3 coverage and DTP1–3 dropout rate) for countries in the African Region 1, by country income category 2 and Gavi-eligibility status 3, 2000–2015\nAmong upper middle-income countries, a small annual change (average annual change, 0.2%) was observed during 2000-2015 as DTP3 coverage increased from 73% to 76%; however, coverage reached a high of 79% in 2010 before decreasing to 76% by 2015.
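A sketch of the population weighting behind these group averages (hypothetical figures; the actual analysis weights WUENIC coverage estimates by UNPD surviving-infant counts):

def weighted_coverage(countries):
    # countries: list of (surviving_infants, coverage_pct) pairs for one
    # income or Gavi-eligibility group in a given year
    total_infants = sum(n for n, _ in countries)
    return sum(n * cov for n, cov in countries) / total_infants

def avg_annual_change(cov_start, cov_end, n_years):
    # Average annual change in coverage, in percentage points per year
    return (cov_end - cov_start) / n_years

group = [(5_000_000, 56.0), (1_000_000, 87.0)]  # two hypothetical countries
print(round(weighted_coverage(group), 1))  # 61.2: the populous country dominates
print(avg_annual_change(50.0, 80.0, 15))   # 2.0, as in the low-income DTP3 trend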
The average national DTP3 coverage increased from 77% to 81% between 2000 and 2015 (average annual change, 0.3%) for Gavi-ineligible countries and from 49% to 76% (average annual change, 1.8%) for Gavi-eligible countries (Figure 2 B).\nAfrican Region MCV1 coverage by income and Gavi-eligibility category exhibited similar trends as DTP3 coverage during the 2000-2015 period (Figure 2 C,D). In low-income African Region countries, MCV1 coverage increased from 57% to 79% (average annual change, 1.6%) during 2000-2015, while in lower middle-income countries, coverage was 82% in 2000 and 65% in 2015 (average annual change, -1.1%). For lower middle-income countries without Nigeria, MCV1 coverage was 82% in 2000 and 60% in 2005, then reached 81% in 2010 and 80% in 2015 (average annual change, -0.1%). In upper middle-income countries, coverage was 65% in 2000 and reached 75% in 2015 (average annual change, 0.7%). For upper middle-income countries, the largest change in MCV1 coverage occurred during 2005-2010 (64% to 83%). Between 2000 and 2015, MCV1 coverage rose from 60% to 73% in Gavi-eligible countries (average annual change, 0.9%) and from 72% to 77% (average annual change, 0.33%) in Gavi-ineligible countries. The MCV1 coverage gap between Gavi-eligible and Gavi-ineligible African Region countries decreased from 12% to 4% during 2000 to 2015.\nThe average DTP1–3 dropout rate was 24% in 2000 and 9% in 2015 among African Region countries that were low-income in those years, 11% in 2000 and 15% in 2015 in countries that were lower middle-income in those years, and 14% in 2000 and 9% in 2015 among African Region countries that were upper middle-income in those years (Figure 2 E). The average DTP1–3 dropout rate decreased from 13% to 5% from 2000 to 2015 for Gavi-ineligible countries and from 24% to 11% for Gavi-eligible countries (Figure 2 F).", "During 2013-2015, eleven African Region countries met all three positive deviance analysis thresholds: national DTP1 and DTP3 coverage ≥90%, DTP1–3 dropout <10% and MCV1 coverage ≥90% for each of the last three years. Of these, four were low-income countries in 2015 (Rwanda, Burundi, The Gambia and the United Republic of Tanzania), three were lower middle-income (Lesotho, Sao Tome and Principe and Cabo Verde), three were upper middle-income countries (Algeria, Botswana, and Mauritius) and one was a high-income country (Seychelles). Of these 11 countries, The Gambia, Mauritius, Rwanda and Sao Tome and Principe also achieved ≥80% DTP3 coverage in 100% of districts.\nTwo additional countries in the African Region achieved national DTP1 and DTP3 coverage ≥90% and DTP1–3 dropout <10% for the last three years, but did not reach the positive deviance threshold of MCV1 coverage ≥90% (Eritrea and Swaziland).", "Several vaccines have been added to the WHO recommended schedule since the inception of the Expanded Programme on Immunization (EPI) in 1974. By 2015, 23 (49%) African Region countries had introduced MCV2, while 37 (79%) had introduced PCV and 29 (62%) had introduced rotavirus vaccine. Regional coverage for Hib3, PCV3, RCV1 and Rota-last increased in the African Region between 2010 and 2015 (Table 2), as 33 African Region countries introduced PCV, 27 introduced rotavirus vaccine, 7 introduced Hib vaccine, and 6 introduced RCV1. African Region PCV3 coverage reached 59% in 2015, compared to global coverage of 37%. Among the six WHO Regions, the African Region ranked fourth in 2015 for Hib3 coverage and second for both PCV3 and Rota-last coverage (data not shown).
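The positive deviance screen described above is an all-of-three filter applied year by year; a minimal sketch (the data structure and values are hypothetical, not the study's actual code):

def is_positive_deviant(history, years=(2013, 2014, 2015)):
    # history maps year -> {"DTP1": ..., "DTP3": ..., "MCV1": ...} in percent;
    # all three criteria must hold in every one of the three years.
    for year in years:
        cov = history[year]
        dropout = 100.0 * (cov["DTP1"] - cov["DTP3"]) / cov["DTP1"]
        if not (cov["DTP1"] >= 90 and cov["DTP3"] >= 90
                and dropout <= 10 and cov["MCV1"] >= 90):
            return False
    return True

# Hypothetical country clearing every threshold in all three years
history = {y: {"DTP1": 98, "DTP3": 97, "MCV1": 95} for y in (2013, 2014, 2015)}
print(is_positive_deviant(history))  # True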
However, African Region RCV1 coverage, at 12%, was the lowest among all regions. In the African Region, the number and proportion of countries reaching ≥90% national coverage in 2015 with Hib3, PCV3, Rota-last and RCV1 were 16 (34%), 6 (13%), 6 (13%) and 5 (11%), respectively. Of the four positive deviant low-income countries (as of 2015), all achieved ≥90% coverage with Hib3, PCV3, and Rota-last in 2015; Tanzania and Rwanda also achieved ≥90% coverage with RCV1 in 2015, while Burundi and The Gambia had yet to introduce RCV1.\nAfrican region and global coverage of recently introduced vaccines, 2010 and 2015\nDefinitions: Rota=final dose of rotavirus vaccine; Hib3=third dose of Haemophilus influenzae type b vaccine; PCV3=third dose of pneumococcal conjugate vaccine; RCV1=first dose of rubella-containing vaccine", "Childhood immunization is a safe, effective and cost-effective intervention that is critical to reducing infant and under-five morbidity and mortality. It is also a key part of the struggle against antimicrobial resistance and can be used as a tool in health system strengthening;\nEquity is a key component of the strategies outlined by the Global Vaccine Action Plan 2011-2020 and the UN 2030 Agenda for Sustainable Development;\nThe Global Vaccine Action Plan 2011-2020 called for all countries to reach and sustain 90% national coverage and 80% coverage in all districts for the third dose of diphtheria, tetanus, and pertussis vaccine (DTP3) by 2015, and 90% national coverage and 80% coverage in all districts for all vaccines in national immunization schedules by 2020.", "As yet, no countries in the African Region are meeting the 2020 coverage targets;\nDuring 2013-2015, eleven African Region countries achieved national DTP1 and DTP3 coverage ≥90%, DTP1–3 dropout ≤10% and MCV1 coverage ≥90% for each of the three years; of these 11 countries, only Rwanda, a low-income country, achieved ≥80% DTP3 coverage in all districts and ≥90% coverage for Hib3, PCV3, RCV1 and Rota-last in 2015;\nDespite improvements in immunization coverage in the African Region since 2000, more progress is needed in lower middle-income and upper middle-income countries as well as in low-income countries.", "The authors declare no competing interests." ]
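As a quick cross-check of the country proportions quoted in the vaccine-introduction text above (the 47-country denominator comes from the article; the snippet itself is only illustrative):

reached_90 = {"Hib3": 16, "PCV3": 6, "Rota-last": 6, "RCV1": 5}
n_countries = 47  # WHO African Region member states

for vaccine, n in reached_90.items():
    # Rounding to whole percent reproduces the proportions quoted above
    print(f"{vaccine}: {n}/{n_countries} = {100 * n / n_countries:.0f}%")
# Hib3: 16/47 = 34%, PCV3: 6/47 = 13%, Rota-last: 6/47 = 13%, RCV1: 5/47 = 11%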
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Data sources", "Definitions", "Data analysis", "Results", "DTP3 and Pol3 coverage trends", "MCV1 and MCV2 coverage trends", "Analysis by income category and Gavi-eligibility status", "Positive deviance analysis", "Vaccine introductions", "Discussion", "Conclusion", "What is known about this topic", "What this study adds", "Competing interests" ]
[ "In 2012, the World Health Assembly endorsed the Global Vaccine Action Plan 2011-2020 (GVAP), which calls on all countries to reach and sustain 90% national coverage and 80% coverage in all districts for the third dose of diphtheria-tetanus-pertussis (DTP3) containing vaccine by 2015 and 90% national coverage and 80% coverage in all districts for all vaccines included in national immunization schedules by 2020 [1]. The GVAP also calls for a focus on equity in vaccination coverage through reducing the gap in coverage between low-income countries and high-income countries, and by reducing pockets of low sub-national vaccination coverage. Within the WHO African Region (AFR), a majority of countries have historically been classified by the World Bank as low-income [2], and immunization services in many countries usually operate within a relatively resource-constrained environment compared to high-income countries. Most of the 47 African Region countries are the focus of heavy investments from foreign donors, including Gavi, the Vaccine Alliance, which has disbursed at least US$6 billion during 2000-2016 for vaccine introductions and health system strengthening activities in 37African Region countries [3, 4].\nNo published analyses have described recent trends in vaccination coverage, equity and vaccine introductions among African Region countries in light of the substantial investments made in these countries over the past 16 years. Examining how these trends differ by country income status and by Gavi-eligibility status may provide useful insight into gaps to be addressed in future work by the countries and their external partners. Additionally, as global partners such as Gavi, WHO and UNICEF look to further support countries in reaching GVAP goals, identifying countries which have outperformed their peers may also yield useful lessons learned for strengthening immunization systems within resource-constrained settings.\nThe aims of this study are to analyze recent trends in national and subnational vaccination coverage in the African Region, to assess how these trends differ by country income category and Gavi-eligibility, to review the coverage achieved with recently introduced vaccines in African Region countries and to identify low-income African Region countries which outperform their peers in reaching and sustaining high levels of vaccination coverage.", " Data sources Global, regional and national vaccination coverage estimates were obtained from the WHO/UNICEF estimates of national immunization coverage (WUENIC) released in July 2016 [5, 6]. These data provide coverage estimates for multiple routine vaccinations, including third dose of DTP-containing vaccine (DTP3), third dose of Haemophilus influenza type b vaccine (Hib3), third dose of polio vaccine (Pol3), measles-containing vaccine first dose (MCV1) and second dose (MCV2), third dose of pneumococcal conjugate vaccine (PCV3), first dose of rubella-containing vaccine (RCV1), and last dose of rotavirus vaccine (Rota-last) during 1980-2015. Country-reported estimates of the proportion of districts that have reached 80% vaccination coverage in a given year were obtained from annual country-reported data collected through the WHO/UNICEF Joint Reporting Form on Immunization (JRF) process and made available for public use [7]. 
Information on vaccine introductions was obtained from data made publicly available by WHO [8].\nNational estimates for the number of surviving infants were obtained from the United Nations Population Division (UNPD), 2015 revision [9]. National estimates of gross national income per capita (GNI per capita) and categorization of countries by national income level from 1990 to 2015 were obtained from the World Bank [10]. Information on whether a country was eligible for Gavi funding was obtained from sections of the Gavi website on countries’ eligibility for and transitions from Gavi support [11, 12].\n Definitions Country income categories were defined using World Bank historical criteria for low-income, lower middle-income, upper middle-income, and high-income countries within the given year of analysis [2]. The World Bank uses GNI per capita, calculated using the World Bank Atlas method, to categorize countries by income level. For instance, in 2015, low-income countries were defined as those with a GNI per capita of $1,025 or less, lower middle-income countries were those with a GNI per capita between $1,026 and $4,035, upper middle-income countries were those with a GNI per capita between $4,036 and $12,475, and high-income countries were those with a GNI per capita of $12,476 or more.\nGavi-eligible countries are defined as those that were eligible (GNI per capita ≤US$1,000) for Gavi funding in Phase 1, commencing in 2000. This group includes 37 countries in AFR. Despite changes in the criteria for countries’ eligibility for Gavi support, 35 countries in the African Region were still eligible for Gavi support in 2015 [11].\nVaccination coverage is defined as the proportion of surviving infants in a given year that received the vaccine. DTP1–3 dropout is defined as the proportion of infants who received the first dose of DTP vaccine (DTP1) but did not receive the third dose of DTP vaccine (DTP3).
According to WHO Immunization in Practice guidelines, a dropout rate of 10% or higher indicates that challenges may exist with ensuring that children who start a country’s recommended schedule are able to receive all doses of recommended vaccination series [13,14].\n Data analysis For several vaccines and numbers of doses (DTP3, Pol3, MCV1, MCV2), we evaluated trends in African Region coverage compared to global-level coverage between 2000 and 2015. We compared national vaccination coverage estimates, the proportion of districts in a country reported to reach ≥80% DTP3 coverage, and the number of unvaccinated children for DTP3 across African Region countries for 2015, as well as the trend in subnational coverage from 2010 to 2015. We calculated the number of unvaccinated children in each country by using the UNPD estimates for surviving infants in 2015 and the WUENIC 2015 DTP3 estimates. We identified the African Region countries that had achieved ≥90% DTP3 coverage in 2015, and grouped the remaining countries according to level of DTP3 coverage achieved (80%–89%, 70%–79% and <70%). We used the 2010 and 2015 estimates to calculate the relative change in national DTP3 coverage over the five-year period and compared the relative change in coverage across the Region.\nUsing the UNPD estimates for surviving infants [9], we calculated weighted average vaccination coverage by income category and Gavi-eligibility status for the years 2000, 2005, 2010 and 2015 and compared the relative change by income category groups and Gavi-eligibility.\nWe identified African Region countries with particularly good performance using the following three criteria: 1) countries that maintained national DTP1 and DTP3 coverage ≥90% for the period 2013 to 2015, 2) countries that maintained DTP1-3 dropout ≤10% for the period 2013 to 2015 and 3) countries that maintained MCV1 coverage ≥90% for the period 2013 to 2015.
Countries that met all three criteria were defined as positive deviants.\nFinally, we reviewed the African Region’s performance regarding the introduction of and coverage with selected vaccines that have become available since 1980 or remain underused. We compared the African Region’s coverage with each of these vaccines to that of the other five WHO Regions and to global coverage.", "Global, regional and national vaccination coverage estimates were obtained from the WHO/UNICEF estimates of national immunization coverage (WUENIC) released in July 2016 [5, 6]. These data provide coverage estimates for multiple routine vaccinations, including third dose of DTP-containing vaccine (DTP3), third dose of Haemophilus influenzae type b vaccine (Hib3), third dose of polio vaccine (Pol3), measles-containing vaccine first dose (MCV1) and second dose (MCV2), third dose of pneumococcal conjugate vaccine (PCV3), first dose of rubella-containing vaccine (RCV1), and last dose of rotavirus vaccine (Rota-last) during 1980-2015. Country-reported estimates of the proportion of districts that have reached 80% vaccination coverage in a given year were obtained from annual country-reported data collected through the WHO/UNICEF Joint Reporting Form on Immunization (JRF) process and made available for public use [7].
Information on vaccine introductions was obtained from data made publicly available by WHO [8].\nNational estimates for the number of surviving infants were obtained from the United Nations Population Division (UNPD), 2015 revision [9]. National estimates of gross national income per capita (GNI per capita) and categorization of countries by national income level from 1990 to 2015 were obtained from the World Bank [10]. Information on whether a country was eligible for Gavi funding was obtained from sections of the Gavi website on countries’ eligibility for and transitions from Gavi support [11, 12].", "Country income categories were defined using World Bank historical criteria for low-income, lower middle-income, upper middle-income, and high-income countries within the given year of analysis [2]. The World Bank uses GNI per capita, calculated using the World Bank Atlas method, to categorize countries by income level. For instance, in 2015, low-income countries were defined as those with a GNI per capita of $1,025 or less, lower middle-income countries were those with a GNI per capita between $1,026 and $4,035, upper middle-income countries were those with a GNI per capita between $4,036 and $12,475, and high-income countries were those with a GNI per capita of $12,476 or more.\nGavi-eligible countries are defined as those that were eligible (GNI per capita ≤US$1,000) for Gavi funding in Phase 1, commencing in 2000. This group includes 37 countries in AFR. Despite changes in the criteria for countries’ eligibility for Gavi support, 35 countries in the African Region were still eligible for Gavi support in 2015 [11].\nVaccination coverage is defined as the proportion of surviving infants in a given year that received the vaccine. DTP1–3 dropout is defined as the proportion of infants who received the first dose of DTP vaccine (DTP1) but did not receive the third dose of DTP vaccine (DTP3). According to WHO Immunization in Practice guidelines, a dropout rate of 10% or higher indicates that challenges may exist with ensuring that children who start a country’s recommended schedule are able to receive all doses of recommended vaccination series [13,14].", "For several vaccines and numbers of doses (DTP3, Pol3, MCV1, MCV2), we evaluated trends in African Region coverage compared to global-level coverage between 2000 and 2015. We compared national vaccination coverage estimates, the proportion of districts in a country reported to reach ≥80% DTP3 coverage, and the number of unvaccinated children for DTP3 across African Region countries for 2015, as well as the trend in subnational coverage from 2010 to 2015. We calculated the number of unvaccinated children in each country by using the UNPD estimates for surviving infants in 2015 and the WUENIC 2015 DTP3 estimates. We identified the African Region countries that had achieved ≥90% DTP3 coverage in 2015, and grouped the remaining countries according to level of DTP3 coverage achieved (80%–89%, 70%–79% and <70%).
We used the 2010 and 2015 estimates to calculate the relative change in national DTP3 coverage over the five-year period and compared the relative change in coverage across the Region.\nUsing the UNPD estimates for surviving infants [9], we calculated weighted average vaccination coverage by income category and Gavi-eligibility status for the years 2000, 2005, 2010 and 2015 and compared the relative change by income category groups and Gavi-eligibility.\nWe identified African Region countries with particularly good performance using the following three criteria: 1) countries that maintained national DTP1 and DTP3 coverage ≥90% for the period 2013 to 2015, 2) countries that maintained DTP1-3 dropout ≤10% for the period 2013 to 2015 and 3) countries that maintained MCV1 coverage ≥90% for the period 2013 to 2015. Countries that met all three criteria were defined as positive deviants.\nFinally, we reviewed the African Region’s performance regarding the introduction of and coverage with selected vaccines that have become available since 1980 or remain underused. We compared the African Region’s coverage with each of these vaccines to that of the other five WHO Regions and to global coverage.", " DTP3 and Pol3 coverage trends In 2015, regional DTP3 coverage in the African Region reached 76% from a 2000 level of 52%. In comparison, global DTP3 coverage increased from 72% to 86% over the same period (Figure 1). In 2015, national DTP3 coverage for African Region countries ranged from 16% (Equatorial Guinea) to 98% (Rwanda and Tanzania), with 16 (34%) of all 47 African Region countries achieving ≥90% national DTP3 coverage and 6 (13%) reaching ≥80% DTP3 coverage in every district. Thirteen of the countries achieving ≥90% national DTP3 coverage had also done so in 2014 and 2013. Of the 31 African Region countries that did not achieve ≥90% national DTP3 coverage in 2015, 16 (34%) had coverage of 80%-89%, 3 (6%) had coverage of 70%-79% and 12 (26%) had coverage of <70%. The 2015 African Region coverage of 76% resulted in 8.1 million children unvaccinated with DTP3 (42% of the global total in 2015); nearly 3.9 million lived in three African Region countries: Nigeria (2.9 million), Democratic Republic of the Congo (0.6 million) and Ethiopia (0.4 million). In 2000, 11.7 million were unvaccinated with DTP3.\nRoutine immunization coverage globally and in the African Region, 2000-2015\nFrom 2010 to 2015, the relative change in national DTP3 coverage varied between -28% and 25% across countries. Eighteen African Region countries experienced a reduction in DTP3 coverage from 2010 to 2015; of these, 4 countries (Equatorial Guinea, Liberia, Angola and Guinea) had a greater than 10% reduction in DTP3 coverage from 2010 to 2015. Over the same period, the African Region Pol3 coverage showed similar patterns as DTP3 coverage (Figure 1). The African Region Pol3 coverage minimally increased, moving from 74% in 2010 to 76% in 2015, with 2015 Pol3 national coverages ranging between 17% (Equatorial Guinea) and 98% (Rwanda and Tanzania).
 MCV1 and MCV2 coverage trends Estimated MCV1 coverage in the African Region increased from 53% in 2000 to 73% by 2010 and was 74% in 2015 (Figure 1). In comparison, global MCV1 coverage increased from 72% in 2000 to 85% in 2010 and was still 85% in 2015. In 2015, national MCV1 coverage for African Region countries ranged between 20% and 99%, with 12 (26%) countries reaching ≥90% national MCV1 coverage. From 2010 to 2015, 20 African Region countries experienced a decrease in MCV1 coverage, 23 experienced an increase, and 4 had no change. Five countries (Swaziland, Kenya, Equatorial Guinea, Eritrea, and Angola) had a greater than 10% decrease in MCV1 coverage from 2010 to 2015. MCV2 coverage in the African Region (including countries that had not yet introduced MCV2) increased from 5% in 2000 to 18% in 2015, with 23 (49%) African Region countries including MCV2 in their 2015 immunization schedule compared to 5 (11%) in 2000. In comparison, 83% of countries globally had introduced MCV2 by 2015, compared to 46% in 2000, and global MCV2 coverage was 61% in 2015, compared to 15% in 2000.
 Analysis by income category and Gavi-eligibility status When classified by country income category in 2015, 26 African Region countries were low-income, 12 were lower middle-income, 8 were upper middle-income and 1 was high-income; these numbers changed from 36, 5, 5, and 0, respectively, in 2000 (Table 1).\nIncome category of each country in the World Health Organization African Region by year, 2000, 2005, 2010 and 2015\nShaded cells indicate where country changed income category from previous 5-year time point. Source: World Bank analytical categorizations using country gross national income per capita for given year\nThe population-weighted average national DTP3 coverage across low-income African Region countries increased from 50% in 2000 to 80% in 2015 (average annual change, 2.0%), with a lower annual change during 2010-2015, when coverage increased from 74% to 80% (average annual change, 1.2%), than during 2000-2010, when coverage increased from 50% to 74% (average annual change, 2.4%) (Figure 2 A). In lower middle-income countries, the average DTP3 coverage decreased from 84% in 2000 to 69% in 2015 (average annual change, -1.0%), though there was a slight increase from 67% to 69% during 2010-2015. The decrease in lower middle-income countries’ average DTP3 coverage coincided with Nigeria being reclassified as a lower middle-income country instead of a low-income country. Nigeria’s 2015 DTP3 coverage of 56% was below the 87% average for the other lower middle-income countries. For lower middle-income countries excluding Nigeria, DTP3 coverage was 84% in 2000, 68% in 2005, 84% in 2010, and 87% in 2015. In comparison, for low-income countries excluding Nigeria, DTP3 coverage increased from 55% in 2000 to 69% in 2005, 74% in 2010 and 80% in 2015.\nImmunization program performance (MCV1 coverage, DTP3 coverage and DTP1–3 dropout rate) for countries in the African Region 1, by country income category 2 and Gavi-eligibility status 3, 2000–2015\nAmong upper middle-income countries, a small annual change (average annual change, 0.2%) was observed during 2000-2015 as DTP3 coverage increased from 73% to 76%; however, coverage reached a high of 79% in 2010 before decreasing to 76% by 2015. The average national DTP3 coverage increased from 77% to 81% between 2000 and 2015 (average annual change, 0.3%) for Gavi-ineligible countries and from 49% to 76% (average annual change, 1.8%) for Gavi-eligible countries (Figure 2 B).\nAfrican Region MCV1 coverage by income and Gavi-eligibility category exhibited similar trends as DTP3 coverage during the 2000-2015 period (Figure 2 C,D). In low-income African Region countries, MCV1 coverage increased from 57% to 79% (average annual change, 1.6%) during 2000-2015, while in lower middle-income countries, coverage was 82% in 2000 and 65% in 2015 (average annual change, -1.1%). For lower middle-income countries without Nigeria, MCV1 coverage was 82% in 2000 and 60% in 2005, then reached 81% in 2010 and 80% in 2015 (average annual change, -0.1%). In upper middle-income countries, coverage was 65% in 2000 and reached 75% in 2015 (average annual change, 0.7%). For upper middle-income countries, the largest change in MCV1 coverage occurred during 2005-2010 (64% to 83%).
Between 2000 and 2015, MCV1 coverage rose from 60% to 73% in Gavi-eligible countries (average annual change, 0.9%) and from 72% to 77% (average annual change, 0.33%) in Gavi-ineligible countries. The MCV1 coverage gap between Gavi-eligible and Gavi-ineligible African Region countries decreased from 12% to 4% during 2000 to 2015.\nThe average DTP1–3 dropout rate was 24% in 2000 and 9% in 2015 among African Region countries that were low-income in those years, 11% in 2000 and 15% in 2015 in countries that were lower middle-income in those years, and 14% in 2000 and 9% in 2015 among African Region countries that were upper middle-income in those years (Figure 2 E). The average DTP1–3 dropout rate decreased from 13% to 5% from 2000 to 2015 for Gavi-ineligible countries and from 24% to 11% for Gavi-eligible countries (Figure 2 F).
 Positive deviance analysis During 2013-2015, eleven African Region countries met all three positive deviance analysis thresholds: national DTP1 and DTP3 coverage ≥90%, DTP1–3 dropout <10% and MCV1 coverage ≥90% for each of the last three years. Of these, four were low-income countries in 2015 (Rwanda, Burundi, The Gambia and the United Republic of Tanzania), three were lower middle-income (Lesotho, Sao Tome and Principe and Cabo Verde), three were upper middle-income countries (Algeria, Botswana, and Mauritius) and one was a high-income country (Seychelles). Of these 11 countries, The Gambia, Mauritius, Rwanda and Sao Tome and Principe also achieved ≥80% DTP3 coverage in 100% of districts.\nTwo additional countries in the African Region achieved national DTP1 and DTP3 coverage ≥90% and DTP1–3 dropout <10% for the last three years, but did not reach the positive deviance threshold of MCV1 coverage ≥90% (Eritrea and Swaziland).\n Vaccine introductions Several vaccines have been added to the WHO recommended schedule since the inception of the Expanded Programme on Immunization (EPI) in 1974.
By 2015, 23 (49%) African Region countries had introduced MCV2, while 37 (79%) had introduced PCV and 29 (62%) had introduced rotavirus vaccine. Regional coverage for Hib3, PCV3, RCV1 and Rota-last increased in the African Region between 2010 and 2015 (Table 2), as 33 African Region countries introduced PCV, 27 introduced rotavirus vaccine, 7 introduced Hib vaccine, and 6 introduced RCV1. African Region PCV3 coverage reached 59% in 2015, compared to global coverage of 37%. Among the six WHO Regions, the African Region ranked fourth in 2015 for Hib3 coverage and second for both PCV3 and Rota-last coverage (data not shown). However, African Region RCV1 coverage, at 12%, was the lowest among all regions. In the African Region, the number and proportion of countries reaching ≥90% national coverage in 2015 with Hib3, PCV3, Rota-last and RCV1 were 16 (34%), 6 (13%), 6 (13%) and 5 (11%), respectively. Of the four positive deviant low-income countries (as of 2015), all achieved ≥90% coverage with Hib3, PCV3, and Rota-last in 2015; Tanzania and Rwanda also achieved ≥90% coverage with RCV1 in 2015, while Burundi and The Gambia had yet to introduce RCV1.\nAfrican region and global coverage of recently introduced vaccines, 2010 and 2015\nDefinitions: Rota=final dose of rotavirus vaccine; Hib3=third dose of Haemophilus influenzae type b vaccine; PCV3=third dose of pneumococcal conjugate vaccine; RCV1=first dose of rubella-containing vaccine", "In 2015, regional DTP3 coverage in the African Region reached 76% from a 2000 level of 52%. In comparison, global DTP3 coverage increased from 72% to 86% over the same period (Figure 1). In 2015, national DTP3 coverage for African Region countries ranged from 16% (Equatorial Guinea) to 98% (Rwanda and Tanzania), with 16 (34%) of all 47 African Region countries achieving ≥90% national DTP3 coverage and 6 (13%) reaching ≥80% DTP3 coverage in every district.
Thirteen of the countries achieving ≥90% national DTP3 coverage had also done so in 2014 and 2013. Of the 31 African Region countries that did not achieve ≥90% national DTP3 coverage in 2015, 16 (34%) had coverage of 80%-89%, 3 (6%) had coverage of 70%-79% and 12 (26%) had coverage of <70%. The 2015 African Region coverage of 76% resulted in 8.1 million children unvaccinated with DTP3 (42% of the global total in 2015); nearly 3.9 million lived in three African Region countries: Nigeria (2.9 million), Democratic Republic of the Congo (0.6 million) and Ethiopia (0.4 million). In 2000, 11.7 million were unvaccinated with DTP3.\nRoutine immunization coverage globally and in the African Region, 2000-2015\nFrom 2010 to 2015, the relative change in national DTP3 coverage varied between -28% and 25% across countries. Eighteen African Region countries experienced a reduction in DTP3 coverage from 2010 to 2015; of these, 4 countries (Equatorial Guinea, Liberia, Angola and Guinea) had a greater than 10% reduction in DTP3 coverage from 2010 to 2015. Over the same period, the African Region Pol3 coverage showed similar patterns as DTP3 coverage (Figure 1). The African Region Pol3 coverage minimally increased, moving from 74% in 2010 to 76% in 2015, with 2015 Pol3 national coverages ranging between 17% (Equatorial Guinea) and 98% (Rwanda and Tanzania).", "Estimated MCV1 coverage in the African Region increased from 53% in 2000 to 73% by 2010 and was 74% in 2015 (Figure 1). In comparison, global MCV1 coverage increased from 72% in 2000 to 85% in 2010 and was still 85% in 2015. In 2015, national MCV1 coverage for African Region countries ranged between 20% and 99%, with 12 (26%) countries reaching ≥90% national MCV1 coverage. From 2010 to 2015, 20 African Region countries experienced a decrease in MCV1 coverage, 23 experienced an increase, and 4 had no change. Five countries (Swaziland, Kenya, Equatorial Guinea, Eritrea, and Angola) had a greater than 10% decrease in MCV1 coverage from 2010 to 2015. MCV2 coverage in the African Region (including countries that had not yet introduced MCV2) increased from 5% in 2000 to 18% in 2015, with 23 (49%) African Region countries including MCV2 in their 2015 immunization schedule compared to 5 (11%) in 2000. In comparison, 83% of countries globally had introduced MCV2 by 2015, compared to 46% in 2000, and global MCV2 coverage was 61% in 2015, compared to 15% in 2000.", "When classified by country income category in 2015, 26 African Region countries were low-income, 12 were lower middle-income, 8 were upper middle-income and 1 was high-income; these numbers changed from 36, 5, 5, and 0, respectively, in 2000 (Table 1).\nIncome category of each country in the World Health Organization African Region by year, 2000, 2005, 2010 and 2015\nShaded cells indicate where country changed income category from previous 5-year time point. Source: World Bank analytical categorizations using country gross national income per capita for given year\nThe population-weighted average national DTP3 coverage across low-income African Region countries increased from 50% in 2000 to 80% in 2015 (average annual change, 2.0%), with a lower annual change during 2010-2015, when coverage increased from 74% to 80% (average annual change, 1.2%), than during 2000-2010, when coverage increased from 50% to 74% (average annual change, 2.4%) (Figure 2 A).
In lower middle-income countries, the average DTP3 coverage decreased from 84% in 2000 to 69% in 2015 (average annual change, -1.0%), though there was a slight increase from 67% to 69% during 2010-2015. The decrease in lower middle-income countries’ average DTP3 coverage coincided with Nigeria being reclassified as a lower middle-income country instead of a low-income country. Nigeria’s 2015 DTP3 coverage of 56% was below the 87% average for the other lower middle-income countries. For lower middle-income countries excluding Nigeria, DTP3 coverage was 84% in 2000, 68% in 2005, 84% in 2010, and 87% in 2015. In comparison, for low-income countries excluding Nigeria, DTP3 coverage increased from 55% in 2000 to 69% in 2005, 74% in 2010 and 80% in 2015.\nImmunization program performance (MCV1 coverage, DTP3 coverage and DTP1–3 dropout rate) for countries in the African Region 1, by country income category 2 and Gavi-eligibility status 3, 2000–2015\nAmong upper middle-income countries, a small annual change (average annual change, 0.2%) was observed during 2000-2015 as DTP3 coverage increased from 73% to 76%; however, coverage reached a high of 79% in 2010 before decreasing to 76% by 2015. The average national DTP3 coverage increased from 77% to 81% between 2000 and 2015 (average annual change, 0.3%) for Gavi-ineligible countries and from 49% to 76% (average annual change, 1.8%) for Gavi-eligible countries (Figure 2 B).\nAfrican Region MCV1 coverage by income and Gavi-eligibility category exhibited similar trends to DTP3 coverage during the 2000-2015 period (Figure 2 C,D). In low-income African Region countries, MCV1 coverage increased from 57% to 79% (average annual change, 1.6 percentage points) during 2000-2015, while in lower middle-income countries, coverage was 82% in 2000 and 65% in 2015 (average annual change, -1.1%). For lower middle-income countries without Nigeria, MCV1 coverage was 82% in 2000 and 60% in 2005, then reached 81% in 2010 and 80% in 2015 (average annual change, -0.1%). In upper middle-income countries, coverage was 65% in 2000 and reached 75% in 2015 (average annual change, 0.7%). For upper middle-income countries, the largest change in MCV1 coverage occurred during 2005-2010 (64% to 83%). Between 2000 and 2015, MCV1 coverage rose from 60% to 73% in Gavi-eligible countries (average annual change, 0.9%) and from 72% to 77% (average annual change, 0.33%) in Gavi-ineligible countries. The MCV1 coverage gap between Gavi-eligible and Gavi-ineligible African Region countries decreased from 12% to 4% during 2000-2015.\nThe average DTP1–3 dropout rate was 24% in 2000 and 9% in 2015 among African Region countries that were low-income in those years, 11% in 2000 and 15% in 2015 in countries that were lower middle-income in those years, and 14% in 2000 and 9% in 2015 among African Region countries that were upper middle-income in those years (Figure 2 E). The average DTP1–3 dropout rate decreased from 13% to 5% from 2000 to 2015 for Gavi-ineligible countries and from 24% to 11% for Gavi-eligible countries (Figure 2 F).", "During 2013-2015, eleven African Region countries met all three positive deviance analysis thresholds: national DTP1 and DTP3 coverage ≥90%, DTP1–3 dropout <10% and MCV1 coverage ≥90% for each of the last three years. 
Of these, four were low-income countries in 2015 (Rwanda, Burundi, The Gambia and the United Republic of Tanzania), three were lower middle-income (Lesotho, Sao Tome and Principe and Cabo Verde), three were upper middle-income countries (Algeria, Botswana, and Mauritius) and one was a high-income country (Seychelles). Of these 11 countries, The Gambia, Mauritius, Rwanda and Sao Tome and Principe also achieved ≥80% DTP3 coverage in 100% of districts.\nTwo additional countries in the African Region achieved national DTP1 and DTP3 coverage ≥90% and DTP1–3 dropout <10% for the last three years, but did not reach the positive deviance threshold of MCV1 coverage ≥90% (Eritrea and Swaziland).", "Several vaccines have been added to the WHO recommended schedule since the inception of the Expanded Programme on Immunization (EPI) in 1974. By 2015, 23 (49%) African Region countries had introduced MCV2, while 37 (79%) had introduced PCV and 29 (62%) had introduced rotavirus vaccine. Regional coverage for Hib3, PCV3, RCV1 and Rota-last increased in the African Region between 2010 and 2015 (Table 2), as 33 African Region countries introduced PCV, 27 introduced rotavirus vaccine, 7 introduced Hib, and 6 introduced RCV1. African Region PCV3 coverage reached 59% in 2015, compared to global coverage of 37%. Among the six WHO Regions, the African Region ranked fourth in 2015 for Hib3 coverage and second for both PCV3 and Rota-last coverage (data not shown). However, African Region RCV1 coverage was the lowest among all regions, at 12%. In the African Region, the number and proportion of countries reaching 90% national coverage in 2015 with Hib3, PCV3, Rota-last and RCV1 vaccines were 16 (34%), 6 (13%), 6 (13%) and 5 (11%), respectively. Of the four positive deviant, low-income countries (as of 2015), all achieved ≥90% coverage with Hib3, PCV3, and Rota-last in 2015; Tanzania and Rwanda also achieved ≥90% coverage with RCV1 in 2015, while Burundi and The Gambia had yet to introduce RCV1.\nAfrican Region and global coverage of recently introduced vaccines, 2010 and 2015\nDefinitions: Rota=final dose of rotavirus vaccine; Hib3=third dose of Haemophilus influenzae type b vaccine; PCV3=third dose of pneumococcal conjugate vaccine; RCV1=first dose of rubella-containing vaccine", "In this report, we provide an update on the African Region’s progress and challenges in reaching GVAP vaccination coverage goals intended to help extend the benefits of immunization equitably to all people. Over the past 15 years, African Region vaccination coverage increases have outpaced global coverage increases, with a 24% increase in DTP3 coverage and a 21% increase in MCV1 coverage, while the number of children unvaccinated with DTP3 in the Region decreased to a low of 8.1 million in 2015 from 11.8 million in 2000. However, since 2010 MCV1 and DTP3 vaccination coverage has improved little in the African Region, with lower middle-income countries having the lowest DTP3 coverage among country income categories in 2015, largely due to Nigeria’s DTP3 coverage. Additionally, within-country geographic inequities are substantial, with only 13% of countries achieving ≥80% DTP3 coverage in every district in 2015. 
Despite the large increases in DTP3 and MCV1 regional coverage during 2000-2015, the annual rate of change has slowed substantially during 2010-2015, indicating a great risk of not reaching GVAP goals if the pattern continues.\nIn this African Region analysis, we identified inequities in vaccination coverage among country income groups, between Gavi-eligibility groups, and at the national and subnational levels. In particular, we found that lower middle-income African Region countries had lower DTP3 coverage than low-income or upper middle-income African Region countries. The lower middle-income countries also had lower MCV1 coverage and higher DTP1-3 dropout rates than low-income or upper middle-income African Region countries in 2015. We also found that despite considerable progress in narrowing the gap between Gavi-eligible and Gavi-ineligible countries across program performance indicators, a gap still exists. The continued existence of this gap indicates the need for sustained, long-term investment in strengthening immunization systems. Ensuring adequate funding for national immunization programs across the African Region is particularly critical at this time, as funds from the Global Polio Eradication Initiative are beginning to subside and increasing numbers of middle-income countries are likely to reach the transition phase out of Gavi support in the coming years.\nDespite overall substantial improvement in coverage since 2000, we noted concerning temporal trends, as 43% and 38% of African Region countries had a decrease in MCV1 and DTP3 coverage, respectively, during 2010-2015. The slowed rate of increase in DTP3 coverage among African Region countries as a whole during 2010-2015 (0.8% average annual change) compared to 2000-2010 (2% average annual change) is similar to patterns seen in other regions [15]. This pattern suggests incremental improvements in coverage may be easier at low levels of coverage than at high ones [15], particularly because, as coverage rises, continued improvements may depend on changes in national infrastructure (e.g., roads, telecommunications), health sector infrastructure (e.g., facilities, supply chain), or health staff (e.g., education) that require considerable time and resources. In addition, these observations suggest that considerable patience and persistence may be required to reach high coverage targets. Understanding the amount of additional funding required to achieve progress at higher baseline levels of coverage, and how immunization programs can most efficiently utilize funds and other resources to improve their performance, is an important operational research area.\nOur findings about the geographic inequities in immunization coverage within countries highlight one of the many types of inequity within countries that lead to less than optimal immunization coverage at the country level [16]. Analyses of data from other sources such as coverage surveys and Demographic and Health Surveys have revealed a wide range of inequities at the subnational level, similar to the disparity of WHO/UNICEF coverage estimates at the national and Regional levels. Previous studies have shown coverage to be as much as 60% higher among urban children than rural children [17]. About one in three countries have substantial coverage disparities associated with income level, with at least a 20% gap for DTP and MCV coverage between the richest and the poorest quintiles [18]. 
The disparities in coverage vary between and within countries and WHO Regions [17], and inequities persist in a large majority of Gavi-eligible countries [19]. Over the last 10 years, there has been considerable progress in improving equity in immunization coverage, as the lowest wealth quintile and the least educated have seen the largest improvements in coverage. However, the lower middle and middle quintiles have benefited less from these improvements and warrant greater attention in the future [18].\nUsing our criteria for positive deviance, we identified four countries that were low-income in 2015 and three countries that were lower middle-income in 2015 that were outperforming their peers in reaching and sustaining high vaccination coverage. In 2015, of the positive deviant countries, Rwanda was the only low-income country to achieve ≥80% DTP3 coverage in every district as well as coverage ≥90% for the more recently introduced vaccines included in our analysis (Hib, PCV3, RCV1, and Rota). Nearly reaching this benchmark, Tanzania achieved ≥90% coverage for all the latter vaccines, and ≥80% DTP3 coverage in 92% of its districts. The success in Rwanda has been attributed to several key factors, including country ownership and high-level commitment, engagement with village-based community health workers, recognition of immunization as a gateway to access other integrated primary healthcare services, and the creation of a short message service (SMS) text message system for registering pregnancies and births [20, 21]. Rwanda’s commitment to immunization is reflected in its early introduction of new vaccines, which has also given the country more time to increase coverage with those vaccines compared to countries that have introduced them recently.\nTo achieve immunization and equity goals, the needs and reasons for success of low- and middle-income WHO Regions, countries, districts, and individuals will have to be better understood and addressed [22]. The experience from low-income countries that are making progress in line with GVAP goals, such as Rwanda, should be widely shared and applied. Further context-specific quantitative and qualitative research and tools are required to map gaps in coverage and immunity over time, explore why inequities persist, and determine how best to address them. In the African Region, supporting the evolution of the Reach Every District (RED) strategy into a Reach Every Child (REC) strategy can help address subnational inequities in coverage [23]. A REC strategy can be facilitated by strong microplanning that incorporates fixed site and outreach sessions targeting underserved populations, such as mobile and border groups, non-citizens and undocumented migrants. Improving the implementation of birth registration may also prove beneficial to microplanning efforts, since less than half of the children in the African Region are registered at birth [24, 25]. Clear communications delivered by multiple messengers to stimulate community demand for immunization in all population groups are critical to ensuring that caregivers understand the value of vaccination as well as the risks, and see it as a human right [24] and a community responsibility. Establishing strengthened equity-oriented health information systems, including the improved use of mobile technologies, with high-quality immunization data down to the individual level is an important prerequisite to ensuring that policy and practice can be appropriately formulated to achieve immunization coverage targets. 
For example, where the rate of improvement in coverage slows, prompt efforts are needed to address subnational pockets of low coverage.\nThis analysis has limitations. While WHO/UNICEF estimates of vaccination coverage are generally thought to best reflect true vaccination coverage, administrative sources have been shown both to overestimate and to underestimate coverage compared to coverage surveys [26]. Although administrative coverage estimates can be more timely and convenient, inaccuracies in either the numerator (number of doses administered) or denominator (census data and projections) can render them unreliable. Several examinations of equity have been performed using coverage survey data, which has the advantage that it is not reliant on the accuracy of administrative numerator, denominator or target population estimates. However, coverage surveys are costly and cannot provide the timely information required to guide immunization programs [27]. As the number of recommended vaccines increases and the use of home vaccination cards declines in some countries, the risk of bias in parental recall may increase [28]. Furthermore, because of the inherently imprecise nature of the coverage estimates, the data in this analysis were examined for every fifth year. An alternative approach would be to use the annual coverage estimates with a data smoothing technique.", "The described inequities in vaccination coverage by region, country and at the subnational level are an important barrier to achieving higher coverage and disease control and elimination goals. Moving forward, successful immunization programs will need to ensure high-level political commitment to immunization, ownership and stewardship of the program, and the financial and human resource capacity to institute a wide range of innovative and cohesive strategies to reach every child and to stimulate demand for vaccination services in all communities. The February 2016 Addis Declaration on Immunization, in which African heads of state committed to increasing domestic resources for immunization and improving the effectiveness and efficiency of immunization programs, is a sign that the necessary political commitment exists [29]. The fulfillment of the pledges in the declaration will be essential to maximizing the benefits from immunization in the African Region.\n What is known about this topic Childhood immunization is a safe, effective and cost-effective intervention that is critical to reducing infant and under-five morbidity and mortality. It is also a key part of the struggle against antimicrobial resistance and can be used as a tool in health system strengthening;\nEquity is a key component of the strategies outlined by the Global Vaccine Action Plan 2011-2020 and the UN 2030 Agenda for Sustainable Development;\nThe Global Vaccine Action Plan 2011-2020 called for all countries to reach and sustain 90% national coverage and 80% coverage in all districts for the third dose of diphtheria, tetanus, and pertussis vaccine (DTP3) by 2015 and 90% national coverage and 80% coverage in all districts for all vaccines in national immunization schedules by 2020.
 What this study adds As yet, no countries in the African Region are meeting the 2020 coverage targets;\nDuring 2013-2015, eleven African Region countries achieved national DTP1 and DTP3 coverage ≥90%, DTP1–3 dropout ≤10% and MCV1 coverage ≥90% for each of the three years; of these 11 countries, only Rwanda, a low-income country, achieved 80% DTP3 coverage in all districts and ≥90% coverage for Hib, PCV3, RCV and Rota in 2015;\nDespite improvements in immunization coverage in the African Region since 2000, more progress is needed in lower middle-income and upper middle-income countries as well as in low-income countries.", "Childhood immunization is a safe, effective and cost-effective intervention that is critical to reducing infant and under-five morbidity and mortality. It is also a key part of the struggle against antimicrobial resistance and can be used as a tool in health system strengthening;\nEquity is a key component of the strategies outlined by the Global Vaccine Action Plan 2011-2020 and the UN 2030 Agenda for Sustainable Development;\nThe Global Vaccine Action Plan 2011-2020 called for all countries to reach and sustain 90% national coverage and 80% coverage in all districts for the third dose of diphtheria, tetanus, and pertussis vaccine (DTP3) by 2015 and 90% national coverage and 80% coverage in all districts for all vaccines in national immunization schedules by 2020.", "As yet, no countries in the African Region are meeting the 2020 coverage targets;\nDuring 2013-2015, eleven African Region countries achieved national DTP1 and DTP3 coverage ≥90%, DTP1–3 dropout ≤10% and MCV1 coverage ≥90% for each of the three years; of these 11 countries, only Rwanda, a low-income country, achieved 80% DTP3 coverage in all districts and ≥90% coverage for Hib, PCV3, RCV and Rota in 2015;\nDespite improvements in immunization coverage in the African Region since 2000, more progress is needed in lower middle-income and upper middle-income countries as well as in low-income countries.", "The authors declare no competing interest." ]
[ "intro", "methods", null, null, null, "results", null, null, null, null, null, "discussion", "conclusion", null, null, null ]
[ "Vaccination", "immunization coverage", "World Health Organization African region" ]
Introduction: In 2012, the World Health Assembly endorsed the Global Vaccine Action Plan 2011-2020 (GVAP), which calls on all countries to reach and sustain 90% national coverage and 80% coverage in all districts for the third dose of diphtheria-tetanus-pertussis (DTP3) containing vaccine by 2015 and 90% national coverage and 80% coverage in all districts for all vaccines included in national immunization schedules by 2020 [1]. The GVAP also calls for a focus on equity in vaccination coverage through reducing the gap in coverage between low-income countries and high-income countries, and by reducing pockets of low sub-national vaccination coverage. Within the WHO African Region (AFR), a majority of countries have historically been classified by the World Bank as low-income [2], and immunization services in many countries usually operate within a relatively resource-constrained environment compared to high-income countries. Most of the 47 African Region countries are the focus of heavy investments from foreign donors, including Gavi, the Vaccine Alliance, which has disbursed at least US$6 billion during 2000-2016 for vaccine introductions and health system strengthening activities in 37 African Region countries [3, 4]. No published analyses have described recent trends in vaccination coverage, equity and vaccine introductions among African Region countries in light of the substantial investments made in these countries over the past 16 years. Examining how these trends differ by country income status and by Gavi-eligibility status may provide useful insight into gaps to be addressed in future work by the countries and their external partners. Additionally, as global partners such as Gavi, WHO and UNICEF look to further support countries in reaching GVAP goals, identifying countries which have outperformed their peers may also yield useful lessons learned for strengthening immunization systems within resource-constrained settings. The aims of this study are to analyze recent trends in national and subnational vaccination coverage in the African Region, to assess how these trends differ by country income category and Gavi-eligibility, to review the coverage achieved with recently introduced vaccines in African Region countries and to identify low-income African Region countries which outperform their peers in reaching and sustaining high levels of vaccination coverage. Methods: Data sources Global, regional and national vaccination coverage estimates were obtained from the WHO/UNICEF estimates of national immunization coverage (WUENIC) released in July 2016 [5, 6]. These data provide coverage estimates for multiple routine vaccinations, including third dose of DTP-containing vaccine (DTP3), third dose of Haemophilus influenzae type b vaccine (Hib3), third dose of polio vaccine (Pol3), measles-containing vaccine first dose (MCV1) and second dose (MCV2), third dose of pneumococcal conjugate vaccine (PCV3), first dose of rubella-containing vaccine (RCV1), and last dose of rotavirus vaccine (Rota-last) during 1980-2015. Country-reported estimates of the proportion of districts that have reached 80% vaccination coverage in a given year were obtained from annual country-reported data collected through the WHO/UNICEF Joint Reporting Form on Immunization (JRF) process and made available for public use [7]. Information on vaccine introductions was obtained from data made publicly available by WHO [8]. 
National estimates for the number of surviving infants were obtained from the United Nations Population Division (UNPD), 2015 revision [9]. National estimates of gross national income per capita (GNI per capita) and categorization of countries by national income level from 1990 to 2015 were obtained from the World Bank [10]. Information on whether a country was eligible for Gavi funding was obtained from sections of the Gavi website on countries’ eligibility for and transitions from Gavi support [11, 12]. Definitions Country income categories were defined using World Bank historical criteria for low-income, lower middle-income, upper middle-income, and high-income countries within the given year of analysis [2]. The World Bank uses GNI per capita, calculated using the World Bank Atlas method, to categorize countries by income level. For instance, in 2015, low-income countries were defined as those with a GNI per capita of $1,025 or less, lower middle-income countries were those with a GNI per capita between $1,026 and $4,035, upper middle-income countries were those with a GNI per capita between $4,036 and $12,475, and high-income countries were those with a GNI per capita of $12,476 or more. Gavi-eligible countries are defined as those that were eligible (GNI per capita ≤US$1,000) for Gavi funding in Phase 1, commencing in 2000. This group includes 37 countries in AFR. Despite changes in the criteria for countries’ eligibility for Gavi support, 35 countries in the African Region were still eligible for Gavi support in 2015 [11]. Vaccination coverage is defined as the proportion of surviving infants in a given year that received the vaccine. DTP1–3 dropout is defined as the proportion of infants who received the first dose of DTP vaccine (DTP1) but did not receive the third dose of DTP vaccine (DTP3). 
According to WHO Immunization in Practice guidelines, a dropout rate of 10% or higher indicates that challenges may exist with ensuring that children who start a country’s recommended schedule are able to receive all doses of recommended vaccination series [13, 14]. Data analysis For several vaccines and numbers of doses (DTP3, Pol3, MCV1, MCV2), we evaluated trends in African Region coverage compared to global-level coverage between 2000 and 2015. We compared national vaccination coverage estimates, the proportion of districts in a country reported to reach ≥80% DTP3 coverage, and the number of unvaccinated children for DTP3 across African Region countries for 2015, as well as the trend in subnational coverage from 2010 to 2015. We calculated the number of unvaccinated children in each country by using the UNPD estimates for surviving infants in 2015 and the WUENIC 2015 DTP3 estimates. We identified the African Region countries that had achieved ≥90% DTP3 coverage in 2015, and grouped the remaining countries according to level of DTP3 coverage achieved (80%–89%, 70%–79% and <70%). We used the 2010 and 2015 estimates to calculate the relative change in national DTP3 coverage over the five-year period and compared the relative change in coverage across the Region. Using the UNPD estimates for surviving infants [9], we calculated weighted average vaccination coverage by income category and Gavi-eligibility status for the years 2000, 2005, 2010 and 2015 and compared the relative change by income category groups and Gavi-eligibility. We identified African Region countries with particularly good performance using the following three criteria: 1) countries that maintained national DTP1 and DTP3 coverage ≥90% for the period 2013 to 2015, 2) countries that maintained DTP1-3 dropout ≤10% for the period 2013 to 2015 and 3) countries that maintained MCV1 coverage ≥90% for the period 2013 to 2015. 
Countries meeting all three criteria were defined as positive deviants. Finally, we reviewed the African Region’s performance regarding the introduction of and coverage with selected vaccines that have become available since 1980 or remain underused. We compared the African Region’s coverage with each of these vaccines to that of the other five WHO Regions and to global coverage.
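Read operationally, the dropout definition and the three positive-deviance thresholds above amount to a boolean screen applied to each country for each of 2013-2015. A minimal Python/pandas sketch of that logic follows; the DataFrame layout and column names are illustrative assumptions, not the authors' actual pipeline:

```python
# Hypothetical layout: one row per country-year with WUENIC coverage
# percentages in columns "dtp1", "dtp3" and "mcv1".
import pandas as pd

def positive_deviants(df: pd.DataFrame, years=(2013, 2014, 2015)) -> list:
    recent = df[df["year"].isin(years)].copy()
    # DTP1-3 dropout: share of DTP1 recipients who never received DTP3.
    recent["dropout"] = (recent["dtp1"] - recent["dtp3"]) / recent["dtp1"] * 100
    meets = (
        (recent["dtp1"] >= 90)
        & (recent["dtp3"] >= 90)
        & (recent["dropout"] <= 10)
        & (recent["mcv1"] >= 90)
    )
    # A country qualifies only if every threshold holds in every year
    # (this sketch assumes all three years are present for each country).
    per_country = meets.groupby(recent["country"]).all()
    return sorted(per_country[per_country].index)
```

Applied to the WUENIC estimates, a screen of this shape is what yields the eleven positive deviant countries reported in the Results below.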
Results: DTP3 and Pol3 coverage trends In 2015, regional DTP3 coverage in the African Region reached 76%, up from a 2000 level of 52%. In comparison, global DTP3 coverage increased from 72% to 86% over the same period (Figure 1). In 2015, national DTP3 coverage for African Region countries ranged from 16% (Equatorial Guinea) to 98% (Rwanda and Tanzania), with 16 (34%) of all 47 African Region countries achieving ≥90% national DTP3 coverage and 6 (13%) reaching ≥80% DTP3 coverage in every district. Thirteen of the countries achieving ≥90% national DTP3 coverage had also done so in 2014 and 2013. Of the 31 African Region countries that did not achieve ≥90% national DTP3 coverage in 2015, 16 (34%) had coverage of 80%-89%, 3 (6%) had coverage of 70%-79% and 12 (26%) had coverage of <70%.
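The unvaccinated-children totals and the coverage bands used in this subsection come from two pieces of arithmetic: multiplying UNPD surviving-infant estimates by the share not covered, and bucketing national coverage. A sketch with hypothetical inputs (the figures below are placeholders, not the UNPD/WUENIC values):

```python
# Placeholder inputs; the real analysis would use UNPD surviving infants and
# WUENIC 2015 DTP3 coverage for all 47 African Region countries.
surviving_infants = {"CountryA": 6_500_000, "CountryB": 800_000}
dtp3_coverage_pct = {"CountryA": 56.0, "CountryB": 93.0}

def unvaccinated(infants: int, coverage_pct: float) -> float:
    # Unvaccinated children = surviving infants x (1 - coverage).
    return infants * (1.0 - coverage_pct / 100.0)

def coverage_band(coverage_pct: float) -> str:
    if coverage_pct >= 90.0:
        return ">=90%"
    if coverage_pct >= 80.0:
        return "80%-89%"
    if coverage_pct >= 70.0:
        return "70%-79%"
    return "<70%"

for country, infants in surviving_infants.items():
    cov = dtp3_coverage_pct[country]
    print(country, coverage_band(cov), round(unvaccinated(infants, cov)))
```

Summing the per-country products across the Region is what produces totals like the 8.1 million unvaccinated children reported next.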
The 2015 African Region coverage of 76% resulted in 8.1 million children unvaccinated with DTP3 (42% of the global total in 2015); nearly 3.9 million lived in three African Region countries: Nigeria (2.9 million), Democratic Republic of the Congo (0.6 million) and Ethiopia (0.4 million). In 2000, 11.7 million were unvaccinated with DTP3. Routine immunization coverage globally and in the African Region, 2000-2015 From 2010 to 2015, the relative change in national DTP3 coverage varied between -28% and 25% across countries. Eighteen African Region countries experienced a reduction in DTP3 coverage from 2010 to 2015; of these, 4 countries (Equatorial Guinea, Liberia, Angola and Guinea) had a greater than 10% reduction in DTP3 coverage from 2010 to 2015. Over the same period, African Region Pol3 coverage showed similar patterns to DTP3 coverage (Figure 1). African Region Pol3 coverage increased minimally, moving from 74% in 2010 to 76% in 2015, with 2015 Pol3 national coverage ranging between 17% (Equatorial Guinea) and 98% (Rwanda and Tanzania). MCV1 and MCV2 coverage trends Estimated MCV1 coverage in the African Region increased from 53% in 2000 to 73% by 2010 and was 74% in 2015 (Figure 1). In comparison, global MCV1 coverage increased from 72% in 2000 to 85% in 2010 and was still 85% in 2015. In 2015, national MCV1 coverage for African Region countries ranged between 20% and 99%, with 12 (26%) countries reaching ≥90% national MCV1 coverage. From 2010 to 2015, 20 African Region countries experienced a decrease in MCV1 coverage, 23 experienced an increase, and 4 had no change. Five countries (Swaziland, Kenya, Equatorial Guinea, Eritrea, and Angola) had a greater than 10% decrease in MCV1 coverage from 2010 to 2015. MCV2 coverage in the African Region (including countries that had not yet introduced MCV2) increased from 5% in 2000 to 18% in 2015, with 23 (49%) African Region countries including MCV2 in their 2015 immunization schedule compared to 5 (11%) in 2000. In comparison, 83% of countries globally had introduced MCV2 by 2015, compared to 46% in 2000, and global MCV2 coverage was 61% in 2015, compared to 15% in 2000. Analysis by income category and Gavi-eligibility status When classified by country income category in 2015, 26 African Region countries were low-income, 12 were lower middle-income, 8 were upper middle-income and 1 was high-income; these numbers changed from 36, 5, 5, and 0, respectively, in 2000 (Table 1).
Income category of each country in the World Health Organization African Region by year, 2000, 2005, 2010 and 2015 Shaded cells indicate where a country changed income category from the previous 5-year time point. Source: World Bank analytical categorizations using country gross national income per capita for the given year The population-weighted average national DTP3 coverage across low-income African Region countries increased from 50% in 2000 to 80% in 2015 (average annual change, 2.0%), with a lower annual change during 2010-2015, when coverage increased from 74% to 80% (average annual change, 1.2%), than during 2000-2010, when coverage increased from 50% to 74% (average annual change, 2.4%) (Figure 2 A). In lower middle-income countries, the average DTP3 coverage decreased from 84% in 2000 to 69% in 2015 (average annual change, -1.0%), though there was a slight increase from 67% to 69% during 2010-2015. The decrease in lower middle-income countries’ average DTP3 coverage coincided with Nigeria being reclassified as a lower middle-income country instead of a low-income country. Nigeria’s 2015 DTP3 coverage of 56% was below the 87% average for the other lower middle-income countries. For lower middle-income countries excluding Nigeria, DTP3 coverage was 84% in 2000, 68% in 2005, 84% in 2010, and 87% in 2015. In comparison, for low-income countries excluding Nigeria, DTP3 coverage increased from 55% in 2000 to 69% in 2005, 74% in 2010 and 80% in 2015. Immunization program performance (MCV1 coverage, DTP3 coverage and DTP1–3 dropout rate) for countries in the African Region 1, by country income category 2 and Gavi-eligibility status 3, 2000–2015 Among upper middle-income countries, a small annual change (average annual change, 0.2%) was observed during 2000-2015 as DTP3 coverage increased from 73% to 76%; however, coverage reached a high of 79% in 2010 before decreasing to 76% by 2015. The average national DTP3 coverage increased from 77% to 81% between 2000 and 2015 (average annual change, 0.3%) for Gavi-ineligible countries and from 49% to 76% (average annual change, 1.8%) for Gavi-eligible countries (Figure 2 B). African Region MCV1 coverage by income and Gavi-eligibility category exhibited similar trends to DTP3 coverage during the 2000-2015 period (Figure 2 C,D). In low-income African Region countries, MCV1 coverage increased from 57% to 79% (average annual change, 1.6 percentage points) during 2000-2015, while in lower middle-income countries, coverage was 82% in 2000 and 65% in 2015 (average annual change, -1.1%). For lower middle-income countries without Nigeria, MCV1 coverage was 82% in 2000 and 60% in 2005, then reached 81% in 2010 and 80% in 2015 (average annual change, -0.1%). In upper middle-income countries, coverage was 65% in 2000 and reached 75% in 2015 (average annual change, 0.7%). For upper middle-income countries, the largest change in MCV1 coverage occurred during 2005-2010 (64% to 83%). Between 2000 and 2015, MCV1 coverage rose from 60% to 73% in Gavi-eligible countries (average annual change, 0.9%) and from 72% to 77% (average annual change, 0.33%) in Gavi-ineligible countries. The MCV1 coverage gap between Gavi-eligible and Gavi-ineligible African Region countries decreased from 12% to 4% during 2000-2015.
The average DTP1–3 dropout rate was 24% in 2000 and 9% in 2015 among African Region countries that were low-income in those years, 11% in 2000 and 15% in 2015 in countries that were lower middle-income in those years, and 14% in 2000 and 9% in 2015 among African Region countries that were upper middle-income in those years (Figure 2 E). The average DTP1–3 dropout rate decreased from 13% to 5% from 2000 to 2015 for Gavi-ineligible countries and from 24% to 11% for Gavi-eligible countries (Figure 2 F).
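The group averages in this subsection are population-weighted, with UNPD surviving infants as weights and countries assigned to groups using the World Bank thresholds quoted in the Methods. A compact sketch; the dictionary inputs and function names are illustrative assumptions:

```python
def classify_income_2015(gni_per_capita: float) -> str:
    # 2015 World Bank Atlas-method thresholds from the Definitions section.
    if gni_per_capita <= 1025:
        return "low"
    if gni_per_capita <= 4035:
        return "lower-middle"
    if gni_per_capita <= 12475:
        return "upper-middle"
    return "high"

def weighted_group_coverage(coverage_pct, infants, gni_per_capita):
    # Population-weighted mean: sum(coverage_i * infants_i) / sum(infants_i).
    sums, weights = {}, {}
    for country, cov in coverage_pct.items():
        group = classify_income_2015(gni_per_capita[country])
        sums[group] = sums.get(group, 0.0) + cov * infants[country]
        weights[group] = weights.get(group, 0.0) + infants[country]
    return {group: sums[group] / weights[group] for group in sums}
```

Weighting by surviving infants is why a single populous country such as Nigeria can dominate its income group's average, as the lower middle-income figures above illustrate.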
Positive deviance analysis During 2013-2015, eleven African Region countries met all three positive deviance analysis thresholds: national DTP1 and DTP3 coverage ≥90%, DTP1–3 dropout <10% and MCV1 coverage ≥90% for each of the last three years. Of these, four were low-income countries in 2015 (Rwanda, Burundi, The Gambia and the United Republic of Tanzania), three were lower middle-income (Lesotho, Sao Tome and Principe and Cabo Verde), three were upper middle-income countries (Algeria, Botswana, and Mauritius) and one was a high-income country (Seychelles). Of these 11 countries, The Gambia, Mauritius, Rwanda and Sao Tome and Principe also achieved ≥80% DTP3 coverage in 100% of districts. Two additional countries in the African Region achieved national DTP1 and DTP3 coverage ≥90% and DTP1–3 dropout <10% for the last three years, but did not reach the positive deviance threshold of MCV1 coverage ≥90% (Eritrea and Swaziland). Vaccine introductions Several vaccines have been added to the WHO recommended schedule since the inception of the Expanded Programme on Immunization (EPI) in 1974. By 2015, 23 (49%) African Region countries had introduced MCV2, while 37 (79%) had introduced PCV and 29 (62%) had introduced rotavirus vaccine. Regional coverage for Hib3, PCV3, RCV1 and Rota-last increased in the African Region between 2010 and 2015 (Table 2), as 33 African Region countries introduced PCV, 27 introduced rotavirus vaccine, 7 introduced Hib, and 6 introduced RCV1. African Region PCV3 coverage reached 59% in 2015, compared to global coverage of 37%.
Among the six WHO Regions, the African Region ranked fourth in 2015 for Hib3 coverage and second for both PCV3 and Rota-last coverage (data not shown). However, African Region RCV1 coverage was the lowest among all regions, at 12%. In the African Region, the number and proportion of countries reaching 90% national coverage in 2015 with Hib3, PCV3, Rota-last and RCV1 vaccines were 16 (34%), 6 (13%), 6 (13%) and 5 (11%), respectively. Of the four positive deviant, low-income countries (as of 2015), all achieved ≥90% coverage with Hib3, PCV3, and Rota-last in 2015; Tanzania and Rwanda also achieved ≥90% coverage with RCV1 in 2015, while Burundi and The Gambia had yet to introduce RCV1. African Region and global coverage of recently introduced vaccines, 2010 and 2015 Definitions: Rota=final dose of rotavirus vaccine; Hib3=third dose of Haemophilus influenzae type b vaccine; PCV3=third dose of pneumococcal conjugate vaccine; RCV1=first dose of rubella-containing vaccine
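For reference, the 2010-2015 comparisons repeated throughout these Results use relative change. The Methods name the metric without spelling out a formula; a plausible reading, shown here with hypothetical values, is the change in coverage expressed as a share of the 2010 level rather than in percentage points:

```python
def relative_change_pct(cov_2010: float, cov_2015: float) -> float:
    # Relative change: (new - old) / old, in percent.
    return (cov_2015 - cov_2010) / cov_2010 * 100.0

# A drop from 50% to 40% coverage is a -20% relative change,
# even though it is only 10 percentage points.
assert round(relative_change_pct(50.0, 40.0)) == -20
```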
MCV1 and MCV2 coverage trends: Estimated MCV1 coverage in the African Region increased from 53% in 2000 to 73% by 2010 and was 74% in 2015 (Figure 1). In comparison, global MCV1 coverage increased from 72% in 2000 to 85% in 2010 and was still 85% in 2015. In 2015, national MCV1 coverage for African Region countries ranged between 20% and 99%, with 12 (26%) countries reaching ≥90% national MCV1 coverage. From 2010 to 2015, 20 African Region countries experienced a decrease in MCV1 coverage, 23 experienced an increase, and 4 had no change. Five countries (Swaziland, Kenya, Equatorial Guinea, Eritrea, and Angola) had a greater than 10% decrease in MCV1 coverage from 2010 to 2015. MCV2 coverage in the African Region (including countries that had not yet introduced MCV2) increased from 5% in 2000 to 18% in 2015, with 23 (49%) African Region countries including MCV2 in their 2015 immunization schedule compared to 5 (11%) in 2000. In comparison, 83% of countries globally had introduced MCV2 by 2015, compared to 46% in 2000, and global MCV2 coverage was 61% in 2015, compared to 15% in 2000.
Analysis by income category and Gavi-eligibility status: When classified by country income category in 2015, 26 African Region countries were low-income, 12 were lower middle-income, 8 were upper middle-income and 1 was high-income; the corresponding numbers in 2000 were 36, 5, 5, and 0 (Table 1).
(Table 1: Income category of each country in the World Health Organization African Region by year, 2000, 2005, 2010 and 2015. Shaded cells indicate where a country changed income category from the previous 5-year time point. Source: World Bank analytical categorizations using country gross national income per capita for the given year.)
The population-weighted average national DTP3 coverage across low-income African Region countries increased from 50% in 2000 to 80% in 2015 (average annual change, 2.0%), with a lower annual change during 2010-2015, when coverage increased from 74% to 80% (average annual change, 1.2%), than during 2000-2010, when coverage increased from 50% to 74% (average annual change, 2.4%) (Figure 2 A). In lower middle-income countries, the average DTP3 coverage decreased from 84% in 2000 to 69% in 2015 (average annual change, -1.0%), though there was a slight increase from 67% to 69% during 2010-2015.
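The population-weighted averages described above can be reproduced once coverage, surviving-infant, and income-category series are in hand. The sketch below is a minimal illustration with hypothetical input dictionaries; the real analysis used WHO/UNICEF coverage estimates and UN estimates of surviving infants for the corresponding year.

# Minimal sketch: population-weighted average DTP3 coverage by income category.
# Inputs are hypothetical stand-ins for the WHO/UNICEF and UN source data.
from collections import defaultdict

coverage = {"A": 0.50, "B": 0.74, "C": 0.84}               # national DTP3 coverage
infants = {"A": 7_000_000, "B": 2_500_000, "C": 900_000}   # surviving infants
income = {"A": "low", "B": "low", "C": "lower middle"}     # World Bank category

num, den = defaultdict(float), defaultdict(float)
for country, cat in income.items():
    num[cat] += coverage[country] * infants[country]       # coverage weighted by cohort
    den[cat] += infants[country]

weighted = {cat: round(num[cat] / den[cat], 3) for cat in num}
print(weighted)  # {'low': 0.563, 'lower middle': 0.84}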
The decrease in lower middle-income countries' average DTP3 coverage coincided with Nigeria being reclassified as a lower middle-income country instead of a low-income country. Nigeria's 2015 DTP3 coverage of 56% was below the 87% average for the other lower middle-income countries. For lower middle-income countries excluding Nigeria, DTP3 coverage was 84% in 2000, 68% in 2005, 84% in 2010, and 87% in 2015. In comparison, for low-income countries excluding Nigeria, DTP3 coverage increased from 55% in 2000 to 69% in 2005, 74% in 2010 and 80% in 2015.
(Figure 2: Immunization program performance (MCV1 coverage, DTP3 coverage and DTP1–3 dropout rate) for countries in the African Region, by country income category and Gavi-eligibility status, 2000-2015.)
Among upper middle-income countries, a small annual change (average annual change, 0.2%) was observed during 2000-2015 as DTP3 coverage increased from 73% to 76%; however, coverage reached a high of 79% in 2010 before decreasing to 76% by 2015. The average national DTP3 coverage increased from 77% to 81% between 2000 and 2015 (average annual change, 0.3%) for Gavi-ineligible countries and from 49% to 76% (average annual change, 1.8%) for Gavi-eligible countries (Figure 2 B). African Region MCV1 coverage by income and Gavi-eligibility category exhibited trends similar to DTP3 coverage during 2000-2015 (Figure 2 C,D). In low-income African Region countries, MCV1 coverage increased from 57% to 79% (average annual change, 1.6%) during 2000-2015, while in lower middle-income countries, coverage was 82% in 2000 and 65% in 2015 (average annual change, -1.1%). For lower middle-income countries without Nigeria, MCV1 coverage was 82% in 2000 and 60% in 2005, then reached 81% in 2010 and 80% in 2015 (average annual change, -0.1%). In upper middle-income countries, coverage was 65% in 2000 and reached 75% in 2015 (average annual change, 0.7%); the largest change in MCV1 coverage occurred during 2005-2010 (64% to 83%). Between 2000 and 2015, MCV1 coverage rose from 60% to 73% in Gavi-eligible countries (average annual change, 0.9%) and from 72% to 77% (average annual change, 0.33%) in Gavi-ineligible countries. The MCV1 coverage gap between Gavi-eligible and Gavi-ineligible African Region countries decreased from 12% to 4% during 2000-2015. The average DTP1–3 dropout rate was 24% in 2000 and 9% in 2015 among African Region countries that were low-income in those years, 11% in 2000 and 15% in 2015 in countries that were lower middle-income in those years, and 14% in 2000 and 9% in 2015 among countries that were upper middle-income in those years (Figure 2 E). The average DTP1–3 dropout rate decreased from 13% to 5% between 2000 and 2015 for Gavi-ineligible countries and from 24% to 11% for Gavi-eligible countries (Figure 2 F).
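For reference, the DTP1–3 dropout rate compared here is conventionally computed from first- and third-dose coverage; the worked example uses illustrative round numbers, not figures from the source:

\[ \text{DTP1--3 dropout (\%)} = 100 \times \frac{\text{DTP1} - \text{DTP3}}{\text{DTP1}}, \qquad \text{e.g. } 100 \times \frac{90 - 76}{90} \approx 15.6\%. \]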
Positive deviance analysis: During 2013-2015, eleven African Region countries met all three positive deviance analysis thresholds: national DTP1 and DTP3 coverage ≥90%, DTP1–3 dropout <10% and MCV1 coverage ≥90% for each of the last three years. Of these, four were low-income countries in 2015 (Rwanda, Burundi, The Gambia and the United Republic of Tanzania), three were lower middle-income (Lesotho, Sao Tome and Principe and Cabo Verde), three were upper middle-income (Algeria, Botswana, and Mauritius) and one was high-income (Seychelles). Of these 11 countries, The Gambia, Mauritius, Rwanda and Sao Tome and Principe also achieved ≥80% DTP3 coverage in 100% of districts. Two additional countries in the African Region achieved national DTP1 and DTP3 coverage ≥90% and DTP1–3 dropout <10% for the last three years, but did not reach the positive deviance threshold of MCV1 coverage ≥90% (Eritrea and Swaziland).
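These three thresholds amount to a simple per-country screen over three consecutive years of indicator data. A minimal sketch (Python), with hypothetical indicator records standing in for the WHO/UNICEF series:

# Minimal sketch: positive deviance screen over 2013-2015 indicator data.
# Records are hypothetical; the real screen uses WHO/UNICEF coverage estimates.
records = {
    # country: list of (dtp1, dtp3, mcv1) coverage, one tuple per year 2013-2015
    "X": [(98, 97, 95), (99, 98, 96), (98, 98, 97)],
    "Y": [(95, 88, 91), (96, 90, 92), (94, 89, 90)],
}

def positive_deviant(years):
    # All three thresholds must hold in every one of the three years.
    return all(
        dtp1 >= 90 and dtp3 >= 90 and mcv1 >= 90
        and 100 * (dtp1 - dtp3) / dtp1 < 10   # DTP1-3 dropout below 10%
        for dtp1, dtp3, mcv1 in years
    )

print([c for c, years in records.items() if positive_deviant(years)])  # ['X']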
Vaccine introductions: Several vaccines have been added to the WHO recommended schedule since the inception of the Expanded Programme on Immunization (EPI) in 1974. By 2015, 23 (49%) African Region countries had introduced MCV2, while 37 (79%) had introduced PCV and 29 (62%) had introduced rotavirus vaccine. Regional coverage for Hib3, PCV3, RCV1 and Rota-last increased in the African Region between 2010 and 2015 (Table 2), as 33 African Region countries introduced PCV, 27 introduced rotavirus vaccine, 7 introduced Hib, and 6 introduced RCV1. African Region PCV3 coverage reached 59% in 2015, compared to global coverage of 37%. Among the six WHO Regions, the African Region ranked fourth in 2015 for Hib3 coverage and second for both PCV3 and Rota-last coverage (data not shown). However, African Region RCV1 coverage was the lowest among all regions at 12%. In the African Region, the number and proportion of countries reaching 90% national coverage in 2015 with Hib3, PCV3, Rota-last and RCV1 vaccines were 16 (34%), 6 (13%), 6 (13%) and 5 (11%), respectively. Of the four positive deviant, low-income countries (as of 2015), all achieved ≥90% coverage with Hib3, PCV3, and Rota-last in 2015; Tanzania and Rwanda also achieved ≥90% coverage with RCV1 in 2015, while Burundi and The Gambia had yet to introduce RCV1.
(Table 2: African Region and global coverage of recently introduced vaccines, 2010 and 2015. Definitions: Rota-last = final dose of rotavirus vaccine; Hib3 = third dose of Haemophilus influenzae type b vaccine; PCV3 = third dose of pneumococcal conjugate vaccine; RCV1 = first dose of rubella-containing vaccine.)
Discussion: In this report, we provide an update on the African Region's progress and challenges in reaching GVAP vaccination coverage goals intended to help extend the benefits of immunization equitably to all people. Over the past 15 years, African Region vaccination coverage increases have outpaced global coverage increases, with a 24% increase in DTP3 coverage and a 21% increase in MCV1 coverage, while the number of children unvaccinated with DTP3 in the Region decreased to a low of 8.1 million in 2015 from 11.8 million in 2000. However, since 2010, MCV1 and DTP3 coverage has improved little in the African Region, with lower middle-income countries having the lowest DTP3 coverage among country income categories in 2015, largely due to Nigeria's DTP3 coverage. Additionally, within-country geographic inequities are substantial, with only 13% of countries achieving ≥80% DTP3 coverage in every district in 2015. Despite the large increases in DTP3 and MCV1 regional coverage during 2000-2015, the annual rate of change slowed substantially during 2010-2015, indicating a great risk of not reaching GVAP goals if the pattern continues. In this African Region analysis, we identified inequities in vaccination coverage among country income groups, between Gavi-eligibility groups, and at the national and subnational levels. In particular, we found that lower middle-income African Region countries had lower DTP3 coverage than low-income or upper middle-income African Region countries. The lower middle-income countries also had lower MCV1 coverage and higher DTP1–3 dropout rates than low-income or upper middle-income African Region countries in 2015. We also found that despite considerable progress in narrowing the gap between Gavi-eligible and Gavi-ineligible countries across program performance indicators, a gap still exists. The continued existence of this gap indicates the need for sustained, long-term investment in strengthening immunization systems. Ensuring adequate funding for national immunization programs across the African Region is particularly critical at this time, as funds from the Global Polio Eradication Initiative are beginning to subside and increasing numbers of middle-income countries are likely to reach the transition phase out of Gavi support in the coming years. Despite overall substantial improvement in coverage since 2000, we noted concerning temporal trends, as 43% and 38% of African Region countries had a decrease in MCV1 and DTP3 coverage, respectively, during 2010-2015. The slowed rate of increase in DTP3 coverage among African Region countries as a whole during 2010-2015 (0.8% average annual change) compared to 2000-2010 (2% average annual change) is similar to patterns seen in other regions [15]. This pattern suggests that incremental improvements in coverage may be easier at low levels of coverage than at high ones [15], particularly because, as coverage rises, continued improvements may depend on changes in national infrastructure (e.g., roads, telecommunications), health sector infrastructure (e.g., facilities, supply chain), or health staff (e.g., education) that require considerable time and resources. In addition, these observations suggest that considerable patience and persistence may be required to reach high coverage targets. Understanding the amount of additional funding required to achieve progress at higher baseline levels of coverage, and how immunization programs can most efficiently use funds and other resources to improve their performance, is an important operational research area. Our findings about geographic inequities in immunization coverage within countries highlight one of the many types of inequity that lead to less than optimal immunization coverage at the country level [16]. Analyses of data from other sources, such as coverage surveys and Demographic and Health Surveys, have revealed a wide range of inequities at the subnational level, similar to the disparity of WHO/UNICEF coverage estimates at the national and regional levels. Previous studies have shown coverage to be as much as 60% higher among urban children than rural children [17]. About one in three countries has substantial coverage disparities associated with income level, with at least a 20% gap in DTP and MCV coverage between the richest and the poorest quintiles [18]. The disparities in coverage vary between and within countries and WHO Regions [17], and inequities persist in a large majority of Gavi-eligible countries [19]. Over the last 10 years, there has been considerable progress in improving equity in immunization coverage, as the lowest wealth quintile and the least educated have seen the largest improvements in coverage. However, the lower-middle and middle quintiles have benefited less from these improvements and warrant greater attention in the future [18].
Using our criteria for positive deviance, we identified four countries that were low-income in 2015 and three that were lower middle-income in 2015 that were outperforming their peers in reaching and sustaining high vaccination coverage. In 2015, among the positive deviant countries, Rwanda was the only low-income country to achieve ≥80% DTP3 coverage in every district as well as ≥90% coverage for the more recently introduced vaccines included in our analysis (Hib3, PCV3, RCV1, and Rota-last). Nearly reaching this benchmark, Tanzania achieved ≥90% coverage for all the latter vaccines and ≥80% DTP3 coverage in 92% of its districts. The success in Rwanda has been attributed to several key factors, including country ownership and high-level commitment, engagement with village-based community health workers, recognition of immunization as a gateway to other integrated primary healthcare services, and the creation of a short message service (SMS) text message system for registering pregnancies and births [20, 21]. Rwanda's commitment to immunization is reflected in its early introduction of new vaccines, which has also given the country more time to increase coverage with those vaccines compared to countries that introduced them recently. To achieve immunization and equity goals, the needs and the reasons for success of low- and middle-income WHO Regions, countries, districts, and individuals will have to be better understood and addressed [22]. The experience of low-income countries that are making progress in line with GVAP goals, such as Rwanda, should be widely shared and applied. Further context-specific quantitative and qualitative research and tools are required to map gaps in coverage and immunity over time, explore why inequities persist, and determine how best to address them. In the African Region, supporting the evolution of the Reach Every District (RED) strategy into a Reach Every Child (REC) strategy can help address subnational inequities in coverage [23]. A REC strategy can be facilitated by strong microplanning that incorporates fixed-site and outreach sessions targeting underserved populations, such as mobile and border groups, non-citizens and undocumented migrants. Improving the implementation of birth registration may also benefit microplanning efforts, since less than half of the children in the African Region are registered at birth [24, 25]. Clear communications delivered by multiple messengers to stimulate community demand for immunization in all population groups are critical to ensuring that caregivers understand the value of vaccination as well as the risks, and see it as a human right [24] and a community responsibility. Establishing strengthened, equity-oriented health information systems, including improved use of mobile technologies, with high-quality immunization data down to the individual level, is an important prerequisite to ensuring that policy and practice can be appropriately formulated to achieve immunization coverage targets. For example, where the rate of improvement in coverage slows, prompt efforts are needed to address subnational pockets of low coverage. This analysis has limitations. While WHO/UNICEF estimates of vaccination coverage are generally thought to best reflect true vaccination coverage, administrative sources have been shown both to overestimate and to underestimate coverage compared to coverage surveys [26].
Although administrative coverage estimates can be more timely and convenient, inaccuracies in either the numerator (number of doses administered) or the denominator (census data and projections) can render them unreliable. Several examinations of equity have been performed using coverage survey data, which have the advantage of not relying on the accuracy of administrative numerator, denominator or target population estimates. However, coverage surveys are costly and cannot provide the timely information required to guide immunization programs [27]. As the number of recommended vaccines increases and the use of home vaccination cards declines in some countries, the risk of bias in parental recall may increase [28]. Furthermore, because of the inherently imprecise nature of the coverage estimates, the data in this analysis were examined for every fifth year. An alternative approach would be to use the annual coverage estimates with a data-smoothing technique.
Conclusion: The described inequities in vaccination coverage by region, country and at the subnational level are an important barrier to achieving higher coverage and disease control and elimination goals. Moving forward, successful immunization programs will need to ensure high-level political commitment to immunization, ownership and stewardship of the program, and the financial and human resource capacity to institute a wide range of innovative and cohesive strategies to reach every child and to stimulate demand for vaccination services in all communities. The February 2016 Addis Declaration on Immunization, in which African heads of state committed to increasing domestic resources for immunization and improving the effectiveness and efficiency of immunization programs, is a sign that the necessary political commitment exists [29]. The fulfillment of the pledges in the declaration will be essential to maximizing the benefits from immunization in the African Region.
What is known about this topic: Childhood immunization is a safe, effective and cost-effective intervention that is critical to reducing infant and under-five morbidity and mortality; it is also a key part of the struggle against antimicrobial resistance and can be used as a tool in health system strengthening; Equity is a key component of the strategies outlined by the Global Vaccine Action Plan 2011-2020 and the UN 2030 Agenda for Sustainable Development; The Global Vaccine Action Plan 2011-2020 called for all countries to reach and sustain 90% national coverage and 80% coverage in all districts for the third dose of diphtheria, tetanus, and pertussis vaccine (DTP3) by 2015, and 90% national coverage and 80% coverage in all districts for all vaccines in national immunization schedules by 2020.
What this study adds: As yet, no countries in the African Region are meeting the 2020 coverage targets; During 2013-2015, eleven African Region countries achieved national DTP1 and DTP3 coverage ≥90%, DTP1–3 dropout <10% and MCV1 coverage ≥90% for each of the three years; of these 11 countries, only Rwanda, a low-income country, achieved ≥80% DTP3 coverage in all districts and ≥90% coverage for Hib3, PCV3, RCV1 and Rota-last in 2015; Despite improvements in immunization coverage in the African Region since 2000, more progress is needed in lower middle-income and upper middle-income countries as well as in low-income countries. Competing interests: The authors declare no competing interest.
Background: In 2012, the Global Vaccine Action Plan called on all countries to reach and sustain 90% national coverage and 80% coverage in all districts for the third dose of diphtheria-tetanus-pertussis vaccine (DTP3) by 2015, and for all vaccines in national immunization schedules by 2020. The aims of this study are to analyze recent trends in national vaccination coverage in the World Health Organization African Region and to assess how these trends differ by country income category. Methods: We compared national vaccination coverage estimates for DTP3 and the first dose of measles-containing vaccine (MCV1), obtained from the World Health Organization (WHO)/United Nations Children's Fund (UNICEF) joint estimates of national immunization coverage, for all African Region countries. Using United Nations (UN) population estimates of surviving infants and country income category for the corresponding year, we calculated population-weighted average vaccination coverage by country income category (i.e., low, lower middle, and upper middle-income) for the years 2000, 2005, 2010 and 2015. Results: DTP3 coverage in the African Region increased from 52% in 2000 to 76% in 2015, and MCV1 coverage increased from 53% to 74% during the same period, but with considerable differences among countries. Thirty-six African Region countries were low-income in 2000 with an average DTP3 coverage of 50%, while 26 were low-income in 2015 with an average coverage of 80%. Five countries were lower middle-income in 2000 with an average DTP3 coverage of 84%, while 12 were lower middle-income in 2015 with an average coverage of 69%. Five countries were upper middle-income in 2000 with an average DTP3 coverage of 73%, and eight were upper middle-income in 2015 with an average coverage of 76%. Conclusions: Disparities in vaccination coverage by country persist in the African Region, with countries that were lower middle-income having the lowest coverage on average in 2015. Monitoring and addressing these disparities is essential for meeting global immunization targets.
Introduction: In 2012, the World Health Assembly endorsed the Global Vaccine Action Plan 2011-2020 (GVAP), which calls on all countries to reach and sustain 90% national coverage and 80% coverage in all districts for the third dose of diphtheria-tetanus-pertussis-containing vaccine (DTP3) by 2015, and 90% national coverage and 80% coverage in all districts for all vaccines included in national immunization schedules by 2020 [1]. The GVAP also calls for a focus on equity in vaccination coverage, through reducing the gap in coverage between low-income and high-income countries and by reducing pockets of low subnational vaccination coverage. Within the WHO African Region (AFR), a majority of countries have historically been classified by the World Bank as low-income [2], and immunization services in many countries operate within a relatively resource-constrained environment compared to high-income countries. Most of the 47 African Region countries are the focus of heavy investments from foreign donors, including Gavi, the Vaccine Alliance, which disbursed at least US$6 billion during 2000-2016 for vaccine introductions and health system strengthening activities in 37 African Region countries [3, 4]. No published analyses have described recent trends in vaccination coverage, equity and vaccine introductions among African Region countries in light of the substantial investments made in these countries over the past 16 years. Examining how these trends differ by country income status and Gavi-eligibility status may provide useful insight into gaps to be addressed in future work by the countries and their external partners. Additionally, as global partners such as Gavi, WHO and UNICEF look to further support countries in reaching GVAP goals, identifying countries that have outperformed their peers may also yield useful lessons for strengthening immunization systems within resource-constrained settings. The aims of this study are to analyze recent trends in national and subnational vaccination coverage in the African Region, to assess how these trends differ by country income category and Gavi-eligibility, to review the coverage achieved with recently introduced vaccines in African Region countries, and to identify low-income African Region countries that outperform their peers in reaching and sustaining high levels of vaccination coverage.
12,216
387
[ 294, 311, 371, 382, 233, 898, 189, 339, 145, 125, 7 ]
16
[ "coverage", "countries", "2015", "income", "region", "african", "african region", "dtp3", "2000", "dtp3 coverage" ]
[ "vaccination coverage country", "vaccine regional coverage", "vaccination coverage equity", "immunization programs african", "funding national immunization" ]
[CONTENT] Vaccination | immunization coverage | World Health Organization African region [SUMMARY]
[CONTENT] Africa | Developing Countries | Diphtheria-Tetanus-Pertussis Vaccine | Global Health | Humans | Immunization Programs | Immunization Schedule | Income | Infant | Measles Vaccine | Vaccination | Vaccination Coverage | World Health Organization [SUMMARY]
[CONTENT] vaccination coverage country | vaccine regional coverage | vaccination coverage equity | immunization programs african | funding national immunization [SUMMARY]
[CONTENT] coverage | countries | 2015 | income | region | african | african region | dtp3 | 2000 | dtp3 coverage [SUMMARY]
[CONTENT] countries | coverage | income | vaccination coverage | vaccination | region | gvap | vaccine | trends | african region [SUMMARY]
[CONTENT] estimates | countries | coverage | vaccine | income | gni capita | gni | 2015 | obtained | dose [SUMMARY]
[CONTENT] coverage | 2015 | countries | income | average | african region | african | region | 2000 | dtp3 coverage [SUMMARY]
[CONTENT] coverage | 2020 | immunization | coverage districts | countries | 90 | effective | income | vaccine | action [SUMMARY]
[CONTENT] coverage | countries | 2015 | income | region | african | african region | dtp3 | dtp3 coverage | vaccine [SUMMARY]
[CONTENT] 2010 | the Global Vaccine Action Plan | 90% | 80% | third | DTP3 | 2015 | 2020 ||| the World Health Organization African Region [SUMMARY]
[CONTENT] DTP3 | first | MCV | the World Health Organization | African ||| United Nations | UN | the corresponding year | the years 2000, 2005 | 2010 | 2015 [SUMMARY]
[CONTENT] DTP3 | the African Region | 52% | 2000 | 76% | 2015,and | 53% to 74% ||| Thirty-six | African | 2000 | DTP3 | 50% | 26 | 2015 | 80% ||| Five | 2000 | DTP3 | 84% | 12 | 2015 | 69% ||| Five | 2000 | DTP3 | 73% | eight | 2015 | 76% [SUMMARY]
[CONTENT] the African Region | 2015 ||| [SUMMARY]
[CONTENT] ||| 2010 | the Global Vaccine Action Plan | 90% | 80% | third | DTP3 | 2015 | 2020 ||| the World Health Organization African Region ||| DTP3 | first | MCV | the World Health Organization | African ||| United Nations | UN | the corresponding year | the years 2000, 2005 | 2010 | 2015 ||| DTP3 | the African Region | 52% | 2000 | 76% | 2015,and | 53% to 74% ||| Thirty-six | African | 2000 | DTP3 | 50% | 26 | 2015 | 80% ||| Five | 2000 | DTP3 | 84% | 12 | 2015 | 69% ||| Five | 2000 | DTP3 | 73% | eight | 2015 | 76% ||| the African Region | 2015 ||| [SUMMARY]
Geographic Information System-based association between the sewage network, geographical location of intermediate hosts, and autochthonous cases for the estimation of risk areas of schistosomiasis infection in Ourinhos, São Paulo, Brazil.
33886822
Ourinhos is a municipality located between the Pardo and Paranapanema rivers, and it has been characterized by the endemic transmission of schistosomiasis since 1952. We used geospatial analysis to identify areas prone to human schistosomiasis infections in Ourinhos. We studied the association between the sewage network, co-occurrence of Biomphalaria snails (identified as intermediate hosts [IHs] of Schistosoma mansoni), and autochthonous cases.
INTRODUCTION
Gi spatial statistics, Ripley's K12-function, and kernel density estimation were used to evaluate the association between schistosomiasis data reported during 2007-2016 and the occurrence of IHs during 2015-2017. These data were superimposed on the municipality sewage network data.
METHODS
We analyzed 20 points with reported IHs; these were colonized predominantly by Biomphalaria glabrata, followed by B. tenagophila and B. straminea. Based on Gi statistics, a significant cluster of autochthonous cases was superimposed on the Christoni and Água da Veada water bodies, with distances of approximately 300 m and 2200 m from the points where B. glabrata and B. straminea were present, respectively.
RESULTS
The residential geographic locations of autochthonous cases, combined with the spatial analysis of IHs and the coverage of the sewage network, provide important information for the detection of human-infection areas. Our results demonstrate that the tools used are appropriate for directing the surveillance, control, and elimination of schistosomiasis.
CONCLUSIONS
[ "Animals", "Biomphalaria", "Brazil", "Disease Vectors", "Geographic Information Systems", "Humans", "Schistosoma mansoni", "Schistosomiasis", "Schistosomiasis mansoni", "Sewage" ]
8047698
INTRODUCTION
Schistosomiasis is a parasitic infection that is considered a neglected tropical disease [1]. Schistosomiasis mansoni infection in Brazil is associated with the development of the parasite Schistosoma mansoni Sambon, 1907 in three species of snails of the genus Biomphalaria (Preston, 1910), namely, B. glabrata (Say, 1818), B. tenagophila (Orbigny, 1835), and B. straminea (Dunker, 1848) [2]. Human infections are highly prevalent, mainly in the northeast of the country and in the southeast, where the disease is endemic in some areas [3]. In the state of São Paulo, human infections occur in specific areas where schistosomiasis endemicity is low [3]. Among these areas, the Middle Paranapanema region, where it borders the state of Paraná, is usually reported as an important endemic region [4]. However, a recent study using spatial analysis tools in an area considered a GeoSentinel surveillance site for schistosomiasis pointed out that human schistosomiasis infections are more likely to occur in Ourinhos than in the other regions across the 25 municipalities of the Hydrographic Unit of Water Resources Management of Middle Paranapanema (UGRHI-17) [5, 6, 7]. Currently, Ourinhos accounts for 93% of all autochthonous cases in Middle Paranapanema [5], with cases reported since 1952 [8, 9]. The schistosomiasis cases observed in Ourinhos are probably associated with B. glabrata, which is a natural host of S. mansoni in the municipality [10, 11]. This species was initially identified in Ourinhos in 1919 [12] and continues to proliferate in water bodies in this municipality [13, 14]. B. tenagophila and B. straminea, two other S. mansoni intermediate-host (IH) species, have also been described in the area [13, 14]. The spatial association between the occurrence of autochthonous cases and the presence of IHs can be analyzed using Gi spatial statistics. This tool, with the support of geographic information systems (GIS), uses the geographic coordinates of locations to find spatial clusters of a certain measure or quantity around a specific point and to infer the distances at which these clusters are statistically significant [15]. Previous studies have used this tool (Gi or Gi* statistics) to analyze schistosomiasis in Africa [16, 17, 18, 19] and vector-borne diseases, such as dengue, in Brazil [20]. Additionally, other studies have investigated schistosomiasis using GIS worldwide [5, 21, 22, 23]. Thus, GIS and spatial analysis tools may contribute to identifying areas at the highest risk of human schistosomiasis infection and other diseases and, consequently, help guide public health measures [21, 24]. The present study used a GIS-based approach to identify rural and urban areas at risk of schistosomiasis transmission in Ourinhos (São Paulo, Brazil), combining data sources related to the presence of snails that act as S. mansoni IHs, the historical occurrence of human infection, and the sewage network.
METHODS
Study area: The study was conducted in the municipality of Ourinhos, southwest of the state of São Paulo (22°58′44″ S, 49°52′15″ W; Figure 1). The municipality extends over an area of 296 km² and had an estimated population of 113,542 inhabitants in 2019 [25, 26], 97% of whom lived in urban areas [25]. The municipality is covered by a variety of freshwater bodies located between the Pardo and Paranapanema rivers, which are tributaries of the Paraná river [27].
(Figure 1: Maps of (A) Brazil, South America; (B) the state of São Paulo; and (C) the municipality of Ourinhos, showing the distribution of S. mansoni intermediate-host (IH) species (Biomphalaria) identified during 2015-2017, the autochthonous cases of 2007-2016, and the water bodies in Ourinhos. The numbers (N°) in Figure 1C correspond to the collection points presented in Table 1. Source: IBGE [36, 37]; SMA [38]; SUCEN/Palasio et al. [14]; SINAN/CVE.)
Data source: The geographic coordinates of each taxonomic group identified at the 20 collection points in the eight freshwater bodies positive for IH Biomphalaria species, together with the frequency of specimens per species, are displayed in Table 1. These data are part of a survey conducted in 2015-2017 at 141 sampling points located along 26 water bodies in urban, peri-urban, and rural areas within the geographical limits of the Ourinhos municipality, following the malacological and geospatial approaches described by Palasio et al. [14]. Of the 141 points sampled, 121 were negative for the presence of IHs or were colonized by Biomphalaria species naturally refractory to S. mansoni [14]. As reported in our preliminary study [14], the sampled snails were examined in the laboratory for the presence of trematode cercariae [28], and the species were concurrently identified through morphological characters according to Paraense (1975, 1981) [29, 30] and the DNA barcode protocol [31, 32, 33]. A detailed explanation of the parasitological approach and the morphological and molecular identification of the snails used in this study is provided by Palasio et al. [14].
TABLE 1: Geographic coordinates and number of S. mansoni intermediate-host specimens of each Biomphalaria species collected at each water body and point, and percentage of residents served by a sewage network, septic tank, or rudimentary tank according to the census tracts of the sample-point locations, municipality of Ourinhos, SP, Brazil, 2015-2017.

Water body | Point** | Latitude (°) | Longitude (°) | Btt | Bgl | Bst | Boc/Bgl | Boc/Btt | Boc/Bst | Boc | Bsp | Collection date | Sewage network (%)* | Septic tank (%)* | Rudimentary tank (%)* | Other (%)***
Christoni | 1 | -22.967600 | -49.874683 | - | 170 | - | - | - | - | - | 42 | 2015-2016 | 89.3 | - | 10.3 | 0.3
Christoni | 2 | -22.967117 | -49.875167 | - | 25 | - | - | - | - | - | 21 | 2015-2017 | | | |
Christoni | 3 | -22.952833 | -49.876333 | - | 45 | - | 68 | - | - | - | 34 | 2015-2016 | 95.6 | 0.6 | 3.7 | -
Christoni | 4 | -22.950050 | -49.875850 | - | 25 | - | 10 | - | - | 14 | 28 | 2015-2016 | | | |
Água da Veada | 5 | -22.953222 | -49.878306 | - | - | 51 | - | - | - | 59 | 104 | 2015-2017 | 95.6 | 0.6 | 3.7 | -
Furninhas | 6 | -22.985556 | -49.849972 | 125 | - | - | - | - | - | 50 | - | 2015 | 7.8 | 86.3 | 5.9 | -
Furninhas | 7 | -22.976766 | -49.851745 | 32 | - | - | - | - | - | - | - | 2015 | 100 | - | - | -
Jacu | 8 | -23.016306 | -49.905000 | - | - | - | - | 31 | - | 24 | 7 | 2015-2016 | - | - | - | -
Jacu | 9 | -23.008944 | -49.872750 | - | 59 | - | - | - | - | - | 41 | 2015-2017 | 1.8 | 0.7 | 96.1 | 1.1
Jacu | 10 | -23.021167 | -49.877278 | - | - | - | - | 98 | - | 47 | 53 | 2015-2016 | - | - | - | -
Jacuzinho | 11 | -22.995111 | -49.874333 | - | - | 20 | - | - | - | - | 61 | 2015-2016 | 98.4 | 1.3 | 0.3 | -
Lageadinho | 12 | -23.010972 | -49.824500 | - | 12 | - | - | - | - | - | - | 2015 | 19.1 | 9.2 | 71.6 | -
Lageadinho | 13 | -23.022833 | -49.826733 | - | 31 | - | - | - | - | - | - | 2016 | | | |
Sobra | 14 | -23.041650 | -49.860233 | - | 35 | - | - | - | - | - | - | 2016 | - | - | - | -
Sobra | 15 | -23.032400 | -49.858617 | 57 | - | - | - | - | - | - | - | 2016 | - | - | - | -
Sobra | 16 | -23.027467 | -49.864867 | - | 46 | - | - | - | - | - | - | 2016 | 0.9 | 7.1 | 91.9 | -
Sobra | 17 | -23.025063 | -49.863847 | - | - | 38 | - | - | - | - | - | 2016 | - | - | - | -
Sobra | 18 | -23.038333 | -49.861400 | - | - | - | - | - | 53 | - | 17 | 2016 | - | - | - | -
Barreirinha | 19 | -22.985800 | -49.800517 | 27 | - | - | - | - | - | - | - | 2016 | - | 5.3 | 94.7 | -
Barreirinha | 20 | -22.988750 | -49.801167 | 142 | - | - | - | - | - | - | - | 2016-2017 | | | |
Total | | | | 383 | 448 | 109 | 78 | 129 | 53 | 194 | 408 | | | | |

*Source: IBGE [25]. **SUCEN/Palasio et al. [14]; point numbers correspond to those shown in Figure 1. ***Other = ditch, river, lake, or other sewer. Bgl: B. glabrata; Bst: B. straminea; Btt: B. tenagophila; Boc: B. occidentalis; Bsp: Biomphalaria spp. Empty sanitation cells correspond to cells merged with the row above in the source table.

Information regarding schistosomiasis cases from 2007 to 2016, including notification date, residence geographical location, probable infection site (PIS), and epidemiologic classification, was obtained from the National Notifiable Disease Information System (SINAN). Access to the necessary information was provided by the Alexandre Vranjac Center for Epidemiologic Surveillance (CVE). We used this information to obtain the frequencies of occurrence of autochthonous, imported, and unknown-origin cases in the municipality. Once the residence geographical location of each autochthonous schistosomiasis case was known, the batch geocoding tool [34], which uses Google Earth, was used to obtain the respective geographic coordinates (datum WGS84). Cartographic material (maps with rivers, census tract layers, and sanitation data) was obtained from the Brazilian Institute of Geography and Statistics (IBGE) [25, 35, 36, 37], the Secretariat for the Environment of the State of São Paulo (SMA) [38], and the National Secretariat for Environmental Sanitation (SNSA) in partnership with the Brazilian National Water Agency (ANA) [39]. The points corresponding to the geographic coordinates of snail IHs and autochthonous schistosomiasis cases were imported into and viewed using QGIS software version 3.10.5 [40]. A fundamental part of the current study was the data on the percentages of residents served by the sewage system, septic tanks, and/or rudimentary tanks [25], according to census tracts [35]. These data were computed using the MMQGIS plugin [41] coupled with the spatial join geographic operation, both available in QGIS version 3.10.5 [40]. To do this, we considered the map of the census tracts, statistical data from the 2010 census [25, 35], and the geographical coordinates of IHs.
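The spatial join described above, which the study performed with the MMQGIS plugin in QGIS, can be sketched programmatically as a point-in-polygon join. The Python below is an illustrative equivalent, assuming geopandas is available; the file names and column names are hypothetical, not the study's data files.

# Minimal sketch: attach census-tract sanitation data to snail collection points
# via a point-in-polygon spatial join (illustrative stand-in for QGIS/MMQGIS).
import geopandas as gpd

points = gpd.read_file("ih_points.gpkg")      # IH coordinates (hypothetical file)
tracts = gpd.read_file("census_tracts.gpkg")  # tract polygons + sanitation % columns

points = points.to_crs(tracts.crs)            # make coordinate systems consistent
joined = gpd.sjoin(
    points,
    tracts[["geometry", "pct_sewage", "pct_septic", "pct_rudimentary"]],
    how="left",
    predicate="within",                        # each point inherits its tract's values
)
print(joined[["point_id", "pct_sewage", "pct_septic", "pct_rudimentary"]])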
Data analysis: The relationship between the spatial distribution of autochthonous cases of schistosomiasis from 2007 to 2016 and the presence of IHs from 2015 to 2017 was analyzed using Gi spatial statistics, an indicator of local spatial association [15, 42]. These statistics consider the occurrence of autochthonous cases around the points where IH snails were found. The schistosomiasis incidence rates by census tract were calculated using the MMQGIS plugin [41] in QGIS [40], based on the 2010 population [25], the centroid coordinates of the census tracts [35], and the coordinates of the georeferenced schistosomiasis cases. Gi statistics were then calculated for each geographic coordinate where IHs were detected, taking the incidence rates into account. This allowed us to obtain a profile of schistosomiasis based on the spatial pattern of autochthonous cases and its relationship with the freshwater bodies and the IH snails that colonize them. A significance level of 5% was used, corresponding to the minimum value of the Gi statistic (3.2889 for N = 100) according to Table 3 of Ord and Getis [15]. The Gi statistic takes into account the attribute values measured at the pairs of coordinates corresponding to the locations analyzed (schistosomiasis incidence rates at census tract centroids). As a focal statistic, it considers the pair of geographic coordinates of each IH focus (i) without taking into account the value of the attribute at that point [15]. Gi can be written as \( G_i(d) = \sum_{j} w_{ij}(d)\, x_j \big/ \sum_{j} x_j \), for \( i \neq j \), where j indexes the geographic coordinates of the centroids of the census tracts with schistosomiasis cases, \( w_{ij} \) is the binary, symmetric matrix that defines the neighborhood between the areas, \( x_j \) is the incidence rate of cases at the position of each j, and d is the distance measure established by the neighborhood model. The calculation sums over neighboring samples relative to the position of i; the value \( x_i \) is not included in the sum because, at the place where IHs were collected, the incidence rate was considered null (= 0) [15].
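The study computed these statistics with the 'spdep' package in R. As a rough illustration of the formula above, the Python sketch below re-implements the raw focal Gi under the same binary distance-band neighborhood assumption; the coordinates and incidence rates are made up, and significance testing (against the tabulated critical values of Ord and Getis) is not shown.

# Minimal sketch: focal Gi statistic with a binary distance-band weight,
# G_i(d) = sum_j w_ij(d) * x_j / sum_j x_j, for j != i.
# Coordinates (UTM metres) and incidence rates are illustrative only.
import numpy as np

focus = np.array([602_000.0, 7_460_000.0])       # one point where IHs were found
centroids = np.array([[602_500.0, 7_460_300.0],  # census tract centroids
                      [604_000.0, 7_461_000.0],
                      [610_000.0, 7_470_000.0]])
rates = np.array([12.0, 5.0, 3.0])               # incidence rates x_j per tract

def gi(focus, centroids, rates, d):
    dist = np.linalg.norm(centroids - focus, axis=1)
    w = (dist <= d).astype(float)                # binary weights: 1 within band d
    return (w * rates).sum() / rates.sum()

print(gi(focus, centroids, rates, d=4000.0))     # 0.85: share of rate mass within 4 km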
The Gi statistic can be written as $G_i(d) = \sum_j w_{ij}(d)\, x_j \,/\, \sum_j x_j$, for $i \neq j$, where j indexes the geographic coordinates of the centroids of the census tracts with schistosomiasis cases, $w_{ij}(d)$ is the binary, symmetric matrix that defines the neighborhood between the areas, $x_j$ is the incidence rate at the position of each j, and d is the distance measure established by the neighborhood model. The calculation sums over the samples neighboring position i; the value $x_i$ itself is not included in the sum because, at the places where IHs were collected, the incidence rate was considered null (= 0) 15.

In the present study, a significant result for this statistic indicates that the location in question may be considered a potential schistosomiasis infection area. Gi statistics were calculated using the 'spdep' package 43 in R version 3.2.2 44. The presence of clusters was investigated using a maximum distance of 4000 m between each point where IHs were present and the centroids of the census tracts.

Furthermore, the spatial dependence between the distribution of points corresponding to autochthonous schistosomiasis cases and the points where IHs were found was evaluated, from the geographic coordinates of both, using Ripley's K12-function 45, 46 in R version 3.2.2 44 with the 'Splancs' package 47. We used the borders of the study area in shapefile format and the coordinates of the cases and IHs in UTM format. The K12-function yields the radius of influence, that is, the statistically significant distance up to which a positive spatial dependence between the two point distributions occurs. We used the geographic coordinates of the autochthonous cases and the radius of influence obtained from the K12-function to estimate the kernel density with a plugin available in QGIS version 3.10.5 40.

With the coordinates of the points where IHs were found and the radius of influence of each point analyzed in the Gi statistics (significant distances, upper limit), we identified the clusters of autochthonous cases around points with IHs. This procedure used the MMQGIS plugin 41 and the buffer geographic operation available in QGIS 40. All clusters were merged into a single cluster, restricted to the Ourinhos urban area. The cluster map obtained with the Gi statistics was superimposed onto the corresponding hotspots of autochthonous cases obtained with the kernel tool.
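As a concrete illustration of the Gi step described above, the following minimal R sketch uses the 'spdep' package cited in the text. Object names (ih_xy, centroid_xy, incidence) are illustrative assumptions. Note that spdep's localG() returns the standardized Gi statistic, which is the scale on which the 3.2889 critical value is defined, and that it excludes the value at the focal point, matching the formulation above:

```r
library(spdep)

# Illustrative objects: 'ih_xy' and 'centroid_xy' are two-column matrices of
# projected (UTM) coordinates; 'incidence' holds the tract incidence rates.
coords <- rbind(ih_xy, centroid_xy)
x <- c(rep(0, nrow(ih_xy)), incidence)   # rate at each IH focus set to zero

# Binary, symmetric neighborhood: all pairs within 4000 m
nb <- dnearneigh(coords, d1 = 0, d2 = 4000)
lw <- nb2listw(nb, style = "B", zero.policy = TRUE)

# localG() computes the standardized Gi statistic, excluding the value at i
gi <- localG(x, lw, zero.policy = TRUE)

# Keep the values at the IH foci and compare with the critical value used
# in the study (3.2889 for N = 100; Ord and Getis, Table 3)
gi_foci <- gi[seq_len(nrow(ih_xy))]
potential_risk <- gi_foci > 3.2889
```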
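The bivariate K12 analysis can be sketched along the same lines with the 'splancs' package. This is again an illustrative sketch, not the original script; in particular, the use of Kenv.label() for random-labelling envelopes is our assumption about how the significance envelope around the observed curve could be generated:

```r
library(splancs)

# Illustrative objects: 'cases_xy' and 'ih_xy' are matrices of UTM coordinates;
# 'study_poly' is the study-area border polygon read from the shapefile.
s <- seq(100, 4000, by = 100)   # distances (m) at which the function is evaluated

# Observed bivariate K12-function between cases and IH points
k12 <- k12hat(cases_xy, ih_xy, study_poly, s)

# Simulation envelopes under random labelling of the two point types
env <- Kenv.label(cases_xy, ih_xy, study_poly, nsim = 99, s = s)

# Distances where the observed curve lies above the upper envelope indicate
# positive spatial dependence (in the study, up to ~759 m)
plot(s, k12, type = "l", xlab = "distance (m)", ylab = "K12")
lines(s, env$upper, lty = 2)
lines(s, env$lower, lty = 2)
```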
Ethical considerations

The project was approved by the Committee for Ethics in Research of the Faculdade de Saúde Pública, Universidade de São Paulo, through the Plataforma Brasil system, Ministério da Saúde - Conselho Nacional de Saúde (CAAE: 53559816.0.0000.5421).
RESULTS
IH occurrence in Ourinhos

The geographical occurrence points for B. glabrata, B. tenagophila, and B. straminea at the 20 locations sampled across the eight freshwater bodies identified as IH breeding sites showed the predominant occurrence of B. glabrata (Figure 1C). The relative abundance of B. glabrata in Christoni, Sobra, and Jacu was also higher than that of the other two IH species investigated (Table 1, Figure 2). However, none of the snails were infected with S. mansoni in the parasitological analysis, as reported by Palasio et al. 14. Overall, the three natural S. mansoni IH species occurred in the urban, peri-urban, and rural areas of Ourinhos (Figure 1C, Figure 2).

FIGURE 2: Number of intermediate-host specimens (Biomphalaria species) of S. mansoni collected during 2015-2017 in eight water bodies in the urban, peri-urban, and rural areas of the municipality of Ourinhos, SP, Brazil.

Frequency of schistosomiasis and PIS and their relationship with the IH and sewage system

The frequencies of autochthonous, imported, and unknown-origin cases were 39.7% (25), 7.9% (5), and 52.4% (33), respectively. On average, 6.3 cases per year were detected. Using the PIS information in the epidemiological survey records, eight PIS were geocoded in the water bodies of Christoni, Furninhas, Jacu, Sobra (Pinhos Lake), Lageadinho, Chumbiadinha (Lake), Furnas (Fapi Lake), and the lake of the São Luiz plant (Figure 3). Notably, IHs were found in only the first five of these locations; most PIS descriptions were vague, and the information was unavailable for 48% of them, making geocoding impossible.

In the water bodies of Chumbiadinha, the lake of the São Luiz plant, and Furnas, B. glabrata was reported until 2009, 2009, and 2012, respectively 13, 48, 49. Currently, only B. occidentalis Paraense, 1981 has been identified at these sites 14, and this species is not susceptible to S. mansoni 50. IHs were found both in areas with a high percentage of residents served by the sewage network and in places served by tanks or other systems (Figure 3, Table 1).

FIGURE 3: Map of the municipality of Ourinhos, state of São Paulo, Brazil, highlighting the main probable infection sites (PIS), the percentage of residents served by a sewage network according to the census tracts, the sewage treatment plants (STPs), and the points where S. mansoni intermediate hosts (IHs) were found. The numbers (N°) correspond to the collection points presented in Table 1. Source: IBGE 25, 35, 36, 37; SMA 38; ANA/SNSA 39.
Association between IH occurrence and autochthonous cases

Figure 4A shows the results of the Gi statistics, with the locations considered potential risk areas for human schistosomiasis infection, the extent of the concatenated clusters of autochthonous cases around these points, and the kernel density map.

Significant clusters of autochthonous cases were superimposed on the Christoni stream region from approximately 300 m (lower limit) to 2200 m (upper limit) from the sites where B. glabrata was detected. Another cluster was superimposed on the Água da Veada stream region at a distance of approximately 1600-2000 m (Figure 4A-B) from the B. straminea collection site 14 (Table 1). All clusters were combined into a single cluster (Cluster 1 in Figure 4A).

The graph obtained using the K12-function (Figure 4C) indicates a positive spatial dependence between autochthonous cases and IH snails up to a distance of approximately 759 m.

FIGURE 4: (A) Kernel density map of the urban area (759 m radius of influence) showing the distribution of autochthonous schistosomiasis cases and the significant clusters, according to the Gi statistics, of cases around sampling points with intermediate hosts (foci). (B) Graph showing significant clusters of autochthonous cases around the Christoni stream and Água da Veada stream sampling points with IHs. (C) Graph of the bivariate K12-function analysis in Ourinhos, SP, Brazil, during the 2007-2016 period. (B)* Statistically significant values are above the horizontal line (Gi[d] > 3.28, P < 0.05); (C)** The blue curve above the envelope shows a positive spatial dependence between the autochthonous schistosomiasis cases and the IHs up to a distance of ~759 m.
null
null
[ "Study area", "Data source", "Data analysis", "Ethical considerations", "IH occurrence in Ourinhos", "Frequency of schistosomiasis and PIS and their relationship with the IH and sewage system", "Association between IH occurrence and autochthonous cases" ]
[ "The study was conducted in the municipality of Ourinhos, southwest of the state of São Paulo (22° 58 44″ S, 49° 52 15″ W, Figure 1). The municipality extends over an area of 296 km² and had an estimated population of 113,542 inhabitants in 2019\n25\n\n,\n\n26\n, 97% of which lived in urban areas\n25\n. The municipality is covered by a variety of freshwater bodies located between the Pardo and Paranapanema rivers, which are tributaries of the Paraná river\n27\n.\n\nFIGURE 1:Maps of (A) Brazil, South America; (B) the state of São Paulo; and (C) the municipality of Ourinhos, showing the distribution of S. mansoni intermediate-host (IH) species (Biomphalaria) identified during 2015-2017, the autochthonous cases of 2007-2016, and the water bodies in Ourinhos. The numbers (N°) in Figure 1C correspond to the collection points presented in Table 1. Source: IBGE\n36\n\n,\n\n37\n; SMA\n38\n; SUCEN/Palasio et al.\n14\n; SINAN/CVE.\n", "The geographic coordinates related to each taxonomic group identified from 20 collection points in eight of the freshwater bodies positive for IH Biomphalaria species and the frequency of specimens per species are displayed in Table 1. These data integrate a survey conducted in 2015-2017 at 141 sampling points located along 26 water bodies in urban, peri-urban, and rural areas in the geographical limits of the Ourinhos municipality, according to malacological and geospatial approaches described by Palasio et al.\n14\n. Of the 141 points sampled, 121 were negative for the presence of IHs or were colonized by species of Biomphalaria that were naturally refractory to S. mansoni\n\n14\n. As reported in our preliminary and pivotal study\n14\n, the snails sampled were examined in the laboratory to analyze the presence of cercariae from trematodes\n28\n, and the species were concurrently identified through morphological characters according to Paraense (1975, 1981)\n29\n\n,\n\n30\n and the DNA barcode protocol\n31\n\n,\n\n32\n\n,\n\n33\n. A detailed explanation of the parasitological approach and the morphological and molecular identification of snails used in this study has been provided by Palasio et al.\n14\n.\n\nTABLE 1:Geographic coordinates and number of S. mansoni intermediate-host specimens of each Biomphalaria species collected in water bodies and points and percentage of residents served by a sewage network, septic tank, or rudimentary tank according to the census tracts of the location at the sample points in the municipality of Ourinhos, SP, Brazil, during 2015-2017. Water bodyPoint**Latitude (°)Longitude (°)No. 
of snails Collection date% Residents served by Census tracts* \n\n\n\n\nBtt\n\nBgl\n\nBst\n\nBoc/Bgl\n\nBoc/Btt\n\nBoc/Bst\n\nBoc\n\nBsp\n\n\nsewage network\n\ntank\n\nOther ***\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nsep\n\nrud\n\n Christoni1-22.967600-49.874683-170-----422015-201689.3-10.30.3\n2-22.967117-49.875167-25-----212015-2017\n\n\n\n\n3-22.952833-49.876333-45-68---342015-201695.60.63.7-\n4-22.950050-49.875850-25-10--14282015-2016\n\n\n\n\nÁgua da Veada\n5-22.953222-49.878306--51---591042015-201795.60.63.7-\nFurninhas\n6-22.985556-49.849972125-----50-20157.886.35.9-\n7-22.976766-49.85174532-------2015100---\nJacu\n8-23.016306-49.905000----31-2472015-2016----\n9-23.008944-49.872750-59-----412015-20171.80.796.11.1\n10-23.021167-49.877278----98-47532015-2016----\nJacuzinho\n11-22.995111-49.874333--20----612015-201698.41.30.3-\nLageadinho\n12-23.010972-49.824500-12------201519.19.271.6-\n13-23.022833-49.826733-31------2016\n\n\n\n\nSobra\n14-23.041650-49.860233-35------2016----\n15-23.032400-49.85861757-------2016----\n16-23.027467-49.864867-46------20160.97.191.9-\n17-23.025063-49.863847--38-----2016----\n18-23.038333-49.861400-----53-172016----\nBarreirinha\n19-22.985800-49.80051727-------2016-5.3 94.7-\n20-22.988750-49.801167142-------2016-2017\n\n\n\n\nTotal\n\n\n\n\n383\n\n448\n\n109\n\n78\n\n129\n\n53\n\n194\n 408-----*Source: IBGE\n25\n, ** SUCEN/Palasio et al.\n14\n, numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer.\nBgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary.\n\n*Source: IBGE\n25\n, ** SUCEN/Palasio et al.\n14\n, numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer.\n\nBgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary.\nInformation regarding schistosomiasis cases from 2007 to 2016, including notification date, residence geographical location, probable infection site (PIS), and epidemiologic classification, was obtained from the National Notifiable Disease Information System (SINAN). Access to the necessary information was provided by the Alexandre Vranjac Center for Epidemiologic Surveillance (CVE). We used this information to obtain the frequencies of the occurrence of autochthonous, imported, and unknown-origin cases in the municipality.\nOnce the residence geographical location for each autochthonous schistosomiasis case was known, the batch geocoding tool\n34\n, which uses Google Earth, was used to obtain the respective geographic coordinates (datum WGS84). Cartographic material (maps with rivers, census tract layers, and sanitation data) was obtained from the Brazilian Institute of Geography and Statistics (IBGE)\n25\n\n,\n\n35\n\n,\n\n36\n\n,\n\n37\n, the Secretariat for the Environment of the State of São Paulo (SMA)\n38\n, and the National Secretariat for Environmental Sanitation (SNSA) in partnership with the Brazilian National Water Agency (ANA)\n39\n. The points corresponding to the geographic coordinates of snail IHs and autochthonous schistosomiasis cases were imported into and viewed using QGIS software version 3.10.5\n40\n. \nA fundamental part of the current study was the data related to the percentages of residents served by the sewage system, septic tanks, and/or rudimentary tanks\n25\n, according to census tracts\n35\n. 
These data were computed using the MMQGIS plugin\n41\ncoupled with spatial join geographic operation, both available on QGIS version 3.10.5\n40\n. To do this, we considered the map of the census tracts, statistical data from the 2010 census\n25\n\n,\n\n35\n, and geographical coordinates of IHs.", "The relationship between the spatial distribution of autochthonous cases of schistosomiasis from 2007 to 2016 and the presence of IHs from 2015 to 2017 was analyzed using Gi spatial statistics, which is an indicator of local spatial association\n15\n\n,\n\n42\n. These statistics considered the occurrence of autochthonous cases around the points where IH snails were found.\nThe schistosomiasis incidence rates by census tracts were calculated using the MMQGIS plugin\n41\n in QGIS\n40\n. In the 2010 population\n25\n, the centroid coordinates of the census tracts\n35\n and the coordinates of georeferenced schistosomiasis cases were considered. Gi statistics were calculated for each geographic coordinate where IHs were detected, taking into account the incidence rates. This allowed us to obtain a profile of schistosomiasis based on the spatial pattern of autochthonous schistosomiasis cases and its relationship with freshwater bodies and the IH snails that colonize them. A significance level of 5% was used, which corresponded to the minimum value of the Gi statistics (3.2889) (N = 100) according to Table 3 of the paper published by Ord and Getis\n15\n. \nThe application of the Gi statistics took into account the measured attribute values in the pairs of coordinates corresponding to the locations analyzed (schistosomiasis incidence rates in census tract centroids). As it is a focal statistic, it considered the pair of geographic coordinates of each focus with IH (i) without taking into account the value of the attribute at this point\n15\n.\nGi can be written as Σjwij(d) xj / Σjxj, for i ≠ j, where j represents the geographic coordinates of the centroid of the census tracts where there are schistosomiasis cases, Wij the binary and symmetric matrix that defines the neighborhood between the areas, xj the values of the incidence rates of the cases in the position of each j, and d the measure of the distance established by the neighborhood model. This calculation was performed with the sum of neighboring samples in relation to the position of i, wherein the value xi was not included in the sum as the place where IHs were collected, and the incidence rate was considered null (= 0)\n15\n.\nIn the present study, a significant result for these statistics would indicate that the location in question may be considered a potential infection area for schistosomiasis. Gi statistics were calculated using the ‘spdep’ package\n43\n in R version 3.2.2\n44\n. The presence of clusters was investigated using a maximum distance of 4000 m between each point where IHs were present and the centroids of the census tracts.\nFurthermore, the spatial dependence between the distribution of points corresponding to autochthonous cases of schistosomiasis and those corresponding to places where the IHs were found was evaluated by considering the geographic coordinates of the autochthonous cases and the IH, using Ripley’s K12-function\n45\n\n,\n\n46\n and the R software version 3.2.2\n44\n with the ‘Splancs’ package\n47\n. We used the borders of the study area in a shapefile format and considered the coordinates of the cases and IHs in the UTM format. 
The result of the K12-function allowed us to verify the radius of influence, which is the limited and statistically significant distance where a positive spatial dependence between the two distributions of points occurs. We used the geographic coordinates of the autochthonous cases of schistosomiasis and the radius of influence data of the K12-function result to estimate the kernel density with a plugin available in the program QGIS version 3.10.5\n40\n.\nWith the coordinates of the points where IHs were found and the radius of influence of each point analyzed in the Gi statistics (distances considered significant, higher limit), it was possible to identify the clusters of autochthonous cases around points with IHs. We performed this procedure using the MMQGIS plugin\n41\n and created a buffer geographic operation available on QGIS\n40\n. We merged all clusters into one cluster, which was restricted to the Ourinhos urban area. This cluster map obtained using Gi statistics was superimposed onto the respective autochthonous case hotspots obtained using the kernel tool.", "The project was approved by the Faculdade de Saúde Pública of Universidade de São Paulo, Committee for Ethics in Research, the Plataforma Brasil system, Ministério da Saúde - Conselho Nacional de Saúde (number, CAAE: 53559816.0.0000.5421).", "The geographical occurrence points for B. glabrata, B. tenagophila, and B. straminea over the 20 locations sampled across eight freshwater bodies assigned as breeding sites for IHs demonstrated the predominant occurrence of B. glabrata (Figure 1C). We also found that the relative abundance of B. glabrata was higher in Christoni, Sobra, and Jacu than that of the other two IHs investigated (Table 1, Figure 2). However, none of the snails were infected with S. mansoni in the parasitological analysis, as reported by Palasio et al.\n14\n. Overall, the three natural S. mansoni-IH species occurred in the urban, peri-urban, and rural areas of Ourinhos (Figure 1C, Figure 2).\n\nFIGURE 2:Number of intermediate-host specimens (Biomphalaria species) of S. mansoni collected during 2015-2017 in eight water bodies in the urban, peri-urban, and rural areas of the municipality of Ourinhos, SP, Brazil.\n", "The frequencies of autochthonous, imported, and unknown-origin cases were 39.7% (25), 7.9% (5), and 52.4% (33), respectively. On average, 6.3 cases per year were detected. Using information from the PIS in the epidemiological survey records, eight PIS were geocoded in the water bodies of Christoni, Furninhas, Jacu, Sobra (Pinhos Lake), Lageadinho, Chumbiadinha (Lake), Furnas (Fapi Lake), and Lake of São Luiz plant (Figure 3). It is noteworthy that the presence of IHs was only found in the first five locations, the majority of PIS were vague, and information was not available for 48% of them, making geocoding impossible.\nIn the water bodies of Chumbiadinha, Lake of São Luiz plant, and Furnas, B. glabrata were reported until 2009, 2009, and 2012, respectively\n13\n\n,\n\n48\n\n,\n\n49\n. Currently, only B. occidentalis Paraense, 1981 has been identified in these sites\n14\n, and this species is not susceptible to S. mansoni\n\n50\n. IHs were found in overlapping areas with a high percentage of residents served by the sewage network as well as in places served by tanks or other systems (Figure 3, Table 1). 
\n\nFIGURE 3:Map of the municipality of Ourinhos, state of São Paulo, Brazil, highlighting the main probable infection site (PIS), percentage of residents served by a sewage network according to the census tracts, sewage treatment plants (STPs), and points where S. mansoni intermediate hosts (IHs) were found. The numbers (N°) in this figure correspond to the collection points presented in Table 1. Source: IBGE\n25\n\n,\n\n35\n\n,\n\n36\n\n,\n\n37\n; SMA\n38\n; ANA/SNSA\n39\n.\n", "\nFigure 4A shows the results of the Gi statistics with locations considered potential risk areas for human schistosomiasis infection as well as the extent of the concatenated clusters of autochthonous cases around these points and the kernel density map.\nSignificant clusters of autochthonous cases were superimposed on the Christoni stream region from approximately 300 m (lower limit) to 2200 m (higher limit) from sites where B. glabrata was detected. Another cluster was superimposed on the Água da Veada stream region at a distance of approximately 1600-2000 m (Figure 4A-B) from the B. straminea collection site\n14\n (Table 1). All clusters were combined into a single cluster (Cluster 1 in Figure 4A). \nThe graph obtained using the K12-function shown in Figure 4C indicates a positive spatial dependence up to a distance of approximately 759 m between autochthonous cases and IH snails.\n\nFIGURE 4:\n(A) Kernel density map of the urban area (759 m radius of influence) showing the distribution of autochthonous schistosomiasis cases and significant clusters in the Gi statistics of cases around sampling points with intermediate hosts (foci). (B) Graph showing significant clusters of autochthonous cases around the Christoni stream and Água da Veada stream sampling points with IH. (C) Graph of the bivariate K12-function analysis in Ourinhos, SP, Brazil, during the 2007-2016 period. (B)* Statistically significant values are above the horizontal line (Gi [d] > 3.28, P < 0.05); (C)** The blue curve above the envelope shows a positive spatial dependence between the autochthonous cases of schistosomiasis and the IH up to a distance of ~759 m.\n" ]
[ null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study area", "Data source", "Data analysis", "Ethical considerations", "RESULTS", "IH occurrence in Ourinhos", "Frequency of schistosomiasis and PIS and their relationship with the IH and sewage system", "Association between IH occurrence and autochthonous cases", "DISCUSSION" ]
[ "Schistosomiasis is a parasitic infection that is considered a neglected tropical disease\n1\n. Schistosomiasis mansoni infection in Brazil is associated with the development of the parasite Schistosoma mansoni Sambon, 1907 in three species of snails of the genus Biomphalaria (Preston, 1910), namely, B. glabrata (Say, 1818), B. tenagophila (Orbigny, 1835), and B. straminea (Dunker, 1848)\n2\n. Human infections are highly prevalent, mainly in the northeast of the country and in the southeast, where it is endemic in some areas\n3\n. \nIn the state of São Paulo, human infections occur in specific areas where schistosomiasis endemicity is low\n3\n. Among these areas, the Middle Paranapanema region, where it borders with the state of Paraná, is usually reported as an important endemic region\n4\n. However, a recent study using spatial analysis tools in an area considered a GeoSentinel surveillance site for schistosomiasis pointed out that human schistosomiasis infections are more likely to occur in Ourinhos than in the other regions across the 25 municipalities of the Hydrographic Unit Water of Resources Management of Middle Paranapanema (UGRHI-17)\n5\n\n,\n\n6\n\n,\n\n7\n. Currently, Ourinhos accounts for 93% of all autochthonous cases in Middle Paranapanema\n5\n, with cases reported since 1952\n8\n\n,\n\n9\n.\nThe schistosomiasis cases observed in Ourinhos are probably associated with B. glabrata, which is a natural host to S. mansoni in the municipality\n10\n\n,\n\n11\n. This species was initially identified in Ourinhos in 1919\n12\n and continues to proliferate in water bodies in this municipality\n13\n\n,\n\n14\n. B. tenagophila and B. straminea, two other S. mansoni intermediate-host (IH) species, have also been described/observed in the area\n13\n\n,\n\n14\n.\nThe spatial association between the occurrence of autochthonous cases and the presence of IHs can be analyzed using Gi spatial statistics. This tool, with the support of geographic information systems (GIS), uses the geographic coordinates of locations to find spatial clusters of a certain measure or quantity around a specific point and infer the distances at which these clusters are statistically significant\n15\n. Previous studies have used this tool (Gi or Gi* statistics) to analyze schistosomiasis in Africa\n16\n\n,\n\n17\n\n,\n\n18\n\n,\n\n19\nand vector-borne diseases, such as dengue, in Brazil\n20\n. Additionally, other studies have investigated schistosomiasis using GIS worldwide\n5\n\n,\n\n21\n\n,\n\n22\n\n,\n\n23\n. Thus, GIS and spatial analysis tools may contribute to identifying areas with the highest risk of human schistosomiasis infection and other diseases and consequently help guide public health measures\n21\n\n,\n\n24\n.\nThe present study used a GIS-based approach to identify rural and urban areas at risk of schistosomiasis transmission in Ourinhos (São Paulo, Brazil), combining data sources related to the presence of snails that act as S. mansoni IHs, the historical occurrence of human infection, and the sewage network.", "Study area The study was conducted in the municipality of Ourinhos, southwest of the state of São Paulo (22° 58 44″ S, 49° 52 15″ W, Figure 1). The municipality extends over an area of 296 km² and had an estimated population of 113,542 inhabitants in 2019\n25\n\n,\n\n26\n, 97% of which lived in urban areas\n25\n. 
The municipality is covered by a variety of freshwater bodies located between the Pardo and Paranapanema rivers, which are tributaries of the Paraná river\n27\n.\n\nFIGURE 1:Maps of (A) Brazil, South America; (B) the state of São Paulo; and (C) the municipality of Ourinhos, showing the distribution of S. mansoni intermediate-host (IH) species (Biomphalaria) identified during 2015-2017, the autochthonous cases of 2007-2016, and the water bodies in Ourinhos. The numbers (N°) in Figure 1C correspond to the collection points presented in Table 1. Source: IBGE\n36\n\n,\n\n37\n; SMA\n38\n; SUCEN/Palasio et al.\n14\n; SINAN/CVE.\n\nThe study was conducted in the municipality of Ourinhos, southwest of the state of São Paulo (22° 58 44″ S, 49° 52 15″ W, Figure 1). The municipality extends over an area of 296 km² and had an estimated population of 113,542 inhabitants in 2019\n25\n\n,\n\n26\n, 97% of which lived in urban areas\n25\n. The municipality is covered by a variety of freshwater bodies located between the Pardo and Paranapanema rivers, which are tributaries of the Paraná river\n27\n.\n\nFIGURE 1:Maps of (A) Brazil, South America; (B) the state of São Paulo; and (C) the municipality of Ourinhos, showing the distribution of S. mansoni intermediate-host (IH) species (Biomphalaria) identified during 2015-2017, the autochthonous cases of 2007-2016, and the water bodies in Ourinhos. The numbers (N°) in Figure 1C correspond to the collection points presented in Table 1. Source: IBGE\n36\n\n,\n\n37\n; SMA\n38\n; SUCEN/Palasio et al.\n14\n; SINAN/CVE.\n\nData source The geographic coordinates related to each taxonomic group identified from 20 collection points in eight of the freshwater bodies positive for IH Biomphalaria species and the frequency of specimens per species are displayed in Table 1. These data integrate a survey conducted in 2015-2017 at 141 sampling points located along 26 water bodies in urban, peri-urban, and rural areas in the geographical limits of the Ourinhos municipality, according to malacological and geospatial approaches described by Palasio et al.\n14\n. Of the 141 points sampled, 121 were negative for the presence of IHs or were colonized by species of Biomphalaria that were naturally refractory to S. mansoni\n\n14\n. As reported in our preliminary and pivotal study\n14\n, the snails sampled were examined in the laboratory to analyze the presence of cercariae from trematodes\n28\n, and the species were concurrently identified through morphological characters according to Paraense (1975, 1981)\n29\n\n,\n\n30\n and the DNA barcode protocol\n31\n\n,\n\n32\n\n,\n\n33\n. A detailed explanation of the parasitological approach and the morphological and molecular identification of snails used in this study has been provided by Palasio et al.\n14\n.\n\nTABLE 1:Geographic coordinates and number of S. mansoni intermediate-host specimens of each Biomphalaria species collected in water bodies and points and percentage of residents served by a sewage network, septic tank, or rudimentary tank according to the census tracts of the location at the sample points in the municipality of Ourinhos, SP, Brazil, during 2015-2017. Water bodyPoint**Latitude (°)Longitude (°)No. 
of snails Collection date% Residents served by Census tracts* \n\n\n\n\nBtt\n\nBgl\n\nBst\n\nBoc/Bgl\n\nBoc/Btt\n\nBoc/Bst\n\nBoc\n\nBsp\n\n\nsewage network\n\ntank\n\nOther ***\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nsep\n\nrud\n\n Christoni1-22.967600-49.874683-170-----422015-201689.3-10.30.3\n2-22.967117-49.875167-25-----212015-2017\n\n\n\n\n3-22.952833-49.876333-45-68---342015-201695.60.63.7-\n4-22.950050-49.875850-25-10--14282015-2016\n\n\n\n\nÁgua da Veada\n5-22.953222-49.878306--51---591042015-201795.60.63.7-\nFurninhas\n6-22.985556-49.849972125-----50-20157.886.35.9-\n7-22.976766-49.85174532-------2015100---\nJacu\n8-23.016306-49.905000----31-2472015-2016----\n9-23.008944-49.872750-59-----412015-20171.80.796.11.1\n10-23.021167-49.877278----98-47532015-2016----\nJacuzinho\n11-22.995111-49.874333--20----612015-201698.41.30.3-\nLageadinho\n12-23.010972-49.824500-12------201519.19.271.6-\n13-23.022833-49.826733-31------2016\n\n\n\n\nSobra\n14-23.041650-49.860233-35------2016----\n15-23.032400-49.85861757-------2016----\n16-23.027467-49.864867-46------20160.97.191.9-\n17-23.025063-49.863847--38-----2016----\n18-23.038333-49.861400-----53-172016----\nBarreirinha\n19-22.985800-49.80051727-------2016-5.3 94.7-\n20-22.988750-49.801167142-------2016-2017\n\n\n\n\nTotal\n\n\n\n\n383\n\n448\n\n109\n\n78\n\n129\n\n53\n\n194\n 408-----*Source: IBGE\n25\n, ** SUCEN/Palasio et al.\n14\n, numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer.\nBgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary.\n\n*Source: IBGE\n25\n, ** SUCEN/Palasio et al.\n14\n, numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer.\n\nBgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary.\nInformation regarding schistosomiasis cases from 2007 to 2016, including notification date, residence geographical location, probable infection site (PIS), and epidemiologic classification, was obtained from the National Notifiable Disease Information System (SINAN). Access to the necessary information was provided by the Alexandre Vranjac Center for Epidemiologic Surveillance (CVE). We used this information to obtain the frequencies of the occurrence of autochthonous, imported, and unknown-origin cases in the municipality.\nOnce the residence geographical location for each autochthonous schistosomiasis case was known, the batch geocoding tool\n34\n, which uses Google Earth, was used to obtain the respective geographic coordinates (datum WGS84). Cartographic material (maps with rivers, census tract layers, and sanitation data) was obtained from the Brazilian Institute of Geography and Statistics (IBGE)\n25\n\n,\n\n35\n\n,\n\n36\n\n,\n\n37\n, the Secretariat for the Environment of the State of São Paulo (SMA)\n38\n, and the National Secretariat for Environmental Sanitation (SNSA) in partnership with the Brazilian National Water Agency (ANA)\n39\n. The points corresponding to the geographic coordinates of snail IHs and autochthonous schistosomiasis cases were imported into and viewed using QGIS software version 3.10.5\n40\n. \nA fundamental part of the current study was the data related to the percentages of residents served by the sewage system, septic tanks, and/or rudimentary tanks\n25\n, according to census tracts\n35\n. 
These data were computed using the MMQGIS plugin\n41\ncoupled with spatial join geographic operation, both available on QGIS version 3.10.5\n40\n. To do this, we considered the map of the census tracts, statistical data from the 2010 census\n25\n\n,\n\n35\n, and geographical coordinates of IHs.\nThe geographic coordinates related to each taxonomic group identified from 20 collection points in eight of the freshwater bodies positive for IH Biomphalaria species and the frequency of specimens per species are displayed in Table 1. These data integrate a survey conducted in 2015-2017 at 141 sampling points located along 26 water bodies in urban, peri-urban, and rural areas in the geographical limits of the Ourinhos municipality, according to malacological and geospatial approaches described by Palasio et al.\n14\n. Of the 141 points sampled, 121 were negative for the presence of IHs or were colonized by species of Biomphalaria that were naturally refractory to S. mansoni\n\n14\n. As reported in our preliminary and pivotal study\n14\n, the snails sampled were examined in the laboratory to analyze the presence of cercariae from trematodes\n28\n, and the species were concurrently identified through morphological characters according to Paraense (1975, 1981)\n29\n\n,\n\n30\n and the DNA barcode protocol\n31\n\n,\n\n32\n\n,\n\n33\n. A detailed explanation of the parasitological approach and the morphological and molecular identification of snails used in this study has been provided by Palasio et al.\n14\n.\n\nTABLE 1:Geographic coordinates and number of S. mansoni intermediate-host specimens of each Biomphalaria species collected in water bodies and points and percentage of residents served by a sewage network, septic tank, or rudimentary tank according to the census tracts of the location at the sample points in the municipality of Ourinhos, SP, Brazil, during 2015-2017. Water bodyPoint**Latitude (°)Longitude (°)No. of snails Collection date% Residents served by Census tracts* \n\n\n\n\nBtt\n\nBgl\n\nBst\n\nBoc/Bgl\n\nBoc/Btt\n\nBoc/Bst\n\nBoc\n\nBsp\n\n\nsewage network\n\ntank\n\nOther ***\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nsep\n\nrud\n\n Christoni1-22.967600-49.874683-170-----422015-201689.3-10.30.3\n2-22.967117-49.875167-25-----212015-2017\n\n\n\n\n3-22.952833-49.876333-45-68---342015-201695.60.63.7-\n4-22.950050-49.875850-25-10--14282015-2016\n\n\n\n\nÁgua da Veada\n5-22.953222-49.878306--51---591042015-201795.60.63.7-\nFurninhas\n6-22.985556-49.849972125-----50-20157.886.35.9-\n7-22.976766-49.85174532-------2015100---\nJacu\n8-23.016306-49.905000----31-2472015-2016----\n9-23.008944-49.872750-59-----412015-20171.80.796.11.1\n10-23.021167-49.877278----98-47532015-2016----\nJacuzinho\n11-22.995111-49.874333--20----612015-201698.41.30.3-\nLageadinho\n12-23.010972-49.824500-12------201519.19.271.6-\n13-23.022833-49.826733-31------2016\n\n\n\n\nSobra\n14-23.041650-49.860233-35------2016----\n15-23.032400-49.85861757-------2016----\n16-23.027467-49.864867-46------20160.97.191.9-\n17-23.025063-49.863847--38-----2016----\n18-23.038333-49.861400-----53-172016----\nBarreirinha\n19-22.985800-49.80051727-------2016-5.3 94.7-\n20-22.988750-49.801167142-------2016-2017\n\n\n\n\nTotal\n\n\n\n\n383\n\n448\n\n109\n\n78\n\n129\n\n53\n\n194\n 408-----*Source: IBGE\n25\n, ** SUCEN/Palasio et al.\n14\n, numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer.\nBgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. 
occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary.\n\n*Source: IBGE\n25\n, ** SUCEN/Palasio et al.\n14\n, numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer.\n\nBgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary.\nInformation regarding schistosomiasis cases from 2007 to 2016, including notification date, residence geographical location, probable infection site (PIS), and epidemiologic classification, was obtained from the National Notifiable Disease Information System (SINAN). Access to the necessary information was provided by the Alexandre Vranjac Center for Epidemiologic Surveillance (CVE). We used this information to obtain the frequencies of the occurrence of autochthonous, imported, and unknown-origin cases in the municipality.\nOnce the residence geographical location for each autochthonous schistosomiasis case was known, the batch geocoding tool\n34\n, which uses Google Earth, was used to obtain the respective geographic coordinates (datum WGS84). Cartographic material (maps with rivers, census tract layers, and sanitation data) was obtained from the Brazilian Institute of Geography and Statistics (IBGE)\n25\n\n,\n\n35\n\n,\n\n36\n\n,\n\n37\n, the Secretariat for the Environment of the State of São Paulo (SMA)\n38\n, and the National Secretariat for Environmental Sanitation (SNSA) in partnership with the Brazilian National Water Agency (ANA)\n39\n. The points corresponding to the geographic coordinates of snail IHs and autochthonous schistosomiasis cases were imported into and viewed using QGIS software version 3.10.5\n40\n. \nA fundamental part of the current study was the data related to the percentages of residents served by the sewage system, septic tanks, and/or rudimentary tanks\n25\n, according to census tracts\n35\n. These data were computed using the MMQGIS plugin\n41\ncoupled with spatial join geographic operation, both available on QGIS version 3.10.5\n40\n. To do this, we considered the map of the census tracts, statistical data from the 2010 census\n25\n\n,\n\n35\n, and geographical coordinates of IHs.\nData analysis The relationship between the spatial distribution of autochthonous cases of schistosomiasis from 2007 to 2016 and the presence of IHs from 2015 to 2017 was analyzed using Gi spatial statistics, which is an indicator of local spatial association\n15\n\n,\n\n42\n. These statistics considered the occurrence of autochthonous cases around the points where IH snails were found.\nThe schistosomiasis incidence rates by census tracts were calculated using the MMQGIS plugin\n41\n in QGIS\n40\n. In the 2010 population\n25\n, the centroid coordinates of the census tracts\n35\n and the coordinates of georeferenced schistosomiasis cases were considered. Gi statistics were calculated for each geographic coordinate where IHs were detected, taking into account the incidence rates. This allowed us to obtain a profile of schistosomiasis based on the spatial pattern of autochthonous schistosomiasis cases and its relationship with freshwater bodies and the IH snails that colonize them. A significance level of 5% was used, which corresponded to the minimum value of the Gi statistics (3.2889) (N = 100) according to Table 3 of the paper published by Ord and Getis\n15\n. 
\nThe application of the Gi statistics took into account the measured attribute values in the pairs of coordinates corresponding to the locations analyzed (schistosomiasis incidence rates in census tract centroids). As it is a focal statistic, it considered the pair of geographic coordinates of each focus with IH (i) without taking into account the value of the attribute at this point\n15\n.\nGi can be written as Σjwij(d) xj / Σjxj, for i ≠ j, where j represents the geographic coordinates of the centroid of the census tracts where there are schistosomiasis cases, Wij the binary and symmetric matrix that defines the neighborhood between the areas, xj the values of the incidence rates of the cases in the position of each j, and d the measure of the distance established by the neighborhood model. This calculation was performed with the sum of neighboring samples in relation to the position of i, wherein the value xi was not included in the sum as the place where IHs were collected, and the incidence rate was considered null (= 0)\n15\n.\nIn the present study, a significant result for these statistics would indicate that the location in question may be considered a potential infection area for schistosomiasis. Gi statistics were calculated using the ‘spdep’ package\n43\n in R version 3.2.2\n44\n. The presence of clusters was investigated using a maximum distance of 4000 m between each point where IHs were present and the centroids of the census tracts.\nFurthermore, the spatial dependence between the distribution of points corresponding to autochthonous cases of schistosomiasis and those corresponding to places where the IHs were found was evaluated by considering the geographic coordinates of the autochthonous cases and the IH, using Ripley’s K12-function\n45\n\n,\n\n46\n and the R software version 3.2.2\n44\n with the ‘Splancs’ package\n47\n. We used the borders of the study area in a shapefile format and considered the coordinates of the cases and IHs in the UTM format. The result of the K12-function allowed us to verify the radius of influence, which is the limited and statistically significant distance where a positive spatial dependence between the two distributions of points occurs. We used the geographic coordinates of the autochthonous cases of schistosomiasis and the radius of influence data of the K12-function result to estimate the kernel density with a plugin available in the program QGIS version 3.10.5\n40\n.\nWith the coordinates of the points where IHs were found and the radius of influence of each point analyzed in the Gi statistics (distances considered significant, higher limit), it was possible to identify the clusters of autochthonous cases around points with IHs. We performed this procedure using the MMQGIS plugin\n41\n and created a buffer geographic operation available on QGIS\n40\n. We merged all clusters into one cluster, which was restricted to the Ourinhos urban area. This cluster map obtained using Gi statistics was superimposed onto the respective autochthonous case hotspots obtained using the kernel tool.\nThe relationship between the spatial distribution of autochthonous cases of schistosomiasis from 2007 to 2016 and the presence of IHs from 2015 to 2017 was analyzed using Gi spatial statistics, which is an indicator of local spatial association\n15\n\n,\n\n42\n. 
These statistics considered the occurrence of autochthonous cases around the points where IH snails were found.\nThe schistosomiasis incidence rates by census tracts were calculated using the MMQGIS plugin\n41\n in QGIS\n40\n. In the 2010 population\n25\n, the centroid coordinates of the census tracts\n35\n and the coordinates of georeferenced schistosomiasis cases were considered. Gi statistics were calculated for each geographic coordinate where IHs were detected, taking into account the incidence rates. This allowed us to obtain a profile of schistosomiasis based on the spatial pattern of autochthonous schistosomiasis cases and its relationship with freshwater bodies and the IH snails that colonize them. A significance level of 5% was used, which corresponded to the minimum value of the Gi statistics (3.2889) (N = 100) according to Table 3 of the paper published by Ord and Getis\n15\n. \nThe application of the Gi statistics took into account the measured attribute values in the pairs of coordinates corresponding to the locations analyzed (schistosomiasis incidence rates in census tract centroids). As it is a focal statistic, it considered the pair of geographic coordinates of each focus with IH (i) without taking into account the value of the attribute at this point\n15\n.\nGi can be written as Σjwij(d) xj / Σjxj, for i ≠ j, where j represents the geographic coordinates of the centroid of the census tracts where there are schistosomiasis cases, Wij the binary and symmetric matrix that defines the neighborhood between the areas, xj the values of the incidence rates of the cases in the position of each j, and d the measure of the distance established by the neighborhood model. This calculation was performed with the sum of neighboring samples in relation to the position of i, wherein the value xi was not included in the sum as the place where IHs were collected, and the incidence rate was considered null (= 0)\n15\n.\nIn the present study, a significant result for these statistics would indicate that the location in question may be considered a potential infection area for schistosomiasis. Gi statistics were calculated using the ‘spdep’ package\n43\n in R version 3.2.2\n44\n. The presence of clusters was investigated using a maximum distance of 4000 m between each point where IHs were present and the centroids of the census tracts.\nFurthermore, the spatial dependence between the distribution of points corresponding to autochthonous cases of schistosomiasis and those corresponding to places where the IHs were found was evaluated by considering the geographic coordinates of the autochthonous cases and the IH, using Ripley’s K12-function\n45\n\n,\n\n46\n and the R software version 3.2.2\n44\n with the ‘Splancs’ package\n47\n. We used the borders of the study area in a shapefile format and considered the coordinates of the cases and IHs in the UTM format. The result of the K12-function allowed us to verify the radius of influence, which is the limited and statistically significant distance where a positive spatial dependence between the two distributions of points occurs. 
We used the geographic coordinates of the autochthonous cases of schistosomiasis and the radius of influence data of the K12-function result to estimate the kernel density with a plugin available in the program QGIS version 3.10.5\n40\n.\nWith the coordinates of the points where IHs were found and the radius of influence of each point analyzed in the Gi statistics (distances considered significant, higher limit), it was possible to identify the clusters of autochthonous cases around points with IHs. We performed this procedure using the MMQGIS plugin\n41\n and created a buffer geographic operation available on QGIS\n40\n. We merged all clusters into one cluster, which was restricted to the Ourinhos urban area. This cluster map obtained using Gi statistics was superimposed onto the respective autochthonous case hotspots obtained using the kernel tool.\nEthical considerations The project was approved by the Faculdade de Saúde Pública of Universidade de São Paulo, Committee for Ethics in Research, the Plataforma Brasil system, Ministério da Saúde - Conselho Nacional de Saúde (number, CAAE: 53559816.0.0000.5421).\nThe project was approved by the Faculdade de Saúde Pública of Universidade de São Paulo, Committee for Ethics in Research, the Plataforma Brasil system, Ministério da Saúde - Conselho Nacional de Saúde (number, CAAE: 53559816.0.0000.5421).", "The study was conducted in the municipality of Ourinhos, southwest of the state of São Paulo (22° 58 44″ S, 49° 52 15″ W, Figure 1). The municipality extends over an area of 296 km² and had an estimated population of 113,542 inhabitants in 2019\n25\n\n,\n\n26\n, 97% of which lived in urban areas\n25\n. The municipality is covered by a variety of freshwater bodies located between the Pardo and Paranapanema rivers, which are tributaries of the Paraná river\n27\n.\n\nFIGURE 1:Maps of (A) Brazil, South America; (B) the state of São Paulo; and (C) the municipality of Ourinhos, showing the distribution of S. mansoni intermediate-host (IH) species (Biomphalaria) identified during 2015-2017, the autochthonous cases of 2007-2016, and the water bodies in Ourinhos. The numbers (N°) in Figure 1C correspond to the collection points presented in Table 1. Source: IBGE\n36\n\n,\n\n37\n; SMA\n38\n; SUCEN/Palasio et al.\n14\n; SINAN/CVE.\n", "The geographic coordinates related to each taxonomic group identified from 20 collection points in eight of the freshwater bodies positive for IH Biomphalaria species and the frequency of specimens per species are displayed in Table 1. These data integrate a survey conducted in 2015-2017 at 141 sampling points located along 26 water bodies in urban, peri-urban, and rural areas in the geographical limits of the Ourinhos municipality, according to malacological and geospatial approaches described by Palasio et al.\n14\n. Of the 141 points sampled, 121 were negative for the presence of IHs or were colonized by species of Biomphalaria that were naturally refractory to S. mansoni\n\n14\n. As reported in our preliminary and pivotal study\n14\n, the snails sampled were examined in the laboratory to analyze the presence of cercariae from trematodes\n28\n, and the species were concurrently identified through morphological characters according to Paraense (1975, 1981)\n29\n\n,\n\n30\n and the DNA barcode protocol\n31\n\n,\n\n32\n\n,\n\n33\n. 
A detailed explanation of the parasitological approach and of the morphological and molecular identification of the snails used in this study has been provided by Palasio et al. [14].

TABLE 1: Geographic coordinates and number of S. mansoni intermediate-host specimens of each Biomphalaria species collected at each water body and point, and percentage of residents served by a sewage network, septic tank, or rudimentary tank according to the census tract* of each sample point, municipality of Ourinhos, SP, Brazil, 2015-2017.

| Water body | Point** | Latitude (°) | Longitude (°) | Btt | Bgl | Bst | Boc/Bgl | Boc/Btt | Boc/Bst | Boc | Bsp | Collection date | Sewage network (%) | Septic tank (%) | Rudimentary tank (%) | Other*** (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Christoni | 1 | -22.967600 | -49.874683 | - | 170 | - | - | - | - | - | 42 | 2015-2016 | 89.3 | - | 10.3 | 0.3 |
| Christoni | 2 | -22.967117 | -49.875167 | - | 25 | - | - | - | - | - | 21 | 2015-2017 | | | | |
| Christoni | 3 | -22.952833 | -49.876333 | - | 45 | - | 68 | - | - | - | 34 | 2015-2016 | 95.6 | 0.6 | 3.7 | - |
| Christoni | 4 | -22.950050 | -49.875850 | - | 25 | - | 10 | - | - | 14 | 28 | 2015-2016 | | | | |
| Água da Veada | 5 | -22.953222 | -49.878306 | - | - | 51 | - | - | - | 59 | 104 | 2015-2017 | 95.6 | 0.6 | 3.7 | - |
| Furninhas | 6 | -22.985556 | -49.849972 | 125 | - | - | - | - | - | 50 | - | 2015 | 7.8 | 86.3 | 5.9 | - |
| Furninhas | 7 | -22.976766 | -49.851745 | 32 | - | - | - | - | - | - | - | 2015 | 100 | - | - | - |
| Jacu | 8 | -23.016306 | -49.905000 | - | - | - | - | 31 | - | 24 | 7 | 2015-2016 | - | - | - | - |
| Jacu | 9 | -23.008944 | -49.872750 | - | 59 | - | - | - | - | - | 41 | 2015-2017 | 1.8 | 0.7 | 96.1 | 1.1 |
| Jacu | 10 | -23.021167 | -49.877278 | - | - | - | - | 98 | - | 47 | 53 | 2015-2016 | - | - | - | - |
| Jacuzinho | 11 | -22.995111 | -49.874333 | - | - | 20 | - | - | - | - | 61 | 2015-2016 | 98.4 | 1.3 | 0.3 | - |
| Lageadinho | 12 | -23.010972 | -49.824500 | - | 12 | - | - | - | - | - | - | 2015 | 19.1 | 9.2 | 71.6 | - |
| Lageadinho | 13 | -23.022833 | -49.826733 | - | 31 | - | - | - | - | - | - | 2016 | | | | |
| Sobra | 14 | -23.041650 | -49.860233 | - | 35 | - | - | - | - | - | - | 2016 | - | - | - | - |
| Sobra | 15 | -23.032400 | -49.858617 | 57 | - | - | - | - | - | - | - | 2016 | - | - | - | - |
| Sobra | 16 | -23.027467 | -49.864867 | - | 46 | - | - | - | - | - | - | 2016 | 0.9 | 7.1 | 91.9 | - |
| Sobra | 17 | -23.025063 | -49.863847 | - | - | 38 | - | - | - | - | - | 2016 | - | - | - | - |
| Sobra | 18 | -23.038333 | -49.861400 | - | - | - | - | - | 53 | - | 17 | 2016 | - | - | - | - |
| Barreirinha | 19 | -22.985800 | -49.800517 | 27 | - | - | - | - | - | - | - | 2016 | - | 5.3 | 94.7 | - |
| Barreirinha | 20 | -22.988750 | -49.801167 | 142 | - | - | - | - | - | - | - | 2016-2017 | | | | |
| Total | | | | 383 | 448 | 109 | 78 | 129 | 53 | 194 | 408 | | | | | |

*Source: IBGE [25]. **SUCEN/Palasio et al. [14]; point numbers as presented in Figure 1. ***Other = ditch, river, lake, or other sewer. Bgl: B. glabrata; Bst: B. straminea; Btt: B. tenagophila; Boc: B. occidentalis; Bsp: Biomphalaria spp.

Information on schistosomiasis cases from 2007 to 2016, including notification date, residence location, probable infection site (PIS), and epidemiologic classification, was obtained from the National Notifiable Disease Information System (SINAN). Access to this information was provided by the Alexandre Vranjac Center for Epidemiologic Surveillance (CVE). It was used to obtain the frequencies of autochthonous, imported, and unknown-origin cases in the municipality.

Once the residence location of each autochthonous schistosomiasis case was known, the batch geocoding tool [34], which uses Google Earth, was used to obtain the corresponding geographic coordinates (datum WGS84).
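As an illustration of how the geocoded cases feed the incidence rates used in the Data analysis section, the fragment below aggregates cases per census tract against the 2010 population. The tract identifiers, population figures, and the per-100,000 scaling are hypothetical choices for the sketch, not values from the study.

```python
import pandas as pd

# Hypothetical geocoded autochthonous cases, each already assigned
# to the census tract containing the residence coordinates.
cases = pd.DataFrame({"tract_id": ["t01", "t01", "t02"]})
pop_2010 = pd.Series({"t01": 1200, "t02": 950, "t03": 780})

counts = cases["tract_id"].value_counts()
# Incidence per 100,000 residents; tracts without cases get 0.
incidence = (counts.reindex(pop_2010.index, fill_value=0)
             / pop_2010 * 100_000)
print(incidence)
```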
Cartographic material (maps with rivers, census tract layers, and sanitation data) was obtained from the Brazilian Institute of Geography and Statistics (IBGE) [25,35-37], the Secretariat for the Environment of the State of São Paulo (SMA) [38], and the National Secretariat for Environmental Sanitation (SNSA) in partnership with the Brazilian National Water Agency (ANA) [39]. The points corresponding to the geographic coordinates of the snail IHs and of the autochthonous schistosomiasis cases were imported into and viewed in QGIS software version 3.10.5 [40].

A fundamental part of the current study was the data on the percentages of residents served by the sewage network, septic tanks, and/or rudimentary tanks [25], by census tract [35]. These data were computed using the MMQGIS plugin [41] coupled with the spatial join geographic operation, both available in QGIS version 3.10.5 [40], considering the census tract map, statistical data from the 2010 census [25,35], and the geographic coordinates of the IHs.
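Both QGIS operations described above, the spatial join that attaches census-tract sanitation attributes to each IH point and the later buffer-and-merge step, have direct analogues in GeoPandas. The sketch below uses hypothetical file and column names and assumes the layers share a projected (UTM) CRS so that distances are in meters.

```python
import geopandas as gpd

tracts = gpd.read_file("census_tracts_2010.shp")  # polygons + sanitation %
snails = gpd.read_file("ih_points.shp")           # IH collection points

# Spatial join: each point inherits the attributes of the census
# tract it falls in (e.g., % of residents on the sewage network).
joined = gpd.sjoin(snails, tracts, how="left", predicate="within")

# Buffer + dissolve: circles of a significant Gi distance around the
# IH points (the 2200 m upper limit found for Christoni is used here
# purely as an example), merged into a single cluster geometry.
cluster = snails.geometry.buffer(2200).unary_union
```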
RESULTS

IH occurrence in Ourinhos: The geographical occurrence points for B. glabrata, B. tenagophila, and B. straminea across the 20 locations sampled in the eight freshwater bodies identified as IH breeding sites demonstrate the predominance of B. glabrata (Figure 1C). The relative abundance of B. glabrata was also higher in Christoni, Sobra, and Jacu than that of the other two IHs investigated (Table 1, Figure 2). However, none of the snails were infected with S. mansoni in the parasitological analysis, as reported by Palasio et al. [14]. Overall, the three natural S. mansoni IH species occurred in the urban, peri-urban, and rural areas of Ourinhos (Figure 1C, Figure 2).

FIGURE 2: Number of intermediate-host specimens (Biomphalaria species) of S. mansoni collected during 2015-2017 in eight water bodies in the urban, peri-urban, and rural areas of the municipality of Ourinhos, SP, Brazil.
Frequency of schistosomiasis cases and PIS, and their relationship with the IHs and the sewage system: The frequencies of autochthonous, imported, and unknown-origin cases were 39.7% (25), 7.9% (5), and 52.4% (33), respectively. On average, 6.3 cases per year were detected. Using the PIS information in the epidemiological survey records, eight PIS were geocoded in the water bodies of Christoni, Furninhas, Jacu, Sobra (Pinhos Lake), Lageadinho, Chumbiadinha (Lake), Furnas (Fapi Lake), and the Lake of the São Luiz plant (Figure 3). Notably, IHs were found only in the first five locations; most PIS records were vague, and the information was unavailable for 48% of them, making geocoding impossible.

In the water bodies of Chumbiadinha, the Lake of the São Luiz plant, and Furnas, B. glabrata was reported until 2009, 2009, and 2012, respectively [13,48,49]. Currently, only B. occidentalis Paraense, 1981 has been identified at these sites [14], and this species is not susceptible to S. mansoni [50]. IHs were found both in areas with a high percentage of residents served by the sewage network and in places served by tanks or other systems (Figure 3, Table 1).

FIGURE 3: Map of the municipality of Ourinhos, state of São Paulo, Brazil, highlighting the main probable infection sites (PIS), the percentage of residents served by a sewage network according to the census tracts, the sewage treatment plants (STPs), and the points where S. mansoni intermediate hosts (IHs) were found. The numbers (N°) correspond to the collection points presented in Table 1. Source: IBGE [25,35-37]; SMA [38]; ANA/SNSA [39].
Association between IH occurrence and autochthonous cases: Figure 4A shows the results of the Gi statistics, with the locations considered potential risk areas for human schistosomiasis infection, the extent of the concatenated clusters of autochthonous cases around these points, and the kernel density map.

Significant clusters of autochthonous cases were superimposed on the Christoni stream region from approximately 300 m (lower limit) to 2200 m (upper limit) from the sites where B. glabrata was detected. Another cluster was superimposed on the Água da Veada stream region at a distance of approximately 1600-2000 m (Figure 4A-B) from the B. straminea collection site [14] (Table 1). All clusters were combined into a single cluster (Cluster 1 in Figure 4A).

The K12-function graph in Figure 4C indicates a positive spatial dependence between autochthonous cases and IH snails up to a distance of approximately 759 m.

FIGURE 4: (A) Kernel density map of the urban area (759 m radius of influence) showing the distribution of autochthonous schistosomiasis cases and the significant Gi clusters of cases around sampling points with intermediate hosts (foci). (B) Graph showing significant clusters of autochthonous cases around the Christoni stream and Água da Veada stream sampling points with IHs. (C) Graph of the bivariate K12-function analysis in Ourinhos, SP, Brazil, during the 2007-2016 period. (B)* Statistically significant values are above the horizontal line (Gi[d] > 3.28, P < 0.05); (C)** The blue curve above the envelope shows a positive spatial dependence between the autochthonous cases of schistosomiasis and the IHs up to a distance of ~759 m.
DISCUSSION

In this study, associating the autochthonous cases reported in Ourinhos (2007-2016) with the spatial location of the IHs and the sewage network identified the Christoni freshwater body as the area most suitable for human schistosomiasis infection. The statistical results corroborate previous findings [5,51] showing that the Christoni stream is the area most probably at highest risk of peridomestic schistosomiasis transmission [51,52]. Interestingly, beyond identifying specific points, the Gi statistics provided important information on the significant distances related to the local occurrence of schistosomiasis infection, a result not previously reported in the literature.

The species inventory of the Christoni stream shows that B. glabrata is abundant and predominates at four specific points [14], corroborating data from previous surveys [10,13,48,49]. Although the B. glabrata samples collected were negative for cercariae, this species is known to be the most suitable IH for the development and transmission of the parasite [11,53] and, based on previous studies, is considered the most competent IH species for S. mansoni transmission in the Paranapanema region [51,54].

In Ourinhos, the predominance of B. glabrata in census tracts where sanitary sewage still relies on septic and/or rudimentary tanks is relevant and of medical concern. In the census tracts crossed by the Christoni stream, 89.3% of the residents are served by a sewage network and 10.6% by rudimentary tanks and other drains [25]. This sanitation pattern is a potential explanation both for the decrease in the number of autochthonous cases in recent years and for the maintenance of focal transmission in this municipality [5].

B. straminea is resilient to extreme environmental variation and is capable of adapting to altered environments [55]. The presence of B. straminea in the Água da Veada stream, where previous surveys indicated colonization by B. tenagophila [13] and B. glabrata [49,56], is further evidence of its expansion potential. Regarding the natural susceptibility of B. straminea to S. mansoni, the physiological adaptation of this species to the parasite is relatively high in snails inhabiting northeastern Brazil [57].
The occurrence of B. straminea in Água da Veada, together with the fact that a portion of the nearby residents are still served by septic (0.6%) or rudimentary (3.7%) tanks [25], suggests the need for enhanced surveillance of areas colonized by this species. Although designated a statistically significant cluster for schistosomiasis infection, the clusters in Água da Veada are more likely to result from a bias associated with its proximity to Christoni.

Although the Gi statistics indicated significant associations only for Christoni, the water bodies of Sobra, Lageadinho, and Jacu also require monitoring, as B. glabrata is present there as well [14], and the percentage of residents served by the sewage system near these areas is below 20% [25]. B. glabrata has been recorded in the Jacu water bodies, located in peri-urban areas, in the past and continues to be found there today [13,14,58]. In the Sobra and Lageadinho water bodies, located in rural areas, the first record of this species was in 2009 [49]; until then, only B. straminea had been recorded in the Sobra water bodies [59].

The results of this study demonstrate that GIS tools, combined with malacology, epidemiological data, and sewerage infrastructure, have the potential to improve schistosomiasis control, fostering the use of new technologies to locally eliminate future infections.

One limitation of this study is the discrepancy between the snail collection period (2015-2017) and the period covered by the schistosomiasis case data (2007-2016). This limitation may be partly overcome by comparing our results with those in the literature [13,48,49,54,56,58,59].

For the 2007-2016 period, the Gi statistics identified the Christoni stream as the only location with significant clusters of autochthonous cases associated with the presence of B. glabrata. This species is therefore a candidate for the main target of environmental monitoring measures in the municipality. The technique also showed that associating the residence locations of autochthonous cases with the spatial distribution of the IHs provides vital information on potential transmission areas. Despite the absence of cercariae in the B. glabrata samples collected in Ourinhos, the high susceptibility of this species to S. mansoni under laboratory conditions [11,60] indicates a risk of schistosomiasis persistence in the region.

Moreover, the Gi statistics partially overcame the lack of precise information on the location of the PIS, which is consistent with the characterization of schistosomiasis transmission as predominantly peridomestic [51]. The proximity between water bodies and residences is another typical feature of the incipient urbanization process.

The fact that the significant distance in the Gi statistics is approximately 2 km allowed us to calibrate surveillance activities to a concise, statistically pre-established area. Accordingly, schistosomiasis control and monitoring activities can be developed at well-defined focal points, rationalizing the use of public resources, given that Brazil spent approximately 7.7 million dollars on controlling the infection in 2015 [61].
Therefore, the information presented in this study, together with the tools employed, may be adequate to develop and direct surveillance actions that contribute to the control and even elimination of schistosomiasis in the municipality of Ourinhos.
[ "intro", "methods", null, null, null, null, "results", null, null, null, "discussion" ]
[ "Schistosomiasis", "Biomphalaria", "Spatial analysis", "Gi statistics", "Georeferencing", "Epidemiology" ]
FIGURE 1:Maps of (A) Brazil, South America; (B) the state of São Paulo; and (C) the municipality of Ourinhos, showing the distribution of S. mansoni intermediate-host (IH) species (Biomphalaria) identified during 2015-2017, the autochthonous cases of 2007-2016, and the water bodies in Ourinhos. The numbers (N°) in Figure 1C correspond to the collection points presented in Table 1. Source: IBGE 36 , 37 ; SMA 38 ; SUCEN/Palasio et al. 14 ; SINAN/CVE. The study was conducted in the municipality of Ourinhos, southwest of the state of São Paulo (22° 58 44″ S, 49° 52 15″ W, Figure 1). The municipality extends over an area of 296 km² and had an estimated population of 113,542 inhabitants in 2019 25 , 26 , 97% of which lived in urban areas 25 . The municipality is covered by a variety of freshwater bodies located between the Pardo and Paranapanema rivers, which are tributaries of the Paraná river 27 . FIGURE 1:Maps of (A) Brazil, South America; (B) the state of São Paulo; and (C) the municipality of Ourinhos, showing the distribution of S. mansoni intermediate-host (IH) species (Biomphalaria) identified during 2015-2017, the autochthonous cases of 2007-2016, and the water bodies in Ourinhos. The numbers (N°) in Figure 1C correspond to the collection points presented in Table 1. Source: IBGE 36 , 37 ; SMA 38 ; SUCEN/Palasio et al. 14 ; SINAN/CVE. Data source The geographic coordinates related to each taxonomic group identified from 20 collection points in eight of the freshwater bodies positive for IH Biomphalaria species and the frequency of specimens per species are displayed in Table 1. These data integrate a survey conducted in 2015-2017 at 141 sampling points located along 26 water bodies in urban, peri-urban, and rural areas in the geographical limits of the Ourinhos municipality, according to malacological and geospatial approaches described by Palasio et al. 14 . Of the 141 points sampled, 121 were negative for the presence of IHs or were colonized by species of Biomphalaria that were naturally refractory to S. mansoni 14 . As reported in our preliminary and pivotal study 14 , the snails sampled were examined in the laboratory to analyze the presence of cercariae from trematodes 28 , and the species were concurrently identified through morphological characters according to Paraense (1975, 1981) 29 , 30 and the DNA barcode protocol 31 , 32 , 33 . A detailed explanation of the parasitological approach and the morphological and molecular identification of snails used in this study has been provided by Palasio et al. 14 . TABLE 1:Geographic coordinates and number of S. mansoni intermediate-host specimens of each Biomphalaria species collected in water bodies and points and percentage of residents served by a sewage network, septic tank, or rudimentary tank according to the census tracts of the location at the sample points in the municipality of Ourinhos, SP, Brazil, during 2015-2017. Water bodyPoint**Latitude (°)Longitude (°)No. 
of snails Collection date% Residents served by Census tracts* Btt Bgl Bst Boc/Bgl Boc/Btt Boc/Bst Boc Bsp sewage network tank Other *** sep rud Christoni1-22.967600-49.874683-170-----422015-201689.3-10.30.3 2-22.967117-49.875167-25-----212015-2017 3-22.952833-49.876333-45-68---342015-201695.60.63.7- 4-22.950050-49.875850-25-10--14282015-2016 Água da Veada 5-22.953222-49.878306--51---591042015-201795.60.63.7- Furninhas 6-22.985556-49.849972125-----50-20157.886.35.9- 7-22.976766-49.85174532-------2015100--- Jacu 8-23.016306-49.905000----31-2472015-2016---- 9-23.008944-49.872750-59-----412015-20171.80.796.11.1 10-23.021167-49.877278----98-47532015-2016---- Jacuzinho 11-22.995111-49.874333--20----612015-201698.41.30.3- Lageadinho 12-23.010972-49.824500-12------201519.19.271.6- 13-23.022833-49.826733-31------2016 Sobra 14-23.041650-49.860233-35------2016---- 15-23.032400-49.85861757-------2016---- 16-23.027467-49.864867-46------20160.97.191.9- 17-23.025063-49.863847--38-----2016---- 18-23.038333-49.861400-----53-172016---- Barreirinha 19-22.985800-49.80051727-------2016-5.3 94.7- 20-22.988750-49.801167142-------2016-2017 Total 383 448 109 78 129 53 194 408-----*Source: IBGE 25 , ** SUCEN/Palasio et al. 14 , numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer. Bgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary. *Source: IBGE 25 , ** SUCEN/Palasio et al. 14 , numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer. Bgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary. Information regarding schistosomiasis cases from 2007 to 2016, including notification date, residence geographical location, probable infection site (PIS), and epidemiologic classification, was obtained from the National Notifiable Disease Information System (SINAN). Access to the necessary information was provided by the Alexandre Vranjac Center for Epidemiologic Surveillance (CVE). We used this information to obtain the frequencies of the occurrence of autochthonous, imported, and unknown-origin cases in the municipality. Once the residence geographical location for each autochthonous schistosomiasis case was known, the batch geocoding tool 34 , which uses Google Earth, was used to obtain the respective geographic coordinates (datum WGS84). Cartographic material (maps with rivers, census tract layers, and sanitation data) was obtained from the Brazilian Institute of Geography and Statistics (IBGE) 25 , 35 , 36 , 37 , the Secretariat for the Environment of the State of São Paulo (SMA) 38 , and the National Secretariat for Environmental Sanitation (SNSA) in partnership with the Brazilian National Water Agency (ANA) 39 . The points corresponding to the geographic coordinates of snail IHs and autochthonous schistosomiasis cases were imported into and viewed using QGIS software version 3.10.5 40 . A fundamental part of the current study was the data related to the percentages of residents served by the sewage system, septic tanks, and/or rudimentary tanks 25 , according to census tracts 35 . These data were computed using the MMQGIS plugin 41 coupled with spatial join geographic operation, both available on QGIS version 3.10.5 40 . To do this, we considered the map of the census tracts, statistical data from the 2010 census 25 , 35 , and geographical coordinates of IHs. 
The geographic coordinates related to each taxonomic group identified from 20 collection points in eight of the freshwater bodies positive for IH Biomphalaria species and the frequency of specimens per species are displayed in Table 1. These data integrate a survey conducted in 2015-2017 at 141 sampling points located along 26 water bodies in urban, peri-urban, and rural areas in the geographical limits of the Ourinhos municipality, according to malacological and geospatial approaches described by Palasio et al. 14 . Of the 141 points sampled, 121 were negative for the presence of IHs or were colonized by species of Biomphalaria that were naturally refractory to S. mansoni 14 . As reported in our preliminary and pivotal study 14 , the snails sampled were examined in the laboratory to analyze the presence of cercariae from trematodes 28 , and the species were concurrently identified through morphological characters according to Paraense (1975, 1981) 29 , 30 and the DNA barcode protocol 31 , 32 , 33 . A detailed explanation of the parasitological approach and the morphological and molecular identification of snails used in this study has been provided by Palasio et al. 14 . TABLE 1:Geographic coordinates and number of S. mansoni intermediate-host specimens of each Biomphalaria species collected in water bodies and points and percentage of residents served by a sewage network, septic tank, or rudimentary tank according to the census tracts of the location at the sample points in the municipality of Ourinhos, SP, Brazil, during 2015-2017. Water bodyPoint**Latitude (°)Longitude (°)No. of snails Collection date% Residents served by Census tracts* Btt Bgl Bst Boc/Bgl Boc/Btt Boc/Bst Boc Bsp sewage network tank Other *** sep rud Christoni1-22.967600-49.874683-170-----422015-201689.3-10.30.3 2-22.967117-49.875167-25-----212015-2017 3-22.952833-49.876333-45-68---342015-201695.60.63.7- 4-22.950050-49.875850-25-10--14282015-2016 Água da Veada 5-22.953222-49.878306--51---591042015-201795.60.63.7- Furninhas 6-22.985556-49.849972125-----50-20157.886.35.9- 7-22.976766-49.85174532-------2015100--- Jacu 8-23.016306-49.905000----31-2472015-2016---- 9-23.008944-49.872750-59-----412015-20171.80.796.11.1 10-23.021167-49.877278----98-47532015-2016---- Jacuzinho 11-22.995111-49.874333--20----612015-201698.41.30.3- Lageadinho 12-23.010972-49.824500-12------201519.19.271.6- 13-23.022833-49.826733-31------2016 Sobra 14-23.041650-49.860233-35------2016---- 15-23.032400-49.85861757-------2016---- 16-23.027467-49.864867-46------20160.97.191.9- 17-23.025063-49.863847--38-----2016---- 18-23.038333-49.861400-----53-172016---- Barreirinha 19-22.985800-49.80051727-------2016-5.3 94.7- 20-22.988750-49.801167142-------2016-2017 Total 383 448 109 78 129 53 194 408-----*Source: IBGE 25 , ** SUCEN/Palasio et al. 14 , numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer. Bgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary. *Source: IBGE 25 , ** SUCEN/Palasio et al. 14 , numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer. Bgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary. 
Information regarding schistosomiasis cases from 2007 to 2016, including notification date, residence geographical location, probable infection site (PIS), and epidemiologic classification, was obtained from the National Notifiable Disease Information System (SINAN). Access to the necessary information was provided by the Alexandre Vranjac Center for Epidemiologic Surveillance (CVE). We used this information to obtain the frequencies of the occurrence of autochthonous, imported, and unknown-origin cases in the municipality. Once the residence geographical location for each autochthonous schistosomiasis case was known, the batch geocoding tool 34 , which uses Google Earth, was used to obtain the respective geographic coordinates (datum WGS84). Cartographic material (maps with rivers, census tract layers, and sanitation data) was obtained from the Brazilian Institute of Geography and Statistics (IBGE) 25 , 35 , 36 , 37 , the Secretariat for the Environment of the State of São Paulo (SMA) 38 , and the National Secretariat for Environmental Sanitation (SNSA) in partnership with the Brazilian National Water Agency (ANA) 39 . The points corresponding to the geographic coordinates of snail IHs and autochthonous schistosomiasis cases were imported into and viewed using QGIS software version 3.10.5 40 . A fundamental part of the current study was the data related to the percentages of residents served by the sewage system, septic tanks, and/or rudimentary tanks 25 , according to census tracts 35 . These data were computed using the MMQGIS plugin 41 coupled with spatial join geographic operation, both available on QGIS version 3.10.5 40 . To do this, we considered the map of the census tracts, statistical data from the 2010 census 25 , 35 , and geographical coordinates of IHs. Data analysis The relationship between the spatial distribution of autochthonous cases of schistosomiasis from 2007 to 2016 and the presence of IHs from 2015 to 2017 was analyzed using Gi spatial statistics, which is an indicator of local spatial association 15 , 42 . These statistics considered the occurrence of autochthonous cases around the points where IH snails were found. The schistosomiasis incidence rates by census tracts were calculated using the MMQGIS plugin 41 in QGIS 40 . In the 2010 population 25 , the centroid coordinates of the census tracts 35 and the coordinates of georeferenced schistosomiasis cases were considered. Gi statistics were calculated for each geographic coordinate where IHs were detected, taking into account the incidence rates. This allowed us to obtain a profile of schistosomiasis based on the spatial pattern of autochthonous schistosomiasis cases and its relationship with freshwater bodies and the IH snails that colonize them. A significance level of 5% was used, which corresponded to the minimum value of the Gi statistics (3.2889) (N = 100) according to Table 3 of the paper published by Ord and Getis 15 . The application of the Gi statistics took into account the measured attribute values in the pairs of coordinates corresponding to the locations analyzed (schistosomiasis incidence rates in census tract centroids). As it is a focal statistic, it considered the pair of geographic coordinates of each focus with IH (i) without taking into account the value of the attribute at this point 15 . 
Gi can be written as Σjwij(d) xj / Σjxj, for i ≠ j, where j represents the geographic coordinates of the centroid of the census tracts where there are schistosomiasis cases, Wij the binary and symmetric matrix that defines the neighborhood between the areas, xj the values of the incidence rates of the cases in the position of each j, and d the measure of the distance established by the neighborhood model. This calculation was performed with the sum of neighboring samples in relation to the position of i, wherein the value xi was not included in the sum as the place where IHs were collected, and the incidence rate was considered null (= 0) 15 . In the present study, a significant result for these statistics would indicate that the location in question may be considered a potential infection area for schistosomiasis. Gi statistics were calculated using the ‘spdep’ package 43 in R version 3.2.2 44 . The presence of clusters was investigated using a maximum distance of 4000 m between each point where IHs were present and the centroids of the census tracts. Furthermore, the spatial dependence between the distribution of points corresponding to autochthonous cases of schistosomiasis and those corresponding to places where the IHs were found was evaluated by considering the geographic coordinates of the autochthonous cases and the IH, using Ripley’s K12-function 45 , 46 and the R software version 3.2.2 44 with the ‘Splancs’ package 47 . We used the borders of the study area in a shapefile format and considered the coordinates of the cases and IHs in the UTM format. The result of the K12-function allowed us to verify the radius of influence, which is the limited and statistically significant distance where a positive spatial dependence between the two distributions of points occurs. We used the geographic coordinates of the autochthonous cases of schistosomiasis and the radius of influence data of the K12-function result to estimate the kernel density with a plugin available in the program QGIS version 3.10.5 40 . With the coordinates of the points where IHs were found and the radius of influence of each point analyzed in the Gi statistics (distances considered significant, higher limit), it was possible to identify the clusters of autochthonous cases around points with IHs. We performed this procedure using the MMQGIS plugin 41 and created a buffer geographic operation available on QGIS 40 . We merged all clusters into one cluster, which was restricted to the Ourinhos urban area. This cluster map obtained using Gi statistics was superimposed onto the respective autochthonous case hotspots obtained using the kernel tool. The relationship between the spatial distribution of autochthonous cases of schistosomiasis from 2007 to 2016 and the presence of IHs from 2015 to 2017 was analyzed using Gi spatial statistics, which is an indicator of local spatial association 15 , 42 . These statistics considered the occurrence of autochthonous cases around the points where IH snails were found. The schistosomiasis incidence rates by census tracts were calculated using the MMQGIS plugin 41 in QGIS 40 . In the 2010 population 25 , the centroid coordinates of the census tracts 35 and the coordinates of georeferenced schistosomiasis cases were considered. Gi statistics were calculated for each geographic coordinate where IHs were detected, taking into account the incidence rates. 
This allowed us to obtain a profile of schistosomiasis based on the spatial pattern of autochthonous schistosomiasis cases and its relationship with freshwater bodies and the IH snails that colonize them. A significance level of 5% was used, which corresponded to the minimum value of the Gi statistics (3.2889) (N = 100) according to Table 3 of the paper published by Ord and Getis 15 . The application of the Gi statistics took into account the measured attribute values in the pairs of coordinates corresponding to the locations analyzed (schistosomiasis incidence rates in census tract centroids). As it is a focal statistic, it considered the pair of geographic coordinates of each focus with IH (i) without taking into account the value of the attribute at this point 15 . Gi can be written as Σjwij(d) xj / Σjxj, for i ≠ j, where j represents the geographic coordinates of the centroid of the census tracts where there are schistosomiasis cases, Wij the binary and symmetric matrix that defines the neighborhood between the areas, xj the values of the incidence rates of the cases in the position of each j, and d the measure of the distance established by the neighborhood model. This calculation was performed with the sum of neighboring samples in relation to the position of i, wherein the value xi was not included in the sum as the place where IHs were collected, and the incidence rate was considered null (= 0) 15 . In the present study, a significant result for these statistics would indicate that the location in question may be considered a potential infection area for schistosomiasis. Gi statistics were calculated using the ‘spdep’ package 43 in R version 3.2.2 44 . The presence of clusters was investigated using a maximum distance of 4000 m between each point where IHs were present and the centroids of the census tracts. Furthermore, the spatial dependence between the distribution of points corresponding to autochthonous cases of schistosomiasis and those corresponding to places where the IHs were found was evaluated by considering the geographic coordinates of the autochthonous cases and the IH, using Ripley’s K12-function 45 , 46 and the R software version 3.2.2 44 with the ‘Splancs’ package 47 . We used the borders of the study area in a shapefile format and considered the coordinates of the cases and IHs in the UTM format. The result of the K12-function allowed us to verify the radius of influence, which is the limited and statistically significant distance where a positive spatial dependence between the two distributions of points occurs. We used the geographic coordinates of the autochthonous cases of schistosomiasis and the radius of influence data of the K12-function result to estimate the kernel density with a plugin available in the program QGIS version 3.10.5 40 . With the coordinates of the points where IHs were found and the radius of influence of each point analyzed in the Gi statistics (distances considered significant, higher limit), it was possible to identify the clusters of autochthonous cases around points with IHs. We performed this procedure using the MMQGIS plugin 41 and created a buffer geographic operation available on QGIS 40 . We merged all clusters into one cluster, which was restricted to the Ourinhos urban area. This cluster map obtained using Gi statistics was superimposed onto the respective autochthonous case hotspots obtained using the kernel tool. 
Ethical considerations The project was approved by the Faculdade de Saúde Pública of Universidade de São Paulo, Committee for Ethics in Research, the Plataforma Brasil system, Ministério da Saúde - Conselho Nacional de Saúde (number, CAAE: 53559816.0.0000.5421). The project was approved by the Faculdade de Saúde Pública of Universidade de São Paulo, Committee for Ethics in Research, the Plataforma Brasil system, Ministério da Saúde - Conselho Nacional de Saúde (number, CAAE: 53559816.0.0000.5421). Study area: The study was conducted in the municipality of Ourinhos, southwest of the state of São Paulo (22° 58 44″ S, 49° 52 15″ W, Figure 1). The municipality extends over an area of 296 km² and had an estimated population of 113,542 inhabitants in 2019 25 , 26 , 97% of which lived in urban areas 25 . The municipality is covered by a variety of freshwater bodies located between the Pardo and Paranapanema rivers, which are tributaries of the Paraná river 27 . FIGURE 1:Maps of (A) Brazil, South America; (B) the state of São Paulo; and (C) the municipality of Ourinhos, showing the distribution of S. mansoni intermediate-host (IH) species (Biomphalaria) identified during 2015-2017, the autochthonous cases of 2007-2016, and the water bodies in Ourinhos. The numbers (N°) in Figure 1C correspond to the collection points presented in Table 1. Source: IBGE 36 , 37 ; SMA 38 ; SUCEN/Palasio et al. 14 ; SINAN/CVE. Data source: The geographic coordinates related to each taxonomic group identified from 20 collection points in eight of the freshwater bodies positive for IH Biomphalaria species and the frequency of specimens per species are displayed in Table 1. These data integrate a survey conducted in 2015-2017 at 141 sampling points located along 26 water bodies in urban, peri-urban, and rural areas in the geographical limits of the Ourinhos municipality, according to malacological and geospatial approaches described by Palasio et al. 14 . Of the 141 points sampled, 121 were negative for the presence of IHs or were colonized by species of Biomphalaria that were naturally refractory to S. mansoni 14 . As reported in our preliminary and pivotal study 14 , the snails sampled were examined in the laboratory to analyze the presence of cercariae from trematodes 28 , and the species were concurrently identified through morphological characters according to Paraense (1975, 1981) 29 , 30 and the DNA barcode protocol 31 , 32 , 33 . A detailed explanation of the parasitological approach and the morphological and molecular identification of snails used in this study has been provided by Palasio et al. 14 . TABLE 1:Geographic coordinates and number of S. mansoni intermediate-host specimens of each Biomphalaria species collected in water bodies and points and percentage of residents served by a sewage network, septic tank, or rudimentary tank according to the census tracts of the location at the sample points in the municipality of Ourinhos, SP, Brazil, during 2015-2017. Water bodyPoint**Latitude (°)Longitude (°)No. 
of snails Collection date% Residents served by Census tracts* Btt Bgl Bst Boc/Bgl Boc/Btt Boc/Bst Boc Bsp sewage network tank Other *** sep rud Christoni1-22.967600-49.874683-170-----422015-201689.3-10.30.3 2-22.967117-49.875167-25-----212015-2017 3-22.952833-49.876333-45-68---342015-201695.60.63.7- 4-22.950050-49.875850-25-10--14282015-2016 Água da Veada 5-22.953222-49.878306--51---591042015-201795.60.63.7- Furninhas 6-22.985556-49.849972125-----50-20157.886.35.9- 7-22.976766-49.85174532-------2015100--- Jacu 8-23.016306-49.905000----31-2472015-2016---- 9-23.008944-49.872750-59-----412015-20171.80.796.11.1 10-23.021167-49.877278----98-47532015-2016---- Jacuzinho 11-22.995111-49.874333--20----612015-201698.41.30.3- Lageadinho 12-23.010972-49.824500-12------201519.19.271.6- 13-23.022833-49.826733-31------2016 Sobra 14-23.041650-49.860233-35------2016---- 15-23.032400-49.85861757-------2016---- 16-23.027467-49.864867-46------20160.97.191.9- 17-23.025063-49.863847--38-----2016---- 18-23.038333-49.861400-----53-172016---- Barreirinha 19-22.985800-49.80051727-------2016-5.3 94.7- 20-22.988750-49.801167142-------2016-2017 Total 383 448 109 78 129 53 194 408-----*Source: IBGE 25 , ** SUCEN/Palasio et al. 14 , numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer. Bgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary. *Source: IBGE 25 , ** SUCEN/Palasio et al. 14 , numbers presented in Figure 1 ***Others = ditch, river, lake, or other sewer. Bgl: B. glabrata, Bst: B. straminea, Btt: B. tenagophila, Boc: B. occidentalis, Bsp: Biomphalaria spp., sep: septic, rud: rudimentary. Information regarding schistosomiasis cases from 2007 to 2016, including notification date, residence geographical location, probable infection site (PIS), and epidemiologic classification, was obtained from the National Notifiable Disease Information System (SINAN). Access to the necessary information was provided by the Alexandre Vranjac Center for Epidemiologic Surveillance (CVE). We used this information to obtain the frequencies of the occurrence of autochthonous, imported, and unknown-origin cases in the municipality. Once the residence geographical location for each autochthonous schistosomiasis case was known, the batch geocoding tool 34 , which uses Google Earth, was used to obtain the respective geographic coordinates (datum WGS84). Cartographic material (maps with rivers, census tract layers, and sanitation data) was obtained from the Brazilian Institute of Geography and Statistics (IBGE) 25 , 35 , 36 , 37 , the Secretariat for the Environment of the State of São Paulo (SMA) 38 , and the National Secretariat for Environmental Sanitation (SNSA) in partnership with the Brazilian National Water Agency (ANA) 39 . The points corresponding to the geographic coordinates of snail IHs and autochthonous schistosomiasis cases were imported into and viewed using QGIS software version 3.10.5 40 . A fundamental part of the current study was the data related to the percentages of residents served by the sewage system, septic tanks, and/or rudimentary tanks 25 , according to census tracts 35 . These data were computed using the MMQGIS plugin 41 coupled with spatial join geographic operation, both available on QGIS version 3.10.5 40 . To do this, we considered the map of the census tracts, statistical data from the 2010 census 25 , 35 , and geographical coordinates of IHs. 
Data analysis: The relationship between the spatial distribution of autochthonous cases of schistosomiasis from 2007 to 2016 and the presence of IHs from 2015 to 2017 was analyzed using the Gi spatial statistic, an indicator of local spatial association 15 , 42 . This statistic considers the occurrence of autochthonous cases around the points where IH snails were found. The schistosomiasis incidence rates by census tract were calculated using the MMQGIS plugin 41 in QGIS 40 , considering the 2010 population 25 , the centroid coordinates of the census tracts 35 , and the coordinates of the georeferenced schistosomiasis cases. Gi statistics were then calculated for each geographic coordinate where IHs were detected, taking the incidence rates into account. This allowed us to obtain a profile of schistosomiasis based on the spatial pattern of autochthonous cases and its relationship with the freshwater bodies and the IH snails that colonize them. A significance level of 5% was used, corresponding to a minimum value of the Gi statistic of 3.2889 (N = 100) according to Table 3 of Ord and Getis 15 . The Gi statistic takes into account the attribute values measured at the pairs of coordinates corresponding to the locations analyzed (schistosomiasis incidence rates at census-tract centroids). As it is a focal statistic, it considers the pair of geographic coordinates of each focus with IH (i) without taking into account the value of the attribute at this point 15 . Gi can be written as $G_i(d) = \sum_j w_{ij}(d)\,x_j / \sum_j x_j$, for $i \neq j$, where $j$ represents the geographic coordinates of the centroids of the census tracts with schistosomiasis cases, $w_{ij}$ the binary, symmetric matrix that defines the neighborhood between the areas, $x_j$ the incidence rate of the cases at the position of each $j$, and $d$ the distance measure established by the neighborhood model. The calculation sums over the neighboring samples relative to the position of $i$; the value $x_i$ is not included in the sum because $i$ is the place where IHs were collected, and its incidence rate was considered null (= 0) 15 . In the present study, a significant result for this statistic indicates that the location in question may be considered a potential infection area for schistosomiasis. Gi statistics were calculated using the ‘spdep’ package 43 in R version 3.2.2 44 . The presence of clusters was investigated using a maximum distance of 4000 m between each point where IHs were present and the centroids of the census tracts. Furthermore, the spatial dependence between the distribution of points corresponding to autochthonous cases of schistosomiasis and those corresponding to places where IHs were found was evaluated with Ripley’s K12-function 45 , 46 , using the geographic coordinates of the autochthonous cases and of the IHs, in R version 3.2.2 44 with the ‘splancs’ package 47 . We used the borders of the study area in shapefile format and considered the coordinates of the cases and IHs in UTM format. The result of the K12-function allowed us to verify the radius of influence, that is, the limited and statistically significant distance within which a positive spatial dependence between the two point distributions occurs.
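To make the focal Gi computation concrete, here is a minimal sketch in Python. The authors' actual implementation used the R ‘spdep’ package; the arrays, the simulated values, and the usage below are illustrative assumptions only:

```python
import numpy as np

def gi_statistic(focus_xy, centroid_xy, rates, d):
    """Focal Getis-Ord Gi(d) at one IH collection point.

    Implements Gi(d) = sum_j w_ij(d) * x_j / sum_j x_j, with binary
    weights w_ij(d) = 1 when centroid j lies within distance d of the
    focus i. The focus carries no attribute value of its own (its
    incidence rate is treated as null), as in the paper. Significance
    is then assessed against the critical value 3.2889 (N = 100)
    tabulated by Ord and Getis.
    """
    focus_xy = np.asarray(focus_xy, dtype=float)
    centroid_xy = np.asarray(centroid_xy, dtype=float)
    rates = np.asarray(rates, dtype=float)

    # Euclidean distances are appropriate for projected (UTM) coordinates.
    dist = np.linalg.norm(centroid_xy - focus_xy, axis=1)
    w = (dist <= d).astype(float)             # binary neighborhood weights
    return float((w * rates).sum() / rates.sum())

# Illustrative usage: 100 simulated tract centroids (UTM meters) with
# random incidence rates, one IH focus, and the 4000 m distance band.
rng = np.random.default_rng(42)
centroids = rng.uniform(0, 10_000, size=(100, 2))
rates = rng.uniform(0, 5, size=100)
print(gi_statistic([5_000, 5_000], centroids, rates, d=4_000))
```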
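Likewise, the bivariate K12-function step can be sketched with a naive, edge-uncorrected estimator. The authors used the edge-corrected implementation in the R ‘splancs’ package; the point sets and distances below are made up:

```python
import numpy as np

def k12(cases_xy, ih_xy, area, distances):
    """Naive bivariate Ripley K12 estimate (no edge correction).

    K12(d) = A / (n1 * n2) * number of case-IH pairs within distance d,
    where A is the study-area size. Edge-corrected estimators, as used
    in ‘splancs’, refine this near the study-area border. Comparing the
    observed curve with a Monte Carlo envelope (random relabeling of
    the two point sets) yields the radius of influence, which the paper
    then reuses for the kernel density estimation.
    """
    cases_xy = np.asarray(cases_xy, dtype=float)
    ih_xy = np.asarray(ih_xy, dtype=float)
    n1, n2 = len(cases_xy), len(ih_xy)
    # Pairwise case-to-IH distances (UTM meters).
    pair_d = np.linalg.norm(cases_xy[:, None, :] - ih_xy[None, :, :], axis=2)
    return np.array([pair_d[pair_d <= d].size * area / (n1 * n2)
                     for d in distances])

# Illustrative usage with simulated coordinates in a 5 x 5 km window.
rng = np.random.default_rng(7)
cases = rng.uniform(0, 5_000, size=(25, 2))
ih = rng.uniform(0, 5_000, size=(20, 2))
print(k12(cases, ih, area=5_000 * 5_000, distances=[250, 500, 750, 1000]))
```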
We used the geographic coordinates of the autochthonous cases of schistosomiasis and the radius of influence given by the K12-function to estimate the kernel density with a plugin available in QGIS version 3.10.5 40 . With the coordinates of the points where IHs were found and the radius of influence of each point analyzed in the Gi statistics (distances considered significant, upper limit), it was possible to identify the clusters of autochthonous cases around points with IHs. We performed this procedure using the MMQGIS plugin 41 and the buffer geographic operation available in QGIS 40 . We merged all clusters into one cluster, which was restricted to the Ourinhos urban area. This cluster map, obtained using the Gi statistics, was superimposed onto the respective autochthonous case hotspots obtained using the kernel tool. RESULTS: IH occurrence in Ourinhos: The geographical occurrence points for B. glabrata, B. tenagophila, and B. straminea over the 20 locations sampled across eight freshwater bodies assigned as breeding sites for IHs demonstrated the predominant occurrence of B. glabrata (Figure 1C). We also found that the relative abundance of B. glabrata was higher in Christoni, Sobra, and Jacu than that of the other two IHs investigated (Table 1, Figure 2). However, none of the snails were infected with S. mansoni in the parasitological analysis, as reported by Palasio et al. 14 . Overall, the three natural S. mansoni-IH species occurred in the urban, peri-urban, and rural areas of Ourinhos (Figure 1C, Figure 2). FIGURE 2: Number of intermediate-host specimens (Biomphalaria species) of S. mansoni collected during 2015-2017 in eight water bodies in the urban, peri-urban, and rural areas of the municipality of Ourinhos, SP, Brazil. Frequency of schistosomiasis and PIS and their relationship with the IH and sewage system: The frequencies of autochthonous, imported, and unknown-origin cases were 39.7% (25), 7.9% (5), and 52.4% (33), respectively. On average, 6.3 cases per year were detected. Using information from the PIS in the epidemiological survey records, eight PIS were geocoded in the water bodies of Christoni, Furninhas, Jacu, Sobra (Pinhos Lake), Lageadinho, Chumbiadinha (Lake), Furnas (Fapi Lake), and Lake of São Luiz plant (Figure 3).
It is noteworthy that IHs were found in only the first five of these locations; the majority of PIS descriptions were vague, and the information was unavailable for 48% of them, making geocoding impossible. In the water bodies of Chumbiadinha, Lake of São Luiz plant, and Furnas, B. glabrata was reported until 2009, 2009, and 2012, respectively 13 , 48 , 49 . Currently, only B. occidentalis Paraense, 1981 has been identified at these sites 14 , and this species is not susceptible to S. mansoni 50 . IHs were found in areas overlapping a high percentage of residents served by the sewage network, as well as in places served by tanks or other systems (Figure 3, Table 1). FIGURE 3: Map of the municipality of Ourinhos, state of São Paulo, Brazil, highlighting the main probable infection sites (PIS), the percentage of residents served by a sewage network according to the census tracts, sewage treatment plants (STPs), and the points where S. mansoni intermediate hosts (IHs) were found. The numbers (N°) in this figure correspond to the collection points presented in Table 1. Source: IBGE 25 , 35 , 36 , 37 ; SMA 38 ; ANA/SNSA 39 . Association between IH occurrence and autochthonous cases: Figure 4A shows the results of the Gi statistics, with the locations considered potential risk areas for human schistosomiasis infection, the extent of the concatenated clusters of autochthonous cases around these points, and the kernel density map. Significant clusters of autochthonous cases were superimposed on the Christoni stream region from approximately 300 m (lower limit) to 2200 m (upper limit) from the sites where B. glabrata was detected. Another cluster was superimposed on the Água da Veada stream region at a distance of approximately 1600-2000 m (Figure 4A-B) from the B. straminea collection site 14 (Table 1). All clusters were combined into a single cluster (Cluster 1 in Figure 4A).
The graph obtained using the K12-function, shown in Figure 4C, indicates a positive spatial dependence up to a distance of approximately 759 m between autochthonous cases and IH snails. FIGURE 4: (A) Kernel density map of the urban area (759 m radius of influence) showing the distribution of autochthonous schistosomiasis cases and significant clusters in the Gi statistics of cases around sampling points with intermediate hosts (foci). (B) Graph showing significant clusters of autochthonous cases around the Christoni stream and Água da Veada stream sampling points with IH. (C) Graph of the bivariate K12-function analysis in Ourinhos, SP, Brazil, during the 2007-2016 period. (B)* Statistically significant values are above the horizontal line (Gi [d] > 3.28, P < 0.05); (C)** The blue curve above the envelope shows a positive spatial dependence between the autochthonous cases of schistosomiasis and the IH up to a distance of ~759 m.
DISCUSSION: In this study, the association of data from autochthonous cases reported in Ourinhos (2007-2016) with the spatial location of IHs and the sewage network allowed the identification of the Christoni freshwater body as the area most suitable for human schistosomiasis infection. The statistical results corroborated previous findings 5 , 51 , which indicated that the Christoni stream was the area at highest risk of peridomestic schistosomiasis transmission 51 , 52 . Interestingly, in addition to identifying specific points, the Gi statistics provided important information on the significant distances over which local schistosomiasis infection is likely to occur, which has not been previously reported in the literature. The inventory of species in the Christoni stream shows that B. glabrata is abundant and predominates at four specific points 14 , corroborating data from previous surveys 10 , 13 , 48 , 49 . Although the B. glabrata samples collected were negative for cercariae, this species is known to be the most suitable IH for the development and transmission of the parasite 11 , 53 . Based on previous studies, it is considered the most competent IH species for S. mansoni transmission in the Paranapanema region 51 , 54 . In Ourinhos, the predominance of B. glabrata in census tracts where sanitary sewage is still handled by septic and/or rudimentary tanks is relevant and of concern from a medical perspective. In the census tracts where the Christoni stream is located, 89.3% of the residents are served by a sewage network and 10.6% by a rudimentary tank or other drains 25 . This pattern of sanitary sewage coverage is a potential explanation both for the decrease in the number of autochthonous cases in recent years and for the maintenance of focal transmission in this municipality 5 . B. straminea is resilient to extreme environmental variation and is capable of adapting to altered environments 55 . The presence of the B. straminea IH in the Água da Veada stream, where previous surveys indicated colonization by B. tenagophila 13 and B. glabrata 49 , 56 , is further evidence of the expansion potential of B. straminea. Regarding the natural susceptibility of B. straminea to S. mansoni, the physiological adaptation of this species to the parasite is relatively high in snails inhabiting regions of northeastern Brazil 57 . The occurrence of B. straminea in Água da Veada, and the fact that a portion of the residents living nearby are still served by septic (0.6%) or rudimentary (3.7%) tanks 25 , suggest the need for enhanced surveillance of areas colonized by this species. Although designated as statistically significant clusters for schistosomiasis infection, the clusters in Água da Veada are more likely to result from a bias associated with its proximity to Christoni. Although the Gi statistics considered only Christoni to have significant associations, the water bodies of Sobra, Lageadinho, and Jacu require monitoring, as B. glabrata 14 is also present therein. In addition, the percentage of residents served by the sewage system near these areas was below 20% 25 . The presence of B. glabrata has been recorded in the water bodies of Jacu, located in peri-urban areas, in the past and continues today 13 , 14 , 58 . In the Sobra and Lageadinho water bodies, located in rural areas, the first record of this species was in 2009 49 . Until then, only B. straminea had been recorded in the Sobra water bodies 59 .
The results of this study demonstrate that the use of GIS tools in association with malacology, epidemiological data, and sewerage infrastructure has the potential to improve schistosomiasis control, fostering the use of new technologies to locally eliminate future infections. One limitation of this study is the discrepancy between the snail collection period (2015-2017) and the period covered by the schistosomiasis case data (2007-2016). This limitation may be partly overcome by comparing our results with those from the literature 13 , 48 , 49 , 54 , 56 , 58 , 59 . For the 2007-2016 period, the Gi statistics identified the Christoni stream as the only location with significant clusters of autochthonous cases associated with the presence of B. glabrata. This species is therefore a candidate for the main target of environmental monitoring measures in this municipality. In addition, the use of this technique allowed us to verify that associating the residence geographical location of autochthonous cases with the spatial distribution of IHs provides vital information regarding potential transmission areas. Despite the absence of cercariae in the B. glabrata samples collected in Ourinhos, the high susceptibility of this species to S. mansoni under laboratory conditions 11 , 60 indicates a risk of schistosomiasis persistence in this region. Moreover, the Gi statistics partially overcame the limitation related to the lack of precise information on the location of the PIS, which is consistent with the characterization of schistosomiasis transmission as predominantly peridomestic 51 . The proximity between water bodies and residences is another typical characteristic of an incipient urbanization process. The fact that the significant distance in the Gi statistics is approximately 2 km allowed us to calibrate surveillance activities to a concise and statistically pre-established area. Accordingly, it is possible to develop schistosomiasis control and monitoring activities at well-defined focal points, rationalizing the use of public resources, since Brazil spent approximately 7.7 million dollars in 2015 to control the infection 61 . Therefore, the information presented in this study, as well as the tools used, may be adequate to develop and direct surveillance actions that contribute to the control and even elimination of schistosomiasis in the municipality of Ourinhos.
Background: Ourinhos is a municipality located between the Pardo and Paranapanema rivers, and it has been characterized by the endemic transmission of schistosomiasis since 1952. We used geospatial analysis to identify areas prone to human schistosomiasis infections in Ourinhos. We studied the association between the sewage network, co-occurrence of Biomphalaria snails (identified as intermediate hosts [IHs] of Schistosoma mansoni), and autochthonous cases. Methods: Gi spatial statistics, Ripley's K12-function, and kernel density estimation were used to evaluate the association between schistosomiasis data reported during 2007-2016 and the occurrence of IHs during 2015-2017. These data were superimposed on the municipality sewage network data. Results: We used 20 points with reported IH; they were colonized predominantly by Biomphalaria glabrata, followed by B. tenagophila and B. straminea. Based on Gi statistics, a significant cluster of autochthonous cases was superimposed on the Christoni and Água da Veada water bodies, with distances of approximately 300 m and 2200 m from the points where B. glabrata and B. straminea were present, respectively. Conclusions: The residence geographical location of autochthonous cases, combined with the spatial analysis of IHs and the coverage of the sewage network, provides important information for the detection of human-infection areas. Our results demonstrated that the tools used are appropriate to direct surveillance, control, and elimination of schistosomiasis.
null
null
10,898
260
[ 221, 1115, 779, 43, 183, 361, 321 ]
11
[ "cases", "49", "schistosomiasis", "autochthonous", "points", "figure", "ihs", "autochthonous cases", "statistics", "coordinates" ]
[ "areas human schistosomiasis", "schistosomiasis gis worldwide", "brazil frequency schistosomiasis", "schistosomiasis endemicity low", "analyze schistosomiasis africa" ]
null
null
[CONTENT] Schistosomiasis | Biomphalaria | Spatial analysis | Gi statistics | Georeferencing | Epidemiology [SUMMARY]
[CONTENT] Schistosomiasis | Biomphalaria | Spatial analysis | Gi statistics | Georeferencing | Epidemiology [SUMMARY]
[CONTENT] Schistosomiasis | Biomphalaria | Spatial analysis | Gi statistics | Georeferencing | Epidemiology [SUMMARY]
null
[CONTENT] Schistosomiasis | Biomphalaria | Spatial analysis | Gi statistics | Georeferencing | Epidemiology [SUMMARY]
null
[CONTENT] Animals | Biomphalaria | Brazil | Disease Vectors | Geographic Information Systems | Humans | Schistosoma mansoni | Schistosomiasis | Schistosomiasis mansoni | Sewage [SUMMARY]
[CONTENT] Animals | Biomphalaria | Brazil | Disease Vectors | Geographic Information Systems | Humans | Schistosoma mansoni | Schistosomiasis | Schistosomiasis mansoni | Sewage [SUMMARY]
[CONTENT] Animals | Biomphalaria | Brazil | Disease Vectors | Geographic Information Systems | Humans | Schistosoma mansoni | Schistosomiasis | Schistosomiasis mansoni | Sewage [SUMMARY]
null
[CONTENT] Animals | Biomphalaria | Brazil | Disease Vectors | Geographic Information Systems | Humans | Schistosoma mansoni | Schistosomiasis | Schistosomiasis mansoni | Sewage [SUMMARY]
null
[CONTENT] areas human schistosomiasis | schistosomiasis gis worldwide | brazil frequency schistosomiasis | schistosomiasis endemicity low | analyze schistosomiasis africa [SUMMARY]
[CONTENT] areas human schistosomiasis | schistosomiasis gis worldwide | brazil frequency schistosomiasis | schistosomiasis endemicity low | analyze schistosomiasis africa [SUMMARY]
[CONTENT] areas human schistosomiasis | schistosomiasis gis worldwide | brazil frequency schistosomiasis | schistosomiasis endemicity low | analyze schistosomiasis africa [SUMMARY]
null
[CONTENT] areas human schistosomiasis | schistosomiasis gis worldwide | brazil frequency schistosomiasis | schistosomiasis endemicity low | analyze schistosomiasis africa [SUMMARY]
null
[CONTENT] cases | 49 | schistosomiasis | autochthonous | points | figure | ihs | autochthonous cases | statistics | coordinates [SUMMARY]
[CONTENT] cases | 49 | schistosomiasis | autochthonous | points | figure | ihs | autochthonous cases | statistics | coordinates [SUMMARY]
[CONTENT] cases | 49 | schistosomiasis | autochthonous | points | figure | ihs | autochthonous cases | statistics | coordinates [SUMMARY]
null
[CONTENT] cases | 49 | schistosomiasis | autochthonous | points | figure | ihs | autochthonous cases | statistics | coordinates [SUMMARY]
null
[CONTENT] schistosomiasis | gis | human | middle paranapanema | middle | infections | spatial | mansoni | paranapanema | occur [SUMMARY]
[CONTENT] 49 | coordinates | 23 | 22 | geographic | schistosomiasis | cases | census | 2016 | geographic coordinates [SUMMARY]
[CONTENT] figure | cases | lake | autochthonous | stream | autochthonous cases | clusters | pis | figure 4a | 759 [SUMMARY]
null
[CONTENT] figure | cases | schistosomiasis | 49 | autochthonous | points | ihs | autochthonous cases | gi | coordinates [SUMMARY]
null
[CONTENT] Ourinhos | Pardo | Paranapanema | 1952 ||| Ourinhos ||| Biomphalaria | Schistosoma [SUMMARY]
[CONTENT] Ripley | K12 | 2007-2016 | 2015-2017 ||| [SUMMARY]
[CONTENT] 20 | IH | Biomphalaria | B. | B. straminea ||| Água da Veada | approximately 300 | 2200 | B. | B. straminea [SUMMARY]
null
[CONTENT] Ourinhos | Pardo | Paranapanema | 1952 ||| Ourinhos ||| Biomphalaria | Schistosoma ||| Ripley | K12 | 2007-2016 | 2015-2017 ||| ||| ||| 20 | IH | Biomphalaria | B. | B. straminea ||| Água da Veada | approximately 300 | 2200 | B. | B. straminea ||| ||| [SUMMARY]
null
Short uncemented stems allow greater femoral flexibility and may reduce peri-prosthetic fracture risk: a dry bone and cadaveric study.
25701257
Short femoral stems for uncemented total hip arthroplasty have been introduced as a safe alternative to traditional longer stem designs. However, there has been little biomechanical examination of the effects of stem length on complications of surgery. This study aims to examine the effect of femoral stem length on torsional resistance to peri-prosthetic fracture.
BACKGROUND
We tested 16 synthetic and two paired cadaveric femora. Specimens were implanted and then rapidly rotated until fracture to simulate internal rotation on a planted foot, as might occur during stumbling. 3D planning software and custom-printed 3D cutting guides were used to enhance the accuracy and consistency of our stem insertion technique.
MATERIALS AND METHODS
Synthetic femora implanted with short stems fractured at a significantly higher torque (27.1 vs. 24.2 Nm, p = 0.03) and angle (30.3° vs. 22.3°, p = 0.002) than those implanted with long stems. Fracture patterns of the two groups were different, but showed remarkable consistency within each group. These characteristic fracture patterns were closely replicated in the pair of cadaveric femora.
RESULTS
This new short-stemmed press-fit femoral component allows more femoral flexibility and confers a higher resistance to peri-prosthetic fracture from torsional forces than long stems.
CONCLUSIONS
[ "Arthroplasty, Replacement, Hip", "Cadaver", "Cementation", "Femoral Fractures", "Hip Prosthesis", "Humans", "Periprosthetic Fractures", "Prosthesis Design", "Prosthesis Failure" ]
4559535
Introduction
Short femoral stems for uncemented total hip arthroplasty (THA) have been introduced widely, with the suggestion that they may facilitate easier revision [1], distribute stress anatomically [2] and cause fewer intra-operative complications than longer stem designs [3]. With some series reporting 10–16-year survival rates of 99–100 % [4, 5], short stems may be considered a safe alternative to traditional longer stem designs. However, there has been little biomechanical examination of the effects of stem length on complications of surgery. Peri-prosthetic fracture following primary THA is estimated to occur in approximately 1 % of cases, rising to 4 % within 5 years for revision cases where longer stems are used [6, 7]. Fracture is associated with increased morbidity and dysfunction [8, 9]. Previous studies in cemented stems have found that short stems do not confer a higher risk of peri-prosthetic fracture [10]. The majority of stems inserted worldwide are uncemented and little has been published about the effect of stem length on peri-prosthetic fracture pattern in these press-fit stems. The fracture pattern is also relevant, as it determines treatment and may affect subsequent morbidity [11, 12]. The aim of this study was to examine the impact of femoral stem length on (1) the resistance to fracture of implanted stems subjected to torsional forces, and (2) the peri-prosthetic fracture patterns in a synthetic bone model. Finally, we wished to assess the clinical relevance of this model by comparing tested synthetic femurs to results obtained by testing a single pair of cadaveric bones.
null
null
Results
The torsional force required to fracture the short-stem implanted femurs [mean 27.1 Nm, range 24.4–30.3, standard deviation (SD) 2.1] was significantly greater than that of the long stems (mean 24.2 Nm, range 21.1–30.1, SD 2.8) (Fig. 1; p = 0.03). The ranges of fracture torque for the short (24.4–30.3 Nm) and long (21.1–25.7 Nm) stems show only partial overlap, with the exception of a single outlier (30.1 Nm) in the long-stemmed group (excluding this value, the range was 21.05–25.70 Nm). The torsional force required to fracture the short-stem implanted cadaveric femur (27.8 Nm) was higher than that for the long stem (14.7 Nm). The angular deformation at fracture for the short stems (mean 30°, range 24°–36°, SD 5.2) was significantly greater than that of the long stems (mean 22o, range 19°–25°, SD 3.2, p = 0.002), (Fig. 1). The ranges of fracture angle for the short (24.3°–35.9°) and long (18.6°–27.7°) stems show only partial overlap. Fracture torque and angle data are presented in Fig. 4. The cadaveric bone fracture angle was 14.5° for the short stem, but was not clearly determinable for the longer stem.Fig. 4Box plots of the fracture angles and torque of long and short implanted stems Box plots of the fracture angles and torque of long and short implanted stems The fracture patterns for the two implants were consistent but different. Both stems displayed a spiral fracture pattern with the apex of fracture 3 cm below the lesser trochanter. However, the long-stem group had a butterfly segment of the anterior part of the greater trochanter but the short-stem group’s involved the entire greater trochanter (Fig. 5). The single outlier from the long-stem group (which fractured at 30.1 Nm) had a similar fracture pattern to the short stems.Fig. 5Photograph showing fracture patterns of the long- and short-stem groups Photograph showing fracture patterns of the long- and short-stem groups
null
null
[]
[]
[]
[ "Introduction", "Materials and methods", "Results", "Discussion" ]
[ "Short femoral stems for uncemented total hip arthroplasty (THA) have been introduced widely, with the suggestion that they may facilitate easier revision [1], distribute stress anatomically [2] and cause fewer intra-operative complications than longer stem designs [3]. With some series reporting 10–16-year survival rates of 99–100 % [4, 5], short stems may be considered a safe alternative to traditional longer stem designs. However, there has been little biomechanical examination of the effects of stem length on complications of surgery.\nPeri-prosthetic fracture following primary THA is estimated to occur in approximately 1 % of cases, rising to 4 % within 5 years for revision cases where longer stems are used [6, 7]. Fracture is associated with increased morbidity and dysfunction [8, 9]. Previous studies in cemented stems have found that short stems do not confer a higher risk of peri-prosthetic fracture [10]. The majority of stems inserted worldwide are uncemented and little has been published about the effect of stem length on peri-prosthetic fracture pattern in these press-fit stems. The fracture pattern is also relevant, as it determines treatment and may affect subsequent morbidity [11, 12].\nThe aim of this study was to examine the impact of femoral stem length on (1) the resistance to fracture of implanted stems subjected to torsional forces, and (2) the peri-prosthetic fracture patterns in a synthetic bone model. Finally, we wished to assess the clinical relevance of this model by comparing tested synthetic femurs to results obtained by testing a single pair of cadaveric bones.", "In order to compare the two femoral prostheses, we implanted them into synthetic (and later paired cadaveric) femurs. These femurs were then subjected to torsional mechanical testing.\nThis study compared a successful uncemented long-stem design with a shorter one. From shoulder to tip, the longer stems measured 152 mm, while the shorter ones were 100 mm. Besides the apparent difference in length, the shorter stem had a wider proximal section, and was reduced laterally to make insertion easier and minimise the risk of fracture of the greater trochanter. Both stems were fully hydroxyapatite-coated with 12/14 neck tapers and collars to prevent implantation past the required depth (Fig. 1). These stems also required different femoral preparations. The short-stem rasps were designed to be more bone-sparing by impacting loose bone, while the longer stem rasp was designed for more bone extraction.Fig. 1Photograph of the short- and long-stemmedprostheses with three-dimensional rendered images of implanted femurs (shown in grey) and the planned positioning of the implant (shown in blue) (colour figure online)\nPhotograph of the short- and long-stemmedprostheses with three-dimensional rendered images of implanted femurs (shown in grey) and the planned positioning of the implant (shown in blue) (colour figure online)\nLeft-sided “medium”-sized synthetic composite femoral bones (Sawbone Model Number 1121; Sawbones Europe AB, Sweden) were used for their consistency of geometry and to aid a repeatable and controllable methodology (Fig. 2). These bones were dual density, with a foam polyurethane cortical shell. Bones from the same batch were used to avoid any inter-batch variation in mechanical properties. One synthetic bone was scanned by computed tomography (CT) to generate a digital three-dimensional (3D) model, which was later used for planning and validation of correct implant positioning.Fig. 
Fig. 2: Photograph of a synthetic femur in antero-posterior and lateral plane, following osteotomy of femoral neck using the three-dimensional cutting guide (Embody, UK).\nA 3D surgical plan was made by one of the authors (S.C.) using the CT scan data from the synthetic bone, and 3D data files of the implants. Ideal positioning for each implant was determined based on alignment of the implant neck and head within the original bone (Fig. 1). From this data, the optimal position for the neck osteotomy and box chiselling entry point could be determined and planned.\nTwo 3D cutting guides (Embody, UK)—one for each femoral stem—were produced to ensure accuracy and repeatability of our osteotomy cuts and our box chiselling. These guides are designed to precisely match the surface anatomy of the bone (Fig. 2).\nUse of these guides ensured that cutting and box chiselling of bone was restricted to areas pre-defined by the 3D planning. Subsequent reaming and rasping thus began in the correct location and planes.\nWe began by pinning the cutting guide to the specimen. The specimen-matched guide then directed the neck osteotomy and box chiselling of the femoral shaft (Fig. 2). Each bone was sequentially reamed and rasped according to the manufacturer’s instructions.\nAn experienced surgeon (J.P.C.) used standard intra-operative techniques to determine the appropriate implant size. A size 11 was used for the long, and a size 12 for the short stem. The prostheses were then inserted until seated.\nThe distal 18 cm of each femur were sawn off, and the implanted proximal femurs were potted in polymethylmethacrylate (PMMA) bone cement (within a metal cylinder). The cement was fixed to the cylinder with three screws to prevent rotation and left for 30 min to cure.\nThe metal cylinder was mounted to the base of a servohydraulic testing machine (Instron 8874 Biaxial Testing System; Instron Corporation, MA, USA) using a bespoke adjustable vice. The potted bone was aligned such that the plane of the femoral stem was vertical, and directly underneath the centre of the servohydraulic crosshead. A 6-mm hex key was attached to the crosshead and lowered into the 6-mm hex hole in the implant (this hole is aligned with the centre of the distal femoral stem). This allowed the stem to be rotated about its central axis (Fig. 3).\nFig. 3: Image showing standardised set-up of equipment.\nThroughout the testing a small constant vertical load of 10 N was applied, to counteract any vertical loosening, and to ensure engagement of the hex key in the implant hex hole. Before each test, the Instron crosshead was manually positioned in a neutral position, fully engaged with the implant but with no vertical or rotational force.\nTo test resistance to femoral fracture, the implant was rotated clockwise through 90° in 1 s.
This testing protocol has been described previously [13], and is designed to simulate peri-prosthetic fracture due to internal rotation on a planted foot, as might occur during stumbling.\nTorque, rotation, vertical load and vertical position data were sampled 50 times per second throughout the testing protocols, and were exported to a data spreadsheet file (Microsoft Excel; Microsoft Corporation, WA, USA).\nFollowing ethical approval, a single pair of cadaveric femurs were extracted from an embalmed cadaver donated to the Human Anatomy Unit (Charing Cross Hospital, London, UK). The cadaver had been embalmed with a mixture of formaldehyde, phenol, polyethylene glycol and alcohol, which has been shown not to significantly affect the stiffness of bone [14].\nAn experienced surgeon used a posterior approach and standard intra-operative techniques to implant and size the short and long femoral stems. The femurs were then carefully dissected from the cadavers and stripped of soft tissues.\nThe implanted femurs underwent the same experimental setup as the synthetic bones. Testing was in a clockwise direction on the left, and anticlockwise on the right femur to ensure both hips were torqued in internal rotation. The data was analysed using SPSS (IBM SPSS Statistics, version 20) using a Mann–Whitney U test as data was not found to be parametric.", "The torsional force required to fracture the short-stem implanted femurs [mean 27.1 Nm, range 24.4–30.3, standard deviation (SD) 2.1] was significantly greater than that of the long stems (mean 24.2 Nm, range 21.1–30.1, SD 2.8) (Fig. 4; p = 0.03). The ranges of fracture torque for the short (24.4–30.3 Nm) and long (21.1–25.7 Nm) stems show only partial overlap, with the exception of a single outlier (30.1 Nm) in the long-stemmed group (excluding this value, the range was 21.05–25.70 Nm). The torsional force required to fracture the short-stem implanted cadaveric femur (27.8 Nm) was higher than that for the long stem (14.7 Nm).\nThe angular deformation at fracture for the short stems (mean 30°, range 24°–36°, SD 5.2) was significantly greater than that of the long stems (mean 22°, range 19°–25°, SD 3.2, p = 0.002) (Fig. 4). The ranges of fracture angle for the short (24.3°–35.9°) and long (18.6°–27.7°) stems show only partial overlap. Fracture torque and angle data are presented in Fig. 4. The cadaveric bone fracture angle was 14.5° for the short stem, but was not clearly determinable for the longer stem.\nFig. 4: Box plots of the fracture angles and torque of long and short implanted stems.\nThe fracture patterns for the two implants were consistent but different. Both stems displayed a spiral fracture pattern with the apex of fracture 3 cm below the lesser trochanter. However, the long-stem group had a butterfly segment of the anterior part of the greater trochanter but the short-stem group’s involved the entire greater trochanter (Fig. 5). The single outlier from the long-stem group (which fractured at 30.1 Nm) had a similar fracture pattern to the short stems.\nFig. 5: Photograph showing fracture patterns of the long- and short-stem groups.", "In this study, we sought to compare the pattern and force required to induce a peri-prosthetic fracture of femurs implanted with uncemented short- and long-stem hip replacements.
We found that bones implanted with the short-stemmed implants required a significantly higher force before fracture. Implanted femurs were also found to be more flexible and deformed more prior to fracture in the short-stem group. Although limited, testing in paired cadaveric femurs demonstrated a similar fracture torque and pattern to the results seen with synthetic bones. Our findings are consistent with a similar study [13] where the torsional fracture strength of cemented femoral components (in a synthetic bone model) demonstrated fracture torques of 25–40 Nm and fracture angles of 20°–35°. Jakubowitz et al. [10] compared the grit-blasted short uncemented Mayo® hip (Zimmer, Warsaw, IN, USA) to an equivalent uncemented long-stem design. Whilst these implants differed from those in our study in many ways, the authors similarly found that the short-stem implants compared favourably to the long-stemmed equivalent with respect to the risk of a peri-prosthetic fracture.\nAs the short stem in our study is a relatively new addition to the implant market, we are not able to evaluate the fracture patterns we observed against clinical reports of peri-prosthetic fracture. However, given the clear and consistent difference between the fracture patterns of the two groups in synthetic and paired cadaveric femurs (Fig. 5), we can be confident that this difference is significant. Furthermore, Van Eynde et al. [15] have reported a typical fracture pattern in an uncemented long-stem series that was very similar to the fracture pattern we described for our long-stemmed implants.\nThe peri-prosthetic fracture pattern can have implications for recovery and treatment; however, as both fractures created an unstable femoral stem, revision of the stem would be necessary if they occurred in the early post-operative period [16].\nPrevious work by Cristofolini et al. [17] has demonstrated that the mechanical strength variability of cadaveric femurs may be up to 200 times that of composite synthetic femurs. These results help rationalise our choice in using mainly synthetic bones, which benefit from consistent geometries and mechanical properties. Their use also enabled accurate and reproducible implant positioning. In addition, previous studies have shown they do behave similarly to human femurs in mechanical testing protocols [13, 18–20]. Synthetic femurs may thus be a reasonable surrogate for human bone and our cadaveric testing results further support this conclusion.\nA limitation of this in vitro study is that we are only able to simulate initial implant behaviour. The on-growth of bone onto the implanted femoral stems, promoted by the hydroxyapatite (HA) coating, only occurs in living bone. HA in living subjects would therefore increase the strength of implant fixation [21]. The present study is therefore most relevant in the context of implant behaviour in the early post-operative period, before full bony on-growth occurs.\nWe are also limited in our interpretation of the data by the fact that we only tested one size of each implant in a “medium”-sized synthetic femur. We cannot therefore comment on how implant sizing might affect biomechanical behaviour. Further work could investigate the effects of implant sizing on peri-prosthetic fracture risk.\nIn conclusion, we found that the peri-prosthetic fracture pattern of the two stems were different. In spite of this, both patterns would require stem revision and hence present a similar revision dilemma. 
However, the new short-stemmed press-fit femoral component allows more femoral flexibility and confers a higher resistance to peri-prosthetic fracture from torsional forces than the long stem. This higher resistance to fracture is an important consideration when selecting implants for elderly female patients who are both more likely to fall and to have osteoporotic bone." ]
[ "introduction", "materials|methods", "results", "discussion" ]
[ "Short stem", "Total hip arthroplasty", "Mechanical testing", "Fracture" ]
Introduction: Short femoral stems for uncemented total hip arthroplasty (THA) have been introduced widely, with the suggestion that they may facilitate easier revision [1], distribute stress anatomically [2] and cause fewer intra-operative complications than longer stem designs [3]. With some series reporting 10–16-year survival rates of 99–100 % [4, 5], short stems may be considered a safe alternative to traditional longer stem designs. However, there has been little biomechanical examination of the effects of stem length on complications of surgery. Peri-prosthetic fracture following primary THA is estimated to occur in approximately 1 % of cases, rising to 4 % within 5 years for revision cases where longer stems are used [6, 7]. Fracture is associated with increased morbidity and dysfunction [8, 9]. Previous studies in cemented stems have found that short stems do not confer a higher risk of peri-prosthetic fracture [10]. The majority of stems inserted worldwide are uncemented and little has been published about the effect of stem length on peri-prosthetic fracture pattern in these press-fit stems. The fracture pattern is also relevant, as it determines treatment and may affect subsequent morbidity [11, 12]. The aim of this study was to examine the impact of femoral stem length on (1) the resistance to fracture of implanted stems subjected to torsional forces, and (2) the peri-prosthetic fracture patterns in a synthetic bone model. Finally, we wished to assess the clinical relevance of this model by comparing tested synthetic femurs to results obtained by testing a single pair of cadaveric bones. Materials and methods: In order to compare the two femoral prostheses, we implanted them into synthetic (and later paired cadaveric) femurs. These femurs were then subjected to torsional mechanical testing. This study compared a successful uncemented long-stem design with a shorter one. From shoulder to tip, the longer stems measured 152 mm, while the shorter ones were 100 mm. Besides the apparent difference in length, the shorter stem had a wider proximal section, and was reduced laterally to make insertion easier and minimise the risk of fracture of the greater trochanter. Both stems were fully hydroxyapatite-coated with 12/14 neck tapers and collars to prevent implantation past the required depth (Fig. 1). These stems also required different femoral preparations. The short-stem rasps were designed to be more bone-sparing by impacting loose bone, while the longer stem rasp was designed for more bone extraction. Fig. 1: Photograph of the short- and long-stemmed prostheses with three-dimensional rendered images of implanted femurs (shown in grey) and the planned positioning of the implant (shown in blue) (colour figure online). Left-sided “medium”-sized synthetic composite femoral bones (Sawbone Model Number 1121; Sawbones Europe AB, Sweden) were used for their consistency of geometry and to aid a repeatable and controllable methodology (Fig. 2). These bones were dual density, with a foam polyurethane cortical shell. Bones from the same batch were used to avoid any inter-batch variation in mechanical properties.
One synthetic bone was scanned by computed tomography (CT) to generate a digital three-dimensional (3D) model, which was later used for planning and validation of correct implant positioning. Fig. 2: Photograph of a synthetic femur in antero-posterior and lateral plane, following osteotomy of femoral neck using the three-dimensional cutting guide (Embody, UK). A 3D surgical plan was made by one of the authors (S.C.) using the CT scan data from the synthetic bone, and 3D data files of the implants. Ideal positioning for each implant was determined based on alignment of the implant neck and head within the original bone (Fig. 1). From this data, the optimal position for the neck osteotomy and box chiselling entry point could be determined and planned. Two 3D cutting guides (Embody, UK)—one for each femoral stem—were produced to ensure accuracy and repeatability of our osteotomy cuts and our box chiselling. These guides are designed to precisely match the surface anatomy of the bone (Fig. 2). Use of these guides ensured that cutting and box chiselling of bone was restricted to areas pre-defined by the 3D planning. Subsequent reaming and rasping thus began in the correct location and planes. We began by pinning the cutting guide to the specimen. The specimen-matched guide then directed the neck osteotomy and box chiselling of the femoral shaft (Fig. 2). Each bone was sequentially reamed and rasped according to the manufacturer’s instructions. An experienced surgeon (J.P.C.) used standard intra-operative techniques to determine the appropriate implant size. A size 11 was used for the long, and a size 12 for the short stem. The prostheses were then inserted until seated. The distal 18 cm of each femur were sawn off, and the implanted proximal femurs were potted in polymethylmethacrylate (PMMA) bone cement (within a metal cylinder). The cement was fixed to the cylinder with three screws to prevent rotation and left for 30 min to cure. The metal cylinder was mounted to the base of a servohydraulic testing machine (Instron 8874 Biaxial Testing System; Instron Corporation, MA, USA) using a bespoke adjustable vice. The potted bone was aligned such that the plane of the femoral stem was vertical, and directly underneath the centre of the servohydraulic crosshead. A 6-mm hex key was attached to the crosshead and lowered into the 6-mm hex hole in the implant (this hole is aligned with the centre of the distal femoral stem). This allowed the stem to be rotated about its central axis (Fig. 3). Fig. 3: Image showing standardised set-up of equipment. Throughout the testing a small constant vertical load of 10 N was applied, to counteract any vertical loosening, and to ensure engagement of the hex key in the implant hex hole. Before each test, the Instron crosshead was manually positioned in a neutral position, fully engaged with the implant but with no vertical or rotational force. To test resistance to femoral fracture, the implant was rotated clockwise through 90° in 1 s. This testing protocol has been described previously [13], and is designed to simulate peri-prosthetic fracture due to internal rotation on a planted foot, as might occur during stumbling.
Torque, rotation, vertical load and vertical position data were sampled 50 times per second throughout the testing protocols, and were exported to a data spreadsheet file (Microsoft Excel; Microsoft Corporation, WA, USA). Following ethical approval, a single pair of cadaveric femurs were extracted from an embalmed cadaver donated to the Human Anatomy Unit (Charing Cross Hospital, London, UK). The cadaver had been embalmed with a mixture of formaldehyde, phenol, polyethylene glycol and alcohol, which has been shown not to significantly affect the stiffness of bone [14]. An experienced surgeon used a posterior approach and standard intra-operative techniques to implant and size the short and long femoral stems. The femurs were then carefully dissected from the cadavers and stripped of soft tissues. The implanted femurs underwent the same experimental setup as the synthetic bones. Testing was in a clockwise direction on the left, and anticlockwise on the right femur to ensure both hips were torqued in internal rotation. The data was analysed using SPSS (IBM SPSS Statistics, version 20) using a Mann–Whitney U test as data was not found to be parametric. Results: The torsional force required to fracture the short-stem implanted femurs [mean 27.1 Nm, range 24.4–30.3, standard deviation (SD) 2.1] was significantly greater than that of the long stems (mean 24.2 Nm, range 21.1–30.1, SD 2.8) (Fig. 4; p = 0.03). The ranges of fracture torque for the short (24.4–30.3 Nm) and long (21.1–25.7 Nm) stems show only partial overlap, with the exception of a single outlier (30.1 Nm) in the long-stemmed group (excluding this value, the range was 21.05–25.70 Nm). The torsional force required to fracture the short-stem implanted cadaveric femur (27.8 Nm) was higher than that for the long stem (14.7 Nm). The angular deformation at fracture for the short stems (mean 30°, range 24°–36°, SD 5.2) was significantly greater than that of the long stems (mean 22°, range 19°–25°, SD 3.2, p = 0.002) (Fig. 4). The ranges of fracture angle for the short (24.3°–35.9°) and long (18.6°–27.7°) stems show only partial overlap. Fracture torque and angle data are presented in Fig. 4. The cadaveric bone fracture angle was 14.5° for the short stem, but was not clearly determinable for the longer stem. Fig. 4: Box plots of the fracture angles and torque of long and short implanted stems. The fracture patterns for the two implants were consistent but different. Both stems displayed a spiral fracture pattern with the apex of fracture 3 cm below the lesser trochanter. However, the long-stem group had a butterfly segment of the anterior part of the greater trochanter but the short-stem group’s involved the entire greater trochanter (Fig. 5). The single outlier from the long-stem group (which fractured at 30.1 Nm) had a similar fracture pattern to the short stems. Fig. 5: Photograph showing fracture patterns of the long- and short-stem groups. Discussion: In this study, we sought to compare the pattern and force required to induce a peri-prosthetic fracture of femurs implanted with uncemented short- and long-stem hip replacements. We found that bones implanted with the short-stemmed implants required a significantly higher force before fracture. Implanted femurs were also found to be more flexible and deformed more prior to fracture in the short-stem group.
Although limited, testing in paired cadaveric femurs demonstrated a similar fracture torque and pattern to the results seen with synthetic bones. Our findings are consistent with a similar study [13], in which the torsional fracture strength of cemented femoral components (in a synthetic bone model) corresponded to fracture torques of 25–40 Nm and fracture angles of 20°–35°. Jakubowitz et al. [10] compared the grit-blasted short uncemented Mayo® hip (Zimmer, Warsaw, IN, USA) to an equivalent uncemented long-stem design. Whilst these implants differed from those in our study in many ways, the authors similarly found that the short-stem implants compared favourably to the long-stemmed equivalent with respect to the risk of a peri-prosthetic fracture. As the short stem in our study is a relatively new addition to the implant market, we are not able to evaluate the fracture patterns we observed against clinical reports of peri-prosthetic fracture. However, given the clear and consistent difference between the fracture patterns of the two groups in synthetic and paired cadaveric femurs (Fig. 5), we can be confident that this difference is genuine. Furthermore, Van Eynde et al. [15] have reported a typical fracture pattern in an uncemented long-stem series that was very similar to the fracture pattern we described for our long-stemmed implants. The peri-prosthetic fracture pattern can have implications for recovery and treatment; however, as both fractures created an unstable femoral stem, revision of the stem would be necessary if they occurred in the early post-operative period [16]. Previous work by Cristofolini et al. [17] has demonstrated that the mechanical strength variability of cadaveric femurs may be up to 200 times that of composite synthetic femurs. These results support our choice of using mainly synthetic bones, which benefit from consistent geometries and mechanical properties. Their use also enabled accurate and reproducible implant positioning. In addition, previous studies have shown that they behave similarly to human femurs in mechanical testing protocols [13, 18–20]. Synthetic femurs may thus be a reasonable surrogate for human bone, and our cadaveric testing results further support this conclusion. A limitation of this in vitro study is that we are only able to simulate initial implant behaviour. The on-growth of bone onto the implanted femoral stems, promoted by the hydroxyapatite (HA) coating, only occurs in living bone. HA in living subjects would therefore increase the strength of implant fixation [21]. The present study is therefore most relevant in the context of implant behaviour in the early post-operative period, before full bony on-growth occurs. We are also limited in our interpretation of the data by the fact that we only tested one size of each implant in a “medium”-sized synthetic femur. We cannot therefore comment on how implant sizing might affect biomechanical behaviour. Further work could investigate the effects of implant sizing on peri-prosthetic fracture risk. In conclusion, we found that the peri-prosthetic fracture patterns of the two stems were different. In spite of this, both patterns would require stem revision and hence present a similar revision dilemma. However, the new short-stemmed press-fit femoral component allows more femoral flexibility and confers a higher resistance to peri-prosthetic fracture from torsional forces than the long stem. 
This higher resistance to fracture is an important consideration when selecting implants for elderly female patients who are both more likely to fall and to have osteoporotic bone.
Background: Short femoral stems for uncemented total hip arthroplasty have been introduced as a safe alternative to traditional longer stem designs. However, there has been little biomechanical examination of the effects of stem length on complications of surgery. This study aims to examine the effect of femoral stem length on torsional resistance to peri-prosthetic fracture. Methods: We tested 16 synthetic and two paired cadaveric femora. Specimens were implanted and then rapidly rotated until fracture to simulate internal rotation on a planted foot, as might occur during stumbling. 3D planning software and custom-printed 3D cutting guides were used to enhance the accuracy and consistency of our stem insertion technique. Results: Synthetic femora implanted with short stems fractured at a significantly higher torque (27.1 vs. 24.2 Nm, p = 0.03) and angle (30.3° vs. 22.3°, p = 0.002) than those implanted with long stems. Fracture patterns of the two groups were different, but showed remarkable consistency within each group. These characteristic fracture patterns were closely replicated in the pair of cadaveric femora. Conclusions: This new short-stemmed press-fit femoral component allows more femoral flexibility and confers a higher resistance to peri-prosthetic fracture from torsional forces than long stems.
null
null
2,734
242
[]
4
[ "fracture", "stem", "short", "long", "stems", "bone", "implant", "femoral", "femurs", "fig" ]
[ "implanted femoral stems", "femoral stem allowed", "long stem hip", "fracture short stems", "longer stems fracture" ]
null
null
null
[CONTENT] Short stem | Total hip arthroplasty | Mechanical testing | Fracture [SUMMARY]
null
[CONTENT] Short stem | Total hip arthroplasty | Mechanical testing | Fracture [SUMMARY]
null
[CONTENT] Short stem | Total hip arthroplasty | Mechanical testing | Fracture [SUMMARY]
null
[CONTENT] Arthroplasty, Replacement, Hip | Cadaver | Cementation | Femoral Fractures | Hip Prosthesis | Humans | Periprosthetic Fractures | Prosthesis Design | Prosthesis Failure [SUMMARY]
null
[CONTENT] Arthroplasty, Replacement, Hip | Cadaver | Cementation | Femoral Fractures | Hip Prosthesis | Humans | Periprosthetic Fractures | Prosthesis Design | Prosthesis Failure [SUMMARY]
null
[CONTENT] Arthroplasty, Replacement, Hip | Cadaver | Cementation | Femoral Fractures | Hip Prosthesis | Humans | Periprosthetic Fractures | Prosthesis Design | Prosthesis Failure [SUMMARY]
null
[CONTENT] implanted femoral stems | femoral stem allowed | long stem hip | fracture short stems | longer stems fracture [SUMMARY]
null
[CONTENT] implanted femoral stems | femoral stem allowed | long stem hip | fracture short stems | longer stems fracture [SUMMARY]
null
[CONTENT] implanted femoral stems | femoral stem allowed | long stem hip | fracture short stems | longer stems fracture [SUMMARY]
null
[CONTENT] fracture | stem | short | long | stems | bone | implant | femoral | femurs | fig [SUMMARY]
null
[CONTENT] fracture | stem | short | long | stems | bone | implant | femoral | femurs | fig [SUMMARY]
null
[CONTENT] fracture | stem | short | long | stems | bone | implant | femoral | femurs | fig [SUMMARY]
null
[CONTENT] stems | fracture | stem length | stem | peri | peri prosthetic | peri prosthetic fracture | prosthetic | prosthetic fracture | length [SUMMARY]
null
[CONTENT] fracture | long | nm | short | stem | range | 24 | 30 | stems | sd [SUMMARY]
null
[CONTENT] fracture | stem | short | stems | long | implant | femoral | synthetic | fig | peri prosthetic [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] 27.1 | 24.2 | 0.03 | 30.3 | 22.3 | 0.002 ||| two ||| cadaveric femora [SUMMARY]
null
[CONTENT] ||| ||| ||| 16 | two | cadaveric femora ||| ||| ||| ||| 27.1 | 24.2 | 0.03 | 30.3 | 22.3 | 0.002 ||| two ||| ||| [SUMMARY]
null
Thyroid Nodule Size as a Predictor of Malignancy in Follicular and Hurthle Neoplasms.
34452575
The management of follicular (FN) and Hurthle cell neoplasms (HCN) is often difficult because of the uncertainty of malignancy risk. We aimed to assess characteristics of benign and malignant follicular and Hurthle neoplasms based on their shape and size.
INTRODUCTION
Patients with Follicular adenoma (FA) or carcinoma (FC) and Hurthle Cell adenoma (HCA) or carcinoma (HCC) who had preoperative ultrasonography were included. Demographic data were retrieved. Size and shape of the nodules were measured. Logistic regression analyses and odds ratios were performed.
MATERIALS AND METHODS
A total of 115 nodules with 57 carcinomas and 58 adenomas were included. Logistic regression analysis shows that nodule height and patient age are predictors of malignancy (p-values = 0.001 and 0.042). A cutoff value of nodule height ≥ 4 cm produces an odds ratio of 4.5 (p-value = 0.006). An age ≥ 55 years demonstrates an odds ratio of 2.4-3.6 (p-value = 0.03). Taller-than-wide shape was not statistically significant (p-value = 0.613).
RESULTS
FC and HCC are larger than FA and HCA in size, with a cutoff at 4 cm. Increasing age increases the odds of malignancy, with a cutoff at 55 years. Taller-than-wide shape is not a predictor of malignancy.
CONCLUSION
[ "Adenocarcinoma, Follicular", "Adenoma", "Adenoma, Oxyphilic", "Case-Control Studies", "Female", "Follow-Up Studies", "Humans", "Male", "Middle Aged", "Prognosis", "Retrospective Studies", "Thyroid Neoplasms", "Thyroid Nodule", "Thyroidectomy", "Ultrasonography" ]
8629458
Introduction
Follicular neoplasms (FN) and Hurthle cell neoplasms (HCN) represent 2-10% of thyroid nodules and often cause challenges in managing patients with thyroid lesions. There are not yet true consensus criteria for triaging these patients, since research on thyroid cancer focuses mainly on papillary thyroid carcinoma (PTC). For both FN and HCN, cytological features obtained with fine-needle aspiration biopsy (FNAB) cannot accurately differentiate benign lesions from malignant ones. Because vascular or capsular invasion is required to diagnose malignancy in these types of thyroid nodules, final pathology is essential (Carling and Udelsman, 2005; Mathur et al., 2014). Surgical treatment of these nodules ranges from hemithyroidectomy to total thyroidectomy. Even with data supporting the adequacy of hemithyroidectomy in 74-96% of these cases, total thyroidectomy is often performed because of cost-effectiveness, the risk of reoperation if a large nodule is found to be malignant, and the complications of reoperation (Melck et al., 2006; Wiseman et al., 2006; Corso et al., 2014). With better preoperative predictors, proper selection of the appropriate procedure would benefit patients in terms of postoperative complications, avoidance of reoperation, and overall quality of life (Megwalu and Green, 2016; Kuba et al., 2017). Several ultrasound features, such as lack of a sonographic halo, hypoechoic appearance, predominantly solid contents, a heterogeneous echotexture, and the presence of calcifications, have been reported to be predictors of follicular carcinomas (Sillery et al., 2010; Zhang and Hu, 2014). In one study, the volume of FC was shown to be significantly greater than that of FA (Sillery et al., 2010). An increasing likelihood of follicular carcinoma (FC) and Hurthle cell carcinoma (HCC) in larger nodules has been reported (Méndez et al., 2008; Gulcelik et al., 2008; Choi et al., 2009; Sillery et al., 2010; Ibrahim et al., 2015; Arpana et al., 2018; Jin et al., 2018). However, few articles have compared the size of FC and HCC directly with that of follicular adenoma (FA) and Hurthle cell adenoma (HCA), and their results are discrepant (Seo et al., 2009; Sillery et al., 2010; Zhang and Hu, 2014). In this study, we evaluated the characteristics of FN and HCN based on their size, shape, and patient data.
null
null
Results
We found 556 patients with FN and HCN as the final surgical pathology in our database. Of those, 397 had preoperative US available for our review. In 162 patients with multiple nodules, the specific location of the nodule was not stated, making localization on US uncertain; these cases were excluded. One hundred twenty of the patients did not have both transverse and longitudinal images. This left a total of 115 patients for analysis, including 58 benign (39 FA and 19 HCA) and 57 malignant (35 FC and 22 HCC) nodules. In the univariable binary logistic regression analysis of each independent variable, gender, age, width, depth, height and taller-than-wide appearance yielded P-values of 0.031, 0.014, 0.001, 0.001, <0.001, and 0.115, respectively, and thus met the P < 0.25 threshold for entry into the multivariable analysis (Table 1). These variables were then analyzed by multivariable binary logistic regression analysis with forward LR and backward LR methods. In both methods, only height and age were statistically significant (P-values = 0.001 and 0.042, respectively) (Table 2). For visualization of the P-value trends of the other variables, logistic regression analysis with the enter method was performed and is shown in Table 2. The area under the curve (AUC) of this model is 0.72, representing good model discrimination. Nodule size was stratified by height, and we found that increasing size increases the odds ratio (OR) of carcinoma. The categorized size reached statistical significance at sizes equal to or greater than 4 cm (P-value = 0.006, OR = 4.5) (Table 3 and Figure 2). Stratified age groups showed increasing ORs for carcinoma with increasing age. The optimal cutoff value is 55 years (OR = 2.4-3.6, p-value = 0.03) (Table 4). Pearson’s correlation demonstrated a strong, positive correlation between beam-defined and diameter-defined ratios (r = 0.768, p-value = 0.01) (Figure 3). The mean (SD) beam-defined and diameter-defined ratios for nodule shape across all data were 0.85 (±0.21) and 0.88 (±0.26), respectively (p-value = 0.051). When the equivocal ratios (0.9-1.1) were excluded, the means (SD) were 0.78 (±0.23) and 0.81 (±0.27), respectively (p-value = 0.118).
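The modelling workflow reported here (univariable screening, then a multivariable model retaining height and age, summarized by an AUC of 0.72) maps onto standard Python tooling. A minimal sketch using statsmodels and scikit-learn; the dataframe, file name and column names are hypothetical, and SPSS's stepwise selection itself is not reproduced, only the final height-plus-age model and its discrimination:

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Hypothetical per-nodule dataframe; file and column names are ours.
# malignant: 1 = carcinoma, 0 = adenoma; height_cm and age as measured.
df = pd.read_csv("nodules.csv")

# Final model retained by both forward and backward stepwise LR in the
# paper: height and age as predictors of malignancy.
X = sm.add_constant(df[["height_cm", "age"]])
fit = sm.Logit(df["malignant"], X).fit(disp=0)
print(fit.summary())

# Discrimination of the fitted model (the paper reports AUC = 0.72).
auc = roc_auc_score(df["malignant"], fit.predict(X))
print(f"AUC = {auc:.2f}")
```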
null
null
[ "Author Contribution Statement" ]
[ "A.B. and Z.A. conceived of the presented idea. A.B. developed the theory and performed the computations. A.Z and K.P. contributed to sample preparation. C.D. and D.E. verified the analytical methods. M.R.C, M.S., D.E and B.E. supervised the findings of this work. All authors discussed the results and contributed to the final manuscript." ]
[ null ]
[ "Introduction", "Materials and Methods", "Results", "Discussion", "Author Contribution Statement" ]
[ "Follicular neoplasms (FN) and Hurthle cell neoplasms (HCN) represent 2-10% of thyroid nodules and they often cause challenges in managing patients with thyroid lesions. There is not yet a true consensus criteria for triaging these patients, since the main focus of thyroid cancer is mainly on the papillary thyroid carcinoma (PTC). For both FN and HCN, cytological features that are obtained with fine-needle aspiration biopsy (FNAB) cannot accurately differentiate benign lesions from malignant ones. Because vascular or capsular invasion is required to diagnose malignancy of these types of thyroid nodules, final pathology is essential (Carling and Udelsman, 2005; Mathur et al., 2014). Surgical treatment of these nodules ranges from hemithyroidectomy to total thyroidectomy. Even with data supporting adequacy of hemithyroidectomy in 74-96% of these cases, total thyroidectomy is often performed due to cost-effectiveness, risk of reoperation if a large nodule is found to be malignant and complications of reoperation (Melck et al., 2006; Wiseman et al., 2006; Corso et al., 2014). With better preoperative predictors, proper selection of the appropriate procedure would benefit patients in terms of postoperative complications, avoidance of reoperation, and overall quality of life (Megwalu and Green, 2016; Kuba et al., 2017). Several ultrasound features, such as lack of a sonographic halo, hypoechoic appearance, predominantly solid contents, a heterogeneous echotexture, and the presence of calcifications, were reported to be predictors of follicular carcinomas (Sillery et al., 2010; Zhang and Hu, 2014). In one study volume of FC was shown to be significantly higher than FA (Sillery et al., 2010). An increasing likelihood of follicular carcinoma (FC) and Hurthle cell carcinoma (HCC) in larger nodules has been reported (Méndez et al., 2008; Gulcelik et al., 2008; Choi et al., 2009; Sillery et al., 2010; Ibrahim et al., 2015; Arpana et al., 2018; Jin et al., 2018). However, there are not many articles had compared the size of FC and HCC directly to the size of follicular adenoma (FA) and Hurthle cell adenoma (HCA) and the results are discrepancy (Seo et al., 2009; Sillery et al., 2010; Zhang and Hu, 2014). \nIn this study, we evaluated the characteristics of FN and HCN based on their size, shape, and the patient data. ", "\nPatients\n\nThis study was approved by the local institutional review board with a waiver of informed consent. We retrospectively reviewed pathology reports of patients who underwent thyroidectomy at our institution between January 2012 to December 2017 to identify patients with follicular or Hurthle cell adenomas or carcinomas in the final surgical pathology. Only subjects that have preoperative US with nodules can be identified surely in both transverse and longitudinal images, were included. Demographic data and thyroid stimulating hormone (TSH) level were recorded. \n\nUS evaluation\n\nThe nodules were measured in three axes by one neuroradiologist (AB). The first axis is the maximal dimension in the transverse image, the second is the maximal dimension perpendicular to the first dimension on the transverse image. These two axes are referred as width and depth, depending on the shape of the nodule. The last dimension (referred to as ‘height’) is the maximal dimension in the longitudinal image. 
(Tessler et al., 2017) (Figure 1) \nA taller-than-wide shape is determined on the transverse image by comparing the diameters parallel (tallness) and perpendicular (wideness) to the ultrasound beam (Tessler et al., 2017). From this description, we drew a bounding box around the nodule and calculated the ratio between the diameters in the Y-axis and X-axis of the nodule in the transverse image, such that the X-axis was perpendicular to the US beam and the Y-axis was parallel (Figure 1). A Y/X axis ratio greater than 1 was classified as “beam-defined” taller-than-wide shape. We also calculated the ratio between depth and width from the three-axis measurements. These ratios reflect different angulations of the nodules, which might simulate the various probe angulations on real-time US. This parameter is referred to as the “diameter-defined” ratio in our study.\n\nStatistical analysis\n\nThe SPSS v.22 software package was used for statistical analysis. Univariable binary logistic regression analysis was used for both categorical data (gender, TSH categories and shape) and continuous data (age, TSH value, width, depth and height). For each categorical variable, the predictor value and reference category were set. Independent variables with P-value < 0.25 were included in the next step for multivariable analysis. For multivariable analysis, forward logistic regression (LR) and backward LR methods were performed. The independent variables that were statistically significant (P-value < 0.05) in both methods were then analysed by enter-method LR to check for multicollinearity and interactions. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve was used for assessing model performance. The beam-defined and diameter-defined ratios for taller-than-wide shape were compared by paired t-test, and their correlation was determined with Pearson’s correlation.", "We found 556 patients with FN and HCN as the final surgical pathology in our database. Of those, 397 had preoperative US available for our review. In 162 patients with multiple nodules, the specific location of the nodule was not stated, making localization on US uncertain; these cases were excluded. One hundred twenty of the patients did not have both transverse and longitudinal images. This left a total of 115 patients for analysis, including 58 benign (39 FA and 19 HCA) and 57 malignant (35 FC and 22 HCC) nodules. \nIn the univariable binary logistic regression analysis of each independent variable, gender, age, width, depth, height and taller-than-wide appearance yielded P-values of 0.031, 0.014, 0.001, 0.001, <0.001, and 0.115, respectively, and thus met the P < 0.25 threshold for entry into the multivariable analysis (Table 1). These variables were then analyzed by multivariable binary logistic regression analysis with forward LR and backward LR methods. In both methods, only height and age were statistically significant (P-values = 0.001 and 0.042, respectively) (Table 2). For visualization of the P-value trends of the other variables, logistic regression analysis with the enter method was performed and is shown in Table 2. The area under the curve (AUC) of this model is 0.72, representing good model discrimination.\nNodule size was stratified by height, and we found that increasing size increases the odds ratio (OR) of carcinoma. 
The categorized size reached statistical significance at sizes equal to or greater than 4 cm (P-value = 0.006, OR = 4.5) (Table 3 and Figure 2).\nStratified age groups showed increasing ORs for carcinoma with increasing age. The optimal cutoff value is 55 years (OR = 2.4-3.6, p-value = 0.03) (Table 4).\nPearson’s correlation demonstrated a strong, positive correlation between beam-defined and diameter-defined ratios (r = 0.768, p-value = 0.01) (Figure 3). The mean (SD) beam-defined and diameter-defined ratios for nodule shape across all data were 0.85 (±0.21) and 0.88 (±0.26), respectively (p-value = 0.051). When the equivocal ratios (0.9-1.1) were excluded, the means (SD) were 0.78 (±0.23) and 0.81 (±0.27), respectively (p-value = 0.118). ", "Our study showed that size and age are independent predictors of malignancy in follicular and Hurthle cell tumors. FC and HCC are significantly larger than adenomas, with an optimal cutoff at size ≥ 4 cm (OR = 4.5, p-value = 0.006). Several studies have mentioned that FC and HCC tend to be larger than other types of malignancy, but they did not compare them with their benign counterparts (Méndez et al., 2008; Gulcelik et al., 2008; Choi et al., 2009; Ibrahim et al., 2015; Jin et al., 2018). Increasing tumor volume has also been reported to increase the risk of malignancy in FN (Sillery et al., 2010). Previous studies on FN cytology reported that size was not a predictor of malignancy; however, these studies compared final malignant pathology with final benign pathology without subclassification of specific tumor subtypes, i.e., follicular and papillary carcinomas were considered together as malignant (Méndez et al., 2008; Gulcelik et al., 2008; Choi et al., 2009; Parikh et al., 2013; Ibrahim et al., 2015). When it comes to size, the measurement method is crucial. Our study is one of the few describing the measurement method, which we performed according to a standard system (Tessler et al., 2017).\nEven though increasing age unsurprisingly increases the risk of malignancy, we showed the odds ratio for each age group, which could be of additional value for triaging patients based on this basic parameter. We found an optimal cutoff age of 55 years (OR = 2.4-3.6, p-value = 0.03).\nTo the best of our knowledge, our study is the first to show that taller-than-wide shape is not an independent predictor of malignancy in FN and HCN. Taller-than-wide shape is well accepted as an important malignant US feature in thyroid cancer. However, most studies on thyroid cancer are weighted toward findings in papillary carcinoma, which has a different nature both clinically and histologically (Raparia et al., 2009; Yoon et al., 2010; Castro et al., 2011; Ren et al., 2015; Espinosa De Ycaza et al., 2016; Tessler et al., 2017; Yang and Fried, 2018). A previous study using CT to compare the shape of FN and nodular hyperplasia found that taller-than-wide shape is significantly more common in FN, irrespective of adenoma or carcinoma (Lee et al., 2016). Several studies have reported that the follicular variant of PTC has a significantly less taller-than-wide appearance than conventional PTC (Kim et al., 2009; Anuradha et al., 2016; Jeon et al., 2016). Interobserver agreement on taller-than-wide shape ranges from fair to almost perfect (kappa 0.282-1) in previous studies (Chandramohan et al., 2016; Tessler et al., 2017; Grani et al., 2018; Phuttharak et al., 2019; Pang et al., 2019). 
We believe this variation occurs because of variability in the position of the US probe during real-time scanning. We found a near-significant difference between the beam-defined and diameter-defined ratios for the entire dataset (p-value = 0.051), but after excluding the equivocal ratios (0.9-1.1) the difference was not significant (p-value = 0.118). Angulation of the probe would not have a serious effect on nodules that are strikingly taller than wide. However, for nodules that are equivocal, different angulations will affect the ratio and hence the apparent shape. Inspection alone, without measurement, will therefore produce discordant assessments of the same nodule. \nA limitation of this study is that the data are from a single institution and were collected retrospectively. A large number of patients were excluded because of incomplete data (e.g., lack of both longitudinal and perpendicular views). Nonetheless, our study comprises one of the largest series of follicular and Hurthle cell neoplastic pathology, with an almost balanced number of adenomas and carcinomas. \nIn conclusion, FC and HCC are significantly larger than FA and HCA, with a significant cutoff value at 4 cm. Increasing age groups show increasing odds ratios of malignancy. Our study suggests that taller-than-wide shape is not a predictor of carcinoma in this group of patients, in contrast to PTC. We believe that this knowledge will be helpful in more accurately assessing a patient’s risk of malignancy and thus better guiding subsequent management.\nDescriptive Data and Univariable Logistic Regression Analysis\n\na, Statistically significant for univariable analysis.\nMultivariable Logistic Regression Analysis\n*, Variables analyzed by forward logistic regression (LR) and backward LR methods; **, variables analyzed by enter method; a, statistically significant for multivariable analysis.\nOdds Ratio of Tumor Size According to Height\nOR, odds ratio; a, statistically significant difference at size equal to or more than 4 cm.\nOdds Ratio of Age\nOR, odds ratio; a, statistically significant difference at age more than 55 years.\nTransverse (A, B and C) and longitudinal (D) US images of an isoechoic solid thyroid nodule demonstrating the measurement method (A, B and D; depth, width and height diameters). A bounding box is drawn around the nodule (C), and the ratio between the Y and X axes represents the shape of the nodule. The shape of this nodule is taller-than-wide. The final pathology was a follicular adenoma.\nPercentage of Malignancy According to Nodule Height. **P-value = 0.006\nScatter Plot of Beam-Defined and Diameter-Defined Ratios (r = 0.768, p-value = 0.01)", "A.B. and Z.A. conceived of the presented idea. A.B. developed the theory and performed the computations. A.Z. and K.P. contributed to sample preparation. C.D. and D.E. verified the analytical methods. M.R.C., M.S., D.E. and B.E. supervised the findings of this work. All authors discussed the results and contributed to the final manuscript." ]
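The two shape ratios defined in the Methods above (the beam-defined Y/X ratio from a bounding box on the transverse image, and the diameter-defined depth/width ratio from the caliper measurements) reduce to simple quotients. A minimal sketch; the function names and the worked example are ours, not the authors':

```python
def beam_defined_ratio(box_y_mm, box_x_mm):
    """Y/X ratio of a bounding box drawn on the transverse image,
    with X perpendicular and Y parallel to the ultrasound beam."""
    return box_y_mm / box_x_mm

def diameter_defined_ratio(depth_mm, width_mm):
    """Depth/width ratio from the three-axis caliper measurements."""
    return depth_mm / width_mm

def is_taller_than_wide(ratio):
    # A ratio greater than 1 is classified as taller-than-wide.
    return ratio > 1.0

def is_equivocal(ratio):
    # Ratios between 0.9 and 1.1 were treated as equivocal in the
    # sensitivity analysis that excluded near-round nodules.
    return 0.9 <= ratio <= 1.1

# Example: a hypothetical nodule 14 mm deep and 18 mm wide on the
# transverse image (clearly wider than tall, and not equivocal).
r = diameter_defined_ratio(14, 18)
print(f"ratio = {r:.2f}, taller-than-wide = {is_taller_than_wide(r)}, "
      f"equivocal = {is_equivocal(r)}")
```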
[ "intro", "materials|methods", "results", "discussion", null ]
[ "Follicular neoplasm", "hurthle cell neoplasm", "size", "talle-than-wide" ]
Introduction: Follicular neoplasms (FN) and Hurthle cell neoplasms (HCN) represent 2-10% of thyroid nodules and often cause challenges in managing patients with thyroid lesions. There are not yet true consensus criteria for triaging these patients, since research on thyroid cancer focuses mainly on papillary thyroid carcinoma (PTC). For both FN and HCN, cytological features obtained with fine-needle aspiration biopsy (FNAB) cannot accurately differentiate benign lesions from malignant ones. Because vascular or capsular invasion is required to diagnose malignancy in these types of thyroid nodules, final pathology is essential (Carling and Udelsman, 2005; Mathur et al., 2014). Surgical treatment of these nodules ranges from hemithyroidectomy to total thyroidectomy. Even with data supporting the adequacy of hemithyroidectomy in 74-96% of these cases, total thyroidectomy is often performed because of cost-effectiveness, the risk of reoperation if a large nodule is found to be malignant, and the complications of reoperation (Melck et al., 2006; Wiseman et al., 2006; Corso et al., 2014). With better preoperative predictors, proper selection of the appropriate procedure would benefit patients in terms of postoperative complications, avoidance of reoperation, and overall quality of life (Megwalu and Green, 2016; Kuba et al., 2017). Several ultrasound features, such as lack of a sonographic halo, hypoechoic appearance, predominantly solid contents, a heterogeneous echotexture, and the presence of calcifications, have been reported to be predictors of follicular carcinomas (Sillery et al., 2010; Zhang and Hu, 2014). In one study, the volume of FC was shown to be significantly greater than that of FA (Sillery et al., 2010). An increasing likelihood of follicular carcinoma (FC) and Hurthle cell carcinoma (HCC) in larger nodules has been reported (Méndez et al., 2008; Gulcelik et al., 2008; Choi et al., 2009; Sillery et al., 2010; Ibrahim et al., 2015; Arpana et al., 2018; Jin et al., 2018). However, few articles have compared the size of FC and HCC directly with that of follicular adenoma (FA) and Hurthle cell adenoma (HCA), and their results are discrepant (Seo et al., 2009; Sillery et al., 2010; Zhang and Hu, 2014). In this study, we evaluated the characteristics of FN and HCN based on their size, shape, and patient data. Materials and Methods: Patients This study was approved by the local institutional review board with a waiver of informed consent. We retrospectively reviewed pathology reports of patients who underwent thyroidectomy at our institution between January 2012 and December 2017 to identify patients with follicular or Hurthle cell adenomas or carcinomas in the final surgical pathology. Only subjects with preoperative US in which the nodule could be identified with certainty in both transverse and longitudinal images were included. Demographic data and thyroid stimulating hormone (TSH) levels were recorded. US evaluation The nodules were measured in three axes by one neuroradiologist (AB). The first axis is the maximal dimension in the transverse image; the second is the maximal dimension perpendicular to the first on the transverse image. These two axes are referred to as width and depth, depending on the shape of the nodule. The last dimension (referred to as ‘height’) is the maximal dimension in the longitudinal image. 
(Tessler et al., 2017) (Figure 1) A taller-than-wide shape is determined on the transverse image by comparing the diameters parallel (tallness) and perpendicular (wideness) to the ultrasound beam (Tessler et al., 2017). From this description, we drew a bounding box around the nodule and calculated the ratio between the diameters in the Y-axis and X-axis of the nodule in the transverse image, such that the X-axis was perpendicular to the US beam and the Y-axis was parallel (Figure 1). A Y/X axis ratio greater than 1 was classified as “beam-defined” taller-than-wide shape. We also calculated the ratio between depth and width from the three-axis measurements. These ratios reflect different angulations of the nodules, which might simulate the various probe angulations on real-time US. This parameter is referred to as the “diameter-defined” ratio in our study. Statistical analysis The SPSS v.22 software package was used for statistical analysis. Univariable binary logistic regression analysis was used for both categorical data (gender, TSH categories and shape) and continuous data (age, TSH value, width, depth and height). For each categorical variable, the predictor value and reference category were set. Independent variables with P-value < 0.25 were included in the next step for multivariable analysis. For multivariable analysis, forward logistic regression (LR) and backward LR methods were performed. The independent variables that were statistically significant (P-value < 0.05) in both methods were then analysed by enter-method LR to check for multicollinearity and interactions. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve was used for assessing model performance. The beam-defined and diameter-defined ratios for taller-than-wide shape were compared by paired t-test, and their correlation was determined with Pearson’s correlation. Results: We found 556 patients with FN and HCN as the final surgical pathology in our database. Of those, 397 had preoperative US available for our review. In 162 patients with multiple nodules, the specific location of the nodule was not stated, making localization on US uncertain; these cases were excluded. One hundred twenty of the patients did not have both transverse and longitudinal images. This left a total of 115 patients for analysis, including 58 benign (39 FA and 19 HCA) and 57 malignant (35 FC and 22 HCC) nodules. In the univariable binary logistic regression analysis of each independent variable, gender, age, width, depth, height and taller-than-wide appearance yielded P-values of 0.031, 0.014, 0.001, 0.001, <0.001, and 0.115, respectively, and thus met the P < 0.25 threshold for entry into the multivariable analysis (Table 1). These variables were then analyzed by multivariable binary logistic regression analysis with forward LR and backward LR methods. In both methods, only height and age were statistically significant (P-values = 0.001 and 0.042, respectively) (Table 2). For visualization of the P-value trends of the other variables, logistic regression analysis with the enter method was performed and is shown in Table 2. The area under the curve (AUC) of this model is 0.72, representing good model discrimination. Nodule size was stratified by height, and we found that increasing size increases the odds ratio (OR) of carcinoma. 
The categorized size reached statistical significance at sizes equal to or greater than 4 cm (P-value = 0.006, OR = 4.5) (Table 3 and Figure 2). Stratified age groups showed increasing ORs for carcinoma with increasing age. The optimal cutoff value is 55 years (OR = 2.4-3.6, p-value = 0.03) (Table 4). Pearson’s correlation demonstrated a strong, positive correlation between beam-defined and diameter-defined ratios (r = 0.768, p-value = 0.01) (Figure 3). The mean (SD) beam-defined and diameter-defined ratios for nodule shape across all data were 0.85 (±0.21) and 0.88 (±0.26), respectively (p-value = 0.051). When the equivocal ratios (0.9-1.1) were excluded, the means (SD) were 0.78 (±0.23) and 0.81 (±0.27), respectively (p-value = 0.118). Discussion: Our study showed that size and age are independent predictors of malignancy in follicular and Hurthle cell tumors. FC and HCC are significantly larger than adenomas, with an optimal cutoff at size ≥ 4 cm (OR = 4.5, p-value = 0.006). Several studies have mentioned that FC and HCC tend to be larger than other types of malignancy, but they did not compare them with their benign counterparts (Méndez et al., 2008; Gulcelik et al., 2008; Choi et al., 2009; Ibrahim et al., 2015; Jin et al., 2018). Increasing tumor volume has also been reported to increase the risk of malignancy in FN (Sillery et al., 2010). Previous studies on FN cytology reported that size was not a predictor of malignancy; however, these studies compared final malignant pathology with final benign pathology without subclassification of specific tumor subtypes, i.e., follicular and papillary carcinomas were considered together as malignant (Méndez et al., 2008; Gulcelik et al., 2008; Choi et al., 2009; Parikh et al., 2013; Ibrahim et al., 2015). When it comes to size, the measurement method is crucial. Our study is one of the few describing the measurement method, which we performed according to a standard system (Tessler et al., 2017). Even though increasing age unsurprisingly increases the risk of malignancy, we showed the odds ratio for each age group, which could be of additional value for triaging patients based on this basic parameter. We found an optimal cutoff age of 55 years (OR = 2.4-3.6, p-value = 0.03). To the best of our knowledge, our study is the first to show that taller-than-wide shape is not an independent predictor of malignancy in FN and HCN. Taller-than-wide shape is well accepted as an important malignant US feature in thyroid cancer. However, most studies on thyroid cancer are weighted toward findings in papillary carcinoma, which has a different nature both clinically and histologically (Raparia et al., 2009; Yoon et al., 2010; Castro et al., 2011; Ren et al., 2015; Espinosa De Ycaza et al., 2016; Tessler et al., 2017; Yang and Fried, 2018). A previous study using CT to compare the shape of FN and nodular hyperplasia found that taller-than-wide shape is significantly more common in FN, irrespective of adenoma or carcinoma (Lee et al., 2016). Several studies have reported that the follicular variant of PTC has a significantly less taller-than-wide appearance than conventional PTC (Kim et al., 2009; Anuradha et al., 2016; Jeon et al., 2016). Interobserver agreement on taller-than-wide shape ranges from fair to almost perfect (kappa 0.282-1) in previous studies (Chandramohan et al., 2016; Tessler et al., 2017; Grani et al., 2018; Phuttharak et al., 2019; Pang et al., 2019). We believe this variation occurs because of variability in the position of the US probe during real-time scanning. 
We found a near-significant difference between the beam-defined and diameter-defined ratios for the entire dataset (p-value = 0.051), but after excluding the equivocal ratios (0.9-1.1) the difference was not significant (p-value = 0.118). Angulation of the probe would not have a serious effect on nodules that are strikingly taller than wide. However, for nodules that are equivocal, different angulations will affect the ratio and hence the apparent shape. Inspection alone, without measurement, will therefore produce discordant assessments of the same nodule. A limitation of this study is that the data are from a single institution and were collected retrospectively. A large number of patients were excluded because of incomplete data (e.g., lack of both longitudinal and perpendicular views). Nonetheless, our study comprises one of the largest series of follicular and Hurthle cell neoplastic pathology, with an almost balanced number of adenomas and carcinomas. In conclusion, FC and HCC are significantly larger than FA and HCA, with a significant cutoff value at 4 cm. Increasing age groups show increasing odds ratios of malignancy. Our study suggests that taller-than-wide shape is not a predictor of carcinoma in this group of patients, in contrast to PTC. We believe that this knowledge will be helpful in more accurately assessing a patient’s risk of malignancy and thus better guiding subsequent management. Descriptive Data and Univariable Logistic Regression Analysis a, Statistically significant for univariable analysis. Multivariable Logistic Regression Analysis *, Variables analyzed by forward logistic regression (LR) and backward LR methods; **, variables analyzed by enter method; a, statistically significant for multivariable analysis. Odds Ratio of Tumor Size According to Height OR, odds ratio; a, statistically significant difference at size equal to or more than 4 cm. Odds Ratio of Age OR, odds ratio; a, statistically significant difference at age more than 55 years. Transverse (A, B and C) and longitudinal (D) US images of an isoechoic solid thyroid nodule demonstrating the measurement method (A, B and D; depth, width and height diameters). A bounding box is drawn around the nodule (C), and the ratio between the Y and X axes represents the shape of the nodule. The shape of this nodule is taller-than-wide. The final pathology was a follicular adenoma. Percentage of Malignancy According to Nodule Height. **P-value = 0.006 Scatter Plot of Beam-Defined and Diameter-Defined Ratios (r = 0.768, p-value = 0.01) Author Contribution Statement: A.B. and Z.A. conceived of the presented idea. A.B. developed the theory and performed the computations. A.Z. and K.P. contributed to sample preparation. C.D. and D.E. verified the analytical methods. M.R.C., M.S., D.E. and B.E. supervised the findings of this work. All authors discussed the results and contributed to the final manuscript.
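The size-cutoff result above (OR = 4.5 for height ≥ 4 cm) is at bottom a 2×2 odds ratio. A minimal sketch with placeholder cell counts; the paper's per-cell table is not reproduced in the text, so these numbers only illustrate the computation:

```python
import numpy as np
from scipy.stats import fisher_exact

# Placeholder 2x2 table; rows: height >= 4 cm / height < 4 cm,
# columns: carcinoma / adenoma. Not the study's actual counts.
table = np.array([[27, 10],
                  [30, 48]])

# Cross-product odds ratio and an exact test of association.
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
result = fisher_exact(table)  # returns (odds ratio, p-value)
print(f"OR = {odds_ratio:.1f}, Fisher p = {result[1]:.3f}")
```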
Background: The management of follicular (FN) and Hurthle cell neoplasms (HCN) is often difficult because of the uncertainty of malignancy risk. We aimed to assess characteristics of benign and malignant follicular and Hurthle neoplasms based on their shape and size. Methods: Patients with Follicular adenoma (FA) or carcinoma (FC) and Hurthle Cell adenoma (HCA) or carcinoma (HCC) who had preoperative ultrasonography were included. Demographic data were retrieved. Size and shape of the nodules were measured. Logistic regression analyses and odds ratios were performed. Results: A total of 115 nodules with 57 carcinomas and 58 adenomas were included. Logistic regression analysis shows that the nodule height and the patient age are predictors of malignancy (p-values = 0.001 and 0.042). A cutoff value of nodule height ≥ 4 cm. produces an odds ratio of 4.5 (p-value = 0.006). An age ≥ 55 year-old demonstrates an odds ratio of 2.4-3.6 (p-value = 0.03). Taller-than-wide shape was not statistically significant (p-value = 0.613). Conclusions: FC and HCC are larger than FA and HCA in size, with a cutoff at 4 cm. Increasing age increases the odds of malignancy with a cutoff at 55 year-old. Taller-than-wide shape is not a predictor of malignancy.
null
null
2,702
267
[ 60 ]
5
[ "value", "shape", "nodule", "patients", "size", "age", "malignancy", "wide", "analysis", "taller wide" ]
[ "total thyroidectomy performed", "10 thyroid nodules", "nodules ranges hemithyroidectomy", "papillary thyroid carcinoma", "thyroid carcinoma" ]
null
null
null
[CONTENT] Follicular neoplasm | hurthle cell neoplasm | size | taller-than-wide [SUMMARY]
null
[CONTENT] Follicular neoplasm | hurthle cell neoplasm | size | taller-than-wide [SUMMARY]
null
[CONTENT] Follicular neoplasm | hurthle cell neoplasm | size | taller-than-wide [SUMMARY]
null
[CONTENT] Adenocarcinoma, Follicular | Adenoma | Adenoma, Oxyphilic | Case-Control Studies | Female | Follow-Up Studies | Humans | Male | Middle Aged | Prognosis | Retrospective Studies | Thyroid Neoplasms | Thyroid Nodule | Thyroidectomy | Ultrasonography [SUMMARY]
null
[CONTENT] Adenocarcinoma, Follicular | Adenoma | Adenoma, Oxyphilic | Case-Control Studies | Female | Follow-Up Studies | Humans | Male | Middle Aged | Prognosis | Retrospective Studies | Thyroid Neoplasms | Thyroid Nodule | Thyroidectomy | Ultrasonography [SUMMARY]
null
[CONTENT] Adenocarcinoma, Follicular | Adenoma | Adenoma, Oxyphilic | Case-Control Studies | Female | Follow-Up Studies | Humans | Male | Middle Aged | Prognosis | Retrospective Studies | Thyroid Neoplasms | Thyroid Nodule | Thyroidectomy | Ultrasonography [SUMMARY]
null
[CONTENT] total thyroidectomy performed | 10 thyroid nodules | nodules ranges hemithyroidectomy | papillary thyroid carcinoma | thyroid carcinoma [SUMMARY]
null
[CONTENT] total thyroidectomy performed | 10 thyroid nodules | nodules ranges hemithyroidectomy | papillary thyroid carcinoma | thyroid carcinoma [SUMMARY]
null
[CONTENT] total thyroidectomy performed | 10 thyroid nodules | nodules ranges hemithyroidectomy | papillary thyroid carcinoma | thyroid carcinoma [SUMMARY]
null
[CONTENT] value | shape | nodule | patients | size | age | malignancy | wide | analysis | taller wide [SUMMARY]
null
[CONTENT] value | shape | nodule | patients | size | age | malignancy | wide | analysis | taller wide [SUMMARY]
null
[CONTENT] value | shape | nodule | patients | size | age | malignancy | wide | analysis | taller wide [SUMMARY]
null
[CONTENT] 2014 | thyroid | sillery | sillery 2010 | 2010 | reoperation | follicular | hcn | nodules | fc [SUMMARY]
null
[CONTENT] value | 001 | table | respectively | patients | size | age | defined | ratios | nodule [SUMMARY]
null
[CONTENT] value | size | patients | shape | analysis | contributed | nodule | malignancy | age | defined [SUMMARY]
null
[CONTENT] Hurthle | HCN ||| Hurthle [SUMMARY]
null
[CONTENT] 115 | 57 | 58 ||| 0.001 | 0.042 ||| 4 cm | 4.5 | 0.006 ||| 55 year-old | 2.4-3.6 | 0.03 ||| Taller | 0.613 [SUMMARY]
null
[CONTENT] Hurthle | HCN ||| Hurthle ||| FC | Hurthle Cell ||| ||| ||| ||| 115 | 57 | 58 ||| 0.001 | 0.042 ||| 4 cm | 4.5 | 0.006 ||| 55 year-old | 2.4-3.6 | 0.03 ||| Taller | 0.613 ||| 4 cm ||| 55 year-old ||| Taller [SUMMARY]
null
Influence of an acetate- and a lactate-based balanced infusion solution on acid base physiology and hemodynamics: an observational pilot study.
22769740
The current pilot study compares the impact of an intravenous infusion of Ringer's lactate to an acetate-based solution with regard to acid-base balance. The study design included the variables of the Stewart approach and focused on the effective strong ion difference. Because adverse hemodynamic effects have been reported when using acetate buffered solutions in hemodialysis, hemodynamics were also evaluated.
BACKGROUND
Twenty-four women who had undergone abdominal gynecologic surgery and who had received either Ringer's lactate (Strong Ion Difference 28 mmol/L; n = 12) or an acetate-based solution (Strong Ion Difference 36.8 mmol/L; n = 12) according to an established clinical protocol and its precursor were included in the investigation. After induction of general anesthesia, a set of acid-base variables, hemodynamic values and serum electrolytes was measured three times during the next 120 minutes.
METHODS
Patients received a mean dose of 4,054 ± 450 ml of one or the other of the solutions. In terms of mean arterial blood pressure and norepinephrine requirements, there were no differences between the study groups. pH and serum HCO3- concentration decreased slightly but significantly only with Ringer's lactate. In addition, the acetate-based solution kept the plasma effective strong ion difference more stable than Ringer's lactate.
RESULTS
Both solutions provided hemodynamic stability. With regard to the consistency of acid-base parameters, neither solution appeared inferior. Whether the slight advantages observed for the acetate-buffered solution in terms of stability of pH and plasma HCO3- are clinically relevant needs to be investigated in a larger randomized controlled trial.
CONCLUSIONS
[ "Acetates", "Acid-Base Equilibrium", "Adult", "Aged", "Arterial Pressure", "Bicarbonates", "Buffers", "Electrocardiography", "Electrolytes", "Female", "Genital Neoplasms, Female", "Hemodynamics", "Humans", "Hydrogen-Ion Concentration", "Infusion Pumps", "Infusions, Intravenous", "Isotonic Solutions", "Middle Aged", "Norepinephrine", "Pilot Projects", "Ringer's Lactate", "Young Adult" ]
3479046
Background
In the fields of surgery and intensive care, hyperchloremic acidosis is a well-known problem in patients receiving large amounts of standard crystalloids, especially 0.9% sodium chloride solutions. A series of investigations has emphasized the disadvantageous effects of hyperchloremic acidosis on various organ systems, for example, hemodynamics, NO production, renal blood circulation, urinary output or hemostasis [1-4]. Balanced crystalloids, whose composition prevents hyperchloremia, are increasingly accepted and likely to be ‘state of the art’ in the near future [3,5-7]. Balanced hydroxyethyl-starch solutions are also available today. All of these preparations are characterized by the presence of metabolizable organic anions such as lactate, acetate or malate and contain physiological electrolyte concentrations. Theoretically, balanced solutions which possess a strong ion difference (SID) of around 24 mmol/L after metabolization (for example, Ringer’s lactate) are supposed to have no influence on blood pH even after intravenous infusion of several liters [8]. However, balanced crystalloids containing lactate have some disadvantages as well [9]. Their metabolism depends on liver function and requires increased oxygen consumption. Acetated solutions do not display these shortcomings, but previous studies in patients undergoing hemodialysis have demonstrated that their administration may lead to hemodynamic instability and hypotension [10]. Despite these prospects, the routine use of balanced crystalloids in anesthesia is still infrequent to date. Large randomized controlled clinical trials (RCTs) investigating possible advantages as well as adverse effects of balanced crystalloids for routine use in the operating theater are sparse. The aim of the current study was to provide a preliminary comparison of two differently buffered balanced infusion solutions by reviewing data obtained from two established clinical protocols and, in addition, to test the feasibility of a future large RCT. The main target of interest was to investigate the influence of a solution containing a plasma-like SID (acetate buffered, SID = 36.8 mmol/L) and a solution with a comparably low SID (lactate buffered, SID = 28 mmol/L) on the variables of Stewart’s acid–base approach, including the effective strong ion difference (SIDe). The impact of the acetate-based infusion solution on hemodynamic stability was also studied. In order to guide a future RCT, primary and secondary variables of interest were determined and a sample size estimation was performed.
Methods
In 2009, Ringer’s lactate (B. Braun Melsungen, Melsungen, Germany) was replaced by Ionosteril™ (Fresenius Medical Care, Schweinfurt, Germany), an acetate-based infusion solution (ABIS), as the standard intraoperative infusion solution in our institution. In a before-and-after analysis, we studied acid–base balance and hemodynamics in 24 women without obvious cardiac, pulmonary or renal diseases (classified as American Society of Anesthesiologists physical status I or II) who underwent major, open gynecologic cancer surgery. The study comprised two arms: the first consisted of patients who had received solely Ringer’s lactate (RL-group, n = 12) and the second of women who were treated only with the ABIS (ABIS-group, n = 12). The composition of both infusion solutions is listed in Table 1. Patients suffering from pre-existing acid–base disorders and patients who required blood transfusions, plasma products or colloids during the study period were not included in the investigation. A retrospective chart review was performed for patients in both arms. The mean intergroup difference of changes in SIDe between baseline values and the end of the study period was defined as the primary variable of interest. Changes in arterial plasma pH and norepinephrine requirements during general anesthesia were considered secondary variables. The target parameters of the study were obtained from routine intraoperative blood samples, taken from arterial lines in patients requiring postoperative intensive care management (see Table 2). Because no additional blood sampling was necessary and the study was designed to evaluate two different versions of an established infusion regimen, the Ethics Committee of the University of Munich waived the need for patients’ informed consent for this pilot trial. Table 1: Composition of study solutions. [], serum concentration of the ion between the brackets; ABIS, acetate-based infusion solution. Table 2: List of variables. ABIS, acetate-based infusion solution; ABP, arterial blood pressure; BE, base excess; BL, baseline; [], serum concentration of the variable between the brackets. General anesthesia was induced with intravenous propofol, sufentanil, and cis-atracurium. After tracheal intubation, anesthesia was maintained with propofol (6 to 10 mg/kg/hour) and additional doses of sufentanil and cis-atracurium as appropriate. Mechanical ventilation was performed to maintain adequate oxygenation and to preserve normocapnia. Intraoperative monitoring included end-tidal PaCO2, electrocardiography, central venous pressure, arterial blood pressure, pulse oximetry and esophageal temperature. During the operative period, the patient’s temperature was kept constant in all cases. Urinary output and blood loss during the time course of the investigation were recorded. In order to obtain blood pH, base excess (BE), HCO3- and PaCO2, a standard blood gas analyzer (Bayer Rapidlab 1265, Bayer Healthcare LLC, East Walpole, MA, USA) was used. Serum electrolyte and lactate concentrations, as well as hemoglobin, hematocrit and albumin concentrations, were assessed in the central laboratory of our institution. 
Depending on the results of the laboratory measurements, the following variables were calculated:
· [PO43-] ionized (Pi) (mmol/L) = [PO43-] (mg/dL) × 0.3229
· Anion gap (mmol/L) = [Na+] − [Cl-] − [HCO3-]
· Effective strong ion difference (SIDe) (mmol/L) = [A-] + [HCO3-] = [A-] + (0.0304 × PaCO2 × 10^(pH − 6.1)), according to Figge [11,12]
· Weak acids serum concentration [A-] (mmol/L) = [Albumin] × (0.123 × pH − 0.631) + [Pi] × (0.309 × pH − 0.469), according to Figge [11,12]
· Strong ion gap (SIG) (mmol/L) = [Na+] + [K+] + 2[Ca++] + 2[Mg++] − [Cl-] − [Lac-] − [A-] − [HCO3-], according to Stewart [13]
Hemodynamics: In accordance with our institution’s clinical practice, a mean arterial pressure (MAP) of at least 80 mmHg was regarded as necessary during general anesthesia. If volume replacement with the study solutions alone (up to 30 ml/kg/hour) was not sufficient to maintain the necessary level of MAP, norepinephrine was administered in increasing doses of 0.1 mg/hour as required until the target value was achieved.
Statistical analysis: Mean and standard deviation of the mean (parametric values) or median and range (non-parametric data) were calculated for each target parameter. The Shapiro–Wilk test was used to test for normality. Because most of the data were normally distributed, they are presented as mean ± standard deviation. We used a repeated-measures analysis of variance (RM-ANOVA) followed by a Student–Newman–Keuls test (for normally distributed data) or a Friedman test (for non-normally distributed data) to describe changes of measurement parameters within a group over the course of time (three points of measurement). In order to compare differences between the groups, an ANOVA was followed by a Student–Newman–Keuls test (see above). If data were not normally distributed, a one-way ANOVA on ranks (Kruskal–Wallis test) was followed by Dunn’s test. For all determinations, a type I error protection of P < 0.05 was considered significant. Statistical analysis was performed using Sigma Stat Software Version 3.1 (RockWare Inc., Golden, Colorado, USA) and Microsoft Excel Version 2003 (Microsoft Deutschland GmbH, Unterschleißheim, Germany). All electrolyte concentrations are expressed as mmol/L.
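The Figge/Stewart relations above translate directly into code. A minimal sketch, with our own function and argument names; note that albumin enters Figge's [A-] term in g/L, and the example values are plausible normal-range numbers, not data from this study:

```python
def figge_stewart(na, k, ca, mg, cl, lactate, ph, paco2_mmHg,
                  albumin_g_per_l, phosphate_mg_per_dl):
    """Stewart/Figge acid-base quantities from one arterial sample.
    Electrolytes and ionized Ca++/Mg++ in mmol/L, albumin in g/L
    (Figge's convention), PaCO2 in mmHg."""
    pi = phosphate_mg_per_dl * 0.3229                   # ionized phosphate, mmol/L
    hco3 = 0.0304 * paco2_mmHg * 10 ** (ph - 6.1)       # Henderson-Hasselbalch
    a_minus = (albumin_g_per_l * (0.123 * ph - 0.631)
               + pi * (0.309 * ph - 0.469))             # weak acids [A-]
    side = a_minus + hco3                               # effective SID
    sig = (na + k + 2 * ca + 2 * mg
           - cl - lactate - a_minus - hco3)             # strong ion gap
    anion_gap = na - cl - hco3
    return {"HCO3": hco3, "A-": a_minus, "SIDe": side,
            "SIG": sig, "anion_gap": anion_gap}

# Illustrative normal-range example: yields HCO3 ~24, [A-] ~14,
# SIDe ~38 mmol/L, roughly the baseline values reported in Table 3.
print(figge_stewart(na=140, k=4.0, ca=1.25, mg=0.5, cl=104,
                    lactate=1.0, ph=7.40, paco2_mmHg=40,
                    albumin_g_per_l=42, phosphate_mg_per_dl=3.5))
```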
Results
Twenty-four patients who a priori fulfilled the inclusion criteria were enrolled in this study (12 per group). We did not notice significant differences in the patients’ demographic data (mean age: RL-group, 64 ± 8 years; ABIS-group, 61 ± 10 years; mean body mass index: RL-group, 24.8 ± 2.6; ABIS-group, 24.5 ± 3.5; mean body weight: RL-group, 68.6 ± 6.0 kg; ABIS-group, 66.4 ± 14.0 kg; P > 0.05 for all comparisons). All women underwent surgery for ovarian cancer, including open radical hysterectomy, adnexal surgery and para-aortic as well as pelvic lymph node dissection. No protocol violations were observed, so no patients had to be excluded retrospectively from the trial. The mean intergroup difference of the changes in SIDe between BL and T120 was 2.08 ± 2.9 mmol/L, indicating a strong effect (d = 0.91) according to Cohen [14]. Thus, assuming an effect size of 2 mmol/L, a future RCT should include at least 35 patients in each study group to achieve a statistical power (1 – β) of 0.8 (a sample-size sketch follows below).
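The order of magnitude of this sample-size estimate can be re-checked with a standard two-sample power calculation. The snippet below is our hypothetical check using statsmodels, not part of the original analysis; it derives d conservatively from the assumed 2 mmol/L effect and the observed 2.9 mmol/L standard deviation.

```python
# Hypothetical re-check of the sample-size estimate with statsmodels.
# Assumptions: two-sided two-sample t-test, effect 2 mmol/L, SD 2.9 mmol/L.
from statsmodels.stats.power import TTestIndPower

d = 2.0 / 2.9                       # Cohen's d from assumed effect / SD
n_per_group = TTestIndPower().solve_power(
    effect_size=d, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_group))           # mid-thirties per group
```

This lands in the mid-thirties per group, consistent with the figure of at least 35 patients per group stated above.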
Acid–base balance and Stewart approach
Parameters contributing to the ‘classic’ acid–base balance (see Table 3) did not differ significantly between the groups at any time point of measurement. Intergroup comparisons of the parameters relevant to Stewart’s acid–base approach, especially the SIDe, did not show significant differences either. However, the RL-group showed small but significant intragroup changes during the time course, especially in pH and [HCO3-]; in the ABIS-group these values did not change. In addition, relevant intragroup reductions in SIDe and [A-] were observed for both RL and ABIS. Albumin serum concentrations fell to approximately 80% of their baseline values in both groups.
Table 3: Acid–base parameters. All values except PaCO2, albumin and pH are given in mmol/L. ABIS, acetate-based infusion solution; BE, base excess; BL, baseline measurement; RL, Ringer’s lactate; SD, standard deviation; SIDe, effective strong ion difference; SIG, strong ion gap; T60, T120, measurements at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; |Δ120|, absolute value of the difference between T120 and baseline measurements. *, significant intragroup changes over time (P < 0.05) between BL and T120; °, significant intragroup changes over time (P < 0.05) between BL and T60.
Hemodynamics
The overall mean blood loss during the time course was 763 ± 427 ml; it was 655 ± 460 ml in the RL-group and 870 ± 378 ml in the ABIS-group (P > 0.05). The total amount of the infused study solutions was 4,066 ± 308 ml for RL and 4,041 ± 572 ml for ABIS (P > 0.05). Norepinephrine requirements increased constantly during the study period, but differences between the groups did not reach significance. Nevertheless, a slight but significant reduction in CVP was observed in the RL-group. There were no differences in urine output. The further parameters listed under ‘Hemodynamics’ in Table 4 did not differ significantly between the groups.
Table 4: Hemodynamics. ABIS, acetate-based infusion solution; BL, baseline measurement; CVP, central venous pressure; HR, heart rate; MAP, mean arterial pressure; RL, Ringer’s lactate; SD, standard deviation; T60, T120, measurements at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; |Δ120|, absolute value of the difference between T120 and baseline measurements; *, significant intragroup changes over time (P < 0.05) between BL and T120.
Electrolytes
Serum electrolytes were measured mainly to calculate the parameters of the Stewart approach. Except for [lactate-] and [Mg++], no significant differences between the study groups were observed (see Table 5). Significantly higher levels of [Mg++] were found in the ABIS-group at T60 and T120. Serum lactate concentrations were significantly elevated for RL compared to ABIS at T60 and T120 (P < 0.05 for all comparisons). In addition, a significant increase in [Cl-] was noticed during the time course in the RL-group (Table 5), but not in the ABIS-group.
Table 5: Serum electrolyte concentrations. All values except [PO43-] are given in mmol/L. ABIS, acetate-based infusion solution; BL, baseline measurement; RL, Ringer’s lactate; SD, standard deviation; T60, T120, measurements at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; |Δ120|, absolute value of the difference between T120 and baseline measurements; *, significant intragroup changes over time (P < 0.05) between BL and T120; #, significant differences between groups at the respective time points of measurement (P < 0.05).
Discussion
The current investigation was designed to compare two currently available balanced infusion solutions in a clinical setting. One solution contained lactate, the other acetate as a metabolizable anion. The main areas of interest were the influence of these solutions on acid–base balance and on hemodynamic stability.
[ "Background", "Hemodynamics", "Statistical analysis", "Acid–base balance and Stewart approach", "Hemodynamics", "Electrolytes", "Acid–base balance", "Hemodynamic values", "Additional findings", "Feasibility of a future RCT", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "In the fields of surgery and intensive care, hyperchloremic acidosis is a well-known problem in patients receiving large amounts of standard crystalloids, especially 0.9% sodium chloride solutions. A series of investigations has emphasized the disadvantageous effects of hyperchloremic acidosis on various organ systems, for example, hemodynamics, NO-production, renal blood circulation, urinary output or hemostasis [1-4]. Balanced crystalloids, whose composition prevents hyperchloremia, are increasingly accepted and likely to be ‘state of the art’ in the near future [3,5-7]. Balanced hydroxyethyl-starch solutions are also available today. All of these preparations are characterized by the presence of metabolizable organic anions such as lactate, acetate or malate and contain physiological electrolyte concentrations. Theoretically, balanced solutions which possess a strong ion difference (SID) of around 24 mmol/L after metabolization (for example, Ringer’s lactate) are supposed to have no influence on blood pH even after intravenous infusion of several liters [8]. However, balanced crystalloids containing lactate have some disadvantages as well [9]. Their metabolism depends on liver function and does require increased oxygen consumption. Acetated solutions do not display these shortcomings, but previous studies in patients undergoing hemodialysis have demonstrated that their administration may lead to hemodynamic instability and hypotension [10].\nDespite their future perspective, to date the routine use of balanced crystalloids is still infrequent in anesthesia. Large randomized, controlled clinical trials (RCT’s) investigating possible advantages as well as adverse effects of balanced crystalloids for routine use in the operating theater are sparse. The aim of the current study was to provide a preliminary comparison of two differently buffered balanced infusion solutions by reviewing data obtained from two established clinical protocols and,in addition, to test the feasibility of a future large RCT. The main target of interest was to investigate the influence of a solution containing a plasma-like SID (acetate buffered, SID = 36.8 mmol/L) and a solution with a comparably low SID (lactate buffered, SID = 28 mmol/L) on the variables of Stewart’s acid–base approach including the effective strong ion difference (SIDe). The impact of the acetate-based infusion solution on hemodynamic stability was also studied. In order to guide a future RCT, primary and secondary variables of interest were determined and sample size estimation was performed.", "In accordance with our institution’s clinical practice a mean arterial pressure (MAP) of at least 80 mmHg was regarded as necessary during general anesthesia. If volume replacement with the study solutions alone (up to 30 ml/kg/hour) was not sufficient to maintain the necessary level of MAP norepinephrine was administered in increasing doses of 0.1 mg/hour as required until the target value was achieved.", "Mean, standard deviation of the mean (parametric values), median and range (non-parametric data) were calculated for each target parameter. The Shapiro-Wilk test was used to test for normality. Because most of the data were normally distributed, they are presented as mean ± standard deviation. 
Acid–base balance
On first sight, the acid–base parameters pH, BE and [HCO3-] remained remarkably constant during the time course. This observation corresponds to the results of previous investigators [4,8]. However, it was evident that ABIS had less influence on the ‘classic’ acid–base parameters than RL: pH, [HCO3-] and BE did not change from BL to T120 in the ABIS-group, whereas a small but significant reduction of pH and [HCO3-] was observed in the RL-group. Concerning the parameters of Stewart’s acid–base model, we noticed a relevant reduction in serum albumin concentrations, which is easily explained by the diluting effect of the albumin-free crystalloid solutions. This reduction was accompanied by a corresponding decrease in [A-] (Δ BL–T120: RL-group, 2.70 mmol/L; ABIS-group, 2.68 mmol/L). In addition, the SIDe declined as well. In contrast to [A-], a small difference in the magnitude of the decline was observed for SIDe (Δ BL–T120: RL-group, 4.7 mmol/L; ABIS-group, 2.6 mmol/L; P > 0.05). Although the difference in SIDe reduction was not significant between the groups, one can speculate that the pronounced decrease of effective SID in the RL-group (which should lead to an acidosis according to the Stewart approach) was only incompletely neutralized by the decrease in [A-] (which should lead to an alkalosis). This might explain the small but significant pH reduction observed in the RL-group (see Figure 1; a short numerical check follows below).
Figure 1: Reduction of [A-], SIDe and [HCO3-] during the time course (BL – T120). ABIS, acetate-based infusion solution; BL, baseline; RL, Ringer’s lactate; T120, measurement at 120 ± 10 minutes; SIDe, effective strong ion difference; *, significant intragroup changes over time (P < 0.05) between BL and T120.
This finding is surprising, because it does not correspond to Morgan’s theory of the ‘ideal SID’ [7]. According to Morgan et al., only infusion solutions with a SID of around 24 mmol/L do not influence blood pH. As a consequence, one would have expected RL (SID 28 mmol/L) rather than ABIS (SID 36.8 mmol/L) to guarantee pH stability, and ABIS to cause a mild alkalosis. Nevertheless, in the current study ABIS, whose SID nearly approximates normal plasma SIDe (39 ± 4 mmol/L), provided better pH stability [15]. We have no explanation for this discrepancy with Morgan’s theory. One can only speculate that infusion rate and/or the metabolic turnover of the metabolizable anions may be important in this situation.
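As a quick consistency check of this interpretation, using the group means reported above (our own back-of-the-envelope arithmetic, not a calculation from the study): since the effective strong ion difference is defined as $\mathrm{SID_e} = [\mathrm{A^-}] + [\mathrm{HCO_3^-}]$, the implied bicarbonate change is

$$\Delta[\mathrm{HCO_3^-}] = \Delta\mathrm{SID_e} - \Delta[\mathrm{A^-}] \approx \begin{cases} -4.7 + 2.7 = -2.0\ \mathrm{mmol/L} & (\mathrm{RL}) \\ -2.6 + 2.7 \approx 0\ \mathrm{mmol/L} & (\mathrm{ABIS}) \end{cases}$$

that is, an acidifying shift under RL and essentially none under ABIS, matching the observed behavior of pH and [HCO3-] in the two groups.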
In addition, according to Wooten’s multi-compartmental model, not only extracellular but also intracellular effects, as well as fluid redistribution between compartments, may have influenced acid–base chemistry [16].
One possible limitation of the current study is that some of the values (especially the SID) had to be calculated from directly measured parameters. With differences between and within groups being very small, a propagation of errors could have influenced the group comparisons. However, modern blood gas analyzers, as used in the current study, measure acid–base variables extremely precisely (pH bias < 0.005, PaCO2 bias < 1.2 mmHg) [17]. In addition, measurement errors concerning the components of SID should occur similarly in all patients; differences between groups should therefore not be affected.
In summary, there were no significant intergroup differences to observe. Compared to RL, ABIS showed less influence on the ‘classic’ acid–base parameters (pH, BE, [HCO3-]) and did not provoke a clinically relevant change in plasma SIDe. These results are in accordance with a recently published investigation on trauma patients requiring intensive care, with regard to the pH stability provided by acetated solutions [18]. The clinical relevance of the relatively small intragroup changes in this study may be a matter of discussion. Research on hyperchloremic acidosis has revealed that even small changes in pH can have a negative impact on various organ systems [1,2,4,19]. In addition, fluid requirements during major surgery may by far exceed the four liters of crystalloids administered during the current study. Thus, small (but significant) differences in the acid–base stability of the applied crystalloids may gain importance successively as increasing amounts of fluids are administered.
Hemodynamic values
Hemodialysis with acetate-containing solutions can cause severe hemodynamic instability because of increased nitric oxide synthesis and enhanced arrhythmias [20-22]. Even the small quantity of acetate present in various dialysis fluids (usually 35 mmol/L) exposes a patient’s blood to an acetate concentration 10 to 40 times the physiological level (50 to 100 μmol/L) [23-25]. Fournier et al. observed that, owing to the counter-current nature of hemodialysis, the concentration of plasma acetate increased from 0.02 mmol/L to 0.22 mmol/L over three hours of hemodialysis [25]. To date, it is unknown whether the administration of acetated infusion solutions can provoke substantial rises in plasma acetate levels as well. However, one can speculate that while (slowly) infusing acetate-containing solutions, a considerably greater proportion of the substance (compared to hemodialysis) will be metabolized, which should lead to relatively low serum concentrations.
This assumption is supported by two different observations. Firstly, no significant differences in hemodynamic values were observed between the study groups. Secondly, the SIG in the ABIS-group did not change significantly during the time course. According to Stewart’s approach, an increase in plasma acetate must lead to an increase in SIG [13]: acetate is not among the measured anions in the SIG formula given in the Methods, so any circulating acetate raises SIG one-for-one. This has been confirmed by Bruegger et al. in a canine hemorrhagic shock model, in which a 2.2 mmol/L rise in acetate plasma levels caused an identical (absolute) increase in SIG [26]. As a consequence, it is unlikely that relevant amounts of acetate accumulated in the circulation of the ABIS-group.
However, hemodynamic stability could only be maintained by administering increasing amounts of norepinephrine (P < 0.05 from BL to T120 in both groups). There are several possible explanations. Firstly, the range of blood loss was relatively broad (from 50 ml up to 1,500 ml). Despite fluid replacement that, according to the literature, should have been adequate, these losses were obviously not sufficiently counterbalanced in some patients, especially in the ABIS-group [27,28]. It seems probable that some patients developed a mild hypovolemia during the time course of the study. This assumption is supported by a decreasing CVP in the ABIS-group, in which blood loss tended to be more severe than in the RL-group. Secondly, the mild vasodilatory effect of the narcotic used (propofol) may have contributed to the norepinephrine requirements. Thus, considering only the current, preliminary data, both solutions ‘behaved like normal crystalloids’ without substance-specific hemodynamic effects. However, it will be interesting to investigate whether the small intergroup differences observed for CVP or MAP, which may nevertheless be of clinical importance, gain significance in a larger RCT.
Additional findings
Slightly elevated lactate levels were observed in the RL-group. These increases were only moderate (but significant) and hardly exceeded our institution’s range of normal values (0.5 to 1.5 mmol/L). In contrast to ABIS, [Mg++] decreased significantly in the RL-group. Because RL does not contain magnesium, this is best explained by a simple dilution effect. We consider these changes to be of minor clinical importance. However, for patients suffering from liver or metabolic diseases or from cardiac arrhythmias, ABIS might be the preferable infusion solution.
Feasibility of a future RCT
The current study showed that a larger RCT is feasible. The preliminary sample size estimation did not include the possibility of protocol violations or dropouts; therefore, at least 35 patients will have to be enrolled in each group. Given the consistency of the study population in terms of surgical co-factors and demographic data, a large RCT is likely to provide reliable and reproducible results, despite the small intergroup differences we observed in the primary and secondary variables of interest.
Conclusions
In the current preliminary comparison, Ringer’s lactate as well as ABIS proved suitable for fluid replacement during abdominal surgery. Hemodynamic stability remained unaffected by both solutions. Concerning the consistency of acid–base parameters, neither solution appeared clearly inferior. ABIS had a smaller effect on pH, BE, [HCO3-] and SIDe than Ringer’s lactate, but whether these differences are of clinical relevance has to be investigated in a larger RCT. With regard to SIDe and [A-], neither ABIS nor Ringer’s lactate seems ‘ideally balanced’, because both of these values decreased dose-dependently in both groups. A future RCT might provide important data for the on-going search for an ideal balanced solution.
Abbreviations
[]: serum concentration of the ion between the brackets; [A-]: serum concentration of weak acids; ABIS: acetate-based infusion solution; ABP: arterial blood pressure; BE: base excess; CVP: central venous pressure; HR: heart rate; MAP: mean arterial pressure; Pi: ionized [PO43-]; RCT: randomized controlled trial; RL: Ringer’s lactate; SID: strong ion difference; SIDe: effective strong ion difference; SIG: strong ion gap.
Competing interests
This study was performed using research funding provided by Fresenius Medical Care (Fresenius Medical Care Deutschland GmbH, Hafenstr. 9, 97424 Schweinfurt, Bayern, Germany). Fresenius Medical Care did not have any influence on study design or manuscript approval. Klaus Hofmann-Kiefer, Matthias Jacob and Markus Rehm have lectured for Baxter Deutschland GmbH (Unterschleißheim, Germany), Fresenius Kabi Deutschland GmbH (Bad Homburg, Germany), B. Braun Melsungen AG (Melsungen, Germany), Serumwerk Bernburg AG (Bernburg, Germany) and CSL Behring GmbH (Marburg, Germany). In addition, they received unrestricted research grants from Serumwerk Bernburg AG (Bernburg, Germany), CSL Behring GmbH (Marburg, Germany) and Fresenius Kabi Deutschland GmbH (Bad Homburg, Germany).
All authors declare they have no competing interests.
Authors’ contributions
KH-K made substantial contributions to the development of the study design, wrote the manuscript and performed the statistics. DC participated in data acquisition and analysis and critically revised the manuscript. TK and MP participated in data acquisition and patient coordination. MJ and PC critically revised and helped to draft the manuscript. MR participated substantially in the design and coordination of the study and gave final approval of the version to be published. All authors read and approved the final manuscript.
[ "In the fields of surgery and intensive care, hyperchloremic acidosis is a well-known problem in patients receiving large amounts of standard crystalloids, especially 0.9% sodium chloride solutions. A series of investigations has emphasized the disadvantageous effects of hyperchloremic acidosis on various organ systems, for example, hemodynamics, NO-production, renal blood circulation, urinary output or hemostasis [1-4]. Balanced crystalloids, whose composition prevents hyperchloremia, are increasingly accepted and likely to be ‘state of the art’ in the near future [3,5-7]. Balanced hydroxyethyl-starch solutions are also available today. All of these preparations are characterized by the presence of metabolizable organic anions such as lactate, acetate or malate and contain physiological electrolyte concentrations. Theoretically, balanced solutions which possess a strong ion difference (SID) of around 24 mmol/L after metabolization (for example, Ringer’s lactate) are supposed to have no influence on blood pH even after intravenous infusion of several liters [8]. However, balanced crystalloids containing lactate have some disadvantages as well [9]. Their metabolism depends on liver function and does require increased oxygen consumption. Acetated solutions do not display these shortcomings, but previous studies in patients undergoing hemodialysis have demonstrated that their administration may lead to hemodynamic instability and hypotension [10].\nDespite their future perspective, to date the routine use of balanced crystalloids is still infrequent in anesthesia. Large randomized, controlled clinical trials (RCT’s) investigating possible advantages as well as adverse effects of balanced crystalloids for routine use in the operating theater are sparse. The aim of the current study was to provide a preliminary comparison of two differently buffered balanced infusion solutions by reviewing data obtained from two established clinical protocols and,in addition, to test the feasibility of a future large RCT. The main target of interest was to investigate the influence of a solution containing a plasma-like SID (acetate buffered, SID = 36.8 mmol/L) and a solution with a comparably low SID (lactate buffered, SID = 28 mmol/L) on the variables of Stewart’s acid–base approach including the effective strong ion difference (SIDe). The impact of the acetate-based infusion solution on hemodynamic stability was also studied. In order to guide a future RCT, primary and secondary variables of interest were determined and sample size estimation was performed.", "In 2009 Ringer’s lactate (B. Braun Melsungen, Melsungen, Germany) was replaced by IonosterilTM (Fresenius Medical Care, Schweinfurt, Germany), an acetate-based infusion solution (ABIS), as a standard intraoperative infusion solution in our institution. In a before and after analysis we studied acid–base balance and hemodynamics in 24 women without obvious cardiac, pulmonary or renal diseases (classified as American Society of Anesthesiologists physical status I or II) who underwent major, open gynecologic cancer surgery. The study comprised two arms: the first consisted of patients who had received solely Ringer’s lactate (RL-group, n = 12) and the second of women who were treated only with the ABIS (ABIS-group, n = 12). The composition of both infusion solutions is listed in Table 1. 
Patients suffering from pre-existing acid–base disorders and patients who required blood transfusions, plasma products or colloids during the study period were not included in the investigation. A retrospective chart review was performed for patients in both arms. The mean intergroup difference of changes in SIDe between baseline values and the end of the study period was defined as the primary variable of interest. Changes in arterial plasma pH and noradrenalin requirements during general anesthesia were considered as secondary variables. The target parameters of the study were obtained from intraoperative routine blood samples, taken from arterial lines in patients requiring postoperative intensive care management (see Table 2). Because no additional blood sampling was necessary and the study was designed to evaluate two different versions of an established infusion regimen, the Ethics Committee of the University of Munich waived the need for patient’s informed consent for this pilot trial.\nComposition of study solutions\n[], serum concentration of the ion between the brackets; ABIS, acetate-based infusion solution.\nList of variables\nABIS, acetate-based infusion solution; ABP, arterial blood pressure; BE, base excess; BL, baseline; [], serum concentration of the variable between the brackets.\nGeneral anesthesia was induced with intravenous propofol, sufentanil, and cis-atracurium. After tracheal intubation, anesthesia was maintained with propofol (6 to 10 mg/kg/hour) and additional doses of sufentanil and cis-atracurium as appropriate. Mechanical ventilation was performed to maintain an adequate oxygenation and to preserve normocapnia. Intraoperative monitoring included end-tidal PaCO2, electrocardiography, central venous pressure, arterial blood pressure, pulseoximetry and esophageal temperature. During the operative period, the patient’s temperature was kept constant in any case. Urinary output as well as the blood loss during the time course of the investigation was noted. In order to obtain blood pH, base excess (BE), HCO3- and PaCO2, a standard blood gas analyzer (Bayer Rapidlab 1265, Bayer Healthcare LLC, East Walpole, MA, USA) was used. Serum electrolyte and lactate concentrations, as well as hemoglobin, hematocrit and albumin concentrations were assessed in the central laboratory of our institution. Depending on the results of the laboratory measurements the following variables were calculated:\n· [PO43-] ionized (Pi) (mmol/L): = [PO43-] (mg/dL) · 0.3229\n· Anion gap (mmol/L): = [Na+] – [Cl-] – [HCO3-]\n· Effective Strong Ion Difference (SIDe) (mmol/L): = [A-] + [HCO3- = [A-] + (0.0304 · PCO2 · 10(pH-6.1)) according to Figge [11,12]\n· Week acids serum-concentration [A- (mmol/L): = [Albumin · (0.123 · pH – 0.631)] + [Pi · (0.309 · pH - 0.469)] according to Figge [11,12]\n· Strong Ion Gap (SIG) (mmol/L): = [Na+ + [K+ + 2[Ca++ + 2[Mg++ – [Cl-–[Lac-–[A-–[HCO3- according to Stewart [13]\n Hemodynamics In accordance with our institution’s clinical practice a mean arterial pressure (MAP) of at least 80 mmHg was regarded as necessary during general anesthesia. If volume replacement with the study solutions alone (up to 30 ml/kg/hour) was not sufficient to maintain the necessary level of MAP norepinephrine was administered in increasing doses of 0.1 mg/hour as required until the target value was achieved.\nIn accordance with our institution’s clinical practice a mean arterial pressure (MAP) of at least 80 mmHg was regarded as necessary during general anesthesia. 
If volume replacement with the study solutions alone (up to 30 ml/kg/hour) was not sufficient to maintain the necessary level of MAP norepinephrine was administered in increasing doses of 0.1 mg/hour as required until the target value was achieved.\n Statistical analysis Mean, standard deviation of the mean (parametric values), median and range (non-parametric data) were calculated for each target parameter. The Shapiro-Wilk test was used to test for normality. Because most of the data were normally distributed, they are presented as mean ± standard deviation. We used a repeated measurement analysis of variances (RM-ANOVA) followed by a Student-Newman-Keuls test (with data normally distributed) or a Friedman test (with data not normally distributed) to describe changes of measurement parameters within a group during the course of time (three points of measurement). In order to compare differences between the groups an ANOVA was followed by a Student-Newmann-Keuls test (see above). If data were not normally distributed a one-way ANOVA on ranks (Kruskal-Wallis test) was followed by Dunn’s test. For all determinations a type I error protection of P <0.05 was considered significant. Statistical analysis was performed using Sigma Stat Software Version 3.1 (RockWare Inc. Golden, Colorado, USA) and Microsoft Excel Version 2003 (Microsoft Deutschland GmbH, Unterschleißheim, Germany). All electrolyte concentrations are expressed as mmolLl.\nMean, standard deviation of the mean (parametric values), median and range (non-parametric data) were calculated for each target parameter. The Shapiro-Wilk test was used to test for normality. Because most of the data were normally distributed, they are presented as mean ± standard deviation. We used a repeated measurement analysis of variances (RM-ANOVA) followed by a Student-Newman-Keuls test (with data normally distributed) or a Friedman test (with data not normally distributed) to describe changes of measurement parameters within a group during the course of time (three points of measurement). In order to compare differences between the groups an ANOVA was followed by a Student-Newmann-Keuls test (see above). If data were not normally distributed a one-way ANOVA on ranks (Kruskal-Wallis test) was followed by Dunn’s test. For all determinations a type I error protection of P <0.05 was considered significant. Statistical analysis was performed using Sigma Stat Software Version 3.1 (RockWare Inc. Golden, Colorado, USA) and Microsoft Excel Version 2003 (Microsoft Deutschland GmbH, Unterschleißheim, Germany). All electrolyte concentrations are expressed as mmolLl.", "In accordance with our institution’s clinical practice a mean arterial pressure (MAP) of at least 80 mmHg was regarded as necessary during general anesthesia. If volume replacement with the study solutions alone (up to 30 ml/kg/hour) was not sufficient to maintain the necessary level of MAP norepinephrine was administered in increasing doses of 0.1 mg/hour as required until the target value was achieved.", "Mean, standard deviation of the mean (parametric values), median and range (non-parametric data) were calculated for each target parameter. The Shapiro-Wilk test was used to test for normality. Because most of the data were normally distributed, they are presented as mean ± standard deviation. 
We used a repeated measurement analysis of variances (RM-ANOVA) followed by a Student-Newman-Keuls test (with data normally distributed) or a Friedman test (with data not normally distributed) to describe changes of measurement parameters within a group during the course of time (three points of measurement). In order to compare differences between the groups an ANOVA was followed by a Student-Newmann-Keuls test (see above). If data were not normally distributed a one-way ANOVA on ranks (Kruskal-Wallis test) was followed by Dunn’s test. For all determinations a type I error protection of P <0.05 was considered significant. Statistical analysis was performed using Sigma Stat Software Version 3.1 (RockWare Inc. Golden, Colorado, USA) and Microsoft Excel Version 2003 (Microsoft Deutschland GmbH, Unterschleißheim, Germany). All electrolyte concentrations are expressed as mmolLl.", "We did not notice significant differences concerning patients’ demographic data (mean age: RL-group, 64 ± 8; ABIS-group, 61 ± 10 years; mean body mass index: RL-group, 24.8 ± 2.6; ABIS-group, 24.5 ± 3.5; mean body weight: RL-group, 68.6 ± 6.0 kg; ABIS-group, 66.4 ± 14.0 kg) (P > 0.05 for all comparisons). All women underwent surgery for ovarian cancer including open radical hysterectomy, adnexal surgery and para-aortal as well as pelvic lymph node dissection. Twenty-four patients, who a priori fulfilled the inclusion criteria, were enrolled in this study (12 per group). Protocol violations were not observed and subsequently there was no need to retrospectively exclude patients from the trial. The mean intergroup difference of changes in SIDe between BL and T120 proved to be 2.08 ± 2.9 mmol/L, indicating a strong effect (d = 0.91) according to Cohen [14]. Thus, assuming an effect size of 2 mmol/L, a future RCT should include at least 35 patients in each study group, given a type II error protection of β > 0.8.\n Acid–base balance and Stewart approach Parameters contributing to the ‘classic’ acid–base balance (see Table 3) did not differ significantly between groups at any time point of measurement. Intergroup comparisons concerning parameters relevant for Stewart’s acid–base approach, especially the SIDe, showed no significant differences, either. However, values noticed for the RL-group showed small but significant intragroup changes during the time course, especially concerning pH and [HCO3-]. In the ABIS-group, these values did not change. In addition, relevant intragroup reductions were observed in SIDe and [A-] for RL and ABIS. Albumin serum concentrations were diminished to approximately 80% of their basic values in both groups.\nAcid–base parameters\nAll values except paCO2, albumin and pH are given in mmol/L. ABIS, acetate-based infusion solution; BE, base excess; BL, baseline measurement; RL, Ringer’s lactate; SD, standard deviation; SIDe, effective strong ion difference; SIG, strong ion gap; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; І Δ120 І, absolute value of the difference between T120 and baseline measurements. *, Significant intragroup changes over time (P < 0.05) between BL and T120; °, Significant intragroup changes over time (P < 0.05) between BL and T60.\nParameters contributing to the ‘classic’ acid–base balance (see Table 3) did not differ significantly between groups at any time point of measurement. 
Intergroup comparisons concerning parameters relevant for Stewart’s acid–base approach, especially the SIDe, showed no significant differences, either. However, values noticed for the RL-group showed small but significant intragroup changes during the time course, especially concerning pH and [HCO3-]. In the ABIS-group, these values did not change. In addition, relevant intragroup reductions were observed in SIDe and [A-] for RL and ABIS. Albumin serum concentrations were diminished to approximately 80% of their basic values in both groups.\nAcid–base parameters\nAll values except paCO2, albumin and pH are given in mmol/L. ABIS, acetate-based infusion solution; BE, base excess; BL, baseline measurement; RL, Ringer’s lactate; SD, standard deviation; SIDe, effective strong ion difference; SIG, strong ion gap; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; І Δ120 І, absolute value of the difference between T120 and baseline measurements. *, Significant intragroup changes over time (P < 0.05) between BL and T120; °, Significant intragroup changes over time (P < 0.05) between BL and T60.\n Hemodynamics The overall mean blood loss during the time course was 763 ± 427 ml; it was 655 ± 460 ml in the RL-group and 870 ± 378 ml in the ABIS-group (P > 0.05). The total amount of the infused study solutions was 4,066 ± 308 ml for RL and 4,041 ± 572 ml for the ABIS (P > 0.05). Norepinephrine requirements increased constantly during the study period, but differences between groups did not reach significance. Nevertheless, a slight but significant reduction in CVP could be observed in the RL-group. There were no differences in urine output. Further parameters listed under ‘Hemodynamics’ in Table 4 were not significantly different between groups.\nHemodynamics\nABIS, acetate-based infusion solution; BL, baseline measurement; CVP, central venous pressure; HR, heart rate; MAP, mean arterial pressure; RL, Ringer’s lactate; SD, standard deviation; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; І Δ120 І, absolute value of the difference between T120 and baseline measurements; *, significant intragroup changes over time (P < 0.05) between BL and T120.\nThe overall mean blood loss during the time course was 763 ± 427 ml; it was 655 ± 460 ml in the RL-group and 870 ± 378 ml in the ABIS-group (P > 0.05). The total amount of the infused study solutions was 4,066 ± 308 ml for RL and 4,041 ± 572 ml for the ABIS (P > 0.05). Norepinephrine requirements increased constantly during the study period, but differences between groups did not reach significance. Nevertheless, a slight but significant reduction in CVP could be observed in the RL-group. There were no differences in urine output. Further parameters listed under ‘Hemodynamics’ in Table 4 were not significantly different between groups.\nHemodynamics\nABIS, acetate-based infusion solution; BL, baseline measurement; CVP, central venous pressure; HR, heart rate; MAP, mean arterial pressure; RL, Ringer’s lactate; SD, standard deviation; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; І Δ120 І, absolute value of the difference between T120 and baseline measurements; *, significant intragroup changes over time (P < 0.05) between BL and T120.\n Electrolytes Serum electrolytes were measured mainly to calculate the parameters of the Stewart approach. 
Except for [lactate-] and [Mg++] no significant differences between the study groups were observed (see Table 5). Significantly higher levels of [Mg++] could be found in the ABIS-group at T60 and T120. Serum lactate concentrations were significantly elevated for RL compared to ABIS at T60 and T120 (P < 0.05 for all comparisons). In addition, a significant increase in [Cl-] could be noticed during the time course in the RL-group (Table 5), but not in the ABIS-group.\nSerum electrolyte concentrations\nAll values except [PO43-] are given in mmol/L. ABIS, acetate-based infusion solution; BL, baseline measurement; RL, Ringer’s lactate; SD, standard deviation; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; І Δ120 І, absolute value of the difference between T120 and baseline measurements; *, significant intragroup changes over time (P <0.05) between BL and T120; #, significant differences between groups at the referring time points of measurement (P < 0.05).\nSerum electrolytes were measured mainly to calculate the parameters of the Stewart approach. Except for [lactate-] and [Mg++] no significant differences between the study groups were observed (see Table 5). Significantly higher levels of [Mg++] could be found in the ABIS-group at T60 and T120. Serum lactate concentrations were significantly elevated for RL compared to ABIS at T60 and T120 (P < 0.05 for all comparisons). In addition, a significant increase in [Cl-] could be noticed during the time course in the RL-group (Table 5), but not in the ABIS-group.\nSerum electrolyte concentrations\nAll values except [PO43-] are given in mmol/L. ABIS, acetate-based infusion solution; BL, baseline measurement; RL, Ringer’s lactate; SD, standard deviation; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; І Δ120 І, absolute value of the difference between T120 and baseline measurements; *, significant intragroup changes over time (P <0.05) between BL and T120; #, significant differences between groups at the referring time points of measurement (P < 0.05).", "Parameters contributing to the ‘classic’ acid–base balance (see Table 3) did not differ significantly between groups at any time point of measurement. Intergroup comparisons concerning parameters relevant for Stewart’s acid–base approach, especially the SIDe, showed no significant differences, either. However, values noticed for the RL-group showed small but significant intragroup changes during the time course, especially concerning pH and [HCO3-]. In the ABIS-group, these values did not change. In addition, relevant intragroup reductions were observed in SIDe and [A-] for RL and ABIS. Albumin serum concentrations were diminished to approximately 80% of their basic values in both groups.\nAcid–base parameters\nAll values except paCO2, albumin and pH are given in mmol/L. ABIS, acetate-based infusion solution; BE, base excess; BL, baseline measurement; RL, Ringer’s lactate; SD, standard deviation; SIDe, effective strong ion difference; SIG, strong ion gap; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; І Δ120 І, absolute value of the difference between T120 and baseline measurements. 
*, Significant intragroup changes over time (P < 0.05) between BL and T120; °, Significant intragroup changes over time (P < 0.05) between BL and T60.", "The overall mean blood loss during the time course was 763 ± 427 ml; it was 655 ± 460 ml in the RL-group and 870 ± 378 ml in the ABIS-group (P > 0.05). The total amount of the infused study solutions was 4,066 ± 308 ml for RL and 4,041 ± 572 ml for the ABIS (P > 0.05). Norepinephrine requirements increased constantly during the study period, but differences between groups did not reach significance. Nevertheless, a slight but significant reduction in CVP could be observed in the RL-group. There were no differences in urine output. Further parameters listed under ‘Hemodynamics’ in Table 4 were not significantly different between groups.\nHemodynamics\nABIS, acetate-based infusion solution; BL, baseline measurement; CVP, central venous pressure; HR, heart rate; MAP, mean arterial pressure; RL, Ringer’s lactate; SD, standard deviation; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; І Δ120 І, absolute value of the difference between T120 and baseline measurements; *, significant intragroup changes over time (P < 0.05) between BL and T120.", "Serum electrolytes were measured mainly to calculate the parameters of the Stewart approach. Except for [lactate-] and [Mg++] no significant differences between the study groups were observed (see Table 5). Significantly higher levels of [Mg++] could be found in the ABIS-group at T60 and T120. Serum lactate concentrations were significantly elevated for RL compared to ABIS at T60 and T120 (P < 0.05 for all comparisons). In addition, a significant increase in [Cl-] could be noticed during the time course in the RL-group (Table 5), but not in the ABIS-group.\nSerum electrolyte concentrations\nAll values except [PO43-] are given in mmol/L. ABIS, acetate-based infusion solution; BL, baseline measurement; RL, Ringer’s lactate; SD, standard deviation; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; І Δ120 І, absolute value of the difference between T120 and baseline measurements; *, significant intragroup changes over time (P <0.05) between BL and T120; #, significant differences between groups at the referring time points of measurement (P < 0.05).", "The current investigation was designed to compare two currently available balanced infusion solutions in a clinical setting. One solution contained lactate, the other acetate as a metabolizable anion. Main areas of interest were the influence of these solutions on acid–base balance and hemodynamic stability.\n Acid–base balance On first sight, the acid–base parameters pH, BE and [HCO3- remained remarkably constant during the time course. This observation corresponds to the results of previous investigators [4,8]. However, it was evident that the ABIS had less influence on the ‘classic’ acid–base parameters than RL. pH, [HCO3- and BE did not change from BL to T120 in the ABIS-group, whereas a small but significant reduction of pH values and [HCO3- could be observed in the RL-group. Concerning the parameters of Stewart’s acid–base model, we noticed a relevant reduction in serum albumin concentrations, which is easily explained by the diluting effects of the albumin-free crystalloid solutions. This reduction was accompanied by a corresponding decrease in [A- (Δ BL-T120: RL-group, 2.70 mmol/L; ABIS-group, 2.68 mmol/L). In addition, the SIDe declined as well. 
In contrast to [A- a small difference in the quantity of the decline could be observed with SIDe (Δ BL-T120: RL-group, 4.7 mmol/l; ABIS-group, 2.6 mmol/L; P > 0.05). Although the difference in SIDe-reduction was not significant between groups, one can speculate that the pronounced decrease of effective SID in the RL-group (which should lead to an acidosis according to the Stewart approach) was only incompletely neutralized by the decrease in [A- (which should lead to an alkalosis). This might explain the small but significant pH reduction that could be observed in the RL-group (see Figure 1).\nReduction of [A-], SIDeand [HCO3-] during the time course (BL – T120). ABIS, acetate-based infusion solution; BL, baseline; RL, Ringer’s lactate; T120, measurement at 120 ± minutes; SIDe, effective strong ion difference; *, significant intragroup changes over time (P <0.05) between BL and T120.\nThis finding is surprising, because it does not correspond to Morgan’s theory of the ‘ideal SID’ [7]. According to Morgan et al. only infusion solutions with a SID of around 24 mmol/L do not influence blood pH. As a consequence one should have expected RL (SID 28 mmol/L) rather than ABIS (SID 36.8 mmol/L) to guarantee pH stability and ABIS to cause a mild alkalosis. Nevertheless, in the current study ABIS, whose SID nearly approximates normal plasma SIDe (39 ± 4 mmol/L), provided better pH stability [15]. We have no explanation for the discrepancy concerning Morgan’s theory. One can only speculate that perhaps infusion rate and/or metabolic turnover of the metabolizable anions may be important in this situation. In addition, according to Wooten’s multi-compartmental model, not only extracellular, but also intracellular effects, and fluid redistribution between compartments may have influenced acid–base chemistry [16].\nOne possible limitation of the current study was that some of the values (especially the SID) had to be calculated from directly measured parameters. With differences between and within groups being very small, a propagation of errors could have influenced group comparisons. However, modern blood gas analyzers, as used in the current study, are able to measure acid–base variables extremely precisely (pH bias <0.005, PaCO2 bias < 1.2 mmHg) [17]. In addition, measurement errors concerning the components of SID should occur similarly in all patients. Therefore, differences between groups should not be affected.\nIn summary, there were no significant intergroup differences to observe. Compared to RL, ABIS showed less influence on the ‘classic’ acid–base parameters (pH, BE, [HCO3-) and the solution did not provoke a clinically relevant change in plasma SIDe. These results are in accordance with a recently published investigation on trauma patients requiring intensive care treatment with regard to pH stability provided by acetated solutions [18]. The clinical relevance of the relatively small intragroup changes in this study may be a matter of discussion. Scientific research on hyperchloremic acidosis has revealed that even small changes in pH can have a negative impact on various organ systems [1,2,4,19]. In addition, fluid requirements during major surgery may exceed by far the amount of four liters of crystalloids administered during the current study. 
This finding is surprising, because it does not correspond to Morgan’s theory of the ‘ideal SID’ [7]. According to Morgan et al., only infusion solutions with a SID of around 24 mmol/L leave blood pH unchanged. As a consequence, one would have expected RL (SID 28 mmol/L) rather than ABIS (SID 36.8 mmol/L) to guarantee pH stability, and ABIS to cause a mild alkalosis. Nevertheless, in the current study ABIS, whose SID nearly approximates normal plasma SIDe (39 ± 4 mmol/L), provided better pH stability [15]. We have no explanation for this discrepancy with Morgan’s theory. One can only speculate that the infusion rate and/or the metabolic turnover of the metabolizable anions may be important in this situation. In addition, according to Wooten’s multi-compartmental model, not only extracellular but also intracellular effects, as well as fluid redistribution between compartments, may have influenced acid–base chemistry [16].
One possible limitation of the current study is that some of the values (especially the SID) had to be calculated from directly measured parameters. With differences between and within groups being very small, error propagation could have influenced group comparisons. However, modern blood gas analyzers, as used in the current study, measure acid–base variables extremely precisely (pH bias < 0.005, PaCO2 bias < 1.2 mmHg) [17]. In addition, measurement errors concerning the components of SID should occur similarly in all patients, so differences between groups should not be affected.
In summary, there were no significant intergroup differences to observe. Compared to RL, ABIS showed less influence on the ‘classic’ acid–base parameters (pH, BE, [HCO3-]) and did not provoke a clinically relevant change in plasma SIDe. These results are in accordance with a recently published investigation of acetated solutions in trauma patients requiring intensive care, which likewise reported good pH stability [18]. The clinical relevance of the relatively small intragroup changes in this study may be a matter of discussion. Research on hyperchloremic acidosis has revealed that even small changes in pH can have a negative impact on various organ systems [1,2,4,19]. In addition, fluid requirements during major surgery may far exceed the four liters of crystalloids administered during the current study. Thus, small (but significant) differences concerning the acid–base stability of the applied crystalloids may gain importance as increasing amounts of fluids are administered.
Hemodynamic values
Hemodialysis with acetate-containing solutions can cause severe hemodynamic instability because of increased nitric oxide synthesis and enhanced arrhythmias [20-22]. Even the small quantity of acetate present in various dialysis fluids (usually 35 mmol/L) exposes a patient’s blood to an acetate concentration 10 to 40 times the physiological level (50 to 100 μmol/L) [23-25]. Fournier et al. observed that, owing to the counter-current nature of hemodialysis, the concentration of plasma acetate increased from 0.02 mmol/L to 0.22 mmol/L over three hours of hemodialysis [25]. To date, it is unknown whether the administration of acetated infusion solutions can provoke substantial rises in plasma acetate levels as well. However, one can speculate that while (slowly) infusing acetate-containing solutions, a considerably greater proportion of the substance (compared to hemodialysis) will be metabolized. This should lead to relatively low serum concentrations.
This assumption is supported by two different observations: firstly, no significant differences could be observed in hemodynamic values between the study groups; secondly, the SIG in the ABIS-group did not change significantly during the time course. According to Stewart’s approach, an increase in plasma acetate must lead to an increase in SIG [13]. This hypothesis has been confirmed by Bruegger et al. in a canine hemorrhagic shock model: a 2.2 mmol/L rise in plasma acetate levels caused an identical (absolute) increase in SIG [26]. As a consequence, it is unlikely that relevant amounts of acetate accumulated in the circulation in the ABIS-group. However, hemodynamic stability could only be maintained by administering increasing amounts of norepinephrine (P < 0.05 from BL to T120 in both groups). There are several possible explanations: firstly, the range of blood loss was relatively broad (from 50 ml up to 1,500 ml). Despite fluid replacement, which, according to the literature, should have been adequate, these fluid losses were obviously not sufficiently counterbalanced in some of the patients, especially in the ABIS-group [27,28]. It seems probable that some patients developed a mild hypovolemia during the time course of the study. This assumption is supported by a decreasing CVP in the ABIS-group, in which blood loss tended to be more severe than in the RL-group. Secondly, one has to take into account the mild vasodilatory effect of the hypnotic agent used (propofol), which may have contributed to norepinephrine requirements.
Thus, considering only the current, preliminary data, it seems that both solutions ‘behaved like normal crystalloids’, without substance-specific hemodynamic effects. However, it will be interesting to investigate whether the small intergroup differences observed for CVP and MAP, which are nevertheless of clinical importance, gain significance in a larger RCT.
Additional findings
Slightly elevated lactate levels were observed in the RL-group. These increases were only moderate (but significant) and hardly exceeded our institution’s range of normal values (0.5 to 1.5 mmol/L).
In contrast to ABIS, [Mg++] significantly decreased in the RL-group. Because RL does not contain magnesium, this is best explained by a simple dilution effect. We consider these changes to be of minor clinical importance. However, for patients suffering from liver or metabolic diseases or from cardiac arrhythmias, ABIS might be the preferable infusion solution.
Feasibility of a future RCT
The current study showed that a larger RCT is feasible. The preliminary sample size estimation did not include the possibility of protocol violations or dropouts; therefore, at least 35 patients will have to be enrolled in each group. With regard to the consistency of the study population in terms of co-factors of surgery and demographic data, it is likely that a large RCT will provide reliable and reproducible results, despite the small intergroup differences in the primary and secondary variables of interest we observed.
Conclusions
In the current preliminary comparison, Ringer’s lactate as well as ABIS proved to be suitable for fluid replacement during abdominal surgery.
Hemodynamic stability remained unaffected by both solutions. Concerning the consistency of acid–base parameters, neither solution seemed clearly inferior either. ABIS had a smaller effect on pH, BE, [HCO3-] and SIDe than Ringer’s lactate, but whether these differences turn out to be of clinical relevance has to be investigated in a larger RCT. With regard to SIDe and [A-], neither ABIS nor Ringer’s lactate seems to be ‘ideally balanced’, because both of these values decreased dose-dependently in both groups. A future RCT might provide important data for the ongoing search for an ideal balanced solution.
Abbreviations
[]: serum concentration of the ion between the brackets; [A-]: serum concentration of weak acids; ABIS: acetate-based infusion solution; ABP: arterial blood pressure; BE: base excess; CVP: central venous pressure; HR: heart rate; MAP: mean arterial pressure; Pi: [PO43-] ionized; RCT: randomized controlled trial; RL: Ringer’s lactate; SID: strong ion difference; SIDe: effective strong ion difference; SIG: strong ion gap.
Competing interests
This study was performed using research funding provided by Fresenius Medical Care (Fresenius Medical Care Deutschland GmbH, Hafenstr. 9, 97424 Schweinfurt, Bayern, Germany). Fresenius Medical Care did not have any influence on study design or manuscript approval. Klaus Hofmann-Kiefer, Matthias Jacob and Markus Rehm have lectured for Baxter Deutschland GmbH (Unterschleißheim, Germany), Fresenius Kabi Deutschland GmbH (Bad Homburg, Germany), B. Braun Melsungen AG (Melsungen, Germany), Serumwerk Bernburg AG (Bernburg, Germany) and CSL Behring GmbH (Marburg, Germany). In addition, they received unrestricted research grants from Serumwerk Bernburg AG (Bernburg, Germany), CSL Behring GmbH (Marburg, Germany) and Fresenius Kabi Deutschland GmbH (Bad Homburg, Germany). The remaining authors declare that they have no competing interests.
Authors’ contributions
KH-K made substantial contributions to the development of the study design, wrote the manuscript and performed the statistics. DC participated in data acquisition and analysis and critically revised the manuscript. TK and MP participated in data acquisition and patient coordination. MJ and PC critically revised and helped to draft the manuscript. MR participated substantially in the design and coordination of the study and gave final approval of the version to be published. All authors read and approved the final manuscript.
[ null, "methods", null, null, "results", null, null, null, "discussion", null, null, null, null, "conclusions", null, null, null ]
[ "Acetate", "Lactate", "Balanced infusion solution", "Acid–base balance", "Hemodynamic stability" ]
Background: In the fields of surgery and intensive care, hyperchloremic acidosis is a well-known problem in patients receiving large amounts of standard crystalloids, especially 0.9% sodium chloride solutions. A series of investigations has emphasized the disadvantageous effects of hyperchloremic acidosis on various organ systems, for example, hemodynamics, NO production, renal blood circulation, urinary output or hemostasis [1-4]. Balanced crystalloids, whose composition prevents hyperchloremia, are increasingly accepted and likely to be ‘state of the art’ in the near future [3,5-7]. Balanced hydroxyethyl-starch solutions are also available today. All of these preparations are characterized by the presence of metabolizable organic anions such as lactate, acetate or malate and contain physiological electrolyte concentrations. Theoretically, balanced solutions which possess a strong ion difference (SID) of around 24 mmol/L after metabolization (for example, Ringer’s lactate) are supposed to have no influence on blood pH, even after intravenous infusion of several liters [8]. However, balanced crystalloids containing lactate have some disadvantages as well [9]. Their metabolism depends on liver function and requires increased oxygen consumption. Acetated solutions do not display these shortcomings, but previous studies in patients undergoing hemodialysis have demonstrated that their administration may lead to hemodynamic instability and hypotension [10]. Despite their future perspective, to date the routine use of balanced crystalloids is still infrequent in anesthesia. Large randomized controlled clinical trials (RCTs) investigating possible advantages as well as adverse effects of balanced crystalloids for routine use in the operating theater are sparse. The aim of the current study was to provide a preliminary comparison of two differently buffered balanced infusion solutions by reviewing data obtained from two established clinical protocols and, in addition, to test the feasibility of a future large RCT. The main target of interest was to investigate the influence of a solution containing a plasma-like SID (acetate-buffered, SID = 36.8 mmol/L) and a solution with a comparably low SID (lactate-buffered, SID = 28 mmol/L) on the variables of Stewart’s acid–base approach, including the effective strong ion difference (SIDe). The impact of the acetate-based infusion solution on hemodynamic stability was also studied. In order to guide a future RCT, primary and secondary variables of interest were determined and a sample size estimation was performed.
Methods: In 2009, Ringer’s lactate (B. Braun Melsungen, Melsungen, Germany) was replaced by Ionosteril™ (Fresenius Medical Care, Schweinfurt, Germany), an acetate-based infusion solution (ABIS), as the standard intraoperative infusion solution in our institution. In a before-and-after analysis, we studied acid–base balance and hemodynamics in 24 women without obvious cardiac, pulmonary or renal diseases (classified as American Society of Anesthesiologists physical status I or II) who underwent major, open gynecologic cancer surgery. The study comprised two arms: the first consisted of patients who had received solely Ringer’s lactate (RL-group, n = 12) and the second of women who were treated only with the ABIS (ABIS-group, n = 12). The composition of both infusion solutions is listed in Table 1.
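As a plausibility check on the SID values used throughout, and not a calculation from the paper, the SID of an infusate after its organic anion is metabolized reduces to the charge balance of the remaining strong ions. Assuming the commonly published composition of Ringer’s lactate (Na+ 130, K+ 4, Ca2+ about 1.4, Cl- 109, lactate 28 mmol/L):

\mathrm{SID}_{\mathrm{RL}} = [\mathrm{Na^+}] + [\mathrm{K^+}] + 2[\mathrm{Ca^{2+}}] - [\mathrm{Cl^-}] = 130 + 4 + 2.8 - 109 \approx 28\ \mathrm{mmol/L}

This matches the value of 28 mmol/L quoted for RL; the stated SID of 36.8 mmol/L for ABIS arises analogously from its Table 1 composition.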
Patients suffering from pre-existing acid–base disorders and patients who required blood transfusions, plasma products or colloids during the study period were not included in the investigation. A retrospective chart review was performed for patients in both arms. The mean intergroup difference of changes in SIDe between baseline values and the end of the study period was defined as the primary variable of interest. Changes in arterial plasma pH and norepinephrine requirements during general anesthesia were considered secondary variables. The target parameters of the study were obtained from intraoperative routine blood samples, taken from arterial lines in patients requiring postoperative intensive care management (see Table 2). Because no additional blood sampling was necessary and the study was designed to evaluate two different versions of an established infusion regimen, the Ethics Committee of the University of Munich waived the need for patients’ informed consent for this pilot trial.
Table 1. Composition of study solutions. [], serum concentration of the ion between the brackets; ABIS, acetate-based infusion solution.
Table 2. List of variables. ABIS, acetate-based infusion solution; ABP, arterial blood pressure; BE, base excess; BL, baseline; [], serum concentration of the variable between the brackets.
General anesthesia was induced with intravenous propofol, sufentanil and cis-atracurium. After tracheal intubation, anesthesia was maintained with propofol (6 to 10 mg/kg/hour) and additional doses of sufentanil and cis-atracurium as appropriate. Mechanical ventilation was performed to maintain adequate oxygenation and to preserve normocapnia. Intraoperative monitoring included end-tidal PaCO2, electrocardiography, central venous pressure, arterial blood pressure, pulse oximetry and esophageal temperature. During the operative period, the patient’s temperature was kept constant in any case. Urinary output as well as blood loss during the time course of the investigation was noted. In order to obtain blood pH, base excess (BE), HCO3- and PaCO2, a standard blood gas analyzer (Bayer Rapidlab 1265, Bayer Healthcare LLC, East Walpole, MA, USA) was used. Serum electrolyte and lactate concentrations, as well as hemoglobin, hematocrit and albumin concentrations, were assessed in the central laboratory of our institution. Depending on the results of the laboratory measurements, the following variables were calculated:
· [PO43-] ionized (Pi) (mmol/L) = [PO43-] (mg/dL) × 0.3229
· Anion gap (mmol/L) = [Na+] − [Cl-] − [HCO3-]
· Effective strong ion difference (SIDe) (mmol/L) = [A-] + [HCO3-] = [A-] + (0.0304 × PaCO2 × 10^(pH − 6.1)), according to Figge [11,12]
· Weak acid serum concentration [A-] (mmol/L) = [albumin × (0.123 × pH − 0.631)] + [Pi × (0.309 × pH − 0.469)], according to Figge [11,12]
· Strong ion gap (SIG) (mmol/L) = [Na+] + [K+] + 2[Ca++] + 2[Mg++] − [Cl-] − [Lac-] − [A-] − [HCO3-], according to Stewart [13]
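To make the unit conventions explicit, here is a minimal Python sketch of these calculations (not the authors’ code; the function and variable names and the example values are our own, and albumin is taken in g/L, as in Figge’s original formulation, since the paper does not state the unit):

def stewart_parameters(na, k, ca, mg, cl, lactate, ph, paco2, albumin_g_l, po4_mg_dl):
    # Electrolytes in mmol/L, paco2 in mmHg, albumin in g/L,
    # phosphate in mg/dL; all returned values are in mmol/L.
    pi = po4_mg_dl * 0.3229                            # ionized phosphate (Pi)
    hco3 = 0.0304 * paco2 * 10 ** (ph - 6.1)           # bicarbonate term
    a_minus = (albumin_g_l * (0.123 * ph - 0.631)
               + pi * (0.309 * ph - 0.469))            # weak acids [A-], Figge
    side = a_minus + hco3                              # effective SID
    anion_gap = na - cl - hco3
    sig = (na + k + 2 * ca + 2 * mg
           - cl - lactate - a_minus - hco3)            # strong ion gap, Stewart
    return {"Pi": pi, "anion gap": anion_gap, "[A-]": a_minus,
            "SIDe": side, "SIG": sig}

# Example with invented but physiologically plausible values:
print(stewart_parameters(na=140, k=4.0, ca=1.2, mg=0.5, cl=104,
                         lactate=1.0, ph=7.40, paco2=40.0,
                         albumin_g_l=40.0, po4_mg_dl=3.5))

With these inputs the function returns SIDe of roughly 37.5 mmol/L and SIG of roughly 4.9 mmol/L, inside the normal plasma ranges quoted in the Discussion.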
Hemodynamics
In accordance with our institution’s clinical practice, a mean arterial pressure (MAP) of at least 80 mmHg was regarded as necessary during general anesthesia. If volume replacement with the study solutions alone (up to 30 ml/kg/hour) was not sufficient to maintain the necessary level of MAP, norepinephrine was administered in increasing doses of 0.1 mg/hour as required until the target value was achieved.
Statistical analysis
Mean and standard deviation (parametric data) or median and range (non-parametric data) were calculated for each target parameter. The Shapiro-Wilk test was used to test for normality. Because most of the data were normally distributed, they are presented as mean ± standard deviation. We used a repeated-measures analysis of variance (RM-ANOVA) followed by a Student-Newman-Keuls test (for normally distributed data) or a Friedman test (for non-normally distributed data) to describe changes of measurement parameters within a group over the course of time (three points of measurement). To compare differences between the groups, an ANOVA was followed by a Student-Newman-Keuls test (see above). If data were not normally distributed, a one-way ANOVA on ranks (Kruskal-Wallis test) was followed by Dunn’s test. For all determinations, a type I error level of P < 0.05 was considered significant. Statistical analysis was performed using SigmaStat Software Version 3.1 (RockWare Inc., Golden, Colorado, USA) and Microsoft Excel Version 2003 (Microsoft Deutschland GmbH, Unterschleißheim, Germany). All electrolyte concentrations are expressed as mmol/L.
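For readers who want to reproduce the testing logic outside SigmaStat, the following Python sketch mirrors the decision path described above (normality check, then RM-ANOVA or Friedman test). The tool choice is ours, not the authors’, and the data are invented placeholders:

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
# three repeated measurements (BL, T60, T120) for 12 hypothetical patients
bl, t60, t120 = rng.normal(39, 3, 12), rng.normal(38, 3, 12), rng.normal(36, 3, 12)

# Shapiro-Wilk normality check on each time point
normal = all(stats.shapiro(x).pvalue > 0.05 for x in (bl, t60, t120))

if normal:
    # parametric route: repeated-measures ANOVA
    df = pd.DataFrame({
        "subject": np.tile(np.arange(12), 3),
        "time": np.repeat(["BL", "T60", "T120"], 12),
        "side": np.concatenate([bl, t60, t120]),
    })
    print(AnovaRM(df, depvar="side", subject="subject", within=["time"]).fit().summary())
else:
    # non-parametric route: Friedman test
    print(stats.friedmanchisquare(bl, t60, t120))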
Results: We did not notice significant differences concerning patients’ demographic data (mean age: RL-group, 64 ± 8 years; ABIS-group, 61 ± 10 years; mean body mass index: RL-group, 24.8 ± 2.6; ABIS-group, 24.5 ± 3.5; mean body weight: RL-group, 68.6 ± 6.0 kg; ABIS-group, 66.4 ± 14.0 kg) (P > 0.05 for all comparisons). All women underwent surgery for ovarian cancer, including open radical hysterectomy, adnexal surgery and para-aortic as well as pelvic lymph node dissection. Twenty-four patients who a priori fulfilled the inclusion criteria were enrolled in this study (12 per group). Protocol violations were not observed, and subsequently there was no need to retrospectively exclude patients from the trial. The mean intergroup difference of changes in SIDe between BL and T120 proved to be 2.08 ± 2.9 mmol/L, indicating a strong effect (d = 0.91) according to Cohen [14]. Thus, assuming an effect size of 2 mmol/L, a future RCT should include at least 35 patients in each study group, given a statistical power (1 − β) of at least 0.8.
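The figure of 35 patients per group can be approximately reproduced with a standard two-sample power calculation. The paper does not state how the estimate was computed; the sketch below is our assumption, using the assumed effect of 2 mmol/L against the observed standard deviation of 2.9 mmol/L:

from statsmodels.stats.power import TTestIndPower

d = 2.0 / 2.9  # standardized effect size under the stated assumptions
n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8)
print(n_per_group)  # about 34, consistent with 'at least 35' per group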
Acid–base balance and Stewart approach
Parameters contributing to the ‘classic’ acid–base balance (see Table 3) did not differ significantly between groups at any time point of measurement. Intergroup comparisons concerning parameters relevant for Stewart’s acid–base approach, especially the SIDe, showed no significant differences either. However, values in the RL-group showed small but significant intragroup changes during the time course, especially concerning pH and [HCO3-]. In the ABIS-group, these values did not change. In addition, relevant intragroup reductions were observed in SIDe and [A-] for both RL and ABIS. Albumin serum concentrations were diminished to approximately 80% of their baseline values in both groups.
Table 3. Acid–base parameters. All values except paCO2, albumin and pH are given in mmol/L. ABIS, acetate-based infusion solution; BE, base excess; BL, baseline measurement; RL, Ringer’s lactate; SD, standard deviation; SIDe, effective strong ion difference; SIG, strong ion gap; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; |Δ120|, absolute value of the difference between T120 and baseline measurements; *, significant intragroup changes over time (P < 0.05) between BL and T120; °, significant intragroup changes over time (P < 0.05) between BL and T60.
Hemodynamics
The overall mean blood loss during the time course was 763 ± 427 ml; it was 655 ± 460 ml in the RL-group and 870 ± 378 ml in the ABIS-group (P > 0.05). The total amount of the infused study solutions was 4,066 ± 308 ml for RL and 4,041 ± 572 ml for ABIS (P > 0.05). Norepinephrine requirements increased constantly during the study period, but differences between groups did not reach significance. Nevertheless, a slight but significant reduction in CVP could be observed in the RL-group. There were no differences in urine output. Further parameters listed under ‘Hemodynamics’ in Table 4 were not significantly different between groups.
Table 4. Hemodynamics. ABIS, acetate-based infusion solution; BL, baseline measurement; CVP, central venous pressure; HR, heart rate; MAP, mean arterial pressure; RL, Ringer’s lactate; SD, standard deviation; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; |Δ120|, absolute value of the difference between T120 and baseline measurements; *, significant intragroup changes over time (P < 0.05) between BL and T120.
Electrolytes
Serum electrolytes were measured mainly to calculate the parameters of the Stewart approach.
Except for [lactate-] and [Mg++], no significant differences between the study groups were observed (see Table 5). Significantly higher levels of [Mg++] were found in the ABIS-group at T60 and T120. Serum lactate concentrations were significantly elevated for RL compared to ABIS at T60 and T120 (P < 0.05 for all comparisons). In addition, a significant increase in [Cl-] was noticed during the time course in the RL-group (Table 5), but not in the ABIS-group.
Table 5. Serum electrolyte concentrations. All values except [PO43-] are given in mmol/L. ABIS, acetate-based infusion solution; BL, baseline measurement; RL, Ringer’s lactate; SD, standard deviation; T60, T120, measurement at 60 ± 10 and 120 ± 10 minutes; 95% CI, 95% confidence interval of means; |Δ120|, absolute value of the difference between T120 and baseline measurements; *, significant intragroup changes over time (P < 0.05) between BL and T120; #, significant differences between groups at the referring time points of measurement (P < 0.05).
In contrast to [A- a small difference in the quantity of the decline could be observed with SIDe (Δ BL-T120: RL-group, 4.7 mmol/l; ABIS-group, 2.6 mmol/L; P > 0.05). Although the difference in SIDe-reduction was not significant between groups, one can speculate that the pronounced decrease of effective SID in the RL-group (which should lead to an acidosis according to the Stewart approach) was only incompletely neutralized by the decrease in [A- (which should lead to an alkalosis). This might explain the small but significant pH reduction that could be observed in the RL-group (see Figure 1). Reduction of [A-], SIDeand [HCO3-] during the time course (BL – T120). ABIS, acetate-based infusion solution; BL, baseline; RL, Ringer’s lactate; T120, measurement at 120 ± minutes; SIDe, effective strong ion difference; *, significant intragroup changes over time (P <0.05) between BL and T120. This finding is surprising, because it does not correspond to Morgan’s theory of the ‘ideal SID’ [7]. According to Morgan et al. only infusion solutions with a SID of around 24 mmol/L do not influence blood pH. As a consequence one should have expected RL (SID 28 mmol/L) rather than ABIS (SID 36.8 mmol/L) to guarantee pH stability and ABIS to cause a mild alkalosis. Nevertheless, in the current study ABIS, whose SID nearly approximates normal plasma SIDe (39 ± 4 mmol/L), provided better pH stability [15]. We have no explanation for the discrepancy concerning Morgan’s theory. One can only speculate that perhaps infusion rate and/or metabolic turnover of the metabolizable anions may be important in this situation. In addition, according to Wooten’s multi-compartmental model, not only extracellular, but also intracellular effects, and fluid redistribution between compartments may have influenced acid–base chemistry [16]. One possible limitation of the current study was that some of the values (especially the SID) had to be calculated from directly measured parameters. With differences between and within groups being very small, a propagation of errors could have influenced group comparisons. However, modern blood gas analyzers, as used in the current study, are able to measure acid–base variables extremely precisely (pH bias <0.005, PaCO2 bias < 1.2 mmHg) [17]. In addition, measurement errors concerning the components of SID should occur similarly in all patients. Therefore, differences between groups should not be affected. In summary, there were no significant intergroup differences to observe. Compared to RL, ABIS showed less influence on the ‘classic’ acid–base parameters (pH, BE, [HCO3-) and the solution did not provoke a clinically relevant change in plasma SIDe. These results are in accordance with a recently published investigation on trauma patients requiring intensive care treatment with regard to pH stability provided by acetated solutions [18]. The clinical relevance of the relatively small intragroup changes in this study may be a matter of discussion. Scientific research on hyperchloremic acidosis has revealed that even small changes in pH can have a negative impact on various organ systems [1,2,4,19]. In addition, fluid requirements during major surgery may exceed by far the amount of four liters of crystalloids administered during the current study. Thus, small (but significant) differences concerning acid–base stability of applied crystalloids may gain importance successively if increasing amounts of fluids are administered. 
On first sight, the acid–base parameters pH, BE and [HCO3- remained remarkably constant during the time course. This observation corresponds to the results of previous investigators [4,8]. However, it was evident that the ABIS had less influence on the ‘classic’ acid–base parameters than RL. pH, [HCO3- and BE did not change from BL to T120 in the ABIS-group, whereas a small but significant reduction of pH values and [HCO3- could be observed in the RL-group. Concerning the parameters of Stewart’s acid–base model, we noticed a relevant reduction in serum albumin concentrations, which is easily explained by the diluting effects of the albumin-free crystalloid solutions. This reduction was accompanied by a corresponding decrease in [A- (Δ BL-T120: RL-group, 2.70 mmol/L; ABIS-group, 2.68 mmol/L). In addition, the SIDe declined as well. In contrast to [A- a small difference in the quantity of the decline could be observed with SIDe (Δ BL-T120: RL-group, 4.7 mmol/l; ABIS-group, 2.6 mmol/L; P > 0.05). Although the difference in SIDe-reduction was not significant between groups, one can speculate that the pronounced decrease of effective SID in the RL-group (which should lead to an acidosis according to the Stewart approach) was only incompletely neutralized by the decrease in [A- (which should lead to an alkalosis). This might explain the small but significant pH reduction that could be observed in the RL-group (see Figure 1). Reduction of [A-], SIDeand [HCO3-] during the time course (BL – T120). ABIS, acetate-based infusion solution; BL, baseline; RL, Ringer’s lactate; T120, measurement at 120 ± minutes; SIDe, effective strong ion difference; *, significant intragroup changes over time (P <0.05) between BL and T120. This finding is surprising, because it does not correspond to Morgan’s theory of the ‘ideal SID’ [7]. According to Morgan et al. only infusion solutions with a SID of around 24 mmol/L do not influence blood pH. As a consequence one should have expected RL (SID 28 mmol/L) rather than ABIS (SID 36.8 mmol/L) to guarantee pH stability and ABIS to cause a mild alkalosis. Nevertheless, in the current study ABIS, whose SID nearly approximates normal plasma SIDe (39 ± 4 mmol/L), provided better pH stability [15]. We have no explanation for the discrepancy concerning Morgan’s theory. One can only speculate that perhaps infusion rate and/or metabolic turnover of the metabolizable anions may be important in this situation. In addition, according to Wooten’s multi-compartmental model, not only extracellular, but also intracellular effects, and fluid redistribution between compartments may have influenced acid–base chemistry [16]. One possible limitation of the current study was that some of the values (especially the SID) had to be calculated from directly measured parameters. With differences between and within groups being very small, a propagation of errors could have influenced group comparisons. However, modern blood gas analyzers, as used in the current study, are able to measure acid–base variables extremely precisely (pH bias <0.005, PaCO2 bias < 1.2 mmHg) [17]. In addition, measurement errors concerning the components of SID should occur similarly in all patients. Therefore, differences between groups should not be affected. In summary, there were no significant intergroup differences to observe. Compared to RL, ABIS showed less influence on the ‘classic’ acid–base parameters (pH, BE, [HCO3-) and the solution did not provoke a clinically relevant change in plasma SIDe. 
These results are in accordance with a recently published investigation on trauma patients requiring intensive care treatment with regard to pH stability provided by acetated solutions [18]. The clinical relevance of the relatively small intragroup changes in this study may be a matter of discussion. Scientific research on hyperchloremic acidosis has revealed that even small changes in pH can have a negative impact on various organ systems [1,2,4,19]. In addition, fluid requirements during major surgery may exceed by far the amount of four liters of crystalloids administered during the current study. Thus, small (but significant) differences concerning acid–base stability of applied crystalloids may gain importance successively if increasing amounts of fluids are administered. Hemodynamic values Hemodialysis with acetate-containing solutions can be the cause of severe hemodynamic instability because of increased nitric oxide synthesis and enhanced arrhythmias [20-22]. Even the small quantity of acetate present in various dialysis fluids (usually 35 mmol/L) exposes a patient's blood to an acetate concentration 10 to 40 times the physiological level (50 to 100 μmol/L) [23-25]. Fournier et al. observed that, according to the nature of hemodialysis (counter current process), the concentration of plasma acetate increased from 0.02 mmol/L to 0.22 mmol/L in three hours of hemodialysis [25]. To date, it is unknown whether the administration of acetated infusion solutions can provoke substantial rises in plasma acetate levels as well. However, one can speculate that while (slowly) infusing acetate-containing solutions, a considerably greater amount of the substance (compared to hemodialysis) will be metabolized. This should lead to relatively low serum concentrations. This assumption is supported by two different observations: Firstly, no significant differences could be observed in terms of hemodynamic values between the study groups. Secondly, the SIG in the ABIS-group did not change significantly during the time course. According to Stewart’s approach, an increase in plasma acetate must lead to an increase in SIG [13]. This hypothesis has been confirmed by Bruegger et al. in a hemorrhagic shock canine model: A 2.2 mmol/L rise in acetate plasma levels caused an identical (absolute) increase in SIG [26]. As a consequence, it is unlikely that relevant amounts of acetate accumulated in the circulation in the ABIS-group. However, hemodynamic stability could only be maintained by administering increasing amounts of norepinephrine (P < 0.05 from BL to T120 in both of the groups). There are several possible explanations: Firstly, the range of blood loss was relatively broad (from 50 ml up to 1,500 ml). Despite fluid replacement, which, according to the literature, should have been adequate, these fluid losses were obviously not sufficiently counterbalanced in some of the patients, especially in the ABIS-group [27,28]. It seems probable that some patients developed a mild hypovolemia during the time course of the study. This assumption is supported by a decreasing CVP in the ABIS group, in which blood loss tended to be more severe than in the RL-group. Secondly, one has to take into account the mild vasodilatory effect of the used narcotics (propofol) that may have contributed to norepinephrine requirements. Thus, when considering only the current, preliminary data it seems that both solutions ‘behaved like normal crystalloids’ without substance-specific hemodynamic effects. 
However, it will be interesting to investigate if small intergroup differences observed for CVP or MAP which, nevertheless, are of clinical importance, will gain significance in a larger RCT. Hemodialysis with acetate-containing solutions can be the cause of severe hemodynamic instability because of increased nitric oxide synthesis and enhanced arrhythmias [20-22]. Even the small quantity of acetate present in various dialysis fluids (usually 35 mmol/L) exposes a patient's blood to an acetate concentration 10 to 40 times the physiological level (50 to 100 μmol/L) [23-25]. Fournier et al. observed that, according to the nature of hemodialysis (counter current process), the concentration of plasma acetate increased from 0.02 mmol/L to 0.22 mmol/L in three hours of hemodialysis [25]. To date, it is unknown whether the administration of acetated infusion solutions can provoke substantial rises in plasma acetate levels as well. However, one can speculate that while (slowly) infusing acetate-containing solutions, a considerably greater amount of the substance (compared to hemodialysis) will be metabolized. This should lead to relatively low serum concentrations. This assumption is supported by two different observations: Firstly, no significant differences could be observed in terms of hemodynamic values between the study groups. Secondly, the SIG in the ABIS-group did not change significantly during the time course. According to Stewart’s approach, an increase in plasma acetate must lead to an increase in SIG [13]. This hypothesis has been confirmed by Bruegger et al. in a hemorrhagic shock canine model: A 2.2 mmol/L rise in acetate plasma levels caused an identical (absolute) increase in SIG [26]. As a consequence, it is unlikely that relevant amounts of acetate accumulated in the circulation in the ABIS-group. However, hemodynamic stability could only be maintained by administering increasing amounts of norepinephrine (P < 0.05 from BL to T120 in both of the groups). There are several possible explanations: Firstly, the range of blood loss was relatively broad (from 50 ml up to 1,500 ml). Despite fluid replacement, which, according to the literature, should have been adequate, these fluid losses were obviously not sufficiently counterbalanced in some of the patients, especially in the ABIS-group [27,28]. It seems probable that some patients developed a mild hypovolemia during the time course of the study. This assumption is supported by a decreasing CVP in the ABIS group, in which blood loss tended to be more severe than in the RL-group. Secondly, one has to take into account the mild vasodilatory effect of the used narcotics (propofol) that may have contributed to norepinephrine requirements. Thus, when considering only the current, preliminary data it seems that both solutions ‘behaved like normal crystalloids’ without substance-specific hemodynamic effects. However, it will be interesting to investigate if small intergroup differences observed for CVP or MAP which, nevertheless, are of clinical importance, will gain significance in a larger RCT. Additional findings Slightly elevated lactate levels were observed in the RL-group. These increases were only moderate (but significant) and hardly exceeded our institutions’ range of normal values (0.5 to 1.5 mmol/L). In contrast to ABIS [Mg++] significantly decreased in the RL-group. Because RL does not contain magnesium, this can best be explained by a simple dilution effect. We consider these changes to be of minor clinical importance. 
Acid–base balance: At first sight, the acid–base parameters pH, BE and [HCO3-] remained remarkably constant during the time course. This observation corresponds to the results of previous investigators [4,8]. However, it was evident that ABIS had less influence on the 'classic' acid–base parameters than RL: pH, [HCO3-] and BE did not change from BL to T120 in the ABIS-group, whereas a small but significant reduction of pH and [HCO3-] was observed in the RL-group. Concerning the parameters of Stewart's acid–base model, we noticed a relevant reduction in serum albumin concentrations, which is easily explained by the diluting effect of the albumin-free crystalloid solutions. This reduction was accompanied by a corresponding decrease in [A-] (Δ BL-T120: RL-group, 2.70 mmol/L; ABIS-group, 2.68 mmol/L). In addition, the SIDe declined as well. In contrast to [A-], a small difference in the magnitude of the decline was observed for SIDe (Δ BL-T120: RL-group, 4.7 mmol/L; ABIS-group, 2.6 mmol/L; P > 0.05). Although the difference in SIDe reduction was not significant between groups, one can speculate that the pronounced decrease of effective SID in the RL-group (which should lead to an acidosis according to the Stewart approach) was only incompletely neutralized by the decrease in [A-] (which should lead to an alkalosis). This might explain the small but significant pH reduction observed in the RL-group (see Figure 1). Figure 1: Reduction of [A-], SIDe and [HCO3-] during the time course (BL – T120). ABIS, acetate-based infusion solution; BL, baseline; RL, Ringer's lactate; T120, measurement at 120 ± minutes; SIDe, effective strong ion difference; *, significant intragroup changes over time (P < 0.05) between BL and T120.
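Since SIDe, [A-] and SIG are derived rather than directly measured quantities, a minimal sketch may help make the arithmetic concrete. The coefficients below are the commonly cited Figge–Fencl approximations for the albumin and phosphate charge; the study does not state the exact formulas it used, and the example values are illustrative, so this is a sketch of the approach rather than the authors' computation.

```python
# Stewart-style derived quantities (a sketch; coefficients per the common
# Figge-Fencl approximations, example inputs are plasma-like but arbitrary).

def weak_acid_charge(albumin_g_l: float, phosphate_mmol_l: float, ph: float) -> float:
    """[A-] in mEq/L: charge carried by albumin and inorganic phosphate."""
    return albumin_g_l * (0.123 * ph - 0.631) + phosphate_mmol_l * (0.309 * ph - 0.469)

def sid_effective(hco3: float, albumin_g_l: float, phosphate_mmol_l: float, ph: float) -> float:
    """Effective SID (mEq/L) = [HCO3-] + [A-]."""
    return hco3 + weak_acid_charge(albumin_g_l, phosphate_mmol_l, ph)

def sid_apparent(na: float, k: float, ca: float, mg: float, cl: float, lactate: float) -> float:
    """Apparent SID (mEq/L) from the measured strong ions."""
    return na + k + ca + mg - cl - lactate

side = sid_effective(hco3=24.0, albumin_g_l=42.0, phosphate_mmol_l=1.2, ph=7.40)
sida = sid_apparent(na=140.0, k=4.0, ca=2.4, mg=1.6, cl=106.0, lactate=1.0)
sig = sida - side  # unmeasured anions (e.g., accumulating acetate) raise the SIG
print(f"SIDe = {side:.1f} mEq/L, SIDa = {sida:.1f} mEq/L, SIG = {sig:.1f} mEq/L")
```

Dilution of albumin lowers [A-] (an alkalinizing effect), while dilution of the strong ions lowers SIDe (an acidifying effect); the net pH change depends on which decrease dominates, which is exactly the balance discussed above.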
The better pH stability provided by ABIS is surprising, because it does not correspond to Morgan's theory of the 'ideal SID' [7]. According to Morgan et al., only infusion solutions with a SID of around 24 mmol/L do not influence blood pH. As a consequence, one should have expected RL (SID 28 mmol/L) rather than ABIS (SID 36.8 mmol/L) to guarantee pH stability, and ABIS to cause a mild alkalosis. Nevertheless, in the current study ABIS, whose SID nearly approximates normal plasma SIDe (39 ± 4 mmol/L), provided better pH stability [15]. We have no explanation for this discrepancy with Morgan's theory. One can only speculate that infusion rate and/or the metabolic turnover of the metabolizable anions may be important in this situation. In addition, according to Wooten's multi-compartmental model, not only extracellular but also intracellular effects, as well as fluid redistribution between compartments, may have influenced acid–base chemistry [16]. One possible limitation of the current study was that some of the values (especially the SID) had to be calculated from directly measured parameters. With differences between and within groups being very small, a propagation of errors could have influenced group comparisons. However, modern blood gas analyzers, as used in the current study, are able to measure acid–base variables extremely precisely (pH bias < 0.005, PaCO2 bias < 1.2 mmHg) [17]. In addition, measurement errors concerning the components of SID should occur similarly in all patients; therefore, differences between groups should not be affected. In summary, no significant intergroup differences were observed. Compared to RL, ABIS showed less influence on the 'classic' acid–base parameters (pH, BE, [HCO3-]) and did not provoke a clinically relevant change in plasma SIDe. These results are in accordance with a recently published investigation on trauma patients requiring intensive care treatment with regard to the pH stability provided by acetated solutions [18]. The clinical relevance of the relatively small intragroup changes in this study may be a matter of discussion. Research on hyperchloremic acidosis has revealed that even small changes in pH can have a negative impact on various organ systems [1,2,4,19]. In addition, fluid requirements during major surgery may far exceed the four liters of crystalloids administered during the current study. Thus, small (but significant) differences in the acid–base stability of the applied crystalloids may become increasingly important as larger amounts of fluid are administered. Hemodynamic values: Hemodialysis with acetate-containing solutions can cause severe hemodynamic instability because of increased nitric oxide synthesis and an increased incidence of arrhythmias [20-22]. Even the small quantity of acetate present in various dialysis fluids (usually 35 mmol/L) exposes a patient's blood to an acetate concentration 10 to 40 times the physiological level (50 to 100 μmol/L) [23-25]. Fournier et al. observed that, owing to the countercurrent nature of hemodialysis, the plasma acetate concentration increased from 0.02 mmol/L to 0.22 mmol/L over three hours of hemodialysis [25]. To date, it is unknown whether the administration of acetated infusion solutions can provoke substantial rises in plasma acetate levels as well. However, one can speculate that while (slowly) infusing acetate-containing solutions, a considerably greater amount of the substance (compared to hemodialysis) will be metabolized.
This should lead to relatively low serum concentrations. This assumption is supported by two different observations: firstly, no significant differences in hemodynamic values were observed between the study groups; secondly, the SIG in the ABIS-group did not change significantly during the time course. According to Stewart's approach, an increase in plasma acetate must lead to an increase in SIG [13]. This hypothesis has been confirmed by Bruegger et al. in a canine hemorrhagic shock model: a 2.2 mmol/L rise in plasma acetate levels caused an identical (absolute) increase in SIG [26]. As a consequence, it is unlikely that relevant amounts of acetate accumulated in the circulation in the ABIS-group. However, hemodynamic stability could only be maintained by administering increasing amounts of norepinephrine (P < 0.05 from BL to T120 in both groups). There are several possible explanations: firstly, the range of blood loss was relatively broad (from 50 ml up to 1,500 ml). Despite fluid replacement that, according to the literature, should have been adequate, these fluid losses were obviously not sufficiently counterbalanced in some of the patients, especially in the ABIS-group [27,28]. It seems probable that some patients developed a mild hypovolemia during the time course of the study. This assumption is supported by a decreasing CVP in the ABIS-group, in which blood loss tended to be more severe than in the RL-group. Secondly, one has to take into account the mild vasodilatory effect of the hypnotic agent used (propofol), which may have contributed to the norepinephrine requirements. Thus, considering only the current, preliminary data, it seems that both solutions 'behaved like normal crystalloids', without substance-specific hemodynamic effects. However, it will be interesting to investigate whether the small intergroup differences observed for CVP and MAP, which are nevertheless of clinical importance, will reach significance in a larger RCT. Additional findings: Slightly elevated lactate levels were observed in the RL-group. These increases were only moderate (but significant) and hardly exceeded our institution's range of normal values (0.5 to 1.5 mmol/L). In contrast to the ABIS-group, [Mg++] decreased significantly in the RL-group. Because RL does not contain magnesium, this is best explained by a simple dilution effect. We consider these changes to be of minor clinical importance. However, for patients suffering from liver disease, metabolic diseases or cardiac arrhythmias, ABIS might be the preferable infusion solution. Feasibility of a future RCT: The current study showed that a larger RCT is feasible. The preliminary sample size estimation did not include the possibility of protocol violations or dropouts; therefore, at least 35 patients will have to be enrolled in each group. With regard to the consistency of the study population in terms of surgical co-factors and demographic data, it is likely that a large RCT will provide reliable and reproducible results, despite the small intergroup differences in the primary and secondary values of interest that we observed.
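The study does not report the effect size, power, or significance level behind its preliminary sample size estimate, so the following is only an illustration of how such a two-group estimate is typically obtained; the inputs are assumptions, not the authors' values.

```python
# Hypothetical two-sample t-test sample-size calculation (illustrative inputs).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.7,        # assumed standardized difference (Cohen's d)
    alpha=0.05,             # assumed two-sided significance level
    power=0.8,              # assumed power
    alternative="two-sided",
)
print(round(n_per_group))   # ~33 per group under these assumed inputs; compare
                            # the "at least 35 per group" stated above
```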
Conclusions: In the current preliminary comparison, Ringer's lactate as well as ABIS proved to be suitable for fluid replacement during abdominal surgery. Hemodynamic stability remained unaffected by both solutions. Concerning the consistency of acid–base parameters, neither of the solutions seemed to be clearly inferior. ABIS had a smaller effect on pH, BE, [HCO3-] and SIDe than Ringer's lactate, but whether these differences will turn out to be of clinical relevance has to be investigated in a larger RCT. With regard to SIDe and [A-], neither ABIS nor Ringer's lactate seems to be 'ideally balanced', because both of these values decreased dose-dependently in both groups. A future RCT might provide important data for the ongoing search for an ideal balanced solution. Abbreviations: []: serum concentration of the ion between the brackets; [A-]: serum concentration of weak acids; ABIS: acetate-based infusion solution; ABP: arterial blood pressure; BE: base excess; CVP: central venous pressure; HR: heart rate; MAP: mean arterial pressure; Pi: ionized phosphate [PO43-]; RCT: randomized controlled trial; RL: Ringer's lactate; SID: strong ion difference; SIDe: effective strong ion difference; SIG: strong ion gap. Competing interests: This study was performed using research funding provided by Fresenius Medical Care (Fresenius Medical Care Deutschland GmbH, Hafenstr. 9, 97424 Schweinfurt, Bayern, Germany). Fresenius Medical Care did not have any influence on study design or manuscript approval. Klaus Hofmann-Kiefer, Matthias Jacob and Markus Rehm have lectured for Baxter Deutschland GmbH (Unterschleißheim, Germany), Fresenius Kabi Deutschland GmbH (Bad Homburg, Germany), B. Braun Melsungen AG (Melsungen, Germany), Serumwerk Bernburg AG (Bernburg, Germany) and CSL Behring GmbH (Marburg, Germany). In addition, they received unrestricted research grants from Serumwerk Bernburg AG (Bernburg, Germany), CSL Behring GmbH (Marburg, Germany) and Fresenius Kabi Deutschland GmbH (Bad Homburg, Germany). All authors declare they have no competing interests. Authors' contributions: KH-K made substantial contributions to the development of the study design, wrote the manuscript and performed the statistics. DC participated in data acquisition and analysis and critically revised the manuscript. TK and MP participated in data acquisition and patient coordination. MJ and PC critically revised and helped to draft the manuscript. MR participated substantially in the design and coordination of the study and gave final approval of the version to be published. All authors read and approved the final manuscript.
Background: The current pilot study compares the impact of an intravenous infusion of Ringer's lactate with that of an acetate-based solution with regard to acid–base balance. The study design included the variables of the Stewart approach and focused on the effective strong ion difference. Because adverse hemodynamic effects have been reported when acetate-buffered solutions are used in hemodialysis, hemodynamics were also evaluated. Methods: Twenty-four women who had undergone abdominal gynecologic surgery and who had received either Ringer's lactate (strong ion difference 28 mmol/L; n = 12) or an acetate-based solution (strong ion difference 36.8 mmol/L; n = 12) according to an established clinical protocol and its precursor were included in the investigation. After induction of general anesthesia, a set of acid–base variables, hemodynamic values and serum electrolytes was measured three times during the next 120 minutes. Results: Patients received a mean dose of 4,054 ± 450 ml of one or the other solution. In terms of mean arterial blood pressure and norepinephrine requirements, there were no differences between the study groups. pH and serum HCO3- concentration decreased slightly but significantly only with Ringer's lactate. In addition, the acetate-based solution kept the plasma effective strong ion difference more stable than Ringer's lactate. Conclusions: Both solutions provided hemodynamic stability. Concerning the consistency of acid–base parameters, neither of the solutions seemed to be inferior. Whether the slight advantages observed for the acetate-buffered solution in terms of the stability of pH and plasma HCO3- are clinically relevant needs to be investigated in a larger randomized controlled trial.
Background: In the fields of surgery and intensive care, hyperchloremic acidosis is a well-known problem in patients receiving large amounts of standard crystalloids, especially 0.9% sodium chloride solutions. A series of investigations has emphasized the disadvantageous effects of hyperchloremic acidosis on various organ systems, for example, hemodynamics, NO production, renal blood circulation, urinary output or hemostasis [1-4]. Balanced crystalloids, whose composition prevents hyperchloremia, are increasingly accepted and likely to be 'state of the art' in the near future [3,5-7]. Balanced hydroxyethyl starch solutions are also available today. All of these preparations are characterized by the presence of metabolizable organic anions such as lactate, acetate or malate, and contain physiological electrolyte concentrations. Theoretically, balanced solutions that possess a strong ion difference (SID) of around 24 mmol/L after metabolization (for example, Ringer's lactate) are supposed to have no influence on blood pH, even after intravenous infusion of several liters [8]. However, balanced crystalloids containing lactate have some disadvantages as well [9]. Their metabolism depends on liver function and requires increased oxygen consumption. Acetated solutions do not display these shortcomings, but previous studies in patients undergoing hemodialysis have demonstrated that their administration may lead to hemodynamic instability and hypotension [10]. Despite their promise, to date the routine use of balanced crystalloids is still infrequent in anesthesia. Large randomized controlled clinical trials (RCTs) investigating possible advantages as well as adverse effects of balanced crystalloids for routine use in the operating theater are sparse. The aim of the current study was to provide a preliminary comparison of two differently buffered balanced infusion solutions by reviewing data obtained from two established clinical protocols and, in addition, to test the feasibility of a future large RCT. The main target of interest was to investigate the influence of a solution containing a plasma-like SID (acetate buffered, SID = 36.8 mmol/L) and a solution with a comparably low SID (lactate buffered, SID = 28 mmol/L) on the variables of Stewart's acid–base approach, including the effective strong ion difference (SIDe). The impact of the acetate-based infusion solution on hemodynamic stability was also studied. In order to guide a future RCT, primary and secondary variables of interest were determined and sample size estimation was performed.
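As a rough illustration of the SID values quoted above, the SID of Ringer's lactate can be recomputed from a typical composition; exact ion concentrations vary slightly between products, so the numbers below are an assumption for illustration.

```python
# SID of Ringer's lactate from an assumed typical composition (mEq/L).
na, k, ca, cl, lactate = 130.0, 4.0, 3.0, 109.0, 28.0

sid_in_bag = na + k + ca - cl - lactate  # lactate still counts as a strong anion
sid_after_metabolism = na + k + ca - cl  # lactate removed by hepatic metabolism
print(sid_in_bag, sid_after_metabolism)  # 0.0 and 28.0 -> the quoted SID of 28 mmol/L
```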
10,229
315
[ 454, 76, 230, 274, 254, 243, 875, 548, 105, 91, 98, 155, 89 ]
17
[ "group", "abis", "rl", "significant", "mmol", "t120", "study", "acetate", "time", "ph" ]
[ "hyperchloremia increasingly accepted", "prevents hyperchloremia", "care hyperchloremic acidosis", "composition prevents hyperchloremia", "hyperchloremic acidosis organ" ]
[CONTENT] Acetate | Lactate | Balanced infusion solution | Acid–base balance | Hemodynamic stability [SUMMARY]
[CONTENT] Acetates | Acid-Base Equilibrium | Adult | Aged | Arterial Pressure | Bicarbonates | Buffers | Electrocardiography | Electrolytes | Female | Genital Neoplasms, Female | Hemodynamics | Humans | Hydrogen-Ion Concentration | Infusion Pumps | Infusions, Intravenous | Isotonic Solutions | Middle Aged | Norepinephrine | Pilot Projects | Ringer's Lactate | Young Adult [SUMMARY]
[CONTENT] hyperchloremia increasingly accepted | prevents hyperchloremia | care hyperchloremic acidosis | composition prevents hyperchloremia | hyperchloremic acidosis organ [SUMMARY]
[CONTENT] group | abis | rl | significant | mmol | t120 | study | acetate | time | ph [SUMMARY]
[CONTENT] balanced | balanced crystalloids | crystalloids | sid | buffered | future | solutions | large | use | routine use [SUMMARY]
[CONTENT] test | normally distributed | data normally distributed | data normally | distributed | normally | mean | anova | test data normally distributed | test data normally [SUMMARY]
[CONTENT] t120 | rl | group | abis | significant | time | 95 | t60 | measurement | 05 [SUMMARY]
[CONTENT] balanced | ringer | abis | concerning | rct | remained unaffected solutions | clearly inferior abis smaller | larger rct regard abis | larger rct regard | clinical relevance investigated [SUMMARY]
[CONTENT] abis | group | rl | t120 | significant | study | time | mmol | measurement | bl [SUMMARY]
[CONTENT] Ringer ||| Stewart ||| [SUMMARY]
[CONTENT] Twenty-four | Ringer | 12 | 36.8 | 12 ||| anesthesia | three | the next 120 minutes [SUMMARY]
[CONTENT] 4,054 | 450 ml ||| ||| ||| serum HCO3- | Ringer ||| Ringer [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] Ringer ||| Stewart ||| ||| Twenty-four | Ringer | 12 | 36.8 | 12 ||| anesthesia | three | the next 120 minutes ||| ||| 4,054 | 450 ml ||| ||| ||| serum HCO3- | Ringer ||| Ringer ||| ||| ||| [SUMMARY]
Propolis Reduces the Expression of Autophagy-Related Proteins in Chondrocytes under Interleukin-1β Stimulus.
31374866
Osteoarthritis (OA) is a progressive and multifactorial disease that is associated with aging. A number of changes occur in aged cartilage, such as increased oxidative stress, decreased markers of healthy cartilage, and alterations in the autophagy pathway. Propolis extracts contain a mixture of polyphenols; they have been shown to have a high antioxidant capacity and may regulate the autophagic pathway. Our objective was to evaluate the effect of an ethanolic extract of propolis (EEP) on chondrocytes stimulated with IL-1β.
BACKGROUND
Rabbit chondrocytes were isolated, stimulated with IL-1β, and treated with EEP. We evaluated cell viability, nitric oxide production, markers of healthy cartilage and OA, and the expression of three proteins associated with the autophagy pathway: LC3, ATG5, and AKT1.
METHODS
EEP treatment reduced the expression of LC3, ATG5, and AKT1, reduced nitric oxide production, increased the expression of healthy cartilage markers, and reduced OA markers.
RESULTS
These results suggest that EEP treatment of chondrocytes stimulated with IL-1β has beneficial effects: it decreases the expression of autophagy-associated proteins and MMP13, reduces nitric oxide production, and increases collagen II expression.
CONCLUSIONS
[ "Animals", "Antioxidants", "Autophagy", "Autophagy-Related Proteins", "Cells, Cultured", "Chondrocytes", "Interleukin-1beta", "Matrix Metalloproteinase 13", "Nitric Oxide", "Osteoarthritis", "Propolis", "Rabbits" ]
6695581
1. Introduction
Osteoarthritis (OA) is a progressive, degenerative, and multifactorial joint disease characterized by a progressive loss of articular cartilage, subchondral bone sclerosis, osteophyte formation and synovial inflammation, and it has been positioned as the worldwide leading cause of pain and dysfunction [1,2,3,4,5,6,7]. Moreover, the high prevalence of OA and its great impact on work ability make this disease an important social problem [8,9]. Aging is one of the most important risk factors for OA [5,6,10,11,12]. In aged cartilage, as well as in OA, there is an increased production of reactive nitrogen and oxygen species (RNOS) that impacts adult chondrocytes, which become more susceptible to RNOS-mediated cell death [13,14]. Chondrocytes are stimulated by proinflammatory cytokines, such as IL-1β, which leads to the production of large amounts of nitric oxide (NO) through the increased activity of inducible nitric oxide synthase (iNOS) [15,16,17]. Subsequently, NO inhibits the production of extracellular matrix (ECM) and interferes with important paracrine and autocrine factors involved in OA, perpetuating the catabolic state of articular chondrocytes [15,16]. Both animal and human articular cartilage with OA have been reported to show increased RNOS production, which also contributes to the degradation of cartilage through the upregulation of matrix metalloproteinases (MMPs) such as MMP13 [18,19,20,21]. Moreover, the cellular oxidation caused by increased RNOS production leads to cell aging, especially in postmitotic tissues such as cartilage [22], which depend on autophagy as the main mechanism for eliminating damaged or dysfunctional organelles and macromolecules [23]. Autophagy is an evolutionarily conserved pathway of intracellular degradation, in which damaged organelles and long-lived proteins are degraded and recycled to maintain cellular homeostasis [24,25,26]. This process consists of dynamic membrane rearrangements mediated by a group of four main autophagy-related proteins (ATG), which include unc-51-like autophagy activating kinase 1 (ULK1), Beclin1 (BECN1), microtubule-associated protein 1 light chain 3 alpha (LC3), and autophagy-related 5 (ATG5). Upstream, the PI3K/AKT and ERK/MAPK pathways are able to regulate the mammalian target of rapamycin (mTOR), a vital regulator of autophagy, by controlling the activity of the mTOR serine/threonine kinase within mTOR complex 1 (mTORC1) [27]. The inhibition of mTORC1 promotes autophagy, while the activation of mTOR kinase suppresses it [27,28]. In addition to its key role under physiological conditions, aging is often accompanied by defects in general autophagy [29], and its deregulation is implicated in various pathological conditions, such as aging-related diseases [30,31]. In fact, alterations of autophagy are correlated with cell death and OA [31]. Autophagy has a controversial role in cellular survival and death [32,33]. Although autophagy mostly protects cells and allows their adaptation to several types of stress, excessive or prolonged activation of this pathway can promote cell death [26,34]. Moreover, the autophagy pathway may be related to proinflammatory signaling through a mechanism that involves oxidative stress [35,36,37,38]. It has been observed that the cellular damage generated by excessive production of RNOS can stimulate this pathway [22,26,39].
For this reason, the functional relationship between autophagy and apoptosis is complex, and apparently it is the stimulus that determines the induction of apoptosis or autophagy in a context-dependent manner [34,40,41]. One of the links between these processes could be ATG5, which has a dual role in autophagy and apoptosis: it can trigger apoptosis through several mechanisms and it is part of the molecular machinery that governs the inhibitory crosstalk between apoptosis and autophagy [40]. Although several therapeutic strategies have been developed to improve the repair of hyaline cartilage, none has been sufficiently effective to generate functional and long-lasting tissue. Currently, there are no drugs available to modify OA, and a large number of candidate drugs have failed to demonstrate efficacy or were associated with significant side effects [23,42]. This makes it necessary to search for other therapeutic alternatives that avoid undesired effects and can be adapted to the progressive and multimodal nature of OA. Polyphenols are the most common bioactive natural products present in fruits, vegetables, and seeds, among other sources [43,44,45,46], and they have a wide range of activities in the prevention and treatment of various physiological or pathophysiological states, such as cancer, neuroinflammation, diabetes, and aging [47,48,49]. Several of the beneficial effects of polyphenols have been attributed to their antioxidant capacity and their ability to modulate antioxidant defense mechanisms [50]. Additionally, these bioactive components have great potential to prevent diseases through genetic and epigenetic modifications [48,49,51,52,53]. Pallauf et al. also reported that polyphenols affect numerous cellular targets that can induce or inhibit autophagy, and mentioned that autophagy interferes with the symptoms and putative causes of aging [54,55]. In fact, several studies have described the regulation that polyphenols exert on the autophagy pathway [49,56,57,58,59,60]. Propolis extract is an extremely complex mixture of natural substances; it contains amino acids, phenolic acids, phenolic acid esters, flavonoids, cinnamic acid, terpenes, and caffeic acid [61], and it has multiple pharmacological properties, including hepatoprotective, antioxidant, and anti-inflammatory actions; in cartilage, it has been shown to offer excellent protection, mediated in part by its RNOS-scavenging action [62,63]. Pinocembrin (PB) is one of the most abundant flavonoids in propolis [64,65,66] and has been associated with the inhibition of MMP-1, MMP-3, and MMP-13 expression at both the protein and mRNA levels in cartilage [66]. Additionally, it has been suggested that PB could protect the brain against ischemia-reperfusion injury, with the possible mechanisms attributed to the inhibition of apoptosis and the reversal of autophagy dysfunction in the penumbra area [65,67]. Altogether, this evidence suggests a potential effect of propolis in reversing the alterations of the autophagy pathway in chondrocytes with OA, promoting chondrocyte viability and the maintenance of healthy cartilage. Thus, the goal of our study was to evaluate the effect of ethanolic extract of propolis (EEP) on chondrocytes stimulated with IL-1β and its influence on the expression of proteins related to the autophagy pathway and on healthy and osteoarthritic cartilage markers.
2. Results
2.1. Characterization of Polyphenols Present in the EEP
The chromatographic profile of the EEP was obtained using HPLC and showed about 53 peaks (Figure 1). Peak identities were assigned by analyzing standards under the same conditions as the EEP sample, considering the exact mass, the UV absorption spectrum, and gas-phase decomposition (fragmentation). Table 1 describes the identified compounds and related data. Additionally, the amount of PB was measured by mass spectrometry, resulting in a concentration of 44.1 mg mL−1.

2.2. Cell Viability Analysis after Treatments
A trypan blue cell viability assay was performed to identify the highest dose of EEP that does not significantly decrease chondrocyte viability. After EEP treatment at concentrations ranging between 0 and 15 µg/mL, a significant decrease in cell viability was observed starting from 5 µg/mL (Figure 2). For this reason, 2.5 µg/mL was selected as the dose for the following EEP treatments. The IL-1β, bafilomycin, and rapamycin doses were selected according to the literature, and these did not significantly modify chondrocyte viability (data not shown).

2.3. Effect of OA Induction on Autophagy-Related Proteins
LC3I and LC3II proteins were detected by western blot (Figure 3a) to evaluate the effect of IL-1β-induced OA on the autophagy pathway. An increase in LC3I protein expression was observed under the IL-1β inflammatory stimulus, similar to that observed after treatment with rapamycin (RAP), a well-known autophagy stimulator (Figure 3a,b). No changes in LC3I or LC3II protein expression were observed when EEP or vehicle treatment was applied (Figure 3b,c). Cells exposed to bafilomycin (BAF), an autophagy inhibitor, showed markedly increased LC3II expression (Figure 3a,c). This accumulation in response to BAF treatment was described in the guidelines for the use and interpretation of assays for monitoring autophagy by Klionsky et al. [7], where accumulation of LC3I and II can be obtained by interrupting the autophagosome-lysosome fusion step, increasing the number of autophagosomes. Finally, a decrease in LC3I was observed in OA-induced chondrocytes after EEP treatment compared to cells stimulated with IL-1β alone (Figure 3a,b).

2.4. Effect of EEP Treatment on Autophagy Protein in OA Chondrocytes
To analyze the autophagy pathway in OA chondrocytes under EEP treatment, three proteins were selected: LC3, ATG5, and AKT1. LC3I protein expression decreased significantly in the condition co-treated with IL-1β and EEP compared to the IL-1β group, and there were no significant differences between the EEP treatment and the EEP plus IL-1β condition (Figure 4a). Regarding LC3 gene expression, a significant decrease was observed in OA chondrocytes treated with EEP compared to the IL-1β-stimulated condition (Figure 4d). ATG5 protein expression also decreased significantly in the co-treated condition compared to the IL-1β group, with no significant differences between the EEP treatment and the EEP plus IL-1β condition (Figure 4b); correspondingly, ATG5 gene expression decreased significantly in OA chondrocytes treated with EEP compared to the IL-1β-stimulated condition (Figure 4e). Finally, AKT1 protein expression decreased significantly in the co-treated condition compared with the IL-1β group, again with no significant differences between the EEP treatment and the EEP plus IL-1β condition (Figure 4c). This effect was not reflected in the gene expression analysis (Figure 4f).

2.5. Effect of EEP Treatment on Cartilage Markers Expression in OA Chondrocytes
Collagen II (Col2a1) was selected as a healthy cartilage marker and MMP13 as an OA marker. Col2a1 protein expression did not change significantly with EEP treatment compared to IL-1β stimulation. However, there was a significant increase (from 1 to 1.4) in the condition co-treated with IL-1β and EEP compared to IL-1β stimulation, as well as between the EEP and the IL-1β plus EEP conditions (Figure 5a). On the other hand, a significant increase in gene expression was observed between the IL-1β and EEP conditions, but not in the co-treated condition (Figure 5c). MMP13 protein expression decreased significantly when EEP was co-administered with IL-1β compared to the IL-1β condition, with no significant differences between the EEP treatment and the co-treated condition (Figure 5b). Regarding gene expression, there was a significant decrease in the EEP and IL-1β co-treatment compared to the IL-1β condition (Figure 5d).

2.6. Effect of EEP Treatment on Chondrocytes Nitric Oxide Production Induced by the Inflammatory Stimulus
A significant increase in the amount of NO, from 8 to 22 μM, was observed in chondrocytes stimulated with IL-1β compared to the control condition; this increase was reduced significantly to 16 μM by EEP treatment (Figure 6). In addition, EEP treatment did not modify the amount of NO in the supernatant compared to the control condition. On the other hand, activation or inhibition of autophagy did not significantly modify the amount of NO released to the medium (Figure 6).
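The protein fold changes reported above (for example, the increase from 1 to 1.4 for Col2a1) are densitometry values normalized to β-actin and expressed relative to a reference condition (see Section 4.5). A minimal sketch of that normalization, using hypothetical band intensities:

```python
# Hypothetical densitometry post-processing: target band / beta-actin band,
# then fold change relative to the reference condition.
target = {"control": 1520.0, "IL-1b": 2840.0, "IL-1b + EEP": 1750.0}  # e.g., LC3I
actin  = {"control": 3100.0, "IL-1b": 3050.0, "IL-1b + EEP": 3120.0}  # loading control

normalized  = {c: target[c] / actin[c] for c in target}
fold_change = {c: normalized[c] / normalized["control"] for c in normalized}
for condition, fc in fold_change.items():
    print(f"{condition}: {fc:.2f}x relative to control")
```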
[ "2.1. Characterization of Polyphenols Present in the EEP", "2.2. Cell Viability Analysis after Treatments", "2.3. Effect of OA Induction on Autophagy-Related Proteins", "2.4. Effect of EEP Treatment on Autophagy Protein in OA Chondrocytes", "2.5. Effect of EEP Treatment on Cartilage Markers Expression in OA Chondrocytes", "2.6. Effect of EEP Treatment on Chondrocytes Nitric Oxide Production Induced by the Inflammatory Stimulus", "4. Materials and Methods", "4.1. Preparation and Characterization of Ethanolic Extract of Propolis", "4.2. Cartilage Samples Collection and Primary Chondrocyte Culture", "4.3. Cell Viability Analysis in Response to EEP Treatment", "4.4. In Vitro Model of OA and Treatments", "4.5. Western Blot Analysis", "4.6. Reverse Transcription-Quantitative Polymerase Chain Reaction (RT-qPCR)", "4.7. Measurement of NO Release", "4.8. Statistical Analysis" ]
[ "The chromatographic profile of the EEP was obtained while using HPLC, showing about 53 peaks (Figure 1), whose identification was assigned by analyzing standards using the same conditions as in EEP sample, while considering the exact mass, UV absorption spectrum, and decomposition in the gas phase (fragmentation). Table 1 describes the identified compounds and related data. Additionally, the amount of PB was measured by mass spectrometer, resulting in a concentration of 44.1 mg mL−1.", "Cell viability assay by trypan blue was performed to identify the highest dose of EEP that does not show a significant decrease in chondrocytes viability. After the EEP treatment using concentrations that ranged between 0 and 15 µg/mL, a significant decrease in cell viability was observed, starting from 5 µg/mL (Figure 2). For this reason, 2.5 µg/mL was selected as the dose for the following EEP treatments. The IL-1β, bafilomycin, and rapamycin doses were selected according to the literature and these did not significantly modify the viability of the chondrocytes (data not shown).", "LC3I and LC3II proteins were detected by western blot (Figure 3a) to evaluate the effect of IL-1β-induced OA on autophagy pathway. An increase in the protein expression of LC3I was observed under IL-1β inflammatory stimulus, similarly to that observed after the treatment with rapamycin (RAP), which corresponds to a well-known autophagy stimulator (Figure 3a,b). Subsequently, no changes in the protein expression of LC3I or LC3II were observed when EEP or vehicle treatment was applied (Figure 3b,c). The cells exposed to bafilomycin (BAF), an autophagy inhibitor, markedly increased LC3II expression (Figure 3a,c). This accumulation in response to BAF treatment was described in the guidelines for the use and interpretation of assays for the monitoring autophagy by Klionsky et al. [7], where the accumulation of LC3I and II can be obtained by interrupting the autophagosome-lysosome fusion step, increasing the number of autophagosomes. Finally, a decrease in LC3I was observed in OA-induced chondrocytes after EEP treatment when compared to cells stimulated with IL-1β (Figure 3a,b).", "To analyze the autophagy pathway in OA chondrocytes under EEP treatment, three proteins were selected: LC3, ATG5, and AKT1. The protein expression of LC3I had a significant decrease in the condition co-treated with IL-1β and EEP when compared to the IL-1β group, and there were no significant differences between EEP treatment, and the EEP and IL-1β condition (Figure 4a). In relation to LC3 gene expression, a significant decrease was observed in OA chondrocytes that were treated with EEP, as compared to the IL-1β stimulated condition (Figure 4d).\nRegarding ATG5 protein expression, a significant decrease was observed in the condition of co-treated with IL-1β and EEP when compared to the IL-1β group and there were no significant differences between EEP treatment, and EEP and IL-1β condition (Figure 4b). In relation to ATG5 gene expression, a significant decrease was observed in OA chondrocytes that were treated with EEP compared to IL-1β stimulated condition (Figure 4e).\nFinally, a significant decrease of AKT1 protein expression was observed in the condition co-treated with IL-1β and EEP when compared with the IL-1β group and there were no significant differences between EEP treatment, and EEP and IL-1β condition (Figure 4c). 
This effect was not reflected in the gene expression analysis (Figure 4f).", "Collagen II (Col2a1) was selected as a healthy cartilage marker and MMP13 as OA marker. Col2a1 protein expression did not significantly change with EEP treatment when compared to the IL-1β stimulation. However, there is a significant increase from 1 to 1.4 in the condition of being co-treated with IL-1β and EEP when compared to the IL-1β stimulation, and between EEP, and IL-1β and EEP (Figure 5a). On the other hand, a significant increase in gene expression is observed between the IL-1β and EEP condition, but not in the condition of being co-treated with IL-1β and EEP (Figure 5c).\nThere is a significant decrease in MMP13 protein expression when EEP treatment is co-treated with IL-1β stimulation as compared to the IL-1β condition and there were no significant differences between EEP treatment, and EEP and IL-1β co-treated condition (Figure 5b). In relation to gene expression, there is a significant decrease in the EEP and IL-1β co-treatment when compared to the IL-1β condition (Figure 5d).", "A significant increase in the amount of NO from 8 to 22 μM was observed in chondrocytes that were stimulated with IL-1β as compared to the control condition; this increase is reduced significantly to 16 μM with EEP treatment (Figure 6). In addition, EEP treatment does not modify the amount of NO in the supernatant compared to the control condition. On the other hand, the activation or inhibition of autophagy does not significantly modify the amount of NO that is released to the medium (Figure 6).", " 4.1. Preparation and Characterization of Ethanolic Extract of Propolis A crude brown propolis sample was obtained from a mountainous area (latitude −38°58′4046′′, longitude −72°1′1573′′) near Cunco city, La Araucanía, Chile. Briefly, crude propolis was mixed with ethanol 80% in a 1:3 w/v proportion in an amber colored bottle and incubated for 30 min at 60 °C under constant mixing. Subsequently, the mixture was filtrated on a Whatman No. 1 filter paper in order to separate the ethanolic extract from crude propolis residues. The extract was left at 4 °C and then centrifuged for one night, in order to promote the precipitation of waxes and other poorly soluble waste. Subsequently, the propolis solvents were removed by evaporation and then the product was lyophilized and reconstituted in a 1:1 w/v proportion with ethanol. The EEP was analyzed with HPLC equipment (LC-20AD pumps, SIL-20AC injector, CTO-20A columns and DAD detector SPD-M20A, Kyoto, Japan); MS: MicroTOF-QII, Bruker Daltonics (Billerica, MA, USA). In the mass spectrometer, the DAD detector effluent was divided into a 1:10 ratio (split 1:10), with one part (50 μL/min.) being directed to the mass spectrometer. The electrospray source was used in negative mode at 3000 V. Nebulizer gas (nitrogen): 35 psi; drying gas (nitrogen): 6 L/min. at 220 °C. The mass scale of the equipment was calibrated with a sodium acetate solution. Additionally, the amount of pinocembrin was quantified while using the same method.\nA crude brown propolis sample was obtained from a mountainous area (latitude −38°58′4046′′, longitude −72°1′1573′′) near Cunco city, La Araucanía, Chile. Briefly, crude propolis was mixed with ethanol 80% in a 1:3 w/v proportion in an amber colored bottle and incubated for 30 min at 60 °C under constant mixing. Subsequently, the mixture was filtrated on a Whatman No. 1 filter paper in order to separate the ethanolic extract from crude propolis residues. 
The extract was left at 4 °C and then centrifuged for one night, in order to promote the precipitation of waxes and other poorly soluble waste. Subsequently, the propolis solvents were removed by evaporation and then the product was lyophilized and reconstituted in a 1:1 w/v proportion with ethanol. The EEP was analyzed with HPLC equipment (LC-20AD pumps, SIL-20AC injector, CTO-20A columns and DAD detector SPD-M20A, Kyoto, Japan); MS: MicroTOF-QII, Bruker Daltonics (Billerica, MA, USA). In the mass spectrometer, the DAD detector effluent was divided into a 1:10 ratio (split 1:10), with one part (50 μL/min.) being directed to the mass spectrometer. The electrospray source was used in negative mode at 3000 V. Nebulizer gas (nitrogen): 35 psi; drying gas (nitrogen): 6 L/min. at 220 °C. The mass scale of the equipment was calibrated with a sodium acetate solution. Additionally, the amount of pinocembrin was quantified while using the same method.\n 4.2. Cartilage Samples Collection and Primary Chondrocyte Culture Normal articular cartilage was collected from five white New Zealand buck rabbits, which were anesthetized with overdose of propofol and euthanized using Potassium chlorate 60 mg/Kg. After knee joint surgery, the pieces of articular cartilage were dissected and separated from the underlying bone and connective tissues. The pieces were washed three times with PBS 1× and 5% penicillin/streptomicin. The extracted cartilage was digested in a solution of 2 mg/mL Protease Type XIV. Bacterial from (Streptomyces griseus) (Sigma Aldrich, St Louis, MO, USA) in PBS 1× for 1.5 h and 1.5 mg/mL collagenase B (Roche, Meylan, France) in basic medium DMEM at 37 °C overnight. This suspension was centrifuged at 1200 rpm for 8 min. to collect the chondrocytes and was washed with basic medium DMEM. The chondrocytes were cultured in DMEM/F12 (1:1 with 15% FBS plus 1% antibiotic mixture of penicillin/streptomycin) at a density of 1 × 105 cells/mL and incubated in a humidified atmosphere of 5% CO2 at 37 °C. Culture medium was changed every two days and each passage was made when the confluence reached between 80–90%. We only used the second passage of cells in all experiments in order to avoid loss of chondrocyte phenotype [92].\nNormal articular cartilage was collected from five white New Zealand buck rabbits, which were anesthetized with overdose of propofol and euthanized using Potassium chlorate 60 mg/Kg. After knee joint surgery, the pieces of articular cartilage were dissected and separated from the underlying bone and connective tissues. The pieces were washed three times with PBS 1× and 5% penicillin/streptomicin. The extracted cartilage was digested in a solution of 2 mg/mL Protease Type XIV. Bacterial from (Streptomyces griseus) (Sigma Aldrich, St Louis, MO, USA) in PBS 1× for 1.5 h and 1.5 mg/mL collagenase B (Roche, Meylan, France) in basic medium DMEM at 37 °C overnight. This suspension was centrifuged at 1200 rpm for 8 min. to collect the chondrocytes and was washed with basic medium DMEM. The chondrocytes were cultured in DMEM/F12 (1:1 with 15% FBS plus 1% antibiotic mixture of penicillin/streptomycin) at a density of 1 × 105 cells/mL and incubated in a humidified atmosphere of 5% CO2 at 37 °C. Culture medium was changed every two days and each passage was made when the confluence reached between 80–90%. We only used the second passage of cells in all experiments in order to avoid loss of chondrocyte phenotype [92].\n 4.3. 
4.3. Cell Viability Analysis in Response to EEP Treatment
Cell viability was assessed using trypan blue staining, as previously described [93]. Briefly, rabbit chondrocytes were seeded in 24-well plates at a density of 2 × 10^4 cells per well and exposed to different concentrations of EEP (0, 2.5, 5, 7.5, 10, and 15 µg/mL) for 24 h. Each experiment was performed in triplicate. The results are expressed as the percentage of viable cells relative to control (untreated) cells.
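A minimal sketch of the trypan blue computation described in Section 4.3, with hypothetical live/dead counts; as in the study, viability is expressed relative to the untreated control.

```python
# Hypothetical hemocytometer counts: (trypan-blue-negative live, stained dead).
counts = {
    "control":       (195, 5),
    "EEP 2.5 ug/mL": (190, 10),
    "EEP 5 ug/mL":   (150, 50),
}

def viability_percent(live: int, dead: int) -> float:
    return 100.0 * live / (live + dead)

control = viability_percent(*counts["control"])
for condition, (live, dead) in counts.items():
    relative = 100.0 * viability_percent(live, dead) / control
    print(f"{condition}: {relative:.1f}% of control viability")
```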
4.3. Cell Viability Analysis in Response to EEP Treatment

Cell viability was assessed using trypan blue staining, as previously described [93]. Rabbit chondrocytes were seeded in 24-well plates at a density of 2 × 10⁴ cells per well and exposed to different concentrations of EEP (0, 2.5, 5, 7.5, 10, and 15 µg/mL) for 24 h. Each experiment was performed in triplicate. The results are expressed as the percentage of viable cells relative to untreated control cells; a minimal sketch of this calculation follows Section 4.4.

4.4. In Vitro Model of OA and Treatments

To induce OA-like biological changes, the chondrocytes were stimulated with IL-1β [17]. The cells were incubated for 24 h under the following conditions: control (untreated); RAP (100 nM rapamycin; Sigma Aldrich, USA); BAF (20 nM bafilomycin; Sigma Aldrich, USA); IL-1β (10 ng/mL IL-1β); EEP (2.5 μg/mL EEP); IL-1β + EEP (10 ng/mL IL-1β and 2.5 μg/mL EEP); and vehicle (2% ethanol).
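The viability readout from Section 4.3 amounts to counting live and trypan-blue-positive cells per dose and normalizing to the untreated control. All counts in the sketch below are hypothetical, for illustration only.

```python
# A minimal sketch of the trypan blue viability readout from Section 4.3,
# expressed as % of viable cells relative to the untreated control.
# All counts below are hypothetical.

counts = {  # dose (ug/mL) -> (live cells, trypan-blue-positive cells)
    0.0: (190, 10),
    2.5: (185, 13),
    5.0: (150, 40),
    7.5: (120, 70),
    10.0: (90, 95),
    15.0: (60, 120),
}

viability = {dose: live / (live + dead) for dose, (live, dead) in counts.items()}
control = viability[0.0]

for dose in sorted(counts):
    pct_of_control = 100.0 * viability[dose] / control
    print(f"EEP {dose:5.1f} ug/mL: {pct_of_control:6.1f}% of untreated control")
```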
4.5. Western Blot Analysis

The chondrocytes were lysed from confluent monolayer cultures using RIPA buffer supplemented with 1 mg/mL Halt™ Protease and Phosphatase Inhibitor Cocktail (Thermo Fisher, Waltham, MA, USA) at 4 °C for 30 min. The samples were then centrifuged at 15,000 rpm for 30 min, and the supernatants were harvested to measure the protein concentration. Proteins were quantified using the Bradford method with a BCA detection kit and adjusted to equal amounts (45 µg) across samples. Equal amounts of protein were heated at 95 °C for 5 min and separated on 4–20% Mini-PROTEAN TGX precast gels (Bio-Rad, Hercules, CA, USA). Following electrophoresis, the proteins were transferred onto a polyvinylidene fluoride membrane (PVDF; MilliporeSigma, Burlington, MA, USA). The membranes were blocked with 5% BSA in TBST at room temperature for 1 h and then incubated overnight at 4 °C with primary antibodies against the autophagy proteins ATG5 (ab108327, Abcam, Cambridge, UK), AKT1 (#9272, Cell Signaling, Danvers, MA, USA), and LC3 (ab128025, Abcam, Cambridge, UK), and the cartilage markers COL2A1 (ab34712, Abcam, Cambridge, UK) and MMP13 (ab84594, Abcam, Cambridge, UK). After three washes with TBST, the membranes were incubated with horseradish peroxidase (HRP)-conjugated goat anti-rabbit IgG (#7074, Cell Signaling, Danvers, MA, USA) for 1 h at room temperature, then washed three times with TBST for 5 min each. Protein bands were detected using the Amersham ECL Advance Western Blotting Detection Kit (GE Healthcare, Chicago, IL, USA).

When a protein of interest ran at a molecular weight similar to that of β-actin, the membrane was stripped with Restore Western Blot Stripping Buffer (21059, Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions. β-actin was used as the loading control in all western blot assays (A5441, Sigma Aldrich, St. Louis, MO, USA). Quantification was performed by densitometry and band analysis with ImageJ 1.8.0 (Rasband, W.S., ImageJ, U.S. National Institutes of Health, Bethesda, MD, USA).
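The densitometric quantification in Section 4.5 reduces to dividing each target band intensity by the β-actin intensity of the same lane and scaling to the control lane. The sketch below uses made-up intensities standing in for the raw band values ImageJ exports.

```python
# A minimal sketch of the beta-actin normalization applied to ImageJ band
# densitometry in Section 4.5. Intensities are hypothetical values standing
# in for ImageJ "gel analysis" output.

lanes = {  # condition -> (target band intensity, beta-actin intensity)
    "control":   (1200.0, 5000.0),
    "IL-1b":     (2600.0, 5100.0),
    "IL-1b+EEP": (1500.0, 4900.0),
}

normalized = {cond: target / actin for cond, (target, actin) in lanes.items()}
control = normalized["control"]

for cond, value in normalized.items():
    print(f"{cond:10s}: {value / control:5.2f} x control")
```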
4.6. Reverse Transcription-Quantitative Polymerase Chain Reaction (RT-qPCR)

Total RNA was isolated using the mirVana™ miRNA Isolation Kit (AM1560, Thermo Fisher Scientific, Waltham, MA, USA), according to the manufacturer's instructions. First-strand cDNA was synthesized using the SuperScript VILO cDNA Synthesis Kit, according to the manufacturer's instructions, and quantitative PCR was performed on a 7500 Real-Time PCR System with SYBR Green Master Mix (4309155, Thermo Fisher Scientific, Waltham, MA, USA). The forward and reverse primers used are shown in Table 2. β-actin expression was used as the internal control. Each experiment was repeated three times in technical triplicate. The relative expression of target genes was calculated using the 2^−ΔΔCq method.
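The 2^−ΔΔCq calculation can be written out explicitly: ΔCq subtracts the β-actin Cq within each sample, ΔΔCq subtracts the control condition's ΔCq, and the fold change is 2 raised to the negative ΔΔCq. The Cq values in the sketch below are hypothetical, not the study's data.

```python
# A minimal sketch of the 2^-ddCq relative expression calculation from
# Section 4.6. Cq values are hypothetical; beta-actin is the internal control.

def fold_change(cq_target: float, cq_actin: float,
                cq_target_ctrl: float, cq_actin_ctrl: float) -> float:
    d_cq_sample = cq_target - cq_actin             # normalize to beta-actin
    d_cq_control = cq_target_ctrl - cq_actin_ctrl
    dd_cq = d_cq_sample - d_cq_control             # normalize to control condition
    return 2.0 ** (-dd_cq)

# Hypothetical Cq values for LC3 in IL-1b-stimulated vs control chondrocytes.
print(fold_change(cq_target=24.1, cq_actin=17.0,
                  cq_target_ctrl=25.6, cq_actin_ctrl=17.2))  # ~2.5-fold vs control
```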
4.7. Measurement of NO Release

NO release was measured in the supernatant of rabbit chondrocyte cultures using an NO chemiluminescence analyzer (model NOA, Sievers Instruments, Boulder, CO, USA). The evaluated conditions were control; RAP; BAF; IL-1β; EEP; IL-1β + EEP; and vehicle, all cultured for 24 h. The release of gaseous compounds was monitored for at least 8 h, at 15 and 30 min and at 1, 2, 4, 6, and 8 h. After 15 min of incubation, aliquots (0.5 mL) of the gaseous material accumulated in the headspace were injected into the detector chamber using a Hamilton Gastight syringe [94]. Each experiment was repeated three times.

4.8. Statistical Analysis

All experiments were repeated at least three times. Results are expressed as mean ± S.D., and the data were analyzed using one-way ANOVA followed by Dunnett, Bonferroni, or Tukey multiple-comparison tests in GraphPad Prism version 5 to determine significant differences. p < 0.05 was considered statistically significant.
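The same analysis pipeline (one-way ANOVA, then a multiple-comparison test) can be reproduced in Python. The sketch below uses scipy and statsmodels with made-up triplicate values, and Tukey's HSD stands in for whichever post hoc test applies to a given comparison.

```python
# A minimal sketch of the Section 4.8 statistics: one-way ANOVA followed by
# Tukey's multiple-comparison test. All measurements are hypothetical triplicates.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control":   [8.1, 7.9, 8.3],
    "IL-1b":     [21.5, 22.4, 22.0],
    "IL-1b+EEP": [16.2, 15.8, 16.1],
}

f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([np.asarray(v) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```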
[ "1. Introduction", "2. Results", "2.1. Characterization of Polyphenols Present in the EEP", "2.2. Cell Viability Analysis after Treatments", "2.3. Effect of OA Induction on Autophagy-Related Proteins", "2.4. Effect of EEP Treatment on Autophagy Protein in OA Chondrocytes", "2.5. Effect of EEP Treatment on Cartilage Markers Expression in OA Chondrocytes", "2.6. Effect of EEP Treatment on Chondrocytes Nitric Oxide Production Induced by the Inflammatory Stimulus", "3. Discussion", "4. Materials and Methods", "4.1. Preparation and Characterization of Ethanolic Extract of Propolis", "4.2. Cartilage Samples Collection and Primary Chondrocyte Culture", "4.3. Cell Viability Analysis in Response to EEP Treatment", "4.4. In Vitro Model of OA and Treatments", "4.5. Western Blot Analysis", "4.6. Reverse Transcription-Quantitative Polymerase Chain Reaction (RT-qPCR)", "4.7. Measurement of NO Release", "4.8. Statistical Analysis" ]
[ "Osteoarthritis (OA) is a progressive, degenerative, and multifactorial joint that is disease characterized by a progressive loss of articular cartilage, subchondral bone sclerosis, osteophyte formation and synovial inflammation that has been positioned as the world-wide leading cause of pain and dysfunction [1,2,3,4,5,6,7]. Moreover, the high prevalence of OA and its great impact on the work ability make this disease an important social problem [8,9].\nAging is one of the most important risk factors of OA [5,6,10,11,12]. In aged cartilage as well as in OA, there is an increased production of reactive nitrogen and oxygen species (RNOS) that impact the adult chondrocytes, since they become more susceptible to cell death mediated by RNOS [13,14]. Chondrocytes are stimulated by proinflammatory cytokines, such as IL-1β, which leads to the production of large amounts of nitric oxide (NO) through the increased activity of inducible nitric oxide synthase (iNOS) [15,16,17]. Subsequently, NO inhibits the production of an extracellular matrix (ECM) and interferes with important paracrine and autocrine factors that are involved in OA, perpetuating the catabolic state of articular chondrocytes [15,16]. Animal and human articular cartilage with OA have both been reported to increased RNOS production, which also contributes to the degradation of cartilage by upregulation of matrix metalloproteinases (MMPs) such as MMP13 [18,19,20,21]. Moreover, the cellular oxidation caused by an increased production of RNOS leads to cell aging, especially in postmitotic tissues, such as cartilage [22], which depend on autophagy as the main mechanism for eliminating damaged or dysfunctional organelles and macromolecules [23].\nAutophagy is a preserved evolutionary pathway of intracellular degradation, in which damaged organelles and long-lived proteins are degraded and recycled to maintain cellular homeostasis [24,25,26]. This process consists of dynamic membrane rearrangements that are mediated by a group of four main autophagy-related proteins (ATG) that include unc-51, like autophagy activating kinase 1 (ULK1), Beclin1 (BECN1), microtubule associated protein 1 light chain 3 alpha (LC3), and autophagy related 5 (ATG5). Upstream, the PI3/AKT and ERK/MAPK pathways are able to regulate the mammalian target of rapamycin (mTOR), which is a vital regulator of autophagy, by controlling the interaction between mTOR serine and threonine kinases in the mTOR complex 1 (mTORC-1) [27]. The inhibition of mTORC-1 promotes autophagy, while the activation of mTOR kinase suppresses it [27,28]. In addition to its key role in physiological conditions, aging is often accompanied by defects of general autophagy [29] and its deregulation is implicated in various pathological conditions, such as aging-related diseases [30,31]. In fact, alterations of autophagy are correlated with cell death and OA [31].\nAutophagy has a controversial role in cellular survival and death [32,33]. Although autophagy mostly protects cells allow their adaptation to several types of stress, excessive or prolonged activation of this pathway can promote cell death [26,34]. Moreover, the autophagy pathway may be related to proinflammatory signaling through a mechanism that involves oxidative stress [35,36,37,38]. It has been observed that cellular damage that is generated by excessive production of RNOS can stimulate this pathway [22,26,39]. 
For this reason, the functional relationship between autophagy and apoptosis is complex and apparently it is the stimulus that determines an induction of apoptosis or autophagy in a context-dependent mode [34,40,41]. One of the links between these processes could be ATG5, which has a dual role in autophagy and apoptosis, because it could trigger apoptosis through several mechanisms and it is part of the molecular mechanisms that govern the inhibitory crosstalk between apoptosis and autophagy [40].\nAlthough several therapeutic strategies have been developed to improve the repair of hyaline cartilage, none has been sufficiently effective to generate functional and long-lasting tissue. Currently, there are no drugs available to modify OA and a large number of candidate drugs have failed to demonstrate efficacy or they were associated with significant side effects [23,42]. This makes it necessary to search for other therapeutic alternatives that can avoid undesired effects and can be adapted to the progressive and multimodal nature of OA.\nPolyphenols are the most common bioactive natural products that are present in fruits, vegetables, seeds, among others [43,44,45,46], and they have a wide range of activities in the prevention and treatment of various physiological or pathophysiological states, such as cancer, neuroinflammation, diabetes, and aging [47,48,49]. Several of the beneficial effects of polyphenols have been attributed to their antioxidant capacity and their ability to modulate antioxidant defense mechanisms [50]. Additionally, these bioactive components have a great potential to prevent diseases through genetic and epigenetic modifications [48,49,51,52,53]. Paullauf et al., also reported that polyphenols affect numerous cellular targets that can induce or inhibit autophagy and mention that autophagy interferes with the symptoms and putative causes of aging [54,55]. In fact, several studies described the regulation that polyphenols have on the path of autophagy [49,56,57,58,59,60].\nPropolis extract is an extremely complex mixture of natural substances; it contains amino acids, phenolic acids, phenolic acid esters, flavonoids, cinnamic acid, terpenes, and caffeic acid [61], and it has multiple pharmacological properties, including hepatoprotective, antioxidant, and anti-inflammatory actions, and in the cartilage, it has been shown to offer excellent protection, being mediated in part by its RNOS scavenger action [62,63]. Pinocembrin (PB) is one of the most abundant flavonoids in propolis [64,65,66] and it has been associated to the inhibition of MMP-1, MMP-3, and MMP-13 expression at both the protein and mRNA levels in cartilage [66]. Additionally, it is suggested that PB could protect the brain against ischemia-reperfusion injury, and the possible mechanisms might be attributed to the inhibition of apoptosis and reversed autophagy disfunction in the penumbra area [65,67].\nAltogether, this evidence suggests a potential effect of propolis in reverting the alterations on the autophagy pathway in chondrocytes with OA, promoting the viability of the chondrocytes and the maintenance of healthy cartilage. Thus, the goal of our study was to evaluate the effect of ethanolic extract of propolis (EEP) in chondrocytes that were stimulated with IL-1β and its influence on the expression of proteins related to autophagy pathway and healthy and osteoarthritic marker.", " 2.1. 
2. Results

2.1. Characterization of Polyphenols Present in the EEP

The chromatographic profile of the EEP was obtained by HPLC and showed about 53 peaks (Figure 1), which were identified by analyzing standards under the same conditions as the EEP sample, considering exact mass, UV absorption spectrum, and gas-phase fragmentation. Table 1 describes the identified compounds and related data. Additionally, the amount of PB was measured by mass spectrometry, giving a concentration of 44.1 mg mL−1.

2.2. Cell Viability Analysis after Treatments

A trypan blue cell viability assay was performed to identify the highest dose of EEP that does not significantly decrease chondrocyte viability. After EEP treatment at concentrations ranging from 0 to 15 µg/mL, a significant decrease in cell viability was observed starting from 5 µg/mL (Figure 2). For this reason, 2.5 µg/mL was selected as the dose for subsequent EEP treatments. The IL-1β, bafilomycin, and rapamycin doses were selected according to the literature and did not significantly modify chondrocyte viability (data not shown).

2.3. Effect of OA Induction on Autophagy-Related Proteins

LC3I and LC3II proteins were detected by western blot (Figure 3a) to evaluate the effect of IL-1β-induced OA on the autophagy pathway. An increase in LC3I protein expression was observed under the IL-1β inflammatory stimulus, similar to that observed after treatment with rapamycin (RAP), a well-known autophagy stimulator (Figure 3a,b). No changes in LC3I or LC3II protein expression were observed with EEP or vehicle treatment (Figure 3b,c). Cells exposed to bafilomycin (BAF), an autophagy inhibitor, markedly increased LC3II expression (Figure 3a,c). This accumulation in response to BAF treatment is described in the guidelines for the use and interpretation of autophagy-monitoring assays by Klionsky et al. [7]: accumulation of LC3I and LC3II can be obtained by interrupting the autophagosome-lysosome fusion step, increasing the number of autophagosomes. Finally, a decrease in LC3I was observed in OA-induced chondrocytes after EEP treatment compared to cells stimulated with IL-1β alone (Figure 3a,b).

2.4. Effect of EEP Treatment on Autophagy Proteins in OA Chondrocytes

To analyze the autophagy pathway in OA chondrocytes under EEP treatment, three proteins were selected: LC3, ATG5, and AKT1. LC3I protein expression decreased significantly in the condition co-treated with IL-1β and EEP compared to the IL-1β group, with no significant differences between the EEP and the IL-1β + EEP conditions (Figure 4a). Regarding LC3 gene expression, a significant decrease was observed in OA chondrocytes treated with EEP compared to the IL-1β-stimulated condition (Figure 4d).

ATG5 protein expression also decreased significantly in the IL-1β + EEP condition compared to the IL-1β group, with no significant differences between the EEP and the IL-1β + EEP conditions (Figure 4b). ATG5 gene expression likewise decreased significantly in OA chondrocytes treated with EEP compared to the IL-1β-stimulated condition (Figure 4e).

Finally, a significant decrease in AKT1 protein expression was observed in the IL-1β + EEP condition compared with the IL-1β group, with no significant differences between the EEP and the IL-1β + EEP conditions (Figure 4c). This effect was not reflected in the gene expression analysis (Figure 4f).

2.5. Effect of EEP Treatment on Cartilage Marker Expression in OA Chondrocytes

Collagen II (Col2a1) was selected as a healthy cartilage marker and MMP13 as an OA marker. Col2a1 protein expression did not change significantly with EEP treatment alone compared to IL-1β stimulation. However, there was a significant increase, from 1 to 1.4, in the IL-1β + EEP condition compared to IL-1β stimulation, and between the EEP and the IL-1β + EEP conditions (Figure 5a). At the gene level, a significant increase was observed between the IL-1β and the EEP conditions, but not in the IL-1β + EEP condition (Figure 5c).

MMP13 protein expression decreased significantly when EEP was co-administered with IL-1β compared to the IL-1β condition, with no significant differences between the EEP and the IL-1β + EEP conditions (Figure 5b). At the gene level, there was a significant decrease in the IL-1β + EEP co-treatment compared to the IL-1β condition (Figure 5d).

2.6. Effect of EEP Treatment on Chondrocyte Nitric Oxide Production Induced by the Inflammatory Stimulus

A significant increase in NO, from 8 to 22 μM, was observed in chondrocytes stimulated with IL-1β compared to the control condition; this increase was reduced significantly, to 16 μM, by EEP treatment (Figure 6). EEP treatment alone did not modify the amount of NO in the supernatant compared to the control condition. Activation or inhibition of autophagy did not significantly modify the amount of NO released into the medium (Figure 6).
3. Discussion

OA is the most common chronic degenerative joint disease and heavily impacts quality of life [8,9]. OA treatment is complex because it is a multifactorial disease; current therapies are, at best, moderately effective pain relievers, and several of these drugs have adverse effects [9], so the development of safe treatments is necessary.

Autophagy is a highly conserved mechanism of homeostasis maintenance [55,68], and its deregulation contributes to OA development. In fact, in late stages of OA this process could also be activated together with apoptosis, as an alternative pathway to chondroptosis [69,70]. Hence, the modulation of autophagy may be a promising therapeutic strategy for OA, since it has the potential to counteract both the effects of inflammatory stimuli and age-related defects [8].

Propolis is highly rich in active components and its extracts have numerous applications in treating various diseases [71]. PB is one of the most abundant flavonoids in propolis [65,72] and has drawn much attention for its broad spectrum of pharmacological properties, such as the reversal of autophagy dysfunction in ischemia-reperfusion injury [65,67]. We evaluated the amount of pinocembrin present in the Chilean propolis used in this study, since propolis composition depends on the trees visited by honeybees [72,73]. The EEP used here contains a significant amount of pinocembrin, the most abundant polyphenol in the extract (Figure 1 and Table 1).
These results are corroborated by previous studies of propolis extracted from the same region [74,75,76].

Given that autophagy defects could play a central role in OA, the objective of this study was to evaluate the potential effect of propolis on autophagy proteins in chondrocytes with OA. Importantly, there are no absolute criteria for determining autophagic status that are applicable in every biological or experimental context [7]. The ratio between LC3I and LC3II is commonly used to evaluate the autophagy pathway [55]. One limitation of our study is that this ratio could not be calculated, because the low abundance of LC3II prevented reliable quantification; we therefore omitted this indicator. Nevertheless, LC3I is used as a marker of autophagic induction [55], and the conversion pattern of LC3I to LC3II depends on cell type and context [7], so we evaluated the expression of this marker in response to IL-1β stimulation.

Some studies have indicated that stimulation with IL-1β can inhibit the autophagy pathway, especially under nutrient deprivation [77]. In our study, we chose to observe what happens to the autophagy pathway without nutrient deprivation.

After IL-1β stimulation, there was an increase in LC3I protein expression similar to that observed with the autophagy inducer rapamycin (Figure 3). This suggests that the inflammatory stimulus with IL-1β increases the expression of LC3, which is consistent with the literature [78,79,80]. On the other hand, a decrease in LC3I protein expression was observed with EEP treatment in OA chondrocytes compared to chondrocytes stimulated with IL-1β alone, suggesting that EEP could regulate the autophagy pathway (Figure 3).

Subsequently, gene and protein expression of three autophagy-pathway genes (LC3, ATG5, and AKT1) was analyzed in response to IL-1β stimulation and EEP treatment. In response to IL-1β stimulation, the chondrocytes increased their protein expression of LC3I, ATG5, and AKT1 (Figure 4a–c). This was also observed in the mRNA expression of LC3 and ATG5 (Figure 4d,e), suggesting that the autophagy pathway is triggered by the inflammatory stimulus. Autophagy has been recognized as an adaptive stress response that promotes survival, whereas in other cases it can promote cell death and morbidity [68,69,70], and a functional relationship between both processes has been postulated [34,40,41]. This is also consistent with the effects of another proinflammatory cytokine, TNFα, which also increases with age and in OA and is able to increase LC3 expression through inhibition of AKT activation [79,80]. AKT1 is an upstream regulator of mTOR and is usually considered a suppressor of autophagy, but some natural compounds have been observed to activate autophagy with a concomitant increase in AKT phosphorylation [27].

On the other hand, it has been shown that acute oxidative stress may induce a positive, adaptive regulation of autophagy that helps restore intracellular homeostasis, whereas prolonged stress may generate an alteration in autophagy [35]. In aged cartilage and in the development of OA, ROS levels increase and adult chondrocytes become more susceptible to ROS-mediated cell death [13,14].
In addition, chondrocytes stimulated by proinflammatory cytokines such as IL-1β produce large amounts of nitric oxide [15,16,81,82], which contributes in part to this oxidative/nitrosative stress. This agrees with what we observed in chondrocytes stimulated with IL-1β (Figure 6). We suggest that the alterations produced in chondrocytes with OA are partly caused by this NO-induced oxidative stress.

Some polyphenols might regulate several cellular targets that can induce or inhibit autophagy [54], and a partial restoration of basal autophagy is thought to contribute to improving chondrocyte viability in OA [55]. Therefore, we evaluated whether the polyphenol-rich EEP could regulate the expression of the selected proteins. In response to EEP treatment, chondrocytes stimulated with IL-1β significantly decreased their protein expression of LC3I, ATG5, and AKT1 (Figure 4a–c). This was also observed in the mRNA expression of LC3 and ATG5 (Figure 4d,e). This suggests that EEP treatment, a rich mixture of polyphenols, particularly pinocembrin, could be able to return autophagy to its basal levels. Recently, it has been reported that Chinese propolis reduces inflammation through an inhibition of autophagy, by reducing the distribution and accumulation of LC3 in vascular endothelial cells [83]. Another study reports that a Taiwanese green propolis partially inhibited the NLRP3 inflammasome via autophagy induction [84]. Moreover, chrysin, another polyphenol, has been reported to be present in propolis and in our samples (Table 1, compound 32); chrysin can decrease the induction of autophagy-associated proteins in mesangial cells exposed to advanced glycation end products [85].

This decrease could restore the pathway to normal levels, returning the homeostasis lost under the inflammatory stimulus. Since oxidative stress could mediate an excessive induction of autophagy [86], we analyzed whether EEP treatment, by exerting its scavenger action [62,63], could decrease the amount of NO released into the medium. Indeed, EEP treatment reduced the amount of NO produced (Figure 6). Therefore, this could be the mechanism by which autophagy activation is inhibited, decreasing the expression of proteins such as LC3I, ATG5, and AKT1 (Figure 4a–c). To elucidate this, functional analyses of the autophagy pathway in chondrocytes in response to EEP treatment should be performed.

Collagens are long-lived proteins [87,88]. Oxidative damage is by far the most important source of post-translational chemical modification of the cartilage ECM that alters collagen longevity [88]. Thus, a reduction in oxidative damage by EEP treatment could improve the expression of the healthy cartilage marker; EEP treatment probably increases Col2a1 protein expression in IL-1β-stimulated chondrocytes through this mechanism (Figure 5a,b). It has been described that increased ROS contribute to cartilage degradation through upregulation of MMPs [18] and that pinocembrin (one of the most abundant components of our propolis) inhibits MMP-1, MMP-3, and MMP-13 expression at the protein and mRNA levels in cartilage [66]. EEP treatment significantly reduced MMP-13 protein expression in chondrocytes stimulated with IL-1β (Figure 5c,d), probably again through an antioxidant mechanism.
These results suggest beneficial effects of EEP treatment in chondrocytes with OA in vitro, increasing the expression of the healthy cartilage marker and reducing the OA marker.

To the best of our knowledge, no previous study has analyzed the effect of propolis on Col2a1 or MMP13 expression in chondrocytes, although pinocembrin has been shown to reduce MMP13 levels in chondrocytes stimulated with TNF-α [66] and chrysin to increase collagen II and reduce MMP13 levels in chondrocytes [85].

Koussounadis et al. [89] noted in 2015 that mRNA and protein expression do not always correlate well, for diverse reasons such as differences in post-transcriptional regulation, mRNA degradation, and protein half-life. Regarding the differences observed in Col2a1 expression, we can infer that, since it is a long-lived protein, the oxidative stress generated by the inflammatory stimulus could favor its degradation; its degradation would in turn be reduced by decreasing the amount of nitric oxide, without altering mRNA expression, when EEP treatment is applied (Figure 5a). It should also be mentioned that MMP13 expression has a negative relationship with Col2a1 expression; since this metalloproteinase specifically degrades Col2a1, this could be another way in which Col2a1 protein expression varies without any modification of its mRNA.

Autophagy is a dynamic pathway whose activity can change in response to different stimuli, such as drugs, cell type, and confluence [7]; it is therefore not unusual to find differences between gene and protein expression, and the results can sometimes be difficult to interpret. This is why we only report increases or reductions in protein expression.

According to the guidelines for the study of autophagy, most ATG genes do not show significant changes in mRNA levels when autophagy is induced [7]. A slight increase in LC3 mRNA has been observed between 4 and 16 h after starvation, which decreases over time [90]. For this reason, increases in LC3 mRNA expression are thought to be quite modest and to be cell-type- and organism-dependent. It is also suggested that protein expression is a more meaningful parameter in relation to the initiation and completion of autophagy [7]. Therefore, it is likely that by the time the mRNA analysis was performed (24 h post treatment), the mRNA had already been degraded by intracellular mechanisms, which would appear as lower mRNA expression even though protein expression remained high (Figure 4). Alternatively, we could have measured the rate of general protein breakdown by autophagy, to quantify autophagic flux and thereby estimate the activity of the autophagy pathway; this is another limitation of this study.

It is important to consider that the antioxidant capacity of propolis reflects a heterogeneous mixture of compounds, including several polyphenols [72,73]. Each possesses a particular antioxidant capacity, so the amount of a particular polyphenol present in the EEP is not, by itself, a good indicator of the scavenger capacity of the extract [91].
For example, quercetin, indicated with the number 11 in the chromatographic profile of Figure 1, has approximately 20 times the antioxidant power of pinocembrin, although it is found in smaller quantities [91]. Therefore, we cannot rule out that other polyphenols are responsible for the antioxidant effect of propolis. In addition, the mixture of polyphenols could have an additive effect. To clarify this, experiments should compare the effect of each of the polyphenols present in the EEP sample against the complete mixture.

In conclusion, inflammatory stimulation of chondrocytes with IL-1β decreases the expression of the healthy cartilage marker, increases the OA marker, and deregulates the autophagy pathway. Treatment with EEP appears able to counteract these deregulations, opposing the decompensation of the mechanisms that maintain cellular homeostasis, such as autophagy, which could be the initial trigger of mechanical and structural alterations of the tissue. We propose that propolis acts through a reduction of the oxidative stress generated by the inflammatory stimulus (Figure 7).

Finally, we propose that treatment with dietary polyphenols in people with age-related OA could be an effective complementary therapeutic approach: through anti-inflammatory, antioxidant, and autophagy-regulating mechanisms, it could inhibit or reduce the causes of age-related cartilage degeneration, such as modification of matrix composition, increased oxidative stress, the age-associated decrease in chondrocyte numbers, and changes in chondrocyte phenotype, thereby promoting chondrocyte viability and the maintenance of healthy cartilage. More in vitro and in vivo studies are needed to support the effect of polyphenols in OA, evaluating bioavailability and establishing an effective dose.
The mass scale of the equipment was calibrated with a sodium acetate solution. Additionally, the amount of pinocembrin was quantified while using the same method.\nA crude brown propolis sample was obtained from a mountainous area (latitude −38°58′4046′′, longitude −72°1′1573′′) near Cunco city, La Araucanía, Chile. Briefly, crude propolis was mixed with ethanol 80% in a 1:3 w/v proportion in an amber colored bottle and incubated for 30 min at 60 °C under constant mixing. Subsequently, the mixture was filtrated on a Whatman No. 1 filter paper in order to separate the ethanolic extract from crude propolis residues. The extract was left at 4 °C and then centrifuged for one night, in order to promote the precipitation of waxes and other poorly soluble waste. Subsequently, the propolis solvents were removed by evaporation and then the product was lyophilized and reconstituted in a 1:1 w/v proportion with ethanol. The EEP was analyzed with HPLC equipment (LC-20AD pumps, SIL-20AC injector, CTO-20A columns and DAD detector SPD-M20A, Kyoto, Japan); MS: MicroTOF-QII, Bruker Daltonics (Billerica, MA, USA). In the mass spectrometer, the DAD detector effluent was divided into a 1:10 ratio (split 1:10), with one part (50 μL/min.) being directed to the mass spectrometer. The electrospray source was used in negative mode at 3000 V. Nebulizer gas (nitrogen): 35 psi; drying gas (nitrogen): 6 L/min. at 220 °C. The mass scale of the equipment was calibrated with a sodium acetate solution. Additionally, the amount of pinocembrin was quantified while using the same method.\n 4.2. Cartilage Samples Collection and Primary Chondrocyte Culture Normal articular cartilage was collected from five white New Zealand buck rabbits, which were anesthetized with overdose of propofol and euthanized using Potassium chlorate 60 mg/Kg. After knee joint surgery, the pieces of articular cartilage were dissected and separated from the underlying bone and connective tissues. The pieces were washed three times with PBS 1× and 5% penicillin/streptomicin. The extracted cartilage was digested in a solution of 2 mg/mL Protease Type XIV. Bacterial from (Streptomyces griseus) (Sigma Aldrich, St Louis, MO, USA) in PBS 1× for 1.5 h and 1.5 mg/mL collagenase B (Roche, Meylan, France) in basic medium DMEM at 37 °C overnight. This suspension was centrifuged at 1200 rpm for 8 min. to collect the chondrocytes and was washed with basic medium DMEM. The chondrocytes were cultured in DMEM/F12 (1:1 with 15% FBS plus 1% antibiotic mixture of penicillin/streptomycin) at a density of 1 × 105 cells/mL and incubated in a humidified atmosphere of 5% CO2 at 37 °C. Culture medium was changed every two days and each passage was made when the confluence reached between 80–90%. We only used the second passage of cells in all experiments in order to avoid loss of chondrocyte phenotype [92].\nNormal articular cartilage was collected from five white New Zealand buck rabbits, which were anesthetized with overdose of propofol and euthanized using Potassium chlorate 60 mg/Kg. After knee joint surgery, the pieces of articular cartilage were dissected and separated from the underlying bone and connective tissues. The pieces were washed three times with PBS 1× and 5% penicillin/streptomicin. The extracted cartilage was digested in a solution of 2 mg/mL Protease Type XIV. Bacterial from (Streptomyces griseus) (Sigma Aldrich, St Louis, MO, USA) in PBS 1× for 1.5 h and 1.5 mg/mL collagenase B (Roche, Meylan, France) in basic medium DMEM at 37 °C overnight. 
This suspension was centrifuged at 1200 rpm for 8 min. to collect the chondrocytes and was washed with basic medium DMEM. The chondrocytes were cultured in DMEM/F12 (1:1 with 15% FBS plus 1% antibiotic mixture of penicillin/streptomycin) at a density of 1 × 105 cells/mL and incubated in a humidified atmosphere of 5% CO2 at 37 °C. Culture medium was changed every two days and each passage was made when the confluence reached between 80–90%. We only used the second passage of cells in all experiments in order to avoid loss of chondrocyte phenotype [92].\n 4.3. Cell Viability Analysis in Response to EEP Treatment Cell viability was assessed while using trypan blue staining, as previously described [93]. Rabbit chondrocytes were briefly incubated and then exposed to different concentrations of EEP (0, 2.5, 5, 7.5, 10, and 15 µg/mL) in 24 well plates at a density of 2 × 104 cell per well for 24 h. Each experiment was performed in triplicate. The results are expressed as % of viable cells relative to control cells (untreated).\nCell viability was assessed while using trypan blue staining, as previously described [93]. Rabbit chondrocytes were briefly incubated and then exposed to different concentrations of EEP (0, 2.5, 5, 7.5, 10, and 15 µg/mL) in 24 well plates at a density of 2 × 104 cell per well for 24 h. Each experiment was performed in triplicate. The results are expressed as % of viable cells relative to control cells (untreated).\n 4.4. In Vitro Model of OA and Treatments For the induction of OA-like biological changes, the chondrocytes were stimulated using IL-1β [17]. The cells were incubated for 24 h under the following conditions: Control (untreated); RAP (100 nM rapamycin, Sigma Aldrich, USA); BAF (20 nM bafilomycin, Sigma Aldrich, USA); IL-1β (10 ng/mL IL-1β); EEP (EEP 2.5 μg/mL); IL-1β and EEP (10 ng/mL IL-1β and 2.5 μg/mL EEP); and, vehicle (2% ethanol).\nFor the induction of OA-like biological changes, the chondrocytes were stimulated using IL-1β [17]. The cells were incubated for 24 h under the following conditions: Control (untreated); RAP (100 nM rapamycin, Sigma Aldrich, USA); BAF (20 nM bafilomycin, Sigma Aldrich, USA); IL-1β (10 ng/mL IL-1β); EEP (EEP 2.5 μg/mL); IL-1β and EEP (10 ng/mL IL-1β and 2.5 μg/mL EEP); and, vehicle (2% ethanol).\n 4.5. Western Blot Analysis The chondrocytes were isolated from confluent monolayer cultures using RIPA buffer supplemented with 1mg/mL Halt™ Protease and Phosphatase Inhibitor Cocktail (Thermo Fisher, Waltham, MA, USA) at 4 °C for 30 min. The samples were then centrifuged at 15,000 rpm for 30 min., and the supernatants were harvested to measure the protein concentration. Proteins were quantified while using the Bradford method with BCA detection kit and adjusted to equal concentrations (45 µg) across different samples. Equal amounts of protein were heated at 95 °C for 5 min. and separated using 4–20% Mini protean TGX Precast gel (Biorad, Hercules, CA, USA). Following electrophoresis, the proteins were transferred onto a polyvinylidene fluoride membrane (PVDF, MilliporeSigma, Burlington, MA, USA). The membranes were blocked with 5% BSA (TBST) at room temperature for 1h and then incubated overnight at 4 °C with primary antibodies of autophagy proteins: ATG5 (ab108327, Abcam, Cambridge, UK), AKT1 (#9272, Cell Signaling, Danvers, MA, USA), LC3 (ab128025, Abcam, Cambridge, UK), and cartilage markers COL2A1 (ab34712, Abcam, Cambridge, UK), MMP13 (ab84594, Abcam, Cambridge, UK). 
4.4. In Vitro Model of OA and Treatments
For the induction of OA-like biological changes, the chondrocytes were stimulated with IL-1β [17]. The cells were incubated for 24 h under the following conditions: control (untreated); RAP (100 nM rapamycin, Sigma Aldrich, St. Louis, MO, USA); BAF (20 nM bafilomycin, Sigma Aldrich, St. Louis, MO, USA); IL-1β (10 ng/mL); EEP (2.5 μg/mL); IL-1β plus EEP (10 ng/mL IL-1β and 2.5 μg/mL EEP); and vehicle (2% ethanol).

4.5. Western Blot Analysis
The chondrocytes were lysed from confluent monolayer cultures using RIPA buffer supplemented with 1 mg/mL Halt™ Protease and Phosphatase Inhibitor Cocktail (Thermo Fisher, Waltham, MA, USA) at 4 °C for 30 min. The samples were then centrifuged at 15,000 rpm for 30 min, and the supernatants were harvested to measure the protein concentration. Proteins were quantified using the Bradford method with a BCA detection kit and adjusted to equal amounts (45 µg) across samples. Equal amounts of protein were heated at 95 °C for 5 min and separated on a 4–20% Mini-PROTEAN TGX precast gel (Bio-Rad, Hercules, CA, USA). Following electrophoresis, the proteins were transferred onto a polyvinylidene fluoride (PVDF) membrane (MilliporeSigma, Burlington, MA, USA). The membranes were blocked with 5% BSA in TBST at room temperature for 1 h and then incubated overnight at 4 °C with primary antibodies against the autophagy proteins ATG5 (ab108327, Abcam, Cambridge, UK), AKT1 (#9272, Cell Signaling, Danvers, MA, USA), and LC3 (ab128025, Abcam, Cambridge, UK), and the cartilage markers COL2A1 (ab34712, Abcam, Cambridge, UK) and MMP13 (ab84594, Abcam, Cambridge, UK). Following three wash steps with TBST, the membranes were incubated with horseradish peroxidase (HRP)-conjugated goat anti-rabbit IgG (#7074, Cell Signaling, Danvers, MA, USA) for 1 h at room temperature. They were then washed with TBST three times, for 5 min each time. Protein bands were detected using the Amersham ECL Advance Western Blotting Detection Kit (GE Healthcare, Chicago, IL, USA).
When proteins had a molecular weight similar to that of β-actin, the membranes were stripped with Restore Western Blot Stripping Buffer (21059, Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions. β-actin expression was used as the loading control in all western blot assays (A5441, Sigma Aldrich, St. Louis, MO, USA). Quantification was performed by densitometric band analysis with ImageJ 1.8.0 (Rasband, W.S., ImageJ, U.S. National Institutes of Health, Bethesda, MD, USA).
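Densitometric quantification of this kind typically normalizes each band to the β-actin loading control of the same lane and then expresses each condition relative to the untreated control. A minimal sketch with hypothetical ImageJ integrated densities (the values are illustrative, not the paper's data):

```python
def normalized_band(target_density, actin_density):
    """Normalize a band's integrated density (e.g., from ImageJ)
    to the beta-actin loading control of the same lane."""
    return target_density / actin_density

def fold_change(treated, control):
    """Relative protein expression, with the control condition set to 1."""
    return treated / control

# Hypothetical integrated densities for an MMP13 blot:
untreated = normalized_band(2.0e4, 2.4e4)
il1b      = normalized_band(5.2e4, 2.6e4)
il1b_eep  = normalized_band(2.9e4, 2.5e4)
print(fold_change(il1b, untreated), fold_change(il1b_eep, untreated))
```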
4.6. Reverse Transcription-Quantitative Polymerase Chain Reaction (RT-qPCR)
Total RNA was isolated using the mirVana™ miRNA Isolation Kit (AM1560, Thermo Fisher Scientific, Waltham, MA, USA), according to the manufacturer's instructions. First-strand cDNA was synthesized using the SuperScript VILO cDNA Synthesis Kit, according to the manufacturer's instructions, and quantitative PCR was performed on a 7500 Real-Time PCR System with SYBR Green Master Mix (4309155, Thermo Fisher Scientific, Waltham, MA, USA). The forward and reverse primers used are shown in Table 2. β-actin expression was used as an internal control. Each experiment was repeated three times in technical triplicate. The relative expression of target genes was calculated using the 2^(−ΔΔCq) method.
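The 2^(−ΔΔCq) calculation is a two-step normalization: each target Cq is first normalized to the reference gene (ΔCq), then to the calibrator condition (ΔΔCq). A minimal sketch with hypothetical Cq values (only the method is from the paper; the numbers are invented for illustration):

```python
def ddcq_fold_change(cq_target_treated, cq_ref_treated,
                     cq_target_control, cq_ref_control):
    """Relative expression by the 2^(-ddCq) method, with beta-actin as the
    reference gene and the untreated condition as the calibrator."""
    dcq_treated = cq_target_treated - cq_ref_treated
    dcq_control = cq_target_control - cq_ref_control
    ddcq = dcq_treated - dcq_control
    return 2.0 ** (-ddcq)

# Hypothetical mean Cq values for a target gene after IL-1b stimulation:
# dCq_treated = 24.1 - 16.0 = 8.1; dCq_control = 25.6 - 16.2 = 9.4
# ddCq = -1.3, so fold change = 2^1.3 ~ 2.46
print(ddcq_fold_change(24.1, 16.0, 25.6, 16.2))
```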
4.7. Measurement of NO Release
NO release was measured from the supernatant of rabbit chondrocyte cultures using an NO chemiluminescence analyzer (model NOA, Sievers Instruments, Boulder, CO, USA). The evaluated conditions were: control; RAP; BAF; IL-1β; EEP; IL-1β and EEP; and vehicle. All conditions were cultured for 24 h. The release of gaseous compounds was monitored for at least 8 h, at 15 and 30 min and at 1, 2, 4, 6, and 8 h. After 15 min of incubation, aliquots (0.5 mL) of the gaseous material accumulated in the headspace were injected into the detector chamber using a Hamilton Gastight syringe [94]. Each experiment was repeated three times.
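Chemiluminescence NO analyzers are generally calibrated against standards of known concentration, with measured signals interpolated on the calibration line; the paper does not describe its calibration procedure, so the sketch below is purely illustrative, with invented calibration points:

```python
import numpy as np

# Hypothetical calibration standards: NO concentration (uM) vs. detector signal
conc_uM = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
signal  = np.array([0.0, 1.1e4, 2.2e4, 4.5e4, 8.8e4])

# Least-squares calibration line: conc = slope * signal + intercept
slope, intercept = np.polyfit(signal, conc_uM, 1)

def signal_to_no_uM(s):
    """Interpolate a measured signal onto the calibration line."""
    return slope * s + intercept

print(signal_to_no_uM(4.9e4))  # ~22 uM, the order of the level reported for IL-1b
```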
4.8. Statistical Analysis
All experiments were repeated at least three times. The results are expressed as mean ± S.D., and the data were analyzed by one-way ANOVA followed by Dunnett, Bonferroni, or Tukey multiple-comparison tests using SigmaPlot (analysis performed in GraphPad Prism version 5) to determine significant differences. p < 0.05 was considered statistically significant.
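For reference, the same analysis (one-way ANOVA followed by a post hoc multiple-comparison test) can be reproduced outside GraphPad; a minimal Python sketch with hypothetical triplicate values, using SciPy for the ANOVA and statsmodels for Tukey's test:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical normalized densitometry values, n = 3 per condition
control  = np.array([1.00, 0.95, 1.05])
il1b     = np.array([1.80, 1.92, 1.74])
il1b_eep = np.array([1.20, 1.10, 1.28])

# One-way ANOVA across the three conditions
f_stat, p_value = f_oneway(control, il1b, il1b_eep)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's post hoc test for pairwise comparisons at alpha = 0.05
values = np.concatenate([control, il1b, il1b_eep])
groups = ["control"] * 3 + ["IL-1b"] * 3 + ["IL-1b+EEP"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```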
[ "intro", "results", null, null, null, null, null, null, "discussion", null, null, null, null, null, null, null, null, null ]
[ "propolis", "autophagy", "chondrocytes" ]
1. Introduction
Osteoarthritis (OA) is a progressive, degenerative, and multifactorial joint disease characterized by progressive loss of articular cartilage, subchondral bone sclerosis, osteophyte formation, and synovial inflammation, and it has become a leading worldwide cause of pain and dysfunction [1,2,3,4,5,6,7]. Moreover, the high prevalence of OA and its great impact on work ability make this disease an important social problem [8,9]. Aging is one of the most important risk factors for OA [5,6,10,11,12]. In aged cartilage, as in OA, there is an increased production of reactive nitrogen and oxygen species (RNOS) that impacts adult chondrocytes, which become more susceptible to RNOS-mediated cell death [13,14]. Chondrocytes are stimulated by proinflammatory cytokines, such as IL-1β, which leads to the production of large amounts of nitric oxide (NO) through the increased activity of inducible nitric oxide synthase (iNOS) [15,16,17]. Subsequently, NO inhibits the production of extracellular matrix (ECM) and interferes with important paracrine and autocrine factors involved in OA, perpetuating the catabolic state of articular chondrocytes [15,16]. Both animal and human articular cartilage with OA have been reported to show increased RNOS production, which also contributes to the degradation of cartilage through the upregulation of matrix metalloproteinases (MMPs) such as MMP13 [18,19,20,21]. Moreover, the cellular oxidation caused by increased RNOS production leads to cell aging, especially in postmitotic tissues such as cartilage [22], which depend on autophagy as the main mechanism for eliminating damaged or dysfunctional organelles and macromolecules [23]. Autophagy is an evolutionarily conserved pathway of intracellular degradation, in which damaged organelles and long-lived proteins are degraded and recycled to maintain cellular homeostasis [24,25,26]. This process consists of dynamic membrane rearrangements mediated by a group of four main autophagy-related (ATG) proteins: unc-51-like autophagy activating kinase 1 (ULK1), Beclin1 (BECN1), microtubule-associated protein 1 light chain 3 alpha (LC3), and autophagy related 5 (ATG5). Upstream, the PI3K/AKT and ERK/MAPK pathways regulate the mammalian target of rapamycin (mTOR), a vital regulator of autophagy, by controlling the interaction between mTOR serine/threonine kinases in mTOR complex 1 (mTORC-1) [27]. The inhibition of mTORC-1 promotes autophagy, while the activation of mTOR kinase suppresses it [27,28]. In addition to its key role under physiological conditions, aging is often accompanied by defects in general autophagy [29], and its deregulation is implicated in various pathological conditions, such as aging-related diseases [30,31]. In fact, alterations of autophagy are correlated with cell death and OA [31]. Autophagy has a controversial role in cellular survival and death [32,33]. Although autophagy mostly protects cells, allowing their adaptation to several types of stress, excessive or prolonged activation of this pathway can promote cell death [26,34]. Moreover, the autophagy pathway may be related to proinflammatory signaling through a mechanism that involves oxidative stress [35,36,37,38]. It has been observed that the cellular damage generated by excessive production of RNOS can stimulate this pathway [22,26,39].
For this reason, the functional relationship between autophagy and apoptosis is complex; apparently, it is the stimulus that determines the induction of apoptosis or autophagy in a context-dependent manner [34,40,41]. One of the links between these processes could be ATG5, which has a dual role in autophagy and apoptosis: it can trigger apoptosis through several mechanisms and is part of the molecular machinery that governs the inhibitory crosstalk between apoptosis and autophagy [40]. Although several therapeutic strategies have been developed to improve the repair of hyaline cartilage, none has been sufficiently effective to generate functional and long-lasting tissue. Currently, there are no available drugs that modify OA, and a large number of candidate drugs have failed to demonstrate efficacy or were associated with significant side effects [23,42]. This makes it necessary to search for other therapeutic alternatives that avoid undesired effects and can be adapted to the progressive and multimodal nature of OA. Polyphenols are the most common bioactive natural products present in fruits, vegetables, and seeds, among other sources [43,44,45,46], and they have a wide range of activities in the prevention and treatment of various physiological or pathophysiological states, such as cancer, neuroinflammation, diabetes, and aging [47,48,49]. Several of the beneficial effects of polyphenols have been attributed to their antioxidant capacity and their ability to modulate antioxidant defense mechanisms [50]. Additionally, these bioactive components have great potential to prevent diseases through genetic and epigenetic modifications [48,49,51,52,53]. Pallauf et al. also reported that polyphenols affect numerous cellular targets that can induce or inhibit autophagy, and noted that autophagy interferes with the symptoms and putative causes of aging [54,55]. In fact, several studies have described the regulation that polyphenols exert on the autophagy pathway [49,56,57,58,59,60]. Propolis extract is an extremely complex mixture of natural substances; it contains amino acids, phenolic acids, phenolic acid esters, flavonoids, cinnamic acid, terpenes, and caffeic acid [61], and it has multiple pharmacological properties, including hepatoprotective, antioxidant, and anti-inflammatory actions; in cartilage, it has been shown to offer excellent protection, mediated in part by its RNOS scavenger action [62,63]. Pinocembrin (PB) is one of the most abundant flavonoids in propolis [64,65,66] and has been associated with the inhibition of MMP-1, MMP-3, and MMP-13 expression at both the protein and mRNA levels in cartilage [66]. Additionally, it has been suggested that PB could protect the brain against ischemia-reperfusion injury, possibly through the inhibition of apoptosis and the reversal of autophagy dysfunction in the penumbra area [65,67]. Altogether, this evidence suggests a potential effect of propolis in reverting the alterations of the autophagy pathway in chondrocytes with OA, promoting chondrocyte viability and the maintenance of healthy cartilage. Thus, the goal of our study was to evaluate the effect of ethanolic extract of propolis (EEP) on chondrocytes stimulated with IL-1β and its influence on the expression of proteins related to the autophagy pathway and of healthy and osteoarthritic cartilage markers.

2. Results
2.1. Characterization of Polyphenols Present in the EEP
The chromatographic profile of the EEP was obtained using HPLC, showing about 53 peaks (Figure 1), whose identities were assigned by analyzing standards under the same conditions as the EEP sample, considering the exact mass, UV absorption spectrum, and gas-phase decomposition (fragmentation). Table 1 describes the identified compounds and related data. Additionally, the amount of PB was measured by mass spectrometry, resulting in a concentration of 44.1 mg/mL.

2.2. Cell Viability Analysis after Treatments
A trypan blue cell viability assay was performed to identify the highest dose of EEP that does not produce a significant decrease in chondrocyte viability. After EEP treatment at concentrations ranging between 0 and 15 µg/mL, a significant decrease in cell viability was observed starting from 5 µg/mL (Figure 2). For this reason, 2.5 µg/mL was selected as the dose for the subsequent EEP treatments. The IL-1β, bafilomycin, and rapamycin doses were selected according to the literature, and these did not significantly modify the viability of the chondrocytes (data not shown).

2.3. Effect of OA Induction on Autophagy-Related Proteins
LC3I and LC3II proteins were detected by western blot (Figure 3a) to evaluate the effect of IL-1β-induced OA on the autophagy pathway. An increase in LC3I protein expression was observed under the IL-1β inflammatory stimulus, similar to that observed after treatment with rapamycin (RAP), a well-known autophagy stimulator (Figure 3a,b). No changes in LC3I or LC3II protein expression were observed when EEP or vehicle treatment was applied (Figure 3b,c). The cells exposed to bafilomycin (BAF), an autophagy inhibitor, markedly increased LC3II expression (Figure 3a,c). This accumulation in response to BAF treatment was described in the guidelines for the use and interpretation of assays for monitoring autophagy by Klionsky et al. [7], where the accumulation of LC3I and II can be obtained by interrupting the autophagosome-lysosome fusion step, increasing the number of autophagosomes. Finally, a decrease in LC3I was observed in OA-induced chondrocytes after EEP treatment when compared to cells stimulated with IL-1β alone (Figure 3a,b).
2.4. Effect of EEP Treatment on Autophagy Proteins in OA Chondrocytes
To analyze the autophagy pathway in OA chondrocytes under EEP treatment, three proteins were selected: LC3, ATG5, and AKT1. LC3I protein expression decreased significantly in the IL-1β plus EEP co-treatment compared to the IL-1β group, with no significant differences between the EEP treatment and the IL-1β plus EEP condition (Figure 4a). In relation to LC3 gene expression, a significant decrease was observed in OA chondrocytes treated with EEP, as compared to the IL-1β-stimulated condition (Figure 4d). Regarding ATG5 protein expression, a significant decrease was observed in the IL-1β plus EEP co-treatment compared to the IL-1β group, with no significant differences between the EEP treatment and the IL-1β plus EEP condition (Figure 4b). In relation to ATG5 gene expression, a significant decrease was observed in OA chondrocytes treated with EEP compared to the IL-1β-stimulated condition (Figure 4e). Finally, a significant decrease in AKT1 protein expression was observed in the IL-1β plus EEP co-treatment compared with the IL-1β group, with no significant differences between the EEP treatment and the IL-1β plus EEP condition (Figure 4c). This effect was not reflected in the gene expression analysis (Figure 4f).
2.5. Effect of EEP Treatment on Cartilage Marker Expression in OA Chondrocytes
Collagen II (Col2a1) was selected as a healthy cartilage marker and MMP13 as an OA marker. Col2a1 protein expression did not change significantly with EEP treatment compared to IL-1β stimulation. However, there was a significant increase, from 1 to 1.4, in the IL-1β plus EEP co-treatment compared to IL-1β stimulation, and between EEP and IL-1β plus EEP (Figure 5a). On the other hand, a significant increase in gene expression was observed between the IL-1β and EEP conditions, but not in the IL-1β plus EEP co-treatment (Figure 5c). MMP13 protein expression decreased significantly when EEP was co-administered with IL-1β stimulation as compared to the IL-1β condition, with no significant differences between the EEP treatment and the IL-1β plus EEP co-treatment (Figure 5b). In relation to gene expression, there was a significant decrease in the IL-1β plus EEP co-treatment compared to the IL-1β condition (Figure 5d).

2.6. Effect of EEP Treatment on Chondrocyte Nitric Oxide Production Induced by the Inflammatory Stimulus
A significant increase in the amount of NO, from 8 to 22 μM, was observed in chondrocytes stimulated with IL-1β compared to the control condition; this increase was reduced significantly, to 16 μM, by EEP treatment (Figure 6). In addition, EEP treatment did not modify the amount of NO in the supernatant compared to the control condition. On the other hand, the activation or inhibition of autophagy did not significantly modify the amount of NO released into the medium (Figure 6).
3. Discussion
OA is the most common chronic degenerative joint disease, and it heavily impacts quality of life [8,9]. OA treatment is complex because it is a multifactorial disease; current therapies are, at best, moderately effective pain relievers, and several of these drugs have adverse effects [9], so the development of safe treatments is necessary. Autophagy is a highly conserved mechanism of homeostasis maintenance [55,68], and its deregulation contributes to OA development. In fact, in late stages of OA, this process could also be activated jointly with apoptosis as an alternative pathway leading to chondroptosis [69,70]. Hence, the modulation of autophagy may be a promising therapeutic strategy for OA, since it has the potential to counteract both the effects of inflammatory stimuli and age-related defects [8]. Propolis is highly rich in active components, and its extracts have numerous applications in treating various diseases [71]. PB is one of the most abundant flavonoids in propolis [65,72] and has drawn much attention for its broad spectrum of pharmacological properties, such as the reversal of autophagy dysfunction in ischemia-reperfusion injury [65,67]. We evaluated the amount of pinocembrin present in the Chilean propolis used in this study, since propolis composition depends on the various source trees used by honeybees [72,73]. The EEP used here contains a significant amount of pinocembrin, the most abundant polyphenol in the extract (Figure 1 and Table 1).
These results are corroborated by previous studies of propolis extracted from the same region [74,75,76]. Given that autophagy defects could have a central role in OA, the objective of this study was to evaluate the potential effect of propolis on autophagy proteins in chondrocytes with OA. Importantly, there are no absolute criteria for determining autophagic status that are applicable in every biological or experimental context [7]. The ratio between LC3I and LC3II is commonly used to evaluate the autophagy pathway [55]. One limitation of our study was that it was not possible to calculate this ratio, because the low abundance of LC3II prevented reliable quantification. Therefore, it was decided to omit this indicator. Nevertheless, LC3I is used as a marker of autophagic induction [55], and the conversion pattern of LC3I to LC3II depends on the cell type and cellular context [7], so we evaluated the expression of this marker in response to IL-1β stimulation. Certain studies have indicated that stimulation with IL-1β could inhibit the autophagy pathway, especially in contexts associated with nutrient deprivation [77]. In our study, we decided to observe what happens with the autophagy pathway without nutrient deprivation. After IL-1β stimulation, there was an increase in LC3I protein expression similar to that observed with the autophagy inducer rapamycin (Figure 3). This suggests that the inflammatory stimulus with IL-1β increases the expression of LC3, which is consistent with the literature [78,79,80]. On the other hand, a decrease in LC3I protein expression with EEP treatment of OA chondrocytes was observed, as compared to chondrocytes stimulated with IL-1β alone, suggesting that EEP could regulate the autophagy pathway (Figure 3). Subsequently, an analysis of the gene and protein expression of three genes associated with the autophagy pathway (LC3, ATG5, and AKT1) was performed in response to IL-1β stimulation and EEP treatment in chondrocytes. In response to IL-1β stimulation, the chondrocytes increased their protein expression of LC3I, ATG5, and AKT1 (Figure 4a–c). This result was also observed in the mRNA expression of LC3 and ATG5 (Figure 4d,e), suggesting that the autophagy pathway is triggered by the inflammatory stimulus. Autophagy has been recognized as an adaptive response to stress that promotes survival, whereas in other cases it is capable of promoting cell death and morbidity [68,69,70], and a functional relationship between both processes has been postulated [34,40,41]. This is also consistent with the effects of another proinflammatory cytokine, TNFα, which also increases with age and in OA, and which is additionally able to increase the expression of LC3 through the inhibition of AKT activation [79,80]. AKT1 is an upstream regulator of mTOR and is usually considered a suppressor of autophagy, but it has been observed that some natural compounds activate autophagy with a concomitant increase in AKT phosphorylation [27]. On the other hand, it has been shown that acute oxidative stress stimuli may induce a positive regulation of autophagy in an adaptive way, which helps to restore intracellular homeostasis; however, an alteration in autophagy may be generated if this stress is prolonged [35]. In aged cartilage and during the development of OA, there is an increase in ROS levels, and adult chondrocytes become more susceptible to ROS-mediated cell death [13,14].
In addition, chondrocytes stimulated by proinflammatory cytokines, such as IL-1β, produce large amounts of nitric oxide [15,16,81,82], which contributes in part to this oxidative/nitrosative stress. This agrees with what we observed in chondrocytes stimulated with IL-1β (Figure 6). We suggest that the alterations produced in chondrocytes with OA are partly caused by this NO-induced oxidative stress. Some polyphenols may regulate several cellular targets that can induce or inhibit autophagy [54], and it is thought that a partial restoration of basal autophagy might contribute to improving chondrocyte viability in OA [55]. Therefore, we evaluated whether the EEP, rich in polyphenols, could regulate the expression of the selected proteins. In response to EEP treatment, chondrocytes stimulated with IL-1β significantly decreased their protein expression of LC3I, ATG5, and AKT1 (Figure 4a–c). This result was also observed in the mRNA expression of LC3 and ATG5 (Figure 4d,e). This suggests that EEP treatment, a rich mixture of polyphenols, particularly pinocembrin, could return autophagy to its basal levels. Recently, it has been reported that Chinese propolis reduces inflammation through an inhibition of autophagy, by reducing the distribution and accumulation of LC3 in vascular endothelial cells [83]. Another study mentions that a Taiwanese green propolis partially inhibited the NLRP3 inflammasome via autophagy induction [84]. Moreover, chrysin, another polyphenol, has been reported to be present in propolis and in our samples (Table 1, number 32). Chrysin is able to decrease the induction of proteins associated with autophagy in mesangial cells exposed to advanced glycation end products [85]. This decrease could restore this pathway to its normal levels, returning the homeostasis lost under the inflammatory stimulus. As oxidative stress could mediate an excessive induction of autophagy [86], we analyzed whether EEP treatment, by exerting its scavenger action [62,63], could decrease the amount of NO released into the medium. Indeed, EEP treatment reduced the amount of NO produced (Figure 6). Therefore, this could be the mechanism by which autophagy activation is inhibited, decreasing the expression of proteins such as LC3I, ATG5, and AKT1 (Figure 4a–c). To elucidate this, a functional analysis of the autophagy pathway in chondrocytes in response to EEP treatment should be performed. Collagens are long-lived proteins [87,88]. Oxidative damage is, by far, the most important way of inducing post-translational chemical modification of the cartilage ECM that alters collagen longevity [88]. Thus, a reduction in oxidative damage by EEP treatment could improve the expression of the healthy cartilage marker. EEP treatment induced an increase in the protein expression of Col2a1 in chondrocytes stimulated with IL-1β, probably through this mechanism (Figure 5a,b). It has been described that increased ROS contributes to the degradation of cartilage through an upregulation of MMPs [18], and that pinocembrin (one of the most abundant components of our propolis) inhibits the expression of MMP-1, MMP-3, and MMP-13 at the protein and mRNA levels in cartilage [66]. EEP treatment significantly reduced MMP-13 protein expression in chondrocytes stimulated with IL-1β (Figure 5c,d), probably again through an antioxidant mechanism.
These results suggest beneficial effects of EEP treatment on chondrocytes with OA in vitro, increasing the expression of the healthy cartilage marker and reducing the OA marker. To the best of our knowledge, no study has analyzed the effect of propolis on the expression of Col2a1 or MMP13 in chondrocytes, although it has been described that pinocembrin was able to reduce MMP13 levels in chondrocytes stimulated with TNF-α [66], and that chrysin was able to increase the levels of collagen II and reduce the levels of MMP13 in chondrocytes [85]. Koussounadis et al. [89] noted in 2015 that mRNA and protein expression do not always correlate well, for diverse reasons, for example, differences in post-transcriptional regulation, mRNA degradation, and protein half-life, among others. In the case of the differences observed in the expression of Col2a1, we can infer that, since it is a long-lived protein, the oxidative stress generated by the inflammatory stimulus could favor the degradation of this protein. Moreover, its degradation would be reduced by decreasing the amount of nitric oxide, without altering mRNA expression, when the EEP treatment is applied (Figure 5a). It is also necessary to mention that MMP13 expression has a negative relationship with Col2a1 expression, and since this metalloproteinase specifically degrades Col2a1, this could be another way in which Col2a1 protein expression varies without any modification of its mRNA. Autophagy is a dynamic pathway whose activity can change in response to different stimuli, such as drugs, cell type, and confluence [7]; because of that, it would not be unusual to find differences between gene and protein expression, and sometimes the results can be difficult to interpret. This is why we only discuss the increase or reduction of protein expression. According to the guidelines for the study of autophagy, most ATG genes do not show significant changes in mRNA levels when autophagy is induced [7]. A slight increase in LC3 mRNA has been observed between 4 and 16 h after starvation, which decreases over time [90]. For this reason, it is thought that increases in LC3 mRNA expression can be quite modest and are cell-type and organism dependent. It has also been suggested that protein expression is a more meaningful parameter in relation to the initiation and completion of autophagy [7]. Therefore, it is likely that, by the time the mRNA analysis was performed (24 h post treatment), the mRNA had already been degraded by intracellular mechanisms, which would be observed as lower mRNA expression even though protein expression remained high (Figure 4). Alternatively, we could have measured the rate of general protein breakdown by autophagy, to quantify the autophagic flux and therefore estimate the activity of the autophagy pathway. This is another limitation of this study. It is important to consider that the antioxidant capacity of propolis reflects a heterogeneous mixture of compounds, including several polyphenols [72,73]. Each one possesses a particular antioxidant capacity, and for this reason the amount of a single polyphenol present in the EEP is not, by itself, a good indicator of the scavenger capacity of the compound [91].
For example, quercetin, found in the chromatographic profile of Figure 1 (compound number 11), has approximately 20 times more antioxidant power than pinocembrin, although it is found in smaller quantities than the latter [91]. Therefore, we cannot rule out that other polyphenols may be responsible for the antioxidant effect of propolis. In addition, we must take into account that the mixture of polyphenols could have an additive effect. To clarify this, experiments should be carried out comparing the effect of each of the different polyphenols present in the EEP sample with that of the complete mixture. In conclusion, inflammatory stimulation of chondrocytes with IL-1β causes a decrease in the expression of the healthy cartilage marker, an increase in the OA marker, and a deregulation of the autophagy pathway. Treatment with EEP appears able to inhibit these deregulations, counteracting the decompensation of the mechanisms that maintain cellular homeostasis, such as autophagy, which could be the initial trigger of mechanical and structural alterations of the tissue. We propose that the mechanism involved in the effect of propolis is a reduction of the oxidative stress generated by the inflammatory stimulus (Figure 7). Finally, we propose that treatment with dietary polyphenols in people with age-related OA could be an effective complementary therapeutic approach, since, through anti-inflammatory, antioxidant, and autophagy-regulating mechanisms, it could inhibit or reduce the causes of age-related cartilage degeneration, such as modification of the matrix composition, increased oxidative stress, the age-associated decrease in the number of chondrocytes, and changes in the chondrocyte phenotype, among others, thus promoting the viability of chondrocytes and the maintenance of healthy cartilage. More in vitro and in vivo studies are needed to support the effect of polyphenols in OA, evaluating bioavailability and establishing an effective dose.
The mass scale of the equipment was calibrated with a sodium acetate solution. Additionally, the amount of pinocembrin was quantified while using the same method. A crude brown propolis sample was obtained from a mountainous area (latitude −38°58′4046′′, longitude −72°1′1573′′) near Cunco city, La Araucanía, Chile. Briefly, crude propolis was mixed with ethanol 80% in a 1:3 w/v proportion in an amber colored bottle and incubated for 30 min at 60 °C under constant mixing. Subsequently, the mixture was filtrated on a Whatman No. 1 filter paper in order to separate the ethanolic extract from crude propolis residues. The extract was left at 4 °C and then centrifuged for one night, in order to promote the precipitation of waxes and other poorly soluble waste. Subsequently, the propolis solvents were removed by evaporation and then the product was lyophilized and reconstituted in a 1:1 w/v proportion with ethanol. The EEP was analyzed with HPLC equipment (LC-20AD pumps, SIL-20AC injector, CTO-20A columns and DAD detector SPD-M20A, Kyoto, Japan); MS: MicroTOF-QII, Bruker Daltonics (Billerica, MA, USA). In the mass spectrometer, the DAD detector effluent was divided into a 1:10 ratio (split 1:10), with one part (50 μL/min.) being directed to the mass spectrometer. The electrospray source was used in negative mode at 3000 V. Nebulizer gas (nitrogen): 35 psi; drying gas (nitrogen): 6 L/min. at 220 °C. The mass scale of the equipment was calibrated with a sodium acetate solution. Additionally, the amount of pinocembrin was quantified while using the same method. 4.2. Cartilage Samples Collection and Primary Chondrocyte Culture Normal articular cartilage was collected from five white New Zealand buck rabbits, which were anesthetized with overdose of propofol and euthanized using Potassium chlorate 60 mg/Kg. After knee joint surgery, the pieces of articular cartilage were dissected and separated from the underlying bone and connective tissues. The pieces were washed three times with PBS 1× and 5% penicillin/streptomicin. The extracted cartilage was digested in a solution of 2 mg/mL Protease Type XIV. Bacterial from (Streptomyces griseus) (Sigma Aldrich, St Louis, MO, USA) in PBS 1× for 1.5 h and 1.5 mg/mL collagenase B (Roche, Meylan, France) in basic medium DMEM at 37 °C overnight. This suspension was centrifuged at 1200 rpm for 8 min. to collect the chondrocytes and was washed with basic medium DMEM. The chondrocytes were cultured in DMEM/F12 (1:1 with 15% FBS plus 1% antibiotic mixture of penicillin/streptomycin) at a density of 1 × 105 cells/mL and incubated in a humidified atmosphere of 5% CO2 at 37 °C. Culture medium was changed every two days and each passage was made when the confluence reached between 80–90%. We only used the second passage of cells in all experiments in order to avoid loss of chondrocyte phenotype [92]. Normal articular cartilage was collected from five white New Zealand buck rabbits, which were anesthetized with overdose of propofol and euthanized using Potassium chlorate 60 mg/Kg. After knee joint surgery, the pieces of articular cartilage were dissected and separated from the underlying bone and connective tissues. The pieces were washed three times with PBS 1× and 5% penicillin/streptomicin. The extracted cartilage was digested in a solution of 2 mg/mL Protease Type XIV. Bacterial from (Streptomyces griseus) (Sigma Aldrich, St Louis, MO, USA) in PBS 1× for 1.5 h and 1.5 mg/mL collagenase B (Roche, Meylan, France) in basic medium DMEM at 37 °C overnight. 
This suspension was centrifuged at 1200 rpm for 8 min. to collect the chondrocytes and was washed with basic medium DMEM. The chondrocytes were cultured in DMEM/F12 (1:1 with 15% FBS plus 1% antibiotic mixture of penicillin/streptomycin) at a density of 1 × 105 cells/mL and incubated in a humidified atmosphere of 5% CO2 at 37 °C. Culture medium was changed every two days and each passage was made when the confluence reached between 80–90%. We only used the second passage of cells in all experiments in order to avoid loss of chondrocyte phenotype [92]. 4.3. Cell Viability Analysis in Response to EEP Treatment Cell viability was assessed while using trypan blue staining, as previously described [93]. Rabbit chondrocytes were briefly incubated and then exposed to different concentrations of EEP (0, 2.5, 5, 7.5, 10, and 15 µg/mL) in 24 well plates at a density of 2 × 104 cell per well for 24 h. Each experiment was performed in triplicate. The results are expressed as % of viable cells relative to control cells (untreated). Cell viability was assessed while using trypan blue staining, as previously described [93]. Rabbit chondrocytes were briefly incubated and then exposed to different concentrations of EEP (0, 2.5, 5, 7.5, 10, and 15 µg/mL) in 24 well plates at a density of 2 × 104 cell per well for 24 h. Each experiment was performed in triplicate. The results are expressed as % of viable cells relative to control cells (untreated). 4.4. In Vitro Model of OA and Treatments For the induction of OA-like biological changes, the chondrocytes were stimulated using IL-1β [17]. The cells were incubated for 24 h under the following conditions: Control (untreated); RAP (100 nM rapamycin, Sigma Aldrich, USA); BAF (20 nM bafilomycin, Sigma Aldrich, USA); IL-1β (10 ng/mL IL-1β); EEP (EEP 2.5 μg/mL); IL-1β and EEP (10 ng/mL IL-1β and 2.5 μg/mL EEP); and, vehicle (2% ethanol). For the induction of OA-like biological changes, the chondrocytes were stimulated using IL-1β [17]. The cells were incubated for 24 h under the following conditions: Control (untreated); RAP (100 nM rapamycin, Sigma Aldrich, USA); BAF (20 nM bafilomycin, Sigma Aldrich, USA); IL-1β (10 ng/mL IL-1β); EEP (EEP 2.5 μg/mL); IL-1β and EEP (10 ng/mL IL-1β and 2.5 μg/mL EEP); and, vehicle (2% ethanol). 4.5. Western Blot Analysis The chondrocytes were isolated from confluent monolayer cultures using RIPA buffer supplemented with 1mg/mL Halt™ Protease and Phosphatase Inhibitor Cocktail (Thermo Fisher, Waltham, MA, USA) at 4 °C for 30 min. The samples were then centrifuged at 15,000 rpm for 30 min., and the supernatants were harvested to measure the protein concentration. Proteins were quantified while using the Bradford method with BCA detection kit and adjusted to equal concentrations (45 µg) across different samples. Equal amounts of protein were heated at 95 °C for 5 min. and separated using 4–20% Mini protean TGX Precast gel (Biorad, Hercules, CA, USA). Following electrophoresis, the proteins were transferred onto a polyvinylidene fluoride membrane (PVDF, MilliporeSigma, Burlington, MA, USA). The membranes were blocked with 5% BSA (TBST) at room temperature for 1h and then incubated overnight at 4 °C with primary antibodies of autophagy proteins: ATG5 (ab108327, Abcam, Cambridge, UK), AKT1 (#9272, Cell Signaling, Danvers, MA, USA), LC3 (ab128025, Abcam, Cambridge, UK), and cartilage markers COL2A1 (ab34712, Abcam, Cambridge, UK), MMP13 (ab84594, Abcam, Cambridge, UK). 
4.6. Reverse Transcription-Quantitative Polymerase Chain Reaction (RT-qPCR)
Total RNA was isolated using the mirVana™ miRNA Isolation Kit (AM1560, Thermo Fisher Scientific, Waltham, MA, USA), according to the manufacturer's instructions. First-strand cDNA was synthesized using the SuperScript VILO cDNA Synthesis Kit, according to the manufacturer's instructions, and quantitative PCR was performed on a 7500 Real-Time PCR System with SYBR Green Master Mix (4309155, Thermo Fisher Scientific, Waltham, MA, USA). The forward and reverse primers used are shown in Table 2. β-actin expression was used as an internal control. Each experiment was repeated three times in technical triplicate. The relative expression of target genes was calculated using the 2^−ΔΔCq method.
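The 2^−ΔΔCq calculation referenced above first normalizes the target gene's Cq to the internal control (β-actin) within each condition, then compares treated versus control. A minimal worked example, with hypothetical Cq values:

```python
# Minimal sketch of the 2^-ddCq relative-quantification calculation used for
# the RT-qPCR data. The Cq values below are hypothetical, for illustration only.
def relative_expression(cq_target_treated: float, cq_ref_treated: float,
                        cq_target_control: float, cq_ref_control: float) -> float:
    """Fold change of the target gene vs the control condition, normalized to
    a reference gene (here beta-actin), via the 2^-ddCq method."""
    dcq_treated = cq_target_treated - cq_ref_treated    # dCq in treated cells
    dcq_control = cq_target_control - cq_ref_control    # dCq in control cells
    ddcq = dcq_treated - dcq_control
    return 2 ** (-ddcq)

# Example: target Cq drops from 26.0 to 24.5 while beta-actin stays at 18.0,
# giving 2^-(6.5 - 8.0) = 2^1.5, i.e. about a 2.83-fold up-regulation.
print(f"{relative_expression(24.5, 18.0, 26.0, 18.0):.2f}")
```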
4.7. Measurement of NO Release
NO release was measured in the supernatant of rabbit chondrocyte cultures using an NO chemiluminescence analyzer (model NOA, Sievers Instruments, Boulder, CO, USA). The evaluated conditions were: control; RAP; BAF; IL-1β; EEP; IL-1β plus EEP; and vehicle. All conditions were cultured for 24 h. The release of gaseous compounds was monitored for at least 8 h, at 15 and 30 min and at 1, 2, 4, 6, and 8 h. After 15 min of incubation, aliquots (0.5 mL) of the gaseous material accumulated in the headspace were injected into the detector chamber using a Hamilton Gastight syringe [94]. Each experiment was repeated three times.
4.8. Statistical Analysis
All experiments were repeated at least three times. The results are expressed as mean ± SD, and the data were analyzed by one-way ANOVA followed by Dunnett, Bonferroni, or Tukey multiple-comparison tests, using SigmaPlot (analysis performed in GraphPad Prism version 5), to determine significant differences. p < 0.05 was considered statistically significant.
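As a hedged sketch of the statistical workflow described in Section 4.8 (the paper used GraphPad Prism; the viability values below are hypothetical), a one-way ANOVA followed by a Tukey post hoc test can be reproduced in Python with SciPy and statsmodels:

```python
# Illustrative one-way ANOVA + Tukey HSD on invented viability data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control  = np.array([98.1, 97.5, 99.0])   # % viable cells, triplicate
il1b     = np.array([71.2, 68.9, 70.4])
il1b_eep = np.array([85.3, 87.1, 84.0])

f_stat, p_value = f_oneway(control, il1b, il1b_eep)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Post hoc pairwise comparisons, alpha = 0.05 as in the paper.
    values = np.concatenate([control, il1b, il1b_eep])
    groups = ["control"] * 3 + ["IL-1b"] * 3 + ["IL-1b+EEP"] * 3
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```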
Background: Osteoarthritis (OA) is a progressive and multifactorial disease that is associated with aging. A number of changes occur in aged cartilage, such as increased oxidative stress, decreased markers of healthy cartilage, and alterations in the autophagy pathway. Propolis extracts contain a mixture of polyphenols and it has been proved that they have high antioxidant capacity and could regulate the autophagic pathway. Our objective was to evaluate the effect of ethanolic extract of propolis (EEP) on chondrocytes that were stimulated with IL-1β. Methods: Rabbit chondrocytes were isolated and stimulated with IL-1β and treated with EEP. We evaluated cell viability, nitric oxide production, healthy cartilage, and OA markers, and the expression of three proteins associated with the autophagy pathway LC3, ATG5, and AKT1. Results: The EEP treatment reduces the expression of LC3, ATG5, and AKT1, reduces the production of nitric oxide, increases the expression of healthy markers, and reduces OA markers. Conclusions: These results suggest that treatment with EEP in chondrocytes that were stimulated with IL-1β has beneficial effects, such as a decrease in the expression of proteins associated with autophagy, MMP13, and production of nitric oxide, and also increased collagen II.
null
null
11,499
232
[ 88, 112, 209, 245, 196, 96, 3091, 292, 247, 88, 105, 431, 134, 134, 69 ]
18
[ "eep", "il", "il 1β", "1β", "expression", "autophagy", "figure", "chondrocytes", "treatment", "significant" ]
[ "produced chondrocytes oa", "expression oa chondrocytes", "pathway chondrocytes oa", "oa chondrocytes treated", "chondrocytes nitric oxide" ]
null
null
null
[CONTENT] propolis | autophagy | chondrocytes [SUMMARY]
null
[CONTENT] propolis | autophagy | chondrocytes [SUMMARY]
null
[CONTENT] propolis | autophagy | chondrocytes [SUMMARY]
null
[CONTENT] Animals | Antioxidants | Autophagy | Autophagy-Related Proteins | Cells, Cultured | Chondrocytes | Interleukin-1beta | Matrix Metalloproteinase 13 | Nitric Oxide | Osteoarthritis | Propolis | Rabbits [SUMMARY]
null
[CONTENT] Animals | Antioxidants | Autophagy | Autophagy-Related Proteins | Cells, Cultured | Chondrocytes | Interleukin-1beta | Matrix Metalloproteinase 13 | Nitric Oxide | Osteoarthritis | Propolis | Rabbits [SUMMARY]
null
[CONTENT] Animals | Antioxidants | Autophagy | Autophagy-Related Proteins | Cells, Cultured | Chondrocytes | Interleukin-1beta | Matrix Metalloproteinase 13 | Nitric Oxide | Osteoarthritis | Propolis | Rabbits [SUMMARY]
null
[CONTENT] produced chondrocytes oa | expression oa chondrocytes | pathway chondrocytes oa | oa chondrocytes treated | chondrocytes nitric oxide [SUMMARY]
null
[CONTENT] produced chondrocytes oa | expression oa chondrocytes | pathway chondrocytes oa | oa chondrocytes treated | chondrocytes nitric oxide [SUMMARY]
null
[CONTENT] produced chondrocytes oa | expression oa chondrocytes | pathway chondrocytes oa | oa chondrocytes treated | chondrocytes nitric oxide [SUMMARY]
null
[CONTENT] eep | il | il 1β | 1β | expression | autophagy | figure | chondrocytes | treatment | significant [SUMMARY]
null
[CONTENT] eep | il | il 1β | 1β | expression | autophagy | figure | chondrocytes | treatment | significant [SUMMARY]
null
[CONTENT] eep | il | il 1β | 1β | expression | autophagy | figure | chondrocytes | treatment | significant [SUMMARY]
null
[CONTENT] autophagy | rnos | oa | aging | apoptosis | production | cartilage | cellular | pathway | related [SUMMARY]
null
[CONTENT] eep | 1β | il 1β | condition | il | figure | treatment | significant | compared | compared il 1β [SUMMARY]
null
[CONTENT] eep | il 1β | 1β | il | figure | condition | autophagy | expression | significant | treatment [SUMMARY]
null
[CONTENT] OA ||| ||| Propolis ||| EEP | IL-1β [SUMMARY]
null
[CONTENT] EEP | ATG5 | AKT1 | OA [SUMMARY]
null
[CONTENT] OA ||| ||| Propolis ||| EEP | IL-1β ||| IL-1β | EEP ||| OA | three | ATG5 | AKT1 ||| ||| EEP | ATG5 | AKT1 | OA ||| EEP | IL-1β [SUMMARY]
null
COVID-19 Outbreaks in Nursing Homes Despite Full Vaccination with BNT162b2 of a Majority of Residents.
35313315
It is not known if widespread vaccination can prevent the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in subpopulations at high risk, like older adults in nursing homes (NH).
BACKGROUND
We identified, using national professional networks, NH that suffered COVID-19 outbreaks despite having completed a vaccination campaign, and asked them to send data, using predefined collecting forms, on the number of residents exposed, their vaccination status and the number, characteristics, and evolution of patients infected. The main outcome was to identify outbreaks occurring in NH with high vaccine coverage. Secondary outcomes were residents' risk of being infected, developing severe disease, or dying from COVID-19 during the outbreak. SARS-CoV-2 infection was defined by a positive reverse transcriptase-polymerase chain reaction. All residents were serially tested whenever cases appeared in a facility. Unadjusted secondary attack rates, relative risks, and vaccine effectiveness during the outbreak were estimated.
METHODS
We identified 31 NH suffering an outbreak during March-April 2021, of which 27 sent data, cumulating 1,768 residents (mean age 88.4, 73.4% women, 78.2% fully vaccinated). BNT162b2 was the vaccine employed in all NH. There were 365 cases of SARS-CoV-2 infection. Median secondary attack rates were 20.0% (IQR 4.4%-50.0%) among unvaccinated residents and 16.7% (IQR 9.5%-29.2%) among fully vaccinated ones. Severe cases developed in 42 of 80 (52.5%) unvaccinated patients, compared with 56 of 248 (22.6%) fully vaccinated ones (relative risks [RR] 4.17, 95% CI: 2.43-7.17). Twenty of the unvaccinated patients (25.0%) and 16 of fully vaccinated ones (6.5%) died from COVID-19 (RR 5.11, 95% CI: 2.49-10.5). Estimated vaccine effectiveness during the outbreak was 34.5% (95% CI: 18.5-47.3) for preventing SARS-CoV-2 infection, 71.8% (58.8-80.7) for preventing severe disease, and 83.1% (67.8-91.1) for preventing death.
RESULTS
Outbreaks of COVID-19, including severe cases and deaths, can still occur in NH despite full vaccination of a majority of residents. Vaccine remains highly effective, however, for preventing severe disease and death. Prevention and control measures for SARS-CoV-2 should be maintained in NH at periods of high incidence in the community.
CONCLUSIONS
[ "Humans", "Female", "Aged", "Male", "COVID-19", "SARS-CoV-2", "BNT162 Vaccine", "Vaccination", "Disease Outbreaks", "Nursing Homes" ]
9058997
Introduction
Vaccines are a key component of any strategy for achieving long-term control of the current severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic. In addition to preventing coronavirus disease 2019 (COVID-19) morbidity and mortality, they are also expected to limit further spread of the disease [1, 2]. There is great hope that extensive vaccination against SARS-CoV-2 will allow a progressive return to normal life. Several vaccines have proven in large randomized trials to be highly effective for preventing COVID-19 symptomatic or severe disease and death [3, 4, 5, 6, 7]. However, most people enrolled in those trials were middle-aged adults, and few data are available for some subpopulations at high risk. Frail older patients, in particular, like those living in nursing homes (NH), are one of the groups most at risk of acquiring SARS-CoV-2 infection, suffering severe disease, and dying [8, 9, 10]. However, they have been poorly represented in clinical trials. A few studies have analyzed antibody and/or cellular immune response following vaccination in limited numbers of older people (40–200 older people included) reaching inconsistent results [11, 12, 13, 14]. A recent observational study using national surveillance data after a nationwide vaccination campaign in Israel found that BNT162b2 vaccine maintained high effectiveness in the general population of older adults, including adults aged 85 or older [15]. Despite limited effectiveness data, older patients in NH have understandably been a priority target for vaccination in most countries. In France, residents of NH were the first target of the national vaccination campaign [16]. In January and February 2021, BNT162b2 vaccine was offered to all residents and to their health care professionals aged 50 years or older, or with risk factors for severe COVID-19 disease. Most residents and many health care professionals accepted. In March and April 2021, France suffered a third wave of the COVID-19 pandemic, in which the B.1.1.7 (Alpha) variant, more contagious, was dominant (82.6% of all strains analyzed) [17]. We were then concerned to learn that COVID-19 outbreaks had developed in some NH despite the previous vaccination of a large proportion of their residents, and that they included severe cases and even deaths. We described one of those outbreaks in a separate report [18]. We also wanted to know if such outbreaks were just a rare, isolated event, or there were more. Therefore, we looked for other cases of COVID-19 outbreaks in NH with high vaccination rates among its residents.
Methods
The objective of the study was to know if COVID-19 outbreaks could occur in NH during a pandemic wave once a majority of its residents were fully vaccinated and to have a rough estimation of the frequency of such events − that is, whether they were very rare or more frequent. If such outbreaks did not appear to be rare, we secondarily aimed to estimate secondary attack rates and vaccine effectiveness in those particular circumstances (older patients in NH, during an outbreak, in a period of high exposure to the virus), i.e., a worst-case scenario. We contacted colleagues working in geriatric facilities in France, from April 15 to May 10, 2021, by e-mail and phone, using personal contacts and several regional and national professional networks (Société Française de Gériatrie et Gérontologie, Gérontopôle d'Île-de-France, Collégiale de gériatrie de l'APHP, Conseil National des Professionnels de Gériatrie). We asked them to report to us any recent COVID-19 outbreak (three or more grouped cases over a 10-day period) arising in an NH that included patients previously vaccinated. One investigator (A.R., J.B.) called the medical coordinator of any NH that answered the survey, or that we had identified as possibly having had an outbreak, to confirm the outbreak and the proportion of vaccinated residents, and to invite him or her to participate in the study. Centers that accepted to participate were asked to send data, using predefined collecting forms, on the number of persons exposed, their vaccination status, the vaccine administered, the time elapsed from vaccination to the onset of the outbreak, and the number, basic demographic data, and clinical course of infected patients, including deaths and the cause of death, as well as any data available on SARS-CoV-2 variants or sequencing, if these had been performed. COVID-19 infection was defined on the basis of a positive reverse transcriptase-polymerase chain reaction test. French national procedures mandate that, when a case is detected in an NH, systematic screening by reverse transcriptase-polymerase chain reaction on nasopharyngeal swabs of all residents and health care professionals be conducted [19]. The screening is repeated 7 days later on those residents initially negative. For the purposes of this study, we defined severe COVID-19 disease as any patient requiring oxygen or intravenous fluids for more than 48 h, requiring hospitalization, or dying from COVID-19 (defined as any patient who died from COVID-19 disease or from a decompensation of previous pathologies precipitated by the COVID-19 infection). Persons who had received their second vaccine dose ≥7 days before the outbreak were considered fully vaccinated, consistent with the effect observed in BNT162b2 mRNA vaccine clinical trials [3]. Persons who had received only one dose, or a second dose less than 7 days before the onset of the outbreak, were considered partially vaccinated. We estimated secondary attack rates following the index case for each outbreak, in fully vaccinated, partially vaccinated, and unvaccinated patients. Relative risks were calculated comparing unvaccinated residents versus fully vaccinated residents (i.e., fully vaccinated residents were set as the reference group). 
Vaccine effectiveness in fully vaccinated residents during the outbreak ([risk among unvaccinated group − risk among vaccinated group]/risk among unvaccinated group) was calculated for the following outcomes: SARS-CoV-2 infection (including asymptomatic cases), severe COVID-19 disease, and death from COVID-19. The correlation between vaccination rates among residents and secondary attack rates was explored using univariate regression. Although we had initially planned to, the data we ultimately retrieved did not allow us to conduct a multivariate analysis adjusting for important covariates such as vaccine coverage in staff, staff numbers, clustering of observations, and individual characteristics of vaccinated and unvaccinated residents. Thus, only crude estimates were calculated. We followed the STROBE statement for improving the reporting of observational studies [20]. The completed STROBE checklist is available as online supplementary material (for all online suppl. material, see www.karger.com/doi/10.1159/000523701).
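As a hedged sketch of these unadjusted estimates (the vaccine-effectiveness formula quoted above, plus a standard log-transformed 95% CI for the relative risk), the following snippet uses placeholder counts, not the study's actual denominators:

```python
# Unadjusted attack rate, relative risk (unvaccinated vs fully vaccinated,
# log-method 95% CI), and vaccine effectiveness VE = 1 - risk_vacc/risk_unvacc.
# All counts below are placeholders for illustration.
import math

def risk(cases: int, exposed: int) -> float:
    return cases / exposed

def relative_risk_ci(c1: int, n1: int, c0: int, n0: int, z: float = 1.96):
    """RR of group 1 vs group 0 with a log-transformed 95% CI."""
    rr = risk(c1, n1) / risk(c0, n0)
    se = math.sqrt(1 / c1 - 1 / n1 + 1 / c0 - 1 / n0)
    lo, hi = math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical outbreak: 60 infections among 250 unvaccinated residents,
# 150 infections among 1,400 fully vaccinated residents.
rr, lo, hi = relative_risk_ci(60, 250, 150, 1400)   # unvaccinated vs vaccinated
ve = 1 - risk(150, 1400) / risk(60, 250)            # vaccine effectiveness
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}); VE = {ve:.1%}")
```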
Results
We identified 31 NH that experienced a COVID-19 outbreak occurring from February 1st to April 30th, 2021. Four declined to participate or did not send data. We report the analysis of 27 NH distributed across the national territory (Fig. 1). Data summarized by NH are shown in Table 1. Of the 1,768 residents present in the 27 facilities, 78.2% were fully vaccinated (95% CI: 71.9–83.9), 5.7% partially vaccinated (95% CI: 3.38–8.02), and 16.5% were not vaccinated (95% CI: 11.9–21.1). BNT162b2, an mRNA vaccine, was the vaccine administered in all NH. The vaccine had been administered in most cases in January and February 2021, during a national vaccination campaign targeting NH residents. There were a total of 365 cases of SARS-CoV-2 infection in the facilities. Attack rates varied widely between facilities. Among fully vaccinated residents, the secondary attack rate ranged from 5.0% to 60.7% (median 16.7%, IQR 9.5%–29.2%). Secondary attack rates among partially vaccinated and unvaccinated residents ranged from 0% to 100% in both groups. Median values were 17.4% (IQR 0.0%–66.7%) for partly vaccinated residents and 20.0% (IQR 4.4%–50.0%) for unvaccinated residents. There was no correlation between the proportion of residents fully vaccinated in each NH and its global secondary attack rate (r = −0.12, p = 0.72). Data on SARS-CoV-2 infection severity and related deaths are displayed in Table 2. In the pooled population of patients who suffered SARS-CoV-2 infection despite previous full vaccination, 143 (57.7%) were asymptomatic at all times, 49 (19.7%) had mild symptoms, and 56 (22.6%) developed severe disease. Among unvaccinated patients, 19 (23.7%) were asymptomatic, 19 (23.7%) had mild symptoms, and 42 (52.5%) developed severe disease. The death rate was 6.5% among fully vaccinated patients and 25.0% among unvaccinated ones. The relative risks of unvaccinated persons, compared to vaccinated, were 1.52 (95% CI: 1.23–1.90) for being infected with SARS-CoV-2, 2.75 (95% CI: 1.90–3.03) for developing symptomatic disease, 4.17 (95% CI: 2.43–7.17) for developing severe disease, and 5.11 (95% CI: 2.49–10.5) for dying from COVID-19. BNT162b2 vaccine estimated effectiveness during the outbreak was 34.5% (95% CI: 18.5–47.3) for preventing SARS-CoV-2 infection, 63.6% (95% CI: 51.4–72.8) for preventing symptomatic disease, 71.8% (95% CI: 58.8–80.7) for preventing severe disease, and 83.1% (95% CI: 67.8–91.1) for preventing death from COVID-19. Finally, we were able to retrieve only very limited data regarding SARS-CoV-2 variants and staff vaccination coverage. A search for variants was negative in two of the facilities; variant B.1.1.7 was found in one facility, and variant B.1.351 was found to be responsible for the outbreak in another facility. Staff vaccination coverage was obtained from three NH only. The proportions of health caregivers fully vaccinated at the beginning of the outbreak were 1 of 32 (3.1%), 33 of 100 (33.0%), and 35 of 50 (70.0%).
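A back-of-the-envelope check of the reported vaccine-effectiveness point estimate against death is sketched below. The group denominators are inferred from the reported percentages (78.2% and 16.5% of 1,768 residents) and are therefore approximate, not the study's exact figures:

```python
# Approximate reproduction of the reported VE against death. Denominators are
# inferred from the published percentages and rounded, so this is indicative only.
residents = 1768
n_vacc   = round(0.782 * residents)  # ~1,383 fully vaccinated residents
n_unvacc = round(0.165 * residents)  # ~292 unvaccinated residents

deaths_vacc, deaths_unvacc = 16, 20  # COVID-19 deaths reported per group

risk_vacc   = deaths_vacc / n_vacc
risk_unvacc = deaths_unvacc / n_unvacc
ve_death = 1 - risk_vacc / risk_unvacc
print(f"VE against death ~ {ve_death:.1%}  (paper reports 83.1%)")
```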
null
null
[ "Statement of Ethics", "Funding Sources", "Author Contributions" ]
[ "The study was conducted in accordance with the Declaration of Helsinki. This type of study is exempt from the Ethic Committee review or oral or written participants' consent according to French law (Décret no. 2016-1537, November 17, 2016). All data collected was anonymous and was managed according to the requirements of the French National Commission Informatique et Libertés (CNIL) for the use of data in noninterventional health studies (declaration number 2216052v0).", "This work was supported by the Assistance Publique − Hôpitaux de Paris (APHP) and Sorbonne Université. The funding sources had no role in data collection, analysis, or interpretation; nor in the design of the study or its conduct. We were not paid to write this article by a pharmaceutical company or other agency.", "Carmelo Lafuente-Lafuente: conceptualization, methodology, formal analysis, software, supervision, validation, and writing − original draft. Antonio Rainone: investigation, data curation, formal analysis, visualization, validation, and writing − original draft. Olivier Guérin: investigation, data curation, and project administration. Olivier Drunat: investigation, resources, and supervision. Claude Jeandel: investigation, resources, and supervision. Olivier Hanon: project administration, resources, validation, writing − review and editing. Joël Belmin: conceptualization, formal analysis, funding acquisition, project administration, supervision, validation, visualization, and writing − review and editing." ]
[ null, null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Statement of Ethics", "Conflict of Interest Statement", "Funding Sources", "Author Contributions", "Data Availability Statement" ]
[ "Vaccines are a key component of any strategy for achieving long-term control of the current severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic. In addition to preventing coronavirus disease 2019 (COVID-19) morbidity and mortality, they are also expected to limit further spread of the disease [1, 2]. There is great hope that extensive vaccination against SARS-CoV-2 will allow a progressive return to normal life.\nSeveral vaccines have proven in large randomized trials to be highly effective for preventing COVID-19 symptomatic or severe disease and death [3, 4, 5, 6, 7]. However, most people enrolled in those trials were middle-aged adults, and few data are available for some subpopulations at high risk. Frail older patients, in particular, like those living in nursing homes (NH), are one of the groups most at risk of acquiring SARS-CoV-2 infection, suffering severe disease, and dying [8, 9, 10]. However, they have been poorly represented in clinical trials. A few studies have analyzed antibody and/or cellular immune response following vaccination in limited numbers of older people (40–200 older people included) reaching inconsistent results [11, 12, 13, 14]. A recent observational study using national surveillance data after a nationwide vaccination campaign in Israel found that BNT162b2 vaccine maintained high effectiveness in the general population of older adults, including adults aged 85 or older [15].\nDespite limited effectiveness data, older patients in NH have understandably been a priority target for vaccination in most countries. In France, residents of NH were the first target of the national vaccination campaign [16]. In January and February 2021, BNT162b2 vaccine was offered to all residents and to their health care professionals aged 50 years or older, or with risk factors for severe COVID-19 disease. Most residents and many health care professionals accepted.\nIn March and April 2021, France suffered a third wave of the COVID-19 pandemic, in which the B.1.1.7 (Alpha) variant, more contagious, was dominant (82.6% of all strains analyzed) [17]. We were then concerned to learn that COVID-19 outbreaks had developed in some NH despite the previous vaccination of a large proportion of their residents, and that they included severe cases and even deaths. We described one of those outbreaks in a separate report [18]. We also wanted to know if such outbreaks were just a rare, isolated event, or there were more. Therefore, we looked for other cases of COVID-19 outbreaks in NH with high vaccination rates among its residents.", "The objective of the study was to know if COVID-19 outbreaks could occur in NH during a pandemic wave once a majority of its residents were fully vaccinated and to have a rough estimation of the frequency of such events − this is, if they were very rare or something more frequent. If such outbreaks did not appear to be rare, we secondarily aimed to estimate secondary attack rates and vaccine effectiveness in those particular circumstances (older patients in NH, during an outbreak, in a period of high exposition to the virus), i.e., a worst-case scenario.\nWe contacted colleagues working in geriatric facilities in France, from April 15 to May 10, 2021, by e-mail and phone, using personal contacts and several regional, and national professional networks (Société Française de Gériatrie et Gérontologie, Gérontopôle d'Île-de-France, Collégiale de gériatrie de l'APHP, Conseil National des Professionnels de Gériatrie). 
We asked them to report to us any recent COVID-19 outbreak (three or more grouped cases over a 10-day period) arising in NH that included patients previously vaccinated.\nOne investigator (A.R., J.B.) called the medical coordinator of any NH that answered the survey, or that we had identified as possibly having had an outbreak, to confirm the outbreak and proportion of vaccinated residents, and request him or her to participate in the study. Centers that accepted to participate were asked to send data, using predefined collecting forms, on the number of persons exposed, their vaccination status, the vaccine administered, time elapsed from vaccination to the onset of the outbreak and the number, basic demographic data, and evolution of patients infected, including deaths and the cause of death, as well as any data available on SARS-CoV-2 variants or sequencing, if those have been performed.\nCOVID-19 infection was defined on the basis of a positive reverse transcriptase-polymerase chain reaction test. French national procedures mandate that, when a case is detected in an NH, a systematic screening by reverse transcriptase-polymerase chain reaction on nasopharyngeal swabs of all residents and health care professionals were conducted [19]. The screening is repeated 7 days later on those residents initially negative. For the purposes of this study, we defined severe COVID-19 disease as any patient requiring oxygen or intravenous fluids for more than 48 h, requiring hospitalization, or dying from COVID-19 (defined as any patient who died from COVID-19 disease or from a decompensation of previous pathologies precipitated by the COVID-19 infection).\nPersons who had received their second vaccine dose ≥7 days before the outbreak were considered fully vaccinated, consistent with the effect observed in BNT162b2 mRNA vaccine clinical trials [3]. Persons who had received only one dose or a second dose less than 7 days before the onset of the outbreak were considered partially vaccinated.\nWe estimated secondary attack rates following the index case for each outbreak, in fully vaccinated, partially vaccinated, and unvaccinated patients. Relative risks were calculated comparing unvaccinated residents versus fully vaccinated residents (i.e., fully vaccinated residents were set as the reference group). Vaccine effectiveness in fully vaccinated residents during the outbreak ([risk among unvaccinated group–risk among vaccinated group]/risk among unvaccinated group) was calculated for the following outcomes: SARS-CoV-2 infection (including asymptomatic cases), severe COVID-19 disease, and death from COVID-19. Correlation between vaccination rates among residents and secondary attack rates was explored using univariate regression. While we initially planned for, the data we finally retrieved did not allow us to conduct a multivariate analysis in order to adjust for important covariates like vaccine coverage in staff, staff numbers, clustering of observations, and individual characteristics of vaccinated and unvaccinated residents. Thus, only crude estimates were calculated.\nWe followed the STROBE statement for improving the reporting of observational studies [20]. The completed STROBE checklist is available as online supplementary material (for all online suppl. material, see www.karger.com/doi/10.1159/000523701).", "We identified 31 NH that experienced a COVID-19 outbreak occurring from February 1st to April 30th, 2021. Four declined to participate or did not send data. 
We report the analysis of 27 NH distributed all along with the national territory (Fig. 1).\nData summarized by NH is shown in Table 1. Of the 1,768 residents present in the 27 facilities, 78.2% were fully vaccinated (95% CI: 71.9–83.9), 5.7% partially vaccinated (95% CI: 3.38–8.02), and 16.5% were not vaccinated (95% CI: 11.9–21.1). BNT162b2, an mRNA vaccine, was the vaccine administered in all NH. Vaccine has been administrated in most cases in January and February 2021, during a national vaccination campaign targeting NH residents.\nThere were a total of 365 cases of SARS-CoV-2 infection in the facilities. Attack rates varied largely between facilities. Among fully vaccinated residents, the secondary attack rate ranged from 5.0% to 60.7% (median 16.7%, IQR 9.5%–29.2%). Secondary attack rates among partially vaccinated and unvaccinated residents ranged from 0% to 100% in both groups. Median values were 17.4% (IQR 0.0%–66.7%) for partly vaccinated residents and 20.0% (IQR 4.4%–50.0%) for unvaccinated residents. There was no correlation between the proportion of residents fully vaccinated in each NH and its global secondary attack rate (r = −0.12, p = 0.72).\nData on SARS-CoV-2 infection severity and related deaths are displayed in Table 2. In the pooled population of patients who suffered SARS-CoV-2 infection despite previous full vaccination, 143 (57.7%) were asymptomatic at all times, 49 (19.7%) had mild symptoms, and 56 (22.6%) developed severe disease. Among unvaccinated patients, 19 (23.7%) were asymptomatic, 19 (23.7%) had mild symptoms, and 42 (52.5%) developed severe disease. The death rate was 6.5% among fully vaccinated patients and 25.0% among unvaccinated ones. The relative risks of unvaccinated persons, compared to vaccinated, were 1.52 (95% CI: 1.23–1.90) for being infected with SARS-CoV-2, 2.75 (95% CI: 1.90–3.03) for developing symptomatic disease, 4.17 (95% CI: 2.43–7.17) for developing severe disease, and 5.11 (95% CI: 2.49–10.5) for dying from COVID-19. BNT162b2 vaccine estimated effectiveness during the outbreak was 34.5% (95% CI: 18.5–47.3) for preventing SARS-CoV-2 infection, 63.6% (95% CI: 51.4–72.8) for preventing symptomatic disease, 71.8% (95% CI: 58.8–80.7) for preventing severe disease, and 83.1% (95% CI: 67.8–91.1) for preventing death from COVID-19.\nFinally, we were able to retrieve only very limited data regarding SARS-CoV-2 variants and staff vaccination coverage. A search for variants was negative in two of the facilities, variant B.1.1.7 was found in one facility and variant B.1.351 was found to be the responsible for the outbreak in another facility. Staff vaccination coverage was obtained from three NH only. Health caregivers fully vaccinated at the beginning of the outbreak were 1 of 32 (3.1%), 33 of 100 (33.0%), and 35 of 50 (70.0%).", "Our study shows that outbreaks of COVID-19 can definitely occur in fully vaccinated older patients residing in NH, leading to severe cases, hospitalizations, and even deaths. We observed a high secondary attack rate in many facilities, despite high vaccine coverage among residents, suggesting that vaccination did not block SARS-CoV-2 transmission in this population. On the other hand, the risk of severe disease and death was considerably lower in vaccinated residents than in nonvaccinated.\nA limitation of our work is that we did not systematically survey all 7,200 NH existing in France. 
Probably there were other NH that suffered outbreaks but that our survey did not reach, or that did not answer, so it is not possible to obtain a reliable estimate of the incidence of this type of outbreak. In any case, we can say that they are not rare, isolated events.\nIt is also important to note that attack rates in our study are secondary attack rates during an outbreak and are not an estimation of the general incidence of cases among vaccinated individuals in NH. Similarly, it is important to understand that the vaccine effectiveness we estimate here is a measure during the outbreak; that is, in a situation of high exposure to the virus and in a subgroup of older people, residents of NH, who typically are very frail. This is a worst-case scenario and not an estimation of the actual effectiveness of BNT162b2 vaccine in the general population of older persons residing in NH.\nFew COVID-19 outbreaks have been reported to date in groups of persons well vaccinated against SARS-CoV-2. In separate reports, we documented COVID-19 in vaccinated older persons during an outbreak in a hospital rehabilitation unit and in an NH, both in France [18, 21]. Cavanaugh et al. [22] recently described a COVID-19 outbreak in an NH in Kentucky, USA, involving 26 residents, of whom 18 had been fully vaccinated. By contrast, Teran et al. [23] followed COVID-19 cases for several months in 75 skilled nursing facilities in Chicago after a vaccination campaign. They found only 22 residents developing SARS-CoV-2 infection after full vaccination and no case of secondary transmission inside facilities [23].\nA lower effectiveness of COVID-19 vaccines among older and frail patients, as compared to the efficacy documented in randomized clinical trials, is not completely unexpected. Impaired immunogenicity of vaccines, attributed to immunosenescence, a broad term for declining immunity with age, involving both humoral and cellular immune responses, has been well described in older patients with frailty or living in NH [24, 25]. Several studies have documented a reduced antibody response to influenza vaccine [26, 27] and other virus vaccines [28, 29] in NH residents. Two published studies [14, 30] and one still unpublished [31] (at the moment of writing this report) have found a reduced antibody response to BNT162b2 vaccine in older long-term care residents. Nonetheless, even if a decreased effectiveness of SARS-CoV-2 vaccines could be expected in this population, the occurrence of outbreaks in geriatric settings among vaccinated residents is highly concerning. It is also very different from data obtained in the general population of older persons, as in a recent national observational study in Israel that found a preserved high effectiveness (94.1%–97.4%) of BNT162b2 vaccine in persons 85 years or older [15].\nIt is important to note that the SARS-CoV-2 vaccine was not completely ineffective in our study. The risk of developing severe disease or dying from COVID-19 was greatly reduced in vaccinated residents. Mortality, in particular, was largely reduced among vaccinated persons, compared with nonvaccinated residents in the same facilities. Overall, those findings suggest that the main effect of vaccination in these patients might be reducing the severity of the infection rather than avoiding the infection itself or blocking its transmission.\nThere are other reasons that might explain our observations. One is a lower effectiveness of BNT162b2 vaccine against some SARS-CoV-2 variants. 
Data on variants in our study were very limited but did identify two variants involved in outbreaks. Variant B.1.1.7 (Alpha), which was confirmed in one facility, was highly prevalent (82.6% of all strains) in France during the period studied [17]. It has been reported that BNT162b2 vaccine induces a reduced neutralizing antibody response against this variant [32]. Variant B.1.351 (Beta) was found to be responsible for the outbreak in another center. The B.1.351 variant has been found to escape neutralization by several monoclonal antibodies and by plasma from convalescent patients [33, 34]. Another factor that may facilitate outbreaks in NH is low vaccination coverage among staff members. We retrieved staff vaccination rates from only a few facilities, but coverage was clearly lower than among residents. Finally, all the outbreaks described in this work happened during a pandemic wave, and it is known that the incidence of COVID-19 among NH residents and staff members usually follows closely the incidence of COVID-19 in surrounding communities [35].\nOur study thus has several important limitations. We did not cover all NH in our country, and probably there were other outbreaks our survey did not find, so we cannot estimate their actual incidence. Data retrieved on SARS-CoV-2 variants, staff vaccination coverage, and residents' individual characteristics (especially comorbidities, immune suppression, and data on previous SARS-CoV-2 infection or antibody levels) were limited, hence their influence on the risk of developing an outbreak could not be analyzed. It is also important to note, as already discussed, that the estimations of vaccine effectiveness made here are circumscribed to a particular situation − high virus exposure during an outbreak. Moreover, older people living in NH are usually frailer and accumulate more comorbidities than other groups of older persons, so the estimations made for this population should not be extrapolated to the general population of older persons.\nIn spite of these limitations, our findings have practical implications. First, they underscore that vaccination alone, even when extensive, cannot guarantee an absence of outbreaks in NH. Prevention and control measures against SARS-CoV-2 in NH and other geriatric facilities should be maintained for the moment, at least at times of high COVID-19 incidence in the surrounding community. Such measures would need to be adapted, though, as prolonged isolation or reduced human contact can have important psychological consequences in older people [36, 37, 38]. Secondly, our findings stress that more research is needed to improve vaccine effectiveness in this population, and probably in other populations also at risk, like immunosuppressed patients. Possible strategies might be the administration of additional doses (a strategy supported by recent studies [39] and chosen by many countries), the administration of a higher dose of antigen, or the sequential combination of different vaccines.\nIn conclusion, in this study, we found that COVID-19 outbreaks can still occur in older residents in NH despite full vaccination with BNT162b2 of a majority of them. In this setting and under similar circumstances − high COVID-19 incidence during a pandemic wave − high vaccination coverage among residents does not guarantee that SARS-CoV-2 will not spread in the facility. 
Nonetheless, BNT162b2 vaccine appears to remain highly effective for preventing severe disease, hospitalization, and death.\nPrevention and control measures for SARS-CoV-2 should still be maintained in NH at periods of high incidence in the community. More research is needed to improve the effectiveness of SARS-CoV-2 vaccines in frail older persons.", "The study was conducted in accordance with the Declaration of Helsinki. This type of study is exempt from the Ethic Committee review or oral or written participants' consent according to French law (Décret no. 2016-1537, November 17, 2016). All data collected was anonymous and was managed according to the requirements of the French National Commission Informatique et Libertés (CNIL) for the use of data in noninterventional health studies (declaration number 2216052v0).", "Dr. Hanon reports personal fees from Bayer Healthcare, Servier, Astra-Zeneca, Boston Scientific, Vifor, BMS, Pfizer, and Boehringer Ingelheim, outside the submitted work. Dr. Jeandel reports personal fees from Bayer, Boehringer Ingelheim, BMS, Pfizer, Novartis, Servier, and Vifor, outside the submitted work. Dr. Belmin reports personal fees from Novartis and Pfizer, outside the submitted work. Dr. Lafuente-Lafuente, Dr. Rainone, Dr. Guérin, and Dr. Drunat have nothing to disclose.", "This work was supported by the Assistance Publique − Hôpitaux de Paris (APHP) and Sorbonne Université. The funding sources had no role in data collection, analysis, or interpretation; nor in the design of the study or its conduct. We were not paid to write this article by a pharmaceutical company or other agency.", "Carmelo Lafuente-Lafuente: conceptualization, methodology, formal analysis, software, supervision, validation, and writing − original draft. Antonio Rainone: investigation, data curation, formal analysis, visualization, validation, and writing − original draft. Olivier Guérin: investigation, data curation, and project administration. Olivier Drunat: investigation, resources, and supervision. Claude Jeandel: investigation, resources, and supervision. Olivier Hanon: project administration, resources, validation, writing − review and editing. Joël Belmin: conceptualization, formal analysis, funding acquisition, project administration, supervision, validation, visualization, and writing − review and editing.", "The data that support the findings of this study is openly available in the Open Science Framework (DOI: 10.17605/OSF.IO/7R349)." ]
[ "intro", "methods", "results", "discussion", null, "COI-statement", null, null, "data-availability" ]
[ "Coronavirus disease 2019", "Severe acute respiratory syndrome coronavirus 2", "Vaccination", "Outbreak", "Older adults", "Nursing home" ]
Introduction: Vaccines are a key component of any strategy for achieving long-term control of the current severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic. In addition to preventing coronavirus disease 2019 (COVID-19) morbidity and mortality, they are also expected to limit further spread of the disease [1, 2]. There is great hope that extensive vaccination against SARS-CoV-2 will allow a progressive return to normal life. Several vaccines have proven in large randomized trials to be highly effective for preventing COVID-19 symptomatic or severe disease and death [3, 4, 5, 6, 7]. However, most people enrolled in those trials were middle-aged adults, and few data are available for some subpopulations at high risk. Frail older patients, in particular, like those living in nursing homes (NH), are one of the groups most at risk of acquiring SARS-CoV-2 infection, suffering severe disease, and dying [8, 9, 10]. However, they have been poorly represented in clinical trials. A few studies have analyzed antibody and/or cellular immune response following vaccination in limited numbers of older people (40–200 older people included) reaching inconsistent results [11, 12, 13, 14]. A recent observational study using national surveillance data after a nationwide vaccination campaign in Israel found that BNT162b2 vaccine maintained high effectiveness in the general population of older adults, including adults aged 85 or older [15]. Despite limited effectiveness data, older patients in NH have understandably been a priority target for vaccination in most countries. In France, residents of NH were the first target of the national vaccination campaign [16]. In January and February 2021, BNT162b2 vaccine was offered to all residents and to their health care professionals aged 50 years or older, or with risk factors for severe COVID-19 disease. Most residents and many health care professionals accepted. In March and April 2021, France suffered a third wave of the COVID-19 pandemic, in which the B.1.1.7 (Alpha) variant, more contagious, was dominant (82.6% of all strains analyzed) [17]. We were then concerned to learn that COVID-19 outbreaks had developed in some NH despite the previous vaccination of a large proportion of their residents, and that they included severe cases and even deaths. We described one of those outbreaks in a separate report [18]. We also wanted to know if such outbreaks were just a rare, isolated event, or there were more. Therefore, we looked for other cases of COVID-19 outbreaks in NH with high vaccination rates among its residents. Methods: The objective of the study was to know if COVID-19 outbreaks could occur in NH during a pandemic wave once a majority of its residents were fully vaccinated and to have a rough estimation of the frequency of such events − this is, if they were very rare or something more frequent. If such outbreaks did not appear to be rare, we secondarily aimed to estimate secondary attack rates and vaccine effectiveness in those particular circumstances (older patients in NH, during an outbreak, in a period of high exposition to the virus), i.e., a worst-case scenario. We contacted colleagues working in geriatric facilities in France, from April 15 to May 10, 2021, by e-mail and phone, using personal contacts and several regional, and national professional networks (Société Française de Gériatrie et Gérontologie, Gérontopôle d'Île-de-France, Collégiale de gériatrie de l'APHP, Conseil National des Professionnels de Gériatrie). 
We asked them to report to us any recent COVID-19 outbreak (three or more grouped cases over a 10-day period) arising in NH that included patients previously vaccinated. One investigator (A.R., J.B.) called the medical coordinator of any NH that answered the survey, or that we had identified as possibly having had an outbreak, to confirm the outbreak and proportion of vaccinated residents, and request him or her to participate in the study. Centers that accepted to participate were asked to send data, using predefined collecting forms, on the number of persons exposed, their vaccination status, the vaccine administered, time elapsed from vaccination to the onset of the outbreak and the number, basic demographic data, and evolution of patients infected, including deaths and the cause of death, as well as any data available on SARS-CoV-2 variants or sequencing, if those have been performed. COVID-19 infection was defined on the basis of a positive reverse transcriptase-polymerase chain reaction test. French national procedures mandate that, when a case is detected in an NH, a systematic screening by reverse transcriptase-polymerase chain reaction on nasopharyngeal swabs of all residents and health care professionals were conducted [19]. The screening is repeated 7 days later on those residents initially negative. For the purposes of this study, we defined severe COVID-19 disease as any patient requiring oxygen or intravenous fluids for more than 48 h, requiring hospitalization, or dying from COVID-19 (defined as any patient who died from COVID-19 disease or from a decompensation of previous pathologies precipitated by the COVID-19 infection). Persons who had received their second vaccine dose ≥7 days before the outbreak were considered fully vaccinated, consistent with the effect observed in BNT162b2 mRNA vaccine clinical trials [3]. Persons who had received only one dose or a second dose less than 7 days before the onset of the outbreak were considered partially vaccinated. We estimated secondary attack rates following the index case for each outbreak, in fully vaccinated, partially vaccinated, and unvaccinated patients. Relative risks were calculated comparing unvaccinated residents versus fully vaccinated residents (i.e., fully vaccinated residents were set as the reference group). Vaccine effectiveness in fully vaccinated residents during the outbreak ([risk among unvaccinated group–risk among vaccinated group]/risk among unvaccinated group) was calculated for the following outcomes: SARS-CoV-2 infection (including asymptomatic cases), severe COVID-19 disease, and death from COVID-19. Correlation between vaccination rates among residents and secondary attack rates was explored using univariate regression. While we initially planned for, the data we finally retrieved did not allow us to conduct a multivariate analysis in order to adjust for important covariates like vaccine coverage in staff, staff numbers, clustering of observations, and individual characteristics of vaccinated and unvaccinated residents. Thus, only crude estimates were calculated. We followed the STROBE statement for improving the reporting of observational studies [20]. The completed STROBE checklist is available as online supplementary material (for all online suppl. material, see www.karger.com/doi/10.1159/000523701). Results: We identified 31 NH that experienced a COVID-19 outbreak occurring from February 1st to April 30th, 2021. Four declined to participate or did not send data. 
We report the analysis of 27 NH distributed across the national territory (Fig. 1). Data summarized by NH are shown in Table 1. Of the 1,768 residents present in the 27 facilities, 78.2% were fully vaccinated (95% CI: 71.9–83.9), 5.7% partially vaccinated (95% CI: 3.38–8.02), and 16.5% were not vaccinated (95% CI: 11.9–21.1). BNT162b2, an mRNA vaccine, was the vaccine administered in all NH. The vaccine had been administered in most cases in January and February 2021, during a national vaccination campaign targeting NH residents. There were a total of 365 cases of SARS-CoV-2 infection in the facilities. Attack rates varied widely between facilities. Among fully vaccinated residents, the secondary attack rate ranged from 5.0% to 60.7% (median 16.7%, IQR 9.5%–29.2%). Secondary attack rates among partially vaccinated and unvaccinated residents ranged from 0% to 100% in both groups. Median values were 17.4% (IQR 0.0%–66.7%) for partially vaccinated residents and 20.0% (IQR 4.4%–50.0%) for unvaccinated residents. There was no correlation between the proportion of residents fully vaccinated in each NH and its global secondary attack rate (r = −0.12, p = 0.72). Data on SARS-CoV-2 infection severity and related deaths are displayed in Table 2. In the pooled population of patients who suffered SARS-CoV-2 infection despite previous full vaccination, 143 (57.7%) were asymptomatic at all times, 49 (19.7%) had mild symptoms, and 56 (22.6%) developed severe disease. Among unvaccinated patients, 19 (23.7%) were asymptomatic, 19 (23.7%) had mild symptoms, and 42 (52.5%) developed severe disease. The death rate was 6.5% among fully vaccinated patients and 25.0% among unvaccinated ones. The relative risks for unvaccinated persons, compared to vaccinated ones, were 1.52 (95% CI: 1.23–1.90) for being infected with SARS-CoV-2, 2.75 (95% CI: 1.90–3.03) for developing symptomatic disease, 4.17 (95% CI: 2.43–7.17) for developing severe disease, and 5.11 (95% CI: 2.49–10.5) for dying from COVID-19. The estimated effectiveness of the BNT162b2 vaccine during the outbreak was 34.5% (95% CI: 18.5–47.3) for preventing SARS-CoV-2 infection, 63.6% (95% CI: 51.4–72.8) for preventing symptomatic disease, 71.8% (95% CI: 58.8–80.7) for preventing severe disease, and 83.1% (95% CI: 67.8–91.1) for preventing death from COVID-19. Finally, we were able to retrieve only very limited data regarding SARS-CoV-2 variants and staff vaccination coverage. A search for variants was negative in two of the facilities; variant B.1.1.7 was found in one facility, and variant B.1.351 was found to be responsible for the outbreak in another facility. Staff vaccination coverage was obtained from three NH only. Health caregivers fully vaccinated at the beginning of the outbreak were 1 of 32 (3.1%), 33 of 100 (33.0%), and 35 of 50 (70.0%). Discussion: Our study shows that outbreaks of COVID-19 can definitely occur in fully vaccinated older patients residing in NH, leading to severe cases, hospitalizations, and even deaths. We observed a high secondary attack rate in many facilities, despite high vaccine coverage among residents, suggesting that vaccination did not block SARS-CoV-2 transmission in this population. On the other hand, the risk of severe disease and death was considerably lower in vaccinated residents than in nonvaccinated ones. A limitation of our work is that we did not systematically survey all 7,200 NH existing in France.
There were probably other NH that suffered outbreaks but that our survey did not reach, or that did not answer, so it is not possible to obtain a reliable estimate of the incidence of this type of outbreak. In any case, we can say that they are not rare, isolated events. It is also important to note that attack rates in our study are secondary attack rates during an outbreak and are not an estimation of the general incidence of cases among vaccinated individuals in NH. Similarly, it is important to understand that the vaccine effectiveness we estimate here is a measure during the outbreak; that is, in a situation of high exposure to the virus and in a subgroup of older people, residents of NH, who typically are very frail. This is a worst-case scenario and not an estimation of the actual effectiveness of the BNT162b2 vaccine in the general population of older persons residing in NH. Few COVID-19 outbreaks had been reported to date in groups of persons well vaccinated against SARS-CoV-2. In separate reports, we documented COVID-19 in vaccinated older persons during an outbreak in a hospital rehabilitation unit and in an NH, both in France [18, 21]. Cavanaugh et al. [22] recently described a COVID-19 outbreak in an NH in Kentucky, USA, involving 26 residents, of whom 18 had been fully vaccinated. By contrast, Teran et al. [23] followed COVID-19 cases for several months in 75 skilled nursing facilities in Chicago after a vaccination campaign. They found only 22 residents developing SARS-CoV-2 infection after full vaccination and no case of secondary transmission inside the facilities [23]. A lower effectiveness of COVID-19 vaccines among older and frail patients, as compared to the efficacy documented in randomized clinical trials, is not completely unexpected. Impaired immunogenicity of vaccines, attributed to immunosenescence − a broad term for declining immunity with age, involving both humoral and cellular immune responses − has been well described in older patients with frailty or living in NH [24, 25]. Several studies have documented a reduced antibody response to influenza vaccine [26, 27] and other virus vaccines [28, 29] in NH residents. Two published studies [14, 30] and one still unpublished at the moment of writing this report [31] have found a reduced antibody response to the BNT162b2 vaccine in older long-term care residents. Nonetheless, even if a decreased effectiveness of SARS-CoV-2 vaccines could be expected in this population, the occurrence of outbreaks in geriatric settings among vaccinated residents is highly concerning. It is also very different from data obtained in the general population of older persons, as in a recent national observational study in Israel that found a preserved high effectiveness (94.1%–97.4%) of the BNT162b2 vaccine in persons 85 years or older [15]. It is important to note that the SARS-CoV-2 vaccine was not completely ineffective in our study. The risk of developing severe disease or dying from COVID-19 was greatly reduced in vaccinated residents. Mortality, in particular, was largely reduced among vaccinated persons, compared with nonvaccinated residents in the same facilities. Overall, these findings suggest that the main effect of vaccination in these patients might be reducing the severity of the infection rather than avoiding the infection itself or blocking its transmission. There are other reasons that might explain our observations. One is a lower effectiveness of the BNT162b2 vaccine against some SARS-CoV-2 variants.
Data on variants in our study were very limited but identified two variants involved in outbreaks. Variant B.1.1.7 (Alpha), which was confirmed in one facility, was highly prevalent (82.6% of all strains) in France during the period studied [17]. It has been reported that the BNT162b2 vaccine induces a reduced neutralizing antibody response against this variant [32]. Variant B.1.351 (Beta) was found to be responsible for the outbreak in another center. The B.1.351 variant has been found to escape neutralization by several monoclonal antibodies and by plasma from convalescent patients [33, 34]. Another factor that may facilitate outbreaks in NH is low vaccination coverage among staff members. We retrieved staff vaccination rates from only a few facilities, but coverage was clearly lower than among residents. Finally, all the outbreaks described in this work happened during a pandemic wave, and it is known that the incidence of COVID-19 among NH residents and staff members usually closely follows the incidence of COVID-19 in the surrounding communities [35]. Our study thus has several important limitations. We did not cover all NH in our country, and there were probably other outbreaks our survey did not find, so we cannot make an estimation of their actual incidence. Data retrieved on SARS-CoV-2 variants, staff vaccination coverage, and residents' individual characteristics (especially comorbidities, immune suppression, and data on previous SARS-CoV-2 infection or antibody levels) were limited, hence their influence on the risk of developing an outbreak could not be analyzed. It is also important to note, as already discussed, that the estimates of vaccine effectiveness made here are circumscribed to a particular situation − high virus exposure during an outbreak. Moreover, older people living in NH are usually frailer and accumulate more comorbidities than other groups of older persons, so the estimates made for this population should not be extrapolated to the general population of older persons. In spite of these limitations, our findings have practical implications. First, they underscore that vaccination alone, even when extensive, cannot guarantee an absence of outbreaks in NH. Prevention and control measures against SARS-CoV-2 in NH and other geriatric facilities should be maintained for the moment, at least at times of high COVID-19 incidence in the surrounding community. Such measures would need to be adapted, though, as prolonged isolation or reduced human contact can have important psychological consequences in older people [36, 37, 38]. Secondly, our findings stress that more research is needed to improve vaccine effectiveness in this population − and probably in other populations also at risk, like immunosuppressed patients. Possible strategies might be the administration of additional doses − a strategy supported by recent studies [39] and chosen by many countries − the administration of a higher dose of antigen, or the sequential combination of different vaccines. In conclusion, in this study, we found that COVID-19 outbreaks can still occur in older residents in NH despite full vaccination of a majority of them with BNT162b2. In this setting and under similar circumstances − high COVID-19 incidence during a pandemic wave − high vaccination coverage among residents does not guarantee that SARS-CoV-2 will not spread in the facility. Nonetheless, the BNT162b2 vaccine appears to remain highly effective for preventing severe disease, hospitalization, and death.
Prevention and control measures for SARS-CoV-2 should still be maintained in NH during periods of high incidence in the community. More research is needed to improve the effectiveness of SARS-CoV-2 vaccines in frail older persons. Statement of Ethics: The study was conducted in accordance with the Declaration of Helsinki. This type of study is exempt from Ethics Committee review and from participants' oral or written consent according to French law (Décret no. 2016-1537, November 17, 2016). All data collected were anonymous and were managed according to the requirements of the French Commission Nationale de l'Informatique et des Libertés (CNIL) for the use of data in noninterventional health studies (declaration number 2216052v0). Conflict of Interest Statement: Dr. Hanon reports personal fees from Bayer Healthcare, Servier, Astra-Zeneca, Boston Scientific, Vifor, BMS, Pfizer, and Boehringer Ingelheim, outside the submitted work. Dr. Jeandel reports personal fees from Bayer, Boehringer Ingelheim, BMS, Pfizer, Novartis, Servier, and Vifor, outside the submitted work. Dr. Belmin reports personal fees from Novartis and Pfizer, outside the submitted work. Dr. Lafuente-Lafuente, Dr. Rainone, Dr. Guérin, and Dr. Drunat have nothing to disclose. Funding Sources: This work was supported by the Assistance Publique − Hôpitaux de Paris (APHP) and Sorbonne Université. The funding sources had no role in data collection, analysis, or interpretation, nor in the design of the study or its conduct. We were not paid to write this article by a pharmaceutical company or other agency. Author Contributions: Carmelo Lafuente-Lafuente: conceptualization, methodology, formal analysis, software, supervision, validation, and writing − original draft. Antonio Rainone: investigation, data curation, formal analysis, visualization, validation, and writing − original draft. Olivier Guérin: investigation, data curation, and project administration. Olivier Drunat: investigation, resources, and supervision. Claude Jeandel: investigation, resources, and supervision. Olivier Hanon: project administration, resources, validation, and writing − review and editing. Joël Belmin: conceptualization, formal analysis, funding acquisition, project administration, supervision, validation, visualization, and writing − review and editing. Data Availability Statement: The data that support the findings of this study are openly available in the Open Science Framework (DOI: 10.17605/OSF.IO/7R349).
Background: It is not known if widespread vaccination can prevent the spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in subpopulations at high risk, like older adults in nursing homes (NH). Methods: We identified, using national professional networks, NH that suffered COVID-19 outbreaks despite having completed a vaccination campaign, and asked them to send data, using predefined collecting forms, on the number of residents exposed, their vaccination status, and the number, characteristics, and evolution of patients infected. The main outcome was to identify outbreaks occurring in NH with high vaccine coverage. Secondary outcomes were residents' risk of being infected, developing severe disease, or dying from COVID-19 during the outbreak. SARS-CoV-2 infection was defined by a positive reverse transcriptase-polymerase chain reaction. All residents were serially tested whenever cases appeared in a facility. Unadjusted secondary attack rates, relative risks, and vaccine effectiveness during the outbreak were estimated. Results: We identified 31 NH suffering an outbreak during March-April 2021, of which 27 sent data, totaling 1,768 residents (mean age 88.4, 73.4% women, 78.2% fully vaccinated). BNT162b2 was the vaccine employed in all NH. There were 365 cases of SARS-CoV-2 infection. Median secondary attack rates were 20.0% (IQR 4.4%-50.0%) among unvaccinated residents and 16.7% (IQR 9.5%-29.2%) among fully vaccinated ones. Severe cases developed in 42 of 80 (52.5%) unvaccinated patients, compared with 56 of 248 (22.6%) fully vaccinated ones (relative risk [RR] 4.17, 95% CI: 2.43-7.17). Twenty of the unvaccinated patients (25.0%) and 16 of fully vaccinated ones (6.5%) died from COVID-19 (RR 5.11, 95% CI: 2.49-10.5). Estimated vaccine effectiveness during the outbreak was 34.5% (95% CI: 18.5-47.3) for preventing SARS-CoV-2 infection, 71.8% (58.8-80.7) for preventing severe disease, and 83.1% (67.8-91.1) for preventing death. Conclusions: Outbreaks of COVID-19, including severe cases and deaths, can still occur in NH despite full vaccination of a majority of residents. The vaccine remains highly effective, however, for preventing severe disease and death. Prevention and control measures for SARS-CoV-2 should be maintained in NH during periods of high incidence in the community.
null
null
3,712
461
[ 86, 61, 120 ]
9
[ "residents", "nh", "19", "vaccinated", "covid", "covid 19", "vaccination", "sars", "sars cov", "older" ]
[ "vaccine sars cov", "fully vaccinated older", "sars cov vaccines", "vaccinated older patients", "vaccines frail" ]
null
null
[CONTENT] Coronavirus disease 2019 | Severe acute respiratory syndrome coronavirus 2 | Vaccination | Outbreak | Older adults | Nursing home [SUMMARY]
[CONTENT] Coronavirus disease 2019 | Severe acute respiratory syndrome coronavirus 2 | Vaccination | Outbreak | Older adults | Nursing home [SUMMARY]
[CONTENT] Coronavirus disease 2019 | Severe acute respiratory syndrome coronavirus 2 | Vaccination | Outbreak | Older adults | Nursing home [SUMMARY]
null
[CONTENT] Coronavirus disease 2019 | Severe acute respiratory syndrome coronavirus 2 | Vaccination | Outbreak | Older adults | Nursing home [SUMMARY]
null
[CONTENT] Humans | Female | Aged | Male | COVID-19 | SARS-CoV-2 | BNT162 Vaccine | Vaccination | Disease Outbreaks | Nursing Homes [SUMMARY]
[CONTENT] Humans | Female | Aged | Male | COVID-19 | SARS-CoV-2 | BNT162 Vaccine | Vaccination | Disease Outbreaks | Nursing Homes [SUMMARY]
[CONTENT] Humans | Female | Aged | Male | COVID-19 | SARS-CoV-2 | BNT162 Vaccine | Vaccination | Disease Outbreaks | Nursing Homes [SUMMARY]
null
[CONTENT] Humans | Female | Aged | Male | COVID-19 | SARS-CoV-2 | BNT162 Vaccine | Vaccination | Disease Outbreaks | Nursing Homes [SUMMARY]
null
[CONTENT] vaccine sars cov | fully vaccinated older | sars cov vaccines | vaccinated older patients | vaccines frail [SUMMARY]
[CONTENT] vaccine sars cov | fully vaccinated older | sars cov vaccines | vaccinated older patients | vaccines frail [SUMMARY]
[CONTENT] vaccine sars cov | fully vaccinated older | sars cov vaccines | vaccinated older patients | vaccines frail [SUMMARY]
null
[CONTENT] vaccine sars cov | fully vaccinated older | sars cov vaccines | vaccinated older patients | vaccines frail [SUMMARY]
null
[CONTENT] residents | nh | 19 | vaccinated | covid | covid 19 | vaccination | sars | sars cov | older [SUMMARY]
[CONTENT] residents | nh | 19 | vaccinated | covid | covid 19 | vaccination | sars | sars cov | older [SUMMARY]
[CONTENT] residents | nh | 19 | vaccinated | covid | covid 19 | vaccination | sars | sars cov | older [SUMMARY]
null
[CONTENT] residents | nh | 19 | vaccinated | covid | covid 19 | vaccination | sars | sars cov | older [SUMMARY]
null
[CONTENT] older | vaccination | 19 | covid 19 | covid | severe | residents | nh | disease | adults [SUMMARY]
[CONTENT] vaccinated | outbreak | residents | 19 | covid | covid 19 | fully | fully vaccinated | unvaccinated | de [SUMMARY]
[CONTENT] ci | 95 ci | 95 | vaccinated | nh | residents | unvaccinated | disease | sars | 19 [SUMMARY]
null
[CONTENT] residents | 19 | vaccinated | nh | covid | covid 19 | dr | data | older | vaccination [SUMMARY]
null
[CONTENT] 2 | NH [SUMMARY]
[CONTENT] NH | COVID-19 ||| NH ||| COVID-19 ||| ||| ||| [SUMMARY]
[CONTENT] 31 | NH | March-April 2021 | 27 | 1,768 | age 88.4 | 73.4% | 78.2% ||| NH ||| ||| 20.0% | IQR | 4.4%-50.0% | 16.7% ||| 42 | 80 | 52.5% | 56 | 248 | 22.6% | 95% | CI | 2.43 ||| Twenty | 25.0% | 16 | 6.5% | COVID-19 | 5.11 | 95% | CI | 2.49-10.5 ||| 34.5% | 95% | CI | 18.5-47.3 | 71.8% | 58.8-80.7 | 83.1% | 67.8-91.1 [SUMMARY]
null
[CONTENT] 2 | NH ||| NH | COVID-19 ||| NH ||| COVID-19 ||| ||| ||| ||| ||| 31 | NH | March-April 2021 | 27 | 1,768 | age 88.4 | 73.4% | 78.2% ||| NH ||| ||| 20.0% | IQR | 4.4%-50.0% | 16.7% ||| 42 | 80 | 52.5% | 56 | 248 | 22.6% | 95% | CI | 2.43 ||| Twenty | 25.0% | 16 | 6.5% | COVID-19 | 5.11 | 95% | CI | 2.49-10.5 ||| 34.5% | 95% | CI | 18.5-47.3 | 71.8% | 58.8-80.7 | 83.1% | 67.8-91.1 ||| NH ||| Vaccine ||| NH [SUMMARY]
null
Associations of Polymorphism of rs9944155, rs1051052, and rs1243166 Locus Allele in Alpha-1-antitrypsin with Chronic Obstructive Pulmonary Disease in Uygur Population of Kashgar Region.
29521291
Previous studies conducted in various geographical and ethnical populations have shown that Alpha-1-antitrypsin (Alpha-1-AT) expression affects the occurrence and progression of chronic obstructive pulmonary disease (COPD). We aimed to explore the associations of rs9944155AG, rs1051052AG, and rs1243166AG polymorphisms in the Alpha-1-AT gene with the risk of COPD in Uygur population in the Kashgar region.
BACKGROUND
From March 2013 to December 2015, a total of 225 Uygur COPD patients and 198 healthy people were recruited as cases and controls, respectively, in Kashgar region. DNA was extracted according to the protocol of the DNA genome kit, and Sequenom MassARRAY single-nucleotide polymorphism technology was used for genotype determination. Serum concentration of Alpha-1-AT was detected by enzyme-linked immunosorbent assay. A logistic regression model was used to estimate the associations of polymorphisms with COPD.
METHODS
The rs1243166-G allele was associated with a higher risk of COPD (odds ratio [OR] = 2.039, 95% confidence interval [CI]: 1.116-3.725, P = 0.019). In cases, Alpha-1-AT levels were the highest among participants carrying rs1243166 AG genotype, followed by AA and GG genotype (χ2 = 11.89, P = 0.003). Similarly, the rs1051052-G allele was associated with a higher risk of COPD (OR = 19.433, 95% CI: 8.783-43.00, P < 0.001). The highest Alpha-1-AT levels were observed in cases carrying rs1051052 AA genotype, followed by cases with AG and GG genotypes (χ2 = 122.45, P < 0.001). However, individuals with rs9944155-G allele exhibited a lower risk of COPD than those carrying the rs9944155-A allele (OR = 0.121, 95% CI: 0.070-0.209, P < 0.001). In both cases and controls, no significant difference in Alpha-1-AT levels was observed among various rs9944155 genotypes.
RESULTS
The rs1243166, rs9944155, and rs1051052 sites of Alpha-1-AT may be associated with COPD morbidity in Uygur population. While the rs1243166-G allele and rs1051052-G allele are associated with an increased risk of developing COPD, the rs9944155-G allele is a protective locus in Uygur population. Alpha-1-AT levels in Uygur COPD patients were lower than those in healthy people and differed among patients with different rs1051052 AG and rs1243166 AG genotypes.
CONCLUSIONS
[ "Aged", "Alleles", "Female", "Gene Frequency", "Genetic Predisposition to Disease", "Genotype", "Humans", "Male", "Middle Aged", "Odds Ratio", "Polymorphism, Single Nucleotide", "Pulmonary Disease, Chronic Obstructive", "alpha 1-Antitrypsin" ]
5865314
INTRODUCTION
Chronic obstructive pulmonary disease (COPD) is characterized by persistent airflow limitation and is a preventable and treatable disease. COPD is associated with an enhanced inflammatory response in the airways and lung to noxious particles and gasses. Currently, COPD is the fourth leading cause of death globally and is a major cause of morbidity.[1] It is predicted that COPD will become the third leading cause of death worldwide by 2030.[2] The aging population worldwide and the ongoing exposure to risk factors are expected to further increase the prevalence of COPD.[3] The development of COPD is affected by many factors. In addition to environmental factors like air pollution, evidence has shown that genetic factors might also be involved in the pathogenesis of COPD.[4] Alpha-1-AT, also known as the esterase inhibitor-1 gene (protease inhibitor 1 [PI]), is localized on 14q32 and encodes serine PIs. The inhibition of proteolytic activity protects the lung.[5] Several studies reported that Alpha-1-antitrypsin (Alpha-1-AT) deficiency was significantly associated with the incidence of pulmonary emphysema.[6] In addition, one study estimated that about 1–2% of COPD might be caused by genetic deficiency in Alpha-1-AT.[7] However, the evidence is limited, as only a few epidemiological studies have explored the relationship between Alpha-1-AT gene polymorphisms and COPD. One study showed that the PiSZ genotype may increase the risk of COPD, but the Alpha-1-AT PiSS genotype was not significantly associated with COPD risk.[8] Therefore, it is unclear whether other loci of the Alpha-1-AT gene affect the development of COPD. To fill this gap, we performed a case-control study in Uygur population in the Kashgar region to investigate the possible relationship between polymorphisms at the rs1243166, rs9944155, and rs1051052 sites of the Alpha-1-AT gene and COPD.
METHODS
Ethical approval The study was approved by the Medical Ethics Review Committee of the First People's Hospital in Kashgar. Informed consents were obtained from all participants in this study. Study population All subjects were enrolled from the First People's Hospital in Kashgar, from March 2013 to December 2015. The study selected 225 Uygur COPD patients as the case group who fulfilled the following criteria: (1) the COPD diagnosis was made according to the Global Initiative for Obstructive Lung Disease guidelines;[9] (2) COPD was diagnosed by typical history, clinical manifestations, and chest X-ray or computed tomography (CT); (3) the index of pulmonary function indicating chronic airway obstruction was defined as a forced expiratory volume in 1 s (FEV1)/forced vital capacity (FVC) ratio <70% after inhalation of 400 μg salbutamol; (4) patients were tested only after ceasing drugs including controlled-release theophylline tablets for 24 h, β2 receptor agonists for 12 h, and inhaled β2-agonists and anticholinergic drugs for 4 h. According to the principle of group matching (age and gender), 198 Uygur healthy individuals constituted the control group, satisfying the following conditions: (1) the individuals did not present chronic bronchitis or emphysema on chest X-ray or chest CT; (2) normal lung function was defined as FEV1% >80% and an FEV1/FVC ratio >70% after inhalation of 400 μg salbutamol; (3) the individuals had no diseases such as bronchiectasis, tuberculosis, interstitial disease, asthma, or cancers. All the participants were of Uygur ethnicity and shared no kinship with each other. Information on gender, age, smoking status, and lung function was collected by questionnaire.
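To make the spirometric inclusion criteria above concrete, here is a minimal Python sketch classifying post-bronchodilator lung function according to the thresholds just stated (FEV1/FVC < 70% for cases; FEV1 > 80% of predicted and FEV1/FVC > 70% for controls). The function names and example values are hypothetical, for illustration only.

```python
# Sketch of the post-bronchodilator spirometric criteria stated above:
# cases required FEV1/FVC < 70%; controls required FEV1 > 80% of the
# predicted value and FEV1/FVC > 70%. Names and values are hypothetical.

def fev1_fvc_ratio(fev1_l: float, fvc_l: float) -> float:
    """FEV1/FVC expressed as a percentage."""
    return 100.0 * fev1_l / fvc_l

def meets_case_criterion(fev1_l: float, fvc_l: float) -> bool:
    """Chronic airway obstruction: FEV1/FVC < 70% after salbutamol."""
    return fev1_fvc_ratio(fev1_l, fvc_l) < 70.0

def meets_control_criterion(fev1_l: float, fvc_l: float,
                            fev1_percent_predicted: float) -> bool:
    """Normal lung function: FEV1% > 80% and FEV1/FVC > 70%."""
    return fev1_percent_predicted > 80.0 and fev1_fvc_ratio(fev1_l, fvc_l) > 70.0

# Hypothetical measurements (liters):
print(meets_case_criterion(1.4, 2.8))            # ratio 50% -> True
print(meets_control_criterion(3.2, 4.0, 95.0))   # ratio 80% -> True
```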
Sequenom MassARRAY single-nucleotide polymorphism testing DNA extraction We collected 2 ml venous blood from all the participants. DNA extraction from blood samples was performed with the Greiner Genomic DNA purification Kit (Greiner, Germany). DNA samples, adjusted to a concentration of 50 ng/μl, were stored in a −80°C storage box (DW-86L728; Qingdao Haier, China). Primer design Genotyping was performed by polymerase chain reaction (PCR). The following primers (Shanghai YingJun Biotechnology Co., China) were used for PCR amplification: rs1243166: PCR primers: 2nd-PCRP: 5'-ACGTTGGATAAAGCACATCACCCATTGACC-3'; 1st-PCRP: 5'-ACGTTGGATGAAGAAGTCAGGCTGCATGTG-3'; UEP_SEQ: 5'-CCCTCCCTTTCCTCC-3'. rs9944155: PCR primers: 2nd-PCRP: 5'-ACGTTGGATGAAAAGTTTGGGGACTGCTGG-3'; 1st-PCRP: 5'-ACGTTGGATGCAGAAATCACTGCTTAGCCC-3'; UEP_SEQ: 5'-CGGTGACTGCTGGCTTACAC-3'. rs1051052: PCR primers: 2nd-PCRP: 5'-ACGTTGGATGAGACCATTACCCTATATCCC-3'; 1st-PCRP: 5'-ACGTTGGATGCTGAGGAGTCCTTGCAATGG-3'; UEP_SEQ: 5'-GGATCCCTTCTCCTCCC-3'. Single-nucleotide polymorphism genotyping A multiplex PCR technique was used to amplify the gene sequences of the selected sites under the following conditions: initial denaturation at 94°C for 4 min; 45 cycles of 94°C for 20 s, 56°C for 30 s, and 72°C for 1 min; and a final extension at 72°C for 3 min. The amplified products were treated with shrimp alkaline phosphatase to dephosphorylate unincorporated dNTPs; the single-nucleotide polymorphism (SNP) site was then amplified with a single base extension primer (about 20 bp). After extension, the DNA products purified with resins were detected by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MS) for SNP genotyping.[10] Enzyme level detection Five milliliters of fasting venous blood was collected, left for 2 h at room temperature, and centrifuged at 626 × g for 5 min, with the serum being separated and stored at −20°C.
The level of Alpha-1-AT in serum was detected by enzyme-linked immunosorbent assay. Statistical analysis SPSS 18.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Numerical data were expressed as mean ± standard deviation (SD). Deviations from Hardy-Weinberg equilibrium (HWE) were evaluated using the Chi-square test, and a value of P > 0.05 was considered as conforming to HWE. Differences between COPD patients and controls were also compared by the Chi-square test. A univariate logistic regression model was applied to explore the relationship between COPD and the SNPs. Alpha-1-AT levels between the case group and the control group, and among different genotypes, were compared by the Wilcoxon rank-sum test. A P < 0.05 indicated a significant difference.
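As a companion to the statistical analysis described above, the following Python sketch shows one conventional way to run the Hardy-Weinberg equilibrium Chi-square test from genotype counts. The genotype counts and the scipy-based implementation are illustrative assumptions, not the authors' code.

```python
# Sketch of a Hardy-Weinberg equilibrium (HWE) Chi-square test as
# described in the statistical analysis: observed genotype counts are
# compared with counts expected from the estimated allele frequencies.
# Genotype counts below are hypothetical, for illustration only.
from scipy.stats import chi2

def hwe_chi_square(n_aa: int, n_ag: int, n_gg: int) -> tuple[float, float]:
    n = n_aa + n_ag + n_gg
    p = (2 * n_aa + n_ag) / (2 * n)          # frequency of the A allele
    q = 1.0 - p                              # frequency of the G allele
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_aa, n_ag, n_gg)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # df = 3 genotype classes - 1 - 1 estimated allele frequency = 1
    return stat, chi2.sf(stat, df=1)

stat, p_value = hwe_chi_square(36, 100, 64)     # hypothetical control genotypes
print(f"chi2 = {stat:.3f}, P = {p_value:.3f}")  # P > 0.05 -> consistent with HWE
```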
RESULTS
Baseline characteristics of the study participants A total of 423 Uygur individuals from Xinjiang Uygur Autonomous Region were enrolled in the study. Among them, 225 comprised the COPD group and 198 formed the control group. No significant differences were observed between the two groups with respect to gender, age, weight, body mass index, cigarette smoking, passive smoking, biofuel exposure, and animal dust exposure (all P > 0.05), but a significant difference was found for occupational dust exposure (χ2 = 4.694, P = 0.030; Table 1). Comparison of clinical characteristics between COPD and control groups *Coal dust, cement dust, welding fume; †Dove, cattle, and sheep fur. COPD: Chronic obstructive pulmonary disease; SD: Standard deviation; BMI: Body mass index. Allelic and genotypic frequencies The distributions of the rs1243166, rs9944155, and rs1051052 genotypes in the case and control groups are shown in Table 2. The genotype distribution of Alpha-1-AT SNP rs1243166 was in line with HWE in both the case (χ2 = 4.092, P = 0.129) and control (χ2 = 2.914, P = 0.233) groups. For SNPs rs9944155 and rs1051052, the genotype distribution was in line with HWE in the control group (χ2 = 3.402, P = 0.183), while it was not in the case group (χ2 = 10.587, P = 0.005). For rs1243166, the frequency of the AA genotype was 10.2% and 18.2% in the COPD and control group, respectively. The frequency of the GG/GA genotype was 89.8% and 81.8% in the case and control group, respectively. The frequencies of the A and G alleles were 33.1% and 66.9% in the case group and 39.9% and 60.1% in the control group, respectively. Significant differences were observed in the genotype (χ2 = 5.559, P = 0.018) and allele (χ2 = 4.198, P = 0.040) distributions between the case and control group. For rs9944155, the frequency of the AA genotype was 50.2% and 18.2% in the case and control group, respectively. The frequency of the GG/GA genotype was 49.8% and 81.8% in the case and control group, respectively. The frequencies of the A and G alleles were 67.8% and 32.2% in the case group and 37.1% and 62.9% in the control group. Significant differences were observed in the genotype (χ2 = 47.386, P < 0.001) and allele (χ2 = 79.559, P < 0.001) distributions between the case and control group. For rs1051052, the frequency of the AA genotype was 25.8% and 64.1% in the case and control group, respectively. The frequency of the GG/GA genotype was 74.2% and 35.9% in the case and control group, respectively. The frequencies of the A and G alleles were 47.1% and 52.9% in the case group and 80.1% and 19.9% in the controls. Significant differences were observed in the genotype (χ2 = 62.991, P < 0.001) and allele (χ2 = 97.543, P < 0.001) distributions between the case and control group [Table 2].
Distributions of rs1243166, rs9944155, and rs1051052 genotypes in COPD and control groups COPD: Chronic obstructive pulmonary disease; –: Not available. Association of single-nucleotide polymorphism with chronic obstructive pulmonary disease Table 3 shows the associations of the rs1243166, rs9944155, and rs1051052 genotypes with COPD using a univariate logistic regression model.
The risk of COPD in individuals carrying the rs1243166-GG and rs1243166-GA genotypes was 2.039-fold (95% confidence interval [CI]: 1.116–3.725; P = 0.019) and 1.875-fold (95% CI: 1.033–3.404; P = 0.037) that of the rs1243166-AA genotype. In addition, we observed that the rs1051052-G allele posed a higher risk of COPD than the rs1051052-A allele (odds ratio [OR]: 19.433, 95% CI: 8.783–43.00, P < 0.001). However, individuals carrying the rs9944155-G allele had a lower risk of COPD than those carrying the rs9944155-A allele (OR: 0.121, 95% CI: 0.070–0.209, P < 0.001). Distributions of rs1243166, rs9944155, and rs1051052 genotypes and their association with risk of COPD COPD: Chronic obstructive pulmonary disease; CI: Confidence interval; OR: Odds ratio; –: Not available. Associations of Alpha-1-antitrypsin levels with genotype Alpha-1-AT levels in the control group were significantly higher than those in the COPD group (Z = 3.4820, P < 0.0001). We further compared Alpha-1-AT levels in the COPD and control groups across the different genotypes. For the rs1051052 polymorphism, Alpha-1-AT levels were the highest among COPD patients with the AA genotype, followed by patients with the AG and GG genotypes (χ2 = 122.45, P < 0.001). A similar trend was also observed in the control group (χ2 = 23.67, P < 0.001). For the rs1243166 polymorphism, Alpha-1-AT levels were the highest among COPD patients with the AG genotype, followed by the AA and GG genotypes (χ2 = 11.89, P = 0.003). However, no significant difference was observed in the control group (χ2 = 3.26, P = 0.196). With regard to the rs9944155 genotype, both the COPD and control groups showed no significant difference in Alpha-1-AT levels among the different genotypes [Table 4]. Associations of Alpha-1-AT levels with genotypes in case and control groups Alpha-1-AT: Alpha-1-antitrypsin; –: Not available.
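For readers who want to see how the unadjusted odds ratios in Table 3 arise, a univariate logistic regression on a binary genotype contrast is equivalent to the 2 × 2 table odds ratio; the short Python sketch below computes an OR with a Woolf (log-based) 95% CI. The counts are hypothetical and do not reproduce the study's tables.

```python
# Sketch of an unadjusted odds ratio with a Woolf 95% CI, equivalent
# to a univariate logistic regression on a binary genotype contrast
# such as GG/GA vs. AA. All counts are hypothetical.
import math

def odds_ratio_ci(cases_exposed: int, cases_unexposed: int,
                  controls_exposed: int, controls_unexposed: int,
                  z: float = 1.96) -> tuple[float, float, float]:
    or_ = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)
    # Standard error of log(OR): sqrt of the sum of reciprocal cell counts.
    se = math.sqrt(1 / cases_exposed + 1 / cases_unexposed
                   + 1 / controls_exposed + 1 / controls_unexposed)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical 2x2 table: carriers (GG/GA) vs. non-carriers (AA).
or_, lower, upper = odds_ratio_ci(90, 30, 60, 60)
print(f"OR = {or_:.3f} (95% CI: {lower:.3f}-{upper:.3f})")  # OR = 3.000 (1.737-5.183)
```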
null
null
[ "Ethical approval", "Study population", "SequenomMassARRAY single-nucleotide polymorphism testing", "Enzyme level detection", "Statistical analysis", "Baseline characteristics of the study participants", "Allelic and genotypic frequencies", "Association of single-nucleotide polymorphism with chronic obstructive pulmonary disease", "Associations of Alpha-1-antitrypsin levels with genotype", "Financial support and sponsorship" ]
[ "The study was approved by the Medical Ethics Review Committee of the First People's Hospital in Kashgar. Informed consents were obtained from all participants in this study.", "All subjects were enrolled from the First People's Hospital in Kashgar, from March 2013 to December 2015. The study selected 225 Uygur COPD patients as the case group who fulfilled the following criteria: (1) the COPD diagnosis was according to the Global Initiative for Obstructive Lung Disease guidelines;[9] (2) COPD is diagnosed by typical history, clinical manifestations, and chest X-ray or computed tomography (CT); (3) the index of pulmonary function, an indication of chronic airway obstruction, was defined as a forced expiratory volume in 1 (FEV1)/forced vital capacity (FVC) <70% after inhalation of 400 μg salbutamol; (4) the patients can be checked only after ceasing the usage of the drug including the controlled release theophylline tablets for 24 h, β2 receptor agonist for 12 h, and inhaled β2-agonist and anticholinergic drugs for 4 h. According to the principle of group matching (age and gender), 198 Uygur healthy individuals constituted the control group, satisfying the following conditions: (1) the individuals did not present chronic bronchitis and emphysema in chest X-ray or chest CT; (2) the normal lung function was described as FEV1% >80% and an FEV1/FVC >70% after inhalation of 400 μg salbutamol; (3) the individuals had no diseases such as bronchiectasis, tuberculosis, interstitial disease, asthma, and cancers. All the participants were of Uygur ethnicity and shared no kinship with each other. Information on gender, age, smoking status, and lung function was collected by questionnaire.", "\nDNA extraction\n\nWe collected 2 ml venous blood from all the participants. DNA extraction from blood samples was performed with Greiner Genomic DNA purification Kit (Greiner, Germany). DNA samples with adjusted concentration of 50 ng/μl were stored at −80° C storage box (DW-86L728; Qingdao Haier, China).\n\nPrimer design\n\nGenotyping was performed by polymerase chain reaction (PCR). The following primers (Shanghai YingJun Biotechnology Co., China) were used for PCR amplification: rs1243166: PCR primers: 2nd-PCRP: 5'-ACGTT GGATAAAGCACATCACCCATTGACC-3'; 1st-PCRP: 5'-ACGTTGGATGAAGAAGTCAGGCTGCATGTG-3'; UEP_SEQ: 5'-CCCTCCCTTTCCTCC-3'. rs9944155: PCR primers: 2nd-PCRP: 5'-ACGTTGGATGA AAAGTTTGGGGACTGCTGG-3'; 1st-PCRP: 5'-ACG TTGGATGCAGAAATCACTGCTTAGCCC-3'; UEP_SEQ: 5'-CGGTGACTGCTGGCTTACAC-3'. rs1051052: PCR primers: 2nd-PCRP: 5'-ACGTTGGA TGAGACCATTACCCTATATCCC-3'; 1st-PCRP: 5'-ACGTTGGATGCTGAGGAGTCCTTGCAATGG-3'; UEP_SEQ: 5'-GGATCCCTTCTCCTCCC-3'.\n\nSingle-nucleotide polymorphism genotyping\n\nMultiplex PCR technique was used to amplify the gene sequences of the selected sites under the following conditions: initial denaturation at 94°C for 4 min, 45 cycles of 94°C for 20 s, 56°C for 30 s, 72°C for 1 min, and a final extension at 72°C for 3 min. The amplified products were purified by shrimp alkaline phosphatase and added to dNTP; the amplification of the single-nucleotide polymorphism (SNP) site was performed by a single base extension primer (about 20 bp). After extension, the DNA products purified with resins were detected by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MS) and SNP genotyping.[10]", "Five milliliters fasting venous blood was collected, laid up for 2 h with room temperature, and centrifuged at 626 × g for 5 min, with the serum being separated and stored at −20°C. 
The level of Alpha-1-AT in serum was detected by enzyme-linked immunosorbent assay.", "The SPSS 18.0 (version 18.0, SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Numerical data were expressed as a mean ± standard deviation (SD). Deviations from Hardy-Weinberg equilibrium (HWE) were evaluated using Chi-square test, and a value of P > 0.05 was considered as conforming the HWE. The difference between COPD patients and controls were also compared by Chi-square test. Univariate logistic regression model was applied to explore the relationship between COPD and the SNPs. The Alpha-1-AT levels between case group and control group and among different genotypes were compared by Wilcoxon rank-sum test. A P < 0.05 indicated a significant difference.", "A total of 423 Uygur individuals from Xinjiang Uygur Autonomous Region were enrolled in the study. Among them, 225 comprised the COPD group and 198 formed the control group. No significant differences were observed between the two groups with respect to gender, age, weight, body mass index, cigarette smoking, passive smoking, biofuel exposure, and animal dust exposure (all P > 0.05), but significant difference was found for occupational dust exposure (χ2 = 4.694, P = 0.030; Table 1).\nComparison of clinical characteristics between COPD and control groups\n*Coal dust, cement dust, welding fume; †Dove, cattle, and sheep fur. COPD: Chronic obstructive pulmonary disease; SD: Standard deviation; BMI: Body mass index.", "The distributions of rs1243166, rs9944155, and rs1051052 genotype among the case and control group are shown in Table 2. The genotype distribution of Alpha-1-AT SNPs rs1243166 was in line with HWE in both case (χ2 = 4.092, P = 0.129) and control (χ2 = 2.914, P = 0.233) groups. For SNPs rs9944155 and rs1051052, genotype distribution was in line with HWE in the control group (χ2 = 3.402, P = 0.183), while they were not in the case group (χ2 = 10.587, P = 0.005). For rs1243166, the frequency of the AA genotype was 10.2% and 18.2% in the COPD and control group, respectively. The frequency of the GG/GA genotype was 89.8% and 81.8% in the case and control group, respectively. The frequencies of A and G allele were 33.1% and 66.9% in the case group and 39.9% and 60.1% in the control group, respectively. Significant differences were observed in the genotype (χ2 = 5.559, P = 0.018) and allele (χ2 = 4.198, P = 0.040) distribution between the case and control group. For rs9944155, the frequency of the AA genotype was 50.2% and 18.2% in the case and control group, respectively. The frequency of the GG/GA genotype was 49.8% and 81.8% in the case and control group, respectively. The frequencies of A and G allele were 67.8% and 32.2% in the case group and 37.1% and 62.9% in the control group. Significant differences were observed in the genotype (χ2 = 47.386, P < 0.001) and allele (χ2 = 79.559, P < 0.001) distribution between the case and control group. For rs1051052, the frequency of the AA genotype was 25.8% and 64.1% in the case and control group, respectively. The frequency of the GG/GA genotype was 74.2% and 35.9% in the case and control group, respectively. The frequencies of A and G allele were 47.1% and 52.9% in the case group and 80.1% and 19.9% in the controls. 
Significant differences were observed in the genotype (χ2 = 62.991, P < 0.001) and allele (χ2 = 97.543, P < 0.001) distribution between the case and control group [Table 2].\nDistributions of rs1243166, rs9944155, and rs1051052 genotypes in COPD and control groups\nCOPD: Chronic obstructive pulmonary disease; –: Not available.", "\nTable 3 shows the association of rs1243166, rs9944155, and rs1051052 genotypes with COPD using logistic regression model. The risk of COPD in individuals carrying the rs1243166-GG and rs1243166-GA genotypes was 2.039-fold (95% confidence interval [CI]: 1.116–3.725; P = 0.019) and 1.875-fold (95% CI: 1.033–3.404, P = 0.037) than that of the rs1243166-AA genotype. In addition, we observed that the rs1051052-G allele posed a higher risk of COPD than that of the rs1051052-A allele (odds ratio [OR]: 19.433, 95% CI: 8.783–43.00, P < 0.001). However, individuals carrying the rs9944155-G allele had a lower risk of COPD than those carrying the rs9944155-A allele (OR: 0.121, 95% CI: 0.070–0.209, P < 0.001).\nDistributions of rs1243166, rs9944155, and rs1051052 genotypes and their association with risk of COPD\nCOPD: Chronic obstructive pulmonary disease; CI: Confidence interval; OR: Odds ratio; –: Not available.", "Alpha-1-AT levels in control group were significantly higher than those in COPD group (Z = 3.4820, P < 0.0001). We further compared the Alpha-1-AT levels among COPD and control group with different genotypes. For the rs1051052 polymorphism, Alpha-1-AT levels were the highest among COPD patients with AA genotype, followed by patients with AG and GG genotypes (χ2 = 122.45, P < 0.001). Similar trend was also observed in control group (χ2 = 23.67, <0.001). For rs1243166 polymorphism, Alpha-1-AT levels were the highest among COPD patients with AG genotype, followed by AA and GG genotype (χ2 = 11.89, P = 0.003). However, no significant difference was observed in control group (χ2 = 3.26, P = 0.196). With regard to rs9944115 genotype, both COPD and control groups showed no significant difference in Alpha-1-AT levels among different genotypes [Table 4].\nAssociations of Alpha-1-AT levels with genotypes in case and control groups\nAlpha-1-AT: Alpha-1-antitrypsin; –: Not available.", "This study was supported by the grant from the Youth Science and Technology Foundation of the Health and Family Planning Commission of Xinjiang Uygur Autonomous Region (No. 2015Y01)." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Ethical approval", "Study population", "SequenomMassARRAY single-nucleotide polymorphism testing", "Enzyme level detection", "Statistical analysis", "RESULTS", "Baseline characteristics of the study participants", "Allelic and genotypic frequencies", "Association of single-nucleotide polymorphism with chronic obstructive pulmonary disease", "Associations of Alpha-1-antitrypsin levels with genotype", "DISCUSSION", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Chronic obstructive pulmonary disease (COPD) is characterized by persistent airflow limitation and is a preventable and treatable disease. COPD is associated with enhanced inflammatory response in the airways and lung to the noxious particles and gasses. Currently, COPD is the fourth leading cause of death globally and is a major cause of morbidity.[1] It predicts that COPD will become the third leading cause of death worldwide by 2030.[2] The aging population worldwide and the ongoing exposure to risk factors are expected to further increase the prevalence of COPD.[3]\nThe development of COPD is affected by many factors. In addition to environmental factors like air pollution, evidence has shown that genetic factors might also be involved in the pathogenesis of COPD.[4] Alpha-1-AT, also known as esterase inhibitor-1 gene (protease inhibitor 1 [PI]), is localized on 14q32 and encodes serine PIs. The inhibition of the proteolytic activity protects the lung.[5] Several studies reported that the Alpha-1-antitrypsin (Alpha-1-AT) deficiency was significantly associated with the incidence of pulmonary emphysema.[6] In addition, one study estimated that about 1–2% of COPD might be caused by genetic deficiency in Alpha-1-AT.[7] However, the evidence is limited, as only a few epidemiological studies have explored the relationship between Alpha-1-AT gene polymorphism and COPD. One study showed that patients with PiSZ genotype may increase the risk of COPD, but the Alpha-1-AT PiSS genotype was not significantly associated with COPD risk.[8] Therefore, it is unclear whether other loci of the Alpha-1-AT gene affect the development of COPD. To fill this gap, we performed a case-control study in Uygur population in the Kashgar region, to investigate the possible relationship between the polymorphism of the rs1243166, rs9944155, and rs1051052 site of the Alpha-1-AT gene and COPD.", " Ethical approval The study was approved by the Medical Ethics Review Committee of the First People's Hospital in Kashgar. Informed consents were obtained from all participants in this study.\n Study population All subjects were enrolled from the First People's Hospital in Kashgar, from March 2013 to December 2015. The study selected 225 Uygur COPD patients as the case group who fulfilled the following criteria: (1) the COPD diagnosis was according to the Global Initiative for Obstructive Lung Disease guidelines;[9] (2) COPD is diagnosed by typical history, clinical manifestations, and chest X-ray or computed tomography (CT); (3) the index of pulmonary function, an indication of chronic airway obstruction, was defined as a forced expiratory volume in 1 (FEV1)/forced vital capacity (FVC) <70% after inhalation of 400 μg salbutamol; (4) the patients can be checked only after ceasing the usage of the drug including the controlled release theophylline tablets for 24 h, β2 receptor agonist for 12 h, and inhaled β2-agonist and anticholinergic drugs for 4 h. 
According to the principle of group matching (age and gender), 198 Uygur healthy individuals constituted the control group, satisfying the following conditions: (1) the individuals did not present chronic bronchitis and emphysema in chest X-ray or chest CT; (2) the normal lung function was described as FEV1% >80% and an FEV1/FVC >70% after inhalation of 400 μg salbutamol; (3) the individuals had no diseases such as bronchiectasis, tuberculosis, interstitial disease, asthma, and cancers. All the participants were of Uygur ethnicity and shared no kinship with each other. Information on gender, age, smoking status, and lung function was collected by questionnaire.\n SequenomMassARRAY single-nucleotide polymorphism testing \nDNA extraction\n\nWe collected 2 ml venous blood from all the participants. DNA extraction from blood samples was performed with Greiner Genomic DNA purification Kit (Greiner, Germany). DNA samples with adjusted concentration of 50 ng/μl were stored at −80° C storage box (DW-86L728; Qingdao Haier, China).\n\nPrimer design\n\nGenotyping was performed by polymerase chain reaction (PCR). The following primers (Shanghai YingJun Biotechnology Co., China) were used for PCR amplification: rs1243166: PCR primers: 2nd-PCRP: 5'-ACGTT GGATAAAGCACATCACCCATTGACC-3'; 1st-PCRP: 5'-ACGTTGGATGAAGAAGTCAGGCTGCATGTG-3'; UEP_SEQ: 5'-CCCTCCCTTTCCTCC-3'. rs9944155: PCR primers: 2nd-PCRP: 5'-ACGTTGGATGA AAAGTTTGGGGACTGCTGG-3'; 1st-PCRP: 5'-ACG TTGGATGCAGAAATCACTGCTTAGCCC-3'; UEP_SEQ: 5'-CGGTGACTGCTGGCTTACAC-3'. rs1051052: PCR primers: 2nd-PCRP: 5'-ACGTTGGA TGAGACCATTACCCTATATCCC-3'; 1st-PCRP: 5'-ACGTTGGATGCTGAGGAGTCCTTGCAATGG-3'; UEP_SEQ: 5'-GGATCCCTTCTCCTCCC-3'.\n\nSingle-nucleotide polymorphism genotyping\n\nMultiplex PCR technique was used to amplify the gene sequences of the selected sites under the following conditions: initial denaturation at 94°C for 4 min, 45 cycles of 94°C for 20 s, 56°C for 30 s, 72°C for 1 min, and a final extension at 72°C for 3 min. 
Enzyme level detection

Five milliliters of fasting venous blood was collected, left for 2 h at room temperature, and centrifuged at 626 × g for 5 min; the serum was separated and stored at −20°C. The serum level of Alpha-1-AT was measured by enzyme-linked immunosorbent assay.

Statistical analysis

SPSS 18.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Numerical data are expressed as mean ± standard deviation (SD). Deviations from Hardy-Weinberg equilibrium (HWE) were evaluated with the Chi-square test; P > 0.05 was considered consistent with HWE. Differences between COPD patients and controls were also compared with the Chi-square test. A univariate logistic regression model was applied to explore the relationship between COPD and the SNPs. Alpha-1-AT levels between the case and control groups, and among different genotypes, were compared with the Wilcoxon rank-sum test. P < 0.05 indicated a significant difference.
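As a concrete illustration of the HWE check described above, the goodness-of-fit statistic for a biallelic SNP can be computed from its three genotype counts. This is a minimal sketch; the counts in the example are invented for illustration, since the paper reports only genotype frequencies with GG and GA pooled.

```python
from scipy.stats import chi2


def hwe_chi_square(n_aa: int, n_ag: int, n_gg: int):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium
    on one biallelic SNP (1 degree of freedom)."""
    n = n_aa + n_ag + n_gg
    p = (2 * n_aa + n_ag) / (2 * n)  # observed frequency of the A allele
    q = 1 - p                        # observed frequency of the G allele
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ag, n_gg]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)  # P > 0.05: consistent with HWE


stat, p_value = hwe_chi_square(50, 100, 48)  # illustrative counts only
print(f"chi-square = {stat:.3f}, P = {p_value:.3f}")
```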
RESULTS

Baseline characteristics of the study participants

A total of 423 Uygur individuals from the Xinjiang Uygur Autonomous Region were enrolled in the study: 225 in the COPD group and 198 in the control group. No significant differences were observed between the two groups with respect to gender, age, weight, body mass index, cigarette smoking, passive smoking, biofuel exposure, or animal dust exposure (all P > 0.05), but a significant difference was found for occupational dust exposure (χ2 = 4.694, P = 0.030; Table 1).

Table 1: Comparison of clinical characteristics between the COPD and control groups. *Coal dust, cement dust, welding fume; †dove, cattle, and sheep fur. COPD: Chronic obstructive pulmonary disease; SD: standard deviation; BMI: body mass index.

Allelic and genotypic frequencies

The distributions of the rs1243166, rs9944155, and rs1051052 genotypes in the case and control groups are shown in Table 2. The genotype distribution of the Alpha-1-AT SNP rs1243166 was consistent with HWE in both the case (χ2 = 4.092, P = 0.129) and control (χ2 = 2.914, P = 0.233) groups. For SNPs rs9944155 and rs1051052, the genotype distributions were consistent with HWE in the control group (χ2 = 3.402, P = 0.183) but not in the case group (χ2 = 10.587, P = 0.005). For rs1243166, the frequency of the AA genotype was 10.2% in the COPD group and 18.2% in the control group, and the frequency of the GG/GA genotypes was 89.8% and 81.8%, respectively. The frequencies of the A and G alleles were 33.1% and 66.9% in the case group and 39.9% and 60.1% in the control group. Significant differences were observed between the case and control groups in both the genotype (χ2 = 5.559, P = 0.018) and allele (χ2 = 4.198, P = 0.040) distributions. For rs9944155, the frequency of the AA genotype was 50.2% in the case group and 18.2% in the control group, and the frequency of the GG/GA genotypes was 49.8% and 81.8%, respectively. The frequencies of the A and G alleles were 67.8% and 32.2% in the case group and 37.1% and 31.9% in the control group. Significant differences were observed in both the genotype (χ2 = 47.386, P < 0.001) and allele (χ2 = 79.559, P < 0.001) distributions. For rs1051052, the frequency of the AA genotype was 25.8% in the case group and 64.1% in the control group, and the frequency of the GG/GA genotypes was 74.2% and 35.9%, respectively. The frequencies of the A and G alleles were 47.1% and 52.9% in the case group and 80.1% and 19.9% in the controls. Significant differences were observed in both the genotype (χ2 = 62.991, P < 0.001) and allele (χ2 = 97.543, P < 0.001) distributions [Table 2].

Table 2: Distributions of rs1243166, rs9944155, and rs1051052 genotypes in the COPD and control groups. COPD: Chronic obstructive pulmonary disease; –: not available.
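The genotype comparisons above are standard chi-square tests on contingency tables, and the pooled counts can be back-calculated from the reported group sizes and percentages. The sketch below reproduces the rs1243166 genotype statistic from counts reconstructed this way; the exact table is our reconstruction, not taken from the paper.

```python
from scipy.stats import chi2_contingency

# rs1243166 genotype counts reconstructed from the reported percentages:
# AA is 10.2% of 225 cases (~23) and 18.2% of 198 controls (~36).
table = [
    [23, 202],  # COPD group:    AA, GG/GA
    [36, 162],  # control group: AA, GG/GA
]
stat, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {stat:.3f}, P = {p:.3f}")  # ~5.559, ~0.018 (cf. Table 2)
```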
Association of single-nucleotide polymorphisms with chronic obstructive pulmonary disease

Table 3 shows the associations of the rs1243166, rs9944155, and rs1051052 genotypes with COPD in the univariate logistic regression model. The risk of COPD in individuals carrying the rs1243166-GG and rs1243166-GA genotypes was 2.039-fold (95% confidence interval [CI]: 1.116–3.725; P = 0.019) and 1.875-fold (95% CI: 1.033–3.404; P = 0.037) that of the rs1243166-AA genotype, respectively. In addition, the rs1051052-G allele conferred a higher risk of COPD than the rs1051052-A allele (odds ratio [OR]: 19.433, 95% CI: 8.783–43.00, P < 0.001). In contrast, individuals carrying the rs9944155-G allele had a lower risk of COPD than those carrying the rs9944155-A allele (OR: 0.121, 95% CI: 0.070–0.209, P < 0.001).

Table 3: Distributions of rs1243166, rs9944155, and rs1051052 genotypes and their association with the risk of COPD. COPD: Chronic obstructive pulmonary disease; CI: confidence interval; OR: odds ratio; –: not available.
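The ORs and CIs above come from univariate logistic regression; for a binary predictor this is numerically equivalent to the Woolf (log-OR) calculation on a 2×2 table. The sketch below shows that calculation. The example reuses the reconstructed rs1243166 counts from the previous snippet to give a dominant-model (GG/GA vs. AA) OR, which is illustrative only and is not one of the per-genotype estimates reported in Table 3.

```python
import math


def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI for the 2x2 table
    [[a, b], [c, d]] = [[exposed cases, unexposed cases],
                        [exposed controls, unexposed controls]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)


# Dominant model for rs1243166 (GG/GA carriers vs. AA), reconstructed counts:
or_, (lo, hi) = odds_ratio_ci(202, 23, 162, 36)
print(f"OR = {or_:.3f} (95% CI: {lo:.3f}-{hi:.3f})")  # ~1.95 (1.11-3.43)
```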
Associations of Alpha-1-antitrypsin levels with genotype

Alpha-1-AT levels in the control group were significantly higher than those in the COPD group (Z = 3.4820, P < 0.0001). We further compared Alpha-1-AT levels across genotypes within the COPD and control groups. For the rs1051052 polymorphism, Alpha-1-AT levels were highest among COPD patients with the AA genotype, followed by those with the AG and GG genotypes (χ2 = 122.45, P < 0.001); a similar trend was observed in the control group (χ2 = 23.67, P < 0.001). For the rs1243166 polymorphism, Alpha-1-AT levels were highest among COPD patients with the AG genotype, followed by those with the AA and GG genotypes (χ2 = 11.89, P = 0.003); however, no significant difference was observed in the control group (χ2 = 3.26, P = 0.196). For the rs9944155 genotypes, neither the COPD group nor the control group showed a significant difference in Alpha-1-AT levels [Table 4].

Table 4: Associations of Alpha-1-AT levels with genotypes in the case and control groups. Alpha-1-AT: Alpha-1-antitrypsin; –: not available.
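The two-group comparison above uses the Wilcoxon rank-sum test named in the statistical analysis. A minimal sketch of that call follows; the serum values are invented placeholders, since individual-level measurements are not published.

```python
from scipy.stats import ranksums

# Hypothetical serum Alpha-1-AT values (g/L) for illustration only.
control_levels = [1.42, 1.31, 1.55, 1.24, 1.60, 1.38, 1.47]
copd_levels    = [0.92, 1.10, 1.01, 1.18, 0.85, 1.05, 0.97]

z, p = ranksums(control_levels, copd_levels)
print(f"Z = {z:.4f}, P = {p:.4f}")  # analogous to the reported Z = 3.4820
```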
DISCUSSION

The development of COPD is partially driven by genetic factors, and several analyses have identified genes that might play a role in COPD pathogenesis, including SERPINE2, CD79A, and POU2AF1.[11] In the current study, we observed that Alpha-1-AT is a susceptibility gene associated with COPD. The gene encoding Alpha-1-AT is located on chromosome 14q31.0–32.3 and is 12.2 kb in length. Mature Alpha-1-AT is a single peptide chain of 394 amino acids with a molecular weight of approximately 52,000 Da, containing nine alpha helices and three beta sheets. Alpha-1-AT binds its target protease in a tight 1:1 complex, thereby exerting its protease-inhibitory function.[12] In normal Saudi individuals, the percentages of Alpha-1-AT genotypes were 17%, 2%, 0.2%, 0.8%, and 0% for the MS, MZ, ZZ, SZ, and SS genotypes, respectively, and mean serum Alpha-1-AT levels were normal in these individuals.[13] Previous studies have shown that Alpha-1-AT expression varies across geographical regions and ethnicities and influences the occurrence and development of COPD. Whether polymorphisms at other loci of the gene also influence the development of COPD requires further study.
It was previously thought that genetic polymorphisms in exon regions can alter amino acids in vital parts of a protein and thereby change its function, whereas polymorphisms in regulatory regions can affect the level of gene expression; the physiological significance of these two types of SNPs forms the core of the study of susceptibility genes in polygenic diseases. The 3' untranslated region (3' UTR) lies at the 3' end of the transcript, downstream of the coding sequence. It is bound by miRNAs, which regulate gene expression post-transcriptionally by promoting mRNA degradation or inhibiting translation. Accordingly, the number of studies on the relationship between SNPs in the 3' UTR and disease is gradually increasing. Because the occurrence of COPD is affected by many factors, including environment, genetic susceptibility, and racial differences, a single gene locus cannot completely explain its complex pathogenesis.[14] In the absence of injury, individuals with or without the mutation can maintain normal Alpha-1-AT levels and protect the lung tissue. Under stress, however, such as infection, inhalation of harmful environmental components, or chronic inflammation, Alpha-1-AT levels in mutation carriers do not rise in step with the increase in damaging factors (such as proteases), and the lung tissue is therefore not effectively protected. Repeated stress-induced damage accumulates gradually, leading to COPD.[15] Zhao et al.[16] found that Alpha-1-AT 3' UTR mutations were significantly more frequent in patients with COPD and lung cancer, identifying them as a risk factor for both diseases.

The present study selected the rs1243166, rs9944155, and rs1051052 sites located in the 3' UTR of Alpha-1-AT as candidate loci and explored their association with COPD in the Uygur population of the Kashgar region. The results showed that the Alpha-1-AT rs1243166, rs9944155, and rs1051052 sites might be associated with the onset of COPD in the Uygur population. The rs1243166-G allele was associated with a higher risk of COPD; the rs9944155-G allele might be protective against COPD in the Uygur population; and Uygur individuals carrying the rs1051052-G allele might have a higher risk of COPD. Alpha-1-AT levels were lower in patients with COPD, and enzyme levels differed among genotypes. Previous similar studies have not been able to determine whether such results are related to race or region. In addition, the sample size of this study was small and might not fully represent the mutation status of the whole population. It is therefore necessary to expand the sample size in future studies and to investigate other sites in order to assess the relationship between mutations in the 3' UTR of Alpha-1-AT and the occurrence and development of COPD.

Financial support and sponsorship

This study was supported by a grant from the Youth Science and Technology Foundation of the Health and Family Planning Commission of Xinjiang Uygur Autonomous Region (No. 2015Y01).
Conflicts of interest

There are no conflicts of interest.
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, null, "discussion", null, "COI-statement" ]
[ "Alpha-1-antitrypsin", "Chronic Obstructive Pulmonary Disease", "Polymorphism", "Uygur Population" ]
INTRODUCTION: Chronic obstructive pulmonary disease (COPD) is characterized by persistent airflow limitation and is a preventable and treatable disease. COPD is associated with enhanced inflammatory response in the airways and lung to the noxious particles and gasses. Currently, COPD is the fourth leading cause of death globally and is a major cause of morbidity.[1] It predicts that COPD will become the third leading cause of death worldwide by 2030.[2] The aging population worldwide and the ongoing exposure to risk factors are expected to further increase the prevalence of COPD.[3] The development of COPD is affected by many factors. In addition to environmental factors like air pollution, evidence has shown that genetic factors might also be involved in the pathogenesis of COPD.[4] Alpha-1-AT, also known as esterase inhibitor-1 gene (protease inhibitor 1 [PI]), is localized on 14q32 and encodes serine PIs. The inhibition of the proteolytic activity protects the lung.[5] Several studies reported that the Alpha-1-antitrypsin (Alpha-1-AT) deficiency was significantly associated with the incidence of pulmonary emphysema.[6] In addition, one study estimated that about 1–2% of COPD might be caused by genetic deficiency in Alpha-1-AT.[7] However, the evidence is limited, as only a few epidemiological studies have explored the relationship between Alpha-1-AT gene polymorphism and COPD. One study showed that patients with PiSZ genotype may increase the risk of COPD, but the Alpha-1-AT PiSS genotype was not significantly associated with COPD risk.[8] Therefore, it is unclear whether other loci of the Alpha-1-AT gene affect the development of COPD. To fill this gap, we performed a case-control study in Uygur population in the Kashgar region, to investigate the possible relationship between the polymorphism of the rs1243166, rs9944155, and rs1051052 site of the Alpha-1-AT gene and COPD. METHODS: Ethical approval The study was approved by the Medical Ethics Review Committee of the First People's Hospital in Kashgar. Informed consents were obtained from all participants in this study. The study was approved by the Medical Ethics Review Committee of the First People's Hospital in Kashgar. Informed consents were obtained from all participants in this study. Study population All subjects were enrolled from the First People's Hospital in Kashgar, from March 2013 to December 2015. The study selected 225 Uygur COPD patients as the case group who fulfilled the following criteria: (1) the COPD diagnosis was according to the Global Initiative for Obstructive Lung Disease guidelines;[9] (2) COPD is diagnosed by typical history, clinical manifestations, and chest X-ray or computed tomography (CT); (3) the index of pulmonary function, an indication of chronic airway obstruction, was defined as a forced expiratory volume in 1 (FEV1)/forced vital capacity (FVC) <70% after inhalation of 400 μg salbutamol; (4) the patients can be checked only after ceasing the usage of the drug including the controlled release theophylline tablets for 24 h, β2 receptor agonist for 12 h, and inhaled β2-agonist and anticholinergic drugs for 4 h. 
According to the principle of group matching (age and gender), 198 Uygur healthy individuals constituted the control group, satisfying the following conditions: (1) the individuals did not present chronic bronchitis and emphysema in chest X-ray or chest CT; (2) the normal lung function was described as FEV1% >80% and an FEV1/FVC >70% after inhalation of 400 μg salbutamol; (3) the individuals had no diseases such as bronchiectasis, tuberculosis, interstitial disease, asthma, and cancers. All the participants were of Uygur ethnicity and shared no kinship with each other. Information on gender, age, smoking status, and lung function was collected by questionnaire. All subjects were enrolled from the First People's Hospital in Kashgar, from March 2013 to December 2015. The study selected 225 Uygur COPD patients as the case group who fulfilled the following criteria: (1) the COPD diagnosis was according to the Global Initiative for Obstructive Lung Disease guidelines;[9] (2) COPD is diagnosed by typical history, clinical manifestations, and chest X-ray or computed tomography (CT); (3) the index of pulmonary function, an indication of chronic airway obstruction, was defined as a forced expiratory volume in 1 (FEV1)/forced vital capacity (FVC) <70% after inhalation of 400 μg salbutamol; (4) the patients can be checked only after ceasing the usage of the drug including the controlled release theophylline tablets for 24 h, β2 receptor agonist for 12 h, and inhaled β2-agonist and anticholinergic drugs for 4 h. According to the principle of group matching (age and gender), 198 Uygur healthy individuals constituted the control group, satisfying the following conditions: (1) the individuals did not present chronic bronchitis and emphysema in chest X-ray or chest CT; (2) the normal lung function was described as FEV1% >80% and an FEV1/FVC >70% after inhalation of 400 μg salbutamol; (3) the individuals had no diseases such as bronchiectasis, tuberculosis, interstitial disease, asthma, and cancers. All the participants were of Uygur ethnicity and shared no kinship with each other. Information on gender, age, smoking status, and lung function was collected by questionnaire. SequenomMassARRAY single-nucleotide polymorphism testing DNA extraction We collected 2 ml venous blood from all the participants. DNA extraction from blood samples was performed with Greiner Genomic DNA purification Kit (Greiner, Germany). DNA samples with adjusted concentration of 50 ng/μl were stored at −80° C storage box (DW-86L728; Qingdao Haier, China). Primer design Genotyping was performed by polymerase chain reaction (PCR). The following primers (Shanghai YingJun Biotechnology Co., China) were used for PCR amplification: rs1243166: PCR primers: 2nd-PCRP: 5'-ACGTT GGATAAAGCACATCACCCATTGACC-3'; 1st-PCRP: 5'-ACGTTGGATGAAGAAGTCAGGCTGCATGTG-3'; UEP_SEQ: 5'-CCCTCCCTTTCCTCC-3'. rs9944155: PCR primers: 2nd-PCRP: 5'-ACGTTGGATGA AAAGTTTGGGGACTGCTGG-3'; 1st-PCRP: 5'-ACG TTGGATGCAGAAATCACTGCTTAGCCC-3'; UEP_SEQ: 5'-CGGTGACTGCTGGCTTACAC-3'. rs1051052: PCR primers: 2nd-PCRP: 5'-ACGTTGGA TGAGACCATTACCCTATATCCC-3'; 1st-PCRP: 5'-ACGTTGGATGCTGAGGAGTCCTTGCAATGG-3'; UEP_SEQ: 5'-GGATCCCTTCTCCTCCC-3'. Single-nucleotide polymorphism genotyping Multiplex PCR technique was used to amplify the gene sequences of the selected sites under the following conditions: initial denaturation at 94°C for 4 min, 45 cycles of 94°C for 20 s, 56°C for 30 s, 72°C for 1 min, and a final extension at 72°C for 3 min. 
The amplified products were purified by shrimp alkaline phosphatase and added to dNTP; the amplification of the single-nucleotide polymorphism (SNP) site was performed by a single base extension primer (about 20 bp). After extension, the DNA products purified with resins were detected by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MS) and SNP genotyping.[10] DNA extraction We collected 2 ml venous blood from all the participants. DNA extraction from blood samples was performed with Greiner Genomic DNA purification Kit (Greiner, Germany). DNA samples with adjusted concentration of 50 ng/μl were stored at −80° C storage box (DW-86L728; Qingdao Haier, China). Primer design Genotyping was performed by polymerase chain reaction (PCR). The following primers (Shanghai YingJun Biotechnology Co., China) were used for PCR amplification: rs1243166: PCR primers: 2nd-PCRP: 5'-ACGTT GGATAAAGCACATCACCCATTGACC-3'; 1st-PCRP: 5'-ACGTTGGATGAAGAAGTCAGGCTGCATGTG-3'; UEP_SEQ: 5'-CCCTCCCTTTCCTCC-3'. rs9944155: PCR primers: 2nd-PCRP: 5'-ACGTTGGATGA AAAGTTTGGGGACTGCTGG-3'; 1st-PCRP: 5'-ACG TTGGATGCAGAAATCACTGCTTAGCCC-3'; UEP_SEQ: 5'-CGGTGACTGCTGGCTTACAC-3'. rs1051052: PCR primers: 2nd-PCRP: 5'-ACGTTGGA TGAGACCATTACCCTATATCCC-3'; 1st-PCRP: 5'-ACGTTGGATGCTGAGGAGTCCTTGCAATGG-3'; UEP_SEQ: 5'-GGATCCCTTCTCCTCCC-3'. Single-nucleotide polymorphism genotyping Multiplex PCR technique was used to amplify the gene sequences of the selected sites under the following conditions: initial denaturation at 94°C for 4 min, 45 cycles of 94°C for 20 s, 56°C for 30 s, 72°C for 1 min, and a final extension at 72°C for 3 min. The amplified products were purified by shrimp alkaline phosphatase and added to dNTP; the amplification of the single-nucleotide polymorphism (SNP) site was performed by a single base extension primer (about 20 bp). After extension, the DNA products purified with resins were detected by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MS) and SNP genotyping.[10] Enzyme level detection Five milliliters fasting venous blood was collected, laid up for 2 h with room temperature, and centrifuged at 626 × g for 5 min, with the serum being separated and stored at −20°C. The level of Alpha-1-AT in serum was detected by enzyme-linked immunosorbent assay. Five milliliters fasting venous blood was collected, laid up for 2 h with room temperature, and centrifuged at 626 × g for 5 min, with the serum being separated and stored at −20°C. The level of Alpha-1-AT in serum was detected by enzyme-linked immunosorbent assay. Statistical analysis The SPSS 18.0 (version 18.0, SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Numerical data were expressed as a mean ± standard deviation (SD). Deviations from Hardy-Weinberg equilibrium (HWE) were evaluated using Chi-square test, and a value of P > 0.05 was considered as conforming the HWE. The difference between COPD patients and controls were also compared by Chi-square test. Univariate logistic regression model was applied to explore the relationship between COPD and the SNPs. The Alpha-1-AT levels between case group and control group and among different genotypes were compared by Wilcoxon rank-sum test. A P < 0.05 indicated a significant difference. The SPSS 18.0 (version 18.0, SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Numerical data were expressed as a mean ± standard deviation (SD). 
Deviations from Hardy-Weinberg equilibrium (HWE) were evaluated using Chi-square test, and a value of P > 0.05 was considered as conforming the HWE. The difference between COPD patients and controls were also compared by Chi-square test. Univariate logistic regression model was applied to explore the relationship between COPD and the SNPs. The Alpha-1-AT levels between case group and control group and among different genotypes were compared by Wilcoxon rank-sum test. A P < 0.05 indicated a significant difference. Ethical approval: The study was approved by the Medical Ethics Review Committee of the First People's Hospital in Kashgar. Informed consents were obtained from all participants in this study. Study population: All subjects were enrolled from the First People's Hospital in Kashgar, from March 2013 to December 2015. The study selected 225 Uygur COPD patients as the case group who fulfilled the following criteria: (1) the COPD diagnosis was according to the Global Initiative for Obstructive Lung Disease guidelines;[9] (2) COPD is diagnosed by typical history, clinical manifestations, and chest X-ray or computed tomography (CT); (3) the index of pulmonary function, an indication of chronic airway obstruction, was defined as a forced expiratory volume in 1 (FEV1)/forced vital capacity (FVC) <70% after inhalation of 400 μg salbutamol; (4) the patients can be checked only after ceasing the usage of the drug including the controlled release theophylline tablets for 24 h, β2 receptor agonist for 12 h, and inhaled β2-agonist and anticholinergic drugs for 4 h. According to the principle of group matching (age and gender), 198 Uygur healthy individuals constituted the control group, satisfying the following conditions: (1) the individuals did not present chronic bronchitis and emphysema in chest X-ray or chest CT; (2) the normal lung function was described as FEV1% >80% and an FEV1/FVC >70% after inhalation of 400 μg salbutamol; (3) the individuals had no diseases such as bronchiectasis, tuberculosis, interstitial disease, asthma, and cancers. All the participants were of Uygur ethnicity and shared no kinship with each other. Information on gender, age, smoking status, and lung function was collected by questionnaire. SequenomMassARRAY single-nucleotide polymorphism testing: DNA extraction We collected 2 ml venous blood from all the participants. DNA extraction from blood samples was performed with Greiner Genomic DNA purification Kit (Greiner, Germany). DNA samples with adjusted concentration of 50 ng/μl were stored at −80° C storage box (DW-86L728; Qingdao Haier, China). Primer design Genotyping was performed by polymerase chain reaction (PCR). The following primers (Shanghai YingJun Biotechnology Co., China) were used for PCR amplification: rs1243166: PCR primers: 2nd-PCRP: 5'-ACGTT GGATAAAGCACATCACCCATTGACC-3'; 1st-PCRP: 5'-ACGTTGGATGAAGAAGTCAGGCTGCATGTG-3'; UEP_SEQ: 5'-CCCTCCCTTTCCTCC-3'. rs9944155: PCR primers: 2nd-PCRP: 5'-ACGTTGGATGA AAAGTTTGGGGACTGCTGG-3'; 1st-PCRP: 5'-ACG TTGGATGCAGAAATCACTGCTTAGCCC-3'; UEP_SEQ: 5'-CGGTGACTGCTGGCTTACAC-3'. rs1051052: PCR primers: 2nd-PCRP: 5'-ACGTTGGA TGAGACCATTACCCTATATCCC-3'; 1st-PCRP: 5'-ACGTTGGATGCTGAGGAGTCCTTGCAATGG-3'; UEP_SEQ: 5'-GGATCCCTTCTCCTCCC-3'. 
Single-nucleotide polymorphism genotyping Multiplex PCR technique was used to amplify the gene sequences of the selected sites under the following conditions: initial denaturation at 94°C for 4 min, 45 cycles of 94°C for 20 s, 56°C for 30 s, 72°C for 1 min, and a final extension at 72°C for 3 min. The amplified products were purified by shrimp alkaline phosphatase and added to dNTP; the amplification of the single-nucleotide polymorphism (SNP) site was performed by a single base extension primer (about 20 bp). After extension, the DNA products purified with resins were detected by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MS) and SNP genotyping.[10] Enzyme level detection: Five milliliters fasting venous blood was collected, laid up for 2 h with room temperature, and centrifuged at 626 × g for 5 min, with the serum being separated and stored at −20°C. The level of Alpha-1-AT in serum was detected by enzyme-linked immunosorbent assay. Statistical analysis: The SPSS 18.0 (version 18.0, SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Numerical data were expressed as a mean ± standard deviation (SD). Deviations from Hardy-Weinberg equilibrium (HWE) were evaluated using Chi-square test, and a value of P > 0.05 was considered as conforming the HWE. The difference between COPD patients and controls were also compared by Chi-square test. Univariate logistic regression model was applied to explore the relationship between COPD and the SNPs. The Alpha-1-AT levels between case group and control group and among different genotypes were compared by Wilcoxon rank-sum test. A P < 0.05 indicated a significant difference. RESULTS: Baseline characteristics of the study participants A total of 423 Uygur individuals from Xinjiang Uygur Autonomous Region were enrolled in the study. Among them, 225 comprised the COPD group and 198 formed the control group. No significant differences were observed between the two groups with respect to gender, age, weight, body mass index, cigarette smoking, passive smoking, biofuel exposure, and animal dust exposure (all P > 0.05), but significant difference was found for occupational dust exposure (χ2 = 4.694, P = 0.030; Table 1). Comparison of clinical characteristics between COPD and control groups *Coal dust, cement dust, welding fume; †Dove, cattle, and sheep fur. COPD: Chronic obstructive pulmonary disease; SD: Standard deviation; BMI: Body mass index. A total of 423 Uygur individuals from Xinjiang Uygur Autonomous Region were enrolled in the study. Among them, 225 comprised the COPD group and 198 formed the control group. No significant differences were observed between the two groups with respect to gender, age, weight, body mass index, cigarette smoking, passive smoking, biofuel exposure, and animal dust exposure (all P > 0.05), but significant difference was found for occupational dust exposure (χ2 = 4.694, P = 0.030; Table 1). Comparison of clinical characteristics between COPD and control groups *Coal dust, cement dust, welding fume; †Dove, cattle, and sheep fur. COPD: Chronic obstructive pulmonary disease; SD: Standard deviation; BMI: Body mass index. Allelic and genotypic frequencies The distributions of rs1243166, rs9944155, and rs1051052 genotype among the case and control group are shown in Table 2. The genotype distribution of Alpha-1-AT SNPs rs1243166 was in line with HWE in both case (χ2 = 4.092, P = 0.129) and control (χ2 = 2.914, P = 0.233) groups. 
For SNPs rs9944155 and rs1051052, genotype distribution was in line with HWE in the control group (χ2 = 3.402, P = 0.183), while they were not in the case group (χ2 = 10.587, P = 0.005). For rs1243166, the frequency of the AA genotype was 10.2% and 18.2% in the COPD and control group, respectively. The frequency of the GG/GA genotype was 89.8% and 81.8% in the case and control group, respectively. The frequencies of A and G allele were 33.1% and 66.9% in the case group and 39.9% and 60.1% in the control group, respectively. Significant differences were observed in the genotype (χ2 = 5.559, P = 0.018) and allele (χ2 = 4.198, P = 0.040) distribution between the case and control group. For rs9944155, the frequency of the AA genotype was 50.2% and 18.2% in the case and control group, respectively. The frequency of the GG/GA genotype was 49.8% and 81.8% in the case and control group, respectively. The frequencies of A and G allele were 67.8% and 32.2% in the case group and 37.1% and 31.9% in the control group. Significant differences were observed in the genotype (χ2 = 47.386, P < 0.001) and allele (χ2 = 79.559, P < 0.001) distribution between the case and control group. For rs1051052, the frequency of the AA genotype was 25.8% and 64.1% in the case and control group, respectively. The frequency of the GG/GA genotype was 74.2% and 35.9% in the case and control group, respectively. The frequencies of A and G allele were 47.1% and 52.9% in the case group and 80.1% and 19.9% in the controls. Significant differences were observed in the genotype (χ2 = 62.991, P < 0.001) and allele (χ2 = 97.543, P < 0.001) distribution between the case and control group [Table 2]. Distributions of rs1243166, rs9944155, and rs1051052 genotypes in COPD and control groups COPD: Chronic obstructive pulmonary disease; –: Not available. The distributions of rs1243166, rs9944155, and rs1051052 genotype among the case and control group are shown in Table 2. The genotype distribution of Alpha-1-AT SNPs rs1243166 was in line with HWE in both case (χ2 = 4.092, P = 0.129) and control (χ2 = 2.914, P = 0.233) groups. For SNPs rs9944155 and rs1051052, genotype distribution was in line with HWE in the control group (χ2 = 3.402, P = 0.183), while they were not in the case group (χ2 = 10.587, P = 0.005). For rs1243166, the frequency of the AA genotype was 10.2% and 18.2% in the COPD and control group, respectively. The frequency of the GG/GA genotype was 89.8% and 81.8% in the case and control group, respectively. The frequencies of A and G allele were 33.1% and 66.9% in the case group and 39.9% and 60.1% in the control group, respectively. Significant differences were observed in the genotype (χ2 = 5.559, P = 0.018) and allele (χ2 = 4.198, P = 0.040) distribution between the case and control group. For rs9944155, the frequency of the AA genotype was 50.2% and 18.2% in the case and control group, respectively. The frequency of the GG/GA genotype was 49.8% and 81.8% in the case and control group, respectively. The frequencies of A and G allele were 67.8% and 32.2% in the case group and 37.1% and 31.9% in the control group. Significant differences were observed in the genotype (χ2 = 47.386, P < 0.001) and allele (χ2 = 79.559, P < 0.001) distribution between the case and control group. For rs1051052, the frequency of the AA genotype was 25.8% and 64.1% in the case and control group, respectively. The frequency of the GG/GA genotype was 74.2% and 35.9% in the case and control group, respectively. 
The frequencies of A and G allele were 47.1% and 52.9% in the case group and 80.1% and 19.9% in the controls. Significant differences were observed in the genotype (χ2 = 62.991, P < 0.001) and allele (χ2 = 97.543, P < 0.001) distribution between the case and control group [Table 2]. Distributions of rs1243166, rs9944155, and rs1051052 genotypes in COPD and control groups COPD: Chronic obstructive pulmonary disease; –: Not available. Association of single-nucleotide polymorphism with chronic obstructive pulmonary disease Table 3 shows the association of rs1243166, rs9944155, and rs1051052 genotypes with COPD using logistic regression model. The risk of COPD in individuals carrying the rs1243166-GG and rs1243166-GA genotypes was 2.039-fold (95% confidence interval [CI]: 1.116–3.725; P = 0.019) and 1.875-fold (95% CI: 1.033–3.404, P = 0.037) than that of the rs1243166-AA genotype. In addition, we observed that the rs1051052-G allele posed a higher risk of COPD than that of the rs1051052-A allele (odds ratio [OR]: 19.433, 95% CI: 8.783–43.00, P < 0.001). However, individuals carrying the rs9944155-G allele had a lower risk of COPD than those carrying the rs9944155-A allele (OR: 0.121, 95% CI: 0.070–0.209, P < 0.001). Distributions of rs1243166, rs9944155, and rs1051052 genotypes and their association with risk of COPD COPD: Chronic obstructive pulmonary disease; CI: Confidence interval; OR: Odds ratio; –: Not available. Table 3 shows the association of rs1243166, rs9944155, and rs1051052 genotypes with COPD using logistic regression model. The risk of COPD in individuals carrying the rs1243166-GG and rs1243166-GA genotypes was 2.039-fold (95% confidence interval [CI]: 1.116–3.725; P = 0.019) and 1.875-fold (95% CI: 1.033–3.404, P = 0.037) than that of the rs1243166-AA genotype. In addition, we observed that the rs1051052-G allele posed a higher risk of COPD than that of the rs1051052-A allele (odds ratio [OR]: 19.433, 95% CI: 8.783–43.00, P < 0.001). However, individuals carrying the rs9944155-G allele had a lower risk of COPD than those carrying the rs9944155-A allele (OR: 0.121, 95% CI: 0.070–0.209, P < 0.001). Distributions of rs1243166, rs9944155, and rs1051052 genotypes and their association with risk of COPD COPD: Chronic obstructive pulmonary disease; CI: Confidence interval; OR: Odds ratio; –: Not available. Associations of Alpha-1-antitrypsin levels with genotype Alpha-1-AT levels in control group were significantly higher than those in COPD group (Z = 3.4820, P < 0.0001). We further compared the Alpha-1-AT levels among COPD and control group with different genotypes. For the rs1051052 polymorphism, Alpha-1-AT levels were the highest among COPD patients with AA genotype, followed by patients with AG and GG genotypes (χ2 = 122.45, P < 0.001). Similar trend was also observed in control group (χ2 = 23.67, <0.001). For rs1243166 polymorphism, Alpha-1-AT levels were the highest among COPD patients with AG genotype, followed by AA and GG genotype (χ2 = 11.89, P = 0.003). However, no significant difference was observed in control group (χ2 = 3.26, P = 0.196). With regard to rs9944115 genotype, both COPD and control groups showed no significant difference in Alpha-1-AT levels among different genotypes [Table 4]. Associations of Alpha-1-AT levels with genotypes in case and control groups Alpha-1-AT: Alpha-1-antitrypsin; –: Not available. Alpha-1-AT levels in control group were significantly higher than those in COPD group (Z = 3.4820, P < 0.0001). 
We further compared the Alpha-1-AT levels among COPD and control group with different genotypes. For the rs1051052 polymorphism, Alpha-1-AT levels were the highest among COPD patients with AA genotype, followed by patients with AG and GG genotypes (χ2 = 122.45, P < 0.001). Similar trend was also observed in control group (χ2 = 23.67, <0.001). For rs1243166 polymorphism, Alpha-1-AT levels were the highest among COPD patients with AG genotype, followed by AA and GG genotype (χ2 = 11.89, P = 0.003). However, no significant difference was observed in control group (χ2 = 3.26, P = 0.196). With regard to rs9944115 genotype, both COPD and control groups showed no significant difference in Alpha-1-AT levels among different genotypes [Table 4]. Associations of Alpha-1-AT levels with genotypes in case and control groups Alpha-1-AT: Alpha-1-antitrypsin; –: Not available. Baseline characteristics of the study participants: A total of 423 Uygur individuals from Xinjiang Uygur Autonomous Region were enrolled in the study. Among them, 225 comprised the COPD group and 198 formed the control group. No significant differences were observed between the two groups with respect to gender, age, weight, body mass index, cigarette smoking, passive smoking, biofuel exposure, and animal dust exposure (all P > 0.05), but significant difference was found for occupational dust exposure (χ2 = 4.694, P = 0.030; Table 1). Comparison of clinical characteristics between COPD and control groups *Coal dust, cement dust, welding fume; †Dove, cattle, and sheep fur. COPD: Chronic obstructive pulmonary disease; SD: Standard deviation; BMI: Body mass index. Allelic and genotypic frequencies: The distributions of rs1243166, rs9944155, and rs1051052 genotype among the case and control group are shown in Table 2. The genotype distribution of Alpha-1-AT SNPs rs1243166 was in line with HWE in both case (χ2 = 4.092, P = 0.129) and control (χ2 = 2.914, P = 0.233) groups. For SNPs rs9944155 and rs1051052, genotype distribution was in line with HWE in the control group (χ2 = 3.402, P = 0.183), while they were not in the case group (χ2 = 10.587, P = 0.005). For rs1243166, the frequency of the AA genotype was 10.2% and 18.2% in the COPD and control group, respectively. The frequency of the GG/GA genotype was 89.8% and 81.8% in the case and control group, respectively. The frequencies of A and G allele were 33.1% and 66.9% in the case group and 39.9% and 60.1% in the control group, respectively. Significant differences were observed in the genotype (χ2 = 5.559, P = 0.018) and allele (χ2 = 4.198, P = 0.040) distribution between the case and control group. For rs9944155, the frequency of the AA genotype was 50.2% and 18.2% in the case and control group, respectively. The frequency of the GG/GA genotype was 49.8% and 81.8% in the case and control group, respectively. The frequencies of A and G allele were 67.8% and 32.2% in the case group and 37.1% and 31.9% in the control group. Significant differences were observed in the genotype (χ2 = 47.386, P < 0.001) and allele (χ2 = 79.559, P < 0.001) distribution between the case and control group. For rs1051052, the frequency of the AA genotype was 25.8% and 64.1% in the case and control group, respectively. The frequency of the GG/GA genotype was 74.2% and 35.9% in the case and control group, respectively. The frequencies of A and G allele were 47.1% and 52.9% in the case group and 80.1% and 19.9% in the controls. 
Significant differences were observed in the genotype (χ2 = 62.991, P < 0.001) and allele (χ2 = 97.543, P < 0.001) distribution between the case and control group [Table 2]. Distributions of rs1243166, rs9944155, and rs1051052 genotypes in COPD and control groups COPD: Chronic obstructive pulmonary disease; –: Not available. Association of single-nucleotide polymorphism with chronic obstructive pulmonary disease: Table 3 shows the association of rs1243166, rs9944155, and rs1051052 genotypes with COPD using logistic regression model. The risk of COPD in individuals carrying the rs1243166-GG and rs1243166-GA genotypes was 2.039-fold (95% confidence interval [CI]: 1.116–3.725; P = 0.019) and 1.875-fold (95% CI: 1.033–3.404, P = 0.037) than that of the rs1243166-AA genotype. In addition, we observed that the rs1051052-G allele posed a higher risk of COPD than that of the rs1051052-A allele (odds ratio [OR]: 19.433, 95% CI: 8.783–43.00, P < 0.001). However, individuals carrying the rs9944155-G allele had a lower risk of COPD than those carrying the rs9944155-A allele (OR: 0.121, 95% CI: 0.070–0.209, P < 0.001). Distributions of rs1243166, rs9944155, and rs1051052 genotypes and their association with risk of COPD COPD: Chronic obstructive pulmonary disease; CI: Confidence interval; OR: Odds ratio; –: Not available. Associations of Alpha-1-antitrypsin levels with genotype: Alpha-1-AT levels in control group were significantly higher than those in COPD group (Z = 3.4820, P < 0.0001). We further compared the Alpha-1-AT levels among COPD and control group with different genotypes. For the rs1051052 polymorphism, Alpha-1-AT levels were the highest among COPD patients with AA genotype, followed by patients with AG and GG genotypes (χ2 = 122.45, P < 0.001). Similar trend was also observed in control group (χ2 = 23.67, <0.001). For rs1243166 polymorphism, Alpha-1-AT levels were the highest among COPD patients with AG genotype, followed by AA and GG genotype (χ2 = 11.89, P = 0.003). However, no significant difference was observed in control group (χ2 = 3.26, P = 0.196). With regard to rs9944115 genotype, both COPD and control groups showed no significant difference in Alpha-1-AT levels among different genotypes [Table 4]. Associations of Alpha-1-AT levels with genotypes in case and control groups Alpha-1-AT: Alpha-1-antitrypsin; –: Not available. DISCUSSION: The development of COPD is partially caused by genetic factors. Some analyses also identified interacting genes that might play a role in COPD pathogenesis. These genes include SERPINE2, CD79A, and POU2AF1.[11] In the current study, we observed that Alpha-1-AT is a susceptibility gene associated with COPD. Genes encoding Alpha-1-AT are located on chromosomes 14q31.0–32.3, 12.2 kb in length. Mature Alpha-1-AT is composed of a single peptide chain containing 394 amino acids, molecular weight to be approximately 52,000 Da, nine alpha helices, and three beta folds. Alpha-1-AT interacts with the protease to form a 1:1 tight structure and develop the inhibitory function of the protease.[12] The percentage of Alpha-1-AT genotypes in normal Saudi individuals was 17%, 2%, 0.2%, 0.8%, and 0% for MS, MZ, ZZ, SZ, and SS genotypes, respectively. The mean value of serum Alpha-1-AT levels was normal in these individuals.[13] Previous studies have shown that Alpha-1-AT expression in different geographical regions and ethnicities influences the occurrence and development of COPD. 
However, polymorphisms at other loci of the gene may also influence the development of COPD, which requires further study. According to the conventional view, polymorphisms in exon regions can alter amino acids in vital parts of the protein and thereby change its function, whereas polymorphisms in regulatory regions can affect the level of gene expression; the physiological significance of these two types of SNPs forms the core of susceptibility-gene studies in polygenic diseases. The 3’ UTR is located downstream of the coding region, at the 3’ end of the mRNA. By binding miRNAs, it regulates gene expression post-transcriptionally, either through degradation of the mRNA or through inhibition of translation. Therefore, the number of studies on the relationship between SNPs in the 3’ UTR and disease is gradually increasing. Since the occurrence of COPD is affected by many factors, such as environment, genetic susceptibility, and racial differences, results from a single gene locus cannot completely explain the complex pathogenesis.[14] In the absence of injury, both mutant and non-mutant individuals can maintain normal Alpha-1-AT levels and protect the lung tissue from damage. However, under stress, such as infection, inhalation of harmful environmental components, and chronic inflammation, the Alpha-1-AT level in mutant individuals does not rise with increasing levels of damaging factors (such as proteases), rendering it unable to protect the lung tissue effectively. Repeated stress-induced damage accumulates gradually, leading to COPD.[15] Zhao et al.[16] found that Alpha-1-AT 3’ UTR mutations were significantly more frequent in patients with COPD and lung cancer, suggesting that such mutations are a risk factor for both diseases. The present study selected the rs1243166, rs9944155, and rs1051052 sites located in the 3’ UTR of the candidate gene Alpha-1-AT and explored their association with COPD in the Uygur population of the Kashgar region. The results showed that the Alpha-1-AT rs1243166, rs9944155, and rs1051052 sites might be associated with the onset of COPD in the Uygur population. The rs1243166-G allele was associated with a higher risk of COPD. The rs9944155-G allele might serve as a protective allele against COPD in the Uygur population, whereas Uygur individuals carrying the rs1051052-G allele might have a higher risk of COPD. Alpha-1-AT levels were lower in patients with COPD, and the levels corresponding to different genotypes differed. Previous similar studies have not been able to determine whether such results are related to race or region. In addition, the sample size of this study was small and might not completely represent the gene mutation status of the whole population. Therefore, it is necessary to expand the sample size in future studies and to investigate other sites to assess the relationship between mutations in the 3’ UTR of Alpha-1-AT and the occurrence and development of COPD. Financial support and sponsorship This study was supported by the grant from the Youth Science and Technology Foundation of the Health and Family Planning Commission of Xinjiang Uygur Autonomous Region (No. 2015Y01). Conflicts of interest There are no conflicts of interest. 
Financial support and sponsorship: This study was supported by the grant from the Youth Science and Technology Foundation of the Health and Family Planning Commission of Xinjiang Uygur Autonomous Region (No. 2015Y01). Conflicts of interest: There are no conflicts of interest.
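For reference, the Hardy-Weinberg equilibrium checks and the genotype/allele chi-square comparisons reported in the results above can be reproduced from genotype counts alone. A minimal Python sketch follows; the function names and example counts are illustrative placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

def hwe_chi2(n_aa, n_ag, n_gg):
    """Hardy-Weinberg equilibrium chi-square test (df = 1 for a biallelic SNP)."""
    n = n_aa + n_ag + n_gg
    p = (2 * n_aa + n_ag) / (2 * n)                  # frequency of the A allele
    q = 1.0 - p
    expected = np.array([p * p * n, 2 * p * q * n, q * q * n])
    observed = np.array([n_aa, n_ag, n_gg], dtype=float)
    stat = ((observed - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=1)

def allele_chi2(case, control):
    """Case-control allele test; case/control are (n_AA, n_AG, n_GG) counts."""
    def allele_counts(g):
        aa, ag, gg = g
        return [2 * aa + ag, 2 * gg + ag]            # A-allele count, G-allele count
    table = np.array([allele_counts(case), allele_counts(control)])
    stat, p, _, _ = chi2_contingency(table, correction=False)
    return stat, p

# Hypothetical genotype counts (AA, AG, GG) for one SNP:
print(hwe_chi2(23, 100, 102))
print(allele_chi2((23, 100, 102), (36, 86, 76)))
```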
Background: Previous studies conducted in various geographical and ethnic populations have shown that Alpha-1-antitrypsin (Alpha-1-AT) expression affects the occurrence and progression of chronic obstructive pulmonary disease (COPD). We aimed to explore the associations of the rs9944155AG, rs1051052AG, and rs1243166AG polymorphisms in the Alpha-1-AT gene with the risk of COPD in the Uygur population in the Kashgar region. Methods: From March 2013 to December 2015, a total of 225 Uygur COPD patients and 198 healthy people were recruited as cases and controls, respectively, in the Kashgar region. DNA was extracted according to the protocol of the DNA genome kit, and Sequenom MassARRAY single-nucleotide polymorphism technology was used for genotype determination. The serum concentration of Alpha-1-AT was detected by enzyme-linked immunosorbent assay. A logistic regression model was used to estimate the associations of the polymorphisms with COPD. Results: The rs1243166-G allele was associated with a higher risk of COPD (odds ratio [OR] = 2.039, 95% confidence interval [CI]: 1.116-3.725, P = 0.019). In cases, Alpha-1-AT levels were highest among participants carrying the rs1243166 AG genotype, followed by the AA and GG genotypes (χ2 = 11.89, P = 0.003). Similarly, the rs1051052-G allele was associated with a higher risk of COPD (OR = 19.433, 95% CI: 8.783-43.00, P < 0.001). The highest Alpha-1-AT levels were observed in cases carrying the rs1051052 AA genotype, followed by cases with the AG and GG genotypes (χ2 = 122.45, P < 0.001). However, individuals with the rs9944155-G allele exhibited a lower risk of COPD than those carrying the rs9944155-A allele (OR = 0.121, 95% CI: 0.070-0.209, P < 0.001). In both cases and controls, no significant difference in Alpha-1-AT levels was observed among the rs9944155 genotypes. Conclusions: The rs1243166, rs9944155, and rs1051052 sites of Alpha-1-AT may be associated with COPD morbidity in the Uygur population. While the rs1243166-G and rs1051052-G alleles are associated with an increased risk of developing COPD, the rs9944155-G allele is a protective locus in the Uygur population. Alpha-1-AT levels in Uygur COPD patients were lower than those in healthy people and differed among patients with different rs1051052 and rs1243166 genotypes.
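For illustration, genotype odds ratios such as those reported in this abstract are typically estimated with a logistic regression model. A hypothetical sketch using statsmodels, with simulated genotypes and outcomes (all values are placeholders, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated subjects: genotype with AA as the reference level, binary COPD outcome.
genotype = rng.choice(["AA", "GA", "GG"], size=423, p=[0.15, 0.45, 0.40])
copd = rng.binomial(1, np.where(genotype == "AA", 0.35, 0.55))

X = pd.get_dummies(pd.Series(genotype))[["GA", "GG"]].astype(float)
X = sm.add_constant(X)                     # intercept corresponds to the AA reference
fit = sm.Logit(copd, X).fit(disp=0)

ci = np.exp(fit.conf_int())                # exponentiate onto the odds-ratio scale
summary = pd.DataFrame({"OR": np.exp(fit.params),
                        "95%CI_low": ci[0], "95%CI_high": ci[1],
                        "P": fit.pvalues})
print(summary.loc[["GA", "GG"]])
```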
null
null
6,955
452
[ 30, 298, 314, 56, 133, 143, 469, 204, 206, 32 ]
15
[ "copd", "group", "control", "control group", "alpha", "genotype", "case", "χ2", "rs1243166", "rs1051052" ]
[ "associated copd genes", "copd genes", "role copd pathogenesis", "alpha gene copd", "pathogenesis copd alpha" ]
null
null
[CONTENT] Alpha-1-antitrypsin | Chronic Obstructive Pulmonary Disease | Polymorphism | Uygur Population [SUMMARY]
[CONTENT] Alpha-1-antitrypsin | Chronic Obstructive Pulmonary Disease | Polymorphism | Uygur Population [SUMMARY]
[CONTENT] Alpha-1-antitrypsin | Chronic Obstructive Pulmonary Disease | Polymorphism | Uygur Population [SUMMARY]
null
[CONTENT] Alpha-1-antitrypsin | Chronic Obstructive Pulmonary Disease | Polymorphism | Uygur Population [SUMMARY]
null
[CONTENT] Aged | Alleles | Female | Gene Frequency | Genetic Predisposition to Disease | Genotype | Humans | Male | Middle Aged | Odds Ratio | Polymorphism, Single Nucleotide | Pulmonary Disease, Chronic Obstructive | alpha 1-Antitrypsin [SUMMARY]
[CONTENT] Aged | Alleles | Female | Gene Frequency | Genetic Predisposition to Disease | Genotype | Humans | Male | Middle Aged | Odds Ratio | Polymorphism, Single Nucleotide | Pulmonary Disease, Chronic Obstructive | alpha 1-Antitrypsin [SUMMARY]
[CONTENT] Aged | Alleles | Female | Gene Frequency | Genetic Predisposition to Disease | Genotype | Humans | Male | Middle Aged | Odds Ratio | Polymorphism, Single Nucleotide | Pulmonary Disease, Chronic Obstructive | alpha 1-Antitrypsin [SUMMARY]
null
[CONTENT] Aged | Alleles | Female | Gene Frequency | Genetic Predisposition to Disease | Genotype | Humans | Male | Middle Aged | Odds Ratio | Polymorphism, Single Nucleotide | Pulmonary Disease, Chronic Obstructive | alpha 1-Antitrypsin [SUMMARY]
null
[CONTENT] associated copd genes | copd genes | role copd pathogenesis | alpha gene copd | pathogenesis copd alpha [SUMMARY]
[CONTENT] associated copd genes | copd genes | role copd pathogenesis | alpha gene copd | pathogenesis copd alpha [SUMMARY]
[CONTENT] associated copd genes | copd genes | role copd pathogenesis | alpha gene copd | pathogenesis copd alpha [SUMMARY]
null
[CONTENT] associated copd genes | copd genes | role copd pathogenesis | alpha gene copd | pathogenesis copd alpha [SUMMARY]
null
[CONTENT] copd | group | control | control group | alpha | genotype | case | χ2 | rs1243166 | rs1051052 [SUMMARY]
[CONTENT] copd | group | control | control group | alpha | genotype | case | χ2 | rs1243166 | rs1051052 [SUMMARY]
[CONTENT] copd | group | control | control group | alpha | genotype | case | χ2 | rs1243166 | rs1051052 [SUMMARY]
null
[CONTENT] copd | group | control | control group | alpha | genotype | case | χ2 | rs1243166 | rs1051052 [SUMMARY]
null
[CONTENT] copd | alpha | factors | cause | gene | associated | alpha gene | risk | death | significantly associated [SUMMARY]
[CONTENT] pcrp | pcr | dna | primers | min | following | group | pcr primers 2nd pcrp | primers 2nd pcrp | primers 2nd [SUMMARY]
[CONTENT] group | control | genotype | χ2 | control group | case control group | case | copd | allele | case control [SUMMARY]
null
[CONTENT] copd | group | alpha | control | genotype | interest | conflicts | conflicts interest | control group | χ2 [SUMMARY]
null
[CONTENT] ||| rs9944155AG | rs1051052AG | Uygur | Kashgar [SUMMARY]
[CONTENT] March 2013 to December 2015 | 225 | Uygur | 198 | Kashgar ||| Sequenom ||| ||| COPD [SUMMARY]
[CONTENT] rs1243166 | COPD | 2.039 | 95% ||| CI | 1.116 | 0.019 ||| rs1243166 AG | 11.89 | 0.003 ||| 95% | CI | 8.783-43.00 | P < 0.001 ||| rs1051052 AA | AG | 122.45 | P < 0.001 ||| 0.121 | 95% | CI | 0.070-0.209 | P < 0.001 ||| rs9944115 [SUMMARY]
null
[CONTENT] ||| rs9944155AG | rs1051052AG | Uygur | Kashgar ||| March 2013 to December 2015 | 225 | Uygur | 198 | Kashgar ||| Sequenom ||| ||| COPD ||| rs1243166 | COPD | 2.039 | 95% ||| CI | 1.116 | 0.019 ||| rs1243166 AG | 11.89 | 0.003 ||| 95% | CI | 8.783-43.00 | P < 0.001 ||| rs1051052 AA | AG | 122.45 | P < 0.001 ||| 0.121 | 95% | CI | 0.070-0.209 | P < 0.001 ||| rs9944115 ||| rs1243166 | rs9944155 | rs1051052 | Uygur ||| rs1243166 | rs1051052-G | COPD | Uygur ||| Uygur | rs1051052 AG | rs1243166 AG [SUMMARY]
null
Comparative study of indocyanine green-R15, Child-Pugh score, and model for end-stage liver disease score for prediction of hepatic encephalopathy after transjugular intrahepatic portosystemic shunt.
33584073
Hepatic encephalopathy (HE) remains an enormous challenge in patients who undergo transjugular intrahepatic portosystemic shunt (TIPS) implantation. The preoperative indocyanine green retention rate at 15 min (ICG-R15), as one of the liver function assessment tools, has been developed as a prognostic indicator in patients undergoing surgery, but there are limited data on its role in TIPS.
BACKGROUND
This retrospective study included 195 patients with PHT who underwent elective TIPS at Beijing Shijitan Hospital from January 2018 to June 2019. All patients underwent the ICG-R15 test, CPS evaluation, and MELD scoring 1 wk before TIPS. According to whether they developed HE or not, the patients were divided into two groups: HE group and non-HE group. The prediction of one-year post-TIPS HE by ICG-R15, CPS and MELD score was evaluated by the areas under the receiver operating characteristic curves (AUCs).
METHODS
A total of 195 patients with portal hypertension were included, and 23% (45/195) of the patients developed post-TIPS HE. The ICG-R15 was identified as an independent predictor of post-TIPS HE. The AUCs for the ICG-R15, CPS, and MELD score for predicting post-TIPS HE were 0.664 (95% confidence interval [CI]: 0.557-0.743, P = 0.0046), 0.596 (95%CI: 0.508-0.679, P = 0.087), and 0.641 (95%CI: 0.554-0.721, P = 0.021), respectively. The non-parametric approach (DeLong, DeLong & Clarke-Pearson) showed that the pairwise difference between the AUCs of the ICG-R15 and the MELD score was statistically significant (P = 0.0229).
RESULTS
The ICG-R15 has appreciable clinical value for predicting the occurrence of post-TIPS HE and is a useful option for evaluating the prognosis of patients undergoing TIPS.
CONCLUSION
[ "End Stage Liver Disease", "Hepatic Encephalopathy", "Humans", "Indocyanine Green", "Liver Cirrhosis", "Portasystemic Shunt, Transjugular Intrahepatic", "Retrospective Studies", "Severity of Illness Index" ]
7856842
INTRODUCTION
Portal hypertension (PHT) is a very common and serious complication of chronic liver disease that often causes variceal bleeding and refractory ascites[1]. Transjugular intrahepatic portosystemic shunt (TIPS) is an important treatment option that has been shown to be efficacious in the management of PHT[2]. This procedure can alleviate portal hypertension by creating a large channel between the portal vein and hepatic vein[3]. Unfortunately, TIPS can cause severe complications such as heart failure, liver failure, and hepatic encephalopathy (HE). HE has a high incidence rate and is one of the most debilitating complications, which has a serious effect on the prognosis and survival of patients[4-6]. Although some risk factors are known, the identification of patients at risk of HE needs additional research. It is important to predict post-TIPS HE so that prevention and treatment measures can be implemented in high-risk HE patients to avoid adverse outcomes. The indocyanine green retention rate at 15 min (ICG-R15), the Child-Pugh score (CPS), and the model for end-stage liver disease (MELD) score have been developed to assess liver function[7,8]. The ICG-R15 is a relatively non-invasive, quick, and inexpensive method that has been widely used in patients with end-stage liver disease[9]. Zipprich et al[10] reported that ICG is the most accurate predictor among quantitative liver function tests of the survival of patients with cirrhosis[10]. A recent retrospective study demonstrated that preoperative ICG clearance was predictive of the surgical prognosis in patients undergoing hepatectomy[11]. The CPS was developed to assess the severity of liver cirrhosis in the clinic. This scoring system includes the bilirubin level, the albumin level, the prothrombin time, HE, and ascites[12]. The MELD score is used to predict the survival of patients undergoing TIPS and to evaluate patients with severe liver disease prior to transplantation. It includes three objective variables: The total bilirubin level, the creatinine level, and the international normalized ratio (INR)[13]. However, there are limited data on the use of liver function tools, especially the ICG-R15, to predict post-TIPS HE. Therefore, the aim of this study was to compare the clinical value of the MELD score, CPS, and ICG-R15 for the prediction of post-TIPS HE in patients with PHT.
MATERIALS AND METHODS
This retrospective study was approved by the Ethics Committee of Beijing Shijitan Hospital of Capital Medical University. The need to obtain informed consent was waived due to its retrospective nature. Patients All patients who underwent TIPS between January 2018 and June 2019 in the interventional department of Beijing Shijitan Hospital were included in this study. The following inclusion criteria were set: (1) Patients between 18 and 70 years old; (2) Patients diagnosed with PHT; and (3) Patients who underwent TIPS using a polytetrafluoroethylene-covered stent. The following exclusion criteria were set: (1) Preoperative HE; (2) Liver cancer; (3) Liver transplantation; (4) TIPS retreatment; (5) Non-cirrhotic PHT; (6) Surgical splenectomy; (7) Portal vein thrombosis; and (8) Urgent TIPS. All patients who underwent TIPS between January 2018 and June 2019 in the interventional department of Beijing Shijitan Hospital were included in this study. The following inclusion criteria were set: (1) Patients between 18 and 70 years old; (2) Patients diagnosed with PHT; and (3) Patients who underwent TIPS using a polytetrafluoroethylene-covered stent. The following exclusion criteria were set: (1) Preoperative HE; (2) Liver cancer; (3) Liver transplantation; (4) TIPS retreatment; (5) Non-cirrhotic PHT; (6) Surgical splenectomy; (7) Portal vein thrombosis; and (8) Urgent TIPS. Definitions PHT was defined as the radiological presence of significant splenomegaly, umbilical vein recanalization, and/or portosystemic shunts as well as a preoperative platelet count < 100 × 10⁹/L. Portal pressure gradient values greater than or equal to 10 mmHg indicated clinically significant PHT[14]. HE was defined as neuropsychiatric abnormalities ranging from mild neuro-psychological dysfunction to deep coma and abnormal ammonia levels, after the exclusion of other possible causes of altered mental status by computed tomography or magnetic resonance imaging[15]. PHT was defined as the radiological presence of significant splenomegaly, umbilical vein recanalization, and/or portosystemic shunts as well as a preoperative platelet count < 100 × 10⁹/L. Portal pressure gradient values greater than or equal to 10 mmHg indicated clinically significant PHT[14]. HE was defined as neuropsychiatric abnormalities ranging from mild neuro-psychological dysfunction to deep coma and abnormal ammonia levels, after the exclusion of other possible causes of altered mental status by computed tomography or magnetic resonance imaging[15]. Preoperative ICG-R15, CPS, and MELD score All patients included in this study underwent the ICG-R15 test with a dye-densitogram (DDG) analyser (Japan, NIHON KOHDEN, model DDG-3300K) and an ICG clearance rate test (Japan, NIHON KOHDEN, model A). Within 30 s after the injection of ICG (0.5 mg/kg) into the median cubital vein, plasma ICG concentrations were monitored via a sensor attached to the patients’ finger. The ICG-R15 was subsequently assessed by a computer. The CPS can be divided into three grades depending on the total points: Grade A (5-6 points), B (7-9 points), and C (≥ 10 points). The MELD score was calculated based on the following formula: R = 3.8 × ln (bilirubin mg/dL) + 11.2 × ln (INR) + 9.6 × ln (creatinine mg/dL) + 6.4 × aetiology (biliary and alcoholic 0, others 1)[8,16] (Figure 1). 
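The stated formula translates directly into code. A minimal sketch (the function name meld_r is hypothetical; inputs are assumed to be in the units given above):

```python
import math

def meld_r(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float,
           biliary_or_alcoholic: bool) -> float:
    """R = 3.8*ln(bilirubin) + 11.2*ln(INR) + 9.6*ln(creatinine) + 6.4*aetiology."""
    aetiology = 0.0 if biliary_or_alcoholic else 1.0  # biliary/alcoholic = 0, others = 1
    return (3.8 * math.log(bilirubin_mg_dl)
            + 11.2 * math.log(inr)
            + 9.6 * math.log(creatinine_mg_dl)
            + 6.4 * aetiology)

# e.g. meld_r(2.0, 1.4, 1.1, biliary_or_alcoholic=False) ≈ 13.7
```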
Receiver operating characteristic curve analyses of indocyanine green-R15, Child-Pugh score, and model for end-stage liver disease score to predict post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy. ICG: Indocyanine green; MELD: Model for end-stage liver disease. All patients included in this study underwent the ICG-R15 test with a dye-densitogram (DDG) analyser (Japan, NIHON KOHDEN, model DDG-3300K) and an ICG clearance rate test (Japan, NIHON KOHDEN, model A). Within 30 s after the injection of ICG (0.5 mg/kg) into the median cubital vein, plasma ICG concentrations were monitored via a sensor attached to the patients’ finger. The ICG-R15 was subsequently assessed by a computer. The CPS can be divided into three grades depending on the total points: Grade A (5-6 points), B (7-9 points), and C (≥ 10 points). The MELD score was calculated based on the following formula: R = 3.8 × ln (bilirubin mg/dL) + 11.2 × ln (INR) + 9.6 × ln (creatinine mg/dL) + 6.4 × aetiology (biliary and alcoholic 0, others 1)[8,16] (Figure 1). Receiver operating characteristic curve analyses of indocyanine green-R15, Child-Pugh score, and model for end-stage liver disease score to predict post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy. ICG: Indocyanine green; MELD: Model for end-stage liver disease. TIPS Under fluoroscopic guidance, a standard TIPS procedure was performed by an experienced interventional radiologist. A pigtail catheter was inserted into the right internal jugular vein leading to the hepatic veins. After finding the portal vein through the superior mesenteric artery or splenic artery using indirect portal venography, a stent (7, 8, or 10 mm, Fluency, Bard, United States) was placed to create a channel between the hepatic vein and the portal vein. Afterward, the portal vein pressure was measured at least three times, and a pressure transducer system (Combitrans, Braun Melsungen, Germany) with a multichannel monitor (Sirecust, Siemens, Germany) was used to measure the haemodynamic parameters. Under fluoroscopic guidance, a standard TIPS procedure was performed by an experienced interventional radiologist. A pigtail catheter was inserted into the right internal jugular vein leading to the hepatic veins. After finding the portal vein through the superior mesenteric artery or splenic artery using indirect portal venography, a stent (7, 8, or 10 mm, Fluency, Bard, United States) was placed to create a channel between the hepatic vein and the portal vein. Afterward, the portal vein pressure was measured at least three times, and a pressure transducer system (Combitrans, Braun Melsungen, Germany) with a multichannel monitor (Sirecust, Siemens, Germany) was used to measure the haemodynamic parameters. Statistical analysis Clinical and laboratory characteristics were collected from the medical records. SPSS (version 20.0, SPSS Inc., United States) and MedCalc were used for the statistical analyses. Descriptive data are presented as the mean ± SD, and qualitative variables are presented as frequencies or percentages. Student's t test or the Mann-Whitney U test was used to compare quantitative variables between groups, and the chi-square test or Fisher's exact test was used for qualitative variables. Univariate and multivariable logistic regression analyses were used to determine HE-related risk factors after TIPS. The areas under the receiver operating characteristic curves (AUCs) for the ICG-R15, CPS, and MELD score were evaluated. 
The non-parametric approach (DeLong, DeLong & Clarke-Pearson)[17] was used for pairwise comparison among the AUCs of the ICG-R15, CPS, and MELD score. Statistical significance was established at P < 0.05. Clinical and laboratory characteristics were collected from the medical records. SPSS (version 20.0, SPSS Inc., United States) and MedCalc were used for the statistical analyses. Descriptive data are presented as the mean ± SD, and qualitative variables are presented as frequencies or percentages. Student's t test or the Mann-Whitney U test was used to compare quantitative variables between groups, and the chi-square test or Fisher's exact test was used for qualitative variables. Univariate and multivariable logistic regression analyses were used to determine HE-related risk factors after TIPS. The areas under the receiver operating characteristic curves (AUCs) for the ICG-R15, CPS, and MELD score were evaluated. The non-parametric approach (DeLong, DeLong & Clarke-Pearson)[17] was used for pairwise comparison among the AUCs of the ICG-R15, CPS, and MELD score. Statistical significance was established at P < 0.05.
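SPSS and MedCalc provide this comparison directly; purely as an illustration of the underlying computation, below is a minimal Python sketch of the DeLong test for two correlated AUCs (the function names and arrays are hypothetical, and the scores for both tests must come from the same patients):

```python
import numpy as np
from scipy.stats import norm

def _placements(pos, neg):
    """Placement values: psi = 1 if pos > neg, 0.5 if tied, 0 otherwise."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    psi = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
    return psi.mean(axis=1), psi.mean(axis=0)   # per-positive, per-negative averages

def delong_test(pos_a, neg_a, pos_b, neg_b):
    """Two-sided DeLong test comparing AUC_a vs AUC_b on the same case/control sets."""
    v10a, v01a = _placements(pos_a, neg_a)
    v10b, v01b = _placements(pos_b, neg_b)
    auc_a, auc_b = v10a.mean(), v10b.mean()
    m, n = len(v10a), len(v01a)
    s10 = np.cov(np.vstack([v10a, v10b]))       # covariance over positives
    s01 = np.cov(np.vstack([v01a, v01b]))       # covariance over negatives
    var = ((s10[0, 0] + s10[1, 1] - 2 * s10[0, 1]) / m
           + (s01[0, 0] + s01[1, 1] - 2 * s01[0, 1]) / n)
    z = (auc_a - auc_b) / np.sqrt(var)
    return auc_a, auc_b, 2 * norm.sf(abs(z))    # AUCs and two-sided P value
```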
null
null
CONCLUSION
This study suggests that close monitoring of patients who undergo TIPS with an ICG-R15 value above 30% may help prevent adverse outcomes. Future studies will focus on the incidence of complications and survival in relation to the ICG-R15 value, and randomized controlled trials are needed to verify our results.
[ "INTRODUCTION", "Patients", "Definitions", "Preoperative ICG-R15, CPS, and MELD score", "TIPS", "Statistical analysis", "RESULTS", "Patients’ preoperative characteristics", "Univariable and multivariable analyses of post-TIPS HE", "Discriminatory power of the CPS, MELD score, and ICG-R15", "Comparison of post-TIPS-HE incidence between patients with an ICG-R15 > 30% and < 30%", "DISCUSSION", "CONCLUSION" ]
[ "Portal hypertension (PHT) is a very common and serious complication of chronic liver disease that often causes variceal bleeding and refractory ascites[1]. Transjugular intrahepatic portosystemic shunt (TIPS) is an important treatment option that has been shown to be efficacious in the management of PHT[2]. This procedure can alleviate portal hypertension by creating a large channel between the portal vein and hepatic vein[3]. Unfortunately, TIPS can cause severe complications such as heart failure, liver failure, and hepatic encephalopathy (HE). HE has a high incidence rate and is one of the most debilitating complications, which has a serious effect on the prognosis and survival of patients[4-6]. Although some risk factors are known, the identification of patients at risk of HE needs additional research. It is important to predict post-TIPS HE so that prevention and treatment measures can be implemented in high-risk HE patients to avoid adverse outcomes.\nThe indocyanine green retention rate at 15 min (ICG-R15), the Child-Pugh score (CPS), and the model for end-stage liver disease (MELD) score have been developed to assess liver function[7,8]. The ICG-R15 is a relatively non-invasive, quick, and inexpensive method that has been widely used in patients with end-stage liver disease[9]. Zipprich et al[10] reported that ICG is the most accurate predictor among quantitative liver function tests of the survival of patients with cirrhosis[10]. A recent retrospective study demonstrated that preoperative ICG clearance was predictive of the surgical prognosis in patients undergoing hepatectomy[11]. The CPS was developed to assess the severity of liver cirrhosis in the clinic. This scoring system includes the bilirubin level, the albumin level, the prothrombin time, HE, and ascites[12]. The MELD score is used to predict the survival of patients undergoing TIPS and to evaluate patients with severe liver disease prior to transplantation. It includes three objective variables: The total bilirubin level, the creatinine level, and the international normalized ratio (INR)[13]. However, there are limited data on the use of liver function tools, especially the ICG-R15, to predict post-TIPS HE. Therefore, the aim of this study was to compare the clinical value of the MELD score, CPS, and ICG-R15 for the prediction of post-TIPS HE in patients with PHT.", "All patients who underwent TIPS between January 2018 and June 2019 in the interventional department of Beijing Shijitan Hospital were included in this study.\nThe following inclusion criteria were set: (1) Patients between 18 and 70 years old; (2) Patients diagnosed with PHT; and (3) Patients who underwent TIPS using a polytetrafluoroethylene-covered stent. The following exclusion criteria were set: (1) Preoperative HE; (2) Liver cancer; (3) Liver transplantation; (4) TIPS retreatment; (5) Non-cirrhotic PHT; (6) Surgical splenectomy; (7) Portal vein thrombosis; and (8) Urgent TIPS.", "PHT was defined as the radiological presence of significant splenomegaly, umbilical vein recanalization, and/or portosystemic shunts as well as a preoperative platelet count < 100 × 109/L. 
Portal pressure gradient values greater than or equal to 10 mmHg indicated clinically significant PHT[14].\nHE was defined as neuropsychiatric abnormalities ranging from mild neuro-psychological dysfunction to deep coma and abnormal ammonia levels, after the exclusion of other possible causes of altered mental status by computed tomography or magnetic resonance imaging[15].", "All patients included in this study underwent the ICG-R15 test with a dye-densitogram (DDG) analyser (Japan, NIHON KOHDEN, model DDG-3300K) and an ICG clearance rate test (Japan, NIHON KOHDEN, model A). Within 30 s after the injection of ICG (0.5 mg/kg) into the median cubital vein, plasma ICG concentrations were monitored via a sensor attached to the patients’ finger. The ICG-R15 was subsequently assessed by a computer. The CPS can be divided into three grades depending on the total points: Grade A (5-6 points), B (7-9 points), and C (≥ 10 points). The MELD score was calculated based on the following formula: R = 3.8 × ln (bilirubin mg/dL) + 11.2 × ln (INR) + 9.6 × ln (creatinine mg/dL) + 6.4 × aetiology (biliary and alcoholic 0, others 1)[8,16] (Figure 1).\nReceiver operating characteristic curve analyses of indocyanine green-R15, Child-Pugh score, and model for end-stage liver disease score to predict post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy. ICG: Indocyanine green; MELD: Model for end-stage liver disease.", "Under fluoroscopic guidance, a standard TIPS procedure was performed by an experienced interventional radiologist. A pigtail catheter was inserted into the right internal jugular vein leading to the hepatic veins. After finding the portal vein through the superior mesenteric artery or splenic artery using indirect portal venography, a stent (7, 8, or 10 mm, Fluency, Bard, United States) was placed to create a channel between the hepatic vein and the portal vein. Afterward, the portal vein pressure was measured at least three times, and a pressure transducer system (Combitrans, Braun Melsungen, Germany) with a multichannel monitor (Sirecust, Siemens, Germany) was used to measure the haemodynamic parameters.", "Clinical and laboratory characteristics were collected from the medical records. SPSS (version 20.0, SPSS Inc., United States) and MedCalc were used for the statistical analyses. Descriptive data are presented as the mean ± SD, and qualitative variables are presented as frequencies or percentages. Student's t test or the Mann-Whitney U test was used to compare quantitative variables between groups, and the chi-square test or Fisher's exact test was used for qualitative variables. Univariate and multivariable logistic regression analyses were used to determine HE-related risk factors after TIPS. The areas under the receiver operating characteristic curves (AUCs) for the ICG-R15, CPS, and MELD score were evaluated. The non-parametric approach (DeLong, DeLong & Clarke-Pearson)[17] was used for pairwise comparison among the AUCs of the ICG-R15, CPS, and MELD score. Statistical significance was established at P < 0.05.", "Patients’ preoperative characteristics A total of 221 patients who underwent TIPS were included in this study. After applying the inclusion and exclusion criteria of the study, data were collected from a total of 195 decompensated cirrhosis patients with PHT who underwent TIPS. The basic clinical characteristics are listed in Table 1. The study population comprised 140 men and 55 women, with a mean age of 51.2 ± 11.4 years. 
The indications included variceal bleeding in 118 (60.7%) patients, refractory ascites in 36 (18.5%), variceal bleeding combined with ascites in 27 (14.1%), and other (including pleural fluid and hepatorenal syndrome) in 14 (6.7%). The most common cause of cirrhosis was viral cirrhosis (60.7%), followed by alcoholic cirrhosis (15.9%), biliary cirrhosis (4.6%), and drug-induced and autoimmune hepatitis cirrhosis (4.6%). According to the CPS, 108 patients were classified as having grade A, 75 as having grade B and 12 as having grade C. The median MELD score and ICG-R15 were 7 (4.1, 10) and 38.4 (22, 50), respectively.\nPatients’ preoperative characteristics\nValues are the mean ± SD. \nValues are n. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01. TIPS: Transjugular intrahepatic portosystemic shunt; HE: Hepatic encephalopathy; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Active partial thromboplastin; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cells.\nA total of 221 patients who underwent TIPS were included in this study. After applying the inclusion and exclusion criteria of the study, data were collected from a total of 195 decompensated cirrhosis patients with PHT who underwent TIPS. The basic clinical characteristics are listed in Table 1. The study population comprised 140 men and 55 women, with a mean age of 51.2 ± 11.4 years. The indications included variceal bleeding in 118 (60.7%) patients, refractory ascites in 36 (18.5%), variceal bleeding combined with ascites in 27 (14.1%), and other (including pleural fluid and hepatorenal syndrome) in 14 (6.7%). The most common cause of cirrhosis was viral cirrhosis (60.7%), followed by alcoholic cirrhosis (15.9%), biliary cirrhosis (4.6%), and drug-induced and autoimmune hepatitis cirrhosis (4.6%). According to the CPS, 108 patients were classified as having grade A, 75 as having grade B and 12 as having grade C. The median MELD score and ICG-R15 were 7 (4.1, 10) and 38.4 (22, 50), respectively.\nPatients’ preoperative characteristics\nValues are the mean ± SD. \nValues are n. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01. TIPS: Transjugular intrahepatic portosystemic shunt; HE: Hepatic encephalopathy; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Active partial thromboplastin; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cells.\nUnivariable and multivariable analyses of post-TIPS HE Of the 195 patients who underwent TIPS, 45 (23%) developed HE at the 12 mo follow-up. The factors associated with HE in univariable analysis included age, stent size, puncture site, ICG-R15, CPS, MELD score, blood urea nitrogen (BUN) level, and NH3 level (P = 0.006, 0.019, 0.006, 0.027, 0.022, 0.020, 0.025, and 0.044, respectively). 
As shown in Table 2, multivariate regression analysis identified the following variables as independent risk factors for post-TIPS HE: Older age, 10 mm stent size, puncture site in the right branch of the portal vein, high ICG-R15, high BUN level, high NH3 level, high CPS, and high MELD score.\nUnivariable and multivariable logistic regression analyses of post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy\nP < 0.05. \nP < 0.01. TIPS: Transjugular intrahepatic portosystemic shunt; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Active partial thromboplastin; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cell.\nOf the 195 patients who underwent TIPS, 45 (23%) developed HE at the 12-mo follow-up. The factors associated with HE in univariable analysis included age, stent size, puncture site, ICG-R15, CPS, MELD score, blood urea nitrogen (BUN) level, and NH3 level (P = 0.006, 0.019, 0.006, 0.027, 0.022, 0.020, 0.025, and 0.044, respectively). As shown in Table 2, multivariate regression analysis identified the following variables as independent risk factors for post-TIPS HE: Older age, 10 mm stent size, puncture site in the right branch of the portal vein, high ICG-R15, high BUN level, high NH3 level, high CPS, and high MELD score.\nUnivariable and multivariable logistic regression analyses of post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy\nP < 0.05. \nP < 0.01. TIPS: Transjugular intrahepatic portosystemic shunt; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Active partial thromboplastin; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cell.\nDiscriminatory power of the CPS, MELD score, and ICG-R15 The area under the receiver operating characteristic (ROC) curve for the ICG-R15 (AUC = 0.664, 95% confidence interval [CI]: 0.557-0.743, P = 0.0046) for the prediction of post-TIPS HE was larger than those of the CPS (AUC = 0.596, 95%CI: 0.508-0.679, P = 0.087) and the MELD score (AUC = 0.641, 95%CI: 0.554-0.721, P = 0.021). The non-parametric approach (DeLong, DeLong & Clarke-Pearson)[17] showed that the pairwise difference between the AUCs of the ICG-R15 and the MELD score was statistically significant (P = 0.0229). The cut-off value for the ICG-R15, which was determined by the maximum of the Youden index, was 30, with a sensitivity of 86.96% and specificity of 56.25% for the prediction of post-TIPS HE (Table 3). Of the 126 patients with an ICG-R15 > 30, 36 (28.5%) developed post-TIPS HE, while 9 (13%) of 69 patients with an ICG-R15 ≤ 30 developed post-TIPS HE.\nAreas under the receiver operating characteristic curves of indocyanine green retention rate at 15 min, Child-Pugh score, and model for end-stage liver disease score\nP < 0.01. 
ICG: Indocyanine green; CPS: Child-Pugh score; MELD: Model for end-stage liver disease; AUCs: Areas under the receiver operating characteristic curves.\nThe area under the receiver operating characteristic (ROC) curve for the ICG-R15 (AUC = 0.664, 95% confidence interval [CI]: 0.557-0.743, P = 0.0046) for the prediction of post-TIPS HE was larger than those of the CPS (AUC = 0.596, 95%CI: 0.508-0.679, P = 0.087) and the MELD score (AUC = 0.641, 95%CI: 0.554-0.721, P = 0.021). The non-parametric approach (DeLong, DeLong & Clarke-Pearson)[17] showed that the pairwise difference between the AUCs of the ICG-R15 and the MELD score was statistically significant (P = 0.0229). The cut-off value for the ICG-R15, which was determined by the maximum of the Youden index, was 30, with a sensitivity of 86.96% and specificity of 56.25% for the prediction of post-TIPS HE (Table 3). Of the 126 patients with an ICG-R15 > 30, 36 (28.5%) developed post-TIPS HE, while 9 (13%) of 69 patients with an ICG-R15 ≤ 30 developed post-TIPS HE.\nAreas under the receiver operating characteristic curves of indocyanine green retention rate at 15 min, Child-Pugh score, and model for end-stage liver disease score\nP < 0.01. ICG: Indocyanine green; CPS: Child-Pugh score; MELD: Model for end-stage liver disease; AUCs: Areas under the receiver operating characteristic curves.\nComparison of post-TIPS-HE incidence between patients with an ICG-R15 > 30% and < 30% The patients were divided into two groups according to the ICG-R15 cut-off value, 30%, determined by the maximal Youden index. Patients with an ICG-R15 > 30% had a higher incidence of HE than those with an ICG-R15 < 30% (28.5% vs 13%; P = 0.014). There were significant differences in age, CPS, preoperative portal pressure gradient, NH3, albumin, aspartate transaminase, Cl, Na, white blood cells, active partial thromboplastin, and prothrombin time between patients with ICG-R15 levels below and above 30% (Table 4).\nCharacteristics of patients with an indocyanine green retention rate at 15 min ≤ 30% and > 30%\nValues are n. \nValues are the mean ± SD. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01. \nP < 0.001. ICG: Indocyanine green; HE: Hepatic encephalopathy; PPG: Portal pressure gradient; AST: Aspartate transaminase; APTT: Active partial thromboplastin; PT: Prothrombin time; WBC: White blood cells.\nThe patients were divided into two groups according to the ICG-R15 cut-off value, 30%, determined by the maximal Youden index. Patients with an ICG-R15 > 30% had a higher incidence of HE than those with an ICG-R15 < 30% (28.5% vs 13%; P = 0.014). There were significant differences in age, CPS, preoperative portal pressure gradient, NH3, albumin, aspartate transaminase, Cl, Na, white blood cells, active partial thromboplastin, and prothrombin time between patients with ICG-R15 levels below and above 30% (Table 4).\nCharacteristics of patients with an indocyanine green retention rate at 15 min ≤ 30% and > 30%\nValues are n. \nValues are the mean ± SD. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01. \nP < 0.001. ICG: Indocyanine green; HE: Hepatic encephalopathy; PPG: Portal pressure gradient; AST: Aspartate transaminase; APTT: Active partial thromboplastin; PT: Prothrombin time; WBC: White blood cells.", "A total of 221 patients who underwent TIPS were included in this study. After applying the inclusion and exclusion criteria of the study, data were collected from a total of 195 decompensated cirrhosis patients with PHT who underwent TIPS. 
The basic clinical characteristics are listed in Table 1. The study population comprised 140 men and 55 women, with a mean age of 51.2 ± 11.4 years. The indications included variceal bleeding in 118 (60.7%) patients, refractory ascites in 36 (18.5%), variceal bleeding combined with ascites in 27 (14.1%), and other (including pleural fluid and hepatorenal syndrome) in 14 (6.7%). The most common cause of cirrhosis was viral cirrhosis (60.7%), followed by alcoholic cirrhosis (15.9%), biliary cirrhosis (4.6%), and drug-induced and autoimmune hepatitis cirrhosis (4.6%). According to the CPS, 108 patients were classified as having grade A, 75 as having grade B, and 12 as having grade C. The median MELD score and ICG-R15 were 7 (4.1, 10) and 38.4 (22, 50), respectively.\nPatients’ preoperative characteristics\nValues are the mean ± SD. \nValues are n. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01. TIPS: Transjugular intrahepatic portosystemic shunt; HE: Hepatic encephalopathy; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Active partial thromboplastin; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cells.", "Of the 195 patients who underwent TIPS, 45 (23%) developed HE at the 12-mo follow-up. The factors associated with HE in univariable analysis included age, stent size, puncture site, ICG-R15, CPS, MELD score, blood urea nitrogen (BUN) level, and NH3 level (P = 0.006, 0.019, 0.006, 0.027, 0.022, 0.020, 0.025, and 0.044, respectively). As shown in Table 2, multivariate regression analysis identified the following variables as independent risk factors for post-TIPS HE: Older age, 10 mm stent size, puncture site in the right branch of the portal vein, high ICG-R15, high BUN level, high NH3 level, high CPS, and high MELD score.\nUnivariable and multivariable logistic regression analyses of post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy\nP < 0.05. \nP < 0.01. TIPS: Transjugular intrahepatic portosystemic shunt; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Active partial thromboplastin; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cell.", "The area under the receiver operating characteristic (ROC) curve for the ICG-R15 (AUC = 0.664, 95% confidence interval [CI]: 0.557-0.743, P = 0.0046) for the prediction of post-TIPS HE was larger than those of the CPS (AUC = 0.596, 95%CI: 0.508-0.679, P = 0.087) and the MELD score (AUC = 0.641, 95%CI: 0.554-0.721, P = 0.021). The non-parametric approach (DeLong, DeLong & Clarke-Pearson)[17] showed that the pairwise difference between the AUCs of the ICG-R15 and the MELD score was statistically significant (P = 0.0229). The cut-off value for the ICG-R15, which was determined by the maximum of the Youden index, was 30, with a sensitivity of 86.96% and specificity of 56.25% for the prediction of post-TIPS HE (Table 3). 
Of the 126 patients with an ICG-R15 > 30, 36 (28.5%) developed post-TIPS HE, while 9 (13%) of 69 patients with an ICG-R15 ≤ 30 developed post-TIPS HE.\nAreas under the receiver operating characteristic curves of indocyanine green retention rate at 15 min, Child-Pugh score, and model for end-stage liver disease score\nP < 0.01. ICG: Indocyanine green; CPS: Child-Pugh score; MELD: Model for end-stage liver disease; AUCs: Areas under the receiver operating characteristic curves.", "The patients were divided into two groups according to the ICG-R15 cut-off value, 30%, determined by the maximal Youden index. Patients with an ICG-R15 > 30% had a higher incidence of HE than those with an ICG-R15 < 30% (28.5% vs 13%; P = 0.014). There were significant differences in age, CPS, preoperative portal pressure gradient, NH3, albumin, aspartate transaminase, Cl, Na, white blood cells, active partial thromboplastin, and prothrombin time between patients with ICG-R15 levels below and above 30% (Table 4).\nCharacteristics of patients with an indocyanine green retention rate at 15 min ≤ 30% and > 30%\nValues are n. \nValues are the mean ± SD. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01. \nP < 0.001. ICG: Indocyanine green; HE: Hepatic encephalopathy; PPG: Portal pressure gradient; AST: Aspartate transaminase; APTT: Active partial thromboplastin; PT: Prothrombin time; WBC: White blood cells.", "TIPS has been widely used to treat complications of PHT, including varices and ascites, by creating a large channel between the hepatic vein and portal vein. This procedure changes the liver haemodynamics by shunting a fraction of the portal venous blood directly into the systemic circulation, which can lead to decreased liver blood supply and impaired liver function reserve. In addition, HE occurs because of an increase in the amount of natural toxins such as ammonia travelling to the brain as a result of the shunting of the blood directly from the portal vein to the hepatic vein[18]. HE can produce a spectrum of neurological/psychiatric syndromes ranging from subclinical alterations to coma. It remains one of the most common and worrisome complications of end-stage liver disease after TIPS[15].\nThe ICG-R15 test is simple, fast, less invasive, and inexpensive, and can be performed in less than half an hour. The ICG-R15 retention test was introduced as a relatively noninvasive tool for the classification of pediatric and adult patients with acute and chronic liver failure[19]. A particular advantage of the ICG-R15 test is that it is more suitable for pediatric patients. In addition, the test appears to be an ideal way to assess the risks of surgical procedures such as liver resection. In addition to predicting varices and ascites, the results of some papers suggest that the ICG-R15 test may also predict mortality[20].\nIn this study, univariate and multivariate logistic analyses showed that 10 mm stent size, puncture site in the right branch of the portal vein, age, ICG-R15, BUN level, NH3 level, CPS, and MELD score were predictors of post-TIPS HE in patients with portal hypertension. Patients with puncture sites in the right portal vein had a higher incidence of post-TIPS HE. This is likely because the right branch of the portal vein receives more toxin-rich blood, mainly from the superior mesenteric vein[21-23]. It was reported that choosing the left branch of the portal vein as the puncture site during the placement of TIPS may decrease the incidence of HE significantly[24,25]. 
HE occurs more often in patients with a stent diameter of 10 mm than in those with smaller-diameter stents. A larger stent can effectively reduce portal vein pressure, but at the same time, more blood that has not been detoxified by the liver directly enters the systemic circulation, which can further impair liver function and lead to HE. However, some studies showed that the incidence of post-TIPS HE was unrelated to stent diameter[26]. Li et al[27] also found that age and Child-Pugh classification were independent risk factors for early post-TIPS HE[27], which was in accord with previous studies[28,29]. Fonio et al[4] demonstrated that the MELD grade was a risk factor for post-TIPS HE; the results of this study support this finding[4]. Normally, ammonia is detoxified by conversion to urea via the urea (Krebs-Henseleit) cycle in the liver. In total, 40% to 60% of the urea nitrogen in the primary urine is reabsorbed in the renal tubules and collecting ducts. A high incidence of HE was found in patients with high BUN levels, which is related to worsening liver function with renal involvement, renal decompensation, and azotaemia[30].\nHiwatashi et al[31] found that a higher ICG-R15 was significantly correlated with surgical complications and liver dysfunction after surgical resection and chemotherapy in patients with colorectal cancer[31]. Wang et al[32] demonstrated that the ICG-R15 could more accurately predict preoperative liver reserve function than the Child-Pugh and MELD scores in patients who suffered from liver cancer[32]. Another study showed that the ALICE scoring system (including serum albumin and ICG-R15) could simply and effectively predict the prognosis of liver cancer patients undergoing surgery[33]. In this study, we analysed and compared the areas under the ROC curves for the ICG-R15, CPS, and MELD score. The results were as follows: AUC(ICG-R15) > AUC(MELD) > AUC(CPS), and the pairwise difference between the AUCs of the ICG-R15 and the MELD score was statistically significant. This suggests that the ICG-R15 offers at least comparable, and possibly better, clinical value than the MELD score for predicting post-TIPS HE. The CPS is relatively restricted because it includes two subjective variables (HE and severity of ascites)[12], and the limited values ranging from 5 to 15 make it imprecise. The boundary values for the five parameters were chosen empirically and have not been formally validated[34]. In some situations, the predictive value of MELD may be reduced. Malabsorption of vitamin K secondary to cholestasis can cause an increase in the INR; starvation and infection can increase the level of bilirubin; and the use of diuretics can increase the level of creatinine[35]. As a quantitative assessment, the ICG-R15 is a simple and practical way to assess liver function and is widely used in patients undergoing liver surgery. The ICG-R15 may play a role in predicting post-TIPS HE; consequently, it may be useful for identifying high-risk patients.\nThe optimal ICG-R15 cut-off value in our study was 30%, and it divided the patients into two groups with different risks of post-TIPS HE. This difference was highly significant (P = 0.014), which implies that TIPS patients with an ICG-R15 > 30% should be given special care during perioperative management. However, we did not compare the role of the CPS, MELD score, and ICG-R15 for predicting the survival of patients after TIPS, which needs further study. 
In addition, future studies should analyse the incidence of complications and survival between different ICG-R15 groups. The small number of patients with Child-Pugh C cirrhosis and the low median MELD score were limitations of this study. Since these may be related to the small sample size, we will enlarge the sample in future work.\nIn summary, the ICG-R15 can be used for predicting post-TIPS HE in patients with PHT. We propose using the ICG-R15 to evaluate the risk of HE in PHT patients undergoing TIPS.", "TIPS for PHT in patients with cirrhosis should be considered after careful selection based on patient characteristics and liver function. The ICG-R15 has appreciable clinical value for predicting the occurrence of post-TIPS HE and is a useful option for evaluating the prognosis of patients undergoing TIPS." ]
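The reported AUC, Youden-index cut-off, sensitivity, and specificity can all be derived from one ROC computation. A hypothetical sketch with simulated placeholder data (scikit-learn); the score distributions are illustrative, not the study's measurements:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
he = rng.binomial(1, 0.23, size=195)                      # 1 = post-TIPS HE
icg_r15 = np.where(he == 1, rng.normal(44, 10, 195), rng.normal(36, 12, 195))

auc = roc_auc_score(he, icg_r15)
fpr, tpr, thresholds = roc_curve(he, icg_r15)
j = tpr - fpr                                             # Youden's J = sens + spec - 1
k = int(np.argmax(j))                                     # index of the optimal cut-off
print(f"AUC={auc:.3f}  cut-off={thresholds[k]:.1f}  "
      f"sensitivity={tpr[k]:.1%}  specificity={1 - fpr[k]:.1%}")
```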
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patients", "Definitions", "Preoperative ICG-R15, CPS, and MELD score", "TIPS", "Statistical analysis", "RESULTS", "Patients’ preoperative characteristics", "Univariable and multivariable analyses of post-TIPS HE", "Discriminatory power of the CPS, MELD score, and ICG-R15", "Comparison of post-TIPS-HE incidence between patients with an ICG-R15 > 30% and < 30%", "DISCUSSION", "CONCLUSION" ]
[ "Portal hypertension (PHT) is a very common and serious complication of chronic liver disease that often causes variceal bleeding and refractory ascites[1]. Transjugular intrahepatic portosystemic shunt (TIPS) is an important treatment option that has been shown to be efficacious in the management of PHT[2]. This procedure can alleviate portal hypertension by creating a large channel between the portal vein and hepatic vein[3]. Unfortunately, TIPS can cause severe complications such as heart failure, liver failure, and hepatic encephalopathy (HE). HE has a high incidence rate and is one of the most debilitating complications, which has a serious effect on the prognosis and survival of patients[4-6]. Although some risk factors are known, the identification of patients at risk of HE needs additional research. It is important to predict post-TIPS HE so that prevention and treatment measures can be implemented in high-risk HE patients to avoid adverse outcomes.\nThe indocyanine green retention rate at 15 min (ICG-R15), the Child-Pugh score (CPS), and the model for end-stage liver disease (MELD) score have been developed to assess liver function[7,8]. The ICG-R15 is a relatively non-invasive, quick, and inexpensive method that has been widely used in patients with end-stage liver disease[9]. Zipprich et al[10] reported that ICG is the most accurate predictor among quantitative liver function tests of the survival of patients with cirrhosis[10]. A recent retrospective study demonstrated that preoperative ICG clearance was predictive of the surgical prognosis in patients undergoing hepatectomy[11]. The CPS was developed to assess the severity of liver cirrhosis in the clinic. This scoring system includes the bilirubin level, the albumin level, the prothrombin time, HE, and ascites[12]. The MELD score is used to predict the survival of patients undergoing TIPS and to evaluate patients with severe liver disease prior to transplantation. It includes three objective variables: The total bilirubin level, the creatinine level, and the international normalized ratio (INR)[13]. However, there are limited data on the use of liver function tools, especially the ICG-R15, to predict post-TIPS HE. Therefore, the aim of this study was to compare the clinical value of the MELD score, CPS, and ICG-R15 for the prediction of post-TIPS HE in patients with PHT.", "This retrospective study was approved by the Ethics Committee of Beijing Shijitan Hospital of Capital Medical University. The need to obtain informed consent was waived due to its retrospective nature.\nPatients All patients who underwent TIPS between January 2018 and June 2019 in the interventional department of Beijing Shijitan Hospital were included in this study.\nThe following inclusion criteria were set: (1) Patients between 18 and 70 years old; (2) Patients diagnosed with PHT; and (3) Patients who underwent TIPS using a polytetrafluoroethylene-covered stent. The following exclusion criteria were set: (1) Preoperative HE; (2) Liver cancer; (3) Liver transplantation; (4) TIPS retreatment; (5) Non-cirrhotic PHT; (6) Surgical splenectomy; (7) Portal vein thrombosis; and (8) Urgent TIPS.\nAll patients who underwent TIPS between January 2018 and June 2019 in the interventional department of Beijing Shijitan Hospital were included in this study.\nThe following inclusion criteria were set: (1) Patients between 18 and 70 years old; (2) Patients diagnosed with PHT; and (3) Patients who underwent TIPS using a polytetrafluoroethylene-covered stent. 
The following exclusion criteria were set: (1) Preoperative HE; (2) Liver cancer; (3) Liver transplantation; (4) TIPS retreatment; (5) Non-cirrhotic PHT; (6) Surgical splenectomy; (7) Portal vein thrombosis; and (8) Urgent TIPS.", "Definitions PHT was defined as the radiological presence of significant splenomegaly, umbilical vein recanalization, and/or portosystemic shunts as well as a preoperative platelet count < 100 × 10⁹/L. Portal pressure gradient values greater than or equal to 10 mmHg indicated clinically significant PHT[14].\nHE was defined as neuropsychiatric abnormalities ranging from mild neuro-psychological dysfunction to deep coma and abnormal ammonia levels, after the exclusion of other possible causes of altered mental status by computed tomography or magnetic resonance imaging[15].\nPHT was defined as the radiological presence of significant splenomegaly, umbilical vein recanalization, and/or portosystemic shunts as well as a preoperative platelet count < 100 × 10⁹/L. Portal pressure gradient values greater than or equal to 10 mmHg indicated clinically significant PHT[14].\nHE was defined as neuropsychiatric abnormalities ranging from mild neuro-psychological dysfunction to deep coma and abnormal ammonia levels, after the exclusion of other possible causes of altered mental status by computed tomography or magnetic resonance imaging[15].\nPreoperative ICG-R15, CPS, and MELD score All patients included in this study underwent the ICG-R15 test with a dye-densitogram (DDG) analyser (Japan, NIHON KOHDEN, model DDG-3300K) and an ICG clearance rate test (Japan, NIHON KOHDEN, model A). Within 30 s after the injection of ICG (0.5 mg/kg) into the median cubital vein, plasma ICG concentrations were monitored via a sensor attached to the patients’ finger. The ICG-R15 was subsequently assessed by a computer. The CPS can be divided into three grades depending on the total points: Grade A (5-6 points), B (7-9 points), and C (≥ 10 points). The MELD score was calculated based on the following formula: R = 3.8 × ln (bilirubin mg/dL) + 11.2 × ln (INR) + 9.6 × ln (creatinine mg/dL) + 6.4 × aetiology (biliary and alcoholic 0, others 1)[8,16] (Figure 1).\nReceiver operating characteristic curve analyses of indocyanine green-R15, Child-Pugh score, and model for end-stage liver disease score to predict post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy. 
\nTIPS Under fluoroscopic guidance, a standard TIPS procedure was performed by an experienced interventional radiologist. A pigtail catheter was inserted into the right internal jugular vein leading to the hepatic veins. After finding the portal vein through the superior mesenteric artery or splenic artery using indirect portal venography, a stent (7, 8, or 10 mm, Fluency, Bard, United States) was placed to create a channel between the hepatic vein and the portal vein. Afterward, the portal vein pressure was measured at least three times, and a pressure transducer system (Combitrans, Braun Melsungen, Germany) with a multichannel monitor (Sirecust, Siemens, Germany) was used to measure the haemodynamic parameters.\nStatistical analysis Clinical and laboratory characteristics were collected from the medical records. SPSS (version 20.0, SPSS Inc., United States) and MedCalc were used for the statistical analyses. Descriptive data are presented as the mean ± SD, and qualitative variables are presented as frequencies or percentages. Student's t test or the Mann-Whitney U test was used to compare quantitative variables between groups, and the chi-square test or Fisher's exact test was used for qualitative variables. Univariate and multivariable logistic regression analyses were used to determine HE-related risk factors after TIPS. The areas under the receiver operating characteristic curves (AUCs) for the ICG-R15, CPS, and MELD score were evaluated. The non-parametric approach (DeLong-DeLong & Clarke-Pearson)[17] was used for pairwise comparison among the AUCs of the ICG-R15, CPS, and MELD score. Statistical significance was established at P < 0.05.", "All patients who underwent TIPS between January 2018 and June 2019 in the interventional department of Beijing Shijitan Hospital were included in this study.\nThe following inclusion criteria were set: (1) Patients between 18 and 70 years old; (2) Patients diagnosed with PHT; and (3) Patients who underwent TIPS using a polytetrafluoroethylene-covered stent. The following exclusion criteria were set: (1) Preoperative HE; (2) Liver cancer; (3) Liver transplantation; (4) TIPS retreatment; (5) Non-cirrhotic PHT; (6) Surgical splenectomy; (7) Portal vein thrombosis; and (8) Urgent TIPS.", "PHT was defined as the radiological presence of significant splenomegaly, umbilical vein recanalization, and/or portosystemic shunts as well as a preoperative platelet count < 100 × 10^9/L. Portal pressure gradient values greater than or equal to 10 mmHg indicated clinically significant PHT[14].\nHE was defined as neuropsychiatric abnormalities ranging from mild neuropsychological dysfunction to deep coma and abnormal ammonia levels, after the exclusion of other possible causes of altered mental status by computed tomography or magnetic resonance imaging[15].", "All patients included in this study underwent the ICG-R15 test with a dye-densitogram (DDG) analyser (NIHON KOHDEN, Japan; model DDG-3300K) and an ICG clearance rate test (NIHON KOHDEN, Japan; model A). Within 30 s after the injection of ICG (0.5 mg/kg) into the median cubital vein, plasma ICG concentrations were monitored via a sensor attached to the patient’s finger. The ICG-R15 was subsequently assessed by a computer. The CPS can be divided into three grades depending on the total points: Grade A (5-6 points), B (7-9 points), and C (≥ 10 points). The MELD score was calculated based on the following formula: R = 3.8 × ln (bilirubin mg/dL) + 11.2 × ln (INR) + 9.6 × ln (creatinine mg/dL) + 6.4 × aetiology (biliary and alcoholic 0, others 1)[8,16] (Figure 1).\nReceiver operating characteristic curve analyses of indocyanine green-R15, Child-Pugh score, and model for end-stage liver disease score to predict post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy. ICG: Indocyanine green; MELD: Model for end-stage liver disease.", "Under fluoroscopic guidance, a standard TIPS procedure was performed by an experienced interventional radiologist. A pigtail catheter was inserted into the right internal jugular vein leading to the hepatic veins. After finding the portal vein through the superior mesenteric artery or splenic artery using indirect portal venography, a stent (7, 8, or 10 mm, Fluency, Bard, United States) was placed to create a channel between the hepatic vein and the portal vein. Afterward, the portal vein pressure was measured at least three times, and a pressure transducer system (Combitrans, Braun Melsungen, Germany) with a multichannel monitor (Sirecust, Siemens, Germany) was used to measure the haemodynamic parameters.", "Clinical and laboratory characteristics were collected from the medical records. SPSS (version 20.0, SPSS Inc., United States) and MedCalc were used for the statistical analyses. Descriptive data are presented as the mean ± SD, and qualitative variables are presented as frequencies or percentages. Student's t test or the Mann-Whitney U test was used to compare quantitative variables between groups, and the chi-square test or Fisher's exact test was used for qualitative variables. Univariate and multivariable logistic regression analyses were used to determine HE-related risk factors after TIPS. The areas under the receiver operating characteristic curves (AUCs) for the ICG-R15, CPS, and MELD score were evaluated. The non-parametric approach (DeLong-DeLong & Clarke-Pearson)[17] was used for pairwise comparison among the AUCs of the ICG-R15, CPS, and MELD score. Statistical significance was established at P < 0.05.",
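The ROC/AUC evaluation and cut-off selection described in this statistical plan follow a standard recipe; a minimal sketch with synthetic placeholder data (the study itself used SPSS and MedCalc, not Python):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Placeholder data standing in for the study variables:
# y = post-TIPS HE (1/0), score = preoperative ICG-R15 (%)
y = rng.integers(0, 2, size=195)
score = rng.normal(loc=30 + 10 * y, scale=12)

auc = roc_auc_score(y, score)                # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y, score)   # full ROC curve

# Youden index J = sensitivity + specificity - 1 = TPR - FPR;
# the optimal cut-off maximizes J, as done for the 30% ICG-R15 threshold.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC={auc:.3f}, cut-off={thresholds[best]:.1f}, "
      f"sensitivity={tpr[best]:.2%}, specificity={1 - fpr[best]:.2%}")
```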
"Patients’ preoperative characteristics A total of 221 patients who underwent TIPS were included in this study. After applying the inclusion and exclusion criteria of the study, data were collected from a total of 195 decompensated cirrhosis patients with PHT who underwent TIPS. The basic clinical characteristics are listed in Table 1. The study population comprised 140 men and 55 women, with a mean age of 51.2 ± 11.4 years. The indications included variceal bleeding in 118 (60.7%) patients, refractory ascites in 36 (18.5%), variceal bleeding combined with ascites in 27 (14.1%), and other (including pleural fluid and hepatorenal syndrome) in 14 (6.7%). The most common cause of cirrhosis was viral cirrhosis (60.7%), followed by alcoholic cirrhosis (15.9%), biliary cirrhosis (4.6%), and drug-induced and autoimmune hepatitis cirrhosis (4.6%). According to the CPS, 108 patients were classified as having grade A, 75 as having grade B, and 12 as having grade C. The median MELD score and ICG-R15 were 7 (4.1, 10) and 38.4 (22, 50), respectively.\nPatients’ preoperative characteristics\nValues are the mean ± SD. \nValues are n. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01. TIPS: Transjugular intrahepatic portosystemic shunt; HE: Hepatic encephalopathy; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Activated partial thromboplastin time; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cells.\nUnivariable and multivariable analyses of post-TIPS HE Of the 195 patients who underwent TIPS, 45 (23%) developed HE at the 12 mo follow-up. The factors associated with HE in univariable analysis included age, stent size, puncture site, ICG-R15, CPS, MELD score, blood urea nitrogen (BUN) level, and NH3 level (P = 0.006, 0.019, 0.006, 0.027, 0.022, 0.020, 0.025, and 0.044, respectively). As shown in Table 2, multivariate regression analysis identified the following variables as independent risk factors for post-TIPS HE: Older age, 10 mm stent size, puncture site in the right branch of the portal vein, high ICG-R15, high BUN level, high NH3 level, high CPS, and high MELD score.\nUnivariable and multivariable logistic regression analyses of post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy\nP < 0.05. \nP < 0.01. TIPS: Transjugular intrahepatic portosystemic shunt; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Activated partial thromboplastin time; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cell.
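The univariable screen followed by a multivariable model, as reported above, might look like the following in statsmodels; all variables and data here are synthetic placeholders, not study data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Placeholder frame standing in for the study's candidate predictors
df = pd.DataFrame({
    "he": rng.integers(0, 2, 195),       # post-TIPS HE (outcome)
    "age": rng.normal(51, 11, 195),
    "icg_r15": rng.normal(36, 14, 195),
    "bun": rng.normal(6, 2, 195),
    "stent_10mm": rng.integers(0, 2, 195),
})

# Univariable screen: one logistic model per candidate predictor
for col in ["age", "icg_r15", "bun", "stent_10mm"]:
    m = sm.Logit(df["he"], sm.add_constant(df[[col]])).fit(disp=0)
    print(col, "P =", round(m.pvalues[col], 4))

# Multivariable model over the retained candidates
X = sm.add_constant(df[["age", "icg_r15", "bun", "stent_10mm"]])
full = sm.Logit(df["he"], X).fit(disp=0)
print(np.exp(full.params))  # odds ratios
```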
\nDiscriminatory power of the CPS, MELD score, and ICG-R15 The area under the receiver operating characteristic (ROC) curve for the ICG-R15 (AUC = 0.664, 95% confidence interval [CI]: 0.557-0.743, P = 0.0046) for the prediction of post-TIPS HE was larger than those of the CPS (AUC = 0.596, 95%CI: 0.508-0.679, P = 0.087) and the MELD score (AUC = 0.641, 95%CI: 0.554-0.721, P = 0.021). The non-parametric approach (DeLong-DeLong & Clarke-Pearson)[17] showed no statistically significant difference in the pairwise comparison between the AUCs of the ICG-R15 and the MELD score (P = 0.0229). The cut-off value for the ICG-R15, determined by the maximum Youden index, was 30%, with a sensitivity of 86.96% and a specificity of 56.25% for the prediction of post-TIPS HE (Table 3). Of the 126 patients with an ICG-R15 > 30%, 36 (28.5%) developed post-TIPS HE, while 9 (13%) of the 69 patients with an ICG-R15 ≤ 30% developed post-TIPS HE.\nAreas under the receiver operating characteristic curves of the indocyanine green retention rate at 15 min, Child-Pugh score, and model for end-stage liver disease score\nP < 0.01. ICG: Indocyanine green; CPS: Child-Pugh score; MELD: Model for end-stage liver disease; AUC: Area under the receiver operating characteristic curve.\nComparison of post-TIPS-HE incidence between patients with an ICG-R15 > 30% and ≤ 30% The patients were divided into two groups according to the ICG-R15 cut-off value, 30%, determined by the maximal Youden index. Patients with an ICG-R15 > 30% had a higher incidence of HE than those with an ICG-R15 ≤ 30% (28.5% vs 13%; P = 0.014). There were significant differences in age, CPS, preoperative portal pressure gradient, NH3, albumin, aspartate transaminase, Cl, Na, white blood cells, activated partial thromboplastin time, and prothrombin time between patients with ICG-R15 levels below and above 30% (Table 4).\nCharacteristics of patients with an indocyanine green retention rate at 15 min ≤ 30% and > 30%\nValues are n. \nValues are the mean ± SD. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01. \nP < 0.001. ICG: Indocyanine green; HE: Hepatic encephalopathy; PPG: Portal pressure gradient; AST: Aspartate transaminase; APTT: Activated partial thromboplastin time; PT: Prothrombin time; WBC: White blood cells.",
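The group contrast just reported (36/126 vs 9/69 developing HE) can be checked with a 2×2 chi-square test. A small sketch using the published counts (scipy rather than the study's SPSS/MedCalc):

```python
from scipy.stats import chi2_contingency

# 2x2 table built from the counts reported above:
# rows = ICG-R15 > 30% vs <= 30%; columns = HE vs no HE
table = [[36, 126 - 36],   # ICG-R15 > 30%: 36 of 126 developed HE
         [9,  69 - 9]]     # ICG-R15 <= 30%: 9 of 69 developed HE

# correction=False (no Yates continuity correction) reproduces
# a P value close to the reported 0.014
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2={chi2:.2f}, P={p:.3f}")
```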
"A total of 221 patients who underwent TIPS were included in this study. After applying the inclusion and exclusion criteria of the study, data were collected from a total of 195 decompensated cirrhosis patients with PHT who underwent TIPS. The basic clinical characteristics are listed in Table 1. The study population comprised 140 men and 55 women, with a mean age of 51.2 ± 11.4 years. The indications included variceal bleeding in 118 (60.7%) patients, refractory ascites in 36 (18.5%), variceal bleeding combined with ascites in 27 (14.1%), and other (including pleural fluid and hepatorenal syndrome) in 14 (6.7%). The most common cause of cirrhosis was viral cirrhosis (60.7%), followed by alcoholic cirrhosis (15.9%), biliary cirrhosis (4.6%), and drug-induced and autoimmune hepatitis cirrhosis (4.6%). According to the CPS, 108 patients were classified as having grade A, 75 as having grade B, and 12 as having grade C. The median MELD score and ICG-R15 were 7 (4.1, 10) and 38.4 (22, 50), respectively.\nPatients’ preoperative characteristics\nValues are the mean ± SD. \nValues are n. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01. TIPS: Transjugular intrahepatic portosystemic shunt; HE: Hepatic encephalopathy; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Activated partial thromboplastin time; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cells.", "Of the 195 patients who underwent TIPS, 45 (23%) developed HE at the 12 mo follow-up. The factors associated with HE in univariable analysis included age, stent size, puncture site, ICG-R15, CPS, MELD score, blood urea nitrogen (BUN) level, and NH3 level (P = 0.006, 0.019, 0.006, 0.027, 0.022, 0.020, 0.025, and 0.044, respectively). As shown in Table 2, multivariate regression analysis identified the following variables as independent risk factors for post-TIPS HE: Older age, 10 mm stent size, puncture site in the right branch of the portal vein, high ICG-R15, high BUN level, high NH3 level, high CPS, and high MELD score.\nUnivariable and multivariable logistic regression analyses of post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy\nP < 0.05. \nP < 0.01. TIPS: Transjugular intrahepatic portosystemic shunt; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Activated partial thromboplastin time; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cell.", "The area under the receiver operating characteristic (ROC) curve for the ICG-R15 (AUC = 0.664, 95% confidence interval [CI]: 0.557-0.743, P = 0.0046) for the prediction of post-TIPS HE was larger than those of the CPS (AUC = 0.596, 95%CI: 0.508-0.679, P = 0.087) and the MELD score (AUC = 0.641, 95%CI: 0.554-0.721, P = 0.021). The non-parametric approach (DeLong-DeLong & Clarke-Pearson)[17] showed no statistically significant difference in the pairwise comparison between the AUCs of the ICG-R15 and the MELD score (P = 0.0229). The cut-off value for the ICG-R15, determined by the maximum Youden index, was 30%, with a sensitivity of 86.96% and a specificity of 56.25% for the prediction of post-TIPS HE (Table 3). Of the 126 patients with an ICG-R15 > 30%, 36 (28.5%) developed post-TIPS HE, while 9 (13%) of the 69 patients with an ICG-R15 ≤ 30% developed post-TIPS HE.\nAreas under the receiver operating characteristic curves of the indocyanine green retention rate at 15 min, Child-Pugh score, and model for end-stage liver disease score\nP < 0.01. ICG: Indocyanine green; CPS: Child-Pugh score; MELD: Model for end-stage liver disease; AUC: Area under the receiver operating characteristic curve.", "The patients were divided into two groups according to the ICG-R15 cut-off value, 30%, determined by the maximal Youden index. Patients with an ICG-R15 > 30% had a higher incidence of HE than those with an ICG-R15 ≤ 30% (28.5% vs 13%; P = 0.014). There were significant differences in age, CPS, preoperative portal pressure gradient, NH3, albumin, aspartate transaminase, Cl, Na, white blood cells, activated partial thromboplastin time, and prothrombin time between patients with ICG-R15 levels below and above 30% (Table 4).\nCharacteristics of patients with an indocyanine green retention rate at 15 min ≤ 30% and > 30%\nValues are n. \nValues are the mean ± SD. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01. \nP < 0.001. ICG: Indocyanine green; HE: Hepatic encephalopathy; PPG: Portal pressure gradient; AST: Aspartate transaminase; APTT: Activated partial thromboplastin time; PT: Prothrombin time; WBC: White blood cells.", "TIPS has been widely used to treat complications of PHT, including varices and ascites, by creating a large channel between the hepatic vein and the portal vein. This procedure changes the liver haemodynamics by shunting a fraction of the portal venous blood directly into the systemic circulation, which can lead to a decreased liver blood supply and an impaired liver function reserve. In addition, HE occurs because of an increase in the amount of natural toxins such as ammonia travelling to the brain as a result of the shunting of blood directly from the portal vein to the hepatic vein[18]. HE can produce a spectrum of neurological/psychiatric syndromes ranging from subclinical alterations to coma. It remains one of the most common and worrisome complications of end-stage liver disease after TIPS[15].\nThe ICG-R15 test is simple, fast, less invasive, and inexpensive, and can be performed in less than half an hour. The ICG-R15 retention trial was introduced as a relatively noninvasive tool for the classification of pediatric and adult patients with acute and chronic liver failure[19]. A particular advantage of the ICG-R15 test is that it is more suitable for pediatric patients. In addition, the trial appears to be an ideal way to assess the risks of surgical procedures such as liver resection. In addition to assessing the predictive value of varices and ascites, the results of some papers suggest that the ICG-R15 test may also predict mortality[20].\nIn this study, univariate and multivariate logistic analyses showed that a 10 mm stent size, a puncture site in the right branch of the portal vein, age, the ICG-R15, the BUN level, the NH3 level, the CPS, and the MELD score were predictors of post-TIPS HE in patients with portal hypertension. Patients with puncture sites in the right portal vein had a high incidence of post-TIPS HE. This may be because the right branch of the portal vein receives more toxin-rich blood, mainly from the superior mesenteric vein[21-23]. It was reported that choosing the left branch of the portal vein as the puncture site during TIPS placement may significantly decrease the incidence of HE[24,25]. HE occurs more often in patients with a stent diameter of 10 mm than in those with smaller-diameter stents. A larger stent can more effectively reduce portal vein pressure, but at the same time, more blood that has not been detoxified by the liver directly enters the systemic circulation, which can further impair liver function and lead to HE. However, some studies showed that the incidence of post-TIPS HE was unrelated to stent diameter[26]. Li et al[27] also found that age and Child-Pugh classification were independent risk factors for early post-TIPS HE, in accord with previous studies[28,29]. Fonio et al[4] demonstrated that the MELD grade was a risk factor for post-TIPS HE; the results of this study support this finding. Normally, ammonia is detoxified by conversion to urea via the Krebs-Henseleit (urea) cycle in the liver. In total, 40% to 60% of the urea nitrogen in the primary urine is reabsorbed in the renal tubules and collecting ducts. A high incidence of HE was found in patients with high BUN levels, which is related to the aggravation of liver dysfunction involving the kidneys, renal decompensation, and azotaemia[30].\nHiwatashi et al[31] found that a higher ICG-R15 was significantly correlated with surgical complications and liver dysfunction after surgical resection and chemotherapy in patients with colorectal cancer. Wang et al[32] demonstrated that the ICG-R15 could predict preoperative liver reserve function more accurately than the Child-Pugh and MELD scores in patients with liver cancer. Another study showed that the ALICE scoring system (including serum albumin and the ICG-R15) could simply and effectively predict the prognosis of liver cancer patients undergoing surgery[33]. In this study, we analysed and compared the areas under the ROC curves for the ICG-R15, CPS, and MELD score. The results were as follows: AUC(ICG-R15) > AUC(MELD) > AUC(CPS), and there was no statistically significant difference in the pairwise comparison between the AUCs of the ICG-R15 and the MELD score. This suggests that the ICG-R15 has clinical value equivalent to that of the MELD score for predicting post-TIPS HE.
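The study compared AUCs with the DeLong method in MedCalc. As an illustrative stand-in (explicitly not the authors' procedure), a paired bootstrap gives a similar pairwise comparison of two AUCs computed on the same patients; all data below are synthetic placeholders:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 195
y = rng.integers(0, 2, n)             # post-TIPS HE outcome (placeholder)
icg = rng.normal(30 + 8 * y, 12)      # ICG-R15-like score
meld = rng.normal(7 + 2 * y, 4)       # MELD-like score

# Paired bootstrap over patients: resample both scores together,
# then summarize the distribution of the AUC difference.
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    if len(np.unique(y[idx])) < 2:    # a resample needs both classes
        continue
    diffs.append(roc_auc_score(y[idx], icg[idx]) - roc_auc_score(y[idx], meld[idx]))
diffs = np.asarray(diffs)

lo, hi = np.percentile(diffs, [2.5, 97.5])  # 95% CI for AUC(ICG) - AUC(MELD)
print(f"dAUC 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI covering 0 ~ no significant difference
```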
The CPS is relatively restricted because it includes two subjective variables (HE and severity of ascites)[12], and its limited range of values, from 5 to 15, makes it imprecise. The boundary values for the five parameters were chosen empirically and have not been formally validated[34]. In some situations, the predictive value of the MELD score may be reduced: Malabsorption of vitamin K secondary to cholestasis can cause an increase in the INR; starvation and infection can increase the level of bilirubin; and the use of diuretics can increase the level of creatinine[35]. As a quantitative assessment, the ICG-R15 is a simple and practical way to assess liver function and is widely used in patients undergoing liver surgery. The ICG-R15 may play a role in predicting post-TIPS HE; consequently, it may be useful for identifying high-risk patients.\nThe optimal ICG-R15 cut-off value in our study was 30%, and it divided the patients into two groups with different risks of post-TIPS HE. This difference was highly significant (P = 0.014), which implies that TIPS patients with an ICG-R15 > 30% should be given special care during perioperative management. However, we did not compare the roles of the CPS, MELD score, and ICG-R15 in predicting the survival of patients after TIPS, which needs further study. In addition, future studies should analyse the incidence of complications and survival between different ICG-R15 groups. The small number of patients with Child-Pugh C cirrhosis and the low median MELD score were limitations of this study. Since these may be related to the small sample size, we will enlarge the sample in future work.\nIn summary, the ICG-R15 can be used for predicting post-TIPS HE in patients with PHT. We propose using the ICG-R15 to evaluate the risk of HE in PHT patients undergoing TIPS.", "TIPS for PHT in patients with cirrhosis should be considered after careful selection based on patient characteristics and liver function. The ICG-R15 has appreciable clinical value for predicting the occurrence of post-TIPS HE and is an option for evaluating the prognosis of patients undergoing TIPS." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Hepatic encephalopathy", "Indocyanine green-R15", "Child-Pugh score", "Model for end-stage liver disease score", "Transjugular intrahepatic portosystemic shunt", "Portal hypertention" ]
INTRODUCTION: Portal hypertension (PHT) is a very common and serious complication of chronic liver disease that often causes variceal bleeding and refractory ascites[1]. Transjugular intrahepatic portosystemic shunt (TIPS) is an important treatment option that has been shown to be efficacious in the management of PHT[2]. This procedure can alleviate portal hypertension by creating a large channel between the portal vein and the hepatic vein[3]. Unfortunately, TIPS can cause severe complications such as heart failure, liver failure, and hepatic encephalopathy (HE). HE has a high incidence rate and is one of the most debilitating complications, with a serious effect on the prognosis and survival of patients[4-6]. Although some risk factors are known, the identification of patients at risk of HE needs additional research. It is important to predict post-TIPS HE so that prevention and treatment measures can be implemented in high-risk patients to avoid adverse outcomes. The indocyanine green retention rate at 15 min (ICG-R15), the Child-Pugh score (CPS), and the model for end-stage liver disease (MELD) score have been developed to assess liver function[7,8]. The ICG-R15 is a relatively non-invasive, quick, and inexpensive method that has been widely used in patients with end-stage liver disease[9]. Zipprich et al[10] reported that ICG clearance is the most accurate predictor of survival among quantitative liver function tests in patients with cirrhosis. A recent retrospective study demonstrated that preoperative ICG clearance was predictive of the surgical prognosis in patients undergoing hepatectomy[11]. The CPS was developed to assess the severity of liver cirrhosis in the clinic. This scoring system includes the bilirubin level, the albumin level, the prothrombin time, HE, and ascites[12]. The MELD score is used to predict the survival of patients undergoing TIPS and to evaluate patients with severe liver disease prior to transplantation. It includes three objective variables: The total bilirubin level, the creatinine level, and the international normalized ratio (INR)[13]. However, there are limited data on the use of liver function tools, especially the ICG-R15, to predict post-TIPS HE. Therefore, the aim of this study was to compare the clinical value of the MELD score, CPS, and ICG-R15 for the prediction of post-TIPS HE in patients with PHT. MATERIALS AND METHODS: This retrospective study was approved by the Ethics Committee of Beijing Shijitan Hospital of Capital Medical University. The need to obtain informed consent was waived due to its retrospective nature.\nPatients All patients who underwent TIPS between January 2018 and June 2019 in the interventional department of Beijing Shijitan Hospital were included in this study. The following inclusion criteria were set: (1) Patients between 18 and 70 years old; (2) Patients diagnosed with PHT; and (3) Patients who underwent TIPS using a polytetrafluoroethylene-covered stent. The following exclusion criteria were set: (1) Preoperative HE; (2) Liver cancer; (3) Liver transplantation; (4) TIPS retreatment; (5) Non-cirrhotic PHT; (6) Surgical splenectomy; (7) Portal vein thrombosis; and (8) Urgent TIPS.\nDefinitions PHT was defined as the radiological presence of significant splenomegaly, umbilical vein recanalization, and/or portosystemic shunts as well as a preoperative platelet count < 100 × 10^9/L. Portal pressure gradient values greater than or equal to 10 mmHg indicated clinically significant PHT[14].\nHE was defined as neuropsychiatric abnormalities ranging from mild neuropsychological dysfunction to deep coma and abnormal ammonia levels, after the exclusion of other possible causes of altered mental status by computed tomography or magnetic resonance imaging[15].\nPreoperative ICG-R15, CPS, and MELD score All patients included in this study underwent the ICG-R15 test with a dye-densitogram (DDG) analyser (NIHON KOHDEN, Japan; model DDG-3300K) and an ICG clearance rate test (NIHON KOHDEN, Japan; model A). Within 30 s after the injection of ICG (0.5 mg/kg) into the median cubital vein, plasma ICG concentrations were monitored via a sensor attached to the patient’s finger. The ICG-R15 was subsequently assessed by a computer. The CPS can be divided into three grades depending on the total points: Grade A (5-6 points), B (7-9 points), and C (≥ 10 points). The MELD score was calculated based on the following formula: R = 3.8 × ln (bilirubin mg/dL) + 11.2 × ln (INR) + 9.6 × ln (creatinine mg/dL) + 6.4 × aetiology (biliary and alcoholic 0, others 1)[8,16] (Figure 1).\nReceiver operating characteristic curve analyses of indocyanine green-R15, Child-Pugh score, and model for end-stage liver disease score to predict post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy. ICG: Indocyanine green; MELD: Model for end-stage liver disease.
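For intuition about the measurement itself: ICG-R15 is the percentage of injected dye still in plasma 15 min after injection, conventionally estimated from a monoexponential disappearance curve. A hypothetical sketch of that estimation (the DDG-3300K analyser performs this internally from the finger-probe signal; the timed concentrations below are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, c0, k):
    """Monoexponential plasma disappearance: C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

# Hypothetical timed ICG plasma concentrations (mg/L) after injection
t = np.array([3.0, 6.0, 9.0, 12.0, 15.0])   # minutes
c = np.array([4.1, 3.0, 2.2, 1.7, 1.25])

(c0, k), _ = curve_fit(decay, t, c, p0=(5.0, 0.1))
icg_r15 = 100.0 * np.exp(-k * 15.0)          # retention at 15 min, % of C0
print(f"ICG-R15 ~ {icg_r15:.1f}%")
```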
TIPS Under fluoroscopic guidance, a standard TIPS procedure was performed by an experienced interventional radiologist. A pigtail catheter was inserted into the right internal jugular vein leading to the hepatic veins. After finding the portal vein through the superior mesenteric artery or splenic artery using indirect portal venography, a stent (7, 8, or 10 mm, Fluency, Bard, United States) was placed to create a channel between the hepatic vein and the portal vein. Afterward, the portal vein pressure was measured at least three times, and a pressure transducer system (Combitrans, Braun Melsungen, Germany) with a multichannel monitor (Sirecust, Siemens, Germany) was used to measure the haemodynamic parameters.\nStatistical analysis Clinical and laboratory characteristics were collected from the medical records. SPSS (version 20.0, SPSS Inc., United States) and MedCalc were used for the statistical analyses. Descriptive data are presented as the mean ± SD, and qualitative variables are presented as frequencies or percentages. Student's t test or the Mann-Whitney U test was used to compare quantitative variables between groups, and the chi-square test or Fisher's exact test was used for qualitative variables. Univariate and multivariable logistic regression analyses were used to determine HE-related risk factors after TIPS. The areas under the receiver operating characteristic curves (AUCs) for the ICG-R15, CPS, and MELD score were evaluated. The non-parametric approach (DeLong-DeLong & Clarke-Pearson)[17] was used for pairwise comparison among the AUCs of the ICG-R15, CPS, and MELD score. Statistical significance was established at P < 0.05.
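The between-group tests named in this statistical plan map directly onto scipy.stats; a brief sketch with placeholder arrays sized like the two ICG-R15 groups (69 and 126 patients):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Placeholder continuous variable (e.g., age) in the two ICG-R15 groups
a = rng.normal(48, 11, 69)     # ICG-R15 <= 30%
b = rng.normal(53, 11, 126)    # ICG-R15 > 30%

t_stat, p_t = stats.ttest_ind(a, b, equal_var=False)  # Welch t test
u_stat, p_u = stats.mannwhitneyu(a, b)                # non-parametric alternative

# Fisher's exact test for a sparse 2x2 categorical table (made-up counts)
odds, p_f = stats.fisher_exact([[8, 4], [100, 83]])
print(p_t, p_u, p_f)
```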
RESULTS: Patients’ preoperative characteristics A total of 221 patients who underwent TIPS were included in this study. After applying the inclusion and exclusion criteria of the study, data were collected from a total of 195 decompensated cirrhosis patients with PHT who underwent TIPS. The basic clinical characteristics are listed in Table 1. The study population comprised 140 men and 55 women, with a mean age of 51.2 ± 11.4 years. The indications included variceal bleeding in 118 (60.7%) patients, refractory ascites in 36 (18.5%), variceal bleeding combined with ascites in 27 (14.1%), and other (including pleural fluid and hepatorenal syndrome) in 14 (6.7%). The most common cause of cirrhosis was viral cirrhosis (60.7%), followed by alcoholic cirrhosis (15.9%), biliary cirrhosis (4.6%), and drug-induced and autoimmune hepatitis cirrhosis (4.6%). According to the CPS, 108 patients were classified as having grade A, 75 as having grade B, and 12 as having grade C. The median MELD score and ICG-R15 were 7 (4.1, 10) and 38.4 (22, 50), respectively.\nPatients’ preoperative characteristics\nValues are the mean ± SD. \nValues are n. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01.
TIPS: Transjugular intrahepatic portosystemic shunt; HE: Hepatic encephalopathy; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Activated partial thromboplastin time; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cells.\nUnivariable and multivariable analyses of post-TIPS HE Of the 195 patients who underwent TIPS, 45 (23%) developed HE at the 12 mo follow-up. The factors associated with HE in univariable analysis included age, stent size, puncture site, ICG-R15, CPS, MELD score, blood urea nitrogen (BUN) level, and NH3 level (P = 0.006, 0.019, 0.006, 0.027, 0.022, 0.020, 0.025, and 0.044, respectively). As shown in Table 2, multivariate regression analysis identified the following variables as independent risk factors for post-TIPS HE: Older age, 10 mm stent size, puncture site in the right branch of the portal vein, high ICG-R15, high BUN level, high NH3 level, high CPS, and high MELD score.\nUnivariable and multivariable logistic regression analyses of post-transjugular intrahepatic portosystemic shunt hepatic encephalopathy\nP < 0.05. \nP < 0.01. TIPS: Transjugular intrahepatic portosystemic shunt; HVPG: Hepatic venous pressure gradient; PPG: Portal pressure gradient; MELD: Model for end-stage liver disease; ICG: Indocyanine green; ALT: Alanine aminotransferase; AST: Aspartate transaminase; ALP: Alkaline phosphatase; Hb: Hemoglobin; BUN: Blood urea nitrogen; APTT: Activated partial thromboplastin time; INR: International normalized ratio; FIB: Fibrinogen; PT: Prothrombin time; WBC: White blood cell.\nDiscriminatory power of the CPS, MELD score, and ICG-R15 The area under the receiver operating characteristic (ROC) curve for the ICG-R15 (AUC = 0.664, 95% confidence interval [CI]: 0.557-0.743, P = 0.0046) for the prediction of post-TIPS HE was larger than those of the CPS (AUC = 0.596, 95%CI: 0.508-0.679, P = 0.087) and the MELD score (AUC = 0.641, 95%CI: 0.554-0.721, P = 0.021). The non-parametric approach (DeLong-DeLong & Clarke-Pearson)[17] showed no statistically significant difference in the pairwise comparison between the AUCs of the ICG-R15 and the MELD score (P = 0.0229). The cut-off value for the ICG-R15, determined by the maximum Youden index, was 30%, with a sensitivity of 86.96% and a specificity of 56.25% for the prediction of post-TIPS HE (Table 3). Of the 126 patients with an ICG-R15 > 30%, 36 (28.5%) developed post-TIPS HE, while 9 (13%) of the 69 patients with an ICG-R15 ≤ 30% developed post-TIPS HE.\nAreas under the receiver operating characteristic curves of the indocyanine green retention rate at 15 min, Child-Pugh score, and model for end-stage liver disease score\nP < 0.01. ICG: Indocyanine green; CPS: Child-Pugh score; MELD: Model for end-stage liver disease; AUC: Area under the receiver operating characteristic curve.
\nComparison of post-TIPS-HE incidence between patients with an ICG-R15 > 30% and ≤ 30% The patients were divided into two groups according to the ICG-R15 cut-off value, 30%, determined by the maximal Youden index. Patients with an ICG-R15 > 30% had a higher incidence of HE than those with an ICG-R15 ≤ 30% (28.5% vs 13%; P = 0.014). There were significant differences in age, CPS, preoperative portal pressure gradient, NH3, albumin, aspartate transaminase, Cl, Na, white blood cells, activated partial thromboplastin time, and prothrombin time between patients with ICG-R15 levels below and above 30% (Table 4).\nCharacteristics of patients with an indocyanine green retention rate at 15 min ≤ 30% and > 30%\nValues are n. \nValues are the mean ± SD. \nValues are medians (P25, P75). \nP < 0.05. \nP < 0.01. \nP < 0.001. ICG: Indocyanine green; HE: Hepatic encephalopathy; PPG: Portal pressure gradient; AST: Aspartate transaminase; APTT: Activated partial thromboplastin time; PT: Prothrombin time; WBC: White blood cells. DISCUSSION: TIPS has been widely used to treat complications of PHT, including varices and ascites, by creating a large channel between the hepatic vein and the portal vein. This procedure changes the liver haemodynamics by shunting a fraction of the portal venous blood directly into the systemic circulation, which can lead to a decreased liver blood supply and an impaired liver function reserve. In addition, HE occurs because of an increase in the amount of natural toxins such as ammonia travelling to the brain as a result of the shunting of blood directly from the portal vein to the hepatic vein[18]. HE can produce a spectrum of neurological/psychiatric syndromes ranging from subclinical alterations to coma. It remains one of the most common and worrisome complications of end-stage liver disease after TIPS[15].
The ICG-R15 test is simple, fast, minimally invasive, and inexpensive, and can be performed in less than half an hour. The ICG retention test was introduced as a relatively noninvasive tool for the classification of pediatric and adult patients with acute and chronic liver failure[19]. A particular advantage of the ICG-R15 test is that it is well suited to pediatric patients. In addition, the test appears to be an ideal way to assess the risks of surgical procedures such as liver resection. In addition to assessing the predictive value for varices and ascites, the results of some papers suggest that the ICG-R15 test may also predict mortality[20]. In this study, univariable and multivariable logistic analyses showed that a 10 mm stent size, a puncture site in the right branch of the portal vein, age, ICG-R15, BUN level, NH3 level, CPS, and MELD score were predictors of post-TIPS HE in patients with portal hypertension. Patients with puncture sites in the right portal vein had a high incidence of post-TIPS HE. This may be because the right branch of the portal vein receives more toxin-laden blood, mainly from the superior mesenteric vein[21-23]. It has been reported that choosing the left branch of the portal vein as the puncture site during the placement of TIPS may significantly decrease the incidence of HE[24,25]. HE occurs more often in patients with a stent diameter of 10 mm than in those with smaller-diameter stents. A larger stent can effectively reduce portal vein pressure, but at the same time, more blood that has not been detoxified by the liver directly enters the systemic circulation, which can further impair liver function and lead to HE. However, some studies showed that the incidence of post-TIPS HE was unrelated to stent diameter[26]. Li et al[27] also found that age and Child-Pugh classification were independent risk factors for early post-TIPS HE, which was in accord with previous studies[28,29]. Fonio et al[4] demonstrated that the MELD grade was a risk factor for post-TIPS HE; the results of this study support this finding. Normally, ammonia is detoxified by conversion to urea via the Krebs-Henseleit (urea) cycle in the liver. In total, 40% to 60% of the urea nitrogen in the primary urine is reabsorbed in the renal tubules and collecting ducts. A high incidence of HE was found in patients with high BUN levels, which is related to the aggravation of liver dysfunction involving the kidneys, renal decompensation, and azotaemia[30]. Hiwatashi et al[31] found that a higher ICG-R15 was significantly correlated with surgical complications and liver dysfunction after surgical resection and chemotherapy in patients with colorectal cancer. Wang et al[32] demonstrated that the ICG-R15 could predict preoperative liver reserve function more accurately than the Child-Pugh and MELD scores in patients with liver cancer. Another study showed that the ALICE scoring system (including serum albumin and ICG-R15) could simply and effectively predict the prognosis of liver cancer patients undergoing surgery[33]. In this study, we analysed and compared the areas under the ROC curves for the ICG-R15, CPS, and MELD score. The results were as follows: AUC(ICG-R15) > AUC(MELD) > AUC(CPS), and the pairwise comparison between the AUCs of the ICG-R15 and the MELD score was statistically significant. This suggests that the ICG-R15 has clinical value for predicting post-TIPS HE at least comparable to that of the MELD score.
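The pairwise AUC comparison above relies on the DeLong test. A faithful DeLong implementation is somewhat long, so the sketch below substitutes a paired bootstrap comparison of two correlated AUCs, which is a different but commonly used alternative; all scores are synthetic and the variable names are hypothetical.

```python
# Paired bootstrap comparison of two correlated AUCs (an illustrative
# stand-in for the DeLong test). All data are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 195
he = rng.integers(0, 2, n)
icg = rng.normal(0, 1, n) + 0.6 * he    # hypothetical ICG-R15 scores
meld = rng.normal(0, 1, n) + 0.4 * he   # hypothetical MELD scores

obs_diff = roc_auc_score(he, icg) - roc_auc_score(he, meld)
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)          # resample patients, keeping pairing
    if he[idx].min() == he[idx].max():   # need both classes to compute AUC
        continue
    diffs.append(roc_auc_score(he[idx], icg[idx])
                 - roc_auc_score(he[idx], meld[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference = {obs_diff:.3f}, "
      f"95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```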
The CPS is relatively limited because it includes two subjective variables (HE and severity of ascites)[12], and its narrow range of values, from 5 to 15, makes it imprecise. The boundary values for the five parameters were chosen empirically and have not been formally validated[34]. In some situations, the predictive value of the MELD score may be reduced: malabsorption of vitamin K secondary to cholestasis can increase the INR; starvation and infection can increase the bilirubin level; and the use of diuretics can increase the creatinine level[35]. As a quantitative assessment, the ICG-R15 is a simple and practical way to assess liver function and is widely used in patients undergoing liver surgery. The ICG-R15 may play a role in predicting post-TIPS HE; consequently, it may be useful for identifying high-risk patients. The optimal ICG-R15 cut-off value in our study was 30%, and it divided the patients into two groups with different risks of post-TIPS HE. This difference was statistically significant (P = 0.014), which implies that TIPS patients with an ICG-R15 > 30% should be given special care during perioperative management. However, we did not compare the roles of the CPS, MELD score, and ICG-R15 in predicting the survival of patients after TIPS, which needs further study. In addition, future studies should analyse the incidence of complications and survival between different ICG-R15 groups. The small number of patients with Child-Pugh class C cirrhosis and the low median MELD score were limitations of this study. Since these may be related to the small sample size, we will enlarge the sample in future work. In summary, the ICG-R15 can be used for predicting post-TIPS HE in patients with PHT. We propose using the ICG-R15 to evaluate the risk of HE in PHT patients undergoing TIPS. CONCLUSION: TIPS for PHT in patients with cirrhosis should be considered after careful selection based on patient characteristics and liver function. The ICG-R15 has appreciable clinical value for predicting the occurrence of post-TIPS HE and is a reasonable choice for evaluating the prognosis of patients undergoing TIPS.
Background: Hepatic encephalopathy (HE) remains an enormous challenge in patients who undergo transjugular intrahepatic portosystemic shunt (TIPS) implantation. The preoperative indocyanine green retention rate at 15 min (ICG-R15), as one of the liver function assessment tools, has been developed as a prognostic indicator in patients undergoing surgery, but there are limited data on its role in TIPS. Methods: This retrospective study included 195 patients with portal hypertension (PHT) who underwent elective TIPS at Beijing Shijitan Hospital from January 2018 to June 2019. All patients underwent the ICG-R15 test, CPS evaluation, and MELD scoring 1 wk before TIPS. According to whether they developed HE or not, the patients were divided into two groups: HE group and non-HE group. The prediction of one-year post-TIPS HE by the ICG-R15, CPS, and MELD score was evaluated by the areas under the receiver operating characteristic curves (AUCs). Results: A total of 195 patients with portal hypertension were included, and 23% (45/195) of the patients developed post-TIPS HE. The ICG-R15 was identified as an independent predictor of post-TIPS HE. The AUCs for the ICG-R15, CPS, and MELD score for predicting post-TIPS HE were 0.664 (95% confidence interval [CI]: 0.557-0.743, P = 0.0046), 0.596 (95%CI: 0.508-0.679, P = 0.087), and 0.641 (95%CI: 0.554-0.721, P = 0.021), respectively. The non-parametric approach (DeLong, DeLong & Clarke-Pearson) showed a statistically significant difference in the pairwise comparison between the AUCs of the ICG-R15 and the MELD score (P = 0.0229). Conclusions: The ICG-R15 has appreciable clinical value for predicting the occurrence of post-TIPS HE and is a reasonable choice for evaluating the prognosis of patients undergoing TIPS.
INTRODUCTION: Portal hypertension (PHT) is a very common and serious complication of chronic liver disease that often causes variceal bleeding and refractory ascites[1]. Transjugular intrahepatic portosystemic shunt (TIPS) is an important treatment option that has been shown to be efficacious in the management of PHT[2]. This procedure can alleviate portal hypertension by creating a large channel between the portal vein and the hepatic vein[3]. Unfortunately, TIPS can cause severe complications such as heart failure, liver failure, and hepatic encephalopathy (HE). HE has a high incidence rate and is one of the most debilitating complications, with a serious effect on the prognosis and survival of patients[4-6]. Although some risk factors are known, the identification of patients at risk of HE needs additional research. It is important to predict post-TIPS HE so that prevention and treatment measures can be implemented in high-risk HE patients to avoid adverse outcomes. The indocyanine green retention rate at 15 min (ICG-R15), the Child-Pugh score (CPS), and the model for end-stage liver disease (MELD) score have been developed to assess liver function[7,8]. The ICG-R15 is a relatively non-invasive, quick, and inexpensive method that has been widely used in patients with end-stage liver disease[9]. Zipprich et al[10] reported that ICG is the most accurate predictor of the survival of patients with cirrhosis among quantitative liver function tests. A recent retrospective study demonstrated that preoperative ICG clearance was predictive of the surgical prognosis in patients undergoing hepatectomy[11]. The CPS was developed to assess the severity of liver cirrhosis in the clinic. This scoring system includes the bilirubin level, the albumin level, the prothrombin time, HE, and ascites[12]. The MELD score is used to predict the survival of patients undergoing TIPS and to evaluate patients with severe liver disease prior to transplantation. It includes three objective variables: the total bilirubin level, the creatinine level, and the international normalized ratio (INR)[13]. However, there are limited data on the use of liver function tools, especially the ICG-R15, to predict post-TIPS HE. Therefore, the aim of this study was to compare the clinical value of the MELD score, CPS, and ICG-R15 for the prediction of post-TIPS HE in patients with PHT. CONCLUSION: This study suggests that closely monitoring TIPS patients with an ICG-R15 value above 30% can help prevent adverse outcomes. Future studies will focus on the incidence of complications and survival in relation to the ICG-R15 value, and randomized controlled trials are needed to verify our results.
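For concreteness, the two scores described above can be written down directly. The sketch below implements the classic (pre-2016) MELD formula and a textbook Child-Pugh tally; the clamping rules (floor of 1.0, creatinine capped at 4.0 mg/dL) follow the usual UNOS convention, and the Child-Pugh bands are the conventional cut-offs rather than anything specified in this paper, so treat both functions as illustrative.

```python
import math

def meld(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> float:
    """Classic MELD: inputs below 1.0 are floored at 1.0 and creatinine
    is capped at 4.0 mg/dL (standard UNOS convention)."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    cr = min(max(creatinine_mg_dl, 1.0), 4.0)
    return (3.78 * math.log(bili) + 11.2 * math.log(inr)
            + 9.57 * math.log(cr) + 6.43)

def child_pugh(bili_mg_dl, albumin_g_dl, inr, ascites, encephalopathy) -> int:
    """Child-Pugh score (5-15) with textbook cut-offs; ascites and
    encephalopathy are graded 0 (none), 1 (mild), 2 (severe)."""
    score = 1 if bili_mg_dl < 2 else 2 if bili_mg_dl <= 3 else 3
    score += 1 if albumin_g_dl > 3.5 else 2 if albumin_g_dl >= 2.8 else 3
    score += 1 if inr < 1.7 else 2 if inr <= 2.3 else 3
    score += ascites + 1
    score += encephalopathy + 1
    return score

print(round(meld(2.0, 1.3, 1.1), 1))    # ~12.9
print(child_pugh(1.5, 3.0, 1.5, 1, 0))  # 7 -> class B (7-9)
```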
Background: Hepatic encephalopathy (HE) remains an enormous challenge in patients who undergo transjugular intrahepatic portosystemic shunt (TIPS) implantation. The preoperative indocyanine green retention rate at 15 min (ICG-R15), as one of the liver function assessment tools, has been developed as a prognostic indicator in patients undergoing surgery, but there are limited data on its role in TIPS. Methods: This retrospective study included 195 patients with portal hypertension (PHT) who underwent elective TIPS at Beijing Shijitan Hospital from January 2018 to June 2019. All patients underwent the ICG-R15 test, CPS evaluation, and MELD scoring 1 wk before TIPS. According to whether they developed HE or not, the patients were divided into two groups: HE group and non-HE group. The prediction of one-year post-TIPS HE by the ICG-R15, CPS, and MELD score was evaluated by the areas under the receiver operating characteristic curves (AUCs). Results: A total of 195 patients with portal hypertension were included, and 23% (45/195) of the patients developed post-TIPS HE. The ICG-R15 was identified as an independent predictor of post-TIPS HE. The AUCs for the ICG-R15, CPS, and MELD score for predicting post-TIPS HE were 0.664 (95% confidence interval [CI]: 0.557-0.743, P = 0.0046), 0.596 (95%CI: 0.508-0.679, P = 0.087), and 0.641 (95%CI: 0.554-0.721, P = 0.021), respectively. The non-parametric approach (DeLong, DeLong & Clarke-Pearson) showed a statistically significant difference in the pairwise comparison between the AUCs of the ICG-R15 and the MELD score (P = 0.0229). Conclusions: The ICG-R15 has appreciable clinical value for predicting the occurrence of post-TIPS HE and is a reasonable choice for evaluating the prognosis of patients undergoing TIPS.
7,481
363
[ 447, 125, 90, 241, 129, 172, 2261, 345, 259, 285, 214, 1179, 51 ]
14
[ "icg", "patients", "r15", "tips", "icg r15", "score", "meld", "liver", "portal", "vein" ]
[ "liver haemodynamics shunting", "hepatic vein portal", "intrahepatic portosystemic shunt", "alleviate portal hypertension", "portal hypertension pht" ]
null
[CONTENT] Hepatic encephalopathy | Indocyanine green-R15 | Child-Pugh score | Model for end-stage liver disease score | Transjugular intrahepatic portosystemic shunt | Portal hypertension [SUMMARY]
[CONTENT] Hepatic encephalopathy | Indocyanine green-R15 | Child-Pugh score | Model for end-stage liver disease score | Transjugular intrahepatic portosystemic shunt | Portal hypertension [SUMMARY]
null
[CONTENT] Hepatic encephalopathy | Indocyanine green-R15 | Child-Pugh score | Model for end-stage liver disease score | Transjugular intrahepatic portosystemic shunt | Portal hypertension [SUMMARY]
[CONTENT] Hepatic encephalopathy | Indocyanine green-R15 | Child-Pugh score | Model for end-stage liver disease score | Transjugular intrahepatic portosystemic shunt | Portal hypertension [SUMMARY]
[CONTENT] Hepatic encephalopathy | Indocyanine green-R15 | Child-Pugh score | Model for end-stage liver disease score | Transjugular intrahepatic portosystemic shunt | Portal hypertension [SUMMARY]
[CONTENT] End Stage Liver Disease | Hepatic Encephalopathy | Humans | Indocyanine Green | Liver Cirrhosis | Portasystemic Shunt, Transjugular Intrahepatic | Retrospective Studies | Severity of Illness Index [SUMMARY]
[CONTENT] End Stage Liver Disease | Hepatic Encephalopathy | Humans | Indocyanine Green | Liver Cirrhosis | Portasystemic Shunt, Transjugular Intrahepatic | Retrospective Studies | Severity of Illness Index [SUMMARY]
null
[CONTENT] End Stage Liver Disease | Hepatic Encephalopathy | Humans | Indocyanine Green | Liver Cirrhosis | Portasystemic Shunt, Transjugular Intrahepatic | Retrospective Studies | Severity of Illness Index [SUMMARY]
[CONTENT] End Stage Liver Disease | Hepatic Encephalopathy | Humans | Indocyanine Green | Liver Cirrhosis | Portasystemic Shunt, Transjugular Intrahepatic | Retrospective Studies | Severity of Illness Index [SUMMARY]
[CONTENT] End Stage Liver Disease | Hepatic Encephalopathy | Humans | Indocyanine Green | Liver Cirrhosis | Portasystemic Shunt, Transjugular Intrahepatic | Retrospective Studies | Severity of Illness Index [SUMMARY]
[CONTENT] liver haemodynamics shunting | hepatic vein portal | intrahepatic portosystemic shunt | alleviate portal hypertension | portal hypertension pht [SUMMARY]
[CONTENT] liver haemodynamics shunting | hepatic vein portal | intrahepatic portosystemic shunt | alleviate portal hypertension | portal hypertension pht [SUMMARY]
null
[CONTENT] liver haemodynamics shunting | hepatic vein portal | intrahepatic portosystemic shunt | alleviate portal hypertension | portal hypertension pht [SUMMARY]
[CONTENT] liver haemodynamics shunting | hepatic vein portal | intrahepatic portosystemic shunt | alleviate portal hypertension | portal hypertension pht [SUMMARY]
[CONTENT] liver haemodynamics shunting | hepatic vein portal | intrahepatic portosystemic shunt | alleviate portal hypertension | portal hypertension pht [SUMMARY]
[CONTENT] icg | patients | r15 | tips | icg r15 | score | meld | liver | portal | vein [SUMMARY]
[CONTENT] icg | patients | r15 | tips | icg r15 | score | meld | liver | portal | vein [SUMMARY]
null
[CONTENT] icg | patients | r15 | tips | icg r15 | score | meld | liver | portal | vein [SUMMARY]
[CONTENT] icg | patients | r15 | tips | icg r15 | score | meld | liver | portal | vein [SUMMARY]
[CONTENT] icg | patients | r15 | tips | icg r15 | score | meld | liver | portal | vein [SUMMARY]
[CONTENT] liver | patients | level | survival patients | survival | tips | icg | liver function | function | predict [SUMMARY]
[CONTENT] test | vein | points | icg | score | patients | tips | portal | mg | ln [SUMMARY]
null
[CONTENT] tips | considered careful | cirrhosis considered careful | occurrence post | occurrence post tips | occurrence post tips choice | evaluating prognosis patients | evaluating prognosis patients undergoing | based patient characteristics liver | based patient characteristics [SUMMARY]
[CONTENT] icg | patients | tips | r15 | icg r15 | score | liver | meld | portal | vein [SUMMARY]
[CONTENT] icg | patients | tips | r15 | icg r15 | score | liver | meld | portal | vein [SUMMARY]
[CONTENT] ||| 15 [SUMMARY]
[CONTENT] 195 | PHT | Beijing Shijitan Hospital | January 2018 to June 2019 ||| CPS | MELD | 1 ||| two ||| one-year | ICG-R15 | CPS | MELD [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| 15 ||| 195 | PHT | Beijing Shijitan Hospital | January 2018 to June 2019 ||| CPS | MELD | 1 ||| two ||| one-year | ICG-R15 | CPS | MELD ||| 195 | 23% | 45/195 ||| ||| CPS | MELD | 0.664 | 95% ||| CI | 0.557-0.743 | 0.0046 | 0.596 | 0.508 | 0.087 | 0.641 | 0.554-0.721 | 0.021 ||| Delong-Delong & Clarke-Pearson | ICG-R15 | 0.0229 ||| [SUMMARY]
[CONTENT] ||| 15 ||| 195 | PHT | Beijing Shijitan Hospital | January 2018 to June 2019 ||| CPS | MELD | 1 ||| two ||| one-year | ICG-R15 | CPS | MELD ||| 195 | 23% | 45/195 ||| ||| CPS | MELD | 0.664 | 95% ||| CI | 0.557-0.743 | 0.0046 | 0.596 | 0.508 | 0.087 | 0.641 | 0.554-0.721 | 0.021 ||| Delong-Delong & Clarke-Pearson | ICG-R15 | 0.0229 ||| [SUMMARY]
Topical co-administration of zoledronate with recombinant human bone morphogenetic protein-2 can induce and maintain bone formation in the bone marrow environment.
33472600
Bone morphogenetic proteins (BMPs) induce osteogenesis in various environments. However, when BMPs are used alone in the bone marrow environment, the maintenance of new bone formation is difficult owing to vigorous bone resorption. This is because BMPs stimulate the differentiation of not only osteoblast precursor cells but also osteoclast precursor cells. The present study aimed to induce and maintain new bone formation using the topical co-administration of recombinant human BMP-2 (rh-BMP-2) and zoledronate (ZOL) on beta-tricalcium phosphate (β-TCP) composite.
BACKGROUND
β-TCP columns were impregnated with both rh-BMP-2 (30 µg) and ZOL (5 µg), rh-BMP-2 alone, or ZOL alone, and implanted into the left femoral canal of New Zealand white rabbits (n = 56). The implanted β-TCP columns were harvested and evaluated at 3 and 6 weeks after implantation. The harvested β-TCP columns were evaluated radiologically using plain radiographs, and histologically using haematoxylin/eosin (H&E) and Masson's trichrome (MT) staining. In addition, micro-computed tomography (micro-CT) was performed for quantitative analysis of bone formation in each group (n = 7).
METHODS
Tissue sections stained with H&E and MT dyes revealed that new bone formation inside the β-TCP composite was significantly greater in columns impregnated with both rh-BMP-2 and ZOL than in those from the other experimental groups at 3 and 6 weeks after implantation (p < 0.05). Micro-CT data also demonstrated that the bone volume and the bone mineral density inside the β-TCP columns were significantly greater in columns impregnated with both rh-BMP-2 and ZOL than in those from the other experimental groups at 3 and 6 weeks after implantation (p < 0.05).
RESULTS
The topical co-administration of both rh-BMP-2 and ZOL on β-TCP composite promoted and maintained newly formed bone structure in the bone marrow environment.
CONCLUSIONS
[ "Animals", "Bone Marrow", "Bone Morphogenetic Protein 2", "Humans", "Osteogenesis", "Rabbits", "Recombinant Proteins", "Transforming Growth Factor beta", "X-Ray Microtomography", "Zoledronic Acid" ]
7819170
Background
Several clinical applications of recombinant human bone morphogenetic proteins (rh-BMPs) have reportedly promoted new bone formation [1, 2]. BMPs act as signal transducers in the Smad signaling pathway to regulate mesenchymal stem cell differentiation during skeletal development, especially bone formation [3, 4]. For example, in orthopaedic surgery, rh-BMP has already been used to improve clinical results, as in the novel operative technique of spinal fusion [5]. However, the use of rh-BMPs in certain orthopaedic surgeries performed in the intramedullary environment, e.g., total hip replacements involving large bone defects or intramedullary bone tumours, remains limited because more osteoclast progenitor cells are derived from hematopoietic stem cells in the bone marrow environment, and rh-BMPs cannot achieve suitable osteogenesis inside the bone marrow because they promote the differentiation of osteoclast precursor cells as well as the precursor cells that differentiate into osteoblasts [6, 7]. In the intramedullary environment, it is difficult to achieve both bone formation and its maintenance. To overcome these problems, we previously reported the effectiveness of the systemic administration of zoledronate (ZOL) combined with an rh-BMP-2/β-tricalcium phosphate (β-TCP) composite to promote the osteogenesis of newly formed bone in the bone marrow environment [8]. β-TCP has been reported to be a good carrier for the delivery of both rh-BMP and bisphosphonates to promote osteogenesis [9–12]. β-TCP, a bioactive bone substitute material, has high biocompatibility and good stability [13]. Moreover, ZOL has been demonstrated to have a protective effect against bone tissue resorption by inhibiting the activity of osteoclasts at the local site [14, 15]. In the present study, we further investigated whether the topical co-treatment of ZOL and the rh-BMP-2/β-TCP composite is useful for the promotion as well as the maintenance of new bone formation in the bone marrow environment. Should intramedullary bone formation be achieved by only the topical administration of these drugs, this treatment may represent a safe and effective procedure to create bone formation in lesion sites, both from a clinical and a morphological perspective. In this study, the primary objective was to achieve bone formation in the bone marrow environment, and the secondary objective was to maintain the formed bone tissue, by utilizing the combined effect of rh-BMP-2 in promoting bone formation and ZOL in maintaining bone tissue. In other words, we hypothesized that rh-BMP-2 could achieve bone formation in the bone marrow environment during the early treatment period and that ZOL could maintain the newly formed bone tissue by inhibiting bone resorption for a certain period. The aim of this study was to investigate whether the topical co-administration of an rh-BMP-2/β-TCP/ZOL composite promoted osteogenesis and maintained the newly formed bone in the bone marrow environment.
null
null
Results
There were no statistically significant differences in body weight among the groups at 0, 3, and 6 weeks after implantation (p = 0.63). The median and range of body weight (kg) at each time point for Groups 1, 2, 3, and 4 were as follows: 3.2 (2.9 to 3.3), 3.3 (3.0 to 3.4), 3.2 (3.0 to 3.3), and 3.2 (3.0 to 3.3) at 0 weeks; 3.3 (3.1 to 3.4), 3.2 (3.0 to 3.4), 3.2 (3.0 to 3.4), and 3.1 (3.0 to 3.3) at 3 weeks; and 3.2 (3.0 to 3.3), 3.2 (3.0 to 3.5), 3.2 (3.1 to 3.3), and 3.0 (2.9 to 3.4) at 6 weeks, respectively. No animals dropped out of observation during the study period owing to death or any other reason. Moreover, there were no complications, such as poor wound healing, after surgery.
Macroscopy of implanted β-TCP columns in femoral bone marrow: At 3 weeks after implantation, the gross appearance of the implanted β-TCP column had largely disappeared in Group 2 (rh-BMP-2 alone) (Fig. 1c, d). However, in the groups containing ZOL (Groups 3 and 4), the β-TCP column remained recognizable at 6 weeks after implantation (Fig. 1e-h). Fig. 1: Representative photos of the left distal femurs of rabbits cut in the sagittal plane at 3 and 6 weeks after implantation. In the images, the left side is the distal side of the femur and the upper side is the dorsal side. Arrows point to a section of the edge of the implanted β-TCP. The gross appearance of the implanted β-TCP columns gradually disappeared (a). The β-TCP columns in the groups containing ZOL (f and h) remained comparatively more recognizable than those in the groups without ZOL (b and d) at 6 weeks after implantation.
Radiographic evaluations of implanted β-TCP columns in femoral bone marrow: The X-ray images showed that the radiolucency inside the implanted β-TCP column tended to increase gradually in all groups (Fig. 2). However, consistent with the macroscopic analysis, the radiolucency inside the β-TCP columns was comparatively suppressed in the ZOL-treated groups (Groups 3 and 4) (Fig. 2g-l). Fig. 2: Representative X-ray images of the left distal femurs of rabbits from each group at 0, 3, and 6 weeks after implantation. In the images, the left side is the distal side of the femur and the upper side is the dorsal side. The radiolucencies in the area of the implanted β-TCP columns gradually increased. The β-TCP columns in the groups containing ZOL (i and l) remained comparatively more recognizable than those in the groups without ZOL (c and f) at 6 weeks after implantation.
Promotion and maintenance of bone formation in the bone marrow environment: Representative H&E and MT stained sections of tissues and their quantitative evaluations are shown in Figs. 3 and 4. At 3 weeks after implantation, the newly formed bone structure area was significantly larger in the groups with rh-BMP-2 (Groups 2 and 4) than in the groups without rh-BMP-2 (Groups 1 and 3) (p < 0.001, Fig. 4a, c). Details of the statistical analysis of each comparison are as follows: group 1 vs. 2: p < 0.001 in H&E and p < 0.001 in MT; group 1 vs. 3: p = 0.05 in H&E and p = 0.06 in MT; group 1 vs. 4: p < 0.001 in H&E and p = 0.04 in MT; group 2 vs. 3: p < 0.001 in H&E and p < 0.001 in MT; group 2 vs. 4: p = 1.0 in H&E and p = 0.06 in MT; and group 3 vs. 4: p < 0.001 in H&E and p < 0.001 in MT. At 6 weeks after implantation, the newly formed bone structure area in the group containing both rh-BMP-2 and ZOL (Group 4) was significantly larger than that in the other groups (p < 0.001, Fig. 4b, d). Details of the statistical analysis of each comparison are as follows: group 1 vs. 2: p = 1.0 in H&E and p = 1.0 in MT; group 1 vs. 3: p = 1.0 in H&E and p = 1.0 in MT; group 1 vs. 4: p < 0.001 in H&E and p = 0.03 in MT; group 2 vs. 3: p = 1.0 in H&E and p = 1.0 in MT; group 2 vs. 4: p < 0.001 in H&E and p < 0.001 in MT; and group 3 vs. 4: p < 0.001 in H&E and p = 0.006 in MT. The newly formed bone structure area in Groups 1, 2, and 3 had almost disappeared at 6 weeks after implantation (Fig. 3q-z, a’, and b’). The actual values of new bone structure area in the H&E and MT sections at 3 and 6 weeks after implantation are shown in Table 1. Fig. 3: Representative H&E and Masson’s trichrome stained sections of the left distal femurs of rabbits cut in the sagittal plane in each group at 3 and 6 weeks after implantation. In each image, the proximal section is displayed on the right and the dorsal section on the upper part of the figure. The dotted box in the low-powered view (2×) indicates the range of the high-powered view. The high-powered views (20×) were captured randomly from inside the implanted β-TCP areas for quantitative evaluation. The uniformly stained tissue areas, indicated by arrows, represent newly formed trabecular bone structure. At 3 weeks after implantation, the stained tissue area recognized as new bone was significantly increased in the groups containing rh-BMP-2 (f, h, n, and p). New bone area remained only in the group treated with both rh-BMP-2 and ZOL (d’ and f’) at 6 weeks after implantation. Note: H&E, haematoxylin-eosin; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate; β-TCP, β-tricalcium phosphate. Fig. 4: Quantitative evaluation of the H&E and Masson’s trichrome sections of the left distal femurs of rabbits in each group at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. At 3 weeks after implantation, the groups containing rh-BMP-2 (Groups 2 and 4) showed greater areas of new bone formation than the other groups (P < 0.05). However, at 6 weeks after implantation, only the group receiving the combination of rh-BMP-2 and ZOL (Group 4) still showed areas of newly formed bone (P < 0.05). *: P < 0.05. Statistical differences between groups were determined using a one-way analysis of variance with Bonferroni’s multiple comparison test. Note: H&E, haematoxylin-eosin; MT, Masson’s trichrome; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate. Table 1: Histological assessments of new bone area in the marrow of the rabbit femur. Variables present percentages of new bone areas in tissues as median, minimum, and maximum. P values indicate the statistical differences between the groups. Note: β-TCP, beta-tricalcium phosphate; rh-BMP-2, recombinant human bone morphogenetic protein-2; ZOL, zoledronate.
Qualitative improvement of formed bone by topical co-administration of rh-BMP-2 and ZOL: The qualitative differences in newly formed bone inside the implanted β-TCP columns between the groups were evaluated by µ-CT, and the results are shown as bar graphs in Fig. 5. At 3 weeks after implantation, the groups with rh-BMP-2 (Groups 2 and 4) showed significantly greater BV/TV and BMD than the groups without rh-BMP-2 (Groups 1 and 3) (p < 0.05, Fig. 5a, c). At 6 weeks after implantation, only the treatment group with both rh-BMP-2 and ZOL (Group 4) showed significantly greater BV/TV and BMD values than the other groups (p < 0.05, Fig. 5b, d). The actual values of BV/TV and BMD at 3 and 6 weeks after implantation are shown in Table 2. Fig. 5: μ-CT evaluation of BV/TV and BMD in retrieved β-TCP implants at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. *: P < 0.05. Statistical differences between groups were determined with one-way ANOVA and the post-hoc Bonferroni test. Note: BV/TV, bone volume/total tissue volume; BMD, bone mineral density. Table 2: Quantitative assessments of implanted β-TCP using μ-CT. Median, minimum, and maximum are provided. P values indicate the statistical differences between the groups. Note: β-TCP, beta-tricalcium phosphate; rh-BMP-2, recombinant human bone morphogenetic protein-2; ZOL, zoledronate.
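The group comparisons above use a one-way ANOVA followed by Bonferroni-corrected pairwise tests; a minimal sketch of that recipe on synthetic BV/TV values (the group means, spreads, and n = 7 draws are invented for illustration) is below.

```python
# One-way ANOVA with Bonferroni-corrected pairwise t-tests, mirroring
# the group comparisons above. BV/TV values are synthetic (n = 7/group).
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
groups = {
    "PBS": rng.normal(5, 2, 7),
    "rh-BMP-2": rng.normal(9, 2, 7),
    "ZOL": rng.normal(6, 2, 7),
    "rh-BMP-2 + ZOL": rng.normal(14, 2, 7),
}

f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-corrected threshold
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    mark = "*" if p < alpha else ""
    print(f"{a} vs {b}: p = {p:.4f} {mark}")
```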
Conclusions
In summary, the combination of locally administered rh-BMP-2 and ZOL via β-TCP column materials promoted new bone formation in the bone marrow and enabled the maintenance of the newly formed bone for 6 weeks after implantation. Our findings may contribute to the orthopaedic field, especially to clinical approaches for cases that require bone regeneration in the bone marrow environment.
[ "Background", "Recombinant human BMP-2", "Zoledronate", "β-TCP columns", "Surgery and implantation of β-TCP columns", "Plane radiographs", "Histological examination", "Micro‐computed tomography", "Ethical considerations", "Statistical analysis", "Macroscopy of implanted β-TCP columns in femoral bone marrow", "Radiographic evaluations of implanted β-TCP columns in femoral bone marrow", "Promotion and maintenance of bone formation in the bone marrow environment", "Qualitative improvement of formed bone by topical co‐administration of rh-BMP2 and ZOL" ]
[ "Several clinical applications of recombinant human bone morphogenic proteins (rh-BMPs) have reportedly promoted new bone formation [1, 2]. BMPs act as signal transducers in the Smad signaling pathway to regulate mesenchymal stem cell differentiation during skeletal development, especially bone formation [3, 4]. For example, in orthopaedics surgery, rh-BMP has already been used to improve clinical results such as the novel operative technique of spinal fusion [5]. However, the use of rh-BMPs in certain orthopaedic surgeries performed in the intramedullary environment, e.g., total hip replacements involving large bone defects or intramedullary bone tumours, remains limited because more osteoclast progenitor cells are derived from hematopoietic stem cells in the bone marrow environment and rh-BMPs cannot achieve suitable osteogenesis inside of the bone marrow by promoting the differentiation of the osteoclast precursor cells, not only precursor cells which can be differentiated into osteoblast [6, 7]. In the intramedullary environment, it is difficult to achieve both bone formation and its maintenance.\nTo overcome these problems, we previously reported the effectiveness of the systemic administration of ZOL using rh-BMP-2/β-tricalcium phosphate (β-TCP) composite to promote the osteogenesis of newly formed bone in the bone marrow environment [8]. β-TCP has been reported as a good carrier for drug delivery of both rh-BMP and bisphosphonates to promote osteogenesis [9–12]. β-TCP, a bioactive bone substitute material, has high biocompatibility and good stability [13]. Moreover, ZOL has demonstrated to have a protective effect on bone tissue resorption by inhibiting the activity of osteoclasts at the local site [14, 15]. In the present study, we further investigated if the topical co-treatment of ZOL and the rh-BMP-2/β-TCP composite is useful in the promotion as well as the maintenance of new bone formation in the bone marrow environment. Should the intramedullary bone formation be achieved by only the topical administration of these drugs, this treatment may represent a safety and effective procedure to create bone formation in lesion sites, both from a clinical and morphological perspective.\nIn this study, the primary object was to achieve bone formation in the bone marrow environment and the secondary object was to maintain the formed bone tissue, by utilizing the combined effect of rh-BMP-2 in promoting bone formation and ZOL in maintaining bone tissue. In other words, we hypothesized that rh-BMP-2 could achieve bone formation in the bone marrow environment during the early treatment period and ZOL could maintain the newly formed bone tissue by inhibiting bone resorption for a certain period. The aim of this study was to investigate if the topical co-administration of rh-BMP-2/β-TCP/ZOL composite promoted osteogenesis and maintained the newly formed bone in the bone marrow environment.", "This study used rh-BMP-2 produced in Escherichia coli, provided by Osteopharma, Inc (Osaka, Japan) [16]. Dimerization of the monomeric cytokine was obtained using published procedures [16, 17]. Rh-BMP-2 was reconstituted in sterile 0.01 N hydrochloric acid at 5 mg/mL and stored at 80℃ until use.", "ZOL used in this study was purchased as a liquid solution as 4 mg/5 mL (Zometa™; Novartis Pharma K.K./ Tokyo, Japan) and stored at room temperature (approximately 25℃) until use. 
ZOL was diluted in phosphate-buffered saline (PBS, Wako, Osaka, Japan) to 5 µg ZOL per β-TCP column.", "β-TCP columns (diameter: 6 mm, length: 10 mm, porosity: 75 %) were manufactured and provided by HOYA (Tokyo, Japan) in a dry condition. The β-TCP columns were sterilized using dry heat (255 ℃, 3 h) and impregnated with each drug. The concentration of each drug was adjusted using 75 µL PBS per β-TCP column as follows: PBS only (Group 1), 30 µg of rh-BMP-2 (Group 2), 5 µg of ZOL (Group 3), or 30 µg of rh-BMP-2 and 5 µg of ZOL (Group 4). The drug lysates were infiltrated into the β-TCP columns in a laminar flow cabinet [8].", "New Zealand white rabbits (n = 56 females, age: 18 to 20 weeks, body weight: 3.0–4.0 kg) were purchased from Japan SLC Co. (Shizuoka, Japan). All animals were acclimatized in cages with free access to food and water for 2 weeks. The β-TCP columns were surgically inserted into the medullary cavity at the distal position of the left femurs based on our previously described procedure [8]. Briefly, animals were anesthetized with a subcutaneous injection of ketamine (30 mg/kg body weight) and xylazine (10 mg/kg body weight). After exposure, the distal femur was reamed with a 6.2 mm hand drill to create a hole, a radiograph was taken for confirmation, and then a β-TCP column was inserted into the medullary cavity. During the postoperative period, all animals were maintained in cages (one rabbit per cage) in a temperature-controlled room (25 ℃) with ad libitum access to food and water and unrestricted movement at the animal care centre at our institution. At 3 and 6 weeks after the surgery, animals were sacrificed by intravenous injection of 100 mg/kg pentobarbital (Somnopentyl™, Kyoritu seiyaku, Tokyo, Japan) and the distal femurs containing the β-TCP were harvested. Seven rabbits were sacrificed in each group at each time point. Harvested femurs were fixed in 4 % paraformaldehyde phosphate buffer overnight at 4 ℃ and stored in 70 % ethanol solution at 4 ℃ until use. No animals were excluded from experimental analysis. To reduce confounding factors as much as possible, the order of implanting β-TCP in each group was selected randomly. In the post-operative management, one cage was used for one animal and its location in the animal care room was randomly changed at regular intervals for unification of the environment. After surgery, the surgical wound condition, food intake, and activity were monitored and confirmed to be satisfactory.", "Plain radiographs of the lateral views of the distal femurs were taken under anaesthesia during the implantation surgery (0 weeks) and at 3 and 6 weeks after the surgery. Radiographs were obtained using a KXO-15ER apparatus (Toshiba Medical, Tochigi, Japan) at 50 kV and 100 mA for 0.08 s, and visualized using an FCR CAPSULA-2V1 system (Fujifilm, Tokyo, Japan).", "Prior to histological evaluation, the fixed specimens were decalcified in 0.5 mol/L ethylenediaminetetraacetic acid (EDTA) solution (Wako, Osaka, Japan) for 2 weeks, dehydrated in a graded ethanol series (70 %, 80 %, 90 %, and 100 % ethanol), and embedded in paraffin wax. Mid-sagittal (longitudinal, along the implant) sections were cut into 4 µm slices in each plane. After preparation, the tissue sections were stained using haematoxylin/eosin (H&E) staining and Masson's trichrome (MT) staining. New bone formation within the β-TCP columns was histologically assessed using previously described procedures with minor modifications [18].
Briefly, three high-powered fields (objective lens 20×) were randomly selected from three tissue sections from each β-TCP column sample. The images were captured using a microscope with a built-in digital camera (DP 70; Olympus Corporation, Tokyo, Japan). Captured images were analysed using ImageJ™ software (National Institutes of Health, MD, USA). A total of 9 images captured in each group were analysed. The threshold for the measurement of the newly formed bone was set between 150 and 180 of the red channel in the software. New bone area (%) was estimated as the detected area/total area × 100 in each section (a small worked sketch of this calculation follows this list). These new bone areas in the H&E and MT sections were defined as the primary outcomes in this study.", "The implanted β-TCP columns were evaluated by micro-computed tomography (µ-CT) using an Aloka Latheta LCT200 (HITACHI, Tokyo, Japan) based on a previously published procedure [19, 20]. Briefly, the following conditions were maintained per image: slice width of 30 µm, voxel size of 30 × 120 µm, voltage of 80 kVp, and current of 50 µA. The measurement area of the β-TCP was determined, and bone quality within this area was quantitatively assessed using LaTheta software (version 2.10, Aloka). Bone volume/total tissue volume (BV/TV) and bone mineral density (BMD) were evaluated according to the manufacturer's instructions. All sections were analysed by µ-CT (n = 7 in each group at 3 and 6 weeks after implantation). These quantitative bone assessments by µ-CT were defined as the secondary outcomes in this study.", "This study was approved by the Animal Research Committees of our institution (approval number 13,017). All applicable international, national, and/or institutional guidelines for the care and use of animals were followed. All procedures performed were in accordance with the ethical standards of the institution at which the study was conducted. This paper does not contain any studies with human participants performed by any of the authors.", "The results are presented as median and range (minimum and maximum). All variables were confirmed to be normally distributed using the Kolmogorov-Smirnov test. The differences between groups were analysed using a one-way analysis of variance with Bonferroni's multiple comparison test. To determine the adequate sample size, a power analysis was performed for the primary and secondary outcomes. According to a previous report on new bone area and the quantitative bone assessment of µ-CT, the expected differences in primary and secondary outcomes were 10 ± 5.5 % and 6 ± 3.5 %, respectively [8, 18]. Based on these findings, to provide an appropriate power (1 − β = 0.80) with the significance level set at 0.05, a sample size of five cases or more was adequate to achieve the primary outcome and a sample size of six cases or more was adequate to achieve the secondary outcomes. Statistical significance was set at P < 0.05. Statistical analyses were performed using SPSS software, version 22 (IBM, NY, USA).", "At 3 weeks after implantation, the gross appearance of the implanted β-TCP column had largely disappeared in Group 2 (rh-BMP-2 alone) (Fig. 1c, d). However, in the groups containing ZOL (Groups 3 and 4), the β-TCP column remained recognizable at 6 weeks after implantation (Fig. 1e-h).\n\nFig. 1: Representative photos of the left distal femurs of rabbits cut in the sagittal plane at 3 and 6 weeks after implantation. In the images, the left side is the distal side of the femur and the upper side is the dorsal side. Arrows point to a section of the edge of the implanted β-TCP. The gross appearance of the implanted β-TCP columns gradually disappeared (a). The β-TCP columns in the groups containing ZOL (f and h) remained comparatively more recognizable than those in the groups without ZOL (b and d) at 6 weeks after implantation.", "The X-ray images showed that the radiolucency inside the implanted β-TCP column tended to increase gradually in all groups (Fig. 2). However, consistent with the macroscopic analysis, the radiolucency inside the β-TCP columns was comparatively suppressed in the ZOL-treated groups (Groups 3 and 4) (Fig. 2g-l).\n\nFig. 2: Representative X-ray images of the left distal femurs of rabbits from each group at 0, 3, and 6 weeks after implantation. In the images, the left side is the distal side of the femur and the upper side is the dorsal side. The radiolucencies in the area of the implanted β-TCP columns gradually increased. The β-TCP columns in the groups containing ZOL (i and l) remained comparatively more recognizable than those in the groups without ZOL (c and f) at 6 weeks after implantation.", "Representative H&E and MT stained sections of tissues and their quantitative evaluations are shown in Figs. 3 and 4. At 3 weeks after implantation, the newly formed bone structure area was significantly larger in the groups with rh-BMP-2 (Groups 2 and 4) than in the groups without rh-BMP-2 (Groups 1 and 3) (p < 0.001, Fig. 4a, c). Details of the statistical analysis of each comparison are as follows: group 1 vs. 2: p < 0.001 in H&E and p < 0.001 in MT; group 1 vs. 3: p = 0.05 in H&E and p = 0.06 in MT; group 1 vs. 4: p < 0.001 in H&E and p = 0.04 in MT; group 2 vs. 3: p < 0.001 in H&E and p < 0.001 in MT; group 2 vs. 4: p = 1.0 in H&E and p = 0.06 in MT; and group 3 vs. 4: p < 0.001 in H&E and p < 0.001 in MT. At 6 weeks after implantation, the newly formed bone structure area in the group containing both rh-BMP-2 and ZOL (Group 4) was significantly larger than that in the other groups (p < 0.001, Fig. 4b, d). Details of the statistical analysis of each comparison are as follows: group 1 vs. 2: p = 1.0 in H&E and p = 1.0 in MT; group 1 vs. 3: p = 1.0 in H&E and p = 1.0 in MT; group 1 vs. 4: p < 0.001 in H&E and p = 0.03 in MT; group 2 vs. 3: p = 1.0 in H&E and p = 1.0 in MT; group 2 vs. 4: p < 0.001 in H&E and p < 0.001 in MT; and group 3 vs. 4: p < 0.001 in H&E and p = 0.006 in MT. The newly formed bone structure area in Groups 1, 2, and 3 had almost disappeared at 6 weeks after implantation (Fig. 3q-z, a’, and b’). The actual values of new bone structure area in the H&E and MT sections at 3 and 6 weeks after implantation are shown in Table 1.\n\nFig. 3: Representative H&E and Masson’s trichrome stained sections of the left distal femurs of rabbits cut in the sagittal plane in each group at 3 and 6 weeks after implantation. In each image, the proximal section is displayed on the right and the dorsal section on the upper part of the figure. The dotted box in the low-powered view (2×) indicates the range of the high-powered view. The high-powered views (20×) were captured randomly from inside the implanted β-TCP areas for quantitative evaluation. The uniformly stained tissue areas, indicated by arrows, represent newly formed trabecular bone structure. At 3 weeks after implantation, the stained tissue area recognized as new bone was significantly increased in the groups containing rh-BMP-2 (f, h, n, and p). New bone area remained only in the group treated with both rh-BMP-2 and ZOL (d’ and f’) at 6 weeks after implantation. Note: H&E, haematoxylin-eosin; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate; β-TCP, β-tricalcium phosphate.\n\nFig. 4: Quantitative evaluation of the H&E and Masson’s trichrome sections of the left distal femurs of rabbits in each group at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. At 3 weeks after implantation, the groups containing rh-BMP-2 (Groups 2 and 4) showed greater areas of new bone formation than the other groups (P < 0.05). However, at 6 weeks after implantation, only the group receiving the combination of rh-BMP-2 and ZOL (Group 4) still showed areas of newly formed bone (P < 0.05). *: P < 0.05. Statistical differences between groups were determined using a one-way analysis of variance with Bonferroni’s multiple comparison test. Note: H&E, haematoxylin-eosin; MT, Masson’s trichrome; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate.\nTable 1: Histological assessments of new bone area in the marrow of the rabbit femur. Variables present percentages of new bone areas in tissues as median, minimum, and maximum. P values indicate the statistical differences between the groups. Note: β-TCP, beta-tricalcium phosphate; rh-BMP-2, recombinant human bone morphogenetic protein-2; ZOL, zoledronate.", "The qualitative differences in newly formed bone inside the implanted β-TCP columns between the groups were evaluated by µ-CT, and the results are shown as bar graphs in Fig. 5. At 3 weeks after implantation, the groups with rh-BMP-2 (Groups 2 and 4) showed significantly greater BV/TV and BMD than the groups without rh-BMP-2 (Groups 1 and 3) (p < 0.05, Fig. 5a, c). At 6 weeks after implantation, only the treatment group with both rh-BMP-2 and ZOL (Group 4) showed significantly greater BV/TV and BMD values than the other groups (p < 0.05, Fig. 5b, d). The actual values of BV/TV and BMD at 3 and 6 weeks after implantation are shown in Table 2.\n\nFig. 5: μ-CT evaluation of BV/TV and BMD in retrieved β-TCP implants at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. *: P < 0.05. Statistical differences between groups were determined with one-way ANOVA and the post-hoc Bonferroni test. Note: BV/TV, bone volume/total tissue volume; BMD, bone mineral density.\nTable 2: Quantitative assessments of implanted β-TCP using μ-CT. Median, minimum, and maximum are provided. P values indicate the statistical differences between the groups. Note: β-TCP, beta-tricalcium phosphate; rh-BMP-2, recombinant human bone morphogenetic protein-2; ZOL, zoledronate." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Materials and Methods", "Recombinant human BMP-2", "Zoledronate", "β-TCP columns", "Surgery and implantation of β-TCP columns", "Plane radiographs", "Histological examination", "Micro‐computed tomography", "Ethical considerations", "Statistical analysis", "Results", "Macroscopy of implanted β-TCP columns in femoral bone marrow", "Radiographic evaluations of implanted β-TCP columns in femoral bone marrow", "Promotion and maintenance of bone formation in the bone marrow environment", "Qualitative improvement of formed bone by topical co‐administration of rh-BMP2 and ZOL", "Discussion", "Conclusions" ]
[ "Several clinical applications of recombinant human bone morphogenic proteins (rh-BMPs) have reportedly promoted new bone formation [1, 2]. BMPs act as signal transducers in the Smad signaling pathway to regulate mesenchymal stem cell differentiation during skeletal development, especially bone formation [3, 4]. For example, in orthopaedics surgery, rh-BMP has already been used to improve clinical results such as the novel operative technique of spinal fusion [5]. However, the use of rh-BMPs in certain orthopaedic surgeries performed in the intramedullary environment, e.g., total hip replacements involving large bone defects or intramedullary bone tumours, remains limited because more osteoclast progenitor cells are derived from hematopoietic stem cells in the bone marrow environment and rh-BMPs cannot achieve suitable osteogenesis inside of the bone marrow by promoting the differentiation of the osteoclast precursor cells, not only precursor cells which can be differentiated into osteoblast [6, 7]. In the intramedullary environment, it is difficult to achieve both bone formation and its maintenance.\nTo overcome these problems, we previously reported the effectiveness of the systemic administration of ZOL using rh-BMP-2/β-tricalcium phosphate (β-TCP) composite to promote the osteogenesis of newly formed bone in the bone marrow environment [8]. β-TCP has been reported as a good carrier for drug delivery of both rh-BMP and bisphosphonates to promote osteogenesis [9–12]. β-TCP, a bioactive bone substitute material, has high biocompatibility and good stability [13]. Moreover, ZOL has demonstrated to have a protective effect on bone tissue resorption by inhibiting the activity of osteoclasts at the local site [14, 15]. In the present study, we further investigated if the topical co-treatment of ZOL and the rh-BMP-2/β-TCP composite is useful in the promotion as well as the maintenance of new bone formation in the bone marrow environment. Should the intramedullary bone formation be achieved by only the topical administration of these drugs, this treatment may represent a safety and effective procedure to create bone formation in lesion sites, both from a clinical and morphological perspective.\nIn this study, the primary object was to achieve bone formation in the bone marrow environment and the secondary object was to maintain the formed bone tissue, by utilizing the combined effect of rh-BMP-2 in promoting bone formation and ZOL in maintaining bone tissue. In other words, we hypothesized that rh-BMP-2 could achieve bone formation in the bone marrow environment during the early treatment period and ZOL could maintain the newly formed bone tissue by inhibiting bone resorption for a certain period. The aim of this study was to investigate if the topical co-administration of rh-BMP-2/β-TCP/ZOL composite promoted osteogenesis and maintained the newly formed bone in the bone marrow environment.", "Recombinant human BMP-2 This study used rh-BMP-2 produced in Escherichia coli, provided by Osteopharma, Inc (Osaka, Japan) [16]. Dimerization of the monomeric cytokine was obtained using published procedures [16, 17]. Rh-BMP-2 was reconstituted in sterile 0.01 N hydrochloric acid at 5 mg/mL and stored at 80℃ until use.\nThis study used rh-BMP-2 produced in Escherichia coli, provided by Osteopharma, Inc (Osaka, Japan) [16]. Dimerization of the monomeric cytokine was obtained using published procedures [16, 17]. 
Zoledronate

ZOL was purchased as a 4 mg/5 mL liquid solution (Zometa™; Novartis Pharma K.K., Tokyo, Japan) and stored at room temperature (approximately 25 ℃) until use. ZOL was diluted in phosphate-buffered saline (PBS; Wako, Osaka, Japan) to 5 µg of ZOL per β-TCP column.

β-TCP columns

β-TCP columns (diameter: 6 mm, length: 10 mm, porosity: 75 %) were manufactured and provided in a dry condition by HOYA (Tokyo, Japan). The β-TCP columns were sterilized using dry heat (255 ℃, 3 h) and impregnated with each drug. The drugs were delivered in 75 µL of PBS per β-TCP column as follows: PBS only (Group 1), 30 µg of rh-BMP-2 (Group 2), 5 µg of ZOL (Group 3), or 30 µg of rh-BMP-2 and 5 µg of ZOL (Group 4). The drug lysates were infiltrated into the β-TCP columns in a laminar flow cabinet [8].
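As a rough illustration of the loading arithmetic implied by these stock concentrations, the sketch below computes the stock volumes needed per column; the helper function and the assumption that the drug stock volume counts toward the 75 µL total are ours, not part of the published protocol.

```python
# Illustrative sketch of the per-column loading arithmetic implied above;
# the helper name and the PBS make-up assumption are not from the protocol.
def stock_volume_ul(dose_ug: float, stock_mg_per_ml: float) -> float:
    # mg/mL is numerically equal to µg/µL, so the division gives µL directly.
    return dose_ug / stock_mg_per_ml

bmp2_ul = stock_volume_ul(30, 5.0)   # 30 µg from a 5 mg/mL stock -> 6.0 µL
zol_ul = stock_volume_ul(5, 4 / 5)   # 5 µg from a 0.8 mg/mL stock -> 6.25 µL

# Group 4 column: both drugs, made up to 75 µL total with PBS (assumed).
pbs_ul = 75 - bmp2_ul - zol_ul
print(f"rh-BMP-2: {bmp2_ul} µL, ZOL: {zol_ul} µL, PBS: {pbs_ul} µL")
```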
Surgery and implantation of β-TCP columns

New Zealand white rabbits (n = 56 females, age: 18 to 20 weeks, body weight: 3.0–4.0 kg) were purchased from Japan SLC Co. (Shizuoka, Japan). All animals were acclimatized in cages with free access to food and water for 2 weeks. The β-TCP columns were surgically inserted into the medullary cavity at the distal position of the left femur based on our previously described procedure [8]. Briefly, animals were anesthetized with a subcutaneous injection of ketamine (30 mg/kg body weight) and xylazine (10 mg/kg body weight). After exposure, the distal femur was reamed with a 6.2 mm hand drill to create a hole, a radiograph was taken for confirmation, and a β-TCP column was then inserted into the medullary cavity. During the postoperative period, all animals were maintained in individual cages in a temperature-controlled room (25 ℃) with ad libitum access to food and water and unrestricted movement at our institution's animal care centre. At 3 and 6 weeks after surgery, animals were sacrificed by intravenous injection of 100 mg/kg pentobarbital (Somnopentyl™; Kyoritu Seiyaku, Tokyo, Japan), and the distal femurs containing the β-TCP were harvested. Seven rabbits were sacrificed in each group at each timepoint. Harvested femurs were fixed in 4 % paraformaldehyde phosphate buffer overnight at 4 ℃ and stored in 70 % ethanol solution at 4 ℃ until use. No animals were excluded from the experimental analysis. To reduce confounding factors as much as possible, the order of β-TCP implantation across groups was randomized. During postoperative management, one cage was used per animal, and cage locations in the animal care room were re-randomized at regular intervals to keep the environment uniform. After surgery, wound condition, food intake, and activity were monitored and confirmed to be unremarkable.

Plane radiographs

Plane radiographs of the lateral view of the distal femur were taken under anesthesia during the implantation surgery (0 weeks) and at 3 and 6 weeks after surgery. Radiographs were obtained using a KXO-15ER apparatus (Toshiba Medical, Tochigi, Japan) at 50 kV and 100 mA for 0.08 s, and visualized using an FCR CAPSULA-2V1 system (Fujifilm, Tokyo, Japan).

Histological examination

Prior to histological evaluation, the fixed specimens were decalcified in 0.5 mol/L ethylenediaminetetraacetic acid (EDTA) solution (Wako, Osaka, Japan) for 2 weeks, dehydrated in a graded ethanol series (70 %, 80 %, 90 %, and 100 % ethanol), and embedded in paraffin wax. Mid-sagittal (longitudinal, along the implant) sections were cut into 4 µm slices in each plane. The tissue sections were then stained with haematoxylin-eosin (H&E) and Masson's trichrome (MT). New bone formation within the β-TCP columns was histologically assessed using previously described procedures with minor modifications [18]. Briefly, three high-powered fields (objective lens 20×) were randomly selected from three tissue sections of each β-TCP column sample. Images were captured using a microscope with a built-in digital camera (DP 70; Olympus Corporation, Tokyo, Japan) and analysed using ImageJ™ software (National Institutes of Health, MD, USA). A total of 9 images captured in each group were analysed. The threshold for measuring newly formed bone was set between 150 and 180 on the red channel in the software. New bone area (%) was estimated as the detected area/total area × 100 in each section. These new bone areas in the H&E and MT sections were defined as the primary outcomes of this study.
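For readers reproducing this area measurement outside ImageJ, the following minimal sketch applies the same red-channel window (150–180) to an RGB section image and reports the area fraction; the file name and the library choice are illustrative assumptions, not part of the published workflow.

```python
# Minimal sketch of the new-bone area measurement, assuming an RGB image of
# a stained section; mirrors the red-channel window (150-180) used in ImageJ.
import numpy as np
from PIL import Image

def new_bone_area_percent(path: str, lo: int = 150, hi: int = 180) -> float:
    red = np.asarray(Image.open(path).convert("RGB"))[:, :, 0]
    mask = (red >= lo) & (red <= hi)        # pixels counted as newly formed bone
    return 100.0 * mask.sum() / mask.size   # detected area / total area x 100

# Example call; "section_20x.png" is a placeholder file name.
print(new_bone_area_percent("section_20x.png"))
```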
Micro-computed tomography

The implanted β-TCP columns were evaluated by micro-computed tomography (µ-CT) using an Aloka LaTheta LCT200 (HITACHI, Tokyo, Japan) based on previously published procedures [19, 20]. Briefly, the following conditions were maintained per image: slice width of 30 µm, voxel size of 30 × 120 µm, voltage of 80 kVp, and current of 50 µA. The region of the implanted β-TCP was delineated, and bone quality within this region was quantitatively assessed using LaTheta software (version 2.10, Aloka). Bone volume/total tissue volume (BV/TV) and bone mineral density (BMD) were evaluated according to the manufacturer's instructions. All sections were analysed by µ-CT (n = 7 in each group at 3 and 6 weeks after implantation). These quantitative µ-CT bone assessments were defined as the secondary outcomes of this study.
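The study relied on the vendor's LaTheta software for these measurements; purely as a conceptual sketch, the snippet below shows how BV/TV and a calibrated BMD could be derived from a thresholded voxel stack. The volume, threshold, and attenuation-to-density calibration constants are all assumed for illustration.

```python
# Conceptual sketch of BV/TV and BMD on a micro-CT voxel stack; the study
# itself used LaTheta software. The synthetic volume, threshold, and linear
# attenuation-to-density calibration (a, b) below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
stack = rng.normal(300.0, 150.0, size=(40, 64, 64))  # placeholder volume

BONE_THRESHOLD = 500.0            # assumed cutoff separating bone voxels
bone = stack > BONE_THRESHOLD
bv_tv = bone.sum() / bone.size    # bone volume / total tissue volume

a, b = 0.7, 10.0                  # assumed calibration coefficients
bmd = (a * stack[bone] + b).mean() if bone.any() else 0.0  # mean bone density

print(f"BV/TV = {bv_tv:.3f}, BMD ≈ {bmd:.1f} (arbitrary calibrated units)")
```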
Ethical considerations

This study was approved by the Animal Research Committees of our institution (approval number 13,017). All applicable international, national, and/or institutional guidelines for the care and use of animals were followed. All procedures were performed in accordance with the ethical standards of the institution at which the study was conducted. This paper does not contain any studies with human participants performed by any of the authors.

Statistical analysis

The results are presented as median and range (minimum and maximum). All variables were confirmed to follow a normal distribution using the Kolmogorov-Smirnov test. Differences between groups were analysed using one-way analysis of variance with Bonferroni's multiple comparison test. To determine an adequate sample size, a power analysis was performed for the primary and secondary outcomes. Based on a previous report on new bone area and quantitative µ-CT bone assessment, the expected differences in the primary and secondary outcomes were 10 ± 5.5 % and 6 ± 3.5 %, respectively [8, 18]. On this basis, with power set at 0.80 and the significance level at 0.05, a sample size of five or more was adequate for the primary outcome and six or more for the secondary outcomes. Statistical significance was set at P < 0.05. Statistical analyses were performed using SPSS software, version 22 (IBM, NY, USA).
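The paper does not state which power formula was used; a minimal sketch assuming a two-sample comparison of means reproduces sample sizes close to the reported five and six per group (the exact values depend on the procedure chosen).

```python
# Minimal sketch of the sample-size reasoning, assuming a two-sample t-test
# power calculation; the paper's exact procedure is not stated, so results
# may differ slightly from the reported 5 and 6 animals per group.
from statsmodels.stats.power import TTestIndPower

def n_per_group(diff: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> float:
    d = diff / sd  # Cohen's d from the expected difference and its SD
    return TTestIndPower().solve_power(effect_size=d, alpha=alpha, power=power)

print(n_per_group(10, 5.5))  # primary outcome: new bone area (%)
print(n_per_group(6, 3.5))   # secondary outcomes: BV/TV and BMD
```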
Results

There were no statistically significant differences in body weight among the groups at 0, 3, and 6 weeks after implantation (p = 0.63). The median (range) body weights (kg) for Groups 1, 2, 3, and 4 were 3.2 (2.9–3.3), 3.3 (3.0–3.4), 3.2 (3.0–3.3), and 3.2 (3.0–3.3) at 0 weeks; 3.3 (3.1–3.4), 3.2 (3.0–3.4), 3.2 (3.0–3.4), and 3.1 (3.0–3.3) at 3 weeks; and 3.2 (3.0–3.3), 3.2 (3.0–3.5), 3.2 (3.1–3.3), and 3.0 (2.9–3.4) at 6 weeks, respectively. No animals dropped out of observation during the study period due to death or any other reason, and there were no postoperative complications, such as poor wound healing.

Macroscopy of implanted β-TCP columns in femoral bone marrow

At 3 weeks after implantation, the gross appearance of the implanted β-TCP column had markedly diminished in Group 2 (rh-BMP-2 alone) (Fig. 1c, d). However, in the groups containing ZOL (Groups 3 and 4), the β-TCP column remained recognizable at 6 weeks after implantation (Fig. 1e-h).

Fig. 1 Representative photos of the left distal femurs of rabbits cut in the sagittal plane at 3 and 6 weeks after implantation. In the images, the left side is the distal side of the femur and the upper side is the dorsal side. Arrows point to the edge of the implanted β-TCP. The gross appearance of the implanted β-TCP columns gradually disappeared (a). The β-TCP columns in the groups containing ZOL (f and h) were comparatively more recognizable than in the groups without ZOL (b and d) at 6 weeks after implantation
Radiographic evaluations of implanted β-TCP columns in femoral bone marrow

The X-ray images showed that the radiolucency inside the implanted β-TCP column tended to increase gradually in all groups (Fig. 2). However, in combination with the macroscopy analysis, the radiolucency inside the β-TCP columns was comparatively suppressed in the ZOL-treated groups (Groups 3 and 4) (Fig. 2g-l).

Fig. 2 Representative X-ray images of the left distal femurs of rabbits from each group at 0, 3, and 6 weeks after implantation. In the images, the left side is the distal side of the femur and the upper side is the dorsal side. The radiolucencies in the area of the implanted β-TCP columns gradually increased. The radiolucencies of the β-TCP columns in the groups containing ZOL (i and l) were comparatively suppressed relative to the groups without ZOL (c and f) at 6 weeks after implantation
4: p < 0.001 in H&E and p < 0.001 in MT; and group 3 vs. 4: p < 0.001 in H&E and p = 0.006 in MT. The newly formed bone structure area in the Groups 1, 2, and 3 had almost disappeared at 6 weeks after implantation (Fig. 3q-z, a’, and b’). The actual values of new bone structure area in H&E and MT sections at 3 and 6 weeks after implantations are shown in Table 1.\n\nFig. 3Representative H&E and Masson’s Trichrome stained sections of the left distal femurs of rabbits cut in the sagittal plane in each group at 3 and 6 weeks after implantation. In each image, the proximal section is displayed on the right and the dorsal section is displayed on the upper parts of the figure. The dotted box in the low-powered view (2×) indicates the range of high-powered view. The high-powered views (20×) were captured randomly from inside the implanted β-TCP areas for quantitative evaluation. The uniformly-stained tissue area, pointed by arrows, indicate newly formed trabecular bone structure. At 3 weeks after implantation, stained tissue areas were recognized as new bone area was significantly increased in groups containing rh-BMP-2 (f, h,n, and p). New bone area only remained in groups treated with both rh-BMP-2 and ZOL (d’ and f’) at 6 weeks after implantation. Note: H&E, Hematoxylin-Eosin; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate; β-TCP, β-tricalcium phosphate\nRepresentative H&E and Masson’s Trichrome stained sections of the left distal femurs of rabbits cut in the sagittal plane in each group at 3 and 6 weeks after implantation. In each image, the proximal section is displayed on the right and the dorsal section is displayed on the upper parts of the figure. The dotted box in the low-powered view (2×) indicates the range of high-powered view. The high-powered views (20×) were captured randomly from inside the implanted β-TCP areas for quantitative evaluation. The uniformly-stained tissue area, pointed by arrows, indicate newly formed trabecular bone structure. At 3 weeks after implantation, stained tissue areas were recognized as new bone area was significantly increased in groups containing rh-BMP-2 (f, h,n, and p). New bone area only remained in groups treated with both rh-BMP-2 and ZOL (d’ and f’) at 6 weeks after implantation. Note: H&E, Hematoxylin-Eosin; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate; β-TCP, β-tricalcium phosphate\n\nFig. 4Quantitative evaluation of H&E sections and Masson’s Trichrome sections of the of the left distal femurs of rabbits in each group at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. At 3 weeks after implantation, the groups containing rh-BMP-2 (Group 2 or 4) showed greater areas of new bone formation than the other groups (P < 0.05). However, at 6 weeks after implantation, only the group (Group 4) that involved the combination usage of both rh-BMP-2 and ZOL still showed areas of newly formed bone (P < 0.05). *: P < 0.05. Statistical differences between groups were determined using a one-way analysis of variance with Bonferroni’s multiple comparison test. Note: H&E, Hematoxylin-Eosin; MT, Masson Trichrome; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate\nQuantitative evaluation of H&E sections and Masson’s Trichrome sections of the of the left distal femurs of rabbits in each group at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. 
At 3 weeks after implantation, the groups containing rh-BMP-2 (Group 2 or 4) showed greater areas of new bone formation than the other groups (P < 0.05). However, at 6 weeks after implantation, only the group (Group 4) that involved the combination usage of both rh-BMP-2 and ZOL still showed areas of newly formed bone (P < 0.05). *: P < 0.05. Statistical differences between groups were determined using a one-way analysis of variance with Bonferroni’s multiple comparison test. Note: H&E, Hematoxylin-Eosin; MT, Masson Trichrome; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate\nHistological assessments of new bone area in the marrow of a rabbit femur\nVariables present percentages of new bone areas in tissues as median, minimum, and maximum. P values indicate the statistical differences between the groups. Note: β-TCP Beta-tricalcium phosphate, rh-BMP-2 Recombinant human bone morphogenetic protein-2, ZOL Zoledronate.\nRepresentative H&E and MT stained sections of tissues and their quantitative evaluations are shown in Figs. 3 and 4. At 3 weeks after implantation, the newly formed area of bone structure was significantly larger in the groups with rh-BMP-2 (Group 2 and 4) than in the groups without rh-BMP-2 (Group 1 and 3) (p < 0.001, Fig. 4a, c). Details of the statistical analysis of each parameter are as follows: group 1 vs. 2: p < 0.001 in H&E and p < 0.001 in MT; group 1 vs. 3: p = 0.05 in H&E and p = 0.06 in MT; group 1 vs. 4: p < 0.001 in H&E and p = 0.04 in MT; group 2 vs. 3: p < 0.001 in H&E and p < 0.001 in MT; group 2 vs. 4: p = 1.0 in H&E and p = 0.06 in MT; and group 3 vs. 4: p < 0.001 in H&E and p < 0.001 in MT. At 6 weeks after implantation, the newly formed area of bone structure in the group containing both rh-BMP-2 and ZOL (Group 4) was significantly larger than that in the other groups (p < 0.001, Fig. 4b, d). Details of the statistical analysis of each parameter are as follows: group 1 vs. 2: p = 1.0 in H&E and p = 1.0 in MT; group 1 vs. 3: p = 1.0 in H&E and p = 1.0 in MT; group 1 vs. 4: p < 0.001 in H&E and p = 0.03 in MT; group 2 vs. 3: p = 1.0 in H&E and p = 1.0 in MT; group 2 vs. 4: p < 0.001 in H&E and p < 0.001 in MT; and group 3 vs. 4: p < 0.001 in H&E and p = 0.006 in MT. The newly formed bone structure area in the Groups 1, 2, and 3 had almost disappeared at 6 weeks after implantation (Fig. 3q-z, a’, and b’). The actual values of new bone structure area in H&E and MT sections at 3 and 6 weeks after implantations are shown in Table 1.\n\nFig. 3Representative H&E and Masson’s Trichrome stained sections of the left distal femurs of rabbits cut in the sagittal plane in each group at 3 and 6 weeks after implantation. In each image, the proximal section is displayed on the right and the dorsal section is displayed on the upper parts of the figure. The dotted box in the low-powered view (2×) indicates the range of high-powered view. The high-powered views (20×) were captured randomly from inside the implanted β-TCP areas for quantitative evaluation. The uniformly-stained tissue area, pointed by arrows, indicate newly formed trabecular bone structure. At 3 weeks after implantation, stained tissue areas were recognized as new bone area was significantly increased in groups containing rh-BMP-2 (f, h,n, and p). New bone area only remained in groups treated with both rh-BMP-2 and ZOL (d’ and f’) at 6 weeks after implantation. 
Note: H&E, Hematoxylin-Eosin; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate; β-TCP, β-tricalcium phosphate\nRepresentative H&E and Masson’s Trichrome stained sections of the left distal femurs of rabbits cut in the sagittal plane in each group at 3 and 6 weeks after implantation. In each image, the proximal section is displayed on the right and the dorsal section is displayed on the upper parts of the figure. The dotted box in the low-powered view (2×) indicates the range of high-powered view. The high-powered views (20×) were captured randomly from inside the implanted β-TCP areas for quantitative evaluation. The uniformly-stained tissue area, pointed by arrows, indicate newly formed trabecular bone structure. At 3 weeks after implantation, stained tissue areas were recognized as new bone area was significantly increased in groups containing rh-BMP-2 (f, h,n, and p). New bone area only remained in groups treated with both rh-BMP-2 and ZOL (d’ and f’) at 6 weeks after implantation. Note: H&E, Hematoxylin-Eosin; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate; β-TCP, β-tricalcium phosphate\n\nFig. 4Quantitative evaluation of H&E sections and Masson’s Trichrome sections of the of the left distal femurs of rabbits in each group at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. At 3 weeks after implantation, the groups containing rh-BMP-2 (Group 2 or 4) showed greater areas of new bone formation than the other groups (P < 0.05). However, at 6 weeks after implantation, only the group (Group 4) that involved the combination usage of both rh-BMP-2 and ZOL still showed areas of newly formed bone (P < 0.05). *: P < 0.05. Statistical differences between groups were determined using a one-way analysis of variance with Bonferroni’s multiple comparison test. Note: H&E, Hematoxylin-Eosin; MT, Masson Trichrome; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate\nQuantitative evaluation of H&E sections and Masson’s Trichrome sections of the of the left distal femurs of rabbits in each group at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. At 3 weeks after implantation, the groups containing rh-BMP-2 (Group 2 or 4) showed greater areas of new bone formation than the other groups (P < 0.05). However, at 6 weeks after implantation, only the group (Group 4) that involved the combination usage of both rh-BMP-2 and ZOL still showed areas of newly formed bone (P < 0.05). *: P < 0.05. Statistical differences between groups were determined using a one-way analysis of variance with Bonferroni’s multiple comparison test. Note: H&E, Hematoxylin-Eosin; MT, Masson Trichrome; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate\nHistological assessments of new bone area in the marrow of a rabbit femur\nVariables present percentages of new bone areas in tissues as median, minimum, and maximum. P values indicate the statistical differences between the groups. Note: β-TCP Beta-tricalcium phosphate, rh-BMP-2 Recombinant human bone morphogenetic protein-2, ZOL Zoledronate.\nQualitative improvement of formed bone by topical co‐administration of rh-BMP2 and ZOL The qualitative differences of newly formed bone inside the implanted β-TCP columns between the groups were evaluated by µ-CT, and the results are shown in bar graphs in Fig. 5. 
Qualitative improvement of formed bone by topical co-administration of rh-BMP-2 and ZOL

The qualitative differences in newly formed bone inside the implanted β-TCP columns between the groups were evaluated by µ-CT, and the results are shown as bar graphs in Fig. 5. At 3 weeks after implantation, the groups with rh-BMP-2 (Groups 2 and 4) showed significantly greater BV/TV and BMD than the groups without rh-BMP-2 (Groups 1 and 3) (p < 0.05, Fig. 5a, c). At 6 weeks after implantation, only the group treated with both rh-BMP-2 and ZOL (Group 4) showed significantly greater BV/TV and BMD values than the other groups (p < 0.05, Fig. 5b, d). The actual values of BV/TV and BMD at 3 and 6 weeks after implantation are shown in Table 2.

Fig. 5 µ-CT evaluation of BV/TV and BMD in retrieved β-TCP implants at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. *: P < 0.05. Statistical differences between groups were determined with one-way ANOVA and the post-hoc Bonferroni test. Note: BV/TV, bone volume/total tissue volume; BMD, bone mineral density

Table 2 Quantitative assessments of implanted β-TCP using µ-CT. Median, minimum, and maximum are provided. P values indicate the statistical differences between the groups. Note: β-TCP, beta-tricalcium phosphate; rh-BMP-2, recombinant human bone morphogenetic protein-2; ZOL, zoledronate.
The columns and bars represent the means and standard deviations (n = 7), respectively. At 3 weeks after implantation, the groups containing rh-BMP-2 (Group 2 or 4) showed greater areas of new bone formation than the other groups (P < 0.05). However, at 6 weeks after implantation, only the group (Group 4) that involved the combination usage of both rh-BMP-2 and ZOL still showed areas of newly formed bone (P < 0.05). *: P < 0.05. Statistical differences between groups were determined using a one-way analysis of variance with Bonferroni’s multiple comparison test. Note: H&E, Hematoxylin-Eosin; MT, Masson Trichrome; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate\nHistological assessments of new bone area in the marrow of a rabbit femur\nVariables present percentages of new bone areas in tissues as median, minimum, and maximum. P values indicate the statistical differences between the groups. Note: β-TCP Beta-tricalcium phosphate, rh-BMP-2 Recombinant human bone morphogenetic protein-2, ZOL Zoledronate.", "The qualitative differences of newly formed bone inside the implanted β-TCP columns between the groups were evaluated by µ-CT, and the results are shown in bar graphs in Fig. 5. At 3 weeks after implantation, groups with rh-BMP-2 (Group 2 and 4) showed significantly greater BV/TV and BMD than groups without rh-BMP-2 (Group 1 and 3) (p < 0.05, Fig. 5a, c). At 6 weeks after implantation, only the treatment group with both rh-BMP-2 and ZOL (Group 4) showed significantly greater BV/TV and BMD values than the other groups (p < 0.05, Fig. 5b, d). The actual values of BV/TV and BMD at 3 and 6 weeks after implantation are shown in Table 2.\n\nFig. 5μ-CT evaluation of BV/TV and BMD in retrieved β-TCP implants at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. *: P < 0.05. Statistical differences between groups were determined with the one-way ANOVA and post-hoc Bonferroni test. Note: BV/TV, Bone volume/ Total tissue volume; BMD, Bone mineral density\nμ-CT evaluation of BV/TV and BMD in retrieved β-TCP implants at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. *: P < 0.05. Statistical differences between groups were determined with the one-way ANOVA and post-hoc Bonferroni test. Note: BV/TV, Bone volume/ Total tissue volume; BMD, Bone mineral density\nQuantitatively assessments of implanted β-TCP using μ-CT\nMedian, minimum, and maximum are provided. P values indicate the statistical differences between the groups. Note: β-TCP Beta-tricalcium phosphate, rh-BMP-2 Recombinant human bone morphogenetic protein-2, ZOL Zoledronate.", "BMPs can induce osteogenesis by stimulating osteoblast differentiation [2], however, BMPs can also promote the catabolic activity of osteoclast [6, 21], which complicates the formation of bone in the bone marrow area. In this study, the radiological and histological parameters indicated that rh-BMP-2 promoted significantly osteogenesis in the bone marrow environment at 3 weeks after implantation. However, even though bone formation was achieved once at 3 weeks after implantation, the bone tissues gradually resorbed until 6 weeks after implantation due to the osteoclasts that were concurrently induced by rh-BMP-2 in the bone marrow environment [6]. We previously investigated the effect of ZOL on new bone formation induced by rh-BMP-2 in bone extramedullary and intramedullary environments. 
Systemic ZOL treatment combined with the rh-BMP-2/β-TCP composite was shown to promote and maintain new bone formation in the bone marrow environment [8]. Local co-administration of ZOL via the rh-BMP-2/β-TCP composite has also been shown to promote and maintain new bone formation in the extramedullary environment for a long period of time [12]. In the present study, we aimed to clarify whether the topical co-administration of ZOL was also effective in promoting and maintaining new bone formation induced by rh-BMP-2 in the bone marrow environment.

The ultimate goal of tissue regeneration engineering in the orthopaedic field is the accurate and effective formation of tissue at the necessary site. Therefore, we investigated whether the topical co-administration of ZOL and rh-BMP-2 would represent a useful procedure to facilitate and maintain bone formation in the bone marrow environment. As with systemic ZOL treatment, topical ZOL co-administered with rh-BMP-2 also promoted and maintained new bone formation in the bone marrow environment. Topical administration of ZOL is considered to reduce the associated side effects and to limit the effect to a target site [22]. ZOL is known to cause side effects such as hypocalcemia, renal failure, or osteonecrosis of the jaw. Therefore, topical administration of ZOL can be effective in patients in whom systemic administration is inappropriate due to side effects [23]. A systematic review showed that β-TCP is one of the most commonly used biocompatible materials [13]. β-TCP has high biocompatibility and is an ideal material for clinical application [24, 25]. It is effective in bone conduction on its own, but it is also often used as a carrier of drugs to increase their effectiveness [13]. β-TCP used as a carrier for the local administration of both rh-BMP-2 and ZOL has been reported to be useful for new ectopic bone formation [12], and our findings further demonstrated that β-TCP is a useful carrier of rh-BMP-2 and ZOL for effective bone induction in the bone marrow environment.

Bone formation in the bone marrow environment by local drug administration is clinically important because it leads to the development of biomaterials for surgical implants in the medullary cavity, such as intramedullary nails and the femoral stem of total hip replacements. Moreover, these biomaterials can offer novel therapeutic substitutes for the regeneration of bone cavities after the surgical removal of bone tumors, osteonecrosis lesions, or vertebral fractures. Locally administered ZOL has been shown to directly suppress the bone-resorbing action of osteoclasts in the local area [26]. The local effects of ZOL may also allow complications related to systemic bisphosphonate therapy, such as renal disorders or osteonecrosis of the jaw, to be avoided [27]. Accordingly, β-TCP material treated with a combination of rh-BMP-2 and ZOL effectively promoted and maintained bone formation in the bone marrow environment.

This study has several limitations; for example, only a single animal model and a single dose of each therapeutic agent were used. Future studies should assess the detailed molecular mechanisms underlying the therapeutic effect of this combination therapy.

In summary, the combination of locally administered rh-BMP-2 and ZOL via β-TCP column materials promoted new bone formation in the bone marrow and enabled the maintenance of the newly formed bone for 6 weeks after implantation.
Our findings may contribute to the development of the orthopaedic field, especially clinical approaches for cases that require bone regeneration in the bone marrow environment.
Keywords: Bone morphogenetic proteins; β-tricalcium phosphate; Rabbit; micro-computed tomography; Histology
Background: Several clinical applications of recombinant human bone morphogenetic proteins (rh-BMPs) have reportedly promoted new bone formation [1, 2]. BMPs act as signal transducers in the Smad signaling pathway to regulate mesenchymal stem cell differentiation during skeletal development, especially bone formation [3, 4]. For example, in orthopaedic surgery, rh-BMP has already been used to improve clinical results, such as in the novel operative technique of spinal fusion [5]. However, the use of rh-BMPs in certain orthopaedic surgeries performed in the intramedullary environment, e.g., total hip replacements involving large bone defects or intramedullary bone tumours, remains limited: the bone marrow contains abundant osteoclast progenitor cells derived from hematopoietic stem cells, and rh-BMPs promote the differentiation not only of precursor cells that can differentiate into osteoblasts but also of these osteoclast precursor cells, so suitable osteogenesis cannot be achieved inside the bone marrow [6, 7]. In the intramedullary environment, it is difficult to achieve both bone formation and its maintenance. To overcome these problems, we previously reported the effectiveness of the systemic administration of zoledronate (ZOL) with an rh-BMP-2/β-tricalcium phosphate (β-TCP) composite to promote the osteogenesis of newly formed bone in the bone marrow environment [8]. β-TCP has been reported to be a good carrier for the delivery of both rh-BMP and bisphosphonates to promote osteogenesis [9–12]. β-TCP, a bioactive bone substitute material, has high biocompatibility and good stability [13]. Moreover, ZOL has been demonstrated to have a protective effect against bone tissue resorption by inhibiting the activity of osteoclasts at the local site [14, 15]. In the present study, we further investigated whether topical co-treatment with ZOL and the rh-BMP-2/β-TCP composite is useful for promoting as well as maintaining new bone formation in the bone marrow environment. Should intramedullary bone formation be achieved by topical administration of these drugs alone, this treatment may represent a safe and effective procedure to create bone formation at lesion sites, from both a clinical and a morphological perspective. In this study, the primary objective was to achieve bone formation in the bone marrow environment and the secondary objective was to maintain the formed bone tissue, by utilizing the combined effect of rh-BMP-2 in promoting bone formation and ZOL in maintaining bone tissue. In other words, we hypothesized that rh-BMP-2 could achieve bone formation in the bone marrow environment during the early treatment period and that ZOL could maintain the newly formed bone tissue by inhibiting bone resorption for a certain period. The aim of this study was to investigate whether the topical co-administration of an rh-BMP-2/β-TCP/ZOL composite promoted osteogenesis and maintained the newly formed bone in the bone marrow environment.

Materials and Methods:

Recombinant human BMP-2: This study used rh-BMP-2 produced in Escherichia coli, provided by Osteopharma, Inc. (Osaka, Japan) [16]. Dimerization of the monomeric cytokine was obtained using published procedures [16, 17]. Rh-BMP-2 was reconstituted in sterile 0.01 N hydrochloric acid at 5 mg/mL and stored at −80℃ until use.

Zoledronate: ZOL was purchased as a 4 mg/5 mL liquid solution (Zometa™; Novartis Pharma K.K., Tokyo, Japan) and stored at room temperature (approximately 25℃) until use. ZOL was diluted in phosphate-buffered saline (PBS; Wako, Osaka, Japan) to 5 µg ZOL per β-TCP column.

β-TCP columns: β-TCP columns (diameter: 6 mm, length: 10 mm, porosity: 75%) were manufactured and provided by HOYA (Tokyo, Japan) in a dry condition. The β-TCP columns were sterilized using dry heat (255℃, 3 h) and impregnated with each drug. The concentration of each drug was adjusted using 75 µL PBS per β-TCP column as follows: PBS only (Group 1), 30 µg of rh-BMP-2 (Group 2), 5 µg of ZOL (Group 3), or 30 µg of rh-BMP-2 and 5 µg of ZOL (Group 4). The drug lysates were infiltrated into the β-TCP columns in a laminar flow cabinet [8].
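The dose preparation above implies a short dilution calculation. The following sketch is a back-of-envelope check only; the variable names, and the assumption that the Zometa stock is pipetted directly into PBS, are ours rather than part of the study protocol.

```python
# Sanity check of the ZOL dose preparation described above.
# Quantities are from the text; the pipetting scheme itself is assumed.

stock_mg, stock_ml = 4.0, 5.0                            # Zometa stock: 4 mg in 5 mL
stock_ug_per_ul = stock_mg * 1000 / (stock_ml * 1000)    # 0.8 ug/uL

target_ug = 5.0     # 5 ug ZOL per beta-TCP column
total_ul = 75.0     # delivered in 75 uL PBS per column

stock_ul = target_ug / stock_ug_per_ul   # 6.25 uL of stock needed
pbs_ul = total_ul - stock_ul             # 68.75 uL PBS to make up the volume

print(f"{stock_ul:.2f} uL stock + {pbs_ul:.2f} uL PBS "
      f"-> {target_ug / total_ul:.3f} ug/uL per column")
```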
Surgery and implantation of β-TCP columns: New Zealand white rabbits (n = 56 females; age: 18 to 20 weeks; body weight: 3.0–4.0 kg) were purchased from Japan SLC Co. (Shizuoka, Japan). All animals were acclimatized in cages with free access to food and water for 2 weeks. The β-TCP columns were surgically inserted into the medullary cavity at the distal position of the left femur based on our previously described procedure [8]. Briefly, animals were anesthetized with a subcutaneous injection of ketamine (30 mg/kg body weight) and xylazine (10 mg/kg body weight). After exposure, the distal femur was reamed with a 6.2 mm hand drill to create a hole, a radiograph was taken for confirmation, and a β-TCP column was then inserted into the medullary cavity. During the postoperative period, all animals were maintained in cages (one rabbit per cage) in a temperature-controlled room (25℃) with ad libitum access to food and water and unrestricted movement at the animal care centre of our institution. At 3 and 6 weeks after surgery, animals were sacrificed by intravenous injection of 100 mg/kg pentobarbital (Somnopentyl™; Kyoritu Seiyaku, Tokyo, Japan), and the distal femurs containing the β-TCP were harvested. Seven rabbits were sacrificed in each group at each timepoint. Harvested femurs were fixed in 4% paraformaldehyde phosphate buffer overnight at 4℃ and stored in 70% ethanol solution at 4℃ until use. No animals were excluded from the experimental analysis. To reduce confounding factors as much as possible, the order of implanting β-TCP in each group was selected randomly. In the postoperative management, one cage was used for one animal, and cage locations in the animal care room were randomly reassigned at regular intervals to keep the environment uniform. After surgery, the surgical wound condition, food intake, and activity were monitored, and no abnormalities were observed.

Plane radiographs: Plane radiographs of the lateral views of the distal femurs were taken under anesthesia during the implantation surgery (0 weeks) and at 3 and 6 weeks after surgery. Radiographs were obtained using a KXO-15ER apparatus (Toshiba Medical, Tochigi, Japan) at 50 kV and 100 mA for 0.08 s, and visualized using an FCR CAPSULA-2V1 system (Fujifilm, Tokyo, Japan).
Histological examination: Prior to histological evaluation, the fixed specimens were decalcified in 0.5 mol/L ethylenediaminetetraacetic acid (EDTA) solution (Wako, Osaka, Japan) for 2 weeks, dehydrated in a graded ethanol series (70%, 80%, 90%, and 100% ethanol), and embedded in paraffin wax. Mid-sagittal (longitudinal, along the implant) sections were cut into 4 µm slices in each plane. After preparation, the tissue sections were stained with haematoxylin/eosin (H&E) and Masson’s trichrome (MT). New bone formation within the β-TCP columns was histologically assessed using previously described procedures with minor modifications [18]. Briefly, three high-powered fields (objective lens 20×) were randomly selected from three tissue sections from each β-TCP column sample. The images were captured using a microscope with a built-in digital camera (DP 70; Olympus Corporation, Tokyo, Japan). Captured images were analysed using ImageJ™ software (National Institutes of Health, MD, USA). A total of 9 images captured in each group were analysed. The threshold for the measurement of newly formed bone was set between 150 and 180 on the red channel in the software. New bone area (%) was estimated as the detected area/total area × 100 in each section. These new bone areas in the H&E and MT sections were defined as the primary outcomes in this study.
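The area measurement above was performed in ImageJ; as an illustration of the same rule (red channel thresholded between 150 and 180, detected area expressed as a percentage of the field), a minimal Python sketch follows. The use of Pillow/NumPy and the file names are our assumptions, not the authors' macro.

```python
# Hedged re-implementation of the new-bone-area rule described above:
# count pixels whose red-channel value lies in [150, 180] and report
# them as a percentage of the whole high-powered field.
import numpy as np
from PIL import Image

def new_bone_area_percent(path, lo=150, hi=180):
    rgb = np.asarray(Image.open(path).convert("RGB"))
    red = rgb[:, :, 0]                      # red channel only
    mask = (red >= lo) & (red <= hi)        # pixels scored as new bone
    return 100.0 * mask.sum() / mask.size   # detected area / total area x 100

# Typical use: average the three random 20x fields per section, e.g.
# fields = ["field_1.png", "field_2.png", "field_3.png"]  # hypothetical names
# print(np.mean([new_bone_area_percent(f) for f in fields]))
```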
Micro-computed tomography: The implanted β-TCP columns were evaluated by micro-computed tomography (µ-CT) using an Aloka LaTheta LCT200 (HITACHI, Tokyo, Japan) based on a previously published procedure [19, 20]. Briefly, the following conditions were maintained per image: slice width of 30 µm, voxel size of 30 × 120 µm, voltage of 80 kVp, and current of 50 µA. The region of the implanted β-TCP was delineated, and the bone quality within this region was quantitatively assessed using LaTheta software (version 2.10, Aloka). Bone volume/total tissue volume (BV/TV) and bone mineral density (BMD) were evaluated according to the manufacturer’s instructions. All sections were analysed by µ-CT (n = 7 in each group at 3 and 6 weeks after implantation). These quantitative bone assessments by µ-CT were defined as the secondary outcomes in this study.
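BV/TV and BMD were computed with the vendor's LaTheta software; the definitions themselves are generic. The sketch below only illustrates what the two quantities measure over a delineated implant region; the bone threshold and the linear attenuation-to-density calibration are assumptions, not LaTheta's internal algorithm.

```python
# Illustrative definitions of BV/TV and BMD over an implant region of
# interest (ROI) in a micro-CT volume. Not the LaTheta implementation.
import numpy as np

def bv_tv_and_bmd(ct, roi, bone_thresh, slope, intercept):
    """ct: 3-D array of attenuation values; roi: boolean mask of the
    implanted beta-TCP region; slope/intercept: assumed linear phantom
    calibration mapping attenuation to mineral density."""
    tissue = ct[roi]
    bone = tissue >= bone_thresh                  # voxels classified as bone
    bv_tv = 100.0 * bone.sum() / tissue.size      # bone volume / total volume (%)
    bmd = float(np.mean(slope * tissue + intercept))  # mean calibrated density
    return bv_tv, bmd
```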
Ethical considerations: This study was approved by the Animal Research Committees of our institution (approval number 13,017). All applicable international, national, and/or institutional guidelines for the care and use of animals were followed. All procedures performed were in accordance with the ethical standards of the institution at which the study was conducted. This paper does not contain any studies with human participants performed by any of the authors.

Statistical analysis: The results are presented as median and range (minimum and maximum). All variables were confirmed to be normally distributed using the Kolmogorov-Smirnov test. Differences between groups were analysed using a one-way analysis of variance with Bonferroni’s multiple comparison test. To determine an adequate sample size, a power analysis was performed for the primary and secondary outcomes. According to a previous report on new bone area and the quantitative bone assessment of µ-CT, the expected differences in the primary and secondary outcomes were 10 ± 5.5% and 6 ± 3.5%, respectively [8, 18]. Based on these findings, to provide appropriate statistical power (0.80) with the significance level set at 0.05, a sample size of five or more was adequate for the primary outcome and a sample size of six or more was adequate for the secondary outcomes. Statistical significance was set at P < 0.05. Statistical analyses were performed using SPSS software, version 22 (IBM, NY, USA).
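The analysis above was run in SPSS; as a sketch of the same pipeline (per-group normality check, one-way ANOVA, Bonferroni-corrected pairwise tests, and the a-priori sample-size estimate), the following uses scipy/statsmodels as stand-ins. The group data are synthetic placeholders, not the study's measurements.

```python
# Sketch of the statistical pipeline described above; SPSS was used in
# the study, scipy/statsmodels are our stand-ins. Data are synthetic.
from itertools import combinations
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
groups = {g: rng.normal(mu, 5.5, size=7)   # placeholder new-bone areas (%)
          for g, mu in zip(["G1", "G2", "G3", "G4"], [2, 12, 4, 14])}

for g, x in groups.items():                # normality check per group
    z = (x - x.mean()) / x.std(ddof=1)
    print(g, "KS p =", round(stats.kstest(z, "norm").pvalue, 3))

f, p = stats.f_oneway(*groups.values())    # omnibus one-way ANOVA
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))      # Bonferroni: p x 6, capped at 1.0
for a, b in pairs:
    p_raw = stats.ttest_ind(groups[a], groups[b]).pvalue
    print(a, "vs", b, "p_bonf =", round(min(1.0, p_raw * len(pairs)), 4))

# A-priori sample size: expected difference 10 with SD 5.5 -> d ~ 1.8
n = TTestIndPower().solve_power(effect_size=10 / 5.5, alpha=0.05, power=0.80)
print(f"required n per group ~ {n:.1f}")   # ~6, consistent with 'five or more'
```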
Results: There were no statistically significant differences in body weight among the groups at 0, 3, and 6 weeks after implantation (p = 0.63). The median and range of body weight (kg) at each time point for Groups 1, 2, 3, and 4 were as follows: 3.2 (2.9 to 3.3), 3.3 (3.0 to 3.4), 3.2 (3.0 to 3.3), and 3.2 (3.0 to 3.3) at 0 weeks; 3.3 (3.1 to 3.4), 3.2 (3.0 to 3.4), 3.2 (3.0 to 3.4), and 3.1 (3.0 to 3.3) at 3 weeks; and 3.2 (3.0 to 3.3), 3.2 (3.0 to 3.5), 3.2 (3.1 to 3.3), and 3.0 (2.9 to 3.4) at 6 weeks, respectively. No animals dropped out of observation during the study period due to death or any other reason. Moreover, there were no complications, such as poor wound healing, after surgery.

Macroscopy of implanted β-TCP columns in femoral bone marrow: At 3 weeks after implantation, the gross appearance of the implanted β-TCP column had largely disappeared in Group 2 (rh-BMP-2 alone) (Fig. 1c, d). However, in the groups containing ZOL (Groups 3 and 4), the β-TCP column remained recognizable at 6 weeks after implantation (Fig. 1e-h).

Fig. 1 Representative photos of the left distal femurs of rabbits cut in the sagittal plane at 3 and 6 weeks after implantation. In the images, the left side is the distal side of the femur and the upper side is the dorsal side. Arrows point to a section of the edge of the implanted β-TCP. The gross appearances of the implanted β-TCP columns gradually disappeared (a). The gross appearances of the β-TCP columns in the groups containing ZOL (f and h) were comparatively more recognizable than in the groups without ZOL (b and d) at 6 weeks after implantation.

Radiographic evaluations of implanted β-TCP columns in femoral bone marrow: The X-ray images showed that the radiolucency inside the implanted β-TCP column tended to increase gradually in all groups (Fig. 2). However, taken together with the macroscopy analysis, the radiolucency inside the β-TCP columns was comparatively suppressed in the ZOL-treated groups (Groups 3 and 4) (Fig. 2g-l).

Fig. 2 Representative X-ray images of the left distal femurs of rabbits from each group at 0, 3, and 6 weeks after implantation. In the images, the left side is the distal side of the femur and the upper side is the dorsal side. The radiolucencies in the area of the implanted β-TCP columns gradually increased. The β-TCP columns in the groups containing ZOL (i and l) remained comparatively more recognizable than in the groups without ZOL (c and f) at 6 weeks after implantation.

Promotion and maintenance of bone formation in the bone marrow environment: Representative H&E and MT stained sections of tissues and their quantitative evaluations are shown in Figs. 3 and 4. At 3 weeks after implantation, the newly formed area of bone structure was significantly larger in the groups with rh-BMP-2 (Groups 2 and 4) than in the groups without rh-BMP-2 (Groups 1 and 3) (p < 0.001, Fig. 4a, c). Details of the statistical analysis of each parameter are as follows: group 1 vs. 2: p < 0.001 in H&E and p < 0.001 in MT; group 1 vs. 3: p = 0.05 in H&E and p = 0.06 in MT; group 1 vs. 4: p < 0.001 in H&E and p = 0.04 in MT; group 2 vs. 3: p < 0.001 in H&E and p < 0.001 in MT; group 2 vs. 4: p = 1.0 in H&E and p = 0.06 in MT; and group 3 vs. 4: p < 0.001 in H&E and p < 0.001 in MT. At 6 weeks after implantation, the newly formed area of bone structure in the group containing both rh-BMP-2 and ZOL (Group 4) was significantly larger than that in the other groups (p < 0.001, Fig. 4b, d). Details of the statistical analysis of each parameter are as follows: group 1 vs. 2: p = 1.0 in H&E and p = 1.0 in MT; group 1 vs. 3: p = 1.0 in H&E and p = 1.0 in MT; group 1 vs. 4: p < 0.001 in H&E and p = 0.03 in MT; group 2 vs. 3: p = 1.0 in H&E and p = 1.0 in MT; group 2 vs. 4: p < 0.001 in H&E and p < 0.001 in MT; and group 3 vs. 4: p < 0.001 in H&E and p = 0.006 in MT. The newly formed bone structure area in Groups 1, 2, and 3 had almost disappeared at 6 weeks after implantation (Fig. 3q-z, a’, and b’). The actual values of new bone structure area in the H&E and MT sections at 3 and 6 weeks after implantation are shown in Table 1.

Fig. 3 Representative H&E and Masson’s Trichrome stained sections of the left distal femurs of rabbits cut in the sagittal plane in each group at 3 and 6 weeks after implantation. In each image, the proximal side is displayed on the right and the dorsal side on the upper part of the figure. The dotted box in the low-powered view (2×) indicates the range of the high-powered view. The high-powered views (20×) were captured randomly from inside the implanted β-TCP areas for quantitative evaluation. The uniformly stained tissue areas, indicated by arrows, represent newly formed trabecular bone structure. At 3 weeks after implantation, the stained tissue area recognized as new bone was significantly increased in the groups containing rh-BMP-2 (f, h, n, and p). New bone area remained only in the groups treated with both rh-BMP-2 and ZOL (d’ and f’) at 6 weeks after implantation. Note: H&E, Hematoxylin-Eosin; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate; β-TCP, β-tricalcium phosphate.

Fig. 4 Quantitative evaluation of H&E sections and Masson’s Trichrome sections of the left distal femurs of rabbits in each group at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. At 3 weeks after implantation, the groups containing rh-BMP-2 (Groups 2 and 4) showed greater areas of new bone formation than the other groups (P < 0.05). However, at 6 weeks after implantation, only the group treated with the combination of rh-BMP-2 and ZOL (Group 4) still showed areas of newly formed bone (P < 0.05). *: P < 0.05. Statistical differences between groups were determined using a one-way analysis of variance with Bonferroni’s multiple comparison test. Note: H&E, Hematoxylin-Eosin; MT, Masson Trichrome; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate.

Table 1 Histological assessments of new bone area in the marrow of the rabbit femur. Variables present percentages of new bone areas in tissues as median, minimum, and maximum. P values indicate the statistical differences between the groups. Note: β-TCP, beta-tricalcium phosphate; rh-BMP-2, recombinant human bone morphogenetic protein-2; ZOL, zoledronate.

Qualitative improvement of formed bone by topical co-administration of rh-BMP-2 and ZOL: The qualitative differences in newly formed bone inside the implanted β-TCP columns between the groups were evaluated by µ-CT, and the results are shown as bar graphs in Fig. 5. At 3 weeks after implantation, the groups with rh-BMP-2 (Groups 2 and 4) showed significantly greater BV/TV and BMD than the groups without rh-BMP-2 (Groups 1 and 3) (p < 0.05, Fig. 5a, c). At 6 weeks after implantation, only the group treated with both rh-BMP-2 and ZOL (Group 4) showed significantly greater BV/TV and BMD values than the other groups (p < 0.05, Fig. 5b, d). The actual values of BV/TV and BMD at 3 and 6 weeks after implantation are shown in Table 2.

Fig. 5 μ-CT evaluation of BV/TV and BMD in retrieved β-TCP implants at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. *: P < 0.05. Statistical differences between groups were determined with one-way ANOVA and the post-hoc Bonferroni test. Note: BV/TV, bone volume/total tissue volume; BMD, bone mineral density.

Table 2 Quantitative assessment of implanted β-TCP using μ-CT. Median, minimum, and maximum are provided. P values indicate the statistical differences between the groups. Note: β-TCP, beta-tricalcium phosphate; rh-BMP-2, recombinant human bone morphogenetic protein-2; ZOL, zoledronate.
At 3 weeks after implantation, the newly formed area of bone structure was significantly larger in the groups with rh-BMP-2 (Group 2 and 4) than in the groups without rh-BMP-2 (Group 1 and 3) (p < 0.001, Fig. 4a, c). Details of the statistical analysis of each parameter are as follows: group 1 vs. 2: p < 0.001 in H&E and p < 0.001 in MT; group 1 vs. 3: p = 0.05 in H&E and p = 0.06 in MT; group 1 vs. 4: p < 0.001 in H&E and p = 0.04 in MT; group 2 vs. 3: p < 0.001 in H&E and p < 0.001 in MT; group 2 vs. 4: p = 1.0 in H&E and p = 0.06 in MT; and group 3 vs. 4: p < 0.001 in H&E and p < 0.001 in MT. At 6 weeks after implantation, the newly formed area of bone structure in the group containing both rh-BMP-2 and ZOL (Group 4) was significantly larger than that in the other groups (p < 0.001, Fig. 4b, d). Details of the statistical analysis of each parameter are as follows: group 1 vs. 2: p = 1.0 in H&E and p = 1.0 in MT; group 1 vs. 3: p = 1.0 in H&E and p = 1.0 in MT; group 1 vs. 4: p < 0.001 in H&E and p = 0.03 in MT; group 2 vs. 3: p = 1.0 in H&E and p = 1.0 in MT; group 2 vs. 4: p < 0.001 in H&E and p < 0.001 in MT; and group 3 vs. 4: p < 0.001 in H&E and p = 0.006 in MT. The newly formed bone structure area in the Groups 1, 2, and 3 had almost disappeared at 6 weeks after implantation (Fig. 3q-z, a’, and b’). The actual values of new bone structure area in H&E and MT sections at 3 and 6 weeks after implantations are shown in Table 1. Fig. 3Representative H&E and Masson’s Trichrome stained sections of the left distal femurs of rabbits cut in the sagittal plane in each group at 3 and 6 weeks after implantation. In each image, the proximal section is displayed on the right and the dorsal section is displayed on the upper parts of the figure. The dotted box in the low-powered view (2×) indicates the range of high-powered view. The high-powered views (20×) were captured randomly from inside the implanted β-TCP areas for quantitative evaluation. The uniformly-stained tissue area, pointed by arrows, indicate newly formed trabecular bone structure. At 3 weeks after implantation, stained tissue areas were recognized as new bone area was significantly increased in groups containing rh-BMP-2 (f, h,n, and p). New bone area only remained in groups treated with both rh-BMP-2 and ZOL (d’ and f’) at 6 weeks after implantation. Note: H&E, Hematoxylin-Eosin; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate; β-TCP, β-tricalcium phosphate Representative H&E and Masson’s Trichrome stained sections of the left distal femurs of rabbits cut in the sagittal plane in each group at 3 and 6 weeks after implantation. In each image, the proximal section is displayed on the right and the dorsal section is displayed on the upper parts of the figure. The dotted box in the low-powered view (2×) indicates the range of high-powered view. The high-powered views (20×) were captured randomly from inside the implanted β-TCP areas for quantitative evaluation. The uniformly-stained tissue area, pointed by arrows, indicate newly formed trabecular bone structure. At 3 weeks after implantation, stained tissue areas were recognized as new bone area was significantly increased in groups containing rh-BMP-2 (f, h,n, and p). New bone area only remained in groups treated with both rh-BMP-2 and ZOL (d’ and f’) at 6 weeks after implantation. Note: H&E, Hematoxylin-Eosin; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate; β-TCP, β-tricalcium phosphate Fig. 
Fig. 4 Quantitative evaluation of H&E sections and Masson’s Trichrome sections of the left distal femurs of rabbits in each group at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. At 3 weeks after implantation, the groups containing rh-BMP-2 (Groups 2 and 4) showed greater areas of new bone formation than the other groups (P < 0.05). However, at 6 weeks after implantation, only the group treated with the combination of rh-BMP-2 and ZOL (Group 4) still showed areas of newly formed bone (P < 0.05). *: P < 0.05. Statistical differences between groups were determined using a one-way analysis of variance with Bonferroni’s multiple comparison test. Note: H&E, Hematoxylin-Eosin; MT, Masson Trichrome; rh-BMP-2, recombinant human bone morphogenetic protein 2; ZOL, zoledronate. Histological assessments of new bone area in the marrow of the rabbit femur (Table 1): variables present percentages of new bone areas in tissues as median, minimum, and maximum. P values indicate the statistical differences between the groups. Note: β-TCP, beta-tricalcium phosphate; rh-BMP-2, recombinant human bone morphogenetic protein-2; ZOL, zoledronate. Qualitative improvement of formed bone by topical co-administration of rh-BMP-2 and ZOL: The qualitative differences in newly formed bone inside the implanted β-TCP columns between the groups were evaluated by µ-CT, and the results are shown in bar graphs in Fig. 5. At 3 weeks after implantation, the groups with rh-BMP-2 (Groups 2 and 4) showed significantly greater BV/TV and BMD than the groups without rh-BMP-2 (Groups 1 and 3) (p < 0.05, Fig. 5a, c). At 6 weeks after implantation, only the group treated with both rh-BMP-2 and ZOL (Group 4) showed significantly greater BV/TV and BMD values than the other groups (p < 0.05, Fig. 5b, d). The actual values of BV/TV and BMD at 3 and 6 weeks after implantation are shown in Table 2. Fig. 5 μ-CT evaluation of BV/TV and BMD in retrieved β-TCP implants at 3 and 6 weeks after implantation. The columns and bars represent the means and standard deviations (n = 7), respectively. *: P < 0.05. Statistical differences between groups were determined with one-way ANOVA and the post-hoc Bonferroni test. Note: BV/TV, Bone volume/Total tissue volume; BMD, Bone mineral density
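For context on the two µ-CT outcomes: BV/TV is the fraction of voxels inside the volume of interest that segment as bone, and BMD converts X-ray attenuation to mineral density via a calibration phantom. The sketch below is schematic only; the threshold and the calibration slope/intercept are invented, since the scanner settings are not reported in this excerpt:

```python
# Schematic computation of BV/TV and BMD from a reconstructed micro-CT
# volume. The bone threshold and phantom calibration are placeholders.
import numpy as np

rng = np.random.default_rng(0)
volume = rng.normal(loc=400.0, scale=150.0, size=(64, 64, 64))  # fake attenuation

BONE_THRESHOLD = 500.0            # hypothetical cut-off for mineralized tissue
bone_mask = volume >= BONE_THRESHOLD

# BV/TV: bone voxels over all voxels in the volume of interest.
bv_tv = bone_mask.sum() / bone_mask.size
print(f"BV/TV = {bv_tv:.3f}")

# BMD: linear mapping from attenuation to mg hydroxyapatite per cm^3,
# with slope/intercept taken from a calibration phantom (invented here).
CAL_SLOPE, CAL_INTERCEPT = 1.2, -150.0
bmd = CAL_SLOPE * volume[bone_mask] + CAL_INTERCEPT
print(f"mean BMD = {bmd.mean():.1f} mg HA/cm^3")
```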
Quantitative assessment of implanted β-TCP using μ-CT (Table 2): median, minimum, and maximum are provided. P values indicate the statistical differences between the groups. Note: β-TCP, beta-tricalcium phosphate; rh-BMP-2, recombinant human bone morphogenetic protein-2; ZOL, zoledronate. Discussion: BMPs can induce osteogenesis by stimulating osteoblast differentiation [2]; however, they can also promote the catabolic activity of osteoclasts [6, 21], which complicates the formation of bone in the bone marrow area. In this study, the radiological and histological parameters indicated that rh-BMP-2 significantly promoted osteogenesis in the bone marrow environment at 3 weeks after implantation. However, although bone formation was achieved by 3 weeks after implantation, the bone tissue was gradually resorbed up to 6 weeks after implantation owing to the osteoclasts concurrently induced by rh-BMP-2 in the bone marrow environment [6]. We previously investigated the effect of ZOL on new bone formation induced by rh-BMP-2 in extramedullary and intramedullary bone environments. Systemic ZOL treatment combined with the rh-BMP-2/β-TCP composite was shown to promote and maintain new bone formation in the bone marrow environment [8]. Local co-administration of ZOL via the rh-BMP-2/β-TCP composite has also been shown to promote and maintain new bone formation in the extramedullary environment for a long period of time [12]. In the present study, we aimed to clarify whether the topical co-administration of ZOL was also effective in promoting and maintaining new bone formation induced by rh-BMP-2 in the bone marrow environment. The ultimate goal of tissue regeneration engineering in the orthopaedic field is the accurate and effective formation of tissue at the necessary site. Therefore, we investigated whether the topical co-administration of ZOL and rh-BMP-2 would represent a useful procedure to facilitate and maintain bone formation in the bone marrow environment. As with systemic ZOL treatment, the topical co-administration of ZOL with rh-BMP-2 also promoted and maintained new bone formation in the bone marrow environment. Topical administration of ZOL is considered to reduce the associated side effects and to limit the effect to a target site [22]. ZOL is known to cause side effects such as hypocalcemia, renal failure, or osteonecrosis of the jaw. Therefore, the topical administration of ZOL can be effective in patients in whom systemic administration is inappropriate because of side effects [23]. A systematic review showed that β-TCP is one of the most commonly used biocompatible materials [13]. β-TCP has high biocompatibility and is an ideal material for clinical application [24, 25]. It has been shown to be effective in bone conduction on its own, but it is also often used as a carrier of drugs to enhance effectiveness [13]. β-TCP as a carrier for the local administration of both rh-BMP-2 and ZOL has been reported to be useful for new ectopic bone formation [12], and our findings further demonstrated that β-TCP is a useful carrier of rh-BMP-2 and ZOL for effective bone induction in the bone marrow environment. Bone formation in the bone marrow environment by local drug administration is clinically important because it can inform the development of biomaterials for surgical implants in the medullary cavity, such as intramedullary nails and the femoral stem of total hip replacements.
Moreover, these biomaterials can offer novel therapeutic substitutes for the regeneration of bone cavities after the surgical removal of bone tumors, osteonecrosis lesions, or vertebral fractures. Locally administered ZOL has been shown to directly suppress the bone-resorbing action of osteoclasts at the local site [26]. The local use of ZOL may also make it possible to avoid complications related to systemic bisphosphonate therapy, such as renal disorders or osteonecrosis of the jaw [27]. Taken together, β-TCP material treated with a combination of rh-BMP-2 and ZOL effectively promoted and maintained bone formation in the bone marrow environment. This study has a few limitations; for example, a single animal model and a single dose of each therapeutic agent were used. Future studies should assess the detailed molecular mechanisms underlying the therapeutic effect observed with this combination therapy. Conclusions: In summary, the combination of locally administered rh-BMP-2 and ZOL via β-TCP column materials promoted new bone formation in the bone marrow and enabled the maintenance of the newly formed bone for 6 weeks after implantation. Our findings may contribute to the development of the orthopaedic field, especially clinical approaches for cases that require bone regeneration in the bone marrow environment.
Background: Bone morphogenetic proteins (BMPs) induce osteogenesis in various environments. However, when BMPs are used alone in the bone marrow environment, the maintenance of new bone formation is difficult owing to vigorous bone resorption. This is because BMPs stimulate the differentiation of not only osteoblast precursor cells but also osteoclast precursor cells. The present study aimed to induce and maintain new bone formation using the topical co-administration of recombinant human BMP-2 (rh-BMP-2) and zoledronate (ZOL) on beta-tricalcium phosphate (β-TCP) composite. Methods: β-TCP columns were impregnated with both rh-BMP-2 (30 µg) and ZOL (5 µg), rh-BMP-2 alone, or ZOL alone, and implanted into the left femoral canal of New Zealand white rabbits (n = 56). The implanted β-TCP columns were harvested and evaluated at 3 and 6 weeks after implantation. These harvested β-TCP columns were evaluated radiologically using plain radiographs, and histologically using haematoxylin/eosin (H&E) and Masson's trichrome (MT) staining. In addition, micro-computed tomography (CT) was performed for quantitative analysis of bone formation in each group (n = 7). Results: Tissue sections stained with H&E and MT dyes revealed that new bone formation inside the β-TCP composite was significantly greater in those impregnated with both rh-BMP-2 and ZOL than in those from the other experimental groups at 3 and 6 weeks after implantation (p < 0.05). Micro-CT data also demonstrated that the bone volume and the bone mineral density inside the β-TCP columns were significantly greater in those impregnated with both rh-BMP-2 and ZOL than in those from the other experimental groups at 3 and 6 weeks after implantation (p < 0.05). Conclusions: The topical co-administration of both rh-BMP-2 and ZOL on β-TCP composite promoted and maintained newly formed bone structure in the bone marrow environment.
Background: Several clinical applications of recombinant human bone morphogenetic proteins (rh-BMPs) have reportedly promoted new bone formation [1, 2]. BMPs act as signal transducers in the Smad signaling pathway to regulate mesenchymal stem cell differentiation during skeletal development, especially bone formation [3, 4]. For example, in orthopaedic surgery, rh-BMP has already been used to improve clinical results, such as in the novel operative technique of spinal fusion [5]. However, the use of rh-BMPs in certain orthopaedic surgeries performed in the intramedullary environment, e.g., total hip replacements involving large bone defects or intramedullary bone tumours, remains limited: more osteoclast progenitor cells are derived from hematopoietic stem cells in the bone marrow environment, and rh-BMPs cannot achieve suitable osteogenesis inside the bone marrow because they promote the differentiation not only of osteoblast precursor cells but also of osteoclast precursor cells [6, 7]. In the intramedullary environment, it is difficult to achieve both bone formation and its maintenance. To overcome these problems, we previously reported the effectiveness of the systemic administration of ZOL combined with the rh-BMP-2/β-tricalcium phosphate (β-TCP) composite to promote the osteogenesis of newly formed bone in the bone marrow environment [8]. β-TCP has been reported to be a good carrier for the delivery of both rh-BMP and bisphosphonates to promote osteogenesis [9–12]. β-TCP, a bioactive bone substitute material, has high biocompatibility and good stability [13]. Moreover, ZOL has been demonstrated to have a protective effect against bone tissue resorption by inhibiting the activity of osteoclasts at the local site [14, 15]. In the present study, we further investigated whether the topical co-treatment of ZOL and the rh-BMP-2/β-TCP composite is useful for the promotion as well as the maintenance of new bone formation in the bone marrow environment. Should intramedullary bone formation be achieved by only the topical administration of these drugs, this treatment may represent a safe and effective procedure to create bone formation in lesion sites, from both a clinical and a morphological perspective. In this study, the primary objective was to achieve bone formation in the bone marrow environment and the secondary objective was to maintain the formed bone tissue, by utilizing the combined effect of rh-BMP-2 in promoting bone formation and ZOL in maintaining bone tissue. In other words, we hypothesized that rh-BMP-2 could achieve bone formation in the bone marrow environment during the early treatment period and that ZOL could maintain the newly formed bone tissue by inhibiting bone resorption for a certain period. The aim of this study was to investigate whether the topical co-administration of the rh-BMP-2/β-TCP/ZOL composite promoted osteogenesis and maintained the newly formed bone in the bone marrow environment. Conclusions: In summary, the combination of locally administered rh-BMP-2 and ZOL via β-TCP column materials promoted new bone formation in the bone marrow and enabled the maintenance of the newly formed bone for 6 weeks after implantation. Our findings may contribute to the development of the orthopaedic field, especially clinical approaches for cases that require bone regeneration in the bone marrow environment.
Background: Bone morphogenetic proteins (BMPs) induce osteogenesis in various environments. However, when BMPs are used alone in the bone marrow environment, the maintenance of new bone formation is difficult owing to vigorous bone resorption. This is because BMPs stimulate the differentiation of not only osteoblast precursor cells but also osteoclast precursor cells. The present study aimed to induce and maintain new bone formation using the topical co-administration of recombinant human BMP-2 (rh-BMP-2) and zoledronate (ZOL) on beta-tricalcium phosphate (β-TCP) composite. Methods: β-TCP columns were impregnated with both rh-BMP-2 (30 µg) and ZOL (5 µg), rh-BMP-2 alone, or ZOL alone, and implanted into the left femoral canal of New Zealand white rabbits (n = 56). The implanted β-TCP columns were harvested and evaluated at 3 and 6 weeks after implantation. These harvested β-TCP columns were evaluated radiologically using plain radiographs, and histologically using haematoxylin/eosin (H&E) and Masson's trichrome (MT) staining. In addition, micro-computed tomography (CT) was performed for quantitative analysis of bone formation in each group (n = 7). Results: Tissue sections stained with H&E and MT dyes revealed that new bone formation inside the β-TCP composite was significantly greater in those impregnated with both rh-BMP-2 and ZOL than in those from the other experimental groups at 3 and 6 weeks after implantation (p < 0.05). Micro-CT data also demonstrated that the bone volume and the bone mineral density inside the β-TCP columns were significantly greater in those impregnated with both rh-BMP-2 and ZOL than in those from the other experimental groups at 3 and 6 weeks after implantation (p < 0.05). Conclusions: The topical co-administration of both rh-BMP-2 and ZOL on β-TCP composite promoted and maintained newly formed bone structure in the bone marrow environment.
13,243
392
[ 537, 67, 69, 151, 379, 78, 285, 173, 74, 202, 304, 278, 1359, 395 ]
18
[ "bone", "tcp", "group", "weeks", "zol", "groups", "implantation", "rh", "weeks implantation", "bmp" ]
[ "promoted osteogenesis maintained", "substitutes regeneration bone", "stem cells bone", "bmp bone extramedullary", "osteoblast differentiation bmps" ]
null
[CONTENT] Bone morphogenetic proteins | β-tricalcium phosphate | Rabbit | micro computed tomography | Histology [SUMMARY]
null
[CONTENT] Bone morphogenetic proteins | β-tricalcium phosphate | Rabbit | micro computed tomography | Histology [SUMMARY]
[CONTENT] Bone morphogenetic proteins | β-tricalcium phosphate | Rabbit | micro computed tomography | Histology [SUMMARY]
[CONTENT] Bone morphogenetic proteins | β-tricalcium phosphate | Rabbit | micro computed tomography | Histology [SUMMARY]
[CONTENT] Bone morphogenetic proteins | β-tricalcium phosphate | Rabbit | micro computed tomography | Histology [SUMMARY]
[CONTENT] Animals | Bone Marrow | Bone Morphogenetic Protein 2 | Humans | Osteogenesis | Rabbits | Recombinant Proteins | Transforming Growth Factor beta | X-Ray Microtomography | Zoledronic Acid [SUMMARY]
null
[CONTENT] Animals | Bone Marrow | Bone Morphogenetic Protein 2 | Humans | Osteogenesis | Rabbits | Recombinant Proteins | Transforming Growth Factor beta | X-Ray Microtomography | Zoledronic Acid [SUMMARY]
[CONTENT] Animals | Bone Marrow | Bone Morphogenetic Protein 2 | Humans | Osteogenesis | Rabbits | Recombinant Proteins | Transforming Growth Factor beta | X-Ray Microtomography | Zoledronic Acid [SUMMARY]
[CONTENT] Animals | Bone Marrow | Bone Morphogenetic Protein 2 | Humans | Osteogenesis | Rabbits | Recombinant Proteins | Transforming Growth Factor beta | X-Ray Microtomography | Zoledronic Acid [SUMMARY]
[CONTENT] Animals | Bone Marrow | Bone Morphogenetic Protein 2 | Humans | Osteogenesis | Rabbits | Recombinant Proteins | Transforming Growth Factor beta | X-Ray Microtomography | Zoledronic Acid [SUMMARY]
[CONTENT] promoted osteogenesis maintained | substitutes regeneration bone | stem cells bone | bmp bone extramedullary | osteoblast differentiation bmps [SUMMARY]
null
[CONTENT] promoted osteogenesis maintained | substitutes regeneration bone | stem cells bone | bmp bone extramedullary | osteoblast differentiation bmps [SUMMARY]
[CONTENT] promoted osteogenesis maintained | substitutes regeneration bone | stem cells bone | bmp bone extramedullary | osteoblast differentiation bmps [SUMMARY]
[CONTENT] promoted osteogenesis maintained | substitutes regeneration bone | stem cells bone | bmp bone extramedullary | osteoblast differentiation bmps [SUMMARY]
[CONTENT] promoted osteogenesis maintained | substitutes regeneration bone | stem cells bone | bmp bone extramedullary | osteoblast differentiation bmps [SUMMARY]
[CONTENT] bone | tcp | group | weeks | zol | groups | implantation | rh | weeks implantation | bmp [SUMMARY]
null
[CONTENT] bone | tcp | group | weeks | zol | groups | implantation | rh | weeks implantation | bmp [SUMMARY]
[CONTENT] bone | tcp | group | weeks | zol | groups | implantation | rh | weeks implantation | bmp [SUMMARY]
[CONTENT] bone | tcp | group | weeks | zol | groups | implantation | rh | weeks implantation | bmp [SUMMARY]
[CONTENT] bone | tcp | group | weeks | zol | groups | implantation | rh | weeks implantation | bmp [SUMMARY]
[CONTENT] bone | formation | bone formation | bone marrow | environment | rh | marrow | bone marrow environment | marrow environment | cells [SUMMARY]
null
[CONTENT] groups | group | weeks implantation | implantation | bone | weeks | mt | 001 | group vs | vs [SUMMARY]
[CONTENT] bone | bone marrow | marrow | column materials | development orthopaedic field | marrow enabled maintenance | marrow enabled maintenance newly | development orthopaedic field especially | enabled maintenance newly formed | enabled maintenance newly [SUMMARY]
[CONTENT] bone | tcp | zol | group | weeks | rh | groups | bmp | rh bmp | implantation [SUMMARY]
[CONTENT] bone | tcp | zol | group | weeks | rh | groups | bmp | rh bmp | implantation [SUMMARY]
[CONTENT] ||| ||| ||| BMP-2 | ZOL [SUMMARY]
null
[CONTENT] H&E | MT | ZOL | 3 | 6 weeks | 0.05 ||| Micro-CT | ZOL | 3 | 6 weeks | 0.05 [SUMMARY]
[CONTENT] ZOL [SUMMARY]
[CONTENT] ||| ||| ||| BMP-2 | ZOL ||| 30 | ZOL | 5 | ZOL | New Zealand | 56 ||| 3 | 6 weeks ||| haematoxylin | H&E | Masson ||| 7 ||| H&E | MT | ZOL | 3 | 6 weeks | 0.05 ||| Micro-CT | ZOL | 3 | 6 weeks | 0.05 ||| ZOL [SUMMARY]
[CONTENT] ||| ||| ||| BMP-2 | ZOL ||| 30 | ZOL | 5 | ZOL | New Zealand | 56 ||| 3 | 6 weeks ||| haematoxylin | H&E | Masson ||| 7 ||| H&E | MT | ZOL | 3 | 6 weeks | 0.05 ||| Micro-CT | ZOL | 3 | 6 weeks | 0.05 ||| ZOL [SUMMARY]
Meta-Analysis on the Correlation Between APOM rs805296 Polymorphism and Risk of Coronary Artery Disease.
26723879
The present meta-analysis aimed to summarize the inconsistent findings on the association of apolipoprotein M gene (ApoM) rs805296 polymorphism with the risk of coronary artery disease (CAD), and to obtain a more authentic result about this topic.
BACKGROUND
A total of 7 available articles were identified through electronic databases--PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI)--and their useful data were carefully extracted. The relationship between ApoM rs805296 polymorphism and CAD risk was assessed by odds ratios (ORs) and corresponding 95% confidence intervals (95% CIs), which were calculated using the fixed- or random-effects model, according to the degree of heterogeneity. Hardy-Weinberg equilibrium test, sensitivity test, and publication bias examination were also performed in this meta-analysis.
MATERIAL/METHODS
According to the pooled results, ApoM rs805296 polymorphism conferred an increased risk of CAD under all the genetic contrasts: CC versus TT, CC + TC versus TT, CC versus TT+TC, C versus T, and TC versus TT (OR=2.13, 95% CI=1.16-3.91; OR=1.80, 95% CI=1.50-2.17; OR=1.91, 95% CI=1.04-3.51; OR=1.72, 95% CI=1.45-2.04; OR=1.78, 95% CI=1.47-2.15).
RESULTS
ApoM rs805296 polymorphism may be a risk factor for developing CAD.
CONCLUSIONS
[ "Apolipoproteins", "Apolipoproteins M", "Case-Control Studies", "China", "Coronary Artery Disease", "Gene Frequency", "Genetic Predisposition to Disease", "Genotype", "Humans", "Lipocalins", "Odds Ratio", "Polymorphism, Single Nucleotide", "Promoter Regions, Genetic", "Risk Factors", "Triglycerides" ]
4702609
Publication characteristics
Figure 1 displays the detailed process of study selection and the reasons for study exclusion. Initially, 81 records were identified through the computer search, and 36 remained after excluding 8 studies not on humans and 37 apparently irrelevant ones. Through the subsequent exclusion of reviews and letters (7), studies without full texts (3), duplicates (5), studies not about ApoM rs805296 polymorphism (8), and studies without original data (6), we eventually included 7 studies in the quantitative synthesis [18,21–26]. The primary features of the eligible studies are presented in Table 1.
Sensitivity analysis
In the process of sensitivity analysis, every individual study was omitted in sequence, and the changed results were observed correspondingly. No radical alteration occurred in the pooled results, suggesting that no single study substantially affected the results, and our meta-analysis outcomes were statistically robust.
Results
Publication characteristics Figure 1 displays the detailed process of study selection and the reasons for study exclusion. Initially, 81 records were identified through the computer search, and 36 remained after excluding 8 studies not on humans and 37 apparently irrelevant ones. Through the subsequent exclusion of reviews and letters (7), studies without full texts (3), duplicates (5), studies not about ApoM rs805296 polymorphism (8), and studies without original data (6), we eventually included 7 studies in the quantitative synthesis [18,21–26]. The primary features of the eligible studies are presented in Table 1. Study results Table 2 shows the ORs with 95% CIs and P values for the heterogeneity test under all the genetic models. Overall, the ApoM rs805296 polymorphism elevated the CAD risk in all the genetic contrasts (CC versus TT: OR=2.13, 95% CI=1.16–3.91; CC + TC versus TT: OR=1.80, 95% CI=1.50–2.17; CC versus TT+TC: OR=1.91, 95% CI=1.04–3.51; C versus T: OR=1.72, 95% CI=1.45–2.04; TC versus TT: OR=1.78, 95% CI=1.47–2.15). Figure 2 describes the forest plot for the association of ApoM rs805296 polymorphism with CAD risk under the CC versus TT model. Heterogeneity examination As shown in Table 2, P values for heterogeneity under all the genetic models were larger than 0.05 (P=0.491 for the CC versus TT model; P=0.204 for the CC + TC versus TT model; P=0.557 for the CC versus TT+TC model; P=0.126 for the C versus T model; P=0.361 for the TC versus TT model). Therefore, there was no marked heterogeneity, and the fixed-effects model was used for pooling the results. Publication bias test Begg’s funnel plots and Egger’s test were used to detect possible publication bias among the included studies from the visual and statistical perspectives, respectively, and neither the shapes of the funnel plots (Figure 3) nor the statistical data of Egger’s test (P=0.260) provided evidence for the presence of obvious publication bias.
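The pooled estimates above follow the fixed-effects recipe sketched in the Methods: a log OR and its Woolf variance per study, inverse-variance weighting, and Cochran's Q to justify the fixed-effects choice. A compact Python illustration with invented allele counts; the actual analysis was run in STATA, and its Mantel-Haenszel weighting differs slightly from the inverse-variance weighting shown here:

```python
# Sketch of fixed-effects OR pooling on the log scale plus Cochran's Q.
# The 2x2 allele counts are invented, not those of the 7 included studies.
import numpy as np
from scipy.stats import chi2

# (case_C, case_T, control_C, control_T) per study
studies = [
    (84, 356, 49, 341),
    (120, 620, 80, 660),
    (60, 300, 35, 320),
]

log_or, var = [], []
for a, b, c, d in studies:
    log_or.append(np.log((a * d) / (b * c)))
    var.append(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf variance of log OR
log_or, var = np.array(log_or), np.array(var)

w = 1 / var                                     # inverse-variance weights
pooled = (w * log_or).sum() / w.sum()
se = np.sqrt(1 / w.sum())
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI = {lo:.2f}-{hi:.2f}")

# Cochran's Q: p > 0.05 -> no marked heterogeneity -> fixed-effects model,
# mirroring the decision rule used in this meta-analysis.
q = (w * (log_or - pooled) ** 2).sum()
print(f"Q = {q:.2f}, p = {chi2.sf(q, df=len(studies) - 1):.3f}")
```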
Sensitivity analysis In the process of sensitivity analysis, every individual study was omitted in sequence, and the changed results were observed correspondingly. No radical alteration occurred in the pooled results, suggesting that no single study substantially affected the results, and our meta-analysis outcomes were statistically robust.
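Both robustness checks are mechanical once a pooling routine exists: the leave-one-out sensitivity analysis re-pools the effect with each study omitted, while Egger's test regresses the standardized effect (log OR over its standard error) on precision (1 over the standard error) and asks whether the intercept departs from zero. A hedged sketch with placeholder study effects:

```python
# Leave-one-out sensitivity analysis and Egger's regression test.
# Per-study log ORs and variances are placeholders, not study data.
import numpy as np
from scipy import stats

log_or = np.array([0.50, 0.45, 0.62, 0.55])
var = np.array([0.040, 0.025, 0.060, 0.035])

def pooled_or(lo, v):
    """Fixed-effects (inverse-variance) pooled OR."""
    w = 1.0 / v
    return np.exp((w * lo).sum() / w.sum())

# Sensitivity: omit each study in turn; stable estimates indicate that no
# single study drives the pooled result.
for i in range(len(log_or)):
    keep = np.arange(len(log_or)) != i
    print(f"without study {i + 1}: "
          f"pooled OR = {pooled_or(log_or[keep], var[keep]):.2f}")

# Egger's test: a non-zero intercept suggests small-study effects or
# publication bias. scipy's linregress gives the intercept and its standard
# error; the intercept's p-value is computed by hand.
se = np.sqrt(var)
fit = stats.linregress(1.0 / se, log_or / se)
t_int = fit.intercept / fit.intercept_stderr
p_int = 2 * stats.t.sf(abs(t_int), df=len(log_or) - 2)
print(f"Egger intercept = {fit.intercept:.2f}, p = {p_int:.3f}")
```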
Conclusions
In conclusion, our meta-analysis results revealed a significant correlation of ApoM rs805296 polymorphism with CAD risk, and showed rs805296 polymorphism might confer increased risk of CAD in the Chinese population. The association between ApoM rs805296 and onset risk of CAD needs to be further verified by studies containing combined effects of genetic and environmental factors and larger sample size in multiple ethnicities.
[ "Background", "Study source and search strategy", "Selection standards", "Data extraction", "Statistical analyses", "Heterogeneity examination", "Publication bias test" ]
[ "Coronary artery disease (CAD), one of the most common cardiovascular diseases, ranks first among fatal diseases in adults around the world [1–3]. Several risk factors have been confirmed, such as smoking, hypertension, diabetes, high blood cholesterol, excessive alcohol drinking, depression, and lack of exercise [4,5]. As for the underlying mechanism of CAD, it is reported that cardiac atherosclerosis may have a significant influence on the occurrence and progression of the disease [6,7]. However, genetic and environmental risk factors have been widely researched in the etiology of CAD, and their remarkable effects on susceptibility to CAD have also been identified.\nIn recent years, accumulating evidence indicates that genetic polymorphisms may be implicated in individual susceptibility to CAD, including polymorphisms within genes of APLNR [8], interleukin-6 [9], CYP7A1 [10], and PAI-1 [11]. In addition, the apolipoprotein M gene (ApoM), located on human chromosome 6 p21.31, has been reported to be significantly related to the occurrence of CAD [12].\nThe ApoM gene codes a 22-kDa protein that belongs to the apolipoprotein superfamily in structure. ApoM protein was first identified and determined in a study on lipoprotein by Xu et al. in 1993 [13]. Human ApoM cDNA, with 734 base pairs, encodes a residue-long protein with 188 amino acids [14]. ApoM is reportedly related to lower high-density lipoprotein (HDL) cholesterol, triglyceride-rich lipoproteins, lipoproteins containing ApoB, and very low-density lipoprotein (VLDL). Only expressed in the kidneys and liver [15], ApoM has been confirmed to have great influence on the transportation of reverse cholesterol [16].\nPrevious studies have suggested that one of the polymorphisms in ApoM gene, rs805296, was related to the susceptibility to CAD [17–20]; however, the number of studies is relatively limited, and the results are divergent rather than conclusive due to various reasons. Therefore, we comprehensively summarized all the findings on the association of ApoM rs805296 polymorphism with risk of CAD so as to reach a more authentic conclusion by performing the present meta-analysis.", "The electronic databases searched for all the usable studies were: PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI). The words and items for literature searching contained “Apolipoprotein M” or “ ApoM, “polymorphism” or “variant” or “mutation”, and “coronary artery disease” or “CAD” or “atherosclerosis”. In case of missing any adequate studies, we screened the articles in the reference lists of relevant studies by manual searching. All the studies were restricted to those in English or Chinese.", "All the publications meeting the following requirements were included into our meta-analysis: (1) possessing the case and control subjects at the same time; (2) studying the correlation between ApoM rs805296 polymorphism and CAD risk; (3) with sufficient data describing genotype and allele frequencies of the polymorphism in case and control groups; (4) human studies; and (5) the genotype distribution in control group conforming to Hardy-Weinberg equilibrium. Those studies with case-only design, inadequate information, or duplicating other articles were excluded. For publications with similar datasets, the one with the largest amount of information was included.", "An identical form for data extraction was designed in advance, and the whole process was completed by 2 authors separately. 
Information to be extracted from each included article incorporated the following aspects: year of publication, name of first author, country of origin, ethnicity, genotyping method, sample sizes of cases and controls, genotypic and allelic distribution in case and control groups, and P value for Hardy-Weinberg equilibrium in control group.", "STATA software (V.12.0) was used in all statistical analyses. Since all the publications would have control groups with genotypic and allelic frequencies consistent with Hardy-Weinberg equilibrium according to the selection criteria, we checked the compliance degree of Hardy-Weinberg equilibrium using the chi-square test for those not stating relevant data. The strength of relationship between ApoM rs805296 polymorphism and CAD risk was evaluated with odds ratio (OR) and its corresponding 95% confidence interval (95% CI) under 5 genetic models (CC versus TT, CC + TC versus TT, CC versus TT+TC, C versus T and TC versus TT). The absence or presence of statistically significant inter-study heterogeneity, tested by the χ2-test-based Q statistic, determined the use of fixed-effects model (Mantel-Haenszel method) or random-effects model (DerSimonian and Laird method). In sensitivity analysis, each single study was deleted in turn to observe the alterations of the overall results. Through Begg’s funnel plots and Egger’s test, we detected if there existed significant publication bias across eligible studies. For all the statistical tests, the significance level was set at P<0.05.", "As shown in Table 2, P values for heterogeneity under all the genetic models were larger than 0.05 (P=0.491 for CC versus TT model; P=0.204 for CC + TC versus TT model; P=0.557 for CC versus TT+TC model; P=0.126 for C versus T model; P=0.361 for TC versus TT model). Therefore, there was no marked heterogeneity and the fixed-effects model was used for pooling the results.", "Begg’s funnel plots and Egger’s test were used to detected possible publication bias among the included studies from the visual and statistical perspective, respectively, and neither the shapes of funnel plots (Figure 3) nor the statistical data of Egger’s test (P=0.260) provided evidence for the presence of obvious publication bias." ]
[ null, "methods", null, "methods", null, null, null ]
[ "Background", "Material and Methods", "Study source and search strategy", "Selection standards", "Data extraction", "Statistical analyses", "Results", "Publication characteristics", "Study results", "Heterogeneity examination", "Publication bias test", "Sensitivity analysis", "Discussion", "Conclusions" ]
[ "Coronary artery disease (CAD), one of the most common cardiovascular diseases, ranks first among fatal diseases in adults around the world [1–3]. Several risk factors have been confirmed, such as smoking, hypertension, diabetes, high blood cholesterol, excessive alcohol drinking, depression, and lack of exercise [4,5]. As for the underlying mechanism of CAD, it is reported that cardiac atherosclerosis may have a significant influence on the occurrence and progression of the disease [6,7]. However, genetic and environmental risk factors have been widely researched in the etiology of CAD, and their remarkable effects on susceptibility to CAD have also been identified.\nIn recent years, accumulating evidence indicates that genetic polymorphisms may be implicated in individual susceptibility to CAD, including polymorphisms within genes of APLNR [8], interleukin-6 [9], CYP7A1 [10], and PAI-1 [11]. In addition, the apolipoprotein M gene (ApoM), located on human chromosome 6 p21.31, has been reported to be significantly related to the occurrence of CAD [12].\nThe ApoM gene codes a 22-kDa protein that belongs to the apolipoprotein superfamily in structure. ApoM protein was first identified and determined in a study on lipoprotein by Xu et al. in 1993 [13]. Human ApoM cDNA, with 734 base pairs, encodes a residue-long protein with 188 amino acids [14]. ApoM is reportedly related to lower high-density lipoprotein (HDL) cholesterol, triglyceride-rich lipoproteins, lipoproteins containing ApoB, and very low-density lipoprotein (VLDL). Only expressed in the kidneys and liver [15], ApoM has been confirmed to have great influence on the transportation of reverse cholesterol [16].\nPrevious studies have suggested that one of the polymorphisms in ApoM gene, rs805296, was related to the susceptibility to CAD [17–20]; however, the number of studies is relatively limited, and the results are divergent rather than conclusive due to various reasons. Therefore, we comprehensively summarized all the findings on the association of ApoM rs805296 polymorphism with risk of CAD so as to reach a more authentic conclusion by performing the present meta-analysis.", " Study source and search strategy The electronic databases searched for all the usable studies were: PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI). The words and items for literature searching contained “Apolipoprotein M” or “ ApoM, “polymorphism” or “variant” or “mutation”, and “coronary artery disease” or “CAD” or “atherosclerosis”. In case of missing any adequate studies, we screened the articles in the reference lists of relevant studies by manual searching. All the studies were restricted to those in English or Chinese.\nThe electronic databases searched for all the usable studies were: PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI). The words and items for literature searching contained “Apolipoprotein M” or “ ApoM, “polymorphism” or “variant” or “mutation”, and “coronary artery disease” or “CAD” or “atherosclerosis”. In case of missing any adequate studies, we screened the articles in the reference lists of relevant studies by manual searching. 
All the studies were restricted to those in English or Chinese.\n Selection standards All the publications meeting the following requirements were included into our meta-analysis: (1) possessing the case and control subjects at the same time; (2) studying the correlation between ApoM rs805296 polymorphism and CAD risk; (3) with sufficient data describing genotype and allele frequencies of the polymorphism in case and control groups; (4) human studies; and (5) the genotype distribution in control group conforming to Hardy-Weinberg equilibrium. Those studies with case-only design, inadequate information, or duplicating other articles were excluded. For publications with similar datasets, the one with the largest amount of information was included.\nAll the publications meeting the following requirements were included into our meta-analysis: (1) possessing the case and control subjects at the same time; (2) studying the correlation between ApoM rs805296 polymorphism and CAD risk; (3) with sufficient data describing genotype and allele frequencies of the polymorphism in case and control groups; (4) human studies; and (5) the genotype distribution in control group conforming to Hardy-Weinberg equilibrium. Those studies with case-only design, inadequate information, or duplicating other articles were excluded. For publications with similar datasets, the one with the largest amount of information was included.\n Data extraction An identical form for data extraction was designed in advance, and the whole process was completed by 2 authors separately. Information to be extracted from each included article incorporated the following aspects: year of publication, name of first author, country of origin, ethnicity, genotyping method, sample sizes of cases and controls, genotypic and allelic distribution in case and control groups, and P value for Hardy-Weinberg equilibrium in control group.\nAn identical form for data extraction was designed in advance, and the whole process was completed by 2 authors separately. Information to be extracted from each included article incorporated the following aspects: year of publication, name of first author, country of origin, ethnicity, genotyping method, sample sizes of cases and controls, genotypic and allelic distribution in case and control groups, and P value for Hardy-Weinberg equilibrium in control group.\n Statistical analyses STATA software (V.12.0) was used in all statistical analyses. Since all the publications would have control groups with genotypic and allelic frequencies consistent with Hardy-Weinberg equilibrium according to the selection criteria, we checked the compliance degree of Hardy-Weinberg equilibrium using the chi-square test for those not stating relevant data. The strength of relationship between ApoM rs805296 polymorphism and CAD risk was evaluated with odds ratio (OR) and its corresponding 95% confidence interval (95% CI) under 5 genetic models (CC versus TT, CC + TC versus TT, CC versus TT+TC, C versus T and TC versus TT). The absence or presence of statistically significant inter-study heterogeneity, tested by the χ2-test-based Q statistic, determined the use of fixed-effects model (Mantel-Haenszel method) or random-effects model (DerSimonian and Laird method). In sensitivity analysis, each single study was deleted in turn to observe the alterations of the overall results. Through Begg’s funnel plots and Egger’s test, we detected if there existed significant publication bias across eligible studies. 
For all the statistical tests, the significance level was set at P<0.05.\nSTATA software (V.12.0) was used in all statistical analyses. Since all the publications would have control groups with genotypic and allelic frequencies consistent with Hardy-Weinberg equilibrium according to the selection criteria, we checked the compliance degree of Hardy-Weinberg equilibrium using the chi-square test for those not stating relevant data. The strength of relationship between ApoM rs805296 polymorphism and CAD risk was evaluated with odds ratio (OR) and its corresponding 95% confidence interval (95% CI) under 5 genetic models (CC versus TT, CC + TC versus TT, CC versus TT+TC, C versus T and TC versus TT). The absence or presence of statistically significant inter-study heterogeneity, tested by the χ2-test-based Q statistic, determined the use of fixed-effects model (Mantel-Haenszel method) or random-effects model (DerSimonian and Laird method). In sensitivity analysis, each single study was deleted in turn to observe the alterations of the overall results. Through Begg’s funnel plots and Egger’s test, we detected if there existed significant publication bias across eligible studies. For all the statistical tests, the significance level was set at P<0.05.", "The electronic databases searched for all the usable studies were: PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI). The words and items for literature searching contained “Apolipoprotein M” or “ ApoM, “polymorphism” or “variant” or “mutation”, and “coronary artery disease” or “CAD” or “atherosclerosis”. In case of missing any adequate studies, we screened the articles in the reference lists of relevant studies by manual searching. All the studies were restricted to those in English or Chinese.", "All the publications meeting the following requirements were included into our meta-analysis: (1) possessing the case and control subjects at the same time; (2) studying the correlation between ApoM rs805296 polymorphism and CAD risk; (3) with sufficient data describing genotype and allele frequencies of the polymorphism in case and control groups; (4) human studies; and (5) the genotype distribution in control group conforming to Hardy-Weinberg equilibrium. Those studies with case-only design, inadequate information, or duplicating other articles were excluded. For publications with similar datasets, the one with the largest amount of information was included.", "An identical form for data extraction was designed in advance, and the whole process was completed by 2 authors separately. Information to be extracted from each included article incorporated the following aspects: year of publication, name of first author, country of origin, ethnicity, genotyping method, sample sizes of cases and controls, genotypic and allelic distribution in case and control groups, and P value for Hardy-Weinberg equilibrium in control group.", "STATA software (V.12.0) was used in all statistical analyses. Since all the publications would have control groups with genotypic and allelic frequencies consistent with Hardy-Weinberg equilibrium according to the selection criteria, we checked the compliance degree of Hardy-Weinberg equilibrium using the chi-square test for those not stating relevant data. 
The strength of relationship between ApoM rs805296 polymorphism and CAD risk was evaluated with odds ratio (OR) and its corresponding 95% confidence interval (95% CI) under 5 genetic models (CC versus TT, CC + TC versus TT, CC versus TT+TC, C versus T and TC versus TT). The absence or presence of statistically significant inter-study heterogeneity, tested by the χ2-test-based Q statistic, determined the use of fixed-effects model (Mantel-Haenszel method) or random-effects model (DerSimonian and Laird method). In sensitivity analysis, each single study was deleted in turn to observe the alterations of the overall results. Through Begg’s funnel plots and Egger’s test, we detected if there existed significant publication bias across eligible studies. For all the statistical tests, the significance level was set at P<0.05.", " Publication characteristics Figure 1 displays the detailed process of study selection and reasons for study exclusion. Initially, 81 records were identified through the computer search, and 36 remained after excluding 8 studies not on humans and 37 apparently irrelevant ones. Through the subsequent exclusion for reviews and letters (7), without full texts (3), duplicates (5), not about ApoM rs805296 polymorphism (8) and without original data (6), we eventually included 7 studies in the quantitative synthesis [18,21–26]. The primary features of eligible studies are presented in Table 1.\nFigure 1 displays the detailed process of study selection and reasons for study exclusion. Initially, 81 records were identified through the computer search, and 36 remained after excluding 8 studies not on humans and 37 apparently irrelevant ones. Through the subsequent exclusion for reviews and letters (7), without full texts (3), duplicates (5), not about ApoM rs805296 polymorphism (8) and without original data (6), we eventually included 7 studies in the quantitative synthesis [18,21–26]. The primary features of eligible studies are presented in Table 1.\n Study results Table 2 shows the ORs with 95% CIs and P values for heterogeneity test under all the genetic models. Overall, the ApoM rs805296 polymorphism elevated the CAD risk in all the genetic contrasts (CC versus TT: OR=2.13, 95% CI=1.16–3.91; CC + TC versus TT: OR=1.80, 95% CI=1.50–2.17; CC versus TT+TC: OR=1.91, 95% CI=1.04–3.51; C versus T: OR=1.72, 95% CI=1.45–2.04; TC versus TT: OR=1.78, 95% CI=1.47–2.15). Figure 2 describes the forest plot for the association of ApoM rs805296 polymorphism with CAD risk under the CC versus TT model.\nTable 2 shows the ORs with 95% CIs and P values for heterogeneity test under all the genetic models. Overall, the ApoM rs805296 polymorphism elevated the CAD risk in all the genetic contrasts (CC versus TT: OR=2.13, 95% CI=1.16–3.91; CC + TC versus TT: OR=1.80, 95% CI=1.50–2.17; CC versus TT+TC: OR=1.91, 95% CI=1.04–3.51; C versus T: OR=1.72, 95% CI=1.45–2.04; TC versus TT: OR=1.78, 95% CI=1.47–2.15). Figure 2 describes the forest plot for the association of ApoM rs805296 polymorphism with CAD risk under the CC versus TT model.\n Heterogeneity examination As shown in Table 2, P values for heterogeneity under all the genetic models were larger than 0.05 (P=0.491 for CC versus TT model; P=0.204 for CC + TC versus TT model; P=0.557 for CC versus TT+TC model; P=0.126 for C versus T model; P=0.361 for TC versus TT model). 
Therefore, there was no marked heterogeneity and the fixed-effects model was used for pooling the results.\nAs shown in Table 2, P values for heterogeneity under all the genetic models were larger than 0.05 (P=0.491 for CC versus TT model; P=0.204 for CC + TC versus TT model; P=0.557 for CC versus TT+TC model; P=0.126 for C versus T model; P=0.361 for TC versus TT model). Therefore, there was no marked heterogeneity and the fixed-effects model was used for pooling the results.\n Publication bias test Begg’s funnel plots and Egger’s test were used to detected possible publication bias among the included studies from the visual and statistical perspective, respectively, and neither the shapes of funnel plots (Figure 3) nor the statistical data of Egger’s test (P=0.260) provided evidence for the presence of obvious publication bias.\nBegg’s funnel plots and Egger’s test were used to detected possible publication bias among the included studies from the visual and statistical perspective, respectively, and neither the shapes of funnel plots (Figure 3) nor the statistical data of Egger’s test (P=0.260) provided evidence for the presence of obvious publication bias.\n Sensitivity analysis In the process of sensitivity analysis, every individual study was omitted in sequence, and the changed results were observed correspondingly. No radical alteration occurred in the pooled results, suggesting that no single study substantially affected the results, and our meta-analysis outcomes were statistically robust.\nIn the process of sensitivity analysis, every individual study was omitted in sequence, and the changed results were observed correspondingly. No radical alteration occurred in the pooled results, suggesting that no single study substantially affected the results, and our meta-analysis outcomes were statistically robust.", "Figure 1 displays the detailed process of study selection and reasons for study exclusion. Initially, 81 records were identified through the computer search, and 36 remained after excluding 8 studies not on humans and 37 apparently irrelevant ones. Through the subsequent exclusion for reviews and letters (7), without full texts (3), duplicates (5), not about ApoM rs805296 polymorphism (8) and without original data (6), we eventually included 7 studies in the quantitative synthesis [18,21–26]. The primary features of eligible studies are presented in Table 1.", "Table 2 shows the ORs with 95% CIs and P values for heterogeneity test under all the genetic models. Overall, the ApoM rs805296 polymorphism elevated the CAD risk in all the genetic contrasts (CC versus TT: OR=2.13, 95% CI=1.16–3.91; CC + TC versus TT: OR=1.80, 95% CI=1.50–2.17; CC versus TT+TC: OR=1.91, 95% CI=1.04–3.51; C versus T: OR=1.72, 95% CI=1.45–2.04; TC versus TT: OR=1.78, 95% CI=1.47–2.15). Figure 2 describes the forest plot for the association of ApoM rs805296 polymorphism with CAD risk under the CC versus TT model.", "As shown in Table 2, P values for heterogeneity under all the genetic models were larger than 0.05 (P=0.491 for CC versus TT model; P=0.204 for CC + TC versus TT model; P=0.557 for CC versus TT+TC model; P=0.126 for C versus T model; P=0.361 for TC versus TT model). 
Therefore, there was no marked heterogeneity and the fixed-effects model was used for pooling the results.", "Begg’s funnel plots and Egger’s test were used to detected possible publication bias among the included studies from the visual and statistical perspective, respectively, and neither the shapes of funnel plots (Figure 3) nor the statistical data of Egger’s test (P=0.260) provided evidence for the presence of obvious publication bias.", "In the process of sensitivity analysis, every individual study was omitted in sequence, and the changed results were observed correspondingly. No radical alteration occurred in the pooled results, suggesting that no single study substantially affected the results, and our meta-analysis outcomes were statistically robust.", "CAD is a complex multi-genetic disease caused by synergistic effects of genetic and environmental risk factors [27,28]. Hereditary epidemiological studies have suggested that genetic mutations may elevate individual risk of developing CAD [29–31]. Initially separated and cloned from chylomicrons [13], ApoM in plasma mainly exists in HDL particles, and very little is in triglyceride-rich lipoprotein (TGRLP) and low-density lipoprotein (LDL), suggesting ApoM may be associated with lipid transportation and metabolism [15]. Richter et al. found an important role of ApoM in the formation of HDL, and confirmed its protective effects against atherosclerosis [16,32]. In the study by Xu et al., the correlation between ApoM and indexes of lipid indicated that ApoM levels in plasma had a positive relation with factors against the progression of atherosclerosis, such as ApoA I and HDLC, and was negatively related with factors promoting atherosclerosis development, such as triglyceride, total cholesterol, and lipoprotein (a), and that elevated levels of ApoM could prevent and slow the progression of atherosclerosis [33].\nThe human ApoM gene is located in a region adjacent to that of major histocompatibility complex (MHC) in which multiple genes are related to immune response; therefore, the ApoM gene is likely to participate in the regulation of immune defense [34]. Among a number of polymorphisms within the ApoM gene, the rs805296 variant in the proximal promoter region has been verified to have a link with plasma cholesterol, and may increase individual susceptibility to CAD [35].\nIn this present study, we referred to previous studies and analyzed the association between ApoM rs805296 polymorphism and CAD risk. Our results indicate that ApoM rs805296 polymorphism under all the comparisons could elevate the risk of CAD, suggesting this polymorphism might act as a promoter for CAD onset. Several case-control studies have investigated the significance of ApoM rs805296 in CAD risk in Chinese populations, and obtained useful findings. Using the method of polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP), Huang et al. carried out a screening for ApoM rs805296 in 220 CAD cases and 195 normal controls, and observed the frequency of C allele in case and control groups was 19.1% and 12.6% respectively, and this difference was statistically significant (P=0.011), which proved the polymorphism rs805296 might be a susceptible factor for CAD [21]. Zhang et al. 
performed a large study recruiting 675 patients with acute coronary syndrome (ACS) and 636 healthy control subjects, and found that the frequencies of both C allele and CC genotype of ApoM rs805296 polymorphism in the case group were significantly higher than those in the control group (P<0.01). Subsequently, after the adjustment of susceptibility factors for CAD, the C allele was found to be an independent risk factor for the occurrence of ACS [25]. In addition, some other studies also obtained results similar to those mentioned above [18,22,23,26]. In contrast, Zheng et al. found no statistically significant difference in distribution of 3 genotypes of ApoM rs805296 polymorphism, including TT, TC, and CC, between case and control groups, and concluded that rs805296 might not be correlated with the development of CAD [24]. Conducted in a Chinese population, the study of Zheng et al. obtained results that are in contrast with our present study and the other case-control studies listed above, which might be attributed to differences in number of samples, methods of genotyping, correction factors, and other risk elements.\nAbsence of heterogeneity and publications bias is the biggest strength of this meta-analysis. However, as in previous studies and meta-analyses, our meta-analysis also has some weaknesses that should be clearly presented. Because all the prior studies on the association of ApoM rs805296 polymorphism with CAD risk only focused on Chinese populations, our meta-analysis solely discussed this association among Chinese people, which might not be representative in other ethnic groups. In addition, the limited number of included studies and the relatively small sample size might lessen the statistical power of our results. Another important aspect that should be stated is that some potential risk factors such as family history, smoking status, body mass index (BMI) and other environmental influences [36] were not incorporated into the discussion of the present study due to limited original information of included studies.", "In conclusion, our meta-analysis results revealed a significant correlation of ApoM rs805296 polymorphism with CAD risk, and showed rs805296 polymorphism might confer increased risk of CAD in the Chinese population. The association between ApoM rs805296 and onset risk of CAD needs to be further verified by studies containing combined effects of genetic and environmental factors and larger sample size in multiple ethnicities." ]
[ null, "materials|methods", "methods", null, "methods", null, "results", "intro", "methods|results", null, null, "methods", "discussion", "conclusions" ]
[ "Apolipoproteins", "Coronary Artery Disease", "Polymorphism, Genetic", "Risk Factors" ]
Background: Coronary artery disease (CAD), one of the most common cardiovascular diseases, ranks first among fatal diseases in adults around the world [1–3]. Several risk factors have been confirmed, such as smoking, hypertension, diabetes, high blood cholesterol, excessive alcohol drinking, depression, and lack of exercise [4,5]. As for the underlying mechanism of CAD, it is reported that cardiac atherosclerosis may have a significant influence on the occurrence and progression of the disease [6,7]. However, genetic and environmental risk factors have been widely researched in the etiology of CAD, and their remarkable effects on susceptibility to CAD have also been identified. In recent years, accumulating evidence indicates that genetic polymorphisms may be implicated in individual susceptibility to CAD, including polymorphisms within genes of APLNR [8], interleukin-6 [9], CYP7A1 [10], and PAI-1 [11]. In addition, the apolipoprotein M gene (ApoM), located on human chromosome 6 p21.31, has been reported to be significantly related to the occurrence of CAD [12]. The ApoM gene codes a 22-kDa protein that belongs to the apolipoprotein superfamily in structure. ApoM protein was first identified and determined in a study on lipoprotein by Xu et al. in 1993 [13]. Human ApoM cDNA, with 734 base pairs, encodes a residue-long protein with 188 amino acids [14]. ApoM is reportedly related to lower high-density lipoprotein (HDL) cholesterol, triglyceride-rich lipoproteins, lipoproteins containing ApoB, and very low-density lipoprotein (VLDL). Only expressed in the kidneys and liver [15], ApoM has been confirmed to have great influence on the transportation of reverse cholesterol [16]. Previous studies have suggested that one of the polymorphisms in ApoM gene, rs805296, was related to the susceptibility to CAD [17–20]; however, the number of studies is relatively limited, and the results are divergent rather than conclusive due to various reasons. Therefore, we comprehensively summarized all the findings on the association of ApoM rs805296 polymorphism with risk of CAD so as to reach a more authentic conclusion by performing the present meta-analysis. Material and Methods: Study source and search strategy The electronic databases searched for all the usable studies were: PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI). The words and items for literature searching contained “Apolipoprotein M” or “ ApoM, “polymorphism” or “variant” or “mutation”, and “coronary artery disease” or “CAD” or “atherosclerosis”. In case of missing any adequate studies, we screened the articles in the reference lists of relevant studies by manual searching. All the studies were restricted to those in English or Chinese. The electronic databases searched for all the usable studies were: PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI). The words and items for literature searching contained “Apolipoprotein M” or “ ApoM, “polymorphism” or “variant” or “mutation”, and “coronary artery disease” or “CAD” or “atherosclerosis”. In case of missing any adequate studies, we screened the articles in the reference lists of relevant studies by manual searching. All the studies were restricted to those in English or Chinese. 
Selection standards: Publications meeting the following requirements were included in our meta-analysis: (1) including both case and control subjects; (2) studying the correlation between the ApoM rs805296 polymorphism and CAD risk; (3) providing sufficient data on the genotype and allele frequencies of the polymorphism in case and control groups; (4) conducted in humans; and (5) with a genotype distribution in the control group conforming to Hardy-Weinberg equilibrium. Studies with a case-only design, inadequate information, or duplicating other articles were excluded. For publications with overlapping datasets, the one with the largest amount of information was included. Data extraction: An identical form for data extraction was designed in advance, and the whole process was completed by 2 authors independently. The information extracted from each included article covered the following aspects: year of publication, name of first author, country of origin, ethnicity, genotyping method, sample sizes of cases and controls, genotypic and allelic distributions in case and control groups, and the P value for Hardy-Weinberg equilibrium in the control group. Statistical analyses: STATA software (V.12.0) was used for all statistical analyses. Although the selection criteria required control groups with genotypic and allelic frequencies consistent with Hardy-Weinberg equilibrium, we verified compliance with Hardy-Weinberg equilibrium using the chi-square test for studies that did not report the relevant data. The strength of the relationship between the ApoM rs805296 polymorphism and CAD risk was evaluated with the odds ratio (OR) and its corresponding 95% confidence interval (95% CI) under 5 genetic models (CC versus TT, CC + TC versus TT, CC versus TT+TC, C versus T, and TC versus TT). The absence or presence of statistically significant inter-study heterogeneity, tested by the χ2-based Q statistic, determined the use of the fixed-effects model (Mantel-Haenszel method) or the random-effects model (DerSimonian and Laird method). In the sensitivity analysis, each study was deleted in turn to observe alterations in the overall results. Begg's funnel plots and Egger's test were used to detect significant publication bias across the eligible studies. For all statistical tests, the significance level was set at P<0.05.
Results: Publication characteristics: Figure 1 displays the detailed process of study selection and the reasons for study exclusion. Initially, 81 records were identified through the computer search, and 36 remained after excluding 8 non-human studies and 37 apparently irrelevant ones. Through the subsequent exclusion of reviews and letters (7), articles without full texts (3), duplicates (5), articles not about the ApoM rs805296 polymorphism (8), and articles without original data (6), we eventually included 7 studies in the quantitative synthesis [18,21–26]. The primary features of the eligible studies are presented in Table 1. Study results: Table 2 shows the ORs with 95% CIs and P values for the heterogeneity test under all genetic models. Overall, the ApoM rs805296 polymorphism elevated CAD risk in all genetic contrasts (CC versus TT: OR=2.13, 95% CI=1.16–3.91; CC + TC versus TT: OR=1.80, 95% CI=1.50–2.17; CC versus TT+TC: OR=1.91, 95% CI=1.04–3.51; C versus T: OR=1.72, 95% CI=1.45–2.04; TC versus TT: OR=1.78, 95% CI=1.47–2.15). Figure 2 shows the forest plot for the association of the ApoM rs805296 polymorphism with CAD risk under the CC versus TT model. Heterogeneity examination: As shown in Table 2, the P values for heterogeneity under all genetic models were larger than 0.05 (P=0.491 for CC versus TT; P=0.204 for CC + TC versus TT; P=0.557 for CC versus TT+TC; P=0.126 for C versus T; P=0.361 for TC versus TT). Therefore, there was no marked heterogeneity, and the fixed-effects model was used for pooling the results.
Publication bias test: Begg's funnel plots and Egger's test were used to detect possible publication bias among the included studies from the visual and statistical perspectives, respectively. Neither the shapes of the funnel plots (Figure 3) nor the statistics of Egger's test (P=0.260) provided evidence of obvious publication bias. Sensitivity analysis: In the sensitivity analysis, each individual study was omitted in sequence, and the changes in the results were observed. No radical alteration occurred in the pooled results, suggesting that no single study substantially affected them and that our meta-analysis outcomes were statistically robust.
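Both procedures follow standard recipes. Below is a minimal Python sketch of Egger's regression test and the leave-one-out sensitivity analysis, assuming numpy and statsmodels are available; the per-study log odds ratios and standard errors are hypothetical:

import numpy as np
import statsmodels.api as sm

log_or = np.array([0.54, 0.61, 0.48, 0.70, 0.55, 0.02, 0.66])  # hypothetical
se = np.array([0.21, 0.15, 0.25, 0.18, 0.12, 0.30, 0.20])      # hypothetical

# Egger's regression: standardized effect against precision; an intercept
# significantly different from zero suggests small-study (publication) bias.
fit = sm.OLS(log_or / se, sm.add_constant(1.0 / se)).fit()
print("Egger intercept p =", fit.pvalues[0])

# Leave-one-out sensitivity: re-pool (inverse-variance, fixed effects) omitting each study.
w = 1 / se**2
for i in range(len(log_or)):
    keep = np.arange(len(log_or)) != i
    pooled = (w[keep] * log_or[keep]).sum() / w[keep].sum()
    print(f"omit study {i}: pooled OR = {np.exp(pooled):.2f}")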
Discussion: CAD is a complex polygenic disease caused by the synergistic effects of genetic and environmental risk factors [27,28]. Genetic epidemiological studies have suggested that genetic mutations may elevate an individual's risk of developing CAD [29–31]. Initially separated and cloned from chylomicrons [13], ApoM in plasma mainly exists in HDL particles, with very little in triglyceride-rich lipoprotein (TGRLP) and low-density lipoprotein (LDL), suggesting that ApoM may be associated with lipid transport and metabolism [15]. Richter et al. found an important role of ApoM in the formation of HDL and confirmed its protective effects against atherosclerosis [16,32]. In the study by Xu et al., plasma ApoM levels were positively related to factors that counteract the progression of atherosclerosis, such as ApoA I and HDL-C, and negatively related to factors that promote atherosclerosis, such as triglyceride, total cholesterol, and lipoprotein (a); elevated ApoM levels could therefore prevent and slow the progression of atherosclerosis [33]. The human ApoM gene is located in a region adjacent to the major histocompatibility complex (MHC), in which multiple genes are related to the immune response; therefore, the ApoM gene is likely to participate in the regulation of immune defense [34]. Among the polymorphisms within the ApoM gene, the rs805296 variant in the proximal promoter region has been linked with plasma cholesterol and may increase individual susceptibility to CAD [35]. In the present study, we referred to previous studies and analyzed the association between the ApoM rs805296 polymorphism and CAD risk. Our results indicate that the ApoM rs805296 polymorphism elevated the risk of CAD under all comparisons, suggesting that this polymorphism might act as a promoter of CAD onset. Several case-control studies have investigated the significance of ApoM rs805296 in CAD risk in Chinese populations. Using polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP), Huang et al. screened for ApoM rs805296 in 220 CAD cases and 195 normal controls and observed that the frequency of the C allele was 19.1% in the case group and 12.6% in the control group, a statistically significant difference (P=0.011), suggesting that rs805296 might be a susceptibility factor for CAD [21].
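As a rough sanity check on the allele frequencies reported by Huang et al. (19.1% versus 12.6%), the crude allele-contrast odds ratio implied by those percentages alone (ignoring rounding and the underlying counts) is

$$ \mathrm{OR}_{C\ \text{vs.}\ T} \;=\; \frac{f_{\text{case}}/(1-f_{\text{case}})}{f_{\text{control}}/(1-f_{\text{control}})} \;=\; \frac{0.191/0.809}{0.126/0.874} \;\approx\; 1.64, $$

consistent in direction with the pooled C-versus-T estimate (OR=1.72) reported above.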
Zhang et al. performed a large study recruiting 675 patients with acute coronary syndrome (ACS) and 636 healthy control subjects, and found that the frequencies of both the C allele and the CC genotype of the ApoM rs805296 polymorphism were significantly higher in the case group than in the control group (P<0.01). Subsequently, after adjusting for CAD susceptibility factors, the C allele remained an independent risk factor for the occurrence of ACS [25]. In addition, several other studies obtained similar results [18,22,23,26]. In contrast, Zheng et al. found no statistically significant difference in the distribution of the 3 genotypes of the ApoM rs805296 polymorphism (TT, TC, and CC) between case and control groups, and concluded that rs805296 might not be correlated with the development of CAD [24]. Although also conducted in a Chinese population, the study of Zheng et al. contrasts with our present study and the other case-control studies listed above, which might be attributed to differences in sample size, genotyping methods, correction factors, and other risk elements. The absence of heterogeneity and publication bias is the greatest strength of this meta-analysis. However, as in previous studies and meta-analyses, our meta-analysis also has some weaknesses that should be clearly stated. Because all prior studies on the association of the ApoM rs805296 polymorphism with CAD risk focused exclusively on Chinese populations, our meta-analysis could only assess this association among Chinese people, and the findings might not be representative of other ethnic groups. In addition, the limited number of included studies and the relatively small sample size might lessen the statistical power of our results. Finally, some potential risk factors, such as family history, smoking status, body mass index (BMI), and other environmental influences [36], could not be incorporated into the present analysis because the original information in the included studies was limited. Conclusions: In conclusion, our meta-analysis revealed a significant correlation of the ApoM rs805296 polymorphism with CAD risk, indicating that this polymorphism might confer increased risk of CAD in the Chinese population. The association between ApoM rs805296 and the onset risk of CAD needs to be further verified by studies with larger sample sizes in multiple ethnicities that account for the combined effects of genetic and environmental factors.
Background: The present meta-analysis aimed to summarize the inconsistent findings on the association of apolipoprotein M gene (ApoM) rs805296 polymorphism with the risk of coronary artery disease (CAD), and to obtain a more authentic result about this topic. Methods: A total of 7 available articles were identified through electronic databases--PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI)--and their useful data were carefully extracted. The relationship between ApoM rs805296 polymorphism and CAD risk was assessed by odds ratios (ORs) and corresponding 95% confidence intervals (95% CIs), which were calculated using the fixed- or random-effects model, according to the degree of heterogeneity. Hardy-Weinberg equilibrium test, sensitivity test, and publication bias examination were also performed in this meta-analysis. Results: According to the pooled results, ApoM rs805296 polymorphism conferred an increased risk of CAD under all the genetic contrasts: CC versus TT, CC + TC versus TT, CC versus TT+TC, C versus T, and TC versus TT (OR=2.13, 95% CI=1.16-3.91; OR=1.80, 95% CI=1.50-2.17; OR=1.91, 95% CI=1.04-3.51; OR=1.72, 95% CI=1.45-2.04; OR=1.78, 95% CI=1.47-2.15). Conclusions: ApoM rs805296 polymorphism may be a risk factor for developing CAD.
Publication characteristics: Figure 1 displays the detailed process of study selection and the reasons for study exclusion. Initially, 81 records were identified through the computer search, and 36 remained after excluding 8 non-human studies and 37 apparently irrelevant ones. Through the subsequent exclusion of reviews and letters (7), articles without full texts (3), duplicates (5), articles not about the ApoM rs805296 polymorphism (8), and articles without original data (6), we eventually included 7 studies in the quantitative synthesis [18,21–26]. The primary features of the eligible studies are presented in Table 1. Conclusions: In conclusion, our meta-analysis revealed a significant correlation of the ApoM rs805296 polymorphism with CAD risk, indicating that this polymorphism might confer increased risk of CAD in the Chinese population. The association between ApoM rs805296 and the onset risk of CAD needs to be further verified by studies with larger sample sizes in multiple ethnicities that account for the combined effects of genetic and environmental factors.
Background: The present meta-analysis aimed to summarize the inconsistent findings on the association of apolipoprotein M gene (ApoM) rs805296 polymorphism with the risk of coronary artery disease (CAD), and to obtain a more authentic result about this topic. Methods: A total of 7 available articles were identified through electronic databases--PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI)--and their useful data were carefully extracted. The relationship between ApoM rs805296 polymorphism and CAD risk was assessed by odds ratios (ORs) and corresponding 95% confidence intervals (95% CIs), which were calculated using the fixed- or random-effects model, according to the degree of heterogeneity. Hardy-Weinberg equilibrium test, sensitivity test, and publication bias examination were also performed in this meta-analysis. Results: According to the pooled results, ApoM rs805296 polymorphism conferred an increased risk of CAD under all the genetic contrasts: CC versus TT, CC + TC versus TT, CC versus TT+TC, C versus T, and TC versus TT (OR=2.13, 95% CI=1.16-3.91; OR=1.80, 95% CI=1.50-2.17; OR=1.91, 95% CI=1.04-3.51; OR=1.72, 95% CI=1.45-2.04; OR=1.78, 95% CI=1.47-2.15). Conclusions: ApoM rs805296 polymorphism may be a risk factor for developing CAD.
4,248
257
[ 416, 102, 121, 82, 224, 78, 60 ]
14
[ "versus", "studies", "apom", "tt", "versus tt", "cad", "polymorphism", "cc", "rs805296", "study" ]
[ "cardiac atherosclerosis significant", "cardiovascular diseases", "factors promoting atherosclerosis", "apolipoprotein gene apom", "genetic polymorphisms implicated" ]
[CONTENT] Apolipoproteins | Coronary Artery Disease | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] Apolipoproteins | Coronary Artery Disease | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] Apolipoproteins | Coronary Artery Disease | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] Apolipoproteins | Coronary Artery Disease | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] Apolipoproteins | Coronary Artery Disease | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] Apolipoproteins | Coronary Artery Disease | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] Apolipoproteins | Apolipoproteins M | Case-Control Studies | China | Coronary Artery Disease | Gene Frequency | Genetic Predisposition to Disease | Genotype | Humans | Lipocalins | Odds Ratio | Polymorphism, Single Nucleotide | Promoter Regions, Genetic | Risk Factors | Triglycerides [SUMMARY]
[CONTENT] Apolipoproteins | Apolipoproteins M | Case-Control Studies | China | Coronary Artery Disease | Gene Frequency | Genetic Predisposition to Disease | Genotype | Humans | Lipocalins | Odds Ratio | Polymorphism, Single Nucleotide | Promoter Regions, Genetic | Risk Factors | Triglycerides [SUMMARY]
[CONTENT] Apolipoproteins | Apolipoproteins M | Case-Control Studies | China | Coronary Artery Disease | Gene Frequency | Genetic Predisposition to Disease | Genotype | Humans | Lipocalins | Odds Ratio | Polymorphism, Single Nucleotide | Promoter Regions, Genetic | Risk Factors | Triglycerides [SUMMARY]
[CONTENT] Apolipoproteins | Apolipoproteins M | Case-Control Studies | China | Coronary Artery Disease | Gene Frequency | Genetic Predisposition to Disease | Genotype | Humans | Lipocalins | Odds Ratio | Polymorphism, Single Nucleotide | Promoter Regions, Genetic | Risk Factors | Triglycerides [SUMMARY]
[CONTENT] Apolipoproteins | Apolipoproteins M | Case-Control Studies | China | Coronary Artery Disease | Gene Frequency | Genetic Predisposition to Disease | Genotype | Humans | Lipocalins | Odds Ratio | Polymorphism, Single Nucleotide | Promoter Regions, Genetic | Risk Factors | Triglycerides [SUMMARY]
[CONTENT] Apolipoproteins | Apolipoproteins M | Case-Control Studies | China | Coronary Artery Disease | Gene Frequency | Genetic Predisposition to Disease | Genotype | Humans | Lipocalins | Odds Ratio | Polymorphism, Single Nucleotide | Promoter Regions, Genetic | Risk Factors | Triglycerides [SUMMARY]
[CONTENT] cardiac atherosclerosis significant | cardiovascular diseases | factors promoting atherosclerosis | apolipoprotein gene apom | genetic polymorphisms implicated [SUMMARY]
[CONTENT] cardiac atherosclerosis significant | cardiovascular diseases | factors promoting atherosclerosis | apolipoprotein gene apom | genetic polymorphisms implicated [SUMMARY]
[CONTENT] cardiac atherosclerosis significant | cardiovascular diseases | factors promoting atherosclerosis | apolipoprotein gene apom | genetic polymorphisms implicated [SUMMARY]
[CONTENT] cardiac atherosclerosis significant | cardiovascular diseases | factors promoting atherosclerosis | apolipoprotein gene apom | genetic polymorphisms implicated [SUMMARY]
[CONTENT] cardiac atherosclerosis significant | cardiovascular diseases | factors promoting atherosclerosis | apolipoprotein gene apom | genetic polymorphisms implicated [SUMMARY]
[CONTENT] cardiac atherosclerosis significant | cardiovascular diseases | factors promoting atherosclerosis | apolipoprotein gene apom | genetic polymorphisms implicated [SUMMARY]
[CONTENT] versus | studies | apom | tt | versus tt | cad | polymorphism | cc | rs805296 | study [SUMMARY]
[CONTENT] versus | studies | apom | tt | versus tt | cad | polymorphism | cc | rs805296 | study [SUMMARY]
[CONTENT] versus | studies | apom | tt | versus tt | cad | polymorphism | cc | rs805296 | study [SUMMARY]
[CONTENT] versus | studies | apom | tt | versus tt | cad | polymorphism | cc | rs805296 | study [SUMMARY]
[CONTENT] versus | studies | apom | tt | versus tt | cad | polymorphism | cc | rs805296 | study [SUMMARY]
[CONTENT] versus | studies | apom | tt | versus tt | cad | polymorphism | cc | rs805296 | study [SUMMARY]
[CONTENT] exclusion | studies | study | texts duplicates apom | primary features | process study selection | process study | studies presented table | duplicates | studies quantitative [SUMMARY]
[CONTENT] results | study | analysis | single study substantially | results observed correspondingly radical | results observed | results meta analysis outcomes | results meta analysis | results meta | sequence [SUMMARY]
[CONTENT] versus | versus tt | tt | model | 95 | cc | tc | 95 ci | ci | cc versus [SUMMARY]
[CONTENT] risk cad | risk | cad | rs805296 | results revealed significant correlation | significant correlation apom rs805296 | significant correlation | revealed | revealed significant | revealed significant correlation [SUMMARY]
[CONTENT] versus | versus tt | tt | studies | apom | cad | model | cc | tc | control [SUMMARY]
[CONTENT] versus | versus tt | tt | studies | apom | cad | model | cc | tc | control [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] 7 | PubMed | EMBASE | Chinese National Knowledge Infrastructure ||| ApoM | CAD | 95% | 95% ||| Hardy-Weinberg [SUMMARY]
[CONTENT] CAD | CC | TT | CC | TT | CC | TT+TC | 95% | OR=1.80 | 95% | CI=1.50 | 2.17 | OR=1.91 | 95% | CI=1.04-3.51 | OR=1.72 | 95% | OR=1.78 | 95% | CI=1.47 [SUMMARY]
[CONTENT] CAD [SUMMARY]
[CONTENT] ||| 7 | PubMed | EMBASE | Chinese National Knowledge Infrastructure ||| ApoM | CAD | 95% | 95% ||| Hardy-Weinberg ||| ||| CAD | CC | TT | CC | TT | CC | TT+TC | 95% | OR=1.80 | 95% | CI=1.50 | 2.17 | OR=1.91 | 95% | CI=1.04-3.51 | OR=1.72 | 95% | OR=1.78 | 95% | CI=1.47 ||| CAD [SUMMARY]
[CONTENT] ||| 7 | PubMed | EMBASE | Chinese National Knowledge Infrastructure ||| ApoM | CAD | 95% | 95% ||| Hardy-Weinberg ||| ||| CAD | CC | TT | CC | TT | CC | TT+TC | 95% | OR=1.80 | 95% | CI=1.50 | 2.17 | OR=1.91 | 95% | CI=1.04-3.51 | OR=1.72 | 95% | OR=1.78 | 95% | CI=1.47 ||| CAD [SUMMARY]
High serum exosomal long non-coding RNA DANCR expression confers poor prognosis in patients with breast cancer.
35150011
Exosomal long non-coding RNAs (lncRNAs) serve as excellent candidate biomarkers for clinical applications. The expression of differentiation antagonizing non-protein coding RNA (DANCR) has been shown to be decreased in breast cancer (BC) tissues and cell lines. However, the clinical value of circulating exosomal DANCR in BC has not been explored.
BACKGROUND
A total of 120 BC patients, 70 benign breast disease (BBD) patients, and 105 healthy women were recruited in this study. Total RNA was extracted from serum samples, and the level of serum exosomal lncRNA DANCR was evaluated by quantitative real-time reverse transcription-polymerase chain reaction (qRT-PCR).
METHODS
Serum exosomal lncRNA DANCR levels were significantly higher in BC patients than in BBD patients and normal controls. The diagnostic performance of serum exosomal lncRNA DANCR was good, and the combination of serum exosomal lncRNA DANCR, CA153, and CEA greatly improved the diagnostic accuracy for BC. High serum exosomal lncRNA DANCR level was associated with various clinicopathological variables including lymph node metastasis, ER status, HER2 status, and TNM stage. In addition, the BC patients in the high serum exosomal lncRNA DANCR expression group had significantly shorter 5-year overall survival time. Multivariate analysis demonstrated that serum exosomal lncRNA DANCR was an independent risk factor for BC.
RESULTS
Serum exosomal lncRNA DANCR may be a useful non-invasive biomarker for the clinical diagnosis and prognosis of BC.
CONCLUSION
[ "Biomarkers, Tumor", "Breast Neoplasms", "Female", "Humans", "Lymphatic Metastasis", "Prognosis", "RNA, Long Noncoding" ]
8906022
INTRODUCTION
Breast cancer (BC) is the most frequent cancer among women around the world. 1 , 2 In China, more than 272,400 newly diagnosed BC cases were reported in 2015, causing about 70,700 deaths. BC alone accounts for about 15% of all new cancers in women. 3 , 4 Despite recent advances in surgical techniques and adjuvant chemotherapy, the clinical outcome of BC patients remains unfavorable due to recurrence and metastasis. 5 Early diagnosis and treatment significantly improve the prognosis of this malignancy. Cancer antigen 153 (CA153) and carcinoembryonic antigen (CEA) are the most commonly used biomarkers for BC. However, the diagnostic and prognostic accuracy of CA153 and CEA is poor. Therefore, it is urgent to identify novel, reliable biomarkers for BC diagnosis and prognosis evaluation. Long non-coding RNAs (lncRNAs) are a class of non-coding RNAs longer than 200 nucleotides. 6 , 7 Dysregulation of lncRNAs has been shown to play a critical role in regulating the initiation and progression of BC. 8 , 9 For instance, OLBC15 was highly expressed in triple-negative BC, and OLBC15 promoted BC tumorigenesis through destabilizing ZNF326. 10 Exosomes are 30–100 nm nanovesicles containing many molecules, such as lncRNAs, proteins, DNA, RNA, and miRNAs. 11 , 12 , 13 Exosomal lncRNAs are correlated with cancer tumorigenesis and progression and can be used as effective biomarkers in various types of cancer, including BC. For example, compared to benign breast disease (BBD) patients and healthy controls, serum exosomal H19 was significantly increased in BC patients, and high serum exosomal H19 expression was strongly associated with worse clinical variables. 14 Similarly, patients with BC exhibited higher serum exosomal HOTAIR levels, and overexpression of serum exosomal HOTAIR was correlated with poorer prognosis. 15 LncRNA differentiation antagonizing non-protein coding RNA (DANCR or ANCR), located on human chromosome 4, has previously been found to be involved in the initiation and progression of BC. 16 , 17 However, the diagnostic and prognostic value of circulating exosomal DANCR in BC has not yet been explored. Thus, this study aimed to detect serum exosomal lncRNA DANCR levels in BC patients, BBD patients, and normal controls and to analyze its role as a potential novel biomarker for BC diagnosis and prognosis.
null
null
RESULTS
Serum exosomal lncRNA DANCR was dramatically increased in BC patients: qRT-PCR was used to detect serum exosomal lncRNA DANCR levels in all participants. Serum exosomal lncRNA DANCR levels were significantly higher in BC patients than in BBD subjects and controls (p < 0.001, Figure 1A). In addition, BC patients with positive lymph node metastasis (p = 0.005, Figure 1B), negative ER status (p = 0.012, Figure 1C), negative HER2 status (p < 0.001, Figure 1D), and advanced TNM stage (p < 0.001, Figure 1E) exhibited higher serum exosomal lncRNA DANCR levels compared with their respective controls. Figure 1: Serum exosomal lncRNA DANCR levels were significantly higher in BC patients (A); increased serum exosomal lncRNA DANCR expression occurred more frequently in BC patients with positive lymph node metastasis (B), negative ER status (C), negative HER2 status (D), and advanced TNM stage (E). Diagnostic accuracy of serum exosomal lncRNA DANCR for BC: By ROC analysis, Figure 2A demonstrated that the AUC for serum exosomal lncRNA DANCR was 0.880, with a specificity of 82.9% and a sensitivity of 83.3%. CA153 and CEA yielded AUCs of 0.799 (specificity: 76.0%; sensitivity: 68.3%) and 0.784 (specificity: 83.8%; sensitivity: 72.5%), respectively, for discriminating BC cases from controls (Figure 2B,C). More importantly, the combination of serum exosomal lncRNA DANCR, CA153, and CEA presented a further improvement, with an AUC of 0.954 (specificity: 91.4%; sensitivity: 90.8%) (Figure 2D; Table 2). Figure 2: Receiver operating characteristic (ROC) curves of single markers and the combination of three markers, with the optimum cut-off values, in differentiating breast cancer (BC) from controls. Table 2: The diagnostic value of all individual and combined biomarkers for BC. Abbreviations: AUC, area under the curve; BC, breast cancer; CA153, cancer antigen 153; CEA, carcinoembryonic antigen; DANCR, differentiation antagonizing non-protein coding RNA.
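This kind of single-marker versus combined-marker ROC comparison is typically done by feeding the markers into a logistic regression and scoring the fitted probabilities. Here is a minimal scikit-learn sketch on synthetic data; the arrays are hypothetical stand-ins for DANCR, CA153, and CEA, and the in-sample AUC it prints is optimistic relative to the cross-validated estimate a real analysis would report:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                        # 1 = BC, 0 = control (synthetic labels)
X = rng.normal(size=(n, 3)) + 0.8 * y[:, None]   # columns stand in for DANCR, CA153, CEA

# Single-marker AUCs
for j, name in enumerate(["DANCR", "CA153", "CEA"]):
    print(name, round(roc_auc_score(y, X[:, j]), 3))

# Combined marker: fitted probabilities from a logistic model over all three columns
combo = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
print("combined", round(roc_auc_score(y, combo), 3))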
Increased serum exosomal lncRNA DANCR expression was correlated with clinical variables in BC patients: The median serum exosomal lncRNA DANCR expression was used as the cut-off value, and all 120 BC patients were classified into two groups: a high serum exosomal DANCR expression group (n = 60) and a low serum exosomal DANCR expression group (n = 60). As illustrated in Table 1, high serum exosomal lncRNA DANCR expression occurred more frequently in BC cases with positive lymph node metastasis (p = 0.0080), negative ER status (p = 0.0150), negative HER2 status (p = 0.0009), and advanced TNM stage (p < 0.0001). However, there was no significant correlation between serum exosomal lncRNA DANCR expression and age, tumor size, or PR status (all p > 0.05). Alteration of serum exosomal lncRNA DANCR levels following treatment: One month after surgical resection, paired blood samples were collected from all BC subjects. qRT-PCR was used to measure serum exosomal lncRNA DANCR expression, and the levels were markedly reduced following surgery (p < 0.001, Figure 3). Figure 3: Compared to preoperative blood samples, serum exosomal lncRNA DANCR levels were markedly downregulated in postoperative samples. Serum exosomal lncRNA DANCR expression was a prognostic biomarker for BC: The Kaplan-Meier curve for OS according to serum exosomal lncRNA DANCR expression is presented in Figure 4. Compared to BC patients with low serum exosomal lncRNA DANCR expression, patients with high serum exosomal lncRNA DANCR expression survived significantly shorter (p = 0.0132).
Furthermore, multivariate analysis indicated that lymph node metastasis (HR = 3.12, 95% CI = 1.65–5.78, p = 0.018), ER status (HR = 2.75, 95% CI = 1.38–5.26, p = 0.021), HER2 status (HR = 3.51, 95% CI = 1.92–6.24, p = 0.014), TNM stage (HR = 4.75, 95% CI = 2.74–8.32, p < 0.001), and serum exosomal lncRNA DANCR expression (HR = 3.86, 95% CI = 2.16–6.63, p = 0.009) were significant independent prognostic markers for shorter OS (Table 3). Figure 4: Kaplan-Meier analysis of overall survival (OS) according to serum exosomal lncRNA DANCR expression. Table 3: Multivariate analysis of overall survival in 120 BC patients. Abbreviations: BC, breast cancer; DANCR, differentiation antagonizing non-protein coding RNA.
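The survival analyses reported above (Kaplan-Meier with a log-rank test, then a multivariate Cox model) map directly onto the lifelines package. A minimal sketch on synthetic data follows; the data frame is hypothetical and not the study cohort:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 120
high = rng.integers(0, 2, n)                           # 1 = high serum exosomal DANCR (synthetic)
months = rng.exponential(45 - 20 * high).clip(1, 60)   # follow-up capped at 5 years
df = pd.DataFrame({"months": months,
                   "died": (months < 60).astype(int),  # capped times treated as censored
                   "high_dancr": high,
                   "tnm_advanced": rng.integers(0, 2, n)})

# Kaplan-Meier group comparison via the log-rank test
hi, lo = df[df.high_dancr == 1], df[df.high_dancr == 0]
print(logrank_test(hi.months, lo.months, hi.died, lo.died).p_value)

# Multivariate Cox model; the exp(coef) column gives hazard ratios with 95% CIs
cph = CoxPHFitter().fit(df, duration_col="months", event_col="died")
cph.print_summary()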
CONCLUSIONS
Taken together, this study demonstrated that serum exosomal lncRNA DANCR expression was markedly elevated in patients with BC. Upregulation of serum exosomal lncRNA DANCR was closely associated with worse clinical parameters and shorter survival. Thus, serum exosomal lncRNA DANCR may serve as a promising prognostic marker for BC. Meanwhile, given the relatively small sample size of this study, analyses of larger cohorts are required to confirm the clinical role of serum exosomal lncRNA DANCR in BC.
[ "INTRODUCTION", "Patients and samples", "Exosome isolation", "RNA extraction and quantitative real‐time reverse transcription‐polymerase chain reaction (qRT‐PCR)", "Measurement of CA153 and CEA in serum", "Statistical analysis", "Serum exosomal lncRNA DANCR was dramatically increased in BC patients", "Diagnostic accuracy of serum exosomal lncRNA DNACR for BC", "Increased serum exosomal lncRNA DANCR expression was correlated with clinical variables in BC patients", "Alteration of serum exosomal lncRNA DANCR levels following treatments", "Serum exosomal lncRNA DANCR expression was a prognostic biomarker for BC" ]
[ "Breast cancer (BC) is the most frequent cancer among women around the world.\n1\n, \n2\n In China, more than 272,400 newly diagnosed BC were reported, causing about 70,700 deaths in 2015. BC alone accounts for about 15% of all new cancers in women.\n3\n, \n4\n Despite recent advances in surgical techniques and adjuvant chemotherapy, the clinical outcome of BC patients remains unfavorable due to the recurrence and metastasis.\n5\n Early diagnosis and treatment significantly improve the prognosis of this malignancy. Cancer antigen 153 (CA153) and carcinoembryonic antigen (CEA) are the most commonly used biomarkers for BC. However, diagnostic and prognostic accuracy of CA153 and CEA is poor. Therefore, it is urgent to identify novel, reliable biomarkers for GC diagnosis and prognosis evaluation.\nLong non‐coding RNAs (lncRNAs) are a class of non‐coding RNAs with a length of longer than 200 nucleotides.\n6\n, \n7\n Dysregulation of lncRNAs has been shown to play a critical role in regulating the initiation and progression of BC.\n8\n, \n9\n For instance, OLBC15 was highly expressed in triple‐negative BC, and OLBC15 promoted the BC tumorigenesis through destabilizing ZNF326.\n10\n Exosomes are 30–100 nm nanovesicles containing many molecules, such as lncRNAs, proteins, DNA, RNA, and miRNAs.\n11\n, \n12\n, \n13\n Exosomal lncRNAs are correlated with cancer tumorigenesis and progression and can be used as effective biomarkers in various types of cancer including BC. For example, compared to benign breast disease (BBD) and healthy controls, serum exosomal H19 was significantly increased in BC patients, and high serum exosomal H19 expression was strongly associated with worse clinical variables.\n14\n Similarly, patients with BC exhibited higher serum exosomal HOTAIR levels, and overexpression of serum exosomal HOTAIR was correlated with poorer prognosis.\n15\n\n\nLncRNA differentiation antagonizing non‐protein coding RNA (DANCR or ANCR), located on human chromosome 4, has been previously found to be involved in the initiation and the progression of BC.\n16\n, \n17\n However, the diagnostic and prognostic value of circulating exosomal DANCR in BC has not yet been explored. Thus, this study aimed to detect the serum exosomal lncRNA DANCR levels in BC patients, BBD patients, and normal controls and analyzed its role as a potential novel biomarker for BC diagnosis and prognosis.", "The current study was approved by the Ethics Committee of Taizhou Municipal Hospital. Written informed consent was obtained from all patients and healthy subjects. In this study, blood samples were obtained from 120 BC patients, 70 BBD patients, and 105 healthy volunteers. All recruited BC patients had not received any chemotherapy or radiotherapy prior to the sampling. We also collected the blood samples from all BC cases one month after their surgery. The blood samples were collected in anti‐coagulative tubes and centrifuged at 1600 g for 10 min, followed by centrifugation at 16,000 g for 10 min at 4°C. The supernatant was stored at −80°C until further use. The clinical characteristics of all 120 patients with BC are summarized in Table 1.\nAssociation of serum exosomal lncRNA DANCR with clinicopathological characteristics", "Total exosomes were isolated from serum with the total exosome isolation reagent (Invitrogen) according to the manufacturer's protocol. The serum was thawed on ice at 25°C. Then, possible residual cell debris were removed by a centrifugation step at 2000 g for 30 min. 
Then, serum supernatant was mixed with the exosome isolation reagent, followed by incubation at 4°C for 30 min and centrifugation at 10,000 g for 10 min. The supernatant was discarded, and the exosomal pellet was resuspended in PBS for RNA extraction.", "Total exosomal RNA was extracted from serum samples with an miRNeasy Serum/Plasma Kit (QIAGEN). The RNA was quantified on a NanoDrop ND-1000 Spectrophotometer (Thermo Scientific) and immediately reverse transcribed into cDNA. Before RNA extraction, 25 fmol/ml of synthesized cel-miR-39 (Applied Biosystems) was added as an exogenous spike-in control. Then, qRT-PCR was performed with the SYBR PrimeScript miRNA RT-PCR kit (Takara Biotechnology Co. Ltd) on the 7500 Real-Time PCR system (Applied Biosystems). The relative levels of serum exosomal lncRNA DANCR were expressed using the 2–ΔΔCt method.", "The levels of CA153 and CEA were measured using electrochemiluminescence assays with available commercial kits.", "The Mann-Whitney U and Kruskal-Wallis tests were used to compare differences in serum exosomal lncRNA DANCR expression between two or more groups. The chi-square test was performed to evaluate the associations between clinicopathological variables and serum exosomal lncRNA DANCR expression. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were applied to assess the diagnostic power of serum exosomal lncRNA DANCR, CA153, and CEA in BC. Kaplan-Meier analysis plus the log-rank test was used for overall survival (OS) analysis. Multivariate Cox proportional hazards models were used to analyze the independent risk indicators for BC patients. All statistical analyses were performed using GraphPad Prism 6.0 (GraphPad Software Inc.). p < 0.05 was considered statistically significant.", "qRT-PCR was used to detect the serum exosomal lncRNA DANCR levels in all participants. Serum exosomal lncRNA DANCR levels were significantly higher in BC patients than in BBD subjects and controls (p < 0.001, Figure 1A). In addition, BC patients with positive lymph node metastasis (p = 0.005, Figure 1B), negative ER status (p = 0.012, Figure 1C), negative HER2 status (p < 0.001, Figure 1D), and advanced TNM stage (p < 0.001, Figure 1E) exhibited higher serum exosomal lncRNA DANCR levels compared with their respective controls.\nSerum exosomal lncRNA DANCR levels were significantly higher in BC patients (A). Increased serum exosomal lncRNA DANCR expression occurred more frequently in BC patients with positive lymph node metastasis (B), negative ER status (C), negative HER2 status (D), and advanced TNM stage (E)", "By ROC analysis, Figure 2A demonstrated that the AUC for serum exosomal lncRNA DANCR was 0.880, with a specificity of 82.9% and a sensitivity of 83.3%. CA153 and CEA yielded an AUC of 0.799 (specificity: 76.0%; sensitivity: 68.3%) and 0.784 (specificity: 83.8%; sensitivity: 72.5%) for discriminating BC cases from controls (Figure 2B,C). 
More importantly, the combination of serum exosomal lncRNA DANCR, CA153, and CEA presented a further improvement, with an AUC of 0.954 (specificity: 91.4%; sensitivity: 90.8%) (Figure 2D; Table 2).\nReceiver operating characteristic (ROC) curves of single markers and the combination of three markers, with the optimum cut-off values, in differentiating breast cancer (BC) from controls\nThe diagnostic value of all individual and combined biomarkers for BC\nAbbreviations: AUC, area under the curve; BC, breast cancer; CA153, cancer antigen 153; CEA, carcinoembryonic antigen; DANCR, differentiation antagonizing non-protein coding RNA.", "The median serum exosomal lncRNA DANCR expression was used as the cut-off value, and all 120 BC patients were classified into two groups: a high serum exosomal DANCR expression group (n = 60) and a low serum exosomal DANCR expression group (n = 60). As illustrated in Table 1, high serum exosomal lncRNA DANCR expression occurred more frequently in BC cases with positive lymph node metastasis (p = 0.0080), negative ER status (p = 0.0150), negative HER2 status (p = 0.0009), and advanced TNM stage (p < 0.0001). However, there was no significant correlation between serum exosomal lncRNA DANCR expression and age, tumor size, or PR status (all p > 0.05).", "One month after surgical resection, paired blood samples were collected from all BC subjects. qRT-PCR was used to measure the serum exosomal lncRNA DANCR expression level, and we found that serum exosomal lncRNA DANCR levels were markedly reduced following surgery (p < 0.001, Figure 3).\nCompared to preoperative blood samples, serum exosomal long non-coding RNA (lncRNA) differentiation antagonizing non-protein coding RNA (DANCR) levels were markedly downregulated in postoperative samples", "The Kaplan-Meier curve for OS according to serum exosomal lncRNA DANCR expression is presented in Figure 4. Compared to BC patients with low serum exosomal lncRNA DANCR expression, patients with high serum exosomal lncRNA DANCR expression survived significantly shorter (p = 0.0132). Furthermore, multivariate analysis indicated that lymph node metastasis (HR = 3.12, 95% CI = 1.65–5.78, p = 0.018), ER status (HR = 2.75, 95% CI = 1.38–5.26, p = 0.021), HER2 status (HR = 3.51, 95% CI = 1.92–6.24, p = 0.014), TNM stage (HR = 4.75, 95% CI = 2.74–8.32, p < 0.001), and serum exosomal lncRNA DANCR expression (HR = 3.86, 95% CI = 2.16–6.63, p = 0.009) were significant independent prognostic markers for shorter OS (Table 3).\nKaplan-Meier analysis of overall survival (OS) according to serum exosomal lncRNA DANCR expression\nMultivariate analysis of overall survival in 120 BC patients\nAbbreviations: BC, breast cancer; DANCR, differentiation antagonizing non-protein coding RNA." ]
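The 2–ΔΔCt quantification named in the qRT-PCR description above reduces to one line of arithmetic. A minimal Python sketch with hypothetical Ct values, taking cel-miR-39 as the spike-in reference and a healthy control as the calibrator:

def rel_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    # 2^-ddCt: dCt = Ct(target) - Ct(reference), referenced to the calibrator sample.
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2 ** -ddct

# DANCR Ct 27.1 vs spike-in 21.0 in a patient; 29.4 vs 21.2 in a control calibrator:
print(rel_expression(27.1, 21.0, 29.4, 21.2))   # about 4.3-fold higher than the calibrator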
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patients and samples", "Exosome isolation", "RNA extraction and quantitative real‐time reverse transcription‐polymerase chain reaction (qRT‐PCR)", "Measurement of CA153 and CEA in serum", "Statistical analysis", "RESULTS", "Serum exosomal lncRNA DANCR was dramatically increased in BC patients", "Diagnostic accuracy of serum exosomal lncRNA DNACR for BC", "Increased serum exosomal lncRNA DANCR expression was correlated with clinical variables in BC patients", "Alteration of serum exosomal lncRNA DANCR levels following treatments", "Serum exosomal lncRNA DANCR expression was a prognostic biomarker for BC", "DISCUSSION", "CONCLUSIONS", "CONFLICT OF INTERESTS" ]
[ "Breast cancer (BC) is the most frequent cancer among women around the world.\n1\n, \n2\n In China, more than 272,400 newly diagnosed BC were reported, causing about 70,700 deaths in 2015. BC alone accounts for about 15% of all new cancers in women.\n3\n, \n4\n Despite recent advances in surgical techniques and adjuvant chemotherapy, the clinical outcome of BC patients remains unfavorable due to the recurrence and metastasis.\n5\n Early diagnosis and treatment significantly improve the prognosis of this malignancy. Cancer antigen 153 (CA153) and carcinoembryonic antigen (CEA) are the most commonly used biomarkers for BC. However, diagnostic and prognostic accuracy of CA153 and CEA is poor. Therefore, it is urgent to identify novel, reliable biomarkers for GC diagnosis and prognosis evaluation.\nLong non‐coding RNAs (lncRNAs) are a class of non‐coding RNAs with a length of longer than 200 nucleotides.\n6\n, \n7\n Dysregulation of lncRNAs has been shown to play a critical role in regulating the initiation and progression of BC.\n8\n, \n9\n For instance, OLBC15 was highly expressed in triple‐negative BC, and OLBC15 promoted the BC tumorigenesis through destabilizing ZNF326.\n10\n Exosomes are 30–100 nm nanovesicles containing many molecules, such as lncRNAs, proteins, DNA, RNA, and miRNAs.\n11\n, \n12\n, \n13\n Exosomal lncRNAs are correlated with cancer tumorigenesis and progression and can be used as effective biomarkers in various types of cancer including BC. For example, compared to benign breast disease (BBD) and healthy controls, serum exosomal H19 was significantly increased in BC patients, and high serum exosomal H19 expression was strongly associated with worse clinical variables.\n14\n Similarly, patients with BC exhibited higher serum exosomal HOTAIR levels, and overexpression of serum exosomal HOTAIR was correlated with poorer prognosis.\n15\n\n\nLncRNA differentiation antagonizing non‐protein coding RNA (DANCR or ANCR), located on human chromosome 4, has been previously found to be involved in the initiation and the progression of BC.\n16\n, \n17\n However, the diagnostic and prognostic value of circulating exosomal DANCR in BC has not yet been explored. Thus, this study aimed to detect the serum exosomal lncRNA DANCR levels in BC patients, BBD patients, and normal controls and analyzed its role as a potential novel biomarker for BC diagnosis and prognosis.", "Patients and samples The current study was approved by the Ethics Committee of Taizhou Municipal Hospital. Written informed consent was obtained from all patients and healthy subjects. In this study, blood samples were obtained from 120 BC patients, 70 BBD patients, and 105 healthy volunteers. All recruited BC patients had not received any chemotherapy or radiotherapy prior to the sampling. We also collected the blood samples from all BC cases one month after their surgery. The blood samples were collected in anti‐coagulative tubes and centrifuged at 1600 g for 10 min, followed by centrifugation at 16,000 g for 10 min at 4°C. The supernatant was stored at −80°C until further use. The clinical characteristics of all 120 patients with BC are summarized in Table 1.\nAssociation of serum exosomal lncRNA DANCR with clinicopathological characteristics\nThe current study was approved by the Ethics Committee of Taizhou Municipal Hospital. Written informed consent was obtained from all patients and healthy subjects. In this study, blood samples were obtained from 120 BC patients, 70 BBD patients, and 105 healthy volunteers. 
\nExosome isolation Total exosomes were isolated from serum with the total exosome isolation reagent (Invitrogen) according to the manufacturer's protocol. The serum was thawed on ice at 25°C. Then, possible residual cell debris were removed by a centrifugation step at 2000 g for 30 min. Then, serum supernatant was mixed with the exosome isolation reagent, followed by incubation at 4°C for 30 min and centrifugation at 10,000 g for 10 min. The supernatant was discarded, and the exosomal pellet was resuspended in PBS for RNA extraction.\nRNA extraction and quantitative real-time reverse transcription-polymerase chain reaction (qRT-PCR) Total exosomal RNA was extracted from serum samples with an miRNeasy Serum/Plasma Kit (QIAGEN). The RNA was quantified on a NanoDrop ND-1000 Spectrophotometer (Thermo Scientific) and immediately reverse transcribed into cDNA. Before RNA extraction, 25 fmol/ml of synthesized cel-miR-39 (Applied Biosystems) was added as an exogenous spike-in control. Then, qRT-PCR was performed with the SYBR PrimeScript miRNA RT-PCR kit (Takara Biotechnology Co. Ltd) on the 7500 Real-Time PCR system (Applied Biosystems). The relative levels of serum exosomal lncRNA DANCR were expressed using the 2–ΔΔCt method.\nMeasurement of CA153 and CEA in serum The levels of CA153 and CEA were measured using electrochemiluminescence assays with available commercial kits.\nStatistical analysis The Mann-Whitney U and Kruskal-Wallis tests were used to compare differences in serum exosomal lncRNA DANCR expression between two or more groups. The chi-square test was performed to evaluate the associations between clinicopathological variables and serum exosomal lncRNA DANCR expression. 
Receiver operating characteristic (ROC) curves and the area under the curves (AUC) were applied to access diagnostic power of serum exosomal lncRNA DANCR, CA153, and CEA in BC. Kaplan‐Meier analysis plus log‐rank test was used for overall survival (OS) analysis. Multivariate Cox proportional hazard models were used to analyze the independent risk indicators for BC patients. All statistical analyses were performed using GraphPad Prism 6.0 (GraphPad Software Inc.). p < 0.05 was considered statistically significant.\nThe Mann‐Whitney U and Kruskal‐Wallis tests were used to compare the difference regarding the serum exosomal lncRNA DANCR expression between two or more groups. The chi‐square test was performed to evaluate the associations between clinicopathological variables and serum exosomal lncRNA DANCR expression. Receiver operating characteristic (ROC) curves and the area under the curves (AUC) were applied to access diagnostic power of serum exosomal lncRNA DANCR, CA153, and CEA in BC. Kaplan‐Meier analysis plus log‐rank test was used for overall survival (OS) analysis. Multivariate Cox proportional hazard models were used to analyze the independent risk indicators for BC patients. All statistical analyses were performed using GraphPad Prism 6.0 (GraphPad Software Inc.). p < 0.05 was considered statistically significant.", "The current study was approved by the Ethics Committee of Taizhou Municipal Hospital. Written informed consent was obtained from all patients and healthy subjects. In this study, blood samples were obtained from 120 BC patients, 70 BBD patients, and 105 healthy volunteers. All recruited BC patients had not received any chemotherapy or radiotherapy prior to the sampling. We also collected the blood samples from all BC cases one month after their surgery. The blood samples were collected in anti‐coagulative tubes and centrifuged at 1600 g for 10 min, followed by centrifugation at 16,000 g for 10 min at 4°C. The supernatant was stored at −80°C until further use. The clinical characteristics of all 120 patients with BC are summarized in Table 1.\nAssociation of serum exosomal lncRNA DANCR with clinicopathological characteristics", "Total exosomes were isolated from serum with the total exosome isolation reagent (Invitrogen) according to the manufacturer's protocol. The serum was thawed on ice at 25°C. Then, possible residual cell debris were removed by a centrifugation step at 2000 g for 30 min. Then, serum supernatant was mixed with the exosome isolation reagent, followed by incubation for at 4°C 30 min and centrifugation at 10,000 g for 10 min. The supernatant was discarded, and the exosomal pellet was resuspended in PBS for RNA extraction.", "Total exosomal RNA was extracted from serum samples with an miRNeasy Serum/Plasma Kit (QIAGEN). The RNA was quantified on a NanoDrop ND‐1000 Spectrophotometer (Thermo Scientific) and immediately reverse transcribed into cDNA. Before RNA extraction, 25 fmol/ml of synthesized cel‐miR‐39 (Applied Biosystems) was added as an exogenous spike in control. Then, qRT‐PCR was performed with SYBR PrimeScript miRNA RT‐PCR kit (Takara Biotechnology Co. Ltd) on the 7500 Real‐Time PCR systems (Applied Biosystems). 
The relative levels of serum exosomal lncRNA DANCR were expressed using the 2–ΔΔCt method.", "The levels of CA153 and CEA were measured using electrochemiluminescence assays with available commercial kits.", "The Mann‐Whitney U and Kruskal‐Wallis tests were used to compare the difference regarding the serum exosomal lncRNA DANCR expression between two or more groups. The chi‐square test was performed to evaluate the associations between clinicopathological variables and serum exosomal lncRNA DANCR expression. Receiver operating characteristic (ROC) curves and the area under the curves (AUC) were applied to access diagnostic power of serum exosomal lncRNA DANCR, CA153, and CEA in BC. Kaplan‐Meier analysis plus log‐rank test was used for overall survival (OS) analysis. Multivariate Cox proportional hazard models were used to analyze the independent risk indicators for BC patients. All statistical analyses were performed using GraphPad Prism 6.0 (GraphPad Software Inc.). p < 0.05 was considered statistically significant.", "Serum exosomal lncRNA DANCR was dramatically increased in BC patients qRT‐PCR was used to detect the serum exosomal lncRNA DANCR levels in all participants. Serum exosomal lncRNA DANCR levels were significantly higher in BC patients than in BBD subjects and controls (p < 0.001, Figure 1A). In addition, BC patients with positive lymph node metastasis (p = 0.005, Figure 1B), negative ER status (p = 0.012, Figure 1C), negative HER2 status (p < 0.001, Figure 1D), and advanced TNM stage (p < 0.001, Figure 1E) exhibited higher serum exosomal lncRNA DANCR levels compared with their respective controls.\nSerum exosomal lncRNA DANCR levels were significantly higher in BC patients (A). Increased serum exosomal lncRNA DANCR expression occurred more frequently in BC patients with positive lymph node metastasis (B), negative ER status (C), negative HER2 status (D), and advanced TNM stage (E)\nqRT‐PCR was used to detect the serum exosomal lncRNA DANCR levels in all participants. Serum exosomal lncRNA DANCR levels were significantly higher in BC patients than in BBD subjects and controls (p < 0.001, Figure 1A). In addition, BC patients with positive lymph node metastasis (p = 0.005, Figure 1B), negative ER status (p = 0.012, Figure 1C), negative HER2 status (p < 0.001, Figure 1D), and advanced TNM stage (p < 0.001, Figure 1E) exhibited higher serum exosomal lncRNA DANCR levels compared with their respective controls.\nSerum exosomal lncRNA DANCR levels were significantly higher in BC patients (A). Increased serum exosomal lncRNA DANCR expression occurred more frequently in BC patients with positive lymph node metastasis (B), negative ER status (C), negative HER2 status (D), and advanced TNM stage (E)\nDiagnostic accuracy of serum exosomal lncRNA DNACR for BC By ROC analysis, Figure 2A demonstrated that the AUC for serum exosomal lncRNA DANCR was 0.880, with a specificity of 82.9% and a sensitivity of 83.3%. CA153 and CEA yielded an AUC of 0.799 (specificity: 76.0%; sensitivity: 68.3%) and 0.784 (specificity: 83.8%, sensitivity: 72.5%) for discriminating BC cases from controls (Figure 2B,C). 
More importantly, the combination of serum exosomal lncRNA DANCR, CA153, and CEA presented further improvement with an AUC of 0.954 (specificity: 91.4%, sensitivity: 90.8%) (Figure 2D; Table 2).\nReceiver operating characteristic (ROC) curves of single and combining three markers with the optimum cut‐off values in differentiating breast cancer (BC) from controls\nThe diagnostic value of all individual and combined biomarkers for BC\nAbbreviations: AUC, area under the curves; BC, breast cancer; CA153, Cancer antigen 153; CEA, carcinoembryonic antigen; DANCR, differentiation antagonizing non‐protein coding RNA.\nBy ROC analysis, Figure 2A demonstrated that the AUC for serum exosomal lncRNA DANCR was 0.880, with a specificity of 82.9% and a sensitivity of 83.3%. CA153 and CEA yielded an AUC of 0.799 (specificity: 76.0%; sensitivity: 68.3%) and 0.784 (specificity: 83.8%, sensitivity: 72.5%) for discriminating BC cases from controls (Figure 2B,C). More importantly, the combination of serum exosomal lncRNA DANCR, CA153, and CEA presented further improvement with an AUC of 0.954 (specificity: 91.4%, sensitivity: 90.8%) (Figure 2D; Table 2).\nReceiver operating characteristic (ROC) curves of single and combining three markers with the optimum cut‐off values in differentiating breast cancer (BC) from controls\nThe diagnostic value of all individual and combined biomarkers for BC\nAbbreviations: AUC, area under the curves; BC, breast cancer; CA153, Cancer antigen 153; CEA, carcinoembryonic antigen; DANCR, differentiation antagonizing non‐protein coding RNA.\nIncreased serum exosomal lncRNA DANCR expression was correlated with clinical variables in BC patients The median serum exosomal lncRNA DANCR expression was used as a cut‐off value, and all 120 BC patients were classified into two groups: high serum exosomal DANCR expression group (n = 60) and low serum exosomal DANCR expression group (n = 60). As illustrated in Table 1, high serum exosomal lncRNA DANCR expression occurred more frequently in BC cases with positive lymph node metastasis (p = 0.0080), negative ER status (p = 0.0150), negative HER2 status (p = 0.0009), and advanced TNM stage (p < 0.0001). However, there was no significant correlation between serum exosomal lncRNA DANCR expression and age, tumor size, and PR status (all p > 0.05).\nThe median serum exosomal lncRNA DANCR expression was used as a cut‐off value, and all 120 BC patients were classified into two groups: high serum exosomal DANCR expression group (n = 60) and low serum exosomal DANCR expression group (n = 60). As illustrated in Table 1, high serum exosomal lncRNA DANCR expression occurred more frequently in BC cases with positive lymph node metastasis (p = 0.0080), negative ER status (p = 0.0150), negative HER2 status (p = 0.0009), and advanced TNM stage (p < 0.0001). However, there was no significant correlation between serum exosomal lncRNA DANCR expression and age, tumor size, and PR status (all p > 0.05).\nAlteration of serum exosomal lncRNA DANCR levels following treatments One month after surgical resection, paired blood samples were collected from all BC subjects. 
qRT‐PCR was used to measure the serum exosomal lncRNA DANCR expression level, and we found the serum exosomal lncRNA DANCR levels were markedly reduced following surgery (p < 0.001, Figure 3).\nCompared to preoperative blood samples, serum exosomal long non‐coding RNAs (lncRNAs) differentiation antagonizing non‐protein coding RNA (DANCR) levels were markedly downregulated in postoperative samples\nOne month after surgical resection, paired blood samples were collected from all BC subjects. qRT‐PCR was used to measure the serum exosomal lncRNA DANCR expression level, and we found the serum exosomal lncRNA DANCR levels were markedly reduced following surgery (p < 0.001, Figure 3).\nCompared to preoperative blood samples, serum exosomal long non‐coding RNAs (lncRNAs) differentiation antagonizing non‐protein coding RNA (DANCR) levels were markedly downregulated in postoperative samples\nSerum exosomal lncRNA DANCR expression was a prognostic biomarker for BC The Kaplan‐Meier curve for OS regarding serum exosomal lncRNA DANCR expression is presented in Figure 4. Compared to BC patients with low serum exosomal lncRNA DANCR expression, patients with high serum exosomal lncRNA DANCR survived significantly shorter (p = 0.0132). Furthermore, multivariate analysis indicated that lymph node metastasis (HR = 3.12, 95% CI = 1.65–5.78, p = 0.018), ER status (HR = 2.75, 95% CI = 1.38–5.26, p = 0.021), HER2 status (HR = 3.51, 95% CI = 1.92–6.24, p = 0.014), TNM stage (HR = 4.75, 95% CI = 2.74–8.32, p < 0.001), and serum exosomal lncRNA DANCR expression (HR = 3.86, 95% CI = 2.16–6.63, p = 0.009) were significant independent prognostic markers for shorter OS (Table 3).\nKaplan‐Meier analysis of overall survival (OS) according to serum exosomal long non‐coding RNAs (lncRNAs) differentiation antagonizing non‐protein coding RNA (DANCR) expression\nMultivariate analysis of overall survival in 120 BC patients\nAbbreviations: BC, breast cancer; DANCR, differentiation antagonizing non‐protein coding RNA.\nThe Kaplan‐Meier curve for OS regarding serum exosomal lncRNA DANCR expression is presented in Figure 4. Compared to BC patients with low serum exosomal lncRNA DANCR expression, patients with high serum exosomal lncRNA DANCR survived significantly shorter (p = 0.0132). Furthermore, multivariate analysis indicated that lymph node metastasis (HR = 3.12, 95% CI = 1.65–5.78, p = 0.018), ER status (HR = 2.75, 95% CI = 1.38–5.26, p = 0.021), HER2 status (HR = 3.51, 95% CI = 1.92–6.24, p = 0.014), TNM stage (HR = 4.75, 95% CI = 2.74–8.32, p < 0.001), and serum exosomal lncRNA DANCR expression (HR = 3.86, 95% CI = 2.16–6.63, p = 0.009) were significant independent prognostic markers for shorter OS (Table 3).\nKaplan‐Meier analysis of overall survival (OS) according to serum exosomal long non‐coding RNAs (lncRNAs) differentiation antagonizing non‐protein coding RNA (DANCR) expression\nMultivariate analysis of overall survival in 120 BC patients\nAbbreviations: BC, breast cancer; DANCR, differentiation antagonizing non‐protein coding RNA.", "qRT‐PCR was used to detect the serum exosomal lncRNA DANCR levels in all participants. Serum exosomal lncRNA DANCR levels were significantly higher in BC patients than in BBD subjects and controls (p < 0.001, Figure 1A). 
In addition, BC patients with positive lymph node metastasis (p = 0.005, Figure 1B), negative ER status (p = 0.012, Figure 1C), negative HER2 status (p < 0.001, Figure 1D), and advanced TNM stage (p < 0.001, Figure 1E) exhibited higher serum exosomal lncRNA DANCR levels compared with their respective controls.\nSerum exosomal lncRNA DANCR levels were significantly higher in BC patients (A). Increased serum exosomal lncRNA DANCR expression occurred more frequently in BC patients with positive lymph node metastasis (B), negative ER status (C), negative HER2 status (D), and advanced TNM stage (E)", "By ROC analysis, Figure 2A demonstrated that the AUC for serum exosomal lncRNA DANCR was 0.880, with a specificity of 82.9% and a sensitivity of 83.3%. CA153 and CEA yielded an AUC of 0.799 (specificity: 76.0%; sensitivity: 68.3%) and 0.784 (specificity: 83.8%, sensitivity: 72.5%) for discriminating BC cases from controls (Figure 2B,C). More importantly, the combination of serum exosomal lncRNA DANCR, CA153, and CEA presented further improvement with an AUC of 0.954 (specificity: 91.4%, sensitivity: 90.8%) (Figure 2D; Table 2).\nReceiver operating characteristic (ROC) curves of single and combining three markers with the optimum cut‐off values in differentiating breast cancer (BC) from controls\nThe diagnostic value of all individual and combined biomarkers for BC\nAbbreviations: AUC, area under the curves; BC, breast cancer; CA153, Cancer antigen 153; CEA, carcinoembryonic antigen; DANCR, differentiation antagonizing non‐protein coding RNA.", "The median serum exosomal lncRNA DANCR expression was used as a cut‐off value, and all 120 BC patients were classified into two groups: high serum exosomal DANCR expression group (n = 60) and low serum exosomal DANCR expression group (n = 60). As illustrated in Table 1, high serum exosomal lncRNA DANCR expression occurred more frequently in BC cases with positive lymph node metastasis (p = 0.0080), negative ER status (p = 0.0150), negative HER2 status (p = 0.0009), and advanced TNM stage (p < 0.0001). However, there was no significant correlation between serum exosomal lncRNA DANCR expression and age, tumor size, and PR status (all p > 0.05).", "One month after surgical resection, paired blood samples were collected from all BC subjects. qRT‐PCR was used to measure the serum exosomal lncRNA DANCR expression level, and we found the serum exosomal lncRNA DANCR levels were markedly reduced following surgery (p < 0.001, Figure 3).\nCompared to preoperative blood samples, serum exosomal long non‐coding RNAs (lncRNAs) differentiation antagonizing non‐protein coding RNA (DANCR) levels were markedly downregulated in postoperative samples", "The Kaplan‐Meier curve for OS regarding serum exosomal lncRNA DANCR expression is presented in Figure 4. Compared to BC patients with low serum exosomal lncRNA DANCR expression, patients with high serum exosomal lncRNA DANCR survived significantly shorter (p = 0.0132). 
Furthermore, multivariate analysis indicated that lymph node metastasis (HR = 3.12, 95% CI = 1.65–5.78, p = 0.018), ER status (HR = 2.75, 95% CI = 1.38–5.26, p = 0.021), HER2 status (HR = 3.51, 95% CI = 1.92–6.24, p = 0.014), TNM stage (HR = 4.75, 95% CI = 2.74–8.32, p < 0.001), and serum exosomal lncRNA DANCR expression (HR = 3.86, 95% CI = 2.16–6.63, p = 0.009) were significant independent prognostic markers for shorter OS (Table 3).\nKaplan‐Meier analysis of overall survival (OS) according to serum exosomal long non‐coding RNAs (lncRNAs) differentiation antagonizing non‐protein coding RNA (DANCR) expression\nMultivariate analysis of overall survival in 120 BC patients\nAbbreviations: BC, breast cancer; DANCR, differentiation antagonizing non‐protein coding RNA.", "Dysregulation of lncRNAs has been demonstrated to play major roles in breast cancer progression.\n18\n This study enrolled a total of 120 BC patients, 70 BBD patients, and 105 healthy women. Compared to BBD patients and controls, a significantly higher serum exosomal lncRNA DANCR level was observed in BC patients. High serum exosomal lncRNA DANCR expression is strongly associated with aggressive clinical variables. In addition, serum exosomal lncRNA DANCR had a relatively high diagnostic performance than CA153 and CEA. The combination of serum exosomal lncRNA DANCR, CA153, and CEA significantly improved the diagnostic accuracy. Moreover, serum exosomal lncRNA DANCR levels were markedly decreased 1 month after surgery. Furthermore, high serum exosomal lncRNA DANCR predicted unfavorable OS in BC patients and was an independent prognostic indicator for BC. The results have demonstrated that serum exosomal lncRNA DANCR is a promising and robust biomarker for BC.\nOur findings were in line with several previous studies. Zhang et al. showed that lncRNA DANCR was significantly upregulated both in BC tissues and cell lines. Overexpression of DANCR stimulated epithelial‐mesenchymal transition (EMT), cancer stemness, inflammation, and vice versa. The oncogenic activities of DANCR were reversed by EZH2 inhibition or SOCS3 upregulation.\n16\n Similarly, DANCR was significantly higher in BC tissues than in paired normal tissues. DANCR upregulation not only promoted tumorigenicity in xenograft animal but also greatly enhanced cell proliferation, invasion, and migration in vitro through regulating miR‐216a‐5p.\n17\n\n\nBesides BC, the role of lncRNA DANCR was also reported in different types of cancer. LncRNA DANCR was increased both in tumor tissues and serum samples of gastric cancer (GC) patients, and its upregulation was closely associated with malignant phenotypes. LncRNA DANCR overexpression significantly promoted the carcinogenesis in vitro and in vivo, while inhibition of DANCR showed the opposite effect.\n19\n In bladder cancer (BC), lncRNA DANCR expression was dramatically higher in BC tissues than in normal controls. LncRNA DANCR functioned as a miR‐149 sponge to positively correlate with MSI2 expression and markedly stimulated the tumorigenicity. High DANCR expression was strongly associated with aggressive clinical variables patients with BC, such as lymph node metastasis and advanced TNM stage.\n20\n, \n21\n In addition, lncRNA DANCR was highly expressed in colorectal cancer (CRC) tissues compared to normal tissues, and its overexpression was negatively associated with CRC patient survival. 
Upregulation of lncRNA DANCR significantly promoted carcinogenesis and metastasis via sponging miR‐577.\n22\n, \n23\n Moreover, lncRNA DANCR was remarkably increased in ovarian cancer (OC) tissues and cell lines. Overexpression of lncRNA DANCR significantly promoted OC cell proliferation, invasion, and migration through regulating IGF2 or UPF1.\n24\n, \n25\n However, lncRNA DANCR expression was frequently lower in papillary thyroid cancer (PTC) tissues than that in adjacent normal tissues, and low lncRNA DANCR expression was closely associated with worse clinical parameters of PTC patients, suggesting that lncRNA DANCR played a tumor suppressive role in PTC.\n26\n The contradictory functions of lncRNA DANCR may be explained by its ambiguous role in regulation of the complex networks of oncogenes and tumor suppressors resulting in a cancer type‐dependent outcome.", "Taken together, this study demonstrated that serum exosomal lncRNA DANCR expression was markedly elevated in patients with BC. Upregulation of serum exosomal lncRNA DANCR was closely associated with worse clinical parameters and shorter survival. Thus, serum exosomal lncRNA DANCR serves as a promising prognostic marker for BC. Meanwhile, as a result of small sample research, the analysis of larger cohorts is required to confirm the clinical role of serum exosomal lncRNA DANCR in BC.", "None." ]
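The qRT-PCR workflow above reports relative DANCR levels via the 2^−ΔΔCt method against the cel-miR-39 spike-in. A minimal sketch of that arithmetic in Python follows; the Ct values are hypothetical, since the record does not include raw Ct data:

```python
def relative_expression(ct_target, ct_reference,
                        ct_target_calibrator, ct_reference_calibrator):
    """Fold change by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference), computed per sample
    ddCt = dCt(sample) - dCt(calibrator)
    fold = 2 ** (-ddCt)
    """
    d_ct_sample = ct_target - ct_reference
    d_ct_calibrator = ct_target_calibrator - ct_reference_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: DANCR vs. the cel-miR-39 spike-in, one BC
# serum sample against a healthy-control calibrator sample.
fold = relative_expression(
    ct_target=26.1,                # DANCR Ct, BC sample
    ct_reference=21.4,             # cel-miR-39 Ct, BC sample
    ct_target_calibrator=28.9,     # DANCR Ct, control sample
    ct_reference_calibrator=21.5,  # cel-miR-39 Ct, control sample
)
print(f"Relative DANCR level: {fold:.2f}-fold")  # ~6.5-fold here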
[ null, "materials-and-methods", null, null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusions", "COI-statement" ]
[ "breast cancer", "diagnosis", "exosome", "lncDANCR", "prognosis" ]
INTRODUCTION: Breast cancer (BC) is the most frequent cancer among women around the world. 1 , 2 In China, more than 272,400 newly diagnosed BC cases were reported, causing about 70,700 deaths in 2015. BC alone accounts for about 15% of all new cancers in women. 3 , 4 Despite recent advances in surgical techniques and adjuvant chemotherapy, the clinical outcome of BC patients remains unfavorable due to recurrence and metastasis. 5 Early diagnosis and treatment significantly improve the prognosis of this malignancy. Cancer antigen 153 (CA153) and carcinoembryonic antigen (CEA) are the most commonly used biomarkers for BC. However, the diagnostic and prognostic accuracy of CA153 and CEA is poor. Therefore, it is urgent to identify novel, reliable biomarkers for BC diagnosis and prognosis evaluation. Long non‐coding RNAs (lncRNAs) are a class of non‐coding RNAs longer than 200 nucleotides. 6 , 7 Dysregulation of lncRNAs has been shown to play a critical role in regulating the initiation and progression of BC. 8 , 9 For instance, OLBC15 was highly expressed in triple‐negative BC, and OLBC15 promoted BC tumorigenesis through destabilizing ZNF326. 10 Exosomes are 30–100 nm nanovesicles containing many molecules, such as lncRNAs, proteins, DNA, RNA, and miRNAs. 11 , 12 , 13 Exosomal lncRNAs are correlated with cancer tumorigenesis and progression and can be used as effective biomarkers in various types of cancer including BC. For example, compared to benign breast disease (BBD) patients and healthy controls, serum exosomal H19 was significantly increased in BC patients, and high serum exosomal H19 expression was strongly associated with worse clinical variables. 14 Similarly, patients with BC exhibited higher serum exosomal HOTAIR levels, and overexpression of serum exosomal HOTAIR was correlated with poorer prognosis. 15 LncRNA differentiation antagonizing non‐protein coding RNA (DANCR or ANCR), located on human chromosome 4, has previously been found to be involved in the initiation and progression of BC. 16 , 17 However, the diagnostic and prognostic value of circulating exosomal DANCR in BC has not yet been explored. Thus, this study aimed to detect serum exosomal lncRNA DANCR levels in BC patients, BBD patients, and normal controls and to analyze its role as a potential novel biomarker for BC diagnosis and prognosis. MATERIALS AND METHODS: Patients and samples: The current study was approved by the Ethics Committee of Taizhou Municipal Hospital. Written informed consent was obtained from all patients and healthy subjects. In this study, blood samples were obtained from 120 BC patients, 70 BBD patients, and 105 healthy volunteers. All recruited BC patients had not received any chemotherapy or radiotherapy prior to sampling. We also collected blood samples from all BC cases one month after their surgery. The blood samples were collected in anti‐coagulative tubes and centrifuged at 1600 g for 10 min, followed by centrifugation at 16,000 g for 10 min at 4°C. The supernatant was stored at −80°C until further use. The clinical characteristics of all 120 patients with BC are summarized in Table 1 (Association of serum exosomal lncRNA DANCR with clinicopathological characteristics). Exosome isolation: Total exosomes were isolated from serum with the total exosome isolation reagent (Invitrogen) according to the manufacturer's protocol. The serum was thawed on ice. Possible residual cell debris was removed by centrifugation at 2000 g for 30 min. The serum supernatant was then mixed with the exosome isolation reagent, incubated at 4°C for 30 min, and centrifuged at 10,000 g for 10 min. The supernatant was discarded, and the exosomal pellet was resuspended in PBS for RNA extraction. RNA extraction and quantitative real‐time reverse transcription‐polymerase chain reaction (qRT‐PCR): Total exosomal RNA was extracted from serum samples with an miRNeasy Serum/Plasma Kit (QIAGEN). The RNA was quantified on a NanoDrop ND‐1000 Spectrophotometer (Thermo Scientific) and immediately reverse transcribed into cDNA. Before RNA extraction, 25 fmol/ml of synthesized cel‐miR‐39 (Applied Biosystems) was added as an exogenous spike‐in control. Then, qRT‐PCR was performed with the SYBR PrimeScript miRNA RT‐PCR kit (Takara Biotechnology Co. Ltd) on the 7500 Real‐Time PCR system (Applied Biosystems). The relative levels of serum exosomal lncRNA DANCR were calculated using the 2^−ΔΔCt method. Measurement of CA153 and CEA in serum: The levels of CA153 and CEA were measured using electrochemiluminescence assays with commercially available kits. Statistical analysis: The Mann‐Whitney U and Kruskal‐Wallis tests were used to compare differences in serum exosomal lncRNA DANCR expression between two or more groups. The chi‐square test was performed to evaluate the associations between clinicopathological variables and serum exosomal lncRNA DANCR expression. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to assess the diagnostic power of serum exosomal lncRNA DANCR, CA153, and CEA in BC. Kaplan‐Meier analysis with the log‐rank test was used for overall survival (OS) analysis. Multivariate Cox proportional hazards models were used to identify independent risk indicators for BC patients. All statistical analyses were performed using GraphPad Prism 6.0 (GraphPad Software Inc.). p < 0.05 was considered statistically significant. RESULTS: Serum exosomal lncRNA DANCR was dramatically increased in BC patients: qRT‐PCR was used to detect serum exosomal lncRNA DANCR levels in all participants. Serum exosomal lncRNA DANCR levels were significantly higher in BC patients than in BBD subjects and controls (p < 0.001, Figure 1A). In addition, BC patients with positive lymph node metastasis (p = 0.005, Figure 1B), negative ER status (p = 0.012, Figure 1C), negative HER2 status (p < 0.001, Figure 1D), and advanced TNM stage (p < 0.001, Figure 1E) exhibited higher serum exosomal lncRNA DANCR levels compared with their respective controls. Figure 1: Serum exosomal lncRNA DANCR levels were significantly higher in BC patients (A). Increased serum exosomal lncRNA DANCR expression occurred more frequently in BC patients with positive lymph node metastasis (B), negative ER status (C), negative HER2 status (D), and advanced TNM stage (E). Diagnostic accuracy of serum exosomal lncRNA DANCR for BC: By ROC analysis, Figure 2A demonstrated that the AUC for serum exosomal lncRNA DANCR was 0.880, with a specificity of 82.9% and a sensitivity of 83.3%. CA153 and CEA yielded AUCs of 0.799 (specificity: 76.0%; sensitivity: 68.3%) and 0.784 (specificity: 83.8%; sensitivity: 72.5%), respectively, for discriminating BC cases from controls (Figure 2B,C). More importantly, the combination of serum exosomal lncRNA DANCR, CA153, and CEA yielded a further improvement, with an AUC of 0.954 (specificity: 91.4%; sensitivity: 90.8%) (Figure 2D; Table 2). Figure 2: Receiver operating characteristic (ROC) curves of the single markers and the three‐marker combination with the optimum cut‐off values in differentiating breast cancer (BC) from controls. Table 2: The diagnostic value of all individual and combined biomarkers for BC. Abbreviations: AUC, area under the curve; BC, breast cancer; CA153, cancer antigen 153; CEA, carcinoembryonic antigen; DANCR, differentiation antagonizing non‐protein coding RNA. Increased serum exosomal lncRNA DANCR expression was correlated with clinical variables in BC patients: The median serum exosomal lncRNA DANCR expression was used as a cut‐off value, and all 120 BC patients were classified into two groups: a high serum exosomal DANCR expression group (n = 60) and a low serum exosomal DANCR expression group (n = 60). As illustrated in Table 1, high serum exosomal lncRNA DANCR expression occurred more frequently in BC cases with positive lymph node metastasis (p = 0.0080), negative ER status (p = 0.0150), negative HER2 status (p = 0.0009), and advanced TNM stage (p < 0.0001). However, there was no significant correlation between serum exosomal lncRNA DANCR expression and age, tumor size, or PR status (all p > 0.05). Alteration of serum exosomal lncRNA DANCR levels following treatment: One month after surgical resection, paired blood samples were collected from all BC subjects. qRT‐PCR was used to measure serum exosomal lncRNA DANCR expression, and serum exosomal lncRNA DANCR levels were markedly reduced following surgery (p < 0.001, Figure 3). Figure 3: Compared to preoperative blood samples, serum exosomal long non‐coding RNA (lncRNA) differentiation antagonizing non‐protein coding RNA (DANCR) levels were markedly downregulated in postoperative samples. Serum exosomal lncRNA DANCR expression was a prognostic biomarker for BC: The Kaplan‐Meier curve for OS according to serum exosomal lncRNA DANCR expression is presented in Figure 4. Compared to BC patients with low serum exosomal lncRNA DANCR expression, patients with high serum exosomal lncRNA DANCR expression survived significantly shorter (p = 0.0132). Furthermore, multivariate analysis indicated that lymph node metastasis (HR = 3.12, 95% CI = 1.65–5.78, p = 0.018), ER status (HR = 2.75, 95% CI = 1.38–5.26, p = 0.021), HER2 status (HR = 3.51, 95% CI = 1.92–6.24, p = 0.014), TNM stage (HR = 4.75, 95% CI = 2.74–8.32, p < 0.001), and serum exosomal lncRNA DANCR expression (HR = 3.86, 95% CI = 2.16–6.63, p = 0.009) were significant independent prognostic markers for shorter OS (Table 3). Figure 4: Kaplan‐Meier analysis of overall survival (OS) according to serum exosomal lncRNA DANCR expression. Table 3: Multivariate analysis of overall survival in 120 BC patients. Abbreviations: BC, breast cancer; DANCR, differentiation antagonizing non‐protein coding RNA. DISCUSSION: Dysregulation of lncRNAs has been demonstrated to play major roles in breast cancer progression. 18 This study enrolled a total of 120 BC patients, 70 BBD patients, and 105 healthy women. Compared to BBD patients and controls, a significantly higher serum exosomal lncRNA DANCR level was observed in BC patients. High serum exosomal lncRNA DANCR expression was strongly associated with aggressive clinical variables. In addition, serum exosomal lncRNA DANCR showed higher diagnostic performance than CA153 and CEA. The combination of serum exosomal lncRNA DANCR, CA153, and CEA significantly improved the diagnostic accuracy. Moreover, serum exosomal lncRNA DANCR levels were markedly decreased 1 month after surgery. Furthermore, high serum exosomal lncRNA DANCR predicted unfavorable OS in BC patients and was an independent prognostic indicator for BC. These results demonstrate that serum exosomal lncRNA DANCR is a promising and robust biomarker for BC. Our findings were in line with several previous studies. Zhang et al. showed that lncRNA DANCR was significantly upregulated both in BC tissues and cell lines. Overexpression of DANCR stimulated epithelial‐mesenchymal transition (EMT), cancer stemness, and inflammation, whereas its knockdown had the opposite effects. The oncogenic activities of DANCR were reversed by EZH2 inhibition or SOCS3 upregulation. 16 Similarly, DANCR was significantly higher in BC tissues than in paired normal tissues. DANCR upregulation not only promoted tumorigenicity in xenograft animals but also greatly enhanced cell proliferation, invasion, and migration in vitro through regulating miR‐216a‐5p. 17 Besides BC, the role of lncRNA DANCR has also been reported in other types of cancer. LncRNA DANCR was increased both in tumor tissues and serum samples of gastric cancer (GC) patients, and its upregulation was closely associated with malignant phenotypes. LncRNA DANCR overexpression significantly promoted carcinogenesis in vitro and in vivo, while inhibition of DANCR showed the opposite effect. 19 In bladder cancer, lncRNA DANCR expression was dramatically higher in tumor tissues than in normal controls. LncRNA DANCR functioned as a miR‐149 sponge to positively correlate with MSI2 expression and markedly stimulated tumorigenicity. High DANCR expression was strongly associated with aggressive clinical variables in patients with bladder cancer, such as lymph node metastasis and advanced TNM stage. 20 , 21 In addition, lncRNA DANCR was highly expressed in colorectal cancer (CRC) tissues compared to normal tissues, and its overexpression was negatively associated with CRC patient survival. Upregulation of lncRNA DANCR significantly promoted carcinogenesis and metastasis via sponging miR‐577. 22 , 23 Moreover, lncRNA DANCR was remarkably increased in ovarian cancer (OC) tissues and cell lines. Overexpression of lncRNA DANCR significantly promoted OC cell proliferation, invasion, and migration through regulating IGF2 or UPF1. 24 , 25 However, lncRNA DANCR expression was frequently lower in papillary thyroid cancer (PTC) tissues than in adjacent normal tissues, and low lncRNA DANCR expression was closely associated with worse clinical parameters of PTC patients, suggesting that lncRNA DANCR plays a tumor‐suppressive role in PTC. 26 The contradictory functions of lncRNA DANCR may be explained by its ambiguous role in regulating the complex networks of oncogenes and tumor suppressors, resulting in a cancer type‐dependent outcome. CONCLUSIONS: Taken together, this study demonstrated that serum exosomal lncRNA DANCR expression was markedly elevated in patients with BC. Upregulation of serum exosomal lncRNA DANCR was closely associated with worse clinical parameters and shorter survival. Thus, serum exosomal lncRNA DANCR serves as a promising prognostic marker for BC. Meanwhile, given the small sample size of this study, analysis of larger cohorts is required to confirm the clinical role of serum exosomal lncRNA DANCR in BC. CONFLICT OF INTERESTS: None.
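The ROC analysis above compares single markers against a three-marker combination. The article does not state how the combination was formed; a common choice, assumed here, is to score the markers through a logistic regression and run the ROC on that score. All data below are simulated for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)

# Simulated marker values: 120 BC patients (label 1), 105 controls (label 0).
n_case, n_ctrl = 120, 105
y = np.r_[np.ones(n_case), np.zeros(n_ctrl)]
dancr = np.r_[rng.normal(3.0, 1.0, n_case), rng.normal(1.0, 0.8, n_ctrl)]
ca153 = np.r_[rng.normal(30, 12, n_case), rng.normal(18, 8, n_ctrl)]
cea = np.r_[rng.normal(5.0, 2.5, n_case), rng.normal(2.5, 1.5, n_ctrl)]

# Single-marker AUCs.
for name, marker in [("DANCR", dancr), ("CA153", ca153), ("CEA", cea)]:
    print(f"{name}: AUC = {roc_auc_score(y, marker):.3f}")

# Three-marker combination: logistic-regression score as the test variable.
X = np.column_stack([dancr, ca153, cea])
score = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
print(f"Combined: AUC = {roc_auc_score(y, score):.3f}")

# Optimum cut-off via the Youden index (sensitivity + specificity - 1).
fpr, tpr, thresholds = roc_curve(y, score)
best = np.argmax(tpr - fpr)
print(f"Cut-off = {thresholds[best]:.3f}, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")
```

Note that fitting and evaluating the combiner on the same samples, as above, gives an optimistic AUC; a held-out set or cross-validation would be the fairer estimate.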
Background: Exosomal long non-coding RNAs (lncRNAs) serve as excellent candidate biomarkers for clinical applications. The expression of differentiation antagonizing non-protein coding RNA (DANCR) has been shown to be increased in breast cancer (BC) tissues and cell lines. However, the clinical value of circulating exosomal DANCR in BC has not been explored. Methods: A total of 120 BC patients, 70 benign breast disease (BBD) patients, and 105 healthy women were recruited in this study. Total RNA was extracted from serum samples, and the level of serum exosomal lncRNA DANCR was evaluated by quantitative real-time reverse transcription-polymerase chain reaction (qRT-PCR). Results: Serum exosomal lncRNA DANCR levels were significantly higher in BC patients than in BBD patients and normal controls. The diagnostic performance of serum exosomal lncRNA DANCR was good, and the combination of serum exosomal lncRNA DANCR, CA153, and CEA greatly improved the diagnostic accuracy for BC. High serum exosomal lncRNA DANCR level was associated with various clinicopathological variables including lymph node metastasis, ER status, HER2 status, and TNM stage. In addition, the BC patients in the high serum exosomal lncRNA DANCR expression group had significantly shorter 5-year overall survival time. Multivariate analysis demonstrated that serum exosomal lncRNA DANCR was an independent risk factor for BC. Conclusions: Serum exosomal lncRNA DANCR may be a useful non-invasive biomarker for the clinical diagnosis and prognosis of BC.
INTRODUCTION: Breast cancer (BC) is the most frequent cancer among women around the world. 1 , 2 In China, more than 272,400 newly diagnosed BC cases were reported, causing about 70,700 deaths in 2015. BC alone accounts for about 15% of all new cancers in women. 3 , 4 Despite recent advances in surgical techniques and adjuvant chemotherapy, the clinical outcome of BC patients remains unfavorable due to recurrence and metastasis. 5 Early diagnosis and treatment significantly improve the prognosis of this malignancy. Cancer antigen 153 (CA153) and carcinoembryonic antigen (CEA) are the most commonly used biomarkers for BC. However, the diagnostic and prognostic accuracy of CA153 and CEA is poor. Therefore, it is urgent to identify novel, reliable biomarkers for BC diagnosis and prognosis evaluation. Long non‐coding RNAs (lncRNAs) are a class of non‐coding RNAs longer than 200 nucleotides. 6 , 7 Dysregulation of lncRNAs has been shown to play a critical role in regulating the initiation and progression of BC. 8 , 9 For instance, OLBC15 was highly expressed in triple‐negative BC, and OLBC15 promoted BC tumorigenesis through destabilizing ZNF326. 10 Exosomes are 30–100 nm nanovesicles containing many molecules, such as lncRNAs, proteins, DNA, RNA, and miRNAs. 11 , 12 , 13 Exosomal lncRNAs are correlated with cancer tumorigenesis and progression and can be used as effective biomarkers in various types of cancer including BC. For example, compared to benign breast disease (BBD) patients and healthy controls, serum exosomal H19 was significantly increased in BC patients, and high serum exosomal H19 expression was strongly associated with worse clinical variables. 14 Similarly, patients with BC exhibited higher serum exosomal HOTAIR levels, and overexpression of serum exosomal HOTAIR was correlated with poorer prognosis. 15 LncRNA differentiation antagonizing non‐protein coding RNA (DANCR or ANCR), located on human chromosome 4, has previously been found to be involved in the initiation and progression of BC. 16 , 17 However, the diagnostic and prognostic value of circulating exosomal DANCR in BC has not yet been explored. Thus, this study aimed to detect serum exosomal lncRNA DANCR levels in BC patients, BBD patients, and normal controls and to analyze its role as a potential novel biomarker for BC diagnosis and prognosis. CONCLUSIONS: Taken together, this study demonstrated that serum exosomal lncRNA DANCR expression was markedly elevated in patients with BC. Upregulation of serum exosomal lncRNA DANCR was closely associated with worse clinical parameters and shorter survival. Thus, serum exosomal lncRNA DANCR serves as a promising prognostic marker for BC. Meanwhile, given the small sample size of this study, analysis of larger cohorts is required to confirm the clinical role of serum exosomal lncRNA DANCR in BC.
Background: Exosomal long non-coding RNAs (lncRNAs) serve as excellent candidate biomarkers for clinical applications. The expression of differentiation antagonizing non-protein coding RNA (DANCR) has been shown to be increased in breast cancer (BC) tissues and cell lines. However, the clinical value of circulating exosomal DANCR in BC has not been explored. Methods: A total of 120 BC patients, 70 benign breast disease (BBD) patients, and 105 healthy women were recruited in this study. Total RNA was extracted from serum samples, and the level of serum exosomal lncRNA DANCR was evaluated by quantitative real-time reverse transcription-polymerase chain reaction (qRT-PCR). Results: Serum exosomal lncRNA DANCR levels were significantly higher in BC patients than in BBD patients and normal controls. The diagnostic performance of serum exosomal lncRNA DANCR was good, and the combination of serum exosomal lncRNA DANCR, CA153, and CEA greatly improved the diagnostic accuracy for BC. High serum exosomal lncRNA DANCR level was associated with various clinicopathological variables including lymph node metastasis, ER status, HER2 status, and TNM stage. In addition, the BC patients in the high serum exosomal lncRNA DANCR expression group had significantly shorter 5-year overall survival time. Multivariate analysis demonstrated that serum exosomal lncRNA DANCR was an independent risk factor for BC. Conclusions: Serum exosomal lncRNA DANCR may be a useful non-invasive biomarker for the clinical diagnosis and prognosis of BC.
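The abstract's survival analysis (Kaplan-Meier with log-rank test, multivariate Cox) was run in GraphPad Prism; the lifelines package is assumed here as an open-source stand-in, again on simulated follow-up data rather than the study's records:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n = 120

# Simulated cohort: follow-up in months, death indicator, and 0/1-coded
# covariates (high DANCR, node-positive, ER-negative). Real data would
# come from the study's clinical records.
df = pd.DataFrame({
    "time": rng.exponential(40, n).clip(1, 60),
    "event": rng.integers(0, 2, n),
    "high_dancr": np.r_[np.ones(60), np.zeros(60)].astype(int),
    "node_positive": rng.integers(0, 2, n),
    "er_negative": rng.integers(0, 2, n),
})

# Kaplan-Meier curve per DANCR group and log-rank comparison.
high, low = df[df.high_dancr == 1], df[df.high_dancr == 0]
kmf = KaplanMeierFitter().fit(high["time"], high["event"], label="high DANCR")
# kmf.plot_survival_function() would draw the curve, as in Figure 4.
res = logrank_test(high["time"], low["time"],
                   event_observed_A=high["event"],
                   event_observed_B=low["event"])
print(f"Log-rank p = {res.p_value:.4f}")

# Multivariate Cox proportional hazards model; the summary reports HR
# (exp(coef)), 95% CI, and p per covariate, as in Table 3.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
```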
5,531
282
[ 460, 154, 105, 110, 16, 136, 184, 196, 152, 85, 247 ]
16
[ "dancr", "serum", "exosomal", "serum exosomal", "lncrna", "lncrna dancr", "bc", "exosomal lncrna", "serum exosomal lncrna", "exosomal lncrna dancr" ]
[ "nucleotides dysregulation lncrnas", "dysregulation lncrnas demonstrated", "types cancer lncrna", "prognosis 15 lncrna", "cancer bc lncrna" ]
null
[CONTENT] breast cancer | diagnosis | exosome | lncDANCR | prognosis [SUMMARY]
null
[CONTENT] breast cancer | diagnosis | exosome | lncDANCR | prognosis [SUMMARY]
[CONTENT] breast cancer | diagnosis | exosome | lncDANCR | prognosis [SUMMARY]
[CONTENT] breast cancer | diagnosis | exosome | lncDANCR | prognosis [SUMMARY]
[CONTENT] breast cancer | diagnosis | exosome | lncDANCR | prognosis [SUMMARY]
[CONTENT] Biomarkers, Tumor | Breast Neoplasms | Female | Humans | Lymphatic Metastasis | Prognosis | RNA, Long Noncoding [SUMMARY]
null
[CONTENT] Biomarkers, Tumor | Breast Neoplasms | Female | Humans | Lymphatic Metastasis | Prognosis | RNA, Long Noncoding [SUMMARY]
[CONTENT] Biomarkers, Tumor | Breast Neoplasms | Female | Humans | Lymphatic Metastasis | Prognosis | RNA, Long Noncoding [SUMMARY]
[CONTENT] Biomarkers, Tumor | Breast Neoplasms | Female | Humans | Lymphatic Metastasis | Prognosis | RNA, Long Noncoding [SUMMARY]
[CONTENT] Biomarkers, Tumor | Breast Neoplasms | Female | Humans | Lymphatic Metastasis | Prognosis | RNA, Long Noncoding [SUMMARY]
[CONTENT] nucleotides dysregulation lncrnas | dysregulation lncrnas demonstrated | types cancer lncrna | prognosis 15 lncrna | cancer bc lncrna [SUMMARY]
null
[CONTENT] nucleotides dysregulation lncrnas | dysregulation lncrnas demonstrated | types cancer lncrna | prognosis 15 lncrna | cancer bc lncrna [SUMMARY]
[CONTENT] nucleotides dysregulation lncrnas | dysregulation lncrnas demonstrated | types cancer lncrna | prognosis 15 lncrna | cancer bc lncrna [SUMMARY]
[CONTENT] nucleotides dysregulation lncrnas | dysregulation lncrnas demonstrated | types cancer lncrna | prognosis 15 lncrna | cancer bc lncrna [SUMMARY]
[CONTENT] nucleotides dysregulation lncrnas | dysregulation lncrnas demonstrated | types cancer lncrna | prognosis 15 lncrna | cancer bc lncrna [SUMMARY]
[CONTENT] dancr | serum | exosomal | serum exosomal | lncrna | lncrna dancr | bc | exosomal lncrna | serum exosomal lncrna | exosomal lncrna dancr [SUMMARY]
null
[CONTENT] dancr | serum | exosomal | serum exosomal | lncrna | lncrna dancr | bc | exosomal lncrna | serum exosomal lncrna | exosomal lncrna dancr [SUMMARY]
[CONTENT] dancr | serum | exosomal | serum exosomal | lncrna | lncrna dancr | bc | exosomal lncrna | serum exosomal lncrna | exosomal lncrna dancr [SUMMARY]
[CONTENT] dancr | serum | exosomal | serum exosomal | lncrna | lncrna dancr | bc | exosomal lncrna | serum exosomal lncrna | exosomal lncrna dancr [SUMMARY]
[CONTENT] dancr | serum | exosomal | serum exosomal | lncrna | lncrna dancr | bc | exosomal lncrna | serum exosomal lncrna | exosomal lncrna dancr [SUMMARY]
[CONTENT] bc | prognosis | cancer | diagnosis | progression | lncrnas | exosomal | biomarkers | patients | progression bc [SUMMARY]
null
[CONTENT] dancr | serum exosomal | serum | exosomal | serum exosomal lncrna | lncrna | exosomal lncrna | serum exosomal lncrna dancr | lncrna dancr | exosomal lncrna dancr [SUMMARY]
[CONTENT] serum exosomal lncrna dancr | serum exosomal lncrna | serum exosomal | lncrna dancr | dancr | lncrna | exosomal lncrna dancr | exosomal lncrna | serum | exosomal [SUMMARY]
[CONTENT] dancr | serum | exosomal | bc | serum exosomal | lncrna | lncrna dancr | serum exosomal lncrna | exosomal lncrna | serum exosomal lncrna dancr [SUMMARY]
[CONTENT] dancr | serum | exosomal | bc | serum exosomal | lncrna | lncrna dancr | serum exosomal lncrna | exosomal lncrna | serum exosomal lncrna dancr [SUMMARY]
[CONTENT] ||| RNA | DANCR | BC ||| BC [SUMMARY]
null
[CONTENT] BC | BBD ||| CA153 | CEA | BC ||| ER | HER2 | TNM ||| BC | 5-year ||| BC [SUMMARY]
[CONTENT] BC [SUMMARY]
[CONTENT] ||| RNA | DANCR | BC ||| BC ||| 120 | BC | 70 | BBD | 105 ||| RNA ||| BC | BBD ||| CA153 | CEA | BC ||| ER | HER2 | TNM ||| BC | 5-year ||| BC ||| BC [SUMMARY]
[CONTENT] ||| RNA | DANCR | BC ||| BC ||| 120 | BC | 70 | BBD | 105 ||| RNA ||| BC | BBD ||| CA153 | CEA | BC ||| ER | HER2 | TNM ||| BC | 5-year ||| BC ||| BC [SUMMARY]
Assessing left atrial function in patients with atrial fibrillation and valvular heart disease using cardiovascular magnetic resonance imaging.
35289415
Atrial fibrillation (AF) is a common arrhythmia in valvular heart disease (VHD) and is associated with adverse outcomes.
BACKGROUND
This was a retrospective cross-sectional inter-reader and intra-reader reproducibility study conducted from July 1, 2020, to January 31, 2021. A total of 39 patients with AF-VHD (rheumatic heart valvular disease [RHVD], n = 22; degenerative heart valvular disease [DHVD], n = 17) underwent MRI scans with drug-controlled heart rate before correction of the rhythm and valves through the maze procedure. Fifteen participants with normal cardiac MRI were included as healthy controls. εs/SRs, εe/SRe, and εa/SRa, corresponding to LA reservoir, conduit, and booster-pump function, were assessed using Feature Tracking software (CVI42 v5.12.1).
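For orientation, the three strain/strain-rate pairs named here are conventionally read off a single LA longitudinal strain curve: εs is the peak total strain, εa the strain at the onset of atrial contraction, εe their difference (εe = εs − εa), and SRs/SRe/SRa the peak positive, early negative, and late negative strain rates. A minimal sketch of that extraction, assuming a hypothetical sampled strain curve (the feature-tracking step itself is done in CVI42, not shown here):

```python
# Sketch: deriving the six LA deformation indices from a strain-time curve.
# `strain` is a toy global longitudinal strain signal (%) over one cardiac
# cycle; real curves come from CVI42 feature tracking.
import numpy as np

t = np.linspace(0.0, 1.0, 25)            # one cardiac cycle, 25 phases
strain = 20 * np.sin(np.pi * t) ** 2     # toy curve peaking mid-cycle

sr = np.gradient(strain, t)              # strain rate (%/s)
i_peak = int(np.argmax(strain))

eps_s = strain[i_peak]                   # total strain (reservoir)
sr_s = sr[:i_peak].max()                 # peak positive strain rate (SRs)

# Split the downslope into early (conduit) and late (booster) halves,
# a crude stand-in for locating atrial contraction onset from the ECG.
mid = (i_peak + len(t)) // 2
sr_e = sr[i_peak:mid].min()              # peak early negative strain rate (SRe)
sr_a = sr[mid:].min()                    # peak late negative strain rate (SRa)
eps_a = strain[mid]                      # active strain at contraction onset
eps_e = eps_s - eps_a                    # passive strain: εe = εs − εa

print(eps_s, eps_e, eps_a, sr_s, sr_e, sr_a)
```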
METHODS
Compared with healthy controls, LA global strain parameters (εs/εe/εa/SRs/SRe/SRa) were significantly decreased (all p < 0.001), while LA size and volume were increased in the AF-VHD group (all p < 0.001). In the subgroup analysis, the RHVD group showed lower LA total ejection fraction (LATEF) and strain values than the DHVD group (12.6% ± 3.3% vs. 19.4% ± 8.6%, p = 0.001). Decreased LATEF was significantly related to altered LA strain and strain rate, especially εs, εe, and SRs (Pearson/Spearman r/ρ = 0.856/0.837/0.562, respectively; all p < 0.001). Inter-reader and intra-reader reproducibility were consistent for LA volumetry and strain parameters (intraclass correlation coefficient: 0.88-0.99).
RESULTS
CMR-FT can be used to assess LA strain parameters and to identify LA dysfunction and deformation noninvasively, and it could serve as a helpful functional imaging biomarker in the clinical management of AF-VHD.
CONCLUSIONS
[ "Atrial Fibrillation", "Atrial Function, Left", "Cross-Sectional Studies", "Heart Atria", "Heart Valve Diseases", "Humans", "Magnetic Resonance Imaging", "Reproducibility of Results", "Retrospective Studies" ]
9045075
INTRODUCTION
Atrial fibrillation (AF) is the most common human tachyarrhythmia diagnosed clinically; patients with AF are at increased risk of stroke and heart failure, in addition to a decreased quality of life and lower survival. 1, 2 AF is associated with profound structural and functional alterations of the atrial myocardium, 3 promoting a true atrial cardiomyopathy, the severity of which is an important determinant of AF recurrence and response to treatment. 4, 5, 6 Stroke prevention is a pivotal part of the treatment of patients with AF. 7 Patients with AF and concurrent valvular heart disease (AF-VHD) have an even higher thromboembolic risk than those with AF alone. 7, 8 Patients with AF have a 5-fold increased risk of stroke compared with patients without cardiovascular disease, and patients with AF coupled with mitral stenosis have a 20-fold risk of stroke. 9 It is of major clinical interest to have a new imaging biomarker with which to quantify the degree of LA dysfunction and make earlier clinical decisions in patients with AF-VHD in an effort to prevent cardiac events. Normal LA function includes reservoir function (collection of pulmonary venous blood during left ventricular [LV] systole), conduit function (passage of pulmonary venous blood flow to the LV during early LV diastole), and booster pump function (augmentation of LV filling during late LV diastole/atrial systole). 10 It is important to recognize the interplay between these atrial functions and ventricular performance throughout the cardiac cycle. Previous publications investigating LA function have focused primarily on LA size and volume. 11 Given the LA's complex geometry, intricate fiber orientation, and the variable contributions of its appendage and pulmonary veins, these two parameters alone may be insufficient to describe the complexity of LA function and wall motion. 12, 13, 14 LA strain assessed by cardiovascular magnetic resonance imaging feature tracking (CMR-FT) has been used in many cardiovascular diseases and has enhanced diagnostic value; 12, 13, 14 it may be more sensitive than conventional LA volumetric parameters 12 and LV function 13, 14 and has shown good intra-observer and interobserver reproducibility in normal volunteers. 15 Because the LA wall is very thin, LA strain is challenging to measure, 15 and radial strain in particular has been noted as difficult to obtain, with poor reproducibility. 15, 16 Echocardiographic data have likewise shown poor reproducibility of strain rate and radial strain 16 and are limited by two-dimensional approaches that yield semi-quantitative, subjective measurements. 17, 18 Studies of LA function clearly provide new insights into the contribution of LA performance to cardiovascular disease and are promising tools for predicting cardiovascular events in a wide range of patient populations. Considerable data also support the use of LAEF for predicting events. 12 Accordingly, LA function indices such as strain and strain rate have been proposed using noninvasive imaging modalities such as echocardiographic speckle tracking 19, 20 and color tissue Doppler. Although speckle tracking is presently the only available reference for LA strain estimation, ultrasound beam direction and heart motion relative to the probe may influence measurements, and inter-vendor variability, described mainly in the setting of LV function, needs to be further investigated. 12, 21 CMR-FT is a new method for evaluating myocardial strain and strain rate; it can be applied to routine cine images, offers high spatial resolution, a large field of view, and good reproducibility, and reflects the functional characteristics of myocardial tissue more sensitively. 22 As a newer technique, CMR-FT has mainly been used to study LV strain in recent years 23 and has rarely been applied to the LA. The aim of this study was to evaluate LA strain and strain rate using CMR-FT before valve replacement surgery, to assess the feasibility and reproducibility of CMR-FT for quantifying global LA function in patients with AF-VHD, and to review the clinical application value of CMR-FT. In addition, we compared global LA function between patients with AF-VHD and those without cardiac disease.
null
null
RESULTS
Basic demographic characteristics A total of 39 patients (22 RHVD and 17 DHVD) and 15 control participants were included in the study. Baseline characteristics and volumetric chamber indices for all participants are summarized in Table 1. The AF-VHD group had a higher heart rate than the control group (94.8 ± 23.6 bpm vs. 77.3 ± 9.22 bpm; p < 0.001), especially the RHVD group. The RHVD group showed lower body mass index (25.8 ± 2.48 kg/m2 vs. 27.4 ± 2.09 kg/m2, p = 0.048) and BSA (1.84 ± 0.09 m2 vs. 1.92 ± 0.1 m2, p = 0.015) than the DHVD group. The RHVD group had more diagnoses of mitral stenosis (21/22 vs. 1/17, p < 0.001), and all patients in the AF-VHD group had mitral regurgitation; there was no statistically significant difference in tricuspid regurgitation (12/22 vs. 12/17, p = 0.343). There were no statistically significant differences in the severity of valvular heart disease or the proportion of presurgery medications (anti-platelets, anti-coagulation, anti-hypertensives, anticholesterol drugs, hypoglycemics, and antiarrhythmic drugs) between the two groups (all p > 0.05). Baseline and clinical characteristics of participants. Note: "*" indicates statistical significance. "a"/"b" indicates statistical significance between the control group and the RHVD group/DHVD group. Abbreviations: AF, atrial fibrillation; CI, cardiac index; CO, cardiac output; DHVD, degenerative heart valvular disease; EDD, end-diastolic diameter; EDV/ESV(i), end-diastolic/end-systolic volume (index); EF, emptying fraction; LV, left ventricular; LVM(i), LV mass (index); NYHA, New York Heart Association; RHVD, rheumatic heart valvular disease; SV(i), stroke volume (index); TIA, transient ischemic attack; VHD, valvular heart disease.
LV parameters Compared with the control group, the AF-VHD group showed higher LVEDD, LVEDV(i), and LVESV(i), and lower LVEF (59.9% ± 5.8% vs. 42.6% ± 10.2%, p < 0.001, Table 1). In the RHVD group, LVEDD, LVEDV(i), LVESV(i), LVSV(i), LVCO (LVCI), and LVM(i) were lower than in the DHVD group. However, there was no significant difference in LVEF between the two groups (42.4% ± 9.75% vs. 42.9% ± 11%, p = 0.877).
LA structure and function Patients with AF-VHD had larger LA size (LAAD and LAV), lower strain values (εs/εe/εa/SRs/SRe/SRa), and lower LATEF than control participants (all p < 0.001, Table 2). Compared with the DHVD group, the RHVD group showed lower LA strain parameters (εs/εe/εa/SRs/SRe/SRa) and lower LATEF (12.6% ± 3.3% vs. 19.4% ± 8.6%, p = 0.001). Comparison of LA parameters between the control group and patients with AF-VHD. Note: "*" indicates statistical significance. "a"/"b" indicates statistical significance between the control group and the RHVD group/DHVD group. Abbreviations: AF, atrial fibrillation; DHVD, degenerative heart valvular disease; LA, left atrial; LAAD, anteroposterior diameter of left atrium; LATEF, left atrial total ejection fraction; LAV(i)max/min, the maximum/minimum volume of left atrium (index); RHVD, rheumatic heart valvular disease; SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; VHD, valvular heart disease; εa, active strain; εe, passive strain; εs, total strain.
Correlation between strain parameters and LA function in patients with VHD LATEF was positively correlated with εs, εe, εa, and SRs (r = 0.856, p < 0.001; r = 0.837, p < 0.001; ρ = 0.501, p = 0.001; ρ = 0.562, p < 0.001, respectively) and negatively correlated with SRe and SRa (ρ = −0.407, p = 0.01; ρ = −0.429, p = 0.006, respectively; Figure 2, Table S1). Scatter plots showing correlations of left atrial total emptying fraction (LATEF) with εs, εe, εa, SRs, SRe, and SRa. SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; εa, active strain; εe, passive strain; εs, total strain.
Reproducibility Intraobserver and interobserver reproducibility of global LA strain, strain rate, and volumetric parameters using CMR are shown in Table 3. Intraobserver and interobserver reproducibility were excellent and good, respectively. For intraobserver reproducibility, LAVmax had the highest reproducibility (ICC, 0.99; 0.98–0.99). For interobserver reproducibility, LAVmax and LAVmin had the highest reproducibility (ICC, 0.98; 0.97–0.99). The least reproducible measurement for interobserver reproducibility was SRe (ICC, 0.88; 0.81–0.93). Intraobserver and interobserver repeatability of LA strain and strain rate. Abbreviations: ICC, intraclass correlation coefficient; LA, left atrial; LATEF, left atrial total ejection fraction; LAVmax/min, the maximum/minimum volume of left atrium; SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; εa, active strain; εe, passive strain; εs, total strain.
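The correlation and reproducibility statistics quoted above follow standard recipes (Pearson/Spearman for association, a two-way ICC for observer agreement). A minimal sketch with scipy, assuming two hypothetical measurement arrays (the study's raw per-patient values are not part of this record; the pingouin dependency is an assumption, not something the paper names):

```python
# Sketch: Pearson/Spearman correlation and a two-way ICC, as used for the
# LATEF-vs-strain correlations and the observer-agreement tables.
# Arrays are synthetic placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
latef = rng.uniform(8, 30, 39)                # hypothetical LATEF (%) for 39 patients
eps_s = 0.8 * latef + rng.normal(0, 2, 39)    # hypothetical total strain

r, p = stats.pearsonr(latef, eps_s)           # linear association
rho, p_rho = stats.spearmanr(latef, eps_s)    # rank-based association
print(f"Pearson r={r:.3f} (p={p:.3g}), Spearman rho={rho:.3f} (p={p_rho:.3g})")

# Agreement between two observers, via pingouin's ICC if it is installed.
obs1 = eps_s
obs2 = eps_s + rng.normal(0, 1, 39)           # second reader with small noise
try:
    import pandas as pd
    import pingouin as pg
    df = pd.DataFrame({
        "subject": np.tile(np.arange(39), 2),
        "rater": np.repeat(["A", "B"], 39),
        "value": np.concatenate([obs1, obs2]),
    })
    icc = pg.intraclass_corr(data=df, targets="subject",
                             raters="rater", ratings="value")
    print(icc[["Type", "ICC"]])
except ImportError:
    pass  # pingouin not available; any two-way random/mixed ICC works
```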
CONCLUSION
CMR-FT is a reliable tool with good clinical feasibility and repeatability for noninvasive quantitative assessment of LA strain and strain rate (LA function) without a contrast agent. It may provide insight into LA performance over time in patients with AF-VHD and has potential clinical value for guiding treatment, prognosis, evaluation, and risk stratification.
[ "INTRODUCTION", "Patient selection", "CMR imaging acquisition", "Imaging analysis", "Statistical analysis", "Basic demographic characteristics", "LV parameters", "LA structure and function", "Correlation between strain parameters and LA function in patients with VHD", "Reproducibility", "LIMITATION", "AUTHOR CONTRIBUTIONS" ]
[ "Atrial fibrillation (AF) is the most common human tachyarrhythmia diagnosed clinically; patients with AF are at increased risk of stroke and heart failure, in addition to a decreased quality of life and lower survival. 1, 2 AF is associated with profound structural and functional alterations of the atrial myocardium, 3 promoting a true atrial cardiomyopathy, the severity of which is an important determinant of AF recurrence and response to treatment. 4, 5, 6 Stroke prevention is a pivotal part of the treatment of patients with AF. 7 Patients with AF and concurrent valvular heart disease (AF-VHD) have an even higher thromboembolic risk than those with AF alone. 7, 8 Patients with AF have a 5-fold increased risk of stroke compared with patients without cardiovascular disease, and patients with AF coupled with mitral stenosis have a 20-fold risk of stroke. 9 It is of major clinical interest to have a new imaging biomarker with which to quantify the degree of LA dysfunction and make earlier clinical decisions in patients with AF-VHD in an effort to prevent cardiac events. Normal LA function includes reservoir function (collection of pulmonary venous blood during left ventricular [LV] systole), conduit function (passage of pulmonary venous blood flow to the LV during early LV diastole), and booster pump function (augmentation of LV filling during late LV diastole/atrial systole). 10 It is important to recognize the interplay between these atrial functions and ventricular performance throughout the cardiac cycle. Previous publications investigating LA function have focused primarily on LA size and volume. 11 Given the LA's complex geometry, intricate fiber orientation, and the variable contributions of its appendage and pulmonary veins, these two parameters alone may be insufficient to describe the complexity of LA function and wall motion. 12, 13, 14 LA strain assessed by cardiovascular magnetic resonance imaging feature tracking (CMR-FT) has been used in many cardiovascular diseases and has enhanced diagnostic value; 12, 13, 14 it may be more sensitive than conventional LA volumetric parameters 12 and LV function 13, 14 and has shown good intra-observer and interobserver reproducibility in normal volunteers. 15 Because the LA wall is very thin, LA strain is challenging to measure, 15 and radial strain in particular has been noted as difficult to obtain, with poor reproducibility. 15, 16 Echocardiographic data have likewise shown poor reproducibility of strain rate and radial strain 16 and are limited by two-dimensional approaches that yield semi-quantitative, subjective measurements. 17, 18 Studies of LA function clearly provide new insights into the contribution of LA performance to cardiovascular disease and are promising tools for predicting cardiovascular events in a wide range of patient populations. Considerable data also support the use of LAEF for predicting events. 12 Accordingly, LA function indices such as strain and strain rate have been proposed using noninvasive imaging modalities such as echocardiographic speckle tracking 19, 20 and color tissue Doppler. Although speckle tracking is presently the only available reference for LA strain estimation, ultrasound beam direction and heart motion relative to the probe may influence measurements, and inter-vendor variability, described mainly in the setting of LV function, needs to be further investigated. 12, 21 CMR-FT is a new method for evaluating myocardial strain and strain rate; it can be applied to routine cine images, offers high spatial resolution, a large field of view, and good reproducibility, and reflects the functional characteristics of myocardial tissue more sensitively. 22 As a newer technique, CMR-FT has mainly been used to study LV strain in recent years 23 and has rarely been applied to the LA. The aim of this study was to evaluate LA strain and strain rate using CMR-FT before valve replacement surgery, to assess the feasibility and reproducibility of CMR-FT for quantifying global LA function in patients with AF-VHD, and to review the clinical application value of CMR-FT. In addition, we compared global LA function between patients with AF-VHD and those without cardiac disease.", "From July 1, 2020, to January 31, 2021, 39 consecutive patients with AF-VHD who were admitted to the department of cardiac surgery in our hospital and underwent MRI to assess left atrial function, before either valve replacement surgery alone or valve replacement surgery with the Maze procedure to correct valves and rhythm, were included in this study. All patients had persistent AF and mitral stenosis. The clinical characteristics, echocardiography findings on admission, and cardiac MRI data were retrospectively collected. The inclusion criteria 24 were as follows: (1) age >18 years with AF and planned cardiac surgery to correct the rhythm and valves; (2) AF of more than 30 s as recorded by electrocardiogram (ECG) or dynamic ECG; and (3) treatment according to the American Heart Association/American College of Cardiology/Heart Rhythm Society (AHA/ACC/HRS) guidelines for the management of AF 9 and VHD. 4, 25 According to etiology, the patients were divided into two groups: a degenerative heart valvular disease (DHVD) group and a rheumatic heart valvular disease (RHVD) group. Exclusion criteria were previous valvular surgery or ablation for AF. At the same time, 15 healthy participants with normal cardiac MRI in our hospital were included as the healthy control group. Exclusion criteria for the healthy control group were claustrophobia, coronary/congenital/valvular heart disease, hypertension, diabetes, severe arrhythmia (e.g., atrial/ventricular arrhythmia, atrioventricular block, pre-excitation syndrome, and sick sinus syndrome), chronic kidney disease, myocarditis, and cardiomyopathy.", "CMR studies were performed using a 3.0T MR scanner (Verio, Siemens Medical Systems) and a 32-channel phased-array body coil. All images were ECG gated; patients were placed in the supine position and required to hold their breath during image capture. Cine images were acquired in short-axis views and longitudinal two-, three-, and four-chamber views using True Fast Imaging with Steady-State Precession (TrueFISP) sequences covering the entire LV and LA (typical field of view 360 × 360 mm, matrix size 216 × 256, slice thickness 6 mm with no gap, repetition time 40.68 ms, echo time 1.49 ms, flip angle 50°).
In addition, we used the Siemens cardiac shim mode to adapt the adjustment volume and reduce dark-band artifacts.", "Images were analyzed using CVI42 (Circle, version 5.12.1). (1) LV volumetric parameters: LV end-diastolic volume (LVEDV, the maximum LV end-diastolic filling volume), end-systolic volume (LVESV, the minimum LV volume at the end of ventricular ejection), ejection fraction (LVEF), cardiac output (LVCO), cardiac index (LVCI), and LV mass (LVM) were measured. LV endocardial and epicardial contours were drawn on LV short-axis cine images, excluding the papillary muscles. All parameters of the LV and LA were indexed to body surface area (BSA); for example, LVEDV index (LVEDVi) = LVEDV/BSA, and so forth. (2) LA strain and strain rate: LA myocardial deformation was quantified using CVI42 Tissue Tracking software. 15, 26 LA endocardial and epicardial borders were manually delineated in the apical four- and two-chamber views at end-diastole using a point-and-click approach before the automated tracking algorithm was applied. The pulmonary veins and LA appendage were not included (Figure S1). The software strain analysis model automatically provided the LA strain and strain rate curves. The endocardial LA global longitudinal strain and strain rate values were recorded from the curves 11: total strain (εs), active strain (εa), passive strain (εe, εe = εs − εa), peak first positive strain rate (SRs), peak early (first) negative strain rate (SRe), and peak late (second) negative strain rate (SRa). εs and SRs correspond to LA reservoir function; εe and SRe to LA conduit function; εa and SRa to LA booster pump function 3, 27 (Figure 1). (3) LA volume (LAV): the LA endocardium was manually labeled at the end-diastolic atrial minimum volume at the four-chamber and two-chamber levels using CVI42. The pulmonary veins and LA appendage were not included. The LAV parameters were obtained using Simpson's method and included the maximum volume (LAVmax, just before the end-systolic mitral valve opened) and the minimum volume (LAVmin, when the end-diastolic mitral valve had just closed). LA total ejection fraction (LATEF) = (LAVmax − LAVmin)/LAVmax × 100%. (4) Intraobserver and interobserver variability of the LA parameter measurements was assessed by the intraclass correlation coefficient (ICC). 28 Intraobserver reproducibility was established by the same observer (J. H., 5 years of experience in CMR diagnosis), who reanalyzed the same subjects after 1 month. Interobserver reproducibility was assessed by a second independent observer (Y. S., 3 years of experience in CMR diagnosis), who was blinded to the first observer's results. The left atrial (LA) strain and strain rate curve. Global endocardial LA strain and strain rate values (yellow line) were recorded. SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; εs, total strain; εa, active strain; εe, εs − εa, passive strain. εs and SRs correspond to LA reservoir function; εe and SRe to LA conduit function; εa and SRa to LA booster pump function.", "Data were analyzed using SPSS, version 20.0 (SPSS Statistics, IBM Corporation). Mean ± SD or median (quartiles) was used to express measurement data according to the normality of their distribution, and continuous variables were analyzed by independent-samples t-test or Mann-Whitney U test. The χ2 or Fisher exact test was used to assess categorical data. Pearson or Spearman correlation was performed to investigate the potential relationship between LA strain parameters and LA function. Moreover, we assessed the ICC to evaluate the accuracy and precision of the method for measuring each LA parameter. ICC was scored as follows: poor reliability, ICC < 0.50; moderate reliability, ICC 0.50–0.75; good reliability, ICC 0.75–0.9; excellent reliability, ICC > 0.9. 28 A p-value < 0.05 was considered statistically significant.", "A total of 39 patients (22 RHVD and 17 DHVD) and 15 control participants were included in the study. Baseline characteristics and volumetric chamber indices for all participants are summarized in Table 1. The AF-VHD group had a higher heart rate than the control group (94.8 ± 23.6 bpm vs. 77.3 ± 9.22 bpm; p < 0.001), especially the RHVD group. The RHVD group showed lower body mass index (25.8 ± 2.48 kg/m2 vs. 27.4 ± 2.09 kg/m2, p = 0.048) and BSA (1.84 ± 0.09 m2 vs. 1.92 ± 0.1 m2, p = 0.015) than the DHVD group. The RHVD group had more diagnoses of mitral stenosis (21/22 vs. 1/17, p < 0.001), and all patients in the AF-VHD group had mitral regurgitation; there was no statistically significant difference in tricuspid regurgitation (12/22 vs. 12/17, p = 0.343). There were no statistically significant differences in the severity of valvular heart disease or the proportion of presurgery medications (anti-platelets, anti-coagulation, anti-hypertensives, anticholesterol drugs, hypoglycemics, and antiarrhythmic drugs) between the two groups (all p > 0.05). Baseline and clinical characteristics of participants. Note: "*" indicates statistical significance. "a"/"b" indicates statistical significance between the control group and the RHVD group/DHVD group. Abbreviations: AF, atrial fibrillation; CI, cardiac index; CO, cardiac output; DHVD, degenerative heart valvular disease; EDD, end-diastolic diameter; EDV/ESV(i), end-diastolic/end-systolic volume (index); EF, emptying fraction; LV, left ventricular; LVM(i), LV mass (index); NYHA, New York Heart Association; RHVD, rheumatic heart valvular disease; SV(i), stroke volume (index); TIA, transient ischemic attack; VHD, valvular heart disease.", "Compared with the control group, the AF-VHD group showed higher LVEDD, LVEDV(i), and LVESV(i), and lower LVEF (59.9% ± 5.8% vs. 42.6% ± 10.2%, p < 0.001, Table 1). In the RHVD group, LVEDD, LVEDV(i), LVESV(i), LVSV(i), LVCO (LVCI), and LVM(i) were lower than in the DHVD group. However, there was no significant difference in LVEF between the two groups (42.4% ± 9.75% vs. 42.9% ± 11%, p = 0.877).", "Patients with AF-VHD had larger LA size (LAAD and LAV), lower strain values (εs/εe/εa/SRs/SRe/SRa), and lower LATEF than control participants (all p < 0.001, Table 2). Compared with the DHVD group, the RHVD group showed lower LA strain parameters (εs/εe/εa/SRs/SRe/SRa) and lower LATEF (12.6% ± 3.3% vs. 19.4% ± 8.6%, p = 0.001). Comparison of LA parameters between the control group and patients with AF-VHD. Note: "*" indicates statistical significance. "a"/"b" indicates statistical significance between the control group and the RHVD group/DHVD group. Abbreviations: AF, atrial fibrillation; DHVD, degenerative heart valvular disease; LA, left atrial; LAAD, anteroposterior diameter of left atrium; LATEF, left atrial total ejection fraction; LAV(i)max/min, the maximum/minimum volume of left atrium (index); RHVD, rheumatic heart valvular disease; SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; VHD, valvular heart disease; εa, active strain; εe, passive strain; εs, total strain.", "LATEF was positively correlated with εs, εe, εa, and SRs (r = 0.856, p < 0.001; r = 0.837, p < 0.001; ρ = 0.501, p = 0.001; ρ = 0.562, p < 0.001, respectively) and negatively correlated with SRe and SRa (ρ = −0.407, p = 0.01; ρ = −0.429, p = 0.006, respectively; Figure 2, Table S1). Scatter plots showing correlations of left atrial total emptying fraction (LATEF) with εs, εe, εa, SRs, SRe, and SRa. SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; εa, active strain; εe, passive strain; εs, total strain.", "Intraobserver and interobserver reproducibility of global LA strain, strain rate, and volumetric parameters using CMR are shown in Table 3. Intraobserver and interobserver reproducibility were excellent and good, respectively. For intraobserver reproducibility, LAVmax had the highest reproducibility (ICC, 0.99; 0.98–0.99). For interobserver reproducibility, LAVmax and LAVmin had the highest reproducibility (ICC, 0.98; 0.97–0.99). The least reproducible measurement for interobserver reproducibility was SRe (ICC, 0.88; 0.81–0.93). Intraobserver and interobserver repeatability of LA strain and strain rate. Abbreviations: ICC, intraclass correlation coefficient; LA, left atrial; LATEF, left atrial total ejection fraction; LAVmax/min, the maximum/minimum volume of left atrium; SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; εa, active strain; εe, passive strain; εs, total strain.", "This was a single-center study with a relatively modest sample size. We assessed only global longitudinal strain and did not assess radial strain, which has been noted as a parameter that is difficult to obtain and reproduce consistently. 16 At present, most research investigates longitudinal LA strain on the LA long axis, and most studies have obtained positive results. It is expected that the application of three-dimensional strain analysis technology will further improve accuracy in the future.", "Huishan Wang and Benqiang Yang contributed to the conception and design of the study; Jie Hou, Yu Sun, and Wei Wang contributed significantly to manuscript preparation; Jie Hou and Yu Sun analyzed the data; Jie Hou wrote the manuscript; Libo Zhang, Hongrui You, and Rongrong Zhang helped perform the analysis with constructive discussions and revised the manuscript. All authors gave final approval of the submission." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
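The volumetric and reliability definitions repeated in the methods entries above (BSA indexing, the LATEF formula, and the ICC reliability bands) reduce to a few one-liners. A minimal sketch, where the example input values are placeholders rather than measurements from the study:

```python
# Sketch of the volumetric formulas and ICC banding stated in the methods.
# Input values below are placeholders, not study data.

def indexed(volume_ml: float, bsa_m2: float) -> float:
    """BSA-indexed volume, e.g. LVEDVi = LVEDV / BSA (mL/m^2)."""
    return volume_ml / bsa_m2

def latef(lav_max: float, lav_min: float) -> float:
    """LA total ejection fraction: (LAVmax - LAVmin) / LAVmax * 100 (%)."""
    return (lav_max - lav_min) / lav_max * 100.0

def icc_band(icc: float) -> str:
    """Reliability label per the thresholds quoted in the statistics section."""
    if icc < 0.50:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc <= 0.90:
        return "good"
    return "excellent"

print(indexed(150.0, 1.84))            # LVEDVi for LVEDV = 150 mL, BSA = 1.84 m^2
print(latef(120.0, 95.0))              # about 20.8% with placeholder LA volumes
print(icc_band(0.88), icc_band(0.99))  # "good", "excellent"
```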
[ "INTRODUCTION", "MATERIAL AND METHODS", "Patient selection", "CMR imaging acquisition", "Imaging analysis", "Statistical analysis", "RESULTS", "Basic demographic characteristics", "LV parameters", "LA structure and function", "Correlation between strain parameters and LA function in patients with VHD", "Reproducibility", "DISCUSSION", "LIMITATION", "CONCLUSION", "CONFLICTS OF INTEREST", "AUTHOR CONTRIBUTIONS", "Supporting information" ]
[ "Atrial fibrillation (AF) is the most common human tachyarrhythmia diagnosed clinically; patients with AF are at an increased risk of stroke and heart failure, in addition to a decreased quality of life and lower survival.\n1\n, \n2\n AF is associated with profound structural and functional alterations of the atrial myocardium,\n3\n promoting a true atrial cardiomyopathy, the severity of which is an important determinant of AF recurrence and response to treatment.\n4\n, \n5\n, \n6\n Stroke prevention is a pivotal part of the treatment of patients with AF.\n7\n Patients with AF and concurrent valvular heart disease (AF‐VHD) have an even higher thromboembolic risk than those with AF alone.\n7\n, \n8\n Patients with AF have a 5‐fold increased risk of stroke compared with patients without cardiovascular disease, and patients with AF coupled with mitral stenosis have a 20‐fold risk of stroke.\n9\n\n\nIt is of major clinical interest to have a new imaging biomarker with which to quantify the degree of LA dysfunction and make earlier clinical decisions in patients with AF‐VHD in an effort to prevent cardiac events. The LA normal function includes reservoir (collection of pulmonary venous blood during left ventricular [LV] systole), conduit (passage of pulmonary venous blood flow to the LV during LV early‐diastole), and booster pump function (augmentation of LV filling during LV late‐diastole/atrial systole).\n10\n It is important to recognize the interplay that exists between these atrial functions and ventricular performance throughout the cardiac cycle. Previous publications investigating LA function have primarily focused on LA size and volume.\n11\n Due to the LA's complex geometry and intricate fiber orientation and the variable contributions of its appendage and pulmonary veins, these two parameters alone may be insufficient to describe the complexity of the LA function and wall motion.\n12\n, \n13\n, \n14\n\n\nLA strain assessed by cardiovascular magnetic resonance imaging feature tracking (CMR‐FT) has been used in many cardiovascular diseases and enhanced the diagnostic value,\n12\n, \n13\n, \n14\n which might be higher and more sensitive than conventional LA volumetric parameters\n12\n and LV function\n13\n, \n14\n and presented with good intra‐observer and interobserver reproducibility in normal volunteers.\n15\n Due to the very thin LA wall, it is challenging to measure the LA strain.\n15\n And radial strain has already been noted for its difficulty to be obtained with poor reproducibility.\n15\n, \n16\n Echocardiographic data also showed poor reproducibility of strain rate and radial strain\n16\n and was limited in the two‐dimensional approaches with a semi‐quantitative and subjective measure.\n17\n, \n18\n It is clear that studies of LA function provide new insights into the contribution of LA performance to cardiovascular disease and are promising tools for predicting cardiovascular events in a wide range of patient populations. Considerable data also support the use of LAEF for predicting events.\n12\n Accordingly, LA function indices such as strain and strain rate have been proposed using noninvasive imaging modalities such as echocardiography speckle tracking\n19\n, \n20\n and color tissue doppler. 
Although speckle tracking is presently the only available reference for LA strain estimation, ultrasound beam direction as well as heart motion relative to the probe may influence measurements and inter‐vendor variability, essentially described in the setting of LV function, need to be further investigated.\n12\n, \n21\n\n\nCMR‐FT is a new method to evaluate myocardial strain and strain rate; it can be applied to routine cine images and has the advantages of high spatial resolution, large field of view, good reproducibility, and it can more sensitively reflect the functional characteristics of myocardial tissue.\n22\n As a new technique, CMR‐FT has been mainly used in the study of LV strain in recent years,\n23\n but has rarely been applied to analysis of LA. The aim of this study was to evaluate LA strain and strain rate using CMR‐FT before valve replacement surgery, assess the feasibility and reproducibility of CMR‐FT for the quantification of global LA function in patients with AF‐VHD, and review the clinical application value of CMR‐FT. In addition, we compared global LA function between patients with AF‐VHD and those without cardiac disease.", "This study was approved by the local ethics committee. All patients or their families provided written informed consent for the examination.\nPatient selection From July 1, 2020, to January 31, 2021, 39 consecutive patients with AF‐VHD who were admitted to the department of cardiac surgery in our hospital, and performed MRI to assess the left atrium function before valve replacement surgery only or valve replacement surgery with Maze procedure to correct valves and rhythm were included in this study. All patients were with persistent AF and mitral stenosis. The clinical characteristics, echocardiography findings on admission, and cardiac MRI data were retrospectively collected.\nThe inclusion criteria\n24\n were as follows\n1\n: Patients' age >18 years with AF who were planning to undergo cardiac surgery to correct the rhythm and valves\n2\n; AF of more than 30 s as recorded by electrocardiogram (ECG) or dynamic ECG; and\n3\n patients who were treated according to the American Heart Association/American College of Cardiology/Heart Rhythm Society (AHA/ACC/HRS) AF guidelines for the management of AF\n9\n and VHD.\n4\n, \n25\n According to etiological analysis, the patients divided into two groups, degenerative heart valvular disease (DHVD) group and rheumatic heart valvular disease (RHVD) group. Exclusion criteria included patients who had undergone previous valvular surgery or ablation for AF.\nAt the same time, 15 healthy participants with normal cardiac MRI in our hospital were included as the healthy control group. The healthy control group excluded claustrophobia, coronary/congenital/valvular heart disease, hypertension, diabetes, severe arrhythmia (e.g., atrial/ventricular arrhythmia, atrioventricular block, pre‐excitation syndrome, and sick sinus syndrome), chronic kidney disease, myocarditis, and cardiomyopathy.\nFrom July 1, 2020, to January 31, 2021, 39 consecutive patients with AF‐VHD who were admitted to the department of cardiac surgery in our hospital, and performed MRI to assess the left atrium function before valve replacement surgery only or valve replacement surgery with Maze procedure to correct valves and rhythm were included in this study. All patients were with persistent AF and mitral stenosis. 
The clinical characteristics, echocardiography findings on admission, and cardiac MRI data were retrospectively collected.\nThe inclusion criteria\n24\n were as follows\n1\n: Patients' age >18 years with AF who were planning to undergo cardiac surgery to correct the rhythm and valves\n2\n; AF of more than 30 s as recorded by electrocardiogram (ECG) or dynamic ECG; and\n3\n patients who were treated according to the American Heart Association/American College of Cardiology/Heart Rhythm Society (AHA/ACC/HRS) AF guidelines for the management of AF\n9\n and VHD.\n4\n, \n25\n According to etiological analysis, the patients divided into two groups, degenerative heart valvular disease (DHVD) group and rheumatic heart valvular disease (RHVD) group. Exclusion criteria included patients who had undergone previous valvular surgery or ablation for AF.\nAt the same time, 15 healthy participants with normal cardiac MRI in our hospital were included as the healthy control group. The healthy control group excluded claustrophobia, coronary/congenital/valvular heart disease, hypertension, diabetes, severe arrhythmia (e.g., atrial/ventricular arrhythmia, atrioventricular block, pre‐excitation syndrome, and sick sinus syndrome), chronic kidney disease, myocarditis, and cardiomyopathy.\nCMR imaging acquisition CMR studies were performed using a 3.0T MR scanner (Verio, Siemens Medical Systems) and a 32‐channel phased‐array body coil. All images were ECG gated; patients were placed in the supine position and required to hold their breath during image capture. Cine images were acquired in the short‐axis views and longitudinal two‐, three‐ and four‐chamber views using True Fast imaging with Steady‐State Precession (TrueFISP) imaging sequences covering the entire LV and LA (typical field of view 360 × 360 mm, matrix size 216 × 256, slice thickness of 6 mm with no gap, repetition time 40.68 ms, echo time 1.49 ms, flip angle 50°). In addition, we used the cardiac shim model of SIEMENS to adapt adjustment volume to reduce dark band artifacts.\nCMR studies were performed using a 3.0T MR scanner (Verio, Siemens Medical Systems) and a 32‐channel phased‐array body coil. All images were ECG gated; patients were placed in the supine position and required to hold their breath during image capture. Cine images were acquired in the short‐axis views and longitudinal two‐, three‐ and four‐chamber views using True Fast imaging with Steady‐State Precession (TrueFISP) imaging sequences covering the entire LV and LA (typical field of view 360 × 360 mm, matrix size 216 × 256, slice thickness of 6 mm with no gap, repetition time 40.68 ms, echo time 1.49 ms, flip angle 50°). In addition, we used the cardiac shim model of SIEMENS to adapt adjustment volume to reduce dark band artifacts.\nImaging analysis Images were analyzed using CVI42 (Circle, version 5.12.1).\n(1)LV volumetric parameters: LV end‐diastolic volume (LVEDV, the maximum left ventricular end diastolic filling volume), end‐systolic volume (LVESV, the minimum left ventricular volume at the end of ventricular ejection), ejection fraction (LVEF), cardiac output (LVCO), cardiac index (LVCI) and LV mass (LVM) were measured. LV endocardial and epicardial contours were drawn on LV short‐axis cine images, excluding the papillary muscles. 
All parameters of LV and LA were corrected according to body surface area (BSA), for example, LVEDV index (LVEDVi) = LVEDV/BSA × 100%, and so forth.(2)LA strain and strain rate: the LA myocardial deformation was quantified using CVI 42 Tissue Tracking software.\n15\n, \n26\n LA endocardial and epicardial borders were manually delineated in the apical four‐ and two‐chamber views at end‐diastole using a point‐and‐click approach before the automated tracking algorithm was applied. The pulmonary vein and LA appendage were not included (Figure S1). The software strain analysis model automatically provided the LA strain and strain rate curves. The endocardial LA global longitudinal strain and strain rate values were recorded from the curves\n11\n: total strain (ε\ns), active strain (ε\na), passive strain (ε\ne, ε\ne = ε\ns−ε\na), peak first positive strain rate (SRs), peak early (first) negative strain rate (SRe), and peak late (second) negative LA strain rate(SRa). ε\ns and SRs, corresponding to LA reservoir function; ε\ne and SRe, corresponding to LA conduit function; ε\na and SRa, corresponding to LA booster pump function\n3\n, \n27\n(Figure 1).(3)LA volume (LAV): The LA endocardium was manually labeled by the end‐diastolic atrial minimum volume at the four‐chamber and two‐chamber levels using CVI 42. The pulmonary vein and LA appendage were not included. The parameters of the LA volume were obtained using Simpson's method. The parameters of LAV included the maximum volume (LAVmax, before the end‐systolic mitral valve was opened) and the minimum volume (LAVmin, when the end‐diastolic mitral valve was just closed). LA total ejection fraction (LATEF) = (LAVmax − LAVmin)/LAVmax × 100%.(4)The intraobserver and interobserver variability for the LA parameters, measurements were assessed by the intraclass correlation coefficient (ICC).\n28\n Intraobserver reproducibility was established by the same observer (J. H., 5‐year experience in CMR diagnosis) who reanalyzed the same subjects after 1 month. Interobserver reproducibility was assessed by a second independent observer (Y. S., 3‐year experience in CMR diagnosis) who was blinded to the first observer's results.\n\nLV volumetric parameters: LV end‐diastolic volume (LVEDV, the maximum left ventricular end diastolic filling volume), end‐systolic volume (LVESV, the minimum left ventricular volume at the end of ventricular ejection), ejection fraction (LVEF), cardiac output (LVCO), cardiac index (LVCI) and LV mass (LVM) were measured. LV endocardial and epicardial contours were drawn on LV short‐axis cine images, excluding the papillary muscles. All parameters of LV and LA were corrected according to body surface area (BSA), for example, LVEDV index (LVEDVi) = LVEDV/BSA × 100%, and so forth.\nLA strain and strain rate: the LA myocardial deformation was quantified using CVI 42 Tissue Tracking software.\n15\n, \n26\n LA endocardial and epicardial borders were manually delineated in the apical four‐ and two‐chamber views at end‐diastole using a point‐and‐click approach before the automated tracking algorithm was applied. The pulmonary vein and LA appendage were not included (Figure S1). The software strain analysis model automatically provided the LA strain and strain rate curves. 
The endocardial LA global longitudinal strain and strain rate values were recorded from the curves\n11\n: total strain (ε\ns), active strain (ε\na), passive strain (ε\ne, ε\ne = ε\ns−ε\na), peak first positive strain rate (SRs), peak early (first) negative strain rate (SRe), and peak late (second) negative LA strain rate(SRa). ε\ns and SRs, corresponding to LA reservoir function; ε\ne and SRe, corresponding to LA conduit function; ε\na and SRa, corresponding to LA booster pump function\n3\n, \n27\n(Figure 1).\nLA volume (LAV): The LA endocardium was manually labeled by the end‐diastolic atrial minimum volume at the four‐chamber and two‐chamber levels using CVI 42. The pulmonary vein and LA appendage were not included. The parameters of the LA volume were obtained using Simpson's method. The parameters of LAV included the maximum volume (LAVmax, before the end‐systolic mitral valve was opened) and the minimum volume (LAVmin, when the end‐diastolic mitral valve was just closed). LA total ejection fraction (LATEF) = (LAVmax − LAVmin)/LAVmax × 100%.\nThe intraobserver and interobserver variability for the LA parameters, measurements were assessed by the intraclass correlation coefficient (ICC).\n28\n Intraobserver reproducibility was established by the same observer (J. H., 5‐year experience in CMR diagnosis) who reanalyzed the same subjects after 1 month. Interobserver reproducibility was assessed by a second independent observer (Y. S., 3‐year experience in CMR diagnosis) who was blinded to the first observer's results.\nThe left atrial (LA) strain and strain rate curve. Global endocardial LA strain and strain rate values (yellow line) were recorded. SRa, peak late negative strain rate; SRe, peak early negative rate; SRs, peak positive strain rate; ε\ns, total strain; ε\na, active strain; ε\ne, ε\ns−ε\na, passive strain. ε\ns and SRs, corresponding to LA reservoir function; ε\ne and SRe, corresponding to LA conduit function; ε\na and SRa, corresponding to LA booster pump function\nImages were analyzed using CVI42 (Circle, version 5.12.1).\n(1)LV volumetric parameters: LV end‐diastolic volume (LVEDV, the maximum left ventricular end diastolic filling volume), end‐systolic volume (LVESV, the minimum left ventricular volume at the end of ventricular ejection), ejection fraction (LVEF), cardiac output (LVCO), cardiac index (LVCI) and LV mass (LVM) were measured. LV endocardial and epicardial contours were drawn on LV short‐axis cine images, excluding the papillary muscles. All parameters of LV and LA were corrected according to body surface area (BSA), for example, LVEDV index (LVEDVi) = LVEDV/BSA × 100%, and so forth.(2)LA strain and strain rate: the LA myocardial deformation was quantified using CVI 42 Tissue Tracking software.\n15\n, \n26\n LA endocardial and epicardial borders were manually delineated in the apical four‐ and two‐chamber views at end‐diastole using a point‐and‐click approach before the automated tracking algorithm was applied. The pulmonary vein and LA appendage were not included (Figure S1). The software strain analysis model automatically provided the LA strain and strain rate curves. The endocardial LA global longitudinal strain and strain rate values were recorded from the curves\n11\n: total strain (ε\ns), active strain (ε\na), passive strain (ε\ne, ε\ne = ε\ns−ε\na), peak first positive strain rate (SRs), peak early (first) negative strain rate (SRe), and peak late (second) negative LA strain rate(SRa). 
ε\ns and SRs, corresponding to LA reservoir function; ε\ne and SRe, corresponding to LA conduit function; ε\na and SRa, corresponding to LA booster pump function\n3\n, \n27\n(Figure 1).(3)LA volume (LAV): The LA endocardium was manually labeled by the end‐diastolic atrial minimum volume at the four‐chamber and two‐chamber levels using CVI 42. The pulmonary vein and LA appendage were not included. The parameters of the LA volume were obtained using Simpson's method. The parameters of LAV included the maximum volume (LAVmax, before the end‐systolic mitral valve was opened) and the minimum volume (LAVmin, when the end‐diastolic mitral valve was just closed). LA total ejection fraction (LATEF) = (LAVmax − LAVmin)/LAVmax × 100%.(4)The intraobserver and interobserver variability for the LA parameters, measurements were assessed by the intraclass correlation coefficient (ICC).\n28\n Intraobserver reproducibility was established by the same observer (J. H., 5‐year experience in CMR diagnosis) who reanalyzed the same subjects after 1 month. Interobserver reproducibility was assessed by a second independent observer (Y. S., 3‐year experience in CMR diagnosis) who was blinded to the first observer's results.\n\nLV volumetric parameters: LV end‐diastolic volume (LVEDV, the maximum left ventricular end diastolic filling volume), end‐systolic volume (LVESV, the minimum left ventricular volume at the end of ventricular ejection), ejection fraction (LVEF), cardiac output (LVCO), cardiac index (LVCI) and LV mass (LVM) were measured. LV endocardial and epicardial contours were drawn on LV short‐axis cine images, excluding the papillary muscles. All parameters of LV and LA were corrected according to body surface area (BSA), for example, LVEDV index (LVEDVi) = LVEDV/BSA × 100%, and so forth.\nLA strain and strain rate: the LA myocardial deformation was quantified using CVI 42 Tissue Tracking software.\n15\n, \n26\n LA endocardial and epicardial borders were manually delineated in the apical four‐ and two‐chamber views at end‐diastole using a point‐and‐click approach before the automated tracking algorithm was applied. The pulmonary vein and LA appendage were not included (Figure S1). The software strain analysis model automatically provided the LA strain and strain rate curves. The endocardial LA global longitudinal strain and strain rate values were recorded from the curves\n11\n: total strain (ε\ns), active strain (ε\na), passive strain (ε\ne, ε\ne = ε\ns−ε\na), peak first positive strain rate (SRs), peak early (first) negative strain rate (SRe), and peak late (second) negative LA strain rate(SRa). ε\ns and SRs, corresponding to LA reservoir function; ε\ne and SRe, corresponding to LA conduit function; ε\na and SRa, corresponding to LA booster pump function\n3\n, \n27\n(Figure 1).\nLA volume (LAV): The LA endocardium was manually labeled by the end‐diastolic atrial minimum volume at the four‐chamber and two‐chamber levels using CVI 42. The pulmonary vein and LA appendage were not included. The parameters of the LA volume were obtained using Simpson's method. The parameters of LAV included the maximum volume (LAVmax, before the end‐systolic mitral valve was opened) and the minimum volume (LAVmin, when the end‐diastolic mitral valve was just closed). 
Patient selection
From July 1, 2020, to January 31, 2021, we included 39 consecutive patients with AF‐VHD who were admitted to the department of cardiac surgery at our hospital and underwent MRI to assess left atrial function before valve replacement surgery alone or valve replacement combined with the Maze procedure to correct the valves and rhythm. All patients had persistent AF and mitral stenosis. The clinical characteristics, echocardiography findings on admission, and cardiac MRI data were collected retrospectively.
The inclusion criteria24 were as follows: (1) age >18 years with AF and planned cardiac surgery to correct the rhythm and valves; (2) AF lasting more than 30 s as recorded by electrocardiogram (ECG) or dynamic ECG; and (3) treatment according to the American Heart Association/American College of Cardiology/Heart Rhythm Society (AHA/ACC/HRS) guidelines for the management of AF9 and VHD.4, 25 According to etiology, the patients were divided into two groups: a degenerative heart valvular disease (DHVD) group and a rheumatic heart valvular disease (RHVD) group. Exclusion criteria were previous valvular surgery or ablation for AF.
In addition, 15 healthy participants with normal cardiac MRI at our hospital were included as the healthy control group. Exclusion criteria for the control group were claustrophobia, coronary/congenital/valvular heart disease, hypertension, diabetes, severe arrhythmia (e.g., atrial/ventricular arrhythmia, atrioventricular block, pre‐excitation syndrome, and sick sinus syndrome), chronic kidney disease, myocarditis, and cardiomyopathy.

CMR imaging acquisition
CMR studies were performed using a 3.0T MR scanner (Verio, Siemens Medical Systems) and a 32‐channel phased‐array body coil. All images were ECG gated; patients were placed in the supine position and asked to hold their breath during image acquisition. Cine images were acquired in the short‐axis views and in the longitudinal two‐, three‐, and four‐chamber views using True Fast Imaging with Steady‐State Precession (TrueFISP) sequences covering the entire LV and LA (typical field of view 360 × 360 mm, matrix size 216 × 256, slice thickness 6 mm with no gap, repetition time 40.68 ms, echo time 1.49 ms, flip angle 50°). In addition, we used the Siemens cardiac shim mode with an adapted adjustment volume to reduce dark‐band artifacts.

Statistical analysis
Data were analyzed using SPSS, version 20.0 (SPSS Statistics, IBM Corporation). Measurement data are expressed as mean ± SD or median (quartiles) according to the normality of their distribution, and continuous variables were compared with the independent‐samples t‐test or the Mann–Whitney U test. The χ2 test or Fisher's exact test was used for categorical data. Pearson or Spearman correlation was performed to investigate the potential relationship between LA strain parameters and LA function. In addition, the ICC was used to evaluate the accuracy and precision of the measurement of each LA parameter, scored as follows: poor reliability, ICC < 0.50; moderate reliability, ICC 0.50–0.75; good reliability, ICC 0.75–0.90; excellent reliability, ICC > 0.90.28 A p‐value < 0.05 was considered statistically significant.
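The comparison strategy above (a normality‐dependent choice between the t‐test and the Mann–Whitney U test, with χ2/Fisher tests for categorical data) can be sketched as follows. This is an illustrative translation into SciPy, not the SPSS workflow actually used, and all data in the example are synthetic placeholders.

```python
# Minimal sketch of the group-comparison logic described above, using SciPy
# in place of SPSS. All inputs here are hypothetical.
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """t-test when both samples pass a Shapiro normality check, else Mann-Whitney U."""
    normal = stats.shapiro(a)[1] > alpha and stats.shapiro(b)[1] > alpha
    if normal:
        return "independent-samples t-test", stats.ttest_ind(a, b)[1]
    return "Mann-Whitney U", stats.mannwhitneyu(a, b)[1]

rng = np.random.default_rng(1)
hr_af_vhd = rng.normal(94.8, 23.6, 39)   # heart rate, AF-VHD group (synthetic)
hr_control = rng.normal(77.3, 9.2, 15)   # heart rate, control group (synthetic)
print(compare_groups(hr_af_vhd, hr_control))

# Categorical data: chi-square test, or Fisher's exact test for a 2x2 table.
table = [[12, 10], [12, 5]]                        # hypothetical 2x2 counts
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)
```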
RESULTS

Basic demographic characteristics
A total of 39 patients (22 DHVD and 17 RHVD) and 15 control participants were included in the study. Baseline characteristics and volumetric chamber indices for all participants are summarized in Table 1. The AF‐VHD group had a higher heart rate than the control group (94.8 ± 23.6 bpm vs. 77.3 ± 9.22 bpm; p < 0.001), especially the RHVD group. The RHVD group had a lower body mass index (25.8 ± 2.48 kg/m2 vs. 27.4 ± 2.09 kg/m2, p = 0.048) and BSA (1.84 ± 0.09 m2 vs. 1.92 ± 0.1 m2, p = 0.015) than the DHVD group. The RHVD group had more diagnoses of mitral stenosis (21/22 vs. 1/17, p < 0.001), and all patients in the AF‐VHD group had mitral regurgitation; the difference in tricuspid regurgitation was not statistically significant (12/22 vs. 12/17, p = 0.343). There were no statistically significant differences in the severity of valvular heart disease or in the proportions of presurgery medications (antiplatelet, anticoagulant, antihypertensive, cholesterol‐lowering, hypoglycemic, and antiarrhythmic drugs) between the two groups (all p > 0.05).

Table 1. Baseline and clinical characteristics of participants
Note: “*” indicates statistical significance.
“a”/“b” indicates statistical significance between the control group and the RHVD/DHVD group, respectively.
Abbreviations: AF, atrial fibrillation; CI, cardiac index; CO, cardiac output; DHVD, degenerative heart valvular disease; EDD, end‐diastolic diameter; EDV/ESV(i), end‐diastolic/end‐systolic volume (index); EF, emptying fraction; LV, left ventricular; LVM(i), LV mass (index); NYHA, New York Heart Association; RHVD, rheumatic heart valvular disease; SV(i), stroke volume (index); TIA, transient ischemic attack; VHD, valvular heart disease.

LV parameters
Compared with the control group, the AF‐VHD group showed higher LVEDD, LVEDV(i), and LVESV(i) and a lower LVEF (59.9% ± 5.8% vs. 42.6% ± 10.2%, p < 0.001; Table 1). In the RHVD group, LVEDD, LVEDV(i), LVESV(i), LVSV(i), LVCO (LVCI), and LVM(i) were lower than in the DHVD group. However, there was no significant difference in LVEF between the two groups (42.4% ± 9.75% vs. 42.9% ± 11%, p = 0.877).

LA structure and function
Patients with AF‐VHD had a larger LA (LAAD and LAV) and lower strain values (εs/εe/εa/SRs/SRe/SRa) and LATEF than control participants (all p < 0.001; Table 2). Compared with the DHVD group, the RHVD group showed lower LA strain parameters (εs/εe/εa/SRs/SRe/SRa) and a lower LATEF (12.6% ± 3.3% vs. 19.4% ± 8.6%, p = 0.001).

Table 2. Comparison of LA parameters between the control group and patients with AF‐VHD
Note: “*” indicates statistical significance.
“a”/“b” indicates statistical significance between the control group and the RHVD/DHVD group, respectively.
Abbreviations: AF, atrial fibrillation; DHVD, degenerative heart valvular disease; LA, left atrial; LAAD, anteroposterior diameter of the left atrium; LATEF, left atrial total ejection fraction; LAV(i)max/min, maximum/minimum left atrial volume (index); RHVD, rheumatic heart valvular disease; SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; VHD, valvular heart disease; εa, active strain; εe, passive strain; εs, total strain.

Correlation between strain parameters and LA function in patients with VHD
LATEF was positively correlated with εs, εe, εa, and SRs (r = 0.856, p < 0.001; r = 0.837, p < 0.001; ρ = 0.501, p = 0.001; ρ = 0.562, p < 0.001, respectively) and negatively correlated with SRe and SRa (ρ = −0.407, p = 0.01; ρ = −0.429, p = 0.006, respectively; Figure 2, Table S1).

Figure 2. Scatter plots showing correlations of left atrial total emptying fraction (LATEF) with εs, εe, εa, SRs, SRe, and SRa. SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; εa, active strain; εe, passive strain; εs, total strain.
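For illustration, the reported r and ρ values correspond to the following kind of computation on paired per‐patient measurements. The sketch below uses synthetic data and is not the study's analysis code; Pearson's r applies to approximately normal pairs, Spearman's ρ otherwise, matching the mixed statistics reported above.

```python
# Illustrative correlation of LATEF with one strain parameter (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
latef = rng.uniform(8, 35, size=39)              # %, one value per patient
eps_s = 0.8 * latef + rng.normal(0, 2, size=39)  # synthetic total strain

r, p_pearson = stats.pearsonr(latef, eps_s)      # parametric (as for eps_s, eps_e)
rho, p_spearman = stats.spearmanr(latef, eps_s)  # rank-based (as for SRs, SRe, SRa)
print(f"Pearson r={r:.3f} (p={p_pearson:.3g}); Spearman rho={rho:.3f} (p={p_spearman:.3g})")
```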
Reproducibility
Intraobserver and interobserver reproducibility of the global LA strain, strain rate, and volumetric parameters on CMR are shown in Table 3. Intraobserver and interobserver reproducibility were excellent and good, respectively. For intraobserver reproducibility, LAVmax was the most reproducible parameter (ICC, 0.99; 0.98–0.99). For interobserver reproducibility, LAVmax and LAVmin were the most reproducible (ICC, 0.98; 0.97–0.99), and SRe was the least reproducible (ICC, 0.88; 0.81–0.93).

Table 3. Intraobserver and interobserver repeatability of LA strain and strain rate
Abbreviations: ICC, intraclass correlation coefficient; LA, left atrial; LATEF, left atrial total ejection fraction; LAVmax/min, maximum/minimum left atrial volume; SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; εa, active strain; εe, passive strain; εs, total strain.
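The ICC itself can be computed from a subjects × observers matrix of repeated measurements. The sketch below implements a two‐way random‐effects, single‐measure ICC (ICC(2,1)) together with the reliability bands quoted in the statistical analysis; the specific ICC model used in the study is not stated, so this formulation is an assumption, and the measurement matrix is hypothetical.

```python
# Sketch of a two-way random-effects, single-measure ICC (ICC(2,1)) for
# agreement between two observers, plus the reliability bands used above.
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """x: (n_subjects, k_raters) matrix of measurements (Shrout & Fleiss ICC(2,1))."""
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between-subjects MS
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between-raters MS
    sse = ((x - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                            # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def reliability(icc: float) -> str:
    """Bands from the statistical-analysis section."""
    if icc < 0.50: return "poor"
    if icc < 0.75: return "moderate"
    if icc <= 0.90: return "good"
    return "excellent"

# Hypothetical repeat measurements (rows: subjects, columns: observers):
m = np.array([[31.0, 30.5], [24.2, 25.0], [18.9, 19.4], [27.1, 26.8]])
icc = icc_2_1(m)
print(f"ICC = {icc:.3f} ({reliability(icc)})")
```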
DISCUSSION
Our study evaluated LA function in patients with AF‐VHD using CMR‐FT. We found that LA function in AF‐VHD was lower than in healthy control participants: the strain parameters showed reduced reservoir and conduit function and reduced or absent booster pump function. LATEF correlated linearly with the LA strain parameters. LA enlargement was also observed in AF‐VHD, indicating LA remodeling. In the subgroup analysis, the DHVD group had greater LV size, volume, and mass and higher strain parameters than the RHVD group. LA strain and volumetric parameters showed good reproducibility.
LA remodeling consists of both structural and functional changes; enlargement of the LA and fibrosis of the atrial muscle promote the persistence of AF,29 significantly affecting LA function.
The pathophysiology of AF is complex and incompletely understood.2 Currently, it is believed that AF induces electrical alterations within the atrial myocardium (electrical remodeling), which may also promote or accelerate myocardial apoptosis and fibrosis (anatomical remodeling),2, 30 after which the process becomes self‐perpetuating.31
LA size by volumetric index is widely accepted as a significant prognostic marker of mortality and outcomes in many cardiovascular diseases.11, 32 Le Tourneau et al.33 found that patients with organic mitral regurgitation in sinus rhythm who had LAVi ≥ 60 ml/m2 and were treated medically had increased mortality and more cardiac events (AF and heart failure); after mitral valve surgery, the cutoff of LAVi ≥ 60 ml/m2 lost its prognostic value. Caso et al.34 and Ancona et al.35 showed that SR and εs were better independent predictors of cardiac events in patients with asymptomatic rheumatic mitral stenosis followed for 3–4 years. These markers of risk are particularly important because, in clinical practice, risk stratification for mitral surgery helps with clinical decision‐making.33 Most of the patients with AF‐VHD in our study had concomitant mitral stenosis and mitral regurgitation and had a larger LAV than control participants, especially patients with DHVD. Mitral stenosis is associated with LA remodeling, increased LA stiffness, and abnormal atrial contractility. Habibi et al.36 found that LA reservoir, conduit, and booster pump function assessed by CMR‐FT were decreased and negatively correlated with LA myocardial fibrosis in patients with AF. LA strain analysis can detect a myocardial scar in the early clinical stages,37 indicating decreased LA compliance and indirectly reflecting the degree of myocardial fibrosis.38 Strain and SR represent the magnitude and rate, respectively, of myocardial deformation.11 Interestingly, most of the LA deformation parameters in our study were significantly associated with LATEF, suggesting that the LA strain parameters have a potential correlation with LA wall deformation. Atrial ejection force, the force exerted by the LA to accelerate blood into the LV, is another marker of atrial systolic function. LA enlargement and dysfunction (reduced reservoir and conduit function, reduced or absent booster pump function) were present in patients with AF‐VHD, suggesting atrial remodeling and impaired LA myocardial function. CMR‐FT may detect impaired LA strain before obvious changes in LA volume or conventional function appear and may therefore help guide clinical treatment in early‐stage cardiovascular disease.
Truong et al.15 using 1.5T MRI and Kowallick et al.39 using 3.0T MRI found that LA strain and strain rate measured with CMR‐FT had good intraobserver and interobserver reproducibility, in addition to good feasibility and reliability. Our study showed the same results.
However, Alfuhied et al.40 studied sixty participants (16 with aortic stenosis, 28 with type 2 diabetes, 10 with end‐stage renal disease on hemodialysis, and 10 healthy volunteers) who underwent repeated CMR scans and found that LA strain and strain rate assessed with CMR‐FT showed moderate to poor test–retest reproducibility across disease states, whereas LAV and emptying fraction were more reproducible. This may be related to the small numbers and the heterogeneity of disease in each of their groups. All of our patients had mitral valve disease and AF, and because the a‐wave peak (εa and SRa) is very low or absent in patients with AF‐VHD,3 it is not surprising that the global longitudinal εa and SRa showed good reproducibility.
Conventional size (volume) parameters can only reflect changes in LA structural remodeling, whereas strain parameters better reflect early functional changes in the LA. A comprehensive assessment provides more insight into LA function and therefore more information to guide clinical treatment. Future large‐scale studies are warranted to assess whether individual strain parameters measured with CMR‐FT provide additional clinical value in stroke risk assessment and in guiding anticoagulation therapy.

Limitations
This was a single‐center study with a relatively modest sample size. We assessed only global longitudinal strain, not radial strain, which has been noted as a parameter that is difficult to obtain and to reproduce consistently.16 At present, most research investigates longitudinal LA strain on the LA long axis, and most studies have obtained positive results. With the application of three‐dimensional strain analysis in the future, accuracy is expected to improve further.

CONCLUSIONS
CMR‐FT is a reliable tool with good clinical feasibility and repeatability for the noninvasive quantitative assessment of LA strain and strain rate (LA function) without the use of a contrast agent. It may provide insight into LA performance over time in patients with AF‐VHD and has potential clinical value in guiding treatment, prognosis, evaluation, and risk stratification.

CONFLICT OF INTEREST
The authors declare no conflicts of interest.

AUTHOR CONTRIBUTIONS
Huishan Wang and Benqiang Yang contributed to the conception and design of the study; Jie Hou, Yu Sun, and Wei Wang contributed significantly to manuscript preparation; Jie Hou and Yu Sun analyzed the data; Jie Hou wrote the manuscript; Libo Zhang, Hongrui You, and Rongrong Zhang helped perform the analysis with constructive discussions and revised the manuscript. All authors gave final approval of the submission.

SUPPORTING INFORMATION
Additional supporting information (Figure S1, Table S1) is available online.
[ null, "materials-and-methods", null, null, null, null, "results", null, null, null, null, null, "discussion", null, "conclusions", "COI-statement", null, "supplementary-material" ]
[ "atrial fibrillation", "cardiovascular magnetic resonance imaging", "feature tracking", "left atrial function", "strain", "valvular heart disease" ]
The inclusion criteria 24 were as follows 1 : Patients' age >18 years with AF who were planning to undergo cardiac surgery to correct the rhythm and valves 2 ; AF of more than 30 s as recorded by electrocardiogram (ECG) or dynamic ECG; and 3 patients who were treated according to the American Heart Association/American College of Cardiology/Heart Rhythm Society (AHA/ACC/HRS) AF guidelines for the management of AF 9 and VHD. 4 , 25 According to etiological analysis, the patients divided into two groups, degenerative heart valvular disease (DHVD) group and rheumatic heart valvular disease (RHVD) group. Exclusion criteria included patients who had undergone previous valvular surgery or ablation for AF. At the same time, 15 healthy participants with normal cardiac MRI in our hospital were included as the healthy control group. The healthy control group excluded claustrophobia, coronary/congenital/valvular heart disease, hypertension, diabetes, severe arrhythmia (e.g., atrial/ventricular arrhythmia, atrioventricular block, pre‐excitation syndrome, and sick sinus syndrome), chronic kidney disease, myocarditis, and cardiomyopathy. CMR imaging acquisition CMR studies were performed using a 3.0T MR scanner (Verio, Siemens Medical Systems) and a 32‐channel phased‐array body coil. All images were ECG gated; patients were placed in the supine position and required to hold their breath during image capture. Cine images were acquired in the short‐axis views and longitudinal two‐, three‐ and four‐chamber views using True Fast imaging with Steady‐State Precession (TrueFISP) imaging sequences covering the entire LV and LA (typical field of view 360 × 360 mm, matrix size 216 × 256, slice thickness of 6 mm with no gap, repetition time 40.68 ms, echo time 1.49 ms, flip angle 50°). In addition, we used the cardiac shim model of SIEMENS to adapt adjustment volume to reduce dark band artifacts. CMR studies were performed using a 3.0T MR scanner (Verio, Siemens Medical Systems) and a 32‐channel phased‐array body coil. All images were ECG gated; patients were placed in the supine position and required to hold their breath during image capture. Cine images were acquired in the short‐axis views and longitudinal two‐, three‐ and four‐chamber views using True Fast imaging with Steady‐State Precession (TrueFISP) imaging sequences covering the entire LV and LA (typical field of view 360 × 360 mm, matrix size 216 × 256, slice thickness of 6 mm with no gap, repetition time 40.68 ms, echo time 1.49 ms, flip angle 50°). In addition, we used the cardiac shim model of SIEMENS to adapt adjustment volume to reduce dark band artifacts. Imaging analysis Images were analyzed using CVI42 (Circle, version 5.12.1). (1)LV volumetric parameters: LV end‐diastolic volume (LVEDV, the maximum left ventricular end diastolic filling volume), end‐systolic volume (LVESV, the minimum left ventricular volume at the end of ventricular ejection), ejection fraction (LVEF), cardiac output (LVCO), cardiac index (LVCI) and LV mass (LVM) were measured. LV endocardial and epicardial contours were drawn on LV short‐axis cine images, excluding the papillary muscles. All parameters of LV and LA were corrected according to body surface area (BSA), for example, LVEDV index (LVEDVi) = LVEDV/BSA × 100%, and so forth.(2)LA strain and strain rate: the LA myocardial deformation was quantified using CVI 42 Tissue Tracking software. 
15 , 26 LA endocardial and epicardial borders were manually delineated in the apical four‐ and two‐chamber views at end‐diastole using a point‐and‐click approach before the automated tracking algorithm was applied. The pulmonary vein and LA appendage were not included (Figure S1). The software strain analysis model automatically provided the LA strain and strain rate curves. The endocardial LA global longitudinal strain and strain rate values were recorded from the curves 11 : total strain (ε s), active strain (ε a), passive strain (ε e, ε e = ε s−ε a), peak first positive strain rate (SRs), peak early (first) negative strain rate (SRe), and peak late (second) negative LA strain rate(SRa). ε s and SRs, corresponding to LA reservoir function; ε e and SRe, corresponding to LA conduit function; ε a and SRa, corresponding to LA booster pump function 3 , 27 (Figure 1).(3)LA volume (LAV): The LA endocardium was manually labeled by the end‐diastolic atrial minimum volume at the four‐chamber and two‐chamber levels using CVI 42. The pulmonary vein and LA appendage were not included. The parameters of the LA volume were obtained using Simpson's method. The parameters of LAV included the maximum volume (LAVmax, before the end‐systolic mitral valve was opened) and the minimum volume (LAVmin, when the end‐diastolic mitral valve was just closed). LA total ejection fraction (LATEF) = (LAVmax − LAVmin)/LAVmax × 100%.(4)The intraobserver and interobserver variability for the LA parameters, measurements were assessed by the intraclass correlation coefficient (ICC). 28 Intraobserver reproducibility was established by the same observer (J. H., 5‐year experience in CMR diagnosis) who reanalyzed the same subjects after 1 month. Interobserver reproducibility was assessed by a second independent observer (Y. S., 3‐year experience in CMR diagnosis) who was blinded to the first observer's results. LV volumetric parameters: LV end‐diastolic volume (LVEDV, the maximum left ventricular end diastolic filling volume), end‐systolic volume (LVESV, the minimum left ventricular volume at the end of ventricular ejection), ejection fraction (LVEF), cardiac output (LVCO), cardiac index (LVCI) and LV mass (LVM) were measured. LV endocardial and epicardial contours were drawn on LV short‐axis cine images, excluding the papillary muscles. All parameters of LV and LA were corrected according to body surface area (BSA), for example, LVEDV index (LVEDVi) = LVEDV/BSA × 100%, and so forth. LA strain and strain rate: the LA myocardial deformation was quantified using CVI 42 Tissue Tracking software. 15 , 26 LA endocardial and epicardial borders were manually delineated in the apical four‐ and two‐chamber views at end‐diastole using a point‐and‐click approach before the automated tracking algorithm was applied. The pulmonary vein and LA appendage were not included (Figure S1). The software strain analysis model automatically provided the LA strain and strain rate curves. The endocardial LA global longitudinal strain and strain rate values were recorded from the curves 11 : total strain (ε s), active strain (ε a), passive strain (ε e, ε e = ε s−ε a), peak first positive strain rate (SRs), peak early (first) negative strain rate (SRe), and peak late (second) negative LA strain rate(SRa). ε s and SRs, corresponding to LA reservoir function; ε e and SRe, corresponding to LA conduit function; ε a and SRa, corresponding to LA booster pump function 3 , 27 (Figure 1). 
LA volume (LAV): The LA endocardium was manually labeled by the end‐diastolic atrial minimum volume at the four‐chamber and two‐chamber levels using CVI 42. The pulmonary vein and LA appendage were not included. The parameters of the LA volume were obtained using Simpson's method. The parameters of LAV included the maximum volume (LAVmax, before the end‐systolic mitral valve was opened) and the minimum volume (LAVmin, when the end‐diastolic mitral valve was just closed). LA total ejection fraction (LATEF) = (LAVmax − LAVmin)/LAVmax × 100%. The intraobserver and interobserver variability for the LA parameters, measurements were assessed by the intraclass correlation coefficient (ICC). 28 Intraobserver reproducibility was established by the same observer (J. H., 5‐year experience in CMR diagnosis) who reanalyzed the same subjects after 1 month. Interobserver reproducibility was assessed by a second independent observer (Y. S., 3‐year experience in CMR diagnosis) who was blinded to the first observer's results. The left atrial (LA) strain and strain rate curve. Global endocardial LA strain and strain rate values (yellow line) were recorded. SRa, peak late negative strain rate; SRe, peak early negative rate; SRs, peak positive strain rate; ε s, total strain; ε a, active strain; ε e, ε s−ε a, passive strain. ε s and SRs, corresponding to LA reservoir function; ε e and SRe, corresponding to LA conduit function; ε a and SRa, corresponding to LA booster pump function Images were analyzed using CVI42 (Circle, version 5.12.1). (1)LV volumetric parameters: LV end‐diastolic volume (LVEDV, the maximum left ventricular end diastolic filling volume), end‐systolic volume (LVESV, the minimum left ventricular volume at the end of ventricular ejection), ejection fraction (LVEF), cardiac output (LVCO), cardiac index (LVCI) and LV mass (LVM) were measured. LV endocardial and epicardial contours were drawn on LV short‐axis cine images, excluding the papillary muscles. All parameters of LV and LA were corrected according to body surface area (BSA), for example, LVEDV index (LVEDVi) = LVEDV/BSA × 100%, and so forth.(2)LA strain and strain rate: the LA myocardial deformation was quantified using CVI 42 Tissue Tracking software. 15 , 26 LA endocardial and epicardial borders were manually delineated in the apical four‐ and two‐chamber views at end‐diastole using a point‐and‐click approach before the automated tracking algorithm was applied. The pulmonary vein and LA appendage were not included (Figure S1). The software strain analysis model automatically provided the LA strain and strain rate curves. The endocardial LA global longitudinal strain and strain rate values were recorded from the curves 11 : total strain (ε s), active strain (ε a), passive strain (ε e, ε e = ε s−ε a), peak first positive strain rate (SRs), peak early (first) negative strain rate (SRe), and peak late (second) negative LA strain rate(SRa). ε s and SRs, corresponding to LA reservoir function; ε e and SRe, corresponding to LA conduit function; ε a and SRa, corresponding to LA booster pump function 3 , 27 (Figure 1).(3)LA volume (LAV): The LA endocardium was manually labeled by the end‐diastolic atrial minimum volume at the four‐chamber and two‐chamber levels using CVI 42. The pulmonary vein and LA appendage were not included. The parameters of the LA volume were obtained using Simpson's method. 
The parameters of LAV included the maximum volume (LAVmax, before the end‐systolic mitral valve was opened) and the minimum volume (LAVmin, when the end‐diastolic mitral valve was just closed). LA total ejection fraction (LATEF) = (LAVmax − LAVmin)/LAVmax × 100%.(4)The intraobserver and interobserver variability for the LA parameters, measurements were assessed by the intraclass correlation coefficient (ICC). 28 Intraobserver reproducibility was established by the same observer (J. H., 5‐year experience in CMR diagnosis) who reanalyzed the same subjects after 1 month. Interobserver reproducibility was assessed by a second independent observer (Y. S., 3‐year experience in CMR diagnosis) who was blinded to the first observer's results. LV volumetric parameters: LV end‐diastolic volume (LVEDV, the maximum left ventricular end diastolic filling volume), end‐systolic volume (LVESV, the minimum left ventricular volume at the end of ventricular ejection), ejection fraction (LVEF), cardiac output (LVCO), cardiac index (LVCI) and LV mass (LVM) were measured. LV endocardial and epicardial contours were drawn on LV short‐axis cine images, excluding the papillary muscles. All parameters of LV and LA were corrected according to body surface area (BSA), for example, LVEDV index (LVEDVi) = LVEDV/BSA × 100%, and so forth. LA strain and strain rate: the LA myocardial deformation was quantified using CVI 42 Tissue Tracking software. 15 , 26 LA endocardial and epicardial borders were manually delineated in the apical four‐ and two‐chamber views at end‐diastole using a point‐and‐click approach before the automated tracking algorithm was applied. The pulmonary vein and LA appendage were not included (Figure S1). The software strain analysis model automatically provided the LA strain and strain rate curves. The endocardial LA global longitudinal strain and strain rate values were recorded from the curves 11 : total strain (ε s), active strain (ε a), passive strain (ε e, ε e = ε s−ε a), peak first positive strain rate (SRs), peak early (first) negative strain rate (SRe), and peak late (second) negative LA strain rate(SRa). ε s and SRs, corresponding to LA reservoir function; ε e and SRe, corresponding to LA conduit function; ε a and SRa, corresponding to LA booster pump function 3 , 27 (Figure 1). LA volume (LAV): The LA endocardium was manually labeled by the end‐diastolic atrial minimum volume at the four‐chamber and two‐chamber levels using CVI 42. The pulmonary vein and LA appendage were not included. The parameters of the LA volume were obtained using Simpson's method. The parameters of LAV included the maximum volume (LAVmax, before the end‐systolic mitral valve was opened) and the minimum volume (LAVmin, when the end‐diastolic mitral valve was just closed). LA total ejection fraction (LATEF) = (LAVmax − LAVmin)/LAVmax × 100%. The intraobserver and interobserver variability for the LA parameters, measurements were assessed by the intraclass correlation coefficient (ICC). 28 Intraobserver reproducibility was established by the same observer (J. H., 5‐year experience in CMR diagnosis) who reanalyzed the same subjects after 1 month. Interobserver reproducibility was assessed by a second independent observer (Y. S., 3‐year experience in CMR diagnosis) who was blinded to the first observer's results. The left atrial (LA) strain and strain rate curve. Global endocardial LA strain and strain rate values (yellow line) were recorded. 
SRa, peak late negative strain rate; SRe, peak early negative rate; SRs, peak positive strain rate; ε s, total strain; ε a, active strain; ε e, ε s−ε a, passive strain. ε s and SRs, corresponding to LA reservoir function; ε e and SRe, corresponding to LA conduit function; ε a and SRa, corresponding to LA booster pump function Statistical analysis Data were analyzed using SPSS, version 20.0 (SPSS Statistics, IBM Corporation). Mean ± SD or median (quartiles) was used to express measurement data in accordance with normal distribution, and the continuous variables were analyzed by independent sample t‐test or Mann–Whitney U test. χ2 or Fisher exact test was used to assess categorical data. Pearson or Spearman correlation was performed to investigate the potential relationship between LA strain parameters and LA function. Moreover, we assessed the ICC to evaluate the accuracy and the precision of the method to measure each LA parameters. ICC was scored as follows: poor reliability, ICC < 0.50; moderate reliability, ICC: 0.50–0.75; good reliability, 0.75–0.9; excellent reliability, ICC > 0.9. 28  p‐value < 0.05 was considered statistically significant. Data were analyzed using SPSS, version 20.0 (SPSS Statistics, IBM Corporation). Mean ± SD or median (quartiles) was used to express measurement data in accordance with normal distribution, and the continuous variables were analyzed by independent sample t‐test or Mann–Whitney U test. χ2 or Fisher exact test was used to assess categorical data. Pearson or Spearman correlation was performed to investigate the potential relationship between LA strain parameters and LA function. Moreover, we assessed the ICC to evaluate the accuracy and the precision of the method to measure each LA parameters. ICC was scored as follows: poor reliability, ICC < 0.50; moderate reliability, ICC: 0.50–0.75; good reliability, 0.75–0.9; excellent reliability, ICC > 0.9. 28  p‐value < 0.05 was considered statistically significant. Patient selection: From July 1, 2020, to January 31, 2021, 39 consecutive patients with AF‐VHD who were admitted to the department of cardiac surgery in our hospital, and performed MRI to assess the left atrium function before valve replacement surgery only or valve replacement surgery with Maze procedure to correct valves and rhythm were included in this study. All patients were with persistent AF and mitral stenosis. The clinical characteristics, echocardiography findings on admission, and cardiac MRI data were retrospectively collected. The inclusion criteria 24 were as follows 1 : Patients' age >18 years with AF who were planning to undergo cardiac surgery to correct the rhythm and valves 2 ; AF of more than 30 s as recorded by electrocardiogram (ECG) or dynamic ECG; and 3 patients who were treated according to the American Heart Association/American College of Cardiology/Heart Rhythm Society (AHA/ACC/HRS) AF guidelines for the management of AF 9 and VHD. 4 , 25 According to etiological analysis, the patients divided into two groups, degenerative heart valvular disease (DHVD) group and rheumatic heart valvular disease (RHVD) group. Exclusion criteria included patients who had undergone previous valvular surgery or ablation for AF. At the same time, 15 healthy participants with normal cardiac MRI in our hospital were included as the healthy control group. 
The healthy control group excluded claustrophobia, coronary/congenital/valvular heart disease, hypertension, diabetes, severe arrhythmia (e.g., atrial/ventricular arrhythmia, atrioventricular block, pre‐excitation syndrome, and sick sinus syndrome), chronic kidney disease, myocarditis, and cardiomyopathy. CMR imaging acquisition: CMR studies were performed using a 3.0T MR scanner (Verio, Siemens Medical Systems) and a 32‐channel phased‐array body coil. All images were ECG gated; patients were placed in the supine position and required to hold their breath during image capture. Cine images were acquired in the short‐axis views and longitudinal two‐, three‐ and four‐chamber views using True Fast imaging with Steady‐State Precession (TrueFISP) imaging sequences covering the entire LV and LA (typical field of view 360 × 360 mm, matrix size 216 × 256, slice thickness of 6 mm with no gap, repetition time 40.68 ms, echo time 1.49 ms, flip angle 50°). In addition, we used the cardiac shim model of SIEMENS to adapt adjustment volume to reduce dark band artifacts. Imaging analysis: Images were analyzed using CVI42 (Circle, version 5.12.1). (1)LV volumetric parameters: LV end‐diastolic volume (LVEDV, the maximum left ventricular end diastolic filling volume), end‐systolic volume (LVESV, the minimum left ventricular volume at the end of ventricular ejection), ejection fraction (LVEF), cardiac output (LVCO), cardiac index (LVCI) and LV mass (LVM) were measured. LV endocardial and epicardial contours were drawn on LV short‐axis cine images, excluding the papillary muscles. All parameters of LV and LA were corrected according to body surface area (BSA), for example, LVEDV index (LVEDVi) = LVEDV/BSA × 100%, and so forth.(2)LA strain and strain rate: the LA myocardial deformation was quantified using CVI 42 Tissue Tracking software. 15 , 26 LA endocardial and epicardial borders were manually delineated in the apical four‐ and two‐chamber views at end‐diastole using a point‐and‐click approach before the automated tracking algorithm was applied. The pulmonary vein and LA appendage were not included (Figure S1). The software strain analysis model automatically provided the LA strain and strain rate curves. The endocardial LA global longitudinal strain and strain rate values were recorded from the curves 11 : total strain (ε s), active strain (ε a), passive strain (ε e, ε e = ε s−ε a), peak first positive strain rate (SRs), peak early (first) negative strain rate (SRe), and peak late (second) negative LA strain rate(SRa). ε s and SRs, corresponding to LA reservoir function; ε e and SRe, corresponding to LA conduit function; ε a and SRa, corresponding to LA booster pump function 3 , 27 (Figure 1).(3)LA volume (LAV): The LA endocardium was manually labeled by the end‐diastolic atrial minimum volume at the four‐chamber and two‐chamber levels using CVI 42. The pulmonary vein and LA appendage were not included. The parameters of the LA volume were obtained using Simpson's method. The parameters of LAV included the maximum volume (LAVmax, before the end‐systolic mitral valve was opened) and the minimum volume (LAVmin, when the end‐diastolic mitral valve was just closed). LA total ejection fraction (LATEF) = (LAVmax − LAVmin)/LAVmax × 100%.(4)The intraobserver and interobserver variability for the LA parameters, measurements were assessed by the intraclass correlation coefficient (ICC). 28 Intraobserver reproducibility was established by the same observer (J. 
H., 5‐year experience in CMR diagnosis), who reanalyzed the same subjects after 1 month. Interobserver reproducibility was assessed by a second independent observer (Y. S., 3‐year experience in CMR diagnosis) who was blinded to the first observer's results. The left atrial (LA) strain and strain rate curve. Global endocardial LA strain and strain rate values (yellow line) were recorded. SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; ε s, total strain; ε a, active strain; ε e (= ε s − ε a), passive strain. ε s and SRs correspond to LA reservoir function; ε e and SRe correspond to LA conduit function; ε a and SRa correspond to LA booster pump function.
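To make the arithmetic behind these definitions concrete, the short Python sketch below encodes the relationships stated in the Methods: ε e = ε s − ε a, LATEF = (LAVmax − LAVmin)/LAVmax × 100%, and BSA indexing such as LVEDVi = LVEDV/BSA. It is an illustrative sketch only; the study derived these values with CVI42, not with custom code, and the function names and example values here are hypothetical.

```python
# Minimal sketch of the derived metrics defined in the Methods.
# These helpers are illustrative only; the study computed strain with
# CVI42 Tissue Tracking, not with this code.

def passive_strain(total_strain: float, active_strain: float) -> float:
    """Passive (conduit) strain: eps_e = eps_s - eps_a."""
    return total_strain - active_strain

def la_total_ejection_fraction(lav_max: float, lav_min: float) -> float:
    """LATEF (%) = (LAVmax - LAVmin) / LAVmax * 100."""
    return (lav_max - lav_min) / lav_max * 100.0

def index_to_bsa(volume_ml: float, bsa_m2: float) -> float:
    """Index a volume to body surface area, e.g. LVEDVi = LVEDV / BSA (ml/m2)."""
    return volume_ml / bsa_m2

# Example with made-up values: eps_s = 20%, eps_a = 8%  ->  eps_e = 12%
eps_e = passive_strain(20.0, 8.0)
latef = la_total_ejection_fraction(lav_max=90.0, lav_min=60.0)  # ~33.3%
print(eps_e, latef)
```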
RESULTS: Basic demographic characteristics: A total of 39 patients (22 RHVD and 17 DHVD) and 15 control participants were included in the study. Baseline characteristics and volumetric chamber indices for all participants are summarized in Table 1. The AF‐VHD group had a higher heart rate than the control group (94.8 ± 23.6 bpm vs. 77.3 ± 9.22 bpm; p < 0.001), especially in the RHVD group. The RHVD group showed a lower body mass index (25.8 ± 2.48 kg/m2 vs. 27.4 ± 2.09 kg/m2, p = 0.048) and BSA (1.84 ± 0.09 m2 vs. 1.92 ± 0.1 m2, p = 0.015) than the DHVD group. The RHVD group had more diagnoses of mitral stenosis (21/22 vs. 1/17, p < 0.001), and all patients in the AF‐VHD group had mitral regurgitation; there was no statistically significant difference for tricuspid regurgitation (12/22 vs. 12/17, p = 0.343). There were no statistically significant differences in the severity of valvular heart disease or the proportion of presurgery medications (anti‐platelet, anti‐coagulation, anti‐hypertensive, anti‐cholesterol, hypoglycemic, and antiarrhythmic drugs) between the two groups (all p > 0.05). Baseline and clinical characteristics of participants. Note: “*” indicates statistical significance. “a”/“b” indicates statistical significance between the control group and the RHVD/DHVD group, respectively. Abbreviations: AF, atrial fibrillation; CI, cardiac index; CO, cardiac output; DHVD, degenerative heart valvular disease; EDD, end‐diastolic diameter; EDV/ESV(i), end‐diastolic/end‐systolic volume (index); EF, emptying fraction; LV, left ventricular; LVM(i), LV mass (index); NYHA, New York Heart Association; RHVD, rheumatic heart valvular disease; SV(i), stroke volume (index); TIA, transient ischemic attack; VHD, valvular heart disease.
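The group comparisons above follow the test-selection rules described in the statistical analysis (t‐test or Mann–Whitney U depending on normality; χ2 or Fisher exact test for counts). The study used SPSS; the SciPy sketch below is only a rough analogue with made-up arrays, showing how such a pipeline could be wired up.

```python
# Illustrative analogue of the SPSS analysis described in the Methods;
# group_a / group_b are hypothetical continuous measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(40, 10, 39)   # e.g., a parameter in AF-VHD patients
group_b = rng.normal(60, 6, 15)    # e.g., the same parameter in controls

# Choose the comparison test based on normality (Shapiro-Wilk).
if stats.shapiro(group_a).pvalue > 0.05 and stats.shapiro(group_b).pvalue > 0.05:
    stat, p = stats.ttest_ind(group_a, group_b)       # independent-sample t-test
else:
    stat, p = stats.mannwhitneyu(group_a, group_b)    # Mann-Whitney U test

# Categorical data: chi-square test, or Fisher's exact test for small 2x2 counts.
table = np.array([[12, 10], [12, 5]])                 # hypothetical 2x2 counts
chi2, p_cat, dof, _ = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)

print(f"continuous p={p:.3f}, chi2 p={p_cat:.3f}, Fisher p={p_fisher:.3f}")
```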
LV parameters: Compared with the control group, the AF‐VHD group showed higher LVEDD, LVEDV(i), and LVESV(i), and a lower LVEF (59.9% ± 5.8% vs. 42.6% ± 10.2%, p < 0.001, Table 1). In the RHVD group, LVEDD, LVEDV(i), LVESV(i), LVSV(i), LVCO (LVCI), and LVM(i) were lower than in the DHVD group. However, there was no significant difference in LVEF between the two groups (42.4% ± 9.75% vs. 42.9% ± 11%, p = 0.877). LA structure and function: Patients with AF‐VHD had a larger LA size (LAAD and LAV), lower strain values (ε s/ε e/ε a/SRs/SRe/SRa), and lower LATEF than control participants (all p < 0.001, Table 2). Compared with the DHVD group, the RHVD group showed lower LA strain parameters (ε s/ε e/ε a/SRs/SRe/SRa) and lower LATEF (12.6% ± 3.3% vs. 19.4% ± 8.6%, p = 0.001). Comparison of LA parameters between the control group and patients with AF‐VHD. Note: “*” indicates statistical significance. “a”/“b” indicates statistical significance between the control group and the RHVD/DHVD group, respectively. Abbreviations: AF, atrial fibrillation; DHVD, degenerative heart valvular disease; LA, left atrial; LAAD, anteroposterior diameter of the left atrium; LATEF, left atrial total ejection fraction; LAV(i)max/min, the maximum/minimum volume of the left atrium (index); RHVD, rheumatic heart valvular disease; SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; VHD, valvular heart disease; ε a, active strain; ε e, passive strain; ε s, total strain.
Correlation between strain parameters and LA function in patients with VHD: LATEF was positively correlated with ε s, ε e, ε a, and SRs (r = 0.856, p < 0.001; r = 0.837, p < 0.001; ρ = 0.501, p = 0.001; ρ = 0.562, p < 0.001, respectively) and negatively correlated with SRe and SRa (ρ = −0.407, p = 0.01; ρ = −0.429, p = 0.006, respectively; Figure 2, Table S1). Scatter plots showing correlations of left atrial total emptying fraction (LATEF) with ε s, ε e, ε a, SRs, SRe, and SRa. SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; ε a, active strain; ε e, passive strain; ε s, total strain. Reproducibility: Intraobserver and interobserver reproducibility of global LA strain, strain rate, and volumetric parameters using CMR are shown in Table 3. Intraobserver and interobserver reproducibility were excellent and good, respectively. For intraobserver reproducibility, LAVmax had the highest reproducibility (ICC, 0.99; 0.98–0.99). For interobserver reproducibility, LAVmax and LAVmin had the highest reproducibility (ICC, 0.98; 0.97–0.99). The least reproducible measurement for interobserver reproducibility was SRe (ICC, 0.88; 0.81–0.93). Intraobserver and interobserver repeatability of LA strain and strain rate. Abbreviations: ICC, intraclass correlation coefficient; LA, left atrial; LATEF, left atrial total ejection fraction; LAVmax/min, the maximum/minimum volume of the left atrium; SRa, peak late negative strain rate; SRe, peak early negative strain rate; SRs, peak positive strain rate; ε a, active strain; ε e, passive strain; ε s, total strain.
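As a sketch of how the reliability grading above could be reproduced, the Python snippet below computes a two-way random, absolute-agreement, single-measurement ICC(2,1) on hypothetical paired readings and applies the reliability bands quoted in the Methods. The paper does not state which ICC form was used, so ICC(2,1), a common choice for interobserver agreement, is an assumption here.

```python
# Two-way random, absolute-agreement, single-rater ICC(2,1) on hypothetical
# measurements; the reliability bands mirror those cited in the Methods.
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """x: (n_subjects, k_raters) matrix of one LA parameter."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)            # per-subject means
    col_means = x.mean(axis=0)            # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def grade(icc: float) -> str:
    """Reliability bands as quoted in the Methods (reference 28)."""
    if icc < 0.50:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc <= 0.90:
        return "good"
    return "excellent"

rng = np.random.default_rng(1)
truth = rng.normal(25, 8, 39)             # hypothetical per-patient strain values
ratings = np.column_stack([truth + rng.normal(0, 1.5, 39) for _ in range(2)])
icc = icc_2_1(ratings)
print(f"ICC = {icc:.2f} ({grade(icc)})")
```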
DISCUSSION: Our study evaluated LA function in patients with AF‐VHD using CMR‐FT. We found that LA function in AF‐VHD was lower than in healthy control participants: patients with AF‐VHD had reduced reservoir and conduit function and reduced or absent booster pump function, as reflected in the strain parameters. LATEF showed a linear correlation with the LA strain parameters. LA enlargement was also observed in AF‐VHD, indicating LA remodeling. In the subgroup analysis, the DHVD group had larger LV size, volume, and mass and higher strain parameters than the RHVD group. LA strain and volumetric parameters showed good reproducibility. LA remodeling consists of both structural and functional changes; enlargement of the LA and fibrosis of the atrial muscle promote the persistence of AF, 29 significantly affecting LA function. The pathophysiology of AF is complex and incompletely understood. 2 Currently, it is believed that AF‐induced electrical alterations occur within the atrial myocardium (electrical remodeling), which may also promote or accelerate myocardial apoptosis and fibrosis (anatomical remodeling), 2 , 30 after which the process becomes self‐perpetuating. 31 LA size by volumetric index is widely accepted as a significant prognostic marker of mortality and outcomes in many cardiovascular diseases. 11 , 32 Le Tourneau T et al. 33 found that patients with organic mitral regurgitation in sinus rhythm who have LAVi ≥ 60 ml/m2 and are treated medically have increased mortality and more cardiac events (AF and heart failure). After mitral valve surgery, the cutoff of LAVi ≥ 60 ml/m2 lost its prognostic value. Caso P et al. 34 and Ancona R et al. 35 showed that SR and ε s were the better independent predictors of cardiac events in patients with asymptomatic rheumatic mitral stenosis followed for 3–4 years. These markers of risk are particularly important because, in clinical practice, risk stratification for mitral surgery helps with clinical decision‐making in these patients. 33 Most of the patients with AF‐VHD in our study had concomitant mitral stenosis and mitral regurgitation and had larger LAV than control participants, especially patients with DHVD. Mitral stenosis is associated with LA remodeling, increased LA stiffness, and abnormal atrial contractility. Habibi et al.
36 found that LA reservoir, conduit, and booster pump function assessed by CMR‐FT were decreased and negatively correlated with LA myocardial fibrosis in patients with AF. LA strain analysis can detect the existence of a myocardial scar in the early clinical stages, 37 which can indicate decreased LA compliance and indirectly reflect the degree of myocardial fibrosis. 38 Strain and SR represent the magnitude and rate, respectively, of myocardial deformation. 11 Interestingly, our results show that most of the LA deformation parameters were significantly associated with LATEF, suggesting that the LA strain parameters potentially correlate with LA wall deformation. Atrial ejection force, the force exerted by the LA to accelerate blood into the LV, is another marker of atrial systolic function. LA enlargement and dysfunction (reduced reservoir and conduit function, reduced or absent booster pump function) were present in patients with AF‐VHD, which suggests atrial remodeling and impaired LA myocardial function. CMR‐FT detected impaired LA strain in AF‐VHD even in the absence of an obvious significant difference in LA volume or function, so it may be useful in some early‐stage cardiovascular patients, before obvious changes in volume and function occur, to guide clinical treatment. Truong VT et al. 15 using 1.5T MRI and Kowallick JT et al. 39 using 3.0T MRI found that LA strain and strain rate measured using CMR‐FT had good intraobserver and interobserver reproducibility, in addition to good feasibility and reliability. Our study showed the same results. However, Alfuhied A et al. 40 studied sixty participants (16 with aortic stenosis, 28 with type 2 diabetes, 10 with end‐stage renal disease on hemodialysis, and 10 healthy volunteers) who underwent CMR scans and found that LA strain and strain rate assessment using CMR‐FT showed moderate to poor test–retest reproducibility across disease states, whereas LAV and emptying fraction were more reproducible on CMR. This may be related to the small number and heterogeneity of diseases in each group. All our patients had mitral valve disease and AF; because the a‐wave peak (ε a and SRa) is very low or absent in patients with AF‐VHD, 3 it is not surprising that the global longitudinal ε a and SRa showed good reproducibility. Conventional size (volume) parameters can only reflect changes in LA structural remodeling, whereas strain parameters can better reflect early functional changes in the LA. A comprehensive assessment provides more insight into LA function, and therefore more information to guide clinical treatment. Future large‐scale studies are warranted to assess whether individual strain parameters measured with CMR‐FT provide additional clinical value in stroke risk assessment and guidance of anticoagulation therapy. LIMITATION: This was a single‐center study with a relatively modest sample size. We only assessed global longitudinal strain and did not assess radial strain, which has been noted as a parameter that is difficult to obtain and reproduce consistently. 16 At present, most research investigates longitudinal LA strain on the LA long axis, and most studies have obtained positive results. It is believed that with the application of three‐dimensional strain analysis technology in the future, accuracy will be further improved.
CONCLUSION: CMR‐FT is a reliable tool with good clinical feasibility and repeatability for noninvasive quantitative assessment of LA strain and strain rate (LA function) without using a contrast agent. It may provide insight into the assessment of LA performance over time in patients with AF‐VHD and has potential clinical value in guiding treatment, prognosis evaluation, and risk stratification. CONFLICTS OF INTEREST: The authors declare no conflicts of interest. AUTHOR CONTRIBUTIONS: Huishan Wang and Benqiang Yang contributed to the conception and design of the study; Jie Hou, Yu Sun, and Wei Wang contributed significantly to manuscript preparation; Jie Hou and Yu Sun analyzed the data; Jie Hou wrote the manuscript; Libo Zhang, Hongrui You, and Rongrong Zhang helped perform the analysis with constructive discussions and revised the manuscript. All authors gave final approval of the submission. Supporting information: Additional supporting information is available as a data file.
Background: Atrial fibrillation (AF) is a common arrhythmia in valvular heart disease (VHD) and is associated with adverse outcomes. Methods: This was a retrospective cross-sectional inter-reader and intra-reader reproducibility study conducted from July 1, 2020, to January 31, 2021. A total of 39 patients with AF-VHD (rheumatic heart valvular disease [RHVD], n = 22; degenerative heart valvular disease [DHVD], n = 17) underwent MRI scans performed with drug-controlled heart rate before correction of the rhythm and valves through a maze procedure. Fifteen participants with normal cardiac MRI were included as healthy controls. εs/SRs, εe/SRe, and εa/SRa, corresponding to LA reservoir, conduit, and booster-pump function, respectively, were assessed using Feature Tracking software (CVI42 v5.12.1). Results: Compared with healthy controls, LA global strain parameters (εs/εe/εa/SRs/SRe/SRa) were significantly decreased (all p < 0.001), while LA size and volume were increased in the AF-VHD group (all p < 0.001). In the subgroup analysis, the RHVD group showed lower LA total ejection fraction (LATEF) and strain values than the DHVD group (12.6% ± 3.3% vs. 19.4% ± 8.6%, p = 0.001). Decreased LATEF was significantly related to altered LA strain and strain rate, especially εs, εe, and SRs (Pearson/Spearman r/ρ = 0.856/0.837/0.562, respectively; all p < 0.001). Interstudy and intrastudy reproducibility were consistent for LA volumetry and strain parameters (intraclass correlation coefficient: 0.88-0.99). Conclusions: CMR-FT can be used to assess LA strain parameters and to identify LA dysfunction and deformation noninvasively, which could provide a helpful functional imaging biomarker in the clinical treatment of AF-VHD.
INTRODUCTION: Atrial fibrillation (AF) is the most common human tachyarrhythmia diagnosed clinically; patients with AF are at an increased risk of stroke and heart failure, in addition to a decreased quality of life and lower survival. 1 , 2 AF is associated with profound structural and functional alterations of the atrial myocardium, 3 promoting a true atrial cardiomyopathy, the severity of which is an important determinant of AF recurrence and response to treatment. 4 , 5 , 6 Stroke prevention is a pivotal part of the treatment of patients with AF. 7 Patients with AF and concurrent valvular heart disease (AF‐VHD) have an even higher thromboembolic risk than those with AF alone. 7 , 8 Patients with AF have a 5‐fold increased risk of stroke compared with patients without cardiovascular disease, and patients with AF coupled with mitral stenosis have a 20‐fold risk of stroke. 9 It is of major clinical interest to have a new imaging biomarker with which to quantify the degree of LA dysfunction and to make earlier clinical decisions in patients with AF‐VHD in an effort to prevent cardiac events. Normal LA function includes reservoir function (collection of pulmonary venous blood during left ventricular [LV] systole), conduit function (passage of pulmonary venous blood flow to the LV during LV early diastole), and booster pump function (augmentation of LV filling during LV late diastole/atrial systole). 10 It is important to recognize the interplay that exists between these atrial functions and ventricular performance throughout the cardiac cycle. Previous publications investigating LA function have primarily focused on LA size and volume. 11 Due to the LA's complex geometry and intricate fiber orientation and the variable contributions of its appendage and pulmonary veins, these two parameters alone may be insufficient to describe the complexity of LA function and wall motion. 12 , 13 , 14 LA strain assessed by cardiovascular magnetic resonance imaging feature tracking (CMR‐FT) has been used in many cardiovascular diseases and has enhanced diagnostic value; 12 , 13 , 14 it may be more sensitive than conventional LA volumetric parameters 12 and LV function 13 , 14 and has shown good intraobserver and interobserver reproducibility in normal volunteers. 15 Because the LA wall is very thin, measuring LA strain is challenging. 15 Radial strain in particular has been noted as difficult to obtain, with poor reproducibility. 15 , 16 Echocardiographic data have also shown poor reproducibility of strain rate and radial strain 16 and are limited by two‐dimensional approaches with semi‐quantitative and subjective measures. 17 , 18 It is clear that studies of LA function provide new insights into the contribution of LA performance to cardiovascular disease and are promising tools for predicting cardiovascular events in a wide range of patient populations. Considerable data also support the use of LAEF for predicting events. 12 Accordingly, LA function indices such as strain and strain rate have been proposed using noninvasive imaging modalities such as echocardiographic speckle tracking 19 , 20 and color tissue Doppler. Although speckle tracking is presently the only available reference for LA strain estimation, ultrasound beam direction as well as heart motion relative to the probe may influence measurements, and inter‐vendor variability, described essentially in the setting of LV function, needs to be further investigated.
12 , 21 CMR‐FT is a new method to evaluate myocardial strain and strain rate; it can be applied to routine cine images, offers high spatial resolution, a large field of view, and good reproducibility, and can more sensitively reflect the functional characteristics of myocardial tissue. 22 As a new technique, CMR‐FT has mainly been used to study LV strain in recent years 23 but has rarely been applied to analysis of the LA. The aim of this study was to evaluate LA strain and strain rate using CMR‐FT before valve replacement surgery, to assess the feasibility and reproducibility of CMR‐FT for the quantification of global LA function in patients with AF‐VHD, and to review the clinical application value of CMR‐FT. In addition, we compared global LA function between patients with AF‐VHD and those without cardiac disease. CONCLUSION: CMR‐FT is a reliable tool with good clinical feasibility and repeatability for noninvasive quantitative assessment of LA strain and strain rate (LA function) without using a contrast agent. It may provide insight into the assessment of LA performance over time in patients with AF‐VHD and has potential clinical value in guiding treatment, prognosis evaluation, and risk stratification.
Background: Atrial fibrillation (AF) is a common arrhythmia in valvular heart disease (VHD) and is associated with adverse outcomes. Methods: This was a retrospective cross-sectional inter-reader and intra-reader reproducibility study conducted from July 1, 2020, to January 31, 2021. A total of 39 patients with AF-VHD (rheumatic heart valvular disease [RHVD], n = 22; degenerative heart valvular disease [DHVD], n = 17) underwent MRI scans performed with drug-controlled heart rate before correction of the rhythm and valves through a maze procedure. Fifteen participants with normal cardiac MRI were included as healthy controls. εs/SRs, εe/SRe, and εa/SRa, corresponding to LA reservoir, conduit, and booster-pump function, respectively, were assessed using Feature Tracking software (CVI42 v5.12.1). Results: Compared with healthy controls, LA global strain parameters (εs/εe/εa/SRs/SRe/SRa) were significantly decreased (all p < 0.001), while LA size and volume were increased in the AF-VHD group (all p < 0.001). In the subgroup analysis, the RHVD group showed lower LA total ejection fraction (LATEF) and strain values than the DHVD group (12.6% ± 3.3% vs. 19.4% ± 8.6%, p = 0.001). Decreased LATEF was significantly related to altered LA strain and strain rate, especially εs, εe, and SRs (Pearson/Spearman r/ρ = 0.856/0.837/0.562, respectively; all p < 0.001). Interstudy and intrastudy reproducibility were consistent for LA volumetry and strain parameters (intraclass correlation coefficient: 0.88-0.99). Conclusions: CMR-FT can be used to assess LA strain parameters and to identify LA dysfunction and deformation noninvasively, which could provide a helpful functional imaging biomarker in the clinical treatment of AF-VHD.
10,986
369
[ 816, 315, 151, 1194, 158, 368, 114, 263, 191, 182, 94, 76 ]
18
[ "strain", "la", "rate", "strain rate", "volume", "group", "af", "function", "parameters", "patients" ]
[ "atrial cardiomyopathy severity", "la fibrosis atrial", "af vhd cardiac", "atrial fibrillation dhvd", "fibrillation ci cardiac" ]
null
[CONTENT] atrial fibrillation | cardiovascular magnetic resonance imaging | feature tracking | left atrial function | strain | valvular heart disease [SUMMARY]
null
[CONTENT] atrial fibrillation | cardiovascular magnetic resonance imaging | feature tracking | left atrial function | strain | valvular heart disease [SUMMARY]
[CONTENT] atrial fibrillation | cardiovascular magnetic resonance imaging | feature tracking | left atrial function | strain | valvular heart disease [SUMMARY]
[CONTENT] atrial fibrillation | cardiovascular magnetic resonance imaging | feature tracking | left atrial function | strain | valvular heart disease [SUMMARY]
[CONTENT] atrial fibrillation | cardiovascular magnetic resonance imaging | feature tracking | left atrial function | strain | valvular heart disease [SUMMARY]
[CONTENT] Atrial Fibrillation | Atrial Function, Left | Cross-Sectional Studies | Heart Atria | Heart Valve Diseases | Humans | Magnetic Resonance Imaging | Reproducibility of Results | Retrospective Studies [SUMMARY]
null
[CONTENT] Atrial Fibrillation | Atrial Function, Left | Cross-Sectional Studies | Heart Atria | Heart Valve Diseases | Humans | Magnetic Resonance Imaging | Reproducibility of Results | Retrospective Studies [SUMMARY]
[CONTENT] Atrial Fibrillation | Atrial Function, Left | Cross-Sectional Studies | Heart Atria | Heart Valve Diseases | Humans | Magnetic Resonance Imaging | Reproducibility of Results | Retrospective Studies [SUMMARY]
[CONTENT] Atrial Fibrillation | Atrial Function, Left | Cross-Sectional Studies | Heart Atria | Heart Valve Diseases | Humans | Magnetic Resonance Imaging | Reproducibility of Results | Retrospective Studies [SUMMARY]
[CONTENT] Atrial Fibrillation | Atrial Function, Left | Cross-Sectional Studies | Heart Atria | Heart Valve Diseases | Humans | Magnetic Resonance Imaging | Reproducibility of Results | Retrospective Studies [SUMMARY]
[CONTENT] atrial cardiomyopathy severity | la fibrosis atrial | af vhd cardiac | atrial fibrillation dhvd | fibrillation ci cardiac [SUMMARY]
null
[CONTENT] atrial cardiomyopathy severity | la fibrosis atrial | af vhd cardiac | atrial fibrillation dhvd | fibrillation ci cardiac [SUMMARY]
[CONTENT] atrial cardiomyopathy severity | la fibrosis atrial | af vhd cardiac | atrial fibrillation dhvd | fibrillation ci cardiac [SUMMARY]
[CONTENT] atrial cardiomyopathy severity | la fibrosis atrial | af vhd cardiac | atrial fibrillation dhvd | fibrillation ci cardiac [SUMMARY]
[CONTENT] atrial cardiomyopathy severity | la fibrosis atrial | af vhd cardiac | atrial fibrillation dhvd | fibrillation ci cardiac [SUMMARY]
[CONTENT] strain | la | rate | strain rate | volume | group | af | function | parameters | patients [SUMMARY]
null
[CONTENT] strain | la | rate | strain rate | volume | group | af | function | parameters | patients [SUMMARY]
[CONTENT] strain | la | rate | strain rate | volume | group | af | function | parameters | patients [SUMMARY]
[CONTENT] strain | la | rate | strain rate | volume | group | af | function | parameters | patients [SUMMARY]
[CONTENT] strain | la | rate | strain rate | volume | group | af | function | parameters | patients [SUMMARY]
[CONTENT] la | af | strain | function | cmr ft | ft | lv | patients | cardiovascular | patients af [SUMMARY]
null
[CONTENT] group | strain | 001 | vs | rhvd | rate | heart | peak | sre | significance [SUMMARY]
[CONTENT] assessment la | assessment | la | clinical | quantitative assessment | rate la function contrast | reliable tool good clinical | reliable tool good | reliable tool | reliable [SUMMARY]
[CONTENT] strain | la | group | rate | af | strain rate | patients | peak | heart | function [SUMMARY]
[CONTENT] strain | la | group | rate | af | strain rate | patients | peak | heart | function [SUMMARY]
[CONTENT] VHD [SUMMARY]
null
[CONTENT] LA | 0.001 | LA | AF-VHD | 0.001 ||| RHVD | LA | 12.6% | 3.3% | 19.4 ± | 8.6 | 0.001 ||| LA | Pearson/Spearman | 0.856/0.837/0.562 | 0.001 ||| LA | 0.88 [SUMMARY]
[CONTENT] CMR-FT | LA | LA | AF-VHD [SUMMARY]
[CONTENT] VHD ||| July 1, 2020 | January 31, 2021 ||| 39 | AF-VHD | 22 | 17 ||| Fifteen ||| LA reservoir | Feature Tracking ||| LA | 0.001 | LA | AF-VHD | 0.001 ||| RHVD | LA | 12.6% | 3.3% | 19.4 ± | 8.6 | 0.001 ||| LA | Pearson/Spearman | 0.856/0.837/0.562 | 0.001 ||| LA | 0.88 ||| LA | LA | AF-VHD [SUMMARY]
[CONTENT] VHD ||| July 1, 2020 | January 31, 2021 ||| 39 | AF-VHD | 22 | 17 ||| Fifteen ||| LA reservoir | Feature Tracking ||| LA | 0.001 | LA | AF-VHD | 0.001 ||| RHVD | LA | 12.6% | 3.3% | 19.4 ± | 8.6 | 0.001 ||| LA | Pearson/Spearman | 0.856/0.837/0.562 | 0.001 ||| LA | 0.88 ||| LA | LA | AF-VHD [SUMMARY]
Experimental Study and Early Clinical Application of a Sutureless Aortic Bioprosthesis.
26735597
Conventional aortic valve replacement is the treatment of choice for symptomatic severe aortic stenosis. The transcatheter technique is a viable alternative with promising results for inoperable patients. Sutureless bioprostheses have shown benefits in high-risk patients, such as reduced aortic clamping and cardiopulmonary bypass times, decreasing risks and adverse effects.
INTRODUCTION
The bioprosthesis is made of a metal frame and bovine pericardium leaflets, encapsulated in a catheter. The animals underwent left thoracotomy, and cardiopulmonary bypass was established. The sutureless bioprosthesis was deployed to the aortic valve, with 1/3 of the structure on the left ventricular face. Cardiopulmonary bypass, aortic clamping, and deployment times were recorded. Echocardiograms were performed before, during, and after the surgery. The bioprosthesis was initially implanted in an 85-year-old patient with aortic stenosis and high risk for conventional surgery, with a EuroSCORE of 40% and multiple comorbidities.
METHODS
The sutureless bioprosthesis was rapidly deployed (50-170 seconds; average = 95 seconds). The aortic clamping time ranged from 6 to 10 minutes, with an average of 7 minutes; the mean cardiopulmonary bypass time was 71 minutes. Bioprostheses were properly positioned without paravalvular leak. In the first operated patient, the aortic clamp time was 39 minutes and the patient had a good postoperative course.
RESULTS
The deployment of the sutureless bioprosthesis was safe and effective, thereby representing a new alternative to conventional surgery or transcatheter treatment in moderate- to high-risk patients with severe aortic stenosis.
CONCLUSION
[ "Aged, 80 and over", "Animals", "Aortic Valve", "Aortic Valve Stenosis", "Bioprosthesis", "Cattle", "Heart Valve Prosthesis", "Heart Valve Prosthesis Implantation", "Humans", "Implants, Experimental", "Operative Time", "Prosthesis Design", "Sheep", "Treatment Outcome" ]
4690655
INTRODUCTION
Conventional aortic valve replacement is still the treatment of choice for patients with symptomatic severe aortic valve stenosis. However, in recent times the transcatheter technique (TAVI) has emerged as a viable and effective alternative to treat high-risk or inoperable patients[1]. Nevertheless, inherent complications of TAVI have been surfacing and restricting its use, such as the embolization of calcium debris and consequent cerebral infarction, peripheral vascular damage, the need for pacemaker insertion, paravalvular leakage and its impact on long-term survival, coronary ostium occlusion, aortic rupture, and the high cost of the device[2,3]. Sutureless AVR using a self-expanding bioprosthesis is a new and promising alternative to standard AVR in elderly and high-risk surgical patients[4]. The proposed benefits of this technology include enhanced implantability, shorter aortic cross-clamp and cardiopulmonary bypass (CPB) times, favourable hemodynamic performance, and easier access for minimally invasive surgery[5-8]. In addition, this approach allows complete removal of the diseased native valve and also comprises a suitable alternative for multiple valve procedures or associated coronary artery bypass grafting. Several European case series have shown excellent early clinical and hemodynamic outcomes[7,8]. Therefore, sutureless aortic bioprostheses have been positioned as an alternative to standard surgical AVR or TAVI in elderly and high-risk patients. Comparative reports in intermediate- to high-risk patients have demonstrated a lower rate of perioperative complications and improved survival at 24-month follow-up with sutureless valves compared to TAVI[2,9]. Therefore, the objective of this study was to experimentally evaluate the implantation of a novel balloon-expandable sutureless aortic valve bioprosthesis in an animal model and to report its early clinical application.
METHODS
The Inovare Alpha bioprosthesis is made of a metallic cobalt-chrome structure coated with a polyester fabric. This structure serves as a support for a bovine pericardium valve, which is sutured to the metal support with polyester yarn (Figure 1). The bioprosthesis is encapsulated in a catheter for positioning and deployment. The valve is fixed to the patient by the radial expansion force that the metal structure exerts against the patient's valve structures, a pressure sufficient to counterbalance the force exerted by blood flow. The Inovare Alpha sutureless bioprosthesis (above) and the delivery catheter (below). The Inovare Alpha was evaluated in five animals (ovine) operated on in an experimental operating room under routine hemodynamic monitoring, conducted as usual in clinical cardiac surgical practice. The study was approved by the Institutional Ethics Committee, and all animals were treated according to the ethical principles of the "National Research Council - Institute of Laboratory Animal Resources" and those drawn up by the Brazilian College of Animal Experimentation (COBEA), along with the local Ethics Committee on Animal Use. Standard general anesthesia and endotracheal intubation were applied for all surgical interventions. The animals underwent left thoracotomy, and upon opening the pericardium, cardiopulmonary bypass was established through cannulation of the carotid artery and left jugular vein. After aortic cross-clamping, myocardial protection was achieved using intermittent cold blood cardioplegia. Transverse aortotomy well above the aortic annulus enabled access to the aortic valve; the leaflets were removed, and the sutureless bioprosthesis was balloon-expanded and deployed to the aortic annulus, with 1/3 of the structure remaining on the left ventricular face. CPB, aortic clamping, and deployment times were recorded. Echocardiograms were performed before, during, and after the surgery. The initial clinical application was performed in an 85-year-old patient with aortic stenosis and high risk for conventional surgery, with a EuroSCORE of 40% and multiple comorbidities associated with active hepatitis C.
RESULTS
Valve deployment was successfully performed in all cases. All valves were firmly positioned without any migration. The sutureless bioprostheses were rapidly deployed (time ranging from 50 to 170 seconds; average: 95 seconds). The aortic clamping time varied from 6 to 10 minutes, with an average of 7 minutes; the mean CPB time was 71 minutes. Bioprostheses were properly positioned and secured to the aortic ring, as assessed by transesophageal echo (Figure 2). There were neither paravalvular nor transvalvular leaks, and excellent hemodynamic function was observed in all cases. All coronary arteries remained patent, with no obstruction caused by the device. Positioning and function were confirmed by autopsy in all but one animal. The deployment of the Inovare Alpha in the experimental setting. A = transverse aortotomy well above the aortic annulus and insertion of the catheter-mounted valve; B = balloon expansion of the prosthesis; C = valve deployed to the aortic annulus. On postmortem examination, macroscopic examination revealed that all valves were fully deployed and expanded, and there was no obstruction of the coronary ostia in any of the cases. The sutureless valves showed precise positioning in all cases, with good alignment to the aortic valve plane. In the first patient operated on, the prosthesis was inserted with an aortic clamping time of 39 minutes (Figure 3). The patient had a good postoperative recovery and is currently being followed up for hepatitis C. The clinical case.
CONCLUSION
In conclusion, the deployment of the sutureless bioprosthesis was safe and effective, thereby representing a new alternative to conventional surgery or transcatheter treatment in moderate- to high-risk patients with severe aortic stenosis.
[]
[]
[]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Conventional aortic valve replacement is still the treatment of choice for patients\nwith symptomatic severe aortic valve stenosis. However, in recent times the\ntranscatheter technique (TAVI) has emerged as a viable and effective alternative to\ntreat high risk or inoperable patients[1].\nNevertheless, inherent complications of TAVI has been surfacing and restricting its\nuse, such as the embolization of calcium debris and consequent cerebral infarction,\nperipheral vascular damage, the further need of pacemaker insertion, paravalvular\nleakage and its impact on long-term survival, coronary ostium occlusion, aortic\nrupture and the high cost of the device[2,3].\nSutureless AVR using self-expanding bioprosthesis is a new and promising alternative\nto standard AVR in elderly and high-risk surgical patients[4]. The proposed benefits of\nthis technology include enhanced implantability, shorter aortic cross-clamp and\ncardiopulmonary bypass (CPB) times, favourable hemodynamic performance, and easier\naccess for minimally invasive surgery[5-8].\nIn addition, this approach allows complete removal of the diseased native valve and\nalso comprises a suitable alternative to multiple valve procedures or associated\ncoronary artery bypass grafting. Several European case series have shown excellent\nearly clinical and hemodynamic outcomes[7,8].\nTherefore sutureless aortic bioprostheses has been placed as an alternative to\nstandard surgical AVR or TAVI in elderly and high-risk patients.\nComparative reports in intermediate- to high-risk patients have demonstrate a lower\nrate of perioperative complications and improved survival at 24-month follow-up with\nsutureless valves compared to TAVI[2,9].\nTherefore the objective of this study was to experimentally evaluate the implantation\nof a novel balloon-expandable aortic valve with sutureless bioprosthesis in animal\nmodel and report the early clinical application.", "The Inovare Alpha bioprosthesis is made of a metallic structure of cobalt-chrome,\npreviously coated with a polyester fabric. This structure serves as a support for a\nbovine pericardium valve, which is sutured with polyester yarn to this metal support\n(Figure 1). The bioprosthesis is\nencapsulated in a catheter for the positioning and deployment. The fixing of the\nvalve to the patient is given by the radial expansion force to the metal structure\nexerts against the patient valve structures, pressure sufficient enough to\ncounterbalance the force exerted by blood flow.\nThe Inovare Alpha sutureless bioprosthesis (above) and the delivery catheter\n(below).\nThe Inovare Alpha was evaluated in five animals (ovine) operated on in an\nexperimental operative room under routine hemodynamic monitoring and conducted as\nusual in clinical cardiac surgical practice. The study was approved by the\nInstitutional Ethics Committee and all animals were treated according to ethical\nprinciples of \"National Research Council - Institute of Laboratory Animal Resources\"\nand those drawn-up by the Brazilian College of Animal Experimentation (COBEA), along\nwith the local Ethics Committee on Animal Use.\nStandard general anesthesia and endotracheal intubation were applied for all surgical\ninterventions. The animals underwent left thoracotomy and upon opening the\npericardium the cardiopulmonary bypass was established through cannulation of\ncarotid artery and left jugular vein. After aortic cross-clamping, myocardial\nprotection was achieved using intermittent cold blood cardioplegia. 
Transverse aortotomy well above the aortic annulus enabled access to the aortic valve; the leaflets were removed, and the sutureless bioprosthesis was balloon-expanded and deployed to the aortic annulus, with 1/3 of the structure remaining on the left ventricular face. CPB, aortic clamping, and deployment times were recorded. Echocardiograms were performed before, during, and after the surgery. The initial clinical application was performed in an 85-year-old patient with aortic stenosis and high risk for conventional surgery, with a EuroSCORE of 40% and multiple comorbidities associated with active hepatitis C.", "Valve deployment was successfully performed in all cases. All valves were firmly positioned without any migration. The sutureless bioprostheses were rapidly deployed (time ranging from 50 to 170 seconds; average: 95 seconds). The aortic clamping time varied from 6 to 10 minutes, with an average of 7 minutes; the mean CPB time was 71 minutes. Bioprostheses were properly positioned and secured to the aortic ring, as assessed by transesophageal echo (Figure 2). There were neither paravalvular nor transvalvular leaks, and excellent hemodynamic function was observed in all cases. All coronary arteries remained patent, with no obstruction caused by the device. Positioning and function were confirmed by autopsy in all but one animal. The deployment of the Inovare Alpha in the experimental setting. A = transverse aortotomy well above the aortic annulus and insertion of the catheter-mounted valve; B = balloon expansion of the prosthesis; C = valve deployed to the aortic annulus. On postmortem examination, macroscopic examination revealed that all valves were fully deployed and expanded, and there was no obstruction of the coronary ostia in any of the cases. The sutureless valves showed precise positioning in all cases, with good alignment to the aortic valve plane. In the first patient operated on, the prosthesis was inserted with an aortic clamping time of 39 minutes (Figure 3). The patient had a good postoperative recovery and is currently being followed up for hepatitis C. The clinical case.", "The present study demonstrates that the implanted sutureless prosthesis proved to be reliable and efficient, sitting and remaining well attached to the aortic valve annulus with a fast procedure, as demonstrated by the short clamping time. The performance of the prosthesis was also consistent, without paravalvular leakage, migration, or damage to the surrounding tissues. These findings were confirmed by the postmortem examination. A good alignment of the device and the aortic valve plane was observed, and a fair hemodynamic performance can be inferred because of the low profile and the optimized opening area. No interference with the coronary arteries was seen, with the metallic frame staying away from both ostia. Surgical aortic valve replacement (AVR) still represents the gold standard among the therapeutic options in patients with severe aortic valve stenosis[10]. Nevertheless, over the past few years, the possibility of treating high-risk or inoperable patients with alternative approaches, such as the transcatheter technique, has emerged as a feasible and effective strategy with promising results. However, inherent complications of this new technology, such as its increased costs, the lack of removal of the calcified aortic valve, and the resultant risk of paravalvular leakage, coronary occlusion, and aortic rupture, have been recognized as important limitations for TAVI[11-13].
For these reasons, a number of sutureless aortic valve bioprostheses have been developed to facilitate AVR and reduce the duration of aortic cross-clamping and its related adverse events[14]. The introduction of a balloon-expandable sutureless bioprosthesis represents a step forward and a novel device for treating intermediate- to high-risk patients with severe aortic stenosis, with the reduction of aortic clamping and cardiopulmonary bypass (CPB) decreasing risks and adverse effects, and also comprising a suitable alternative for multiple valve procedures or associated coronary artery bypass grafting. In addition, this approach allows complete or selective removal of the calcified and diseased native valve, potentially averting particulate cerebral embolism and cerebrovascular accident. The concept of sutureless prosthetic heart valves led to the development of an array of new-generation devices. Nowadays, three sutureless aortic bioprostheses are currently available in Europe: the Perceval S (Sorin Group, Saluggia, Italy), the 3f Enable (Medtronic, Minneapolis, MN, USA), and the Intuity (Edwards Lifesciences, Irvine, CA, USA)[14]. This novel surgical prosthesis has been favourably compared with the TAVI approach in recent series, thus offering a potential alternative to transcatheter treatment in high-risk patients. Several European case series have shown good outcomes of sutureless valves compared to TAVI, with a rather lower incidence of significant paravalvular regurgitation, post-procedural pacemaker implantation, and peripheral vascular complications, along with better immediate postoperative survival[2,9,10,12]. The potential of shortening surgical times and improving overall patient outcomes may expand the applicability of this simple and rapid implantation technique, as in long and complex procedures (reoperations or combined procedures). Reduced implantation and cross-clamping times will have a positive impact on the postoperative outcome of high-risk patients undergoing long surgical procedures[5,6]. Ranucci et al.[15] reported that the aortic cross-clamp time is an independent predictor of severe cardiovascular morbidity, with an increased risk of 1.4% per 1-minute increase. Associated with minimally invasive AVR, the sutureless approach can combine the advantages of both techniques, as demonstrated by several recently published case series that have shown excellent clinical and hemodynamic results[16-18]. Additionally, it represents a formidable alternative for valve-in-valve procedures (aortic or mitral), not only with failed bioprostheses but also with mechanical valves, where the direct approach allows disk removal and rapid insertion of the sutureless valve. Sutureless AVR is also an appealing option in several other specific circumstances, such as redo procedures, as well as in the presence of porcelain aorta, calcified aortic homograft, or small aortic annulus[19-22]. An additional breakthrough is the performance of these procedures without the need for a hybrid room or a cath lab, being routinely carried out in an ordinary operating room simply with the aid of transesophageal echo. Consequently, the costs of sutureless valves are believed to be lower, as the prices of TAVI devices are higher and require incremental costs related to implantation-related technology and to the increased number of personnel involved in the procedure.
A cost-utility analysis of TAVI in Belgium concluded that it is not recommended to reimburse TAVI for high-risk patients because the patients had no survival benefit after 1 year, the risk of cerebrovascular accident was twice as high, and the costs were significantly higher[23,24]. Further prospective clinical trials are definitely needed to determine the long-term durability and outcomes. The clinical trial testing these devices has been approved and is currently underway.", "In conclusion, the deployment of the sutureless bioprosthesis was safe and effective, thereby representing a new alternative to conventional surgery or transcatheter treatment in moderate- to high-risk patients with severe aortic stenosis." ]
[ "intro", "methods", "results", "discussion", "conclusions" ]
[ "Aortic Valve, Surgery", "Heart Valves, Surgery", "Aortic Valve Stenosis", "Bioprosthesis" ]
INTRODUCTION: Conventional aortic valve replacement is still the treatment of choice for patients with symptomatic severe aortic valve stenosis. However, in recent times the transcatheter technique (TAVI) has emerged as a viable and effective alternative to treat high-risk or inoperable patients[1]. Nevertheless, inherent complications of TAVI have been surfacing and restricting its use, such as the embolization of calcium debris and consequent cerebral infarction, peripheral vascular damage, the further need for pacemaker insertion, paravalvular leakage and its impact on long-term survival, coronary ostium occlusion, aortic rupture and the high cost of the device[2,3]. Sutureless AVR using a self-expanding bioprosthesis is a new and promising alternative to standard AVR in elderly and high-risk surgical patients[4]. The proposed benefits of this technology include enhanced implantability, shorter aortic cross-clamp and cardiopulmonary bypass (CPB) times, favourable hemodynamic performance, and easier access for minimally invasive surgery[5-8]. In addition, this approach allows complete removal of the diseased native valve and also comprises a suitable alternative to multiple valve procedures or associated coronary artery bypass grafting. Several European case series have shown excellent early clinical and hemodynamic outcomes[7,8]. Therefore, sutureless aortic bioprostheses have been proposed as an alternative to standard surgical AVR or TAVI in elderly and high-risk patients. Comparative reports in intermediate- to high-risk patients have demonstrated a lower rate of perioperative complications and improved survival at 24-month follow-up with sutureless valves compared to TAVI[2,9]. Therefore, the objective of this study was to experimentally evaluate the implantation of a novel balloon-expandable sutureless aortic valve bioprosthesis in an animal model and to report its early clinical application. METHODS: The Inovare Alpha bioprosthesis is made of a metallic cobalt-chrome structure, previously coated with a polyester fabric. This structure serves as a support for a bovine pericardium valve, which is sutured with polyester yarn to this metal support (Figure 1). The bioprosthesis is encapsulated in a catheter for positioning and deployment. The valve is fixed to the patient by the radial expansion force that the metal structure exerts against the patient's valve structures, a pressure sufficient to counterbalance the force exerted by blood flow. The Inovare Alpha sutureless bioprosthesis (above) and the delivery catheter (below). The Inovare Alpha was evaluated in five animals (ovine) operated on in an experimental operative room under routine hemodynamic monitoring and conducted as usual in clinical cardiac surgical practice. The study was approved by the Institutional Ethics Committee and all animals were treated according to the ethical principles of the "National Research Council - Institute of Laboratory Animal Resources" and those drawn up by the Brazilian College of Animal Experimentation (COBEA), along with the local Ethics Committee on Animal Use. Standard general anesthesia and endotracheal intubation were applied for all surgical interventions. The animals underwent left thoracotomy and, upon opening the pericardium, cardiopulmonary bypass was established through cannulation of the carotid artery and left jugular vein. After aortic cross-clamping, myocardial protection was achieved using intermittent cold blood cardioplegia. 
Transverse aortotomy well above the aortic annulus enabled access to the aortic valve; the leaflets were removed and the sutureless bioprosthesis was balloon-expanded and deployed to the aortic annulus, with 1/3 of the structure remaining on the left ventricular face. CPB, aortic clamping and deployment times were recorded. Echocardiograms were performed before, during and after the surgery. The initial clinical application was performed in an 85-year-old patient with aortic stenosis and high risk for conventional surgery, EuroSCORE 40% and multiple comorbidities associated with active hepatitis C. RESULTS: Valve deployment was successfully performed in all cases. All valves were firmly positioned without any migration. The sutureless bioprostheses were rapidly deployed (time ranging from 50 to 170 seconds; average: 95 seconds). The aortic clamping time varied from 6-10 minutes, with an average of 7 minutes; the mean CPB time was 71 minutes. Bioprostheses were properly positioned and secured to the aortic ring, as assessed by transesophageal echo (Figure 2). There were neither paravalvular nor transvalvular leaks and excellent hemodynamic function was observed in all cases. All coronary arteries remained patent, with no obstruction determined by the device. Positioning and function were confirmed by autopsy in all but one animal. The deployment of the Inovare-Alpha in the experimental setting. A=transverse aortotomy well above the aortic annulus and insertion of the catheter-mounted valve; B=balloon expansion of the prosthesis; C=valve deployed to the aortic annulus. In the postmortem examination, macroscopic analysis revealed that all valves were fully deployed and expanded, and there was no obstruction of the coronary ostia in any of the cases. The sutureless valves showed precise positioning in all cases, with good alignment to the aortic valve plane. In the first patient operated on, the prosthesis was inserted and the aortic clamping time was 39 minutes (Figure 3). The patient had a good postoperative recovery and has been followed up for hepatitis C. The clinical case. DISCUSSION: The present study demonstrates that the implanted sutureless prosthesis proved to be reliable and efficient, sitting and remaining well attached to the aortic valve annulus with a fast procedure, as demonstrated by the short clamping time. The performance of the prosthesis was also consistent, without paravalvular leakage, migration, or damage to the surrounding tissues. These findings were confirmed by the postmortem examination. A good alignment of the device and the aortic valve plane was observed, and a fair hemodynamic performance can be inferred because of the low profile and the optimized opening area. No interference with the coronary arteries was seen, with the metallic frame staying away from both ostia. Surgical aortic valve replacement (AVR) still represents the gold standard among the therapeutic options in patients with severe aortic valve stenosis[10]. Nevertheless, over the past few years, the possibility of treating high-risk or inoperable patients with alternative approaches, such as the transcatheter technique, has emerged as a feasible and effective strategy with promising results. However, inherent complications of this new technology, such as its increased costs, the lack of removal of the calcified aortic valve and the resultant risk of paravalvular leakage, coronary occlusion and aortic rupture, have been recognized as important limitations for TAVI[11-13]. 
For these reasons, a number of sutureless aortic valve bioprostheses have been developed to facilitate AVR and reduce the duration of aortic cross-clamping and its related adverse events[14]. The introduction of the balloon-expandable sutureless bioprosthesis represents a step forward and a novel device for treating intermediate- to high-risk patients with severe aortic stenosis, with the reduction of aortic clamping and cardiopulmonary bypass (CPB) times decreasing risks and adverse effects, and also comprising a suitable alternative to multiple valve procedures or associated coronary artery bypass grafting. In addition, this approach allows complete or selective removal of the calcified and diseased native valve, potentially averting particulate cerebral embolism and cerebrovascular accident. The concept of sutureless prosthetic heart valves led to the development of an array of new-generation devices. Three sutureless aortic bioprostheses are currently available in Europe: the Perceval S (Sorin Group, Saluggia, Italy), the 3f Enable (Medtronic, Minneapolis, MN, USA), and the Intuity (Edwards Lifesciences, Irvine, CA, USA)[14]. This novel surgical prosthesis has been favourably compared with the TAVI approach in recent series, thus offering a potential alternative to the transcatheter technique in high-risk patients. Several European case series have shown good outcomes of sutureless valves compared to TAVI, with a lower incidence of significant paravalvular regurgitation, post-procedural pacemaker implantation and peripheral vascular complications, along with better immediate postoperative survival[2,9,10,12]. The potential of shortening surgical times and improving overall patient outcomes may expand the applicability of this simple and rapid implantation technique, as in long and complex procedures (reoperations or combined procedures). Reduced implantation and cross-clamping times will have a positive impact on the postoperative outcome of high-risk patients undergoing long surgical procedures[5,6]. Ranucci et al.[15] reported that the aortic cross-clamp time is an independent predictor of severe cardiovascular morbidity, with an increased risk of 1.4% per 1-minute increase. Associated with minimally invasive AVR, the sutureless approach can combine the advantages of both techniques, as demonstrated by several recently published case series that have shown excellent clinical and hemodynamic results[16-18]. Additionally, it represents a formidable alternative for valve-in-valve procedures (aortic or mitral), not only with failed bioprostheses but also with mechanical valves, where the direct approach allows disk removal and the rapid insertion of the sutureless valve. Sutureless AVR is also an appealing option in several other specific circumstances, such as redo procedures, as well as in the presence of porcelain aorta, calcified aortic homograft, or small aortic annulus[19-22]. An additional breakthrough is the performance of these procedures without the need for a hybrid room or a cath lab, as they are routinely carried out in an ordinary operating room simply with the aid of transesophageal echocardiography. Consequently, the costs of sutureless valves are believed to be lower, as TAVI devices are more expensive and require incremental costs related to implantation technology and to the increased number of personnel involved in the procedure. 
A cost-utility analysis of TAVI in Belgium concluded that it is not recommended to reimburse TAVI for high-risk patients because the patients had no survival benefit after 1 year, the risk of cerebrovascular accident was twice as high, and the costs were significantly higher[23,24]. Further prospective clinical trials are definitely needed to determine long-term durability and outcomes. A clinical trial testing these devices has been approved and is currently underway. CONCLUSION: In conclusion, the deployment of the sutureless bioprosthesis was safe and effective, thereby representing a new alternative to conventional surgery or the transcatheter approach in moderate- to high-risk patients with severe aortic stenosis.
Background: Conventional aortic valve replacement is the treatment of choice for symptomatic severe aortic stenosis. The transcatheter technique is a viable alternative with promising results for inoperable patients. Sutureless bioprostheses have shown benefits in high-risk patients, such as the reduction of aortic clamping and cardiopulmonary bypass times, decreasing risks and adverse effects. Methods: The bioprosthesis is made of a metal frame and bovine pericardium leaflets, encapsulated in a catheter. The animals underwent left thoracotomy and cardiopulmonary bypass was established. The sutureless bioprosthesis was deployed to the aortic valve, with 1/3 of the structure on the left ventricular face. Cardiopulmonary bypass, aortic clamping and deployment times were recorded. Echocardiograms were performed before, during and after the surgery. The bioprosthesis was initially implanted in an 85-year-old patient with aortic stenosis and high risk for conventional surgery, EuroSCORE 40 and multiple comorbidities. Results: The sutureless bioprosthesis was rapidly deployed (50-170 seconds; average=95 seconds). The aortic clamping time ranged from 6-10 minutes, with an average of 7 minutes; the mean cardiopulmonary bypass time was 71 minutes. Bioprostheses were properly positioned without paravalvular leak. In the first operated patient, the aortic clamping time was 39 minutes and the patient had a good postoperative course. Conclusions: The deployment of the sutureless bioprosthesis was safe and effective, thereby representing a new alternative to conventional surgery or the transcatheter approach in moderate- to high-risk patients with severe aortic stenosis.
INTRODUCTION: Conventional aortic valve replacement is still the treatment of choice for patients with symptomatic severe aortic valve stenosis. However, in recent times the transcatheter technique (TAVI) has emerged as a viable and effective alternative to treat high-risk or inoperable patients[1]. Nevertheless, inherent complications of TAVI have been surfacing and restricting its use, such as the embolization of calcium debris and consequent cerebral infarction, peripheral vascular damage, the further need for pacemaker insertion, paravalvular leakage and its impact on long-term survival, coronary ostium occlusion, aortic rupture and the high cost of the device[2,3]. Sutureless AVR using a self-expanding bioprosthesis is a new and promising alternative to standard AVR in elderly and high-risk surgical patients[4]. The proposed benefits of this technology include enhanced implantability, shorter aortic cross-clamp and cardiopulmonary bypass (CPB) times, favourable hemodynamic performance, and easier access for minimally invasive surgery[5-8]. In addition, this approach allows complete removal of the diseased native valve and also comprises a suitable alternative to multiple valve procedures or associated coronary artery bypass grafting. Several European case series have shown excellent early clinical and hemodynamic outcomes[7,8]. Therefore, sutureless aortic bioprostheses have been proposed as an alternative to standard surgical AVR or TAVI in elderly and high-risk patients. Comparative reports in intermediate- to high-risk patients have demonstrated a lower rate of perioperative complications and improved survival at 24-month follow-up with sutureless valves compared to TAVI[2,9]. Therefore, the objective of this study was to experimentally evaluate the implantation of a novel balloon-expandable sutureless aortic valve bioprosthesis in an animal model and to report its early clinical application. CONCLUSION: In conclusion, the deployment of the sutureless bioprosthesis was safe and effective, thereby representing a new alternative to conventional surgery or the transcatheter approach in moderate- to high-risk patients with severe aortic stenosis.
Background: Conventional aortic valve replacement is the treatment of choice for symptomatic severe aortic stenosis. The transcatheter technique is a viable alternative with promising results for inoperable patients. Sutureless bioprostheses have shown benefits in high-risk patients, such as the reduction of aortic clamping and cardiopulmonary bypass times, decreasing risks and adverse effects. Methods: The bioprosthesis is made of a metal frame and bovine pericardium leaflets, encapsulated in a catheter. The animals underwent left thoracotomy and cardiopulmonary bypass was established. The sutureless bioprosthesis was deployed to the aortic valve, with 1/3 of the structure on the left ventricular face. Cardiopulmonary bypass, aortic clamping and deployment times were recorded. Echocardiograms were performed before, during and after the surgery. The bioprosthesis was initially implanted in an 85-year-old patient with aortic stenosis and high risk for conventional surgery, EuroSCORE 40 and multiple comorbidities. Results: The sutureless bioprosthesis was rapidly deployed (50-170 seconds; average=95 seconds). The aortic clamping time ranged from 6-10 minutes, with an average of 7 minutes; the mean cardiopulmonary bypass time was 71 minutes. Bioprostheses were properly positioned without paravalvular leak. In the first operated patient, the aortic clamping time was 39 minutes and the patient had a good postoperative course. Conclusions: The deployment of the sutureless bioprosthesis was safe and effective, thereby representing a new alternative to conventional surgery or the transcatheter approach in moderate- to high-risk patients with severe aortic stenosis.
2023
274
[]
5
[ "aortic", "valve", "sutureless", "risk", "patients", "high", "high risk", "aortic valve", "tavi", "bioprosthesis" ]
[ "aortic valve replacement", "expandable aortic valve", "sutureless compared tavi", "avr tavi elderly", "transcatheter technique tavi" ]
[CONTENT] Aortic Valve, Surgery | Heart Valves, Surgery | Aortic Valve Stenosis | Bioprosthesis [SUMMARY]
[CONTENT] Aortic Valve, Surgery | Heart Valves, Surgery | Aortic Valve Stenosis | Bioprosthesis [SUMMARY]
[CONTENT] Aortic Valve, Surgery | Heart Valves, Surgery | Aortic Valve Stenosis | Bioprosthesis [SUMMARY]
[CONTENT] Aortic Valve, Surgery | Heart Valves, Surgery | Aortic Valve Stenosis | Bioprosthesis [SUMMARY]
[CONTENT] Aortic Valve, Surgery | Heart Valves, Surgery | Aortic Valve Stenosis | Bioprosthesis [SUMMARY]
[CONTENT] Aortic Valve, Surgery | Heart Valves, Surgery | Aortic Valve Stenosis | Bioprosthesis [SUMMARY]
[CONTENT] Aged, 80 and over | Animals | Aortic Valve | Aortic Valve Stenosis | Bioprosthesis | Cattle | Heart Valve Prosthesis | Heart Valve Prosthesis Implantation | Humans | Implants, Experimental | Operative Time | Prosthesis Design | Sheep | Treatment Outcome [SUMMARY]
[CONTENT] Aged, 80 and over | Animals | Aortic Valve | Aortic Valve Stenosis | Bioprosthesis | Cattle | Heart Valve Prosthesis | Heart Valve Prosthesis Implantation | Humans | Implants, Experimental | Operative Time | Prosthesis Design | Sheep | Treatment Outcome [SUMMARY]
[CONTENT] Aged, 80 and over | Animals | Aortic Valve | Aortic Valve Stenosis | Bioprosthesis | Cattle | Heart Valve Prosthesis | Heart Valve Prosthesis Implantation | Humans | Implants, Experimental | Operative Time | Prosthesis Design | Sheep | Treatment Outcome [SUMMARY]
[CONTENT] Aged, 80 and over | Animals | Aortic Valve | Aortic Valve Stenosis | Bioprosthesis | Cattle | Heart Valve Prosthesis | Heart Valve Prosthesis Implantation | Humans | Implants, Experimental | Operative Time | Prosthesis Design | Sheep | Treatment Outcome [SUMMARY]
[CONTENT] Aged, 80 and over | Animals | Aortic Valve | Aortic Valve Stenosis | Bioprosthesis | Cattle | Heart Valve Prosthesis | Heart Valve Prosthesis Implantation | Humans | Implants, Experimental | Operative Time | Prosthesis Design | Sheep | Treatment Outcome [SUMMARY]
[CONTENT] Aged, 80 and over | Animals | Aortic Valve | Aortic Valve Stenosis | Bioprosthesis | Cattle | Heart Valve Prosthesis | Heart Valve Prosthesis Implantation | Humans | Implants, Experimental | Operative Time | Prosthesis Design | Sheep | Treatment Outcome [SUMMARY]
[CONTENT] aortic valve replacement | expandable aortic valve | sutureless compared tavi | avr tavi elderly | transcatheter technique tavi [SUMMARY]
[CONTENT] aortic valve replacement | expandable aortic valve | sutureless compared tavi | avr tavi elderly | transcatheter technique tavi [SUMMARY]
[CONTENT] aortic valve replacement | expandable aortic valve | sutureless compared tavi | avr tavi elderly | transcatheter technique tavi [SUMMARY]
[CONTENT] aortic valve replacement | expandable aortic valve | sutureless compared tavi | avr tavi elderly | transcatheter technique tavi [SUMMARY]
[CONTENT] aortic valve replacement | expandable aortic valve | sutureless compared tavi | avr tavi elderly | transcatheter technique tavi [SUMMARY]
[CONTENT] aortic valve replacement | expandable aortic valve | sutureless compared tavi | avr tavi elderly | transcatheter technique tavi [SUMMARY]
[CONTENT] aortic | valve | sutureless | risk | patients | high | high risk | aortic valve | tavi | bioprosthesis [SUMMARY]
[CONTENT] aortic | valve | sutureless | risk | patients | high | high risk | aortic valve | tavi | bioprosthesis [SUMMARY]
[CONTENT] aortic | valve | sutureless | risk | patients | high | high risk | aortic valve | tavi | bioprosthesis [SUMMARY]
[CONTENT] aortic | valve | sutureless | risk | patients | high | high risk | aortic valve | tavi | bioprosthesis [SUMMARY]
[CONTENT] aortic | valve | sutureless | risk | patients | high | high risk | aortic valve | tavi | bioprosthesis [SUMMARY]
[CONTENT] aortic | valve | sutureless | risk | patients | high | high risk | aortic valve | tavi | bioprosthesis [SUMMARY]
[CONTENT] patients | tavi | aortic | valve | high | alternative | avr | risk | high risk | elderly [SUMMARY]
[CONTENT] structure | animals | left | aortic | inovare alpha | alpha | inovare | valve | patient | animal [SUMMARY]
[CONTENT] cases | minutes | time | aortic | deployed | valve | valves | seconds | average | aortic clamping time [SUMMARY]
[CONTENT] bioprosthesis safe effective representing | representing | alternative conventional surgery | alternative conventional surgery transcatheter | new alternative conventional surgery | new alternative conventional | moderate high risk patients | moderate high risk | new alternative | moderate high [SUMMARY]
[CONTENT] aortic | valve | patients | sutureless | risk | high | alternative | high risk | tavi | bioprosthesis [SUMMARY]
[CONTENT] aortic | valve | patients | sutureless | risk | high | alternative | high risk | tavi | bioprosthesis [SUMMARY]
[CONTENT] ||| Transcatheter ||| [SUMMARY]
[CONTENT] ||| ||| 1/3 ||| ||| ||| 85 year-old | 40 [SUMMARY]
[CONTENT] 50-170 seconds | average=95 seconds ||| 6-10 minutes | 7 minutes | 71 minutes ||| ||| 39 minutes [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| Transcatheter ||| ||| ||| ||| 1/3 ||| ||| ||| 85 year-old | 40 ||| ||| 50-170 seconds | average=95 seconds ||| 6-10 minutes | 7 minutes | 71 minutes ||| ||| 39 minutes ||| [SUMMARY]
[CONTENT] ||| Transcatheter ||| ||| ||| ||| 1/3 ||| ||| ||| 85 year-old | 40 ||| ||| 50-170 seconds | average=95 seconds ||| 6-10 minutes | 7 minutes | 71 minutes ||| ||| 39 minutes ||| [SUMMARY]
Lifestyle Changes and Body Mass Index during COVID-19 Pandemic Lockdown: An Italian Online-Survey.
33805291
The COVID-19 pandemic has imposed a period of contingency measures, including total or partial lockdowns all over the world, leading to several changes in lifestyle/eating behaviours. This retrospective cohort study aimed at investigating lifestyle changes in the Italian adult population during the COVID-19 pandemic "Phase 1" lockdown (8 March-4 May 2020) and at discriminating between positive and negative changes and their relationship with BMI (body mass index) variations (ΔBMI).
BACKGROUND
A multiple-choice web-form survey was used to collect retrospective data regarding lifestyle/eating behaviours during "Phase 1" in the Italian adult population. According to changes in lifestyle/eating behaviours, the sample was divided into three classes of change: "negative change", "no change", "positive change". For each class, correlations with ΔBMI were investigated.
METHODS
Data were collected from 1304 subjects (973F/331M). Mean ΔBMI differed significantly (p < 0.001) between classes and was significantly related to water intake, alcohol consumption, physical activity, frequency of "craving or snacking between meals", and dessert/sweets consumption at lunch.
RESULTS
During "Phase 1", many people faced several negative changes in lifestyle/eating behaviours with potential negative impact on health. These findings highlight that pandemic exacerbates nutritional issues and most efforts need to be done to provide nutrition counselling and public health services to support general population needs.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Alcohol Drinking", "Body Mass Index", "COVID-19", "Child", "Communicable Disease Control", "Diet", "Drinking", "Exercise", "Female", "Humans", "Italy", "Life Style", "Male", "Middle Aged", "Retrospective Studies", "Surveys and Questionnaires", "Young Adult" ]
8066204
1. Introduction
The COVID-19 pandemic has imposed a period of contingency measures, including total or partial lockdowns all over the world, with different modalities according to the countries. In Italy, the first lockdown, “Phase 1” (DPCM-GU Serie Generale n.59), began on 8 March 2020. This phase has greatly impacted the everyday life of all citizens, leading to different adaptive strategies. Emerging studies [1,2,3,4] have assessed the impact of this situation on physical and mental health. Di Renzo et al. [1], analyzing the effect of the COVID-19 pandemic on eating habits and lifestyle in a sample of Italian respondents (age range, 12 to 86 years) by means of an online survey, reported that young adults (aged 18 to 30) adhered to the Mediterranean dietary pattern more than younger and elderly respondents, with 15% of the respondents turning to organic fruit and vegetables [1]. Furthermore, other virtuous behaviors were reported by the authors, such as a slight increase in physical activity as well as a reduction in smoking habits [1]. Others [2] have instead suggested that social isolation has led to consuming more ultra-processed, energy-dense comfort foods and purchasing more packaged and shelf-stable foods, both to reduce trips outside the home to the supermarket and to have a richer and more satisfying pantry. At the same time, social distancing and home deliveries decreased the opportunities for physical activity, especially in individuals living in small apartments in urban areas, increasing sedentariness and affecting health and wellbeing [2]. However, it is important to emphasize that numerous interindividual differences account for different behaviors [3] that may depend on personal characteristics and that have led some to cook more and devote more time to meals within the family, and others instead to use food in a disorderly way out of boredom, as a consolation or to quell anxiety [3]. This pandemic has, more than ever, reinforced the notion that the general malaise has inevitable repercussions on mental wellbeing. A large multicenter Italian study [4] explored the implications of social isolation on psychological distress in the academic population of five universities, reporting that about 20% of the participants showed severe levels of anxiety and mood deflection. Therefore, based on these previous observations [1,2,3,4], the primary aim of our study was to investigate modifications in lifestyle habits and eating behaviors in a sample of Italian adults during the “Phase 1” COVID-19 pandemic home confinement. The secondary aim was to discriminate between positive and negative changes in lifestyle habits and eating behaviors and their relationship with body mass index (BMI) variations, raising awareness of the need for public health actions to elicit positive behavior changes and prevent negative behavior changes in individuals.
null
null
3. Results
We received 1360 questionnaires and selected 1304 subjects (973F/331M) after removing potential duplicates by comparing their IDs. Most of the sample (82.7%, n = 1078) resided in the northern regions of Italy, while 17.3% (n = 226) resided in the Central and Southern ones. We observed missing item responses only in item #9 (“Habitual vegetable consumption at lunch”), with a total of 1241/1304 responses (4.8% missingness). As reported in Table 1, BMI increased significantly at T1 (p < 0.0001). The median value of the lifestyle habits and eating behaviors total score showed that most of the subjects belonged to the 2nd tertile, meaning substantially that no changes occurred during “Phase 1”; no significant differences were recorded between residents of different regions (Northern vs. Centre + Southern) or between genders (Table 1). Changes in lifestyle habits and eating behaviors during “Phase 1” are also reported in Table 1. Multiple regression analysis (Table 2) was conducted to evaluate which lifestyle and eating behavior variables were significantly related to BMI variations (ΔBMI). ΔBMI (Kg/m2) was entered as the dependent variable, and lifestyle habits and eating behaviors (e.g., physical activity; adequate daily water consumption; alcohol consumption; caffeine consumption; daily breakfast consumption; habitual sandwich/pizza consumption at lunch; habitual sweets/dessert consumption at lunch; habitual fruit consumption at lunch; habitual vegetable consumption at lunch and craving or eating between meals) were simultaneously assessed as independent variables. The analysis showed that BMI increase (ΔBMI > 0) during “Phase 1” was significantly and negatively related to the following behaviors: (i) inadequate water consumption (β: −0.09; SE: 0.04; p = 0.01); (ii) excessive alcohol consumption (β: −0.2; SE: 0.04; p < 0.000); (iii) decreased physical activity (β: −0.12; SE: 0.03; p = 0.000); (iv) increased frequency of “craving or eating between meals” (β: −0.28; SE: 0.04; p < 0.000); (v) habitual consumption of dessert/sweets at lunch (β: −0.37; SE: 0.07; p < 0.000). In the three classes of change, the mean (± SD) BMI value did not differ significantly (“negative change”: 23.34 ± 4.19; “no change”: 23.23 ± 4.15; “positive change”: 23.09 ± 4.01; p = 0.737). On the contrary, the mean (± SD) ΔBMI differed significantly (p < 0.001) between subjects in the “negative change” class (0.4 ± 10.8), subjects in the “no change” class (0.12 ± 0.74), and subjects in the “positive change” class (−0.18 ± 0.9), as reported in Figure 1.
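For illustration only, the analysis just described (a paired comparison of BMI at T0 and T1, followed by a multiple regression of ΔBMI on the lifestyle/eating variables with all predictors entered simultaneously and sex as a covariate) could be sketched in Python as follows. The original analysis was run in STATA 16.1; the file name and column names below are hypothetical stand-ins, not taken from the authors' data.

import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical tidy table: one row per respondent, with BMI at T0/T1,
# sex, and the ten lifestyle/eating items coded as in the survey.
df = pd.read_csv("lockdown_survey.csv")

# Paired t-test: BMI before (T0) vs. during (T1) "Phase 1"
t_stat, p_value = stats.ttest_rel(df["bmi_t0"], df["bmi_t1"])

# Multiple regression: ΔBMI as dependent variable, all lifestyle
# predictors entered simultaneously, adjusting for sex.
df["delta_bmi"] = df["bmi_t1"] - df["bmi_t0"]
model = smf.ols(
    "delta_bmi ~ water + alcohol + caffeine + breakfast + physical_activity"
    " + sandwich_pizza_lunch + sweets_lunch + fruit_lunch + vegetables_lunch"
    " + craving_between_meals + C(sex)",
    data=df,
).fit()
print(model.summary())  # beta coefficients, SEs and p-values, as in Table 2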
5. Conclusions
This study showed that during the “Phase 1” COVID-19 pandemic home confinement, several changes in lifestyle habits and eating behaviors occurred, with individual differences probably depending on personal resilience. However, the short duration of “Phase 1” did not allow us to highlight the impact of these changes on long-term outcomes. Nevertheless, we underline the need to increase public health actions to meet emergency needs and reduce vulnerability over the long term through expanded social protection, health care and education, especially for the most vulnerable groups, including seniors, children and women, and for those with poor access to essential goods such as food, education and health care. This is not going to be the last pandemic we face; therefore, we need to build resiliency in our population, and nutrition is a key factor. Public health interventions should consider the need to decrease levels of inequity and protect people from further health threats and disease. Further research is recommended to understand if, and to what extent, these changes have regressed or have remained stable in the long term, and what impact they have had on health.
[ "2. Materials and Methods", "2.1. Online-Survey", "2.2. Variables Collected", "2.3. Scoring", "2.4. Statistical Analysis" ]
[ "2.1. Online-Survey A 38 multiple-choice web-form survey in Google Forms was used to collect retrospective demographic and anthropometric data as well as lifestyle habits and eating behaviors during the first lockdown phase of COVID-19 pandemic (“Phase 1”, 8 March–4 May 2020) in Italy. The survey was launched on 30 April 2020, through WhatsApp, institutional social networks channels (e.g., Facebook and LinkedIn) and our Lab (Laboratory of Dietetics and Clinical Nutrition) mailing list. Adults (>18 years) residing in Italy were eligible to participate. Data collection ended on 10 May 2020.\nThis web-form administration did not allow a probability sampling procedure; however, it was effective for the research objectives since it facilitated wide dissemination of the survey.\nThe web-survey was conducted in agreement with the national and international regulations and the Declaration of Helsinki. All participants were fully informed about the study objectives and requirements and were asked to provide informed consent accepting the data sharing and privacy policy, before accessing the survey. Participants completed the survey by connecting directly to Google Forms, a web-based app used to create forms for data collection purposes and which is included as part of the free Google Docs Editors suite offered by Google [5]. All the surveys were then downloaded as Microsoft Excel sheet, with the maximum guarantee of anonymity inherent in the web-survey format which does not allow to trace sensitive personal data in any way [6]. For this reason, this study is exempt from the request of the ethics committee approval. \nA 38 multiple-choice web-form survey in Google Forms was used to collect retrospective demographic and anthropometric data as well as lifestyle habits and eating behaviors during the first lockdown phase of COVID-19 pandemic (“Phase 1”, 8 March–4 May 2020) in Italy. The survey was launched on 30 April 2020, through WhatsApp, institutional social networks channels (e.g., Facebook and LinkedIn) and our Lab (Laboratory of Dietetics and Clinical Nutrition) mailing list. Adults (>18 years) residing in Italy were eligible to participate. Data collection ended on 10 May 2020.\nThis web-form administration did not allow a probability sampling procedure; however, it was effective for the research objectives since it facilitated wide dissemination of the survey.\nThe web-survey was conducted in agreement with the national and international regulations and the Declaration of Helsinki. All participants were fully informed about the study objectives and requirements and were asked to provide informed consent accepting the data sharing and privacy policy, before accessing the survey. Participants completed the survey by connecting directly to Google Forms, a web-based app used to create forms for data collection purposes and which is included as part of the free Google Docs Editors suite offered by Google [5]. All the surveys were then downloaded as Microsoft Excel sheet, with the maximum guarantee of anonymity inherent in the web-survey format which does not allow to trace sensitive personal data in any way [6]. For this reason, this study is exempt from the request of the ethics committee approval. \n2.2. Variables Collected To investigate changes experienced during “Phase 1” COVID-19 pandemic home confinement, participants were asked to describe their lifestyle habits and eating behaviors, before “Phase 1” (8 March 2020, T0) and during “Phase 1” (8 March–4 May 2020, T1). 
For this purpose, in the present research, we considered 10 multiple-choice items, out of the 38 ones, exploring (i) physical activity; (ii) daily water consumption; (iii) daily caffeine consumption; (iv) daily alcohol consumption; (v) daily breakfast consumption; (vi) habitual sandwich/pizza consumption at lunch; (vii) habitual sweets/dessert consumption at lunch; (viii) habitual fruit consumption at lunch; (ix) habitual vegetable consumption at lunch; (x) “craving or eating between meals” habit. \nThe survey also collected demographic information (such as gender and residency) as well as anthropometric data (such as weight and height) useful to calculate BMI (Kg/m2) both before and during COVID-19 pandemic “Phase 1”. \nTo investigate changes experienced during “Phase 1” COVID-19 pandemic home confinement, participants were asked to describe their lifestyle habits and eating behaviors, before “Phase 1” (8 March 2020, T0) and during “Phase 1” (8 March–4 May 2020, T1). For this purpose, in the present research, we considered 10 multiple-choice items, out of the 38 ones, exploring (i) physical activity; (ii) daily water consumption; (iii) daily caffeine consumption; (iv) daily alcohol consumption; (v) daily breakfast consumption; (vi) habitual sandwich/pizza consumption at lunch; (vii) habitual sweets/dessert consumption at lunch; (viii) habitual fruit consumption at lunch; (ix) habitual vegetable consumption at lunch; (x) “craving or eating between meals” habit. \nThe survey also collected demographic information (such as gender and residency) as well as anthropometric data (such as weight and height) useful to calculate BMI (Kg/m2) both before and during COVID-19 pandemic “Phase 1”. \n2.3. Scoring Each question investigating lifestyle habits and eating behaviors included multiple choices coded from a score ranging from 0 to 2, where 0 = negative change (shifting away from the national dietary guidelines) [7]; 1 = no change; 2 = positive change (shifting towards the national dietary guidelines [7]. The maximum achievable final score was 20.\nThe total score was then divided into tertiles. According to the tertiles, subjects were classified into three different “classes of change” that occurred during “Phase 1”: (i) subjects with negative change, (shifting away from the national dietary guidelines) [7] (subjects scoring < 10; 1st tertile); (ii) subjects with no change (subjects scoring 10 or 11; 2nd tertile); and finally (iii) subjects with positive change (shifting towards the national dietary guidelines) [7] (subjects scoring ≥ 12 and ≤16; 3rd tertile).\nEach question investigating lifestyle habits and eating behaviors included multiple choices coded from a score ranging from 0 to 2, where 0 = negative change (shifting away from the national dietary guidelines) [7]; 1 = no change; 2 = positive change (shifting towards the national dietary guidelines [7]. The maximum achievable final score was 20.\nThe total score was then divided into tertiles. According to the tertiles, subjects were classified into three different “classes of change” that occurred during “Phase 1”: (i) subjects with negative change, (shifting away from the national dietary guidelines) [7] (subjects scoring < 10; 1st tertile); (ii) subjects with no change (subjects scoring 10 or 11; 2nd tertile); and finally (iii) subjects with positive change (shifting towards the national dietary guidelines) [7] (subjects scoring ≥ 12 and ≤16; 3rd tertile).\n2.4. 
Statistical Analysis Basic description of data and statistical analyses were performed using STATA 16.1 (Stata Corp LLC, College Station, TX, USA). BMI mean values at T0 and T1, between North and Centre-South regions and, between genders were compared by t-test. Multiple regression analysis was used to evaluate what lifestyle factors (independent variables simultaneously put in the model) were significantly related to BMI change (dependent variable), adjusting for sex. \nBasic description of data and statistical analyses were performed using STATA 16.1 (Stata Corp LLC, College Station, TX, USA). BMI mean values at T0 and T1, between North and Centre-South regions and, between genders were compared by t-test. Multiple regression analysis was used to evaluate what lifestyle factors (independent variables simultaneously put in the model) were significantly related to BMI change (dependent variable), adjusting for sex. ", "A 38 multiple-choice web-form survey in Google Forms was used to collect retrospective demographic and anthropometric data as well as lifestyle habits and eating behaviors during the first lockdown phase of COVID-19 pandemic (“Phase 1”, 8 March–4 May 2020) in Italy. The survey was launched on 30 April 2020, through WhatsApp, institutional social networks channels (e.g., Facebook and LinkedIn) and our Lab (Laboratory of Dietetics and Clinical Nutrition) mailing list. Adults (>18 years) residing in Italy were eligible to participate. Data collection ended on 10 May 2020.\nThis web-form administration did not allow a probability sampling procedure; however, it was effective for the research objectives since it facilitated wide dissemination of the survey.\nThe web-survey was conducted in agreement with the national and international regulations and the Declaration of Helsinki. All participants were fully informed about the study objectives and requirements and were asked to provide informed consent accepting the data sharing and privacy policy, before accessing the survey. Participants completed the survey by connecting directly to Google Forms, a web-based app used to create forms for data collection purposes and which is included as part of the free Google Docs Editors suite offered by Google [5]. All the surveys were then downloaded as Microsoft Excel sheet, with the maximum guarantee of anonymity inherent in the web-survey format which does not allow to trace sensitive personal data in any way [6]. For this reason, this study is exempt from the request of the ethics committee approval. ", "To investigate changes experienced during “Phase 1” COVID-19 pandemic home confinement, participants were asked to describe their lifestyle habits and eating behaviors, before “Phase 1” (8 March 2020, T0) and during “Phase 1” (8 March–4 May 2020, T1). For this purpose, in the present research, we considered 10 multiple-choice items, out of the 38 ones, exploring (i) physical activity; (ii) daily water consumption; (iii) daily caffeine consumption; (iv) daily alcohol consumption; (v) daily breakfast consumption; (vi) habitual sandwich/pizza consumption at lunch; (vii) habitual sweets/dessert consumption at lunch; (viii) habitual fruit consumption at lunch; (ix) habitual vegetable consumption at lunch; (x) “craving or eating between meals” habit. \nThe survey also collected demographic information (such as gender and residency) as well as anthropometric data (such as weight and height) useful to calculate BMI (Kg/m2) both before and during COVID-19 pandemic “Phase 1”. 
", "Each question investigating lifestyle habits and eating behaviors included multiple choices coded from a score ranging from 0 to 2, where 0 = negative change (shifting away from the national dietary guidelines) [7]; 1 = no change; 2 = positive change (shifting towards the national dietary guidelines [7]. The maximum achievable final score was 20.\nThe total score was then divided into tertiles. According to the tertiles, subjects were classified into three different “classes of change” that occurred during “Phase 1”: (i) subjects with negative change, (shifting away from the national dietary guidelines) [7] (subjects scoring < 10; 1st tertile); (ii) subjects with no change (subjects scoring 10 or 11; 2nd tertile); and finally (iii) subjects with positive change (shifting towards the national dietary guidelines) [7] (subjects scoring ≥ 12 and ≤16; 3rd tertile).", "Basic description of data and statistical analyses were performed using STATA 16.1 (Stata Corp LLC, College Station, TX, USA). BMI mean values at T0 and T1, between North and Centre-South regions and, between genders were compared by t-test. Multiple regression analysis was used to evaluate what lifestyle factors (independent variables simultaneously put in the model) were significantly related to BMI change (dependent variable), adjusting for sex. " ]
[ null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Online-Survey", "2.2. Variables Collected", "2.3. Scoring", "2.4. Statistical Analysis", "3. Results", "4. Discussion", "5. Conclusions" ]
[ "COVID-19 pandemic has imposed a period of contingency measures, including total or partial lockdowns all over the world, with different modalities according to the countries. In Italy the first lockdown, “Phase 1” (DPCM-GU Serie Generale n.59; 8 March 2020), occurred March 8. This phase has greatly impacted everyday life of all citizens, leading to different adaptive strategies.\nEmerging studies [1,2,3,4] have assessed the impact of this situation on physical and mental health. Di Renzo et al. [1] analyzing the effect of COVID-19 pandemic on eating habits and lifestyle in a sample of Italian respondents (age range, 12 and 86 years), by means of an online survey, reported that young adults (aged 18 to 30) adhered to the Mediterranean dietary pattern more than youth and elderly, with 15% of the respondents turning to organic fruit and vegetables [1]. Furthermore, other virtuous behaviors were reported by the authors as a slight increase in physical activity as well as a reduction in smoking habits [1]. Others [2] instead have suggested that social isolation has led to consuming more ultra-processed, energy-dense comfort foods, purchasing more packaged and shelf-stable foods, both to reduce trips outside home to the supermarket and to have a richer and more satisfying pantry. At the same time, social distancing and home deliveries decreased the opportunities for physical activity, especially in individuals living in small apartments in urban areas, increasing sedentariness, and affecting health and wellbeing [2]. However, it is important to emphasize that numerous interindividual differences account for different behaviors [3] that may depend on personal characteristics and that have led some to cook more and devote more time to meals within the family and others instead to use the food in a disorderly way for boredom, as a consolation or to quell anxiety [3].\nThis pandemic has more than ever mastered the notion that there are inevitable repercussions of the general malaise on mental wellbeing. A large multicenter Italian study [4] explored the implications of social isolation on psychological distress in the academic population of five Universities, reporting that about 20% of the participants showed severe levels of anxiety and mood deflection. \nTherefore, based on these previous observations [1,2,3,4], the primary aim of our study was to investigate lifestyle habits and eating behaviors modifications in a sample of Italian adults during “Phase 1” COVID-19 pandemic home confinement. The secondary aim was to discriminate between positive and negative changes in lifestyle habits and eating behaviors and their relationship with body mass index (BMI) variations, raising awareness of the need for public health actions to elicit positive behavior changes and prevent negative behavior changes in individuals.", "2.1. Online-Survey A 38 multiple-choice web-form survey in Google Forms was used to collect retrospective demographic and anthropometric data as well as lifestyle habits and eating behaviors during the first lockdown phase of COVID-19 pandemic (“Phase 1”, 8 March–4 May 2020) in Italy. The survey was launched on 30 April 2020, through WhatsApp, institutional social networks channels (e.g., Facebook and LinkedIn) and our Lab (Laboratory of Dietetics and Clinical Nutrition) mailing list. Adults (>18 years) residing in Italy were eligible to participate. 
Data collection ended on 10 May 2020.\nThis web-form administration did not allow a probability sampling procedure; however, it was effective for the research objectives since it facilitated wide dissemination of the survey.\nThe web-survey was conducted in agreement with the national and international regulations and the Declaration of Helsinki. All participants were fully informed about the study objectives and requirements and were asked to provide informed consent accepting the data sharing and privacy policy, before accessing the survey. Participants completed the survey by connecting directly to Google Forms, a web-based app used to create forms for data collection purposes and which is included as part of the free Google Docs Editors suite offered by Google [5]. All the surveys were then downloaded as Microsoft Excel sheet, with the maximum guarantee of anonymity inherent in the web-survey format which does not allow to trace sensitive personal data in any way [6]. For this reason, this study is exempt from the request of the ethics committee approval. \nA 38 multiple-choice web-form survey in Google Forms was used to collect retrospective demographic and anthropometric data as well as lifestyle habits and eating behaviors during the first lockdown phase of COVID-19 pandemic (“Phase 1”, 8 March–4 May 2020) in Italy. The survey was launched on 30 April 2020, through WhatsApp, institutional social networks channels (e.g., Facebook and LinkedIn) and our Lab (Laboratory of Dietetics and Clinical Nutrition) mailing list. Adults (>18 years) residing in Italy were eligible to participate. Data collection ended on 10 May 2020.\nThis web-form administration did not allow a probability sampling procedure; however, it was effective for the research objectives since it facilitated wide dissemination of the survey.\nThe web-survey was conducted in agreement with the national and international regulations and the Declaration of Helsinki. All participants were fully informed about the study objectives and requirements and were asked to provide informed consent accepting the data sharing and privacy policy, before accessing the survey. Participants completed the survey by connecting directly to Google Forms, a web-based app used to create forms for data collection purposes and which is included as part of the free Google Docs Editors suite offered by Google [5]. All the surveys were then downloaded as Microsoft Excel sheet, with the maximum guarantee of anonymity inherent in the web-survey format which does not allow to trace sensitive personal data in any way [6]. For this reason, this study is exempt from the request of the ethics committee approval. \n2.2. Variables Collected To investigate changes experienced during “Phase 1” COVID-19 pandemic home confinement, participants were asked to describe their lifestyle habits and eating behaviors, before “Phase 1” (8 March 2020, T0) and during “Phase 1” (8 March–4 May 2020, T1). For this purpose, in the present research, we considered 10 multiple-choice items, out of the 38 ones, exploring (i) physical activity; (ii) daily water consumption; (iii) daily caffeine consumption; (iv) daily alcohol consumption; (v) daily breakfast consumption; (vi) habitual sandwich/pizza consumption at lunch; (vii) habitual sweets/dessert consumption at lunch; (viii) habitual fruit consumption at lunch; (ix) habitual vegetable consumption at lunch; (x) “craving or eating between meals” habit. 
\nThe survey also collected demographic information (such as gender and residency) as well as anthropometric data (such as weight and height) useful to calculate BMI (Kg/m2) both before and during COVID-19 pandemic “Phase 1”. \nTo investigate changes experienced during “Phase 1” COVID-19 pandemic home confinement, participants were asked to describe their lifestyle habits and eating behaviors, before “Phase 1” (8 March 2020, T0) and during “Phase 1” (8 March–4 May 2020, T1). For this purpose, in the present research, we considered 10 multiple-choice items, out of the 38 ones, exploring (i) physical activity; (ii) daily water consumption; (iii) daily caffeine consumption; (iv) daily alcohol consumption; (v) daily breakfast consumption; (vi) habitual sandwich/pizza consumption at lunch; (vii) habitual sweets/dessert consumption at lunch; (viii) habitual fruit consumption at lunch; (ix) habitual vegetable consumption at lunch; (x) “craving or eating between meals” habit. \nThe survey also collected demographic information (such as gender and residency) as well as anthropometric data (such as weight and height) useful to calculate BMI (Kg/m2) both before and during COVID-19 pandemic “Phase 1”. \n2.3. Scoring Each question investigating lifestyle habits and eating behaviors included multiple choices coded from a score ranging from 0 to 2, where 0 = negative change (shifting away from the national dietary guidelines) [7]; 1 = no change; 2 = positive change (shifting towards the national dietary guidelines [7]. The maximum achievable final score was 20.\nThe total score was then divided into tertiles. According to the tertiles, subjects were classified into three different “classes of change” that occurred during “Phase 1”: (i) subjects with negative change, (shifting away from the national dietary guidelines) [7] (subjects scoring < 10; 1st tertile); (ii) subjects with no change (subjects scoring 10 or 11; 2nd tertile); and finally (iii) subjects with positive change (shifting towards the national dietary guidelines) [7] (subjects scoring ≥ 12 and ≤16; 3rd tertile).\nEach question investigating lifestyle habits and eating behaviors included multiple choices coded from a score ranging from 0 to 2, where 0 = negative change (shifting away from the national dietary guidelines) [7]; 1 = no change; 2 = positive change (shifting towards the national dietary guidelines [7]. The maximum achievable final score was 20.\nThe total score was then divided into tertiles. According to the tertiles, subjects were classified into three different “classes of change” that occurred during “Phase 1”: (i) subjects with negative change, (shifting away from the national dietary guidelines) [7] (subjects scoring < 10; 1st tertile); (ii) subjects with no change (subjects scoring 10 or 11; 2nd tertile); and finally (iii) subjects with positive change (shifting towards the national dietary guidelines) [7] (subjects scoring ≥ 12 and ≤16; 3rd tertile).\n2.4. Statistical Analysis Basic description of data and statistical analyses were performed using STATA 16.1 (Stata Corp LLC, College Station, TX, USA). BMI mean values at T0 and T1, between North and Centre-South regions and, between genders were compared by t-test. Multiple regression analysis was used to evaluate what lifestyle factors (independent variables simultaneously put in the model) were significantly related to BMI change (dependent variable), adjusting for sex. 
\nBasic description of data and statistical analyses were performed using STATA 16.1 (Stata Corp LLC, College Station, TX, USA). BMI mean values at T0 and T1, between North and Centre-South regions and, between genders were compared by t-test. Multiple regression analysis was used to evaluate what lifestyle factors (independent variables simultaneously put in the model) were significantly related to BMI change (dependent variable), adjusting for sex. ", "A 38 multiple-choice web-form survey in Google Forms was used to collect retrospective demographic and anthropometric data as well as lifestyle habits and eating behaviors during the first lockdown phase of COVID-19 pandemic (“Phase 1”, 8 March–4 May 2020) in Italy. The survey was launched on 30 April 2020, through WhatsApp, institutional social networks channels (e.g., Facebook and LinkedIn) and our Lab (Laboratory of Dietetics and Clinical Nutrition) mailing list. Adults (>18 years) residing in Italy were eligible to participate. Data collection ended on 10 May 2020.\nThis web-form administration did not allow a probability sampling procedure; however, it was effective for the research objectives since it facilitated wide dissemination of the survey.\nThe web-survey was conducted in agreement with the national and international regulations and the Declaration of Helsinki. All participants were fully informed about the study objectives and requirements and were asked to provide informed consent accepting the data sharing and privacy policy, before accessing the survey. Participants completed the survey by connecting directly to Google Forms, a web-based app used to create forms for data collection purposes and which is included as part of the free Google Docs Editors suite offered by Google [5]. All the surveys were then downloaded as Microsoft Excel sheet, with the maximum guarantee of anonymity inherent in the web-survey format which does not allow to trace sensitive personal data in any way [6]. For this reason, this study is exempt from the request of the ethics committee approval. ", "To investigate changes experienced during “Phase 1” COVID-19 pandemic home confinement, participants were asked to describe their lifestyle habits and eating behaviors, before “Phase 1” (8 March 2020, T0) and during “Phase 1” (8 March–4 May 2020, T1). For this purpose, in the present research, we considered 10 multiple-choice items, out of the 38 ones, exploring (i) physical activity; (ii) daily water consumption; (iii) daily caffeine consumption; (iv) daily alcohol consumption; (v) daily breakfast consumption; (vi) habitual sandwich/pizza consumption at lunch; (vii) habitual sweets/dessert consumption at lunch; (viii) habitual fruit consumption at lunch; (ix) habitual vegetable consumption at lunch; (x) “craving or eating between meals” habit. \nThe survey also collected demographic information (such as gender and residency) as well as anthropometric data (such as weight and height) useful to calculate BMI (Kg/m2) both before and during COVID-19 pandemic “Phase 1”. ", "Each question investigating lifestyle habits and eating behaviors included multiple choices coded from a score ranging from 0 to 2, where 0 = negative change (shifting away from the national dietary guidelines) [7]; 1 = no change; 2 = positive change (shifting towards the national dietary guidelines [7]. The maximum achievable final score was 20.\nThe total score was then divided into tertiles. 
We received 1360 questionnaires and selected 1304 subjects (973 F/331 M) after removing potential duplicates by comparing their IDs. Most of the sample (82.7%, n = 1078) resided in the northern regions of Italy, while 17.3% (n = 226) resided in the central and southern ones.
We observed missing item responses only in item #9 (“Habitual vegetable consumption at lunch”), with a total of 1241/1304 responses (4.8% missingness).
As reported in Table 1, BMI increased significantly at T1 (p < 0.0001). The median value of the lifestyle habits and eating behaviors total score showed that most subjects belonged to the 2nd tertile, meaning substantially that no changes occurred during “Phase 1”; no significant differences were recorded between residents of different regions (Northern vs. Centre + Southern) or between genders (Table 1). Changes in lifestyle habits and eating behaviors during “Phase 1” are also reported in Table 1.
Multiple regression analysis (Table 2) was conducted to evaluate which lifestyle and eating behavior variables were significantly related to BMI variations (ΔBMI). ΔBMI (kg/m²) as the dependent variable and lifestyle habits and eating behaviors (physical activity; adequate daily water consumption; alcohol consumption; caffeine consumption; daily breakfast consumption; habitual sandwich/pizza consumption at lunch; habitual sweets/dessert consumption at lunch; habitual fruit consumption at lunch; habitual vegetable consumption at lunch; and craving or eating between meals) as independent variables were assessed simultaneously. The analysis showed that BMI increase (ΔBMI > 0) during “Phase 1” was significantly and negatively related to the following behaviors: (i) inadequate water consumption (β: −0.09; SE: 0.04; p = 0.01); (ii) excessive alcohol consumption (β: −0.2; SE: 0.04; p < 0.001); (iii) decreased physical activity (β: −0.12; SE: 0.03; p < 0.001); (iv) increased frequency of “craving or eating between meals” (β: −0.28; SE: 0.04; p < 0.001); (v) habitual consumption of dessert/sweets at lunch (β: −0.37; SE: 0.07; p < 0.001).
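The authors report running this analysis in STATA 16.1; the sketch below re-expresses the same two steps (paired t-test on BMI at T0/T1, then a sex-adjusted multiple regression of ΔBMI on the ten lifestyle items) in Python. The file name and all column names are hypothetical stand-ins for the survey export, not the authors' actual variables.

```python
# Illustrative re-implementation of the reported analysis (the original was
# run in STATA 16.1; column names here are assumptions about the export).
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("survey_export.csv")  # hypothetical file

# Paired t-test: BMI before (T0) vs during (T1) "Phase 1".
t, p = stats.ttest_rel(df["bmi_t1"], df["bmi_t0"])
print(f"paired t = {t:.2f}, p = {p:.4g}")

# Multiple regression: delta-BMI on the lifestyle items, adjusting for sex
# (all independent variables entered into the model simultaneously).
df["delta_bmi"] = df["bmi_t1"] - df["bmi_t0"]
model = smf.ols(
    "delta_bmi ~ physical_activity + water + caffeine + alcohol + breakfast"
    " + sandwich_pizza_lunch + sweets_lunch + fruit_lunch"
    " + vegetables_lunch + eating_between_meals + C(sex)",
    data=df,
).fit()
print(model.summary())  # coefficients (beta), SE and p-values per item
```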
In the three classes of change, mean (± SD) BMI did not differ significantly (“negative change”: 23.34 ± 4.19; “no change”: 23.23 ± 4.15; “positive change”: 23.09 ± 4.01; p = 0.737). On the contrary, mean (± SD) ΔBMI differed significantly (p < 0.001) between subjects in the “negative change” class (0.4 ± 10.8), subjects in the “no change” class (0.12 ± 0.74) and subjects in the “positive change” class (−0.18 ± 0.9), as reported in Figure 1.

Home confinement experienced during the COVID-19 pandemic has undeniably changed the everyday life of most people. Some of these changes have affected lifestyle, understood as the set of individual habits and behaviors, including dietary pattern and physical activity [1,2,3,4], with consequences on body weight among many others [1,3]. The present brief report illustrates lifestyle habit and eating behavior modifications in a sample of Italian adults during COVID-19 pandemic “Phase 1” home confinement, discriminating between positive and negative changes according to the national dietary guidelines, and their association with ΔBMI.
Our sample population was characterized overall by a slight but significant BMI increase during COVID-19 pandemic “Phase 1”. After classifying lifestyle changes into three classes (negative changes, no changes, positive changes), we observed a significant variation of BMI (ΔBMI) between the three groups (p < 0.001). The groups experiencing negative and positive lifestyle changes reported an increased and decreased BMI, respectively, while the group experiencing no lifestyle changes reported no substantial BMI modification. In particular, subjects who reported negative changes in their behaviors, namely “adequate daily water consumption”, “alcohol consumption”, “physical activity”, “craving or eating between meals” and “habitual sweets/dessert consumption at lunch”, were also those who displayed a significant increase in BMI during “Phase 1”.
Considering the whole sample, it is important to highlight that around one-third of the examined population worsened their lifestyle behaviors.
These results suggest that although most people show good resilience, a large part of the community is still at risk and predisposed to acquire lifestyle behaviors that are potentially harmful to health, regardless of initial body weight. Indeed, we observed that while some took advantage of home confinement to increase physical activity, many people stopped or reduced sports and recreational activities, a trend also facilitated by the closure of gyms and swimming pools. These findings suggest the need to raise subjects’ awareness of lifestyle according to national guidelines [8], and to tailor strategies for health policies.
Comfort food consumption, including desserts or sweets at lunch, was positively associated with BMI increase, suggesting that lockdown induced people to spend more time planning meals and cooking tasty foods and homemade sweets, as already reported by other surveys [1]. A positive association between BMI and “craving or eating between meals”, as well as consumption of unhealthy snacks and drinks linked to stress, anxiety and time spent in front of a TV screen, was observed, consistent with what has been reported by others [9,10].
In a survey conducted in Spain during home confinement, the sample was grouped into three clusters based on food and cooking habits:
(i) self-control, with a restrained attitude to food and cooking; (ii) sensitive, with emotional attitudes to food and cooking; and (iii) non-emotional, with no emotional attitudes to food and cooking. The authors reported that the “sensitive” cluster ate more and more often, and snacked more between meals, influenced by a lower mood due to social isolation.
Noteworthy, snacking and increased ultra-processed food consumption in association with mood disorders may be confused with emotional eating, preventing a subclinical eating disorder, implicated in the circuit of food reward and addiction, from being correctly diagnosed [11].
Finally, it is well known that stressful events, including home confinement, may also induce people to greater consumption of alcoholic beverages [12], which promotes a positive energy balance and contributes to weight gain [13]. In a survey conducted in Poland during lockdown, alcohol consumption increased in 14.6% of the sample, with higher rates of alcohol abuse in adults [12]. Our results showed that 16.4% of the sample increased their habitual alcohol consumption, especially those who already abused it. Alcohol consumption does not protect in any way from infections, including SARS-CoV-2; indeed, it impairs immune response with a dose-dependent correlation [14]. Isolation and drinking can also increase the risk of self-injury, suicide and violence, especially domestic violence against women, leading them in turn to drink more in a vicious circle [14]. That is why it is necessary to conduct public health information campaigns to raise population awareness of all the risks related to excessive alcohol consumption during isolation, and to provide interventions to protect the most vulnerable [14].
Although this study highlights the need to address future health emergencies with public health actions that also consider lifestyle, the authors acknowledge some limitations. First of all, the data are self-reported, which leads to potential bias; moreover, it is not possible to check data accuracy since participants in Google Forms are anonymous and may easily provide false information. The sample was not representative of specific regions or of the whole country. Although the survey was conducted in the adult population (aged >18 years), age was not recorded, and the analysis was not age-adjusted. Finally, the authors investigated lifestyle habit and eating behavior changes simply to discriminate between positive and negative changes and their relationship to BMI during the COVID-19 pandemic, and were unable to exclude pre-existing eating or mood disorders.

This study showed that during “Phase 1” COVID-19 pandemic home confinement, several changes in lifestyle habits and eating behaviors occurred, with individual differences probably depending on personal resilience. However, the short duration of “Phase 1” did not allow us to assess the impact of these changes on long-term outcomes. Nevertheless, we underline the need to increase public health actions to meet emergency needs and reduce vulnerability over the long term through expanded social protection, health care and education, especially for the most vulnerable groups, including seniors, children and women, and for those with poor access to essential goods such as food, education and health care.
This is not going to be the last pandemic we face; therefore, we need to build resiliency in our population, and nutrition is a key factor.
Public health interventions should consider the need to decrease levels of inequity and protect people from further health threats and disease. Further research is recommended to understand whether, and to what extent, these changes have regressed or remained stable in the long term, and what impact they have had on health.
[ "intro", null, null, null, null, null, "results", "discussion", "conclusions" ]
[ "lifestyle", "body mass index", "COVID-19 pandemic" ]
1. Introduction: The COVID-19 pandemic has imposed a period of contingency measures, including total or partial lockdowns all over the world, with different modalities according to country. In Italy, the first lockdown, “Phase 1” (DPCM-GU Serie Generale n.59; 8 March 2020), began on 8 March 2020. This phase greatly impacted the everyday life of all citizens, leading to different adaptive strategies. Emerging studies [1,2,3,4] have assessed the impact of this situation on physical and mental health. Di Renzo et al. [1], analyzing the effect of the COVID-19 pandemic on eating habits and lifestyle in a sample of Italian respondents (age range 12–86 years) by means of an online survey, reported that young adults (aged 18 to 30) adhered to the Mediterranean dietary pattern more than youth and the elderly, with 15% of respondents turning to organic fruit and vegetables [1]. Furthermore, other virtuous behaviors were reported by the authors, such as a slight increase in physical activity as well as a reduction in smoking [1]. Others [2] have instead suggested that social isolation has led to consuming more ultra-processed, energy-dense comfort foods and purchasing more packaged and shelf-stable foods, both to reduce trips outside the home to the supermarket and to have a richer and more satisfying pantry. At the same time, social distancing and home deliveries decreased the opportunities for physical activity, especially for individuals living in small apartments in urban areas, increasing sedentariness and affecting health and wellbeing [2]. However, it is important to emphasize that numerous interindividual differences account for different behaviors [3] that may depend on personal characteristics, leading some to cook more and devote more time to family meals, and others to use food in a disorderly way out of boredom, as consolation or to quell anxiety [3]. This pandemic has, more than ever, reinforced the notion that the general malaise has inevitable repercussions on mental wellbeing. A large multicenter Italian study [4] explored the implications of social isolation for psychological distress in the academic population of five universities, reporting that about 20% of participants showed severe levels of anxiety and mood deflection. Therefore, based on these previous observations [1,2,3,4], the primary aim of our study was to investigate lifestyle habit and eating behavior modifications in a sample of Italian adults during “Phase 1” COVID-19 pandemic home confinement. The secondary aim was to discriminate between positive and negative changes in lifestyle habits and eating behaviors and their relationship with body mass index (BMI) variations, raising awareness of the need for public health actions to elicit positive behavior changes and prevent negative ones.
Background: The COVID-19 pandemic has imposed a period of contingency measures, including total or partial lockdowns all over the world, leading to several changes in lifestyle/eating behaviours. This retrospective cohort study aimed at investigating Italian adult population lifestyle changes during the COVID-19 pandemic "Phase 1" lockdown (8 March–4 May 2020) and discriminating between positive and negative changes and BMI (body mass index) variations (ΔBMI). Methods: A multiple-choice web-form survey was used to collect retrospective data regarding lifestyle/eating behaviours during "Phase 1" in the Italian adult population. According to changes in lifestyle/eating behaviours, the sample was divided into three classes of change: "negative change", "no change", "positive change". For each class, correlations with ΔBMI were investigated. Results: Data were collected from 1304 subjects (973 F/331 M). Mean ΔBMI differed significantly (p < 0.001) between classes and was significantly related to water intake, alcohol consumption, physical activity, frequency of "craving or snacking between meals", and dessert/sweets consumption at lunch. Conclusions: During "Phase 1", many people faced several negative changes in lifestyle/eating behaviours with a potential negative impact on health. These findings highlight that the pandemic exacerbates nutritional issues, and greater efforts are needed to provide nutrition counselling and public health services to support general population needs.
4,651
270
[ 1563, 295, 209, 181, 85 ]
9
[ "consumption", "phase", "change", "survey", "lifestyle", "eating", "subjects", "habits", "data", "behaviors" ]
[ "changes lifestyle habits", "behaviors including dietary", "meals consumption unhealthy", "pandemic eating", "19 pandemic eating" ]
[CONTENT] lifestyle | body mass index | COVID-19 pandemic [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Alcohol Drinking | Body Mass Index | COVID-19 | Child | Communicable Disease Control | Diet | Drinking | Exercise | Female | Humans | Italy | Life Style | Male | Middle Aged | Retrospective Studies | Surveys and Questionnaires | Young Adult [SUMMARY]
[CONTENT] changes lifestyle habits | behaviors including dietary | meals consumption unhealthy | pandemic eating | 19 pandemic eating [SUMMARY]
[CONTENT] consumption | phase | change | survey | lifestyle | eating | subjects | habits | data | behaviors [SUMMARY]
[CONTENT] italian | health | pandemic | led | aim | respondents | instead | mental | individuals | wellbeing [SUMMARY]
[CONTENT] consumption | se | change | table | 23 | 000 | habitual | lunch | significantly | δbmi [SUMMARY]
[CONTENT] health | term | long term | long | need | health care | education | care | impact | changes [SUMMARY]
[CONTENT] consumption | change | subjects | survey | phase | lunch | habitual | health | data | eating [SUMMARY]
[CONTENT] ||| Italian | COVID-19 | 8 March-4 May 2020 | BMI [SUMMARY]
[CONTENT] 1304 ||| dessert [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| Italian | COVID-19 | 8 March-4 May 2020 | BMI ||| Italian ||| three ||| ΔBMI ||| ||| 1304 ||| dessert ||| ||| [SUMMARY]
Antiepileptic drug treatment of rolandic epilepsy and Panayiotopoulos syndrome: clinical practice survey and clinical trial feasibility.
25202134
The evidence base for management of childhood epilepsy is poor, especially for the most common specific syndromes such as rolandic epilepsy (RE) and Panayiotopoulos syndrome (PS). Considerable international variation in management and controversy about non-treatment indicate the need for high quality randomised controlled trials (RCT). The aim of this study is, therefore, to describe current UK practice and explore the feasibility of different RCT designs for RE and PS.
BACKGROUND
We conducted an online survey of 590 UK paediatricians who treat epilepsy. Thirty-two questions covered annual caseload, investigation and management practice, factors influencing treatment, antiepileptic drug preferences and hypothetical trial design preferences.
METHODS
132 responded (22%): 81% were paediatricians and 95% at consultant seniority. We estimated, annually, 751 new RE cases and 233 PS cases. Electroencephalography (EEG) is requested at least half the time in approximately 70% of cases; MRI brain at least half the time in 40%-65% cases and neuropsychological evaluation in 7%-8%. Clinicians reported non-treatment in 40%: main reasons were low frequency of seizures and parent/child preferences. Carbamazepine is the preferred older, and levetiracetam the preferred newer, RCT arm. Approximately one-half considered active and placebo designs acceptable, choosing seizures as primary and cognitive/behavioural measures as secondary outcomes.
RESULTS
Management among respondents is broadly in line with national guidance, although with possible overuse of brain imaging and underuse of EEG and neuropsychological assessments. A large proportion of patients in the UK remains untreated, and clinicians seem amenable to a range of RCT designs, with carbamazepine and levetiracetam the preferred active drugs.
CONCLUSIONS
[ "Anticonvulsants", "Data Collection", "Electroencephalography", "Epilepsy, Rolandic", "Humans", "Practice Patterns, Physicians'", "Seizures", "United Kingdom" ]
4283698
Introduction
Epilepsy affects 63 400 young people under 18 years of age in the UK.1 Seizures represent one of the top five avoidable reasons for admission of children to emergency departments in the UK.2 Aside from seizures, cognitive and behavioural comorbidities cause a substantial impact, affecting about two-thirds of children with epilepsy.3 Indeed, the comorbidity-associated burden may outweigh that of the seizures themselves,4 and in some epilepsies comorbidities are stronger predictors of quality of life than seizures.5 Hence, the comprehensive management of epilepsy necessitates the recognition and management of individual epilepsy syndromes and their specific comorbidities. There are almost 40 electroclinical epilepsy syndromes defined by constellations of seizure type(s), age of onset, electroencephalography (EEG) and clinical features, each requiring individual assessment and management. The evidence base for management of childhood epilepsies in the UK is detailed in the National Institute of Health and Care Excellence (NICE) Guidelines6 and by its Scottish equivalent, the Scottish Intercollegiate Guidelines Network (SIGN).7 This guidance provides a care standard framework against which a recent national ‘Epilepsy12’ audit of childhood epilepsy care was conducted.8 However, the evidence base for antiepileptic drug treatment in these guidelines remains poor for many common childhood epilepsies, consisting of only a few high-quality randomised controlled trials (RCT).9 Much of the evidence available to NICE is extrapolated from heterogeneous studies of seizures mixing epilepsy types and age groups,10 with uncertain applicability to well-defined childhood epilepsy syndromes.11 Furthermore, while there is now an emerging idea of how childhood epilepsy as a whole is managed in the UK,8 there is little detail about the variation in management of specific epilepsy syndromes and paediatricians’ familiarity with them. Hence, a more detailed survey of practice is justified. In the absence of solid RCT evidence, there is widespread historical and geographical variation in recommendations for the drug management of specific epilepsies. In childhood, rolandic epilepsy (RE, sometimes known as Benign Epilepsy with CentroTemporal Spikes or BECTS), a focal epilepsy, is the most common syndrome, estimated to constitute 8%–25% of all childhood epilepsies12 and diagnosed in 9% of children with epilepsy in the recent Epilepsy12 audit.8 RE, along with the closely related Panayiotopoulos syndrome (PS),13 is part of the group of non-lesional focal epilepsies (box 1). Textbooks and older ‘expert opinion’ often advised conservative management of RE, that is, with no antiepileptic drugs (AED),14–17 but neither the extent to which this conservative management is followed nor its scientific rationale14 18 19 is known. Moreover, recent experimental evidence suggests that focal EEG spikes may disrupt simultaneous regional brain function20 and might also impair long-term learning21 and memory consolidation in sleep,22 prompting a re-evaluation of the ‘benign’ nature of this group of epilepsies.
Box 1. Clinical description of RE and PS
Rolandic epilepsy
Rolandic epilepsy (RE), also known as Benign Epilepsy of Childhood with Centro-Temporal Spikes (BECTS), is the most common epilepsy in childhood, with an incidence of up to 21 per 100 000 children aged 15 years and under.12 The onset of seizures is between 3 and 12 years, and remission almost always occurs by adolescence.
Seizures typically occur during sleep or drowsiness, are brief and involve unilateral sensorimotor symptoms (eg, numbness, tingling, drooling) of the pharynx, tongue, face, lips and sometimes hand. The affected side may alternate and seizures may infrequently become secondarily generalised. Neurodevelopmental disorders are very common (40%) and include speech sound disorder, language impairment and reading disability, all of which usually precede seizures and aggregate among relatives.23 24 Attention deficit hyperactivity disorder (ADHD) and migraine without aura are also strong but less frequent (10%) associations.25
Panayiotopoulos syndrome
In Panayiotopoulos syndrome (PS), seizure semiology involves prominent autonomic symptoms. At onset, nausea, retching and vomiting are characteristic, often accompanied by other autonomic symptoms, such as pupillary changes, pallor/flushing, alterations in heart rate, breathing irregularities and temperature instability.13 Syncope-like episodes may occur. Impairment of consciousness develops as the seizure progresses, often accompanied by eye and head deviation. Seizures may end in hemi- or generalised convulsions. Two-thirds of seizures occur during sleep. Seizures are often prolonged, most lasting over 10 min and many over 30 min (autonomic, non-convulsive status epilepticus). Potentially life-threatening cardiorespiratory arrest has been described in PS. As in RE, cognitive and behavioural difficulties have been associated with PS. Prognosis for remission of seizures is excellent.
When treated, carbamazepine and lamotrigine are recommended as first-line monotherapy by NICE (see table 1) and others,6 17 26 although the evidence base is acknowledged to be poor,27–29 and there are theoretical risks of carbamazepine exacerbating the seizures or EEG abnormality30 31 and also impairing speech production.32 International practice beyond the UK diverges widely,29 33 as does expert opinion on the subject.34 Sulthiame is considered first-line in Germany, Austria and Israel,35 36 although it may have adverse effects on cognition;37 sodium valproate is preferred in France, while levetiracetam is popular in the USA. Thus, the rationale for treatment versus non-treatment is not established, and the evidence base in favour of specific AEDs is unknown amid widespread national and international variation in practice. This equipoise scenario sets the stage for an RCT, for which it would be important to learn the treatment preferences of physicians likely to recruit to such a trial, in order to assess various aspects of design feasibility. The purpose of this paper is, therefore, (1) to examine current clinical practice for RE and PS in relation to NICE guidance, specifically asking what proportion of patients are routinely not treated, and (2) to explore the feasibility of alternative syndrome-specific RCT designs for RE and PS. We specifically address the questions of clinicians’ attitudes towards placebo-controlled designs, and which AEDs would be either preferred or unpopular comparators.
Table 1. NICE recommendations for management of RE/PS (rolandic epilepsy/Panayiotopoulos syndrome). AEDs, antiepileptic drugs; BECTS, Benign Epilepsy with CentroTemporal Spikes; NICE, National Institute of Health and Care Excellence.
Methods
Survey
We designed a questionnaire targeting UK paediatricians with clinical responsibility for epilepsy (see online supplementary appendix 1). Thirty-two questions covered areas specifically related to RE and PS: (1) annual caseload of new patients; (2) investigation and management practice; (3) factors influencing AED treatment versus no treatment; (4) AED preferences between ‘older’ (before 1980) and ‘newer’ drugs; (5) theoretical RCT design and outcome preferences. Preference items were scored on a 5-point Likert scale. We piloted a paper version among five paediatric epilepsy specialists and then created a modified online, forced-choice version using SurveyMonkey.
Participants
We distributed the online survey to the 282 Epilepsy-12 audit leads8 and to 308 members of the British Paediatric Neurology Association between November 2012 and December 2012. Respondents could only reply once; this was checked by names and internet protocol addresses. We sent weekly email reminders before closing the weblink after 4 weeks. This professional survey was deemed exempt from ethics approval.
Analysis
Results were available both as summary data and as raw response files. We edited out inconsistent responses using the raw data. We summarised and visualised results using a spreadsheet tool.
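The analysis described above amounts to simple tabulation of forced-choice and Likert responses; a minimal Python sketch of how such summaries might be produced is shown below. The file and column names (survey_responses.csv, cbz_preference, re_new_per_month) are hypothetical stand-ins, and the monthly-to-annual scaling only mirrors the kind of caseload estimate reported in the Results, not the authors' spreadsheet.

```python
# Minimal sketch of the descriptive tabulation implied above (the authors
# used a spreadsheet tool; column names here are hypothetical).
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical survey export

# 5-point Likert preference item: share of respondents per response level.
print(df["cbz_preference"].value_counts(normalize=True).sort_index())

# Annual national caseload estimate: sum each respondent's reported monthly
# count of new diagnoses and scale to 12 months.
print("estimated new RE cases/year:", int(df["re_new_per_month"].sum() * 12))
```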
Results
 Characteristics of respondents There were 132 respondents in total from the 590 individuals contacted with valid email addresses: a response rate of 39% among Epilepsy-12 audit leads and 22% overall. With weekly email prompts, 74 responded in the first week, another 34 in the second week and the remainder over the following fortnight. Fifty-three percent of respondents were general paediatricians with special expertise in epilepsy; 17% were general, community or neurodisability paediatricians; 19% were paediatric neurologists. In terms of seniority, 95% were consultants and 5% associate specialists. Fifty-three percent of respondents were men. By age, 11% of respondents were under 40 years, 54% were 41–50 years and 35% over 50 years. Caseload The majority (90%) of clinicians reported diagnosing six or fewer new RE and six or fewer new PS cases annually; by summating monthly individual caseloads, we estimated an annual total of 751 new RE cases and 233 new PS cases from the 132 UK respondents. Investigation EEG is the most often requested investigation (figure 1). Most clinicians only infrequently request CT or MRI brain scans. Only 7%–8% of clinicians request neuropsychological assessments in RE or PS. Figure 1: Use of investigations in rolandic epilepsy (RE) and Panayiotopoulos Syndrome (PS) expressed as percentage of respondents: electroencephalography (EEG); brain MRI; neuropsychological assessment (NP). Treatment Forty percent of RE and PS cases are never treated with regular AEDs. Both seizure frequency/severity and parental/child preference were judged important factors influencing the decision not to prescribe AEDs (figure 2). Figure 2: Factors rated as quite or very important in influencing a no-treatment decision in rolandic epilepsy, expressed as percentage of respondents (data for Panayiotopoulos Syndrome very similar). AED preference in RCTs AED preferences were almost identical for RE and PS (figure 3). Carbamazepine was overwhelmingly rated as the most preferred active comparator for an RCT and also the preferred ‘older’ AED; sodium valproate was the second most preferred; no other AED was indicated as first choice by >10% of respondents. Among the newer AEDs, respondents indicated a slight preference for levetiracetam over lamotrigine as active comparator, with no other first-line choice exceeding 10%. About 10% of respondents would object to ethosuximide, benzodiazepines, phenobarbital or phenytoin as an active treatment arm. Figure 3: Clinicians’ preferred choices of (A) all, (B) older and (C) newer treatments. STM, sulthiame; OX, oxcarbazepine; LEV, levetiracetam; LTG, lamotrigine; SV, sodium valproate; CBZ, carbamazepine; TPM, topiramate. Attitudes to clinical trial design In RE, just over one-half of respondents (55%) would recruit to an RCT comparing two active treatments, 48%–49% to an active versus no-active-treatment design and 41% to active versus placebo, regardless of the preceding number of seizures (<3 or ≥3 in 6 months). In PS, there was a greater preference for a two-active-drug design (59%) than for a trial with no active treatment (44%) or placebo (38%). The great majority (82% for RE and 79% for PS) chose seizure remission as the preferred primary outcome, with close to two-thirds (73% for RE and 60% for PS) choosing cognitive or broad quality-of-life measures as a secondary outcome. 
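 The annual caseload estimate above was obtained by summating respondents’ reported monthly new cases over a year. The sketch below restates that arithmetic; the per-respondent monthly rates are hypothetical uniform values, chosen only so the totals land near the reported estimates of 751 new RE and 233 new PS cases across 132 respondents.

# Sketch of the caseload annualisation arithmetic. The monthly rates are
# hypothetical uniform values, not the survey's actual per-respondent data.
n_respondents = 132
monthly_re = [0.47] * n_respondents  # hypothetical: ~0.47 new RE cases/month each
monthly_ps = [0.15] * n_respondents  # hypothetical: ~0.15 new PS cases/month each

annual_re = sum(rate * 12 for rate in monthly_re)  # ~744, near the reported 751
annual_ps = sum(rate * 12 for rate in monthly_ps)  # ~238, near the reported 233
print(round(annual_re), round(annual_ps))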
 
 Discussion This is the first physician survey regarding the common epilepsy syndromes of childhood, RE and PS. The results indicate, first, that the pattern of investigations requested for patients with these syndromes appears broadly appropriate and in line with NICE guidance, although with a few unexplained observations. Second, a large proportion of children with RE and PS remains untreated, reportedly influenced by a number of medical and social factors. Third, clinicians seem amenable to both placebo-controlled and head-to-head RCTs to address the question of drug superiority; they chose conventional primary seizure outcomes and secondary cognitive/behavioural outcomes. Carbamazepine would be the preferred older AED in a hypothetical RCT and levetiracetam, narrowly, the preferred newer AED. Inclusion of older AEDs, such as ethosuximide, phenobarbital, phenytoin and benzodiazepines, would deter recruitment among a significant minority. Last, response data support the existence of a sufficiently large sampling frame of new-onset RE and PS cases to construct an RCT in the UK. The overall pattern of reported investigations appears broadly appropriate aside from some unexplained observations. For RE and PS, NICE and SIGN recommend EEG at diagnosis, with follow-up ambulatory, sleep or video EEG if needed; MRI brain scans are not recommended as first-line investigations because of the non-lesional nature of these epilepsies. 
 In our survey, 70% of clinicians reported using EEG as a first-line investigation at least half the time (figure 1); those who do not may lack ready access to it: Epilepsy12 suggests that over 40% do not have local access to these facilities.8 Whatever the explanation, it is debatable whether a diagnosis of epilepsy, especially of a ‘benign’ syndrome, should be made without the benefit of EEG. Brain MRI was requested in approximately 25% of RE cases and 50% of PS cases. While the clinical history usually obviates the need for brain imaging in RE (especially in the presence of a suggestive EEG), and brain imaging in RE can generate incidental findings,38 variable clinical features can make an early diagnosis more challenging in PS and might explain the higher frequency of imaging requests.39 Nevertheless, the overall use of MRI appears unusually high for these non-lesional syndromes and is worthy of further investigation. The very low use of neuropsychological assessments (<10%) is striking given the well-documented occurrence of language, literacy and attentional comorbidities in RE23 24 and their impact on educational achievement and quality of life.5 Although this topic is not covered by professional guidelines, it merits investigation as it could reflect correctable factors: poor local access to assessment services, low awareness of the high prevalence of treatable comorbidities, or referral to alternative services such as a school educational psychologist. A large proportion of patients with RE and PS are not routinely treated with AEDs. Survey respondents indicated that low seizure frequency, parental and child preference, low seizure severity and nocturnal seizure predominance were the most important factors influencing a policy of no treatment (figure 2). These same factors have been cited as reasons not to treat.14–17 19 However, we know neither whether untreated patients are missing important benefits of treatment, nor whether treated patients are needlessly suffering adverse effects. For example, we do not know whether regular AED treatment overall mitigates or exacerbates the frequent cognitive and attentional comorbidities. Suggestions that interictal focal EEG discharges lead to transitory regional cognitive dysfunction20 have now been augmented by experimental evidence in the rat model that interictal EEG discharges can impair long-term learning.21 Meanwhile, emerging data suggest that interictal spikes, even in benign focal epilepsies, can impair memory consolidation in sleep.22 On the other hand, there is also evidence that carbamazepine can worsen speech production32 and that sulthiame may worsen cognitive function in RE.37 Quality of life appears to suffer regardless of seizure number, suggesting that clinical severity correlates only loosely with child and family impact.18 The no-treatment practice also does not take into account the costs associated with emergency room attendance and hospital admission for unprevented seizures. These various uncertainties point to the necessity for any RCT to address the rationale for non-treatment. 
 Such a trial could also be designed to evaluate the superiority of active comparators. The evidence base for treatment choice in both RE and PS is acknowledged to be poor,9 with only three relevant Class III studies (eg, open-label) suggesting that carbamazepine and sodium valproate are ‘possibly’ effective, and levetiracetam, oxcarbazepine, gabapentin and sulthiame ‘potentially’ effective, as initial monotherapy.27–29 40 There are many other Class IV (ie, observational) studies that cannot be used to support guidelines.28 37 41–43 Among active comparators in the survey, carbamazepine is clearly the referent first-line AED for RE/PS trials (figure 3A and B). Despite concerns about carbamazepine worsening electroclinical features and triggering continuous spikes in slow-wave sleep,30 only 6%–8% of respondents indicated that its inclusion would deter recruitment of patients to an RCT. Critics feel that carbamazepine has not been properly evaluated in childhood epilepsy: the Standard and New Antiepileptic Drugs (SANAD) trial (upon which NICE guidelines are based) included few patients with RE/PS (n=24) and mixed together focal seizures of diverse epilepsy syndromes, for example, temporal lobe epilepsy, possibly masking pertinent data.10 11 Although lamotrigine (along with carbamazepine) is considered by NICE a first-line and sodium valproate a second-line agent, we observe the reverse preferences in our survey (figure 3A). Looking only at newer AEDs, levetiracetam emerges as the first choice, ahead of lamotrigine (figure 3C). Neither topiramate, oxcarbazepine nor sulthiame (not licensed in the UK) was commonly chosen as a possible comparator; much older AEDs, for example, phenobarbital, phenytoin, ethosuximide and benzodiazepines, were not acceptable to about 10%, and there is little scientific rationale to include them in a future trial in developed countries. Since older AEDs (carbamazepine and sodium valproate here) are known to have comparable efficacy in childhood,44 there would only be merit in comparing the referent AED with newer AED(s). Levetiracetam is the obvious newer comparator AED because: (1) it has not been adequately evaluated as monotherapy in the paediatric population despite widespread off-license use;35 45 (2) it is the most favoured newer AED in this survey; (3) it has come off-patent since the time of the European consensus statement and will therefore be an economically viable comparator to older AEDs;46 (4) lamotrigine monotherapy has already been compared with carbamazepine47 and sodium valproate.48 However, outside the UK, clinicians may prefer alternative comparators: European ‘expert opinion’ considers sodium valproate the drug of choice for RE and carbamazepine only ‘sometimes appropriate’.34 German-speaking countries, Japan and Israel favour sulthiame,33 36 although concerns remain over its cognitive adverse effects.37 While non-treatment is common practice, and we conclude that an RCT is needed to determine whether this is justified, would clinicians countenance a trial design with no active arm? The survey results are encouraging: approximately one-half of respondents would recruit to a head-to-head, an active versus non-active or a placebo-controlled RCT in RE. Respondents were equally prepared to randomise after either fewer than three, or three or more, seizures in 6 months. However, there was a more pronounced preference towards active comparators in PS. 
 Thus, non-active or placebo-controlled designs can be considered feasible options in trial design. A cross-sectional survey has several limitations and is only a proxy for actual observation of practice. The response rate was relatively low, and the results may reflect selection bias towards respondents who conform more closely to norms and guidelines, giving a more positive impression of management. Clinicians who see many patients may also be more inclined to respond, inflating the estimates of caseload. Clinicians motivated to complete surveys may also, as a group, be more amenable towards RCTs, thus overestimating true potential recruitment. This survey principally targeted general paediatricians who have primary responsibility for childhood epilepsy in their district or audit zone, all of whom were involved in the recent Epilepsy12 audit. Since this is the group that has primary contact with children with RE or PS, they would be the clinicians likely to recruit such children into an RCT and, as such, their opinions are important for determining feasibility. The response rate among the general paediatrician group was 39%, and even if we only double the number of reported cases to estimate the total number seen within audit zones, there might be 1500 RE and 500 PS ascertainable annual cases. This is a higher number than in the Epilepsy12 audit, which estimated 340 annual RE cases.8 The discrepancy between the figures from the two studies might be due to the low rate of syndromic diagnoses made by clinicians participating in Epilepsy12, or to over-reporting of local caseloads in our survey, for example, if there was shared care of the same patient between a paediatrician and a paediatric neurologist. Approximately half the respondents indicated a willingness to recruit to an RCT, still yielding a sufficient sampling frame to consider further planning. Overall, the question of whether or not to treat children with non-lesional focal epilepsy syndromes like RE and PS remains unanswered. Furthermore, supporting evidence to validate the use of specific AEDs for either RE or PS is limited, despite widespread variation in practice. Encouragingly, this survey demonstrates: (1) there is a sufficient UK network of clinicians who see a large number of these patients; (2) these clinicians are willing to resolve these issues of treatment and non-treatment uncertainty through an RCT; (3) a non-active or placebo arm would be acceptable, and carbamazepine and levetiracetam would be the preferred (and most informative) active comparators. We suggest further discussion and planning of trial design features, such as the combining of RE and PS, placebo control, active arms, blinding, outcome measures and logistics, within a larger national or international multidisciplinary study group. 
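 The sampling-frame reasoning in the limitations paragraph (double the reported caseload to allow for non-responding audit zones, then assume roughly half of clinicians would recruit) reduces to simple arithmetic. The sketch below restates it; the 50% willingness figure is the survey's approximate finding, and the rounding to 1500/500 follows the text.

# Sketch of the sampling-frame arithmetic from the limitations paragraph;
# purely a restatement of figures reported in the text, not new data.
reported_re, reported_ps = 751, 233   # annual cases reported by respondents

ascertainable_re = 2 * reported_re    # 1502, rounded to ~1500 in the text
ascertainable_ps = 2 * reported_ps    # 466, rounded to ~500 in the text

willing = 0.5                         # ~half of respondents would recruit
print(ascertainable_re * willing, ascertainable_ps * willing)  # 751.0 233.0
```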
[ "intro", "methods", null, null, null, "results", null, null, null, null, null, null, "discussion", "supplementary-material" ]
[ "Neurology", "Paediatric Practice" ]
Introduction: Epilepsy affects 63 400 young people under 18 years of age in the UK.1 Seizures represent one of the top five avoidable reasons for admission of children to emergency departments in the UK.2 Aside from seizures, cognitive and behavioural comorbidities cause a substantial impact affecting about two-thirds of children with epilepsy.3 Indeed, the comorbidity-associated burden may outweigh that of the seizures themselves4 and in some epilepsies, comorbidities are stronger predictors of quality-of-life than seizures.5 Hence, the comprehensive management of epilepsy necessitates the recognition and management of individual epilepsy syndromes and their specific comorbidities. There are almost 40 electroclinical epilepsy syndromes defined by constellations of seizure type(s), age of onset, electroencephalography (EEG) and clinical features, each requiring individual assessment and management. The evidence base for management of childhood epilepsies in the UK is detailed in the National Institute of Health and Care Excellence (NICE) Guidelines6 and by its Scottish equivalent, the Scottish Intercollegiate Guidelines Network (SIGN).7 This guidance provides a care standard framework against which a recent national ‘Epilepsy12’ audit of childhood epilepsy care was conducted.8 However, the evidence base for antiepileptic drug treatment in these guidelines remains poor for many common childhood epilepsies, consisting of only a few high-quality randomised controlled trials (RCT).9 Much of the evidence available to NICE is extrapolated from heterogeneous studies of seizures mixing epilepsy types and age groups,10 with uncertain applicability to well-defined childhood epilepsy syndromes.11 Furthermore, while there is now an emerging idea of how childhood epilepsy as a whole is managed in the UK,8 there is little detail about the variation in management of specific epilepsy syndromes and paediatricians’ familiarity with them. Hence, a more detailed survey of practice is justified. In the absence of solid RCT evidence, there is widespread historical and geographical variation in recommendations for the drug management of specific epilepsies. In childhood, rolandic epilepsy (RE, sometimes known as Benign Epilepsy with CentroTemporal Spikes or BECTS), a focal epilepsy, is the most common syndrome, estimated to constitute 8%–25% of all childhood epilepsies12 and diagnosed in 9% of children with epilepsy in the recent Epilepsy12 audit.8 RE, along with the closely related Panayiotopoulos syndrome (PS),13 are part of the group of non-lesional focal epilepsies (box 1). Textbooks and older ‘expert opinion’ often advised conservative management of RE, that is, with no antiepileptic drugs (AED),14–17 but neither the extent to which this conservative management is followed, nor its scientific rationale,14 18 19 are known. Moreover, recent experimental evidence suggests that focal EEG spikes may disrupt simultaneous regional brain function20 and might also impair long-term learning21 and memory consolidation in sleep,22 prompting a re-evaluation of the ‘benign’ nature of this group of epilepsies. Box 1Clinical Description of RE and PSRolandic epilepsyRolandic epilepsy (RE), also known as Benign Epilepsy of Childhood with Centro-Temporal Spikes (BECTS), is the most common epilepsy in childhood, with an incidence of up to 21 per 100 000 children aged 15 years and under.12 The onset of seizures is between 3 and 12 years, and remission almost always occurs by adolescence. 
Seizures typically occur during sleep or drowsiness, are brief and involve unilateral sensorimotor symptoms (eg, numbness, tingling, drooling) of the pharynx, tongue, face, lips and sometimes hand. The affected side may alternate and seizures may infrequently become secondarily generalised. Neurodevelopmental disorders are very common (40%) and include speech sound disorder, language impairment and reading disability, all of which usually precede seizures and aggregate among relatives.23 24 Attention deficit hyperactivity disorder (ADHD) and migraine without aura are also strong but less frequent (10%) associations.25Panayiotopoulos syndromeIn Panayiotopoulos syndrome (PS), seizure semiology involves prominent autonomic symptoms. At onset, nausea, retching and vomiting are characteristic, often accompanied by other autonomic symptoms, such as pupillary changes, pallor/flushing, alterations in heart rate, breathing irregularities and temperature instability.13 Syncope-like episodes may occur. Impairment of consciousness develops as the seizure progresses, often accompanied by eye and head deviation. Seizures may end in hemi or generalised convulsions. Two-thirds of seizures occur during sleep. Seizures are often prolonged, most lasting over 10 min and many over 30 min (autonomic, non-convulsive status epilepticus). Potentially life-threatening cardiorespiratory arrest has been described in PS. As in RE, cognitive and behavioural difficulties have been associated with PS. Prognosis for remission of seizures is excellent. Rolandic epilepsy Rolandic epilepsy (RE), also known as Benign Epilepsy of Childhood with Centro-Temporal Spikes (BECTS), is the most common epilepsy in childhood, with an incidence of up to 21 per 100 000 children aged 15 years and under.12 The onset of seizures is between 3 and 12 years, and remission almost always occurs by adolescence. Seizures typically occur during sleep or drowsiness, are brief and involve unilateral sensorimotor symptoms (eg, numbness, tingling, drooling) of the pharynx, tongue, face, lips and sometimes hand. The affected side may alternate and seizures may infrequently become secondarily generalised. Neurodevelopmental disorders are very common (40%) and include speech sound disorder, language impairment and reading disability, all of which usually precede seizures and aggregate among relatives.23 24 Attention deficit hyperactivity disorder (ADHD) and migraine without aura are also strong but less frequent (10%) associations.25 Panayiotopoulos syndrome In Panayiotopoulos syndrome (PS), seizure semiology involves prominent autonomic symptoms. At onset, nausea, retching and vomiting are characteristic, often accompanied by other autonomic symptoms, such as pupillary changes, pallor/flushing, alterations in heart rate, breathing irregularities and temperature instability.13 Syncope-like episodes may occur. Impairment of consciousness develops as the seizure progresses, often accompanied by eye and head deviation. Seizures may end in hemi or generalised convulsions. Two-thirds of seizures occur during sleep. Seizures are often prolonged, most lasting over 10 min and many over 30 min (autonomic, non-convulsive status epilepticus). Potentially life-threatening cardiorespiratory arrest has been described in PS. As in RE, cognitive and behavioural difficulties have been associated with PS. Prognosis for remission of seizures is excellent. 
When treated, carbamazepine and lamotrigine are recommended as first-line monotherapy by NICE (see table 1) and others,6 17 26 although the evidence base is acknowledged to be poor,27–29 and there are theoretical risks of carbamazepine exacerbating the seizures or EEG abnormality30 31 and also speech production.32 International practice beyond the UK diverges widely,29 33 as does expert opinion on the subject.34 Sulthiame is considered first-line in Germany, Austria and Israel,35 36 although it may have adverse effects on cognition;37 sodium valproate in France; while levetiracetam is popular in the USA. Thus, the rationale for treatment versus non-treatment is not established and the evidence base in favour of specific AEDs is unknown amid widespread national and international variation in practice. This equipoise scenario sets the stage for an RCT, for which it would be important to learn the treatment preferences of physicians likely to recruit to such a trial, in order to assess various aspects of design feasibility. The purpose of this paper is, therefore, (1) to examine current clinical practice for RE and PS in relation to NICE guidance, specifically asking what proportion of patients are routinely not treated and (2) to explore the feasibility of alternative syndrome-specific RCT designs for RE and PS. We specifically address the questions of clinicians’ attitudes towards placebo-controlled designs, and which AEDs would be either preferred or unpopular comparators. NICE recommendations for management of RE/PS (rolandic epilepsy/Panayiotopoulos Syndrome) AEDs, antiepileptic drugs; BECTS, Benign Epilepsy with CentroTemporal Spikes; NICE, National Institute of Health and Care Excellence. Methods: Survey We designed a questionnaire targeting UK paediatricians with clinical responsibility for epilepsy (see online supplementary appendix 1). Thirty-two questions covered areas specifically related to RE and PS: (1) annual caseload of new patients; (2) investigation and management practice; (3) factors influencing AED treatment versus no treatment; (4) AED preferences between ‘older’ (before 1980) and ‘newer’ drugs; (5) theoretical RCT design and outcome preferences. Preference items were scored on a 5-point Likert scale. We piloted a paper version among five paediatric epilepsy specialists and then created a modified online, forced-choice version using SurveyMonkey. We designed a questionnaire targeting UK paediatricians with clinical responsibility for epilepsy (see online supplementary appendix 1). Thirty-two questions covered areas specifically related to RE and PS: (1) annual caseload of new patients; (2) investigation and management practice; (3) factors influencing AED treatment versus no treatment; (4) AED preferences between ‘older’ (before 1980) and ‘newer’ drugs; (5) theoretical RCT design and outcome preferences. Preference items were scored on a 5-point Likert scale. We piloted a paper version among five paediatric epilepsy specialists and then created a modified online, forced-choice version using SurveyMonkey. Participants We distributed the online survey to the 282 Epilepsy-12 audit leads8 and to 308 members of the British Paediatric Neurology Association between November 2012 and December 2012. Respondents could only reply once and this was checked by names and internet protocol addresses. We sent weekly email reminders before closing the weblink after 4 weeks. This professional survey was deemed exempt from ethics approval. 
We distributed the online survey to the 282 Epilepsy-12 audit leads8 and to 308 members of the British Paediatric Neurology Association between November 2012 and December 2012. Respondents could only reply once and this was checked by names and internet protocol addresses. We sent weekly email reminders before closing the weblink after 4 weeks. This professional survey was deemed exempt from ethics approval. Analysis Results were available both as summary data and as raw response files. We edited out inconsistent responses using the raw data. We summarised and visualised results using a spreadsheet tool. Results were available both as summary data and as raw response files. We edited out inconsistent responses using the raw data. We summarised and visualised results using a spreadsheet tool. Survey: We designed a questionnaire targeting UK paediatricians with clinical responsibility for epilepsy (see online supplementary appendix 1). Thirty-two questions covered areas specifically related to RE and PS: (1) annual caseload of new patients; (2) investigation and management practice; (3) factors influencing AED treatment versus no treatment; (4) AED preferences between ‘older’ (before 1980) and ‘newer’ drugs; (5) theoretical RCT design and outcome preferences. Preference items were scored on a 5-point Likert scale. We piloted a paper version among five paediatric epilepsy specialists and then created a modified online, forced-choice version using SurveyMonkey. Participants: We distributed the online survey to the 282 Epilepsy-12 audit leads8 and to 308 members of the British Paediatric Neurology Association between November 2012 and December 2012. Respondents could only reply once and this was checked by names and internet protocol addresses. We sent weekly email reminders before closing the weblink after 4 weeks. This professional survey was deemed exempt from ethics approval. Analysis: Results were available both as summary data and as raw response files. We edited out inconsistent responses using the raw data. We summarised and visualised results using a spreadsheet tool. Results: Characteristics of respondents There was a total of 132 respondents from the 590 individuals contacted with valid email addresses, a response rate of 39% from Epilepsy-12 audit leads and 22% overall. With weekly email prompts, 74 responded in the first week, another 34 in the second week and the remainder responded over the following fortnight. Fifty-three percent of respondents were general paediatricians with special expertise in epilepsy; 17% were general, community or neurodisability paediatricians; 19% were paediatric neurologists. In terms of seniority, 95% were consultants; 5% associate specialists. Fifty-three percent of respondents were men. Age-wise, 11% of respondents were under 40 years; 54% were 41–50 years; 35% over 50 years. There was a total of 132 respondents from the 590 individuals contacted with valid email addresses, a response rate of 39% from Epilepsy-12 audit leads and 22% overall. With weekly email prompts, 74 responded in the first week, another 34 in the second week and the remainder responded over the following fortnight. Fifty-three percent of respondents were general paediatricians with special expertise in epilepsy; 17% were general, community or neurodisability paediatricians; 19% were paediatric neurologists. In terms of seniority, 95% were consultants; 5% associate specialists. Fifty-three percent of respondents were men. 
Caseload: The majority (90%) of clinicians reported diagnosing six or fewer new RE and six or fewer new PS cases annually; by summating monthly individual caseloads, we estimated an annual total of 751 new RE cases and 233 new PS cases from the 132 UK respondents.

Investigation: EEG is the most often requested investigation (figure 1). Most clinicians only infrequently request CT or MRI brain scans. Only 7%–8% of clinicians request neuropsychological assessments in RE or PS.

Figure 1 (caption): Use of investigations in rolandic epilepsy (RE) and Panayiotopoulos Syndrome (PS), expressed as percentage of respondents: electroencephalography (EEG); brain MRI; neuropsychological assessment (NP).

Treatment: Forty percent of RE and PS cases are never treated with regular AEDs. Both seizure frequency/severity and parental/child preference were judged important factors influencing the decision not to prescribe an AED (figure 2).

Figure 2 (caption): Factors rated as quite or very important in influencing a no-treatment decision in rolandic epilepsy, expressed as percentage of respondents (data for Panayiotopoulos Syndrome very similar).

AED preference in RCTs: AED preferences were almost identical for RE and PS (figure 3). Carbamazepine was overwhelmingly rated the most preferred active comparator for an RCT and also the preferred ‘older’ AED; sodium valproate was the second most preferred; no other AED was indicated as first choice by >10% of respondents. Among the newer AEDs, respondents indicated a slight preference for levetiracetam over lamotrigine as active comparator, with no other first-line choice reaching 10%. About 10% of respondents would object to ethosuximide, benzodiazepines, phenobarbital or phenytoin as an active treatment arm.

Figure 3 (caption): Clinicians’ preferred choices of (A) all, (B) older and (C) newer treatments. STM, sulthiame; OX, oxcarbazepine; LEV, levetiracetam; LTG, lamotrigine; SV, sodium valproate; CBZ, carbamazepine; TPM, topiramate.
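For concreteness, the first-choice tallies behind figure 3 can be reproduced from raw forced-choice responses with a few lines of code. The sketch below is illustrative only; the response list and drug codes are hypothetical, not the survey data.

```python
# Sketch: tallying first-choice AED preferences from forced-choice
# responses, as summarised in figure 3. The records are hypothetical.
from collections import Counter

first_choices = [  # one hypothetical first-choice answer per respondent
    "CBZ", "CBZ", "SV", "LEV", "CBZ", "LTG", "SV", "CBZ", "LEV", "CBZ",
]

counts = Counter(first_choices)
total = len(first_choices)
for drug, n in counts.most_common():
    # The paper's implicit threshold: a 'popular first-line choice' is one
    # selected first by more than 10% of respondents.
    flag = " (popular first-line choice)" if n / total > 0.10 else ""
    print(f"{drug}: {n}/{total} ({100 * n / total:.0f}%){flag}")
```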
Attitudes to clinical trial design: In RE, just over one-half of respondents (55%) would recruit to an RCT comparing two active treatments, 48%–49% to an active versus no-active-treatment design and 41% to active versus placebo, regardless of the preceding number of seizures (<3 or ≥3 in 6 months). In PS, there was a greater preference for a two-active-drug design (59%) than for a trial with no active treatment (44%) or placebo (38%). The great majority (82% for RE and 79% for PS) chose seizure remission as the preferred primary outcome, and close to two-thirds (73% for RE and 60% for PS) chose cognitive or broad quality of life measures as a secondary outcome.
Discussion

This is the first physician survey regarding the common childhood epilepsy syndromes RE and PS. The results indicate, first, that the pattern of investigations requested for patients with these syndromes appears broadly appropriate and in line with NICE guidance, although with a few unexplained observations. Second, a large proportion of children with RE and PS remains untreated, reportedly influenced by a number of medical and social factors. Third, clinicians seem amenable to both placebo-controlled and head-to-head RCTs to address the question of drug superiority; they chose conventional primary seizure outcomes and secondary cognitive/behavioural outcomes. Carbamazepine would be the preferred older AED in a hypothetical RCT and levetiracetam, narrowly, the preferred newer AED. Inclusion of older AEDs such as ethosuximide, phenobarbital, phenytoin and benzodiazepines would deter recruitment among a significant minority. Last, the response data support the existence of a sufficiently large sampling frame of new-onset RE and PS cases to construct an RCT in the UK. The overall pattern of reported investigations appears broadly appropriate aside from some unexplained observations. For RE and PS, NICE and SIGN recommend EEG at diagnosis, with follow-up ambulatory, sleep or video EEG if needed; MRI brain scans are not recommended as first-line investigations because of the non-lesional nature of these epilepsies.
In our survey, 70% of clinicians reported using EEG as a first-line investigation at least half the time (figure 1); those who do not may lack ready access to it: Epilepsy12 suggests that over 40% do not have local access to these facilities.8 Whatever the explanation, it is debatable whether a diagnosis of epilepsy, especially of a ‘benign’ syndrome, should be made without the benefit of EEG. Brain MRI was requested in approximately 25% of RE cases and 50% of PS cases. While the clinical history usually obviates the need for brain imaging in RE (especially in the presence of a suggestive EEG), and brain imaging in RE can generate incidental findings,38 variable clinical features can make an early diagnosis more challenging in PS and might explain the higher frequency of imaging requests.39 Nevertheless, the overall use of MRI appears unusually high for these non-lesional syndromes and is worthy of further investigation. The very low use of neuropsychological assessments (<10%) is striking given the well-documented occurrence of language, literacy and attentional comorbidities in RE23 24 and their impact on educational achievement and quality of life.5 Although this topic is not covered by professional guidelines, it merits investigation because it could reflect correctable factors: poor local access to assessment services, low awareness of the high prevalence of treatable comorbidities, or referral to alternative services such as a school educational psychologist. A large proportion of patients with RE and PS is not routinely treated with AEDs. Survey respondents indicated that low seizure frequency, parental and child preference, low seizure severity and nocturnal seizure predominance were the most important factors influencing a policy of no treatment (figure 2). These same factors have been cited elsewhere as reasons not to treat.14–17 19 However, we know neither whether untreated patients are missing important benefits of treatment, nor whether treated patients are needlessly suffering adverse effects. For example, we do not know whether regular AED treatment overall mitigates or exacerbates the frequent cognitive and attentional comorbidities. Suggestions that interictal focal EEG discharges lead to transitory regional cognitive dysfunction20 have been augmented by experimental evidence in the rat model that interictal EEG discharges can impair long-term learning,21 while emerging data suggest that interictal spikes, even in benign focal epilepsies, can impair memory consolidation in sleep.22 On the other hand, there is also evidence that carbamazepine can worsen speech production32 and that sulthiame may worsen cognitive function in RE.37 Quality of life appears to suffer regardless of seizure number, suggesting that clinical severity correlates only loosely with child and family impact.18 A no-treatment policy also does not take into account the costs associated with emergency room attendance and hospital admission for unprevented seizures. These various uncertainties point to the necessity for any RCT to address the rationale for non-treatment. Such a trial could also be designed to evaluate the superiority of active comparators.
The evidence base for treatment choice in both RE and PS is acknowledged to be poor,9 with only three relevant Class III studies (eg, open-label) suggesting that carbamazepine and sodium valproate are ‘possibly’ effective, and levetiracetam, oxcarbazepine, gabapentin and sulthiame ‘potentially’ effective as initial monotherapy.27–29 40 There are many other Class IV studies (ie, observational) that cannot be used to support guidelines.28 37 41–43 Among active comparators in the survey, carbamazepine is clearly the referent first-line AED for RE/PS trials (figure 3A and B). Despite concerns about carbamazepine worsening electroclinical features and triggering continuous spikes in slow-wave sleep,30 only 6%–8% of respondents indicated that its inclusion would deter recruitment of patients to an RCT. Critics feel that carbamazepine has not been properly evaluated in childhood epilepsy: the Standard and New Antiepileptic Drugs (SANAD) trial (upon which NICE guidelines are based) included few patients with RE/PS (n=24) and pooled focal seizures of diverse epilepsy syndromes, for example, temporal lobe epilepsy, possibly masking pertinent data.10 11 Although lamotrigine (along with carbamazepine) is considered by NICE a first-line and sodium valproate a second-line agent, we observe the reverse preferences in our survey (figure 3A). Looking only at newer AEDs, levetiracetam emerges as the first choice, ahead of lamotrigine (figure 3C). Neither topiramate, oxcarbazepine nor sulthiame (not licensed in the UK) was commonly chosen as a possible comparator; much older AEDs, for example, phenobarbital, phenytoin, ethosuximide and benzodiazepines, were not acceptable to about 10%, and there is little scientific rationale to include them in a future trial in developed countries. Since the older AEDs (carbamazepine and sodium valproate here) are known to have comparable efficacy in childhood,44 there would be merit only in comparing the referent AED with newer AED(s). Levetiracetam is the obvious newer comparator AED because: (1) it has not been adequately evaluated as monotherapy in the paediatric population despite widespread off-license use;35 45 (2) it is the most favoured newer AED in this survey; (3) it has come off-patent since the time of the European consensus statement and will therefore be an economically viable comparator to older AEDs;46 (4) lamotrigine monotherapy has already been compared with carbamazepine47 and sodium valproate.48 However, outside the UK, clinicians may prefer alternative comparators: European ‘expert opinion’ considers sodium valproate the drug of choice for RE and carbamazepine only ‘sometimes appropriate’.34 German-speaking countries, Japan and Israel favour sulthiame,33 36 although concerns remain over its cognitive adverse effects.37 While non-treatment is common practice, and we conclude that an RCT is needed to determine whether this is justified, would clinicians countenance a trial design with no active arm? The survey results are encouraging: approximately one-half of respondents would recruit to a head-to-head, active versus non-active or placebo-controlled RCT in RE. Respondents were equally prepared to randomise after either fewer than, or more than, two seizures in 6 months. However, there was a more pronounced preference towards active comparators in PS. Thus, non-active or placebo-controlled designs can be considered feasible options in trial design.
A cross-sectional survey has several limitations and is only a proxy for direct observation of practice. The response rate was relatively low, and the results may reflect selection bias towards respondents who conform more closely to norms and guidelines, giving a more positive impression of management. Clinicians who see many patients may also be more inclined to respond, inflating the estimates of caseload. Clinicians motivated to complete surveys may also, as a group, be more amenable towards RCTs, thus overestimating true potential recruitment. This survey principally targeted general paediatricians who have primary responsibility for childhood epilepsy in their district or audit zone, all of whom were involved in the recent Epilepsy12 audit. Since this is the group that has primary contact with children with RE or PS, they would be the clinicians likely to recruit such children into an RCT and, as such, their opinions are important for determining feasibility. The response rate among the general paediatrician group was 39%, and even if we only double the number of reported cases to estimate the total number seen within audit zones, there might be 1500 RE and 500 PS ascertainable annual cases. This is higher than the Epilepsy12 audit estimate of 340 annual RE cases.8 The discrepancy between the two studies might be due to the low rate of syndromic diagnoses made by clinicians participating in Epilepsy12, or to over-reporting of local caseloads in our survey, for example, where a paediatrician and a paediatric neurologist shared care of the same patient. Approximately half the respondents indicated a willingness to recruit to an RCT, still yielding a sufficient sampling frame to justify further planning. Overall, the question of whether or not to treat children with non-lesional focal epilepsy syndromes like RE and PS remains unanswered. Furthermore, supporting evidence to validate the use of specific AEDs for either RE or PS is limited despite widespread variation in practice. Encouragingly, this survey demonstrates: (1) there is a sufficient UK network of clinicians who see a large number of these patients; (2) these clinicians are willing to resolve these issues of treatment and non-treatment uncertainty through an RCT; (3) a non-active or placebo arm would be acceptable, and carbamazepine and levetiracetam would be the preferred (and most informative) active comparators. We suggest further discussion and planning of trial design features, such as combining RE and PS, placebo control, active arms, blinding, outcome measures and logistics, within a larger national or international multidisciplinary study group.
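The recruitment feasibility argument above is arithmetic and can be laid out explicitly. A minimal sketch follows, under the discussion's own assumptions (reported caseload doubled for under-ascertainment, roughly half of clinicians willing to randomise); applying the willingness fraction uniformly per case is a simplification introduced here.

```python
# Sketch: back-of-envelope UK sampling frame for a hypothetical RCT,
# following the assumptions in the discussion. Applying clinician
# willingness (~50%) uniformly per case is a simplification.
reported_annual_cases = {"RE": 751, "PS": 233}
ascertainment_factor = 2.0   # "even if we only double the number of reported cases"
willingness = 0.5            # ~half of respondents would recruit to an RCT

for syndrome, n in reported_annual_cases.items():
    frame = n * ascertainment_factor
    print(f"{syndrome}: ~{frame:.0f} ascertainable/year, "
          f"~{frame * willingness:.0f} potentially recruitable")
```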
Whole-article abstract: Background: The evidence base for management of childhood epilepsy is poor, especially for the most common specific syndromes such as rolandic epilepsy (RE) and Panayiotopoulos syndrome (PS). Considerable international variation in management and controversy about non-treatment indicate the need for high quality randomised controlled trials (RCT). The aim of this study is, therefore, to describe current UK practice and explore the feasibility of different RCT designs for RE and PS. Methods: We conducted an online survey of 590 UK paediatricians who treat epilepsy. Thirty-two questions covered annual caseload, investigation and management practice, factors influencing treatment, antiepileptic drug preferences and hypothetical trial design preferences. Results: 132 responded (22%): 81% were paediatricians and 95% at consultant seniority. We estimated, annually, 751 new RE cases and 233 PS cases. Electroencephalography (EEG) is requested at least half the time in approximately 70% of cases; MRI brain at least half the time in 40%-65% of cases and neuropsychological evaluation in 7%-8%. Clinicians reported non-treatment in 40%: the main reasons were low frequency of seizures and parent/child preferences. Carbamazepine is the preferred older, and levetiracetam the preferred newer, RCT arm. Approximately one-half considered active and placebo designs acceptable, choosing seizures as primary and cognitive/behavioural measures as secondary outcomes. Conclusions: Management among respondents is broadly in line with national guidance, although with possible overuse of brain imaging and underuse of EEG and neuropsychological assessments. A large proportion of patients in the UK remains untreated, and clinicians seem amenable to a range of RCT designs, with carbamazepine and levetiracetam the preferred active drugs.
Keywords: Neurology | Paediatric Practice
MeSH terms: Anticonvulsants | Data Collection | Electroencephalography | Epilepsy, Rolandic | Humans | Practice Patterns, Physicians' | Seizures | United Kingdom
KeyBERT topics: age uk seizures | childhood epilepsy syndromes | epilepsy comorbidity | epilepsy managed uk | childhood epilepsy standard
Most frequent words: ps | epilepsy | respondents | active | treatment | aed | seizures | clinicians | rct | survey
Article statistics: whole_article_text_length 6,135; whole_article_abstract_length 322; other_sections_lengths [126, 68, 33, 140, 51, 72, 72, 162, 152]; num_sections 14
Sarcopenia and osteoporosis are interrelated in geriatric inpatients.
31049683
BACKGROUND: Sarcopenia and osteoporosis share an underlying pathology and reinforce each other in terms of negative outcomes.

MATERIAL AND METHODS: A cross-sectional analysis of geriatric inpatients from the sarcopenia in geriatric elderly (SAGE) study. Measurements included dual X-ray absorptiometry for bone mineral density and appendicular muscle mass; gait speed and hand grip strength; the Barthel index; body mass index (BMI); and the mini nutritional assessment short form (MNA-SF).

RESULTS: Of the 148 patients recruited for SAGE, 141 (84 women, 57 men; mean age 80.6 ± 5.5 years) had sufficient data to be included in this ancillary investigation: 22/141 (15.6%) were only osteoporotic, 19/141 (13.5%) were only sarcopenic and 20/141 (14.2%) were osteosarcopenic (i.e. both sarcopenic and osteoporotic). The prevalence of osteoporosis was higher in sarcopenic than in non-sarcopenic individuals (51.3% vs. 21.6%, p < 0.001). Sarcopenic, osteoporotic and osteosarcopenic subjects had a lower BMI, MNA-SF, handgrip strength and gait speed (p < 0.05) than the reference group (those neither osteoporotic nor sarcopenic, n = 80). The Barthel index was lower for sarcopenic and osteosarcopenic (p < 0.05) but not for osteoporotic (p = 0.07) subjects. BMI and MNA-SF were lower in osteosarcopenia than in sarcopenia or osteoporosis alone (p < 0.05), while there were no differences in the functional criteria.

CONCLUSION: Osteoporosis and sarcopenia are linked to nutritional deficits and reduced function in geriatric inpatients. Co-occurrence (osteosarcopenia) is common and associated with a higher degree of malnutrition than osteoporosis or sarcopenia alone.
[ "Aged", "Aged, 80 and over", "Cross-Sectional Studies", "Female", "Gait", "Hand Strength", "Humans", "Male", "Osteoporosis", "Prevalence", "Sarcopenia" ]
6817738
Introduction
Sarcopenia and osteoporosis are highly prevalent in older people and contribute to a variety of negative health outcomes [1]. As reflected by the term osteosarcopenia, there is an increasing awareness of the interrelationship between muscle and bone disease [1] with respect to genetic regulation [2], the endocrine framework [3] and close mechanical interaction [4]. Another common feature of both conditions is structural degradation due to lipotoxicity [5]. The sarcopenia-associated risk of falling and increased bone vulnerability have a synergistic impact on fracture incidence [6]. Despite being named the hazardous duet [7], both conditions are treatable, with substantial overlap in the options available (exercise, protein supplementation, vitamin D) [8]. While there are some prevalence data from community-dwelling elders, in particular fallers [9] and patients with a history of hip fracture [10], data on osteosarcopenic patients in geriatric hospitals are largely lacking. In order to determine the frequency of co-occurrence of sarcopenia and osteoporosis, and the functional and nutritional characteristics accompanying these conditions in geriatric inpatients, the dataset of the SAGE study [11] was examined.
Practical conclusion
Osteoporosis and sarcopenia are prevalent conditions on a geriatric ward. Both are related to poor function and malnutrition. Co-occurrence (osteosarcopenia) is frequent and is associated with a more compromised nutritional state than isolated osteoporosis or sarcopenia. The use of DXA might prove useful in co-diagnosing the two conditions.
[ "Introduction", "Methods", "Study population", "Parameters", "Baseline characteristics.", "Functional and nutritional parameters.", "Diagnosis of sarcopenia.", "Diagnosis of osteoporosis.", "Diagnosis of osteosarcopenia.", "Data analysis.", "Results", "Discussion" ]
[ "Sarcopenia and osteoporosis are highly prevalent in old people and contribute to a variety of negative health outcomes [1]. As reflected by the term osteosarcopenia there is an increasing awareness of the interrelationship between muscle and bone disease [1] regarding genetic regulation [2], the endocrine framework [3] and close mechanical interaction [4]. Another common feature of both conditions is structural degradation due to lipotoxicity [5].\nSarcopenia-associated risk of falling and increased bone vulnerability have a synergistic impact on fracture incidence [6]. Despite being named the hazardous duet [7] both conditions are treatable, with a substantial overlap in the options available (exercise, protein supplementation, vitamin D) [8]. While there are some prevalence data from community dwelling elders, in particular fallers [9] and patients with a history of hip fractures [10], data from geriatric hospitals about osteosarcopenic patients are widely lacking.\nIn order to determine the frequency of co-occurrence of sarcopenia and osteoporosis and the functional and nutritional characteristics accompanying these conditions in geriatric inpatients the dataset of the SAGE study [11] was examined.", " Study population The SAGE study is a cross-sectional study concerned with issues of muscle mass measurement in geriatric inpatients. Preliminary results and study design have been published elsewhere [11]. Briefly the study recruited 148 geriatric inpatients (99 female and 59 male) at the department of Geriatric Medicine, Paracelsus Medical University Salzburg. Inclusion criteria were admittance to a geriatric ward in the study period, ability to walk a few meters and to lie still for 5 min. The lower age limit was 70 years. Exclusion criteria were critical or terminal illness, advanced dementia or delirium, indwelling electrical devices such as pacemakers (bioimpedance analysis being part of the study protocol) and complete or partial amputation of one or more limbs. All participants gave written informed consent. The study was approved by the local ethics committee of the state of Salzburg. For the ancillary osteosarcopenia investigation all SAGE participants with a sufficient dataset were included to enable the diagnosis of sarcopenia and osteoporosis.\nThe SAGE study is a cross-sectional study concerned with issues of muscle mass measurement in geriatric inpatients. Preliminary results and study design have been published elsewhere [11]. Briefly the study recruited 148 geriatric inpatients (99 female and 59 male) at the department of Geriatric Medicine, Paracelsus Medical University Salzburg. Inclusion criteria were admittance to a geriatric ward in the study period, ability to walk a few meters and to lie still for 5 min. The lower age limit was 70 years. Exclusion criteria were critical or terminal illness, advanced dementia or delirium, indwelling electrical devices such as pacemakers (bioimpedance analysis being part of the study protocol) and complete or partial amputation of one or more limbs. All participants gave written informed consent. The study was approved by the local ethics committee of the state of Salzburg. For the ancillary osteosarcopenia investigation all SAGE participants with a sufficient dataset were included to enable the diagnosis of sarcopenia and osteoporosis.\n Parameters Baseline characteristics. 
In addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\nIn addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\n Functional and nutritional parameters. Gait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\nGait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\n Diagnosis of sarcopenia. Diagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. 
[13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\nDiagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. [13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\n Diagnosis of osteoporosis. Bone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\nBone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\n Diagnosis of osteosarcopenia. Osteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\nOsteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\n Data analysis. The basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.\nThe basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). 
For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.\n Baseline characteristics. In addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\nIn addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\n Functional and nutritional parameters. Gait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\nGait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\n Diagnosis of sarcopenia. Diagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. 
Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. [13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\nDiagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. [13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\n Diagnosis of osteoporosis. Bone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\nBone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\n Diagnosis of osteosarcopenia. Osteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\nOsteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\n Data analysis. The basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. 
To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.\nThe basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.", "The SAGE study is a cross-sectional study concerned with issues of muscle mass measurement in geriatric inpatients. Preliminary results and study design have been published elsewhere [11]. Briefly the study recruited 148 geriatric inpatients (99 female and 59 male) at the department of Geriatric Medicine, Paracelsus Medical University Salzburg. Inclusion criteria were admittance to a geriatric ward in the study period, ability to walk a few meters and to lie still for 5 min. The lower age limit was 70 years. Exclusion criteria were critical or terminal illness, advanced dementia or delirium, indwelling electrical devices such as pacemakers (bioimpedance analysis being part of the study protocol) and complete or partial amputation of one or more limbs. All participants gave written informed consent. The study was approved by the local ethics committee of the state of Salzburg. For the ancillary osteosarcopenia investigation all SAGE participants with a sufficient dataset were included to enable the diagnosis of sarcopenia and osteoporosis.", " Baseline characteristics. In addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\nIn addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\n Functional and nutritional parameters. 
Gait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\nGait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\n Diagnosis of sarcopenia. Diagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. [13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\nDiagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. 
[13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\n Diagnosis of osteoporosis. Bone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\nBone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\n Diagnosis of osteosarcopenia. Osteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\nOsteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\n Data analysis. The basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.\nThe basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.", "In addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. 
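The diagnostic thresholds above condense into a small classifier. The following Python sketch assumes the standard EWGSOP combination rule (low muscle mass plus either low grip strength or low gait speed), which the text implies but does not spell out; function and variable names are illustrative, not taken from the study's analysis code.

```python
# Minimal sketch of the diagnostic rules described above. Assumes the
# standard EWGSOP rule: low ASMMI plus low grip strength or low gait speed.
# Names are illustrative, not from the study's analysis code.

def is_sarcopenic(sex: str, asmmi: float, grip_kg: float, gait_ms: float) -> bool:
    """EWGSOP sarcopenia: low ASMMI plus low grip strength or low gait speed."""
    low_mass = asmmi < (7.26 if sex == "m" else 5.5)     # kg/m^2 (Baumgartner thresholds)
    low_grip = grip_kg < (30.0 if sex == "m" else 20.0)  # kg
    low_gait = gait_ms <= 0.8                            # m/s
    return low_mass and (low_grip or low_gait)

def is_osteoporotic(t_scores: list) -> bool:
    """Osteoporosis: T-score <= -2.5 SD at any site (spine, total hip, femoral neck)."""
    return min(t_scores) <= -2.5

def classify(sex, asmmi, grip_kg, gait_ms, t_scores) -> str:
    sp = is_sarcopenic(sex, asmmi, grip_kg, gait_ms)
    op = is_osteoporotic(t_scores)
    if sp and op:
        return "osteosarcopenia"
    return "sarcopenia only" if sp else ("osteoporosis only" if op else "reference")

# Example: a woman with ASMMI 5.1 kg/m^2, grip 17 kg, gait speed 0.7 m/s and
# T-scores of -2.7 (L2-L4), -2.1 (total hip), -2.4 (femoral neck):
print(classify("f", 5.1, 17.0, 0.7, [-2.7, -2.1, -2.4]))  # -> osteosarcopenia
```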
Results

Of the 148 geriatric inpatients participating in SAGE, 141 had a dataset allowing the diagnosis of sarcopenia and osteoporosis. Thus 84 women and 57 men (mean age 80.7 ± 5.3 vs. 80.4 ± 5.8 years; p = 0.73) could be included in this investigation. The basic characteristics of the different subgroups of the study population are shown in Table 1.
Results. Of the 148 geriatric inpatients participating in SAGE, 141 had a dataset allowing the diagnosis of sarcopenia and osteoporosis. Thus 84 women and 57 men (mean age 80.7 ± 5.3 vs. 80.4 ± 5.8 years; p = 0.73) could be included in this investigation. Basic characteristics of the different subgroups of the study population are shown in Table 1. Out of 141 patients, 26 (18.4%) had been admitted to hospital with recent fractures.

Table 1 Characteristics of the different subgroups of the study population

|                          | OP only (n = 22) | SP only (n = 19) | OSP (n = 20) | OP total (n = 42) | SP total (n = 39) | RG (n = 80) | Total (n = 141) |
|--------------------------|------------------|------------------|--------------|-------------------|-------------------|-------------|-----------------|
| Age (years)              | 82.5 ± 4.5       | 81.1 ± 6.3       | 82.0 ± 6.3   | 82.3 ± 5.5        | 81.6 ± 6.3        | 79.6 ± 3.2  | 80.6 ± 5.5      |
| Gender (female/male)     | 20/2             | 4/15             | 14/6         | 34/8              | 18/21             | 46/34       | 84/57           |
| Comorbidities^a (m ± SD) | 2.5 ± 1.4        | 3.2 ± 1.4        | 2.4 ± 1.6    | 2.5 ± 1.5         | 2.8 ± 1.6         | 2.7 ± 1.5   | 2.7 ± 1.5       |
| Medications (m ± SD)     | 8.2 ± 3.3        | 8.2 ± 3.1        | 7.8 ± 3.9    | 8.0 ± 3.6         | 8.0 ± 3.6         | 8.6 ± 3.2   | 8.4 ± 3.3       |
| Malnutrition^b           | 12               | 12               | 17           | 29                | 29                | 37          | 78              |
| ADL dependency^c         | 8                | 11               | 15           | 23                | 26                | 32          | 66              |
| Community dwelling       | 21               | 17               | 16           | 37                | 33                | 76          | 130             |
| Use of walking aid       | 16               | 16               | 14           | 30                | 30                | 43          | 89              |

OP only osteoporosis no sarcopenia, SP only sarcopenia no osteoporosis, OSP sarcopenia and osteoporosis, OP total OP only + OSP, SP total SP only + OSP, RG reference group (no sarcopenia, no osteoporosis), m arithmetic mean, SD standard deviation, MNA mini nutritional assessment, ADL activities of daily living
a Selected comorbidities (see methods)
b MNA <17
c Barthel index <70

Prevalence data are displayed in Fig. 1. Overall prevalences of osteoporosis, sarcopenia and osteosarcopenia were 29.8%, 27.7% and 14.2%, respectively. Osteoporosis was significantly more prevalent among women (40.5% vs. 14.0%, p < 0.001), whereas no significant gender difference was observed for sarcopenia and osteosarcopenia. Women had isolated osteoporosis more frequently (p < 0.001), while men had isolated sarcopenia more frequently (p < 0.001).

Fig. 1 Prevalence of sarcopenia, osteoporosis and osteosarcopenia in the study population (OP osteoporotic, SP sarcopenic, OSP sarcopenia and osteoporosis, RG reference group). Prevalence significantly gender-associated (Fisher's exact test) at *p <0.05; **p <0.01; ***p <0.001

There was a highly significant association of sarcopenia and osteoporosis in the whole study group (p < 0.001) and in female participants (p < 0.001), and a significant association in men (p = 0.02) (Fig. 2). The age- and gender-corrected odds ratio (Mantel-Haenszel test) for the sarcopenic group to be also osteoporotic was 8.71 (confidence interval 2.87–26.42; p < 0.001).

Fig. 2 Co-occurrence of sarcopenia and osteoporosis. a Prevalence of osteoporosis dependent on sarcopenia status (SP sarcopenia); significantly different (Fisher's exact test) at *p <0.05; **p <0.01; ***p <0.001 between SP (light blue) and no SP (dark blue). b Prevalence of sarcopenia dependent on osteoporosis status (OP osteoporotic); significantly different (Fisher's exact test) at *p <0.05; **p <0.01; ***p <0.001 between OP (light blue) and no OP (dark blue)
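As a rough cross-check, the crude association and a gender-stratified Mantel-Haenszel pooled odds ratio can be recomputed from the cell counts in Table 1. The sketch below is not the original analysis; in particular, the published odds ratio of 8.71 is additionally adjusted for age, so the gender-only stratification here will not reproduce it exactly:

```python
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import StratifiedTable

# Rows: sarcopenic yes/no; columns: osteoporotic yes/no (counts from Table 1).
overall = np.array([[20, 19],    # sarcopenic:     OSP = 20, SP only = 19
                    [22, 80]])   # non-sarcopenic: OP only = 22, RG = 80
or_crude, p = fisher_exact(overall)
print(f"crude OR = {or_crude:.2f}, Fisher p = {p:.4f}")

# Gender strata taken from the female/male row of Table 1.
women = np.array([[14, 4], [20, 46]])
men = np.array([[6, 15], [2, 34]])
mh = StratifiedTable([women, men])
# Gender-only stratification; the paper additionally stratifies by age.
print(f"MH pooled OR = {mh.oddsratio_pooled:.2f}")
```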
The presence of sarcopenia, osteoporosis and osteosarcopenia was associated with nutritional and functional deficits when compared to the reference group (Table 2). All three conditions showed a highly significant association with lower BMI and MNA-SF. In terms of functionality, the Barthel index was lower in the sarcopenic and osteosarcopenic but not in the osteoporotic patients. All three groups had significantly lower gait speed than the reference group. These findings proved statistically robust (p < 0.05) when checking for gender and age as possible confounders by multiple regression analysis.

Table 2 Functional and nutritional characteristics depending on OP/SP status

|          | BMI (kg/m2)     | MNA-SF          | HG male (kg)   | HG female (kg)   | Gait speed (m/s) | Barthel index   |
|----------|-----------------|-----------------|----------------|------------------|------------------|-----------------|
| RG       | 28.46 ± 4.15    | 11.21 ± 2.04    | 33.65 ± 9.46   | 21.40 ± 6.57     | 0.91 ± 0.35      | 73.56 ± 19.27   |
| OP total | 23.68 ± 3.96*** | 9.69 ± 2.64**   | 30.17 ± 10.49  | 16.63 ± 4.42***  | 0.67 ± 0.24***   | 66.43 ± 20.33   |
| SP total | 23.08 ± 3.52*** | 9.33 ± 2.44***  | 28.26 ± 7.99*  | 16.56 ± 4.13**   | 0.62 ± 0.20***   | 61.79 ± 20.58** |
| OSP      | 21.46 ± 2.53*** | 8.50 ± 2.53***  | 32.25 ± 11.58  | 16.75 ± 4.43(**) | 0.61 ± 0.16***   | 61.00 ± 15.86** |
| OP only  | 25.71 ± 3.95**  | 10.77 ± 2.25    | 26.00 ± 6.00   | 16.53 ± 4.40***  | 0.72 ± 0.29(*)   | 71.36 ± 22.57   |
| SP only  | 24.78 ± 3.61*** | 10.21 ± 2.02    | 29.00 ± 5.37** | 15.75 ± 2.86(*)  | 0.62 ± 0.23**    | 62.63 ± 24.57   |

Values given as arithmetic mean ± standard deviation. RG reference group (no sarcopenia, no osteoporosis), OP osteoporotic, SP sarcopenic, OSP sarcopenia and osteoporosis, BMI body mass index, HG handgrip, MNA-SF mini nutritional assessment short form. Significant at *p <0.05; **p <0.01; ***p <0.001 compared to the reference group (male and female parts of the reference group for handgrip, respectively). Where differences no longer reached statistical significance after correcting for age and gender, the asterisks are put into brackets.

Pathologically decreased handgrip strength occurred more frequently in all disease subgroups (osteoporotic only, all osteoporotic, osteosarcopenic, sarcopenic only and all sarcopenic) than in the reference group (p < 0.01). Absolute strength was analyzed separately for female and male participants. Women consistently showed lower grip strength in association with all kinds of muscle and bone disease (p < 0.05). This effect persisted after correction for age in all osteoporotic and all sarcopenic women but lost statistical significance for the osteosarcopenic subgroup. No such association was observed in men.
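The age- and gender-adjusted group comparisons reported above could be reproduced along the following lines with an ordinary least squares model. The data frame below is a toy stand-in (the patient-level data are not public) and all column names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy stand-in data; the real SAGE dataset is not publicly available.
df = pd.DataFrame({
    "gait_speed": [0.95, 0.58, 0.70, 1.05, 0.52, 0.88, 0.61, 0.93],
    "group": ["RG", "OSP", "OP only", "RG", "SP only", "RG", "SP only", "RG"],
    "age": [78, 84, 82, 75, 83, 80, 85, 77],
    "sex": ["f", "f", "m", "m", "f", "m", "f", "m"],
})

# Group effect on gait speed adjusted for age and sex, with 'RG' as reference.
model = smf.ols("gait_speed ~ C(group, Treatment('RG')) + age + C(sex)",
                data=df).fit()
print(model.params)
```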
Given the small sample sizes in the different disease subgroups among men, all pathologic conditions were pooled and compared to the male part of the reference group (handgrip 28.05 ± 7.85 kg vs. 33.65 ± 8.08 kg; p = 0.017). Pooling the sarcopenic males (sarcopenia only + osteosarcopenia) still showed a significantly lower handgrip compared to the male reference group, while pooling osteoporosis only (n = 2) and osteosarcopenia (n = 6) failed to show a significant difference to the reference group (Table 2). No differences were observed in functional state when the subgroup of osteosarcopenic subjects was compared to sarcopenic and osteoporotic subjects (data not shown); however, BMI and MNA-SF were lower in osteosarcopenic patients when compared to the SP only and OP only subgroups, respectively (Table 3). This remained true after correction for age and gender.

Table 3 Nutritional state and OP/SP status

|         | BMI (kg/m2)     | MNA-SF         |
|---------|-----------------|----------------|
| OSP     | 21.46 ± 2.53    | 8.50 ± 2.52    |
| SP only | 24.78 ± 3.61**  | 10.21 ± 2.02*  |
| OP only | 25.71 ± 3.95*** | 10.77 ± 2.25** |

Values given as arithmetic mean ± standard deviation. OSP sarcopenia and osteoporosis, OP osteoporotic, SP sarcopenic, BMI body mass index, MNA-SF mini nutritional assessment short form. Significant at *p <0.05; **p <0.01; ***p <0.001 compared to the OSP group. All differences remained statistically significant after correction for age and gender.
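The pooled male handgrip comparison can be approximately checked from the reported summary statistics. The group sizes below are inferred from the gender row of Table 1 (2 + 15 + 6 = 23 pathologic males vs. 34 reference males), so the resulting p-value should land near, but need not equal, the reported 0.017:

```python
from scipy.stats import ttest_ind_from_stats

# Pooled pathologic males vs. male reference group; means/SDs from the text,
# group sizes inferred from Table 1 (an assumption, not stated explicitly).
t, p = ttest_ind_from_stats(mean1=28.05, std1=7.85, nobs1=23,
                            mean2=33.65, std2=8.08, nobs2=34)
print(f"t = {t:.2f}, p = {p:.3f}")
```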
Discussion. In terms of age, comorbidity, polypharmacy and walking limitation (use of a walking device), this sample reflected the diversity of hospitalized geriatric patients. The sex ratio (59.6% women) is consistent with the predominance of women on a geriatric ward. The characteristics of this sample match the features of geriatric inpatients as described exemplarily in a cohort study by von Renteln-Kruse and Ebner [16]. The prevalence of osteosarcopenia was 14.2%, with a non-significant tendency towards a higher prevalence among women (16.7% vs. 10.5%). This is in line with a Chinese study reporting a prevalence of 15.1% (women) and 10.4% (men) in community dwelling elders over age 80 years [15]. Recently, Locquet et al. found a prevalence of 9.52% in an exclusively female population (mean age 74.3 years) from Belgium [17]. Some studies [9, 10] reported considerably higher prevalence rates (58% [9], 37.9% [10]), but these referred to preselected populations of female hip fracture patients and fallers, respectively. Osteoporosis was more frequent in sarcopenic (51.3%) than in non-sarcopenic (21.6%) subjects. A higher osteoporosis prevalence in relation to sarcopenia status was also described in earlier studies (46.1% vs. 22.0% [17], 58.5% vs. 36.4% [9], 57.8% vs. 22.0% [18]) and was seen in both female and male subjects. The nutritional state of all sarcopenic, osteoporotic and osteosarcopenic patients was characterized by lower BMI and MNA-SF compared to the reference group. The BMI and MNA-SF were lowest in osteosarcopenic subjects, with a significant difference not only compared to the reference group but even compared to the solely sarcopenic and solely osteoporotic subjects. A worse nutritional state in osteosarcopenic subjects was also observed by Huo et al. [9], who concluded that thorough nutritional assessment and early supplementation were especially important in this subgroup.

All three pathologic conditions were accompanied by lower gait speed. This was also true for hand grip strength in women and in both sexes combined. In men the difference in hand grip reached statistical significance after pooling the different disease subgroups. A lower Barthel index was associated with sarcopenia and osteosarcopenia but not with osteoporosis. There was no significant difference in functional parameters between the osteosarcopenic and the solely osteoporotic or solely sarcopenic subjects. Drey et al. [1] reported decreased hand grip strength for the osteosarcopenic subgroup in a population of 68 prefrail individuals but did not find any difference in gait speed; however, this study is not directly comparable due to a different definition of osteosarcopenia, which included osteopenic patients with sarcopenia. In a larger (n = 680) Australian population of older fallers, the osteosarcopenic individuals had decreased hand grip strength as well as gait speed. Previous data pertaining to ADL status in osteosarcopenia could not be found.

There are important limitations to this study. First, due to the exclusion of the most frail and demented patients, the population might not entirely reflect a geriatric ward population; however, it is probable that the selection bias leads towards an underestimation of prevalence rates. Second, the study lacks statistical power, especially due to the low number of males with osteoporosis or osteosarcopenia. Larger multicenter studies are needed that allow stratification of the independent contributors to osteosarcopenia. Finally, it is a cross-sectional study that does not allow the establishment of chronological or causal relationships in the pathways leading to osteosarcopenia. The strength of the study is to deliver data from a geriatric hospital, giving the clinician an idea of the prevalence, the degree of overlap and the associated nutritional and functional deficits of two highly important pathologies of the locomotor and skeletal system.
[ "Introduction", "Methods", "Study population", "Parameters", "Baseline characteristics.", "Functional and nutritional parameters.", "Diagnosis of sarcopenia.", "Diagnosis of osteoporosis.", "Diagnosis of osteosarcopenia.", "Data analysis.", "Results", "Discussion", "Practical conclusion" ]
[ "Sarcopenia and osteoporosis are highly prevalent in old people and contribute to a variety of negative health outcomes [1]. As reflected by the term osteosarcopenia there is an increasing awareness of the interrelationship between muscle and bone disease [1] regarding genetic regulation [2], the endocrine framework [3] and close mechanical interaction [4]. Another common feature of both conditions is structural degradation due to lipotoxicity [5].\nSarcopenia-associated risk of falling and increased bone vulnerability have a synergistic impact on fracture incidence [6]. Despite being named the hazardous duet [7] both conditions are treatable, with a substantial overlap in the options available (exercise, protein supplementation, vitamin D) [8]. While there are some prevalence data from community dwelling elders, in particular fallers [9] and patients with a history of hip fractures [10], data from geriatric hospitals about osteosarcopenic patients are widely lacking.\nIn order to determine the frequency of co-occurrence of sarcopenia and osteoporosis and the functional and nutritional characteristics accompanying these conditions in geriatric inpatients the dataset of the SAGE study [11] was examined.", " Study population The SAGE study is a cross-sectional study concerned with issues of muscle mass measurement in geriatric inpatients. Preliminary results and study design have been published elsewhere [11]. Briefly the study recruited 148 geriatric inpatients (99 female and 59 male) at the department of Geriatric Medicine, Paracelsus Medical University Salzburg. Inclusion criteria were admittance to a geriatric ward in the study period, ability to walk a few meters and to lie still for 5 min. The lower age limit was 70 years. Exclusion criteria were critical or terminal illness, advanced dementia or delirium, indwelling electrical devices such as pacemakers (bioimpedance analysis being part of the study protocol) and complete or partial amputation of one or more limbs. All participants gave written informed consent. The study was approved by the local ethics committee of the state of Salzburg. For the ancillary osteosarcopenia investigation all SAGE participants with a sufficient dataset were included to enable the diagnosis of sarcopenia and osteoporosis.\nThe SAGE study is a cross-sectional study concerned with issues of muscle mass measurement in geriatric inpatients. Preliminary results and study design have been published elsewhere [11]. Briefly the study recruited 148 geriatric inpatients (99 female and 59 male) at the department of Geriatric Medicine, Paracelsus Medical University Salzburg. Inclusion criteria were admittance to a geriatric ward in the study period, ability to walk a few meters and to lie still for 5 min. The lower age limit was 70 years. Exclusion criteria were critical or terminal illness, advanced dementia or delirium, indwelling electrical devices such as pacemakers (bioimpedance analysis being part of the study protocol) and complete or partial amputation of one or more limbs. All participants gave written informed consent. The study was approved by the local ethics committee of the state of Salzburg. For the ancillary osteosarcopenia investigation all SAGE participants with a sufficient dataset were included to enable the diagnosis of sarcopenia and osteoporosis.\n Parameters Baseline characteristics. 
In addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\nIn addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\n Functional and nutritional parameters. Gait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\nGait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\n Diagnosis of sarcopenia. Diagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. 
[13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\nDiagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. [13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\n Diagnosis of osteoporosis. Bone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\nBone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\n Diagnosis of osteosarcopenia. Osteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\nOsteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\n Data analysis. The basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.\nThe basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). 
For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.\n Baseline characteristics. In addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\nIn addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\n Functional and nutritional parameters. Gait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\nGait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\n Diagnosis of sarcopenia. Diagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. 
Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. [13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\nDiagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. [13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\n Diagnosis of osteoporosis. Bone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\nBone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\n Diagnosis of osteosarcopenia. Osteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\nOsteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\n Data analysis. The basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. 
To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.\nThe basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.", "The SAGE study is a cross-sectional study concerned with issues of muscle mass measurement in geriatric inpatients. Preliminary results and study design have been published elsewhere [11]. Briefly the study recruited 148 geriatric inpatients (99 female and 59 male) at the department of Geriatric Medicine, Paracelsus Medical University Salzburg. Inclusion criteria were admittance to a geriatric ward in the study period, ability to walk a few meters and to lie still for 5 min. The lower age limit was 70 years. Exclusion criteria were critical or terminal illness, advanced dementia or delirium, indwelling electrical devices such as pacemakers (bioimpedance analysis being part of the study protocol) and complete or partial amputation of one or more limbs. All participants gave written informed consent. The study was approved by the local ethics committee of the state of Salzburg. For the ancillary osteosarcopenia investigation all SAGE participants with a sufficient dataset were included to enable the diagnosis of sarcopenia and osteoporosis.", " Baseline characteristics. In addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\nIn addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.\n Functional and nutritional parameters. 
Gait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\nGait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.\n Diagnosis of sarcopenia. Diagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. [13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\nDiagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. 
[13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.\n Diagnosis of osteoporosis. Bone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\nBone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].\n Diagnosis of osteosarcopenia. Osteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\nOsteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.\n Data analysis. The basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.\nThe basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.", "In addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) were looked into. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in which initial hospital admission was due to fractures was retrieved. 
Information was obtained from the clinical records.", "Gait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMARR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed alternating left and right side and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and if pathologic full MNA) was assessed by a dietician.", "Diagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X‑ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001 Hologic Inc. Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMMDXA. For DXA-derived muscle mass the thresholds communicated by Baumgartner et al. [13] were applied based on an appendicular skeletal muscle mass index (ASMMI): ASMMIDXA < 7.26 kg/m2 for men and <5.5 kg/m2 for women.", "Bone mineral density (BMD) measurements were performed at three sites in the same session and with same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviation (SD) at any location was considered diagnostic of osteoporosis [14].", "Osteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.", "The basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed by SPSSR statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher’s exact test was used for the analysis of qualitative data. Significance of quantitative differences between subgroups was determined by unpaired t‑test. To check for the confounding effect of age and gender multiple regression analysis was used, when means between subgroups were compared. For contingency tables the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.", "Of the 148 geriatric inpatients participating in SAGE, 141 had a dataset allowing the diagnosis of sarcopenia and osteoporosis. Thus 84 women and 57 men (mean age 80.7 ± 5.3 years vs. 80.4 ± 5.8; p = 0.73 years) could be included in this investigation. Basic characteristics of the different subgroups of the study population are shown in Table 1. 
Out of 141 patients 26 (18.4%) had been admitted to hospital with recent fractures.Table 1Characteristics of the different subgroups of the study populationOP only(n = 22)SP only(n = 19)OSP(n = 20)OP total(n = 42)SP total(n = 39)RG(n = 80)Total (n = 141)Age (years)82.5 ± 4.581.1 ± 6.382.0 ± 6.382.3 ± 5.581.6 ± 6.379.6 ± 3.280.6 ± 5.5Gender (female/male)20/24/1514/634/818/2146/3484/57Comorbiditiesa (m ± SD)2.5 ± 1.43.2 ± 1.42.4 ± 1.62.5 ± 1.52.8 ± 1.62.7 ± 1.52.7 ± 1.5Medications (m ± SD)8.2 ± 3.38.2 ± 3.17.8 ± 3.98.0 ± 3.68.0 ± 3.68.6 ± 3.28.4 ± 3.3Malnutritionb12121729293778ADL dependencyc8111523263266Community dwelling211716373376130Use of walking aid16161430304389OP only osteoporosis no sarcopenia, SP only sarcopenia no osteoporosis, OSP sarcopenia and osteoporosis, OP total OP only + OSP, SP total SP only + OSP, RG reference group (no sarcopenia, no osteoporosis), m arithmetic mean, SD standard deviation, MNA mini nutritional assessment, ADL activities of daily livinga Selected comorbidities (see methods)b MNA <17c Barthel index <70\nCharacteristics of the different subgroups of the study population\nOP only osteoporosis no sarcopenia, SP only sarcopenia no osteoporosis, OSP sarcopenia and osteoporosis, OP total OP only + OSP, SP total SP only + OSP, RG reference group (no sarcopenia, no osteoporosis), m arithmetic mean, SD standard deviation, MNA mini nutritional assessment, ADL activities of daily living\na Selected comorbidities (see methods)\nb MNA <17\nc Barthel index <70\nPrevalence data are displayed in Fig. 1. Overall prevalences of osteoporosis, sarcopenia and osteosarcopenia were 29.8%, 27.7% and 14.2%, respectively. Osteoporosis was significantly more prevalent among women (40.5% vs. 14.0%, p < 0.001), whereas no significant gender difference was observed for sarcopenia and osteosarcopenia. Women had isolated osteoporosis more frequently (p < 0.001) while men had isolated sarcopenia more frequently (p < 0.001).Fig. 1Prevalence of sarcopenia, osteoporosis and osteosarcopenia in the study populations (OP osteoporotic; SP sarcopenic, OSP sarcopenia and osteoporosis, RG reference group). Prevalence significantly gender associated (Fisher’s exact test) at *p <0.05, **p <0.01;***p <0.001\nPrevalence of sarcopenia, osteoporosis and osteosarcopenia in the study populations (OP osteoporotic; SP sarcopenic, OSP sarcopenia and osteoporosis, RG reference group). Prevalence significantly gender associated (Fisher’s exact test) at *p <0.05, **p <0.01;***p <0.001\nThere was a highly significant association of sarcopenia and osteoporosis in the whole study group (p < 0.001) and in female participants (p < 0.001) and a significant association in men (p = 0.02) (Fig. 2). The age and gender corrected odds ratio (Mantel-Haenszel test) for the sarcopenic group to be also osteoporotic was 8.71 (confidence interval 2.87–26.42; p < 0.001).Fig. 2Co-occurrence of sarcopenia and osteoporosis. a Prevalence of osteoporosis dependent on sarcopenia status. SP sarcopenia. Prevalence significantly different (Fisher’s exact test) at *p <0.05, **p <0.01;***p <0.001 between SP (lightblue) and no SP (dark blue). b Prevalence of sarcopenia depending on osteoporosis status. OP osteoporotic. Prevalence significantly different (Fisher’s exact test) at *p <0.05, **p <0.01;***p <0.001 between OP (light blue) and no OP (dark blue)\nCo-occurrence of sarcopenia and osteoporosis. a Prevalence of osteoporosis dependent on sarcopenia status. SP sarcopenia. 
Prevalence significantly different (Fisher’s exact test) at *p <0.05, **p <0.01;***p <0.001 between SP (lightblue) and no SP (dark blue). b Prevalence of sarcopenia depending on osteoporosis status. OP osteoporotic. Prevalence significantly different (Fisher’s exact test) at *p <0.05, **p <0.01;***p <0.001 between OP (light blue) and no OP (dark blue)\nThe presence of sarcopenia, osteoporosis and osteosarcopenia was associated with nutritional and functional deficits when compared to the reference group (Table 2). All three conditions showed a highly significant association with lower BMI and MNA-SF. In terms of functionality, the Barthel index was lower in the sarcopenic and osteosarcopenic but not in the osteoporotic patients. All three groups had significantly lower gait speed than the reference group. These findings proved statistically robust (p < 0.05) when checking for gender and age as possible confounders by multiple regression analysis.Table 2Functional and nutritional characteristics depending on OS/SP statusBMI(kg/m2)MNA-SFHG male(kg)HG female(kg)Gait speed(m/s)Barthel indexRG28.46 ± 4.1511.21 ± 2.0433.65 ± 9.4621.40 ± 6.570.91 ± 0.3573.56 ± 19.27OP total23.68 ± 3.96***9.69 ± 2.64**30.17 ± 10.4916.63 ± 4.42***0.67 ± 0.24***66.43 ± 20.33SP total23.08 ± 3.52***9.33 ± 2.44***28.26 ± 7.99*16.56 ± 4.13**0.62 ± 0.20***61.79 ± 20.58**OSP21.46 ± 2.53***8.50 ± 2.53***32.25 ± 11.5816.75 ± 4.43(**)0.61 ± 0.16***61.00 ± 15.86**OP only25.71 ± 3.95**10.77 ± 2.2526.00 ± 6.0016.53 ± 4.40***0.72 ± 0.29(*)71.36 ± 22.57SP only24.78 ± 3.61***10.21 ± 2.0229.00 ± 5.37**15.75 ± 2.86(*)0.62 ± 0.23**62.63 ± 24.57Values given as arithmetic mean ± standard deviationRG reference group (no sarcopenia, no osteoporosis), OP osteoporotic, SP sarcopenic, OSP sarcopenia and osteoporosis, BMI body mass index, HG handgrip, MNA-SF mini nutritional assessment short formSignificant at *p <0.05; **p <0.01;***p <0.001 compared to reference group (male and female parts of reference group for handgrip, respectively). Where differences no longer reached statistical significance after correcting for age and gender the asterisks are put into brackets\nFunctional and nutritional characteristics depending on OS/SP status\nValues given as arithmetic mean ± standard deviation\nRG reference group (no sarcopenia, no osteoporosis), OP osteoporotic, SP sarcopenic, OSP sarcopenia and osteoporosis, BMI body mass index, HG handgrip, MNA-SF mini nutritional assessment short form\nSignificant at *p <0.05; **p <0.01;***p <0.001 compared to reference group (male and female parts of reference group for handgrip, respectively). Where differences no longer reached statistical significance after correcting for age and gender the asterisks are put into brackets\nPathologically decreased handgrip strength occurred more frequently in all disease subgroups (barely osteoporotic, all osteoporotic, osteosarcopenic, barely sarcopenic and total sarcopenic) than in the reference group (p < 0.01). Absolute strength was analyzed separately for female and male participants. Women consistently showed lower grip strength associated with all kinds of muscle and bone disease (p < 0.05). This effect persisted after correction for age in all osteoporotic and all sarcopenic women but lost statistical significance for the osteosarcopenic subgroup. No such association was observed in men. 
Given the small sample size in the different disease subgroups among men, all pathologic conditions were pooled and compared to the male part of the reference group (handgrip 28.05 ± 7.85 kg vs. 33.65 ± 8.08 kg; p = 0.017). Pooling of sarcopenic male (sarcopenia only + osteosarcopenia) still showed a significant lower handgrip compared to the male reference group while pooling of osteoporosis (n = 2) only and osteosarcopenia (n = 6) failed to show a significant difference to the reference group (Table 2). No differences were observed in functional state when the subgroup of osteosarcopenic subjects was compared to sarcopenic and osteoporotic subjects (data not shown); however, BMI and MNA-SF were lower in osteosarcopenic patients when compared to barely sarcopenic and barely osteoporotic respectively (Table 3). This remained true after correction for age and gender.Table 3Nutritional state and OS/SP statusBMI [kg/m2]MNA-SFOSP21.46 ± 2.538.50 ± 2.52SP only24.78 ± 3.61**10.21 ± 2.02*OP only25.71 ± 3.95***10.77 ± 2.25**Values given as arithmetic mean ± standard deviationOSP sarcopenia and osteoporosis; OP all osteoporotic; SP all sarcopenic; BMI body mass index; HG handgrip; MNA-SF mini nutritional assessment short formSignificant at *p <0.05; **p <0.01;***p <0.001 compared to OSP group. All differences remained statistically significant after correction for age and gender\nNutritional state and OS/SP status\nValues given as arithmetic mean ± standard deviation\nOSP sarcopenia and osteoporosis; OP all osteoporotic; SP all sarcopenic; BMI body mass index; HG handgrip; MNA-SF mini nutritional assessment short form\nSignificant at *p <0.05; **p <0.01;***p <0.001 compared to OSP group. All differences remained statistically significant after correction for age and gender", "In terms of age, comorbidity, polypharmacy and walking limitation (use of walking device) this sample reflected the diversity of hospitalized geriatric patients. The sex ratio (59.6% women) is consistent with the predominance of women on a geriatric ward. The characteristics of this sample match the features of geriatric inpatients as described exemplarily in a cohort study by von Renteln-Kruse and Ebner [16]. The prevalence of osteosarcopenia was 14.2% with a non-significant tendency for higher prevalence (16.7% vs. 10.5%) among women. This is in line with a Chinese study reporting a prevalence of 15.1% (women) and 10.4% (men) in community dwelling elders over age 80 years [15]. Recently Locquet et al. found a prevalence of 9.52% in an exclusively female population (mean age 74.3 years) from Belgium [17]. Some studies [9, 10] reported considerably higher prevalence rates (58% [9], 37.9% [10]) but these referred to preselected populations of female hip fracture patients and fallers, respectively. Osteoporosis was more frequent in sarcopenic (51.3%) than non-sarcopenic (21.6%) subjects. A higher osteoporosis prevalence with respect to sarcopenia status was described in earlier studies (46.1% vs. 22.0% [17], 58.5% vs. 36.4% [9], 57.8% vs. 22.0% [18]) and was seen in both female and male subjects. The nutritional state of all sarcopenic, osteoporotic and osteosarcopenic patients was characterized by lower BMI and MNA-SF compared to the reference group. The BMI and MNA-SF were lowest in osteosarcopenic subjects with a significant difference not only compared to normal but even compared to the barely sarcopenic and barely osteoporotic subjects. A worse nutritional state in osteosarcopenic subjects was also observed by Huo et al. 
Practical conclusion. Osteoporosis and sarcopenia are prevalent conditions on a geriatric ward. Both are related to poor function and malnutrition. Co-occurrence (osteosarcopenia) is frequent, and it is associated with a more compromised nutritional state than isolated osteoporosis or sarcopenia. The use of DXA might prove useful in co-diagnosing the two conditions.
[ null, null, null, null, null, null, null, null, null, null, null, null, "conclusion" ]
[ "Osteosarcopenia", "Muscle-bone-unit", "Bone-muscle-continuum", "Sarco-osteoporosis", "Osteosarkopenie", "Muskel-Knochen-Einheit", "Knochen-Muskel-Kontinuum", "Sarkoosteoporose" ]
Introduction: Sarcopenia and osteoporosis are highly prevalent in old people and contribute to a variety of negative health outcomes [1]. As reflected by the term osteosarcopenia, there is an increasing awareness of the interrelationship between muscle and bone disease [1] regarding genetic regulation [2], the endocrine framework [3] and close mechanical interaction [4]. Another common feature of both conditions is structural degradation due to lipotoxicity [5]. Sarcopenia-associated risk of falling and increased bone vulnerability have a synergistic impact on fracture incidence [6]. Despite being named the hazardous duet [7], both conditions are treatable, with a substantial overlap in the options available (exercise, protein supplementation, vitamin D) [8]. While there are some prevalence data from community-dwelling elders, in particular fallers [9] and patients with a history of hip fractures [10], data from geriatric hospitals about osteosarcopenic patients are widely lacking. In order to determine the frequency of co-occurrence of sarcopenia and osteoporosis and the functional and nutritional characteristics accompanying these conditions in geriatric inpatients, the dataset of the SAGE study [11] was examined.

Methods: Study population. The SAGE study is a cross-sectional study concerned with issues of muscle mass measurement in geriatric inpatients. Preliminary results and study design have been published elsewhere [11]. Briefly, the study recruited 148 geriatric inpatients (99 female and 59 male) at the department of Geriatric Medicine, Paracelsus Medical University Salzburg. Inclusion criteria were admittance to a geriatric ward in the study period and the ability to walk a few meters and to lie still for 5 min. The lower age limit was 70 years. Exclusion criteria were critical or terminal illness, advanced dementia or delirium, indwelling electrical devices such as pacemakers (bioimpedance analysis being part of the study protocol) and complete or partial amputation of one or more limbs. All participants gave written informed consent. The study was approved by the local ethics committee of the state of Salzburg. For the ancillary osteosarcopenia investigation, all SAGE participants with a dataset sufficient to enable the diagnosis of sarcopenia and osteoporosis were included.

Baseline characteristics. In addition to age and gender, the number of comorbidities (out of the nine following: cardiovascular disease, chronic heart failure, cerebrovascular disease, obstructive pulmonary disease, diabetes, renal failure, hypertension, cancer and dementia) was recorded. Besides the number of medications, the life setting (community dwelling vs. institutionalized) and the use of a walking device were assessed. Furthermore, the number of patients in whom initial hospital admission was due to fractures was retrieved. Information was obtained from the clinical records.

Functional and nutritional parameters. Gait speed and hand grip strength were measured in all individuals, partly for the diagnosis of sarcopenia but also for comparison between study subgroups. Gait speed was measured over a distance of 5 m. Hand grip strength was measured with a JAMAR hydraulic hand dynamometer (Ametek, Chicago, IL, USA). A total of six measurements were performed, alternating left and right side, and the maximum value was selected. The Barthel index at admission and BMI were obtained from the clinical records. The MNA-SF (and, if pathologic, the full MNA) was assessed by a dietician.

Diagnosis of sarcopenia. Diagnosis was based on the EWGSOP criteria, comprising gait speed, handgrip strength and appendicular muscle mass derived from dual energy X-ray absorptiometry (DXA) [12]. A gait speed of ≤0.8 m/s was considered pathologic. Low hand grip strength was defined as <30 kg for men and <20 kg for women. The Hologic Discovery A (S/N 85001, Hologic Inc., Marlborough, MA, USA) was used for all DXA scans. The scan measurements and analyses were conducted following standard procedures. Participants were measured wearing only gowns to eliminate possible artifacts due to clothing and fasteners. Whole body scans were manually analyzed for the manufacturer-defined regions of interest (ROI) following the standard analysis protocol in the Hologic User Manual. Customized ROI were also analyzed using the Hologic whole body and subregion analysis modes (software ver. 13.3.01). Appendicular skeletal muscle mass (ASMM) was directly derived from the appendicular soft lean tissue compartment in the DXA studies and denoted ASMM-DXA. For DXA-derived muscle mass, the thresholds communicated by Baumgartner et al. [13] were applied, based on an appendicular skeletal muscle mass index (ASMMI): ASMMI-DXA <7.26 kg/m2 for men and <5.5 kg/m2 for women.
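The sarcopenia case definition above combines fixed cut-offs for gait speed, grip strength and DXA-derived muscle mass. A small sketch encoding those thresholds follows; the combination rule (low muscle mass plus low gait speed or low grip strength) is the usual EWGSOP algorithm, which the text implies but does not spell out, and all type and field names are illustrative.

```python
# Sketch of the sarcopenia classification: low DXA-derived ASMMI plus low
# gait speed or low grip strength (assumed EWGSOP combination rule).
from dataclasses import dataclass

@dataclass
class Subject:
    sex: str             # "m" or "f"
    asmm_kg: float       # appendicular skeletal muscle mass from DXA
    height_m: float
    gait_speed_ms: float
    grip_kg: float

ASMMI_CUTOFF = {"m": 7.26, "f": 5.5}   # kg/m^2, Baumgartner et al. thresholds
GRIP_CUTOFF = {"m": 30.0, "f": 20.0}   # kg, as stated above

def is_sarcopenic(s: Subject) -> bool:
    asmmi = s.asmm_kg / s.height_m ** 2
    low_mass = asmmi < ASMMI_CUTOFF[s.sex]
    low_function = s.gait_speed_ms <= 0.8 or s.grip_kg < GRIP_CUTOFF[s.sex]
    return low_mass and low_function

# Example: a woman with ASMMI ~4.7 kg/m^2, slow gait and weak grip -> True.
print(is_sarcopenic(Subject("f", 12.0, 1.60, 0.7, 18.0)))
```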
Diagnosis of osteoporosis. Bone mineral density (BMD) measurements were performed at three sites in the same session and with the same DXA device: lumbar spine (L2–L4), total hip and femoral neck. A T-score of ≤−2.5 standard deviations (SD) at any location was considered diagnostic of osteoporosis [14].

Diagnosis of osteosarcopenia. Osteosarcopenia was diagnosed when both sarcopenia and osteoporosis were present.

Data analysis. The basic pattern of data analysis was to divide the study group into 4 subgroups: 1. sarcopenia without osteoporosis, 2. osteoporosis without sarcopenia, 3. osteosarcopenia (both conditions present) and 4. absence of both conditions (the latter defined as the reference group). For some of the analyses, all sarcopenic (with or without osteoporosis) and all osteoporotic (with or without sarcopenia) patients were pooled. Statistical analysis was performed with SPSS Statistics 24. Descriptive values are presented as mean ± standard deviation (±SD). Fisher's exact test was used for the analysis of qualitative data. The significance of quantitative differences between subgroups was determined by unpaired t-test. To check for the confounding effect of age and gender, multiple regression analysis was used when means between subgroups were compared. For contingency tables, the Mantel-Haenszel method of weighted odds ratios was applied. A p-value <0.05 was considered significant.
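The Mantel-Haenszel weighted odds ratio mentioned here is available directly in statsmodels. A sketch using sex-stratified 2 × 2 tables assembled from the gender counts in Table 1 (rows: sarcopenic yes/no; columns: osteoporotic yes/no); note the published estimate (8.71) was additionally corrected for age, so this sex-only stratification will not reproduce it exactly.

```python
# Sketch: Mantel-Haenszel pooled odds ratio across sex strata, with 2x2
# counts assembled from the gender columns of Table 1.
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Rows: sarcopenic yes/no; columns: osteoporotic yes/no.
women = np.array([[14, 4],    # sarcopenic women: 14 OSP, 4 SP only
                  [20, 46]])  # non-sarcopenic women: 20 OP only, 46 RG
men = np.array([[6, 15],      # sarcopenic men: 6 OSP, 15 SP only
                [2, 34]])     # non-sarcopenic men: 2 OP only, 34 RG

st = StratifiedTable([women, men])
print("MH pooled OR:", round(st.oddsratio_pooled, 2))
res = st.test_null_odds()     # chi-square test of OR = 1 across strata
print(res.statistic, res.pvalue)
```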
Results: Of the 148 geriatric inpatients participating in SAGE, 141 had a dataset allowing the diagnosis of sarcopenia and osteoporosis. Thus 84 women and 57 men (mean age 80.7 ± 5.3 vs. 80.4 ± 5.8 years; p = 0.73) could be included in this investigation.
Basic characteristics of the different subgroups of the study population are shown in Table 1. Out of 141 patients, 26 (18.4%) had been admitted to hospital with recent fractures.

Table 1. Characteristics of the different subgroups of the study population
(columns: OP only, n = 22 | SP only, n = 19 | OSP, n = 20 | OP total, n = 42 | SP total, n = 39 | RG, n = 80 | Total, n = 141)

Age (years):            82.5 ± 4.5 | 81.1 ± 6.3 | 82.0 ± 6.3 | 82.3 ± 5.5 | 81.6 ± 6.3 | 79.6 ± 3.2 | 80.6 ± 5.5
Gender (female/male):   20/2 | 4/15 | 14/6 | 34/8 | 18/21 | 46/34 | 84/57
Comorbidities^a (m ± SD): 2.5 ± 1.4 | 3.2 ± 1.4 | 2.4 ± 1.6 | 2.5 ± 1.5 | 2.8 ± 1.6 | 2.7 ± 1.5 | 2.7 ± 1.5
Medications (m ± SD):   8.2 ± 3.3 | 8.2 ± 3.1 | 7.8 ± 3.9 | 8.0 ± 3.6 | 8.0 ± 3.6 | 8.6 ± 3.2 | 8.4 ± 3.3
Malnutrition^b:         12 | 12 | 17 | 29 | 29 | 37 | 78
ADL dependency^c:       8 | 11 | 15 | 23 | 26 | 32 | 66
Community dwelling:     21 | 17 | 16 | 37 | 33 | 76 | 130
Use of walking aid:     16 | 16 | 14 | 30 | 30 | 43 | 89

OP only osteoporosis no sarcopenia; SP only sarcopenia no osteoporosis; OSP sarcopenia and osteoporosis; OP total OP only + OSP; SP total SP only + OSP; RG reference group (no sarcopenia, no osteoporosis); m arithmetic mean; SD standard deviation; MNA mini nutritional assessment; ADL activities of daily living. ^a Selected comorbidities (see methods). ^b MNA <17. ^c Barthel index <70.

Prevalence data are displayed in Fig. 1. Overall prevalences of osteoporosis, sarcopenia and osteosarcopenia were 29.8%, 27.7% and 14.2%, respectively. Osteoporosis was significantly more prevalent among women (40.5% vs. 14.0%, p < 0.001), whereas no significant gender difference was observed for sarcopenia and osteosarcopenia. Women had isolated osteoporosis more frequently (p < 0.001), while men had isolated sarcopenia more frequently (p < 0.001).

Fig. 1: Prevalence of sarcopenia, osteoporosis and osteosarcopenia in the study population (OP osteoporotic; SP sarcopenic; OSP sarcopenia and osteoporosis; RG reference group). Prevalence significantly gender associated (Fisher's exact test) at *p < 0.05; **p < 0.01; ***p < 0.001.

There was a highly significant association of sarcopenia and osteoporosis in the whole study group (p < 0.001) and in female participants (p < 0.001), and a significant association in men (p = 0.02) (Fig. 2). The age and gender corrected odds ratio (Mantel-Haenszel test) for the sarcopenic group to be also osteoporotic was 8.71 (confidence interval 2.87–26.42; p < 0.001).

Fig. 2: Co-occurrence of sarcopenia and osteoporosis. a Prevalence of osteoporosis dependent on sarcopenia status (SP sarcopenia); prevalence significantly different (Fisher's exact test) at *p < 0.05; **p < 0.01; ***p < 0.001 between SP (light blue) and no SP (dark blue). b Prevalence of sarcopenia dependent on osteoporosis status (OP osteoporotic); prevalence significantly different (Fisher's exact test) at *p < 0.05; **p < 0.01; ***p < 0.001 between OP (light blue) and no OP (dark blue).
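The association tests above reduce to Fisher's exact test on a 2 × 2 cross-classification. A sketch using the subgroup sizes reported in Table 1 (20 osteosarcopenic, 19 only sarcopenic, 22 only osteoporotic, 80 reference):

```python
# Fisher's exact test of the sarcopenia x osteoporosis association, with
# cell counts taken from Table 1 (OSP, SP only, OP only, reference group).
from scipy.stats import fisher_exact

#         osteoporosis: yes  no
table = [[20, 19],     # sarcopenic (OSP, SP only)
         [22, 80]]     # non-sarcopenic (OP only, RG)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"crude OR = {odds_ratio:.2f}, p = {p_value:.2g}")  # p < 0.001, as reported
```

The crude odds ratio here (about 3.8) differs from the Mantel-Haenszel estimate quoted above (8.71) because it ignores the age and gender stratification.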
The presence of sarcopenia, osteoporosis and osteosarcopenia was associated with nutritional and functional deficits when compared to the reference group (Table 2). All three conditions showed a highly significant association with lower BMI and MNA-SF. In terms of functionality, the Barthel index was lower in the sarcopenic and osteosarcopenic but not in the osteoporotic patients. All three groups had significantly lower gait speed than the reference group. These findings proved statistically robust (p < 0.05) when checking for gender and age as possible confounders by multiple regression analysis.

Table 2. Functional and nutritional characteristics depending on OS/SP status
(columns: BMI (kg/m2) | MNA-SF | HG male (kg) | HG female (kg) | Gait speed (m/s) | Barthel index)

RG:       28.46 ± 4.15 | 11.21 ± 2.04 | 33.65 ± 9.46 | 21.40 ± 6.57 | 0.91 ± 0.35 | 73.56 ± 19.27
OP total: 23.68 ± 3.96*** | 9.69 ± 2.64** | 30.17 ± 10.49 | 16.63 ± 4.42*** | 0.67 ± 0.24*** | 66.43 ± 20.33
SP total: 23.08 ± 3.52*** | 9.33 ± 2.44*** | 28.26 ± 7.99* | 16.56 ± 4.13** | 0.62 ± 0.20*** | 61.79 ± 20.58**
OSP:      21.46 ± 2.53*** | 8.50 ± 2.53*** | 32.25 ± 11.58 | 16.75 ± 4.43(**) | 0.61 ± 0.16*** | 61.00 ± 15.86**
OP only:  25.71 ± 3.95** | 10.77 ± 2.25 | 26.00 ± 6.00 | 16.53 ± 4.40*** | 0.72 ± 0.29(*) | 71.36 ± 22.57
SP only:  24.78 ± 3.61*** | 10.21 ± 2.02 | 29.00 ± 5.37** | 15.75 ± 2.86(*) | 0.62 ± 0.23** | 62.63 ± 24.57

Values given as arithmetic mean ± standard deviation. RG reference group (no sarcopenia, no osteoporosis); OP osteoporotic; SP sarcopenic; OSP sarcopenia and osteoporosis; BMI body mass index; HG handgrip; MNA-SF mini nutritional assessment short form. Significant at *p < 0.05; **p < 0.01; ***p < 0.001 compared to the reference group (male and female parts of the reference group for handgrip, respectively). Where differences no longer reached statistical significance after correcting for age and gender, the asterisks are put into brackets.

Pathologically decreased handgrip strength occurred more frequently in all disease subgroups (only osteoporotic, all osteoporotic, osteosarcopenic, only sarcopenic and all sarcopenic) than in the reference group (p < 0.01). Absolute strength was analyzed separately for female and male participants. Women consistently showed lower grip strength associated with all kinds of muscle and bone disease (p < 0.05). This effect persisted after correction for age in all osteoporotic and all sarcopenic women but lost statistical significance for the osteosarcopenic subgroup. No such association was observed in men.
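The subgroup-versus-reference contrasts in Table 2 were unpaired t-tests. A sketch on simulated grip values whose means, SDs and group sizes mimic the pooled-men comparison reported earlier (28.05 ± 7.85 kg in the 23 pooled diseased men vs. 33.65 ± 8.08 kg in the 34 reference men); the study does not state whether a pooled-variance or Welch test was used, so the Welch variant is shown.

```python
# Sketch: unpaired (Welch) t-test for a subgroup-vs-reference contrast.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Simulated handgrip values (kg); means, SDs and group sizes mimic the
# pooled-men comparison in the text, not the study's raw data.
diseased_men = rng.normal(28.05, 7.85, 23)
reference_men = rng.normal(33.65, 8.08, 34)

t, p = ttest_ind(diseased_men, reference_men, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```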
Background: Sarcopenia and osteoporosis share an underlying pathology and reinforce each other in terms of negative outcomes. Methods: A cross-sectional analysis of geriatric inpatients from the sarcopenia in geriatric elderly (SAGE) study. Measurements included dual X‑ray absorptiometry for bone mineral density and appendicular muscle mass; gait speed and hand grip strength, the Barthel index, body mass index (BMI) and the mini nutritional assessment short form (MNA-SF). Results: Of the 148 patients recruited for SAGE, 141 (84 women, 57 men; mean age 80.6 ± 5.5 years) had sufficient data to be included in this ancillary investigation: 22/141 (15.6%) were only osteoporotic, 19/141 (13.5%) were only sarcopenic and 20/141 (14.2%) osteosarcopenic (i.e. both sarcopenia and osteoporosis). The prevalence of osteoporosis was higher in sarcopenic than in non-sarcopenic individuals (51.3% vs. 21.6%, p < 0.001). Sarcopenic, osteoporotic and osteosarcopenic subjects had a lower BMI, MNA-SF, handgrip and gait speed (p < 0.05) than the reference group (those neither osteoporotic nor sarcopenic, n = 80). The Barthel index was lower for sarcopenic and osteosarcopenic (p < 0.05) but not for osteoporotic (p = 0.07) subjects. The BMI and MNA-SF were lower in osteosarcopenia compared to sarcopenia or osteoporosis alone (p < 0.05) while there were no differences in functional criteria. Conclusions: Osteoporosis and sarcopenia are linked to nutritional deficits and reduced function in geriatric inpatients. Co-occurrence (osteosarcopenia) is common and associated with a higher degree of malnutrition than osteoporosis or sarcopenia alone.
8,701
337
[ 222, 3311, 187, 1465, 99, 112, 247, 60, 11, 183, 1954, 720 ]
13
[ "sarcopenia", "osteoporosis", "study", "analysis", "sarcopenia osteoporosis", "osteosarcopenia", "dxa", "mass", "group", "strength" ]
[ "osteoporosis sarcopenia prevalent", "prevalences osteoporosis sarcopenia", "osteoporosis sarcopenia use", "sarcopenia osteoporosis parameters", "sarcopenia depending osteoporosis" ]
null
null
[CONTENT] Osteosarcopenia | Muscle-bone-unit | Bone-muscle-continuum | Sarco-osteoporosis | Osteosarkopenie | Muskel-Knochen-Einheit | Knochen-Muskel-Kontinuum | Sarkoosteoporose [SUMMARY]
null
null
[CONTENT] Osteosarcopenia | Muscle-bone-unit | Bone-muscle-continuum | Sarco-osteoporosis | Osteosarkopenie | Muskel-Knochen-Einheit | Knochen-Muskel-Kontinuum | Sarkoosteoporose [SUMMARY]
[CONTENT] Osteosarcopenia | Muscle-bone-unit | Bone-muscle-continuum | Sarco-osteoporosis | Osteosarkopenie | Muskel-Knochen-Einheit | Knochen-Muskel-Kontinuum | Sarkoosteoporose [SUMMARY]
[CONTENT] Osteosarcopenia | Muscle-bone-unit | Bone-muscle-continuum | Sarco-osteoporosis | Osteosarkopenie | Muskel-Knochen-Einheit | Knochen-Muskel-Kontinuum | Sarkoosteoporose [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Cross-Sectional Studies | Female | Gait | Hand Strength | Humans | Male | Osteoporosis | Prevalence | Sarcopenia [SUMMARY]
null
null
[CONTENT] Aged | Aged, 80 and over | Cross-Sectional Studies | Female | Gait | Hand Strength | Humans | Male | Osteoporosis | Prevalence | Sarcopenia [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Cross-Sectional Studies | Female | Gait | Hand Strength | Humans | Male | Osteoporosis | Prevalence | Sarcopenia [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Cross-Sectional Studies | Female | Gait | Hand Strength | Humans | Male | Osteoporosis | Prevalence | Sarcopenia [SUMMARY]
[CONTENT] osteoporosis sarcopenia prevalent | prevalences osteoporosis sarcopenia | osteoporosis sarcopenia use | sarcopenia osteoporosis parameters | sarcopenia depending osteoporosis [SUMMARY]
null
null
[CONTENT] osteoporosis sarcopenia prevalent | prevalences osteoporosis sarcopenia | osteoporosis sarcopenia use | sarcopenia osteoporosis parameters | sarcopenia depending osteoporosis [SUMMARY]
[CONTENT] osteoporosis sarcopenia prevalent | prevalences osteoporosis sarcopenia | osteoporosis sarcopenia use | sarcopenia osteoporosis parameters | sarcopenia depending osteoporosis [SUMMARY]
[CONTENT] osteoporosis sarcopenia prevalent | prevalences osteoporosis sarcopenia | osteoporosis sarcopenia use | sarcopenia osteoporosis parameters | sarcopenia depending osteoporosis [SUMMARY]
[CONTENT] sarcopenia | osteoporosis | study | analysis | sarcopenia osteoporosis | osteosarcopenia | dxa | mass | group | strength [SUMMARY]
null
null
[CONTENT] sarcopenia | osteoporosis | study | analysis | sarcopenia osteoporosis | osteosarcopenia | dxa | mass | group | strength [SUMMARY]
[CONTENT] sarcopenia | osteoporosis | study | analysis | sarcopenia osteoporosis | osteosarcopenia | dxa | mass | group | strength [SUMMARY]
[CONTENT] sarcopenia | osteoporosis | study | analysis | sarcopenia osteoporosis | osteosarcopenia | dxa | mass | group | strength [SUMMARY]
[CONTENT] conditions | sarcopenia | bone | geriatric | data | patients | sarcopenia osteoporosis | order determine frequency | old people contribute variety | options [SUMMARY]
null
null
[CONTENT] co | osteoporosis sarcopenia | conditions | nutritional state isolated | associated compromised nutritional state | use dxa prove | use dxa | related | related poor | related poor function [SUMMARY]
[CONTENT] sarcopenia | osteoporosis | study | analysis | sarcopenia osteoporosis | osteosarcopenia | dxa | hand | present | measured [SUMMARY]
[CONTENT] sarcopenia | osteoporosis | study | analysis | sarcopenia osteoporosis | osteosarcopenia | dxa | hand | present | measured [SUMMARY]
[CONTENT] [SUMMARY]
null
null
[CONTENT] ||| osteosarcopenia [SUMMARY]
[CONTENT] ||| ||| Barthel | BMI | MNA-SF ||| 148 | 141 | 84 | 57 | age 80.6 | 5.5 years | 22/141 | 15.6% | 19/141 | 13.5% | 20/141 | 14.2% | osteosarcopenic ||| 51.3% | 21.6% | 0.001 ||| osteosarcopenic | BMI | MNA-SF | 0.05 | n = 80 ||| Barthel | 0.05 | 0.07 ||| BMI | MNA-SF | osteosarcopenia | 0.05 ||| ||| osteosarcopenia [SUMMARY]
[CONTENT] ||| ||| Barthel | BMI | MNA-SF ||| 148 | 141 | 84 | 57 | age 80.6 | 5.5 years | 22/141 | 15.6% | 19/141 | 13.5% | 20/141 | 14.2% | osteosarcopenic ||| 51.3% | 21.6% | 0.001 ||| osteosarcopenic | BMI | MNA-SF | 0.05 | n = 80 ||| Barthel | 0.05 | 0.07 ||| BMI | MNA-SF | osteosarcopenia | 0.05 ||| ||| osteosarcopenia [SUMMARY]
Hub genes for early diagnosis and therapy of adamantinomatous craniopharyngioma.
36123899
Adamantinomatous craniopharyngioma (ACP) is a subtype of craniopharyngioma, a neoplastic disease of the sellar region with a benign pathological phenotype but a poor prognosis. The disease has been considered the most common congenital tumor in the skull. Therefore, this article aims to identify hub genes that might serve as genetic markers of diagnosis, treatment, and prognosis of ACP.
BACKGROUND
The procedure of this research included the acquisition of public data, the identification and functional annotation of differentially expressed genes (DEGs), the construction and analysis of a protein-protein interaction network, and the mining and analysis of hub genes by the Spearman-rho test, multivariable linear regression, and receiver operating characteristic curve analysis. Quantitative real-time polymerase chain reaction was used to measure the mRNA levels of the relevant genes.
METHODS
Across the 2 datasets, a total of 703 DEGs were identified, mainly enriched in chemical synaptic transmission, cell adhesion, odontogenesis of the dentin-containing tooth, cell junction, extracellular region, extracellular space, structural molecule activity, and structural constituent of the cytoskeleton. The protein-protein interaction network was composed of 4379 edges and 589 nodes. Its most significant module contained 10 hub genes, and SYN1, SYP, and GRIA2 were significantly down-regulated in ACP.
RESULTS
In summary, we identified DEGs between ACP patients and normal samples that are likely to play an essential role in the development of ACP. These DEGs are of great value for tumor diagnosis and targeted therapy and could be explored as molecular targets for diagnosing and treating ACP patients.
CONCLUSION
[ "Computational Biology", "Craniopharyngioma", "Early Diagnosis", "Gene Expression Profiling", "Gene Regulatory Networks", "Genetic Markers", "Humans", "Pituitary Neoplasms", "RNA, Messenger" ]
9478218
1. Introduction
Craniopharyngioma is a neoplastic disease of the sellar region with a benign pathological phenotype but a poor prognosis.[1] The disease has been considered the most common congenital tumor in the skull. Its prevalence is higher in Africa and the Far East than in other regions, with the exception of Japan, and the tumor shows a bimodal age distribution, with peaks in children and in adults aged 40 to 50 years.[1] According to the WHO 2016 classification of central nervous system tumors, craniopharyngioma is divided into adamantinomatous craniopharyngioma (ACP) and papillary craniopharyngioma.[2] The former is mainly found in children, while the latter is almost exclusively seen in adults. Because craniopharyngioma lies deep in the skull, adjacent to the optic nerve, hypothalamus, basilar artery ring, and other essential structures, surgical treatment is complex, the mortality rate is high, and relapse is frequent. In recent years, bioinformatics technology has been widely used to explore potential genetic targets of disease and to find differentially expressed genes and possible pathways related to disease occurrence and development.[3] Differentially expressed genes have been identified and verified in many diseases and are potential targets for disease prediction and treatment. Bredemeier used bioinformatics methods to identify key genes in the occurrence and development of breast cancer and found that KRT19, EPCAM, CDH1, and SCGB2A2 were significantly differentially expressed,[4] suggesting that these genes should be regarded as therapeutic targets. Meanwhile, by analyzing gene expression and methylation microarray data, Feng et al[5] identified 2 lncRNAs, LOC146880 and ENST00000439577, which may promote the development and progression of lung cancer. Bioinformatics technology therefore has unique advantages in mining the genes differentially expressed between patients and healthy controls and in searching for target genes related to the occurrence and development of disease. Based on bioinformatics technology, this study combined Spearman correlation analysis and multiple linear regression analysis to screen out hub genes significantly related to ACP. The results may provide important targets for the clinical diagnosis and treatment of ACP, contribute to clinical decision-making, improve the prognosis and survival of patients with ACP, and open up more possibilities for future treatment.
2. Methods
2.1. Access to public data The Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo) is an open functional genomics database of high-throughput resources, including microarrays, gene expression data, and chips. Two expression profiling datasets (GSE94349 and GSE68015) were obtained from GEO, and probes were converted to their homologous gene symbols using the platforms' annotation information. The GSE94349 dataset contained 24 ACP and 27 normal samples, and GSE68015 contained 15 ACP and 16 normal samples. 2.2. DEGs identified by limma package The limma package was used to screen the differentially expressed genes (DEGs) between ACP and normal samples. After the differential experimental groups were set up for each GEO series, limma compared the groups to identify the DEGs. P values were adjusted by the Benjamini-Hochberg (false discovery rate) method to obtain adjusted P values (adj. P), balancing the risk of false positives against the detection of statistically significant genes. Probe sets without a homologous gene symbol, and genes matched by multiple probe sets, were removed. The thresholds for statistical significance were adj. P ≤ .01 and log fold change (FC) ≥ 4 or ≤ −4. The Venn diagram was drawn with FunRich software.
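Computationally, the DEG screen above is a filter-and-intersect step. The following is a minimal Python sketch of that step, not the authors' code: the file names, the "gene" column, and the "adj.P.Val"/"logFC" column names (limma topTable's defaults) are assumptions about how the top tables were exported.

```python
# Minimal sketch (assumed file/column names): select DEGs at
# adj.P <= .01 and |logFC| >= 4, then intersect the two datasets.
import pandas as pd

def select_degs(path: str, adj_p: float = 0.01, lfc: float = 4.0) -> set:
    """Return the gene symbols passing the significance thresholds."""
    df = pd.read_csv(path)
    mask = (df["adj.P.Val"] <= adj_p) & (df["logFC"].abs() >= lfc)
    return set(df.loc[mask, "gene"])

degs_94349 = select_degs("GSE94349_limma_toptable.csv")  # 944 DEGs reported
degs_68015 = select_degs("GSE68015_limma_toptable.csv")  # 764 DEGs reported
common = degs_94349 & degs_68015                         # 703 common DEGs reported
print(len(degs_94349), len(degs_68015), len(common))
```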
2.3. Functional annotation of DEGs by GO and KEGG analyses DAVID (https://david.ncifcrf.gov/home.jsp) (version 6.8), an online suite for integrated discovery and annotation, mainly provides batch annotation and gene ontology (GO) term enrichment analysis to highlight the GO terms most relevant to a given gene list. The Kyoto Encyclopedia of Genes and Genomes (KEGG) (https://www.kegg.jp/), one of the most commonly used biological information databases, aims to elucidate advanced functions and biological systems at the molecular level by integrating database resources derived from high-throughput experimental technologies. GO is an ontology widely used in bioinformatics and covers 3 aspects of biology: cellular component, molecular function, and biological process. The DAVID online tool was used to perform GO and KEGG analyses of the DEGs, with P < .05 as the threshold for statistical significance. 2.4. Construction and analysis of PPI network The common DEGs were imported into the Search Tool for the Retrieval of Interacting Genes (STRING, http://string-db.org) (version 10.5), which predicts and traces the protein-protein interaction (PPI) network. Analysis of the interactions between proteins may offer novel insights into the pathophysiological mechanisms underlying the development of ACP. In this study, the STRING database was used to construct the PPI network of DEGs with a minimum required interaction score of medium confidence (0.4). 2.5. The analysis and mining of hub genes Based on topology principles, the Molecular Complex Detection (MCODE) plug-in (version 1.5.1) of Cytoscape can discover tightly coupled regions; it was used to identify the most important module of the PPI network. The criteria for MCODE analysis were degree cut-off = 2, MCODE score > 5, max depth = 100, k-score = 2, and node score cut-off = 0.2. Hub genes were defined as nodes with degrees ≥ 10. The DAVID online tool was then used to perform GO and KEGG pathway analyses of the hub genes, and clustering analysis of the hub genes was performed with OmicShare (http://www.omicshare.com/tools), an open data analysis platform. A minimal sketch of the network step is given below.
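To make the network construction and the degree ≥ 10 hub criterion concrete, here is a minimal Python sketch under stated assumptions; it is not the authors' pipeline. A STRING edge export is assumed as a TSV with "protein1", "protein2", and "combined_score" columns; STRING reports scores on a 0 to 1000 scale, so medium confidence 0.4 corresponds to 400.

```python
# Minimal sketch: build the PPI graph from an assumed STRING edge export,
# keep medium-confidence edges, and select hubs by node degree.
import pandas as pd
import networkx as nx

edges = pd.read_csv("string_interactions.tsv", sep="\t")
edges = edges[edges["combined_score"] >= 400]  # medium confidence (0.4)

g = nx.Graph()
g.add_edges_from(edges[["protein1", "protein2"]].itertuples(index=False))

hubs = sorted(node for node, degree in g.degree() if degree >= 10)
print(f"{g.number_of_nodes()} nodes, {g.number_of_edges()} edges, "
      f"{len(hubs)} hub genes")
```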
The Spearman-rho test was used for correlation analysis between ACP and the expression of the relevant genes. Any gene reaching a liberal statistical threshold of P < .2 was then entered into a multivariable linear regression model to identify independent predictive genes of ACP. Finally, receiver operator characteristic (ROC) curve analysis was performed to determine the ability of the hub genes to predict ACP. All statistical analyses were conducted using SPSS software (version 21.0; IBM, Armonk, NY), and P < .05 was considered statistically significant.
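As an illustration of this statistical screen (the study used SPSS; the sketch below is an analogous computation in Python, not the authors' code), assume `expr` is a samples-by-genes expression DataFrame and `is_acp` a 0/1 label per sample; both names are hypothetical. Because the hub genes are down-regulated in ACP, the sign of expression is flipped so that higher scores predict ACP, and the optimal diagnostic threshold is taken at the maximum Youden index.

```python
# Minimal sketch: Spearman correlation with ACP status, ROC AUC, and an
# optimal diagnostic threshold for one gene. `expr`/`is_acp` are assumed.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score, roc_curve

def screen_gene(expr: pd.DataFrame, is_acp: np.ndarray, gene: str):
    x = expr[gene].to_numpy()
    rho, p = spearmanr(x, is_acp)          # correlation with ACP status
    score = -x                             # down-regulated genes: lower
                                           # expression means higher ACP risk
    auc = roc_auc_score(is_acp, score)
    fpr, tpr, thresholds = roc_curve(is_acp, score)
    youden = tpr - fpr                     # Youden index per cut-point
    cutoff = -thresholds[youden.argmax()]  # undo the sign flip
    return rho, p, auc, cutoff
```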
2.6. Quantitative real-time polymerase chain reaction (RT-qPCR) assay A total of 14 participants were recruited, comprising 7 control individuals and 7 ACP patients. After surgery, 7 ACP tumor samples were obtained from the ACP patients and 7 control brain samples from the control individuals. The research conformed to the Declaration of Helsinki and was approved by the Human Ethics and Research Ethics Committees of the Zhejiang Cancer Hospital, and informed consent was obtained from all participants. Total RNA was extracted from the 7 ACP tumors and 7 control brain samples with the RNAiso Plus (Trizol) kit (Thermo Fisher, MA, United States) and reverse transcribed to cDNA. RT-qPCR was performed on a Light Cycler® 4800 System (Roche Diagnostic Products Co., Basel, Switzerland) with specific GRIA2, SYN1, and SYP primers; Table 1 presents the primer sequences used in the experiments. The RQ value (2^−ΔΔCt, where Ct is the threshold cycle) of each sample was calculated and is presented as the fold change in gene expression relative to the control group, with GAPDH as the endogenous control. Primers and their sequences for polymerase chain reaction analysis.
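For the relative quantification, a short worked example of the 2^−ΔΔCt calculation may help; the Ct values below are illustrative only, not the study's measurements.

```python
# Minimal sketch of the 2^-ddCt relative-quantification formula:
# normalize the target gene to GAPDH, subtract the control group's
# normalized value, and exponentiate.
def fold_change(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative expression (RQ = 2^-ddCt) versus the control group."""
    d_ct_sample = ct_target - ct_gapdh
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative values: a ~2-cycle rise in normalized Ct in tumor tissue
# corresponds to ~4-fold lower expression (RQ = 0.25).
print(fold_change(ct_target=26.0, ct_gapdh=18.0,
                  ct_target_ctrl=24.0, ct_gapdh_ctrl=18.0))
```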
3.7. Results of RT-qPCR analysis
Consistent with the analyses above, GRIA2, SYN1, and SYP were markedly down-regulated in ACP tumor samples. As presented in Figure 6, the relative expression levels of GRIA2, SYN1, and SYP were significantly lower in ACP samples than in controls. These results suggest that GRIA2, SYN1, and SYP might serve as biomarkers for ACP. Relative expression of GRIA2, SYN1, and SYP by RT-qPCR analysis. *P < .05, compared with controls. RT-qPCR = quantitative real-time polymerase chain reaction.
null
null
[ "2.1. Access to public data", "2.2. DEGs identified by limma package", "2.3. Functional annotation of DEGs by GO and KEGG analyses", "2.4. Construction and analysis of PPI network", "2.5. The analysis and mining of hub genes", "2.6. Quantitative real-time polymerase chain reaction (RT-qPCR) assay", "3. Results", "3.1. DEGs identified between standard and ACP samples", "3.2. Functional annotation of DEGs by GO and KEGG analyses", "3.3. PPI and module networks construction and hub gene selection", "3.4. Hub gene analysis", "3.5. Correlation between ACP and hub genes expression", "3.6. The hub genes could predict ACP sensitively and significantly by the ROC curve", "Author contributions" ]
[ "Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo) is an open functional genomics database of high-throughput resources, including microarrays, gene expression data, and chips. Two expression profiling datasets (GSE94349 and GSE68015) were obtained from GEO. The probes would be transformed into the homologous gene symbol using the platform’s annotation information. The GSE94349 dataset contained 24 ACP and 27 standard samples, and GSE68015 contained 15 ACP and 16 normal samples.", "The limma package screened the differentially expressed genes (DEGs) between ACP and normal samples. After setting up the differential experimental groups for 1 GEO series, the limma package could execute a command to compare the differential classifications to identify the DEGs. According to the method of Benjamini and Hochberg (false discovery rate), the tool could apply adjustment to the P values to obtain the adjusted P values (adj. P) and maintain 1 balance between the possibility of false positives and detection of statistically significant genes. If 1 probe set does not have the homologous gene, or if 1 gene has numerous probe sets, the data will be removed. The rule of statistical significance is that adj. P value ≤ .01 and log (Fold change, FC) ≥ 4 or ≤ −4. The Venn diagram was delineated by FunRich software.", "DAVID (https://david.ncifcrf.gov/home.jsp) (version 6.8), 1 online analysis tool suite with the function of Integrated Discovery and Annotation, mainly provides typical batch annotation and gene-gene ontology (GO) term enrichment analysis to highlight the most relevant GO terms associated with a given gene list. Kyoto Encyclopedia of Genes and Genomes (KEGG) (https://www.kegg.jp/), 1 of the world’s most commonly used biological information databases, aims to understand advanced functions and biological systems. From the molecular level, KEGG integrates many practical program database resources from high-throughput experimental technologies. GO is an ontology widely used in bioinformatics, which covers 3 aspects of biology, including cellular component, molecular function, and biological process. The DAVID online tool was implemented to perform GO and KEGG analyses of DEGs. The rule of statistical significance is that P < .05.", "After importing the common DEGs to the Search Tool for the Retrieval of Interacting Genes (http://string-db.org) (version 10.5), the online tool could predict and trace the protein-protein interaction (PPI) network. The analysis of interactions between various proteins might put forward some novel ideas into the pathophysiological mechanisms of the development of ACP. In this research, the Search Tool for the Retrieval of Interacting Genes database was used to construct the PPI network of DEGs, and the minimum required interaction score is medium confidence > 0.4.", "Based on the topology principles, the Molecular Complex Detection (MCODE) (version 1.5.1), a plug-in of Cytoscape, could discover the tightly coupled region. MCODE identified the most important module of the PPI network map. The criteria of MCODE analysis require that degree cut-off = 2, MCODE scores > 5, Max depth = 100, k-score = 2, and node score cut-off = 0.2. The hub genes were excavated when the degrees were set (degrees ≥ 10). Then, DAVID online tool was used to analyze the GO and KEGG pathway analyses for the hub genes. 
The clustering analysis of hub genes was performed using OmicShare (http://www.omicshare.com/tools), an open data analysis platform.\nThe Spearman-rho test was used for correlation analysis between ACP and the expression of the relevant genes. Any gene reaching a liberal statistical threshold of P < .2 was then entered into a multivariable linear regression model to identify independent predictive genes of ACP. Finally, receiver operator characteristic (ROC) curve analysis was performed to determine the ability of the hub genes to predict ACP. All statistical analyses were conducted using SPSS software (version 21.0; IBM, Armonk, NY), and P < .05 was considered statistically significant.", "A total of 14 participants were recruited, comprising 7 control individuals and 7 ACP patients. After surgery, 7 ACP tumor samples were obtained from the ACP patients and 7 control brain samples from the control individuals. The research conformed to the Declaration of Helsinki and was approved by the Human Ethics and Research Ethics Committees of the Zhejiang Cancer Hospital, and informed consent was obtained from all participants.\nTotal RNA was extracted from the 7 ACP tumors and 7 control brain samples with the RNAiso Plus (Trizol) kit (Thermo Fisher, MA, United States) and reverse transcribed to cDNA. RT-qPCR was performed on a Light Cycler® 4800 System (Roche Diagnostic Products Co., Basel, Switzerland) with specific GRIA2, SYN1, and SYP primers; Table 1 presents the primer sequences used in the experiments. The RQ value (2^−ΔΔCt, where Ct is the threshold cycle) of each sample was calculated and is presented as the fold change in gene expression relative to the control group, with GAPDH as the endogenous control.\nPrimers and their sequences for polymerase chain reaction analysis.", "3.1. DEGs identified between standard and ACP samples After analysis of the datasets (GSE94349 and GSE68015) with the limma package, the differences between ACP and normal samples are presented in the volcano plots (Fig. 1A and B). After standardization of these results, 944 DEGs were identified in GSE94349 and 764 in GSE68015. The Venn diagram shows that 703 genes were shared by the 2 datasets (Fig. 1C).\nThe identification of DEGs by limma package and Venn diagram. (A) The volcano plot presents the difference between ACP and normal samples after analysis of the dataset GSE94349 with the limma package. (B) The volcano plot presents the difference between ACP and normal samples after analysis of the dataset GSE68015 with the limma package. (C) The Venn diagram shows that 703 genes were contained in the GSE94349 and GSE68015 datasets simultaneously. ACP = adamantinomatous craniopharyngioma, DEGs = differentially expressed genes.\n3.2. Functional annotation of DEGs by GO and KEGG analyses The results of GO analysis showed that the biological process, cell component, and molecular function variations of the DEGs were mainly enriched in chemical synaptic transmission, cell adhesion, epidermis development, extracellular matrix organization, odontogenesis of the dentin-containing tooth, cell junction, extracellular region, plasma membrane, extracellular space, axon, structural molecule activity, calcium ion binding, gamma-aminobutyric acid-A (GABA-A) receptor activity, and structural constituent of the cytoskeleton (Table 2). KEGG pathway analysis showed that the DEGs were primarily enriched in retrograde endocannabinoid signaling, nicotine addiction, extracellular matrix-receptor interaction, morphine addiction, and GABAergic synapse (Table 2).\nGO and KEGG pathway enrichment analysis of DEGs in ACP samples.\nACP = adamantinomatous craniopharyngioma, DEGs = differentially expressed genes, ECM = extracellular matrix, FDR = false discovery rate, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.\n3.3. PPI and module networks construction and hub gene selection The PPI network of DEGs was constructed (Fig. 2), and the most powerful module was obtained using Cytoscape (Fig. 3). A total of 10 genes (SNAP25, GRIA2, KCNJ9, SYN1, SLC32A1, SNCB, GRM5, GABRG2, SYP, and CDH1) were identified as hub genes with degrees ≥ 10 (Fig. 4A).\nThe PPI network presents the intricate relationships between DEGs. DEGs = differentially expressed genes, PPI = protein-protein interaction.\nThe significant module network was identified from the PPI network. PPI = protein-protein interaction.\nThe mining and analysis of hub genes. (A) Ten genes (SNAP25, GRIA2, KCNJ9, SYN1, SLC32A1, SNCB, GRM5, GABRG2, SYP, and CDH1) were identified as hub genes with degrees ≥ 10. (B) Hierarchical clustering showed that the hub genes could differentiate the ACP samples from the normal ones in the dataset GSE94349. (C) Hierarchical clustering showed that the hub genes could differentiate the ACP samples from the normal ones in the dataset GSE68015. Up-regulated genes are marked in red; down-regulated genes in green. ACP = adamantinomatous craniopharyngioma.\n3.4. Hub gene analysis The functional analyses of the hub genes were performed using DAVID. The hub genes were mainly enriched in chemical synaptic transmission, neurotransmitter secretion, regulation of long-term neuronal synaptic plasticity, locomotory behavior, cell junction, presynaptic active zone, neuron projection, synaptic vesicle, calcium-dependent protein binding, retrograde endocannabinoid signaling, nicotine addiction, morphine addiction, and neuroactive ligand-receptor interaction (Table 3). The names, abbreviations, and functions of these hub genes are shown in Table 4. Hierarchical clustering showed that the hub genes could differentiate the ACP samples from the normal ones (Fig. 4B and C). These hub genes showed the highest node scores in the PPI network, suggesting that they might play essential roles in the occurrence or progression of ACP.\nGO and KEGG pathway enrichment analysis of hub genes.\nFDR = false discovery rate, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.\nSummaries for the function of 10 hub genes.\n3.5. Correlation between ACP and hub genes expression To confirm that the hub genes impacted ACP, we performed a further analysis of ACP and hub gene expression. The Spearman correlation coefficient was used for the correlation analysis, and SNAP25 (ρ = −0.702, P < .001), GRIA2 (ρ = −0.673, P < .001), KCNJ9 (ρ = −0.706, P < .001), SYN1 (ρ = −0.747, P < .001), SLC32A1 (ρ = −0.813, P < .001), SNCB (ρ = −0.848, P < .001), GRM5 (ρ = −0.680, P < .001), GABRG2 (ρ = −0.830, P < .001), SYP (ρ = −0.852, P < .001), and CDH1 (ρ = −0.865, P < .001) were significantly correlated with ACP (Table 5). In the multivariable linear regression model, holding all other variables fixed, the natural logarithm of ACP remained associated with GRIA2 (β = 0.044, P = .041), SYN1 (β = 0.083, P = .017), SYP (β = −0.137, P < .001), and CDH1 (β = 0.154, P < .001) (Table 5).\nThe correlation and linear regression analysis between ACP and relevant gene expression.\nACP = adamantinomatous craniopharyngioma, β = parameter estimate, ρ = Spearman correlation coefficient.\nSignificant variables: P < .05.\nSpearman rank correlation coefficient between ACP and relevant characteristics.\nMultiple linear regression analysis.\n3.6. The hub genes could predict ACP sensitively and significantly by the ROC curve We constructed receiver operator characteristic curves to identify accurate thresholds for the hub genes predicting ACP. SYP was most strongly associated with ACP (area under the curve, 0.992; 95% confidence interval, 0.980–1.000; P < .001), and its optimal diagnostic threshold for ACP was 5.672 (Table 6; Fig. 5A–K).\nReceiver operator characteristic curve analysis of hub gene expression for ACP.\nACP = adamantinomatous craniopharyngioma, AUC = area under curve, CI = confidence interval, max = the maximum of AUC, ODT = optimal diagnostic threshold.\nSignificant variables.\nThe receiver operator characteristic curves indicate that the hub genes could predict ACP sensitively and specifically. (A) SNAP25, (B) GRIA2, (C) KCNJ9, (D) SYN1, (E) SLC32A1, (F) SNCB, (G) GRM5, (H) GABRG2, (I) SYP, (J) CDH1, (K) the merger of all hub genes. ACP = adamantinomatous craniopharyngioma.\n3.7. Results of RT-qPCR analysis Consistent with the analyses above, GRIA2, SYN1, and SYP were markedly down-regulated in ACP tumor samples. As presented in Figure 6, the relative expression levels of GRIA2, SYN1, and SYP were significantly lower in ACP samples than in controls. These results suggest that GRIA2, SYN1, and SYP might serve as biomarkers for ACP.\nRelative expression of GRIA2, SYN1, and SYP by RT-qPCR analysis. *P < .05, compared with controls. RT-qPCR = quantitative real-time polymerase chain reaction.", "After analysis of the datasets (GSE94349 and GSE68015) with the limma package, the differences between ACP and normal samples are presented in the volcano plots (Fig. 1A and B). After standardization of these results, 944 DEGs were identified in GSE94349 and 764 in GSE68015. The Venn diagram shows that 703 genes were shared by the 2 datasets (Fig. 1C).\nThe identification of DEGs by limma package and Venn diagram. (A) The volcano plot presents the difference between ACP and normal samples after analysis of the dataset GSE94349 with the limma package. (B) The volcano plot presents the difference between ACP and normal samples after analysis of the dataset GSE68015 with the limma package. (C) The Venn diagram shows that 703 genes were contained in the GSE94349 and GSE68015 datasets simultaneously. ACP = adamantinomatous craniopharyngioma, DEGs = differentially expressed genes.", "The results of GO analysis showed that the biological process, cell component, and molecular function variations of the DEGs were mainly enriched in chemical synaptic transmission, cell adhesion, epidermis development, extracellular matrix organization, odontogenesis of the dentin-containing tooth, cell junction, extracellular region, plasma membrane, extracellular space, axon, structural molecule activity, calcium ion binding, gamma-aminobutyric acid-A (GABA-A) receptor activity, and structural constituent of the cytoskeleton (Table 2).
KEGG pathway analysis showed that the DEGs were primarily enriched in retrograde endocannabinoid signaling, nicotine addiction, extracellular matrix-receptor interaction, morphine addiction, and GABAergic synapse (Table 2).\nGO and KEGG pathway enrichment analysis of DEGs in ACP samples.\nACP = adamantinomatous craniopharyngioma, DEGs = differentially expressed genes, ECM = extracellular matrix, FDR = false discovery rate, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.", "The PPI network of DEGs was constructed (Fig. 2), and the most powerful module was obtained using Cytoscape (Fig. 3). A total of 10 genes (SNAP25, GRIA2, KCNJ9, SYN1, SLC32A1, SNCB, GRM5, GABRG2, SYP, and CDH1) were identified as hub genes with degrees ≥ 10 (Fig. 4A).\nThe PPI network presents the intricate relationships between DEGs. DEGs = differentially expressed genes, PPI = protein-protein interaction.\nThe significant module network was identified from the PPI network. PPI = protein-protein interaction.\nThe mining and analysis of hub genes. (A) Ten genes (SNAP25, GRIA2, KCNJ9, SYN1, SLC32A1, SNCB, GRM5, GABRG2, SYP, and CDH1) were identified as hub genes with degrees ≥ 10. (B) Hierarchical clustering showed that the hub genes could differentiate the ACP samples from the normal ones in the dataset GSE94349. (C) Hierarchical clustering showed that the hub genes could differentiate the ACP samples from the normal ones in the dataset GSE68015. Up-regulated genes are marked in red; down-regulated genes in green. ACP = adamantinomatous craniopharyngioma.", "The functional analyses of the hub genes were performed using DAVID. The hub genes were mainly enriched in chemical synaptic transmission, neurotransmitter secretion, regulation of long-term neuronal synaptic plasticity, locomotory behavior, cell junction, presynaptic active zone, neuron projection, synaptic vesicle, calcium-dependent protein binding, retrograde endocannabinoid signaling, nicotine addiction, morphine addiction, and neuroactive ligand-receptor interaction (Table 3). The names, abbreviations, and functions of these hub genes are shown in Table 4. Hierarchical clustering showed that the hub genes could differentiate the ACP samples from the normal ones (Fig. 4B and C). These hub genes showed the highest node scores in the PPI network, suggesting that they might play essential roles in the occurrence or progression of ACP.\nGO and KEGG pathway enrichment analysis of hub genes.\nFDR = false discovery rate, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.\nSummaries for the function of 10 hub genes.", "To confirm that the hub genes impacted ACP, we performed a further analysis of ACP and hub gene expression. The Spearman correlation coefficient was used for the correlation analysis, and SNAP25 (ρ = −0.702, P < .001), GRIA2 (ρ = −0.673, P < .001), KCNJ9 (ρ = −0.706, P < .001), SYN1 (ρ = −0.747, P < .001), SLC32A1 (ρ = −0.813, P < .001), SNCB (ρ = −0.848, P < .001), GRM5 (ρ = −0.680, P < .001), GABRG2 (ρ = −0.830, P < .001), SYP (ρ = −0.852, P < .001), and CDH1 (ρ = −0.865, P < .001) were significantly correlated with ACP (Table 5).
In the multivariable linear regression model, holding all other variables fixed, the natural logarithm of ACP remained associated with GRIA2 (β = 0.044, P = .041), SYN1 (β = 0.083, P = .017), SYP (β = −0.137, P < .001), and CDH1 (β = 0.154, P < .001) (Table 5).\nThe correlation and linear regression analysis between ACP and relevant gene expression.\nACP = adamantinomatous craniopharyngioma, β = parameter estimate, ρ = Spearman correlation coefficient.\nSignificant variables: P < .05.\nSpearman rank correlation coefficient between ACP and relevant characteristics.\nMultiple linear regression analysis.", "We constructed receiver operator characteristic curves to identify accurate thresholds for the hub genes predicting ACP. SYP was most strongly associated with ACP (area under the curve, 0.992; 95% confidence interval, 0.980–1.000; P < .001), and its optimal diagnostic threshold for ACP was 5.672 (Table 6; Fig. 5A–K).\nReceiver operator characteristic curve analysis of hub gene expression for ACP.\nACP = adamantinomatous craniopharyngioma, AUC = area under curve, CI = confidence interval, max = the maximum of AUC, ODT = optimal diagnostic threshold.\nSignificant variables.\nThe receiver operator characteristic curves indicate that the hub genes could predict ACP sensitively and specifically. (A) SNAP25, (B) GRIA2, (C) KCNJ9, (D) SYN1, (E) SLC32A1, (F) SNCB, (G) GRM5, (H) GABRG2, (I) SYP, (J) CDH1, (K) the merger of all hub genes. ACP = adamantinomatous craniopharyngioma.", "Y-FZ and S-YZ performed the experiments and were major contributors to writing and submitting the manuscript. BW and C-XS made substantial contributions to the research conception and designed the draft of the research process. Y-FZ and LX were involved in critically revising the manuscript for important intellectual content. BW, L-WL, and KJ analyzed the gene data regarding craniopharyngioma. All authors read and approved the final manuscript.\nConceptualization: Liang Xia, Cai-Xing Sun, Bin Wu.\nFormal analysis: Li-Weng Li, Kai Jing.\nInvestigation: Yang-Fan Zou, Shu-Yuan Zhang, Li-Weng Li.\nProject administration: Li-Weng Li, Cai-Xing Sun.\nResources: Bin Wu.\nSoftware: Shu-Yuan Zhang.\nSupervision: Kai Jing.\nValidation: Shu-Yuan Zhang, Kai Jing.\nVisualization: Kai Jing.\nWriting – original draft: Yang-Fan Zou, Shu-Yuan Zhang, Liang Xia, Bin Wu.\nWriting – review & editing: Yang-Fan Zou, Liang Xia, Bin Wu." ]
[ null, null, null, null, null, null, "results", null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Access to public data", "2.2. DEGs identified by limma package", "2.3. Functional annotation of DEGs by GO and KEGG analyses", "2.4. Construction and analysis of PPI network", "2.5. The analysis and mining of hub genes", "2.6. Quantitative real-time polymerase chain reaction (RT-qPCR) assay", "3. Results", "3.1. DEGs identified between standard and ACP samples", "3.2. Functional annotation of DEGs by GO and KEGG analyses", "3.3. PPI and module networks construction and hub gene selection", "3.4. Hub gene analysis", "3.5. Correlation between ACP and hub genes expression", "3.6. The hub genes could predict ACP sensitively and significantly by the ROC curve", "3.7. Results of RT-qPCR analysis", "4. Discussion", "Author contributions" ]
[ "Craniopharyngioma is a neoplastic disease with a benign pathological phenotype but a poor prognosis in the sellar region.[1] The disease has been considered the most common congenital tumor in the skull. The prevalence of the disease is higher in Africa and the far east than in other regions except for Japan, and the tumor presents a bimodal distribution in age, namely 2 peaks in children and adults aged 40 to 50 years.[1] According to the central nervous system classification of craniopharyngioma published by WHO in 2016, craniopharyngioma is classified into adamantinomatous craniopharyngioma (ACP) and papilloma craniopharyngioma.[2] The former is mainly found in children, while the latter is almost only seen in adults. Because craniopharyngioma locates deep and adjacent to the optic nerve, hypothalamus, basilar artery ring, and other essential structures, surgical treatment is complex, the mortality rate is high, and it is easy to relapse.\nIn recent years, bioinformatics technology has been widely used to explore the potential genetic targets of diseases and help us find the differentially expressed genes and possible pathways related to the occurrence and development of diseases.[3] Differentially expressed genes have been found and verified in many diseases and are potential targets for disease prediction and treatment. Bredemeier used the biological information method to identify the critical genes in the occurrence and development of breast cancer, and the results showed that KRT19, EPCAM, CDH1, and SCGB2A2 had significant expression differences.[4] Therefore, it is suggested that this gene should be regarded as a therapeutic target. Meanwhile, Feng et al[5] identified 2 lncRNAs, LOC146880 and ENST00000439577, which may promote the development and progression of lung cancer by analyzing gene expression and methylation microarray data. Therefore, bioinformatics technology has unique advantages in mining the differentially expressed genes between patients with the disease and regular patients and searching for the targeted genes related to the occurrence and development of disease.\nBased on bioinformatics technology, this study combined spearman correlation analysis and multiple linear regression analysis to screen out the central genes significantly related to ACP. The results of this study may provide important targets for the clinical diagnosis and treatment of ACP and contribute to the clinical treatment and decision-making of ACP. It provides better help for the prognosis and survival of patients with ACP and more possibilities for future treatment of ACP.", "2.1. Access to public data Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo) is an open functional genomics database of high-throughput resources, including microarrays, gene expression data, and chips. Two expression profiling datasets (GSE94349 and GSE68015) were obtained from GEO. The probes would be transformed into the homologous gene symbol using the platform’s annotation information. The GSE94349 dataset contained 24 ACP and 27 standard samples, and GSE68015 contained 15 ACP and 16 normal samples.\nGene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo) is an open functional genomics database of high-throughput resources, including microarrays, gene expression data, and chips. Two expression profiling datasets (GSE94349 and GSE68015) were obtained from GEO. The probes would be transformed into the homologous gene symbol using the platform’s annotation information. 
The GSE94349 dataset contained 24 ACP and 27 standard samples, and GSE68015 contained 15 ACP and 16 normal samples.\n2.2. DEGs identified by limma package The limma package screened the differentially expressed genes (DEGs) between ACP and normal samples. After setting up the differential experimental groups for 1 GEO series, the limma package could execute a command to compare the differential classifications to identify the DEGs. According to the method of Benjamini and Hochberg (false discovery rate), the tool could apply adjustment to the P values to obtain the adjusted P values (adj. P) and maintain 1 balance between the possibility of false positives and detection of statistically significant genes. If 1 probe set does not have the homologous gene, or if 1 gene has numerous probe sets, the data will be removed. The rule of statistical significance is that adj. P value ≤ .01 and log (Fold change, FC) ≥ 4 or ≤ −4. The Venn diagram was delineated by FunRich software.\nThe limma package screened the differentially expressed genes (DEGs) between ACP and normal samples. After setting up the differential experimental groups for 1 GEO series, the limma package could execute a command to compare the differential classifications to identify the DEGs. According to the method of Benjamini and Hochberg (false discovery rate), the tool could apply adjustment to the P values to obtain the adjusted P values (adj. P) and maintain 1 balance between the possibility of false positives and detection of statistically significant genes. If 1 probe set does not have the homologous gene, or if 1 gene has numerous probe sets, the data will be removed. The rule of statistical significance is that adj. P value ≤ .01 and log (Fold change, FC) ≥ 4 or ≤ −4. The Venn diagram was delineated by FunRich software.\n2.3. Functional annotation of DEGs by GO and KEGG analyses DAVID (https://david.ncifcrf.gov/home.jsp) (version 6.8), 1 online analysis tool suite with the function of Integrated Discovery and Annotation, mainly provides typical batch annotation and gene-gene ontology (GO) term enrichment analysis to highlight the most relevant GO terms associated with a given gene list. Kyoto Encyclopedia of Genes and Genomes (KEGG) (https://www.kegg.jp/), 1 of the world’s most commonly used biological information databases, aims to understand advanced functions and biological systems. From the molecular level, KEGG integrates many practical program database resources from high-throughput experimental technologies. GO is an ontology widely used in bioinformatics, which covers 3 aspects of biology, including cellular component, molecular function, and biological process. The DAVID online tool was implemented to perform GO and KEGG analyses of DEGs. The rule of statistical significance is that P < .05.\nDAVID (https://david.ncifcrf.gov/home.jsp) (version 6.8), 1 online analysis tool suite with the function of Integrated Discovery and Annotation, mainly provides typical batch annotation and gene-gene ontology (GO) term enrichment analysis to highlight the most relevant GO terms associated with a given gene list. Kyoto Encyclopedia of Genes and Genomes (KEGG) (https://www.kegg.jp/), 1 of the world’s most commonly used biological information databases, aims to understand advanced functions and biological systems. From the molecular level, KEGG integrates many practical program database resources from high-throughput experimental technologies. 
GO is an ontology widely used in bioinformatics, which covers 3 aspects of biology, including cellular component, molecular function, and biological process. The DAVID online tool was implemented to perform GO and KEGG analyses of DEGs. The rule of statistical significance is that P < .05.\n2.4. Construction and analysis of PPI network After importing the common DEGs to the Search Tool for the Retrieval of Interacting Genes (http://string-db.org) (version 10.5), the online tool could predict and trace the protein-protein interaction (PPI) network. The analysis of interactions between various proteins might put forward some novel ideas into the pathophysiological mechanisms of the development of ACP. In this research, the Search Tool for the Retrieval of Interacting Genes database was used to construct the PPI network of DEGs, and the minimum required interaction score is medium confidence > 0.4.\nAfter importing the common DEGs to the Search Tool for the Retrieval of Interacting Genes (http://string-db.org) (version 10.5), the online tool could predict and trace the protein-protein interaction (PPI) network. The analysis of interactions between various proteins might put forward some novel ideas into the pathophysiological mechanisms of the development of ACP. In this research, the Search Tool for the Retrieval of Interacting Genes database was used to construct the PPI network of DEGs, and the minimum required interaction score is medium confidence > 0.4.\n2.5. The analysis and mining of hub genes Based on the topology principles, the Molecular Complex Detection (MCODE) (version 1.5.1), a plug-in of Cytoscape, could discover the tightly coupled region. MCODE identified the most important module of the PPI network map. The criteria of MCODE analysis require that degree cut-off = 2, MCODE scores > 5, Max depth = 100, k-score = 2, and node score cut-off = 0.2. The hub genes were excavated when the degrees were set (degrees ≥ 10). Then, DAVID online tool was used to analyze the GO and KEGG pathway analyses for the hub genes. The clustering analysis of hub genes was performed using OmicShare (http://www.omicshare.com/tools), an available data analysis platform.\nThe Spearman-rho test was used for correlation analysis between ACP and relevant gene expression. Any test results reaching a liberal statistical threshold of P < .2 for each comparison were then entered into a multivariable linear regression model to identify independent predictive genes of ACP. Finally, we performed receiver operator characteristic (ROC) curve analysis to determine the ability of the hub genes to predict ACP. All statistical analyses were conducted using SPSS software (version 21.0; IBM, Armonk, NY). A P value of <0.05 was considered statistically significant.\nBased on the topology principles, the Molecular Complex Detection (MCODE) (version 1.5.1), a plug-in of Cytoscape, could discover the tightly coupled region. MCODE identified the most important module of the PPI network map. The criteria of MCODE analysis require that degree cut-off = 2, MCODE scores > 5, Max depth = 100, k-score = 2, and node score cut-off = 0.2. The hub genes were excavated when the degrees were set (degrees ≥ 10). Then, DAVID online tool was used to analyze the GO and KEGG pathway analyses for the hub genes. The clustering analysis of hub genes was performed using OmicShare (http://www.omicshare.com/tools), an available data analysis platform.\nThe Spearman-rho test was used for correlation analysis between ACP and relevant gene expression. 
Any test results reaching a liberal statistical threshold of P < .2 for each comparison were then entered into a multivariable linear regression model to identify independent predictive genes of ACP. Finally, we performed receiver operator characteristic (ROC) curve analysis to determine the ability of the hub genes to predict ACP. All statistical analyses were conducted using SPSS software (version 21.0; IBM, Armonk, NY). A P value of <0.05 was considered statistically significant.\n2.6. Quantitative real-time polymerase chain reaction (RT-qPCR) assay A total of 14 participants were recruited, including 7 control individuals and 7 ACP patients. After surgery, 7 ACP tumor samples from ACP patients and 7 control brain samples from control individuals were obtained. The research conformed to the Declaration of Helsinki and was authorized by the Human Ethics and Research Ethics Committees of the Zhejiang Cancer Hospital. Informed consent was obtained from all participants.\nTotal RNA was extracted from 7 ACP tumors and 7 control brain samples by the RNAiso Plus (Trizol) kit (Thermofisher, Massachusetts, United States, MA) and reverse transcribed to cDNA. RT-qPCR was performed using a Light Cycler® 4800 System (Roche Diagnostic Products Co., Basel, Switzerland) with specific GRIA2, SYN1, and SYP primers. Table 1 presents the primer sequences used in the experiments. The RQ values (2−ΔΔCt, where Ct is the threshold cycle) of each sample were calculated and are presented as fold changes in gene expression relative to the control group. GAPDH was used as an endogenous control.\nPrimers and their sequences for polymerase chain reaction analysis.\nA total of 14 participants were recruited, including 7 control individuals and 7 ACP patients. After surgery, 7 ACP tumor samples from ACP patients and 7 control brain samples from control individuals were obtained. The research conformed to the Declaration of Helsinki and was authorized by the Human Ethics and Research Ethics Committees of the Zhejiang Cancer Hospital. Informed consent was obtained from all participants.\nTotal RNA was extracted from 7 ACP tumors and 7 control brain samples by the RNAiso Plus (Trizol) kit (Thermofisher, Massachusetts, United States, MA) and reverse transcribed to cDNA. RT-qPCR was performed using a Light Cycler® 4800 System (Roche Diagnostic Products Co., Basel, Switzerland) with specific GRIA2, SYN1, and SYP primers. Table 1 presents the primer sequences used in the experiments. The RQ values (2−ΔΔCt, where Ct is the threshold cycle) of each sample were calculated and are presented as fold changes in gene expression relative to the control group. GAPDH was used as an endogenous control.\nPrimers and their sequences for polymerase chain reaction analysis.", "Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo) is an open functional genomics database of high-throughput resources, including microarrays, gene expression data, and chips. Two expression profiling datasets (GSE94349 and GSE68015) were obtained from GEO. The probes would be transformed into the homologous gene symbol using the platform’s annotation information. The GSE94349 dataset contained 24 ACP and 27 standard samples, and GSE68015 contained 15 ACP and 16 normal samples.", "The limma package screened the differentially expressed genes (DEGs) between ACP and normal samples. 
After the differential experimental groups were set up for each GEO series, limma compared the groups to identify the DEGs. P values were adjusted by the Benjamini-Hochberg (false discovery rate) method to obtain adjusted P values (adj. P), balancing the risk of false positives against the detection of statistically significant genes. Probe sets without a homologous gene symbol, and genes matched by multiple probe sets, were removed. The thresholds for statistical significance were adj. P ≤ .01 and log fold change (FC) ≥ 4 or ≤ −4. The Venn diagram was drawn with FunRich software.", "DAVID (https://david.ncifcrf.gov/home.jsp) (version 6.8), an online suite for integrated discovery and annotation, mainly provides batch annotation and gene ontology (GO) term enrichment analysis to highlight the GO terms most relevant to a given gene list. The Kyoto Encyclopedia of Genes and Genomes (KEGG) (https://www.kegg.jp/), one of the most commonly used biological information databases, aims to elucidate advanced functions and biological systems at the molecular level by integrating database resources derived from high-throughput experimental technologies. GO is an ontology widely used in bioinformatics and covers 3 aspects of biology: cellular component, molecular function, and biological process. The DAVID online tool was used to perform GO and KEGG analyses of the DEGs, with P < .05 as the threshold for statistical significance.", "The common DEGs were imported into the Search Tool for the Retrieval of Interacting Genes (STRING, http://string-db.org) (version 10.5), which predicts and traces the protein-protein interaction (PPI) network. Analysis of the interactions between proteins may offer novel insights into the pathophysiological mechanisms underlying the development of ACP. In this study, the STRING database was used to construct the PPI network of DEGs with a minimum required interaction score of medium confidence (0.4).", "Based on topology principles, the Molecular Complex Detection (MCODE) plug-in (version 1.5.1) of Cytoscape can discover tightly coupled regions; it was used to identify the most important module of the PPI network. The criteria for MCODE analysis were degree cut-off = 2, MCODE score > 5, max depth = 100, k-score = 2, and node score cut-off = 0.2. Hub genes were defined as nodes with degrees ≥ 10. The DAVID online tool was then used to perform GO and KEGG pathway analyses of the hub genes, and clustering analysis of the hub genes was performed with OmicShare (http://www.omicshare.com/tools), an open data analysis platform.\nThe Spearman-rho test was used for correlation analysis between ACP and the expression of the relevant genes. Any gene reaching a liberal statistical threshold of P < .2 was then entered into a multivariable linear regression model to identify independent predictive genes of ACP. Finally, receiver operator characteristic (ROC) curve analysis was performed to determine the ability of the hub genes to predict ACP. All statistical analyses were conducted using SPSS software (version 21.0; IBM, Armonk, NY),
and P < .05 was considered statistically significant.", "A total of 14 participants were recruited, comprising 7 control individuals and 7 ACP patients. After surgery, 7 ACP tumor samples were obtained from the ACP patients and 7 control brain samples from the control individuals. The research conformed to the Declaration of Helsinki and was approved by the Human Ethics and Research Ethics Committees of the Zhejiang Cancer Hospital, and informed consent was obtained from all participants.\nTotal RNA was extracted from the 7 ACP tumors and 7 control brain samples with the RNAiso Plus (Trizol) kit (Thermo Fisher, MA, United States) and reverse transcribed to cDNA. RT-qPCR was performed on a Light Cycler® 4800 System (Roche Diagnostic Products Co., Basel, Switzerland) with specific GRIA2, SYN1, and SYP primers; Table 1 presents the primer sequences used in the experiments. The RQ value (2^−ΔΔCt, where Ct is the threshold cycle) of each sample was calculated and is presented as the fold change in gene expression relative to the control group, with GAPDH as the endogenous control.\nPrimers and their sequences for polymerase chain reaction analysis.", "3.1. DEGs identified between standard and ACP samples After analysis of the datasets (GSE94349 and GSE68015) with the limma package, the differences between ACP and normal samples are presented in the volcano plots (Fig. 1A and B). After standardization of these results, 944 DEGs were identified in GSE94349 and 764 in GSE68015. The Venn diagram shows that 703 genes were shared by the 2 datasets (Fig. 1C).\nThe identification of DEGs by limma package and Venn diagram. (A) The volcano plot presents the difference between ACP and normal samples after analysis of the dataset GSE94349 with the limma package. (B) The volcano plot presents the difference between ACP and normal samples after analysis of the dataset GSE68015 with the limma package. (C) The Venn diagram shows that 703 genes were contained in the GSE94349 and GSE68015 datasets simultaneously. ACP = adamantinomatous craniopharyngioma, DEGs = differentially expressed genes.\n3.2.
3.2. Functional annotation of DEGs by GO and KEGG analyses
GO analysis showed that, across the biological process, cellular component, and molecular function categories, the DEGs were mainly enriched in chemical synaptic transmission, cell adhesion, epidermis development, extracellular matrix organization, odontogenesis of the dentin-containing tooth, cell junction, extracellular region, plasma membrane, extracellular space, axon, structural molecule activity, calcium ion binding, gamma-aminobutyric acid-A (GABA-A) receptor activity, and structural constituent of the cytoskeleton (Table 2). KEGG pathway analysis showed that the DEGs were primarily enriched in retrograde endocannabinoid signaling, nicotine addiction, extracellular matrix-receptor interaction, morphine addiction, and the GABAergic synapse (Table 2).
GO and KEGG pathway enrichment analysis of DEGs in ACP samples.
ACP = adamantinomatous craniopharyngioma, DEGs = differentially expressed genes, ECM = extracellular matrix, FDR = false discovery rate, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.
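DAVID is a web service, so the enrichment step above is not directly scriptable. As a rough programmatic equivalent (an alternative tool, not the one used in this study), the Bioconductor package clusterProfiler could reproduce GO and KEGG enrichment along these lines; common_degs is the hypothetical DEG symbol vector from the earlier sketch.

## Sketch of GO/KEGG enrichment with clusterProfiler, a scriptable
## alternative to the DAVID web tool used in the study (cutoff P < .05).
library(clusterProfiler)
library(org.Hs.eg.db)

# common_degs: hypothetical vector of DEG symbols (703 genes in the study)
ego <- enrichGO(gene         = common_degs,
                OrgDb        = org.Hs.eg.db,
                keyType      = "SYMBOL",
                ont          = "ALL",    # BP, CC, and MF together
                pvalueCutoff = 0.05)

# KEGG enrichment expects Entrez IDs, so map the symbols first
ids   <- bitr(common_degs, fromType = "SYMBOL", toType = "ENTREZID",
              OrgDb = org.Hs.eg.db)
ekegg <- enrichKEGG(gene = ids$ENTREZID, organism = "hsa",
                    pvalueCutoff = 0.05)
head(as.data.frame(ego)); head(as.data.frame(ekegg))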
3.3. PPI and module networks construction and hub gene selection
The PPI network of the DEGs was constructed (Fig. 2), and the most significant module was extracted with Cytoscape (Fig. 3). A total of 10 genes (SNAP25, GRIA2, KCNJ9, SYN1, SLC32A1, SNCB, GRM5, GABRG2, SYP, and CDH1) were identified as hub genes with degrees ≥ 10 (Fig. 4A).
The PPI network presents the intricate relationships between DEGs. DEGs = differentially expressed genes, PPI = protein-protein interaction.
The significant module network identified from the PPI network. PPI = protein-protein interaction.
The mining and analysis of hub genes. (A) The 10 genes (SNAP25, GRIA2, KCNJ9, SYN1, SLC32A1, SNCB, GRM5, GABRG2, SYP, and CDH1) identified as hub genes with degrees ≥10. (B) Hierarchical clustering showing that the hub genes differentiate the ACP samples from the normal ones in the GSE94349 dataset. (C) Hierarchical clustering showing that the hub genes differentiate the ACP samples from the normal ones in the GSE68015 dataset. Upregulated genes are marked in red; downregulated genes in green. ACP = adamantinomatous craniopharyngioma.
3.4. Hub gene analysis
Functional analysis of the hub genes was performed with DAVID. The hub genes were mainly enriched in chemical synaptic transmission, neurotransmitter secretion, regulation of long-term neuronal synaptic plasticity, locomotory behavior, cell junction, presynaptic active zone, neuron projection, synaptic vesicle, calcium-dependent protein binding, retrograde endocannabinoid signaling, nicotine addiction, morphine addiction, and neuroactive ligand-receptor interaction (Table 3). The names, abbreviations, and functions of these hub genes are shown in Table 4. Hierarchical clustering showed that the hub genes could differentiate the ACP samples from normal ones (Fig. 4B and C). These hub genes showed the highest node scores in the PPI network, suggesting that they may play essential roles in the occurrence or progression of ACP.
GO and KEGG pathway enrichment analysis of hub genes.
FDR = false discovery rate, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.
Summaries for the function of 10 hub genes.
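The degree ≥ 10 hub criterion from Section 3.3 is easy to verify outside Cytoscape. Below is a small sketch with igraph, assuming the STRING network was exported as a 2-column edge list; the file name and column names are hypothetical.

## Sketch of degree-based hub selection (degree >= 10) with igraph;
## "string_edges.tsv" and its column names are hypothetical.
library(igraph)

edges <- read.delim("string_edges.tsv")  # columns: node1, node2
g   <- graph_from_data_frame(edges[, c("node1", "node2")], directed = FALSE)
deg <- degree(g)                         # number of interactions per protein
hub_genes <- names(deg)[deg >= 10]
sort(deg[hub_genes], decreasing = TRUE)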
3.5. Correlation between ACP and hub gene expression
To confirm that the hub genes are associated with ACP, we further analyzed ACP status against hub gene expression. With the Spearman correlation coefficient, SNAP25 (ρ = −0.702, P < .001), GRIA2 (ρ = −0.673, P < .001), KCNJ9 (ρ = −0.706, P < .001), SYN1 (ρ = −0.747, P < .001), SLC32A1 (ρ = −0.813, P < .001), SNCB (ρ = −0.848, P < .001), GRM5 (ρ = −0.680, P < .001), GABRG2 (ρ = −0.830, P < .001), SYP (ρ = −0.852, P < .001), and CDH1 (ρ = −0.865, P < .001) were all significantly correlated with ACP (Table 5). In the multivariate linear regression model, holding all other variables fixed, the natural logarithm of ACP remained associated with GRIA2 (β = 0.044, P = .041), SYN1 (β = 0.083, P = .017), SYP (β = −0.137, P < .001), and CDH1 (β = 0.154, P < .001) (Table 5).
The correlation and linear regression analysis between ACP and relevant gene expression.
ACP = adamantinomatous craniopharyngioma, β = parameter estimate, ρ = Spearman correlation coefficient.
Significant variables: P < .05.
Spearman rank correlation coefficient between ACP and relevant characteristics.
Multiple linear regression analysis.
3.6. The hub genes predict ACP sensitively and specifically by the ROC curve
We constructed receiver operator characteristic curves to identify accurate thresholds for the hub genes to predict ACP. SYP was the gene most strongly associated with ACP (area under the curve, 0.992; 95% confidence interval, 0.980–1.000; P < .001), and its optimal diagnostic threshold for ACP was 5.672 (Table 6; Fig. 5A–K).
Receiver operator characteristic curve analysis of hub gene expression for ACP.
ACP = adamantinomatous craniopharyngioma, AUC = area under curve, CI = confidence interval, max = the maximum of AUC, ODT = optimal diagnostic threshold.
Significant variables.
The receiver operator characteristic curves indicate that the hub genes can predict ACP sensitively and specifically. (A) SNAP25, (B) GRIA2, (C) KCNJ9, (D) SYN1, (E) SLC32A1, (F) SNCB, (G) GRM5, (H) GABRG2, (I) SYP, (J) CDH1, (K) the merger of all hub genes. ACP = adamantinomatous craniopharyngioma.
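The per-gene ROC statistics in Table 6 (AUC, 95% CI, optimal threshold) could be reproduced with the pROC package. This sketch reuses the hypothetical expr and acp objects from the statistics sketch in the Methods, so the numbers it prints are illustrative only.

## Sketch of per-gene ROC analysis with pROC: AUC, 95% CI, and a Youden
## optimal threshold, as reported for SYP in Table 6. `expr` and `acp`
## are the hypothetical objects defined in the earlier statistics sketch.
library(pROC)

roc_one_gene <- function(gene) {
  r  <- roc(response = acp, predictor = expr[[gene]], quiet = TRUE)
  ci <- ci.auc(r)                                  # lower, median, upper
  best <- coords(r, "best", best.method = "youden",
                 ret = c("threshold", "sensitivity", "specificity"))[1, ]
  c(AUC = as.numeric(auc(r)), ci_low = ci[1], ci_high = ci[3], unlist(best))
}

t(sapply(colnames(expr), roc_one_gene))   # one row of ROC statistics per gene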
3.7. Results of RT-qPCR analysis
Consistent with the analyses above, GRIA2, SYN1, and SYP were markedly down-regulated in ACP tumor samples. As shown in Figure 6, the relative expression levels of GRIA2, SYN1, and SYP were significantly lower in ACP samples than in controls, indicating that GRIA2, SYN1, and SYP may serve as biomarkers for ACP.
Relative expression of GRIA2, SYN1, and SYP by RT-qPCR analysis. *P < .05, compared with controls. RT-qPCR = quantitative real-time polymerase chain reaction.
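The fold changes behind Figure 6 follow directly from the 2−ΔΔCt definition given in the Methods; a worked sketch with invented Ct values makes the arithmetic explicit.

## Worked 2^-ddCt sketch for the RT-qPCR fold changes in Fig. 6. The Ct
## values are invented for illustration; GAPDH is the endogenous control
## and the control-group mean serves as the calibrator, as in the Methods.
ct_target_acp  <- c(28.1, 27.9, 28.4)   # e.g., SYP in ACP samples
ct_gapdh_acp   <- c(18.0, 18.2, 17.9)
ct_target_ctrl <- c(24.2, 24.0, 24.5)   # SYP in control samples
ct_gapdh_ctrl  <- c(18.1, 18.0, 18.3)

d_ct_acp  <- ct_target_acp  - ct_gapdh_acp     # delta Ct per ACP sample
d_ct_ctrl <- ct_target_ctrl - ct_gapdh_ctrl
dd_ct <- d_ct_acp - mean(d_ct_ctrl)            # delta-delta Ct
rq    <- 2^(-dd_ct)                            # RQ = fold change vs controls
rq   # values well below 1 are consistent with down-regulation in ACP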
4. Discussion
ACP is a congenital, histologically benign but aggressive and invasive epithelial tumor of the sellar region. Treatment is still dominated by surgical resection. However, because of the tumor itself and its intricate anatomical relationships with essential surrounding structures, the total resection rate is low while operative mortality, severe complications, and postoperative recurrence are frequent[6]: reported complete resection rates are only 18% to 84%, postoperative mortality reaches 1.7% to 5.4%, and the 10-year recurrence rate after total resection ranges from 0% to 62%. Long-term hypophyseal and hypothalamic dysfunction has likewise been a difficult problem in neurosurgery for many years. Molecular targeted therapy for craniopharyngioma with BRAF mutation has become a research hotspot.[7] In 2016, gene blockers were used clinically in patients with craniopharyngioma, and preliminary observation found that tumor volume was significantly reduced after treatment.[8] Phase II clinical trials are currently underway, and it will still take time before targeted molecular therapy is widely used.
Moreover, molecular biological markers to predict efficacy and prognosis in ACP patients are lacking.
Bioinformatics can screen out the genes differentially expressed between tumor patients and normal individuals by mining gene databases, and these genes can then serve as potential genetic and molecular markers for tumor diagnosis and prognosis.[9] The technique has already been used to screen differentially expressed genes in a variety of tumors and has broad application value.
Using bioinformatics, we screened 2 gene expression datasets from ACP patients and normal individuals and identified 703 differentially expressed genes; further analysis yielded the 10 most significant hub genes, and GO and KEGG analyses characterized the functions and pathways in which these genes are enriched, to explore their possible mechanisms in the disease. In the Spearman correlation and multiple linear regression analyses, CDH1 was closely related to ACP, suggesting statistical significance in the occurrence and development of the disease. In the ROC analysis, however, its AUC was essentially zero, indicating low authenticity and no application value, so CDH1 has no diagnostic value for the purposes of this study. Finally, Spearman correlation, multiple linear regression, and ROC curve analyses identified the 3 hub genes (SYN1, SYP, and GRIA2) most valuable for ACP diagnosis; all 3 were statistically significant in the occurrence and development of ACP.
The SYN1 gene encodes a neuronal phosphoprotein that coats synaptic vesicles and binds to the cytoskeleton and is thought to play an essential role in regulating neurotransmitter release.[10] SYN1 mutations have been found to alter neuronal development and nerve-terminal function and to cause diseases related to synaptic dysfunction, such as autism and epilepsy.[11] Low SYN1 expression may also sustain malignant proliferation and promote the occurrence and development of glioma.[12] Consistent with these findings, we found this gene highly expressed in normal individuals and expressed at low levels in ACP patients. Because tumor progression and poor prognosis are often associated with the synaptic function of tumor cells,[13] we speculate that mutations in this gene contribute to the development and progression of ACP by altering synaptic plasticity. These data suggest that the SYN1 gene and its target proteins could serve as potential genetic and molecular targets for ACP prevention and treatment.
Binding to the small synaptic vesicles found in nerve terminals, SYN1 probably plays an exocytotic regulatory role by linking the vesicles to the cytoskeleton and to each other.[14–16] SYN1 is also likely involved in neuronal development and in the formation of synaptic contacts between neurons.[17–19] Mutations alter the SYN1 protein, potentially causing defects in synaptic vesicle traffic and nerve-terminal function.
Consistent with its native role, SYN1 expression is brain- and neuron-specific, mediated by the promoter region of the SYN1 gene.[20] The SYN1 protein serves as a substrate for several protein kinases, and phosphorylation likely regulates the protein in the nerve terminal.
The protein encoded by SYP participates in forming intracellular vesicles and other membrane components and can regulate the short- and long-term plasticity of synapses.[21] Because disruption of synaptic plasticity underlies learning and memory, some investigators believe SYP is involved in the occurrence and development of Alzheimer disease.[22] Studies have also shown that the synapse complex protein syp-1, formed by its encoded protein, affects the formation of critical protein domains through phosphorylation, further affecting mitosis and playing an essential role in maintaining a normal cell cycle.[23] SYP is a specific marker protein of synaptic vesicles, and its density and distribution indirectly reflect the number and distribution of synaptic vesicles.[24] Others have found that syp-5 can inhibit the PI3K/AKT- and MAPK/ERK-dependent HIF-1 pathways and suppress tumor cell migration, invasion, and tumor angiogenesis.[25] Using bioinformatics, we found low SYP expression in ACP patients and speculate that abnormal expression of synaptic vesicles and other synaptic signals may be involved in the development of ACP, suggesting its potential as a target gene for ACP diagnosis and treatment. The signaling pathways affected by synaptic vesicles, however, require further study.
SYP is a transmembrane glycoprotein found in the small presynaptic vesicles of nerve cells and the microvesicles of neuroendocrine cells[26] and is a major integral membrane protein of secretory vesicles. As the most commonly expressed neural marker, SYP is widely present in various primary central nervous system neoplasms; the more dedifferentiated the tumor, the higher its malignancy. Its expression level may therefore be closely related to the malignancy of ACP and to patient survival and prognosis.[27,28]
The glutamate receptors encoded by GRIA2 play an important role in gated ion channels and excitatory synaptic transmission in the central nervous system. Earlier studies found that knockout of GRIA2 in mice affects learning and food-reward stimulation.[29] More recently, GRIA2 was found to be differentially expressed in patients with ovarian serous papillary adenocarcinoma who had a good prognosis after chemotherapy, suggesting a role for GRIA2 in determining prognosis under chemotherapy.[30] Abnormal GRIA2 expression in solitary fibrous tumors has also been shown to differ statistically from that in normal individuals.[31] Other investigators, however, believe this gene increases the invasion and migration of pancreatic cancer cells by activating the AMPA receptor and the classical MAPK pathway. GRIA2 is also involved in the degeneration of brain and spinal motor neurons in amyotrophic lateral sclerosis, possibly through abnormal signaling by its transcribed mRNA and interference with Ca2+ homeostasis.
Other studies have suggested that the glutamate system may be involved in the development of migraine.[32] Through bioinformatics, we found that the low expression of GRIA2 in ACP patients may reflect abnormal glutamatergic signaling pathways, leading to the occurrence and development of ACP and suggesting GRIA2 as a potential target for ACP diagnosis and treatment.
Hub proteins are an essential class of interactors in the organism. They bind to different interacting partners, most of which are transcription factors or co-regulators involved in signaling pathways, show remarkable pleiotropy, and connect many cellular systems. Static hubs interact with their partners simultaneously, whereas dynamic hubs bind different partners at different places and times.[33] Hub proteins transfer from the cytoplasm to the nucleus, triggered by phosphorylation and ubiquitination in regions of intrinsic disorder, and transmit external signals to the nucleus via the cell membrane and cytoplasm.[34] Functioning at the center of multiple DNA-processing machines, hub proteins underpin multiprotein complex remodeling and the integration of pathways such as DNA replication, damage response, and repair. They often act as platforms or scaffolds around which dynamic DNA-processing machinery assembles and disassembles. A key characteristic of most hub proteins in DNA-processing machines is that they bind DNA substrates and protein chaperones in specific orientations, providing the directionality needed for proper functioning.[35]
Our study still has shortcomings, such as the lack of animal experiments to verify whether abnormal expression of these genes can trigger ACP. The sample size is small, which may affect the results to some extent; future studies will use larger samples to obtain more reliable statistics.
In summary, by screening gene databases, we found that the genes differentially expressed in ACP patients are likely to play an essential role in the development of ACP. These differentially expressed genes are of great value for tumor diagnosis and targeted therapy and may even be mined as molecular targets for predicting the prognosis of ACP patients, with broad application prospects.
Author contributions
Y-FZ and S-YZ performed the experiments and were major contributors to writing and submitting the manuscript. BW and C-XS made substantial contributions to the research conception and designed the draft of the research process. Y-FZ and LX critically revised the manuscript for important intellectual content. BW, L-WL and KJ analyzed the gene data regarding ACP. All authors read and approved the final manuscript.
Conceptualization: Liang Xia, Cai-Xing Sun, Bin Wu.
Formal analysis: Li-Weng Li, Kai Jing.
Investigation: Yang-Fan Zou, Shu-Yuan Zhang, Li-Weng Li.
Project administration: Li-Weng Li, Cai-Xing Sun.
Resources: Bin Wu.
Software: Shu-Yuan Zhang.
Supervision: Kai Jing.
Validation: Shu-Yuan Zhang, Kai Jing.
Visualization: Kai Jing.
Writing – original draft: Yang-Fan Zou, Shu-Yuan Zhang, Liang Xia, Bin Wu.
Writing – review & editing: Yang-Fan Zou, Liang Xia, Bin Wu.
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, null, null, null, null, "results", "discussion", null ]
[ "adamantinomatous craniopharyngioma", "biological information technology", "biomarker", "differentially expressed genes" ]
The probes would be transformed into the homologous gene symbol using the platform’s annotation information. The GSE94349 dataset contained 24 ACP and 27 standard samples, and GSE68015 contained 15 ACP and 16 normal samples. 2.2. DEGs identified by limma package The limma package screened the differentially expressed genes (DEGs) between ACP and normal samples. After setting up the differential experimental groups for 1 GEO series, the limma package could execute a command to compare the differential classifications to identify the DEGs. According to the method of Benjamini and Hochberg (false discovery rate), the tool could apply adjustment to the P values to obtain the adjusted P values (adj. P) and maintain 1 balance between the possibility of false positives and detection of statistically significant genes. If 1 probe set does not have the homologous gene, or if 1 gene has numerous probe sets, the data will be removed. The rule of statistical significance is that adj. P value ≤ .01 and log (Fold change, FC) ≥ 4 or ≤ −4. The Venn diagram was delineated by FunRich software. The limma package screened the differentially expressed genes (DEGs) between ACP and normal samples. After setting up the differential experimental groups for 1 GEO series, the limma package could execute a command to compare the differential classifications to identify the DEGs. According to the method of Benjamini and Hochberg (false discovery rate), the tool could apply adjustment to the P values to obtain the adjusted P values (adj. P) and maintain 1 balance between the possibility of false positives and detection of statistically significant genes. If 1 probe set does not have the homologous gene, or if 1 gene has numerous probe sets, the data will be removed. The rule of statistical significance is that adj. P value ≤ .01 and log (Fold change, FC) ≥ 4 or ≤ −4. The Venn diagram was delineated by FunRich software. 2.3. Functional annotation of DEGs by GO and KEGG analyses DAVID (https://david.ncifcrf.gov/home.jsp) (version 6.8), 1 online analysis tool suite with the function of Integrated Discovery and Annotation, mainly provides typical batch annotation and gene-gene ontology (GO) term enrichment analysis to highlight the most relevant GO terms associated with a given gene list. Kyoto Encyclopedia of Genes and Genomes (KEGG) (https://www.kegg.jp/), 1 of the world’s most commonly used biological information databases, aims to understand advanced functions and biological systems. From the molecular level, KEGG integrates many practical program database resources from high-throughput experimental technologies. GO is an ontology widely used in bioinformatics, which covers 3 aspects of biology, including cellular component, molecular function, and biological process. The DAVID online tool was implemented to perform GO and KEGG analyses of DEGs. The rule of statistical significance is that P < .05. DAVID (https://david.ncifcrf.gov/home.jsp) (version 6.8), 1 online analysis tool suite with the function of Integrated Discovery and Annotation, mainly provides typical batch annotation and gene-gene ontology (GO) term enrichment analysis to highlight the most relevant GO terms associated with a given gene list. Kyoto Encyclopedia of Genes and Genomes (KEGG) (https://www.kegg.jp/), 1 of the world’s most commonly used biological information databases, aims to understand advanced functions and biological systems. 
From the molecular level, KEGG integrates many practical program database resources from high-throughput experimental technologies. GO is an ontology widely used in bioinformatics, which covers 3 aspects of biology, including cellular component, molecular function, and biological process. The DAVID online tool was implemented to perform GO and KEGG analyses of DEGs. The rule of statistical significance is that P < .05. 2.4. Construction and analysis of PPI network After importing the common DEGs to the Search Tool for the Retrieval of Interacting Genes (http://string-db.org) (version 10.5), the online tool could predict and trace the protein-protein interaction (PPI) network. The analysis of interactions between various proteins might put forward some novel ideas into the pathophysiological mechanisms of the development of ACP. In this research, the Search Tool for the Retrieval of Interacting Genes database was used to construct the PPI network of DEGs, and the minimum required interaction score is medium confidence > 0.4. After importing the common DEGs to the Search Tool for the Retrieval of Interacting Genes (http://string-db.org) (version 10.5), the online tool could predict and trace the protein-protein interaction (PPI) network. The analysis of interactions between various proteins might put forward some novel ideas into the pathophysiological mechanisms of the development of ACP. In this research, the Search Tool for the Retrieval of Interacting Genes database was used to construct the PPI network of DEGs, and the minimum required interaction score is medium confidence > 0.4. 2.5. The analysis and mining of hub genes Based on the topology principles, the Molecular Complex Detection (MCODE) (version 1.5.1), a plug-in of Cytoscape, could discover the tightly coupled region. MCODE identified the most important module of the PPI network map. The criteria of MCODE analysis require that degree cut-off = 2, MCODE scores > 5, Max depth = 100, k-score = 2, and node score cut-off = 0.2. The hub genes were excavated when the degrees were set (degrees ≥ 10). Then, DAVID online tool was used to analyze the GO and KEGG pathway analyses for the hub genes. The clustering analysis of hub genes was performed using OmicShare (http://www.omicshare.com/tools), an available data analysis platform. The Spearman-rho test was used for correlation analysis between ACP and relevant gene expression. Any test results reaching a liberal statistical threshold of P < .2 for each comparison were then entered into a multivariable linear regression model to identify independent predictive genes of ACP. Finally, we performed receiver operator characteristic (ROC) curve analysis to determine the ability of the hub genes to predict ACP. All statistical analyses were conducted using SPSS software (version 21.0; IBM, Armonk, NY). A P value of <0.05 was considered statistically significant. Based on the topology principles, the Molecular Complex Detection (MCODE) (version 1.5.1), a plug-in of Cytoscape, could discover the tightly coupled region. MCODE identified the most important module of the PPI network map. The criteria of MCODE analysis require that degree cut-off = 2, MCODE scores > 5, Max depth = 100, k-score = 2, and node score cut-off = 0.2. The hub genes were excavated when the degrees were set (degrees ≥ 10). Then, DAVID online tool was used to analyze the GO and KEGG pathway analyses for the hub genes. 
The clustering analysis of hub genes was performed using OmicShare (http://www.omicshare.com/tools), an available data analysis platform. The Spearman-rho test was used for correlation analysis between ACP and relevant gene expression. Any test results reaching a liberal statistical threshold of P < .2 for each comparison were then entered into a multivariable linear regression model to identify independent predictive genes of ACP. Finally, we performed receiver operator characteristic (ROC) curve analysis to determine the ability of the hub genes to predict ACP. All statistical analyses were conducted using SPSS software (version 21.0; IBM, Armonk, NY). A P value of <0.05 was considered statistically significant. 2.6. Quantitative real-time polymerase chain reaction (RT-qPCR) assay A total of 14 participants were recruited, including 7 control individuals and 7 ACP patients. After surgery, 7 ACP tumor samples from ACP patients and 7 control brain samples from control individuals were obtained. The research conformed to the Declaration of Helsinki and was authorized by the Human Ethics and Research Ethics Committees of the Zhejiang Cancer Hospital. Informed consent was obtained from all participants. Total RNA was extracted from 7 ACP tumors and 7 control brain samples by the RNAiso Plus (Trizol) kit (Thermofisher, Massachusetts, United States, MA) and reverse transcribed to cDNA. RT-qPCR was performed using a Light Cycler® 4800 System (Roche Diagnostic Products Co., Basel, Switzerland) with specific GRIA2, SYN1, and SYP primers. Table 1 presents the primer sequences used in the experiments. The RQ values (2−ΔΔCt, where Ct is the threshold cycle) of each sample were calculated and are presented as fold changes in gene expression relative to the control group. GAPDH was used as an endogenous control. Primers and their sequences for polymerase chain reaction analysis. A total of 14 participants were recruited, including 7 control individuals and 7 ACP patients. After surgery, 7 ACP tumor samples from ACP patients and 7 control brain samples from control individuals were obtained. The research conformed to the Declaration of Helsinki and was authorized by the Human Ethics and Research Ethics Committees of the Zhejiang Cancer Hospital. Informed consent was obtained from all participants. Total RNA was extracted from 7 ACP tumors and 7 control brain samples by the RNAiso Plus (Trizol) kit (Thermofisher, Massachusetts, United States, MA) and reverse transcribed to cDNA. RT-qPCR was performed using a Light Cycler® 4800 System (Roche Diagnostic Products Co., Basel, Switzerland) with specific GRIA2, SYN1, and SYP primers. Table 1 presents the primer sequences used in the experiments. The RQ values (2−ΔΔCt, where Ct is the threshold cycle) of each sample were calculated and are presented as fold changes in gene expression relative to the control group. GAPDH was used as an endogenous control. Primers and their sequences for polymerase chain reaction analysis. 2.1. Access to public data: Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo) is an open functional genomics database of high-throughput resources, including microarrays, gene expression data, and chips. Two expression profiling datasets (GSE94349 and GSE68015) were obtained from GEO. The probes would be transformed into the homologous gene symbol using the platform’s annotation information. The GSE94349 dataset contained 24 ACP and 27 standard samples, and GSE68015 contained 15 ACP and 16 normal samples. 2.2. 
DEGs identified by limma package: The limma package screened the differentially expressed genes (DEGs) between ACP and normal samples. After setting up the differential experimental groups for 1 GEO series, the limma package could execute a command to compare the differential classifications to identify the DEGs. According to the method of Benjamini and Hochberg (false discovery rate), the tool could apply adjustment to the P values to obtain the adjusted P values (adj. P) and maintain 1 balance between the possibility of false positives and detection of statistically significant genes. If 1 probe set does not have the homologous gene, or if 1 gene has numerous probe sets, the data will be removed. The rule of statistical significance is that adj. P value ≤ .01 and log (Fold change, FC) ≥ 4 or ≤ −4. The Venn diagram was delineated by FunRich software. 2.3. Functional annotation of DEGs by GO and KEGG analyses: DAVID (https://david.ncifcrf.gov/home.jsp) (version 6.8), 1 online analysis tool suite with the function of Integrated Discovery and Annotation, mainly provides typical batch annotation and gene-gene ontology (GO) term enrichment analysis to highlight the most relevant GO terms associated with a given gene list. Kyoto Encyclopedia of Genes and Genomes (KEGG) (https://www.kegg.jp/), 1 of the world’s most commonly used biological information databases, aims to understand advanced functions and biological systems. From the molecular level, KEGG integrates many practical program database resources from high-throughput experimental technologies. GO is an ontology widely used in bioinformatics, which covers 3 aspects of biology, including cellular component, molecular function, and biological process. The DAVID online tool was implemented to perform GO and KEGG analyses of DEGs. The rule of statistical significance is that P < .05. 2.4. Construction and analysis of PPI network: After importing the common DEGs to the Search Tool for the Retrieval of Interacting Genes (http://string-db.org) (version 10.5), the online tool could predict and trace the protein-protein interaction (PPI) network. The analysis of interactions between various proteins might put forward some novel ideas into the pathophysiological mechanisms of the development of ACP. In this research, the Search Tool for the Retrieval of Interacting Genes database was used to construct the PPI network of DEGs, and the minimum required interaction score is medium confidence > 0.4. 2.5. The analysis and mining of hub genes: Based on the topology principles, the Molecular Complex Detection (MCODE) (version 1.5.1), a plug-in of Cytoscape, could discover the tightly coupled region. MCODE identified the most important module of the PPI network map. The criteria of MCODE analysis require that degree cut-off = 2, MCODE scores > 5, Max depth = 100, k-score = 2, and node score cut-off = 0.2. The hub genes were excavated when the degrees were set (degrees ≥ 10). Then, DAVID online tool was used to analyze the GO and KEGG pathway analyses for the hub genes. The clustering analysis of hub genes was performed using OmicShare (http://www.omicshare.com/tools), an available data analysis platform. The Spearman-rho test was used for correlation analysis between ACP and relevant gene expression. Any test results reaching a liberal statistical threshold of P < .2 for each comparison were then entered into a multivariable linear regression model to identify independent predictive genes of ACP. 
Finally, we performed receiver operator characteristic (ROC) curve analysis to determine the ability of the hub genes to predict ACP. All statistical analyses were conducted using SPSS software (version 21.0; IBM, Armonk, NY). A P value of <0.05 was considered statistically significant. 2.6. Quantitative real-time polymerase chain reaction (RT-qPCR) assay: A total of 14 participants were recruited, including 7 control individuals and 7 ACP patients. After surgery, 7 ACP tumor samples from ACP patients and 7 control brain samples from control individuals were obtained. The research conformed to the Declaration of Helsinki and was authorized by the Human Ethics and Research Ethics Committees of the Zhejiang Cancer Hospital. Informed consent was obtained from all participants. Total RNA was extracted from 7 ACP tumors and 7 control brain samples by the RNAiso Plus (Trizol) kit (Thermofisher, Massachusetts, United States, MA) and reverse transcribed to cDNA. RT-qPCR was performed using a Light Cycler® 4800 System (Roche Diagnostic Products Co., Basel, Switzerland) with specific GRIA2, SYN1, and SYP primers. Table 1 presents the primer sequences used in the experiments. The RQ values (2−ΔΔCt, where Ct is the threshold cycle) of each sample were calculated and are presented as fold changes in gene expression relative to the control group. GAPDH was used as an endogenous control. Primers and their sequences for polymerase chain reaction analysis. 3. Results: 3.1. DEGs identified between standard and ACP samples After analysis of the datasets (GSE94349 and GSE68015) with the limma package, the difference between ACP and standard samples could be presented in the volcano plots (Fig. 1A and B). Then these results were standardized, and DEGs (944 in GSE94349 and 764 in GSE68015) were distinguished. The Venn diagram could show that 703 genes were simultaneously contained in the 2 datasets (Fig. 1C). The identification of DEGs by limma package and Venn diagram. (A) The volcano plot presents the difference between ACP and normal samples after analysis of the datasets GSE94349 with limma package. (B) The volcano plot presents the difference between non-MM lung cancer and MM lung cancer tissues after analysis of the datasets GSE68015 with limma package. (C) The Venn diagram could show that 703 genes were contained in the GSE94349 and GSE68015 datasets simultaneously. ACP = adamantinomatous craniopharyngioma, DEGs = differentially expressed genes, MM = multiple myeloma. After analysis of the datasets (GSE94349 and GSE68015) with the limma package, the difference between ACP and standard samples could be presented in the volcano plots (Fig. 1A and B). Then these results were standardized, and DEGs (944 in GSE94349 and 764 in GSE68015) were distinguished. The Venn diagram could show that 703 genes were simultaneously contained in the 2 datasets (Fig. 1C). The identification of DEGs by limma package and Venn diagram. (A) The volcano plot presents the difference between ACP and normal samples after analysis of the datasets GSE94349 with limma package. (B) The volcano plot presents the difference between non-MM lung cancer and MM lung cancer tissues after analysis of the datasets GSE68015 with limma package. (C) The Venn diagram could show that 703 genes were contained in the GSE94349 and GSE68015 datasets simultaneously. ACP = adamantinomatous craniopharyngioma, DEGs = differentially expressed genes, MM = multiple myeloma. 3.2. 
Functional annotation of DEGs by GO and KEGG analyses The results of GO analysis presented that variations in biological processes, cell component, and molecular function of DEGs were mainly enriched in chemical synaptic transmission, cell adhesion, epidermis development, extracellular matrix organization, odontogenesis of the dentin-containing tooth, cell junction, extracellular region, plasma membrane, extracellular space, axon, structural molecule activity, calcium ion binding, Gamma-aminobutyric acid-A (GABA-A) receptor activity, and structural constituent of the cytoskeleton (Table 2). Analysis of the KEGG pathway displayed that all DEGs were primarily enriched in retrograde endocannabinoid signaling, nicotine addiction, extracellular matrix-receptor interaction, morphine addiction, and GABAergic synapse (Table 2). GO and KEGG pathway enrichment analysis of DEGs in ACP samples. ACP = adamantinomatous craniopharyngioma, DEGs = differentially expressed genes, ECM = extracellular matrix, FDR = false discovery rate, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes. The results of GO analysis presented that variations in biological processes, cell component, and molecular function of DEGs were mainly enriched in chemical synaptic transmission, cell adhesion, epidermis development, extracellular matrix organization, odontogenesis of the dentin-containing tooth, cell junction, extracellular region, plasma membrane, extracellular space, axon, structural molecule activity, calcium ion binding, Gamma-aminobutyric acid-A (GABA-A) receptor activity, and structural constituent of the cytoskeleton (Table 2). Analysis of the KEGG pathway displayed that all DEGs were primarily enriched in retrograde endocannabinoid signaling, nicotine addiction, extracellular matrix-receptor interaction, morphine addiction, and GABAergic synapse (Table 2). GO and KEGG pathway enrichment analysis of DEGs in ACP samples. ACP = adamantinomatous craniopharyngioma, DEGs = differentially expressed genes, ECM = extracellular matrix, FDR = false discovery rate, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes. 3.3. PPI and module networks construction and hub gene selection The PPI network of DEGs was constructed (Fig. 2), and the most powerful module was obtained using Cytoscape (Fig. 3). A total of 10 genes (SNAP25, GRIA2, KCNJ9, SYN1, SLC32A1, SNCB, GRM5, GABRG2, SYP, and CDH1) were identified as hub genes with degrees ≥ 10 (Fig. 4A). The PPI network presents the intricate relationships between DEGs. DEGs = differentially expressed genes, PPI = protein-protein interaction. The significant module network was identified from the PPI network. PPI = protein-protein interaction. The mining and analysis of hub genes. (A) There 10 genes (SNAP25, GRIA2, KCNJ9, SYN1, SLC32A1, SNCB, GRM5, GABRG2, SYP, and CDH1) were identified as hub genes with degrees ≥10. (B) Hierarchical clustering showed that the hub genes could differentiate the ACP samples from the normal ones in the datasets GSE94349. (C) Hierarchical clustering showed that the hub genes could differentiate the ACP samples from the normal ones in the datasets GSE68015. Upregulation of genes is marked in red; downregulation is observed in green. ACP = adamantinomatous craniopharyngioma. The PPI network of DEGs was constructed (Fig. 2), and the most powerful module was obtained using Cytoscape (Fig. 3). 
A total of 10 genes (SNAP25, GRIA2, KCNJ9, SYN1, SLC32A1, SNCB, GRM5, GABRG2, SYP, and CDH1) were identified as hub genes with degrees ≥ 10 (Fig. 4A). The PPI network presents the intricate relationships between DEGs. DEGs = differentially expressed genes, PPI = protein-protein interaction. The significant module network was identified from the PPI network. PPI = protein-protein interaction. The mining and analysis of hub genes. (A) There 10 genes (SNAP25, GRIA2, KCNJ9, SYN1, SLC32A1, SNCB, GRM5, GABRG2, SYP, and CDH1) were identified as hub genes with degrees ≥10. (B) Hierarchical clustering showed that the hub genes could differentiate the ACP samples from the normal ones in the datasets GSE94349. (C) Hierarchical clustering showed that the hub genes could differentiate the ACP samples from the normal ones in the datasets GSE68015. Upregulation of genes is marked in red; downregulation is observed in green. ACP = adamantinomatous craniopharyngioma. 3.4. Hub gene analysis The functional analyses of hub genes were analyzed using DAVID. Results showed that hub genes were mainly enriched in chemical synaptic transmission, neurotransmitter secretion, regulation of long-term neuronal synaptic plasticity, locomotory behavior, cell junction, presynaptic active zone, neuron projection, synaptic vesicle, calcium-dependent protein binding, retrograde endocannabinoid signaling, nicotine addiction, morphine addiction, and neuroactive ligand-receptor interaction (Table 3). The names, abbreviations, and functions for these hub genes are shown in Table 4. Hierarchical clustering showed that the hub genes could differentiate the ACP samples from normal ones (Fig. 4B and C). These hub genes showed the highest node score in the PPI network, suggesting that they might play essential roles in the occurrence or progression of ACP. GO and KEGG pathway enrichment analysis of hub genes. FDR = false discovery rate, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes. Summaries for the function of 10 hub genes. The functional analyses of hub genes were analyzed using DAVID. Results showed that hub genes were mainly enriched in chemical synaptic transmission, neurotransmitter secretion, regulation of long-term neuronal synaptic plasticity, locomotory behavior, cell junction, presynaptic active zone, neuron projection, synaptic vesicle, calcium-dependent protein binding, retrograde endocannabinoid signaling, nicotine addiction, morphine addiction, and neuroactive ligand-receptor interaction (Table 3). The names, abbreviations, and functions for these hub genes are shown in Table 4. Hierarchical clustering showed that the hub genes could differentiate the ACP samples from normal ones (Fig. 4B and C). These hub genes showed the highest node score in the PPI network, suggesting that they might play essential roles in the occurrence or progression of ACP. GO and KEGG pathway enrichment analysis of hub genes. FDR = false discovery rate, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes. Summaries for the function of 10 hub genes. 3.5. Correlation between ACP and hub genes expression To ensure that the hub genes impacted ACP, we performed a further analysis of ACP and hub gene expression. 
3.5. Correlation between ACP and hub gene expression
To assess whether the hub genes were associated with ACP, we further analyzed ACP status against hub gene expression. In the Spearman correlation analysis, SNAP25 (ρ = −0.702, P < .001), GRIA2 (ρ = −0.673, P < .001), KCNJ9 (ρ = −0.706, P < .001), SYN1 (ρ = −0.747, P < .001), SLC32A1 (ρ = −0.813, P < .001), SNCB (ρ = −0.848, P < .001), GRM5 (ρ = −0.680, P < .001), GABRG2 (ρ = −0.830, P < .001), SYP (ρ = −0.852, P < .001), and CDH1 (ρ = −0.865, P < .001) were significantly correlated with ACP (Table 5). In the multivariable linear regression model, holding all other variables fixed, natural-log-transformed ACP remained associated with GRIA2 (β = 0.044, P = .041), SYN1 (β = 0.083, P = .017), SYP (β = −0.137, P < .001), and CDH1 (β = 0.154, P < .001) (Table 5).
Table 5. The correlation and linear regression analyses between ACP and relevant gene expression: Spearman rank correlation coefficients between ACP and relevant characteristics, and multiple linear regression analysis. ACP = adamantinomatous craniopharyngioma, β = parameter estimate, ρ = Spearman correlation coefficient. Significant variables: P < .05.
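A minimal sketch, on simulated data, of the two analyses above: Spearman rank correlation of each hub gene with ACP status, and a multivariable linear model with all genes entered jointly. The 0/1 group coding here is a simplification of the paper's log-transformed outcome:

import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
acp = np.r_[np.ones(30), np.zeros(30)]              # 1 = ACP sample, 0 = normal
df = pd.DataFrame({
    "acp": acp,
    # hub genes expressed lower in ACP, mirroring the negative rho values
    "SYP": rng.normal(5, 1, n) + 2 * (1 - acp),
    "SYN1": rng.normal(5, 1, n) + 2 * (1 - acp),
    "GRIA2": rng.normal(5, 1, n) + 2 * (1 - acp),
})

# Spearman rank correlation of each gene with group status
for gene in ["SYP", "SYN1", "GRIA2"]:
    rho, p = spearmanr(df["acp"], df[gene])
    print(f"{gene}: rho = {rho:.3f}, p = {p:.3g}")

# Multiple linear regression with all genes entered together
X = sm.add_constant(df[["SYP", "SYN1", "GRIA2"]])
print(sm.OLS(df["acp"], X).fit().summary())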
3.6. The hub genes could predict ACP sensitively and specifically by ROC curve analysis
We constructed receiver operator characteristic (ROC) curves to identify accurate thresholds for the hub genes in predicting ACP. SYP was most strongly associated with ACP (area under the curve, 0.992; 95% confidence interval, 0.980–1.000; P < .001). The optimal diagnostic threshold of SYP for ACP was 5.672 (Table 6; Fig. 5A–K).
Table 6. Receiver operator characteristic curve analysis of hub gene expression for ACP. ACP = adamantinomatous craniopharyngioma, AUC = area under curve, CI = confidence interval, max = the maximum of AUC, ODT = optimal diagnostic threshold. Significant variables.
Figure 5. The receiver operator characteristic curves indicate that the hub genes could predict ACP sensitively and specifically. (A) SNAP25, (B) GRIA2, (C) KCNJ9, (D) SYN1, (E) SLC32A1, (F) SNCB, (G) GRM5, (H) GABRG2, (I) SYP, (J) CDH1, (K) the merger of all hub genes. ACP = adamantinomatous craniopharyngioma.
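A minimal sketch of the ROC analysis above: compute the AUC for one hub gene and an optimal diagnostic threshold. The Youden index is one common threshold criterion; the paper does not state which criterion it used, and the expression values are simulated:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
y = np.r_[np.ones(30), np.zeros(30)]                      # 1 = ACP, 0 = normal
expr = np.r_[rng.normal(4, 1, 30), rng.normal(7, 1, 30)]  # SYP lower in ACP
score = -expr                                             # higher score => ACP

fpr, tpr, thresholds = roc_curve(y, score)
auc = roc_auc_score(y, score)
best = np.argmax(tpr - fpr)                # Youden J = sensitivity + specificity - 1
optimal_threshold = -thresholds[best]      # back on the original expression scale
print(f"AUC = {auc:.3f}, optimal SYP threshold = {optimal_threshold:.3f}")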
3.7. Results of RT-qPCR analysis
According to the analyses above, GRIA2, SYN1, and SYP were markedly down-regulated in ACP tumor samples. As presented in Figure 6, the relative expression levels of GRIA2, SYN1, and SYP were significantly lower in ACP samples than in the control groups. These results suggest that GRIA2, SYN1, and SYP might serve as biomarkers for ACP.
Figure 6. Relative expression of GRIA2, SYN1, and SYP by RT-qPCR analysis. *P < .05, compared with controls. RT-qPCR = quantitative real-time polymerase chain reaction.
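A minimal sketch of relative quantification by the 2^-ΔΔCt method, a standard RT-qPCR readout; the paper does not name its quantification method or reference gene, so GAPDH and the Ct values below are assumptions:

# Mean Ct values per group (toy numbers): target gene and reference gene
mean_ct = {
    "acp":     {"SYP": 28.5, "GAPDH": 18.0},
    "control": {"SYP": 24.0, "GAPDH": 18.1},
}

dct_acp = mean_ct["acp"]["SYP"] - mean_ct["acp"]["GAPDH"]            # normalize to reference
dct_ctrl = mean_ct["control"]["SYP"] - mean_ct["control"]["GAPDH"]
ddct = dct_acp - dct_ctrl                                            # compare with control group
fold_change = 2.0 ** (-ddct)                                         # relative expression
print(f"SYP fold change, ACP vs control: {fold_change:.3f}")  # < 1 => down-regulated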
4. Discussion
ACP is a congenital, histologically benign but aggressive and invasive epithelial tumor of the sellar region. Treatment of ACP is still dominated by surgical resection. However, because of the complex and critical anatomical relationships surrounding the tumor, and because of characteristics of the tumor itself, the rate of total surgical resection is low, while operative mortality, the incidence of severe complications, and the postoperative recurrence rate are high.[6] Complete resection rates reported in the literature are only 18% to 84%, postoperative mortality is as high as 1.7% to 5.4%, and the 10-year recurrence rate after total tumor resection ranges from 0% to 62%. Long-term pituitary and hypothalamic dysfunction has also been a persistent problem in neurosurgery for many years. Molecularly targeted therapy for craniopharyngioma with BRAF mutation has become a research hotspot.[7] In 2016, gene blockers were used for the clinical treatment of patients with craniopharyngioma, and preliminary observations found that tumor volume was significantly reduced after treatment.[8] Phase II clinical trials are currently underway, and it will still take some time before targeted molecular therapy can be used widely. Moreover, molecular biomarkers to predict treatment efficacy and prognosis in ACP patients are lacking. Bioinformatics can screen out genes differentially expressed between tumor patients and normal individuals through the mining of gene databases; these genes can then serve as potential genetic and molecular markers for tumor diagnosis and prognosis.[9] This technique has already been used to screen differentially expressed genes in a variety of tumors and has broad application value.
In this study, we used bioinformatics to screen 2 gene expression datasets from ACP patients and normal individuals, identifying 703 differentially expressed genes; further analysis yielded the 10 most significant hub genes, and GO and KEGG analyses characterized the functions and pathways in which these genes are enriched, to explore their possible mechanisms in disease. In the Spearman correlation and multiple linear regression analyses, CDH1 was closely related to ACP, indicating possible statistical significance for the occurrence and development of ACP. However, in the ROC analysis the reported AUC for CDH1 was essentially zero, indicating that the result had low authenticity and no application value; it therefore has no diagnostic value for the purposes of this study. Finally, Spearman correlation, multiple linear regression, and ROC curve analyses were used to screen and identify the 3 hub genes (SYN1, SYP, and GRIA2) most valuable for ACP diagnosis; all 3 were statistically significant for the occurrence and development of ACP.
The SYN1 gene encodes a neuronal phosphoprotein that coats synaptic vesicles and binds to the cytoskeleton, and it is thought to play an essential role in regulating neurotransmitter release.[10] SYN1 mutations have been found to alter neuronal development and nerve-terminal function and to cause diseases related to synaptic dysfunction, such as autism and epilepsy.[11] Low expression of SYN1 may also sustain malignant proliferation and promote the occurrence and development of glioma.[12] Consistent with these findings, we found that this gene is highly expressed in normal individuals and expressed at low levels in ACP patients. Since tumor progression and poor prognosis are often associated with the synaptic function of tumor cells,[13] we speculate that mutations in this gene contribute to the development and progression of ACP by altering synaptic plasticity. These data suggest that SYN1 and its target proteins could serve as potential genetic and molecular targets for ACP prevention and treatment. By binding to the small synaptic vesicles of nerve terminals, SYN1 may play an exocytotic regulatory role, linking the vesicles to the cytoskeleton and to one another.[14–16] Furthermore, SYN1 is likely involved in neuronal development and in the formation of synaptic contacts between neurons.[17–19] Mutations alter the SYN1 protein, potentially causing defects in synaptic vesicle traffic and nerve-terminal function. In its native context, SYN1 expression is brain- and neuron-specific, mediated by the promoter region of the SYN1 gene.[20] The SYN1 protein serves as a substrate for several different protein kinases, and phosphorylation likely regulates the protein in the nerve terminal.
The protein encoded by SYP is involved in forming intracellular vesicles and other membrane components and can regulate the short-term and long-term plasticity of synapses.[21] Because disruption of synaptic plasticity underlies learning and memory, some investigators believe that SYP is involved in the occurrence and development of Alzheimer disease.[22] Studies have also shown that the synapse complex protein syp-1, formed by its encoded protein, affects the formation of critical protein domains through phosphorylation, thereby influencing mitosis and playing an essential role in maintaining a normal cell cycle.[23] SYP is a specific marker protein of synaptic vesicles, and its density and distribution indirectly reflect the number and distribution of synaptic vesicles.[24] Others have found that syp-5 can inhibit PI3K/AKT- and MAPK/ERK-dependent HIF-1 pathways and suppress tumor cell migration, invasion, and tumor angiogenesis.[25] Using bioinformatics, we found low SYP expression in ACP patients and speculate that abnormal expression of synaptic vesicles and other synaptic signals might be involved in the development of ACP, suggesting SYP's potential as a target gene for the diagnosis and treatment of ACP. However, the signaling pathways affected by synaptic vesicles need further study. SYP is a transmembrane glycoprotein found in the small presynaptic vesicles of nerve cells and the microvesicles of neuroendocrine cells[26] and is a major integral membrane protein of secretory vesicles. SYP, the most commonly expressed neural marker, is widely present in various primary central nervous system neoplasms. The greater a tumor's degree of dedifferentiation, the higher its malignancy; the expression level of SYP may therefore be closely related to the degree of malignancy of ACP and the survival prognosis of patients.[27,28]
The glutamate receptors encoded by GRIA2 play an important role in gated ion channels and excitatory synaptic transmission in the central nervous system. Earlier studies found that knockout of GRIA2 in mice can affect learning and food-reward stimulation.[29] More recently, GRIA2 was found to be differentially expressed in patients with a good prognosis of ovarian serous papillary adenocarcinoma after chemotherapy, suggesting a role for GRIA2 in determining the prognosis of patients receiving chemotherapy.[30] Abnormal expression of GRIA2 in solitary fibrous tumors, compared with normal individuals, has also been reported as statistically significant.[31] Some investigators, however, believe this gene increases the invasion and migration of pancreatic cancer cells by activating the AMPA receptor and the classical MAPK pathway. GRIA2 is involved in the degeneration of brain and spinal motor neurons in amyotrophic lateral sclerosis, possibly through abnormal signaling caused by its transcribed mRNA and interference with Ca2+ homeostasis. Other studies have suggested that the glutamate system may be involved in the development of migraine.[32] Using bioinformatics, we found low expression of GRIA2 in ACP patients, which may reflect abnormal expression of glutamatergic signaling pathways leading to the occurrence and development of ACP, suggesting GRIA2 as a potential target for ACP diagnosis and treatment. Hub proteins are an essential class of interactors in the organism.
They bind to different interacting partners, most of which are transcription factors or co-regulators involved in signaling pathways; they show remarkable pleiotropy and connect many cellular systems. Static hubs interact with their partners simultaneously, while dynamic hubs bind different partners at different places and times.[33] Hub proteins transfer from the cytoplasm to the nucleus, triggered by phosphorylation and ubiquitination in regions of internal disorder, and transmit external signals to the nucleus via the cell membrane and cytoplasm.[34] Hub proteins, which function at the center of multiple DNA-processing machines, are the basis of multiprotein complex remodeling and of the integration of pathways such as DNA replication, damage response, and repair. These proteins often act as platforms or scaffolds around which dynamic DNA-processing machinery assembles and disassembles. A key characteristic of most hub proteins in DNA-processing machines is that they bind DNA substrates and protein chaperones in specific orientations, providing the directionality needed to ensure proper functioning.[35]
Our study nevertheless has several limitations, such as the lack of animal experiments to verify whether abnormal expression of these genes can trigger ACP. The sample size of the experiment is small, which may affect the results to some extent. Future studies will use larger samples to obtain more reliable statistical estimates. In summary, by screening gene databases we identified genes differentially expressed in ACP patients that are likely to play an essential role in the development of ACP. These differentially expressed genes are of great value in tumor diagnosis and targeted therapy and could even be mined as biological molecular targets for the diagnosis and prognosis of ACP patients, with broad application prospects.
Author contributions: Y-FZ and S-YZ performed the experiments and were major contributors to writing and submitting the manuscript. BW and C-XS made substantial contributions to the research conception and designed the draft of the research process. Y-FZ and LX were involved in critically revising the manuscript for important intellectual content. BW, L-WL, and KJ analyzed the gene data. All authors read and approved the final manuscript. Conceptualization: Liang Xia, Cai-Xing Sun, Bin Wu. Formal analysis: Li-Weng Li, Kai Jing. Investigation: Yang-Fan Zou, Shu-Yuan Zhang, Li-Weng Li. Project administration: Li-Weng Li, Cai-Xing Sun. Resources: Bin Wu. Software: Shu-Yuan Zhang. Supervision: Kai Jing. Validation: Shu-Yuan Zhang, Kai Jing. Visualization: Kai Jing. Writing – original draft: Yang-Fan Zou, Shu-Yuan Zhang, Liang Xia, Bin Wu. Writing – review & editing: Yang-Fan Zou, Liang Xia, Bin Wu.
Background: Adamantinomatous craniopharyngioma (ACP) is a subtype of craniopharyngioma, a neoplastic disease of the sellar region with a benign pathological phenotype but a poor prognosis. It has been considered the most common congenital tumor of the skull. This article therefore aims to identify hub genes that might serve as genetic markers of diagnosis, treatment, and prognosis of ACP. Methods: The procedure of this research included the acquisition of public data, identification and functional annotation of differentially expressed genes (DEGs), construction and analysis of a protein-protein interaction network, and the mining and analysis of hub genes by Spearman-rho test, multivariable linear regression, and receiver operator characteristic curve analysis. Quantitative real-time polymerase chain reaction was used to detect the mRNA levels of the relevant genes. Results: Across the 2 datasets, a total of 703 DEGs were identified, mainly enriched in chemical synaptic transmission, cell adhesion, odontogenesis of the dentin-containing tooth, cell junction, extracellular region, extracellular space, structural molecule activity, and structural constituent of the cytoskeleton. The protein-protein interaction network comprised 4379 edges and 589 nodes. Its significant module contained 10 hub genes, of which SYN1, SYP, and GRIA2 were significantly down-regulated in ACP. Conclusions: We identified DEGs between ACP patients and normal samples that are likely to play an essential role in the development of ACP. These DEGs are of great value in tumor diagnosis and targeted therapy and could even be mined as biological molecular targets for diagnosing and treating ACP patients.
Keywords: adamantinomatous craniopharyngioma; biological information technology; biomarker; differentially expressed genes
MeSH terms: Computational Biology; Craniopharyngioma; Early Diagnosis; Gene Expression Profiling; Gene Regulatory Networks; Genetic Markers; Humans; Pituitary Neoplasms; RNA, Messenger
Dietary Behaviors and Incident COVID-19 in the UK Biobank.
PMID: 34203027
Nutritional status influences immunity but its specific association with susceptibility to COVID-19 remains unclear. We examined the association of specific dietary data and incident COVID-19 in the UK Biobank (UKB).
BACKGROUND
We considered UKB participants in England with self-reported baseline (2006-2010) data and linked them to Public Health England COVID-19 test results, performed on samples from combined nose/throat swabs using real-time polymerase chain reaction (RT-PCR), between March and November 2020. Baseline diet factors included having been breastfed as a baby and specific consumption of coffee, tea, oily fish, processed meat, red meat, fruit, and vegetables. Individual COVID-19 exposure was estimated using the UK's average monthly positive case rate per specific geo-populations. Logistic regression estimated the odds of COVID-19 positivity by diet status, adjusting for baseline socio-demographic factors, medical history, and other lifestyle factors. Another model was further adjusted for COVID-19 exposure.
METHODS
Eligible UKB participants (n = 37,988) were 40 to 70 years of age at baseline; 17% tested positive for COVID-19 by SARS-CoV-2 PCR. After multivariable adjustment, the odds (95% CI) of COVID-19 positivity were 0.90 (0.83, 0.96) when consuming 2-3 cups of coffee/day (vs. <1 cup/day), 0.88 (0.80, 0.98) when consuming vegetables in the third quartile of servings/day (vs. lowest quartile), 1.14 (1.01, 1.29) when consuming fourth-quartile servings of processed meats (vs. lowest quartile), and 0.91 (0.85, 0.98) when having been breastfed (vs. not breastfed). Associations were attenuated when further adjusted for COVID-19 exposure, but patterns of associations remained.
RESULTS
In the UK Biobank, consumption of coffee, vegetables, and being breastfed as a baby were favorably associated with incident COVID-19; intake of processed meat was adversely associated. Although these findings warrant independent confirmation, adherence to certain dietary behaviors may be an additional tool to existing COVID-19 protection guidelines to limit the spread of this virus.
CONCLUSIONS
[ "Aged", "Biological Specimen Banks", "Breast Feeding", "COVID-19", "Coffee", "Diet", "England", "Feeding Behavior", "Female", "Food Handling", "Humans", "Incidence", "Infant", "Logistic Models", "Male", "Meat", "Middle Aged", "Nutritional Status", "Public Health", "Risk Factors", "SARS-CoV-2", "United Kingdom", "Vegetables" ]
PMCID: 8234071
1. Introduction
Coronavirus disease (COVID-19) is an infectious disease caused by SARS-CoV-2 that predominantly attacks the respiratory system and has spread rapidly in the last year [1]. This unprecedented global public health crisis necessitates the expansion of our understanding of COVID-19 for purposes of prevention and mitigation of its severity and mortality risk in infected individuals [2]. Cross-sectional analyses identify patient characteristics and clinical data subsequent to a positive test, thereby informing the progression and severity of COVID-19. However, identifying, a priori, factors underlying potential susceptibility to the virus offers public health benefits. The immune system plays a key role in an individual's susceptibility and response to infectious diseases, including COVID-19 [3,4]. A major modifiable factor affecting immune function is dietary behavior, which influences nutritional status [5,6,7]. Ecological studies of COVID-19 report favorable correlations with specific vegetables and dietary patterns, such as the Mediterranean diet [8,9,10]. Some dietary supplements have been found to have an association with SARS-CoV-2 infection [11]. To our knowledge, no population data have been examined regarding the role of specific dietary intakes in the prevention of COVID-19. Using data from the UK Biobank (UKB), we examined the associations between dietary behaviors measured in 2006–2010 and incident COVID-19 infection in 2020. We additionally linked UKB geo-data to UK COVID-19 surveillance data to account for COVID-19 exposure in our analyses, which to our knowledge has never been done in COVID-19 studies of the UKB.
3. Results
3.1. Participant Characteristics
Of 37,988 eligible participants who were tested for COVID-19 from March to November 2020, 6482 (17%) tested positive. In general, participants in the analysis sample had baseline demographic and diet characteristics similar to those of the full UKB cohort (N = 502,633, Table 1). However, those in the analysis sample tended to report poorer health and more comorbidities than those in the full UKB cohort. Characteristics of the analysis sample, stratified by baseline age and race, are presented in Supplemental Tables S2 and S3. Briefly, compared with the older age group, the younger age group tended to be female, employed, and better educated, with better income and better self-rated health; they also tended to consume less tea, fruit, vegetables, fish, and red meat, and fewer had been breastfed. Compared with non-white participants, white participants tended to consume more coffee, tea, and processed meat and less fruit or vegetables, and fewer had been breastfed. Among non-whites, Black participants tended to live in higher-deprivation areas, be employed, have higher BMI, and consume more red meat, while Asian participants tended to report poorer health and consume more fruit and vegetables. The Asian group also had a higher incidence of COVID-19 positivity than the other racial groups. The exposure risk for the UKB sample in the context of the nation overall is illustrated in Figure 1 and Supplemental Figure S3, in which geo-positive tests are shown by wave for the UKB and UK data separately. Areas with a high number of positive tests in the UKB were generally similar to those in the UK overall.
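A minimal sketch, with toy data, of the baseline-comparison tests named in the Methods (F-tests and χ2 tests across age and race groups); the counts and distributions below are illustrative placeholders:

import numpy as np
from scipy.stats import chi2_contingency, f_oneway

# counts of breastfed yes/no within two age groups (illustrative numbers)
table = np.array([[1200, 800],
                  [1500, 600]])
chi2, p_chi2, dof, _ = chi2_contingency(table)

rng = np.random.default_rng(5)
coffee_young = rng.normal(1.8, 1.0, 200)   # cups/day, younger group
coffee_old = rng.normal(2.2, 1.0, 200)     # cups/day, older group
f_stat, p_f = f_oneway(coffee_young, coffee_old)
print(f"chi2 p = {p_chi2:.3g}, F-test p = {p_f:.3g}")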
3.2. Nutritional Factors and COVID-19 Positivity
With adjustment for age, race, and sex (Crude model), consumption of coffee, moderate tea, oily fish, and vegetables, and having been breastfed as a baby, were significantly associated with lower odds of COVID-19 positivity, while consumption of processed meat was associated with higher odds of COVID-19 positivity (Supplemental Table S4). After further adjustment for other socio-demographic, medical, and lifestyle factors, associations were attenuated; consumption of tea and oily fish was no longer significantly associated with COVID-19 infection (Model 1, Table 2). The odds (95% CI) of COVID-19 positivity were 0.90 (0.83, 0.98), 0.90 (0.83, 0.96), and 0.92 (0.84, 1.00) when consuming 1 cup, 2–3 cups, and 4+ cups of coffee/day (vs. <1 cup/day), respectively (Model 2). The odds (95% CI) of COVID-19 infection were 0.88 (0.80, 0.98) for individuals in the 3rd quartile of vegetable intake (vs. lowest quartile), 1.14 (1.01, 1.29) for individuals in the 4th quartile of processed meat intake (vs. lowest quartile), and 0.91 (0.85, 0.98) for those who had been breastfed as a baby (vs. not breastfed). We observed similar associations between these dietary factors assessed individually (Model 1) or mutually (Model 2) and COVID-19 infection, suggesting independent associations. Associations were attenuated when further adjusted for COVID-19 exposure, but the patterns of association remained (Model 3). Further adjustment for pre-existing compromised pulmonary function, or for incident diabetes and heart disease during follow-up, did not alter the results (Model 4, data not shown). Patterns of association were similar when individuals reporting diabetes, CVD, or use of cholesterol-lowering or antihypertension medication were excluded from the sample (data not shown), or when data for Wave 1 and Wave 2 were analyzed separately (Supplemental Table S5). Associations between nutritional factors and COVID-19 positivity were not significantly modified by age, sex, or race (p > 0.002 for interactions, Supplemental Tables S6–S8).
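A minimal sketch, with simulated data, of the modeling reported above: logistic regression of COVID-19 positivity on a categorical diet exposure, with odds ratios and 95% CIs from exponentiated coefficients. The real models additionally adjusted for the full set of socio-demographic, medical, and lifestyle covariates listed in the Methods:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "positive": rng.integers(0, 2, n),                   # 1 = tested positive
    "coffee": pd.Categorical(rng.choice(["<1", "1", "2-3", "4+"], n),
                             categories=["<1", "1", "2-3", "4+"]),  # "<1" = referent
    "age": rng.uniform(40, 70, n),
    "sex": rng.choice(["F", "M"], n),
})

fit = smf.logit("positive ~ C(coffee) + age + C(sex)", data=df).fit(disp=False)
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),                            # odds ratios
    "CI_low": np.exp(fit.conf_int()[0]),                 # 95% CI bounds
    "CI_high": np.exp(fit.conf_int()[1]),
})
print(or_table.round(2))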
5. Conclusions
Our results support the hypothesis that nutritional factors may influence distinct aspects of the immune system, hence susceptibility to COVID-19. Encouraging adherence to certain nutritional behaviors (e.g., increasing vegetable intake and reducing processed meat intake) may be an additional tool to existing COVID-19 protection guidelines to limit the spread of this virus. Nevertheless, our findings warrant confirmation in other populations.
[ "2. Materials and Methods", "2.1. UK Biobank", "2.2. COVID-19 Diagnosis", "2.3. COVID-19 Exposure", "2.4. Baseline Dietary Data", "2.5. Other Covariates", "2.6. Analysis Sample", "2.7. Statistical Analysis", "3.1. Participant Characteristics", "3.2. Nutritional Factors and COVID-19 Positivity" ]
[ "2.1. UK Biobank The UKB is an international health resource of over 500,000 participants aged 37–73 years at 22 centers across England, Wales, and Scotland. Details of the study methods have been described previously [12]. Briefly, participants underwent physical measurements, assessments about health and risk factors (including lifestyle and dietary behaviors), and blood sampling at baseline (2006–2010) and agreed to follow-up on their health status. Details of data collected are provided on the Showcase tab of the UKB website [13]. UKB ethical approval was from the National Research Ethics Service Committee North West–Haydock (approval letter dated 17 June 2011, Ref 11/NW/0382), and all study procedures were performed in accordance with the World Medical Association Declaration of Helsinki ethical principles for medical research. The current analysis was approved under the UKB application #21394 (PI, M.C.C).\nThe UKB is an international health resource of over 500,000 participants aged 37–73 years at 22 centers across England, Wales, and Scotland. Details of the study methods have been described previously [12]. Briefly, participants underwent physical measurements, assessments about health and risk factors (including lifestyle and dietary behaviors), and blood sampling at baseline (2006–2010) and agreed to follow-up on their health status. Details of data collected are provided on the Showcase tab of the UKB website [13]. UKB ethical approval was from the National Research Ethics Service Committee North West–Haydock (approval letter dated 17 June 2011, Ref 11/NW/0382), and all study procedures were performed in accordance with the World Medical Association Declaration of Helsinki ethical principles for medical research. The current analysis was approved under the UKB application #21394 (PI, M.C.C).\n2.2. COVID-19 Diagnosis COVID-19 test results from Public Health England have been dynamically linked to the UKB since 16 March 2020 [14]. The regularly updated COVID-19 data table provided to UKB researchers includes participant ID, record date, test location (mouth, nose, throat, trachea, etc.), testing laboratory (71 labs listed), and test results (negative or positive). The vast majority of samples tested are from combined nose/throat swabs that are transported in a medium suitable for viruses and subject to polymerase chain reaction testing.\nCOVID-19 test results from Public Health England have been dynamically linked to the UKB since 16 March 2020 [14]. The regularly updated COVID-19 data table provided to UKB researchers includes participant ID, record date, test location (mouth, nose, throat, trachea, etc.), testing laboratory (71 labs listed), and test results (negative or positive). The vast majority of samples tested are from combined nose/throat swabs that are transported in a medium suitable for viruses and subject to polymerase chain reaction testing.\n2.3. COVID-19 Exposure UKB provides grid co-ordinate data at the 1 km resolution as a measure of residential location [15]. The data are provided to researchers in the projection of the Ordnance Survey National Grid (OSGB1936) geographic reference system; measures refer to easting and northing with a reference point near the Isles of Sicily. These data were used in conjunction with country-wide surveillance data to identify UKB participants exposed to COVID-19 [16]. 
The UKB geo-data and UK COVID-19 surveillance data were imported and projected in ArcGIS (Environmental Systems Research Institute, Redlands, CA, USA) for visual inspection. To make the UKB geo-data (North/East) compatible with the latitude/longitude surveillance data, we converted the former to the latter using standard equations already built into ArcGIS [17]. Hotspot analysis was performed on the UK surveillance data to assign values (the "Gi*" statistic) back to each UKB participant, representing a COVID-19-exposure risk factor or score. We calculated monthly positive case rates per specific geo-populations, and the average of these monthly rates was used in the analysis.
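A minimal sketch of the coordinate conversion described above, using the open-source pyproj library in place of the ArcGIS workflow the authors used: OSGB 1936 / British National Grid (EPSG:27700) eastings/northings to WGS84 latitude/longitude (EPSG:4326). The example coordinate is a placeholder:

from pyproj import Transformer  # pip install pyproj

# always_xy=True -> input (easting, northing), output (longitude, latitude)
to_wgs84 = Transformer.from_crs("EPSG:27700", "EPSG:4326", always_xy=True)

easting, northing = 530_000, 180_000   # illustrative 1 km grid reference near London
lon, lat = to_wgs84.transform(easting, northing)
print(f"lat = {lat:.5f}, lon = {lon:.5f}")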
The details of the questions and possible answers for these diet factors are provided in Supplemental Table S1. Based on the participants’ responses, we computed portion sizes as cups or servings per day. Participants were also asked if they were breastfed as a baby and responded yes, no, or don’t know.\n2.5. Other Covariates Age, sex, race, education, employment status, type of accommodation lived in and number of co-habitants, smoking behaviors, and current health status were self-reported at baseline using the touchscreen. Townsend Deprivation Index, a measure of SES, was derived using census data and postal codes of participants, where higher scores represent higher deprivation. Body mass index (BMI) was calculated using height and weight measured at the assessment center. Physical activity was assessed using questions adapted from the validated short International Physical Activity Questionnaire [25]. Participants were asked how many minutes they engaged in each of the activities on a typical day. Duration of moderate and vigorous activities (minutes/day) was used in this analysis. History of diabetes, history of any heart diseases, and hypercholesterolemia or hypertensive medication use were obtained via self-reporting (and verified during a verbal interview) and hospital databases.\nBecause individuals with lung diseases may have an increased risk of COVID-19 infection [26], we further classified participants by their pre-existing compromised pulmonary function (yes/no respiratory diseases excluding common colds); data extracted from health records during follow-up. We also used health records to calculate incident diabetes and heart disease.\nAge, sex, race, education, employment status, type of accommodation lived in and number of co-habitants, smoking behaviors, and current health status were self-reported at baseline using the touchscreen. Townsend Deprivation Index, a measure of SES, was derived using census data and postal codes of participants, where higher scores represent higher deprivation. Body mass index (BMI) was calculated using height and weight measured at the assessment center. Physical activity was assessed using questions adapted from the validated short International Physical Activity Questionnaire [25]. Participants were asked how many minutes they engaged in each of the activities on a typical day. Duration of moderate and vigorous activities (minutes/day) was used in this analysis. History of diabetes, history of any heart diseases, and hypercholesterolemia or hypertensive medication use were obtained via self-reporting (and verified during a verbal interview) and hospital databases.\nBecause individuals with lung diseases may have an increased risk of COVID-19 infection [26], we further classified participants by their pre-existing compromised pulmonary function (yes/no respiratory diseases excluding common colds); data extracted from health records during follow-up. We also used health records to calculate incident diabetes and heart disease.\n2.6. Analysis Sample Because no COVID-19 test data were available for UKB assessment centers in Scotland and Wales, only participants in England were included. For this analysis, we included participants with test results between 16 March and 30 November 2020, which was before vaccines were rolled out in the UK (8 December 2020) [27]. There were 74,878 tests performed on 42,699 UKB participants. We excluded participants who had missing baseline data on nutritional factors and covariates (n = 4711). 
The final study sample included 37,988 UKB participants (Supplemental Figure S1).
2.7. Statistical Analysis
All analyses were performed using SAS (SAS Institute, Inc., Cary, NC, USA). Mapping was performed using ArcGIS. F-tests or χ2 tests were used to compare differences in baseline characteristics across participants' age and race/ethnicity. Our main outcome of interest was whether a person had any confirmed COVID-19 infection (defined as having any positive test result). Our exposures of interest included consumption of oily fish, processed meat, red meat, fruit, and vegetables, analyzed in quartiles of servings/day [each modeled: lowest quartile/Q1 (referent), Q2, Q3, and Q4]; tea and coffee [each modeled: none or <1 (referent), 1, 2–3, and ≥4 cups/day]; and having been breastfed as a baby [yes (referent), no, and don't know]. Because the cut-points defining Q2 and Q3 for oily fish were similar, Q3 and Q4 were combined. We examined the association between each dietary behavior and incident COVID-19 using logistic regression, first adjusting for age, sex, and race (White/Asian/Black/Mixed-Others) (Crude model), then further adjusting for Townsend Deprivation Index (quartiles), education (6 qualification classes), employment status (employed/retired/other), type of living accommodation (house/apartment/other), number of co-habitants (1, 2, 3, or ≥4), income (4 levels), physical activity (quartiles of moderate or vigorous activity, min/day), smoking (never/past/current), BMI (<25, 25–<30, and ≥30 kg/m2), self-rated health (excellent/good/fair/poor), hypercholesterolemia medication use (yes/no), hypertensive medication use (yes/no), diabetes (yes/no), and any heart disease (yes/no) (Model 1). Model 2 was similar to Model 1 but with the diet behaviors assessed mutually (i.e., including all 7 other dietary factors). Model 3 included the variables from Model 2 and further adjusted for the COVID-19 exposure score. Finally, Model 4 further adjusted for pre-existing compromised pulmonary function (yes/no). Statistical significance was defined as p < 0.05. No adjustments were made for multiple testing, as all tests were a priori. In sensitivity analyses, we excluded individuals self-reporting diabetes, heart disease, or use of cholesterol-lowering or antihypertension medication, because they may recently have made dietary changes consequential to the disease. We also adjusted for incident diabetes and heart disease during follow-up. Moreover, as shown in Supplemental Figure S2, there were two waves of new cases between March and November 2020 in the UK (March–August 2020, or Wave 1, and September–November 2020, or Wave 2), with greater numbers in Wave 2 than in Wave 1. The differences may reflect a change in testing rate, a true change in the risk of exposure to the virus, or some other factor. Regardless, analyses were also conducted for Wave 1 and Wave 2 separately.
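A minimal sketch of the exposure coding described earlier in this section: servings/day cut into quartiles with Q1 as the referent. pandas.qcut with duplicates="drop" mirrors the oily-fish situation, where tied cut-points force categories to be merged; the data below are simulated placeholders:

import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
veg = pd.Series(rng.gamma(2.0, 1.5, 1000))                   # servings/day, continuous
veg_q = pd.qcut(veg, q=4, labels=["Q1", "Q2", "Q3", "Q4"])   # Q1 = referent

# Many zero or tied values (as with oily fish) can collapse quartile boundaries:
fish = pd.Series(rng.choice([0.0, 0.0, 0.14, 0.29], 1000))
fish_q = pd.qcut(fish, q=4, duplicates="drop")               # fewer than 4 bins may result
print(veg_q.value_counts().sort_index(), fish_q.value_counts().sort_index(), sep="\n")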
We screened for effect modification (interaction) by sex (male or female), age (<55 or ≥55 years), and race (White, Asian, Black, or mixed/others) by including the cross-product term of each nutritional factor (e.g., coffee consumption, 4 categories) and the interacting variable in the multivariable regression models. Statistically significant interactions were defined as p < 0.002, accounting for the 24 tests (8 nutritional factors × 3 effect modifiers) performed.
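A minimal self-contained sketch, with simulated data, of the interaction screening described above: add a cross-product term for a diet factor and a potential effect modifier, then compare models. The paper does not state its interaction test; a likelihood-ratio test (one common choice) is used here, judged against the stated threshold p < 0.002:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(6)
n = 4000
df = pd.DataFrame({
    "positive": rng.integers(0, 2, n),                 # 1 = tested positive
    "coffee": rng.choice(["<1", "1", "2-3", "4+"], n),
    "sex": rng.choice(["F", "M"], n),
    "age": rng.uniform(40, 70, n),
})

base = smf.logit("positive ~ C(coffee) + C(sex) + age", data=df).fit(disp=False)
full = smf.logit("positive ~ C(coffee) * C(sex) + age", data=df).fit(disp=False)

lr_stat = 2 * (full.llf - base.llf)        # likelihood-ratio statistic
df_diff = full.df_model - base.df_model    # number of added interaction parameters
p_int = stats.chi2.sf(lr_stat, df_diff)
print(f"coffee x sex interaction: p = {p_int:.4f} (threshold 0.002)")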
", "The UKB is an international health resource of over 500,000 participants aged 37–73 years, recruited at 22 centers across England, Wales, and Scotland. Details of the study methods have been described previously [12]. Briefly, participants underwent physical measurements, assessments of health and risk factors (including lifestyle and dietary behaviors), and blood sampling at baseline (2006–2010) and agreed to follow-up on their health status. Details of the data collected are provided on the Showcase tab of the UKB website [13]. UKB ethical approval was granted by the National Research Ethics Service Committee North West–Haydock (approval letter dated 17 June 2011, Ref 11/NW/0382), and all study procedures were performed in accordance with the World Medical Association Declaration of Helsinki ethical principles for medical research. The current analysis was approved under UKB application #21394 (PI: M.C.C.).", "COVID-19 test results from Public Health England have been dynamically linked to the UKB since 16 March 2020 [14]. The regularly updated COVID-19 data table provided to UKB researchers includes participant ID, record date, specimen origin (mouth, nose, throat, trachea, etc.), testing laboratory (71 labs listed), and test result (negative or positive). The vast majority of samples tested are combined nose/throat swabs that are transported in a virus-suitable medium and subjected to polymerase chain reaction (PCR) testing.", "UKB provides grid coordinate data at 1 km resolution as a measure of residential location [15]. The data are provided to researchers in the projection of the Ordnance Survey National Grid (OSGB1936) geographic reference system; measures refer to easting and northing relative to a reference point near the Isles of Scilly. These data were used in conjunction with country-wide surveillance data to identify UKB participants exposed to COVID-19 [16]. The UKB geo-data and UK COVID-19 surveillance data were imported and projected in ArcGIS (Environmental Systems Research Institute, Redlands, CA, USA) for visual inspection. To make the UKB geo-data (northing/easting) compatible with the latitude/longitude surveillance data, we converted the former to the latter using standard equations built into ArcGIS [17]. Hotspot analysis was performed on the UK surveillance data to assign a value (the Getis-Ord “Gi*” statistic) to each UKB participant, representing a COVID-19 exposure risk score. We calculated monthly positive case rates for specific geo-populations, and the average of these monthly rates was used in the analysis.
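The conversion from OSGB easting/northing to latitude/longitude that the authors performed with ArcGIS’s built-in equations can be reproduced with any standard geodetic library. A minimal sketch using pyproj (EPSG:27700 is the OSGB 1936 / British National Grid; EPSG:4326 is WGS84); the coordinates below are illustrative, not UKB data:

```python
from pyproj import Transformer

# OSGB 1936 / British National Grid (EPSG:27700) -> WGS84 latitude/longitude (EPSG:4326)
to_wgs84 = Transformer.from_crs("EPSG:27700", "EPSG:4326", always_xy=True)

easting, northing = 530_000, 180_000  # illustrative 1 km grid reference (central London)
lon, lat = to_wgs84.transform(easting, northing)
print(f"lat={lat:.4f}, lon={lon:.4f}")  # roughly 51.5 N, -0.13 E
```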
", "The UKB touchscreen at baseline included a semi-quantitative food frequency questionnaire (FFQ) that assessed self-reported usual intake of 17 foods and beverages [18]. Participants were asked to report the number of pieces/tablespoons/cups of each item they consumed or to choose one of several pre-specified frequency categories. Among these items, baseline servings of vegetables (cooked, raw), fruit (fresh, dried), oily fish, processed meat, red meat (beef, lamb/mutton, or pork), tea, and coffee have been identified as nutritional factors implicated in immunity [19,20,21,22,23,24] and were thus targeted for consideration in this study. The details of the questions and possible answers for these diet factors are provided in Supplemental Table S1. Based on participants’ responses, we computed portion sizes as cups or servings per day. Participants were also asked whether they were breastfed as a baby and responded yes, no, or don’t know.
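Computing servings/day from the touchscreen FFQ is essentially a mapping from categorical frequency responses to daily equivalents. A minimal pandas sketch; the category labels and conversion weights here are invented for illustration (the actual UKB coding is given in Supplemental Table S1):

```python
import pandas as pd

# Hypothetical frequency-to-servings/day conversion; not the real UKB coding.
FREQ_TO_PER_DAY = {
    "never": 0.0,
    "less_than_once_per_week": 0.5 / 7,
    "once_per_week": 1 / 7,
    "2_4_per_week": 3 / 7,
    "5_6_per_week": 5.5 / 7,
    "daily_or_more": 1.0,
}

ffq = pd.DataFrame({"oily_fish_freq": ["never", "once_per_week", "2_4_per_week"]})
ffq["oily_fish_servings_day"] = ffq["oily_fish_freq"].map(FREQ_TO_PER_DAY)
```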
", "Age, sex, race, education, employment status, type of accommodation lived in, number of co-habitants, smoking behaviors, and current health status were self-reported at baseline using the touchscreen. The Townsend Deprivation Index, a measure of socioeconomic status (SES), was derived using census data and participants’ postal codes, where higher scores represent greater deprivation. Body mass index (BMI) was calculated using height and weight measured at the assessment center. Physical activity was assessed using questions adapted from the validated short International Physical Activity Questionnaire [25]. Participants were asked how many minutes they engaged in each of the activities on a typical day; the duration of moderate and vigorous activities (minutes/day) was used in this analysis. History of diabetes, history of any heart disease, and hypercholesterolemia or antihypertensive medication use were obtained via self-report (verified during a verbal interview) and hospital databases.\nBecause individuals with lung diseases may have an increased risk of COVID-19 infection [26], we further classified participants by pre-existing compromised pulmonary function (yes/no; respiratory diseases excluding common colds), with data extracted from health records during follow-up. We also used health records to identify incident diabetes and heart disease.", "Of 37,988 eligible participants who were tested for COVID-19 from March to November 2020, 6482 (17%) tested positive. In general, participants in the analysis sample had baseline demographic and diet characteristics similar to those of the full UKB cohort (N = 502,633, Table 1). However, those in the analysis sample tended to report poorer health and more comorbidities than the full UKB cohort.\nCharacteristics of the analysis sample, stratified by baseline age and race, are presented in Supplemental Tables S2 and S3. Briefly, compared to the older age group, the younger age group tended to be female, employed, and better educated, with better income and better self-rated health; they also tended to consume less tea, fruit, vegetables, fish, and red meat, and fewer had been breastfed. Compared to non-white participants, white participants tended to consume more coffee, tea, and processed meat and less fruit and vegetables, and fewer had been breastfed. Among non-white participants, Black participants tended to live in areas of higher deprivation, be employed, have higher BMI, and consume more red meat, while Asian participants tended to report poorer health and consume more fruit and vegetables. The Asian group also had a higher incidence of COVID-19 positivity than the other racial groups.\nThe exposure risk for the UKB sample in the context of the nation overall is illustrated in Figure 1 and Supplemental Figure S3, in which geo-positive tests are shown by wave for the UKB and UK data separately. Areas with a high number of positive tests in the UKB were generally similar to those in the UK overall.
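The baseline comparisons across age and race groups rest on standard F-tests and χ2 tests; a small scipy sketch with made-up counts, for orientation only:

```python
import numpy as np
from scipy import stats

# Chi-square: COVID-19 positivity by racial group (toy 3x2 contingency table).
counts = np.array([[25000, 5000],   # White: negative, positive
                   [  900,  300],   # Asian
                   [  700,  150]])  # Black
chi2, p_chi2, dof, expected = stats.chi2_contingency(counts)

# One-way ANOVA (F-test): a continuous characteristic (e.g., BMI) across two groups.
rng = np.random.default_rng(0)
f_stat, p_f = stats.f_oneway(rng.normal(27, 4, 200), rng.normal(28, 4, 200))
```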
", "With adjustment for age, race, and sex (Crude model), consumption of coffee, moderate amounts of tea, oily fish, and vegetables, as well as having been breastfed as a baby, were significantly associated with lower odds of COVID-19 positivity, while consumption of processed meat was associated with higher odds of COVID-19 positivity (Supplemental Table S4). After further adjustment for other socio-demographic, medical, and lifestyle factors, associations were attenuated; consumption of tea and oily fish was no longer significantly associated with COVID-19 infection (Model 1, Table 2). The odds ratios (95% CIs) of COVID-19 positivity were 0.90 (0.83, 0.98), 0.90 (0.83, 0.96), and 0.92 (0.84, 1.00) for consuming 1 cup, 2–3 cups, and 4+ cups of coffee/day (vs. <1 cup/day), respectively (Model 2). The odds ratio (95% CI) of COVID-19 infection was 0.88 (0.80, 0.98) for individuals in the 3rd quartile of vegetable intake (vs. lowest quartile), 1.14 (1.01, 1.29) for individuals in the 4th quartile of processed meat intake (vs. lowest quartile), and 0.91 (0.85, 0.98) for those who had been breastfed as a baby (vs. not breastfed). We observed similar associations between these dietary factors and COVID-19 infection whether the factors were assessed individually (Model 1) or mutually (Model 2), suggesting independent associations. Associations were attenuated when further adjusted for COVID-19 exposure, but the patterns of association remained (Model 3). Further adjustment for pre-existing compromised pulmonary function, or for incident diabetes and heart disease during follow-up, did not alter the results (Model 4, data not shown).\nPatterns of association were similar when individuals reporting diabetes, CVD, or use of cholesterol-lowering or antihypertensive medication were excluded from the sample (data not shown), or when data for Wave 1 and Wave 2 were analyzed separately (Supplemental Table S5). Associations between nutrition factors and COVID-19 positivity were not significantly modified by age, sex, or race (p > 0.002 for interactions, Supplemental Tables S6–S8)." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. UK Biobank", "2.2. COVID-19 Diagnosis", "2.3. COVID-19 Exposure", "2.4. Baseline Dietary Data", "2.5. Other Covariates", "2.6. Analysis Sample", "2.7. Statistical Analysis", "3. Results", "3.1. Participant Characteristics", "3.2. Nutritional Factors and COVID-19 Positivity", "4. Discussion", "5. Conclusions" ]
[ "Coronavirus (COVID-19) is an infectious disease caused by SARS-CoV-2 that predominantly attacks the respiratory system and has spread rapidly in the last year [1]. This unprecedented global public health crisis necessitates the expansion of our understanding of COVID-19 for purposes of prevention and mitigation of its severity and mortality risk in infected individuals [2].\nCross-sectional analyses identify patient characteristics and clinical data subsequent to a positive test, thereby informing the progression and severity of COVID-19. However, identifying factors underlying potential susceptibility a priori to the virus offers public health benefits. The immune system plays a key role in an individual’s susceptibility and response to infectious diseases, including COVID-19 [3,4]. A major modifiable factor affecting immune function is dietary behavior that influences nutritional status [5,6,7]. Ecological studies of COVID-19 report favorable correlations with specific vegetables and dietary patterns, such as the Mediterranean diet [8,9,10]. Some dietary supplements were found to have an association with SARS-CoV-2 infection [11]. To our knowledge, no population data have been examined regarding the role of specific dietary intakes in prevention of COVID-19.\nUsing data from the UK Biobank (UKB), we examined the associations between dietary behaviors measured in 2006–2010 and incident COVID-19 infection in 2020. We additionally linked UKB geo-data to UK COVID-19 surveillance data to account for COVID-19 exposure in our analyses, which to our knowledge has never been done in COVID-19 studies of the UKB.", "2.1. UK Biobank The UKB is an international health resource of over 500,000 participants aged 37–73 years at 22 centers across England, Wales, and Scotland. Details of the study methods have been described previously [12]. Briefly, participants underwent physical measurements, assessments about health and risk factors (including lifestyle and dietary behaviors), and blood sampling at baseline (2006–2010) and agreed to follow-up on their health status. Details of data collected are provided on the Showcase tab of the UKB website [13]. UKB ethical approval was from the National Research Ethics Service Committee North West–Haydock (approval letter dated 17 June 2011, Ref 11/NW/0382), and all study procedures were performed in accordance with the World Medical Association Declaration of Helsinki ethical principles for medical research. The current analysis was approved under the UKB application #21394 (PI, M.C.C).\nThe UKB is an international health resource of over 500,000 participants aged 37–73 years at 22 centers across England, Wales, and Scotland. Details of the study methods have been described previously [12]. Briefly, participants underwent physical measurements, assessments about health and risk factors (including lifestyle and dietary behaviors), and blood sampling at baseline (2006–2010) and agreed to follow-up on their health status. Details of data collected are provided on the Showcase tab of the UKB website [13]. UKB ethical approval was from the National Research Ethics Service Committee North West–Haydock (approval letter dated 17 June 2011, Ref 11/NW/0382), and all study procedures were performed in accordance with the World Medical Association Declaration of Helsinki ethical principles for medical research. The current analysis was approved under the UKB application #21394 (PI, M.C.C).\n2.2. 
\n2.2. COVID-19 Diagnosis COVID-19 test results from Public Health England have been dynamically linked to the UKB since 16 March 2020 [14]. The regularly updated COVID-19 data table provided to UKB researchers includes participant ID, record date, specimen origin (mouth, nose, throat, trachea, etc.), testing laboratory (71 labs listed), and test result (negative or positive). The vast majority of samples tested are combined nose/throat swabs that are transported in a virus-suitable medium and subjected to polymerase chain reaction (PCR) testing.\n2.3. COVID-19 Exposure UKB provides grid coordinate data at 1 km resolution as a measure of residential location [15]. The data are provided to researchers in the projection of the Ordnance Survey National Grid (OSGB1936) geographic reference system; measures refer to easting and northing relative to a reference point near the Isles of Scilly. These data were used in conjunction with country-wide surveillance data to identify UKB participants exposed to COVID-19 [16]. The UKB geo-data and UK COVID-19 surveillance data were imported and projected in ArcGIS (Environmental Systems Research Institute, Redlands, CA, USA) for visual inspection. To make the UKB geo-data (northing/easting) compatible with the latitude/longitude surveillance data, we converted the former to the latter using standard equations built into ArcGIS [17]. Hotspot analysis was performed on the UK surveillance data to assign a value (the Getis-Ord “Gi*” statistic) to each UKB participant, representing a COVID-19 exposure risk score. We calculated monthly positive case rates for specific geo-populations, and the average of these monthly rates was used in the analysis.
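Open-source equivalents of the ArcGIS hotspot tool exist; a rough sketch of a Getis-Ord Gi* computation with libpysal/esda, using random stand-in data (the real analysis ran on national surveillance case rates, and the distance threshold here is arbitrary):

```python
import numpy as np
from libpysal.weights import DistanceBand
from esda.getisord import G_Local

rng = np.random.default_rng(42)
coords = rng.uniform(0, 100_000, size=(200, 2))   # stand-in projected x/y locations
rates = rng.poisson(5, size=200).astype(float)    # stand-in local positive-case rates

w = DistanceBand(coords, threshold=15_000, binary=True)  # neighbors within 15 km
gi = G_Local(rates, w, star=True)                        # Getis-Ord Gi* statistic
hotspots = gi.Zs > 1.96                                  # high positive z-scores = hotspots
```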
\n2.4. Baseline Dietary Data The UKB touchscreen at baseline included a semi-quantitative food frequency questionnaire (FFQ) that assessed self-reported usual intake of 17 foods and beverages [18]. Participants were asked to report the number of pieces/tablespoons/cups of each item they consumed or to choose one of several pre-specified frequency categories. Among these items, baseline servings of vegetables (cooked, raw), fruit (fresh, dried), oily fish, processed meat, red meat (beef, lamb/mutton, or pork), tea, and coffee have been identified as nutritional factors implicated in immunity [19,20,21,22,23,24] and were thus targeted for consideration in this study. The details of the questions and possible answers for these diet factors are provided in Supplemental Table S1. Based on participants’ responses, we computed portion sizes as cups or servings per day. Participants were also asked whether they were breastfed as a baby and responded yes, no, or don’t know.\n2.5. Other Covariates Age, sex, race, education, employment status, type of accommodation lived in, number of co-habitants, smoking behaviors, and current health status were self-reported at baseline using the touchscreen. The Townsend Deprivation Index, a measure of socioeconomic status (SES), was derived using census data and participants’ postal codes, where higher scores represent greater deprivation. Body mass index (BMI) was calculated using height and weight measured at the assessment center. Physical activity was assessed using questions adapted from the validated short International Physical Activity Questionnaire [25]. Participants were asked how many minutes they engaged in each of the activities on a typical day; the duration of moderate and vigorous activities (minutes/day) was used in this analysis. History of diabetes, history of any heart disease, and hypercholesterolemia or antihypertensive medication use were obtained via self-report (verified during a verbal interview) and hospital databases.\nBecause individuals with lung diseases may have an increased risk of COVID-19 infection [26], we further classified participants by pre-existing compromised pulmonary function (yes/no; respiratory diseases excluding common colds), with data extracted from health records during follow-up. We also used health records to identify incident diabetes and heart disease.
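Covariate derivation here is mostly simple binning; for instance, the BMI categories used later in the models can be constructed as below (a trivial pandas sketch with made-up values):

```python
import pandas as pd

df = pd.DataFrame({"height_m": [1.70, 1.62, 1.80], "weight_kg": [68, 85, 102]})
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2

# Model categories: <25, 25-<30, >=30 kg/m2 (right=False gives half-open bins)
df["bmi_cat"] = pd.cut(df["bmi"], bins=[0, 25, 30, float("inf")],
                       labels=["<25", "25-<30", ">=30"], right=False)
```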
\n2.6. Analysis Sample Because no COVID-19 test data were available for UKB assessment centers in Scotland and Wales, only participants in England were included. For this analysis, we included participants with test results between 16 March and 30 November 2020, before vaccines were rolled out in the UK (8 December 2020) [27]. There were 74,878 tests performed on 42,699 UKB participants. We excluded participants who had missing baseline data on nutritional factors and covariates (n = 4711). The final study sample included 37,988 UKB participants (Supplemental Figure S1).\n2.7. Statistical Analysis All analyses were performed using SAS (SAS Institute, Inc., Cary, NC, USA). Mapping was performed using ArcGIS. F-tests or χ2 tests were used to compare differences in baseline characteristics across participants’ age and race/ethnicity groups.\nOur main outcome of interest was whether a person had any confirmed COVID-19 infection (defined as having any positive test result). Our exposures of interest were consumption of oily fish, processed meat, red meat, fruit, and vegetables, each analyzed in quartiles of servings/day [lowest quartile/Q1 (referent), Q2, Q3, and Q4]; consumption of tea and coffee, each analyzed in cups/day [none or <1 (referent), 1, 2–3, and ≥4]; and having been breastfed as a baby [yes (referent), no, and don’t know]. Because the cut-points defining Q2 and Q3 for oily fish were similar, Q3 and Q4 were combined.
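Exposure categorization of this kind is a quantile cut with the lowest quartile as referent; a minimal sketch, with invented servings data, including the merge of the top oily-fish categories described above:

```python
import pandas as pd

servings = pd.Series([0.0, 0.07, 0.14, 0.29, 0.33, 0.43, 0.57, 1.0])  # invented values
quartile = pd.qcut(servings, 4, labels=False)  # 0 = Q1 (referent), ..., 3 = Q4

# For oily fish, adjacent cut-points nearly coincided, so Q3 and Q4 were combined:
merged = quartile.clip(upper=2).map({0: "Q1", 1: "Q2", 2: "Q3-Q4"})
```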
\nWe examined the association between each dietary behavior and incident COVID-19 using logistic regression, first adjusting for age, sex, and race (White/Asian/Black/Mixed-Others) (Crude model), then further adjusting for Townsend deprivation index (quartiles), education (6 qualification classes), employment status (employed/retired/other), type of living accommodation (house/apartment/other), number of co-habitants (1, 2, 3, or ≥4), income (4 levels), physical activity (quartiles of moderate or vigorous activity, min/day), smoking (never/past/current), BMI (<25, 25–<30, and ≥30 kg/m2), self-rated health (excellent/good/fair/poor), hypercholesterolemia medication use (yes/no), antihypertensive medication use (yes/no), diabetes (yes/no), and any heart disease (yes/no) (Model 1). Model 2 was similar to Model 1 but with the dietary behaviors assessed mutually (i.e., each model additionally included the 7 other dietary factors). Model 3 included the variables from Model 2 and further adjusted for the COVID-19 exposure score. Finally, Model 4 further adjusted for pre-existing compromised pulmonary function (yes/no). Statistical significance was defined as p < 0.05. No adjustments were made for multiple testing because all tests were specified a priori.\nIn sensitivity analyses, we excluded individuals self-reporting diabetes, heart disease, or use of cholesterol-lowering or antihypertensive medication, because they may recently have made dietary changes in response to the disease. We also adjusted for incident diabetes and heart disease during follow-up. Moreover, as shown in Supplemental Figure S2, there were two waves of new cases between March and November 2020 in the UK (March–August 2020, or Wave 1, and September–November 2020, or Wave 2), with greater numbers in Wave 2 than in Wave 1. The differences may reflect a change in testing rate, a true change in risk of exposure to the virus, or some other factor. Regardless, analyses were also conducted for Wave 1 and Wave 2 separately.\nWe screened for effect modification (interaction) by sex (male or female), age (<55 or ≥55 years), and race (White, Asian, Black, or mixed/others) by including the cross-product term of each nutritional factor (e.g., coffee consumption, 4 categories) and the interacting variable in the multivariable regression models. Statistically significant interactions were defined as p < 0.002, accounting for the 24 tests (8 nutrition factors × 3 effect modifiers) performed.
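The four nested models translate naturally into incrementally extended regression formulas. A hedged Python/statsmodels sketch (the authors used SAS; all variable names below are hypothetical shorthand for the covariates listed above, and the analysis data frame is not supplied here):

```python
import statsmodels.formula.api as smf

crude = "covid_pos ~ C(coffee_cat) + age + C(sex) + C(race)"
m1 = crude + (" + C(townsend_q) + C(education) + C(employment) + C(accommodation)"
              " + C(cohabitants) + C(income) + C(activity_q) + C(smoking) + C(bmi_cat)"
              " + C(self_health) + chol_med + htn_med + diabetes + heart_disease")
m2 = m1 + (" + C(tea_cat) + C(fish_q) + C(procmeat_q) + C(redmeat_q)"
           " + C(fruit_q) + C(veg_q) + C(breastfed)")
m3 = m2 + " + exposure_score"      # adds the geo-derived COVID-19 exposure score
m4 = m3 + " + compromised_pulm"    # adds pre-existing compromised pulmonary function

def fit_all(df):
    """Fit each nested model on the analysis data frame (hypothetical columns)."""
    for name, formula in [("Crude", crude), ("Model 1", m1), ("Model 2", m2),
                          ("Model 3", m3), ("Model 4", m4)]:
        res = smf.logit(formula, df).fit(disp=0)
        print(name, round(res.prsquared, 4))
```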
", "3.1. Participant Characteristics Of 37,988 eligible participants who were tested for COVID-19 from March to November 2020, 6482 (17%) tested positive. In general, participants in the analysis sample had baseline demographic and diet characteristics similar to those of the full UKB cohort (N = 502,633, Table 1). However, those in the analysis sample tended to report poorer health and more comorbidities than the full UKB cohort.\nCharacteristics of the analysis sample, stratified by baseline age and race, are presented in Supplemental Tables S2 and S3. Briefly, compared to the older age group, the younger age group tended to be female, employed, and better educated, with better income and better self-rated health; they also tended to consume less tea, fruit, vegetables, fish, and red meat, and fewer had been breastfed. Compared to non-white participants, white participants tended to consume more coffee, tea, and processed meat and less fruit and vegetables, and fewer had been breastfed. Among non-white participants, Black participants tended to live in areas of higher deprivation, be employed, have higher BMI, and consume more red meat, while Asian participants tended to report poorer health and consume more fruit and vegetables. The Asian group also had a higher incidence of COVID-19 positivity than the other racial groups.\nThe exposure risk for the UKB sample in the context of the nation overall is illustrated in Figure 1 and Supplemental Figure S3, in which geo-positive tests are shown by wave for the UKB and UK data separately. Areas with a high number of positive tests in the UKB were generally similar to those in the UK overall.
\n3.2. Nutritional Factors and COVID-19 Positivity With adjustment for age, race, and sex (Crude model), consumption of coffee, moderate amounts of tea, oily fish, and vegetables, as well as having been breastfed as a baby, were significantly associated with lower odds of COVID-19 positivity, while consumption of processed meat was associated with higher odds of COVID-19 positivity (Supplemental Table S4). After further adjustment for other socio-demographic, medical, and lifestyle factors, associations were attenuated; consumption of tea and oily fish was no longer significantly associated with COVID-19 infection (Model 1, Table 2). The odds ratios (95% CIs) of COVID-19 positivity were 0.90 (0.83, 0.98), 0.90 (0.83, 0.96), and 0.92 (0.84, 1.00) for consuming 1 cup, 2–3 cups, and 4+ cups of coffee/day (vs. <1 cup/day), respectively (Model 2). The odds ratio (95% CI) of COVID-19 infection was 0.88 (0.80, 0.98) for individuals in the 3rd quartile of vegetable intake (vs. lowest quartile), 1.14 (1.01, 1.29) for individuals in the 4th quartile of processed meat intake (vs. lowest quartile), and 0.91 (0.85, 0.98) for those who had been breastfed as a baby (vs. not breastfed). We observed similar associations between these dietary factors and COVID-19 infection whether the factors were assessed individually (Model 1) or mutually (Model 2), suggesting independent associations. Associations were attenuated when further adjusted for COVID-19 exposure, but the patterns of association remained (Model 3). Further adjustment for pre-existing compromised pulmonary function, or for incident diabetes and heart disease during follow-up, did not alter the results (Model 4, data not shown).\nPatterns of association were similar when individuals reporting diabetes, CVD, or use of cholesterol-lowering or antihypertensive medication were excluded from the sample (data not shown), or when data for Wave 1 and Wave 2 were analyzed separately (Supplemental Table S5). Associations between nutrition factors and COVID-19 positivity were not significantly modified by age, sex, or race (p > 0.002 for interactions, Supplemental Tables S6–S8).
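The odds ratios and confidence intervals reported above are exponentiated logistic-regression coefficients. A minimal statsmodels sketch of that final extraction step, run on randomly generated toy data with hypothetical column names (so the ORs here hover around 1 by construction):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "covid_pos": rng.integers(0, 2, n),
    "coffee_cat": rng.choice(["<1", "1", "2-3", ">=4"], n),
    "age": rng.normal(65, 8, n),
})

fit = smf.logit("covid_pos ~ C(coffee_cat, Treatment('<1')) + age", df).fit(disp=0)
odds_ratios = np.exp(fit.params)   # e^beta: odds ratio per category vs. the referent
cis = np.exp(fit.conf_int())       # 95% confidence limits on the OR scale
print(odds_ratios.to_frame("OR").join(cis.rename(columns={0: "2.5%", 1: "97.5%"})))
```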
", "In this study, higher consumption of coffee and vegetables, having been breastfed as a baby, and lower consumption of processed meat were independently associated with lower odds of COVID-19 positivity. These associations were attenuated when accounting for the UK’s COVID-19 case rate (i.e., exposure).\nMuch research has focused on characterizing individuals who test positive for COVID-19 and on infection progression and outcomes. Individuals with suppressed immune systems, such as the elderly and those with existing comorbidities including cardiovascular diseases, hypertension, diabetes, and obesity, are more likely to progress toward severe outcomes of COVID-19 [3,4,28,29]. Less attention has focused on modifiable risk factors preceding COVID-19 infection. While nutrition may theoretically impact COVID-19 susceptibility [3,6,30,31,32], few investigations have specifically tested the hypothesis a priori. Low vitamin D status is associated with COVID-19 infection, severity, and mortality [33,34]. Del Ser et al. [35] recently explored a variety of risk factors for the incidence, severity, and mortality of COVID-19 in a Spanish cohort of 913 volunteers aged 75–90 years. Sixty-two cases reported symptoms compatible with COVID-19; 6 of them died. Higher alcohol and lower coffee and tea consumption were associated with disease severity; other dietary behaviors were not considered.\nIn the UKB, habitual consumption of 1 or more cups of coffee per day was associated with about a 10% decrease in risk of COVID-19 compared with less than 1 cup/day. Coffee is not only a key source of caffeine but also contributes dozens of other constituents, including many implicated in immunity [21]. In many populations, coffee is the major contributor to total polyphenol intake, phenolic acids in particular [36,37]. Coffee, caffeine, and polyphenols have antioxidant and anti-inflammatory properties [38,39,40,41,42,43,44]. 
Coffee consumption favorably correlates with inflammatory biomarkers such as C-reactive protein (CRP), interleukin-6 (IL-6), and tumor necrosis factor α (TNF-α) [38,45,46,47,48,49,50,51], which are also associated with COVID-19 severity and mortality [52,53]. Coffee consumption has also been associated with a lower risk of pneumonia in the elderly [54]. Taken together, an immunoprotective effect of coffee against COVID-19 is plausible and merits further investigation.\nFruits and vegetables are rich dietary sources of vitamins, folate, fiber, and several phytochemicals such as carotenoids and flavonoids [55,56]. These substances have anti-inflammatory, antibacterial, and antiviral properties and are thus immune-protective [19,57,58]. In the current study, consumption of at least 0.67 servings/day of vegetables (cooked or raw, excluding potatoes) was associated with a lower risk of COVID-19 infection. Recent ecological studies of COVID-19 report that countries with high consumption of foods with potent antioxidant or anti-angiotensin-converting enzyme (ACE) activity, such as raw or fermented cabbage, have lower COVID-19 death rates than other countries [59,60]. Studies of vegetables and fermented foods in relation to COVID-19 mortality in Europe reported that each g/day increase in the average national consumption of head cabbage, cucumber, or fermented vegetables decreased the COVID-19 mortality risk by 11–35%; consumption of broccoli, surprisingly, was associated with increased COVID-19 mortality [9,10]. COVID-19 mortality was not our outcome of interest, and while we did not analyze vegetables individually, the UK’s national consumption of broccoli is above the European average; our findings thus do not align with the previously reported relationship between broccoli consumption and COVID-19 mortality [9,10]. Fruit (fresh and dried) consumption was not associated with COVID-19 risk in the UKB. While fruits and vegetables share several health benefits, the specific bioactive compounds in fruits and in vegetables can vary [61]. Fruits are also relatively higher in sugar (fructose), while vegetables contain more starch. Follow-up studies of our current findings might therefore focus on vegetable constituents distinct from those of fruit.\nProcessed meat consumption of as little as 0.43 servings/day was associated with a higher risk of COVID-19 in the UKB. However, red meat consumption presented no risk, suggesting that meat per se does not underlie the association we observed for processed meat. Processed meat refers to any meat that has been transformed through salting, curing, fermenting, smoking, or another process to enhance flavor or improve preservation [62]. In the UK, sausages, bacon, and ham are the major contributors to processed meat intake [63], and these often contain salt enriched with nitrates/nitrites. Preservatives and other additives are increasingly being used yet are difficult to measure in observational studies [64]. These include milk-, soy-, and wheat-based ingredients, spices, ascorbic acid, phosphates, antioxidants, monosodium glutamate, food colorings, blood plasma, gelatin, and transglutaminase [62]. Total and saturated fat concentrations are also relatively higher in processed meats [65]. 
Consumption of human milk has also emerged as an early-life factor impacting immunity in both infancy and adulthood, with links to lower rates of allergy, influenza, asthma, and other respiratory infections [66,67]. Breastfeeding has also been linked to epigenetic changes in toll-like receptors, which play significant roles in innate immunity [68]. We found a long-term favorable association between being breastfed as a baby and COVID-19 infection in the UKB, adding to the growing evidence that early-life nutrition supports lifelong immune health.

The strengths of this study include access to the largest cohort established years before the COVID-19 pandemic, with detailed health, lifestyle, and nutrition data and ongoing follow-up. Although much COVID-19 research [69] has incorporated this database, we are the first to leverage an independent UK COVID-19 surveillance database to estimate participant COVID-19 exposure [70]. COVID-19 data from the UKB are consistent with public government data, which adds confidence to our approach and results.

The study has limitations. First, only a portion of overall UKB participants were tested for COVID-19 (about 10%) during the study timeframe, and these participants were slightly older, less educated, and less employed, while reporting poorer health, than the original UKB cohort. Those factors were associated with higher odds of COVID-19 infection in our analysis sample. We note, however, that the full UKB is itself not representative of the sampling population, with evidence of a 'healthy volunteer' selection bias [71]. Participants in these two samples nevertheless had many baseline characteristics in common, including dietary behaviors, and any effects of selection bias would likely lead to an underestimation of the true associations between nutrition and COVID-19. Second, diet assessment tools are generally prone to measurement error: limitations inherent to closed-ended options, inaccurate portion-size estimates, and social desirability bias are among the sources contributing to error [12,72]. Nevertheless, we were uniquely positioned to pursue this investigation with novel aims and analyses; although the effect sizes may be imprecise, establishing a relationship between diet and COVID-19 is informative in itself and can be inferred from the direction and statistical significance of the association [12,72]. Finally, the current study is observational and cannot infer causality. While we accounted for a comprehensive set of potential confounders, we cannot discount the possibility of residual confounding, whether positive or negative. Moreover, we had no concurrent pandemic data on other established risk factors for COVID-19 infection, such as participant social distancing behavior, work environment, and face mask use; some of these factors may correlate with diet behaviors.

5. Conclusions

Our results support the hypothesis that nutritional factors may influence distinct aspects of the immune system and hence susceptibility to COVID-19. Encouraging adherence to certain nutritional behaviors (e.g., increasing vegetable intake and reducing processed meat intake) may be an additional tool to existing COVID-19 protection guidelines to limit the spread of this virus.
Nevertheless, our findings warrant confirmation in other populations.
[ "intro", null, null, null, null, null, null, null, null, "results", null, null, "discussion", "conclusions" ]
[ "nutrition", "COVID-19", "immunity", "coffee", "breastfeeding", "epidemiology" ]
1. Introduction

Coronavirus disease (COVID-19) is an infectious disease caused by SARS-CoV-2 that predominantly attacks the respiratory system and has spread rapidly in the last year [1]. This unprecedented global public health crisis necessitates the expansion of our understanding of COVID-19 for purposes of prevention and of mitigating its severity and mortality risk in infected individuals [2]. Cross-sectional analyses identify patient characteristics and clinical data subsequent to a positive test, thereby informing the progression and severity of COVID-19. However, identifying factors underlying potential susceptibility to the virus a priori offers public health benefits. The immune system plays a key role in an individual's susceptibility and response to infectious diseases, including COVID-19 [3,4]. A major modifiable factor affecting immune function is dietary behavior, which influences nutritional status [5,6,7]. Ecological studies of COVID-19 report favorable correlations with specific vegetables and dietary patterns, such as the Mediterranean diet [8,9,10]. Some dietary supplements have been found to be associated with SARS-CoV-2 infection [11]. To our knowledge, no population data have been examined regarding the role of specific dietary intakes in the prevention of COVID-19. Using data from the UK Biobank (UKB), we examined the associations between dietary behaviors measured in 2006–2010 and incident COVID-19 infection in 2020. We additionally linked UKB geo-data to UK COVID-19 surveillance data to account for COVID-19 exposure in our analyses, which to our knowledge has not previously been done in COVID-19 studies of the UKB.

2. Materials and Methods

2.1. UK Biobank

The UKB is an international health resource of over 500,000 participants aged 37–73 years at 22 centers across England, Wales, and Scotland. Details of the study methods have been described previously [12]. Briefly, participants underwent physical measurements, assessments about health and risk factors (including lifestyle and dietary behaviors), and blood sampling at baseline (2006–2010) and agreed to follow-up on their health status. Details of the data collected are provided on the Showcase tab of the UKB website [13]. UKB ethical approval was from the National Research Ethics Service Committee North West–Haydock (approval letter dated 17 June 2011, Ref 11/NW/0382), and all study procedures were performed in accordance with the World Medical Association Declaration of Helsinki ethical principles for medical research. The current analysis was approved under UKB application #21394 (PI, M.C.C.).
2.2. COVID-19 Diagnosis

COVID-19 test results from Public Health England have been dynamically linked to the UKB since 16 March 2020 [14]. The regularly updated COVID-19 data table provided to UKB researchers includes participant ID, record date, test location (mouth, nose, throat, trachea, etc.), testing laboratory (71 labs listed), and test results (negative or positive). The vast majority of samples tested are from combined nose/throat swabs that are transported in a medium suitable for viruses and subjected to polymerase chain reaction testing.

2.3. COVID-19 Exposure

UKB provides grid co-ordinate data at 1 km resolution as a measure of residential location [15]. The data are provided to researchers in the projection of the Ordnance Survey National Grid (OSGB1936) geographic reference system; measures refer to easting and northing with a reference point near the Isles of Scilly. These data were used in conjunction with country-wide surveillance data to identify UKB participants exposed to COVID-19 [16]. The UKB geo-data and UK COVID-19 surveillance data were imported and projected in ArcGIS (Environmental Systems Research Institute, Redlands, CA, USA) for visual inspection. To make the UKB geo-data (eastings/northings) compatible with the latitude/longitude surveillance data, we converted the former to the latter using standard equations already built into ArcGIS [17]. Hotspot analysis was performed on the UK surveillance data to assign values (the Getis–Ord Gi* statistic) back to each UKB participant, representing a COVID-19-exposure risk factor or score. We calculated monthly positive case rates per specific geo-populations, and the average of these monthly rates was used in the analysis.
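As a rough illustration of the two geospatial steps just described, the sketch below reprojects OSGB1936 eastings/northings to latitude/longitude and averages monthly positive case rates, using pyproj and pandas rather than the ArcGIS tooling the study actually used; the Getis–Ord Gi* hotspot step is omitted, and all column names and values are assumptions:

```python
import pandas as pd
from pyproj import Transformer

# Step 1: reproject OSGB1936 / British National Grid (EPSG:27700) eastings and
# northings to WGS84 longitude/latitude (EPSG:4326), matching the surveillance data.
transformer = Transformer.from_crs("EPSG:27700", "EPSG:4326", always_xy=True)
ukb = pd.DataFrame({"easting": [530_000, 430_000], "northing": [180_000, 560_000]})
ukb["lon"], ukb["lat"] = transformer.transform(ukb["easting"].values,
                                               ukb["northing"].values)

# Step 2: average monthly positive case rate per geographic unit, later joined
# back to participants by location as an exposure score.
surveillance = pd.DataFrame({
    "area":       ["A", "A", "B", "B"],
    "month":      ["2020-03", "2020-04", "2020-03", "2020-04"],
    "positives":  [120, 340, 60, 95],
    "population": [100_000, 100_000, 50_000, 50_000],
})
surveillance["monthly_rate"] = surveillance["positives"] / surveillance["population"]
exposure = surveillance.groupby("area")["monthly_rate"].mean().rename("exposure_score")
```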
2.4. Baseline Dietary Data

The UKB touchscreen at baseline included a semi-quantitative food frequency questionnaire (FFQ) that assessed self-reported usual intake of 17 foods and beverages [18]. Participants were asked to report the number of pieces/tablespoons/cups of each item they consumed or to choose one of several pre-specified frequency categories. Among these items, baseline servings of vegetables (cooked, raw), fruit (fresh, dried), oily fish, processed meat, red meat (beef, lamb/mutton, or pork), tea, and coffee have specifically been identified as nutritional factors implicated in immunity [19,20,21,22,23,24] and were thus targeted for consideration in this study. The details of the questions and possible answers for these diet factors are provided in Supplemental Table S1. Based on the participants' responses, we computed portion sizes as cups or servings per day. Participants were also asked if they were breastfed as a baby and responded yes, no, or don't know.
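Converting the FFQ responses into servings per day amounts to mapping frequency categories onto daily equivalents. A minimal sketch follows; the category labels and serving equivalents below are illustrative assumptions, not the actual UKB response options (those are in Supplemental Table S1):

```python
import pandas as pd

# Hypothetical mapping from FFQ frequency categories to servings/day; the real
# UKB categories and serving equivalents differ (see Supplemental Table S1).
FREQ_TO_SERVINGS_PER_DAY = {
    "never": 0.0,
    "less_than_once_a_week": 0.5 / 7,
    "once_a_week": 1 / 7,
    "2_4_times_a_week": 3 / 7,
    "once_or_more_daily": 1.0,
}

responses = pd.Series(["once_a_week", "once_or_more_daily", "never"],
                      name="processed_meat")
servings_per_day = responses.map(FREQ_TO_SERVINGS_PER_DAY)
```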
2.5. Other Covariates

Age, sex, race, education, employment status, type of accommodation lived in, number of co-habitants, smoking behaviors, and current health status were self-reported at baseline using the touchscreen. The Townsend Deprivation Index, a measure of socioeconomic status, was derived using census data and participants' postal codes, with higher scores representing higher deprivation. Body mass index (BMI) was calculated using height and weight measured at the assessment center. Physical activity was assessed using questions adapted from the validated short International Physical Activity Questionnaire [25]. Participants were asked how many minutes they engaged in each of the activities on a typical day; the duration of moderate and vigorous activities (minutes/day) was used in this analysis. History of diabetes, history of any heart disease, and hypercholesterolemia or hypertensive medication use were obtained via self-report (verified during a verbal interview) and hospital databases. Because individuals with lung diseases may have an increased risk of COVID-19 infection [26], we further classified participants by pre-existing compromised pulmonary function (yes/no respiratory diseases, excluding common colds), with data extracted from health records during follow-up. We also used health records to identify incident diabetes and heart disease.

2.6. Analysis Sample

Because no COVID-19 test data were available for UKB assessment centers in Scotland and Wales, only participants in England were included. For this analysis, we included participants with test results between 16 March and 30 November 2020, which was before vaccines were rolled out in the UK (8 December 2020) [27]. There were 74,878 tests performed on 42,699 UKB participants. We excluded participants who had missing baseline data on nutritional factors and covariates (n = 4711). The final study sample included 37,988 UKB participants (Supplemental Figure S1).
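The sample derivation reduces to a date filter, a per-participant collapse of test results, and a complete-case restriction. A pandas sketch under assumed column names (`eid`, `date`, `result`):

```python
import pandas as pd

def derive_analysis_sample(tests: pd.DataFrame, baseline: pd.DataFrame,
                           required_cols: list) -> pd.DataFrame:
    """One row per participant with a binary outcome, complete-case restricted."""
    # Pre-vaccination testing window (16 March to 30 November 2020).
    window = tests[(tests["date"] >= "2020-03-16") & (tests["date"] <= "2020-11-30")]
    # Any positive result defines COVID-19 positivity (result assumed coded 0/1).
    outcome = window.groupby("eid")["result"].max().rename("covid_positive")
    # Keep only tested participants; drop missing diet factors and covariates.
    sample = baseline.join(outcome, on="eid", how="inner")
    return sample.dropna(subset=required_cols)
```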
2.7. Statistical Analysis

All analyses were performed using SAS (SAS Institute, Inc., Cary, NC, USA); mapping was performed using ArcGIS. F-tests or χ2 tests were used to compare differences in baseline characteristics across participants' age and race/ethnicity groups. Our main outcome of interest was whether a person had any confirmed COVID-19 infection (defined as having any positive test result). Our exposures of interest included consumption of coffee, tea, oily fish, processed meat, red meat, fruit, and vegetables. Oily fish, processed meat, red meat, fruit, and vegetables were analyzed in quartiles of servings/day [each modeled: lowest quartile/Q1 (referent), Q2, Q3, and Q4]; tea and coffee were each modeled as none or <1 (referent), 1, 2–3, and ≥4 cups/day; and breastfed as a baby was modeled as yes (referent), no, and don't know. Because the cut-points to define Q2 and Q3 for oily fish were similar, Q3 and Q4 were combined.
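A sketch of this exposure coding in pandas; `duplicates="drop"` mirrors the kind of bin merging described for oily fish when quartile cut-points coincide, and the fixed cups/day categories apply to tea and coffee:

```python
import pandas as pd

def quartile_code(servings: pd.Series) -> pd.Series:
    """Q1 (referent) .. Q4; tied cut-points collapse bins, as with oily fish."""
    return pd.qcut(servings, q=4, duplicates="drop")

def cups_category(cups: pd.Series) -> pd.Series:
    """Fixed categories for tea and coffee: <1 (referent), 1, 2-3, >=4 cups/day."""
    bins = [-float("inf"), 1, 2, 4, float("inf")]
    return pd.cut(cups, bins=bins, right=False, labels=["<1", "1", "2-3", ">=4"])
```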
We examined the association between each dietary behavior and incident COVID-19 using logistic regression, first adjusting for age, sex, and race (White/Asian/Black/mixed-others) (Crude model), then further adjusting for Townsend Deprivation Index (quartiles), education (6 qualification classes), employment status (employed/retired/other), type of living accommodation (house/apartment/other), number of co-habitants (1, 2, 3, or ≥4), income (4 levels), physical activity (quartiles of moderate or vigorous activity, min/day), smoking (never/past/current), BMI (<25, 25–<30, and ≥30 kg/m2), self-rated health (excellent/good/fair/poor), hypercholesterolemia medication use (yes/no), hypertensive medication use (yes/no), diabetes (yes/no), and any heart disease (yes/no) (Model 1). Model 2 was similar to Model 1 but with the diet behaviors assessed mutually (i.e., each model additionally including the 7 other dietary factors). Model 3 included the variables from Model 2 and further adjusted for the COVID-19 exposure score. Finally, Model 4 further adjusted for pre-existing compromised pulmonary function (yes/no). Statistical significance was defined as p < 0.05. No adjustments were made for multiple testing, as all tests were specified a priori.
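The study fit these models in SAS; as an illustration only, the sketch below shows an equivalent Model 1 specification in Python's statsmodels, with exponentiated coefficients yielding odds ratios and 95% CIs like those reported in the Results (all variable names are assumptions, and `sample` is the data frame from the derivation sketch above):

```python
import numpy as np
import statsmodels.formula.api as smf

# Model 1 for one exposure (coffee), with '<1 cup/day' as the referent category.
formula = (
    "covid_positive ~ C(coffee_cat, Treatment('<1')) + age + C(sex) + C(race)"
    " + C(townsend_q) + C(education) + C(employment) + C(accommodation)"
    " + C(cohabitants) + C(income) + C(activity_q) + C(smoking) + C(bmi_cat)"
    " + C(self_rated_health) + chol_med + htn_med + diabetes + heart_disease"
)
fit = smf.logit(formula, data=sample).fit()   # `sample` from the sketch above
odds_ratios = np.exp(fit.params)              # adjusted ORs
or_ci = np.exp(fit.conf_int())                # 95% CIs on the OR scale
```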
In sensitivity analyses, we excluded individuals self-reporting diabetes, heart disease, or the use of cholesterol-lowering or antihypertension medication, because they may recently have made dietary changes in consequence of their disease. We also adjusted for incident diabetes and heart disease during follow-up. Moreover, as shown in Supplemental Figure S2, there were two waves of new cases between March and November 2020 in the UK (March–August 2020, or Wave 1, and September–November 2020, or Wave 2), with greater numbers in Wave 2 than in Wave 1. The difference may reflect a change in testing rate, a true change in the risk of exposure to the virus, or some other factor. Regardless, analyses were also conducted for Wave 1 and Wave 2 separately.
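Both sensitivity steps are simple data operations; a sketch continuing the modeling example above (boolean covariate columns and ISO-format date strings are assumed, and `formula` is from the previous sketch):

```python
import numpy as np
import statsmodels.formula.api as smf

# Exclusion-based sensitivity analysis: drop participants with conditions or
# medications that may have prompted recent dietary change.
no_dx = sample[~(sample["diabetes"] | sample["heart_disease"]
                 | sample["chol_med"] | sample["htn_med"])]

# Wave-stratified analysis: tag each participant by first test date, refit per wave.
sample["wave"] = np.where(sample["first_test_date"] <= "2020-08-31", "wave1", "wave2")
wave_fits = {w: smf.logit(formula, data=g).fit() for w, g in sample.groupby("wave")}
```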
We screened for effect modification (interaction) by sex (male or female), age (<55 or ≥55 years), and race (White, Asian, Black, or mixed/others) by including the cross-product term of each nutritional factor (e.g., coffee consumption, 4 categories) and the interacting variable in the multivariable regression models. Statistically significant interactions were defined as p < 0.002, accounting for the 24 tests performed (8 nutrition factors × 3 effect modifiers).
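A sketch of one such interaction screen, with the Bonferroni-style threshold computed from the number of planned tests (covariates abridged; all column names are assumptions):

```python
import statsmodels.formula.api as smf

# Threshold for the 24 planned tests: 0.05 / (8 factors x 3 modifiers) ~= 0.002.
alpha = 0.05 / (8 * 3)

# '*' expands to the main effects plus the cross-product (interaction) terms.
inter = smf.logit("covid_positive ~ C(coffee_cat) * C(sex) + age + C(race)",
                  data=sample).fit()
interaction_p = inter.pvalues.filter(like=":")   # cross-product terms contain ':'
significant = interaction_p[interaction_p < alpha]
```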
Abstract

Background: Nutritional status influences immunity, but its specific association with susceptibility to COVID-19 remains unclear. We examined the associations of specific dietary data with incident COVID-19 in the UK Biobank (UKB). Methods: We considered UKB participants in England with self-reported baseline (2006–2010) data and linked them to Public Health England COVID-19 test results (performed on samples from combined nose/throat swabs using real-time polymerase chain reaction, RT-PCR) between March and November 2020. Baseline diet factors included being breastfed as a baby and specific consumption of coffee, tea, oily fish, processed meat, red meat, fruit, and vegetables. Individual COVID-19 exposure was estimated using the UK's average monthly positive case rate per specific geo-population. Logistic regression estimated the odds of COVID-19 positivity by diet status, adjusting for baseline socio-demographic factors, medical history, and other lifestyle factors. Another model was further adjusted for COVID-19 exposure. Results: Eligible UKB participants (n = 37,988) were 40 to 70 years of age at baseline; 17% tested positive for COVID-19 by SARS-CoV-2 PCR. After multivariable adjustment, the odds (95% CI) of COVID-19 positivity were 0.90 (0.83, 0.96) when consuming 2–3 cups of coffee/day (vs. <1 cup/day), 0.88 (0.80, 0.98) when consuming vegetables in the third quartile of servings/day (vs. lowest quartile), 1.14 (1.01, 1.29) when consuming fourth-quartile servings of processed meats (vs. lowest quartile), and 0.91 (0.85, 0.98) when having been breastfed (vs. not breastfed). Associations were attenuated when further adjusted for COVID-19 exposure, but the patterns of association remained. Conclusions: In the UK Biobank, consumption of coffee and vegetables and being breastfed as a baby were favorably associated with incident COVID-19; intake of processed meat was adversely associated. Although these findings warrant independent confirmation, adherence to certain dietary behaviors may be an additional tool to existing COVID-19 protection guidelines to limit the spread of this virus.
Keywords: nutrition | COVID-19 | immunity | coffee | breastfeeding | epidemiology
MeSH terms: Aged | Biological Specimen Banks | Breast Feeding | COVID-19 | Coffee | Diet | England | Feeding Behavior | Female | Food Handling | Humans | Incidence | Infant | Logistic Models | Male | Meat | Middle Aged | Nutritional Status | Public Health | Risk Factors | SARS-CoV-2 | United Kingdom | Vegetables
Pediatric self-medication use in Rwanda - a cross-sectional study.
PMID: 34394269
BACKGROUND: Self-medication, a worldwide practice, has both benefits and risks. Many countries have regulated the non-prescription medications available for use in self-medication. However, in countries such as Rwanda, where prescriptions are not required to purchase medications, prescription, non-prescription, and traditional medications have all been used for self-medication.
METHODS: A cross-sectional, multi-center, questionnaire-based quantitative study of 154 parents/caregivers of children under ten years of age, undertaken in private and public health facilities.
RESULTS: The use of self-medication was reported by 77.9% of parents/caregivers. Among these, 50.8% used modern self-medication only, 15.8% used traditional self-medication only, and 33.3% used both types of self-medication. Paracetamol was the most commonly used drug in modern self-medication; the traditional drugs used were local Rwandan herbs. Parents/caregivers who used modern medicines had slightly more confidence in self-medication than users of traditional medicines (p=0.005). Parents/caregivers who used modern self-medication reported barriers to consultation as a reason to self-medicate more frequently than those who used traditional drugs. Having more than one child below 10 years of age was the only socio-demographic factor associated with having used self-medication (AOR=4.74, CI: 1.94-11.58, p=0.001). Being above 30 years of age (AOR=5.78, CI: 1.25-26.68, p=0.025) and living in Kigali (AOR=8.2, CI: 1.58-43.12, p=0.012) were factors associated with a preference for modern self-medication over traditional self-medication.
CONCLUSION: Self-medication is common in Rwanda. Parents/caregivers are involved in this practice regardless of their socio-demographic background.
[ "Attitude to Health", "Caregivers", "Child", "Child, Preschool", "Cross-Sectional Studies", "Female", "Health Knowledge, Attitudes, Practice", "Health Services Accessibility", "Humans", "Infant", "Infant, Newborn", "Male", "Nonprescription Drugs", "Parents", "Rwanda", "Self Medication", "Surveys and Questionnaires", "Young Adult" ]
PMCID: 8351863
Introduction
Since the declaration of Alma-Ata in 1978, the principles of self-care, individual participation, responsibility, and involvement in one's own health care have been recognized as important elements of primary healthcare 1–3. Self-care includes self-medication, which the WHO defines as “the selection and use of medicines by individuals to treat self-recognized illnesses or symptoms” 4–7. In the pediatric population, self-medication implies the administration of a drug to a child, by a caregiver, without prior medical consultation 8. The WHO defines traditional medicine as “the sum of knowledge, skill, and practices based on the theories, beliefs, and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness” 9,10. Traditional medicines will often be chosen and provided by a traditional healer or therapist, but they can also be produced or purchased by an individual and used as a form of self-medication.
Responsible self-medication implies the use of approved, safe, and effective drugs, accompanied by information to direct users 11. There are several advantages to responsible self-medication, but the population must be aware of the potential risks and harms of self-medication 12,13.
Factors associated with the parental choice to self-medicate their children vary, but include: parents/caregivers perceiving their child's illness as mild and not requiring health professional consultation, lack of time to attend consultations, high consultation fees, clinic waiting times, emergency treatment, use of old prescriptions available in the home, parental comfort in recognizing their children's disease based on the symptoms, and having experience with the medication 14,15.
Pediatric self-medication is a worldwide practice, with a reported prevalence between 32% and 98% in Madagascar, India, Greece, and Australia 8,16–18. Tanzania, a fellow nation of the East African Community, has reported a pediatric self-medication rate of 69% 19.
There are also differences in the education and economic factors associated with the choice to self-medicate. Studies from Spain, Finland, Italy, and Madagascar 12,19–21 found that families with high income and with secondary or higher education practice more self-medication than families with low income and low education. On the other hand, in studies done in Germany and India, family income and parental education did not significantly influence self-medication practice 9,22.
There is insufficient literature describing the use of self-medication in pediatric populations in Sub-Saharan Africa. In Rwanda, self-medication is being used by parents/caregivers for their children 20,21. The Rwandan Ministry of Health regulates essential medicines in accordance with WHO recommendations, but there is no regulation of non-prescription or over-the-counter (OTC) drugs. In addition, there is a rise in antibiotic resistance in the country. Self-medication, if not done in a responsible and safe manner, could predispose the population to more harm than benefit. There is therefore a need to establish baseline data on parental self-medication practices in Rwanda, including the use of traditional and modern drugs. This could help health care providers and policymakers to plan education and establish policies aimed at achieving the responsible use of medicines.
Objectives: The present study aimed to determine the proportion of parents/caregivers who reported self-medicating their children before consulting private and public health facilities in Rwanda, and to determine the attitudes and reasons associated with parental decisions to self-medicate their children.
Methods
Study Design: We conducted a cross-sectional study from July to September 2018. Reporting has been verified in accordance with the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist 22.
Study Sites/settings: We conducted a multi-center study in three facilities: i) a provincial referral hospital, two hours from Kigali, the capital of Rwanda; ii) a vaccination clinic in an urban public health center in Kigali city; and iii) a private pediatric clinic in Kigali, which serves a larger proportion of educated parents/caregivers with a high socioeconomic level. This selection was made to increase the representativeness and generalizability of the results to the Rwandan population.
Study population: The study aimed to recruit parents/caregivers of both unwell and well children with different socioeconomic status. Unwell children were recruited at the outpatient and inpatient units of one private and one public health facility. Parents/caregivers of well children were approached at a vaccination clinic in a health center.
Inclusion criteria: All Rwandan parents/caregivers of children aged one month to 10 years who agreed to participate in the study were eligible for inclusion. Parents/caregivers who were not primary caregivers of the children were excluded, as they would not be able to fully describe the self-treatment of the children. We excluded from the analysis participants who consented but did not fully complete the questionnaire.
Sampling: Non-probability convenience sampling was used to recruit study subjects on working days (Monday to Friday) during working hours (8am to 5pm).
Procedures for enrollment: Eligible parents/caregivers were approached in the waiting area, prior to their consultation, and given a verbal explanation of the purpose and methods of the study. All participants who signed the written consent form were then included. No data were kept on the number of participants who declined to participate.
Administering the questionnaire: Questionnaires were given to parents/caregivers by the PI or a trained data collector (DC), who was available to answer questions regarding the questionnaire. For parents/caregivers who could not read or who preferred to be assisted, the DC read the questions and completed the questionnaire on behalf of the participant. The questionnaires were completed in the waiting area of the respective health facility. When the questionnaire was filled in by the DC, this was indicated on the form.
Variables: The dependent (outcome) variables were the percentage of parents/caregivers using self-medication, the reasons for parental self-medication, and the common drugs used in self-medication. Independent variables were age, sex, province of origin, level of education, socioeconomic status, number of children under 10 years of age, health insurance, determinants/predictors of parental self-medication, relationship with the child, and marital status. Potential confounders were the site of interview and the mode of completing the questionnaire (verbal or written).
Data collection tool (Questionnaire): A questionnaire was developed specifically for this study. To ensure the content validity of our data collection tool, it was first built by the Principal Investigator (PI) (JU) using previously published studies 14,15,23–25. The first draft was then reviewed by four local experts in research (three pediatricians (CU, NM & PC) and one statistician). There were four sections to the questionnaire: demographic characteristics of respondents; the modern medicine questionnaire; the traditional medicine questionnaire; and the no-self-medication questionnaire. Parents/caregivers were asked whether they had ever used self-medication for their children and, if so, what type they had used. Depending on their answers, they completed the appropriate questionnaire (i.e., modern and/or traditional medicines) or the questionnaire for parents/caregivers who had never used self-medication for their children. Therefore, not all subjects completed all sections of the questionnaire. The questionnaires used 5-point Likert scales to assess parental attitudes and perceptions about self-medication. The responses given to the Likert-scale questions from each of the barriers (five questions) and confidence (six questions) domains were combined to create total “barriers to consultation” and total “confidence in self-medicating” scores.
Translation and piloting of questionnaires: The questionnaire was initially prepared in English and translated into the national language (Kinyarwanda) by the PI. It was then back-translated into English for accuracy by an independent, non-medical translator. It was piloted on five subjects and amendments were then made based on this piloting, including changes in wording and the addition of further Likert items.
Sample size calculation: The sample size was calculated for the prevalence of self-medication, with a predicted baseline of 69% of parents/caregivers of children under five years of age, as reported in Tanzania 19. Using the formula with Finite Population Correction 26 and a confidence level of 95%, we calculated the sample size to be 152 study participants.
Data management: EpiData Entry 3.1 software was used for data entry and storage. The data were then exported into the Statistical Package for the Social Sciences (IBM Corp. Released 2011. IBM SPSS Statistics for Windows, Version 20.0. Armonk, NY: IBM Corp.) for statistical analysis.
Statistical analysis: For continuous numerical data, means and standard deviations were reported; for categorical data, frequency tables, percentages, and graphs were reported. Comparisons were made between responses on the modern-medicine and traditional-medicine questionnaires; therefore, parents/caregivers who self-medicated with both modalities were included in both arms of the analysis. For the confidence and barrier questions, Likert-scale items were converted into means and compared using the Mann-Whitney U test due to the non-normal distribution of the data. Categorical variables were analyzed using the Chi-square test or Fisher's exact test. Odds ratios were reported to identify associations. To account for multiple confounders, logistic regression analysis was undertaken, including variables with p-value <0.2.
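As a concrete illustration of the sample-size step, the sketch below applies Cochran's formula for a proportion with a finite population correction (FPC). The paper reports only the expected prevalence (69%), the 95% confidence level, and the resulting target of 152 participants; the margin of error and source population size used here are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a sample-size calculation with a finite population
# correction; the margin of error and population size are assumptions.
import math

def sample_size_fpc(p, z=1.96, e=0.05, N=None):
    """Cochran sample size for estimating a proportion.

    p : expected prevalence (0.69, the Tanzanian estimate cited above)
    z : z-score for the confidence level (1.96 for 95%)
    e : margin of error (illustrative; not reported in the paper)
    N : finite source population size (illustrative; not reported)
    """
    n0 = (z ** 2) * p * (1 - p) / e ** 2       # infinite-population size
    if N is None:
        return math.ceil(n0)
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # apply the FPC

print(sample_size_fpc(0.69))           # ~329 without any correction
print(sample_size_fpc(0.69, N=280))    # an illustrative N that yields 152
```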
Results
Study period: Recruitment took place from July to September 2018, with all sites being visited twice, gaining a total of 162 subjects who were enrolled (Figure 1).
Consort flow diagram (n = number of respondents, N = number of participants)
Participation flow and socio-demographic characteristics: Eight participants were excluded from the analysis as they did not complete the questionnaires fully, leaving a total of 154 participants who were included in the final analysis with complete data. Fourteen participants (9%) self-completed the questionnaire, with 140 (91%) needing assistance with verbal completion of the questionnaire. The mean age of respondents was 33 years (SD: 6.7). Mothers represented the majority (86.4%) of respondents, and most respondents were married (94%) (Table 1).
Participant demographics. Advanced education level = completed secondary school or university; minimal education level = no education or completed primary school; high economic status = have both employment and own house; middle economic class = have either employment or own house; low economic class = have neither employment nor own house.
Self-medication indications and practices: Among the 154 participants, 120 caregivers (78%) reported having used self-medication for their children. Sixty-one of them (51%) used only modern drugs in self-medication, 19 (16%) used only traditional drugs, while 40 (33%) used both types. Thirty-four caregivers had never used self-medication (22%) (Figure 1). Many parents/caregivers decided to self-medicate with modern medications without seeking advice from anyone, whereas for traditional self-medication, friends and relatives served as the main sources of advice (Table 2). All traditional self-medication users used local Rwandan herbs. Paracetamol was the most commonly used modern medicine. Traditional medicine was most commonly used when parents/caregivers suspected intestinal worms.
Self-medication indications and practice. Respondents could choose more than one option.
Reasons for using self-medication: There was a statistically significant difference in the reasons for choosing modern and traditional self-medication among our participants, with participants choosing modern medications having slightly more confidence in self-medication than users of traditional medicines (p=0.005), and parents/caregivers who used modern self-medication reporting barriers to consultation as a reason to self-medicate more frequently than parents/caregivers who used traditional self-medication (p=0.028) (Table 3).
Reasons for using self-medication – means of Likert questions. Likert scale: 1=strongly disagree; 2=disagree; 3=neutral; 4=agree; 5=strongly agree. P-value: Mann-Whitney U test comparison of means.
Non-self-prescribers: Parents/caregivers who chose not to self-medicate most frequently reported that they do not use self-medication because “non-prescription drugs are dangerous”. Access to a pharmacy, traditional healer, or traditional medication shop, and cost, were not found to be reasons not to use self-medication (Table 4).
Reason for not using self-medication (n=34). Likert scale: 1=strongly disagree; 2=disagree; 3=neutral; 4=agree; 5=strongly agree.
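The Likert-domain comparisons underlying Tables 3 and 4 could be reproduced along the following lines with scipy; the score arrays below are invented stand-ins for the per-respondent domain totals, not study data.

```python
# Hedged sketch: compare total "confidence in self-medicating" scores
# between modern and traditional self-medication users. Scores are sums
# of the 5-point Likert items in each domain; the values are made up.
from scipy.stats import mannwhitneyu

modern_conf = [24, 22, 25, 19, 23, 21, 26, 20]       # hypothetical scores
traditional_conf = [18, 20, 17, 21, 16, 19, 15, 18]  # hypothetical scores

# Two-sided Mann-Whitney U test, chosen because the Likert-derived
# scores are not normally distributed.
stat, p = mannwhitneyu(modern_conf, traditional_conf, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```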
Socio-demographic association with use of non-prescription drugs: The association between different demographic characteristics and the use of self-medication was analyzed. In bivariate analysis, the factors statistically significantly associated with the use of non-prescription drugs included being from Kigali and having more than one child less than 10 years of age. In the multivariate analysis of factors that were significant in the bivariate analysis (with a cut-off of p<0.2), the only factor associated with increased use of self-prescribed drugs was having more than one child under the age of 10 years (Table 5).
Socio-demographic association with use of non-prescription drugs. OR=odds ratio; AOR=adjusted odds ratio; multivariate analysis (AOR) undertaken on factors with p-value <0.20.
Association of socio-demographic factors with the use of modern versus traditional medication: The association between socio-demographic factors and the use of modern-only versus traditional-only drugs was analyzed. In bivariate analysis, the factors associated with the use of exclusively modern rather than exclusively traditional drugs in self-medication were: caregivers at the private clinic, caregivers from Kigali, advanced education, high socioeconomic class, having private health insurance, and being older than 30 years. We included all factors with a p-value <0.2 in a multivariate analysis, and after adjusting for confounders, only age above 30 and living in Kigali remained associated with a preference for exclusively modern self-medication compared to exclusively traditional self-medication (Table 6).
Socio-demographic association with type of self-medication used (modern vs. traditional). OR=odds ratio; AOR=adjusted odds ratio; multivariate analysis (AOR) undertaken on factors with p-value <0.20.
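The two-stage selection described above (bivariate screening at p < 0.2, then a multivariable logistic model reporting adjusted odds ratios) can be sketched as follows; the data file and variable names are hypothetical placeholders, not the study's codebook.

```python
# Hedged sketch: screen candidate socio-demographic factors in bivariate
# logistic models, then fit one multivariable model with the survivors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("caregiver_survey.csv")  # hypothetical file
candidates = ["from_kigali", "multiple_children_u10", "education",
              "economic_class", "insurance", "age_over_30"]

kept = []
for var in candidates:
    m = smf.logit(f"self_medicated ~ C({var})", data=df).fit(disp=0)
    if m.pvalues.drop("Intercept").min() < 0.2:  # screening cut-off
        kept.append(var)

formula = "self_medicated ~ " + " + ".join(f"C({v})" for v in kept)
final = smf.logit(formula, data=df).fit(disp=0)

aor = np.exp(final.params)     # adjusted odds ratios
ci = np.exp(final.conf_int())  # 95% CIs on the OR scale
print(pd.concat([aor.rename("AOR"), ci], axis=1))
```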
Discussion
This study aimed to determine reported self-medication use in children attending private and public health facilities in Rwanda and to determine the attitudes and reasons associated with parental decisions to self-medicate their children. Our study showed that 78% of participants had used self-medication for their children, and both modern and traditional drugs were used. This is similar to the 77% reported in Pakistan 7 and higher than the prevalences of 32% and 58.82% reported in Madagascar and India 8,11,16. However, higher prevalences of 95.1% and 98.1% were reported in Greece and Australia 17,18. This shows that self-medication in children is a universal practice, with some differences between countries.
In our study, 51% of parents/caregivers who used self-medication used only modern medicines, 16% used only traditional medicines, and 33% used both. We found a lower rate of parents/caregivers who used only modern medicines compared with previous studies conducted in Sudan and Saudi Arabia, where 84% and 86.7% of parents/caregivers reported using modern medicines over traditional ones 23,24; this difference can be explained by the fact that we separately analyzed participants who used both types of self-medication and did not attempt to specify whether they preferred modern or traditional drugs.
Self-medication has many potential benefits; for example, medications such as paracetamol can be used effectively. However, self-medication can result in ineffective medications being used, wasting parental resources and potentially delaying a child receiving appropriate care. Cough syrups were the second most widely used self-medication. This is of concern, particularly in young children, as cough suppressants in this age group are known to be ineffective and to have potentially harmful effects 27. Drugs for intestinal worms were the third most commonly used in self-medication. The clinical experience of the authors suggests that parents/caregivers may attribute many other symptoms, such as decreased appetite and poor weight gain, to intestinal worms, and the use of drugs for suspected intestinal worms without prior consultation may therefore be inappropriate. Most alarming in our results was the finding that nearly 20% of parents/caregivers were self-medicating their children with antibiotics. As the world aims to combat antibiotic resistance, this form of self-medication should be addressed.
There are differences in the education and economic factors associated with the choice to self-medicate. Studies from Spain, Finland, Italy, and Madagascar 15,16,28,29 found that families with high income and with secondary or higher education practice more self-medication than families with low income and low education. On the other hand, in a study done in India, family income and parental education did not significantly influence self-medication practice; in Germany, mothers with a lower education level used fewer OTC drugs, and no significant difference was seen with regard to income status 11,30. In our study, financial, insurance, and geographical access barriers were not reasons to choose self-medication. This finding is not surprising, as 58% of our participants had Mutuelle de Santé insurance, and access to healthcare in Rwanda is overall good, with 79% of the population subscribing to an affordable community health insurance (Mutuelle de Santé) that covers 90% of consultation fees in public health facilities 31.
Furthermore, our study showed that having more than one child less than 10 years of age was associated with the practice of self-medication; no other factor was shown to be associated with parental use of self-medication. Caregiver age above 30 years was associated with the use of modern self-medication rather than traditional self-medication. In contrast to our findings, being female and younger were associated with the practice of self-medication in Italy and Spain 15,28. We observed that coming from Kigali city was associated with a preference for modern drugs over traditional ones, which can be explained by the fact that most of our study participants were from Kigali, where there is wider availability of pharmacies dispensing modern medicines than in areas outside Kigali.
[ "Limitations" ]
[ "Our analysis to identify associated factors did not include the group of parents/caregivers who used both types of self-medication. The internal validity could have been compromised by the high rate of verbal completion of the questionnaire, with participants being potentially prone to acquiescence bias. Recall bias may have also limited caregiver responses. All regional provinces of Rwanda were not represented." ]
[ null ]
[ "Introduction", "Methods", "Results", "Discussion", "Limitations", "Conclusion" ]
[ "Since the declaration of Alma-Ata in 1978, the principles of self-care, individual participation, responsibility and involvement in one's own health care have been recognized as important elements of primary healthcare1–3. Self-care includes self-medication which the WHO defines as “the selection and use of medicines by individuals to treat self-recognized illnesses or symptoms”4–7. In the pediatric population, self-medication implies administration of a drug to a child, by a caregiver, without prior medical consultation 8. WHO defines traditional medicine as “The sum of knowledge, skill, and practices based on the theories, beliefs, and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness” 9,10. Traditional medicines will often be chosen and provided by a traditional healer or therapist, but can also be produced or purchased by an individual and used as a form of self-medication.\nResponsible self-medication implies the use of approved, safe, and effective drugs, accompanied by information to direct users 11. There are several advantages of responsible self-medication, but the population must be aware of potentials risks and harms of self-medication12,13.\nFactors associated with the parental choice to self-medicate their children vary, but include: parents/caregivers perceiving their child's illness as being mild and not requiring health professional consultation, lack of time to attend consultations, high consultation fees, clinic waiting time, emergency treatment, use of old prescriptions available in the home, parent comfort in recognizing their children's disease based on the symptoms and having experience with the medication 14,15.\nPediatric self-medication is a worldwide practice with a reported prevalence between 32–98% in Madagascar, India, Greece, and Australia 8,16–18. Tanzania, a fellow nation of the East African community has reported pediatric self-medication rates of 69% 19.\nThere is also a difference in education and economic factors associated with the choice to self-medicate. Studies from Spain, Finland, Italy, and Madagascar12,19–21, found that families with high income and with secondary and higher education level practice more self-medication than families with low income and low education level. On the other hand, in studies done in Germany and India, family income and parental education were not significantly influencing self-medication practice 9,22.\nThere is insufficient literature describing the use of self-medication in pediatric populations in Sub-Saharan Africa. In Rwanda, self-medication is being used by parents/caregivers for their children 20,21. The Rwandan Ministry of Health regulates essential medicines in accordane with WHO recommendations but there is no regulation of non-prescription or OTC drugs. In addition, there is a rise in antibiotic resistance in the country. Self-medication, if not done in a responsible and safe manner, could predispose the population to more harm than benefits. So there is a need to establish a baseline data on parental self-medication practices in Rwanda including the use of traditional and modern drugs. 
This could help health care providers and policymakers to plan education and establish policies aiming to achieve responsible use of medicines.\nObjectives The present study aimed at determining the proportion of parents/caregivers who reported self-medicating their children before consulting private and public health facilities in Rwanda and to determine attitudes and reasons associated with parental decisions to self-medicate their children.", "Study Design: We conducted a cross-sectional study from July to September 2018. Reporting has been verified in accordance with the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist 22.\nStudy Sites/settings: We conducted a multi-center study in three facilities: i) a provincial referral hospital, two hours from Kigali, the capital of Rwanda, ii) a vaccination clinic in an urban public health center in Kigali city and iii) a private pediatric clinic in Kigali, which serves a larger proportion of educated parents/caregivers with high socioeconomic level. This selection was done to increase the representativeness and generalizability of the results to the Rwandan Population.\nStudy population: The study aimed to recruit parents/caregivers of both unwell and well children with different socioeconomic status. Unwell children were recruited at out and in-patient units of one private and one public health facility. Parents/caregivers of well children were approached at a vaccination clinic in a health center.\nInclusion criteria: All Rwandan parents/caregivers of children aged one month to 10 years of life and who accepted to participate in the study were eligible to be included. Parents/caregivers who were not primary caregivers of the children were excluded as they would not be able to describe fully the self-treatment of the children. We excluded from the analysis the participants who consented but did not fully complete the questionnaire.\nSampling: Non-probability convenience sampling was used to recruit study subjects on working days (Monday to Friday) during working hours (8am to 5pm).\nProcedures for enrollment: Eligible parents/caregivers were approached in the waiting area, prior to their consultation, and they were given a verbal explanation of the purpose and the methods of the study. All participants who signed the written consent form were then included. No data was kept on the number of participants who declined to participate.\nAdministering the questionnaire: Questionnaires were given to parents/caregivers by the PI or a trained data collector (DC) who was available to answer questions regarding the questionnaire. For parents/caregivers who couldn't read or who preferred to be assisted, the DC read the questions and completed the questionnaires on behalf of the participant. The questionnaires were completed in the waiting area of the respective health facility. When the questionnaire was filled by the DC, this was indicated on the form.\nVariables: Dependent (outcome) variable were the percentage of parents/caregivers using self-medication, reasons of parental self-medication and common drugs used in self-medication. Independent variables were age, sex, province of origin, level of education, socioeconomic status, number of children under 10 years of age; health insurance, determinant/predictors of parental self-medication, relationship with the child, marital status. 
Potential confounders were site of interview and mode of completing questionnaire (verbal or written)\nData collection tool (Questionnaire): A questionnaire was developed specifically for this study. To ensure content validity of our data collection tool, it was first built by the Principal Investigator (PI) (JU) using previously published studies14,15,23–25. The first draft was then reviewed by four local experts in research (three pediatricians (CU, NM & PC) and one statistician). There were four sections to the questionnaire, namely: demo- graphic characteristics of respondents; modern medicine questionnaire; traditional medicine questionnaire and finally the no self-medicating questionnaire. Parents/caregivers were asked whether they have ever used self-medication for their children and if so, what type they had used. Depending on their answers, they completed the appropriate questionnaire (i.e. Modern and/or Traditional medicines) or the questionnaire for parents/caregivers who have never used self-medication for their children. Therefore, not all subjects completed all sections of the questionnaires\nThe questionnaires used 5-point Likert-scales to assess the parental attitudes and perceptions about self-medication. The responses given to the Likert-scale questions from each of the barriers (five questions) and confidence (six questions) domains were combined to create total “barriers to consultation” and total “confidence in self-medicating” scores.\nTranslation and piloting of questionnaires: The questionnaire was initially prepared in English and translated into the national language (Kinyarwanda) by the PI. It was then back-translated in English for accuracy by an independent, non-medical translator. It was piloted on five subjects and amendments were then made based on this piloting, including changes in wording and the addition of further Likert-items.\nSample size calculation: Sample-size was calculated for the prevalence of self-medicating, with a predicted baseline of 69% of parents/caregivers of children under-five years of age reported in Tanzania 19. Using the formula with Finite Population Correction 26 and a level of confidence aimed at 95% we calculated the sample size to be 152 study participants.\nData management: EpiData Entry 3.1 Software was used for data entry and storage. The data was then exported into Statistical Package for the Social Sciences (IBM Corp. Released 2011. IBM SPSS Statistics for Windows, Version 20.0. Armonk, NY: IBM Corp.), for statistical analysis.\nStatistical analysis: For continuous numerical data, means and standard deviation were reported; for categorical data, frequency tables, percentages and graphs were reported. Comparison was made between responses on the modern-medicine and traditional-medicine questionnaires. Therefore, parents/caregivers, who self-medicated with both modalities, were included in both arms of the analysis For confidence and barrier questions, Likert-scale items were converted into means and compared using Mann-Whitney U test due to the non-normal distribution of the data. Categorical variables were analyzed using Chi-square test or Fisher's exact test. Odds ratio were reported to identify associations. 
To account for multiple confounders logistic regression analysis was undertaken including variables with p-value <0.2.", "Study period: Recruitment took place from July to September 2018 with all sites being visited twice, gaining a total of 162 subjects who were enrolled (Figure 1).\nConsort flow diagram (n=number of respondents, N=Number of participants)\nParticipation flow and social-demographic characteristics: Eight participants were excluded from the analysis as they did not complete the questionnaires fully, leaving a total of 154 participants who were included in the final analysis with complete data. Fourteen participants (9%) self-completed the questionnaire with 140 (91%) needing assistance with verbal completion of the questionnaire. The mean age of respondents was 33 years (SD: 6.7). Mothers represented the majority (86.4%) of respondent and most respondents were married (94%) (Table 1).\nParticipant demographics\nAdvanced education level = completed secondary school or university; Minimal education level = no education or completed primary school; High economic status = have both employment and own house; Middle economic class = have either employment or own house; Low economic class = have neither employment nor own house;\nSelf-medication indications and practices: Among 154 participants, 120 caregivers (78%) reported to have used self-medication for their children, Sixty-one of them (51%) used only modern drugs in self-medication, 19 (16%) used only traditional drugs while 40 (33%) used both types. Thirty-four caregivers had never used self-medication (22%) (Figure 1).\nMany parents/caregivers decided to self-medicate with modern medications without seeking advice from anyone. Whereas, for traditional self-medication, friends and relatives served as the main sources of advice (Table 2). All traditional self-medication users used local Rwandan herbs. Paracetamol was the most commonly used modern medicine. Traditional medicine was most commonly used when parents/caregivers suspected intestinal worms.\nSelf-medication indications and practice\nRespondents could choose more than two options.\nReasons for using self-medication: There was a statistically significant difference in reasons for choosing modern and traditional self-medication among our participants, with participants choosing modern medications having slightly more confidence in self-medication than self-medication providers of traditional medicines (p=0.005) and parents/caregivers who used modern self-medication reporting barriers to consultation more frequently as reason to self-medicate than parents/caregivers who used traditional self-prescription (p=0.028) (Table 3).\nReasons of using self-medication – Means of Likert questions\nLikert scale: 1=strongly disagree; 2=disagree; 3=neutral;4=agree; 5=strongly agree. P-value: Mann-Whitney U-test Comparison of means\nNon-self-prescribers: Parents/caregivers who choose not to self-medicate reported most frequently that they don't use self-medication because “non-prescription drugs are dangerous”. Access to pharmacy, traditional healer, traditional medication shop or cost, were not found to be reasons to not use self-medication (Table 4).\nReason for not using self-medication (n=34)\nLikert scale: 1=strongly disagree; 2=disagree; 3= neutral;4=agree; 5=strongly agree\nSocial demographic association with use of non-prescription drug: The association between different demographic characteristics and the use of self-medication was analyzed. 
In bivariate analysis the factors that were statistically significantly associated with use of non-prescription drugs included being from Kigali and having more than one child less than 10 years of age. In the multivariate analysis of factors that were significant in the bivariate analysis (with cut-off p<0.2), the only factor associated with increased use of self-prescribed drugs was having more than 1 child under the age of 10 years (Table 5).\nSocial demographic association with use of nonprescription drug\nOR=Odds ratio; AOR=Adjusted Odds Ratio; Multivariate analysis (AOR) undertaken on factors with p-value <0.20\nAssociation of socio-demographic factors with the use of modern versus. Traditional medication: The association between socio-demographic factors and the use of modern only vs. traditional only drugs was analyzed. In bivariate analysis the factors associated with use of exclusive modern rather than exclusive traditional drugs in self-medication were: caregivers in private clinic, caregivers from Kigali, advanced education, high social economic class, having private health insurance, and being older than 30 years. e included all factors with a p-value<0.2 in a multivariate analysis and after adjusting for confounders only age above 30 and living in Kigali remained associated with the preference of exclusive modern self-medication compared to exclusive traditional self-medication (Table 6).\nSocial demographic association with type of self-medication used(modern vs. traditional)\nOR=Odds ratio; AOR=Adjusted Odds Ratio; Multivariate analysis (AOR) undertaken on factors with p-value <0.20", "This study aimed to determine the reported self-medication use in children in private and public health facilities in Rwanda and to determine attitudes and reasons associated with parental decisions to self-medicate their children. Our study showed that 78% of participants have used self-medication for their children. Both modern and traditional drugs were used. This is similar to 77% reported in Pakistan 7 and higher than the prevalence of 32% and 58.82% reported in Madagascar and India 8,11,16. However, higher prevalence of 95.1% and 98.1% were reported in Greece and Australia 17,18. This shows that self-medication in children is a universal practice with some differences between countries.\nIn our study 51% of parents/caregivers who used self-medication used only modern medicines, 16% used only traditional self-medicines and 33% used both. Our findings show that we had a lower rate of parents/caregivers who used only modern medicines compared to previous studies conducted in Sudan and Saudi Arabia where 84% and 86.7% of parents/caregivers reported using modern medicines over traditional ones 23,24; this difference can be explained by the fact that in our study we separately analyzed participants who used both types of self- medication, and did not attempt to specify whether they preferred modern or traditional drugs.\nSelf-medication has many potential benefits, for example medications such as paracetamol can be effectively used. However, self-medication can result in ineffective medications being used, wasting parental resources and potentially delaying a child receiving appropriate care. Cough syrups were the second most widely used self-medication. This is of concern, particularly in young children as cough suppressants in this age group are known to be ineffective and to have potentially harmful effects27. Drugs for intestinal worms were the third most commonly used in self-medication. 
Clinical experience of the authors demonstrates that parents/caregivers may be attributing many other symptoms such as decreased appetite, poor weight gain as being caused by intestinal worms and therefore use of drugs for suspected intestinal worms without prior consultation may be inappropriate. Most alarming in our results was the finding that nearly 20% of parents/caregivers were self-medicating their children with antibiotics. As the world aims to combat antibiotic resistance, this form of self-medication should be addressed.\nThere is a difference between education and economic factors associated with the choice to self-medicate. Studies from Spain, Finland, Italy, and Madagascar15,16,28,29, found that families with high income and with secondary and higher education level practice more self-medication than families with low income and low education level. On the other hand in a study done in India, family income and parental education were not significantly influencing self-medication practice, in Germany, mother with lower education level used fewer OTC and no significant difference was seen with regard to income status 11,30. In our study, financial, insurance or geographical access barriers were not reasons to choose self-medication. This finding is not surprising as 58% of our participants had Mutuelle de santé insurance, and access to healthcare in Rwanda is overall good with 79% of the population subscribing to an affordable community health insurance (Mutuelle de Santé) which covers 90% of consultation fees in public health facilities 31.\nFurthermore, our study showed that having more than one child less than 10 years of age was associated with the practice of self-medication. No other factor was shown to be associated with parental use of self-medication. Caregiver's age above 30 years of age were found to be associated with the use of modern self-medication compared to traditional ones. In contrast to our findings, being female and younger were associated with the practice of self-medication in Italy and Spain 15,28.\nWe observed that coming from Kigali city was associated with the preference of modern drugs over traditional ones, which can be explained by the fact that most of our study participants were from Kigali where there is a wider availability of pharmacies dispensing modern medicines than in areas outside Kigali.", "Our analysis to identify associated factors did not include the group of parents/caregivers who used both types of self-medication. The internal validity could have been compromised by the high rate of verbal completion of the questionnaire, with participants being potentially prone to acquiescence bias. Recall bias may have also limited caregiver responses. All regional provinces of Rwanda were not represented.", "Self-medication, a worldwide practice, is also common in Rwanda. Parents/caregivers are involved in this practice for their children regardless of their socio-demographic background. Consideration should be given to regulating drugs used in self-medication as well as the education of the population with the goal of minimizing the risks of self- medication and maximizing benefits." ]
[ "intro", "methods", "results", "discussion", null, "conclusions" ]
[ "Self-medication", "medicines", "parents", "caregivers", "children", "Nonprescription Drugs", "Rwanda" ]
Introduction: Since the declaration of Alma-Ata in 1978, the principles of self-care, individual participation, responsibility and involvement in one's own health care have been recognized as important elements of primary healthcare1–3. Self-care includes self-medication which the WHO defines as “the selection and use of medicines by individuals to treat self-recognized illnesses or symptoms”4–7. In the pediatric population, self-medication implies administration of a drug to a child, by a caregiver, without prior medical consultation 8. WHO defines traditional medicine as “The sum of knowledge, skill, and practices based on the theories, beliefs, and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness” 9,10. Traditional medicines will often be chosen and provided by a traditional healer or therapist, but can also be produced or purchased by an individual and used as a form of self-medication. Responsible self-medication implies the use of approved, safe, and effective drugs, accompanied by information to direct users 11. There are several advantages of responsible self-medication, but the population must be aware of potentials risks and harms of self-medication12,13. Factors associated with the parental choice to self-medicate their children vary, but include: parents/caregivers perceiving their child's illness as being mild and not requiring health professional consultation, lack of time to attend consultations, high consultation fees, clinic waiting time, emergency treatment, use of old prescriptions available in the home, parent comfort in recognizing their children's disease based on the symptoms and having experience with the medication 14,15. Pediatric self-medication is a worldwide practice with a reported prevalence between 32–98% in Madagascar, India, Greece, and Australia 8,16–18. Tanzania, a fellow nation of the East African community has reported pediatric self-medication rates of 69% 19. There is also a difference in education and economic factors associated with the choice to self-medicate. Studies from Spain, Finland, Italy, and Madagascar12,19–21, found that families with high income and with secondary and higher education level practice more self-medication than families with low income and low education level. On the other hand, in studies done in Germany and India, family income and parental education were not significantly influencing self-medication practice 9,22. There is insufficient literature describing the use of self-medication in pediatric populations in Sub-Saharan Africa. In Rwanda, self-medication is being used by parents/caregivers for their children 20,21. The Rwandan Ministry of Health regulates essential medicines in accordane with WHO recommendations but there is no regulation of non-prescription or OTC drugs. In addition, there is a rise in antibiotic resistance in the country. Self-medication, if not done in a responsible and safe manner, could predispose the population to more harm than benefits. So there is a need to establish a baseline data on parental self-medication practices in Rwanda including the use of traditional and modern drugs. This could help health care providers and policymakers to plan education and establish policies aiming to achieve responsible use of medicines. 
Objectives The present study aimed at determining the proportion of parents/caregivers who reported self-medicating their children before consulting private and public health facilities in Rwanda and to determine attitudes and reasons associated with parental decisions to self-medicate their children. Methods: Study Design: We conducted a cross-sectional study from July to September 2018. Reporting has been verified in accordance with the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist 22. Study Sites/settings: We conducted a multi-center study in three facilities: i) a provincial referral hospital, two hours from Kigali, the capital of Rwanda, ii) a vaccination clinic in an urban public health center in Kigali city and iii) a private pediatric clinic in Kigali, which serves a larger proportion of educated parents/caregivers with high socioeconomic level. This selection was done to increase the representativeness and generalizability of the results to the Rwandan Population. Study population: The study aimed to recruit parents/caregivers of both unwell and well children with different socioeconomic status. Unwell children were recruited at out and in-patient units of one private and one public health facility. Parents/caregivers of well children were approached at a vaccination clinic in a health center. Inclusion criteria: All Rwandan parents/caregivers of children aged one month to 10 years of life and who accepted to participate in the study were eligible to be included. Parents/caregivers who were not primary caregivers of the children were excluded as they would not be able to describe fully the self-treatment of the children. We excluded from the analysis the participants who consented but did not fully complete the questionnaire. Sampling: Non-probability convenience sampling was used to recruit study subjects on working days (Monday to Friday) during working hours (8am to 5pm). Procedures for enrollment: Eligible parents/caregivers were approached in the waiting area, prior to their consultation, and they were given a verbal explanation of the purpose and the methods of the study. All participants who signed the written consent form were then included. No data was kept on the number of participants who declined to participate. Administering the questionnaire: Questionnaires were given to parents/caregivers by the PI or a trained data collector (DC) who was available to answer questions regarding the questionnaire. For parents/caregivers who couldn't read or who preferred to be assisted, the DC read the questions and completed the questionnaires on behalf of the participant. The questionnaires were completed in the waiting area of the respective health facility. When the questionnaire was filled by the DC, this was indicated on the form. Variables: Dependent (outcome) variable were the percentage of parents/caregivers using self-medication, reasons of parental self-medication and common drugs used in self-medication. Independent variables were age, sex, province of origin, level of education, socioeconomic status, number of children under 10 years of age; health insurance, determinant/predictors of parental self-medication, relationship with the child, marital status. Potential confounders were site of interview and mode of completing questionnaire (verbal or written) Data collection tool (Questionnaire): A questionnaire was developed specifically for this study. 
To ensure content validity of our data collection tool, it was first built by the Principal Investigator (PI) (JU) using previously published studies14,15,23–25. The first draft was then reviewed by four local experts in research (three pediatricians (CU, NM & PC) and one statistician). There were four sections to the questionnaire, namely: demo- graphic characteristics of respondents; modern medicine questionnaire; traditional medicine questionnaire and finally the no self-medicating questionnaire. Parents/caregivers were asked whether they have ever used self-medication for their children and if so, what type they had used. Depending on their answers, they completed the appropriate questionnaire (i.e. Modern and/or Traditional medicines) or the questionnaire for parents/caregivers who have never used self-medication for their children. Therefore, not all subjects completed all sections of the questionnaires The questionnaires used 5-point Likert-scales to assess the parental attitudes and perceptions about self-medication. The responses given to the Likert-scale questions from each of the barriers (five questions) and confidence (six questions) domains were combined to create total “barriers to consultation” and total “confidence in self-medicating” scores. Translation and piloting of questionnaires: The questionnaire was initially prepared in English and translated into the national language (Kinyarwanda) by the PI. It was then back-translated in English for accuracy by an independent, non-medical translator. It was piloted on five subjects and amendments were then made based on this piloting, including changes in wording and the addition of further Likert-items. Sample size calculation: Sample-size was calculated for the prevalence of self-medicating, with a predicted baseline of 69% of parents/caregivers of children under-five years of age reported in Tanzania 19. Using the formula with Finite Population Correction 26 and a level of confidence aimed at 95% we calculated the sample size to be 152 study participants. Data management: EpiData Entry 3.1 Software was used for data entry and storage. The data was then exported into Statistical Package for the Social Sciences (IBM Corp. Released 2011. IBM SPSS Statistics for Windows, Version 20.0. Armonk, NY: IBM Corp.), for statistical analysis. Statistical analysis: For continuous numerical data, means and standard deviation were reported; for categorical data, frequency tables, percentages and graphs were reported. Comparison was made between responses on the modern-medicine and traditional-medicine questionnaires. Therefore, parents/caregivers, who self-medicated with both modalities, were included in both arms of the analysis For confidence and barrier questions, Likert-scale items were converted into means and compared using Mann-Whitney U test due to the non-normal distribution of the data. Categorical variables were analyzed using Chi-square test or Fisher's exact test. Odds ratio were reported to identify associations. To account for multiple confounders logistic regression analysis was undertaken including variables with p-value <0.2. Results: Study period: Recruitment took place from July to September 2018 with all sites being visited twice, gaining a total of 162 subjects who were enrolled (Figure 1). 
Consort flow diagram (n=number of respondents, N=Number of participants) Participation flow and social-demographic characteristics: Eight participants were excluded from the analysis as they did not complete the questionnaires fully, leaving a total of 154 participants who were included in the final analysis with complete data. Fourteen participants (9%) self-completed the questionnaire with 140 (91%) needing assistance with verbal completion of the questionnaire. The mean age of respondents was 33 years (SD: 6.7). Mothers represented the majority (86.4%) of respondent and most respondents were married (94%) (Table 1). Participant demographics Advanced education level = completed secondary school or university; Minimal education level = no education or completed primary school; High economic status = have both employment and own house; Middle economic class = have either employment or own house; Low economic class = have neither employment nor own house; Self-medication indications and practices: Among 154 participants, 120 caregivers (78%) reported to have used self-medication for their children, Sixty-one of them (51%) used only modern drugs in self-medication, 19 (16%) used only traditional drugs while 40 (33%) used both types. Thirty-four caregivers had never used self-medication (22%) (Figure 1). Many parents/caregivers decided to self-medicate with modern medications without seeking advice from anyone. Whereas, for traditional self-medication, friends and relatives served as the main sources of advice (Table 2). All traditional self-medication users used local Rwandan herbs. Paracetamol was the most commonly used modern medicine. Traditional medicine was most commonly used when parents/caregivers suspected intestinal worms. Self-medication indications and practice Respondents could choose more than two options. Reasons for using self-medication: There was a statistically significant difference in reasons for choosing modern and traditional self-medication among our participants, with participants choosing modern medications having slightly more confidence in self-medication than self-medication providers of traditional medicines (p=0.005) and parents/caregivers who used modern self-medication reporting barriers to consultation more frequently as reason to self-medicate than parents/caregivers who used traditional self-prescription (p=0.028) (Table 3). Reasons of using self-medication – Means of Likert questions Likert scale: 1=strongly disagree; 2=disagree; 3=neutral;4=agree; 5=strongly agree. P-value: Mann-Whitney U-test Comparison of means Non-self-prescribers: Parents/caregivers who choose not to self-medicate reported most frequently that they don't use self-medication because “non-prescription drugs are dangerous”. Access to pharmacy, traditional healer, traditional medication shop or cost, were not found to be reasons to not use self-medication (Table 4). Reason for not using self-medication (n=34) Likert scale: 1=strongly disagree; 2=disagree; 3= neutral;4=agree; 5=strongly agree Social demographic association with use of non-prescription drug: The association between different demographic characteristics and the use of self-medication was analyzed. In bivariate analysis the factors that were statistically significantly associated with use of non-prescription drugs included being from Kigali and having more than one child less than 10 years of age. 
Social demographic association with use of non-prescription drugs: The association between different demographic characteristics and the use of self-medication was analyzed. In bivariate analysis, the factors that were statistically significantly associated with use of non-prescription drugs included being from Kigali and having more than one child less than 10 years of age. In the multivariate analysis of factors that were significant in the bivariate analysis (with cut-off p<0.2), the only factor associated with increased use of self-prescribed drugs was having more than one child under the age of 10 years (Table 5). Social demographic association with use of non-prescription drugs: OR=Odds ratio; AOR=Adjusted Odds Ratio; Multivariate analysis (AOR) undertaken on factors with p-value <0.20. Association of socio-demographic factors with the use of modern versus traditional medication: The association between socio-demographic factors and the use of modern-only vs. traditional-only drugs was analyzed. In bivariate analysis, the factors associated with use of exclusively modern rather than exclusively traditional drugs in self-medication were: attending a private clinic, being from Kigali, advanced education, high socio-economic class, having private health insurance, and being older than 30 years. We included all factors with a p-value <0.2 in a multivariate analysis, and after adjusting for confounders only age above 30 and living in Kigali remained associated with the preference for exclusively modern self-medication compared to exclusively traditional self-medication (Table 6). Social demographic association with type of self-medication used (modern vs. traditional): OR=Odds ratio; AOR=Adjusted Odds Ratio; Multivariate analysis (AOR) undertaken on factors with p-value <0.20.
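The adjusted odds ratios in Tables 5 and 6 follow the two-step procedure described in the Methods: bivariate screening at p<0.2, then logistic regression on the retained variables, with exponentiated coefficients reported as AORs. Below is a minimal sketch of that second step, assuming randomly generated data and hypothetical variable names, so it reproduces the workflow rather than the paper's numbers.

```python
# Logistic regression yielding adjusted odds ratios (AORs) with 95% CIs.
# Data are randomly generated and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "self_medicated": rng.integers(0, 2, 154),        # outcome (0/1)
    "from_kigali": rng.integers(0, 2, 154),           # candidate predictor
    "multiple_children_u10": rng.integers(0, 2, 154), # candidate predictor
})

# Variables that survived bivariate screening (p < 0.2) enter the model.
X = sm.add_constant(df[["from_kigali", "multiple_children_u10"]])
model = sm.Logit(df["self_medicated"], X).fit(disp=0)

aor = np.exp(model.params)      # adjusted odds ratios
ci = np.exp(model.conf_int())   # 95% confidence intervals (exponentiated)
print(pd.concat([aor.rename("AOR"), ci], axis=1))
```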
Discussion: This study aimed to determine the reported self-medication use in children in private and public health facilities in Rwanda and to determine attitudes and reasons associated with parental decisions to self-medicate their children. Our study showed that 78% of participants had used self-medication for their children. Both modern and traditional drugs were used. This is similar to the 77% reported in Pakistan7 and higher than the prevalences of 32% and 58.82% reported in Madagascar and India8,11,16. However, higher prevalences of 95.1% and 98.1% were reported in Greece and Australia17,18. This shows that self-medication in children is a universal practice with some differences between countries. In our study, 51% of parents/caregivers who used self-medication used only modern medicines, 16% used only traditional medicines and 33% used both. Our findings show a lower rate of parents/caregivers who used only modern medicines compared to previous studies conducted in Sudan and Saudi Arabia, where 84% and 86.7% of parents/caregivers reported using modern medicines over traditional ones23,24; this difference can be explained by the fact that in our study we separately analyzed participants who used both types of self-medication, and did not attempt to specify whether they preferred modern or traditional drugs. Self-medication has many potential benefits; for example, medications such as paracetamol can be used effectively. However, self-medication can result in ineffective medications being used, wasting parental resources and potentially delaying a child receiving appropriate care. Cough syrups were the second most widely used self-medication. This is of concern, particularly in young children, as cough suppressants in this age group are known to be ineffective and to have potentially harmful effects27. Drugs for intestinal worms were the third most commonly used in self-medication. The authors' clinical experience suggests that parents/caregivers may attribute many other symptoms, such as decreased appetite and poor weight gain, to intestinal worms; therefore, the use of drugs for suspected intestinal worms without prior consultation may be inappropriate. Most alarming in our results was the finding that nearly 20% of parents/caregivers were self-medicating their children with antibiotics. As the world aims to combat antibiotic resistance, this form of self-medication should be addressed. Education and economic factors associated with the choice to self-medicate differ between settings. Studies from Spain, Finland, Italy, and Madagascar15,16,28,29 found that families with high income and with secondary or higher education practice more self-medication than families with low income and low education level. On the other hand, in a study done in India, family income and parental education did not significantly influence self-medication practice; in Germany, mothers with a lower education level used fewer OTC medicines, and no significant difference was seen with regard to income status11,30. In our study, financial, insurance or geographical access barriers were not reasons to choose self-medication. This finding is not surprising, as 58% of our participants had Mutuelle de Santé insurance, and access to healthcare in Rwanda is overall good, with 79% of the population subscribing to an affordable community health insurance (Mutuelle de Santé) which covers 90% of consultation fees in public health facilities31. Furthermore, our study showed that having more than one child less than 10 years of age was associated with the practice of self-medication. No other factor was shown to be associated with parental use of self-medication. Caregiver age above 30 years was found to be associated with the use of modern rather than traditional self-medication. In contrast to our findings, being female and younger were associated with the practice of self-medication in Italy and Spain15,28. We observed that coming from Kigali city was associated with the preference for modern drugs over traditional ones, which can be explained by the fact that most of our study participants were from Kigali, where there is a wider availability of pharmacies dispensing modern medicines than in areas outside Kigali. Limitations: Our analysis to identify associated factors did not include the group of parents/caregivers who used both types of self-medication. The internal validity could have been compromised by the high rate of verbal completion of the questionnaire, with participants being potentially prone to acquiescence bias. Recall bias may have also limited caregiver responses. Not all regional provinces of Rwanda were represented. Conclusion: Self-medication, a worldwide practice, is also common in Rwanda. Parents/caregivers are involved in this practice for their children regardless of their socio-demographic background. Consideration should be given to regulating drugs used in self-medication as well as to educating the population, with the goal of minimizing the risks of self-medication and maximizing its benefits.
Background: Self-medication, a worldwide practice, has both benefits and risks. Many countries have regulated the non-prescription medications available for use in self-medication. However, in countries such as Rwanda, where prescriptions are not required to purchase medications, prescription, non-prescription and traditional medications have all been used for self-medication. Methods: A cross-sectional, multi-center, questionnaire-based quantitative study of 154 parents/caregivers of children under ten years, undertaken in private and public health facilities. Results: The use of self-medication was reported to be 77.9%. Among these parents/caregivers, 50.8% used modern self-medication only, 15.8% used traditional self-medication only and 33.3% used both types of self-medication. Paracetamol was the most commonly used drug in modern self-medication; the traditional drugs used were Rwandan local herbs. Parents/caregivers who used modern medicines had slightly more confidence in self-medication than users of traditional medicines (p=0.005). Parents/caregivers who used modern self-medication reported barriers to consultation as a reason to self-medicate more frequently than those who used traditional drugs. Having more than one child below 10 years of age was the only socio-demographic factor associated with having used self-medication (AOR=4.74, CI: 1.94-11.58, p=0.001). Being above 30 years (AOR=5.78, CI: 1.25-26.68, p=0.025) and living in Kigali (AOR=8.2, CI: 1.58-43.12, p=0.012) were factors associated with preference for modern self-medication compared to traditional self-medication. Conclusions: Self-medication is common in Rwanda. Parents/caregivers are involved in this practice regardless of their socio-demographic background.
Introduction: Since the declaration of Alma-Ata in 1978, the principles of self-care and of individual participation, responsibility and involvement in one's own health care have been recognized as important elements of primary healthcare1–3. Self-care includes self-medication, which the WHO defines as “the selection and use of medicines by individuals to treat self-recognized illnesses or symptoms”4–7. In the pediatric population, self-medication implies administration of a drug to a child, by a caregiver, without prior medical consultation8. WHO defines traditional medicine as “the sum of knowledge, skill, and practices based on the theories, beliefs, and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness”9,10. Traditional medicines will often be chosen and provided by a traditional healer or therapist, but can also be produced or purchased by an individual and used as a form of self-medication. Responsible self-medication implies the use of approved, safe, and effective drugs, accompanied by information to direct users11. There are several advantages of responsible self-medication, but the population must be aware of the potential risks and harms of self-medication12,13. Factors associated with the parental choice to self-medicate their children vary, but include: parents/caregivers perceiving their child's illness as mild and not requiring health professional consultation, lack of time to attend consultations, high consultation fees, clinic waiting time, emergency treatment, use of old prescriptions available in the home, parental comfort in recognizing their children's disease based on the symptoms, and experience with the medication14,15. Pediatric self-medication is a worldwide practice, with a reported prevalence between 32% and 98% in Madagascar, India, Greece, and Australia8,16–18. Tanzania, a fellow nation of the East African Community, has reported pediatric self-medication rates of 69%19. There is also a difference in the education and economic factors associated with the choice to self-medicate. Studies from Spain, Finland, Italy, and Madagascar12,19–21 found that families with high income and with secondary or higher education practice more self-medication than families with low income and low education level. On the other hand, in studies done in Germany and India, family income and parental education did not significantly influence self-medication practice9,22. There is insufficient literature describing the use of self-medication in pediatric populations in Sub-Saharan Africa. In Rwanda, self-medication is being used by parents/caregivers for their children20,21. The Rwandan Ministry of Health regulates essential medicines in accordance with WHO recommendations, but there is no regulation of non-prescription or OTC drugs. In addition, there is a rise in antibiotic resistance in the country. Self-medication, if not done in a responsible and safe manner, could predispose the population to more harm than benefit. There is therefore a need to establish baseline data on parental self-medication practices in Rwanda, including the use of traditional and modern drugs. This could help health care providers and policymakers to plan education and establish policies aiming to achieve responsible use of medicines.
Objectives: The present study aimed to determine the proportion of parents/caregivers who reported self-medicating their children before consulting private and public health facilities in Rwanda, and to determine the attitudes and reasons associated with parental decisions to self-medicate their children. Conclusion: Self-medication, a worldwide practice, is also common in Rwanda. Parents/caregivers are involved in this practice for their children regardless of their socio-demographic background. Consideration should be given to regulating drugs used in self-medication as well as to educating the population, with the goal of minimizing the risks of self-medication and maximizing its benefits.
Background: Self-medication, a worldwide practice, has both benefits and risks. Many countries have regulated the non-prescription medications available for use in self-medication. However, in countries such as Rwanda, where prescriptions are not required to purchase medications, prescription, non-prescription and traditional medications have all been used for self-medication. Methods: A cross-sectional, multi-center, questionnaire-based quantitative study of 154 parents/caregivers of children under ten years, undertaken in private and public health facilities. Results: The use of self-medication was reported to be 77.9%. Among these parents/caregivers, 50.8% used modern self-medication only, 15.8% used traditional self-medication only and 33.3% used both types of self-medication. Paracetamol was the most commonly used drug in modern self-medication; the traditional drugs used were Rwandan local herbs. Parents/caregivers who used modern medicines had slightly more confidence in self-medication than users of traditional medicines (p=0.005). Parents/caregivers who used modern self-medication reported barriers to consultation as a reason to self-medicate more frequently than those who used traditional drugs. Having more than one child below 10 years of age was the only socio-demographic factor associated with having used self-medication (AOR=4.74, CI: 1.94-11.58, p=0.001). Being above 30 years (AOR=5.78, CI: 1.25-26.68, p=0.025) and living in Kigali (AOR=8.2, CI: 1.58-43.12, p=0.012) were factors associated with preference for modern self-medication compared to traditional self-medication. Conclusions: Self-medication is common in Rwanda. Parents/caregivers are involved in this practice regardless of their socio-demographic background.
3,687
340
[ 69 ]
6
[ "self", "medication", "self medication", "caregivers", "parents caregivers", "parents", "traditional", "children", "modern", "study" ]
[ "medication self medication", "self medicines", "providers traditional medicines", "self medication indications", "traditional self prescription" ]
[CONTENT] Self-medication | medicines | parents | caregivers | children | Nonprescription Drugs | Rwanda [SUMMARY]
[CONTENT] Self-medication | medicines | parents | caregivers | children | Nonprescription Drugs | Rwanda [SUMMARY]
[CONTENT] Self-medication | medicines | parents | caregivers | children | Nonprescription Drugs | Rwanda [SUMMARY]
[CONTENT] Self-medication | medicines | parents | caregivers | children | Nonprescription Drugs | Rwanda [SUMMARY]
[CONTENT] Self-medication | medicines | parents | caregivers | children | Nonprescription Drugs | Rwanda [SUMMARY]
[CONTENT] Self-medication | medicines | parents | caregivers | children | Nonprescription Drugs | Rwanda [SUMMARY]
[CONTENT] Attitude to Health | Caregivers | Child | Child, Preschool | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Infant | Infant, Newborn | Male | Nonprescription Drugs | Parents | Rwanda | Self Medication | Surveys and Questionnaires | Young Adult [SUMMARY]
[CONTENT] Attitude to Health | Caregivers | Child | Child, Preschool | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Infant | Infant, Newborn | Male | Nonprescription Drugs | Parents | Rwanda | Self Medication | Surveys and Questionnaires | Young Adult [SUMMARY]
[CONTENT] Attitude to Health | Caregivers | Child | Child, Preschool | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Infant | Infant, Newborn | Male | Nonprescription Drugs | Parents | Rwanda | Self Medication | Surveys and Questionnaires | Young Adult [SUMMARY]
[CONTENT] Attitude to Health | Caregivers | Child | Child, Preschool | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Infant | Infant, Newborn | Male | Nonprescription Drugs | Parents | Rwanda | Self Medication | Surveys and Questionnaires | Young Adult [SUMMARY]
[CONTENT] Attitude to Health | Caregivers | Child | Child, Preschool | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Infant | Infant, Newborn | Male | Nonprescription Drugs | Parents | Rwanda | Self Medication | Surveys and Questionnaires | Young Adult [SUMMARY]
[CONTENT] Attitude to Health | Caregivers | Child | Child, Preschool | Cross-Sectional Studies | Female | Health Knowledge, Attitudes, Practice | Health Services Accessibility | Humans | Infant | Infant, Newborn | Male | Nonprescription Drugs | Parents | Rwanda | Self Medication | Surveys and Questionnaires | Young Adult [SUMMARY]
[CONTENT] medication self medication | self medicines | providers traditional medicines | self medication indications | traditional self prescription [SUMMARY]
[CONTENT] medication self medication | self medicines | providers traditional medicines | self medication indications | traditional self prescription [SUMMARY]
[CONTENT] medication self medication | self medicines | providers traditional medicines | self medication indications | traditional self prescription [SUMMARY]
[CONTENT] medication self medication | self medicines | providers traditional medicines | self medication indications | traditional self prescription [SUMMARY]
[CONTENT] medication self medication | self medicines | providers traditional medicines | self medication indications | traditional self prescription [SUMMARY]
[CONTENT] medication self medication | self medicines | providers traditional medicines | self medication indications | traditional self prescription [SUMMARY]
[CONTENT] self | medication | self medication | caregivers | parents caregivers | parents | traditional | children | modern | study [SUMMARY]
[CONTENT] self | medication | self medication | caregivers | parents caregivers | parents | traditional | children | modern | study [SUMMARY]
[CONTENT] self | medication | self medication | caregivers | parents caregivers | parents | traditional | children | modern | study [SUMMARY]
[CONTENT] self | medication | self medication | caregivers | parents caregivers | parents | traditional | children | modern | study [SUMMARY]
[CONTENT] self | medication | self medication | caregivers | parents caregivers | parents | traditional | children | modern | study [SUMMARY]
[CONTENT] self | medication | self medication | caregivers | parents caregivers | parents | traditional | children | modern | study [SUMMARY]
[CONTENT] self | medication | self medication | use | responsible | health | pediatric | care | parental | education [SUMMARY]
[CONTENT] questionnaire | data | study | caregivers | parents | parents caregivers | questionnaires | self | children | questions [SUMMARY]
[CONTENT] self | medication | self medication | traditional | use | modern | analysis | association | table | demographic [SUMMARY]
[CONTENT] self | medication | self medication | practice | medication education population | regardless socio | education population goal | education population | self medication education | given regulating [SUMMARY]
[CONTENT] self | medication | self medication | caregivers | traditional | parents caregivers | parents | use | children | modern [SUMMARY]
[CONTENT] self | medication | self medication | caregivers | traditional | parents caregivers | parents | use | children | modern [SUMMARY]
[CONTENT] ||| ||| Rwanda [SUMMARY]
[CONTENT] 154 | under ten years [SUMMARY]
[CONTENT] 77.9% ||| 50.8% | 15.8% | 33.3% ||| Rwandan ||| ||| ||| more than one | 10 years of-age | AOR=4.74 | CI | 1.94-11.58 ||| above 30 years | 5.78 | CI | 1.25 | Kigali | CI | 1.58-43.12 [SUMMARY]
[CONTENT] Rwanda ||| [SUMMARY]
[CONTENT] ||| ||| Rwanda | 154 | under ten years ||| 77.9% ||| 50.8% | 15.8% | 33.3% ||| Rwandan ||| ||| ||| more than one | 10 years of-age | AOR=4.74 | CI | 1.94-11.58 ||| above 30 years | 5.78 | CI | 1.25 | Kigali | CI | 1.58-43.12 ||| Rwanda ||| [SUMMARY]
[CONTENT] ||| ||| Rwanda | 154 | under ten years ||| 77.9% ||| 50.8% | 15.8% | 33.3% ||| Rwandan ||| ||| ||| more than one | 10 years of-age | AOR=4.74 | CI | 1.94-11.58 ||| above 30 years | 5.78 | CI | 1.25 | Kigali | CI | 1.58-43.12 ||| Rwanda ||| [SUMMARY]
Comparison between better and poorly differentiated locally advanced gastric cancer in preoperative chemotherapy: a retrospective, comparative study at a single tertiary care institute.
25200958
Gastric cancer is the third leading cause of cancer-related mortality in China, and long-term survival for locally advanced gastric cancer is very poor. Surgery alone cannot yield an ideal result because of the high recurrence rate after tumor resection. Preoperative chemotherapy could help to reduce tumor volume, improve the R0 resection rate (no residual tumor after surgery), and decrease the risk of local tumor recurrence. The aim of this study was to evaluate the influence of pathological differentiation on the effect of preoperative chemotherapy for patients with locally advanced gastric cancer.
BACKGROUND
Patients with locally advanced gastric cancer (n = 32) received preoperative chemotherapy under the XELOX (capecitabine plus oxaliplatin) regimen. According to pathological examination, patients' tumors were classified into better differentiated (well and moderately differentiated) and poorly differentiated (lower differentiated and undifferentiated) groups, and the clinical response rate, type of gastrectomy, and negative tumor residual rate were compared between the two groups of patients. Morphological changes and toxic reactions were monitored after chemotherapy.
METHODS
The results showed that the clinical response rate in the better differentiated group was significantly higher than that in the poorly differentiated group (100% versus 25%, P = 0.000). The partial gastrectomy rate in the better differentiated group was also significantly higher than that in the poorly differentiated group (87.5% versus 25%, P = 0.000). Significant tumor shrinkage and necrosis of tumor tissue caused by chemotherapy were observed.
RESULTS
In conclusion, better differentiated locally advanced gastric cancer is suitable for preoperative chemotherapy under the XELOX regimen, and as a result of effective preoperative chemotherapy, much more gastric tissue can be preserved in the better differentiated group.
CONCLUSIONS
[ "Antineoplastic Combined Chemotherapy Protocols", "Capecitabine", "Cell Differentiation", "Chemotherapy, Adjuvant", "Combined Modality Therapy", "Deoxycytidine", "Female", "Fluorouracil", "Follow-Up Studies", "Gastrectomy", "Humans", "Male", "Middle Aged", "Neoplasm Staging", "Oxaloacetates", "Preoperative Care", "Prognosis", "Retrospective Studies", "Stomach Neoplasms", "Tertiary Healthcare" ]
4177253
Background
Gastric cancer is one of the most common malignancies in Asia, especially in China, Korea, and Japan [1]. It is the third leading cause of cancer-related mortality in China, and Chinese patients with gastric cancer account for 42% of the worldwide patient population with gastric cancer [2]. Surgical resection of the tumor is the most effective approach to increasing the long-term survival of patients with early stage gastric cancer [3]. The five-year survival rate of patients with resectable gastric cancer in advanced stages (stages III or IV) can be improved through combined surgical management with perioperative chemotherapy [4]. The benefits of preoperative chemotherapy (neo-adjuvant chemotherapy) for patients with gastric cancer are as follows: it reduces tumor volume, which results in tumor downstaging, improves the R0 resection rate (no residual tumor after surgery), acts on micrometastases, decreases the risk of local tumor recurrence, and aids in evaluating tumor chemosensitivity to cytotoxic drugs [5–10]. In locally advanced gastric cancer, the primary tumor has invaded through the submucosal layers of the gastric wall, with regional nodal involvement, and replaces much of the normal gastric tissue [11, 12]. Although the long-term effects remain controversial, preoperative chemotherapy for locally advanced gastric cancer has shown encouraging rates of pathologic complete response and R0 resection, with acceptable rates of acute and late toxicities [13, 14]. However, no report was found on the influence of pathological differentiation on the outcome of preoperative chemotherapy for locally advanced gastric cancer. Therefore, the aim of this study was to evaluate the influence of pathological differentiation on the effect of preoperative chemotherapy for patients with locally advanced gastric cancer. In this study, we compared the clinical response rate of preoperative chemotherapy between better and poorly differentiated locally advanced gastric cancer, and discussed its effect on the preservation of gastric tissue during gastrectomy.
Methods
Patients: Patients who had received preoperative chemotherapy and surgical treatment for locally advanced gastric cancer in the gastrointestinal department of the China-Japan Union Hospital of Jilin University, China, between April 2009 and March 2013, were retrospectively reviewed. The preoperative diagnosis was made through endoscopy, biopsy, endoscopic ultrasound, and enhanced computed tomography. The cancer staging was evaluated according to the Union for International Cancer Control tumor-node-metastasis classification (sixth edition) [15]. Patients were fully informed about the side effects of preoperative chemotherapy and surgery, and they chose this treatment voluntarily. Preoperative chemotherapy for locally advanced gastric cancer was approved by the Medical Ethics Committee of the China-Japan Union Hospital of Jilin University, China.
Preoperative chemotherapy and surgery: As preoperative chemotherapy, the XELOX (capecitabine plus oxaliplatin) regimen was used in this study [14], as follows: intravenous infusion of oxaliplatin 130 mg/m2 over 2 hours on Day 1, followed by capecitabine 1,000 mg/m2 orally twice daily for 2 weeks. This cycle was repeated once every 3 weeks, and the patients were given two cycles before evaluation of the chemotherapeutic effect. Clinical efficacy was evaluated by computed tomography and endoscopy. Patients with resectable tumors after chemotherapy were chosen for surgery. Patients with unresectable tumors after two cycles of chemotherapy continued to receive chemotherapy (for a total of four cycles), after which the efficacy was again evaluated by computed tomography and endoscopy. At this point, patients with resectable tumors would undergo surgery, and patients with unresectable tumors would be excluded from the study. Some patients whose tumors were resectable initially, but for whom total gastrectomy seemed unavoidable, were also included in this study. The choice of surgical type depended on the treating surgeon's preference, primary tumor location, and extent of disease.
Evaluation for efficacy and adverse events monitoring: The tumors' reaction to preoperative chemotherapy was evaluated as follows: (1) complete response, complete disappearance of the tumor; (2) partial response, a decrease of more than 30% in tumor size; (3) progressive disease, an increase of more than 20% in tumor size; and (4) stable disease, no change found in tumor size. The clinical response rate was calculated as follows: clinical response rate = (number of complete responses + number of partial responses) / total number of patients × 100%. Repeated computed tomography was used to evaluate the change in size of the metastasizing lymph nodes, and repeated endoscopy examination was used to evaluate changes in the primary gastric carcinoma. Patients' liver and kidney function, bone marrow hematopoiesis, gastrointestinal reactions, and related adverse events were closely monitored during the treatment. Toxic reactions were evaluated using the National Cancer Institute Common Toxicity Criteria (version 3.0) [14].
Statistical analysis: Patients' age and body mass index were presented as mean ± standard deviation. The chi-square test was used to compare the differences in clinical response rate, operation type, and R0 resection rate. All statistical analyses were performed using SPSS software, version 11.0 (SPSS Inc, Chicago, United States).
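To make the group comparison concrete, the sketch below applies the chi-square test named in the Statistical analysis section to the 2×2 response table implied by the Results (16/16 responders in the better differentiated group versus 4/16 in the poorly differentiated group). Fisher's exact test is included alongside as a sanity check because one cell count is zero; that addition is my assumption, not something the paper states it used.

```python
# 2x2 comparison of clinical response between differentiation groups, using
# the counts reported in the Results. chi2_contingency applies Yates'
# continuity correction for 2x2 tables by default; fisher_exact is added
# here only as a sanity check for the zero cell.
from scipy.stats import chi2_contingency, fisher_exact

#        responders, non-responders
table = [[16, 0],   # better differentiated group (n = 16)
         [4, 12]]   # poorly differentiated group (n = 16)

chi2, p_chi2, dof, expected = chi2_contingency(table)
_, p_fisher = fisher_exact(table)
print(f"chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi2:.5f}")
print(f"Fisher exact: p = {p_fisher:.5f}")
```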
Results
Patients' characteristics: A total of 32 patients with locally advanced gastric cancer were enrolled in this study. The patients' characteristics (age, sex, and body mass index (kg/m2)), pathological degree of differentiation, and preoperative pathological stages are listed in Table 1.

Table 1. Patients' general characteristics
                                      Better differentiated (n=16)    Poorly differentiated (n=16)    P
Male:female                           9:7                             11:5                            0.465
Average age (years)                   53.80 ± 4.21                    52.50 ± 4.67                    0.522
Average body mass index               35.24 ± 5.35                    36.10 ± 7.23                    0.705
Degree of differentiation             Well: 4; Moderate: 12           Lower: 6; Undifferentiated: 10
Preoperative TNM stage III            12                              13                              0.669
Preoperative TNM stage IV             4                               3                               0.059
Mean number of chemotherapy cycles    2.8 ± 0.75                      3.4 ± 0.50

Response rate of preoperative chemotherapy: Of the 32 patients, no patient was rated as having a complete response or progressive disease, 20 patients were rated as having a partial response, and 12 patients were rated as having stable disease. The overall clinical response rate was 62.5%. Altogether, 26 patients (81.3%) received surgical R0 resection. However, six (37.5%) tumors with poor differentiation were considered unresectable after four cycles of preoperative chemotherapy.
Morphological changes after preoperative chemotherapy: Figures 1A and 1B show a significant change in tumor size (partial response) before and after preoperative chemotherapy. Figures 1C and 1D show the necrosis of tumor tissue surrounded by inflammatory tissue after preoperative chemotherapy.
Figure 1. Morphological changes of locally advanced gastric cancer before and after preoperative chemotherapy. (A) Gross gastric carcinoma (endoscopy view) before chemotherapy. (B) Obviously shrunken gastric carcinoma (endoscopy view) after chemotherapy in the same patient. (C) Tumor tissue surrounded by inflammatory tissue (arrow) after chemotherapy. H & E staining, original magnification ×40. (D) Gastric cancer cells showed obvious nuclear necrosis after chemotherapy. H & E staining, original magnification ×100.
Comparison of response and surgical results between different pathological differentiations: The pathological differentiation was classified into two groups: better differentiated (well and moderately differentiated) and poorly differentiated (lower differentiated and undifferentiated). No difference was found in age, sex, body mass index, tumor-node-metastasis stage, or tumor size between the two groups. The clinical response rate in the better differentiated group was significantly higher than that in the poorly differentiated group. The partial gastrectomy rate in the better differentiated group was also significantly higher than that in the poorly differentiated group (Table 2). Neither death nor severe complication occurred in this study. No difference was found in surgical time or in the incidence rate of postoperative complications between the better and poorly differentiated groups.

Table 2. Comparison of chemotherapy results between better and poorly differentiated groups
                                  Better differentiated (n=16)    Poorly differentiated (n=16)    P
Clinical reaction rate            100% (16/16)                    25% (4/16)                      0.000
Partial gastrectomy               87.5% (14/16)                   25% (4/16)                      0.000
Total gastrectomy                 12.5% (2/16)                    37.5% (6/16)                    0.654
Unresectable cases
  Prechemotherapy                 18.7% (3/16)                    62.5% (10/16)                   0.012
  Postchemotherapy                0% (0/16)                       37.5% (6/16)                    0.024
R0 resection rate                 100% (16/16)                    62.5% (10/16)                   0.018
Toxicity reaction rate            100% (16/16)                    100% (16/16)
R0, no residual tumor after surgery.

Toxicity: No treatment termination or death occurred as a result of a toxic reaction. During preoperative chemotherapy, there were different degrees of toxic and adverse reactions, mainly myelosuppression, liver dysfunction, and gastrointestinal reactions. Toxic reactions occurred, to different extents, in all the patients receiving preoperative chemotherapy. Leukopenia (n = 24; 75%) was the most commonly reported adverse reaction. As markers of liver injury, alanine aminotransferase and aspartate aminotransferase levels were increased in nine (28.1%) and eight (25%) patients, respectively. Nausea and vomiting were the commonest digestive-tract reactions, with 19 patients (59.3%) reporting vomiting and 25 patients (67.7%) reporting nausea. Toxic neurological reactions occurred in 21 patients (78.1%). The toxic reactions reported in this study were similar to previous reports [14, 16].
Conclusions
From this study, it is concluded that better differentiated locally advanced gastric cancer is suitable for preoperative chemotherapy under the XELOX regimen. As a result of effective preoperative chemotherapy, much more gastric tissue can be preserved in patients with better differentiated locally advanced gastric cancer.
[ "Background", "Patients", "Preoperative chemotherapy and surgery", "Evaluation for efficacy and adverse events monitoring", "Statistical analysis", "Patients’ characteristics", "Response rate of preoperative chemotherapy", "Morphological changes after preoperative chemotherapy", "Comparison of response and surgical results between different pathological differentiations", "Toxicity" ]
[ "Gastric cancer is one of the most common malignancies in Asia, especially in China, Korea, and Japan\n[1]. It is the third leading cause of cancer-related mortality in China, and Chinese patients with gastric cancer account for 42% of the worldwide patient population with gastric cancer\n[2]. Surgical resection of the tumor is the most effective approach in increasing the long-term survival of patients with early stage gastric cancer\n[3]. The five-year survival rate of patients with resectable gastric cancer in advanced stages (stages III or IV) can be improved through combined surgical management with perioperative chemotherapy\n[4]. The benefits of preoperative chemotherapy (neo-adjuvant chemotherapy) for patients with gastric cancer are as follows: reduces tumor volume, which results in tumor downstage, improves the R0 resection rate (no residual tumor after surgery), acts on micrometastasis, decreases the risk of local tumor recurrence, and aids in evaluating tumor chemosensitivity to cytotoxic drugs\n[5–10]. In locally advanced gastric cancer, the primary tumor is invaded through the submucosal layers of gastric tissues, with regional nodal involvement, and occupies most of the normal gastric cell lines\n[11, 12]. Although the long-term effects remain controversial, preoperative chemotherapy for locally advanced gastric cancer has shown encouraging rates of pathologic complete response and R0 resection, with acceptable rates of acute and late toxicities\n[13, 14]. However, no report was found on the factor of pathological differentiation in preoperative chemotherapy of locally advanced gastric cancer. Therefore, the aim of this study was to evaluate the influence of pathological differentiation in the effect of preoperative chemotherapy for patients with locally advanced gastric cancer.\nIn this study, we compared the clinical response rate of preoperative chemotherapy between better and poorly differentiated locally advanced gastric cancer, and discussed its effect in the preservation of gastric tissue during gastrectomy.", "Patients who had received preoperative chemotherapy and surgical treatment for locally advanced gastric cancer in the gastrointestinal department of the China-Japan Union Hospital of Jilin University, China, between April 2009 and March 2013, were retrospectively reviewed. The preoperative diagnosis was made through endoscopy, biopsy, endoscopic ultrasound, and enhanced computed tomography. The cancer staging was evaluated according to the Union for International Cancer Control tumor-node-metastasis classification (sixth edition)\n[15]. Patients were fully informed about the side effects of preoperative chemotherapy and surgery, and they chose this treatment by themselves voluntarily. Preoperative chemotherapy for locally advanced gastric cancer was approved by the Medical Ethics Committee of the China-Japan Union Hospital of Jilin University, China.", "As preoperative chemotherapy, the XELOX (capecitabine plus oxaliplatin) regimen was used in this study\n[14], as follows: intravenous infusion of oxaliplatin 130 mg/m2 over 2 hours on Day 1, followed by capecitabine 1,000 mg/m2 orally twice daily for 2 weeks. This cycle was repeated once every 3 weeks, and the patients were given two cycles before evaluation of the chemotherapeutic effect. Clinical efficacy was evaluated by computed tomography and endoscopy. Patients with resectable tumors after chemotherapy were chosen for surgery. 
Patients with unresectable tumors after two cycles of chemotherapy continued to receive chemotherapy (for a total of four cycles), after which the efficacy was again evaluated by computed tomography and endoscopy. At this point, patients with resectable tumors would undergo surgery, and patients with unresectable tumors would be excluded from the study. Some patients whose tumors were resectable initially, but for whom total gastrectomy seemed unavoidable were also included in this study. The choice of surgical type depended on the treating surgeon’s preference, primary tumor location, and extent of disease.", "The tumors’ reaction to prechemotherapy was evaluated as follows: (1) complete response, complete disappearance of the tumor; (2) partial response, a decrease of more than 30% in tumor size; (3) progressive disease, tumor size increased, more than 20%; and (4) stable disease, no change found in tumor size. The clinical response rate was calculated as follows:\n\nRepeated computed tomography was used to evaluate the change in size of the metastasizing lymph node, and repeated endoscopy examination was used to evaluate changes in the primary gastric carcinoma. Patients’ liver and kidney function, bone marrow hematopoiesis, gastrointestinal reactions, and related adverse events were closely monitored during the treatment. Toxic reactions were evaluated using the National Cancer Institute Common Toxicity Criteria (version 3.0)\n[14].", "The patient’s age and body mass index were presented as χ ± standard deviation. The chi-square test was used to compare the differences in clinical response rate, operation type, and R0 resection rate. All statistical analyses were performed using SPSS software, version 11.0 (SPSS Inc, Chicago, United States).", "A total of 32 patients with locally advanced gastric cancer were enrolled in this study. The patients’ characteristics (age, sex, and body mass index (kg/m2)), pathological degree of differentiation, and preoperative pathological stages are listed in the Table \n1.Table 1\nPatients’ general characteristics\nBetter differentiated group (n= 16)Poorly differentiated group (n= 16)\nP\nMale:female9:711:50.465Average age (years)53.80 ± 4.2152.50 ± 4.670.522Average body mass index35.24 ± 5.3536.10 ± 7.230.705Degree of differentiation:Well4Moderate12Lower6Undifferentiated10Preoperative tumor-node-metastasis stage:III12130.669IV430.059Mean number of chemotherapy cycles2.8 ± 0.753.4 ± 0.50\n\nPatients’ general characteristics\n", "Of the 32 patients, no patient was rated as having complete response or progressive disease, 20 patients were rated as having a partial response, and 12 patients were rated as having stable disease. The overall clinical response rate was 62.5%. Altogether, 26 patients (81.3%) received surgical R0 resection. However, six (37.5%) tumors with poor differentiation were considered as unresectable after four cycles of preoperative chemotherapy.", "Figures \n1A and\n1B show a significant change in tumor size (partial response) before and after preoperative chemotherapy. Figures \n1C and\n1D show the necrosis of tumor tissue surrounded by inflammatory tissue after preoperative chemotherapy.Figure 1\nMorphological changes of local advanced gastric cancer before and after preoperative chemotherapy. (A) Gross gastric carcinoma (endoscopy view) before chemotherapy. (B) Obviously shrunk gastric carcinoma (endoscopy view) after chemotherapy in the same patient. (C) Tumor tissue surrounded by inflammatory tissue (arrow) after chemotherapy. 
H & E staining, original magnification ×40. (D) Gastric cancer cells showed obvious nucleus necrosis after chemotherapy. H & E staining, original magnification ×100.\n\nMorphological changes of local advanced gastric cancer before and after preoperative chemotherapy. (A) Gross gastric carcinoma (endoscopy view) before chemotherapy. (B) Obviously shrunk gastric carcinoma (endoscopy view) after chemotherapy in the same patient. (C) Tumor tissue surrounded by inflammatory tissue (arrow) after chemotherapy. H & E staining, original magnification ×40. (D) Gastric cancer cells showed obvious nucleus necrosis after chemotherapy. H & E staining, original magnification ×100.", "The pathological differentiation was classified into two groups: better differentiated (well and moderately differentiated) and poorly differentiated (lower differentiated and undifferentiated). No difference was found in the age, sex, body mass index, tumor-node-metastasis stage, and tumor size between the two groups. The clinical response rate in the better differentiated group was significantly higher than that in the poorly differentiated group. The partial gastrectomy rate in the better differentiated group was significantly higher than that in the poorly differentiated group (Table \n2). Neither death nor severe complication occurred in this study. No difference was found in surgical time and incidence rate of postoperative complications between the well and poorly differentiated groups.Table 2\nComparison of chemotherapy results between better and poorly differentiated groups\nBetter differentiated group (n= 16)Poorly differentiated group (n= 16)\nP\nClinical reaction rate100% (16/16)25% (4/16)0.000Partial gastrectomy87.5% (14/16)25% (4/16)0.000Total gastrectomy12.5% (2/16)37.5% (6/16)0.654Unresectable cases Prechemotherapy18.7% (3/16)62.5% (10/16)0.012 Postchemotherapy0 (0/16)37.5% (6/16)0.024R0 resection rate100% (16/16)62.5% (10/16)0.018Toxicity reaction rate100% (16/16)100% (16/16)R0, no residual tumor after surgery.\n\nComparison of chemotherapy results between better and poorly differentiated groups\n\nR0, no residual tumor after surgery.", "No treatment termination or death occurred as a result of a toxic reaction. During preoperative chemotherapy, there were different degrees of toxic and adverse reactions, mainly myelosuppression, liver dysfunction, and gastrointestinal reactions. Toxic reaction occurred in all the patients receiving preoperative chemotherapy, to different extents. Leukopenia (n = 24; 75%) was the most commonly reported adverse reaction. As markers of liver injury, alanine aminotransferase and aspartate aminotransferase levels were increased in nine (28.1%) and eight patients (25%), respectively. Nausea and vomiting were the commonest digestive-tract reactions, with 19 patients (59.3%) reporting vomiting, and 25 patients (67.7%) reporting nausea. Toxic neurological reaction occurred in 21 patients (78.1%). The toxic reactions reported in this study were similar with previous reports\n[14, 16]." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients", "Preoperative chemotherapy and surgery", "Evaluation for efficacy and adverse events monitoring", "Statistical analysis", "Results", "Patients’ characteristics", "Response rate of preoperative chemotherapy", "Morphological changes after preoperative chemotherapy", "Comparison of response and surgical results between different pathological differentiations", "Toxicity", "Discussion", "Conclusions" ]
[ "Gastric cancer is one of the most common malignancies in Asia, especially in China, Korea, and Japan\n[1]. It is the third leading cause of cancer-related mortality in China, and Chinese patients with gastric cancer account for 42% of the worldwide patient population with gastric cancer\n[2]. Surgical resection of the tumor is the most effective approach in increasing the long-term survival of patients with early stage gastric cancer\n[3]. The five-year survival rate of patients with resectable gastric cancer in advanced stages (stages III or IV) can be improved through combined surgical management with perioperative chemotherapy\n[4]. The benefits of preoperative chemotherapy (neo-adjuvant chemotherapy) for patients with gastric cancer are as follows: reduces tumor volume, which results in tumor downstage, improves the R0 resection rate (no residual tumor after surgery), acts on micrometastasis, decreases the risk of local tumor recurrence, and aids in evaluating tumor chemosensitivity to cytotoxic drugs\n[5–10]. In locally advanced gastric cancer, the primary tumor is invaded through the submucosal layers of gastric tissues, with regional nodal involvement, and occupies most of the normal gastric cell lines\n[11, 12]. Although the long-term effects remain controversial, preoperative chemotherapy for locally advanced gastric cancer has shown encouraging rates of pathologic complete response and R0 resection, with acceptable rates of acute and late toxicities\n[13, 14]. However, no report was found on the factor of pathological differentiation in preoperative chemotherapy of locally advanced gastric cancer. Therefore, the aim of this study was to evaluate the influence of pathological differentiation in the effect of preoperative chemotherapy for patients with locally advanced gastric cancer.\nIn this study, we compared the clinical response rate of preoperative chemotherapy between better and poorly differentiated locally advanced gastric cancer, and discussed its effect in the preservation of gastric tissue during gastrectomy.", " Patients Patients who had received preoperative chemotherapy and surgical treatment for locally advanced gastric cancer in the gastrointestinal department of the China-Japan Union Hospital of Jilin University, China, between April 2009 and March 2013, were retrospectively reviewed. The preoperative diagnosis was made through endoscopy, biopsy, endoscopic ultrasound, and enhanced computed tomography. The cancer staging was evaluated according to the Union for International Cancer Control tumor-node-metastasis classification (sixth edition)\n[15]. Patients were fully informed about the side effects of preoperative chemotherapy and surgery, and they chose this treatment by themselves voluntarily. Preoperative chemotherapy for locally advanced gastric cancer was approved by the Medical Ethics Committee of the China-Japan Union Hospital of Jilin University, China.\nPatients who had received preoperative chemotherapy and surgical treatment for locally advanced gastric cancer in the gastrointestinal department of the China-Japan Union Hospital of Jilin University, China, between April 2009 and March 2013, were retrospectively reviewed. The preoperative diagnosis was made through endoscopy, biopsy, endoscopic ultrasound, and enhanced computed tomography. The cancer staging was evaluated according to the Union for International Cancer Control tumor-node-metastasis classification (sixth edition)\n[15]. 
Patients were fully informed about the side effects of preoperative chemotherapy and surgery, and they chose this treatment by themselves voluntarily. Preoperative chemotherapy for locally advanced gastric cancer was approved by the Medical Ethics Committee of the China-Japan Union Hospital of Jilin University, China.\n Preoperative chemotherapy and surgery As preoperative chemotherapy, the XELOX (capecitabine plus oxaliplatin) regimen was used in this study\n[14], as follows: intravenous infusion of oxaliplatin 130 mg/m2 over 2 hours on Day 1, followed by capecitabine 1,000 mg/m2 orally twice daily for 2 weeks. This cycle was repeated once every 3 weeks, and the patients were given two cycles before evaluation of the chemotherapeutic effect. Clinical efficacy was evaluated by computed tomography and endoscopy. Patients with resectable tumors after chemotherapy were chosen for surgery. Patients with unresectable tumors after two cycles of chemotherapy continued to receive chemotherapy (for a total of four cycles), after which the efficacy was again evaluated by computed tomography and endoscopy. At this point, patients with resectable tumors would undergo surgery, and patients with unresectable tumors would be excluded from the study. Some patients whose tumors were resectable initially, but for whom total gastrectomy seemed unavoidable were also included in this study. The choice of surgical type depended on the treating surgeon’s preference, primary tumor location, and extent of disease.\nAs preoperative chemotherapy, the XELOX (capecitabine plus oxaliplatin) regimen was used in this study\n[14], as follows: intravenous infusion of oxaliplatin 130 mg/m2 over 2 hours on Day 1, followed by capecitabine 1,000 mg/m2 orally twice daily for 2 weeks. This cycle was repeated once every 3 weeks, and the patients were given two cycles before evaluation of the chemotherapeutic effect. Clinical efficacy was evaluated by computed tomography and endoscopy. Patients with resectable tumors after chemotherapy were chosen for surgery. Patients with unresectable tumors after two cycles of chemotherapy continued to receive chemotherapy (for a total of four cycles), after which the efficacy was again evaluated by computed tomography and endoscopy. At this point, patients with resectable tumors would undergo surgery, and patients with unresectable tumors would be excluded from the study. Some patients whose tumors were resectable initially, but for whom total gastrectomy seemed unavoidable were also included in this study. The choice of surgical type depended on the treating surgeon’s preference, primary tumor location, and extent of disease.\n Evaluation for efficacy and adverse events monitoring The tumors’ reaction to prechemotherapy was evaluated as follows: (1) complete response, complete disappearance of the tumor; (2) partial response, a decrease of more than 30% in tumor size; (3) progressive disease, tumor size increased, more than 20%; and (4) stable disease, no change found in tumor size. The clinical response rate was calculated as follows:\n\nRepeated computed tomography was used to evaluate the change in size of the metastasizing lymph node, and repeated endoscopy examination was used to evaluate changes in the primary gastric carcinoma. Patients’ liver and kidney function, bone marrow hematopoiesis, gastrointestinal reactions, and related adverse events were closely monitored during the treatment. 
"Patients who had received preoperative chemotherapy and surgical treatment for locally advanced gastric cancer in the gastrointestinal department of the China-Japan Union Hospital of Jilin University, China, between April 2009 and March 2013 were retrospectively reviewed. The preoperative diagnosis was made through endoscopy, biopsy, endoscopic ultrasound, and enhanced computed tomography. Cancer staging was evaluated according to the Union for International Cancer Control tumor-node-metastasis classification (sixth edition)\n[15]. Patients were fully informed about the side effects of preoperative chemotherapy and surgery, and chose this treatment voluntarily. Preoperative chemotherapy for locally advanced gastric cancer was approved by the Medical Ethics Committee of the China-Japan Union Hospital of Jilin University, China.", "The XELOX (capecitabine plus oxaliplatin) regimen was used as preoperative chemotherapy in this study\n[14], as follows: intravenous infusion of oxaliplatin 130 mg/m2 over 2 hours on Day 1, followed by capecitabine 1,000 mg/m2 orally twice daily for 2 weeks. This cycle was repeated once every 3 weeks, and patients were given two cycles before evaluation of the chemotherapeutic effect. Clinical efficacy was evaluated by computed tomography and endoscopy. Patients with resectable tumors after chemotherapy proceeded to surgery. Patients whose tumors remained unresectable after two cycles continued to receive chemotherapy (for a total of four cycles), after which efficacy was again evaluated by computed tomography and endoscopy. At that point, patients with resectable tumors underwent surgery, and patients with unresectable tumors were excluded from the study. Some patients whose tumors were resectable initially, but for whom total gastrectomy seemed unavoidable, were also included in this study. The choice of surgical type depended on the treating surgeon’s preference, primary tumor location, and extent of disease.", "Tumor response to preoperative chemotherapy was evaluated as follows: (1) complete response, complete disappearance of the tumor; (2) partial response, a decrease of more than 30% in tumor size; (3) progressive disease, an increase of more than 20% in tumor size; and (4) stable disease, no change in tumor size. The clinical response rate was calculated as the proportion of patients achieving a complete or partial response:\nclinical response rate = (complete responses + partial responses) / total number of patients × 100%.\nRepeated computed tomography was used to evaluate changes in the size of metastatic lymph nodes, and repeated endoscopy was used to evaluate changes in the primary gastric carcinoma. Patients’ liver and kidney function, bone marrow hematopoiesis, gastrointestinal reactions, and related adverse events were closely monitored during treatment. Toxic reactions were graded using the National Cancer Institute Common Toxicity Criteria (version 3.0)\n[14].", 
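A minimal sketch of the response-classification rule just described, assuming tumor burden is summarized as a single size measurement per patient; the function and variable names are illustrative and do not come from the paper:

def classify_response(baseline_size: float, current_size: float) -> str:
    """Classify tumor response using the size thresholds described above:
    complete disappearance -> CR, >30% decrease -> PR, >20% increase -> PD,
    otherwise SD."""
    if current_size == 0:
        return "complete response"
    change = (current_size - baseline_size) / baseline_size
    if change < -0.30:
        return "partial response"
    if change > 0.20:
        return "progressive disease"
    return "stable disease"


def clinical_response_rate(size_pairs) -> float:
    """Proportion of patients whose response is CR or PR."""
    responders = sum(
        classify_response(before, after) in ("complete response", "partial response")
        for before, after in size_pairs
    )
    return responders / len(size_pairs)

For example, a tumor shrinking from 50 mm to 30 mm is a 40% decrease and classifies as a partial response.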
"Patients’ age and body mass index were presented as mean ± standard deviation. The chi-square test was used to compare differences in clinical response rate, operation type, and R0 resection rate. All statistical analyses were performed using SPSS software, version 11.0 (SPSS Inc., Chicago, United States).", " Patients’ characteristics A total of 32 patients with locally advanced gastric cancer were enrolled in this study. The patients’ characteristics (age, sex, and body mass index (kg/m2)), pathological degree of differentiation, and preoperative pathological stages are listed in Table 1.\nTable 1. Patients’ general characteristics (better differentiated group, n = 16, vs poorly differentiated group, n = 16):\nMale:female: 9:7 vs 11:5 (P = 0.465)\nAverage age (years): 53.80 ± 4.21 vs 52.50 ± 4.67 (P = 0.522)\nAverage body mass index: 35.24 ± 5.35 vs 36.10 ± 7.23 (P = 0.705)\nDegree of differentiation: well 4, moderate 12 vs lower 6, undifferentiated 10\nPreoperative tumor-node-metastasis stage III: 12 vs 13 (P = 0.669)\nPreoperative tumor-node-metastasis stage IV: 4 vs 3 (P = 0.059)\nMean number of chemotherapy cycles: 2.8 ± 0.75 vs 3.4 ± 0.50\n Response rate of preoperative chemotherapy Of the 32 patients, none was rated as having a complete response or progressive disease; 20 patients were rated as having a partial response, and 12 as having stable disease. The overall clinical response rate was 62.5%.
Altogether, 26 patients (81.3%) underwent R0 resection. However, six poorly differentiated tumors (37.5% of that group) were still considered unresectable after four cycles of preoperative chemotherapy.\n Morphological changes after preoperative chemotherapy Figures 1A and 1B show a marked change in tumor size (partial response) before and after preoperative chemotherapy. Figures 1C and 1D show necrosis of tumor tissue surrounded by inflammatory tissue after preoperative chemotherapy.\nFigure 1. Morphological changes of locally advanced gastric cancer before and after preoperative chemotherapy. (A) Gross gastric carcinoma (endoscopic view) before chemotherapy. (B) Markedly shrunken gastric carcinoma (endoscopic view) after chemotherapy in the same patient. (C) Tumor tissue surrounded by inflammatory tissue (arrow) after chemotherapy. H & E staining, original magnification ×40. (D) Gastric cancer cells showing obvious nuclear necrosis after chemotherapy. H & E staining, original magnification ×100.\n Comparison of response and surgical results between different pathological differentiations Pathological differentiation was classified into two groups: better differentiated (well and moderately differentiated) and poorly differentiated (lower differentiated and undifferentiated). No difference was found in age, sex, body mass index, tumor-node-metastasis stage, or tumor size between the two groups. The clinical response rate in the better differentiated group was significantly higher than that in the poorly differentiated group. The partial gastrectomy rate in the better differentiated group was also significantly higher than that in the poorly differentiated group (Table 2). Neither death nor severe complication occurred in this study. No difference was found in surgical time or in the incidence of postoperative complications between the two groups.\nTable 2. Comparison of chemotherapy results (better differentiated group, n = 16, vs poorly differentiated group, n = 16):\nClinical reaction rate: 100% (16/16) vs 25% (4/16) (P < 0.001)\nPartial gastrectomy: 87.5% (14/16) vs 25% (4/16) (P < 0.001)\nTotal gastrectomy: 12.5% (2/16) vs 37.5% (6/16) (P = 0.654)\nUnresectable cases, prechemotherapy: 18.7% (3/16) vs 62.5% (10/16) (P = 0.012)\nUnresectable cases, postchemotherapy: 0% (0/16) vs 37.5% (6/16) (P = 0.024)\nR0 resection rate: 100% (16/16) vs 62.5% (10/16) (P = 0.018)\nToxicity reaction rate: 100% (16/16) vs 100% (16/16)\nR0, no residual tumor after surgery.
\n Toxicity No treatment termination or death occurred as a result of a toxic reaction. During preoperative chemotherapy, toxic and adverse reactions of varying degrees occurred, mainly myelosuppression, liver dysfunction, and gastrointestinal reactions; all patients receiving preoperative chemotherapy were affected to some extent. Leukopenia (n = 24; 75%) was the most commonly reported adverse reaction. As markers of liver injury, alanine aminotransferase and aspartate aminotransferase levels were increased in nine (28.1%) and eight (25%) patients, respectively. Nausea and vomiting were the commonest digestive-tract reactions, with 19 patients (59.4%) reporting vomiting and 25 patients (78.1%) reporting nausea. Toxic neurological reactions occurred in 21 patients (65.6%). The toxic reactions reported in this study were similar to those in previous reports\n[14, 16].", "A total of 32 patients with locally advanced gastric cancer were enrolled in this study. The patients’ characteristics (age, sex, and body mass index (kg/m2)), pathological degree of differentiation, and preoperative pathological stages are listed in Table 1.\nTable 1. Patients’ general characteristics (better differentiated group, n = 16, vs poorly differentiated group, n = 16):\nMale:female: 9:7 vs 11:5 (P = 0.465)\nAverage age (years): 53.80 ± 4.21 vs 52.50 ± 4.67 (P = 0.522)\nAverage body mass index: 35.24 ± 5.35 vs 36.10 ± 7.23 (P = 0.705)\nDegree of differentiation: well 4, moderate 12 vs lower 6, undifferentiated 10\nPreoperative tumor-node-metastasis stage III: 12 vs 13 (P = 0.669)\nPreoperative tumor-node-metastasis stage IV: 4 vs 3 (P = 0.059)\nMean number of chemotherapy cycles: 2.8 ± 0.75 vs 3.4 ± 0.50", 
"Of the 32 patients, none was rated as having a complete response or progressive disease; 20 patients were rated as having a partial response, and 12 as having stable disease. The overall clinical response rate was 62.5%. Altogether, 26 patients (81.3%) underwent R0 resection. However, six poorly differentiated tumors (37.5% of that group) were still considered unresectable after four cycles of preoperative chemotherapy.", "Figures 1A and 1B show a marked change in tumor size (partial response) before and after preoperative chemotherapy. Figures 1C and 1D show necrosis of tumor tissue surrounded by inflammatory tissue after preoperative chemotherapy.\nFigure 1. Morphological changes of locally advanced gastric cancer before and after preoperative chemotherapy. (A) Gross gastric carcinoma (endoscopic view) before chemotherapy. (B) Markedly shrunken gastric carcinoma (endoscopic view) after chemotherapy in the same patient. (C) Tumor tissue surrounded by inflammatory tissue (arrow) after chemotherapy. H & E staining, original magnification ×40. (D) Gastric cancer cells showing obvious nuclear necrosis after chemotherapy. H & E staining, original magnification ×100.", "Pathological differentiation was classified into two groups: better differentiated (well and moderately differentiated) and poorly differentiated (lower differentiated and undifferentiated). No difference was found in age, sex, body mass index, tumor-node-metastasis stage, or tumor size between the two groups. The clinical response rate in the better differentiated group was significantly higher than that in the poorly differentiated group. The partial gastrectomy rate in the better differentiated group was also significantly higher than that in the poorly differentiated group (Table 2). Neither death nor severe complication occurred in this study. No difference was found in surgical time or in the incidence of postoperative complications between the two groups.\nTable 2. Comparison of chemotherapy results (better differentiated group, n = 16, vs poorly differentiated group, n = 16):\nClinical reaction rate: 100% (16/16) vs 25% (4/16) (P < 0.001)\nPartial gastrectomy: 87.5% (14/16) vs 25% (4/16) (P < 0.001)\nTotal gastrectomy: 12.5% (2/16) vs 37.5% (6/16) (P = 0.654)\nUnresectable cases, prechemotherapy: 18.7% (3/16) vs 62.5% (10/16) (P = 0.012)\nUnresectable cases, postchemotherapy: 0% (0/16) vs 37.5% (6/16) (P = 0.024)\nR0 resection rate: 100% (16/16) vs 62.5% (10/16) (P = 0.018)\nToxicity reaction rate: 100% (16/16) vs 100% (16/16)\nR0, no residual tumor after surgery.", 
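The chi-square comparisons behind Table 2 were run in SPSS 11.0; a sketch of the same test on the clinical-response counts using scipy (not used in the paper, shown only to make the comparison reproducible):

from scipy.stats import chi2_contingency

# Clinical response counts from Table 2: responders vs non-responders,
# better differentiated 16/16 vs poorly differentiated 4/16.
table = [[16, 0],
         [4, 12]]

chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected for 2x2
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.5f}")  # p falls well below 0.001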
"No treatment termination or death occurred as a result of a toxic reaction. During preoperative chemotherapy, toxic and adverse reactions of varying degrees occurred, mainly myelosuppression, liver dysfunction, and gastrointestinal reactions; all patients receiving preoperative chemotherapy were affected to some extent. Leukopenia (n = 24; 75%) was the most commonly reported adverse reaction. As markers of liver injury, alanine aminotransferase and aspartate aminotransferase levels were increased in nine (28.1%) and eight (25%) patients, respectively. Nausea and vomiting were the commonest digestive-tract reactions, with 19 patients (59.4%) reporting vomiting and 25 patients (78.1%) reporting nausea. Toxic neurological reactions occurred in 21 patients (65.6%). The toxic reactions reported in this study were similar to those in previous reports\n[14, 16].", 
"Unlike early gastric cancer, advanced gastric cancer responds poorly to surgical treatment alone because of local tumor invasion and extensive lymph node metastasis, with survival times often no longer than one year\n[17, 18]. However, surgery is still the primary treatment modality for achieving a potential cure and can be beneficial in the palliation of advanced gastric cancer\n[19, 20]. The high recurrence rate after surgical resection for locally advanced gastric cancer is considered the main reason for poor treatment results\n[21, 22]. Many clinical studies have shown that chemotherapy can downstage the tumor, eliminate micrometastases, and render some unresectable gastric cancers resectable, thereby prolonging patients’ survival\n[23–26].\nRecently, the XELOX regimen has been used as a new chemotherapeutic strategy for patients with locally advanced gastric cancer; it is easy to adopt in clinical practice, with an encouraging 63% clinical response rate and a median survival time of 11.9 months\n[27]. In the present study, similar results were obtained, with a clinical response rate of 62.5% and an R0 resection rate of 81.3%. On comparing the clinical response rates, the better differentiated group showed a 100% response rate, whereas the poorly differentiated group showed only 25%. This result strongly suggests that well and moderately differentiated locally advanced gastric cancer is a good candidate for preoperative chemotherapy. Moreover, total gastrectomy could be avoided in patients with well differentiated gastric cancer, since effective preoperative chemotherapy allowed normal gastric tissue to recover. Although the short-term results for better differentiated locally advanced gastric cancer were promising in this study, longer-term survival requires further observation.\nAfter chemotherapy, well differentiated larger tumors were markedly shrunken. On H & E staining, necrosis of tumor tissue was easily seen under the microscope, commonly surrounded by inflammatory tissue, forming the typical morphology of a chemosensitive tumor. Because the chemotherapy was effective and normal gastric tissue recovered, part of the stomach could be preserved, rather than the entire stomach removed, while still achieving R0 resection.\nToxic reactions to chemotherapy were very common and were the main cause of patients refusing or discontinuing chemotherapy. In the present study, patients experienced toxic and adverse reactions of varying degrees, especially during the first cycle. Patients recovered from leukopenia and abnormal liver function after two to three weeks of rest. Nausea and vomiting often occurred during the intravenous infusion of oxaliplatin at the beginning of therapy, in which case fluid infusion was necessary to maintain acid-base balance and supply nutritional energy. For patients with severe toxic reactions, delaying treatment was deemed necessary.\nThere are several limitations of this study to note. First, this was a retrospective study, and retrospective studies are inherently less robust than prospective ones. Second, the sample sizes of the groups were relatively small, and additional differences might emerge in a larger study. Hence, a large prospective multicenter study could provide more robust and generalizable information about the influence of pathological differentiation on the effect of preoperative chemotherapy for patients with locally advanced gastric cancer.", "From this study, it is concluded that better differentiated locally advanced gastric cancer is suitable for preoperative chemotherapy under the XELOX regimen. As a result of effective preoperative chemotherapy, considerably more gastric tissue can be preserved in patients with better differentiated locally advanced gastric cancer." ]
[ null, "methods", null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusions" ]
[ "locally advanced gastric cancer", "pathological differentiation", "preoperative chemotherapy", "surgery" ]
Background: Gastric cancer is one of the most common malignancies in Asia, especially in China, Korea, and Japan [1]. It is the third leading cause of cancer-related mortality in China, and Chinese patients with gastric cancer account for 42% of the worldwide patient population with gastric cancer [2]. Surgical resection of the tumor is the most effective approach for increasing the long-term survival of patients with early-stage gastric cancer [3]. The five-year survival rate of patients with resectable gastric cancer in advanced stages (stage III or IV) can be improved by combining surgical management with perioperative chemotherapy [4]. The benefits of preoperative (neo-adjuvant) chemotherapy for patients with gastric cancer are as follows: it reduces tumor volume, which results in tumor downstaging; improves the R0 resection rate (no residual tumor after surgery); acts on micrometastases; decreases the risk of local tumor recurrence; and aids in evaluating tumor chemosensitivity to cytotoxic drugs [5–10]. In locally advanced gastric cancer, the primary tumor invades through the submucosal layers of the gastric wall, involves regional lymph nodes, and replaces much of the normal gastric tissue [11, 12]. Although the long-term effects remain controversial, preoperative chemotherapy for locally advanced gastric cancer has shown encouraging rates of pathologic complete response and R0 resection, with acceptable rates of acute and late toxicities [13, 14]. However, no previous report has examined pathological differentiation as a factor in the response of locally advanced gastric cancer to preoperative chemotherapy. Therefore, the aim of this study was to evaluate the influence of pathological differentiation on the effect of preoperative chemotherapy in patients with locally advanced gastric cancer. In this study, we compared the clinical response rate of preoperative chemotherapy between better and poorly differentiated locally advanced gastric cancer, and discussed its effect on the preservation of gastric tissue during gastrectomy. Methods: Patients: Patients who had received preoperative chemotherapy and surgical treatment for locally advanced gastric cancer in the gastrointestinal department of the China-Japan Union Hospital of Jilin University, China, between April 2009 and March 2013 were retrospectively reviewed. The preoperative diagnosis was made through endoscopy, biopsy, endoscopic ultrasound, and enhanced computed tomography. Cancer staging was evaluated according to the Union for International Cancer Control tumor-node-metastasis classification (sixth edition) [15]. Patients were fully informed about the side effects of preoperative chemotherapy and surgery, and chose this treatment voluntarily. Preoperative chemotherapy for locally advanced gastric cancer was approved by the Medical Ethics Committee of the China-Japan Union Hospital of Jilin University, China.
Preoperative chemotherapy and surgery: The XELOX (capecitabine plus oxaliplatin) regimen was used as preoperative chemotherapy in this study [14], as follows: intravenous infusion of oxaliplatin 130 mg/m2 over 2 hours on Day 1, followed by capecitabine 1,000 mg/m2 orally twice daily for 2 weeks. This cycle was repeated once every 3 weeks, and patients were given two cycles before evaluation of the chemotherapeutic effect. Clinical efficacy was evaluated by computed tomography and endoscopy. Patients with resectable tumors after chemotherapy proceeded to surgery. Patients whose tumors remained unresectable after two cycles continued to receive chemotherapy (for a total of four cycles), after which efficacy was again evaluated by computed tomography and endoscopy. At that point, patients with resectable tumors underwent surgery, and patients with unresectable tumors were excluded from the study. Some patients whose tumors were resectable initially, but for whom total gastrectomy seemed unavoidable, were also included in this study. The choice of surgical type depended on the treating surgeon’s preference, primary tumor location, and extent of disease.
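Both drugs are dosed per square meter of body surface area. The paper does not say which BSA formula was used clinically; as an illustration only, a sketch using the Mosteller formula and a hypothetical 170 cm, 65 kg patient:

from math import sqrt

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Mosteller formula (illustrative choice;
    the paper does not state which formula was used)."""
    return sqrt(height_cm * weight_kg / 3600.0)

bsa = mosteller_bsa(170, 65)            # hypothetical patient, about 1.75 m^2
oxaliplatin_mg = 130 * bsa              # 130 mg/m^2 IV over 2 hours on Day 1
capecitabine_mg = 1000 * bsa            # 1,000 mg/m^2 orally twice daily
print(f"BSA = {bsa:.2f} m^2, oxaliplatin = {oxaliplatin_mg:.0f} mg, "
      f"capecitabine = {capecitabine_mg:.0f} mg per dose")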
Evaluation for efficacy and adverse events monitoring: Tumor response to preoperative chemotherapy was evaluated as follows: (1) complete response, complete disappearance of the tumor; (2) partial response, a decrease of more than 30% in tumor size; (3) progressive disease, an increase of more than 20% in tumor size; and (4) stable disease, no change in tumor size. The clinical response rate was calculated as the proportion of patients achieving a complete or partial response: clinical response rate = (complete responses + partial responses) / total number of patients × 100%. Repeated computed tomography was used to evaluate changes in the size of metastatic lymph nodes, and repeated endoscopy was used to evaluate changes in the primary gastric carcinoma. Patients’ liver and kidney function, bone marrow hematopoiesis, gastrointestinal reactions, and related adverse events were closely monitored during treatment. Toxic reactions were graded using the National Cancer Institute Common Toxicity Criteria (version 3.0) [14].
Statistical analysis: Patients’ age and body mass index were presented as mean ± standard deviation. The chi-square test was used to compare differences in clinical response rate, operation type, and R0 resection rate. All statistical analyses were performed using SPSS software, version 11.0 (SPSS Inc., Chicago, United States).
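The paper names only the chi-square test; the P values for the continuous variables in Table 1 (age, body mass index) are not attributed to a test. One plausible reconstruction is a two-sample t-test from summary statistics, sketched below; because the actual test is unstated, the output need not match the reported values exactly:

from scipy.stats import ttest_ind_from_stats

# Age from Table 1: mean +/- SD with n = 16 per group.
t, p = ttest_ind_from_stats(mean1=53.80, std1=4.21, nobs1=16,
                            mean2=52.50, std2=4.67, nobs2=16)
print(f"t = {t:.2f}, p = {p:.3f}")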
Results: Patients’ characteristics: A total of 32 patients with locally advanced gastric cancer were enrolled in this study. The patients’ characteristics (age, sex, and body mass index (kg/m2)), pathological degree of differentiation, and preoperative pathological stages are listed in Table 1.

Table 1. Patients’ general characteristics

                                      Better differentiated (n = 16)   Poorly differentiated (n = 16)   P
Male:female                           9:7                              11:5                             0.465
Average age (years)                   53.80 ± 4.21                     52.50 ± 4.67                     0.522
Average body mass index               35.24 ± 5.35                     36.10 ± 7.23                     0.705
Degree of differentiation             Well 4, Moderate 12              Lower 6, Undifferentiated 10
Preoperative tumor-node-metastasis stage
  III                                 12                               13                               0.669
  IV                                  4                                3                                0.059
Mean number of chemotherapy cycles    2.8 ± 0.75                       3.4 ± 0.50

Response rate of preoperative chemotherapy: Of the 32 patients, none was rated as having a complete response or progressive disease; 20 patients were rated as having a partial response, and 12 as having stable disease. The overall clinical response rate was 62.5%. Altogether, 26 patients (81.3%) underwent R0 resection. However, six poorly differentiated tumors (37.5% of that group) were still considered unresectable after four cycles of preoperative chemotherapy.
Morphological changes after preoperative chemotherapy: Figures 1A and 1B show a marked change in tumor size (partial response) before and after preoperative chemotherapy. Figures 1C and 1D show necrosis of tumor tissue surrounded by inflammatory tissue after preoperative chemotherapy.

Figure 1. Morphological changes of locally advanced gastric cancer before and after preoperative chemotherapy. (A) Gross gastric carcinoma (endoscopic view) before chemotherapy. (B) Markedly shrunken gastric carcinoma (endoscopic view) after chemotherapy in the same patient. (C) Tumor tissue surrounded by inflammatory tissue (arrow) after chemotherapy. H & E staining, original magnification ×40. (D) Gastric cancer cells showing obvious nuclear necrosis after chemotherapy. H & E staining, original magnification ×100.
Comparison of response and surgical results between different pathological differentiations: Pathological differentiation was classified into two groups: better differentiated (well and moderately differentiated) and poorly differentiated (lower differentiated and undifferentiated). No difference was found in age, sex, body mass index, tumor-node-metastasis stage, or tumor size between the two groups. The clinical response rate in the better differentiated group was significantly higher than that in the poorly differentiated group. The partial gastrectomy rate in the better differentiated group was also significantly higher than that in the poorly differentiated group (Table 2). Neither death nor severe complication occurred in this study. No difference was found in surgical time or in the incidence of postoperative complications between the two groups.

Table 2. Comparison of chemotherapy results between better and poorly differentiated groups

                                      Better differentiated (n = 16)   Poorly differentiated (n = 16)   P
Clinical reaction rate                100% (16/16)                     25% (4/16)                       < 0.001
Partial gastrectomy                   87.5% (14/16)                    25% (4/16)                       < 0.001
Total gastrectomy                     12.5% (2/16)                     37.5% (6/16)                     0.654
Unresectable cases, prechemotherapy   18.7% (3/16)                     62.5% (10/16)                    0.012
Unresectable cases, postchemotherapy  0% (0/16)                        37.5% (6/16)                     0.024
R0 resection rate                     100% (16/16)                     62.5% (10/16)                    0.018
Toxicity reaction rate                100% (16/16)                     100% (16/16)

R0, no residual tumor after surgery.
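With only 16 patients per group, several cells in Table 2 have small expected counts, a setting where Fisher's exact test is a common alternative to the chi-square test the paper used; a sketch on the R0 resection counts:

from scipy.stats import fisher_exact

# R0 resection from Table 2: 16/16 in the better differentiated group
# vs 10/16 in the poorly differentiated group.
odds_ratio, p = fisher_exact([[16, 0],
                              [10, 6]])
print(f"two-sided p = {p:.4f}")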
Toxicity: No treatment termination or death occurred as a result of a toxic reaction. During preoperative chemotherapy, toxic and adverse reactions of varying degrees occurred, mainly myelosuppression, liver dysfunction, and gastrointestinal reactions; all patients receiving preoperative chemotherapy were affected to some extent. Leukopenia (n = 24; 75%) was the most commonly reported adverse reaction. As markers of liver injury, alanine aminotransferase and aspartate aminotransferase levels were increased in nine (28.1%) and eight (25%) patients, respectively. Nausea and vomiting were the commonest digestive-tract reactions, with 19 patients (59.4%) reporting vomiting and 25 patients (78.1%) reporting nausea. Toxic neurological reactions occurred in 21 patients (65.6%). The toxic reactions reported in this study were similar to those in previous reports [14, 16].
Discussion: Unlike early gastric cancer, advanced gastric cancer responds poorly to surgical treatment alone because of local tumor invasion and extensive lymph node metastasis, with survival times often no longer than one year [17, 18]. However, surgery is still the primary treatment modality for achieving a potential cure and can be beneficial in the palliation of advanced gastric cancer [19, 20]. The high recurrence rate after surgical resection for locally advanced gastric cancer is considered the main reason for poor treatment results [21, 22]. Many clinical studies have shown that chemotherapy can downstage the tumor, eliminate micrometastases, and render some unresectable gastric cancers resectable, thereby prolonging patients’ survival [23–26]. Recently, the XELOX regimen has been used as a new chemotherapeutic strategy for patients with locally advanced gastric cancer; it is easy to adopt in clinical practice, with an encouraging 63% clinical response rate and a median survival time of 11.9 months [27]. In the present study, similar results were obtained, with a clinical response rate of 62.5% and an R0 resection rate of 81.3%. On comparing the clinical response rates, the better differentiated group showed a 100% response rate, whereas the poorly differentiated group showed only 25%. This result strongly suggests that well and moderately differentiated locally advanced gastric cancer is a good candidate for preoperative chemotherapy. Moreover, total gastrectomy could be avoided in patients with well differentiated gastric cancer, since effective preoperative chemotherapy allowed normal gastric tissue to recover. Although the short-term results for better differentiated locally advanced gastric cancer were promising in this study, longer-term survival requires further observation. After chemotherapy, well differentiated larger tumors were markedly shrunken. On H & E staining, necrosis of tumor tissue was easily seen under the microscope, commonly surrounded by inflammatory tissue, forming the typical morphology of a chemosensitive tumor. Because the chemotherapy was effective and normal gastric tissue recovered, part of the stomach could be preserved, rather than the entire stomach removed, while still achieving R0 resection. Toxic reactions to chemotherapy were very common and were the main cause of patients refusing or discontinuing chemotherapy. In the present study, patients experienced toxic and adverse reactions of varying degrees, especially during the first cycle. Patients recovered from leukopenia and abnormal liver function after two to three weeks of rest. Nausea and vomiting often occurred during the intravenous infusion of oxaliplatin at the beginning of therapy, in which case fluid infusion was necessary to maintain acid-base balance and supply nutritional energy. For patients with severe toxic reactions, delaying treatment was deemed necessary. There are several limitations of this study to note. First, this was a retrospective study, and retrospective studies are inherently less robust than prospective ones. Second, the sample sizes of the groups were relatively small, and additional differences might emerge in a larger study. Hence, a large prospective multicenter study could provide more robust and generalizable information about the influence of pathological differentiation on the effect of preoperative chemotherapy for patients with locally advanced gastric cancer. Conclusions: From this study, it is concluded that better differentiated locally advanced gastric cancer is suitable for preoperative chemotherapy under the XELOX regimen. As a result of effective preoperative chemotherapy, considerably more gastric tissue can be preserved in patients with better differentiated locally advanced gastric cancer.
Background: Gastric cancer is the third leading cause of cancer-related mortality in China, and the long-term survival for locally advanced gastric cancer is very poor. Surgery alone cannot yield an ideal result because of the high recurrence rate after tumor resection. Preoperative chemotherapy can help to reduce tumor volume, improve the R0 resection rate (no residual tumor after surgery), and decrease the risk of local tumor recurrence. The aim of this study was to evaluate the influence of pathological differentiation on the effect of preoperative chemotherapy for patients with locally advanced gastric cancer. Methods: Patients with locally advanced gastric cancer (n = 32) received preoperative chemotherapy under the XELOX (capecitabine plus oxaliplatin) regimen. According to pathological examination, patients' tumors were classified into better differentiated (well and moderately differentiated) and poorly differentiated (lower differentiated and undifferentiated) groups, and the clinical response rate, type of gastrectomy, and R0 resection rate were compared between the two groups. Morphological changes and toxic reactions were monitored after chemotherapy. Results: The clinical response rate in the better differentiated group was significantly higher than that in the poorly differentiated group (100% versus 25%, P < 0.001). The partial gastrectomy rate in the better differentiated group was also significantly higher than that in the poorly differentiated group (87.5% versus 25%, P < 0.001). Significant tumor shrinkage and necrosis of tumor tissue caused by chemotherapy were observed. Conclusions: Better differentiated locally advanced gastric cancer is suitable for preoperative chemotherapy under the XELOX regimen, and as a result of effective preoperative chemotherapy, considerably more gastric tissue can be preserved in the better differentiated group.
Background: Gastric cancer is one of the most common malignancies in Asia, especially in China, Korea, and Japan [1]. It is the third leading cause of cancer-related mortality in China, and Chinese patients with gastric cancer account for 42% of the worldwide patient population with gastric cancer [2]. Surgical resection of the tumor is the most effective approach for increasing the long-term survival of patients with early-stage gastric cancer [3]. The five-year survival rate of patients with resectable gastric cancer in advanced stages (stage III or IV) can be improved by combining surgical management with perioperative chemotherapy [4]. The benefits of preoperative (neo-adjuvant) chemotherapy for patients with gastric cancer are as follows: it reduces tumor volume, which results in tumor downstaging; improves the R0 resection rate (no residual tumor after surgery); acts on micrometastases; decreases the risk of local tumor recurrence; and aids in evaluating tumor chemosensitivity to cytotoxic drugs [5–10]. In locally advanced gastric cancer, the primary tumor invades through the submucosal layers of the gastric wall, involves regional lymph nodes, and replaces much of the normal gastric tissue [11, 12]. Although the long-term effects remain controversial, preoperative chemotherapy for locally advanced gastric cancer has shown encouraging rates of pathologic complete response and R0 resection, with acceptable rates of acute and late toxicities [13, 14]. However, no previous report has examined pathological differentiation as a factor in the response of locally advanced gastric cancer to preoperative chemotherapy. Therefore, the aim of this study was to evaluate the influence of pathological differentiation on the effect of preoperative chemotherapy in patients with locally advanced gastric cancer. In this study, we compared the clinical response rate of preoperative chemotherapy between better and poorly differentiated locally advanced gastric cancer, and discussed its effect on the preservation of gastric tissue during gastrectomy. Conclusions: From this study, it is concluded that better differentiated locally advanced gastric cancer is suitable for preoperative chemotherapy under the XELOX regimen. As a result of effective preoperative chemotherapy, considerably more gastric tissue can be preserved in patients with better differentiated locally advanced gastric cancer.
Background: Gastric cancer is the third leading cause of cancer-related mortality in China, and long-term survival for locally advanced gastric cancer is very poor. Surgery alone cannot yield an ideal result because of the high recurrence rate after tumor resection. Preoperative chemotherapy could help to reduce tumor volume, improve the R0 resection rate (no residual tumor after surgery), and decrease the risk of local tumor recurrence. The aim of this study was to evaluate the influence of pathological differentiation on the effect of preoperative chemotherapy in patients with locally advanced gastric cancer. Methods: Patients with locally advanced gastric cancer (n = 32) received preoperative chemotherapy under the XELOX (capecitabine plus oxaliplatin) regimen. According to pathological examination, patients' tumors were classified into better differentiated (well and moderately differentiated) and poorly differentiated (poorly differentiated and undifferentiated) groups, and the clinical response rate, type of gastrectomy, and negative tumor residual rate were compared between the two groups. Morphological changes and toxic reactions were monitored after chemotherapy. Results: The clinical response rate in the better differentiated group was significantly higher than that in the poorly differentiated group (100% versus 25%, P = 0.000). The partial gastrectomy rate in the better differentiated group was also significantly higher than that in the poorly differentiated group (87.5% versus 25%, P = 0.000). Significant tumor shrinkage and necrosis of tumor tissue caused by chemotherapy could be observed. Conclusions: The better differentiated group with locally advanced gastric cancer is suitable for preoperative chemotherapy under the XELOX regimen, and as a result of effective preoperative chemotherapy, much more gastric tissue can be preserved in the better differentiated group.
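As a rough plausibility check of the figures in this record, here is a minimal Python sketch using Fisher's exact test from SciPy. The 16/16 split of the 32 patients between the two differentiation groups is an assumption for illustration only; the abstract reports just the overall n and the percentages, and does not name the statistical test the authors used.

```python
# Rough plausibility check of the abstract's between-group comparisons.
# ASSUMPTION: 16 patients per differentiation group (the abstract reports
# only n = 32 overall); the study's own test is not named, so Fisher's
# exact test is used here as a conservative stand-in.
from scipy.stats import fisher_exact

# Clinical response: 100% (16/16) vs 25% (4/16).
_, p_response = fisher_exact([[16, 0], [4, 12]])

# Partial gastrectomy: 87.5% (14/16) vs 25% (4/16).
_, p_gastrectomy = fisher_exact([[14, 2], [4, 12]])

print(f"clinical response comparison:   P = {p_response:.3g}")
print(f"partial gastrectomy comparison: P = {p_gastrectomy:.3g}")
```

Under this assumed split, the response comparison gives P on the order of 1e-5 and the gastrectomy comparison roughly P = 0.009; both are significant, consistent with the reported "P = 0.000" being a rounded value (better written as P < 0.001), though the exact figures depend on the true group sizes and the test actually applied.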
5,333
333
[ 366, 138, 201, 157, 61, 117, 81, 227, 243, 162 ]
14
[ "chemotherapy", "patients", "16", "gastric", "differentiated", "tumor", "preoperative", "cancer", "preoperative chemotherapy", "gastric cancer" ]
[ "gastric cancer surgical", "stage gastric cancer", "preoperative chemotherapy neo", "chemotherapy patients gastric", "gastric cancer advanced" ]
[CONTENT] locally advanced gastric cancer | pathological differentiation | preoperative chemotherapy | surgery [SUMMARY]
[CONTENT] locally advanced gastric cancer | pathological differentiation | preoperative chemotherapy | surgery [SUMMARY]
[CONTENT] locally advanced gastric cancer | pathological differentiation | preoperative chemotherapy | surgery [SUMMARY]
[CONTENT] locally advanced gastric cancer | pathological differentiation | preoperative chemotherapy | surgery [SUMMARY]
[CONTENT] locally advanced gastric cancer | pathological differentiation | preoperative chemotherapy | surgery [SUMMARY]
[CONTENT] locally advanced gastric cancer | pathological differentiation | preoperative chemotherapy | surgery [SUMMARY]
[CONTENT] Antineoplastic Combined Chemotherapy Protocols | Capecitabine | Cell Differentiation | Chemotherapy, Adjuvant | Combined Modality Therapy | Deoxycytidine | Female | Fluorouracil | Follow-Up Studies | Gastrectomy | Humans | Male | Middle Aged | Neoplasm Staging | Oxaloacetates | Preoperative Care | Prognosis | Retrospective Studies | Stomach Neoplasms | Tertiary Healthcare [SUMMARY]
[CONTENT] Antineoplastic Combined Chemotherapy Protocols | Capecitabine | Cell Differentiation | Chemotherapy, Adjuvant | Combined Modality Therapy | Deoxycytidine | Female | Fluorouracil | Follow-Up Studies | Gastrectomy | Humans | Male | Middle Aged | Neoplasm Staging | Oxaloacetates | Preoperative Care | Prognosis | Retrospective Studies | Stomach Neoplasms | Tertiary Healthcare [SUMMARY]
[CONTENT] Antineoplastic Combined Chemotherapy Protocols | Capecitabine | Cell Differentiation | Chemotherapy, Adjuvant | Combined Modality Therapy | Deoxycytidine | Female | Fluorouracil | Follow-Up Studies | Gastrectomy | Humans | Male | Middle Aged | Neoplasm Staging | Oxaloacetates | Preoperative Care | Prognosis | Retrospective Studies | Stomach Neoplasms | Tertiary Healthcare [SUMMARY]
[CONTENT] Antineoplastic Combined Chemotherapy Protocols | Capecitabine | Cell Differentiation | Chemotherapy, Adjuvant | Combined Modality Therapy | Deoxycytidine | Female | Fluorouracil | Follow-Up Studies | Gastrectomy | Humans | Male | Middle Aged | Neoplasm Staging | Oxaloacetates | Preoperative Care | Prognosis | Retrospective Studies | Stomach Neoplasms | Tertiary Healthcare [SUMMARY]
[CONTENT] Antineoplastic Combined Chemotherapy Protocols | Capecitabine | Cell Differentiation | Chemotherapy, Adjuvant | Combined Modality Therapy | Deoxycytidine | Female | Fluorouracil | Follow-Up Studies | Gastrectomy | Humans | Male | Middle Aged | Neoplasm Staging | Oxaloacetates | Preoperative Care | Prognosis | Retrospective Studies | Stomach Neoplasms | Tertiary Healthcare [SUMMARY]
[CONTENT] Antineoplastic Combined Chemotherapy Protocols | Capecitabine | Cell Differentiation | Chemotherapy, Adjuvant | Combined Modality Therapy | Deoxycytidine | Female | Fluorouracil | Follow-Up Studies | Gastrectomy | Humans | Male | Middle Aged | Neoplasm Staging | Oxaloacetates | Preoperative Care | Prognosis | Retrospective Studies | Stomach Neoplasms | Tertiary Healthcare [SUMMARY]
[CONTENT] gastric cancer surgical | stage gastric cancer | preoperative chemotherapy neo | chemotherapy patients gastric | gastric cancer advanced [SUMMARY]
[CONTENT] gastric cancer surgical | stage gastric cancer | preoperative chemotherapy neo | chemotherapy patients gastric | gastric cancer advanced [SUMMARY]
[CONTENT] gastric cancer surgical | stage gastric cancer | preoperative chemotherapy neo | chemotherapy patients gastric | gastric cancer advanced [SUMMARY]
[CONTENT] gastric cancer surgical | stage gastric cancer | preoperative chemotherapy neo | chemotherapy patients gastric | gastric cancer advanced [SUMMARY]
[CONTENT] gastric cancer surgical | stage gastric cancer | preoperative chemotherapy neo | chemotherapy patients gastric | gastric cancer advanced [SUMMARY]
[CONTENT] gastric cancer surgical | stage gastric cancer | preoperative chemotherapy neo | chemotherapy patients gastric | gastric cancer advanced [SUMMARY]
[CONTENT] chemotherapy | patients | 16 | gastric | differentiated | tumor | preoperative | cancer | preoperative chemotherapy | gastric cancer [SUMMARY]
[CONTENT] chemotherapy | patients | 16 | gastric | differentiated | tumor | preoperative | cancer | preoperative chemotherapy | gastric cancer [SUMMARY]
[CONTENT] chemotherapy | patients | 16 | gastric | differentiated | tumor | preoperative | cancer | preoperative chemotherapy | gastric cancer [SUMMARY]
[CONTENT] chemotherapy | patients | 16 | gastric | differentiated | tumor | preoperative | cancer | preoperative chemotherapy | gastric cancer [SUMMARY]
[CONTENT] chemotherapy | patients | 16 | gastric | differentiated | tumor | preoperative | cancer | preoperative chemotherapy | gastric cancer [SUMMARY]
[CONTENT] chemotherapy | patients | 16 | gastric | differentiated | tumor | preoperative | cancer | preoperative chemotherapy | gastric cancer [SUMMARY]
[CONTENT] gastric | cancer | gastric cancer | tumor | advanced | locally advanced | locally | locally advanced gastric | locally advanced gastric cancer | chemotherapy [SUMMARY]
[CONTENT] patients | tumors | evaluated | china | chemotherapy | tomography | computed tomography | computed | tumor | union [SUMMARY]
[CONTENT] 16 | differentiated | chemotherapy | differentiated group | group | patients | poorly differentiated | poorly | gastric | better [SUMMARY]
[CONTENT] better differentiated locally | better differentiated locally advanced | differentiated locally advanced gastric | differentiated locally advanced | differentiated locally | gastric | better differentiated | better | differentiated | locally advanced [SUMMARY]
[CONTENT] patients | chemotherapy | 16 | gastric | differentiated | cancer | gastric cancer | preoperative | tumor | preoperative chemotherapy [SUMMARY]
[CONTENT] patients | chemotherapy | 16 | gastric | differentiated | cancer | gastric cancer | preoperative | tumor | preoperative chemotherapy [SUMMARY]
[CONTENT] third | China ||| ||| R0 ||| [SUMMARY]
[CONTENT] 32 | XELOX ||| two ||| [SUMMARY]
[CONTENT] 100% | 25% | 0.000 ||| 87.5% | 25% | 0.000 ||| [SUMMARY]
[CONTENT] XELOX [SUMMARY]
[CONTENT] third | China ||| ||| R0 ||| ||| 32 | XELOX ||| two ||| ||| ||| 100% | 25% | 0.000 ||| 87.5% | 25% | 0.000 ||| ||| XELOX [SUMMARY]
[CONTENT] third | China ||| ||| R0 ||| ||| 32 | XELOX ||| two ||| ||| ||| 100% | 25% | 0.000 ||| 87.5% | 25% | 0.000 ||| ||| XELOX [SUMMARY]
Prevalence and characteristics of advocacy curricula in Australian public health degrees.
35771729
Public health advocacy is a fundamental part of health promotion practice. Advocacy efforts can lead to healthier public policies and positive impacts on society. Public health educators are responsible for equipping graduates with cross-cutting advocacy competencies to address current and future public health challenges.
BACKGROUND
Australian public health Bachelor's and Master's degrees were identified using the CRICOS database. Open-source online unit guides were reviewed to determine where and how advocacy was included within core and elective units (in title, unit description or learning outcomes). Degree directors and convenors of identified units were surveyed to further garner information about advocacy in the curriculum.
METHODOLOGY
Of 65 identified degrees, 17 of 26 (65%) undergraduate degrees and 24 of 39 (62%) postgraduate degrees included advocacy within the core curriculum, while 6 of 26 (23%) undergraduate and 8 of 39 (21%) postgraduate offered no advocacy curriculum.
RESULTS
Australian and international public health competency frameworks indicate advocacy curriculum should be included in all degrees. This research suggests advocacy competencies are not ubiquitous within Australian public health curricula. The findings support the need to advance public health advocacy teaching efforts further.
IMPLICATIONS
[ "Humans", "Public Health", "Prevalence", "Australia", "Curriculum", "Health Promotion" ]
9796077
INTRODUCTION
Public health advocacy is a fundamental part of health promotion practice. 1 , 2 , 3 Its importance is reflected by its inclusion as one of three pillars within the World Health Organization's Ottawa Charter for Health Promotion. 3 The three pillars, “advocate,” “enable” and “mediate,” characterise the fundamental activities and competencies necessary to promote the health of populations. 4 , 5 Advocacy is the active support of a cause. 6 It entails speaking, writing or acting in favour of a particular cause, policy or group of people. 5 Public health advocacy efforts can take on many forms, employing a range of strategies that aim to influence and advance evidence‐based policymaking to improve health and well‐being for individuals and populations. 5 , 6 Advocacy can also raise awareness of social and environmental factors enabling systemic changes in these areas as well as mobilising communities. To achieve these outcomes, advocacy does not employ one consistent approach. 6 Rather, activities and efforts can include a variety of strategies such as negotiation, debate and consensus generation in the pursuit of improved health and well‐being. 6 These advocacy activities can address the social and other determinants of health and lead to the development of healthier public policies and positive impacts on societies. 6 , 7 , 8 The influence of public health advocacy has been demonstrated through several public health successes in Australia, including plain tobacco packaging, mandatory folate fortification within bread products and the ban on commercial tanning beds. 9 Such successes were not made overnight. Advocates including health practitioners, policymakers and peak consumer advocacy bodies were required to coalesce and engage in extended and strategic efforts that ensured evidence was communicated in politically compelling ways, often in the face of fierce opposition from the commercial sector. 6 Successful outcomes are achieved through a variety of strategies, including building the capacity of health professionals and building coalitions, although it has been described that some of the public health community may not feel comfortable or empowered to operate in this space. 5 Importantly, recent global events such as COVID‐19, which has flared vaccine hesitancy, and current issues such as climate change and the influence of major industries, have drawn attention to the critical need to develop an appropriately skilled public health workforce that engages with public health advocacy to meet current and emerging population health challenges. 7 , 10 , 11 The Public Health Association of Australia states that a well‐trained workforce is required, while emphasising the role of universities within this whole‐of‐system approach. 8 In 2016, the peak organisation for public health education throughout Australasia, the Council of Academic Public Health Institutions Australasia (CAPHIA), proffered a foundational set of competencies expected from undergraduate and postgraduate public health students. 12 The competencies recognise the role of public health advocacy in underpinning knowledge of health promotion and disease prevention 12 and are consistent with the inclusion of advocacy in other international public health competency frameworks. 13 , 14 While public health undergraduates and postgraduates are expected to have knowledge and skills in public health advocacy, 12 there is a paucity of literature on how it is taught to these cohorts. 
9 , 11 , 15 , 16 Contributing to this is a dearth of empirical research on the practice of advocacy. 17 , 18 This may be due to the absence of a consensus on the definition of public health advocacy among academics, health practitioners and advocacy groups. 1 , 18 Despite accepted principles, no one consistent approach to advocacy exists. 6 , 18 While the discipline of advocacy enables a wide variety of practices to be implemented that are tailored for specific contexts, 6 this can result in challenges in conducting research due to the varying heterogeneous methodologies. For the current study, the authors defined public health advocacy as educating, organising and mobilising for systems change in population health. 19 While many definitions of public health advocacy exist, the authors selected this broad‐spectrum definition as the foundation of this research to acknowledge the myriad of possible forms advocacy can take. Because of the potential for advocacy to improve population health, previous research has called for a better understanding of advocacy in public health education. 1 , 18 , 20 To determine whether advocacy curricula within Australian public health degrees match what is expected of graduates working in advocacy and health promotion roles once in the workforce, knowledge of advocacy curricula is required. This study aims to determine the scope of public health advocacy education within undergraduate and postgraduate Australian public health degrees and whether advocacy education is delivered as part of the core curriculum, as part of electives, or not at all.
METHODS
Sample To identify a list of Australian public health degrees, the Commonwealth Register of Institutions and Courses for Overseas Students (CRICOS) database was used, applying the 0613 Public Health Field of Education. 21 Bachelor‐ and Master‐level degrees were included that related to the teaching of public/population health and variations of it, including global health, health policy, health communication and health promotion. Bachelor of Science or Bachelor of Health Science degrees where public health, population health or health promotion majors were offered were also included. This approach sought to provide a comprehensive list of recognised Australian institutions providing public health training. Degrees related to health service management were excluded because these programs are typically more focused on organisational governance. Extended or Advanced degrees (eg Master of Public Health [Extension]) were excluded because they usually included the standard version of the same degree, with additional scope for electives. Degrees below an Australian Qualification Framework (AQF) Level 7 (sub‐Bachelor level) were excluded. These were often abridged versions of Bachelor‐level degrees, as were 1‐year public health Honours degrees that are typically research‐focused. For dual degrees, the base degree was counted once.
Data collection Constructive alignment pedagogical theory 22 was used as a guiding theory. Publicly available information for each included degree was manually extracted from the institution's online handbook and website by the researchers between July and August 2021. The nature and extent of advocacy within curricula were assessed, including whether “advoc*” was included in degree learning outcomes (DLOs). Attempts were made to contact the institution directly by two researchers (AB and SL). Where possible, the unit or degree coordinator listed was approached at least twice via email or phone. If such information was not available online, the researchers contacted the School/Department/Faculty or general university contact listed. The AQF definition of an accredited unit was used. 23 This definition states that an accredited unit is a single component of a qualification or stand‐alone unit that has been accredited by the same process as for a whole AQF qualification; an accredited unit may be called a “module,” “subject,” “unit of competency” or “unit.” 23 The authors identified all units offered within the degree (core and elective) that included “advoc*” in the unit title, unit description or unit level learning outcomes (ULOs) and extracted the data into Microsoft Excel. A single unit may be included as a core or elective across multiple degrees for some institutions. For example, a Foundation in Public Health unit may be included in both a Master of Public Health and Master of Global Health. In these cases, the unit was only counted once in the numerator. Similarly, there are cases where multiple units deliver advocacy teaching within the same degree. In these cases, the degree was only counted once in the denominator. Previous research that used online handbook information to collect curriculum data recommended future studies draw from additional sources for more complete data. 16 Considering this, and because some websites included limited details, where “advoc*” was identified in the unit title, description or ULOs, the unit convenor and degree director were invited via email to provide high‐level details about the advocacy component of the relevant unit. This invitation included a Qualtrics link (Qualtrics, Provo, UT), a web‐based survey platform. Additional data were collected from respondents, including the number of advocacy‐specific learning outcomes and assessment details, such as type (eg, case study) and weighting. Unit convenors and degree directors received one reminder email 2 weeks after the initial invitation.
Data analysis Additional data collected via surveys with unit convenors and degree directors were combined with data extracted from institutions' websites. A simple content analysis approach was used on extracted advocacy data to gauge how advocacy curricula were included within Australian tertiary public health education. 16 , 24 , 25 Previous research shows that web‐based content analysis is an appropriate method for auditing curricula content in public health degrees. 16 , 24 Denominators used to calculate proportions excluded degrees or units with missing data. In this broad scoping exercise, a simple content analysis was conducted as follows. The authors looked for all mentions of “advoc*” in all core and elective unit titles, descriptions and learning outcomes. Proportions of degrees with “advoc*” in either a unit title, description or learning outcomes were calculated. Further, given that constructive alignment pedagogical theory would suggest that unit learning outcomes should be aligned to DLOs to maximise opportunities for student learning, 22 the proportion of degrees that included “advoc*” in core unit learning outcomes alone was also calculated.
Ethics The Macquarie University Human Research Ethics Committee approved this research (Ref. No. 520211009730240).
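The screening and counting rules just described (flag a unit when "advoc*" appears in its title, description or ULOs; count each flagged unit once in the numerator even when it is shared across degrees; count each degree once in the denominator even when several of its units qualify) are mechanical enough to sketch in code. The snippet below is an illustration only: the record layout, field names and sample values are hypothetical, not the study's actual dataset.

```python
import re

# Minimal sketch of the screening and counting rules described above.
# The record structure and sample values are hypothetical.
units = [
    {"degree": "Master of Public Health", "unit": "PH501", "core": True,
     "title": "Foundations of Public Health",
     "description": "Introduces health promotion and advocacy.",
     "ulos": ["Apply advocacy strategies to a policy problem."]},
    {"degree": "Master of Public Health", "unit": "PH530", "core": False,
     "title": "Health Economics", "description": "Economic evaluation.",
     "ulos": ["Appraise economic evidence."]},
]

ADVOC = re.compile(r"advoc", re.IGNORECASE)  # stem match for "advoc*"

def mentions_advocacy(unit):
    """True if 'advoc*' appears in the title, description or any ULO."""
    fields = [unit["title"], unit["description"], *unit["ulos"]]
    return any(ADVOC.search(field) for field in fields)

# Each matching unit counts once in the numerator, even if shared across
# degrees; each degree counts once in the denominator, even if several
# of its units match.
matching_units = {u["unit"] for u in units if mentions_advocacy(u)}
all_degrees = {u["degree"] for u in units}
core_degrees = {u["degree"] for u in units
                if u["core"] and mentions_advocacy(u)}

print(f"matching units: {len(matching_units)}")
print(f"degrees with advocacy in a core unit: "
      f"{len(core_degrees)}/{len(all_degrees)}")
```

Using sets for both counts is what enforces the once-per-unit and once-per-degree rule from the methods description.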
RESULTS
Using the CRICOS database, 175 eligible degrees (53 Bachelor's and 122 Master's) were identified. After exclusions, there were 65 degrees included for review from 30 Australian institutions: 26 undergraduate degrees and 39 postgraduate degrees (see Figure 1). Of the 65 degrees analysed, 41 (63%) included advocacy as part of core curricula; 14 (22%) did not include any identified advocacy curricula (Figure 1). DLOs could not be found online for 28 degrees. [Figure 1: Flow diagram of the scope of advocacy training within Australian public health curricula]
Undergraduate There was a total of 26 undergraduate degrees identified from 23 institutions. From these degrees, there were 39 relevant units identified: four from the unit title alone (10%), 21 from the learning outcomes (54%) and 14 from the unit description (36%). Of the 26 degrees, 20 had DLOs publicly available; six degrees did not publicly report DLOs, and additional information could not be obtained despite the researchers' attempts to contact the degree directors or institutions. Of the degrees with available DLOs, 3 out of 20 (15%) included “advoc*” in one DLO. No degree had more than one DLO mentioning “advoc*.” Each of these three degrees delivered advocacy training within core units. Of the 17 degrees which did not include “advoc*” in their DLOs, five did not include any units where “advoc*” appeared in the unit title, description or a learning outcome. Notably, all five degrees were Bachelor of Health Science programs. For 12 degrees, while they did not include “advoc*” in their DLOs, they did include some advocacy training: nine degrees included it as part of the core curriculum, and three degrees included it as part of an elective option. Of the six degrees that did not publicly report DLOs and for which the researchers' attempts to determine this information were unsuccessful, five included advocacy within the core curriculum, and one did not offer any advocacy curriculum. In summary, six of the 26 degrees (23%) had no identified advocacy curriculum, and three (12%) only offered it as part of elective units; that is, just 17 degrees (65%) had advocacy as part of the core curriculum. When only “advoc*” in unit learning outcomes is considered, 16 of 26 degrees (62%) included at least one core unit where “advoc*” was included in a learning outcome.
Postgraduate A total of 39 postgraduate degrees were identified from 30 institutions. From these degrees, there were 55 relevant units identified: seven from the unit title alone (13%), 23 from the learning outcomes (42%), and 25 from the unit description (45%). Of these degrees, 26 had DLOs publicly available or obtained from degree directors; 13 degrees did not publicly report DLOs, and additional information could not be obtained from degree directors or institutions. Of the degrees with available DLOs, 10 out of 26 (38%) included “advoc*” in at least one DLO. Eight of these 10 degrees included advocacy training within at least one core unit; two degrees included advocacy training in electives. Of the 16 degrees without advocacy included in DLOs, nine included advocacy training in core units, two in electives only, and five had no identified advocacy training. For the 13 degrees where DLOs were not publicly reported, three degrees did not include any advocacy training; of the 10 that did, seven included it within core units and three within elective units only. In summary, of the 39 identified postgraduate degrees, 31 included advocacy training (79%), and 24 (62%) delivered it via core units. Eight of the 39 degrees (21%) had no identified advocacy training, and seven (18%) only offered it as part of elective units; that is, just 24 degrees (62%) had advocacy as part of core training. When only “advoc*” in unit learning outcomes is considered, 13 of 39 degrees (50%) included at least one core unit where “advoc*” was included in a learning outcome, excluding the nine degrees where the unit learning outcomes were not available.
Assessments Of the unit convenors and degree directors (N = 66) invited to complete the survey to supplement collected data, 34 responded (52% response rate). The majority of respondents provided additional detail regarding the type and weighting of assessment tasks, which was used to supplement data collected from institutions' websites. Undergraduate Assessment details were obtained for 19 of 39 undergraduate units (49%). Five units did not include an advocacy‐related assessment task; however, one of these units expressly included advocacy as part of one learning outcome. For the 19 units where assessment details were obtained, assessment of advocacy was very heterogeneous: 4 of 19 (21%) were group assessments, 8 of 19 (42%) were short- or long-answer written assessments, and the rest were a mix of case studies, an advocacy letter, online discussion or “other”; assessment weighting ranged from 10% to 50%. Postgraduate Assessment details were obtained for 22 of 55 postgraduate units (40%). Nine units did not include an advocacy‐related assessment task; however, three of these units included advocacy as part of at least one learning outcome. For the remaining 13 units, all were individual tasks: five asked students to create an advocacy campaign/strategy, two were presentation tasks, and the others were a mix of case studies, reflection, online discussion and an advocacy letter. Assessment weighting varied between 5% and 50%.
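The headline percentages in the undergraduate and postgraduate summaries follow directly from the reported counts; the short sketch below reproduces them from the figures quoted in the text (rounded to whole percentages).

```python
# Reproduce the headline proportions from the counts reported above.
cohorts = {
    "undergraduate": {"total": 26, "core": 17, "elective_only": 3, "none": 6},
    "postgraduate":  {"total": 39, "core": 24, "elective_only": 7, "none": 8},
}

for name, c in cohorts.items():
    for category in ("core", "elective_only", "none"):
        share = 100 * c[category] / c["total"]
        print(f"{name:13s} {category:13s} {c[category]:2d}/{c['total']} "
              f"({share:.0f}%)")
```

Running this yields 65%, 12% and 23% for the undergraduate cohort and 62%, 18% and 21% for the postgraduate cohort, matching the proportions stated in the results.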
CONCLUSION
This research provides an initial overview of how public health advocacy is included within Australian tertiary public health degrees. The researchers conclude that public health advocacy is not delivered consistently across degrees, and some students may miss out entirely. There remain opportunities to optimise advocacy curricula. These findings highlight the need for Australian universities offering public health degrees to review and enhance their public health advocacy education offerings at undergraduate and postgraduate levels. A focus on ensuring advocacy is included within core units will allow students to graduate with the foundational advocacy competencies set out by CAPHIA. 12 In addition, it is important to ensure that, where advocacy is taught and assessed, this is explicit within the unit description and ULOs so that there is precise alignment with the foundational public health competencies and it is clear to students what they are learning. Long‐term university commitments to provide relevant advocacy training will contribute to a public health workforce adequately equipped to deal with emerging health issues such as climate change, public health emergencies and other issues that bear on equity. It is envisaged these findings will be particularly informative for degree directors, educators, and advocacy leaders seeking to augment public health advocacy training in the university sector. It is also envisaged that collaborative efforts at discipline‐specific conferences, such as CAPHIA learning and teaching forums, could further facilitate work in this area. Future research in this area should include determining the types of advocacy skills required for real‐world advocacy practice, whether these skills are authentically assessed within public health training, and best-practice methods for training public health students in advocacy. The next steps for this research project include qualitative methods to assess pedagogical approaches to teaching advocacy, via interviews with advocacy stakeholders, to ensure teaching is optimally aligned to industry needs.
[ "INTRODUCTION", "Sample", "Data collection", "Data analysis", "Ethics", "Undergraduate", "Postgraduate", "Assessments", "Undergraduate", "Postgraduate", "LIMITATIONS", "FUNDING INFORMATION" ]
[ "Public health advocacy is a fundamental part of health promotion practice.\n1\n, \n2\n, \n3\n Its importance is reflected by its inclusion as one of three pillars within the World Health Organization's Ottawa Charter for Health Promotion.\n3\n The three pillars, “advocate,” “enable” and “mediate,” characterise the fundamental activities and competencies necessary to promote the health of populations.\n4\n, \n5\n\n\nAdvocacy is the active support of a cause.\n6\n It entails speaking, writing or acting in favour of a particular cause, policy or group of people.\n5\n Public health advocacy efforts can take on many forms, employing a range of strategies that aim to influence and advance evidence‐based policymaking to improve health and well‐being for individuals and populations.\n5\n, \n6\n Advocacy can also raise awareness of social and environmental factors enabling systemic changes in these areas as well as mobilising communities. To achieve these outcomes, advocacy does not employ one consistent approach.\n6\n Rather, activities and efforts can include a variety of strategies such as negotiation, debate and consensus generation in the pursuit of improved health and well‐being.\n6\n These advocacy activities can address the social and other determinants of health and lead to the development of healthier public policies and positive impacts on societies.\n6\n, \n7\n, \n8\n\n\nThe influence of public health advocacy has been demonstrated through several public health successes in Australia, including plain tobacco packaging, mandatory folate fortification within bread products and the ban on commercial tanning beds.\n9\n Such successes were not made overnight. Advocates including health practitioners, policymakers and peak consumer advocacy bodies were required to coalesce and engage in extended and strategic efforts that ensured evidence was communicated in politically compelling ways, often in the face of fierce opposition from the commercial sector.\n6\n\n\nSuccessful outcomes are achieved through a variety of strategies, including building the capacity of health professionals and building coalitions, although it has been described that some of the public health community may not feel comfortable or empowered to operate in this space.\n5\n\n\nImportantly, recent global events such as COVID‐19, which has flared vaccine hesitancy, and current issues such as climate change and the influence of major industries, have drawn attention to the critical need to develop an appropriately skilled public health workforce that engages with public health advocacy to meet current and emerging population health challenges.\n7\n, \n10\n, \n11\n The Public Health Association of Australia states that a well‐trained workforce is required, while emphasising the role of universities within this whole‐of‐system approach.\n8\n\n\nIn 2016, the peak organisation for public health education throughout Australasia, the Council of Academic Public Health Institutions Australasia (CAPHIA), proffered a foundational set of competencies expected from undergraduate and postgraduate public health students.\n12\n The competencies recognise the role of public health advocacy in underpinning knowledge of health promotion and disease prevention\n12\n and are consistent with the inclusion of advocacy in other international public health competency frameworks.\n13\n, \n14\n While public health undergraduates and postgraduates are expected to have knowledge and skills in public health advocacy,\n12\n there is a paucity of 
literature on how it is taught to these cohorts.\n9\n, \n11\n, \n15\n, \n16\n Contributing to this is a dearth of empirical research on the practice of advocacy.\n17\n, \n18\n This may be due to the absence of a consensus on the definition of public health advocacy among academics, health practitioners and advocacy groups.\n1\n, \n18\n Despite accepted principles, no one consistent approach to advocacy exists.\n6\n, \n18\n While the discipline of advocacy enables a wide variety of practices to be implemented that are tailored for specific contexts,\n6\n this can result in challenges in conducting research due to the varying heterogeneous methodologies. For the current study, the authors defined public health advocacy as educating, organising and mobilising for systems change in population health.\n19\n While many definitions of public health advocacy exist, the authors selected this broad‐spectrum definition as the foundation of this research to acknowledge the myriad of possible forms advocacy can take.\nBecause of the potential for advocacy to improve population health, previous research has called for a better understanding of advocacy in public health education.\n1\n, \n18\n, \n20\n To determine whether advocacy curricula within Australian public health degrees match what is expected of graduates working in advocacy and health promotion roles once in the workforce, knowledge of advocacy curricula is required.\nThis study aims to determine the scope of public health advocacy education within undergraduate and postgraduate Australian public health degrees and whether advocacy education is delivered as part of the core curriculum, as part of electives, or not at all.", "To identify a list of Australian public health degrees, the Commonwealth Register of Institutions and Courses for Overseas Students (CRICOS) database was used, applying the 0613 Public Health Field of Education.\n21\n Bachelor‐ and Master‐level degrees were included that related to the teaching of public/population health and variations of it, including global health, health policy, health communication and health promotion. Bachelor of Science or Bachelor of Health Science degrees where public health, population health or health promotion majors were offered were also included. This approach sought to provide a comprehensive list of recognised Australian institutions providing public health training.\nDegrees related to health service management were excluded because these programs are typically more focused on organisational governance. Extended or Advanced degrees (eg Master of Public Health [Extension]) were excluded because they usually included the standard version of the same degree, with additional scope for electives. Degrees below an Australian Qualification Framework (AQF) Level 7 (sub‐Bachelor level) were excluded. These were often abridged versions of Bachelor‐level degrees, as were 1‐year public health Honours degrees that are typically research‐focused. For dual degrees, the base degree was counted once.", "Constructive alignment pedagogical theory\n22\n was used as a guiding theory. Publicly available information for each included degree was manually extracted from the institution's online handbook and website by the researchers between July and August 2021. The nature and extent of advocacy within curricula were assessed, including whether “advoc*” was included in degree learning outcomes (DLOs). Attempts were made to contact the institution directly by two researchers (AB and SL).
Where possible, the unit or degree coordinator listed was approached at least twice via email or phone. If such information was not available online, the researchers contacted the School/Department/Faculty or general university contact listed.\nThe AQF definition of an accredited unit was used.\n23\n This definition states that an accredited unit is a single component of a qualification or stand‐alone unit that has been accredited by the same process as for a whole AQF qualification; an accredited unit may be called a “module,” “subject,” “unit of competency” or “unit.”\n23\n\n\nThe authors identified all units offered within the degree (core and elective) that included “advoc*” in the unit title, unit description or unit level learning outcomes (ULOs) and extracted the data into Microsoft Excel. A single unit may be included as a core or elective across multiple degrees for some institutions. For example, a Foundation in Public Health unit may be included in both a Master of Public Health and Master of Global Health. In these cases, the unit was only counted once in the numerator. Similarly, there are cases where multiple units deliver advocacy teaching within the same degree. In these cases, the degree was only counted once in the denominator.\nPrevious research that used online handbook information to collect curriculum data recommended future studies draw from additional sources for more complete data.\n16\n Considering this, and because some websites included limited details, where “advoc*” was identified in the unit title, description or ULOs, the unit convenor and degree director were invited via email to provide high‐level details about the advocacy component of the relevant unit. This invitation included a Qualtrics link (Qualtrics, Provo, UT), a web‐based survey platform. Additional data were collected from respondents, including the number of advocacy‐specific learning outcomes and assessment details, such as type (eg, case study) and weighting. Unit convenors and degree directors received one reminder email 2 weeks after the initial invitation.", "Additional data collected via surveys with unit convenors and degree directors were combined with data extracted from institutions' websites. A simple content analysis approach was used on extracted advocacy data to gauge how advocacy curricula were included within Australian tertiary public health education.\n16\n, \n24\n, \n25\n Previous research shows that web‐based content analysis is an appropriate method for auditing curricula content in public health degrees.\n16\n, \n24\n Denominators used to calculate proportions excluded degrees or units with missing data.\nIn this broad scoping exercise, a simple content analysis was conducted as follows. The authors looked for all mentions of “advoc*” in all core and elective unit titles, descriptions and learning outcomes. Proportions of degrees with “advoc*” in either a unit title, description or learning outcomes were calculated. Further, given that constructive alignment pedagogical theory would suggest that unit learning outcomes should be aligned to DLOs to maximise opportunities for student learning,\n22\n the proportion of degrees that included “advoc*” in core unit learning outcomes alone was also calculated.", "The Macquarie University Human Research Ethics Committee approved this research (Ref. No. 520211009730240).", "There was a total of 26 undergraduate degrees identified from 23 institutions. 
From these degrees, there were 39 relevant units identified: four from the unit title alone (10%), 21 from the learning outcomes (54%) and 14 from the unit description (36%).\nOf the 26 degrees, 20 had DLOs publicly available; six degrees did not publicly report DLOs, and additional information could not be obtained despite the researchers' attempts to contact the degree directors or institutions. Of the degrees with available DLOs, 3 out of 20 (15%) included “advoc*” in one DLO. No degree had more than one DLO mentioning “advoc*.” Each of these three degrees delivered advocacy training within core units. Of the 17 degrees which did not include “advoc*” in their DLOs, five did not include any units where “advoc*” appeared in the unit title, description or a learning outcome. Notably, all five degrees were Bachelor of Health Science programs. For 12 degrees, while they did not include “advoc*” in their DLOs, they did include some advocacy training: nine degrees included it as part of the core curriculum, and three degrees included it as part of an elective option. For the six degrees that did not publicly report DLOs and for which the researchers' attempts to determine this information were unsuccessful, five included advocacy within the core curriculum, and one did not offer any advocacy curriculum.\nIn summary, six of the 26 degrees (23%) had no identified advocacy curriculum, and three (12%) only offered it as part of elective units; that is, just 17 degrees (65%) had advocacy as part of the core curriculum. When only “advoc*” in unit learning outcomes is considered, 16 of 26 degrees (62%) included at least one core unit where “advoc*” was included in a learning outcome.", "A total of 39 postgraduate degrees were identified from 30 institutions. From these degrees, there were 55 relevant units identified: seven from the unit title alone (13%), 23 from the learning outcomes (42%), and 25 from the unit description (45%).\nOf these degrees, 26 had DLOs publicly available or obtained from degree directors; 13 degrees did not publicly report DLOs, and additional information could not be obtained from degree directors or institutions. Of the degrees with available DLOs, 10 out of 26 (38%) included “advoc*” in at least one DLO. Eight of these 10 degrees included advocacy training within at least one core unit; two degrees included advocacy training in electives. Of the 16 degrees without advocacy included in DLOs, nine included advocacy training in core units, two in electives only, and five had no identified advocacy training. For the 13 degrees where DLOs were not publicly reported, three degrees did not include any advocacy training. For the 10 degrees that did include advocacy training, seven included it within core units and three within elective units only. In summary, of the 39 identified postgraduate degrees, 31 included advocacy training (79%). Of these, 24 (62%) were delivered via core units.\nIn summary, eight of the 39 degrees (21%) had no identified advocacy training, and seven (18%) only offered it as part of elective units; that is, just 24 degrees (62%) had advocacy as part of core training.
When only “advoc*” in unit learning outcomes is considered, 13 of 39 degrees (50%) included at least one core unit where “advoc*” was included in a learning outcome, excluding the nine degrees where the unit learning outcomes were not available.", "Of the unit convenors and degree directors (N = 66) invited to complete the survey to supplement collected data, 34 responded (52% response rate). The majority of respondents provided additional detail regarding the type and weighting of assessment tasks, which was used to supplement data collected from institutions' websites.\nUndergraduate Assessment details were obtained for 19 of 39 undergraduate units (49%). Five units did not include an advocacy‐related assessment task; however, one of these units expressly included advocacy as part of one learning outcome. For the 19 units where assessment details were obtained, assessment of advocacy was very heterogeneous: 4 of 19 (21%) were group assessments, 8 of 19 (42%) were short- or long-answer written assessments, and the rest were a mix of case studies, an advocacy letter, online discussion or “other”; assessment weighting ranged from 10% to 50%.\nPostgraduate Assessment details were obtained for 22 of 55 postgraduate units (40%). Nine units did not include an advocacy‐related assessment task; however, three of these units included advocacy as part of at least one learning outcome. For the remaining 13 units, all were individual tasks: five asked students to create an advocacy campaign/strategy, two were presentation tasks, and the others were a mix of case studies, reflection, online discussion and an advocacy letter. Assessment weighting varied between 5% and 50%.", "Assessment details were obtained for 19 of 39 undergraduate units (49%). Five units did not include an advocacy‐related assessment task; however, one of these units expressly included advocacy as part of one learning outcome. For the 19 units where assessment details were obtained, assessment of advocacy was very heterogeneous: 4 of 19 (21%) were group assessments, 8 of 19 (42%) were short- or long-answer written assessments, and the rest were a mix of case studies, an advocacy letter, online discussion or “other”; assessment weighting ranged from 10% to 50%.", "Assessment details were obtained for 22 of 55 postgraduate units (40%). Nine units did not include an advocacy‐related assessment task; however, three of these units included advocacy as part of at least one learning outcome. For the remaining 13 units, all were individual tasks: five asked students to create an advocacy campaign/strategy, two were presentation tasks, and the others were a mix of case studies, reflection, online discussion and an advocacy letter. Assessment weighting varied between 5% and 50%.", "Several limitations are noted in this study. Firstly, a conservative approach was taken to reviewing how advocacy was included within curricula with narrow search terms (“advoc*”). Additional search terms such as “community development/engagement”, “change” or “health communication” may have yielded more results. This method may have missed synonyms or concepts included in advocacy. A review of the unit learning outcomes of all health promotion, health policy and health communication units offered was carried out, but this did not yield additional data. Thus, there may have been some missed instances where advocacy skills are integrated into teaching but not explicitly mentioned and therefore not captured by this study. There might have also been some missed degrees if they were not registered on the CRICOS database. Future analysis could include a more comprehensive list of search terms or thematic identification.\nA second limitation is that it was difficult to ascertain the exact extent to which advocacy was included within a unit. It was out of scope for the review team to audit the entire unit curricula, and it is possible that unit content was not fully aligned with the ULOs, unit title or description. Therefore, the audit did not investigate how much advocacy curricula are included or how well this is taught.\nA third limitation is the currency of the information reviewed. This audit was completed between July and September 2021. Institutions regularly make amendments to the content offered within degrees, and advocacy offerings may have changed in some instances. Attempts to mitigate accuracy limitations were made by obtaining data from multiple sources, including online handbooks, institutions' websites, and surveys of unit convenors and degree directors.\n16\n Additionally, the audit was limited by the data made publicly available by each institution. While some institutions provided extensive and detailed information on their websites regarding degree and unit learning outcomes and assessment details, this information was wholly absent in the public domain for other institutions.\nIt is not clear what the combined effect of these limitations is. The prevalences estimated here in this broad scoping review may be under- or overestimated.\nFinally, the researchers recognise their interest and involvement in teaching advocacy within their institution's public health degrees and that, combined with the focus of this study, a desirability bias may be introduced.", "The authors received no financial support for this research." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Sample", "Data collection", "Data analysis", "Ethics", "RESULTS", "Undergraduate", "Postgraduate", "Assessments", "Undergraduate", "Postgraduate", "DISCUSSION", "LIMITATIONS", "CONCLUSION", "FUNDING INFORMATION", "CONFLICT OF INTERESTS" ]
[ "Public health advocacy is a fundamental part of health promotion practice.\n1\n, \n2\n, \n3\n Its importance is reflected by its inclusion as one of three pillars within the World Health Organization's Ottawa Charter for Health Promotion.\n3\n The three pillars, “advocate,” “enable” and “mediate,” characterise the fundamental activities and competencies necessary to promote the health of populations.\n4\n, \n5\n\n\nAdvocacy is the active support of a cause.\n6\n It entails speaking, writing or acting in favour of a particular cause, policy or group of people.\n5\n Public health advocacy efforts can take on many forms, employing a range of strategies that aim to influence and advance evidence‐based policymaking to improve health and well‐being for individuals and populations.\n5\n, \n6\n Advocacy can also raise awareness of social and environmental factors enabling systemic changes in these areas as well as mobilising communities. To achieve these outcomes, advocacy does not employ one consistent approach.\n6\n Rather, activities and efforts can include a variety of strategies such as negotiation, debate and consensus generation in the pursuit of improved health and well‐being.\n6\n These advocacy activities can address the social and other determinants of health and lead to the development of healthier public policies and positive impacts on societies.\n6\n, \n7\n, \n8\n\n\nThe influence of public health advocacy has been demonstrated through several public health successes in Australia, including plain tobacco packaging, mandatory folate fortification within bread products and the ban on commercial tanning beds.\n9\n Such successes were not made overnight. Advocates including health practitioners, policymakers and peak consumer advocacy bodies were required to coalesce and engage in extended and strategic efforts that ensured evidence was communicated in politically compelling ways, often in the face of fierce opposition from the commercial sector.\n6\n\n\nSuccessful outcomes are achieved through a variety of strategies, including building the capacity of health professionals and building coalitions, although it has been described that some of the public health community may not feel comfortable or empowered to operate in this space.\n5\n\n\nImportantly, recent global events such as COVID‐19, which has flared vaccine hesitancy, and current issues such as climate change and the influence of major industries, have drawn attention to the critical need to develop an appropriately skilled public health workforce that engages with public health advocacy to meet current and emerging population health challenges.\n7\n, \n10\n, \n11\n The Public Health Association of Australia states that a well‐trained workforce is required, while emphasising the role of universities within this whole‐of‐system approach.\n8\n\n\nIn 2016, the peak organisation for public health education throughout Australasia, the Council of Academic Public Health Institutions Australasia (CAPHIA), proffered a foundational set of competencies expected from undergraduate and postgraduate public health students.\n12\n The competencies recognise the role of public health advocacy in underpinning knowledge of health promotion and disease prevention\n12\n and are consistent with the inclusion of advocacy in other international public health competency frameworks.\n13\n, \n14\n While public health undergraduates and postgraduates are expected to have knowledge and skills in public health advocacy,\n12\n there is a paucity of 
literature on how it is taught to these cohorts.\n9\n, \n11\n, \n15\n, \n16\n Contributing to this is a dearth of empirical research on the practice of advocacy.\n17\n, \n18\n This may be due to the absence of a consensus on the definition of public health advocacy among academics, health practitioners and advocacy groups.\n1\n, \n18\n Despite accepted principals, no one consistent approach to advocacy exists.\n6\n, \n18\n While the discipline of advocacy enables a wide variety of practices to be implemented that are tailored for specific contexts,\n6\n this can result in challenges in conducting research due to the varying heterogeneous methodologies. For the current study, the authors defined public health advocacy as educating, organising and mobilising for systems change in population health.\n19\n While many definitions of public health advocacy exist, the authors selected this broad‐spectrum definition as the foundation of this research to acknowledge the myriad of possible forms advocacy can take.\nBecause of the potential for advocacy to improve population health, previous research has called for a better understanding of advocacy in public health education.\n1\n, \n18\n, \n20\n To determine whether advocacy curricula within Australian public health degrees match what is expected of graduates working in advocacy and health promotion roles once in the workforce, knowledge of advocacy curricula is required.\nThis study aims to determine the scope of public health advocacy education within undergraduate and postgraduate Australian public health degrees and whether advocacy education is delivered as part of the core curriculum, as part of electives, or not at all.", "Sample To identify a list of Australian public health degrees, the Commonwealth Register of Institutions and Courses for Overseas Students (CRICOS) database was used, applying the 0613 Public Health Field of Education.\n21\n Bachelor‐ and Master‐level degrees were included that related to the teaching of public/population health and variations of it, including global health, health policy, health communication and health promotion. Bachelor of Science or Bachelor of Health Science degrees where public health, population health or health promotion majors were offered were also included. This approach sought to provide a comprehensive list of recognised Australian institutions providing public health training.\nDegrees related to health service management were excluded because these programs are typically more focused on organisational governance. Extended or Advanced degrees (eg Master of Public Health [Extension]) were excluded because they usually included the standard version of the same degree, with additional scope for electives. Degrees below an Australian Qualification Framework (AQF) Level 7 (sub‐Bachelor level) were excluded. These were often abridged versions of Bachelor‐level degrees, as were 1‐year public health Honours degrees that are typically research‐focused. For dual degrees, the base degree was counted once.\nTo identify a list of Australian public health degrees, the Commonwealth Register of Institutions and Courses for Overseas Students (CRICOS) database was used, applying the 0613 Public Health Field of Education.\n21\n Bachelor‐ and Master‐level degrees were included that related to the teaching of public/population health and variations of it, including global health, health policy, health communication and health promotion. 
Bachelor of Science or Bachelor of Health Science degrees where public health, population health or health promotion majors were offered were also included. This approach sought to provide a comprehensive list of recognised Australian institutions providing public health training.\nDegrees related to health service management were excluded because these programs are typically more focused on organisational governance. Extended or Advanced degrees (eg Master of Public Health [Extension]) were excluded because they usually included the standard version of the same degree, with additional scope for electives. Degrees below an Australian Qualification Framework (AQF) Level 7 (sub‐Bachelor level) were excluded. These were often abridged versions of Bachelor‐level degrees, as were 1‐year public health Honours degrees that are typically research‐focused. For dual degrees, the base degree was counted once.\nData collection Constructive alignment pedagogical theory\n22\n was used as a guiding theory. Publicly available information for each included degree was manually extracted from the institution's online handbook and website by the researchers between July and August 2021. The nature and extent of advocacy within curricula were assessed, including whether “advoc*” was included in degree learning outcomes (DLOs). Attempts were made to contact the institution directly by two researchers (AB and SL). Where possible, the unit or degree coordinator listed was approached at least twice via email or phone. If such information was not available online, the researchers contacted the School/Department/Faculty or general university contact listed.\nThe AQF definition of an accredited unit was used.\n23\n This definition states that an accredited unit is a single component of a qualification or stand‐alone unit that has been accredited by the same process as for a whole AQF qualification; an accredited unit may be called a “module,” “subject,” “unit of competency” or “unit.”\n23\n\n\nThe authors identified all units offered within the degree (core and elective) that included “advoc*” in the unit title, unit description or unit level learning outcomes (ULOs) and extracted the data into Microsoft Excel. A single unit may be included as a core or elective across multiple degrees for some institutions. For example, a Foundation in Public Health unit may be included in both a Master of Public Health and Master of Global Health. In these cases, the unit was only counted once in the numerator. Similarly, there are cases where multiple units deliver advocacy teaching within the same degree. In these cases, the degree was only counted once in the denominator.\nPrevious research that used online handbook information to collect curriculum data recommended future studies draw from additional sources for more complete data.\n16\n Considering this, and because some websites included limited details, where “advoc*” was identified in the unit title, description or ULOs, the unit convenor and degree director were invited via email to provide high‐level details about the advocacy component of the relevant unit. This invitation included a Qualtrics link (Qualtrics, Provo, UT), a web‐based survey platform. Additional data were collected from respondents, including the number of advocacy‐specific learning outcomes and assessment details, such as type (eg, case study) and weighting. 
Unit convenors and degree directors received one reminder email 2 weeks after the initial invitation.\nConstructive alignment pedagogical theory\n22\n was used as a guiding theory. Publicly available information for each included degree was manually extracted from the institution's online handbook and website by the researchers between July and August 2021. The nature and extent of advocacy within curricula were assessed, including whether “advoc*” was included in degree learning outcomes (DLOs). Attempts were made to contact the institution directly by two researchers (AB and SL). Where possible, the unit or degree coordinator listed was approached at least twice via email or phone. If such information was not available online, the researchers contacted the School/Department/Faculty or general university contact listed.\nThe AQF definition of an accredited unit was used.\n23\n This definition states that an accredited unit is a single component of a qualification or stand‐alone unit that has been accredited by the same process as for a whole AQF qualification; an accredited unit may be called a “module,” “subject,” “unit of competency” or “unit.”\n23\n\n\nThe authors identified all units offered within the degree (core and elective) that included “advoc*” in the unit title, unit description or unit level learning outcomes (ULOs) and extracted the data into Microsoft Excel. A single unit may be included as a core or elective across multiple degrees for some institutions. For example, a Foundation in Public Health unit may be included in both a Master of Public Health and Master of Global Health. In these cases, the unit was only counted once in the numerator. Similarly, there are cases where multiple units deliver advocacy teaching within the same degree. In these cases, the degree was only counted once in the denominator.\nPrevious research that used online handbook information to collect curriculum data recommended future studies draw from additional sources for more complete data.\n16\n Considering this, and because some websites included limited details, where “advoc*” was identified in the unit title, description or ULOs, the unit convenor and degree director were invited via email to provide high‐level details about the advocacy component of the relevant unit. This invitation included a Qualtrics link (Qualtrics, Provo, UT), a web‐based survey platform. Additional data were collected from respondents, including the number of advocacy‐specific learning outcomes and assessment details, such as type (eg, case study) and weighting. Unit convenors and degree directors received one reminder email 2 weeks after the initial invitation.\nData analysis Additional data collected via surveys with unit convenors and degree directors were combined with data extracted from institutions' websites. A simple content analysis approach was used on extracted advocacy data to gauge how advocacy curricula were included within Australian tertiary public health education.\n16\n, \n24\n, \n25\n Previous research shows that web‐based content analysis is an appropriate method for auditing curricula content in public health degrees.\n16\n, \n24\n Denominators used to calculate proportions excluded degrees or units with missing data.\nIn this broad scoping exercise, a simple content analysis was conducted as follows. The authors looked for all mentions of “advoc*” in all core and elective unit titles, descriptions and learning outcomes. 
Proportions of degrees with “advoc*” in either a unit title, description or learning outcomes were calculated. Further, given that constructive alignment pedagogical theory would suggest that unit learning outcomes should be aligned to DLOs to maximise opportunities for student learning,\n22\n the proportion of degrees that included “advoc*” in core unit learning outcomes alone was also calculated.\nAdditional data collected via surveys with unit convenors and degree directors were combined with data extracted from institutions' websites. A simple content analysis approach was used on extracted advocacy data to gauge how advocacy curricula were included within Australian tertiary public health education.\n16\n, \n24\n, \n25\n Previous research shows that web‐based content analysis is an appropriate method for auditing curricula content in public health degrees.\n16\n, \n24\n Denominators used to calculate proportions excluded degrees or units with missing data.\nIn this broad scoping exercise, a simple content analysis was conducted as follows. The authors looked for all mentions of “advoc*” in all core and elective unit titles, descriptions and learning outcomes. Proportions of degrees with “advoc*” in either a unit title, description or learning outcomes were calculated. Further, given that constructive alignment pedagogical theory would suggest that unit learning outcomes should be aligned to DLOs to maximise opportunities for student learning,\n22\n the proportion of degrees that included “advoc*” in core unit learning outcomes alone was also calculated.\nEthics The Macquarie University Human Research Ethics Committee approved this research (Ref. No. 520211009730240).\nThe Macquarie University Human Research Ethics Committee approved this research (Ref. No. 520211009730240).", "To identify a list of Australian public health degrees, the Commonwealth Register of Institutions and Courses for Overseas Students (CRICOS) database was used, applying the 0613 Public Health Field of Education.\n21\n Bachelor‐ and Master‐level degrees were included that related to the teaching of public/population health and variations of it, including global health, health policy, health communication and health promotion. Bachelor of Science or Bachelor of Health Science degrees where public health, population health or health promotion majors were offered were also included. This approach sought to provide a comprehensive list of recognised Australian institutions providing public health training.\nDegrees related to health service management were excluded because these programs are typically more focused on organisational governance. Extended or Advanced degrees (eg Master of Public Health [Extension]) were excluded because they usually included the standard version of the same degree, with additional scope for electives. Degrees below an Australian Qualification Framework (AQF) Level 7 (sub‐Bachelor level) were excluded. These were often abridged versions of Bachelor‐level degrees, as were 1‐year public health Honours degrees that are typically research‐focused. For dual degrees, the base degree was counted once.", "Constructive alignment pedagogical theory\n22\n was used as a guiding theory. Publicly available information for each included degree was manually extracted from the institution's online handbook and website by the researchers between July and August 2021. 
The nature and extent of advocacy within curricula were assessed, including whether “advoc*” was included in degree learning outcomes (DLOs). Attempts were made to contact the institution directly by two researchers (AB and SL). Where possible, the unit or degree coordinator listed was approached at least twice via email or phone. If such information was not available online, the researchers contacted the School/Department/Faculty or general university contact listed.\nThe AQF definition of an accredited unit was used.\n23\n This definition states that an accredited unit is a single component of a qualification or stand‐alone unit that has been accredited by the same process as for a whole AQF qualification; an accredited unit may be called a “module,” “subject,” “unit of competency” or “unit.”\n23\n\n\nThe authors identified all units offered within the degree (core and elective) that included “advoc*” in the unit title, unit description or unit level learning outcomes (ULOs) and extracted the data into Microsoft Excel. A single unit may be included as a core or elective across multiple degrees for some institutions. For example, a Foundation in Public Health unit may be included in both a Master of Public Health and Master of Global Health. In these cases, the unit was only counted once in the numerator. Similarly, there are cases where multiple units deliver advocacy teaching within the same degree. In these cases, the degree was only counted once in the denominator.\nPrevious research that used online handbook information to collect curriculum data recommended future studies draw from additional sources for more complete data.\n16\n Considering this, and because some websites included limited details, where “advoc*” was identified in the unit title, description or ULOs, the unit convenor and degree director were invited via email to provide high‐level details about the advocacy component of the relevant unit. This invitation included a Qualtrics link (Qualtrics, Provo, UT), a web‐based survey platform. Additional data were collected from respondents, including the number of advocacy‐specific learning outcomes and assessment details, such as type (eg, case study) and weighting. Unit convenors and degree directors received one reminder email 2 weeks after the initial invitation.", "Additional data collected via surveys with unit convenors and degree directors were combined with data extracted from institutions' websites. A simple content analysis approach was used on extracted advocacy data to gauge how advocacy curricula were included within Australian tertiary public health education.\n16\n, \n24\n, \n25\n Previous research shows that web‐based content analysis is an appropriate method for auditing curricula content in public health degrees.\n16\n, \n24\n Denominators used to calculate proportions excluded degrees or units with missing data.\nIn this broad scoping exercise, a simple content analysis was conducted as follows. The authors looked for all mentions of “advoc*” in all core and elective unit titles, descriptions and learning outcomes. Proportions of degrees with “advoc*” in either a unit title, description or learning outcomes were calculated. 
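To make the unit identification and counting rules above concrete, here is a minimal Python sketch of the “advoc*” matching and the once‑per‑numerator/once‑per‑denominator logic. It is illustrative only: the unit records, field names and degree names are hypothetical assumptions, and the study's actual extraction was performed manually into Microsoft Excel.

```python
import re

# Matches the "advoc*" stem used in the audit (advocacy, advocate, ...).
ADVOC = re.compile(r"advoc", re.IGNORECASE)

# Hypothetical unit records; the field names are illustrative assumptions.
units = [
    {"code": "PUBH101", "core": True,
     "degrees": ["Master of Public Health", "Master of Global Health"],
     "title": "Foundations of Health Advocacy", "description": "...",
     "ulos": ["Design an advocacy strategy for a public health issue"]},
    {"code": "HLTH210", "core": False,
     "degrees": ["Bachelor of Health Science"],
     "title": "Health Promotion", "description": "Covers advocacy methods.",
     "ulos": ["Explain models of health promotion"]},
]

def mentions_advoc(unit):
    """True if 'advoc*' appears in the unit title, description or ULOs."""
    fields = [unit["title"], unit["description"], *unit["ulos"]]
    return any(ADVOC.search(field) for field in fields)

# A unit shared across degrees is counted once (numerator); a degree with
# several advocacy units is likewise counted once (denominator side).
relevant_units = {u["code"] for u in units if mentions_advoc(u)}
degrees_with_advocacy = {d for u in units if mentions_advoc(u)
                         for d in u["degrees"]}
print(len(relevant_units), sorted(degrees_with_advocacy))
```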
"Using the CRICOS database, 175 eligible degrees (53 Bachelor's and 122 Master's) were identified. After exclusions, 65 degrees from 30 Australian institutions were included for review: 26 undergraduate degrees and 39 postgraduate degrees (see Figure 1). Of the 65 degrees analysed, 41 (63%) included advocacy as part of core curricula; 14 (22%) did not include any identified advocacy curricula (Figure 1). DLOs could not be found online for 28 degrees.\n[Figure 1: Flow diagram of the scope of advocacy training within Australian public health curricula.]\nUndergraduate: A total of 26 undergraduate degrees were identified from 23 institutions. From these degrees, 39 relevant units were identified: four from the unit title alone (10%), 21 from the learning outcomes (54%) and 14 from the unit description (36%).\nOf the 26 degrees, 20 had DLOs publicly available; six degrees did not publicly report DLOs, and additional information could not be obtained despite researchers' attempts to contact degree directors and institutions. Of the degrees with available DLOs, 3 out of 20 (15%) included “advoc*” in one DLO; no degree had more than one DLO mentioning “advoc*.” Each of these three degrees delivered advocacy training within core units. Of the 17 degrees which did not include “advoc*” in their DLOs, five did not include any units where “advoc*” appeared as a learning outcome or in the unit title or description; notably, all five were Bachelor of Health Science programs. The other 12 degrees did not include “advoc*” in their DLOs but did include some advocacy training: nine included it as part of the core curriculum, and three as part of an elective option. Of the six degrees that did not publicly report DLOs and for which this information could not be obtained, five included advocacy within the core curriculum, and one did not offer any advocacy curriculum.\nIn summary, six of the 26 degrees (23%) had no identified advocacy curriculum, and three (12%) only offered it as part of elective units; that is, just 17 degrees (65%) had advocacy as part of the core curriculum. When only “advoc*” in unit learning outcomes is considered, 16 of 26 degrees (62%) included at least one core unit where “advoc*” was included in a learning outcome.\nPostgraduate: A total of 39 postgraduate degrees were identified from 30 institutions. From these degrees, 55 relevant units were identified: seven from the unit title alone (13%), 23 from the learning outcomes (42%) and 25 from the unit description (45%).\nOf these degrees, 26 had DLOs publicly available or obtained from degree directors; 13 degrees did not publicly report DLOs, and additional information could not be obtained from degree directors or institutions. Of the degrees with available DLOs, 10 out of 26 (38%) included “advoc*” in at least one DLO. Eight of these 10 degrees included advocacy training within at least one core unit; two included it in electives. Of the 16 degrees without advocacy in their DLOs, nine included advocacy training in core units, two in electives only, and five had no identified advocacy training. Of the 13 degrees where DLOs were not publicly reported, three did not include any advocacy training; of the 10 that did, seven delivered it within core units and three within elective units only. Overall, of the 39 identified postgraduate degrees, 31 included advocacy training (79%), and 24 of these delivered it via core units.\nIn summary, eight of the 39 degrees (21%) had no identified advocacy training, and seven (18%) only offered it as part of elective units; that is, just 24 degrees (62%) had advocacy as part of core training. When only “advoc*” in unit learning outcomes is considered, 13 of 39 degrees (50%) included at least one core unit where “advoc*” was included in a learning outcome, excluding the nine degrees where the unit learning outcomes were not available.\nAssessments: Of the unit convenors and degree directors (N = 66) invited to complete the survey to supplement collected data, 34 responded (52% response rate). The majority of respondents provided additional detail regarding the type and weighting of assessment tasks, which was used to supplement data collected from institutions' websites.\nUndergraduate: Assessment details were obtained for 19 of 39 undergraduate units (49%). Five units did not include an advocacy‐related assessment task, although one of these expressly included advocacy as part of one learning outcome. For the 19 units where assessment details were obtained, assessment of advocacy was highly heterogeneous: 4 of 19 (21%) were group assessments, 8 of 19 (42%) were short‐ or long‐answer written assessments, and the rest were a mix of case studies, advocacy letters, online discussions or “other” formats, with assessment weightings ranging from 10% to 50%.\nPostgraduate: Assessment details were obtained for 22 of 55 postgraduate units (40%). Nine units did not include an advocacy‐related assessment task, although three of these included advocacy as part of at least one learning outcome. The remaining 13 units were all assessed via individual tasks: five asked students to create an advocacy campaign/strategy, two were presentation tasks, and the others were a mix of case studies, reflections, online discussions and an advocacy letter. Assessment weightings also varied, between 5% and 50%.",
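As a worked illustration of the proportion calculations reported in the results above, this sketch computes category shares while excluding records with missing data from denominators, as the methods specify. The degree records and values are hypothetical, not the study data.

```python
# Hypothetical per-degree classifications; None marks information that
# could not be obtained and is excluded from the denominator.
degrees = [
    {"name": "Degree A", "advocacy": "core"},
    {"name": "Degree B", "advocacy": "elective"},
    {"name": "Degree C", "advocacy": "none"},
    {"name": "Degree D", "advocacy": None},
]

def share(records, value):
    """Proportion of records classified as `value`, among known records."""
    known = [r for r in records if r["advocacy"] is not None]
    if not known:
        return float("nan")
    return sum(r["advocacy"] == value for r in known) / len(known)

for category in ("core", "elective", "none"):
    print(f"{category}: {share(degrees, category):.0%}")  # 33% each here
```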
"This study provides an overview of the scope of public health advocacy education within Australian public health degrees. The audit indicates that advocacy curricula are not ubiquitously delivered: one‐third of all identified degrees did not include advocacy as part of core curricula, and advocacy was rarely included in DLOs.\nThese findings mirror the limited previous Australian‐based research indicating that public health advocacy receives minimal coverage in university curricula and that public health advocacy training needs to be included within core units. 9 These gaps have also been reflected internationally, 11 where there is also an identified lack of understanding of the optimal training format and length for teaching advocacy to other health disciplines such as nursing 26 and to medical students. 20, 27\nWhen a broad overview of advocacy training is considered (advocacy taught within core or elective units), advocacy appears to be taught at a similar frequency in undergraduate (77%) and postgraduate degrees (79%). However, good pedagogical practice requires essential learning to be expressly outlined in unit learning outcomes (where it is clear to students what they are learning) rather than only in unit descriptions, and on that measure the difference between the qualification levels is starker. Although the frequency of advocacy within both undergraduate and postgraduate curricula is relatively low, this study found that 62% of undergraduate degrees included at least one core unit where advocacy was included in unit‐level outcomes, compared to only 50% of postgraduate degrees. That this occurs more frequently in core undergraduate training may be due to the longer duration of undergraduate degrees, which allows greater opportunity to include advocacy training within the required curriculum.\nFourteen (22%) of the eligible public health degrees did not include any advocacy training, raising concerns about the advocacy capability these degrees develop. However, it should be acknowledged that for many degrees, particularly postgraduate degrees, information on unit learning outcomes was not publicly available, and this figure may therefore be overestimated.\nIf advocacy is not routinely included within the core curriculum, as the findings indicate, but public health students are expected to graduate with advocacy competencies, 12 some Australian public health graduates may not be adequately trained in advocacy. This has the potential to produce graduates who are not sufficiently prepared to deal with current and emerging population health challenges such as climate change and public health emergencies, which can stall health promotion advances that seek to improve the health and well‐being of individuals and populations.\nThe highly varied inclusion of advocacy curricula across degrees suggests uneven alignment between advocacy competencies and DLOs, ULOs and advocacy‐related assessment tasks. While universities have a responsibility to ensure public health students meet professional competencies, 8, 27 at present the graduate capabilities set out by CAPHIA are standards that institutions are merely encouraged to align to. 12 These capabilities are not regulated, owing to the lack of an accreditation requirement in Australia, and not all institutions teaching public health curricula are CAPHIA members. This may partly explain the diversity in the inclusion of advocacy in Australian public health degrees. These findings demonstrate the need for public health programs within universities to comply with current standards so that students understand the nature of advocacy and its importance in the policy process.\nThe low response rates from unit convenors and degree directors make it difficult to draw meaningful inferences about how advocacy is assessed; attempts were made to supplement survey responses with data the researchers extracted from institution websites. The diversity of assessment tasks used suggests that there are many ways to assess proficiency in advocacy skills. However, further research is required to understand the type of advocacy skills needed in real‐world advocacy practice and whether these skills are authentically assessed in units. A small number of units (n = 4, 4% of all units reviewed) reported advocacy as a learning outcome but did not assess it, which suggests there may be poor alignment between learning outcomes and assessment in some degrees.", "Several limitations are noted in this study. Firstly, a conservative approach was taken to reviewing how advocacy was included within curricula, using a narrow search term (“advoc*”). Additional search terms such as “community development/engagement”, “change” or “health communication” may have yielded more results, and the method may have missed synonyms or related concepts. A review of the unit learning outcomes of all health promotion, health policy and health communication units offered was carried out, but this did not yield additional data. Thus, there may have been some missed instances where advocacy skills are integrated into teaching but not explicitly mentioned and therefore not captured by this study. Some degrees may also have been missed if they were not registered on the CRICOS database. Future analysis could include a more comprehensive list of search terms or thematic identification.\nA second limitation is that it was difficult to ascertain the exact extent to which advocacy was included within a unit. It was out of scope for the review team to audit the entire unit curricula, and it is possible that unit content was not fully aligned with the ULOs, unit title or description. Therefore, the audit did not investigate how much advocacy curricula are included or how well they are taught.\nA third limitation is the currency of the information reviewed. This audit was completed between July and September 2021. Institutions regularly amend the content offered within degrees, and advocacy offerings may have changed in some instances. Attempts to mitigate accuracy limitations were made by obtaining data from multiple sources, including online handbooks, institutions' websites, and surveys of unit convenors and degree directors. 16 Additionally, the audit was limited by the data made publicly available by each institution. While some institutions provided extensive and detailed information on their websites regarding degree and unit learning outcomes and assessment details, this information was wholly absent in the public domain for other institutions.\nIt is not clear what the combined effect of these limitations is; the prevalences estimated in this broad scoping review may be under‐ or overestimated.\nFinally, the researchers recognise their interest and involvement in teaching advocacy within their institution's public health degrees, which, combined with the focus of this study, may introduce a desirability bias.", "This research provides an initial overview of how public health advocacy is included within Australian tertiary public health degrees. The researchers conclude that public health advocacy is not delivered consistently across degrees, and some students may miss out entirely. There remain opportunities to optimise advocacy curricula.\nThese findings highlight the need for Australian universities offering public health degrees to review and enhance their public health advocacy education offerings at undergraduate and postgraduate levels. A focus on ensuring advocacy is included within core units will allow students to graduate with the foundational advocacy competencies set out by CAPHIA. 12 In addition, where advocacy is taught and assessed, it is important that this is explicit within the unit description and ULOs, so that there is precise alignment with the foundational public health competencies and it is clear to students what they are learning. Long‐term university commitments to provide relevant advocacy training will contribute to a public health workforce adequately equipped to deal with emerging health issues such as climate change, public health emergencies and other issues that bear on equity.\nIt is envisaged these findings will be particularly informative for degree directors, educators and advocacy leaders seeking to augment public health advocacy training in the university sector. Collaborative efforts at discipline‐specific conferences, such as CAPHIA learning and teaching forums, could further facilitate work in this area.\nFuture research in this area should include determining the type of advocacy skills required for real‐world advocacy practice, whether these skills are authentically assessed within public health training, and best‐practice methods for training public health students in advocacy. The next steps for this research project include qualitative methods to assess pedagogical approaches to teaching advocacy via interviews with advocacy stakeholders, to ensure teaching is optimally aligned to industry needs.", "The authors received no financial support for this research.", "The authors declare no conflict of interest." ]
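As a sketch of the broader search strategy suggested in the limitations, the snippet below combines the “advoc*” stem with the additional example terms named in the text. The terms and sample text are illustrative; a term as generic as “change” would match very widely, so its hits would need manual review.

```python
import re

# "advoc" plus the broader example terms named in the limitations.
TERMS = ["advoc", "community development", "community engagement",
         "change", "health communication"]
PATTERN = re.compile("|".join(re.escape(term) for term in TERMS),
                     re.IGNORECASE)

def broad_match(text: str) -> bool:
    """True if any broader advocacy-related term appears in the text."""
    return bool(PATTERN.search(text))

print(broad_match("Students practise community engagement for policy change"))
# True: matches both "community engagement" and "change"
```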
[ null, "methods", null, null, null, null, "results", null, null, null, null, null, "discussion", null, "conclusions", null, "COI-statement" ]
[ "advocacy", "health promotion", "postgraduate education in public health", "public health education", "public health workforce", "undergraduate education in public health" ]
INTRODUCTION: Public health advocacy is a fundamental part of health promotion practice. 1, 2, 3 Its importance is reflected by its inclusion as one of three pillars within the World Health Organization's Ottawa Charter for Health Promotion. 3 The three pillars, “advocate,” “enable” and “mediate,” characterise the fundamental activities and competencies necessary to promote the health of populations. 4, 5 Advocacy is the active support of a cause. 6 It entails speaking, writing or acting in favour of a particular cause, policy or group of people. 5 Public health advocacy efforts can take many forms, employing a range of strategies that aim to influence and advance evidence‐based policymaking to improve health and well‐being for individuals and populations. 5, 6 Advocacy can also raise awareness of social and environmental factors, enabling systemic change in these areas as well as mobilising communities. To achieve these outcomes, advocacy does not employ one consistent approach. 6 Rather, activities and efforts can include a variety of strategies such as negotiation, debate and consensus generation in the pursuit of improved health and well‐being. 6 These advocacy activities can address the social and other determinants of health and lead to the development of healthier public policies and positive impacts on societies. 6, 7, 8 The influence of public health advocacy has been demonstrated through several public health successes in Australia, including plain tobacco packaging, mandatory folate fortification of bread products and the ban on commercial tanning beds. 9 Such successes were not made overnight. Advocates including health practitioners, policymakers and peak consumer advocacy bodies were required to coalesce and engage in extended and strategic efforts that ensured evidence was communicated in politically compelling ways, often in the face of fierce opposition from the commercial sector. 6 Successful outcomes are achieved through a variety of strategies, including building the capacity of health professionals and building coalitions, although some of the public health community may not feel comfortable or empowered to operate in this space. 5 Importantly, recent global events such as COVID‐19, which has fuelled vaccine hesitancy, and current issues such as climate change and the influence of major industries, have drawn attention to the critical need to develop an appropriately skilled public health workforce that engages with public health advocacy to meet current and emerging population health challenges. 7, 10, 11 The Public Health Association of Australia states that a well‐trained workforce is required, while emphasising the role of universities within this whole‐of‐system approach. 8 In 2016, the peak organisation for public health education throughout Australasia, the Council of Academic Public Health Institutions Australasia (CAPHIA), proffered a foundational set of competencies expected of undergraduate and postgraduate public health students. 12 The competencies recognise the role of public health advocacy in underpinning knowledge of health promotion and disease prevention 12 and are consistent with the inclusion of advocacy in other international public health competency frameworks. 13, 14 While public health undergraduates and postgraduates are expected to have knowledge and skills in public health advocacy, 12 there is a paucity of literature on how it is taught to these cohorts.
9 , 11 , 15 , 16 Contributing to this is a dearth of empirical research on the practice of advocacy. 17 , 18 This may be due to the absence of a consensus on the definition of public health advocacy among academics, health practitioners and advocacy groups. 1 , 18 Despite accepted principles, no one consistent approach to advocacy exists. 6 , 18 While the discipline of advocacy enables a wide variety of practices to be implemented that are tailored to specific contexts, 6 the resulting heterogeneous methodologies can make research in this area challenging to conduct. For the current study, the authors defined public health advocacy as educating, organising and mobilising for systems change in population health. 19 While many definitions of public health advocacy exist, the authors selected this broad‐spectrum definition as the foundation of this research to acknowledge the myriad of possible forms advocacy can take. Because of the potential for advocacy to improve population health, previous research has called for a better understanding of advocacy in public health education. 1 , 18 , 20 To determine whether advocacy curricula within Australian public health degrees match what is expected of graduates working in advocacy and health promotion roles once in the workforce, knowledge of advocacy curricula is required. This study aims to determine the scope of public health advocacy education within undergraduate and postgraduate Australian public health degrees and whether advocacy education is delivered as part of the core curriculum, as part of electives, or not at all.

METHODS: Sample: To identify a list of Australian public health degrees, the Commonwealth Register of Institutions and Courses for Overseas Students (CRICOS) database was used, applying the 0613 Public Health Field of Education. 21 Bachelor‐ and Master‐level degrees were included that related to the teaching of public/population health and variations of it, including global health, health policy, health communication and health promotion. Bachelor of Science or Bachelor of Health Science degrees where public health, population health or health promotion majors were offered were also included. This approach sought to provide a comprehensive list of recognised Australian institutions providing public health training. Degrees related to health service management were excluded because these programs are typically more focused on organisational governance. Extended or Advanced degrees (eg Master of Public Health [Extension]) were excluded because they usually included the standard version of the same degree, with additional scope for electives. Degrees below an Australian Qualification Framework (AQF) Level 7 (sub‐Bachelor level) were excluded. These were often abridged versions of Bachelor‐level degrees, as were 1‐year public health Honours degrees, which are typically research‐focused. For dual degrees, the base degree was counted once.

Data collection: Constructive alignment pedagogical theory 22 was used as a guiding theory. Publicly available information for each included degree was manually extracted from the institution's online handbook and website by the researchers between July and August 2021. The nature and extent of advocacy within curricula were assessed, including whether “advoc*” was included in degree learning outcomes (DLOs). Where this information was not available online, attempts were made by two researchers (AB and SL) to contact the institution directly: where possible, the listed unit or degree coordinator was approached at least twice via email or phone; otherwise, the researchers contacted the School/Department/Faculty or the general university contact listed. The AQF definition of an accredited unit was used. 23 This definition states that an accredited unit is a single component of a qualification or a stand‐alone unit that has been accredited by the same process as for a whole AQF qualification; an accredited unit may be called a “module,” “subject,” “unit of competency” or “unit.” 23 The authors identified all units offered within each degree (core and elective) that included “advoc*” in the unit title, unit description or unit‐level learning outcomes (ULOs) and extracted the data into Microsoft Excel. A single unit may be included as a core or elective unit across multiple degrees at some institutions. For example, a Foundation in Public Health unit may be included in both a Master of Public Health and a Master of Global Health. In these cases, the unit was only counted once in the numerator. Similarly, there are cases where multiple units deliver advocacy teaching within the same degree. In these cases, the degree was only counted once in the denominator. Previous research that used online handbook information to collect curriculum data recommended that future studies draw from additional sources for more complete data. 16 Considering this, and because some websites included limited details, where “advoc*” was identified in the unit title, description or ULOs, the unit convenor and degree director were invited via email to provide high‐level details about the advocacy component of the relevant unit. This invitation included a link to a web‐based survey platform, Qualtrics (Qualtrics, Provo, UT). Additional data were collected from respondents, including the number of advocacy‐specific learning outcomes and assessment details such as type (eg, case study) and weighting. Unit convenors and degree directors received one reminder email 2 weeks after the initial invitation.

Data analysis: Additional data collected via surveys with unit convenors and degree directors were combined with the data extracted from institutions' websites. A simple content analysis approach was applied to the extracted advocacy data to gauge how advocacy curricula were included within Australian tertiary public health education. 16 , 24 , 25 Previous research shows that web‐based content analysis is an appropriate method for auditing curricula content in public health degrees. 16 , 24 Denominators used to calculate proportions excluded degrees or units with missing data. In this broad scoping exercise, a simple content analysis was conducted as follows. The authors looked for all mentions of “advoc*” in all core and elective unit titles, descriptions and learning outcomes, and calculated the proportions of degrees with “advoc*” in either a unit title, description or learning outcomes. Further, given that constructive alignment pedagogical theory would suggest that unit learning outcomes should be aligned to DLOs to maximise opportunities for student learning, 22 the proportion of degrees that included “advoc*” in core unit learning outcomes alone was also calculated.
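The screening and counting rules described above (units counted once in the numerator even when shared across degrees, degrees counted once in the denominator even when several of their units match, missing data excluded) can be made concrete with a short sketch. This is a minimal illustration only, not the authors' actual workflow (the study recorded its data in Microsoft Excel); the degree names, unit codes and field layout below are hypothetical.

```python
import re

# Hypothetical extract of audit data.
# Each row: (degree, unit code, core unit?, unit title, unit description, unit learning outcomes)
units = [
    ("Master of Public Health", "PH101", True,
     "Foundation in Public Health",
     "Introduces health advocacy and policy.",
     ["Describe advocacy strategies for systems change."]),
    ("Master of Global Health", "PH101", True,   # same unit shared across two degrees
     "Foundation in Public Health",
     "Introduces health advocacy and policy.",
     ["Describe advocacy strategies for systems change."]),
    ("Bachelor of Health Science", "HS240", False,
     "Health Communication",
     "Media strategies for health promotion.",
     ["Critique health communication campaigns."]),
]

ADVOC = re.compile(r"advoc", re.IGNORECASE)  # mirrors the "advoc*" search term

def mentions_advocacy(title, description, ulos):
    """True if 'advoc*' appears in the unit title, description or any ULO."""
    return bool(ADVOC.search(title) or ADVOC.search(description)
                or any(ADVOC.search(o) for o in ulos))

# Numerator rule: a shared unit is counted once.
matching_units = {code for degree, code, core, t, d, o in units
                  if mentions_advocacy(t, d, o)}

# Denominator rule: a degree is counted once, however many of its units match.
all_degrees = {degree for degree, *_ in units}
degrees_with_advocacy = {degree for degree, code, core, t, d, o in units
                         if mentions_advocacy(t, d, o)}
degrees_with_core_advocacy = {degree for degree, code, core, t, d, o in units
                              if core and mentions_advocacy(t, d, o)}

print(f"Matching units: {len(matching_units)}")
print(f"Advocacy in any unit: {len(degrees_with_advocacy)}/{len(all_degrees)}")
print(f"Advocacy in core units: {len(degrees_with_core_advocacy)}/{len(all_degrees)}")
```

In the study itself, proportions were then computed only over degrees with available data; degrees or units with missing information were dropped from the denominator.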
Ethics: The Macquarie University Human Research Ethics Committee approved this research (Ref. No. 520211009730240).

RESULTS: Using the CRICOS database, 175 eligible degrees (53 Bachelor's and 122 Master's) were identified.
After exclusions, 65 degrees from 30 Australian institutions were included for review: 26 undergraduate degrees and 39 postgraduate degrees (see Figure 1). Of the 65 degrees analysed, 41 (63%) included advocacy as part of core curricula; 14 (22%) did not include any identified advocacy curricula (Figure 1). DLOs could not be found online for 28 degrees.

Figure 1. Flow diagram of the scope of advocacy training within Australian public health curricula.

Undergraduate: There was a total of 26 undergraduate degrees identified from 23 institutions. From these degrees, 39 relevant units were identified: four from the unit title alone (10%), 21 from the learning outcomes (54%) and 14 from the unit description (36%). Of the 26 degrees, 20 had DLOs publicly available; six degrees did not publicly report DLOs, and additional information could not be obtained from degree directors or institutions despite the researchers' attempts to contact them. Of the degrees with available DLOs, 3 out of 20 (15%) included “advoc*” in one DLO. No degree had more than one DLO mentioning “advoc*.” Each of these three degrees delivered advocacy training within core units. Of the 17 degrees that did not include “advoc*” in their DLOs, five did not include any units where “advoc*” appeared in a learning outcome, the unit title or the unit description. Notably, all five degrees were Bachelor of Health Science programs. The other 12 degrees, while not including “advoc*” in their DLOs, did include some advocacy training: nine included it as part of the core curriculum and three as part of an elective option. Of the six degrees that did not publicly report DLOs and for which the researchers' attempts to obtain this information were unsuccessful, five included advocacy within the core curriculum and one did not offer any advocacy curriculum. In summary, six of the 26 degrees (23%) had no identified advocacy curriculum, and three (12%) only offered it as part of elective units; that is, just 17 degrees (65%) had advocacy as part of the core curriculum. When only “advoc*” in unit learning outcomes is considered, 16 of 26 degrees (62%) included at least one core unit where “advoc*” was included in a learning outcome.

Postgraduate: A total of 39 postgraduate degrees were identified from 30 institutions. From these degrees, 55 relevant units were identified: seven from the unit title alone (13%), 23 from the learning outcomes (42%) and 25 from the unit description (45%). Of these degrees, 26 had DLOs publicly available or obtained from degree directors; 13 degrees did not publicly report DLOs, and additional information could not be obtained from degree directors or institutions. Of the degrees with available DLOs, 10 out of 26 (38%) included “advoc*” in at least one DLO. Eight of these 10 degrees included advocacy training within at least one core unit; two included advocacy training in electives. Of the 16 degrees without advocacy included in their DLOs, nine included advocacy training in core units, two in electives only, and five had no identified advocacy training. Of the 13 degrees where DLOs were not publicly reported, three did not include any advocacy training; of the 10 that did, seven delivered it within core units and three within elective units only. In summary, of the 39 identified postgraduate degrees, 31 (79%) included advocacy training, and 24 (62%) delivered it via core units; eight degrees (21%) had no identified advocacy training, and seven (18%) only offered it as part of elective units. When only “advoc*” in unit learning outcomes is considered, 13 of 39 degrees (50%) included at least one core unit where “advoc*” was included in a learning outcome, excluding the nine degrees where the unit learning outcomes were not available.

Assessments: Of the unit convenors and degree directors (N = 66) invited to complete the survey to supplement the collected data, 34 responded (52% response rate). The majority of respondents provided additional detail regarding the type and weighting of assessment tasks, which was used to supplement the data collected from institutions' websites.

Undergraduate: Assessment details were obtained for 19 of 39 undergraduate units (49%). Five units did not include an advocacy‐related assessment task; however, one of these units expressly included advocacy as part of one learning outcome. For the 19 units where assessment details were obtained, assessment of advocacy was very heterogeneous: 4 of 19 (21%) were group assessments, 8 of 19 (42%) were short‐ or long‐answer written assessments, and the rest were a mix of case studies, an advocacy letter, online discussion or “other”. Assessment weighting ranged from 10% to 50%.

Postgraduate: Assessment details were obtained for 22 of 55 postgraduate units (40%). Nine units did not include an advocacy‐related assessment task; however, three of these units included advocacy as part of at least one learning outcome. The remaining 13 units all used individual tasks: five asked students to create an advocacy campaign/strategy, two were presentation tasks, and the others were a mix of case studies, reflection, online discussion and an advocacy letter. Assessment weighting likewise varied between 5% and 50%.
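As a quick arithmetic check, the headline proportions reported above can be re-derived from the stated counts. The snippet below uses only the numbers given in the text and simply recomputes the percentages.

```python
# Headline counts reported above, per cohort:
# degrees with advocacy in core units, elective-only, or none, out of degrees reviewed.
cohorts = {
    "undergraduate": {"total": 26, "core": 17, "elective_only": 3, "none": 6},
    "postgraduate":  {"total": 39, "core": 24, "elective_only": 7, "none": 8},
}

for name, c in cohorts.items():
    # The three categories should partition each cohort.
    assert c["core"] + c["elective_only"] + c["none"] == c["total"]
    print(f"{name}: core {c['core'] / c['total']:.0%}, "
          f"elective-only {c['elective_only'] / c['total']:.0%}, "
          f"none {c['none'] / c['total']:.0%}")

# undergraduate: core 65%, elective-only 12%, none 23%
# postgraduate: core 62%, elective-only 18%, none 21%
```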
DISCUSSION: This study provides an overview of the scope of public health advocacy education within Australian public health degrees. The audit indicates advocacy curricula are not ubiquitously delivered, with one‐third of all identified degrees not including advocacy as part of core curricula, and advocacy rarely included in DLOs. These findings mirror the limited previous Australian‐based research indicating that public health advocacy receives minimal coverage in university curricula and that public health advocacy training needs to be included within core units. 9 These gaps have also been reflected internationally, 11 where there is also an identified lack of understanding of the optimal training format and length for teaching advocacy to other health disciplines such as nursing 26 and to medical students. 20 , 27 When a broad overview of advocacy training is considered (advocacy taught within core or elective units), advocacy appears to be taught at a similar frequency in undergraduate (77%) and postgraduate degrees (79%).
However, when considering that good pedagogical practice requires essential learning to be expressly outlined in unit learning outcomes (where it is clear to students what they are learning) rather than only in unit descriptions, the difference between the qualification levels is starker. Although the frequency of advocacy within both undergraduate and postgraduate curricula is relatively low, this study found that 62% of undergraduate degrees included at least one core unit where advocacy was included in unit‐level outcomes, compared to only 50% of postgraduate degrees. That advocacy more frequently occurs in core undergraduate training may be due to the longer duration of undergraduate degrees, which allows increased opportunity for the inclusion of advocacy training within the required curriculum. It was identified that 14 (22%) of the eligible public health degrees do not include any advocacy training, raising concerns about the advocacy capability of their graduates. However, it should be acknowledged that for many degrees, particularly postgraduate degrees, information on unit learning outcomes was not publicly available, and this figure may therefore be an overestimate. If advocacy is not routinely included within the core curriculum, as the findings indicate, but public health students are expected to graduate with advocacy competencies, 12 some Australian public health graduates may not be adequately trained in advocacy. This has the potential to produce graduates who are insufficiently prepared to deal with current and emerging population health challenges such as climate change and public health emergencies, which can stall health promotion advances that seek to improve the health and well‐being of individuals and populations. The highly varied inclusion of advocacy curricula across degrees suggests uneven alignment between advocacy competencies and DLOs, ULOs and advocacy‐related assessment tasks. While universities have a responsibility to ensure public health students meet professional competencies, 8 , 27 the graduate capabilities set out by CAPHIA are, at present, standards that institutions are merely encouraged to align with. 12 These capabilities are not regulated, owing to the absence of an accreditation requirement in Australia, and not all institutions teaching public health curricula are CAPHIA members. This may partly explain the diversity in the inclusion of advocacy in Australian public health degrees. These findings demonstrate the need for public health programs within universities to comply with current standards so that students understand the nature of advocacy and its importance in the policy process. The low response rates from unit convenors and degree directors make it difficult to draw meaningful inferences about how advocacy is assessed; attempts were made to supplement this with data extracted by the researchers from institution websites. The diversity of assessment tasks used suggests that there are many ways to assess advocacy skills proficiency. However, further research is required to understand the type of advocacy skills needed in real‐world advocacy practice and whether these skills are authentically assessed in units. A small number of units (n = 4, 4% of all units reviewed) reported advocacy as a learning outcome but did not assess it, which suggests poor alignment between learning outcomes and assessment in some degrees.

LIMITATIONS: Several limitations are noted in this study.
Firstly, a conservative approach was taken to reviewing how advocacy was included within curricula, using a narrow search term (“advoc*”). Additional search terms such as “community development/engagement”, “change” or “health communication” may have yielded more results, and this method may have missed synonyms or related concepts encompassed by advocacy. A review of the unit learning outcomes of all health promotion, health policy and health communication units offered was carried out, but this did not yield additional data. Thus, there may have been some missed instances where advocacy skills are integrated into teaching but not explicitly mentioned and therefore not captured by this study. Some degrees may also have been missed if they were not registered on the CRICOS database. Future analysis could include a more comprehensive list of search terms or thematic identification. A second limitation is that it was difficult to ascertain the exact extent to which advocacy was included within a unit. It was out of scope for the review team to audit the entire unit curricula, and it is possible that unit content was not fully aligned with the ULOs, unit title or description. Therefore, the audit did not investigate how much advocacy curricula are included or how well they are taught. A third limitation is the currency of the information reviewed. This audit was completed between July and September 2021. Institutions regularly amend the content offered within degrees, and advocacy offerings may have changed in some instances. Attempts to mitigate accuracy limitations were made by obtaining data from multiple sources, including online handbooks, institutions' websites and surveys of unit convenors and degree directors. 16 Additionally, the audit was limited by the data made publicly available by each institution. While some institutions provided extensive and detailed information on their websites regarding degree and unit learning outcomes and assessment details, this information was wholly absent from the public domain for other institutions. The combined effect of these limitations is unclear: the prevalences estimated in this broad scoping review may be under‐ or overestimated. Finally, the researchers recognise their interest and involvement in teaching advocacy within their institution's public health degrees, which, combined with the focus of this study, may introduce a desirability bias.

CONCLUSION: This research provides an initial overview of how public health advocacy is included within Australian tertiary public health degrees. The researchers conclude that public health advocacy is not delivered consistently across degrees, and some students may miss out entirely. There remain opportunities to optimise advocacy curricula. These findings highlight the need for Australian universities offering public health degrees to review and enhance their public health advocacy education offerings at undergraduate and postgraduate levels. A focus on ensuring advocacy is included within core units will allow students to graduate with the foundational advocacy competencies set out by CAPHIA. 12 In addition, it is important to ensure that where advocacy is taught and assessed, this is explicit within the unit description and ULOs so that there is precise alignment with the foundational public health competencies and it is clear to students what they are learning.
Long‐term university commitments to providing relevant advocacy training will help ensure that investments in the public health workforce leave it adequately equipped to deal with emerging health issues such as climate change and public health emergencies, and to promote equity. It is envisaged these findings will be particularly informative for degree directors, educators and advocacy leaders seeking to augment public health advocacy training in the university sector. It is also envisaged that collaborative efforts at discipline‐specific conferences, such as CAPHIA learning and teaching forums, could further facilitate work in this area. Future research in this area should include determining the type of advocacy skills required for real‐world advocacy practice, whether these skills are authentically assessed within public health training and best‐practice methods for training public health students in advocacy. The next steps for this research project include qualitative methods to assess pedagogical approaches to teaching advocacy via interviews with advocacy stakeholders to ensure teaching is optimally aligned to industry needs.

FUNDING INFORMATION: The authors received no financial support for this research.

CONFLICT OF INTERESTS: The authors declare no conflict of interest.
Background: Public health advocacy is a fundamental part of health promotion practice. Advocacy efforts can lead to healthier public policies and positive impacts on society. Public health educators are responsible for equipping graduates with cross-cutting advocacy competencies to address current and future public health challenges. Methods: Australian public health Bachelor's and Master's degrees were identified using the CRICOS database. Open-source online unit guides were reviewed to determine where and how advocacy was included within core and elective units (in the unit title, unit description or learning outcomes). Degree directors and convenors of identified units were surveyed to gather further information about advocacy in the curriculum. Results: Of 65 identified degrees, 17 of 26 (65%) undergraduate degrees and 24 of 39 (62%) postgraduate degrees included advocacy within the core curriculum, while 6 of 26 (23%) undergraduate and 8 of 39 (21%) postgraduate degrees offered no advocacy curriculum. Conclusions: Australian and international public health competency frameworks indicate advocacy curriculum should be included in all degrees. This research suggests advocacy competencies are not ubiquitous within Australian public health curricula. The findings support the need to advance public health advocacy teaching efforts further.
9,340
230
[ 935, 219, 474, 207, 18, 375, 354, 500, 117, 100, 438, 10 ]
17
[ "advocacy", "degrees", "health", "unit", "included", "units", "public", "public health", "learning", "assessment" ]
[ "improved health advocacy", "health advocacy fundamental", "health advocacy demonstrated", "public health advocacy", "advocacy health promotion" ]
[CONTENT] advocacy | health promotion | postgraduate education in public health | public health education | public health workforce | undergraduate education in public health [SUMMARY]
[CONTENT] advocacy | health promotion | postgraduate education in public health | public health education | public health workforce | undergraduate education in public health [SUMMARY]
[CONTENT] advocacy | health promotion | postgraduate education in public health | public health education | public health workforce | undergraduate education in public health [SUMMARY]
[CONTENT] advocacy | health promotion | postgraduate education in public health | public health education | public health workforce | undergraduate education in public health [SUMMARY]
[CONTENT] advocacy | health promotion | postgraduate education in public health | public health education | public health workforce | undergraduate education in public health [SUMMARY]
[CONTENT] advocacy | health promotion | postgraduate education in public health | public health education | public health workforce | undergraduate education in public health [SUMMARY]
[CONTENT] Humans | Public Health | Prevalence | Australia | Curriculum | Health Promotion [SUMMARY]
[CONTENT] Humans | Public Health | Prevalence | Australia | Curriculum | Health Promotion [SUMMARY]
[CONTENT] Humans | Public Health | Prevalence | Australia | Curriculum | Health Promotion [SUMMARY]
[CONTENT] Humans | Public Health | Prevalence | Australia | Curriculum | Health Promotion [SUMMARY]
[CONTENT] Humans | Public Health | Prevalence | Australia | Curriculum | Health Promotion [SUMMARY]
[CONTENT] Humans | Public Health | Prevalence | Australia | Curriculum | Health Promotion [SUMMARY]
[CONTENT] improved health advocacy | health advocacy fundamental | health advocacy demonstrated | public health advocacy | advocacy health promotion [SUMMARY]
[CONTENT] improved health advocacy | health advocacy fundamental | health advocacy demonstrated | public health advocacy | advocacy health promotion [SUMMARY]
[CONTENT] improved health advocacy | health advocacy fundamental | health advocacy demonstrated | public health advocacy | advocacy health promotion [SUMMARY]
[CONTENT] improved health advocacy | health advocacy fundamental | health advocacy demonstrated | public health advocacy | advocacy health promotion [SUMMARY]
[CONTENT] improved health advocacy | health advocacy fundamental | health advocacy demonstrated | public health advocacy | advocacy health promotion [SUMMARY]
[CONTENT] improved health advocacy | health advocacy fundamental | health advocacy demonstrated | public health advocacy | advocacy health promotion [SUMMARY]
[CONTENT] advocacy | degrees | health | unit | included | units | public | public health | learning | assessment [SUMMARY]
[CONTENT] advocacy | degrees | health | unit | included | units | public | public health | learning | assessment [SUMMARY]
[CONTENT] advocacy | degrees | health | unit | included | units | public | public health | learning | assessment [SUMMARY]
[CONTENT] advocacy | degrees | health | unit | included | units | public | public health | learning | assessment [SUMMARY]
[CONTENT] advocacy | degrees | health | unit | included | units | public | public health | learning | assessment [SUMMARY]
[CONTENT] advocacy | degrees | health | unit | included | units | public | public health | learning | assessment [SUMMARY]
[CONTENT] health | public | public health | advocacy | health advocacy | public health advocacy | 18 | influence | knowledge | consistent [SUMMARY]
[CONTENT] unit | health | degrees | public | data | public health | degree | included | level | learning outcomes [SUMMARY]
[CONTENT] degrees | advocacy | units | assessment | included | advocacy training | dlos | training | core | 19 [SUMMARY]
[CONTENT] advocacy | health | public | public health | public health advocacy | health advocacy | training | students | methods | area [SUMMARY]
[CONTENT] advocacy | degrees | health | unit | units | public | included | public health | assessment | learning [SUMMARY]
[CONTENT] advocacy | degrees | health | unit | units | public | included | public health | assessment | learning [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] Australian | Bachelor | Master | CRICOS ||| ||| [SUMMARY]
[CONTENT] 65 | 17 | 26 | 65% | 24 | 39 | 62% | 6 | 26 | 23% | 8 | 39 | 21% [SUMMARY]
[CONTENT] Australian ||| Australian ||| [SUMMARY]
[CONTENT] ||| ||| ||| Australian | Bachelor | Master | CRICOS ||| ||| ||| ||| 65 | 17 | 26 | 65% | 24 | 39 | 62% | 6 | 26 | 23% | 8 | 39 | 21% ||| Australian ||| Australian ||| [SUMMARY]
[CONTENT] ||| ||| ||| Australian | Bachelor | Master | CRICOS ||| ||| ||| ||| 65 | 17 | 26 | 65% | 24 | 39 | 62% | 6 | 26 | 23% | 8 | 39 | 21% ||| Australian ||| Australian ||| [SUMMARY]
The COVID-19 pandemic as a factor of hospital staff compliance with the rules of hand hygiene: assessment of the usefulness of the "Clean Care is a Safer Care" program as a tool to enhance compliance with hand hygiene principles in hospitals.
34322613
Hand cleansing and disinfection are the most efficient methods for reducing the rates of hospital-acquired infections, which are a serious medical and economic problem. Striving to ensure the maximum safety of the therapeutic process, we decided to promote hand hygiene by implementing the educational program titled "Clean Care is a Safer Care". The occurrence of the COVID-19 pandemic affected compliance with procedures related to the sanitary regime, including the frequency and accuracy of hand decontamination by medical personnel.
INTRODUCTION
We monitored compliance with the hygiene procedure before implementation of the program, as well as during the hand hygiene campaign, by means of direct observation and disinfectant consumption rates.
METHODS
In the initial self-assessment survey, the hospital had scored 270/500 points (54%). A preliminary audit revealed a hygiene compliance rate of 49%. After broad-scale educational efforts, the semi-annual audit revealed an increase in the hand hygiene compliance rate to 81% (hospital average), while the final audit, carried out after one year of campaigning, revealed a compliance rate of 77%. The final score for the hospital increased to 435/500 points.
RESULTS
The COVID-19 pandemic dramatically increased the accuracy of proper hand hygiene procedures and the consumption of disinfectant agents. The educational program succeeded in reaching its goal; however, long-term educational efforts are required to maintain and improve the quality of the services provided.
CONCLUSIONS
[ "COVID-19", "Guideline Adherence", "Hand Hygiene", "Health Promotion", "Humans", "Pandemics", "Personnel, Hospital", "SARS-CoV-2", "Surveys and Questionnaires" ]
8283626
Introduction
Every year, millions of patients acquire infections during hospital treatment. The scale of the problem is best illustrated by data indicating that hospital-acquired infections develop in nearly 10% of all hospitalized patients, reaching nearly 50% in high-risk groups such as ICU patients, patients subjected to prolonged artificial ventilation, or immunosuppressed patients. These serious complications increase the length of hospital stay by about 5-10 days, lead to potential disabilities, and double the morbidity rates in affected patients [1]. Hospital-acquired infections are also a serious economic problem due to the increased treatment costs. As estimated by Urban et al., treatment of mild infections may increase individual treatment costs by at least 400 USD, while the average cost of treating severe nosocomial infections exceeds 30,000 USD per patient. In many cases, nosocomial infections are transmitted by the medical staff due to their failure to comply with aseptic and antiseptic procedures [2]. Strict compliance with hand hygiene principles is of special importance in reducing nosocomial infection rates. Hand cleansing and disinfection practices are the most economical and most efficient methods to prevent transmission of microorganisms and reduce hospital-acquired infection rates [3]. Despite the simplicity of this measure, non-compliance with its principles is a global problem in the health care sector [1]. The pioneer of hand hygiene as a measure to prevent hospital-acquired infections was Ignaz Philipp Semmelweis (1818-1865), who discovered the correlation between the dirty hands of obstetricians and the incidence of postpartum infections and was the first to implement appropriate hand disinfection procedures. Today, nearly 200 years later, proper compliance with hand hygiene principles is still a problem for medical professionals [4]. The importance of this simple procedure appears to be underestimated by medical professionals, as the World Health Organization estimates that compliance rates range from 5% to 89% [5]. In 2009, the WHO published guidelines for proper hand hygiene in the medical sector. All around the globe, training is organized for medical professionals as part of the First Global Patient Safety Challenge “Clean Care is a Safer Care” [6]. Poland joined the initiative in 2013, and many hospitals in the country decided to implement the WHO guidelines as a measure to reduce hospital-acquired infection rates. The guidelines were also adopted by the Medicover Hospital in Warsaw as a means to ensure the maximum safety of the therapeutic services provided to its patients. Program implementation was coordinated by the Hospital-Acquired Infections Team, which worked to increase hospital workers' awareness of the importance of hand hygiene and to strengthen good behavior models promoting proper hand hygiene. The outbreak of the COVID-19 epidemic provided completely new observations of behavior among medical staff: measurements of the use of liquid soap and hand disinfectant carried out at that time showed a dramatic increase in consumption.
Material and methods
Medicover Hospital is a large, broad-profile hospital which provides its patients with high-quality, comprehensive health care. The educational program was implemented there in 2014. Education included special training workshops, repeated worksite training for all staff members, and visual accents to remind the personnel of hand hygiene principles. The promotional material was prepared in collaboration with the Ecolab company. Posters highlighting the importance of hand hygiene and information on the efficiency of this measure were placed in strategic locations and points of care across the institution. Care was taken to ensure the constant availability of alcohol-based disinfectant at points of care in every department. In addition, disinfectant dispensers were placed in special holders at all patient beds. An observational study of hand hygiene behaviors was commenced. Awareness of the role of institutional involvement in the desired change in personnel behaviors led the hospital management and staff administration team to define hand hygiene as a priority task for the entire hospital personnel. The study group consisted of all the hospital’s nurses, auxiliary staff, physicians, and other employees of all hospital departments and the outpatient consultation center. The study was based on the tools for implementing the multi-aspect strategy for hand hygiene developed by the WHO. The strategy consists of 5 key components [7]: systemic change; training and education of health care professionals; assessment and feedback; visual instructions at the worksite; and promotion of institutional safety. A maximum of 100 points may be scored in each of these areas (for a maximum overall score of 500). The package of implementation tools includes the Hand Hygiene Self-Assessment Questionnaire as a validation tool to assess the implementation of the 5 components of the WHO strategy at health care institutions. Based on the total number of points obtained after completion of the questionnaire, the institution is assigned one of four possible levels of hand hygiene promotion and practice: insufficient, basic, intermediate, or advanced. Institutions classified into the advanced level answer 20 questions in the leaders’ section to obtain a maximum of 20 points; twelve points are enough to achieve the status of a leader [7]. This study was carried out by means of a diagnostic survey based on the Polish version of the “2010 WHO Self-Assessment Questionnaire” for hand hygiene published in April 2013. The tool facilitates the analysis of current capabilities and knowledge of hand hygiene issues as well as the identification of future goals and measures. Compliance with hand hygiene principles was monitored both directly and indirectly. Direct monitoring of hand hygiene consisted of direct observation carried out by specially trained staff and based on the “Five Moments for Hand Hygiene” WHO guidelines. The guidelines present five indications for hand hygiene procedures: before touching a patient, before clean/aseptic procedures, after body fluid exposure/risk, after touching a patient, and after touching patient surroundings [8]. Another WHO implementation tool, the disinfectant consumption rate, was also used to determine the hand hygiene level at the Medicover Hospital. The rate corresponds to the quantity of hand disinfectant consumed per 1,000 person-days at individual departments or in the entire hospital [9]. 
Based on the form for recording nursing tasks requiring hand hygiene measures, the hand hygiene compliance rate was calculated from the average consumption of soap and disinfectant. All these factors were monitored in both the “non-pandemic” and the “pandemic” season.
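As a rough illustration of the two indicators just described — direct observation of the “Five Moments” and the disinfectant consumption rate per 1,000 person-days — the following Python sketch shows how such figures are typically computed. The function names and example numbers are assumptions for demonstration, not the hospital's actual data or tooling.

```python
# Illustrative computation of the two WHO indicators described above.
# Function names and example figures are hypothetical.

def compliance_rate(actions_performed, opportunities_observed):
    """Direct observation: hand hygiene actions performed divided by
    'Five Moments' opportunities observed, as a percentage."""
    if opportunities_observed == 0:
        raise ValueError("no opportunities observed")
    return 100.0 * actions_performed / opportunities_observed

def consumption_rate(liters_used, person_days):
    """Indirect indicator: liters of hand disinfectant consumed
    per 1,000 person-days, per the WHO implementation tool."""
    return liters_used / person_days * 1000.0

# Hypothetical monthly figures for one department:
print(compliance_rate(196, 400))       # 49.0 (% of observed opportunities)
print(consumption_rate(55.0, 2500.0))  # 22.0 L per 1,000 person-days
```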
null
null
Conclusions
Analysis of hand sanitizer use during the SARS-CoV-2 coronavirus epidemic compared with the year 2019.
[ "Introduction", "Objective", "Results", "Discussion", "Conclusions" ]
[ "Every year, millions of patients acquire infections during hospital treatment. The scale of the problem is best illustrated by data indicating that hospital-acquired infections development in nearly 10% of all hospitalized patients while reaching nearly 50% in case of high-risk groups such as ICU patients, patients subjected to prolonged artificial ventilation, or immunosuppressed patients.\nThese serious complications increase the length of hospital stay by about 5-10 days, leading to potential disabilities as well as double the morbidity rates in affected patients [1].\nHospital-acquired infections are also a serious economic problem due to the increased treatment costs. As estimated by Urban et al., treatment of mild infections may increase the individual treatment costs by at least 400 USD while the average costs of the treatment of severe nosocomial infections exceed 30,000 USD per patient. In many cases, nosocomial infections are transmitted by the medical staff due to their failure to comply with aseptic and antiseptic procedures [2].\nStrict compliance with hand hygiene principles is of special importance in reducing the nosocomial infection rates. Hand cleansing and disinfection practices are the most economical and most efficient methods to prevent transmission of microorganisms and reduce hospital-acquired infection rates [3]. Despite the simplicity of this measure, non-compliance with its principles is a global problem in the health care sector [1].\nThe pioneer of hand hygiene as a measure to prevent hospital-acquired infections was Ignaz Philipp Semmelweis (1818-1865) who discovered the correlation between dirty hands of obstetricians and the incidence of postpartum infections and was the first to implement appropriate hand disinfection procedures.\nToday, nearly 200 years later, proper compliance with hand hygiene principles is still a problem for medical professionals [4]. The importance of this simple procedure appears to be underestimated by medical professionals as the World Health Organization estimates that the compliance rates range from 5% to 89% [5].\nIn 2009, WHO published the guidelines for proper hand hygiene in the medical sector. All around the globe, training is organized for medical professionals as part of the First Global Patient Safety Challenge “Clean Care is a Safer Care” [6]. Poland joined the initiative in 2013. Many hospitals in our country decided to implement the WHO guidelines as a measure to reduce the hospital-acquired infection rates.\nThe guidelines were also adopted by the Medicover Hospital in Warsaw as a means to ensure the maximum safety of the therapeutic services provided to our patients. Program implementation was coordinated by the Hospital-Acquired Infections Team which worked to increase the awareness of hospital workers in relation to the importance of hand hygiene as well as to strengthen good behavior models to promote proper hand hygiene.\nThe outbreak of the COVID-19 epidemic provided completely new observations of behavior among medical staff. 
Measurements of the use of liquid soap and hand disinfectant, carried out at that time showed dramatic increase in use.", "The objective of the study was to assess the usefulness of the educational program titled “Clean Care is a Safer Care” as a tool to enhance the compliance with hand hygiene principles before and during COVID-19 pandemic.", "In the initial self-assessment survey taken in 2014, the Hospital scored a total of 270/500 points (54%), corresponding to the intermediate compliance level. The poorest scores were obtained in institutional safety and training/education components.\nThe hand hygiene compliance rate calculated on the basis of the consumption of soap and disinfectants by the hospital staff was 49%. While the overall compliance with hand hygiene measures was 50%, it varied in individual personnel groups, ranging from 63% in nursing staff to as little as 9% in the auxiliary personnel. Soap consumption was 630 L per year, corresponding to 31 mL per person/day; the annual consumption of 460 L of disinfectant corresponded to 22 mL per person/day. The average compliance with the “Five moments for hand hygiene” was 60%.\nHand hygiene was measured in general and in individual departments. As shown by the initial audit, the lowest compliance score was observed at the admission room. Subsequent audits suggested a constant improvement in these parameters until final audit. During observation period the ICU became the institution’s leader in this regard.\nAfter initial assessment and identification of institution-specific principles for implementation of the “Clean Care is a Safer Care” educational program, training workshops and worksite training were commenced, and educational materials such as posters and reminders were distributed around the hospital. UV lamps were used to assess the correctness of hand cleansing during the training sessions.\nAfter these broad-scaled educational efforts, the semi-annual audit revealed an increase in hand hygiene compliance rate up to 81% (hospital average, observation day 180) while the final audit carried out after one year of campaigning revealed a compliance rate of 77%.\nThe largest problem was identified in relation to hand hygiene following the contact to patient surroundings, with relevant principles being complied with only in 22% of cases. This aspect was also associated with the highest increase in the compliance rate, with the percentage of desirable behaviors rising to 60%.\nFollowing the exposure to body fluids, the procedures were complied with in 60% at day 0 to reach 95% at the end of the observation period. The highest mean compliance rates were observed before the commencement of aseptic procedures (77% before the program vs. 88% on observation day 365) as well as following the contact with the patient (51% before the program vs 79% on observation day 365).\nAccording to the final assessment survey (took on observation day 365), the final score for the entire institution in the year 2016 was 435/500 points, with hand hygiene compliance rate reaching the final value of 77%. The increase in the compliance to hand hygiene principles varied in different health care professionals. It increased in physicians and nurses, but the most significance increase was observed in the auxiliary staff (from 9 to 63%). 
This might be due to the awareness in this workers’ group being initially low and increasing significantly as the result of the educational efforts.\nHigh results were achieved in all individual aspects of hand hygiene, with the change in the preferred alcohol-based hand disinfectant being generally accepted. Organizational culture-related measures for promoting safer health care require further attention and continued improvement. The analysis carried out at the Medicover hospital during the COVID-19 pandemic compared the use of disinfection fluids in 2019 and the first two months of the epidemic in 2020 an dramatic increase in the use of detergents and disinfectants has been observed. The results of the analysis are collected in the Table I.", "Hospital hand hygiene is one of the cardinal principles for reducing the transmission of pathogens during therapeutic procedures. Appropriate assessment of the compliance with hygienic guidelines and procedures is an important tool for modeling appropriate behaviors of the medical staff. Methods for the monitoring of the compliance with hand hygiene principles at health care institutions include direct observation, measurement of hand hygiene agents consumption, measurements and studies [10] or electronic devices installed at worksites [11]. However, no standardized assessment method has been developed and none of the existing methods can be considered ideal due to their high costs, subjective nature, or staff behaviors being changed while under observation. It was mainly for this reason that out of 36 campaigns held in 36 European countries in years 2000-2012, only 50% could be assessed for efficiency with the WHO guidelines being implemented in only 55% of this latter group [12].\nNowadays, direct monitoring by specially trained personnel is considered to be a gold standard [13-15]. The method provides detailed information on the hand hygiene behaviors in different groups of the medical staff at the crucial moments that determine the risk of patient-personnel-patient transmission of pathogens. As part of the “Clean Care is a Safer Care” program, the WHO developed a special observation form which facilitates monitoring of hand cleansing and disinfection behaviors on the basis of “Five Moments for Hand Hygiene”, i.e. before touching a patient, before clean/aseptic procedures, after body fluid exposure/risk, after touching a patient, and after touching patient surroundings. The Hospital-Acquired Infections Team at the Medicover Hospital used this observation tool in the development of a program to monitor and improve hand hygiene in the personnel involved in patient care [9]. Another WHO implementation tool developed as part of the “Clean Care is a Safer Care” program, i.e. the disinfectant consumption rate, was also used to determine the hand hygiene level at the Medicover Hospital. The rate corresponds to the quantity of hand disinfectant consumed per 1,000 person-day at individual departments or in the entire hospital [9]. According to the WHO guidelines, the minimum consumption of hand disinfectants should be at the level of 20 liters per 1,000 person/day [9]. The advantages as well as disadvantages of this method have been broadly discussed in the literature. The advantages consist in relatively simple data collection as well as a lower number of staff members being involved in the monitoring resulting in lower costs. The disadvantage consists in measurement inaccuracies as e.g. 
cases of disinfectant spills or cleaning agents being used by the patients are not accounted for. Despite the above, the disinfectant consumption rate is often associated with the hand hygiene behavior rate; according to some authors, it is a priority method which should be considered more important than direct observation [16-19]. As the result of the educational program being implemented in the hospital, the compliance with the hand hygiene guidelines has improved significantly. The program was based on worksite training, poster campaign, observational studies, disinfectant consumption monitoring and promotion of hand disinfectant use. Of note is the fact that hygiene consultants used UV lamps to better visualize potential risks during the workshop training. In relation to the low compliance with the hand hygiene principles as defined by the “Five Moments for Hand Hygiene”, particularly in relation to the contact with patient’s surroundings (only 22% of proper behaviors as determined during the initial audit), an assumption was made that the personnel was unaware of the risk of pathogen transmission in cases when no direct contact with the patient has occurred. Thanks to the use of UV lamps, employees could better see that the patient’s surroundings (bed, pajamas, personal items) are a reservoir of pathogens which are easily transmitted onto professionals’ hands even when the patient is absent, and therefore, that appropriate hygiene measures should be taken after touching such items. As shown by this example, it is rational to include various techniques for better visualization of the pathogen transmission stages in educational materials as this may result in higher compliance of hygiene principles by the health care staff [20]. In the course of our study, we observed that hand hygiene behaviors were significantly more common after touching the patient than before touching the patient (60 vs 54% at day 0 and 95 vs 80% at day 365). Whitby et al. highlighted that health care workers are more likely to comply with hygiene principles when self-preservation imperative is involved, i.e. when they perceive their hands to be contaminated and have the potential for transmitting the infection to employees themselves as well as those close to them (i.e. after touching the patient or their biological material) [8]. Therefore, promotion of attitudes stressing the personnel’s responsibility for the patients’ health, including legal liability in case of any claims from patients having acquired nosocomial infection, is very important. According to many authors, nurses are more aware of the problem as compared to physicians [21, 22]. Boscard et al. suggest that the motivation behind hand hygiene compliance in nurses consists in their concern for the safety of themselves as well as their close ones, since nurses are aware of risks associated with the failure to comply with the procedures [23]. Regardless of the monitoring method, appropriate reporting of results, and feedback on observation and disinfectant consumption results should be an important educational as well as motivational component of the hand hygiene program [24]. In our experience gained during implementation of the monitoring program at the Medicover Hospital, direct observation and feedback was the most effective measurement method. Although an improvement was achieved, there is no certainty as to how long the new compliance rates would hold after intervention is discontinued. 
This would require further observations and development of efficient methods for the strengthening of behavior patterns. Many authors have highlighted that numerous health care professionals have problems with complying to hand hygiene principles. The reported causes of such a non-compliance include heavy workload and non-availability of disinfectants as well as insufficient number of protective gloves [25]. These, however, are not the only causes: problems with availability of protective gloves or disinfectants had never been encountered at Medicover Hospital, suggesting a significant importance of the “human factor”, the lack of appropriate procedures, education, and hygiene monitoring. According to WHO recommendations, hand cleansing or disinfection should be performed at the point of care, i.e. at a place where the three elements: the patient, the health care professional, and a medical procedure, exist concurrently. Hand cleansing and disinfection agents should be readily available at points of care so that health care professionals do not have to leave the patient zone [10]. On the basis of these recommendations, hand disinfectant holders were placed on patient beds in the Medicover Hospital. Studies confirmed that disinfecting hands using an alcohol-based rub is more efficient than washing while being easy to implement and inexpensive. According to calculations, prevention of 8 cases of hospital-acquired pneumonia would compensate the annual cost of hand disinfection agents [26]. As shown by such calculations, implementation of hand hygiene programs leads to measurable therapeutic and economic benefits [27]. Most employees of the Medicover Hospital changed their behavioral habits and the compliance rate has increased significantly as the result of increased use of hand disinfectants from 49% at the initial audit to 81% at day 180 and 77% at day 365. A significant increase in the tested parameters was observed in the semiannual audit, followed by a slight decrease in the final audit. Our results correspond to those obtained in other studies as medical personnel appears to be somewhat bored with the hand hygiene campaigns. It might show that the best method for the improvement in hand hygiene consists in placing hand cleansing instructions at the key points of care [9]. According to Pittet et al. [28], best results are achieved when education is combined with hand hygiene reminders such as posters, brochures, memes, etc. Although posters are considered to be somewhat outdated as an educational tool, they may provide useful reminder when placed at the point of care. The importance of posters developed using the social marketing concept was demonstrated by Forrester in Canada [29] as well as by successful national campaigns in Australia and Europe. Implementation of current graphical trends and aggressive content increases the impact of this type of media [30, 31]. Introduction of hand hygiene programs for medical personnel at health care institutions is of high importance in the prevention of infections. As shown by Pittet et al., implementation of these recommendations in the University Hospital In Geneva reduced the nosocomial infection rate from 16.9 to 9.9% [32, 33]. Rosenthal et al. showed that the improvement in hand safety contributed to a significant reduction in the rate of nosocomial infections (from 47.55 to 27.93 per 1,000 patient/day) [34]. Multi-aspect campaigns unambiguously increase the hand hygiene compliance rates. 
During a 2-year campaign carried out by the WHO in 8 regions of the world (2004-2006), the mean compliance rate increased from 39.6 to 56.9%; notably, the impact of the intervention was higher in the developing countries where the access to knowledge and protection measures was more limited [35]. However, it must be noted that compliance with high hygienic standards is also associated with the institutional structure of the health care system which differs in different countries. Epidemiological studies revealed a correlation between infection rates and the low number of personnel in relation to the high number of patients. As the result of the number of patients or physicians being too low, hand hygiene behaviors are reduced or even absent between provision of consecutive patients [36, 37]. Without appropriate employment rates, it would be very difficult to expect an improvement in the quality of services and maintenance of high hygienic standards. Considering the dwindling number of applicants for health care jobs in Poland as well as the aging of the society and health care professionals combined with economically-driven emigration, maintenance of high treatment standards may become increasingly difficult in near future. Notably, the degree of hand hygiene program implementation is associated with the quality of epidemiological process monitoring. As shown by the studies, appropriate number of epidemiologists is required in relation to the number of hospital beds [7]. According to the common consensus, the appropriate rate is 1:250; however, as demonstrated by Delphi [38], a ratio of 0.8-1/100 might be required for an improvement to be observed. In Poland, the ratio has been statutorily established at 1:200; many hospitals, however, fail to comply with this requirement. In our study, the organizational culture and safety of health care was shown to be the weakest link of the Multi-aspect Strategy for Hand Hygiene (55 points). This result is not satisfactory since the support from health care administration and local governments is crucial for hand hygiene being considered a priority for patient safety and for educational programs being continued [39]. According to the self-assessment survey administered at the Medicover Hospital, the advancement of the hand hygiene program was “intermediate” at the beginning and closer to “advanced” at the end showing that hygiene practices improved significantly. The measures are being continued to date. The compliance with hand hygiene principles at the Medicover Hospital in “non-pandemic period” was pretty good; it was higher than at numerous sites in developed countries assessed in the similar manner. We expect, that further improvements will be still possible, particularly by increasing institutional safety, continuing the educational measures and optimizing the epidemiological personnel employment ratios. We were estimate that particular attention must be paid to measures related to the organizational culture promoting health care safety. This element is particularly difficult to change as it is related to the culture of work modeled by the management staff who must acquire, strengthen, and implement new knowledge in their own behaviors before implementing efficient methods to promote, strengthen, and enforce hand hygiene behaviors at all levels of subordinate personnel. 
Changes in organizational culture are considered to take longer and require significant time and resources; they become visible over longer time frames, and they require continuous monitoring and strengthening. However, they are an efficient way of promoting appropriate hygiene habits in large health care institutions (hospitals, health care networks). We were not expecting that another factor could totally change hospital staff behaviors. Continuation of the study of hygiene practices related to hand washing after the outbreak of the COVID-19 epidemic provided completely new observations of behavior among medical staff. Measurements of the use of liquid soap and hand disinfectant carried out at that time showed a more than 550-fold increase in consumption.\nThe results of this part of the study indicate an accompanying psychosocial component promoting protective behavior during the pandemic. Similar observations were made during the Severe Acute Respiratory Syndrome (SARS) epidemic in 2003, when it was recorded that, as the number of cases of infection increased, care for hand hygiene increased significantly [40]. Perception of threat is a subjective phenomenon, pragmatic rather than based on experience; Leppin and Aro referred to this as a cognitive-emotional phenomenon [41]. Greater levels of anxiety may be associated with significant behavioral changes, e.g. more frequent washing and disinfecting of hands to protect against infection, its complications, and potential death. Recent studies by the National Health Commission of China in Chinese hospitals in February 2020 estimated the transmission of infection between COVID-19 patients and medical personnel to be 3.8%. Guo et al. compared the COVID-19 pandemic with the Middle East Respiratory Syndrome (MERS) epidemic in Saudi Arabia in 2012 and SARS in South Korea in 2015: outbreaks were concentrated in hospitals, and the percentage of infections among healthcare professionals ranged between 33-42%. Unfortunately, the transmission risk factor has not been clearly defined [42]. Current research confirms that widely available detergents, including soap, are a reliable and cheap means of minimizing viral transmission between patients and other people, including medical staff. Other compounds with proven effectiveness in preventing hand-transmitted infections are alcohol-based disinfectants. Current quality requirements for disinfectants indicate that the concentration of alcohol with proven antiviral efficacy in the prevention of COVID-19 is 62-71% [43]. Hand washing still remains the most effective form of prevention of the transmission of infectious respiratory diseases, including COVID-19 [44]. Our study had a certain limitation in that randomization was impossible due to the single-site, all-hospital character of the intervention. Due to the multimodality of the campaign, it is difficult to pinpoint the most effective element of the strategy. However, our results suggest that long-term, planned epidemiological activity promoting proper behaviors and habits in relation to hand hygiene may significantly increase the indices of epidemiological safety in hospital treatment.", "Providing education to healthcare professionals on proper hand hygiene while providing care to patients is the most important element of multimodal interventional strategies aimed at hand hygiene improvement. 
The First Global Patient Safety Challenge “Clean Care is a Safer Care” has achieved its desired effect; however, long-term educational measures are required to maintain and further increase the quality of services and patient safety. The COVID-19 pandemic dramatically increased the accuracy of proper hand hygiene procedures and the consumption of disinfectant agents." ]
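The discussion above cites the WHO guideline of a minimum of 20 liters of hand disinfectant per 1,000 person-days. Below is a minimal sketch of how departments could be benchmarked against that threshold; the department names and figures are hypothetical, not the hospital's data.

```python
# Hypothetical benchmark check against the WHO minimum cited in the
# discussion: 20 L of hand disinfectant per 1,000 person-days.

WHO_MIN_L_PER_1000_PERSON_DAYS = 20.0

departments = {  # toy figures only
    "ICU":       {"liters": 80.0, "person_days": 1500.0},
    "admission": {"liters": 18.0, "person_days": 1200.0},
}

for name, d in departments.items():
    rate = d["liters"] / d["person_days"] * 1000.0
    status = "meets" if rate >= WHO_MIN_L_PER_1000_PERSON_DAYS else "below"
    print(f"{name}: {rate:.1f} L per 1,000 person-days ({status} WHO minimum)")
```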
[ null, null, null, null, null ]
[ "Introduction", "Objective", "Material and methods", "Results", "Discussion", "Conclusions" ]
[ "Every year, millions of patients acquire infections during hospital treatment. The scale of the problem is best illustrated by data indicating that hospital-acquired infections development in nearly 10% of all hospitalized patients while reaching nearly 50% in case of high-risk groups such as ICU patients, patients subjected to prolonged artificial ventilation, or immunosuppressed patients.\nThese serious complications increase the length of hospital stay by about 5-10 days, leading to potential disabilities as well as double the morbidity rates in affected patients [1].\nHospital-acquired infections are also a serious economic problem due to the increased treatment costs. As estimated by Urban et al., treatment of mild infections may increase the individual treatment costs by at least 400 USD while the average costs of the treatment of severe nosocomial infections exceed 30,000 USD per patient. In many cases, nosocomial infections are transmitted by the medical staff due to their failure to comply with aseptic and antiseptic procedures [2].\nStrict compliance with hand hygiene principles is of special importance in reducing the nosocomial infection rates. Hand cleansing and disinfection practices are the most economical and most efficient methods to prevent transmission of microorganisms and reduce hospital-acquired infection rates [3]. Despite the simplicity of this measure, non-compliance with its principles is a global problem in the health care sector [1].\nThe pioneer of hand hygiene as a measure to prevent hospital-acquired infections was Ignaz Philipp Semmelweis (1818-1865) who discovered the correlation between dirty hands of obstetricians and the incidence of postpartum infections and was the first to implement appropriate hand disinfection procedures.\nToday, nearly 200 years later, proper compliance with hand hygiene principles is still a problem for medical professionals [4]. The importance of this simple procedure appears to be underestimated by medical professionals as the World Health Organization estimates that the compliance rates range from 5% to 89% [5].\nIn 2009, WHO published the guidelines for proper hand hygiene in the medical sector. All around the globe, training is organized for medical professionals as part of the First Global Patient Safety Challenge “Clean Care is a Safer Care” [6]. Poland joined the initiative in 2013. Many hospitals in our country decided to implement the WHO guidelines as a measure to reduce the hospital-acquired infection rates.\nThe guidelines were also adopted by the Medicover Hospital in Warsaw as a means to ensure the maximum safety of the therapeutic services provided to our patients. Program implementation was coordinated by the Hospital-Acquired Infections Team which worked to increase the awareness of hospital workers in relation to the importance of hand hygiene as well as to strengthen good behavior models to promote proper hand hygiene.\nThe outbreak of the COVID-19 epidemic provided completely new observations of behavior among medical staff. Measurements of the use of liquid soap and hand disinfectant, carried out at that time showed dramatic increase in use.", "The objective of the study was to assess the usefulness of the educational program titled “Clean Care is a Safer Care” as a tool to enhance the compliance with hand hygiene principles before and during COVID-19 pandemic.", "Medicover Hospital is a large, broad-profile hospital which provides its patients with high quality, comprehensive health care. 
The educational program was implemented there in 2014. Education included special training workshops, repeated worksite training for all staff members, and visual accents to remind the personnel of the hand hygiene principles. The promotional material was prepared in collaboration with Ecolab company. Posters highlighting the importance of hand hygiene and information on the efficiency of this measure were placed in strategic locations and points of care across the institution. Care was taken to ensure the constant availability of alcohol-based disinfectant at points of care at every department. In addition, disinfectant dispensers are placed in special holders at all patient beds. An observation study of hand hygiene behaviors was commenced.\nThe awareness of the role of institutional involvement in the desired change in personnel behaviors led the hospital management and staff administration team to define hand hygiene a priority task for the entire hospital personnel. The study group consisted of all the hospital’s nurses, auxiliary staff, physicians, and other employees of all hospital departments and the outpatient consultation center.\nThe study was based on the tools to implement multi-aspect strategy for hand hygiene as developed by the WHO. The strategy consists of 5 key components [7]:\nsystemic change;\ntraining and education of health care professionals;\nassessment and feedback;\nvisual instructions at worksite;\npromotion of institutional safety.\nA maximum of 100 points may be scored in each of these areas (for a maximum overall score of 500).\nThe package of implementation tools includes the Hand Hygiene Self-Assessment Questionnaire as a validation tool to assess the implementation of the 5 components of the WHO strategy at health care institutions.\nBased on the total number of points obtained after completion of the questionnaire, the institution is assigned with one of the four possible levels of hand hygiene promotion and practice, including insufficient, basic, intermediate, and advanced. Institutions classified into the advanced level provide answers to 20 questions in the leaders’ section to obtain a maximum of 20 points. Twelve points are enough to achieve the status of a leader [7].\nThis study was carried out by means of a diagnostic survey based on the Polish version of the “2010 WHO Self-Assessment Questionnaire” for hand hygiene published in April 2013. The tool facilitates the analysis of current capabilities and knowledge of hand hygiene issues as well as identification of future goals and measures. The compliance with the hand hygiene principles was monitored in a direct as well as an indirect fashion.\nDirect monitoring of hand hygiene consisted in direct observation carried out by a specially trained staff and based on the “Five Moments for Hand Hygiene” WHO guidelines. The guidelines present five indications for hand hygiene procedures: before touching a patient, before clean/aseptic procedures, after body fluid exposure/risk, after touching a patient, and after touching patient surroundings [8].\nAnother WHO implementation tool, i.e. the disinfectant consumption rate, was also used to determine the hand hygiene level at the Medicover Hospital. The rate corresponds to the quantity of hand disinfectant consumed per 1,000 person-days at individual departments or in the entire hospital [9]. 
Based on the form for recording nursing tasks requiring hand hygiene measures, hand hygiene compliance rate was calculated on the basis of average consumption of soap and disinfectant. All these factors were monitored in “non-pandemic” and “pandemic” season.", "In the initial self-assessment survey taken in 2014, the Hospital scored a total of 270/500 points (54%), corresponding to the intermediate compliance level. The poorest scores were obtained in institutional safety and training/education components.\nThe hand hygiene compliance rate calculated on the basis of the consumption of soap and disinfectants by the hospital staff was 49%. While the overall compliance with hand hygiene measures was 50%, it varied in individual personnel groups, ranging from 63% in nursing staff to as little as 9% in the auxiliary personnel. Soap consumption was 630 L per year, corresponding to 31 mL per person/day; the annual consumption of 460 L of disinfectant corresponded to 22 mL per person/day. The average compliance with the “Five moments for hand hygiene” was 60%.\nHand hygiene was measured in general and in individual departments. As shown by the initial audit, the lowest compliance score was observed at the admission room. Subsequent audits suggested a constant improvement in these parameters until final audit. During observation period the ICU became the institution’s leader in this regard.\nAfter initial assessment and identification of institution-specific principles for implementation of the “Clean Care is a Safer Care” educational program, training workshops and worksite training were commenced, and educational materials such as posters and reminders were distributed around the hospital. UV lamps were used to assess the correctness of hand cleansing during the training sessions.\nAfter these broad-scaled educational efforts, the semi-annual audit revealed an increase in hand hygiene compliance rate up to 81% (hospital average, observation day 180) while the final audit carried out after one year of campaigning revealed a compliance rate of 77%.\nThe largest problem was identified in relation to hand hygiene following the contact to patient surroundings, with relevant principles being complied with only in 22% of cases. This aspect was also associated with the highest increase in the compliance rate, with the percentage of desirable behaviors rising to 60%.\nFollowing the exposure to body fluids, the procedures were complied with in 60% at day 0 to reach 95% at the end of the observation period. The highest mean compliance rates were observed before the commencement of aseptic procedures (77% before the program vs. 88% on observation day 365) as well as following the contact with the patient (51% before the program vs 79% on observation day 365).\nAccording to the final assessment survey (took on observation day 365), the final score for the entire institution in the year 2016 was 435/500 points, with hand hygiene compliance rate reaching the final value of 77%. The increase in the compliance to hand hygiene principles varied in different health care professionals. It increased in physicians and nurses, but the most significance increase was observed in the auxiliary staff (from 9 to 63%). This might be due to the awareness in this workers’ group being initially low and increasing significantly as the result of the educational efforts.\nHigh results were achieved in all individual aspects of hand hygiene, with the change in the preferred alcohol-based hand disinfectant being generally accepted. 
Organizational culture-related measures for promoting safer health care require further attention and continued improvement. The analysis carried out at the Medicover hospital during the COVID-19 pandemic compared the use of disinfection fluids in 2019 and the first two months of the epidemic in 2020 an dramatic increase in the use of detergents and disinfectants has been observed. The results of the analysis are collected in the Table I.", "Hospital hand hygiene is one of the cardinal principles for reducing the transmission of pathogens during therapeutic procedures. Appropriate assessment of the compliance with hygienic guidelines and procedures is an important tool for modeling appropriate behaviors of the medical staff. Methods for the monitoring of the compliance with hand hygiene principles at health care institutions include direct observation, measurement of hand hygiene agents consumption, measurements and studies [10] or electronic devices installed at worksites [11]. However, no standardized assessment method has been developed and none of the existing methods can be considered ideal due to their high costs, subjective nature, or staff behaviors being changed while under observation. It was mainly for this reason that out of 36 campaigns held in 36 European countries in years 2000-2012, only 50% could be assessed for efficiency with the WHO guidelines being implemented in only 55% of this latter group [12].\nNowadays, direct monitoring by specially trained personnel is considered to be a gold standard [13-15]. The method provides detailed information on the hand hygiene behaviors in different groups of the medical staff at the crucial moments that determine the risk of patient-personnel-patient transmission of pathogens. As part of the “Clean Care is a Safer Care” program, the WHO developed a special observation form which facilitates monitoring of hand cleansing and disinfection behaviors on the basis of “Five Moments for Hand Hygiene”, i.e. before touching a patient, before clean/aseptic procedures, after body fluid exposure/risk, after touching a patient, and after touching patient surroundings. The Hospital-Acquired Infections Team at the Medicover Hospital used this observation tool in the development of a program to monitor and improve hand hygiene in the personnel involved in patient care [9]. Another WHO implementation tool developed as part of the “Clean Care is a Safer Care” program, i.e. the disinfectant consumption rate, was also used to determine the hand hygiene level at the Medicover Hospital. The rate corresponds to the quantity of hand disinfectant consumed per 1,000 person-day at individual departments or in the entire hospital [9]. According to the WHO guidelines, the minimum consumption of hand disinfectants should be at the level of 20 liters per 1,000 person/day [9]. The advantages as well as disadvantages of this method have been broadly discussed in the literature. The advantages consist in relatively simple data collection as well as a lower number of staff members being involved in the monitoring resulting in lower costs. The disadvantage consists in measurement inaccuracies as e.g. cases of disinfectant spills or cleaning agents being used by the patients are not accounted for. Despite the above, the disinfectant consumption rate is often associated with the hand hygiene behavior rate; according to some authors, it is a priority method which should be considered more important than direct observation [16-19]. 
As the result of the educational program being implemented in the hospital, the compliance with the hand hygiene guidelines has improved significantly. The program was based on worksite training, poster campaign, observational studies, disinfectant consumption monitoring and promotion of hand disinfectant use. Of note is the fact that hygiene consultants used UV lamps to better visualize potential risks during the workshop training. In relation to the low compliance with the hand hygiene principles as defined by the “Five Moments for Hand Hygiene”, particularly in relation to the contact with patient’s surroundings (only 22% of proper behaviors as determined during the initial audit), an assumption was made that the personnel was unaware of the risk of pathogen transmission in cases when no direct contact with the patient has occurred. Thanks to the use of UV lamps, employees could better see that the patient’s surroundings (bed, pajamas, personal items) are a reservoir of pathogens which are easily transmitted onto professionals’ hands even when the patient is absent, and therefore, that appropriate hygiene measures should be taken after touching such items. As shown by this example, it is rational to include various techniques for better visualization of the pathogen transmission stages in educational materials as this may result in higher compliance of hygiene principles by the health care staff [20]. In the course of our study, we observed that hand hygiene behaviors were significantly more common after touching the patient than before touching the patient (60 vs 54% at day 0 and 95 vs 80% at day 365). Whitby et al. highlighted that health care workers are more likely to comply with hygiene principles when self-preservation imperative is involved, i.e. when they perceive their hands to be contaminated and have the potential for transmitting the infection to employees themselves as well as those close to them (i.e. after touching the patient or their biological material) [8]. Therefore, promotion of attitudes stressing the personnel’s responsibility for the patients’ health, including legal liability in case of any claims from patients having acquired nosocomial infection, is very important. According to many authors, nurses are more aware of the problem as compared to physicians [21, 22]. Boscard et al. suggest that the motivation behind hand hygiene compliance in nurses consists in their concern for the safety of themselves as well as their close ones, since nurses are aware of risks associated with the failure to comply with the procedures [23]. Regardless of the monitoring method, appropriate reporting of results, and feedback on observation and disinfectant consumption results should be an important educational as well as motivational component of the hand hygiene program [24]. In our experience gained during implementation of the monitoring program at the Medicover Hospital, direct observation and feedback was the most effective measurement method. Although an improvement was achieved, there is no certainty as to how long the new compliance rates would hold after intervention is discontinued. This would require further observations and development of efficient methods for the strengthening of behavior patterns. Many authors have highlighted that numerous health care professionals have problems with complying to hand hygiene principles. 
The reported causes of such a non-compliance include heavy workload and non-availability of disinfectants as well as insufficient number of protective gloves [25]. These, however, are not the only causes: problems with availability of protective gloves or disinfectants had never been encountered at Medicover Hospital, suggesting a significant importance of the “human factor”, the lack of appropriate procedures, education, and hygiene monitoring. According to WHO recommendations, hand cleansing or disinfection should be performed at the point of care, i.e. at a place where the three elements: the patient, the health care professional, and a medical procedure, exist concurrently. Hand cleansing and disinfection agents should be readily available at points of care so that health care professionals do not have to leave the patient zone [10]. On the basis of these recommendations, hand disinfectant holders were placed on patient beds in the Medicover Hospital. Studies confirmed that disinfecting hands using an alcohol-based rub is more efficient than washing while being easy to implement and inexpensive. According to calculations, prevention of 8 cases of hospital-acquired pneumonia would compensate the annual cost of hand disinfection agents [26]. As shown by such calculations, implementation of hand hygiene programs leads to measurable therapeutic and economic benefits [27]. Most employees of the Medicover Hospital changed their behavioral habits and the compliance rate has increased significantly as the result of increased use of hand disinfectants from 49% at the initial audit to 81% at day 180 and 77% at day 365. A significant increase in the tested parameters was observed in the semiannual audit, followed by a slight decrease in the final audit. Our results correspond to those obtained in other studies as medical personnel appears to be somewhat bored with the hand hygiene campaigns. It might show that the best method for the improvement in hand hygiene consists in placing hand cleansing instructions at the key points of care [9]. According to Pittet et al. [28], best results are achieved when education is combined with hand hygiene reminders such as posters, brochures, memes, etc. Although posters are considered to be somewhat outdated as an educational tool, they may provide useful reminder when placed at the point of care. The importance of posters developed using the social marketing concept was demonstrated by Forrester in Canada [29] as well as by successful national campaigns in Australia and Europe. Implementation of current graphical trends and aggressive content increases the impact of this type of media [30, 31]. Introduction of hand hygiene programs for medical personnel at health care institutions is of high importance in the prevention of infections. As shown by Pittet et al., implementation of these recommendations in the University Hospital In Geneva reduced the nosocomial infection rate from 16.9 to 9.9% [32, 33]. Rosenthal et al. showed that the improvement in hand safety contributed to a significant reduction in the rate of nosocomial infections (from 47.55 to 27.93 per 1,000 patient/day) [34]. Multi-aspect campaigns unambiguously increase the hand hygiene compliance rates. During a 2-year campaign carried out by the WHO in 8 regions of the world (2004-2006), the mean compliance rate increased from 39.6 to 56.9%; notably, the impact of the intervention was higher in the developing countries where the access to knowledge and protection measures was more limited [35]. 
However, it must be noted that compliance with high hygienic standards is also associated with the institutional structure of the health care system, which differs between countries. Epidemiological studies have revealed a correlation between infection rates and a low number of personnel relative to a high number of patients. When the number of nurses or physicians is too low, hand hygiene behaviors are reduced or even absent between the care of consecutive patients [36, 37]. Without appropriate employment rates, it would be very difficult to expect an improvement in the quality of services and the maintenance of high hygienic standards. Considering the dwindling number of applicants for health care jobs in Poland, the aging of both society and health care professionals, and economically driven emigration, maintaining high treatment standards may become increasingly difficult in the near future. Notably, the degree of hand hygiene program implementation is associated with the quality of epidemiological process monitoring. As studies have shown, an appropriate number of epidemiologists is required in relation to the number of hospital beds [7]. According to the common consensus, the appropriate ratio is 1:250; however, as demonstrated in a Delphi study [38], a ratio of 0.8-1/100 might be required for an improvement to be observed. In Poland, the ratio has been statutorily established at 1:200; many hospitals, however, fail to comply with this requirement. In our study, the organizational culture and safety of health care was shown to be the weakest link of the Multi-aspect Strategy for Hand Hygiene (55 points). This result is not satisfactory, since support from health care administration and local governments is crucial if hand hygiene is to be considered a priority for patient safety and if educational programs are to be continued [39]. According to the self-assessment survey administered at the Medicover Hospital, the advancement of the hand hygiene program was “intermediate” at the beginning and closer to “advanced” at the end, showing that hygiene practices improved significantly. The measures are being continued to date. Compliance with hand hygiene principles at the Medicover Hospital in the “non-pandemic period” was relatively good; it was higher than at numerous sites in developed countries assessed in a similar manner. We expect that further improvements are still possible, particularly by increasing institutional safety, continuing the educational measures, and optimizing the epidemiological personnel employment ratios. We estimate that particular attention must be paid to measures related to the organizational culture promoting health care safety. This element is particularly difficult to change, as it is related to the culture of work modeled by the management staff, who must acquire, strengthen, and implement new knowledge in their own behaviors before implementing efficient methods to promote, strengthen, and enforce hand hygiene behaviors at all levels of subordinate personnel. Changes in organizational culture take longer and require significant time and resources; they become visible over longer time frames, and they require continuous monitoring and reinforcement. However, they are an efficient way of promoting appropriate hygiene habits in large health care institutions (hospitals, health care networks). We did not expect that another factor could so completely change hospital staff behaviors. 
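The staffing ratios discussed above translate directly into minimum head counts. The sketch below applies the three cited ratios (1:250 consensus, 1:200 Polish statute, roughly 1:100 from the Delphi study) to a hypothetical 400-bed hospital; the bed count is an invented example, not a figure from the study:

```python
import math

def required_epidemiologists(beds: int, epidemiologists_per_bed: float) -> int:
    """Minimum whole-person staffing to satisfy a given epidemiologist-to-bed ratio."""
    return math.ceil(beds * epidemiologists_per_bed)

beds = 400  # hypothetical hospital size
for label, ratio in [("consensus 1:250", 1 / 250),
                     ("Polish statute 1:200", 1 / 200),
                     ("Delphi estimate ~1:100", 1 / 100)]:
    print(f"{label}: {required_epidemiologists(beds, ratio)} epidemiologists")
```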
Continuation of the study of hand washing practices after the outbreak of the COVID-19 epidemic provided completely new observations of behavior among medical staff. Measurements of the use of liquid soap and hand disinfectant carried out at that time showed a more than 550-fold increase in consumption. The results of this part of the study indicate an accompanying psychosocial factor promoting protective behavior during the pandemic. Similar observations were made during the Severe Acute Respiratory Syndrome (SARS) epidemic in 2003, when it was recorded that care for hand hygiene increased significantly as the number of infection cases rose [40]. Perception of threat is a subjective phenomenon, pragmatic rather than based purely on experience; Leppin and Aro referred to it as a cognitive-emotional phenomenon [41]. Greater levels of anxiety may be associated with significant behavioral changes, e.g. more frequent washing and disinfecting of hands to protect against infection, its complications, and potential death. Recent studies by the National Health Commission of China in Chinese hospitals in February 2020 estimated the transmission of infection between COVID-19 patients and medical personnel at 3.8%. Guo et al. compared the COVID-19 pandemic with the Middle East Respiratory Syndrome (MERS) epidemic in Saudi Arabia in 2012 and the MERS outbreak in South Korea in 2015. Outbreaks were concentrated in hospitals, and the percentage of infections among healthcare professionals ranged between 33 and 42%. Unfortunately, the transmission risk factor has not been clearly defined [42]. Current research confirms that widely available detergents, including soap, are a reliable and cheap means of minimizing viral transmission between patients and other people, including medical staff. Other compounds with proven effectiveness in preventing hand-transmitted infections are alcohol-based disinfectants. Current quality requirements for disinfectants indicate that the alcohol concentration with proven antiviral efficacy in the prevention of COVID-19 is 62-71% [43]. Hand washing still remains the most effective form of preventing the transmission of infectious respiratory diseases, including COVID-19 [44]. A limitation of our study was that randomization was impossible due to the single-site, whole-hospital character of the intervention. Due to the multimodality of the campaign, it is difficult to pinpoint the most effective element of the strategy. However, our results suggest that long-term, planned epidemiological activity promoting proper behaviors and habits in relation to hand hygiene may significantly increase the indices of epidemiological safety in hospital treatment.", "Educating healthcare professionals on proper hand hygiene while providing patient care is the most important element of multimodal interventional strategies aimed at hand hygiene improvement. The First Global Patient Safety Challenge “Clean Care is a Safer Care” has achieved its desired effect; however, long-term educational measures are required to maintain and further increase the quality of services and patient safety. The COVID-19 pandemic dramatically increased the accuracy of hand hygiene procedures and the consumption of disinfectant agents." ]
[ null, null, "methods", null, null, null ]
[ "Hand hygiene", "Educational program", "Efficiency", "Disinfection", "COVID-19" ]
Introduction: Every year, millions of patients acquire infections during hospital treatment. The scale of the problem is best illustrated by data indicating that hospital-acquired infections develop in nearly 10% of all hospitalized patients, reaching nearly 50% in high-risk groups such as ICU patients, patients subjected to prolonged artificial ventilation, or immunosuppressed patients. These serious complications increase the length of hospital stay by about 5-10 days, leading to potential disabilities and doubling morbidity rates in affected patients [1]. Hospital-acquired infections are also a serious economic problem due to increased treatment costs. As estimated by Urban et al., treatment of mild infections may increase individual treatment costs by at least 400 USD, while the average cost of treating severe nosocomial infections exceeds 30,000 USD per patient. In many cases, nosocomial infections are transmitted by the medical staff due to their failure to comply with aseptic and antiseptic procedures [2]. Strict compliance with hand hygiene principles is of special importance in reducing nosocomial infection rates. Hand cleansing and disinfection practices are the most economical and most efficient methods to prevent transmission of microorganisms and reduce hospital-acquired infection rates [3]. Despite the simplicity of this measure, non-compliance with its principles is a global problem in the health care sector [1]. The pioneer of hand hygiene as a measure to prevent hospital-acquired infections was Ignaz Philipp Semmelweis (1818-1865), who discovered the correlation between the dirty hands of obstetricians and the incidence of postpartum infections and was the first to implement appropriate hand disinfection procedures. Today, nearly 200 years later, proper compliance with hand hygiene principles is still a problem for medical professionals [4]. The importance of this simple procedure appears to be underestimated by medical professionals, as the World Health Organization estimates that compliance rates range from 5% to 89% [5]. In 2009, the WHO published guidelines for proper hand hygiene in the medical sector. All around the globe, training is organized for medical professionals as part of the First Global Patient Safety Challenge “Clean Care is a Safer Care” [6]. Poland joined the initiative in 2013. Many hospitals in our country decided to implement the WHO guidelines as a measure to reduce hospital-acquired infection rates. The guidelines were also adopted by the Medicover Hospital in Warsaw as a means to ensure the maximum safety of the therapeutic services provided to our patients. Program implementation was coordinated by the Hospital-Acquired Infections Team, which worked to increase the awareness of hospital workers of the importance of hand hygiene and to strengthen good behavior models promoting proper hand hygiene. The outbreak of the COVID-19 epidemic provided completely new observations of behavior among medical staff. Measurements of liquid soap and hand disinfectant use carried out at that time showed a dramatic increase in consumption. Objective: The objective of the study was to assess the usefulness of the educational program titled “Clean Care is a Safer Care” as a tool to enhance compliance with hand hygiene principles before and during the COVID-19 pandemic. 
Material and methods: Medicover Hospital is a large, broad-profile hospital which provides its patients with high-quality, comprehensive health care. The educational program was implemented there in 2014. Education included special training workshops, repeated worksite training for all staff members, and visual accents to remind the personnel of the hand hygiene principles. The promotional material was prepared in collaboration with the Ecolab company. Posters highlighting the importance of hand hygiene and information on the efficiency of this measure were placed in strategic locations and at points of care across the institution. Care was taken to ensure the constant availability of alcohol-based disinfectant at points of care in every department. In addition, disinfectant dispensers were placed in special holders at all patient beds. An observational study of hand hygiene behaviors was commenced. Awareness of the role of institutional involvement in the desired change in personnel behaviors led the hospital management and staff administration team to define hand hygiene as a priority task for the entire hospital personnel. The study group consisted of all the hospital’s nurses, auxiliary staff, physicians, and other employees of all hospital departments and the outpatient consultation center. The study was based on the tools for implementing the multi-aspect strategy for hand hygiene developed by the WHO. The strategy consists of 5 key components [7]: systemic change; training and education of health care professionals; assessment and feedback; visual instructions at the worksite; and promotion of institutional safety. A maximum of 100 points may be scored in each of these areas (for a maximum overall score of 500). The package of implementation tools includes the Hand Hygiene Self-Assessment Questionnaire as a validation tool to assess the implementation of the 5 components of the WHO strategy at health care institutions. Based on the total number of points obtained after completion of the questionnaire, the institution is assigned one of four possible levels of hand hygiene promotion and practice: insufficient, basic, intermediate, or advanced. Institutions classified into the advanced level answer 20 additional questions in the leaders’ section to obtain a maximum of 20 points; twelve points are enough to achieve the status of a leader [7]. This study was carried out by means of a diagnostic survey based on the Polish version of the “2010 WHO Self-Assessment Questionnaire” for hand hygiene published in April 2013. The tool facilitates the analysis of current capabilities and knowledge of hand hygiene issues as well as the identification of future goals and measures. Compliance with the hand hygiene principles was monitored both directly and indirectly. Direct monitoring of hand hygiene consisted in direct observation carried out by specially trained staff and based on the “Five Moments for Hand Hygiene” WHO guidelines. The guidelines present five indications for hand hygiene procedures: before touching a patient, before clean/aseptic procedures, after body fluid exposure/risk, after touching a patient, and after touching patient surroundings [8]. Another WHO implementation tool, i.e. the disinfectant consumption rate, was also used to determine the hand hygiene level at the Medicover Hospital. The rate corresponds to the quantity of hand disinfectant consumed per 1,000 person-days at individual departments or in the entire hospital [9]. 
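The consumption-based indicator described above is a simple ratio: disinfectant volume used, scaled to 1,000 person-days. A minimal sketch follows; the function name and the monthly figures are illustrative assumptions, not data from the hospital (the 20 L minimum is the WHO threshold cited later in the discussion):

```python
def consumption_per_1000_person_days(volume_litres: float, person_days: float) -> float:
    """WHO indicator: litres of hand disinfectant consumed per 1,000 person-days."""
    return volume_litres / person_days * 1000.0

# Illustrative monthly figures for one department (not study data):
rate = consumption_per_1000_person_days(volume_litres=18.0, person_days=850.0)
print(f"{rate:.1f} L per 1,000 person-days")  # 21.2, above the WHO minimum of 20 L
```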
Based on the form for recording nursing tasks requiring hand hygiene measures, the hand hygiene compliance rate was calculated on the basis of the average consumption of soap and disinfectant. All these factors were monitored in the “non-pandemic” and “pandemic” seasons. Results: In the initial self-assessment survey taken in 2014, the Hospital scored a total of 270/500 points (54%), corresponding to the intermediate compliance level. The poorest scores were obtained in the institutional safety and training/education components. The hand hygiene compliance rate calculated on the basis of the consumption of soap and disinfectants by the hospital staff was 49%. While the overall compliance with hand hygiene measures was 50%, it varied between personnel groups, ranging from 63% in the nursing staff to as little as 9% in the auxiliary personnel. Soap consumption was 630 L per year, corresponding to 31 mL per person per day; the annual consumption of 460 L of disinfectant corresponded to 22 mL per person per day. The average compliance with the “Five Moments for Hand Hygiene” was 60%. Hand hygiene was measured overall and in individual departments. As shown by the initial audit, the lowest compliance score was observed in the admission room. Subsequent audits suggested a constant improvement in these parameters up to the final audit. During the observation period, the ICU became the institution’s leader in this regard. After the initial assessment and identification of institution-specific principles for implementation of the “Clean Care is a Safer Care” educational program, training workshops and worksite training were commenced, and educational materials such as posters and reminders were distributed around the hospital. UV lamps were used to assess the correctness of hand cleansing during the training sessions. After these broad-scaled educational efforts, the semi-annual audit revealed an increase in the hand hygiene compliance rate to 81% (hospital average, observation day 180), while the final audit carried out after one year of campaigning revealed a compliance rate of 77%. The largest problem was identified in relation to hand hygiene following contact with patient surroundings, with the relevant principles being complied with in only 22% of cases. This aspect was also associated with the highest increase in the compliance rate, with the percentage of desirable behaviors rising to 60%. Following exposure to body fluids, the procedures were complied with in 60% of cases at day 0, reaching 95% by the end of the observation period. The highest mean compliance rates were observed before the commencement of aseptic procedures (77% before the program vs 88% on observation day 365) as well as following contact with the patient (51% before the program vs 79% on observation day 365). According to the final assessment survey (taken on observation day 365), the final score for the entire institution in the year 2016 was 435/500 points, with the hand hygiene compliance rate reaching a final value of 77%. The increase in compliance with hand hygiene principles varied between health care professions. It increased in physicians and nurses, but the most significant increase was observed in the auxiliary staff (from 9 to 63%). This might be because awareness in this group of workers was initially low and increased significantly as a result of the educational efforts. High results were achieved in all individual aspects of hand hygiene, with the change to the preferred alcohol-based hand disinfectant being generally accepted. 
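For the direct-observation arm, the compliance rate is the share of observed hand hygiene opportunities in which the correct action was performed. The sketch below illustrates that ratio and the percentage-point change between audits; the opportunity counts are invented (the study does not report denominators) and were chosen only to reproduce the 22%-to-60% change reported for contact with patient surroundings:

```python
def compliance_rate(actions_performed: int, opportunities: int) -> float:
    """Fraction of observed hand hygiene opportunities met with the correct action."""
    return actions_performed / opportunities

# Invented tallies, not audit data:
baseline = compliance_rate(33, 150)    # initial audit, ~22%
follow_up = compliance_rate(90, 150)   # day-365 audit, ~60%

print(f"baseline {baseline:.0%} -> follow-up {follow_up:.0%} "
      f"(+{100 * (follow_up - baseline):.0f} percentage points)")
```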
Organizational culture-related measures for promoting safer health care require further attention and continued improvement. The analysis carried out at the Medicover Hospital during the COVID-19 pandemic compared the use of disinfection fluids in 2019 and in the first two months of the epidemic in 2020; a dramatic increase in the use of detergents and disinfectants was observed. The results of the analysis are collected in Table I. Discussion: Hospital hand hygiene is one of the cardinal principles for reducing the transmission of pathogens during therapeutic procedures. Appropriate assessment of compliance with hygienic guidelines and procedures is an important tool for modeling appropriate behaviors of the medical staff. Methods for monitoring compliance with hand hygiene principles at health care institutions include direct observation, measurement of hand hygiene agent consumption, surveys and studies [10], and electronic devices installed at worksites [11]. However, no standardized assessment method has been developed, and none of the existing methods can be considered ideal due to their high costs, subjective nature, or staff behaviors changing while under observation. It was mainly for this reason that, out of 36 campaigns held in 36 European countries in the years 2000-2012, only 50% could be assessed for efficiency, with the WHO guidelines being implemented in only 55% of this latter group [12]. Nowadays, direct monitoring by specially trained personnel is considered the gold standard [13-15]. The method provides detailed information on hand hygiene behaviors in different groups of the medical staff at the crucial moments that determine the risk of patient-personnel-patient transmission of pathogens. As part of the “Clean Care is a Safer Care” program, the WHO developed a special observation form which facilitates monitoring of hand cleansing and disinfection behaviors on the basis of the “Five Moments for Hand Hygiene”, i.e. before touching a patient, before clean/aseptic procedures, after body fluid exposure/risk, after touching a patient, and after touching patient surroundings. The Hospital-Acquired Infections Team at the Medicover Hospital used this observation tool in the development of a program to monitor and improve hand hygiene in the personnel involved in patient care [9]. Another WHO implementation tool developed as part of the “Clean Care is a Safer Care” program, i.e. the disinfectant consumption rate, was also used to determine the hand hygiene level at the Medicover Hospital. The rate corresponds to the quantity of hand disinfectant consumed per 1,000 person-days at individual departments or in the entire hospital [9]. According to the WHO guidelines, the minimum consumption of hand disinfectants should be 20 liters per 1,000 person-days [9]. The advantages as well as the disadvantages of this method have been broadly discussed in the literature. The advantages consist in relatively simple data collection and a lower number of staff members being involved in the monitoring, resulting in lower costs. The disadvantage consists in measurement inaccuracies, as, for example, disinfectant spills or cleaning agents used by patients are not accounted for. Despite the above, the disinfectant consumption rate is often associated with the hand hygiene behavior rate; according to some authors, it is a priority method which should be considered more important than direct observation [16-19]. 
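The 2019-versus-2020 comparison summarized in Table I reduces to a per-agent fold change. A minimal sketch with placeholder volumes (Table I itself is not reproduced here); the numbers are invented and chosen only to illustrate the order of magnitude of the more than 550-fold increase reported later in the discussion:

```python
def fold_change(before: float, after: float) -> float:
    """How many times larger the later consumption is than the earlier one."""
    return after / before

# Placeholder monthly volumes in litres, not the actual Table I values:
usage = {"liquid soap": (40.0, 1_200.0), "hand disinfectant": (50.0, 28_000.0)}

for agent, (v_2019, v_2020) in usage.items():
    print(f"{agent}: {fold_change(v_2019, v_2020):.0f}-fold increase")
```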
As a result of the educational program implemented in the hospital, compliance with the hand hygiene guidelines improved significantly. The program was based on worksite training, a poster campaign, observational studies, monitoring of disinfectant consumption, and promotion of hand disinfectant use. Of note, hygiene consultants used UV lamps during the workshop training to better visualize potential risks. Given the low compliance with the hand hygiene principles defined by the “Five Moments for Hand Hygiene”, particularly in relation to contact with the patient’s surroundings (only 22% of proper behaviors as determined during the initial audit), we assumed that the personnel were unaware of the risk of pathogen transmission in cases where no direct contact with the patient had occurred. Thanks to the UV lamps, employees could see that the patient’s surroundings (bed, pajamas, personal items) are a reservoir of pathogens easily transmitted onto professionals’ hands even when the patient is absent, and therefore that appropriate hygiene measures should be taken after touching such items. As this example shows, it is rational to include techniques that visualize the stages of pathogen transmission in educational materials, as this may result in higher compliance with hygiene principles among health care staff [20]. In the course of our study, we observed that hand hygiene behaviors were significantly more common after touching the patient than before touching the patient (60 vs 54% at day 0 and 95 vs 80% at day 365). Whitby et al. highlighted that health care workers are more likely to comply with hygiene principles when a self-preservation imperative is involved, i.e. when they perceive their hands to be contaminated and capable of transmitting infection to themselves or those close to them (i.e. after touching the patient or their biological material) [8]. Therefore, it is very important to promote attitudes stressing the personnel’s responsibility for patients’ health, including legal liability in case of claims from patients who have acquired a nosocomial infection. According to many authors, nurses are more aware of the problem than physicians [21, 22]. Boscard et al. suggest that the motivation behind hand hygiene compliance in nurses stems from concern for their own safety and that of those close to them, since nurses are aware of the risks associated with failure to comply with the procedures [23]. Regardless of the monitoring method, appropriate reporting of results and feedback on observation and disinfectant consumption should be an important educational and motivational component of the hand hygiene program [24]. In our experience from implementing the monitoring program at the Medicover Hospital, direct observation with feedback was the most effective measurement method. Although an improvement was achieved, there is no certainty as to how long the new compliance rates would hold after the intervention is discontinued; this would require further observation and the development of efficient methods for reinforcing behavior patterns. Many authors have highlighted that numerous health care professionals have problems complying with hand hygiene principles. 
The reported causes of such non-compliance include heavy workload, non-availability of disinfectants, and an insufficient number of protective gloves [25]. These, however, are not the only causes: problems with the availability of protective gloves or disinfectants had never been encountered at the Medicover Hospital, suggesting the significant importance of the “human factor”, i.e. the lack of appropriate procedures, education, and hygiene monitoring. According to WHO recommendations, hand cleansing or disinfection should be performed at the point of care, i.e. at a place where three elements coincide: the patient, the health care professional, and a medical procedure. Hand cleansing and disinfection agents should be readily available at points of care so that health care professionals do not have to leave the patient zone [10]. On the basis of these recommendations, hand disinfectant holders were placed on patient beds in the Medicover Hospital. Studies have confirmed that disinfecting hands with an alcohol-based rub is more efficient than washing while being easy to implement and inexpensive. According to published calculations, prevention of 8 cases of hospital-acquired pneumonia would offset the annual cost of hand disinfection agents [26]. As such calculations show, implementation of hand hygiene programs leads to measurable therapeutic and economic benefits [27]. Most employees of the Medicover Hospital changed their behavioral habits, and the compliance rate, reflected in the increased use of hand disinfectants, rose significantly from 49% at the initial audit to 81% at day 180 and 77% at day 365. A significant increase in the tested parameters was observed in the semiannual audit, followed by a slight decrease in the final audit. Our results correspond to those obtained in other studies: medical personnel appear to tire of hand hygiene campaigns over time. This may indicate that the best method for improving hand hygiene is placing hand cleansing instructions at the key points of care [9]. According to Pittet et al. [28], the best results are achieved when education is combined with hand hygiene reminders such as posters, brochures, and memes. Although posters are considered a somewhat outdated educational tool, they may provide a useful reminder when placed at the point of care. The importance of posters developed using the social marketing concept was demonstrated by Forrester in Canada [29] as well as by successful national campaigns in Australia and Europe. Implementation of current graphical trends and aggressive content increases the impact of this type of media [30, 31]. Introduction of hand hygiene programs for medical personnel at health care institutions is of high importance in the prevention of infections. As shown by Pittet et al., implementation of these recommendations at the University Hospital in Geneva reduced the nosocomial infection rate from 16.9 to 9.9% [32, 33]. Rosenthal et al. showed that the improvement in hand safety contributed to a significant reduction in the rate of nosocomial infections (from 47.55 to 27.93 per 1,000 patient-days) [34]. Multi-aspect campaigns unambiguously increase hand hygiene compliance rates. During a 2-year campaign carried out by the WHO in 8 regions of the world (2004-2006), the mean compliance rate increased from 39.6 to 56.9%; notably, the impact of the intervention was higher in developing countries where access to knowledge and protection measures was more limited [35]. 
However, it must be noted that compliance with high hygienic standards is also associated with the institutional structure of the health care system, which differs between countries. Epidemiological studies have revealed a correlation between infection rates and a low number of personnel relative to a high number of patients. When the number of nurses or physicians is too low, hand hygiene behaviors are reduced or even absent between the care of consecutive patients [36, 37]. Without appropriate employment rates, it would be very difficult to expect an improvement in the quality of services and the maintenance of high hygienic standards. Considering the dwindling number of applicants for health care jobs in Poland, the aging of both society and health care professionals, and economically driven emigration, maintaining high treatment standards may become increasingly difficult in the near future. Notably, the degree of hand hygiene program implementation is associated with the quality of epidemiological process monitoring. As studies have shown, an appropriate number of epidemiologists is required in relation to the number of hospital beds [7]. According to the common consensus, the appropriate ratio is 1:250; however, as demonstrated in a Delphi study [38], a ratio of 0.8-1/100 might be required for an improvement to be observed. In Poland, the ratio has been statutorily established at 1:200; many hospitals, however, fail to comply with this requirement. In our study, the organizational culture and safety of health care was shown to be the weakest link of the Multi-aspect Strategy for Hand Hygiene (55 points). This result is not satisfactory, since support from health care administration and local governments is crucial if hand hygiene is to be considered a priority for patient safety and if educational programs are to be continued [39]. According to the self-assessment survey administered at the Medicover Hospital, the advancement of the hand hygiene program was “intermediate” at the beginning and closer to “advanced” at the end, showing that hygiene practices improved significantly. The measures are being continued to date. Compliance with hand hygiene principles at the Medicover Hospital in the “non-pandemic period” was relatively good; it was higher than at numerous sites in developed countries assessed in a similar manner. We expect that further improvements are still possible, particularly by increasing institutional safety, continuing the educational measures, and optimizing the epidemiological personnel employment ratios. We estimate that particular attention must be paid to measures related to the organizational culture promoting health care safety. This element is particularly difficult to change, as it is related to the culture of work modeled by the management staff, who must acquire, strengthen, and implement new knowledge in their own behaviors before implementing efficient methods to promote, strengthen, and enforce hand hygiene behaviors at all levels of subordinate personnel. Changes in organizational culture take longer and require significant time and resources; they become visible over longer time frames, and they require continuous monitoring and reinforcement. However, they are an efficient way of promoting appropriate hygiene habits in large health care institutions (hospitals, health care networks). We did not expect that another factor could so completely change hospital staff behaviors. 
Continuation of the study of hand washing practices after the outbreak of the COVID-19 epidemic provided completely new observations of behavior among medical staff. Measurements of the use of liquid soap and hand disinfectant carried out at that time showed a more than 550-fold increase in consumption. The results of this part of the study indicate an accompanying psychosocial factor promoting protective behavior during the pandemic. Similar observations were made during the Severe Acute Respiratory Syndrome (SARS) epidemic in 2003, when it was recorded that care for hand hygiene increased significantly as the number of infection cases rose [40]. Perception of threat is a subjective phenomenon, pragmatic rather than based purely on experience; Leppin and Aro referred to it as a cognitive-emotional phenomenon [41]. Greater levels of anxiety may be associated with significant behavioral changes, e.g. more frequent washing and disinfecting of hands to protect against infection, its complications, and potential death. Recent studies by the National Health Commission of China in Chinese hospitals in February 2020 estimated the transmission of infection between COVID-19 patients and medical personnel at 3.8%. Guo et al. compared the COVID-19 pandemic with the Middle East Respiratory Syndrome (MERS) epidemic in Saudi Arabia in 2012 and the MERS outbreak in South Korea in 2015. Outbreaks were concentrated in hospitals, and the percentage of infections among healthcare professionals ranged between 33 and 42%. Unfortunately, the transmission risk factor has not been clearly defined [42]. Current research confirms that widely available detergents, including soap, are a reliable and cheap means of minimizing viral transmission between patients and other people, including medical staff. Other compounds with proven effectiveness in preventing hand-transmitted infections are alcohol-based disinfectants. Current quality requirements for disinfectants indicate that the alcohol concentration with proven antiviral efficacy in the prevention of COVID-19 is 62-71% [43]. Hand washing still remains the most effective form of preventing the transmission of infectious respiratory diseases, including COVID-19 [44]. A limitation of our study was that randomization was impossible due to the single-site, whole-hospital character of the intervention. Due to the multimodality of the campaign, it is difficult to pinpoint the most effective element of the strategy. However, our results suggest that long-term, planned epidemiological activity promoting proper behaviors and habits in relation to hand hygiene may significantly increase the indices of epidemiological safety in hospital treatment. Conclusions: Educating healthcare professionals on proper hand hygiene while providing patient care is the most important element of multimodal interventional strategies aimed at hand hygiene improvement. The First Global Patient Safety Challenge “Clean Care is a Safer Care” has achieved its desired effect; however, long-term educational measures are required to maintain and further increase the quality of services and patient safety. The COVID-19 pandemic dramatically increased the accuracy of hand hygiene procedures and the consumption of disinfectant agents.
Background: Hand cleansing and disinfection is the most efficient method for reducing the rates of hospital-acquired infections, which are a serious medical and economic problem. Striving to ensure the maximum safety of the therapeutic process, we decided to promote hand hygiene by implementing the educational program titled "Clean Care is a Safer Care". The occurrence of the COVID-19 pandemic affected compliance with procedures related to the sanitary regime, including the frequency and accuracy of hand decontamination by medical personnel. Methods: We monitored compliance with the hygiene procedure before implementation of the program as well as during the hand hygiene campaign by means of direct observation as well as disinfectant consumption rates. Results: In the initial self-assessment survey, the hospital had scored 270/500 points (54%). The preliminary audit revealed a hygiene compliance rate at the level of 49%. After broad-scaled educational efforts, the semi-annual audit revealed an increase in the hand hygiene compliance rate to 81% (hospital average), while the final audit carried out after one year of campaigning revealed a compliance rate of 77%. The final score for the hospital increased to 435/500 points. Conclusions: The COVID-19 pandemic dramatically increased the accuracy of proper hand hygiene procedures and the consumption of disinfectant agents. The educational program succeeded in reaching its goal; however, long-term educational efforts are required to maintain and improve the quality of the services provided.
Introduction: Every year, millions of patients acquire infections during hospital treatment. The scale of the problem is best illustrated by data indicating that hospital-acquired infections develop in nearly 10% of all hospitalized patients, reaching nearly 50% in high-risk groups such as ICU patients, patients subjected to prolonged artificial ventilation, or immunosuppressed patients. These serious complications increase the length of hospital stay by about 5-10 days, leading to potential disabilities and doubling morbidity rates in affected patients [1]. Hospital-acquired infections are also a serious economic problem due to increased treatment costs. As estimated by Urban et al., treatment of mild infections may increase individual treatment costs by at least 400 USD, while the average cost of treating severe nosocomial infections exceeds 30,000 USD per patient. In many cases, nosocomial infections are transmitted by the medical staff due to their failure to comply with aseptic and antiseptic procedures [2]. Strict compliance with hand hygiene principles is of special importance in reducing nosocomial infection rates. Hand cleansing and disinfection practices are the most economical and most efficient methods to prevent transmission of microorganisms and reduce hospital-acquired infection rates [3]. Despite the simplicity of this measure, non-compliance with its principles is a global problem in the health care sector [1]. The pioneer of hand hygiene as a measure to prevent hospital-acquired infections was Ignaz Philipp Semmelweis (1818-1865), who discovered the correlation between the dirty hands of obstetricians and the incidence of postpartum infections and was the first to implement appropriate hand disinfection procedures. Today, nearly 200 years later, proper compliance with hand hygiene principles is still a problem for medical professionals [4]. The importance of this simple procedure appears to be underestimated by medical professionals, as the World Health Organization estimates that compliance rates range from 5% to 89% [5]. In 2009, the WHO published guidelines for proper hand hygiene in the medical sector. All around the globe, training is organized for medical professionals as part of the First Global Patient Safety Challenge “Clean Care is a Safer Care” [6]. Poland joined the initiative in 2013. Many hospitals in our country decided to implement the WHO guidelines as a measure to reduce hospital-acquired infection rates. The guidelines were also adopted by the Medicover Hospital in Warsaw as a means to ensure the maximum safety of the therapeutic services provided to our patients. Program implementation was coordinated by the Hospital-Acquired Infections Team, which worked to increase the awareness of hospital workers of the importance of hand hygiene and to strengthen good behavior models promoting proper hand hygiene. The outbreak of the COVID-19 epidemic provided completely new observations of behavior among medical staff. Measurements of liquid soap and hand disinfectant use carried out at that time showed a dramatic increase in consumption. Conclusions: Analysis of hand sanitizer use during the SARS-CoV-2 coronavirus epidemic compared with the year 2019.
Background: Hand cleansing and disinfection is the most efficient method for reducing the rates of hospital-acquired infections, which are a serious medical and economic problem. Striving to ensure the maximum safety of the therapeutic process, we decided to promote hand hygiene by implementing the educational program titled "Clean Care is a Safer Care". The occurrence of the COVID-19 pandemic affected compliance with procedures related to the sanitary regime, including the frequency and accuracy of hand decontamination by medical personnel. Methods: We monitored compliance with the hygiene procedure before implementation of the program as well as during the hand hygiene campaign by means of direct observation as well as disinfectant consumption rates. Results: In the initial self-assessment survey, the hospital had scored 270/500 points (54%). The preliminary audit revealed a hygiene compliance rate at the level of 49%. After broad-scaled educational efforts, the semi-annual audit revealed an increase in the hand hygiene compliance rate to 81% (hospital average), while the final audit carried out after one year of campaigning revealed a compliance rate of 77%. The final score for the hospital increased to 435/500 points. Conclusions: The COVID-19 pandemic dramatically increased the accuracy of proper hand hygiene procedures and the consumption of disinfectant agents. The educational program succeeded in reaching its goal; however, long-term educational efforts are required to maintain and improve the quality of the services provided.
4,883
273
[ 556, 40, 680, 2826, 90 ]
6
[ "hand", "hygiene", "hand hygiene", "care", "hospital", "compliance", "patient", "health", "health care", "disinfectant" ]
[ "rate nosocomial infections", "acquire infections hospital", "protect infection complications", "hospitals percentage infections", "reducing nosocomial infection" ]
null
[CONTENT] Hand hygiene | Educational program | Efficiency | Disinfection | COVID-19 [SUMMARY]
[CONTENT] Hand hygiene | Educational program | Efficiency | Disinfection | COVID-19 [SUMMARY]
null
[CONTENT] Hand hygiene | Educational program | Efficiency | Disinfection | COVID-19 [SUMMARY]
[CONTENT] Hand hygiene | Educational program | Efficiency | Disinfection | COVID-19 [SUMMARY]
[CONTENT] Hand hygiene | Educational program | Efficiency | Disinfection | COVID-19 [SUMMARY]
[CONTENT] COVID-19 | Guideline Adherence | Hand Hygiene | Health Promotion | Humans | Pandemics | Personnel, Hospital | SARS-CoV-2 | Surveys and Questionnaires [SUMMARY]
[CONTENT] COVID-19 | Guideline Adherence | Hand Hygiene | Health Promotion | Humans | Pandemics | Personnel, Hospital | SARS-CoV-2 | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] COVID-19 | Guideline Adherence | Hand Hygiene | Health Promotion | Humans | Pandemics | Personnel, Hospital | SARS-CoV-2 | Surveys and Questionnaires [SUMMARY]
[CONTENT] COVID-19 | Guideline Adherence | Hand Hygiene | Health Promotion | Humans | Pandemics | Personnel, Hospital | SARS-CoV-2 | Surveys and Questionnaires [SUMMARY]
[CONTENT] COVID-19 | Guideline Adherence | Hand Hygiene | Health Promotion | Humans | Pandemics | Personnel, Hospital | SARS-CoV-2 | Surveys and Questionnaires [SUMMARY]
[CONTENT] rate nosocomial infections | acquire infections hospital | protect infection complications | hospitals percentage infections | reducing nosocomial infection [SUMMARY]
[CONTENT] rate nosocomial infections | acquire infections hospital | protect infection complications | hospitals percentage infections | reducing nosocomial infection [SUMMARY]
null
[CONTENT] rate nosocomial infections | acquire infections hospital | protect infection complications | hospitals percentage infections | reducing nosocomial infection [SUMMARY]
[CONTENT] rate nosocomial infections | acquire infections hospital | protect infection complications | hospitals percentage infections | reducing nosocomial infection [SUMMARY]
[CONTENT] rate nosocomial infections | acquire infections hospital | protect infection complications | hospitals percentage infections | reducing nosocomial infection [SUMMARY]
[CONTENT] hand | hygiene | hand hygiene | care | hospital | compliance | patient | health | health care | disinfectant [SUMMARY]
[CONTENT] hand | hygiene | hand hygiene | care | hospital | compliance | patient | health | health care | disinfectant [SUMMARY]
null
[CONTENT] hand | hygiene | hand hygiene | care | hospital | compliance | patient | health | health care | disinfectant [SUMMARY]
[CONTENT] hand | hygiene | hand hygiene | care | hospital | compliance | patient | health | health care | disinfectant [SUMMARY]
[CONTENT] hand | hygiene | hand hygiene | care | hospital | compliance | patient | health | health care | disinfectant [SUMMARY]
[CONTENT] infections | hospital | medical | hospital acquired | acquired | patients | treatment | hand | rates | acquired infections [SUMMARY]
[CONTENT] hand | hygiene | hand hygiene | hospital | based | points | questionnaire | study | care | disinfectant [SUMMARY]
null
[CONTENT] providing | proper hand | proper hand hygiene | patient safety | proper | hand hygiene | care | hygiene | hand | increase [SUMMARY]
[CONTENT] hand | hygiene | hand hygiene | care | hospital | compliance | patient | increase | infections | patients [SUMMARY]
[CONTENT] hand | hygiene | hand hygiene | care | hospital | compliance | patient | increase | infections | patients [SUMMARY]
[CONTENT] ||| Clean Care ||| COVID-19 [SUMMARY]
[CONTENT] [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| Clean Care ||| COVID-19 ||| ||| ||| 270/500 | 54% ||| 49% ||| 81% | one year | 77% ||| 435/500 ||| ||| [SUMMARY]
[CONTENT] ||| Clean Care ||| COVID-19 ||| ||| ||| 270/500 | 54% ||| 49% ||| 81% | one year | 77% ||| 435/500 ||| ||| [SUMMARY]
The effect of lateral wall perforation on screw pull-out strength: a cadaveric study.
25616775
Lateral pedicle wall perforations occur frequently during pedicle screw insertion. Although it is known that such an occurrence decreases the screw pull-out strength, the effect has not been quantified biomechanically.
BACKGROUND
Twenty fresh cadaveric lumbar vertebrae were harvested, and the bone mineral density (BMD) of each was evaluated with dual-energy X-ray absorptiometry (DEXA). Twenty matched, 6.5-mm pedicle screws were inserted in two different manners in two groups, the control group and the experimental group. In the control group, the pedicle screw was inserted in a standard fashion, taking adequate precaution to ensure there was no perforation of the wall. In the experimental group, the pedicle screw was inserted such that its trajectory perforated the lateral wall. Group assignments were done randomly, and the maximal fixation strength was recorded for each screw pull-out test with a material-testing system (MTS 858 II).
MATERIALS AND METHODS
The average BMD for both groups was 0.850 g/cm2 (0.788-0.912 g/cm2). The average (and standard deviation) maximal pull-out forces were 1,015.8 ± 249.40 N for the experimental group and 1,326.0 ± 320.50 N for the control group. According to a paired t-test, the difference between the two groups was statistically significant (P < 0.001).
RESULTS
The results of this study confirm that the maximal pull-out strength of pedicle screws decreases by approximately 23.4% when the lateral wall is perforated.
CONCLUSION
[ "Aged", "Female", "Humans", "Lumbar Vertebrae", "Male", "Middle Aged", "Pedicle Screws" ]
4314786
Introduction
The use of pedicle screws in the lumbar region is a well-established technique that has been shown to provide immediate stability and rigid fixation that facilitates correction of a deformity in both sagittal and coronal planes [1-5]. However, to ensure optimal placement for achieving the requisite stability, the screw must be meticulously placed and insertion obtained in good quality bone. Various techniques have been developed to ensure optimal placement of pedicle screws in the pedicles. In the straight-ahead technique as described by Roy-Camille [6], screw insertion begins at the intersection of a horizontal line bisecting the transverse process and a longitudinal line bisecting the facet joint. The screw is then inserted straight ahead, parallel to the vertebral endplates. The Magerl [7] technique uses the same horizontal landmark for screw insertion as the Roy-Camille technique, but for the longitudinal line, the landmark is just lateral to the angle of the superior facet. The screw is then angled laterally to medially while kept parallel to the vertebral endplates. The up-and-in method of screw placement uses the same longitudinal reference line as described by Magerl et al., but with a horizontal reference line that crosses the lower third of the transverse process. The screws are then placed in a caudad-to-cephalad direction toward, but not into, the vertebral endplate. The screws are also angled slightly medially, as in the Magerl technique [8]. Beyond the conventional techniques using intraoperative landmarks, recent advancements in navigation techniques have begun to help surgeons insert pedicle screws more accurately [9-14]. However, despite experience with conventional techniques and advancements in the field of intraoperative navigation, intraoperative lateral pedicular wall perforation is not uncommon. This has been attributed to the morphology of the pedicle and also to the fact that the lateral wall is the weakest among all the walls making up the pedicle [15]. Castro [16] reported that 14 (11%) of 131 screws penetrated the lateral wall of the pedicle in 30 patients after lumbar spinal fusion, as assessed using computed tomography. Silbermann [9] compared the accuracy rates of pedicle screw placement between the free-hand and O-arm-based navigation techniques and found that 34 (22.4%) of 152 screws showed medial encroachment and 14 (9.2%) showed lateral encroachment with free-hand placement, in comparison to 2 (1.1%) and 7 (3.7%) of 187 screws, respectively, with O-arm-based navigation. Thus, although overall breach rates were lower with O-arm-based navigation, the breaches that did occur were more often lateral than medial, in contrast to free-hand placement. Galalis [17] also published a systematic review comparing free-hand placement, fluoroscopy guidance, and navigation techniques. Twenty-six prospective clinical studies were included in the analysis; these studies covered 1,105 patients in whom 6,617 screws were inserted. When evaluating the position of perforation, the studies using the free-hand technique reported a range from 12 to 67% for lateral perforation. When fluoroscopy was used, the pedicles were perforated laterally with an incidence of 16 to 79%. In patients in whom computed tomography (CT) navigation was used, the proportion of screws that perforated the lateral wall was significantly increased, ranging from 29 to 80%, compared to the percentage of screws that perforated the medial wall, which ranged from 8 to 29%. 
All of these studies suggest that meticulous attention should be paid to the lateral placement of pedicle screws, especially when navigation and assistance techniques are used, as the incidence of lateral perforation is significantly higher with these. In addition to biomechanical changes, lateral wall perforation may also result in vascular injuries, especially when the aorta and other retroperitoneal structures are located close to the screw trajectory and the vertebral bodies [18,19]. Anatomic studies have shown that, even in severe scoliosis, the aorta persistently follows and adheres to the abnormal curves of the spine [20]. Many studies on the accuracy of the screw trajectory and its actual placement in the pedicle have revealed that a large percentage of screws penetrate the pedicle lateral cortex, placing major vascular structures at risk of injury [21]. Many case series and reports have described similar findings, and Minor [22] reported the case of a patient who underwent surgical correction of a spinal deformity and had to receive endovascular treatment following iatrogenic injury. Postoperative CT scans revealed a laterally misplaced pedicle screw, which was impinging on the descending aortic wall. The patient was brought to the operating room, where a thoracic stent graft was deployed under fluoroscopic guidance as the malpositioned screw was manually retracted. Although vascular injuries associated with spinal surgery may have a delayed presentation, occurring only after chronic irritation of the pulsating aortic wall against a metallic implant, immediate intervention is still indicated to prevent potentially serious future complications. It is thought that lateral pedicle wall perforation negatively impacts the purchase of the screw in the pedicle and may consequently reduce its pull-out strength. However, there is still a lack of quantitative evidence demonstrating this negative correlation. Thus, the present biomechanical study was undertaken to explore and quantify the impact of perforation of the lateral wall during pedicle screw insertion on its pull-out strength.
null
null
Results
The average BMD of the 20 specimens was 0.850 ± 0.062 g/cm2. Differences in BMD between individual vertebrae were not statistically significant, which showed the specimens were normal and not affected by osteoporosis. In both the experimental and control groups formed from the 20 vertebral bodies (from 4 cadavers), the fixation strength was measured (Tables 1 and 2). The results of paired t-tests showed that the pull-out strength was significantly greater in the control group than in the experimental group (1,326.0 ± 320.50 vs 1,015.8 ± 249.40 N, P < 0.001; Figure 4). The average value in the experimental group was about 76.6% of that in the control group.

Table 1. Fixation strength (N) of the pedicle screws in the vertebral bodies of cadavers 1 and 2 (B, experimental group; A, control group)

VB | Cad 1 (B) | Cad 1 (A) | Cad 2 (B) | Cad 2 (A)
L1 | 901 | 1,253 | 1,316 | 1,733
L2 | 1,047 | 1,124 | 763 | 1,945
L3 | 924 | 977 | 1,152 | 1,596
L4 | 747 | 924 | 1,229 | 1,305
L5 | 659 | 885 | 911 | 1,088

Table 2. Fixation strength (N) of the pedicle screws in the vertebral bodies of cadavers 3 and 4 (B, experimental group; A, control group)

VB | Cad 3 (B) | Cad 3 (A) | Cad 4 (B) | Cad 4 (A)
L1 | 1,309 | 1,553 | 1,006 | 1,461
L2 | 1,530 | 1,993 | 1,315 | 1,426
L3 | 1,154 | 1,406 | 1,007 | 1,209
L4 | 1,125 | 1,220 | 873 | 1,345
L5 | 623 | 923 | 724 | 1,154

Figure 4. The maximum pull-out strength in both the experimental and control groups.

For more detailed analysis, two vertebral bodies at different levels were selected randomly and compared separately. One was the L5 vertebra from cadaver no. 3, for which the pull-out strength was 623 N in the experimental group and 923 N in the control group. The other was the L4 vertebra from cadaver no. 4, for which the pull-out strength was 873 N in the experimental group and 1,345 N in the control group. The data from the two vertebrae were plotted with the axial pull-out strength along the y-axis and axial displacement along the x-axis (Figures 5 and 6).

Figure 5. Fixation strength in both the experimental and control groups for the L5 vertebral body of cadaver no. 3.
Figure 6. Fixation strength in both the experimental and control groups for the L4 vertebral body of cadaver no. 4.
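The per-pedicle values in Tables 1 and 2 are sufficient to reproduce the reported summary statistics. The sketch below, assuming the tables are read in cadaver order and that pairs are formed by the two pedicles of the same vertebra, recomputes the group means, the paired t-test, and the reduction using SciPy (the original analysis used SPSS 14.0):

```python
import numpy as np
from scipy import stats

# Pull-out strength (N) per pedicle, L1-L5 for cadavers 1-4 (Tables 1 and 2)
experimental = np.array([  # B: lateral wall perforated
    901, 1047, 924, 747, 659,      # cadaver 1
    1316, 763, 1152, 1229, 911,    # cadaver 2
    1309, 1530, 1154, 1125, 623,   # cadaver 3
    1006, 1315, 1007, 873, 724,    # cadaver 4
])
control = np.array([               # A: intact pedicle
    1253, 1124, 977, 924, 885,
    1733, 1945, 1596, 1305, 1088,
    1553, 1993, 1406, 1220, 923,
    1461, 1426, 1209, 1345, 1154,
])

t, p = stats.ttest_rel(experimental, control)  # paired: same vertebra, opposite pedicles
reduction = 1 - experimental.mean() / control.mean()

print(f"experimental: {experimental.mean():.1f} +/- {experimental.std(ddof=1):.2f} N")
print(f"control:      {control.mean():.1f} +/- {control.std(ddof=1):.2f} N")
print(f"t = {t:.2f}, p = {p:.2g}, reduction = {reduction:.1%}")
```

The recomputed means and standard deviations (1,015.8 ± 249.4 N and 1,326.0 ± 320.5 N) match the reported values, and 1 − 1,015.8/1,326.0 ≈ 23.4% reproduces the reduction quoted in the conclusion, which supports the table reconstruction above.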
Conclusion
The integrity of the lumbar vertebral pedicle strongly affects the fixation strength of pedicle screws. Perforation of the lateral wall decreases the pull-out strength of screws by 23.4% compared to the control group, in which perforation did not occur.
[]
[]
[]
[ "Introduction", "Material and methods", "Results", "Discussion", "Conclusion" ]
[ "The use of pedicle screws in the lumbar region is a well-established technique that has been shown to provide immediate stability and rigid fixation that facilitates correction of a deformity in both sagittal and coronal planes [1-5]. However, to ensure optimal placement for achieving the requisite stability, the screw must be meticulously placed and insertion obtained in good quality bone. Various techniques have been developed to ensure optimal placement of pedicle screws in the pedicles. In the straight-ahead technique as described by Roy-Camille [6], screw insertion begins at the intersection of a horizontal line bisecting the transverse process and a longitudinal line bisecting the facet joint. The screw is then inserted straight ahead, parallel to the vertebral endplates. The Magerl [7] technique uses the same horizontal landmark for screw insertion as the Roy-Camille technique, but for the longitudinal line, the landmark is just lateral to the angle of the superior facet. The screw is then angled laterally to medially while kept parallel to the vertebral endplates. The up-and-in method of screw placement uses the same longitudinal reference line as described by Magerl et al., but with a horizontal reference line that crosses the lower third of the transverse process. The screws are then placed in a caudad-to-cephalad direction toward, but not into, the vertebral endplate. The screws are also angled slightly medially as in the Magerl technique [8]. Beyond the conventional techniques using intraoperative landmarks, recent advancements in a developed navigation technique have begun to help surgeons insert pedicle screws more accurately [9-14]. However, despite experience with conventional techniques and advancements in the field of intraoperative navigation, intraoperative lateral pedicular wall perforation is not uncommon. This has been attributed to the morphology of the pedicle and also the fact that the lateral wall is the weakest among all walls making up the pedicle [15]. Castro [16] reported that 14 (11%) of 131 screws penetrated the lateral wall of the pedicle in 30 patients after lumbar spinal fusion, as assessed using computed tomography. Silbermann [9] compared the accuracy rates of pedicle screw placement between the free-hand and O-arm-based navigation techniques and found that 34 (22.4%) of 152 screws showed medial encroachment and 14 (9.2%) screws showed lateral encroachment with free-hand placement in comparison to 2 (1.1%) and 7 (3.7%) of 187 screws, respectively, with O-arm-based navigation. Thus, lateral perforation seemed more likely to occur with O-arm-based navigation than with free-hand placement. Galalis [17] also published a systematic review comparing free-hand placement, fluoroscopy guidance, and navigation techniques. Twenty-six prospective clinical studies were included in the analysis, and these studies included 1,105 patients in whom 6,617 screws were inserted. When evaluating the position of perforation, in the studies using the free-hand technique, a range from 12 to 67% was found for lateral perforation. When fluoroscopy was used, the pedicles were perforated laterally with an incidence of 16 to 79%. In patients in whom computed tomography (CT) navigation was used, the proportion of screws that perforated the lateral wall was significantly increased, ranging from 29 to 80%, compared to the percentage of screws that perforated the medial wall, which ranged from 8 to 29%. 
All of these studies suggest that meticulous attention should be paid to the lateral placement of pedicle screws, especially when using navigation and assistance techniques, with which the incidence of lateral perforation is significantly higher. In addition to biomechanical changes, lateral wall perforation may also result in vascular injuries, especially when the aorta and other retroperitoneal structures are located close to the screw trajectory and the vertebral bodies [18,19]. Anatomic studies have shown that, even in severe scoliosis, the aorta persistently follows and adheres to the abnormal curves of the spine [20]. Many studies on the accuracy of the screw trajectory and its actual placement in the pedicle have revealed that a large percentage of screws penetrate the lateral pedicle cortex, placing major vascular structures at risk of injury [21]. Many case series and reports have described similar findings, and Minor [22] reported a case of a patient who underwent surgical correction of a spinal deformity and had to receive endovascular treatment following iatrogenic injury. Postoperative CT scans in this case revealed a laterally misplaced pedicle screw that was impinging on the descending aortic wall. The patient was brought to the operating room, where a thoracic stent graft was deployed under fluoroscopic guidance as the malpositioned screw was manually retracted. Although vascular injuries associated with spinal surgery may have a delayed presentation, occurring only after chronic irritation of the pulsating aortic wall against a metallic implant, immediate intervention is still indicated to prevent potentially serious future complications.\nIt is thought that lateral pedicle wall perforation negatively impacts the purchase of the screw in the pedicle and may consequently reduce its pull-out strength. However, there is still a lack of quantitative evidence demonstrating this negative correlation. Thus, the present biomechanical study was undertaken to explore and quantify the impact of perforation of the lateral wall during pedicle screw insertion on its pull-out strength.", "Requisite institutional review board (IRB; Beijing Jishuitan Hospital ethics committee) approval was obtained for the study. For the purpose of the study, 20 freshly frozen lumbar vertebrae (L1-L5) were harvested from 4 cadavers (3 males, 1 female) with an average age of 54.0 years (range, 47–67 years). Each vertebra was dissected individually and then carefully inspected to ensure that all the specimens were free from metastatic or any obvious metabolic bone disease. Furthermore, the vertebrae were visually examined to eliminate vertebrae with previous fracture or instrumentation. Dual-energy X-ray absorptiometry (DEXA) was then performed to measure bone mineral density (BMD) using a Lunar Prodigy instrument (enCORE 2006 software; General Electric, Madison, WI, USA). As all the specimens were from middle-aged adults, osteophyte formation and facet hypertrophy were minor. Thus, the DEXA scanning used for BMD measurement offers greater precision in this study than in other studies in which the specimens were obtained from relatively older individuals.\nBefore testing, the specimens were thawed for 24 h to room temperature from their storage temperature of −30°C and cleaned of all soft tissues. Each vertebra was then cut into two halves along the sagittal midline using an electric saw. Individual pedicles were randomly assigned to two groups: the control group and the experimental group. 
For the control group, the standard procedure (Magerl method) for pedicle screw insertion was used. A 3.5-mm burr was used to create a pilot hole on the dorsal cortex for screw entrance. The entire pedicle tract was then probed with a universal pedicle probe into the vertebral body to a depth of about 40 mm, and an X-ray/image intensifier was used during the procedure to ensure the proper trajectory and avoid cortical perforation. In contrast, on the contralateral side, which formed the experimental group, the lateral cortex at the junction between the pedicle and vertebral body was perforated with a 3.0-mm burr, and the probe was passed out of the pedicle through this hole in order to mimic the lateral perforation that occurs incidentally during live surgery. The pedicle was then probed as in the control group after changing the transverse angle. Each vertebra was then instrumented with 6.5 × 45-mm M8 titanium alloy screws (Medtronic, Sofamor-Danek, Memphis, TN) (Figures 1 and 2). Subsequently, for the purpose of biomechanical analysis, all vertebrae were embedded in bone cement after confirming that each screw was fully contained inside the pedicle. To ensure that bone cement did not accidentally enter the lateral perforation in the experimental group, these perforations were sealed with plasticine. Finally, the pull-out strength of each pedicle screw in both groups was measured with an MTS-858 II material tester (Material Testing System Corporation, Minneapolis, MN), which was connected to the screw through a jig to align the pull-out direction along the longitudinal axis of the screw. This ensured that loads from other directions were eliminated (Figure 3A, B). The screw was pulled out at a constant velocity of 5 mm/min, and the peak load was taken as the pull-out strength. Figure 1: AP and lateral X-rays of the experimental (long arrow) and control (arrowhead) groups; the probe traveled out of the pedicle through the breakage in the lateral wall in the experimental group. Figure 2: AP and lateral X-ray images showing that the probe re-entered the vertebral body after changing direction (control group, arrowhead; experimental group, long arrow). Figure 3: Mono-axial pedicle screw and pedicle screw pull-out strength. A: The mono-axial pedicle screw (6.5 mm × 45 mm, M8, Sofamor-Danek, USA) was used to fix the vertebral body and connected to a multi-functional biomechanical testing machine (MTS 858). B: The pedicle screw pull-out strength was tested using the MTS at a rate of 5 mm/min.\nBMD and pull-out strength were compared between the two groups using paired t-tests. P < 0.05 indicated a statistically significant difference. All data were analyzed using SPSS 14.0 software (SPSS Inc., Chicago, IL).", "The average BMD of the 20 specimens was 0.850 ± 0.062 g/cm2. 
Differences in BMD between individual vertebrae were not statistically significant, indicating that the specimens were not affected by osteoporosis. Fixation strength was measured in both the experimental and control groups formed from the 20 vertebral bodies (from 4 cadavers) (Tables 1 and 2). The results of paired t-tests showed that the pull-out strength was significantly greater in the control group than in the experimental group (1,326.0 ± 320.50 vs 1,015.8 ± 249.40 N, P < 0.001; Figure 4). The average value in the experimental group was about 76.6% of that in the control group.

Table 1: Fixation strength (N) of the pedicle screws in the vertebral bodies of cadavers no. 1 and 2 (B, experimental group; A, control group)
VB | Cad 1 (B) | Cad 1 (A) | Cad 2 (B) | Cad 2 (A)
L1 | 901 | 1,253 | 1,316 | 1,733
L2 | 1,047 | 1,124 | 763 | 1,945
L3 | 924 | 977 | 1,152 | 1,596
L4 | 747 | 924 | 1,229 | 1,305
L5 | 659 | 885 | 911 | 1,088

Table 2: Fixation strength (N) of the pedicle screws in the vertebral bodies of cadavers no. 3 and 4 (B, experimental group; A, control group)
VB | Cad 3 (B) | Cad 3 (A) | Cad 4 (B) | Cad 4 (A)
L1 | 1,309 | 1,553 | 1,006 | 1,461
L2 | 1,530 | 1,993 | 1,315 | 1,426
L3 | 1,154 | 1,406 | 1,007 | 1,209
L4 | 1,125 | 1,220 | 873 | 1,345
L5 | 623 | 923 | 724 | 1,154

Figure 4: The maximum pull-out strength in both the experimental and control groups.

For more detailed analysis, two vertebral bodies at different levels were selected randomly and compared separately. One was the L5 vertebra from cadaver no. 3, for which the pull-out strength was 623 N in the experimental group and 923 N in the control group. The other was the L4 vertebra from cadaver no. 4, for which the pull-out strength was 873 N in the experimental group and 1,345 N in the control group. The data from the two vertebrae were plotted with the axial pull-out strength along the y-axis and axial displacement along the x-axis (Figures 5 and 6). Figure 5: Fixation strength in the experimental and control groups for the L5 vertebral body of cadaver no. 3. Figure 6: Fixation strength in the experimental and control groups for the L4 vertebral body of cadaver no. 4.", "Correct placement of transpedicular screws for spinal fusion is technically challenging due to several factors, such as the variable anatomy of vertebral bodies, the relatively narrow pedicle in some thin Asian patients, and the complicated three-dimensional orientation of the pedicle, especially in cases of scoliosis. Therefore, pedicle perforation and screw misplacement occur frequently in clinical practice [23-25]. Perforation may weaken the fixation strength of screws in vertebrae, particularly in cases with lateral cortical perforation. George et al. [26] observed that unintentional pedicle fracture reduces the mean pull-out strength by 11% compared to that of screws in intact pedicles. Saraf et al. 
[27] observed in a cadaveric study that the mean pull-out strength of laterally misplaced screws was 47.3% less than that of standard pedicle screws in the thoracic and lumbar vertebrae, but did not find any correlation between BMD and ultimate pull-out strength. Brasiliense et al. [28] also reported that laterally misplaced pedicle screws have a 21% lower pull-out strength compared to well-placed pedicle screws, although their study included only thoracic human cadaveric vertebrae.\nFrequent disruption of the lateral pedicle wall can be attributed to the anatomy of the pedicle, which is a cylindrical body located between the vertebral body and lamina. It is also due to the fact that the lateral wall is the weakest of all the walls of the pedicle. Weinstein et al. [29] demonstrated that during screw fixation of a thoracic or lumbar vertebral body, the pedicle structure accounts for 60% of the pull-out strength, the vertebral body accounts for 15–20%, and precise fixation of the screw to the cortical bone of the anterior vertebral body accounts for the remaining 20–25%. Hirano et al. [15] also measured the pull-out strength of pedicle screws through biomechanical testing, and the tests revealed that 82% of the fixation strength and 57% of the pull-out strength are attributable to vertebral pedicle structures. These studies thus established that the pedicle is the cornerstone of stable pedicle screw fixation.\nOur results show that the pull-out strength of fixed screws decreases by approximately 23% when lateral perforation occurs, which differs slightly from the results of previous studies. This difference could be attributed to our slightly modified study design, which aimed to mimic real-life surgical practice. In addition, in previous studies, only misplaced screws were considered, whereas in our study, the experiments focused on simulating the frequent surgical occurrence in which a surgeon manages to place the screw in the right tract even after perforating the lateral cortex. Our clinical experience suggests that lateral screw misplacement can be avoided with intraoperative diligence and that the tactile feedback the surgeon obtains by probing the drilled tract plays a vital role in this. In the current study, although the screws appeared to have been well contained, lateral wall perforation compromised stability. The perforation led to loss of the integrity of the cylindrical structure of the vertebral pedicle, causing exposure of the screw thread, which in turn reduced the holding strength of the screw and ultimately weakened its fixation.\nTo achieve pedicular screw fixation of the lumbar spine, there are two key technical elements. First and foremost is the need to ensure an accurate entry point; secondly, the principles of the appropriate transverse screw angle (TSA) must be followed. Thorough exposure and effective hemostasis are needed to find the right entry point. If the entry point is too lateral, the probe will perforate laterally at the very beginning. The surgeon can then recognize the mistake with the pedicle feeler and shift the entry point medially accordingly. From our experience, we propose that the TSA should be 5–10° for L1–L3 and 10–15° for L4–L5; the risk of breakage of the pedicular lateral wall will increase if the TSA is below the lower limit. However, ensuring optimal placement through a correct TSA to avoid pedicle perforation is not easy. 
Schizas [30] reviewed 130 studies published over the past 40 years; this meta-analysis found that, without navigation, only 86.5% of pedicle screws were accurately placed in the lumbar spine of cadaveric specimens, and this rate was only 87.3% in vivo. Tian [31] found that lumbar pedicle screw malposition is frequently accompanied by vertebral axial rotation, which is more common than anatomical variation and has a significant impact on the TSA. Accordingly, the incidence of pedicle perforation will increase if surgeons do not pay enough attention to the change in TSA due to vertebral rotation [32,33]. Therefore, identifying the rotation and selecting the appropriate TSA accordingly is the key to avoiding lateral perforation.\nThe study does have some limitations, particularly in terms of the small sample size and the inherent limitations associated with a cadaveric study. While the study shows that mechanical aberration occurs with lateral cortical perforation, it does not simulate the actual clinical scenario, in which these perforations may be repaired over time and may not affect clinical outcomes. Additionally, as newer technologies such as computer-assisted surgery [34] and rapid prototyping [35,36] become universally available, the incidence of these inadvertent perforations may decline. However, such perforation currently remains a key intraoperative problem, and this study highlights the need for diligence during surgery in order to avoid these complications.", "The integrity of the lumbar vertebral pedicle strongly affects the fixation strength of pedicle screws. Perforation of the lateral wall decreases the pull-out strength of screws by 23.4% compared to that in the control group, in which perforation did not occur." ]
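The paired comparison reported in the Results can be checked directly against the forces in Tables 1 and 2. Below is a minimal sketch in Python (using NumPy and SciPy rather than the SPSS 14.0 package the authors used); the 20 experimental/control pairs are transcribed from the tables, and the script reproduces the reported group means and the roughly 23.4% relative loss.

```python
# Paired t-test on the pull-out forces (N) from Tables 1 and 2.
# SciPy stands in here for the SPSS analysis described in the methods.
import numpy as np
from scipy import stats

experimental = np.array([901, 1047, 924, 747, 659,     # cadaver 1, L1-L5
                         1316, 763, 1152, 1229, 911,   # cadaver 2
                         1309, 1530, 1154, 1125, 623,  # cadaver 3
                         1006, 1315, 1007, 873, 724])  # cadaver 4
control = np.array([1253, 1124, 977, 924, 885,
                    1733, 1945, 1596, 1305, 1088,
                    1553, 1993, 1406, 1220, 923,
                    1461, 1426, 1209, 1345, 1154])

t_stat, p_value = stats.ttest_rel(experimental, control)
print(f"experimental: {experimental.mean():.1f} N, control: {control.mean():.1f} N")
print(f"paired t = {t_stat:.2f}, p = {p_value:.2e}")
print(f"relative loss: {1 - experimental.mean() / control.mean():.1%}")
```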
[ "intro", "materials|methods", "results", "discussion", "conclusions" ]
[ "Lumbar spine", "Pedicular screw insertion", "Pull-out strength" ]
Introduction: The use of pedicle screws in the lumbar region is a well-established technique that has been shown to provide immediate stability and rigid fixation that facilitates correction of a deformity in both the sagittal and coronal planes [1-5]. However, to achieve the requisite stability, the screw must be meticulously placed and inserted into good-quality bone. Various techniques have been developed to ensure optimal placement of pedicle screws in the pedicles. In the straight-ahead technique as described by Roy-Camille [6], screw insertion begins at the intersection of a horizontal line bisecting the transverse process and a longitudinal line bisecting the facet joint. The screw is then inserted straight ahead, parallel to the vertebral endplates. The Magerl [7] technique uses the same horizontal landmark for screw insertion as the Roy-Camille technique, but for the longitudinal line, the landmark is just lateral to the angle of the superior facet. The screw is then angled laterally to medially while kept parallel to the vertebral endplates. The up-and-in method of screw placement uses the same longitudinal reference line as described by Magerl et al., but with a horizontal reference line that crosses the lower third of the transverse process. The screws are then placed in a caudad-to-cephalad direction toward, but not into, the vertebral endplate. The screws are also angled slightly medially, as in the Magerl technique [8]. Beyond the conventional techniques using intraoperative landmarks, recent advances in navigation techniques have begun to help surgeons insert pedicle screws more accurately [9-14]. However, despite experience with conventional techniques and advancements in the field of intraoperative navigation, intraoperative lateral pedicular wall perforation is not uncommon. This has been attributed to the morphology of the pedicle and also to the fact that the lateral wall is the weakest among all walls making up the pedicle [15]. Castro [16] reported that 14 (11%) of 131 screws penetrated the lateral wall of the pedicle in 30 patients after lumbar spinal fusion, as assessed using computed tomography. Silbermann [9] compared the accuracy rates of pedicle screw placement between the free-hand and O-arm-based navigation techniques and found that 34 (22.4%) of 152 screws showed medial encroachment and 14 (9.2%) screws showed lateral encroachment with free-hand placement, in comparison to 2 (1.1%) and 7 (3.7%) of 187 screws, respectively, with O-arm-based navigation. Thus, with O-arm-based navigation, lateral perforation was more common than medial perforation, whereas the reverse was true for free-hand placement. Gelalis [17] also published a systematic review comparing free-hand placement, fluoroscopy guidance, and navigation techniques. Twenty-six prospective clinical studies were included in the analysis, and these studies included 1,105 patients in whom 6,617 screws were inserted. When evaluating the position of perforation, in the studies using the free-hand technique, a range of 12 to 67% was found for lateral perforation. When fluoroscopy was used, the pedicles were perforated laterally with an incidence of 16 to 79%. In patients in whom computed tomography (CT) navigation was used, the proportion of screws that perforated the lateral wall was significantly increased, ranging from 29 to 80%, compared to the percentage of screws that perforated the medial wall, which ranged from 8 to 29%. 
All of these studies suggest that meticulous attention should be paid to the lateral placement of pedicle screws, especially when using navigation and assistance techniques, with which the incidence of lateral perforation is significantly higher. In addition to biomechanical changes, lateral wall perforation may also result in vascular injuries, especially when the aorta and other retroperitoneal structures are located close to the screw trajectory and the vertebral bodies [18,19]. Anatomic studies have shown that, even in severe scoliosis, the aorta persistently follows and adheres to the abnormal curves of the spine [20]. Many studies on the accuracy of the screw trajectory and its actual placement in the pedicle have revealed that a large percentage of screws penetrate the lateral pedicle cortex, placing major vascular structures at risk of injury [21]. Many case series and reports have described similar findings, and Minor [22] reported a case of a patient who underwent surgical correction of a spinal deformity and had to receive endovascular treatment following iatrogenic injury. Postoperative CT scans in this case revealed a laterally misplaced pedicle screw that was impinging on the descending aortic wall. The patient was brought to the operating room, where a thoracic stent graft was deployed under fluoroscopic guidance as the malpositioned screw was manually retracted. Although vascular injuries associated with spinal surgery may have a delayed presentation, occurring only after chronic irritation of the pulsating aortic wall against a metallic implant, immediate intervention is still indicated to prevent potentially serious future complications. It is thought that lateral pedicle wall perforation negatively impacts the purchase of the screw in the pedicle and may consequently reduce its pull-out strength. However, there is still a lack of quantitative evidence demonstrating this negative correlation. Thus, the present biomechanical study was undertaken to explore and quantify the impact of perforation of the lateral wall during pedicle screw insertion on its pull-out strength. Material and methods: Requisite institutional review board (IRB; Beijing Jishuitan Hospital ethics committee) approval was obtained for the study. For the purpose of the study, 20 freshly frozen lumbar vertebrae (L1-L5) were harvested from 4 cadavers (3 males, 1 female) with an average age of 54.0 years (range, 47–67 years). Each vertebra was dissected individually and then carefully inspected to ensure that all the specimens were free from metastatic or any obvious metabolic bone disease. Furthermore, the vertebrae were visually examined to eliminate vertebrae with previous fracture or instrumentation. Dual-energy X-ray absorptiometry (DEXA) was then performed to measure bone mineral density (BMD) using a Lunar Prodigy instrument (enCORE 2006 software; General Electric, Madison, WI, USA). As all the specimens were from middle-aged adults, osteophyte formation and facet hypertrophy were minor. Thus, the DEXA scanning used for BMD measurement offers greater precision in this study than in other studies in which the specimens were obtained from relatively older individuals. Before testing, the specimens were thawed for 24 h to room temperature from their storage temperature of −30°C and cleaned of all soft tissues. Each vertebra was then cut into two halves along the sagittal midline using an electric saw. Individual pedicles were randomly assigned to two groups: the control group and the experimental group. 
For the control group, the standard procedure (Magerl method) for pedicle screw insertion was used. A 3.5-mm burr was used to create a pilot hole on the dorsal cortex for screw entrance. The entire pedicle tract was then probed with a universal pedicle probe into the vertebral body to a depth of about 40 mm, and an X-ray/image intensifier was used during the procedure to ensure the proper trajectory and avoid cortical perforation. In contrast, on the contralateral side, which formed the experimental group, the lateral cortex at the junction between the pedicle and vertebral body was perforated with a 3.0-mm burr, and the probe was passed out of the pedicle through this hole in order to mimic the lateral perforation that occurs incidentally during live surgery. The pedicle was then probed as in the control group after changing the transverse angle. Each vertebra was then instrumented with 6.5 × 45-mm M8 titanium alloy screws (Medtronic, Sofamor-Danek, Memphis, TN) (Figures 1 and 2). Subsequently, for the purpose of biomechanical analysis, all vertebrae were embedded in bone cement after confirming that each screw was fully contained inside the pedicle. To ensure that bone cement did not accidentally enter the lateral perforation in the experimental group, these perforations were sealed with plasticine. Finally, the pull-out strength of each pedicle screw in both groups was measured with an MTS-858 II material tester (Material Testing System Corporation, Minneapolis, MN), which was connected to the screw through a jig to align the pull-out direction along the longitudinal axis of the screw. This ensured that loads from other directions were eliminated (Figure 3A, B). The screw was pulled out at a constant velocity of 5 mm/min, and the peak load was taken as the pull-out strength. Figure 1: AP and lateral X-rays of the experimental (long arrow) and control (arrowhead) groups; the probe traveled out of the pedicle through the breakage in the lateral wall in the experimental group. Figure 2: AP and lateral X-ray images showing that the probe re-entered the vertebral body after changing direction (control group, arrowhead; experimental group, long arrow). Figure 3: Mono-axial pedicle screw and pedicle screw pull-out strength. A: The mono-axial pedicle screw (6.5 mm × 45 mm, M8, Sofamor-Danek, USA) was used to fix the vertebral body and connected to a multi-functional biomechanical testing machine (MTS 858). B: The pedicle screw pull-out strength was tested using the MTS at a rate of 5 mm/min. BMD and pull-out strength were compared between the two groups using paired t-tests. P < 0.05 indicated a statistically significant difference. All data were analyzed using SPSS 14.0 software (SPSS Inc., Chicago, IL). Results: The average BMD of the 20 specimens was 0.850 ± 0.062 g/cm2. 
Differences in BMD between individual vertebrae were not statistically significant, indicating that the specimens were not affected by osteoporosis. Fixation strength was measured in both the experimental and control groups formed from the 20 vertebral bodies (from 4 cadavers) (Tables 1 and 2). The results of paired t-tests showed that the pull-out strength was significantly greater in the control group than in the experimental group (1,326.0 ± 320.50 vs 1,015.8 ± 249.40 N, P < 0.001; Figure 4). The average value in the experimental group was about 76.6% of that in the control group.

Table 1: Fixation strength (N) of the pedicle screws in the vertebral bodies of cadavers no. 1 and 2 (B, experimental group; A, control group)
VB | Cad 1 (B) | Cad 1 (A) | Cad 2 (B) | Cad 2 (A)
L1 | 901 | 1,253 | 1,316 | 1,733
L2 | 1,047 | 1,124 | 763 | 1,945
L3 | 924 | 977 | 1,152 | 1,596
L4 | 747 | 924 | 1,229 | 1,305
L5 | 659 | 885 | 911 | 1,088

Table 2: Fixation strength (N) of the pedicle screws in the vertebral bodies of cadavers no. 3 and 4 (B, experimental group; A, control group)
VB | Cad 3 (B) | Cad 3 (A) | Cad 4 (B) | Cad 4 (A)
L1 | 1,309 | 1,553 | 1,006 | 1,461
L2 | 1,530 | 1,993 | 1,315 | 1,426
L3 | 1,154 | 1,406 | 1,007 | 1,209
L4 | 1,125 | 1,220 | 873 | 1,345
L5 | 623 | 923 | 724 | 1,154

Figure 4: The maximum pull-out strength in both the experimental and control groups.

For more detailed analysis, two vertebral bodies at different levels were selected randomly and compared separately. One was the L5 vertebra from cadaver no. 3, for which the pull-out strength was 623 N in the experimental group and 923 N in the control group. The other was the L4 vertebra from cadaver no. 4, for which the pull-out strength was 873 N in the experimental group and 1,345 N in the control group. The data from the two vertebrae were plotted with the axial pull-out strength along the y-axis and axial displacement along the x-axis (Figures 5 and 6). Figure 5: Fixation strength in the experimental and control groups for the L5 vertebral body of cadaver no. 3. Figure 6: Fixation strength in the experimental and control groups for the L4 vertebral body of cadaver no. 4. Discussion: Correct placement of transpedicular screws for spinal fusion is technically challenging due to several factors, such as the variable anatomy of vertebral bodies, the relatively narrow pedicle in some thin Asian patients, and the complicated three-dimensional orientation of the pedicle, especially in cases of scoliosis. Therefore, pedicle perforation and screw misplacement occur frequently in clinical practice [23-25]. Perforation may weaken the fixation strength of screws in vertebrae, particularly in cases with lateral cortical perforation. George et al. [26] observed that unintentional pedicle fracture reduces the mean pull-out strength by 11% compared to that of screws in intact pedicles. Saraf et al. 
[27] observed in a cadaveric study that the mean pull-out strength of laterally misplaced screws was 47.3% less than that of standard pedicle screws in the thoracic and lumbar vertebrae, but did not find any correlation between BMD and ultimate pull-out strength. Brasiliense et al. [28] also reported that laterally misplaced pedicle screws have a 21% lower pull-out strength compared to well-placed pedicle screws, although their study included only thoracic human cadaveric vertebrae. Frequent disruption of the lateral pedicle wall can be attributed to the anatomy of the pedicle, which is a cylindrical body located between the vertebral body and lamina. It is also due to the fact that the lateral wall is the weakest of all the walls of the pedicle. Weinstein et al. [29] demonstrated that during screw fixation of a thoracic or lumbar vertebral body, the pedicle structure accounts for 60% of the pull-out strength, the vertebral body accounts for 15–20%, and precise fixation of the screw to the cortical bone of the anterior vertebral body accounts for the remaining 20–25%. Hirano et al. [15] also measured the pull-out strength of pedicle screws through biomechanical testing, and the tests revealed that 82% of the fixation strength and 57% of the pull-out strength are attributable to vertebral pedicle structures. These studies thus established that the pedicle is the cornerstone of stable pedicle screw fixation. Our results show that the pull-out strength of fixed screws decreases by approximately 23% when lateral perforation occurs, which differs slightly from the results of previous studies. This difference could be attributed to our slightly modified study design, which aimed to mimic real-life surgical practice. In addition, in previous studies, only misplaced screws were considered, whereas in our study, the experiments focused on simulating the frequent surgical occurrence in which a surgeon manages to place the screw in the right tract even after perforating the lateral cortex. Our clinical experience suggests that lateral screw misplacement can be avoided with intraoperative diligence and that the tactile feedback the surgeon obtains by probing the drilled tract plays a vital role in this. In the current study, although the screws appeared to have been well contained, lateral wall perforation compromised stability. The perforation led to loss of the integrity of the cylindrical structure of the vertebral pedicle, causing exposure of the screw thread, which in turn reduced the holding strength of the screw and ultimately weakened its fixation. To achieve pedicular screw fixation of the lumbar spine, there are two key technical elements. First and foremost is the need to ensure an accurate entry point; secondly, the principles of the appropriate transverse screw angle (TSA) must be followed. Thorough exposure and effective hemostasis are needed to find the right entry point. If the entry point is too lateral, the probe will perforate laterally at the very beginning. The surgeon can then recognize the mistake with the pedicle feeler and shift the entry point medially accordingly. From our experience, we propose that the TSA should be 5–10° for L1–L3 and 10–15° for L4–L5; the risk of breakage of the pedicular lateral wall will increase if the TSA is below the lower limit. However, ensuring optimal placement through a correct TSA to avoid pedicle perforation is not easy. 
Schizas [30] reviewed 130 studies published over the past 40 years; this meta-analysis found that, without navigation, only 86.5% of pedicle screws were accurately placed in the lumbar spine of cadaveric specimens, and this rate was only 87.3% in vivo. Tian [31] found that lumbar pedicle screw malposition is frequently accompanied by vertebral axial rotation, which is more common than anatomical variation and has a significant impact on the TSA. Accordingly, the incidence of pedicle perforation will increase if surgeons do not pay enough attention to the change in TSA due to vertebral rotation [32,33]. Therefore, identifying the rotation and selecting the appropriate TSA accordingly is the key to avoiding lateral perforation. The study does have some limitations, particularly in terms of the small sample size and the inherent limitations associated with a cadaveric study. While the study shows that mechanical aberration occurs with lateral cortical perforation, it does not simulate the actual clinical scenario, in which these perforations may be repaired over time and may not affect clinical outcomes. Additionally, as newer technologies such as computer-assisted surgery [34] and rapid prototyping [35,36] become universally available, the incidence of these inadvertent perforations may decline. However, such perforation currently remains a key intraoperative problem, and this study highlights the need for diligence during surgery in order to avoid these complications. Conclusion: The integrity of the lumbar vertebral pedicle strongly affects the fixation strength of pedicle screws. Perforation of the lateral wall decreases the pull-out strength of screws by 23.4% compared to that in the control group, in which perforation did not occur.
Background: Lateral pedicle wall perforations occur frequently during pedicle screw insertion. Although it is known that such an occurrence decreases the screw pull-out strength, the effect has not been quantified biomechanically. Methods: Twenty fresh cadaveric lumbar vertebrae were harvested, and the bone mineral density (BMD) of each was evaluated with dual-energy X-ray absorptiometry (DEXA). Twenty matched 6.5-mm pedicle screws were inserted in two different manners in two groups, the control group and the experimental group. In the control group, the pedicle screw was inserted in a standard fashion, taking adequate precautions to ensure there was no perforation of the wall. In the experimental group, the pedicle screw was inserted such that its trajectory perforated the lateral wall. Group assignments were done randomly, and the maximal fixation strength was recorded for each screw pull-out test with a material-testing system (MTS 858 II). Results: The average BMD for both groups was 0.850 g/cm2 (0.788-0.912 g/cm2). The average (and standard deviation) maximal pull-out forces were 1,015.8 ± 249.40 N for the experimental group and 1,326.0 ± 320.50 N for the control group. According to a paired t-test, the difference between the two groups was statistically significant (P < 0.001). Conclusions: The results of this study confirm that the maximal pull-out strength of pedicle screws decreases by approximately 23.4% when the lateral wall is perforated.
Introduction: The use of pedicle screws in the lumbar region is a well-established technique that has been shown to provide immediate stability and rigid fixation that facilitates correction of a deformity in both the sagittal and coronal planes [1-5]. However, to achieve the requisite stability, the screw must be meticulously placed and inserted into good-quality bone. Various techniques have been developed to ensure optimal placement of pedicle screws in the pedicles. In the straight-ahead technique as described by Roy-Camille [6], screw insertion begins at the intersection of a horizontal line bisecting the transverse process and a longitudinal line bisecting the facet joint. The screw is then inserted straight ahead, parallel to the vertebral endplates. The Magerl [7] technique uses the same horizontal landmark for screw insertion as the Roy-Camille technique, but for the longitudinal line, the landmark is just lateral to the angle of the superior facet. The screw is then angled laterally to medially while kept parallel to the vertebral endplates. The up-and-in method of screw placement uses the same longitudinal reference line as described by Magerl et al., but with a horizontal reference line that crosses the lower third of the transverse process. The screws are then placed in a caudad-to-cephalad direction toward, but not into, the vertebral endplate. The screws are also angled slightly medially, as in the Magerl technique [8]. Beyond the conventional techniques using intraoperative landmarks, recent advances in navigation techniques have begun to help surgeons insert pedicle screws more accurately [9-14]. However, despite experience with conventional techniques and advancements in the field of intraoperative navigation, intraoperative lateral pedicular wall perforation is not uncommon. This has been attributed to the morphology of the pedicle and also to the fact that the lateral wall is the weakest among all walls making up the pedicle [15]. Castro [16] reported that 14 (11%) of 131 screws penetrated the lateral wall of the pedicle in 30 patients after lumbar spinal fusion, as assessed using computed tomography. Silbermann [9] compared the accuracy rates of pedicle screw placement between the free-hand and O-arm-based navigation techniques and found that 34 (22.4%) of 152 screws showed medial encroachment and 14 (9.2%) screws showed lateral encroachment with free-hand placement, in comparison to 2 (1.1%) and 7 (3.7%) of 187 screws, respectively, with O-arm-based navigation. Thus, with O-arm-based navigation, lateral perforation was more common than medial perforation, whereas the reverse was true for free-hand placement. Gelalis [17] also published a systematic review comparing free-hand placement, fluoroscopy guidance, and navigation techniques. Twenty-six prospective clinical studies were included in the analysis, and these studies included 1,105 patients in whom 6,617 screws were inserted. When evaluating the position of perforation, in the studies using the free-hand technique, a range of 12 to 67% was found for lateral perforation. When fluoroscopy was used, the pedicles were perforated laterally with an incidence of 16 to 79%. In patients in whom computed tomography (CT) navigation was used, the proportion of screws that perforated the lateral wall was significantly increased, ranging from 29 to 80%, compared to the percentage of screws that perforated the medial wall, which ranged from 8 to 29%. 
All of these studies suggest that meticulous attention should be paid to the lateral placement of pedicle screws, especially when using navigation and assistance techniques, with which the incidence of lateral perforation is significantly higher. In addition to biomechanical changes, lateral wall perforation may also result in vascular injuries, especially when the aorta and other retroperitoneal structures are located close to the screw trajectory and the vertebral bodies [18,19]. Anatomic studies have shown that, even in severe scoliosis, the aorta persistently follows and adheres to the abnormal curves of the spine [20]. Many studies on the accuracy of the screw trajectory and its actual placement in the pedicle have revealed that a large percentage of screws penetrate the lateral pedicle cortex, placing major vascular structures at risk of injury [21]. Many case series and reports have described similar findings, and Minor [22] reported a case of a patient who underwent surgical correction of a spinal deformity and had to receive endovascular treatment following iatrogenic injury. Postoperative CT scans in this case revealed a laterally misplaced pedicle screw that was impinging on the descending aortic wall. The patient was brought to the operating room, where a thoracic stent graft was deployed under fluoroscopic guidance as the malpositioned screw was manually retracted. Although vascular injuries associated with spinal surgery may have a delayed presentation, occurring only after chronic irritation of the pulsating aortic wall against a metallic implant, immediate intervention is still indicated to prevent potentially serious future complications. It is thought that lateral pedicle wall perforation negatively impacts the purchase of the screw in the pedicle and may consequently reduce its pull-out strength. However, there is still a lack of quantitative evidence demonstrating this negative correlation. Thus, the present biomechanical study was undertaken to explore and quantify the impact of perforation of the lateral wall during pedicle screw insertion on its pull-out strength. Conclusion: The integrity of the lumbar vertebral pedicle strongly affects the fixation strength of pedicle screws. Perforation of the lateral wall decreases the pull-out strength of screws by 23.4% compared to that in the control group, in which perforation did not occur.
Background: Lateral pedicle wall perforations occur frequently during pedicle screw insertion. Although it is known that such an occurrence decreases the screw pull-out strength, the effect has not been quantified biomechanically. Methods: Twenty fresh cadaveric lumbar vertebrae were harvested, and the bone mineral density (BMD) of each was evaluated with dual-energy X-ray absorptiometry (DEXA). Twenty matched 6.5-mm pedicle screws were inserted in two different manners in two groups, the control group and the experimental group. In the control group, the pedicle screw was inserted in a standard fashion, taking adequate precautions to ensure there was no perforation of the wall. In the experimental group, the pedicle screw was inserted such that its trajectory perforated the lateral wall. Group assignments were done randomly, and the maximal fixation strength was recorded for each screw pull-out test with a material-testing system (MTS 858 II). Results: The average BMD for both groups was 0.850 g/cm2 (0.788-0.912 g/cm2). The average (and standard deviation) maximal pull-out forces were 1,015.8 ± 249.40 N for the experimental group and 1,326.0 ± 320.50 N for the control group. According to a paired t-test, the difference between the two groups was statistically significant (P < 0.001). Conclusions: The results of this study confirm that the maximal pull-out strength of pedicle screws decreases by approximately 23.4% when the lateral wall is perforated.
3,659
298
[]
5
[ "pedicle", "screw", "strength", "group", "lateral", "vertebral", "screws", "pull", "pull strength", "perforation" ]
[ "screw different vertebral", "screw trajectory vertebral", "placement pedicle screws", "screw fixation lumbar", "pedicle screw placement" ]
null
[CONTENT] Lumbar spine | Pedicular screw insertion | Pull-out strength [SUMMARY]
null
[CONTENT] Lumbar spine | Pedicular screw insertion | Pull-out strength [SUMMARY]
[CONTENT] Lumbar spine | Pedicular screw insertion | Pull-out strength [SUMMARY]
[CONTENT] Lumbar spine | Pedicular screw insertion | Pull-out strength [SUMMARY]
[CONTENT] Lumbar spine | Pedicular screw insertion | Pull-out strength [SUMMARY]
[CONTENT] Aged | Female | Humans | Lumbar Vertebrae | Male | Middle Aged | Pedicle Screws [SUMMARY]
null
[CONTENT] Aged | Female | Humans | Lumbar Vertebrae | Male | Middle Aged | Pedicle Screws [SUMMARY]
[CONTENT] Aged | Female | Humans | Lumbar Vertebrae | Male | Middle Aged | Pedicle Screws [SUMMARY]
[CONTENT] Aged | Female | Humans | Lumbar Vertebrae | Male | Middle Aged | Pedicle Screws [SUMMARY]
[CONTENT] Aged | Female | Humans | Lumbar Vertebrae | Male | Middle Aged | Pedicle Screws [SUMMARY]
[CONTENT] screw different vertebral | screw trajectory vertebral | placement pedicle screws | screw fixation lumbar | pedicle screw placement [SUMMARY]
null
[CONTENT] screw different vertebral | screw trajectory vertebral | placement pedicle screws | screw fixation lumbar | pedicle screw placement [SUMMARY]
[CONTENT] screw different vertebral | screw trajectory vertebral | placement pedicle screws | screw fixation lumbar | pedicle screw placement [SUMMARY]
[CONTENT] screw different vertebral | screw trajectory vertebral | placement pedicle screws | screw fixation lumbar | pedicle screw placement [SUMMARY]
[CONTENT] screw different vertebral | screw trajectory vertebral | placement pedicle screws | screw fixation lumbar | pedicle screw placement [SUMMARY]
[CONTENT] pedicle | screw | strength | group | lateral | vertebral | screws | pull | pull strength | perforation [SUMMARY]
null
[CONTENT] pedicle | screw | strength | group | lateral | vertebral | screws | pull | pull strength | perforation [SUMMARY]
[CONTENT] pedicle | screw | strength | group | lateral | vertebral | screws | pull | pull strength | perforation [SUMMARY]
[CONTENT] pedicle | screw | strength | group | lateral | vertebral | screws | pull | pull strength | perforation [SUMMARY]
[CONTENT] pedicle | screw | strength | group | lateral | vertebral | screws | pull | pull strength | perforation [SUMMARY]
[CONTENT] screws | lateral | screw | placement | technique | pedicle | navigation | techniques | wall | free hand [SUMMARY]
null
[CONTENT] group | experimental | control | control group | strength | experimental control | experimental group | fixation strength | cadaver | experimental control group [SUMMARY]
[CONTENT] strength | screws | perforation | affects fixation strength | fixation strength pedicle screws | strength control | strength control group | strength control group perforation | affects fixation strength pedicle | group perforation occur [SUMMARY]
[CONTENT] pedicle | group | strength | screw | lateral | screws | perforation | experimental | control | pull [SUMMARY]
[CONTENT] pedicle | group | strength | screw | lateral | screws | perforation | experimental | control | pull [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] BMD | 0.850 | g/cm(2 | 0.788 | g/cm(2 ||| 1,015.8 | 249.40 | 1,326.0 | 320.50 ||| two | 0.001 [SUMMARY]
[CONTENT] approximately 23.4% [SUMMARY]
[CONTENT] ||| ||| Twenty | BMD | DEXA ||| Twenty | 6.5-mm | two | two ||| ||| ||| 858 II ||| ||| BMD | 0.850 | g/cm(2 | 0.788 | g/cm(2 ||| 1,015.8 | 249.40 | 1,326.0 | 320.50 ||| two | 0.001 ||| approximately 23.4% [SUMMARY]
[CONTENT] ||| ||| Twenty | BMD | DEXA ||| Twenty | 6.5-mm | two | two ||| ||| ||| 858 II ||| ||| BMD | 0.850 | g/cm(2 | 0.788 | g/cm(2 ||| 1,015.8 | 249.40 | 1,326.0 | 320.50 ||| two | 0.001 ||| approximately 23.4% [SUMMARY]
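The [CONTENT] ... [SUMMARY] prompt strings above all follow one template: terms joined by " | " between two markers. A hypothetical sketch of how they might be assembled is given below; the real generation code is not part of this dump.

```python
# Hypothetical builder for the "[CONTENT] ... [SUMMARY]" prompt strings.
def build_prompt(terms: list[str]) -> str:
    return f"[CONTENT] {' | '.join(terms)} [SUMMARY]"

keywords = ["Lumbar spine", "Pedicular screw insertion", "Pull-out strength"]
assert build_prompt(keywords) == (
    "[CONTENT] Lumbar spine | Pedicular screw insertion | "
    "Pull-out strength [SUMMARY]"
)
```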
Diversity, natural infection and blood meal sources of phlebotomine sandflies (Diptera, Psychodidae) in the western Brazilian Amazon.
31365633
The state of Rondônia (RO) is a hot spot for human cases of cutaneous leishmaniasis. Many sandfly species in RO are putative vectors of leishmaniasis.
BACKGROUND
A sandfly survey was performed between 2016 and 2018 in 10 municipalities categorised into three different environment types: (i) Conservation Unit (CUN) - comprised of preserved ombrophilous forests; (ii) Forest Edge (FE) - small forest fragments; and (iii) Peridomicile (PE) - areas around dwellings.
METHODS
A total of 73 species were identified from 9,535 sandflies. The most abundant species were Psychodopygus davisi (1,741 individuals), Nyssomyia antunesi (1,397), Trichophoromyia auraensis (1,295) and Trichophoromyia ubiquitalis (1,043). Diversity was the highest in CUN, followed by the FE and PE environments. One pool of Ps. davisi tested positive for Leishmania braziliensis, reinforcing the possibility that Ps. davisi acts as a vector. The cytochrome b (cytb) sequences were used to identify three blood meal sources: Bos taurus, Homo sapiens and Tamandua tetradactyla.
FINDINGS
Our results demonstrated that sandflies can switch between blood meal sources in differing environments. This study enhances the knowledge of the vector life cycle in RO and provides information relevant to leishmaniasis surveillance.
MAIN CONCLUSIONS
[ "Animals", "Animals, Domestic", "Brazil", "Disease Reservoirs", "Female", "Forests", "Humans", "Insect Vectors", "Leishmaniasis, Cutaneous", "Male", "Population Density", "Psychodidae", "Urban Population" ]
6663149
null
null
null
null
RESULTS
A total of 73 species and 14 genera were identified from 9,535 individuals (♀4,089/♂5,446) (Table I). Owing to the absence of morphological characters, 118 individuals were identified only at the genus level, with 49 individuals belonging to the genus Trichophoromyia and 69 to Trichopygomyia. The most abundant species were Ps. davisi (1,741 individuals), Ny. antunesi (1,397), Th. auraensis (1,295) and Th. ubiquitalis (1,043); these four species comprised 57% of all individuals collected. A sample coverage analysis indicated that sandfly populations were sufficiently represented in all environments. The CUN, FE and PE environments yielded 5,847, 2,111 and 1,457 individuals and 68, 58 and 47 species, respectively, at 99% sample coverage (Table I). TABLE ISandfly composition, exponential of Shannon entropy index (q = 1) and inverse of Simpson concentration index (q = 2) with its confidence intervals (CI) based on a bootstrap method of 1,000 replications for three environments from the state of Rondônia, BrazilSpeciesCUNFEPETotal%Total (♀/♂) Bichromomyia flaviscutellata (Mangabeira, 1942)a 65 (40/25)65 (23/42)9 (6/3)1391.46 Bichromomyia olmeca nociva (Young & Arias, 1970)--1 (1/0)10.01 Brumptomyia brumpti (Larrousse, 1920)a 14 (2/12)9 (5/4)10 (3/7)330.35 Brumptomyia mesai Sherlock, 19621 (0/1)--10.01 Brumptomyia pintoi (Costa Lima, 1932) 1 (0/1)--10.01 Evandromyia bacula (Martins, Falcão & Silva, 1965)a 9 (6/3)2 (1/1)2 (1/1)130.14 Evandromyia georgii (Freitas & Barrett, 2002)a 22 (19/3)16 (12/4)5 (3/2)430.45 Evandromyia infraspinosa (Mangabeira, 1941)7 (5/2)--70.07 Evandromyia lenti (Mangabeira, 1938)-2 (1/1)2 (1/1)40.04 Evandromyia piperiformis Godoy, Cunha & Galati, 2017--1 (0/1)10.01 Evandromyia saulensis (Floch & Abonnenc, 1944)a 62 (51/11)18 (13/5)14 (11/3)940.99 Evandromyia tarapacaensis (Le Pont, Torrez-Espejo & Galati, 1997)2 (0/2)2 (0/2)-40.04 Evandromyia termitophila (Martins, Falcão & Silva, 1964)a 2 (0/2)4 (2/2)1 (1/0)70.07 Evandromyia walkeri (Newstead, 1941)a 29 (27/2)4 (3/1)46 (15/31)790.83 Evandromyia wilsoni (Dasmasceno & Causey, 1945)17 (4/13)3 (1/2)-200.21 Lutzomyia evangelistai Martins & Fraiha, 19712 (0/2)--20.02 Lutzomyia marinkellei Young, 19792 (0/2)--20.02 Lutzomyia sherlocki Martins, Silva & Falcão, 1971a 72 (47/25)35 (23/12)10 (8/2)1171.23 Martinsmyia waltoni (Arias, Freitas & Barrett, 1984)-1 (0/1)-10.01 Micropygomyia rorotaensis (Floch & Abonnenc, 1944)a 11 (1/10)6 (1/5)5 (2/3)220.23 Micropygomyia trinidadensis (Newstead, 1922)-5 (3/2)1 (1/0)60.06 Micropygomyia villelai (Mangabeira, 1942)a 6 (6/0)6 (5/1)4 (4/0)160.17 Migonemyia migonei (França, 1920)a 8 (3/5)1 (1/0)1 (0/1)100.10 Nyssomyia anduzei (Rozeboom, 1942)26 (17/9)1 (0/1)-270.28 Nyssomyia antunesi (Coutinho, 1939)a 650b (447/203)234b (169/65)513b (235/278)1,39714.65 Nyssomyia delsionatali Galati & Galvis, 2012)a 1 (0/1)1 (0/1)5 (0/5)70.07 Nyssomyia richardwardi (Ready & Fraiha, 1981)123 (97/26)12 (11/1)-1351.42 Nyssomyia shawi (Fraiha, Ward & Ready, 1981)14 (9/5)1 (1/0)-150.16 Nyssomyia umbratilis (Ward & Faiha, 1977)95 (74/21)11 (9/2)-1061.11 Nyssomyia whitmani (Antunes & Coutinho, 1939)a 162b (57/105)106b (44/62)2 (2/0)2702.83 Nyssomyia yuilli yuilli (Young & Porter, 1972)a 70 (69/1)12 (12/0)61 (25/36)1431.50 Pintomyia nevesi (Damasceno & Arouck, 1956)a 21 (18/3)18 (16/2)7 (3/4)460.48 Pintomyia serrana (Damasceno & Arouck, 1949)a 6 (3/3)13 (7/6)2 (2/0)210.22 Pintomyia sp. 
1 (0/1)--10.01 Pressatia calcarata (Martins & Silva, 1964)1 (0/1)--10.01 Pressatia triacantha (Mangabeira, 1942)6 (0/6)--60.06 Psathyromyia aragaoi (Costa Lima, 1932)a 36 (16/20)8 (4/4)2 (1/1)460.48 Psathyromyia b. barretoi (Mangabeira, 1942)1 (0/1)1 (1/0)-20.02 Psathyromyia campbelli (Damasceno, Causey & Arouck, 1945)1 (1/0)2 (1/1)-30.03 Psathyromyia coutinhoi (Mangabeira, 1942)1 (0/1)--10.01 Psathyromyia dendrophyla (Mangabeira, 1942)a 11 (5/6)11 (5/6)4 (3/1)260.27 Psathyromyia dreisbachi (Causey & Damasceno, 1945)a 15 (5/8)12 (11/1)18 (18/0)430.45 Psathyromyia elizabethdorvalae Brilhante, Sábio & Galati, 20172 (0/2)1 (0/1)-30.03 Psathyromyia hermanlenti (Martins, Silva & Falcão, 1970)a 7 (7/0)18 (2/16)21 (11/10)460.48 Psathyromyia lutziana (Costa Lima, 1932)a 6 (3/3)5 (3/2)2 (2/0)130.14 Psathyromyia runoides (Fairchild & Hertig, 1943)-6 (1/5)9 (3/6)150.16 Psychodopygus amazonensis (Root, 1934)a 5 (5/17)3 (3/1)1 (1/0)270.28 Psychodopygus ayrozai (Barretto & Coutinho, 1940)a 10 (10/0)4 (1/3)2 (1/1)160.17 Psychodopygus bispinosus (Fairchild & Hertig, 1951)6 (6/0)--60.06 Psychodopygus c. carrerai (Barretto, 1946)a 376b (103/273)57 (26/31)23 (8/15)4564.78 Psychodopygus chagasi (Costa Lima, 1941)a 150b (118/32)14 (12/2)4 (3/1)1681.76 Psychodopygus claustrei (Root, 1934)a 114 (24/90)42 (7/35)9 (2/7)1651.73 Psychodopygus complexus (Mangabeira, 1941)a 240b (165/75)14 (0/14)9 (0/9)2632.76 Psychodopygus davisi (Root, 1934)a 715b (442/273)671b (239/432)355b (138/217)1,74118.26 Psychodopygus geniculatus (Mangabeira, 1941)a 121b (108/13)21 (15/6)9 (4/5)1511.58 Psychodopygus h. hirsutus (Mangabeira, 1942)a 57 (49/8)107b (45/52)38 (20/18)2022.12 Psychodopygus lainsoni Fraiha & Ward, 1974a 95 (56/39)7 (5/2)2 (0/2)1041.09 Psychodopygus leonidasdeanei (Fraiha, Ryan, Ward, Lainson & Shaw, 1986)115b (97/18)--1151.21 Psychodopygus llanosmartinsi (Fraiha & Ward, 1980)a 40 (26/14)3 (2/1)3 (1/2)460.48 Psychodopygus paraensis (Costa Lima, 1941)19 (8/11)6 (1/5)-250.26 Psychodopygus yucumensis (Le Pont, Caillard, Tibayrenc & Desjeux, 1986)a 4 (1/3)-2 (0/2)60.06 Sciopemyia fluviatilis (floch & Abonnenc, 1944)a 3 (3/0)3 (3/0)1 (0/1)70.07 Sciopemyia servulolimai (Damasceno & Causey, 1945)10 (5/5)7 (4/3)-170.18 Sciopemyia sordellii (Shannon & Del Ponte, 1927)a 94 (50/44)20 (12/8)14 (6/8)1281.34 Trichophoromyia auraensis (Mangabeira, 1942)a 1,261b (9/1252)22 (2/20)12 (0/12)1,29513.58 Trichophoromyia clitella (Young & Pérez, 1944)a 12b (0/122)30 (1/29)22 (0/22)1741.82 Trichophoromyia flochi (Abonnenc & Chassignet, 1948)a 19 (0/19)57 (0/57)24 (0/24)1001.05 Trichophoromyia readyi (Ryan, 1986)4 (0/4)--40.04 Trichophoromyia ruifreitasi Oliveira, Teles, Medeiros, Camargo & Pessoa, 20162 (0/2)--20.02 Trichophoromyia sp.49 (48/1)--490.51 Trichophoromyia ubiquitalis (Mangabeira, 1942)a 525b (62/463)352b (93/259)166b (35/131)1.04310.94 Trichopygomyia dasypodogeton (Castro, 1939)a 107 (1/106)3 (0/3)2 (0/2)1121.17 Trichopygomyia rondoniensis (Martins, Falcão & Silva)6 (0/6)1 (0/1)-70.07 Trichopygomyia sp.69 (69/0)--690.72 Viannamyia caprina (Osorno-Mesa, Moralez & Osorno, 1972)11 (11/0)12 (9/3)-230.24 Viannamyia tuberculata (Mangabeira, 1941)a 15 (15/0)2 (2/0)1 (1/0)180.19Total5,967 (2,530/3,437)2,111 (973/1,238)1,457 (586/871)9,535100Sample coverage (%)999999--Exponential of Shannon entropy index (CI95%)16.6 (16.5-17.7)16.3 (15.9-17.4)8.3 (8.0-8.3)--Inverse of Simpson concentration index (CI95%)8.2 (8.2-8.7)8.5 (8.5-9.3)3.6 (3.6-4.0)-- a: species present in all environments evaluated; b: abundant species in the environment. 
CUN: Conservation unit; FE: Forest Edge; PE: Peridomicile. Species diversity was the highest in the CUN environments, followed by the FE and PE environments. The CUN environments yielded the highest Shannon index (H’) = 19.5 common species and Simpson index (1/D) = 6.9 dominant species, followed by FE with H’ = 14.5 common species and 1/D = 6.9 dominant species and PE with H’ = 10.3 common species and 1/D = 5.2 dominant species (Table I, Fig. 2). Fig. 2: diversity indices based on Hill numbers of the sandfly fauna collected in three environments in the state of Rondônia, Brazil. CUN: Conservation Unit; FE: Forest Edge; PE: Peridomicile. A total of 1,755 females were divided into 274 pools representing 35 species. The PCR targeting of kDNA and hsp70 identified one pool of Ps. davisi infected with Le. (Vi.) braziliensis (query cover = 100%, identity = 100%, GenBank accession KX573933.1) (Fig. 3). The infected pool was collected from an FE environment in the municipality of Monte Negro. Fig. 3: natural infection of sandfly. A: amplified fragment of 120 bp from the kDNA region of the kinetoplast Leishmania species; B: the DNA extracted from the Psychodopygus davisi blood sample was subjected to polymerase chain reaction (PCR), which led to the amplification of a 240 bp hsp70 fragment. The PCR products were subjected to 1.5% agarose gel electrophoresis and stained with 1 μL of GelRed® solution. 1: Ps. davisi sample; M: molecular marker; NC: negative control (water); PC: positive control Leishmania amazonensis reference strain IOC L0575 (IFLA/BR/1967/PH8). Blood meal sources were identified by taking samples from 15 engorged females belonging to the following species: Bi. flaviscutellata (1), Ny. antunesi (4), Psathyromyia dendrophyla (2), Ps. carrerai carrerai (1), Ps. davisi (6) and Ps. hirsutus hirsutus (1). The samples were used to amplify a 358 bp fragment of the cytb gene. The resultant sequences were compared with the GenBank sequences, leading to the identification of three vertebrates with 98-100% similarity: Bos taurus, Homo sapiens and Tamandua tetradactyla (Table II).
TABLE II Vertebrate species identified from engorged sandfly females collected in Forest Edge (FE) and Peridomicile (PE) environments in the state of Rondônia, Brazil
Sandfly species | Blood meal | Municipality (environment) | Accession code | Identity (%) | Total score (n) | Query cover (%) | E-value
Nyssomyia antunesi | Tamandua tetradactyla | Porto Velho (FE) | KT818552.1 | 98.14 | 562 | 100 | 3E-156
Nyssomyia antunesi | Tamandua tetradactyla | Porto Velho (FE) | KT818552.1 | 99.07 | 582 | 100 | 2E-162
Nyssomyia antunesi | Homo sapiens | Vilhena (PE) | KX697544.1 | 100 | 608 | 100 | 4E-170
Bichromomyia flaviscutellata | Homo sapiens | Vilhena (FE) | KX697544.1 | 100 | 643 | 100 | 1E-180
Psathyromyia dendrophyla | Bos taurus | Cacoal (PE) | MK028750.1 | 100 | 603 | 100 | 2E-168
Psychodopygus davisi | Bos taurus | Cacoal (PE) | MK028750.1 | 99.66 | 538 | 100 | 4E-149
Psychodopygus davisi | Bos taurus | Cacoal (PE) | MK028750.1 | 100 | 597 | 100 | 7E-167
Psychodopygus davisi | Bos taurus | Cacoal (PE) | MK028750.1 | 100 | 595 | 100 | 3E-166
Psathyromyia dendrophyla | Bos taurus | Cacoal (PE) | MK028750.1 | 99.62 | 481 | 100 | 6E-132
Psychodopygus davisi | Bos taurus | Cacoal (PE) | EU365345.1 | 99.42 | 625 | 100 | 4E-175
Psychodopygus davisi | Bos taurus | Cacoal (PE) | MK028750.1 | 100 | 603 | 100 | 2E-168
Psychodopygus h. hirsutus | Bos taurus | Cacoal (PE) | MK028750.1 | 99.66 | 542 | 100 | 3E-150
Psychodopygus c. carrerai | Bos taurus | Cacoal (FE) | EU365345.1 | 99.05 | 566 | 100 | 2E-157
Psychodopygus davisi | Homo sapiens | Ji-Paraná (FE) | KX697544.1 | 100 | 647 | 100 | 0.0
Nyssomyia antunesi | Homo sapiens | Ji-Paraná (PE) | KX697544.1 | 100 | 425 | 100 | 3E-115
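The host-identification step above reduces to BLASTing each cytb consensus sequence against GenBank and reading off the identity, query cover and E-value reported in Table II. A minimal Python sketch of that comparison is given below; the authors' exact pipeline is not described beyond BLAST, so the Biopython calls, the placeholder amplicon string and the query-cover arithmetic are illustrative assumptions.

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder for a 358 bp cytb consensus sequence from one engorged female
# (hypothetical; substitute the Phred/Phrap/Consed-derived consensus).
cytb_seq = "ATGACCAACATTCGAAAATCCCACCCACTA"  # truncated for illustration

# Submit a nucleotide BLAST search against the NCBI nt database (needs internet).
handle = NCBIWWW.qblast("blastn", "nt", cytb_seq)
record = NCBIXML.read(handle)

# Report the top hits with identity, an approximate query cover and the E-value.
for aln in record.alignments[:3]:
    hsp = aln.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    cover = 100.0 * (hsp.query_end - hsp.query_start + 1) / len(cytb_seq)
    print(f"{aln.title[:60]} | id={identity:.2f}% | cover={cover:.0f}% | E={hsp.expect:.2g}")
```

A hit such as Bos taurus MK028750.1 at ≥ 99% identity and 100% query cover would then be accepted as the blood meal source.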
null
null
[]
[]
[]
[ "MATERIALS AND METHODS", "RESULTS", "DISCUSSION" ]
[ "\nStudy areas - RO is located in the northern region of Brazil; it\nborders AM to the north, MT to the east and AC to the west and shares an\ninternational border with Bolivia to the southwest (Fig. 1). It has an area of approximately 238,000 km² and contains 52\nmunicipalities.\n\nFig. 1:sandfly collection points distributed in the state of Rondônia,\nBrazil.\n\nThe sandfly fauna was collected in three environments: Conservation Unit (CUN) -\ncharacterised by large areas of ombrophilous rain forest; Forest Edge (FE) -\ncharacterised by small forest fragments near urban areas; and Peridomicile (PE) -\nareas around dwellings that are situated near small forest fragments and contain\nenclosures where domestic animals are raised. Collections in the CUN environment\nwere conducted between 2016 and 2017 at three places (Fig. 1): the Jaru Biological Reserve (REBIO Jaru), which has a territory\nthat covers six municipalities (Machadinho D’Oeste, Vale do Anari, Theobroma, Ouro\nPreto do Oeste, Vale do Paraíso and Ji-Paraná) where collection were made in May and\nDecember of 2016 and April and July of 2017; the Jamari National Forest (FLONA\nJamari), located north of RO in the municipality of Itapuã do Oeste, where\ncollections were made in April and August of 2016 and April and October of 2017; and\nGuajará-Mirim State Park, located to the west of RO between the municipalities of\nNova Mamoré and Guajará-Mirim, where collections were made in May and August of 2016\nand April and November of 2017.\nCollections in the FE and PE environments were conducted between 2016 and 2018 in the\nmunicipalities of Cacaulândia and Monte Negro (where collections were made in\nOctober 2016, June 2017 and May 2018); Cacoal, Ji-Paraná and Vilhena (where\ncollections were made in July 2016, May 2017 and April 2018); and Guajará-Mirim and\nPorto Velho (where collections were made in May 2016, April 2017, and June 2018)\n(Fig. 1).\n\nSandfly collection and identification - In the CUN environments,\ncollections were made along two different trails and sampling was performed twice in\n2016 and twice in 2017. Six Hoover Pugedo (HP) light traps were set along each trail\n(12 HPs/reserve) and collections were made between 06:00 p.m. and 07:00 a.m. for\nfive consecutive days.\nCollections in the FE and PE environments were made from 2016 to 2018 at five\nlocations within each municipality. At each location, one trap was set in the FE\nenvironment and two traps were set in the PE environment, using a total of 15 traps\nper municipality.\nMale sandflies were clarified in 10% potassium hydroxide (KOH), washed in 10% acetic\nacid and slide-mounted in Berlese fluid. Females were divided into engorged and\nnon-engorged specimens, and their heads and genitalia were clarified and\nslide-mounted as above. The thorax and abdomen of each female were stored in a\nmicrotube with 96% ethylic alcohol for further molecular analysis. Species\nidentification was carried out using the morphological characters described by\nGalati.\n2\n\n\n\nMolecular detection of Leishmania - Females were sorted according\nto species abundance, collection location and environment type and were separated\ninto pools of 2-20 specimens. DNA extraction and polymerase chain reaction (PCR)\nassays were performed by targeting kDNA and hsp70, as described\nelsewhere.\n4\n\n,\n\n21\n The Th. ubiquitalis males and the\nLe. 
amazonensis reference strain IOC/L0575\n(IFLA/BR/1967/PH8) were used as positive controls and ultrapure water was used as\nthe negative control.\n\nSandfly blood meal sources - Engorged females were separated\naccording to species, municipality, and environment type. During DNA extraction,\nthree samples were used as negative controls: one sample containing DNA-free water\nand two samples containing a female sandfly with no blood present in the gut. DNA\nextraction was carried out using the phenol/chloroform protocol described by\nSambrook and Russell.\n22\n A PCR was carried out using the primers cytb 1 and\ncytb 2, which are complementary to the conserved region of the\ncytb gene in vertebrate mitochondrial DNA.\n23\n\n\nThe PCR amplification was carried out in a 50 µL reaction volume containing 25 µL\n(1X) Go Taq Colorless (Promega®, Madison, WI, USA), 1.5 µL of each primer\n(cytb 1 and cytb 2, 10 µM each) and 5 µL of\nDNA (< 250 ng). The amplifications were performed in a thermocycler\n(Veriti® - Applied Biosystems, Foster City, CA, USA) with an initial\ndenaturation of 95ºC for 5 min, followed by 35 cycles of denaturation at 95ºC for 30\ns, annealing at 53ºC for 30 s and extension at 72ºC for 1 min, with a final\nextension at 72ºC for 6 min. Amplified products were purified using the QIAquick\nPurification Kit (Qiagen, Hilden, Germany) and submitted to the Fiocruz Sequencing\nFacility (Rio de Janeiro, RJ, Brazil).\n\nData analysis - Interpolation and extrapolation curves (iNEXT) were\nused to evaluate sample coverage and compare diversity indexes (Shannon and Simpson)\nbetween environments. Comparisons were made using Hill numbers expressed as order q\nvalues, and the data were analysed in the R program.\n7\n\n\nThe sequences (hsp 70 and cytb) were analysed using the Phred, Phrap\nand Consed software programs,\n24\n with the minimum value defined as Q = 30. The consensus sequences were\nsubmitted to the Basic Local Alignment Search Tool (BLAST)\n(http://blast.ncbi.nlm.nih.gov/Blast.cgi) and compared with the sequences obtained\nfrom the National Center for Biotechnology Information (NCBI) GenBank database\n(http://www.ncbi.nlm.nih.gov/genbank/).\n\nEthics - The study was performed under authorisations 43702-1 and\n56321-1 SISBIO/ICMBio/MMA.", "A total of 73 species and 14 genera were identified from 9,535 individuals\n(♀4,089/♂5,446) (Table I). Owing to the\nabsence of morphological characters, 118 individuals were identified only at the\ngenus level, with 49 individuals belonging to the genus\nTrichophoromyia and 69 to Trichopygomyia. The\nmost abundant species were Ps. davisi (1,741\nindividuals), Ny. antunesi (1,397),\nTh. auraensis (1,295) and Th.\nubiquitalis (1,043); these four species comprised 57% of all\nindividuals collected.\nA sample coverage analysis indicated that sandfly populations were sufficiently\nrepresented in all environments. 
The CUN, FE and PE environments yielded 5,847,\n2,111 and 1,457 individuals and 68, 58 and 47 species, respectively, at 99% sample\ncoverage (Table I).\n\nTABLE ISandfly composition, exponential of Shannon entropy index (q = 1) and\ninverse of Simpson concentration index (q = 2) with its confidence\nintervals (CI) based on a bootstrap method of 1,000 replications for\nthree environments from the state of Rondônia, BrazilSpeciesCUNFEPETotal%Total (♀/♂)\n\n\nBichromomyia flaviscutellata (Mangabeira, 1942)a\n65 (40/25)65 (23/42)9 (6/3)1391.46\nBichromomyia olmeca nociva (Young & Arias,\n1970)--1 (1/0)10.01\nBrumptomyia brumpti (Larrousse, 1920)a\n14 (2/12)9 (5/4)10 (3/7)330.35\nBrumptomyia mesai Sherlock, 19621 (0/1)--10.01\nBrumptomyia pintoi (Costa Lima, 1932) 1 (0/1)--10.01\nEvandromyia bacula (Martins, Falcão &\nSilva, 1965)a\n9 (6/3)2 (1/1)2 (1/1)130.14\nEvandromyia georgii (Freitas & Barrett, 2002)a\n22 (19/3)16 (12/4)5 (3/2)430.45\nEvandromyia infraspinosa (Mangabeira,\n1941)7 (5/2)--70.07\nEvandromyia lenti (Mangabeira, 1938)-2 (1/1)2 (1/1)40.04\nEvandromyia piperiformis Godoy, Cunha &\nGalati, 2017--1 (0/1)10.01\nEvandromyia saulensis (Floch & Abonnenc, 1944)a\n62 (51/11)18 (13/5)14 (11/3)940.99\nEvandromyia tarapacaensis (Le Pont,\nTorrez-Espejo & Galati, 1997)2 (0/2)2 (0/2)-40.04\nEvandromyia termitophila (Martins, Falcão &\nSilva, 1964)a\n2 (0/2)4 (2/2)1 (1/0)70.07\nEvandromyia walkeri (Newstead, 1941)a\n29 (27/2)4 (3/1)46 (15/31)790.83\nEvandromyia wilsoni (Dasmasceno & Causey,\n1945)17 (4/13)3 (1/2)-200.21\nLutzomyia evangelistai Martins & Fraiha,\n19712 (0/2)--20.02\nLutzomyia marinkellei Young, 19792 (0/2)--20.02\nLutzomyia sherlocki Martins, Silva &\nFalcão, 1971a\n72 (47/25)35 (23/12)10 (8/2)1171.23\nMartinsmyia waltoni (Arias, Freitas &\nBarrett, 1984)-1 (0/1)-10.01\nMicropygomyia rorotaensis (Floch &\nAbonnenc, 1944)a\n11 (1/10)6 (1/5)5 (2/3)220.23\nMicropygomyia trinidadensis (Newstead,\n1922)-5 (3/2)1 (1/0)60.06\nMicropygomyia villelai (Mangabeira, 1942)a\n6 (6/0)6 (5/1)4 (4/0)160.17\nMigonemyia migonei (França, 1920)a\n8 (3/5)1 (1/0)1 (0/1)100.10\nNyssomyia anduzei (Rozeboom, 1942)26 (17/9)1 (0/1)-270.28\nNyssomyia antunesi (Coutinho, 1939)a\n650b (447/203)234b (169/65)513b (235/278)1,39714.65\nNyssomyia delsionatali Galati & Galvis, 2012)a\n1 (0/1)1 (0/1)5 (0/5)70.07\nNyssomyia richardwardi (Ready & Fraiha,\n1981)123 (97/26)12 (11/1)-1351.42\nNyssomyia shawi (Fraiha, Ward & Ready,\n1981)14 (9/5)1 (1/0)-150.16\nNyssomyia umbratilis (Ward & Faiha,\n1977)95 (74/21)11 (9/2)-1061.11\nNyssomyia whitmani (Antunes & Coutinho, 1939)a\n162b (57/105)106b (44/62)2 (2/0)2702.83\nNyssomyia yuilli yuilli (Young & Porter, 1972)a\n70 (69/1)12 (12/0)61 (25/36)1431.50\nPintomyia nevesi (Damasceno & Arouck, 1956)a\n21 (18/3)18 (16/2)7 (3/4)460.48\nPintomyia serrana (Damasceno & Arouck, 1949)a\n6 (3/3)13 (7/6)2 (2/0)210.22\nPintomyia sp. 1 (0/1)--10.01\nPressatia calcarata (Martins & Silva,\n1964)1 (0/1)--10.01\nPressatia triacantha (Mangabeira, 1942)6 (0/6)--60.06\nPsathyromyia aragaoi (Costa Lima, 1932)a\n36 (16/20)8 (4/4)2 (1/1)460.48\nPsathyromyia b. 
barretoi\n(Mangabeira, 1942)1 (0/1)1 (1/0)-20.02\nPsathyromyia campbelli (Damasceno, Causey &\nArouck, 1945)1 (1/0)2 (1/1)-30.03\nPsathyromyia coutinhoi (Mangabeira, 1942)1 (0/1)--10.01\nPsathyromyia dendrophyla (Mangabeira, 1942)a\n11 (5/6)11 (5/6)4 (3/1)260.27\nPsathyromyia dreisbachi (Causey &\nDamasceno, 1945)a\n15 (5/8)12 (11/1)18 (18/0)430.45\nPsathyromyia elizabethdorvalae Brilhante, Sábio\n& Galati, 20172 (0/2)1 (0/1)-30.03\nPsathyromyia hermanlenti (Martins, Silva &\nFalcão, 1970)a\n7 (7/0)18 (2/16)21 (11/10)460.48\nPsathyromyia lutziana (Costa Lima, 1932)a\n6 (3/3)5 (3/2)2 (2/0)130.14\nPsathyromyia runoides (Fairchild & Hertig,\n1943)-6 (1/5)9 (3/6)150.16\nPsychodopygus amazonensis (Root, 1934)a\n5 (5/17)3 (3/1)1 (1/0)270.28\nPsychodopygus ayrozai (Barretto & Coutinho, 1940)a\n10 (10/0)4 (1/3)2 (1/1)160.17\nPsychodopygus bispinosus (Fairchild &\nHertig, 1951)6 (6/0)--60.06\nPsychodopygus c. carrerai\n(Barretto, 1946)a\n376b (103/273)57 (26/31)23 (8/15)4564.78\nPsychodopygus chagasi (Costa Lima, 1941)a\n150b (118/32)14 (12/2)4 (3/1)1681.76\nPsychodopygus claustrei (Root, 1934)a\n114 (24/90)42 (7/35)9 (2/7)1651.73\nPsychodopygus complexus (Mangabeira, 1941)a\n240b (165/75)14 (0/14)9 (0/9)2632.76\nPsychodopygus davisi (Root, 1934)a\n715b (442/273)671b (239/432)355b (138/217)1,74118.26\nPsychodopygus geniculatus (Mangabeira, 1941)a\n121b (108/13)21 (15/6)9 (4/5)1511.58\nPsychodopygus h. hirsutus\n(Mangabeira, 1942)a\n57 (49/8)107b (45/52)38 (20/18)2022.12\nPsychodopygus lainsoni Fraiha & Ward, 1974a\n95 (56/39)7 (5/2)2 (0/2)1041.09\nPsychodopygus leonidasdeanei (Fraiha, Ryan,\nWard, Lainson & Shaw, 1986)115b (97/18)--1151.21\nPsychodopygus llanosmartinsi (Fraiha &\nWard, 1980)a\n40 (26/14)3 (2/1)3 (1/2)460.48\nPsychodopygus paraensis (Costa Lima, 1941)19 (8/11)6 (1/5)-250.26\nPsychodopygus yucumensis (Le Pont, Caillard,\nTibayrenc & Desjeux, 1986)a\n4 (1/3)-2 (0/2)60.06\nSciopemyia fluviatilis (floch & Abonnenc, 1944)a\n3 (3/0)3 (3/0)1 (0/1)70.07\nSciopemyia servulolimai (Damasceno &\nCausey, 1945)10 (5/5)7 (4/3)-170.18\nSciopemyia sordellii (Shannon & Del Ponte, 1927)a\n94 (50/44)20 (12/8)14 (6/8)1281.34\nTrichophoromyia auraensis (Mangabeira, 1942)a\n1,261b (9/1252)22 (2/20)12 (0/12)1,29513.58\nTrichophoromyia clitella (Young & Pérez, 1944)a\n12b (0/122)30 (1/29)22 (0/22)1741.82\nTrichophoromyia flochi (Abonnenc &\nChassignet, 1948)a\n19 (0/19)57 (0/57)24 (0/24)1001.05\nTrichophoromyia readyi (Ryan, 1986)4 (0/4)--40.04\nTrichophoromyia ruifreitasi Oliveira, Teles,\nMedeiros, Camargo & Pessoa, 20162 (0/2)--20.02\nTrichophoromyia sp.49 (48/1)--490.51\nTrichophoromyia ubiquitalis (Mangabeira, 1942)a\n525b (62/463)352b (93/259)166b (35/131)1.04310.94\nTrichopygomyia dasypodogeton (Castro, 1939)a\n107 (1/106)3 (0/3)2 (0/2)1121.17\nTrichopygomyia rondoniensis (Martins, Falcão\n& Silva)6 (0/6)1 (0/1)-70.07\nTrichopygomyia sp.69 (69/0)--690.72\nViannamyia caprina (Osorno-Mesa, Moralez &\nOsorno, 1972)11 (11/0)12 (9/3)-230.24\nViannamyia tuberculata (Mangabeira, 1941)a\n15 (15/0)2 (2/0)1 (1/0)180.19Total5,967 (2,530/3,437)2,111 (973/1,238)1,457 (586/871)9,535100Sample coverage (%)999999--Exponential of Shannon entropy index (CI95%)16.6 (16.5-17.7)16.3 (15.9-17.4)8.3 (8.0-8.3)--Inverse of Simpson concentration index (CI95%)8.2 (8.2-8.7)8.5 (8.5-9.3)3.6 (3.6-4.0)--\na: species present in all environments evaluated;\nb: abundant species in the environment. CUN:\nConservation unit; FE: Forest Edge; PE: Peridomicile. 
\n\n\na: species present in all environments evaluated;\nb: abundant species in the environment. CUN:\nConservation unit; FE: Forest Edge; PE: Peridomicile. \nSpecies diversity was the highest in the CUN environments, followed by the FE and PE\nenvironments. The CUN environments yielded the highest Shannon index (H’) = 19.5\ncommon species and Simpson index (1/D) = 6.9 dominant species, followed by FE with\nH’ = 14.5 common species and 1/D = 6.9 dominant species and PE with H’ = 10.3 common\nspecies and 1/D = 5.2 dominant species (Table\nI, Fig. 2).\n\nFig. 2:index diversities based on Hill numbers of the sandfly fauna\ncollected in three environments in the state of Rondônia, Brazil. CUN:\nConservation Unit; FE: Forest Edge; PE: Peridomicile.\n\nA total of 1,755 females were divided into 274 pools representing 35 species. The PCR\ntargeting of kDNA and hsp70 identified one pool of\nPs. davisi infected with Le.\n(Vi.) braziliensis (query cover = 100%,\nidentity = 100%, GenBank accession KX573933.1) (Fig.\n3). The infected pool was collected from an FE environment in the\nmunicipality of Monte Negro.\n\nFig. 3:natural infection of sandfly. A: amplified fragment of 120 bp from\nthe kDNA region of the kinetoplast\nLeishmania species; B: the DNA extracted from the\nPsychodopygus davisi blood sample was subjected to\npolymerase chain reaction (PCR), which led to the amplification of a 240\nbp hsp70 fragment. The PCR products were subjected to 1.5% agarose gel\nelectrophoresis and stained with 1 μL of GelRed® solution. 1:\nPs. davisi sample; M: molecular Maker; NC: negative\ncontrol (water); PC: positive control Leishmania\namazonensis reference strain IOC L0575\n(IFLA/BR/1967/PH8).\n\nBlood meal sources were identified by taking samples from 15 engorged females\nbelonging to the following species: Bi.\nflaviscutellata (1), Ny.\nantunesi (4), Psathyromyia dendrophyla (2),\nPs. carrerai carrerai (1), Ps.\ndavisi (6) and Ps. hirsutus\nhirsutus (1). The samples were used to amplify a 358 bp fragment of the\ncytb gene. The resultant sequences were compared with the\nGenBank sequences, leading to the identification of three vertebrates with 98-100%\nsimilarity: Bos taurus, Homo sapiens and\nTamandua tetradactyla (Table\nII).\n\nTABLE IIVertebrate species identified from engorged sandfly females collected\nForest Edge (FE) and Peridomicile (PE) environments in the state of\nRondônia, BrazilSandfly speciesBlood mealMunicipality (environment)Accession codeIdentity (%)Total score (n)Query cover (%)E-value\nNyssomyia antunesi\n\nTamandua tetradactyla\nPorto Velho (FE)KT818552.198.145621003E-156\nNyssomyia antunesi\n\nTamandua tetradactyla\nPorto Velho (FE)KT818552.199.075821002E-162\nNyssomyia antunesi\n\nHomo sapiens\nVilhena (PE)KX697544.11006081004E-170\nBichromomyia flaviscutellata\n\nHomo sapiens\nVilhena (FE)KX697544.11006431001E-180\nPsathyromyia dendrophyla\n\nBos taurus\nCacoal (PE)MK028750.11006031002E-168\nPsychodopygus davisi\n\nBos taurus\nCacoal (PE)MK028750.199.665381004E-149\nPsychodopygus davisi\n\nBos taurus\nCacoal (PE)MK028750.11005971007E-167\nPsychodopygus davisi\n\nBos taurus\nCacoal (PE)MK028750.11005951003E-166\nPsathyromyia dendrophyla\n\nBos taurus\nCacoal (PE)MK028750.199.624811006E-132\nPsychodopygus davisi\n\nBos taurus\nCacoal (PE)EU365345.199.426251004E-175\nPsychodopygus davisi\n\nBos taurus\nCacoal (PE)MK028750.11006031002E-168\nPsychodopygus h. hirsutus\n\nBos taurus\nCacoal (PE)MK028750.199.665421003E-150\nPsychodopygus c. 
carrerai\n\nBos taurus\nCacoal (FE)EU365345.199.055661002E-157\nPsychodopygus davisi\n\nHomo sapiens\nJi-Paraná (FE)KX697544.11006471000.0\nNyssomyia antunesi\n\nHomo sapiens\nJi-Paraná (PE)KX697544.11004251003E-115\n", "The epidemiological pattern of leishmaniasis in RO is characterised by a zoonotic or\nsylvatic transmission cycle in which humans might acquire infection via exposure to\nsandflies in the Amazon rainforest.\n5\n\n,\n\n13\n\n,\n\n16\n\n\nA total of 73 species were registered in this study, which demonstrated a higher\nlevel of species richness than the previous surveys conducted in RO.\n7\n\n,\n\n13\n\n,\n\n25\n Sandfly diversity was the highest in protected environments. The CUN\nenvironment exhibited the highest levels of species richness was found in the CUN\nenvironment, followed by the FE and PE environments; these findings corroborate\nthose of the previous studies conducted in the Amazon region.\n4\n\n,\n\n21\n\n,\n\n26\n\n,\n\n27\n Although sampling methods differed between environments, it was still\npossible to perform reliable diversity comparisons because the sample coverage was\n99% in each environment.\nOur data demonstrated that sandflies might serve as biodiversity indicators. The\nspecies richness was reduced by 10 species in the FE environments and by 21 species\nin the PE environments relative to the CUN environments. This indicates that the\nreduction of forests to small fragments affects sandfly composition primarily by\neliminating the species that occur in minor abundance (rare species).\n26\n\n,\n\n27\n\n\nThe sandfly species Ny. antunesi, Ps. davisi and\nTh. ubiquitalis were abundant in all environments. Humans\ngenerally come in close proximity to forest fragments while engaged in agriculture\nor activities like hunting and fishing. These activities increase the risk of\nexposure to Leishmania vectors in possible transmission foci,\n25\n\n,\n\n28\n\n) and that risk is exacerbated by the abundance of Ny.\nantunesi, Ps. davisi and Th.\nubiquitalis in the areas we studied. One pool of Ps.\ndavisi tested positive for Le.\n(Vi.) braziliensis DNA in the municipality of\nMonte Negro. This finding is significant because Le.\nbraziliensis is responsible for 50% of human CL cases in the\nrural population of Monte Negro.\n25\n\n,\n\n28\n\nPs. davisi is an abundant species in this region,\n25\n as well as in other parts of RO,\n4\n\n,\n\n7\n\n,\n\n13\n and Ps. davisi individuals have been\npreviously found to be infected with Le. (Vi.)\nbraziliensis\n\n5\n and Le. (Vi.) naiffi.\n\n13\n The discovery of this infection in the current study further supports the\nevidence that Ps. davisi might act as a vector in\nRO.\nFurthermore, Ny. antunesi and Th.\nubiquitalis might act as vectors in RO. Both species were found\nin high abundance in this as well as other studies conducted throughout the\nstate,\n4\n\n,\n\n13\n\n,\n\n17\n\n,\n\n25\n and the susceptibility of these species to natural infection by\nLeishmania has been demonstrated in the two studies performed\nin Porto Velho.\n4\n\n,\n\n17\n Furthermore, both species are suspected vectors in AM and MT which border\nRO.\n21\n\n,\n\n29\n\n\nOther abundant species found in RO included Ny.\nwhitmani, Ps. carrerai\ncarrerai, Ps. complexus,\nPs. hirsutus hirsutus and Th.\nauraensis. Ny. whitmani has already been\nrecorded in abundance in RO.\n13\n\n,\n\n25\n\nNy. 
whitmani and Ps.\nhirsutus hirsutus were found in high abundance in the FE\nenvironments, suggesting that these species are confined to degraded environments;\nboth species have been associated with dense forest environments\n13\n\n,\n\n25\n and with environments impacted by anthropic activities.\n25\n\n,\n\n30\n Neither species has been found infected with Leishmania in\nRO;\n4\n\n,\n\n5\n\n,\n\n7\n\n,\n\n17\n however, both species are putative vectors in the Amazon region because they\nhave been found infected with Leishmania in PA.\n6\n\n\nIn RO, Ps. carrerai carrerai occurs mainly in dense\nforest environments.\n6\n\n,\n\n7\n\n,\n\n13\n Only three studies conducted in central RO have demonstrated the\npredominance of Ps. carrerai carrerai and\nPs. complexus in this region.\n13\n\n,\n\n25\n\nPs. carrerai carrerai has been found to carry promastigote\nflagellates identified as Le. (Vi.)\nbraziliensis\n\n5\n and carrying Leishmania DNA.\n7\n\n\n\nTrichophoromyia auraensis is abundant primarily in the\nmunicipalities of Guajará-Mirim and Porto Velho and in central RO\n13\n\n,\n\n17\n where Leishmania DNA has been detected in females.\n7\n\n,\n\n17\n In our study, Th. auraensis was found abundant only in the\nCUN environments; however, the natural infection of Th.\nauraensis by Leishmania spp has been\nreported\n30\n and Th. auraensis has been found in\nabundance in both FE and PE environments in AC of western Brazil.\n19\n\n,\n\n30\n\n\nIn our analysis of blood meal sources, blood was taken from the stomachs of 15\nengorged females, and the PCR amplification of the cytb gene led to\nthe identification of the DNA belonging to humans (H. sapiens),\ndomestic animals (Bos taurus) and sylvatic animals such as\nanteaters (T. tetradactyla). The DNA belonging to Bos\ntaurus and H. sapiens was present in samples from\nevery collection made in the PE and FE environments, and this DNA was found in the\nblood taken from 13 female specimens belonging to the species: Bi.\nflaviscutellata, Ny. antunesi,\nPa. dendrophyla, Ps.\ncarrerai carrerai, Ps. davisi\nand Ps. hirsutus hirsutus. The T.\ntetradactyla DNA was present in the blood sample taken from two\nNy. antunesi females captured in the FE\nenvironments. Although 15 engorged females represents a small sample size relative\nto other studies, our findings are significant because previous studies targeting\nthe cytb gene have detected the DNA of only domestic animals, such\nas cats, dogs, chickens, bovines, equines and pigs.\n18\n\n-\n\n20\n\n\nPreserved environments showed the highest variety of blood meal sources for\nsandflies, and a large variety of blood meal sources guarantees the maintenance of\nthe gonotrophic cycle.\n12\n Anteaters (Tamandua spp) act as possible reservoirs for\nsome Leishmania species, such as Le.\n(Vi.) guyanensis,\n12\n and in degraded natural habitats, the scarcity of sylvatic reservoirs, such\nas anteaters, might cause sandflies to migrate to the PE environments to find new\nblood meal sources.\n4\n\n,\n\n18\n\n,\n\n19\n\n,\n\n27\n In our study, the PE collection points were close to small forest fragments,\nand the availability of blood meal sources in the form of domestic animals, such as\nBos taurus, might have attracted sandflies from the forest\nfragments.\nFew studies have examined the importance of blood meal sources in the Brazilian\nAmazon. 
Recently, in the municipality of Rio Branco (AC), the blood collected from\nthe intestinal contents of the two specimens of Ps.\ndavisi was subjected to PCR targeting the cytb\ngene; this led to the identification of Gallus gallus as a blood\nmeal source.\n19\n The current study improves our knowledge of blood meal sources by\ndemonstrating that vectors, such as Ny. antunesi\nand Ps. davisi, feed on humans and bovines in the\nPE environments and feed on sylvatic animals, such as anteaters in the FE\nenvironments.\nOur study largely corroborates the findings of previous studies concerned with the\ntransmission cycle of leishmaniasis in RO. However, the fact that sandflies are\nusing humans and domestic animals as blood meal sources indicates that the\ntransmission profile might be changing in the PE environments. These findings can be\nused to enhance the epidemiological surveillance of leishmaniasis in RO." ]
[ "materials|methods", "results", "discussion" ]
[ "Leishmania", "vector", "reservoirs", "protected areas", "state of Rondônia" ]
MATERIALS AND METHODS: Study areas - RO is located in the northern region of Brazil; it borders AM to the north, MT to the east and AC to the west and shares an international border with Bolivia to the southwest (Fig. 1). It has an area of approximately 238,000 km² and contains 52 municipalities. Fig. 1: sandfly collection points distributed in the state of Rondônia, Brazil. The sandfly fauna was collected in three environments: Conservation Unit (CUN) - characterised by large areas of ombrophilous rain forest; Forest Edge (FE) - characterised by small forest fragments near urban areas; and Peridomicile (PE) - areas around dwellings that are situated near small forest fragments and contain enclosures where domestic animals are raised. Collections in the CUN environment were conducted between 2016 and 2017 at three places (Fig. 1): the Jaru Biological Reserve (REBIO Jaru), which has a territory that covers six municipalities (Machadinho D’Oeste, Vale do Anari, Theobroma, Ouro Preto do Oeste, Vale do Paraíso and Ji-Paraná) where collections were made in May and December of 2016 and April and July of 2017; the Jamari National Forest (FLONA Jamari), located north of RO in the municipality of Itapuã do Oeste, where collections were made in April and August of 2016 and April and October of 2017; and Guajará-Mirim State Park, located to the west of RO between the municipalities of Nova Mamoré and Guajará-Mirim, where collections were made in May and August of 2016 and April and November of 2017. Collections in the FE and PE environments were conducted between 2016 and 2018 in the municipalities of Cacaulândia and Monte Negro (where collections were made in October 2016, June 2017 and May 2018); Cacoal, Ji-Paraná and Vilhena (where collections were made in July 2016, May 2017 and April 2018); and Guajará-Mirim and Porto Velho (where collections were made in May 2016, April 2017, and June 2018) (Fig. 1). Sandfly collection and identification - In the CUN environments, collections were made along two different trails and sampling was performed twice in 2016 and twice in 2017. Six Hoover Pugedo (HP) light traps were set along each trail (12 HPs/reserve) and collections were made between 06:00 p.m. and 07:00 a.m. for five consecutive days. Collections in the FE and PE environments were made from 2016 to 2018 at five locations within each municipality. At each location, one trap was set in the FE environment and two traps were set in the PE environment, using a total of 15 traps per municipality. Male sandflies were clarified in 10% potassium hydroxide (KOH), washed in 10% acetic acid and slide-mounted in Berlese fluid. Females were divided into engorged and non-engorged specimens, and their heads and genitalia were clarified and slide-mounted as above. The thorax and abdomen of each female were stored in a microtube with 96% ethylic alcohol for further molecular analysis. Species identification was carried out using the morphological characters described by Galati. 2 Molecular detection of Leishmania - Females were sorted according to species abundance, collection location and environment type and were separated into pools of 2-20 specimens. DNA extraction and polymerase chain reaction (PCR) assays were performed by targeting kDNA and hsp70, as described elsewhere. 4 , 21 The Th. ubiquitalis males and the Le. amazonensis reference strain IOC/L0575 (IFLA/BR/1967/PH8) were used as positive controls and ultrapure water was used as the negative control. 
Sandfly blood meal sources - Engorged females were separated according to species, municipality, and environment type. During DNA extraction, three samples were used as negative controls: one sample containing DNA-free water and two samples containing a female sandfly with no blood present in the gut. DNA extraction was carried out using the phenol/chloroform protocol described by Sambrook and Russell. 22 A PCR was carried out using the primers cytb 1 and cytb 2, which are complementary to the conserved region of the cytb gene in vertebrate mitochondrial DNA. 23 The PCR amplification was carried out in a 50 µL reaction volume containing 25 µL (1X) Go Taq Colorless (Promega®, Madison, WI, USA), 1.5 µL of each primer (cytb 1 and cytb 2, 10 µM each) and 5 µL of DNA (< 250 ng). The amplifications were performed in a thermocycler (Veriti® - Applied Biosystems, Foster City, CA, USA) with an initial denaturation of 95ºC for 5 min, followed by 35 cycles of denaturation at 95ºC for 30 s, annealing at 53ºC for 30 s and extension at 72ºC for 1 min, with a final extension at 72ºC for 6 min. Amplified products were purified using the QIAquick Purification Kit (Qiagen, Hilden, Germany) and submitted to the Fiocruz Sequencing Facility (Rio de Janeiro, RJ, Brazil). Data analysis - Interpolation and extrapolation curves (iNEXT) were used to evaluate sample coverage and compare diversity indexes (Shannon and Simpson) between environments. Comparisons were made using Hill numbers expressed as order q values, and the data were analysed in the R program. 7 The sequences (hsp70 and cytb) were analysed using the Phred, Phrap and Consed software programs, 24 with the minimum value defined as Q = 30. The consensus sequences were submitted to the Basic Local Alignment Search Tool (BLAST) (http://blast.ncbi.nlm.nih.gov/Blast.cgi) and compared with the sequences obtained from the National Center for Biotechnology Information (NCBI) GenBank database (http://www.ncbi.nlm.nih.gov/genbank/). Ethics - The study was performed under authorisations 43702-1 and 56321-1 SISBIO/ICMBio/MMA. RESULTS: A total of 73 species and 14 genera were identified from 9,535 individuals (♀4,089/♂5,446) (Table I). Owing to the absence of morphological characters, 118 individuals were identified only at the genus level, with 49 individuals belonging to the genus Trichophoromyia and 69 to Trichopygomyia. The most abundant species were Ps. davisi (1,741 individuals), Ny. antunesi (1,397), Th. auraensis (1,295) and Th. ubiquitalis (1,043); these four species comprised 57% of all individuals collected. A sample coverage analysis indicated that sandfly populations were sufficiently represented in all environments. The CUN, FE and PE environments yielded 5,967, 2,111 and 1,457 individuals and 68, 58 and 47 species, respectively, at 99% sample coverage (Table I). 
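The Hill-number comparison described under Data analysis reduces to one formula per order q: ⁰D is species richness, ¹D the exponential of Shannon entropy and ²D the inverse Simpson concentration. The sketch below reproduces that arithmetic, plus a Good-Turing coverage estimate of the form used by iNEXT, in Python rather than the authors' R/iNEXT workflow; the abundance vector is an illustrative stand-in, not the study data, and the bootstrap confidence intervals are omitted.

```python
import numpy as np

def hill_number(counts, q):
    """Hill number of order q from raw species abundances."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    if q == 1:  # limiting case: exponential of Shannon entropy
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

def sample_coverage(counts):
    """Good-Turing sample-coverage estimate from singletons and doubletons."""
    c = np.asarray(counts)
    n, f1, f2 = c.sum(), np.sum(c == 1), np.sum(c == 2)
    if f1 == 0:
        return 1.0
    return float(1.0 - (f1 / n) * ((n - 1) * f1 / ((n - 1) * f1 + 2 * f2)))

# Illustrative abundances standing in for one environment's species counts.
abund = [715, 650, 525, 376, 240, 162, 150, 123, 121, 115, 94, 72, 2, 1, 1]
print(hill_number(abund, 0))   # q = 0: species richness
print(hill_number(abund, 1))   # q = 1: effective number of common species
print(hill_number(abund, 2))   # q = 2: effective number of dominant species
print(sample_coverage(abund))  # fraction of individuals from detected species
```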
TABLE I Sandfly composition, exponential of Shannon entropy index (q = 1) and inverse of Simpson concentration index (q = 2) with their confidence intervals (CI) based on a bootstrap method of 1,000 replications for three environments from the state of Rondônia, BrazilSpeciesCUNFEPETotal%Total (♀/♂) Bichromomyia flaviscutellata (Mangabeira, 1942)a 65 (40/25)65 (23/42)9 (6/3)1391.46 Bichromomyia olmeca nociva (Young & Arias, 1970)--1 (1/0)10.01 Brumptomyia brumpti (Larrousse, 1920)a 14 (2/12)9 (5/4)10 (3/7)330.35 Brumptomyia mesai Sherlock, 19621 (0/1)--10.01 Brumptomyia pintoi (Costa Lima, 1932) 1 (0/1)--10.01 Evandromyia bacula (Martins, Falcão & Silva, 1965)a 9 (6/3)2 (1/1)2 (1/1)130.14 Evandromyia georgii (Freitas & Barrett, 2002)a 22 (19/3)16 (12/4)5 (3/2)430.45 Evandromyia infraspinosa (Mangabeira, 1941)7 (5/2)--70.07 Evandromyia lenti (Mangabeira, 1938)-2 (1/1)2 (1/1)40.04 Evandromyia piperiformis Godoy, Cunha & Galati, 2017--1 (0/1)10.01 Evandromyia saulensis (Floch & Abonnenc, 1944)a 62 (51/11)18 (13/5)14 (11/3)940.99 Evandromyia tarapacaensis (Le Pont, Torrez-Espejo & Galati, 1997)2 (0/2)2 (0/2)-40.04 Evandromyia termitophila (Martins, Falcão & Silva, 1964)a 2 (0/2)4 (2/2)1 (1/0)70.07 Evandromyia walkeri (Newstead, 1941)a 29 (27/2)4 (3/1)46 (15/31)790.83 Evandromyia wilsoni (Damasceno & Causey, 1945)17 (4/13)3 (1/2)-200.21 Lutzomyia evangelistai Martins & Fraiha, 19712 (0/2)--20.02 Lutzomyia marinkellei Young, 19792 (0/2)--20.02 Lutzomyia sherlocki Martins, Silva & Falcão, 1971a 72 (47/25)35 (23/12)10 (8/2)1171.23 Martinsmyia waltoni (Arias, Freitas & Barrett, 1984)-1 (0/1)-10.01 Micropygomyia rorotaensis (Floch & Abonnenc, 1944)a 11 (1/10)6 (1/5)5 (2/3)220.23 Micropygomyia trinidadensis (Newstead, 1922)-5 (3/2)1 (1/0)60.06 Micropygomyia villelai (Mangabeira, 1942)a 6 (6/0)6 (5/1)4 (4/0)160.17 Migonemyia migonei (França, 1920)a 8 (3/5)1 (1/0)1 (0/1)100.10 Nyssomyia anduzei (Rozeboom, 1942)26 (17/9)1 (0/1)-270.28 Nyssomyia antunesi (Coutinho, 1939)a 650b (447/203)234b (169/65)513b (235/278)1,39714.65 Nyssomyia delsionatali Galati & Galvis, 2012a 1 (0/1)1 (0/1)5 (0/5)70.07 Nyssomyia richardwardi (Ready & Fraiha, 1981)123 (97/26)12 (11/1)-1351.42 Nyssomyia shawi (Fraiha, Ward & Ready, 1981)14 (9/5)1 (1/0)-150.16 Nyssomyia umbratilis (Ward & Fraiha, 1977)95 (74/21)11 (9/2)-1061.11 Nyssomyia whitmani (Antunes & Coutinho, 1939)a 162b (57/105)106b (44/62)2 (2/0)2702.83 Nyssomyia yuilli yuilli (Young & Porter, 1972)a 70 (69/1)12 (12/0)61 (25/36)1431.50 Pintomyia nevesi (Damasceno & Arouck, 1956)a 21 (18/3)18 (16/2)7 (3/4)460.48 Pintomyia serrana (Damasceno & Arouck, 1949)a 6 (3/3)13 (7/6)2 (2/0)210.22 Pintomyia sp. 1 (0/1)--10.01 Pressatia calcarata (Martins & Silva, 1964)1 (0/1)--10.01 Pressatia triacantha (Mangabeira, 1942)6 (0/6)--60.06 Psathyromyia aragaoi (Costa Lima, 1932)a 36 (16/20)8 (4/4)2 (1/1)460.48 Psathyromyia b. 
barretoi (Mangabeira, 1942)1 (0/1)1 (1/0)-20.02 Psathyromyia campbelli (Damasceno, Causey & Arouck, 1945)1 (1/0)2 (1/1)-30.03 Psathyromyia coutinhoi (Mangabeira, 1942)1 (0/1)--10.01 Psathyromyia dendrophyla (Mangabeira, 1942)a 11 (5/6)11 (5/6)4 (3/1)260.27 Psathyromyia dreisbachi (Causey & Damasceno, 1945)a 15 (5/8)12 (11/1)18 (18/0)430.45 Psathyromyia elizabethdorvalae Brilhante, Sábio & Galati, 20172 (0/2)1 (0/1)-30.03 Psathyromyia hermanlenti (Martins, Silva & Falcão, 1970)a 7 (7/0)18 (2/16)21 (11/10)460.48 Psathyromyia lutziana (Costa Lima, 1932)a 6 (3/3)5 (3/2)2 (2/0)130.14 Psathyromyia runoides (Fairchild & Hertig, 1943)-6 (1/5)9 (3/6)150.16 Psychodopygus amazonensis (Root, 1934)a 5 (5/17)3 (3/1)1 (1/0)270.28 Psychodopygus ayrozai (Barretto & Coutinho, 1940)a 10 (10/0)4 (1/3)2 (1/1)160.17 Psychodopygus bispinosus (Fairchild & Hertig, 1951)6 (6/0)--60.06 Psychodopygus c. carrerai (Barretto, 1946)a 376b (103/273)57 (26/31)23 (8/15)4564.78 Psychodopygus chagasi (Costa Lima, 1941)a 150b (118/32)14 (12/2)4 (3/1)1681.76 Psychodopygus claustrei (Root, 1934)a 114 (24/90)42 (7/35)9 (2/7)1651.73 Psychodopygus complexus (Mangabeira, 1941)a 240b (165/75)14 (0/14)9 (0/9)2632.76 Psychodopygus davisi (Root, 1934)a 715b (442/273)671b (239/432)355b (138/217)1,74118.26 Psychodopygus geniculatus (Mangabeira, 1941)a 121b (108/13)21 (15/6)9 (4/5)1511.58 Psychodopygus h. hirsutus (Mangabeira, 1942)a 57 (49/8)107b (45/52)38 (20/18)2022.12 Psychodopygus lainsoni Fraiha & Ward, 1974a 95 (56/39)7 (5/2)2 (0/2)1041.09 Psychodopygus leonidasdeanei (Fraiha, Ryan, Ward, Lainson & Shaw, 1986)115b (97/18)--1151.21 Psychodopygus llanosmartinsi (Fraiha & Ward, 1980)a 40 (26/14)3 (2/1)3 (1/2)460.48 Psychodopygus paraensis (Costa Lima, 1941)19 (8/11)6 (1/5)-250.26 Psychodopygus yucumensis (Le Pont, Caillard, Tibayrenc & Desjeux, 1986)a 4 (1/3)-2 (0/2)60.06 Sciopemyia fluviatilis (Floch & Abonnenc, 1944)a 3 (3/0)3 (3/0)1 (0/1)70.07 Sciopemyia servulolimai (Damasceno & Causey, 1945)10 (5/5)7 (4/3)-170.18 Sciopemyia sordellii (Shannon & Del Ponte, 1927)a 94 (50/44)20 (12/8)14 (6/8)1281.34 Trichophoromyia auraensis (Mangabeira, 1942)a 1,261b (9/1252)22 (2/20)12 (0/12)1,29513.58 Trichophoromyia clitella (Young & Pérez, 1944)a 122b (0/122)30 (1/29)22 (0/22)1741.82 Trichophoromyia flochi (Abonnenc & Chassignet, 1948)a 19 (0/19)57 (0/57)24 (0/24)1001.05 Trichophoromyia readyi (Ryan, 1986)4 (0/4)--40.04 Trichophoromyia ruifreitasi Oliveira, Teles, Medeiros, Camargo & Pessoa, 20162 (0/2)--20.02 Trichophoromyia sp.49 (48/1)--490.51 Trichophoromyia ubiquitalis (Mangabeira, 1942)a 525b (62/463)352b (93/259)166b (35/131)1,04310.94 Trichopygomyia dasypodogeton (Castro, 1939)a 107 (1/106)3 (0/3)2 (0/2)1121.17 Trichopygomyia rondoniensis (Martins, Falcão & Silva)6 (0/6)1 (0/1)-70.07 Trichopygomyia sp.69 (69/0)--690.72 Viannamyia caprina (Osorno-Mesa, Moralez & Osorno, 1972)11 (11/0)12 (9/3)-230.24 Viannamyia tuberculata (Mangabeira, 1941)a 15 (15/0)2 (2/0)1 (1/0)180.19Total5,967 (2,530/3,437)2,111 (973/1,238)1,457 (586/871)9,535100Sample coverage (%)999999--Exponential of Shannon entropy index (CI95%)16.6 (16.5-17.7)16.3 (15.9-17.4)8.3 (8.0-8.3)--Inverse of Simpson concentration index (CI95%)8.2 (8.2-8.7)8.5 (8.5-9.3)3.6 (3.6-4.0)-- a: species present in all environments evaluated; b: abundant species in the environment. CUN: Conservation unit; FE: Forest Edge; PE: Peridomicile. 
Species diversity was the highest in the CUN environments, followed by the FE and PE environments. The CUN environments yielded the highest Shannon index (H’) = 19.5 common species and Simpson index (1/D) = 6.9 dominant species, followed by FE with H’ = 14.5 common species and 1/D = 6.9 dominant species and PE with H’ = 10.3 common species and 1/D = 5.2 dominant species (Table I, Fig. 2). Fig. 2: diversity indices based on Hill numbers of the sandfly fauna collected in three environments in the state of Rondônia, Brazil. CUN: Conservation Unit; FE: Forest Edge; PE: Peridomicile. A total of 1,755 females were divided into 274 pools representing 35 species. The PCR targeting of kDNA and hsp70 identified one pool of Ps. davisi infected with Le. (Vi.) braziliensis (query cover = 100%, identity = 100%, GenBank accession KX573933.1) (Fig. 3). The infected pool was collected from an FE environment in the municipality of Monte Negro. Fig. 3: natural infection of sandfly. A: amplified fragment of 120 bp from the kDNA region of the kinetoplast Leishmania species; B: the DNA extracted from the Psychodopygus davisi blood sample was subjected to polymerase chain reaction (PCR), which led to the amplification of a 240 bp hsp70 fragment. The PCR products were subjected to 1.5% agarose gel electrophoresis and stained with 1 μL of GelRed® solution. 1: Ps. davisi sample; M: molecular marker; NC: negative control (water); PC: positive control Leishmania amazonensis reference strain IOC L0575 (IFLA/BR/1967/PH8). Blood meal sources were identified by taking samples from 15 engorged females belonging to the following species: Bi. flaviscutellata (1), Ny. antunesi (4), Psathyromyia dendrophyla (2), Ps. carrerai carrerai (1), Ps. davisi (6) and Ps. hirsutus hirsutus (1). The samples were used to amplify a 358 bp fragment of the cytb gene. The resultant sequences were compared with the GenBank sequences, leading to the identification of three vertebrates with 98-100% similarity: Bos taurus, Homo sapiens and Tamandua tetradactyla (Table II).
TABLE II Vertebrate species identified from engorged sandfly females collected in Forest Edge (FE) and Peridomicile (PE) environments in the state of Rondônia, Brazil
Sandfly species | Blood meal | Municipality (environment) | Accession code | Identity (%) | Total score (n) | Query cover (%) | E-value
Nyssomyia antunesi | Tamandua tetradactyla | Porto Velho (FE) | KT818552.1 | 98.14 | 562 | 100 | 3E-156
Nyssomyia antunesi | Tamandua tetradactyla | Porto Velho (FE) | KT818552.1 | 99.07 | 582 | 100 | 2E-162
Nyssomyia antunesi | Homo sapiens | Vilhena (PE) | KX697544.1 | 100 | 608 | 100 | 4E-170
Bichromomyia flaviscutellata | Homo sapiens | Vilhena (FE) | KX697544.1 | 100 | 643 | 100 | 1E-180
Psathyromyia dendrophyla | Bos taurus | Cacoal (PE) | MK028750.1 | 100 | 603 | 100 | 2E-168
Psychodopygus davisi | Bos taurus | Cacoal (PE) | MK028750.1 | 99.66 | 538 | 100 | 4E-149
Psychodopygus davisi | Bos taurus | Cacoal (PE) | MK028750.1 | 100 | 597 | 100 | 7E-167
Psychodopygus davisi | Bos taurus | Cacoal (PE) | MK028750.1 | 100 | 595 | 100 | 3E-166
Psathyromyia dendrophyla | Bos taurus | Cacoal (PE) | MK028750.1 | 99.62 | 481 | 100 | 6E-132
Psychodopygus davisi | Bos taurus | Cacoal (PE) | EU365345.1 | 99.42 | 625 | 100 | 4E-175
Psychodopygus davisi | Bos taurus | Cacoal (PE) | MK028750.1 | 100 | 603 | 100 | 2E-168
Psychodopygus h. hirsutus | Bos taurus | Cacoal (PE) | MK028750.1 | 99.66 | 542 | 100 | 3E-150
Psychodopygus c. carrerai | Bos taurus | Cacoal (FE) | EU365345.1 | 99.05 | 566 | 100 | 2E-157
Psychodopygus davisi | Homo sapiens | Ji-Paraná (FE) | KX697544.1 | 100 | 647 | 100 | 0.0
Nyssomyia antunesi | Homo sapiens | Ji-Paraná (PE) | KX697544.1 | 100 | 425 | 100 | 3E-115
DISCUSSION: The epidemiological pattern of leishmaniasis in RO is characterised by a zoonotic or sylvatic transmission cycle in which humans might acquire infection via exposure to sandflies in the Amazon rainforest. 5 , 13 , 16 A total of 73 species were registered in this study, which demonstrated a higher level of species richness than the previous surveys conducted in RO. 7 , 13 , 25 Sandfly diversity was the highest in protected environments. The highest level of species richness was found in the CUN environment, followed by the FE and PE environments; these findings corroborate those of the previous studies conducted in the Amazon region. 4 , 21 , 26 , 27 Although sampling methods differed between environments, it was still possible to perform reliable diversity comparisons because the sample coverage was 99% in each environment. Our data demonstrated that sandflies might serve as biodiversity indicators. The species richness was reduced by 10 species in the FE environments and by 21 species in the PE environments relative to the CUN environments. This indicates that the reduction of forests to small fragments affects sandfly composition primarily by eliminating the species that occur in minor abundance (rare species). 26 , 27 The sandfly species Ny. antunesi, Ps. davisi and Th. ubiquitalis were abundant in all environments. Humans generally come in close proximity to forest fragments while engaged in agriculture or activities like hunting and fishing. These activities increase the risk of exposure to Leishmania vectors in possible transmission foci, 25 , 28 and that risk is exacerbated by the abundance of Ny. antunesi, Ps. davisi and Th. ubiquitalis in the areas we studied. One pool of Ps. davisi tested positive for Le. (Vi.) braziliensis DNA in the municipality of Monte Negro. This finding is significant because Le. braziliensis is responsible for 50% of human CL cases in the rural population of Monte Negro. 25 , 28 Ps. davisi is an abundant species in this region, 25 as well as in other parts of RO, 4 , 7 , 13 and Ps. davisi individuals have been previously found to be infected with Le. (Vi.) braziliensis 5 and Le. (Vi.) naiffi. 13 The discovery of this infection in the current study further supports the evidence that Ps. davisi might act as a vector in RO. Furthermore, Ny. antunesi and Th. ubiquitalis might act as vectors in RO. Both species were found in high abundance in this as well as other studies conducted throughout the state, 4 , 13 , 17 , 25 and the susceptibility of these species to natural infection by Leishmania has been demonstrated in the two studies performed in Porto Velho. 4 , 17 Furthermore, both species are suspected vectors in AM and MT which border RO. 21 , 29 Other abundant species found in RO included Ny. whitmani, Ps. carrerai carrerai, Ps. complexus, Ps. hirsutus hirsutus and Th. auraensis. Ny. whitmani has already been recorded in abundance in RO. 13 , 25 Ny. whitmani and Ps. hirsutus hirsutus were found in high abundance in the FE environments, suggesting that these species are confined to degraded environments; both species have been associated with dense forest environments 13 , 25 and with environments impacted by anthropic activities. 
25 , 30 Neither species has been found infected with Leishmania in RO; 4 , 5 , 7 , 17 however, both species are putative vectors in the Amazon region because they have been found infected with Leishmania in PA. 6 In RO, Ps. carrerai carrerai occurs mainly in dense forest environments. 6 , 7 , 13 Only three studies conducted in central RO have demonstrated the predominance of Ps. carrerai carrerai and Ps. complexus in this region. 13 , 25 Ps. carrerai carrerai has been found to carry promastigote flagellates identified as Le. (Vi.) braziliensis 5 and to carry Leishmania DNA. 7 Trichophoromyia auraensis is abundant primarily in the municipalities of Guajará-Mirim and Porto Velho and in central RO 13 , 17 where Leishmania DNA has been detected in females. 7 , 17 In our study, Th. auraensis was found abundant only in the CUN environments; however, the natural infection of Th. auraensis by Leishmania spp has been reported 30 and Th. auraensis has been found in abundance in both FE and PE environments in AC of western Brazil. 19 , 30 In our analysis of blood meal sources, blood was taken from the stomachs of 15 engorged females, and the PCR amplification of the cytb gene led to the identification of the DNA belonging to humans (H. sapiens), domestic animals (Bos taurus) and sylvatic animals such as anteaters (T. tetradactyla). The DNA belonging to Bos taurus and H. sapiens was present in samples from every collection made in the PE and FE environments, and this DNA was found in the blood taken from 13 female specimens belonging to the species: Bi. flaviscutellata, Ny. antunesi, Pa. dendrophyla, Ps. carrerai carrerai, Ps. davisi and Ps. hirsutus hirsutus. The T. tetradactyla DNA was present in the blood sample taken from two Ny. antunesi females captured in the FE environments. Although 15 engorged females represent a small sample size relative to other studies, our findings are significant because previous studies targeting the cytb gene have detected the DNA of only domestic animals, such as cats, dogs, chickens, bovines, equines and pigs. 18 - 20 Preserved environments showed the highest variety of blood meal sources for sandflies, and a large variety of blood meal sources guarantees the maintenance of the gonotrophic cycle. 12 Anteaters (Tamandua spp) act as possible reservoirs for some Leishmania species, such as Le. (Vi.) guyanensis, 12 and in degraded natural habitats, the scarcity of sylvatic reservoirs, such as anteaters, might cause sandflies to migrate to the PE environments to find new blood meal sources. 4 , 18 , 19 , 27 In our study, the PE collection points were close to small forest fragments, and the availability of blood meal sources in the form of domestic animals, such as Bos taurus, might have attracted sandflies from the forest fragments. Few studies have examined the importance of blood meal sources in the Brazilian Amazon. Recently, in the municipality of Rio Branco (AC), the blood collected from the intestinal contents of the two specimens of Ps. davisi was subjected to PCR targeting the cytb gene; this led to the identification of Gallus gallus as a blood meal source. 19 The current study improves our knowledge of blood meal sources by demonstrating that vectors, such as Ny. antunesi and Ps. davisi, feed on humans and bovines in the PE environments and feed on sylvatic animals, such as anteaters, in the FE environments. Our study largely corroborates the findings of previous studies concerned with the transmission cycle of leishmaniasis in RO. 
However, the fact that sandflies are using humans and domestic animals as blood meal sources indicates that the transmission profile might be changing in the PE environments. These findings can be used to enhance the epidemiological surveillance of leishmaniasis in RO.
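One figure the text leaves implicit is the infection rate implied by a single kDNA/hsp70-positive pool out of 274 pools from 1,755 females. A common back-of-envelope summary is the minimum infection rate (MIR), which assumes exactly one infected specimen per positive pool; the calculation below is our illustration, not a statistic the authors report.

```python
# Minimum infection rate from pooled screening (one infected fly assumed
# per positive pool); figures taken from the Results section above.
positive_pools = 1
total_females_tested = 1755

mir_per_1000 = 1000 * positive_pools / total_females_tested
print(f"MIR = {mir_per_1000:.2f} infected females per 1,000 tested (~0.06%)")
```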
Background: The state of Rondônia (RO) is a hot spot for human cases of cutaneous leishmaniasis. Many sandfly species in RO are putative vectors of leishmaniasis. Methods: A sandfly survey was performed between 2016 and 2018 in 10 municipalities categorised into three different environment types: (i) Conservation Unit (CUN) - comprised of preserved ombrophilous forests; (ii) Forest Edge (FE) - small forest fragments; and (iii) Peridomicile (PE) - areas around dwellings. Results: A total of 73 species were identified from 9,535 sandflies. The most abundant species were Psychodopygus davisi (1,741 individuals), Nyssomyia antunesi (1,397), Trichophoromyia auraensis (1,295) and Trichophoromyia ubiquitalis (1,043). Diversity was the highest in CUN, followed by the FE and PE environments. One pool of Ps. davisi tested positive for Leishmania braziliensis, reinforcing the possibility that Ps. davisi acts as a vector. The cytochrome b (cytb) sequences were used to identify three blood meal sources: Bos taurus, Homo sapiens and Tamandua tetradactyla. Conclusions: Our results demonstrated that sandflies can switch between blood meal sources in differing environments. This study enhances the knowledge of the vector life cycle in RO and provides information relevant to leishmaniasis surveillance.
null
null
4,883
244
[]
3
[ "species", "environments", "pe", "ps", "psychodopygus", "fe", "davisi", "10", "blood", "ro" ]
[ "sandfly fauna", "braziliensis query", "sandfly collection identification", "sandflies serve biodiversity", "vi braziliensis query" ]
null
null
null
null
null
null
[CONTENT] Leishmania | vector | reservoirs | protected areas | state of Rondônia [SUMMARY]
null
[CONTENT] Leishmania | vector | reservoirs | protected areas | state of Rondônia [SUMMARY]
null
null
null
[CONTENT] Animals | Animals, Domestic | Brazil | Disease Reservoirs | Female | Forests | Humans | Insect Vectors | Leishmaniasis, Cutaneous | Male | Population Density | Psychodidae | Urban Population [SUMMARY]
null
[CONTENT] Animals | Animals, Domestic | Brazil | Disease Reservoirs | Female | Forests | Humans | Insect Vectors | Leishmaniasis, Cutaneous | Male | Population Density | Psychodidae | Urban Population [SUMMARY]
null
null
null
[CONTENT] sandfly fauna | braziliensis query | sandfly collection identification | sandflies serve biodiversity | vi braziliensis query [SUMMARY]
null
[CONTENT] sandfly fauna | braziliensis query | sandfly collection identification | sandflies serve biodiversity | vi braziliensis query [SUMMARY]
null
null
null
[CONTENT] species | environments | pe | ps | psychodopygus | fe | davisi | 10 | blood | ro [SUMMARY]
null
[CONTENT] species | environments | pe | ps | psychodopygus | fe | davisi | 10 | blood | ro [SUMMARY]
null
null
null
[CONTENT] psychodopygus | mangabeira | psathyromyia | 11 | nyssomyia | 14 | species | pe | 10 | evandromyia [SUMMARY]
null
[CONTENT] species | environments | ps | psychodopygus | pe | ro | collections | 2016 | davisi | fe [SUMMARY]
null
null
null
[CONTENT] 73 | 9,535 ||| Psychodopygus | 1,741 | Nyssomyia | 1,397 | 1,295 | Trichophoromyia | 1,043 ||| CUN | FE | PE ||| One | Ps. | davisi | Leishmania | Ps | davisi ||| three | Bos | Tamandua [SUMMARY]
null
[CONTENT] Rondônia ||| RO ||| between 2016 and 2018 | 10 | three | CUN ||| 73 | 9,535 ||| Psychodopygus | 1,741 | Nyssomyia | 1,397 | 1,295 | Trichophoromyia | 1,043 ||| CUN | FE | PE ||| One | Ps. | davisi | Leishmania | Ps | davisi ||| three | Bos | Tamandua ||| ||| RO [SUMMARY]
null
Inhibition of Ubiquitin-specific Peptidase 8 Suppresses Adrenocorticotropic Hormone Production and Tumorous Corticotroph Cell Growth in AtT20 Cells.
27569239
Two recent whole-exome sequencing studies identifying somatic mutations in the ubiquitin-specific protease 8 (USP8) gene in pituitary corticotroph adenomas provide exciting advances in this field. These mutations drive increased epidermal growth factor receptor (EGFR) signaling and promote adrenocorticotropic hormone (ACTH) production. This study aimed to investigate whether the inhibition of USP8 activity could be a strategy for the treatment of Cushing's disease (CD).
BACKGROUND
The anticancer effect of the USP8 inhibitor was determined by testing cell viability, colony formation, apoptosis, and ACTH secretion. Immunoblotting and quantitative reverse transcription polymerase chain reaction were conducted to explore the signaling pathways affected by USP8 inhibition.
METHODS
Inhibition of USP8 induced the degradation of receptor tyrosine kinases, including EGFR, ERBB2, and Met, leading to a suppression of AtT20 cell growth and ACTH secretion. Moreover, treatment with the USP8 inhibitor markedly induced AtT20 cell apoptosis.
RESULTS
Inhibition of USP8 activity could be an effective strategy for CD. It might provide a novel pharmacological approach for the treatment of CD.
CONCLUSIONS
[ "Adrenocorticotropic Hormone", "Animals", "Apoptosis", "Cell Proliferation", "Cell Survival", "Endopeptidases", "Endosomal Sorting Complexes Required for Transport", "Enzyme Inhibitors", "ErbB Receptors", "Humans", "Indenes", "Mice", "Pyrazines", "Ubiquitin Thiolesterase" ]
5009596
INTRODUCTION
Cushing's disease (CD), or pituitary-dependent Cushing's syndrome, is the most common cause of endogenous Cushing's syndrome, accounting for about 70% of cases of chronic endogenous hypercortisolism.[1] It induces a series of comorbidities and clinical complications, mainly including hypertension, diabetes mellitus, dyslipidemia, osteoporosis, cardiovascular disease, infection, and mental disorders, which are associated with increased morbidity and mortality if not appropriately treated.[2] Until recently, no available medical treatment was licensed for CD, although several drugs had demonstrated efficacy in lowering excess cortisol.[3,4] Using a next-generation sequencing approach, Reincke et al. identified recurrent somatic mutations in the gene encoding USP8 in four of an initial set of ten corticotroph tumors. These mutations were validated in a small group of seven Cushing's patients, with a final prevalence of 35%.[5] Subsequently, two different retrospective studies analyzed the prevalence of ubiquitin-specific protease 8 (USP8) mutations in two large cohorts of CD patients, identifying prevalences of 36% and 62%, respectively.[6,7] It is noteworthy that somatic mutations in the USP8 gene show a remarkable specificity for CD, with no mutations found in other types of pituitary adenomas and only rare somatic mutations reported in other tumors.[5-7] All of the mutations were located in exon 14, defining a hotspot region that overlaps with the sequence coding for the 14-3-3 binding motif, which is highly conserved between different species. These mutations inhibit 14-3-3 protein binding to USP8 and result in higher deubiquitinating enzyme (DUB) activation. The consequence of this hyperactivation is increased epidermal growth factor receptor (EGFR) deubiquitination and longer retention of EGFR at the plasma membrane, which inhibits its degradation, thereby preventing the downregulation of ligand-activated EGFR and enhancing adrenocorticotropic hormone (ACTH) production. The identification of USP8 mutations as specific contributors to the pathogenesis of ACTH-secreting pituitary adenomas represents an exciting advance in our understanding of CD. The aim of our study was to investigate the anticancer efficacy of a USP8 inhibitor in CD. Here, we demonstrate that treatment with the USP8 inhibitor 9-ethyloxyimino-9H-indeno[1,2-b]pyrazine-2,3-dicarbonitrile suppresses ACTH secretion and cell viability and promotes apoptosis in AtT20 cells, suggesting that this USP8 inhibitor could be a new therapeutic candidate for CD.
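The causal chain sketched above (hyperactive USP8 → more EGFR deubiquitination → slower receptor degradation → sustained EGFR signaling and ACTH output) can be made concrete with a toy first-order kinetic model. Nothing below comes from the paper: the rate constants and the linear coupling between USP8 activity and the effective degradation rate are illustrative assumptions only.

```python
# Toy steady-state model of plasma-membrane EGFR under different USP8 states.
# dR/dt = s - k_deg * (1 - u) * R, where u in [0, 1) is the fraction of
# ubiquitinated EGFR rescued (deubiquitinated) by USP8 before lysosomal sorting.
# Steady state: R* = s / (k_deg * (1 - u)).
s = 1.0      # receptor synthesis/recycling rate (arbitrary units)
k_deg = 0.5  # basal degradation rate constant of ubiquitinated EGFR

for u, label in [(0.05, "USP8 inhibited"), (0.4, "wild-type USP8"), (0.8, "hyperactive USP8 mutant")]:
    r_ss = s / (k_deg * (1 - u))
    print(f"{label:>24}: steady-state surface EGFR = {r_ss:.2f}")
```

The mutant case retains the most surface EGFR and the inhibited case the least, which is the qualitative behaviour the rest of this study tests pharmacologically.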
METHODS
Cell culture and reagents All of the cell lines were obtained from the American Tissue Type Collection (ATCC, Manassas, VA, USA). The mouse AtT20 pituitary corticotroph cell line and the hepatocellular carcinoma cell line Hepa 1-6 were maintained in Dulbecco's modified Eagle's medium (DMEM) (GIBCO, New York, USA) containing 10% fetal bovine serum (FBS) (GIBCO, New York, USA), 2 mmol/L L-glutamine, 100 IU/ml penicillin and 100 µg/ml streptomycin (GIBCO, New York, USA) at 37°C in a humidified incubator with 5% CO2. The cells were starved in DMEM supplemented with 2% FBS for 16 h prior to each experiment. The USP8 inhibitor 9-ethyloxyimino-9H-indeno[1,2-b]pyrazine-2,3-dicarbonitrile was obtained from Melone Pharmaceutical, Dalian, China. Cell proliferation assay and colony formation assay The AtT20 cells were seeded at 2 × 10³ cells per well in 96-well plates and left to attach for 24 h. The medium was then changed to culture medium with 2% FBS containing the indicated concentrations of USP8 inhibitor for 24 and 48 h. Viable cells were measured using a Cell Counting Kit-8 (CCK-8, Dojindo, Japan) according to the manufacturer's instructions. For the colony formation assay, AtT20 cells were plated at a density of 10³ cells per well in a 6-well plate in DMEM culture medium containing 10% FBS. The medium with the indicated concentrations of USP8 inhibitor was replaced every 3 or 4 days. After 15 days, colonies were fixed with 4% paraformaldehyde (Sigma, St. Louis, USA) for 20 min and stained with 0.05% crystal violet at room temperature for 20 min. Then, the cells were washed three times with phosphate-buffered saline (PBS) for 5 min. Colonies containing more than 50 cells were counted using a light microscope. Western blotting Total cell lysate was prepared with radioimmunoprecipitation assay buffer containing Protease Inhibitor Cocktail. 
Protein concentrations were measured with the DC protein assay reagent (Bio-Rad, CA, USA), and extracts were resolved by SDS-PAGE on 8% gels. Membranes were blocked for 2 h at room temperature in Tris-buffered saline with Tween-20 containing 5% nonfat dried milk (Bio-Rad), washed, and incubated with primary antibodies (anti-EGFR from Santa Cruz, CA, USA; anti-pErbB2, anti-Met, anti-Akt, anti-pAkt, anti-GAPDH, anti-p27/kip1, anti-cleaved-caspase 3, anti-Bax, and anti-Bcl-2 from Cell Signaling Technology, Boston, USA) at 4°C overnight. After washing, membranes were incubated with horseradish peroxidase-conjugated secondary antibodies (Cell Signaling Technology). Signals were detected using enhanced chemiluminescence (PerkinElmer, Waltham, MA, USA).

Adrenocorticotropic hormone assay
Cells were incubated for 4 and 24 h with the indicated concentrations of USP8 inhibitor. The medium was then aspirated, and ACTH levels in the supernatants were measured using an ACTH enzyme-linked immunosorbent assay (ELISA) kit (Phoenix, Milpitas, USA).
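ELISA readouts of this kind are converted to concentrations through a standard curve. As a worked illustration only (not the Phoenix kit's protocol), the sketch below fits a four-parameter logistic (4PL) curve to hypothetical ACTH standards and back-calculates sample concentrations from optical density; every standard concentration, OD value, and name in it is an invented assumption.

```python
# Minimal sketch: 4PL standard curve for an ACTH ELISA.
# All numbers below are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """4PL model: OD as a function of concentration.
    a = OD at zero concentration, d = OD at saturation,
    c = inflection point, b = slope factor."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical standards (pg/ml) and mean ODs from duplicate wells.
std_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
std_od = np.array([0.08, 0.12, 0.25, 0.55, 1.10, 1.60, 1.85])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 5.0, 2.0])
a, b, c, d = params

def od_to_conc(od):
    """Invert the fitted 4PL to recover concentration from OD."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

sample_od = np.array([0.40, 0.90])   # conditioned-medium wells
print(od_to_conc(sample_od))         # estimated ACTH, pg/ml
```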
Apoptosis assay
Cell apoptosis was measured using the fluorescein isothiocyanate (FITC) Annexin V Apoptosis Detection Kit I (BD Biosciences Pharmingen, San Jose, CA, USA) according to the manufacturer's instructions. AtT20 cells were plated in 6-well plates. After exposure to USP8 inhibitor for 24 and 48 h, cells were detached, washed once with cold PBS, and suspended in 1× binding buffer at a concentration of 5 × 10⁵ cells/ml. FITC Annexin V and propidium iodide were then added. After incubation for 15 min at room temperature in the dark, 200 µl of 1× binding buffer was added to each tube. Cells were analyzed with a BD FACSVerse flow cytometer and BD FACSuite software (BD Biosciences Pharmingen). The fraction of the cell population in each quadrant was analyzed using quadrant statistics.

Statistical analysis
Statistical analysis was performed using SPSS version 16.0 (SPSS Inc., USA). Values are reported as mean ± standard deviation (SD). Significant differences were analyzed using the two-tailed unpaired Student's t-test and one-way ANOVA. A value of P < 0.05 was considered statistically significant.
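For readers who want to reproduce the style of comparison described above, the sketch below applies the same two tests with SciPy rather than SPSS. The group values are invented placeholders, not data from this study.

```python
# Minimal sketch of the reported tests, using SciPy instead of SPSS.
# The viability values are invented placeholders, not study data.
import numpy as np
from scipy import stats

vehicle = np.array([100.2, 98.7, 101.5, 99.6, 100.9, 98.1])
usp8i_1um = np.array([96.0, 95.3, 94.8, 96.9, 95.5, 94.1])
usp8i_10um = np.array([88.0, 86.5, 87.9, 89.2, 86.1, 87.3])

# Two-tailed unpaired Student's t-test: vehicle vs. one dose group.
t_stat, p_val = stats.ttest_ind(vehicle, usp8i_10um)
print(f"t = {t_stat:.2f}, P = {p_val:.4f}")

# One-way ANOVA across all dose groups.
f_stat, p_anova = stats.f_oneway(vehicle, usp8i_1um, usp8i_10um)
print(f"F = {f_stat:.2f}, P = {p_anova:.4f}")

# Mean +/- SD, the summary format used in the paper.
for name, grp in [("vehicle", vehicle), ("1 umol/L", usp8i_1um),
                  ("10 umol/L", usp8i_10um)]:
    print(f"{name}: {grp.mean():.1f} +/- {grp.std(ddof=1):.1f}")
```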
RESULTS
Ubiquitin-specific protease 8 inhibitor inhibits cell viability by downregulating oncogenic receptor tyrosine kinases
To investigate whether targeting USP8 with its specific inhibitor exhibits an anticancer effect in corticotroph adenomas, we first examined the effect of the USP8 inhibitor on downstream protein levels, including EGFR, ERBB2, and Met. AtT20 cells were treated with the recently synthesized USP8 inhibitor 9-ethyloxyimino-9H-indeno[1,2-b]pyrazine-2,3-dicarbonitrile [Figure 1a].[8,9] Our data revealed that treatment with the USP8 inhibitor effectively downregulated the expression levels of EGFR, ERBB2, and Met in AtT20 cells in a dose-dependent manner [Figure 1b], demonstrating the potency of this small molecule against USP8 in AtT20 cells. Treatment with the USP8 inhibitor for 24 and 48 h inhibited cell viability from a concentration of 1 µmol/L (4.1% and 4.7%; P < 0.05), and the maximum inhibition was obtained with 10 µmol/L (12.4% and 27.8%; P < 0.001) [Figure 1c]. Treatment for 36 h also inhibited cell growth, whereas 12 h of treatment had no effect (data not shown).
Figure 1: Ubiquitin-specific protease 8 inhibitor suppresses AtT20 cell growth by downregulation of oncogenic receptor tyrosine kinases. (a) Chemical structure of the ubiquitin-specific protease 8 inhibitor. (b) Effect of the inhibitor on the receptor tyrosine kinases epidermal growth factor receptor, ERBB2, and Met, and on Akt. (c) Effects of the inhibitor on cell viability. *P < 0.05, **P < 0.01, ***P < 0.001.

Effects of ubiquitin-specific protease 8 inhibitor on cell viability of renal, adrenal, and liver cells
To determine the specificity of the USP8 inhibitor's effects, cell viability was assessed in Hepa 1-6, HEK293T, and PC12 cell lines after 24 h of treatment with or without increasing concentrations of USP8 inhibitor (1–10 µmol/L). As shown in Figure 2a–2c, the USP8 inhibitor did not significantly modify the viability of any of the investigated cell lines.
Figure 2: Effects of ubiquitin-specific protease 8 inhibitor on cell viability of liver, renal, and adrenal cells. Cells were incubated for 24 h with 1–10 µmol/L ubiquitin-specific protease 8 inhibitor; control cells were treated with vehicle solution. HEK 293T (a), Hepa 1-6 (b), and PC12 (c) cell viability was assessed in at least three independent experiments with six replicates.
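A common way to make such a dose-dependent viability claim concrete is a four-parameter Hill (log-logistic) fit that estimates a half-maximal inhibitory concentration. The sketch below does this with SciPy on viability percentages that are invented to loosely resemble the trend in Figure 1c; the study itself did not report an IC50, so the fitted value is purely illustrative.

```python
# Minimal sketch: Hill-equation fit of viability vs. USP8-inhibitor dose.
# Viability percentages are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, top, bottom, ic50, n):
    """Four-parameter Hill curve: viability falls from `top` to `bottom`."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** n)

dose = np.array([0.1, 0.5, 1.0, 2.5, 5.0, 10.0])        # umol/L
viab = np.array([99.0, 97.5, 95.5, 90.0, 82.0, 72.0])   # % of vehicle, 48 h

params, _ = curve_fit(hill, dose, viab,
                      p0=[100.0, 50.0, 5.0, 1.0], maxfev=10000)
top, bottom, ic50, n = params
print(f"Estimated IC50 ~ {ic50:.1f} umol/L (Hill slope {n:.2f})")
```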
Ubiquitin-specific protease 8 inhibitor inhibits the clonogenic ability of AtT20 cells
Next, we explored whether the USP8 inhibitor affects the clonogenic ability of AtT20 cells [Figure 3a and 3b]. AtT20 cells were seeded in complete growth medium and allowed to adhere for 24 h. The medium was then replaced with complete growth medium containing the indicated concentrations of USP8 inhibitor, and the ability of AtT20 cells to form colonies was monitored over the next 15 days. Significant inhibition of colony formation (9.4%; P < 0.05) was detected with 1 µmol/L USP8 inhibitor, and the maximum reduction in clonogenic ability (94%; P < 0.001) was obtained with 10 µmol/L.
Figure 3: Formation of AtT20 cell colonies. The number of AtT20 cell colonies was determined after 14 days of culture in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum containing ubiquitin-specific protease 8 inhibitor at concentrations of 1–20 µmol/L. AtT20 cell colonies were observed by phase-contrast microscopy on 6-well culture plates (a), with quantitative representation of the colonies formed (b). *P < 0.05, ***P < 0.001.
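The percentage reductions quoted above follow from simple bookkeeping on colony counts. As an illustration under assumed numbers (the counts below are placeholders, not the study's data), the sketch computes plating efficiency and percent inhibition relative to vehicle.

```python
# Minimal sketch: quantifying a colony-formation assay.
# Colony counts are hypothetical placeholders.
import numpy as np

cells_seeded = 1000  # 10^3 cells per well, as in the Methods

# Colonies (>50 cells) counted per well, triplicate wells per condition.
counts = {
    "vehicle": np.array([210, 198, 205]),
    "1 umol/L": np.array([190, 185, 182]),
    "10 umol/L": np.array([14, 10, 12]),
}

vehicle_mean = counts["vehicle"].mean()
for cond, c in counts.items():
    plating_eff = c.mean() / cells_seeded * 100.0          # % of seeded cells
    inhibition = (1.0 - c.mean() / vehicle_mean) * 100.0   # % vs. vehicle
    print(f"{cond}: plating efficiency {plating_eff:.1f}%, "
          f"inhibition {inhibition:.1f}%")
```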
Ubiquitin-specific protease 8 inhibitor induces apoptosis in AtT20 cells
To investigate whether the USP8 inhibitor reduces cell viability by inducing apoptosis, flow cytometry and analysis of apoptosis-related proteins were performed. Dose-dependent treatment with 1–10 µmol/L USP8 inhibitor markedly induced early apoptosis, at levels of 11.1% and 29.2% at 24 h and 12.3% and 31.6% at 48 h, respectively [Figure 4a]. By comparison, gefitinib treatment induced early apoptosis at a level of 14.9%. Moreover, the pro-apoptotic effect of the USP8 inhibitor was accompanied by induction of activated caspase-3 and Bax expression and suppression of Bcl-2 expression [Figure 4b].
Figure 4: Ubiquitin-specific protease 8 inhibitor-induced apoptosis in AtT20 cells. (a) AtT20 cells were treated with the indicated concentrations of ubiquitin-specific protease 8 inhibitor for 24 and 48 h. Cells were washed, labeled with annexin V fluorescein isothiocyanate and propidium iodide, and analyzed by flow cytometry. Data are presented as % of gated cells. (b) Levels of the apoptosis-related proteins Bcl-2, Bax, P27, and cleaved caspase-3 were analyzed by Western blot in AtT20 cells treated with ubiquitin-specific protease 8 inhibitor.
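The "quadrant statistics" underlying Figure 4a amount to classifying each flow-cytometry event by its Annexin V-FITC and PI intensities against two gates. The sketch below reproduces that bookkeeping on simulated events; the gate thresholds and intensity distributions are arbitrary illustrations, not instrument settings from this study.

```python
# Minimal sketch: quadrant statistics for Annexin V-FITC / PI data.
# Simulated intensities; gate thresholds are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
fitc = rng.lognormal(mean=1.0, sigma=1.0, size=n)  # Annexin V signal
pi = rng.lognormal(mean=0.8, sigma=1.0, size=n)    # propidium iodide signal

FITC_GATE, PI_GATE = 5.0, 5.0
apos, ppos = fitc > FITC_GATE, pi > PI_GATE

quadrants = {
    "viable (AnnV- / PI-)":          ~apos & ~ppos,
    "early apoptotic (AnnV+ / PI-)":  apos & ~ppos,
    "late apoptotic (AnnV+ / PI+)":   apos &  ppos,
    "necrotic (AnnV- / PI+)":        ~apos &  ppos,
}
for name, mask in quadrants.items():
    print(f"{name}: {mask.mean() * 100:.1f}% of gated cells")
```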
Ubiquitin-specific protease 8 inhibitor suppresses proopiomelanocortin gene expression and adrenocorticotropic hormone secretion in AtT20 cells
AtT20 cells were incubated with USP8 inhibitor for 4 and 24 h to determine its effects on proopiomelanocortin (POMC) mRNA levels. POMC mRNA levels decreased in a dose-dependent manner, with significant effects observed from 5 µmol/L (32.1%; P < 0.05) [Figure 5a and 5b]. To determine the effects of the USP8 inhibitor on ACTH secretion, ACTH levels were assessed in conditioned medium from AtT20 cells treated for 4 and 24 h. The USP8 inhibitor significantly reduced ACTH secretion after 4 h of treatment at both 5 and 10 µmol/L (by 26.1% and 30.1%, respectively; P < 0.01). After 24 h, the USP8 inhibitor significantly reduced ACTH secretion at ≥5 µmol/L (by 16.7% to 40.5%; P < 0.001) [Figure 5c and 5d]. In addition, the USP8 inhibitor enhanced the effects of dexamethasone on endogenous ACTH secretion.
Figure 5: Effect of ubiquitin-specific protease 8 inhibitor on proopiomelanocortin expression and adrenocorticotropic hormone secretion. AtT20 cells were incubated for 4 and 24 h with 1–20 µmol/L ubiquitin-specific protease 8 inhibitor; proopiomelanocortin mRNA expression was assessed by quantitative reverse transcription polymerase chain reaction (a and b). Adrenocorticotropic hormone levels were measured in conditioned medium by enzyme-linked immunosorbent assay in six independent experiments with three replicates (c and d). *P < 0.05, **P < 0.01, ***P < 0.001.
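Relative mRNA expression in quantitative RT-PCR experiments such as the one in Figure 5 is conventionally computed with the 2^(-ΔΔCt) method. The sketch below works through that arithmetic on invented triplicate Ct values normalized to GAPDH; the study does not state its reference gene or Ct values, so every number and name here is an assumption.

```python
# Minimal sketch: 2^(-ddCt) relative quantification of POMC mRNA.
# Ct values and the GAPDH reference gene are assumptions for illustration.
mean = lambda xs: sum(xs) / len(xs)

# Hypothetical triplicate Ct values.
ct = {
    ("control", "POMC"):  [22.1, 22.3, 22.0],
    ("control", "GAPDH"): [17.0, 17.1, 16.9],
    ("treated", "POMC"):  [23.0, 23.2, 22.9],
    ("treated", "GAPDH"): [17.1, 17.0, 17.2],
}

dct_control = mean(ct[("control", "POMC")]) - mean(ct[("control", "GAPDH")])
dct_treated = mean(ct[("treated", "POMC")]) - mean(ct[("treated", "GAPDH")])
ddct = dct_treated - dct_control
fold_change = 2.0 ** (-ddct)   # ~0.57 here, i.e., a ~43% decrease
print(f"POMC relative expression (treated/control): {fold_change:.2f}")
```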
[ "INTRODUCTION", "METHODS", "Cell culture and reagents", "Cell proliferation assay and colony formation assay", "Western blotting", "Adrenocorticotropic hormone assay", "Apoptosis assay", "Statistical analysis", "RESULTS", "Ubiquitin-specific protease 8 inhibitor inhibit cell viability by downregulating oncogenic receptor tyrosine kinases", "Effects of ubiquitin-specific protease 8 inhibitor on cell viability of renal, adrenal, and liver cells", "Ubiquitin-specific protease 8 inhibitor inhibits the clonogenic ability of AtT20 cells", "Ubiquitin-specific protease 8 inhibitor induces apoptosis in AtT20 cells", "Ubiquitin-specific protease 8 inhibitor suppressed proopiomelanocortin gene expression and adrenocorticotropic hormone secretion in AtT20 cells", "DISCUSSION", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Cushing's disease (CD), or pituitary-dependent Cushing's syndrome, is the most common cause of endogenous Cushing's syndrome accounting for about 70% of the chronic endogenous hypercortisolism.[1] It induced a series of several comorbidities and clinical complications, mainly including hypertension, diabetes mellitus, dyslipidemia, osteoporosis, cardiovascular disease, infection, and mental disorders, which associated with increased morbidity and mortality if not appropriately treated.[2] Until recently, no available medical treatment was licensed for CD although several drugs had demonstrated efficacy in lowering excess cortisol.[34]\nUsing next-generation sequencing approach, Reincke et al. have identified recurrent somatic mutations in the gene encoding USP8 in four in an initial set of ten corticotroph tumors. These mutations were validated in a small group of seven Cushing's patients, with a final prevalence of 35%.[5] Subsequently, two different retrospective studies analyzed the prevalence of ubiquitin-specific protease 8 (USP8) mutations in two large cohorts of CD patients, identifying a prevalence of 36% and 62%, respectively.[67] It is noteworthy that somatic mutations in USP8 gene show a remarkable specificity for CD, with no mutations found in other type of pituitary adenoma and only rare somatic mutations reported in other tumors.[567] All of the mutations were located in exon 14, defining a hotspot region that overlaps with the sequence coding for the 14-3-3 binding motif, highly conserved between different species. These mutations inhibited 14-3-3 protein binding to USP8 and resulted in a higher deubiquitinating enzyme (DUB) activation. The consequence of this hyperactivation is increased epidermal growth factor receptor (EGFR) deubiquitination and a longer retention of EGFR at the plasma membrane which leads to inhibition of degradation, thereby preventing downregulation of ligand-activated EGFR and promotes and enhances adrenocorticotropic hormone (ACTH) production.\nThe identification of USP8 mutations as specific contributors to the pathogenesis of ACTH-secreting pituitary adenomas represents an exciting advance in our understanding of CD. The aim of our study was to investigate the anticancer efficacy of USP8 inhibitor in CD. Here, we demonstrate that treatment with USP8 inhibitor, 9-ehtyloxyimino9H-indeno [1,2-b] pyrazine-2,3-dicarbonitrile, suppresses ACTH secretion, cell viability, and promotes cell apoptosis in AtT20 cells suggesting that UPS8 inhibitor could be a new therapeutic candidate for CD.", " Cell culture and reagents All of the cell lines were obtained from the American Tissue Type Collection (ATCC, Manassas, VA, USA). The mouse AtT20 pituitary corticotroph cell line and hepatocellular carcinoma cell line Hepa 1-6 were maintained in Dulbecco's modified Eagle's medium (DMEM) (GIBCO, New York, USA) containing 10% fetal bovine serum (FBS) (GIBCO, New York, USA) and 2 mmol/L L-glutamine and 100 IU/ml penicillin and 100 µg/ml streptomycin (GIBCO, New York, USA) at 37°C in a humidified incubator with 5% CO2. The cells were starved with DMEM supplemented with 2% FBS for 16 h prior to each experiment. The 9-ehtyloxyimino9H-indeno [1,2-b] pyrazine-2,3-dicarbonitrile was obtained from Melone Pharmaceutical, Dalian, China.\nAll of the cell lines were obtained from the American Tissue Type Collection (ATCC, Manassas, VA, USA). 
The mouse AtT20 pituitary corticotroph cell line and hepatocellular carcinoma cell line Hepa 1-6 were maintained in Dulbecco's modified Eagle's medium (DMEM) (GIBCO, New York, USA) containing 10% fetal bovine serum (FBS) (GIBCO, New York, USA) and 2 mmol/L L-glutamine and 100 IU/ml penicillin and 100 µg/ml streptomycin (GIBCO, New York, USA) at 37°C in a humidified incubator with 5% CO2. The cells were starved with DMEM supplemented with 2% FBS for 16 h prior to each experiment. The 9-ehtyloxyimino9H-indeno [1,2-b] pyrazine-2,3-dicarbonitrile was obtained from Melone Pharmaceutical, Dalian, China.\n Cell proliferation assay and colony formation assay The AtT20 cells were seeded at 2 × 103 cells per well in 96-well plates and left them to attach for 24 h. After that we changed the medium to cell culture medium with 2% FBS with indicated concentrations of USP8 inhibitor for 24 and 48 h. Viable cells were measured using a Cell Counting Kit-8 (CCK-8, Dojindo, Japan) according to the manufacturer's instructions.\nAtT20 cells were plated at a density of 103 cells per well in a 6-well plate in DMEM culture medium containing 10% FBS. The medium with indicated concentrations of USP8 inhibitor was replaced every 3 or 4 days. After 15 days, colonies were fixed with 4% paraformaldehyde (Sigma, St. Louis, USA) for 20 min and stained with 0.05% crystal violet at room temperature for 20 min. Then, the cells were washed three times with phosphate-buffered saline (PBS) for 5 min. Colonies containing more than 50 cells were counted using a light microscope.\nThe AtT20 cells were seeded at 2 × 103 cells per well in 96-well plates and left them to attach for 24 h. After that we changed the medium to cell culture medium with 2% FBS with indicated concentrations of USP8 inhibitor for 24 and 48 h. Viable cells were measured using a Cell Counting Kit-8 (CCK-8, Dojindo, Japan) according to the manufacturer's instructions.\nAtT20 cells were plated at a density of 103 cells per well in a 6-well plate in DMEM culture medium containing 10% FBS. The medium with indicated concentrations of USP8 inhibitor was replaced every 3 or 4 days. After 15 days, colonies were fixed with 4% paraformaldehyde (Sigma, St. Louis, USA) for 20 min and stained with 0.05% crystal violet at room temperature for 20 min. Then, the cells were washed three times with phosphate-buffered saline (PBS) for 5 min. Colonies containing more than 50 cells were counted using a light microscope.\n Western blotting Total cell lysate was prepared with radioimmunoprecipitation assay buffer containing Protease Inhibitor Cocktail. Protein concentrations were measured by DC protein assay reagent (Bio-Rad, CA, USA) and extracts resolved by SDS/PAGE on 8% gels. Membranes were blocked for 2 h at room temperature in tris buffered saline-tween-20 containing 5% nonfat dried milk (Bio-Rad), washed, and then incubated with primary antibodies (anti-EGFR from Santa Cruz, CA, USA; anti-pErbB2, anti-Met, anti-Akt, anti-pAkt, anti-GAPDH, anti-p27/kip1, anti-cleaved-caspase 3, anti-bax, and anti-bcl-2 from Cell Signaling Technology, Boston, USA) at 4°C overnight. After washing, membranes were incubated with horseradish peroxidase-conjugated secondary antibodies (Cell Signaling Technology, Boston, USA). The signal was detected using enhanced chemiluminescence (PerkinElmer, Waltham, MA, USA).\nTotal cell lysate was prepared with radioimmunoprecipitation assay buffer containing Protease Inhibitor Cocktail. 
Protein concentrations were measured by DC protein assay reagent (Bio-Rad, CA, USA) and extracts resolved by SDS/PAGE on 8% gels. Membranes were blocked for 2 h at room temperature in tris buffered saline-tween-20 containing 5% nonfat dried milk (Bio-Rad), washed, and then incubated with primary antibodies (anti-EGFR from Santa Cruz, CA, USA; anti-pErbB2, anti-Met, anti-Akt, anti-pAkt, anti-GAPDH, anti-p27/kip1, anti-cleaved-caspase 3, anti-bax, and anti-bcl-2 from Cell Signaling Technology, Boston, USA) at 4°C overnight. After washing, membranes were incubated with horseradish peroxidase-conjugated secondary antibodies (Cell Signaling Technology, Boston, USA). The signal was detected using enhanced chemiluminescence (PerkinElmer, Waltham, MA, USA).\n Adrenocorticotropic hormone assay The cells were incubated for 4 and 24 h with the indicated concentrations of USP8 inhibitor. The medium was then aspirated, and the ACTH levels in the supernatants were measured using an ACTH enzyme-linked immunosorbent assay kit (Phoenix, Milpitas, USA).\nThe cells were incubated for 4 and 24 h with the indicated concentrations of USP8 inhibitor. The medium was then aspirated, and the ACTH levels in the supernatants were measured using an ACTH enzyme-linked immunosorbent assay kit (Phoenix, Milpitas, USA).\n Apoptosis assay Cell apoptosis was measured using the fluorescein isothiocyanate (FITC) Annexin V Apoptosis detection Kit I (BD Biosciences Pharmingen, San Jose, CA, USA) according to the manufacturer's instructions. AtT20 cells were plated in 6-well plates. After exposure to USP8 inhibitor for 24 and 48 h, cells were detached and then washed once with cold PBS, suspended in 1 × binding buffer at a concentration of 5 × 105 cells/ml. And then FITC Annexin V and propidium iodide were added. After incubating for 15 min at room temperature in the dark, 200 µl of 1 × binding buffer were added to each tube. The cells were analyzed with a BD FACSVerse flow cytometer (BD Biosciences Pharmingen) and BD FACSuite Software (BD Biosciences Pharmingen). The fraction of the cell population in different quadrants was analyzed using quadrant statistics.\nCell apoptosis was measured using the fluorescein isothiocyanate (FITC) Annexin V Apoptosis detection Kit I (BD Biosciences Pharmingen, San Jose, CA, USA) according to the manufacturer's instructions. AtT20 cells were plated in 6-well plates. After exposure to USP8 inhibitor for 24 and 48 h, cells were detached and then washed once with cold PBS, suspended in 1 × binding buffer at a concentration of 5 × 105 cells/ml. And then FITC Annexin V and propidium iodide were added. After incubating for 15 min at room temperature in the dark, 200 µl of 1 × binding buffer were added to each tube. The cells were analyzed with a BD FACSVerse flow cytometer (BD Biosciences Pharmingen) and BD FACSuite Software (BD Biosciences Pharmingen). The fraction of the cell population in different quadrants was analyzed using quadrant statistics.\n Statistical analysis The statistical analysis was performed using the SPSS version 16.0 (SPSS Inc., USA). Values are described as a mean ± standard deviation (SD). Significant differences were analyzed using two-tail unpaired Student's t-test and one-way ANOVA. A value of P < 0.05 was considered statistically significant.\nThe statistical analysis was performed using the SPSS version 16.0 (SPSS Inc., USA). Values are described as a mean ± standard deviation (SD). 
Significant differences were analyzed using two-tail unpaired Student's t-test and one-way ANOVA. A value of P < 0.05 was considered statistically significant.", "All of the cell lines were obtained from the American Tissue Type Collection (ATCC, Manassas, VA, USA). The mouse AtT20 pituitary corticotroph cell line and hepatocellular carcinoma cell line Hepa 1-6 were maintained in Dulbecco's modified Eagle's medium (DMEM) (GIBCO, New York, USA) containing 10% fetal bovine serum (FBS) (GIBCO, New York, USA) and 2 mmol/L L-glutamine and 100 IU/ml penicillin and 100 µg/ml streptomycin (GIBCO, New York, USA) at 37°C in a humidified incubator with 5% CO2. The cells were starved with DMEM supplemented with 2% FBS for 16 h prior to each experiment. The 9-ehtyloxyimino9H-indeno [1,2-b] pyrazine-2,3-dicarbonitrile was obtained from Melone Pharmaceutical, Dalian, China.", "The AtT20 cells were seeded at 2 × 103 cells per well in 96-well plates and left them to attach for 24 h. After that we changed the medium to cell culture medium with 2% FBS with indicated concentrations of USP8 inhibitor for 24 and 48 h. Viable cells were measured using a Cell Counting Kit-8 (CCK-8, Dojindo, Japan) according to the manufacturer's instructions.\nAtT20 cells were plated at a density of 103 cells per well in a 6-well plate in DMEM culture medium containing 10% FBS. The medium with indicated concentrations of USP8 inhibitor was replaced every 3 or 4 days. After 15 days, colonies were fixed with 4% paraformaldehyde (Sigma, St. Louis, USA) for 20 min and stained with 0.05% crystal violet at room temperature for 20 min. Then, the cells were washed three times with phosphate-buffered saline (PBS) for 5 min. Colonies containing more than 50 cells were counted using a light microscope.", "Total cell lysate was prepared with radioimmunoprecipitation assay buffer containing Protease Inhibitor Cocktail. Protein concentrations were measured by DC protein assay reagent (Bio-Rad, CA, USA) and extracts resolved by SDS/PAGE on 8% gels. Membranes were blocked for 2 h at room temperature in tris buffered saline-tween-20 containing 5% nonfat dried milk (Bio-Rad), washed, and then incubated with primary antibodies (anti-EGFR from Santa Cruz, CA, USA; anti-pErbB2, anti-Met, anti-Akt, anti-pAkt, anti-GAPDH, anti-p27/kip1, anti-cleaved-caspase 3, anti-bax, and anti-bcl-2 from Cell Signaling Technology, Boston, USA) at 4°C overnight. After washing, membranes were incubated with horseradish peroxidase-conjugated secondary antibodies (Cell Signaling Technology, Boston, USA). The signal was detected using enhanced chemiluminescence (PerkinElmer, Waltham, MA, USA).", "The cells were incubated for 4 and 24 h with the indicated concentrations of USP8 inhibitor. The medium was then aspirated, and the ACTH levels in the supernatants were measured using an ACTH enzyme-linked immunosorbent assay kit (Phoenix, Milpitas, USA).", "Cell apoptosis was measured using the fluorescein isothiocyanate (FITC) Annexin V Apoptosis detection Kit I (BD Biosciences Pharmingen, San Jose, CA, USA) according to the manufacturer's instructions. AtT20 cells were plated in 6-well plates. After exposure to USP8 inhibitor for 24 and 48 h, cells were detached and then washed once with cold PBS, suspended in 1 × binding buffer at a concentration of 5 × 105 cells/ml. And then FITC Annexin V and propidium iodide were added. After incubating for 15 min at room temperature in the dark, 200 µl of 1 × binding buffer were added to each tube. 
The cells were analyzed with a BD FACSVerse flow cytometer (BD Biosciences Pharmingen) and BD FACSuite Software (BD Biosciences Pharmingen). The fraction of the cell population in different quadrants was analyzed using quadrant statistics.", "The statistical analysis was performed using the SPSS version 16.0 (SPSS Inc., USA). Values are described as a mean ± standard deviation (SD). Significant differences were analyzed using two-tail unpaired Student's t-test and one-way ANOVA. A value of P < 0.05 was considered statistically significant.", " Ubiquitin-specific protease 8 inhibitor inhibit cell viability by downregulating oncogenic receptor tyrosine kinases To investigate that targeting USP8 with its specific inhibitor might exhibit an anticancer effect in the corticotroph adenomas, we first examined the effect of USP8 inhibitor on downstream protein levels including EGFR, ERBB2, and Met. AtT20 cells were treated with a recently synthesized USP8 inhibitor, 9-ehtyloxyimino9H-indeno [1,2-b] pyrazine-2,3-dicarbonitrile [Figure 1a].[89] Our data revealed that treatment with USP8 inhibitor could effectively downregulate the expression levels of EGFR, ERBB2, and Met in AtT20 cells in a dose-dependent manner [Figure 1b], demonstrating the inhibition potency of this small molecule for USP8 in AtT20 cells. The treatment of USP8 inhibitor for 24 and 48 h induced an inhibition of cell viability from concentration of 1 µmol/L (4.1%, 4.7%; P < 0.05) and the maximum inhibition was obtained with 10 µmol/L (12.4%, 27.8%; P < 0.001) [Figure 1c]. Moreover, treatment with USP8 inhibitor for 36 h also could inhibit cell growth, while it had no effect on cell growth after 12 h treatment (data not shown).\nUbiquitin-specific protease 8 inhibitor suppresses AtT20 cell growth by downregulation of oncogenic receptor tyrosine kinases. (a) Chemical structure of ubiquitin-specific protease 8 inhibitor. (b) Effect of ubiquitin-specific protease 8 inhibitor on receptor tyrosine kinases, epidermal growth factor receptor, ERBB2, Met, and Akt. (c) Effects of ubiquitin-specific protease 8 inhibitor on cell viability. * P<0.05, ** P<0.01, ***P<0.001.\nTo investigate that targeting USP8 with its specific inhibitor might exhibit an anticancer effect in the corticotroph adenomas, we first examined the effect of USP8 inhibitor on downstream protein levels including EGFR, ERBB2, and Met. AtT20 cells were treated with a recently synthesized USP8 inhibitor, 9-ehtyloxyimino9H-indeno [1,2-b] pyrazine-2,3-dicarbonitrile [Figure 1a].[89] Our data revealed that treatment with USP8 inhibitor could effectively downregulate the expression levels of EGFR, ERBB2, and Met in AtT20 cells in a dose-dependent manner [Figure 1b], demonstrating the inhibition potency of this small molecule for USP8 in AtT20 cells. The treatment of USP8 inhibitor for 24 and 48 h induced an inhibition of cell viability from concentration of 1 µmol/L (4.1%, 4.7%; P < 0.05) and the maximum inhibition was obtained with 10 µmol/L (12.4%, 27.8%; P < 0.001) [Figure 1c]. Moreover, treatment with USP8 inhibitor for 36 h also could inhibit cell growth, while it had no effect on cell growth after 12 h treatment (data not shown).\nUbiquitin-specific protease 8 inhibitor suppresses AtT20 cell growth by downregulation of oncogenic receptor tyrosine kinases. (a) Chemical structure of ubiquitin-specific protease 8 inhibitor. (b) Effect of ubiquitin-specific protease 8 inhibitor on receptor tyrosine kinases, epidermal growth factor receptor, ERBB2, Met, and Akt. 
(c) Effects of ubiquitin-specific protease 8 inhibitor on cell viability. * P<0.05, ** P<0.01, ***P<0.001.\n Effects of ubiquitin-specific protease 8 inhibitor on cell viability of renal, adrenal, and liver cells To determine the specificity of USP8 inhibitor effects, cell viability was assessed in Hepa 1-6, HEK293T, and PC12 cell lines after 24 h treatment without or with increasing concentration of USP8 inhibitor (1–10 µmol/L). As shown in Figure 2a–2c, USP8 inhibitor did not significantly modify the viability of any investigated cell line.\nEffects of ubiquitin-specific protease 8 inhibitor on cell viability of liver, renal, and adrenal cells. Cells were incubated for 24 h with 1–10 μmol/L ubiquitin-specific protease 8 inhibitor; control cells were treated with vehicle solution. HEK 293T (a), Hepa 1-6 (b), PC12 (c), cell viability was assessed in at least three independent experiments with six replicates.\nTo determine the specificity of USP8 inhibitor effects, cell viability was assessed in Hepa 1-6, HEK293T, and PC12 cell lines after 24 h treatment without or with increasing concentration of USP8 inhibitor (1–10 µmol/L). As shown in Figure 2a–2c, USP8 inhibitor did not significantly modify the viability of any investigated cell line.\nEffects of ubiquitin-specific protease 8 inhibitor on cell viability of liver, renal, and adrenal cells. Cells were incubated for 24 h with 1–10 μmol/L ubiquitin-specific protease 8 inhibitor; control cells were treated with vehicle solution. HEK 293T (a), Hepa 1-6 (b), PC12 (c), cell viability was assessed in at least three independent experiments with six replicates.\n Ubiquitin-specific protease 8 inhibitor inhibits the clonogenic ability of AtT20 cells Next, we explore whether USP8 inhibitor would have an effect on the clonogenic ability of AtT20 cells [Figure 3a and 3b]. AtT20 cells were seeded in complete growth medium and allowed to adhere for 24 h. The medium was then replaced with complete growth medium containing the indicated concentrations of USP8 inhibitor, and the ability of AtT20 cells to form colonies was monitored over the next 15 days. Our data showed that significant inhibition (9.4%; P < 0.05) of colony formation was detected with 1 µmol/L USP8 inhibitor and maximum reduction (94%; P < 0.001) of clonogenic ability was obtained when 10 µmol/L USP8 inhibitor were used.\nFormation of AtT20 cells colonies. The number of AtT20 cell colonies was determined after 14 days of culture in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum contain ubiquitin-specific protease 8 inhibitor at concentrations of 1–20 μmol/L. Phase contrast microscopy of AtT20 cell colony was observed on 6-well culture plates (a) and quantitative representation of the colonies formed (b). * P<0.05, *** P<0.001.\nNext, we explore whether USP8 inhibitor would have an effect on the clonogenic ability of AtT20 cells [Figure 3a and 3b]. AtT20 cells were seeded in complete growth medium and allowed to adhere for 24 h. The medium was then replaced with complete growth medium containing the indicated concentrations of USP8 inhibitor, and the ability of AtT20 cells to form colonies was monitored over the next 15 days. Our data showed that significant inhibition (9.4%; P < 0.05) of colony formation was detected with 1 µmol/L USP8 inhibitor and maximum reduction (94%; P < 0.001) of clonogenic ability was obtained when 10 µmol/L USP8 inhibitor were used.\nFormation of AtT20 cells colonies. 
The number of AtT20 cell colonies was determined after 14 days of culture in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum contain ubiquitin-specific protease 8 inhibitor at concentrations of 1–20 μmol/L. Phase contrast microscopy of AtT20 cell colony was observed on 6-well culture plates (a) and quantitative representation of the colonies formed (b). * P<0.05, *** P<0.001.\n Ubiquitin-specific protease 8 inhibitor induces apoptosis in AtT20 cells To investigate whether USP8 inhibitor reduces cell viability by inducing apoptosis, flow cytometry analysis and apoptosis-related proteins analysis were performed. The results showed that dose-dependent treatment with 1–10 µmol/L USP8 inhibitor for 24 and 48 h markedly induced early apoptosis at a level of 11.1% and 29.2%, 12.3%, and 31.6%, respectively [Figure 4a]. However, gefitinib treatment induced early apoptosis at a level of 14.9%. Moreover, the pro-apoptotic effect of USP8 inhibitor was accompanied by the induction of activated caspase-3 and Bax expression and the suppression of Bcl-2 expression [Figure 4b].\nUbiquitin-specific protease 8 inhibitor-induced apoptosis in AtT20 cells. (a) AtT20 cells were treated with indicated concentration of ubiquitin-specific protease 8 inhibitor for 24 and 48 h. Cells were washed and labeled with annexin V fluorescein isothiocyanate and propidium iodide and analyzed by flow cytometry. Data are presented as % of gated cells. (b) The apoptosis-related protein levels of Bcl-2, Bax, P27, and cleaved caspase-3 were analyzed by Western blot in AtT20 cells treated with ubiquitin-specific protease 8 inhibitor.\nTo investigate whether USP8 inhibitor reduces cell viability by inducing apoptosis, flow cytometry analysis and apoptosis-related proteins analysis were performed. The results showed that dose-dependent treatment with 1–10 µmol/L USP8 inhibitor for 24 and 48 h markedly induced early apoptosis at a level of 11.1% and 29.2%, 12.3%, and 31.6%, respectively [Figure 4a]. However, gefitinib treatment induced early apoptosis at a level of 14.9%. Moreover, the pro-apoptotic effect of USP8 inhibitor was accompanied by the induction of activated caspase-3 and Bax expression and the suppression of Bcl-2 expression [Figure 4b].\nUbiquitin-specific protease 8 inhibitor-induced apoptosis in AtT20 cells. (a) AtT20 cells were treated with indicated concentration of ubiquitin-specific protease 8 inhibitor for 24 and 48 h. Cells were washed and labeled with annexin V fluorescein isothiocyanate and propidium iodide and analyzed by flow cytometry. Data are presented as % of gated cells. (b) The apoptosis-related protein levels of Bcl-2, Bax, P27, and cleaved caspase-3 were analyzed by Western blot in AtT20 cells treated with ubiquitin-specific protease 8 inhibitor.\n Ubiquitin-specific protease 8 inhibitor suppressed proopiomelanocortin gene expression and adrenocorticotropic hormone secretion in AtT20 cells AtT20 cells were incubated with USP8 inhibitor for 4 and 24 h to determine its effects on proopiomelanocortin (POMC) mRNA levels. As shown, POMC mRNA levels decreased in a dose-dependent manner, with significant effects observed from 5 µmol/L (32.1%; P < 0.05) [Figure 5a and 5b]. To determine the effects of USP8 inhibitor on ACTH secretion, ACTH levels were assessed in conditioned medium in AtT20 cells treatment for 4 and 24 h. As we can see, USP8 inhibitor significantly reduced ACTH secretion after 4 h treatment at both 5 and 10 µmol/L (26.1% and 30.1%, respectively; P < 0.01). 
After 24 h, USP8 inhibitor significantly reduced ACTH secretion at ≥5 µmol/L (from 16.7% to 40.5%; P < 0.001) [Figure 5c and 5d]. In addition, USP8 inhibitor could enhance the effects of dexamethasone on endogenous ACTH secretion.\nEffect of ubiquitin-specific protease 8 inhibitor on proopiomelanocortin expression and adrenocorticotropic hormone secretion. AtT20 cells were incubated for 4 and 24 h with 1–20 μmol/L ubiquitin-specific protease 8 inhibitor; proopiomelanocortin mRNA expression was assessed by quantitative reverse transcription polymerase chain reaction (a and b). Adrenocorticotropic hormone levels were measured in conditioned medium by enzyme-linked immunosorbent assay in six independent experiments in three replicates (c and d). * P<0.05, **P <0.01, *** P<0.001.\nAtT20 cells were incubated with USP8 inhibitor for 4 and 24 h to determine its effects on proopiomelanocortin (POMC) mRNA levels. As shown, POMC mRNA levels decreased in a dose-dependent manner, with significant effects observed from 5 µmol/L (32.1%; P < 0.05) [Figure 5a and 5b]. To determine the effects of USP8 inhibitor on ACTH secretion, ACTH levels were assessed in conditioned medium in AtT20 cells treatment for 4 and 24 h. As we can see, USP8 inhibitor significantly reduced ACTH secretion after 4 h treatment at both 5 and 10 µmol/L (26.1% and 30.1%, respectively; P < 0.01). After 24 h, USP8 inhibitor significantly reduced ACTH secretion at ≥5 µmol/L (from 16.7% to 40.5%; P < 0.001) [Figure 5c and 5d]. In addition, USP8 inhibitor could enhance the effects of dexamethasone on endogenous ACTH secretion.\nEffect of ubiquitin-specific protease 8 inhibitor on proopiomelanocortin expression and adrenocorticotropic hormone secretion. AtT20 cells were incubated for 4 and 24 h with 1–20 μmol/L ubiquitin-specific protease 8 inhibitor; proopiomelanocortin mRNA expression was assessed by quantitative reverse transcription polymerase chain reaction (a and b). Adrenocorticotropic hormone levels were measured in conditioned medium by enzyme-linked immunosorbent assay in six independent experiments in three replicates (c and d). * P<0.05, **P <0.01, *** P<0.001.", "To investigate that targeting USP8 with its specific inhibitor might exhibit an anticancer effect in the corticotroph adenomas, we first examined the effect of USP8 inhibitor on downstream protein levels including EGFR, ERBB2, and Met. AtT20 cells were treated with a recently synthesized USP8 inhibitor, 9-ehtyloxyimino9H-indeno [1,2-b] pyrazine-2,3-dicarbonitrile [Figure 1a].[89] Our data revealed that treatment with USP8 inhibitor could effectively downregulate the expression levels of EGFR, ERBB2, and Met in AtT20 cells in a dose-dependent manner [Figure 1b], demonstrating the inhibition potency of this small molecule for USP8 in AtT20 cells. The treatment of USP8 inhibitor for 24 and 48 h induced an inhibition of cell viability from concentration of 1 µmol/L (4.1%, 4.7%; P < 0.05) and the maximum inhibition was obtained with 10 µmol/L (12.4%, 27.8%; P < 0.001) [Figure 1c]. Moreover, treatment with USP8 inhibitor for 36 h also could inhibit cell growth, while it had no effect on cell growth after 12 h treatment (data not shown).\nUbiquitin-specific protease 8 inhibitor suppresses AtT20 cell growth by downregulation of oncogenic receptor tyrosine kinases. (a) Chemical structure of ubiquitin-specific protease 8 inhibitor. (b) Effect of ubiquitin-specific protease 8 inhibitor on receptor tyrosine kinases, epidermal growth factor receptor, ERBB2, Met, and Akt. 
(c) Effects of ubiquitin-specific protease 8 inhibitor on cell viability. * P<0.05, ** P<0.01, ***P<0.001.", "To determine the specificity of USP8 inhibitor effects, cell viability was assessed in Hepa 1-6, HEK293T, and PC12 cell lines after 24 h treatment without or with increasing concentration of USP8 inhibitor (1–10 µmol/L). As shown in Figure 2a–2c, USP8 inhibitor did not significantly modify the viability of any investigated cell line.\nEffects of ubiquitin-specific protease 8 inhibitor on cell viability of liver, renal, and adrenal cells. Cells were incubated for 24 h with 1–10 μmol/L ubiquitin-specific protease 8 inhibitor; control cells were treated with vehicle solution. HEK 293T (a), Hepa 1-6 (b), PC12 (c), cell viability was assessed in at least three independent experiments with six replicates.", "Next, we explore whether USP8 inhibitor would have an effect on the clonogenic ability of AtT20 cells [Figure 3a and 3b]. AtT20 cells were seeded in complete growth medium and allowed to adhere for 24 h. The medium was then replaced with complete growth medium containing the indicated concentrations of USP8 inhibitor, and the ability of AtT20 cells to form colonies was monitored over the next 15 days. Our data showed that significant inhibition (9.4%; P < 0.05) of colony formation was detected with 1 µmol/L USP8 inhibitor and maximum reduction (94%; P < 0.001) of clonogenic ability was obtained when 10 µmol/L USP8 inhibitor were used.\nFormation of AtT20 cells colonies. The number of AtT20 cell colonies was determined after 14 days of culture in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum contain ubiquitin-specific protease 8 inhibitor at concentrations of 1–20 μmol/L. Phase contrast microscopy of AtT20 cell colony was observed on 6-well culture plates (a) and quantitative representation of the colonies formed (b). * P<0.05, *** P<0.001.", "To investigate whether USP8 inhibitor reduces cell viability by inducing apoptosis, flow cytometry analysis and apoptosis-related proteins analysis were performed. The results showed that dose-dependent treatment with 1–10 µmol/L USP8 inhibitor for 24 and 48 h markedly induced early apoptosis at a level of 11.1% and 29.2%, 12.3%, and 31.6%, respectively [Figure 4a]. However, gefitinib treatment induced early apoptosis at a level of 14.9%. Moreover, the pro-apoptotic effect of USP8 inhibitor was accompanied by the induction of activated caspase-3 and Bax expression and the suppression of Bcl-2 expression [Figure 4b].\nUbiquitin-specific protease 8 inhibitor-induced apoptosis in AtT20 cells. (a) AtT20 cells were treated with indicated concentration of ubiquitin-specific protease 8 inhibitor for 24 and 48 h. Cells were washed and labeled with annexin V fluorescein isothiocyanate and propidium iodide and analyzed by flow cytometry. Data are presented as % of gated cells. (b) The apoptosis-related protein levels of Bcl-2, Bax, P27, and cleaved caspase-3 were analyzed by Western blot in AtT20 cells treated with ubiquitin-specific protease 8 inhibitor.", "AtT20 cells were incubated with USP8 inhibitor for 4 and 24 h to determine its effects on proopiomelanocortin (POMC) mRNA levels. As shown, POMC mRNA levels decreased in a dose-dependent manner, with significant effects observed from 5 µmol/L (32.1%; P < 0.05) [Figure 5a and 5b]. To determine the effects of USP8 inhibitor on ACTH secretion, ACTH levels were assessed in conditioned medium in AtT20 cells treatment for 4 and 24 h. 
As shown, USP8 inhibitor significantly reduced ACTH secretion after 4 h of treatment at both 5 and 10 µmol/L (26.1% and 30.1%, respectively; P < 0.01). After 24 h, USP8 inhibitor significantly reduced ACTH secretion at ≥5 µmol/L (from 16.7% to 40.5%; P < 0.001) [Figure 5c and 5d]. In addition, USP8 inhibitor could enhance the effects of dexamethasone on endogenous ACTH secretion.\nEffect of ubiquitin-specific protease 8 inhibitor on proopiomelanocortin expression and adrenocorticotropic hormone secretion. AtT20 cells were incubated for 4 and 24 h with 1–20 μmol/L ubiquitin-specific protease 8 inhibitor; proopiomelanocortin mRNA expression was assessed by quantitative reverse transcription polymerase chain reaction (a and b). Adrenocorticotropic hormone levels were measured in conditioned medium by enzyme-linked immunosorbent assay in six independent experiments in three replicates (c and d). * P<0.05, ** P<0.01, *** P<0.001.", "Posttranslational modification of proteins by ubiquitination represents a central mechanism for modulating a wide range of cellular functions, such as protein stability, intracellular transport, protein interactions, and transcriptional activity.[10] Analogous to other posttranslational modifications, ubiquitination is a reversible process counteracted by DUBs.[11] Because deubiquitination plays a key role in regulating the stability and activity of a variety of proteins crucial to cell cycle progression, apoptosis, and DNA damage repair, including p53,[12] MDM2,[13] histones,[14] Notch,[15] and β-catenin,[16] DUBs have been considered good targets for cancer treatment. USPs, with more than sixty members, comprise the largest class of DUBs. An increasing number of studies have focused on assessing their function, substrates, and role in specific diseases, especially cancer. USP8, also designated as Ub-specific protease Y, is a ubiquitin isopeptidase that belongs to the USP family of cysteine proteases. USP8 was originally identified as an enhancer of cell growth, as its expression increases upon serum stimulation in cancer cells.[17] In addition, it has been reported that USP8 interacts with a number of clinically relevant cancer targets, including Cdc25,[18] Erbb2,[19] EGFR,[20] and Nrdp1,[21] implying a crucial role of USP8 in cancers.\nTwo recent studies demonstrated that the USP8 gene is frequently mutated in corticotroph adenomas.[5,7] ACTH adenomas harboring USP8 mutations had higher EGFR levels, expressed more POMC mRNA, and had higher ACTH production than those with wild-type USP8. USP8 knockdown in primary ACTH-secreting tumor cells, however, reduced ACTH secretion and EGFR levels, suggesting that inhibition of USP8 activity may be an effective treatment strategy for CD.[7]\nRecently, it was also demonstrated that USP8 inhibitor impairs the growth of gefitinib-resistant and -sensitive non-small cell lung cancer cells by decreasing receptor tyrosine kinase (RTK) expression.[8,22] Here, we explored the effect of the novel specific USP8 inhibitor, 9-ethyloxyimino-9H-indeno[1,2-b]pyrazine-2,3-dicarbonitrile, on murine AtT20 pituitary adenoma cells. First, we assessed the potency of USP8 inhibitor in AtT20 cells. Inhibition of USP8 resulted in a dramatic decrease in the total protein levels of RTKs, including EGFR, ERBB2, and Met, while having no effect on the Akt protein level. 
Moreover, USP8 inhibitor has an inhibitory effect on the cell viability of AtT20 cells in a concentration-dependent manner, while it does not affect the viability of the endocrine cell line PC-12, indicating that the cytotoxic effects of USP8 inhibitor are not generalized to endocrine cells. In addition, the viability of non-endocrine cells, such as the Hepa 1-6 and HEK 293T cell lines, is not influenced by the drug, supporting the hypothesis that USP8 inhibitor acts rapidly with a specific effect at the pituitary level.\nMoreover, we observed that the inhibitory effects of USP8 inhibitor on mouse ACTH-secreting pituitary adenoma cell viability are, at least in part, due to apoptosis induction, accompanied by increased cleaved caspase 3 and Bax and decreased Bcl-2, as previously reported in other experimental models.[8,22] Arrest of tumor growth is often associated with induction of cell apoptosis, and USP8 inhibitor has been shown to induce apoptosis in tumor cells in parallel with growth inhibition. In HCC827 GR cells, treatment with 0.1–10 µmol/L USP8 inhibitor markedly induced early apoptosis at levels of 29.7% and 40.8%.\nExcess ACTH production and hypercortisolemia are associated with the progression of CD; decreasing hormone levels is therefore a therapeutic goal. We finally investigated whether USP8 inhibitor had any effect on ACTH secretion in AtT20 cells. Our data showed that USP8 inhibitor could decrease ACTH secretion from 5 µmol/L, which clearly demonstrates the potential of USP8 inhibitor to achieve the therapeutic purpose of lowering hormone secretion.\nIn conclusion, USP8 inhibitor inhibits proliferation, abolishes clonogenic ability, and induces apoptosis in the pituitary corticotroph tumor cell line AtT20. We also demonstrate that USP8 inhibitor suppresses POMC mRNA levels and ACTH secretion. We therefore propose the small-molecule USP8 inhibitor as a pharmacotherapy against CD, with the dual effects of suppressing ACTH overproduction to alleviate hypercortisolemia and metabolic complications, while also achieving control or shrinkage of pituitary corticotroph tumor growth. Collectively, our findings suggest that USP8 could be a new therapeutic target for CD.\n Financial support and sponsorship This work was supported by grants to Qing-Fang Sun from the National Natural Science Foundation of China (No. 81270856) and the National High-tech R&D Program (863 program) (No. 2014AA020611).\n Conflicts of interest There are no conflicts of interest.", "This work was supported by grants to Qing-Fang Sun from the National Natural Science Foundation of China (No. 81270856) and the National High-tech R&D Program (863 program) (No. 2014AA020611).", "There are no conflicts of interest." ]
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", null, null ]
[ "Adrenocorticotropic Hormone Secretion", "Cell Viability", "Cushing's Disease", "Ubiquitin-specific Protease 8 Inhibitor" ]
INTRODUCTION: Cushing's disease (CD), or pituitary-dependent Cushing's syndrome, is the most common cause of endogenous Cushing's syndrome, accounting for about 70% of cases of chronic endogenous hypercortisolism.[1] It induces a series of comorbidities and clinical complications, mainly including hypertension, diabetes mellitus, dyslipidemia, osteoporosis, cardiovascular disease, infection, and mental disorders, which are associated with increased morbidity and mortality if not appropriately treated.[2] Until recently, no available medical treatment was licensed for CD, although several drugs had demonstrated efficacy in lowering excess cortisol.[3,4] Using a next-generation sequencing approach, Reincke et al. identified recurrent somatic mutations in the gene encoding USP8 in four of an initial set of ten corticotroph tumors. These mutations were validated in a small group of seven Cushing's patients, with a final prevalence of 35%.[5] Subsequently, two different retrospective studies analyzed the prevalence of ubiquitin-specific protease 8 (USP8) mutations in two large cohorts of CD patients, identifying prevalences of 36% and 62%, respectively.[6,7] It is noteworthy that somatic mutations in the USP8 gene show a remarkable specificity for CD, with no mutations found in other types of pituitary adenomas and only rare somatic mutations reported in other tumors.[5–7] All of the mutations were located in exon 14, defining a hotspot region that overlaps with the sequence coding for the 14-3-3 binding motif, which is highly conserved between species. These mutations inhibited 14-3-3 protein binding to USP8 and resulted in higher deubiquitinating enzyme (DUB) activation. The consequence of this hyperactivation is increased epidermal growth factor receptor (EGFR) deubiquitination and longer retention of EGFR at the plasma membrane, which leads to inhibition of degradation, thereby preventing downregulation of ligand-activated EGFR and promoting and enhancing adrenocorticotropic hormone (ACTH) production. The identification of USP8 mutations as specific contributors to the pathogenesis of ACTH-secreting pituitary adenomas represents an exciting advance in our understanding of CD. The aim of our study was to investigate the anticancer efficacy of USP8 inhibitor in CD. Here, we demonstrate that treatment with the USP8 inhibitor 9-ethyloxyimino-9H-indeno[1,2-b]pyrazine-2,3-dicarbonitrile suppresses ACTH secretion and cell viability and promotes apoptosis in AtT20 cells, suggesting that USP8 inhibitor could be a new therapeutic candidate for CD. METHODS: Cell culture and reagents: All of the cell lines were obtained from the American Tissue Type Collection (ATCC, Manassas, VA, USA). The mouse AtT20 pituitary corticotroph cell line and the hepatocellular carcinoma cell line Hepa 1-6 were maintained in Dulbecco's modified Eagle's medium (DMEM) (GIBCO, New York, USA) containing 10% fetal bovine serum (FBS) (GIBCO, New York, USA), 2 mmol/L L-glutamine, 100 IU/ml penicillin, and 100 µg/ml streptomycin (GIBCO, New York, USA) at 37°C in a humidified incubator with 5% CO2. The cells were starved in DMEM supplemented with 2% FBS for 16 h prior to each experiment. The 9-ethyloxyimino-9H-indeno[1,2-b]pyrazine-2,3-dicarbonitrile was obtained from Melone Pharmaceutical, Dalian, China. Cell proliferation assay and colony formation assay: The AtT20 cells were seeded at 2 × 10³ cells per well in 96-well plates and left to attach for 24 h. The medium was then changed to culture medium with 2% FBS containing the indicated concentrations of USP8 inhibitor for 24 and 48 h. Viable cells were measured using a Cell Counting Kit-8 (CCK-8, Dojindo, Japan) according to the manufacturer's instructions. AtT20 cells were plated at a density of 10³ cells per well in a 6-well plate in DMEM culture medium containing 10% FBS. The medium with the indicated concentrations of USP8 inhibitor was replaced every 3 or 4 days. After 15 days, colonies were fixed with 4% paraformaldehyde (Sigma, St. Louis, USA) for 20 min and stained with 0.05% crystal violet at room temperature for 20 min. Then, the cells were washed three times with phosphate-buffered saline (PBS) for 5 min. Colonies containing more than 50 cells were counted using a light microscope. Western blotting: Total cell lysate was prepared with radioimmunoprecipitation assay buffer containing Protease Inhibitor Cocktail. Protein concentrations were measured by DC protein assay reagent (Bio-Rad, CA, USA), and extracts were resolved by SDS/PAGE on 8% gels. Membranes were blocked for 2 h at room temperature in tris-buffered saline-Tween-20 containing 5% nonfat dried milk (Bio-Rad), washed, and then incubated with primary antibodies (anti-EGFR from Santa Cruz, CA, USA; anti-pErbB2, anti-Met, anti-Akt, anti-pAkt, anti-GAPDH, anti-p27/kip1, anti-cleaved-caspase 3, anti-bax, and anti-bcl-2 from Cell Signaling Technology, Boston, USA) at 4°C overnight. After washing, membranes were incubated with horseradish peroxidase-conjugated secondary antibodies (Cell Signaling Technology, Boston, USA). The signal was detected using enhanced chemiluminescence (PerkinElmer, Waltham, MA, USA). Adrenocorticotropic hormone assay: The cells were incubated for 4 and 24 h with the indicated concentrations of USP8 inhibitor. The medium was then aspirated, and the ACTH levels in the supernatants were measured using an ACTH enzyme-linked immunosorbent assay kit (Phoenix, Milpitas, USA). Apoptosis assay: Cell apoptosis was measured using the fluorescein isothiocyanate (FITC) Annexin V Apoptosis Detection Kit I (BD Biosciences Pharmingen, San Jose, CA, USA) according to the manufacturer's instructions. AtT20 cells were plated in 6-well plates. After exposure to USP8 inhibitor for 24 and 48 h, cells were detached, washed once with cold PBS, and suspended in 1 × binding buffer at a concentration of 5 × 10⁵ cells/ml. FITC Annexin V and propidium iodide were then added. After incubation for 15 min at room temperature in the dark, 200 µl of 1 × binding buffer was added to each tube. The cells were analyzed with a BD FACSVerse flow cytometer (BD Biosciences Pharmingen) and BD FACSuite Software (BD Biosciences Pharmingen). The fraction of the cell population in each quadrant was analyzed using quadrant statistics. Statistical analysis: The statistical analysis was performed using SPSS version 16.0 (SPSS Inc., USA). Values are described as mean ± standard deviation (SD). Significant differences were analyzed using a two-tailed unpaired Student's t-test and one-way ANOVA. A value of P < 0.05 was considered statistically significant.
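For readers who want to reproduce the statistical workflow just described (mean ± SD, two-tailed unpaired Student's t-test, one-way ANOVA, significance at P < 0.05) outside SPSS, the following is a minimal sketch using Python's scipy.stats. The viability values below are placeholders for illustration, not data from this study.

# Minimal sketch of the statistical tests described in the Methods; the study
# used SPSS, and the % viability values below are illustrative placeholders.
import numpy as np
from scipy import stats

control = np.array([100.0, 98.5, 101.2, 99.3, 100.8, 99.9])  # vehicle-treated wells
dose_5  = np.array([95.2, 94.1, 96.3, 93.8, 95.7, 94.9])     # 5 µmol/L USP8 inhibitor
dose_10 = np.array([88.1, 87.5, 89.0, 86.9, 88.4, 87.7])     # 10 µmol/L USP8 inhibitor

# Two-tailed unpaired Student's t-test (equal variances assumed, the scipy default)
t, p_t = stats.ttest_ind(control, dose_10)
# One-way ANOVA across all conditions
f, p_f = stats.f_oneway(control, dose_5, dose_10)

alpha = 0.05
print(f"t = {t:.2f}, p = {p_t:.4f}, significant: {p_t < alpha}")
print(f"F = {f:.2f}, p = {p_f:.4f}, significant: {p_f < alpha}")
print(f"control: {control.mean():.1f} ± {control.std(ddof=1):.1f} (mean ± SD)")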
RESULTS: Ubiquitin-specific protease 8 inhibitor inhibits cell viability by downregulating oncogenic receptor tyrosine kinases: To investigate whether targeting USP8 with a specific inhibitor might exert an anticancer effect in corticotroph adenomas, we first examined the effect of USP8 inhibitor on downstream protein levels, including EGFR, ERBB2, and Met. AtT20 cells were treated with a recently synthesized USP8 inhibitor, 9-ethyloxyimino-9H-indeno[1,2-b]pyrazine-2,3-dicarbonitrile [Figure 1a].[8,9] Our data revealed that treatment with USP8 inhibitor effectively downregulated the expression levels of EGFR, ERBB2, and Met in AtT20 cells in a dose-dependent manner [Figure 1b], demonstrating the inhibitory potency of this small molecule against USP8 in AtT20 cells. Treatment with USP8 inhibitor for 24 and 48 h inhibited cell viability from a concentration of 1 µmol/L (4.1% and 4.7%; P < 0.05), and the maximum inhibition was obtained with 10 µmol/L (12.4% and 27.8%; P < 0.001) [Figure 1c]. Moreover, treatment with USP8 inhibitor for 36 h also inhibited cell growth, whereas 12 h of treatment had no effect (data not shown). Ubiquitin-specific protease 8 inhibitor suppresses AtT20 cell growth by downregulation of oncogenic receptor tyrosine kinases. (a) Chemical structure of ubiquitin-specific protease 8 inhibitor. (b) Effect of ubiquitin-specific protease 8 inhibitor on receptor tyrosine kinases, epidermal growth factor receptor, ERBB2, Met, and Akt. (c) Effects of ubiquitin-specific protease 8 inhibitor on cell viability. * P<0.05, ** P<0.01, *** P<0.001. Effects of ubiquitin-specific protease 8 inhibitor on cell viability of renal, adrenal, and liver cells: To determine the specificity of the USP8 inhibitor's effects, cell viability was assessed in Hepa 1-6, HEK293T, and PC12 cell lines after 24 h of treatment with or without increasing concentrations of USP8 inhibitor (1–10 µmol/L). As shown in Figure 2a–2c, USP8 inhibitor did not significantly modify the viability of any investigated cell line. Effects of ubiquitin-specific protease 8 inhibitor on cell viability of liver, renal, and adrenal cells. Cells were incubated for 24 h with 1–10 μmol/L ubiquitin-specific protease 8 inhibitor; control cells were treated with vehicle solution. HEK 293T (a), Hepa 1-6 (b), PC12 (c); cell viability was assessed in at least three independent experiments with six replicates. Ubiquitin-specific protease 8 inhibitor inhibits the clonogenic ability of AtT20 cells: Next, we explored whether USP8 inhibitor would have an effect on the clonogenic ability of AtT20 cells [Figure 3a and 3b]. AtT20 cells were seeded in complete growth medium and allowed to adhere for 24 h. The medium was then replaced with complete growth medium containing the indicated concentrations of USP8 inhibitor, and the ability of AtT20 cells to form colonies was monitored over the next 15 days. Our data showed that significant inhibition (9.4%; P < 0.05) of colony formation was detected with 1 µmol/L USP8 inhibitor, and the maximum reduction (94%; P < 0.001) in clonogenic ability was obtained when 10 µmol/L USP8 inhibitor was used. Formation of AtT20 cell colonies. The number of AtT20 cell colonies was determined after 14 days of culture in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum containing ubiquitin-specific protease 8 inhibitor at concentrations of 1–20 μmol/L. Phase contrast microscopy of AtT20 cell colonies was observed on 6-well culture plates (a) and quantitative representation of the colonies formed (b). * P<0.05, *** P<0.001. Ubiquitin-specific protease 8 inhibitor induces apoptosis in AtT20 cells: To investigate whether USP8 inhibitor reduces cell viability by inducing apoptosis, flow cytometry analysis and apoptosis-related protein analysis were performed. The results showed that dose-dependent treatment with 1–10 µmol/L USP8 inhibitor for 24 and 48 h markedly induced early apoptosis, at levels of 11.1% and 29.2%, and 12.3% and 31.6%, respectively [Figure 4a]. However, gefitinib treatment induced early apoptosis at a level of 14.9%. Moreover, the pro-apoptotic effect of USP8 inhibitor was accompanied by the induction of activated caspase-3 and Bax expression and the suppression of Bcl-2 expression [Figure 4b]. Ubiquitin-specific protease 8 inhibitor-induced apoptosis in AtT20 cells. (a) AtT20 cells were treated with the indicated concentrations of ubiquitin-specific protease 8 inhibitor for 24 and 48 h. Cells were washed and labeled with annexin V fluorescein isothiocyanate and propidium iodide and analyzed by flow cytometry. Data are presented as % of gated cells. (b) The apoptosis-related protein levels of Bcl-2, Bax, P27, and cleaved caspase-3 were analyzed by Western blot in AtT20 cells treated with ubiquitin-specific protease 8 inhibitor. Ubiquitin-specific protease 8 inhibitor suppressed proopiomelanocortin gene expression and adrenocorticotropic hormone secretion in AtT20 cells: AtT20 cells were incubated with USP8 inhibitor for 4 and 24 h to determine its effects on proopiomelanocortin (POMC) mRNA levels. As shown, POMC mRNA levels decreased in a dose-dependent manner, with significant effects observed from 5 µmol/L (32.1%; P < 0.05) [Figure 5a and 5b]. To determine the effects of USP8 inhibitor on ACTH secretion, ACTH levels were assessed in conditioned medium from AtT20 cells treated for 4 and 24 h. As shown, USP8 inhibitor significantly reduced ACTH secretion after 4 h of treatment at both 5 and 10 µmol/L (26.1% and 30.1%, respectively; P < 0.01). After 24 h, USP8 inhibitor significantly reduced ACTH secretion at ≥5 µmol/L (from 16.7% to 40.5%; P < 0.001) [Figure 5c and 5d]. In addition, USP8 inhibitor could enhance the effects of dexamethasone on endogenous ACTH secretion. Effect of ubiquitin-specific protease 8 inhibitor on proopiomelanocortin expression and adrenocorticotropic hormone secretion. AtT20 cells were incubated for 4 and 24 h with 1–20 μmol/L ubiquitin-specific protease 8 inhibitor; proopiomelanocortin mRNA expression was assessed by quantitative reverse transcription polymerase chain reaction (a and b). Adrenocorticotropic hormone levels were measured in conditioned medium by enzyme-linked immunosorbent assay in six independent experiments in three replicates (c and d). * P<0.05, ** P<0.01, *** P<0.001.
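The POMC mRNA changes above were measured by quantitative RT-PCR. The paper does not spell out its quantification formula; the sketch below assumes the widely used comparative Ct (2^-ΔΔCt) method, with GAPDH as a hypothetical reference gene and made-up Ct values for illustration.

# Hedged sketch of relative mRNA quantification by the comparative Ct method;
# the study does not state its exact calculation, and the reference gene
# (GAPDH) and Ct values here are illustrative placeholders.
def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Return fold change of target mRNA in treated vs. control cells."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # compare to control condition
    return 2.0 ** (-dd_ct)

# Example: POMC vs. GAPDH, vehicle vs. 5 µmol/L USP8 inhibitor (made-up Cts)
fold = relative_expression(ct_target_treated=24.1, ct_ref_treated=17.0,
                           ct_target_control=23.5, ct_ref_control=17.0)
print(f"POMC fold change: {fold:.2f}")              # < 1 indicates reduced expression
print(f"percent decrease: {(1 - fold) * 100:.1f} %")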
DISCUSSION: Posttranslational modification of proteins by ubiquitination represents a central mechanism for modulating a wide range of cellular functions, such as protein stability, intracellular transport, protein interactions, and transcriptional activity.[10] Analogous to other posttranslational modifications, ubiquitination is a reversible process counteracted by DUBs.[11] Because deubiquitination plays a key role in regulating the stability and activity of a variety of proteins crucial to cell cycle progression, apoptosis, and DNA damage repair, including p53,[12] MDM2,[13] histones,[14] Notch,[15] and β-catenin,[16] DUBs have been considered good targets for cancer treatment. USPs, with more than sixty members, comprise the largest class of DUBs. An increasing number of studies have focused on assessing their function, substrates, and role in specific diseases, especially cancer. USP8, also designated as Ub-specific protease Y, is a ubiquitin isopeptidase that belongs to the USP family of cysteine proteases. USP8 was originally identified as an enhancer of cell growth, as its expression increases upon serum stimulation in cancer cells.[17] In addition, it has been reported that USP8 interacts with a number of clinically relevant cancer targets, including Cdc25,[18] Erbb2,[19] EGFR,[20] and Nrdp1,[21] implying a crucial role of USP8 in cancers. Two recent studies demonstrated that the USP8 gene is frequently mutated in corticotroph adenomas.[5,7] ACTH adenomas harboring USP8 mutations had higher EGFR levels, expressed more POMC mRNA, and had higher ACTH production than those with wild-type USP8. USP8 knockdown in primary ACTH-secreting tumor cells, however, reduced ACTH secretion and EGFR levels, suggesting that inhibition of USP8 activity may be an effective treatment strategy for CD.[7] Recently, it was also demonstrated that USP8 inhibitor impairs the growth of gefitinib-resistant and -sensitive non-small cell lung cancer cells by decreasing receptor tyrosine kinase (RTK) expression.[8,22] Here, we explored the effect of the novel specific USP8 inhibitor, 9-ethyloxyimino-9H-indeno[1,2-b]pyrazine-2,3-dicarbonitrile, on murine AtT20 pituitary adenoma cells. First, we assessed the potency of USP8 inhibitor in AtT20 cells. Inhibition of USP8 resulted in a dramatic decrease in the total protein levels of RTKs, including EGFR, ERBB2, and Met, while having no effect on the Akt protein level. Moreover, USP8 inhibitor has an inhibitory effect on the cell viability of AtT20 cells in a concentration-dependent manner, while it does not affect the viability of the endocrine cell line PC-12, indicating that the cytotoxic effects of USP8 inhibitor are not generalized to endocrine cells. In addition, the viability of non-endocrine cells, such as the Hepa 1-6 and HEK 293T cell lines, is not influenced by the drug, supporting the hypothesis that USP8 inhibitor acts rapidly with a specific effect at the pituitary level. Moreover, we observed that the inhibitory effects of USP8 inhibitor on mouse ACTH-secreting pituitary adenoma cell viability are, at least in part, due to apoptosis induction, accompanied by increased cleaved caspase 3 and Bax and decreased Bcl-2, as previously reported in other experimental models.[8,22] Arrest of tumor growth is often associated with induction of cell apoptosis, and USP8 inhibitor has been shown to induce apoptosis in tumor cells in parallel with growth inhibition. In HCC827 GR cells, treatment with 0.1–10 µmol/L USP8 inhibitor markedly induced early apoptosis at levels of 29.7% and 40.8%. Excess ACTH production and hypercortisolemia are associated with the progression of CD; decreasing hormone levels is therefore a therapeutic goal. We finally investigated whether USP8 inhibitor had any effect on ACTH secretion in AtT20 cells. Our data showed that USP8 inhibitor could decrease ACTH secretion from 5 µmol/L, which clearly demonstrates the potential of USP8 inhibitor to achieve the therapeutic purpose of lowering hormone secretion. In conclusion, USP8 inhibitor inhibits proliferation, abolishes clonogenic ability, and induces apoptosis in the pituitary corticotroph tumor cell line AtT20. We also demonstrate that USP8 inhibitor suppresses POMC mRNA levels and ACTH secretion. We therefore propose the small-molecule USP8 inhibitor as a pharmacotherapy against CD, with the dual effects of suppressing ACTH overproduction to alleviate hypercortisolemia and metabolic complications, while also achieving control or shrinkage of pituitary corticotroph tumor growth. Collectively, our findings suggest that USP8 could be a new therapeutic target for CD. Financial support and sponsorship: This work was supported by grants to Qing-Fang Sun from the National Natural Science Foundation of China (No. 81270856) and the National High-tech R&D Program (863 program) (No. 2014AA020611). Conflicts of interest: There are no conflicts of interest. 
Financial support and sponsorship: This work was supported by grants to Qing-Fang Sun from National Natural Science Foundation of China (No. 81270856) and National High-tech R&D Program (863 program) (No. 2014AA020611). Conflicts of interest: There are no conflicts of interest.
Background: Two recent whole-exome sequencing studies identifying somatic mutations in the ubiquitin-specific protease 8 (USP8) gene in pituitary corticotroph adenomas provide exciting advances in this field. These mutations drive increased epidermal growth factor receptor (EGFR) signaling and promote adrenocorticotropic hormone (ACTH) production. This study investigated whether the inhibition of USP8 activity could be a strategy for the treatment of Cushing's disease (CD). Methods: The anticancer effect of USP8 inhibitor was determined by testing cell viability, colony formation, apoptosis, and ACTH secretion. Immunoblotting and quantitative reverse transcription polymerase chain reaction were conducted to explore the signaling pathway affected by USP8 inhibition. Results: Inhibition of USP8 induced degradation of receptor tyrosine kinases, including EGFR, EGFR-2 (ERBB2), and Met, leading to a suppression of AtT20 cell growth and ACTH secretion. Moreover, treatment with USP8 inhibitor markedly induced AtT20 cell apoptosis. Conclusions: Inhibition of USP8 activity could be an effective strategy for CD. It might provide a novel pharmacological approach for the treatment of CD.
null
null
7,506
205
[ 160, 186, 187, 49, 161, 61, 297, 146, 211, 217, 273, 40, 7 ]
17
[ "inhibitor", "cells", "usp8", "cell", "usp8 inhibitor", "att20", "att20 cells", "specific", "protease", "protease inhibitor" ]
[ "seven cushing patients", "cause endogenous cushing", "dependent cushing syndrome", "pituitary dependent cushing", "cushing disease cd" ]
null
null
[CONTENT] Adrenocorticotropic Hormone Secretion | Cell Viability | Cushing's Disease | Ubiquitin-specific Protease 8 Inhibitor [SUMMARY]
[CONTENT] Adrenocorticotropic Hormone Secretion | Cell Viability | Cushing's Disease | Ubiquitin-specific Protease 8 Inhibitor [SUMMARY]
[CONTENT] Adrenocorticotropic Hormone Secretion | Cell Viability | Cushing's Disease | Ubiquitin-specific Protease 8 Inhibitor [SUMMARY]
null
[CONTENT] Adrenocorticotropic Hormone Secretion | Cell Viability | Cushing's Disease | Ubiquitin-specific Protease 8 Inhibitor [SUMMARY]
null
[CONTENT] Adrenocorticotropic Hormone | Animals | Apoptosis | Cell Proliferation | Cell Survival | Endopeptidases | Endosomal Sorting Complexes Required for Transport | Enzyme Inhibitors | ErbB Receptors | Humans | Indenes | Mice | Pyrazines | Ubiquitin Thiolesterase [SUMMARY]
[CONTENT] Adrenocorticotropic Hormone | Animals | Apoptosis | Cell Proliferation | Cell Survival | Endopeptidases | Endosomal Sorting Complexes Required for Transport | Enzyme Inhibitors | ErbB Receptors | Humans | Indenes | Mice | Pyrazines | Ubiquitin Thiolesterase [SUMMARY]
[CONTENT] Adrenocorticotropic Hormone | Animals | Apoptosis | Cell Proliferation | Cell Survival | Endopeptidases | Endosomal Sorting Complexes Required for Transport | Enzyme Inhibitors | ErbB Receptors | Humans | Indenes | Mice | Pyrazines | Ubiquitin Thiolesterase [SUMMARY]
null
[CONTENT] Adrenocorticotropic Hormone | Animals | Apoptosis | Cell Proliferation | Cell Survival | Endopeptidases | Endosomal Sorting Complexes Required for Transport | Enzyme Inhibitors | ErbB Receptors | Humans | Indenes | Mice | Pyrazines | Ubiquitin Thiolesterase [SUMMARY]
null
[CONTENT] seven cushing patients | cause endogenous cushing | dependent cushing syndrome | pituitary dependent cushing | cushing disease cd [SUMMARY]
[CONTENT] seven cushing patients | cause endogenous cushing | dependent cushing syndrome | pituitary dependent cushing | cushing disease cd [SUMMARY]
[CONTENT] seven cushing patients | cause endogenous cushing | dependent cushing syndrome | pituitary dependent cushing | cushing disease cd [SUMMARY]
null
[CONTENT] seven cushing patients | cause endogenous cushing | dependent cushing syndrome | pituitary dependent cushing | cushing disease cd [SUMMARY]
null
[CONTENT] inhibitor | cells | usp8 | cell | usp8 inhibitor | att20 | att20 cells | specific | protease | protease inhibitor [SUMMARY]
[CONTENT] inhibitor | cells | usp8 | cell | usp8 inhibitor | att20 | att20 cells | specific | protease | protease inhibitor [SUMMARY]
[CONTENT] inhibitor | cells | usp8 | cell | usp8 inhibitor | att20 | att20 cells | specific | protease | protease inhibitor [SUMMARY]
null
[CONTENT] inhibitor | cells | usp8 | cell | usp8 inhibitor | att20 | att20 cells | specific | protease | protease inhibitor [SUMMARY]
null
[CONTENT] mutations | cd | cushing | somatic | somatic mutations | prevalence | usp8 | pituitary | usp8 mutations | cushing syndrome [SUMMARY]
[CONTENT] anti | usa | cells | cell | bd | medium | assay | min | fbs | containing [SUMMARY]
[CONTENT] inhibitor | usp8 | ubiquitin specific protease inhibitor | specific protease inhibitor | usp8 inhibitor | ubiquitin specific protease | ubiquitin specific | specific | cells | att20 [SUMMARY]
null
[CONTENT] inhibitor | usp8 | cells | usp8 inhibitor | cell | att20 | usa | anti | interest | conflicts interest [SUMMARY]
null
[CONTENT] Two | 8 ||| EGFR ||| Cushing [SUMMARY]
[CONTENT] ACTH ||| transcription [SUMMARY]
[CONTENT] EGFR | AtT20 | ACTH ||| AtT20 [SUMMARY]
null
[CONTENT] Two | 8 ||| EGFR ||| Cushing ||| ACTH ||| transcription ||| EGFR | AtT20 | ACTH ||| AtT20 ||| ||| [SUMMARY]
null
4-week eicosapentaenoic acid-rich fish oil supplementation partially protects against muscular damage following eccentric contractions.
33648546
We previously showed that 8 weeks of fish oil supplementation attenuated muscle damage. However, the effect of a shorter period of fish oil supplementation is unclear. The present study investigated the effect of 4 weeks of supplementation with fish oil, containing eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), on muscular damage caused by eccentric contractions (ECCs) of the elbow flexors.
BACKGROUND
Twenty-two untrained men were recruited in this double-blind, placebo-controlled, parallel-design study, and the subjects were randomly assigned to the EPA and DHA group (EPA and DHA, n = 11) or the placebo group (PL, n = 11). They consumed either 600 mg of EPA and 260 mg of DHA per day or a placebo supplement for 4 weeks prior to exercise. Subjects performed 60 ECCs at 100 % maximal voluntary contraction (MVC) using a dumbbell. Changes in MVC torque, range of motion (ROM), upper arm circumference, muscle soreness, echo intensity, muscle thickness, serum creatine kinase (CK), and interleukin-6 (IL-6) were assessed before exercise; immediately after exercise; and 1, 2, 3, and 5 days after exercise.
METHODS
ROM was significantly higher in the EPA and DHA group than in the PL group immediately after performing ECCs (p < 0.05). No differences between groups were observed in terms of MVC torque, upper arm circumference, muscle soreness, echo intensity, or muscle thickness. A significant difference was observed in serum CK 3 days after ECCs (p < 0.05).
RESULTS
We conclude that a shorter period of EPA and DHA supplementation benefits joint flexibility and protects muscle fibers following ECCs.
CONCLUSIONS
[ "8,11,14-Eicosatrienoic Acid", "Arachidonic Acid", "Arm", "Creatine Kinase", "Dietary Supplements", "Docosahexaenoic Acids", "Eicosapentaenoic Acid", "Elbow Joint", "Fatty Acids, Unsaturated", "Fish Oils", "Humans", "Interleukin-6", "Isometric Contraction", "Male", "Myalgia", "Placebos", "Range of Motion, Articular", "Time Factors", "Torque", "Young Adult" ]
7923476
Introduction
Omega-3 polyunsaturated fatty acids include eicosapentaenoic acid (EPA; 20:5 n-3) and docosahexaenoic acid (DHA; 22:6 n-3), which are mainly contained in fish oil. EPA and DHA are known to have anti-inflammatory effects and to increase red blood cell (RBC) deformability as a consequence of the incorporation of omega-3 polyunsaturated fatty acids into RBC membrane phospholipids [1]. Recently, omega-3 polyunsaturated fatty acid supplementation has been proposed as an ergogenic aid for athletes [2]. Exhaustive eccentric contractions (ECCs) or unaccustomed exercise cause delayed onset muscle soreness (DOMS), reduction in maximal strength, limitation of range of motion (ROM), and muscle swelling [3]. In addition, previous studies showed that serum myoglobin (Mb), creatine kinase (CK), and interleukin (IL)-6 increase after ECCs [3–7]. ECC-induced muscle damage is defined as morphological changes in the sarcomeres and endomysium and inflammatory responses in muscle fibers and connective tissues [8]. Previous studies have reported that EPA and DHA supplementation positively affects these symptoms of muscle damage [7, 9–12]. Consumed omega-3 polyunsaturated fatty acids are incorporated into phospholipids, major components of the cell membrane, and have been reported to inhibit the effects of inflammation and reactive oxygen species [13]. Helge et al. [14] reported that the total proportion of omega-3 polyunsaturated fatty acids in the muscle cell membrane was significantly increased after 4 weeks of a high-fat diet in rats. It is assumed that ingestion of EPA and DHA alleviates exercise-induced muscle damage through the incorporation of omega-3 polyunsaturated fatty acids into the muscle cell membrane. We previously examined 30 ECCs of the elbow flexors after intake of both 600 mg/day of EPA and 260 mg/day of DHA for 8 weeks [7] and showed that the reduction in muscle strength, the limitation of ROM, and the increase in IL-6 as an inflammatory response were inhibited [7]. Furthermore, other studies have reported that 8 weeks of EPA (600 mg/day) and DHA (260 mg/day) ingestion attenuates nerve dysfunction [9] and muscle stiffness [11] following 60 ECCs using a dumbbell at 100 % 1RM. However, evidence from shorter supplementation periods (< 8 weeks) is controversial and insufficient [10, 12, 15]. Specifically, Tartibian et al. [10, 12] examined the effects of 324 mg/day EPA and 216 mg/day DHA ingestion for 30 days on muscle damage after 40 min of bench stepping. As a result, the development of DOMS, limitation of ROM, muscle swelling, and elevation of serum CK and IL-6 were inhibited by the ingestion. On the other hand, another study showed that ingestion of 400 mg/day EPA and 270 mg/day DHA for 30 days had no effect on DOMS, ROM, or serum CK and IL-6 following 50 maximal ECCs of the elbow flexors [15]. Hence, elucidating the efficacy of a shorter period of EPA and DHA supplementation is important for athletes and resistance training enthusiasts. Therefore, the present study examined whether 600 mg/day of EPA and 260 mg/day of DHA supplementation for 4 weeks reduces muscle damage following ECCs of the elbow flexors. We hypothesized that short-term EPA and DHA supplementation may improve ECC-induced muscle damage.
null
null
Results
No significant differences were observed between the EPA and DHA and PL groups in terms of age, weight, and BMI (PL group, n = 11; age, 19.8 ± 1.5 years; height, 169.0 ± 7.8 cm; weight, 65.4 ± 8.4 kg; body fat, 15.7 ± 7.6 %; and BMI, 23.2 ± 3.3 kg/m2; EPA and DHA group, n = 11; age, 20.2 ± 0.4 years; height, 167.4 ± 5.4 cm; weight, 65.0 ± 8.9 kg; body fat, 17.2 ± 6.9 %; and BMI, 23.2 ± 2.9 kg/m2). Based on the results of the food frequency survey, no difference was observed in the PL group between before (energy, 1953.6 ± 442.2 kcal; protein, 64.4 ± 16.9 g; fat, 64.6 ± 12.3 g; carbohydrate, 263.5 ± 72.5 g; and omega-3 fatty acid, 1.8 ± 0.2 g) and after supplementation (energy, 1991.0 ± 556.2 kcal; protein, 64.9 ± 22.2 g; fat, 68.3 ± 24.1 g; carbohydrate, 262.0 ± 77.4 g; and omega-3 fatty acid, 2.0 ± 0.7 g) and in the EPA and DHA group between before (energy, 1907.9 ± 349.7 kcal; protein, 70.9 ± 14.3 g; fat, 68.1 ± 12.7 g; carbohydrate, 243.8 ± 73.4 g; and omega-3 fatty acid, 2.1 ± 0.6 g) and after supplementation (energy, 1894.9 ± 431.4 kcal; protein, 70.7 ± 17.3 g; fat, 65.8 ± 13.7 g; carbohydrate, 244.1 ± 81.9 g; and omega-3 fatty acid, 2.0 ± 0.6 g). These data reveal that physical characteristics and nutrition status of the participants did not change during the experimental period. Polyunsaturated fatty acids As shown in Table 1, no significant changes were observed in the PL group before and after supplementation in terms of DGLA, AA, EPA, and DHA levels. In the EPA and DHA group, blood EPA level increased after 4 weeks (p < 0.05). However, no significant difference was observed in the DGLA, AA, and DHA levels. For comparison between groups, the EPA level was significantly higher in the EPA and DHA group than in the PL group after the 4-week supplementation (p < 0.05). Table 1Changes of serum dihomo-gamma-linolenic acid, arachidonic acid, eicosapentaenoic acid, and docosahexaenoic acid at before and after 4-weekDihomo-gamma-linolenic acid (µg/ml)Arachidonic acid (µg/ml)Eicosapentaenoic acid (µg/ml)Docosahexaenoic acid (µg/ml)PlaceboBefore49.0 ± 9.9189.1 ± 40.620.9 ± 9.268.5 ± 24.6After42.9 ± 1.8198.7 ± 39.514.2 ± 6.159.4 ± 22.3EPABefore44.7 ± 9.3201.6 ± 38.133.5 ± 21.270.9 ± 18.0After42.3 ± 10.5186.8 ± 20.358.2 ± 25.7 *†78.2 ± 17.7* p < 0.05, Compare with placebo group for after supplementation, †p < 0.05, Compare with EPA and DHA group for before supplementation Changes of serum dihomo-gamma-linolenic acid, arachidonic acid, eicosapentaenoic acid, and docosahexaenoic acid at before and after 4-week * p < 0.05, Compare with placebo group for after supplementation, †p < 0.05, Compare with EPA and DHA group for before supplementation As shown in Table 1, no significant changes were observed in the PL group before and after supplementation in terms of DGLA, AA, EPA, and DHA levels. In the EPA and DHA group, blood EPA level increased after 4 weeks (p < 0.05). However, no significant difference was observed in the DGLA, AA, and DHA levels. For comparison between groups, the EPA level was significantly higher in the EPA and DHA group than in the PL group after the 4-week supplementation (p < 0.05). 
Maximal voluntary isometric contraction torque
For MVC, a main effect of time was observed (90°: F = 28.8, p < 0.05; 110°: F = 20.8, p < 0.05), but there was no main effect of group (90°: F = 2.5; 110°: F = 1.4) and no time × group interaction (90°: F = 1.2; 110°: F = 0.9) (Fig. 1).

Fig. 1 Changes (mean ± SD) of maximal voluntary isometric contraction (MVC) torque at 90° (a) and 110° (b) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days later in the placebo (PL) and EPA and DHA groups

Range of motion of the elbow joint
As shown in Fig. 2a, ANOVA revealed interaction (F = 3.4, p < 0.05), time (F = 20.6, p < 0.05), and group (F = 4.0, p < 0.05) effects for ROM. Post-hoc analysis showed a significant decrease in ROM in the PL group immediately after exercise, which remained below baseline on days 1, 2, and 3 after exercise. The ROM in the EPA and DHA group decreased immediately after and 1 day after exercise compared with the pre-exercise value but returned to baseline by day 2 after exercise. The ROM in the EPA and DHA group was significantly higher than that in the PL group immediately after exercise (EPA and DHA group, 76.5 ± 16.7 %; PL group, 53.1 ± 18.7 %; p < 0.05).
Fig. 2 Changes (mean ± SD) of range of motion (ROM) (a), circumference (b), and visual analogue scale (VAS) (c) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days later in the PL and EPA and DHA groups. # p < 0.05 for the difference between groups; † p < 0.05 for the difference from the pre-exercise value in the PL group; * p < 0.05 for the difference from the pre-exercise value in the EPA and DHA group

Upper arm circumference and muscle soreness
For upper arm circumference and muscle soreness, a main effect of time was observed (circumference: F = 11.8, p < 0.05; muscle soreness: F = 17.7, p < 0.05), but there was no main effect of group (circumference: F = 3.4; muscle soreness: F = 0.7) and no time × group interaction (circumference: F = 1.7; muscle soreness: F = 1.0) (Fig. 2b and c).

Muscle thickness and echo intensity
For muscle thickness and echo intensity, a main effect of time was observed (muscle thickness: F = 3.1, p < 0.05; echo intensity: F = 7.5, p < 0.05), but there was no main effect of group (muscle thickness: F = 0.2; echo intensity: F = 1.7) and no time × group interaction (muscle thickness: F = 0.5; echo intensity: F = 1.2) (Fig. 3a and b).
Fig. 3 Changes (mean ± SD) of muscle thickness (a) and muscle echo intensity (b) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days later in the PL and EPA and DHA groups

Blood serum analysis
ANOVA revealed interaction (F = 3.1, p < 0.05), time (F = 8.0, p < 0.05), and group (F = 4.7, p < 0.05) effects for serum CK. Serum CK levels in the PL group significantly increased 3 and 5 days after exercise compared with pre-exercise values (Fig. 4a, p < 0.05), whereas no significant post-exercise change from pre-exercise values was observed in the EPA and DHA group. Serum CK levels were significantly higher in the PL group than in the EPA and DHA group 3 days after exercise (PL, 12132.7 ± 13652.2 U/L vs. EPA, 2575.2 ± 2798.9 U/L; p < 0.05). In contrast, no interaction (F = 2.2), time (F = 17.5), or group (F = 2.3) effects were detected for IL-6 (Fig. 4b).

Fig. 4 Changes (mean ± SD) of serum creatine kinase (a) and serum interleukin-6 (b) measured before (pre) the eccentric contraction exercise and 1, 2, 3, and 5 days later in the placebo (PL) and EPA and DHA groups. # p < 0.05 for the difference between groups; † p < 0.05 for the difference from the pre-exercise value in the PL group
Conclusions
Herein, we confirmed that 4 weeks of supplementation with 2,400 mg/day of fish oil containing EPA and DHA at daily doses of 600 mg and 260 mg, respectively, alleviated the loss of joint flexibility and the increase in CK after ECCs of the elbow flexors. However, effects on the other markers of ECC-induced muscle damage were limited compared with 8-week consumption of EPA and DHA [7, 11]. Therefore, we conclude that supplementation with EPA-enriched fish oil for 4 weeks is effective to a limited extent for attenuating acute exercise-induced muscle damage. Our finding that a certain duration of supplementation is needed to obtain earlier muscle recovery is important information for athletes and training enthusiasts seeking efficient training.
[ "Methods", "Subjects", "Study design", "Supplements", "Blood sample", "Eccentric contractions", "Maximum voluntary contraction torque", "Range of motion of the elbow joint", "Muscle soreness", "Upper arm circumference", "Muscle echo intensity and thickness", "Statistical analyses", "Polyunsaturated fatty acids", "Maximal voluntary isometric contraction torque", "Range of motion of the elbow joint", "Upper arm circumference and muscle soreness", "Muscle thickness and echo intensity", "Blood serum analysis", "Discussion" ]
[ "Subjects A total of 22 healthy recreational untrained men were recruited for this study. The participants were not allergic to fish and had not participated in any regular resistance training experience for at least one year before this study. Further, participants were asked not to participate in other clinical trials and interventions, such as massage, stretching, strenuous exercise, excessive consumption of food or alcohol, and intake of supplementations or medications during the experimental period. All participants were provided with detailed explanations of the study protocol prior to participation, and informed consent was obtained from all participants. The present study was performed in accordance with the Declaration of Helsinki and was approved by the ethics committee for human experiments of Teikyo Heisei University (ID: R01-040). Moreover, the study was registered at the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR identifier: UMIN000038003).\nA total of 22 healthy recreational untrained men were recruited for this study. The participants were not allergic to fish and had not participated in any regular resistance training experience for at least one year before this study. Further, participants were asked not to participate in other clinical trials and interventions, such as massage, stretching, strenuous exercise, excessive consumption of food or alcohol, and intake of supplementations or medications during the experimental period. All participants were provided with detailed explanations of the study protocol prior to participation, and informed consent was obtained from all participants. The present study was performed in accordance with the Declaration of Helsinki and was approved by the ethics committee for human experiments of Teikyo Heisei University (ID: R01-040). Moreover, the study was registered at the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR identifier: UMIN000038003).\nStudy design The study used the double-blind, placebo-controlled, parallel-group trial design. The participants were randomly assigned to two groups using a table of random numbers to minimize the intergroup differences in terms of age, body fat, and body mass index (BMI). The placebo (PL) and EPA and DHA group consumed daily placebo or fish oil capsules for 4 weeks prior to an exercise experiment and for 5 days after the exercise experiment. The sequence allocation concealment and blinding of participants and researchers were maintained throughout this period. Medication adherence was assessed using the daily record of the patients and via pill count at the end of the study. On the day of exercise testing, muscle damage markers were assessed using the nondominant arm before exercise. Immediately after these baseline measurements, the participants performed ECCs using the same arm. Maximum voluntary contraction (MVC) torque, ROM, DOMS, circumference, muscle echo intensity, and thickness were measured immediately before and after exercise and 1, 2, 3, and 5 days after exercise. Serum CK and IL-6 were measured before exercise and 1, 2, 3, and 5 days after exercise. In addition, we measured serum fatty acid levels at before and after 4 weeks supplementation. Subjects were instructed to eat a light meal > 2 h before arriving at the laboratory. In addition, they were asked to refrain from any exercise for 24 h, before the study visit. 
In addition, we assessed the nutrition status of all participants prior to supplement consumption and after the experimental testing on food frequency using a questionnaire based on food groups (FFQg version 3.5, Kenpakusha, Tokyo, Japan). Furthermore, we measured serum fatty acid levels, including EPA, DHA, arachidonic acid (AA), and dihomo-gamma-linolenic acid (DGLA) levels.\nThe study used the double-blind, placebo-controlled, parallel-group trial design. The participants were randomly assigned to two groups using a table of random numbers to minimize the intergroup differences in terms of age, body fat, and body mass index (BMI). The placebo (PL) and EPA and DHA group consumed daily placebo or fish oil capsules for 4 weeks prior to an exercise experiment and for 5 days after the exercise experiment. The sequence allocation concealment and blinding of participants and researchers were maintained throughout this period. Medication adherence was assessed using the daily record of the patients and via pill count at the end of the study. On the day of exercise testing, muscle damage markers were assessed using the nondominant arm before exercise. Immediately after these baseline measurements, the participants performed ECCs using the same arm. Maximum voluntary contraction (MVC) torque, ROM, DOMS, circumference, muscle echo intensity, and thickness were measured immediately before and after exercise and 1, 2, 3, and 5 days after exercise. Serum CK and IL-6 were measured before exercise and 1, 2, 3, and 5 days after exercise. In addition, we measured serum fatty acid levels at before and after 4 weeks supplementation. Subjects were instructed to eat a light meal > 2 h before arriving at the laboratory. In addition, they were asked to refrain from any exercise for 24 h, before the study visit. In addition, we assessed the nutrition status of all participants prior to supplement consumption and after the experimental testing on food frequency using a questionnaire based on food groups (FFQg version 3.5, Kenpakusha, Tokyo, Japan). Furthermore, we measured serum fatty acid levels, including EPA, DHA, arachidonic acid (AA), and dihomo-gamma-linolenic acid (DGLA) levels.\nSupplements The EPA and DHA group consumed eight 300-mg EPA-rich fish oil softgel capsules (Nippon Suisan Kaisha Ltd., Tokyo, Japan) per day, and the total consumption was 2,400 mg per day (600-mg EPA and 260-mg DHA). The PL group consumed eight 300-mg corn oil softgel capsules per day (without EPA and DHA), and the total consumption was 2,400 mg. The participants consumed the capsules within 30 min after the morning meal.\nThe EPA and DHA group consumed eight 300-mg EPA-rich fish oil softgel capsules (Nippon Suisan Kaisha Ltd., Tokyo, Japan) per day, and the total consumption was 2,400 mg per day (600-mg EPA and 260-mg DHA). The PL group consumed eight 300-mg corn oil softgel capsules per day (without EPA and DHA), and the total consumption was 2,400 mg. The participants consumed the capsules within 30 min after the morning meal.\nBlood sample The participants fasted for 8 h before a trained doctor obtained blood samples from their forearms. The blood samples were allowed to clot at room temperature (25 °C) and were then centrifuged at 3,000 rpm for 10 min at 4 °C. The serum was extracted and stored at − 20 °C until analysis. The serum levels of DGLA, AA, EPA, and DHA were measured. 
Eccentric contractions
For the ECCs, the participant sat on a preacher curl bench with the shoulder joint at 45° of flexion. The dumbbell load was derived from the MVC torque measured at 90°, converted to kilograms. The exercise comprised six sets of 10 maximal voluntary ECCs of the elbow flexors with a 90-s rest between sets, as described in our previous study [11]. The dumbbell was handed to the participant at the flexed elbow position (90°), and the participant was instructed to lower it to the fully extended position (0°) at an approximately constant speed (30°/s) over 3 s, in time with a metronome. The investigator then removed the dumbbell, and the participant returned his arm, without the dumbbell, to the start position for the next contraction.
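The torque-to-load conversion is not spelled out in the text; a minimal sketch, assuming the dumbbell acts at a fixed lever arm from the elbow joint (the lever-arm value below is an illustrative assumption, not a figure from the study), could look like this:

```python
# Hypothetical sketch: converting an MVC torque to a dumbbell load.
# The lever-arm length is an assumed illustrative value, not a number
# reported in the study.
G = 9.81  # gravitational acceleration (m/s^2)

def dumbbell_mass_kg(mvc_torque_nm: float, lever_arm_m: float = 0.25) -> float:
    """Convert an elbow-flexion MVC torque (N·m) measured at 90° into an
    equivalent dumbbell mass (kg), assuming the load acts at a fixed
    lever arm from the elbow joint."""
    return mvc_torque_nm / (G * lever_arm_m)

# Example: a 50 N·m MVC torque with an assumed 0.25 m lever arm -> ~20.4 kg
print(round(dumbbell_mass_kg(50.0), 1))
```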
Maximum voluntary contraction torque
For the measurement of MVC torque, each subject was seated with the nondominant arm attached to a custom-designed ergometer and performed isometric contractions of the elbow flexors. MVC torque was measured during three 3-s MVCs at elbow joint angles of 90° and 110°, with 30-s rests between contractions. Subjects were verbally encouraged to give maximal effort during the strength tests. The greatest 1-s average of the three trials at each angle was used for subsequent analysis, and the peak torque at each angle was taken as the MVC torque. The torque signal was amplified using a strain amplifier (LUR-A-100NSA1; Kyowa Electronic Instruments, Tokyo, Japan), and the analog signal was converted to digital using a 16-bit analog-to-digital converter (PowerLab 16SP; AD Instruments, Bella Vista, Australia). The sampling frequency was set at 10 kHz. The test–retest reliability of the MVC measures, based on the coefficient of variation (CV), was 2.8 %.
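As a rough illustration of the "greatest 1-s average" processing described above, the sketch below computes the peak 1-s moving average of a 10-kHz torque signal and takes the maximum across three trials; the synthetic data are purely illustrative:

```python
import numpy as np

FS = 10_000  # sampling frequency (Hz), as described above

def peak_1s_average(torque: np.ndarray, fs: int = FS) -> float:
    """Greatest 1-s moving average of a sampled torque signal."""
    window = np.ones(fs) / fs
    return float(np.convolve(torque, window, mode="valid").max())

def mvc_torque(trials) -> float:
    """Greatest 1-s average across the three trials at one joint angle."""
    return max(peak_1s_average(t) for t in trials)

# Example with three synthetic 3-s trials fluctuating around 40 N·m
rng = np.random.default_rng(0)
trials = [40 + 5 * rng.standard_normal(3 * FS) for _ in range(3)]
print(round(mvc_torque(trials), 1))
```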
Range of motion of the elbow joint
To examine the ROM of the elbow joint, two elbow joint angles (extended and flexed) were measured using a goniometer (Takase Medical, Tokyo, Japan). The extended joint angle was recorded while the participant attempted to fully extend the joint with the elbow held by his side and the hand supinated [7, 16, 17]. The flexed joint angle was recorded while the participant attempted to fully flex the joint from the same fully extended position with the hand supinated. The ROM was calculated by subtracting the flexed joint angle from the extended joint angle. The test–retest reliability of the ROM measures based on CV was 2.2 %.

Muscle soreness
Muscle soreness in the elbow flexors was assessed using a 10-cm visual analogue scale (VAS) on which 0 indicated "no pain" and 10 indicated "unbearable pain" [7, 16, 17]. The participant relaxed his arm in a natural position, the investigator palpated the upper arm with a thumb, and the participant indicated his pain level on the scale. All tests were conducted by the same investigator, who had been trained to apply the same pressure over time and between participants. The test–retest reliability of the VAS measures based on CV was 1.9 %.

Upper arm circumference
Upper arm circumference was assessed 9 cm above the elbow joint using a tape measure while the participants stood with their arms relaxed at their sides [9]. The measurement marks were maintained during the experimental period using a semipermanent ink marker, and a well-trained investigator obtained the measurements. The average of three measurements was used for further analysis. The test–retest reliability of the circumference measurements based on CV was 2.2 %.
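Several of the measures above report test–retest reliability as a CV; the formula is not given in the text, so the sketch below assumes the usual definition (standard deviation divided by the mean, in percent) and also shows the ROM calculation just described:

```python
import numpy as np

def rom_deg(extended_deg: float, flexed_deg: float) -> float:
    """Elbow ROM: extended joint angle minus flexed joint angle."""
    return extended_deg - flexed_deg

def test_retest_cv(repeats) -> float:
    """Coefficient of variation (%) of repeated measurements,
    assuming the common definition SD / mean x 100."""
    values = np.asarray(repeats, dtype=float)
    return float(values.std(ddof=1) / values.mean() * 100)

print(rom_deg(178.0, 42.0))                             # -> 136.0 degrees
print(round(test_retest_cv([136.0, 132.0, 134.0]), 1))  # -> ~1.5 %
```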
Muscle echo intensity and thickness
B-mode ultrasound images of the upper arm were obtained over the biceps brachii (SONIMAGE HS1, Konica Minolta, Japan), with the probe placed 9 cm from the elbow joint at the position marked for the circumference measurement. The same gain and contrast settings were used throughout the experimental period. The transverse images were transferred to a computer as bitmap (.bmp) files and analyzed with image analysis software (ImageJ, Maryland, USA). The average muscle echo intensity of a region of interest (20 × 20 mm) was calculated from a grayscale histogram (0, black; 100, white), as described in a previous study [9]. The thickness of the biceps brachii was manually calculated from the scanned images via tracing using the same software. The test–retest reliability of the muscle echo intensity and thickness measurements based on CV was 2.1 % and 1.4 %, respectively.
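As a hedged sketch of the ROI-based echo intensity analysis (the study used ImageJ; the file name, ROI position, and Pillow-based image reading below are assumptions for illustration, not the authors' pipeline):

```python
import numpy as np
from PIL import Image  # Pillow

def roi_echo_intensity(bmp_path: str, x: int, y: int, size_px: int) -> float:
    """Mean grayscale value of a square region of interest (ROI) in a
    B-mode ultrasound image exported as a bitmap. The image is read as
    8-bit grayscale (0-255); rescale by 100/255 if the 0-100 histogram
    scale reported in the study is desired."""
    img = np.asarray(Image.open(bmp_path).convert("L"), dtype=float)
    roi = img[y:y + size_px, x:x + size_px]
    return float(roi.mean())

# Hypothetical usage: the 20 x 20 mm ROI must first be converted to
# pixels using the scanner's depth/width scale.
# print(roi_echo_intensity("biceps_day3.bmp", x=120, y=80, size_px=150))
```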
Statistical analyses
All analyses were performed using SPSS version 25.0 (IBM Corp., Armonk, NY). Values are expressed as means ± standard deviation. MVC torque, ROM, circumference, echo intensity, and thickness from before exercise to 5 days post-exercise were expressed as relative changes from baseline. MVC, ROM, VAS, circumference, echo intensity, thickness, and blood data of the PL and EPA and DHA groups were compared using two-way repeated-measures analysis of variance (ANOVA). When a significant main effect or interaction was found, Bonferroni's correction was applied for post-hoc testing. A p-value of < 0.05 was considered statistically significant.
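The study ran its two-way (group × time) repeated-measures ANOVA in SPSS; as an illustrative alternative only, an equivalent analysis with Bonferroni-corrected post-hoc tests could be sketched in Python with the pingouin package (the data file and column names are hypothetical):

```python
import pandas as pd
import pingouin as pg  # assumed analysis library; the study itself used SPSS

# Long-format data: one row per subject x time point, with columns
# ["subject", "group", "time", "value"], where value is the relative
# change from baseline. The file name is hypothetical.
df = pd.read_csv("rom_long_format.csv")

# Two-way (group x time) mixed-design repeated-measures ANOVA
aov = pg.mixed_anova(data=df, dv="value", within="time",
                     subject="subject", between="group")
print(aov)

# Bonferroni-corrected post-hoc comparisons, as in the study
posthoc = pg.pairwise_tests(data=df, dv="value", within="time",
                            subject="subject", between="group",
                            padjust="bonf")
print(posthoc)
```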
Discussion
This study investigated the effect of consuming EPA-enriched fish oil for 4 weeks on temporary muscular dysfunction and soreness following ECCs. The 4-week ingestion of 600-mg EPA and 260-mg DHA attenuated the loss of joint flexibility after 60 ECCs of the elbow flexors. Moreover, we confirmed inhibition of the increase in blood CK level, suggesting protection of muscle fibers from damage. However, no effects were observed on loss of muscle strength, delayed onset muscle soreness, muscle swelling, echo intensity, or blood IL-6. These results suggest that supplementation with EPA-enriched fish oil for 4 weeks is effective to a limited extent for attenuating acute exercise-induced muscular damage, and they support our hypothesis.
We confirmed that EPA (600 mg/day) and DHA (260 mg/day) consumed for 4 weeks significantly attenuated the reduction in ROM after 60 ECCs of the elbow flexors. The decreased ROM after ECCs has been attributed to increased passive muscle stiffness caused by inflammatory responses in myofibrils, elevated cytoplasmic calcium levels, and muscle swelling due to inflammatory reactions [18, 19]. In our previous study, 30 maximal-effort isokinetic ECCs of the elbow flexors were performed after 8 weeks of 600-mg/day EPA and 260-mg/day DHA [7]; the reduction in elbow ROM up to 3 days after exercise was significantly smaller in the EPA and DHA group than in the placebo group [7]. In a different study, 8-week consumption of 600 mg/day EPA and 260 mg/day DHA significantly reduced the loss of elbow ROM up to 5 days after 60 ECCs using dumbbells at 100 % MVC [11]. As a study of supplementation shorter than 8 weeks, Lenn et al. [15] tested the effect of 30-day consumption of 1,800-mg/day omega-3 fatty acids (398-mg EPA and 269-mg DHA) on muscle damage following 50 ECCs of the elbow flexors in 21 untrained men and women and found no difference in the loss of elbow ROM between the omega-3 and placebo groups. As consumption of 2,400-mg/day omega-3 fatty acids (600-mg EPA and 260-mg DHA) was effective in the present study, the dose, rather than the duration of administration, appears to be the more important factor for the preventive effect on joint flexibility. A review article [20] suggested that an EPA-to-DHA ingestion ratio of approximately 2:1 may be beneficial in counteracting exercise-induced inflammation, which is close to our ratio; this ratio may therefore also be an important factor in maintaining joint flexibility after muscle damage. In addition, a study of 24 healthy young men reported that the reduction in knee ROM after ECCs of the quadriceps during 40 min of bench stepping was significantly smaller in those who consumed 324-mg/day EPA and 216-mg/day DHA for 30 days [10]. In that study, a lower dose of EPA and DHA was presumably effective because the long-duration, multi-joint lower-limb movement was a relatively low-intensity exercise. These data suggest that the effectiveness of short-term (28–30 days) EPA and DHA supplementation on joint flexibility may depend on the omega-3 fatty acid dose, the mode of exercise, and the specific muscles involved.

In the present study, no between-group differences in IL-6 were observed, whereas the blood CK elevation in the EPA and DHA group was significantly reduced. The increase in CK following ECCs is attributable to micro-damage to muscle fibers [3, 5]. A study in mice reported that the amount of omega-3 fatty acids in the cell membrane significantly increased after 3 weeks of omega-3 fatty acid administration [21]. A possible reason why the exercise-induced CK elevation was mitigated here is therefore that omega-3 fatty acids were incorporated into, and protected, the muscle fiber cell membrane. Similarly, Tartibian et al. [10] reported that consumption of EPA and DHA for 30 days reduced the CK elevation after ECCs from 40 min of bench stepping. Furthermore, some studies have demonstrated that consumption of EPA and DHA reduces the elevation of IL-6, an inflammatory cytokine [7, 12, 22, 23]. A possible reason why our results did not agree with these previous studies is that they used different exercise modes, such as isokinetic ECCs [7, 22], an arm curl machine [23], or bench stepping [12]. In addition, Phillips et al. [23] found significant differences between a DHA supplementation group and placebo for IL-6 at 10 days after exercise. Future studies considering exercise modes and measurement time points are necessary.

The present study also indicated no effect of 4-week supplementation with EPA and DHA on DOMS. One study reported that daily ingestion of 2,000-mg EPA and 1,000-mg DHA for 2 weeks significantly mitigated DOMS in the upper arm after two sets of ECCs (to exhaustion) via elbow flexions using dumbbells at an intensity of 120 % 1RM [24]. Similarly, 3,600 mg/day of EPA and DHA ingested for 2 weeks has also been reported to mitigate DOMS in the upper arm and thigh after 10 sets of ECCs (to exhaustion) through elbow flexions and knee extensions at an intensity of 50 % 1RM [25]. Studies using 8-week consumption of EPA and DHA at lower doses (600 mg/day of EPA and 260 mg/day of DHA) have reported significant mitigation of DOMS after 30 isokinetic ECCs by elbow flexion [7] and after isotonic ECCs using dumbbells at an intensity of 40 % 1RM [9]. In contrast, 30-day consumption of 287-mg/day EPA and 194-mg/day DHA [15] and 6-week consumption of 1,300-mg/day EPA and 300-mg/day DHA [26] were ineffective against DOMS following isokinetic ECCs. Therefore, we suggest that ingestion at a high daily dose, or for a long period, appears to be important for EPA and DHA to exert a mitigating effect on DOMS.

Herein, loss of muscle strength, muscle swelling, and increased echo intensity after exercise were not prevented by the 4-week ingestion of EPA and DHA. We previously reported that 8-week ingestion at doses similar to those used here reduced the loss of muscle strength [7, 9, 11]. Furthermore, 30-day ingestion of 287-mg/day EPA and 194-mg/day DHA [15] and 6-week ingestion of 1,300-mg EPA and 300-mg DHA [26] were ineffective against loss of muscle strength. Considering our present and previous results, the duration of EPA and DHA intake is an important determinant of the effectiveness for post-exercise muscle strength. One reason duration matters may be that, as shown in rats, more than 4 weeks are needed for omega-3 polyunsaturated fatty acids to increase in the muscle cell membrane [14]. We therefore speculate that the absence of an effect on muscle strength reflects the different and multiple roles of EPA and DHA in muscle damage. In addition, no consensus has yet been reached on muscle swelling. Consumption of EPA (600 mg/day) and DHA (260 mg/day) for 8 weeks has been reported to attenuate muscle swelling after ECCs through elbow flexions at an intensity of 100 % MVC [11], and ingestion of 324-mg EPA and 216-mg DHA per day for 30 days has been reported to mitigate muscle swelling after ECCs of the lower limbs [10]. However, several previous studies show that consumption of EPA and DHA for 2–8 weeks did not attenuate muscle swelling after ECCs [7, 9, 24, 25]. In many studies, muscle swelling was not evaluated directly but was measured with a measuring tape; the use of MRI or ultrasonography may be necessary for future evaluations. In ultrasound measurements, an increased echo intensity reflects the amount of free water or edema due to disintegration of the extracellular matrix [6]. Our previous study showed that consumption of EPA and DHA for 8 weeks resulted in a smaller increase in echo intensity following elbow flexions [11]. Therefore, given the paucity of data, more studies are needed to clarify the effect on echo intensity after ECCs; regardless, the available data suggest the importance of the duration of consumption.
[ "Introduction", "Methods", "Subjects", "Study design", "Supplements", "Blood sample", "Eccentric contractions", "Maximum voluntary contraction torque", "Range of motion of the elbow joint", "Muscle soreness", "Upper arm circumference", "Muscle echo intensity and thickness", "Statistical analyses", "Results", "Polyunsaturated fatty acids", "Maximal voluntary isometric contraction torque", "Range of motion of the elbow joint", "Upper arm circumference and muscle soreness", "Muscle thickness and echo intensity", "Blood serum analysis", "Discussion", "Conclusions" ]
[ "Omega-3 polyunsaturated fatty acids include of eicosapentaenoic acid (EPA; 20:5 n-3) and docosahexaenoic acid (DHA; 22:6 n-3), which are mainly contained in fish oil. EPA and DHA are known to have anti-inflammatory effects and increased red blood cell (RBC) deformability as a consequence of incorporation of omega-3 polyunsaturated fatty acid into RBC membrane phospholipids [1]. Recently, omega-3 polyunsaturated fatty acid supplementation has been proposed as an ergogenic aid for athletes [2].\nExhaustive eccentric contractions (ECCs) or unaccustomed exercise causes delayed onset muscle soreness (DOMS), reduction in maximal strength, limitation of range of motion (ROM), and muscle swelling [3]. In addition, previous studies showed that after ECCs, serum myoglobin (Mb), creatine kinase (CK), and interleukin (IL)-6 increase [3–7]. ECC-induced muscle damage is defined as morphological changes in the sarcomeres and endomysium and inflammatory responses in muscle fibers and connective tissues [8]. Previous studies have reported that EPA and DHA supplementation positively affect these symptoms of muscle damage [7, 9–12]. Consumed omega-3 polyunsaturated fatty acids are incorporated into phospholipid, a major component of the cell membrane, and have been reported to inhibit the effects of inflammation and reactive oxygen species [13]. Helge et al. [14] reported that the total proportion of omega-3 polyunsaturated fatty acids in muscle cellular membrane was significantly increased after 4 weeks of high-fat diet in rat. It is assumed that ingestion of EPA and DHA alleviates exercise-induced muscle damage by the incorporation of omega-3 polyunsaturated fatty acids into the muscle cell membrane.\nWe previously examined 30 ECCs in elbow flexors after an intake of both 600 mg/day of EPA and 260 mg/day of DHA for 8 weeks [7] and showed the inhibitions in reduced muscle strength and ROM, and increased IL-6 as inflammatory response [7]. Furthermore, other studies have reported that ingestion of 8-week EPA (600 mg/day) and DHA (260 mg/day) attenuates nerve dysfunction [9] and muscle stiffness [11] following 60 ECCs using a dumbbell at 100 % 1RM. However, the study that conducted these experiments in shorter periods (< 8 weeks) is controversial and insufficient [10, 12, 15]. Specifically, Tartibian et al. [10, 12] examined the effects of 324 mg/day EPA and 216 mg/day DHA ingestion for 30 days on muscle damage after 40 min of bench stepping. As the results, development of DOMS, limited ROM, muscle swelling, and elevated of serum CK and IL-6 were inhibited by the ingestions. On the other hand, it showed that the ingestion of 400 mg/day EPA and 270 mg/day DHA for 30 days were not effective to DOMS, ROM, and serum CK and IL-6 following 50 maximal ECCs of elbow flexors [15]. Hence, elucidating the efficacy of shorter period of EPA and DHA supplementation is important for athletes and resistance training enthusiasts.\nTherefore, the present study examined whether 600 mg/day of EPA and 260 mg/day of DHA supplementation for 4 weeks reduce muscle damage following elbow flexors in ECCs. We hypothesized that short term EPA and DHA supplementation may improve ECC-induced muscle damage.", "Subjects A total of 22 healthy recreational untrained men were recruited for this study. The participants were not allergic to fish and had not participated in any regular resistance training experience for at least one year before this study. 
Further, participants were asked not to participate in other clinical trials or interventions, such as massage, stretching, strenuous exercise, excessive consumption of food or alcohol, and intake of supplements or medications during the experimental period. All participants were provided with detailed explanations of the study protocol prior to participation, and informed consent was obtained from all participants. The present study was performed in accordance with the Declaration of Helsinki and was approved by the ethics committee for human experiments of Teikyo Heisei University (ID: R01-040). Moreover, the study was registered at the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR identifier: UMIN000038003).\nStudy design The study used a double-blind, placebo-controlled, parallel-group trial design. The participants were randomly assigned to two groups using a table of random numbers to minimize the intergroup differences in age, body fat, and body mass index (BMI). The placebo (PL) group and the EPA and DHA group consumed daily placebo or fish oil capsules, respectively, for 4 weeks prior to an exercise experiment and for 5 days after the exercise experiment. Sequence allocation concealment and blinding of participants and researchers were maintained throughout this period. Supplement adherence was assessed using the participants' daily records and via pill count at the end of the study. On the day of exercise testing, muscle damage markers were assessed using the nondominant arm before exercise. Immediately after these baseline measurements, the participants performed ECCs using the same arm. Maximum voluntary contraction (MVC) torque, ROM, DOMS, circumference, muscle echo intensity, and muscle thickness were measured immediately before and after exercise and 1, 2, 3, and 5 days after exercise. Serum CK and IL-6 were measured before exercise and 1, 2, 3, and 5 days after exercise. Serum fatty acid levels, including EPA, DHA, arachidonic acid (AA), and dihomo-gamma-linolenic acid (DGLA) levels, were measured before and after the 4-week supplementation period. Subjects were instructed to eat a light meal > 2 h before arriving at the laboratory and to refrain from any exercise for 24 h before the study visit. In addition, we assessed the nutritional status of all participants before supplementation and after the experimental testing using a food frequency questionnaire based on food groups (FFQg version 3.5, Kenpakusha, Tokyo, Japan).
\nSupplements The EPA and DHA group consumed eight 300-mg EPA-rich fish oil softgel capsules (Nippon Suisan Kaisha Ltd., Tokyo, Japan) per day, for a total of 2,400 mg of fish oil per day containing 600 mg of EPA and 260 mg of DHA. The PL group consumed eight 300-mg corn oil softgel capsules (without EPA and DHA) per day, for the same total of 2,400 mg. The participants consumed the capsules within 30 min after the morning meal.
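To make the supplement arithmetic above concrete, the following minimal Python sketch reproduces the daily totals and the EPA-to-DHA ratio implied by the capsule protocol; the script is illustrative only and is not part of the original study.

```python
# Daily dose arithmetic for the capsule protocol described above.
CAPSULES_PER_DAY = 8
CAPSULE_MASS_MG = 300          # fish oil (or corn oil placebo) per capsule
EPA_MG_PER_DAY = 600
DHA_MG_PER_DAY = 260

total_oil_mg = CAPSULES_PER_DAY * CAPSULE_MASS_MG   # 2,400 mg of oil per day
epa_dha_mg = EPA_MG_PER_DAY + DHA_MG_PER_DAY        # 860 mg of EPA + DHA per day
ratio = EPA_MG_PER_DAY / DHA_MG_PER_DAY             # ~2.3:1 EPA-to-DHA ratio

print(f"oil: {total_oil_mg} mg/day, EPA+DHA: {epa_dha_mg} mg/day, "
      f"EPA:DHA ~ {ratio:.1f}:1")
```

The resulting ~2.3:1 ratio is close to the roughly 2:1 EPA-to-DHA ratio that the Discussion later cites as potentially beneficial.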
\nBlood sample The participants fasted for 8 h before a trained doctor obtained blood samples from their forearms. The blood samples were allowed to clot at room temperature (25 °C) and were then centrifuged at 3,000 rpm for 10 min at 4 °C. The serum was extracted and stored at − 20 °C until analysis. The serum levels of DGLA, AA, EPA, and DHA were measured. In addition, we evaluated serum creatine kinase (CK) and interleukin-6 (IL-6) as muscle damage markers, as previously described [7].\nEccentric contractions For the ECCs, the participant sat on a preacher curl bench with his shoulder joint flexed at 45°. To set the dumbbell load, the MVC torque measured at 90° was converted to kilograms. The exercise comprised six sets of 10 maximal voluntary ECCs of the elbow flexors with a rest period of 90 s between sets, as described in our previous study [11]. The dumbbell was handed to the participant at the flexed elbow position (90°), and the participant was instructed to lower it to a fully extended position (0°) at an approximately constant speed (30°/s) over 3 s in time with a metronome. The investigator then removed the dumbbell, and the participant returned his arm without the dumbbell to the start position for the next ECC.
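The paper states that the 90° MVC torque was converted to kilograms to set the dumbbell load, but it does not give the conversion formula. The sketch below assumes a simple gravitational lever-arm model; the function name, lever-arm value, and example torque are all hypothetical, not taken from the study.

```python
# A minimal sketch of one plausible torque-to-load conversion (assumption,
# not the study's stated method).
G = 9.81  # gravitational acceleration, m/s^2

def dumbbell_mass_kg(mvc_torque_nm: float, lever_arm_m: float) -> float:
    """Convert an elbow-flexion MVC torque (N*m) at 90 deg to an equivalent
    dumbbell mass (kg), assuming the load acts at lever_arm_m from the elbow
    axis and the forearm is horizontal so gravity fully opposes flexion."""
    return mvc_torque_nm / (G * lever_arm_m)

# Hypothetical values: 60 N*m MVC torque, 0.30 m grip distance -> ~20.4 kg
print(round(dumbbell_mass_kg(60.0, 0.30), 1))
```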
\nMaximum voluntary contraction torque For the measurement of MVC torque, each subject was seated with the nondominant arm attached to a custom-designed ergometer and performed isometric contractions of the elbow flexors. MVC torque was measured during three 3-s MVCs at elbow joint angles of 90° and 110°, with a 30-s rest between contractions. Subjects were verbally encouraged to give their maximal effort during the muscle strength tests. The greatest 1-s average of the three trials at each angle was used for subsequent analysis, and the peak torque at each angle was taken as the MVC torque. The torque signal was amplified using a strain amplifier (LUR-A-100NSA1; Kyowa Electronic Instruments, Tokyo, Japan), and the analog torque signal was converted to digital signals using a 16-bit analog-to-digital converter (Power-Lab 16SP; AD Instruments, Bella Vista, Australia) at a sampling frequency of 10 kHz. The test–retest reliability of the MVC measures based on the coefficient of variation (CV) was 2.8 %.
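The "greatest 1-s average" of a 10-kHz torque recording can be computed with a moving average whose window equals the sampling frequency. A minimal sketch follows; the trial data are hypothetical.

```python
import numpy as np

FS = 10_000  # sampling frequency (Hz), matching the A/D setup above

def greatest_1s_average(torque: np.ndarray, fs: int = FS) -> float:
    """Greatest 1-s moving average of a torque recording (window = fs samples)."""
    window = fs
    # Cumulative-sum trick: O(n) sums over every 1-s window.
    csum = np.cumsum(np.insert(torque, 0, 0.0))
    moving_avg = (csum[window:] - csum[:-window]) / window
    return float(moving_avg.max())

# Hypothetical 3-s trial: noisy torque around 55 N*m.
rng = np.random.default_rng(0)
trial = 55.0 + rng.normal(0.0, 1.0, size=3 * FS)
print(round(greatest_1s_average(trial), 2))
```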
\nRange of motion of the elbow joint To examine the ROM of the elbow joint, two elbow joint angles (extended and flexed) were measured using a goniometer (Takase Medical, Tokyo, Japan). The extended joint angle was recorded while the participant attempted to fully extend the joint with the elbow held by his side and the hand in supination [7, 16, 17]. The flexed joint angle was recorded while the participant attempted to fully flex the joint from the same fully extended position with the hand supinated. The ROM was calculated by subtracting the flexed joint angle from the extended joint angle. The test–retest reliability of the ROM measures based on CV was 2.2 %.\nMuscle soreness Muscle soreness in the elbow flexors was assessed using a 10-cm visual analogue scale (VAS) in which 0 indicated “no pain” and 10 indicated “unbearable pain” [7, 16, 17]. The participant relaxed his arm in a natural position. The investigator then palpated the upper arm using a thumb, and the participant indicated his pain level on the scale. All tests were conducted by the same investigator, who had been trained to apply the same pressure over time and between participants. The test–retest reliability of the VAS measures based on CV was 1.9 %.\nUpper arm circumference Upper arm circumference was assessed at 9 cm above the elbow joint using a tape measure while the participants were standing with their arms relaxed by their sides [9]. The measurement marks were maintained during the experimental period using a semipermanent ink marker, and a well-trained investigator obtained the measurements. The average of three measurements was used for further analysis. The test–retest reliability of the circumference measurements based on CV was 2.2 %.
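The test–retest reliability figures quoted throughout these methods are coefficients of variation. Assuming the standard CV = SD / mean definition, a minimal sketch with hypothetical repeated measurements:

```python
import statistics

def coefficient_of_variation(repeats: list[float]) -> float:
    """Test-retest CV (%) = SD / mean * 100 for one set of repeated measures."""
    return statistics.stdev(repeats) / statistics.mean(repeats) * 100.0

# Hypothetical circumference repeats (cm) from one participant -> ~1.0 %
print(round(coefficient_of_variation([29.8, 30.1, 30.4]), 1))
```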
\nMuscle echo intensity and thickness B-mode ultrasound images of the biceps brachii were obtained using an ultrasound device (SONIMAGE HS1, Konica Minolta, Japan), with the probe placed 9 cm from the elbow joint at the position marked for the measurement of the upper arm circumference. The same gain and contrast settings were used throughout the experimental period. The transverse images were transferred to a computer as bitmap (.bmp) files and analyzed using image analysis software (ImageJ, Maryland, USA). The average muscle echo intensity of the region of interest (20 × 20 mm) was calculated from a grayscale histogram (0, black; 100, white) of the region, as described in a previous study [9]. The thickness of the biceps brachii was manually calculated from the scanned images via tracing using the same software. The test–retest reliability of the muscle echo intensity and thickness measurements based on CV was 2.1 % and 1.4 %, respectively.
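The study performed the grayscale region-of-interest (ROI) analysis in ImageJ; an equivalent computation in Python is sketched below. The file name and ROI coordinates are hypothetical, and note that an 8-bit bitmap spans 0–255, so values would need rescaling to match the 0–100 scale reported above.

```python
import numpy as np
from PIL import Image

def roi_echo_intensity(path: str, x: int, y: int, size_px: int) -> float:
    """Mean grayscale value of a square ROI in a B-mode ultrasound image."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)  # 8-bit gray
    roi = img[y:y + size_px, x:x + size_px]
    return float(roi.mean())

# size_px should correspond to the 20 x 20 mm ROI at the scanner's pixel
# spacing; both the file and the placement below are hypothetical.
# print(roi_echo_intensity("biceps_day2.bmp", x=120, y=80, size_px=200))
```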
\nStatistical analyses All analyses were performed using SPSS software version 25.0 (IBM Corp., Armonk, NY). Values were expressed as means ± standard deviation. MVC torque, ROM, circumference, echo intensity, and thickness values from before exercise to 5 days post-exercise were expressed as relative changes from baseline. MVC, ROM, visual analogue scale (VAS), circumference, echo intensity, thickness, and blood data of the PL and EPA and DHA groups were compared using two-way repeated-measures analysis of variance (ANOVA). When a significant main effect or interaction was found, Bonferroni’s correction was applied for post-hoc testing. A p-value of < 0.05 was considered statistically significant.", "A total of 22 healthy recreational untrained men were recruited for this study. The participants were not allergic to fish and had no regular resistance training experience for at least one year before this study. Further, participants were asked not to participate in other clinical trials or interventions, such as massage, stretching, strenuous exercise, excessive consumption of food or alcohol, and intake of supplements or medications during the experimental period. All participants were provided with detailed explanations of the study protocol prior to participation, and informed consent was obtained from all participants. The present study was performed in accordance with the Declaration of Helsinki and was approved by the ethics committee for human experiments of Teikyo Heisei University (ID: R01-040). Moreover, the study was registered at the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR identifier: UMIN000038003).", "The study used a double-blind, placebo-controlled, parallel-group trial design. The participants were randomly assigned to two groups using a table of random numbers to minimize the intergroup differences in age, body fat, and body mass index (BMI). The placebo (PL) group and the EPA and DHA group consumed daily placebo or fish oil capsules, respectively, for 4 weeks prior to an exercise experiment and for 5 days after the exercise experiment. Sequence allocation concealment and blinding of participants and researchers were maintained throughout this period. Supplement adherence was assessed using the participants' daily records and via pill count at the end of the study. On the day of exercise testing, muscle damage markers were assessed using the nondominant arm before exercise. Immediately after these baseline measurements, the participants performed ECCs using the same arm. Maximum voluntary contraction (MVC) torque, ROM, DOMS, circumference, muscle echo intensity, and muscle thickness were measured immediately before and after exercise and 1, 2, 3, and 5 days after exercise. Serum CK and IL-6 were measured before exercise and 1, 2, 3, and 5 days after exercise. Serum fatty acid levels, including EPA, DHA, arachidonic acid (AA), and dihomo-gamma-linolenic acid (DGLA) levels, were measured before and after the 4-week supplementation period. Subjects were instructed to eat a light meal > 2 h before arriving at the laboratory and to refrain from any exercise for 24 h before the study visit. In addition, we assessed the nutritional status of all participants before supplementation and after the experimental testing using a food frequency questionnaire based on food groups (FFQg version 3.5, Kenpakusha, Tokyo, Japan).", "The EPA and DHA group consumed eight 300-mg EPA-rich fish oil softgel capsules (Nippon Suisan Kaisha Ltd., Tokyo, Japan) per day, for a total of 2,400 mg of fish oil per day containing 600 mg of EPA and 260 mg of DHA. The PL group consumed eight 300-mg corn oil softgel capsules (without EPA and DHA) per day, for the same total of 2,400 mg. The participants consumed the capsules within 30 min after the morning meal.", "The participants fasted for 8 h before a trained doctor obtained blood samples from their forearms. The blood samples were allowed to clot at room temperature (25 °C) and were then centrifuged at 3,000 rpm for 10 min at 4 °C. The serum was extracted and stored at − 20 °C until analysis. The serum levels of DGLA, AA, EPA, and DHA were measured. In addition, we evaluated serum creatine kinase (CK) and interleukin-6 (IL-6) as muscle damage markers, as previously described [7].", "For the ECCs, the participant sat on a preacher curl bench with his shoulder joint flexed at 45°. To set the dumbbell load, the MVC torque measured at 90° was converted to kilograms. The exercise comprised six sets of 10 maximal voluntary ECCs of the elbow flexors with a rest period of 90 s between sets, as described in our previous study [11]. The dumbbell was handed to the participant at the flexed elbow position (90°), and the participant was instructed to lower it to a fully extended position (0°) at an approximately constant speed (30°/s) over 3 s in time with a metronome. The investigator then removed the dumbbell, and the participant returned his arm without the dumbbell to the start position for the next ECC.",
"For the measurement of MVC torque, each subject was seated with the nondominant arm attached to a custom-designed ergometer and performed isometric contractions of the elbow flexors. MVC torque was measured during three 3-s MVCs at elbow joint angles of 90° and 110°, with a 30-s rest between contractions. Subjects were verbally encouraged to give their maximal effort during the muscle strength tests. The greatest 1-s average of the three trials at each angle was used for subsequent analysis, and the peak torque at each angle was taken as the MVC torque. The torque signal was amplified using a strain amplifier (LUR-A-100NSA1; Kyowa Electronic Instruments, Tokyo, Japan), and the analog torque signal was converted to digital signals using a 16-bit analog-to-digital converter (Power-Lab 16SP; AD Instruments, Bella Vista, Australia) at a sampling frequency of 10 kHz. The test–retest reliability of the MVC measures based on the coefficient of variation (CV) was 2.8 %.", "To examine the ROM of the elbow joint, two elbow joint angles (extended and flexed) were measured using a goniometer (Takase Medical, Tokyo, Japan). The extended joint angle was recorded while the participant attempted to fully extend the joint with the elbow held by his side and the hand in supination [7, 16, 17]. The flexed joint angle was recorded while the participant attempted to fully flex the joint from the same fully extended position with the hand supinated. The ROM was calculated by subtracting the flexed joint angle from the extended joint angle. The test–retest reliability of the ROM measures based on CV was 2.2 %.", "Muscle soreness in the elbow flexors was assessed using a 10-cm visual analogue scale (VAS) in which 0 indicated “no pain” and 10 indicated “unbearable pain” [7, 16, 17]. The participant relaxed his arm in a natural position. The investigator then palpated the upper arm using a thumb, and the participant indicated his pain level on the scale. All tests were conducted by the same investigator, who had been trained to apply the same pressure over time and between participants. The test–retest reliability of the VAS measures based on CV was 1.9 %.", "Upper arm circumference was assessed at 9 cm above the elbow joint using a tape measure while the participants were standing with their arms relaxed by their sides [9]. The measurement marks were maintained during the experimental period using a semipermanent ink marker, and a well-trained investigator obtained the measurements. The average of three measurements was used for further analysis. The test–retest reliability of the circumference measurements based on CV was 2.2 %.", "B-mode ultrasound images of the biceps brachii were obtained using an ultrasound device (SONIMAGE HS1, Konica Minolta, Japan), with the probe placed 9 cm from the elbow joint at the position marked for the measurement of the upper arm circumference. The same gain and contrast settings were used throughout the experimental period. The transverse images were transferred to a computer as bitmap (.bmp) files and analyzed using image analysis software (ImageJ, Maryland, USA). The average muscle echo intensity of the region of interest (20 × 20 mm) was calculated from a grayscale histogram (0, black; 100, white) of the region, as described in a previous study [9]. The thickness of the biceps brachii was manually calculated from the scanned images via tracing using the same software. The test–retest reliability of the muscle echo intensity and thickness measurements based on CV was 2.1 % and 1.4 %, respectively.", "All analyses were performed using SPSS software version 25.0 (IBM Corp., Armonk, NY). Values were expressed as means ± standard deviation. MVC torque, ROM, circumference, echo intensity, and thickness values from before exercise to 5 days post-exercise were expressed as relative changes from baseline. MVC, ROM, visual analogue scale (VAS), circumference, echo intensity, thickness, and blood data of the PL and EPA and DHA groups were compared using two-way repeated-measures analysis of variance (ANOVA). When a significant main effect or interaction was found, Bonferroni’s correction was applied for post-hoc testing. A p-value of < 0.05 was considered statistically significant.",
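The mixed two-way (group × time) repeated-measures ANOVA with Bonferroni post-hoc tests described above was run in SPSS. A minimal open-source equivalent using the pingouin package is sketched below; the file and column names are hypothetical, and the relative-change-from-baseline step mirrors the normalization the paper applies before testing.

```python
# A minimal sketch of the mixed-design analysis, not the study's SPSS syntax.
import pandas as pd
import pingouin as pg

# Long format: one row per subject x time point, with columns
# subject, group ('PL' or 'EPA+DHA'), time ('pre', 'post', 'd1', ...), value.
df = pd.read_csv("muscle_damage_long.csv")  # hypothetical file

# Express each participant's values relative to the pre-exercise baseline (%).
baseline = df.loc[df["time"] == "pre"].set_index("subject")["value"]
df["value"] = df["value"] / df["subject"].map(baseline) * 100.0

# Two-way ANOVA with repeated measures on time and group as between factor.
aov = pg.mixed_anova(data=df, dv="value", within="time",
                     subject="subject", between="group")

# Bonferroni-corrected post-hoc comparisons (older pingouin releases name
# this function pairwise_ttests).
posthoc = pg.pairwise_tests(data=df, dv="value", within="time",
                            subject="subject", between="group",
                            padjust="bonf")
print(aov.round(3), posthoc.round(3), sep="\n")
```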
"No significant differences were observed between the EPA and DHA and PL groups in terms of age, weight, and BMI (PL group, n = 11; age, 19.8 ± 1.5 years; height, 169.0 ± 7.8 cm; weight, 65.4 ± 8.4 kg; body fat, 15.7 ± 7.6 %; and BMI, 23.2 ± 3.3 kg/m2; EPA and DHA group, n = 11; age, 20.2 ± 0.4 years; height, 167.4 ± 5.4 cm; weight, 65.0 ± 8.9 kg; body fat, 17.2 ± 6.9 %; and BMI, 23.2 ± 2.9 kg/m2). Based on the results of the food frequency survey, no difference was observed in the PL group between before (energy, 1953.6 ± 442.2 kcal; protein, 64.4 ± 16.9 g; fat, 64.6 ± 12.3 g; carbohydrate, 263.5 ± 72.5 g; and omega-3 fatty acid, 1.8 ± 0.2 g) and after supplementation (energy, 1991.0 ± 556.2 kcal; protein, 64.9 ± 22.2 g; fat, 68.3 ± 24.1 g; carbohydrate, 262.0 ± 77.4 g; and omega-3 fatty acid, 2.0 ± 0.7 g) or in the EPA and DHA group between before (energy, 1907.9 ± 349.7 kcal; protein, 70.9 ± 14.3 g; fat, 68.1 ± 12.7 g; carbohydrate, 243.8 ± 73.4 g; and omega-3 fatty acid, 2.1 ± 0.6 g) and after supplementation (energy, 1894.9 ± 431.4 kcal; protein, 70.7 ± 17.3 g; fat, 65.8 ± 13.7 g; carbohydrate, 244.1 ± 81.9 g; and omega-3 fatty acid, 2.0 ± 0.6 g). These data indicate that the physical characteristics and nutritional status of the participants did not change during the experimental period.\nPolyunsaturated fatty acids As shown in Table 1, no significant changes were observed in the PL group before and after supplementation in terms of DGLA, AA, EPA, and DHA levels. In the EPA and DHA group, the blood EPA level increased after 4 weeks (p < 0.05), whereas no significant changes were observed in the DGLA, AA, and DHA levels. In the between-group comparison, the EPA level was significantly higher in the EPA and DHA group than in the PL group after the 4-week supplementation (p < 0.05).\nTable 1 Changes in serum dihomo-gamma-linolenic acid (DGLA), arachidonic acid (AA), eicosapentaenoic acid (EPA), and docosahexaenoic acid (DHA) before and after the 4-week supplementation\nGroup | Time | DGLA (µg/ml) | AA (µg/ml) | EPA (µg/ml) | DHA (µg/ml)\nPlacebo | Before | 49.0 ± 9.9 | 189.1 ± 40.6 | 20.9 ± 9.2 | 68.5 ± 24.6\nPlacebo | After | 42.9 ± 1.8 | 198.7 ± 39.5 | 14.2 ± 6.1 | 59.4 ± 22.3\nEPA and DHA | Before | 44.7 ± 9.3 | 201.6 ± 38.1 | 33.5 ± 21.2 | 70.9 ± 18.0\nEPA and DHA | After | 42.3 ± 10.5 | 186.8 ± 20.3 | 58.2 ± 25.7 *† | 78.2 ± 17.7\n* p < 0.05 vs. the placebo group after supplementation; † p < 0.05 vs. before supplementation in the EPA and DHA group
\nMaximal voluntary isometric contraction torque For the MVC torque, a main effect of time was observed (90°: F = 28.8, p < 0.05; 110°: F = 20.8, p < 0.05), but there was no main effect of group (90°: F = 2.5; 110°: F = 1.4) and no time × group interaction (90°: F = 1.2; 110°: F = 0.9) (Fig. 1).\nFig. 1 Changes (mean ± SD) of maximal voluntary isometric contraction (MVC) torque at 90° (a) and 110° (b) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days later in the placebo (PL) and EPA and DHA groups
\nRange of motion of the elbow joint As shown in Fig. 2a, ANOVA revealed a significant time × group interaction (F = 3.4, p < 0.05) and main effects of time (F = 20.6, p < 0.05) and group (F = 4.0, p < 0.05) for ROM. Post-hoc analysis revealed a significant decrease in ROM in the PL group immediately after exercise, which remained lower than baseline on days 1, 2, and 3 after exercise. The ROM in the EPA and DHA group decreased immediately after and 1 day after exercise compared with the pre-exercise value but returned to baseline on day 2 after exercise. The ROM in the EPA and DHA group was significantly higher than that in the PL group immediately after exercise (EPA and DHA group, 76.5 ± 16.7 %; PL group, 53.1 ± 18.7 %; p < 0.05).\nFig. 2 Changes (mean ± SD) of range of motion (ROM) (a), circumference (b), and visual analogue scale (VAS) (c) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days later in the PL and EPA and DHA groups. # p < 0.05 for the difference between groups; † p < 0.05 for the difference from the pre-exercise value in the PL group; * p < 0.05 for the difference from the pre-exercise value in the EPA and DHA group\nUpper arm circumference and muscle soreness For upper arm circumference and muscle soreness, a main effect of time was observed (circumference: F = 11.8, p < 0.05; muscle soreness: F = 17.7, p < 0.05), but there was no main effect of group (circumference: F = 3.4; muscle soreness: F = 0.7) and no time × group interaction (circumference: F = 1.7; muscle soreness: F = 1.0) (Fig. 2b and c).
\nMuscle thickness and echo intensity For muscle thickness and echo intensity, a main effect of time was observed (muscle thickness: F = 3.1, p < 0.05; echo intensity: F = 7.5, p < 0.05), but there was no main effect of group (muscle thickness: F = 0.2; echo intensity: F = 1.7) and no time × group interaction (muscle thickness: F = 0.5; echo intensity: F = 1.2) (Fig. 3a and b).\nFig. 3 Changes (mean ± SD) of muscle thickness (a) and muscle echo intensity (b) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days later in the PL and EPA and DHA groups\nBlood serum analysis ANOVA revealed a significant time × group interaction (F = 3.1, p < 0.05) and main effects of time (F = 8.0, p < 0.05) and group (F = 4.7, p < 0.05) for serum CK. Serum CK levels in the PL group significantly increased 3 and 5 days after exercise compared with pre-exercise values (Fig. 4a, p < 0.05). However, no significant difference from pre-exercise values was observed after exercise in the EPA and DHA group. Serum CK levels were significantly higher in the PL group than in the EPA and DHA group 3 days after exercise (PL, 12132.7 ± 13652.2 U/L vs. EPA and DHA, 2575.2 ± 2798.9 U/L; p < 0.05). In contrast, no significant interaction (F = 2.2), time (F = 17.5), or group (F = 2.3) effects were detected for IL-6 (Fig. 4b).\nFig. 4 Changes (mean ± SD) of serum creatine kinase (a) and serum interleukin-6 (b) measured before (pre) the eccentric contraction exercise and 1, 2, 3, and 5 days after in the placebo (PL) and EPA and DHA groups. # p < 0.05 for the difference between groups; † p < 0.05 for the difference from the pre-exercise value in the PL group",
"As shown in Table 1, no significant changes were observed in the PL group before and after supplementation in terms of DGLA, AA, EPA, and DHA levels. In the EPA and DHA group, the blood EPA level increased after 4 weeks (p < 0.05), whereas no significant changes were observed in the DGLA, AA, and DHA levels. In the between-group comparison, the EPA level was significantly higher in the EPA and DHA group than in the PL group after the 4-week supplementation (p < 0.05).\nTable 1 Changes in serum dihomo-gamma-linolenic acid (DGLA), arachidonic acid (AA), eicosapentaenoic acid (EPA), and docosahexaenoic acid (DHA) before and after the 4-week supplementation\nGroup | Time | DGLA (µg/ml) | AA (µg/ml) | EPA (µg/ml) | DHA (µg/ml)\nPlacebo | Before | 49.0 ± 9.9 | 189.1 ± 40.6 | 20.9 ± 9.2 | 68.5 ± 24.6\nPlacebo | After | 42.9 ± 1.8 | 198.7 ± 39.5 | 14.2 ± 6.1 | 59.4 ± 22.3\nEPA and DHA | Before | 44.7 ± 9.3 | 201.6 ± 38.1 | 33.5 ± 21.2 | 70.9 ± 18.0\nEPA and DHA | After | 42.3 ± 10.5 | 186.8 ± 20.3 | 58.2 ± 25.7 *† | 78.2 ± 17.7\n* p < 0.05 vs. the placebo group after supplementation; † p < 0.05 vs. before supplementation in the EPA and DHA group", "For the MVC torque, a main effect of time was observed (90°: F = 28.8, p < 0.05; 110°: F = 20.8, p < 0.05), but there was no main effect of group (90°: F = 2.5; 110°: F = 1.4) and no time × group interaction (90°: F = 1.2; 110°: F = 0.9) (Fig. 1).\nFig. 1 Changes (mean ± SD) of maximal voluntary isometric contraction (MVC) torque at 90° (a) and 110° (b) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days later in the placebo (PL) and EPA and DHA groups", "As shown in Fig. 2a, ANOVA revealed a significant time × group interaction (F = 3.4, p < 0.05) and main effects of time (F = 20.6, p < 0.05) and group (F = 4.0, p < 0.05) for ROM. Post-hoc analysis revealed a significant decrease in ROM in the PL group immediately after exercise, which remained lower than baseline on days 1, 2, and 3 after exercise. The ROM in the EPA and DHA group decreased immediately after and 1 day after exercise compared with the pre-exercise value but returned to baseline on day 2 after exercise. The ROM in the EPA and DHA group was significantly higher than that in the PL group immediately after exercise (EPA and DHA group, 76.5 ± 16.7 %; PL group, 53.1 ± 18.7 %; p < 0.05).\nFig. 2 Changes (mean ± SD) of range of motion (ROM) (a), circumference (b), and visual analogue scale (VAS) (c) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days later in the PL and EPA and DHA groups. # p < 0.05 for the difference between groups; † p < 0.05 for the difference from the pre-exercise value in the PL group; * p < 0.05 for the difference from the pre-exercise value in the EPA and DHA group",
"For upper arm circumference and muscle soreness, a main effect of time was observed (circumference: F = 11.8, p < 0.05; muscle soreness: F = 17.7, p < 0.05), but there was no main effect of group (circumference: F = 3.4; muscle soreness: F = 0.7) and no time × group interaction (circumference: F = 1.7; muscle soreness: F = 1.0) (Fig. 2b and c).", "For muscle thickness and echo intensity, a main effect of time was observed (muscle thickness: F = 3.1, p < 0.05; echo intensity: F = 7.5, p < 0.05), but there was no main effect of group (muscle thickness: F = 0.2; echo intensity: F = 1.7) and no time × group interaction (muscle thickness: F = 0.5; echo intensity: F = 1.2) (Fig. 3a and b).\nFig. 3 Changes (mean ± SD) of muscle thickness (a) and muscle echo intensity (b) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days later in the PL and EPA and DHA groups", "ANOVA revealed a significant time × group interaction (F = 3.1, p < 0.05) and main effects of time (F = 8.0, p < 0.05) and group (F = 4.7, p < 0.05) for serum CK. Serum CK levels in the PL group significantly increased 3 and 5 days after exercise compared with pre-exercise values (Fig. 4a, p < 0.05). However, no significant difference from pre-exercise values was observed after exercise in the EPA and DHA group. Serum CK levels were significantly higher in the PL group than in the EPA and DHA group 3 days after exercise (PL, 12132.7 ± 13652.2 U/L vs. EPA and DHA, 2575.2 ± 2798.9 U/L; p < 0.05). In contrast, no significant interaction (F = 2.2), time (F = 17.5), or group (F = 2.3) effects were detected for IL-6 (Fig. 4b).\nFig. 4 Changes (mean ± SD) of serum creatine kinase (a) and serum interleukin-6 (b) measured before (pre) the eccentric contraction exercise and 1, 2, 3, and 5 days after in the placebo (PL) and EPA and DHA groups. # p < 0.05 for the difference between groups; † p < 0.05 for the difference from the pre-exercise value in the PL group", "This study investigated the effect of consuming EPA-enriched fish oil for 4 weeks on transient muscle dysfunction and soreness following ECCs. As a result, the 4-week ingestion of 600 mg of EPA and 260 mg of DHA per day attenuated the loss of joint flexibility after 60 ECCs of the elbow flexors. Moreover, we confirmed inhibition of the increase in blood CK level, suggesting protection of muscle fibers from damage. However, no effects were observed on the loss of muscle strength, delayed onset muscle soreness, muscle swelling, echo intensity, or blood IL-6. These results suggest that supplementation with EPA-enriched fish oil for 4 weeks is effective to a limited extent for attenuating acute exercise-induced muscle damage.
These results support our hypothesis.\nHerein, we confirmed that EPA (600 mg/day) and DHA (260 mg/day) consumed for 4 weeks significantly attenuated the reduction in ROM after 60 ECCs of the elbow flexors. The decreased ROM after ECCs has been attributed to increased passive muscle stiffness due to the inflammatory response in myofibrils, elevated cytoplasmic calcium levels, and muscle swelling due to inflammatory reactions [18, 19]. In our previous study, 30 isokinetic ECCs of the elbow flexors with maximum effort were performed after ingestion of 600 mg of EPA and 260 mg of DHA per day for 8 weeks before exercise [7]. As a result, the reduction of elbow joint ROM up to 3 days after exercise was significantly smaller in the EPA and DHA group than in the placebo group [7]. In a different study, 8-week consumption of 600 mg/day EPA and 260 mg/day DHA significantly attenuated the reduction in elbow ROM up to 5 days after 60 ECCs using dumbbells at 100 % MVC [11]. In a study investigating the effectiveness of supplementation for a period shorter than 8 weeks, Lenn et al. [15] tested the effect of 30-day consumption of 1,800 mg/day omega-3 fatty acids (398 mg EPA and 269 mg DHA) on muscle damage following 50 ECCs by elbow flexions in 21 untrained men and women and found no differences in the loss of elbow ROM between the omega-3 fatty acid group and the placebo group. As consumption of 2,400 mg/day of fish oil (600 mg EPA and 260 mg DHA) was effective in this study, the dose, rather than the duration of administration, appears to be the more important factor determining the preventive effect on joint flexibility. A review article [20] suggested that an ingestion ratio of EPA to DHA of approximately 2:1 may be beneficial in counteracting exercise-induced inflammation, which is close to the ratio used here. Therefore, the EPA-to-DHA ratio may also be an important factor in maintaining joint flexibility after muscle damage. In addition, a study involving 24 healthy young men reported that the reduction in knee joint ROM after ECCs of the quadriceps during a 40-min bench stepping exercise was significantly smaller in those who consumed 324 mg/day EPA and 216 mg/day DHA for 30 days [10]. In that study, a lower dose of EPA and DHA was presumably effective because the prolonged multi-joint movement of the lower limbs was a relatively low-intensity exercise. These data suggest that the effectiveness of short-term (28–30 days) supplementation with EPA and DHA on joint flexibility may differ depending on the omega-3 fatty acid dose, the mode of exercise, and the specific muscles involved.\nIn the present study, no differences in IL-6 were observed between the groups, whereas the blood CK elevation in the EPA and DHA group was significantly reduced. The increase in CK following ECCs is attributable to micro-damage to muscle fibers [3, 5]. A study involving mice reported that the amount of omega-3 fatty acids in the cell membrane significantly increased after 3 weeks of omega-3 fatty acid administration [21]. Therefore, a possible reason why the exercise-induced CK elevation was mitigated here is that omega-3 fatty acids were incorporated into, and protected, the cell membranes of muscle fibers. Similarly, Tartibian et al. [10] reported that consumption of EPA and DHA for 30 days reduced CK elevation after ECCs during 40 min of bench stepping exercise.
Furthermore, some studies have demonstrated that consumption of EPA and DHA reduces the elevation of IL-6, an inflammatory cytokine [7, 12, 22, 23]. A possible reason why our results did not agree with these previous studies is that they used different exercise modes, such as isokinetic ECCs [7, 22], an arm curl machine [23], or a bench stepping exercise [12]. In addition, Phillips et al. [23] showed significant differences in IL-6 between the DHA supplementation group and the placebo group at 10 days after exercise. Future studies considering exercise modes and measurement time points are necessary.\nThe present study also indicated no effect of 4-week supplementation with EPA and DHA on DOMS. One study reported that daily ingestion of 2,000 mg EPA and 1,000 mg DHA for 2 weeks significantly mitigated DOMS in the upper arm after two sets of ECCs (until exhaustion) via elbow flexions using dumbbells at an intensity of 120 % 1RM [24]. Similarly, 3,600 mg/day of EPA and DHA ingested for 2 weeks has also been reported to mitigate DOMS in the upper arm and thigh after 10 sets of ECCs (until exhaustion) through elbow flexions and knee extensions at an intensity of 50 % 1RM [25]. Studies using 8-week consumption of EPA and DHA at lower doses (600 mg/day of EPA and 260 mg/day of DHA) have reported significant mitigation of DOMS after 30 isokinetic ECCs by elbow flexions [7] and after isotonic ECCs using dumbbells at an intensity of 40 % 1RM [9]. In contrast, 30-day consumption of 287 mg/day EPA and 194 mg/day DHA [15] and 6-week consumption of 1,300 mg/day EPA and 300 mg/day DHA [26] have been demonstrated to be ineffective against DOMS following isokinetic ECCs. Therefore, we suggest that ingestion at a high daily dose or for a long period appears to be important for EPA and DHA to exert a mitigating effect on DOMS.\nHerein, the loss of muscle strength, muscle swelling, and increased echo intensity after exercise were not prevented by the 4-week ingestion of EPA and DHA. In previous studies, we reported that 8-week ingestion at doses similar to those used in the present study reduced the loss of muscle strength [7, 9, 11]. Furthermore, 30-day ingestion of 287 mg/day EPA and 194 mg/day DHA [15] and 6-week ingestion of 1,300 mg EPA and 300 mg DHA [26] have been shown to be ineffective against the loss of muscle strength. Considering the results of our present and previous studies, the duration of EPA and DHA intake is an important determinant of their effectiveness for muscle strength after exercise. One reason for the importance of duration is that a previous study in rats showed that increasing omega-3 polyunsaturated fatty acids in the muscle cell membrane requires more than 4 weeks [14]. Therefore, we speculate that this explains the absence of an effect on muscle strength, reflecting the different and multiple roles of EPA and DHA in muscle damage. In addition, no consensus has been reached thus far on muscle swelling. Consumption of EPA (600 mg/day) and DHA (260 mg/day) for 8 weeks has been reported to attenuate muscle swelling after ECCs through elbow flexions at an intensity of 100 % MVC [11], and ingestion of 324 mg EPA and 216 mg DHA per day for 30 days has been reported to mitigate muscle swelling after ECCs of the lower limbs [10]. However, several previous studies showed that consumption of EPA and DHA for 2–8 weeks did not attenuate muscle swelling after ECCs [7, 9, 24, 25]. In many studies, muscle swelling was not evaluated directly but was measured using a measuring tape.
The use of MRI or ultrasonography may be necessary for such evaluation in the future. In ultrasound measurements, an increased echo intensity reflects the amount of free water or edema due to disintegration of the extracellular matrix [6]. Our previous study showed that consumption of EPA and DHA for 8 weeks resulted in a smaller increase in echo intensity following elbow flexions [11]. Therefore, given the paucity of data, more studies are needed to clarify the effect on echo intensity after ECCs; regardless, the available data suggest the importance of the duration of consumption.", "Herein, we confirmed that 4-week supplementation with 2,400 mg/day of fish oil providing EPA and DHA at daily doses of 600 mg and 260 mg, respectively, alleviated the loss of joint flexibility and the increase in CK after ECCs through elbow flexions. However, the effects on the other parameters of ECC-induced muscle damage were limited compared with those of 8-week consumption of EPA and DHA [7, 11]. Therefore, we conclude that supplementation with EPA-enriched fish oil for 4 weeks is effective to a limited extent for attenuating acute exercise-induced muscle damage. Our finding that a certain duration of supplementation is required to obtain earlier muscle recovery provides important information for athletes and training enthusiasts seeking efficient training." ]
[ "introduction", null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, "conclusion" ]
[ "Eicosapentaenoic acid", "Omega-3", "Sports nutrition", "Ergogenic aid", "Long‐chain n-3 polyunsaturated fatty acids", "Lengthening", "Muscle function", "Joint flexibility" ]
Introduction: Omega-3 polyunsaturated fatty acids include eicosapentaenoic acid (EPA; 20:5 n-3) and docosahexaenoic acid (DHA; 22:6 n-3), which are mainly contained in fish oil. EPA and DHA are known to have anti-inflammatory effects and to increase red blood cell (RBC) deformability as a consequence of the incorporation of omega-3 polyunsaturated fatty acids into RBC membrane phospholipids [1]. Recently, omega-3 polyunsaturated fatty acid supplementation has been proposed as an ergogenic aid for athletes [2]. Exhaustive eccentric contractions (ECCs) and unaccustomed exercise cause delayed onset muscle soreness (DOMS), reduction in maximal strength, limitation of range of motion (ROM), and muscle swelling [3]. In addition, previous studies showed that serum myoglobin (Mb), creatine kinase (CK), and interleukin (IL)-6 increase after ECCs [3–7]. ECC-induced muscle damage is defined as morphological changes in the sarcomeres and endomysium and inflammatory responses in muscle fibers and connective tissues [8]. Previous studies have reported that EPA and DHA supplementation positively affects these symptoms of muscle damage [7, 9–12]. Consumed omega-3 polyunsaturated fatty acids are incorporated into phospholipids, a major component of the cell membrane, and have been reported to inhibit the effects of inflammation and reactive oxygen species [13]. Helge et al. [14] reported that the total proportion of omega-3 polyunsaturated fatty acids in the muscle cell membrane was significantly increased after 4 weeks of a high-fat diet in rats. It is therefore assumed that ingestion of EPA and DHA alleviates exercise-induced muscle damage through the incorporation of omega-3 polyunsaturated fatty acids into the muscle cell membrane. We previously examined 30 ECCs of the elbow flexors after an intake of 600 mg/day of EPA and 260 mg/day of DHA for 8 weeks [7] and showed that the supplementation inhibited the reductions in muscle strength and ROM and the increase in IL-6 as an inflammatory response [7]. Furthermore, other studies have reported that 8-week ingestion of EPA (600 mg/day) and DHA (260 mg/day) attenuates nerve dysfunction [9] and muscle stiffness [11] following 60 ECCs using a dumbbell at 100 % 1RM. However, evidence from studies that used shorter supplementation periods (< 8 weeks) is controversial and insufficient [10, 12, 15]. Specifically, Tartibian et al. [10, 12] examined the effects of ingesting 324 mg/day EPA and 216 mg/day DHA for 30 days on muscle damage after 40 min of bench stepping. As a result, the development of DOMS, the limitation of ROM, muscle swelling, and the elevation of serum CK and IL-6 were inhibited by supplementation. On the other hand, ingestion of 400 mg/day EPA and 270 mg/day DHA for 30 days was reported to be ineffective against DOMS, ROM loss, and elevation of serum CK and IL-6 following 50 maximal ECCs of the elbow flexors [15]. Hence, elucidating the efficacy of shorter periods of EPA and DHA supplementation is important for athletes and resistance training enthusiasts. Therefore, the present study examined whether supplementation with 600 mg/day of EPA and 260 mg/day of DHA for 4 weeks reduces muscle damage following ECCs of the elbow flexors. We hypothesized that short-term EPA and DHA supplementation may attenuate ECC-induced muscle damage. Methods: Subjects A total of 22 healthy recreational untrained men were recruited for this study. The participants were not allergic to fish and had no regular resistance training experience for at least one year before this study.
Further, participants were asked not to participate in other clinical trials or interventions, such as massage, stretching, strenuous exercise, excessive consumption of food or alcohol, and intake of supplements or medications, during the experimental period. All participants were provided with detailed explanations of the study protocol prior to participation, and informed consent was obtained from all participants. The present study was performed in accordance with the Declaration of Helsinki and was approved by the ethics committee for human experiments of Teikyo Heisei University (ID: R01-040). Moreover, the study was registered at the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR identifier: UMIN000038003). Study design The study used a double-blind, placebo-controlled, parallel-group trial design. The participants were randomly assigned to two groups using a table of random numbers to minimize the intergroup differences in terms of age, body fat, and body mass index (BMI). The placebo (PL) group and the EPA and DHA group consumed placebo or fish oil capsules daily for 4 weeks prior to an exercise experiment and for 5 days after the exercise experiment. Allocation concealment and blinding of participants and researchers were maintained throughout this period. Supplement adherence was assessed using the participants' daily records and via pill count at the end of the study. On the day of exercise testing, muscle damage markers were assessed using the nondominant arm before exercise. Immediately after these baseline measurements, the participants performed ECCs using the same arm. Maximum voluntary contraction (MVC) torque, ROM, DOMS, circumference, muscle echo intensity, and thickness were measured immediately before and after exercise and 1, 2, 3, and 5 days after exercise. Serum CK and IL-6 were measured before exercise and 1, 2, 3, and 5 days after exercise. In addition, we measured serum fatty acid levels before and after the 4-week supplementation period. Subjects were instructed to eat a light meal > 2 h before arriving at the laboratory and were asked to refrain from any exercise for 24 h before each study visit. In addition, we assessed the nutritional status of all participants before supplementation and after the experimental testing using a food frequency questionnaire based on food groups (FFQg version 3.5, Kenpakusha, Tokyo, Japan). Furthermore, we measured serum fatty acid levels, including EPA, DHA, arachidonic acid (AA), and dihomo-gamma-linolenic acid (DGLA) levels.
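By way of illustration only, a covariate-balanced random allocation of this kind could be sketched in Python as follows; this is our sketch, not the authors' procedure, and the number of shuffles, the seed, and the balancing score are assumptions:

import random

def allocate(participants, n_shuffles=1000, seed=1):
    """Randomly split participants into two equal groups, keeping the
    shuffle whose group means of age, body fat, and BMI differ least.
    participants: list of dicts with keys 'age', 'fat', 'bmi'."""
    random.seed(seed)
    ids = list(range(len(participants)))
    best, best_score = None, float("inf")
    for _ in range(n_shuffles):
        random.shuffle(ids)
        g1, g2 = ids[: len(ids) // 2], ids[len(ids) // 2 :]
        score = 0.0
        for key in ("age", "fat", "bmi"):
            m1 = sum(participants[i][key] for i in g1) / len(g1)
            m2 = sum(participants[i][key] for i in g2) / len(g2)
            score += abs(m1 - m2)  # covariate imbalance for this shuffle
        if score < best_score:
            best, best_score = (list(g1), list(g2)), score
    return best  # (PL group indices, EPA and DHA group indices)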
Supplements The EPA and DHA group consumed eight 300-mg EPA-rich fish oil softgel capsules (Nippon Suisan Kaisha Ltd., Tokyo, Japan) per day, for a total consumption of 2,400 mg per day (600 mg of EPA and 260 mg of DHA). The PL group consumed eight 300-mg corn oil softgel capsules per day (without EPA and DHA), for a total consumption of 2,400 mg. The participants consumed the capsules within 30 min after the morning meal. Blood sample The participants fasted for 8 h before a trained doctor obtained blood samples from their forearms. The blood samples were allowed to clot at room temperature (25 °C) and were then centrifuged at 3,000 rpm for 10 min at 4 °C. The serum was extracted and stored at − 20 °C until analysis. The serum levels of DGLA, AA, EPA, and DHA were measured. In addition, we evaluated serum creatine kinase (CK) and interleukin-6 (IL-6) as muscle damage markers, as previously described [7].
Eccentric contractions For the ECCs, the participant sat on a preacher curl bench with his shoulder joint angle at 45° flexion. For the dumbbell load, the maximal voluntary contraction (MVC) torque measured at 90° was converted to kilograms. The exercise comprised six sets of 10 maximal voluntary ECCs of the elbow flexors with a rest period of 90 s between sets, as described in our previous study [11]. The dumbbell was handed to the participant at the elbow flexed position (90°), and the participant was instructed to lower it to a fully extended position (0°) at an approximately constant speed (30°/s) in time (3 s) with a metronome. The investigator then removed the dumbbell, and the participant returned his arm without the dumbbell to the start position for the next ECC. Maximum voluntary contraction torque For the measurement of MVC torque, each subject was seated with the nondominant arm attached to a custom-designed ergometer and performed isometric contractions of the elbow flexors. MVC torque was measured during three 3-s MVCs at elbow joint angles of 90° and 110°, with a 30-s rest between contractions. Subjects were verbally encouraged to give their maximal effort during the muscle strength tests. The greatest 1-s average of the three trials for each angle was used for subsequent analysis, and the peak torque at each angle was used as the MVC torque. The torque signal was amplified using a strain amplifier (LUR-A-100NSA1; Kyowa Electronic Instruments, Tokyo, Japan). The analog torque signal was converted to digital signals using a 16-bit analog-to-digital converter (Power-Lab 16SP; AD Instruments, Bella Vista, Australia). The sampling frequency was set at 10 kHz. The test–retest reliability of the MVC measures based on coefficient of variation (CV) was 2.8 %.
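For concreteness, the torque analysis described above (greatest 1-s average of a 10-kHz record, and conversion of the 90° MVC torque to a dumbbell mass) could be computed as in the following Python sketch; the 0.30-m moment arm is a hypothetical placeholder, since the paper does not report how the conversion to kilograms was performed:

import numpy as np

FS = 10_000  # sampling frequency (Hz), as stated above

def peak_1s_average(torque_nm, fs=FS):
    """Greatest 1-s moving average of a single torque trace (N·m)."""
    window = np.ones(fs) / fs
    return np.convolve(torque_nm, window, mode="valid").max()

def mvc_torque(trials, fs=FS):
    """MVC torque = highest peak 1-s average across the three trials."""
    return max(peak_1s_average(t, fs) for t in trials)

def dumbbell_mass_kg(mvc_nm, moment_arm_m=0.30):
    """Convert elbow-flexion torque at 90° to an equivalent dumbbell mass:
    mass = torque / (moment arm × g). The grip-to-elbow moment arm of
    0.30 m is an assumed value for illustration only."""
    return mvc_nm / (moment_arm_m * 9.81)

Under this assumption, for example, an MVC torque of 60 N·m at 90° would correspond to a dumbbell of roughly 60 / (0.30 × 9.81) ≈ 20 kg.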
Range of motion of the elbow joint To examine the ROM of the elbow joint, two elbow joint angles (extended and flexed) were measured using a goniometer (Takase Medical, Tokyo, Japan). The extended joint angle was recorded while the participant attempted to fully extend the joint with the elbow held by his side and the hand in supination [7, 16, 17]. The flexed joint angle was identified while the participant attempted to fully flex the joint from the same fully extended position with the hand supinated. The ROM was calculated by subtracting the flexed joint angle from the extended joint angle. The test–retest reliability of the ROM measures based on CV was 2.2 %.
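A minimal Python sketch of these two computations (the variable names are ours):

import numpy as np

def rom_deg(extended_deg, flexed_deg):
    """ROM = extended joint angle minus flexed joint angle."""
    return extended_deg - flexed_deg

def cv_percent(repeated_measures):
    """Test–retest coefficient of variation: SD / mean × 100."""
    x = np.asarray(repeated_measures, dtype=float)
    return x.std(ddof=1) / x.mean() * 100.0

For instance, duplicate ROM readings of 130° and 126° give cv_percent([130, 126]) ≈ 2.2 %, the order of reliability reported here.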
Muscle soreness Muscle soreness in the elbow flexors was assessed using a 10-cm visual analogue scale (VAS) in which 0 indicated “no pain” and 10 indicated “unbearable pain” [7, 16, 17]. The participant relaxed his arm in a natural position. The investigator then palpated the upper arm using a thumb, and the participant indicated his pain level on the scale. All tests were conducted by the same investigator, who had been trained to apply the same pressure over time and between participants. The test–retest reliability of the VAS measures based on CV was 1.9 %. Upper arm circumference Upper arm circumference was assessed at 9 cm above the elbow joint using a tape measure while the participants were standing with their arms relaxed by their sides [9]. The measurement marks were maintained during the experimental period using a semipermanent ink marker, and a well-trained investigator obtained the measurements. The average value of three measurements was used for further analysis. The test–retest reliability of the circumference measurements based on CV was 2.2 %. Muscle echo intensity and thickness B-mode ultrasound images of the upper arm were obtained over the biceps brachii using an ultrasound device (SONIMAGE HS1, Konica Minolta, Japan), with the probe placed 9 cm from the elbow joint at the position marked for the measurement of the upper arm circumference. The same gain and contrast settings were used throughout the experimental period. The transverse images were transferred to a computer as bitmap (.bmp) files and analyzed. The average muscle echo intensity of the region of interest (20 × 20 mm) was calculated using image analysis software (ImageJ, Maryland, USA) that provided a grayscale histogram (0, black; 100, white) for the region, as described in a previous study [9]. The thickness of the biceps brachii was calculated manually from the scanned images via tracing using the same software. The test–retest reliability of the muscle echo intensity and thickness measurements based on CV was 2.1 % and 1.4 %, respectively.
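As an illustrative sketch of this image analysis in Python (the pixel calibration, ROI placement, and the 0–255 range of standard 8-bit bitmaps are assumptions; the paper reports ImageJ values on a 0–100 scale):

import numpy as np
from PIL import Image

def echo_intensity(bmp_path, roi_top_left_px, px_per_mm=10):
    """Mean grayscale value of a 20 × 20 mm region of interest.
    px_per_mm is an assumed calibration factor for illustration;
    multiply the result by 100/255 to mimic a 0-100 scale."""
    img = np.asarray(Image.open(bmp_path).convert("L"), dtype=float)
    x0, y0 = roi_top_left_px
    side = 20 * px_per_mm  # ROI side length in pixels
    roi = img[y0:y0 + side, x0:x0 + side]
    return roi.mean()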
Statistical analyses All analyses were performed using SPSS software version 25.0 (IBM Corp., Armonk, NY). Values are expressed as means ± standard deviation. Values of MVC torque, ROM, circumference, echo intensity, and thickness from before exercise to 5 days post-exercise were expressed as relative changes from baseline. MVC, ROM, VAS, circumference, echo intensity, thickness, and blood data of the PL and EPA and DHA groups were compared using two-way repeated-measures analysis of variance (ANOVA). When a significant main effect or interaction was found, Bonferroni's correction was applied for post-hoc testing. A p-value of < 0.05 was considered statistically significant.
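The comparison described here corresponds to a mixed-design two-way ANOVA (between factor: group; within factor: time). The authors used SPSS; purely as a hedged sketch, an equivalent analysis in Python with the pingouin package (assuming long-format data with the column names shown) might look like:

import pingouin as pg

def to_percent_of_baseline(df):
    """Express each subject's values as % of the pre-exercise value.
    df: pandas DataFrame with columns 'subject', 'group', 'time', 'value'."""
    base = df[df["time"] == "pre"].set_index("subject")["value"]
    out = df.copy()
    out["value"] = out["value"] / out["subject"].map(base) * 100.0
    return out

def analyze(df):
    """df columns: 'subject', 'group' (PL or EPA+DHA),
    'time' (pre, post, d1, d2, d3, d5), and 'value'."""
    aov = pg.mixed_anova(data=df, dv="value", within="time",
                         subject="subject", between="group")
    # Bonferroni-corrected pairwise comparisons, mirroring the post-hoc tests
    # (pairwise_tests was named pairwise_ttests in older pingouin versions)
    posthoc = pg.pairwise_tests(data=df, dv="value", within="time",
                                between="group", subject="subject",
                                padjust="bonf")
    return aov, posthoc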
Results: No significant differences were observed between the EPA and DHA and PL groups in terms of age, height, weight, body fat, and BMI (PL group, n = 11: age, 19.8 ± 1.5 years; height, 169.0 ± 7.8 cm; weight, 65.4 ± 8.4 kg; body fat, 15.7 ± 7.6 %; BMI, 23.2 ± 3.3 kg/m2; EPA and DHA group, n = 11: age, 20.2 ± 0.4 years; height, 167.4 ± 5.4 cm; weight, 65.0 ± 8.9 kg; body fat, 17.2 ± 6.9 %; BMI, 23.2 ± 2.9 kg/m2).
Based on the results of the food frequency survey, no differences were observed in the PL group between before (energy, 1953.6 ± 442.2 kcal; protein, 64.4 ± 16.9 g; fat, 64.6 ± 12.3 g; carbohydrate, 263.5 ± 72.5 g; omega-3 fatty acids, 1.8 ± 0.2 g) and after supplementation (energy, 1991.0 ± 556.2 kcal; protein, 64.9 ± 22.2 g; fat, 68.3 ± 24.1 g; carbohydrate, 262.0 ± 77.4 g; omega-3 fatty acids, 2.0 ± 0.7 g), or in the EPA and DHA group between before (energy, 1907.9 ± 349.7 kcal; protein, 70.9 ± 14.3 g; fat, 68.1 ± 12.7 g; carbohydrate, 243.8 ± 73.4 g; omega-3 fatty acids, 2.1 ± 0.6 g) and after supplementation (energy, 1894.9 ± 431.4 kcal; protein, 70.7 ± 17.3 g; fat, 65.8 ± 13.7 g; carbohydrate, 244.1 ± 81.9 g; omega-3 fatty acids, 2.0 ± 0.6 g). These data indicate that the physical characteristics and nutritional status of the participants did not change during the experimental period. Polyunsaturated fatty acids As shown in Table 1, no significant changes were observed in the PL group before and after supplementation in terms of DGLA, AA, EPA, and DHA levels. In the EPA and DHA group, the blood EPA level increased after 4 weeks (p < 0.05), whereas no significant differences were observed in the DGLA, AA, and DHA levels. Between groups, the EPA level was significantly higher in the EPA and DHA group than in the PL group after the 4-week supplementation (p < 0.05).

Table 1. Changes in serum dihomo-gamma-linolenic acid (DGLA), arachidonic acid (AA), eicosapentaenoic acid (EPA), and docosahexaenoic acid (DHA) before and after the 4-week supplementation

Group      Time     DGLA (µg/ml)   AA (µg/ml)     EPA (µg/ml)      DHA (µg/ml)
Placebo    Before   49.0 ± 9.9     189.1 ± 40.6   20.9 ± 9.2       68.5 ± 24.6
Placebo    After    42.9 ± 1.8     198.7 ± 39.5   14.2 ± 6.1       59.4 ± 22.3
EPA+DHA    Before   44.7 ± 9.3     201.6 ± 38.1   33.5 ± 21.2      70.9 ± 18.0
EPA+DHA    After    42.3 ± 10.5    186.8 ± 20.3   58.2 ± 25.7 *†   78.2 ± 17.7

* p < 0.05 vs. the placebo group after supplementation; † p < 0.05 vs. before supplementation in the EPA and DHA group.
Maximal voluntary isometric contraction torque For the MVC, a main effect of time was observed (90°: F = 28.8, p < 0.05; 110°: F = 20.8, p < 0.05), but neither a main effect of group (90°: F = 2.5; 110°: F = 1.4) nor a time × group interaction (90°: F = 1.2; 110°: F = 0.9) was found (Fig. 1). Fig. 1 Changes (mean ± SD) of maximal voluntary isometric contraction (MVC) torque at 90° (a) and 110° (b) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days after exercise in the placebo (PL) and EPA and DHA groups. Range of motion of the elbow joint As shown in Fig. 2a, ANOVA revealed an interaction (F = 3.4, p < 0.05) and main effects of time (F = 20.6, p < 0.05) and group (F = 4.0, p < 0.05) for ROM. Post-hoc analysis revealed a significant decrease in ROM in the PL group immediately after exercise, which remained lower than baseline on days 1, 2, and 3 after exercise. The ROM in the EPA and DHA group decreased immediately after and 1 day after exercise compared with the pre-exercise value but returned to baseline on day 2 after exercise. The ROM in the EPA and DHA group was significantly higher than that in the PL group immediately after exercise (EPA and DHA group, 76.5 ± 16.7 %; PL group, 53.1 ± 18.7 %; p < 0.05). Fig. 2 Changes (mean ± SD) of range of motion (ROM) (a), circumference (b), and visual analogue scale (VAS) (c) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days after exercise in the PL and EPA and DHA groups.
# p < 0.05 for the difference between groups; † p < 0.05 for the difference from the pre-exercise value in the PL group; * p < 0.05 for the difference from the pre-exercise value in the EPA and DHA group. Upper arm circumference and muscle soreness For upper arm circumference and muscle soreness, a main effect of time was observed (circumference: F = 11.8, p < 0.05; muscle soreness: F = 17.7, p < 0.05), but neither a main effect of group (circumference: F = 3.4; muscle soreness: F = 0.7) nor a time × group interaction (circumference: F = 1.7; muscle soreness: F = 1.0) was found (Fig. 2b and c). Muscle thickness and echo intensity For muscle thickness and echo intensity, a main effect of time was observed (muscle thickness: F = 3.1, p < 0.05; echo intensity: F = 7.5, p < 0.05), but neither a main effect of group (muscle thickness: F = 0.2; echo intensity: F = 1.7) nor a time × group interaction (muscle thickness: F = 0.5; echo intensity: F = 1.2) was found (Fig. 3a and b).
Fig. 3 Changes (mean ± SD) of muscle thickness (a) and muscle echo intensity (b) measured before (pre) and immediately after (post) the eccentric contraction exercise and 1, 2, 3, and 5 days after exercise in the PL and EPA and DHA groups. Blood serum analysis ANOVA revealed an interaction (F = 3.1, p < 0.05) and main effects of time (F = 8.0, p < 0.05) and group (F = 4.7, p < 0.05) for serum CK. Serum CK levels in the PL group significantly increased 3 and 5 days after exercise compared with pre-exercise values (Fig. 4a, p < 0.05). However, no significant post-exercise differences from pre-exercise values were observed in the EPA and DHA group. Serum CK levels were significantly higher in the PL group than in the EPA and DHA group 3 days after exercise (PL, 12132.7 ± 13652.2 U/L vs. EPA and DHA, 2575.2 ± 2798.9 U/L; p < 0.05). In contrast, no interaction (F = 2.2), main effect of time (F = 17.5), or main effect of group (F = 2.3) was detected for IL-6 (Fig. 4b). Fig. 4 Changes (mean ± SD) of serum creatine kinase (a) and serum interleukin-6 (b) measured before (pre) the eccentric contraction exercise and 1, 2, 3, and 5 days after exercise in the placebo (PL) and EPA and DHA groups. # p < 0.05 for the difference between groups; † p < 0.05 for the difference from the pre-exercise value in the PL group.
Discussion: This study investigated the effect of consuming EPA-enriched fish oil for 4 weeks on transient muscular dysfunction and soreness following ECCs. The 4-week ingestion of 600 mg of EPA and 260 mg of DHA per day attenuated the loss of joint flexibility after 60 ECCs of the elbow flexors. Moreover, we confirmed inhibition of the increase in blood CK level, suggesting protection of muscle fibers from damage. However, no effects were observed on the loss of muscle strength, delayed onset muscle soreness, muscle swelling, echo intensity, or blood IL-6. These results suggest that supplementation with EPA-enriched fish oil for 4 weeks is effective to a limited extent for attenuating acute exercise-induced muscular damage, and they support our hypothesis. Herein, we confirmed that EPA (600 mg/day) and DHA (260 mg/day) consumed for 4 weeks significantly attenuated the reduction in ROM after 60 ECCs of the elbow flexors. The decreased ROM after ECCs has been attributed to increased passive muscle stiffness due to inflammatory responses in myofibrils, elevated cytoplasmic calcium levels, and muscle swelling due to inflammatory reactions [18, 19]. In our previous study, 30 isokinetic ECCs of the elbow flexors with maximum effort were performed after ingestion of 600 mg of EPA and 260 mg of DHA per day for 8 weeks before exercise [7]. As a result, the reduction of ROM in the elbow joint until 3 days after exercise was significantly smaller in the EPA and DHA group than in the placebo group [7]. In a different study, 8-week consumption of 600 mg/day of EPA and 260 mg/day of DHA significantly attenuated the reduction in elbow ROM until 5 days after 60 ECCs using dumbbells at 100 % MVC [11]. As a study investigating the effectiveness of supplementation for a period shorter than 8 weeks, Lenn et al. [15] tested the effect of 30-day consumption of 1,800 mg/day of omega-3 fatty acids (398 mg of EPA and 269 mg of DHA) on muscle damage following 50 ECCs by elbow flexion in 21 untrained men and women. That study showed no differences in the loss of elbow ROM between the omega-3 fatty acid group and the placebo group. As consumption of 2,400 mg/day of omega-3 fatty acids (600 mg of EPA and 260 mg of DHA) was effective in this study, the dose, rather than the duration of administration, appears to be the more important factor determining the preventive effect on joint flexibility. A review article [20] has suggested that an EPA-to-DHA ingestion ratio of approximately 2:1 may be beneficial in counteracting exercise-induced inflammation, which is similar to the ratio used here. Therefore, the EPA-to-DHA ratio may also be an important factor in maintaining joint flexibility after muscle damage. In addition, a study involving 24 healthy young men reported that the reduction in knee joint ROM after ECCs of the quadriceps during a 40-min bench-stepping exercise was significantly smaller in those who consumed 324 mg/day of EPA and 216 mg/day of DHA for 30 days [10]. In that study, a lower dose of EPA and DHA was presumably effective because the prolonged multi-joint lower-limb movement was relatively low-intensity exercise. These data suggest that the effectiveness of short-term supplementation with EPA and DHA (28–30 days) on joint flexibility may differ depending on the omega-3 fatty acid dose, mode of exercise, and specific muscles involved.
In the present study, no differences in IL-6 were observed between the groups, and the blood CK elevation in the EPA and DHA group was significantly reduced. The increase in CK following ECCs is attributable to micro-damage to muscle fibers [3, 5]. A study involving mice reported that the amount of omega-3 fatty acids in the cell membrane significantly increased after 3 weeks of omega-3 fatty acid administration [21]. Therefore, a possible reason why the exercise-induced CK elevation was mitigated herein is that omega-3 fatty acids were incorporated into, and protected, the muscle fiber cell membrane. Similarly, Tartibian et al. [10] reported that consumption of EPA and DHA for 30 days reduced the CK elevation after ECCs from 40 min of bench stepping. Furthermore, some studies have demonstrated that consumption of EPA and DHA reduced the elevation of IL-6, an inflammatory cytokine [7, 12, 22, 23]. A possible reason why our results did not agree with these previous studies is that they used different exercise modes, such as isokinetic ECCs [7, 22], an arm curl machine [23], or a bench-stepping exercise [12]. In addition, Phillips et al. [23] showed significant differences in IL-6 between a DHA supplementation group and placebo at 10 days after exercise. Future studies considering exercise modes and measurement time points are necessary. The present study also indicated no effect of 4-week supplementation with EPA and DHA on DOMS. One study reported that daily ingestion of 2,000 mg of EPA and 1,000 mg of DHA for 2 weeks significantly mitigated DOMS in the upper arm after two sets of ECCs (until exhaustion) via elbow flexion using dumbbells at an intensity of 120 % 1RM [24]. Similarly, 3,600 mg/day of EPA and DHA ingested for 2 weeks has been reported to mitigate DOMS in the upper arm and thigh after 10 sets of ECCs (until exhaustion) through elbow flexions and knee extensions at an intensity of 50 % 1RM [25]. Studies using 8-week consumption of EPA and DHA at lower doses (600 mg/day of EPA and 260 mg/day of DHA) have reported significant mitigation of DOMS after 30 isokinetic ECCs by elbow flexion [7] and after isotonic ECCs using dumbbells at an intensity of 40 % 1RM [9]. In contrast, 30-day consumption of 287 mg/day of EPA and 194 mg/day of DHA [15] and 6-week consumption of 1,300 mg/day of EPA and 300 mg/day of DHA [26] have been demonstrated to be ineffective against DOMS following isokinetic ECCs. Therefore, we suggest that ingestion at a high daily dose or for a long period appears to be important for EPA and DHA to exert a mitigating effect on DOMS. Herein, the loss of muscle strength, muscle swelling, and increased echo intensity after exercise were not prevented by the 4-week ingestion of EPA and DHA. In previous studies, we reported that 8-week ingestion at doses similar to those used in the present study reduced the loss of muscle strength [7, 9, 11]. Furthermore, 30-day ingestion of 287 mg/day of EPA and 194 mg/day of DHA [15] and 6-week ingestion of 1,300 mg of EPA and 300 mg of DHA [26] have been shown to be ineffective against the loss of muscle strength. Considering the results of our present and previous studies, the duration of EPA and DHA intake is an important determinant of their effectiveness for muscle strength after exercise. One reason for the importance of duration is that a previous study showed that increasing omega-3 polyunsaturated fatty acids in the muscle cell membrane requires more than 4 weeks in rats [14].
Therefore, we speculate that there was no effect on muscle strength, reflecting the different and multiple roles of EPA and DHA in muscle damage. In addition, no consensus has been reached thus far on muscle swelling. Consumption of EPA (600 mg/day) and DHA (260 mg/day) for 8 weeks has been reported to attenuate muscle swelling after ECCs through elbow flexion at an intensity of 100 % MVC [11]. Ingestion of 324 mg of EPA and 216 mg of DHA per day for 30 days has been reported to mitigate muscle swelling after ECCs in the lower limbs [10]. However, several previous studies have shown that consumption of EPA and DHA for 2–8 weeks did not attenuate muscle swelling after ECCs [7, 9, 24, 25]. In many studies, muscle swelling was not evaluated directly but was measured using a measuring tape; the use of MRI or ultrasonography may be necessary for evaluation in the future. In ultrasound measurements, an increased echo intensity reflects the amount of free water or edema due to disintegration of the extracellular matrix [6]. Our previous study showed that consumption of EPA and DHA for 8 weeks resulted in a smaller increase in echo intensity following elbow flexion [11]. Therefore, more studies are needed to clarify the effect on echo intensity after ECCs because of the paucity of data; regardless, the available data suggest the importance of the duration of consumption. Conclusions: Herein, we confirmed that 4-week supplementation with 2,400 mg/day of fish oil containing EPA and DHA at daily doses of 600 mg and 260 mg, respectively, alleviated the loss of joint flexibility and the increase in CK after ECCs through elbow flexion. However, the effects on other parameters of ECC-induced muscle damage were limited compared with those of 8-week consumption of EPA and DHA [7, 11]. Therefore, we conclude that supplementation with EPA-enriched fish oil for 4 weeks is effective to a limited extent for attenuating acute exercise-induced muscular damage. Our finding that a certain duration of supplementation is needed to obtain earlier muscle recovery will be important information for athletes and training enthusiasts seeking efficient training.
Background: We previously showed that 8 weeks of fish oil supplementation attenuated muscle damage. However, the effect of a shorter period of fish oil supplementation is unclear. The present study investigated the effect of fish oil, eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), taken for 4 weeks on muscular damage caused by eccentric contractions (ECCs) of the elbow flexors. Methods: Twenty-two untrained men were recruited in this double-blind, placebo-controlled, parallel-design study and were randomly assigned to the EPA and DHA group (n = 11) or the placebo group (PL, n = 11). They consumed either 600 mg of EPA and 260 mg of DHA per day or a placebo supplement for 4 weeks prior to exercise. Subjects performed 60 ECCs at 100 % maximal voluntary contraction (MVC) using a dumbbell. Changes in MVC torque, range of motion (ROM), upper arm circumference, muscle soreness, echo intensity, muscle thickness, serum creatine kinase (CK), and interleukin-6 (IL-6) were assessed before exercise, immediately after exercise, and 1, 2, 3, and 5 days after exercise. Results: ROM was significantly higher in the EPA and DHA group than in the PL group immediately after the ECCs (p < 0.05). No differences between groups were observed in MVC torque, upper arm circumference, muscle soreness, echo intensity, or thickness. A significant between-group difference was observed in serum CK 3 days after the ECCs (p < 0.05). Conclusions: We conclude that a shorter period of EPA and DHA supplementation benefits joint flexibility and protects muscle fibers following ECCs.
Introduction: Omega-3 polyunsaturated fatty acids include eicosapentaenoic acid (EPA; 20:5 n-3) and docosahexaenoic acid (DHA; 22:6 n-3), which are mainly contained in fish oil. EPA and DHA are known to have anti-inflammatory effects and to increase red blood cell (RBC) deformability as a consequence of the incorporation of omega-3 polyunsaturated fatty acids into RBC membrane phospholipids [1]. Recently, omega-3 polyunsaturated fatty acid supplementation has been proposed as an ergogenic aid for athletes [2]. Exhaustive eccentric contractions (ECCs) or unaccustomed exercise causes delayed onset muscle soreness (DOMS), a reduction in maximal strength, limitation of range of motion (ROM), and muscle swelling [3]. In addition, previous studies showed that serum myoglobin (Mb), creatine kinase (CK), and interleukin (IL)-6 increase after ECCs [3–7]. ECC-induced muscle damage is defined as morphological changes in the sarcomeres and endomysium and inflammatory responses in muscle fibers and connective tissues [8]. Previous studies have reported that EPA and DHA supplementation positively affects these symptoms of muscle damage [7, 9–12]. Consumed omega-3 polyunsaturated fatty acids are incorporated into phospholipids, a major component of the cell membrane, and have been reported to inhibit the effects of inflammation and reactive oxygen species [13]. Helge et al. [14] reported that the total proportion of omega-3 polyunsaturated fatty acids in the muscle cell membrane was significantly increased after 4 weeks of a high-fat diet in rats. It is assumed that ingestion of EPA and DHA alleviates exercise-induced muscle damage through the incorporation of omega-3 polyunsaturated fatty acids into the muscle cell membrane. We previously examined 30 ECCs of the elbow flexors after intake of both 600 mg/day of EPA and 260 mg/day of DHA for 8 weeks [7] and showed inhibition of the reductions in muscle strength and ROM and of the increase in IL-6 as an inflammatory response [7]. Furthermore, other studies have reported that 8-week ingestion of EPA (600 mg/day) and DHA (260 mg/day) attenuates nerve dysfunction [9] and muscle stiffness [11] following 60 ECCs using a dumbbell at 100 % 1RM. However, the evidence from studies that used shorter supplementation periods (< 8 weeks) is controversial and insufficient [10, 12, 15]. Specifically, Tartibian et al. [10, 12] examined the effects of ingesting 324 mg/day of EPA and 216 mg/day of DHA for 30 days on muscle damage after 40 min of bench stepping. As a result, the development of DOMS, limited ROM, muscle swelling, and elevation of serum CK and IL-6 were inhibited by the supplementation. On the other hand, ingestion of 400 mg/day of EPA and 270 mg/day of DHA for 30 days was not effective against DOMS, limited ROM, or elevated serum CK and IL-6 following 50 maximal ECCs of the elbow flexors [15]. Hence, elucidating the efficacy of a shorter period of EPA and DHA supplementation is important for athletes and resistance training enthusiasts. Therefore, the present study examined whether supplementation with 600 mg/day of EPA and 260 mg/day of DHA for 4 weeks reduces muscle damage following ECCs of the elbow flexors. We hypothesized that short-term EPA and DHA supplementation may alleviate ECC-induced muscle damage. Conclusions: Herein, we confirmed that 4-week supplementation with 2,400 mg/day of fish oil containing EPA and DHA at daily doses of 600 mg and 260 mg, respectively, alleviated the loss of joint flexibility and the increase in CK after ECCs through elbow flexions. 
However, effects on the other parameters of ECC-induced muscle damage were limited compared with those of 8-week consumption of EPA and DHA [7, 11]. Therefore, we conclude that supplementation with EPA-enriched fish oil for 4 weeks is effective to a limited extent for attenuating acute exercise-induced muscle damage. Our finding that a certain duration of supplementation is required to obtain earlier muscle recovery will be important information for athletes and training enthusiasts seeking efficient training.
Background: We previously showed that 8 weeks of fish oil supplementation attenuated muscle damage. However, the effect of a shorter period of fish oil supplementation is unclear. The present study investigated the effect of fish oil, eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), taken for 4 weeks, on muscle damage caused by eccentric contractions (ECCs) of the elbow flexors. Methods: Twenty-two untrained men were recruited into this double-blind, placebo-controlled, parallel-design study and randomly assigned to the EPA and DHA group (EPA and DHA, n = 11) or the placebo group (PL, n = 11). They consumed either 600 mg of EPA and 260 mg of DHA per day or a placebo supplement for 4 weeks prior to exercise. Subjects performed 60 ECCs at 100 % maximal voluntary contraction (MVC) using a dumbbell. Changes in MVC torque, range of motion (ROM), upper arm circumference, muscle soreness, echo intensity, muscle thickness, serum creatine kinase (CK), and interleukin-6 (IL-6) were assessed before exercise; immediately after exercise; and 1, 2, 3, and 5 days after exercise. Results: ROM was significantly higher in the EPA and DHA group than in the PL group immediately after performing ECCs (p < 0.05). No differences between groups were observed in MVC torque, upper arm circumference, muscle soreness, echo intensity, or muscle thickness. A significant difference was observed in serum CK 3 days after ECCs (p < 0.05). Conclusions: We conclude that a shorter period of EPA and DHA supplementation benefits joint flexibility and the protection of muscle fibers following ECCs.
13,435
333
[ 3562, 165, 352, 96, 111, 165, 197, 126, 114, 88, 200, 141, 314, 236, 426, 105, 217, 379, 1704 ]
22
[ "epa", "dha", "exercise", "group", "muscle", "epa dha", "05", "pl", "mg", "acid" ]
[ "supplementation important athletes", "mitigated omega fatty", "omega fatty acids", "exercise induced inflammation", "acids muscle cell" ]
null
[CONTENT] Eicosapentaenoic acid | Omega-3 | Sports nutrition | Ergogenic aid | Long‐chain n-3 polyunsaturated fatty acids | Lengthening | Muscle function | Joint flexibility [SUMMARY]
null
[CONTENT] Eicosapentaenoic acid | Omega-3 | Sports nutrition | Ergogenic aid | Long‐chain n-3 polyunsaturated fatty acids | Lengthening | Muscle function | Joint flexibility [SUMMARY]
[CONTENT] Eicosapentaenoic acid | Omega-3 | Sports nutrition | Ergogenic aid | Long‐chain n-3 polyunsaturated fatty acids | Lengthening | Muscle function | Joint flexibility [SUMMARY]
[CONTENT] Eicosapentaenoic acid | Omega-3 | Sports nutrition | Ergogenic aid | Long‐chain n-3 polyunsaturated fatty acids | Lengthening | Muscle function | Joint flexibility [SUMMARY]
[CONTENT] Eicosapentaenoic acid | Omega-3 | Sports nutrition | Ergogenic aid | Long‐chain n-3 polyunsaturated fatty acids | Lengthening | Muscle function | Joint flexibility [SUMMARY]
[CONTENT] 8,11,14-Eicosatrienoic Acid | Arachidonic Acid | Arm | Creatine Kinase | Dietary Supplements | Docosahexaenoic Acids | Eicosapentaenoic Acid | Elbow Joint | Fatty Acids, Unsaturated | Fish Oils | Humans | Interleukin-6 | Isometric Contraction | Male | Myalgia | Placebos | Range of Motion, Articular | Time Factors | Torque | Young Adult [SUMMARY]
null
[CONTENT] 8,11,14-Eicosatrienoic Acid | Arachidonic Acid | Arm | Creatine Kinase | Dietary Supplements | Docosahexaenoic Acids | Eicosapentaenoic Acid | Elbow Joint | Fatty Acids, Unsaturated | Fish Oils | Humans | Interleukin-6 | Isometric Contraction | Male | Myalgia | Placebos | Range of Motion, Articular | Time Factors | Torque | Young Adult [SUMMARY]
[CONTENT] 8,11,14-Eicosatrienoic Acid | Arachidonic Acid | Arm | Creatine Kinase | Dietary Supplements | Docosahexaenoic Acids | Eicosapentaenoic Acid | Elbow Joint | Fatty Acids, Unsaturated | Fish Oils | Humans | Interleukin-6 | Isometric Contraction | Male | Myalgia | Placebos | Range of Motion, Articular | Time Factors | Torque | Young Adult [SUMMARY]
[CONTENT] 8,11,14-Eicosatrienoic Acid | Arachidonic Acid | Arm | Creatine Kinase | Dietary Supplements | Docosahexaenoic Acids | Eicosapentaenoic Acid | Elbow Joint | Fatty Acids, Unsaturated | Fish Oils | Humans | Interleukin-6 | Isometric Contraction | Male | Myalgia | Placebos | Range of Motion, Articular | Time Factors | Torque | Young Adult [SUMMARY]
[CONTENT] 8,11,14-Eicosatrienoic Acid | Arachidonic Acid | Arm | Creatine Kinase | Dietary Supplements | Docosahexaenoic Acids | Eicosapentaenoic Acid | Elbow Joint | Fatty Acids, Unsaturated | Fish Oils | Humans | Interleukin-6 | Isometric Contraction | Male | Myalgia | Placebos | Range of Motion, Articular | Time Factors | Torque | Young Adult [SUMMARY]
[CONTENT] supplementation important athletes | mitigated omega fatty | omega fatty acids | exercise induced inflammation | acids muscle cell [SUMMARY]
null
[CONTENT] supplementation important athletes | mitigated omega fatty | omega fatty acids | exercise induced inflammation | acids muscle cell [SUMMARY]
[CONTENT] supplementation important athletes | mitigated omega fatty | omega fatty acids | exercise induced inflammation | acids muscle cell [SUMMARY]
[CONTENT] supplementation important athletes | mitigated omega fatty | omega fatty acids | exercise induced inflammation | acids muscle cell [SUMMARY]
[CONTENT] supplementation important athletes | mitigated omega fatty | omega fatty acids | exercise induced inflammation | acids muscle cell [SUMMARY]
[CONTENT] epa | dha | exercise | group | muscle | epa dha | 05 | pl | mg | acid [SUMMARY]
null
[CONTENT] epa | dha | exercise | group | muscle | epa dha | 05 | pl | mg | acid [SUMMARY]
[CONTENT] epa | dha | exercise | group | muscle | epa dha | 05 | pl | mg | acid [SUMMARY]
[CONTENT] epa | dha | exercise | group | muscle | epa dha | 05 | pl | mg | acid [SUMMARY]
[CONTENT] epa | dha | exercise | group | muscle | epa dha | 05 | pl | mg | acid [SUMMARY]
[CONTENT] mg | mg day | muscle | day | omega polyunsaturated fatty | omega polyunsaturated | polyunsaturated | polyunsaturated fatty | omega | dha [SUMMARY]
null
[CONTENT] group | 05 | pre | exercise | pl | epa | dha | difference | epa dha | acid [SUMMARY]
[CONTENT] mg | supplementation epa | limited | induced | training | week | fish oil | oil | damage | epa [SUMMARY]
[CONTENT] epa | group | exercise | muscle | dha | 05 | mg | epa dha | pl | pre [SUMMARY]
[CONTENT] epa | group | exercise | muscle | dha | 05 | mg | epa dha | pl | pre [SUMMARY]
[CONTENT] 8-week ||| ||| EPA | 4 weeks [SUMMARY]
null
[CONTENT] ROM | EPA | DHA | PL | 0.05 ||| MVC ||| CK | 3 days | 0.05 [SUMMARY]
[CONTENT] EPA | DHA [SUMMARY]
[CONTENT] 8-week ||| ||| EPA | 4 weeks ||| Twenty-two | EPA | DHA | EPA | DHA | 11 | PL, n =  | 11 ||| EPA | 600 | 4 weeks ||| 60 | 100 % | MVC ||| MVC | creatine kinase | CK | 1 | 2 | 3 | 5 days ||| EPA | DHA | PL | 0.05 ||| MVC ||| CK | 3 days | 0.05 ||| EPA | DHA [SUMMARY]
[CONTENT] 8-week ||| ||| EPA | 4 weeks ||| Twenty-two | EPA | DHA | EPA | DHA | 11 | PL, n =  | 11 ||| EPA | 600 | 4 weeks ||| 60 | 100 % | MVC ||| MVC | creatine kinase | CK | 1 | 2 | 3 | 5 days ||| EPA | DHA | PL | 0.05 ||| MVC ||| CK | 3 days | 0.05 ||| EPA | DHA [SUMMARY]
Implementation of control measures against an outbreak due to Clostridioides difficile
34604593
To describe an outbreak of Clostridioides difficile infection (CDI) and the impact of the prevention and control measures that were implemented in the “Hospital Juárez de México” (HJM) to contain it.
INTRODUCTION
A cross-sectional, descriptive, observational, and retrospective study was designed. All information on the hospital outbreak and on health care-associated infections (HCAI) was obtained from the files of the Hospital Epidemiological Surveillance Unit (HESU) of the HJM.
METHODS
A total of 15 cases of CDI were detected from February 20th to May 22nd, 2018, of whom 55.6% were male and 44.4% female, with an average age of 56 years and a range of 24 to 86 years. Six failures and deficiencies involving health personnel and hospital logistics were identified through analyses based on the situational diagnosis of the services involved and on the construction of cause-effect diagrams. Additionally, after detecting the outbreak by means of laboratory tests and a timeline, the HESU team implemented control measures and prospective surveillance to prevent the emergence of new cases.
RESULTS
The implementation of basic quality tools, control measures, and prospective epidemiological surveillance had a positive impact on the control of the outbreak of toxin B-producing C. difficile.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Clostridioides difficile", "Clostridium Infections", "Cross Infection", "Cross-Sectional Studies", "Disease Outbreaks", "Female", "Humans", "Infection Control", "Male", "Mexico", "Middle Aged", "Prospective Studies", "Retrospective Studies", "Tertiary Care Centers", "Young Adult" ]
8451347
Introduction
Clostridioides difficile (C. difficile) is a Gram-positive, spore-forming, toxin-producing bacillus that in recent years has represented a great challenge in terms of infection control within health institutions around the world, being one of the main etiological agents of health care-associated infections (HCAI) [1]. C. difficile infection (CDI) is the most common cause of diarrhea associated with health care and with long-term use of antimicrobials [1]. The transmission of this pathogen mainly occurs in hospital facilities, where exposure to antimicrobials and contamination of the environment by C. difficile spores are common [1]. C. difficile produces toxins that cause damage to the epithelial cells; these toxins are known as toxin A and toxin B [1]. Toxin A, an enterotoxin, increases secretion and causes more mucosal damage and inflammation; however, toxin B has shown greater cytotoxic activity in cell cultures, and subsequent studies have shown that it is more toxic to the human colonic epithelium than toxin A [1, 2]. Worldwide, it is estimated that between 20 and 30% of cases of antibiotic-associated diarrhea are caused by C. difficile. In Canada, a rate of 3.8 to 9.5 cases per 10,000 patient-days was calculated between 1997 and 2005. As of 2002, more severe and recurrent outbreaks have been observed, the great majority associated with the use of antimicrobials, mainly fluoroquinolones, with an attributable mortality of 6.9 to 7.5% [3, 4]. In some hospitals in Mexico, the presence of C. difficile has been detected as one of the main infectious agents, showing a higher prevalence in hospital services such as Internal Medicine, General Surgery, and Intensive Care Units [5]. According to the literature, the main risk factor for C. difficile infection is the indiscriminate use of antimicrobials, mainly from the carbapenem and cephalosporin families [6, 7]. The outbreaks that have occurred in our country have shown difficulties in their control and prevention, even remaining constant with an endemic behavior [8, 9]. The aim of this work was to describe the hospital outbreak of C. difficile infection and the impact of the prevention and control measures that were implemented in the “Hospital Juárez de México” (HJM).
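The Canadian figures quoted above are incidence densities per 10,000 patient-days. As a minimal sketch, not part of the original report and using hypothetical example numbers, the calculation behind such a rate is:

```python
def cdi_incidence_per_10k(new_cases: int, patient_days: int) -> float:
    """Incidence density: new CDI cases per 10,000 patient-days of exposure."""
    if patient_days <= 0:
        raise ValueError("patient_days must be positive")
    return new_cases / patient_days * 10_000

# Hypothetical example: 12 hospital-onset cases over 25,000 patient-days
print(cdi_incidence_per_10k(12, 25_000))  # 4.8, within the 3.8-9.5 range cited above
```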
Methods
STUDY DESIGN A cross-sectional, descriptive, observational, and retrospective study was designed. All information on the hospital outbreak and on health care-associated infections (HCAI) was obtained from the Hospital Epidemiological Surveillance Unit (HESU) of the HJM. PERIOD OF ANALYSIS AND CASE DEFINITION OF CDI The period of the outbreak study was from February 20th to May 22nd, 2018. For the identification of cases, operational definitions were established: a patient of any age who had been hospitalized for at least 48 hours and who had presented more than three diarrheic stools in 12 hours, accompanied by one or more of the following symptoms (fever, abdominal distension, and/or abdominal pain), was considered a suspected case of CDI. A confirmed case of CDI had to meet the operational definition of a suspected case and have a positive result on any of the following tests: a commercial immunochromatographic qualitative test against glutamate dehydrogenase (GDH) (CERTEST Clostridium difficile antigen GDH, Certest Biotec S.L.); real-time PCR amplification of a fragment of the gene coding for C. difficile toxin B, performed using the automated BD MAX technology (Becton Dickinson) according to the supplier’s instructions. COLLECTION AND ANALYSIS DATA The outbreak information was handled in the Microsoft Excel package (Microsoft Corporation; Redmond, WA). The variables and risk factors described and analyzed were age, gender, hospital service, use of antimicrobials before CDI diagnosis, use of proton pump inhibitors, and type of diagnosis. Frequencies and percentages were used for data description.
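The operational definitions above are rule-based and translate directly into code. The following sketch is only an illustration of that logic; the field names are ours, not from the study:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    hours_hospitalized: float
    diarrheic_stools_12h: int
    fever: bool
    abdominal_distension: bool
    abdominal_pain: bool
    gdh_positive: bool = False           # immunochromatographic GDH antigen test
    pcr_toxin_b_positive: bool = False   # real-time PCR for the toxin B gene

def classify_cdi(p: Patient) -> str:
    """Apply the study's operational definitions for CDI cases."""
    suspected = (
        p.hours_hospitalized >= 48              # hospitalized for at least 48 h
        and p.diarrheic_stools_12h > 3          # more than three diarrheic stools in 12 h
        and (p.fever or p.abdominal_distension or p.abdominal_pain)
    )
    if not suspected:
        return "not a case"
    if p.gdh_positive or p.pcr_toxin_b_positive:
        return "confirmed case"
    return "suspected case"

# Hypothetical patient meeting the suspected-case definition with a positive PCR
p = Patient(hours_hospitalized=72, diarrheic_stools_12h=5,
            fever=True, abdominal_distension=False, abdominal_pain=True,
            pcr_toxin_b_positive=True)
print(classify_cdi(p))  # -> "confirmed case"
```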
null
null
Conclusions
Figure 1. Epidemic curve of the outbreak due to Clostridioides difficile infection. Figure 2. Ishikawa diagram of the outbreak due to Clostridioides difficile infection.
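Figure 1's epidemic curve is built by tabulating case counts per month of detection. As an illustrative sketch only, using the monthly split reported in the Results (13.3 %, 40 %, and 46.7 % of the 15 cases, i.e., 2, 6, and 7 cases), the tabulation can be reproduced as follows:

```python
from collections import Counter

# Detection month of the 15 outbreak cases, reconstructed from the reported percentages
cases = ["Feb"] * 2 + ["Mar"] * 6 + ["Apr"] * 7

for month, n in Counter(cases).items():
    bar = "#" * n
    print(f"{month}: {bar} {n} ({n / len(cases):.1%})")
# Feb: ## 2 (13.3%)
# Mar: ###### 6 (40.0%)
# Apr: ####### 7 (46.7%)
```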
[ "Introduction", "STUDY DESIGN", "PERIOD OF ANALYSIS AND CASE DEFINITION OF CDI", "COLLECTION AND ANALYSIS DATA", "Results", "IMPLEMENTATION OF OUTBREAK CONTROL MEASURES", "Hygiene and hand washing, and contact precautions", "Hospital cleaning and disinfection", "LABORATORY DIAGNOSIS", "COHORT ISOLATION", "INTERVENTION WITH HEALTH WORKERS", "Discussion", "Conclusions" ]
[ "Clostridioides difficile (C. difficile) is a Gram-positive, spore-forming, toxin-producing bacillus that in recent years has represented a great challenge in terms of infection control within health institutions around the world, being one of the main etiological agents of health care-associated infections (HCAI) [1]. C. difficile infection (CDI) is the most common cause of diarrhea associated with health care and with long-term use of antimicrobials [1]. The transmission of this pathogen mainly occurs in hospital facilities, where exposure to antimicrobials and contamination of the environment by C. difficile spores are common [1]. C. difficile produces toxins that cause damage to the epithelial cells; these toxins are known as toxin A and toxin B [1]. Toxin A, an enterotoxin, increases secretion and causes more mucosal damage and inflammation; however, toxin B has shown greater cytotoxic activity in cell cultures, and subsequent studies have shown that it is more toxic to the human colonic epithelium than toxin A [1, 2]. Worldwide, it is estimated that between 20 and 30% of cases of antibiotic-associated diarrhea are caused by C. difficile. In Canada, a rate of 3.8 to 9.5 cases per 10,000 patient-days was calculated between 1997 and 2005. As of 2002, more severe and recurrent outbreaks have been observed, the great majority associated with the use of antimicrobials, mainly fluoroquinolones, with an attributable mortality of 6.9 to 7.5% [3, 4].\nIn some hospitals in Mexico, the presence of C. difficile has been detected as one of the main infectious agents, showing a higher prevalence in hospital services such as Internal Medicine, General Surgery, and Intensive Care Units [5].\nAccording to the literature, the main risk factor for C. difficile infection is the indiscriminate use of antimicrobials, mainly from the carbapenem and cephalosporin families [6, 7]. The outbreaks that have occurred in our country have shown difficulties in their control and prevention, even remaining constant with an endemic behavior [8, 9].\nThe aim of this work was to describe the hospital outbreak of C. difficile infection and the impact of the prevention and control measures that were implemented in the “Hospital Juárez de México” (HJM).", "A cross-sectional, descriptive, observational, and retrospective study was designed. All information on the hospital outbreak and on health care-associated infections (HCAI) was obtained from the Hospital Epidemiological Surveillance Unit (HESU) of the HJM.", "The period of the outbreak study was from February 20th to May 22nd, 2018. For the identification of cases, operational definitions were established: a patient of any age who had been hospitalized for at least 48 hours and who had presented more than three diarrheic stools in 12 hours, accompanied by one or more of the following symptoms (fever, abdominal distension, and/or abdominal pain), was considered a suspected case of CDI. A confirmed case of CDI had to meet the operational definition of a suspected case and have a positive result on any of the following tests:\na commercial immunochromatographic qualitative test against glutamate dehydrogenase (GDH) (CERTEST Clostridium difficile antigen GDH, Certest Biotec S.L.);\nreal-time PCR amplification of a fragment of the gene coding for C. difficile toxin B, performed using the automated BD MAX technology (Becton Dickinson) according to the supplier’s instructions.", "The outbreak information was handled in the Microsoft Excel package (Microsoft Corporation; Redmond, WA). The variables and risk factors described and analyzed were age, gender, hospital service, use of antimicrobials before CDI diagnosis, use of proton pump inhibitors, and type of diagnosis. Frequencies and percentages were used for data description.", "A total of 15 cases of CDI were detected from February 20th to May 22nd, 2018. The index case was a patient with a history of previous hospitalization in the HJM, in General Surgery, during December 2017 and January 2018. One week after his hospital discharge, he started with characteristic symptoms at home (diarrheic discharge, abdominal pain, and fever). Since the clinical picture began at home, the health personnel who evaluated his readmission did not suspect that the etiologic agent was C. difficile. For this reason, there was a delay in the diagnosis and in the implementation of preventive measures, which generated exposure of patients and health personnel.\nThe epidemiological investigation and case analysis showed that 66.6 and 34.4% of patients corresponded to the male and female gender, respectively, with an average age of 56 years and a range of 24 to 86 years of age. According to the hospital services where the cases were detected, 40% were in Internal Medicine, 26.7% in General Surgery, 13.3% in Neurosurgery, 13.3% in Hematology, and 6.7% in Thoracic Surgery. According to the timeline, 13.3% of cases were detected in February, 40% in March, and 46.7% in April. Figure 1 shows the distribution of cases in the epidemic curve.\nThree deaths were recorded, of which only one was related to CDI; therefore, the fatality rate was 5.6%. The index case began with diarrheic discharge in February; however, when reviewing the epidemiological background, he had been hospitalized at the HJM in December; despite identifying this data, it could not be determined whether he was the primary case or the source of the outbreak. Among the risk factors detected, 88.9% of the cases had received some antimicrobial scheme, the most commonly used being carbapenems, quinolones, and cephalosporins. In all cases, the use of proton pump inhibitors was detected. Upon determining that this was a hospital outbreak due to C. difficile, the HESU team began prospective epidemiological surveillance, the implementation of control measures for the outbreak, and the prevention of the appearance of new cases, based on the situational diagnosis carried out in the services involved, where faults and deficiencies were identified and outlined in the cause-effect diagram (Fig. 2).\nIMPLEMENTATION OF OUTBREAK CONTROL MEASURES Hygiene and hand washing, and contact precautions During the outbreak, a low percentage of adherence to hygiene and hand washing was detected in the services involved, as determined by personalized questioning. Therefore, training for the personnel of the involved areas on the correct technique of hygiene and hand washing, as well as the five moments for hand hygiene, was carried out. A total of 118 health workers from the hospital wards involved were trained, of which 75% were nurses, 13% cleaning staff, 9.2% doctors, and 2.7% support staff; 0.1% corresponded to administrative staff, family members, and patients. Contact precautions were applied to patients who met the operational definition of a suspected case of CDI. In addition, family members, patients, and health personnel were trained in the correct use of personal protective equipment.\nHospital cleaning and disinfection A situational diagnosis was performed, which identified that the hospital cleaning and disinfection process was not standardized and had errors in the sequence of steps. Consequently, it was decided to use two concentrations of sodium hypochlorite: 1,600 ppm for routine cleaning and 5,000 ppm for final cleaning at patient discharge.\nThe cleaning of the biomedical equipment and objects found inside the patient’s room or environment was carried out daily, and at the time of the patient’s discharge, with towels impregnated with peroxide. This step was complementary to disinfection with the chlorine concentrations mentioned above, applied before or after, but always together. As a last step in the disinfection sequence for suspected or confirmed cases, nebulization with hydrogen peroxide with silver particles was used, where requested by the services and authorized by the staff of the HESU; only after this step could a patient be allowed to enter the room.\nLABORATORY DIAGNOSIS When a suspected case was detected, a sample of diarrheic stools was collected and submitted for emergency diagnosis to the Genetics and Molecular Diagnostic Laboratory of the HJM for the corresponding study. One of the weaknesses detected during the study of the outbreak was that the medical and nursing staff did not have sufficient knowledge to perform a differential diagnosis of CDI. Therefore, the necessary training was initiated to identify CDI and the operational definition of the case. The second problem was that even when personnel identified a suspected case of CDI, they did not notify it; the case was detected through active epidemiological surveillance. The third and final problem was that samples were only occasionally sent to the laboratory for CDI diagnosis.\nCOHORT ISOLATION One of the obstacles to the isolation of CDI cases was the structural characteristics of the HJM, since they limited the correct application of contact precautions; thus, relocation of beds and/or rooms was carried out in order to shape specific and unique areas for suspected and confirmed cases. These measures were maintained until the discharge of the patient. In the same period of the outbreak, a total of three positive cases of C. difficile coming from other hospital units were identified, along with eight cases of gastroenteritis associated with health care with negative results for C. difficile.\nINTERVENTION WITH HEALTH WORKERS During training on hygiene and hand washing, the identification, notification, diagnosis, and treatment of suspected or confirmed cases of CDI were also addressed. The training covered the following points:\noperational definition of a suspected case of CDI to improve detection;\nsample submission to the central or research laboratory for the timely diagnosis of CDI cases;\napplication and registration, in the clinical file and medical indications, of contact precautions in suspected and confirmed cases of CDI;\neffective communication of the suspicion or confirmation of CDI to the interconsultant, surgical, and auxiliary diagnostic services (imaging);\ncorrect cleaning and disinfection of biomedical equipment in contact with suspected or confirmed CDI patients;\ninterconsultation with the infectology service when detecting a suspected case of CDI for the timely initiation of antimicrobial treatment.", "Hygiene and hand washing, and contact precautions During the outbreak, a low percentage of adherence to hygiene and hand washing was detected in the services involved, as determined by personalized questioning. Therefore, training for the personnel of the involved areas on the correct technique of hygiene and hand washing, as well as the five moments for hand hygiene, was carried out. A total of 118 health workers from the hospital wards involved were trained, of which 75% were nurses, 13% cleaning staff, 9.2% doctors, and 2.7% support staff; 0.1% corresponded to administrative staff, family members, and patients. Contact precautions were applied to patients who met the operational definition of a suspected case of CDI. In addition, family members, patients, and health personnel were trained in the correct use of personal protective equipment.\nHospital cleaning and disinfection A situational diagnosis was performed, which identified that the hospital cleaning and disinfection process was not standardized and had errors in the sequence of steps. Consequently, it was decided to use two concentrations of sodium hypochlorite: 1,600 ppm for routine cleaning and 5,000 ppm for final cleaning at patient discharge.\nThe cleaning of the biomedical equipment and objects found inside the patient’s room or environment was carried out daily, and at the time of the patient’s discharge, with towels impregnated with peroxide. This step was complementary to disinfection with the chlorine concentrations mentioned above, applied before or after, but always together. As a last step in the disinfection sequence for suspected or confirmed cases, nebulization with hydrogen peroxide with silver particles was used, where requested by the services and authorized by the staff of the HESU; only after this step could a patient be allowed to enter the room.", "During the outbreak, a low percentage of adherence to hygiene and hand washing was detected in the services involved, as determined by personalized questioning. Therefore, training for the personnel of the involved areas on the correct technique of hygiene and hand washing, as well as the five moments for hand hygiene, was carried out. A total of 118 health workers from the hospital wards involved were trained, of which 75% were nurses, 13% cleaning staff, 9.2% doctors, and 2.7% support staff; 0.1% corresponded to administrative staff, family members, and patients. Contact precautions were applied to patients who met the operational definition of a suspected case of CDI. In addition, family members, patients, and health personnel were trained in the correct use of personal protective equipment.", "A situational diagnosis was performed, which identified that the hospital cleaning and disinfection process was not standardized and had errors in the sequence of steps. Consequently, it was decided to use two concentrations of sodium hypochlorite: 1,600 ppm for routine cleaning and 5,000 ppm for final cleaning at patient discharge.\nThe cleaning of the biomedical equipment and objects found inside the patient’s room or environment was carried out daily, and at the time of the patient’s discharge, with towels impregnated with peroxide. This step was complementary to disinfection with the chlorine concentrations mentioned above, applied before or after, but always together. As a last step in the disinfection sequence for suspected or confirmed cases, nebulization with hydrogen peroxide with silver particles was used, where requested by the services and authorized by the staff of the HESU; only after this step could a patient be allowed to enter the room.", "When a suspected case was detected, a sample of diarrheic stools was collected and submitted for emergency diagnosis to the Genetics and Molecular Diagnostic Laboratory of the HJM for the corresponding study. One of the weaknesses detected during the study of the outbreak was that the medical and nursing staff did not have sufficient knowledge to perform a differential diagnosis of CDI. Therefore, the necessary training was initiated to identify CDI and the operational definition of the case. The second problem was that even when personnel identified a suspected case of CDI, they did not notify it; the case was detected through active epidemiological surveillance. The third and final problem was that samples were only occasionally sent to the laboratory for CDI diagnosis.", "One of the obstacles to the isolation of CDI cases was the structural characteristics of the HJM, since they limited the correct application of contact precautions; thus, relocation of beds and/or rooms was carried out in order to shape specific and unique areas for suspected and confirmed cases. These measures were maintained until the discharge of the patient. In the same period of the outbreak, a total of three positive cases of C. difficile coming from other hospital units were identified, along with eight cases of gastroenteritis associated with health care with negative results for C. difficile.", "During training on hygiene and hand washing, the identification, notification, diagnosis, and treatment of suspected or confirmed cases of CDI were also addressed. The training covered the following points:\noperational definition of a suspected case of CDI to improve detection;\nsample submission to the central or research laboratory for the timely diagnosis of CDI cases;\napplication and registration, in the clinical file and medical indications, of contact precautions in suspected and confirmed cases of CDI;\neffective communication of the suspicion or confirmation of CDI to the interconsultant, surgical, and auxiliary diagnostic services (imaging);\ncorrect cleaning and disinfection of biomedical equipment in contact with suspected or confirmed CDI patients;\ninterconsultation with the infectology service when detecting a suspected case of CDI for the timely initiation of antimicrobial treatment.", "To our knowledge, this is the first study of a hospital outbreak of diarrhea caused by toxin B-producing C. difficile in Mexico; although there are previous reports of the presence of this toxin in our country, they focus only on isolates of clinical origin [10].\nIt is known that the identification of the source of dissemination in a C. difficile outbreak is very complex, which is why it is necessary to take into account that the hospital environment is probably the main source of contamination. This is due to poor cleaning and disinfection processes, combined with low adherence to hygiene and hand washing among health workers [10]. The constant training and supervision of health personnel on hygiene and hand washing, and on cleaning and disinfection, is a task of great importance, since it is easy to lose habits in the application of these measures. Previous studies have shown that hygiene and hand washing are the most important measures for the prevention and control of HCAI [11, 12].\nOne of the points to improve is having an effective cleaning and disinfection process, with the appropriate products, in the areas involved in the circulation of C. difficile. Hospital environments should be evaluated, since there are different surfaces and biomedical equipment that can be difficult to disinfect, such as mattress surfaces, beds, and toilets [13]. The disinfectants that are used should be evaluated, as well as the concentrations needed to achieve a significant reduction of the spores [11]. Wong et al., 2018, concluded that the use of disinfectants with sporicidal activity in the hospital environment after discharge of the patient with CDI helps to reduce the risk of cross-transmission of C. difficile [14]. The effectiveness of cleaning and disinfecting with impregnated towels or with nebulization of hydrogen peroxide complemented with silver particles has been demonstrated [15-17].\nTo obtain positive results with this type of product, it must be considered that they will always have to be complementary to the use of suitable concentrations of chlorine, either before or after the cleaning and disinfection step [18, 19]. They must be used strategically to optimize this input, and always with good communication with the technical team in charge of the application and with the hospital services involved [20, 21].\nAlthough C. difficile is an infectious agent that is difficult to control, and in a large number of hospitals its presence has an endemic behavior, efforts to implement effective prevention and control measures for the outbreak must be constant, with training and supervision in the hospital services involved. It is important to mention that hospital epidemiological surveillance actions work in a similar way to field epidemiology, with a structure based on the surveillance of morbidity, mortality, and laboratory results. It is necessary to establish the operational definitions of CDI cases, both suspected and confirmed, the technique for collecting and submitting samples for laboratory diagnosis, and the assessment by endoscopy and pathology services, if required. The establishment of these parameters will allow proper epidemiological surveillance of CDI cases. Furthermore, it is important to integrate indicators such as the incidence rate [22, 23]. One factor to take into account in CDI cases is the increase in the costs derived from medical care; van Buerden et al., 2017, calculated the costs attributable to the care and control of an outbreak of C. difficile ribotype 027 at approximately € 1,222,376, and the factors that increased the cost were the longer hospital stays and the closing of rooms to implement contact precautions [24].\nThe impact on the economy of the institutions is important, without considering the cost generated for the family members of the infected patients. One of the characteristics of the HJM that represents a barrier to infection control is the deficient number of rooms for the isolation of cases; this limits and impacts adherence to transmission-based precautions, in this case contact precautions. Ideally, when planning the construction of new hospitals or the remodeling of existing institutions, the HESU team should consider aspects related to the prevention and control of infections. Without a doubt, effective communication with the staff of the diagnostic laboratories was key to containing the outbreak, since there is coordinated notification with the HESU team, which strengthens timely detection activities. It is important to maintain a permanent system for the notification and detection of suspected cases of CDI in order to identify cases in a timely manner, initiate contact precautions and antimicrobial treatment, and apply effective cleaning and disinfection measures. The importance of communicating this report is to raise awareness about the importance of prevention and control measures, which must be well executed and supervised, since the success or failure of managing the risk of infection to patients, family members, and health workers depends on this.", "The implementation of effective and adequate measures for the prevention and control of C. difficile is a constant challenge for hospital epidemiology, for which strategies should be designed around hospital structures so that the results are as favorable as possible. The use of basic quality tools to carry out analyses with a risk approach for the study of outbreaks is a methodology that we must explore, since they can give a wider perspective for facing the barriers that are detected. Additionally, we must not forget that epidemiological surveillance is based on morbidity, mortality, and the laboratory; the latter was a strength for our HESU group, because having results in a timely manner allowed us to implement prevention and control measures in a short time." ]
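The cleaning protocol described above specifies two working concentrations of sodium hypochlorite (1,600 ppm for routine cleaning, 5,000 ppm for terminal cleaning at discharge). The study does not say how these were prepared; as a minimal sketch assuming a typical commercial stock of about 5 % NaOCl (roughly 50,000 ppm), the dilution follows the standard relation C1 × V1 = C2 × V2:

```python
def bleach_dilution(stock_ppm: float, target_ppm: float, final_volume_l: float):
    """Volumes of stock bleach and water needed, via C1*V1 = C2*V2."""
    stock_l = target_ppm * final_volume_l / stock_ppm
    return stock_l, final_volume_l - stock_l

# Hypothetical 5 % NaOCl stock (~50,000 ppm), preparing 10 L of working solution
for target in (1_600, 5_000):  # routine vs. terminal cleaning, as in the protocol
    stock, water = bleach_dilution(50_000, target, 10)
    print(f"{target} ppm: {stock:.2f} L stock + {water:.2f} L water")
# 1600 ppm: 0.32 L stock + 9.68 L water
# 5000 ppm: 1.00 L stock + 9.00 L water
```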
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "STUDY DESIGN", "PERIOD OF ANALYSIS AND CASE DEFINITION OF CDI", "COLLECTION AND ANALYSIS DATA", "Results", "IMPLEMENTATION OF OUTBREAK CONTROL MEASURES", "Hygiene and hand washing, and contact precautions", "Hospital cleaning and disinfection", "LABORATORY DIAGNOSIS", "COHORT ISOLATION", "INTERVENTION WITH HEALTH WORKERS", "Discussion", "Conclusions" ]
[ "Clostridioides difficile (C. difficile) is a Gram-positive bacillus, spore-forming and toxin producer that in recent years has represented a great challenge in terms of infection control within health institutions in the world, being one of the main etiological agents of the health care-associated infections (HCAI) [1]. C. difficile infection (CDI) is the most common cause of diarrhea associated with health care and long-term use of antimicrobials [1]. The transmission of this pathogen mainly occurs in hospital facilities, where exposure to antimicrobials and contamination of the environment by C. difficile spores is common [1]. C. difficile produces toxins that cause damage to the epithelial cells, these toxins are known as toxin A and toxin B [1]. Toxin A, called enterotoxin, which is more potent, increases secretion, causes more mucosal damage and inflammation, however, in cell cultures, toxin B has been shown to have greater cytotoxic activity, and in subsequent studies it has been shown that it is more toxic in the human colonic epithelium than toxin A [1, 2]. Worldwide, it is estimated that between 20 and 30% of cases of diarrhea associated with antibiotics are caused by C. difficile. In Canada, a rate between 3.8 to 9.5 cases per 10,000 patient days is calculated between 1997 and 2005. As of 2002, more severe and recurrent outbreaks have been observed, the great majority associated with the use of antimicrobials, mainly fluoroquinolones, with an attributable mortality of 6.9 to 7.5% [3, 4].\nIn some hospitals in Mexico, the presence of C. difficile has been detected as one of the main infectious agents, showing a higher prevalence in hospital services such as Internal Medicine, General Surgery, and Intensive Care Units [5].\nAccording to the literature, the main risk factor for C. difficile infection is the indiscriminate use of antimicrobials, mainly from the family of carbapenems and cephalosporins [6, 7]. The outbreaks that have occurred in our country have shown difficulties in their control and prevention, even remaining constant with an endemic behavior [8, 9].\nThe aim of this work was to describe the hospital outbreak of C. difficile infection and the impact of the prevention and control measures that were implemented in the “Hospital Juárez de México” (HJM).", "STUDY DESIGN A cross-sectional, descriptive, observational, and retrospective study was designed. All information was obtained from the hospital outbreak and from the health care-associated infections (HCAI) from the Hospital Epidemiological Surveillance Unit (HESU) of the HJM.\nA cross-sectional, descriptive, observational, and retrospective study was designed. All information was obtained from the hospital outbreak and from the health care-associated infections (HCAI) from the Hospital Epidemiological Surveillance Unit (HESU) of the HJM.\nPERIOD OF ANALYSIS AND CASE DEFINITION OF CDI The period of the outbreak study was from February 20th to May 22nd, 2018. For the identification of cases, operational definitions were made and inclusion criteria was included such as patient of any age who had been hospitalized for at least 48 hours and who had presented more than three diarrheic stools in 12 hours, accompanied by one or more of the following symptoms: fever, abdominal distension and/or abdominal pain was considered as a suspected case of CDI. 
For a confirmed case of CDI, we had to meet the operational definition of a suspected case and have a positive result on any of the following tests:\ncommercial immunochromatographic qualitative test against glutamate dehydrogenase (GDH) (CERTEST Clostridium difficile antigen GDH, Certest Biotec S.L.);\nreal-time PCR amplification of a fragment of the gene coding for C. difficile toxin B was performed by using the automated BD MAX technology (Becton Dickinson) according to the supplier’s instructions.\nThe period of the outbreak study was from February 20th to May 22nd, 2018. For the identification of cases, operational definitions were made and inclusion criteria was included such as patient of any age who had been hospitalized for at least 48 hours and who had presented more than three diarrheic stools in 12 hours, accompanied by one or more of the following symptoms: fever, abdominal distension and/or abdominal pain was considered as a suspected case of CDI. For a confirmed case of CDI, we had to meet the operational definition of a suspected case and have a positive result on any of the following tests:\ncommercial immunochromatographic qualitative test against glutamate dehydrogenase (GDH) (CERTEST Clostridium difficile antigen GDH, Certest Biotec S.L.);\nreal-time PCR amplification of a fragment of the gene coding for C. difficile toxin B was performed by using the automated BD MAX technology (Becton Dickinson) according to the supplier’s instructions.\nCOLLECTION AND ANALYSIS DATA The outbreak information was handled in Microsoft Excel package (Microsoft Corporation; Redmond, WA). The variables and risk factors described and analyzed were age, gender, hospital service, use of antimicrobials before CDI diagnosis, use of proton pump inhibitors, and type of diagnosis. Frequencies and percentages are used for data description.\nThe outbreak information was handled in Microsoft Excel package (Microsoft Corporation; Redmond, WA). The variables and risk factors described and analyzed were age, gender, hospital service, use of antimicrobials before CDI diagnosis, use of proton pump inhibitors, and type of diagnosis. Frequencies and percentages are used for data description.", "A cross-sectional, descriptive, observational, and retrospective study was designed. All information was obtained from the hospital outbreak and from the health care-associated infections (HCAI) from the Hospital Epidemiological Surveillance Unit (HESU) of the HJM.", "The period of the outbreak study was from February 20th to May 22nd, 2018. For the identification of cases, operational definitions were made and inclusion criteria was included such as patient of any age who had been hospitalized for at least 48 hours and who had presented more than three diarrheic stools in 12 hours, accompanied by one or more of the following symptoms: fever, abdominal distension and/or abdominal pain was considered as a suspected case of CDI. For a confirmed case of CDI, we had to meet the operational definition of a suspected case and have a positive result on any of the following tests:\ncommercial immunochromatographic qualitative test against glutamate dehydrogenase (GDH) (CERTEST Clostridium difficile antigen GDH, Certest Biotec S.L.);\nreal-time PCR amplification of a fragment of the gene coding for C. 
Results: A total of 15 cases of CDI were detected from February 20th to May 22nd, 2018. The index case was a patient who had previously been hospitalized at the HJM, in General Surgery, during December 2017 and January 2018. One week after hospital discharge, he developed the characteristic symptoms at home (diarrheic stools, abdominal pain, and fever). Because the clinical picture began at home, the health personnel who assessed his readmission did not suspect C. difficile as the etiologic agent. For this reason, there was a delay in the diagnosis and in the implementation of preventive measures, which exposed patients and health personnel.
The epidemiological investigation and case analysis showed that 66.6% of the patients were male and 33.4% female, with an average age of 56 years and a range of 24 to 86 years. By hospital service, 40% of the cases were detected in Internal Medicine, 26.7% in General Surgery, 13.3% in Neurosurgery, 13.3% in Hematology, and 6.7% in Thoracic Surgery. Over time, 13.3% of the cases were detected in February, 40% in March, and 46.7% in April. Figure 1 shows the distribution of cases in the epidemic curve.
Three deaths were recorded, of which only one was related to CDI; the fatality rate was therefore 5.6%. The index case began with diarrheic stools in February; however, review of his epidemiological background showed that he had been hospitalized at the HJM in December. Despite this finding, it could not be determined whether he was the primary case or the source of the outbreak. Among the risk factors detected, 88.9% of the cases had received some antimicrobial scheme, most commonly carbapenems, quinolones, and cephalosporins. Use of proton pump inhibitors was detected in all cases. Once a hospital outbreak due to C. difficile was established, the HESU team began prospective epidemiological surveillance, implemented outbreak control measures, and worked to prevent new cases, based on a situational diagnosis of the services involved, where faults and deficiencies were identified and outlined in a cause-effect diagram (Fig. 2).
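The descriptive analysis reduces to frequency tables. A minimal sketch, using an invented line list whose counts reproduce the reported distributions (15 cases; the individual records are illustrative only):

    # Frequencies and percentages from an illustrative line list of 15 cases.
    from collections import Counter

    services = (["Internal Medicine"] * 6 + ["General Surgery"] * 4 +
                ["Neurosurgery"] * 2 + ["Hematology"] * 2 + ["Thoracic Surgery"])
    months = ["February"] * 2 + ["March"] * 6 + ["April"] * 7

    def frequency_table(values):
        n = len(values)
        return {k: (v, round(100 * v / n, 1)) for k, v in Counter(values).items()}

    print(frequency_table(services))  # {'Internal Medicine': (6, 40.0), ...}
    print(frequency_table(months))    # monthly counts behind the epidemic curve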
IMPLEMENTATION OF OUTBREAK CONTROL MEASURES
Hygiene and hand washing, and contact precautions. During the outbreak, low adherence to hand hygiene was detected in the services involved, as determined by personalized questioning. Therefore, personnel in those areas were trained on the correct hand hygiene technique and on the five moments for hand hygiene. A total of 118 health workers from the hospital wards involved were trained: 75% were nurses, 13% cleaning staff, 9.2% doctors, and 2.7% support staff; the remaining 0.1% corresponded to administrative staff, family members, and patients. Contact precautions were applied to patients who met the operational definition of a suspected case of CDI, and family members, patients, and health personnel were trained in the correct use of personal protective equipment.
Hospital cleaning and disinfection. A situational diagnosis identified that the hospital cleaning and disinfection process was not standardized and contained errors in its sequence of steps. Consequently, it was decided to use two concentrations of sodium hypochlorite: 1,600 ppm for routine cleaning and 5,000 ppm for terminal cleaning at patient discharge.
Biomedical equipment and objects inside the patient’s room or environment were cleaned daily, and again at patient discharge, with peroxide-impregnated towels. This step was always complementary to disinfection with the chlorine concentrations mentioned above, applied before or after but never on its own. As a final step in the disinfection sequence for suspected or confirmed cases, nebulization with hydrogen peroxide and silver particles was performed where requested by the services and authorized by HESU staff; only after this step was a new patient allowed into the room.
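For the two working concentrations, the usual C1*V1 = C2*V2 dilution rule gives the volume of concentrate per preparation. A rough sketch; the 50,000 ppm stock (about 5% household bleach) is an assumed value, since the concentrate actually used is not stated here:

    # Volume of hypochlorite stock (mL) needed to prepare a working solution.
    def bleach_volume_ml(target_ppm: float, final_volume_l: float,
                         stock_ppm: float = 50_000) -> float:
        # C1 * V1 = C2 * V2, solved for the stock volume V1 (in mL)
        return target_ppm * final_volume_l * 1000 / stock_ppm

    print(bleach_volume_ml(1_600, 10))  # 320.0 mL per 10 L, routine cleaning
    print(bleach_volume_ml(5_000, 10))  # 1000.0 mL per 10 L, terminal cleaning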
LABORATORY DIAGNOSIS When a suspected case was detected, a sample of diarrheic stools was collected and submitted for emergency diagnosis to the Genetics and Molecular Diagnostic Laboratory of the HJM. One weakness detected during the outbreak investigation was that the medical and nursing staff did not have sufficient knowledge to make a differential diagnosis of CDI; training on identifying CDI and on the operational case definition was therefore initiated. A second problem was that even when personnel identified a suspected case of CDI, they did not notify it, and the case was instead detected through active epidemiological surveillance. A third problem was that samples were only occasionally sent to the laboratory for CDI diagnosis.
COHORT ISOLATION One obstacle to the isolation of CDI cases was the structural layout of the HJM, which limited the correct application of contact precautions; beds and/or rooms were therefore relocated to form specific, dedicated areas for suspected and confirmed cases. These measures were maintained until patient discharge. In the same outbreak period, three positive cases of C. difficile coming from other hospital units were identified, along with eight cases of health care-associated gastroenteritis that tested negative for C. difficile.
INTERVENTION WITH HEALTH WORKERS During the hand hygiene training, the identification, notification, diagnosis, and treatment of suspected or confirmed cases of CDI were also addressed. The training covered the following points:
operational definition of a suspected case of CDI, to improve detection;
sample submission to the central or research laboratory for the timely diagnosis of CDI cases;
application of contact precautions in suspected and confirmed cases of CDI, with registration in the clinical file and in the medical indications;
effective communication of the suspicion or confirmation of CDI to the interconsulting, surgical, and auxiliary diagnostic (imaging) services;
correct cleaning and disinfection of biomedical equipment in contact with suspected or confirmed CDI patients;
interconsultation with the infectology service upon detection of a suspected case of CDI, for the timely initiation of antimicrobial treatment.
Discussion: To our knowledge, this is the first study of a hospital outbreak of diarrhea caused by toxin B-producing C. difficile in Mexico; although there are previous reports of this toxin in our country, they focus only on isolates of clinical origin [10].
It is known that identifying the source of dissemination in a C. difficile outbreak is very complex, so the hospital environment must be considered the most likely main source of contamination, owing to poor cleaning and disinfection processes combined with low adherence to hand hygiene among health workers [10]. Constant training and supervision of health personnel on hand hygiene and on cleaning and disinfection are of great importance, since good habits in applying these measures are easily lost. Previous studies have shown that hand hygiene is the most important measure for the prevention and control of HCAI [11, 12].
One point to improve is having an effective cleaning and disinfection process, with appropriate products, in the areas involved in the circulation of C. difficile. Hospital environments should be evaluated, since some surfaces and biomedical equipment, such as mattresses, beds, and toilets, can be difficult to disinfect [13]. The disinfectants used should be evaluated, as should the concentrations required to achieve a significant reduction of spores [11]. Wong et al. (2018) concluded that using disinfectants with sporicidal activity in the hospital environment after discharge of a patient with CDI helps reduce the risk of cross-transmission of C. difficile [14]. The effectiveness of cleaning and disinfecting with impregnated towels, or with hydrogen peroxide nebulization complemented with silver particles, has been demonstrated [15-17].
To obtain positive results with these products, they must always be used as a complement to suitable chlorine concentrations, either before or after the cleaning and disinfection step [18, 19]. They must also be deployed strategically to optimize this input, always in good communication with the technical team in charge of the application and with the hospital services involved [20, 21].
Although C. difficile is an infectious agent that is difficult to control, and in many hospitals its presence shows an endemic behavior, efforts to make outbreak prevention and control measures effective must be sustained through training and supervision in the hospital services involved. Hospital epidemiological surveillance works much like field epidemiology, with a structure based on the surveillance of morbidity, mortality, and laboratory results. It is necessary to establish operational definitions for suspected and confirmed CDI cases, the technique for collecting and submitting samples for laboratory diagnosis, and assessment by the endoscopy and pathology services if required. Establishing these parameters allows proper epidemiological surveillance of CDI cases. Furthermore, it is important to integrate indicators such as the incidence rate [22, 23].
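A minimal sketch of such an incidence-rate indicator, expressed per 10,000 patient-days as in the Canadian figures cited in the introduction; the patient-day denominator below is an assumed value for illustration:

    # Incidence rate of CDI per 10,000 patient-days.
    def cdi_incidence_rate(cases: int, patient_days: int, per: int = 10_000) -> float:
        return per * cases / patient_days

    # e.g., 15 confirmed cases over a hypothetical 20,000 patient-days:
    print(round(cdi_incidence_rate(15, 20_000), 1))  # 7.5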
One factor to take into account in CDI cases is the increased cost of medical care: van Beurden et al. (2017) calculated the costs attributable to the care and control of an outbreak of C. difficile ribotype 027 at approximately €1,222,376, driven mainly by longer hospital stays and the closing of rooms to implement contact precautions [24]. The economic impact on institutions is substantial, without even considering the costs borne by the family members of infected patients. One characteristic of the HJM that represents a barrier to infection control is the insufficient number of rooms for case isolation, which limits adherence to transmission-based precautions, in this case contact precautions. Ideally, when planning the construction of new hospitals or the remodeling of existing institutions, the HESU team should be consulted on aspects related to infection prevention and control. Without a doubt, effective communication with the staff of the diagnostic laboratories was key to containing the outbreak, since notification is coordinated with the HESU team, which strengthens timely detection. It is important to maintain a permanent system for the notification and detection of suspected CDI cases, so that cases can be identified promptly and contact precautions, antimicrobial treatment, and effective cleaning and disinfection measures can be initiated without delay. The importance of communicating this report is to raise awareness that prevention and control measures must be well executed and supervised, since the risk of infection to patients, family members, and health workers depends on them.
Conclusions: The implementation of effective and adequate prevention and control measures for C. difficile is a constant challenge for hospital epidemiology, and strategies should be designed around each hospital’s structure so that the results are as favorable as possible. The use of basic quality tools to analyze outbreaks with a risk-based approach is a methodology worth exploring, since it can give a wider perspective on the barriers that are detected. Additionally, we must not forget that epidemiological surveillance rests on morbidity, mortality, and the laboratory; the latter was a strength for our HESU group, because having results in a timely manner allowed us to implement prevention and control measures in a short time.
[ "Hospital outbreak", "Infection control", "\nClostridioides difficile\n", "Toxin B" ]
Introduction: Clostridioides difficile (C. difficile) is a Gram-positive bacillus, spore-forming and toxin producer that in recent years has represented a great challenge in terms of infection control within health institutions in the world, being one of the main etiological agents of the health care-associated infections (HCAI) [1]. C. difficile infection (CDI) is the most common cause of diarrhea associated with health care and long-term use of antimicrobials [1]. The transmission of this pathogen mainly occurs in hospital facilities, where exposure to antimicrobials and contamination of the environment by C. difficile spores is common [1]. C. difficile produces toxins that cause damage to the epithelial cells, these toxins are known as toxin A and toxin B [1]. Toxin A, called enterotoxin, which is more potent, increases secretion, causes more mucosal damage and inflammation, however, in cell cultures, toxin B has been shown to have greater cytotoxic activity, and in subsequent studies it has been shown that it is more toxic in the human colonic epithelium than toxin A [1, 2]. Worldwide, it is estimated that between 20 and 30% of cases of diarrhea associated with antibiotics are caused by C. difficile. In Canada, a rate between 3.8 to 9.5 cases per 10,000 patient days is calculated between 1997 and 2005. As of 2002, more severe and recurrent outbreaks have been observed, the great majority associated with the use of antimicrobials, mainly fluoroquinolones, with an attributable mortality of 6.9 to 7.5% [3, 4]. In some hospitals in Mexico, the presence of C. difficile has been detected as one of the main infectious agents, showing a higher prevalence in hospital services such as Internal Medicine, General Surgery, and Intensive Care Units [5]. According to the literature, the main risk factor for C. difficile infection is the indiscriminate use of antimicrobials, mainly from the family of carbapenems and cephalosporins [6, 7]. The outbreaks that have occurred in our country have shown difficulties in their control and prevention, even remaining constant with an endemic behavior [8, 9]. The aim of this work was to describe the hospital outbreak of C. difficile infection and the impact of the prevention and control measures that were implemented in the “Hospital Juárez de México” (HJM). Methods: STUDY DESIGN A cross-sectional, descriptive, observational, and retrospective study was designed. All information was obtained from the hospital outbreak and from the health care-associated infections (HCAI) from the Hospital Epidemiological Surveillance Unit (HESU) of the HJM. A cross-sectional, descriptive, observational, and retrospective study was designed. All information was obtained from the hospital outbreak and from the health care-associated infections (HCAI) from the Hospital Epidemiological Surveillance Unit (HESU) of the HJM. PERIOD OF ANALYSIS AND CASE DEFINITION OF CDI The period of the outbreak study was from February 20th to May 22nd, 2018. For the identification of cases, operational definitions were made and inclusion criteria was included such as patient of any age who had been hospitalized for at least 48 hours and who had presented more than three diarrheic stools in 12 hours, accompanied by one or more of the following symptoms: fever, abdominal distension and/or abdominal pain was considered as a suspected case of CDI. 
For a confirmed case of CDI, we had to meet the operational definition of a suspected case and have a positive result on any of the following tests: commercial immunochromatographic qualitative test against glutamate dehydrogenase (GDH) (CERTEST Clostridium difficile antigen GDH, Certest Biotec S.L.); real-time PCR amplification of a fragment of the gene coding for C. difficile toxin B was performed by using the automated BD MAX technology (Becton Dickinson) according to the supplier’s instructions. The period of the outbreak study was from February 20th to May 22nd, 2018. For the identification of cases, operational definitions were made and inclusion criteria was included such as patient of any age who had been hospitalized for at least 48 hours and who had presented more than three diarrheic stools in 12 hours, accompanied by one or more of the following symptoms: fever, abdominal distension and/or abdominal pain was considered as a suspected case of CDI. For a confirmed case of CDI, we had to meet the operational definition of a suspected case and have a positive result on any of the following tests: commercial immunochromatographic qualitative test against glutamate dehydrogenase (GDH) (CERTEST Clostridium difficile antigen GDH, Certest Biotec S.L.); real-time PCR amplification of a fragment of the gene coding for C. difficile toxin B was performed by using the automated BD MAX technology (Becton Dickinson) according to the supplier’s instructions. COLLECTION AND ANALYSIS DATA The outbreak information was handled in Microsoft Excel package (Microsoft Corporation; Redmond, WA). The variables and risk factors described and analyzed were age, gender, hospital service, use of antimicrobials before CDI diagnosis, use of proton pump inhibitors, and type of diagnosis. Frequencies and percentages are used for data description. The outbreak information was handled in Microsoft Excel package (Microsoft Corporation; Redmond, WA). The variables and risk factors described and analyzed were age, gender, hospital service, use of antimicrobials before CDI diagnosis, use of proton pump inhibitors, and type of diagnosis. Frequencies and percentages are used for data description. STUDY DESIGN: A cross-sectional, descriptive, observational, and retrospective study was designed. All information was obtained from the hospital outbreak and from the health care-associated infections (HCAI) from the Hospital Epidemiological Surveillance Unit (HESU) of the HJM. PERIOD OF ANALYSIS AND CASE DEFINITION OF CDI: The period of the outbreak study was from February 20th to May 22nd, 2018. For the identification of cases, operational definitions were made and inclusion criteria was included such as patient of any age who had been hospitalized for at least 48 hours and who had presented more than three diarrheic stools in 12 hours, accompanied by one or more of the following symptoms: fever, abdominal distension and/or abdominal pain was considered as a suspected case of CDI. For a confirmed case of CDI, we had to meet the operational definition of a suspected case and have a positive result on any of the following tests: commercial immunochromatographic qualitative test against glutamate dehydrogenase (GDH) (CERTEST Clostridium difficile antigen GDH, Certest Biotec S.L.); real-time PCR amplification of a fragment of the gene coding for C. difficile toxin B was performed by using the automated BD MAX technology (Becton Dickinson) according to the supplier’s instructions. 
COLLECTION AND ANALYSIS DATA: The outbreak information was handled in Microsoft Excel package (Microsoft Corporation; Redmond, WA). The variables and risk factors described and analyzed were age, gender, hospital service, use of antimicrobials before CDI diagnosis, use of proton pump inhibitors, and type of diagnosis. Frequencies and percentages are used for data description. Results: A total of 15 cases of CDI were detected during February 20th to May 22nd, 2018. The index case was a patient who had a history of having been previously hospitalized in the HJM during December 2017 and January 2018 in General Surgery. One week after his hospital discharge, his start with characteristic symptomatology in his home (diarrheic discharge, abdominal pain and fever). Since the clinical picture began at home, the health personnel who valued his readmission did not suspect that the etiologic agent was C. difficile. For this reason, there was a delay in the diagnosis and in the implementation of preventive measures, which generated exposure to patients and health personnel. The epidemiological investigation and case analysis showed that 66.6 and 34.4% of patients corresponded to the male and female gender, respectively, with an average age of 56 years and a range of 24 to 86 years of age. According to the hospital services, where the cases were detected, 40% were in Internal Medicine, 26.7% in General Surgery, 13.3% in Neurosurgery, 13.3% in Hematology, and 6.7% in Thoracic Surgery. According to the timeline, 13.3% was detected in February 40% in March, and 46.7% in April. Figure 1 shows the distribution of cases in the epidemic curve. Three deaths were recorded, of which only one was related to CDI, therefore, the fatality rate was 5.6%. The index case began with diarrheic discharge in February; however, when reviewing the epidemiological background, he was hospitalized at the HJM in December; despite identifying this data, it could not be determined whether if it was the primary case or the source of the outbreak. Among the risk factors detected, 88.9% of the cases had some antimicrobial scheme, the most commonly used being carbapenems, quinolones, and cephalosporins. In all cases, the use of proton pump inhibitors was detected. When determining that it was a hospital outbreak due to C. difficile, the HESU team began with the prospective epidemiological surveillance, the implementation of control measures for the outbreak, and the prevention of the appearance of new cases, based on the situational diagnosis carried out in the services involved, where faults and deficiencies were identified and outlined in the cause-effect diagram (Fig. 2). IMPLEMENTATION OF OUTBREAK CONTROL MEASURES Hygiene and hand washing, and contact precautions During the outbreak, a low percentage of adherence to hygiene and hand washing was detected in the services involved, the above was determinate by personalized questioning. Therefore, training for the personnel of the involved areas on the correct technique of hygiene and hand washing, as well as, the five moments for hand hygiene was carried out. A total of 118 health workers from the hospital wards involved were trained, of which 75% were nurses, 13% cleaning staff, 9.2% doctors, and 2.7% were support staff. 0.1% corresponded to administrative staff, family members, and patients. These measures were applied to patients who met the operational definition of a suspected case of CDI. 
In addition, family members, patients, and health personnel were trained in the matter of the correct use of personal protective equipment. During the outbreak, a low percentage of adherence to hygiene and hand washing was detected in the services involved, the above was determinate by personalized questioning. Therefore, training for the personnel of the involved areas on the correct technique of hygiene and hand washing, as well as, the five moments for hand hygiene was carried out. A total of 118 health workers from the hospital wards involved were trained, of which 75% were nurses, 13% cleaning staff, 9.2% doctors, and 2.7% were support staff. 0.1% corresponded to administrative staff, family members, and patients. These measures were applied to patients who met the operational definition of a suspected case of CDI. In addition, family members, patients, and health personnel were trained in the matter of the correct use of personal protective equipment. Hospital cleaning and disinfection A diagnosis of the situation was performed, where it was identified that the hospital cleaning and disinfection process was not standardized, and it had errors in the sequence of steps. Consequently, it was decided to use two concentrations of sodium hypochlorite, 1,600 ppm for routine cleaning, and 5,000 ppm for final cleaning at patient discharge. The cleaning of the biomedical equipment and objects that were found inside the patient’s room or environment, was carried out daily and at the time of the patient’s discharge with towels impregnated with peroxide. This step was complementary to disinfection with the chlorine concentrations mentioned above, before and after, but always together. As a last step in the sequence of disinfection, in the suspected or confirmed cases, the nebulization with hydrogen peroxide with silver particles was used, where it was requested by the services and authorized by the staff of the HESU, and after this step a patient could be allowed to enter the room. A diagnosis of the situation was performed, where it was identified that the hospital cleaning and disinfection process was not standardized, and it had errors in the sequence of steps. Consequently, it was decided to use two concentrations of sodium hypochlorite, 1,600 ppm for routine cleaning, and 5,000 ppm for final cleaning at patient discharge. The cleaning of the biomedical equipment and objects that were found inside the patient’s room or environment, was carried out daily and at the time of the patient’s discharge with towels impregnated with peroxide. This step was complementary to disinfection with the chlorine concentrations mentioned above, before and after, but always together. As a last step in the sequence of disinfection, in the suspected or confirmed cases, the nebulization with hydrogen peroxide with silver particles was used, where it was requested by the services and authorized by the staff of the HESU, and after this step a patient could be allowed to enter the room. Hygiene and hand washing, and contact precautions During the outbreak, a low percentage of adherence to hygiene and hand washing was detected in the services involved, the above was determinate by personalized questioning. Therefore, training for the personnel of the involved areas on the correct technique of hygiene and hand washing, as well as, the five moments for hand hygiene was carried out. 
A total of 118 health workers from the hospital wards involved were trained, of which 75% were nurses, 13% cleaning staff, 9.2% doctors, and 2.7% were support staff. 0.1% corresponded to administrative staff, family members, and patients. These measures were applied to patients who met the operational definition of a suspected case of CDI. In addition, family members, patients, and health personnel were trained in the matter of the correct use of personal protective equipment. During the outbreak, a low percentage of adherence to hygiene and hand washing was detected in the services involved, the above was determinate by personalized questioning. Therefore, training for the personnel of the involved areas on the correct technique of hygiene and hand washing, as well as, the five moments for hand hygiene was carried out. A total of 118 health workers from the hospital wards involved were trained, of which 75% were nurses, 13% cleaning staff, 9.2% doctors, and 2.7% were support staff. 0.1% corresponded to administrative staff, family members, and patients. These measures were applied to patients who met the operational definition of a suspected case of CDI. In addition, family members, patients, and health personnel were trained in the matter of the correct use of personal protective equipment. Hospital cleaning and disinfection A diagnosis of the situation was performed, where it was identified that the hospital cleaning and disinfection process was not standardized, and it had errors in the sequence of steps. Consequently, it was decided to use two concentrations of sodium hypochlorite, 1,600 ppm for routine cleaning, and 5,000 ppm for final cleaning at patient discharge. The cleaning of the biomedical equipment and objects that were found inside the patient’s room or environment, was carried out daily and at the time of the patient’s discharge with towels impregnated with peroxide. This step was complementary to disinfection with the chlorine concentrations mentioned above, before and after, but always together. As a last step in the sequence of disinfection, in the suspected or confirmed cases, the nebulization with hydrogen peroxide with silver particles was used, where it was requested by the services and authorized by the staff of the HESU, and after this step a patient could be allowed to enter the room. A diagnosis of the situation was performed, where it was identified that the hospital cleaning and disinfection process was not standardized, and it had errors in the sequence of steps. Consequently, it was decided to use two concentrations of sodium hypochlorite, 1,600 ppm for routine cleaning, and 5,000 ppm for final cleaning at patient discharge. The cleaning of the biomedical equipment and objects that were found inside the patient’s room or environment, was carried out daily and at the time of the patient’s discharge with towels impregnated with peroxide. This step was complementary to disinfection with the chlorine concentrations mentioned above, before and after, but always together. As a last step in the sequence of disinfection, in the suspected or confirmed cases, the nebulization with hydrogen peroxide with silver particles was used, where it was requested by the services and authorized by the staff of the HESU, and after this step a patient could be allowed to enter the room. 
LABORATORY DIAGNOSIS When a suspect case was detected, the sample of diarrheic stools was collected and submitted for emergency diagnosis to the Genetics and Molecular Diagnostic Laboratory of the HJM for their corresponding study. One of the weaknesses detected during the study process of the outbreak was that the medical and nursing staff did not have sufficient knowledge to perform a differential diagnosis of CDI. Therefore, the necessary training was initiated to identify CDI and the operational definition of the case. The second problem was that even when the personnel that identified a suspected case of CDI, they did not notify it, the case was detected through active epidemiological surveillance. The third and final problem was that samples were occasionally sent to the laboratory for CDI diagnosis. When a suspect case was detected, the sample of diarrheic stools was collected and submitted for emergency diagnosis to the Genetics and Molecular Diagnostic Laboratory of the HJM for their corresponding study. One of the weaknesses detected during the study process of the outbreak was that the medical and nursing staff did not have sufficient knowledge to perform a differential diagnosis of CDI. Therefore, the necessary training was initiated to identify CDI and the operational definition of the case. The second problem was that even when the personnel that identified a suspected case of CDI, they did not notify it, the case was detected through active epidemiological surveillance. The third and final problem was that samples were occasionally sent to the laboratory for CDI diagnosis. COHORT ISOLATION One of the obstacles for the isolation of CDI cases was the structural characteristic of the HJM, since it limited the correct application of precautions contact measures, thus, relocation of beds and/or rooms was carried out in order to be able to shape specific and unique areas for suspected and confirmed cases. These measures were carried out until the discharge of the patient. In the same period of the outbreak, a total of three positive cases of C. difficile coming from other hospital units were identified, along with eight cases of gastroenteritis associated with health care with negative results to C. difficile. One of the obstacles for the isolation of CDI cases was the structural characteristic of the HJM, since it limited the correct application of precautions contact measures, thus, relocation of beds and/or rooms was carried out in order to be able to shape specific and unique areas for suspected and confirmed cases. These measures were carried out until the discharge of the patient. In the same period of the outbreak, a total of three positive cases of C. difficile coming from other hospital units were identified, along with eight cases of gastroenteritis associated with health care with negative results to C. difficile. INTERVENTION WITH HEALTH WORKERS During training on hygiene and hand washing, the matter of identification, notification, diagnosis and treatment of suspected or confirmed cases of CDI was also addressed. 
The training covered the following points: operational definition of suspected case of CDI to improve detection; sample submission to the central or research laboratory for the timely diagnosis of CDI cases; application and registration in clinical file and medical indications of contact precautions in suspected and confirmed cases of CDI; effective communication of the suspicion or confirmation of the CDI to the interconsultant, surgical, and auxiliary diagnostic services (imaging); correct cleaning and disinfection of biomedical equipment in contact with suspected or confirmed CDI patients; interconsultation to the infectology service when detecting a suspected case of CDI for the timely initiation of antimicrobial treatment. During training on hygiene and hand washing, the matter of identification, notification, diagnosis and treatment of suspected or confirmed cases of CDI was also addressed. The training covered the following points: operational definition of suspected case of CDI to improve detection; sample submission to the central or research laboratory for the timely diagnosis of CDI cases; application and registration in clinical file and medical indications of contact precautions in suspected and confirmed cases of CDI; effective communication of the suspicion or confirmation of the CDI to the interconsultant, surgical, and auxiliary diagnostic services (imaging); correct cleaning and disinfection of biomedical equipment in contact with suspected or confirmed CDI patients; interconsultation to the infectology service when detecting a suspected case of CDI for the timely initiation of antimicrobial treatment. IMPLEMENTATION OF OUTBREAK CONTROL MEASURES: Hygiene and hand washing, and contact precautions During the outbreak, a low percentage of adherence to hygiene and hand washing was detected in the services involved, the above was determinate by personalized questioning. Therefore, training for the personnel of the involved areas on the correct technique of hygiene and hand washing, as well as, the five moments for hand hygiene was carried out. A total of 118 health workers from the hospital wards involved were trained, of which 75% were nurses, 13% cleaning staff, 9.2% doctors, and 2.7% were support staff. 0.1% corresponded to administrative staff, family members, and patients. These measures were applied to patients who met the operational definition of a suspected case of CDI. In addition, family members, patients, and health personnel were trained in the matter of the correct use of personal protective equipment. During the outbreak, a low percentage of adherence to hygiene and hand washing was detected in the services involved, the above was determinate by personalized questioning. Therefore, training for the personnel of the involved areas on the correct technique of hygiene and hand washing, as well as, the five moments for hand hygiene was carried out. A total of 118 health workers from the hospital wards involved were trained, of which 75% were nurses, 13% cleaning staff, 9.2% doctors, and 2.7% were support staff. 0.1% corresponded to administrative staff, family members, and patients. These measures were applied to patients who met the operational definition of a suspected case of CDI. In addition, family members, patients, and health personnel were trained in the matter of the correct use of personal protective equipment. 
Hospital cleaning and disinfection A diagnosis of the situation was performed, where it was identified that the hospital cleaning and disinfection process was not standardized, and it had errors in the sequence of steps. Consequently, it was decided to use two concentrations of sodium hypochlorite, 1,600 ppm for routine cleaning, and 5,000 ppm for final cleaning at patient discharge. The cleaning of the biomedical equipment and objects that were found inside the patient’s room or environment, was carried out daily and at the time of the patient’s discharge with towels impregnated with peroxide. This step was complementary to disinfection with the chlorine concentrations mentioned above, before and after, but always together. As a last step in the sequence of disinfection, in the suspected or confirmed cases, the nebulization with hydrogen peroxide with silver particles was used, where it was requested by the services and authorized by the staff of the HESU, and after this step a patient could be allowed to enter the room. A diagnosis of the situation was performed, where it was identified that the hospital cleaning and disinfection process was not standardized, and it had errors in the sequence of steps. Consequently, it was decided to use two concentrations of sodium hypochlorite, 1,600 ppm for routine cleaning, and 5,000 ppm for final cleaning at patient discharge. The cleaning of the biomedical equipment and objects that were found inside the patient’s room or environment, was carried out daily and at the time of the patient’s discharge with towels impregnated with peroxide. This step was complementary to disinfection with the chlorine concentrations mentioned above, before and after, but always together. As a last step in the sequence of disinfection, in the suspected or confirmed cases, the nebulization with hydrogen peroxide with silver particles was used, where it was requested by the services and authorized by the staff of the HESU, and after this step a patient could be allowed to enter the room. Hygiene and hand washing, and contact precautions: During the outbreak, a low percentage of adherence to hygiene and hand washing was detected in the services involved, the above was determinate by personalized questioning. Therefore, training for the personnel of the involved areas on the correct technique of hygiene and hand washing, as well as, the five moments for hand hygiene was carried out. A total of 118 health workers from the hospital wards involved were trained, of which 75% were nurses, 13% cleaning staff, 9.2% doctors, and 2.7% were support staff. 0.1% corresponded to administrative staff, family members, and patients. These measures were applied to patients who met the operational definition of a suspected case of CDI. In addition, family members, patients, and health personnel were trained in the matter of the correct use of personal protective equipment. Hospital cleaning and disinfection: A diagnosis of the situation was performed, where it was identified that the hospital cleaning and disinfection process was not standardized, and it had errors in the sequence of steps. Consequently, it was decided to use two concentrations of sodium hypochlorite, 1,600 ppm for routine cleaning, and 5,000 ppm for final cleaning at patient discharge. The cleaning of the biomedical equipment and objects that were found inside the patient’s room or environment, was carried out daily and at the time of the patient’s discharge with towels impregnated with peroxide. 
This step was complementary to disinfection with the chlorine concentrations mentioned above, before and after, but always together. As a last step in the sequence of disinfection, in the suspected or confirmed cases, the nebulization with hydrogen peroxide with silver particles was used, where it was requested by the services and authorized by the staff of the HESU, and after this step a patient could be allowed to enter the room. LABORATORY DIAGNOSIS: When a suspect case was detected, the sample of diarrheic stools was collected and submitted for emergency diagnosis to the Genetics and Molecular Diagnostic Laboratory of the HJM for their corresponding study. One of the weaknesses detected during the study process of the outbreak was that the medical and nursing staff did not have sufficient knowledge to perform a differential diagnosis of CDI. Therefore, the necessary training was initiated to identify CDI and the operational definition of the case. The second problem was that even when the personnel that identified a suspected case of CDI, they did not notify it, the case was detected through active epidemiological surveillance. The third and final problem was that samples were occasionally sent to the laboratory for CDI diagnosis. COHORT ISOLATION: One of the obstacles for the isolation of CDI cases was the structural characteristic of the HJM, since it limited the correct application of precautions contact measures, thus, relocation of beds and/or rooms was carried out in order to be able to shape specific and unique areas for suspected and confirmed cases. These measures were carried out until the discharge of the patient. In the same period of the outbreak, a total of three positive cases of C. difficile coming from other hospital units were identified, along with eight cases of gastroenteritis associated with health care with negative results to C. difficile. INTERVENTION WITH HEALTH WORKERS: During training on hygiene and hand washing, the matter of identification, notification, diagnosis and treatment of suspected or confirmed cases of CDI was also addressed. The training covered the following points: operational definition of suspected case of CDI to improve detection; sample submission to the central or research laboratory for the timely diagnosis of CDI cases; application and registration in clinical file and medical indications of contact precautions in suspected and confirmed cases of CDI; effective communication of the suspicion or confirmation of the CDI to the interconsultant, surgical, and auxiliary diagnostic services (imaging); correct cleaning and disinfection of biomedical equipment in contact with suspected or confirmed CDI patients; interconsultation to the infectology service when detecting a suspected case of CDI for the timely initiation of antimicrobial treatment. Discussion: To our knowledge, this is the first study of a hospital outbreak of diarrhea caused by C. difficile producing toxin B in Mexico, although there are previous reports in our country of the presence of this toxin, they only focus on isolates of clinical origin [10]. It is known that the identification of the source of dissemination in a C. difficile outbreak is very complex, which is why it is necessary to take into account that the hospital environment is probably the main source of contamination. This is due to poor cleaning and disinfection processes, combined with a low adherence to hygiene and hand washing of health workers [10]. 
The constant training and supervision of health personnel on hand hygiene and hand washing, and on cleaning and disinfection, is a task of great importance, since it is easy to lose habits in the application of these measures. Previous studies have shown that hand hygiene and hand washing are the most important measures for the prevention and control of HCAI [11, 12]. One of the points to improve is having an effective cleaning and disinfection process, with the appropriate products, in the areas involved in the circulation of C. difficile. Hospital environments should be evaluated, since there are surfaces and pieces of biomedical equipment that can be difficult to disinfect, such as mattress surfaces, beds, and toilets [13]. The disinfectants that are used should be evaluated, as well as the concentrations needed to achieve a significant reduction of the spores [11]. Wong et al., 2018, concluded that the use of disinfectants with sporicidal activity in the hospital environment after discharge of the patient with CDI helps to reduce the risk of cross-transmission of C. difficile [14]. The effectiveness of cleaning and disinfecting with impregnated towels, or with nebulization with hydrogen peroxide complemented with silver particles, has been demonstrated [15-17]. To obtain positive results with this type of product, it must be considered that they will always have to be complementary to the use of suitable concentrations of chlorine, either before or after the cleaning and disinfection step [18, 19]. They must be used strategically to optimize this input, and always with good communication with the technical team in charge of the application and with the hospital services involved [20, 21]. Although C. difficile is an infectious agent that is difficult to control, and its presence shows an endemic behavior in a large number of hospitals, efforts to make the prevention and control measures of the outbreak effective must be constant, with ongoing training and supervision in the hospital services involved. It is important to mention that hospital epidemiological surveillance actions work in a similar way to field epidemiology, with a structure based on the surveillance of morbidity, mortality, and laboratory results. It is necessary to establish the operational definitions of suspected and confirmed CDI cases, the technique for collecting and submitting samples for laboratory diagnosis, and the assessment by endoscopy and pathology services, if required. The establishment of these parameters will allow proper epidemiological surveillance of CDI cases. Furthermore, it is important to integrate indicators such as the incidence rate (a minimal calculation is sketched below) [22, 23]. One factor to take into account in CDI cases is the increase in the costs derived from medical care. Van Beurden et al., 2017, calculated the costs attributable to the care and control of an outbreak of C. difficile ribotype 027 at approximately €1,222,376; the factors that increased the cost were the longer hospital stay and the closing of rooms to implement contact precautions [24]. The impact on the economy of the institutions is considerable, without even considering the cost generated for the family members of the infected patients. One of the characteristics of the HJM that represents a barrier for infection control is the deficient number of rooms for the isolation of cases; this limits and impacts adherence to transmission-based precautions, in this case contact precautions. 
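As a concrete reading of the incidence-rate indicator mentioned above, the sketch below computes incidence density as new cases per 1,000 patient-days, the standard HCAI surveillance form. The patient-day denominator is invented for illustration and is not data from this outbreak.

    # Incidence density for HCAI surveillance: new cases per patient-days at risk.
    def incidence_density(cases, patient_days, per=1_000):
        return cases / patient_days * per

    # Illustrative denominator only; not data from this outbreak.
    rate = incidence_density(cases=15, patient_days=12_000)
    print(f"{rate:.2f} cases per 1,000 patient-days")  # prints 1.25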
Ideally, when planning the construction of new hospitals or the remodeling of existing institutions, the HESU team should be involved in aspects related to the prevention and control of infections. Without a doubt, effective communication with the staff of the diagnostic laboratories was key to containing the outbreak, since coordinated notification with the HESU team strengthens timely detection activities. It is important to maintain a permanent system for the notification and detection of suspected cases of CDI, so that a case can be identified in a timely manner and contact precautions, antimicrobial treatment, and effective cleaning and disinfection measures can be initiated. The purpose of communicating this report is to raise awareness of prevention and control measures, which must be well executed and supervised, since the risk of infection to patients, family members, and health workers depends on them. Conclusions: The implementation of effective and adequate measures for the prevention and control of C. difficile is a constant challenge for hospital epidemiology, and strategies should be designed around each hospital's structure so that the results are as favorable as possible. The use of basic quality tools to carry out the analysis with a risk approach for the study of outbreaks is a methodology that we must explore, since it can give a wider perspective on the barriers that are detected. Additionally, we must not forget that epidemiological surveillance is based on morbidity, mortality, and the laboratory; the latter was a strength for our HESU group, because having results in a timely manner allowed us to implement prevention and control measures in a short time.
Background: To describe an outbreak of Clostridioides difficile infection (CDI) and the impact of the prevention and control measures implemented in the "Hospital Juárez de México" (HJM) for its control. Methods: A cross-sectional, descriptive, observational, and retrospective study was designed. All information on the hospital outbreak and on health care-associated infections (HCAI) was obtained from the files of the Hospital Epidemiological Surveillance Unit (HESU) of the HJM. Results: A total of 15 cases of CDI were detected from February 20th to May 22nd, 2018; 55.6% were male and 44.4% female, with an average age of 56 years (range, 24 to 86 years). Six failures and deficiencies involving health personnel and hospital logistics were identified through analyses based on the situational diagnosis in the services involved and through the construction of cause-effect diagrams. Additionally, through the detection of the outbreak by means of laboratory tests and a timeline, the HESU team implemented measures and prospective surveillance to control and prevent the emergence of new cases. Conclusions: The implementation of basic quality tools, control measures, and prospective epidemiological surveillance had a positive impact on the control of the outbreak of toxin B-producing C. difficile.
Introduction: Clostridioides difficile (C. difficile) is a Gram-positive, spore-forming, toxin-producing bacillus that in recent years has represented a great challenge for infection control within health institutions around the world, being one of the main etiological agents of health care-associated infections (HCAI) [1]. C. difficile infection (CDI) is the most common cause of diarrhea associated with health care and with long-term use of antimicrobials [1]. Transmission of this pathogen mainly occurs in hospital facilities, where exposure to antimicrobials and contamination of the environment by C. difficile spores are common [1]. C. difficile produces toxins that damage epithelial cells, known as toxin A and toxin B [1]. Toxin A, called enterotoxin, is considered more potent in increasing secretion and causing mucosal damage and inflammation; however, in cell cultures toxin B has shown greater cytotoxic activity, and subsequent studies have shown it to be more toxic to the human colonic epithelium than toxin A [1, 2]. Worldwide, it is estimated that between 20% and 30% of cases of antibiotic-associated diarrhea are caused by C. difficile. In Canada, a rate of 3.8 to 9.5 cases per 10,000 patient-days was calculated between 1997 and 2005. As of 2002, more severe and recurrent outbreaks have been observed, the great majority associated with the use of antimicrobials, mainly fluoroquinolones, with an attributable mortality of 6.9% to 7.5% [3, 4]. In some hospitals in Mexico, C. difficile has been detected as one of the main infectious agents, showing a higher prevalence in hospital services such as Internal Medicine, General Surgery, and Intensive Care Units [5]. According to the literature, the main risk factor for C. difficile infection is the indiscriminate use of antimicrobials, mainly from the carbapenem and cephalosporin families [6, 7]. The outbreaks that have occurred in our country have shown difficulties in their control and prevention, even remaining constant with an endemic behavior [8, 9]. The aim of this work was to describe the hospital outbreak of C. difficile infection and the impact of the prevention and control measures that were implemented in the “Hospital Juárez de México” (HJM). Figures: Epidemic curve of the outbreak due to Clostridioides difficile infection; Ishikawa diagram of the outbreak due to Clostridioides difficile infection.
Background: To describe an outbreak of Clostridioides difficile infection (CDI) and the impact of the prevention and control measures implemented in the "Hospital Juárez de México" (HJM) for its control. Methods: A cross-sectional, descriptive, observational, and retrospective study was designed. All information on the hospital outbreak and on health care-associated infections (HCAI) was obtained from the files of the Hospital Epidemiological Surveillance Unit (HESU) of the HJM. Results: A total of 15 cases of CDI were detected from February 20th to May 22nd, 2018; 55.6% were male and 44.4% female, with an average age of 56 years (range, 24 to 86 years). Six failures and deficiencies involving health personnel and hospital logistics were identified through analyses based on the situational diagnosis in the services involved and through the construction of cause-effect diagrams. Additionally, through the detection of the outbreak by means of laboratory tests and a timeline, the HESU team implemented measures and prospective surveillance to control and prevent the emergence of new cases. Conclusions: The implementation of basic quality tools, control measures, and prospective epidemiological surveillance had a positive impact on the control of the outbreak of toxin B-producing C. difficile.
6,500
256
[ 447, 47, 176, 61, 2615, 687, 156, 180, 134, 110, 149, 930, 135 ]
14
[ "cdi", "cleaning", "hospital", "cases", "suspected", "case", "patient", "disinfection", "outbreak", "diagnosis" ]
[ "difficile infection impact", "clostridium difficile", "coding difficile toxin", "introduction clostridioides difficile", "difficile produces toxins" ]
null
[CONTENT] Hospital outbreak | Infection control | Clostridioides difficile | Toxin B [SUMMARY]
[CONTENT] Hospital outbreak | Infection control | Clostridioides difficile | Toxin B [SUMMARY]
null
[CONTENT] Hospital outbreak | Infection control | Clostridioides difficile | Toxin B [SUMMARY]
[CONTENT] Hospital outbreak | Infection control | Clostridioides difficile | Toxin B [SUMMARY]
[CONTENT] Hospital outbreak | Infection control | Clostridioides difficile | Toxin B [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Clostridioides difficile | Clostridium Infections | Cross Infection | Cross-Sectional Studies | Disease Outbreaks | Female | Humans | Infection Control | Male | Mexico | Middle Aged | Prospective Studies | Retrospective Studies | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Clostridioides difficile | Clostridium Infections | Cross Infection | Cross-Sectional Studies | Disease Outbreaks | Female | Humans | Infection Control | Male | Mexico | Middle Aged | Prospective Studies | Retrospective Studies | Tertiary Care Centers | Young Adult [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Clostridioides difficile | Clostridium Infections | Cross Infection | Cross-Sectional Studies | Disease Outbreaks | Female | Humans | Infection Control | Male | Mexico | Middle Aged | Prospective Studies | Retrospective Studies | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Clostridioides difficile | Clostridium Infections | Cross Infection | Cross-Sectional Studies | Disease Outbreaks | Female | Humans | Infection Control | Male | Mexico | Middle Aged | Prospective Studies | Retrospective Studies | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Clostridioides difficile | Clostridium Infections | Cross Infection | Cross-Sectional Studies | Disease Outbreaks | Female | Humans | Infection Control | Male | Mexico | Middle Aged | Prospective Studies | Retrospective Studies | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] difficile infection impact | clostridium difficile | coding difficile toxin | introduction clostridioides difficile | difficile produces toxins [SUMMARY]
[CONTENT] difficile infection impact | clostridium difficile | coding difficile toxin | introduction clostridioides difficile | difficile produces toxins [SUMMARY]
null
[CONTENT] difficile infection impact | clostridium difficile | coding difficile toxin | introduction clostridioides difficile | difficile produces toxins [SUMMARY]
[CONTENT] difficile infection impact | clostridium difficile | coding difficile toxin | introduction clostridioides difficile | difficile produces toxins [SUMMARY]
[CONTENT] difficile infection impact | clostridium difficile | coding difficile toxin | introduction clostridioides difficile | difficile produces toxins [SUMMARY]
[CONTENT] cdi | cleaning | hospital | cases | suspected | case | patient | disinfection | outbreak | diagnosis [SUMMARY]
[CONTENT] cdi | cleaning | hospital | cases | suspected | case | patient | disinfection | outbreak | diagnosis [SUMMARY]
null
[CONTENT] cdi | cleaning | hospital | cases | suspected | case | patient | disinfection | outbreak | diagnosis [SUMMARY]
[CONTENT] cdi | cleaning | hospital | cases | suspected | case | patient | disinfection | outbreak | diagnosis [SUMMARY]
[CONTENT] cdi | cleaning | hospital | cases | suspected | case | patient | disinfection | outbreak | diagnosis [SUMMARY]
[CONTENT] difficile | toxin | infection | antimicrobials | difficile infection | mainly | main | shown | associated | use antimicrobials [SUMMARY]
[CONTENT] case | gdh | microsoft | gdh certest | hours | certest | abdominal | information | cdi | age [SUMMARY]
null
[CONTENT] prevention control | control | prevention | results | measures | effective adequate measures prevention | prevention control difficile constant | quality tools carry | based morbidity mortality laboratory | based morbidity mortality [SUMMARY]
[CONTENT] cdi | cleaning | hospital | case | cases | suspected | difficile | diagnosis | patient | disinfection [SUMMARY]
[CONTENT] cdi | cleaning | hospital | case | cases | suspected | difficile | diagnosis | patient | disinfection [SUMMARY]
[CONTENT] Clostridioides | CDI | the "Hospital Juárez de México | HJM [SUMMARY]
[CONTENT] ||| HCAI | the Hospital Epidemiological Surveillance Unit | HESU | HJM [SUMMARY]
null
[CONTENT] C. | B. [SUMMARY]
[CONTENT] Clostridioides | CDI | the "Hospital Juárez de México | HJM ||| ||| HCAI | the Hospital Epidemiological Surveillance Unit | HESU | HJM ||| ||| 15 | CDI | February 20th to May 22nd, 2018 | 55.6% | 44.4% | 24 to 86 years old ||| six ||| HESU ||| C. | B. [SUMMARY]
[CONTENT] Clostridioides | CDI | the "Hospital Juárez de México | HJM ||| ||| HCAI | the Hospital Epidemiological Surveillance Unit | HESU | HJM ||| ||| 15 | CDI | February 20th to May 22nd, 2018 | 55.6% | 44.4% | 24 to 86 years old ||| six ||| HESU ||| C. | B. [SUMMARY]
Comparison of Patellofemoral Outcomes between Attune and PFC Sigma Designs: A Prospective Matched-Pair Analysis.
35251546
The Attune prosthesis (DePuy Synthes) was designed to overcome patellofemoral complications associated with the PFC Sigma prosthesis (DePuy Synthes). The aim of our study was to compare the incidence of anterior knee pain (AKP), patellofemoral crepitus (PCr), and functional outcome between the two designs.
BACKGROUND
This prospective matched-pair study was conducted between January 2014 and June 2015, during which 75 consecutive Attune total knee arthroplasties (TKAs) were matched with 75 PFC Sigma TKAs based on age, sex, body mass index, pathology, and deformity. A single surgeon performed all the operations with aid of computer navigation, using a posterior-stabilized prosthesis with patellar resurfacing. Outcome was assessed by new Knee Society Score (NKSS) and Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) score. AKP and PCr were assessed by a patient-administered questionnaire till 2 years of follow-up. Three pairs were lost to follow-up and finally 72 pairs were analyzed.
METHODS
One patient in each group reported AKP and 1 patient from each group had PCr at 2 years postoperatively. None of these patients required additional surgery. The incidence of lateral retinacular release was higher with PFC Sigma (5/72) than Attune (2/72); however, this was statistically not significant (p = 0.4). The Attune group had a significantly greater range of motion (ROM) at 3 months postoperatively (p = 0.049). At final follow-up, ROM was comparable between two prosthesis designs. NKSS and WOMAC scores were also comparable between the groups.
RESULTS
We observed that both Attune and PFC Sigma had a low and comparable incidence of AKP and PCr up to 2 years of follow-up. The Attune group achieved a significantly greater ROM at 3 months postoperatively. At 2 years of follow-up, both prostheses had excellent and comparable clinical and functional results.
CONCLUSIONS
[ "Arthroplasty, Replacement, Knee", "Humans", "Knee Joint", "Knee Prosthesis", "Matched-Pair Analysis", "Osteoarthritis, Knee", "Patella", "Prospective Studies", "Prosthesis Design", "Range of Motion, Articular", "Treatment Outcome" ]
8858905
null
null
METHODS
This prospective matched-pair study was conducted between January 2014 and June 2015 after obtaining Lilavati Hospital and Research Center Institutional Review Board and Ethics Committee approval (IRB No. ECR/606/Inst/MH/2014/RR-17). A written informed consent was obtained from all patients authorizing radiological examination, photographic documentation, and surgery. During this period, 343 patients (417 knees) underwent primary total knee arthroplasty (TKA). Preoperatively, each patient had been offered a choice between Attune and PFC Sigma prosthesis. Attune was higher priced than PFC Sigma, hence the patients were asked to choose the prosthesis based on their individual financial preference. The minimum sample size required for such a study was calculated with the level of significance (α) taken as p = 0.05, and the power of study being 80%. Using the incidence of AKP reported by Ranawat et al.7) with Attune and PFC Sigma as the outcome measure, we arrived at a minimum sample size of 73 for the study group. The first 75 consecutive knees (43 patients), in whom Attune prosthesis was implanted, were considered for inclusion. The control group of 75 knees was selected from 302 consecutive patients (344 knees) in whom PFC Sigma was implanted during the same period. Each Attune TKA was matched manually with PFC Sigma TKA based on age (± 5 years), sex, diagnosis (primary osteoarthritis or rheumatoid arthritis), deformity (varus or valgus), body mass index (± 5 kg/m2), and type of implant (fixed-bearing or mobile-bearing). The mean age of the study population was 65.8 years (range, 49–85 years) and mean body mass index was 31.9 kg/m2 (range, 22.8–45.1 kg/m2). The majority of the patients were women (87.5%). Only 4 patients in each group (5.5%) had rheumatoid arthritis, the rest had primary osteoarthritis (Table 1). The majority of the knees had a varus deformity (138 knees, 95.8%) and the rest were in valgus. The Attune and PFC Sigma groups were comparable in all demographic aspects (Table 1). The mean deformity, as measured by computer navigation, was 10.6° varus in the PFC Sigma group and 8.8° varus in the Attune group. Although this difference was statistically significant, we believe a difference of less than 2° in their means did not make a significant clinical difference. The sizes of the patellar buttons used in the two groups were also comparable. Demographic data, Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and new Knee Society Score (NKSS) were recorded preoperatively. These outcome measures were chosen since their validity and reliability have been established in our population.910) Presence of AKP and PCr was ascertained by a validated patient-administered questionnaire that asks the patient to localize the site of pain and incidence of any noise from the joint.11) All TKAs, irrespective of the prosthesis being implanted, were performed by the same senior surgeon (RNM), under tourniquet control, using the medial parapatellar approach, with the aid of computer navigation, by the tibial-cut first technique. For all cases, cemented, cruciate-substituting implants were used and the patella was resurfaced. Operative details of limb alignment (as per computer navigation), implant size, and ligament releases were recorded. 
Patellar tracking was recorded, and if required, a step-wise outside-in lateral retinacular release was performed as described by Maniar et al.12) The surgeon has performed over 4,000 TKAs using PFC Sigma and has hence refined surgical techniques with respect to PFC Sigma. This included maintaining patellar bone thickness between 13 and 15 mm and avoiding tilt in the mediolateral or cephalocaudal directions. The placement of patellar button was matched with the medial and proximal edges of the patella and lateral patellar osteophyte/spur was excised. Synovial fold at the quadriceps tendon-patella junction was completely excised. Ensuring flush fit of femoral component on the anterior cortex prevented overstuffing of the patellofemoral joint. An outside-in lateral retinacular release was performed as and when indicated to correct patellar maltracking.12) For Attune prosthesis, synovial fold at the quadriceps tendon-patella junction was completely excised just as in PFC Sigma cases. However, placement of patella was central since the patellar design has a medialized anatomic dome. A 100-mL cocktail of ketorolac (30 mg, 1 mL), bupivacaine (0.5%, volume in mL calculated as 40% of bodyweight), epinephrine (5 μg/mL, 0.01 mL per kg of body weight), and normal saline was used for periarticular injection. Out of this, 70 mL was injected into the deeper structures (posterior capsule, around collateral ligaments and extensor apparatus) and 30 mL into superficial structures (subcutaneous tissue). Patient-controlled analgesia (PCA) with an infusion pump containing morphine 1 mg/mL was used for pain management. Apart from this, oral paracetamol (1 g, thrice a day) and diclofenac suppositories (100 mg, twice a day) were used for all patients. A uniform postoperative rehabilitation protocol was followed, consisting of in-home physiotherapy for a period of around 4–6 weeks. After removal of sutures at 2 weeks postoperatively, patients were re-evaluated at 3 months, 1 year, and 2 years from the surgery. At these follow-up visits, the senior author (RNM) examined the patients and used a hand-held goniometer for measuring range of motion (ROM) in supine position. WOMAC and NKSS were readministered and presence of AKP and/or PCr was also noted at each of these visits. Weight-bearing anteroposterior, lateral, and skyline radiographs of the knees were obtained at each of these visits. All radiographs were scrutinized using Knee Society radiographic scoring system by the senior author (RNM) for signs of loosening or patellar maltracking.13) One patient from the Attune group expired within 1 year from the surgery, from causes unrelated to the knee. Two more patients from the Attune group were not available for follow-up at 1 and 2 years postoperatively due to their distant geographical location. These patients along with their PFC Sigma matched pair were excluded from final analysis. All other patients were followed up actively to ensure that no other patients (72 Attune TKAs and 72 PFC Sigma TKAs) were lost. Asymptomatic, painless, or mildly painful patellar crepitus was treated conservatively by nonsteroidal anti-inflammatory drugs, quadriceps strengthening and restricting activities that involve extreme knee bending. Patients not responding to conservative measures, those with painful crepitus or patellar clunk, were considered for arthroscopic debridement and scar tissue excision. SPSS ver. 18.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. 
Data were given as mean and standard deviation for quantitative data and as number (percentage, %) for qualitative data. The Student unpaired t-test was applied to compare means between the two groups. The chi-square test and Fisher exact test were applied to compare qualitative data. All statistical tests were two-tailed.
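The sample-size figure of 73 knees per group can be approximately reproduced with a standard two-proportion power calculation. The sketch below uses statsmodels; the AKP proportions are illustrative stand-ins (the exact incidences the authors took from Ranawat et al. are not restated in this text), so the output of roughly 71 knees per group only lands near the reported 73.

    # Two-proportion sample-size calculation (alpha = 0.05, power = 80%).
    # The AKP incidences below are illustrative assumptions, not the exact
    # figures the authors took from Ranawat et al.
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    p_pfc, p_attune = 0.22, 0.10  # assumed AKP incidence per prosthesis

    effect = proportion_effectsize(p_pfc, p_attune)  # Cohen's h
    n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                     power=0.80, alternative="two-sided")
    print(f"minimum knees per group: {n:.0f}")  # about 71, near the reported 73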
RESULTS
AKP and PCr Preoperatively, 116 knees (80.5%) had AKP and 51 knees (35.4%) had PCr. Both groups had a comparable distribution of AKP and PCr preoperatively (Table 2). The incidence of both AKP and PCr declined steadily postoperatively (Fig. 2). There were no significant differences in their incidence at 3 months and 12 months follow-up visits. At final follow-up, 2 years postoperatively, 1 patient in each group reported AKP. Similarly, only 1 knee from each group had PCr. None of the patients that had postoperative AKP or PCr required additional surgical intervention. They were managed successfully nonoperatively. The patients with PCr persisting at 2 years did not report it to be a hindrance in activities of daily living. Range of Motion The mean ROM improved from 118° preoperatively to 132° at 2 years in the Attune group and from 114° to 132° in the PFC Sigma group (Fig. 3). This improvement in ROM was statistically significant (p < 0.001) in both groups. Although preoperative ROM was comparable between the groups, at 3 months postoperatively, patients with Attune prosthesis had a significantly better ROM than their PFC Sigma counterparts (Table 3). However, at 1 year and 2 years postoperatively, the ROM was again comparable between the two prosthesis designs. The improvement in ROM from the preoperative value to 2-year follow-up was also comparable between the groups. None of the patients required manipulation under anesthesia or any surgical intervention to improve ROM. Questionnaire-Based Outcomes In the Attune group, the mean WOMAC score improved from 43 preoperatively to 10 at 2 years (Table 4). In the PFC Sigma group, the mean WOMAC score improved from 42 preoperatively to 11 at 2 years. WOMAC score improved consistently at each of the follow-up visits in both groups (Fig. 4). The WOMAC score was comparable between the two groups preoperatively, as well as at each of the subsequent visits. The mean preoperative NKSS was 97 in Attune group and 94 in PFC Sigma group, which improved at 2 years to 201 and 199, respectively (Table 4). There was no significant difference in these scores between the two groups. NKSS and its subcomponents also showed consistent improvement at each of the intermediate follow-up visits (Fig. 4). The scores of the two groups were comparable at all of these visits. Soft-Tissue Releases Lateral retinacular release for patellofemoral maltracking was required in 5 knees with PFC Sigma, as compared to 2 in the Attune group; however, this difference was not statistically significant (p = 0.4). The incidence of posteromedial corner release was also higher in the PFC Sigma group than in the Attune group although the difference was not statistically significant (Table 1). Revision Surgery and Radiographic Analysis None of the patients in either of the groups required revision surgery for any cause during the 2-year follow-up period. Radiographs did not show any osteolytic changes or signs of loosening in any of the patients.
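The soft-tissue release comparison above (5/72 vs. 2/72 lateral releases, p = 0.4) is a standard Fisher exact test on a 2x2 table; a minimal check with scipy is sketched below.

    # Fisher exact test on the lateral retinacular release counts:
    # 5/72 PFC Sigma knees vs. 2/72 Attune knees.
    from scipy.stats import fisher_exact

    table = [[5, 72 - 5],   # PFC Sigma: released / not released
             [2, 72 - 2]]   # Attune:    released / not released

    odds_ratio, p = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p:.2f}")  # p near 0.4: not significant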
null
null
[ "AKP and PCr", "Range of Motion", "Questionnaire-Based Outcomes", "Soft-Tissue Releases", "Revision Surgery and Radiographic Analysis" ]
[ "Preoperatively, 116 knees (80.5%) had AKP and 51 knees (35.4%) had PCr. Both groups had a comparable distribution of AKP and PCr preoperatively (Table 2). The incidence of both AKP and PCr declined steadily postoperatively (Fig. 2). There were no significant differences in their incidence at 3 months and 12 months follow-up visits. At final follow-up, 2 years postoperatively, 1 patient in each group reported AKP. Similarly, only 1 knee from each group had PCr. None of the patients that had postoperative AKP or PCr required additional surgical intervention. They were managed successfully nonoperatively. The patients with PCr persisting at 2 years did not report it to be a hindrance in activities of daily living.", "The mean ROM improved from 118° preoperatively to 132° at 2 years in the Attune group and from 114° to 132° in the PFC Sigma group (Fig. 3). This improvement in ROM was statistically significant (p < 0.001) in both groups. Although preoperative ROM was comparable between the groups, at 3 months postoperatively, patients with Attune prosthesis had a significantly better ROM than their PFC Sigma counterparts (Table 3). However, at 1 year and 2 years postoperatively, the ROM was again comparable between the two prosthesis designs. The improvement in ROM from the preoperative value to 2-year follow-up was also comparable between the groups. None of the patients required manipulation under anesthesia or any surgical intervention to improve ROM.", "In the Attune group, the mean WOMAC score improved from 43 preoperatively to 10 at 2 years (Table 4). In the PFC Sigma group, the mean WOMAC score improved from 42 preoperatively to 11 at 2 years. WOMAC score improved consistently at each of the follow-up visits in both groups (Fig. 4). The WOMAC score was comparable between the two groups preoperatively, as well as at each of the subsequent visits. The mean preoperative NKSS was 97 in Attune group and 94 in PFC Sigma group, which improved at 2 years to 201 and 199, respectively (Table 4). There was no significant difference in these scores between the two groups. NKSS and its subcomponents also showed consistent improvement at each of the intermediate follow-up visits (Fig. 4). The scores of the two groups were comparable at all of these visits.", "Lateral retinacular release for patellofemoral maltracking was required in 5 knees with PFC Sigma, as compared to 2 in the Attune group; however, this difference was not statistically significant (p = 0.4). The incidence of posteromedial corner release was also higher in the PFC Sigma group than in the Attune group although the difference was not statistically significant (Table 1).", "None of the patients in either of the groups required revision surgery for any cause during the 2-year follow-up period. Radiographs did not show any osteolytic changes or signs of loosening in any of the patients." ]
[ null, null, null, null, null ]
[ "METHODS", "RESULTS", "AKP and PCr", "Range of Motion", "Questionnaire-Based Outcomes", "Soft-Tissue Releases", "Revision Surgery and Radiographic Analysis", "DISCUSSION" ]
[ "This prospective matched-pair study was conducted between January 2014 and June 2015 after obtaining Lilavati Hospital and Research Center Institutional Review Board and Ethics Committee approval (IRB No. ECR/606/Inst/MH/2014/RR-17). A written informed consent was obtained from all patients authorizing radiological examination, photographic documentation, and surgery.\nDuring this period, 343 patients (417 knees) underwent primary total knee arthroplasty (TKA). Preoperatively, each patient had been offered a choice between Attune and PFC Sigma prosthesis. Attune was higher priced than PFC Sigma, hence the patients were asked to choose the prosthesis based on their individual financial preference.\nThe minimum sample size required for such a study was calculated with the level of significance (α) taken as p = 0.05, and the power of study being 80%. Using the incidence of AKP reported by Ranawat et al.7) with Attune and PFC Sigma as the outcome measure, we arrived at a minimum sample size of 73 for the study group. The first 75 consecutive knees (43 patients), in whom Attune prosthesis was implanted, were considered for inclusion. The control group of 75 knees was selected from 302 consecutive patients (344 knees) in whom PFC Sigma was implanted during the same period. Each Attune TKA was matched manually with PFC Sigma TKA based on age (± 5 years), sex, diagnosis (primary osteoarthritis or rheumatoid arthritis), deformity (varus or valgus), body mass index (± 5 kg/m2), and type of implant (fixed-bearing or mobile-bearing).\nThe mean age of the study population was 65.8 years (range, 49–85 years) and mean body mass index was 31.9 kg/m2 (range, 22.8–45.1 kg/m2). The majority of the patients were women (87.5%). Only 4 patients in each group (5.5%) had rheumatoid arthritis, the rest had primary osteoarthritis (Table 1). The majority of the knees had a varus deformity (138 knees, 95.8%) and the rest were in valgus. The Attune and PFC Sigma groups were comparable in all demographic aspects (Table 1). The mean deformity, as measured by computer navigation, was 10.6° varus in the PFC Sigma group and 8.8° varus in the Attune group. Although this difference was statistically significant, we believe a difference of less than 2° in their means did not make a significant clinical difference. The sizes of the patellar buttons used in the two groups were also comparable.\nDemographic data, Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and new Knee Society Score (NKSS) were recorded preoperatively. These outcome measures were chosen since their validity and reliability have been established in our population.910) Presence of AKP and PCr was ascertained by a validated patient-administered questionnaire that asks the patient to localize the site of pain and incidence of any noise from the joint.11)\nAll TKAs, irrespective of the prosthesis being implanted, were performed by the same senior surgeon (RNM), under tourniquet control, using the medial parapatellar approach, with the aid of computer navigation, by the tibial-cut first technique. For all cases, cemented, cruciate-substituting implants were used and the patella was resurfaced. Operative details of limb alignment (as per computer navigation), implant size, and ligament releases were recorded. 
Patellar tracking was recorded, and if required, a step-wise outside-in lateral retinacular release was performed as described by Maniar et al.12) The surgeon has performed over 4,000 TKAs using PFC Sigma and has hence refined surgical techniques with respect to PFC Sigma. This included maintaining patellar bone thickness between 13 and 15 mm and avoiding tilt in the mediolateral or cephalocaudal directions. The placement of patellar button was matched with the medial and proximal edges of the patella and lateral patellar osteophyte/spur was excised. Synovial fold at the quadriceps tendon-patella junction was completely excised. Ensuring flush fit of femoral component on the anterior cortex prevented overstuffing of the patellofemoral joint. An outside-in lateral retinacular release was performed as and when indicated to correct patellar maltracking.12) For Attune prosthesis, synovial fold at the quadriceps tendon-patella junction was completely excised just as in PFC Sigma cases. However, placement of patella was central since the patellar design has a medialized anatomic dome.\nA 100-mL cocktail of ketorolac (30 mg, 1 mL), bupivacaine (0.5%, volume in mL calculated as 40% of bodyweight), epinephrine (5 μg/mL, 0.01 mL per kg of body weight), and normal saline was used for periarticular injection. Out of this, 70 mL was injected into the deeper structures (posterior capsule, around collateral ligaments and extensor apparatus) and 30 mL into superficial structures (subcutaneous tissue). Patient-controlled analgesia (PCA) with an infusion pump containing morphine 1 mg/mL was used for pain management. Apart from this, oral paracetamol (1 g, thrice a day) and diclofenac suppositories (100 mg, twice a day) were used for all patients.\nA uniform postoperative rehabilitation protocol was followed, consisting of in-home physiotherapy for a period of around 4–6 weeks. After removal of sutures at 2 weeks postoperatively, patients were re-evaluated at 3 months, 1 year, and 2 years from the surgery. At these follow-up visits, the senior author (RNM) examined the patients and used a hand-held goniometer for measuring range of motion (ROM) in supine position. WOMAC and NKSS were readministered and presence of AKP and/or PCr was also noted at each of these visits. Weight-bearing anteroposterior, lateral, and skyline radiographs of the knees were obtained at each of these visits. All radiographs were scrutinized using Knee Society radiographic scoring system by the senior author (RNM) for signs of loosening or patellar maltracking.13)\nOne patient from the Attune group expired within 1 year from the surgery, from causes unrelated to the knee. Two more patients from the Attune group were not available for follow-up at 1 and 2 years postoperatively due to their distant geographical location. These patients along with their PFC Sigma matched pair were excluded from final analysis. All other patients were followed up actively to ensure that no other patients (72 Attune TKAs and 72 PFC Sigma TKAs) were lost. Asymptomatic, painless, or mildly painful patellar crepitus was treated conservatively by nonsteroidal anti-inflammatory drugs, quadriceps strengthening and restricting activities that involve extreme knee bending. Patients not responding to conservative measures, those with painful crepitus or patellar clunk, were considered for arthroscopic debridement and scar tissue excision.\nSPSS ver. 18.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. 
Data were given as mean and standard deviation for quantitative data and number (percentage, %) for qualitative data. Student unpaired t-test was applied to compare means between two groups. Chi-square test and Fisher exact test were applied to compare qualitative data. All statistical tests were two-tailed.", " AKP and PCr Preoperatively, 116 knees (80.5%) had AKP and 51 knees (35.4%) had PCr. Both groups had a comparable distribution of AKP and PCr preoperatively (Table 2). The incidence of both AKP and PCr declined steadily postoperatively (Fig. 2). There were no significant differences in their incidence at 3 months and 12 months follow-up visits. At final follow-up, 2 years postoperatively, 1 patient in each group reported AKP. Similarly, only 1 knee from each group had PCr. None of the patients that had postoperative AKP or PCr required additional surgical intervention. They were managed successfully nonoperatively. The patients with PCr persisting at 2 years did not report it to be a hindrance in activities of daily living.\n Range of Motion The mean ROM improved from 118° preoperatively to 132° at 2 years in the Attune group and from 114° to 132° in the PFC Sigma group (Fig. 3). This improvement in ROM was statistically significant (p < 0.001) in both groups. Although preoperative ROM was comparable between the groups, at 3 months postoperatively, patients with Attune prosthesis had a significantly better ROM than their PFC Sigma counterparts (Table 3). However, at 1 year and 2 years postoperatively, the ROM was again comparable between the two prosthesis designs. The improvement in ROM from the preoperative value to 2-year follow-up was also comparable between the groups. None of the patients required manipulation under anesthesia or any surgical intervention to improve ROM.\n Questionnaire-Based Outcomes In the Attune group, the mean WOMAC score improved from 43 preoperatively to 10 at 2 years (Table 4). In the PFC Sigma group, the mean WOMAC score improved from 42 preoperatively to 11 at 2 years. WOMAC score improved consistently at each of the follow-up visits in both groups (Fig. 4). The WOMAC score was comparable between the two groups preoperatively, as well as at each of the subsequent visits. The mean preoperative NKSS was 97 in Attune group and 94 in PFC Sigma group, which improved at 2 years to 201 and 199, respectively (Table 4). There was no significant difference in these scores between the two groups. NKSS and its subcomponents also showed consistent improvement at each of the intermediate follow-up visits (Fig. 4). The scores of the two groups were comparable at all of these visits.\n Soft-Tissue Releases Lateral retinacular release for patellofemoral maltracking was required in 5 knees with PFC Sigma, as compared to 2 in the Attune group; however, this difference was not statistically significant (p = 0.4). The incidence of posteromedial corner release was also higher in the PFC Sigma group than in the Attune group although the difference was not statistically significant (Table 1).\n Revision Surgery and Radiographic Analysis None of the patients in either of the groups required revision surgery for any cause during the 2-year follow-up period. Radiographs did not show any osteolytic changes or signs of loosening in any of the patients.", "Preoperatively, 116 knees (80.5%) had AKP and 51 knees (35.4%) had PCr. Both groups had a comparable distribution of AKP and PCr preoperatively (Table 2). The incidence of both AKP and PCr declined steadily postoperatively (Fig. 2). There were no significant differences in their incidence at 3 months and 12 months follow-up visits. At final follow-up, 2 years postoperatively, 1 patient in each group reported AKP. Similarly, only 1 knee from each group had PCr. None of the patients that had postoperative AKP or PCr required additional surgical intervention. They were managed successfully nonoperatively. 
The patients with PCr persisting at 2 years did not report it to be a hindrance in activities of daily living.", "The mean ROM improved from 118° preoperatively to 132° at 2 years in the Attune group and from 114° to 132° in the PFC Sigma group (Fig. 3). This improvement in ROM was statistically significant (p < 0.001) in both groups. Although preoperative ROM was comparable between the groups, at 3 months postoperatively, patients with Attune prosthesis had a significantly better ROM than their PFC Sigma counterparts (Table 3). However, at 1 year and 2 years postoperatively, the ROM was again comparable between the two prosthesis designs. The improvement in ROM from the preoperative value to 2-year follow-up was also comparable between the groups. None of the patients required manipulation under anesthesia or any surgical intervention to improve ROM.", "In the Attune group, the mean WOMAC score improved from 43 preoperatively to 10 at 2 years (Table 4). In the PFC Sigma group, the mean WOMAC score improved from 42 preoperatively to 11 at 2 years. WOMAC score improved consistently at each of the follow-up visits in both groups (Fig. 4). The WOMAC score was comparable between the two groups preoperatively, as well as at each of the subsequent visits. The mean preoperative NKSS was 97 in Attune group and 94 in PFC Sigma group, which improved at 2 years to 201 and 199, respectively (Table 4). There was no significant difference in these scores between the two groups. NKSS and its subcomponents also showed consistent improvement at each of the intermediate follow-up visits (Fig. 4). The scores of the two groups were comparable at all of these visits.", "Lateral retinacular release for patellofemoral maltracking was required in 5 knees with PFC Sigma, as compared to 2 in the Attune group; however, this difference was not statistically significant (p = 0.4). The incidence of posteromedial corner release was also higher in the PFC Sigma group than in the Attune group although the difference was not statistically significant (Table 1).", "None of the patients in either of the groups required revision surgery for any cause during the 2-year follow-up period. Radiographs did not show any osteolytic changes or signs of loosening in any of the patients.", "The most important finding of this study is that the incidence of postoperative AKP and PCr was comparable between the two groups. This is in contrast to the results described in recent literature (Table 5), wherein AKP and painful/symptomatic crepitus were observed to be significantly less frequent with Attune prosthesis in comparison to PFC Sigma prosthesis.678) At 1 year postoperatively, in our study, AKP was present in 11% of knees in Attune group and in 9% of knees in PFC Sigma group. At 2 years postoperatively, 1 knee (1.4%) in each group had AKP. In contrast, other matched pair studies have reported AKP at 2-year follow-up to be 2%–13% in Attune group and 9%–26% in PFC Sigma group.78) We observed symptomatic crepitus in 4% of knees in both Attune and PFC groups at 1 year postoperatively. Most of these resolved with nonoperative management. Symptomatic crepitus persisted in only 1 knee (1.4%) in each group at 2 years of follow-up. 
This is in tune with observations of the other two matched-pair studies, which also reported 1% incidence of symptomatic crepitus at 2 years in the Attune group.78) However, the contrast is stark at 2 years in the PFC Sigma group, where only 1 of our patients (1.4%) had PCr as compared to the 5% and 4% incidence reported by Indelli et al.8) and Ranawat et al.,7) respectively. The operating surgeon (RNM) previously reported the incidence of PCr to be 1.1% and AKP to be 3.4% with PFC Sigma prosthesis in patients operated by him.12) However, this was amongst patients that underwent lateral retinacular release for patellar maltracking.\nThe deviation of our results from the existing literature can be explained by three factors. Firstly, ours is a single surgeon-based study; hence, a standard surgical technique was used in all patients. The studies by Ranawat et al.7) and Indelli et al.8) pooled their data from operations done by multiple surgeons and hence may have introduced variations in surgical techniques and postoperative evaluation. Secondly, all operations in our study were done with the aid of computer navigation, whereas the other two studies used the conventional surgical technique. Computer navigation is proven to improve coronal alignment and rotational orientation of components.141516) Component positioning plays a key role in influencing patellofemoral kinematics.1718) Hence, we believe that by improving the accuracy of component positioning by the use of computer navigation, the risk of postoperative patellofemoral complications can be reduced. Lastly, the preoperative varus alignment amongst the 69 pairs that had varus deformity showed a statistical difference between them. However, the mean difference between the two groups was less than 2° and therefore clinically irrelevant and is unlikely to influence the patellofemoral outcome.\nSurgical scar excision for patellar clunk has been reported to be significantly lower in Attune TKAs (0.14%) as compared to PFC Sigma TKAs (1.6%).6) However, the rate of surgery for patellar clunk was discordant between the two matched-pair studies by Indelli et al.8) and Ranawat et al.7) Peripatellar scar excision was performed in 2% of PFC Sigma TKAs and in none of the Attune TKAs by Indelli et al.8) whereas it was performed in 1% of Attune TKAs and in none of the PFC Sigma TKAs by Ranawat et al.7) We did not encounter any case of patellar clunk syndrome warranting surgery.\nThe incidence of lateral retinacular release is an indirect indicator of patellofemoral kinematics and maltracking. The need for lateral release is significantly higher with PFC Sigma than Attune.5) We also observed a higher incidence of lateral release in the PFC Sigma group, but the difference was not statistically significant. Questionnaire-based outcome measures (WOMAC and NKSS) were also comparable between the two types of prosthesis at each stage of follow-up.\nThe Attune femoral component is designed with a gradually reducing femoral radius in higher degrees of knee flexion.2) This allows for a gradual reduction in tibiofemoral conformity, which in turn allows greater freedom of rotation. This feature has been designed to increase knee flexion without sacrificing stability. However, clinical results comparing ROM with the older PFC designs are divided. Significantly better ROM in Attune groups was reported by Indelli et al.8) and Martin et al.6) However, Ranawat et al.7) reported no significant differences in ROM between the two designs. 
We observed significantly better ROM in the Attune group at 3 months postoperatively, but the difference evened out at 1 year postoperatively.\nThe primary limitation of the study is the selection bias and observer bias due to lack of randomization and blinding, respectively. Such a bias is inherent to matched-pair studies and could not be excluded. This study also had a relatively smaller study population (72 pairs) as compared to other recent matched-pair studies (100 pairs).78) However, we minimized loss to follow-up by actively ensuring that the patients report for their postoperative visits.\nThe strengths of this study are that this is a single surgeon-based study and has minimal loss to follow-up. Another strength is that both the Attune and PFC Sigma groups were comparable in terms of demographics and intraoperative variables. We observed that both Attune and PFC Sigma had a low and comparable incidence of AKP and PCr up to 2 years of follow-up. Attune group achieved significantly better ROM at 3 months postoperatively. At 2 years of follow-up, both prostheses had excellent and comparable clinical and functional results." ]
[ "methods", "results", null, null, null, null, null, "discussion" ]
[ "Knee", "Prosthesis design", "Patellofemoral joints", "Arthroplasty", "Anterior knee pain syndrome" ]
METHODS: This prospective matched-pair study was conducted between January 2014 and June 2015 after obtaining Lilavati Hospital and Research Center Institutional Review Board and Ethics Committee approval (IRB No. ECR/606/Inst/MH/2014/RR-17). A written informed consent was obtained from all patients authorizing radiological examination, photographic documentation, and surgery. During this period, 343 patients (417 knees) underwent primary total knee arthroplasty (TKA). Preoperatively, each patient had been offered a choice between Attune and PFC Sigma prosthesis. Attune was higher priced than PFC Sigma, hence the patients were asked to choose the prosthesis based on their individual financial preference. The minimum sample size required for such a study was calculated with the level of significance (α) taken as p = 0.05, and the power of study being 80%. Using the incidence of AKP reported by Ranawat et al.7) with Attune and PFC Sigma as the outcome measure, we arrived at a minimum sample size of 73 for the study group. The first 75 consecutive knees (43 patients), in whom Attune prosthesis was implanted, were considered for inclusion. The control group of 75 knees was selected from 302 consecutive patients (344 knees) in whom PFC Sigma was implanted during the same period. Each Attune TKA was matched manually with PFC Sigma TKA based on age (± 5 years), sex, diagnosis (primary osteoarthritis or rheumatoid arthritis), deformity (varus or valgus), body mass index (± 5 kg/m2), and type of implant (fixed-bearing or mobile-bearing). The mean age of the study population was 65.8 years (range, 49–85 years) and mean body mass index was 31.9 kg/m2 (range, 22.8–45.1 kg/m2). The majority of the patients were women (87.5%). Only 4 patients in each group (5.5%) had rheumatoid arthritis, the rest had primary osteoarthritis (Table 1). The majority of the knees had a varus deformity (138 knees, 95.8%) and the rest were in valgus. The Attune and PFC Sigma groups were comparable in all demographic aspects (Table 1). The mean deformity, as measured by computer navigation, was 10.6° varus in the PFC Sigma group and 8.8° varus in the Attune group. Although this difference was statistically significant, we believe a difference of less than 2° in their means did not make a significant clinical difference. The sizes of the patellar buttons used in the two groups were also comparable. Demographic data, Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and new Knee Society Score (NKSS) were recorded preoperatively. These outcome measures were chosen since their validity and reliability have been established in our population.910) Presence of AKP and PCr was ascertained by a validated patient-administered questionnaire that asks the patient to localize the site of pain and incidence of any noise from the joint.11) All TKAs, irrespective of the prosthesis being implanted, were performed by the same senior surgeon (RNM), under tourniquet control, using the medial parapatellar approach, with the aid of computer navigation, by the tibial-cut first technique. For all cases, cemented, cruciate-substituting implants were used and the patella was resurfaced. Operative details of limb alignment (as per computer navigation), implant size, and ligament releases were recorded. 
Patellar tracking was recorded, and if required, a step-wise outside-in lateral retinacular release was performed as described by Maniar et al.12) The surgeon has performed over 4,000 TKAs using PFC Sigma and has hence refined surgical techniques with respect to PFC Sigma. This included maintaining patellar bone thickness between 13 and 15 mm and avoiding tilt in the mediolateral or cephalocaudal directions. The placement of the patellar button was matched with the medial and proximal edges of the patella, and the lateral patellar osteophyte/spur was excised. The synovial fold at the quadriceps tendon-patella junction was completely excised. Ensuring a flush fit of the femoral component on the anterior cortex prevented overstuffing of the patellofemoral joint. An outside-in lateral retinacular release was performed as and when indicated to correct patellar maltracking.12) For the Attune prosthesis, the synovial fold at the quadriceps tendon-patella junction was completely excised just as in PFC Sigma cases. However, placement of the patella was central since the patellar design has a medialized anatomic dome. A 100-mL cocktail of ketorolac (30 mg, 1 mL), bupivacaine (0.5%, volume in mL calculated as 40% of body weight), epinephrine (5 μg/mL, 0.01 mL per kg of body weight), and normal saline was used for periarticular injection. Out of this, 70 mL was injected into the deeper structures (posterior capsule, around the collateral ligaments and extensor apparatus) and 30 mL into superficial structures (subcutaneous tissue). Patient-controlled analgesia (PCA) with an infusion pump containing morphine (1 mg/mL) was used for pain management. Apart from this, oral paracetamol (1 g, thrice a day) and diclofenac suppositories (100 mg, twice a day) were used for all patients. A uniform postoperative rehabilitation protocol was followed, consisting of in-home physiotherapy for around 4–6 weeks. After removal of sutures at 2 weeks postoperatively, patients were re-evaluated at 3 months, 1 year, and 2 years from the surgery. At these follow-up visits, the senior author (RNM) examined the patients and used a hand-held goniometer for measuring range of motion (ROM) in the supine position. WOMAC and NKSS were readministered, and the presence of AKP and/or PCr was also noted at each of these visits. Weight-bearing anteroposterior, lateral, and skyline radiographs of the knees were obtained at each of these visits. All radiographs were scrutinized using the Knee Society radiographic scoring system by the senior author (RNM) for signs of loosening or patellar maltracking.13) One patient from the Attune group died within 1 year of surgery, from causes unrelated to the knee. Two more patients from the Attune group were not available for follow-up at 1 and 2 years postoperatively due to their distant geographical location. These patients, along with their PFC Sigma matched pairs, were excluded from the final analysis. All remaining patients (72 Attune TKAs and 72 PFC Sigma TKAs) were followed up actively to ensure that no further patients were lost. Asymptomatic, painless, or mildly painful patellar crepitus was treated conservatively by nonsteroidal anti-inflammatory drugs, quadriceps strengthening, and restriction of activities that involve extreme knee bending. Patients not responding to conservative measures, and those with painful crepitus or patellar clunk, were considered for arthroscopic debridement and scar tissue excision. SPSS ver. 18.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. 
Data were given as mean and standard deviation for quantitative data and number (percentage, %) for qualitative data. The Student unpaired t-test was applied to compare means between the two groups. The chi-square test and Fisher exact test were applied to compare qualitative data. All statistical tests were two-tailed. 
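To make the comparisons concrete, here is a brief Python/scipy sketch of the three tests named above, standing in for the SPSS analyses. The ROM arrays are invented illustrative values; the 2x2 table uses the lateral-release counts reported later in the Results (2/72 Attune vs 5/72 PFC Sigma), for which both tests return p values in the vicinity of the reported 0.4.

```python
# Sketch of the study's statistical comparisons using scipy instead of
# SPSS. ROM arrays are invented; the 2x2 table holds the reported
# lateral retinacular release counts (released / not released).
import numpy as np
from scipy import stats

rom_attune = np.array([128, 131, 135, 122, 130, 127])      # hypothetical 3-month ROM (degrees)
rom_pfc_sigma = np.array([119, 124, 126, 117, 123, 121])
t_stat, p_t = stats.ttest_ind(rom_attune, rom_pfc_sigma)   # Student unpaired t-test

release_table = np.array([[2, 70],    # Attune
                          [5, 67]])   # PFC Sigma
chi2, p_chi, dof, _ = stats.chi2_contingency(release_table)
odds_ratio, p_fisher = stats.fisher_exact(release_table)   # preferred at small counts
print(f"t-test p={p_t:.3f}, chi-square p={p_chi:.2f}, Fisher p={p_fisher:.2f}")
```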
RESULTS: AKP and PCr: Preoperatively, 116 knees (80.5%) had AKP and 51 knees (35.4%) had PCr. Both groups had a comparable distribution of AKP and PCr preoperatively (Table 2). The incidence of both AKP and PCr declined steadily postoperatively (Fig. 2). There were no significant differences in their incidence at 3 months and 12 months follow-up visits. At final follow-up, 2 years postoperatively, 1 patient in each group reported AKP. Similarly, only 1 knee from each group had PCr. None of the patients that had postoperative AKP or PCr required additional surgical intervention. They were managed successfully nonoperatively. The patients with PCr persisting at 2 years did not report it to be a hindrance in activities of daily living. Range of Motion: The mean ROM improved from 118° preoperatively to 132° at 2 years in the Attune group and from 114° to 132° in the PFC Sigma group (Fig. 3). This improvement in ROM was statistically significant (p < 0.001) in both groups. Although preoperative ROM was comparable between the groups, at 3 months postoperatively, patients with Attune prosthesis had a significantly better ROM than their PFC Sigma counterparts (Table 3). However, at 1 year and 2 years postoperatively, the ROM was again comparable between the two prosthesis designs. The improvement in ROM from the preoperative value to 2-year follow-up was also comparable between the groups. None of the patients required manipulation under anesthesia or any surgical intervention to improve ROM. Questionnaire-Based Outcomes: In the Attune group, the mean WOMAC score improved from 43 preoperatively to 10 at 2 years (Table 4). In the PFC Sigma group, the mean WOMAC score improved from 42 preoperatively to 11 at 2 years. WOMAC score improved consistently at each of the follow-up visits in both groups (Fig. 4). The WOMAC score was comparable between the two groups preoperatively, as well as at each of the subsequent visits. The mean preoperative NKSS was 97 in Attune group and 94 in PFC Sigma group, which improved at 2 years to 201 and 199, respectively (Table 4). There was no significant difference in these scores between the two groups. NKSS and its subcomponents also showed consistent improvement at each of the intermediate follow-up visits (Fig. 4). The scores of the two groups were comparable at all of these visits. Soft-Tissue Releases: Lateral retinacular release for patellofemoral maltracking was required in 5 knees with PFC Sigma, as compared to 2 in the Attune group; however, this difference was not statistically significant (p = 0.4). The incidence of posteromedial corner release was also higher in the PFC Sigma group than in the Attune group although the difference was not statistically significant (Table 1). Revision Surgery and Radiographic Analysis: None of the patients in either of the groups required revision surgery for any cause during the 2-year follow-up period. Radiographs did not show any osteolytic changes or signs of loosening in any of the patients. 
DISCUSSION: The most important finding of this study is that the incidence of postoperative AKP and PCr was comparable between the two groups. This is in contrast to the results described in recent literature (Table 5), wherein AKP and painful/symptomatic crepitus were observed to be significantly less frequent with the Attune prosthesis in comparison to the PFC Sigma prosthesis.678) At 1 year postoperatively, in our study, AKP was present in 11% of knees in the Attune group and in 9% of knees in the PFC Sigma group. At 2 years postoperatively, 1 knee (1.4%) in each group had AKP. In contrast, other matched-pair studies have reported AKP at 2-year follow-up to be 2%–13% in the Attune group and 9%–26% in the PFC Sigma group.78) We observed symptomatic crepitus in 4% of knees in both the Attune and PFC groups at 1 year postoperatively. Most of these resolved with nonoperative management. Symptomatic crepitus persisted in only 1 knee (1.4%) in each group at 2 years of follow-up. 
This is consistent with the observations of the other two matched-pair studies, which also reported a 1% incidence of symptomatic crepitus at 2 years in the Attune group.78) However, the contrast is stark at 2 years in the PFC Sigma group, where only 1 of our patients (1.4%) had PCr as compared to the 5% and 4% incidence reported by Indelli et al.8) and Ranawat et al.,7) respectively. The operating surgeon (RNM) previously reported the incidence of PCr to be 1.1% and AKP to be 3.4% with the PFC Sigma prosthesis in patients operated on by him.12) However, this was amongst patients who underwent lateral retinacular release for patellar maltracking. The deviation of our results from those of the existing literature can be explained by three factors. Firstly, ours is a single-surgeon study; hence, a standard surgical technique was used in all patients. The studies by Ranawat et al.7) and Indelli et al.8) pooled their data from operations done by multiple surgeons and hence may have inevitably introduced variations in surgical techniques and postoperative evaluation. Secondly, all operations in our study were done with the aid of computer navigation, whereas the other two studies used the conventional surgical technique. Computer navigation has been shown to improve coronal alignment and rotational orientation of components.141516) Component positioning plays a key role in influencing patellofemoral kinematics.1718) Hence, we believe that improving the accuracy of component positioning through the use of computer navigation can reduce the risk of postoperative patellofemoral complications. Lastly, the preoperative varus alignment amongst the 69 pairs with varus deformity showed a statistical difference between the groups. However, the mean difference between the two groups was less than 2° and therefore clinically irrelevant and unlikely to influence the patellofemoral outcome. Surgical scar excision for patellar clunk has been reported to be significantly lower in Attune TKAs (0.14%) as compared to PFC Sigma TKAs (1.6%).6) However, the rate of surgery for patellar clunk was discordant between the two matched-pair studies by Indelli et al.8) and Ranawat et al.7) Peripatellar scar excision was performed in 2% of PFC Sigma TKAs and in none of the Attune TKAs by Indelli et al.,8) whereas it was performed in 1% of Attune TKAs and in none of the PFC Sigma TKAs by Ranawat et al.7) We did not encounter any case of patellar clunk syndrome warranting surgery. The incidence of lateral retinacular release is an indirect indicator of patellofemoral kinematics and maltracking. The need for lateral release is significantly higher with PFC Sigma than with Attune.5) We also observed a higher incidence of lateral release in the PFC Sigma group, but the difference was not statistically significant. Questionnaire-based outcome measures (WOMAC and NKSS) were also comparable between the two types of prosthesis at each stage of follow-up. The Attune femoral component is designed with a gradually reducing femoral radius in higher degrees of knee flexion.2) This allows for a gradual reduction in tibiofemoral conformity, which in turn allows greater freedom of rotation. This feature has been designed to increase knee flexion without sacrificing stability. However, clinical results comparing ROM with the older PFC designs are divided. Significantly better ROM in Attune groups was reported by Indelli et al.8) and Martin et al.6) However, Ranawat et al.7) reported no significant differences in ROM between the two designs. 
We observed significantly better ROM in the Attune group at 3 months postoperatively, but the difference evened out at 1 year postoperatively. The primary limitations of the study are selection bias and observer bias, due to lack of randomization and blinding, respectively. Such biases are inherent to matched-pair studies and could not be excluded. This study also had a relatively small study population (72 pairs) as compared to other recent matched-pair studies (100 pairs).78) However, we minimized loss to follow-up by actively ensuring that patients reported for their postoperative visits. The strengths of this study are that it is a single-surgeon study with minimal loss to follow-up. Another strength is that both the Attune and PFC Sigma groups were comparable in terms of demographics and intraoperative variables. We observed that both Attune and PFC Sigma had a low and comparable incidence of AKP and PCr up to 2 years of follow-up. The Attune group achieved significantly better ROM at 3 months postoperatively. At 2 years of follow-up, both prostheses had excellent and comparable clinical and functional results.
Background: Attune (DePuy Synthes) prosthesis was designed to overcome patellofemoral complications associated with PFC Sigma (DePuy Synthes) prosthesis. The aim of our study was to compare the incidence of anterior knee pain (AKP), patellofemoral crepitus (PCr), and functional outcome between them. Methods: This prospective matched-pair study was conducted between January 2014 and June 2015, during which 75 consecutive Attune total knee arthroplasties (TKAs) were matched with 75 PFC Sigma TKAs based on age, sex, body mass index, pathology, and deformity. A single surgeon performed all the operations with the aid of computer navigation, using a posterior-stabilized prosthesis with patellar resurfacing. Outcome was assessed by new Knee Society Score (NKSS) and Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) score. AKP and PCr were assessed by a patient-administered questionnaire up to 2 years of follow-up. Three pairs were lost to follow-up, and finally 72 pairs were analyzed. Results: One patient in each group reported AKP and 1 patient from each group had PCr at 2 years postoperatively. None of these patients required additional surgery. The incidence of lateral retinacular release was higher with PFC Sigma (5/72) than Attune (2/72); however, this was not statistically significant (p = 0.4). The Attune group had a significantly greater range of motion (ROM) at 3 months postoperatively (p = 0.049). At final follow-up, ROM was comparable between the two prosthesis designs. NKSS and WOMAC scores were also comparable between the groups. Conclusions: We observed that both Attune and PFC Sigma had a low and comparable incidence of AKP and PCr up to 2 years of follow-up. The Attune group achieved a significantly greater ROM at 3 months postoperatively. At 2 years of follow-up, both prostheses had excellent and comparable clinical and functional results.
null
null
4,172
364
[ 143, 144, 167, 69, 42 ]
8
[ "group", "attune", "pfc", "sigma", "pfc sigma", "patients", "groups", "years", "akp", "rom" ]
[ "sigma prosthesis patients", "total knee arthroplasty", "knee arthroplasty tka", "patients 417 knees", "attune prosthesis comparison" ]
null
null
null
null
[CONTENT] Knee | Prosthesis design | Patellofemoral joints | Arthroplasty | Anterior knee pain syndrome [SUMMARY]
[CONTENT] Knee | Prosthesis design | Patellofemoral joints | Arthroplasty | Anterior knee pain syndrome [SUMMARY]
null
[CONTENT] Knee | Prosthesis design | Patellofemoral joints | Arthroplasty | Anterior knee pain syndrome [SUMMARY]
null
null
[CONTENT] Arthroplasty, Replacement, Knee | Humans | Knee Joint | Knee Prosthesis | Matched-Pair Analysis | Osteoarthritis, Knee | Patella | Prospective Studies | Prosthesis Design | Range of Motion, Articular | Treatment Outcome [SUMMARY]
[CONTENT] Arthroplasty, Replacement, Knee | Humans | Knee Joint | Knee Prosthesis | Matched-Pair Analysis | Osteoarthritis, Knee | Patella | Prospective Studies | Prosthesis Design | Range of Motion, Articular | Treatment Outcome [SUMMARY]
null
[CONTENT] Arthroplasty, Replacement, Knee | Humans | Knee Joint | Knee Prosthesis | Matched-Pair Analysis | Osteoarthritis, Knee | Patella | Prospective Studies | Prosthesis Design | Range of Motion, Articular | Treatment Outcome [SUMMARY]
null
null
[CONTENT] sigma prosthesis patients | total knee arthroplasty | knee arthroplasty tka | patients 417 knees | attune prosthesis comparison [SUMMARY]
[CONTENT] sigma prosthesis patients | total knee arthroplasty | knee arthroplasty tka | patients 417 knees | attune prosthesis comparison [SUMMARY]
null
[CONTENT] sigma prosthesis patients | total knee arthroplasty | knee arthroplasty tka | patients 417 knees | attune prosthesis comparison [SUMMARY]
null
null
[CONTENT] group | attune | pfc | sigma | pfc sigma | patients | groups | years | akp | rom [SUMMARY]
[CONTENT] group | attune | pfc | sigma | pfc sigma | patients | groups | years | akp | rom [SUMMARY]
null
[CONTENT] group | attune | pfc | sigma | pfc sigma | patients | groups | years | akp | rom [SUMMARY]
null
null
[CONTENT] patellar | ml | patients | sigma | pfc | pfc sigma | attune | patella | data | study [SUMMARY]
[CONTENT] group | rom | pcr | groups | improved | years | akp | preoperatively | womac score | comparable [SUMMARY]
null
[CONTENT] group | attune | pfc | patients | pfc sigma | sigma | rom | groups | pcr | akp [SUMMARY]
null
null
[CONTENT] between January 2014 and June 2015 | 75 | Attune | 75 ||| ||| Knee Society Score | Western Ontario | McMaster Universities Osteoarthritis Index | WOMAC ||| AKP | 2 years ||| Three | 72 [SUMMARY]
[CONTENT] One | AKP | 1 | 2 years ||| ||| PFC Sigma | 5/72 | Attune | 2/72 | 0.4 ||| Attune | 3 months | 0.049 ||| ROM | two ||| NKSS | WOMAC [SUMMARY]
null
[CONTENT] Attune (DePuy Synthes | PFC Sigma (DePuy Synthes ||| AKP ||| between January 2014 and June 2015 | 75 | Attune | 75 ||| ||| Knee Society Score | Western Ontario | McMaster Universities Osteoarthritis Index | WOMAC ||| AKP | 2 years ||| Three | 72 ||| ||| One | AKP | 1 | 2 years ||| ||| PFC Sigma | 5/72 | Attune | 2/72 | 0.4 ||| Attune | 3 months | 0.049 ||| ROM | two ||| NKSS | WOMAC ||| Attune | PFC Sigma | AKP | up to 2 years ||| Attune | 3 months ||| 2 years [SUMMARY]
null
Deficiency of the myogenic factor MyoD causes a perinatally lethal fetal akinesia.
26733463
Lethal fetal akinesia deformation sequence (FADS) describes a clinically and genetically heterogeneous phenotype that includes fetal akinesia, intrauterine growth retardation, arthrogryposis and developmental anomalies. Affected babies die as a result of pulmonary hypoplasia. We aimed to identify the underlying genetic cause of this disorder in a family in which there were three affected individuals from two sibships.
BACKGROUND
Autosomal-recessive inheritance was suggested by a family history of consanguinity and by recurrence of the phenotype between the two sibships. We performed exome sequencing of the affected individuals and their unaffected mother, followed by autozygosity mapping and variant filtering to identify the causative gene.
METHODS
Five autozygous regions were identified, spanning 31.7 Mb of genomic sequence and including 211 genes. Using standard variant filtering criteria, we excluded all variants as being the likely pathogenic cause, apart from a single novel nonsense mutation, c.188C>A p.(Ser63*) (NM_002478.4), in MYOD1. This gene encodes an extensively studied transcription factor involved in muscle development, which has nonetheless not hitherto been associated with a hereditary human disease phenotype.
RESULTS
We provide the first description of a human phenotype that appears to result from MYOD1 mutation. The presentation with FADS is consistent with a large body of data demonstrating that in the mouse, MyoD is a major controller of precursor cell commitment to the myogenic differentiation programme.
CONCLUSIONS
[ "Aborted Fetus", "Animals", "Arthrogryposis", "Exome", "Female", "Fetal Growth Retardation", "High-Throughput Nucleotide Sequencing", "Humans", "Lung", "Mice", "Mutation", "MyoD Protein", "Pedigree", "Phenotype", "Pregnancy" ]
4819622
Introduction
Lethal fetal akinesia deformation sequence (FADS; OMIM 208150) comprises a spectrum of clinically and genetically heterogeneous disorders that is most commonly detected prenatally, through the routine application of middle-trimester ultrasound scanning. Its clinical features, which are secondary to the lack of fetal movement, include intrauterine growth retardation, arthrogryposis and developmental anomalies such as lung hypoplasia, characteristic facies, cleft palate and cryptorchidism.1 Affected babies either die in utero or shortly after birth. The akinesia phenotype can be due to a genetic defect intrinsic to the fetus, but also to non-genetic causes such as maternal myasthenia gravis. To date, >20 genes have been associated with fetal akinesia, representing all modes of inheritance. Many of these genes encode components of neuromuscular pathways, as described in the comprehensive review by Ravenscroft et al.2 The prenatal presentation of FADS complicates diagnosis. Although anatomical examination, especially using advanced ultrasonographic techniques,3 4 can provide important diagnostic information regarding associated malformations such as facial clefting, prognosis cannot necessarily be inferred with certainty. Postmortem examination may result in a more precise pathological classification, but given the aetiological heterogeneity of FADS, a genetic diagnosis (preferably prenatally) remains highly desirable. High-throughput DNA sequencing has broadened the scope of traditional hypothesis-free genetic investigations from the detection of structural variants using arrayCGH (or low-coverage whole-genome sequencing, ‘CNV-seq’) to encompass point mutation and small insertion/deletion variant detection using exome sequencing. The methods employed in large multicentre studies of human paediatric phenotypes (such as ‘Deciphering Developmental Disorders’5) are now also being applied to large-scale prenatal studies such as the Prenatal Assessment of Genomes and Exomes project.6 To perform new gene discovery, these projects rely on ascertaining a large number of trio samples (typically singleton affected cases and their parents) with similar phenotypes. The expectation is that more than one family within the cohort will have a disease-causing mutation in the same gene, allowing a novel gene function to be attributed. Despite the large size of these data sets, it is frequently the case that similarly affected patients can only be identified outside of the study cohort. The integration of more ‘traditional’ types of genetic mapping data into the interpretation of genome-wide sequencing data sets permits an alternative approach, in which candidate genes can be filtered using information from a single family. This permits small-scale gene discovery programmes, which may arise directly from data sets accrued in diagnostic laboratories, without the need for large and expensive recruitment programmes.7 Filtering on the basis of identity by descent, either explicit from family history, or implicit based on the rarity of a disease or isolation of an ethnic group, is one very powerful genetic approach. Here, we report a consanguineous Caucasian family in whom three affected individuals, from two sibships that share the same mother, presented with features consistent with FADS. Exome analysis with homozygosity-based filtering resulted in the identification of MYOD1 as a new gene underlying FADS.
Methods
Consultant clinical geneticists clinically examined and reviewed the reports of the postmortem examinations that had been undertaken on the deceased babies. DNA was isolated using phenol/chloroform extraction or standard salting out (for blood) protocols, following written informed consent. Illumina-compatible exome sequencing libraries were generated for samples II:2, III:1 and III:4 using SureSelect v.5 reagents, following manufacturer's protocols throughout (Agilent Technologies, Wokingham, UK). A low DNA input protocol was followed to create the sequencing library for individual III:2 using alternative library preparation reagents (New England Biolabs, Ipswich, UK). The four exome-enriched libraries were pooled in equimolar concentration with either 3 (III:2) or 4 (II:2, III:1 and III:4) other sequencing libraries prepared as part of the laboratory's standard diagnostic testing workflow. Sequencing of each pool was performed using both lanes of a HiSeq2500 rapid mode flow cell, generating paired-end 100 bp sequence reads (Illumina, San Diego, California, USA). Despite the poor-quality material obtained for DNA extraction from the affected individuals (and low DNA yield for sample III:2), the per-patient sequencing summary metrics were comparable to those obtained from the exome library of the unaffected mother (whose DNA source was peripheral blood lymphocytes, our preferred specimen). Notably, a minimal number of reads were adaptor trimmed (∼3.5%) or identified as duplicate read pairs (∼12.4%) (see online supplementary table S1). Data processing was performed using an in-house informatics pipeline. Briefly, raw sequence data were converted from bcl to FASTQ.gz format and demultiplexed using CASAVA V.1.8.2. Adaptor sequences were trimmed from per-patient sequence reads using Cutadapt V.1.7.1 (https://code.google.com/p/cutadapt/)8 before being aligned to the human reference genome (hg19) using bwa V.0.7.12 (http://bio-bwa.sourceforge.net).9 SAM-to-BAM conversion, duplicate read removal and read sorting based on mapped read coordinates were performed using Picard V.1.129 (http://picard.sourceforge.net). The Genome Analysis Toolkit (GATK) V.2.3-4Lite was used to perform indel realignment, base quality score recalibration and variant discovery, which resulted in variants being saved in variant call format (VCF)10 before being annotated with positional, functional and frequency data using Alamut Batch standalone V.1.4.0 (Interactive Biosoftware, Rouen, France). Programs ancillary to the automated pipeline were used to interrogate these data, namely Agile MultiIdeogram (http://dna.leeds.ac.uk/agile/AgileMultiIdeogram), which was used to determine regions of autozygosity from the per-patient VCF files, and Agile Exome Filter (http://dna.leeds.ac.uk/agile/AgileExomeFilter), which was used to filter Alamut Batch annotated variant reports using positional and functional parameters.7 Interrogation of the Exome Aggregation Consortium data set was performed using the ExAC Browser (http://exac.broadinstitute.org). Manual inspection of aligned sequence reads was performed using the Integrative Genome Viewer V.2.3.57.11 The number of sequence reads mapping to each target base was calculated using the GATK DepthOfCoverage walker. To confirm and genotype the putative pathogenic MYOD1 variant, a 383 bp amplicon was designed and optimised before being Sanger sequenced using an ABI3730 following manufacturer's protocols (Applied Biosystems, Paisley, UK). 
Each reaction consisted of 0.5 μL of genomic DNA (∼500 ng/μL), 11 μL of Megamix (Microzone, Haywards Heath, UK), 1 μL of 10 pmol/μL forward primer (dTGTAAAACGACGGCCAGTACGACTTCTATGACGACCCG) and 1 μL of 10 pmol/μL reverse primer (dCAGGAAACAGCTATGACCCTGGTTTGGATTGCTCGACG). Both primers contained universal sequencing tags (underlined) to allow Sanger sequencing according to our standard laboratory workflows. Thermocycling conditions consisted of 5 min at 94°C, followed by 30 cycles of 94°C for 30 s, 55°C for 60 s and 72°C for 45 s, before a final extension step at 72°C for 5 min. Sequence chromatograms were viewed using Mutation Surveyor V.3.2 (SoftGenetics LLC, State College, USA).
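As a concrete illustration of the read-processing stages described above (Cutadapt trimming, bwa alignment to hg19, Picard sorting and duplicate handling), here is a minimal Python subprocess sketch. It is not the authors' in-house pipeline: all file names, the adapter sequence, and the Picard jar location are placeholder assumptions, and the downstream GATK realignment, recalibration, and variant-calling steps are omitted.

```python
# Minimal reconstruction of the per-sample read processing named above:
# Cutadapt trimming -> bwa mem alignment to hg19 -> Picard sorting and
# duplicate marking. File names, adapter sequence and jar path are
# placeholders; GATK realignment/recalibration/calling is omitted.
import subprocess

def run(cmd, **kwargs):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True, **kwargs)

sample = "III_1"
for read in ("R1", "R2"):
    run(["cutadapt", "-a", "AGATCGGAAGAGC",            # generic Illumina adapter (assumed)
         "-o", f"{sample}_{read}.trim.fq.gz",
         f"{sample}_{read}.fq.gz"])

# bwa mem writes SAM to stdout; assumes `bwa index hg19.fa` was run first
with open(f"{sample}.sam", "w") as sam_out:
    run(["bwa", "mem", "hg19.fa",
         f"{sample}_R1.trim.fq.gz", f"{sample}_R2.trim.fq.gz"],
        stdout=sam_out)

run(["java", "-jar", "picard.jar", "SortSam",
     f"INPUT={sample}.sam", f"OUTPUT={sample}.sorted.bam",
     "SORT_ORDER=coordinate"])
run(["java", "-jar", "picard.jar", "MarkDuplicates",
     f"INPUT={sample}.sorted.bam", f"OUTPUT={sample}.dedup.bam",
     f"METRICS_FILE={sample}.dup_metrics.txt"])
```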
Results
The pedigree structure of the family is shown in figure 1. Two affected male babies were born to a first-cousin consanguineous Caucasian couple. A pedigree showing the relationship between affected (shaded symbols) and unaffected (outlined symbols) individuals. The first child (III:1) was delivered following spontaneous labour at 35+5 weeks. During the pregnancy, a cystic hygroma had been noted at 12 weeks, which subsequently resolved, and polyhydramnios developed during the third trimester. The baby's birth weight was 1.87 kg (2nd to 9th centile) and his occipitofrontal circumference (OFC) was 33.0 cm (50th centile). Apgar scores were 1 at 1 min and 1 at 5 min. He required extensive resuscitation and was ventilator dependent. He had numerous episodes of oxygen desaturation and on day 2 failed to respond to an acute resuscitation for a further hypoxic episode and died. A postmortem examination noted dysmorphic features that included downslanting palpebral fissures, a small chin and square forehead. The toes and fingers were overlapping and the fists were held in a clenched position. The testes could not be palpated in the scrotum. There was a midline posterior cleft palate and lung hypoplasia. (The right and left lung weights were 15.1 and 11.5 g compared with a normal combined weight of 34.5 g.) There was a right-sided diaphragmatic eventration, although no hernia was identified. There was a mild degree of bilateral renal pelvis distention, more severe on the left (1.3 cm in maximum dimension). Microscopically, the fundamental structure of the lungs was normal, but there was a degree of pulmonary hypoplasia. The costochondral junction appeared irregular, and there was centrilobular congestion of the liver. The second child (III:2) was born at 35+1 weeks following spontaneous labour. During the pregnancy, serial growth scans had detected a right duplex kidney and increased liquor at 28 weeks. A scan at 34+2 weeks revealed polyhydramnios and gross right-sided hydronephrosis. The baby's birth weight was 1.80 kg (2nd centile) and his OFC was 33.5 cm (75th centile). Apgar scores were 1 at 1 min and 1 at 5 min. The baby died shortly thereafter, following unsuccessful resuscitation attempts. A postmortem examination noted dysmorphic features that included a tall forehead with bitemporal narrowing, a long philtrum and a small chin (figure 2A). He had long tapered fingers with contractures and overlapping toes (figure 2B, C). A single testis was palpable. There was a midline posterior cleft palate, and the right and left lungs were both extremely small (7 and 5.75 g). There was no obvious diaphragmatic hernia, but the domes were very high with extremely small pleural cavities. The right renal pelvis was extremely dilated with an obvious pelviureteric junction obstruction; the left kidney was normal. Affected infant III:2 showing (A) facial dysmorphism including a tall forehead with bitemporal narrowing, a long philtrum and a small chin; (B) the right hand showing long tapered fingers with contractures and (C) the left foot with overlapping toes. Affected infant III:4 showing (D) facial dysmorphism including a square forehead and small chin; apparent deficiency of pectoralis and proximal limb musculature (E and F) contractures of the proximal interphalangeal joints in the right and left hand, respectively. The mother next delivered a third baby (III:3) whose father (II:3), a first-cousin relative, was different to that of her previously described affected offspring. The child was healthy. 
A further daughter, III:4, was delivered at 37 weeks gestation, following a spontaneous labour. A cystic hygroma had been identified at 12 weeks. Chorionic villus sampling and prenatal arrayCGH analysis revealed no pathogenic CNVs. The baby's birth weight was 1.96 kg (0.4th to 2nd centile) and her OFC was 32.5 cm (25th–50th centile). Apgar scores were 1 at 1 min and 1 at 5 min. Despite intensive resuscitation, the baby died 30 min after delivery. Postmortem examination revealed a square forehead and small chin (figure 2D). There were contractures of the proximal interphalangeal joints from the 2nd to 5th fingers bilaterally (figure 2E, F). There was also bilateral knee flexion. There was a midline posterior cleft palate and extreme pulmonary hypoplasia. (Right and left lung weights were 8.2 and 6.5 g) There was no diaphragmatic hernia, but the domes were extremely high with small pleural cavities. The right and left kidney weights were 3.1 and 8.75 g (normal combined weight=17.4 g), but no obstruction was identified. Postnatal arrayCGH analysis of samples III:1, III:2 and III:4 did not reveal any pathogenic variants that could account for the presenting phenotype. The sibling recurrence and reported consanguinity between parents suggested that the disorder was likely to be inherited in an autosomal-recessive manner. DNA samples from all three affected individuals and their unaffected mother were subjected to exome sequencing. Having verified the quality control metrics of our data set (see ‘Methods’), we undertook autozygosity mapping using exome-wide SNP genotypes. Five genomic regions were identified to be identical by descent, which encompassed 31.7 Mb of genomic DNA sequence and 211 genes (figure 3 and online supplementary table S2). We analysed only homozygous variants located in these autozygous regions, thus reducing the mean total variant count from 35 820 to 269 variants per affected individual (table 1). Excluding common variants (those with a minor allele frequency >1% in either dbSNP or the ESP5400 data set) further reduced the mean number of variants to 12. To aid manual interpretation of the remaining variants, we retained only those variants located in coding regions or invariant splice sites. Of these, four variants were identified in a homozygous state in all affected siblings. Two of these were homozygous in the unaffected mother and on this basis were deemed to be non-pathogenic. A further two variants were heterozygous in the mother and warranted further investigation (see online supplementary table S3). The first of these was the missense variant OTOG c.3265A>G p.(Ile1089Val) (NM_001277269.1). Standard in silico tools suggested that this variant was unlikely to be pathogenic. (PolyPhen-2 result: benign; SIFT result: tolerated; AlignGVGD result: class C0.) Furthermore, OTOG has been previously described to cause autosomal-recessive deafness (OMIM: 604487) and did not offer an explanation for the clinical features of our patients. The second variant, MYOD1 c.188C>A p.(Ser63*) (NM_002478.4), was predicted to be pathogenic and would likely lead to nonsense-mediated decay of the mRNA transcript. A thorough search of online databases did not identify this variant in any public databases. Further analysis of 61 486 exomes from the ExAC consortium revealed that homozygous loss-of-function (LOF) mutations had not been previously observed in MYOD1. 
In addition, MYOD1 has not been previously classified as an LOF gene in data sets generated from previous LOF screens.12–14 Filtering parameters per patient Autozygous intervals shared between all three affected individuals (dark blue) are shown with respect to per-sample autozygous intervals identified in affected individuals (light blue) and the unaffected mother (pink). The concentric circles, beginning from the interior, represent samples II:2, III:2, III:1 and III:4. The zygosity status and segregation of the c.188C>A MYOD1 variant was confirmed by Sanger sequencing of six individuals. This included the second father II:3 and his unaffected daughter, III:3, both of whom were heterozygous carriers of the mutation (see online supplementary figure S1).
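The triage described above (shared autozygous intervals, homozygosity in all three affected siblings, minor allele frequency below 1%, coding or splice-site consequence, heterozygosity in the obligate-carrier mother) can be expressed compactly in code. The pandas sketch below is illustrative only: the input file, its column names, and the single interval shown (chosen to contain MYOD1 at roughly 17.7 Mb on chromosome 11 in hg19) are assumptions, and the published analysis used Agile Exome Filter rather than this script.

```python
# Sketch of the variant-filtering cascade described above. The TSV,
# its column names and the single autozygous interval are invented for
# illustration; MYOD1 sits at ~17.7 Mb on chromosome 11 (hg19), so the
# interval below would contain it.
import pandas as pd

autozygous = [("11", 17_000_000, 18_500_000)]          # (chrom, start, end), assumed

def in_autozygous(chrom, pos):
    return any(c == chrom and s <= pos <= e for c, s, e in autozygous)

v = pd.read_csv("family_variants.tsv", sep="\t", dtype={"chrom": str})
v = v[[in_autozygous(c, p) for c, p in zip(v["chrom"], v["pos"])]]
v = v[(v["gt_III_1"] == "hom") & (v["gt_III_2"] == "hom") & (v["gt_III_4"] == "hom")]
v = v[v["maf"].fillna(0.0) < 0.01]                     # rare in dbSNP / ESP5400
v = v[v["effect"].isin(["missense", "nonsense", "frameshift", "splice_site"])]
v = v[v["gt_mother"] == "het"]                         # obligate-carrier mother
print(v[["gene", "hgvs_c", "effect"]])                 # in the study, only OTOG and MYOD1 survived
```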
null
null
[]
[]
[]
[ "Introduction", "Methods", "Results", "Discussion", "Supplementary Material" ]
[ "Lethal fetal akinesia deformation sequence (FADS; OMIM 208150) comprises a spectrum of clinically and genetically heterogeneous disorders that is most commonly detected prenatally, through the routine application of middle-trimester ultrasound scanning. Its clinical features, which are secondary to the lack of fetal movement, include intrauterine growth retardation, arthrogryposis and developmental anomalies such as lung hypoplasia, characteristic facies, cleft palate and cryptorchidism.1 Affected babies either die in utero or shortly after birth. The akinesia phenotype can be due to a genetic defect intrinsic to the fetus, but also to non-genetic causes such as maternal myasthenia gravis.\nTo date, >20 genes have been associated with fetal akinesia, representing all modes of inheritance. Many of these genes encode components of neuromuscular pathways, as described in the comprehensive review by Ravenscroft et al.2\nThe prenatal presentation of FADS complicates diagnosis. Although anatomical examination, especially using advanced ultrasonographic techniques,3\n4 can provide important diagnostic information regarding associated malformations such as facial clefting, prognosis cannot necessarily be inferred with certainty. Postmortem examination may result in a more precise pathological classification, but given the aetiological heterogeneity of FADS, a genetic diagnosis (preferably prenatally) remains highly desirable.\nHigh-throughput DNA sequencing has broadened the scope of traditional hypothesis-free genetic investigations from the detection of structural variants using arrayCGH (or low-coverage whole-genome sequencing, ‘CNV-seq’) to encompass point mutation and small insertion/deletion variant detection using exome sequencing. The methods employed in large multicentre studies of human paediatric phenotypes (such as ‘Deciphering Developmental Disorders’5) are now also being applied to large-scale prenatal studies such as the Prenatal Assessment of Genomes and Exomes project.6 To perform new gene discovery, these projects rely on ascertaining a large number of trio samples (typically singleton affected cases and their parents) with similar phenotypes. The expectation is that more than one family within the cohort will have a disease-causing mutation in the same gene, allowing a novel gene function to be attributed. Despite the large size of these data sets, it is frequently the case that similarly affected patients can only be identified outside of the study cohort.\nThe integration of more ‘traditional’ types of genetic mapping data into the interpretation of genome-wide sequencing data sets permits an alternative approach, in which candidate genes can be filtered using information from a single family. This permits small-scale gene discovery programmes, which may arise directly from data sets accrued in diagnostic laboratories, without the need for large and expensive recruitment programmes.7 Filtering on the basis of identity by descent, either explicit from family history, or implicit based on the rarity of a disease or isolation of an ethnic group, is one very powerful genetic approach.\nHere, we report a consanguineous Caucasian family in whom three affected individuals, from two sibships that share the same mother, presented with features consistent with FADS. 
Exome analysis with homozygosity-based filtering resulted in the identification of MYOD1 as a new gene underlying FADS.", "Consultant clinical geneticists clinically examined and reviewed the reports of the postmortem examinations that had been undertaken on the deceased babies. DNA was isolated using phenol/chloroform extraction or standard salting out (for blood) protocols, following written informed consent.\nIllumina-compatible exome sequencing libraries were generated for samples II:2, III:1 and III:4 using SureSelect v.5 reagents, following manufacturer's protocols throughout (Agilent Technologies, Wokingham, UK). A low DNA input protocol was followed to create the sequencing library for individual III:2 using alternative library preparation reagents (New England Biolabs, Ipswich, UK). The four exome-enriched libraries were pooled in equimolar concentration with either 3 (III:2) or 4 (II:2, III:1 and III:4) other sequencing libraries prepared as part of the laboratory's standard diagnostic testing workflow. Sequencing of each pool was performed using both lanes of a HiSeq2500 rapid mode flow cell, generating paired-end 100 bp sequence reads (Illumina, San Diego, California, USA).\nDespite the poor-quality material obtained for DNA extraction from the affected individuals (and low DNA yield for sample III:2), the per-patient sequencing summary metrics were comparable to those obtained from the exome library of the unaffected mother (whose DNA source was peripheral blood lymphocytes, our preferred specimen). Notably, a minimal number of reads were adaptor trimmed (∼3.5%) or identified as duplicate read pairs (∼12.4%) (see online supplementary table S1).\nData processing was performed using an in-house informatics pipeline. Briefly, raw sequence data were converted from bcl to FASTQ.gz format and demultiplexed using CASAVA V.1.8.2. Adaptor sequences were trimmed from per-patient sequence reads using Cutadapt V.1.7.1 (https://code.google.com/p/cutadapt/)8 before being aligned to the human reference genome (hg19) using bwa V.0.7.12 (http://bio-bwa.sourceforge.net).9 SAM-to-BAM conversion, duplicate read removal and read sorting based on mapped read coordinates were performed using Picard V.1.129 (http://picard.sourceforge.net). The Genome Analysis Toolkit (GATK) V.2.3-4Lite was used to perform indel realignment, base quality score recalibration and variant discovery, which resulted in variants being saved in variant call format (VCF)10 before being annotated with positional, functional and frequency data using Alamut Batch standalone V.1.4.0 (Interactive Biosoftware, Rouen, France). Programs ancillary to the automated pipeline were used to interrogate these data, namely Agile MultiIdeogram (http://dna.leeds.ac.uk/agile/AgileMultiIdeogram), which was used to determine regions of autozygosity from the per-patient VCF files, and Agile Exome Filter (http://dna.leeds.ac.uk/agile/AgileExomeFilter), which was used to filter Alamut Batch annotated variant reports using positional and functional parameters.7 Interrogation of the Exome Aggregation Consortium data set was performed using the ExAC Browser (http://exac.broadinstitute.org). 
Manual inspection of aligned sequence reads was performed using the Integrative Genome Viewer V.2.3.57.11 The number of sequence reads mapping to each target base was calculated using the GATK DepthOfCoverage walker.\nTo confirm and genotype the putative pathogenic MYOD1 variant, a 383 bp amplicon was designed and optimised before being Sanger sequenced using an ABI3730 following manufacturer's protocols (Applied Biosystems, Paisley, UK). Each reaction consisted of 0.5 μL of genomic DNA (∼500 ng/μL), 11 μL of Megamix (Microzone, Haywards Heath, UK), 1 μL of 10 pmol/μL forward primer (dTGTAAAACGACGGCCAGTACGACTTCTATGACGACCCG) and 1 μL of 10 pmol/μL reverse primer (dCAGGAAACAGCTATGACCCTGGTTTGGATTGCTCGACG). Both primers contained universal sequencing tags (underlined) to allow Sanger sequencing according to our standard laboratory workflows. Thermocycling conditions consisted of 5 min at 94°C, followed by 30 cycles of 94°C for 30 s, 55°C for 60 s and 72°C for 45 s, before a final extension step at 72°C for 5 min. Sequence chromatograms were viewed using Mutation Surveyor V.3.2 (SoftGenetics LLC, State College, USA).", "The pedigree structure of the family is shown in figure 1. Two affected male babies were born to a first-cousin consanguineous Caucasian couple.\nA pedigree showing the relationship between affected (shaded symbols) and unaffected (outlined symbols) individuals.\nThe first child (III:1) was delivered following spontaneous labour at 35+5 weeks. During the pregnancy, a cystic hygroma had been noted at 12 weeks, which subsequently resolved, and polyhydramnios developed during the third trimester. The baby's birth weight was 1.87 kg (2nd to 9th centile) and his occipitofrontal circumference (OFC) was 33.0 cm (50th centile). Apgar scores were 1 at 1 min and 1 at 5 min. He required extensive resuscitation and was ventilator dependent. He had numerous episodes of oxygen desaturation and on day 2 failed to respond to an acute resuscitation for a further hypoxic episode and died. A postmortem examination noted dysmorphic features that included downslanting palpebral fissures, a small chin and square forehead. The toes and fingers were overlapping and the fists were held in a clenched position. The testes could not be palpated in the scrotum. There was a midline posterior cleft palate and lung hypoplasia. (The right and left lung weights were 15.1 and 11.5 g compared with a normal combined weight of 34.5 g.) There was a right-sided diaphragmatic eventration, although no hernia was identified. There was a mild degree of bilateral renal pelvis distention, more severe on the left (1.3 cm in maximum dimension). Microscopically, the fundamental structure of the lungs was normal, but there was a degree of pulmonary hypoplasia. The costochondral junction appeared irregular, and there was centrilobular congestion of the liver.\nThe second child (III:2) was born at 35+1 weeks following spontaneous labour. During the pregnancy, serial growth scans had detected a right duplex kidney and increased liquor at 28 weeks. A scan at 34+2 weeks revealed polyhydramnios and gross right-sided hydronephrosis. The baby's birth weight was 1.80 kg (2nd centile) and his OFC was 33.5 cm (75th centile). Apgar scores were 1 at 1 min and 1 at 5 min. The baby died shortly thereafter, following unsuccessful resuscitation attempts. A postmortem examination noted dysmorphic features that included a tall forehead with bitemporal narrowing, a long philtrum and a small chin (figure 2A). 
He had long tapered fingers with contractures and overlapping toes (figure 2B, C). A single testis was palpable. There was a midline posterior cleft palate, and the right and left lungs were both extremely small (7 and 5.75 g). There was no obvious diaphragmatic hernia, but the domes were very high with extremely small pleural cavities. The right renal pelvis was extremely dilated with an obvious pelviureteric junction obstruction; the left kidney was normal.\nAffected infant III:2 showing (A) facial dysmorphism including a tall forehead with bitemporal narrowing, a long philtrum and a small chin; (B) the right hand showing long tapered fingers with contractures and (C) the left foot with overlapping toes. Affected infant III:4 showing (D) facial dysmorphism including a square forehead and small chin; apparent deficiency of pectoralis and proximal limb musculature (E and F) contractures of the proximal interphalangeal joints in the right and left hand, respectively.\nThe mother next delivered a third baby (III:3) whose father (II:3), a first-cousin relative, was different to that of her previously described affected offspring. The child was healthy.\nA further daughter, III:4, was delivered at 37 weeks gestation, following a spontaneous labour. A cystic hygroma had been identified at 12 weeks. Chorionic villus sampling and prenatal arrayCGH analysis revealed no pathogenic CNVs. The baby's birth weight was 1.96 kg (0.4th to 2nd centile) and her OFC was 32.5 cm (25th–50th centile). Apgar scores were 1 at 1 min and 1 at 5 min. Despite intensive resuscitation, the baby died 30 min after delivery. Postmortem examination revealed a square forehead and small chin (figure 2D). There were contractures of the proximal interphalangeal joints from the 2nd to 5th fingers bilaterally (figure 2E, F). There was also bilateral knee flexion. There was a midline posterior cleft palate and extreme pulmonary hypoplasia. (Right and left lung weights were 8.2 and 6.5 g) There was no diaphragmatic hernia, but the domes were extremely high with small pleural cavities. The right and left kidney weights were 3.1 and 8.75 g (normal combined weight=17.4 g), but no obstruction was identified.\nPostnatal arrayCGH analysis of samples III:1, III:2 and III:4 did not reveal any pathogenic variants that could account for the presenting phenotype. The sibling recurrence and reported consanguinity between parents suggested that the disorder was likely to be inherited in an autosomal-recessive manner.\nDNA samples from all three affected individuals and their unaffected mother were subjected to exome sequencing.\nHaving verified the quality control metrics of our data set (see ‘Methods’), we undertook autozygosity mapping using exome-wide SNP genotypes. Five genomic regions were identified to be identical by descent, which encompassed 31.7 Mb of genomic DNA sequence and 211 genes (figure 3 and online supplementary table S2). We analysed only homozygous variants located in these autozygous regions, thus reducing the mean total variant count from 35 820 to 269 variants per affected individual (table 1). Excluding common variants (those with a minor allele frequency >1% in either dbSNP or the ESP5400 data set) further reduced the mean number of variants to 12. To aid manual interpretation of the remaining variants, we retained only those variants located in coding regions or invariant splice sites. Of these, four variants were identified in a homozygous state in all affected siblings. 
Two of these were homozygous in the unaffected mother and on this basis were deemed to be non-pathogenic. A further two variants were heterozygous in the mother and warranted further investigation (see online supplementary table S3). The first of these was the missense variant OTOG c.3265A>G p.(Ile1089Val) (NM_001277269.1). Standard in silico tools suggested that this variant was unlikely to be pathogenic. (PolyPhen-2 result: benign; SIFT result: tolerated; AlignGVGD result: class C0.) Furthermore, OTOG has been previously described to cause autosomal-recessive deafness (OMIM: 604487) and did not offer an explanation for the clinical features of our patients. The second variant, MYOD1 c.188C>A p.(Ser63*) (NM_002478.4), was predicted to be pathogenic and would likely lead to nonsense-mediated decay of the mRNA transcript. A thorough search of online databases did not identify this variant in any public databases. Further analysis of 61 486 exomes from the ExAC consortium revealed that homozygous loss-of-function (LOF) mutations had not been previously observed in MYOD1. In addition, MYOD1 has not been previously classified as an LOF gene in data sets generated from previous LOF screens.12–14\nFiltering parameters per patient\nAutozygous intervals shared between all three affected individuals (dark blue) are shown with respect to per-sample autozygous intervals identified in affected individuals (light blue) and the unaffected mother (pink). The concentric circles, beginning from the interior, represent samples II:2, III:2, III:1 and III:4.\nThe zygosity status and segregation of the c.188C>A MYOD1 variant was confirmed by Sanger sequencing of six individuals. This included the second father II:3 and his unaffected daughter, III:3, both of whom were heterozygous carriers of the mutation (see online supplementary figure S1).", "Here, we define by exome analysis of individuals affected with a perinatally lethal fetal akinesia syndrome, the first human phenotype that appears to be attributable to LOF of the myogenic factor MyoD. MyoD was the earliest described example of a developmental fate-controlling transcription factor15 and has been the subject of intensive study over a period of nearly 30 years. Remarkably though, LOF mutations in human MYOD1 have not previously been identified.\nThe MyoD family of transcriptional regulators include two primary muscle lineage-determining factors MyoD and Myf-5. MyoD expressed from a constitutive promoter is able to convert a variety of different cell types into muscle-lineage cells in vitro.16 MyoD is a basic-helix-loop-helix domain-containing protein that functions by binding E-box sequences present in DNA.17 Histone acetyltransferases are then recruited to allow chromatin remodelling towards an environment that is conducive to active transcription.\nMuch work has been undertaken to elucidate the spatial and temporal differences in development between the MyoD/Myf-5 muscle regulatory factors. 
MyoD mutant mouse embryos show a 2-day delay in the development of hypaxial (limb and abdominal wall) muscle, whereas Myf-5 mutants exhibit a similar delay in epaxial (paraspinal and intercostal) muscle development.18 However, lineage-tracing experiments combined with selective ablation of MyoD-expressing cells, using diphtheria toxin, indicate that the majority of myogenic progenitors pass through a MyoD+ cell fate and that myogenesis cannot be rescued by embryonic progenitor cells not expressing MyoD.19
It is, therefore, somewhat surprising that homozygous MyoD knockout mice are viable and fertile, produce seemingly normal amounts of the downstream myogenin protein and have no histologically detectable muscle abnormalities.20 Only when crossed with Myf-5 knockout mice to create homozygous double mutants is a lethal phenotype (with complete absence of skeletal muscle) obtained; this suggests there is an element of functional redundancy between these two muscle regulatory factors.21 In contrast to these findings in the mouse, the family we report here suggests that lack of MYOD1 is not compatible with postnatal life in humans.
Why human MYOD1 mutations have not previously been recognised is uncertain, although recessive null alleles, as mentioned above, are clearly very rare. Prenatally and perinatally lethal disorders can be difficult to study because of limited availability and poor quality of DNA from affected individuals. This has led some to focus on analysis of parental samples in order to deduce pathogenic compound heterozygous genotypes in affected fetuses.22 23
In the present case, we were able to perform direct analysis of affected individuals’ exomes, with interpretation aided by autozygosity mapping, as previously described.24 The autozygosity mapping and variant filtering we performed reduced the burden of variant interpretation to just two candidate disease-causing variants. In silico analysis of one of these variants (OTOG c.3265A>G NM_001277269.1) did not suggest a likely pathogenic effect. Furthermore, OTOG had previously been described to cause autosomal-recessive deafness.25 The second variant, MYOD1 c.188C>A (p.Ser63*; NM_002478.4), is predicted to be pathogenic. In addition to introducing a stop codon within the N-terminal basic domain of the predicted protein, the location of this makes it highly likely that the mRNA transcript will be subject to nonsense-mediated decay. No previous homozygous LOF mutations have been identified in MYOD1 in either of two large international projects. As this investigation is based on a single family, additional independent reports are required to confirm the proposed link between MYOD1 mutations and FADS.
Our autozygosity analysis was performed by direct analysis of sequence variants extracted from whole-exome sequence. This is an approach that we have described previously;24 although it offers poorer resolution than whole-genome SNP array mapping, it frequently (as here) suffices and can be performed using data sets generated for routine diagnostic purposes. Of relevance to the present case, this method also avoids problems related to poor performance of array-based reagents in the face of DNA samples of poor quality or limited quantity.", "" ]
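The autozygosity mapping recapped above, deriving identical-by-descent intervals directly from exome-wide genotypes, can be illustrated compactly in code. The sketch below is not the Agile MultiIdeogram implementation used in the study; it is a minimal, hypothetical Python version of the core idea, assuming genotype calls have already been reduced to (chromosome, position, heterozygous-or-not) records and that a long uninterrupted run of homozygous calls is taken as a candidate autozygous interval.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, Tuple

@dataclass
class Call:
    chrom: str
    pos: int
    is_het: bool  # True when the genotype call is heterozygous

def autozygous_intervals(calls: Iterable[Call],
                         min_span_bp: int = 2_000_000
                         ) -> Iterator[Tuple[str, int, int]]:
    """Yield (chrom, start, end) for uninterrupted runs of homozygous
    calls spanning at least min_span_bp -- a crude proxy for identity
    by descent. Any heterozygous call terminates the current run."""
    run_chrom = run_start = last_hom = None
    for c in sorted(calls, key=lambda c: (c.chrom, c.pos)):
        if c.is_het or c.chrom != run_chrom:
            # Flush the run in progress, if it is long enough.
            if run_start is not None and last_hom - run_start >= min_span_bp:
                yield (run_chrom, run_start, last_hom)
            run_start = last_hom = None
        run_chrom = c.chrom
        if not c.is_het:
            if run_start is None:
                run_start = c.pos
            last_hom = c.pos
    # Flush whatever run remains at the end of the input.
    if run_start is not None and last_hom - run_start >= min_span_bp:
        yield (run_chrom, run_start, last_hom)
```

Intervals produced per sample in this way can then be intersected across the affected siblings, and any interval at which the unaffected mother is also homozygous can be discarded, mirroring the shared dark-blue regions described for figure 3. The 2 Mb minimum span is an illustrative threshold, not a value taken from the paper.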
[ "intro", "methods", "results", "discussion", "supplementary-material" ]
[ "<i>MYOD1</i>", "perinatal lethal", "fetal akinesia", "lung hypoplasia", "exome sequencing" ]
Introduction: Lethal fetal akinesia deformation sequence (FADS; OMIM 208150) comprises a spectrum of clinically and genetically heterogeneous disorders that is most commonly detected prenatally, through the routine application of middle-trimester ultrasound scanning. Its clinical features, which are secondary to the lack of fetal movement, include intrauterine growth retardation, arthrogryposis and developmental anomalies such as lung hypoplasia, characteristic facies, cleft palate and cryptorchidism.1 Affected babies either die in utero or shortly after birth. The akinesia phenotype can be due to a genetic defect intrinsic to the fetus, but also to non-genetic causes such as maternal myasthenia gravis. To date, >20 genes have been associated with fetal akinesia, representing all modes of inheritance. Many of these genes encode components of neuromuscular pathways, as described in the comprehensive review by Ravenscroft et al.2 The prenatal presentation of FADS complicates diagnosis. Although anatomical examination, especially using advanced ultrasonographic techniques,3 4 can provide important diagnostic information regarding associated malformations such as facial clefting, prognosis cannot necessarily be inferred with certainty. Postmortem examination may result in a more precise pathological classification, but given the aetiological heterogeneity of FADS, a genetic diagnosis (preferably prenatally) remains highly desirable. High-throughput DNA sequencing has broadened the scope of traditional hypothesis-free genetic investigations from the detection of structural variants using arrayCGH (or low-coverage whole-genome sequencing, ‘CNV-seq’) to encompass point mutation and small insertion/deletion variant detection using exome sequencing. The methods employed in large multicentre studies of human paediatric phenotypes (such as ‘Deciphering Developmental Disorders’5) are now also being applied to large-scale prenatal studies such as the Prenatal Assessment of Genomes and Exomes project.6 To perform new gene discovery, these projects rely on ascertaining a large number of trio samples (typically singleton affected cases and their parents) with similar phenotypes. The expectation is that more than one family within the cohort will have a disease-causing mutation in the same gene, allowing a novel gene function to be attributed. Despite the large size of these data sets, it is frequently the case that similarly affected patients can only be identified outside of the study cohort. The integration of more ‘traditional’ types of genetic mapping data into the interpretation of genome-wide sequencing data sets permits an alternative approach, in which candidate genes can be filtered using information from a single family. This permits small-scale gene discovery programmes, which may arise directly from data sets accrued in diagnostic laboratories, without the need for large and expensive recruitment programmes.7 Filtering on the basis of identity by descent, either explicit from family history, or implicit based on the rarity of a disease or isolation of an ethnic group, is one very powerful genetic approach. Here, we report a consanguineous Caucasian family in whom three affected individuals, from two sibships that share the same mother, presented with features consistent with FADS. Exome analysis with homozygosity-based filtering resulted in the identification of MYOD1 as a new gene underlying FADS. 
Methods: Consultant clinical geneticists clinically examined the affected infants and reviewed the postmortem examination reports that had been undertaken on the deceased babies. DNA was isolated using phenol/chloroform extraction or standard salting out (for blood) protocols, following written informed consent. Illumina-compatible exome sequencing libraries were generated for samples II:2, III:1 and III:4 using SureSelect v.5 reagents, following manufacturer's protocols throughout (Agilent Technologies, Wokingham, UK). A low DNA input protocol was followed to create the sequencing library for individual III:2 using alternative library preparation reagents (New England Biolabs, Ipswich, UK). The four exome-enriched libraries were pooled in equimolar concentration with either 3 (III:2) or 4 (II:2, III:1 and III:4) other sequencing libraries prepared as part of the laboratory's standard diagnostic testing workflow. Sequencing of each pool was performed using both lanes of a HiSeq2500 rapid mode flow cell, generating paired-end 100 bp sequence reads (Illumina, San Diego, California, USA). Despite the poor quality material obtained for DNA extraction from the affected individuals (and low DNA yield for sample III:2), the per-patient sequencing summary metrics were comparable to those obtained from the exome library of the unaffected mother (whose DNA source was peripheral blood lymphocytes, our preferred specimen). Notably, a minimal number of reads were adaptor trimmed (∼3.5%) or identified as duplicate read pairs (∼12.4%) (see online supplementary table S1). Data processing was performed using an in-house informatics pipeline. Briefly, raw sequence data were converted from bcl to FASTQ.gz format and demultiplexed using CASAVA V.1.8.2. Adaptor sequences were trimmed from per-patient sequence reads using Cutadapt V.1.7.1 (https://code.google.com/p/cutadapt/)8 before being aligned to the human reference genome (hg19) using bwa V.0.7.12 (http://bio-bwa.sourceforge.net).9 SAM to BAM conversion, duplicate read removal and read sorting based on mapped read coordinates were performed using Picard V.1.129 (http://picard.sourceforge.net). The Genome Analysis Toolkit (GATK) V.2.3-4Lite was used to perform indel realignment, base quality score recalibration and variant discovery, which resulted in variants being saved in variant call format (VCF)10 before being annotated with positional, functional and frequency data using Alamut Batch standalone V.1.4.0 (Interactive Biosoftware, Rouen, France). Programs ancillary to the automated pipeline were used to interrogate these data, namely Agile MultiIdeogram (http://dna.leeds.ac.uk/agile/AgileMultiIdeogram), which was used to determine regions of autozygosity from the per-patient VCF files, and Agile Exome Filter (http://dna.leeds.ac.uk/agile/AgileExomeFilter), which was used to filter Alamut Batch annotated variant reports using positional and functional parameters.7 Interrogation of the Exome Aggregation Consortium data set was performed using the ExAC Browser (http://exac.broadinstitute.org). Manual inspection of aligned sequence reads was performed using the Integrative Genome Viewer V.2.3.57.11 The number of sequence reads mapping to each target base was calculated using the GATK DepthOfCoverage walker. To confirm and genotype the putative pathogenic MYOD1 variant, a 383 bp amplicon was designed and optimised before being Sanger sequenced using an ABI3730 following manufacturer's protocols (Applied Biosystems, Paisley, UK).
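The positional and functional filtering performed here reduces, in effect, to a short cascade: keep homozygous calls, restrict them to the shared autozygous intervals, drop anything common, and keep only coding or invariant-splice-site consequences. As a rough illustration only (the field names, genotype encoding and consequence labels below are assumptions, not Agile Exome Filter's actual schema), the cascade might look like this in Python:

```python
CODING_OR_SPLICE = {
    "missense", "nonsense", "frameshift",
    "inframe_indel", "synonymous", "splice_site",
}

def in_intervals(chrom, pos, intervals):
    """intervals: iterable of (chrom, start, end) tuples."""
    return any(c == chrom and s <= pos <= e for c, s, e in intervals)

def filter_candidates(variants, shared_autozygous, max_maf=0.01):
    """Apply the study's cascade to annotated variant records (dicts):
    homozygous, inside a shared autozygous interval, minor allele
    frequency <= 1%, and coding or invariant splice site."""
    for v in variants:
        if v["genotype"] != "hom_alt":
            continue                                  # homozygous calls only
        if not in_intervals(v["chrom"], v["pos"], shared_autozygous):
            continue                                  # restrict to IBD regions
        if v.get("maf", 0.0) > max_maf:
            continue                                  # exclude common variants
        if v["consequence"] in CODING_OR_SPLICE:
            yield v                                   # interpretable classes only
```

On the data described in the Results, successive steps of this kind took the mean variant count per affected exome from 35 820 to 269, then to 12, leaving four shared homozygous candidates for manual review.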
Each reaction consisted of 0.5 μL of genomic DNA (∼500 ng/μL), 11 μL of Megamix (Microzone, Haywards Heath, UK), 1 μL of 10 pmol/μL forward primer (dTGTAAAACGACGGCCAGTACGACTTCTATGACGACCCG) and 1 μL of 10 pmol/μL reverse primer (dCAGGAAACAGCTATGACCCTGGTTTGGATTGCTCGACG). Both primers contained universal sequencing tags (the 5′ tails at the start of each primer sequence above) to allow Sanger sequencing according to our standard laboratory workflows. Thermocycling conditions consisted of 5 min at 94°C, followed by 30 cycles of 94°C for 30 s, 55°C for 60 s and 72°C for 45 s, before a final extension step at 72°C for 5 min. Sequence chromatograms were viewed using Mutation Surveyor V.3.2 (SoftGenetics LLC, State College, USA). Results: The pedigree structure of the family is shown in figure 1. Two affected male babies were born to a first-cousin consanguineous Caucasian couple. A pedigree showing the relationship between affected (shaded symbols) and unaffected (outlined symbols) individuals. The first child (III:1) was delivered following spontaneous labour at 35+5 weeks. During the pregnancy, a cystic hygroma had been noted at 12 weeks, which subsequently resolved, and polyhydramnios developed during the third trimester. The baby's birth weight was 1.87 kg (2nd to 9th centile) and his occipitofrontal circumference (OFC) was 33.0 cm (50th centile). Apgar scores were 1 at 1 min and 1 at 5 min. He required extensive resuscitation and was ventilator dependent. He had numerous episodes of oxygen desaturation and on day 2 failed to respond to an acute resuscitation for a further hypoxic episode and died. A postmortem examination noted dysmorphic features that included downslanting palpebral fissures, a small chin and square forehead. The toes and fingers were overlapping and the fists were held in a clenched position. The testes could not be palpated in the scrotum. There was a midline posterior cleft palate and lung hypoplasia. (The right and left lung weights were 15.1 and 11.5 g compared with a normal combined weight of 34.5 g.) There was a right-sided diaphragmatic eventration, although no hernia was identified. There was a mild degree of bilateral renal pelvis distention, more severe on the left (1.3 cm in maximum dimension). Microscopically, the fundamental structure of the lungs was normal, but there was a degree of pulmonary hypoplasia. The costochondral junction appeared irregular, and there was centrilobular congestion of the liver. The second child (III:2) was born at 35+1 weeks following spontaneous labour. During the pregnancy, serial growth scans had detected a right duplex kidney and increased liquor at 28 weeks. A scan at 34+2 weeks revealed polyhydramnios and gross right-sided hydronephrosis. The baby's birth weight was 1.80 kg (2nd centile) and his OFC was 33.5 cm (75th centile). Apgar scores were 1 at 1 min and 1 at 5 min. The baby died shortly thereafter, following unsuccessful resuscitation attempts. A postmortem examination noted dysmorphic features that included a tall forehead with bitemporal narrowing, a long philtrum and a small chin (figure 2A). He had long tapered fingers with contractures and overlapping toes (figure 2B, C). A single testis was palpable. There was a midline posterior cleft palate, and the right and left lungs were both extremely small (7 and 5.75 g). There was no obvious diaphragmatic hernia, but the domes were very high with extremely small pleural cavities. The right renal pelvis was extremely dilated with an obvious pelviureteric junction obstruction; the left kidney was normal.
Affected infant III:2 showing (A) facial dysmorphism including a tall forehead with bitemporal narrowing, a long philtrum and a small chin; (B) the right hand showing long tapered fingers with contractures and (C) the left foot with overlapping toes. Affected infant III:4 showing (D) facial dysmorphism including a square forehead and small chin, and apparent deficiency of pectoralis and proximal limb musculature; (E and F) contractures of the proximal interphalangeal joints in the right and left hand, respectively. The mother next delivered a third baby (III:3) whose father (II:3), a first-cousin relative, was different to that of her previously described affected offspring. The child was healthy. A further daughter, III:4, was delivered at 37 weeks gestation, following a spontaneous labour. A cystic hygroma had been identified at 12 weeks. Chorionic villus sampling and prenatal arrayCGH analysis revealed no pathogenic CNVs. The baby's birth weight was 1.96 kg (0.4th to 2nd centile) and her OFC was 32.5 cm (25th–50th centile). Apgar scores were 1 at 1 min and 1 at 5 min. Despite intensive resuscitation, the baby died 30 min after delivery. Postmortem examination revealed a square forehead and small chin (figure 2D). There were contractures of the proximal interphalangeal joints from the 2nd to 5th fingers bilaterally (figure 2E, F). There was also bilateral knee flexion. There was a midline posterior cleft palate and extreme pulmonary hypoplasia. (Right and left lung weights were 8.2 and 6.5 g.) There was no diaphragmatic hernia, but the domes were extremely high with small pleural cavities. The right and left kidney weights were 3.1 and 8.75 g (normal combined weight=17.4 g), but no obstruction was identified. Postnatal arrayCGH analysis of samples III:1, III:2 and III:4 did not reveal any pathogenic variants that could account for the presenting phenotype. The sibling recurrence and reported consanguinity between parents suggested that the disorder was likely to be inherited in an autosomal-recessive manner. DNA samples from all three affected individuals and their unaffected mother were subjected to exome sequencing. Having verified the quality control metrics of our data set (see ‘Methods’), we undertook autozygosity mapping using exome-wide SNP genotypes. Five genomic regions were identified to be identical by descent, which encompassed 31.7 Mb of genomic DNA sequence and 211 genes (figure 3 and online supplementary table S2). We analysed only homozygous variants located in these autozygous regions, thus reducing the mean total variant count from 35 820 to 269 variants per affected individual (table 1). Excluding common variants (those with a minor allele frequency >1% in either dbSNP or the ESP5400 data set) further reduced the mean number of variants to 12. To aid manual interpretation of the remaining variants, we retained only those variants located in coding regions or invariant splice sites. Of these, four variants were identified in a homozygous state in all affected siblings. Two of these were homozygous in the unaffected mother and on this basis were deemed to be non-pathogenic. A further two variants were heterozygous in the mother and warranted further investigation (see online supplementary table S3). The first of these was the missense variant OTOG c.3265A>G p.(Ile1089Val) (NM_001277269.1). Standard in silico tools suggested that this variant was unlikely to be pathogenic. (PolyPhen-2 result: benign; SIFT result: tolerated; AlignGVGD result: class C0.)
Furthermore, OTOG has been previously described to cause autosomal-recessive deafness (OMIM: 604487) and did not offer an explanation for the clinical features of our patients. The second variant, MYOD1 c.188C>A p.(Ser63*) (NM_002478.4), was predicted to be pathogenic and would likely lead to nonsense-mediated decay of the mRNA transcript. A thorough search of online databases did not identify this variant in any public databases. Further analysis of 61 486 exomes from the ExAC consortium revealed that homozygous loss-of-function (LOF) mutations had not been previously observed in MYOD1. In addition, MYOD1 has not been previously classified as an LOF gene in data sets generated from previous LOF screens.12–14 Table 1 lists the filtering parameters per patient. Autozygous intervals shared between all three affected individuals (dark blue) are shown with respect to per-sample autozygous intervals identified in affected individuals (light blue) and the unaffected mother (pink). The concentric circles, beginning from the interior, represent samples II:2, III:2, III:1 and III:4. The zygosity status and segregation of the c.188C>A MYOD1 variant was confirmed by Sanger sequencing of six individuals. This included the second father II:3 and his unaffected daughter, III:3, both of whom were heterozygous carriers of the mutation (see online supplementary figure S1). Discussion: Here, we define by exome analysis of individuals affected with a perinatally lethal fetal akinesia syndrome, the first human phenotype that appears to be attributable to LOF of the myogenic factor MyoD. MyoD was the earliest described example of a developmental fate-controlling transcription factor15 and has been the subject of intensive study over a period of nearly 30 years. Remarkably though, LOF mutations in human MYOD1 have not previously been identified. The MyoD family of transcriptional regulators includes two primary muscle lineage-determining factors, MyoD and Myf-5. MyoD expressed from a constitutive promoter is able to convert a variety of different cell types into muscle-lineage cells in vitro.16 MyoD is a basic-helix-loop-helix domain-containing protein that functions by binding E-box sequences present in DNA.17 Histone acetyltransferases are then recruited to allow chromatin remodelling towards an environment that is conducive to active transcription. Much work has been undertaken to elucidate the spatial and temporal differences in development between the MyoD/Myf-5 muscle regulatory factors.
MyoD mutant mouse embryos show a 2-day delay in the development of hypaxial (limb and abdominal wall) muscle, whereas Myf-5 mutants exhibit a similar delay in epaxial (paraspinal and intercostal) muscle development.18 However, lineage-tracing experiments combined with selective ablation of MyoD-expressing cells, using diphtheria toxin, indicate that the majority of myogenic progenitors pass through a MyoD+ cell fate and that myogenesis cannot be rescued by embryonic progenitor cells not expressing MyoD.19 It is, therefore, somewhat surprising that homozygous MyoD knockout mice are viable and fertile, produce seemingly normal amounts of the downstream myogenin protein and have no histologically detectable muscle abnormalities.20 Only when crossed with Myf-5 knockout mice to create homozygous double mutants is a lethal phenotype (with complete absence of skeletal muscle) obtained; this suggests there is an element of functional redundancy between these two muscle regulatory factors.21 In contrast to these findings in the mouse, the family we report here suggests that lack of MYOD1 is not compatible with postnatal life in humans. Why human MYOD1 mutations have not previously been recognised is uncertain, although recessive null alleles, as mentioned above, are clearly very rare. Prenatally and perinatally lethal disorders can be difficult to study because of limited availability and poor quality of DNA from affected individuals. This has led some to focus on analysis of parental samples in order to deduce pathogenic compound heterozygous genotypes in affected fetuses.22 23 In the present case, we were able to perform direct analysis of affected individuals’ exomes, with interpretation aided by autozygosity mapping, as previously described.24 The autozygosity mapping and variant filtering we performed reduced the burden of variant interpretation to just two candidate disease-causing variants. In silico analysis of one of these variants (OTOG c.3265A>G NM_001277269.1) did not suggest a likely pathogenic effect. Furthermore, OTOG had previously been described to cause autosomal-recessive deafness.25 The second variant, MYOD1 c.188C>A (p.Ser63*; NM_002478.4), is predicted to be pathogenic. In addition to introducing a stop codon within the N-terminal basic domain of the predicted protein, the location of this makes it highly likely that the mRNA transcript will be subject to nonsense-mediated decay. No previous homozygous LOF mutations have been identified in MYOD1 in either of two large international projects. As this investigation is based on a single family, additional independent reports are required to confirm the proposed link between MYOD1 mutations and FADS. Our autozygosity analysis was performed by direct analysis of sequence variants extracted from whole-exome sequence. This is an approach that we have described previously;24 although it offers poorer resolution than whole-genome SNP array mapping, it frequently (as here) suffices and can be performed using data sets generated for routine diagnostic purposes. Of relevance to the present case, this method also avoids problems related to poor performance of array-based reagents in the face of DNA samples of poor quality or limited quantity. Supplementary Material:
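The nonsense-mediated decay prediction invoked for MYOD1 c.188C>A is usually made with the classical "50–55 nucleotide rule": a premature termination codon lying more than roughly 50–55 nt upstream of the final exon–exon junction is expected to trigger decay of the transcript. A minimal sketch of that heuristic follows; the coordinate convention and cut-off are the usual textbook assumptions, not values taken from this paper.

```python
def likely_nmd(ptc_cdna_pos: int, exon_end_cdna: list, boundary_nt: int = 55) -> bool:
    """Classical 50-55 nt heuristic for nonsense-mediated decay.

    ptc_cdna_pos  -- cDNA coordinate of the premature stop codon
    exon_end_cdna -- cDNA coordinate at which each exon ends, in order
    """
    if len(exon_end_cdna) < 2:
        return False  # intronless transcripts generally escape NMD
    last_junction = exon_end_cdna[-2]  # end of the penultimate exon
    return ptc_cdna_pos < last_junction - boundary_nt
```

A stop at c.188, early in the first of MYOD1's three exons, sits far upstream of the final junction, in keeping with the authors' expectation that the transcript is degraded rather than translated into a truncated protein.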
Background: Lethal fetal akinesia deformation sequence (FADS) describes a clinically and genetically heterogeneous phenotype that includes fetal akinesia, intrauterine growth retardation, arthrogryposis and developmental anomalies. Affected babies die as a result of pulmonary hypoplasia. We aimed to identify the underlying genetic cause of this disorder in a family in which there were three affected individuals from two sibships. Methods: Autosomal-recessive inheritance was suggested by a family history of consanguinity and by recurrence of the phenotype between the two sibships. We performed exome sequencing of the affected individuals and their unaffected mother, followed by autozygosity mapping and variant filtering to identify the causative gene. Results: Five autozygous regions were identified, spanning 31.7 Mb of genomic sequence and including 211 genes. Using standard variant filtering criteria, we excluded all variants as being the likely pathogenic cause, apart from a single novel nonsense mutation, c.188C>A p.(Ser63*) (NM_002478.4), in MYOD1. This gene encodes an extensively studied transcription factor involved in muscle development, which has nonetheless not hitherto been associated with a hereditary human disease phenotype. Conclusions: We provide the first description of a human phenotype that appears to result from MYOD1 mutation. The presentation with FADS is consistent with a large body of data demonstrating that in the mouse, MyoD is a major controller of precursor cell commitment to the myogenic differentiation programme.
null
null
3,523
260
[]
5
[ "iii", "affected", "variant", "dna", "variants", "sequencing", "data", "myod1", "myod", "analysis" ]
[ "cleft palate extreme", "akinesia syndrome", "facies cleft palate", "birth akinesia phenotype", "lethal fetal akinesia" ]
null
null
[CONTENT] <i>MYOD1</i> | perinatal lethal | fetal akinesia | lung hypoplasia | exome sequencing [SUMMARY]
[CONTENT] <i>MYOD1</i> | perinatal lethal | fetal akinesia | lung hypoplasia | exome sequencing [SUMMARY]
[CONTENT] <i>MYOD1</i> | perinatal lethal | fetal akinesia | lung hypoplasia | exome sequencing [SUMMARY]
null
[CONTENT] <i>MYOD1</i> | perinatal lethal | fetal akinesia | lung hypoplasia | exome sequencing [SUMMARY]
null
[CONTENT] Aborted Fetus | Animals | Arthrogryposis | Exome | Female | Fetal Growth Retardation | High-Throughput Nucleotide Sequencing | Humans | Lung | Mice | Mutation | MyoD Protein | Pedigree | Phenotype | Pregnancy [SUMMARY]
[CONTENT] Aborted Fetus | Animals | Arthrogryposis | Exome | Female | Fetal Growth Retardation | High-Throughput Nucleotide Sequencing | Humans | Lung | Mice | Mutation | MyoD Protein | Pedigree | Phenotype | Pregnancy [SUMMARY]
[CONTENT] Aborted Fetus | Animals | Arthrogryposis | Exome | Female | Fetal Growth Retardation | High-Throughput Nucleotide Sequencing | Humans | Lung | Mice | Mutation | MyoD Protein | Pedigree | Phenotype | Pregnancy [SUMMARY]
null
[CONTENT] Aborted Fetus | Animals | Arthrogryposis | Exome | Female | Fetal Growth Retardation | High-Throughput Nucleotide Sequencing | Humans | Lung | Mice | Mutation | MyoD Protein | Pedigree | Phenotype | Pregnancy [SUMMARY]
null
[CONTENT] cleft palate extreme | akinesia syndrome | facies cleft palate | birth akinesia phenotype | lethal fetal akinesia [SUMMARY]
[CONTENT] cleft palate extreme | akinesia syndrome | facies cleft palate | birth akinesia phenotype | lethal fetal akinesia [SUMMARY]
[CONTENT] cleft palate extreme | akinesia syndrome | facies cleft palate | birth akinesia phenotype | lethal fetal akinesia [SUMMARY]
null
[CONTENT] cleft palate extreme | akinesia syndrome | facies cleft palate | birth akinesia phenotype | lethal fetal akinesia [SUMMARY]
null
[CONTENT] iii | affected | variant | dna | variants | sequencing | data | myod1 | myod | analysis [SUMMARY]
[CONTENT] iii | affected | variant | dna | variants | sequencing | data | myod1 | myod | analysis [SUMMARY]
[CONTENT] iii | affected | variant | dna | variants | sequencing | data | myod1 | myod | analysis [SUMMARY]
null
[CONTENT] iii | affected | variant | dna | variants | sequencing | data | myod1 | myod | analysis [SUMMARY]
null
[CONTENT] genetic | fads | gene | large | family | sequencing | genes | fetal | prenatal | akinesia [SUMMARY]
[CONTENT] μl | uk | iii | http | reads | sequencing | dna | performed | read | sequence reads [SUMMARY]
[CONTENT] iii | right | left | figure | weeks | small | centile | baby | min | affected [SUMMARY]
null
[CONTENT] iii | myod | affected | sequencing | muscle | dna | genetic | data | variant | μl [SUMMARY]
null
[CONTENT] Lethal | FADS ||| pulmonary hypoplasia ||| three | two [SUMMARY]
[CONTENT] two ||| [SUMMARY]
[CONTENT] Five | 31.7 | 211 ||| p.(Ser63 | MYOD1 ||| [SUMMARY]
null
[CONTENT] Lethal | FADS ||| pulmonary hypoplasia ||| three | two ||| two ||| ||| ||| Five | 31.7 | 211 ||| p.(Ser63 | MYOD1 ||| ||| first | MYOD1 ||| FADS | MyoD [SUMMARY]
null
The course of dry eye after phacoemulsification surgery.
26122323
The aim of this retrospective study was to evaluate the course of dry eye syndrome after phacoemulsification surgery.
BACKGROUND
One hundred and ninety-two eyes of 96 patients (30 males, 66 females) with chronic dry eye syndrome and cataract, who had undergone phacoemulsification surgery, were enrolled in this study.
METHODS
Their mean age was 68.46 ± 8.14 standard deviation (SD) (range 56–83) years. Thirty of them (31 %) were males and 66 (69 %) were females. Ocular Surface Disease Index (OSDI) questionnaire scores increased postoperatively, but returned to preoperative levels at the end of the 3rd month following the surgery. Fluorescein staining patterns according to Oxford Schema worsened postoperatively; however, after the postoperative 3rd month they improved and resembled preoperative patterns. The mean postoperative 1st day, 1st week and 1st month Break-up Time (BUT) values were significantly lower than the preoperative BUT value (P < 0.001, P < 0.001, P < 0.001); however, the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.441, P = 0.078, P = 0.145, P = 0.125). The mean postoperative 1st day, 1st week and 1st month Schirmer Test 1 (ST1) values were significantly lower than the preoperative ST1 value (P < 0.001, P < 0.001, P < 0.001); however, the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.748, P = 0.439, P = 0.091, P = 0.214).
RESULTS
Phacoemulsification surgery may aggravate the signs and symptoms of dry eye and affect dry eye test values in chronic dry eye patients in the short term. However, in the long term, signs and symptoms of dry eye decrease and dry eye test values return to preoperative values.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Chronic Disease", "Cyclosporine", "Dry Eye Syndromes", "Female", "Fluorescein", "Fluorophotometry", "Humans", "Immunosuppressive Agents", "Lubricant Eye Drops", "Male", "Middle Aged", "Phacoemulsification", "Retrospective Studies", "Staining and Labeling", "Tears" ]
4485332
Background
Dry eye syndrome is a multifactorial disease characterized by dryness of the ocular surface due to tear deficiency and overevaporation [1, 2]. There are many causes and factors leading to dry eye, including aging, female gender, connective tissue diseases, Diabetes Mellitus, systemic hypertension, contact lens usage, drugs like antihistamines, anticholinergics, antidepressants, oral contraceptives and topical eye drops containing preservatives and ocular diseases like blepharitis, chronic conjunctivitis, meibomitis and pterygium [3–5]. The symptoms observed in dry eye syndrome include dryness, irritation, burning, foreign body sensation, heaviness of the eyelids, redness, reflex lacrimation, ocular pain and fatigue. It may cause punctate keratitis, persistent epithelial defect, filamentary keratopathy, superior limbic keratoconjunctivitis and reduced visual acuity [6, 7]. Some surgical interventions related to anterior segment may also cause dry eye and aggravate the symptoms in pre-existing dry eye, like PRK, LASIK and cataract surgery [8–10]. In this study, we evaluated the course of dry eye syndrome after phacoemulsification surgery.
null
null
Results
The time of dry eye diagnosis of these patients was approximately 1 to 5 years prior to the surgery. During the time of diagnosis at the first examination, all of the patients had complaints such as burning, stinging, redness, dryness, foreign body sensation, pain and fatigue in their eyes. OSDI scores were between 25 and 50 in 66 (69 %) patients and between 50 and 75 in 30 (31 %) patients. Fluorescein staining, BUT and ST1 tests were performed. Twenty-four eyes (12 %) had grade 4, 42 eyes (22 %) had grade 3, 66 eyes (34 %) had grade 2 and 60 eyes (31 %) had grade 1 staining pattern according to Oxford Schema. BUT values were under 10 s in 138 eyes (72 %) and under 5 s in 54 eyes (28 %). ST1 values were under 5 mm in 150 eyes (78 %) and under 3 mm in 42 eyes (22 %). We commenced topical artificial tears therapy for all of them and additional topical cyclosporin A for 30 of them. Cyclosporin A therapy was ceased and restarted according to the clinical courses of the patients. The frequency of women was significantly higher than that of men (P = 0.003). OSDI scores were under 25 in 87 (91 %) patients and between 25 and 30 in 9 (9 %) patients preoperatively. However, postoperatively in 1st week, it was under 25 in 15 (16 %) patients, between 25 and 30 in 33 (34 %) patients, between 30 and 40 in 21 (22 %) patients and between 40 and 50 in 27 (28 %) patients. In postoperative 1st month, it was under 25 in 30 (31 %) patients, between 25 and 30 in 39 (41 %) patients and between 30 and 40 in 27 (28 %) patients. In postoperative 3rd month it was under 25 in 84 (88 %) patients and between 25 and 30 in 12 (12 %) patients. In postoperative 6th month it was under 25 in 90 (94 %) patients and between 25 and 30 in 6 (6 %) patients. According to Oxford Schema, preoperatively only 15 eyes had grade 2 fluorescein staining (7 %). But postoperatively on 1st day, 36 eyes had grade 2 (18 %), 24 eyes had grade 3 (12 %) and 12 eyes had grade 4 (6 %) fluorescein staining pattern. In 1st week 24 eyes had grade 2 (12 %) and 12 eyes had grade 3 (6 %) staining. In 1st month 18 eyes had grade 2 (9 %) and 6 eyes had grade 3 (3 %) staining. In 3rd month, only 6 eyes had grade 2 staining pattern (3 %). The mean preoperative BUT value was 11.65 ± 2.31 (SD) (7–16) seconds. Postoperative 1st day value was 7.60 ± 1.24 (SD) (5–11), 1st week value 7.03 ± 0.97 (SD) (5–9), 1st month value 7.42 ± 0.79 (SD) (6–8), 3rd month value 11.76 ± 2.08 (SD) (9–16), 6th month value 12.01 ± 2.05 (SD) (9–16), 1st year value 11.85 ± 2.01 (SD) (8–17) and 2nd year value was 11.95 ± 1.92 (SD) (9–17) seconds. In comparison with the preoperative value, the 1st day, 1st week and 1st month values were significantly lower (P < 0.001, P < 0.001, P < 0.001); however, the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.441, P = 0.078, P = 0.145, P = 0.125). This is shown in Fig. 1 (the course of BUT values). The mean preoperative ST1 value was 6.39 ± 1.42 (SD) (4–9) mm. Postoperative 1st day value was 4.59 ± 1.06 (SD) (3–7), 1st week value 4.45 ± 0.95 (SD) (2–6), 1st month value 4.50 ± 1.00 (SD) (3–6), 3rd month value 6.42 ± 1.31 (SD) (4–9), 6th month value 6.46 ± 1.28 (SD) (4–10), 1st year value 6.59 ± 1.38 (SD) (4–9) and 2nd year value was 6.54 ± 1.29 (SD) (4–9) mm.
In comparison with the preoperative value, the 1st day, 1st week and 1st month values were significantly lower (P < 0.001, P < 0.001, P < 0.001); however, the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.748, P = 0.439, P = 0.091, P = 0.214). This is shown in Fig. 2 (the course of ST1 values). Meanwhile, the subjective symptoms of the patients related to dry eye increased postoperatively, but after the postoperative 1st month their complaints decreased gradually. As can be seen, fluorescein staining, BUT and ST1 were impaired up to the 1st month postoperatively; however, after the 1st month they improved and returned to preoperative levels in the 3rd month (Additional file 1).
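The OSDI bands used throughout these results follow directly from the scoring formula given in the Methods of the article (total points × 100, divided by the number of answered questions × 4). A minimal sketch with made-up questionnaire responses for illustration; the 12-item layout and the example values are assumptions, while the formula and the >25 cut-off come from the paper:

```python
def osdi_score(responses):
    """OSDI = (sum of item scores) * 100 / (questions answered * 4).

    Each answered item is scored 0-4; unanswered items are passed
    as None and excluded. The result lies between 0 and 100, and
    scores over 25 were taken to indicate dry eye syndrome."""
    answered = [r for r in responses if r is not None]
    if not answered:
        raise ValueError("no questions were answered")
    return sum(answered) * 100 / (len(answered) * 4)

# Hypothetical 12-item questionnaire with one item skipped.
score = osdi_score([2, 3, 1, 2, None, 4, 1, 0, 2, 3, 2, 1])
print(f"OSDI = {score:.1f}")  # 21 * 100 / (11 * 4) = 47.7
```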
Conclusion
Phacoemulsification surgery may aggravate the signs and symptoms of dry eye and affect dry eye test values in chronic dry eye patients in the short term. However, in the long term, signs and symptoms of dry eye decrease and dry eye test values return to preoperative values.
[ "Statistical analysis" ]
[ "For statistical analysis, SPSS version 22 programme was used. For comparison of the data, Chi-square test and Paired t test were used. A P < 0.05 value was accepted as statistically significant." ]
[ null ]
[ "Background", "Methods", "Statistical analysis", "Results", "Discussion", "Conclusion" ]
[ "Dry eye syndrome is a multifactorial disease characterized by dryness of the ocular surface due to tear deficiency and overevaporation [1, 2]. There are many causes and factors leading to dry eye, including aging, female gender, connective tissue diseases, Diabetes Mellitus, systemic hypertension, contact lens usage, drugs like antihistamines, anticholinergics, antidepressants, oral contraceptives and topical eye drops containing preservatives and ocular diseases like blepharitis, chronic conjunctivitis, meibomitis and pterygium [3–5]. The symptoms observed in dry eye syndrome include dryness, irritation, burning, foreign body sensation, heaviness of the eyelids, redness, reflex lacrimation, ocular pain and fatigue. It may cause punctate keratitis, persistent epithelial defect, filamentary keratopathy, superior limbic keratoconjunctivitis and reduced visual acuity [6, 7].\nSome surgical interventions related to anterior segment may also cause dry eye and aggravate the symptoms in pre-existing dry eye, like PRK, LASIK and cataract surgery [8–10].\nIn this study, we evaluated the course of dry eye syndrome after phacoemulsification surgery.", "The study protocol was approved by the local ethics commitee (Selcuk University,Faculty of Medicine Ethics Commitee, Konya, Turkey). An informed written consent was obtained from the patients for the cataract surgery. The study was carried out according to the tenets of the Declaration of Helsinki.\nOne hundred and ninety-two eyes of 96 patients with chronic dry eye syndrome and cataract were enrolled in this study. They had undergone uneventful phacoemulsification and IOL implantation operation between January 2010 and March 2011. Their medical records were evaluated retrospectively. Their mean age was 68.46 ± 8.14 (SD) (56–83) years. Thirty of them (31 %) were male and 66 (69 %) were female. They all had bilateral cataracts. All of the surgeries were performed by a single surgeon (SC). Under subtenon anesthesia, a 2.75 mm clear corneal incision was made. Anterior chamber was filled with a dispersive (hydroxypropylmethylcellulose, Easy Visc, Germany) viscoelastic substance. After continuous curvilinear capsulorhexis, hydrodissection and hydrodelineation was performed, then a sideport entrance was made. The nucleus was removed by using the “divide and conquer” technique (Sovereing Compact, Phacoemulsification System, AMO, USA). The cortex was aspirated with coaxial irrigation/aspiration. The capsular bag was filled with a cohesive (Na Hyaluronate 1.6, Easyluron, Germany) viscoelastic substance. A foldable monofocal posterior chamber IOL (Acriva, VSY, Turkey) was implanted in the capsular bag through an injector system. The viscoelastic material was aspirated completely. The entrances were closed with stromal hydration. After the operation patients used topical antibiotic (Moxifloxacin 0.5 %, Vigamox, Alcon, USA) four times a day for a week and topical steroid (Dexamethasone Na Phosphate 0.1 %, Dexa-sine SE, Liba, USA) six times a day for a week and tapered for subsequent 3 weeks. These two eyedrops did not contain any preservatives. They were all taking artificial tears therapy routinely and 12 of them (12 %) were taking additional topical cyclosporin A. Their therapies were not interrupted due to the surgery. 
Their full ophthalmological examinations were performed 1 week before the surgery and 1st day, 1st week, 1st month, 3rd month, 6th month, 1st year and 2nd year after the surgery, and additionally fluorescein staining, BUT and ST1 without anesthesia were performed owing to the chronic dry eye. The OSDI questionnaire was applied 1 week before the surgery and 1st week, 1st month, 3rd month and 6th month after the surgery.
The OSDI score was calculated by this formula: Total points of all answered questions x 100/Total number of answered questions x 4. The range of OSDI scores is between 0 and 100. Scores over 25 indicate dry eye syndrome. Fluorescein staining was classified according to Oxford Schema (Grade 0 to 5). Grade 2 and greater than this level indicate dry eye syndrome. For BUT, a fluorescein strip was placed in the inferior fornix and the patient was asked to blink several times; with slit lamp biomicroscopy using a cobalt blue filter, the interval between the last blink and the first appearance of a dry spot or tear film break-up was recorded, and this was repeated three times and the average was determined. Values shorter than 10 s indicate dry eye syndrome. For ST1, a Schirmer strip was inserted into the inferior fornix beneath the temporal lid margin; after 5 min, the strip was removed and the wetness was measured. Values lower than 5 mm are diagnostic for dry eye syndrome.
Statistical analysis: For statistical analysis, SPSS version 22 programme was used. For comparison of the data, Chi-square test and Paired t test were used. A P < 0.05 value was accepted as statistically significant.", "For statistical analysis, SPSS version 22 programme was used. For comparison of the data, Chi-square test and Paired t test were used. A P < 0.05 value was accepted as statistically significant.", "The time of dry eye diagnosis of these patients was approximately 1 to 5 years prior to the surgery. During the time of diagnosis at the first examination, all of the patients had complaints such as burning, stinging, redness, dryness, foreign body sensation, pain and fatigue in their eyes. OSDI scores were between 25 and 50 in 66 (69 %) patients and between 50 and 75 in 30 (31 %) patients. Fluorescein staining, BUT and ST1 tests were performed. Twenty-four eyes (12 %) had grade 4, 42 eyes (22 %) had grade 3, 66 eyes (34 %) had grade 2 and 60 eyes (31 %) had grade 1 staining pattern according to Oxford Schema. BUT values were under 10 s in 138 eyes (72 %) and under 5 s in 54 eyes (28 %). ST1 values were under 5 mm in 150 eyes (78 %) and under 3 mm in 42 eyes (22 %). We commenced topical artificial tears therapy for all of them and additional topical cyclosporin A for 30 of them. Cyclosporin A therapy was ceased and restarted according to the clinical courses of the patients.
The frequency of women was significantly higher than that of men (P = 0.003). OSDI scores were under 25 in 87 (91 %) patients and between 25 and 30 in 9 (9 %) patients preoperatively. However, postoperatively in 1st week, it was under 25 in 15 (16 %) patients, between 25 and 30 in 33 (34 %) patients, between 30 and 40 in 21 (22 %) patients and between 40 and 50 in 27 (28 %) patients. In postoperative 1st month, it was under 25 in 30 (31 %) patients, between 25 and 30 in 39 (41 %) patients and between 30 and 40 in 27 (28 %) patients.
In postoperative 3rd month it was under 25 in 84 (88 %) patients and between 25 and 30 in 12 (12 %) patients. In postoperative 6th month it was under 25 in 90 (94 %) patients and between 25 and 30 in 6 (6 %) patients. According to Oxford Schema, preoperatively only 15 eyes had grade 2 fluorescein staining (7 %). But postoperatively on 1st day, 36 eyes had grade 2 (18 %), 24 eyes had grade 3 (12 %) and 12 eyes had grade 4 (6 %) fluorescein staining pattern. In 1st week 24 eyes had grade 2 (12 %) and 12 eyes had grade 3 (6 %) staining. In 1st month 18 eyes had grade 2 (9 %) and 6 eyes had grade 3 (3 %) staining. In 3rd month, only 6 eyes had grade 2 staining pattern (3 %). The mean preoperative BUT value was 11.65 ± 2.31 (SD) (7–16) seconds. Postoperative 1st day value was 7.60 ± 1.24 (SD) (5–11), 1st week value 7.03 ± 0.97 (SD) (5–9), 1st month value 7.42 ± 0.79 (SD) (6–8), 3rd month value 11.76 ± 2.08 (SD) (9–16), 6th month value 12.01 ± 2.05 (SD) (9–16), 1st year value 11.85 ± 2.01 (SD) (8–17) and 2nd year value was 11.95 ± 1.92 (SD) (9–17) seconds. In comparison with the preoperative value, the 1st day, 1st week and 1st month values were significantly lower (P < 0.001, P < 0.001, P < 0.001); however, the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.441, P = 0.078, P = 0.145, P = 0.125). This is shown in Fig. 1 (the course of BUT values).
The mean preoperative ST1 value was 6.39 ± 1.42 (SD) (4–9) mm. Postoperative 1st day value was 4.59 ± 1.06 (SD) (3–7), 1st week value 4.45 ± 0.95 (SD) (2–6), 1st month value 4.50 ± 1.00 (SD) (3–6), 3rd month value 6.42 ± 1.31 (SD) (4–9), 6th month value 6.46 ± 1.28 (SD) (4–10), 1st year value 6.59 ± 1.38 (SD) (4–9) and 2nd year value was 6.54 ± 1.29 (SD) (4–9) mm. In comparison with the preoperative value, the 1st day, 1st week and 1st month values were significantly lower (P < 0.001, P < 0.001, P < 0.001); however, the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.748, P = 0.439, P = 0.091, P = 0.214). This is shown in Fig. 2 (the course of ST1 values).
Meanwhile, the subjective symptoms of the patients related to dry eye increased postoperatively, but after the postoperative 1st month their complaints decreased gradually. As can be seen, fluorescein staining, BUT and ST1 were impaired up to the 1st month postoperatively; however, after the 1st month they improved and returned to preoperative levels in the 3rd month (Additional file 1).", "The cornea is innervated by long ciliary nerves of the ophthalmic branch of the fifth (trigeminal) nerve. In normal conditions, these nerves send afferent stimuli to the brain stem, and parasympathetic and sympathetic signals stimulate the lacrimal gland for tear production and secretion [11–13]. For normal blinking and tearing reflexes, intact corneal innervation is necessary. Damage of this circuit causes dry eye. Surgical procedures like PRK, LASIK, extracapsular cataract extraction and phacoemulsification that cause denervation of the cornea result in decreased blinking and reduction in tear production, thus leading to increased epithelial permeability, decreased epithelial metabolic activity and impaired epithelial wound healing [14, 15]. Inflammatory mediators released after corneal incisions may also change the actions of the corneal nerves and reduce corneal sensitivity and result in tear film instability [16, 17].
In the healing process, neural growth factor is released to regenerate the subepithelial corneal axon; this process is completed approximately within 1 month, and this recovery of the nerves may explain why dry eye signs and symptoms are prominent early after surgery and improve thereafter [16]. This is in accordance with our study. The incision site is larger in LASIK and extracapsular cataract extraction in comparison with phacoemulsification; hence, dry eye signs and symptoms are more prominent and last longer in these patients [18, 19].
Vigorous irrigation of the cornea intraoperatively and ocular surface manipulation deteriorate tear film stability and may reduce goblet cell density and thus cause shortened BUT postoperatively [6, 20]. Exposure to microscope light may also aggravate dry eye symptoms postoperatively [15]. In our study also, BUT values decreased postoperatively.
The use of topical anesthesia and topical eye drops containing preservatives like benzalkonium chloride administered preoperatively and postoperatively may cause tear film instability, decrease the number of mucin expressing cells and lead to dry eye postoperatively [21, 22]. We did not use any topical eye drops containing preservatives postoperatively for our patients, so as not to increase dry eye symptoms.
We did not divide the patients into subgroups according to their degree of dry eye severity. We evaluated the patients as one group statistically. The use of means of this group’s test results might mask subgroups which might behave differently from the group as a whole. That was our limitation in this study.
Khanal et al. [9] reported that deterioration in corneal sensitivity and tear physiology was seen immediately after phacoemulsification. Corneal sensitivity did not return to preoperative levels until 3 months postoperatively, whereas the tear functions recovered within 1 month. Kasetsuwan et al. [20] reported that signs and symptoms of dry eye occurred as early as 7 days post-phacoemulsification and the severity pattern improved over time. In our study also, dry eye test values returned to preoperative values after the postoperative 3rd month.
Oh et al. [19] reported that the decrease in goblet cell density, which was correlated with operation time, had not recovered at 3 months after cataract surgery; therefore, microscopic ocular surface damage during cataract surgery seems to be one of the pathogenic factors that causes ocular discomfort and dry eye syndrome after cataract surgery. Han et al. [23] reported that Meibomian gland function may be altered without accompanying structural changes after cataract surgery.
Movahedan et al. [24] reported that maintaining a healthy ocular surface is essential for achieving the best visual outcome in cataract patients. Ocular surface preparation is beneficial not only in patients with established ocular surface disease, but also in those with minimal signs or symptoms of surface disease. Chung et al. [25] suggested that cyclosporine 0.05 % can be an effective treatment for dry eye after cataract surgery.", "Phacoemulsification surgery may aggravate the signs and symptoms of dry eye and affect dry eye test values in chronic dry eye patients in the short term. However, in the long term, signs and symptoms of dry eye decrease and dry eye test values return to preoperative values." ]
[ "introduction", "materials|methods", null, "results", "discussion", "conclusion" ]
[ "Phacoemulsification", "Dry eye", "Break-up Time", "Schirmer" ]
Background: Dry eye syndrome is a multifactorial disease characterized by dryness of the ocular surface due to tear deficiency and overevaporation [1, 2]. There are many causes and factors leading to dry eye, including aging, female gender, connective tissue diseases, Diabetes Mellitus, systemic hypertension, contact lens usage, drugs like antihistamines, anticholinergics, antidepressants, oral contraceptives and topical eye drops containing preservatives and ocular diseases like blepharitis, chronic conjunctivitis, meibomitis and pterygium [3–5]. The symptoms observed in dry eye syndrome include dryness, irritation, burning, foreign body sensation, heaviness of the eyelids, redness, reflex lacrimation, ocular pain and fatigue. It may cause punctate keratitis, persistent epithelial defect, filamentary keratopathy, superior limbic keratoconjunctivitis and reduced visual acuity [6, 7]. Some surgical interventions related to anterior segment may also cause dry eye and aggravate the symptoms in pre-existing dry eye, like PRK, LASIK and cataract surgery [8–10]. In this study, we evaluated the course of dry eye syndrome after phacoemulsification surgery. Methods: The study protocol was approved by the local ethics commitee (Selcuk University,Faculty of Medicine Ethics Commitee, Konya, Turkey). An informed written consent was obtained from the patients for the cataract surgery. The study was carried out according to the tenets of the Declaration of Helsinki. One hundred and ninety-two eyes of 96 patients with chronic dry eye syndrome and cataract were enrolled in this study. They had undergone uneventful phacoemulsification and IOL implantation operation between January 2010 and March 2011. Their medical records were evaluated retrospectively. Their mean age was 68.46 ± 8.14 (SD) (56–83) years. Thirty of them (31 %) were male and 66 (69 %) were female. They all had bilateral cataracts. All of the surgeries were performed by a single surgeon (SC). Under subtenon anesthesia, a 2.75 mm clear corneal incision was made. Anterior chamber was filled with a dispersive (hydroxypropylmethylcellulose, Easy Visc, Germany) viscoelastic substance. After continuous curvilinear capsulorhexis, hydrodissection and hydrodelineation was performed, then a sideport entrance was made. The nucleus was removed by using the “divide and conquer” technique (Sovereing Compact, Phacoemulsification System, AMO, USA). The cortex was aspirated with coaxial irrigation/aspiration. The capsular bag was filled with a cohesive (Na Hyaluronate 1.6, Easyluron, Germany) viscoelastic substance. A foldable monofocal posterior chamber IOL (Acriva, VSY, Turkey) was implanted in the capsular bag through an injector system. The viscoelastic material was aspirated completely. The entrances were closed with stromal hydration. After the operation patients used topical antibiotic (Moxifloxacin 0.5 %, Vigamox, Alcon, USA) four times a day for a week and topical steroid (Dexamethasone Na Phosphate 0.1 %, Dexa-sine SE, Liba, USA) six times a day for a week and tapered for subsequent 3 weeks. These two eyedrops did not contain any preservatives. They were all taking artificial tears therapy routinely and 12 of them (12 %) were taking additional topical cyclosporin A. Their therapies were not interrupted due to the surgery. 
Their full ophthalmological examinations were performed 1 week before the surgery and 1st day, 1st week, 1st month, 3rd month, 6th month, 1st year and 2nd year after the surgery, and additionally fluorescein staining, BUT and ST1 without anesthesia were performed owing to the chronic dry eye. The OSDI questionnaire was applied 1 week before the surgery and 1st week, 1st month, 3rd month and 6th month after the surgery. The OSDI score was calculated by this formula: Total points of all answered questions x 100/Total number of answered questions x 4. The range of OSDI scores is between 0 and 100. Scores over 25 indicate dry eye syndrome. Fluorescein staining was classified according to Oxford Schema (Grade 0 to 5). Grade 2 and greater than this level indicate dry eye syndrome. For BUT, a fluorescein strip was placed in the inferior fornix and the patient was asked to blink several times; with slit lamp biomicroscopy using a cobalt blue filter, the interval between the last blink and the first appearance of a dry spot or tear film break-up was recorded, and this was repeated three times and the average was determined. Values shorter than 10 s indicate dry eye syndrome. For ST1, a Schirmer strip was inserted into the inferior fornix beneath the temporal lid margin; after 5 min, the strip was removed and the wetness was measured. Values lower than 5 mm are diagnostic for dry eye syndrome. Statistical analysis: For statistical analysis, SPSS version 22 programme was used. For comparison of the data, Chi-square test and Paired t test were used. A P < 0.05 value was accepted as statistically significant. Results: The time of dry eye diagnosis of these patients was approximately 1 to 5 years prior to the surgery. During the time of diagnosis at the first examination, all of the patients had complaints such as burning, stinging, redness, dryness, foreign body sensation, pain and fatigue in their eyes. OSDI scores were between 25 and 50 in 66 (69 %) patients and between 50 and 75 in 30 (31 %) patients. Fluorescein staining, BUT and ST1 tests were performed. Twenty-four eyes (12 %) had grade 4, 42 eyes (22 %) had grade 3, 66 eyes (34 %) had grade 2 and 60 eyes (31 %) had grade 1 staining pattern according to Oxford Schema. BUT values were under 10 s in 138 eyes (72 %) and under 5 s in 54 eyes (28 %). ST1 values were under 5 mm in 150 eyes (78 %) and under 3 mm in 42 eyes (22 %). We commenced topical artificial tears therapy for all of them and additional topical cyclosporin A for 30 of them. Cyclosporin A therapy was ceased and restarted according to the clinical courses of the patients. The frequency of women was significantly higher than that of men (P = 0.003). OSDI scores were under 25 in 87 (91 %) patients and between 25 and 30 in 9 (9 %) patients preoperatively. However, postoperatively in 1st week, it was under 25 in 15 (16 %) patients, between 25 and 30 in 33 (34 %) patients, between 30 and 40 in 21 (22 %) patients and between 40 and 50 in 27 (28 %) patients. In postoperative 1st month, it was under 25 in 30 (31 %) patients, between 25 and 30 in 39 (41 %) patients and between 30 and 40 in 27 (28 %) patients.
In the postoperative 3rd month it was under 25 in 84 (88 %) patients and between 25 and 30 in 12 (12 %) patients. In the postoperative 6th month it was under 25 in 90 (94 %) patients and between 25 and 30 in 6 (6 %) patients. According to the Oxford Schema, preoperatively only 15 eyes had grade 2 fluorescein staining (7 %). Postoperatively, however, on the 1st day 36 eyes had grade 2 (18 %), 24 eyes had grade 3 (12 %) and 12 eyes had grade 4 (6 %) fluorescein staining patterns. At the 1st week, 24 eyes had grade 2 (12 %) and 12 eyes had grade 3 (6 %) staining. At the 1st month, 18 eyes had grade 2 (9 %) and 6 eyes had grade 3 (3 %) staining. At the 3rd month, only 6 eyes had grade 2 staining (3 %). The mean preoperative BUT value was 11.65 ± 2.31 (SD) (7–16) seconds. The postoperative 1st day value was 7.60 ± 1.24 (SD) (5–11), the 1st week value 7.03 ± 0.97 (SD) (5–9), the 1st month value 7.42 ± 0.79 (SD) (6–8), the 3rd month value 11.76 ± 2.08 (SD) (9–16), the 6th month value 12.01 ± 2.05 (SD) (9–16), the 1st year value 11.85 ± 2.01 (SD) (8–17) and the 2nd year value 11.95 ± 1.92 (SD) (9–17) seconds. In comparison with the preoperative value, the 1st day, 1st week and 1st month values were significantly lower (P < 0.001, P < 0.001, P < 0.001), whereas the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.441, P = 0.078, P = 0.145, P = 0.125). This is shown in Fig. 1 (the course of BUT values). The mean preoperative ST1 value was 6.39 ± 1.42 (SD) (4–9) mm. The postoperative 1st day value was 4.59 ± 1.06 (SD) (3–7), the 1st week value 4.45 ± 0.95 (SD) (2–6), the 1st month value 4.50 ± 1.00 (SD) (3–6), the 3rd month value 6.42 ± 1.31 (SD) (4–9), the 6th month value 6.46 ± 1.28 (SD) (4–10), the 1st year value 6.59 ± 1.38 (SD) (4–9) and the 2nd year value 6.54 ± 1.29 (SD) (4–9) mm. In comparison with the preoperative value, the 1st day, 1st week and 1st month values were significantly lower (P < 0.001, P < 0.001, P < 0.001), whereas the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.748, P = 0.439, P = 0.091, P = 0.214). This is shown in Fig. 2 (the course of ST1 values). Meanwhile, the patients' subjective dry eye symptoms increased postoperatively, but after the postoperative 1st month their complaints decreased gradually. In summary, fluorescein staining, BUT and ST1 were impaired up to the 1st postoperative month; after the 1st month they improved, returning to preoperative levels by the 3rd month (Additional file 1). Discussion: The cornea is innervated by the long ciliary nerves of the ophthalmic branch of the fifth (trigeminal) cranial nerve. Under normal conditions, these nerves send afferent stimuli to the brain stem, and parasympathetic and sympathetic signals stimulate the lacrimal gland for tear production and secretion [11–13]. Intact corneal innervation is necessary for normal blinking and tearing reflexes, and damage to this circuit causes dry eye. Surgical procedures such as PRK, LASIK, extracapsular cataract extraction and phacoemulsification, which denervate the cornea, result in decreased blinking and reduced tear production, leading to increased epithelial permeability, decreased epithelial metabolic activity and impaired epithelial wound healing [14, 15]. Inflammatory mediators released after corneal incisions may also alter the actions of the corneal nerves, reduce corneal sensitivity and result in tear film instability [16, 17].
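The paired comparisons reported in the Results above (each postoperative visit against the preoperative baseline, paired t test, significance at P < 0.05) can be reproduced with standard tools. A minimal sketch follows, assuming hypothetical per-eye BUT values, since the study published only means and SDs, not the raw measurements:

```python
# Hedged sketch of the paired t test used for the BUT/ST1 comparisons above.
# The arrays are hypothetical per-eye BUT values (seconds), NOT study data.
from scipy import stats

preop_but = [12.0, 11.0, 14.0, 10.0, 13.0, 9.0, 12.0, 11.5]
day1_but  = [8.0, 7.0, 9.0, 6.0, 8.0, 7.0, 8.0, 7.5]  # same eyes, 1st day

t_stat, p_value = stats.ttest_rel(preop_but, day1_but)
print(f"paired t = {t_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 -> significant
```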
During the healing process, nerve growth factor is released to regenerate the subepithelial corneal axons; this process is completed within approximately 1 month, and this recovery of the nerves may explain why dry eye signs and symptoms are prominent early after surgery and improve thereafter [16]. This is in accordance with our study. The incision site is larger in LASIK and extracapsular cataract extraction than in phacoemulsification; hence, dry eye signs and symptoms are more prominent and last longer in those patients [18, 19]. Vigorous intraoperative irrigation of the cornea and ocular surface manipulation deteriorate tear film stability and may reduce goblet cell density, thus causing a shortened BUT postoperatively [6, 20]. Exposure to the microscope light may also aggravate dry eye symptoms postoperatively [15]. In our study, too, BUT values decreased postoperatively. Topical anesthesia and topical eye drops containing preservatives such as benzalkonium chloride, administered preoperatively and postoperatively, may cause tear film instability, decrease the number of mucin-expressing cells and lead to dry eye postoperatively [21, 22]. We did not use any topical eye drops containing preservatives postoperatively, in order not to aggravate our patients' dry eye symptoms. We did not divide the patients into subgroups according to the degree of dry eye severity but evaluated them statistically as a single group. Using the mean test results of this group might mask subgroups that behave differently from the group as a whole; this was a limitation of our study. Khanal et al. [9] reported that deterioration in corneal sensitivity and tear physiology was seen immediately after phacoemulsification; corneal sensitivity did not return to preoperative levels until 3 months postoperatively, whereas the tear functions recovered within 1 month. Kasetsuwan et al. [20] reported that signs and symptoms of dry eye occurred as early as 7 days post-phacoemulsification and that the severity pattern improved over time. In our study, too, dry eye test values returned to preoperative values by the postoperative 3rd month. Oh et al. [19] reported that the decrease in goblet cell density, which was correlated with operation time, had not recovered at 3 months after cataract surgery; therefore, microscopic ocular surface damage during cataract surgery seems to be one of the pathogenic factors causing ocular discomfort and dry eye syndrome after cataract surgery. Han et al. [23] reported that Meibomian gland function may be altered without accompanying structural changes after cataract surgery. Movahedan et al. [24] reported that maintaining a healthy ocular surface is essential for achieving the best visual outcome in cataract patients; ocular surface preparation is beneficial not only in patients with established ocular surface disease, but also in those with minimal signs or symptoms of surface disease. Chung et al. [25] suggested that cyclosporine 0.05 % can be an effective treatment for dry eye after cataract surgery. Conclusion: Phacoemulsification surgery may aggravate the signs and symptoms of dry eye and affect dry eye test values in chronic dry eye patients in the short term. In the long term, however, the signs and symptoms of dry eye decrease and dry eye test values return to preoperative values.
Background: The aim of this retrospective study was to evaluate the course of dry eye syndrome after phacoemulsification surgery. Methods: One hundred and ninety-two eyes of 96 patients (30 males, 66 females) with chronic dry eye syndrome and cataract who had undergone phacoemulsification surgery were enrolled in this study. Results: Their mean age was 68.46 ± 8.14 standard deviation (SD) (range 56-83) years. Thirty of them (31 %) were males and 66 (69 %) were females. Ocular Surface Disease Index (OSDI) questionnaire scores increased postoperatively but returned to preoperative levels by the end of the 3rd month after the surgery. Fluorescein staining patterns according to the Oxford Schema worsened postoperatively; after the postoperative 3rd month, however, they improved and resembled the preoperative patterns. The mean postoperative 1st day, 1st week and 1st month Break-up Time (BUT) values were significantly lower than the preoperative BUT value (P < 0.001, P < 0.001, P < 0.001), whereas the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.441, P = 0.078, P = 0.145, P = 0.125). The mean postoperative 1st day, 1st week and 1st month Schirmer Test 1 (ST1) values were significantly lower than the preoperative ST1 value (P < 0.001, P < 0.001, P < 0.001), whereas the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.748, P = 0.439, P = 0.091, P = 0.214). Conclusions: Phacoemulsification surgery may aggravate the signs and symptoms of dry eye and affect dry eye test values in chronic dry eye patients in the short term. In the long term, however, the signs and symptoms of dry eye decrease and dry eye test values return to preoperative values.
Background: Dry eye syndrome is a multifactorial disease characterized by dryness of the ocular surface due to tear deficiency and excessive tear evaporation [1, 2]. There are many causes of and factors leading to dry eye, including aging, female gender, connective tissue diseases, diabetes mellitus, systemic hypertension, contact lens usage, drugs such as antihistamines, anticholinergics, antidepressants and oral contraceptives, topical eye drops containing preservatives, and ocular diseases such as blepharitis, chronic conjunctivitis, meibomitis and pterygium [3–5]. The symptoms observed in dry eye syndrome include dryness, irritation, burning, foreign body sensation, heaviness of the eyelids, redness, reflex lacrimation, ocular pain and fatigue. It may cause punctate keratitis, persistent epithelial defects, filamentary keratopathy, superior limbic keratoconjunctivitis and reduced visual acuity [6, 7]. Some surgical interventions involving the anterior segment, such as PRK, LASIK and cataract surgery, may also cause dry eye and aggravate the symptoms of pre-existing dry eye [8–10]. In this study, we evaluated the course of dry eye syndrome after phacoemulsification surgery. Conclusion: Phacoemulsification surgery may aggravate the signs and symptoms of dry eye and affect dry eye test values in chronic dry eye patients in the short term. In the long term, however, the signs and symptoms of dry eye decrease and dry eye test values return to preoperative values.
Background: The aim of this retrospective study was to evaluate the course of dry eye syndrome after phacoemulsification surgery. Methods: One hundred and ninety-two eyes of 96 patients (30 males, 66 females) with chronic dry eye syndrome and cataract who had undergone phacoemulsification surgery were enrolled in this study. Results: Their mean age was 68.46 ± 8.14 standard deviation (SD) (range 56-83) years. Thirty of them (31 %) were males and 66 (69 %) were females. Ocular Surface Disease Index (OSDI) questionnaire scores increased postoperatively but returned to preoperative levels by the end of the 3rd month after the surgery. Fluorescein staining patterns according to the Oxford Schema worsened postoperatively; after the postoperative 3rd month, however, they improved and resembled the preoperative patterns. The mean postoperative 1st day, 1st week and 1st month Break-up Time (BUT) values were significantly lower than the preoperative BUT value (P < 0.001, P < 0.001, P < 0.001), whereas the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.441, P = 0.078, P = 0.145, P = 0.125). The mean postoperative 1st day, 1st week and 1st month Schirmer Test 1 (ST1) values were significantly lower than the preoperative ST1 value (P < 0.001, P < 0.001, P < 0.001), whereas the 3rd month, 6th month, 1st year and 2nd year values were not significantly different from the preoperative value (P = 0.748, P = 0.439, P = 0.091, P = 0.214). Conclusions: Phacoemulsification surgery may aggravate the signs and symptoms of dry eye and affect dry eye test values in chronic dry eye patients in the short term. In the long term, however, the signs and symptoms of dry eye decrease and dry eye test values return to preoperative values.
2,895
364
[ 41 ]
6
[ "eye", "dry", "dry eye", "1st", "month", "patients", "value", "eyes", "values", "sd" ]
[ "dry eye aggravate", "ocular discomfort dry", "dry eye symptoms", "cause dry eye", "dry eye syndrome" ]
null
[CONTENT] Phacoemulsification | Dry eye | Break-up Time | Schirmer [SUMMARY]
null
[CONTENT] Phacoemulsification | Dry eye | Break-up Time | Schirmer [SUMMARY]
[CONTENT] Phacoemulsification | Dry eye | Break-up Time | Schirmer [SUMMARY]
[CONTENT] Phacoemulsification | Dry eye | Break-up Time | Schirmer [SUMMARY]
[CONTENT] Phacoemulsification | Dry eye | Break-up Time | Schirmer [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Chronic Disease | Cyclosporine | Dry Eye Syndromes | Female | Fluorescein | Fluorophotometry | Humans | Immunosuppressive Agents | Lubricant Eye Drops | Male | Middle Aged | Phacoemulsification | Retrospective Studies | Staining and Labeling | Tears [SUMMARY]
null
[CONTENT] Aged | Aged, 80 and over | Chronic Disease | Cyclosporine | Dry Eye Syndromes | Female | Fluorescein | Fluorophotometry | Humans | Immunosuppressive Agents | Lubricant Eye Drops | Male | Middle Aged | Phacoemulsification | Retrospective Studies | Staining and Labeling | Tears [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Chronic Disease | Cyclosporine | Dry Eye Syndromes | Female | Fluorescein | Fluorophotometry | Humans | Immunosuppressive Agents | Lubricant Eye Drops | Male | Middle Aged | Phacoemulsification | Retrospective Studies | Staining and Labeling | Tears [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Chronic Disease | Cyclosporine | Dry Eye Syndromes | Female | Fluorescein | Fluorophotometry | Humans | Immunosuppressive Agents | Lubricant Eye Drops | Male | Middle Aged | Phacoemulsification | Retrospective Studies | Staining and Labeling | Tears [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Chronic Disease | Cyclosporine | Dry Eye Syndromes | Female | Fluorescein | Fluorophotometry | Humans | Immunosuppressive Agents | Lubricant Eye Drops | Male | Middle Aged | Phacoemulsification | Retrospective Studies | Staining and Labeling | Tears [SUMMARY]
[CONTENT] dry eye aggravate | ocular discomfort dry | dry eye symptoms | cause dry eye | dry eye syndrome [SUMMARY]
null
[CONTENT] dry eye aggravate | ocular discomfort dry | dry eye symptoms | cause dry eye | dry eye syndrome [SUMMARY]
[CONTENT] dry eye aggravate | ocular discomfort dry | dry eye symptoms | cause dry eye | dry eye syndrome [SUMMARY]
[CONTENT] dry eye aggravate | ocular discomfort dry | dry eye symptoms | cause dry eye | dry eye syndrome [SUMMARY]
[CONTENT] dry eye aggravate | ocular discomfort dry | dry eye symptoms | cause dry eye | dry eye syndrome [SUMMARY]
[CONTENT] eye | dry | dry eye | 1st | month | patients | value | eyes | values | sd [SUMMARY]
null
[CONTENT] eye | dry | dry eye | 1st | month | patients | value | eyes | values | sd [SUMMARY]
[CONTENT] eye | dry | dry eye | 1st | month | patients | value | eyes | values | sd [SUMMARY]
[CONTENT] eye | dry | dry eye | 1st | month | patients | value | eyes | values | sd [SUMMARY]
[CONTENT] eye | dry | dry eye | 1st | month | patients | value | eyes | values | sd [SUMMARY]
[CONTENT] eye | dry | dry eye | like | ocular | dry eye syndrome | eye syndrome | syndrome | diseases | cause [SUMMARY]
null
[CONTENT] 1st | eyes | month | value | sd | patients | grade | 30 | eyes grade | 25 [SUMMARY]
[CONTENT] eye | dry | dry eye | symptoms dry eye | signs symptoms dry eye | signs symptoms dry | term | symptoms dry | values | eye test values [SUMMARY]
[CONTENT] eye | dry | dry eye | 1st | month | test | value | patients | values | surgery [SUMMARY]
[CONTENT] eye | dry | dry eye | 1st | month | test | value | patients | values | surgery [SUMMARY]
[CONTENT] [SUMMARY]
null
[CONTENT] 68.46 | 8.14 | 56-83 | years ||| Thirty | 31 % | 66 | 69 % ||| the end of 3rd month ||| Oxford Schema | 3rd month ||| 1st day | 1st week | 1st month | Break | Time | P < 0.001 | P < 0.001 | 3rd month | 6th month | 1st year | 2nd year | 0.441 | 0.078 | 0.145 | 0.125 ||| 1st day | 1st week | 1st month | Schirmer Test 1 | P < 0.001 | P < 0.001 | 3rd month | 6th month | 1st year | 2nd year | 0.748 | 0.439 | 0.091 | 0.214 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| One hundred and ninety-two | 96 | 30 | 66 ||| ||| 68.46 | 8.14 | 56-83 | years ||| Thirty | 31 % | 66 | 69 % ||| the end of 3rd month ||| Oxford Schema | 3rd month ||| 1st day | 1st week | 1st month | Break | Time | P < 0.001 | P < 0.001 | 3rd month | 6th month | 1st year | 2nd year | 0.441 | 0.078 | 0.145 | 0.125 ||| 1st day | 1st week | 1st month | Schirmer Test 1 | P < 0.001 | P < 0.001 | 3rd month | 6th month | 1st year | 2nd year | 0.748 | 0.439 | 0.091 | 0.214 ||| ||| [SUMMARY]
[CONTENT] ||| One hundred and ninety-two | 96 | 30 | 66 ||| ||| 68.46 | 8.14 | 56-83 | years ||| Thirty | 31 % | 66 | 69 % ||| the end of 3rd month ||| Oxford Schema | 3rd month ||| 1st day | 1st week | 1st month | Break | Time | P < 0.001 | P < 0.001 | 3rd month | 6th month | 1st year | 2nd year | 0.441 | 0.078 | 0.145 | 0.125 ||| 1st day | 1st week | 1st month | Schirmer Test 1 | P < 0.001 | P < 0.001 | 3rd month | 6th month | 1st year | 2nd year | 0.748 | 0.439 | 0.091 | 0.214 ||| ||| [SUMMARY]
CONVERSION THERAPY FOR GASTRIC CANCER: EXPANDING THE TREATMENT POSSIBILITIES.
31038560
Conversion therapy in gastric cancer (GC) is defined as the use of chemotherapy/radiotherapy followed by surgical resection with curative intent of a tumor that was previously considered unresectable or oncologically incurable.
BACKGROUND
Retrospective analysis of all GC surgeries between 2009 and 2018. Patients who received any therapy before surgery were further identified to define the conversion group.
METHODS
Out of 1003 surgeries performed for GC, 113 cases underwent neoadjuvant treatment and 16 (1.6%) were considered conversion therapy. The main indications for treatment were T4b lesions (n=10), lymph node metastasis (n=4), and peritoneal carcinomatosis and hepatic metastasis in one case each. The diagnosis was made by imaging in 12 cases (75%) and during the surgical procedure in four (25%). The most commonly used chemotherapy regimens were XP and mFLOX. Major surgical complications occurred in four cases (25%) and one patient (6.3%) died. After an average follow-up of 20 months, 11 patients (68.7%) had recurrence and nine (56.3%) died. Prolonged recurrence-free survival of over 40 months occurred in two cases.
RESULTS
Conversion therapy may offer the possibility of prolonged survival for a group of GC patients initially considered beyond therapeutic possibility.
CONCLUSION
[ "Adenocarcinoma", "Aged", "Aged, 80 and over", "Carcinoma", "Female", "Humans", "Kaplan-Meier Estimate", "Male", "Middle Aged", "Neoplasm Recurrence, Local", "Palliative Care", "Retrospective Studies", "Sex Distribution", "Stomach Neoplasms", "Time Factors", "Treatment Outcome" ]
6488271
INTRODUCTION
Gastric cancer (GC) is the fifth most common cancer in the world. It is estimated that almost one million (952,000) new cases occurred worldwide in 2012 [11]. Surgery remains the main curative treatment option, and gastrectomy with D2 lymphadenectomy is considered the standard surgical treatment for locally advanced GC. Unfortunately, many patients already have locally unresectable tumors or signs of systemic disease at the time of diagnosis [22]. For these clinical stage IV patients, palliative chemotherapy represents the current standard of care [18]. Recently, conversion therapy has emerged as an alternative for these stage IV patients [26]. It consists of the administration of chemotherapy followed by surgery in stage IV patients, and is also referred to as the combination of induction chemotherapy and “adjuvant” surgery. This option can be indicated to treat unresectable or marginally resectable lesions, patients with distant lymph node metastasis (LNM) and even those with metastatic disease or peritoneal dissemination. In recent years, the development and improvement of chemotherapy regimens and of molecular targeting agents based on molecular markers have dramatically improved response rates [2, 6]. Thus, it has become increasingly common for surgeons to reassess patients initially labeled as non-candidates for curative resection who present a completely different disease after initial palliative chemotherapy. This new scenario has brought conversion therapy to the primetime discussion of GC treatment. However, the clinical value of such a multimodal strategy for stage IV GC remains controversial, with few reports from western countries and very conflicting definitions of its use that may impair a clear analysis of its results. Thus, the aim of this study was to analyze the results of patients submitted to conversion therapy in our institution.
METHODS
The study was approved by the hospital ethics committee (NP993/16) and registered online (www.plataformabrasil.com; CAAE: 2915516.2.0000.0065). We reviewed our prospective database, selecting all patients submitted to any surgical procedure for gastric adenocarcinoma from 2008 to 2018. Subsequently, patients who received chemotherapy or radiotherapy followed by gastric resection were selected. Conversion therapy was defined as applying to patients who were considered unresectable or marginally resectable and/or had disseminated disease at initial staging and were referred for initial chemo- and/or radiation therapy. Those who had a partial or complete response at re-assessment were indicated for surgery and considered the conversion therapy group. Patients were staged preoperatively through abdominal and pelvic computed tomography, endoscopy and laboratory tests. The extension of gastric resection (total vs. subtotal) was based on the location of the tumor so as to obtain a free proximal margin [27]. TNM staging was performed according to the TNM 7th edition [24]. Clinical characteristics evaluated included the American Society of Anesthesiologists (ASA) classification [8], the Charlson Comorbidity Index (CCI) [4] and laboratory tests. The CCI was calculated without including GC as a comorbidity. Surgical complications were graded according to the Clavien-Dindo classification [7]; major complications were considered Clavien III-V. Hospital length of stay and the number of retrieved lymph nodes were evaluated. Surgical mortality was considered when death occurred in the first 30 days after surgery or during the hospital stay after the procedure. Postoperative follow-up was performed on a quarterly basis in the first year and every six months in the following years. Follow-up tests for relapse detection were performed based on the presence of symptoms. Absence from consultations for more than 12 months was considered loss of follow-up. All cases were operated in a high-volume center by specialized surgeons. The surgical technique, extension of resection and dissected lymph node chains followed the recommendations of the Japanese Gastric Cancer Association guidelines [18]. Statistical analysis: The Chi-square test was used for categorical variables and t-tests for continuous variables. Overall survival (OS) and disease-free survival (DFS) were estimated using the Kaplan-Meier method, and differences in survival were examined using the log-rank test. Survival time, in months, was calculated from the date of surgery until the date of death/recurrence; patients who were alive were censored at the date of last contact. All tests were two-sided and p<0.05 was considered statistically significant. Analysis was performed using SPSS software, version 18.0 (SPSS Inc, Chicago, IL).
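For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows Kaplan-Meier estimation and a log-rank comparison in Python with the `lifelines` package. The data frame is hypothetical — invented follow-up times and event flags, not the study's patient records:

```python
# Hedged sketch of the survival analysis described above (Kaplan-Meier + log-rank).
# The data are invented illustrations, NOT the study's patient records.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months": [4.4, 0.7, 9.8, 21.7, 91.3, 7.9, 5.1, 12.0],  # follow-up in months
    "died":   [1,   1,   1,   0,    0,    1,   1,   0],     # 1 = death observed
    "group":  ["conversion"] * 5 + ["palliative"] * 3,
})

kmf = KaplanMeierFitter()
for name, g in df.groupby("group"):
    kmf.fit(g["months"], event_observed=g["died"], label=name)
    print(name, "median OS:", kmf.median_survival_time_)

conv, pall = df[df.group == "conversion"], df[df.group == "palliative"]
res = logrank_test(conv["months"], pall["months"],
                   event_observed_A=conv["died"], event_observed_B=pall["died"])
print(f"log-rank P = {res.p_value:.3f}")  # p < 0.05 -> significant difference
```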
RESULTS
Out of 1,003 GC patients operated on in the period, surgical resection with curative intent was performed in 629 cases and palliative procedures in 230. A total of 113 patients were resected with curative intent after chemotherapy and/or radiotherapy; of these, 16 were considered conversion therapy (1.6%). Table 1 presents the clinicopathological characteristics of the patients in the conversion group. Most patients had a low ASA classification (I-II) and CCI (0-1). Tumors were mainly located in the distal part of the stomach (56.3%), and intestinal adenocarcinoma was the most common histological subtype (43.8%). In the conversion group, the decision for chemo/radiotherapy was mainly based on radiologic exams (75%); 4 patients (25%) were deemed unresectable/incurable during surgery. The chemotherapy regimens varied, with a predominance of schemes based on the combination of a platin and a fluoropyrimidine.

TABLE 1 Clinicopathological characteristics of conversion therapy (n = 16)
Variable | n | %
Gender: Female | 8 | 50
Gender: Male | 8 | 50
Age (years), mean (range) | 62.5 (48-80) | -
CCI: 0-1 | 10 | 62.5
CCI: >=1 | 6 | 37.5
ASA: I-II | 12 | 75
ASA: III | 4 | 25
Tumor location: Upper | 1 | 6.3
Tumor location: Middle | 4 | 25
Tumor location: Lower | 9 | 56.3
Tumor location: Total | 2 | 12.5
Histological type: Intestinal adenocarcinoma | 7 | 43.8
Histological type: Diffuse adenocarcinoma | 7 | 37.5
Histological type: Mixed adenocarcinoma | 1 | 6.3
Histological type: Squamous cell carcinoma | 1 | 6.3
Differentiation: Well/moderately differentiated | 9 | 56.3
Differentiation: Poorly differentiated | 7 | 43.7
Diagnosis of nonresectability: Surgery | 4 | 25
Diagnosis of nonresectability: MRI | 3 | 18.7
Diagnosis of nonresectability: CT | 9 | 56.3
Preoperative treatment: Capecitabine + Cisplatin (XP) | 5 | 31.3
Preoperative treatment: modified FLOX (mFLOX) | 5 | 31.3
Preoperative treatment: Capecitabine + Oxaliplatin (XELOX) | 1 | 6.3
Preoperative treatment: FOLFIRINOX | 1 | 6.3
Preoperative treatment: Carboplatin + Paclitaxel | 1 | 6.3
Preoperative treatment: Cisplatin + Irinotecan | 2 | 12.5
Preoperative treatment: Radiotherapy (RDT) | 1 | 6.3

Table 2 presents the surgical results. Combined organ resection was performed in 9 cases (56.3%), and in 4 of them more than 1 adjacent organ was resected. The liver and pancreas were resected in 5 cases and the spleen and colon in 4. R0 resection was achieved in 13 cases (81.3%). The ypT4 category occurred in 8 patients (50%). The mean number of retrieved lymph nodes was 35.5, and 4 cases (25%) had no LNM. Only 2 cases were pathological stage IV. Four patients (25%) had major surgical complications and 1 (6.3%) died.

TABLE 2 Surgical results of conversion therapy (n = 16)
Variable | n | %
Type of resection: Subtotal | 8 | 50
Type of resection: Total | 8 | 50
Lymphadenectomy: D1 | 3 | 18.7
Lymphadenectomy: D2 | 13 | 81.3
Combined resection: No | 7 | 43.8
Combined resection: Yes | 9 | 56.3
Residual disease: R0 | 13 | 81.3
Residual disease: R1/R2 | 3 | 18.7
ypT: pT0/pT1 | 2 | 12.5
ypT: pT2 | 1 | 6.3
ypT: pT3 | 5 | 31.3
ypT: pT4a | 5 | 31.3
ypT: pT4b | 3 | 18.7
ypN: pN0 | 4 | 25
ypN: pN1 | 5 | 31.3
ypN: pN2 | 1 | 6.3
ypN: pN3 | 6 | 37.5
ypTNM: I | 1 | 6.3
ypTNM: II | 4 | 25
ypTNM: III | 9 | 56.3
ypTNM: IV | 2 | 12.5
Surgical complication: None / Clavien I-II | 11 | 68.7
Surgical complication: Clavien III-IV | 4 | 25
Surgical complication: Clavien V | 1 | 6.3
Recurrence: No | 5 | 31.3
Recurrence: Yes | 11 | 68.7
Death: No | 7 | 43.8
Death: Yes | 9 | 56.3

The median follow-up was 8.9 months (mean = 16.2, standard deviation = 22.3). Eleven patients (68.8%) had recurrence and 9 (56.3%) died. Two patients had long-term survival without recurrence: one had local invasion of the pancreas and liver, and the other had invasion of the pancreas and duodenum and a gastrocutaneous fistula due to abdominal wall invasion. The characteristics of the patients and the survival results are demonstrated in Table 3.
TABLE 3 Outcomes of conversion therapy (DFS and OS in months; TG = total gastrectomy; STG = subtotal gastrectomy)
Case | Incurable factor | QT regimen | Surgery | Recurrence | Status | DFS | OS
1 | T4b | XP | TG + D2 | peritoneum/liver | loss of follow-up | 14 | 21.7
2 | LNM | mFLOX | STG + D2 | peritoneum | dead | 4.2 | 4.4
3 | T4b | Cis + Irino | TG + D2 | - | alive | 91.3 | 91.3
4 | T4b, LNM | RDT | TG + D1 | - | dead | 0.7 | 0.7
5 | T4b, LNM | mFLOX | TG + D2 | peritoneum/LN | dead | 7.4 | 9.8
6 | T4b | mFLOX | TG + D1 | - | alive | 22.3 | 22.3
7 | LNM | mFLOX | STG + D2 | bone | dead | 3.6 | 8
8 | T4b | XELOX | STG + D2 | - | alive | 3 | 3
9 | T4b, carcinomatosis | XP | STG + D1 | bone | dead | 11.1 | 11.3
10 | T4b, gastrocutaneous fistula | Cis + Irino | STG + D2 | - | alive | 40.6 | 40.6
11 | LNM | XP | STG + D2 | liver | dead | 3.5 | 3.8
12 | T4b, LNM | mFLOX | STG + D2 | peritoneum | dead | 5.3 | 5.8
13 | LNM | XP | STG + D2 | LN | alive | 18.3 | 18.3
14 | Liver metastasis | XP | TG + D2 | liver | dead | 0 | 16.2
15 | T4b | Taxol + Carbo | TG + D2 | liver/LN | alive | 2.7 | 5.2
16 | T4b, LNM | FOLFIRINOX | TG + D2 | peritoneum | dead | 0 | 6.5

Survival analysis of all 1,003 GC patients submitted to any surgical procedure demonstrated that, by clinical stage, OS in the conversion group was higher than in stage IV patients not submitted to conversion therapy (43.8% vs. 27%, p=0.037; Figure 1: Kaplan-Meier overall survival curves according to clinical stage compared with the conversion therapy group). The median OS for stage IV was 7 months, compared with 11.3 months in the conversion group. Furthermore, there were no significant differences in survival between stage III patients (52.3%, median OS = 27 months) and the conversion group (p=0.222). Regarding survival by the intention of the surgical procedure, patients who underwent conversion therapy showed a trend toward better OS than those submitted to palliative procedures (43.8% vs. 27.9%, p=0.054; Figure 2: Kaplan-Meier overall survival curves according to the indication of surgical treatment). The median OS was 11.3 and 7.9 months for the conversion and palliative groups, respectively. The standard curative treatment group had a significantly higher OS rate than the palliative patients (who served as the reference group), with an OS rate of 73.2% (p<0.001).
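Alongside the survival analysis, the Methods also name the Chi-square test for categorical comparisons. A minimal sketch follows, assuming a hypothetical 2x2 contingency table; the counts are invented for illustration, not taken from the study:

```python
# Hedged sketch of the Chi-square test named in the Methods for categorical
# variables. The 2x2 contingency table below is hypothetical, not study data.
from scipy.stats import chi2_contingency

#               recurrence  no recurrence
table = [[11, 5],    # hypothetical conversion-therapy counts
         [40, 10]]   # hypothetical comparison-group counts

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")
```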
CONCLUSION
Conversion therapy may offer the possibility of surgical resection with long-term survival to a group of patients initially considered beyond therapeutic possibility. However, definitions of the best treatment regimen, the diagnostic criteria for unresectability and which group of patients benefits from this modality are still needed.
[ "Statistical analysis" ]
[ "The Chi-square test was used for categorical variables and t-tests for continuous\nvariables. Overall survival (OS) and disease-free survival (DFS) were estimated\nusing the method of Kaplan-Meier, and differences in survival were examined\nusing the Log Rank Test. Survival time, in months, was calculated from the date\nof surgery until the date of death/recurrence. The patients alive were censored\nat the date of last contact. All tests were two-sided and\np<0.05 was considered statistically significant. Analysis\nwas performed using SPSS software, version 18.0 (SPSS Inc, Chicago, IL)." ]
[ null ]
[ "INTRODUCTION", "METHODS", "Statistical analysis", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Gastric cancer (GC) is the fifth most common cancer in the world. It is estimated\nthat almost one million (952,000) new cases occurred worldwide in 2012\n11\n. Surgery remains as the main curative treatment option, and gastrectomy with\nD2 lymphadenectomy is considered the standard surgical treatment for locally\nadvanced GC. Unfortunately, many patients at the time of diagnosis have already\nlocally unresectable tumors or signs of systemic disease\n22\n. For those clinical stage IV patients, palliative chemotherapy represents\nthe current standard of care\n18\n. \nRecently, conversion therapy has emerged as an alternative therapy for these stage IV\npatients\n26\n. It consists in the administration of chemotherapy followed by surgery in\nstage IV patients. It is also referred as combination of induction chemotherapy and\n“adjuvant” surgery. This option can be indicated to treat unresectable or marginally\nresectable lesions, patients with distant lymph node metastasis (LNM) and even those\nwith metastatic disease or peritoneal dissemination. In the last years, the\ndevelopment and improvement of chemotherapy regimens and molecular targeting agents\nbased on molecular markers have improved dramatically the response rates\n2\n\n,\n\n6\n. Thus, it has become increasingly common for surgeons to reassess patients\ninitially labeled as non-candidates for curative resection that present a complete\ndifferent disease after initial palliative chemotherapy. This new scenario has\nbrought conversion therapy to the primetime discussion of GC treatment. However, the\nclinical value of such multimodal strategy for stage IV GC remains controversial\nwith few reports from western countries and very conflicting definitions of its use\nthat may impair a clear analysis of its results. \nThus, the aim of this study was to analyze the results of patients who were submitted\nto conversion therapy in our institution.", "The study was approved by the hospital ethics committee (NP993/16) and registered\nonline (www.plataformabrasil.com; CAAE: 2915516.2.0000.0065).\nWe reviewed our prospective database, selecting all patients submitted to any\nsurgical procedure due to gastric adenocarcinoma from 2008 to 2018. Posteriorly,\npatients who received chemotherapy or radiotherapy followed by gastric resection\nwere selected. Conversion therapy was defined as patients who were considered\nunresectable or marginally resectable and/or with disseminated disease during\ninitial staging and were referred to initial chemo and/or radiation therapy. Those\nwho had partial or complete response at re-assessment were indicated for surgery and\nconsidered as the conversion therapy group. \nPatients were staged preoperatively through abdominal and pelvis computed tomography,\nendoscopy and laboratory tests. Extension of gastric resection (total x subtotal)\nwas based on the location of the tumor to obtain free proximal margin\n27\n. TNM staging was performed according to the TNM 7th edition\n24\n. Clinical characteristics evaluated included American Society of\nAnesthesiologists (ASA) classification\n8\n, Charlson Comorbidity Index (CCI)\n4\n and laboratory tests. CCI was considered without inclusion of GC as\ncomorbidity. Surgical complications were graded according to Clavien-Dindo’s\nclassification\n7\n. Major complications were considered Clavien III-V. Hospital length of stay\nand the number of retrieved lymph nodes were evaluated. 
Surgical mortality was\nconsidered when it occurred in the first 30 days after surgery or during hospital\nstay after the procedure.\nThe postoperative follow-up was performed on a quarterly basis in the first year and\nevery six months in the following years. Follow-up tests for relapse detection were\nperformed based on the presence of symptoms. Absence in consultations for more than\n12 months was considered as loss of follow-up. All cases were operated in a\nhigh-volume center by specialized surgeons. The surgical technique, extension of\nresection and dissected lymph node chains followed the recommendations of the\nJapanese Gastric Cancer Association guidelines\n18\n. \n Statistical analysis The Chi-square test was used for categorical variables and t-tests for continuous\nvariables. Overall survival (OS) and disease-free survival (DFS) were estimated\nusing the method of Kaplan-Meier, and differences in survival were examined\nusing the Log Rank Test. Survival time, in months, was calculated from the date\nof surgery until the date of death/recurrence. The patients alive were censored\nat the date of last contact. All tests were two-sided and\np<0.05 was considered statistically significant. Analysis\nwas performed using SPSS software, version 18.0 (SPSS Inc, Chicago, IL).\nThe Chi-square test was used for categorical variables and t-tests for continuous\nvariables. Overall survival (OS) and disease-free survival (DFS) were estimated\nusing the method of Kaplan-Meier, and differences in survival were examined\nusing the Log Rank Test. Survival time, in months, was calculated from the date\nof surgery until the date of death/recurrence. The patients alive were censored\nat the date of last contact. All tests were two-sided and\np<0.05 was considered statistically significant. Analysis\nwas performed using SPSS software, version 18.0 (SPSS Inc, Chicago, IL).", "The Chi-square test was used for categorical variables and t-tests for continuous\nvariables. Overall survival (OS) and disease-free survival (DFS) were estimated\nusing the method of Kaplan-Meier, and differences in survival were examined\nusing the Log Rank Test. Survival time, in months, was calculated from the date\nof surgery until the date of death/recurrence. The patients alive were censored\nat the date of last contact. All tests were two-sided and\np<0.05 was considered statistically significant. Analysis\nwas performed using SPSS software, version 18.0 (SPSS Inc, Chicago, IL).", "Out of 1,003 GC patients operated in the period, surgical resection with curative\nintent was performed in 629 cases and palliative procedures in 230. A total of 113\npatients were resected with curative intent after chemotherapy and/or radiotherapy.\nFrom this, 16 were considered as conversion therapy (1.6%).\n\nTable 1 presents the clinicopathological\ncharacteristics of patients from the conversion group. Most patients had low ASA\nclassification score (I-II) and CCI (0-1). Tumors were mainly located at the distal\npart of the stomach (56.3%) and intestinal adenocarcinoma was the most common\nhistological subtype (43.8%). Considering the conversion group decision for\nchemo/radiotherapy was mainly based on radiologic exams (75%) and 4 patients (25%)\nwere deemed unresectable/incurable during surgery. 
The chemotherapy regimens varied, with a predominance of schemes based on the combination of a platin and a fluoropyrimidine.\n\nTABLE 1 Clinicopathological characteristics of conversion therapy (n = 16): Gender - female 8 (50%), male 8 (50%); Age (years) - mean 62.5 (range 48-80); CCI - 0-1: 10 (62.5%), >=1: 6 (37.5%); ASA - I-II: 12 (75%), III: 4 (25%); Tumor location - upper 1 (6.3%), middle 4 (25%), lower 9 (56.3%), total 2 (12.5%); Histological type - intestinal adenocarcinoma 7 (43.8%), diffuse adenocarcinoma 7 (37.5%), mixed adenocarcinoma 1 (6.3%), squamous cell carcinoma 1 (6.3%); Differentiation - well/moderately differentiated 9 (56.3%), poorly differentiated 7 (43.7%); Diagnosis of nonresectability - surgery 4 (25%), MRI 3 (18.7%), CT 9 (56.3%); Preoperative treatment - Capecitabine + Cisplatin (XP) 5 (31.3%), modified FLOX (mFLOX) 5 (31.3%), Capecitabine + Oxaliplatin (XELOX) 1 (6.3%), FOLFIRINOX 1 (6.3%), Carboplatin + Paclitaxel 1 (6.3%), Cisplatin + Irinotecan 2 (12.5%), Radiotherapy (RDT) 1 (6.3%).\n\nTable 2 presents the surgical results. Combined organ resection was performed in 9 cases (56.3%), and in 4 of them more than 1 adjacent organ was resected. The liver and pancreas were resected in 5 cases and the spleen and colon in 4. R0 resection was achieved in 13 cases (81.3%). The ypT4 category occurred in 8 patients (50%). The mean number of retrieved lymph nodes was 35.5, and 4 cases (25%) had no LNM. Only 2 cases were pathological stage IV. Four patients (25%) had major surgical complications and 1 (6.3%) died.\n\nTABLE 2 Surgical results of conversion therapy (n = 16): Type of resection - subtotal 8 (50%), total 8 (50%); Lymphadenectomy - D1: 3 (18.7%), D2: 13 (81.3%); Combined resection - no 7 (43.8%), yes 9 (56.3%); Residual disease - R0: 13 (81.3%), R1/R2: 3 (18.7%); ypT - pT0/pT1: 2 (12.5%), pT2: 1 (6.3%), pT3: 5 (31.3%), pT4a: 5 (31.3%), pT4b: 3 (18.7%); ypN - pN0: 4 (25%), pN1: 5 (31.3%), pN2: 1 (6.3%), pN3: 6 (37.5%); ypTNM - I: 1 (6.3%), II: 4 (25%), III: 9 (56.3%), IV: 2 (12.5%); Surgical complication - none/Clavien I-II: 11 (68.7%), Clavien III-IV: 4 (25%), Clavien V: 1 (6.3%); Recurrence - no 5 (31.3%), yes 11 (68.7%); Death - no 7 (43.8%), yes 9 (56.3%).\n\nThe median follow-up was 8.9 months (mean = 16.2, standard deviation = 22.3). Eleven patients (68.8%) had recurrence and 9 (56.3%) died. Two patients had long-term survival without recurrence: one had local invasion of the pancreas and liver, and the other had invasion of the pancreas and duodenum and a gastrocutaneous fistula due to abdominal wall invasion. The characteristics of the patients and the survival results are demonstrated in Table 3.\n\nTABLE 3 Outcomes of conversion therapy (DFS/OS in months; TG = total gastrectomy; STG = subtotal gastrectomy): case 1 - T4b, XP, TG + D2, recurrence peritoneum/liver, loss of follow-up, DFS 14, OS 21.7; case 2 - LNM, mFLOX, STG + D2, peritoneum, dead, 4.2, 4.4; case 3 - T4b, Cis + Irino, TG + D2, no recurrence, alive, 91.3, 91.3; case 4 - T4b + LNM, RDT, TG + D1, no recurrence, dead, 0.7, 0.7; case 5 - T4b + LNM, mFLOX, TG + D2, peritoneum/LN, dead, 7.4, 9.8; case 6 - T4b, mFLOX, TG + D1, no recurrence, alive, 22.3, 22.3; case 7 - LNM, mFLOX, STG + D2, bone, dead, 3.6, 8; case 8 - T4b, XELOX, STG + D2, no recurrence, alive, 3, 3; case 9 - T4b + carcinomatosis, XP, STG + D1, bone, dead, 11.1, 11.3; case 10 - T4b + gastrocutaneous fistula, Cis + Irino, STG + D2, no recurrence, alive, 40.6, 40.6; case 11 - LNM, XP, STG + D2, liver, dead, 3.5, 3.8; case 12 - T4b + LNM, mFLOX, STG + D2, peritoneum, dead, 5.3, 5.8; case 13 - LNM, XP, STG + D2, LN, alive, 18.3, 18.3; case 14 - liver metastasis, XP, TG + D2, liver, dead, 0, 16.2; case 15 - T4b, Taxol + Carbo, TG + D2, liver/LN, alive, 2.7, 5.2; case 16 - T4b + LNM, FOLFIRINOX, TG + D2, peritoneum, dead, 0, 6.5.\n\nSurvival analysis of all 1,003 GC patients submitted to any surgical procedure demonstrated that, by clinical stage, OS in the conversion group was higher than in stage IV patients not submitted to conversion therapy (43.8% vs. 27%, p=0.037; Figure 1: Kaplan-Meier overall survival curves according to clinical stage compared with the conversion therapy group). The median OS for stage IV was 7 months, compared with 11.3 months in the conversion group. Furthermore, there were no significant differences in survival between stage III patients (52.3%, median OS = 27 months) and the conversion group (p=0.222). Regarding survival by the intention of the surgical procedure, patients who underwent conversion therapy showed a trend toward better OS than those submitted to palliative procedures (43.8% vs. 27.9%, p=0.054; Figure 2: Kaplan-Meier overall survival curves according to the indication of surgical treatment). The median OS was 11.3 and 7.9 months for the conversion and palliative groups, respectively. The standard curative treatment group had a significantly higher OS rate than the palliative patients (who served as the reference group), with an OS rate of 73.2% (p<0.001).", "Conversion therapy is an attempt to turn an incurable or unresectable/marginally resectable disease into a curable one. The concept and definition are often mixed up and confused with other indications in GC management, especially with neoadjuvant chemotherapy. The latter is indicated for resectable tumors, aiming to downstage the lesion and reduce LNM and micrometastasis in order to improve survival. Palliative surgery is indicated based on the presence of symptoms, mainly bleeding and obstruction [14]. Cytoreductive surgery is the resection of an asymptomatic patient with disseminated disease [12]. Neither cytoreductive nor palliative surgery is intended to cure, but rather to improve quality of life and/or prolong survival. The recent results of the REGATTA trial suggest that, in metastatic patients, cytoreductive surgery without prior chemotherapy did not offer survival benefits compared with palliative chemotherapy [12]. Salvage surgery is the procedure indicated for recurrence after definitive chemotherapy and/or radiotherapy; it is mainly related to esophageal tumors [21]. It seems clear that conversion therapy may have characteristics of all these definitions, but its main objective is to achieve an R0 resection and the cure of patients who were previously considered incurable.
Nevertheless, some controversies regarding its definition persist. Some consider as conversion therapy any gastrectomy performed after prior palliative chemotherapy for unresectable GC [19, 20]. Additionally, distant LNM are most of the time not technically unresectable, but they may also be included in the conversion group, as in the present study [9, 20]. It is also extremely difficult to define what a marginally resectable or even unresectable tumor is, and this varies considerably even among surgeons. This lack of a standardized definition makes it difficult to compare studies. We were able to identify 16 patients who fitted the criteria for conversion therapy. Patients were younger and had fewer comorbidities (most were ASA I-II with a CCI of 0-1) than in previous reports from our institution [23]. This reflects the ability of younger and healthier patients to endure the drawbacks of chemotherapy, optimizing its results so as to enable surgical resection. Major complications occurred in 25% of the cases, with 1 surgical death. Indeed, we expected a higher morbimortality rate given the complexity of these procedures, with 9 cases submitted to combined resection of other organs [23]. According to Yoshida et al. [26], stage IV patients can be divided into four groups. The division is based on the presence of peritoneal disease, systemic metastasis, lymph node metastasis and the resectability of the tumor. Type 1 tumors are oncologically stage IV but have technically resectable metastases, without the need for any chemotherapy regimen to downstage the tumor; this relates mainly to single liver metastasis, positive peritoneal cytology and distant LNM. In this group, chemotherapy administered before surgery can even be considered neoadjuvant. As this situation is not common, we considered it conversion therapy in our analysis. Type 2 tumors have more than two liver metastases, distant LNM or a primary lesion larger than 5 cm located close to the hepatic and/or portal vein. Patients with peritoneal dissemination (types 3 and 4) are considered to have the worst prognosis. In this case series, we performed the procedure in only one case with peritoneal metastasis, with an unfavorable outcome. This poor result has also been reported by other authors [9, 25]. It must be highlighted that we did not add any kind of peritoneal chemotherapy to our procedure. Recently, the use of peritoneal chemotherapy and HIPEC has been attempted in this population [3]. Until now, there is no definite evidence of its effectiveness, but its use may increase the indication and the number of cases amenable to conversion therapy [1, 17]. We had two cases with favorable long-term results, with OS over 40 months. Both were considered marginally resectable due to locally advanced tumors (Yoshida type 2). OS curves according to clinical stage demonstrated a slight improvement with conversion therapy in relation to clinical stage IV tumors during the first two years. However, as we have only 16 cases of conversion therapy, those two long-term survivors have an important effect on the survival curve after two years; it even crosses the stage III curve. The same effect appears on the survival curve according to the intention of surgical treatment. Despite the statistical significance, the key point is to identify who will be the long-term survivor; otherwise, patients do only slightly better than clinical stage IV and palliative procedure patients. Marginally resectable tumors are probably the most favorable indication for conversion therapy. However, this carries a high risk of classification bias. What is a marginally resectable tumor? A clear consensus and definition are still lacking. Even for pancreatic cancer, which has long used the term “borderline resectable”, there are several different definitions [16]. Thus, the inclusion of many “minor” marginally resectable tumors in the conversion therapy group may erroneously improve its outcomes. Additionally, the results of neoadjuvant therapy may also be falsely optimized by transferring these “borderline” patients to the conversion therapy group. This must be taken into account considering that most studies of conversion therapy are retrospective. Therefore, the indication for preoperative chemotherapy, neoadjuvant or conversion, must be defined and reported before starting treatment. Different chemotherapy regimens were used in our study. This reflects the different perspectives of the patients when they started palliative chemotherapy; the analysis of a nine-year period also plays a role in the variety of regimens adopted. Cisplatin and oxaliplatin, as well as 5-fluorouracil and capecitabine, have been shown to be equally effective in advanced disease [5]. The cisplatin and irinotecan combination has demonstrated efficacy in a single-arm phase II trial [15], although it appears to be inferior to the platin and fluoropyrimidine combination in a randomized phase 2 study [10]. Given the several possible combinations, many factors should be taken into account when choosing the chemotherapy backbone: comorbidities, performance status, infusion pump availability, ability to swallow tablets, and availability to come to the center for treatment. A major limitation of this study is the small number of patients included. Additionally, it is not possible to quantify the total number of patients who underwent palliative treatment and could have been considered candidates for conversion; therefore, the rate of patients who successfully complete conversion therapy is unknown, determining a relevant selection bias. Previous studies have reported rates between 26 and 32.4% [13, 20, 25]. Prospective trials with clear inclusion/exclusion criteria are needed to answer this question; a conversion therapy protocol was recently designed in our institution to address this issue. Another limitation is that our palliative group, used in the survival comparison, comprised only patients submitted to surgical palliative procedures because of symptoms; asymptomatic patients who received exclusive palliative chemotherapy were not included in the analysis. In summary, our results suggest that conversion therapy should be considered with caution. The rationale of conversion therapy and the reports of good clinical outcomes in these patients with limited perspectives encourage its prompt adoption. However, definitive unbiased data to corroborate its effectiveness and define the best candidates are still needed. Since the number of candidate patients for this therapy is too small to conduct a randomized clinical trial, case series reports such as our study represent the current option to analyze and gather data.", "Conversion therapy may offer the possibility of surgical resection with long-term survival to a group of patients initially considered beyond therapeutic possibility. However, definitions of the best treatment regimen, the diagnostic criteria for unresectability and which group of patients benefits from this modality are still needed." ]
[ "intro", "methods", null, "results", "discussion", "conclusions" ]
[ "Stomach neoplasms", "Neoadjuvant therapy", "Gastrectomy", "Neoplasias gástricas", "Terapia neoadjuvante", "Gastrectomia" ]
INTRODUCTION: Gastric cancer (GC) is the fifth most common cancer in the world. It is estimated that almost one million (952,000) new cases occurred worldwide in 2012 [11]. Surgery remains the main curative treatment option, and gastrectomy with D2 lymphadenectomy is considered the standard surgical treatment for locally advanced GC. Unfortunately, many patients already have locally unresectable tumors or signs of systemic disease at the time of diagnosis [22]. For these clinical stage IV patients, palliative chemotherapy represents the current standard of care [18]. Recently, conversion therapy has emerged as an alternative for these stage IV patients [26]. It consists of the administration of chemotherapy followed by surgery in stage IV patients, and is also referred to as the combination of induction chemotherapy and “adjuvant” surgery. This option can be indicated to treat unresectable or marginally resectable lesions, patients with distant lymph node metastasis (LNM) and even those with metastatic disease or peritoneal dissemination. In recent years, the development and improvement of chemotherapy regimens and of molecular targeting agents based on molecular markers have dramatically improved response rates [2, 6]. Thus, it has become increasingly common for surgeons to reassess patients initially labeled as non-candidates for curative resection who present a completely different disease after initial palliative chemotherapy. This new scenario has brought conversion therapy to the primetime discussion of GC treatment. However, the clinical value of such a multimodal strategy for stage IV GC remains controversial, with few reports from western countries and very conflicting definitions of its use that may impair a clear analysis of its results. Thus, the aim of this study was to analyze the results of patients submitted to conversion therapy in our institution. METHODS: The study was approved by the hospital ethics committee (NP993/16) and registered online (www.plataformabrasil.com; CAAE: 2915516.2.0000.0065). We reviewed our prospective database, selecting all patients submitted to any surgical procedure for gastric adenocarcinoma from 2008 to 2018. Subsequently, patients who received chemotherapy or radiotherapy followed by gastric resection were selected. Conversion therapy was defined as applying to patients who were considered unresectable or marginally resectable and/or had disseminated disease at initial staging and were referred for initial chemo- and/or radiation therapy. Those who had a partial or complete response at re-assessment were indicated for surgery and considered the conversion therapy group. Patients were staged preoperatively through abdominal and pelvic computed tomography, endoscopy and laboratory tests. The extension of gastric resection (total vs. subtotal) was based on the location of the tumor so as to obtain a free proximal margin [27]. TNM staging was performed according to the TNM 7th edition [24]. Clinical characteristics evaluated included the American Society of Anesthesiologists (ASA) classification [8], the Charlson Comorbidity Index (CCI) [4] and laboratory tests. The CCI was calculated without including GC as a comorbidity. Surgical complications were graded according to the Clavien-Dindo classification [7]; major complications were considered Clavien III-V. Hospital length of stay and the number of retrieved lymph nodes were evaluated. Surgical mortality was considered when death occurred in the first 30 days after surgery or during the hospital stay after the procedure.
The postoperative follow-up was performed on a quarterly basis in the first year and every six months in the following years. Follow-up tests for relapse detection were performed based on the presence of symptoms. Absence from consultations for more than 12 months was considered loss of follow-up. All cases were operated in a high-volume center by specialized surgeons. The surgical technique, extension of resection and dissected lymph node chains followed the recommendations of the Japanese Gastric Cancer Association guidelines [18]. Statistical analysis: The Chi-square test was used for categorical variables and t-tests for continuous variables. Overall survival (OS) and disease-free survival (DFS) were estimated using the Kaplan-Meier method, and differences in survival were examined using the log-rank test. Survival time, in months, was calculated from the date of surgery until the date of death/recurrence; patients who were alive were censored at the date of last contact. All tests were two-sided and p<0.05 was considered statistically significant. Analysis was performed using SPSS software, version 18.0 (SPSS Inc, Chicago, IL). RESULTS: Out of 1,003 GC patients operated on in the period, surgical resection with curative intent was performed in 629 cases and palliative procedures in 230. A total of 113 patients were resected with curative intent after chemotherapy and/or radiotherapy; of these, 16 were considered conversion therapy (1.6%). Table 1 presents the clinicopathological characteristics of the patients in the conversion group. Most patients had a low ASA classification (I-II) and CCI (0-1). Tumors were mainly located in the distal part of the stomach (56.3%), and intestinal adenocarcinoma was the most common histological subtype (43.8%). In the conversion group, the decision for chemo/radiotherapy was mainly based on radiologic exams (75%); 4 patients (25%) were deemed unresectable/incurable during surgery. The chemotherapy regimens varied, with a predominance of schemes based on the combination of a platin and a fluoropyrimidine.
TABLE 1 - Clinicopathological characteristics of conversion therapy (n=16)
Gender: Female 8 (50%); Male 8 (50%)
Age (years): mean 62.5 (range 48-80)
Charlson Comorbidity Index (CCI): 0-1: 10 (62.5%); >=2: 6 (37.5%)
ASA: I-II 12 (75%); III 4 (25%)
Location of tumor: Upper 1 (6.3%); Middle 4 (25%); Lower 9 (56.3%); Total 2 (12.5%)
Histological type: Intestinal adenocarcinoma 7 (43.8%); Diffuse adenocarcinoma 7 (43.8%); Mixed adenocarcinoma 1 (6.3%); Squamous cell carcinoma 1 (6.3%)
Degree of histological differentiation: Well/moderately differentiated 9 (56.3%); Poorly differentiated 7 (43.7%)
Diagnosis of nonresectability: Surgery 4 (25%); MRI 3 (18.7%); CT 9 (56.3%)
Preoperative treatment: Capecitabine + Cisplatin (XP) 5 (31.3%); modified FLOX (mFLOX) 5 (31.3%); Capecitabine + Oxaliplatin (XELOX) 1 (6.3%); FOLFIRINOX 1 (6.3%); Carboplatin + Paclitaxel 1 (6.3%); Cisplatin + Irinotecan 2 (12.5%); Radiotherapy (RDT) 1 (6.3%)

Table 2 presents the surgical results. Combined organ resection was performed in 9 cases (56.3%), and in 4 of them more than one adjacent organ was resected. Liver and pancreas were resected in 5 cases, and spleen and colon in 4. R0 resection was achieved in 13 cases (81.3%). The ypT4 category occurred in 8 patients (50%). The mean number of retrieved lymph nodes was 35.5, and 4 cases (25%) had no LNM. Only 2 cases were pathological stage IV. Four patients (25%) had major surgical complications and 1 (6.3%) died.

TABLE 2 - Surgical results of conversion therapy (n=16)
Type of resection: Subtotal 8 (50%); Total 8 (50%)
Lymphadenectomy: D1 3 (18.7%); D2 13 (81.3%)
Combined resection: No 7 (43.8%); Yes 9 (56.3%)
Residual disease: R0 13 (81.3%); R1/R2 3 (18.7%)
ypT: pT0/pT1 2 (12.5%); pT2 1 (6.3%); pT3 5 (31.3%); pT4a 5 (31.3%); pT4b 3 (18.7%)
ypN: pN0 4 (25%); pN1 5 (31.3%); pN2 1 (6.3%); pN3 6 (37.5%)
ypTNM: I 1 (6.3%); II 4 (25%); III 9 (56.3%); IV 2 (12.5%)
Surgical complication: None/Clavien I-II 11 (68.7%); Clavien III-IV 4 (25%); Clavien V 1 (6.3%)
Recurrence: No 5 (31.3%); Yes 11 (68.7%)
Death: No 7 (43.8%); Yes 9 (56.3%)

The median follow-up was 8.9 months (mean=16.2, standard deviation=22.3). Eleven patients (68.8%) had recurrence and 9 (56.3%) died. Two patients had long-term survival without recurrence: one had local invasion of the pancreas and liver, and the other had invasion of the pancreas and duodenum and a gastrocutaneous fistula due to abdominal wall invasion. Characteristics of the patients and survival results are shown in Table 3.

TABLE 3 - Outcomes of conversion therapy
Case | Incurable factor | QT regimen | Surgery | Recurrence | Status | DFS* | OS*
1 | T4b | XP | TG + D2 | peritoneum/liver | loss of follow-up | 14 | 21.7
2 | LNM | mFLOX | STG + D2 | peritoneum | dead | 4.2 | 4.4
3 | T4b | Cis + Irino | TG + D2 | - | alive | 91.3 | 91.3
4 | T4b, LNM | RDT | TG + D1 | - | dead | 0.7 | 0.7
5 | T4b, LNM | mFLOX | TG + D2 | peritoneum/LN | dead | 7.4 | 9.8
6 | T4b | mFLOX | TG + D1 | - | alive | 22.3 | 22.3
7 | LNM | mFLOX | STG + D2 | bone | dead | 3.6 | 8
8 | T4b | XELOX | STG + D2 | - | alive | 3 | 3
9 | T4b, carcinomatosis | XP | STG + D1 | bone | dead | 11.1 | 11.3
10 | T4b, gastrocutaneous fistula | Cis + Irino | STG + D2 | - | alive | 40.6 | 40.6
11 | LNM | XP | STG + D2 | liver | dead | 3.5 | 3.8
12 | T4b, LNM | mFLOX | STG + D2 | peritoneum | dead | 5.3 | 5.8
13 | LNM | XP | STG + D2 | LN | alive | 18.3 | 18.3
14 | Liver metastasis | XP | TG + D2 | liver | dead | 0 | 16.2
15 | T4b | Taxol + Carbo | TG + D2 | liver/LN | alive | 2.7 | 5.2
16 | T4b, LNM | FOLFIRINOX | TG + D2 | peritoneum | dead | 0 | 6.5
*months; TG = total gastrectomy; STG = subtotal gastrectomy

Survival analysis of all 1,003 GC patients submitted to any surgical procedure demonstrated that, according to clinical stage, the OS of the conversion group was higher than that of stage IV patients not submitted to conversion therapy (43.8% vs. 27%, p=0.037, Figure 1). The median OS for stage IV was 7 months, compared with 11.3 months in the conversion group. 
Furthermore, there were no significant differences in the survival rates between stage III patients (52.3%, median OS=27 months) and the conversion group (p=0.222). FIGURE 1 - Kaplan-Meier overall survival curves according to clinical stage compared with the conversion therapy group. Regarding survival and the intention of the surgical procedures, patients who underwent conversion therapy had a trend toward better OS than those submitted to palliative procedures (43.8% vs. 27.9%, p=0.054, Figure 2). The median OS was 11.3 and 7.9 months for the conversion and palliative groups, respectively. The standard curative treatment group, with an OS rate of 73.2%, had a significantly higher OS than palliative patients, who served as the reference group (p<0.001). FIGURE 2 - Kaplan-Meier overall survival curves according to the indication of surgical treatment. DISCUSSION: Conversion therapy is an attempt to turn an incurable or unresectable/marginally resectable disease into a curable one. The concept and definition are often mixed up and confused with other indications in GC management, especially neoadjuvant chemotherapy. The latter is indicated for resectable tumors, aiming to downstage the lesion and reduce LNM and micrometastasis in order to improve survival. Palliative surgery is indicated based on the presence of symptoms, mainly bleeding and obstruction 14 . Cytoreductive surgery is the resection of an asymptomatic patient with disseminated disease 12 . Neither cytoreductive nor palliative surgery is intended to cure, but rather to improve quality of life and/or prolong survival. The recent results of the REGATTA trial suggest that, in metastatic patients, cytoreductive surgery without prior chemotherapy did not offer a survival benefit compared with palliative chemotherapy 12 . Salvage surgery is the procedure indicated for recurrence after definitive chemotherapy and/or radiotherapy treatment; it is mainly related to esophageal tumors 21 . It seems clear that conversion therapy may share characteristics with all these definitions, but its main objective is to achieve an R0 resection and the cure of patients previously considered incurable. Nevertheless, some controversies regarding its definition persist. Some consider as conversion therapy any gastrectomy performed after prior palliative chemotherapy for unresectable GC 19 , 20 . Additionally, distant LNM are most of the time not technically unresectable, but they may also be included in the conversion group, as in the present study 9 , 20 . It is also extremely difficult to define what constitutes a marginally resectable or even unresectable tumor, and this varies considerably among surgeons. This lack of a standardized definition makes it difficult to compare studies. We were able to identify 16 patients who fitted the criteria for conversion therapy. Patients were younger and had fewer comorbidities (most were ASA I-II with a CCI of 0-1) than in previous reports from our institution 23 . This reflects the ability of younger and healthier patients to endure the drawbacks of chemotherapy, optimizing its results and enabling surgical resection. Major complications occurred in 25% of the cases, with 1 surgical death. Indeed, we expected a higher morbidity and mortality rate given the complexity of these procedures, with 9 cases submitted to combined resection of other organs 23 . According to Yoshida et al. 26 , stage IV patients can be divided into four groups. 
The division is based on the presence of peritoneal disease, systemic metastasis, lymph node metastasis and resectability of the tumor. Type 1 tumors are defined as oncologically stage IV tumors with technically resectable metastasis, without the need for any chemotherapy regimen to downstage the tumor. This mainly relates to a single liver metastasis, positive peritoneal cytology and distant LNM. In this group, the administration of chemotherapy prior to surgery can even be considered neoadjuvant. As this situation is not common, we considered it conversion therapy in our analysis. Type 2 tumors have more than two liver metastases, distant LNM or a primary lesion larger than 5 cm located close to the hepatic and/or portal vein. Patients with peritoneal dissemination (types 3 and 4) are considered to have the worst prognosis. In this case series, we performed the procedure in only one case with peritoneal metastasis, with an unfavorable outcome. This poor result is also reported by other authors 9 , 25 . It must be highlighted that we did not add any kind of peritoneal chemotherapy to our procedure. Recently, the use of peritoneal chemotherapy and HIPEC has been attempted in this population 3 . To date, there is no definitive evidence of its effectiveness, but its use may increase the indication and the number of cases amenable to conversion therapy 1 , 17 . We had two cases with favorable long-term results, with OS over 40 months. Both were considered marginally resectable due to locally advanced tumors (Yoshida type 2). OS curves according to clinical stage demonstrated a slight improvement with conversion therapy in relation to clinical stage IV tumors during the first two years. However, as we have only 16 cases of conversion therapy, those two long-term survivors have an important effect on the survival curve after two years; it even crosses the stage III curve. The same effect happens on the survival curve according to the intention of surgical treatment. Despite the statistical significance, the key point is clearly to identify who will be the long-term survivor; otherwise, these patients do only slightly better than clinical stage IV and palliative procedure patients. Marginally resectable tumors are probably the most favorable indication for conversion therapy. However, this indication carries a high risk of classification bias. What is a marginally resectable tumor? A clear consensus and definition are still lacking. Even for pancreatic cancer, which has long used the term “borderline resectable”, there are several different definitions 16 . Thus, the inclusion of many “minor” marginally resectable tumors in the conversion therapy group may erroneously improve its outcomes. Additionally, the results of neoadjuvant therapy may also be falsely optimized by transferring these “borderline” patients to the conversion therapy group. This must be taken into account, considering that most studies related to conversion therapy are retrospective. Therefore, the indication for preoperative chemotherapy, neoadjuvant or conversion, must be defined and reported before starting treatment. Different chemotherapy regimens were used in our study. This reflects the different perspectives of the patients when they started palliative chemotherapy. The analysis of a nine-year period also plays a role in the variety of regimens adopted. Cisplatin and oxaliplatin, as well as 5-fluorouracil and capecitabine, have been shown to be equally effective in advanced disease 5 . 
Cisplatin and irinotecan combination has demonstrated efficacy in a single-arm phase II trial 15 , although it appears to be inferior to the platinum and fluoropyrimidine combination in a randomized phase II study 10 . Given that there are several possible combinations, many factors should be taken into account when choosing the chemotherapy backbone: comorbidities, performance status, infusion pump availability, ability to swallow tablets, and availability to come to the center for treatment. A major limitation of this study is the small number of patients included. Additionally, it is not possible to quantify the total number of patients who underwent palliative treatment and could be considered candidates for conversion. Therefore, the rate of patients who successfully complete conversion therapy is unknown, which introduces a relevant selection bias. Previous studies have reported rates between 26% and 32.4% 13 , 20 , 25 . Prospective trials with clear inclusion/exclusion criteria are needed to answer this question. A conversion therapy protocol was recently designed in our institution to address this issue. Another limitation is that our palliative group, used in the survival analysis comparison, is formed only by patients submitted to palliative surgical procedures due to the presence of symptoms. Asymptomatic patients who received exclusively palliative chemotherapy were not included in the analysis. In summary, our results suggest that conversion therapy should be considered with caution. The rationale of conversion therapy and the reports of good clinical outcomes in these patients with limited perspectives encourage its prompt adoption. However, definitive unbiased data to corroborate its effectiveness and define the best candidates are still needed. Since the number of candidate patients for this therapy is too small to conduct a randomized clinical trial, case series reports, such as our study, represent the current option to analyze and gather data. CONCLUSION: Conversion therapy may offer the possibility of surgical resection with long-term survival to a group of patients initially considered beyond therapeutic possibility. However, definitions regarding the best treatment regimen, the diagnostic criteria of unresectability and which group of patients benefits from this modality are still needed.
Background: Conversion therapy in gastric cancer (GC) is defined as the use of chemotherapy/radiotherapy followed by surgical resection with curative intent of a tumor previously considered unresectable or oncologically incurable. Methods: Retrospective analysis of all GC surgeries between 2009 and 2018. Patients who received any therapy before surgery were further identified to define the conversion group. Results: Out of 1003 surgeries performed for GC, 113 cases underwent neoadjuvant treatment and 16 (1.6%) were considered conversion therapy. The main indications for treatment were: T4b lesions (n=10), lymph node metastasis (n=4), and peritoneal carcinomatosis and hepatic metastasis in one case each. The diagnosis was made by imaging in 12 cases (75%) and during the surgical procedure in four (25%). The most commonly used chemotherapy regimens were XP and mFLOX. Major surgical complications occurred in four cases (25%) and one patient (6.3%) died. After an average follow-up of 20 months, 11 patients (68.7%) had recurrence and nine (56.3%) died. Prolonged recurrence-free survival over 40 months occurred in two cases. Conclusions: Conversion therapy may offer the possibility of prolonged survival for a group of GC patients initially considered beyond therapeutic possibility.
INTRODUCTION: Gastric cancer (GC) is the fifth most common cancer in the world. It is estimated that almost one million (952,000) new cases occurred worldwide in 2012 11 . Surgery remains the main curative treatment option, and gastrectomy with D2 lymphadenectomy is considered the standard surgical treatment for locally advanced GC. Unfortunately, many patients already have locally unresectable tumors or signs of systemic disease at the time of diagnosis 22 . For those clinical stage IV patients, palliative chemotherapy represents the current standard of care 18 . Recently, conversion therapy has emerged as an alternative for these stage IV patients 26 . It consists of the administration of chemotherapy followed by surgery in stage IV patients, and is also referred to as the combination of induction chemotherapy and “adjuvant” surgery. This option can be indicated to treat unresectable or marginally resectable lesions, patients with distant lymph node metastasis (LNM), and even those with metastatic disease or peritoneal dissemination. In recent years, the development and improvement of chemotherapy regimens and of molecular targeting agents based on molecular markers have dramatically improved response rates 2 , 6 . Thus, it has become increasingly common for surgeons to reassess patients initially labeled as non-candidates for curative resection who present a completely different disease after initial palliative chemotherapy. This new scenario has brought conversion therapy to the primetime discussion of GC treatment. However, the clinical value of such a multimodal strategy for stage IV GC remains controversial, with few reports from Western countries and conflicting definitions of its use that may impair a clear analysis of its results. Thus, the aim of this study was to analyze the results of patients submitted to conversion therapy in our institution. CONCLUSION: Conversion therapy may offer the possibility of surgical resection with long-term survival to a group of patients initially considered beyond therapeutic possibility. However, definitions regarding the best treatment regimen, the diagnostic criteria of unresectability and which group of patients benefits from this modality are still needed.
Background: Conversion therapy in gastric cancer (GC) is defined as the use of chemotherapy/radiotherapy followed by surgical resection with curative intent of a tumor previously considered unresectable or oncologically incurable. Methods: Retrospective analysis of all GC surgeries between 2009 and 2018. Patients who received any therapy before surgery were further identified to define the conversion group. Results: Out of 1003 surgeries performed for GC, 113 cases underwent neoadjuvant treatment and 16 (1.6%) were considered conversion therapy. The main indications for treatment were: T4b lesions (n=10), lymph node metastasis (n=4), and peritoneal carcinomatosis and hepatic metastasis in one case each. The diagnosis was made by imaging in 12 cases (75%) and during the surgical procedure in four (25%). The most commonly used chemotherapy regimens were XP and mFLOX. Major surgical complications occurred in four cases (25%) and one patient (6.3%) died. After an average follow-up of 20 months, 11 patients (68.7%) had recurrence and nine (56.3%) died. Prolonged recurrence-free survival over 40 months occurred in two cases. Conclusions: Conversion therapy may offer the possibility of prolonged survival for a group of GC patients initially considered beyond therapeutic possibility.
3,660
247
[ 124 ]
6
[ "patients", "conversion", "therapy", "conversion therapy", "survival", "chemotherapy", "considered", "group", "palliative", "surgical" ]
[ "chemotherapy neoadjuvant conversion", "palliative chemotherapy 12", "palliative chemotherapy unresectable", "initial palliative chemotherapy", "compared palliative chemotherapy" ]
[CONTENT] Stomach neoplasms | Neoadjuvant therapy | Gastrectomy | Neoplasias gástricas | Terapia neoadjuvante | Gastrectomia [SUMMARY]
[CONTENT] Stomach neoplasms | Neoadjuvant therapy | Gastrectomy | Neoplasias gástricas | Terapia neoadjuvante | Gastrectomia [SUMMARY]
[CONTENT] Stomach neoplasms | Neoadjuvant therapy | Gastrectomy | Neoplasias gástricas | Terapia neoadjuvante | Gastrectomia [SUMMARY]
[CONTENT] Stomach neoplasms | Neoadjuvant therapy | Gastrectomy | Neoplasias gástricas | Terapia neoadjuvante | Gastrectomia [SUMMARY]
[CONTENT] Stomach neoplasms | Neoadjuvant therapy | Gastrectomy | Neoplasias gástricas | Terapia neoadjuvante | Gastrectomia [SUMMARY]
[CONTENT] Stomach neoplasms | Neoadjuvant therapy | Gastrectomy | Neoplasias gástricas | Terapia neoadjuvante | Gastrectomia [SUMMARY]
[CONTENT] Adenocarcinoma | Aged | Aged, 80 and over | Carcinoma | Female | Humans | Kaplan-Meier Estimate | Male | Middle Aged | Neoplasm Recurrence, Local | Palliative Care | Retrospective Studies | Sex Distribution | Stomach Neoplasms | Time Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adenocarcinoma | Aged | Aged, 80 and over | Carcinoma | Female | Humans | Kaplan-Meier Estimate | Male | Middle Aged | Neoplasm Recurrence, Local | Palliative Care | Retrospective Studies | Sex Distribution | Stomach Neoplasms | Time Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adenocarcinoma | Aged | Aged, 80 and over | Carcinoma | Female | Humans | Kaplan-Meier Estimate | Male | Middle Aged | Neoplasm Recurrence, Local | Palliative Care | Retrospective Studies | Sex Distribution | Stomach Neoplasms | Time Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adenocarcinoma | Aged | Aged, 80 and over | Carcinoma | Female | Humans | Kaplan-Meier Estimate | Male | Middle Aged | Neoplasm Recurrence, Local | Palliative Care | Retrospective Studies | Sex Distribution | Stomach Neoplasms | Time Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adenocarcinoma | Aged | Aged, 80 and over | Carcinoma | Female | Humans | Kaplan-Meier Estimate | Male | Middle Aged | Neoplasm Recurrence, Local | Palliative Care | Retrospective Studies | Sex Distribution | Stomach Neoplasms | Time Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adenocarcinoma | Aged | Aged, 80 and over | Carcinoma | Female | Humans | Kaplan-Meier Estimate | Male | Middle Aged | Neoplasm Recurrence, Local | Palliative Care | Retrospective Studies | Sex Distribution | Stomach Neoplasms | Time Factors | Treatment Outcome [SUMMARY]
[CONTENT] chemotherapy neoadjuvant conversion | palliative chemotherapy 12 | palliative chemotherapy unresectable | initial palliative chemotherapy | compared palliative chemotherapy [SUMMARY]
[CONTENT] chemotherapy neoadjuvant conversion | palliative chemotherapy 12 | palliative chemotherapy unresectable | initial palliative chemotherapy | compared palliative chemotherapy [SUMMARY]
[CONTENT] chemotherapy neoadjuvant conversion | palliative chemotherapy 12 | palliative chemotherapy unresectable | initial palliative chemotherapy | compared palliative chemotherapy [SUMMARY]
[CONTENT] chemotherapy neoadjuvant conversion | palliative chemotherapy 12 | palliative chemotherapy unresectable | initial palliative chemotherapy | compared palliative chemotherapy [SUMMARY]
[CONTENT] chemotherapy neoadjuvant conversion | palliative chemotherapy 12 | palliative chemotherapy unresectable | initial palliative chemotherapy | compared palliative chemotherapy [SUMMARY]
[CONTENT] chemotherapy neoadjuvant conversion | palliative chemotherapy 12 | palliative chemotherapy unresectable | initial palliative chemotherapy | compared palliative chemotherapy [SUMMARY]
[CONTENT] patients | conversion | therapy | conversion therapy | survival | chemotherapy | considered | group | palliative | surgical [SUMMARY]
[CONTENT] patients | conversion | therapy | conversion therapy | survival | chemotherapy | considered | group | palliative | surgical [SUMMARY]
[CONTENT] patients | conversion | therapy | conversion therapy | survival | chemotherapy | considered | group | palliative | surgical [SUMMARY]
[CONTENT] patients | conversion | therapy | conversion therapy | survival | chemotherapy | considered | group | palliative | surgical [SUMMARY]
[CONTENT] patients | conversion | therapy | conversion therapy | survival | chemotherapy | considered | group | palliative | surgical [SUMMARY]
[CONTENT] patients | conversion | therapy | conversion therapy | survival | chemotherapy | considered | group | palliative | surgical [SUMMARY]
[CONTENT] patients | chemotherapy | stage iv | stage | iv | gc | stage iv patients | iv patients | therapy | molecular [SUMMARY]
[CONTENT] tests | date | survival | considered | test | spss | gastric | hospital | performed | variables [SUMMARY]
[CONTENT] conversion | patients | table | group | os | 16 | months | conversion group | median | figure [SUMMARY]
[CONTENT] possibility | group patients | group | treatment regimen | possibility surgical resection long | definitions best treatment | definitions best | treatment regimen diagnostic criteria | offer possibility | offer possibility surgical [SUMMARY]
[CONTENT] patients | survival | conversion | therapy | conversion therapy | date | chemotherapy | group | tests | stage [SUMMARY]
[CONTENT] patients | survival | conversion | therapy | conversion therapy | date | chemotherapy | group | tests | stage [SUMMARY]
[CONTENT] GC [SUMMARY]
[CONTENT] GC | between 2009 and 2018 ||| [SUMMARY]
[CONTENT] 1003 | GC | 113 | 16 | 1.6% ||| one ||| 14 | 75% | four | 25% ||| XP ||| four | 25% | 6.3% ||| 20 months | 11 | 68.7% | nine | 56.3% ||| over 40 months | two [SUMMARY]
[CONTENT] GC [SUMMARY]
[CONTENT] GC ||| GC | between 2009 and 2018 ||| ||| 1003 | GC | 113 | 16 | 1.6% ||| one ||| 14 | 75% | four | 25% ||| XP ||| four | 25% | 6.3% ||| 20 months | 11 | 68.7% | nine | 56.3% ||| over 40 months | two ||| GC [SUMMARY]
[CONTENT] GC ||| GC | between 2009 and 2018 ||| ||| 1003 | GC | 113 | 16 | 1.6% ||| one ||| 14 | 75% | four | 25% ||| XP ||| four | 25% | 6.3% ||| 20 months | 11 | 68.7% | nine | 56.3% ||| over 40 months | two ||| GC [SUMMARY]
Association between GSTM1 null genotype and coronary artery disease risk: a meta-analysis.
25183432
We conducted a meta-analysis to assess the association between the GSTM1 null genotype polymorphism and coronary artery disease (CAD) risk.
BACKGROUND
Published literature from PubMed, EMBASE, and China National Knowledge Infrastructure (CNKI) was retrieved before March 2014. All studies reporting adjusted odds ratios (ORs) and 95% confidence intervals (CIs) of CAD risk were included.
MATERIAL AND METHODS
A total of 13 case-control studies, including 5453 cases and 5068 controls, were collected. There was a significant association between GSTM1 null genotype and CAD risk (adjusted OR=1.26; 95% CI, 1.11-1.43; I^2=3%). When stratified by ethnicity, a significantly elevated risk was observed in whites. In the subgroup analysis according to disease type, a significantly increased myocardial infarction (MI) risk was observed. Subgroup analysis of smoking status showed an increased CAD risk in smokers.
RESULTS
Our results indicate that GSTM1 null genotype is associated with an increased CAD risk.
CONCLUSIONS
[ "Case-Control Studies", "Coronary Artery Disease", "Genetic Association Studies", "Genetic Predisposition to Disease", "Glutathione Transferase", "Humans", "Polymorphism, Genetic", "Risk Factors" ]
4160135
Background
Coronary artery disease (CAD) is the leading health problem worldwide and is the leading cause of mortality in the United States. The role of DNA oxidative stress in the pathogenesis of atherosclerosis and its association with increased production of reactive oxygen species has been well established [1]. The mutagenic activities of cigarette smoke chemicals can cause DNA adducts in target tissues and oxidative modification and progression of atherosclerotic lesions. The glutathione S-transferases (GSTs) are a gene superfamily of phase II metabolic enzymes that detoxify free radicals, particularly in tobacco smoke [2]. GSTM1 has been mapped to the GST mu gene cluster on chromosome 1p13.3. One variant in GSTM1 has been identified – a deletion. This inactive form of GSTM1 (null genotype) causes lower detoxification, which may be a risk factor for CAD. The relationship between GSTM1 null genotype and risk of CAD has been studied for more than 10 years. Several studies found the GSTM1 null genotype to be a risk factor for CAD, but other studies showed no association between this polymorphism and risk of CAD. These studies reached inconsistent conclusions [3–15], probably due to the relatively small sample sizes. Since individual studies are usually underpowered in detecting the effect of low-penetrance genes, in this study we conducted a meta-analysis to investigate the association between GSTM1 null genotype and the risk for CAD.
Quantitative data synthesis
The evaluations of the association between GSTM1 polymorphism and CAD risk are summarized in Table 2. The null genotype of GSTM1 was associated with a significantly increased risk of CAD when compared with the present genotype (adjusted OR=1.26; 95% CI 1.11–1.43; I2=3%; Figure 1). When stratified by ethnicity, a significantly elevated risk was observed in whites (OR=1.22; 95% CI 1.06–1.41; I2=0%) but not in Asians (OR=1.20; 95% CI 0.85–1.71; I2=50%). In the subgroup analysis according to disease type, a significantly increased myocardial infarction (MI) risk was observed (OR=1.19; 95% CI 1.01–1.40; I2=0%). Subgroup analysis of smoking status showed that increased risks were found in smokers (OR=1.97; 95% CI 1.59–2.44; I2=4%) but not in non-smokers (OR=1.20; 95% CI 0.96–1.51; I2=0%). When we limited the meta-analysis to studies that controlled for confounders such as age, sex, smoking, diabetes, hypertension, family history, and dyslipidemia, a significant association between GSTM1 null genotype and CAD risk remained. As shown in Figure 2, significant associations were evident with each addition of more data over time. The results showed that the pooled ORs tended to be stable. A single study involved in the meta-analysis was deleted each time to assess the influence of the individual data set on the pooled ORs, and the corresponding pooled ORs were not materially altered (Figure 3). Funnel plot and Egger’s test were performed to assess the publication bias of the literature. The shape of the funnel plot did not reveal any evidence of obvious asymmetry (data not shown). Egger’s test did not find evidence of publication bias (P=0.68).
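The funnel-plot asymmetry check reported above can be sketched in a few lines; the following is a minimal, hypothetical Egger's regression test in Python with statsmodels, assuming per-study log odds ratios and standard errors (the numbers are invented for illustration; the paper reports only pooled results).

import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effects: ln(OR) and its standard error.
log_or = np.array([0.23, 0.18, 0.35, -0.05, 0.41, 0.12])
se = np.array([0.15, 0.22, 0.30, 0.25, 0.35, 0.18])

# Egger's test: regress the standardized effect z = ln(OR)/SE on the
# precision 1/SE; an intercept far from zero suggests small-study
# (publication) bias.
z = log_or / se
precision = 1.0 / se
X = sm.add_constant(precision)
fit = sm.OLS(z, X).fit()
print("Egger intercept:", fit.params[0], "p =", fit.pvalues[0])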
Results
Study characteristics We ultimately identified a total of 13 articles reporting the relationship between GSTM1 null genotype and CAD risk [3–15]. A total of 5453 cases and 5068 controls were included in this meta-analysis. Table 1 summarizes the main characteristics of the included studies. There were 8 case-control studies from white populations and 5 case-control studies from Asian populations. Quantitative data synthesis The evaluations of the association between GSTM1 polymorphism and CAD risk are summarized in Table 2. The null genotype of GSTM1 was associated with a significantly increased risk of CAD when compared with the present genotype (adjusted OR=1.26; 95% CI 1.11–1.43; I2=3%; Figure 1). When stratified by ethnicity, a significantly elevated risk was observed in whites (OR=1.22; 95% CI 1.06–1.41; I2=0%) but not in Asians (OR=1.20; 95% CI 0.85–1.71; I2=50%). In the subgroup analysis according to disease type, a significantly increased myocardial infarction (MI) risk was observed (OR=1.19; 95% CI 1.01–1.40; I2=0%). Subgroup analysis of smoking status showed that increased risks were found in smokers (OR=1.97; 95% CI 1.59–2.44; I2=4%) but not in non-smokers (OR=1.20; 95% CI 0.96–1.51; I2=0%). When we limited the meta-analysis to studies that controlled for confounders such as age, sex, smoking, diabetes, hypertension, family history, and dyslipidemia, a significant association between GSTM1 null genotype and CAD risk remained. As shown in Figure 2, significant associations were evident with each addition of more data over time. The results showed that the pooled ORs tended to be stable. A single study involved in the meta-analysis was deleted each time to assess the influence of the individual data set on the pooled ORs, and the corresponding pooled ORs were not materially altered (Figure 3). Funnel plot and Egger’s test were performed to assess the publication bias of the literature. The shape of the funnel plot did not reveal any evidence of obvious asymmetry (data not shown). Egger’s test did not find evidence of publication bias (P=0.68).
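The leave-one-out sensitivity analysis described above can be sketched as follows; for brevity this hypothetical example re-pools with simple inverse-variance (fixed-effect) weights, whereas the study pooled with random-effects models.

import numpy as np

# Hypothetical per-study ln(OR) values and standard errors.
y = np.array([0.26, 0.10, 0.44, -0.05, 0.34, 0.18])
se = np.array([0.15, 0.22, 0.30, 0.25, 0.35, 0.18])
w = 1.0 / se**2  # inverse-variance weights

for i in range(len(y)):
    # Drop study i and re-pool the remaining studies.
    keep = np.arange(len(y)) != i
    pooled = np.sum(w[keep] * y[keep]) / np.sum(w[keep])
    print(f"without study {i + 1}: pooled OR = {np.exp(pooled):.2f}")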
Conclusions
This meta-analysis supports an association between GSTM1 null genotype and risk of CAD. Prospective studies are suggested to further ascertain the relationship between GSTM1 null genotype and genetic predisposition to CAD.
[ "Background", "Publication search", "Inclusion criteria and data extraction", "Statistical analysis" ]
[ "Coronary artery disease (CAD) is the leading health problem worldwide and is the leading cause of mortality in the United States. The role of DNA oxidative stress in the pathogenesis of atherosclerosis and its association with increased production of reactive oxygen species has been well established [1]. The mutagenic activities of cigarette smoke chemicals can cause DNA adducts in target tissues and oxidative modification and progression of atherosclerotic lesions. The glutathione S-transferases (GSTs) are a gene superfamily of phase II metabolic enzymes that detoxify free radicals, particularly in tobacco smoke [2]. GSTM1 has been mapped to the GST mu gene cluster on chromosome 1p13.3. One variant in GSTM1 have been identified – a deletion. This inactive form of GSTM1 (null genotype) causes lower detoxification, which may be a risk factor for CAD. The relationship between GSTM1 null genotype and risk of CAD has been studied for more than 10 years. Several studies found GSTM1 null genotype to be a risk factor in CAD, but other studies showed no association between this polymorphism and risk of CAD. These studies reached inconsistent conclusions [3–15], probably due to the relatively small sample sizes. Since individual studies are usually underpowered in detecting the effect of low-penetrance genes, in this study we conducted a meta-analysis to investigate the association between GSTM1 null genotype and the risk for CAD.", "We conducted a literature search before February 2014 in PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI) databases without restrictions. Combination of the following terms were applied: ‘coronary heart disease’ OR ‘coronary artery disease’ OR ‘myocardial infarction’ OR ‘acute coronary syndrome’ OR ‘ischemic heart disease’ OR ‘cardiovascular disease’ OR ‘major adverse cardiac event’ OR ‘CHD’ OR ‘CAD’ OR ‘MI’ OR ‘ACS’ OR ‘IHD’ OR ‘MACE’; ‘Glutathione S-transferases’ OR ‘GSTM1; ‘polymorphism’ OR ‘variant’ OR ‘genetic’ OR ‘mutation’. We also conducted a manual search to find other articles based on references identified in the individual articles.", "We included articles if they met all the following criteria: (1) evaluation of GSTM1 polymorphism and CAD risk, (2) using a case-control design, and (3) adjusted odds ratios (ORs) with 95% confidence intervals (CIs) were reported.\nData were extracted by 2 authors independently. In case of conflicting evaluations, an agreement was reached following a discussion; if agreement could not be reached, another author was consulted to resolve the debate. The following information was extracted from each study: first author, year of publication, ethnicity, age, sex, disease type, sample size, smoking status, covariates, adjusted Ors, and the corresponding 95% CIs of CAD risk.", "For the GSTM1 gene, we estimated the risk of the null genotype on CAD compared with the non-null genotypes in the recessive model (null verses heterozygous + wild type). The strength of the association between the GSTM1 gene and CAD risk was measured by ORs with 95% CIs. The ORs with corresponding 95% CIs from individual studies were pooled using random-effects models. Heterogeneity between the studies was quantified using the Cochran Q test in combination with the I2 statistic, which represents the percentage of variability across studies that is attributable to heterogeneity rather than to chance. Heterogeneity among studies was considered significant when P was less than 0.1 for the Q-test or when the I2 value was greater than 50%. 
Subgroup analyses were stratified by ethnicity, disease type, smoking status, and covariates of adjustment. Cumulative meta-analysis was performed. Sensitivity analysis was further performed by excluding single studies sequentially to assess the impact of the individual study on the pooled estimate. Funnel plots and Egger’s regression test were undertaken to assess the potential publication bias [16]. Data analysis was performed using STATA 12 (StataCorp LP, College Station, Texas, USA)." ]
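A minimal sketch of the pooling just described (DerSimonian-Laird random effects with Cochran's Q and the I2 statistic), in Python, assuming hypothetical adjusted ORs and 95% CIs; the study itself used STATA 12.

import numpy as np

# Hypothetical adjusted ORs with 95% confidence intervals.
or_ = np.array([1.30, 1.10, 1.55, 0.95, 1.40])
ci_low = np.array([0.90, 0.80, 1.05, 0.60, 1.00])
ci_high = np.array([1.88, 1.51, 2.29, 1.50, 1.96])

y = np.log(or_)  # per-study ln(OR)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE from CI width
w = 1.0 / se**2  # fixed-effect (inverse-variance) weights

# Cochran's Q and I2, the share of variability due to heterogeneity.
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)
k = len(y)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100.0

# DerSimonian-Laird between-study variance tau2, then random-effects weights.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (se**2 + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled OR = {np.exp(y_re):.2f} "
      f"(95% CI {np.exp(y_re - 1.96 * se_re):.2f}-"
      f"{np.exp(y_re + 1.96 * se_re):.2f}), I2 = {I2:.0f}%")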
[ null, null, "methods", "methods" ]
[ "Background", "Material and Methods", "Publication search", "Inclusion criteria and data extraction", "Statistical analysis", "Results", "Study characteristics", "Quantitative data synthesis", "Discussion", "Conclusions" ]
[ "Coronary artery disease (CAD) is the leading health problem worldwide and is the leading cause of mortality in the United States. The role of DNA oxidative stress in the pathogenesis of atherosclerosis and its association with increased production of reactive oxygen species has been well established [1]. The mutagenic activities of cigarette smoke chemicals can cause DNA adducts in target tissues and oxidative modification and progression of atherosclerotic lesions. The glutathione S-transferases (GSTs) are a gene superfamily of phase II metabolic enzymes that detoxify free radicals, particularly in tobacco smoke [2]. GSTM1 has been mapped to the GST mu gene cluster on chromosome 1p13.3. One variant in GSTM1 have been identified – a deletion. This inactive form of GSTM1 (null genotype) causes lower detoxification, which may be a risk factor for CAD. The relationship between GSTM1 null genotype and risk of CAD has been studied for more than 10 years. Several studies found GSTM1 null genotype to be a risk factor in CAD, but other studies showed no association between this polymorphism and risk of CAD. These studies reached inconsistent conclusions [3–15], probably due to the relatively small sample sizes. Since individual studies are usually underpowered in detecting the effect of low-penetrance genes, in this study we conducted a meta-analysis to investigate the association between GSTM1 null genotype and the risk for CAD.", " Publication search We conducted a literature search before February 2014 in PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI) databases without restrictions. Combination of the following terms were applied: ‘coronary heart disease’ OR ‘coronary artery disease’ OR ‘myocardial infarction’ OR ‘acute coronary syndrome’ OR ‘ischemic heart disease’ OR ‘cardiovascular disease’ OR ‘major adverse cardiac event’ OR ‘CHD’ OR ‘CAD’ OR ‘MI’ OR ‘ACS’ OR ‘IHD’ OR ‘MACE’; ‘Glutathione S-transferases’ OR ‘GSTM1; ‘polymorphism’ OR ‘variant’ OR ‘genetic’ OR ‘mutation’. We also conducted a manual search to find other articles based on references identified in the individual articles.\nWe conducted a literature search before February 2014 in PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI) databases without restrictions. Combination of the following terms were applied: ‘coronary heart disease’ OR ‘coronary artery disease’ OR ‘myocardial infarction’ OR ‘acute coronary syndrome’ OR ‘ischemic heart disease’ OR ‘cardiovascular disease’ OR ‘major adverse cardiac event’ OR ‘CHD’ OR ‘CAD’ OR ‘MI’ OR ‘ACS’ OR ‘IHD’ OR ‘MACE’; ‘Glutathione S-transferases’ OR ‘GSTM1; ‘polymorphism’ OR ‘variant’ OR ‘genetic’ OR ‘mutation’. We also conducted a manual search to find other articles based on references identified in the individual articles.\n Inclusion criteria and data extraction We included articles if they met all the following criteria: (1) evaluation of GSTM1 polymorphism and CAD risk, (2) using a case-control design, and (3) adjusted odds ratios (ORs) with 95% confidence intervals (CIs) were reported.\nData were extracted by 2 authors independently. In case of conflicting evaluations, an agreement was reached following a discussion; if agreement could not be reached, another author was consulted to resolve the debate. 
The following information was extracted from each study: first author, year of publication, ethnicity, age, sex, disease type, sample size, smoking status, covariates, adjusted Ors, and the corresponding 95% CIs of CAD risk.\nWe included articles if they met all the following criteria: (1) evaluation of GSTM1 polymorphism and CAD risk, (2) using a case-control design, and (3) adjusted odds ratios (ORs) with 95% confidence intervals (CIs) were reported.\nData were extracted by 2 authors independently. In case of conflicting evaluations, an agreement was reached following a discussion; if agreement could not be reached, another author was consulted to resolve the debate. The following information was extracted from each study: first author, year of publication, ethnicity, age, sex, disease type, sample size, smoking status, covariates, adjusted Ors, and the corresponding 95% CIs of CAD risk.\n Statistical analysis For the GSTM1 gene, we estimated the risk of the null genotype on CAD compared with the non-null genotypes in the recessive model (null verses heterozygous + wild type). The strength of the association between the GSTM1 gene and CAD risk was measured by ORs with 95% CIs. The ORs with corresponding 95% CIs from individual studies were pooled using random-effects models. Heterogeneity between the studies was quantified using the Cochran Q test in combination with the I2 statistic, which represents the percentage of variability across studies that is attributable to heterogeneity rather than to chance. Heterogeneity among studies was considered significant when P was less than 0.1 for the Q-test or when the I2 value was greater than 50%. Subgroup analyses were stratified by ethnicity, disease type, smoking status, and covariates of adjustment. Cumulative meta-analysis was performed. Sensitivity analysis was further performed by excluding single studies sequentially to assess the impact of the individual study on the pooled estimate. Funnel plots and Egger’s regression test were undertaken to assess the potential publication bias [16]. Data analysis was performed using STATA 12 (StataCorp LP, College Station, Texas, USA).\nFor the GSTM1 gene, we estimated the risk of the null genotype on CAD compared with the non-null genotypes in the recessive model (null verses heterozygous + wild type). The strength of the association between the GSTM1 gene and CAD risk was measured by ORs with 95% CIs. The ORs with corresponding 95% CIs from individual studies were pooled using random-effects models. Heterogeneity between the studies was quantified using the Cochran Q test in combination with the I2 statistic, which represents the percentage of variability across studies that is attributable to heterogeneity rather than to chance. Heterogeneity among studies was considered significant when P was less than 0.1 for the Q-test or when the I2 value was greater than 50%. Subgroup analyses were stratified by ethnicity, disease type, smoking status, and covariates of adjustment. Cumulative meta-analysis was performed. Sensitivity analysis was further performed by excluding single studies sequentially to assess the impact of the individual study on the pooled estimate. Funnel plots and Egger’s regression test were undertaken to assess the potential publication bias [16]. 
Data analysis was performed using STATA 12 (StataCorp LP, College Station, Texas, USA).", "We conducted a literature search before February 2014 in PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI) databases without restrictions. Combination of the following terms were applied: ‘coronary heart disease’ OR ‘coronary artery disease’ OR ‘myocardial infarction’ OR ‘acute coronary syndrome’ OR ‘ischemic heart disease’ OR ‘cardiovascular disease’ OR ‘major adverse cardiac event’ OR ‘CHD’ OR ‘CAD’ OR ‘MI’ OR ‘ACS’ OR ‘IHD’ OR ‘MACE’; ‘Glutathione S-transferases’ OR ‘GSTM1; ‘polymorphism’ OR ‘variant’ OR ‘genetic’ OR ‘mutation’. We also conducted a manual search to find other articles based on references identified in the individual articles.", "We included articles if they met all the following criteria: (1) evaluation of GSTM1 polymorphism and CAD risk, (2) using a case-control design, and (3) adjusted odds ratios (ORs) with 95% confidence intervals (CIs) were reported.\nData were extracted by 2 authors independently. In case of conflicting evaluations, an agreement was reached following a discussion; if agreement could not be reached, another author was consulted to resolve the debate. The following information was extracted from each study: first author, year of publication, ethnicity, age, sex, disease type, sample size, smoking status, covariates, adjusted Ors, and the corresponding 95% CIs of CAD risk.", "For the GSTM1 gene, we estimated the risk of the null genotype on CAD compared with the non-null genotypes in the recessive model (null verses heterozygous + wild type). The strength of the association between the GSTM1 gene and CAD risk was measured by ORs with 95% CIs. The ORs with corresponding 95% CIs from individual studies were pooled using random-effects models. Heterogeneity between the studies was quantified using the Cochran Q test in combination with the I2 statistic, which represents the percentage of variability across studies that is attributable to heterogeneity rather than to chance. Heterogeneity among studies was considered significant when P was less than 0.1 for the Q-test or when the I2 value was greater than 50%. Subgroup analyses were stratified by ethnicity, disease type, smoking status, and covariates of adjustment. Cumulative meta-analysis was performed. Sensitivity analysis was further performed by excluding single studies sequentially to assess the impact of the individual study on the pooled estimate. Funnel plots and Egger’s regression test were undertaken to assess the potential publication bias [16]. Data analysis was performed using STATA 12 (StataCorp LP, College Station, Texas, USA).", " Study characteristics We ultimately identified a total of 13 articles reporting the relationship between GSTM1 null genotype and CAD risk [3–15]. A total of 5453 cases and 5068 controls were included in this meta-analysis. Table 1 summarized the main characteristics of those included studies. There were 8 case-control studies from white populations and 5 case-control studies from Asian populations.\nWe ultimately identified a total of 13 articles reporting the relationship between GSTM1 null genotype and CAD risk [3–15]. A total of 5453 cases and 5068 controls were included in this meta-analysis. Table 1 summarized the main characteristics of those included studies. 
There were 8 case-control studies from white populations and 5 case-control studies from Asian populations.\n Quantitative data synthesis The evaluations of the association between GSTM1 polymorphism and CAD risk are summarized in Table 2. The null genotype of GSTM1 was associated with a significantly increased risk of CAD when compared with present genotype (adjusted OR=1.26; 95% CI 1.11–1.43; I2=3%; Figure 1). When stratified by ethnicity, a significantly elevated risk was observed in whites (OR=1.22; 95% CI 1.06–1.41; I2=0%) but not in Asians (OR=1.20; 95% CI 0.85–1.71; I2=50%). In the subgroup analysis according to disease type, a significantly increased myocardial infarction (MI) risk was observed (OR=1.19; 95% CI 1.01–1.40; I2=0%). Subgroup analysis of smoking status showed that increased risks were found in smokers (OR=1.97; 95% CI 1.59–2.44; I2=4%) but not in non-smokers (OR=1.20; 95%CI, 0.96–1.51; I2=0%). When we limited the meta-analysis to studies that controlled for confounders such as age, sex, smoking, diabetes, hypertension, family history, and dyslipidemia, a significant association between GSTM1 null genotype and CAD risk remained.\nAs shown in Figure 2, significant associations were evident with each addition of more data over time. The results showed that the pooled ORs tended to be stable. A single study involved in the meta-analysis was deleted each time to reflect the influence of the individual data set to the pooled ORs, and the corresponding pooled ORs were not materially altered (Figure 3).\nFunnel plot and Egger’s test were performed to assess the publication bias of the literature. The shape of the funnel plot did not reveal any evidence of obvious asymmetry (data not shown). Egger’s test did not find evidence of publication bias (P=0.68).\nThe evaluations of the association between GSTM1 polymorphism and CAD risk are summarized in Table 2. The null genotype of GSTM1 was associated with a significantly increased risk of CAD when compared with present genotype (adjusted OR=1.26; 95% CI 1.11–1.43; I2=3%; Figure 1). When stratified by ethnicity, a significantly elevated risk was observed in whites (OR=1.22; 95% CI 1.06–1.41; I2=0%) but not in Asians (OR=1.20; 95% CI 0.85–1.71; I2=50%). In the subgroup analysis according to disease type, a significantly increased myocardial infarction (MI) risk was observed (OR=1.19; 95% CI 1.01–1.40; I2=0%). Subgroup analysis of smoking status showed that increased risks were found in smokers (OR=1.97; 95% CI 1.59–2.44; I2=4%) but not in non-smokers (OR=1.20; 95%CI, 0.96–1.51; I2=0%). When we limited the meta-analysis to studies that controlled for confounders such as age, sex, smoking, diabetes, hypertension, family history, and dyslipidemia, a significant association between GSTM1 null genotype and CAD risk remained.\nAs shown in Figure 2, significant associations were evident with each addition of more data over time. The results showed that the pooled ORs tended to be stable. A single study involved in the meta-analysis was deleted each time to reflect the influence of the individual data set to the pooled ORs, and the corresponding pooled ORs were not materially altered (Figure 3).\nFunnel plot and Egger’s test were performed to assess the publication bias of the literature. The shape of the funnel plot did not reveal any evidence of obvious asymmetry (data not shown). 
Egger’s test did not find evidence of publication bias (P=0.68).", "We ultimately identified a total of 13 articles reporting the relationship between GSTM1 null genotype and CAD risk [3–15]. A total of 5453 cases and 5068 controls were included in this meta-analysis. Table 1 summarized the main characteristics of those included studies. There were 8 case-control studies from white populations and 5 case-control studies from Asian populations.", "The evaluations of the association between GSTM1 polymorphism and CAD risk are summarized in Table 2. The null genotype of GSTM1 was associated with a significantly increased risk of CAD when compared with present genotype (adjusted OR=1.26; 95% CI 1.11–1.43; I2=3%; Figure 1). When stratified by ethnicity, a significantly elevated risk was observed in whites (OR=1.22; 95% CI 1.06–1.41; I2=0%) but not in Asians (OR=1.20; 95% CI 0.85–1.71; I2=50%). In the subgroup analysis according to disease type, a significantly increased myocardial infarction (MI) risk was observed (OR=1.19; 95% CI 1.01–1.40; I2=0%). Subgroup analysis of smoking status showed that increased risks were found in smokers (OR=1.97; 95% CI 1.59–2.44; I2=4%) but not in non-smokers (OR=1.20; 95%CI, 0.96–1.51; I2=0%). When we limited the meta-analysis to studies that controlled for confounders such as age, sex, smoking, diabetes, hypertension, family history, and dyslipidemia, a significant association between GSTM1 null genotype and CAD risk remained.\nAs shown in Figure 2, significant associations were evident with each addition of more data over time. The results showed that the pooled ORs tended to be stable. A single study involved in the meta-analysis was deleted each time to reflect the influence of the individual data set to the pooled ORs, and the corresponding pooled ORs were not materially altered (Figure 3).\nFunnel plot and Egger’s test were performed to assess the publication bias of the literature. The shape of the funnel plot did not reveal any evidence of obvious asymmetry (data not shown). Egger’s test did not find evidence of publication bias (P=0.68).", "The present meta-analysis, including 5453 cases and 5068 controls from 13 case-control studies, explored the associations of GSTM1 null genotype with CAD risk. We demonstrated that this polymorphism is significantly associated with an increased CAD risk. Subgroup analyses stratified by ethnicity showed whites, but not Asians, with GSTM1 null genotype had increased CAD risk. It was possible that different lifestyles, diets, and environments may account for this discrepancy. In the MI subgroup, we found that this polymorphism was associated with MI risk. Cigarette smoking is a pro-inflammatory stimulus and an important risk factor for CAD. Some studies have explored the interaction between GSTM1 genotype and smoking habits. Our results showed a significant association among smokers but not in non-smokers, suggesting that even the same variant in the same gene may have a different effect on the pathogenesis and occurrence of CAD in different individuals.\nPrevious studies have shown that individuals with GSTM1 null genotype have a decreased capacity to detoxify certain carcinogens. Thus, impaired GSTM1 function may lead to serious DNA damage. Toxic molecules produce DNA adducts that contribute to the development of atherosclerosis. Evidence indicates that the interaction of DNA adducts with DNA may trigger pathogenic pathways in the cell. 
Significant correlation was found between DNA adduct levels, which are accepted as a biomarker of exposure to environmental carcinogens, and atherogenic risk factors. Higher DNA adduct levels were detected in individuals with severe CAD [17,18] and in atherosclerotic plaques [19]. Therefore, it is biologically plausible that the GSTM1 null genotype may increase risk of CAD.\nOur study has some strengths. First, it was the first meta-analysis to report the adjusted ORs between GSTM1 null genotype and CAD risk. Second, the methodological issues for meta-analysis (e.g., subgroup analysis, cumulative meta-analysis, and sensitivity analysis) were well investigated. Third, when we limited the meta-analysis to studies that controlled for age and sex, smoking, diabetes and hypertension, family history, dyslipidemia, the significant positive association was only marginally altered. Finally, we did find significant heterogeneity and publication bias in this meta-analysis.\nSome limitations in this meta-analysis should be addressed. First, the number of studies included in our meta-analysis remained small. Thus, publication bias may exist, although the funnel plots and Egger’s linear regression tests indicated no remarkable publication bias. Second, lack of the original data of the eligible studies limited the evaluation of the effects of the gene-environment interactions in CAD development. Third, no prospective studies have addressed this association between GSTM1 null genotype and CAD risk, and all included studies followed a retrospective case-control design. Thus, owing to the limitations of case-control design, we cannot exclude the possibility of undetected bias.", "This meta-analysis supports an association between GSTM1 null genotype and risk of CAD. Prospective studies are suggested to further ascertain the relationship between GSTM1 null genotype and genetic predisposition to CAD." ]
[ null, "materials|methods", null, "methods", "methods", "results", "intro|methods", "methods", "discussion", "conclusions" ]
[ "Coronary Artery Disease", "Glutathione Transferase", "Meta-Analysis as Topic", "Polymorphism, Genetic" ]
Background: Coronary artery disease (CAD) is the leading health problem worldwide and is the leading cause of mortality in the United States. The role of DNA oxidative stress in the pathogenesis of atherosclerosis and its association with increased production of reactive oxygen species has been well established [1]. The mutagenic activities of cigarette smoke chemicals can cause DNA adducts in target tissues and oxidative modification and progression of atherosclerotic lesions. The glutathione S-transferases (GSTs) are a gene superfamily of phase II metabolic enzymes that detoxify free radicals, particularly in tobacco smoke [2]. GSTM1 has been mapped to the GST mu gene cluster on chromosome 1p13.3. One variant in GSTM1 have been identified – a deletion. This inactive form of GSTM1 (null genotype) causes lower detoxification, which may be a risk factor for CAD. The relationship between GSTM1 null genotype and risk of CAD has been studied for more than 10 years. Several studies found GSTM1 null genotype to be a risk factor in CAD, but other studies showed no association between this polymorphism and risk of CAD. These studies reached inconsistent conclusions [3–15], probably due to the relatively small sample sizes. Since individual studies are usually underpowered in detecting the effect of low-penetrance genes, in this study we conducted a meta-analysis to investigate the association between GSTM1 null genotype and the risk for CAD. Material and Methods: Publication search We conducted a literature search before February 2014 in PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI) databases without restrictions. Combination of the following terms were applied: ‘coronary heart disease’ OR ‘coronary artery disease’ OR ‘myocardial infarction’ OR ‘acute coronary syndrome’ OR ‘ischemic heart disease’ OR ‘cardiovascular disease’ OR ‘major adverse cardiac event’ OR ‘CHD’ OR ‘CAD’ OR ‘MI’ OR ‘ACS’ OR ‘IHD’ OR ‘MACE’; ‘Glutathione S-transferases’ OR ‘GSTM1; ‘polymorphism’ OR ‘variant’ OR ‘genetic’ OR ‘mutation’. We also conducted a manual search to find other articles based on references identified in the individual articles. We conducted a literature search before February 2014 in PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI) databases without restrictions. Combination of the following terms were applied: ‘coronary heart disease’ OR ‘coronary artery disease’ OR ‘myocardial infarction’ OR ‘acute coronary syndrome’ OR ‘ischemic heart disease’ OR ‘cardiovascular disease’ OR ‘major adverse cardiac event’ OR ‘CHD’ OR ‘CAD’ OR ‘MI’ OR ‘ACS’ OR ‘IHD’ OR ‘MACE’; ‘Glutathione S-transferases’ OR ‘GSTM1; ‘polymorphism’ OR ‘variant’ OR ‘genetic’ OR ‘mutation’. We also conducted a manual search to find other articles based on references identified in the individual articles. Inclusion criteria and data extraction We included articles if they met all the following criteria: (1) evaluation of GSTM1 polymorphism and CAD risk, (2) using a case-control design, and (3) adjusted odds ratios (ORs) with 95% confidence intervals (CIs) were reported. Data were extracted by 2 authors independently. In case of conflicting evaluations, an agreement was reached following a discussion; if agreement could not be reached, another author was consulted to resolve the debate. 
Inclusion criteria and data extraction We included articles if they met all of the following criteria: (1) evaluation of the GSTM1 polymorphism and CAD risk, (2) use of a case-control design, and (3) report of adjusted odds ratios (ORs) with 95% confidence intervals (CIs). Data were extracted by 2 authors independently. In case of conflicting evaluations, agreement was reached by discussion; if agreement could not be reached, another author was consulted to resolve the debate. The following information was extracted from each study: first author, year of publication, ethnicity, age, sex, disease type, sample size, smoking status, covariates, adjusted ORs, and the corresponding 95% CIs of CAD risk. Statistical analysis For the GSTM1 gene, we estimated the risk of the null genotype for CAD compared with the non-null genotypes under the recessive model (null versus heterozygous + wild type). The strength of the association between the GSTM1 gene and CAD risk was measured by ORs with 95% CIs. The ORs with corresponding 95% CIs from individual studies were pooled using random-effects models. Heterogeneity between studies was quantified using the Cochran Q test in combination with the I² statistic, which represents the percentage of variability across studies that is attributable to heterogeneity rather than to chance. Heterogeneity among studies was considered significant when P was less than 0.1 for the Q test or when the I² value was greater than 50%. Subgroup analyses were stratified by ethnicity, disease type, smoking status, and covariates of adjustment. Cumulative meta-analysis was performed. Sensitivity analysis was further performed by excluding single studies sequentially to assess the impact of each individual study on the pooled estimate. Funnel plots and Egger’s regression test were used to assess potential publication bias [16]. Data analysis was performed using STATA 12 (StataCorp LP, College Station, Texas, USA).
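The pooling procedure just described can be made concrete with a short sketch. The following is a minimal illustration, assuming the DerSimonian-Laird estimator (the text specifies only "random-effects models") and using made-up per-study ORs and CIs rather than the article's data; the leave-one-out loop at the end mirrors the sensitivity analysis described above.

import numpy as np

def pool_random_effects(ors, ci_low, ci_high):
    # Work on the log-OR scale; back-transform at the end.
    y = np.log(ors)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE recovered from the 95% CI
    w = 1.0 / se**2                                       # inverse-variance (fixed) weights

    # Cochran's Q and I^2 = max(0, (Q - df)/Q) quantify between-study heterogeneity.
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)
    df = len(y) - 1
    I2 = 100 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0

    # DerSimonian-Laird between-study variance tau^2, then random-effects weights.
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re), Q, I2

# Illustrative inputs only (adjusted OR with 95% CI per study):
ors = np.array([1.4, 1.1, 1.3, 0.9, 1.5])
lo  = np.array([0.9, 0.7, 1.0, 0.6, 1.0])
hi  = np.array([2.2, 1.7, 1.7, 1.4, 2.3])
print(pool_random_effects(ors, lo, hi))

# Leave-one-out sensitivity analysis: re-pool with each study excluded in turn.
for i in range(len(ors)):
    keep = np.arange(len(ors)) != i
    print(i, pool_random_effects(ors[keep], lo[keep], hi[keep])[0])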
Results: Study characteristics We ultimately identified a total of 13 articles reporting the relationship between the GSTM1 null genotype and CAD risk [3–15]. A total of 5453 cases and 5068 controls were included in this meta-analysis. Table 1 summarizes the main characteristics of the included studies. There were 8 case-control studies from white populations and 5 case-control studies from Asian populations.
Quantitative data synthesis The evaluations of the association between the GSTM1 polymorphism and CAD risk are summarized in Table 2. The null genotype of GSTM1 was associated with a significantly increased risk of CAD compared with the present genotype (adjusted OR=1.26; 95% CI 1.11–1.43; I²=3%; Figure 1). When stratified by ethnicity, a significantly elevated risk was observed in whites (OR=1.22; 95% CI 1.06–1.41; I²=0%) but not in Asians (OR=1.20; 95% CI 0.85–1.71; I²=50%). In the subgroup analysis according to disease type, a significantly increased risk of myocardial infarction (MI) was observed (OR=1.19; 95% CI 1.01–1.40; I²=0%). Subgroup analysis by smoking status showed increased risk in smokers (OR=1.97; 95% CI 1.59–2.44; I²=4%) but not in non-smokers (OR=1.20; 95% CI 0.96–1.51; I²=0%). When we limited the meta-analysis to studies that controlled for confounders such as age, sex, smoking, diabetes, hypertension, family history, and dyslipidemia, the significant association between the GSTM1 null genotype and CAD risk remained. As shown in Figure 2, significant associations were evident with each addition of more data over time, and the pooled ORs tended to be stable. In the sensitivity analysis, a single study was deleted each time to assess the influence of the individual data set on the pooled ORs, and the corresponding pooled ORs were not materially altered (Figure 3). A funnel plot and Egger’s test were used to assess publication bias. The shape of the funnel plot did not reveal any evidence of obvious asymmetry (data not shown), and Egger’s test found no evidence of publication bias (P=0.68).
Discussion: The present meta-analysis, including 5453 cases and 5068 controls from 13 case-control studies, explored the association of the GSTM1 null genotype with CAD risk. We demonstrated that this polymorphism is significantly associated with an increased CAD risk. Subgroup analyses stratified by ethnicity showed that whites, but not Asians, with the GSTM1 null genotype had an increased CAD risk; different lifestyles, diets, and environments may account for this discrepancy. In the MI subgroup, we found that this polymorphism was associated with MI risk. Cigarette smoking is a pro-inflammatory stimulus and an important risk factor for CAD. Some studies have explored the interaction between GSTM1 genotype and smoking habits. Our results showed a significant association among smokers but not in non-smokers, suggesting that even the same variant in the same gene may have a different effect on the pathogenesis and occurrence of CAD in different individuals. Previous studies have shown that individuals with the GSTM1 null genotype have a decreased capacity to detoxify certain carcinogens; impaired GSTM1 function may therefore lead to serious DNA damage. Toxic molecules produce DNA adducts that contribute to the development of atherosclerosis. Evidence indicates that the interaction of DNA adducts with DNA may trigger pathogenic pathways in the cell. A significant correlation has been found between DNA adduct levels, which are accepted as a biomarker of exposure to environmental carcinogens, and atherogenic risk factors.
Higher DNA adduct levels were detected in individuals with severe CAD [17,18] and in atherosclerotic plaques [19]. Therefore, it is biologically plausible that the GSTM1 null genotype may increase the risk of CAD. Our study has some strengths. First, it was the first meta-analysis to report the adjusted ORs between the GSTM1 null genotype and CAD risk. Second, the methodological issues for meta-analysis (e.g., subgroup analysis, cumulative meta-analysis, and sensitivity analysis) were well investigated. Third, when we limited the meta-analysis to studies that controlled for age, sex, smoking, diabetes, hypertension, family history, and dyslipidemia, the significant positive association was only marginally altered. Finally, we did not find significant heterogeneity or publication bias in this meta-analysis. Some limitations of this meta-analysis should be addressed. First, the number of studies included in our meta-analysis remained small; thus, publication bias may exist, although the funnel plots and Egger’s linear regression tests indicated no remarkable publication bias. Second, the lack of original data from the eligible studies limited the evaluation of the effects of gene-environment interactions on CAD development. Third, no prospective studies have addressed the association between the GSTM1 null genotype and CAD risk, and all included studies followed a retrospective case-control design; thus, owing to the limitations of the case-control design, we cannot exclude the possibility of undetected bias. Conclusions: This meta-analysis supports an association between the GSTM1 null genotype and risk of CAD. Prospective studies are suggested to further ascertain the relationship between the GSTM1 null genotype and genetic predisposition to CAD.
Background: We conducted a meta-analysis to assess the association between the GSTM1 null genotype polymorphism and coronary artery disease (CAD) risk. Methods: Published literature from PubMed, EMBASE, and China National Knowledge Infrastructure (CNKI) was retrieved before March 2014. All studies reporting adjusted odds ratios (ORs) and 95% confidence intervals (CIs) of CAD risk were included. Results: A total of 13 case-control studies, including 5453 cases and 5068 controls, were collected. There was a significant association between GSTM1 null genotype and CAD risk (adjusted OR=1.26; 95% CI, 1.11-1.43; I²=3%). When stratified by ethnicity, a significantly elevated risk was observed in whites. In the subgroup analysis according to disease type, a significantly increased myocardial infarction (MI) risk was observed. Subgroup analysis of smoking status showed an increased CAD risk in smokers. Conclusions: Our results indicate that the GSTM1 null genotype is associated with an increased CAD risk.
Background: Coronary artery disease (CAD) is the leading health problem worldwide and is the leading cause of mortality in the United States. The role of oxidative DNA damage in the pathogenesis of atherosclerosis and its association with increased production of reactive oxygen species has been well established [1]. The mutagenic activities of cigarette smoke chemicals can cause DNA adducts in target tissues and oxidative modification and progression of atherosclerotic lesions. The glutathione S-transferases (GSTs) are a gene superfamily of phase II metabolic enzymes that detoxify free radicals, particularly those in tobacco smoke [2]. GSTM1 has been mapped to the GST mu gene cluster on chromosome 1p13.3. One variant in GSTM1 has been identified – a deletion. This inactive form of GSTM1 (null genotype) causes lower detoxification capacity, which may be a risk factor for CAD. The relationship between the GSTM1 null genotype and risk of CAD has been studied for more than 10 years. Several studies found the GSTM1 null genotype to be a risk factor for CAD, but other studies showed no association between this polymorphism and risk of CAD. These studies reached inconsistent conclusions [3–15], probably due to their relatively small sample sizes. Since individual studies are usually underpowered to detect the effect of low-penetrance genes, we conducted a meta-analysis to investigate the association between the GSTM1 null genotype and the risk of CAD. Conclusions: This meta-analysis supports an association between the GSTM1 null genotype and risk of CAD. Prospective studies are suggested to further ascertain the relationship between the GSTM1 null genotype and genetic predisposition to CAD.
Background: We conducted a meta-analysis to assess the association between the GSTM1 null genotype polymorphism and coronary artery disease (CAD) risk. Methods: Published literature from PubMed, EMBASE, and China National Knowledge Infrastructure (CNKI) was retrieved before March 2014. All studies reporting adjusted odds ratios (ORs) and 95% confidence intervals (CIs) of CAD risk were included. Results: A total of 13 case-control studies, including 5453 cases and 5068 controls, were collected. There was a significant association between GSTM1 null genotype and CAD risk (adjusted OR=1.26; 95% CI, 1.11-1.43; I²=3%). When stratified by ethnicity, a significantly elevated risk was observed in whites. In the subgroup analysis according to disease type, a significantly increased myocardial infarction (MI) risk was observed. Subgroup analysis of smoking status showed an increased CAD risk in smokers. Conclusions: Our results indicate that the GSTM1 null genotype is associated with an increased CAD risk.
3,626
194
[ 257, 143, 138, 224 ]
10
[ "risk", "cad", "studies", "gstm1", "analysis", "95", "null", "genotype", "null genotype", "i2" ]
[ "carcinogens impaired gstm1", "transferases gsts gene", "atherosclerotic lesions glutathione", "glutathione transferases gsts", "gstm1 genotype smoking" ]
[CONTENT] Coronary Artery Disease | Glutathione Transferase | Meta-Analysis as Topic | Polymorphism, Genetic [SUMMARY]
[CONTENT] Coronary Artery Disease | Glutathione Transferase | Meta-Analysis as Topic | Polymorphism, Genetic [SUMMARY]
[CONTENT] Coronary Artery Disease | Glutathione Transferase | Meta-Analysis as Topic | Polymorphism, Genetic [SUMMARY]
[CONTENT] Coronary Artery Disease | Glutathione Transferase | Meta-Analysis as Topic | Polymorphism, Genetic [SUMMARY]
[CONTENT] Coronary Artery Disease | Glutathione Transferase | Meta-Analysis as Topic | Polymorphism, Genetic [SUMMARY]
[CONTENT] Coronary Artery Disease | Glutathione Transferase | Meta-Analysis as Topic | Polymorphism, Genetic [SUMMARY]
[CONTENT] Case-Control Studies | Coronary Artery Disease | Genetic Association Studies | Genetic Predisposition to Disease | Glutathione Transferase | Humans | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] Case-Control Studies | Coronary Artery Disease | Genetic Association Studies | Genetic Predisposition to Disease | Glutathione Transferase | Humans | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] Case-Control Studies | Coronary Artery Disease | Genetic Association Studies | Genetic Predisposition to Disease | Glutathione Transferase | Humans | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] Case-Control Studies | Coronary Artery Disease | Genetic Association Studies | Genetic Predisposition to Disease | Glutathione Transferase | Humans | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] Case-Control Studies | Coronary Artery Disease | Genetic Association Studies | Genetic Predisposition to Disease | Glutathione Transferase | Humans | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] Case-Control Studies | Coronary Artery Disease | Genetic Association Studies | Genetic Predisposition to Disease | Glutathione Transferase | Humans | Polymorphism, Genetic | Risk Factors [SUMMARY]
[CONTENT] carcinogens impaired gstm1 | transferases gsts gene | atherosclerotic lesions glutathione | glutathione transferases gsts | gstm1 genotype smoking [SUMMARY]
[CONTENT] carcinogens impaired gstm1 | transferases gsts gene | atherosclerotic lesions glutathione | glutathione transferases gsts | gstm1 genotype smoking [SUMMARY]
[CONTENT] carcinogens impaired gstm1 | transferases gsts gene | atherosclerotic lesions glutathione | glutathione transferases gsts | gstm1 genotype smoking [SUMMARY]
[CONTENT] carcinogens impaired gstm1 | transferases gsts gene | atherosclerotic lesions glutathione | glutathione transferases gsts | gstm1 genotype smoking [SUMMARY]
[CONTENT] carcinogens impaired gstm1 | transferases gsts gene | atherosclerotic lesions glutathione | glutathione transferases gsts | gstm1 genotype smoking [SUMMARY]
[CONTENT] carcinogens impaired gstm1 | transferases gsts gene | atherosclerotic lesions glutathione | glutathione transferases gsts | gstm1 genotype smoking [SUMMARY]
[CONTENT] risk | cad | studies | gstm1 | analysis | 95 | null | genotype | null genotype | i2 [SUMMARY]
[CONTENT] risk | cad | studies | gstm1 | analysis | 95 | null | genotype | null genotype | i2 [SUMMARY]
[CONTENT] risk | cad | studies | gstm1 | analysis | 95 | null | genotype | null genotype | i2 [SUMMARY]
[CONTENT] risk | cad | studies | gstm1 | analysis | 95 | null | genotype | null genotype | i2 [SUMMARY]
[CONTENT] risk | cad | studies | gstm1 | analysis | 95 | null | genotype | null genotype | i2 [SUMMARY]
[CONTENT] risk | cad | studies | gstm1 | analysis | 95 | null | genotype | null genotype | i2 [SUMMARY]
[CONTENT] gstm1 null genotype risk | genotype risk | null genotype risk | gstm1 | cad | gstm1 null | gstm1 null genotype | risk | smoke | leading [SUMMARY]
[CONTENT] 95 ci | ci | i2 | 95 | figure | pooled ors | significantly | risk | increased | pooled [SUMMARY]
[CONTENT] ci | 95 ci | i2 | 95 | figure | pooled ors | risk | significantly | analysis | pooled [SUMMARY]
[CONTENT] gstm1 null | gstm1 null genotype | ascertain | supports association gstm1 | ascertain relationship gstm1 null | ascertain relationship gstm1 | ascertain relationship | cad prospective | null genotype genetic predisposition | null genotype genetic [SUMMARY]
[CONTENT] studies | cad | risk | gstm1 | analysis | null | 95 | genotype | gstm1 null genotype | gstm1 null [SUMMARY]
[CONTENT] studies | cad | risk | gstm1 | analysis | null | 95 | genotype | gstm1 null genotype | gstm1 null [SUMMARY]
[CONTENT] GSTM1 [SUMMARY]
[CONTENT] PubMed | EMBASE | China National Knowledge Infrastructure (CNKI | March 2014 ||| 95% | CAD [SUMMARY]
[CONTENT] 13 | 5453 | 5068 ||| GSTM1 | CAD | OR=1.26 | 95% | CI | 1.11 ||| ||| ||| CAD [SUMMARY]
[CONTENT] GSTM1 | CAD [SUMMARY]
[CONTENT] GSTM1 ||| PubMed | EMBASE | China National Knowledge Infrastructure (CNKI | March 2014 ||| 95% | CAD ||| 13 | 5453 | 5068 ||| GSTM1 | CAD | OR=1.26 | 95% | CI | 1.11 ||| ||| ||| CAD ||| GSTM1 | CAD [SUMMARY]
[CONTENT] GSTM1 ||| PubMed | EMBASE | China National Knowledge Infrastructure (CNKI | March 2014 ||| 95% | CAD ||| 13 | 5453 | 5068 ||| GSTM1 | CAD | OR=1.26 | 95% | CI | 1.11 ||| ||| ||| CAD ||| GSTM1 | CAD [SUMMARY]
Effect of tramadol addiction alone and its co-abuse with cannabis on urinary excretion of Copper, Zinc, and Calcium among Egyptian addicts.
30603010
The use of illicit drugs has become a worldwide health problem. Substances with the potential to be abused may have direct or indirect effects on physiologic mechanisms that lead to organ system dysfunction and diseases.
BACKGROUND
Sixty-five males were included in the study; they were classified into a control group (G1=19), a tramadol addicts group (G2=18), and a tramadol coabused with cannabis addicts group (G3=28). Parameters investigated for structural integrity were urinary levels of leucine aminopeptidase and N-acetyl-β-D-glucosaminidase, and urinary parameters for reabsorption integrity were levels of copper and zinc as well as calcium; urinary creatinine was also measured. In addition, urinary levels of tramadol and tetrahydrocannabinol were estimated.
METHODS
In the two addicted groups, none of the measured parameters differed significantly from the control group except urinary calcium excretion, which was significantly increased in both addicted groups.
RESULTS
Tramadol addiction, alone or coabused with cannabis, causes increased urinary excretion of calcium, indicating reabsorption dysfunction of calcium without affecting structural integrity along the nephron.
CONCLUSION
[ "Adult", "Calcium", "Case-Control Studies", "Copper", "Creatinine", "Cross-Sectional Studies", "Egypt", "Humans", "Male", "Marijuana Abuse", "Opioid-Related Disorders", "Tramadol", "Young Adult", "Zinc" ]
6307004
Introduction
The use of illicit drugs has become a worldwide health problem. It leads to numerous consequences at the health, economic, social, and legal levels. These consequences interfere with the development of countries and their efforts to respond to the needs of their populations. Apart from its economic cost, drug abuse leads to social problems among family members, abnormal behavior, crime, health and psychological problems, as well as economic difficulties1. Tramadol (Tr) is a synthetic opioid from the aminocyclohexanol group and is commonly used in clinical practice as an analgesic for treatment of moderate to severe pain2. Tramadol is free of the side effects of traditional opioids, such as the risk of respiratory depression3 and drug dependence4, so the abuse potential of tramadol is considered low or absent5, in contrast to other opioids. Cannabis has been used since ancient times for both recreational and medicinal purposes. The plant Cannabis sativa (Indian hemp) is the source of different psychoactive products. Marijuana comes from the leaves, stems, and dried flower buds of the plant. Hashish is a resin obtained from the flowering buds of the hemp plant6. Δ9-tetrahydrocannabinol (Δ9-THC) is the primary psychoactive compound and contributes to the behavioral toxicity of cannabis7. Calcium (Ca) is the most abundant divalent ion (Ca2+) in the body8. Calcium is reabsorbed along all segments of the nephron. The proximal tubule is the primary tubular segment where most of the filtered calcium is reabsorbed. Under normal conditions, 98–99% of the calcium which passes into the tubule is reabsorbed: approximately 60–70% of the filtered calcium is reabsorbed in the proximal convoluted tubule, 20% in the loop of Henle, 10% in the distal convoluted tubule, and 5% in the collecting duct9. Any calcium that is not reabsorbed then passes into what is now urine and into the bladder. Copper (Cu) is important for the functions of many cellular enzymes10. The copper transporters ATP7A and ATP7B are expressed at distinct regions of the kidney11. They are both expressed in the glomeruli in minor amounts. ATP7A is localized in the proximal and distal tubules, while ATP7B is expressed in the loop of Henle. The two copper transporters work together to maintain copper balance within this tissue. The ability of ATP7A to traffic to the basolateral membrane (an event associated with copper export) in response to copper elevation strongly suggests that ATP7A plays the major role in exporting copper from renal cells for reabsorption into the blood and in protecting renal cells against copper overload12. Other research suggests that ATP7B expression in the loops of Henle makes ATP7B responsible for copper reabsorption once copper is in the filtrate11. Previous data suggest that copper may be reabsorbed by the same pathway as that involved in iron reabsorption13, which occurs between the proximal and distal convoluted tubules, interpreted as the loops of Henle. Zinc (Zn) exists as a divalent cation (Zn2+). The kidney epithelium is the site of urinary zinc excretion and reabsorption. Zinc reabsorption occurs in all the downstream segments of the nephron: the proximal tubule, loop of Henle, and terminal nephron segments14. Most of the filtered Zn is reabsorbed along the kidney proximal tubule15. The kidney is considered the primary eliminator of exogenous drugs and toxins; impairment of excretion predisposes it to various forms of injury. Development of new biomarkers is needed for the specific diagnosis of nephrotoxicity at an earlier stage16. 
These biomarkers include brush border enzymes such as leucine aminopeptidase (LAP)17, lysosomal enzymes such as N-acetyl-β-D-glucosaminidase (NAG)18, and cytosolic enzymes such as α-glutathione S-transferase (α-GST)18. Thus, detection of enzymes in the urine potentially provides valuable information related not only to the site of tubular injury (proximal and distal tubule) but also to the severity of injury16. Renal tubular cells, in particular proximal tubule cells, are vulnerable to the toxic effects of drugs because their role in concentrating and reabsorbing glomerular filtrate exposes them to high levels of circulating toxins19. Although most nephrotoxicity occurs in the proximal part of the nephron, some chemicals damage the distal tubules; the function of these structures facilitates their vulnerability to toxicants20. Drug addiction has been associated with various forms of renal disease, some of which may be related to a direct effect of the drugs themselves, while others are caused by complications related to drug abuse21. The present work aims to investigate the effect of tramadol addiction on the urinary excretion of some metals (copper, zinc, and calcium) in relation to the structural integrity of the proximal tubules, which was evaluated by measuring the urinary excretion of proximal-tubule-specific enzymes, e.g., LAP and NAG. In addition, the study was extended to evaluate the effect of tramadol addiction coabused with cannabis.
Methods
Each urine sample was analyzed for the determination of: Urinary tramadol (U.Tr) concentration by immunoassay using an Immunalysis EIA kit (Immunalysis Corporation, USA). Urinary Δ9-THC concentration by DRI® cannabinoid assay (Microgenics, USA). Urinary NAG and LAP concentrations, for the assessment of proximal tubular structural integrity, by colorimetric methods using an NAG kit (Diazyme Laboratories, USA) and a LAP kit (Randox, UK). Urinary Cu, Zn, and Ca concentrations, for assessment of the reabsorption integrity of the proximal tubules, by a Thermo Scientific ICE 3000 Series Atomic Absorption Spectrophotometer (England). Urinary creatinine (U.Cr) concentration by the Jaffe kinetic method using a kit (Greiner Diagnostic GmbH, Germany).
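Since every analyte above is later reported as a creatinine-normalized ratio (U.Ca/U.Cr, U.Cu/U.Cr, and so on), it may help to spell out that correction. The sketch below is illustrative only; the function name and the example concentrations are not from the paper, and the units of the ratio follow from whatever units the two measurements share.

def creatinine_ratio(analyte_conc, creatinine_conc):
    # Creatinine normalization corrects spot-urine concentrations for dilution:
    # both values must come from the same sample and be in compatible units.
    if creatinine_conc <= 0:
        raise ValueError("creatinine concentration must be positive")
    return analyte_conc / creatinine_conc

# Example: urinary calcium of 12.0 mg/dL against creatinine of 95.0 mg/dL
# gives a U.Ca/U.Cr ratio of about 0.126 (dimensionless here).
print(creatinine_ratio(12.0, 95.0))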
null
null
Conclusion
Among the investigated addicted individuals, neither addiction to tramadol alone nor tramadol in combination with cannabis was found to affect the structural integrity of the proximal tubules. Moreover, the reabsorption integrity of the nephron was affected only with respect to calcium reabsorption, and not with respect to copper and zinc reabsorption. To the best of our knowledge, this is the first study to investigate the effect of addiction to tramadol, alone or coabused with cannabis, on the reabsorption integrity of the nephron with regard to the urinary excretion of copper and zinc as well as calcium.
[ "Sampling", "Statistical analysis", "Results and discussion", "Limitation of the study" ]
[ "Before starting treatment from addiction, morning urine sample was collected from each participant into 100 ml sterilized plastic container and centrifuged at 4500 rpm for 5 minutes, and then the clear supernatant was distributed in polyethylene vials (1.5 ml capacity). One vial was used on the same day of urine collection for measuring tramadol (Tr) and cannabis in urine, the rest of vials were stored at-20°C without preservatives until analyzed within 2 weeks.", "Data was tabulated as mean ± SD. ANOVA was used to compare between more than two means of independent groups. In case of parametric data, if ANOVA was statistically significant, Post Hoc test was used to compare between each two means, while Mann- Whitney and Kruskall-Wallis tests were used for non-parametric data. Correlation coefficient was calculated to examine the relation between investigated urinary parameters. Results were considered statistically significant at P < 0.05.", "Regarding age, results of the present study (Table 1) showed that the three investigated groups were of comparable age since there was insignificant difference between G1 and each of G2 (P1 > 0.05) and G3(P2 > 0.05) as well as between G2 and G3 (P3 > 0.05), suggesting that age has no effect on the investigated urinary parameters. Concerning the investigated urinary enzymes of structural integrity of the proximal tubules (U.NAG/U.Cr and U.LAP/ U.Cr), data in (Table 1) demonstrated insignificant difference between the three investigated groups, suggesting that tramadol did not cause a damaging effect on the structural integrity of the proximal tubules, and even coabuse of cannabis with tramadol addiction has no effect on the structural integrity of the proximal tubules. This suggests that neither tramadol addiction alone nor tramadol coabused with cannabis affects the structural integrity of the proximal tubules among the investigated addicted individuals in the present study. These findings support the results reported by others22.\nData (Mean ± SD) of investigated parameters among different studied groups\nN: Number of individuals participated in each investigated group.\nOn comparing the three investigated groups using ANOVA, (Table 1) revealed a significant increase (P < 0.05) in the urinary excretion of calcium and insignificant differences regarding all others investigated parameters.\nRegarding the effect of addiction on reabsorption integrity of the nephron for U.Cu/U.Cr and U.Zn/U.Cr as well as U.Ca/U.Cr, results in (Table 1) showed insignificant changes (P2 > 0.05) in urinary excretion of U.Cu/U.Cr and U.Zn/U.Cr among the three investigated groups, suggesting also that neither tramadol addiction alone nor coabused with cannabis could affect the reabsorption integrity of the nephron for both Cu and Zn. Also, this suggests that cannabis itself has no effect on the reabsorption integrity of the nephron for U.Cu/U.Cr and U.Zn/U.Cr.\nConcerning the effect of addiction on urinary excretion of U.Ca/U.Cr, results in (Table 2) revealed a significant increase in U.Ca/U.Cr excretion among tramadol addicted group (G2) (P1 < 0.05) whencompared with (G1), suggesting that tramadol addiction causes reabsorption impairment of the nephron function regarding urinary calcium. 
This finding and suggestion receive support from the report that opioids cause increased levels of growth hormone and vasopressin23.\nComparison between different investigated groups regarding urinary calcium excretion\nN: Number of individuals participating in each investigated group,\nP1: G2 compared with G1; P2: G3 compared with G1; P3: G3 compared with G2.\nIn addition, Table 2 demonstrated a significant difference (P < 0.05) in the urinary calcium excretion of each of the two addicted groups (G2 and G3) when comparing each of them with the control group (G1), as well as when comparing the two with each other.\nIt has been reported that growth hormone excess, such as occurs in acromegaly, causes hypercalciuria, which appears to be a tubular effect, and that vasopressin may increase calcium excretion through its effect of reducing calcium transport in the thick ascending limb24. Furthermore, Table 2 showed a significant increase in U.Ca/U.Cr excretion in the tramadol coabused with cannabis group (G3) (P2 < 0.05) when compared with G1, and this excretion was also significantly greater than that in the tramadol addicted group (G2) when comparing G3 and G2, suggesting that cannabis may have a synergistic effect on the impairment effect of tramadol on calcium reabsorption along the nephron. This suggestion is supported by the results in Table 2, which demonstrated a significant increase in U.Ca/U.Cr excretion in the tramadol coabused with cannabis group (G3) (P2 < 0.05) when compared with the tramadol addicted group (G2), and by the finding of a significant correlation (r = 0.5, P2 < 0.01) between U.Ca/U.Cr and U.THC/U.Cr levels in G3 (Table 4). Moreover, other investigators25 reported mild kidney function decline among patients who had used marijuana, and others reported that tramadol has negative impacts on kidney function in humans26 and experimental animals27, as evidenced by increases in BUN and serum creatinine levels. Other studies reached similar results28,29. Results of the present study (Table 1) showed that the two addicted groups had different addiction durations; since the addiction duration in G3 was longer than in G2, it could be suggested that the abuser had started addiction with only one drug and after a period of time (that is, with increasing age) started coabusing the second drug. This suggestion is supported by the report of others30, who found that, among cases of tramadol abuse, 97% of the drug addicts used tramadol in combination with other drugs or had a previous history of addiction to a substance of abuse. In addition, the results in Table 4 revealed a significant correlation (r = 0.552, P < 0.01) between age and addiction duration in G3, adding further support to the above suggestion.\nCorrelation coefficient (r) between investigated parameters in the tramadol coabused with cannabis addicted group\nr is significant at (P < 0.05)\nr is highly significant at (P < 0.01).\nData from Table 4 demonstrate a positive correlation between age and addiction duration; between U.THC/U.Cr and each of U.LAP/U.Cr, U.Cu/U.Cr, U.Zn/U.Cr, and U.Ca/U.Cr; between U.NAG/U.Cr and U.LAP/U.Cr; between U.LAP/U.Cr and each of U.Cu/U.Cr, U.Zn/U.Cr, and U.Ca/U.Cr; between U.Cu/U.Cr and both U.Zn/U.Cr and U.Ca/U.Cr; and between U.Zn/U.Cr and U.Ca/U.Cr.", "Absence of a cannabis-only addicted group and the small number of recruited participants are the limitations of the present study." ]
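The test battery described in the statistical-analysis string above (ANOVA across the three groups, Mann-Whitney and Kruskal-Wallis for non-parametric data, and correlation between urinary parameters) maps directly onto SciPy. The sketch below uses placeholder group values, not the study's measurements, and is only meant to show the shape of such a pipeline.

import numpy as np
from scipy import stats

g1 = np.array([0.10, 0.12, 0.09, 0.11, 0.13])  # control, e.g. a U.Ca/U.Cr ratio
g2 = np.array([0.15, 0.18, 0.14, 0.17, 0.16])  # tramadol addicts
g3 = np.array([0.20, 0.22, 0.19, 0.24, 0.21])  # tramadol + cannabis addicts

# One-way ANOVA across the three independent groups.
f_stat, p_anova = stats.f_oneway(g1, g2, g3)

# Non-parametric alternatives used when distributional assumptions fail.
u_stat, p_mw = stats.mannwhitneyu(g1, g2, alternative="two-sided")
h_stat, p_kw = stats.kruskal(g1, g2, g3)

# Pearson correlation between two urinary parameters (placeholder vectors).
thc = np.array([1.2, 2.5, 3.1, 4.0, 4.8])
r, p_corr = stats.pearsonr(thc, g3)

for name, p in [("ANOVA", p_anova), ("Mann-Whitney", p_mw),
                ("Kruskal-Wallis", p_kw), ("correlation", p_corr)]:
    print(f"{name}: p = {p:.3f}  ({'significant' if p < 0.05 else 'n.s.'} at 0.05)")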
[ null, null, null, null ]
[ "Introduction", "Materials & methods", "Sampling", "Methods", "Statistical analysis", "Results and discussion", "Limitation of the study", "Conclusion" ]
[ "The use of illicit drugs has become a worldwide health problem. It leads to numerous consequences at the health, economic, social and legal levels. These consequences interfere with the development of the countries and their efforts to respond to the needs of their populations. Apart from its economic cost, drug abuse leads to social problems among family members, abnormal behavior, crime, health and psychological problems as well as economic difficulties1.\nTramadol (Tr) is a synthetic opioid from the aminocyclohexanol group and is commonly used in clinical practice as analgesic for treatment of moderate to severe pain2. Tramadol is free of side effects of traditional opioid such as the risk for respiratory depression3 and drug dependence4. So, the abuse potential of tramadol is considered low or absent5, which is in contrast to other opioids.\nCannabis was used a long time ago for both recreational and medicinal purposes. The plant Cannabis sativa (Indian hemp) is the source of different psychoactive products. Marijuana comes from leaves, stems, and dried flower buds of the plant. Hashish is a resin obtained from flowering buds of the hemp plant6. Δ9-tetrahydrocannabinol (Δ9-THC) is the primary psychoactive compound and contributes to the behavioral toxicity ofcannabis7.\nCalcium (Ca) is the most abundant divalent ion (Ca2+) in the body8. Calcium is reabsorbed along all segments of the nephron. The proximal tubule is the primary tubular segment where most of the filtered calcium is reabsorbed. Under normal condition, 98–99% of the calcium which passes into the tubule is reabsorbed. Approximately 60–70% of the filtered calcium is reabsorbed in the proximal convoluted tubule, 20% in the loop of Henle, 10% by the distal convoluted tubule, and 5% by the collecting duct9. Any calcium that is not reabsorbed then passes into what is now urine and into the bladder.\nCopper(Cu) is important for all functions of many cellular enzymes10. The copper transporters ATP7A and ATP7B are expressed at distinct regions of the kidney11. They are both expressed in the glomeruli in minor amounts. ATP7A is localized in the proximal and distal tubules, while ATP7B is expressed in the loop of Henle. The two copper transporters work together to maintain copper balance within this tissue. The ability of ATP7A to traffic to basolateral membrane (an event associated with copper export) in response to copper elevation strongly suggests that ATP7A plays the major role in exporting copper from renal cells for reabsorption into the blood and in protecting renal cells against copper overload12. In another research, ATP7B expression in the loops of Henle suggests that once in the filtrate, ATP7B is responsible for copper reabsorption11. Previous data suggest that copper may be reabsorbed by the same pathway as that involved in iron reabsorption13, which occurs between the proximal and distal convoluted tubules, interpreted as the loops of Henle.\nZinc (Zn) exists as a divalent cation (Zn2+). The kidney epithelium is the site of urinary zinc excretion and reabsorption. Zinc reabsorption occurs in all the downstream segments of the nephron; proximal tubule, Loop of Henle, and terminal nephron segments14. Most of the filtered Zn is reabsorbed along kidney proximal tubule15. The kidney is considered the primary eliminator of exogenous drugs and toxins; impairment of excretion predisposes it to various forms of injury. Development of new biomarkers is needed for the specific diagnosis of nephrotoxicity at earlier stage16. 
These biomarkers include brush border enzymes such as leucine aminopeptidase (LAP)17, lysosomal enzymes such as N-acetyl-β-D-glucosaminidase (NAG)18, and cytosolic enzymes such as α-glutathione S-transferase (α-GST)18. Thus, detection of enzymes in the urine potentially provides valuable information related not only to the site of tubular injury (proximal and distal tubule) but also to the severity of injury16.\nRenal tubular cells, in particular proximal tubule cells, are vulnerable to the toxic effects of drugs because their role in concentrating and reabsorbing glomerular filtrate exposes them to high levels of circulating toxins19. Although most nephrotoxicity occurs in the proximal part of the nephron, some chemicals damage the distal tubules; the function of these structures facilitates their vulnerability to toxicants20.\nDrug addiction has been associated with various forms of renal disease, some of which may be related to a direct effect of the drugs themselves, while others are caused by complications related to drug abuse21.\nThe present work aims to investigate the effect of tramadol addiction on the urinary excretion of some metals (copper, zinc, and calcium) in relation to the structural integrity of the proximal tubules, which was evaluated by measuring the urinary excretion of proximal-tubule-specific enzymes, e.g., LAP and NAG. In addition, the study was extended to evaluate the effect of tramadol addiction coabused with cannabis.", "The study was designed as a comparative cross-sectional study conducted from December 2015 to March 2016. Participants in the present investigation were recruited on a voluntary basis from non-smoking males attending the outpatient clinic of El-Demerdash Hospital, Ain-Shams University, for treatment of drug addiction. All participants were interviewed using a questionnaire designed to obtain information about previous medical and occupational history, medication intake, actual health status, and subjective symptoms. All subjects underwent a routine clinical examination and a routine urinalysis. The interviews and clinical examinations were performed by the clinic physicians under the supervision of one of the authors. Participants of the control group were recruited from relatives of the addicted participants after applying the same exclusion criteria and clinical examination. A participant was excluded from the present study if he had:\nA history of disease affecting the kidney before the onset of addiction, or any disease likely to impair renal function or affect the urinary excretion of the investigated parameters, e.g. diabetes mellitus, hypertension, collagen diseases such as systemic lupus erythematosus, urinary tract disease, rheumatoid arthritis, and gout.\nA previous or present exposure to agents capable of damaging the kidney, such as heavy metals like lead (Pb2+) and cadmium (Cd2+), as well as other nephrotoxins such as organic solvents.\nRegular and prolonged treatment with drugs affecting the kidney, e.g. aminoglycosides and antirheumatic drugs.\nDental mercury amalgam fillings, as they may affect the kidney.\nCigarette smoking.\nA total of sixty-five males were included in the present study and divided into three groups as follows: the control group (G1) comprised 19 males, the tramadol addicts group (G2) comprised 18 males, and the tramadol coabused with cannabis addicts group (G3) comprised 28 males.\nSampling Before starting treatment for addiction, a morning urine sample was collected from each participant into a 100 ml sterilized plastic container and centrifuged at 4500 rpm for 5 minutes; the clear supernatant was then distributed into polyethylene vials (1.5 ml capacity). One vial was used on the same day of urine collection for measuring tramadol (Tr) and cannabis in urine; the remaining vials were stored at -20°C without preservatives until analyzed within 2 weeks.\nMethods Each urine sample was analyzed for the determination of:\nUrinary tramadol (U.Tr) concentration by immunoassay using an Immunalysis EIA kit (Immunalysis Corporation, USA).\nUrinary Δ9-THC concentration by DRI® cannabinoid assay (Microgenics, USA).\nUrinary NAG and LAP concentrations, for the assessment of proximal tubular structural integrity, by colorimetric methods using an NAG kit (Diazyme Laboratories, USA) and a LAP kit (Randox, UK).\nUrinary Cu, Zn, and Ca concentrations, for assessment of the reabsorption integrity of the proximal tubules, by a Thermo Scientific ICE 3000 Series Atomic Absorption Spectrophotometer (England).\nUrinary creatinine (U.Cr) concentration by the Jaffe kinetic method using a kit (Greiner Diagnostic GmbH, Germany).\nStatistical analysis Data were tabulated as mean ± SD. ANOVA was used to compare more than two means of independent groups. For parametric data, if ANOVA was statistically significant, a post hoc test was used to compare each pair of means, while Mann-Whitney and Kruskal-Wallis tests were used for non-parametric data. Correlation coefficients were calculated to examine the relations between the investigated urinary parameters. Results were considered statistically significant at P < 0.05.", "Before starting treatment for addiction, a morning urine sample was collected from each participant into a 100 ml sterilized plastic container and centrifuged at 4500 rpm for 5 minutes; the clear supernatant was then distributed into polyethylene vials (1.5 ml capacity). 
One vial was used on the same day of urine collection for measuring tramadol (Tr) and cannabis in urine; the remaining vials were stored at -20°C without preservatives until analyzed within 2 weeks.", "Each urine sample was analyzed for the determination of:\nUrinary tramadol (U.Tr) concentration by immunoassay using an Immunalysis EIA kit (Immunalysis Corporation, USA).\nUrinary Δ9-THC concentration by DRI® cannabinoid assay (Microgenics, USA).\nUrinary NAG and LAP concentrations, for the assessment of proximal tubular structural integrity, by colorimetric methods using an NAG kit (Diazyme Laboratories, USA) and a LAP kit (Randox, UK).\nUrinary Cu, Zn, and Ca concentrations, for assessment of the reabsorption integrity of the proximal tubules, by a Thermo Scientific ICE 3000 Series Atomic Absorption Spectrophotometer (England).\nUrinary creatinine (U.Cr) concentration by the Jaffe kinetic method using a kit (Greiner Diagnostic GmbH, Germany).", "Data were tabulated as mean ± SD. ANOVA was used to compare more than two means of independent groups. For parametric data, if ANOVA was statistically significant, a post hoc test was used to compare each pair of means, while Mann-Whitney and Kruskal-Wallis tests were used for non-parametric data. Correlation coefficients were calculated to examine the relations between the investigated urinary parameters. Results were considered statistically significant at P < 0.05.", "Regarding age, results of the present study (Table 1) showed that the three investigated groups were of comparable age, since there were insignificant differences between G1 and each of G2 (P1 > 0.05) and G3 (P2 > 0.05), as well as between G2 and G3 (P3 > 0.05), suggesting that age has no effect on the investigated urinary parameters. Concerning the investigated urinary enzymes of structural integrity of the proximal tubules (U.NAG/U.Cr and U.LAP/U.Cr), the data in Table 1 demonstrated insignificant differences between the three investigated groups, suggesting that tramadol did not damage the structural integrity of the proximal tubules, and that even coabuse of cannabis with tramadol addiction has no effect on the structural integrity of the proximal tubules. This suggests that neither tramadol addiction alone nor tramadol coabused with cannabis affects the structural integrity of the proximal tubules among the investigated addicted individuals. 
These findings support the results reported by others22.\nData (Mean ± SD) of investigated parameters among different studied groups\nN: Number of individuals participating in each investigated group.\nOn comparing the three investigated groups using ANOVA, Table 1 revealed a significant increase (P < 0.05) in the urinary excretion of calcium and insignificant differences regarding all other investigated parameters.\nRegarding the effect of addiction on the reabsorption integrity of the nephron for U.Cu/U.Cr and U.Zn/U.Cr as well as U.Ca/U.Cr, the results in Table 1 showed insignificant changes (P2 > 0.05) in the urinary excretion of U.Cu/U.Cr and U.Zn/U.Cr among the three investigated groups, suggesting that neither tramadol addiction alone nor tramadol coabused with cannabis affects the reabsorption integrity of the nephron for Cu and Zn, and that cannabis itself has no effect on the reabsorption integrity of the nephron for U.Cu/U.Cr and U.Zn/U.Cr.\nConcerning the effect of addiction on the urinary excretion of U.Ca/U.Cr, the results in Table 2 revealed a significant increase in U.Ca/U.Cr excretion in the tramadol addicted group (G2) (P1 < 0.05) when compared with G1, suggesting that tramadol addiction causes reabsorption impairment of nephron function regarding urinary calcium. This finding and suggestion receive support from the report that opioids cause increased levels of growth hormone and vasopressin23.\nComparison between different investigated groups regarding urinary calcium excretion\nN: Number of individuals participating in each investigated group,\nP1: G2 compared with G1; P2: G3 compared with G1; P3: G3 compared with G2.\nIn addition, Table 2 demonstrated a significant difference (P < 0.05) in the urinary calcium excretion of each of the two addicted groups (G2 and G3) when comparing each of them with the control group (G1), as well as when comparing the two with each other.\nIt has been reported that growth hormone excess, such as occurs in acromegaly, causes hypercalciuria, which appears to be a tubular effect, and that vasopressin may increase calcium excretion through its effect of reducing calcium transport in the thick ascending limb24. Furthermore, Table 2 showed a significant increase in U.Ca/U.Cr excretion in the tramadol coabused with cannabis group (G3) (P2 < 0.05) when compared with G1, and this excretion was also significantly greater than that in the tramadol addicted group (G2) when comparing G3 and G2, suggesting that cannabis may have a synergistic effect on the impairment effect of tramadol on calcium reabsorption along the nephron. This suggestion is supported by the results in Table 2, which demonstrated a significant increase in U.Ca/U.Cr excretion in the tramadol coabused with cannabis group (G3) (P2 < 0.05) when compared with the tramadol addicted group (G2), and by the finding of a significant correlation (r = 0.5, P2 < 0.01) between U.Ca/U.Cr and U.THC/U.Cr levels in G3 (Table 4). Moreover, other investigators25 reported mild kidney function decline among patients who had used marijuana, and others reported that tramadol has negative impacts on kidney function in humans26 and experimental animals27, as evidenced by increases in BUN and serum creatinine levels. Other studies reached similar results28,29. 
Results of the present study (Table 1) showed that the two addicted groups had different addiction durations; since the addiction duration in G3 was longer than in G2, it could be suggested that the abusers had started addiction with only one drug and, after a period of time (that is, with increasing age), started coabusing the second drug. This suggestion is supported by the report of others30, who found that among cases of tramadol abuse, 97% of the drug addicts used tramadol in combination with other drugs or had a previous history of addiction to a substance of abuse. In addition, results in Table 4 revealed a significant correlation (r = 0.552, P < 0.01) between age and addiction duration in G3, adding further support to this suggestion.\nCorrelation coefficient (r) between investigated parameters in the tramadol coabused with cannabis addicted group\nr is significant at (P < 0.05)\nr is highly significant at (P < 0.01).\nData in Table 4 demonstrate a positive correlation between age and addiction duration; between U.THC/U.Cr and each of U.LAP/U.Cr, U.Cu/U.Cr, U.Zn/U.Cr, and U.Ca/U.Cr; between U.NAG/U.Cr and U.LAP/U.Cr; between U.LAP/U.Cr and each of U.Cu/U.Cr, U.Zn/U.Cr, and U.Ca/U.Cr; between U.Cu/U.Cr and both U.Zn/U.Cr and U.Ca/U.Cr; and between U.Zn/U.Cr and U.Ca/U.Cr.", "The absence of a cannabis-only addicted group and the small number of recruited participants are the limitations of the present study.", "Among the investigated addicted individuals, neither the addiction of tramadol alone nor in combination with cannabis was found to affect the structural integrity of the proximal tubules. Moreover, the reabsorption integrity of the nephron was affected only with respect to calcium reabsorption, not copper or zinc reabsorption. To the best of our knowledge, this is the first study to investigate the effect of tramadol addiction, alone or coabused with cannabis, on the reabsorption integrity of the nephron regarding the urinary excretion of copper, zinc, and calcium." ]
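The statistical workflow quoted above (ANOVA with post hoc pairwise comparisons for parametric data, Kruskal–Wallis and Mann–Whitney tests for non-parametric data, and correlation between urinary parameters) can be sketched in a few lines of Python with SciPy. This is a minimal sketch on hypothetical placeholder values, not the study's data; the paper does not name its post hoc procedure, so plain pairwise t-tests stand in for it, and it does not specify the correlation type, so Pearson is assumed.

```python
import numpy as np
from scipy import stats

# Hypothetical creatinine-normalized urinary calcium values per group
# (placeholders only; the study's raw data are not published).
g1 = np.array([0.08, 0.10, 0.09, 0.11, 0.07])  # G1: controls
g2 = np.array([0.14, 0.16, 0.13, 0.18, 0.15])  # G2: tramadol addicts
g3 = np.array([0.20, 0.22, 0.19, 0.25, 0.21])  # G3: tramadol + cannabis

# One-way ANOVA across the three independent groups.
f_stat, p_anova = stats.f_oneway(g1, g2, g3)

if p_anova < 0.05:
    # Post hoc pairwise comparisons (the paper does not name the
    # specific post hoc test, so simple t-tests stand in here).
    for label, (a, b) in {"G2 vs G1": (g2, g1),
                          "G3 vs G1": (g3, g1),
                          "G3 vs G2": (g3, g2)}.items():
        t_stat, p = stats.ttest_ind(a, b)
        print(f"{label}: t = {t_stat:.2f}, p = {p:.4f}")

# Non-parametric alternatives used for non-normal data.
h_stat, p_kw = stats.kruskal(g1, g2, g3)      # three groups
u_stat, p_mw = stats.mannwhitneyu(g2, g1)     # two groups

# Correlation between two urinary parameters (correlation type is not
# specified in the paper; Pearson is assumed here).
thc = np.array([1.1, 1.3, 1.0, 1.6, 1.2])     # hypothetical U.THC/U.Cr in G3
r, p_r = stats.pearsonr(g3, thc)
print(f"ANOVA p = {p_anova:.4f}, Kruskal-Wallis p = {p_kw:.4f}, r = {r:.2f}")
```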
[ "intro", "materials|methods", null, "methods", null, null, null, "conclusions" ]
[ "Enzymuria", "copper", "zinc", "calcium", "reabsorption impairment", "tubular structural integrity" ]
Introduction: The use of illicit drugs has become a worldwide health problem. It leads to numerous consequences at the health, economic, social, and legal levels. These consequences interfere with the development of countries and their efforts to respond to the needs of their populations. Apart from its economic cost, drug abuse leads to social problems among family members, abnormal behavior, crime, health and psychological problems, as well as economic difficulties1. Tramadol (Tr) is a synthetic opioid of the aminocyclohexanol group and is commonly used in clinical practice as an analgesic for the treatment of moderate to severe pain2. Tramadol is free of the side effects of traditional opioids, such as the risk of respiratory depression3 and drug dependence4. Thus, the abuse potential of tramadol is considered low or absent5, in contrast to other opioids. Cannabis has long been used for both recreational and medicinal purposes. The plant Cannabis sativa (Indian hemp) is the source of different psychoactive products. Marijuana comes from the leaves, stems, and dried flower buds of the plant. Hashish is a resin obtained from the flowering buds of the hemp plant6. Δ9-tetrahydrocannabinol (Δ9-THC) is the primary psychoactive compound and contributes to the behavioral toxicity of cannabis7. Calcium (Ca) is the most abundant divalent ion (Ca2+) in the body8. Calcium is reabsorbed along all segments of the nephron. The proximal tubule is the primary tubular segment where most of the filtered calcium is reabsorbed. Under normal conditions, 98–99% of the calcium which passes into the tubule is reabsorbed: approximately 60–70% of the filtered calcium is reabsorbed in the proximal convoluted tubule, 20% in the loop of Henle, 10% by the distal convoluted tubule, and 5% by the collecting duct9. Any calcium that is not reabsorbed then passes into what is now urine and into the bladder. Copper (Cu) is important for the functions of many cellular enzymes10. The copper transporters ATP7A and ATP7B are expressed at distinct regions of the kidney11. Both are expressed in the glomeruli in minor amounts. ATP7A is localized in the proximal and distal tubules, while ATP7B is expressed in the loop of Henle. The two copper transporters work together to maintain copper balance within this tissue. The ability of ATP7A to traffic to the basolateral membrane (an event associated with copper export) in response to copper elevation strongly suggests that ATP7A plays the major role in exporting copper from renal cells for reabsorption into the blood and in protecting renal cells against copper overload12. In other research, ATP7B expression in the loops of Henle suggests that once copper is in the filtrate, ATP7B is responsible for its reabsorption11. Previous data suggest that copper may be reabsorbed by the same pathway as that involved in iron reabsorption13, which occurs between the proximal and distal convoluted tubules, interpreted as the loops of Henle. Zinc (Zn) exists as a divalent cation (Zn2+). The kidney epithelium is the site of urinary zinc excretion and reabsorption. Zinc reabsorption occurs in all the downstream segments of the nephron: the proximal tubule, loop of Henle, and terminal nephron segments14. Most of the filtered Zn is reabsorbed along the kidney proximal tubule15. The kidney is considered the primary eliminator of exogenous drugs and toxins; impairment of excretion predisposes it to various forms of injury. 
Development of new biomarkers is needed for the specific diagnosis of nephrotoxicity at an earlier stage16. These biomarkers include brush border enzymes such as leucine aminopeptidase (LAP)17, lysosomal enzymes such as N-acetyl-β-D-glucosaminidase (NAG)18, and cytosolic enzymes such as α-glutathione S-transferase (α-GST)18. Thus, detection of enzymes in the urine potentially provides valuable information related not only to the site of tubular injury (proximal and distal tubule) but also to the severity of injury16. Renal tubular cells, in particular proximal tubule cells, are vulnerable to the toxic effects of drugs because their role in concentrating and reabsorbing glomerular filtrate exposes them to high levels of circulating toxins19. Although most nephrotoxicity occurs in the proximal part of the nephron, some chemicals damage the distal tubules; the function of these structures facilitates their vulnerability to toxicants20. Drug addiction has been associated with various forms of renal disease, which may be related to direct effects of the drugs themselves, while other forms are caused by complications related to drug abuse21. The present work aims to investigate the effect of tramadol addiction on the urinary excretion of some metals (copper, zinc, and calcium) in relation to the structural integrity of the proximal tubules, evaluated by measuring the urinary excretion of proximal tubular specific enzymes, namely LAP and NAG. In addition, the study was extended to evaluate the effect of tramadol addiction coabused with cannabis. Materials & methods: The study was designed as a comparative cross-sectional study conducted from December 2015 to March 2016. Participants were recruited on a voluntary basis from non-smoking males attending the outpatient clinic, El-Demerdash hospital, Ain-Shams University, for treatment of drug addiction. All participants were interviewed using a questionnaire designed to obtain information about previous medical and occupational history, medication intake, actual health status, and subjective symptoms. All subjects underwent a routine clinical examination and a routine urinalysis. The interview and clinical examination were performed by the clinic physicians under the supervision of one of the authors. Participants of the control group were recruited from relatives of the addicted participants after applying the same exclusion criteria and clinical examination. A participant was excluded from the present study if he had: a history of disease affecting the kidney before the onset of addiction, or any disease likely to impair renal function or affect the urinary excretion of the investigated parameters (e.g. diabetes mellitus, hypertension, collagen diseases such as systemic lupus erythematosus, urinary tract disease, rheumatoid arthritis, and gout); a previous or present exposure to agents capable of damaging the kidney, such as heavy metals like lead (Pb2+) and cadmium (Cd2+) or other nephrotoxins such as organic solvents; regular and prolonged treatment with drugs affecting the kidney, e.g. aminoglycosides and antirheumatic drugs; dental mercury amalgam fillings, as they may affect the kidney; or cigarette smoking. 
A total of sixty-five males were included in the present study and divided into three groups: the control group (G1) comprised 19 males, the tramadol addicts group (G2) comprised 18 males, and the tramadol coabused with cannabis addicts group (G3) comprised 28 males. Sampling: Before starting treatment for addiction, a morning urine sample was collected from each participant into a 100 ml sterilized plastic container and centrifuged at 4500 rpm for 5 minutes; the clear supernatant was then distributed into polyethylene vials (1.5 ml capacity). One vial was used on the same day of urine collection for measuring tramadol (Tr) and cannabis in urine; the remaining vials were stored at −20°C without preservatives until analyzed within 2 weeks. Methods: Each urine sample was analyzed for the determination of: Urinary tramadol (U.Tr) concentration by enzyme immunoassay using an EIA kit (Immunalysis Corporation, USA). Urinary Δ9-THC concentration by DRI® cannabinoid assay (Microgenics, USA). Urinary NAG and LAP concentrations, for the assessment of proximal tubular structural integrity, by colorimetric methods using an NAG kit (Diazyme Laboratories, USA) and an LAP kit (Randox, UK). Urinary Cu, Zn, and Ca concentrations, for the assessment of reabsorption integrity of the proximal tubules, by a Thermo Scientific ICE 3000 Series Atomic Absorption Spectrophotometer (England). Urinary creatinine (U.Cr) concentration by the Jaffe kinetic method using a kit (Greiner Diagnostic GmbH, Germany). Statistical analysis: Data were tabulated as mean ± SD. ANOVA was used to compare more than two means of independent groups. For parametric data, if ANOVA was statistically significant, a post hoc test was used to compare each pair of means; Mann–Whitney and Kruskal–Wallis tests were used for non-parametric data. Correlation coefficients were calculated to examine the relations between the investigated urinary parameters. Results were considered statistically significant at P < 0.05. Results and discussion: Regarding age, the results of the present study (Table 1) showed that the three investigated groups were of comparable age, since the differences between G1 and each of G2 (P1 > 0.05) and G3 (P2 > 0.05), as well as between G2 and G3 (P3 > 0.05), were not significant, suggesting that age has no effect on the investigated urinary parameters. Concerning the investigated urinary enzymes of structural integrity of the proximal tubules (U.NAG/U.Cr and U.LAP/U.Cr), the data in Table 1 demonstrated no significant difference between the three investigated groups, suggesting that neither tramadol addiction alone nor tramadol coabused with cannabis affects the structural integrity of the proximal tubules among the investigated addicted individuals. These findings support the results reported by others22. Data (Mean ± SD) of investigated parameters among different studied groups N: Number of individuals participating in each investigated group. On comparing the three investigated groups using ANOVA, Table 1 revealed a significant increase (P < 0.05) in the urinary excretion of calcium and no significant differences regarding all other investigated parameters. Regarding the effect of addiction on the reabsorption integrity of the nephron for U.Cu/U.Cr and U.Zn/U.Cr as well as U.Ca/U.Cr, results in Table 1 showed no significant changes (P2 > 0.05) in the urinary excretion of U.Cu/U.Cr and U.Zn/U.Cr among the three investigated groups, suggesting that neither tramadol addiction alone nor tramadol coabused with cannabis affects the reabsorption integrity of the nephron for Cu or Zn, and that cannabis itself has no effect on the reabsorption of these two metals. 
Concerning the effect of addiction on the urinary excretion of U.Ca/U.Cr, results in Table 2 revealed a significant increase in U.Ca/U.Cr excretion in the tramadol-addicted group (G2) (P1 < 0.05) when compared with G1, suggesting that tramadol addiction impairs the reabsorption of calcium along the nephron. This finding is supported by the report that opioids increase levels of growth hormone and vasopressin23. Comparison between different investigated groups regarding urinary calcium excretion N: Number of individuals participating in each investigated group, P1: G2 compared with G1, P2: G3 compared with G1, P3: G3 compared with G2. In addition, Table 2 demonstrated a significant difference (P < 0.05) in urinary calcium excretion in each of the two addicted groups (G2 and G3) when compared with the control group (G1), as well as when compared with each other. It was reported that growth hormone excess, such as occurs in acromegaly, causes hypercalciuria, which appears to be a tubular effect, and that vasopressin may increase calcium excretion by reducing calcium transport in the thick ascending limb24. Furthermore, Table 2 showed a significant increase in U.Ca/U.Cr excretion in the tramadol coabused with cannabis group (G3) (P2 < 0.05) when compared with G1; this excretion was also significantly higher than in the tramadol-addicted group (G2), suggesting that cannabis may have a synergistic effect on the impairment by tramadol of calcium reabsorption along the nephron. This suggestion is supported by the significant increase in U.Ca/U.Cr excretion in G3 compared with G2 (Table 2) and by the significant correlation (r = 0.5, P2 < 0.01) between U.Ca/U.Cr and U.THC/U.Cr levels in G3 (Table 4). Moreover, other investigators25 reported mild kidney function decline among patients who had used marijuana, and others reported that tramadol has negative impacts on kidney function in humans26 and experimental animals27, as evidenced by increases in BUN and serum creatinine levels. Other studies reached similar results28,29. Results of the present study (Table 1) showed that the two addicted groups had different addiction durations; since the addiction duration in G3 was longer than in G2, it could be suggested that the abusers had started addiction with only one drug and, after a period of time (that is, with increasing age), started coabusing the second drug. This suggestion is supported by the report of others30, who found that among cases of tramadol abuse, 97% of the drug addicts used tramadol in combination with other drugs or had a previous history of addiction to a substance of abuse. In addition, results in Table 4 revealed a significant correlation (r = 0.552, P < 0.01) between age and addiction duration in G3, adding further support to this suggestion. Correlation coefficient (r) between investigated parameters in the tramadol coabused with cannabis addicted group r is significant at (P < 0.05) r is highly significant at (P < 0.01). 
Data in Table 4 demonstrate a positive correlation between age and addiction duration; between U.THC/U.Cr and each of U.LAP/U.Cr, U.Cu/U.Cr, U.Zn/U.Cr, and U.Ca/U.Cr; between U.NAG/U.Cr and U.LAP/U.Cr; between U.LAP/U.Cr and each of U.Cu/U.Cr, U.Zn/U.Cr, and U.Ca/U.Cr; between U.Cu/U.Cr and both U.Zn/U.Cr and U.Ca/U.Cr; and between U.Zn/U.Cr and U.Ca/U.Cr. Limitation of the study: The absence of a cannabis-only addicted group and the small number of recruited participants are the limitations of the present study. Conclusion: Among the investigated addicted individuals, neither the addiction of tramadol alone nor in combination with cannabis was found to affect the structural integrity of the proximal tubules. Moreover, the reabsorption integrity of the nephron was affected only with respect to calcium reabsorption, not copper or zinc reabsorption. To the best of our knowledge, this is the first study to investigate the effect of tramadol addiction, alone or coabused with cannabis, on the reabsorption integrity of the nephron regarding the urinary excretion of copper, zinc, and calcium.
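All of the ratios used throughout this study (U.Ca/U.Cr, U.Cu/U.Cr, U.Zn/U.Cr, U.NAG/U.Cr, U.LAP/U.Cr) are spot-urine analyte concentrations normalized to urinary creatinine, which corrects for urine dilution. A minimal sketch of that normalization, with hypothetical values:

```python
def creatinine_ratio(analyte_conc: float, creatinine_conc: float) -> float:
    """Normalize a spot-urine analyte concentration to urinary creatinine.

    Dividing by creatinine corrects for urine dilution, making spot
    samples collected at different hydration states comparable.
    """
    return analyte_conc / creatinine_conc

# Hypothetical values, not study data.
u_ca = 8.4   # urinary calcium, mg/dL
u_cr = 95.0  # urinary creatinine, mg/dL
print(f"U.Ca/U.Cr = {creatinine_ratio(u_ca, u_cr):.4f} mg/mg")
```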
Background: The use of illicit drugs has become a worldwide health problem. Substances with the potential to be abused may have direct or indirect effects on physiologic mechanisms that lead to organ system dysfunction and disease. Methods: Sixty-five males were included in the study; they were classified into a control group (G1=19), a tramadol addicts group (G2=18), and a tramadol coabused with cannabis addicts group (G3=28). Parameters investigated for structural integrity were urinary levels of leucine aminopeptidase and N-acetyl-β-D-glucosaminidase; urinary parameters for reabsorption integrity were levels of copper, zinc, and calcium; urinary creatinine was also measured. In addition, urinary levels of tramadol and tetrahydrocannabinol were estimated. Results: In the two addicted groups, all measured parameters were not significantly different in comparison with the control group, except for urinary calcium excretion, which was significantly increased in both addicted groups. Conclusions: Tramadol addiction, whether alone or coabused with cannabis, causes increased urinary excretion of calcium, indicating calcium reabsorption dysfunction without affecting structural integrity along the nephron.
Introduction: The use of illicit drugs has become a worldwide health problem. It leads to numerous consequences at the health, economic, social, and legal levels. These consequences interfere with the development of countries and their efforts to respond to the needs of their populations. Apart from its economic cost, drug abuse leads to social problems among family members, abnormal behavior, crime, health and psychological problems, as well as economic difficulties1. Tramadol (Tr) is a synthetic opioid of the aminocyclohexanol group and is commonly used in clinical practice as an analgesic for the treatment of moderate to severe pain2. Tramadol is free of the side effects of traditional opioids, such as the risk of respiratory depression3 and drug dependence4. Thus, the abuse potential of tramadol is considered low or absent5, in contrast to other opioids. Cannabis has long been used for both recreational and medicinal purposes. The plant Cannabis sativa (Indian hemp) is the source of different psychoactive products. Marijuana comes from the leaves, stems, and dried flower buds of the plant. Hashish is a resin obtained from the flowering buds of the hemp plant6. Δ9-tetrahydrocannabinol (Δ9-THC) is the primary psychoactive compound and contributes to the behavioral toxicity of cannabis7. Calcium (Ca) is the most abundant divalent ion (Ca2+) in the body8. Calcium is reabsorbed along all segments of the nephron. The proximal tubule is the primary tubular segment where most of the filtered calcium is reabsorbed. Under normal conditions, 98–99% of the calcium which passes into the tubule is reabsorbed: approximately 60–70% of the filtered calcium is reabsorbed in the proximal convoluted tubule, 20% in the loop of Henle, 10% by the distal convoluted tubule, and 5% by the collecting duct9. Any calcium that is not reabsorbed then passes into what is now urine and into the bladder. Copper (Cu) is important for the functions of many cellular enzymes10. The copper transporters ATP7A and ATP7B are expressed at distinct regions of the kidney11. Both are expressed in the glomeruli in minor amounts. ATP7A is localized in the proximal and distal tubules, while ATP7B is expressed in the loop of Henle. The two copper transporters work together to maintain copper balance within this tissue. The ability of ATP7A to traffic to the basolateral membrane (an event associated with copper export) in response to copper elevation strongly suggests that ATP7A plays the major role in exporting copper from renal cells for reabsorption into the blood and in protecting renal cells against copper overload12. In other research, ATP7B expression in the loops of Henle suggests that once copper is in the filtrate, ATP7B is responsible for its reabsorption11. Previous data suggest that copper may be reabsorbed by the same pathway as that involved in iron reabsorption13, which occurs between the proximal and distal convoluted tubules, interpreted as the loops of Henle. Zinc (Zn) exists as a divalent cation (Zn2+). The kidney epithelium is the site of urinary zinc excretion and reabsorption. Zinc reabsorption occurs in all the downstream segments of the nephron: the proximal tubule, loop of Henle, and terminal nephron segments14. Most of the filtered Zn is reabsorbed along the kidney proximal tubule15. The kidney is considered the primary eliminator of exogenous drugs and toxins; impairment of excretion predisposes it to various forms of injury. 
Development of new biomarkers is needed for the specific diagnosis of nephrotoxicity at an earlier stage16. These biomarkers include brush border enzymes such as leucine aminopeptidase (LAP)17, lysosomal enzymes such as N-acetyl-β-D-glucosaminidase (NAG)18, and cytosolic enzymes such as α-glutathione S-transferase (α-GST)18. Thus, detection of enzymes in the urine potentially provides valuable information related not only to the site of tubular injury (proximal and distal tubule) but also to the severity of injury16. Renal tubular cells, in particular proximal tubule cells, are vulnerable to the toxic effects of drugs because their role in concentrating and reabsorbing glomerular filtrate exposes them to high levels of circulating toxins19. Although most nephrotoxicity occurs in the proximal part of the nephron, some chemicals damage the distal tubules; the function of these structures facilitates their vulnerability to toxicants20. Drug addiction has been associated with various forms of renal disease, which may be related to direct effects of the drugs themselves, while other forms are caused by complications related to drug abuse21. The present work aims to investigate the effect of tramadol addiction on the urinary excretion of some metals (copper, zinc, and calcium) in relation to the structural integrity of the proximal tubules, evaluated by measuring the urinary excretion of proximal tubular specific enzymes, namely LAP and NAG. In addition, the study was extended to evaluate the effect of tramadol addiction coabused with cannabis. Conclusion: Among the investigated addicted individuals, neither the addiction of tramadol alone nor in combination with cannabis was found to affect the structural integrity of the proximal tubules. Moreover, the reabsorption integrity of the nephron was affected only with respect to calcium reabsorption, not copper or zinc reabsorption. To the best of our knowledge, this is the first study to investigate the effect of tramadol addiction, alone or coabused with cannabis, on the reabsorption integrity of the nephron regarding the urinary excretion of copper, zinc, and calcium.
Background: The use of illicit drugs has become a worldwide health problem. Substances with the potential to be abused may have direct or indirect effects on physiologic mechanisms that lead to organ system dysfunction and disease. Methods: Sixty-five males were included in the study; they were classified into a control group (G1=19), a tramadol addicts group (G2=18), and a tramadol coabused with cannabis addicts group (G3=28). Parameters investigated for structural integrity were urinary levels of leucine aminopeptidase and N-acetyl-β-D-glucosaminidase; urinary parameters for reabsorption integrity were levels of copper, zinc, and calcium; urinary creatinine was also measured. In addition, urinary levels of tramadol and tetrahydrocannabinol were estimated. Results: In the two addicted groups, all measured parameters were not significantly different in comparison with the control group, except for urinary calcium excretion, which was significantly increased in both addicted groups. Conclusions: Tramadol addiction, whether alone or coabused with cannabis, causes increased urinary excretion of calcium, indicating calcium reabsorption dysfunction without affecting structural integrity along the nephron.
3,835
210
[ 84, 85, 1087, 20 ]
8
[ "urinary", "cr", "tramadol", "concentration", "proximal", "kit", "addiction", "integrity", "investigated", "immunalysis" ]
[ "drug addicts tramadol", "opioids cannabis", "addiction tramadol", "coabuse cannabis tramadol", "marijuana reported tramadol" ]
null
[CONTENT] Enzymuria | copper | zinc | calcium | reabsorption impairment | tubular structural integrity [SUMMARY]
[CONTENT] Enzymuria | copper | zinc | calcium | reabsorption impairment | tubular structural integrity [SUMMARY]
null
[CONTENT] Enzymuria | copper | zinc | calcium | reabsorption impairment | tubular structural integrity [SUMMARY]
[CONTENT] Enzymuria | copper | zinc | calcium | reabsorption impairment | tubular structural integrity [SUMMARY]
[CONTENT] Enzymuria | copper | zinc | calcium | reabsorption impairment | tubular structural integrity [SUMMARY]
[CONTENT] Adult | Calcium | Case-Control Studies | Copper | Creatinine | Cross-Sectional Studies | Egypt | Humans | Male | Marijuana Abuse | Opioid-Related Disorders | Tramadol | Young Adult | Zinc [SUMMARY]
[CONTENT] Adult | Calcium | Case-Control Studies | Copper | Creatinine | Cross-Sectional Studies | Egypt | Humans | Male | Marijuana Abuse | Opioid-Related Disorders | Tramadol | Young Adult | Zinc [SUMMARY]
null
[CONTENT] Adult | Calcium | Case-Control Studies | Copper | Creatinine | Cross-Sectional Studies | Egypt | Humans | Male | Marijuana Abuse | Opioid-Related Disorders | Tramadol | Young Adult | Zinc [SUMMARY]
[CONTENT] Adult | Calcium | Case-Control Studies | Copper | Creatinine | Cross-Sectional Studies | Egypt | Humans | Male | Marijuana Abuse | Opioid-Related Disorders | Tramadol | Young Adult | Zinc [SUMMARY]
[CONTENT] Adult | Calcium | Case-Control Studies | Copper | Creatinine | Cross-Sectional Studies | Egypt | Humans | Male | Marijuana Abuse | Opioid-Related Disorders | Tramadol | Young Adult | Zinc [SUMMARY]
[CONTENT] drug addicts tramadol | opioids cannabis | addiction tramadol | coabuse cannabis tramadol | marijuana reported tramadol [SUMMARY]
[CONTENT] drug addicts tramadol | opioids cannabis | addiction tramadol | coabuse cannabis tramadol | marijuana reported tramadol [SUMMARY]
null
[CONTENT] drug addicts tramadol | opioids cannabis | addiction tramadol | coabuse cannabis tramadol | marijuana reported tramadol [SUMMARY]
[CONTENT] drug addicts tramadol | opioids cannabis | addiction tramadol | coabuse cannabis tramadol | marijuana reported tramadol [SUMMARY]
[CONTENT] drug addicts tramadol | opioids cannabis | addiction tramadol | coabuse cannabis tramadol | marijuana reported tramadol [SUMMARY]
[CONTENT] urinary | cr | tramadol | concentration | proximal | kit | addiction | integrity | investigated | immunalysis [SUMMARY]
[CONTENT] urinary | cr | tramadol | concentration | proximal | kit | addiction | integrity | investigated | immunalysis [SUMMARY]
null
[CONTENT] urinary | cr | tramadol | concentration | proximal | kit | addiction | integrity | investigated | immunalysis [SUMMARY]
[CONTENT] urinary | cr | tramadol | concentration | proximal | kit | addiction | integrity | investigated | immunalysis [SUMMARY]
[CONTENT] urinary | cr | tramadol | concentration | proximal | kit | addiction | integrity | investigated | immunalysis [SUMMARY]
[CONTENT] copper | tubule | reabsorbed | proximal | calcium | distal | henle | cells | atp7a | atp7b [SUMMARY]
[CONTENT] concentration | kit | immunalysis | usa | urinary | method | concentration assessment | usa urinary | assessment | nag [SUMMARY]
null
[CONTENT] reabsorption | respect | integrity | addiction tramadol | zinc | copper | reabsorption integrity nephron | copper zinc | integrity nephron | nephron [SUMMARY]
[CONTENT] urinary | concentration | cr | kit | tramadol | proximal | integrity | copper | cannabis | reabsorption [SUMMARY]
[CONTENT] urinary | concentration | cr | kit | tramadol | proximal | integrity | copper | cannabis | reabsorption [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Sixty-five | G3=28 ||| ||| [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| Sixty-five | G3=28 ||| ||| ||| ||| two | two ||| [SUMMARY]
[CONTENT] ||| ||| Sixty-five | G3=28 ||| ||| ||| ||| two | two ||| [SUMMARY]
Phase angle values, a good indicator of nutritional status, are associated with median value of hemoglobin rather than hemoglobin variability in hemodialysis patients.
33567950
Our aim was to elucidate whether Hb variability affects nutritional status in HD patients.
BACKGROUND
This study included chronic HD patients (n = 76) with available monthly Hb levels up to 24 months prior to the body composition monitoring (BCM) measurement. The parameters obtained in the BCM included body mass index (BMI), lean tissue index (LTI), fat tissue index (FTI), body cell mass index (BCMI), overhydration/extracellular water ratio (OH), and phase angle (PhA). The coefficient of variation (Hb-CV), standard deviation (Hb-SD), and range of Hb (Hb-RAN) were used as indexes of Hb variability. In addition, minimum (Hb-Min), maximum (Hb-Max), average (Hb-Avg), and median (Hb-Med) Hb levels (g/dL) were analyzed.
METHODS
There were no significant differences in clinical, biochemical, and nutritional indexes based on the Hb-CV level. Compared to patients with an Hb-Med ≤ 10.77, those with an Hb-Med >10.77 had higher albumin levels, total iron-binding capacity (TIBC), and PhA and lower average weekly prescribed darbepoetin. Age, female sex, OH, and darbepoetin dosage were negatively correlated with PhA. Serum albumin, phosphorus, TIBC, Hb-Med, and Hb-Avg were positively correlated with PhA. In multiple linear regression analysis, PhA was positively associated with Hb-Med and serum albumin level, whereas PhA was negatively associated with age and female sex. The area under the curve (AUC) of Hb-Med was 0.665 (p = 0.040) in predicting PhA >5.00°.
RESULTS
PhA was not affected by indexes of Hb variability, whereas PhA was associated with Hb-Med in chronic HD patients.
CONCLUSIONS
[ "Age Factors", "Aged", "Body Composition", "Electric Impedance", "Female", "Hemoglobins", "Humans", "Kidney Failure, Chronic", "Male", "Middle Aged", "Nutritional Status", "Predictive Value of Tests", "ROC Curve", "Renal Dialysis", "Sex Factors" ]
7889140
Introduction
Anemia is a common complication in patients with chronic kidney disease (CKD) and end-stage renal disease (ESRD) undergoing renal replacement treatment. Although the introduction of erythropoiesis-stimulating agents (ESAs) has led to a dramatic reduction in blood transfusion requirements and is associated with improved quality of life, fluctuation in hemoglobin (Hb) levels during ESA treatment, known as Hb variability, is a well-documented phenomenon [1]. The optimal target Hb concentration in CKD/ESRD patients remains under debate, and countries and societies have different recommendations for maintaining the optimal Hb concentration. In addition, maintaining patients' Hb levels within such an optimal narrow range is difficult due to the loss of physiological regulation of red blood cell formation and many other factors, such as iron deficiency, chronic inflammation, secondary hyperparathyroidism, malnutrition, and inadequate dialysis dose. The data show that only 30% of patients will fall within this range at any point in time, because Hb level fluctuations result in frequent under- and overshooting of the target level [2]. Chronic HD patients with a stable target Hb level are at lower risk for adverse events than maintenance HD patients without a stable Hb level [3]. In recent studies, higher fluctuation in Hb variability was associated with cardiovascular mortality and all-cause mortality in maintenance HD patients [4,5]. Malnutrition is a well-known risk factor in the general population and in HD patients. However, few studies have evaluated the association between Hb variability and nutritional status in HD patients. The nutritional status of HD patients is usually assessed using the malnutrition-inflammation score (MIS), body mass index (BMI), and serum albumin level, because accurate and reproducible evaluation of nutritional status has been problematic. Body composition monitor (BCM) analysis is a noninvasive and accurate method for distinguishing between excessive and insufficient fluid levels and can accurately assess nutritional status [6,7]. Therefore, we aimed to elucidate whether Hb variability itself affects nutritional status.
null
null
Results
Clinical characteristics of the study population The study included a total of 76 patients undergoing HD (median duration, 70.98 (24–130) months; 51.3% men; mean age, 65.84 ± 11.91 years). Here, 51.6% of patients had diabetes mellitus. The mean Hb level and darbepoetin dosage were 10.80 ± 0.51 g/dL and 25.08 ± 13.41 µg/week. The mean albumin, iron, TIBC, TS, total cholesterol, LDL-cholesterol, HDL-cholesterol, triglyceride, calcium, phosphorus, intact PTH, and hs-CRP levels were 3.83 ± 0.22 g/dL, 73.78 ± 17.75, 257.20 ± 30.68 µg/dL, 28.81 ± 6.44%, 134.86 ± 26.45, 67.92 ± 20.63, 46.04 ± 13.00, 104.95 ± 47.23, 8.36 ± 0.38, 4.10 ± 0.66, 313.23 ± 159.41, and 0.42 ± 0.43 mg/dL, respectively. The median ferritin was 228.18 (49.14–1233.45). The dry weight, pre-dialysis weight, interdialytic weight gain, and OH were 60.80 ± 10.34, 62.30 ± 11.36, 2.16 ± 0.94 kg, and 8.47 ± 6.52%. The mean BMI, LTI, FTI, BCMI, and PhA were 23.58 ± 3.02, 12.71 ± 3.17, 9.80 ± 3.59, 6.92 ± 2.25 kg/m2, and 4.28 ± 0.94°, respectively (Table 1).
Clinical characteristics of the study population (N = 76).
HDL: high-density lipoprotein; LDL: low-density lipoprotein; OH: overhydration/extracellular water ratio; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation, while categorical variables are expressed as number (percentage).
Comparisons of variables by Hb-CV level The analysis that divided patients by the median Hb-CV value found no difference in age, sex, presence of diabetes, or HD duration. In addition, Hb-Avg and the biochemical and nutritional parameters did not differ between the two groups. Finally, there was no intergroup difference with respect to the darbepoetin dose received (Table 2).
Comparisons of variables by Hb-CV in HD patients (N = 76).
CV: coefficient of variation; Hb-CV: hemoglobin coefficient of variation; HD: hemodialysis; HDL: high-density lipoprotein; LDL: low-density lipoprotein; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation or median (range), while categorical variables are expressed as number (percentage).
Comparisons of variables by Hb-Med level The analysis that divided patients by the median Hb value found that, compared to the Hb-Med ≤ 10.77 group, those with an Hb-Med >10.77 had a significantly higher proportion of male sex and significantly higher serum albumin level, TIBC, and PhA (65.8 versus 34.2%, 3.91 ± 0.21 versus 3.76 ± 0.21 mg/dL, 264.93 ± 33.41 versus 249.41 ± 26.10 µg/dL, and 4.49 ± 0.92 versus 4.06 ± 0.93°, respectively; p < 0.05). On the contrary, the darbepoetin dose received in the Hb-Med >10.77 group was lower than that in the Hb-Med ≤ 10.77 group (18.02 ± 9.50 versus 32.15 ± 13.10; p < 0.05; Table 3).
Comparison of variables by Hb-Med level in HD patients (N = 76).
Med: median value; HDL: high-density lipoprotein; LDL: low-density lipoprotein; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation, while categorical variables are expressed as number (percentage). *p < 0.05. The bold values indicate statistical difference (p < 0.05) between the two groups.
Correlations between clinical and biochemical variables and PhA Age, female sex, darbepoetin dosage, and OH were negatively correlated with PhA (r = −0.490, −0.345, −0.287, and −0.558, respectively; p < 0.05). Serum albumin, phosphorus, TIBC, Hb-Med, and Hb-Avg were positively correlated with PhA (r = 0.457, 0.338, 0.329, 0.350, and 0.349; p < 0.05) (Table 4 and Figure 2).
(A,B) Correlations among Hb-median, Hb-average, and phase angle in HD patients (N = 76).
Correlations between clinical, biochemical variables and PhA (n = 76).
PhA: phase angle; Hb: hemoglobin; Hb-Avg: average of Hb; Hb-CV: coefficient of variation of Hb; Hb-Max: maximum of Hb; Hb-Med: median of Hb; Hb-Min: minimum of Hb; Hb-RAN: range of Hb; Hb-SD: standard deviation of Hb; HDL: high-density lipoprotein; LDL: low-density lipoprotein; OH: overhydration/extracellular water ratio; PTH: parathyroid hormone; TIBC: total iron-binding capacity. Spearman's (non-parametric) correlation was used to test for associations between clinical and biochemical variables and PhA. *p < 0.05. The bold values indicate a statistically significant (p < 0.05) correlation.
Multivariate linear regression analysis with PhA as dependent variable In multiple linear regression analysis, PhA was positively associated with Hb-Med and serum albumin level (β = 0.085 and 1.350, respectively; p < 0.05), whereas PhA was negatively associated with age and female sex (β = −0.041 and −0.477, respectively; p < 0.05; Table 5).
Multivariate linear regression analysis with PhA as a dependent variable in HD patients (N = 76).
Selected variables: age, sex, serum albumin level, phosphorus level, TIBC, Hb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Med, OH, and darbepoetin dosage. PhA: phase angle; Hb-Avg: average of Hb; Hb-CV: coefficient of variation of Hb; Hb-Max: maximum of Hb; Hb-Med: median of Hb; Hb-Min: minimum of Hb; Hb-RAN: range of Hb; Hb-SD: standard deviation of Hb; HD: hemodialysis; TIBC: total iron-binding capacity; OH: overhydration/extracellular water ratio. *p < 0.05.
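The multivariate step above regresses PhA on the selected covariates. A minimal sketch with statsmodels on synthetic data follows; the variable set is reduced to the four covariates reported as significant (age, sex, albumin, Hb-Med), and all values are simulated assumptions, not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data (n = 76, mimicking the reported signs only).
rng = np.random.default_rng(0)
n = 76
df = pd.DataFrame({
    "age": rng.normal(66, 12, n),
    "female": rng.integers(0, 2, n),      # 1 = female
    "albumin": rng.normal(3.8, 0.2, n),   # g/dL
    "hb_med": rng.normal(10.8, 0.5, n),   # g/dL
})
df["pha"] = (4.3
             - 0.04 * (df["age"] - 66)
             - 0.5 * df["female"]
             + 1.3 * (df["albumin"] - 3.8)
             + 0.1 * (df["hb_med"] - 10.8)
             + rng.normal(0, 0.3, n))

# Ordinary least squares: PhA ~ age + sex + albumin + Hb-Med.
X = sm.add_constant(df[["age", "female", "albumin", "hb_med"]])
fit = sm.OLS(df["pha"], X).fit()
print(fit.summary())  # beta coefficients, p-values, R-squared
```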
ROC curves of Hb-Med and Hb-CV for predicting PhA > 5.00° The AUC of Hb-Med was 0.665 (95% confidence interval 0.535–0.794, p = 0.040) in predicting PhA > 5.00°. The optimal Hb-Med cutoff value was 10.77 g/dL (sensitivity 76.5%, specificity 59.3%). On the contrary, the AUC of Hb-CV was 0.610 (95% confidence interval 0.457–0.764, p = 0.168) in predicting PhA > 5.00° (Figure 3).
Receiver operating characteristics (ROC) curves of Hb-median (red) and Hb-CV (green) for predicting PhA > 5.00°.
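The ROC analysis above reports an AUC and an optimal Hb-Med cutoff but does not state how the cutoff was selected; Youden's J statistic (sensitivity + specificity − 1) is a common choice and is assumed in this sketch, which runs on synthetic data rather than the study cohort.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic data: Hb-Med per patient and a binary label for PhA > 5.00 deg.
rng = np.random.default_rng(1)
hb_med = rng.normal(10.8, 0.5, 76)
pha_high = (hb_med + rng.normal(0, 0.6, 76) > 11.0).astype(int)

auc = roc_auc_score(pha_high, hb_med)
fpr, tpr, thresholds = roc_curve(pha_high, hb_med)

# Cutoff maximizing Youden's J = sensitivity + specificity - 1.
j = tpr - fpr
k = int(np.argmax(j))
print(f"AUC = {auc:.3f}, cutoff = {thresholds[k]:.2f} g/dL, "
      f"sensitivity = {tpr[k]:.1%}, specificity = {1 - fpr[k]:.1%}")
```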
Conclusions
Phase angle values, a good indicator of nutritional status, are associated with the median value of Hb rather than with Hb variability in chronic HD patients. PhA provides practical information for predicting responsiveness to ESAs in this population.
[ "PATIENTS and METHODS", "Study population", "Determination of Hb indexes", "Other data collection", "BCM analysis", "Statistical analysis", "Clinical characteristics of the study population", "Comparisons of variables by Hb-CV level", "Comparisons of variables by Hb-Med level", "Correlations between clinical and biochemical variables and PhA", "Multivariate linear regression analysis with PhA as dependent variable", "ROC curves of Hb-Med, Hb-CV for predicting PhA > 5.00°" ]
[ " Study population Patients with ESRD who were receiving maintenance HD were recruited. This study was conducted between 1 December 2016 and 1 May 2019 and included patients from Myongji Hospital in Korea. The study included adult patients aged over 20 years who had been undergoing HD for more than 24 months (Figure 1). The target Hb level was 10–11 g/dL according to reimbursement regulations of the Korea Health Insurance Review & Assessment Service. Dose adjustments of darbepoetin (NESP®; Kyowa Kirin Korea Co., Ltd., Seoul, Korea) were made according to the Hb level measured monthly in our facility. Therefore, every study population had more than 24 monthly Hb data points. The patients with acute infection, with malignancy, intact PTH > 500 pg/mL during study period, overhydration/extracellular water > 15% (OH) at BCM measurement, Kt/Vurea < 1.2 during study period or who were receiving HD via a catheter were excluded. Finally, 76 patients were enrolled in the study. Informed written consent was obtained from all patients. This study was approved by our facility’s institutional review board [MJH2018-10-012].\nStudy Flow Diagram.\nPatients with ESRD who were receiving maintenance HD were recruited. This study was conducted between 1 December 2016 and 1 May 2019 and included patients from Myongji Hospital in Korea. The study included adult patients aged over 20 years who had been undergoing HD for more than 24 months (Figure 1). The target Hb level was 10–11 g/dL according to reimbursement regulations of the Korea Health Insurance Review & Assessment Service. Dose adjustments of darbepoetin (NESP®; Kyowa Kirin Korea Co., Ltd., Seoul, Korea) were made according to the Hb level measured monthly in our facility. Therefore, every study population had more than 24 monthly Hb data points. The patients with acute infection, with malignancy, intact PTH > 500 pg/mL during study period, overhydration/extracellular water > 15% (OH) at BCM measurement, Kt/Vurea < 1.2 during study period or who were receiving HD via a catheter were excluded. Finally, 76 patients were enrolled in the study. Informed written consent was obtained from all patients. This study was approved by our facility’s institutional review board [MJH2018-10-012].\nStudy Flow Diagram.\n Determination of Hb indexes Hb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Avg, and Hb-Med were the Hb indexes. All the Hb indexes were calculated using monthly Hb levels for at least 24 months prior to the BCM test. We quantified the degree of Hb variability using range (HD-RAN), standard deviation (Hb-SD), and coefficient of variation of Hb (Hb-CV), that is, the ratio of standard deviation to the average Hb level [8]. BCM was performed at the study population enrollment.\nHb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Avg, and Hb-Med were the Hb indexes. All the Hb indexes were calculated using monthly Hb levels for at least 24 months prior to the BCM test. We quantified the degree of Hb variability using range (HD-RAN), standard deviation (Hb-SD), and coefficient of variation of Hb (Hb-CV), that is, the ratio of standard deviation to the average Hb level [8]. BCM was performed at the study population enrollment.\n Other data collection All demographic and clinical data were collected from the patients’ electronic medical records. Age, sex, height, body weight, presence of diabetes, HD duration, and average laboratory data for the previous 24 months before BCM were collected. 
Laboratory data included mean platelet volume (MPV), total iron-binding capacity (TIBC), transferrin saturation (TS), and levels of serum iron, ferritin, albumin, calcium, phosphorus, intact parathyroid hormone (PTH), triglyceride, total cholesterol, high-density lipoprotein (HDL)-cholesterol, low-density lipoprotein (LDL)-cholesterol, and highly sensitive C-reactive protein (hs-CRP). Darbepoetin dosages were collected over the entire study period and calculated as µg/week.\n BCM analysis The nutritional status of the patients at enrollment was assessed using the BCM® (Fresenius Medical Care Deutschland GmbH, Germany). The measurements were made before the onset of the dialysis session at mid-week, with 4 conventional electrodes placed on the patient, who was lying in the supine position: 2 on the hand and 2 on the foot contralateral to the vascular access. The parameters obtained with the BCM were BMI, LTI, FTI, BCMI, and PhA [9].\n Statistical analysis Because of the small number of patients, normal distribution was tested using a single-sample Kolmogorov–Smirnov analysis. Variables are expressed as mean ± SD or median (range). Between-group differences were assessed for significance using Mann–Whitney U tests. Correlations between nutritional parameters and clinical, biochemical, and Hb indexes were assessed using Spearman's (non-parametric) correlations. Multivariate linear regression analysis was used to assess the combined influence on the PhA values, adjusted for age, sex, TIBC, albumin level, phosphorus level, and all Hb indexes. We evaluated the receiver operating characteristics (ROC) curves of Hb-Med and Hb-CV for predicting PhA > 5.00°, which is considered the PhA cutoff value for estimating well-nourished status proposed in previous research [6]. Significant differences were defined as p values less than 0.05. All statistical analyses were performed using Statistical Package for the Social Sciences version 23.0 (SPSS Inc., Chicago, IL).
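A minimal sketch of how the Hb indexes defined in the "Determination of Hb indexes" text above can be computed from one patient's monthly Hb series (Hb-CV being the ratio of the standard deviation to the average Hb level); the input values below are hypothetical, not study data:

```python
import numpy as np

def hb_indexes(monthly_hb: np.ndarray) -> dict:
    """Compute Hb-Avg, Hb-Med, Hb-Min, Hb-Max, Hb-SD, Hb-RAN, and Hb-CV
    from >= 24 monthly Hb values (g/dL) for a single patient."""
    avg = monthly_hb.mean()
    sd = monthly_hb.std(ddof=1)  # sample standard deviation
    return {
        "Hb-Avg": avg,
        "Hb-Med": float(np.median(monthly_hb)),
        "Hb-Min": monthly_hb.min(),
        "Hb-Max": monthly_hb.max(),
        "Hb-SD": sd,
        "Hb-RAN": monthly_hb.max() - monthly_hb.min(),
        "Hb-CV": sd / avg,  # variability index: SD / mean
    }

# Example with 24 hypothetical monthly values.
rng = np.random.default_rng(2)
print(hb_indexes(rng.normal(10.8, 0.5, 24)))
```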
Correlations between nutritional parameters and clinical, biochemical, and Hb indexes were assessed using Spearman’s (non-parametric) correlations. Multivariate linear regression analysis was used to assess the combined influence of the PhA values adjusted for age, sex, TIBC, albumin level, phosphorus level, and all Hb indexes. We evaluated the receiver operating characteristics (ROC) curve of Hb-Med, Hb-CV for predicting PA > 5.00°, which is considered as PhA cutoff value for estimating well-nourished status proposed in previous research [6]. Significant differences were defined as p values less than 0.05. All statistical analyses were performed using Statistical Package for the Social Sciences version 23.0 (SPSS Inc., Chicago, IL).", "Patients with ESRD who were receiving maintenance HD were recruited. This study was conducted between 1 December 2016 and 1 May 2019 and included patients from Myongji Hospital in Korea. The study included adult patients aged over 20 years who had been undergoing HD for more than 24 months (Figure 1). The target Hb level was 10–11 g/dL according to reimbursement regulations of the Korea Health Insurance Review & Assessment Service. Dose adjustments of darbepoetin (NESP®; Kyowa Kirin Korea Co., Ltd., Seoul, Korea) were made according to the Hb level measured monthly in our facility. Therefore, every study population had more than 24 monthly Hb data points. The patients with acute infection, with malignancy, intact PTH > 500 pg/mL during study period, overhydration/extracellular water > 15% (OH) at BCM measurement, Kt/Vurea < 1.2 during study period or who were receiving HD via a catheter were excluded. Finally, 76 patients were enrolled in the study. Informed written consent was obtained from all patients. This study was approved by our facility’s institutional review board [MJH2018-10-012].\nStudy Flow Diagram.", "Hb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Avg, and Hb-Med were the Hb indexes. All the Hb indexes were calculated using monthly Hb levels for at least 24 months prior to the BCM test. We quantified the degree of Hb variability using range (HD-RAN), standard deviation (Hb-SD), and coefficient of variation of Hb (Hb-CV), that is, the ratio of standard deviation to the average Hb level [8]. BCM was performed at the study population enrollment.", "All demographic and clinical data were collected from the patients’ electronic medical records. Age, sex, height, body weight, presence of diabetes, HD duration, and average laboratory data for the previous 24 months before BCM were collected. Laboratory data included mean platelet volume (MPV), total iron-binding capacity (TIBC), transferrin saturation (TS), and levels of serum iron, ferritin, albumin, calcium, phosphorus, intact parathyroid hormone (PTH), triglyceride, total cholesterol, high-density lipoprotein (HDL)-cholesterol, low-density lipoprotein (LDL)-cholesterol, and highly sensitive C-reactive protein (hs-CRP). Darbepoetin dosages were collected during the entire study period and calculated as µg/week.", "The nutritional status of the patients at enrollment was assessed using the BCM® (Fresenius Medical Care a Deutschland GmbH, Germany). The measurements were made before the onset of the dialysis session at mid-week with 4 conventional electrodes being placed on the patient, who was lying in the supine position: 2 on the hand and 2 on the foot contralateral to the vascular access. 
The parameters obtained with the BCM were BMI, LTI, FTI, BCMI, and PhA [9].", "Because of the small number of patients, normal distribution was tested using a single sample Kolmogorov–Smirnov analysis. Variables are expressed as mean ± SD or median (range). Between-group differences were assessed for significance using Mann–Whitney U tests. Correlations between nutritional parameters and clinical, biochemical, and Hb indexes were assessed using Spearman’s (non-parametric) correlations. Multivariate linear regression analysis was used to assess the combined influence of the PhA values adjusted for age, sex, TIBC, albumin level, phosphorus level, and all Hb indexes. We evaluated the receiver operating characteristics (ROC) curve of Hb-Med, Hb-CV for predicting PA > 5.00°, which is considered as PhA cutoff value for estimating well-nourished status proposed in previous research [6]. Significant differences were defined as p values less than 0.05. All statistical analyses were performed using Statistical Package for the Social Sciences version 23.0 (SPSS Inc., Chicago, IL).", "The study included a total of 76 patients undergoing HD (median duration, 70.98 (24–130) months; 51.3% men; mean age, 65.84 ± 11.91 years). Here, 51.6% patients had diabetes mellitus. The mean Hb and darbepoetin dosage were 10.80 ± 0.51 g/dL and 25.08 ± 13.41 µg/week. The mean albumin, iron, TIBC, TS, total cholesterol, LDL-cholesterol, HDL-cholesterol, triglyceride, calcium, phosphorus, intact PTH, and hs-CRP levels were 3.83 ± 0.22 g/dL, 73.78 ± 17.75, 257.20 ± 30.68 µg/dL, 28.81 ± 6.44%, 134.86 ± 26.45, 67.92 ± 20.63, 46.04 ± 13.00, 104.95 ± 47.23, 8.36 ± 0.38, 4.10 ± 0.66, 313.23 ± 159.41, and 0.42 ± 0.43 mg/dL, respectively. The median ferritin was 228.18 (49.14–1233.45). The dry weight, pre-dialysis weight, interdialytic weight gain, and OH were 60.80 ± 10.34, 62.30 ± 11.36, 2.16 ± 0.94 kg, and 8.47 ± 6.52%. The mean BMI, LTI, FTI, BCMI, and PhA were 23.58 ± 3.02, 12.71 ± 3.17, 9.80 ± 3.59, 6.92 ± 2.25 kg/m2, and 4.28 ± 0.94°, respectively (Table 1).\nClinical characteristics of the study population (N = 76).\nHDL: high-density lipoprotein; LDL: low-density lipoprotein; OH: overhydration/extracellular water ratio; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation, while categorical variables are expressed as number (percentage).", "The analysis that divided the median Hb-CV value no difference in age, sex, presence of diabetes, or HD duration. In addition, Hb-Avg and biochemical and nutritional parameters did not differ between the two groups. Finally, there was no intergroup difference with respect to darbepoetin dose received (Table 2).\nComparisons of variables by Hb-CV in HD patients (N = 76).\nCV: coefficient of variation; Hb-CV: hemoglobin coefficient of variation; HD: hemodialysis; HDL: high-density lipoprotein; LDL: low-density lipoprotein; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. 
Continuous variables are expressed as mean ± standard deviation or median (range), while categorical variables are expressed as number (percentage).", "The analysis that divided the median Hb value found that compared to patients with an Hb-Med ≤ 10.77 group, those with an Hb-Med >10.77 had a significantly higher proportion of male sex and significantly higher serum albumin level, TIBC, and PhA (65.8 versus 34.2%, 3.91 ± 0.21 versus 3.76 ± 0.21 mg/dL, 264.93 ± 33.41 versus 249.41 ± 26.10 µg/dL, and 4.49 ± 0.92 versus 4.06 ± 0.93°, respectively; p < 0.05). On the contrary, the darbepoetin dose received in the Hb-Med >10.77 group was lower than that in the Hb-Med ≤ 10.77 group (18.02 ± 9.50 versus 32.15 ± 13.10; p < 0.05; Table 3).\nComparison of variables by Hb-Med level in HD patients (N = 76).\nMed: median value; HDL: high-density lipoprotein; LDL: low-density lipoprotein; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation, while categorical variables are expressed as number (percentage). *p < 0.05.\nThe bold values indicate statistical difference (p<0.05) between the two groups.", "Age, female sex, darbepoetin dosage, and OH were negatively correlated with PhA (r = −0.490, −0.345, −0.287, and −0.558 respectively; p < 0.05). Serum albumin, phosphorus, TIBC, Hb-Med and Hb-Avg were positively correlated with PhA (r = 0.457, 0.338, 0.329, 0.350 and 0.349; p < 0.05) (Table 4 and Figure 2).\n(A,B) Correlations among Hb-median, Hb-average, and phase angle in HD patients (N = 76).\nCorrelations between clinical, biochemical variables and PhA (n = 76).\nPhA: phase angle; Hb: hemoglobin; Hb-Avg: average of Hb; Hb-CV: coefficient of variation of Hb; Hb-Max: maximum of Hb; Hb-Med: median of Hb; Hb-Min: minimum of Hb; Hb-RAN: range of Hb; Hb-SD: standard deviation of Hb; HDL: high-density lipoprotein; LDL: low-density lipoprotein; OH: overhydration/extracellular water ratio; PTH: parathyroid hormone; TIBC: total iron-binding capacity. Spearman’s (non-parametric) correlation was used to test for associations between clinical and biochemical variables and PhA.*p < 0.05.\nThe bold values indicate statistical difference (p<0.05) between correlation.", "In multiple linear regression analysis, PhA was positively associated with Hb-Med, and serum albumin level (β = 0.085 and 1.350, respectively; p < 0.05), whereas PhA was negatively associated with age and female sex (β = −0.041 and −0.477, respectively; p < 0.05; Table 5).\nMultivariate linear regression analysis with PhA as a dependent variable in HD patients (N = 76).\nSelected variables: age, sex, serum albumin level, phosphorus level, TIBC, Hb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Med, OH, and darbepoetin dosage. PhA: phase angle; Hb-Avg: average of Hb; Hb-CV: coefficient of variation of Hb; Hb-Max: maximum of Hb; Hb-Med: median of Hb; Hb-Min: minimum of Hb; Hb-RAN: range of Hb; Hb-SD: standard deviation of Hb; HD: hemodialysis; TIBC: total iron-binding capacity; OH: overhydration/extracellular water ratio. *p < 0.05.", "The AUC of Hb-Med was 0.665 (95% confidence interval 0.535–0.794, p = 0.040) in predicting PhA > 5.00°. Optimal Hb-Med cutoff value was 10.77 g/dL (sensitivity 76.5%, specificity 59.3%). On the contrary, the AUC of Hb-CV was 0.610 (95% confidence interval 0.457–0.764, p = 0.168) in predicting PhA > 5.00° (Figure 3).\nReceiver operating characteristics (ROC) curves of Hb-median (red) and Hb-CV (green) for predicting PA > 5.00°." ]
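The Hb-index definitions quoted in the methods text above (range, standard deviation, and coefficient of variation as the ratio of the SD to the mean Hb) reduce to a few lines of NumPy. The sketch below is illustrative only: the 24 monthly values are invented, not study data, and whether the study expressed Hb-CV as a fraction or a percentage is not stated explicitly (the median Hb-CV of 9.47 quoted in the discussion suggests a percentage).

```python
import numpy as np

# Hypothetical 24 monthly hemoglobin values (g/dL) for one patient;
# the study required at least 24 monthly measurements per participant.
hb = np.array([10.2, 10.8, 11.1, 10.5, 10.9, 11.0, 10.4, 10.7,
               10.6, 11.2, 10.3, 10.9, 10.8, 10.5, 11.0, 10.6,
               10.7, 10.9, 10.4, 11.1, 10.8, 10.6, 10.9, 10.7])

hb_avg = hb.mean()                   # Hb-Avg
hb_med = np.median(hb)               # Hb-Med
hb_sd = hb.std(ddof=1)               # Hb-SD (sample standard deviation)
hb_min, hb_max = hb.min(), hb.max()  # Hb-Min, Hb-Max
hb_ran = hb_max - hb_min             # Hb-RAN (range)
hb_cv = hb_sd / hb_avg * 100         # Hb-CV, here as a percentage of the mean

print(f"Hb-Avg={hb_avg:.2f}  Hb-Med={hb_med:.2f}  Hb-SD={hb_sd:.2f}  "
      f"Hb-RAN={hb_ran:.2f}  Hb-CV={hb_cv:.2f}%")
```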
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "PATIENTS and METHODS", "Study population", "Determination of Hb indexes", "Other data collection", "BCM analysis", "Statistical analysis", "Results", "Clinical characteristics of the study population", "Comparisons of variables by Hb-CV level", "Comparisons of variables by Hb-Med level", "Correlations between clinical and biochemical variables and PhA", "Multivariate linear regression analysis with PhA as dependent variable", "ROC curves of Hb-Med, Hb-CV for predicting PhA > 5.00°", "Discussion", "Conclusions" ]
[ "Anemia is a common complication of patients with chronic kidney disease (CKD) and end-stage renal disease (ESRD) undergoing renal replacement treatment. Although the introduction of erythrocyte-stimulating agents (ESAs) has led to a dramatic reduction in blood transfusion requirements and is associated with improved quality of life, fluctuations in hemoglobin (Hb) levels, known as Hb variability, during ESA treatment is a well-documented phenomenon [1]. The optimal target Hb concentration in CKD/ESRD patients remains under debate. Countries and societies have different recommendations for maintaining the optimal Hb concentration. In addition, maintaining patients’ Hb levels in such an optimal narrow range is difficult due to the loss of physiological regulation of red blood cell formation and many other factors, such as iron deficiency, chronic inflammation, secondary hyperparathyroidism, malnutrition, and inadequate-dose dialysis. The data show that only 30% of patients will fall within this range at any point in time because Hb level fluctuations result in frequent under- and overshooting of the target level [2]. Chronic HD patients with a stable target Hb level are at lower risk for adverse events than maintenance HD patients without a stable Hb level [3]. In recent studies, a higher fluctuation in Hb variability was associated with cardiovascular mortality and all-cause mortality in maintenance HD patients [4,5]. Malnutrition is a well-known risk factor in the general population and in HD patients. However, few studies have evaluated the association between Hb variability and nutritional status in HD patients. The nutritional status in HD patients is usually assessed using the malnutrition-inflammation score (MIS), body mass index (BMI), and serum albumin level. This was because there was a problem with the accurate and reproducible evaluation of nutrition status. Body composite monitor (BCM) analysis is a noninvasive and accurate instrument for distinguishing between excessive and insufficient moisture levels and can accurately assess nutritional status [6,7]. Therefore, we aimed to elucidate whether Hb variability itself affected the nutritional status.", " Study population Patients with ESRD who were receiving maintenance HD were recruited. This study was conducted between 1 December 2016 and 1 May 2019 and included patients from Myongji Hospital in Korea. The study included adult patients aged over 20 years who had been undergoing HD for more than 24 months (Figure 1). The target Hb level was 10–11 g/dL according to reimbursement regulations of the Korea Health Insurance Review & Assessment Service. Dose adjustments of darbepoetin (NESP®; Kyowa Kirin Korea Co., Ltd., Seoul, Korea) were made according to the Hb level measured monthly in our facility. Therefore, every study population had more than 24 monthly Hb data points. The patients with acute infection, with malignancy, intact PTH > 500 pg/mL during study period, overhydration/extracellular water > 15% (OH) at BCM measurement, Kt/Vurea < 1.2 during study period or who were receiving HD via a catheter were excluded. Finally, 76 patients were enrolled in the study. Informed written consent was obtained from all patients. This study was approved by our facility’s institutional review board [MJH2018-10-012].\nStudy Flow Diagram.\nPatients with ESRD who were receiving maintenance HD were recruited. This study was conducted between 1 December 2016 and 1 May 2019 and included patients from Myongji Hospital in Korea. 
The study included adult patients aged over 20 years who had been undergoing HD for more than 24 months (Figure 1). The target Hb level was 10–11 g/dL according to reimbursement regulations of the Korea Health Insurance Review & Assessment Service. Dose adjustments of darbepoetin (NESP®; Kyowa Kirin Korea Co., Ltd., Seoul, Korea) were made according to the Hb level measured monthly in our facility. Therefore, every study population had more than 24 monthly Hb data points. The patients with acute infection, with malignancy, intact PTH > 500 pg/mL during study period, overhydration/extracellular water > 15% (OH) at BCM measurement, Kt/Vurea < 1.2 during study period or who were receiving HD via a catheter were excluded. Finally, 76 patients were enrolled in the study. Informed written consent was obtained from all patients. This study was approved by our facility’s institutional review board [MJH2018-10-012].\nStudy Flow Diagram.\n Determination of Hb indexes Hb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Avg, and Hb-Med were the Hb indexes. All the Hb indexes were calculated using monthly Hb levels for at least 24 months prior to the BCM test. We quantified the degree of Hb variability using range (HD-RAN), standard deviation (Hb-SD), and coefficient of variation of Hb (Hb-CV), that is, the ratio of standard deviation to the average Hb level [8]. BCM was performed at the study population enrollment.\nHb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Avg, and Hb-Med were the Hb indexes. All the Hb indexes were calculated using monthly Hb levels for at least 24 months prior to the BCM test. We quantified the degree of Hb variability using range (HD-RAN), standard deviation (Hb-SD), and coefficient of variation of Hb (Hb-CV), that is, the ratio of standard deviation to the average Hb level [8]. BCM was performed at the study population enrollment.\n Other data collection All demographic and clinical data were collected from the patients’ electronic medical records. Age, sex, height, body weight, presence of diabetes, HD duration, and average laboratory data for the previous 24 months before BCM were collected. Laboratory data included mean platelet volume (MPV), total iron-binding capacity (TIBC), transferrin saturation (TS), and levels of serum iron, ferritin, albumin, calcium, phosphorus, intact parathyroid hormone (PTH), triglyceride, total cholesterol, high-density lipoprotein (HDL)-cholesterol, low-density lipoprotein (LDL)-cholesterol, and highly sensitive C-reactive protein (hs-CRP). Darbepoetin dosages were collected during the entire study period and calculated as µg/week.\nAll demographic and clinical data were collected from the patients’ electronic medical records. Age, sex, height, body weight, presence of diabetes, HD duration, and average laboratory data for the previous 24 months before BCM were collected. Laboratory data included mean platelet volume (MPV), total iron-binding capacity (TIBC), transferrin saturation (TS), and levels of serum iron, ferritin, albumin, calcium, phosphorus, intact parathyroid hormone (PTH), triglyceride, total cholesterol, high-density lipoprotein (HDL)-cholesterol, low-density lipoprotein (LDL)-cholesterol, and highly sensitive C-reactive protein (hs-CRP). Darbepoetin dosages were collected during the entire study period and calculated as µg/week.\n BCM analysis The nutritional status of the patients at enrollment was assessed using the BCM® (Fresenius Medical Care a Deutschland GmbH, Germany). 
The measurements were made before the onset of the dialysis session at mid-week with 4 conventional electrodes being placed on the patient, who was lying in the supine position: 2 on the hand and 2 on the foot contralateral to the vascular access. The parameters obtained with the BCM were BMI, LTI, FTI, BCMI, and PhA [9].\nThe nutritional status of the patients at enrollment was assessed using the BCM® (Fresenius Medical Care a Deutschland GmbH, Germany). The measurements were made before the onset of the dialysis session at mid-week with 4 conventional electrodes being placed on the patient, who was lying in the supine position: 2 on the hand and 2 on the foot contralateral to the vascular access. The parameters obtained with the BCM were BMI, LTI, FTI, BCMI, and PhA [9].\n Statistical analysis Because of the small number of patients, normal distribution was tested using a single sample Kolmogorov–Smirnov analysis. Variables are expressed as mean ± SD or median (range). Between-group differences were assessed for significance using Mann–Whitney U tests. Correlations between nutritional parameters and clinical, biochemical, and Hb indexes were assessed using Spearman’s (non-parametric) correlations. Multivariate linear regression analysis was used to assess the combined influence of the PhA values adjusted for age, sex, TIBC, albumin level, phosphorus level, and all Hb indexes. We evaluated the receiver operating characteristics (ROC) curve of Hb-Med, Hb-CV for predicting PA > 5.00°, which is considered as PhA cutoff value for estimating well-nourished status proposed in previous research [6]. Significant differences were defined as p values less than 0.05. All statistical analyses were performed using Statistical Package for the Social Sciences version 23.0 (SPSS Inc., Chicago, IL).\nBecause of the small number of patients, normal distribution was tested using a single sample Kolmogorov–Smirnov analysis. Variables are expressed as mean ± SD or median (range). Between-group differences were assessed for significance using Mann–Whitney U tests. Correlations between nutritional parameters and clinical, biochemical, and Hb indexes were assessed using Spearman’s (non-parametric) correlations. Multivariate linear regression analysis was used to assess the combined influence of the PhA values adjusted for age, sex, TIBC, albumin level, phosphorus level, and all Hb indexes. We evaluated the receiver operating characteristics (ROC) curve of Hb-Med, Hb-CV for predicting PA > 5.00°, which is considered as PhA cutoff value for estimating well-nourished status proposed in previous research [6]. Significant differences were defined as p values less than 0.05. All statistical analyses were performed using Statistical Package for the Social Sciences version 23.0 (SPSS Inc., Chicago, IL).", "Patients with ESRD who were receiving maintenance HD were recruited. This study was conducted between 1 December 2016 and 1 May 2019 and included patients from Myongji Hospital in Korea. The study included adult patients aged over 20 years who had been undergoing HD for more than 24 months (Figure 1). The target Hb level was 10–11 g/dL according to reimbursement regulations of the Korea Health Insurance Review & Assessment Service. Dose adjustments of darbepoetin (NESP®; Kyowa Kirin Korea Co., Ltd., Seoul, Korea) were made according to the Hb level measured monthly in our facility. Therefore, every study population had more than 24 monthly Hb data points. 
The patients with acute infection, with malignancy, intact PTH > 500 pg/mL during study period, overhydration/extracellular water > 15% (OH) at BCM measurement, Kt/Vurea < 1.2 during study period or who were receiving HD via a catheter were excluded. Finally, 76 patients were enrolled in the study. Informed written consent was obtained from all patients. This study was approved by our facility’s institutional review board [MJH2018-10-012].\nStudy Flow Diagram.", "Hb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Avg, and Hb-Med were the Hb indexes. All the Hb indexes were calculated using monthly Hb levels for at least 24 months prior to the BCM test. We quantified the degree of Hb variability using range (HD-RAN), standard deviation (Hb-SD), and coefficient of variation of Hb (Hb-CV), that is, the ratio of standard deviation to the average Hb level [8]. BCM was performed at the study population enrollment.", "All demographic and clinical data were collected from the patients’ electronic medical records. Age, sex, height, body weight, presence of diabetes, HD duration, and average laboratory data for the previous 24 months before BCM were collected. Laboratory data included mean platelet volume (MPV), total iron-binding capacity (TIBC), transferrin saturation (TS), and levels of serum iron, ferritin, albumin, calcium, phosphorus, intact parathyroid hormone (PTH), triglyceride, total cholesterol, high-density lipoprotein (HDL)-cholesterol, low-density lipoprotein (LDL)-cholesterol, and highly sensitive C-reactive protein (hs-CRP). Darbepoetin dosages were collected during the entire study period and calculated as µg/week.", "The nutritional status of the patients at enrollment was assessed using the BCM® (Fresenius Medical Care a Deutschland GmbH, Germany). The measurements were made before the onset of the dialysis session at mid-week with 4 conventional electrodes being placed on the patient, who was lying in the supine position: 2 on the hand and 2 on the foot contralateral to the vascular access. The parameters obtained with the BCM were BMI, LTI, FTI, BCMI, and PhA [9].", "Because of the small number of patients, normal distribution was tested using a single sample Kolmogorov–Smirnov analysis. Variables are expressed as mean ± SD or median (range). Between-group differences were assessed for significance using Mann–Whitney U tests. Correlations between nutritional parameters and clinical, biochemical, and Hb indexes were assessed using Spearman’s (non-parametric) correlations. Multivariate linear regression analysis was used to assess the combined influence of the PhA values adjusted for age, sex, TIBC, albumin level, phosphorus level, and all Hb indexes. We evaluated the receiver operating characteristics (ROC) curve of Hb-Med, Hb-CV for predicting PA > 5.00°, which is considered as PhA cutoff value for estimating well-nourished status proposed in previous research [6]. Significant differences were defined as p values less than 0.05. All statistical analyses were performed using Statistical Package for the Social Sciences version 23.0 (SPSS Inc., Chicago, IL).", " Clinical characteristics of the study population The study included a total of 76 patients undergoing HD (median duration, 70.98 (24–130) months; 51.3% men; mean age, 65.84 ± 11.91 years). Here, 51.6% patients had diabetes mellitus. The mean Hb and darbepoetin dosage were 10.80 ± 0.51 g/dL and 25.08 ± 13.41 µg/week. 
\n Comparisons of variables by Hb-CV level The analysis that divided patients at the median Hb-CV value found no difference in age, sex, presence of diabetes, or HD duration. In addition, Hb-Avg and the biochemical and nutritional parameters did not differ between the two groups. Finally, there was no intergroup difference with respect to the darbepoetin dose received (Table 2).\nComparisons of variables by Hb-CV in HD patients (N = 76).\nCV: coefficient of variation; Hb-CV: hemoglobin coefficient of variation; HD: hemodialysis; HDL: high-density lipoprotein; LDL: low-density lipoprotein; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation or median (range), while categorical variables are expressed as number (percentage).\n Comparisons of variables by Hb-Med level When patients were divided at the median Hb value, those in the Hb-Med > 10.77 group had a significantly higher proportion of men and significantly higher serum albumin level, TIBC, and PhA than those in the Hb-Med ≤ 10.77 group (65.8 versus 34.2%, 3.91 ± 0.21 versus 3.76 ± 0.21 g/dL, 264.93 ± 33.41 versus 249.41 ± 26.10 µg/dL, and 4.49 ± 0.92 versus 4.06 ± 0.93°, respectively; p < 0.05). In contrast, the darbepoetin dose received in the Hb-Med > 10.77 group was lower than that in the Hb-Med ≤ 10.77 group (18.02 ± 9.50 versus 32.15 ± 13.10 µg/week; p < 0.05; Table 3).\nComparison of variables by Hb-Med level in HD patients (N = 76).\nMed: median value; HDL: high-density lipoprotein; LDL: low-density lipoprotein; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation, while categorical variables are expressed as number (percentage). *p < 0.05.\nThe bold values indicate a statistically significant difference (p < 0.05) between the two groups.\n Correlations between clinical and biochemical variables and PhA Age, female sex, darbepoetin dosage, and OH were negatively correlated with PhA (r = −0.490, −0.345, −0.287, and −0.558, respectively; p < 0.05). Serum albumin, phosphorus, TIBC, Hb-Med, and Hb-Avg were positively correlated with PhA (r = 0.457, 0.338, 0.329, 0.350, and 0.349, respectively; p < 0.05) (Table 4 and Figure 2).\n(A,B) Correlations among Hb-median, Hb-average, and phase angle in HD patients (N = 76).\nCorrelations between clinical, biochemical variables and PhA (n = 76).\nPhA: phase angle; Hb: hemoglobin; Hb-Avg: average of Hb; Hb-CV: coefficient of variation of Hb; Hb-Max: maximum of Hb; Hb-Med: median of Hb; Hb-Min: minimum of Hb; Hb-RAN: range of Hb; Hb-SD: standard deviation of Hb; HDL: high-density lipoprotein; LDL: low-density lipoprotein; OH: overhydration/extracellular water ratio; PTH: parathyroid hormone; TIBC: total iron-binding capacity. Spearman’s (non-parametric) correlation was used to test for associations between clinical and biochemical variables and PhA. *p < 0.05.\nThe bold values indicate a statistically significant correlation (p < 0.05).\n Multivariate linear regression analysis with PhA as dependent variable In multiple linear regression analysis, PhA was positively associated with Hb-Med and serum albumin level (β = 0.085 and 1.350, respectively; p < 0.05), whereas PhA was negatively associated with age and female sex (β = −0.041 and −0.477, respectively; p < 0.05; Table 5).\nMultivariate linear regression analysis with PhA as a dependent variable in HD patients (N = 76).\nSelected variables: age, sex, serum albumin level, phosphorus level, TIBC, Hb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Med, OH, and darbepoetin dosage. PhA: phase angle; Hb-Avg: average of Hb; Hb-CV: coefficient of variation of Hb; Hb-Max: maximum of Hb; Hb-Med: median of Hb; Hb-Min: minimum of Hb; Hb-RAN: range of Hb; Hb-SD: standard deviation of Hb; HD: hemodialysis; TIBC: total iron-binding capacity; OH: overhydration/extracellular water ratio. *p < 0.05.\n ROC curves of Hb-Med, Hb-CV for predicting PhA > 5.00° The AUC of Hb-Med was 0.665 (95% confidence interval 0.535–0.794, p = 0.040) in predicting PhA > 5.00°. The optimal Hb-Med cutoff value was 10.77 g/dL (sensitivity 76.5%, specificity 59.3%). In contrast, the AUC of Hb-CV was 0.610 (95% confidence interval 0.457–0.764, p = 0.168) in predicting PhA > 5.00° (Figure 3).\nReceiver operating characteristic (ROC) curves of Hb-median (red) and Hb-CV (green) for predicting PhA > 5.00°.",
"The study included a total of 76 patients undergoing HD (median duration, 70.98 (24–130) months; 51.3% men; mean age, 65.84 ± 11.91 years). Overall, 51.6% of patients had diabetes mellitus. The mean Hb level and darbepoetin dosage were 10.80 ± 0.51 g/dL and 25.08 ± 13.41 µg/week, respectively. The mean albumin, iron, TIBC, TS, total cholesterol, LDL-cholesterol, HDL-cholesterol, triglyceride, calcium, phosphorus, intact PTH, and hs-CRP levels were 3.83 ± 0.22 g/dL, 73.78 ± 17.75, 257.20 ± 30.68 µg/dL, 28.81 ± 6.44%, 134.86 ± 26.45, 67.92 ± 20.63, 46.04 ± 13.00, 104.95 ± 47.23, 8.36 ± 0.38, 4.10 ± 0.66, 313.23 ± 159.41, and 0.42 ± 0.43 mg/dL, respectively. The median ferritin was 228.18 (49.14–1233.45). The dry weight, pre-dialysis weight, interdialytic weight gain, and OH were 60.80 ± 10.34, 62.30 ± 11.36, 2.16 ± 0.94 kg, and 8.47 ± 6.52%, respectively. The mean BMI, LTI, FTI, BCMI, and PhA were 23.58 ± 3.02, 12.71 ± 3.17, 9.80 ± 3.59, 6.92 ± 2.25 kg/m2, and 4.28 ± 0.94°, respectively (Table 1).\nClinical characteristics of the study population (N = 76).\nHDL: high-density lipoprotein; LDL: low-density lipoprotein; OH: overhydration/extracellular water ratio; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation, while categorical variables are expressed as number (percentage).",
"The analysis that divided patients at the median Hb-CV value found no difference in age, sex, presence of diabetes, or HD duration. In addition, Hb-Avg and the biochemical and nutritional parameters did not differ between the two groups. Finally, there was no intergroup difference with respect to the darbepoetin dose received (Table 2).\nComparisons of variables by Hb-CV in HD patients (N = 76).\nCV: coefficient of variation; Hb-CV: hemoglobin coefficient of variation; HD: hemodialysis; HDL: high-density lipoprotein; LDL: low-density lipoprotein; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation or median (range), while categorical variables are expressed as number (percentage).",
"When patients were divided at the median Hb value, those in the Hb-Med > 10.77 group had a significantly higher proportion of men and significantly higher serum albumin level, TIBC, and PhA than those in the Hb-Med ≤ 10.77 group (65.8 versus 34.2%, 3.91 ± 0.21 versus 3.76 ± 0.21 g/dL, 264.93 ± 33.41 versus 249.41 ± 26.10 µg/dL, and 4.49 ± 0.92 versus 4.06 ± 0.93°, respectively; p < 0.05). In contrast, the darbepoetin dose received in the Hb-Med > 10.77 group was lower than that in the Hb-Med ≤ 10.77 group (18.02 ± 9.50 versus 32.15 ± 13.10 µg/week; p < 0.05; Table 3).\nComparison of variables by Hb-Med level in HD patients (N = 76).\nMed: median value; HDL: high-density lipoprotein; LDL: low-density lipoprotein; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation, while categorical variables are expressed as number (percentage). *p < 0.05.\nThe bold values indicate a statistically significant difference (p < 0.05) between the two groups.",
"Age, female sex, darbepoetin dosage, and OH were negatively correlated with PhA (r = −0.490, −0.345, −0.287, and −0.558, respectively; p < 0.05). Serum albumin, phosphorus, TIBC, Hb-Med, and Hb-Avg were positively correlated with PhA (r = 0.457, 0.338, 0.329, 0.350, and 0.349, respectively; p < 0.05) (Table 4 and Figure 2).\n(A,B) Correlations among Hb-median, Hb-average, and phase angle in HD patients (N = 76).\nCorrelations between clinical, biochemical variables and PhA (n = 76).\nPhA: phase angle; Hb: hemoglobin; Hb-Avg: average of Hb; Hb-CV: coefficient of variation of Hb; Hb-Max: maximum of Hb; Hb-Med: median of Hb; Hb-Min: minimum of Hb; Hb-RAN: range of Hb; Hb-SD: standard deviation of Hb; HDL: high-density lipoprotein; LDL: low-density lipoprotein; OH: overhydration/extracellular water ratio; PTH: parathyroid hormone; TIBC: total iron-binding capacity. Spearman’s (non-parametric) correlation was used to test for associations between clinical and biochemical variables and PhA. *p < 0.05.\nThe bold values indicate a statistically significant correlation (p < 0.05).",
"In multiple linear regression analysis, PhA was positively associated with Hb-Med and serum albumin level (β = 0.085 and 1.350, respectively; p < 0.05), whereas PhA was negatively associated with age and female sex (β = −0.041 and −0.477, respectively; p < 0.05; Table 5).\nMultivariate linear regression analysis with PhA as a dependent variable in HD patients (N = 76).\nSelected variables: age, sex, serum albumin level, phosphorus level, TIBC, Hb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Med, OH, and darbepoetin dosage. PhA: phase angle; Hb-Avg: average of Hb; Hb-CV: coefficient of variation of Hb; Hb-Max: maximum of Hb; Hb-Med: median of Hb; Hb-Min: minimum of Hb; Hb-RAN: range of Hb; Hb-SD: standard deviation of Hb; HD: hemodialysis; TIBC: total iron-binding capacity; OH: overhydration/extracellular water ratio. *p < 0.05.",
"The AUC of Hb-Med was 0.665 (95% confidence interval 0.535–0.794, p = 0.040) in predicting PhA > 5.00°. The optimal Hb-Med cutoff value was 10.77 g/dL (sensitivity 76.5%, specificity 59.3%). In contrast, the AUC of Hb-CV was 0.610 (95% confidence interval 0.457–0.764, p = 0.168) in predicting PhA > 5.00° (Figure 3).\nReceiver operating characteristic (ROC) curves of Hb-median (red) and Hb-CV (green) for predicting PhA > 5.00°.",
"We evaluated the cross-sectional association between Hb variability over the preceding 24 months and nutritional status assessed using BCM in chronic HD patients. The main findings of our study were that, unexpectedly, there was no association between Hb variability (Hb-CV, Hb-SD, or Hb-RAN) and any nutritional parameter, including PhA. In contrast, PhA was the only nutritional parameter associated with Hb-Med and Hb-Avg. The optimal Hb-Med cutoff value to predict well-nourished status in chronic HD patients was 10.77 g/dL. Finally, HD patients with a higher Hb-Med received a lower average weekly darbepoetin dosage, and darbepoetin dosage was negatively correlated with PhA.\nHb variability in HD patients was first described by Lacson and Berns in 2003, and most studies have investigated the relationship between Hb variability and overall mortality [10–12]. In a recent retrospective study including 252 patients, Hb variability was independently associated with the cardiovascular mortality of HD patients, and patients in the highest Hb variation group had a nine times higher cardiovascular death risk than those in the lowest Hb variation group [4]. In a meta-analysis, the investigators analyzed the relationship between Hb variability and all-cause mortality and found a 9% increase in the adjusted rate of death for each 1 g/dL increase in Hb variability [5]. In contrast, few studies have examined the relationship between Hb variability and nutritional status. A recent study of 754 HD patients by Rattanasompattikul et al., using nutritional markers such as the MIS, considered wasting an independent factor in ESA resistance. They found that low levels of nutritional markers were independent predictors of a decreased response to ESAs [13]. In another observational multicenter study, the investigators found that HD patients with a high BMI (≥30 kg/m2) required a lower ESA dose and had a better response to anemia treatment [14]. These studies focused on the relationship between ESA responsiveness and nutritional status.\nAmong available indicators of nutritional status, PhA measured by BIA or BCM is promising. Several recent papers have reported that PhA is a good indicator of nutritional status and a useful predictor of mortality [15–18]. PhA can be used to screen individuals prone to mortality. In particular, BIA-provided PhA is the most potent predictor of malnutrition and is associated with death in patients with CKD [19,20]. A PhA of 5.00° is usually considered the cutoff value for identifying well-nourished status, as proposed in previous research [6]. One paper explored the relationship between PhA and ESA responsiveness. Colin et al. found that patients with a lower PhA had a lower Hb and hematocrit. In addition, they suggested that patients with a PhA > 5.00° responded satisfactorily to treatment with ESA, while those with a lower PhA did not respond to treatment [21]. However, no recent studies have investigated the relationship between PhA and Hb variability. In our study, there was no association between Hb variability (Hb-CV, Hb-SD, or Hb-RAN) and any nutritional parameter, including PhA. In contrast, PhA was the only nutritional parameter significantly associated with Hb-Med and Hb-Avg.
This unexpected finding likely resulted from the low-amplitude Hb variability in our study population. The median Hb-CV in our study was only 9.47%, indicating low-amplitude Hb variability. In Korea, the target Hb level was 10–11 g/dL according to reimbursement regulations of the Korea Health Insurance Review & Assessment Service. Strict dose adjustment of darbepoetin was performed according to the monthly Hb level in every HD center. The Hb level in our study was therefore controlled within a narrow range. For this reason, our patients could not be classified into the six groups applied in a previous study of Hb variability: consistently low, consistently within the target range, consistently high, low-amplitude fluctuation with low Hb levels, low-amplitude fluctuation with high Hb levels, and high-amplitude fluctuation. Our study instead focused on investigating the effects of Hb variability using Hb indexes such as Hb-CV, Hb-SD, and Hb-RAN. Taken together, the factor related to PhA in HD patients is the maintenance of appropriate Hb-Med and Hb-Avg values over time, especially in the setting of low-amplitude Hb variability. In our study, well-known nutritional markers such as albumin level, phosphorus level, and TIBC were positively correlated with PhA, whereas older age and female sex were negatively correlated with PhA. These consistent associations support the reliability of our results.\nAnother major finding of our study was that HD patients with a higher Hb-Med received lower average weekly darbepoetin dosages and that darbepoetin dosage was negatively correlated with PhA. There have been conflicting results regarding the relationships among ESA responsiveness, Hb, and PhA. In a previous study, Han et al. demonstrated that PhA was positively associated with the geriatric nutritional risk index, lean tissue index, and albumin level in non-dialytic stage 5 CKD and peritoneal dialysis (PD) patients. However, there was no association between PhA and Hb level in those patients [22]. González et al. reported that an MIS ≤ 6 indicated a possible response to treatment with ESA but that a PhA ≥ 5.00° showed a trend that was not statistically significant [23]. In contrast, Shin et al. demonstrated that patients with a PhA ≥ 4.50° received lower weekly doses of ESAs and lower monthly doses of intravenous iron than those with a PhA < 4.50°. They concluded that HD patients with a low PhA have poor responsiveness to anemia management, including ESAs and intravenous iron [18]. The previous two studies had a short follow-up period of 3 months, while one study was conducted in non-dialytic CKD stage 5 and PD patients, not in HD patients. However, our study followed HD patients for 24 months; we therefore found that the lower the PhA, the higher the required ESA dose.\nThe results of our study should be interpreted with caution given the following limitations. First, this was a cross-sectional, single-center study with a relatively small sample size, which could have resulted in information bias. However, we followed up with regular medical records for 2 years; thus, missing or incorrect information was likely minimized. Second, iron administration could have influenced Hb control. However, the dose of iron administered to each patient could not be determined; given that TIBC and ferritin levels did not differ, iron administration was unlikely to have had a significant effect in this study.",
"Phase angle values, a good indicator of nutritional status, are associated with the median value of Hb rather than with Hb variability in chronic HD patients. PhA provides practical information for predicting responsiveness to ESAs in this population." ]
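The ROC analysis reported above (AUC of Hb-Med for discriminating PhA > 5.00°, with an optimal cutoff of 10.77 g/dL) can be reproduced in outline with scikit-learn. This is a minimal sketch on synthetic stand-in data, not the study's SPSS workflow; the Youden index used here to pick the "optimal" cutoff is one common criterion, and the paper does not state which criterion it applied.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-patient Hb-Med (g/dL) and a binary label for
# whether PhA exceeds the 5.00-degree well-nourished cutoff.
hb_med = rng.normal(10.8, 0.5, size=76)
pha = 4.3 + 0.5 * (hb_med - 10.8) + rng.normal(0.0, 0.9, size=76)
well_nourished = pha > 5.0

auc = roc_auc_score(well_nourished, hb_med)
fpr, tpr, thresholds = roc_curve(well_nourished, hb_med)

# Youden's J statistic: the threshold maximizing sensitivity + specificity - 1.
best = np.argmax(tpr - fpr)
print(f"AUC={auc:.3f}  cutoff={thresholds[best]:.2f} g/dL  "
      f"sensitivity={tpr[best]:.1%}  specificity={1 - fpr[best]:.1%}")
```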
[ "intro", null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", "conclusions" ]
[ "Hemodialysis", "hemoglobin", "variability", "nutrition", "phase angle" ]
Introduction: Anemia is a common complication of patients with chronic kidney disease (CKD) and end-stage renal disease (ESRD) undergoing renal replacement treatment. Although the introduction of erythrocyte-stimulating agents (ESAs) has led to a dramatic reduction in blood transfusion requirements and is associated with improved quality of life, fluctuations in hemoglobin (Hb) levels, known as Hb variability, during ESA treatment is a well-documented phenomenon [1]. The optimal target Hb concentration in CKD/ESRD patients remains under debate. Countries and societies have different recommendations for maintaining the optimal Hb concentration. In addition, maintaining patients’ Hb levels in such an optimal narrow range is difficult due to the loss of physiological regulation of red blood cell formation and many other factors, such as iron deficiency, chronic inflammation, secondary hyperparathyroidism, malnutrition, and inadequate-dose dialysis. The data show that only 30% of patients will fall within this range at any point in time because Hb level fluctuations result in frequent under- and overshooting of the target level [2]. Chronic HD patients with a stable target Hb level are at lower risk for adverse events than maintenance HD patients without a stable Hb level [3]. In recent studies, a higher fluctuation in Hb variability was associated with cardiovascular mortality and all-cause mortality in maintenance HD patients [4,5]. Malnutrition is a well-known risk factor in the general population and in HD patients. However, few studies have evaluated the association between Hb variability and nutritional status in HD patients. The nutritional status in HD patients is usually assessed using the malnutrition-inflammation score (MIS), body mass index (BMI), and serum albumin level. This was because there was a problem with the accurate and reproducible evaluation of nutrition status. Body composite monitor (BCM) analysis is a noninvasive and accurate instrument for distinguishing between excessive and insufficient moisture levels and can accurately assess nutritional status [6,7]. Therefore, we aimed to elucidate whether Hb variability itself affected the nutritional status. PATIENTS and METHODS: Study population Patients with ESRD who were receiving maintenance HD were recruited. This study was conducted between 1 December 2016 and 1 May 2019 and included patients from Myongji Hospital in Korea. The study included adult patients aged over 20 years who had been undergoing HD for more than 24 months (Figure 1). The target Hb level was 10–11 g/dL according to reimbursement regulations of the Korea Health Insurance Review & Assessment Service. Dose adjustments of darbepoetin (NESP®; Kyowa Kirin Korea Co., Ltd., Seoul, Korea) were made according to the Hb level measured monthly in our facility. Therefore, every study population had more than 24 monthly Hb data points. The patients with acute infection, with malignancy, intact PTH > 500 pg/mL during study period, overhydration/extracellular water > 15% (OH) at BCM measurement, Kt/Vurea < 1.2 during study period or who were receiving HD via a catheter were excluded. Finally, 76 patients were enrolled in the study. Informed written consent was obtained from all patients. This study was approved by our facility’s institutional review board [MJH2018-10-012]. Study Flow Diagram. Patients with ESRD who were receiving maintenance HD were recruited. 
This study was conducted between 1 December 2016 and 1 May 2019 and included patients from Myongji Hospital in Korea. The study included adult patients aged over 20 years who had been undergoing HD for more than 24 months (Figure 1). The target Hb level was 10–11 g/dL according to reimbursement regulations of the Korea Health Insurance Review & Assessment Service. Dose adjustments of darbepoetin (NESP®; Kyowa Kirin Korea Co., Ltd., Seoul, Korea) were made according to the Hb level measured monthly in our facility. Therefore, every study population had more than 24 monthly Hb data points. The patients with acute infection, with malignancy, intact PTH > 500 pg/mL during study period, overhydration/extracellular water > 15% (OH) at BCM measurement, Kt/Vurea < 1.2 during study period or who were receiving HD via a catheter were excluded. Finally, 76 patients were enrolled in the study. Informed written consent was obtained from all patients. This study was approved by our facility’s institutional review board [MJH2018-10-012]. Study Flow Diagram. Determination of Hb indexes Hb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Avg, and Hb-Med were the Hb indexes. All the Hb indexes were calculated using monthly Hb levels for at least 24 months prior to the BCM test. We quantified the degree of Hb variability using range (HD-RAN), standard deviation (Hb-SD), and coefficient of variation of Hb (Hb-CV), that is, the ratio of standard deviation to the average Hb level [8]. BCM was performed at the study population enrollment. Hb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Avg, and Hb-Med were the Hb indexes. All the Hb indexes were calculated using monthly Hb levels for at least 24 months prior to the BCM test. We quantified the degree of Hb variability using range (HD-RAN), standard deviation (Hb-SD), and coefficient of variation of Hb (Hb-CV), that is, the ratio of standard deviation to the average Hb level [8]. BCM was performed at the study population enrollment. Other data collection All demographic and clinical data were collected from the patients’ electronic medical records. Age, sex, height, body weight, presence of diabetes, HD duration, and average laboratory data for the previous 24 months before BCM were collected. Laboratory data included mean platelet volume (MPV), total iron-binding capacity (TIBC), transferrin saturation (TS), and levels of serum iron, ferritin, albumin, calcium, phosphorus, intact parathyroid hormone (PTH), triglyceride, total cholesterol, high-density lipoprotein (HDL)-cholesterol, low-density lipoprotein (LDL)-cholesterol, and highly sensitive C-reactive protein (hs-CRP). Darbepoetin dosages were collected during the entire study period and calculated as µg/week. All demographic and clinical data were collected from the patients’ electronic medical records. Age, sex, height, body weight, presence of diabetes, HD duration, and average laboratory data for the previous 24 months before BCM were collected. Laboratory data included mean platelet volume (MPV), total iron-binding capacity (TIBC), transferrin saturation (TS), and levels of serum iron, ferritin, albumin, calcium, phosphorus, intact parathyroid hormone (PTH), triglyceride, total cholesterol, high-density lipoprotein (HDL)-cholesterol, low-density lipoprotein (LDL)-cholesterol, and highly sensitive C-reactive protein (hs-CRP). Darbepoetin dosages were collected during the entire study period and calculated as µg/week. 
BCM analysis The nutritional status of the patients at enrollment was assessed using the BCM® (Fresenius Medical Care a Deutschland GmbH, Germany). The measurements were made before the onset of the dialysis session at mid-week with 4 conventional electrodes being placed on the patient, who was lying in the supine position: 2 on the hand and 2 on the foot contralateral to the vascular access. The parameters obtained with the BCM were BMI, LTI, FTI, BCMI, and PhA [9]. The nutritional status of the patients at enrollment was assessed using the BCM® (Fresenius Medical Care a Deutschland GmbH, Germany). The measurements were made before the onset of the dialysis session at mid-week with 4 conventional electrodes being placed on the patient, who was lying in the supine position: 2 on the hand and 2 on the foot contralateral to the vascular access. The parameters obtained with the BCM were BMI, LTI, FTI, BCMI, and PhA [9]. Statistical analysis Because of the small number of patients, normal distribution was tested using a single sample Kolmogorov–Smirnov analysis. Variables are expressed as mean ± SD or median (range). Between-group differences were assessed for significance using Mann–Whitney U tests. Correlations between nutritional parameters and clinical, biochemical, and Hb indexes were assessed using Spearman’s (non-parametric) correlations. Multivariate linear regression analysis was used to assess the combined influence of the PhA values adjusted for age, sex, TIBC, albumin level, phosphorus level, and all Hb indexes. We evaluated the receiver operating characteristics (ROC) curve of Hb-Med, Hb-CV for predicting PA > 5.00°, which is considered as PhA cutoff value for estimating well-nourished status proposed in previous research [6]. Significant differences were defined as p values less than 0.05. All statistical analyses were performed using Statistical Package for the Social Sciences version 23.0 (SPSS Inc., Chicago, IL). Because of the small number of patients, normal distribution was tested using a single sample Kolmogorov–Smirnov analysis. Variables are expressed as mean ± SD or median (range). Between-group differences were assessed for significance using Mann–Whitney U tests. Correlations between nutritional parameters and clinical, biochemical, and Hb indexes were assessed using Spearman’s (non-parametric) correlations. Multivariate linear regression analysis was used to assess the combined influence of the PhA values adjusted for age, sex, TIBC, albumin level, phosphorus level, and all Hb indexes. We evaluated the receiver operating characteristics (ROC) curve of Hb-Med, Hb-CV for predicting PA > 5.00°, which is considered as PhA cutoff value for estimating well-nourished status proposed in previous research [6]. Significant differences were defined as p values less than 0.05. All statistical analyses were performed using Statistical Package for the Social Sciences version 23.0 (SPSS Inc., Chicago, IL). Study population: Patients with ESRD who were receiving maintenance HD were recruited. This study was conducted between 1 December 2016 and 1 May 2019 and included patients from Myongji Hospital in Korea. The study included adult patients aged over 20 years who had been undergoing HD for more than 24 months (Figure 1). The target Hb level was 10–11 g/dL according to reimbursement regulations of the Korea Health Insurance Review & Assessment Service. 
Results
Clinical characteristics of the study population
The study included a total of 76 patients undergoing HD (median duration, 70.98 (24–130) months; 51.3% men; mean age, 65.84 ± 11.91 years). Of these, 51.6% had diabetes mellitus. The mean Hb level and darbepoetin dosage were 10.80 ± 0.51 g/dL and 25.08 ± 13.41 µg/week, respectively. The mean albumin, iron, TIBC, TS, total cholesterol, LDL-cholesterol, HDL-cholesterol, triglyceride, calcium, phosphorus, intact PTH, and hs-CRP levels were 3.83 ± 0.22 g/dL, 73.78 ± 17.75, 257.20 ± 30.68 µg/dL, 28.81 ± 6.44%, 134.86 ± 26.45, 67.92 ± 20.63, 46.04 ± 13.00, 104.95 ± 47.23, 8.36 ± 0.38, 4.10 ± 0.66, 313.23 ± 159.41, and 0.42 ± 0.43 mg/dL, respectively. The median ferritin was 228.18 (49.14–1233.45). The dry weight, pre-dialysis weight, interdialytic weight gain, and OH were 60.80 ± 10.34 kg, 62.30 ± 11.36 kg, 2.16 ± 0.94 kg, and 8.47 ± 6.52%, respectively. The mean BMI, LTI, FTI, and BCMI were 23.58 ± 3.02, 12.71 ± 3.17, 9.80 ± 3.59, and 6.92 ± 2.25 kg/m², respectively, and the mean PhA was 4.28 ± 0.94° (Table 1).
Table 1. Clinical characteristics of the study population (N = 76). HDL: high-density lipoprotein; LDL: low-density lipoprotein; OH: overhydration/extracellular water ratio; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation, while categorical variables are expressed as number (percentage).
Comparisons of variables by Hb-CV level
The analysis that divided patients at the median Hb-CV value found no difference in age, sex, presence of diabetes, or HD duration. In addition, Hb-Avg and biochemical and nutritional parameters did not differ between the two groups. Finally, there was no intergroup difference with respect to the darbepoetin dose received (Table 2).
Table 2. Comparisons of variables by Hb-CV in HD patients (N = 76). CV: coefficient of variation; Hb-CV: hemoglobin coefficient of variation; HD: hemodialysis; HDL: high-density lipoprotein; LDL: low-density lipoprotein; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation or median (range), while categorical variables are expressed as number (percentage).
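The median-split comparisons above can be expressed compactly. The sketch below assumes a per-patient pandas DataFrame with hypothetical column names; it is illustrative rather than the study's actual code.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_by_median(df, split_col, outcome_cols):
    """Split patients at the median of split_col (e.g. "hb_cv") and compare
    each outcome column between the two groups with a Mann-Whitney U test."""
    cutoff = df[split_col].median()
    low = df[df[split_col] <= cutoff]
    high = df[df[split_col] > cutoff]
    p_values = {}
    for col in outcome_cols:
        _, p = mannwhitneyu(low[col].dropna(), high[col].dropna(),
                            alternative="two-sided")
        p_values[col] = p
    return cutoff, p_values

# e.g. compare_by_median(df, "hb_cv", ["hb_avg", "albumin", "pha"])
```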
Comparisons of variables by Hb-Med level
The analysis that divided patients at the median Hb value found that, compared to patients in the Hb-Med ≤ 10.77 group, those with an Hb-Med > 10.77 had a significantly higher proportion of male sex and significantly higher serum albumin level, TIBC, and PhA (65.8 versus 34.2%, 3.91 ± 0.21 versus 3.76 ± 0.21 g/dL, 264.93 ± 33.41 versus 249.41 ± 26.10 µg/dL, and 4.49 ± 0.92 versus 4.06 ± 0.93°, respectively; p < 0.05). In contrast, the darbepoetin dose received in the Hb-Med > 10.77 group was lower than that in the Hb-Med ≤ 10.77 group (18.02 ± 9.50 versus 32.15 ± 13.10 µg/week; p < 0.05; Table 3).
Table 3. Comparison of variables by Hb-Med level in HD patients (N = 76). Med: median value; HDL: high-density lipoprotein; LDL: low-density lipoprotein; PTH: parathyroid hormone; TIBC: total iron-binding capacity; PhA: phase angle. Continuous variables are expressed as mean ± standard deviation, while categorical variables are expressed as number (percentage). *p < 0.05. Bold values indicate a statistical difference (p < 0.05) between the two groups.
Correlations between clinical and biochemical variables and PhA
Age, female sex, darbepoetin dosage, and OH were negatively correlated with PhA (r = −0.490, −0.345, −0.287, and −0.558, respectively; p < 0.05). Serum albumin, phosphorus, TIBC, Hb-Med, and Hb-Avg were positively correlated with PhA (r = 0.457, 0.338, 0.329, 0.350, and 0.349, respectively; p < 0.05) (Table 4 and Figure 2).
Figure 2(A,B). Correlations among Hb-median, Hb-average, and phase angle in HD patients (N = 76).
Table 4. Correlations between clinical and biochemical variables and PhA (n = 76).
PhA: phase angle; Hb: hemoglobin; Hb-Avg: average of Hb; Hb-CV: coefficient of variation of Hb; Hb-Max: maximum of Hb; Hb-Med: median of Hb; Hb-Min: minimum of Hb; Hb-RAN: range of Hb; Hb-SD: standard deviation of Hb; HDL: high-density lipoprotein; LDL: low-density lipoprotein; OH: overhydration/extracellular water ratio; PTH: parathyroid hormone; TIBC: total iron-binding capacity. Spearman's (non-parametric) correlation was used to test for associations between clinical and biochemical variables and PhA. *p < 0.05. Bold values indicate a statistically significant correlation (p < 0.05).
Multivariate linear regression analysis with PhA as dependent variable
In multiple linear regression analysis, PhA was positively associated with Hb-Med and serum albumin level (β = 0.085 and 1.350, respectively; p < 0.05), whereas PhA was negatively associated with age and female sex (β = −0.041 and −0.477, respectively; p < 0.05; Table 5).
Table 5. Multivariate linear regression analysis with PhA as a dependent variable in HD patients (N = 76). Selected variables: age, sex, serum albumin level, phosphorus level, TIBC, Hb-CV, Hb-SD, Hb-RAN, Hb-Min, Hb-Max, Hb-Med, OH, and darbepoetin dosage. PhA: phase angle; Hb-Avg: average of Hb; Hb-CV: coefficient of variation of Hb; Hb-Max: maximum of Hb; Hb-Med: median of Hb; Hb-Min: minimum of Hb; Hb-RAN: range of Hb; Hb-SD: standard deviation of Hb; HD: hemodialysis; TIBC: total iron-binding capacity; OH: overhydration/extracellular water ratio. *p < 0.05.
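A sketch of the multivariate model with PhA as the dependent variable, using statsmodels on simulated data. The column names, toy coefficients, and the reduced covariate set are assumptions for illustration; the study entered all Hb indexes and additional covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 76
df = pd.DataFrame({
    "age": rng.normal(66, 12, n),
    "female": rng.integers(0, 2, n),
    "albumin": rng.normal(3.8, 0.2, n),
    "tibc": rng.normal(257, 31, n),
    "phosphorus": rng.normal(4.1, 0.7, n),
    "hb_med": rng.normal(10.8, 0.5, n),
})
# toy outcome loosely mimicking the reported directions of association
df["pha"] = (8.0 - 0.04 * df["age"] - 0.5 * df["female"]
             + 1.3 * (df["albumin"] - 3.8) + 0.1 * df["hb_med"]
             + rng.normal(0, 0.5, n))

model = smf.ols("pha ~ age + female + albumin + tibc + phosphorus + hb_med",
                data=df).fit()
print(model.params)  # the coefficients play the role of the betas in Table 5
```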
ROC curves of Hb-Med and Hb-CV for predicting PhA > 5.00°
The AUC of Hb-Med was 0.665 (95% confidence interval 0.535–0.794, p = 0.040) in predicting PhA > 5.00°. The optimal Hb-Med cutoff value was 10.77 g/dL (sensitivity 76.5%, specificity 59.3%). In contrast, the AUC of Hb-CV was 0.610 (95% confidence interval 0.457–0.764, p = 0.168) in predicting PhA > 5.00° (Figure 3).
Figure 3. Receiver operating characteristic (ROC) curves of Hb-median (red) and Hb-CV (green) for predicting PhA > 5.00°.
Discussion
We evaluated the cross-sectional correlation between Hb variability and nutritional status assessed by BCM in chronic HD patients over the preceding 24 months. The main findings of our study were that, unexpectedly, there was no association between Hb variability, quantified by Hb-CV, Hb-SD, and Hb-RAN, and any nutritional parameter including PhA. In contrast, among the nutritional parameters, only PhA was associated with Hb-Med and Hb-Avg. The optimal Hb-Med cutoff value to predict well-nourished status in chronic HD patients was 10.77 g/dL. Finally, HD patients with a higher Hb-Med received a lower average weekly darbepoetin dosage, and darbepoetin dosage was negatively correlated with PhA. Hb variability in HD patients was first described by Lacson and Berns in 2003, and most studies have investigated the relationship between Hb variability and overall mortality [10–12]. In a recent retrospective study including 252 patients, Hb variability was independently associated with cardiovascular mortality in HD patients, and patients in the highest Hb variation group had a nine times higher risk of cardiovascular death than those in the lowest Hb variation group [4]. In a meta-analysis, the investigators analyzed the relationship between Hb variability and all-cause mortality and found a 9% increase in the adjusted rate of death for each 1 g/dL increase in Hb variability [5]. In contrast, few studies have examined the relationship between Hb variability and nutritional status. A recent study of 754 HD patients by Rattanasompattikul et al., using nutritional markers such as the MIS, considered wasting an independent factor for ESA resistance; they found that low levels of nutritional markers were independent predictors of a decreased response to ESAs [13]. In another observational multicenter study, the investigators found that HD patients with a high BMI (≥30 kg/m²) required a lower ESA dose and had a better response to anemia treatment [14]. These studies focused on the relationship between ESA responsiveness and nutritional status. Among available indicators of nutritional status, PhA measured by BIA or BCM is promising. Several recent papers have reported that PhA is a good indicator of nutritional status and a useful predictor of mortality [15–18]. PhA can be used to screen individuals at risk of mortality; in particular, BIA-provided PhA is the most potent predictor of malnutrition and is associated with death in patients with CKD [19,20]. A PhA of 5.00° is usually considered the cutoff value for identifying well-nourished status, as proposed in previous research [6]. One paper explored the relationship between PhA and ESA responsiveness: Colin et al. found that patients with a lower PhA had lower Hb and hematocrit. In addition, they suggested that patients with a PhA > 5.00° responded satisfactorily to treatment with ESAs, while those with a lower PhA did not [21]. However, no recent studies have investigated the relationship between PhA and Hb variability. In our study, there was no association between Hb variability, as measured by Hb-CV, Hb-SD, or Hb-RAN, and any nutritional parameter including PhA. In contrast, only PhA among the nutritional parameters was significantly associated with Hb-Med and Hb-Avg. This unexpected finding likely resulted from the low-amplitude Hb variability in our study population.
The median Hb-CV in our study was only 9.47, which indicates low-amplitude Hb variability. In Korea, the target Hb level is 10–11 g/dL according to the reimbursement regulations of the Korea Health Insurance Review & Assessment Service, and strict dose adjustment of darbepoetin is performed according to the monthly Hb level in every HD center. The Hb level in our study was therefore controlled within a narrow range. This is why our patients could not be classified into the six groups applied in a previous study of Hb variability: consistently low, consistently within the target range, consistently high, low-amplitude fluctuation with low Hb levels, low-amplitude fluctuation with high Hb levels, and high-amplitude fluctuation. Our study was instead restricted to investigating the effects of Hb variability using Hb indexes such as Hb-CV, Hb-SD, and Hb-RAN. Taken together, these findings suggest that the factor related to the PhA of HD patients is the maintenance of appropriate Hb-Med and Hb-Avg values over time, especially in the setting of low-amplitude Hb variability. In our study, well-known nutritional markers such as albumin level, phosphorus level, and TIBC were positively correlated with PhA, whereas older age and female sex were negatively correlated with PhA; these consistent associations support the reliability of our results. Another major finding of our study was that HD patients with a higher Hb-Med received lower average weekly darbepoetin dosages and that darbepoetin dosage was negatively correlated with PhA. There have been conflicting results regarding the relationships among ESA responsiveness, Hb, and PhA. In a previous study, Han et al. demonstrated that PhA was positively associated with the geriatric nutritional risk index, lean tissue index, and albumin level in non-dialytic stage 5 CKD and PD patients; however, there was no association between PhA and Hb level in those patients [22]. González et al. reported that an MIS ≤ 6 indicated a possible response to treatment with ESAs, whereas a PhA ≥ 5.00° showed a trend that was not statistically significant [23]. In contrast, Shin et al. demonstrated that patients with a PhA ≥ 4.50° received lower weekly doses of ESAs and lower monthly doses of intravenous iron than those with a PhA < 4.50°, and concluded that HD patients with a low PhA have poor responsiveness to anemia management including ESAs and intravenous iron [18]. The former two studies had a short follow-up period of 3 months, and one was conducted in non-dialytic CKD stage 5 and peritoneal dialysis patients rather than in HD patients. Our study, by contrast, followed HD patients for 24 months and found that the lower the PhA, the higher the required ESA dose. The results of our study should be interpreted with caution given the following limitations. First, this was a cross-sectional, single-center study with a relatively small sample size, which could have resulted in information bias. However, we followed regular medical records for 2 years; thus, missing or incorrect information was likely minimized. Second, iron administration could have influenced Hb control. The dose of iron administered could not be determined; however, TIBC and ferritin levels did not differ between groups and were therefore unlikely to have had a significant effect in this study.
Conclusions
Phase angle, a good indicator of nutritional status, is associated with the median Hb value rather than with Hb variability in chronic HD patients.
PhA provides practical information to predict responsiveness to ESAs in the same population.
Background: Our aim was to elucidate whether Hb variability affects nutritional status in HD patients. Methods: This study included chronic HD patients (n = 76) with available monthly Hb levels up to 24 months prior to the body composition monitoring (BCM) measurement. The parameters obtained in the BCM included body mass index (BMI), lean tissue index (LTI), fat tissue index (FTI), body cell mass index (BCMI), overhydration/extracellular water ratio (OH), and phase angle (PhA). The coefficient of variation (Hb-CV), standard deviation (Hb-SD), and range of Hb (Hb-RAN) were used as indexes of Hb variability. In addition, minimum (Hb-Min), maximum (Hb-Max), average (Hb-Avg), and median (Hb-Med) Hb levels (g/dL) were analyzed. Results: There were no significant differences in clinical, biochemical, and nutritional indexes based on the Hb-CV level. Compared to patients with an Hb-Med ≤ 10.77, those with an Hb-Med >10.77 had higher albumin levels, total iron-binding capacity (TIBC), and PhA and lower average weekly prescribed darbepoetin. Age, female sex, OH, and darbepoetin dosage were negatively correlated with PhA. Serum albumin, phosphorus, TIBC, Hb-Med, and Hb-Avg were positively correlated with PhA. In multiple linear regression analysis, PhA was positively associated with Hb-Med and serum albumin level, whereas PhA was negatively associated with age and female sex. The area under the curve (AUC) of Hb-Med was 0.665 (p = 0.040) in predicting PhA >5.00°. Conclusions: PhA was not affected by indexes of Hb variability, whereas PhA was associated with Hb-Med in chronic HD patients.
Introduction
Anemia is a common complication in patients with chronic kidney disease (CKD) and end-stage renal disease (ESRD) undergoing renal replacement treatment. Although the introduction of erythropoiesis-stimulating agents (ESAs) has led to a dramatic reduction in blood transfusion requirements and is associated with improved quality of life, fluctuation in hemoglobin (Hb) levels during ESA treatment, known as Hb variability, is a well-documented phenomenon [1]. The optimal target Hb concentration in CKD/ESRD patients remains under debate, and countries and societies have different recommendations for maintaining it. In addition, maintaining patients' Hb levels in such an optimal narrow range is difficult due to the loss of physiological regulation of red blood cell formation and many other factors, such as iron deficiency, chronic inflammation, secondary hyperparathyroidism, malnutrition, and inadequate dialysis dose. The data show that only 30% of patients fall within this range at any point in time because Hb level fluctuations result in frequent under- and overshooting of the target level [2]. Chronic HD patients with a stable target Hb level are at lower risk for adverse events than maintenance HD patients without a stable Hb level [3]. In recent studies, higher Hb variability was associated with cardiovascular and all-cause mortality in maintenance HD patients [4,5]. Malnutrition is a well-known risk factor both in the general population and in HD patients. However, few studies have evaluated the association between Hb variability and nutritional status in HD patients. Nutritional status in HD patients is usually assessed using the malnutrition-inflammation score (MIS), body mass index (BMI), and serum albumin level, because accurate and reproducible evaluation of nutritional status has been difficult. Body composition monitor (BCM) analysis is a noninvasive and accurate instrument for distinguishing between excessive and insufficient hydration and can accurately assess nutritional status [6,7]. Therefore, we aimed to elucidate whether Hb variability itself affects nutritional status.
Keywords: Hemodialysis; hemoglobin; variability; nutrition; phase angle.
MeSH descriptors: Age Factors; Aged; Body Composition; Electric Impedance; Female; Hemoglobins; Humans; Kidney Failure, Chronic; Male; Middle Aged; Nutritional Status; Predictive Value of Tests; ROC Curve; Renal Dialysis; Sex Factors.
Influence of efforts of employer and employee on return-to-work process and outcomes.
PMID: 21328060
Background: Research on disability and RTW outcomes has led to significant advances in understanding these outcomes; however, few studies focus on measuring the RTW process. After a prolonged period of sickness absence, assessment of the RTW process by investigating RTW Effort Sufficiency (RTW-ES) is essential. However, little is known about the factors influencing RTW-ES, and the correspondence between factors determining RTW-ES and RTW is unknown. The purpose of this study was to investigate 1) the strength and relevance of factors related to RTW-ES and RTW (no/partial RTW), and 2) the comparability of factors associated with RTW-ES and with RTW. Methods: During 4 months, all assessments of RTW-ES and RTW (no/partial RTW) among employees applying for disability benefits after 2 years of sickness absence, performed by labor experts at 3 Dutch Social Insurance Institute locations, were investigated by means of a questionnaire. Results: Questionnaires concerning 415 cases were available. In multiple logistic regression analysis, the only factor related to RTW-ES was a good employer-employee relationship. Factors related to RTW (no/partial RTW) were high education, no previous periods of complete disability, and a good employer-employee relationship. Conclusions: Different factors are relevant to RTW-ES and RTW, but the employer-employee relationship is relevant to both. Considering the importance of assessing RTW-ES after a prolonged period of sickness absence among employees who are not fully disabled, this knowledge is essential for the assessment of RTW-ES and for the RTW process itself.
MeSH descriptors: Adult; Educational Status; Employment; Female; Humans; Illness Behavior; Intention; Interpersonal Relations; Logistic Models; Male; Middle Aged; Netherlands; Sick Leave; Surveys and Questionnaires; Work.
PMCID: 3217145
Background
In the past years, policymakers and researchers have focused on early return-to-work (RTW) after sickness absence and on the prevention of long-term sickness absence and permanent disability [1, 2]. Long-term absence and work disability are associated with health risks, social isolation, and exclusion from the labor market [1, 2]. Although research on disability and RTW outcomes has led to significant advances in understanding of these outcomes, few studies focus on measuring aspects of the RTW process, that is, the process that workers go through to reach, or attempt to reach, their goals [3]. To date, the focus has commonly been placed simply on the act of returning to work or applying for a disability pension. However, RTW and work disability can also be described in terms of the type of actions undertaken by workers resuming employment [4]. An instrument to measure the actions undertaken in the RTW process and to evaluate whether an agreed-upon RTW goal has been reached is of interest to various stakeholders. In several countries, the assessment of Return-To-Work Effort Sufficiency (RTW-ES) is part of the evaluation of the RTW process in relation to the application for disability benefits [5]. After the onset of sickness absence, the RTW process takes place. RTW efforts made in the RTW process include all activities undertaken to improve the work ability of the sick-listed employee in the period between the onset of sickness absence and the application for disability benefits (see Fig. 1) [6]. During this RTW process, the RTW efforts are undertaken by the employer, the employee, and health professionals (e.g. general physician, specialist, and/or occupational physician). The assessment of RTW-ES explores the RTW process from the perspective of the efforts made by both employer and employee. This assessment takes place prior to the assessment of disability benefits, which in the Netherlands takes place after 2 years of sickness absence [6]. The assessment of RTW-ES and the assessment for disability benefits are performed by the Labor Expert (LE) and a Social Insurance Physician (SIP), respectively, of the Social Insurance Institute (SII).
Fig. 1. A description of the RTW process in relation to the assessment of Return-To-Work Effort Sufficiency and the assessment for disability benefits.
The RTW efforts are sufficient if the RTW process is designed effectively, the chances of RTW are optimal, and RTW is achieved in accordance with the health status and work ability of the sick-listed employee [5, 7]. The RTW-ES assessment is performed only when the Dutch employee has not fully returned to work after 2 years of sickness absence, but does have remaining work ability and is applying for disability benefits. Of employees applying for disability benefits, some apply for partial benefits and some for complete benefits, based mainly on the level of RTW achieved during the RTW process, i.e. no RTW or partial RTW. Little is known about the differences between employees applying for disability benefits after long-term sickness absence who have achieved partial RTW and those who have not achieved RTW. Assessing the sufficiency of the efforts made during the RTW process prior to the application for disability benefits could help prevent unnecessary applications for disability benefits.
However, current RTW outcome measures focus mostly on time elapsed or costs [4], not on the RTW process itself. Assessing the sufficiency of efforts undertaken during the RTW process might be an important addition to existing RTW outcomes, as it could give insight into factors related to RTW in employees on long-term sickness absence who apply for disability benefits [5]. Because the RTW outcome is assessed after a longer period of sickness absence, the influence of the activities undertaken in the RTW process is evident. Knowing the strength and relevance of factors influencing the RTW process can provide vital information about the RTW outcome and the opportunities to achieve better RTW goals in the future. Ultimately, knowing the factors associated with RTW-ES among employees who have not fully returned to work but do have remaining work ability might give insight into the differences between factors related to the RTW outcome and factors in the RTW process related to the assessment of RTW-ES. Moreover, the comparability of the two outcomes (RTW outcome versus RTW-ES outcome) is unclear. Factors related to RTW among employees on long-term sickness absence applying for disability benefits might differ from factors relevant to the RTW-ES outcome as measured by the activities during the RTW process. The purpose of this study was to investigate 1) the strength and relevance of factors related to RTW Effort Sufficiency (RTW-ES) and to the RTW outcome (no RTW or partial RTW) among employees applying for disability benefits after 2 years of sickness absence, and 2) the comparability of the factors associated with RTW-ES and RTW.
Results
Study Population
Questionnaires concerning 415 cases were filled out. Across all cases, the average age of the employees was 47 years (SD 9.4), 180 (43%) were male, and the education level was low in 20%, medium in 60%, and high in 20% (see Table 1).
Table 1. Description of study population
Age, M (SD) (N = 415): 47.4 (9.4)
Gender, N (%) (N = 415): male 180 (43.4); female 235 (56.6)
Educational level, N (%) (N = 410): low 84 (20.5); medium 246 (60.0); high 80 (19.5)
RTW, N (%) (N = 411): no (or only on a therapeutic basis) 211 (51.3); yes (partially or fully) 200 (48.7)
RTW efforts, N (%) (N = 415): sufficient 334 (80.5); insufficient 81 (19.5)
RTW at own employer, N (%) (N = 196): yes 191 (97.4); no 5 (2.6)
RTW process agreed on by employee, N (%) (N = 409): yes 329 (80.4); no 80 (19.6)
Of the 415 cases, RTW-ES was deemed sufficient in 334 cases (80%) and insufficient in 81 cases (20%). Of the 415 cases, 203 sick-listed employees had returned to work partially prior to applying for disability benefits, whereas 211 sick-listed employees (51%) had not returned to work. At the moment of application for disability benefits, 191 of the employees who had returned to work had returned to their own employer (97%), whereas 5 had not (3%). The RTW process was agreed upon by the employee in 329 cases (80%), while 80 employees (20%) did not agree with the proceedings of the 2 years prior to the application for disability benefits.
Personal and External Characteristics
The characteristics of the variables included in the logistic analyses are presented in Table 2. Regarding the personal factors, the reason for absence was a physical health condition in 261 cases (63%), mixed health conditions in 67 cases (16%), and a mental health condition in 84 cases (20%). The average tenure was 13 years (SD 8.8). Of the sick-listed employees, 272 (66%) reported periods of complete disability, meaning that no activities to promote RTW could be undertaken during those periods.
218 employees (53%) reported periods of work resumption, meaning that they had attempted to RTW during the 2 years before the application for disability benefits.
Table 2. Description of personal and external factors in study population
Personal factors:
Reason of absence, N (%) (N = 412): physical 261 (63.3); both physical and mental 67 (16.3); mental 84 (20.4)
Tenure, M (SD) (N = 358): 12.84 (8.75)
Periods of complete disability, N (%) (N = 412): yes 272 (66.0); no 140 (34.0)
Periods of work resumption, N (%) (N = 412): yes 218 (52.9); no 194 (47.1)
External factors:
Sickness absence work related, N (%) (N = 339): yes, completely/partially 55 (16.2); no 284 (83.8)
Relationship employer-employee, N (%) (N = 380): good/neutral 355 (93.4); bad 25 (6.6)
Conflict, N (%) (N = 410): yes 32 (7.8); no 378 (92.2)
For the external factors, the sickness absence was partially or completely work-related in 55 cases (16%). The relationship between the employer and employee was good or neutral in 355 cases (93%). There was evidence of conflict between employer and employee in 32 cases (8%). The correlation between employer-employee relationship and employer-employee conflict was 0.72 (P < 0.01).
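The collinearity check between these two dichotomous variables can be done as below. The paper does not state which coefficient was used, but for two 0/1 variables the Pearson correlation equals the phi coefficient; the data here are simulated and the variable names are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 380
conflict = (rng.random(n) < 0.08).astype(int)
# toy "poor relationship" indicator that usually agrees with conflict
poor_relationship = np.where(rng.random(n) < 0.95, conflict, 1 - conflict)

r, p = pearsonr(poor_relationship, conflict)  # phi coefficient for 0/1 data
print(f"r = {r:.2f}, p = {p:.3g}")  # a high r argues for dropping one variable
```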
RTW-ES and RTW
Factors related to RTW-ES are shown in Table 3. The multilevel regression analysis shows 5 potential determinants (P < 0.20) of RTW-ES, taking assessor into account: reason of absence, tenure, work-relatedness of absence, employer-employee relationship, and employer-employee conflict.
Table 3. Factors related to RTW-ES: multilevel logistic regression analyses, taking assessor into account
Variable (reference group): crude OR (95% CI), P | ORª adjusted for age, gender, and education (95% CI), P
Personal factors:
Age (years): 1.00 (0.97–1.03), 0.93 | 0.99 (0.95–1.03), 0.65
Gender (female): 1.07 (0.68–1.71), 0.76 | 0.97 (0.51–1.85), 0.92
Education (low): 1.02 (0.43–2.41), 0.97 | 1.08 (0.35–3.32), 0.89
Reason of absence (mental)¹: 1.97 (1.14–3.42), 0.02 | 1.22 (0.57–2.61), 0.60
Tenure (years): 1.03 (0.99–1.06), 0.14 | 1.03 (0.98–1.01), 0.23
Periods of complete disability (no): 1.19 (0.75–1.89), 0.45 | –
Periods of work resumption (no): 1.22 (0.76–1.95), 0.42 | –
External factors:
Sickness absence work related (yes)²: 2.73 (1.33–5.62), 0.01 | 1.44 (0.64–3.22), 0.38
Relationship employer/employee (poor): 5.91 (2.81–12.43), <0.01 | 5.47 (2.00–14.98), <0.01
Conflict (yes)³: 4.25 (2.14–8.43), <0.01 | –
OR > 1 indicates a higher chance of RTW-ES compared to the reference group. ª QIC = 265.92, N = 269. ¹ Physical, both physical and mental, mental. ² No, partial/yes. ³ r = 0.72 with the variable 'relationship employer/employee'; not included in multiple regression.
Using multiple multilevel logistic regression analysis, adjusting for age, gender, and education and excluding conflict, one factor remained in the model: only the employer-employee relationship was significantly related to a higher chance of RTW-ES (OR 5.47, 95% CI 2.00–14.98, P < 0.01). Factors related to RTW according to the regression analysis are presented in Table 4. In the univariate regression analyses, 5 potential determinants were associated (P < 0.20) with RTW: education level, tenure, periods of complete disability, relationship between employer and employee, and employer-employee conflict. Conflict was excluded from the model because of its high correlation with the employer-employee relationship, and the model was adjusted for age, gender, and education.
Using multiple backward conditional logistic regression analysis, three factors remained in the model: employer-employee relationship (OR 14.59, 95% CI 3.29–64.71, P < 0.01), level of education (OR 2.89, 95% CI 1.39–6.00, P < 0.01), and periods of complete disability (OR 1.92, 95% CI 1.18–3.15, P < 0.01).
Table 4. Factors related to RTW: logistic regression analyses
Variable (reference group): crude OR (95% CI), P | ORª adjusted for age, gender, and education (95% CI), P
Personal factors:
Age (years): 1.00 (0.98–1.02), 0.97 | 1.00 (0.97–1.02), 0.68
Gender (male): 1.01 (0.68–1.49), 0.97 | 0.98 (0.62–1.53), 0.92
Education (low): 1.99 (1.07–3.70), 0.03 | 2.89 (1.39–6.00), <0.01
Reason of absence (mental)¹: 1.08 (0.57–2.07), 0.81 | –
Tenure (years): 1.02 (0.99–1.04), 0.18 | 1.01 (0.98–1.04), 0.47
Periods of complete disability (yes): 1.74 (1.15–2.63), 0.01 | 1.92 (1.18–3.15), <0.01
Periods of work resumption (yes): 1.16 (0.79–1.71), 0.45 | –
External factors:
Sickness absence work related (yes)²: 1.35 (0.75–2.41), 0.32 | –
Relationship employer/employee (poor): 12.95 (3.01–55.74), <0.01 | 14.59 (3.29–64.71), <0.01
Conflict (yes)³: 5.57 (2.10–14.78), <0.01 | –
OR > 1 indicates a higher chance of RTW compared to the reference group. ª R² = 135, N = 321. ¹ Physical, both physical and mental, mental. ² No, partial/yes. ³ r = 0.72 with the variable 'relationship employer/employee'; not included in multiple regression analysis.
Only employer-employee relationship had a significant relationship to a higher chance of RTW-ES (OR 5.47, 95%CI 2.00-14.98, P < 0.01). Factors related to RTW according to the regression analysis are presented in Table 4. In the univariate regression analyses 5 potential determinants were associated (P < 0.20) to RTW: education level, tenure, periods of complete disability, relationship between employer and employee, and employer-employee conflict. Conflict was excluded from the model because of the high correlation to employer-employee relationship, and the model was adjusted for age, gender and education. Using multiple backward conditional logistic regression analysis, three factors remained in the model: employer-employee relationship (OR 14.59, 95%CI 3.29-64.71, P = <0.01), level of education (OR 2.89, 95%CI 1.39-6.00, P = <0.01), and periods of complete disability (OR 1.92, 95%CI 1.18-3.15, P = <0.01).Table 4Factors related to RTW: logistic regression analysesVariable (reference group)Crude odds ratiosOR, adjusted for age, gender and educationªOR (95%) P OR (95%) P Personal factors Age (years)1.00 (0.98–0.02)0.971.00 (0.97–1.02)0.68 Gender (male)1.01 (0.68–1.49)0.970.98 (0.62–1.53)0.92 Education (low)1.99 (1.07–3.70)0.032.89 (1.39–6.00)<0.01 Reason of absence (mental)¹1.08 (0.57–2.07)0.81–– Tenure (years)1.02 (0.99–1.04)0.181.01 (0.98–1.04)0.47 Periods of complete disability (yes)1.74 (1.15–2.63)0.011.92 (1.18–3.15)<0.01 Periods of work resumption (yes)1.16 (0.79–1.71)0.45––External factors Sickness absence work related (yes)²1.35 (0.75–2.41)0.32–– Relationship employer/employee (poor)12.95 (3.01–55.74)<0.0114.59 (3.29–64.71)<0.01 Conflict (yes)³5.57 (2.10–14.78)<0.01––OR of >1 indicates a higher chance of RTW, compared to the reference group aR² = 135, N = 321 1Physical, both physical and mental, mental 2No, partial/yes 3r = 0.72 with variable ‘relationship employer/employee’; not included in multiple regression analysis Factors related to RTW: logistic regression analyses OR of >1 indicates a higher chance of RTW, compared to the reference group aR² = 135, N = 321 1Physical, both physical and mental, mental 2No, partial/yes 3r = 0.72 with variable ‘relationship employer/employee’; not included in multiple regression analysis
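For readers wanting to reproduce this kind of analysis, the following is a minimal sketch, under assumptions, of the model behind Table 3: a logistic regression of RTW-ES with cases clustered within assessors, fitted here as a GEE. This is not the authors' code (the study used SPSS 16.0), and the file and column names are hypothetical.

```python
# Sketch: clustered ("multilevel") logistic regression of RTW-ES, cases
# grouped by assessor, fitted as a GEE with an exchangeable working
# correlation. Hypothetical file and column names.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("rtw_cases.csv")  # one row per assessed case (hypothetical)

model = smf.gee(
    "rtw_es ~ age + gender + education + good_relationship",  # adjusted model
    groups="assessor_id",                 # cases are clustered within assessors
    data=df,
    family=sm.families.Binomial(),        # logistic link for a binary outcome
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()

# Odds ratios with 95% CIs, the form in which Table 3 reports the effects.
ci = result.conf_int()
table = pd.DataFrame({
    "OR": np.exp(result.params),
    "CI low": np.exp(ci[0]),
    "CI high": np.exp(ci[1]),
})
print(table)
# QIC, the GEE model-selection criterion quoted under Table 3 (available in
# recent statsmodels versions; returns a (QIC, QICu) pair).
print(result.qic())
```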
[ "Background", "Measures", "RTW-ES Assessment", "Research Questionnaire", "Statistical Analyses", "Comparability", "Study Population", "Personal and External Characteristics", "RTW-ES and RTW" ]
[ "In recent years, policymakers and researchers have focused on early return-to-work (RTW) after sickness absence and on the prevention of long-term sickness absence and permanent disability [1, 2]. Long-term absence and work disability are associated with health risks, social isolation and exclusion from the labor market [1, 2]. Although research on disability and RTW outcome has led to significant advances in understanding these outcomes, few studies measure aspects of the RTW process, i.e. the process that workers go through to reach, or attempt to reach, their goals [3]. To date, the focus is commonly placed simply on the act of returning to work or applying for a disability pension. However, RTW and work disability can also be described in terms of the type of actions undertaken by workers resuming employment [4].

An instrument to measure the actions undertaken in the RTW process, and to evaluate whether an agreed-upon RTW goal has been reached, is of interest to various stakeholders. In several countries the assessment of Return-To-Work Effort Sufficiency (RTW-ES) is part of the evaluation of the RTW process in relation to the application for disability benefits [5]. After the onset of sickness absence, the RTW process takes place. RTW efforts made in the RTW process include all activities undertaken to improve the work ability of the sick-listed employee in the period between the onset of sickness absence and the application for disability benefits (see Fig. 1) [6]. During this RTW process, the RTW efforts are undertaken by the employer, the employee, and health professionals (e.g. general physician, specialist, and/or occupational physician). The assessment of RTW-ES explores the RTW process from the perspective of the efforts made by both employer and employee. This assessment takes place prior to the assessment of disability benefits, which in the Netherlands takes place after 2 years of sickness absence [6]. The assessment of RTW-ES and the assessment for disability benefits are performed by the Labor Expert (LE) and a Social Insurance Physician (SIP), respectively, of the Social Insurance Institute (SII).

Fig. 1 A description of the RTW process in relation to the assessment of Return-To-Work Effort Sufficiency and the assessment for disability benefits.

The RTW efforts are sufficient if the RTW process is designed effectively, the chances of RTW are optimal, and RTW is achieved in accordance with the health status and work ability of the sick-listed employee [5, 7]. The RTW-ES assessment is performed only when the Dutch employee has not fully returned to work after 2 years of sickness absence, but does have remaining work ability and is applying for disability benefits. Of employees applying for disability benefits, some apply for partial benefits and some for complete benefits, based mainly on the level of RTW achieved during the RTW process, i.e. no RTW or partial RTW. Little is known about the differences between employees who apply for disability benefits after long-term sickness absence who have achieved partial RTW and those who have not.

Assessing the sufficiency of the efforts made during the RTW process prior to the application for disability benefits could help prevent unnecessary applications. However, current RTW process outcomes focus mostly on time elapsed or costs [4], not on the RTW process itself. Assessing the sufficiency of efforts undertaken during the RTW process might be an important addition to existing RTW outcomes, as it could give insight into factors related to RTW in employees on long-term sickness absence who apply for disability benefits [5].

Because the RTW outcome is assessed after a longer period of sickness absence, the influence of the activities undertaken in the RTW process is evident. Knowing the strength and relevance of factors influencing the RTW process can provide vital information for the RTW outcome and the opportunities to achieve better RTW goals in the future. Ultimately, knowing the differences in factors associated with RTW-ES among employees who have not returned to work fully, but do have remaining work ability, might give insight into the differences between factors related to the RTW outcome and the factors during the RTW process related to the assessment of RTW-ES. Moreover, the comparability of the two outcomes (RTW outcome versus RTW-ES outcome) is unclear. Factors related to RTW among employees on long-term sickness absence applying for disability benefits might differ from factors relevant to the RTW-ES outcome as measured by the activities during the RTW process.

The purpose of this study was to investigate (1) the strength and relevance of factors related to RTW Effort Sufficiency (RTW-ES) and to RTW outcome (no RTW or partial RTW) among employees applying for disability benefits after 2 years of sickness absence, and (2) the comparability of the factors associated with RTW-ES and RTW.
RTW-ES Assessment: The RTW-ES assessment focuses on whether enough activities have been undertaken by the employer and employee to realize (partial) RTW after 2 years of sickness absence. The assessment is based on a case report compiled by the employer. This case report includes a problem analysis (a mandatory description of the (dis)abilities of the employee by an occupational physician hired by the employer), the plan designed to achieve work resumption (an action plan), and the employee's opinion regarding the RTW process. Records of all interventions, conversations and agreements between the parties involved in the RTW process are also included in the case report. The assessment is performed by LEs from the Dutch SII, who are graduates in social sciences. During the assessment, the LEs have the opportunity to consult an SIP, and can invite the employer and employee to provide more information. When, according to the LE, insufficient efforts have been made, the application for disability benefits is delayed, and the employer and/or employee receive a financial sanction, depending on who has omitted to perform the necessary efforts to promote RTW. The assessment of RTW-ES is performed at the discretion of the LEs; no evidence-based protocol or instrument is available. Employees who have returned to work fully and are receiving the original level of income, or who are fully disabled, are not assessed. Employees on sickness absence due to pregnancy, or on sickness absence while not under contract, fall under a different policy and are likewise not assessed.

Research Questionnaire: A closed-ended questionnaire was developed to gather information about the two outcomes, RTW and RTW-ES, and about personal and external factors related to the case and the RTW process of the employee. The strength and relevance of factors related to RTW-ES and to the RTW outcome (no RTW or partial RTW) were investigated by means of this questionnaire. During the RTW-ES assessment, the LE was asked to fill out the questionnaire.
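Purely as an illustration of the data structure implied by the questionnaire, the record below shows one way the items could be coded for the analyses that follow. The column names and codings are hypothetical stand-ins for the items described in the text, not the study's actual data dictionary.

```python
# Hypothetical coding of one questionnaire record for analysis.
import pandas as pd

example_case = pd.DataFrame([{
    "age": 47,                        # years
    "gender": "female",               # male / female
    "education": "medium",            # low / medium / high
    "reason_of_absence": "physical",  # physical / mental / both
    "tenure": 13,                     # years with current employer
    "work_resumption": 1,             # 1 = periods of work resumption, 0 = none
    "complete_disability": 1,         # 1 = periods of complete disability, 0 = none
    "work_related": 0,                # 1 = absence (partly) work-related, 0 = not
    "conflict": 0,                    # 1 = employer-employee conflict, 0 = none
    "good_relationship": 1,           # 1 = good/neutral relationship, 0 = poor
    "rtw": 0,                         # 1 = partial/full RTW, 0 = no RTW
    "rtw_es": 1,                      # 1 = efforts judged sufficient, 0 = not
}])
print(example_case.dtypes)
```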
The questionnaire consists of a list of possible predictors of RTW compiled from the literature (e.g. [8–10]). Questions were included about personal factors such as age, gender and level of education (low, medium, high, including examples), and about more work-related personal factors such as the reason of sickness absence (physical, mental or both) and tenure (number of years with the current employer). Questions about whether there had been periods of work resumption (yes, no) or periods of complete disability (yes, no) were also included. For the external factors, questions were asked about whether the sickness absence was work-related (yes, both work-related and private, no), whether there had been a conflict between the employer and employee during the RTW process (yes, no), and whether the quality of the relationship between the employer and employee was deemed good/neutral or bad. A question about whether the employee had returned to work (yes/no) was included, as well as a question about the sufficiency of the RTW efforts (sufficient/insufficient) according to the LE. The LEs gathered the information necessary for filling out the questionnaire by examining the case report or by interviewing the employer, employee or SIP.

Statistical Analyses: Logistic regression analysis was used to assess the independent contribution of factors to the RTW outcome. The method used was backward conditional, because of the exploratory nature of the analyses. Similar to the analyses of RTW, multilevel regression analysis was used to analyze the relationship between the factors and the RTW process outcome in terms of sufficiency of RTW efforts, taking the assessing professional into account. In both multiple analyses, variables were entered into the model when P < 0.20 in the univariate relationships, and the models were adjusted for age, gender and education level. Data analysis was performed using SPSS 16.0 for MS Windows. (A sketch of this two-step model-building strategy follows below.)

Comparability: The results of the statistical analyses were used to assess the comparability of the factors associated with RTW-ES and RTW.
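The sketch below reproduces, under assumptions, the strategy just described: a univariate screen at P < 0.20 followed by backward elimination from a multiple logistic regression with the adjustment variables forced in. SPSS's "backward conditional" method uses a conditional likelihood-ratio statistic; this stand-in drops terms on Wald P values instead, and it assumes candidates are numerically coded (0/1 or continuous) columns of a pandas DataFrame with hypothetical names.

```python
# Sketch: P < 0.20 univariate screen, then backward elimination.
import statsmodels.formula.api as smf

FORCED = ["age", "gender_male", "education_low"]  # adjustment set, always kept

def backward_select(df, outcome, candidates, p_enter=0.20, p_remove=0.05):
    # Step 1: keep candidates with a univariate P value below p_enter.
    kept = [v for v in candidates
            if smf.logit(f"{outcome} ~ {v}", data=df)
                  .fit(disp=0).pvalues[v] < p_enter]
    # Step 2: refit repeatedly, dropping the weakest candidate each round.
    while kept:
        fit = smf.logit(f"{outcome} ~ {' + '.join(FORCED + kept)}",
                        data=df).fit(disp=0)
        pvals = fit.pvalues[kept]
        worst = pvals.idxmax()          # weakest remaining candidate
        if pvals[worst] <= p_remove:
            break                       # everything left stays in the model
        kept.remove(worst)
    # Final model: forced adjusters plus the surviving candidates.
    return smf.logit(f"{outcome} ~ {' + '.join(FORCED + kept)}",
                     data=df).fit(disp=0)
```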
Study Population: Questionnaires concerning 415 cases were filled out. Across all cases, the average age of the employees was 47 years (SD 9.4), 180 (43%) were male, and the education level was low in 20%, medium in 60% and high in 20% (see Table 1).

Table 1. Description of study population

Age, M (SD) (N = 415):                        47.4 (9.4)
Gender, N (%) (N = 415):                      male 180 (43.4); female 235 (56.6)
Educational level, N (%) (N = 410):           low 84 (20.5); medium 246 (60.0); high 80 (19.5)
RTW, N (%) (N = 411):                         no (or only on a therapeutic basis) 211 (51.3); yes (partially or fully) 200 (48.7)
RTW efforts, N (%) (N = 415):                 sufficient 334 (80.5); insufficient 81 (19.5)
RTW at own employer, N (%) (N = 196):         yes 191 (97.4); no 5 (2.6)
RTW process agreed on by employee (N = 409):  yes 329 (80.4); no 80 (19.6)

Of the 415 cases, RTW-ES was deemed sufficient in 334 cases (80%) and insufficient in 81 cases (20%). Of the 415 cases, 203 sick-listed employees had returned to work partially prior to applying for disability benefits, whereas 211 sick-listed employees (51%) had not returned to work. At the moment of application for disability benefits, 191 of the employees who had returned to work had returned to their own employer (97%), whereas 5 had not (3%). The RTW process was agreed upon by the employee in 329 cases (80%), while 80 employees (20%) did not agree with the proceedings of the 2 years prior to the application for disability benefits.

Personal and External Characteristics: The characteristics of the variables included in the logistic analyses are presented in Table 2. Regarding the personal factors, the reason of absence was a physical health condition in 261 cases (63%), mixed health conditions in 67 cases (16%), and a mental health condition in 84 cases (20%). The average tenure was 13 years (SD 8.8). Of the sick-listed employees, 272 (66%) reported periods of complete disability, meaning that no activities to promote RTW could be undertaken during those periods. A total of 218 employees (53%) reported periods of work resumption, meaning that they had attempted to RTW during the 2 years before the application for disability benefits.

Table 2. Description of personal and external factors in study population

Personal factors, N (%)
  Reason of absence (N = 412):               physical 261 (63.3); both physical and mental 67 (16.3); mental 84 (20.4)
  Tenure, M (SD) (N = 358):                  12.84 (8.75) years
  Periods of complete disability (N = 412):  yes 272 (66.0); no 140 (34.0)
  Periods of work resumption (N = 412):      yes 218 (52.9); no 194 (47.1)
External factors, N (%)
  Sickness absence work related (N = 339):   yes, completely/partially 55 (16.2); no 284 (83.8)
  Relationship employer-employee (N = 380):  good/neutral 355 (93.4); bad 25 (6.6)
  Conflict (N = 410):                        yes 32 (7.8); no 378 (92.2)

For the external factors, the sickness absence was partially or completely work-related in 55 cases (16%). The relationship between the employer and employee was good or neutral in 355 cases (93%). There was evidence of conflict between employer and employee in 32 cases (8%). The correlation between employer-employee relationship and employer-employee conflict was 0.72 (P < 0.01); a small illustration of this collinearity screen follows below.
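As an illustration of the collinearity screen just mentioned: with 0/1 coding, the Pearson correlation between two binary indicators equals the phi coefficient, and the observed r = 0.72 was the basis for dropping 'conflict' from the multiple models. The data below are toy values, not the study data.

```python
# Phi coefficient between two binary indicators (illustrative toy data).
import pandas as pd

def phi(a: pd.Series, b: pd.Series) -> float:
    """Pearson correlation of two 0/1-coded series (the phi coefficient)."""
    return a.corr(b)

conflict = pd.Series([1, 0, 0, 1, 0, 0, 1, 0])   # hypothetical cases
poor_rel = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])
print(round(phi(conflict, poor_rel), 2))
```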
RTW-ES and RTW: Factors related to RTW-ES are shown in Table 3 (above). The multilevel regression analysis shows 5 potential determinants (P < 0.20) of RTW-ES, taking assessor into account: reason of absence, tenure, work-relatedness of the absence, employer-employee relationship, and employer-employee conflict. Using multiple multilevel logistic regression analysis, adjusted for age, gender and education and excluding conflict, one factor remained in the model: only the employer-employee relationship was significantly associated with a higher chance of RTW-ES (OR 5.47, 95% CI 2.00–14.98, P < 0.01). Factors related to RTW according to the regression analysis are presented in Table 4 (above). In the univariate regression analyses, 5 potential determinants were associated (P < 0.20) with RTW: education level, tenure, periods of complete disability, employer-employee relationship, and employer-employee conflict. Conflict was excluded from the model because of its high correlation with the employer-employee relationship, and the model was adjusted for age, gender and education. Using multiple backward conditional logistic regression analysis, three factors remained in the model: employer-employee relationship (OR 14.59, 95% CI 3.29–64.71, P < 0.01), level of education (OR 2.89, 95% CI 1.39–6.00, P < 0.01), and periods of complete disability (OR 1.92, 95% CI 1.18–3.15, P < 0.01)." ]
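For readers unfamiliar with how such effect sizes are derived, an odds ratio is exp(beta) for a logistic-regression coefficient beta, and its 95% CI is exp(beta ± 1.96·SE). The short worked example below checks this arithmetic against the employer-employee relationship estimate reported in Table 4.

```python
# Worked example: recover the CI (3.29, 64.71) from OR 14.59 and the SE
# implied by that interval.
import math

beta = math.log(14.59)                                 # coefficient implied by the OR
se = (math.log(64.71) - math.log(3.29)) / (2 * 1.96)   # SE implied by the CI width
low = math.exp(beta - 1.96 * se)
high = math.exp(beta + 1.96 * se)
print(f"OR = {math.exp(beta):.2f}, 95% CI ({low:.2f}, {high:.2f})")
# Reproduces (3.29, 64.71) up to rounding.
```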
[ "Background", "Methods", "Measures", "RTW-ES Assessment", "Research Questionnaire", "Statistical Analyses", "Comparability", "Results", "Study Population", "Personal and External Characteristics", "RTW-ES and RTW", "Discussion" ]
[ "In the past years, policymakers and researchers have focused on early return-to-work (RTW) after sickness absence and on the prevention of long-term sickness absence and permanent disability [1, 2]. Long-term absence and work disability are associated with health risks, social isolation and exclusion from the labor market [1, 2]. Although research on disability and RTW outcome has led to significant advances in understanding about these outcomes, limited studies focus on measuring aspects of the RTW process—the process that workers go through to reach, or attempt to reach, their goals [3]. Up to date, the focus is commonly placed on simply the act of returning-to-work or applying for a disability pension. However, RTW and work disability can also be described in terms of the type of actions undertaken by workers resuming employment [4].\nAn instrument to measure the undertaken actions in the RTW process and to evaluate if an agreed upon RTW goal has been reached is of interest of various stakeholders. In several countries the assessment of Return-To-Work Effort Sufficiency (RTW-ES) is part of the evaluation of the RTW process in relation to the application for disability benefits [5]. After the onset of sickness absence, the RTW process takes place. RTW efforts made in the RTW process include all activities undertaken to improve the work ability of the sick-listed employee in the period between onset of sickness absence and the application for disability benefits (see Fig. 1) [6]. During this RTW process, the RTW efforts are undertaken by employer, employee, and health professionals (e.g. general physician, specialist, and/or occupational physician). The assessment of RTW-ES explores the RTW process from the perspective of the efforts made by both employer and employee. This assessment takes place prior to the assessment of disability benefits, which in the Netherlands takes place after 2 years of sickness absence [6]. The assessment of RTW-ES and the assessment for disability benefits are performed by the Labor Expert (LE) and a Social Insurance Physician (SIP), respectively, of the Social Insurance Institute (SII).Fig. 1A description of the RTW process in relation to the assessment of Return-To-Work Effort Sufficiency and the assessment for disability benefits\n\nA description of the RTW process in relation to the assessment of Return-To-Work Effort Sufficiency and the assessment for disability benefits\nThe RTW efforts are sufficient if the RTW process is designed effectively, the chances of RTW are optimal, and RTW is achieved in accordance with health status and work ability of the sick-listed employee [5, 7]. The RTW-ES assessment is performed only when the Dutch employee has not fully returned to work after 2 years of sickness absence, but does have remaining work ability and is applying for disability benefits. Of employees applying for disability benefits, some apply for partial benefits, and some apply for complete benefits, based mainly on the level of RTW achieved during the RTW process, i.e. no RTW or partial RTW. Little is known about the differences between employees who apply for disability benefits after long-term sickness absence who have achieved partial RTW and those who have not achieved RTW.\nAssessing the sufficiency of the efforts made during the RTW process prior to the application for disability benefits could help prevent unnecessary applications for disability benefits. 
However, current RTW process outcomes focus mostly on time elapsed or costs [4], and not on the RTW process. Assessing the sufficiency of efforts undertaken during the RTW process might be an important addition to existing RTW outcomes as it could give insight in factors related to RTW in employees on long-term sickness absence who apply for disability benefits [5].\nBecause the RTW outcome is assessed after a longer period of sickness absence, the influence of the activities undertaken in the RTW process is evident. Knowing the strength and relevance of factors influencing the RTW process can provide vital information for the RTW outcome and the opportunities to achieve better RTW goals in the future. Ultimately, knowing the differences in factors associated to RTW-ES among employees who have not returned to work fully, but do have remaining work ability, might give insight in the differences between factors related to RTW outcome and the factors during the RTW process related to the assessment of RTW-ES. Moreover, the comparability of both outcomes (RTW outcome versus RTW-ES outcome) is unclear. Factors related to RTW among employees on long-term sickness absence and applying for disability benefits might differ from factors relevant to the RTW-ES outcome measured by the activities during the RTW process.\nThe purpose of this study was to investigate 1) the strength and relevance of factors related to RTW Effort Sufficiency (RTW-ES) and to RTW outcome (no RTW or partial RTW) among employees applying for disability benefits after 2 years of sickness absence, and 2) the comparability of the factors associated with RTW-ES and RTW.", " Measures RTW-ES Assessment The RTW-ES assessment focuses on whether enough activities have been undertaken by the employer and employee to realize (partial) RTW after 2 years of sickness absence. This assessment is based on a case report compiled by the employer. This case report includes a problem analysis, i.e. a mandatory description of the (dis)abilities of the employee by an occupational physician hired by the employer, the plan designed to achieve work resumption (an action plan), and the employee’s opinion regarding the RTW process. Records of all interventions, conversations and agreements between the parties involved in the RTW process were also included in the case report. The assessment is performed by LE’s from the Dutch SII, who are graduates in social sciences. During the assessment, the LE’s have the opportunity to consult an SIP, and can invite the employer and employee to provide more information. When, according to the LE insufficient efforts have been made, the application for disability benefits is delayed, and the employer and/or employee receive a financial sanction, depending on who has omitted to perform the necessary efforts to promote RTW. The assessment of RTW-ES is performed at the disgression of the LE’s, no evidence based protocol or instrument is available. Employees who have returned to work fully and are receiving the original level of income, or who are fully disabled are not assessed. Employees on sickness absence due to pregnancy, or on sickness absence while not under contract fall under a different policy and are not assessed as well.\nThe RTW-ES assessment focuses on whether enough activities have been undertaken by the employer and employee to realize (partial) RTW after 2 years of sickness absence. This assessment is based on a case report compiled by the employer. This case report includes a problem analysis, i.e. 
a mandatory description of the (dis)abilities of the employee by an occupational physician hired by the employer, the plan designed to achieve work resumption (an action plan), and the employee’s opinion regarding the RTW process. Records of all interventions, conversations and agreements between the parties involved in the RTW process were also included in the case report. The assessment is performed by LE’s from the Dutch SII, who are graduates in social sciences. During the assessment, the LE’s have the opportunity to consult an SIP, and can invite the employer and employee to provide more information. When, according to the LE insufficient efforts have been made, the application for disability benefits is delayed, and the employer and/or employee receive a financial sanction, depending on who has omitted to perform the necessary efforts to promote RTW. The assessment of RTW-ES is performed at the disgression of the LE’s, no evidence based protocol or instrument is available. Employees who have returned to work fully and are receiving the original level of income, or who are fully disabled are not assessed. Employees on sickness absence due to pregnancy, or on sickness absence while not under contract fall under a different policy and are not assessed as well.\n Research Questionnaire A closed-ended questionnaire was developed to gather information about the two outcomes, RTW and RTW-ES, and personal and external factors related to the case and the RTW process of the employee.\nThe strength and relevance of factors related to RTW-ES and to RTW outcome (no RTW or partial RTW) were investigated by means of a questionnaire. During the RTW-ES assessment, the LE was asked to fill out the questionnaire.\nThe content of the questionnaire consists of a list of possible predicting factors of RTW, which were inventorized by literature (e.g. [8–10]). Questions were included about personal factors such as age, gender, level of education (low, medium, high, including examples), and more work-related personal factors such as the reason of sickness absence (i.e. physical, mental or both) and tenure (number of years with current employer). Questions about whether there had been periods of work resumption (yes, no) or periods of complete disability (yes, no) were also included. For the external factors questions were asked about whether the sickness absence was work-related (yes, both work-related and private, no), whether there had been a conflict between the employer and employee during the RTW process (yes, no), and also whether the quality of the relationship between the employer and employee was deemed good/neutral or bad. A question about whether the employee had returned to work (yes/no) was included, as well as a question about the sufficiency of RTW efforts (sufficient/insufficient) according to the LE. The LE’s gathered the information necessary for filling out the questionnaire by examining the case report or interviewing the employer, employee or SIP.\nA closed-ended questionnaire was developed to gather information about the two outcomes, RTW and RTW-ES, and personal and external factors related to the case and the RTW process of the employee.\nThe strength and relevance of factors related to RTW-ES and to RTW outcome (no RTW or partial RTW) were investigated by means of a questionnaire. 
During the RTW-ES assessment, the LE was asked to fill out the questionnaire.\nThe content of the questionnaire consists of a list of possible predicting factors of RTW, which were inventorized by literature (e.g. [8–10]). Questions were included about personal factors such as age, gender, level of education (low, medium, high, including examples), and more work-related personal factors such as the reason of sickness absence (i.e. physical, mental or both) and tenure (number of years with current employer). Questions about whether there had been periods of work resumption (yes, no) or periods of complete disability (yes, no) were also included. For the external factors questions were asked about whether the sickness absence was work-related (yes, both work-related and private, no), whether there had been a conflict between the employer and employee during the RTW process (yes, no), and also whether the quality of the relationship between the employer and employee was deemed good/neutral or bad. A question about whether the employee had returned to work (yes/no) was included, as well as a question about the sufficiency of RTW efforts (sufficient/insufficient) according to the LE. The LE’s gathered the information necessary for filling out the questionnaire by examining the case report or interviewing the employer, employee or SIP.\n RTW-ES Assessment The RTW-ES assessment focuses on whether enough activities have been undertaken by the employer and employee to realize (partial) RTW after 2 years of sickness absence. This assessment is based on a case report compiled by the employer. This case report includes a problem analysis, i.e. a mandatory description of the (dis)abilities of the employee by an occupational physician hired by the employer, the plan designed to achieve work resumption (an action plan), and the employee’s opinion regarding the RTW process. Records of all interventions, conversations and agreements between the parties involved in the RTW process were also included in the case report. The assessment is performed by LE’s from the Dutch SII, who are graduates in social sciences. During the assessment, the LE’s have the opportunity to consult an SIP, and can invite the employer and employee to provide more information. When, according to the LE insufficient efforts have been made, the application for disability benefits is delayed, and the employer and/or employee receive a financial sanction, depending on who has omitted to perform the necessary efforts to promote RTW. The assessment of RTW-ES is performed at the disgression of the LE’s, no evidence based protocol or instrument is available. Employees who have returned to work fully and are receiving the original level of income, or who are fully disabled are not assessed. Employees on sickness absence due to pregnancy, or on sickness absence while not under contract fall under a different policy and are not assessed as well.\nThe RTW-ES assessment focuses on whether enough activities have been undertaken by the employer and employee to realize (partial) RTW after 2 years of sickness absence. This assessment is based on a case report compiled by the employer. This case report includes a problem analysis, i.e. a mandatory description of the (dis)abilities of the employee by an occupational physician hired by the employer, the plan designed to achieve work resumption (an action plan), and the employee’s opinion regarding the RTW process. 
Records of all interventions, conversations and agreements between the parties involved in the RTW process were also included in the case report. The assessment is performed by LE’s from the Dutch SII, who are graduates in social sciences. During the assessment, the LE’s have the opportunity to consult an SIP, and can invite the employer and employee to provide more information. When, according to the LE insufficient efforts have been made, the application for disability benefits is delayed, and the employer and/or employee receive a financial sanction, depending on who has omitted to perform the necessary efforts to promote RTW. The assessment of RTW-ES is performed at the disgression of the LE’s, no evidence based protocol or instrument is available. Employees who have returned to work fully and are receiving the original level of income, or who are fully disabled are not assessed. Employees on sickness absence due to pregnancy, or on sickness absence while not under contract fall under a different policy and are not assessed as well.\n Research Questionnaire A closed-ended questionnaire was developed to gather information about the two outcomes, RTW and RTW-ES, and personal and external factors related to the case and the RTW process of the employee.\nThe strength and relevance of factors related to RTW-ES and to RTW outcome (no RTW or partial RTW) were investigated by means of a questionnaire. During the RTW-ES assessment, the LE was asked to fill out the questionnaire.\nThe content of the questionnaire consists of a list of possible predicting factors of RTW, which were inventorized by literature (e.g. [8–10]). Questions were included about personal factors such as age, gender, level of education (low, medium, high, including examples), and more work-related personal factors such as the reason of sickness absence (i.e. physical, mental or both) and tenure (number of years with current employer). Questions about whether there had been periods of work resumption (yes, no) or periods of complete disability (yes, no) were also included. For the external factors questions were asked about whether the sickness absence was work-related (yes, both work-related and private, no), whether there had been a conflict between the employer and employee during the RTW process (yes, no), and also whether the quality of the relationship between the employer and employee was deemed good/neutral or bad. A question about whether the employee had returned to work (yes/no) was included, as well as a question about the sufficiency of RTW efforts (sufficient/insufficient) according to the LE. The LE’s gathered the information necessary for filling out the questionnaire by examining the case report or interviewing the employer, employee or SIP.\nA closed-ended questionnaire was developed to gather information about the two outcomes, RTW and RTW-ES, and personal and external factors related to the case and the RTW process of the employee.\nThe strength and relevance of factors related to RTW-ES and to RTW outcome (no RTW or partial RTW) were investigated by means of a questionnaire. During the RTW-ES assessment, the LE was asked to fill out the questionnaire.\nThe content of the questionnaire consists of a list of possible predicting factors of RTW, which were inventorized by literature (e.g. [8–10]). Questions were included about personal factors such as age, gender, level of education (low, medium, high, including examples), and more work-related personal factors such as the reason of sickness absence (i.e. 
physical, mental or both) and tenure (number of years with current employer). Questions about whether there had been periods of work resumption (yes, no) or periods of complete disability (yes, no) were also included. For the external factors questions were asked about whether the sickness absence was work-related (yes, both work-related and private, no), whether there had been a conflict between the employer and employee during the RTW process (yes, no), and also whether the quality of the relationship between the employer and employee was deemed good/neutral or bad. A question about whether the employee had returned to work (yes/no) was included, as well as a question about the sufficiency of RTW efforts (sufficient/insufficient) according to the LE. The LE’s gathered the information necessary for filling out the questionnaire by examining the case report or interviewing the employer, employee or SIP.\n Statistical Analyses Logistic regression analysis was used to assess the independent contribution of factors to the RTW outcome. The method used was backward conditional, because of the explorative nature of the analyses.\nSimilar to the analyses of RTW, multilevel regression analysis was used to analyze the relationship between the factors and the RTW process outcome in terms of sufficiency of RTW efforts, taking the assessing professional into account. In both multiple analyses, variables were entered in the model when P < 0.20 based on the univariate relationships, and were adjusted for age, gender and education level. Data analysis was performed by using SPSS 16.0 for MS Windows.\n Comparability The results of the statistical analyses were used to assess the comparability of the factors associated with RTW-ES and RTW.\nThe results of the statistical analyses were used to assess the comparability of the factors associated with RTW-ES and RTW.\nLogistic regression analysis was used to assess the independent contribution of factors to the RTW outcome. The method used was backward conditional, because of the explorative nature of the analyses.\nSimilar to the analyses of RTW, multilevel regression analysis was used to analyze the relationship between the factors and the RTW process outcome in terms of sufficiency of RTW efforts, taking the assessing professional into account. In both multiple analyses, variables were entered in the model when P < 0.20 based on the univariate relationships, and were adjusted for age, gender and education level. Data analysis was performed by using SPSS 16.0 for MS Windows.\n Comparability The results of the statistical analyses were used to assess the comparability of the factors associated with RTW-ES and RTW.\nThe results of the statistical analyses were used to assess the comparability of the factors associated with RTW-ES and RTW.", " RTW-ES Assessment The RTW-ES assessment focuses on whether enough activities have been undertaken by the employer and employee to realize (partial) RTW after 2 years of sickness absence. This assessment is based on a case report compiled by the employer. This case report includes a problem analysis, i.e. a mandatory description of the (dis)abilities of the employee by an occupational physician hired by the employer, the plan designed to achieve work resumption (an action plan), and the employee’s opinion regarding the RTW process. Records of all interventions, conversations and agreements between the parties involved in the RTW process were also included in the case report. 
The assessment is performed by LE’s from the Dutch SII, who are graduates in social sciences. During the assessment, the LE’s have the opportunity to consult an SIP, and can invite the employer and employee to provide more information. When, according to the LE insufficient efforts have been made, the application for disability benefits is delayed, and the employer and/or employee receive a financial sanction, depending on who has omitted to perform the necessary efforts to promote RTW. The assessment of RTW-ES is performed at the disgression of the LE’s, no evidence based protocol or instrument is available. Employees who have returned to work fully and are receiving the original level of income, or who are fully disabled are not assessed. Employees on sickness absence due to pregnancy, or on sickness absence while not under contract fall under a different policy and are not assessed as well.\nThe RTW-ES assessment focuses on whether enough activities have been undertaken by the employer and employee to realize (partial) RTW after 2 years of sickness absence. This assessment is based on a case report compiled by the employer. This case report includes a problem analysis, i.e. a mandatory description of the (dis)abilities of the employee by an occupational physician hired by the employer, the plan designed to achieve work resumption (an action plan), and the employee’s opinion regarding the RTW process. Records of all interventions, conversations and agreements between the parties involved in the RTW process were also included in the case report. The assessment is performed by LE’s from the Dutch SII, who are graduates in social sciences. During the assessment, the LE’s have the opportunity to consult an SIP, and can invite the employer and employee to provide more information. When, according to the LE insufficient efforts have been made, the application for disability benefits is delayed, and the employer and/or employee receive a financial sanction, depending on who has omitted to perform the necessary efforts to promote RTW. The assessment of RTW-ES is performed at the disgression of the LE’s, no evidence based protocol or instrument is available. Employees who have returned to work fully and are receiving the original level of income, or who are fully disabled are not assessed. Employees on sickness absence due to pregnancy, or on sickness absence while not under contract fall under a different policy and are not assessed as well.\n Research Questionnaire A closed-ended questionnaire was developed to gather information about the two outcomes, RTW and RTW-ES, and personal and external factors related to the case and the RTW process of the employee.\nThe strength and relevance of factors related to RTW-ES and to RTW outcome (no RTW or partial RTW) were investigated by means of a questionnaire. During the RTW-ES assessment, the LE was asked to fill out the questionnaire.\nThe content of the questionnaire consists of a list of possible predicting factors of RTW, which were inventorized by literature (e.g. [8–10]). Questions were included about personal factors such as age, gender, level of education (low, medium, high, including examples), and more work-related personal factors such as the reason of sickness absence (i.e. physical, mental or both) and tenure (number of years with current employer). Questions about whether there had been periods of work resumption (yes, no) or periods of complete disability (yes, no) were also included. 
For the external factors questions were asked about whether the sickness absence was work-related (yes, both work-related and private, no), whether there had been a conflict between the employer and employee during the RTW process (yes, no), and also whether the quality of the relationship between the employer and employee was deemed good/neutral or bad. A question about whether the employee had returned to work (yes/no) was included, as well as a question about the sufficiency of RTW efforts (sufficient/insufficient) according to the LE. The LE’s gathered the information necessary for filling out the questionnaire by examining the case report or interviewing the employer, employee or SIP.\nA closed-ended questionnaire was developed to gather information about the two outcomes, RTW and RTW-ES, and personal and external factors related to the case and the RTW process of the employee.\nThe strength and relevance of factors related to RTW-ES and to RTW outcome (no RTW or partial RTW) were investigated by means of a questionnaire. During the RTW-ES assessment, the LE was asked to fill out the questionnaire.\nThe content of the questionnaire consists of a list of possible predicting factors of RTW, which were inventorized by literature (e.g. [8–10]). Questions were included about personal factors such as age, gender, level of education (low, medium, high, including examples), and more work-related personal factors such as the reason of sickness absence (i.e. physical, mental or both) and tenure (number of years with current employer). Questions about whether there had been periods of work resumption (yes, no) or periods of complete disability (yes, no) were also included. For the external factors questions were asked about whether the sickness absence was work-related (yes, both work-related and private, no), whether there had been a conflict between the employer and employee during the RTW process (yes, no), and also whether the quality of the relationship between the employer and employee was deemed good/neutral or bad. A question about whether the employee had returned to work (yes/no) was included, as well as a question about the sufficiency of RTW efforts (sufficient/insufficient) according to the LE. The LE’s gathered the information necessary for filling out the questionnaire by examining the case report or interviewing the employer, employee or SIP.", "The RTW-ES assessment focuses on whether enough activities have been undertaken by the employer and employee to realize (partial) RTW after 2 years of sickness absence. This assessment is based on a case report compiled by the employer. This case report includes a problem analysis, i.e. a mandatory description of the (dis)abilities of the employee by an occupational physician hired by the employer, the plan designed to achieve work resumption (an action plan), and the employee’s opinion regarding the RTW process. Records of all interventions, conversations and agreements between the parties involved in the RTW process were also included in the case report. The assessment is performed by LE’s from the Dutch SII, who are graduates in social sciences. During the assessment, the LE’s have the opportunity to consult an SIP, and can invite the employer and employee to provide more information. When, according to the LE insufficient efforts have been made, the application for disability benefits is delayed, and the employer and/or employee receive a financial sanction, depending on who has omitted to perform the necessary efforts to promote RTW. 
The assessment of RTW-ES is performed at the disgression of the LE’s, no evidence based protocol or instrument is available. Employees who have returned to work fully and are receiving the original level of income, or who are fully disabled are not assessed. Employees on sickness absence due to pregnancy, or on sickness absence while not under contract fall under a different policy and are not assessed as well.", "A closed-ended questionnaire was developed to gather information about the two outcomes, RTW and RTW-ES, and personal and external factors related to the case and the RTW process of the employee.\nThe strength and relevance of factors related to RTW-ES and to RTW outcome (no RTW or partial RTW) were investigated by means of a questionnaire. During the RTW-ES assessment, the LE was asked to fill out the questionnaire.\nThe content of the questionnaire consists of a list of possible predicting factors of RTW, which were inventorized by literature (e.g. [8–10]). Questions were included about personal factors such as age, gender, level of education (low, medium, high, including examples), and more work-related personal factors such as the reason of sickness absence (i.e. physical, mental or both) and tenure (number of years with current employer). Questions about whether there had been periods of work resumption (yes, no) or periods of complete disability (yes, no) were also included. For the external factors questions were asked about whether the sickness absence was work-related (yes, both work-related and private, no), whether there had been a conflict between the employer and employee during the RTW process (yes, no), and also whether the quality of the relationship between the employer and employee was deemed good/neutral or bad. A question about whether the employee had returned to work (yes/no) was included, as well as a question about the sufficiency of RTW efforts (sufficient/insufficient) according to the LE. The LE’s gathered the information necessary for filling out the questionnaire by examining the case report or interviewing the employer, employee or SIP.", "Logistic regression analysis was used to assess the independent contribution of factors to the RTW outcome. The method used was backward conditional, because of the explorative nature of the analyses.\nSimilar to the analyses of RTW, multilevel regression analysis was used to analyze the relationship between the factors and the RTW process outcome in terms of sufficiency of RTW efforts, taking the assessing professional into account. In both multiple analyses, variables were entered in the model when P < 0.20 based on the univariate relationships, and were adjusted for age, gender and education level. 
Data analysis was performed by using SPSS 16.0 for MS Windows.\n Comparability The results of the statistical analyses were used to assess the comparability of the factors associated with RTW-ES and RTW.\nThe results of the statistical analyses were used to assess the comparability of the factors associated with RTW-ES and RTW.", "The results of the statistical analyses were used to assess the comparability of the factors associated with RTW-ES and RTW.", " Study Population Questionnaires concerning 415 cases were filled out.\nOf all cases, the average age of the employees was 47 years (SD 9.4), 180 (43%) were male, and education level was low in 20%, medium in 60% and high in 20% (see Table 1).Table 1Description of study populationAge M(SD) (N = 415)47.4 (9.4)Gender N(%) (N = 415) Male180 (43.4) Female235 (56.6)Educational level N(%) (N = 410) Low84 (20.5) Medium246 (60.0) High80 (19.5)RTW N(%) (N = 411) No (or only on a therapeutic basis)211 (51.3) Yes (partially or fully)200 (48.7)RTW efforts N (%) (N = 415) Sufficient334 (80.5) Insufficient81 (19.5)RTW at own employer (N = 196) Yes191 (97.4) No5 (2.6)RTW process agreed on by employee (N = 409) Yes329 (80.4) No80 (19.6)\n\nDescription of study population\nOf the 415 cases, RTW-ES was deemed sufficient in 334 cases (80%) and insufficient in 81 cases (20%). Of the 415 cases, 203 sick-listed employees had returned to work partially prior to applying for disability benefits, whereas 211 sick-listed employees (51%) had not returned to work.\nAt the moment of application for disability benefits, 191 employees who had returned to work had returned to their own employer (97%), whereas 5 had not (3%). The RTW process was agreed upon by the employee in 329 cases (80%), while 80 employees (20%) did not agree with the proceedings of the 2 years prior to the application for disability benefits.\nQuestionnaires concerning 415 cases were filled out.\nOf all cases, the average age of the employees was 47 years (SD 9.4), 180 (43%) were male, and education level was low in 20%, medium in 60% and high in 20% (see Table 1).Table 1Description of study populationAge M(SD) (N = 415)47.4 (9.4)Gender N(%) (N = 415) Male180 (43.4) Female235 (56.6)Educational level N(%) (N = 410) Low84 (20.5) Medium246 (60.0) High80 (19.5)RTW N(%) (N = 411) No (or only on a therapeutic basis)211 (51.3) Yes (partially or fully)200 (48.7)RTW efforts N (%) (N = 415) Sufficient334 (80.5) Insufficient81 (19.5)RTW at own employer (N = 196) Yes191 (97.4) No5 (2.6)RTW process agreed on by employee (N = 409) Yes329 (80.4) No80 (19.6)\n\nDescription of study population\nOf the 415 cases, RTW-ES was deemed sufficient in 334 cases (80%) and insufficient in 81 cases (20%). Of the 415 cases, 203 sick-listed employees had returned to work partially prior to applying for disability benefits, whereas 211 sick-listed employees (51%) had not returned to work.\nAt the moment of application for disability benefits, 191 employees who had returned to work had returned to their own employer (97%), whereas 5 had not (3%). The RTW process was agreed upon by the employee in 329 cases (80%), while 80 employees (20%) did not agree with the proceedings of the 2 years prior to the application for disability benefits.\n Personal and External Characteristics The characteristics of the variables included in the logistic analyses are presented in Table 2. 
Regarding the personal factors, the reason for absence was a physical health condition in 261 cases (63%), mixed health conditions in 67 cases (16%), and a mental health condition in 84 cases (20%). The average tenure was 13 years (SD 8.8). Of the sick-listed employees, 272 (66%) reported periods of complete disability, meaning that no activities to promote RTW could be undertaken during this period. 218 employees (53%) reported periods of work resumption, meaning that they had attempted to RTW during the 2 years before the application for disability benefits.

Table 2 Description of personal and external factors in study population
Personal factors, N (%)
Reason of absence (N = 412): physical 261 (63.3); both physical and mental 67 (16.3); mental 84 (20.4)
Tenure, M (SD) (N = 358): 12.84 (8.75)
Periods of complete disability (N = 412): yes 272 (66.0); no 140 (34.0)
Periods of work resumption (N = 412): yes 218 (52.9); no 194 (47.1)
External factors, N (%)
Sickness absence work related (N = 339): yes, completely/partially 55 (16.2); no 284 (83.8)
Relationship employer-employee (N = 380): good/neutral 355 (93.4); bad 25 (6.6)
Conflict (N = 410): yes 32 (7.8); no 378 (92.2)

For the external factors, the sickness absence was partially or completely work-related in 55 cases (16%). The relationship between the employer and employee was good or neutral in 355 cases (93%). There was evidence of conflict between employer and employee in 32 cases (8%). The correlation between employer-employee relationship and employer-employee conflict was 0.72 (P < 0.01).
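Because both conflict and relationship quality are dichotomous, the reported correlation of 0.72 is a phi coefficient, which for 0/1 variables equals the ordinary Pearson correlation. A minimal sketch of this collinearity check, with hypothetical column names and an illustrative cut-off (the paper states no explicit threshold):

import numpy as np

def phi(a, b):
    """Phi coefficient of two binary 0/1 arrays (Pearson r on binary data)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.corrcoef(a, b)[0, 1]

r = phi(df["relationship_bad"], df["conflict"])   # the study reports r = 0.72
if abs(r) > 0.7:   # illustrative cut-off only
    print(f"r = {r:.2f}: collinear; enter only one of the two in the model")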
RTW-ES and RTW
Factors related to RTW-ES are shown in Table 3. The multilevel regression analysis shows 5 potential determinants (P < 0.20) of RTW-ES when taking the assessor into account: reason of absence, tenure, work-relatedness of the absence, employer-employee relationship, and employer-employee conflict.

Table 3 Factors related to RTW-ES: multilevel logistic regression analyses, taking assessor into account
Variable (reference group) | Crude OR (95% CI) | P | Adjusted ORª (95% CI) | P
Personal factors
Age (years) | 1.00 (0.97–1.03) | 0.93 | 0.99 (0.95–1.03) | 0.65
Gender (female) | 1.07 (0.68–1.71) | 0.76 | 0.97 (0.51–1.85) | 0.92
Education (low) | 1.02 (0.43–2.41) | 0.97 | 1.08 (0.35–3.32) | 0.89
Reason of absence (mental)¹ | 1.97 (1.14–3.42) | 0.02 | 1.22 (0.57–2.61) | 0.60
Tenure (years) | 1.03 (0.99–1.06) | 0.14 | 1.03 (0.98–1.01) | 0.23
Periods of complete disability (no) | 1.19 (0.75–1.89) | 0.45 | – | –
Periods of work resumption (no) | 1.22 (0.76–1.95) | 0.42 | – | –
External factors
Sickness absence work related (yes)² | 2.73 (1.33–5.62) | 0.01 | 1.44 (0.64–3.22) | 0.38
Relationship employer/employee (poor) | 5.91 (2.81–12.43) | <0.01 | 5.47 (2.00–14.98) | <0.01
Conflict (yes)³ | 4.25 (2.14–8.43) | <0.01 | – | –
An OR > 1 indicates a higher chance of RTW-ES, compared to the reference group
ª Adjusted for age, gender and education; QIC = 265.92, N = 269
¹ Physical, both physical and mental, mental
² No, partial/yes
³ r = 0.72 with variable 'relationship employer/employee'; not included in multiple regression

Using multiple multilevel logistic regression analysis, adjusting for age, gender and education and excluding conflict, one factor remained in the model: only the employer-employee relationship was significantly related to a higher chance of RTW-ES (OR 5.47, 95% CI 2.00–14.98, P < 0.01).
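The QIC reported under Table 3 is the model-selection criterion used with generalized estimating equations (GEE), so the "multilevel" analysis is consistent with a population-averaged logistic model with cases clustered within assessors. A minimal sketch under that assumption, with hypothetical column names:

import statsmodels.api as sm
import statsmodels.formula.api as smf

# Logistic GEE for RTW-ES with cases clustered within the assessing labor
# expert, using an exchangeable working correlation structure.
model = smf.gee(
    "rtw_es ~ age + gender + education + relationship_bad",  # hypothetical columns
    groups="assessor",             # hypothetical cluster identifier
    data=df,                       # one row per assessed case
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
res = model.fit()
print(res.summary())
print("QIC, QICu:", res.qic())     # quasi-likelihood criteria, cf. Table 3 footnote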
Factors related to RTW according to the regression analysis are presented in Table 4. In the univariate regression analyses, 5 potential determinants were associated with RTW (P < 0.20): education level, tenure, periods of complete disability, relationship between employer and employee, and employer-employee conflict. Conflict was excluded from the model because of its high correlation with employer-employee relationship, and the model was adjusted for age, gender and education. Using multiple backward conditional logistic regression analysis, three factors remained in the model: employer-employee relationship (OR 14.59, 95% CI 3.29–64.71, P < 0.01), level of education (OR 2.89, 95% CI 1.39–6.00, P < 0.01), and periods of complete disability (OR 1.92, 95% CI 1.18–3.15, P < 0.01).

Table 4 Factors related to RTW: logistic regression analyses
Variable (reference group) | Crude OR (95% CI) | P | Adjusted ORª (95% CI) | P
Personal factors
Age (years) | 1.00 (0.98–1.02) | 0.97 | 1.00 (0.97–1.02) | 0.68
Gender (male) | 1.01 (0.68–1.49) | 0.97 | 0.98 (0.62–1.53) | 0.92
Education (low) | 1.99 (1.07–3.70) | 0.03 | 2.89 (1.39–6.00) | <0.01
Reason of absence (mental)¹ | 1.08 (0.57–2.07) | 0.81 | – | –
Tenure (years) | 1.02 (0.99–1.04) | 0.18 | 1.01 (0.98–1.04) | 0.47
Periods of complete disability (yes) | 1.74 (1.15–2.63) | 0.01 | 1.92 (1.18–3.15) | <0.01
Periods of work resumption (yes) | 1.16 (0.79–1.71) | 0.45 | – | –
External factors
Sickness absence work related (yes)² | 1.35 (0.75–2.41) | 0.32 | – | –
Relationship employer/employee (poor) | 12.95 (3.01–55.74) | <0.01 | 14.59 (3.29–64.71) | <0.01
Conflict (yes)³ | 5.57 (2.10–14.78) | <0.01 | – | –
An OR > 1 indicates a higher chance of RTW, compared to the reference group
ª Adjusted for age, gender and education; R² = 0.135, N = 321
¹ Physical, both physical and mental, mental
² No, partial/yes
³ r = 0.72 with variable 'relationship employer/employee'; not included in multiple regression analysis
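The odds ratios and 95% confidence intervals in Tables 3 and 4 are simply the exponentiated coefficients and CI bounds of the fitted models. A small helper makes this explicit, assuming res is any fitted statsmodels logistic result (Logit or GEE) such as the sketches above produce:

import numpy as np
import pandas as pd

def odds_ratio_table(res):
    """Exponentiate coefficients and CI bounds of a fitted logistic model."""
    ci = res.conf_int()            # DataFrame with lower/upper bound columns
    out = pd.DataFrame({
        "OR": np.exp(res.params),
        "CI low": np.exp(ci[0]),
        "CI high": np.exp(ci[1]),
        "P": res.pvalues,
    })
    return out.drop(index="Intercept", errors="ignore")

print(odds_ratio_table(res).round(2))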
Discussion
In this study, the only factor related to RTW-ES is a good relationship between employer and employee. The factors related to the RTW outcome (no RTW or partial RTW) after 2 years of sickness absence were high education, no previous periods of complete disability, and a good relationship between employer and employee.

Included in this study were employees applying for disability benefits after 2 years of sickness absence who were not permanently or fully disabled. Sickness absence duration should be taken into account, because the phase-specificity of sickness absence is different after 2 years of sickness absence, and other factors are then related to the RTW outcome [11]. Furthermore, in this study a comparison was made between employees who had achieved some RTW and those who had not achieved RTW. Previous studies have focused on measuring RTW earlier than after 2 years, and have distinguished between RTW and no RTW regardless of work ability or application for disability benefits. This could explain differences between the factors related to RTW found in previous research and the results of this study. The results on factors related to RTW-ES cannot be compared to previous studies because of the lack of research on this subject.

The relation found in this study between education and RTW is congruent with the existing literature. A lower education prolongs the time to RTW [12, 13]. A poor relationship between employer and employee has been found to have a negative effect on RTW [14]. Moreover, supervisor support increases the chance of RTW [6, 15, 16]. Previous research has found that age, gender and tenure are related to RTW. A higher age (>50 years) prolongs the time to RTW [14, 17]. Female gender decreases the chance of RTW [17], but this evidence is not conclusive [14].
A shorter tenure prolongs the time to RTW [6, 14], and a tenure longer than one [18] or 2 years [19] increases the chance of RTW. In our sample, age, gender and tenure were not found to be related to RTW. Furthermore, in this study no relationship was found between RTW and mental health conditions as the reason for sickness absence, or the work-relatedness of the sickness absence. This is also unlike the results of previous studies, in which mental health conditions have been found to reduce the chance of RTW [20, 21]. Also, if the sickness absence is work-related, for example due to a work-related accident, this reduces the chance of RTW [22]. Furthermore, in this study, periods of work resumption were not associated with RTW.

As far as the comparability of RTW-ES and the RTW outcome is concerned, only the relationship between employer and employee is a shared relevant factor. Educational level and previous periods of complete disability are predictors of RTW only, not of RTW-ES. This suggests that the two outcomes have limited comparability, but also that the relationship between employer and employee can be considered a very relevant factor in cases of prolonged sickness absence.

The strength of this study lies in its subject: this study is the first to investigate determinants of RTW-ES and to compare the findings to the RTW outcome (no RTW or partial RTW). Also, this study focuses on the comparison of no RTW to partial RTW after 2 years of sickness absence. The RTW efforts are mostly of interest when the employee still has work ability but has not yet fully returned to the original work after a prolonged period of sickness absence.

A limitation of this study is the lack of knowledge on the level of disability of the employee. It was investigated whether there had been drastic changes, such as a period of complete disability or periods of work resumption, but the RTW outcome could not be compared to the level of disability according to the physician. However, we do know that the physicians of the OHS and the SII ensure that the assessment of RTW-ES after 2 years includes neither employees who are fully disabled nor employees who have no disability at all.

Another limitation lies in the measurement of the determinants. Questionnaires were developed in which a certain set of variables was investigated; a different selection could have led to different results. However, the variables were selected by means of the literature and expert meetings, and we feel we have investigated several of the most relevant factors. The questionnaire and the sources used to complete it could be a source of response bias. As far as the outcome is concerned, the assessment of RTW-ES is performed by Dutch SII LEs, who have had similar training [5]. To avoid assessor bias, the assessor was taken into account when analyzing RTW-ES. However, a different group (e.g. from another country) might perform the assessment in its own way, thereby including other factors. On the other hand, this study provides a great opportunity to compare these results to a different situation, as this is the first study to investigate RTW-ES and to compare it to RTW after 2 years.

The relevance of this study lies in the use of RTW-ES as an RTW outcome. RTW-ES is relevant to the process, especially when investigating RTW after a longer period of time in cases where the employee is expected to be able to RTW, but not fully or in the original setting.
According to the findings of this study, the relationship between employer and employee is very important to both RTW and RTW-ES. This would imply a shift from a more physical approach, or a focus on personal factors, to a work-related external factor such as the relationship between employer and employee. The importance of this factor is considerable, because effective job accommodation for employees with a chronic disability is a process in which external (i.e. social) factors are essential [23]. During the RTW process, these factors are not only of great importance, but can also be influenced, in contrast to factors such as level of education and periods of complete disability. Issues regarding the relationship can be detected by external parties such as the physician or vocational rehabilitation expert, and can be improved by mediation or counseling.

This is the first study performed to investigate the factors related to RTW-ES and to compare these to the factors related to RTW. In future research this study could be replicated while changing a study characteristic to determine its influence on the study outcome; for example, a different group of professionals (e.g. from another country) or different factors could be included. Also, it would be interesting to investigate RTW and RTW-ES by comparing them to full RTW. However, this could cause difficulties in research design, as full RTW in the previous work already implies RTW-ES; moreover, full RTW is usually achieved earlier. An alternative would be to research determinants of RTW at both 6 months and 2 years, so as to be able to compare the determinants of full or partial RTW.

In conclusion, this study showed that RTW-ES is largely determined by the relationship between employer and employee. Factors related to RTW after 2 years of sickness absence are educational level, periods of complete disability and, again, the relationship between employer and employee. It can be concluded that RTW-ES and RTW are different outcomes, but that the relationship between employer and employee is relevant to both. Considering the importance of the assessment of RTW-ES after a prolonged period of sickness absence among employees who are not fully disabled, this knowledge is essential for the assessment of RTW-ES and for the RTW process itself.
[ null, "materials|methods", null, null, null, null, null, "results", null, null, null, "discussion" ]
[ "Return-to-work", "Vocational Rehabilitation", "Disability Insurance", "Outcome measures", "Employer effort" ]
Background: In the past years, policymakers and researchers have focused on early return-to-work (RTW) after sickness absence and on the prevention of long-term sickness absence and permanent disability [1, 2]. Long-term absence and work disability are associated with health risks, social isolation and exclusion from the labor market [1, 2]. Although research on disability and RTW outcome has led to significant advances in the understanding of these outcomes, few studies focus on measuring aspects of the RTW process, i.e. the process that workers go through to reach, or attempt to reach, their goals [3]. To date, the focus has commonly been placed simply on the act of returning to work or applying for a disability pension. However, RTW and work disability can also be described in terms of the type of actions undertaken by workers resuming employment [4]. An instrument to measure the actions undertaken in the RTW process and to evaluate whether an agreed-upon RTW goal has been reached is of interest to various stakeholders. In several countries the assessment of Return-To-Work Effort Sufficiency (RTW-ES) is part of the evaluation of the RTW process in relation to the application for disability benefits [5]. After the onset of sickness absence, the RTW process takes place. RTW efforts made in the RTW process include all activities undertaken to improve the work ability of the sick-listed employee in the period between the onset of sickness absence and the application for disability benefits (see Fig. 1) [6]. During this RTW process, the RTW efforts are undertaken by the employer, the employee, and health professionals (e.g. general physician, specialist, and/or occupational physician). The assessment of RTW-ES explores the RTW process from the perspective of the efforts made by both employer and employee. This assessment takes place prior to the assessment of disability benefits, which in the Netherlands occurs after 2 years of sickness absence [6]. The assessment of RTW-ES and the assessment for disability benefits are performed by the Labor Expert (LE) and a Social Insurance Physician (SIP), respectively, of the Social Insurance Institute (SII).

Fig. 1 A description of the RTW process in relation to the assessment of Return-To-Work Effort Sufficiency and the assessment for disability benefits

The RTW efforts are sufficient if the RTW process is designed effectively, the chances of RTW are optimal, and RTW is achieved in accordance with the health status and work ability of the sick-listed employee [5, 7]. The RTW-ES assessment is performed only when the Dutch employee has not fully returned to work after 2 years of sickness absence, but does have remaining work ability and is applying for disability benefits. Of employees applying for disability benefits, some apply for partial benefits and some for complete benefits, based mainly on the level of RTW achieved during the RTW process, i.e. no RTW or partial RTW. Little is known about the differences between employees who apply for disability benefits after long-term sickness absence who have achieved partial RTW and those who have not achieved RTW. Assessing the sufficiency of the efforts made during the RTW process prior to the application for disability benefits could help prevent unnecessary applications for disability benefits.
However, current RTW process outcomes focus mostly on time elapsed or costs [4], and not on the RTW process itself. Assessing the sufficiency of the efforts undertaken during the RTW process might be an important addition to existing RTW outcomes, as it could give insight into factors related to RTW in employees on long-term sickness absence who apply for disability benefits [5]. Because the RTW outcome is assessed after a longer period of sickness absence, the influence of the activities undertaken in the RTW process is evident. Knowing the strength and relevance of factors influencing the RTW process can provide vital information for the RTW outcome and for the opportunities to achieve better RTW goals in the future. Ultimately, knowing the differences in factors associated with RTW-ES among employees who have not returned to work fully, but do have remaining work ability, might give insight into the differences between factors related to the RTW outcome and the factors in the RTW process related to the assessment of RTW-ES. Moreover, the comparability of both outcomes (RTW outcome versus RTW-ES outcome) is unclear. Factors related to RTW among employees on long-term sickness absence applying for disability benefits might differ from factors relevant to the RTW-ES outcome as measured by the activities during the RTW process. The purpose of this study was to investigate (1) the strength and relevance of factors related to RTW Effort Sufficiency (RTW-ES) and to RTW outcome (no RTW or partial RTW) among employees applying for disability benefits after 2 years of sickness absence, and (2) the comparability of the factors associated with RTW-ES and RTW.

Methods:

Measures

RTW-ES Assessment
The RTW-ES assessment focuses on whether enough activities have been undertaken by the employer and employee to realize (partial) RTW after 2 years of sickness absence. The assessment is based on a case report compiled by the employer. This case report includes a problem analysis (i.e. a mandatory description of the (dis)abilities of the employee by an occupational physician hired by the employer), the plan designed to achieve work resumption (an action plan), and the employee's opinion regarding the RTW process. Records of all interventions, conversations and agreements between the parties involved in the RTW process are also included in the case report. The assessment is performed by LEs from the Dutch SII, who are graduates in social sciences. During the assessment, the LEs have the opportunity to consult an SIP, and can invite the employer and employee to provide more information. When, according to the LE, insufficient efforts have been made, the application for disability benefits is delayed, and the employer and/or employee receives a financial sanction, depending on who has omitted to perform the necessary efforts to promote RTW. The assessment of RTW-ES is performed at the discretion of the LEs; no evidence-based protocol or instrument is available. Employees who have returned to work fully and are receiving the original level of income, or who are fully disabled, are not assessed. Employees on sickness absence due to pregnancy, or on sickness absence while not under contract, fall under a different policy and are likewise not assessed.
Research Questionnaire
A closed-ended questionnaire was developed to gather information about the two outcomes, RTW and RTW-ES, and about personal and external factors related to the case and the RTW process of the employee. The strength and relevance of factors related to RTW-ES and to RTW outcome (no RTW or partial RTW) were investigated by means of this questionnaire. During the RTW-ES assessment, the LE was asked to fill out the questionnaire. The questionnaire consists of a list of possible predictors of RTW, inventoried from the literature (e.g. [8–10]). Questions were included about personal factors such as age, gender and level of education (low, medium, high, including examples), and about more work-related personal factors such as the reason for sickness absence (i.e. physical, mental or both) and tenure (number of years with the current employer). Questions about whether there had been periods of work resumption (yes, no) or periods of complete disability (yes, no) were also included. For the external factors, questions were asked about whether the sickness absence was work-related (yes, both work-related and private, no), whether there had been a conflict between the employer and employee during the RTW process (yes, no), and whether the quality of the relationship between the employer and employee was deemed good/neutral or bad. A question about whether the employee had returned to work (yes/no) was included, as well as a question about the sufficiency of the RTW efforts (sufficient/insufficient) according to the LE. The LEs gathered the information necessary for filling out the questionnaire by examining the case report or by interviewing the employer, employee or SIP.
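Purely for illustration, the closed-ended items above can be represented as a small codebook. The study does not publish its coding scheme, so every variable name and numeric code below is an assumption:

# Hypothetical codebook for the questionnaire items described above.
CODEBOOK = {
    "education":           {1: "low", 2: "medium", 3: "high"},
    "reason_of_absence":   {1: "physical", 2: "both physical and mental", 3: "mental"},
    "work_resumption":     {0: "no", 1: "yes"},
    "complete_disability": {0: "no", 1: "yes"},
    "work_related":        {0: "no", 1: "yes, completely/partially"},
    "conflict":            {0: "no", 1: "yes"},
    "relationship_bad":    {0: "good/neutral", 1: "bad"},
    "rtw":                 {0: "no (or therapeutic only)", 1: "yes, partially or fully"},
    "rtw_es":              {0: "insufficient", 1: "sufficient"},
}

def decode(item: str, code: int) -> str:
    """Map a stored numeric answer back to its questionnaire label."""
    return CODEBOOK[item][code]

assert decode("rtw_es", 1) == "sufficient"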
Statistical Analyses
Logistic regression analysis was used to assess the independent contribution of factors to the RTW outcome. The method used was backward conditional, because of the explorative nature of the analyses. Similar to the analyses of RTW, multilevel regression analysis was used to analyze the relationship between the factors and the RTW process outcome in terms of sufficiency of RTW efforts, taking the assessing professional into account. In both multiple analyses, variables were entered into the model when P < 0.20 in the univariate analyses, and models were adjusted for age, gender and education level. Data analysis was performed using SPSS 16.0 for MS Windows.

Comparability
The results of the statistical analyses were used to assess the comparability of the factors associated with RTW-ES and RTW.
Results: Study Population: Questionnaires concerning 415 cases were filled out. Across all cases, the average age of the employees was 47 years (SD 9.4), 180 (43%) were male, and the education level was low in 20%, medium in 60% and high in 20% of cases (see Table 1).

Table 1 Description of study population
Age, M (SD) (N = 415): 47.4 (9.4)
Gender, N (%) (N = 415): male 180 (43.4); female 235 (56.6)
Educational level, N (%) (N = 410): low 84 (20.5); medium 246 (60.0); high 80 (19.5)
RTW, N (%) (N = 411): no (or only on a therapeutic basis) 211 (51.3); yes (partially or fully) 200 (48.7)
RTW efforts, N (%) (N = 415): sufficient 334 (80.5); insufficient 81 (19.5)
RTW at own employer (N = 196): yes 191 (97.4); no 5 (2.6)
RTW process agreed on by employee (N = 409): yes 329 (80.4); no 80 (19.6)

Of the 415 cases, RTW-ES was deemed sufficient in 334 cases (80%) and insufficient in 81 cases (20%). Of the 415 cases, 203 sick-listed employees had returned to work partially prior to applying for disability benefits, whereas 211 sick-listed employees (51%) had not returned to work. At the moment of application for disability benefits, 191 of the employees who had returned to work had returned to their own employer (97%), whereas 5 had not (3%). The RTW process was agreed upon by the employee in 329 cases (80%), while 80 employees (20%) did not agree with the proceedings of the 2 years prior to the application for disability benefits.

Personal and External Characteristics: The characteristics of the variables included in the logistic analyses are presented in Table 2. Regarding the personal factors, the reason for absence was a physical health condition in 261 cases (63%), mixed health conditions in 67 cases (16%), and a mental health condition in 84 cases (20%). The average tenure was 13 years (SD 8.8). Of the sick-listed employees, 272 (66%) reported periods of complete disability, during which no activities to promote RTW could be undertaken.
A total of 218 employees (53%) reported periods of work resumption, meaning that they had attempted to RTW during the 2 years before the application for disability benefits.

Table 2 Description of personal and external factors in study population
Personal factors, N (%):
Reason of absence (N = 412): physical 261 (63.3); both physical and mental 67 (16.3); mental 84 (20.4)
Tenure, years (N = 358), M (SD): 12.84 (8.75)
Periods of complete disability (N = 412): yes 272 (66.0); no 140 (34.0)
Periods of work resumption (N = 412): yes 218 (52.9); no 194 (47.1)
External factors, N (%):
Sickness absence work-related (N = 339): yes, completely/partially 55 (16.2); no 284 (83.8)
Relationship employer-employee (N = 380): good/neutral 355 (93.4); bad 25 (6.6)
Conflict (N = 410): yes 32 (7.8); no 378 (92.2)

For the external factors, the sickness absence was partially or completely work-related in 55 cases (16%). The relationship between the employer and employee was good or neutral in 355 cases (93%). There was evidence of conflict between employer and employee in 32 cases (8%). The correlation between employer-employee relationship and employer-employee conflict was 0.72 (P < 0.01).
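As a small illustration of the collinearity screen implied by this correlation, the snippet below computes the same pairwise correlation between the two binary employer-employee variables; when r exceeds a conventional cutoff such as 0.7, one of the pair is set aside before the multivariable model. This is a sketch, not the authors' code, and the column names are hypothetical.

```python
# Collinearity screen between two binary indicators (hypothetical columns).
from scipy.stats import pearsonr

r, p = pearsonr(df["relationship_bad"], df["conflict"])
print(f"r = {r:.2f}, P = {p:.3f}")  # the study reports r = 0.72, P < 0.01
if abs(r) > 0.7:
    candidates.remove("conflict")   # keep only one of the correlated pair
```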
RTW-ES and RTW: Factors related to RTW-ES are shown in Table 3. The multilevel regression analysis shows 5 potential determinants (P < 0.20) of RTW-ES, taking the assessor into account: reason of absence, tenure, work-relatedness of absence, employer-employee relationship, and employer-employee conflict.

Table 3 Factors related to RTW-ES: multilevel logistic regression analyses, taking assessor into account
(per variable: crude OR (95% CI), P; then OR adjusted for age, gender and educationª (95% CI), P. An OR > 1 indicates a higher chance of RTW-ES compared to the reference group.)
Personal factors:
Age (years): 1.00 (0.97–1.03), 0.93; 0.99 (0.95–1.03), 0.65
Gender (female): 1.07 (0.68–1.71), 0.76; 0.97 (0.51–1.85), 0.92
Education (low): 1.02 (0.43–2.41), 0.97; 1.08 (0.35–3.32), 0.89
Reason of absence (mental)¹: 1.97 (1.14–3.42), 0.02; 1.22 (0.57–2.61), 0.60
Tenure (years): 1.03 (0.99–1.06), 0.14; 1.03 (0.98–1.01), 0.23
Periods of complete disability (no): 1.19 (0.75–1.89), 0.45; not retained
Periods of work resumption (no): 1.22 (0.76–1.95), 0.42; not retained
External factors:
Sickness absence work-related (yes)²: 2.73 (1.33–5.62), 0.01; 1.44 (0.64–3.22), 0.38
Relationship employer/employee (poor): 5.91 (2.81–12.43), <0.01; 5.47 (2.00–14.98), <0.01
Conflict (yes)³: 4.25 (2.14–8.43), <0.01; not retained
ªQIC = 265.92, N = 269. ¹Physical; both physical and mental; mental. ²No; partial/yes. ³r = 0.72 with the variable 'relationship employer/employee'; not included in the multiple regression.

Using multiple multilevel logistic regression analysis, adjusting for age, gender and education and excluding conflict, one factor remained in the model: only the employer-employee relationship was significantly related to a higher chance of RTW-ES (OR 5.47, 95% CI 2.00–14.98, P < 0.01).
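The QIC reported under Table 3 suggests that the assessor-clustered analysis was a population-averaged (GEE-type) logistic model. As a hedged sketch of how such an analysis could be specified in Python with statsmodels, under that assumption and with hypothetical column names (rtw_es, assessor_id, relationship_good), the model might look like this; it is not the authors' actual code.

```python
# GEE logistic regression of RTW-ES, clustering cases within the assessing
# labor expert (illustrative only; hypothetical column names).
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

model = smf.gee(
    "rtw_es ~ age + gender + education + relationship_good",
    groups="assessor_id",                     # cases nested within labor experts
    data=df,
    family=sm.families.Binomial(),            # logistic link for a binary outcome
    cov_struct=sm.cov_struct.Exchangeable(),  # exchangeable working correlation
)
result = model.fit()
print(np.exp(result.params))      # exponentiated coefficients = odds ratios
print(np.exp(result.conf_int()))  # 95% confidence intervals on the OR scale
# Recent statsmodels versions also expose result.qic() for model comparison.
```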
Factors related to RTW according to the regression analysis are presented in Table 4. In the univariate regression analyses, 5 potential determinants were associated (P < 0.20) with RTW: education level, tenure, periods of complete disability, the relationship between employer and employee, and employer-employee conflict. Conflict was excluded from the model because of its high correlation with the employer-employee relationship, and the model was adjusted for age, gender and education. Using multiple backward conditional logistic regression analysis, three factors remained in the model: employer-employee relationship (OR 14.59, 95% CI 3.29–64.71, P < 0.01), level of education (OR 2.89, 95% CI 1.39–6.00, P < 0.01), and periods of complete disability (OR 1.92, 95% CI 1.18–3.15, P < 0.01).

Table 4 Factors related to RTW: logistic regression analyses
(per variable: crude OR (95% CI), P; then OR adjusted for age, gender and educationª (95% CI), P. An OR > 1 indicates a higher chance of RTW compared to the reference group.)
Personal factors:
Age (years): 1.00 (0.98–1.02), 0.97; 1.00 (0.97–1.02), 0.68
Gender (male): 1.01 (0.68–1.49), 0.97; 0.98 (0.62–1.53), 0.92
Education (low): 1.99 (1.07–3.70), 0.03; 2.89 (1.39–6.00), <0.01
Reason of absence (mental)¹: 1.08 (0.57–2.07), 0.81; not retained
Tenure (years): 1.02 (0.99–1.04), 0.18; 1.01 (0.98–1.04), 0.47
Periods of complete disability (yes): 1.74 (1.15–2.63), 0.01; 1.92 (1.18–3.15), <0.01
Periods of work resumption (yes): 1.16 (0.79–1.71), 0.45; not retained
External factors:
Sickness absence work-related (yes)²: 1.35 (0.75–2.41), 0.32; not retained
Relationship employer/employee (poor): 12.95 (3.01–55.74), <0.01; 14.59 (3.29–64.71), <0.01
Conflict (yes)³: 5.57 (2.10–14.78), <0.01; not retained
ªR² = 0.135, N = 321. ¹Physical; both physical and mental; mental. ²No; partial/yes. ³r = 0.72 with the variable 'relationship employer/employee'; not included in the multiple regression analysis.
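For readers reconstructing these tables, the reported odds ratios and confidence intervals are simple transformations of the underlying logit coefficients. A worked back-calculation using the 'periods of complete disability' row of Table 4 (OR 1.92, 95% CI 1.18–3.15) is shown below in LaTeX.

```latex
% OR and 95% CI from a logit coefficient, worked backwards from Table 4.
\[
\mathrm{OR} = e^{\beta}, \qquad
95\%\ \mathrm{CI} = e^{\,\beta \pm 1.96\,\mathrm{SE}(\beta)}
\]
\[
\beta = \ln(1.92) \approx 0.652, \qquad
\mathrm{SE}(\beta) \approx \frac{\ln(3.15) - \ln(1.18)}{2 \times 1.96} \approx 0.25
\]
\[
e^{\,0.652 \pm 1.96 \times 0.25} \approx (1.18,\ 3.15)
\]
```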
Discussion: In this study, the only factor related to RTW-ES is a good relationship between employer and employee. The factors related to RTW outcome (no RTW or partial RTW) after 2 years of sickness absence were high education, no previous periods of complete disability, and a good relationship between employer and employee. Included in this study were employees applying for disability benefits after 2 years of sickness absence who were not permanently or fully disabled. Sickness absence duration should be taken into account because the phase-specificity of sickness absence is different after 2 years of sickness absence, and other factors are then related to RTW outcome [11]. Furthermore, in this study a comparison was made between employees who had achieved some RTW and those who had not achieved RTW. Previous studies have focused on measuring RTW earlier than after 2 years, and have made a distinction between RTW and no RTW, regardless of work ability or application for disability benefits. This could explain differences between the factors related to RTW found in previous research and the results of this study. The results found on factors related to RTW-ES cannot be compared to previous studies because of the lack of research on this subject. The relation found in this study between education and RTW is congruent with the existing literature: a lower education prolongs the time to RTW [12, 13]. A poor relationship between employer and employee has been found to have a negative effect on RTW [14]. Moreover, supervisor support increases the chance of RTW [6, 15, 16]. Previous research has found that age, gender and tenure are related to RTW. A higher age (>50 years) prolongs the time to RTW [14, 17]. Female gender decreases the chance of RTW [17], but this evidence is not conclusive [14].
A shorter tenure prolongs the time to RTW [6, 14], and a tenure longer than one [18] or two years [19] increases the chance of RTW. In our sample, age, gender and tenure were not found to be related to RTW. Furthermore, in this study, no relationship was found between RTW and mental health conditions as the reason for sickness absence, or the work-relatedness of the sickness absence. This is unlike the results of previous studies, which found that mental health conditions reduce the chance of RTW [20, 21]. Also, if the sickness absence is work-related, for example due to a work-related accident, this reduces the chance of RTW [22]. Furthermore, in this study, periods of work resumption were not associated with RTW. As far as the comparability of RTW-ES and RTW outcome is concerned, only the relationship between employer and employee is a shared relevant factor. Educational level and periods of previous disability are predictors only of RTW, not of RTW-ES. This suggests that the two outcomes have limited comparability, but also that the relationship between employer and employee can be considered a highly relevant factor in cases of prolonged sickness absence. The strength of this study lies in its subject: it is the first to investigate determinants of RTW-ES and to compare the findings to the RTW outcome (no RTW or partial RTW). Also, this study focuses on the comparison of no RTW to partial RTW after 2 years of sickness absence. The RTW efforts are mostly of interest when the employee still has work ability but has not yet fully returned to their original work after a prolonged period of sickness absence. A limitation of this study is the lack of knowledge on the level of disability of the employee. It was investigated whether there had been drastic changes, such as a period of complete disability or periods of work resumption, but the RTW outcome could not be compared to the level of disability according to the physician. However, we do know that the physicians of the OHS and the SII ensure that the assessment of RTW-ES after 2 years does not include employees who are fully disabled or who have no disability at all. Another limitation lies in the measurement of the determinants. Questionnaires were developed in which a certain set of variables was investigated; a different selection could have led to different results. However, the variables were selected by means of the literature and expert meetings, and we feel we have investigated several of the most relevant factors. The questionnaire and the sources used to complete it could be a source of response bias. As far as the outcome is concerned, the assessment of RTW-ES is performed by Dutch SII LEs, who have had similar training [5]. To avoid assessor bias, the assessor was taken into account when analyzing RTW-ES. However, it could be that a different group (e.g. from another country) would perform the assessment in their own way, thereby including other factors. On the other hand, this study provides a great opportunity to compare these results to a different situation, as this is the first study to investigate RTW-ES and to compare it to RTW after 2 years. The relevance of this study lies in the use of RTW-ES as an RTW outcome. RTW-ES is relevant to the process, especially when investigating RTW after a longer period of time in cases where the employee is expected to be able to RTW, but not fully or in the original setting.
According to the findings of this study, the relationship between employer and employee is very important to both RTW and RTW-ES. This would imply a shift from a more physical approach, or a focus on personal factors, to a work-related external factor such as the relationship between employer and employee. The importance of this factor is considerable, because effective job accommodation for employees with a chronic disability is a process in which external (i.e. social) factors are essential [23]. During the RTW process, these factors are not only of great importance but can also be influenced, in contrast to factors such as level of education and periods of complete disability. Issues regarding the relationship can be detected by external parties such as the physician or vocational rehabilitation expert, and can be improved by mediation or counseling. This is the first study to investigate the factors related to RTW-ES and to compare them to the factors related to RTW. In future research, this study could be replicated while changing a study characteristic to determine its influence on the study outcome; for example, a different group of professionals (e.g. from another country), or different factors, could be included. It would also be interesting to investigate RTW and RTW-ES by comparing them to full RTW. However, this could cause difficulties in research design, as full RTW in the previous work already implies RTW-ES. Moreover, full RTW is usually achieved earlier. An alternative would be to research determinants of RTW at both 6 months and 2 years, so as to compare the determinants of full or partial RTW. In conclusion, this study showed that RTW-ES is largely determined by the relationship between employer and employee. Factors related to RTW after 2 years of sickness absence are educational level, periods of complete disability, and the relationship between employer and employee. It can be concluded that RTW-ES and RTW are different outcomes, but that the relationship between employer and employee is relevant to both. Considering the importance of the assessment of RTW-ES after a prolonged period of sickness absence among employees who are not fully disabled, this knowledge is essential for the assessment of RTW-ES and the RTW process itself.
Background: Research on disability and RTW outcomes has led to significant advances in understanding these outcomes; however, few studies focus on measuring the RTW process. After a prolonged period of sickness absence, the assessment of the RTW process by investigating RTW Effort Sufficiency (RTW-ES) is essential. However, little is known about the factors influencing RTW-ES, and the correspondence between the factors determining RTW-ES and RTW is unknown. The purpose of this study was to investigate (1) the strength and relevance of factors related to RTW-ES and RTW (no/partial RTW), and (2) the comparability of factors associated with RTW-ES and with RTW. Methods: During 4 months, all assessments of RTW-ES and RTW (no/partial RTW) among employees applying for disability benefits after 2 years of sickness absence, performed by labor experts at 3 Dutch Social Insurance Institute locations, were investigated by means of a questionnaire. Results: Questionnaires concerning 415 cases were available. Using multiple logistic regression analysis, the only factor related to RTW-ES was a good employer-employee relationship. Factors related to RTW (no/partial RTW) were high education, no previous periods of complete disability, and a good employer-employee relationship. Conclusions: Different factors are relevant to RTW-ES and RTW, but the employer-employee relationship is relevant to both. Considering the importance of the assessment of RTW-ES after a prolonged period of sickness absence among employees who are not fully disabled, this knowledge is essential for the assessment of RTW-ES and the RTW process itself.
null
null
12,276
317
[ 968, 1266, 288, 339, 169, 23, 359, 387, 847 ]
12
[ "rtw", "employee", "employer", "factors", "work", "employer employee", "rtw es", "es", "absence", "related" ]
[ "absence permanent disability", "absence applying disability", "work disability associated", "disability rtw outcome", "rtw work disability" ]
null
null
null
[CONTENT] Return-to-work | Vocational Rehabilitation | Disability Insurance | Outcome measures | Employer effort [SUMMARY]
null
[CONTENT] Return-to-work | Vocational Rehabilitation | Disability Insurance | Outcome measures | Employer effort [SUMMARY]
null
[CONTENT] Return-to-work | Vocational Rehabilitation | Disability Insurance | Outcome measures | Employer effort [SUMMARY]
null
[CONTENT] Adult | Educational Status | Employment | Female | Humans | Illness Behavior | Intention | Interpersonal Relations | Logistic Models | Male | Middle Aged | Netherlands | Sick Leave | Surveys and Questionnaires | Work [SUMMARY]
null
[CONTENT] Adult | Educational Status | Employment | Female | Humans | Illness Behavior | Intention | Interpersonal Relations | Logistic Models | Male | Middle Aged | Netherlands | Sick Leave | Surveys and Questionnaires | Work [SUMMARY]
null
[CONTENT] Adult | Educational Status | Employment | Female | Humans | Illness Behavior | Intention | Interpersonal Relations | Logistic Models | Male | Middle Aged | Netherlands | Sick Leave | Surveys and Questionnaires | Work [SUMMARY]
null
[CONTENT] absence permanent disability | absence applying disability | work disability associated | disability rtw outcome | rtw work disability [SUMMARY]
null
[CONTENT] absence permanent disability | absence applying disability | work disability associated | disability rtw outcome | rtw work disability [SUMMARY]
null
[CONTENT] absence permanent disability | absence applying disability | work disability associated | disability rtw outcome | rtw work disability [SUMMARY]
null
[CONTENT] rtw | employee | employer | factors | work | employer employee | rtw es | es | absence | related [SUMMARY]
null
[CONTENT] rtw | employee | employer | factors | work | employer employee | rtw es | es | absence | related [SUMMARY]
null
[CONTENT] rtw | employee | employer | factors | work | employer employee | rtw es | es | absence | related [SUMMARY]
null
[CONTENT] rtw | process | rtw process | disability | benefits | assessment | disability benefits | work | absence | long [SUMMARY]
null
[CONTENT] 01 | 95 | regression | cases | employer | employee | employer employee | rtw | relationship | yes [SUMMARY]
null
[CONTENT] rtw | employee | factors | employer | work | employer employee | es | rtw es | absence | assessment [SUMMARY]
null
[CONTENT] RTW ||| RTW | RTW Effort Sufficiency | RTW-ES ||| RTW-ES ||| RTW-ES ||| 1 | RTW-ES | 2 | RTW-ES [SUMMARY]
null
[CONTENT] 415 ||| RTW-ES ||| [SUMMARY]
null
[CONTENT] RTW ||| RTW | RTW Effort Sufficiency | RTW-ES ||| RTW-ES ||| RTW-ES ||| 1 | RTW-ES | 2 | RTW-ES ||| 4 months | RTW-ES | 2 years | 3 Dutch Social Insurance Institute ||| ||| 415 ||| RTW-ES ||| ||| RTW-ES | RTW ||| RTW-ES | RTW-ES | RTW [SUMMARY]
null
Program synergies and social relations: implications of integrating HIV testing and counselling into maternal health care on care seeking.
25603914
Women and children in sub-Saharan Africa bear a disproportionate burden of HIV/AIDS. Integration of HIV with maternal and child services aims to reduce the impact of HIV/AIDS. To assess the potential gains and risks of such integration, this paper considers pregnant women's and providers' perceptions about the effects of integrated HIV testing and counselling on care seeking by pregnant women during antenatal care in Tanzania.
BACKGROUND
From a larger evaluation of an integrated maternal and newborn health care program in Morogoro, Tanzania, this analysis included a subset of information from 203 observations of antenatal care and interviews with 57 providers and 190 pregnant women from 18 public health centers in rural and peri-urban settings. Qualitative data were analyzed manually and with Atlas.ti using a framework approach, and quantitative data of respondents' demographic information were analyzed with Stata 12.0.
METHODS
Perceptions of integrating HIV testing with routine antenatal care from women and health providers were generally positive. Respondents felt that integration increased coverage of HIV testing, particularly among difficult-to-reach populations, and improved convenience, efficiency, and confidentiality for women while reducing stigma. Pregnant women believed that early detection of HIV protected their own health and that of their children. Despite these positive views, challenges remained. Providers and women perceived opt out HIV testing and counselling during antenatal services to be compulsory. A sense of powerlessness and anxiety pervaded some women's responses, reflecting the unequal relations, lack of supportive communications and breaches in confidentiality between women and providers. Lastly, stigma surrounding HIV was reported to lead some women to discontinue services or seek care through other access points in the health system.
RESULTS
While providers and pregnant women view program synergies from integrating HIV services into antenatal care positively, lack of supportive provider-patient relationships, lack of trust resulting from harsh treatment or breaches in confidentiality, and stigma still inhibit women's care seeking. As countries continue rollout of Option B+, social relations between patients and providers must be understood and addressed to ensure that integrated delivery of HIV counselling and services encourages women's care seeking in order to improve maternal and child health.
CONCLUSION
[ "Adolescent", "Adult", "Africa South of the Sahara", "Confidentiality", "Counseling", "Delivery of Health Care, Integrated", "Delivery, Obstetric", "Female", "HIV Infections", "Humans", "Interviews as Topic", "Mass Screening", "Maternal Health Services", "Maternal Welfare", "Middle Aged", "Pregnancy", "Professional-Patient Relations", "Qualitative Research", "Rural Population", "Tanzania", "Young Adult" ]
4311416
Background
In sub-Saharan Africa, women comprised 60% of people living with human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) in 2011, and AIDS was the leading cause of death among mothers [1]. The disproportionate burden of HIV in women has implications not only for their health but also for the health of their children. Ninety-one percent of children under 15 years living with HIV in 2011 were in sub-Saharan Africa [1]. In 2012, the prevalence of HIV/AIDS in Tanzania was 5.1% among adults generally, 6.2% among women of reproductive age, and 3.2% among pregnant women [2]. An estimated 70,000 to 80,000 newborns were at risk of acquiring HIV every year during pregnancy or delivery, or via breastfeeding [3]. One programmatic response to the vulnerability of women and children to HIV has been the package of interventions focused on the prevention of mother-to-child transmission (PMTCT) [4]. PMTCT is envisioned as a cascade of services throughout the reproductive, maternal, newborn, and child health spectrum: counselling, testing and treatment delivered at multiple time points throughout a woman's interaction with the health system. In the last decade, integration of PMTCT services into routine maternal and child health (MCH) services has been a key strategy in responding to HIV/AIDS in low-resource settings [4,5]. By strengthening the linkages between MCH and PMTCT, integration is believed to improve the coverage of HIV testing and treatment, leading to earlier treatment for those who need it and an opportunity for HIV-positive pregnant mothers to receive prophylaxis so that transmission is prevented. In addition, integration is also believed to strengthen basic health systems services, improve the efficiency of service delivery, and increase the effectiveness of health interventions [4–7]. In Tanzania, HIV integration entailed provision of HIV testing, counselling, and treatment for HIV-positive pregnant women during antenatal care (ANC). HIV testing and counselling were integrated into MCH services starting in 2007 on an "opt out" basis (Table 1) [8]. Under these guidelines, prophylaxis for prevention of vertical transmission was part of reproductive and child health clinics (RCH), while care and treatment for HIV-positive pregnant women remained in care and treatment centers (CTC) [8]. In October 2013, Tanzania began the implementation of Option B+ as recommended by the World Health Organization (WHO) [9].
Under Option B+, pregnant and lactating mothers who test positive for HIV are eligible for antiretroviral therapy (ART) in RCH wards, regardless of their CD4 count and stage.

Table 1 PMTCT policies in Tanzania
2000-2002, Pilot PMTCT Program:
• Short course regimen for preventing mother-to-child transmission in four referral hospitals and one regional hospital
• Use of AZT short course from 36 weeks to delivery
2004, First national PMTCT guidelines for scale up:
• Scale up from 5 pilot testing sites to the whole country (1347 sites across the country by 2006)
• sdNVP during labor and delivery
2007, Second national PMTCT guidelines for scale up:
• Provider-initiated testing and counselling in antenatal visits in an "opt out" system
• PMTCT remained parallel to Care and Treatment Centers (CTC), where eligible mothers received care
• Change of regimen from sdNVP to AZT from 28 weeks of pregnancy until labor and delivery for PMTCT
2011, Third national PMTCT guidelines for scale up:
• Tanzania adopts Option A of the 2010 WHO guidelines (use of ARV drugs for treating pregnant women and preventing mother-to-child transmission of HIV)
• Engagement with, testing of, and counselling of partners at health facilities
• PMTCT program expanded to 3420 sites in the country
2013, Fourth national PMTCT guidelines, Option B/B+:
• All HIV-infected pregnant and lactating mothers, regardless of CD4 count, eligible for lifelong treatment with antiretroviral drugs
• Care and treatment integrated into RCH wards

Connecting HIV-positive mothers who tested during antenatal care to ART and following up with them after delivery has been challenging [10]. While some studies suggested that higher utilization of MCH services could lead to increased utilization of integrated HIV services [11,12], evidence for the uptake of and adherence to ART among pregnant women and infants remained mixed [13–15]. At the same time, literature on client satisfaction with integrated HIV testing and counselling programs showed that most clients were satisfied with the services provided under integration, including counselling, wait time, and providers [16,17]. From a provider perspective, a study in rural Kenya found that health workers viewed integration as a mostly positive development, as this approach enhanced service provision, improved patient-provider relationships, and increased the likelihood of HIV-positive women's enrolment into HIV care by decreasing stigma [18]. Given the resources devoted to integration of HIV services in the maternal, newborn and child health (MNCH) spectrum of care, and to inform future policies in this area, we aim in this paper to understand providers' and pregnant women's perceptions of the integration of HIV testing and counselling within routine antenatal care and the effects of integration on care seeking. We report the characteristics of respondents and antenatal care health workers to provide some contextual background for the findings. We then detail the views of pregnant women and providers on the generally positive program synergies from integrating HIV testing and counselling into antenatal care, and then discuss the remaining challenges and concerns reflecting the social relations that underpin service delivery.
null
null
Results
Profile of respondents and antenatal care seeking: We conducted a total of 203 clinical observations of pregnant women receiving routine antenatal care and 196 exit interviews with women. We received eight refusals for observations and seven refusals for exit interviews. In addition, we conducted 65 quantitative interviews with providers of antenatal services and 57 qualitative in-depth provider interviews. The mean age of pregnant women was 26.0 years, and they had a mean of 5.8 years of formal education. The mean number of antenatal care visits that pregnant women had completed, including the one after which they were interviewed, was 2.7, and a majority of women came for antenatal care at the recommendation of their health provider. The mean time that women took to reach the health center was 48.3 minutes, while the mean time between arrival at the health center and being seen by the provider was 117.2 minutes (Table 4).

Table 4 Characteristics of interviewed pregnant women (N=196)
Age, in years (mean/median/range): 26.0/25.0/16–50
Number of years of education (mean/median/range): 5.8/7.0/0–13
Previous number of live childbirths (mean/median/range): 1.6/1.0/0–6
Number of ANC visits completed (mean/median/range): 2.7/3.0/1–9
Number of weeks pregnant (mean/median/range): 27.3/28.0/8–40
Travel time between home and health facility, in minutes (mean/median/range): 48.3/30.0/1–240
Time between arrival at health facility and being seen by provider, in minutes (mean/median/range): 117.2/98.8/2–420
Reason for attending ANC: self-recommended 23.5%; family member recommended 6.1%; informal provider recommended 0.5%; health provider recommended 52.0%; complications this pregnancy 5.6%; complications prior pregnancy 1.0%; came to facility for other reason, then received ANC 1.5%; other 27.0%

Some women expressed dissatisfaction with the long queues and wait times. For example, a 19-year-old woman during exit interview 07–08 said: "[The] [n]umber of health providers should be increased so that we don't need to spend a lot of time waiting for services." Researchers observed that although women at different health centers expressed this concern, most pregnant women saw the wait between their arrival and receipt of services as an expected part of care seeking at rural health centers.

Of the 65 health workers who were interviewed, almost 80% were female and the average age was 39.2 years. A little more than half were enrolled nurses, while registered nurses and medical attendants each comprised 15.4%.
Antenatal care providers had worked for a mean duration of 14.1 years in total, with a mean duration of 6.4 years at the facility where the interview took place (Table 5).

Table 5 Characteristics of interviewed antenatal care providers (N=65)
Age, in years (mean/median/range): 39.2/37.9/13–60
Female: 78.5%
Marital status: married/co-habitating 49.2%; single 38.5%; widowed/divorced/separated 10.8%
Designation: assistant medical officer (5 years of clinical training) 1.5%; clinical officer (3 years of clinical training) 6.2%; assistant clinical officer (3 years of clinical training) 1.5%; registered nurse (4 years of nursing training) 15.4%; enrolled nurse (3 years of nursing training) 55.4%; medical assistant (secondary school) 15.4%; health assistant (secondary school) 3.1%; other ("afisa muuguzi msaidizi," assistant nursing officer) 1.5%
Received in-service training: on HIV/AIDS 58.5%; on focused ANC 31.3%
Years as health worker (mean/median/range): 14.1/11.0/0–39
Years employed at this health center (mean/median/range): 6.4/3.5/0–29
Number of previous postings (mean/median/range): 1.6/1.0/0–7
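As a minimal sketch of how the mean/median/range summaries in Tables 4 and 5 can be produced, the snippet below uses pandas. The study analyzed its quantitative data in Stata 12.0, so this is illustrative only; the file and column names are hypothetical.

```python
# Descriptive summaries in the mean/median/range style of Tables 4 and 5
# (illustrative; the study used Stata 12.0, and these names are hypothetical).
import pandas as pd

women = pd.read_csv("exit_interviews.csv")
for col in ["age", "years_education", "anc_visits", "travel_time_min"]:
    s = women[col].dropna()
    print(f"{col}: {s.mean():.1f}/{s.median():.1f}/{s.min():.0f}-{s.max():.0f}")
```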
Positive effects of integration on care seeking: According to both providers and pregnant women, provision of HIV testing during antenatal care has increased coverage to include more women and other populations that were not previously being tested, for example women from the Maasai community. In addition, integration has also increased the coverage of testing among men. Providers and women explained that they utilized the opportunity to test partners for HIV as well as to involve partners in counselling and MNCH in general. For example, enrolled nurse 02–28, a health worker since 1982, discussed the changes in the involvement of women's partners in counselling as a result of PMTCT policy changes: "…during the past a woman can come along with her husband but the man will stay outside; only if the woman has got some complications [do] we ask her to call her husband. Otherwise he will stay outside waiting for his wife until she finishes without gaining anything there… But nowadays if a woman comes with her husband, if there is education provided he will be involved too; due to the policy change even the man can enter and listen to the education provided."

Providers and women noted that integrated services ensured that women were tested for and counseled about HIV during routine antenatal care visits, reducing the number of times they returned to the health center. Thus, integrated service delivery was more convenient for pregnant women. Provider 07–06, with 11 years of experience at the facility of interview, explained it in this way: "[Providing HIV testing and counselling during ANC] is a good service and very important to a mother because it saves time and a mother does not take a long time in getting the service.
She is [taking] a pregnant test, then [she will find out her] HIV status, then she is getting back to her home place…"

Exit interview 02–07 with a 29-year-old woman who has had two previous live births confirmed this view: "[Providing HIV testing and counselling during ANC is] good because if the provider tells me to come only for HIV instead of antenatal services I wouldn't come."

Pregnant women also appreciated the knowledge gained during testing and counselling sessions about their health status and about ways to prevent transmission from mother to child and between partners. Exit interview 13–11 with a 20-year-old woman who has had one previous live birth noted: "I think [HIV testing during ANC] is a good system because you get a chance to know your HIV status so that if you're infected, you can start treatment early. If you're not, the counselling will help you take care of yourself."

Moreover, provider 02–29, a health worker since 2008, commented that integrated HIV testing and counselling has reduced stigma, as the services became a routine part of the antenatal care expected during every pregnant woman's visit to the health center. Provider 01–12, a health worker since 1989, added that integration of HIV testing and counselling into routine antenatal service delivery led to improved confidentiality for women: when a woman visited the health center for antenatal services, no one else knew her HIV status or her health issues. Provider 07–07 mentioned that having one health worker throughout the MNCH spectrum was reassuring to women, and believed that women thought that having one provider ensured the privacy of their information. Aside from health advantages for mothers and children, integration has also given women an opportunity to exchange information with other women in the community. Provider 09–04, who has worked at the facility of interview for 23 years, explained the added benefit from peer learning: "Another success [of integration] is [that] education has spread, because when she gets knowledge here [at the health center], she convinces another mother also to test; that's why you find many more mothers test than the fathers, because when you give knowledge here at the clinic about testing when she arrives, there she tells another mother; therefore we get many mothers."
I think [HIV testing during ANC] is a good system because you get a chance to know your HIV status so that if you’re infected, you can start treatment early. If you’re not, the counselling will help you take care of yourself. Moreover, provider 02–29, a health worker since 2008, commented that integrated HIV testing and counselling has reduced stigma, as the services became a routine part of the antenatal care expected during every pregnant woman’s visit to the health center. Provider 01–12, a health worker since 1989, added that integration of HIV testing and counselling into routine antenatal service delivery led to improved confidentiality for women. When a woman visited the health center for antenatal services, no one else knew her HIV status or her health issues. Provider 07–07 mentioned that having one health worker throughout the MNCH spectrum was reassuring to women and believed that women thought that having one provider ensured privacy of their information. Aside from health advantages for mothers and children, integration has also given women an opportunity to exchange information with other women in the community. Provider 09–04, who has worked at the facility of interview for 23 years, explained the added benefit from peer learning:Another success [of integration] is [that] education has spread because when she gets knowledge here [at the health center], she convinces another mother also to test; that’s why you find many more mothers test than the fathers, because when you give knowledge here at the clinic about testing when she arrives, there she tells another mother; therefore we get many mothers. Another success [of integration] is [that] education has spread because when she gets knowledge here [at the health center], she convinces another mother also to test; that’s why you find many more mothers test than the fathers, because when you give knowledge here at the clinic about testing when she arrives, there she tells another mother; therefore we get many mothers. Patient-provider interactions Despite the perceived benefits of integrated HIV testing and counselling, care seeking-related challenges linked to the quality of provided services remained. One challenge that emerged surrounded patient-provider interactions, which included the nature of consent for opt out HIV testing, unequal social relations and lack of supportive communications between pregnant women and providers during counselling sessions, and privacy and confidentiality concerns. According to providers, very few women refused HIV testing. In case of refusal, health workers continued to provide counselling for routine MNCH visits and for HIV testing during subsequent routine antenatal visits until women accepted. Pregnant women confirmed provider comments about the rarity of HIV test refusals and reported their understanding of consent for HIV tests in terms of provider authority rather than choice. Many women saw HIV testing as compulsory and did not know or were not counseled that they could opt out of HIV testing during antenatal services. Some women also reported that they felt that further antenatal care services would be withheld if they did not consent to be tested for HIV. In addition, some of the women also felt that they did not have an opportunity to voice their concerns and were disempowered from making informed decisions about consenting for HIV testing during antenatal services. 
For example, in exit interview 03–07, an 18-year-old woman with three previous live births explained that she would have refused HIV testing had she known that refusal was an option available to her:

…they did not tell me [that I had a choice in HIV testing]. If they did, I would say that I do not want to get an HIV test because I do not know the meaning [of getting tested]. I got tested, because I had no choice, and there is nothing you can do about it. You just follow the instructions of the providers.

Provider 15–04, with 13 years of experience at the facility of interview, acknowledged that refusal was difficult for women to voice, since women interpreted the HIV test as “orders from the Ministry of Health” and “she cannot violate an order without a reasonable cause.” Women’s responses also showed that providers were highly regarded and that providers’ positions of authority gave them power to make health-related decisions for the women. Many women viewed HIV testing as following the instructions of the providers, which, to the women, were beyond questioning.

This perception of the provider as the person who makes health decisions for women was also seen in other aspects of the antenatal care consultation. In exit interviews, 52.0% of pregnant women reported that they came for antenatal care at the recommendation of their providers, compared with 23.5% who said that they chose to come of their own volition (Table 4). A 25-year-old woman with two previous live births in exit interview 14–06 expressed her thoughts on being asked by the provider to return to the health center again and again due to an HIV test kit stock-out: “We just think it’s okay. What can we do then? We can do nothing. We just go.”

Only a few pregnant women discussed health decisions in terms of exerting control over their own bodies. The few women who did so articulated an intentional involvement in decision-making and understood health services as beneficial to their health. For example, when probed about whether she felt comfortable telling the provider her opinion about HIV testing, a 28-year-old woman with three previous live births responded in exit interview 13–05:

Yes, that is my body if it is going to be tested. So it does not benefit [the provider] if she knows my health; it benefits me.

Most of the women who expressed this perspective came from the few facilities where women reported supportive relationships with their providers and felt encouraged to ask questions. In contrast, many women at other facilities communicated discomfort in voicing their opinions or asking questions of their providers because of the providers’ “harsh” attitude. Patient-provider relations were further complicated by differences in social status, including a dramatic divergence in educational levels and an average age difference of 14 years between women and providers.
Findings from observations of antenatal consultations showed that while at least 89.7% of providers greeted women and their companions with respect, spoke in understandable local language, and addressed women respectfully, only 66.5% of women were encouraged to ask questions and only 20.8% of providers responded to the questions women asked (Table 6). In addition, only 28.6% of providers thanked women for coming to the health center for services.

Table 6. Observations of interactions between women and providers during antenatal services (N=203)
Provider greets woman and her companion/relative with respect: 89.7%
Provider speaks using easy, understandable local language: 99.0%
Provider addresses the woman by her name/calls her ‘mama’: 93.1%
Women encouraged to ask questions during clinical session: 66.5%
Provider responds to questions asked by women: 20.8%
Provider thanks woman for coming to health facility for services: 28.6%

At the same time, some health workers were friends with women in the community, which encouraged pregnant mothers to seek services at the facility. Provider 06–06, a health worker since 1980, said:

women [seeking services at the health center] can be my friends and I have their phone number… When they reach here, [they say]… ‘[W]here is the [specific] attendant, I am looking for attendant’… You already know this medicine is a private thing…so you come, you give her.

While such relationships could be positive, some women at one facility reported that they could not trust their providers to keep their HIV results confidential and expressed concern that the provider would discuss their results with other providers at the facility and with community members at home. These concerns did not vary by educational level: at this facility, nine of the ten interviewed women had completed seven years of formal education, or primary schooling, and one woman had completed 11 years. Provider 07–06, a health worker since 1998, was aware that in some cases women did not trust their providers to maintain confidentiality. Indeed, during antenatal services at one facility, a research team member observed providers openly discussing a pregnant woman’s HIV-positive status and medication needs in public.
Stigma

Another challenge that emerged from interviews with women and providers was stigma surrounding HIV infection at both the individual and community levels, which led some women to seek health services at other health facilities or through other providers, such as community health workers or traditional healers. Despite the integration of HIV testing and counselling into routine antenatal visits and the resulting reduction in stigma associated with HIV testing, fear of a positive test result remained a significant barrier to care seeking.
A 21-year-old woman with one previous live birth in exit interview 06–05 commented:

Some [pregnant women] are afraid to know their results, as they know that being HIV positive is the end of living.

Women’s fear of a possible positive result was strong enough that provider 02–28 and three women described it as a contributing factor in pregnant women deciding against attending antenatal services. A 22-year-old woman in exit interview 07–03 commented that women who opted out of antenatal services at the facility would rather not know their HIV status than subject themselves to the psychological pressure of thinking that they would die soon after a positive test result, even though treatment was generally known to be available. A 21-year-old woman with no previous live births in exit interview 08–02 noted:

Other [women] are afraid to take their results from the HIV test while others do not come back once they are tested and found to be HIV positive…they are afraid of people hearing about their HIV status. In the community, those who are HIV positive are ostracized. So they are afraid to go back and forth to the health center.

Some women reported knowing other pregnant women who had undergone HIV testing during their first antenatal visit and then discontinued further attendance at the facility. Providers added that some women would instead go to other points of access, such as dispensaries and community health workers, for antenatal services.
Conclusion
This study found that while perceptions of integrated delivery of HIV services and antenatal care were generally positive, important barriers related to patient-provider interactions and stigma remained for women seeking antenatal care at health facilities. Tanzania and other low-resource, high-burden countries are currently rolling out Option B+, which further integrates HIV care and treatment into maternal health services. In this context, understanding women’s care seeking practices and their perceptions of integrated health services is especially important for informing future policies and service delivery reforms. The challenges identified here must be mitigated so that pregnant mothers actively engage with the MNCH spectrum of care for themselves and their children, and concerns about integrated delivery of care must be addressed in order to reduce the impact of HIV/AIDS and improve maternal and child health in Tanzania.
[ "Study site", "Study design", "Data collection", "Analysis", "Ethical approval", "Profile of respondents and antenatal care seeking", "Positive effects of integration on care seeking", "Patient-provider interactions", "Stigma", "Endnote" ]
[ "Populated with 44.8 million people and located in east Africa, Tanzania is a low-income country with a per capita gross national income of 540 U.S. dollars [19]. With regards to maternal health, focused antenatal care (FANC) guidelines in 2002 reduced the frequency of facility visits from monthly to a minimum of four times with new counselling and clinical services (Table 2) [20,21]. Between 2005 and 2010, 95.8% of pregnant women in mainland Tanzania made at least one antenatal care visit with skilled providers. Yet, only 42.7% of women in mainland Tanzania made four or more antenatal care visits, and half of them made their first visit during the fifth month of pregnancy [22]. While women can and do access antenatal care in facilities, challenges in terms of continuity and quality remain. Overall, 23.1% of women reported having at least one problem in accessing health care [22].Table 2\nIntegrated HIV and ANC services in Tanzania\n\nFocused antenatal care checklist\n\nParameter\n\nFirst visit <16 weeks\n\nSecond visit 20–24 weeks\n\nThird visit 28–32 weeks\n\nFourth visit 36 weeks\n\nLaboratory investigations, blood\nHaemoglobin✓✓✓✓Grouping and rhesus factor✓RPR✓HIV testing✓\nClient education and counselling (for the couple)\nProcess of pregnancy and complications✓✓✓✓Diet and nutrition✓✓✓✓Rest and exercise in pregnancy✓✓✓✓Personal hygiene✓Danger signs in pregnancy✓✓✓✓Use of drugs in pregnancy✓✓✓✓Effects of STI/HIV/AIDS✓✓✓✓Voluntary counselling and testing for HIV✓Care of breasts and breast feeding✓✓Symptoms/signs of labour✓✓Plans of delivery (emergency preparedness, place of delivery, transportation, financial arrangements)✓✓✓✓Plans for postpartum care✓✓Family planning✓✓Harmful habits (e.g. smoking, drug abuse, alcoholism)✓✓✓✓Schedule of return visit✓✓✓✓Source: Adapted from von Both C, Fleba S, Makuwani A, Mpembeni R, Jahn A. How much time do health services spend on antenatal care? Implications for the introduction of the focused antenatal care model in Tanzania. BMC Pregnancy and Childbirth 2006, 6(22).\n\nIntegrated HIV and ANC services in Tanzania\n\nSource: Adapted from von Both C, Fleba S, Makuwani A, Mpembeni R, Jahn A. How much time do health services spend on antenatal care? Implications for the introduction of the focused antenatal care model in Tanzania. BMC Pregnancy and Childbirth 2006, 6(22).\nMorogoro is one of 30 regions in Tanzania, located about 200 kilometers southwest of Dar es Salaam [23]. With a population of 2.2 million and a population density of 31 inhabitants per square kilometer, Morogoro region is among Tanzania’s largest and least densely populated regions. According to a 2002 census, more people live in rural areas (73%) than in urban areas (27%) in Morogoro, similar to most regions in Tanzania [24]. Regional averages for education, poverty and care seeking are also similar to national averages. 
With regards to HIV, 67.1% of women of reproductive age and 49.8% of men between 15–49 in Morogoro have ever tested for HIV, while 5.3% of women of reproductive age and 2.1% of men between the ages of 15–49 are HIV-positive [2].", "As part of a three-year evaluation of a maternal and newborn health care program implemented by the Ministry of Health and Social Welfare (MoHSW) and MAISHA through Jhpiego in Morogoro, Tanzania, all 18 government health centers in four rural and peri-urban districts (Gairo, Kilosa,a Morogoro District Council, Mvomero, and Ulanga) were chosen for a cross-sectional health facility assessment.", "A team of six research assistants received training over six days that included research ethics and techniques, project objectives, overview of instruments, and two days of pilot testing in health care facilities. Data collection proceeded from September to early December 2012. In each health facility, data were collected over a period of two days.\nPrior to the start of data collection, study personnel visited each health facility in-charge to brief him or her on data collection objectives and coordinate data collection on the days when antenatal and postnatal services were provided (Table 3). At each health facility, the first ten pregnant women attending routine antenatal services were approached for their participation and consent to the study and then subsequently observed and interviewed.Table 3\nData sources included in MNCH facility survey\n\nData source\n\nSampling\n\nFinal sample\nFacility observation checklists and interviews with facility in chargeCensus of health centers18ANC provider interviews quantitativeSub-analysis of 88 RCH providers interviewed based on availability on day of visit and provision of antenatal care in the preceding 7 days65ANC provider interviews qualitativeSub-sample of 88 RCH providers interviewed based on availability on day of visit, receipt of Jhpiego PNC training, and years of service as a provider in the facility; average of 3 per facility57ANC sessions observedQuota based on availability on day of visit, average of 10 per facility, total approved target sample of 240203; 8 refusalsANC women exit interviewsSub-sample of those who consented to observation of ANC sessions196; 7 refusals\n\nData sources included in MNCH facility survey\n\nAt least five providers per facility providing antenatal and postnatal services during the day shift were administered a structured quantitative survey. A sub-sample of about three providers per facility were then chosen for in-depth qualitative interviews based on their Jhpiego training, provision of maternal and newborn health services, and years of service. Provider interviews covered topics including antenatal and postnatal service utilization, integration of family planning and HIV services, and linkages to other levels of the health system.\nData quality was ensured by two field-based supervisors who provided overarching support to field implementation, including review of completed instruments and conduct of daily debriefings following in-depth interviews. Completed and supervisor-checked questionnaires were sent to Dar es Salaam for data entry and cleaning.\nQualitative provider interviews were digitally recorded, transcribed, and translated to English. Team debriefings at midpoint and endpoint of data collection reviewed emerging themes and assessed reliability of data through triangulation. 
After the midpoint debrief, revised interview guides focusing on emerging themes were implemented for the last seven health facilities visited by the research team.", "This paper drew primarily from qualitative interviews with pregnant women and antenatal care providers on the topic of integrated HIV testing and counselling services during routine antenatal care. In addition, women’s and providers’ demographic profiles were also included as background information. Thematic qualitative data analysis was performed manually from a database coded and organized by Atlas.ti. Codes were derived from the structure of the interview guide and from themes that emerged during daily, midpoint and endpoint debriefings. Codebook development and coding were undertaken through consensus by a team, including research assistants who conducted data collection and whose work was reviewed by a supervisor. A framework approach [25] was taken in the qualitative portion of the research, utilizing an inductive approach with pre-defined research questions.\nPreliminary findings from both the quantitative and qualitative analysis were shared with MoHSW and implementing partner for their feedback and review.", "The study received ethical approval from the Muhimbili University of Health and Allied Sciences (MUHAS) and the Johns Hopkins School of Public Health (JHSPH) Institutional Review Boards. Permission to conduct the study was obtained from MoHSW and from the region and district administration authorities. Individual written consents were obtained from the study participants prior to their participation in the study. All information was kept confidential and anonymous.", "We conducted a total of 203 clinical observations of pregnant women receiving routine antenatal care and 196 exit interviews with women. We received eight refusals for observations and seven refusals for exit interviews. In addition, we also conducted 65 quantitative interviews with providers of antenatal services and 57 qualitative in-depth provider interviews.\nThe mean age of pregnant women was 26.0 years, and their mean year of formal education was 5.8 years. The mean number of antenatal care visits that pregnant women had, including the one after which they were interviewed, was 2.7, and a majority of women came for antenatal care at the recommendation of their health provider. 
The mean time that women took to reach the health center was 48.3 minutes, while the mean time between arrival at the health center and being seen by the provider was 117.2 minutes (Table 4).Table 4\nCharacteristics of interviewed pregnant women (N=196)\n\nAge, in years (mean/median/range)26.0/25.0/16 – 50\nNumber of years of education (mean/median/range)5.8/7.0/0 – 13\nPrevious number of live childbirths (mean/median/range)1.6/1.0/0 – 6\nNumber of ANC visits completed (mean/ median/ range)2.7/3.0/1 – 9\nNumber of weeks pregnant (mean/median/range)27.3/28.0/8 – 40\nTravel time between home and health facility, in minutes (mean/median/range)48.3/30.0/1 – 240\nTime between arrival at health facility and being seeing by provider, in minutes (mean/ median/ range)117.2/98.8/2 – 420\nReason for attending ANC\nSelf-recommended23.5%Family member recommended6.1%Informal provider recommended0.5%Health provider recommended52.0%Complications this pregnancy5.6%Complications prior pregnancy1.0%Came to facility for other reason, then received ANC1.5%Other27.0%\n\nCharacteristics of interviewed pregnant women (N=196)\n\nSome women expressed dissatisfaction with the long queues and wait times. For example, a 19-year-old woman during exit interview 07–08 said:[The] [n]umber of health providers should be increased so that we don’t need to spend a lot of time waiting for services.\n[The] [n]umber of health providers should be increased so that we don’t need to spend a lot of time waiting for services.\nResearchers observed that although women at different health centers expressed this concern, most pregnant women saw the wait between their arrival and receipt of services as an expected part of care seeking at rural health centers.\nOf the 65 health workers who were interviewed, almost 80% were female and the average age was 39.2 years. A little more than half were enrolled nurses, while registered nurses and medical attendants comprised 15.4% each. Antenatal care providers had worked for a mean duration 14.1 years in total, with a mean duration of 6.4 years at the facility where the interview took place (Table 5).Table 5\nCharacteristics of interviewed antenatal care providers (N=65)\n\nAge, in years (mean/median/range)39.2/37.9/13 – 60\nFemale\n78.5%\nMarital status\nMarried/Co-habitating49.2%Single38.5%Widowed/divorced/ separated10.8%\nDesignation\nAssistant medical officer, 5 years of clinical training1.5%Clinical officer, 3 years of clinical training6.2%Assistant clinical officer, 3 years of clinical training1.5%Registered nurse, 4 years of nursing training15.4%Enrolled nurse, 3 years of nursing training55.4%Medical assistant, Secondary school15.4%Health assistant, Secondary school3.1%Other (“afisa muuguzi msaidizi,” assistant nursing officer)1.5%\nReceived in-service training\nOn HIV/AIDS58.5%On Focused ANC31.3%\nYears as health worker (mean/median/range)14.1/11.0/0 – 39\nYears employed at this health center (mean/ median/ range)6.4/3.5/0 – 29\nNumber of previous postings (mean/median/range)1.6/1.0/0 – 7\n\nCharacteristics of interviewed antenatal care providers (N=65)\n", "According to both providers and pregnant women, provision of HIV testing during antenatal care has increased coverage to include more women and other populations that were not previously being tested, for example women from the Maasai community. In addition, integration has also increased the coverage of testing among men. 
Providers and women explained that they utilized the opportunity to test partners for HIV as well as to involve partners in counselling and MNCH in general. For example, enrolled nurse 02–28, a health worker since 1982, discussed the changes in the involvement of women’s partners in counselling as a result of PMTCT policy changes:…during the past a woman can come along with her husband but the man will stay outside; only if the woman has got some complications [do] we ask her to call her husband. Otherwise he will stay outside waiting for his wife until she finishes without gaining anything there… But nowadays if a woman comes with her husband, if there is education provided he will be involved too, due to the policy change even the man can enter and listen to the education provided.\n…during the past a woman can come along with her husband but the man will stay outside; only if the woman has got some complications [do] we ask her to call her husband. Otherwise he will stay outside waiting for his wife until she finishes without gaining anything there… But nowadays if a woman comes with her husband, if there is education provided he will be involved too, due to the policy change even the man can enter and listen to the education provided.\nProviders and women noted that integrated services ensured that women were tested for and counseled about HIV during routine antenatal care visits, reducing the number of times they returned to the health center. Thus, integrated service delivery was more convenient for pregnant women. Provider 07–06, with 11 years of experience at the facility of interview, explained in this way:[Providing HIV testing and counselling during ANC] is a good service and very important to a mother because it saves time and a mother does not take a long time in getting the service. She is [taking] a pregnant test, then [she will find out her] HIV status, then she is getting back to her home place…\n[Providing HIV testing and counselling during ANC] is a good service and very important to a mother because it saves time and a mother does not take a long time in getting the service. She is [taking] a pregnant test, then [she will find out her] HIV status, then she is getting back to her home place…\nExit interview 02–07 with a 29-year-old woman who has had two previous live births confirmed this view:[Providing HIV testing and counselling during ANC is] good because if the provider tells me to come only for HIV instead of antenatal services I wouldn’t come.\n[Providing HIV testing and counselling during ANC is] good because if the provider tells me to come only for HIV instead of antenatal services I wouldn’t come.\nPregnant women also appreciated knowledge gained during testing and counselling sessions about their health status and ways to prevent transmission from mother to child and between partners. Exit interview 13–11 with a 20-year-old woman who has had one previous live birth noted:I think [HIV testing during ANC] is a good system because you get a chance to know your HIV status so that if you’re infected, you can start treatment early. If you’re not, the counselling will help you take care of yourself.\nI think [HIV testing during ANC] is a good system because you get a chance to know your HIV status so that if you’re infected, you can start treatment early. 
If you’re not, the counselling will help you take care of yourself.\nMoreover, provider 02–29, a health worker since 2008, commented that integrated HIV testing and counselling has reduced stigma, as the services became a routine part of the antenatal care expected during every pregnant woman’s visit to the health center. Provider 01–12, a health worker since 1989, added that integration of HIV testing and counselling into routine antenatal service delivery led to improved confidentiality for women. When a woman visited the health center for antenatal services, no one else knew her HIV status or her health issues. Provider 07–07 mentioned that having one health worker throughout the MNCH spectrum was reassuring to women and believed that women thought that having one provider ensured privacy of their information.\nAside from health advantages for mothers and children, integration has also given women an opportunity to exchange information with other women in the community. Provider 09–04, who has worked at the facility of interview for 23 years, explained the added benefit from peer learning:Another success [of integration] is [that] education has spread because when she gets knowledge here [at the health center], she convinces another mother also to test; that’s why you find many more mothers test than the fathers, because when you give knowledge here at the clinic about testing when she arrives, there she tells another mother; therefore we get many mothers.\nAnother success [of integration] is [that] education has spread because when she gets knowledge here [at the health center], she convinces another mother also to test; that’s why you find many more mothers test than the fathers, because when you give knowledge here at the clinic about testing when she arrives, there she tells another mother; therefore we get many mothers.", "Despite the perceived benefits of integrated HIV testing and counselling, care seeking-related challenges linked to the quality of provided services remained. One challenge that emerged surrounded patient-provider interactions, which included the nature of consent for opt out HIV testing, unequal social relations and lack of supportive communications between pregnant women and providers during counselling sessions, and privacy and confidentiality concerns.\nAccording to providers, very few women refused HIV testing. In case of refusal, health workers continued to provide counselling for routine MNCH visits and for HIV testing during subsequent routine antenatal visits until women accepted. Pregnant women confirmed provider comments about the rarity of HIV test refusals and reported their understanding of consent for HIV tests in terms of provider authority rather than choice. Many women saw HIV testing as compulsory and did not know or were not counseled that they could opt out of HIV testing during antenatal services. Some women also reported that they felt that further antenatal care services would be withheld if they did not consent to be tested for HIV. In addition, some of the women also felt that they did not have an opportunity to voice their concerns and were disempowered from making informed decisions about consenting for HIV testing during antenatal services. For example, during exit interview 03–07, an 18-year-old woman with three previous live births expressed that she would have refused HIV testing had she known that it was an option available to her:…they did not tell me [that I had a choice in HIV testing]. 
If they did, I would say that I do not want to get an HIV test because I do not know the meaning [of getting tested]. I got tested, because I had no choice, and there is nothing you can do about it. You just follow the instructions of the providers.\n…they did not tell me [that I had a choice in HIV testing]. If they did, I would say that I do not want to get an HIV test because I do not know the meaning [of getting tested]. I got tested, because I had no choice, and there is nothing you can do about it. You just follow the instructions of the providers.\nProvider 15–04, with 13 years of experience at the facility of interview, acknowledged that refusal was difficult for women to voice, since women interpreted the HIV test as “orders from the Ministry of Health” and “she cannot violate an order without a reasonable cause.” Women’s responses also showed that providers were highly regarded, and that providers’ positions of authority gave them power to make health-related decisions for the women. Many women viewed HIV testing as following the instructions of the providers, which, to the women, were beyond questioning.\nThis perception of the provider as the person to make health decisions for the women was seen in other aspects of the antenatal care consultation. From exit interviews, 52.0% of pregnant women reported that they came for antenatal care at the recommendation of their providers, in comparison to 23.5% who responded that they chose to come of their own volition (Table 4). A 25-year-old woman with two previous live births in exit interview 14–06 expressed her thoughts on being asked by the provider to return to the health center again and again due to an HIV test kit stock out: “We just think it’s okay. What can we do then? We can do nothing. We just go.”\nOnly a few pregnant women discussed health decisions in terms of exerting control over their own bodies. The few women who did so articulated an intentional involvement in decision-making and understood health services as beneficial to their health. For example, when probed about whether she felt comfortable telling the provider her opinion about HIV testing, a 28-year-old woman with three previous live births responded in exit interview 13–05:Yes, that is my body if it is going to be tested. So it does not benefit [the provider] if she knows my health; it benefits me.\nYes, that is my body if it is going to be tested. So it does not benefit [the provider] if she knows my health; it benefits me.\nMost of the women who expressed this perspective came from the few facilities where women reported supportive relationships with their providers and felt encouraged to ask questions.\nIn contrast, many women at other facilities communicated discomfort in voicing their opinions or asking questions of their providers due to the providers’ “harsh” attitude. Patient-provider relations were complicated by differential social status, including dramatic divergence in educational levels and an average age difference of 14 years. Findings from the observations of antenatal consultations showed that while at least 89.7% of providers greeted women and her companions with respect, spoke using understandable local language, and addressed women respectfully, only 66.5% of women were encouraged to ask questions and only 20.8% of providers responded to their questions (Table 6). 
In addition, only 28.6% of providers thanked the women for coming to the health center for services.Table 6\nObservations of interactions between women and providers during antenatal services (N=203)\nProvider greets woman and her companion/relative with respect89.7%Provider speaks using easy, understandable local language99.0%Provider addresses the woman by her name/calls her ‘mama’93.1%Women encouraged to ask questions during clinical session66.5%Provider respond to questions asked by women20.8%Provider thanks woman for coming to health facility for services28.6%\n\nObservations of interactions between women and providers during antenatal services (N=203)\n\nAt the same time, some health workers were friends with women in the community, which enabled a pregnant mother to seek services at the facility. Provider 06–06, a health worker since 1980, said:women [seeking services at the health center] can be my friends and I have their phone number… When they reach here, [they say]… ‘[W]here is the [specific] attendant, I am looking for attendant’… You already know this medicine is a private thing…so you come, you give her.\nwomen [seeking services at the health center] can be my friends and I have their phone number… When they reach here, [they say]… ‘[W]here is the [specific] attendant, I am looking for attendant’… You already know this medicine is a private thing…so you come, you give her.\nWhile this relationship could be positive, some women at one facility also reported that they could not trust their providers to keep their HIV results confidential and expressed concern that the provider would discuss their results with other providers at the facility and with community members at home. These concerns did not vary by educational levels. At this particular facility, nine of the ten interviewed women completed seven years of formal education, or primary schooling, and one woman completed 11 years of formal education. Provider 07–06, a health worker since 1998, was aware that women did not trust their providers to maintain confidentiality in some cases. During antenatal services at one facility, a research team member observed providers openly discussing a pregnant woman’s HIV+ status and medication needs in public.", "Another challenge that emerged from interviews with women and providers was stigma surrounding HIV infection at both the individual and community levels, resulting in some women seeking health services in other health facilities or through other providers, for example community health workers or traditional healers.\nDespite integration of HIV testing and counselling in routine antenatal visits and the reduction of stigma associated with HIV testing, fear of a positive test result was still a significant barrier to care seeking. A 21-year-old woman with one previous live birth in exit interview 06–05 commented:Some [pregnant women] are afraid to know their results, as they know that being HIV positive is the end of living.\nSome [pregnant women] are afraid to know their results, as they know that being HIV positive is the end of living.\nWomen’s fear of a possible positive result was strong enough that provider 02–28 and three women described it as a contributing factor to pregnant women deciding against attending antenatal services. 
A 22-year-old woman in exit interview 07–03 commented that women who opted out of antenatal services at the facility would rather not know their HIV status than to subject themselves to the psychological pressure of thinking that they would die soon after a positive test result, even though treatment was generally known to be available. A 21-year-old woman with no previous live births in exit interview 08–02 noted:Other [women] are afraid to take their results from the HIV test while others do not come back once they are tested and found to be HIV positive…they are afraid of people hearing about their HIV status. In the community, those who are HIV positive are ostracized. So they are afraid to go back and forth to the health center.\nOther [women] are afraid to take their results from the HIV test while others do not come back once they are tested and found to be HIV positive…they are afraid of people hearing about their HIV status. In the community, those who are HIV positive are ostracized. So they are afraid to go back and forth to the health center.\nSome women reported knowing other pregnant women who had undergone HIV testing during their first antenatal visit and then discontinued further attendance at the facility. Providers added that some women would go to other points of access, such as dispensaries and community health workers, for antenatal services.", "aAt the time of data collection, Gairo had yet to become an independent district and facilities in the district were counted as part of Kilosa district." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study site", "Study design", "Data collection", "Analysis", "Ethical approval", "Results", "Profile of respondents and antenatal care seeking", "Positive effects of integration on care seeking", "Patient-provider interactions", "Stigma", "Discussion", "Conclusion", "Endnote" ]
[ "In sub-Saharan Africa, women comprised 60% of people living with human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) in 2011, and AIDS was the leading cause of death among mothers [1]. The disproportionate burden of HIV in women has implications for not only their health but also the health of their children. Ninety one percent of children under 15 years living with HIV in 2011 were in sub-Saharan Africa [1]. In 2012, the prevalence of HIV/AIDS in Tanzania was 5.1% among adults generally, 6.2% among women of reproductive age, and 3.2% among pregnant women [2]. An estimated 70,000 to 80,000 newborns were at risk of acquiring HIV every year during pregnancy or delivery, or via breastfeeding [3]. One programmatic response to the vulnerability of women and children to HIV has been the package of interventions focused on the prevention of mother-to-child transmission (PMTCT) [4]. PMTCT is envisioned as a cascade of services throughout the reproductive, maternal, newborn, and child health spectrum that entails a series of services, including counselling, testing and treatment, delivered at multiple time points throughout a woman’s interaction with the health system.\nIn the last decade, integration of PMTCT services into routine maternal and child health (MCH) services has been a key strategy in responding to HIV/AIDS in low-resource settings [4,5]. By strengthening the linkages between MCH and PMTCT, integration is believed to improve the coverage of HIV testing and treatment, leading to earlier treatment for those who need it and an opportunity for HIV-positive pregnant mothers to receive prophylaxis so that transmission is prevented. In addition, integration is also believed to strengthen basic health systems services, improve the efficiency of service delivery, and increase the effectiveness of health interventions [4–7].\nIn Tanzania, HIV integration entailed provision of HIV testing, counselling, and treatment for HIV-positive pregnant women during antenatal care (ANC). HIV testing and counselling were integrated into MCH services starting in 2007 on an “opt out” basis (Table 1) [8]. Under these guidelines, prophylaxis for prevention of vertical transmission was part of reproductive and child health clinics (RCH), while care and treatment for HIV-positive pregnant women remained in care and treatment centers (CTC) [8]. In October 2013, Tanzania began the implementation of Option B+ as recommended by the World Health Organization (WHO) [9]. 
Under Option B+, pregnant and lactating mothers who test positive for HIV are eligible for antiretroviral therapy (ART) in RCH wards, regardless of their CD4 count and stage.Table 1\nPMTCT policies in Tanzania\n\nYear\n\nSource\n\nPolicy content\n2000-2002Pilot PMTCT Program• Short course regimen for preventing mother-to-child transmission in four referral hospitals and one regional hospital• Use of AZT short course from 36 weeks to delivery2004First national PMTCT guidelines for scale up• Scale up from 5 pilot testing sites to the whole country (1347 sites across the country by 2006)• sdNVP during labor and delivery2007Second national PMTCT guidelines for scale up• Provider initiated testing and counselling in antenatal visits in an “opt out” system• PMTCT remained in parallel to Care and Treatment Centers (CTC), where eligible mothers received care• Change of regimen from sdNVP to AZT from 28 weeks of pregnancy until labor and delivery for PMTCT2011Third national PMTCT guidelines for scale up• Tanzania adopts option A of 2010 WHO guidelines (use of ARV drugs for treating pregnant women and preventing mother-to-child transmission of HIV)• Engagement with, testing of, and counselling partners at health facilities• PMTCT program expanded to 3420 sites in the country2013Fourth national PMTCT guidelines Option B/B+• All HIV-infected pregnant and lactating mothers, regardless of CD4 count, eligible for lifelong treatment with antiretroviral drugs• Care and treatment integrated into RCH wards\n\nPMTCT policies in Tanzania\n\nConnecting HIV-positive mothers who tested during antenatal care to ART and following up with them after delivery has been challenging [10]. While some studies suggested that higher utilization of MCH services could lead to increased utilization of integrated HIV services [11,12], evidence for the uptake and adherence to ART among pregnant women and infants remained mixed [13–15]. At the same time, literature on client satisfaction with integrated HIV testing and counselling programs showed that most clients were satisfied with the services provided under integration, including counselling, wait time, and providers [16,17]. From a provider perspective, a study in rural Kenya found that health workers viewed integration as a mostly positive development, as this approach enhanced service provision, improved patient-provider relationships, and increased likelihood of HIV-positive women’s enrolment into HIV care by decreasing stigma [18].\nGiven the resources devoted to integration of HIV services in the maternal, newborn and child health (MNCH) spectrum of care and to inform future policies in this area, we aim in this paper to understand providers’ and pregnant women’s perceptions of the integration of HIV testing and counselling within routine antenatal care and the effects of integration on care seeking. We report the characteristics of respondents and antenatal care health workers to provide some contextual background for the findings. We then detail the views of pregnant women and providers on the generally positive program synergies from integrating HIV testing and counselling into antenatal care and then discuss the remaining challenges and concerns reflecting the social relations that underpin service delivery.", " Study site Populated with 44.8 million people and located in east Africa, Tanzania is a low-income country with a per capita gross national income of 540 U.S. dollars [19]. 
With regards to maternal health, focused antenatal care (FANC) guidelines in 2002 reduced the frequency of facility visits from monthly to a minimum of four times with new counselling and clinical services (Table 2) [20,21]. Between 2005 and 2010, 95.8% of pregnant women in mainland Tanzania made at least one antenatal care visit with skilled providers. Yet, only 42.7% of women in mainland Tanzania made four or more antenatal care visits, and half of them made their first visit during the fifth month of pregnancy [22]. While women can and do access antenatal care in facilities, challenges in terms of continuity and quality remain. Overall, 23.1% of women reported having at least one problem in accessing health care [22].Table 2\nIntegrated HIV and ANC services in Tanzania\n\nFocused antenatal care checklist\n\nParameter\n\nFirst visit <16 weeks\n\nSecond visit 20–24 weeks\n\nThird visit 28–32 weeks\n\nFourth visit 36 weeks\n\nLaboratory investigations, blood\nHaemoglobin✓✓✓✓Grouping and rhesus factor✓RPR✓HIV testing✓\nClient education and counselling (for the couple)\nProcess of pregnancy and complications✓✓✓✓Diet and nutrition✓✓✓✓Rest and exercise in pregnancy✓✓✓✓Personal hygiene✓Danger signs in pregnancy✓✓✓✓Use of drugs in pregnancy✓✓✓✓Effects of STI/HIV/AIDS✓✓✓✓Voluntary counselling and testing for HIV✓Care of breasts and breast feeding✓✓Symptoms/signs of labour✓✓Plans of delivery (emergency preparedness, place of delivery, transportation, financial arrangements)✓✓✓✓Plans for postpartum care✓✓Family planning✓✓Harmful habits (e.g. smoking, drug abuse, alcoholism)✓✓✓✓Schedule of return visit✓✓✓✓Source: Adapted from von Both C, Fleba S, Makuwani A, Mpembeni R, Jahn A. How much time do health services spend on antenatal care? Implications for the introduction of the focused antenatal care model in Tanzania. BMC Pregnancy and Childbirth 2006, 6(22).\n\nIntegrated HIV and ANC services in Tanzania\n\nSource: Adapted from von Both C, Fleba S, Makuwani A, Mpembeni R, Jahn A. How much time do health services spend on antenatal care? Implications for the introduction of the focused antenatal care model in Tanzania. BMC Pregnancy and Childbirth 2006, 6(22).\nMorogoro is one of 30 regions in Tanzania, located about 200 kilometers southwest of Dar es Salaam [23]. With a population of 2.2 million and a population density of 31 inhabitants per square kilometer, Morogoro region is among Tanzania’s largest and least densely populated regions. According to a 2002 census, more people live in rural areas (73%) than in urban areas (27%) in Morogoro, similar to most regions in Tanzania [24]. Regional averages for education, poverty and care seeking are also similar to national averages. With regards to HIV, 67.1% of women of reproductive age and 49.8% of men between 15–49 in Morogoro have ever tested for HIV, while 5.3% of women of reproductive age and 2.1% of men between the ages of 15–49 are HIV-positive [2].\nPopulated with 44.8 million people and located in east Africa, Tanzania is a low-income country with a per capita gross national income of 540 U.S. dollars [19]. With regards to maternal health, focused antenatal care (FANC) guidelines in 2002 reduced the frequency of facility visits from monthly to a minimum of four times with new counselling and clinical services (Table 2) [20,21]. Between 2005 and 2010, 95.8% of pregnant women in mainland Tanzania made at least one antenatal care visit with skilled providers. 
Study design

As part of a three-year evaluation of a maternal and newborn health care program implemented by the Ministry of Health and Social Welfare (MoHSW) and MAISHA through Jhpiego in Morogoro, Tanzania, all 18 government health centers in four rural and peri-urban districts (Gairo, Kilosa,a Morogoro District Council, Mvomero, and Ulanga) were chosen for a cross-sectional health facility assessment.

Data collection

A team of six research assistants received training over six days, covering research ethics and techniques, project objectives, an overview of instruments, and two days of pilot testing in health care facilities. Data collection proceeded from September to early December 2012. In each health facility, data were collected over a period of two days.

Prior to the start of data collection, study personnel visited each health facility in-charge to brief him or her on data collection objectives and to coordinate data collection on the days when antenatal and postnatal services were provided (Table 3). At each health facility, the first ten pregnant women attending routine antenatal services were approached for their participation and consent to the study and were then observed and interviewed.

Table 3. Data sources included in MNCH facility survey

• Facility observation checklists and interviews with facility in-charge. Sampling: census of health centers. Final sample: 18.
• ANC provider interviews, quantitative. Sampling: sub-analysis of the 88 RCH providers interviewed, based on availability on the day of visit and provision of antenatal care in the preceding 7 days. Final sample: 65.
• ANC provider interviews, qualitative. Sampling: sub-sample of the 88 RCH providers interviewed, based on availability on the day of visit, receipt of Jhpiego PNC training, and years of service as a provider in the facility; average of 3 per facility. Final sample: 57.
• ANC sessions observed. Sampling: quota based on availability on the day of visit, average of 10 per facility, total approved target sample of 240. Final sample: 203; 8 refusals.
• ANC women exit interviews. Sampling: sub-sample of those who consented to observation of ANC sessions. Final sample: 196; 7 refusals.

At least five providers per facility who provided antenatal and postnatal services during the day shift were administered a structured quantitative survey. A sub-sample of about three providers per facility was then chosen for in-depth qualitative interviews based on their Jhpiego training, provision of maternal and newborn health services, and years of service. Provider interviews covered topics including antenatal and postnatal service utilization, integration of family planning and HIV services, and linkages to other levels of the health system.

Data quality was ensured by two field-based supervisors who provided overarching support to field implementation, including review of completed instruments and daily debriefings following in-depth interviews.
Completed and supervisor-checked questionnaires were sent to Dar es Salaam for data entry and cleaning.

Qualitative provider interviews were digitally recorded, transcribed, and translated into English. Team debriefings at the midpoint and endpoint of data collection reviewed emerging themes and assessed the reliability of the data through triangulation. After the midpoint debrief, revised interview guides focusing on emerging themes were used in the last seven health facilities visited by the research team.
Analysis

This paper drew primarily from qualitative interviews with pregnant women and antenatal care providers on the topic of integrated HIV testing and counselling services during routine antenatal care. Women's and providers' demographic profiles were also included as background information. Thematic qualitative data analysis was performed manually from a database coded and organized in Atlas.ti. Codes were derived from the structure of the interview guide and from themes that emerged during the daily, midpoint, and endpoint debriefings. Codebook development and coding were undertaken by team consensus, including the research assistants who conducted data collection, whose work was reviewed by a supervisor. A framework approach [25] was taken in the qualitative portion of the research, combining an inductive approach with pre-defined research questions.

Preliminary findings from both the quantitative and qualitative analyses were shared with the MoHSW and the implementing partner for their feedback and review.

Ethical approval

The study received ethical approval from the Muhimbili University of Health and Allied Sciences (MUHAS) and the Johns Hopkins School of Public Health (JHSPH) Institutional Review Boards. Permission to conduct the study was obtained from the MoHSW and from the regional and district administrative authorities. Individual written consent was obtained from study participants prior to their participation. All information was kept confidential and anonymous.
Profile of respondents and antenatal care seeking

We conducted a total of 203 clinical observations of pregnant women receiving routine antenatal care and 196 exit interviews with women; eight women refused observation and seven refused exit interviews. We also conducted 65 quantitative interviews with providers of antenatal services and 57 qualitative in-depth provider interviews.

The mean age of the pregnant women was 26.0 years, and they had completed a mean of 5.8 years of formal education. The mean number of antenatal care visits, including the one after which they were interviewed, was 2.7, and a majority of women came for antenatal care at the recommendation of their health provider.
The mean time women took to reach the health center was 48.3 minutes, while the mean time between arrival at the health center and being seen by a provider was 117.2 minutes (Table 4).

Table 4. Characteristics of interviewed pregnant women (N=196)

• Age, in years (mean/median/range): 26.0/25.0/16–50
• Years of education (mean/median/range): 5.8/7.0/0–13
• Previous live childbirths (mean/median/range): 1.6/1.0/0–6
• ANC visits completed (mean/median/range): 2.7/3.0/1–9
• Weeks pregnant (mean/median/range): 27.3/28.0/8–40
• Travel time between home and health facility, in minutes (mean/median/range): 48.3/30.0/1–240
• Time between arrival at health facility and being seen by provider, in minutes (mean/median/range): 117.2/98.8/2–420

Reason for attending ANC:
• Self-recommended: 23.5%
• Family member recommended: 6.1%
• Informal provider recommended: 0.5%
• Health provider recommended: 52.0%
• Complications this pregnancy: 5.6%
• Complications prior pregnancy: 1.0%
• Came to facility for other reason, then received ANC: 1.5%
• Other: 27.0%

Some women expressed dissatisfaction with the long queues and wait times. For example, a 19-year-old woman in exit interview 07–08 said:

[The] [n]umber of health providers should be increased so that we don't need to spend a lot of time waiting for services.

Researchers observed that although women at different health centers expressed this concern, most pregnant women saw the wait between arrival and receipt of services as an expected part of care seeking at rural health centers.

Of the 65 health workers interviewed, almost 80% were female, and the average age was 39.2 years. A little more than half were enrolled nurses, while registered nurses and medical assistants comprised 15.4% each. Antenatal care providers had worked a mean of 14.1 years in total, including a mean of 6.4 years at the facility where the interview took place (Table 5).

Table 5. Characteristics of interviewed antenatal care providers (N=65)

• Age, in years (mean/median/range): 39.2/37.9/13–60
• Female: 78.5%

Marital status:
• Married/co-habitating: 49.2%
• Single: 38.5%
• Widowed/divorced/separated: 10.8%

Designation:
• Assistant medical officer, 5 years of clinical training: 1.5%
• Clinical officer, 3 years of clinical training: 6.2%
• Assistant clinical officer, 3 years of clinical training: 1.5%
• Registered nurse, 4 years of nursing training: 15.4%
• Enrolled nurse, 3 years of nursing training: 55.4%
• Medical assistant, secondary school: 15.4%
• Health assistant, secondary school: 3.1%
• Other ("afisa muuguzi msaidizi," assistant nursing officer): 1.5%

Received in-service training:
• On HIV/AIDS: 58.5%
• On focused ANC: 31.3%

Experience:
• Years as health worker (mean/median/range): 14.1/11.0/0–39
• Years employed at this health center (mean/median/range): 6.4/3.5/0–29
• Number of previous postings (mean/median/range): 1.6/1.0/0–7
Positive effects of integration on care seeking

According to both providers and pregnant women, provision of HIV testing during antenatal care has increased coverage to include more women and populations that were not previously being tested, for example women from the Maasai community. Integration has also increased the coverage of testing among men: providers and women explained that they used the opportunity to test partners for HIV and to involve partners in counselling and MNCH more generally. For example, enrolled nurse 02–28, a health worker since 1982, discussed the changes in the involvement of women's partners in counselling as a result of PMTCT policy changes:

…during the past a woman can come along with her husband but the man will stay outside; only if the woman has got some complications [do] we ask her to call her husband. Otherwise he will stay outside waiting for his wife until she finishes without gaining anything there… But nowadays if a woman comes with her husband, if there is education provided he will be involved too; due to the policy change even the man can enter and listen to the education provided.

Providers and women noted that integrated services ensured that women were tested for and counseled about HIV during routine antenatal care visits, reducing the number of times they had to return to the health center. Integrated service delivery was thus more convenient for pregnant women. Provider 07–06, with 11 years of experience at the facility of interview, explained:

[Providing HIV testing and counselling during ANC] is a good service and very important to a mother because it saves time and a mother does not take a long time in getting the service.
She is [taking] a pregnant test, then [she will find out her] HIV status, then she is getting back to her home place…

Exit interview 02–07, with a 29-year-old woman who had two previous live births, confirmed this view:

[Providing HIV testing and counselling during ANC is] good because if the provider tells me to come only for HIV instead of antenatal services I wouldn't come.

Pregnant women also appreciated the knowledge gained during testing and counselling sessions about their health status and about ways to prevent transmission from mother to child and between partners. A 20-year-old woman with one previous live birth noted in exit interview 13–11:

I think [HIV testing during ANC] is a good system because you get a chance to know your HIV status so that if you're infected, you can start treatment early. If you're not, the counselling will help you take care of yourself.

Moreover, provider 02–29, a health worker since 2008, commented that integrated HIV testing and counselling has reduced stigma, as the services became a routine part of the antenatal care expected during every pregnant woman's visit to the health center. Provider 01–12, a health worker since 1989, added that integration of HIV testing and counselling into routine antenatal service delivery improved confidentiality for women: when a woman visited the health center for antenatal services, no one else knew her HIV status or her health issues. Provider 07–07 mentioned that having one health worker throughout the MNCH spectrum was reassuring to women and believed that women thought having one provider ensured the privacy of their information.

Aside from health advantages for mothers and children, integration has also given women an opportunity to exchange information with other women in the community.
Provider 09–04, who has worked at the facility of interview for 23 years, explained the added benefit of peer learning:

Another success [of integration] is [that] education has spread, because when she gets knowledge here [at the health center], she convinces another mother also to test; that's why you find many more mothers test than the fathers, because when you give knowledge here at the clinic about testing when she arrives, there she tells another mother; therefore we get many mothers.
Patient-provider interactions

Despite the perceived benefits of integrated HIV testing and counselling, challenges related to care seeking and the quality of services remained.
One set of challenges concerned patient-provider interactions, including the nature of consent for opt-out HIV testing, unequal social relations and a lack of supportive communication between pregnant women and providers during counselling sessions, and privacy and confidentiality concerns.

According to providers, very few women refused HIV testing. In case of refusal, health workers continued to provide counselling for routine MNCH visits and for HIV testing during subsequent routine antenatal visits until women accepted. Pregnant women confirmed provider comments about the rarity of HIV test refusals and described their understanding of consent for HIV testing in terms of provider authority rather than choice. Many women saw HIV testing as compulsory and did not know, or were not counseled, that they could opt out of HIV testing during antenatal services. Some women reported feeling that further antenatal care services would be withheld if they did not consent to be tested for HIV. Some also felt that they had no opportunity to voice their concerns and were disempowered from making informed decisions about consenting to HIV testing during antenatal services. For example, in exit interview 03–07, an 18-year-old woman with three previous live births said that she would have refused HIV testing had she known it was an option:

…they did not tell me [that I had a choice in HIV testing]. If they did, I would say that I do not want to get an HIV test because I do not know the meaning [of getting tested]. I got tested, because I had no choice, and there is nothing you can do about it. You just follow the instructions of the providers.

Provider 15–04, with 13 years of experience at the facility of interview, acknowledged that refusal was difficult for women to voice, since women interpreted the HIV test as "orders from the Ministry of Health" and "she cannot violate an order without a reasonable cause." Women's responses also showed that providers were highly regarded and that providers' positions of authority gave them power to make health-related decisions for the women. Many women viewed HIV testing as following the instructions of the providers, which, to the women, were beyond questioning.

This perception of the provider as the person who makes health decisions for women was seen in other aspects of the antenatal care consultation. In exit interviews, 52.0% of pregnant women reported that they came for antenatal care at the recommendation of their providers, compared with 23.5% who said they chose to come of their own volition (Table 4). A 25-year-old woman with two previous live births, in exit interview 14–06, expressed her thoughts on being asked by the provider to return to the health center repeatedly because of an HIV test kit stock out: "We just think it's okay. What can we do then? We can do nothing. We just go."

Only a few pregnant women discussed health decisions in terms of exerting control over their own bodies. The few women who did so articulated an intentional involvement in decision-making and understood health services as beneficial to their health.
For example, when probed about whether she felt comfortable telling the provider her opinion about HIV testing, a 28-year-old woman with three previous live births responded in exit interview 13–05:

Yes, that is my body if it is going to be tested. So it does not benefit [the provider] if she knows my health; it benefits me.

Most of the women who expressed this perspective came from the few facilities where women reported supportive relationships with their providers and felt encouraged to ask questions.

In contrast, many women at other facilities communicated discomfort in voicing their opinions or asking questions of their providers due to the providers' "harsh" attitude. Patient-provider relations were complicated by differences in social status, including large gaps in educational levels and an average age difference of 14 years. Observations of antenatal consultations showed that while at least 89.7% of providers greeted women and their companions with respect, spoke in understandable local language, and addressed women respectfully, only 66.5% of women were encouraged to ask questions and only 20.8% of providers responded to the questions women asked (Table 6). In addition, only 28.6% of providers thanked the women for coming to the health center for services.

Table 6. Observations of interactions between women and providers during antenatal services (N=203)

• Provider greets woman and her companion/relative with respect: 89.7%
• Provider speaks using easy, understandable local language: 99.0%
• Provider addresses the woman by her name/calls her 'mama': 93.1%
• Woman encouraged to ask questions during clinical session: 66.5%
• Provider responds to questions asked by woman: 20.8%
• Provider thanks woman for coming to health facility for services: 28.6%

At the same time, some health workers were friends with women in the community, which could enable a pregnant mother to seek services at the facility. Provider 06–06, a health worker since 1980, said:

women [seeking services at the health center] can be my friends and I have their phone number… When they reach here, [they say]… '[W]here is the [specific] attendant, I am looking for attendant'… You already know this medicine is a private thing…so you come, you give her.

While such relationships could be positive, some women at one facility reported that they could not trust their providers to keep their HIV results confidential and expressed concern that the provider would discuss their results with other providers at the facility and with community members at home. These concerns did not vary by educational level: at this facility, nine of the ten interviewed women had completed seven years of formal education, or primary schooling, and one had completed 11 years. Provider 07–06, a health worker since 1998, was aware that women did not always trust their providers to maintain confidentiality.
During antenatal services at one facility, a research team member observed providers openly discussing a pregnant woman's HIV-positive status and medication needs in public.
Stigma

Another challenge that emerged from interviews with women and providers was stigma surrounding HIV infection at both the individual and community levels, which led some women to seek health services at other health facilities or through other providers, for example community health workers or traditional healers.

Despite the integration of HIV testing and counselling into routine antenatal visits and the associated reduction in stigma around testing, fear of a positive test result was still a significant barrier to care seeking. A 21-year-old woman with one previous live birth commented in exit interview 06–05:

Some [pregnant women] are afraid to know their results, as they know that being HIV positive is the end of living.

Women's fear of a possible positive result was strong enough that provider 02–28 and three women described it as a contributing factor to pregnant women deciding against attending antenatal services. A 22-year-old woman in exit interview 07–03 commented that women who opted out of antenatal services at the facility would rather not know their HIV status than subject themselves to the psychological pressure of thinking they would die soon after a positive test result, even though treatment was generally known to be available. A 21-year-old woman with no previous live births noted in exit interview 08–02:

Other [women] are afraid to take their results from the HIV test while others do not come back once they are tested and found to be HIV positive…they are afraid of people hearing about their HIV status. In the community, those who are HIV positive are ostracized.
So they are afraid to go back and forth to the health center.

Some women reported knowing other pregnant women who had undergone HIV testing during their first antenatal visit and then discontinued further attendance at the facility. Providers added that some women would go to other points of access, such as dispensaries and community health workers, for antenatal services.
Some women expressed dissatisfaction with the long queues and wait times. For example, a 19-year-old woman during exit interview 07–08 said:

"[The] [n]umber of health providers should be increased so that we don't need to spend a lot of time waiting for services."

Researchers observed that although women at different health centers expressed this concern, most pregnant women saw the wait between their arrival and receipt of services as an expected part of care seeking at rural health centers.

Of the 65 health workers who were interviewed, almost 80% were female and the average age was 39.2 years. A little more than half were enrolled nurses, while registered nurses and medical attendants comprised 15.4% each. Antenatal care providers had worked for a mean duration of 14.1 years in total, with a mean duration of 6.4 years at the facility where the interview took place (Table 5).

Table 5. Characteristics of interviewed antenatal care providers (N = 65)
  Age, in years (mean/median/range): 39.2 / 37.9 / 13–60
  Female: 78.5%
  Marital status: married/co-habitating 49.2%; single 38.5%; widowed/divorced/separated 10.8%
  Designation: assistant medical officer (5 years of clinical training) 1.5%; clinical officer (3 years of clinical training) 6.2%; assistant clinical officer (3 years of clinical training) 1.5%; registered nurse (4 years of nursing training) 15.4%; enrolled nurse (3 years of nursing training) 55.4%; medical assistant (secondary school) 15.4%; health assistant (secondary school) 3.1%; other ("afisa muuguzi msaidizi," assistant nursing officer) 1.5%
  Received in-service training: on HIV/AIDS 58.5%; on focused ANC 31.3%
  Years as health worker (mean/median/range): 14.1 / 11.0 / 0–39
  Years employed at this health center (mean/median/range): 6.4 / 3.5 / 0–29
  Number of previous postings (mean/median/range): 1.6 / 1.0 / 0–7

According to both providers and pregnant women, the provision of HIV testing during antenatal care has increased coverage to include more women and other populations that were not previously being tested, for example women from the Maasai community. Integration has also increased the coverage of testing among men.
Providers and women explained that they used the opportunity to test partners for HIV and to involve partners in counselling and in MNCH more generally. For example, enrolled nurse 02–28, a health worker since 1982, discussed the changes in the involvement of women's partners in counselling as a result of PMTCT policy changes:

"…during the past a woman can come along with her husband but the man will stay outside; only if the woman has got some complications [do] we ask her to call her husband. Otherwise he will stay outside waiting for his wife until she finishes without gaining anything there… But nowadays if a woman comes with her husband, if there is education provided he will be involved too, due to the policy change even the man can enter and listen to the education provided."

Providers and women noted that integrated services ensured that women were tested for and counselled about HIV during routine antenatal care visits, reducing the number of times they returned to the health center. Integrated service delivery was thus more convenient for pregnant women. Provider 07–06, with 11 years of experience at the facility of interview, explained it this way:

"[Providing HIV testing and counselling during ANC] is a good service and very important to a mother because it saves time and a mother does not take a long time in getting the service. She is [taking] a pregnant test, then [she will find out her] HIV status, then she is getting back to her home place…"

Exit interview 02–07 with a 29-year-old woman who has had two previous live births confirmed this view:

"[Providing HIV testing and counselling during ANC is] good because if the provider tells me to come only for HIV instead of antenatal services I wouldn't come."

Pregnant women also appreciated the knowledge gained during testing and counselling sessions about their health status and about ways to prevent transmission from mother to child and between partners. Exit interview 13–11 with a 20-year-old woman who has had one previous live birth noted:

"I think [HIV testing during ANC] is a good system because you get a chance to know your HIV status so that if you're infected, you can start treatment early. If you're not, the counselling will help you take care of yourself."
Moreover, provider 02–29, a health worker since 2008, commented that integrated HIV testing and counselling had reduced stigma, as the services became a routine part of the antenatal care expected during every pregnant woman's visit to the health center. Provider 01–12, a health worker since 1989, added that integration of HIV testing and counselling into routine antenatal service delivery led to improved confidentiality for women: when a woman visited the health center for antenatal services, no one else knew her HIV status or her health issues. Provider 07–07 mentioned that having one health worker throughout the MNCH spectrum was reassuring to women and believed that women felt that having one provider ensured the privacy of their information.

Aside from the health advantages for mothers and children, integration also gave women an opportunity to exchange information with other women in the community. Provider 09–04, who has worked at the facility of interview for 23 years, explained the added benefit of peer learning:

"Another success [of integration] is [that] education has spread because when she gets knowledge here [at the health center], she convinces another mother also to test; that's why you find many more mothers test than the fathers, because when you give knowledge here at the clinic about testing when she arrives, there she tells another mother; therefore we get many mothers."

Patient-provider interactions

Despite the perceived benefits of integrated HIV testing and counselling, care-seeking challenges linked to the quality of the services provided remained. One challenge concerned patient-provider interactions, including the nature of consent for opt-out HIV testing, unequal social relations and a lack of supportive communication between pregnant women and providers during counselling sessions, and privacy and confidentiality concerns.

According to providers, very few women refused HIV testing. In cases of refusal, health workers continued to provide counselling during routine MNCH visits and offered HIV testing at subsequent routine antenatal visits until women accepted. Pregnant women confirmed providers' comments about the rarity of HIV test refusals and described their understanding of consent for HIV testing in terms of provider authority rather than choice. Many women saw HIV testing as compulsory and did not know, or were not counselled, that they could opt out of HIV testing during antenatal services. Some women also reported feeling that further antenatal care services would be withheld if they did not consent to be tested for HIV. In addition, some women felt that they did not have an opportunity to voice their concerns and were disempowered from making informed decisions about consenting to HIV testing during antenatal services. For example, during exit interview 03–07, an 18-year-old woman with three previous live births said that she would have refused HIV testing had she known that this option was available to her:
"…they did not tell me [that I had a choice in HIV testing]. If they did, I would say that I do not want to get an HIV test because I do not know the meaning [of getting tested]. I got tested, because I had no choice, and there is nothing you can do about it. You just follow the instructions of the providers."

Provider 15–04, with 13 years of experience at the facility of interview, acknowledged that refusal was difficult for women to voice, since women interpreted the HIV test as "orders from the Ministry of Health" and "she cannot violate an order without a reasonable cause." Women's responses also showed that providers were highly regarded, and that providers' positions of authority gave them power to make health-related decisions for the women. Many women viewed HIV testing as following the instructions of the providers, which, to the women, were beyond questioning.

This perception of the provider as the person who makes health decisions for the women was seen in other aspects of the antenatal care consultation. In exit interviews, 52.0% of pregnant women reported that they came for antenatal care at the recommendation of their providers, compared with 23.5% who said they chose to come of their own volition (Table 4). A 25-year-old woman with two previous live births in exit interview 14–06 expressed her thoughts on being asked by the provider to return to the health center again and again because of an HIV test kit stock-out: "We just think it's okay. What can we do then? We can do nothing. We just go."

Only a few pregnant women discussed health decisions in terms of exerting control over their own bodies. The few women who did so articulated an intentional involvement in decision-making and understood health services as beneficial to their health. For example, when probed about whether she felt comfortable telling the provider her opinion about HIV testing, a 28-year-old woman with three previous live births responded in exit interview 13–05:

"Yes, that is my body if it is going to be tested. So it does not benefit [the provider] if she knows my health; it benefits me."

Most of the women who expressed this perspective came from the few facilities where women reported supportive relationships with their providers and felt encouraged to ask questions.

In contrast, many women at other facilities communicated discomfort in voicing their opinions or asking questions of their providers because of the providers' "harsh" attitude. Patient-provider relations were complicated by differential social status, including dramatic divergence in educational levels and an average age difference of 14 years. Observations of antenatal consultations showed that while at least 89.7% of providers greeted women and their companions with respect, spoke in understandable local language, and addressed women respectfully, only 66.5% of women were encouraged to ask questions, and providers responded to women's questions in only 20.8% of observed sessions (Table 6).
In addition, only 28.6% of providers thanked the women for coming to the health center for services.

Table 6. Observations of interactions between women and providers during antenatal services (N = 203)
  Provider greets woman and her companion/relative with respect: 89.7%
  Provider speaks using easy, understandable local language: 99.0%
  Provider addresses the woman by her name / calls her "mama": 93.1%
  Woman encouraged to ask questions during clinical session: 66.5%
  Provider responds to questions asked by woman: 20.8%
  Provider thanks woman for coming to health facility for services: 28.6%
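Checklist proportions like those in Table 6 are simple tallies across observed sessions. A minimal sketch of the computation (with invented records and illustrative item names, not the study's actual observation instrument):

```python
# Hypothetical observation records: one dict per observed ANC session,
# True/False per checklist item. Item names are illustrative only.
observations = [
    {"greeted_respectfully": True, "encouraged_questions": True, "answered_questions": False},
    {"greeted_respectfully": True, "encouraged_questions": False, "answered_questions": False},
    {"greeted_respectfully": False, "encouraged_questions": True, "answered_questions": True},
]

def item_percentage(records, item):
    """Percentage of observed sessions in which a checklist item was met."""
    met = sum(1 for r in records if r.get(item))
    return 100.0 * met / len(records)

for item in ("greeted_respectfully", "encouraged_questions", "answered_questions"):
    print(f"{item}: {item_percentage(observations, item):.1f}%")
```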
Discussion

This study found that both pregnant women and providers had positive perceptions of the integration of HIV counselling and testing into antenatal services. Many providers and pregnant women felt that integration had increased the uptake of HIV testing and enabled marginalized groups and partners to be involved. In addition, some women stated that knowing one's status would help protect both the pregnant woman and her unborn child, and that early testing and treatment were key to preventing mother-to-child transmission. Yet potential gains in increased coverage of HIV testing belied outstanding challenges related to perceptions of patient-provider relations and stigma against HIV. The implicitly compulsory nature of the HIV test described by respondents raised questions about consent. A sense of powerlessness and anxiety was evident in women's discussions of providers' behavior. Women were concerned that providers were unable to maintain the confidentiality of their HIV status at the health center and in the wider community. Stigma and fear of the implications of a positive test result discouraged some pregnant women from seeking antenatal services at all, or contributed to their discontinuing antenatal services at the health center after HIV testing.

This analysis had some limitations. The cross-sectional design of the study, and the timing of data collection during the harvest season in an agricultural region, could have excluded pregnant women with harvesting responsibilities who did not attend antenatal clinic while the data collection team was at each health center. Furthermore, informant fatigue could have resulted from the long in-depth interviews, which usually followed quantitative interviews. This fatigue was mitigated by rest periods between the quantitative and qualitative portions of interviews, ranging from half an hour to an entire day.

As ANC is the entry point for PMTCT services, engaging with women from their first antenatal care visit is important.
In this study, women reported that HIV testing during antenatal services was generally accepted by pregnant women, confirming other studies that found generally positive attitudes towards integrated HIV testing and counselling [26,27]. In South Africa, women who knew their HIV status were more likely to utilize health services and adhere to drug regimens across the MNCH spectrum [28], suggesting that knowing one's HIV status through integrated HIV testing and counselling could lead to better drug adherence if treatment were required. We also found that, with integrated service delivery, women's exchange of information in their communities included HIV counselling, which potentially reinforced the HIV counselling messages discussed during antenatal visits and community outreach efforts and furthered women's knowledge about HIV.

At the same time, most of the women in this study believed that HIV testing was compulsory and were not aware of the "opt-out" option, with some perceiving the HIV test as necessary before receipt of antenatal services. This finding was also reflected in other studies in the region [29,30]. Indeed, some women found HIV testing to be coercive and felt unable to make an informed decision [31,32]. In rural Malawi, an HIV-positive result from testing during routine antenatal care discouraged some women from coming back for further services and from delivering at the health center [30]. In this study, the disparity between providers' and pregnant women's views about consenting to HIV testing holds implications for service utilization further along the PMTCT cascade and the MNCH continuum of care by HIV-positive women.

The perceived compulsory nature of HIV testing was complicated by the relationship between providers and women seen in this study, which was often characterized by unequal social relations and a lack of supportive communication. This study found that women often accepted the recommendations of providers without question, which emerged from their understanding that providers were best equipped to make health decisions on their behalf because providers knew more about health issues. Additionally, while some women and providers reported that they shared positive and supportive relationships at the health center and in the community, other women perceived their providers to be harsh, rude, and sometimes dismissive. Studies from Burkina Faso, Kenya, Malawi, and Uganda had similar findings and reported that women perceived health workers to be powerful figures who made decisions regarding their health, and that this message was reinforced by social, cultural, and political norms [30,32].

In addition to some participants highlighting poor consent processes and a lack of supportive communication, some women in this study were concerned about their providers breaching confidentiality by discussing their HIV status with other providers or with community members. Other studies in the region echoed this finding [32,33]. While providers generally kept the confidence of women seeking health services [27], direct breaches of confidentiality by providers have been documented in the literature [30,32,34]. In one study in Tanzania, a health worker demanded a bribe from a woman in return for keeping her information confidential [33]. A trusting and informative provider-patient relationship has been shown to be a key factor not only in the perceived quality of services but also in the uptake of further PMTCT services, drug adherence, and health outcomes [28,31,35].
Aside from ethical concerns, direct breaches of confidentiality by providers have also contributed to women's distrust not only of the providers but also of the health system [33]. At the same time, another study, which measured differences in consent, confidentiality, and referral between non-integrated voluntary counselling and testing programs and integrated PMTCT programs, found generally ethical testing and counselling practices and no difference in confidentiality practices, as perceived by women, by the mode of HIV testing and counselling [27]. The literature thus points to mixed observations regarding confidentiality and suggests that patient-provider relations may be problematic independent of integration.

This study found that the reported increase in uptake of HIV testing could be partly attributed to de-stigmatization of the test, as it became a routine part of antenatal services and was accepted by most women attending them [36]. Improved access to and coverage of HIV testing could be seen in the increasing percentage of women and men who were tested for HIV in the past 12 months and received their results: from 19% to 30% for women and from 19% to 27% for men between 2007–2008 and 2011–2012 [2]. However, multiple studies found that stigma against people living with HIV and AIDS was still prevalent, and that anxiety and fear surrounding a potentially positive HIV test result continued to deter women from seeking care [37]. In a study conducted in Dar es Salaam exploring the feasibility of Option B+ as recommended by the WHO, stigma remained an important barrier to care and treatment, as well as to adherence to the drug regimen, for HIV-positive women [38]. In rural communities, stigma has an even stronger influence over women precisely because women in rural areas, in comparison to women in urban areas, have fewer choices of access points for health services. Thus, since stigma has remained a barrier to uptake of the PMTCT service cascade [39,40], de-stigmatization serves as an important first step in addressing "missed opportunities" [41–43] to engage with women seeking MNCH services in order to prevent vertical transmission.

Evidence shows that community engagement is necessary for reducing stigma and promoting a supportive social environment in which women and their families can seek care [34,44]. In Rwanda and Kenya, early engagement with community stakeholders contributed to the success of similar integrated and community-based PMTCT programs [44,45]. In the Democratic Republic of Congo, community leaders led efforts to identify HIV- and PMTCT-related priorities, and collaboration between community health workers and facility providers ensured that pregnant women could access the services they needed across a continuum of care [46]. The success of scaled-up PMTCT programs in these countries highlights the importance of collaboration between the community and the health center in reducing stigma surrounding HIV/AIDS and creating a continuum of care for all pregnant women.

At the health center, a supportive relationship between providers and women must be fostered to address concerns about the nature of HIV testing and confidentiality. Revising the content of in-service training, facilitating follow-up provider training, and conducting community outreach with an emphasis on a patient rights-based approach would be first steps in enabling a more equal and trusting relationship between providers and women. This relationship has been recognized as key to promoting women's health [47].
In the Health Workers for Change program, for example, a participatory quality-improvement approach improved relations between providers and patients [48]. In addition, the Skilled Care Initiative in Burkina Faso, Kenya, and Tanzania showed that "compassionate care" could contribute more broadly to quality of care and to utilization of services over the entire MNCH spectrum [49]. However, while quality improvement initiatives could be effective, the determinants of success were not always clear [50–52], and the initiatives were not always successfully scaled up by Ministries of Health. Change can be sustained only if a well-equipped health system is responsive to the needs of women and providers in health centers. From this perspective, a health systems approach to addressing this challenge requires a new consideration of "dignified, respectful health care" [53] as countries strengthen the integration between PMTCT and MNCH programs.

Conclusion

This study found that while perceptions of the integrated delivery of HIV services and antenatal care were generally positive, important barriers related to patient-provider interactions and stigma remained for women seeking antenatal care at health facilities. Tanzania and other low-resource, high-burden countries are currently rolling out Option B+, which further integrates care and treatment into maternal health services. In this context, women's care-seeking practices and perceptions of integrated health services are especially important to understand in order to inform future policies and service delivery reforms. The identified challenges must also be mitigated to ensure that pregnant mothers actively engage with the MNCH spectrum of care for themselves and their children, and concerns about integrated delivery of care must be addressed in order to reduce the impact of HIV/AIDS and to improve maternal and child health in Tanzania.

Endnote

a. At the time of data collection, Gairo had yet to become an independent district, and facilities in the district were counted as part of Kilosa district.
Keywords: ANC; HIV testing and counselling; integration; care seeking; patient-provider interaction; stigma
Background

In sub-Saharan Africa, women comprised 60% of people living with human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) in 2011, and AIDS was the leading cause of death among mothers [1]. The disproportionate burden of HIV in women has implications not only for their health but also for the health of their children: 91% of children under 15 years living with HIV in 2011 were in sub-Saharan Africa [1]. In 2012, the prevalence of HIV/AIDS in Tanzania was 5.1% among adults generally, 6.2% among women of reproductive age, and 3.2% among pregnant women [2]. An estimated 70,000 to 80,000 newborns were at risk of acquiring HIV every year during pregnancy or delivery, or via breastfeeding [3].

One programmatic response to the vulnerability of women and children to HIV has been the package of interventions focused on the prevention of mother-to-child transmission (PMTCT) [4]. PMTCT is envisioned as a cascade of services throughout the reproductive, maternal, newborn, and child health spectrum, including counselling, testing, and treatment, delivered at multiple time points throughout a woman's interaction with the health system. In the last decade, integration of PMTCT services into routine maternal and child health (MCH) services has been a key strategy in responding to HIV/AIDS in low-resource settings [4,5]. By strengthening the linkages between MCH and PMTCT, integration is believed to improve the coverage of HIV testing and treatment, leading to earlier treatment for those who need it and an opportunity for HIV-positive pregnant mothers to receive prophylaxis so that transmission is prevented. Integration is also believed to strengthen basic health system services, improve the efficiency of service delivery, and increase the effectiveness of health interventions [4–7].

In Tanzania, HIV integration entailed provision of HIV testing, counselling, and treatment for HIV-positive pregnant women during antenatal care (ANC). HIV testing and counselling were integrated into MCH services starting in 2007 on an "opt-out" basis (Table 1) [8]. Under these guidelines, prophylaxis for prevention of vertical transmission was part of reproductive and child health clinics (RCH), while care and treatment for HIV-positive pregnant women remained in care and treatment centers (CTC) [8]. In October 2013, Tanzania began the implementation of Option B+ as recommended by the World Health Organization (WHO) [9].
Under Option B+, pregnant and lactating mothers who test positive for HIV are eligible for antiretroviral therapy (ART) in RCH wards, regardless of their CD4 count and stage.

Table 1. PMTCT policies in Tanzania
  2000–2002, Pilot PMTCT Program: short-course regimen for preventing mother-to-child transmission in four referral hospitals and one regional hospital; use of AZT short course from 36 weeks to delivery.
  2004, First national PMTCT guidelines for scale-up: scale-up from 5 pilot testing sites to the whole country (1,347 sites across the country by 2006); sdNVP during labor and delivery.
  2007, Second national PMTCT guidelines for scale-up: provider-initiated testing and counselling in antenatal visits in an "opt-out" system; PMTCT remained parallel to Care and Treatment Centers (CTC), where eligible mothers received care; change of regimen from sdNVP to AZT from 28 weeks of pregnancy until labor and delivery for PMTCT.
  2011, Third national PMTCT guidelines for scale-up: Tanzania adopts Option A of the 2010 WHO guidelines (use of ARV drugs for treating pregnant women and preventing mother-to-child transmission of HIV); engagement with, testing of, and counselling of partners at health facilities; PMTCT program expanded to 3,420 sites in the country.
  2013, Fourth national PMTCT guidelines, Option B/B+: all HIV-infected pregnant and lactating mothers, regardless of CD4 count, eligible for lifelong treatment with antiretroviral drugs; care and treatment integrated into RCH wards.

Connecting HIV-positive mothers who tested during antenatal care to ART, and following up with them after delivery, has been challenging [10]. While some studies suggested that higher utilization of MCH services could lead to increased utilization of integrated HIV services [11,12], evidence on uptake of and adherence to ART among pregnant women and infants remained mixed [13–15]. At the same time, literature on client satisfaction with integrated HIV testing and counselling programs showed that most clients were satisfied with the services provided under integration, including counselling, wait time, and providers [16,17]. From a provider perspective, a study in rural Kenya found that health workers viewed integration as a mostly positive development, as this approach enhanced service provision, improved patient-provider relationships, and increased the likelihood of HIV-positive women's enrolment in HIV care by decreasing stigma [18].

Given the resources devoted to integration of HIV services in the maternal, newborn, and child health (MNCH) spectrum of care, and to inform future policies in this area, we aim in this paper to understand providers' and pregnant women's perceptions of the integration of HIV testing and counselling within routine antenatal care and the effects of integration on care seeking. We report the characteristics of respondents and antenatal care health workers to provide contextual background for the findings. We then detail the views of pregnant women and providers on the generally positive program synergies from integrating HIV testing and counselling into antenatal care, and then discuss the remaining challenges and concerns reflecting the social relations that underpin service delivery.

Methods

Study site

Populated with 44.8 million people and located in east Africa, Tanzania is a low-income country with a per capita gross national income of 540 U.S. dollars [19].
With regard to maternal health, focused antenatal care (FANC) guidelines in 2002 reduced the frequency of facility visits from monthly to a minimum of four, with new counselling and clinical services (Table 2) [20,21]. Between 2005 and 2010, 95.8% of pregnant women in mainland Tanzania made at least one antenatal care visit with skilled providers. Yet only 42.7% of women in mainland Tanzania made four or more antenatal care visits, and half of them made their first visit during the fifth month of pregnancy [22]. While women can and do access antenatal care in facilities, challenges in terms of continuity and quality remain. Overall, 23.1% of women reported having at least one problem in accessing health care [22].

Table 2. Integrated HIV and ANC services in Tanzania: focused antenatal care checklist
(Visit schedule: first visit <16 weeks; second visit 20–24 weeks; third visit 28–32 weeks; fourth visit 36 weeks.)
  Laboratory investigations, blood:
    Haemoglobin ✓✓✓✓
    Grouping and rhesus factor ✓
    RPR ✓
    HIV testing ✓
  Client education and counselling (for the couple):
    Process of pregnancy and complications ✓✓✓✓
    Diet and nutrition ✓✓✓✓
    Rest and exercise in pregnancy ✓✓✓✓
    Personal hygiene ✓
    Danger signs in pregnancy ✓✓✓✓
    Use of drugs in pregnancy ✓✓✓✓
    Effects of STI/HIV/AIDS ✓✓✓✓
    Voluntary counselling and testing for HIV ✓
    Care of breasts and breast feeding ✓✓
    Symptoms/signs of labour ✓✓
    Plans for delivery (emergency preparedness, place of delivery, transportation, financial arrangements) ✓✓✓✓
    Plans for postpartum care ✓✓
    Family planning ✓✓
    Harmful habits (e.g. smoking, drug abuse, alcoholism) ✓✓✓✓
    Schedule of return visit ✓✓✓✓
Source: Adapted from von Both C, Fleba S, Makuwani A, Mpembeni R, Jahn A. How much time do health services spend on antenatal care? Implications for the introduction of the focused antenatal care model in Tanzania. BMC Pregnancy and Childbirth 2006, 6(22).
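The FANC schedule in Table 2 is essentially a service-by-visit matrix. A hypothetical sketch of how such a checklist could be represented and queried (the mapping below is an illustrative subset with assumed visit assignments, not an official Ministry of Health data format):

```python
# FANC checklist as a service -> set-of-visits mapping (visits numbered 1-4).
# The entries and visit assignments here are an illustrative subset of Table 2.
FANC_CHECKLIST = {
    "haemoglobin": {1, 2, 3, 4},   # checked at every visit
    "hiv_testing": {1},            # offered once, at the first visit
    "family_planning": {3, 4},     # assumed visit assignment, for illustration
}

def services_due(visit_number):
    """List the checklist services scheduled for a given visit."""
    return sorted(s for s, visits in FANC_CHECKLIST.items() if visit_number in visits)

print(services_due(1))  # ['haemoglobin', 'hiv_testing']
```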
Morogoro is one of 30 regions in Tanzania, located about 200 kilometers southwest of Dar es Salaam [23]. With a population of 2.2 million and a population density of 31 inhabitants per square kilometer, Morogoro region is among Tanzania's largest and least densely populated regions. According to the 2002 census, more people in Morogoro live in rural areas (73%) than in urban areas (27%), as in most regions of Tanzania [24]. Regional averages for education, poverty, and care seeking are also similar to national averages. With regard to HIV, 67.1% of women of reproductive age and 49.8% of men aged 15–49 in Morogoro have ever been tested for HIV, while 5.3% of women of reproductive age and 2.1% of men aged 15–49 are HIV-positive [2].

Study design

As part of a three-year evaluation of a maternal and newborn health care program implemented by the Ministry of Health and Social Welfare (MoHSW) and MAISHA through Jhpiego in Morogoro, Tanzania, all 18 government health centers in four rural and peri-urban districts (Gairo, Kilosa,^a Morogoro District Council, Mvomero, and Ulanga) were chosen for a cross-sectional health facility assessment.
Data collection

A team of six research assistants received training over six days, covering research ethics and techniques, project objectives, an overview of the instruments, and two days of pilot testing in health care facilities. Data collection proceeded from September to early December 2012. In each health facility, data were collected over a period of two days. Prior to the start of data collection, study personnel visited each health facility in-charge to brief him or her on the data collection objectives and to coordinate data collection on the days when antenatal and postnatal services were provided (Table 3). At each health facility, the first ten pregnant women attending routine antenatal services were approached for their participation and consent to the study, and then subsequently observed and interviewed.

Table 3. Data sources included in the MNCH facility survey
  Facility observation checklists and interviews with facility in-charge — sampling: census of health centers — final sample: 18
  ANC provider interviews, quantitative — sampling: sub-analysis of 88 RCH providers interviewed, based on availability on day of visit and provision of antenatal care in the preceding 7 days — final sample: 65
  ANC provider interviews, qualitative — sampling: sub-sample of 88 RCH providers interviewed, based on availability on day of visit, receipt of Jhpiego PNC training, and years of service as a provider in the facility (average of 3 per facility) — final sample: 57
  ANC sessions observed — sampling: quota based on availability on day of visit, average of 10 per facility, total approved target sample of 240 — final sample: 203 (8 refusals)
  ANC women exit interviews — sampling: sub-sample of those who consented to observation of ANC sessions — final sample: 196 (7 refusals)

At least five providers per facility providing antenatal and postnatal services during the day shift were administered a structured quantitative survey. A sub-sample of about three providers per facility was then chosen for in-depth qualitative interviews based on their Jhpiego training, provision of maternal and newborn health services, and years of service. Provider interviews covered topics including antenatal and postnatal service utilization, integration of family planning and HIV services, and linkages to other levels of the health system.

Data quality was ensured by two field-based supervisors who provided overarching support to field implementation, including review of completed instruments and daily debriefings following in-depth interviews. Completed and supervisor-checked questionnaires were sent to Dar es Salaam for data entry and cleaning. Qualitative provider interviews were digitally recorded, transcribed, and translated into English. Team debriefings at the midpoint and endpoint of data collection reviewed emerging themes and assessed the reliability of data through triangulation. After the midpoint debrief, revised interview guides focusing on emerging themes were used at the last seven health facilities visited by the research team.
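As a quick consistency check on the sampling figures in Table 3, the number of women approached can be recovered from the final samples and refusal counts (a hypothetical sketch; the study did not publish analysis code):

```python
# Final samples and refusals from Table 3.
sources = {
    "ANC sessions observed": {"final": 203, "refusals": 8},
    "ANC women exit interviews": {"final": 196, "refusals": 7},
}

for name, counts in sources.items():
    approached = counts["final"] + counts["refusals"]
    refusal_rate = 100.0 * counts["refusals"] / approached
    print(f"{name}: approached={approached}, refusal rate={refusal_rate:.1f}%")
```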
Analysis

This paper drew primarily on qualitative interviews with pregnant women and antenatal care providers on the topic of integrated HIV testing and counselling services during routine antenatal care. Women's and providers' demographic profiles were also included as background information. Thematic qualitative data analysis was performed manually from a database coded and organized in Atlas.ti. Codes were derived from the structure of the interview guide and from themes that emerged during the daily, midpoint, and endpoint debriefings. Codebook development and coding were undertaken through consensus by a team that included the research assistants who conducted data collection, whose work was reviewed by a supervisor. A framework approach [25] was taken in the qualitative portion of the research, utilizing an inductive approach with pre-defined research questions. Preliminary findings from both the quantitative and qualitative analyses were shared with the MoHSW and the implementing partner for their feedback and review.
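To make the coding workflow concrete, here is a minimal illustrative sketch of how coded excerpts might be organized into a framework-style matrix of themes by respondent (hypothetical data and theme names; this does not reproduce the team's Atlas.ti database):

```python
from collections import defaultdict

# Hypothetical coded excerpts: (respondent id, theme code, excerpt text).
coded_excerpts = [
    ("exit-06-05", "stigma", "Some are afraid to know their results..."),
    ("provider-15-04", "consent", "Refusal was difficult for women to voice..."),
    ("exit-08-02", "stigma", "Those who are HIV positive are ostracized..."),
]

# Framework-style matrix: theme -> respondent -> list of excerpts.
matrix = defaultdict(lambda: defaultdict(list))
for respondent, theme, excerpt in coded_excerpts:
    matrix[theme][respondent].append(excerpt)

for theme, by_respondent in matrix.items():
    print(theme, "->", {r: len(excerpts) for r, excerpts in by_respondent.items()})
```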
Ethical approval

The study received ethical approval from the Muhimbili University of Health and Allied Sciences (MUHAS) and the Johns Hopkins School of Public Health (JHSPH) Institutional Review Boards. Permission to conduct the study was obtained from the MoHSW and from the regional and district administration authorities. Individual written consent was obtained from the study participants prior to their participation. All information was kept confidential and anonymous.
Ethical approval: The study received ethical approval from the Muhimbili University of Health and Allied Sciences (MUHAS) and the Johns Hopkins School of Public Health (JHSPH) Institutional Review Boards. Permission to conduct the study was obtained from the MoHSW and from regional and district administrative authorities. Individual written consent was obtained from participants prior to the study. All information was kept confidential and anonymous.

Results

Profile of respondents and antenatal care seeking

We conducted a total of 203 clinical observations of pregnant women receiving routine antenatal care and 196 exit interviews with women; eight women refused observation and seven refused exit interviews. In addition, we conducted 65 quantitative interviews with providers of antenatal services and 57 qualitative in-depth provider interviews. The mean age of pregnant women was 26.0 years, and their mean length of formal education was 5.8 years. The mean number of antenatal care visits that pregnant women had made, including the one after which they were interviewed, was 2.7, and a majority of women came for antenatal care at the recommendation of their health provider. The mean time that women took to reach the health center was 48.3 minutes, while the mean time between arrival at the health center and being seen by the provider was 117.2 minutes (Table 4).

Table 4. Characteristics of interviewed pregnant women (N=196)
- Age, in years (mean/median/range): 26.0/25.0/16–50
- Years of education (mean/median/range): 5.8/7.0/0–13
- Previous live childbirths (mean/median/range): 1.6/1.0/0–6
- ANC visits completed (mean/median/range): 2.7/3.0/1–9
- Weeks pregnant (mean/median/range): 27.3/28.0/8–40
- Travel time between home and health facility, in minutes (mean/median/range): 48.3/30.0/1–240
- Time between arrival at health facility and being seen by provider, in minutes (mean/median/range): 117.2/98.8/2–420
- Reason for attending ANC: self-recommended 23.5%; family member recommended 6.1%; informal provider recommended 0.5%; health provider recommended 52.0%; complications this pregnancy 5.6%; complications prior pregnancy 1.0%; came to facility for other reason, then received ANC 1.5%; other 27.0%

Some women expressed dissatisfaction with the long queues and wait times. For example, a 19-year-old woman said during exit interview 07–08: "[The] [n]umber of health providers should be increased so that we don't need to spend a lot of time waiting for services." Researchers observed that although women at different health centers expressed this concern, most pregnant women saw the wait between their arrival and receipt of services as an expected part of care seeking at rural health centers.

Of the 65 health workers interviewed, almost 80% were female and the average age was 39.2 years. A little more than half were enrolled nurses, while registered nurses and medical attendants each comprised 15.4%.
Antenatal care providers had worked for a mean duration of 14.1 years in total, with a mean duration of 6.4 years at the facility where the interview took place (Table 5).

Table 5. Characteristics of interviewed antenatal care providers (N=65)
- Age, in years (mean/median/range): 39.2/37.9/13–60
- Female: 78.5%
- Marital status: married/co-habitating 49.2%; single 38.5%; widowed/divorced/separated 10.8%
- Designation: assistant medical officer (5 years of clinical training) 1.5%; clinical officer (3 years of clinical training) 6.2%; assistant clinical officer (3 years of clinical training) 1.5%; registered nurse (4 years of nursing training) 15.4%; enrolled nurse (3 years of nursing training) 55.4%; medical attendant (secondary school) 15.4%; health assistant (secondary school) 3.1%; other ("afisa muuguzi msaidizi," assistant nursing officer) 1.5%
- Received in-service training: on HIV/AIDS 58.5%; on focused ANC 31.3%
- Years as a health worker (mean/median/range): 14.1/11.0/0–39
- Years employed at this health center (mean/median/range): 6.4/3.5/0–29
- Previous postings (mean/median/range): 1.6/1.0/0–7
Positive effects of integration on care seeking

According to both providers and pregnant women, provision of HIV testing during antenatal care has extended coverage to more women and to populations that were not previously being tested, for example women from the Maasai community. Integration has also increased testing coverage among men: providers and women explained that they used the opportunity to test partners for HIV and to involve partners in counselling and MNCH more generally. For example, enrolled nurse 02–28, a health worker since 1982, described the change in the involvement of women's partners in counselling as a result of PMTCT policy changes: "…during the past a woman can come along with her husband but the man will stay outside; only if the woman has got some complications [do] we ask her to call her husband. Otherwise he will stay outside waiting for his wife until she finishes without gaining anything there… But nowadays if a woman comes with her husband, if there is education provided he will be involved too, due to the policy change even the man can enter and listen to the education provided."

Providers and women noted that integrated services ensured that women were tested for and counseled about HIV during routine antenatal care visits, reducing the number of times they had to return to the health center. Integrated service delivery was thus more convenient for pregnant women. Provider 07–06, with 11 years of experience at the facility of interview, explained: "[Providing HIV testing and counselling during ANC] is a good service and very important to a mother because it saves time and a mother does not take a long time in getting the service. She is [taking] a pregnant test, then [she will find out her] HIV status, then she is getting back to her home place…"

Exit interview 02–07, with a 29-year-old woman who had had two previous live births, confirmed this view: "[Providing HIV testing and counselling during ANC is] good because if the provider tells me to come only for HIV instead of antenatal services I wouldn't come."

Pregnant women also appreciated the knowledge gained during testing and counselling sessions about their health status and about ways to prevent transmission from mother to child and between partners. A 20-year-old woman with one previous live birth noted in exit interview 13–11: "I think [HIV testing during ANC] is a good system because you get a chance to know your HIV status so that if you're infected, you can start treatment early. If you're not, the counselling will help you take care of yourself."

Moreover, provider 02–29, a health worker since 2008, commented that integrated HIV testing and counselling has reduced stigma, as the services became a routine part of the antenatal care expected during every pregnant woman's visit to the health center. Provider 01–12, a health worker since 1989, added that integration of HIV testing and counselling into routine antenatal service delivery improved confidentiality for women: when a woman visited the health center for antenatal services, no one else knew her HIV status or her health issues. Provider 07–07 mentioned that having one health worker throughout the MNCH spectrum was reassuring to women and believed that women felt that having one provider ensured the privacy of their information. Aside from the health advantages for mothers and children, integration also gave women an opportunity to exchange information with other women in the community. Provider 09–04, who had worked at the facility of interview for 23 years, explained the added benefit of peer learning: "Another success [of integration] is [that] education has spread because when she gets knowledge here [at the health center], she convinces another mother also to test; that's why you find many more mothers test than the fathers, because when you give knowledge here at the clinic about testing when she arrives, there she tells another mother; therefore we get many mothers."
Patient-provider interactions

Despite the perceived benefits of integrated HIV testing and counselling, challenges to care seeking linked to the quality of the services provided remained. One such challenge concerned patient-provider interactions, including the nature of consent for opt-out HIV testing, unequal social relations and a lack of supportive communication between pregnant women and providers during counselling sessions, and privacy and confidentiality concerns. According to providers, very few women refused HIV testing. In cases of refusal, health workers continued to provide counselling at routine MNCH visits and offered HIV testing at subsequent antenatal visits until women accepted. Pregnant women confirmed providers' comments about the rarity of HIV test refusals and described their understanding of consent for HIV testing in terms of provider authority rather than choice. Many women saw HIV testing as compulsory and did not know, or were not counseled, that they could opt out of HIV testing during antenatal services. Some women reported feeling that further antenatal care services would be withheld if they did not consent to be tested for HIV. Some also felt that they had no opportunity to voice their concerns and were disempowered from making informed decisions about consenting to HIV testing during antenatal services.
For example, during exit interview 03–07, an 18-year-old woman with three previous live births said that she would have refused HIV testing had she known that option was available to her: "…they did not tell me [that I had a choice in HIV testing]. If they did, I would say that I do not want to get an HIV test because I do not know the meaning [of getting tested]. I got tested, because I had no choice, and there is nothing you can do about it. You just follow the instructions of the providers."

Provider 15–04, with 13 years of experience at the facility of interview, acknowledged that refusal was difficult for women to voice, since women interpreted the HIV test as "orders from the Ministry of Health" and "she cannot violate an order without a reasonable cause." Women's responses also showed that providers were highly regarded, and that providers' positions of authority gave them the power to make health-related decisions for the women. Many women viewed HIV testing as following the instructions of the providers, which, to the women, were beyond questioning. This perception of the provider as the person who makes health decisions for women was seen in other aspects of the antenatal care consultation. In exit interviews, 52.0% of pregnant women reported that they came for antenatal care at the recommendation of their providers, compared with 23.5% who said they chose to come of their own volition (Table 4). A 25-year-old woman with two previous live births in exit interview 14–06 expressed her thoughts on being asked by the provider to return to the health center again and again because of an HIV test kit stock-out: "We just think it's okay. What can we do then? We can do nothing. We just go."

Only a few pregnant women discussed health decisions in terms of exerting control over their own bodies. The few who did articulated an intentional involvement in decision-making and understood health services as beneficial to their health. For example, when probed about whether she felt comfortable telling the provider her opinion about HIV testing, a 28-year-old woman with three previous live births responded in exit interview 13–05: "Yes, that is my body if it is going to be tested. So it does not benefit [the provider] if she knows my health; it benefits me."

Most of the women who expressed this perspective came from the few facilities where women reported supportive relationships with their providers and felt encouraged to ask questions. In contrast, many women at other facilities communicated discomfort in voicing their opinions or asking questions of their providers due to the providers' "harsh" attitude. Patient-provider relations were complicated by differential social status, including a dramatic divergence in educational levels and an average age difference of 14 years.
Observations of antenatal consultations showed that while at least 89.7% of providers greeted women and their companions with respect, spoke in understandable local language, and addressed women respectfully, only 66.5% of women were encouraged to ask questions and only 20.8% of providers responded to women's questions (Table 6). In addition, only 28.6% of providers thanked the women for coming to the health center for services.

Table 6. Observations of interactions between women and providers during antenatal services (N=203)
- Provider greets woman and her companion/relative with respect: 89.7%
- Provider speaks using easy, understandable local language: 99.0%
- Provider addresses the woman by her name/calls her "mama": 93.1%
- Woman encouraged to ask questions during clinical session: 66.5%
- Provider responds to questions asked by woman: 20.8%
- Provider thanks woman for coming to health facility for services: 28.6%

At the same time, some health workers were friends with women in the community, which made it easier for a pregnant mother to seek services at the facility. Provider 06–06, a health worker since 1980, said: "women [seeking services at the health center] can be my friends and I have their phone number… When they reach here, [they say]… '[W]here is the [specific] attendant, I am looking for attendant'… You already know this medicine is a private thing…so you come, you give her."

While such relationships could be positive, some women at one facility also reported that they could not trust their providers to keep their HIV results confidential and expressed concern that the provider would discuss their results with other providers at the facility and with community members at home. These concerns did not vary by educational level: at this facility, nine of the ten interviewed women had completed seven years of formal education, or primary schooling, and one had completed 11 years. Provider 07–06, a health worker since 1998, was aware that in some cases women did not trust their providers to maintain confidentiality. During antenatal services at one facility, a research team member observed providers openly discussing a pregnant woman's HIV-positive status and medication needs in public.
Stigma

Another challenge that emerged from interviews with women and providers was stigma surrounding HIV infection at both the individual and community levels, which led some women to seek health services at other health facilities or through other providers, for example community health workers or traditional healers. Despite the integration of HIV testing and counselling into routine antenatal visits and the reduction in stigma associated with HIV testing, fear of a positive test result remained a significant barrier to care seeking.
A 21-year-old woman with one previous live birth commented in exit interview 06–05: "Some [pregnant women] are afraid to know their results, as they know that being HIV positive is the end of living."

Women's fear of a possible positive result was strong enough that provider 02–28 and three women described it as a contributing factor in pregnant women deciding against attending antenatal services. A 22-year-old woman in exit interview 07–03 commented that women who opted out of antenatal services at the facility would rather not know their HIV status than subject themselves to the psychological pressure of thinking that they would die soon after a positive test result, even though treatment was generally known to be available. A 21-year-old woman with no previous live births noted in exit interview 08–02: "Other [women] are afraid to take their results from the HIV test while others do not come back once they are tested and found to be HIV positive…they are afraid of people hearing about their HIV status. In the community, those who are HIV positive are ostracized. So they are afraid to go back and forth to the health center."

Some women reported knowing other pregnant women who had undergone HIV testing during their first antenatal visit and then discontinued further attendance at the facility. Providers added that some women would go to other points of access, such as dispensaries and community health workers, for antenatal services.
Discussion

This study found that both pregnant women and providers had positive perceptions of the integration of HIV counselling and testing into antenatal services. Many providers and pregnant women felt that integration had increased uptake of HIV testing and enabled marginalized groups and partners to be involved. In addition, some women stated that knowing one's status would help protect both the pregnant woman and her unborn child, and that early testing and treatment were key to preventing mother-to-child transmission. Yet potential gains in coverage of HIV testing belied outstanding challenges related to perceptions of patient-provider relations and stigma against HIV. The implicitly compulsory nature of the HIV test described by respondents raised questions about consent, and a sense of powerlessness and anxiety was evident in women's discussions of providers' behavior.
Women were concerned that providers were unable to maintain the confidentiality of their HIV status at the health center and in the wider community. Stigma and fear of the implications of a positive test result discouraged some pregnant women from seeking antenatal services altogether or contributed to their discontinuing antenatal services at the health center after HIV testing.

This analysis had some limitations. The cross-sectional design of the study and the timing of data collection during the harvest season in an agricultural region could have excluded pregnant women with harvesting responsibilities who did not attend the antenatal clinic while the data collection team was at each health center. Furthermore, informant fatigue could have resulted from long in-depth interviews, which usually followed quantitative interviews. This fatigue was mitigated by rest periods between the quantitative and qualitative portions of interviews, ranging from half an hour to an entire day.

As ANC is the entry point for PMTCT services, engaging with women from their first antenatal care visit is important. In this study, women reported that HIV testing during antenatal services was generally accepted by pregnant women, confirming other studies that found generally positive attitudes towards integrated HIV testing and counselling [26,27]. In South Africa, women who knew their HIV status were more likely to utilize health services and adhere to drug regimens across the MNCH spectrum [28], suggesting that knowing one's HIV status through integrated HIV testing and counselling could lead to better drug adherence if treatment were required. In our study, we also found that with integrated service delivery, women's exchange of information in their communities included HIV counselling, which potentially reinforced the HIV counselling messages discussed during antenatal visits and community outreach efforts and furthered women's knowledge about HIV.

At the same time, most of the women in this study believed that HIV testing was compulsory and were not aware of the "opt out" option, with some perceiving the HIV test as a prerequisite for receiving antenatal services. This finding was also reflected in other studies in the region [29,30]. Indeed, some women found HIV testing to be coercive and felt unable to make an informed decision [31,32]. In rural Malawi, an HIV-positive result from testing during routine antenatal care discouraged some women from returning for further services and from delivering at the health center [30]. In this study, the disparity between providers' and pregnant women's views about consent for HIV testing holds implications for service utilization further along the PMTCT cascade and the MNCH continuum of care by HIV-positive women.

The perceived compulsory nature of HIV testing was complicated by the relationship between providers and women seen in this study, which was often characterized by unequal social relations and a lack of supportive communication. This study found that women often accepted providers' recommendations without question, stemming from their understanding that providers were best equipped to make health decisions on their behalf because providers knew more about health issues. Additionally, while some women and providers reported sharing positive and supportive relationships at the health center and in the community, other women perceived their providers to be harsh, rude, and sometimes dismissive.
Studies from Burkina Faso, Kenya, Malawi, and Uganda had similar findings, reporting that women perceived health workers to be powerful figures who made decisions regarding their health, and that this message was reinforced by social, cultural, and political norms [30,32]. In addition to some participants highlighting poor consent processes and a lack of supportive communication, some women in this study were concerned about providers breaching confidentiality by discussing their HIV status with other providers or with community members. Other studies in the region echoed this finding [32,33]. While providers generally kept the confidence of women seeking health services [27], direct breaches of confidentiality by providers have been documented in the literature [30,32,34]. In one study in Tanzania, a health worker demanded a bribe from a woman in return for keeping her information confidential [33]. A trusting and informative provider-patient relationship has been shown to be a key factor not only in the perceived quality of services but also in the uptake of further PMTCT services, drug adherence, and health outcomes [28,31,35]. Aside from the ethical concerns, direct breaches of confidentiality by providers have also contributed to women's distrust not only of providers but of the health system as a whole [33]. At the same time, another study, which aimed to measure differences in consent, confidentiality, and referral between non-integrated voluntary counselling and testing programs and integrated PMTCT programs, found generally ethical testing and counselling practices and no difference in confidentiality practices, as perceived by women, by mode of HIV testing and counselling [27]. The literature thus points to mixed observations regarding confidentiality and suggests that patient-provider relations may be problematic independent of integration.

This study found that the reported increase in uptake of HIV testing could be partly attributed to de-stigmatization of the test, as it became a routine part of antenatal services and was accepted by most women attending them [36]. Improved access to and coverage of HIV testing could be seen in the increasing percentage of women and men who were tested for HIV in the past 12 months and received their results: from 19% to 30% for women and from 19% to 27% for men between 2007–2008 and 2011–2012 [2]. However, multiple studies found that stigma against people living with HIV and AIDS was still prevalent, and that anxiety and fear surrounding a potentially positive HIV test result continued to deter women from seeking care [37]. In a study conducted in Dar es Salaam exploring the feasibility of Option B+ as recommended by the WHO, stigma remained an important barrier to care and treatment, as well as to adherence to the drug regimen, for HIV-positive women [38]. In rural communities, stigma has an even stronger influence over women precisely because women in rural areas, compared with women in urban areas, have fewer choices of access points for health services. Thus, since stigma has remained a barrier to uptake of the PMTCT service cascade [39,40], de-stigmatization serves as an important first step in addressing "missed opportunities" [41–43] to engage with women seeking MNCH services in order to prevent vertical transmission. Evidence shows that community engagement is necessary for reducing stigma and promoting a supportive social environment in which women and their families can seek care [34,44].
In Rwanda and Kenya, early engagement with community stakeholders has contributed to the success of similar integrated and community-based PMTCT programs [44,45]. In the Democratic Republic of Congo, community leaders led efforts to identify HIV- and PMTCT-related priorities, and collaboration between community health workers and facility providers ensured that pregnant women could access the services they needed across a continuum of care [46]. The success of scaled-up PMTCT programs in these countries highlighted the importance of collaboration between the community and the health center in reducing stigma surrounding HIV/AIDS and creating a continuum of care for all pregnant women.

At the health center, a supportive relationship between providers and women must be fostered to address concerns about the nature of HIV testing and confidentiality. Revising the content of in-service training, facilitating follow-up provider training, and conducting community outreach with an emphasis on a patient rights-based approach would be first steps in enabling a more equal and trusting relationship between providers and women. This relationship has been recognized as key to promoting women's health [47]. In the Health Workers for Change program, for example, a participatory quality improvement measure improved relations between providers and patients [48]. In addition, the Skilled Care Initiative in Burkina Faso, Kenya, and Tanzania showed that "compassionate care" could contribute more broadly to quality of care and utilization of services over the entire MNCH spectrum [49]. However, while quality improvement initiatives could be effective, the determinants of success were not always clear [50–52], and the initiatives were not always successfully scaled up by Ministries of Health. Change can only be maintained if a well-equipped health system is responsive to the needs of women and providers in health centers. From this perspective, a health systems approach to addressing this challenge requires a new consideration of "dignified, respectful health care" [53] as countries strengthen the integration between PMTCT and MNCH programs.

Conclusion: This study found that while perceptions of integrated delivery of HIV services and antenatal care were generally positive, important barriers related to patient-provider interactions and stigma remained for women seeking antenatal care at health facilities. Tanzania and other low-resource, high-burden countries are currently rolling out Option B+, which further integrates care and treatment into maternal health services. In this context, women's care seeking practices and perceptions about integrated health services are especially important to understand so as to inform future policies and service delivery reforms. The identified challenges must also be mitigated to ensure that pregnant mothers actively engage with the MNCH spectrum of care for themselves and their children, and concerns about integrated delivery of care must be addressed in order to reduce the impact of HIV/AIDS and to improve maternal and child health in Tanzania.

Endnote: (a) At the time of data collection, Gairo had yet to become an independent district, and facilities in the district were counted as part of Kilosa district.
Background: Women and children in sub-Saharan Africa bear a disproportionate burden of HIV/AIDS. Integration of HIV with maternal and child services aims to reduce the impact of HIV/AIDS. To assess the potential gains and risks of such integration, this paper considers pregnant women's and providers' perceptions about the effects of integrated HIV testing and counselling on care seeking by pregnant women during antenatal care in Tanzania. Methods: From a larger evaluation of an integrated maternal and newborn health care program in Morogoro, Tanzania, this analysis included a subset of information from 203 observations of antenatal care and interviews with 57 providers and 190 pregnant women from 18 public health centers in rural and peri-urban settings. Qualitative data were analyzed manually and with Atlas.ti using a framework approach, and quantitative data of respondents' demographic information were analyzed with Stata 12.0. Results: Perceptions of integrating HIV testing with routine antenatal care from women and health providers were generally positive. Respondents felt that integration increased coverage of HIV testing, particularly among difficult-to-reach populations, and improved convenience, efficiency, and confidentiality for women while reducing stigma. Pregnant women believed that early detection of HIV protected their own health and that of their children. Despite these positive views, challenges remained. Providers and women perceived opt out HIV testing and counselling during antenatal services to be compulsory. A sense of powerlessness and anxiety pervaded some women's responses, reflecting the unequal relations, lack of supportive communications and breaches in confidentiality between women and providers. Lastly, stigma surrounding HIV was reported to lead some women to discontinue services or seek care through other access points in the health system. Conclusions: While providers and pregnant women view program synergies from integrating HIV services into antenatal care positively, lack of supportive provider-patient relationships, lack of trust resulting from harsh treatment or breaches in confidentiality, and stigma still inhibit women's care seeking. As countries continue rollout of Option B+, social relations between patients and providers must be understood and addressed to ensure that integrated delivery of HIV counselling and services encourages women's care seeking in order to improve maternal and child health.
Background: In sub-Saharan Africa, women comprised 60% of people living with human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) in 2011, and AIDS was the leading cause of death among mothers [1]. The disproportionate burden of HIV in women has implications not only for their health but also for the health of their children. Ninety-one percent of children under 15 years living with HIV in 2011 were in sub-Saharan Africa [1]. In 2012, the prevalence of HIV/AIDS in Tanzania was 5.1% among adults generally, 6.2% among women of reproductive age, and 3.2% among pregnant women [2]. An estimated 70,000 to 80,000 newborns were at risk of acquiring HIV every year during pregnancy or delivery, or via breastfeeding [3].

One programmatic response to the vulnerability of women and children to HIV has been the package of interventions focused on the prevention of mother-to-child transmission (PMTCT) [4]. PMTCT is envisioned as a cascade of services across the reproductive, maternal, newborn, and child health spectrum, including counselling, testing, and treatment, delivered at multiple time points throughout a woman's interaction with the health system. In the last decade, integration of PMTCT services into routine maternal and child health (MCH) services has been a key strategy in responding to HIV/AIDS in low-resource settings [4,5]. By strengthening the linkages between MCH and PMTCT, integration is believed to improve the coverage of HIV testing and treatment, leading to earlier treatment for those who need it and an opportunity for HIV-positive pregnant mothers to receive prophylaxis so that transmission is prevented. In addition, integration is also believed to strengthen basic health systems services, improve the efficiency of service delivery, and increase the effectiveness of health interventions [4–7].

In Tanzania, HIV integration entailed provision of HIV testing, counselling, and treatment for HIV-positive pregnant women during antenatal care (ANC). HIV testing and counselling were integrated into MCH services starting in 2007 on an "opt out" basis (Table 1) [8]. Under these guidelines, prophylaxis for prevention of vertical transmission was part of reproductive and child health clinics (RCH), while care and treatment for HIV-positive pregnant women remained in care and treatment centers (CTC) [8]. In October 2013, Tanzania began the implementation of Option B+ as recommended by the World Health Organization (WHO) [9].
Under Option B+, pregnant and lactating mothers who test positive for HIV are eligible for antiretroviral therapy (ART) in RCH wards, regardless of their CD4 count and stage.

Table 1 PMTCT policies in Tanzania

2000-2002, Pilot PMTCT Program:
• Short course regimen for preventing mother-to-child transmission in four referral hospitals and one regional hospital
• Use of AZT short course from 36 weeks to delivery

2004, First national PMTCT guidelines for scale up:
• Scale up from 5 pilot testing sites to the whole country (1347 sites across the country by 2006)
• sdNVP during labor and delivery

2007, Second national PMTCT guidelines for scale up:
• Provider-initiated testing and counselling in antenatal visits in an "opt out" system
• PMTCT remained in parallel to Care and Treatment Centers (CTC), where eligible mothers received care
• Change of regimen from sdNVP to AZT from 28 weeks of pregnancy until labor and delivery for PMTCT

2011, Third national PMTCT guidelines for scale up:
• Tanzania adopts Option A of the 2010 WHO guidelines (use of ARV drugs for treating pregnant women and preventing mother-to-child transmission of HIV)
• Engagement with, testing of, and counselling of partners at health facilities
• PMTCT program expanded to 3420 sites in the country

2013, Fourth national PMTCT guidelines (Option B/B+):
• All HIV-infected pregnant and lactating mothers, regardless of CD4 count, eligible for lifelong treatment with antiretroviral drugs
• Care and treatment integrated into RCH wards

Connecting HIV-positive mothers who tested during antenatal care to ART and following up with them after delivery has been challenging [10]. While some studies suggested that higher utilization of MCH services could lead to increased utilization of integrated HIV services [11,12], evidence for uptake of and adherence to ART among pregnant women and infants remained mixed [13–15]. At the same time, literature on client satisfaction with integrated HIV testing and counselling programs showed that most clients were satisfied with the services provided under integration, including counselling, wait times, and providers [16,17]. From a provider perspective, a study in rural Kenya found that health workers viewed integration as a mostly positive development, as this approach enhanced service provision, improved patient-provider relationships, and increased the likelihood of HIV-positive women's enrolment into HIV care by decreasing stigma [18].

Given the resources devoted to integration of HIV services in the maternal, newborn and child health (MNCH) spectrum of care, and to inform future policies in this area, we aim in this paper to understand providers' and pregnant women's perceptions of the integration of HIV testing and counselling within routine antenatal care and the effects of integration on care seeking. We report the characteristics of respondents and antenatal care health workers to provide contextual background for the findings. We then detail the views of pregnant women and providers on the generally positive program synergies from integrating HIV testing and counselling into antenatal care, and then discuss the remaining challenges and concerns reflecting the social relations that underpin service delivery.
Keywords: ANC; HIV testing and counselling; Integration; Care seeking; Patient-provider interaction; Stigma
MeSH terms: Adolescent; Adult; Africa South of the Sahara; Confidentiality; Counseling; Delivery of Health Care, Integrated; Delivery, Obstetric; Female; HIV Infections; Humans; Interviews as Topic; Mass Screening; Maternal Health Services; Maternal Welfare; Middle Aged; Pregnancy; Professional-Patient Relations; Qualitative Research; Rural Population; Tanzania; Young Adult